Accuro 3S
The Accuro 3S is a diagnostic ultrasound imaging system for use by qualified and trained healthcare professionals. Accuro 3S is intended to be used in a hospital or medical clinic environment at the point of care.
Accuro 3S supports B-mode imaging and SpineNav-AI™ image processing software. Accuro 3S clinical applications include musculoskeletal (conventional and superficial) imaging and guidance for needle or catheter placement. A typical examination using Accuro 3S is guidance of neuraxial anesthesia.
The Accuro 3S is a portable system with a small footprint that can be easily maneuvered within the intended use environment and at the point of care. The device features a touchscreen interface and articulated monitor arm to optimize viewing angle. An integrated battery pack allows the system to operate without wall power. Accuro 3S interfaces with healthcare IT networks to implement DICOM-based patient and image archival workflows, with image storage in an external PACS. The device utilizes a Dual-Array™ convex probe.
SpineNav-AI is an automated software tool that utilizes machine learning technologies to facilitate workflows associated with musculoskeletal imaging assessments of the lumbar spine.
The Accuro 3S Dual-Array probe comprises two convex transducer arrays arranged side by side. Each array has identical specifications: 64 elements, a 3.5 - 4.0 MHz nominal center frequency, 480-micron pitch, and a 50 mm radius of curvature.
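As a quick sanity check on these specifications, some back-of-the-envelope geometry can be worked out from the published numbers. The sketch below is illustrative only; the 1540 m/s speed of sound is a standard soft-tissue assumption, not a figure from the clearance letter.

```python
import math

# Published Dual-Array probe specs (per array); the speed of sound is an
# assumed soft-tissue value, not a figure from the letter.
N_ELEMENTS = 64
PITCH_MM = 0.480        # 480-micron element pitch
RADIUS_MM = 50.0        # radius of curvature
C_MM_PER_US = 1.540     # ~1540 m/s, conventional soft-tissue assumption

aperture_mm = (N_ELEMENTS - 1) * PITCH_MM            # arc length spanned by the elements
sector_deg = math.degrees(aperture_mm / RADIUS_MM)   # angle subtended on the 50 mm arc

for f_mhz in (3.5, 4.0):                             # nominal center-frequency range
    wavelength_mm = C_MM_PER_US / f_mhz              # lambda = c / f
    print(f"{f_mhz} MHz -> wavelength ~{wavelength_mm:.2f} mm")

print(f"aperture ~{aperture_mm:.1f} mm, sector ~{sector_deg:.0f} deg")
```

Under these assumptions each array spans roughly a 30 mm arc (about a 35-degree sector) and operates at wavelengths around 0.4 mm, consistent with a curved-array probe intended for lumbar-spine imaging depths.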
Here's a breakdown of the acceptance criteria and the study that proves the Accuro 3S device meets them, based on the provided FDA 510(k) clearance letter.
Important Note: The provided document is an FDA 510(k) clearance letter, not a detailed study report. Some information, especially specifics of the AI study design (e.g., the adjudication method beyond consensus, the full statistical analysis of an MRMC study), is therefore inferred or not explicitly stated in the provided text.
1. Table of Acceptance Criteria and Reported Device Performance
| Parameter/Metric | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| **SpineNav-AI™ Anatomical Annotation** | | |
| Per-frame accuracy (individual anatomical features) | $\ge$ 70% | 82.1% - 99.3% (range across all anatomical structures) |
| Per-sequence detection success rate (individual anatomical features) | $\ge$ 80% | 91.7% - 100% (range across all anatomical structures) |
| DICE score (all anatomical structures) | Not explicitly stated as an acceptance criterion, but included in results | 0.64 - 0.87 (range across all anatomical structures) |
| **Epidural Region Indicator** | | |
| Accuracy (lateral dimension) | No explicit numerical acceptance criterion stated | 1.61 (± 2.57) mm vs. radiologist-panel ground truth |
| Accuracy (depth dimension) | No explicit numerical acceptance criterion stated | 2.42 (± 3.41) mm vs. radiologist-panel ground truth |
| Per-frame detection success rate | 95.5% (the reported result appears to also serve as the acceptance criterion for this metric) | 95.5% |
Note on Acceptance Criteria: The document states that "success criteria for each anatomical annotation produced with SpineNav-AI were derived via a preliminary survey with intended users." For the Epidural Region Indicator's accuracy, specific numerical acceptance criteria are not explicitly stated in the same manner as the per-frame and per-sequence detection rates. The reported values are presented as the results of the evaluation against the radiologist panel ground truth.
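To make the metrics in the table concrete, here is a minimal sketch of how per-frame accuracy, per-sequence detection success rate, DICE score, and the millimeter localization error are conventionally computed. The function names and the boolean "success" inputs are illustrative assumptions; the letter does not disclose the actual implementations.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """DICE = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def per_frame_accuracy(frame_success: list[bool]) -> float:
    """Fraction of frames in which the annotation met the success criterion."""
    return sum(frame_success) / len(frame_success)

def per_sequence_detection_rate(sequence_detected: list[bool]) -> float:
    """Fraction of sequences in which the structure was detected."""
    return sum(sequence_detected) / len(sequence_detected)

def localization_error_mm(pred_mm: np.ndarray, truth_mm: np.ndarray):
    """Mean (± SD) absolute error in mm, as reported for the epidural
    region indicator's lateral and depth dimensions."""
    err = np.abs(pred_mm - truth_mm)
    return float(err.mean()), float(err.std())
```

By these definitions, the table's per-frame figures are simple proportions over the 10,080 test frames, and the per-sequence figures are proportions over the 120 test sequences.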
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Test Set): 120 sequences and 10,080 image frames obtained from a total of 81 individuals.
- Data Provenance:
- Country of origin: Not explicitly stated, but collected at "seven (7) geographically diverse sites," implying multi-site collection.
- Retrospective or Prospective: Not explicitly stated. The description of a "test dataset comprising a diverse range of demographic variables" collected at "seven (7) geographically diverse sites" suggests data gathered specifically for this validation, i.e., a prospective or carefully curated retrospective collection. The statement that the training data are "completely distinct from the test dataset used for validation" further supports the idea of a dedicated test set.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Three board-certified radiologists.
- Qualifications: "Board certified radiologists." No specific years of experience or subspecialty are detailed beyond board certification.
4. Adjudication Method for the Test Set
- Adjudication Method: The ground truth for anatomical labels was "established from a panel of three board certified radiologists," which strongly implies a consensus method in which the three radiologists collectively determined the ground truth. How disagreements were resolved (e.g., majority vote, discussion-based consensus, a designated tie-breaker) is not explicitly detailed. This is a direct panel consensus rather than a 2+1 or 3+1 setup.
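For illustration, a direct panel consensus over segmentation labels is often implemented as a pixel-wise majority vote. The following is a minimal sketch under that assumption; the letter does not describe how the radiologists' annotations were actually combined.

```python
import numpy as np

def panel_consensus(masks: list[np.ndarray]) -> np.ndarray:
    """Pixel-wise majority vote across an odd number of binary masks.

    With a three-radiologist panel, a pixel enters the consensus
    ground truth when at least two of the three annotators marked it.
    """
    stacked = np.stack([m.astype(np.uint8) for m in masks])
    return stacked.sum(axis=0) >= (len(masks) // 2 + 1)
```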
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No; the letter does not describe an MRMC comparative effectiveness study. The validation focuses on the standalone performance of the AI against expert ground truth, and assistance to human readers is not mentioned as part of the validation design.
- Effect size of improvement: Not applicable, as an MRMC study comparing human readers with and without AI assistance was not described.
6. Standalone (Algorithm Only) Performance Study
- Was a standalone study done? Yes. The "AI Summary of Tests" section exclusively describes the performance of the SpineNav-AI™ software tool against expert-established ground truth, without human intervention or interaction. Metrics such as per-frame accuracy, per-sequence detection success rate, and DICE score are typical of standalone algorithm performance evaluation (a sketch of such a harness follows).
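A standalone run of this kind reduces to executing the algorithm over every held-out frame and scoring each output against the consensus ground truth, with no human in the loop. The driver below is hypothetical; `run_spinenav` and `meets_criterion` are placeholders, since the letter does not describe the actual test harness.

```python
import numpy as np

def standalone_evaluation(frames, truth_masks, run_spinenav, meets_criterion):
    """Hypothetical standalone-evaluation loop: no human interaction.

    run_spinenav:    placeholder for the algorithm under test
                     (image frame -> predicted annotation)
    meets_criterion: placeholder per-frame success test
                     (prediction, consensus truth -> bool)
    """
    results = [meets_criterion(run_spinenav(f), t)
               for f, t in zip(frames, truth_masks)]
    return float(np.mean(results))  # per-frame accuracy over the test set
```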
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus. The ground truth for anatomical labels was "established from a panel of three board certified radiologists," meaning the truth was defined by the agreement of multiple human experts.
8. Sample Size for the Training Set
- Sample Size (Training Set): 25,536 images from 147 subjects.
9. How Ground Truth for the Training Set Was Established
- How Ground Truth Was Established: Not explicitly stated. The document notes that the training data and test data are "completely distinct" and describes how ground truth was established for the test set (a panel of three board-certified radiologists), but it does not say how training-set ground truth was established. Training labels are commonly produced by experts as well, often with lighter adjudication than a heavily scrutinized test set, but that detail is not provided in the submitted text.