Nerveblox assists qualified healthcare professionals in identifying anatomical structures in the following ultrasound-guided peripheral nerve block regions. It is intended for use prior to any needle intervention in adult patients 18 years of age or older, and is not used in combination with needles or during needle insertion.
Nerveblox supports users in the following block regions:
- Interscalene Brachial Plexus
- Supraclavicular Brachial Plexus
- Infraclavicular Brachial Plexus
- Cervical Plexus
- Axillary Brachial Plexus
- PECS I & II
- Transversus Abdominis Plane (TAP)
- Rectus Sheath
- Femoral Nerve
- Adductor Canal
- Popliteal Sciatic
- Erector Spinae Plane (ESP)
Nerveblox is a software-as-a-medical-device (SaMD) product designed to assist clinicians in identifying anatomy for ultrasound-guided peripheral nerve blocks.
Nerveblox is integrated into commercially available Venue ultrasound systems (GE HealthCare, Chicago, IL): Venue (K240111), Venue Go (K240053), Venue Fit (K234106), and Venue Sprint (K240206). It uses non-adaptive AI/ML functionality to highlight anatomical structures by applying color overlays and name labels, and it provides a quality score that tells the user how suitable the overall image is for anatomical assessment and how completely the key anatomical structures have been detected.
While Nerveblox enhances visualization, it does not replace the clinician's expertise but supports anatomical identification prior to the procedure.
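The clearance summary does not describe Nerveblox's internal interfaces. Purely as an illustration of the three outputs named above (color overlays, name labels, and an image quality score), a per-frame result could be modeled along the following lines; all type and field names here are hypothetical and are not taken from the vendor's documentation.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class StructureOverlay:
    """One highlighted anatomical structure on a single ultrasound frame
    (hypothetical representation, not the vendor's data model)."""
    label: str          # name label shown to the user, e.g. "Femoral Nerve"
    mask: np.ndarray    # binary pixel mask rendered as a color overlay


@dataclass
class FrameResult:
    """Per-frame assistance output of the kind the summary describes."""
    block_region: str                  # detected block region, e.g. "Adductor Canal"
    overlays: list[StructureOverlay]   # detected key anatomical structures
    quality_score: int                 # image suitability / completeness level shown to the user
```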
Here's a breakdown of the acceptance criteria and the study proving Nerveblox meets them, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document presents two main areas for acceptance criteria: Analytical Validation and Clinical Safety and Accuracy Validation.
| Verification/Validation Methods | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Analytical Validation | ||
| Anatomical Structure Detection Accuracy | > 0.8 | Acceptance criteria were successfully met for all block regions. (A specific test-set value is not reported; the clinical study reports 97% accuracy.) |
| Dice Similarity Score | > 0.75 | Acceptance criteria were successfully met for all block regions. |
| Quality Meter Accuracy | > 0.85 | Acceptance criteria were successfully met for all block regions. (Clinical study reports PPA and NPA for agreement with experts on quality score levels, and weighted Kappa.) |
| Clinical Safety and Accuracy Validation | ||
| Anatomical Structures: | ||
| Accuracy (TP + TN) | Correct highlighting of safety critical anatomical structures ≥ 80% | 97.2% (933 out of 960 scans in the clinical study). The clinical study also separately reported a true positive rate of 98% and a true negative rate of 90%. |
| Misidentification (FP) Rate | < 5% | 1.0% (10 out of 960 scans). The clinical study section separately reports a false-positive rate (FPr) of 10.4% from the expert assessment, suggesting a different calculation method (or a discrepancy) between the summary table and the clinical study section; since the clinical study provides more detail, the 10.4% FPr appears to be the more specific finding from the expert evaluation. |
| Non-identification (FN) Rate | < 15% | 1.8% (17 out of 960 scans). The clinical study section separately reports a false-negative rate (FNr) of 2% from the expert assessment. |
| Image Quality Score: | ||
| Accuracy of identifying the correct block region | > 90% | 95.3% |
| Error Rate of identifying the correct block region | < 5% | 4.7% |
| Fair or above agreement with the experts on quality score levels | Weighted Kappa Coefficient (κ) ≥ 0.77; Positive Percentage Agreement (PPA) ≥ 61.5%; Negative Percentage Agreement (NPA) ≥ 88.9% | Weighted Kappa Coefficient (κ) ranged from 0.77 to 0.98 across all block regions, indicating substantial agreement. PPA ranged from 61.5% to 100.0% and NPA from 88.9% to 100.0% across individual score levels and regions. |
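The 510(k) summary does not spell out how these metrics are defined, but the conventional definitions are straightforward. The sketch below is a minimal illustration (not the manufacturer's validation code) of the usual formulas for the Dice similarity score and the scan-level accuracy, misidentification (FP), and non-identification (FN) rates; the example counts are the ones reported in the table above (933 correct, 10 FP, 17 FN out of 960 scans).

```python
import numpy as np


def dice_score(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice similarity coefficient between a predicted and a reference
    binary segmentation mask: 2*|A ∩ B| / (|A| + |B|)."""
    pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom


def scan_level_rates(correct: int, false_positive: int, false_negative: int) -> dict:
    """Accuracy (TP + TN), misidentification (FP) rate, and non-identification
    (FN) rate, each expressed over the total number of evaluated scans."""
    total = correct + false_positive + false_negative
    return {
        "accuracy": correct / total,
        "fp_rate": false_positive / total,
        "fn_rate": false_negative / total,
    }


def percentage_agreement(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Positive and negative percentage agreement against the expert reference:
    PPA = TP / (TP + FN), NPA = TN / (TN + FP)."""
    return {"ppa": tp / (tp + fn), "npa": tn / (tn + fp)}


# Scan-level counts from the summary table: 933 correct, 10 FP, 17 FN (960 scans).
print(scan_level_rates(933, 10, 17))
# accuracy ≈ 97.2%, fp_rate ≈ 1.0%, fn_rate ≈ 1.8%
```

The weighted Kappa statistic reported for quality-score agreement could be computed with, for example, sklearn.metrics.cohen_kappa_score with a linear or quadratic weighting; which weighting scheme the Nerveblox study used is not stated in the summary.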
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The clinical validation study involved 80 distinct ultrasound scans from 40 healthy volunteers.
- Data Provenance: The study was a prospective clinical validation study conducted by anesthesiologists, with volunteers drawn from a general population with varying BMIs. The country of origin of the data is not explicitly stated in the provided text; however, the description implies a primary, prospectively conducted clinical study rather than a collection of data from diverse retrospective sources.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: The document states that the results were "evaluated by expert U.S. board-certified anesthesiologists." The exact number of experts is not specified.
- Qualifications of Experts: They were "expert U.S. board-certified anesthesiologists." Their years of experience are not explicitly mentioned.
4. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It only mentions that the AI results were "evaluated by expert U.S. board-certified anesthesiologists." This suggests expert consensus or individual expert review, but the specific process for resolving discrepancies is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not describe an MRMC comparative effectiveness study comparing human reader performance with AI assistance versus without it as a primary endpoint. Instead, the clinical safety and accuracy validation focused on the performance of the AI itself and its agreement with expert evaluations.
However, the clinical study section states: "Expert assessments indicated that AI-assisted highlighting reduced the perceived risk of adverse events in 61.67% of cases and reduced the risk of block failure in 66.36%." While this hints at a perceived benefit for human users, it is not a formal MRMC study quantifying the improvement in human reader performance with AI assistance compared to without it.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
Yes, a standalone performance evaluation of the algorithm was conducted. The "Clinical Safety and Accuracy Validation" and "Analytical Validation" sections primarily report the algorithm's own performance (e.g., accuracy, Dice similarity, FP/FN rates, and agreement with experts on the quality score), measured without a human in the loop during the AI processing; the experts then evaluated the AI's output against the ground truth.
7. Type of Ground Truth Used
The ground truth for the clinical validation appears to be expert evaluation/consensus by "expert U.S. board-certified anesthesiologists." The initial ultrasound scans were performed by anesthesiologists, implying that the images themselves might be implicitly curated, but the "truth" against which the AI was measured was derived from expert review of the AI's outputs on those images. The document doesn't mention pathology or outcomes data as the direct ground truth.
8. Sample Size for the Training Set
The document does not provide the sample size used for the training set. It mentions that Nerveblox utilizes "non-adaptive AI/ML functionalities" and "Locked deep learning models," but details about the training data (e.g., size, diversity) are absent.
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. This information is typically proprietary or part of the internal development process and is often not fully disclosed in 510(k) summaries beyond stating the use of "locked deep learning models."