510(k) Data Aggregation
(150 days)
Nerveblox assists qualified healthcare professionals in identifying anatomical structures in the following ultrasound-guided peripheral nerve block regions for use prior to any needle intervention and is used for adult patients 18 years of age or older. It is not used in combination with needles or during needle insertion.
Nerveblox supports users in the following block regions:
- Interscalene Brachial Plexus
- Supraclavicular Brachial Plexus
- Infraclavicular Brachial Plexus
- Cervical Plexus
- Axillary Brachial Plexus
- PECS I & II
- Transversus Abdominis Plane (TAP)
- Rectus Sheath
- Femoral Nerve
- Adductor Canal
- Popliteal Sciatic
- Erector Spinae Plane (ESP)
Nerveblox is a software as a medical device, designed to assist clinicians in identifying anatomy for ultrasound-guided peripheral nerve blocks.
Integrated into the commercially available Venue family of ultrasound systems from GE HealthCare, Chicago, IL (Venue, K240111; Venue Go, K240053; Venue Fit, K234106; Venue Sprint, K240206), Nerveblox uses non-adaptive AI/ML functionality to highlight anatomical structures: it applies color overlays, adds name labels, and provides a quality score that indicates both the image's overall suitability for anatomical assessment and how completely the key anatomical structures have been detected.
While Nerveblox enhances visualization, it does not replace the clinician's expertise but supports anatomical identification prior to the procedure.
Here's a breakdown of the acceptance criteria and the study proving Nerveblox meets them, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document presents two main areas for acceptance criteria: Analytical Validation and Clinical Safety and Accuracy Validation.
| Verification/Validation Methods | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Analytical Validation | ||
| Anatomical Structure Detection Accuracy | > 0.8 | Acceptance criteria were successfully met for all block regions. (Specific value not provided for test set, but clinical study results for accuracy are 97%) |
| Dice Similarity Score | > 0.75 | Acceptance criteria were successfully met for all block regions. |
| Quality Meter Accuracy | > 0.85 | Acceptance criteria were successfully met for all block regions. (Clinical study reports PPA and NPA for agreement with experts on quality score levels, and weighted Kappa.) |
| Clinical Safety and Accuracy Validation | ||
| Anatomical Structures: | ||
| Accuracy (TP + TN) | Correct highlighting of safety critical anatomical structures ≥ 80% | 97.2% (933 out of 960 scans in the clinical study). The clinical study also separately reported a true positive rate of 98% and a true negative rate of 90%. |
| Misidentification (FP) Rate | < 5% | 1.0% (10 out of 960 scans). The clinical study section separately reports a false-positive rate (FPr) of 10.4% from expert assessment, suggesting that the summary table and the clinical study section use different calculation methods for this metric. Because the clinical study provides more detail, the 10.4% FPr appears to be the more specific finding from the expert evaluation. |
| Non-identification (FN) Rate | < 15% | 1.8% (17 out of 960 scans). The clinical study also separately reported a false-negative rate (FNr) of 2% from expert assessment. |
| Image Quality Score: | ||
| Accuracy of identifying the correct block region | > 90% | 95.3% |
| Error Rate of identifying the correct block region | < 5% | 4.7% |
| Fair or above agreement with the experts on quality score levels | Weighted Kappa Coefficient (κ) ≥ 0.77 Positive Percentage Agreement (PPA) ≥ 61.5 % Negative Percentage Agreement (NPA) ≥ 88.9 % | Weighted Kappa Coefficient (κ) ranged from 0.77 to 0.98 across all block regions, indicating substantial agreement. PPA ranged from 61.5% to 100.0% across individual score levels and regions. NPA ranged from 88.9% to 100.0% across individual score levels and regions. |
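The Dice similarity criterion in the table (> 0.75) measures the overlap between the algorithm's segmentation mask and a reference mask. A minimal sketch of the metric, for illustration only (not Nerveblox's actual evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Computes 2*|A∩B| / (|A| + |B|); returns 1.0 when both masks are empty.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = int(pred.sum()) + int(truth.sum())
    if total == 0:
        return 1.0
    return 2.0 * int(np.logical_and(pred, truth).sum()) / total

# Toy 4x4 masks: the prediction covers 2 of 4 true pixels plus 1 false positive.
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True            # 4 true pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:2] = True             # 2 pixels, both inside truth
pred[0, 0] = True                 # 1 false-positive pixel
score = dice_score(pred, truth)   # 2*2 / (3+4) ≈ 0.571
```

A score above the 0.75 threshold would indicate substantial spatial agreement between the highlighted region and the reference annotation.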
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The clinical validation study involved 80 distinct ultrasound scans from 40 healthy volunteers.
- Data Provenance: The study was a prospective clinical validation study conducted by anesthesiologists. The volunteers were from a general population with varying BMIs. The country of origin of the data is not explicitly stated in the provided text; however, the description implies a primary prospective clinical study rather than a collection of diverse retrospective sources.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: The document states that the results were "evaluated by expert U.S. board-certified anesthesiologists." The exact number of experts is not specified.
- Qualifications of Experts: They were "expert U.S. board-certified anesthesiologists." Their years of experience are not explicitly mentioned.
4. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It only mentions that the AI results were "evaluated by expert U.S. board-certified anesthesiologists." This suggests expert consensus or individual expert review, but the specific process for resolving discrepancies is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC comparative effectiveness study, comparing human readers with vs. without AI assistance, was not explicitly described as a primary endpoint. Instead, the clinical safety and accuracy validation focused on the performance of the AI itself and its agreement with expert evaluations.
However, the clinical study section states: "Expert assessments indicated that AI-assisted highlighting reduced the perceived risk of adverse events in 61.67% of cases and reduced the risk of block failure in 66.36%." While this hints at a perceived benefit for human users, it's not a formal MRMC study showing quantifiable improved performance of human readers with AI assistance compared to without.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
Yes, a standalone performance evaluation of the algorithm was conducted. The "Clinical Safety and Accuracy Validation" and "Analytical validation" sections primarily report the algorithm's performance (e.g., accuracy, Dice similarity, FP/FN rates, agreement with experts on quality score) measured without active human intervention in the loop during the actual AI processing. The experts evaluated the AI's output against ground truth.
7. Type of Ground Truth Used
The ground truth for the clinical validation appears to be expert evaluation/consensus by "expert U.S. board-certified anesthesiologists." The initial ultrasound scans were performed by anesthesiologists, implying that the images themselves might be implicitly curated, but the "truth" against which the AI was measured was derived from expert review of the AI's outputs on those images. The document doesn't mention pathology or outcomes data as the direct ground truth.
8. Sample Size for the Training Set
The document does not provide the sample size used for the training set. It mentions that Nerveblox utilizes "non-adaptive AI/ML functionalities" and "Locked deep learning models," but details about the training data (e.g., size, diversity) are absent.
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. This information is typically proprietary or part of the internal development process and is often not fully disclosed in 510(k) summaries beyond stating the use of "locked deep learning models."
(25 days)
ScanNav Anatomy Peripheral Nerve Block is indicated to assist qualified healthcare professionals to identify and label the below mentioned anatomy in live ultrasound images in preparation for ultrasound guided regional anesthesia prior to needle insertion for patients 18 years of age or older.
The highlighting of structures in the following anatomical regions is supported:
- Axillary level brachial plexus
- Erector spinae plane
- Interscalene level brachial plexus
- Popliteal level sciatic nerve
- Rectus sheath plane
- Sub-sartorial femoral triangle / Adductor canal
- Superior trunk of brachial plexus
- Supraclavicular level brachial plexus
- Longitudinal suprainguinal fascia iliaca plane
- Femoral block
ScanNav Anatomy Peripheral Nerve Block is an accessory to compatible general-purpose diagnostic ultrasound systems.
ScanNav Anatomy Peripheral Nerve Block is software as a medical device (SaMD) which assists qualified healthcare professionals to identify and label relevant anatomical structures in preparation for ultrasound guided regional anesthesia prior to needle insertion for patients 18 years of age or older.
The device receives ultrasound images in real-time from a compatible general-purpose ultrasound machine. It processes these images using deep learning artificial intelligence algorithms and highlights relevant anatomical structures. The ultrasound machine display remains unaffected, and the highlighting is only displayed on a general-purpose panel PC provided with the device.
Here's a breakdown of the acceptance criteria and the study details for the ScanNav Anatomy Peripheral Nerve Block device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance (Femoral Block) |
|---|---|
| FP rate: Mis-identification rate of safety critical anatomical structures in the indicated procedures is less than 5%. | 1.1% (2 out of 183 scans) |
| Accuracy (TP+TN) rate: Correct highlighting of safety critical anatomical structures in the indicated procedures at least 80% of the time. | 96.7% (177 out of 183 scans) |
| FN rate: Non-identification rate of safety critical anatomical structures in the indicated procedures is less than 15%. | 2.2% (4 out of 183 scans) |
Note: The software tests also had acceptance criteria (e.g., successful completion of unit test, integration test, etc.) but specific quantitative performance metrics were not provided in the summary, only that "all software tests have been successfully completed without any anomalies."
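The three femoral-block rates above follow directly from the reported scan counts. A minimal sketch reproducing them and checking the stated acceptance criteria (counts taken from the summary; the code itself is illustrative):

```python
# Femoral-block counts from the 510(k) summary: 183 scans total.
TOTAL = 183
FP, TP_TN, FN = 2, 177, 4          # mis-identified, correct, non-identified

fp_rate = 100 * FP / TOTAL         # 1.1%
accuracy = 100 * TP_TN / TOTAL     # 96.7%
fn_rate = 100 * FN / TOTAL         # 2.2%

# Check against the stated acceptance criteria.
assert fp_rate < 5                 # mis-identification below 5%
assert accuracy >= 80              # correct highlighting at least 80%
assert fn_rate < 15                # non-identification below 15%
```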
Study Details
1. Sample Size for the Test Set and Data Provenance:
- Sample Size: 183 scans were used for the safety and accuracy validation of the Femoral Nerve Block.
- Data Provenance: The document does not specify the country of origin of the data or whether the study was retrospective or prospective. It only states that the tests used "established protocol and acceptance criteria same as that used for the predicate device."
2. Number of Experts Used to Establish Ground Truth and Qualifications:
- This information is not provided in the given 510(k) summary.
3. Adjudication Method for the Test Set:
- This information is not provided in the given 510(k) summary.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- A MRMC comparative effectiveness study was not explicitly mentioned in this summary. The study described focuses on the device's standalone performance against defined criteria, rather than comparing human readers with and without AI assistance.
5. Standalone (Algorithm Only) Performance:
- Yes, the provided performance metrics (FP rate, Accuracy, FN rate) directly reflect the standalone performance of the algorithm for the Femoral Nerve Block. The device processes ultrasound images using AI algorithms and highlights structures, and the reported rates indicate how well the algorithm performs this task independently.
6. Type of Ground Truth Used:
- The summary states that the validation involved "safety and Accuracy validation" through "established test protocols." While not explicitly named (e.g., "expert consensus" or "pathology"), given the context of identifying anatomical structures in ultrasound images, it is highly probable that the ground truth was established by expert consensus (e.g., skilled sonographers or regional anesthetists manually identifying and labeling the structures) or well-defined anatomical landmarks.
7. Sample Size for the Training Set:
- This information is not provided in the given 510(k) summary.
8. How the Ground Truth for the Training Set Was Established:
- This information is not provided in the given 510(k) summary.
(193 days)
ScanNav Anatomy Peripheral Nerve Block is indicated to assist qualified healthcare professionals to identify and label the below mentioned anatomy in live ultrasound images in preparation for ultrasound guided regional anesthesia prior to needle insertion for patients 18 years of age or older.
The highlighting of structures in the following anatomical regions is supported:
- Axillary level brachial plexus
- Erector spinae plane
- Interscalene level brachial plexus
- Popliteal level sciatic nerve
- Rectus sheath plane
- Sub-sartorial femoral triangle / Adductor canal
- Superior trunk of brachial plexus
- Supraclavicular level brachial plexus
- Longitudinal suprainguinal fascia iliaca plane
ScanNav Anatomy Peripheral Nerve Block is an accessory to compatible general-purpose diagnostic ultrasound systems.
ScanNav Anatomy Peripheral Nerve Block is a software medical device which assists anesthetists and other qualified healthcare professionals in identifying anatomical structures within ultrasound images during ultrasound-guided regional anesthesia (UGRA) procedures by highlighting the relevant anatomical structures in real time.
The device performs the highlighting using deep learning artificial intelligence technology based on convolutional neural networks (CNNs). These deep-learning models generate a colored overlay that allows the user to identify the specific anatomical structures of interest for the procedure. A separate monitor displays the highlighted images as an overlay on top of the ultrasound image, so the original view from the ultrasound machine is not affected. The deep learning models are locked, and they do not continue to learn in the field.
The device interfaces with an ultrasound machine that has an external monitor output meeting the compatibility requirements. The ScanNav Anatomy Peripheral Nerve Block runs on a mobile computing platform (a commercial off-the-shelf panel PC) that performs the processing, with an integrated touchscreen monitor to display the user interface and anatomy highlighting.
The Software as a Medical Device is packaged with a tablet PC, power cable, compatible plug, and mounting bracket and instructions for mounting the tablet to the ultrasound host. This acts as a separate monitor to display the highlighted images as an overlay on top of the ultrasound image, so the original view from the ultrasound machine is not affected. The ScanNav Anatomy Peripheral Nerve Block system is composed of a software medical device and other non-medical devices such as a panel PC, power supply, an HDMI interface cable and a VESA mount.
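The colored-overlay display described above amounts to alpha-blending a per-structure segmentation mask onto the ultrasound frame. A minimal sketch of that idea, assuming a NumPy image representation (illustrative only, not ScanNav's actual rendering code; the function name and color choice are my own):

```python
import numpy as np

def blend_overlay(frame, mask, color=(255, 200, 0), alpha=0.4):
    """Alpha-blend a highlight color onto frame wherever mask is set.

    frame: HxWx3 uint8 image (grayscale ultrasound replicated to 3 channels)
    mask:  HxW boolean segmentation of one anatomical structure
    """
    out = frame.astype(np.float32)
    tint = np.asarray(color, dtype=np.float32)
    out[mask] = (1 - alpha) * out[mask] + alpha * tint
    return np.clip(out.round(), 0, 255).astype(np.uint8)

# Toy frame and mask: highlight a 3x3 region of a flat gray image.
frame = np.full((8, 8, 3), 100, dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
highlighted = blend_overlay(frame, mask)
```

Pixels outside the mask are untouched, so the underlying ultrasound image remains visible; in the actual device the blending happens on a separate display so the ultrasound machine's own output is never modified.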
Acceptance Criteria and Device Performance Study for ScanNav Anatomy Peripheral Nerve Block
This document outlines the acceptance criteria for the ScanNav Anatomy Peripheral Nerve Block device and details the studies conducted to demonstrate its compliance.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Metric | Acceptance Criterion | Reported Device Performance |
|---|---|---|---|
| Clinical Performance (Primary Endpoint) | Assistance in obtaining correct ultrasound view prior to needle insertion | Majority view (at least 8/15 participants) agree the device assists. | 63% (19/30) of participants were assisted, meeting the criterion. |
| Clinical Performance (Secondary Endpoint 1) | Assistance in identification of anatomical structures (up to BMI 35 kg/m2) | Majority view (at least 8/15 participants) agree the device assists. | 70% (21/30) of participants were assisted, meeting the criterion. |
| Clinical Performance (Secondary Endpoint 2) | Assistance in supervision and training for anatomical structure identification | Majority view (at least 8/15 supervising experts) agree the device assists. | 87% (13/15) of experts were assisted, meeting the criterion. |
| Clinical Performance (Secondary Endpoint 3) | Improvement in operator confidence | Majority view (at least 8/15 participants) agree the device improves confidence. | 63% (19/30) of participants had improved confidence, meeting the criterion. |
| Safety (Misidentification) | Frequency of misidentification (FP Rate) of anatomical structures | Not explicitly stated as a numerical criterion for all blocks, but assessed as a primary endpoint in one study. | Varies by block, ranging from 0% (ESP, Adductor) to 21.9% (SFIC). Details in section 2 below. |
| Safety (Adverse Events Risk) | Frequency of highlighting risking an adverse event | <= 5% of total for each specified adverse event risk. | Specific percentages are redacted (b)(4), but implied to be within acceptable limits as the device was granted De Novo. |
| Accuracy (Correct Identification) | Frequency of correct identification (TP+TN) | >= 80% of total for each block. | Varies by block, ranging from 76.2% (SFIC) to 98.3% (SC). Details in section 2 below. |
| Human Factors | Successful completion of essential and critical tasks by users | All participants complete essential and critical tasks without patterns of use failures, confusion, or difficulties. | All 30 participants completed tasks. No UI design issues, use errors, or task failures were found. |
| Software | Compliance with design and safety standards | Software documentation acceptable, with Major Level of Concern addressed. | Documentation reviewed and accepted. Supports cybersecurity and hazard analysis. |
| Electromagnetic Compatibility & Electrical Safety | Compliance with IEC 60601-1-12 and IEC 60601-1-2 | Test results support electrical safety and electromagnetic compatibility. | Test results support compliance. |
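The "majority view" clinical endpoints in the table reduce to simple proportions compared against the 8/15 threshold. A sketch reproducing the reported percentages from the participant counts (counts are from the summary; the dictionary structure and names are my own):

```python
# Clinical endpoints: (participants assisted, group size).
# 30 = experts + trainees combined; 15 = supervising experts only.
endpoints = {
    "correct ultrasound view": (19, 30),
    "anatomical structure identification": (21, 30),
    "supervision and training (experts)": (13, 15),
    "operator confidence": (19, 30),
}

THRESHOLD = 8 / 15  # majority view: at least 8 of each 15-participant group

for name, (assisted, total) in endpoints.items():
    pct = round(100 * assisted / total)
    met = assisted / total >= THRESHOLD
    print(f"{name}: {pct}% ({assisted}/{total}), criterion met: {met}")
```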
2. Study Details for Device Performance
The provided documentation describes two main studies relevant to device performance: a Clinical Validation Study to assess Performance and predict Adverse Events (IU2021 AG 07) and a Human Factors (HF) Study Design.
Clinical Validation Study (IU2021 AG 07)
- Sample Size: 40 volunteers.
- Data Provenance: Single-center, prospective validation study conducted in the USA (Oregon Health & Science University, Portland).
- Number of Experts for Ground Truth: Three (3) expert anesthesiologists in UGRA.
- Qualifications of Experts: Anesthesiologists competent to perform independent UGRA.
- Adjudication Method: Majority opinion (2/3) determined TP, TN, FP, FN, and AE rates.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: No, this study was primarily a standalone validation of the device's highlighting performance against expert consensus. It did not directly compare human readers with and without AI assistance to quantify an effect size in human improvement. However, the Human Factors study (described below) did involve performance "with and without the aid of the device," providing some insight into assisted performance.
- Standalone Performance (Algorithm-only): Yes, the study assessed the "device output" and its highlighting performance, with experts evaluating the device's interpretations in isolation. Experts answered questions on the device's highlighting performance (TP, TN, FP, FN).
- Type of Ground Truth: Expert consensus. Three expert anesthesiologists viewed recorded ultrasound clips side-by-side with the ScanNav Anatomy PNB output and determined the correctness of the highlighting.
- Sample Size for Training Set: Not explicitly stated in the provided text. The device uses "deep learning artificial intelligence technology based on convolutional networks (CNNs)."
- How Ground Truth for Training Set was Established: Not explicitly stated, but typically involves expert annotation of ultrasound images for relevant anatomical structures.
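The 2/3 majority-opinion adjudication described above can be sketched as a simple vote over the three experts' per-structure classifications (the labels and function name are illustrative, not from the submission):

```python
from collections import Counter

def adjudicate(expert_labels):
    """Return the majority label (at least 2 of 3) among expert ratings.

    Each expert independently classifies a highlighted structure as one of
    'TP', 'TN', 'FP', or 'FN'; the 2/3 majority opinion becomes the
    adjudicated result. Returns None if no label reaches a majority.
    """
    label, count = Counter(expert_labels).most_common(1)[0]
    return label if count >= 2 else None

# Two of three experts call the highlighting a true positive.
result = adjudicate(["TP", "TP", "FP"])   # -> "TP"
```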
Key Performance Metrics (from IU2021 AG 07):
- Primary Endpoint (Misidentification - FP Rate):
- Axillary: 0.3%
- ESP: 0.0%
- IS: 1.3%
- Pop: 0.6%
- RS: 3.2%
- Adductor: 0.0%
- ST: 5.2%
- SC: 0.8%
- SFIC: 21.9%
- Secondary Endpoint (Accuracy - Correct Identification - TP+TN Rate):
- Axillary: 97.7%
- ESP: 88.8%
- IS: 94.1%
- Pop: 98.1%
- RS: 96.8%
- Adductor: 90.4%
- ST: 90.9%
- SC: 98.3%
- SFIC: 76.2%
- Adverse Event Rates: Redacted sections (b)(4) indicate specific risks (PONS, LAST, Pneumothorax, Peritoneum risk) were assessed per block, aiming for <= 5% frequency.
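The per-block figures above can be checked programmatically against the stated ≥ 80% accuracy criterion. A minimal sketch (rates copied from the summary; the 5% FP threshold below is for illustration only, since the summary does not state a formal per-block FP criterion):

```python
# Per-block rates from the IU2021 AG 07 summary, in percent.
fp_rate = {"Axillary": 0.3, "ESP": 0.0, "IS": 1.3, "Pop": 0.6, "RS": 3.2,
           "Adductor": 0.0, "ST": 5.2, "SC": 0.8, "SFIC": 21.9}
accuracy = {"Axillary": 97.7, "ESP": 88.8, "IS": 94.1, "Pop": 98.1,
            "RS": 96.8, "Adductor": 90.4, "ST": 90.9, "SC": 98.3,
            "SFIC": 76.2}

# >= 80% correct identification is the stated criterion; the 5% FP cutoff
# here is an illustrative threshold, not one stated in the summary.
below_accuracy = [b for b, v in accuracy.items() if v < 80]
high_fp = [b for b, v in fp_rate.items() if v > 5]
print(below_accuracy)   # ['SFIC']
print(high_fp)          # ['ST', 'SFIC']
```

This makes the outlier visible at a glance: SFIC is the only block below the 80% accuracy criterion, and it also has by far the highest misidentification rate.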
Human Factors (HF) Study Design
- Sample Size: 30 anesthesiologists (15 Expert, 15 Trainee).
- Data Provenance: Summative usability validation study conducted in a simulated interventional procedural lab setting in the USA (b)(4).
- Number of Experts for Ground Truth: The study involved participants' self-assessment via questionnaires and a panel of three (3) independent experts who reviewed recorded scans for later analysis.
- Qualifications of Experts:
- Expert Participants (15): Capable of independent clinical practice of UGRA, deliver regular clinical care, 14 are members of relevant professional societies, 11 hold advanced further training in UGRA.
- Trainee Participants (15): Undergoing training for UGRA procedures; 7 deliver regular clinical care, 13 are members of relevant professional societies.
- Independent Expert Panel (3): Expertise in UGRA, reviewed recorded scans.
- Adjudication Method: For the expert panel, majority panel view determined the device's performance and safety profile for each scan. For primary and secondary clinical endpoints, it was based on majority view of participants (at least 8/15) for each group (experts, trainees, or combined as 30 participants).
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: Yes, in essence. The study involved participants (readers) performing "scans with and without the aid of the device" on two models (cases). This directly assesses the impact of the AI assistance on user performance and perceptions.
- Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance:
- Assistance in obtaining correct ultrasound view: 63% (19/30) participants were assisted.
- Assistance in identification of anatomical structures: 70% (21/30) participants were assisted.
- Assistance in supervision and training (for experts): 87% (13/15) experts were assisted.
- Improved operator confidence: 63% (19/30) participants reported improved confidence.
- Reduction of risk of mistaking an incorrect view: 70% reduction in this risk reported (presumably from expert panel analysis based on device use in the study).
- An increase in risk (amount redacted, (b)(4)) in cases where incorrect highlighting increased the risk.
- Standalone Performance (Algorithm-only): The expert panel independently evaluated ScanNav Anatomy PNB highlighting on recorded scans, which can be seen as an assessment of the algorithm's standalone performance, albeit within the context of generated images from user interaction. However, the clinical validation study (IU2021 AG 07) is a more direct evaluation of standalone performance (TP, TN, FP, FN rates).
- Type of Ground Truth:
- Participant self-assessment/perception: Through questionnaires regarding assistance, confidence, and identification.
- Expert consensus: The independent panel of three experts reviewed recorded scans and device highlighting, completing the same questionnaire as participants, with their "majority panel view" forming a ground truth for device performance and safety.
- Sample Size for Training Set & How Ground Truth for Training Set was Established: Not stated in the provided text.