Search Results
Found 3 results
510(k) Data Aggregation
(119 days)
Exo Imaging
AI Platform 2.0 is intended for noninvasive processing of ultrasound images to detect, measure, and calculate relevant medical parameters of structures and function in patients with suspected disease. In addition, it can provide Quality Score feedback during image acquisition to assist healthcare professionals who are trained and qualified to conduct echocardiography and lung ultrasound scans in the current standard of care. The device is intended to be used on images of adult patients.
Exo AI Platform 2.0 (AIP 2.0) is a software as a medical device (SaMD) that helps qualified users with image-based assessment of ultrasound examinations in adult patients. It is designed to simplify workflow by helping trained healthcare providers evaluate, quantify, and generate reports for ultrasound images. AIP 2.0 accepts input in the Digital Imaging and Communications in Medicine (DICOM) format from a specific range of ultrasound scanners and allows users to detect, measure, and calculate relevant medical parameters of structures and function in patients with suspected disease. It also provides real-time frame and clip quality scores for the Left Ventricle from the four-chamber apical and parasternal long-axis views of the heart, and for lung scans. In addition, the AI modules are provided as a software component to be integrated by another developer into their legally marketed ultrasound imaging device; essentially, the Algorithm and API modules are medical device accessories.
Key features of the software are:
- Lung AI: An AI-assisted tool for suggesting the presence of lung structures and artifacts on ultrasound images, namely A-lines. Additionally, a per-frame and per-clip quality score is generated for each lung scan.
- Cardiac AI: An AI-assisted tool for the quantification of Left Ventricular Ejection Fraction (LVEF), myocardium wall thickness (Interventricular Septum (IVSd), Posterior Wall (PWd)), and IVC diameter on cardiac ultrasound images. Additionally, a per-frame and per-clip quality score is generated for each Apical and PLAX cardiac scan.
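Since AIP 2.0 ingests studies in DICOM format, a minimal illustration may help: per the DICOM Part 10 file format, a conforming file begins with a 128-byte preamble followed by the magic bytes `DICM`. The helper below is a hypothetical sketch of that check using only the standard library; it is not part of the vendor's API.

```python
def looks_like_dicom(path: str) -> bool:
    """Check for the DICOM Part 10 magic: a 128-byte preamble then b'DICM'."""
    with open(path, "rb") as f:
        preamble = f.read(128)
        magic = f.read(4)
    # A truncated file fails the length check; a non-DICOM file fails the magic check.
    return len(preamble) == 128 and magic == b"DICM"
```

Real pipelines would go on to parse the data set elements (transfer syntax, modality, pixel data) with a DICOM library rather than reading bytes by hand.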
The provided text describes the acceptance criteria and the study demonstrating that the device, AI Platform 2.0 (AIP 2.0), meets these criteria for specific functionalities. This device is a software as a medical device (SaMD) intended for processing ultrasound images of adult patients, including detecting, measuring, and calculating medical parameters, and providing quality score feedback during image acquisition.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
The document specifies performance metrics for two main functionalities tested: Left Ventricle Wall Thickness and Inferior Vena Cava (IVC) measurements, and Quality AI (for frames and clips). The acceptance criteria are implicitly defined as high agreement with expert measurements, indicated by high Intraclass Correlation Coefficient (ICC) values.
| Functionality/Measurement | Acceptance Criteria (Implicit) | Reported Device Performance (ICC with 95% CI) |
| --- | --- | --- |
| LV Wall Thickness | High correlation with experts | |
| Interventricular Septum (IVSd) | | 0.93 (0.89–0.96) |
| Posterior Wall (PWd) | | 0.94 (0.89–0.97) |
| Inferior Vena Cava (IVC) | High correlation with experts | |
| IVC Dmin | | 0.93 (0.90–0.95) |
| IVC Dmax | | 0.94 (0.90–0.96) |
| Quality AI | High agreement with experts | |
| Overall agreement (frames) | | 0.94 (0.94–0.95) |
| Overall agreement (clips) | | 0.94 (0.92–0.95) |
| Diagnostic Classification | >95% agreement with experts (ACEP score ≥3) | 98.3% of clips rated ACEP ≥3 by experts received at least "Minimum criteria met for diagnosis" from Clip Quality AI; 98.0% of scans rated "Minimal criteria met for diagnosis" or "Good" by Quality AI were deemed diagnostic by experts (ACEP score ≥3). |
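The agreement metric reported above, the intraclass correlation coefficient, can be computed from a standard two-way ANOVA decomposition. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement) in plain Python; it illustrates the formula only and is not the sponsor's analysis code, and the sponsor's exact ICC variant is not stated in the document.

```python
def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    x is an n-subjects x k-raters matrix (list of lists) of measurements.
    """
    n, k = len(x), len(x[0])
    grand = sum(sum(row) for row in x) / (n * k)
    row_means = [sum(row) / k for row in x]                      # per subject
    col_means = [sum(x[i][j] for i in range(n)) / n for j in range(k)]  # per rater
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((r - grand) ** 2 for r in row_means) / (n - 1)
    msc = n * sum((c - grand) ** 2 for c in col_means) / (k - 1)
    sse = sum((x[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Identical raters yield an ICC of 1, while a constant bias between raters lowers the score, because ICC(2,1) measures absolute agreement rather than mere consistency.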
2. Sample size used for the test set and the data provenance
- LV Wall Thickness and IVC measurements: 100 subjects.
- Quality AI (Section a): 184 patients, resulting in 226 clips (29,732 frames).
- Quality AI (Section b, real-time scanning): 396 lung and cardiac scans.
- Data Provenance: The test data encompassed diverse demographic variables (gender, age, ethnicity) from multiple sites in metropolitan cities with diverse racial patient populations. The text states the data was entirely separated from the training/tuning datasets. The studies were retrospective for the initial quality evaluation (comparing to previously acquired data rated by sonographers) and prospective for the real-time quality AI evaluation (data acquired while using the AI in real-time by users with varying experience).
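The train/test separation described above is typically enforced at the patient level, so that no subject contributes scans to both sets (otherwise per-patient correlations leak into the test metrics). The helper below is a hypothetical sketch of one common approach, deterministic hash-based bucketing; the document does not describe the sponsor's actual splitting procedure.

```python
import hashlib

def assign_split(patient_id: str, test_fraction: float = 0.2) -> str:
    """Deterministically assign a patient to 'train' or 'test'.

    Hashing the patient ID keeps every scan from one patient in the
    same split, preventing leakage between training and test data.
    """
    digest = hashlib.sha256(patient_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "test" if bucket < test_fraction else "train"
```

Because the assignment depends only on the ID, new scans from a previously seen patient always land in that patient's original split.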
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- LV Wall Thickness and IVC measurements: Ground truth was established as the average measurement of three experts. Their specific qualifications (e.g., years of experience, specialty) are not explicitly stated beyond "experts."
- Quality AI (Section a): Ground truth was established by "experienced sonographers." Their number and specific qualifications are not detailed beyond "experienced."
- Quality AI (Section b, real-time scanning): Ground truth for diagnostic classification was established by "expert readers" (ACEP score of 3 or above). Their number and specific qualifications are not detailed beyond "expert readers."
4. Adjudication method for the test set
- LV Wall Thickness and IVC measurements: The adjudication method was taking the average measurement of three experts. This implies a form of consensus or central tendency for ground truth.
- Quality AI (Section a): Ground truth was based on "quality rating by experienced sonographers on each frame and the entire clip." It doesn't explicitly state an adjudication method beyond this, implying individual expert ratings were used or a single consensus was reached, but not a specific multi-reader adjudication process like 2+1 or 3+1.
- Quality AI (Section b): Ground truth was based on "ACEP quality of 3 or above by expert readers." Similar to Section a, a specific adjudication method beyond "expert readers" is not detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not explicitly describe a traditional MRMC comparative effectiveness study that directly quantifies the improvement of human readers with AI assistance versus without AI assistance.
The Quality AI section (b) indicates that 26 users (including 18 novice users) conducted 396 lung and cardiac scans using the real-time quality AI feedback. This suggests an evaluation of the AI's ability to guide users to acquire diagnostic quality images, which is an indirect measure of assisting human performance. However, it does not provide an effect size of how much human readers improve in their interpretation or diagnosis with AI assistance. The study focuses on the AI's ability to help users acquire diagnostic quality images.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, standalone performance was evaluated for the following:
- Left Ventricle Wall Thickness and IVC measurements: The performance (ICC) was calculated directly between the AI's measurements and the expert-derived ground truth. This is a standalone performance metric.
- Quality AI (Section a): The overall agreement (ICC) between the Quality AI and quality ratings by experienced sonographers was calculated. This also represents standalone performance of the AI's quality assessment function.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used for the evaluated functionalities was expert consensus/measurement:
- LV Wall Thickness and IVC measurements: Average measurement of three experts.
- Quality AI: Quality ratings by experienced sonographers (Section a) and ACEP quality scores by expert readers (Section b).
No mention of pathology or outcomes data as ground truth.
8. The sample size for the training set
The document explicitly states: "The test data was entirely separated from the training/tuning datasets and was not used for any part of the training/tuning." However, it does not provide the specific sample size for the training set.
9. How the ground truth for the training set was established
The document does not explicitly describe how the ground truth for the training set was established. It only mentions that the AI models use "non-adaptive machine learning algorithms trained with clinical data." The Predetermined Change Control Plan also refers to "new training data" and augmenting the training dataset, but without details on ground truth establishment for these training datasets.
(110 days)
Exo Imaging Inc.
Exo Iris is indicated for use by qualified and trained healthcare professionals in environments where healthcare is provided to enable diagnostic ultrasound imaging and measurement of anatomical structures and fluids of adult and pediatric patients for the following clinical applications: Peripheral Vessel (including carotid, deep vein thrombosis and arterial studies), Procedural Guidance, Small Organ (including thyroid, scrotum and breast), Cardiac, Abdominal, Urology, Fetal/Obstetric, Gynecological, Musculoskeletal (conventional), Musculoskeletal (superficial) and Ophthalmic. Modes of operation include: B-Mode + Color Doppler, B-Mode + M-Mode.
The subject device, Exo Iris is a hand-held, general purpose diagnostic imaging system used to enable visualization of anatomical structures and fluid of adult and pediatric patients. The system is intended to be used by trained healthcare professionals.
The system generates 2D images using a single ultrasound transducer with broad imaging capabilities. The images are displayed on a commercial off-the-shelf mobile device (iPhone) by means of a proprietary mobile application (Exo Iris app) provided by Exo Imaging. Images can be displayed in the following modes: B-Mode, B-Mode + Color Doppler, B-Mode + M-Mode.
The mobile application's user interface includes touchscreen menus, buttons, controls, indicators, and navigation icons that allow the operator to control the system and to view ultrasound images.
The provided text does not describe the acceptance criteria and supporting study in enough detail to populate all of the requested fields directly. The document primarily focuses on FDA 510(k) clearance, asserting substantial equivalence to a predicate device.
Here's an attempt to extract and infer information based on the text:
1. Table of acceptance criteria and the reported device performance
The document broadly states that "All specifications for Exo Iris have been verified and validated... and the results demonstrated that the predetermined acceptance criteria were met." However, it does not provide a specific table of acceptance criteria with corresponding performance results. Instead, it lists the standards against which testing was conducted.
Acceptance Criteria Category (Inferred from standards) | Reported Device Performance (General Statement) |
---|---|
Electrical Safety (per ANSI/AAMI ES60601-1) | Compliant with applicable electrical safety standards |
Electromagnetic Compatibility (EMC) (per IEC 60601-1-2, FCC Part 15) | Compliant with applicable EMC standards |
Ultrasound Safety and Performance (per IEC 60601-2-37, NEMA UD-2) | Meets safety and performance requirements for ultrasonic medical diagnostic and monitoring equipment; Meets standard for acoustic output measurement |
Biocompatibility (per ISO 10993) | Compliant with ISO 10993 |
Software Life Cycle Processes (per IEC 62304) | Compliant with Medical Device Software - Software Life Cycle Processes |
Design Control and Risk Mitigation (per 21 CFR Part 820.30, ISO 14971) | All design verification and validation activities performed; predetermined acceptance criteria met; all risk mitigations satisfactorily verified and validated. |
2. Sample size used for the test set and the data provenance
The document explicitly states: "No human clinical data is provided to support substantial equivalence."
Therefore, there is no information on a specific "test set" in terms of patient data. The performance evaluations were primarily through bench testing against established standards.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable, as no human clinical data (and thus no ground truth derived from it) was used for substantial equivalence.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as no human clinical data (and thus no adjudication of ground truth) was used for substantial equivalence.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC study was done, as the submission explicitly states "No human clinical data is provided to support substantial equivalence." The device is a diagnostic ultrasound system, not an AI-assisted diagnostic tool in the context of this submission that would require demonstrating an improvement in human reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
This refers to the device itself as a diagnostic ultrasound system. Its performance evaluation was done through bench testing to ensure it meets technical standards, not as an algorithm performing standalone diagnostics on patient cases.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for the device's performance is derived from its ability to meet the technical specifications and requirements defined by the various industry standards mentioned (e.g., electrical safety, acoustic output, software quality). There is no "ground truth" related to disease detection or diagnosis established through expert consensus, pathology, or outcomes data in this submission.
8. The sample size for the training set
Not applicable. The document describes a diagnostic ultrasound system, not an AI/ML device that requires a training set of data.
9. How the ground truth for the training set was established
Not applicable, as no training set was used.
(95 days)
Exo Imaging Inc.
Exo Iris is indicated for use by qualified and trained healthcare professionals in environments where healthcare is provided to enable diagnostic ultrasound imaging and measurement of anatomical structures and fluids of adult and pediatric patients for the following clinical applications: Peripheral Vessel (including carotid, deep vein thrombosis and arterial studies), Small Organ (including thyroid, scrotum and breast), Cardiac, Abdominal, Urology, Fetal/Obstetric, Gynecological, Musculoskeletal (conventional), Musculoskeletal (superficial). Modes of operation include: B-mode, B-mode + Color Doppler.
Exo Iris is a hand-held, general purpose diagnostic imaging system used to enable visualization of anatomical structures and fluid of adult and pediatric patients. The system is intended to be used by trained healthcare professionals.
The system generates 2D images using a single ultrasound transducer with broad imaging capabilities. The images are displayed on a commercial off-the-shelf mobile device (iPhone) by means of a proprietary mobile application (Exo Iris app) provided by Exo Imaging. Images can be displayed in the following modes: B-Mode, B-Mode + Color Doppler.
The mobile application's user interface includes touchscreen menus, buttons, controls, indicators, and navigation icons that allow the operator to control the system and to view ultrasound images.
This document is an FDA 510(k) clearance letter for the Exo Iris Ultrasound System. It primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed study report on acceptance criteria for an AI/ML-driven device.
Based on the provided text, the Exo Iris is a general diagnostic ultrasound system, not explicitly described as having integrated AI/ML functionality that would necessitate specific performance studies for AI/ML algorithms. The letter states "Clinical data were not required for this type of device" and that software testing "consisted of verification and validation testing including test cases related to off the shelf software, as well as cybersecurity features." This suggests the software component serves device control and data display rather than diagnostic AI/ML algorithms.
Therefore, many of the requested details regarding AI/ML study components (such as expert ground truth, adjudication, MRMC studies, standalone performance, training set details) are not present in this document because the device, as described for this 510(k), does not appear to involve a diagnostic AI/ML algorithm requiring such evaluations.
However, I will extract and present the information available regarding acceptance criteria and performance data for the device itself, acknowledging that it's from a device safety/performance perspective rather than an AI/ML algorithm's diagnostic performance.
Acceptance Criteria and Device Performance (as per general device clearance):
Acceptance Criteria Category | Reported Device Performance |
---|---|
Electrical Safety | Compliant with applicable electrical safety standards (IEC 60601-1, IEC 60601-1-2) |
Mechanical Safety | Meets mechanical safety standards for a Class II medical device |
Biocompatibility | Compliant with ISO 10993 |
Acoustic Output | Compliant with NEMA UD-2 and applicable FDA Guidance |
Software Functionality | Verification and validation testing, including test cases related to off-the-shelf software and cybersecurity features, demonstrated meeting predetermined acceptance criteria. |
Risk Management | Potential risks identified per ISO 14971, analyzed, mitigations implemented and tested. All risk mitigations satisfactorily verified and validated. |
Design Control | All specifications verified and validated per company's Design Control Process (in compliance with 21 CFR Part 820.30); results demonstrated predetermined acceptance criteria were met. |
Substantial Equivalence | Demonstrated substantial equivalence to the predicate device (Butterfly iQ Ultrasound System, K202406) based on indications for use and technological characteristics. |
Information NOT Available in the Document (due to the nature of the device and submission):
As the document indicates, clinical data were not required for this type of device, and the focus is on device safety and operational performance compared to a predicate, not AI/ML diagnostic performance. Therefore, the following are not applicable or not provided:
- Sample size used for the test set and the data provenance: Not applicable for a standalone AI/ML performance study as this device is a general diagnostic ultrasound system. Device testing involved bench testing and software verification, not a clinical test set of images for an AI.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable as there was no AI/ML specific diagnostic test set requiring expert ground truth.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not applicable.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable.
- The sample size for the training set: Not applicable.
- How the ground truth for the training set was established: Not applicable.
Summary of what is present:
The document describes the regulatory clearance for a diagnostic ultrasound system (Exo Iris). The performance data presented are primarily technical and safety verifications required for medical devices (electrical safety, mechanical safety, biocompatibility, acoustic output, software functionality, and risk management) to demonstrate substantial equivalence to a predicate device, rather than the performance of a diagnostic AI/ML algorithm.