The LungVision System is intended to enable users to segment previously acquired 3D CT datasets and overlay and register these 3D segmented data sets with fluoroscopic live X-ray images of the same anatomy in order to support catheter/device navigation during pulmonary procedures.
The System is intended to assist the guidance of endobronchial tools to areas of interest inside a patient's lungs. The System allows the user to mark lesion locations and pathways to marked lesions using a patient's CT scan. During the endoscopic procedure, the System overlays planning information on real-time fluoroscopic images to guide endobronchial tool navigation. The System also provides tomographic images for lesion identification, as well as 3D views for understanding tool and lesion proximity and orientation. The System is designed to be integrated with fluoroscopic imaging systems and external displays.
The Lung Vision system includes a main unit and a tablet. Image-processing algorithms execute on the main unit, and the tablet serves as the primary method of interacting with the system.
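The overlay step described above amounts to projecting 3D planning data (e.g., a marked lesion center) onto the 2D fluoroscopic image. A minimal pinhole-camera sketch of that idea; all function names, matrices, and values here are illustrative assumptions, not taken from the document:

```python
import numpy as np

def project_to_fluoro(K, R, t, point_3d):
    """Project a 3D point (CT/patient frame, mm) onto the 2D image
    plane of the fluoroscope via a pinhole model: u ~ K (R x + t)."""
    x_cam = R @ np.asarray(point_3d, dtype=float) + t   # into detector frame
    uvw = K @ x_cam                                     # homogeneous pixels
    return uvw[:2] / uvw[2]                             # (u, v) in pixels

# Hypothetical intrinsics/extrinsics for illustration only
K = np.array([[1000.0,    0.0, 256.0],
              [   0.0, 1000.0, 256.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1000.0])   # detector ~1 m from the frame origin
uv = project_to_fluoro(K, R, t, [10.0, 20.0, 0.0])
```

Registration in the real system is the harder part (estimating R and t from the live image); this sketch only shows the projection once a registration is available.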
The provided text describes specific acceptance criteria and the studies conducted to demonstrate the subject device's performance.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance (Mean Accuracy ± STD) |
|---|---|
| AI-Tomo Accuracy (Simulated cases) | 3.15 mm ± 3.2 mm (N=500) |
| AI-Tomo Accuracy (Rigid body model cases) | 3.64 mm ± 1.57 mm (N=93) |
| AI-Tomo Accuracy (CBCT scans cases) | 5.34 mm ± 3.32 mm (N=191) |
| Lesion marking accuracy (AI-Tomo vs. CABT) | Comparable |
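The mean ± STD figures in the table are consistent with per-case 3D localization errors (Euclidean distance in mm) summarized across N cases. A small sketch of that summary; the point coordinates below are hypothetical, not data from the studies:

```python
import numpy as np

def accuracy_stats(predicted, ground_truth):
    """Per-case 3D localization error (Euclidean distance, mm),
    summarized as (mean, population std), the form used in the table."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(predicted - ground_truth, axis=1)
    return errors.mean(), errors.std()

# Hypothetical example: 3 cases, algorithm output vs. known lesion center (mm)
pred = [[10.0, 20.0, 30.0], [11.0, 22.0, 29.0], [9.5, 19.0, 31.0]]
gt   = [[10.0, 21.0, 30.0], [10.0, 21.0, 30.0], [10.0, 21.0, 30.0]]
mean_err, std_err = accuracy_stats(pred, gt)
```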
2. Sample Size and Data Provenance for Test Set
- Simulated Cases: 500 cases, based on "real human's CT scans," synthetically simulated.
- Rigid Body Model Cases: 93 cases, using "only one lung anatomy." Data provenance is laboratory testing in a simulated environment.
- CBCT Scans Cases: 191 cases, based on "real human's CT scans and real human's CBCT scans taken in real procedures." The images are CABTs collected from real cases performed by physicians using the LungVision system in the USA, Italy, and Israel. This represents retrospective real-world data.
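For context only: the three reported (mean, std, N) triples above can be combined with the standard pooled-moment formulas. The document does not report a pooled figure; this is purely an illustration of the arithmetic, treating each reported STD as a population standard deviation:

```python
import math

# (mean mm, std mm, N) as reported for each test set
groups = [
    (3.15, 3.20, 500),   # simulated cases
    (3.64, 1.57, 93),    # rigid body model cases
    (5.34, 3.32, 191),   # CBCT scan cases
]

n_total = sum(n for _, _, n in groups)
pooled_mean = sum(m * n for m, _, n in groups) / n_total
# Second moment per group is std^2 + mean^2; pool it, then recover std
second_moment = sum((s**2 + m**2) * n for m, s, n in groups) / n_total
pooled_std = math.sqrt(second_moment - pooled_mean**2)
```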
3. Number of Experts and Qualifications for Ground Truth Establishment
The document mentions "clinical validation was performed by physicians" but does not specify the exact number of experts or their qualifications (e.g., years of experience, specific medical specialty).
4. Adjudication Method for Test Set
The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It states that the output (from AI-Tomo post-processing) is "evaluated using the ground truthing methodology." The ground truth methods are listed as "Geometry test" and "Lesion contrast on real data tests (with and without tool)," implying a direct comparison against a defined standard rather than an expert consensus adjudication protocol.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document states, "The clinical validation of AI Tomo confirmed that lesion marking accuracy was comparable when using CABT and AI Tomo images." This implies a comparison of physician lesion-marking performance on CABT images versus AI-Tomo images, but it does not describe a formal MRMC study comparing human readers with and without AI assistance (i.e., it reports no effect size for human improvement with AI assistance). The focus is on showing that the two imaging modalities yield comparable marking accuracy.
6. Standalone (Algorithm Only) Performance
Yes, standalone (algorithm-only) performance was assessed for AI-Tomo. The accuracy metrics (mean accuracy and standard deviation) reported for the simulated, rigid body model, and CBCT scan cases reflect the performance of the AI-Tomo algorithm itself. The statement "The images are CABTs collected from real cases ... and the output is evaluated using the ground truthing methodology" further supports this, indicating that the algorithm's output was evaluated directly against a ground truth.
7. Type of Ground Truth Used
The ground truth used includes:
- Geometry test: This likely refers to a predefined geometric standard or measurement.
- Lesion contrast on real data tests (with and without tool): This suggests evaluation of the AI's ability to delineate lesions and maintain contrast.
- For the simulated and rigid body model cases, the ground truth would be precisely known due to the controlled nature of these environments (e.g., in simulated cases, the intended registration error, or in rigid body models, the pre-defined transform).
- For CBCT cases, the ground truth methodology is stated, but the ultimate true "lesion location" reference is implied to be established through the "Geometry test" and "Lesion contrast" evaluations.
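The rigid-body point above can be made concrete: in a rigid body model, the ground-truth lesion position follows exactly from the pre-defined transform, so the per-case error is a simple distance between the algorithm's output and the transformed point. A sketch with invented numbers (the transform and coordinates are hypothetical, not from the document):

```python
import numpy as np

def rigid_ground_truth(R, t, ct_point):
    """Map a CT-frame point into the phantom frame via x' = R x + t."""
    return np.asarray(R) @ np.asarray(ct_point, dtype=float) + np.asarray(t)

# Pre-defined phantom transform: 90-degree rotation about z, plus a shift
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([5.0, 0.0, 0.0])

gt = rigid_ground_truth(R, t, [1.0, 2.0, 3.0])   # exact ground truth
algo_output = np.array([3.5, 1.0, 3.0])          # hypothetical AI-Tomo estimate
error_mm = np.linalg.norm(algo_output - gt)      # per-case localization error
```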
8. Sample Size for Training Set
The document does not specify the sample size for the training set used to develop the AI-Tomo feature. It mentions that "CABTs are used as input data to the AI-Tomo post processing," implying these real CABT cases were used for some phase of development or validation, but it does not clearly separate training from testing data in terms of counts.
9. How Ground Truth for Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. It describes ground truth methods for evaluation, but not specifically for the data used to train the AI model.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).