The Lung Vision System is intended to enable users to segment previously acquired 3D CT datasets and overlay and register these 3D segmented data sets with fluoroscopic live X-ray images of the same anatomy in order to support catheter/device navigation during pulmonary procedures.
The Lung Vision System is designed to assist the physician in guiding endobronchial tools toward the target area of interest inside the patient's lungs. Prior to the endoscopic procedure, the system allows planning of the target location and the path to the target area on the CT scan. During the endoscopic procedure, the system overlays the planned data on fluoroscopic images to support endobronchial tool navigation toward the area of interest. The system does not include the fluoroscope, bronchoscope, or the external monitor.
Lung Vision image processing algorithms are executed on a PC-based hardware platform, which can perform the following functions (see the sketch after this list):
- segment previously acquired DICOM 3D CT image data,
- register DICOM 3D CT image data with a live fluoroscopic X-ray image,
- overlay the segmented 3D CT dataset over a live fluoroscopic X-ray image of the same anatomy, obtained on a fluoroscopic system.
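For orientation only, the following is a minimal Python sketch of that segment/register/overlay pipeline, assuming SimpleITK and NumPy. The HU threshold and parallel projection are illustrative placeholders, not the vendor's proprietary segmentation or 2D/3D registration algorithms, and `ct_dicom_dir` and the fluoroscopic frame array are hypothetical inputs.

```python
# Illustrative sketch only -- not the LungVision algorithm.
import numpy as np
import SimpleITK as sitk


def load_ct_series(ct_dicom_dir: str) -> sitk.Image:
    """Read a previously acquired DICOM 3D CT dataset from a directory."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(ct_dicom_dir))
    return reader.Execute()


def segment_low_density(ct: sitk.Image) -> sitk.Image:
    """Crude HU threshold as a stand-in for lung/airway segmentation."""
    return sitk.BinaryThreshold(ct, lowerThreshold=-1024, upperThreshold=-400,
                                insideValue=1, outsideValue=0)


def project_mask(mask: sitk.Image, axis: int = 1) -> np.ndarray:
    """Parallel projection of the 3D mask onto a 2D plane.

    A real system would instead solve a 2D/3D registration to recover the
    fluoroscope pose before projecting.
    """
    return sitk.GetArrayFromImage(mask).max(axis=axis).astype(np.float32)


def overlay(fluoro_frame: np.ndarray, projected_mask: np.ndarray,
            alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend the projected segmentation over a live fluoroscopic frame.

    Assumes projected_mask has already been resampled to fluoro_frame's shape.
    """
    mask = np.clip(projected_mask, 0.0, 1.0)
    return (1.0 - alpha) * fluoro_frame + alpha * mask * float(fluoro_frame.max())
```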
The provided document describes the LungVision System, a device intended to enable users to segment previously acquired 3D CT datasets and overlay and register these 3D segmented data sets with fluoroscopic live X-ray images of the same anatomy to support catheter/device navigation during pulmonary procedures.
The 510(k) summary (pages 3-6) focuses on demonstrating substantial equivalence to a predicate device (Philips EP-Navigator K062650) rather than providing detailed acceptance criteria and a study proving the device meets those specific criteria. The document states "No clinical testing was performed." and "No animal testing was performed."
Therefore, based solely on the provided text, a comprehensive answer to your request is not possible. However, I can extract the available information regarding performance and testing:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics. Instead, it describes general claims of equivalence to a predicate device. The "Performance and Specifications" section states: "The performance and specifications demonstrate that the Lung Vision and predicate devices perform the same functions using the same technologies thus can be found substantially equivalent."
The "Nonclinical / Bench" section states: "We have performed bench tests and found that the Body Vision met all requirements specifications and was found to be equivalent in comparison to the predicate. Testing includes verification testing of the requirements, testing of hazards mitigations, and performance testing of the system."
Without the specific "requirements specifications" or direct comparative performance data, it's impossible to generate a table of acceptance criteria and reported device performance.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document mentions "bench tests" and "testing has also been performed with pig lungs to test accuracy in deformable tissue." However, it does not specify the sample size for these tests, the data provenance (e.g., country of origin), or whether the tests were retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not describe the establishment of a ground truth for a test set using experts. Since no clinical or animal testing was performed to evaluate the diagnostic or navigational accuracy against defined ground truth, this information is not available.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as no clinical or animal studies with a test set requiring expert adjudication for ground truth were described.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was done. The document explicitly states "No clinical testing was performed."
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done
The document mentions "bench tests" and "verification testing of the requirements, testing of hazards mitigations, and performance testing of the system," as well as "testing has also been performed with pig lungs to test accuracy in deformable tissue." These appear to be standalone tests of the device's technical performance attributes. However, specific metrics of "algorithm only" performance (e.g., accuracy of segmentation or registration) are not provided, nor is the "performance" explicitly defined in terms of measurable outcomes. The application is for a navigation aid where a human is in the loop.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "testing has also been performed with pig lungs to test accuracy in deformable tissue," the type of ground truth is not specified. Given it was in vitro testing on pig lungs, it would likely involve physical measurements or anatomical references, but this is not detailed. For other bench tests, "ground truth" would refer to the expected functional behavior or output as per the requirements specifications, rather than a clinical ground truth like pathology.
8. The sample size for the training set
The document describes the device as a "PC based software application" that utilizes algorithms to process existing 3D CT datasets and live fluoroscopic images. This suggests the system employs algorithms that might have been developed or trained. However, the document does not provide any information regarding a training set, its sample size, or how it was used in the development of the device's algorithms.
9. How the ground truth for the training set was established
Not applicable, as no information about a training set is provided.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).