Search Results
Found 2 results
510(k) Data Aggregation
(118 days)
Lung Vision System
The LungVision System is intended to enable users to segment previously acquired 3D CT datasets and overlay and register these 3D segmented data sets with fluoroscopic live X-ray images of the same anatomy in order to support catheter/device navigation during pulmonary procedures.
The Lung Vision System (K163622) is designed to enable users to segment previously acquired 3D CT datasets and overlay and register these 3D segmented data sets with live X-ray images of the same anatomy in order to support catheter/device navigation during pulmonary procedures.
The Lung Vision System is designed to assist the physician in guiding endobronchial tools towards the target area of interest inside the patient's lungs. Prior to the endoscopic procedure, the system allows planning the target location and the path to the target area on the CT scan. During the endoscopic procedure, the system overlays planned data over fluoroscopic images to support endobronchial tool navigation towards the area of interest. The system does not include the Fluoroscope, Bronchoscope or the external monitor. The Lung Vision system includes a main unit and a tablet, in contrast to the previous PC-based hardware platform. Image processing algorithms are executed on the main unit, and the tablet is used as the primary method of interacting with the system. The Tablet is for planning but is not for diagnostic purposes. Both can perform the following functions: segment previously acquired DICOM 3D CT image data; register DICOM 3D CT image data with the live fluoroscopic X-ray image; and overlay the segmented 3D CT dataset over a live fluoroscopic X-ray image of the same anatomy, obtained on a Fluoroscopic system.
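The submission does not disclose the underlying algorithms, but the registration-and-overlay function it names can be illustrated generically: segmented 3D CT points are moved into C-arm coordinates by a rigid transform (the quantity a 2D/3D registration estimates) and projected onto the live fluoroscopic frame through a pinhole model of the C-arm geometry. The Python/NumPy sketch below is a minimal illustration under those assumptions, not the vendor's implementation; the function names, the transform T, and the intrinsic matrix K are hypothetical.

```python
import numpy as np

def project_points(points_ct, T_ct_to_carm, K):
    """Project segmented 3D CT points (N, 3) into 2D fluoroscopic pixel coordinates."""
    n = points_ct.shape[0]
    homog = np.hstack([points_ct, np.ones((n, 1))])   # (N, 4) homogeneous points
    pts_carm = (T_ct_to_carm @ homog.T)[:3, :]        # rigid move into C-arm space, (3, N)
    proj = K @ pts_carm                               # pinhole projection, (3, N)
    return (proj[:2, :] / proj[2, :]).T               # perspective divide -> (N, 2) pixels

def overlay_points(fluoro_frame, uv, value=255):
    """Burn projected points into a copy of a grayscale fluoro frame (rows x cols)."""
    out = fluoro_frame.copy()
    h, w = out.shape
    for u, v in np.round(uv).astype(int):
        if 0 <= v < h and 0 <= u < w:
            out[v, u] = value                         # u = column, v = row
    return out

# Hypothetical usage: simple intrinsics, anatomy placed 800 mm from the source.
K = np.array([[1000.0, 0.0, 256.0],
              [0.0, 1000.0, 256.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4); T[2, 3] = 800.0
target = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0]])  # two segmented points (mm)
frame = np.zeros((512, 512), dtype=np.uint8)
annotated = overlay_points(frame, project_points(target, T, K))
```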
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text.
Important Note: The provided text is a 510(k) Summary, which is a premarket notification for new medical devices. It primarily focuses on demonstrating substantial equivalence to a predicate device rather than presenting detailed clinical trial results for efficacy or a comprehensive standalone AI performance study. Therefore, some information typically found in a clinical study report (like detailed MRMC results, specific effect sizes, or a large, prospectively collected, expert-adjudicated dataset for de novo AI training) is not present here. The performance testing described is primarily focused on demonstrating that the device functions as intended and safely, matching the predicate.
Acceptance Criteria and Reported Device Performance
The provided text doesn't explicitly state quantitative acceptance criteria in a table format with specific performance metrics (e.g., "accuracy > X%", "sensitivity > Y%"). Instead, the acceptance criteria are implicitly defined by demonstrating substantial equivalence to a predicate device and successful completion of bench testing to ensure the device meets its own "requirement specifications" and "hazards mitigations."
The performance is reported in terms of functional equivalence and safety:
| Acceptance Criteria Category | Reported Device Performance (Summary of Findings) |
|---|---|
| Functional Equivalence | The Lung Vision System performs the same functions as the predicate device (LungVision, K163622). It includes new features (Virtual Bronchoscopy, C-Arm based Tomography, Multi-view set-up, Real-time compensation, 3D Guidance, Tablet use) which are similar in functionality or technological characteristics to either the predicate or the reference device (Covidien superDimension™ Navigation System V7.2, K173244). |
| Performance & Accuracy | Bench testing: "met all requirements specifications" and "was found to be equivalent in comparison to the predicate." Accuracy testing in "deformable tissue" was performed using pig lungs (no specific quantitative metric provided). |
| Safety | Complies with ANSI/AAMI/ES 60601-1:2005(2012) and IEC 60601-1-2:2014 (electrical safety and electromagnetic compatibility). No patient-contacting parts, so no biocompatibility testing was needed. |
| Clinical Efficacy | Not directly assessed; presumed equivalent based on substantial equivalence to the predicate. |
Study Details
- Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: Not explicitly stated as a separate "test set" in the context of typical AI model validation. The "bench tests" included "verification testing of the requirements, testing of hazards mitigations and performance testing of the system." This suggests internal testing against pre-defined specifications rather than a distinct, large, clinical test dataset.
- Data Provenance: Not specified. The document mentions "pig lungs" were used for some accuracy testing in "deformable tissue." This implies ex vivo animal testing. There is no mention of human clinical data for the performance testing cited.
- Retrospective or Prospective: Not explicitly stated for performance testing. Given the "bench tests" and no clinical testing, it's likely internal, controlled testing rather than a large-scale retrospective or prospective patient study.
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- Not applicable/Not stated. The "ground truth" for the performance testing described here (bench tests, pig lungs) relates to the system's technical specifications and physical performance (e.g., registration accuracy on deformable tissue) rather than diagnostic interpretations by human experts.
- Adjudication Method for the Test Set:
- Not applicable/Not stated. Since the "test set" described is for bench performance, there's no mention of human expert adjudication.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No. The document explicitly states: "Clinical: No clinical testing was performed." Therefore, no MRMC study comparing human readers with and without AI assistance was conducted or reported.
- Standalone (Algorithm Only Without Human-in-the-Loop) Performance Study:
- A standalone performance study for the algorithm itself in terms of diagnostic accuracy (e.g., detecting lesions) was not the focus or purpose of this 510(k) submission. The device is a "system, image processing, radiological" intended to support catheter/device navigation by overlaying pre-acquired CT data onto live X-ray. Its performance is evaluated on its ability to accurately segment, register, and overlay images, and perform real-time compensation, not on its ability to autonomously diagnose.
- The "Performance Testing" section states, "We have performed bench tests and found that the Lung Vision met all requirements specifications and was found to be equivalent in comparison to the predicate. Testing includes verification testing of the requirements, testing of hazards mitigations) and performance testing of the system." This implies internal measurements of system performance (e.g., accuracy of registration or compensation) but not a standalone diagnostic outcome.
- Type of Ground Truth Used:
- For the bench testing, the ground truth would have been the engineering specifications of the device's functions (e.g., the expected accuracy of C-Arm based Tomography or real-time compensation).
- For testing in "deformable tissue (pig lungs)," the ground truth would likely be physical measurements or imaging references to assess the system's ability to maintain accuracy in a dynamic environment, rather than a clinical diagnosis or pathology (a minimal sketch of such an accuracy check follows this list).
- Sample Size for the Training Set:
- Not stated. The document doesn't discuss the details of the training data used for any algorithms (e.g., segmentation, registration, real-time compensation). As this is primarily a system modification and not a de novo AI diagnostic device, the specifics of algorithm training or AI model validation datasets are not a required part of this 510(k) summary. The "software upgrades" are described as "an algorithm improvement" or "a standard technology," implying refinements rather than entirely new AI models requiring extensive separate training data disclosure for this type of submission.
- How the Ground Truth for the Training Set Was Established:
- Not applicable/Not stated. As the training set details are not provided, the method for establishing its ground truth is also not mentioned.
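To make the notion of physical-measurement ground truth concrete, the sketch below shows a generic target registration error (TRE) check of the kind such bench testing might define. This is a hypothetical illustration, not taken from the submission: the fiducial coordinates, the function name target_registration_error, and the 2.0 mm threshold are all invented for the example.

```python
import numpy as np

def target_registration_error(predicted_mm, measured_mm):
    """Per-target Euclidean error (mm) between positions reported by the
    navigation system and independently measured reference positions.
    Both inputs are (N, 3) arrays of 3D coordinates in millimetres."""
    return np.linalg.norm(predicted_mm - measured_mm, axis=1)

# Hypothetical bench check: two targets, an invented 2.0 mm acceptance limit.
predicted = np.array([[12.1, 40.3, -5.2], [33.0, 18.7, 2.9]])
measured  = np.array([[11.5, 41.0, -4.8], [32.2, 19.5, 3.4]])
tre = target_registration_error(predicted, measured)
print(f"mean TRE = {tre.mean():.2f} mm, max TRE = {tre.max():.2f} mm")
assert tre.max() <= 2.0, "exceeds the (hypothetical) accuracy specification"
```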
(140 days)
Lung Vision
The Lung Vision System is intended to enable users to segment previously acquired 3D CT datasets and overlay and register these 3D segmented data sets with fluoroscopic live X-ray images of the same anatomy in order to support catheter/device navigation during pulmonary procedures.
The LungVision System is designed to enable users to segment previously acquired 3D CT datasets and overlay and register these 3D segmented data sets with fluoroscopic live X-ray images of the same anatomy in order to support catheter/device navigation during pulmonary procedures.
The Lung Vision System is designed to assist the physician in guiding endobronchial tools towards the target area of interest inside the patient lungs. Prior to the endoscopic procedure the system allows planning the target location and the path to the target area on the CT scan. During the endoscopic procedure the system overlays planned data over fluoroscopic images to support endobronchial tool navigation towards the area of interest. The system does not include the Fluoroscope, Bronchoscope or the external monitor.
Lung Vision image processing algorithms are executed on a PC based hardware platform, which can perform the following functions:
- segment previously acquired DICOM 3D CT image data,
- register DICOM 3D CT image data with live fluoroscopic X-ray image,
- overlay the segmented 3D CT dataset over a live fluoroscopic X-ray image of the same anatomy, obtained on a Fluoroscopic system.
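The submission does not describe how the segmentation itself is performed. Purely as a generic illustration of the first function, the sketch below loads a DICOM CT series with pydicom, converts it to Hounsfield units, and applies a crude air threshold; the directory layout, function names, and the -950 HU threshold are assumptions for the example, not the device's method.

```python
import numpy as np
import pydicom
from pathlib import Path

def load_ct_volume(dicom_dir):
    """Read a DICOM CT series into a 3D Hounsfield-unit volume ordered (z, y, x)."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))   # sort by table position
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    slope = float(slices[0].RescaleSlope)        # stored value -> HU conversion
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept

def segment_air(volume_hu, threshold=-950.0):
    """Crude air/airway mask by HU thresholding (illustration only)."""
    return volume_hu < threshold
```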
The provided document describes the LungVision System, a device intended to enable users to segment previously acquired 3D CT datasets and overlay and register these 3D segmented data sets with fluoroscopic live X-ray images of the same anatomy to support catheter/device navigation during pulmonary procedures.
The 510(k) summary (pages 3-6) focuses on demonstrating substantial equivalence to a predicate device (Philips EP-Navigator K062650) rather than providing detailed acceptance criteria and a study proving the device meets those specific criteria. The document states "No clinical testing was performed." and "No animal testing was performed."
Therefore, based solely on the provided text, a comprehensive answer to your request is not possible. However, I can extract the available information regarding performance and testing:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics. Instead, it describes general claims of equivalence to a predicate device. The "Performance and Specifications" section states: "The performance and specifications demonstrate that the Lung Vision and predicate devices perform the same functions using the same technologies thus can be found substantially equivalent."
The "Nonclinical / Bench" section states: "We have performed bench tests and found that the Body Vision met all requirements specifications and was found to be equivalent in comparison to the predicate. Testing includes verification testing of the requirements, testing of hazards mitigations, and performance testing of the system."
Without the specific "requirements specifications" or direct comparative performance data, it's impossible to generate a table of acceptance criteria and reported device performance.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document mentions "bench tests" and "testing has also been performed with pig lungs to test accuracy in deformable tissue." However, it does not specify the sample size for these tests, the data provenance (e.g., country of origin), or whether the tests were retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not describe the establishment of a ground truth for a test set using experts. Since no clinical or animal testing was performed to evaluate the diagnostic or navigational accuracy against defined ground truth, this information is not available.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as no clinical or animal studies with a test set requiring expert adjudication for ground truth were described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was done. The document explicitly states "No clinical testing was performed."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document mentions "bench tests" and "verification testing of the requirements, testing of hazards mitigations, and performance testing of the system," as well as "testing has also been performed with pig lungs to test accuracy in deformable tissue." These appear to be standalone tests of the device's technical performance attributes. However, specific metrics of "algorithm only" performance (e.g., accuracy of segmentation or registration) are not provided, nor is the "performance" explicitly defined in terms of measurable outcomes. The application is for a navigation aid where a human is in the loop.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "testing has also been performed with pig lungs to test accuracy in deformable tissue," the type of ground truth is not specified. Given it was ex vivo testing on pig lungs, it would likely involve physical measurements or anatomical references, but this is not detailed. For other bench tests, "ground truth" would refer to the expected functional behavior or output as per the requirements specifications, rather than a clinical ground truth like pathology.
8. The sample size for the training set
The document describes the device as a "PC based software application" that utilizes algorithms to process existing 3D CT datasets and live fluoroscopic images. This suggests the system employs algorithms that might have been developed or trained. However, the document does not provide any information regarding a training set, its sample size, or how it was used in the development of the device's algorithms.
9. How the ground truth for the training set was established
Not applicable, as no information about a training set is provided.