K Number
K152352
Date Cleared
2016-01-20

(153 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Vision 2, TrackVision 2 and HeartVision 2 software applications are intended to enable users to load 3D datasets and overlay and register in real time these 3D datasets with radioscopic or radiographic images of the same anatomy in order to support catheter/device guidance during interventional procedures.

The Stereo 3D option enables physicians to visualize needles, points, and segments in a 3D model/space using a stereo reconstruction from radioscopic or radiographic images, at a significantly lower dose than a full cone beam CT acquisition. This information is intended to assist the physician during interventional procedures.

Device Description

Vision Applications (K092639) includes the Vision 2, TrackVision 2, and HeartVision 2 applications. Vision Applications can load and dynamically fuse, in real time, live 2D X-ray images from the X-ray system with 3D models from an X-ray (DICOM 3D XA), CT, or MR system.

Stereo 3D is a new option and the subject of this submission. The Stereo 3D option is designed to be used with the Vision 2 and TrackVision 2 applications which are part of Vision Applications. The Stereo 3D option enables the user to reconstruct 3D objects from radioscopic or radiographic images.

The Stereo 3D option is intended to provide an alternative to intraoperative CBCT (cone beam CT), which is usually performed for the same purpose: to localize needles and markers within the 3D anatomy. The Stereo 3D option provides a method to reconstruct 3D contrasted objects (points and segments) from a pair of 2D X-ray images, e.g. fluoroscopic images acquired from two different C-arm positions (two different projections). The reconstructed 3D object is then fused in 3D space with the 3D model used for the fusion with the x-ray image.
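The document does not disclose the reconstruction math. A common way to recover a 3D point from a pair of calibrated 2D projections is direct linear transform (DLT) triangulation; the sketch below is a generic, hypothetical illustration of that technique (the function name, the pinhole-style 3x4 projection matrices, and the synthetic geometry are all assumptions, not GE's implementation).

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Reconstruct a 3D point from its projections in two X-ray views.

    P1, P2 : 3x4 projection matrices for the two C-arm positions
    uv1, uv2 : (u, v) image coordinates of the same marker in each view

    Builds the homogeneous system implied by x ~ P X (direct linear
    transform) and solves it with an SVD; returns the dehomogenized point.
    """
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]

# Synthetic check: two views related by a 100 mm lateral C-arm shift.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
X_true = np.array([10.0, -5.0, 30.0])
uv1 = (X_true[0] / X_true[2], X_true[1] / X_true[2])
uv2 = ((X_true[0] - 100.0) / X_true[2], X_true[1] / X_true[2])
X_rec = triangulate_point(P1, P2, uv1, uv2)   # recovers [10., -5., 30.]
```

With noise-free correspondences the null vector of A is exact; with real click noise the SVD gives the least-squares solution instead.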

Stereo 3D has a significantly guided workflow, to support clear and easy use of the reconstruction procedure. The workflow contains the following four high-level steps:
1. Image acquisition and registration adjustment
2. Automatic or manual object identification
3. Quality assessment of the 3D reconstruction
4. Display of the reconstructed point(s) and segment(s) on a cross section of the 3D model
The second step (object identification) can be done manually or automatically:

Manual point(s) or segment(s) identification:
After the acquisition and registration of the two x-ray images acquired at two different C-arm positions, the user manually selects points on the two x-ray images that correspond to the object to reconstruct (e.g. endograft markers and needles).

Automatic mode for needles (only with TrackVision 2):
The user first selects a planned trajectory with a needle inserted.
After the acquisition of the two X-ray images and the registration adjustment phase, the needle is automatically detected and reconstructed.
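The workflow's quality-assessment step is not detailed in the document. A standard check for any stereo reconstruction is the reprojection residual: project the reconstructed 3D point back into each view and measure the pixel distance to the user's selected point. The sketch below is a hypothetical illustration of that check (function name and geometry are assumptions).

```python
import numpy as np

def reprojection_error(P, X, uv):
    """Pixel distance between a measured 2D point uv and the projection
    of a reconstructed 3D point X under the 3x4 projection matrix P."""
    x = P @ np.append(X, 1.0)                     # project to homogeneous 2D
    return float(np.linalg.norm(x[:2] / x[2] - np.asarray(uv)))

# A perfect reconstruction reprojects exactly onto the selected point...
P = np.hstack([np.eye(3), np.zeros((3, 1))])
X = np.array([10.0, -5.0, 30.0])
err_exact = reprojection_error(P, X, (10.0 / 30.0, -5.0 / 30.0))      # ~0.0
# ...while a half-pixel selection error shows up directly in the residual.
err_off = reprojection_error(P, X, (10.0 / 30.0 + 0.5, -5.0 / 30.0))  # ~0.5
```

A large residual in either view typically signals a bad correspondence or a registration drift, which is exactly what a quality-assessment step would flag to the user.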

AI/ML Overview

The provided document refers to the "Stereo 3D option for Vision Applications" (K152352). This device is an enhancement to existing GE Vision Applications (Vision 2, TrackVision 2, and HeartVision 2) and aims to reconstruct 3D objects (needles, points, and segments) from 2D X-ray images.

Based on the provided text, the device did not involve a study to establish acceptance criteria for its performance in terms of diagnostic accuracy or reader improvement. Instead, the submission relies on non-clinical tests to demonstrate substantial equivalence to predicate devices and adherence to relevant standards.

Here's a breakdown of the requested information based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly state quantitative acceptance criteria for device performance in terms of accuracy or clinical effectiveness for the Stereo 3D option. Instead, it focuses on demonstrating compliance with standards and successful completion of various engineering and usability tests.

Acceptance Criteria (Implied) / Reported Device Performance

Compliance with NEMA PS 3.1 - 3.20 (2011) DICOM Set:
"The Stereo 3D option for Vision Applications comply with NEMA PS 3.1 - 3.20 (2011) Digital Imaging and Communications in Medicine (DICOM) Set..."

Compliance with IEC 62304 (2006) (Software Lifecycle Processes):
"...and with voluntary standards IEC 62304 (2006) and IEC 62366 (2007)." (Implies successful adherence to software development and risk management processes for medical devices.)

Compliance with IEC 62366 (2007) (Usability):
"Usability validation testing is conducted to confirm that the product can be used safely and effectively." (Reported as completed and successful, with no unexpected results.)

Software Verification (conformance to requirements):
"Product verification ensures the software conforms to its requirements including hazard mitigations risk management requirements. The verification tests confirmed that design output meets design input requirements. The tests were executed at component, software subsystem, and system levels. Functional testing and performance testing are part of system level verification." (Reported as completed and successful.)

Performance Confirmation (bench testing):
"Performance has been confirmed with bench testing." and "Additional bench testing was performed to substantiate Stereo 3D's product claims." and "engineering bench testing was able to be performed using existing phantoms, methods, and performance metrics. The requirements were met and there were not any unexpected results." (Reported as completed and successful, substantiating product claims.)

Simulated Use Testing (conformance to user needs/intended uses):
"Simulated Use Testing ensured the system conforms to user needs and intended uses through simulated clinical workflows using step-by-step procedures that would be performed for representative clinical applications." (Reported as completed and successful, with no unexpected results.)

Hazard Mitigation:
"All causes of hazard relative to the introduction of Stereo 3D option have been identified and mitigated." and "Verification and Validation testing has demonstrated that the design inputs, user requirements, and risk mitigations have been met." (Reported as adequately addressed.)

No new issues of safety and effectiveness:
"The results of design validation did not raise new issues of safety and effectiveness." and "The Stereo 3D Option for Vision Applications does not raise new issues of safety and effectiveness. The Stereo 3D Option for Vision Applications does not introduce new fundamental scientific technology." (Conclusion of the submission, implying this criterion was met through the non-clinical tests and the substantial equivalence argument.)

2. Sample size used for the test set and the data provenance

The document explicitly states: "The Stereo 3D option for Vision Applications did not require clinical studies to assess safety and effectiveness and, thus, to establish the substantial equivalence."

Therefore, there is no mention of a "test set" in the context of clinical data, sample size, or data provenance (country of origin, retrospective/prospective). The assessment was based on non-clinical testing, including "bench testing" and "simulated clinical workflows using step-by-step procedures."

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

Not applicable, as no clinical test set with human expert ground truth was used for assessing the device's performance. The "usability validation testing" involved "licensed and/or clinically trained healthcare providers or users," but this was for confirming usability, not establishing ground truth for reconstructive accuracy.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

Not applicable, as no clinical test set necessitating adjudication was used.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance vs without it

No MRMC comparative effectiveness study was done. The submission explicitly states no clinical studies were required.

6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

The document mentions "bench testing" and "performance testing" to confirm the device's functionality and "substantiate Stereo 3D's product claims." It also notes: "Stereo 3D contains the algorithms used to detect the 2D needles on the image and to reconstruct points, needles and segments in 3D from fluoroscopic images." This implies that the algorithms themselves were tested, which can be seen as a form of standalone testing in a controlled, non-clinical environment (e.g., using phantoms). However, specific metrics of standalone algorithmic performance (e.g., accuracy of 3D reconstruction against synthetic ground truth) or detailed study designs for this have not been provided beyond general statements about "performance being confirmed."

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

For the non-clinical "bench testing" and "performance testing," the ground truth likely involved phantom data with known 3D object positions and measurements. The document states: "engineering bench testing was able to be performed using existing phantoms, methods, and performance metrics."
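The document names no specific metrics for this phantom-based testing. A typical way to score reconstruction accuracy against a phantom whose marker positions are known is the per-marker 3D Euclidean error; the sketch below is a hypothetical illustration of that scoring, not the submission's actual test method (function name and the example coordinates are assumptions).

```python
import numpy as np

def phantom_accuracy(reconstructed, ground_truth):
    """Per-marker 3D Euclidean errors (same units as the inputs, e.g. mm)
    between reconstructed positions and the phantom's known positions."""
    errs = np.linalg.norm(np.asarray(reconstructed, dtype=float)
                          - np.asarray(ground_truth, dtype=float), axis=1)
    return {"mean": float(errs.mean()), "max": float(errs.max())}

# Example: one marker reconstructed exactly, one off by 1 mm along z.
truth = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
recon = [[0.0, 0.0, 1.0], [10.0, 0.0, 0.0]]
stats = phantom_accuracy(recon, truth)   # {'mean': 0.5, 'max': 1.0}
```

Mean and worst-case errors of this kind are the sort of "performance metrics" a phantom bench test would compare against a predefined tolerance.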

8. The sample size for the training set

The document does not mention a "training set" in the context of machine learning or AI models with a large dataset. The "Stereo 3D option" is described as containing "algorithms used to detect the 2D needles... and to reconstruct points, needles and segments in 3D." It is based on "established GE technology" and the 3D reconstruction technology of an earlier predicate device ("Innova 3D"). This suggests that if there was any "training" in a modern AI sense, it happened as part of the development of the underlying algorithms, which are considered "established technology," and no specific training set size or methodology is provided for this submission.

9. How the ground truth for the training set was established

Not applicable, as no "training set" in a modern AI context is described or detailed for this submission. The technology is based on "established GE technology," implying algorithms developed and potentially validated previously, likely using phantom data or engineered models to establish ground truth for calibration and development of reconstruction techniques.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).