510(k) Data Aggregation
(64 days)
The Digital Fluoroscopic Imaging System is indicated for use in generating fluoroscopic images of human anatomy for diagnostic and interventional angiography procedures. It is intended to replace fluoroscopic images obtained through image intensifier technology. This device is not intended for mammography applications.
The Digital Fluoroscopic Imaging System is indicated for use in diagnostic and interventional angiographic procedures of human anatomy. It is intended to replace image intensifier fluoroscopic systems in all diagnostic or interventional procedures. This device is not intended for mammography applications.
The Digital Fluoroscopic Imaging System is designed to perform fluoroscopic x-ray examinations. The detector comprises amorphous silicon with a cesium iodide scintillator. The resulting digital image can be sent through a Fiber Channel link to acquisition equipment and then to a network (using DICOM) for applications such as post-processing, printing, viewing, and archiving. The Digital Fluoroscopic Imaging System consists of an angiographic monoplane positioner, a vascular table, an x-ray system, and a digital detector.
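The acquisition chain described above (detector readout, transfer to the acquisition host, DICOM export to a network) can be sketched as a simple pipeline. Every function and field name below is a hypothetical illustration, not the vendor's actual software interface; only the DICOM modality code "XA" (x-ray angiography) comes from the standard.

```python
# Hypothetical sketch of the acquisition chain: detector readout ->
# acquisition host -> DICOM-style export. None of these names come from
# the actual device software.

def read_detector(rows=4, cols=4):
    """Simulate a raw readout from the amorphous-silicon/CsI detector."""
    return [[(r * cols + c) % 256 for c in range(cols)] for r in range(rows)]

def to_dicom_like(pixels, modality="XA"):
    """Wrap pixel data in a minimal DICOM-like metadata dict.

    "XA" is the DICOM modality code for x-ray angiography."""
    return {
        "Modality": modality,
        "Rows": len(pixels),
        "Columns": len(pixels[0]),
        "PixelData": pixels,
    }

def send_to_network(dataset, archive):
    """Stand-in for a DICOM C-STORE push to a PACS archive."""
    archive.append(dataset)
    return len(archive)

archive = []
image = to_dicom_like(read_detector())
send_to_network(image, archive)
print(archive[0]["Modality"], archive[0]["Rows"], archive[0]["Columns"])  # XA 4 4
```

In a real system the `send_to_network` step would be a DICOM storage request over the network rather than an in-memory append.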
The GE Medical Systems Digital Fluoroscopic Imaging System (Innova 4100) was studied for its diagnostic capabilities compared to a predicate device, the Innova 2000. The primary goal was to demonstrate equivalent image diagnostic capability.
Here's a breakdown of the acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Explicit or Implied) | Reported Device Performance |
|---|---|
| Equivalent image diagnostic capability | "found that the digital images from the Innova 4100 had equivalent image diagnostic capability." |
2. Sample Size and Data Provenance
- Test Set Sample Size: 11 pairs of patient sequences. This small sample is consistent with a comparative image-quality reading study rather than a large clinical trial.
- Data Provenance: The study was conducted at three hospitals:
- Saint-Luke's Hospital (Bethlehem, Pennsylvania - US)
- Saint-Francis Hospital (Peoria, Illinois - US)
- Centre Paris Nord (Sarcelles - France)
This indicates a mix of retrospective and prospective data, given the comparison of existing Innova 2000 images with new Innova 4100 images. The involvement of different hospitals across different countries suggests a more diverse dataset than if it were confined to a single institution or country.
3. Number of Experts and Qualifications
- Number of Experts: 6 radiologists.
- Qualifications: Not explicitly stated beyond "radiologists." It's implied they are qualified to interpret angiographic images for diagnostic purposes.
4. Adjudication Method
- The document describes the radiologists "compared digital images" and "found that the digital images from the Innova 4100 had equivalent image diagnostic capability." This phrasing suggests a consensus or majority opinion among the radiologists, but a specific adjudication method (e.g., 2+1, 3+1) is not detailed.
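Although the filing does not specify an adjudication scheme, a simple majority-vote consensus over paired readings can be sketched as follows. The reader ratings below are invented purely for illustration; the actual study data are not disclosed in this summary.

```python
# Hypothetical majority-vote consensus: 6 radiologists each rate whether
# an Innova 4100 sequence is diagnostically equivalent to its paired
# Innova 2000 sequence (True/False). All data below are invented.

def majority_consensus(ratings):
    """Return True when strictly more than half of the readers agree."""
    return sum(ratings) > len(ratings) / 2

# 11 case pairs x 6 readers, invented values: 10 unanimous pairs plus
# one pair where 4 of 6 readers judged the images equivalent.
case_ratings = [[True] * 6 for _ in range(10)] + [
    [True, True, True, True, False, False]
]

consensus = [majority_consensus(r) for r in case_ratings]
print(sum(consensus), "of", len(consensus), "pairs judged equivalent")  # 11 of 11
```

A real adjudication protocol (e.g., 2+1 or 3+1 tie-breaking) would add a designated adjudicator for split reads rather than a plain majority rule.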
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, a clinical comparison study was done with 6 radiologists evaluating images.
- Effect Size of Human Readers with vs. without AI: Not applicable. This study does not involve AI assistance to human readers. It's a comparison of two different fluoroscopic imaging systems (Innova 2000 vs. Innova 4100) and their inherent diagnostic image quality.
6. Standalone Performance (Algorithm Only)
- Not applicable. This is a medical device clearance for an imaging system, not an artificial intelligence algorithm. The device itself is the entire system outputting images.
7. Type of Ground Truth Used
- Expert Consensus/Clinical Agreement: The "gold standard" for determining equivalent diagnostic capability was the collective evaluation and agreement of the 6 radiologists. There's no mention of pathology or long-term outcomes data being used as ground truth for this particular comparison.
8. Sample Size for the Training Set
- Not applicable. This device is a digital fluoroscopic imaging system, not an AI algorithm that requires a distinct training set. The "training" for the device would be its engineering and design, informed by established medical imaging principles and prior device iterations (like the Innova 2000).
9. How Ground Truth for the Training Set was Established
- Not applicable, as there is no specific "training set" in the context of an AI algorithm. The device's fundamental design is based on known physics, engineering principles, and clinical requirements for fluoroscopic imaging.
(72 days)
The ImageChecker-CT is indicated for use as a general imaging workstation, and is intended to be used to acquire, store, transmit and display images from medical scanning devices.
Specific indications for use for the ImageChecker-CT Workstation are the display of a composite view of 2D cross-sections, and 3D volumes of chest CT images, including findings or regions of interest ("ROI") identified by the radiologist or Computer Assisted Detection ("CAD") findings.
The general indications for use of the ImageChecker-CT Workstation are as a general imaging workstation to assist radiologists in reviewing digital computed Tomography (CT) images of the chest.
Specific indications for use for the ImageChecker-CT Workstation are the display of a composite view of 2D cross-sections, and 3D volumes of chest CT images, including findings or regions of interest ("ROI") identified by the radiologist or Computer Assisted Detection ("CAD") findings.
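As a rough illustration of what displaying "2D cross-sections" of a 3D CT volume involves, the sketch below extracts axial, coronal, and sagittal slices from a volume stored as nested lists. This is a generic illustration of orthogonal reformatting, not the ImageChecker-CT's actual implementation.

```python
# Generic sketch: extract orthogonal 2D cross-sections from a 3D volume
# indexed as volume[z][y][x]. Not the ImageChecker-CT's actual code.

def axial(volume, z):
    """Slice perpendicular to the z (head-foot) axis."""
    return volume[z]

def coronal(volume, y):
    """Slice perpendicular to the y (front-back) axis."""
    return [plane[y] for plane in volume]

def sagittal(volume, x):
    """Slice perpendicular to the x (left-right) axis."""
    return [[row[x] for row in plane] for plane in volume]

# A tiny 2x2x2 volume with distinct voxel values.
vol = [[[1, 2], [3, 4]],
       [[5, 6], [7, 8]]]

print(axial(vol, 0))     # [[1, 2], [3, 4]]
print(coronal(vol, 1))   # [[3, 4], [7, 8]]
print(sagittal(vol, 0))  # [[1, 3], [5, 7]]
```

A workstation would render such slices side by side with a volume rendering to form the "composite view" the indications describe, with ROI overlays drawn on top.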
The ImageChecker-CT System is a combination of dedicated computer software and hardware. The System uses an off-the-shelf personal computer with Windows- and Linux-based CPUs, a hard drive, and a single monitor.
The provided document, K023003 for the ImageChecker-CT Workstation, does not contain information about acceptance criteria or a study proving the device meets specific performance criteria.
Instead, the document primarily focuses on establishing substantial equivalence to predicate devices. This means that instead of presenting a stand-alone performance study with acceptance criteria, the manufacturer is arguing that their device is as safe and effective as other legally marketed devices with similar intended use and technological characteristics.
Therefore, many of the requested details cannot be extracted from this specific 510(k) summary. I can only provide information directly mentioned or inferable from the document regarding the type of evaluation conducted.
Here's a breakdown of the requested information based on the provided text:
1. Table of acceptance criteria and the reported device performance
The document does not specify performance acceptance criteria or report device performance metrics in the way a clinical performance study would for a new device. The "Studies" section states, "The ImageChecker-CT Workstation will undergo design verification tests for conformance with specifications." This implies internal testing against design specifications, not necessarily clinical performance metrics.
2. Sample size used for the test set and the data provenance
Not applicable. The document describes a comparison to predicate devices for substantial equivalence, not a performance study with a distinct test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. The document does not describe a performance study involving ground truth establishment by experts.
4. Adjudication method for the test set
Not applicable. No test set or expert adjudication is described for a performance study.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance
No. The document makes no mention of an MRMC study or any assessment of human reader improvement with AI assistance. The device is described as a "general imaging workstation" that can display CAD findings, but its effectiveness with CAD is not studied here.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
No. The document describes a workstation for displaying images and CAD findings, but it does not report on the standalone performance of any algorithm. The 510(k) focuses on the workstation itself, not the performance of a specific CAD algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not applicable. No performance study with ground truth is described.
8. The sample size for the training set
Not applicable. The document does not describe the development or training of an algorithm.
9. How the ground truth for the training set was established
Not applicable. The document does not describe the development or training of an algorithm.