510(k) Data Aggregation (144 days)
Philips Ingenuity CT
The Ingenuity CT is a Computed Tomography X-Ray System intended to produce images of the head and body by computer reconstruction of x-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, components and accessories. The Ingenuity CT is indicated for head, whole body, cardiac and vascular X-ray Computed Tomography applications in patients of all ages.
These scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011;365:395-409) and subsequent literature, for further information.
The Philips Ingenuity CT consists of three system configurations: the Philips Ingenuity CT, the Philips Ingenuity Core, and the Philips Ingenuity Core128. These systems are Computed Tomography X-Ray Systems intended to produce cross-sectional images of the body by computer reconstruction of X-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, components, and accessories. These scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*.
The main components of the Philips Ingenuity CT (the detection system, the reconstruction algorithm, and the x-ray system) have the same fundamental design characteristics as those of the predicate device and are based on comparable technologies.
The main system modules and functionalities are:

- Gantry. The Gantry consists of 4 main internal units:
  - a. Stator: a fixed mechanical frame that carries hardware and software.
  - b. Rotor: a rotating circular stiff frame that is mounted in and supported by the Stator.
  - c. X-Ray Tube (XRT) and Generator: fixed to the Rotor frame.
  - d. Data Measurement System (DMS): a detector array, fixed to the Rotor frame.
- Patient Support (Couch): carries the patient in and out through the Gantry bore, synchronized with the scan.
- Console: a two-part subsystem containing a Host computer and display, which is the primary user interface, and the Common Image Reconstruction System (CIRS), a dedicated high-performance image reconstruction computer.
In addition to the above components and the software operating them, each system includes workstation hardware and software for data acquisition, display, manipulation, storage, and filming, as well as for post-processing into views other than the original axial images. Patient supports (positioning aids) are used to position the patient.
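The "views other than the original axial images" mentioned above refers to multiplanar reformatting (MPR), in which coronal and sagittal planes are extracted from the stack of reconstructed axial slices. A minimal sketch of the idea with NumPy follows; the `(z, y, x)` array layout and dimensions are illustrative assumptions, not the scanner's actual data format:

```python
import numpy as np

# Assumed volume layout: (axial slice index, row, column) = (z, y, x).
# A real scan would be loaded from DICOM; zeros suffice to show the geometry.
volume = np.zeros((200, 512, 512), dtype=np.int16)

axial    = volume[100, :, :]   # one original axial slice       -> (y, x)
coronal  = volume[:, 256, :]   # reformatted coronal plane      -> (z, x)
sagittal = volume[:, :, 256]   # reformatted sagittal plane     -> (z, y)

print(axial.shape, coronal.shape, sagittal.shape)
```

In practice the reformatted planes are also resampled so that slice spacing and in-plane pixel spacing match, but the core operation is just this re-slicing of the volume along a different axis.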
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
Important Note: The provided document is a 510(k) submission for a CT scanner (Philips Ingenuity CT), which focuses on demonstrating substantial equivalence to a predicate device (Philips Plus CT Scanner), rather than establishing new performance claims with specific acceptance criteria and clinical trial results typical for entirely novel AI/ML devices. Therefore, much of the requested information, particularly regarding AI-specific performance (like effect size of human reader improvement with AI, standalone AI performance, ground truth for training AI models) is not directly present. The clinical evaluation described is a comparative image quality assessment rather than a diagnostic accuracy clinical trial.
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of this 510(k) submission, the "acceptance criteria" are primarily established against international and FDA-recognized consensus standards for medical electrical equipment and CT systems, and against the performance of the predicate device. The "reported device performance" refers to the successful verification against these standards and equivalence to the predicate.
| Acceptance Criteria Category | Specific Criteria / Standard Met | Reported Device Performance |
|---|---|---|
| Safety and Essential Performance (General) | IEC 60601-1:2006 (Medical electrical equipment - Part 1: General requirements for basic safety and essential performance) | All verification tests were executed and passed the specified requirements. |
| Electromagnetic Compatibility (EMC) | IEC 60601-1-2:2007 (Medical electrical equipment - Part 1-2: General requirements for basic safety and essential performance - Collateral standard: Electromagnetic disturbances - Requirements and tests) | All verification tests were executed and passed the specified requirements. |
| Radiation Protection | IEC 60601-1-3 Ed 2.0:2008 (Medical electrical equipment - Part 1-3: General requirements for basic safety - Collateral standard: Radiation protection in diagnostic X-ray equipment) | All verification tests were executed and passed the specified requirements, including radiation metrics. |
| Usability | IEC 60601-1-6:2010 (Medical electrical equipment - Part 1-6: General requirements for basic safety and essential performance - Collateral standard: Usability) | All verification tests were executed and passed the specified requirements. |
| Safety of X-ray Equipment (Specific) | IEC 60601-2-44:2009 (Medical electrical equipment - Part 2-44: Particular requirements for the safety of X-ray equipment) | All verification tests were executed and passed the specified requirements. |
| Software Life Cycle Processes | IEC 62304:2006 (Medical device software - Software life cycle processes) | Software documentation for a Moderate Level of Concern (per FDA guidance) was included. All verification tests were executed and passed the specified requirements. |
| Risk Management | ISO 14971 Ed. 2.0:2007 (Medical devices - Application of risk management to medical devices) | Traceability between requirements, hazard mitigations, and test protocols is described. Test results per requirement and per hazard mitigation show successful mitigation. |
| Image Quality Metrics (Comparative to Predicate) | CT number accuracy and uniformity, MTF, noise-reduction performance (iDose4 vs. FBP), slice thickness, slice sensitivity profiles; diagnostic image quality for brain, chest, abdomen, and pelvis/orthopedic studies | Bench tests included patient support/gantry positioning repeatability and accuracy, laser alignment accuracy, and CT image quality metrics testing; sample phantom images were provided. The clinical evaluation found no loss of image quality with iDose4 relative to FBP, with iDose4 scoring higher in most cases, maintaining diagnostic quality. |
| Functional and Non-Functional Requirements (System Level) | System Requirements Specification, Subsystem Requirement Specifications, User Interface Verification | Functional and non-functional regression tests, as well as user interface verification, provided in the Traceability Matrix (successful). |
| Clinical Validation (Workflow & Features) | Covered requirements related to clinical workflows and features | Validation test plan executed as planned; acceptance criteria met for each requirement. All validation tests demonstrate safety and effectiveness. |
| Serviceability Validation | Covered requirements related to upgrade, installation, servicing, and troubleshooting | Validation test plan executed as planned; acceptance criteria met for each requirement. |
| Manufacturing Validation | Covered requirements related to operations and manufacturing | Validation test plan executed as planned; acceptance criteria met for each requirement. |
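The bench metrics in the table (CT number accuracy, uniformity, noise) are conventionally measured from regions of interest (ROIs) on a uniform water-phantom scan. The sketch below illustrates one common set of conventions; the ROI sizes, positions, and the synthetic phantom are illustrative assumptions, not values from the submission:

```python
import numpy as np

def roi_stats(image, cy, cx, radius):
    """Mean and standard deviation of a circular ROI in a HU image."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return image[mask].mean(), image[mask].std()

def water_phantom_metrics(image):
    """Illustrative CT number accuracy, uniformity, and noise for a water phantom slice.

    Assumed conventions:
    - accuracy: mean HU of the central ROI (water should read about 0 HU)
    - uniformity: max |peripheral ROI mean - central ROI mean|
    - noise: standard deviation of HU within the central ROI
    """
    h, w = image.shape
    cy, cx, r = h // 2, w // 2, min(h, w) // 20
    center_mean, center_std = roi_stats(image, cy, cx, r)
    offset = min(h, w) // 3
    peripheral_means = [
        roi_stats(image, cy + dy, cx + dx, r)[0]
        for dy, dx in [(-offset, 0), (offset, 0), (0, -offset), (0, offset)]
    ]
    uniformity = max(abs(p - center_mean) for p in peripheral_means)
    return {"accuracy_hu": center_mean, "uniformity_hu": uniformity, "noise_hu": center_std}

# Synthetic "water phantom": 0 HU everywhere plus Gaussian noise (seeded for repeatability).
rng = np.random.default_rng(0)
phantom = rng.normal(loc=0.0, scale=5.0, size=(512, 512))
print(water_phantom_metrics(phantom))
```

For this synthetic phantom the accuracy comes out near 0 HU and the noise near 5 HU, as expected from the simulated noise level; a real acceptance test would compare such values against the manufacturer's specified tolerances.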
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a distinct "test set" sample size in the sense of a number of clinical cases or patient images used for a diagnostic accuracy study. Instead, it refers to:
- Bench tests: These involved phantom images and physical testing of the system (e.g., patient support/gantry positioning repeatability and accuracy, laser alignment accuracy, CT image quality metrics testing). No sample size for these is given.
- Clinical Evaluation: An "image evaluation" was performed involving "images of the brain, chest, abdomen and pelvis/peripheral orthopedic body areas." The number of images or patient cases used for this evaluation is not specified.
- Data Provenance: Not explicitly stated; as a Philips product, the data were likely internal development and validation data. There is no mention of external datasets or specific countries of origin. Because the evaluation compares FBP and iDose4 reconstructions of the same acquisitions, it is effectively a retrospective, paired comparison.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: One. The document identifies only "a qualified radiologist."
- Qualifications of Experts: Described only as "a qualified radiologist"; no years of experience or subspecialty are stated.
4. Adjudication Method for the Test Set
The evaluation was performed by a single radiologist using a 5-point Likert scale. Therefore, no adjudication method (like 2+1, 3+1 consensus) was used as there was only one reviewer.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not done. The document describes an image evaluation by a single radiologist, not multiple readers. It also describes a comparison of image quality between reconstruction techniques (FBP vs. iDose4), not a comparison of human reader diagnostic performance with vs. without AI assistance.
- Effect size of human readers improving with AI vs without AI assistance: This information is not applicable as this type of study was not performed. The study evaluated if iDose4-reconstructed images (which is an iterative reconstruction technique for image quality improvement and dose reduction, not an AI for diagnosis) maintained diagnostic quality compared to standard FBP.
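A single-reader evaluation like the one described (the same cases scored on a 5-point Likert scale for FBP and for iDose4 reconstructions) is naturally analyzed as paired ordinal data. The document does not state what statistical analysis, if any, was performed; as a hypothetical illustration, an exact two-sided sign test on invented paired scores could be sketched as follows:

```python
from math import comb

def sign_test(pairs):
    """Exact two-sided sign test on paired scores; ties (a == b) are dropped.

    Under H0, improvements and declines are equally likely, so the number of
    improvements follows Binomial(n, 0.5) among the n non-tied pairs.
    """
    pos = sum(1 for a, b in pairs if b > a)   # second reconstruction scored higher
    neg = sum(1 for a, b in pairs if b < a)   # second reconstruction scored lower
    n = pos + neg
    if n == 0:
        return 1.0
    k = min(pos, neg)
    # Two-sided p-value: P(X <= k) + P(X >= n - k) under Binomial(n, 0.5);
    # clipped at 1.0 in the balanced case where the middle term is counted twice.
    p = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    p += sum(comb(n, i) for i in range(n - k, n + 1)) / 2 ** n
    return min(p, 1.0)

# Invented Likert scores (1-5) for twelve cases, purely for illustration.
fbp    = [3, 4, 3, 4, 3, 3, 4, 3, 4, 3, 3, 4]
idose4 = [4, 4, 4, 5, 3, 4, 4, 4, 4, 3, 4, 4]
print(sign_test(list(zip(fbp, idose4))))  # → 0.03125
```

A Wilcoxon signed-rank test would be a common alternative for paired Likert data; with a single reader and no stated sample size, neither analysis can actually be reproduced from the document.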
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done
In a limited sense, yes: the primary evaluation concerns the algorithm's output quality. The iDose4 iterative reconstruction algorithm produces images without human intervention, and those images were then evaluated by a radiologist. The core of this 510(k) is the technical performance and safety of the CT scanner and its components, including its reconstruction algorithms. The image evaluation is therefore a standalone assessment of the image quality produced by the iDose4 algorithm relative to FBP, not the standalone performance of an AI diagnostic algorithm.
7. The Type of Ground Truth Used
For the clinical image evaluation, the "ground truth" was established by the evaluation of a qualified radiologist using a 5-point Likert scale to determine if images were of "diagnostic quality" and for comparing image quality between reconstruction methods. This could be considered a form of "expert consensus," albeit from a single expert in this case. There is no mention of pathology or outcomes data being used as ground truth for this specific image quality assessment.
8. The Sample Size for the Training Set
Not applicable in the context of this 510(k) as presented.
The device (Philips Ingenuity CT) is a hardware CT scanner with associated software, including image reconstruction algorithms (like iDose4). While iterative reconstruction algorithms might involve some form of "training" or optimization during their development, the document does not speak to a "training set" in the sense of a dataset used to train a machine learning model for a specific diagnostic task that would typically be described in an AI/ML device submission. The description focuses on technical modifications and adherence to engineering and safety standards, and performance against a predicate device.
9. How the Ground Truth for the Training Set was Established
Not applicable for the reasons stated above (no "training set" for an AI/ML diagnostic model described).