Search Results
Found 8 results
510(k) Data Aggregation
(202 days)
The AutoChamber software is an opportunistic AI-powered quantitative imaging tool that measures and reports cardiac chamber volumes, comprising left atrium (LA), left ventricle (LV), right atrium (RA), right ventricle (RV), and left ventricular wall (LVW), from non-contrast chest CT scans, including coronary artery calcium (CAC) scans and lung CT scans. AutoChamber is not intended to rule out the risk of cardiovascular disease, and the results should not be used for any purpose other than to enable physicians to investigate patients in whom AutoChamber shows signs of an enlarged heart (cardiomegaly), enlarged cardiac chambers, or left ventricular hypertrophy (LVH), conditions that are otherwise missed by the human eye in non-contrast chest CT scans. AutoChamber similarly measures and reports LA, LV, RA, RV, and LVW in contrast-enhanced coronary CT angiography (CCTA) scans. Additionally, AutoChamber measures and reports the cardiothoracic ratio (CTR) in both contrast and non-contrast CT scans where the entire thoracic cavity is in the axial field of view. AutoChamber quantitative imaging measurements are adjusted by body surface area (BSA) and are reported both in cubic centimeter volume (cc) and as percentiles by gender, using reference data from 5830 people who participated in the Multi-Ethnic Study of Atherosclerosis (MESA). AutoChamber should not be ordered as a standalone CT scan but instead should be used as an opportunistic add-on to existing and new CT scans of the chest, such as CAC and lung CT scans, as well as CCTA scans.
Using AutoChamber quantitative imaging measurements and their clinical evaluation, healthcare providers can investigate asymptomatic patients who are unaware of their risk of heart failure, atrial fibrillation, stroke, and other life-threatening conditions associated with enlarged cardiac chambers and LVH that may warrant additional risk assessment or follow-up. AutoChamber quantitative imaging measurements are to be reviewed by radiologists or other medical professionals and should only be used by healthcare providers in conjunction with clinical evaluation.
The AutoChamber Software is an opportunistic AI-powered quantitative imaging tool that provides an estimate of cardiac volume, cardiac chambers volumes and left ventricular (LV) mass from non-contrast chest CT scans as well as contrast-enhanced chest CT scans. In addition to cardiac chambers volume and LV mass, AutoChamber measures and reports cardiothoracic ratio (CTR).
AutoChamber Software reads a CT scan (in DICOM format) and extracts scan specific information like acquisition time, pixel size and scanner type. The AutoChamber Software uses an AI trained model to identify cardiac chambers in the field of view and measure the volume of each chamber including left atrium (LA), left ventricle (LV), right atrium (RA), right ventricle (RV), and LV wall (LVW). AutoChamber calculates the volume of each chamber as well as the corresponding total volume of all cardiac chambers and, if the field of view contains the entire width of the thoracic cavity in the axial view, it calculates and reports cardiothoracic ratio (CTR).
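The CTR described here has a simple geometric definition: the widest transverse extent of the cardiac silhouette divided by the widest internal extent of the thoracic cavity on the axial view. A minimal sketch, assuming binary segmentation masks represented as nested lists; the function names and toy masks are illustrative, not AutoChamber's API:

```python
def max_width(mask):
    """Widest horizontal extent (in pixels) of a binary mask on an axial slice."""
    widest = 0
    for row in mask:
        cols = [i for i, v in enumerate(row) if v]
        if cols:
            widest = max(widest, cols[-1] - cols[0] + 1)
    return widest

def cardiothoracic_ratio(cardiac_mask, thoracic_mask):
    """CTR = widest cardiac extent / widest inner thoracic extent.
    Pixel spacing cancels when both masks come from the same slice."""
    return max_width(cardiac_mask) / max_width(thoracic_mask)

# Toy single-slice example: heart spans 4 columns, thorax spans 8 -> CTR 0.5
thorax = [[1] * 8 for _ in range(3)]
heart = [[0, 0, 1, 1, 1, 1, 0, 0] for _ in range(3)]
print(cardiothoracic_ratio(heart, thorax))  # 0.5
```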
AutoChamber calculates the volume of each chamber as the volume of each pixel multiplied by the number of pixels in the region of interest per slice, multiplied by the number of slices included in each chamber's segmentation. The total volume per chamber is reported in cubic centimeters (CC). In addition to reporting the measured volume in CC per chamber, the report shows volumes adjusted by body surface area (BSA) and corresponding percentiles using reference data from 5830 people who participated in the Multi-Ethnic Study of Atherosclerosis (MESA). The default cut-off value for further investigations is the 75th percentile, but it is optional and subject to the provider's judgement.
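The arithmetic described above (voxel volume times segmented voxel count, BSA indexing, percentile lookup against a reference distribution) can be sketched as follows. This is an illustrative reconstruction, not the vendor's implementation; all names and the toy numbers are assumptions:

```python
from bisect import bisect_left

def chamber_volume_cc(voxel_counts_per_slice, pixel_spacing_mm, slice_thickness_mm):
    """Volume = per-voxel volume x segmented voxel count, summed over slices.
    pixel_spacing_mm is the (row, col) spacing from the DICOM header."""
    voxel_cc = (pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm) / 1000.0
    return voxel_cc * sum(voxel_counts_per_slice)

def bsa_indexed(volume_cc, bsa_m2):
    """Body-surface-area-adjusted volume (cc/m^2)."""
    return volume_cc / bsa_m2

def percentile_rank(value, sorted_reference):
    """Percentile of `value` against a sorted same-sex reference distribution."""
    return 100.0 * bisect_left(sorted_reference, value) / len(sorted_reference)

# Toy numbers: 0.7 x 0.7 mm pixels, 3 mm slices, three slices of one chamber
vol = chamber_volume_cc([900, 1100, 1000], (0.7, 0.7), 3.0)
print(round(vol, 2))                              # 4.41 (cc)
print(percentile_rank(35.0, [10, 20, 30, 40, 50]))  # 60.0
```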
AutoChamber does not provide a numerical individualized risk score/prediction or categorical assessment of whether an individual patient will develop cardiovascular disease over a specified period; it reports only percentile(s).
AutoChamber is a post-processing quantitative imaging software that works on existing and new CT scans. The AutoChamber Software is a software module installed by trained personnel only. The AutoChamber Software is executed via a parent software which provides the necessary input and visualizes the output data. The software itself does not offer user controls or access. The user cannot change or edit the segmentation or results of the device. The user must accept or reject the region where the cardiac chamber volume measurement is done. If rejected, the user must retry with a new series of images or conduct an alternate method to measure cardiac chamber volume. The expert's review solely pertains to the region of interest being properly located.
Software passes if the healthcare provider sees the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are correctly placed on the cardiac region based upon expert knowledge. Software fails if the healthcare provider sees the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are incorrectly placed outside of the cardiac anatomy. Software fails if the healthcare provider sees that the quality of the CT scan is compromised by image artifacts, motion, or excessive noise.
Based on the provided text, here's a description of the acceptance criteria and the study proving the device meets them:
Acceptance Criteria and Device Performance
The document does not explicitly state a table of quantitative acceptance criteria for the performance of the AutoChamber software (e.g., a specific mean absolute error for volume measurements or a target F1-score for segmentation). Instead, the software validation section states: "Software Verification and Validation testing was completed to demonstrate the safety and effectiveness of the device. Testing demonstrates the AutoChamber Software meets all its functional requirements and performance specifications."
The closest the document comes to defining acceptance criteria is in the "Principles of Ops" section, describing conditions for software pass/fail from a user's perspective:
- Software passes if the healthcare provider sees the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are correctly placed on the cardiac region based upon expert knowledge.
- Software fails if the healthcare provider sees the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are incorrectly placed outside of the cardiac anatomy.
- Software fails if the healthcare provider sees that the quality of the CT scan is compromised by image artifacts, motion, or excessive noise.
- The only user interaction is to accept or reject the region where the cardiac chamber volume measurement is done, with rejection leading to a retry or alternate method. "The expert's review solely pertains to the region of interest being properly located."
Given this, the qualitative acceptance criteria appear to be centered on the correct anatomical localization of the measured cardiac chambers by the AI, as confirmed by expert review.
Reported Device Performance:
The document does not provide specific metrics (e.g., mean absolute error, Dice coefficient, accuracy, sensitivity, specificity) for the performance of the AutoChamber software against its ground truth. It only states that "AutoChamber results were compared with measurements previously made by cardiac MRI" and other CT scans. Therefore, a table of acceptance criteria vs. reported device performance cannot be fully constructed from the provided text, as the specific performance outcomes are not detailed, nor are the quantitative acceptance thresholds.
Study Details:
The clinical validation of the AutoChamber software was based on retrospective analyses.
Sample sizes used for the test set and data provenance:
- Study 1: 5003 cases where AutoChamber results from non-contrast cardiac CT scans were compared with measurements previously made by cardiac MRI.
- Study 2: 1433 patients with paired non-contrast and contrast-enhanced cardiac CT scans.
- Study 3: 171 patients who underwent both ECG-gated cardiac CT scan and non-gated full chest lung scan.
- Study 4: 131 cases where AutoChamber results were compared directly with a Reference device (K060937).
- Data Provenance: The reference data for percentiles is from 5830 people who participated in the Multi-Ethnic Study of Atherosclerosis (MESA). The specific country of origin for the test set data (the 5003, 1433, 171, and 131 cases) is not explicitly stated, but MESA is a US-based study. All studies were retrospective analyses of existing databases.
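The document does not state which statistical method was used for these paired comparisons against cardiac MRI and other CT scans. A common choice for method-comparison studies of this kind is a Bland-Altman agreement analysis; the sketch below uses hypothetical paired volumes, not data from the submission:

```python
from math import sqrt

def agreement_stats(method_a, method_b):
    """Bland-Altman-style agreement between paired measurements:
    mean bias and approximate 95% limits of agreement (bias +/- 1.96*SD)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired LV volumes (cc): CT-derived vs. MRI-derived
ct = [120.0, 95.0, 150.0, 110.0]
mri = [118.0, 97.0, 148.0, 109.0]
bias, lower, upper = agreement_stats(ct, mri)
print(round(bias, 2))  # 0.75
```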
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document implies that "expert knowledge" is used to confirm the correct placement of cardiac chamber volumes and LV mass. However, it does not specify the number of experts, their qualifications (e.g., specific board certifications, years of experience), or the process by which they established ground truth for the volumes themselves (e.g., manual segmentation by experts, or if the "cardiac MRI" measurements served as the primary ground truth, and if so, how those were established).
Adjudication method for the test set:
- The document does not specify a formal adjudication method (e.g., 2+1, 3+1 consensus) for the expert review or the establishment of ground truth for the test set. It mentions "The expert's review solely pertains to the region of interest being properly located," implying individual expert qualitative assessment of the AI's output, rather than a multi-reader consensus process for establishing the ground truth values themselves.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure how much human readers improve with AI vs. without AI assistance is not described. The document states that the AI measurements were compared against existing data (e.g., MRI measurements, other CT scans). The AI is presented as a "post-processing quantitative imaging software" that helps physicians investigate patients and is to be reviewed by radiologists or medical professionals. This implies an assistive role, but a formal MRMC study demonstrating enhancement of human reader performance is not mentioned.
If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, the clinical validation involved comparing the AutoChamber software's measurements directly with other established measurement methods (cardiac MRI, other CT scans, and a reference device). This indicates a standalone performance evaluation of the algorithm's output against a reference. The "Principles of Ops" section states, "The user cannot change or edit the segmentation or results of the device. The user must accept or reject the region where the cardiac chamber volume measurement is done." This suggests the algorithm performs autonomously, and its output is then presented for acceptance/rejection based on anatomical placement.
The type of ground truth used:
- The primary ground truth appears to be measurements previously made by cardiac MRI in one key study (5003 cases), and measurements from other CT scans or a cleared reference device (K060937) in other studies. The document does not explicitly state that these "measurements" were derived from pathology or clinical outcomes data, but rather from other imaging modalities considered reference standards (MRI) or other devices. The qualitative "expert knowledge" mentioned for passing/failing the software seems to be about the anatomical correctness of the AI's segmentation/placement rather than the true quantitative values themselves.
The sample size for the training set:
- The sample size for the training set is not specified in the provided text. It only mentions that the AutoChamber Software uses an "AI trained model."
How the ground truth for the training set was established:
- The method for establishing ground truth for the training set is not specified in the provided text.
(169 days)
The Lung Nodule Assessment and Comparison Option is intended for use as a diagnostic patient-imaging tool. It is intended for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include diameter, volume and volume over time. The system automatically performs the measurements, allowing lung nodules and measurements to be displayed.
The Lung Nodule Assessment and Comparison Option application is intended for use as a diagnostic patient-imaging tool. It is intended for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies. The system automatically performs the measurements, allowing lung nodules and measurements to be displayed. The user interface and automated tools help to determine growth patterns and compose comparative reviews. The Lung Nodule Assessment and Comparison Option application requires the user to identify a nodule and to determine the type of nodule in order to use the appropriate characterization tool. Lung Nodule Assessment and Comparison Option may be utilized in both diagnostic and screening evaluations supporting Low Dose CT Lung Cancer Screening*.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
Device Name: Lung Nodule Assessment and Comparison Option (LNA)
1. Table of Acceptance Criteria and Reported Device Performance
The provided 510(k) summary does not explicitly list quantified acceptance criteria with numerical targets. Instead, it indicates that the device was tested against its defined functional requirements and performance claims, and that it "meets the acceptance criteria and is adequate for its intended use and specifications." The "acceptance criteria" are implied by the verification and validation tests performed to ensure the device's design meets user needs and intended use, and that its technological characteristics claims are met.
However, based on the description of the device's capabilities, we can infer some key performance areas that would have been subject to acceptance criteria:
Acceptance Criteria (Inferred from features and V&V activities) | Reported Device Performance |
---|---|
Accuracy of Lung and Lobe Segmentation | Validation activities assure that the lung and lobe segmentation are adequate from an overall product perspective. |
Accuracy of Nodule Segmentation (Single-click and Manual Editing) | Verified and validated as part of the overall design and functionality. |
Accuracy of Nodule Measurements (Diameter, Volume, Mean HU) | Automatic software calculation of these measurements is a key feature, and the device was tested to meet its defined functionality requirements and performance claims. Manual editing with automatic recalculation is also validated. |
Functionality and Accuracy of Comparison and Matching for Temporal Studies | Validation activities assure that the comparison, as well as the nodule matching and propagation functionality, are adequate from an overall product perspective. Automatic calculations of doubling time and percent/absolute changes in measurements were tested. |
Functionality of Lung-RADS™ Reporting | Validation activities assure the Prefill functionality for the Lung RADS score is adequate. |
Accuracy and Functionality of Risk Calculator Tool | The risk prediction functionality was validated. The LNA's risk calculator is based on the McWilliams et al. (2013) model, which showed excellent discrimination and calibration (AUC > 0.90), and its performance was validated. |
Usability of the Software | A usability study was conducted according to standards. |
Compliance with Relevant Standards and Guidance Documents | Complies with ISO 14971, IEC 62304, IEC 62366-1, and FDA guidance for software in medical devices. |
Overall functionality and performance of the clinical workflow | Each test case was evaluated for the complete clinical workflow in a validation study using real recorded clinical data. |
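The doubling-time calculation referenced in the comparison row has a standard closed form: assuming exponential growth, VDT = Δt · ln 2 / ln(V₂/V₁). A short sketch of that formula and the percent-change calculation (illustrative, not the vendor's code):

```python
from math import log

def volume_doubling_time(v1, v2, days_between):
    """Nodule volume doubling time (days): VDT = dt * ln(2) / ln(V2/V1).
    Only meaningful for growing nodules (v2 > v1 > 0)."""
    return days_between * log(2) / log(v2 / v1)

def percent_change(v1, v2):
    """Percent change in volume between two time points."""
    return 100.0 * (v2 - v1) / v1

# A nodule growing from 100 to 200 mm^3 in 90 days doubles in exactly 90 days
print(volume_doubling_time(100.0, 200.0, 90.0))  # 90.0
```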
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not specify a numerical sample size for the internal validation studies conducted by Philips for the LNA application. It states that the LNA application was validated "using real recorded clinical data cases in order to simulate the actual use of the software."
- Data Provenance for Philips' Internal Tests: The text implicitly suggests the data was retrospective, as it refers to "real recorded clinical data cases." The country of origin for these internal test cases is not specified.
- Data Provenance for the Risk Calculator (McWilliams et al. study):
- Development Data Set: Participants from the Pan-Canadian Early Detection of Lung Cancer Study (PanCan).
- Validation Data Set: Participants from chemoprevention trials at the British Columbia Cancer Agency (BCCA), sponsored by the U.S. National Cancer Institute.
- This indicates the data was from Canada (PanCan, BCCA in British Columbia) and supported by the U.S. National Cancer Institute. Both were prospective population-based studies.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts or their qualifications for establishing ground truth specifically for Philips' internal V&V test set. It mentions the LNA application was validated to address "user needs" and simulate "actual use of the software," which implies expert input, but no details are provided.
For the Risk Calculator, the ground truth for malignancy in the McWilliams et al. study was established through tracking the final outcomes of all detected nodules. This likely involved pathology reports and clinical follow-up, adjudicated by clinical experts, but the exact number and qualifications of these experts are not detailed in this summary.
4. Adjudication Method for the Test Set
The document does not describe a specific adjudication method (e.g., 2+1, 3+1) for Philips' internal V&V test set. The validation process involved evaluating each test case for the complete clinical workflow and ensuring the design meets user needs, which might involve expert review, but the formal adjudication protocol is not elaborated upon in this summary.
For the Risk Calculator's underlying study (McWilliams et al.), the "final outcomes of all nodules" suggests a definitive ground truth based on pathology or long-term clinical stability/progression, but the adjudication method for these biological outcomes is not specified within this document.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
The document does not report an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The studies described focus on the standalone performance and validation of the LNA application's features and the underlying model for the risk calculator.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, standalone performance was evaluated for various features of the LNA application:
- The automatic segmentation capabilities (lungs, lobes, nodules) were validated to be "adequate."
- The automatic measurement calculations (diameters, volume, mean HU) were tested to comply with "defined functionality requirements and performance claims."
- The comparison and matching functionality and "Prefill functionality for the Lung RADS score and the risk prediction" were assured to be "adequate."
- The Risk Calculator tool itself (based on McWilliams et al.) demonstrated standalone predictive performance with "excellent discrimination and calibration, with areas under the receiver-operating-characteristic curve of more than 0.90." This indicates strong standalone performance of the algorithm in predicting malignancy.
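The AUC figure cited from McWilliams et al. can be computed from predicted scores and outcome labels via the rank-sum (Mann-Whitney U) identity: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as one half. A self-contained sketch using hypothetical malignancy scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity:
    P(random positive outranks random negative), ties counted as 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical scores: perfect separation of malignant (1) from benign (0)
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # 1.0
```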
7. The Type of Ground Truth Used
- For Philips' Internal V&V: The ground truth appears to be based on "real recorded clinical data cases," implying clinical diagnoses, measurements, and potentially pathology results where applicable, as evaluated against the software's specified functionality and user needs. The specific hierarchy or gold standard used for each feature's ground truth (e.g., expert consensus for segmentation, pathology for nodule type) is not explicitly detailed.
- For the Risk Calculator (McWilliams et al. study): The ground truth for malignancy was established by tracking "the final outcomes of all nodules," which would primarily be pathology results for cancerous nodules and long-term clinical outcome data (stability or benign diagnosis) for non-cancerous ones.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set used for the LNA application's algorithms, including the segmentation, measurement, and comparison features.
For the Risk Calculator's underlying model (McWilliams et al.):
- The "development data set" (training set) included participants from the Pan-Canadian Early Detection of Lung Cancer Study (PanCan). The exact number of participants or nodules is not provided in this summary but the PanCan study is a large, population-based study.
9. How the Ground Truth for the Training Set Was Established
For the Risk Calculator's underlying model (McWilliams et al.):
- The ground truth for the development data set (PanCan study) was established by tracking "the final outcomes of all nodules of any size that were detected on baseline low-dose CT scans." This indicates that the ground truth for malignancy was based on definitive pathological diagnosis or long-term clinical follow-up confirming benignity or stability.
(88 days)
Philips IntelliSpace Portal Platform is a software medical device that allows multiple users to remotely access clinical applications from compatible computers on a network.
The system allows networking, selection, processing and filming of multimodality DICOM images.
This software is for use with off-the-shelf PC computer technology that meets defined minimum specifications.
Philips IntelliSpace Portal Platform is intended to be used by trained professionals, including but not limited to physicians and medical technicians.
This medical device is not to be used for mammography.
The device is not intended for diagnosis of lossy compressed images.
Philips IntelliSpace Portal Platform is a software medical device that allows multiple users to remotely access clinical applications from compatible computers on a network. The system allows networking, selection, processing and filming of multimodality DICOM images. This software is for use with off-the-shelf PC computer technology that meets defined minimum specifications.
The IntelliSpace Portal Platform communicates with imaging systems of different modalities using the DICOM-3 standard.
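Files exchanged under the DICOM standard's Part 10 file format are recognizable by a fixed signature: a 128-byte preamble followed by the ASCII bytes `DICM`. A minimal check that needs no DICOM toolkit; the demo writes a fake header, not a real image:

```python
import os
import tempfile

def looks_like_dicom(path):
    """Check the DICOM Part 10 signature: 128-byte preamble then b'DICM'."""
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Demo with a minimal fake Part 10 header
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\x00" * 128 + b"DICM")
tmp.close()
print(looks_like_dicom(tmp.name))  # True
os.unlink(tmp.name)
```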
Here's an analysis of the provided text regarding the acceptance criteria and study for the IntelliSpace Portal Platform (K162025):
The submitted document is a 510(k) Premarket Notification for the Philips IntelliSpace Portal Platform. This submission aims to demonstrate substantial equivalence to a legally marketed predicate device (GE AW Server K081985).
Important Note: The document focuses on demonstrating substantial equivalence for a Picture Archiving and Communications System (PACS) and related functionalities. Unlike AI/ML-driven diagnostic devices, the information provided here does not detail performance metrics like sensitivity, specificity, or AUC against a specific clinical condition using a test set of images with established ground truth from a clinical study. Instead, the acceptance criteria and "study" refer to engineering and functional verification and validation testing to ensure the software performs as intended and safely, consistent with a PACS system.
Here's the breakdown based on your requested information:
A table of acceptance criteria and the reported device performance
The document does not provide a table with specific quantitative acceptance criteria or reported performance results in the classical sense (e.g., sensitivity, specificity, accuracy percentages) because it's for a PACS platform, not a diagnostic AI algorithm for a specific clinical task.
Instead, the "acceptance criteria" for a PACS platform primarily relate to its functional performance, compliance with standards, and safety. The reported "performance" is a successful demonstration of these aspects.
Acceptance Criteria (Inferred from regulatory requirements and description) | Reported Device Performance (as stated in the submission) |
---|---|
Compliance with ISO 14971 (Risk Management) | Demonstrated compliance with ISO 14971. (p. 9) |
Compliance with IEC 62304 (Medical Device Software Lifecycle Processes) | Demonstrated compliance with IEC 62304. (p. 9) |
Compliance with NEMA-PS 3.1-PS 3.20 (DICOM Standard) | Demonstrated compliance with NEMA-PS 3.1-PS 3.20 (DICOM). (p. 9) |
Compliance with FDA Guidance for Content of Premarket Submissions for Software Contained in Medical Devices | Demonstrated compliance with the relevant FDA guidance document. (p. 9) |
Meeting defined functionality requirements and performance claims (e.g., networking, selection, processing, filming of multimodality DICOM images, multi-user access, various viewing/manipulation tools as listed in comparison tables) | Verification and Validation tests were performed to address the intended use, technological characteristics, requirement specifications, and risk management results. Tests demonstrated that the system meets all defined functionality requirements and performance claims. (p. 9) |
Safety and effectiveness equivalent to the predicate device | Demonstrated substantial equivalence in terms of safety and effectiveness, confirming no new safety or effectiveness concerns. (p. 9, 10) |
Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This type of information is not provided in this document. Since the submission is for a PACS platform and not a diagnostic AI algorithm, there is no mention of a "test set" of clinical cases or patient data in the context of diagnostic performance evaluation. The "testing" refers to software verification and validation, which would involve testing functionalities rather than analyzing a dataset of medical images.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not applicable/not provided. As explained above, there is no "test set" of clinical cases with ground truth established by medical experts for diagnostic performance.
Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not applicable/not provided. There is no clinical "test set" requiring adjudication for diagnostic performance.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. This device is a PACS platform, not an AI-assisted diagnostic tool designed to improve human reader performance for a specific clinical task. The submission explicitly states: "The subject of this premarket submission, ISPP does not require clinical studies to support equivalence." (p. 9).
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
No, a standalone performance study (in the context of an AI algorithm performing a diagnostic task) was not done. This device is a software platform for image management and processing, intended for use by trained professionals (humans-in-the-loop) for visualization and administrative functions.
The type of ground truth used (expert consensus, pathology, outcomes data, etc)
This information is not applicable/not provided. There is no ground truth data in the context of diagnostic accuracy for this PACS platform submission. The "ground truth" for its functionality would be defined by its requirement specifications, and testing would verify if those specifications are met.
The sample size for the training set
This information is not applicable/not provided. This device is a PACS platform, not an AI/ML algorithm that requires a "training set" of data in the machine learning sense. The software development process involves design and implementation, followed by verification and validation, but not training on a dataset of images to learn a specific task.
How the ground truth for the training set was established
This information is not applicable/not provided. As there is no "training set," there is no ground truth establishment for it.
(167 days)
The device is intended to be used for x-ray computed tomography and projection x-ray imaging of upper and lower extremities of adult patients and pediatric patients aged 12 and over.
The Carestream Health Onsight 3D Extremity System is a medical x-ray imaging device designed to acquire three-dimensional, volumetric CT data of patient extremities (feet, ankles, lower leg, knees, hands, wrists, arms and elbows). The device is configured as a "cone beam computed tomography" system (CBCT) in that the x-ray field covers the whole anatomy of interest (~25cm in length) and the data is acquired with a single rotation of the detector and x-ray source (actually a short scan of 180 degrees plus "fan angle" for a total of 216.5 degrees) with no patient motion through the irradiation field. The device is intended for use in a range of different locations including office-based medical practice, hospital departments and other imaging facilities. The system is also capable of acquiring standard two-dimensional projection radiographs of the same body parts.
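The short-scan geometry quoted here follows the standard rule that a minimally complete fan/cone-beam acquisition spans a half rotation plus the full fan angle, so that every ray through the object is measured at least once. The stated 216.5° total therefore implies a fan angle of 36.5°. A trivial sketch of that arithmetic:

```python
def short_scan_coverage_deg(fan_angle_deg):
    """Minimum short-scan arc for fan/cone-beam CT: 180 degrees + fan angle."""
    return 180.0 + fan_angle_deg

# The document's 216.5-degree total corresponds to a 36.5-degree fan angle
print(short_scan_coverage_deg(36.5))  # 216.5
```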
Here's a breakdown of the acceptance criteria and the study details for the Carestream Health OnSight 3D Extremity System, based on the provided document:
Acceptance Criteria and Device Performance
The document states that predefined acceptance criteria were met, but it does not explicitly list the quantitative acceptance criteria. Instead, it generally indicates that the device's performance was evaluated against the predicate and reference devices and found to be "statistically equivalent to or better than."
Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|
Diagnostic Capability (3D Images) | Statistically equivalent to or better than Philips Brilliance CT X-ray system (K060937) |
Diagnostic Capability (2D Images) | Statistically equivalent to or better than DRX-1C Detector (K120062) as a component of the DRX-Evolution x-ray system (K091889) |
Conforms to Specifications | Bench testing demonstrated the device conforms to its specifications. |
Safety and Effectiveness | Demonstrated to be as safe, as effective, and performs as well as or better than the predicate device. |
Study Details
1. Sample Size and Data Provenance
- Test Set Sample Size: Not explicitly stated. The document mentions a "clinical study" and "Reader Study" but does not provide the number of cases or images included in the test set.
- Data Provenance: Not explicitly stated. The document describes a "clinical study," which generally implies prospective data collection, but no details on the country of origin are provided.
2. Number of Experts and Qualifications
- Number of Experts: Not explicitly stated. The document refers to a "Reader Study," which implies multiple readers, but the exact number is not given.
- Qualifications of Experts: Not explicitly stated. The document does not provide details on the specialty or experience level of the readers.
3. Adjudication Method
- Not explicitly stated. The document mentions a "Reader Study" to determine diagnostic capability but does not describe any specific adjudication method used for discrepancies between readers, if any.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? Yes, a "Reader Study" was performed to compare the diagnostic capability of the OnSight 3D Extremity System against reference devices.
- Effect Size (Improvement with AI vs. without AI assistance): Not applicable. This study compared the diagnostic capability of the OnSight device's images to images from existing reference devices, not the improvement of human readers with AI assistance versus without AI assistance. The OnSight 3D Extremity System itself generates the images, and the study assesses the diagnostic quality of those images.
5. Standalone Performance Study
- Was a standalone study done? Yes, the "Reader Study" assessed the diagnostic capability of the images produced by the OnSight 3D Extremity System. This implies an evaluation of the algorithm's output (the images) for diagnostic quality.
6. Type of Ground Truth Used
- Not explicitly stated. Given that the study evaluated "diagnostic capability" and compared to reference devices, it is highly likely that the ground truth was established by expert consensus or comparison to a gold standard diagnosis, but the specific method (e.g., pathology, clinical follow-up) is not detailed.
7. Sample Size for Training Set
- Not applicable. The document describes a clinical study and bench testing for a medical imaging device (a CT system) that generates images, rather than an AI/ML algorithm that is trained on a dataset. Therefore, there is no mention of a training set for an AI model.
8. How Ground Truth for Training Set was Established
- Not applicable, as there is no mention of a training set for an AI model.
(144 days)
The Ingenuity CT is a Computed Tomography X-Ray System intended to produce images of the head and body by computer reconstruction of x-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, components and accessories. The Ingenuity CT is indicated for head, whole body, cardiac and vascular X-ray Computed Tomography applications in patients of all ages.
These scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The Philips Ingenuity CT consists of three system configurations: the Philips Ingenuity CT, the Philips Ingenuity Core and the Philips Ingenuity Core128. These systems are Computed Tomography X-Ray Systems intended to produce cross-sectional images of the body by computer reconstruction of X-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, components and accessories. These scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*.
The main components (detection system, the reconstruction algorithm, and the x-ray system) that are used in the Philips Ingenuity CT have the same fundamental design characteristics and are based on comparable technologies as the predicate.
The main system modules and functionalities are:
- Gantry. The Gantry consists of 4 main internal units:
  - a. Stator: a fixed mechanical frame that carries HW and SW.
  - b. Rotor: a rotating circular stiff frame that is mounted in and supported by the Stator.
  - c. X-Ray Tube (XRT) and Generator: fixed to the Rotor frame.
  - d. Data Measurement System (DMS): a detector array, fixed to the Rotor frame.
- Patient Support (Couch): carries the patient in and out through the Gantry bore, synchronized with the scan.
- Console: a two-part subsystem containing a Host computer and display, which is the primary user interface, and the Common Image Reconstruction System (CIRS), a dedicated, powerful image reconstruction computer.
In addition to the above components and the software operating them, each system includes a workstation hardware and software for data acquisition, display, manipulation, storage and filming as well as post-processing into views other than the original axial images. Patient supports (positioning aids) are used to position the patient.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
Important Note: The provided document is a 510(k) submission for a CT scanner (Philips Ingenuity CT), which focuses on demonstrating substantial equivalence to a predicate device (Philips Plus CT Scanner), rather than establishing new performance claims with specific acceptance criteria and clinical trial results typical for entirely novel AI/ML devices. Therefore, much of the requested information, particularly regarding AI-specific performance (like effect size of human reader improvement with AI, standalone AI performance, ground truth for training AI models) is not directly present. The clinical evaluation described is a comparative image quality assessment rather than a diagnostic accuracy clinical trial.
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of this 510(k) submission, the "acceptance criteria" are primarily established against international and FDA-recognized consensus standards for medical electrical equipment and CT systems, and against the performance of the predicate device. The "reported device performance" refers to the successful verification against these standards and equivalence to the predicate.
Acceptance Criteria Category | Specific Criteria / Standard Met | Reported Device Performance |
---|---|---|
Safety and Essential Performance (General) | IEC 60601-1:2006 (Medical electrical equipment - Part 1: General requirements for basic safety and essential performance) | All verification tests were executed and passed the specified requirements. |
Electromagnetic Compatibility (EMC) | IEC 60601-1-2:2007 (Medical electrical equipment - Part 1-2: General requirements for basic safety and essential performance - Collateral standard: Electromagnetic disturbances - Requirements and tests) | All verification tests were executed and passed the specified requirements. |
Radiation Protection | IEC 60601-1-3 Ed 2.0:2008 (Medical electrical equipment - Part 1-3: General requirements for basic safety - Collateral standard: Radiation protection in diagnostic X-ray equipment) | All verification tests were executed and passed the specified requirements, including radiation metrics. |
Usability | IEC 60601-1-6:2010 (Medical electrical equipment - Part 1-6: General requirements for basic safety and essential performance - Collateral standard: Usability) | All verification tests were executed and passed the specified requirements. |
Safety of X-ray Equipment (Specific) | IEC 60601-2-44:2009 (Medical electrical equipment - Part 2-44: Particular requirements for the safety of X-ray equipment) | All verification tests were executed and passed the specified requirements. |
Software Life Cycle Processes | IEC 62304:2006 (Medical device software - Software life cycle processes) | Software documentation for a Moderate Level of Concern (per FDA guidance) was included. All verification tests were executed and passed the specified requirements. |
Risk Management | ISO 14971 (Medical devices - Application of risk management to medical devices, Ed. 2.0, 2007) | Traceability between requirements, hazard mitigations and test protocols is described. Test results per requirement and per hazard mitigation show successful mitigation. |
Image Quality Metrics (Comparative to Predicate) | CT number accuracy and uniformity, MTF, noise reduction performance (i.e., iDose4 vs. FBP), slice thickness, slice sensitivity profiles. Diagnostic image quality for brain, chest, abdomen, pelvis/orthopedic. | Bench tests included patient support/gantry positioning repeatability and accuracy, laser alignment accuracy, CT image quality metrics testing. Sample phantom images provided. Clinical evaluation found no difference in image quality between iDose4 and FBP, with iDose4 scoring higher in most cases, maintaining diagnostic quality. |
Functional and Non-Functional Requirements (System Level) | System Requirements Specification, Subsystem Requirement Specifications, User Interface Verification | Functional and non-functional regression tests, as well as user interface verification, provided in the Traceability Matrix (successful). |
Clinical Validation (Workflow & Features) | Covered requirements related to clinical workflows and features. | Validation test plan executed as planned, acceptance criteria met for each requirement. All validation tests demonstrate safety and effectiveness. |
Serviceability Validation | Covered requirements related to upgrade, installation, servicing, and troubleshooting. | Validation test plan executed as planned, acceptance criteria met for each requirement. |
Manufacturing Validation | Covered requirements related to operations and manufacturing. | Validation test plan executed as planned, acceptance criteria met for each requirement. |
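One of the bench metrics in the table above, MTF, is conventionally estimated as the normalized Fourier transform of the line spread function (LSF). The sketch below uses an assumed Gaussian LSF (sigma = 0.5 mm), not measured Ingenuity data:

```python
import numpy as np

# Estimate the MTF from a line spread function (LSF), e.g. measured across a
# thin wire phantom. The Gaussian LSF with sigma = 0.5 mm is an assumed example.
x = np.linspace(-20.0, 20.0, 2048)       # position across the wire (mm)
dx = x[1] - x[0]
lsf = np.exp(-x**2 / (2 * 0.5**2))
lsf /= lsf.sum()                         # normalize so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(x.size, d=dx)    # spatial frequency (cycles/mm)

# The 50% MTF point is a common single-number summary of high-contrast
# spatial resolution: the first frequency where the MTF drops below 0.5.
f50 = freqs[np.argmax(mtf < 0.5)]
```

Comparative submissions like this one typically report such summary points (e.g. the 50% and 10% MTF frequencies) for the new device and the predicate side by side.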
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a distinct "test set" sample size in the sense of a number of clinical cases or patient images used for a diagnostic accuracy study. Instead, it refers to:
- Bench tests: These involved phantom images and physical testing of the system (e.g., patient support/gantry positioning repeatability and accuracy, laser alignment accuracy, CT image quality metrics testing). No sample size for these is given.
- Clinical Evaluation: An "image evaluation" was performed involving "images of the brain, chest, abdomen and pelvis/peripheral orthopedic body areas." The number of images or patient cases used for this evaluation is not specified.
- Data Provenance: Not explicitly stated, but given it's a Philips product, it's likely internal development and validation data. There is no mention of external datasets or specific countries of origin. The evaluation compares FBP and iDose4 reconstructions of the same images. The clinical evaluation implicitly relates to retrospective data as it compares reconstructed images.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: "a qualified radiologist". So, one expert.
- Qualifications of Experts: Described only as "a qualified radiologist." No specific experience (e.g., years of experience, subspecialty) is provided.
4. Adjudication Method for the Test Set
The evaluation was performed by a single radiologist using a 5-point Likert scale. Therefore, no adjudication method (like 2+1, 3+1 consensus) was used as there was only one reviewer.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not done. The document describes an image evaluation by a single radiologist, not multiple readers. It also describes a comparison of image quality between reconstruction techniques (FBP vs. iDose4), not a comparison of human reader diagnostic performance with vs. without AI assistance.
- Effect size of human readers improving with AI vs. without AI assistance: Not applicable, as this type of study was not performed. The study evaluated whether iDose4-reconstructed images maintained diagnostic quality compared to standard FBP; iDose4 is an iterative reconstruction technique for image quality improvement and dose reduction, not a diagnostic AI.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done
Yes, in effect: the primary evaluation concerns the quality of the algorithm's output. The iDose4 iterative reconstruction algorithm produces images without human intervention, and those images were then evaluated by a radiologist. The core of this 510(k) is the technical performance and safety of the CT scanner and its components, including its reconstruction algorithms. The evaluation described ("image evaluation...") is therefore a standalone assessment of an image reconstruction algorithm (iDose4 vs. standard FBP), not the standalone performance of an AI diagnostic algorithm.
7. The Type of Ground Truth Used
For the clinical image evaluation, the "ground truth" was established by the evaluation of a qualified radiologist using a 5-point Likert scale to determine if images were of "diagnostic quality" and for comparing image quality between reconstruction methods. This could be considered a form of "expert consensus," albeit from a single expert in this case. There is no mention of pathology or outcomes data being used as ground truth for this specific image quality assessment.
8. The Sample Size for the Training Set
Not applicable in the context of this 510(k) as presented.
The device (Philips Ingenuity CT) is a hardware CT scanner with associated software, including image reconstruction algorithms (like iDose4). While iterative reconstruction algorithms might involve some form of "training" or optimization during their development, the document does not speak to a "training set" in the sense of a dataset used to train a machine learning model for a specific diagnostic task that would typically be described in an AI/ML device submission. The description focuses on technical modifications and adherence to engineering and safety standards, and performance against a predicate device.
9. How the Ground Truth for the Training Set was Established
Not applicable for the reasons stated above (no "training set" for an AI/ML diagnostic model described).
(133 days)
The Philips Multislice CT Systems are Computed Tomography X-Ray Systems intended to produce cross-sectional images of the body by computer reconstruction of X-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, components and accessories. The scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*.
The screening must be performed within the established inclusion criteria of protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
Philips Low Dose CT Lung Cancer Screening option can be used with Philips whole body multi-slice CT X-Ray Systems installed in a healthcare facility (clinic / hospital). These systems provide a continuously rotating X-ray tube and detector array with multi-slice capability. The acquired x-ray transmission data is reconstructed by computer into cross-sectional images of the body taken at different angles and planes. Reconstruction algorithms available are standard reconstruction (filtered back projection), iDose 4 and IMR iterative reconstruction. These systems also include signal analysis and display equipment, patient and equipment supports, components and accessories.
There are no functional, performance, feature, or design changes required for the qualified CT systems onto which the LDCT LCS Option is applied. Because none of the CTs require hardware or software modifications, the difference between the Philips Low Dose CT Lung Cancer Screening option and the currently marketed and predicate Philips Multislice CT System, for qualified CT systems in the installed base, consists of:
- A set of up to three reference LDCT LCS protocols: standard reconstruction, standard reconstruction with iDose 4, and with IMR iterative reconstruction (where applicable), for each qualified CT System on a per CT platform basis;
- Detailed instructions on how to create the protocols on the corresponding CT System; and
- A dedicated Instructions for Use for LDCT LCS that covers all qualified systems.
This document is a 510(k) premarket notification for the Philips Multislice CT System with Low Dose CT Lung Cancer Screening. It primarily focuses on demonstrating substantial equivalence to existing predicate devices, rather than presenting a standalone study proving that the device meets specific acceptance criteria in a clinical setting with human readers.
However, based on the provided text, I can infer the acceptance criteria relate to maintaining image quality parameters for Low Dose CT Lung Cancer Screening (LDCT LCS) that are equivalent to or better than standard CT performance, especially given the reduced radiation dose. The "study" proving this largely relies on non-clinical bench testing and references to external clinical literature and trials.
Here's a breakdown of the information you requested:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly state "acceptance criteria" in a quantitative, measurable sense for a clinical study with patients. Instead, it focuses on demonstrating that the LDCT LCS option does not degrade image quality compared to existing CT systems and that existing clinical evidence supports the efficacy of LDCT LCS itself.
Here's a table based on the image quality parameters evaluated in non-clinical testing and the reported outcome of that testing:
Acceptance Criteria (Inferred from Image Quality Parameters Evaluated) | Reported Device Performance (Non-Clinical Bench Testing Outcomes) |
---|---|
Spatial Resolution (MTF): Ability to visualize fine anatomical details, preserved at lower dose. | "MTF is a measure of the high contrast spatial resolution performance of the system. Nodules in the lung are high contrast objects and therefore, MTF should be preserved at lower dose conditions." "Demonstrates that the image quality metrics including MTF... are substantially equivalent among different family of scanners (Brilliance 16, Brilliance Big Bore, Brilliance 64/Ingenuity, Brilliance iCT and IQon)." |
Contrast Resolution (CNR): Ability to differentiate tissues with subtle differences in attenuation, sufficient for nodule detection. | "Sufficient Contrast-to-Noise is needed to detect solid and non-solid nodules in the lung. This parameter accounts for the contrast between an object and the background. This could also be a parameter that could influence nodule detectability." "The CNR scans were completed using the LDCT LCS scan protocols for all scanners in the comparison." "The results of non-clinical bench testing demonstrate that the image quality metrics including... Contrast to Noise Ratio are substantially equivalent among different family of scanners." "The contrast of the lung nodules is high relative to this increased noise, demonstrated by the CNR results (section 18) and NLST study." |
Image Noise (Standard Deviation): Acceptable background noise levels at reduced dose, not compromising nodule detectability/sizing. | "As dose is reduced, background noise in the image increases. If this noise becomes too large, nodule detectability and sizing measurement may be compromised." "The results of non-clinical bench testing demonstrate that the image quality metrics including... Image noise... are substantially equivalent among different family of scanners." "Noise goes up by the square root of the mAs reduction factor. The contrast of the lung nodules is high relative to this increased noise, demonstrated by the CNR results (section 18) and NLST study." |
Noise Power Spectrum (NPS): Acceptable noise texture, not influencing nodule detection capabilities. | "Similar to the noise, changes in texture of the noise may have an influence on the nodule detection capabilities." "The NPS scans were completed using the LDCT LCS scan protocol." "The results of non-clinical bench testing also demonstrate the image quality parameters for iDose4 and IMR reconstructions are equivalent to, or better than standard FBP reconstruction." |
Slice Thickness: Accurate slice thickness for clear edges and boundaries of nodules. | "The ability to produce slice thicknesses (FWHM of the slice sensitivity profile) that are close to the nominal slice thickness is important in defining clear edges and boundaries of the nodule." "The results of non-clinical bench testing demonstrate that the image quality metrics including... Slice Thickness... are substantially equivalent among different family of scanners." |
CT Number Uniformity: Sufficient uniformity in the lung for robust nodule detectability. | "In a low dose scanning protocols such as with lung cancer screening, maintaining sufficient CT number uniformity throughout the lung and its various structures is important for more robust detectability of the nodules." "The results of non-clinical bench testing demonstrate that the image quality metrics including... CT number linearity, CT number accuracy... are substantially equivalent among different family of scanners." |
CT Number Linearity: Measured CT number in a nodule not significantly affected by low dose scanning. | "In a low dose scanning protocols such as with lung cancer screening, the CT number measured in a nodule may be affected and therefore measuring CT number linearity is important." "The results of non-clinical bench testing demonstrate that the image quality metrics including... CT number linearity... are substantially equivalent among different family of scanners." |
Image Artifacts: No new or increased artifacts obscuring anatomical details or mimicking pathology. | (Implicit in overall image quality assessment, not explicitly detailed as a separate quantified metric but mentioned as an important image quality parameter). "Image artifacts can obscure anatomical details and mimic pathology." |
Geometric Distortion: Accuracy of measurements and image correlation with other modalities. | (Implicit in overall image quality assessment, not explicitly detailed as a separate quantified metric but mentioned as an important image quality parameter). "Geometric distortion can affect the accuracy of measurements and the ability to correlate images with other modalities." |
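The noise and CNR relationships cited in the table can be made concrete with a short sketch. The HU values and baseline noise level below are illustrative assumptions, not NLST or Philips figures; only the square-root scaling of noise with mAs comes from the document.

```python
import math

def relative_noise(mas_ratio: float) -> float:
    """Quantum noise scales as 1/sqrt(mAs): quartering the dose doubles the noise."""
    return 1.0 / math.sqrt(mas_ratio)

def cnr(mean_object_hu: float, mean_background_hu: float, noise_sd_hu: float) -> float:
    """Contrast-to-noise ratio of an object (e.g. a lung nodule) vs. its background."""
    return abs(mean_object_hu - mean_background_hu) / noise_sd_hu

# Assumed example: a solid nodule around -50 HU against lung parenchyma around
# -850 HU, with 20 HU noise at a standard-dose protocol.
base_cnr = cnr(-50.0, -850.0, 20.0)                         # 40.0
# At one quarter of the mAs, noise doubles and CNR halves:
ldct_cnr = cnr(-50.0, -850.0, 20.0 * relative_noise(0.25))  # 20.0
```

This is the arithmetic behind the submission's argument that nodule contrast remains high relative to the increased noise at screening-dose protocols.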
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not applicable in the context of a clinical patient test set for this specific submission. The "test set" for demonstrating substantial equivalence related to image quality was composed of various CT scanner models and reconstruction algorithms.
- Data Provenance: The primary data used to support the efficacy of LDCT LCS itself comes from externally referenced clinical literature, specifically the National Lung Screening Trial (NLST) (N Engl J Med 2011; 365:395-409) and the International Early Lung Cancer Action Program (I-ELCAP), along with "subsequent literature." These were large-scale prospective clinical trials, likely conducted across multiple centers, including within the US (NLST).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not applicable for this specific submission's non-clinical testing. The non-clinical image quality phantom measurements do not involve expert interpretation for ground truth.
- Qualifications of Experts: For the external clinical trials (NLST, I-ELCAP) referenced, radiologists and other medical professionals were involved in establishing diagnoses and outcomes, but their specific numbers and qualifications are not detailed in this 510(k) document.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable for this specific submission's non-clinical testing. No human adjudication was performed for the image quality metrics tested.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Comparative Effectiveness Study: No, a multi-reader, multi-case comparative effectiveness study with human readers (with vs. without AI assistance) was not conducted or presented in this 510(k) submission. This submission is for the CT system itself with a low-dose protocol option, not for an AI-powered CAD (Computer-Aided Detection) or CADx (Computer-Aided Diagnosis) device. The device does not incorporate AI for interpretation.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance Study: No, a standalone algorithm-only performance study was not conducted or presented in this 510(k) submission. The device is a CT scanner, not an independent algorithm for diagnosis or detection.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth:
- For the image quality bench testing: Ground truth was established by the physical phantoms used for measurements (e.g., wire phantom for MTF, ramp/disk phantom for slice thickness, water/density phantoms for CT numbers).
- For the clinical efficacy of LDCT LCS (referenced externally): The ground truth for the NLST and I-ELCAP studies would have included pathology reports (for confirmed cancers), clinical follow-up/outcomes data (for stable or resolving nodules), and potentially expert consensus reviews of indeterminate findings.
8. The sample size for the training set
- Training Set Sample Size: Not applicable. This submission describes a CT scanner with a new protocol, not a machine learning algorithm that requires a training set. The reconstruction algorithms (iDose4, IMR) themselves would have been developed using various data, but details are not provided here.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable, as this is not an AI/ML device requiring a training set with ground truth established through labeled data in the context of this submission.
(144 days)
The Philips Spectral CT Applications support viewing and analysis of images at energies selected from the available spectrum in order to provide information about the chemical composition of the body materials and/or contrast agents. The Spectral CT Applications provide for the quantification and graphical display of attenuation, material density, and effective atomic number. This information may be used by a trained healthcare professional as a diagnostic tool for the visualization and analysis of anatomical and pathological structures.
The Spectral enhanced Advanced Vessel Analysis (SAVA) application is intended to assist clinicians in viewing and evaluating CT images, for the inspection of contrast-enhanced vessels.
The Spectral enhanced Comprehensive Cardiac Analysis (SCCA) application is intended to assist clinicians in viewing and evaluating cardiovascular CT images.
The Spectral enhanced Tumor Tracking (sTT) application is intended to assist clinicians in viewing and evaluating CT images, for the inspection of tumors.
The Spectral CT Applications package introduces a set of three SW clinical applications: spectral enhanced Comprehensive Cardiac Analysis (sCCA), spectral enhanced Advanced Vessel Analysis (sAVA), and spectral enhanced Tumor Tracking (sTT). Each application provides tools that assist trained personnel in the visualization and analysis of anatomical and pathological structures.
The sCCA application is targeted to assist the user in the analysis and diagnosis of cardiac cases, such as contrast-enhanced and ECG-triggered scans. The application input is a cardiac case acquired on the IQon CT scanner; the application takes the user through typical workflow steps that allow them to extract qualitative and quantitative information on the coronary tree and chambers. The output of this application is information on the physical properties (length, width, volume) and composition properties (effective atomic number, attenuation, HU) of the coronary vessels and findings along them.
The sAVA application is targeted to assist the user in the analysis and diagnosis of CT angiography cases, such as contrast-enhanced and whole-body CT angiography scans. The application input is a CT angiography case acquired on the IQon CT scanner; the application takes the user through typical workflow steps that allow them to extract qualitative and quantitative information on the vessel of interest. The output of this application is information on the physical properties (length, width, volume) and composition properties (effective atomic number, attenuation, HU) of the vessels and findings along them.
The sTT application is targeted to assist the user in the analysis of tumors, in contrast-enhanced, soft-tissue-oriented, and whole-body scans. The application input is a contrast-enhanced case with a suspected tumor acquired on the IQon CT scanner; the application takes the user through typical workflow steps that allow them to extract qualitative and quantitative information on the tumor of interest. The output of this application is information on the physical properties (length, width, volume) and composition properties (effective atomic number, attenuation, HU) of the tumor.
The provided text describes the Philips Spectral CT Applications, a set of software clinical applications (sCCA, sAVA, sTT) designed to assist clinicians in viewing and evaluating CT images with spectral data. While the document mentions verification and validation, it does not provide explicit acceptance criteria in a quantitative table or detailed performance metrics against those criteria. It generally states that "SW requirements were met" and "intended uses and defined user needs were met."
Therefore, I cannot populate a table of acceptance criteria and reported device performance with specific numerical values from the provided text. However, I can extract information related to the study that proves the device meets its intended uses and user needs, as described in the "Summary of Clinical Testing" section.
Here's a breakdown of the available information:
1. A table of acceptance criteria and the reported device performance:
No explicit quantitative acceptance criteria or corresponding reported device performance metrics are provided in the document. The document states that "SW requirements were met" and "intended uses and defined user needs were met."
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: "clinical datasets" – specific number not provided.
- Data Provenance: The datasets were "derived from Philips IQon Spectral CT system (K133674)." The country of origin is not specified, but the manufacturer is Philips Medical Systems Nederland B.V. and the regulatory contact is in the USA. The data appears to be retrospective, as it's referred to as "clinical datasets" used for validation, not newly acquired data specifically for this study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: "Philips Internal certified radiologists" – specific number not provided.
- Qualifications of Experts: "certified radiologists" - specific experience level (e.g., 10 years) not provided. They are referred to as representing "a typical user."
4. Adjudication method for the test set:
- Adjudication Method: Not explicitly stated. The document mentions that "The evaluators were questioned against each of the intended uses and provided score to describe their level of satisfaction." This suggests individual evaluation rather than a formal consensus or adjudication process among multiple readers for ground truth establishment.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs without AI assistance:
- MRMC Study: No MRMC comparative effectiveness study, comparing human readers with and without AI assistance, is described. The validation focused on whether the applications meet intended uses and user needs, as evaluated by radiologists. The applications are described as "assistive tools" for clinicians.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Standalone Performance: Not explicitly evaluated or described. The validation confirms that the applications "allow visualization, manipulation and analysis of spectral data" and "assist clinicians in viewing and evaluating CT images." This implies human interaction as part of the intended use.
7. The type of ground truth used:
- Ground Truth Type: The ground truth for the validation appears to be based on the subjective evaluation and "satisfaction scores" provided by "Philips Internal certified radiologists" against the stated intended uses and user needs. It is based on expert consensus/evaluation of the utility of the application's outputs, rather than a separate, independent "ground truth" like pathology or outcomes data.
8. The sample size for the training set:
- Training Set Sample Size: Not specified. The document primarily discusses verification and validation using "datasets that were generated by the Philips IQon Spectral CT system" and "clinical datasets." It does not provide details on a distinct training set.
9. How the ground truth for the training set was established:
- Ground Truth for Training Set: Not specified, as details about a separate training set or how its ground truth was established are not provided in the document.
(120 days)
The Brilliance CT is a Computed Tomography X-Ray System intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data taken at different angles and planes. This device may include signal analysis and display equipment, patient and equipment supports, components, and accessories.
The dual energy option allows the system to acquire two CT images of the same anatomical location using two distinct tube voltages and/or tube currents during two tube rotations. The x-ray dose will be the sum of the doses of each tube rotation at its respective tube voltage and current. Information regarding the material composition of various organs, tissues, and contrast materials may be gained from the differences in x-ray attenuation between these distinct energies. This information may also be used to reconstruct images at multiple energies within the available spectrum, and to reconstruct basis images that allow the visualization and analysis of anatomical and pathological materials.
Philips Healthcare offers a Dual Energy scanning option on the Brilliance CT Scanner. The Brilliance Dual Energy option automates the execution of sequential scanning protocols acquired during the same episode of care using two unique tube voltages and/or currents. The acquired datasets can be displayed side-by-side or overlaid and then analyzed to augment the review of anatomical and pathological structures. Dual energy imaging, by nature of differing x-ray energy values, enables the identification of attenuation differences found in those structures between the two applied energies.
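The acquisition described above underpins two-material basis decomposition: with attenuation measured at two tube voltages, each voxel's attenuation can be modeled as a linear combination of two known basis materials, and the basis images follow from a per-voxel 2x2 linear solve. A minimal sketch, assuming illustrative (not vendor-calibrated) linear attenuation coefficients for water and iodine:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of two basis
# materials (water, iodine) at the two tube voltages; real values
# depend on the scanner's effective x-ray spectra.
BASIS = np.array([
    [0.20, 4.0],   # low-kVp:  [water, iodine]
    [0.18, 2.0],   # high-kVp: [water, iodine]
])

def decompose(mu_low, mu_high):
    """Per-voxel two-material decomposition: solve the linear system
    mu = BASIS @ [c_water, c_iodine] for the basis-material images."""
    mu = np.stack([np.asarray(mu_low, float), np.asarray(mu_high, float)])
    flat = mu.reshape(2, -1)               # flatten spatial dims
    coeffs = np.linalg.solve(BASIS, flat)  # 2x2 solve, broadcast over voxels
    return coeffs.reshape(mu.shape)        # [c_water map, c_iodine map]

# A voxel whose attenuation matches pure water at both energies
# should decompose to (1, 0):
c_water, c_iodine = decompose(0.20, 0.18)
```

In practice the basis matrix is calibrated against the scanner's effective spectra rather than taken from single-energy coefficient tables, but the per-voxel algebra is the same.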
This submission K090462 for the Philips Medical Systems (Cleveland) Inc. Brilliance Dual Energy option does not contain the detailed information necessary to fully address all aspects of your request regarding acceptance criteria and a study proving the device meets those criteria.
The document is a 510(k) Summary, which primarily focuses on establishing substantial equivalence to predicate devices and detailing the intended use. It does not typically include the specifics of performance studies, acceptance criteria, or ground truth establishment that would be found in a full submission or a clinical study report.
Based on the provided text:
- No specific acceptance criteria or a study demonstrating the device meets those criteria are explicitly reported. The document states that the device is "of comparable type and substantially equivalent to the legally marketed devices" (K060937 and K081105) based on "similar technological characteristics and subassemblies." This is a regulatory statement of equivalence, not a performance study result against stated acceptance criteria.
Therefore, I cannot populate the table or provide detailed answers to most of your questions based solely on the provided text.
However, I can extract information related to the device description and intended use:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Not explicitly stated in document) | Reported Device Performance (Implied from substantial equivalence) |
|---|---|
| (Not provided in the 510(k) Summary) | Functionally equivalent in providing dual-energy CT imaging capabilities to predicate devices. |
| (Not provided in the 510(k) Summary) | Able to acquire two CT images at distinct tube voltages/currents. |
| (Not provided in the 510(k) Summary) | Enables analysis of material composition based on attenuation differences. |
| (Not provided in the 510(k) Summary) | Can reconstruct images at multiple energies and basis images. |
Since the document does not describe a performance study with acceptance criteria, the following questions cannot be answered from the provided text:
- 2. Sample size used for the test set and the data provenance: Not mentioned.
- 3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not mentioned.
- 4. Adjudication method for the test set: Not mentioned.
- 5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done: Not mentioned.
- 6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Not mentioned, as this is an imaging option, not a standalone algorithm.
- 7. The type of ground truth used: Not mentioned.
- 8. The sample size for the training set: Not mentioned.
- 9. How the ground truth for the training set was established: Not mentioned.
Summary of what the document does provide:
- Device Name: Brilliance Dual Energy option
- Intended Use: To produce cross-sectional images using two distinct tube voltages and/or tube currents, aiding in material composition analysis, and reconstruction of images at multiple energies and basis images.
- Classification: Class II (21 CFR 892.1750, Product Code 90 JAK)
- Predicate Devices: Philips Brilliance Volume (K060937) and GE Lightspeed CT750 HD (K081105).
- Basis for Equivalence: Similar technological characteristics and subassemblies.
To obtain the detailed study information you're asking for, one would typically need access to the full 510(k) submission, not just the summary, or any publicly available performance reports or clinical studies related to this specific device option.