Search Results
Found 7 results
510(k) Data Aggregation
(174 days)
The AV Viewer is an advanced visualization software intended to process and display images and associated data in a clinical setting.
The software displays images of different modalities and timepoints, and performs digital image processing, measurement, manipulation, quantification and communication.
The AV Viewer is not to be used for mammography.
AV Viewer is an advanced visualization software which processes and displays clinical images from the following modality types: CT, CBCT – CT format, Spectral CT, MR, EMR, NM, PET, SPECT, US, XA (iXR, DXR), DX, CR and RF.
The main features of the AV Viewer are:
• Viewing of current and prior studies
• Basic image manipulation functions such as real-time zooming, scrolling, panning, windowing, and rolling/rotating.
• Advanced processing tools assisting in the interpretation of clinical images, such as 2D slice view, 2D and 3D measurements, user-defined regions of interest (ROIs), 3D segmentation and editing, 3D model visualization, MPR (multi-planar reconstruction) generation, image fusion, and more (an illustrative MPR sketch follows this description).
• A finding dashboard used for capturing and displaying findings of the patient as an overview.
• Customizable workflows that allow users to define their own
• Tools to export customizable reports to the Radiology Information System (RIS) or PACS (Picture archiving and communication system) in different formats.
AV Viewer is based on the AV Framework, an infrastructure that provides the basis for the AV Viewer and common functionality such as image viewing, image editing tools, measurement tools, and the finding dashboard.
The AV Viewer can be hosted on multiple platforms and devices, such as the Philips AVW, a Philips CT/MR scanner console, or in the cloud.
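As a concrete illustration of the MPR generation referenced in the feature list above, here is a minimal NumPy sketch of extracting the three orthogonal planes from an axially acquired volume. The array layout and the isotropic-voxel assumption are hypothetical; this shows the generic technique, not the AV Viewer's actual implementation.

```python
import numpy as np

def mpr_planes(volume: np.ndarray, z: int, y: int, x: int):
    """Extract the three orthogonal MPR planes through voxel (z, y, x).

    `volume` is assumed indexed (slice, row, column); for correct aspect
    ratios a real viewer would first resample to isotropic voxels.
    """
    axial = volume[z, :, :]      # the originally acquired plane
    coronal = volume[:, y, :]    # reformatted front-to-back view
    sagittal = volume[:, :, x]   # reformatted side view
    return axial, coronal, sagittal

# Toy example on a synthetic 64x64x64 volume
vol = np.random.default_rng(0).normal(0.0, 1.0, (64, 64, 64))
ax, co, sa = mpr_planes(vol, 32, 32, 32)
print(ax.shape, co.shape, sa.shape)  # (64, 64) (64, 64) (64, 64)
```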
The provided FDA 510(k) clearance letter for the AV Viewer device indicates that the device has met its acceptance criteria through various verification and validation activities. However, the document does not include detailed quantitative acceptance criteria (e.g., specific thresholds for accuracy, sensitivity, specificity, or measurement error) or comprehensive performance data that would typically be presented in a clinical study report. The submission focuses on demonstrating "substantial equivalence" to a predicate device rather than presenting detailed performance efficacy of the algorithm itself.
Therefore, much of the requested information regarding specific performance metrics, sample sizes for test and training sets, expert qualifications, and detailed study methodologies is not explicitly stated in this 510(k) summary. I will extract and infer what is present and explicitly state when information is missing.
Here's a breakdown based on the provided document:
Acceptance Criteria and Device Performance
The document describes comprehensive verification and validation activities, including "Bench Testing" for measurements and segmentation algorithms. However, specific quantitative acceptance criteria (e.g., "accuracy > 95%") and the reported performance values are not detailed in this summary. The general statement is that "Product requirement specifications were tested and found to meet the requirements" and "The validation objectives have been fulfilled, and the validation results provide evidence that the product meets its intended use and user requirements."
Table of Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criteria (Quantified) | Reported Device Performance (Quantified) | Supporting Study Type mentioned |
---|---|---|---|
General Functionality | Meets product requirement specifications | Meets product requirements | Verification, Validation |
Clinical Use Simulation | Successful performance in use case scenarios | Passed successfully by clinical expert | Expert Test |
Measurement Accuracy | Not explicitly stated | "Correctness of the various measurement functions" | Bench Testing |
Segmentation Accuracy | Not explicitly stated | "Performance" validated for segmentation algorithms | Bench Testing |
User Requirements | Meets user requirement specifications | Meets user requirements | Validation |
Safety and Effectiveness | Equivalent to predicate device | Safe and effective; substantially equivalent to predicate | Verification, Validation, Substantial Equivalence Comparison |
Note: The quantitative details for the "Acceptance Criteria" and "Reported Device Performance" for measurement accuracy and segmentation accuracy are missing from this 510(k) summary. The document only confirms that these tests were performed and the results were positive.
Study Details Based on the Provided Document:
2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not explicitly stated. The document mentions "Verification," "Validation," "Expert Test," and "Bench Testing" were performed, implying the use of test data, but no specific number of cases or images in the test set is provided.
- Data Provenance: Not explicitly stated. The document does not specify the country of origin of the data used for testing, nor does it explicitly state whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not explicitly stated. The "Expert Test" mentions "a clinical expert" (singular) was used to test use case scenarios. It does not mention experts establishing ground truth for broader validation.
- Qualifications of Experts: The "Expert Test" mentions "a clinical expert." For intended users, the document states "trained professionals, including but not limited to, physicians and medical technicians" (Subject Device), and "trained professionals, including but not limited to radiologists" (Predicate Device). It can be inferred that the "clinical expert" would hold one of these qualifications, but specific details (e.g., years of experience, subspecialty) are not provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Adjudication Method: Not explicitly stated. The document refers to "Expert test" where "a clinical expert" tested scenarios, implying individual assessment rather than a multi-reader adjudication process for establishing ground truth for a test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- MRMC Comparative Effectiveness Study: Not explicitly stated or implied. The document focuses on the device's substantial equivalence to a predicate device and its internal verification and validation. There is no mention of a human-in-the-loop MRMC study to compare reader performance with and without AV Viewer assistance. The AV Viewer is described as an "advanced visualization software" and not specifically an AI-driven diagnostic aid that would typically warrant such a study for demonstrating improved reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance Study: The "Bench Testing" section states that it "was performed on the measurements and segmentation algorithms to validate their performance and the correctness of the various measurement functions." This implies a standalone evaluation of these specific algorithms. However, the quantitative results (e.g., accuracy, precision) of this standalone performance are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: For the "Bench Testing" of measurement and segmentation algorithms, the ground truth would likely be based on reference measurements/segmentations, possibly done manually by experts or using highly accurate, non-clinical methods. For other verification/validation activities, the ground truth would be against the pre-defined product and user requirements. However, explicit details about how this ground truth was established (e.g., expert consensus, comparison to gold standard devices/methods) are not specified.
8. The sample size for the training set
- Training Set Sample Size: Not explicitly stated. The document does not mention details about the training data used to develop the AV Viewer's algorithms. The focus is on validation for regulatory clearance. Since the product is primarily an "advanced visualization software" with general image processing tools, much of its functionality might not rely on deep learning requiring large, distinct training sets in the same way a specific AI diagnostic algorithm would.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not explicitly stated. As no training set details are provided, the method for establishing its ground truth is also not mentioned.
Summary of Missing Information:
This 510(k) summary provides a high-level overview of the device's intended use, comparison to a predicate, and the types of verification and validation activities conducted. It largely focuses on demonstrating "substantial equivalence" based on similar indications for use and technological characteristics. Critical quantitative details about the performance of specific algorithms (measurements, segmentation), the size and characteristics of the datasets used for testing, and the methodology for establishing ground truth are not included in this public summary. Such detailed performance data is typically found in the full 510(k) submission, which is not publicly released in its entirety.
(220 days)
uWS-CT is a software solution intended to be used for viewing, manipulation, communication, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
The CT Oncology application is intended to support fast-tracking routine diagnostic oncology, staging, and follow-up, by providing a tool for the user to perform the segmentation and volumetric evaluation of suspicious lesions in lung or liver.
The CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.
The CT Dental application is intended to provide the user a tool to reconstruct panoramic and paraxial views of jaw.
The CT Lung Density Analysis application is intended to segment the lungs, lobes, and airways, providing the user quantitative parameters and structural information to evaluate the lung and airway.
The CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.
The CT Vessel Analysis application is intended to provide a tool for viewing, manipulating, and evaluating CT vascular images.
The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.
The CT Brain Perfusion application is intended to calculate parameters such as CBV and CBF in order to analyze functional blood-flow information about a region of interest (ROI) in the brain (the standard relation between these parameters is sketched after this list).
The CT Heart application is intended to segment heart and extract coronary artery. It also provides analysis of vascular stenosis, plaque and heart function.
The CT Calcium Scoring application is intended to identify calcifications and calculate the calcium score.
The CT Dynamic Analysis application is intended to support visualization of the CT datasets over time with the 3D/4D display modes.
The CT Bone Structure Analysis application is intended to provide visualization and labels for the ribs and spine, and to support a batch function for the intervertebral disks.
The CT Liver Evaluation application is intended to provide processing and visualization for liver segmentation and vessel extraction. It also provides a tool for the user to perform liver separation and residual liver segments evaluation.
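For orientation on the perfusion parameters named in the CT Brain Perfusion indication above, the quantities are conventionally related through the central volume principle. This is the textbook relation, not necessarily the specific algorithm used by uWS-CT:

```latex
% Central volume principle: cerebral blood flow (CBF) equals cerebral
% blood volume (CBV) divided by mean transit time (MTT).
% With CBV in mL/100 g and MTT in minutes, CBF is in mL/100 g/min.
\mathrm{CBF} = \frac{\mathrm{CBV}}{\mathrm{MTT}}
```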
uWS-CT is a comprehensive software solution designed to process, review and analyze CT studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional as well as anatomical datasets, acquired at one or more time points or including one or more time frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.
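To make the region statistics described above concrete, here is a minimal sketch of computing mean/maximum/minimum over a user-defined ROI with pydicom and NumPy. The file name and the circular ROI are hypothetical; this illustrates the general technique rather than uWS-CT's own code.

```python
import numpy as np
import pydicom

# Load one CT slice and convert stored values to Hounsfield units (HU)
ds = pydicom.dcmread("slice_0001.dcm")  # hypothetical file name
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

# User-defined circular ROI (center and radius in pixels, hypothetical)
rows, cols = np.ogrid[:hu.shape[0], :hu.shape[1]]
roi = (rows - 256) ** 2 + (cols - 256) ** 2 <= 40 ** 2

print(f"mean: {hu[roi].mean():.1f} HU")
print(f"max:  {hu[roi].max():.1f} HU")
print(f"min:  {hu[roi].min():.1f} HU")
# Volumetric evaluation (e.g., for lesions) would sum segmented voxels
# times the voxel volume across all slices of the study.
```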
The provided document, a 510(k) summary for the uWS-CT software, does not contain detailed information about specific acceptance criteria and the results of a study proving the device meets these criteria in the way typically required for AI/ML-driven diagnostics.
The document primarily focuses on demonstrating substantial equivalence to a predicate device (uWS-CT K173001) and several reference devices for its various CT analysis applications. It lists the functions of the new and modified applications (e.g., CT Lung Density Analysis, CT Brain Perfusion, CT Heart, CT Calcium Scoring, CT Dynamic Analysis, CT Bone Structure Analysis, CT Liver Evaluation) and compares them to those of the predicate and reference devices, indicating that their functionalities are "Same."
While the document states that "Performance data were provided in support of the substantial equivalence determination" and lists "Performance Evaluation Report" for various CT applications, it does not provide the specifics of these performance evaluations, such as:
- Acceptance Criteria: What specific numerical thresholds (e.g., accuracy, sensitivity, specificity, Dice score for segmentation) were set for each function?
- Reported Device Performance: What were the actual measured performance values?
- Study Design Details: Sample size, data provenance, ground truth establishment methods, expert qualifications, adjudication methods, or results of MRMC studies.
The document explicitly states:
- "No clinical study was required." (Page 16)
- Software Verification and Validation was provided, including hazard analysis, SRS, architecture description, environment description, and cyber security documents. However, these are general software development lifecycle activities and not clinical performance studies.
Therefore, based solely on the provided text, I cannot fill out the requested table or provide the detailed study information. The document suggests that the performance verification was focused on demonstrating functional equivalence rather than presenting quantitative performance metrics against pre-defined acceptance criteria in a clinical context.
Summary of what can be extracted and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance: This information is not provided in the document. The document states "Performance Evaluation Report" for various applications were submitted, but the content of these reports (i.e., the specific acceptance criteria and the results proving they were met) is not included in this 510(k) summary.
2. Sample size used for the test set and the data provenance: This information is not provided. The document states "No clinical study was required." The performance evaluations mentioned are likely internal verification and validation tests whose specifics are not detailed here.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: This information is not provided. Given that "No clinical study was required," it's unlikely a formal multi-expert ground truth establishment process for a clinical test set, as typically done for AI/ML diagnostic devices, was undertaken for this submission in a publicly available manner.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: This information is not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it: This information is not provided. The document explicitly states "No clinical study was required."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: The document states this is a "software solution intended to be used for viewing, manipulation, and storage of medical images" that "supports interpretation and evaluation of examinations within healthcare institutions." The listed applications provide "a tool for the user to perform..." or "a tool for the review and analysis...", which implies human-in-the-loop use. Standalone performance metrics are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc): This information is not provided.
8. The sample size for the training set: This information is not provided. The document is a 510(k) summary for a software device, not a detailed technical report on an AI/ML model's development.
9. How the ground truth for the training set was established: This information is not provided.
In conclusion, the supplied document is a regulatory submission summary focused on demonstrating substantial equivalence based on intended use and technological characteristics, rather than a detailed technical report of performance studies for an AI/ML device with specific acceptance criteria and proven results. For this type of information, one would typically need access to the full 510(k) submission, which is not publicly available in this format, or a peer-reviewed publication based on the device's clinical performance.
(406 days)
uWS-CT is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
The CT Oncology application is intended to support fast-tracking routine diagnostic oncology, staging, and follow-up, by providing a tool for the user to perform the segmentation of suspicious lesions in lung or liver. The CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.
The CT Dental application is intended to provide the user a tool to reconstruct panoramic and paraxial views of jaw. The CT Lung Density application is intended to provide the user a number of density parameters and structure information for evaluating tomogram scans of the lung.
The CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.
The CT Vessel Analysis application is intended to provide a tool for viewing and manipulating CT vascular images.
The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.
uWS-CT is a comprehensive software solution designed to process, review and analyze CT studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional as well as anatomical datasets, acquired at one or more time points or including one or more time frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.
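As an illustration of the DICOM network transfer described above, the sketch below uses the open-source pynetdicom library to send a CT image via C-STORE. The host, port, AE title, and file name are placeholders; this is a generic example, not the device's actual communication stack.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

ae = AE(ae_title="UWS_SCU")              # placeholder AE title
ae.add_requested_context(CTImageStorage)

# Placeholder PACS address; 104 is the conventional DICOM port
assoc = ae.associate("pacs.example.org", 104)
if assoc.is_established:
    ds = dcmread("slice_0001.dcm")       # hypothetical file
    status = assoc.send_c_store(ds)
    print(f"C-STORE status: 0x{status.Status:04X}")
    assoc.release()
else:
    print("Association with PACS failed")
```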
The provided document is a 510(k) Premarket Notification from Shanghai United Imaging Healthcare Co., Ltd. for their device uWS-CT. This document outlines the device's indications for use, technological characteristics, and comparison to predicate devices, but it does not contain a detailed study demonstrating that the device meets specific acceptance criteria based on human-in-the-loop or standalone performance.
Instead, the document primarily focuses on demonstrating substantial equivalence to predicate devices based on similar functionality and intended use, supported by software verification and validation testing, hazard analysis, and performance evaluations for various CT applications. It explicitly states that "No clinical study was required." and "No animal study was required." for this submission.
Therefore, I cannot provide the detailed information requested in the prompt's format (acceptance criteria table, sample size, expert ground truth, MRMC study, etc.) because these types of studies were not conducted or reported in this 510(k) submission.
The "Performance Data" section (Page 11) lists "Performance Evaluation Report For CT Lung Nodule," "Performance Evaluation Report For CT Oncology," etc., but these are internal reports that are not detailed in this public document. They likely refer to internal testing that verifies the software's functions perform as designed, rather than robust clinical performance studies against specific quantitative acceptance criteria with human readers or well-defined ground truth beyond internal validation.
What is present in the document regarding "performance" is:
- Software Verification and Validation: This typically involves testing that the software functions as designed, is free of bugs, and meets its specified requirements. The document mentions "hazard analysis," "software requirements specification (SRS)," "software architecture description," "software development environment description," "software verification and validation," and "cyber security documents."
- Performance Evaluation Reports for specific applications: These are listed but not detailed (e.g., CT Lung Nodule, CT Oncology). It's implied these show the software functions correctly for those applications.
In summary, based on the provided text, there is no information about:
- A table of acceptance criteria with reported device performance in the context of clinical accuracy or diagnostic performance.
- Sample sizes used for a test set in a clinical performance study.
- Data provenance for a clinical test set.
- Number of experts or their qualifications for establishing clinical ground truth.
- Adjudication methods for a clinical test set.
- Multi-Reader Multi-Case (MRMC) comparative effectiveness studies.
- Standalone (algorithm-only) performance studies against clinical ground truth.
- Type of clinical ground truth used (pathology, outcomes data, expert consensus from an external panel).
- Sample size for a training set (as no AI/ML model requiring a training set is explicitly discussed in terms of its performance data; the device is described as "CT Image Post-Processing Software" with various applications.)
- How ground truth for a training set was established.
The closest the document comes to "acceptance criteria" and "performance" are discussions of functional equivalence to predicate devices and general software validation, stating that the proposed device performs in a "similar manner" and has a "safety and effectiveness profile that is similar to the predicate device."
(77 days)
The Incisive CT is a Computed Tomography X-Ray System intended to produce images of the head and body by computer reconstruction of x-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, components and accessories. The Incisive CT is indicated for head, whole body, cardiac and vascular X-ray Computed Tomography applications in patients of all ages.
These scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The proposed Philips Incisive CT is a whole-body computed tomography (CT) X-Ray System featuring a continuously rotating x-ray tube, detectors, and gantry with multi-slice capability. The acquired x-ray transmission data is reconstructed by computer into cross-sectional images of the body taken at different angles and planes. This system also includes signal analysis and display equipment, patient and equipment supports, components, and accessories. The Philips Incisive CT has a 72cm bore and includes a detector array that provides 50cm scan field of view (FOV). The main components (detection system, the reconstruction algorithm, and the x-ray system) that are used in the proposed Philips Incisive CT have the same fundamental design characteristics and are based on comparable technologies as the current market predicate Philips Ingenuity CT (K160743, 08/08/2016). The main system modules and functionalities are:
1. Gantry. The Gantry consists of 4 main internal units:
a. Stator - a fixed mechanical frame that carries HW and SW
b. Rotor - a rotating circular stiff frame that is mounted in and supported by the stator
c. X-Ray Tube (XRT) and Generator - fixed to the Rotor frame
d. Data Measurement System (DMS) - a detectors array, fixed to the Rotor frame
2. Patient Support (Couch) - carries the patient in and out through the Gantry bore synchronized with the scan
3. Console - containing a Host computer and display that is the primary user interface
In addition to the above components and the software operating them, each system includes hardware and software for data acquisition, display, manipulation, storage and filming as well as post-processing into views other than the original axial images. Patient supports (positioning aids) are used to position the patient.
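To illustrate the computer reconstruction step described above, here is a minimal filtered back-projection sketch using scikit-image. It demonstrates the generic FBP principle on a software phantom; it is not Philips' proprietary reconstruction chain.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Simulate acquisition: forward-project a phantom at many gantry angles
phantom = shepp_logan_phantom()                  # 400x400 test image
angles = np.linspace(0.0, 180.0, 360, endpoint=False)
sinogram = radon(phantom, theta=angles)          # x-ray transmission data

# Filtered back-projection with a ramp filter reconstructs the slice
recon = iradon(sinogram, theta=angles, filter_name="ramp")
error = np.sqrt(np.mean((recon - phantom) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```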
The Philips Incisive CT scanner is compared to a predicate device, the Philips Ingenuity CT (K160743), for substantial equivalence. The provided document focuses on technical comparisons and non-clinical performance data rather than a clinical study with specific acceptance criteria that would typically be seen for a new AI-powered diagnostic device.
Here's an analysis of the provided information:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly state "acceptance criteria" in a quantitative sense with pass/fail thresholds for clinical performance. Instead, it demonstrates substantial equivalence to a predicate device by comparing technical specifications and imaging features. The core "acceptance criterion" is proving substantial equivalence to the predicate device, Philips Ingenuity CT (K160743).
Category | Acceptance Criteria (Implied: Substantially Equivalent to Predicate) | Reported Device Performance (Philips Incisive CT) | Conclusion |
---|---|---|---|
Scan Characteristics | | | |
Number of Slices | 64/128 | 64/128 | Identical. Substantially equivalent. |
Scan Modes | Surview, Axial Scan, Helical Scan | Surview, Axial Scan, Helical Scan | Identical. Substantially equivalent. |
Minimum Scan Time | 0.42 sec for 360° rotation | 0.35 sec for 360° rotation | Faster rotation speed to meet wider heart rate application. Safety and effectiveness are not affected. Substantially equivalent. |
Image (Spatial) Resolution | High resolution: 16 lp/cm, Standard resolution: 13 lp/cm | High resolution: 16 lp/cm, Standard resolution: 13 lp/cm | Identical. Substantially equivalent. |
Image Noise | 0.27% at 120 kV, 250 mAs, 10 mm slice thickness | 0.27% at 120 kV, 230 mAs, 10 mm slice thickness | Identical (despite slightly different mAs, the noise level is the same). Substantially equivalent. |
Slice Thicknesses | Helical: 0.67mm-5mm, Axial: 0.625mm-12.5mm | Helical: 0.67mm-5mm, Axial: 0.625mm-10.0mm | Essentially the same, does not affect safety and effectiveness. Substantially equivalent. |
Scan Field of View | Up to 500 mm | Up to 500 mm | Identical. Substantially equivalent. |
Image Matrix | Up to 1024 * 1024 | Up to 1024 * 1024 | Identical. Substantially equivalent. |
Imaging Features | (Function/User Interface/Workflow similar to Predicate) | | |
2D Viewer | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
MPR | Yes | Yes | User interface, algorithm principle, function, and workflow are similar/same. Substantially equivalent. |
3D (volume mode) | Yes | Yes | Volume rendering protocol, function, and workflow are similar/same. Substantially equivalent. |
Virtual Endoscopy (Endo) | Yes | Yes | VE rendering protocol, function, and workflow are similar/same. Substantially equivalent. |
Filming | Yes | Yes | Basic function (display, layout, editing, print management) similar/same. Substantially equivalent. |
Image matrix | 1024 * 1024 | 1024 * 1024 | Both are 1024 * 1024. Substantially equivalent. |
O-MAR | Yes | Yes | Algorithm Principle and workflow are same. Substantially equivalent. |
Dose Modulation | Yes | Yes | Function and workflow are same. Substantially equivalent. |
iPlanning | Manual | iPlanning (automated adjustment) | Workflow improvement for user assistance. Safety and effectiveness are not affected. Substantially equivalent. |
On line MPR | Yes (with tilt and trim) | Yes (without trim and tilt) | Can generate sagittal and coronal results. Other functions and workflow are same. Substantially equivalent. |
iBatch | Manual Batch | iBatch (automated identification) | Workflow feature to improve productivity. Safety and effectiveness are not affected. Substantially equivalent. |
Bolus Tracking | Yes | Yes (Post Threshold Delay longer) | Function and workflow are same (despite longer Post Threshold Delay). Substantially equivalent. |
SAS (Spiral Auto Start) | Yes | Yes (manual trigger only) | Other functions and workflow are same. Substantially equivalent. |
Worklist | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
MPPS | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
Reporting | Yes (including PDF) | Yes (no PDF support) | Format of exported report similar. Other functions and workflow are same. Substantially equivalent. |
CCT (Continuous CT) | Yes (with Volume display) | Yes (no Volume display support) | Other functions and workflow are same. Substantially equivalent. |
Brain Perfusion | Yes | Yes | User interface, principle, mechanism, and analysis parameters are similar/same. Substantially equivalent. |
Dental (Dental planning) | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
iDose4 | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
Helical Retrospective Tagging | Yes | Yes | ECG viewer user interface, function, and workflow are similar/same. Substantially equivalent. |
Axial Prospective Gating calcium scoring | Yes | Yes | ECG viewer user interface, function, and workflow are similar/same. Substantially equivalent. |
Step & Shoot | Yes (with arrhythmia handling) | Yes (no arrhythmia handling) | Other functions and workflow are same. Substantially equivalent. The lack of arrhythmia handling is noted but not deemed to affect substantial equivalence. |
CCS (Cardiac calcium scoring) | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
Supplementary Imaging Features (Compared to Philips MX 16 SLICE K091195) | | | |
CTC (CT Colonoscopy) | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
VA (Vessel Analysis) | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
LNA (Lung Nodule Analysis) | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
Supplementary Imaging Features (Compared to IntelliSpace Portal Platform K162025) | | | |
CAA (Cardiac Artery Analysis) | Yes | Yes | User interface, analysis of cardiac coronary artery, and workflow are similar/same. Substantially equivalent. |
CFA (Cardiac Function Analysis) | Yes | Yes | User interface, function, and workflow are similar/same. Substantially equivalent. |
Supplementary Imaging Features (Compared to BRILLIANCE DUAL ENERGY OPTION K090462) | | | |
DE (Dual Energy) | Yes | Yes | User interface, function, Algorithm Principle, and workflow are similar/same. Substantially equivalent. |
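Two figures in the comparison table above can be unpacked using standard CT conventions. The formulas below are general definitions, not values or derivations taken from the submission:

```latex
% A spatial-resolution cutoff of 16 lp/cm implies a smallest resolvable
% detail of half a line-pair width:
d_{\min} = \frac{1}{2 \times 16\ \mathrm{lp/cm}} \approx 0.31\ \mathrm{mm}

% Image noise quoted as a percentage is conventionally the standard
% deviation of CT numbers in a water ROI relative to the 1000 HU
% water-to-air scale, so 0.27\% corresponds to roughly 2.7 HU:
\mathrm{noise}\,(\%) = \frac{\sigma_{\mathrm{HU}}}{1000\ \mathrm{HU}} \times 100
```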
2. Sample size used for the test set and the data provenance
The document explicitly states: "The proposed Philips Incisive CT did not require clinical study since substantial equivalence to the legally marketed predicate device was proven with the verification/validation testing."
Therefore, there is no mention of a "test set" in the context of patient data, nor any information about data provenance (country of origin, retrospective/prospective). The evaluation was based on non-clinical performance data, primarily engineering verification and validation testing, as well as comparisons to the predicate device's specifications.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Given that no clinical study was performed and no patient-based "test set" was described, there were no experts used to establish ground truth in the traditional sense of clinical outcome assessment for the Incisive CT device. The ground truth for the technical comparisons was the established performance and specifications of the predicate device.
4. Adjudication method for the test set
Not applicable, as no clinical test set requiring adjudication of findings was described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
Not applicable. This is a CT scanner, not an AI-powered diagnostic software that assists human readers. The context is about the substantial equivalence of the imaging hardware and associated software functionalities.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Not applicable. This refers to the performance of the CT scanner itself, a hardware device with integrated software, not a separate standalone algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the "study" (which was non-clinical verification/validation and substantial equivalence comparison) was the established technical specifications and performance characteristics of the predicate device (Philips Ingenuity CT, K160743) and compliance with various international and FDA-recognized consensus standards.
8. The sample size for the training set
Not applicable. This device is a CT scanner, not a machine learning model that requires a training set of data in the AI sense. Its underlying technology and algorithms are based on established CT principles, and its performance is verified through engineering tests.
9. How the ground truth for the training set was established
Not applicable, for the same reason as point 8.
(53 days)
Illumeo system is an image management system intended to be used by trained professionals, including but not limited to radiologists.
Illumeo system is a software package used with general purpose computing hardware to acquire, store, distribute, process and display images and associated data throughout a clinical environment. The software performs digital image processing, measurement, manipulation and quantification of images, communication and storage.
This device is not to be used for mammography.
Illumeo is an image management system intended to be used by trained professionals, including but not limited to radiologists.
Illumeo system is a software package used with general purpose computing hardware to acquire, store, distribute, process and display images and associated data throughout a clinical environment. The software performs digital image processing, measurement, manipulation and quantification of images, communication and storage.
This device is not to be used for mammography.
Illumeo is a medical software system offering a primary interpretation solution for the visualization and evaluation of a variety of medical images derived from various imaging modalities, as well as non-imaging information. Illumeo interconnects with clinical imaging and non-imaging data sources to present, in addition to images, non-imaging data in patient context.
The provided text describes a Special 510(k) submission for the Illumeo system, focusing on its substantial equivalence to a predicate device after minor modifications. It does not contain the specific details about acceptance criteria, device performance, sample sizes, expert qualifications, or MRMC studies that you requested.
The document states that:
- Acceptance Criteria and Device Performance: "Verification and Validation (V&V) activities were performed for proposed Illumeo system and demonstrated that the predetermined acceptance criteria were successfully met." However, it does not describe what those acceptance criteria were nor the specific quantitative results of the device performance against them. It only generally states that the system "Meets the acceptance criteria and is adequate for its intended use and user needs."
- Study Type: Non-clinical verification and validation tests were performed. No clinical studies were required or conducted to support the equivalence for this submission ("Illumeo system did not requires clinical studies to support equivalence.").
- Ground Truth: The document does not discuss ground truth as it would be understood in the context of an AI/ML device relying on diagnostic accuracy, given it's a PACS system subject to a Special 510(k) for modifications.
- Sample Sizes (Test/Training): These details are not provided as no diagnostic performance study was conducted.
- Experts and Adjudication Methods: Not applicable, as there was no diagnostic performance study involving human readers.
- MRMC Comparative Effectiveness Study: Not conducted. The device is a PACS system with improved features, not an AI assisting human readers for a specific diagnostic task.
- Standalone Performance: Not applicable as it's not a standalone diagnostic AI algorithm. It's an image management and viewing system.
Summary of available information:
Category | Description |
---|---|
1. Acceptance Criteria & Performance | Acceptance Criteria: "predetermined acceptance criteria were successfully met." (Specific criteria not detailed in the provided text.) Reported Device Performance: "demonstrated that the design outputs of the modified device meet the design input requirements and do not raise new questions on safety and/or effectiveness." "Meets the acceptance criteria and is adequate for its intended use and user needs." (Specific quantitative performance metrics are not detailed, as this was a Special 510(k) for minor modifications to a PACS system, not a new AI diagnostic device.) |
2. Sample Size (Test) & Data Provenance | Not applicable. No diagnostic performance study involving a test set was conducted. The assessment was based on non-clinical verification and validation of software changes. |
3. Number & Qualifications of Experts | Not applicable. No diagnostic performance study requiring expert ground truth establishment was conducted. |
4. Adjudication Method | Not applicable. |
5. MRMC Comparative Effectiveness Study | No. The device is an image management system (PACS) not a diagnostic AI intended to assist human readers in a comparative effectiveness study. The modifications primarily involve improved system performance, integration, scalability, multimodality support, viewing tools, and UI enhancements. |
6. Standalone Performance Study | Not applicable. The Illumeo system is an image management system, not a standalone diagnostic algorithm. Its function is to acquire, store, process, and display images for trained professionals. |
7. Type of Ground Truth Used | Not applicable. The evaluation focused on software verification and validation, risk management, and compliance with standards, rather than diagnostic accuracy against a specific ground truth like pathology or outcomes data. |
8. Sample Size (Training Set) | Not applicable. The document describes a PACS system for which the concept of a "training set" (as in machine learning) is not relevant in the context presented. The system itself is "trained" in a sense through standard software development and testing cycles to perform its functions, rather than learning from a dataset. |
9. Ground Truth for Training Set | Not applicable. |
The study described is a non-clinical verification and validation effort, demonstrating compliance with international and FDA-recognized consensus standards (ISO 14971, IEC 62304, IEC 62366-1, NEMA-PS 3.1-3.20 DICOM, and FDA guidance for software in medical devices). The modifications were deemed "minor technology changes mainly designated to provide users further support in visualization" and did not change the intended use or fundamental scientific technology, hence a Special 510(k) was submitted. The conclusion states that "all risks are sufficiently mitigated, that no new risks are introduced, and that the overall residual risks are acceptable."
(141 days)
The Philips CT Big Bore is a Computed Tomography X-Ray System intended to produce images of the head and body by computer reconstruction of x-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, components and accessories. These systems are indicated for head and whole body X-ray Computed Tomography applications in oncology, vascular and cardiology, for patients of all ages.
These scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The Philips CT Big Bore is currently available in two system configurations, the Oncology configuration and the Radiology (Base) configuration.
The main components (detection system, the reconstruction algorithm, and the x-ray system) that are used in the Philips CT Big Bore have the same fundamental design characteristics and are based on comparable technologies as the predicate.
The main system modules and functionalities are:
- Gantry. The Gantry consists of 4 main internal units:
a. Stator - a fixed mechanical frame that carries HW and SW
b. Rotor - a rotating circular stiff frame that is mounted in and supported by the stator
c. X-Ray Tube (XRT) and Generator - fixed to the Rotor frame
d. Data Measurement System (DMS) - a detector array, fixed to the Rotor frame
- Patient Support (Couch) - carries the patient in and out through the Gantry bore synchronized with the scan
- Console - a two-part subsystem containing a Host computer and display that is the primary user interface, and the Common Image Reconstruction System (CIRS) - a dedicated, powerful image reconstruction computer
In addition to the above components and the software operating them, each system includes workstation hardware and software for data acquisition, display, manipulation, storage and filming as well as post-processing into views other than the original axial images. Patient supports (positioning aids) are used to position the patient.
This document describes the Philips CT Big Bore, a Computed Tomography X-Ray System. The submission focuses on demonstrating substantial equivalence to a predicate device rather than a standalone clinical efficacy study with acceptance criteria in the typical sense of a diagnostic AI product. Therefore, much of the requested information regarding clinical studies and expert review for ground truth is not directly applicable in the same way.
However, based on the provided text, we can infer and extract the following:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are framed in terms of achieving similar or improved performance compared to the predicate device and meeting established industry standards for CT systems. The reported device performance is primarily a comparison to the predicate device's specifications and measurements on phantoms.
Metric | Acceptance Criteria (Implicit: Similar to/Better than Predicate & Standards) | Reported Device Performance (Philips CT Big Bore / Tested Values) |
---|---|---|
Design/Fundamental Scientific Technology | ||
Application | Head/Body (Identical to Predicate) | Head/Body |
Scan Regime | Continuous Rotation (Identical to Predicate) | Continuous Rotation |
No. of Slices | Up to 40 (Predicate) | 16/32 (with optional WARP/DAS for 32 slices) |
Scan Modes | Surview, Axial Scan, Helical Scan (Identical to Predicate) | Surview, Axial Scan, Helical Scan |
Minimum Scan Time | 0.42 sec for 360° rotation (Identical to Predicate) | 0.42 sec for 360° rotation |
Image (Spatial) Resolution | 15 lp/cm max. (Predicate) | 16 lp/cm (±2 lp/cm) |
Image Noise, Body, STD Res. | 10.7 at 16.25 mGy (Predicate) | 10.7 |
Image Matrix | Up to 1024 x 1024 (Identical to Predicate) | Up to 1024 x 1024 |
Display | 1024 x 1280 (Identical to Predicate) | 1024 x 1280 |
Host Infrastructure | Windows XP (Predicate) | Windows 7 (Essentially the same, Windows based) |
CIRS Infrastructure | PC/NT computer based on Intel processor & custom Multiprocessor Array (Predicate) | Windows Vista & custom Multiprocessor Array (Identical, Windows based) |
Communication | Compliance with DICOM (Identical to Predicate) | Compliance with DICOM |
Dose Reporting and Management | No (Predicate) | Compliance with MITA XR25 and XR29 |
Generator and Tube Power | 60 kW (Predicate) | 80 kW (Software limited to 60kW) |
mA Range | 30-500mA (Predicate) | 20-665mA (Software limited to 500mA) |
kV Settings | 80, 120, 140 (Predicate) | 80, 100, 120, 140 |
Focal Spot | Dynamic Focal Spot (Identical to Predicate) | Dynamic Focal Spot in X axis |
Tube Type | MRC 800 (Predicate) | MRC Ice Tube (880) (Identical tube technology) |
Detectors Type | 2.4 or 4 cm NanoPanel detector (Predicate) | 2.4 cm NanoPanel (Revision, slightly better performance stated) |
Scan Field of View | Up to 600 mm (Identical to Predicate) | Up to 600 mm |
Detector Type | Single layer ceramic scintillator plus photodiode array (Identical to Predicate) | Single layer ceramic scintillator plus photodiode array |
Gantry Tilt | ±30° (Identical to Predicate) | ±30°
Gantry Rotation Speed | 143 RPM (Identical to Predicate) | 143 RPM |
Bore Size | 850 mm (Identical to Predicate) | 850 mm |
Low dose CT lung cancer screening | Yes (Predicate) | Yes (Configuration with Brilliance Big Bore cited in K153444) |
Communication between injector and scanner | SAS (Spiral Auto Start) (Predicate) | SAS and SyncRight |
DoseRight / Dose Management | Yes (K012238) (Predicate) | Yes and iDose4 |
Dose Modulation | D-DOM and Z-DOM (Predicate) | D-DOM (Angular DOM), Z-DOM, FDOM, 3D-DOM
Cone Beam Reconstruction Algorithm - COBRA | Yes (Identical to Predicate) | Yes |
Axial 2D Reconstruction | Yes (Identical to Predicate) | Yes |
Lung Nodule Assessment | Yes (K023785) (Identical to Predicate) | Yes |
ECG Signal Handling | Yes (Identical to Predicate) | Yes |
Cardiac Reconstruction | Yes (Identical to Predicate) | Yes |
Bolus Tracking | Yes (K02005) (Identical to Predicate) | Yes |
Calcium Scoring | Yes (Identical to Predicate) | Yes |
Heartbeat Calcium Scoring (HBCS) | Yes (Identical to Predicate) | Yes |
Virtual Colonoscopy | Yes (Identical to Predicate) | Yes |
Pediatric Applications Support | Yes (Identical to Predicate) | Yes |
Remote Workstation Option | Yes - MxView - later renamed Extended Brilliance Workstation (Predicate) | Yes - IntelliSpace Portal (K162025) |
Volume Rendering | Yes (Identical to Predicate) | Yes |
Liver Perfusion | Yes (Identical to Predicate) | Yes |
Dental Planning | Yes (Identical to Predicate) | Yes |
Functional CT | Yes (Identical to Predicate) | Yes |
Stent Planning | Yes (Identical to Predicate) | Yes |
Retrospective Tagging | Yes (Identical to Predicate) | Yes |
Prospective Cardiac Gating | Yes (Identical to Predicate) | Yes |
CT Performance Metrics (Phantoms) | ||
MTF | Cut-off: High Mode 16±2 lp/cm; Standard Mode: 13±2 lp/cm (Measured) |
CTDIvol (Head) | 10.61mGy/100mAs±25% at 120kV (Measured) | |
CTDIvol (Body) | 5.92mGy/100mAs±25% at 120kV (Measured) | |
CT number accuracy (Water) | 0±4HU (Measured) | |
Noise | 0.27% ± 0.04% at 120 kV, 250 mAs, 12 mm slice thickness, UA filter (Measured) | |
Slice Thickness (Nominal 0.75mm) | 0.5mm - 1.5mm (Measured) | |
Slice Thickness (Nominal 1.5mm) | 1.0mm - 2.0mm (Measured) |
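For reference on the CTDIvol rows above, the standard dose-index chain (per IEC 60601-2-44) is reproduced below. These are the generic definitions, not submission-specific calculations:

```latex
% CTDI_100: dose profile D(z) integrated over a 100 mm ion chamber,
% normalized by N slices of nominal thickness T:
\mathrm{CTDI}_{100} = \frac{1}{N\,T}\int_{-50\,\mathrm{mm}}^{+50\,\mathrm{mm}} D(z)\,\mathrm{d}z

% Weighted CTDI combines center and peripheral phantom positions:
\mathrm{CTDI}_{w} = \tfrac{1}{3}\,\mathrm{CTDI}_{100}^{\text{center}}
                  + \tfrac{2}{3}\,\mathrm{CTDI}_{100}^{\text{periphery}}

% Volume CTDI normalizes by helical pitch:
\mathrm{CTDI}_{\mathrm{vol}} = \frac{\mathrm{CTDI}_{w}}{\mathrm{pitch}}
```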
2. Sample Size for Test Set and Data Provenance
The document does not explicitly state a "test set" in the context of an AI/algorithm-driven diagnostic study. Instead, it refers to "bench testing included basic CT performance tests on phantoms" and "Sample clinical images were provided with this submission, which were reviewed and evaluated by radiologists."
- Sample Size for Test Set: Not specified for clinical images. For bench testing, it refers to "phantoms."
- Data Provenance: Not specified for the "sample clinical images." Given the context of a 510(k) for a hardware device, it's highly likely these were internal and possibly from a variety of sources. It's not stated whether they were retrospective or prospective.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: "radiologists" (plural, but exact number not specified).
- Qualifications of Experts: Only "radiologists" are mentioned. No details on years of experience or subspecialty.
4. Adjudication Method for Test Set
- Adjudication Method: Not specified. The document states, "All images were evaluated to have good image quality," suggesting a qualitative assessment rather than a structured adjudication process for a specific diagnostic task.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, a typical MRMC comparative effectiveness study was not performed as described. This submission is for a CT scanner itself, not an AI-assisted interpretation tool where human readers' performance with and without AI would be compared.
- Effect Size of Human Readers with AI vs. without AI: Not applicable, as this was not an AI-assistance study.
6. Standalone (Algorithm Only) Performance Study
- Standalone Study: No, this was not a standalone algorithm performance study. The submission is for a complete CT imaging system. The performance metrics reported are for the overall system, not an isolated algorithm. The document mentions "optional software algorithm called WARP or DAS" for increasing slice count, and features like "iDose4" (an extension of DoseRight) and "FDOM, 3D-DOM" for dose modulation, but their standalone performance is not detailed in terms of a clinical study.
7. Type of Ground Truth Used
- Type of Ground Truth: For the "sample clinical images," the ground truth seems to be expert opinion / qualitative assessment by radiologists that the image quality was "good." For the technical performance parameters (MTF, CTDIvol, CT number accuracy, Noise, Slice Thickness), the ground truth was derived from physical phantom measurements against established technical specifications.
8. Sample Size for the Training Set
- Sample Size for Training Set: Not applicable. This document describes a CT scanner (hardware and embedded software), not a machine learning model that would have a separate "training set" in the conventional sense. The "training" for the system's development would be through engineering design, iterative testing, and adherence to established physical and software engineering principles.
9. How Ground Truth for the Training Set Was Established
- How Ground Truth for Training Set Was Established: Not applicable. (See point 8). The development of the CT system likely involved extensive engineering design, simulations, and validation against known physical principles and performance targets, which are fundamentally different from establishing ground truth for a machine learning training set.
(142 days)
The NeuViz Prime Multi-Slice CT Scanner System can be used as a whole body computed tomography X-ray system featuring a continuously rotating X-ray tube and detector array. The acquired X-ray transmission data is reconstructed by computer into cross-sectional images of the body from either the same axial plane taken at different angles or spiral planes taken at different angles.
The NeuViz Prime Multi-Slice CT Scanner System is composed of a gantry, a patient couch, an operator console and includes image acquisition hardware and software, and associated accessories. It is designed to be a head and whole body X-ray computed tomography scanner which features a continuously rotating tube-detector system and functions according to the fan beam principle. The system provides filtered back-projection (FBP) and an iterative reconstruction algorithm (ClearView, cleared in K133373) to reconstruct images. The end user can choose to apply either ClearView or the FBP to the acquired raw data. The system software is an interactive program used for X-ray scan control, image reconstruction, and image archive/evaluation. It provides the following digital image processing and visualization tools:
- Supports the following scan speeds: 0.259 s (option), 0.32 s (option), 0.374 s (option), 0.4 s (option), 0.5 s, 0.6 s, 0.8 s, 1.0 s, 1.5 s, 2.0 s
- Surview scan
- Dual surview
- Spiral scan
- Axial scan
- Image reconstruction
- Plan scan
- Patient information management
- Patient information registration
- Protocol selection
- O-Dose
- Bolus tracking
- SAS
- Home
- Film
- Report
- 2D
- MPR
- 3D
- VE (Virtual Endoscopy)
- Vessel Analysis
- Dicom Viewer
- Bar code Reader
- Dual Monitor
- CCT Scan
- ClearView
- iHD
- Cardiac Scan
- Dual Energy Scan and Reconstruction
- Dental Analysis
- Virtual Colonoscopy
- Brain Perfusion
- Body Perfusion
- Lung Nodule Analysis
- Lung Density Analysis
- Coronary Analysis
- Cardiac Calcium Scoring (see the Agatston-style sketch after this list)
- Cardiac Function Analysis
- Cardiac Viewer
- Fat Analysis
- CTDSA
- Tumor Assessment
- Preprocessing function
- AVW.Cloud
- Prism Viewer
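For background on the Cardiac Calcium Scoring feature flagged in the list above, here is a minimal sketch of the classic Agatston method (130 HU threshold, area × density weight per lesion per slice). It reflects the published scoring convention under simplified assumptions, not NeuViz Prime's actual implementation.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu: np.ndarray, pixel_area_mm2: float) -> float:
    """Agatston score contribution of one axial slice.

    `hu` is the slice in Hounsfield units; lesions are connected
    components at or above the conventional 130 HU threshold.
    Simplified: no minimum-area filter, 3 mm slices assumed.
    """
    mask = hu >= 130
    labels, n = ndimage.label(mask)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        peak = hu[lesion].max()
        # Density weight per the published Agatston convention
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += lesion.sum() * pixel_area_mm2 * weight
    return score

# Toy example: one synthetic calcification on a 512x512 slice
slice_hu = np.full((512, 512), 40.0)
slice_hu[250:254, 300:305] = 320.0          # 20 px lesion, peak 320 HU
print(agatston_slice_score(slice_hu, 0.5))  # 20 * 0.5 mm² * weight 3 = 30.0
```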
The provided document is a 510(k) summary for the NeuViz Prime Multi-Slice CT Scanner System. It describes the device's characteristics and its substantial equivalence to a predicate device but does not contain acceptance criteria for specific performance metrics or detailed results of a study designed to prove the device meets those criteria, especially in an AI context. The document focuses on showing non-inferiority to an existing device rather than meeting specific quantifiable performance targets with clinical evidence in the format you requested for an AI/ML device.
However, I can extract information related to the device's performance characteristics, safety, and the "clinical testing" that was performed, even if it doesn't align perfectly with the AI/ML-focused questions.
Here's an attempt to answer your questions based on the provided text, while acknowledging the limitations for AI-specific criteria:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of explicit acceptance criteria with numerical targets for clinical performance (e.g., sensitivity, specificity, accuracy for a specific disease detection task). Instead, it states that "the subject device performs as intended" and "NeuViz Prime can be used as defined in its clinical workflow and intended use," and that "The Results indicated that the images were of diagnostic quality."
For image quality metrics, it lists:
- CT number accuracy and uniformity
- MTF (Modulation Transfer Function)
- Noise
- Slice sensitivity profiles
- CTDI (Computed Tomography Dose Index)
The document doesn't report specific numerical acceptance criteria for these image quality metrics, nor does it provide the measured performance values for them. It only states that "The result of all conducted testing was found acceptable to support the claim of substantial equivalence."
It does provide CTDI Dose values for the subject device compared to the predicate:
Acceptance Criteria (Implied by Comparison) | Reported Device Performance (NeuViz Prime) | Predicate Device (NeuViz 128) | Comments |
---|---|---|---|
CTDI Dose (Head) | 14.2 mGy/100mAs | 13.0 mGy/100mAs | Approximately 10% higher due to beam filter and wedge material differences. |
CTDI Dose (Body) | 7.2 mGy/100mAs | 6.5 mGy/100 mAs | Approximately 10% higher due to beam filter and wedge material differences. |
Image Quality | Images were of diagnostic quality | N/A (implied similar) | Based on evaluation by a qualified radiologist using a 5-point Likert scale. |
Functionality | Performs as intended | N/A | Verified through functional, smoke, and regression tests, adhering to software lifecycle processes and addressing potential defects. |
2. Sample size used for the test set and the data provenance
The "clinical testing" involved an "image evaluation" where "sample images were provided to show the performance of the system in presence of implants." This suggests a test set composed of image data.
- Sample Size: Not explicitly stated. The document refers to "images of the brain, chest, abdomen and spine/extremities of the body area," implying multiple images, but no specific count is given.
- Data Provenance: Not explicitly stated. It is likely retrospective data as it describes an "image evaluation" of existing images rather than a prospective clinical trial. The location of data origin (e.g., country) is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: One.
- Qualifications of Experts: "A qualified radiologist." No specific experience level (e.g., "10 years of experience") is provided.
4. Adjudication method for the test set
- Adjudication Method: Not explicitly an adjudication method in the sense of multiple readers reaching consensus. The images were "scored using a 5 point Likert scale by a qualified radiologist." This implies a single-reader assessment rather than a consensus or adjudicated ground truth process.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- MRMC Study: No, a multi-reader multi-case comparative effectiveness study was not reported. The provided document describes a CT scanner system, not an AI-assisted diagnostic tool for which human reader improvement would be typically measured. The "clinical testing" described was an image quality assessment by a single radiologist.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance: Not applicable in the context of this submission. The NeuViz Prime is a CT scanner, a hardware device that generates images. While it has reconstruction algorithms (FBP, ClearView, iHD) and image processing tools (e.g., Lung Nodule Analysis, Cardiac Calcium Scoring), the submission focuses on the overall performance of the imaging system and its substantial equivalence to a predicate device, not on the standalone performance of an AI algorithm intended for diagnostic interpretation. It does mention "The main algorithm of Prism Viewer Application is identifying of substances and calculating of dual energy images," but no standalone performance metrics are provided for this.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: The document refers to an "image evaluation" where images were "scored using a 5 point Likert scale by a qualified radiologist." This points to expert opinion/scoring as the basis for evaluating "diagnostic quality." It does not mention pathology, outcomes data, or a consensus of multiple experts.
8. The sample size for the training set
- Training Set Sample Size: Not applicable/not provided. This document describes the clearance of a CT scanner system, not an AI/ML algorithm that would typically have a distinct training set. While the system's reconstruction algorithms (like ClearView, iHD) would have been developed and "trained" or optimized during their creation, this document does not provide details on their specific training sets.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable/not provided. Similar to point 8, the document does not discuss the training of AI/ML models. If "ClearView" or other advanced algorithms involved machine learning at their core, the method for establishing their training ground truth is not detailed in this 510(k) summary.