Search Results
Found 8 results
510(k) Data Aggregation
(96 days)
The Low Dose CT Lung Cancer Screening Option for Canon/Toshiba CT systems is indicated for using low dose CT for lung cancer screening. The screening must be conducted with the established program criteria and protocols that have been approved and published by a governmental body, a professional medical society and/or Canon.
Information from professional societies related to lung cancer screening can be found from, but is not limited to: American College of Radiology® (ACR) - resources, technical specifications, and accreditation; American Association of Physicists in Medicine (AAPM) - lung cancer screening protocols and radiation management.
The low dose lung cancer screening option is an indication being added to the following existing, previously FDA-cleared scanners: [List of Aquilion and Lightning CT scanner models and their corresponding 510(k) numbers]. No functional, performance, feature, or design changes are being made to the devices that will be indicated for low dose lung cancer screening. The devices already include low dose lung screening protocols, intended for use in the review of thoracic CT images within the established inclusion criteria of programs/protocols that have been approved and published by either a governmental body or professional medical society.
The provided text describes a 510(k) premarket notification for a "Low Dose CT Lung Cancer Screening Option" from Canon Medical Systems Corporation. The submission seeks to add this indication to existing, previously FDA-cleared CT scanners. The key claim is substantial equivalence to a predicate device (Aquilion RXL, K121553, which is a successor to the Aquilion 16 used in the National Lung Screening Trial - NLST). The device's performance is demonstrated through bench testing only, not a clinical study involving human subjects or AI-assisted readings.
Therefore, the following information can be extracted/inferred:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Bench Test Metrics) | Relevance to Low-Dose Lung Cancer Screening | Reported Device Performance |
---|---|---|
Modulation Transfer Function (MTF) | Quantifies the in-plane spatial resolution performance of the system. | Demonstrated performance substantially equivalent to the NLST predicate. |
Axial Slice Thickness | Quantifies the longitudinal resolution performance of the system. | Demonstrated performance substantially equivalent to the NLST predicate. |
Contrast to Noise Ratio (CNR) | Quantifies the signal strength relative to the standard deviation of noise. | Demonstrated performance substantially equivalent to the NLST predicate. |
CT number uniformity | Quantifies the stability of the Hounsfield Unit for water across the FOV. | Demonstrated performance substantially equivalent to the NLST predicate. |
Noise Performance (Noise Power Spectrum) | Quantifies the noise properties of the system. | Demonstrated performance substantially equivalent to the NLST predicate. |
Note: The document states that performance was "substantially equivalent" to the predicate. Specific numerical values for the reported performance are not provided in this regulatory summary.
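The summary reports only qualitative equivalence, so no numbers can be reproduced here, but the phantom metrics in the table have standard operational definitions. The sketch below is a generic illustration with simulated ROI values, not Canon's documented test procedure; it shows how CNR and CT number uniformity are commonly computed from region-of-interest statistics:

```python
import numpy as np

def cnr(roi_signal: np.ndarray, roi_background: np.ndarray) -> float:
    """Contrast-to-noise ratio: mean contrast between two ROIs divided by background noise."""
    contrast = abs(roi_signal.mean() - roi_background.mean())
    return float(contrast / roi_background.std(ddof=1))

def ct_number_uniformity(center_roi: np.ndarray, peripheral_rois: list) -> float:
    """Uniformity: largest absolute deviation (HU) of peripheral ROI means from the center ROI mean."""
    center_mean = center_roi.mean()
    return float(max(abs(roi.mean() - center_mean) for roi in peripheral_rois))

# Illustrative use on simulated water-phantom ROIs (values in HU):
rng = np.random.default_rng(0)
water = rng.normal(loc=0.0, scale=5.0, size=(64, 64))     # background / center ROI
low_contrast = rng.normal(loc=10.0, scale=5.0, size=(64, 64))   # low-contrast insert ROI
print(f"CNR ≈ {cnr(low_contrast, water):.2f}")
print(f"Uniformity ≈ {ct_number_uniformity(water, [water + 1.5, water - 2.0]):.1f} HU")
```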
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not applicable in the traditional sense of a clinical test set with patient data. The "test set" consists of bench testing data from representative scanners from different CT system families. One device from each of the identified families (Aquilion 16/32/64/RXL, PRIME/PRIME SP, ONE/ViSION/Genesis, and Lightning) was used for bench testing.
- Data Provenance: The data is from bench testing performed by Canon Medical Systems Corporation. The document does not specify the country of origin for this bench testing data, but the manufacturer is Canon Medical Systems Corporation (Japan) with a U.S. agent. The original NLST data (which the predicate is compared against) was from a large-scale, prospective clinical trial conducted in the United States.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable. This submission relies on bench testing for substantial equivalence, not a clinical study requiring expert ground truth for image interpretation.
4. Adjudication Method for the Test Set
Not applicable, as no human readers or clinical image interpretation were part of the presented performance data.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
No. This submission is for a CT scanner's indication for low-dose lung cancer screening, not an AI-powered diagnostic assist device. The performance demonstration is based on the physical imaging characteristics of the CT system.
6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done
Not applicable. This is for a CT imaging device, not a standalone algorithm.
7. The Type of Ground Truth Used
The "ground truth" for this substantial equivalence argument is the performance of the predicate device (Aquilion RXL), which is stated to have similar technological characteristics and performance equivalent to the Aquilion 16 used in the NLST. The "ground truth" for the benefit of low-dose CT lung cancer screening itself comes from clinical literature, specifically referencing the National Lung Screening Trial (NLST) results, which demonstrated reduced mortality from lung cancer with low-dose CT screening. However, the device's performance itself is measured against established phantom-based image quality metrics.
8. The Sample Size for the Training Set
Not applicable. This is a CT imaging device, not an AI/ML algorithm that requires a training set of data.
9. How the Ground Truth for the Training Set Was Established
Not applicable.
(78 days)
This device is indicated to acquire and display cross-sectional volumes of the whole body, to include the head.
The Aquilion Prime SP has the capability to provide volume sets. These volume sets can be used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician.
The Aquilion Prime SP TSX-303B/1 is an 80-row CT System that is intended to acquire and display cross-sectional volumes of the whole body, including the head. This system is based upon the technology and materials of previously marketed Toshiba CT systems.
The provided text describes a 510(k) submission for the Toshiba Aquilion Prime SP, TSX-303B/1, v8.4. It outlines modifications to a previously cleared CT system. While the document mentions various performance evaluations and studies, it does not contain specific acceptance criteria tables or detailed study designs that definitively "prove" the device meets acceptance criteria in the format of a typical peer-reviewed clinical study. Instead, it focuses on demonstrating substantial equivalence to a predicate device through engineering and performance testing.
However, I can extract and infer information about the testing and performance as described in the document.
Missing Information:
- A clear table of acceptance criteria for specific performance metrics. The document describes improvements but doesn't explicitly state "acceptance criteria" values met.
- Detailed sample sizes for all tests.
- Specific data provenance for all tests (e.g., country of origin, retrospective/prospective).
- Number and qualifications of experts for all ground truth establishment.
- Adjudication methods.
- MRMC comparative effectiveness study details (effect size of human reader improvement with AI).
- Standalone algorithm performance (the device is a CT system, not an algorithm in the AI sense).
- Sample size for the training set.
- How ground truth for the training set was established.
Based on the provided text, here's what can be extracted and inferred:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of acceptance criteria. Instead, it describes performance improvements and that the modified system "demonstrates equivalent or slightly improved image quality characteristics." The performance evaluations are primarily focused on physical parameters and dose reduction, not diagnostic accuracy in the way an AI algorithm might be assessed against clinical endpoints.
Performance Metric | Reported Device Performance (Aquilion Prime SP, TSX-303B/1, v8.4) | Implied Acceptance Criterion (relative to predicate) |
---|---|---|
Spatial Resolution | Evaluated; demonstrated equivalent or slightly improved image quality. | Equivalent or improved |
Axial Slice Thickness/Slice Sensitivity Profile | Evaluated; demonstrated equivalent or slightly improved image quality. | Equivalent or improved |
CT Number Magnitude/Uniformity | Evaluated; demonstrated equivalent or slightly improved image quality. | Equivalent or improved |
Noise Properties | Evaluated; demonstrated equivalent or slightly improved image quality. | Equivalent or improved |
Low Contrast Detectability (LCD) | Evaluated; demonstrated equivalent or slightly improved image quality. | Equivalent or improved |
Contrast-to-Noise Ratio (CNR) | Evaluated; demonstrated equivalent or slightly improved image quality. | Equivalent or improved |
Dose Reduction (with AIDR 3D Enhanced) | 51% to 75% dose reduction supported while preserving LCD and high contrast spatial resolution. | Not explicitly stated, but demonstrated within range |
Dose Reduction (with PURE ViSION Optics) | 20%-31% quantitative dose reduction. | Not explicitly stated, but demonstrated within range |
LCD Improvement (Head, PURE ViSION Optics) | Range 13%-19% improvement. | Not explicitly stated, but demonstrated improvement |
LCD Improvement (Body, PURE ViSION Optics) | Range 15%-22% improvement. | Not explicitly stated, but demonstrated improvement |
Noise Reduction (PURE ViSION Optics) | 13% noise reduction at the same dose. | Not explicitly stated, but demonstrated improvement |
Diagnostic Quality of Images | Produces images of diagnostic quality for head, chest, abdomen, and peripheral exams. | Diagnostic quality maintained |
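For context on the dose figures above, percent dose reduction at matched image quality is conventionally expressed relative to the reference protocol. The snippet below illustrates that arithmetic with made-up CTDIvol values; the submission itself reports only the percentage ranges:

```python
def percent_dose_reduction(dose_reference_mGy: float, dose_reduced_mGy: float) -> float:
    """Percent dose reduction relative to the reference protocol (conventional definition)."""
    return 100.0 * (dose_reference_mGy - dose_reduced_mGy) / dose_reference_mGy

# Hypothetical CTDIvol values only -- the submission reports percentages, not absolute doses.
print(percent_dose_reduction(10.0, 4.9))  # ≈ 51.0 -> lower end of the reported 51%-75% range
print(percent_dose_reduction(10.0, 2.5))  # ≈ 75.0 -> upper end of the reported range
```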
2. Sample size used for the test set and the data provenance
- Sample Size for Physical Performance Tests: Not explicitly stated. The tests involved "model observer studies" using MITA-FDA LCD Head and MITA-FDA LCD Body phantoms, implying a phantom-based test set rather than patient data.
- Sample Size for Image Review: "Representative diagnostic images" were obtained. The exact number is not specified.
- Data Provenance: Not specified. Phantoms for performance tests. Clinical images for diagnostic quality assessment (implicitly from a clinical setting, but no country of origin or retrospective/prospective status is mentioned).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: One.
- Qualifications of Expert: An "American Board Certified Radiologist." Further details on experience (e.g., years) are not provided.
- Role: This radiologist "reviewed" the "representative diagnostic images" to confirm they were of "diagnostic quality."
4. Adjudication method for the test set
- Adjudication Method: Not applicable or not specified in detail. The document states a single American Board Certified Radiologist reviewed images. There is no mention of consensus or multi-reader adjudication for this informal review of diagnostic quality. For the quantitative performance metrics (dose reduction, LCD, noise), these were based on phantom studies and model observer analysis, not human reader adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No. The document does not describe a MRMC comparative effectiveness study. This submission is for a CT system itself, not an AI-assisted diagnostic tool in the typical sense of showing improved human reader performance. The "AI" mentioned (AIDR 3D Enhanced, SEMAR) refers to image processing algorithms within the CT system to improve image quality or reduce artifacts, not a separate AI application for diagnosis or interpretation assistance that would warrant an MRMC study comparing human readers with and without its aid.
6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done
- Standalone Performance: Yes, in a way. The "performance testing" of the modified system, including spatial resolution, CT number, noise properties, LCD, and CNR, as well as the quantitative dose reduction and LCD/noise improvement studies using phantoms and model observers, represent a standalone evaluation of the system's technical image quality parameters. These are inherent algorithmic and hardware performance metrics of the CT scanner, not dependent on human interpretation for their measurement.
7. The type of ground truth used
- For Quantitative Performance: Model observer studies using MITA-FDA LCD Head and MITA-FDA LCD Body phantoms. These phantoms represent a controlled, objective ground truth for physical image quality parameters.
- For Diagnostic Quality: The subjective assessment of an "American Board Certified Radiologist" confirming images were of "diagnostic quality." This is expert opinion/consensus for a qualitative judgment rather than a definitive "ground truth" like pathology.
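The model observer studies cited above are not described in detail, but low-contrast detectability is commonly scored with a channelized Hotelling observer. The following is a generic sketch of that calculation on simulated ROIs (an assumption about methodology, not the MITA-FDA protocol actually used):

```python
import numpy as np

def hotelling_detectability(signal_rois: np.ndarray, noise_rois: np.ndarray,
                            channels: np.ndarray) -> float:
    """Detectability index d' of a channelized Hotelling observer.

    signal_rois / noise_rois: (n_images, n_pixels) ROIs with and without the low-contrast
    object; channels: (n_pixels, n_channels) channel matrix (e.g., difference-of-Gaussians).
    """
    v_sig = signal_rois @ channels                     # channel outputs, signal present
    v_noi = noise_rois @ channels                      # channel outputs, signal absent
    delta = v_sig.mean(axis=0) - v_noi.mean(axis=0)
    cov = 0.5 * (np.cov(v_sig, rowvar=False) + np.cov(v_noi, rowvar=False))
    return float(np.sqrt(delta @ np.linalg.solve(cov, delta)))

# Toy example on simulated 16x16 ROIs with three difference-of-Gaussian channels.
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 16)
xx, yy = np.meshgrid(x, x)
r2 = (xx**2 + yy**2).ravel()
channels = np.stack([np.exp(-r2 / s**2) - np.exp(-r2 / (2 * s) ** 2) for s in (0.2, 0.4, 0.8)],
                    axis=1)
signal = 5.0 * np.exp(-r2 / 0.3**2)                    # faint disc-like object
noise_rois = rng.normal(0.0, 10.0, size=(200, 256))
signal_rois = rng.normal(0.0, 10.0, size=(200, 256)) + signal
print(f"d' ≈ {hotelling_detectability(signal_rois, noise_rois, channels):.2f}")
```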
8. The sample size for the training set
- Training Set Sample Size: Not applicable / Not provided. This document describes a 510(k) submission for a CT scanner, not a machine learning algorithm that requires a "training set" in the conventional sense. While there might be internal development and validation data, it's not discussed as a distinct "training set" within this regulatory context.
9. How the ground truth for the training set was established
- Ground Truth Establishment for Training Set: Not applicable / Not provided, as there is no described training set for an AI algorithm in the context of this submission.
(104 days)
The device is a diagnostic imaging system that combines Positron Emission Tomography (PET) and X-ray Computed Tomography (CT) systems. The CT component produces cross-sectional images of the body by computer reconstruction of x-ray transmission data. The PET component images the distribution of PET radiopharmaceuticals in the patient body. The PET component utilizes CT images for attenuation correction and anatomical reference in the fused PET and CT images.
This device is to be used by a trained health care professional to gather metabolic and functional information from the distribution of the radiopharmaceutical in the body for the assessment of metabolic and physiologic functions. This information can assist in the evaluation, detection, diagnosis, therapeutic planning and therapeutic outcome assessment of (but not limited to) cancer, cardiovascular disease and brain dysfunction. Additionally, this device can be operated independently as a whole body multi-slice CT scanner.
Celesteion, PCA-9000A/3, V6.4, is a large bore, TOF, PET-CT system, which combines a high-end CT system with a high-throughput PET system. The high-end CT system is a multi-slice helical CT scanner with a gantry aperture of 900 mm and a maximum scanning field of 700 mm. The high-throughput PET system has a time of flight (TOF) detector with temporal resolution of 450 ps. Celesteion, PCA-9000A/3, V6.4, is intended to acquire PET images of any desired region of the whole body and CT images of the same region (to be used for attenuation correction or image fusion), to detect the location of positron emitting radiopharmaceuticals in the body with the obtained images. This device is used to gather the metabolic and functional information from the distribution of radiopharmaceuticals in the body for the assessment of metabolic and physiologic functions. This information can assist research, diagnosis, therapeutic planning, and therapeutic outcome assessment. This device can also function independently as a whole body multi-slice CT scanner.
The provided text is a 510(k) summary for the Celesteion, PCA-9000A/3, v6.4 medical device. It does not contain information about a study with acceptance criteria and reported performance in the format requested. The document primarily focuses on establishing substantial equivalence to a predicate device and outlines technical specifications and compliance with various standards.
Here's a breakdown of why this information is not present and what is discussed instead:
- Type of Submission: This is a 510(k) premarket notification for a modification to an existing device (Celesteion, PCA-9000A/2). The primary goal of a 510(k) is to demonstrate that a new device is "substantially equivalent" to a legally marketed predicate device, meaning it's as safe and effective. It generally doesn't require new clinical efficacy studies with specific acceptance criteria in the same way a PMA (Premarket Approval) would for novel devices.
- Focus of "Testing": The "Testing" section mentions "Risk analysis and verification/validation testing conducted through bench testing." This refers to engineering tests to ensure the device performs according to its specifications and complies with design controls and quality systems, not typically clinical studies to prove efficacy against acceptance criteria.
- Image Quality Metrics: It states, "Image quality metrics studies concluded that the subject device is substantially equivalent to the predicate device with regard to spatial resolution, CT number and contrast-to-noise ratio and noise properties." This is a comparison to the predicate, not performance against predefined acceptance criteria in a clinical study.
- PSF Claims: "Additional bench testing was conducted to support PSF claims including improved contrast recovery, sharper point source in air, more uniform point size across the field of view, ringing artifact reduction, SUV increase and reduced reconstruction time." Again, these are technical performance improvements, likely measured against internal engineering benchmarks, not clinical acceptance criteria established for a patient outcome study.
- Standards Compliance: Much of the "testing" described involves adherence to international standards (IEC, NEMA), which ensures safety and basic performance, but doesn't usually involve clinical acceptance criteria of the type requested.
- Software Documentation: Mentions software documentation for a "Moderate Level of Concern," which relates to software validation, not a clinical study.
Therefore, I cannot populate the table or answer the specific questions about acceptance criteria, sample sizes, ground truth, or MRMC studies because the provided document does not contain this information. The document confirms that the device is a modified PET-CT system and that changes (like a new CT detector, metal artifact reduction software, PET respiratory gating, and PSF correction) do not affect "safety or efficacy" as demonstrated through "performance testing" and comparison to a predicate device. This is typical for a 510(k) submission.
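As background for the PSF-related "improved contrast recovery" claim, contrast recovery for hot spheres is commonly quantified in NEMA NU 2-style phantom tests along the lines shown below. The function and numbers are illustrative assumptions; the submission reports no quantitative values or metric definitions:

```python
def contrast_recovery_hot(measured_sphere: float, measured_background: float,
                          true_activity_ratio: float) -> float:
    """Percent contrast recovery for a hot sphere, NEMA NU 2-style:
    ((C_sphere / C_background) - 1) / (true sphere-to-background ratio - 1) * 100."""
    return 100.0 * ((measured_sphere / measured_background) - 1.0) / (true_activity_ratio - 1.0)

# Hypothetical activity concentrations (kBq/mL); the submission gives no quantitative values.
print(contrast_recovery_hot(measured_sphere=28.0, measured_background=10.0,
                            true_activity_ratio=4.0))  # 60.0 %
```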
(30 days)
This device is indicated to acquire and display cross-sectional volumes of the whole body, to include the head.
The Aquilion Lightning has the capability to provide volume sets. These volume sets can be used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician.
The Aquilion Lightning, TSX-036A/1, v8.4 is an 80-row CT System that is intended to acquire and display cross-sectional volumes of the whole body, including the head. This system is based upon the technology and materials of previously marketed Toshiba CT systems.
The provided text describes a 510(k) submission for the Aquilion Lightning, TSX-036A/1, V8.4 CT system. As such, it focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed study with specific acceptance criteria and performance metrics for a novel AI-powered diagnostic device.
Therefore, much of the requested information, particularly regarding acceptance criteria for diagnostic performance, sample sizes for test sets in an AI context, expert ground truth establishment, MRMC studies, and standalone AI performance, is not present in this document because it describes a computed tomography x-ray system, not an AI software.
However, I can extract information related to the device's technical specifications and how its performance was assessed for regulatory clearance.
Here's a breakdown of the available information based on your request:
1. A table of acceptance criteria and the reported device performance:
The document doesn't define specific quantitative "acceptance criteria" in the typical sense of a diagnostic performance study (e.g., sensitivity, specificity thresholds). Instead, it states that the device was evaluated against performance metrics relevant to CT image quality and found to be "substantially equivalent" to the predicate.
Performance Metric | Reported Device Performance |
---|---|
Spatial Resolution | Demonstrated substantial equivalence to predicate device |
CT Number Magnitude and Uniformity | Demonstrated substantial equivalence to predicate device |
Noise Properties | Demonstrated substantial equivalence to predicate device |
Low Contrast Detectability | Demonstrated substantial equivalence to predicate device |
CNR Performance | Demonstrated substantial equivalence to predicate device |
Diagnostic Image Quality (overall) | Produces images of diagnostic quality for head, chest, abdomen, pelvis, peripheral exams |
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not specified in terms of a patient cohort. The testing involved "representative diagnostic images" and phantom studies.
- Data Provenance: Not explicitly stated but implies images were generated by the device itself and likely from standard clinical scenarios (retrospective or prospective is not specified).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: One.
- Qualifications of Experts: An "American Board Certified Radiologist."
4. Adjudication method for the test set:
Not applicable/specified. The document states a single American Board Certified Radiologist reviewed representative diagnostic images. There is no mention of a multi-reader adjudication process for establishing ground truth for a test set in the context of diagnostic performance.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No. This document does not mention an MRMC comparative effectiveness study, nor does it discuss AI assistance for human readers. This device is an imaging system, not an AI diagnostic tool.
6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done:
No. This is a CT imaging system. The performance assessment relates to the image acquisition and display capabilities, not a standalone algorithm.
7. The type of ground truth used:
- For CT Image Quality metrics (phantom studies): The ground truth is the physical properties of the phantoms and established CT physics principles for measuring image quality.
- For diagnostic image quality: Expert opinion of an American Board Certified Radiologist ("produces images of diagnostic quality").
8. The sample size for the training set:
Not applicable. This document describes a CT scanner, not an AI algorithm that requires a training set in the typical sense. The "training" of the system involves its design, manufacturing under quality systems, and adherence to engineering specifications.
9. How the ground truth for the training set was established:
Not applicable. (As above, not an AI algorithm with a training set).
(117 days)
The SpotLight CT Computed Tomography X-ray is intended to produce cross-sectional images of the body by computer reconstruction of X-ray transmission data taken at different angles. The system has the capability to image whole organs, including the heart, in a single rotation. The system may acquire data using Axial, Cine and Cardiac CT scan techniques from patients of all ages. These images may be obtained either with or without contrast. This device may include signal analysis and display equipment, patient and equipment supports, components and accessories.
This device may include data and image processing to produce images in a variety of trans-axial and reformatted planes.
The system is indicated for X-ray Computed Tomography imaging of organs that fit in a 25cm field of view, including cardiac and vascular CT imaging. The device output is useful for diagnosis of disease or abnormality and for planning of therapy procedures.
The SpotLight CT is a third generation rotate-rotate CT scanner, designed and built based on technologies and principles of operation of the predicate device and other legally marketed CT scanners. The SpotLight CT is a multi-slice (192 detector rows), dual tube CT scanner consisting of a gantry, patient table, operator console, power distribution unit (PDU) and interconnecting cables. The system includes image acquisition hardware, image acquisition and reconstruction software for operator interface and image handling.
The provided document describes the Arineta Ltd. SpotLight CT device and its substantial equivalence to a predicate device. Here's a breakdown of the acceptance criteria and the study details:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" in a tabular format with corresponding "reported device performance." Instead, it describes various performance specifications and how the device performed against them. I've re-framed the reported performance metrics as if they were acceptance criteria.
Performance Metric (Implied Acceptance Criteria) | Reported Device Performance |
---|---|
Coverage (Z-direction) | Up to 140 mm in a single axial scan |
Field of View (FOV) - Diagnostic | 250 mm (radiation outside 250mm or 160mm FOV is attenuated, providing diagnostic image quality up to 250mm FOV) |
Gantry Rotation Speed | Up to 0.24 seconds per rotation |
Temporal Resolution | 120 msec (at 0.24 second rotation speed) |
Spatial Resolution | 0.31 mm |
Detector Rows | 192 detector rows |
Number of X-ray sources | Two ("Gemini" X-ray tubes) |
Image Quality Evaluation | Evaluated for artifacts, spatial resolution, low contrast detectability, noise, and uniformity and CT number accuracy (details on specific pass/fail not provided, but generally stated as meeting specifications). |
Dose Performance | Evaluated as meeting specifications |
Ability to Image Whole Organs | Capable of imaging whole organs, including the heart, in a single rotation. |
Diagnostic Quality (Animal Testing) | Images were evaluated for diagnostic quality with positive results. |
Clinical Diagnostic Value & Image Quality (Human Testing) | Demonstrated diagnostic image quality performance. |
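One consistency check on the table: the stated 120 ms temporal resolution is exactly half of the 0.24 s rotation time, which is what a conventional half-scan reconstruction window would give (an inference; the document does not state the reconstruction mode):

```latex
t_{\text{temporal}} \approx \frac{T_{\text{rot}}}{2} = \frac{0.24\ \text{s}}{2} = 0.12\ \text{s} = 120\ \text{ms}
```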
2. Sample Size Used for the Test Set and Data Provenance
- Test Set (Clinical Testing): 38 subjects
- Data Provenance: Not explicitly stated, but the study was conducted at "one site," and the readers were "US certified." This suggests the data was collected in the US.
- Retrospective or Prospective: The clinical testing describes "data were collected," which could mean either. However, the phrase "The study protocol was designed to test the scanner across different patient populations, clinical scenarios and scan techniques" implies a prospective study design for collecting new data specifically for this evaluation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Four (4)
- Qualifications: "US certified readers who are qualified radiologists or cardiologists." Specific years of experience are not mentioned.
4. Adjudication Method for the Test Set
The document states, "The images were evaluated and rated by four US certified readers." It does not specify an adjudication method like 2+1 or 3+1 for resolving discrepancies. It implies independent evaluation by each reader.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No, a multi-reader multi-case (MRMC) comparative effectiveness study (comparing human readers with and without AI assistance) was not done. The clinical testing focused on evaluating the device's diagnostic image quality for standalone performance.
- Effect size of human readers with AI vs without AI assistance: Not applicable, as no such study was conducted or reported.
6. Standalone (Algorithm Only) Performance
- Was a standalone study done? Yes, the described "Non-clinical Performance Testing" and "Clinical Testing" primarily focus on the standalone performance of the SpotLight CT system. The image quality, temporal resolution, dose performance, and diagnostic quality evaluations are all measures of the device's inherent capabilities without human intervention during the image generation or initial analysis phase. The "clinical diagnostic value and image quality" evaluated by the readers are also assessing the output of the device itself.
7. Type of Ground Truth Used
- Non-clinical/Phantom Testing: The ground truth for these tests would be objective physical measurements and known parameters of the phantoms used to evaluate image quality metrics (e.g., spatial resolution targets, known low contrast objects, CT number uniformity).
- Animal Testing: The "diagnostic quality" evaluation in animal models likely used established veterinary diagnostic criteria and potentially post-mortem examination or other correlative imaging as ground truth.
- Clinical Testing: The "clinical diagnostic value and image quality" in human subjects were evaluated by "qualified radiologists or cardiologists" against established clinical diagnostic criteria. While not explicitly stated as "expert consensus ground truth," the assessment by multiple, qualified experts serves as the de-facto ground truth for evaluating diagnostic utility. It does not mention pathology or long-term outcomes data for ground truth.
8. Sample Size for the Training Set
The document does not provide any information about a training set since this is a hardware device (CT scanner) with associated imaging software, not typically an AI/ML algorithm that requires a distinct "training set" in the common sense. The image reconstruction algorithm is described as "Stereo CT reconstruction algorithm based on common algorithms used in single source scanners that are modified to combine the data acquired from the two sources." This implies an engineered algorithm, not one trained on a dataset.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no mention of an AI/ML algorithm requiring a training set in the document. The device uses an engineered image reconstruction algorithm.
(100 days)
This device is indicated to acquire and display cross sectional volumes of the whole body, to include the head, with the capability to image whole organs in a single rotation. Whole organs include but are not limited to brain, heart, pancreas, etc.
The Aquilion ONE has the capability to provide volume sets of the entire organ. These volume sets can be used to perform specialized studies, using indicated software/hardware, of the whole organ by a trained and qualified physician.
Aquilion ONE (TSX-305A/3) V7.3 is a whole body multi-slice helical CT scanner, consisting of a gantry, couch and a console used for data processing and display. This device captures cross sectional volume data sets used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician. This system is based upon the technology and materials of previously marketed Toshiba CT systems.
Here's an analysis of the provided text regarding the acceptance criteria and study for the Aquilion ONE (TSX-305A/3) V7.3:
1. Table of Acceptance Criteria and Reported Device Performance:
The document is a 510(k) summary for a premarket notification for a Computed Tomography X-ray System. It is not a clinical study report with specific acceptance criteria directly tied to a diagnostic performance metric (like sensitivity or specificity) of a disease-detecting AI algorithm. Instead, it demonstrates substantial equivalence to a predicate device, focusing on technical specifications and image quality for general diagnostic use.
Therefore, the "acceptance criteria" here relate to demonstrating that the new device performs acceptably for its intended use and is equivalent to the predicate. The "performance" is primarily a comparison of technical specifications and image quality metrics against the predicate.
Acceptance Criteria Category | Specific Criteria (Implicit/Explicit) | Reported Device Performance |
---|---|---|
Intended Use | The device is capable of acquiring and displaying cross-sectional volumes of the entire body, including the head, with the capability to image whole organs in a single rotation (e.g., brain, heart, pancreas). These volume sets should be usable for specialized studies by trained physicians. (Identical to predicate) | Aquilion ONE (TSX-305A/3) V7.3 has identical Indications for Use as the predicate Aquilion ONE Vision, TSX-301C/1-8, V7.0. It is a whole-body multi-slice helical CT scanner for acquiring and displaying cross-sectional volumes and whole organs. |
Technical Specifications (Substantial Equivalence) | Technical specifications should be comparable to the predicate device, or any differences should not raise new questions of safety and effectiveness (e.g., gantry rotation speed, view rate, detector, pitch factor, FOV, wedge filter types, X-ray tube voltage/current, image reconstruction time, helical reconstruction method, metal artifact reduction, patient couch type, size, weight capacity, gantry opening, gantry tilt angle, minimum area for installation, area finder; existing cleared software options being implemented should function as previously cleared). | See the similarities and differences listed below the table. |
Image Quality | Image quality metrics (spatial resolution, CT number magnitude/uniformity, noise properties, low contrast detectability/CNR performance) should meet established specifications and be comparable to the predicate device. Images obtained should be of diagnostic quality. | CT image quality metrics performed using phantoms demonstrated that the subject device is substantially equivalent to the predicate device with regard to spatial resolution, CT number magnitude/uniformity, noise properties, and low contrast detectability/CNR performance. Representative diagnostic images (head, chest, abdomen/pelvis, extremity, cardiac) were also reviewed and demonstrated diagnostic quality. |
Safety and Standards Compliance | The device must be designed and manufactured under Quality System Regulations (21 CFR 820, ISO 13485) and conform to applicable performance standards for ionizing radiation-emitting products (21 CFR, Subchapter J, Part 1020). It must also comply with various IEC, NEMA, and other relevant standards. | The device is designed and manufactured under QSR and ISO 13485. It conforms to applicable performance standards for Ionizing Radiation Emitting Products [21 CFR, Subchapter J, Part 1020] and numerous international standards, including the IEC 60601-1 series, IEC 60601-2 series, IEC 60825-1, IEC 62304, IEC 62366, NEMA PS 3.1-3.18, NEMA XR-25, and NEMA XR-26. |
Software Validation | Software documentation must comply with FDA guidance for a Moderate Level of Concern, and validation testing should be successfully completed. | Software Documentation for a Moderate Level of Concern was included. Successful completion of software validation is cited in the conclusion. |
Risk Management | Risk analysis should be conducted. | Risk analysis was conducted. |

Similarities (subject vs. predicate):
- View rate: maximum 2910 views/s (same)
- Detector: 896 channels x 320 rows (same)
- Pitch factor: range 0.555 to 1.575 vs. 0.555 to 1.5 (very similar)
- FOV: 240/320/500 mm vs. 180/240/320/400/500 mm (the subject has a slightly reduced range, but still within typical diagnostic needs)
- Metal artifact reduction: SEMAR (Volume, Helical, ECG gated) vs. SEMAR (Volume, Helical) (the subject adds ECG-gated capability)
- Gantry opening size: 780 mm (same)
- All previously cleared software options are listed as "no change" in functionality, with some having "workflow improvements" (e.g., Lung Volume Analysis, SUREsubtraction Lung, MyoPerfusion, Dual Energy System Package, 4D Airways Analysis), which are enhancements rather than regressions.

Differences (addressed through testing or not raising new concerns):
- Gantry rotation speed: 0.35 s (optional maximum 0.275 s) for the subject vs. 0.275 s (standard or optional) for the predicate. This is a minor hardware difference, likely addressed by showing that image quality is maintained.
- Wedge filter types: two types for the subject vs. three for the predicate. This is a minor design change.
- X-ray tube voltage/current: maximum 72 kW (optional maximum 90 kW) for the subject vs. maximum 90 kW (for one model) or maximum 72 kW (for others) for the predicate. Comparable.
- Image reconstruction time: up to 80 images/s for the subject vs. up to 50 images/s for the predicate. Improvement in the subject device.
- Helical reconstruction method: TCOT+ for 20 rows or more (subject) vs. TCOT+ for 16 rows or more (predicate). Minor difference in the row threshold at which TCOT+ is applied.
- Patient couch type and related dimensions/weights: various configurations differ between the subject and predicate models, indicating design variations within the expected functional range.
- Gantry tilt angle: ±30° for the subject vs. ±22° for the predicate. Improvement in the subject device.
- Minimum area for installation: smaller for the subject (27 m² vs. 37.2 m²). Improvement in the subject device.
- Area finder: optional for the subject vs. not available (NA) for the predicate. New feature on the subject device.
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Description: The "test set" for this submission primarily consists of:
- Phantoms: Used for evaluating CT image quality metrics (spatial resolution, CT number, noise, low contrast detectability). The number and specific types of phantoms are not explicitly stated but are typically standard phantoms used in CT performance testing.
- Representative Diagnostic Images: Clinical images covering various body regions (head, chest, abdomen/pelvis, extremity, cardiac). The number of cases/patients is not specified.
- Data Provenance: The document does not explicitly state the country of origin for the diagnostic images. Given Toshiba Medical Systems Corporation is based in Japan and Toshiba America Medical Systems, Inc. is in the US, the data could originate from either region or a combination. The document also does not specify if the data was retrospective or prospective. However, for a 510(k) clearance based on substantial equivalence, particularly for a hardware/software update to a CT scanner, diagnostic images are often retrospectively collected or acquired as part of internal validation.
3. Number of Experts Used to Establish Ground Truth and Qualifications:
- Number of Experts: One (1) expert is explicitly mentioned.
- Qualifications of Experts: An "American Board Certified Radiologist." No specific years of experience are stated. This expert reviewed the representative diagnostic images to confirm diagnostic quality.
4. Adjudication Method for the Test Set:
- The document describes a single American Board Certified Radiologist reviewing images to confirm diagnostic quality. This indicates no formal adjudication method involving multiple readers (like 2+1 or 3+1) was used for this specific part of the evaluation. The assessment of image quality from phantoms would not typically involve expert adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
- No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This document is for a general-purpose CT scanner system, not an AI-specific diagnostic tool that assists human readers. Therefore, there is no mention of an effect size for human reader improvement with or without AI assistance.
6. If a Standalone (Algorithm-Only Without Human-in-the-Loop Performance) Study Was Done:
- No, a standalone performance study in the context of an AI algorithm was not done. The Aquilion ONE (TSX-305A/3) V7.3 is a complete CT system where the "algorithm" refers to the image reconstruction and processing capabilities, which are inherent to the device's function. The study validates the overall system's ability to produce diagnostic images, not a separate AI algorithm's diagnostic accuracy. The performance is assessed on the system output.
7. The Type of Ground Truth Used:
- For the phantom studies, the "ground truth" is typically known physical properties of the phantoms (e.g., known dimensions, densities, contrast levels).
- For the representative diagnostic images, the "ground truth" for confirming "diagnostic quality" is based on the expert opinion/consensus of an American Board Certified Radiologist. This is a form of expert consensus, albeit from a single expert in this stated context. There is no mention of pathology or outcomes data being used as ground truth for this submission.
8. The Sample Size for the Training Set:
- The document does not specify a separate "training set" sample size. This submission is for a medical imaging device (CT scanner) rather than a deep learning AI algorithm that undergoes distinct training. The underlying algorithms for image reconstruction and processing (e.g., TCOT+, SEMAR) are developed and refined through engineering and iterative testing, but not typically in the same "training set" paradigm as AI for diagnostic interpretation. The software validation is mentioned, which refers to standard software development lifecycle testing.
9. How the Ground Truth for the Training Set Was Established:
- As a "training set" in the context of AI development is not explicitly mentioned as relevant to this submission, the establishment of ground truth for a training set is not applicable/described. The "ground truth" during the development of a CT scanner's image reconstruction algorithms would typically involve engineering specifications, physical models, and potentially early clinical data used for empirical tuning and validation, but not a formally labeled training set in the AI sense.
(143 days)
This device is indicated to acquire and display cross-sectional volumes of the whole body, to include the head.
The Aquilion Lightning has the capability to provide volume sets. These volume sets can be used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician.
The Aquilion Lightning, TSX-035A/4 and /5, v7.0 is a 16-row CT System that is intended to acquire and display cross-sectional volumes of the whole body, including the head. This system is based upon the technology and materials of previously marketed Toshiba CT systems.
I am sorry, but the provided text is a 510(k) premarket notification for a Computed Tomography (CT) X-ray system (Aquilion Lightning, TSX-035A/4 and /5, V7.0). This type of document is a submission to the FDA to demonstrate that a device is substantially equivalent to a predicate device already on the market.
This specific document does not contain information about the acceptance criteria or a study proving the device meets acceptance criteria in the manner requested (e.g., using metrics like sensitivity, specificity, or performance against a ground truth dataset).
The document primarily focuses on:
- Indications for Use: What the device is intended for.
- Technological Characteristics Comparison: How the new device differs from its predicate (e.g., gantry rotation speed, X-ray rated output, patient couch specifications).
- Safety and Performance Standards Conformance: Listing of relevant IEC standards and CFR parts that the device adheres to.
- Testing: Mentions "summary tables detailing the risk analysis and verification/validation testing conducted through bench testing" and "successful completion of software validation." It does not provide details of such studies or specific performance metrics that would be considered acceptance criteria for AI/algorithm performance.
Therefore, I cannot provide the requested information about acceptance criteria, device performance, sample sizes, expert ground truth, adjudication methods, MRMC studies, standalone performance, or training set details because these are not present in the provided text. This document is for a general CT system, not an AI-powered diagnostic algorithm with performance metrics relative to a ground truth dataset.
(81 days)
Vitrea Software Toshiba Package is an application package developed for use on Vitrea, a medical image processing software, which includes the following post-processing software applications.
CT/XA Cerebral Artery Morphological Analysis: This software is intended to facilitate the extraction and segmentation of user-identified aneurysms on the cerebral arteries. The software can be used as an adjunct to diagnosis for the purposes of measurement of size and aspect ratio.
MR Wall Motion Tracking: This application is intended to assist physicians with performing cardiac functional analysis based upon magnetic resonance images. It provides measurements of global and regional myocardial function that are used for patients with suspected heart disease.
Vitrea Software Toshiba Package, VSTP-001A, is an application package developed for use on Vitrea, a medical image processing software marketed by Vital Images, Inc. Vitrea Software Toshiba Package, VSTP-001A, includes two post-processing applications, CT/XA Cerebral Artery Morphological Analysis and MR Wall Motion Tracking, which use brain and cardiac image data, obtained from CT/XA/MR systems, to assist physicians in performing specialized measurements and analysis.
Here's a breakdown of the acceptance criteria and study information for the Vitrea Software Toshiba Package, VSTP-001A, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The device includes two main applications: CT/XA Cerebral Artery Morphological Analysis and MR Wall Motion Tracking. The document describes the performance of a test but doesn't explicitly list "acceptance criteria" as a defined threshold value for each metric. Instead, it states that the studies demonstrated the device performed as intended and met required success ratios compared to manual methods or prior versions.
Application/Feature | Acceptance Criteria (Implicit) | Reported Device Performance (as stated) |
---|---|---|
CT/XA Cerebral Artery Morphological Analysis | Comparable to manual measurements and/or segmentations. | "CT/XA Cerebral Artery Morphological Analysis was comparable to manual measurements and/or segmentations." |
CT/XA Cerebral Artery Morphological Analysis | Accurate extraction/display of aneurysm-shaped regions as well as measurement calculations. | "Bench studies were conducted using numerical phantoms to analyze the accuracy of extraction/display of aneurysm shaped regions as well as measurement calculations," and the results were found to be accurate. |
MR Wall Motion Tracking | Met the required success ratio for the contour tracking process. | "The contour tracking process of the MR Wall Motion Tracking application met the required success ratio." |
MR Wall Motion Tracking | Accurate cardiac function and strain analysis. | "Bench studies were conducted using numerical phantoms... to analyze cardiac function and strain." |
2. Sample Size Used for the Test Set and Data Provenance
- CT/XA Cerebral Artery Morphological Analysis: The document mentions "clinical evaluations" and "bench studies using numerical phantoms." It does not specify the sample size for either the clinical evaluations or the numerical phantom studies.
- MR Wall Motion Tracking: Similar to the CT/XA application, "clinical evaluations" and "bench studies using numerical phantoms" were conducted. The sample size is not specified for either.
- Data Provenance: The document does not specify the country of origin or whether the data was retrospective or prospective. It only mentions "clinical evaluations."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not specify the number of experts used or their specific qualifications for establishing ground truth for the test set. It mentions "manual measurements and/or segmentations" for the CT/XA application, implying expert human input, but details are absent.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1) for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, the document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. It focuses on the device's performance compared to manual methods or prior versions, but not on how human readers improve with AI versus without AI assistance.
6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done
Yes, standalone (algorithm only) performance seems to have been evaluated. The benchmarking using "numerical phantoms" and assessment of "accuracy of extraction/display" and "measurement calculations" for the CT/XA application, and "cardiac function and strain" for the MR application, suggests standalone testing. The statement about "contour tracking process... met the required success ratio" for MR Wall Motion Tracking also implies standalone algorithm evaluation.
7. The Type of Ground Truth Used
- CT/XA Cerebral Artery Morphological Analysis: The ground truth appears to be based on expert human measurements and/or segmentations ("comparable to manual measurements and/or segmentations") and numerical phantoms for accuracy.
- MR Wall Motion Tracking: The ground truth also seems to be based on numerical phantoms for accuracy analysis of cardiac function and strain, and an "intended" or "required success ratio" for contour tracking, which would likely be established against a reference standard or expert review.
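The document does not name the agreement metrics used, but comparisons of automatic versus manual segmentations are often summarized with an overlap score such as the Dice coefficient, and wall motion tracking ultimately reduces to strain computed from tracked segment lengths. The sketch below illustrates both with hypothetical values; the metric choice and numbers are assumptions, not taken from the submission:

```python
import numpy as np

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Overlap between two binary segmentation masks: 2*|A∩B| / (|A| + |B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))

def lagrangian_strain(length_end_diastole: float, length_current: float) -> float:
    """Standard Lagrangian strain of a tracked myocardial segment: (L - L0) / L0."""
    return (length_current - length_end_diastole) / length_end_diastole

# Hypothetical values only:
algo_mask = np.zeros((32, 32), dtype=bool); algo_mask[8:24, 8:24] = True
manual_mask = np.zeros((32, 32), dtype=bool); manual_mask[9:25, 9:25] = True
print(f"Dice ≈ {dice_coefficient(algo_mask, manual_mask):.3f}")   # ≈ 0.879
print(f"Segment strain ≈ {lagrangian_strain(10.0, 8.2):.2f}")     # -0.18, i.e., ~18% shortening
```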
8. The Sample Size for the Training Set
The document does not mention the sample size for the training set for either application. It focuses on verification/validation testing.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established, as it does not discuss the training process or dataset.