K Number
K131075
Manufacturer
Shimadzu Corporation
Date Cleared
2014-03-28

(345 days)

Product Code
Regulation Number
892.1650
Panel
RA
Reference & Predicate Devices
DAR-8000F (K052500)
Intended Use
  • The SONIALVISION G4 is intended to be used for fluoroscopy/radiography diagnosis in hospitals.
  • The equipment must only be operated by qualified personnel, such as radiography technicians or those with equivalent qualifications.
  • The system is used for the total patient population.
  • This system is NOT intended to be used for Mammography screening.
  • This system is NOT intended to be used for interventional procedures.
  • This system is used for radiographic, fluoroscopic, angiographic and pediatric examinations.
  • Stored images in this system can be used for re-monitoring, image processing, storing to optical media (CD/DVD), and sending to a DICOM server (a brief illustrative sketch follows this list).
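
Because the intended use mentions sending stored images to a DICOM server, the following is a minimal, illustrative sketch of a DICOM C-STORE transfer using the open-source pydicom and pynetdicom libraries. This is background only and not part of the submission; the file name, server hostname, port, and AE titles are placeholder assumptions.

```python
# Illustrative only (not from the submission): pushing a stored image to a
# DICOM server via C-STORE with pydicom/pynetdicom. File name, host, port,
# and AE titles are placeholders.
from pydicom import dcmread
from pynetdicom import AE

ds = dcmread("stored_image.dcm")  # an image exported from the system (placeholder path)

ae = AE(ae_title="SONIALVISION")  # hypothetical calling AE title
# Request a presentation context matching the image's own SOP class and transfer syntax.
ae.add_requested_context(ds.SOPClassUID, ds.file_meta.TransferSyntaxUID)

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")  # placeholder DICOM server
if assoc.is_established:
    status = assoc.send_c_store(ds)  # issue the DICOM C-STORE request
    if status:
        print(f"C-STORE completed with status 0x{status.Status:04X}")
    else:
        print("Connection timed out, was aborted, or received an invalid response")
    assoc.release()
else:
    print("Association rejected, aborted, or never connected")
```

In practice the system's own DICOM conformance statement, not this sketch, defines the supported SOP classes and transfer syntaxes.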
Device Description

The Shimadzu SONIALVISION G4 is a universal X-ray R/F (radiography/fluoroscopy) system offering radiographic, fluoroscopic and angiographic techniques. It has a floor-mounted table, and the system can be configured with a Digital Radiography System, X-ray High Voltage Generator, Collimator and X-ray Tube.

AI/ML Overview

The provided text describes a 510(k) submission for the SHIMADZU SONIALVISION G4, an X-ray TV system. The submission focuses on demonstrating substantial equivalence to a predicate device (DAR-8000F, K052500) rather than establishing novel acceptance criteria for a new device's performance against specific metrics.

Here's a breakdown of the requested information based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The submission does not explicitly define distinct "acceptance criteria" in the format of specific numerical thresholds for diagnostic performance (e.g., sensitivity, specificity, AUC). Instead, the acceptance is based on demonstrating substantial equivalence to a predicate device.

| Acceptance Criteria (Implicit) | Reported Device Performance (Summary) |
|---|---|
| Equivalence in ability to acquire X-ray images, demonstrated through linearity, MTF, DQE, and density resolution, with performance comparable to the predicate device. | "both have enough performance to acquire X-ray images" |
| Overall clinical diagnostic capability equivalent to the predicate device. | "Result of overall clinical review confirmed that new device is substantially equivalent to the predicate device in aspect of its diagnostic capability." |
| Compliance with relevant electrical safety and performance standards. | Complies with AAMI/ANSI ES 60601-1:2005, IEC 60601-2-54 Edition 1.0:2009, and other involved standards. |
| Risk analysis completed and risk controls implemented. | Risk analysis completed and risk controls implemented to mitigate identified hazards. |
| All software specifications fulfilled acceptance criteria. | "The testing results support that all the software specifications have fulfilled the acceptance criteria." |
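
For context on the metrics in the first row (background only; the submission does not state this relation), detective quantum efficiency (DQE) is conventionally expressed in terms of the MTF and the noise power spectrum (NPS), normalized by the squared mean signal and the incident photon fluence q:

```latex
% Conventional detector-characterization relation (background, not from the submission).
% \bar{S} is the mean detector signal and q the incident photon fluence per unit area.
\mathrm{NNPS}(f) = \frac{\mathrm{NPS}(f)}{\bar{S}^{\,2}},
\qquad
\mathrm{DQE}(f) = \frac{\mathrm{MTF}^{2}(f)}{q \cdot \mathrm{NNPS}(f)}
```

Linearity and density resolution are separate bench measurements; they are listed alongside MTF and DQE in the comparison but do not enter this relation.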

2. Sample size used for the test set and the data provenance

  • Sample size for test set: The text mentions a "concurrence study of clinical images" but does not specify the number of clinical images or patients included in this study.
  • Data provenance: Not explicitly stated; however, the study was reviewed by a "U.S. radiologist," suggesting the clinical images might have been relevant to a U.S. clinical context, though their origin (country, etc.) is not specified. The study was conducted for the purpose of a 510(k) submission to the FDA.
  • Retrospective or prospective: Not explicitly stated. The description "concurrence study of clinical images" could imply either.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of experts: The text states the concurrence study "was reviewed by a U.S. radiologist." This implies one expert was primarily responsible for the review reported in the submission.
  • Qualifications of experts: "a U.S. radiologist." No further details on years of experience, subspecialty, Board Certification, etc., are provided.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

  • Adjudication method: Not specified. Since only one radiologist is mentioned as reviewing the study, a multi-reader adjudication scheme such as "2+1" or "3+1" was likely not used or not applicable; the review appears to have been a direct comparison performed by a single radiologist.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance

  • MRMC study: No, an MRMC comparative effectiveness study, especially one involving AI assistance, was not performed. The study was a "concurrence study of clinical images between subject device and its predicate device" reviewed by a single radiologist to confirm substantial equivalence in diagnostic capability, not to assess reader improvement with AI. This product is an X-ray system, not an AI-powered diagnostic tool.

6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

  • Standalone algorithm performance: Not applicable. The SONIALVISION G4 is an X-ray imaging system, not an AI algorithm. Its performance is evaluated as an imaging device.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • Type of ground truth: Not explicitly defined as a formal "ground truth" establishment process. The "concurrence study of clinical images" involved a U.S. radiologist reviewing the images from both the subject and predicate devices to determine diagnostic capability. This implies that the accepted clinical interpretation by that radiologist served as the basis for comparison, which could be considered a form of expert interpretation/clinical judgment in the context of demonstrating equivalence. It is not explicitly stated if this was compared against a separate, independent "ground truth" such as pathology or long-term outcomes.

8. The sample size for the training set

  • Sample size for training set: Not applicable/Not mentioned. This submission describes an X-ray imaging system, not an AI algorithm that undergoes a "training set" phase. Device development involved "verification and validation testing as well as phantom testing," but not a "training set" in the context of machine learning.

9. How the ground truth for the training set was established

  • Ground truth for training set: Not applicable/Not mentioned. As this is not an AI device, there is no "training set" or corresponding "ground truth" to establish in that context.

§ 892.1650 Image-intensified fluoroscopic x-ray system.

(a) Identification. An image-intensified fluoroscopic x-ray system is a device intended to visualize anatomical structures by converting a pattern of x-radiation into a visible image through electronic amplification. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II (special controls). An arthrogram tray or radiology dental tray intended for use with an image-intensified fluoroscopic x-ray system only is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9. In addition, when intended as an accessory to the device described in paragraph (a) of this section, the fluoroscopic compression device is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.