Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K162518
    Device Name
    1012WCC, 1012WGC
    Manufacturer
    Date Cleared
    2016-10-06

    (27 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K113630

    Intended Use

    The 1012WCC and 1012WGC Digital Flat Panel X-Ray Detectors are indicated as a digital imaging solution for human anatomy, including the head, neck, cervical spine, arm, leg, and peripheral anatomy (foot, hand, wrist, fingers, etc.). They are intended to replace film-based radiographic diagnostic systems and to support case diagnosis and treatment planning by physicians and other health care professionals. Not to be used for mammography.

    Device Description

    The 1012WCC / 1012WGC is a wired/wireless digital solid-state X-ray detector based on flat-panel technology. Wireless LAN (IEEE 802.11a/g/n/ac) communication transmits captured images to the system and improves user operability through high-speed processing. The radiographic image detector and processing unit consists of a scintillator coupled to an a-Si TFT sensor. The device must be integrated with a radiographic imaging system and can be used to capture and digitize X-ray images for radiographic diagnosis. The RAW files can be further processed into DICOM-compatible image files by a separate console software program (K160579 / Xmaru View V1 and Xmaru PACS / Rayence Co., Ltd.) for diagnostic analysis.
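    For orientation only, the sketch below shows one generic way a raw 16-bit detector frame could be wrapped as a DICOM object using the open-source pydicom 2.x library. It is not the Xmaru View / Xmaru PACS software referenced above, and the frame dimensions, UIDs, and file names are illustrative assumptions.

```python
# Illustrative sketch only (assumes pydicom 2.x and NumPy): wrapping a raw
# 16-bit detector frame as a DICOM Digital X-Ray (DX) object. This is NOT
# the vendor's console software; dimensions, UIDs, and names are placeholders.
import numpy as np
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

# Placeholder for a captured RAW frame; in practice this would be read from
# the detector (e.g., np.fromfile("frame.raw", dtype=np.uint16).reshape(...)).
frame = np.zeros((2560, 3072), dtype=np.uint16)

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.1.1"  # DX Image Storage - For Presentation
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = meta
ds.is_little_endian = True   # pydicom 2.x encoding flags for Explicit VR LE
ds.is_implicit_VR = False
ds.SOPClassUID = meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.Modality = "DX"
ds.Rows, ds.Columns = frame.shape
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated = 16
ds.BitsStored = 16
ds.HighBit = 15
ds.PixelRepresentation = 0   # unsigned integer pixels
ds.PixelData = frame.tobytes()

ds.save_as("frame.dcm", write_like_original=False)
```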

    AI/ML Overview

    The provided text describes the 510(k) premarket notification for the Rayence Co., Ltd.'s 1012WCC and 1012WGC Digital Flat Panel X-Ray Detectors. The document focuses on demonstrating substantial equivalence to a predicate device (1012WCA) and a reference device (1210SGA), rather than establishing novel acceptance criteria for a new device type.

    Therefore, the acceptance criteria are implicitly defined by demonstrating equivalence to the performance of the predicate and reference devices. The "study" here refers to the performance testing conducted to prove this equivalence.

    Here's an analysis of the provided information, framed to address your questions about acceptance criteria and the supporting study:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated as numerical thresholds to be met, but rather as demonstrating substantial equivalence to the predicate (1012WCA) and reference (1210SGA) devices in key performance metrics. These metrics are:

    | Metric | Acceptance Criteria (Implied: Equivalent/Superior to Predicate/Reference) | Reported Device Performance (1012WCC vs 1012WCA) | Reported Device Performance (1012WGC vs 1210SGA) |
    | --- | --- | --- | --- |
    | Modulation Transfer Function (MTF) | Equivalent to predicate/reference at various spatial frequencies. | 1012WCC shows similar/slightly higher MTF than 1012WCA, especially at higher frequencies (e.g., 3.93 lp/mm). | 1012WGC MTF performance "almost same" as 1210SGA. |
    | Detective Quantum Efficiency (DQE) | Equivalent to predicate/reference, particularly DQE(0). | 1012WCC DQE(0) = 0.778 (vs. 1012WCA DQE(0) = 0.753); "1012WCC has higher DQE performance at high spatial frequencies, especially from 2.5 lp/mm to 4 lp/mm, compared with 1012WCA." | 1012WGC DQE(0) = 0.437 (vs. 1210SGA DQE(0) = 0.470); performance results "almost same". |
    | Noise Power Spectrum (NPS) | Equivalent to predicate/reference. | Tested (results not explicitly detailed, but implied as satisfactory). | Tested (results not explicitly detailed, but implied as satisfactory). |
    | Clinical Image Quality | Diagnostic image quality equivalent/superior to predicate/reference. | "Images obtained with the 1012WCC/1012WGC were superior to the same view obtained from a similar patient with the 1012WCA/1210SGA, respectively"; "both the spatial and soft tissue contrast resolution are superior using the 1012WCC/1012WGC." | Same overall finding; specifically, "soft tissues on extremity films were seen with better clarity." |
    | Safety and Performance (Electrical, Mechanical, Environmental) | Conform to IEC 60601-1:2005 and IEC 60601-1-2:2007. | All test results satisfactory. | All test results satisfactory. |
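    For readers unfamiliar with these metrics, MTF, NPS, and DQE are related (in the IEC 62220-1 formulation that the non-clinical tests reference) roughly as shown below. This is background for interpretation, not a formula taken from the submission.

```latex
% Standard IEC 62220-1 style relation between MTF, NPS, and DQE
\mathrm{DQE}(u) = \frac{\mathrm{MTF}^{2}(u)}{\bar{q}\,\mathrm{NNPS}(u)},
\qquad
\mathrm{NNPS}(u) = \frac{\mathrm{NPS}(u)}{\bar{S}^{2}},
\qquad
\bar{q} = K_a \cdot Q_{\mathrm{beam}}
```

    Here u is spatial frequency (lp/mm), S̄ is the mean large-area signal, K_a is the measured air kerma, and Q_beam is the tabulated photon fluence per unit air kerma for the beam quality used. The DQE(0) values quoted above (e.g., 0.778 vs. 0.753) are this quantity evaluated at zero spatial frequency.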

    Study Information (Performance Testing)

    The document describes a "non-clinical test" and a "clinical consideration test" conducted to demonstrate substantial equivalence.

    1. Sample sizes used for the test set and the data provenance:

      • Clinical Consideration Test: The document states "sample radiographs of similar age groups and anatomical structures." It does not specify the exact number of images or patients (sample size) in the clinical test set.
      • Data Provenance: Not explicitly stated regarding the origin (e.g., country) of the clinical data. It is a "clinical consideration test," meaning it's likely a small comparative review rather than a large clinical trial. The images are "taken from both subject devices" (1012WCC/WGC) and compared to images from the predicate/reference devices (1012WCA/1210SGA). This suggests a prospective collection of comparison images, not necessarily a large retrospective dataset.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Only one expert was used: "a licensed US radiologist."
      • Qualifications: no further details on years of experience or subspecialty are provided.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • There was no formal adjudication method described. The review was conducted by a single "licensed US radiologist" who rendered an "expert opinion." This falls into the "none" category for a multi-expert adjudication process.
    4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:

      • No MRMC study was done. This study solely focuses on the image quality of the device itself (hardware) compared to predicate devices, not on the interaction of human readers with AI assistance. The devices are flat-panel X-ray detectors, not AI algorithms for interpretation.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

      • Yes, in essence, standalone technical performance was done. The non-clinical tests (MTF, DQE, NPS) are standalone evaluations of the device's image quality metrics, independent of human interpretation.
      • The "clinical consideration test" is a standalone evaluation of the image quality from the devices by an expert, focusing on diagnostic utility, rather than an "algorithm only" performance. The device itself is the "algorithm" in terms of image generation.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the non-clinical tests (MTF, DQE, NPS): The ground truth is based on physical measurements and established international standards (IEC 62220-1) for image performance (see the illustrative MTF sketch after this list).
      • For the clinical consideration test: The ground truth for image quality assessment was the expert opinion/review of a single licensed US radiologist. There's no mention of pathology or outcomes data to establish clinical ground truth.
    7. The sample size for the training set:

      • Not applicable / Not explicitly mentioned. The document describes a comparison between new devices and predicate devices, demonstrating substantial equivalence for hardware. It does not describe a machine learning model that would require a "training set." The closest analogy might be the development data used for the original predicate devices, but that's not detailed here.
    8. How the ground truth for the training set was established:

      • Not applicable. As there's no mention of a machine learning model or training set, this question is not addressed.
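    As referenced in item 6, the non-clinical ground truth rests on physical measurements such as MTF. A deliberately simplified, one-dimensional sketch of the idea is shown below; it uses a synthetic edge and no slanted-edge oversampling, so it is not an IEC 62220-1 compliant measurement, and the pixel pitch is an assumed value.

```python
# Toy 1-D illustration: estimate MTF from an edge spread function (ESF) by
# differentiating to the line spread function (LSF) and taking |FFT|.
# Real IEC 62220-1 measurements use a slanted edge, oversampling, and strict
# beam/geometry conditions; the pixel pitch here is an assumed value.
import numpy as np

pixel_pitch_mm = 0.127                         # assumed pixel pitch
x = np.arange(256)
esf = 1.0 / (1.0 + np.exp(-(x - 128) / 1.5))   # synthetic blurred edge

lsf = np.gradient(esf)                         # line spread function
lsf /= lsf.sum()                               # normalize so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm == lp/mm

for f, m in zip(freqs[:8], mtf[:8]):
    print(f"{f:5.2f} lp/mm : MTF = {m:.3f}")
```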

    K Number
    K121400
    Manufacturer
    Date Cleared
    2012-08-28

    (110 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K113630

    Intended Use

    The PaX-Uni3D is a computed tomography x-ray system which is a diagnostic x-ray system intended to produce panoramic, cephalometric and cross-sectional images for dental examination and diagnosis of diseases of the teeth, jaw and oral structure by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles.

    Device Description

    The PaX-Uni3D (PHT-7500) is a dental radiographic imaging system offering multiple image acquisition modes: panoramic, cephalometric, and cone beam computed tomography (CBCT). Specifically designed for dental radiography of the teeth and jaws, the PaX-Uni3D (PHT-7500) is a complete dental X-ray system equipped with an X-ray tube, generator, and dedicated SSXI detectors for panoramic, cephalometric, and cone beam computed tomographic radiography. The CBCT mode is based on a CMOS digital X-ray detector, which captures radiographic diagnostic images of oral anatomy in 3D for dental treatment such as oral surgery or implant placement. The device can also be operated as a panoramic and cephalometric dental X-ray system using CMOS X-ray detectors.
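    As a rough illustration of the reconstruction principle described in the intended use (computer reconstruction of X-ray transmission data acquired at different angles), the sketch below runs filtered back-projection on a synthetic phantom using scikit-image. It is not the manufacturer's reconstruction pipeline; the phantom, angle sampling, and filter choice are arbitrary assumptions.

```python
# Generic filtered back-projection (FBP) demo on a synthetic phantom.
# NOT the manufacturer's reconstruction software; requires scikit-image >= 0.19.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)           # 200 x 200 test slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles (degrees)

sinogram = radon(phantom, theta=angles)                 # simulated transmission projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered back-projection

rms_error = np.sqrt(np.mean((recon - phantom) ** 2))
print(f"RMS reconstruction error vs. phantom: {rms_error:.4f}")
```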

    AI/ML Overview

    The provided text describes a 510(k) submission for a dental X-ray imaging system, PaX-Uni3D (PHT-7500). The submission aims to demonstrate substantial equivalence to a predicate device, PaX-Uni3D (K090467). The document focuses on non-clinical performance and safety data, rather than a clinical study evaluating the device's diagnostic performance compared to a baseline or human readers.

    Here's a breakdown of the requested information based on the provided text, noting where specific details are not available:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" in a quantitative, diagnostic performance sense (e.g., sensitivity, specificity, accuracy thresholds). Instead, it primarily focuses on demonstrating equivalence to the predicate device through technical specifications and adherence to international safety and performance standards.

    | Characteristic / Standard | Acceptance Criteria (Implicit) | Reported Device Performance (PaX-Uni3D (PHT-7500)) |
    | --- | --- | --- |
    | Indications for Use | Same as predicate device | Matches predicate device |
    | Performance Specification (Modes) | Panoramic, cephalometric, computed tomography | Panoramic, cephalometric, computed tomography |
    | Input Voltage | Within acceptable range | AC 100-120 / 200-240 V |
    | Tube Voltage | Within acceptable range | 50-90 kV |
    | Tube Current | Within acceptable range | 2-10 mA |
    | Focal Spot Size | Matches predicate device | 0.5 mm |
    | Exposure Time (Pano) | Within acceptable range | Max 20.2 s |
    | Exposure Time (CT) | Within acceptable range | 15 s / 24 s selectable |
    | Exposure Time (Ceph) | Within acceptable range | 0.9-1.2 s |
    | Total Filtration | Matches predicate device | 2.8 mm Al |
    | Software | DICOM 3.0 format compatible | DICOM 3.0 format compatible |
    | Anatomical Sites | Maxillofacial | Maxillofacial |
    | CT Resolution (Xmaru0712CF, Xmaru1215CF Plus) | Equivalent to or better than predicate | 3.5 lp/mm |
    | Pano Resolution (Xmaru1501CF) | Equivalent to or better than predicate | 5 lp/mm |
    | Ceph Resolution (1210SGA) | Equivalent to or better than predicate | 3.9 lp/mm |
    | CT Pixel Size (Xmaru0712CF, Xmaru1215CF Plus) | Equivalent to or better than predicate | 140 x 140 μm |
    | Pano Pixel Size (Xmaru1501CF) | Equivalent to or better than predicate | 100 x 100 μm |
    | Ceph Pixel Size (1210SGA) | Equivalent to or better than predicate | 127 x 127 μm |
    | Safety Standards | Compliance with IEC standards | Met IEC 60601-1, -1-1, -1-3, -2-7, -2-28, -2-32, -2-44, -1-2 (EMC) |
    | DICOM Compliance | Compliance with NEMA PS 3.1-3.18 | Met NEMA PS 3.1-3.18 |
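    As a quick consistency check (simple arithmetic, not stated in the submission), the listed limiting resolutions are close to the Nyquist frequency implied by each detector's pixel pitch, f_N = 1 / (2 × pitch):

```python
# Sanity check (not from the submission): Nyquist-limited resolution implied
# by each detector's pixel pitch, f_N = 1 / (2 * pitch_mm), in lp/mm.
pitches_um = {
    "CT (Xmaru0712CF / Xmaru1215CF Plus)": 140,  # listed CT resolution: 3.5 lp/mm
    "Pano (Xmaru1501CF)": 100,                   # listed Pano resolution: 5 lp/mm
    "Ceph (1210SGA)": 127,                       # listed Ceph resolution: 3.9 lp/mm
}
for mode, pitch_um in pitches_um.items():
    nyquist_lp_mm = 1.0 / (2.0 * pitch_um / 1000.0)
    print(f"{mode}: {pitch_um} um pitch -> Nyquist ~{nyquist_lp_mm:.2f} lp/mm")
```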

    2. Sample Size Used for the Test Set and Data Provenance

    The document mentions "an expert review of image comparisons for both devices" and "Non-clinical & Clinical considerations according to FDA Guidance 'Guidance for the submissions of 510(k)'s for Solid State X-ray Imaging Devices.'" However, it does not specify the sample size (number of images or patients) used for this "expert review" or "clinical consideration." The data provenance (e.g., country of origin, retrospective or prospective) is also not explicitly stated.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document refers to "an expert review of image comparisons." It does not specify the number of experts or their precise qualifications (e.g., "radiologist with 10 years of experience").

    4. Adjudication Method for the Test Set

    The document mentions "an expert review of image comparisons" but does not describe any specific adjudication method (e.g., 2+1, 3+1, none) used to establish ground truth or resolve discrepancies among experts.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    No MRMC comparative effectiveness study is mentioned in the provided text. The submission focuses on demonstrating substantial equivalence through technical specifications and non-clinical performance, rather than evaluating the diagnostic improvement of human readers with or without AI assistance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    The device itself is an X-ray imaging system, not an AI algorithm performing diagnostic tasks. Therefore, the concept of "standalone (algorithm only without human-in-the-loop performance)" as usually applied to AI models is not relevant in this context. The performance evaluation is related to the image quality and physical specifications of the imaging system.

    7. The Type of Ground Truth Used

    For the "expert review of image comparisons," the implicit "ground truth" would likely be the expert consensus or judgment on the diagnostic quality and clinical utility of the images produced by the new device compared to the predicate device. The specific criteria for this judgment are not detailed, but it would relate to image resolution, clarity, ability to visualize relevant anatomical structures, and potential for diagnosis. There is no mention of pathology or outcomes data being used as ground truth for this comparison.

    8. The Sample Size for the Training Set

    This document describes a 510(k) for an X-ray imaging system, not an AI-powered diagnostic algorithm that would typically have a "training set." Therefore, the concept of a training set sample size is not applicable to this submission.

    9. How the Ground Truth for the Training Set Was Established

    As there is no "training set" for an AI algorithm in this context, this question is not applicable. The device is a hardware system for image acquisition.
