Found 2 results

510(k) Data Aggregation

    K Number: K231796
    Manufacturer:
    Date Cleared: 2023-07-19 (29 days)
    Product Code:
    Regulation Number: 892.1750
    Reference Devices: K162660

    Intended Use

    Green X 12 (Model: PHT-75CHS) is intended to produce panoramic, cephalometric or 3D digital x-ray images. It provides diagnostic details of the dento-maxillofacial, ENT, sinus and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.

    Device Description

    Green X 12 (Model: PHT-75CHS) is an advanced 4-in-1 digital X-ray imaging system that incorporates PANO, CEPH (optional), CBCT and MODEL Scan imaging capabilities into a single system. As a digital radiographic imaging system, it acquires and processes multi-FOV diagnostic images for dentists. Designed explicitly for dental radiography, Green X 12 (Model: PHT-75CHS) is a complete digital X-ray system equipped with imaging viewers, an X-ray generator and a dedicated SSXI detector.

    The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector captures 3D radiographic images of the head and neck for oral surgery, implant placement and orthodontic treatment.

    Green X 12 (Model: PHT-75CHS) can also acquire 2D diagnostic image data in conventional PANO and CEPH modes.

    The materials, safety characteristics, X-ray source, indications for use and image reconstruction/MAR (Metal Artifact Reduction) algorithm of the subject device are the same as those of the predicate device (PHT-75CHS, K201627). The subject device differs from the predicate in the maximum FOV available to the user, owing to a new CBCT/PANO detector, and in three new software functions (Auto Pano, Smart Focus, Scout).

    AI/ML Overview

    The provided document details the 510(k) submission for the "Green X 12 (Model: PHT-75CHS)" dental X-ray imaging system. The submission aims to demonstrate substantial equivalence to a predicate device, the "Green X (Model: PHT-75CHS)" (K201627), and references another device, "Green Smart (Model: PHT-35LHS)" (K162660).

    The primary changes in the subject device compared to the predicate device are:

    1. New detector: Equipped with the Xmaru1404CF-PLUS detector (cleared with K162660).
    2. New software functions: Auto Pano, Smart Focus, and Scout.

    The document describes non-clinical performance evaluations rather than clinical studies with human readers.

    Here is a breakdown of the available information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly provide a table of acceptance criteria with specific numerical thresholds and corresponding reported performance values for each criterion in an acceptance study. Instead, it states that the device's performance was compared to the predicate device and relevant standards.

    However, based on the text, we can infer some criteria and the general findings:

    | Acceptance Criterion (Inferred) | Reported Device Performance |
    | --- | --- |
    | General image quality (CT) | Measured Contrast, Noise, CNR (Contrast-to-Noise Ratio) and MTF (Modulation Transfer Function) with the FDK (back projection) and CS (iterative) reconstruction algorithms. Results demonstrated equivalent performance to the predicate device. |
    | Dosimetric performance (DAP) | In PANO mode, DAP (Dose Area Product) measurements were the same as the predicate device under identical FDD, exposure area, exposure time, tube voltage and tube current. In CEPH mode, DAP was the same as the predicate device under identical FDD, detector specifications and exposure conditions. In CBCT mode (at the common FOVs 80x80 / 80x50 / Endo 40x40 mm), the DAP of the subject device was lower than the predicate device's. |
    | Clinical image quality | An evaluation report demonstrated that the general image quality of the subject device is equivalent to the predicate device in PANO/CBCT mode. |
    | Software V&V | Software verification and validation were conducted according to FDA guidance; the software is considered a "moderate" level of concern. |
    | Cybersecurity | Applied in compliance with FDA guidance. |
    | Safety, EMC, performance | Electrical, mechanical, environmental safety and performance testing conducted per IEC 60601-1, IEC 60601-1-3, IEC 60601-2-63 and IEC 60601-1-2. All test results were satisfactory. |
    | DICOM conformance | Conforms to NEMA PS 3.1-3.18. |
    | Added software functions | Auto Pano: already cleared in the reference device (K162660). Smart Focus: FOV 40x40 mm images were clinically evaluated by a US licensed dentist. Image quality of the new software functions: evaluated per IEC 61223-3-4 and IEC 61223-3-5; both standards' requirements were satisfied. |
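The Contrast, Noise and CNR figures above are standard phantom-based metrics. The submission does not include the analysis itself, so the following is only a minimal sketch of how CNR is commonly computed from two regions of interest (ROIs) in a phantom slice; the synthetic image, ROI coordinates and `cnr` helper are illustrative, not taken from the submission.

```python
import numpy as np

def cnr(image, roi_signal, roi_background):
    """Contrast-to-noise ratio from two rectangular ROIs.

    Each ROI is (row_start, row_stop, col_start, col_stop).
    CNR = |mean_signal - mean_background| / std_background,
    one common definition used in CT phantom analysis.
    """
    r0, r1, c0, c1 = roi_signal
    signal = image[r0:r1, c0:c1]
    r0, r1, c0, c1 = roi_background
    background = image[r0:r1, c0:c1]
    contrast = abs(signal.mean() - background.mean())
    noise = background.std()
    return contrast / noise

# Synthetic phantom slice: uniform noisy background with a brighter insert.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(128, 128))
img[40:60, 40:60] += 30.0  # "insert" region

print(round(cnr(img, (40, 60, 40, 60), (80, 120, 80, 120)), 1))
```

A real acceptance test would repeat this for each reconstruction algorithm (FDK and CS here) and compare the values against the predicate device's measurements.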

    2. Sample Size Used for the Test Set and Data Provenance

    The document describes non-clinical performance and image quality evaluations rather than a test set of patient cases.

    • Sample Size for Test Set: Not applicable in the context of human patient data. The "test set" consisted of physical measurements on the device itself and on phantoms. No specific number of cases or images is mentioned for the "clinical evaluation" of Smart Focus or for the image quality assessment of the new functions beyond compliance with IEC standards.
    • Data Provenance: Not applicable in terms of country of origin or retrospective/prospective for patient data. The tests were laboratory-based and non-clinical. The "clinical evaluation" for Smart Focus was conducted by a "US licensed dentist," implying evaluation of generated images rather than a broad clinical trial.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: For the "Smart Focus" mode, "a US licensed dentist" performed the clinical evaluation. This indicates at least one expert. For other image quality evaluations, the primary assessment relies on compliance with IEC standards and comparison to the predicate device, which would involve technical experts in radiology and medical imaging rather than medical specialists establishing "ground truth" on patient cases.
    • Qualifications of Experts: For Smart Focus, "a US licensed dentist" is specified. No specific years of experience are listed. For other evaluations, the experts are implied to be qualified in medical device testing, radiology physics, and engineering.

    4. Adjudication Method for the Test Set

    Not applicable. The evaluations described are primarily non-clinical measurements and comparisons against a predicate device or standards, rather than a diagnostic performance study requiring adjudication of expert interpretations of patient cases. The "clinical evaluation" of Smart Focus involved a single "US licensed dentist," suggesting no adjudication process was needed for this part.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study was not done. The submission focuses on demonstrating substantial equivalence through non-clinical performance testing and comparison to a predicate device, not on assessing human reader performance with or without AI assistance.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done

    The document primarily describes an imaging system, not an independent AI algorithm. However, the evaluation of "general image quality" (Contrast, Noise, CNR, MTF) for the CT reconstruction algorithms (FDK and CS) can be considered a standalone performance assessment of the image generation and processing aspects of the device, separate from human interpretation. The "new software functions" (Auto Pano, Smart Focus, Scout) are features of the device, and their performance was evaluated for image quality and clinical utility, effectively in a "standalone" manner in terms of the algorithm producing the image/feature.

    7. The Type of Ground Truth Used

    • For the non-clinical image quality metrics (Contrast, Noise, CNR, MTF), the "ground truth" is based on physical measurements using phantoms and established metrology for X-ray imaging systems (e.g., as per IEC standards).
    • For the clinical evaluation of the Smart Focus mode, the "ground truth" is implied to be the expert opinion/assessment of a US licensed dentist regarding the quality and diagnostic utility of the 40x40 mm images.
    • For the new software functions' image quality evaluation, adherence to IEC 61223-3-4 and IEC 61223-3-5 standards serves as the ground truth/benchmark.
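To make the phantom-based "ground truth" concrete: MTF, for example, is typically derived from an edge spread function (ESF) measured on a phantom. The sketch below is a simplified, single-profile illustration of that technique; the `mtf_from_edge` helper and synthetic edge are assumptions, not the submission's actual procedure, which would follow the cited IEC methods.

```python
import numpy as np

def mtf_from_edge(edge_profile):
    """MTF from a 1D edge spread function (simplified, axis-aligned).

    LSF = d(ESF)/dx; MTF = |FFT(LSF)| normalized to 1 at zero frequency.
    Real slanted-edge measurements oversample the edge across many
    detector rows; this sketch uses a single profile for clarity.
    """
    lsf = np.gradient(edge_profile)       # line spread function
    mtf = np.abs(np.fft.rfft(lsf))        # magnitude spectrum
    return mtf / mtf[0]                   # normalize to DC

# Synthetic smooth step edge (logistic blur, scale ~2 pixels).
x = np.arange(64)
esf = 1.0 / (1.0 + np.exp(-(x - 32) / 2.0))
mtf = mtf_from_edge(esf)
print(round(mtf[0], 2))  # normalized to 1.0 at zero frequency
```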

    8. The Sample Size for the Training Set

    Not applicable. This document describes the 510(k) submission for a medical imaging device (CT X-ray system) and its software functions, not a machine learning or AI algorithm that requires a separate training set for model development. The "Auto Pano" function, while a software feature, is not described as an AI algorithm that learns from data; it reconstructs 3D CBCT data into 2D panoramic images, a known image processing technique.
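As a rough illustration of the general technique described (reprojecting 3D CBCT data into a 2D panoramic image), and emphatically not the manufacturer's actual Auto Pano algorithm, a panoramic view can be formed by averaging voxels along a curve that approximates the dental arch. The `panoramic_from_volume` helper and toy volume below are entirely hypothetical.

```python
import numpy as np

def panoramic_from_volume(vol, arch_xy, thickness=5):
    """Project a 3D volume onto a 2D panoramic image.

    vol       : (z, y, x) voxel array
    arch_xy   : sequence of (y, x) points tracing the dental arch
    thickness : voxels averaged around each arch point (a crude
                stand-in for ray integration through a focal trough)

    Each output column is the depth-averaged neighborhood of one
    arch point; rows follow the volume's z axis.
    """
    nz = vol.shape[0]
    pano = np.zeros((nz, len(arch_xy)))
    half = thickness // 2
    for i, (y, x) in enumerate(arch_xy):
        y0, y1 = max(y - half, 0), min(y + half + 1, vol.shape[1])
        x0, x1 = max(x - half, 0), min(x + half + 1, vol.shape[2])
        pano[:, i] = vol[:, y0:y1, x0:x1].mean(axis=(1, 2))
    return pano

# Toy volume with a parabolic "arch" of bright voxels at mid-height.
vol = np.zeros((32, 64, 64))
xs = np.arange(8, 56)
ys = (40 - 0.04 * (xs - 32) ** 2).astype(int)
vol[10:20, ys, xs] = 1.0

pano = panoramic_from_volume(vol, list(zip(ys, xs)))
print(pano.shape)  # one column per arch sample point
```

Production systems refine this idea with interpolated arch curves, weighted focal troughs and features like the Smart Focus layer selection mentioned above.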

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set for a machine learning model is mentioned in the document.


    K Number: K170731
    Manufacturer:
    Date Cleared: 2017-04-04 (25 days)
    Product Code:
    Regulation Number: 872.1800
    Reference Devices: K162660

    Intended Use

    PCH-30CS is intended to produce panoramic or cephalometric digital x-ray images. It provides diagnostic details of the dento-maxillofacial, sinus and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by physicians, dentists, and x-ray technicians.

    Device Description

    PaX-i Plus / PaX-i Insight (Model: PCH-30CS) is an advanced 3-in-1 digital X-ray imaging system that incorporates PANO, CEPH (Optional) and 3D PHOTO (Optional) imaging capabilities into a single system and acquires 2D diagnostic image data in conventional panoramic and cephalometric modes. The PaX-i Plus / PaX-i Insight dental systems are not intended for CBCT imaging.

    AI/ML Overview

    The provided text describes the PaX-i Plus/PaX-i Insight (Model: PCH-30CS) digital X-ray system and its substantial equivalence to a predicate device. However, it does not contain the level of detail requested for acceptance criteria and a study that 'proves' the device meets them, especially in the context of AI assistance or human-in-the-loop performance measurement.

    The document focuses on demonstrating substantial equivalence to an existing predicate device rather than outright proving performance against specific acceptance criteria in a clinical study as might be done for a novel AI-powered diagnostic device. It primarily relies on bench testing of components, comparison of technical characteristics, and a qualitative assessment of general image quality.

    Here's an analysis based on the available information, with caveats where data is missing:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of "acceptance criteria" for diagnostic accuracy or clinical utility that an AI-driven device would typically have (e.g., sensitivity, specificity, AUC thresholds). Instead, the performance evaluation focuses on direct comparisons of technical imaging metrics and safety standards against a predicate device.

    | Acceptance Criteria (Implied / Technical) | Reported Device Performance (Subject Device vs. Predicate) |
    | --- | --- |
    | **Image Quality (Detectors)** | |
    | MTF (Modulation Transfer Function) | Similar or better |
    | DQE (Detective Quantum Efficiency) | Similar or better |
    | NPS (Normalized Noise Power Spectrum) | Similar or better |
    | Signal-to-Noise Ratio (SNR) | Superior in all spatial frequencies |
    | Pixel resolution | Similar (Xmaru1501CF-PLUS) or higher, with better SNR (Xmaru1404CF-PLUS) |
    | **Dosimetric Performance (DAP)** | |
    | Panoramic mode | Similar to predicate device under similar X-ray exposure conditions |
    | Cephalometric mode (Fast) | Equivalent to predicate device |
    | **Safety & Standards Compliance** | |
    | IEC 60601-1 (electrical, mechanical, environmental safety) | Conforms |
    | IEC 60601-1-3 (performance) | Conforms |
    | IEC 60601-2-63 (performance, dental X-ray) | Conforms |
    | IEC 60601-1-2 (EMC) | Conforms |
    | 21 CFR 1020.30, 1020.31 (EPRC standards) | Conforms |
    | NEMA PS 3.1-3.18 (DICOM set) | Conforms |
    | FDA guidance "Guidance for the Submission of 510(k)'s for Solid State X-ray Imaging Devices" | Complies (non-clinical consideration report provided) |
    | IEC 61223-3-4 (acceptance test and image evaluation) | Satisfactory |
    | Software verification & validation (moderate level of concern) | Satisfactory testing results, proper functioning |
    | Cybersecurity risk analysis | Performed |
    | General image quality (clinical) | Equivalent or better |
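The detector metrics above (MTF, DQE, NPS, SNR) come from standard detector-characterization measurements; the submission does not include the analysis itself. As a minimal sketch only, a 2D noise power spectrum can be estimated from a flat-field exposure with a periodogram; the synthetic data and simplified normalization below are assumptions, not the vendor's method.

```python
import numpy as np

def noise_power_spectrum(flat, pixel_pitch=0.1):
    """2D NPS estimate from a single flat-field image (simplified).

    NPS(u, v) = |FFT2(I - mean(I))|^2 * (dx * dy) / (Nx * Ny),
    the basic periodogram estimator. Real detector measurements
    average many ROIs and detrend the flat field first.
    """
    ny, nx = flat.shape
    dev = flat - flat.mean()
    f = np.fft.fft2(dev)
    return (np.abs(f) ** 2) * (pixel_pitch ** 2) / (nx * ny)

rng = np.random.default_rng(1)
flat = rng.normal(1000.0, 10.0, size=(256, 256))  # synthetic flat field
nps = noise_power_spectrum(flat)

# Sanity check: integrating the NPS recovers the pixel variance.
recovered = nps.sum() / (0.1 ** 2 * flat.size)
print(round(recovered / flat.var(), 3))  # → 1.0
```

DQE then combines MTF and NPS with the incident photon fluence, which is why the table reports all three together.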

    2. Sample Size Used for the Test Set and Data Provenance

    The document mentions "Clinical images were provided" and "PANO / CEPH images from the subject and predicate devices are evaluated in the Clinical consideration and image quality evaluation report." However:

    • Sample Size: The exact sample size of clinical images used for evaluation is not specified.
    • Data Provenance: The country of origin is not specified, and it's not explicitly stated if the data was retrospective or prospective. The phrasing "Clinical images were provided" suggests pre-existing images, implying retrospective data, but this is not confirmed.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document states: "PANO / CEPH images from the subject and predicate devices are evaluated in the Clinical consideration and image quality evaluation report." However:

    • Number of Experts: The number of experts involved in this evaluation is not specified.
    • Qualifications of Experts: The qualifications of these experts (e.g., radiologist with X years of experience) are not specified.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or evaluating clinical images. It simply states that images were "evaluated."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC study appears to have been conducted to measure the effect size of human readers improving with AI vs. without AI assistance. The device in question is a digital X-ray imaging system, not an AI-assisted diagnostic tool. The document focuses on the equivalence of image acquisition hardware and software, not on a decision support or AI diagnostic component that would interact with human readers in such a study.
    • The "Insight PAN" mode is described as providing "multilayer panorama images with depth information" which "demonstrate useful diagnostic information," but this is a feature of the imaging system itself, not necessarily an AI-driven interpretation aid.

    6. Standalone (Algorithm Only) Performance

    • No standalone (algorithm-only) performance testing was presented in the context of diagnostic accuracy. As noted above, the device is an imaging system, not primarily an AI algorithm for diagnosis. Performance evaluation focused on bench testing of hardware components (detectors, X-ray source) and overall image quality metrics, not on an algorithm's diagnostic output.

    7. Type of Ground Truth Used

    The document refers to "Clinical consideration and image quality evaluation report" and that "general image quality... is equivalent or better." This suggests that the "ground truth" for the clinical image evaluation was likely based on expert consensus or qualitative assessment of image quality for diagnostic detail, rather than pathology, outcomes data, or a single definitive reference standard.

    8. Sample Size for the Training Set

    • This information is not applicable or not provided. The document describes a traditional X-ray imaging system, not a machine learning model that would require a distinct "training set." The performance evaluation is based on comparison to a predicate device and technical measurements.

    9. How the Ground Truth for the Training Set Was Established

    • This information is not applicable or not provided for the same reasons as #8.

    In summary: The provided 510(k) summary focuses on demonstrating substantial equivalence for an X-ray imaging system by comparing technical specifications, bench test results of components (like detectors), compliance with safety standards, and general image quality to a predicate device. It does not provide the kind of detailed clinical study data (e.g., specific acceptance criteria for diagnostic performance, detailed clinical test set demographics, expert qualifications, or MRMC studies) that would be expected for an AI-powered diagnostic device or a system whose primary claim is improved diagnostic accuracy for specific conditions.

