Search Results

Found 22 results

510(k) Data Aggregation

    K Number
    K243420
    Device Name
    HESTIA
    Manufacturer
    Date Cleared
    2025-07-17

    (255 days)

    Product Code
    Regulation Number
    892.1715
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    GENORAY Co., Ltd.

    Intended Use

    HESTIA is indicated for generating mammographic images that can be used for screening and diagnosis of breast cancer. HESTIA is intended to be used in the same clinical applications as traditional film/screen systems.

    Device Description

    HESTIA is a Full-Field Digital Mammography (FFDM) system for screening and diagnostic imaging of standing or seated patients. The system consists of a control unit with an X-ray generator, a compression device (C-arm) with a tube housing assembly, and an X-ray tube stand with integrated detector, plus a console with an operation panel. HESTIA comes with a variety of compression plates for diagnostic adjunct procedures.

    The system is mainly used in internal medicine, examination centers, obstetrics and gynecology, women's medicine, breast surgery, and imaging. Mammography X-rays are used to obtain diagnostic images of the breast's internal structure to diagnose changes more accurately in breast tissue or potential signs of breast cancer, such as micro-calcification or tumors.

    HESTIA has three output control modes: Manual, Semi-auto, and Auto.

    It includes a customized, dedicated acquisition workstation and provides PACS connectivity with full DICOM capability.
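The full-DICOM claim means the workstation stores and exchanges images as standard DICOM objects. As a minimal illustration (not drawn from the submission), a DICOM Part 10 file can be recognized by its 128-byte preamble followed by the magic bytes "DICM":

```python
def is_dicom_part10(path: str) -> bool:
    """Check for the DICOM Part 10 file signature:
    a 128-byte preamble followed by the magic bytes b'DICM'."""
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"
```

Real interoperability testing would exercise DICOM network services (storage, query/retrieve) against a PACS, not just the file signature.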

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the HESTIA mammography system does not contain the detailed information typically found in a study proving that a device meets acceptance criteria, particularly for AI/CAD devices. HESTIA is a Full-Field Digital Mammography (FFDM) system, a hardware device for generating mammographic images, not AI/CAD software for interpreting them.

    The summary specifically states:

    • "A clinical image evaluation... was conducted with the HESTIA and determined that the images, reviewed by MQSA qualified expert radiologists, were of sufficiently acceptable quality for mammographic usage and that the images are substantially equivalent to those from predicate device."

    This indicates a human-in-the-loop comparison of image quality (visual assessment by radiologists for diagnostic suitability) rather than an AI/CAD performance study with metrics like sensitivity, specificity, or AUC, which are common for AI algorithms. The "clinical image evaluation" mentioned is likely focused on demonstrating that the images produced by the HESTIA system are diagnostically acceptable and equivalent to those from the predicate device, not on assessing the performance of an AI against a ground truth established by experts.

    Therefore, many of the requested items related to AI/CAD study design (e.g., sample size for test set, data provenance, number of experts for ground truth, adjudication method, MRMC studies, standalone performance, training set details) are not applicable or not provided in this document as it describes a hardware imaging system, not an AI interpretation software.

    However, based on the information provided, here's what can be extracted and inferred, addressing as many points as possible:


    Acceptance Criteria and Study for HESTIA Mammography System

    As the HESTIA is a hardware Full-Field Digital Mammography (FFDM) system, not an AI/CAD software, the acceptance criteria and study design are primarily focused on demonstrating the system's ability to produce diagnostically acceptable images and its substantial equivalence to a predicate device in terms of image quality and safety. There is no indication of an AI component or AI performance metrics in this 510(k) summary.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for a FFDM system like HESTIA are typically related to image quality metrics, safety standards, and functional equivalence to predicate devices. The document summarizes compliance with these, rather than providing a quantitative table of specific acceptance thresholds and measured values for image interpretation performance (which would be relevant for AI).

    | Acceptance Criteria Category | Description / Reported Performance |
    | --- | --- |
    | Non-Clinical Testing | Safety and effectiveness demonstrated through compliance with internal requirements and international standards: electromagnetic compatibility (IEC 60601-1-2), radiation protection (IEC 60601-1-3), usability (IEC 60601-1-6), mammographic X-ray equipment (IEC 60601-2-45), biocompatibility (ISO 10993-1, -5, -10), software life cycle (IEC 62304), and risk management (ISO 14971). Physical laboratory testing of image quality met all requirements for sensitometric response, spatial resolution, noise analysis, signal-to-noise ratio transfer (DQE), dynamic range, repeated-exposure (lag effect) testing, AEC performance (CNR and SNR), phantom tests (ACR Map, CDMAM), and patient radiation dose (mean glandular dose). All tests demonstrated substantial equivalence to the predicate device. |
    | Clinical Image Evaluation | Images reviewed by MQSA-qualified expert radiologists were determined to be of "sufficiently acceptable quality for mammographic usage" and "substantially equivalent to those from predicate device." |
    | Intended Use | "Generating mammographic images that can be used for screening and diagnosis of breast cancer"; "intended to be used in the same clinical applications as traditional film/screen systems." Met, as the indications for use are identical to the predicate device's. |
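For context on the AEC performance testing mentioned above: contrast-to-noise ratio is conventionally computed from mean pixel values in a signal region of interest and a background region of interest. A minimal sketch, with invented ROI samples (the summary does not describe the actual test protocol):

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: difference of ROI means
    divided by the background standard deviation."""
    contrast = statistics.mean(signal_roi) - statistics.mean(background_roi)
    noise = statistics.stdev(background_roi)
    return contrast / noise

# Hypothetical pixel samples from a phantom image (illustrative only)
signal = [210, 208, 212, 209, 211]
background = [180, 182, 178, 181, 179]
print(round(cnr(signal, background), 2))  # prints 18.97
```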

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a distinct "test set" sample size in terms of patient cases for clinical evaluation, nor does it detail data provenance (country of origin, retrospective/prospective). The "clinical image evaluation" often involves a small number of images for visual quality assessment rather than a large clinical trial with diverse patient populations for diagnostic accuracy.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Not explicitly stated. The statement refers to "MQSA qualified expert radiologists."
    • Qualifications: "MQSA qualified expert radiologists." (MQSA stands for Mammography Quality Standards Act, which sets federal standards for mammography facilities and personnel in the U.S. This implies they are board-certified and meet specific continuing education and interpretation requirements for mammography.)

    4. Adjudication Method for the Test Set

    Not specified, as this was an image quality assessment by radiologists rather than a diagnostic performance study requiring ground truth establishment through adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size.

    No information regarding an MRMC comparative effectiveness study for human readers with vs. without AI assistance. This type of study is relevant for AI/CAD devices, which is not what HESTIA is. The clinical evaluation focuses on the image quality produced by the HESTIA system being comparable to the predicate.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done.

    Not applicable. HESTIA is a hardware imaging system, not a standalone AI algorithm.

    7. The Type of Ground Truth Used

    For the clinical image evaluation, the "ground truth" was based on the consensus/judgment of MQSA qualified expert radiologists regarding the diagnostic acceptability and equivalence of the image quality produced by the HESTIA system compared to the predicate device. This is distinct from establishing a clinical ground truth (e.g., biopsy-proven cancer) for an AI's diagnostic performance.

    8. The Sample Size for the Training Set

    Not applicable. HESTIA is a hardware device; thus, there is no mention of a training set as would be required for an AI algorithm.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no training set for a hardware device.


    K Number
    K232085
    Manufacturer
    Date Cleared
    2023-12-08

    (148 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    GENORAY CO., Ltd

    Intended Use

    DVAS is an extra-oral source x-ray system to be used by trained dentists and dental technicians as an extra-oral x-ray source for producing diagnostic dental radiographic examination and diagnosis of teeth, jaw, and other oral structures using intra-oral image receptors. It is intended for both adult and pediatric subjects.

    Device Description

    DVAS, an extra-oral source X-ray system for dentistry, is standard X-ray equipment used to acquire intraoral images of patients during dental diagnosis of conditions such as dental caries, periodontal disease, dental root fracture, and other oral and dental pathologies. DVAS provides anatomic X-ray images of a patient at hospitals or dental clinics using a cone for X-ray exposure. The doctor or dentist can review the acquired X-ray images on chemical film or a PC monitor. DVAS can use a digital I/O sensor, an image plate (CR), or film as the image receptor; the image receptors are not part of this submission.

    AI/ML Overview

    The provided text is a 510(k) summary for the DVAS (DVAS-M, DVAS-W) extra-oral source x-ray system. This document focuses on demonstrating substantial equivalence to a predicate device, rather than presenting a standalone study with specific acceptance criteria and performance metrics for the device itself in a clinical setting.

    Therefore, the requested information about "acceptance criteria and the study that proves the device meets the acceptance criteria" in terms of clinical performance (e.g., sensitivity, specificity, or reader improvement) is not present in the provided document. The document primarily addresses engineering and regulatory compliance.

    However, I can extract information related to safety, EMC, performance data comparison, and software validation, which are forms of acceptance criteria for a medical device.

    Here's a breakdown of the available information based on your request, highlighting what is implicitly or explicitly stated regarding "acceptance criteria" and "proof":


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not provide a table of clinical performance acceptance criteria (e.g., specific sensitivity or specificity thresholds) because it is a 510(k) submission focused on substantial equivalence to a predicate device, not clinical efficacy trials for novel AI algorithms.

    However, it does describe compliance with various standards and successful completion of validation testing as its "performance criteria" for regulatory clearance.

    | Acceptance Criterion (Implicit/Explicit) | Reported Device Performance (Proof) |
    | --- | --- |
    | Engineering bench testing and verification/validation (general) | DVAS successfully completed verification and validation testing per the GENORAY quality system and engineering bench testing. All test results were satisfactory. |
    | Safety and electrical compliance (IEC 60601-1, -1-2, -1-3, -2-65) | Tested and compliant with IEC 60601-1 (general requirements for basic safety and essential performance), IEC 60601-1-2 (electromagnetic disturbances), IEC 60601-1-3 (general requirements for radiation protection in diagnostic X-ray equipment), and IEC 60601-2-65 (basic safety and essential performance of dental extra-oral X-ray equipment). |
    | Radiation control compliance (21 CFR 1020.30, 1020.31) | DVAS complies with all applicable 21 CFR performance standards: 21 CFR 1020.30 (electronic products; general) and 21 CFR 1020.31 (radiological safety for diagnostic X-ray systems and their major components). |
    | Software validation (FDA guidance: "Software Contained in Medical Devices") | Software was validated according to the FDA guidance "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" and the FDA guidance on premarket submissions for management of cybersecurity. Results demonstrated that all executed verification tests passed. |
    | Pediatric information in labeling (FDA guidance: "Pediatric Information for X-ray Imaging Device Premarket Notifications") | The labeling reflects pediatric information according to the FDA guidance "Pediatric Information for X-ray Imaging Device Premarket Notifications," dated November 28, 2017, as DVAS can be used in both adult and pediatric populations. |
    | Non-clinical validation testing for intended use/claims | Non-clinical validation testing was performed to validate that DVAS conforms to its intended use, claims, user needs, effectiveness of safety measures, and instructions for use. The bench tests indicate that the new device is as safe and effective as the predicate device. |
    | Substantial equivalence to predicate device | Based on comparison information (similar functions, electronic features, indications for use, patient type, mechanical configuration, X-ray field size, target material, electrical power, focal spot, and applied standards), the device is as safe and effective as the predicate device with no new indication for use, and is therefore substantially equivalent to the predicate device (RIX 70 DC, K182206) and reference device (PORT-X IV, K172810). |
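As one concrete example of what the 21 CFR 1020.31 radiation-control testing cited above involves, the reproducibility requirement bounds the coefficient of variation of repeated exposure measurements at 0.05 (the limit is from the regulation; the readings below are invented for illustration):

```python
import statistics

def coefficient_of_variation(measurements):
    """Sample standard deviation divided by the mean."""
    return statistics.stdev(measurements) / statistics.mean(measurements)

# Hypothetical air-kerma readings (mGy) from repeated identical exposures
readings = [1.02, 1.01, 0.99, 1.00, 1.03, 0.98, 1.01, 1.00, 0.99, 1.02]
cov = coefficient_of_variation(readings)
print(round(cov, 4), cov <= 0.05)
```

A test station would repeat this across the technique factors the generator supports, failing the unit if any series exceeds the 0.05 limit.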

    2. Sample size used for the test set and the data provenance

    The document does not mention a "test set" in the context of clinical images or patient data for evaluating a diagnostic algorithm. The testing described is primarily engineering and regulatory compliance testing, not a clinical performance study involving diagnostic accuracy metrics.


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. No clinical test set needing expert ground truth establishment is mentioned in this submission.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. No clinical test set needing adjudication is mentioned.


    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No MRMC study is mentioned. This device is an X-ray system (hardware), not an AI diagnostic algorithm, so such a study would not be relevant for this type of submission.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Not applicable. This is an X-ray system, not a standalone algorithm.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable. For the engineering and regulatory compliance tests, the "ground truth" would be the specifications and requirements of the relevant standards (e.g., IEC 60601 series, 21 CFR 1020.30/31).


    8. The sample size for the training set

    Not applicable. This document does not describe the development or testing of an AI algorithm that would require a training set of data.


    9. How the ground truth for the training set was established

    Not applicable. No training set for an AI algorithm is mentioned.


    K Number
    K232158
    Device Name
    GenX-CR
    Manufacturer
    Date Cleared
    2023-09-13

    (55 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Genoray Co., Ltd.

    Intended Use

    GenX-CR is a digital radiographic scanner for dental diagnostics intended for use by dentists and other qualified professionals.

    This device is used to create and display digital images by scanning intraoral X-ray images stored in an image plate (or phosphor storage plate).

    Device Description

    GenX-CR scans a reusable phosphor storage plate (hereinafter "PSP," also known as an IP, or image plate) instead of analog film to acquire high-quality digital radiographic images, digitally processes them, and displays the images on the equipment's touch display and on a computer screen.

    After scanning the PSP, the image is saved in the device's internal memory; the scanned image can be previewed on the touch display, and the image stored on the PSP can be erased before the PSP is ejected. Scanned images can be transmitted directly to a computer or network via an Ethernet cable and are used in PortView, a dental diagnostic software package, and other diagnostic software.

    GenX-CR consists mainly of the PSP scanner, PSP tray, power adapter and cable, Ethernet cable, PSP transfer box, PSPs, PSP protective cover, hygienic bags, and the dental PSP scanner control system software (version V2.2).

    GenX-CR contains firmware that is part of the system. The manufacturer certifies that the level of concern for this software (PortView) and firmware (GenX-OP) is Moderate. The firmware has not been cleared with other predicate devices and is first used in GenX-CR.

    Its compact design allows installation in space-constrained locations and minimizes operating costs by using low-cost, reusable PSPs.

    AI/ML Overview

    This FDA 510(k) summary for the GenX-CR device does not provide detailed information regarding specific acceptance criteria, a dedicated study proving device performance against those criteria, or most of the requested information about clinical studies.

    Here's a breakdown of what can be gleaned and what is missing, based on the provided text:

    1. Table of acceptance criteria and reported device performance:

    The document includes a "Substantial equivalence chart" which compares various specifications and performance metrics of the proposed device (GenX-CR) to a predicate device (CRUXCAN(CRX-1000)). While this chart doesn't explicitly state "acceptance criteria," it implicitly uses the predicate device's performance as a benchmark for substantial equivalence.

    | Acceptance Criteria (Implied by Predicate Performance) | Reported Device Performance (GenX-CR) |
    | --- | --- |
    | Image file format: TIFF / raw format | TIFF / raw format (same) |
    | Power supply: 50/60 Hz, 100-240 V~ | 50/60 Hz, 100-240 V~ (same) |
    | X-ray absorber: imaging plate | Imaging plate (same) |
    | Image plate sizes: size 0: 22 x 31 mm; size 1: 24 x 40 mm; size 2: 31 x 41 mm; size 3: 27 x 54 mm | Size 0: 22 x 31 mm; size 1: 24 x 40 mm; size 2: 31 x 41 mm; size 3: 27 x 54 mm (same) |
    | Image pixel size: 25 um, 50 um | 25 um (high resolution), 50 um (standard resolution) (same) |
    | Gray scale level: 8 bit / 16 bit | 16 bit (same as one option, higher than the other) |
    | Resolution: 14.0 lp/mm @ 25 um | 12 lp/mm @ 25 um (similar; slightly lower than predicate but considered acceptable for substantial equivalence) |
    | Imaging plate performance, DQE at 10% efficiency: 2.8 lp/mm | 2.8 lp/mm (same) |
    | Imaging plate performance, MTF at 3 lp/mm: 35% | 35% (same) |
    | Laser safety classification: Class 1 laser product, EN 60825-1:2014 | Class 1 laser product, EN 60825-1:2014 (same) |
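A sanity check on the resolution figures above: the pixel pitch sets a Nyquist sampling limit of 1/(2·p) line pairs per mm, so the reported 12 lp/mm at the 25 µm setting sits below the 20 lp/mm sampling limit, as it must (standard sampling theory, not from the submission):

```python
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Nyquist limit in line pairs per mm for a pixel pitch given in micrometers."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

for pitch in (25.0, 50.0):
    print(f"{pitch:.0f} um pitch -> {nyquist_lp_per_mm(pitch):.0f} lp/mm Nyquist limit")
```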

    2. Sample size used for the test set and the data provenance:

    • Not explicitly stated for a clinical test set. The document mentions "Non-clinical validation testing has been performed to validate that GenX-CR conforms to the intended use, claims, user needs, effectiveness of safety measures, and instructions for use." However, it does not specify the sample size of images or patients used for this non-clinical validation.
    • Data Provenance: Not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not provided. The document states "Clinical testing was not necessary for the subject device, to demonstrate substantial equivalence." This implies no expert-based ground truth was established from a clinical study for the purpose of this 510(k).

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not applicable/Not provided. As no clinical study involving human readers and a test set for diagnostic accuracy is detailed, there is no mention of adjudication methods.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No, an MRMC comparative effectiveness study was not done. The device is a "digital radiographic scanner" that converts intraoral X-ray images from phosphor plates into digital images. It is not described as an AI-powered diagnostic assistance tool. The document explicitly states: "Clinical testing was not necessary for the subject device, to demonstrate substantial equivalence."

    6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done:

    • No, a standalone diagnostic algorithm performance study was not done. The device itself is a scanner; its performance is characterized by image quality metrics (resolution, DQE, MTF) and compliance with safety standards, rather than by a diagnostic algorithm's ability to interpret images.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

    • For the non-clinical validation, the "ground truth" consisted of established engineering specifications, safety standards (IEC 60601 series), and functional equivalence to the predicate device in terms of image capture and processing capabilities.
    • There is no mention of ground truth established from expert consensus, pathology, or outcomes data, as no clinical diagnostic efficacy study was conducted for this submission.

    8. The sample size for the training set:

    • Not applicable/Not provided. The GenX-CR is described as a hardware device (scanner) with associated firmware and control software. There is no mention of a machine learning or AI component requiring a "training set" for diagnostic performance.

    9. How the ground truth for the training set was established:

    • Not applicable/Not provided. As there is no mention of a training set for a diagnostic algorithm, the method for establishing its ground truth is not relevant here.

    K Number
    K230787
    Manufacturer
    Date Cleared
    2023-07-20

    (120 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Genoray Co., Ltd.

    Intended Use

    OSCAR 15 & OSCAR 15i are mobile fluoroscopy systems designed to provide fluoroscopic and spot film images of the patient during diagnostic, surgical, and interventional procedures. Examples of clinical applications may include cholangiography, endoscopy, and urologic, neurologic, vascular, cardiac, and critical care procedures. The system may be used for other imaging applications at the physician's discretion. OSCAR 15 & OSCAR 15i are indicated only for adult patients.

    Device Description

    The OSCAR 15 & OSCAR 15i mobile C-arms are used to provide fluoroscopic and radiographic images of patient anatomy, especially during special procedures in hospitals or medical clinics. The fluoroscopic mode of operation is very useful to the attending physician, who can view the images in real time without the need to develop individual films. These devices are intended to visualize anatomical structures by converting a pattern of X-radiation into a visible image through electronic amplification. The OSCAR 15 & OSCAR 15i consist of the X-ray tube assembly, X-ray controller, image receptor, and accessories, with no wireless function. The only difference between OSCAR 15 and OSCAR 15i is the image acquisition part: a flat panel detector (FPD) in OSCAR 15 and an image intensifier in OSCAR 15i.

    AI/ML Overview

    The provided text is a 510(k) Pre-Market Notification for the OSCAR 15 & OSCAR 15i mobile fluoroscopy systems. It asserts substantial equivalence to a predicate device (OSCAR 15, K172180) and describes non-clinical performance and safety testing. However, it does not include specific acceptance criteria or a study that directly quantifies device performance against those criteria in a format applicable to AI/CADe devices (e.g., sensitivity, specificity, FROC analysis).

    The document is concerned with demonstrating that the devices (OSCAR 15 and the newly added OSCAR 15i) function safely and effectively as fluoroscopic X-ray systems, primarily through comparison to a previously cleared predicate device and compliance with relevant IEC and CFR standards. It describes physical and technical specifications and differences, particularly in the image acquisition parts (Flat Panel Detector vs. Image Intensifier).

    Therefore, based on the provided text, I cannot complete the requested tables and information for acceptance criteria and a study proving the device meets those criteria in the way typically expected for AI/CADe devices, as this information is not present. The document focuses on regulatory compliance and substantial equivalence for an imaging hardware device, not an AI/CADe algorithm.

    Here's what can be extracted and what is missing based on your request:


    1. A table of acceptance criteria and the reported device performance

    • Acceptance Criteria: Not explicitly stated in terms of specific performance metrics (like sensitivity, specificity, accuracy) that would be common for AI/CADe devices. The acceptance criteria for this type of device are primarily compliance with safety and performance standards (e.g., IEC 60601 series, 21 CFR 1020.30, 1020.31, 1020.32) and demonstrating "substantial equivalence" to a predicate device.
    • Reported Device Performance: The document states that "the performance related to image quality was also different" due to the detector change, and that image performance was evaluated per IEC standards through performance bench testing, which demonstrated that these differences do not affect safety or effectiveness in comparison with the predicate device. It also states that clinical images were evaluated by a licensed radiologist, who confirmed sufficient diagnostic quality to provide accurate information. However, no quantified performance metrics are provided.

    2. Sample sized used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test Set Sample Size: Not specified. The document mentions "performance bench testing" and evaluation of "clinical images" but does not give a number for images or cases used in these evaluations.
    • Data Provenance: Not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of Experts: One ("a licensed radiologist") is mentioned for evaluating clinical images.
    • Qualifications of Experts: Only "a licensed radiologist" is mentioned; no specific experience level or sub-specialty is provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Adjudication Method: Not specified. Only one radiologist is mentioned for evaluation, implying no consensus/adjudication process was detailed.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: No, an MRMC study targeting human reader improvement with AI assistance was not mentioned. The device is a fluoroscopy system, not an AI/CADe tool for interpreting images.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: No, this does not apply. The device is a fluoroscopy system, a hardware imaging device, not an algorithm being evaluated in a standalone capacity.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Type of Ground Truth: For the "clinical images," the "sufficient diagnostic quality" was confirmed by a "licensed radiologist." This implies a form of expert opinion/judgment, but it's not explicitly framed as establishing a ground truth for a diagnostic algorithm. For the hardware performance, ground truth would be adherence to physical and electrical specifications verified through bench testing.

    8. The sample size for the training set

    • Training Set Sample Size: Not applicable/not mentioned. This device is a hardware fluoroscopy system, not an AI model requiring a training set.

    9. How the ground truth for the training set was established

    • Ground Truth for Training Set: Not applicable/not mentioned, as there is no AI model or training set described.


    K Number
    K220392
    Manufacturer
    Date Cleared
    2022-05-19

    (97 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    GENORAY Co., Ltd.

    Intended Use

    PAPAYA 3D & PAPAYA 3D Plus are digital panoramic, cephalometric, and tomographic extra-oral X-ray systems, indicated for use in:

    (i) producing panoramic X-ray images of the maxillofacial area, for diagnostic examination of dentition (teeth), jaws and oral structures; and

    (ii) producing radiographs of jaws, parts of the skull and carpus for the purpose of cephalometric examination, when equipped with the cephalometric arm (Only for PAPAYA 3D Plus);

    (iii) producing tomographic images of the oral and maxillofacial structure, for diagnostic examination of dentition (teeth), jaws, oral structures, and some cranial bones, if equipped with the CBCT option.

    The systems accomplish the tomographic exam by acquiring a 360-degree rotational X-ray sequence of images, reconstructing a three-dimensional matrix of the examined volume, and displaying both two-dimensional images and three-dimensional renderings of this volume.

    Device Description

    PAPAYA 3D & PAPAYA 3D Plus are diagnostic imaging systems with multiple image acquisition modes: panoramic, cephalometric, and computed tomography. They are designed for dental radiography of the oral and craniofacial anatomy, such as teeth, jaws, and oral structures. The only difference between PAPAYA 3D and PAPAYA 3D Plus is the optional cephalometric detector: without it the model is named PAPAYA 3D, and with it, PAPAYA 3D Plus. Due to this difference, the cephalometric image acquisition function applies only to PAPAYA 3D Plus, not to PAPAYA 3D.

    PAPAYA 3D Plus is equipped with extra-oral flat-panel X-ray detectors based on CMOS digital X-ray detector technology, and performs CT, panoramic and cephalometric radiography with an extra-oral X-ray tube. The CMOS flat-panel detectors are used to capture scanned images for obtaining diagnostic information for craniofacial surgery or other treatments. The system also provides 3D diagnostic images of the anatomic structures by acquiring 360° rotational image sequences of the oral and craniofacial area.

    The differences from the predicate device (K150354) are a change of input voltage and the addition of image processing software (Theia, Triana).

    AI/ML Overview

    The provided text does not contain detailed acceptance criteria and a study specifically proving the device meets those criteria for the PAPAYA 3D & PAPAYA 3D Plus, particularly with respect to its image processing software (Triana/Theia) beyond software validation. The submission primarily focuses on demonstrating substantial equivalence to a predicate device (K150354) through a comparison of physical characteristics, intended use, and general performance specifications, along with adherence to various safety and regulatory standards.

    Here's a breakdown of the available information from the provided text, addressing your questions to the extent possible:

    1. A table of acceptance criteria and the reported device performance

    The document presents performance specifications, but these are general technical specifications for the imaging hardware, not specific acceptance criteria for diagnostic performance outcomes. The comparison is between the proposed device and the predicate device.

    Criteria | Proposed Device (PAPAYA 3D & PAPAYA 3D Plus) | Predicate Device (PAPAYA 3D Plus, K150354)
    3D Technology | Cone beam computed tomography | Cone beam computed tomography
    CT FOV (DxH) | 14x14, 14x8, 8x8, 7x7, 4x4 cm | 14x14, 14x8, 8x8, 7x7, 4x4 cm
    Input Voltage | 100-240 V~, 50/60 Hz | 100-120 V~, 50/60 Hz
    Tube Voltage | 60-90 kV | 60-90 kV
    Tube Current | 4-12 mA | 4-12 mA
    Focal Spot Size | 0.5 mm | 0.5 mm
    Total Filtration | 2.8 mm Al (Canon tube) | 2.5 mm Al (CEI tube); 2.8 mm Al (Canon tube)
    Exposure Time | Panorama: max 17 sec; Cephalo: max 15.5 sec (Plus only); CT: max 15 sec | Panorama: max 17 sec; Cephalo: max 15.5 sec; CT: max 15 sec
    Image Receptor | Panoramic: CMOS FPD; Cephalo: CMOS FPD; CT: CMOS FPD | Panoramic: CMOS FPD; Cephalo: CMOS FPD; CT: CMOS FPD
    Image processing S/W | Triana (K103182) / Theia | — (the predicate device itself likely had an image viewer, but it is not explicitly named as "Theia" or "Triana" in this table)
    Image Quality (from non-clinical tests) | Panoramic/cephalometric sensors: MTF >80% at 21lp/mm; DQE ~80% at 01lp/mm; dynamic range >72 dB. All detectors: MTF >60% at 11lp/mm; DQE ~70% at 01lp/mm; dynamic range >72 dB | (Implicitly similar, or served as the benchmark for the proposed device's "similar" image quality)

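The dynamic-range figures above follow the standard decibel definition for a detector's usable signal range. A minimal sketch with hypothetical detector values (not taken from the submission):

```python
import math

def dynamic_range_db(max_signal, noise_floor):
    """Detector dynamic range in dB: 20 * log10(max signal / noise floor)."""
    return 20 * math.log10(max_signal / noise_floor)

# Hypothetical detector whose saturation signal is 5000x its noise floor
dr = dynamic_range_db(5000.0, 1.0)
print(round(dr, 1))  # 74.0 -> would satisfy the ">72 dB" figure in the table
```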
    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document mentions "Non-Clinical Test results" referencing "each detector of PAPAYA 3D Plus" and "additional detector test results," but does not specify a sample size for any clinical test set or data provenance (country, retrospective/prospective). The evaluation appears to be based on technical specifications and laboratory testing of the detectors, rather than a clinical study with patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable. The document describes technical testing of components (detectors), and software validation (IEC 62304), not a clinical study involving experts establishing ground truth for diagnostic accuracy.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. As no clinical study with a test set requiring adjudication is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC comparative effectiveness study is mentioned. The device described (PAPAYA 3D & PAPAYA 3D Plus) is an imaging system (hardware) and associated image processing software (Triana/Theia). The software functions listed are for image viewing and manipulation (e.g., 3D visualization, 2D analysis, MPR, measurement, rotation), not AI assistance for diagnosis. Therefore, this question is not applicable to the information provided.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The software (Triana/Theia) is an image processing and viewing tool, not a standalone diagnostic algorithm. Its validation was done against IEC 62304:2006/AC: 2008 for software lifecycle processes, not for standalone diagnostic performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the detector performance tests (MTF, DQE, Dynamic Range), the "ground truth" would be established by physical measurement standards and calibration, not clinical ground truth like pathology or expert consensus. For the software validation, the "ground truth" or reference for validation would be the functional and performance requirements defined during software development, ensuring it operates as intended according to IEC 62304.

    8. The sample size for the training set

    Not applicable. The document describes an imaging hardware system and image viewing software. There is no mention of an "AI" component or machine learning algorithm that would require a training set.

    9. How the ground truth for the training set was established

    Not applicable. As no training set for an AI algorithm is mentioned.


    Summary of what the document does describe for proving acceptance:

    The document focuses on demonstrating substantial equivalence to an existing predicate device (PAPAYA 3D Plus, K150354) by showing that the proposed devices (PAPAYA 3D & PAPAYA 3D Plus) have the same intended use and similar technological characteristics, and that any differences do not raise new questions of safety or effectiveness.

    The primary methods of "proving acceptance" appear to be:

    • Comparison to Predicate Device: A detailed comparison table ([6]) highlights the similarities in indications for use, 3D technology, CT FOV, tube parameters, exposure times, and image receptors.
    • Safety and EMC Testing: The device underwent testing to established international standards (IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, IEC 60601-2-63) to verify electrical, mechanical, environmental safety, and electromagnetic compatibility ([8]).
    • Performance Data for Detectors: Non-clinical tests were performed on the detectors, measuring MTF, DQE, and dynamic range, demonstrating "similar" diagnostic image quality to the predicate device ([8]).
    • Software Validation: The image processing software (Theia) was validated according to IEC 62304:2006/AC: 2008, and its similarities to the already cleared Triana software (K103182) are highlighted. The software is classified as having a "Minor Level of Concern" ([8]).
    • Compliance with Regulations: The device meets EPRC standards (21 CFR 1020.30, 1020.31, 1020.33) and NEMA PS 3.1-3.18 (DICOM Set). Relevant FDA guidance documents for submissions were also considered ([8]).

    K Number
    K220423
    Manufacturer
    Date Cleared
    2022-05-19

    (94 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    GENORAY Co., Ltd.

    Intended Use

    PAPAYA & PAPAYA Plus are digital extraoral-source X-ray systems intended to produce panoramic and cephalometric (optional) images of the oral and craniofacial anatomy for precise treatment planning in adult and pediatric care. The system is used for dental and skull radiographic examination and diagnosis of teeth, jaw, oral structures, and skull by exposing an X-ray image receptor to ionizing radiation, with digital imaging capability for taking both panoramic and cephalometric images. The system can also be equipped with a CUST (tomographic) option, which is capable of taking cross-sectional radiographic images that provide dimensional information for dental implant planning and information about the location of impacted teeth.

    Device Description

    The proposed devices PAPAYA & PAPAYA Plus are diagnostic imaging systems which consist of multiple image acquisition modes: panorama and cephalometric. The only difference between PAPAYA & PAPAYA Plus is the optional cephalometric detector. (The model without the cephalometric detector is named PAPAYA, and the model with it is named PAPAYA Plus.)

    The proposed devices have the CUST imaging option which is used to reconstruct tomographic images from a set of pre-acquired projection radiographic images of the object.

    The differences from the predicate device (K141700) are a change of input voltage and the addition of image processing software (Theia, Triana).

    AI/ML Overview

    This FDA submission describes the PAPAYA & PAPAYA Plus digital extraoral source X-ray systems, which are intended to produce panoramic and cephalometric images for dental and craniofacial diagnosis and treatment planning. The submission asserts substantial equivalence to a previously cleared device, PAPAYA Plus (K141700).

    Here's an analysis of the provided information regarding acceptance criteria and supporting studies:

    1. Table of Acceptance Criteria and Reported Device Performance

    The submission does not explicitly list acceptance criteria in the format of a separate table with target values. Instead, it presents a comparative table between the proposed device and the predicate device, highlighting performance specifications and software features. The "acceptance criteria" are implied to be that the proposed device performs at least equivalently to the predicate device, with any changes either being safety-verified or not impacting overall efficacy.

    Criteria | Proposed device (PAPAYA & PAPAYA Plus) | Predicate Device (PAPAYA Plus, K141700) | Conclusion on Equivalence against Implicit Criteria
    Indications for Use | Same as predicate | Digital extraoral source X-ray system for panoramic and cephalometric images of oral/craniofacial anatomy, dental/skull radiographic examination, and CUST (tomographic) option for cross-sectional images for implant planning and impacted teeth | Equivalent
    Performance Specification | Panoramic (PAPAYA); Panoramic and Cephalometric (PAPAYA Plus) | Panoramic (PAPAYA); Panoramic and Cephalometric (PAPAYA Plus) | Equivalent
    Input Voltage | 100-240 V~, 50/60 Hz | 120 V~, 60 Hz | Different, but safety/EMC verified
    Tube Voltage | 60-90 kV | 60-90 kV | Equivalent
    Tube Current | 4-12 mA | 4-12 mA | Equivalent
    Focal Spot Size | 0.5 mm | 0.5 mm | Equivalent
    Exposure Time | Panorama: max 17 sec; Cephalo: max 12 sec | Panorama: max 17 sec; Cephalo: max 12 sec | Equivalent
    Exposure Mode | Panoramic, TMJ, SINUS, CUST, Cephalo | Panoramic, TMJ, SINUS, CUST, Cephalo | Equivalent
    Image Receptor | CMOS (Panoramic and Cephalometric) | CMOS (Panoramic and Cephalometric) | Equivalent
    Image Processing Software | Triana (K103182) or Theia | N/A (predicate did not specify external image processing software in this comparison) | Different, but Theia validated against IEC 62304 and deemed functionally equivalent to Triana
    Safety/EMC/Performance Data | Tested to IEC 60601-1, -1-2, -1-3, -1-6, -2-63, IEC 62366, IEC 62304; FDA guidances on Content of Premarket Submissions for Management of Cybersecurity in Medical Devices, Content of Premarket Submissions for Software Contained in Medical Devices, Pediatric Information for X-ray Imaging Device Premarket Notifications, and Submission of 510(k)'s for Solid State X-ray Imaging Devices; NEMA PS 3.1-3.20 (DICOM Set) | Established in K141700 | Equivalent through compliance with relevant standards and guidelines

    2. Sample Size Used for the Test Set and Data Provenance

    The submission mentions "Clinical Evaluation Report and bench" testing for safety and effectiveness but does not provide specific details about the sample size used for any clinical test set or the provenance of any data (e.g., country of origin, retrospective/prospective). The primary focus of the performance data section is on adherence to regulatory standards and comparison to a predicate device's specifications.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    There is no mention of experts or ground truth establishment in the context of a diagnostic performance study for the proposed device in this submission. The submission centers on the physical device's specifications and software validation.

    4. Adjudication Method

    No adjudication method is mentioned, as there is no description of a study involving subjective assessment of diagnostic accuracy.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC study comparing human readers with and without AI assistance is mentioned. The device described is an X-ray imaging system and associated image processing software, not an AI-assisted diagnostic tool in the sense of providing automated interpretations. While the software offers "3D visualization, 2D analysis, various MPR functions," which can assist human readers, it's not discussed in the context of an MRMC study for effect size improvement.

    6. Standalone (Algorithm Only) Performance

    A standalone performance study for the image processing software (Triana or Theia) beyond the validation against IEC 62304 for software lifecycle processes is not explicitly described in terms of diagnostic accuracy or a specific clinical task without human interaction. The software is described as a tool to "obtain, store, inquire, and process the acquired image," suggesting it's an enhancement for human interpretation, not a standalone diagnostic algorithm.

    7. Type of Ground Truth Used

    No specific "ground truth" (e.g., expert consensus, pathology, outcomes data) is mentioned as part of a diagnostic accuracy study. The safety and performance assessments are based on compliance with electrical, mechanical, and imaging standards, as well as software validation.

    8. Sample Size for the Training Set

    The submission does not mention a training set, as it does not describe the development or validation of a machine learning-based diagnostic algorithm. The software mentioned (Triana/Theia) performs image processing and visualization, not AI-driven diagnosis requiring a training set.

    9. How the Ground Truth for the Training Set Was Established

    Since no training set is mentioned, this information is not provided.


    K Number
    K211780
    Device Name
    ZEN-2090 Turbo
    Manufacturer
    Date Cleared
    2022-03-09

    (273 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Genoray Co., Ltd.

    Intended Use

    ZEN-2090 Turbo, C-Arm mobile is used for providing fluoroscopic images of patient anatomy in a hospital or medical clinics.

    Device Description

    ZEN-2090 Turbo, C-arm mobile is used for providing fluoroscopic image of patient anatomy, especially during diagnostic, surgical and interventional procedures. The fluoroscopic mode of operation is very useful to the attending physician to see the images on real time without the need to develop individual films.

    ZEN-2090 Turbo is consisting of the X-ray tube, X-ray tube assembly, X-ray controller, XTV Camera and some accessories.

    AI/ML Overview

    The provided text is a 510(k) summary for the ZEN-2090 Turbo device, which is an Image-Intensified Fluoroscopic X-Ray System. This submission focuses on demonstrating substantial equivalence to a predicate device (ZEN-2090 Pro), rather than presenting a standalone study proving a new device's performance against specific acceptance criteria for an AI/algorithm-based diagnostic tool.

    The document primarily discusses the technical specifications of the ZEN-2090 Turbo and compares them to the predicate device to argue for substantial equivalence. It does not contain the kind of detailed study design, data, and acceptance criteria normally provided for AI-driven diagnostic systems where performance metrics like sensitivity, specificity, or AUC are critical.

    Therefore, I cannot extract the information requested regarding acceptance criteria, study details, sample sizes, expert ground truth establishment, or MRMC studies, as these aspects are not present in the provided 510(k) summary for this type of device (a C-arm mobile fluoroscopic X-ray system).

    The document states:

    • Device Type: Image-Intensified Fluoroscopic X-Ray System (ZEN-2090 Turbo)
    • Purpose: Provides fluoroscopic images of patient anatomy.
    • Mode of Action: Traditional X-ray imaging, not an AI/algorithm-driven diagnostic aid.
    • Basis for Clearance: Substantial equivalence to a predicate device (ZEN-2090 Pro, K091918).
    • Performance Claim: "ZEN-2090 Turbo has better image quality than predicate device, ZEN-2090 Pro (K091918)" due to higher power output in fluoroscopy modes.
    • Testing: Verification and validation testing per quality system, engineering bench testing, and compliance with IEC and 21 CFR standards. Software verification for functional requirements, performance, safety, risk management, privacy, and security was also performed.

    In summary, the provided text does not describe an AI/algorithm-driven diagnostic device study with the requested metrics and methodologies. The device is a traditional medical imaging hardware system.


    K Number
    K200469
    Manufacturer
    Date Cleared
    2020-09-16

    (203 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Genoray Co.,Ltd

    Intended Use

    The X-ray unit system is a diagnostic imaging system which consists of multiple image acquisition modes; panoramic, cephalometric, and CBCT (Cone Beam Computed Tomography). X-ray unit system is used for dental radiographic examination and diagnosis of teeth, jaw, oral structures and skull. The device is to be operated and used by dentists and other legally qualified professionals.

    Device Description

    The proposed devices PAPAYA 3D Premium & PAPAYA 3D Premium Plus are computed tomography X-ray systems which consist of multiple image acquisition modes: panorama, cephalometric, and computed tomography. The only difference between PAPAYA 3D Premium & PAPAYA 3D Premium Plus is the optional cephalometric detector. They are designed for dental radiography of the oral and craniofacial anatomy such as teeth, jaws and oral structures. The device with the cephalometric detector is named PAPAYA 3D Premium Plus, and the device without it is named PAPAYA 3D Premium.

    The proposed devices are composed of flat-panel X-ray detectors based on CMOS and TFT detector types, divided into CT, panoramic and cephalometric radiography, and an X-ray tube. The CMOS and TFT detectors are used to capture scanned images for obtaining diagnostic information for craniofacial surgery or other treatments. The devices also provide 3D images of the anatomic structures by acquiring 360° rotational image sequences of the oral and craniofacial area.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a dental X-ray system, PAPAYA 3D Premium & PAPAYA 3D Premium Plus. The document focuses on demonstrating substantial equivalence to a predicate device (PAPAYA 3D Plus, K150354) rather than presenting a detailed clinical study with specific acceptance criteria and performance metrics for an AI algorithm.

    Therefore, many of the requested details about acceptance criteria for an AI device, sample sizes, expert qualifications, and specific study designs (MRMC, standalone performance) are not present in the provided text. The device in question is a medical imaging hardware system, not an AI software.

    However, I can extract information related to the performance validation of the newly added image receptors, which is the closest thing to "device meets acceptance criteria" in this context.

    Here's a breakdown of the available information:

    1. Table of Acceptance Criteria and Reported Device Performance (as much as can be inferred for the imaging components):

    Acceptance Criteria (Inferred) | Reported Device Performance (for newly added detectors)
    Clinical considerations: images are "diagnosable" and meet indications for use | Images judged "well enough to diagnosable and meet its indications for use"
    Imaging performance for the newly added CBCT image receptor (FXDD-0909GA) | Tested for: gantry positioning accuracy; in-plane uniformity; spatial resolution / section thickness; noise; contrast-to-noise ratio; geometric distortion; metal artifacts
    Imaging performance for the newly added cephalometric image receptor (FXDD-1012CA) | Tested for: line-pair resolution

    Note: The document states these performance metrics were "tested," implying they met predefined acceptance criteria, but the specific numerical values or thresholds for "acceptance" are not provided.
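The contrast-to-noise ratio listed among those tests is commonly computed from phantom regions of interest. A minimal sketch using one common definition and hypothetical pixel values (the summary gives neither the formula used nor the thresholds):

```python
import statistics

def contrast_to_noise_ratio(roi_pixels, background_pixels):
    """One common CNR definition: |mean(ROI) - mean(background)| / stdev(background)."""
    contrast = abs(statistics.mean(roi_pixels) - statistics.mean(background_pixels))
    return contrast / statistics.stdev(background_pixels)

# Hypothetical pixel samples from a phantom insert ROI and a background ROI
roi = [120, 118, 122, 121, 119]
background = [100, 102, 98, 101, 99]
print(round(contrast_to_noise_ratio(roi, background), 2))  # 12.65
```

Acceptance thresholds for such phantom metrics are device-specific; the 510(k) summary only states that the metrics were "tested."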

    2. Sample Size and Data Provenance:

    • Test Set Sample Size: Not explicitly stated for either clinical image evaluation or phantom testing. The document only mentions "clinical images" for evaluation.
    • Data Provenance: The document does not specify the country of origin of the clinical images. It implies a retrospective review of existing clinical image sets.

    3. Number of Experts and Qualifications:

    • Number of Experts: "the clinical images were evaluated by the US board-certified oral surgeon." (Singular - implies one or an unspecified small number of US board-certified oral surgeons).
    • Qualifications: "US board-certified oral surgeon." No specific years of experience are mentioned.

    4. Adjudication Method:

    • Adjudication Method: "Throughout the evaluation by oral surgeon..." This wording suggests a single expert's opinion, so there's no mention of a formal adjudication method (like 2+1 or 3+1).

    5. MRMC Comparative Effectiveness Study:

    • MRMC Study Done? No. This document describes a new imaging hardware device and its added detectors. There is no mention of an AI component requiring a comparison of human reader performance with and without AI assistance.

    6. Standalone Performance (Algorithm Only):

    • Standalone Performance Done? N/A. This is a hardware device. The closest related component is the "Theia" image processing software, which is described as having "only UI" differences from the predicate's software and being "developed for marketing purpose only." Its validation focused on standards compliance (EN 62304, NEMA PS 3.1-3.20 DICOM, FDA Guidance) rather than a standalone clinical performance study as one might expect for an AI algorithm.

    7. Type of Ground Truth Used:

    • For Clinical Image Evaluation: Expert consensus (from the US board-certified oral surgeon) on whether images were "diagnosable" and met indications for use.
    • For Imaging Performance Tests: Phantom data (e.g., gantry positioning accuracy, spatial resolution, CNR, etc.).

    8. Sample Size for Training Set:

    • Training Set Sample Size: Not applicable. This is a hardware device, not an AI model that undergoes "training."

    9. How Ground Truth for Training Set Was Established:

    • Ground Truth Establishment for Training Set: Not applicable, as there's no AI training set described.

    K Number
    K181943
    Manufacturer
    Date Cleared
    2018-08-17

    (28 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Genoray Co., Ltd.

    Intended Use

    OSCAR is a mobile fluoroscopy system designed to provide fluoroscopic and spot film images of the patient during diagnostic, surgical and interventional procedures. Examples of clinical application may include cholangiography, endoscopy, urologic, orthopedic, neurologic, vascular, and stone localization.

    Device Description

    OSCAR Prime and OSCAR Classic are classified according to the image acquisition component: the flat-panel detector version is OSCAR Prime, and the image-intensifier version is OSCAR Classic. Both are sold under the brand name OSCAR.

    OSCAR consists of an X-ray tube, X-ray tube assembly, X-ray controller, image receptor and some accessories. There is no wireless function in this device.

    The OSCAR C-Arm Mobile is intended to visualize anatomical structures by converting a pattern of X-radiation into a visible image through electronic amplification. The device provides fluoroscopic and radiographic images of patient anatomy, especially during special procedures in hospitals and clinics. The fluoroscopic mode of operation is very useful to the attending physician, who can see the images in real time without the need to develop individual films.

    AI/ML Overview

    The provided text does not contain information about acceptance criteria or a study proving the device meets acceptance criteria in the manner requested (e.g., a detailed clinical trial or performance study with metrics, sample sizes, and ground truth establishment).

    This document is a 510(k) Premarket Notification from the FDA, which primarily focuses on demonstrating substantial equivalence to a legally marketed predicate device, rather than proving the device meets specific detailed acceptance criteria through a dedicated performance study.

    Here's what can be extracted from the document, organized as per your request, with explicit notes about missing information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a formal table of acceptance criteria with specific quantitative thresholds that a study then verifies. Instead, the "performance" is discussed in terms of technical specifications and compliance with industry standards.

    Feature/Standard | Acceptance Criteria (Implied) | Reported Device Performance
    General | Safe and effective (equivalent to predicate) | "OSCAR is safe and effective as predicate device, and has no new indication for use. Therefore, OSCAR is substantially equivalent to predicate device."
    Safety | Compliance with relevant IEC standards and CFR | "OSCAR complies with industry standards such as IEC 60601-1 Series and 21 CFR 1020.30, 21 CFR 1020.31 and 21 CFR 1020.32 to minimize electrical, mechanical and radiation hazards." "Electrical, mechanical, environmental safety and performance testing according to standard IEC 60601-1, IEC 60601-1-3, IEC 60601-1-6, IEC 60601-2-28, IEC 60601-2-43, IEC 60601-2-54 and IEC 62366 were performed."
    EMC | Compliance with IEC 60601-1-2 | "EMC testing was conducted in accordance with standard IEC 60601-1-2."
    EPRC | Compliance with 21 CFR 1020.30, 31, 32 | "OSCAR meets the EPRC standards (21 CFR 1020.30, 31, 32)."
    Software | Compliance with FDA guidance; changes do not affect safety/effectiveness | "FDA guidance 'guidance for SSXI devices', and 'guidance for the Content of Premarket Submissions for Software Contained in Medical devices', was performed for OSCAR." "Changes to the predicate device software were tested and they do not affect the device safety and effectiveness. Also, the device software is moderate level of concern."
    DQE | For OSCAR Prime: more effective and safe than the predicate's DQE | For OSCAR Prime (flat panel): DQE of 59% (Option A) and 45% (Option B), compared to the predicate's DQE (image intensifier + CCD camera) of 51%. "the DQE of the OSCAR Prime is more effective and safety than predicate device." (This is a qualitative comparison rather than a specific numeric acceptance criterion being met.) Other technical specifications (e.g., resolution, kV, mA ranges) are listed but not presented with explicit acceptance criteria.

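For context on the DQE row: detective quantum efficiency is defined as the squared ratio of output to input signal-to-noise ratio. A minimal sketch with hypothetical SNR values chosen to roughly reproduce the 59% figure (the submission does not report SNRs):

```python
def dqe(snr_out, snr_in):
    """Detective quantum efficiency: DQE = (SNR_out / SNR_in) ** 2."""
    return (snr_out / snr_in) ** 2

# Hypothetical SNRs yielding roughly the 59% reported for Option A
value = dqe(snr_out=7.68, snr_in=10.0)
print(round(value, 2))  # 0.59
```

A higher DQE at a given dose means the detector preserves more of the incident X-ray statistics, which is why the comparison against the predicate's 51% is offered as evidence of equivalent or better image quality.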
    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not specified in the provided text. The document refers to "bench and clinical evaluation" but does not detail the size or nature of these evaluations for the purpose of a test set for performance.
    • Data Provenance: Not specified. The document does not indicate the country of origin of data or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    • This information is not provided in the document. The document refers to "bench and clinical evaluation" but does not detail how ground truth was established by experts.

    4. Adjudication Method for the Test Set

    • This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. No effect size of human reader improvement with AI vs. without AI assistance is present, primarily because the device described is a fluoroscopy system, not an AI-assisted diagnostic tool.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • This information is not applicable as the device is a medical imaging system (fluoroscopy system), not an AI algorithm designed to operate without human intervention. The performance mentioned relates to the technical specifications of the imaging equipment.

    7. Type of Ground Truth Used

    • The document does not explicitly state the type of ground truth used for any performance evaluation. It mentions "bench and clinical evaluation," which generally implies a comparison against existing clinical standards or established benchmarks for image quality and safety. However, specific types like expert consensus, pathology, or outcomes data are not cited.

    8. Sample Size for the Training Set

    • This information is not provided and is not applicable in the context of this device, as it is a fluoroscopy system and not an AI/machine learning algorithm requiring a separate training set.

    9. How the Ground Truth for the Training Set Was Established

    • This information is not provided and is not applicable, as the device described is a fluoroscopy system, not an AI/machine learning algorithm with a training set.

    K Number
    K172810
    Device Name
    PORT-X IV
    Manufacturer
    Date Cleared
    2018-03-07

    (170 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    GENORAY Co., Ltd.

    Intended Use

    PORT-X IV is a portable X-ray system to be used by trained dental technicians as a mobile, extraoral x-ray source for producing diagnostic x-ray images using intraoral image receptors. It is intended for both adult and pediatric subjects.

    Device Description

    The portable X-ray system, PORT-X IV, is intended to be used by trained dentists and dental technicians as an extraoral x-ray source for producing diagnostic x-ray images using intraoral image receptors. The system acquires diagnostic images of the patient and includes accessories: a battery, a recharging unit, and a hand switch.

    AI/ML Overview

    The provided text is a 510(k) summary for the GENORAY Co., Ltd. PORT-X IV, a portable X-ray system. This document focuses on demonstrating substantial equivalence to a predicate device (Metabiomed, Inc. REXTAR X) rather than presenting a study to prove the device meets specific acceptance criteria for AI/algorithm performance. Therefore, many of the requested elements for an AI-based device's acceptance criteria and study are not applicable or cannot be extracted from this document.

    However, I will extract what is available regarding general device performance and safety.

    Here's the breakdown of the information based on the prompt and the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state "acceptance criteria" in a table format for AI performance. Instead, it details performance standards and safety criteria for an X-ray device based on federal and international standards.

    Acceptance Criteria (Standards and Requirements) and Reported Device Performance (PORT-X IV)

    Safety and General Performance Standards

    • IEC 60601-1 Series (electrical, mechanical, and environmental safety and performance): Complies. Electrical, mechanical, and environmental safety and performance testing performed; all test results were satisfactory.
    • IEC 60601-1-3 (radiological protection): Complies. Radiological protection testing performed; all test results were satisfactory.
    • IEC 60601-2-65 (specific requirements for dental X-ray equipment): Complies. Testing performed; all test results were satisfactory.
    • IEC 60601-1-2 (EMC testing): EMC testing conducted in accordance with IEC 60601-1-2; all test results were satisfactory.

    Radiation Control Provisions (21 CFR 1020.30 & 1020.31)

    • Focal spot to skin distance, longer than the minimum of 18 cm: Confirmed that the focal spot to skin distance was longer than the 18 cm minimum.
    • Minimum HVL (half-value layer) of 1.5 mm: Confirmed that the minimum HVL was 1.5 mm.
    • Accuracy of loading factors (e.g.,
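    The two numeric radiation-control minimums listed above (focal-spot-to-skin distance of at least 18 cm and a minimum half-value layer of 1.5 mm aluminum) lend themselves to a simple acceptance check. The sketch below is a hypothetical helper, not part of the 510(k) filing; only the two limit values come from the document.

    ```python
    # Minimal sketch of a 21 CFR 1020.30/1020.31 limit check (hypothetical
    # helper; the limits below are the values reported for PORT-X IV).

    MIN_FOCAL_SPOT_TO_SKIN_CM = 18.0   # minimum source-to-skin distance (cm)
    MIN_HVL_MM_AL = 1.5                # minimum half-value layer (mm aluminum)

    def passes_radiation_controls(focal_spot_to_skin_cm: float,
                                  hvl_mm_al: float) -> bool:
        """Return True when both measured values meet the stated minimums."""
        return (focal_spot_to_skin_cm >= MIN_FOCAL_SPOT_TO_SKIN_CM
                and hvl_mm_al >= MIN_HVL_MM_AL)

    # Values at or above the minimums pass; anything below either limit fails.
    print(passes_radiation_controls(18.5, 1.5))  # True
    print(passes_radiation_controls(17.0, 1.5))  # False
    ```

    A real verification protocol would cover every provision in the regulation (loading-factor accuracy, beam limitation, and so on); this only illustrates the two rows shown.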
