
510(k) Data Aggregation

    K Number
    K200469
    Date Cleared
    2020-09-16

    (203 days)

    Regulation Number
    892.1750
    Intended Use

    The X-ray unit system is a diagnostic imaging system that offers multiple image acquisition modes: panoramic, cephalometric, and CBCT (Cone Beam Computed Tomography). The system is used for dental radiographic examination and diagnosis of the teeth, jaw, oral structures, and skull. The device is to be operated and used by dentists and other legally qualified professionals.

    Device Description

    The proposed device, PAPAYA 3D Premium Plus, is a computed tomography x-ray system with multiple image acquisition modes: panoramic, cephalometric, and computed tomography. The only difference between the two models is the optional cephalometric detector: the device with the cephalometric detector is named PAPAYA 3D Premium Plus, and the device without it is named PAPAYA 3D Premium. It is designed for dental radiography of oral and craniofacial structures such as the teeth, jaws, and oral cavity.

    The proposed device is composed of flat-panel x-ray detectors based on CMOS and TFT technologies, divided among CT, panoramic, and cephalometric radiography, together with an x-ray tube. The CMOS and TFT detectors capture scanned images to obtain diagnostic information for craniofacial surgery or other treatments. The system also provides 3D views of anatomic structures by acquiring 360° rotational image sequences of the oral and craniofacial area.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a dental X-ray system, PAPAYA 3D Premium & PAPAYA 3D Premium Plus. The document focuses on demonstrating substantial equivalence to a predicate device (PAPAYA 3D Plus, K150354) rather than presenting a detailed clinical study with specific acceptance criteria and performance metrics for an AI algorithm.

    Therefore, many of the requested details about acceptance criteria for an AI device, sample sizes, expert qualifications, and specific study designs (MRMC, standalone performance) are not present in the provided text. The device in question is a medical imaging hardware system, not an AI software.

    However, I can extract information related to the performance validation of the newly added image receptors, which is the closest thing to "device meets acceptance criteria" in this context.

    Here's a breakdown of the available information:

    1. Table of Acceptance Criteria and Reported Device Performance (as much as can be inferred for the imaging components):

| Acceptance Criteria (Inferred) | Reported Device Performance (for newly added detectors) |
|---|---|
| Clinical considerations: images are "diagnosable" and meet the indications for use | Images judged "well enough to diagnosable and meet its indications for use" |
| Imaging performance, newly added CBCT image receptor FXDD-0909GA | Tested for: gantry positioning accuracy; in-plane uniformity; spatial resolution / section thickness; noise; contrast-to-noise ratio; geometric distortion; metal artifacts |
| Imaging performance, newly added cephalometric image receptor FXDD-1012CA | Tested for: line-pair resolution |

    Note: The document states these performance metrics were "tested," implying they met predefined acceptance criteria, but the specific numerical values or thresholds for "acceptance" are not provided.
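The contrast-to-noise ratio named in the table is conventionally computed from phantom region-of-interest (ROI) measurements: the difference between the mean signal in an insert ROI and the mean background, divided by the background noise. A minimal sketch of that arithmetic, with hypothetical readings that are not from the submission:

```python
def contrast_to_noise_ratio(mean_roi, mean_bg, std_bg):
    """CNR as commonly defined for CT phantom tests:
    |mean signal in insert ROI - mean background| / background noise."""
    return abs(mean_roi - mean_bg) / std_bg

# Hypothetical phantom readings (HU) -- illustrative only,
# not values reported in the 510(k) submission.
cnr = contrast_to_noise_ratio(mean_roi=120.0, mean_bg=20.0, std_bg=25.0)
print(cnr)  # 4.0
```

A submission's acceptance criterion would typically set a minimum CNR threshold for a given phantom insert; no such threshold is stated in this document.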

    2. Sample Size and Data Provenance:

    • Test Set Sample Size: Not explicitly stated for either clinical image evaluation or phantom testing. The document only mentions "clinical images" for evaluation.
    • Data Provenance: The document does not specify the country of origin of the clinical images. It implies a retrospective review of existing clinical image sets.

    3. Number of Experts and Qualifications:

    • Number of Experts: "the clinical images were evaluated by the US board-certified oral surgeon." (Singular - implies one or an unspecified small number of US board-certified oral surgeons).
    • Qualifications: "US board-certified oral surgeon." No specific years of experience are mentioned.

    4. Adjudication Method:

    • Adjudication Method: "Throughout the evaluation by oral surgeon..." This wording suggests a single expert's opinion, so there's no mention of a formal adjudication method (like 2+1 or 3+1).

    5. MRMC Comparative Effectiveness Study:

    • MRMC Study Done? No. This document describes a new imaging hardware device and its added detectors. There is no mention of an AI component requiring a comparison of human reader performance with and without AI assistance.

    6. Standalone Performance (Algorithm Only):

    • Standalone Performance Done? N/A. This is a hardware device. The closest related component is the "Theia" image processing software, which is described as having "only UI" differences from the predicate's software and being "developed for marketing purpose only." Its validation focused on standards compliance (EN 62304, NEMA PS 3.1-3.20 DICOM, FDA Guidance) rather than a standalone clinical performance study as one might expect for an AI algorithm.

    7. Type of Ground Truth Used:

    • For Clinical Image Evaluation: Expert consensus (from the US board-certified oral surgeon) on whether images were "diagnosable" and met indications for use.
    • For Imaging Performance Tests: Phantom data (e.g., gantry positioning accuracy, spatial resolution, CNR, etc.).

    8. Sample Size for Training Set:

    • Training Set Sample Size: Not applicable. This is a hardware device, not an AI model that undergoes "training."

    9. How Ground Truth for Training Set Was Established:

    • Ground Truth Establishment for Training Set: Not applicable, as there's no AI training set described.

    K Number
    K172180
    Device Name
    OSCAR 15
    Date Cleared
    2018-02-09

    (205 days)

    Regulation Number
    892.1650
    Intended Use

    OSCAR 15 is a mobile fluoroscopy system designed to provide fluoroscopic and spot film images of the patient during diagnostic, surgical, and interventional procedures. Examples of clinical applications include cholangiography, endoscopy, and urologic, orthopedic, neurologic, vascular, cardiac, and critical care procedures.

    The system may be used for other imaging applications at the physician's discretion.

    Device Description

    OSCAR 15 consists of an x-ray tube, x-ray tube assembly, x-ray controller, detector, and accessories. There is no wireless function in this device.

    The OSCAR 15 C-arm mobile is intended to visualize anatomical structures by converting a pattern of x-radiation into a visible image through electronic amplification. It provides fluoroscopic and radiographic images of patient anatomy, especially during special procedures in hospitals or medical clinics. The fluoroscopic mode of operation allows the attending physician to view images in real time without the need to develop individual films.

    AI/ML Overview

    The provided text is a 510(k) summary for the OSCAR 15 mobile fluoroscopy system. It is a submission to the FDA to demonstrate substantial equivalence to a predicate device, not a study proving the device meets specific acceptance criteria in the context of AI/ML performance.

    Therefore, most of the requested information regarding acceptance criteria, reported device performance, sample sizes, expert qualifications, adjudication methods, MRMC studies, standalone performance, ground truth types, and training set details cannot be extracted from this document as it pertains to AI/ML device validation.

    Here's what can be extracted and a general explanation for the missing information:

    1. A table of acceptance criteria and the reported device performance

    The document does not present specific acceptance criteria in terms of numerical performance metrics for an AI/ML component. Instead, it focuses on demonstrating substantial equivalence to a predicate device (ZEN-7000) based on design features, indications for use, and compliance with industry standards for safety and electrical performance.

    The reported device performance is primarily in relation to physical and technical specifications, and safety/EMC compliance.

| Criterion Category | Acceptance Metric (Implicit from Substantial Equivalence and Standards) | Reported Device Performance (OSCAR 15) | Predicate Device (ZEN-7000) |
|---|---|---|---|
| Indications for Use | Same as predicate device | Mobile fluoroscopy for diagnostic, surgical, and interventional procedures (cholangiography, endoscopy, urologic, orthopedic, neurologic, vascular, cardiac, critical care); other applications at the physician's discretion | Mobile fluoroscopy for diagnostic, surgical, and interventional procedures (cholangiography, endoscopy, urologic, orthopedic, neurologic, vascular, cardiac, critical care, emergency room procedures); other applications at the physician's discretion |
| Generator | High-frequency inverter | High-frequency inverter | High-frequency inverter |
| Max. output power | Similar to predicate (15 kW) | 15 kW | 5 kW (15 kW optional) |
| X-ray tube | Rotating tube, same focal spots | Rotating tube; large: 0.6 mm, small: 0.3 mm | Rotating tube; large: 0.6 mm, small: 0.3 mm |
| Fluoroscopy kV/mA | Similar range to predicate | 40-120 kV / 0.2-6.0 mA | 40-120 kV / 0.2-6.0 mA |
| Pulsed fluoroscopy mA | Similar or improved range to predicate | 1 mA to 48 mA | 1 mA to 20 mA (5 kW), 1 mA to 48 mA (15 kW) |
| Radiography kV/mAs | Similar range to predicate | 40-120 kV / 0.4-100 mAs | 40-120 kV / 1-100 mAs |
| Detector type | Different from predicate, but superior performance shown (DQE, image quality) | Flat-panel detector (CMOS) | Image intensifier |
| Detector active image area | Specified | 260 x 256 mm | 9" or 12" |
| Detector central resolution | Specified | 4.6 lp/mm | 2.2 lp/mm (9"), 1.6 lp/mm (12"), at the monitor |
| Detector contrast ratio | Specified | 30:1 | Not explicitly stated; implied by DQE |
| Detector resolution | Specified | 2600 x 2560 | Not explicitly stated |
| Detector pixel sampling resolution | Specified | 14 bits | Not explicitly stated |
| Detector pixel pitch | Specified | 100 µm | Not explicitly stated |
| Detector MTF | Specified | 56% | Not explicitly stated |
| Detector DQE | Superior to effective DQE of predicate | 59% | 65% (typical image intensifier DQE), but effective DQE of the complete predicate device is 51% |
| Safety, EMC, performance | Compliance with relevant IEC standards and CFR regulations | Complies with the IEC 60601-1 series, 60601-1-3, 60601-2-28, 60601-2-43, 60601-2-54, and 60601-1-2; meets EPRC standards (21 CFR 1020.30, 1020.31, 1020.32); followed FDA guidance for SSXI devices, software, and cybersecurity | Complies with similar standards (implied by K140041 substantial equivalence) |
| Physical dimensions (SID, rotation, travel) | Similar to predicate | SID: 1000 mm; panning: ±12.5°; orbital: 155°; vertical travel: 500 mm; horizontal travel: 200 mm | SID: 1000 mm; panning: ±12.5°; orbital: 135°; vertical travel: 500 mm; horizontal travel: 200 mm |
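As a sanity check on the detector specifications above: a sampled flat-panel detector cannot resolve beyond its Nyquist limit of one line pair per two pixels, so a 100 µm pixel pitch bounds resolution at 5 lp/mm, consistent with the listed 4.6 lp/mm central resolution. A minimal sketch of that arithmetic (the function name is ours, not from the submission):

```python
def nyquist_limit_lp_per_mm(pixel_pitch_um):
    """Nyquist-limited spatial resolution of a sampled detector:
    one line pair spans two pixels, so f_N = 1 / (2 * pitch)."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# OSCAR 15 detector: 100 µm pitch -> 5.0 lp/mm theoretical upper bound.
print(nyquist_limit_lp_per_mm(100))  # 5.0
```

Real detectors land somewhat below this bound because the MTF rolls off before the Nyquist frequency, which is why the reported 4.6 lp/mm is plausible.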

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    This document describes a medical imaging device (C-arm fluoroscopy system), not an AI/ML algorithm. Therefore, there is no "test set" in the context of an AI/ML model. The evaluation is based on engineering tests, compliance with standards, and comparison of specifications with a predicate device. "Bench and clinical evaluation" is mentioned, suggesting some testing with human interaction, but no details on sample size or data provenance for such evaluations are provided.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable, as this is not an AI/ML device requiring expert-labeled ground truth for model validation.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable, as this is not an AI/ML device.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not applicable. This device is a fluoroscopy system, which provides images directly to the physician. It does not include an AI assistance component whose effectiveness would be measured in an MRMC study.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

    Not applicable, as this is not an AI/ML device with an "algorithm only" performance to evaluate. The device itself is the standalone product.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable, as the evaluation is based on technical specifications, safety standards compliance, and image quality metrics (DQE, resolution, etc.), not the diagnostic accuracy of an AI/ML model against a clinical ground truth.

    8. The sample size for the training set

    Not applicable, as this is not an AI/ML device.

    9. How the ground truth for the training set was established

    Not applicable, as this is not an AI/ML device.
