
510(k) Data Aggregation

    K Number: K241249
    Date Cleared: 2024-09-12 (132 days)
    Regulation Number: 892.1750
    Applicant (Manufacturer): Palodex Group Oy

    Intended Use

    ORTHOPANTOMOGRAPH™ OP 3D LX is an X-ray device that is intended to be used for imaging of adult and pediatric patients. The device can be configured to take panoramic, cephalometric or 3D images of the cranio-maxillofacial complex, including the ear, nose and throat (ENT) and airway regions, and the cervical spine. The device can be configured to take carpus images.

    The device is operated and used by qualified healthcare professionals.

    Device Description

    ORTHOPANTOMOGRAPH™ OP 3D LX is a digital panoramic, cephalometric and cone beam computed tomography (CBCT) x-ray device. OP 3D LX is used for imaging of the craniomaxillofacial complex and neck areas including the ear, nose and throat (ENT) for diagnostic and treatment planning purposes.

    AI/ML Overview

    This FDA 510(k) summary for the Orthopantomograph™ OP 3D LX (K241249) does not contain a typical acceptance criteria table with reported device performance for an AI/ML powered device, nor does it detail a study proving device performance against such criteria. This submission is for an X-ray imaging system, not an AI/ML algorithm.

    The core of this submission is to demonstrate substantial equivalence to a previously cleared predicate device (K230505), not to prove specific diagnostic performance against defined criteria in the way an AI algorithm would.

    Therefore, the following information, based on the provided document, will focus on what is available, explaining why certain requested details for AI/ML validation are not present in this type of submission.


    1. Table of Acceptance Criteria and Reported Device Performance

    As this is a filing for an X-ray imaging system and not an AI/ML diagnostic algorithm, there isn't a table of clinical performance acceptance criteria with reported metrics like sensitivity, specificity, AUC, etc. Instead, performance is demonstrated through adherence to international and FDA-recognized consensus standards for medical electrical equipment and X-ray systems.

    The device's performance is intrinsically linked to its ability to capture high-quality images and operate safely and effectively, as verified through compliance with these standards.

    2. Sample Size Used for the Test Set and Data Provenance

    Not applicable. This submission doesn't describe a test set of medical images used to evaluate an AI/ML algorithm's performance. The "testing" involved non-clinical bench testing to ensure the device meets safety and performance standards for an X-ray system.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    Not applicable. As there's no AI/ML specific diagnostic performance study, there's no ground truth establishment by experts in the context of image interpretation.

    4. Adjudication Method for the Test Set

    Not applicable.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. This type of study is typically performed to evaluate the impact of an AI assistance tool on human reader performance, which is not relevant to an X-ray imaging hardware submission.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Not applicable. This device is an X-ray imaging system, not a standalone algorithm.

    7. The Type of Ground Truth Used

    Not applicable. Compliance is demonstrated through adherence to engineering and safety standards, rather than diagnostic ground truth.

    8. The Sample Size for the Training Set

    Not applicable. This submission concerns an X-ray machine, not an AI/ML algorithm that requires a training set.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable.


    Summary of Device Acceptance and Performance from the Document:

    The acceptance of the Orthopantomograph™ OP 3D LX device is based on its substantial equivalence to a legally marketed predicate device (K230505) and its conformity to a comprehensive list of international and FDA-recognized consensus standards.

    The document states:

    • "Non-Clinical performance bench testing to international standards (and FDA recognized consensus standards) for Computed tomography x-ray system has been conducted to determine conformance..."
    • The subject device shares "the same architectural components and utilizes the same X-ray generation as the Predicate Device."
    • "Both devices utilize cone beam x-ray technology to acquire volumetric data."
    • "Both the Subject and Predicate Devices have the same image modes, X-ray source, focal spot, image detector scintillator, 2D image performance (DQE / MTF 70 kV RAQS), 3D image technique, 3D field of view, 3D total viewing angle, voxel size, reconstruction software, 3D effective exposure time, Ceph exposure time, patient position, system footprint, 3D image programs, and material of patient-contacting components."
    • The primary difference noted is a minor one regarding the Automatic Exposure Control (Automatic Dose Control) functionality, which is identical to a secondary predicate device.

    Key conclusions from the document regarding acceptance:

    • "Clinical data is not needed to characterize performance and establish substantial equivalence."
    • "The non-clinical test data characterize all performance aspects of the device based on well-established scientific and engineering principles."
    • "Based on a comparison of intended use, indications, material composition, technological characteristics, principle of operation, features and performance data, the Orthopantomograph™ OP 3D LX is deemed to be substantially equivalent to the predicate device."

    Therefore, the study proving the device meets "acceptance criteria" here is the non-clinical bench testing and engineering analysis demonstrating compliance with relevant standards and substantial equivalence to a predicate device, rather than a clinical performance study with defined diagnostic endpoints.


    K Number: K230505
    Date Cleared: 2023-06-09 (105 days)
    Regulation Number: 892.1750
    Applicant (Manufacturer): Palodex Group OY

    Intended Use

    ORTHOPANTOMOGRAPH™ OP 3D LX is an X-ray device that is intended to be used for imaging of adult and pediatric patients. The device can be configured to take panoramic, cephalometric or 3D images of the cranio-maxillofacial complex, including the ear, nose and throat (ENT) and airway regions, and the cervical spine. The device can be configured to take carpus images.

    The device is operated and used by qualified healthcare professionals.

    Device Description

    ORTHOPANTOMOGRAPH™ OP 3D LX manufactured by PaloDEx Group Oy is a panoramic, cephalometric and cone beam CT (CBCT) x-ray device for 2D and 3D imaging of patient's cranio-maxillofacial complex. It can be used to capture 2D and 3D x-ray images from pediatric and adult patients. The Primary Predicate Device ORTHOPANTOMOGRAPH™ OP 3D has already been cleared for panoramic and cone beam CT usage (K180947).

    The ORTHOPANTOMOGRAPH™ OP 3D LX is an X-ray device that can be configured to take panoramic, cephalometric or 3D images of craniomaxillofacial complex, and neck areas including the ear, nose and throat (ENT). The device can be configured to take carpus images. The device is operated and used by qualified healthcare professionals. The ORTHOPANTOMOGRAPH™ OP 3D LX is intended to be used for imaging of adult and pediatric patients.

    The ORTHOPANTOMOGRAPH™ OP 3D LX is part of a digital dental workflow, providing image data for diagnosis and treatment planning by healthcare professionals. X-ray images reveal the targeted craniofacial anatomy and the condition and position of anatomical structures inside the FOV, such as teeth, mandibular joints, and the oral and nasal cavities. This helps dentists prepare for various dental procedures such as implant placement, orthodontics and dental prosthetics, and supports possible early diagnosis, enabling earlier and less invasive treatment.

    AI/ML Overview

    The provided text describes the regulatory clearance for the Orthopantomograph™ OP 3D LX. However, it does not contain specific acceptance criteria, reported device performance metrics against those criteria, or details of a study that proves the device meets such criteria.

    The document primarily focuses on establishing substantial equivalence to predicate devices (ORTHOPANTOMOGRAPH™ OP 3D (K180947) and Orthopantomograph OP300 (K163423)) for regulatory purposes. It lists various non-clinical bench testing standards related to safety, electromagnetic compatibility, radiation protection, software, usability, biocompatibility, and imaging performance, but does not provide the results of these tests or acceptance thresholds.

    The "Clinical Performance Data" section merely states: "A clinical image review was performed in support of establishing substantial equivalence." It does not elaborate on the methodology, results, or specific criteria met.

    Therefore, I cannot provide the requested information from the given input. The requested details regarding acceptance criteria, reported performance, sample sizes, number of experts, adjudication methods, MRMC studies, standalone performance, ground truth types, and training set details are not present in the provided text.


    K Number: K180947
    Date Cleared: 2018-06-07 (57 days)
    Regulation Number: 892.1750
    Applicant (Manufacturer): PaloDEx Group Oy

    Intended Use

    ORTHOPANTOMOGRAPH™ OP 3D is an x-ray device that is intended to be used for imaging of adult and pediatric patients. The device can be configured to take panoramic, cephalometric, or 3D images of the cranio-maxillofacial complex for use in diagnostic support. The device can also be configured to take carpus images.

    ORTHOPANTOMOGRAPH™ OP 3D must only be used and operated by dentists and other qualified professionals.

    Device Description

    ORTHOPANTOMOGRAPH™ OP 3D x-ray unit is a software controlled diagnostic dental X-ray equipment for producing panoramic, cephalometric and 3D images of the cranio-dentomaxillofacial complex of patient head. The ORTHOPANTOMOGRAPH™ OP 3D has panoramic programs (adult, pediatric, TMJ, bitewing and partial), cephalometric programs (lateral and PA, also including carpus imaging) and 3D (CBCT) programs with different 3D Field of View configurations. The components of the device include column, carriage, rotating unit (containing tube head, sensors and collimator), upper shelf, patient head support, ceph attachment (containing ceph tube head and patient support) and driver software including image reconstruction. Workstation software for viewing the images is not included in this submission.

    The proposed device utilizes cone beam X-ray technology, in which conical X-ray beams rotate around the patient's head and are incident upon the receptor, generating sufficiently contrasted images. Image quality depends on the level and amount of X-ray energy delivered to the tissue. When interpreted by a trained physician, these images provide useful diagnostic information.

    AI/ML Overview

    The provided text is a 510(k) summary for the ORTHOPANTOMOGRAPH™ OP 3D, a medical device. It focuses on demonstrating substantial equivalence to a predicate device (ORTHOPANTOMOGRAPH™ OP300) rather than presenting a study proving a device meets specific acceptance criteria based on its performance against a defined ground truth.

    Therefore, the document does not contain the specific information required to complete the detailed table and answer most of your questions regarding acceptance criteria and performance studies where an AI or algorithm's diagnostic performance is being evaluated. The submission is for an imaging device, and the "performance data" mentioned refers to engineering and quality assurance testing (electrical, mechanical, safety, EMC, biocompatibility, software level of concern), and subjective review of bench test images, not a clinical diagnostic performance study of the images themselves.

    Here's what can be extracted and what cannot:

    What can be extracted:

    • 1. A table of acceptance criteria and the reported device performance: This document does not provide a table of performance-based acceptance criteria for diagnostic accuracy (e.g., sensitivity, specificity, AUC) or human reader improvement with AI. Instead, it compares technical characteristics to a predicate device. The "performance" mentioned refers to engineering and quality assurance tests.
    • Conclusion as to Substantial Equivalence: The document states: "The proposed device does not introduce a fundamentally new scientific technology and the nonclinical tests demonstrate that the device is substantially equivalent with regards to safety and effectiveness. All internal verification and validation has been completed successfully." This is the general "acceptance" for the 510(k) submission: proving substantial equivalence, not meeting specific quantitative performance metrics for diagnostic accuracy.

    What cannot be extracted (as per the document):

    • 2. Sample sizes used for the test set and the data provenance: Not applicable in the context of diagnostic performance. The document mentions "bench test images acquired using ORTHOPANTOMOGRAPH OP™ 3D." No sample size or provenance for this set is given, and it's for subjective quality review, not a formal test set for diagnostic accuracy.
    • 3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: The document states "bench test images acquired using ORTHOPANTOMOGRAPH OP™ 3D was reviewed by qualified clinicians to be of acceptable quality for the proposed Intended Use." It does not specify the number or qualifications of these clinicians, and this is for subjective image quality, not establishing diagnostic ground truth.
    • 4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
    • 5. If a multi reader multi case (MRMC) comparative effectiveness study was done: "No clinical images were utilized for this submission. Clinical data was not needed to support substantial equivalence." This means no MRMC study or any clinical performance study was conducted or submitted.
    • 6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable, as this is an imaging device, not an AI/algorithm for diagnostic interpretation.
    • 7. The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable. "Bench test images" were subjectively reviewed for "acceptable quality."
    • 8. The sample size for the training set: Not applicable, as this is an imaging device, not an AI/algorithm that requires a training set.
    • 9. How the ground truth for the training set was established: Not applicable.

    Summary based on the provided text:

    This 510(k) pertains to a Computed Tomography X-ray system (ORTHOPANTOMOGRAPH™ OP 3D). The acceptance criteria for this type of submission are primarily demonstrating substantial equivalence to a predicate device (ORTHOPANTOMOGRAPH™ OP300) in terms of intended use, technological characteristics, and safety/effectiveness through non-clinical (engineering, design verification, and validation) testing. It explicitly states that no clinical performance data or images were utilized for this submission and "Clinical data was not needed to support substantial equivalence." Therefore, the document does not describe a study proving the device meets acceptance criteria related to diagnostic performance, AI accuracy, or human reader improvement with AI assistance.

    The closest thing to "acceptance criteria" and "performance" described are:

    Acceptance Criteria Category (implied by the 510(k)) and Reported Device Performance (as stated):

    • Safety & Electrical Standards: Electrical, mechanical, and safety testing (IEC 60601-1, -1-6, -2-63, -2-28, -1-3) and EMC testing (IEC 60601-1-2:2014) were performed by a third-party test house. Result: the proposed ORTHOPANTOMOGRAPH™ OP 3D passed all tests, and internal design verification and validation were completed successfully.
    • Biocompatibility: A biocompatibility evaluation was conducted on patient-contacting accessory parts and their materials. Result: found to be in conformance with ISO 10993-1.
    • Software Quality: The software was categorized as a Moderate Level of Concern and documented according to the Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices.
    • Image Quality (Subjective Review): Bench test images acquired using the ORTHOPANTOMOGRAPH™ OP 3D were reviewed by qualified clinicians. Result: found to be of acceptable quality for the proposed Intended Use.
    • Substantial Equivalence: No significant differences between the proposed and predicate device; minor differences were shown not to affect substantial equivalence; the proposed device does not introduce fundamentally new scientific technology; nonclinical tests demonstrate substantial equivalence with regard to safety and effectiveness. Result: all internal verification and validation was completed successfully, leading to a finding of substantial equivalence to the ORTHOPANTOMOGRAPH™ OP300 (K133544).

    The document does not relate to an AI device or a comparative effectiveness study involving human readers and AI assistance. It describes the regulatory submission for an X-ray imaging system.


    K Number: K173646
    Device Name: FS Ergo
    Date Cleared: 2017-12-19 (22 days)
    Regulation Number: 872.1800
    Applicant (Manufacturer): PaloDEx Group Oy

    Intended Use

    The FS Ergo sensor is a digital sensor which is indicated for acquiring dental intra-oral radiography images. The FS Ergo sensor shall be operated by healthcare professionals, who are educated and competent to perform the acquisition of dental intraoral radiographs.

    Device Description

    The FS Ergo is a USB-driven sensor designed to capture intraoral digital dental X-ray images. FS Ergo can be used with any X-ray unit intended for dental intraoral purposes. When connected to a computer with an appropriate image-capturing software application, images are automatically acquired when the FS Ergo sensor receives a perceptible X-ray dose. The FS Ergo sensor is designed with advanced ergonomic principles. It has a bendable sensor housing together with other physical features supporting advanced ergonomics: a small-diameter flexible cable, a 32-degree cable exit, and sensor dimensions that fit all needs (size 1.5). Comfort is achieved with large corner chamfers, rounded edges, and a thicker sensor structure. The basic system of the FS Ergo consists of the sensor unit, sensor holders, and hygienic covers (which are medical devices separately cleared by the FDA). A connection to a computer is required, but the computer is provided by the user, i.e. the computer is not part of the sensor system itself.

    AI/ML Overview

    The provided text is a 510(k) summary for the FS Ergo dental intraoral sensor. It describes the device, its indications for use, and how it demonstrates substantial equivalence to predicate devices. However, it does not contain detailed information about specific acceptance criteria for algorithm performance, sample sizes for test sets (beyond general bench testing), or the methodologies typically associated with assessing the performance of AI/ML-driven medical devices (like MRMC studies, ground truth establishment methods for large datasets, or detailed expert adjudication processes).

    The document is for an X-ray sensor device (hardware and associated image acquisition), not an AI/ML algorithm that interprets images or provides diagnostic assistance. Therefore, many of the questions asked in the prompt (related to AI acceptance criteria, training sets, test sets, ground truth, expert review for AI, etc.) are not directly applicable to this specific device submission as described.

    The device's performance is primarily assessed through bench testing for image quality and other physical/electrical characteristics, and compliance with relevant safety and performance standards. The "study" proving the device meets criteria is primarily these non-clinical performance data and bench tests.

    Here's an attempt to answer the questions based solely on the provided text, highlighting where the information is not present or not applicable:


    Acceptance Criteria and Device Performance for FS Ergo (based on K173646)

    This submission focuses on the substantial equivalence of a digital intraoral X-ray sensor (hardware) and its ability to acquire images, rather than the performance of an AI model analyzing these images. Therefore, the "acceptance criteria" discussed are largely related to image quality, electrical safety, mechanical integrity, and biocompatibility to demonstrate equivalence to a predicate device.

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a specific table of quantitative acceptance criteria for image quality or a direct "reported performance" against those criteria in a tabular format. Instead, it states:

    • Bench Test Images: "Bench test images of skull phantoms were acquired using FS Ergo sensor, the images were reviewed by qualified external clinician to be of acceptable quality for the proposed intended use."
    • Bench Testing Results: "Bench testing results indicate substantial equivalence to the predicate device Snapshot."
    • Standards Compliance: The device successfully passed testing according to various IEC standards (electrical, mechanical, safety, usability, EMC). Biocompatibility was also confirmed.

    Inferred "Acceptance" for Image Quality (Qualitative based on text):

    Acceptance Criteria Category and Reported Device Performance (as stated in the document):

    • Image Quality: "acceptable quality for the proposed intended use" (based on skull phantom images reviewed by a qualified external clinician).
    • Equivalence to Predicate: "substantial equivalence to the predicate device Snapshot" (based on bench testing results).
    • Safety & Performance: Passed all tests according to IEC 60601-1:2005, IEC 60601-1-6:2013, IEC 62366:2014, IEC 62304:2006, and IEC 60601-1-2:2014. Biocompatibility conforms to ISO 10993-1:2009.

    2. Sample sizes used for the test set and the data provenance

    • Sample Size: Not explicitly stated for specific image quality tests. The mention of "skull phantoms" implies a limited, controlled setup rather than a large clinical test set of patient images.
    • Data Provenance: Not specified. "Bench test images of skull phantoms" suggests a laboratory controlled environment, not clinical patient data from a specific country.
    • Retrospective/Prospective: Not applicable in the context of bench testing phantoms.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: The text says "qualified external clinician" (singular); it does not state whether more than one expert was involved.
    • Qualifications of Experts: "qualified external clinician." No specific details on years of experience, subspecialty, or board certification are provided beyond "qualified."
    • Ground Truth Establishment: For this type of device (an image acquisition sensor), the "ground truth" for image quality is typically visual assessment of clarity, contrast, resolution (e.g., ability to discern features), and absence of artifacts by a qualified radiologist or dental professional, often compared to images from established devices. The document implies a qualitative assessment rather than a quantitative ground truth for diagnostic accuracy, as this is a device for image acquisition, not diagnosis.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Adjudication Method: Not specified. Given the qualitative statement about "acceptable quality" from a "qualified external clinician," a formal adjudication process for discordant reads (like 2+1 or 3+1) is not described or implied for this type of image quality assessment. It's likely a direct expert review for the purpose of validating image quality from the phantom.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: No, an MRMC comparative effectiveness study was not done. This type of study (often used for AI-assisted diagnostic tools) is not relevant for an X-ray sensor device whose primary function is image acquisition.
    • Effect Size: Not applicable.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Not applicable. This device is an X-ray sensor and does not perform image interpretation or diagnosis independently; it generates images for human review. Therefore, there is no "algorithm only" performance to evaluate in the context of an AI.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Type of Ground Truth: For the "bench test images of skull phantoms," the ground truth appears to be expert qualitative assessment of image quality (e.g., visual clarity, diagnostic utility for dental structures) compared to known characteristics of the phantom and reference images from predicate devices. It is not pathological, clinical outcome, or a broad expert consensus on patient cases.

    8. The sample size for the training set

    • Training Set Sample Size: Not applicable. This is a hardware device (X-ray sensor), not a machine learning algorithm that requires a training set in the conventional sense.

    9. How the ground truth for the training set was established

    • Ground Truth for Training Set Establishment: Not applicable, as there is no training set for an AI/ML algorithm.

    K Number: K163423
    Date Cleared: 2017-08-31 (268 days)
    Regulation Number: 892.1750
    Applicant (Manufacturer): PaloDEx Group Oy

    Intended Use

    The Orthopantomograph OP300 panoramic, cephalometric and cone beam computed tomography X-ray device is intended to image the head and neck areas for diagnostic support. This includes the temporomandibular joints (TMJs) and dentomaxillofacial areas, and with the 13x15 cm field of view (FOV) this additionally includes the ear, nose and throat (ENT) regions. The X-ray device produces conventional 2D X-ray images and X-ray projection images for the reconstruction of a 3D view. The device is operated and used by qualified healthcare professionals.

    Device Description

    Orthopantomograph OP300 x-ray unit is a software controlled diagnostic dental X-ray equipment for producing panoramic, cephalometric and 3D images of the dentomaxillofacial complex, ENT and airway regions of the head and neck. The Orthopantomograph OP300 is available in different configurations (panoramic, cephalometric, 3D) with different 3D Field of View configurations (6X4 cm, 6X8 cm, 5X5 cm, 8X8 cm, 8X15 cm, and 13X15 cm). The components of the device include column, carriage, rotating unit (containing tube head, sensors and collimator), top-level hanger, patient head support, software for image reconstruction, and an optional software package for image reviewing. The proprietary name of the optional image reviewing software is CLINIVIEW.

    Cone Beam Volumetric Tomography is a medical imaging technique that uses X-rays to obtain cross-sectional images of the head or neck. The proposed device utilizes cone beam X-ray technology, in which conical X-ray beams rotate around the patient's head and are incident upon the receptor, generating sufficiently contrasted images. Image quality depends on the level and amount of X-ray energy delivered to the tissue. When interpreted by a trained physician, these images provide useful diagnostic information.

    AI/ML Overview

    The provided text describes the Orthopantomograph OP300, a dental X-ray device. The acceptance criteria and the study proving the device meets these criteria are mainly focused on demonstrating substantial equivalence to predicate devices, particularly the Scanora 3D (K110839), for expanded indications including ENT imaging.

    Here's a breakdown of the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state quantitative acceptance criteria in a table format with corresponding reported performance values for diagnostic accuracy or specific imaging quality metrics. Instead, it focuses on demonstrating substantial equivalence in diagnostic quality.

    Acceptance Criteria (implied) and Reported Device Performance:

    • Criterion: Image quality for the proposed indications (head, neck, ENT, maxillofacial) is diagnostically acceptable. Performance: Clinical images acquired with the Orthopantomograph OP300 were reviewed by qualified clinicians and found to be of acceptable quality for the proposed indications for use.
    • Criterion: Diagnostic quality of images is equivalent to the predicate device Scanora 3D (K110839). Performance: An image comparison study demonstrated that Orthopantomograph OP300 images of the ENT, airway, and maxillofacial areas are of equivalent diagnostic quality to images from the predicate device Scanora 3D.
    • Criterion: Adherence to relevant safety and performance standards (biocompatibility, EMC, electrical safety). Performance: Passed biocompatibility evaluation (ISO 10993-5, ISO 10993-10). Met the requirements in IEC 60601-1, IEC 60601-1-6, IEC 62366, IEC 60601-2-63, IEC 60601-2-28, IEC 60601-1-2, and IEC 60601-1-3.
    • Criterion: Minor differences in technical characteristics do not negatively affect device performance or raise new safety/effectiveness concerns. Performance: Differences in technical characteristics are "so small that they do not have any effect on the device performance in practice," and the slightly bigger field of view "does not negatively affect imaging of the intended anatomical structures and does not affect substantial equivalence of the device."

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The text mentions an "image comparison study" and "clinical images acquired using Orthopantomograph OP300." However, it does not specify the sample size for the test set or the provenance (country of origin, retrospective/prospective) of the data used in these studies.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The text states that "clinical images acquired using Orthopantomograph OP300 were reviewed by qualified clinicians" and that the image comparison study involved "images from ENT, airway and maxillofacial areas." However, it does not specify the number of experts used or their specific qualifications (e.g., number of years of experience, specific board certifications).

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The text does not provide any information about the adjudication method used for establishing ground truth or evaluating images in the mentioned studies. It simply states that images were "reviewed by qualified clinicians."

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    The document describes an "image comparison study of predicate device Scanora 3D and proposed device Orthopantomograph OP300 images." This suggests a comparative study between devices rather than a human-in-the-loop study with AI assistance. The device is an X-ray imaging system, not an AI-powered diagnostic tool. Therefore, an MRMC comparative effectiveness study involving AI assistance for human readers is not applicable and was not reported.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This point is not applicable as the Orthopantomograph OP300 is an imaging device, not an AI algorithm. Its performance is evaluated based on the quality of the images it produces for human interpretation, not as a standalone diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth for the "acceptable quality" and "equivalent diagnostic quality" seems to be established through expert clinical review and comparison. There is no mention of pathology or outcomes data being used as ground truth for image quality assessments in the provided text.

    8. The sample size for the training set

    The document makes no mention of a "training set" as it describes an X-ray imaging device, not an AI model that requires training. The studies mentioned are focused on design verification, validation, and clinical image quality assessment.

    9. How the ground truth for the training set was established

    As there is no mention of a training set, the establishment of ground truth for a training set is not applicable.

    K Number
    K162799
    Device Name
    Cliniview
    Manufacturer
    Date Cleared
    2017-04-25

    (202 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Palodex Group Oy

    Intended Use

    Cliniview software program is indicated for general dental and maxillofacial diagnostic imaging. It controls capture, display, enhancement, and saving of digital images from various digital imaging systems. It stores and communicates these images within the system or across computer systems at distributed locations.

    Device Description

    Cliniview software has the functionality which compares with the functionality provided by VixWin Platinum predicate cleared under K141451, the DEXIS Software predicate cleared under K140445 and the Romexis predicate cleared under K140713.

    Scanora software is identical to the Cliniview software apart from branding and labeling differences. Scanora is the product name for Soredex-brand imaging products, whereas Cliniview is used for Instrumentarium Dental imaging products. There is no difference in design: Cliniview 11 equals Scanora 6. This submission discusses the Cliniview software and covers the Scanora software at the same time.

    The types of images handled by Cliniview include, for example, panoramic, cephalometric, CBCT, intra-oral, and color photographs. Images can be viewed from a workstation, a mobile application, or using a web browser. The Mobile Application is not intended for diagnostic use.

    Cliniview image acquisition supports imaging plates, intraoral sensors and digital X-ray imaging devices. Images can also be imported from other digital sources. Cliniview stores images and patient information in the SQL database and provides tools for image archiving. Cliniview has interfaces to 3rd party systems through the proprietary dental practice management system interface and DICOM standard interface.

    The main features of the Cliniview software include patient management, image acquisition, patient and image data storage, image viewing of 2D images and processing and enhancement of images.

    Cliniview software can be utilized either locally or over a networked environment. If Cliniview is installed on several computers, the patient and image database can be shared among them and used from different workstations.

    AI/ML Overview

    The provided text is a 510(k) summary for the Cliniview software, which is a Picture Archiving and Communications System (PACS) for dental and maxillofacial diagnostic imaging. It describes the device's intended use, technological characteristics, and its comparison to predicate devices, but it does not contain any information about acceptance criteria, specific performance data (e.g., accuracy, sensitivity, specificity), sample sizes for test or training sets, expert qualifications, or adjudication methods.

    Therefore, I cannot extract the requested information as it is not present in the provided document.

    To fulfill your request, the document would need to include details of a clinical study or performance evaluation with quantifiable metrics for the Cliniview software's performance, alongside predefined acceptance criteria.

    K Number
    K170813
    Manufacturer
    Date Cleared
    2017-04-07

    (21 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    PaloDEx Group Oy

    Intended Use

    ORTHOPANTOMOGRAPH™ OP 3D is an x-ray device to take panoramic and 3D images of the cranio-maxillofacial complex for use in diagnostic support.

    ORTHOPANTOMOGRAPH™ OP 3D must only be used and operated by dentist and other qualified professionals.

    Device Description

    ORTHOPANTOMOGRAPH™ OP 3D x-ray unit is a software controlled diagnostic dental X-ray equipment for producing panoramic and 3D images of the cranio-dentomaxillofacial complex of patient head. The ORTHOPANTOMOGRAPH™ OP 3D has panoramic programs (adult, child, TMJ, BW and partial) and 3D (CBCT) programs with different 3D Field of View configurations. The components of the device include column, carriage, rotating unit (containing tube head, sensors and collimator), upper shelf, patient head support and driver software including image reconstruction. Workstation software for viewing the images is not included in this submission.

    The proposed device utilizes cone beam X-ray technology: a conical x-ray beam rotates around the patient's head and is incident upon a receptor, generating sufficiently contrasted images. The quality of the images depends on the level and amount of X-ray energy delivered to the tissue. When interpreted by a trained physician, these images provide useful diagnostic information.

    AI/ML Overview

    The provided text is related to a 510(k) premarket notification for a dental x-ray device (ORTHOPANTOMOGRAPH™ OP 3D). It primarily focuses on demonstrating substantial equivalence to a predicate device rather than presenting a study with specific acceptance criteria and detailed performance data often seen in AI/CAD device submissions.

    Based on the provided information, I can answer some of your questions, but many of the requested details are not available or applicable to this type of submission.

    Here's an analysis based on the document:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly present a table of acceptance criteria for diagnostic performance or clinical effectiveness of the device in the way you might expect for an AI/CAD system. Instead, the "acceptance criteria" are implied by the conformity to various standards and the demonstration of "equivalent diagnostic quality" to a predicate device through non-clinical testing and clinician review of bench test images.

    Acceptance Criteria (Implied) | Reported Device Performance
    Conformity to electrical, mechanical, safety, and performance standards (IEC 60601-1, IEC 60601-1-6, IEC 62366, IEC 60601-2-63, IEC 60601-2-28, IEC 60601-1-3) | The proposed ORTHOPANTOMOGRAPH™ OP 3D has successfully passed internal design verification and validation, and all tests conducted by a 3rd party test house.
    EMC conformity (IEC 60601-1-2) | EMC testing was conducted in accordance with standard IEC 60601-1-2:2014, and the proposed ORTHOPANTOMOGRAPH™ OP 3D has passed all tests.
    Biocompatibility of patient-contacting parts (ISO 10993-1) | Biocompatibility evaluation was conducted on patient-contacting accessory parts and their materials, which were found to be in conformance with ISO 10993-1.
    Acceptable diagnostic quality of images for the proposed Intended Use (bench test images) | Bench test images acquired using the ORTHOPANTOMOGRAPH™ OP 3D were reviewed by qualified clinicians and judged to be of acceptable quality for the proposed Intended Use.
    Software categorized as Moderate Level of Concern and documented according to Guidance. | Software for ORTHOPANTOMOGRAPH™ OP 3D has been categorized as Moderate Level of Concern and documented according to the Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices.
    Substantial equivalence to predicate device (ORTHOPANTOMOGRAPH™ OP300, K133544) regarding safety and effectiveness. | Minor differences have been shown not to affect the substantial equivalence. Nonclinical tests demonstrate substantial equivalence regarding safety and effectiveness. All internal verification and validation have been completed successfully. The device is substantially equivalent to the predicate device.
    Technical resolution: system MTF @ 10%, FOV 5x5 High Res (3D resolution) | Proposed device: 2.2 lp/mm @ 125 µm voxel; predicate device: 2.25 lp/mm @ 125 µm voxel (comparable, implying the proposed device meets the standard set by the predicate).
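    As a sanity check on the resolution figures above: a 125 µm voxel pitch sets a theoretical Nyquist ceiling of 4 lp/mm, so the reported MTF@10% values sit at roughly half the sampling limit. The short calculation below is purely illustrative; the 510(k) itself reports only the measured values.

    ```python
    def nyquist_lp_per_mm(voxel_mm: float) -> float:
        """Highest representable line-pair frequency for a given voxel pitch."""
        return 1.0 / (2.0 * voxel_mm)

    # Reported system MTF@10% values from the 510(k) comparison (lp/mm @ 125 µm voxel).
    proposed_mtf10 = 2.2
    predicate_mtf10 = 2.25

    limit = nyquist_lp_per_mm(0.125)  # 4.0 lp/mm theoretical ceiling
    print(f"Nyquist limit: {limit:.1f} lp/mm")
    print(f"Proposed device reaches {proposed_mtf10 / limit:.0%} of Nyquist")
    ```

    Real detectors always resolve below Nyquist, so values in this range are plausible rather than surprising; the point of the comparison table is only that proposed and predicate are nearly identical.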

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document primarily discusses "bench test images" and "non-clinical performance data." It explicitly states: "No clinical images were utilized for this submission. Clinical data was not needed to support substantial equivalence." Therefore, no sample size, data provenance, or other details related to a clinical test set are provided. The "test set" for diagnostic quality appears to be composed of "bench test images."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document mentions "qualified clinicians" reviewed the bench test images for acceptable quality. However, it does not specify:

    • The number of clinicians.
    • Their specific qualifications (e.g., years of experience, specialty).
    • How ground truth was established, beyond their review for "acceptable quality."

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. There's no mention of a formal adjudication method for the "qualified clinicians" reviewing bench test images.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No MRMC study was performed. The device is an imaging system, not an AI/CAD system assisting human readers. The submission relies on demonstrating substantial equivalence to a predicate imaging device through technical and performance comparisons.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This is not an AI/CAD algorithm. It is an x-ray imaging device. The performance evaluated is the image quality and physical/electrical safety of the device itself.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "bench test images" that were reviewed, the "ground truth" seems to be an implicit assessment by "qualified clinicians" that the image quality was "acceptable" for diagnostic support. No formal "ground truth" (e.g., pathology, clinical outcomes) for specific diagnoses is mentioned, as this was not a diagnostic accuracy study.

    8. The sample size for the training set

    Not applicable. This is an X-ray imaging device, not an AI/ML algorithm that requires a training set.

    9. How the ground truth for the training set was established

    Not applicable, as there is no training set for an AI/ML algorithm.

    K Number
    K133544
    Device Name
    OP300
    Manufacturer
    Date Cleared
    2014-03-26

    (128 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    PALODEX GROUP OY

    Intended Use

    The OP300 dental panoramic, cephalometric and cone beam computed tomography x-ray device is intended for dental radiographic examination of teeth, jaw and TMJ areas by producing conventional 2D x-ray images as well as x-ray projection images of an examined volume for the reconstruction of a 3D view. The device is operated and used by qualified healthcare professionals.

    Device Description

    The Orthopantomograph OP300 is an extraoral-source dental x-ray device that is software-controlled and produces conventional digital 2D panoramic, cephalometric and TMJ x-ray images as well as digital x-ray projection images taken during cone beam rotations around a patient's head. The projection images are reconstructed to be viewed in 3D by 3D viewing software.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a modified dental X-ray device, the OP300. The submission aims to demonstrate substantial equivalence to a predicate device (also an OP300, K122018) rather than presenting a novel device that requires extensive clinical studies. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are focused on engineering and bench testing, demonstrating that the modifications do not negatively impact safety or effectiveness.

    Here's an analysis based on the provided text, recognizing that this is a 510(k) submission for a modification, not a de novo device requiring broad clinical trials:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by complying with recognized consensus standards and demonstrating equivalent image quality and performance to the predicate device through bench testing.

    Acceptance Criteria Category | Specific Criteria / Standard | Reported Device Performance (Modified OP300)
    Image Quality Equivalence | No significant differences in image quality compared to the predicate OP300 (K122018) | Concluded that there are no significant differences in image quality.
    Sensor Performance Equivalence | Equivalent sensor performance to the predicate OP300 (K122018) | Concluded that there are no significant differences in sensor performance.
    Compliance with Consensus Standards | IEC 60601-1:1988 (Medical electrical equipment - Part 1: General requirements for safety) | Compliant
     | IEC 60601-1-2:2001 (Medical electrical equipment - Part 1-2: General requirements for safety - Collateral standard: Electromagnetic compatibility - Requirements and tests) | Compliant
     | IEC 60601-1-3:1994 (Medical electrical equipment - Part 1-3: General requirements for safety - Collateral standard: General requirements for radiation protection in diagnostic X-ray equipment) | Compliant
     | IEC 60601-1-4:1996 (Medical electrical equipment - Part 1-4: General requirements for safety - Collateral standard: Programmable electrical medical systems) | Compliant
     | IEC 60601-2-7:1998 (Medical electrical equipment - Part 2-7: Particular requirements for the safety of high-voltage generators of diagnostic X-ray generators) | Compliant
     | IEC 60601-2-28:1993 (Medical electrical equipment - Particular requirements for the safety of X-ray source assemblies and X-ray generators for medical diagnosis) | Compliant
     | IEC 60601-2-32:1994 (Medical electrical equipment - Part 2-32: Particular requirements for the safety of associated equipment of X-ray equipment) | Compliant
    Anthropomorphic Phantom Evaluation | Produce images without severe defects in 3D imaging mode. | Demonstrated capability of producing images without severe defects.
    Software Validation | Successful validation of GUI software to incorporate new features (FOVs, low-dose mode). | Successfully verified and validated.
    Safety and Effectiveness | Ensure the overall safety and effectiveness of the device. | Successfully verified and validated.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • For image quality and sensor performance, the testing involved comparing the modified OP300 against the predicate OP300 (K122018). The specific number of images or runs is not explicitly stated, but it was "in-house Performance (bench) testing."
      • For the anthropomorphic phantom evaluation, it involved "images of an anthropomorphic phantom." The number of images is not specified.
      • Clinical images of patients were explicitly not used to support substantial equivalence.
    • Data Provenance: The testing was "in-house" bench testing, conducted by the manufacturer (PaloDEx Group Oy) in Finland. This indicates internal, controlled testing, not necessarily independent third-party validation. The data is retrospective in the sense that it's comparing a new version to an existing (predicate) version's performance characteristics.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    No external experts or clinicians were explicitly stated as establishing ground truth for the bench test set. The evaluation of "image quality" and "sensor performance" likely relied on internal engineering and quality control staff, comparing objective metrics and potentially subjective assessments by qualified personnel. The statement "it was concluded that there is no significant differences in image quality" implies an internal assessment.

    4. Adjudication Method for the Test Set

    No formal adjudication method (like 2+1 or 3+1 by multiple experts) is mentioned, as clinical data was not used. The determination of "no significant differences" in image quality and sensor performance appears to be a conclusion drawn from the in-house bench testing results.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    No MRMC comparative effectiveness study was conducted. The submission explicitly states: "Sample clinical images of patients were not used to support substantial equivalence of the OP300 device." This means there was no human reader component to the "effect size of how much human readers improve with AI vs without AI assistance" as there is no AI assistance feature discussed in the submission, and no human reader study.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    The device is an imaging system, not an algorithm being validated in isolation. The "standalone" performance relates to its ability to produce images compliant with standards and equivalent to the predicate, which was assessed through bench testing. The reconstruction software (FBP or ART) operates in a "standalone" fashion to generate the 3D view from 2D projections, but its performance was evaluated in terms of image quality metrics from the phantom, not through a separate algorithm-only study.
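    For context, the filtered back-projection (FBP) step named above can be illustrated with a generic sketch. This is the textbook parallel-beam FBP algorithm, not the vendor's reconstruction code (which is not described in the submission); the phantom, grid size, and nearest-neighbour interpolation are illustrative choices.

    ```python
    import numpy as np

    def fbp_reconstruct(sinogram, angles_deg):
        """Textbook parallel-beam filtered back-projection (FBP).

        sinogram: (n_angles, n_det) array, one projection per row.
        Returns an (n_det, n_det) reconstruction (approximate normalisation).
        """
        n_angles, n_det = sinogram.shape
        # Step 1: ramp-filter each projection in the frequency domain.
        ramp = np.abs(np.fft.fftfreq(n_det))
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
        # Step 2: back-project (smear) each filtered projection across the grid.
        c = n_det // 2
        y, x = np.mgrid[:n_det, :n_det] - c
        recon = np.zeros((n_det, n_det))
        for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
            # Nearest-neighbour detector coordinate of every pixel at this angle.
            t = np.round(x * np.cos(theta) + y * np.sin(theta)).astype(int) + c
            recon += proj[np.clip(t, 0, n_det - 1)]
        return recon * np.pi / (2 * n_angles)

    # Sanity check: a point source seen from all angles reconstructs to a point.
    sino = np.zeros((180, 64))
    sino[:, 32] = 1.0  # detector centre hit at every view angle
    image = fbp_reconstruct(sino, np.arange(180))
    print(np.unravel_index(np.argmax(image), image.shape))  # (32, 32), the centre
    ```

    ART, the alternative mentioned in the submission, would instead solve the same projection equations iteratively; both take the 2D projections as input and produce the 3D (here 2D, for brevity) volume as output.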

    7. The Type of Ground Truth Used

    The "ground truth" for this 510(k) submission primarily relies on:

    • Engineering benchmarks and specifications: Adherence to the technical parameters and performance characteristics established for the predicate device.
    • Consensus Standards: Compliance with recognized international standards (IEC 60601 series).
    • Anthropomorphic phantom images: The "truth" for these images is the known anatomical/radiological features within the phantom, and the assessment looked for "severe defects" rather than diagnosing a specific condition.

    8. The Sample Size for the Training Set

    This submission is for a device modification (hardware and GUI changes), not a new algorithm that requires a separate training set. The device uses established image reconstruction techniques (FBP, ART) which do not involve deep learning or AI requiring a "training set" in the modern sense. Therefore, there is no mention of a training set sample size.

    9. How the Ground Truth for the Training Set Was Established

    As there is no mention of a training set, the establishment of ground truth for a training set is not applicable to this submission.

    K Number
    K133231
    Device Name
    DIGORA OPTIME
    Date Cleared
    2014-03-20

    (150 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    INSTRUMENTARIUM DENTAL, PALODEX GROUP OY

    Intended Use

    The DIGORA® Optime imaging system is indicated for capturing, digitization and processing of intra oral x-ray images stored in imaging plate recording media.

    SOREDEX® DIGORA® Optime system is intended to be used only by dentist and other qualified dental professionals to process x-ray images exposed to the imaging plates from the intraoral complex of the skull.

    Device Description

    DIGORA® Optime (DXR-60) is a digital radiography system for intraoral imaging plates enclosed in disposable bags. The system may be used with all x-ray equipment designed for intraoral radiography. The image is recorded on a reusable imaging plate which substitutes for conventional x-ray film or a digital sensor. The x-ray energy absorbed in the imaging plate remains stored as a latent image. When the plate is fed into the device, the stored energy is released as an optical emission proportional to the stored energy as the imaging plate is stimulated pixel by pixel by a scanning laser. An optical system collects the emission for a photoelectronic system, which converts the emission to digital electronic signals. These signals are processed in a computer system which formats and stores them.
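    The digitization stage of the readout chain described above (stored energy, laser-stimulated emission, photoelectronic conversion, digital signal) amounts to an analog-to-digital mapping. The sketch below is a generic linear ADC model for illustration only, not Soredex's actual signal chain; real readers also apply gain/offset calibration and often response shaping.

    ```python
    def quantize_emission(emission: float, full_scale: float, bits: int = 16) -> int:
        """Map a photostimulated-emission reading (proportional to the x-ray
        energy stored in the plate) to a digital pixel value.

        Illustrative linear ADC model: clamp to the input range, then scale
        to the full digital range of the given bit depth.
        """
        level = min(max(emission / full_scale, 0.0), 1.0)  # clamp to ADC input range
        return round(level * (2 ** bits - 1))

    print(quantize_emission(0.0, 1.0))  # 0
    print(quantize_emission(1.0, 1.0))  # 65535 (16-bit full scale)
    ```

    The wide linear response of storage-phosphor plates is one reason CR systems tolerate a broader exposure range than film, but the specific bit depth here is an assumption, not a DXR-60 specification.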

    Further image processing, display and archiving are carried out with auxiliary software.

    AI/ML Overview

    Here is an analysis of the provided text regarding the DIGORA® Optime (DXR-60) device, focusing on acceptance criteria and supporting studies:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document describes the DIGORA® Optime (DXR-60) as substantially equivalent to its predicate device, the DXR-50, based on design, composition, and function. The acceptance criteria essentially revolve around demonstrating that the new device is as safe and effective as the predicate, with the modifications having no adverse impact.

    Acceptance Criteria Category | Specific Criteria (Implicit or Explicit in Document) | Reported Device Performance (DXR-60) | Notes
    Material Changes | Equivalence in functionality to the DXR-50 | Replaced machined aluminum with plastic components (chassis, plate carrier, door) providing equivalent functionality. | Verified/validated to have no impact on safety, effectiveness, image quality, or expected lifetime.
    User Interface (UI) | Better visual communication and usability without affecting safety/efficacy | Modified control panel with relocated buttons, lighted numeric LEDs, and symbols for plate positioning, device status, and error codes. | Verified/validated to not affect safety or efficacy.
    Image Processing Feature | Stitching two size 3 plates for an occlusal view without affecting safety/effectiveness | "Comfort Occlusal 4C image processing ability": two size 3 plates can be stitched; plates are processed and supplied separately, then stitched. | Verified/validated to not affect safety or effectiveness, and makes the feature safe.
    Safety and Effectiveness | No adverse impact on safety or efficacy from the modifications | Design verification and validation demonstrated that the modifications did not affect safety and efficacy or raise new safety/efficacy questions. | Comprehensive testing on all applicable requirements per FDA guidance.
    Image Quality | Diagnostic-quality images | Capable of providing diagnostic-quality images via anthropomorphic phantom images. | Considered appropriate because the primary target anatomy does not involve moving or soft tissues of similar contrast.
    Substantial Equivalence | Similar technological/performance characteristics to the predicate | Deemed substantially equivalent to the predicate device (K041050) in clinical performance. | Based on similar characteristics and successful validation.
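    The "Comfort Occlusal" stitching feature is, in generic terms, a registration-and-blend operation on two separately scanned plates. The toy sketch below cross-fades two images over a known overlap; the fixed-overlap assumption and blend scheme are purely illustrative, since the device's actual registration method is not described in the document.

    ```python
    import numpy as np

    def stitch_plates(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
        """Toy horizontal stitch: linearly cross-fade two equally tall images
        over a known overlap width."""
        assert left.shape[0] == right.shape[0], "plates must be equally tall"
        w = np.linspace(0.0, 1.0, overlap)  # 0 -> left only, 1 -> right only
        blend = left[:, -overlap:] * (1 - w) + right[:, :overlap] * w
        return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])

    # Two 4x10 "plates" with different grey levels, stitched over 4 columns.
    a = np.full((4, 10), 1.0)
    b = np.full((4, 10), 3.0)
    panorama = stitch_plates(a, b, overlap=4)
    print(panorama.shape)  # (4, 16)
    ```

    The document's note that the plates are "processed and supplied separately, then stitched" matches this kind of post-hoc composition rather than a combined exposure.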

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • For the non-clinical testing, anthropomorphic phantoms were used as the "test set" to demonstrate image quality and system capability. No specific number of phantoms or images is provided, but it states that "anthropomorphic phantom images of periapical and bitewing views were provided."
      • No human patient images were used for clinical testing.
    • Data Provenance: The testing was conducted by PaloDEx Group Oy (Manufacturer) in Tuusula, Finland. The data is non-clinical and derived from phantom images rather than retrospective or prospective human clinical data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not explicitly state the number of experts or their specific qualifications (e.g., radiologist with 10 years of experience) used to establish "ground truth" for the phantom images. It implies that the manufacturer's internal team determined the images were "diagnostic-quality." Given the non-clinical nature and the product (an imaging plate reader), the "ground truth" for image quality would likely be assessed against established imaging standards and anatomical correctness within the phantom.

    4. Adjudication Method for the Test Set

    No multi-expert adjudication method (e.g., 2+1, 3+1) is mentioned or implied, as the primary test involved non-clinical phantom images. The assessment of "diagnostic-quality" appears to be an internal verification by the manufacturer.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not deemed necessary on DIGORA® Optime (DXR-60) device." Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported, as this study type was not performed and the device is an imaging plate reader, not an AI-assisted diagnostic tool.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    The DIGORA® Optime (DXR-60) is an imaging plate reader, not an algorithm in the sense of a standalone AI diagnostic tool. Its performance is inherent in its ability to capture, digitize, and process x-ray images from imaging plates. The non-clinical tests on phantoms demonstrate this standalone functional performance (i.e., algorithm only in the sense of image processing capability, without human interpretation for diagnostic purposes as the primary outcome). The "algorithm" here refers to the system's image processing capabilities to produce an image, not an interpretative AI.

    7. The Type of Ground Truth Used

    The ground truth for evaluating the device's performance was:

    • Engineering/Technical Specification Comparisons: Comparing the DXR-60's specifications (e.g., theoretical resolution, pixel size, bit depth) against the predicate DXR-50 and demonstrating either equivalence or improvement where relevant.
    • Physical Phantom Images: For image quality assessment, anthropomorphic phantom images were used, with the "ground truth" being the expected anatomical structures and image characteristics represented by the phantom for "diagnostic quality."
    • Functional Verification/Validation: For material changes and UI modifications, the ground truth was that the changes had "no impact on safety, effectiveness, and overall performance."

    8. The Sample Size for the Training Set

    The concept of a "training set" is not applicable here in the context of machine learning. The DIGORA® Optime is a physical medical device (an imaging plate reader) and not an AI/ML algorithm that requires a training set of data. Its development involves traditional engineering design, testing, and validation processes, not machine learning model training.

    9. How the Ground Truth for the Training Set Was Established

    As explained above, there is no "training set" in the machine learning sense for this device. Therefore, no ground truth was established for a training set. The device's "training" refers to its design and manufacturing processes ensuring it meets its intended specifications and performance.

    K Number
    K130297
    Device Name
    SCANORA 3DX
    Manufacturer
    Date Cleared
    2013-05-29

    (112 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    PALODEX GROUP OY

    Intended Use

    Scanora 3Dx is a Cone Beam 3D x-ray system for imaging the head and neck areas, including the ENT and dentomaxillofacial areas, for use in diagnostic support. Dedicated panoramic imaging is an option. A flat panel detector is used to acquire 3D images and an optional CCD sensor to acquire panoramic images. The device is operated and used by qualified healthcare professionals.

    Device Description

    Scanora 3Dx is a Cone Beam Computerized Tomography x-ray system for Dentomaxillofacial and Head & Neck (ENT) imaging. Dedicated panoramic imaging is an option. In CT mode it generates a conical x-ray beam during rotation around a patient's head and produces two dimensional projection images on a flat panel detector. Three dimensional images are then reconstructed and viewed with 3rd party software. In panoramic mode panoramic and TMJ images can be taken in the classical way on a separate CCD detector.

    AI/ML Overview

    The provided text describes a 510(k) submission for the SCANORA 3Dx device, indicating that it is substantially equivalent to a predicate device (SCANORA 3D [K110839]). This type of submission relies on demonstrating similarity to an already approved device rather than presenting new clinical study data to establish effectiveness or meet novel acceptance criteria.

    Therefore, the document does not contain a specific table of acceptance criteria and reported device performance based on a dedicated study proving these criteria. Instead, it focuses on comparative technological characteristics and a "preference study" for image quality.

    Here's a breakdown of the information that can be extracted, and of where information is missing, based on your request:

    1. Table of Acceptance Criteria and Reported Device Performance:

    • Acceptance Criteria: Not explicitly stated as quantifiable metrics for a clinical study. The primary "acceptance" is demonstrated through substantial equivalence to the predicate device.
    • Reported Device Performance: The document highlights the technological characteristics of SCANORA 3Dx and compares them to the predicate device. The only "performance" comparison mentioned in a study context is a "preference study" for image quality, but specific metrics from this study are not provided.
    | Characteristic | SCANORA 3Dx (New Device) | SCANORA 3D [K110839] (Predicate Device) |
    | --- | --- | --- |
    | X-ray source | 3D mode: 90 kV, 4-10 mA, pulsed. Pan mode: 60-81 kV, 4-8 mA continuous. kV accuracy +/-5 kV. Same x-ray source for 3D and Pan modes. | 3D mode: 90 kV, 4-12.5 mA, pulsed. Pan mode: 60-81 kV, 4-8 mA continuous. kV accuracy +/-5 kV. Same x-ray source for 3D and Pan modes. |
    | Focal spot | 0.5 mm | 0.5 mm |
    | Image detector(s) | Amorphous Silicon Flat Panel + CCD for panoramic imaging | CMOS Flat Panel + CCD for panoramic imaging |
    | 3D imaging technique | Reconstruction from 2D images | Reconstruction from 2D images |
    | 3D Field of View | H50 x Ø50 mm, H50 x Ø100 mm, H80 x Ø100 mm, H140 x Ø100 mm, H80 x Ø165 mm, H140 x Ø165 mm, H180 x Ø165 mm (stitched), H240 x Ø165 mm (stitched) | H60 x Ø60 mm, H75 x Ø100 mm, H75 x Ø145 mm, H130 x Ø145 mm (stitched) |
    | 3D total viewing angle | 360 degrees | 360 degrees |
    | Pixel size | Amorphous Silicon flat panel: 120 / 240 µm. CCD for panoramic imaging: 48 µm. | CMOS flat panel: 200 µm. CCD for panoramic imaging: 48 µm. |
    | Voxel size | 100/150/200/250/300/350/400/500 µm | 133/200/250/300/350 µm |
    | 3D scan time | 18-34 sec | 10-26 sec |
    | 3D effective exposure time | 2.4-6 sec | 2.25-6 sec |
    | Indications for use | Scanora 3D is a Cone Beam 3D x-ray system for imaging the head and neck areas, including the ENT and dentomaxillofacial areas, for use in diagnostic support. Dedicated panoramic imaging is an option. A flat panel detector is used to acquire 3D images and an optional CCD sensor to acquire panoramic images. The device is operated and used by qualified healthcare professionals. | Same as the new device. |
    | System footprint | H197 cm x D140 cm x W160 cm | H197 cm x D140 cm x W160 cm |
    | Weight | 310 kg | 310 kg |
    | Preference study (image quality) | "Results were evaluated by internal reviewers." (No specific metrics or acceptance criteria reported for this study.) | Not applicable (served as the comparison device in the preference study). |
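
    A comparison table like the one above lends itself to a simple programmatic diff. The sketch below is purely illustrative (the field names and the dict structure are ours, not from the submission); the values are transcribed from the table:

    ```python
    # Illustrative only: diffing device spec tables programmatically.
    # Field names are our own; values come from the comparison table above.
    new_device = {
        "focal_spot_mm": 0.5,
        "flat_panel_pixel_um": (120, 240),
        "voxel_sizes_um": (100, 150, 200, 250, 300, 350, 400, 500),
        "scan_time_s": (18, 34),
    }
    predicate = {
        "focal_spot_mm": 0.5,
        "flat_panel_pixel_um": (200,),
        "voxel_sizes_um": (133, 200, 250, 300, 350),
        "scan_time_s": (10, 26),
    }

    def spec_diff(a, b):
        """Return the fields whose values differ between two spec dicts."""
        return {k: (a[k], b[k]) for k in a if a[k] != b[k]}

    for field, (new, old) in spec_diff(new_device, predicate).items():
        print(f"{field}: {new} vs {old}")
    ```

    In a substantial-equivalence argument, the differing fields (detector pixel size, voxel sizes, scan time) are exactly the ones the sponsor must justify against the predicate.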

    2. Sample sized used for the test set and the data provenance:

    • Test Set Sample Size: "Same phantom" was used for the preference study. The number of images generated is not specified beyond this.
    • Data Provenance: The study was conducted by the manufacturer, PaloDEx Group Oy (Finland). The provenance of the phantom data (e.g., country of origin, retrospective/prospective) is not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: "Internal reviewers" evaluated the results of the preference study. The exact number is not stated.
    • Qualifications of Experts: Their qualifications are not specified.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • The document only states "results were evaluated by internal reviewers." No specific adjudication method (like 2+1 or 3+1 where disagreements are resolved by a third party) is mentioned. It implies a qualitative "preference" rather than a quantitative ground truth establishment.
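
    For context on what a formal adjudication method would look like (none was used here), a 2+1 scheme has two primary readers label every case, with a third reader resolving only their disagreements. A minimal sketch, with binary labels assumed for simplicity:

    ```python
    def adjudicate_2plus1(reader_a, reader_b, reader_c):
        """2+1 adjudication: two primary readers label each case; a third
        reader is consulted only on the cases where the first two disagree.

        Each argument is a list of binary labels, one per case.
        Returns the adjudicated labels.
        """
        final = []
        for a, b, c in zip(reader_a, reader_b, reader_c):
            final.append(a if a == b else c)  # third reader breaks the tie
        return final

    # Readers A and B disagree on the second case; reader C decides it.
    print(adjudicate_2plus1([1, 1, 0], [1, 0, 0], [0, 1, 1]))  # → [1, 1, 0]
    ```

    A 3+1 scheme works analogously, escalating to a fourth reader when the first three fail to reach a majority.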

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without:

    • No MRMC study was done. The described "preference study" was an image quality comparison between the new device and the predicate device using a phantom, judged by internal reviewers. This is not a comparative effectiveness study involving human readers with and without AI assistance.
    • Effect Size: Not applicable as no such study was performed.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • The device is a hardware system (Cone Beam 3D X-ray system) that generates images. The document does not describe any specific algorithm or AI component that would have standalone performance measured without a human-in-the-loop. The mention of "FBP algorithm" refers to a standard image reconstruction technique, not a diagnostic AI algorithm.
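
    To make the distinction concrete: FBP is a deterministic reconstruction algorithm, not a learned model. The sketch below is a generic parallel-beam FBP in NumPy, for illustration only; the device's actual cone-beam reconstruction (typically an FDK-style algorithm) is considerably more involved, and nothing here is taken from the submission:

    ```python
    import numpy as np

    def fbp_reconstruct(sinogram, angles_deg):
        """Minimal parallel-beam filtered back-projection.

        sinogram: (n_angles, n_det) array of 1D projections.
        Returns an (n_det, n_det) reconstructed image.
        """
        n_angles, n_det = sinogram.shape

        # Ram-Lak (ramp) filter applied per projection in the Fourier domain.
        ramp = np.abs(np.fft.fftfreq(n_det))
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

        # Back-project each filtered projection across the image grid.
        mid = n_det // 2
        xs = np.arange(n_det) - mid
        X, Y = np.meshgrid(xs, xs)
        recon = np.zeros((n_det, n_det))
        for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
            # Detector coordinate seen by each pixel at this view angle.
            t = np.round(X * np.cos(theta) + Y * np.sin(theta)).astype(int) + mid
            mask = (t >= 0) & (t < n_det)
            recon[mask] += proj[t[mask]]
        return recon * np.pi / (2 * n_angles)
    ```

    The key point for this submission is that such a pipeline has fixed, analyzable behavior; there is no trained component whose standalone diagnostic performance would need separate evaluation.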

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For the "preference study," the ground truth was based on the qualitative "preference" of internal reviewers regarding image quality from a phantom. This is not expert consensus on diagnostic findings, pathology, or outcomes data. It's a subjective assessment of image quality.

    8. The sample size for the training set:

    • Not applicable. The document describes a medical imaging device (hardware) undergoing a 510(k) process based on substantial equivalence. It does not refer to a machine learning or AI algorithm that would require a "training set" in the context of diagnostic performance evaluation.

    9. How the ground truth for the training set was established:

    • Not applicable. As above, no training set for an AI algorithm is mentioned or implied.