
510(k) Data Aggregation

    K Number: K193139
    Manufacturer:
    Date Cleared: 2020-03-05 (113 days)
    Product Code:
    Regulation Number: 892.1750
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K181432
    Intended Use

    ProVecta 3D Prime Ceph is a computed tomography X-ray unit intended to generate 3D, panoramic, and cephalometric X-ray images in dental radiography for adult and pediatric patients. It provides diagnostic detail of the maxillofacial area for dental treatment. The device is operated and used by physicians, dentists, and X-ray technicians.

    Not intended for mammography use.

    Device Description

    This device is a cone beam CT X-ray device for the acquisition of dental images. As with conventional computed tomography or magnetic resonance tomography, sectional images can be generated with CBCT. With CBCT, an X-ray tube and an imaging sensor opposite it rotate around a seated or standing patient. The X-ray tube rotates through 180°-540° and emits a conical X-ray beam. The X-rays pass through the region under investigation and are measured for image generation by a detector as an attenuated grey-scale X-ray image. A large series of two-dimensional individual images is acquired during the revolution of the X-ray tube. Using a mathematical calculation on the rotating image series via a reconstruction computer, a grey-value coordinate image is generated in the three spatial dimensions. This three-dimensional coordinate model corresponds to a volume graphic made up of individual voxels. This volume can be used to generate sectional images (tomograms) in all spatial dimensions as well as 3D views. The system complies with the US Radiation Safety Performance Standard. This device is similar to our reference device, K181432, but we have now added cephalometric capability, making it entirely equivalent to our predicate device for indications. An option allows the customer to purchase this new device without the CEPH function, if desired.
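    The reconstruction described above (many 2D views combined into a voxel volume) can be sketched in miniature. The snippet below is an illustrative, unfiltered parallel-beam back-projection in 2D only; it is not the device's actual algorithm, which uses cone-beam geometry and a more sophisticated filtered reconstruction.

    ```python
    import numpy as np

    def backproject(projections, angles_deg, size):
        """Reconstruct a 2D slice from 1D parallel-beam projections by
        simple (unfiltered) back-projection: each view is smeared back
        across the image plane along its acquisition angle and summed."""
        recon = np.zeros((size, size))
        center = (size - 1) / 2.0
        rows, cols = np.mgrid[0:size, 0:size]
        xs = cols - center
        ys = rows - center
        for proj, angle in zip(projections, angles_deg):
            theta = np.deg2rad(angle)
            # Detector coordinate sampled by each pixel for this view.
            t = xs * np.cos(theta) + ys * np.sin(theta)
            idx = np.clip(np.round(t + center).astype(int), 0, size - 1)
            recon += proj[idx]
        return recon / len(angles_deg)

    # Simulate projections of a single-point phantom at (row, col) = (20, 8)
    # over a 180-degree sweep, then reconstruct the slice.
    size = 33
    row, col = 20, 8
    center = (size - 1) / 2.0
    angles = np.arange(0, 180, 5)
    projections = []
    for a in angles:
        theta = np.deg2rad(a)
        t = (col - center) * np.cos(theta) + (row - center) * np.sin(theta)
        p = np.zeros(size)
        p[int(round(t + center))] = 1.0
        projections.append(p)

    recon = backproject(projections, angles, size)
    # The brightest reconstructed pixel lands back at the phantom point.
    peak = np.unravel_index(recon.argmax(), recon.shape)
    ```

    Unfiltered back-projection produces a characteristic blur around the true point; clinical reconstruction filters each projection first to sharpen it.
    
    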

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the ProVecta 3D Prime Ceph based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided FDA 510(k) summary does not include specific acceptance criteria in numerical or quantifiable terms (e.g., minimum sensitivity/specificity, specific image quality scores). Instead, it relies on demonstrating compliance with recognized standards and comparing its performance to a predicate device.

    The "reported device performance" primarily comes from conformity to these standards and the implicit performance derived from its technological characteristics being similar to or the same as the predicate.

    Acceptance Criteria (Implied by Standards & Equivalence) | Reported Device Performance
    Safety | Complies with IEC 60601-1 (General requirements for basic safety and essential performance), IEC 60601-1-2 (Electromagnetic Compatibility), IEC 60601-1-3 (Radiation Protection in Diagnostic X-Ray Equipment), and IEC 60825-1 (Safety of laser products).
    Essential Performance | Verified through compliance with IEC 60601-1 and IEC 60601-2-63 (Particular requirements for the basic safety and essential performance of dental extra-oral X-ray equipment).
    Image Quality (Dental Radiography) | Acceptance testing was performed for both panoramic and cephalometric modes according to DIN 6868-151 (Image quality assurance in diagnostic X-ray departments - Acceptance testing of dental radiographic equipment) and DIN 6868-161 (Image quality assurance in diagnostic X-ray departments - Acceptance testing of dental radiographic equipment for digital cone-beam computed tomography). Line pairs and contrast were evaluated using a phantom designed for this purpose. The device's technological characteristics (kV, mA, focal spot, detector) are similar to the predicate's.
    Usability | Complies with IEC 60601-1-6 (Usability) and IEC 62366 (Application of usability engineering to medical devices).
    Software Life-cycle Processes | Complies with IEC 62304 (Medical device software life-cycle processes). Firmware was evaluated according to the FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices; risk management is documented.
    Biocompatibility | The chin holder (material: PBT) complies with EN ISO 10993-5 (Cytotoxicity). Other accessories were previously cleared.
    Substantial Equivalence | The device is deemed substantially equivalent to the predicate (K152106) and reference device (K181432) regarding technology, performance, and indications for use. The key difference from the reference device (K181432, which lacked CEPH) is resolved by the addition of cephalometric capability, making the device entirely equivalent to the predicate.
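    As a rough illustration of the kind of phantom-based contrast measurement such acceptance testing involves, the sketch below computes Michelson contrast between a signal and a background region of interest. This is illustrative only: the actual DIN 6868-151/-161 procedures, phantoms, and pass/fail thresholds are not reproduced in the 510(k) summary.

    ```python
    import numpy as np

    def roi_contrast(image, signal_roi, background_roi):
        """Michelson-style contrast between two rectangular ROIs of a
        phantom image, each ROI given as (row0, row1, col0, col1)."""
        r0, r1, c0, c1 = signal_roi
        signal = image[r0:r1, c0:c1].mean()
        r0, r1, c0, c1 = background_roi
        background = image[r0:r1, c0:c1].mean()
        return (signal - background) / (signal + background)

    # Synthetic phantom: uniform background of 10 with a 30-valued insert.
    phantom = np.full((64, 64), 10.0)
    phantom[20:30, 20:30] = 30.0
    contrast = roi_contrast(phantom, (20, 30, 20, 30), (40, 50, 40, 50))
    # (30 - 10) / (30 + 10) = 0.5
    ```

    In practice the measured contrast for each phantom insert would be compared against the threshold the applicable standard prescribes.
    
    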

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a sample size for a "test set" in the context of patient data or clinical images. The testing described focuses on non-clinical performance and engineering standards (e.g., electrical safety, image quality with phantoms).

    • Test Set Sample Size: Not applicable/not provided for patient data.
    • Data Provenance: Not applicable, as no patient data test set is described. The non-clinical testing appears to have been conducted by the manufacturer, presumably in Germany (country of origin for DÜRR DENTAL SE).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    This information is not provided as the evaluation relies on non-clinical phantom-based testing and compliance with recognized standards, rather than expert-derived ground truth from clinical images.

    4. Adjudication Method for the Test Set

    This information is not provided as the evaluation relies on non-clinical phantom-based testing and compliance with recognized standards.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done, and thus, no effect size of human reader improvement with AI assistance is reported. This device is an X-ray imaging system, not an AI-powered diagnostic aid for interpretation.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    No, a standalone algorithm performance study was not done. The device is a medical imaging hardware system.

    7. The Type of Ground Truth Used

    The "ground truth" for this device's evaluation is primarily established through:

    • Phantoms: For image quality assessment (line pair, contrast).
    • Engineering Standards: Electrical safety, radiation protection, EMC, usability, software life-cycle, and biocompatibility standards provide the "ground truth" for compliance.
    • Predicate Device Performance: The primary "ground truth" for substantial equivalence is demonstrating that the ProVecta 3D Prime Ceph performs as safely and effectively as the legally marketed predicate device (Vatech Co. Ltd. PaX-i3D Smart, K152106).

    8. The Sample Size for the Training Set

    This information is not applicable/not provided. The device is an X-ray imaging system, not an AI/machine learning device that requires a training set of data.

    9. How the Ground Truth for the Training Set Was Established

    This information is not applicable/not provided for the same reason as point 8.


    K Number: K192743
    Device Name: VisionX 2.4
    Manufacturer:
    Date Cleared: 2019-10-31 (31 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K181432, K191623

    Intended Use

    The software is intended for the viewing and diagnosis of image data in relation to dental issues. Its proper use is documented in the operating instructions of the corresponding image-generating systems. Image-generating systems that can be used with the software include optical video cameras, image plate scanners, extraoral X-ray devices, intraoral scanners, and TWAIN-compatible image sources. The software must only be used by authorized healthcare professionals in dental areas for the following tasks:

    • Filter optimization of the display of 2D and 3D images for improved diagnosis

    • Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitized 2D and 3D images and videos

    • Forwarding of images and additional data to external software (third-party software)

    The software is not intended for mammography use.

    Device Description

    VisionX 2.4 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 2.4 runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition. This software was cleared in K181432 as part of the X-ray system ProVecta 3D Prime. With this submission, VisionX is established as standalone software. Additionally, new hardware was integrated: support for the ScanX Touch / Duo Touch (K191623).

    AI/ML Overview

    This document is a 510(k) premarket notification for the VisionX 2.4 imaging software. It primarily focuses on demonstrating substantial equivalence to a predicate device (DBSWIN and VistaEasy Imaging Software, K190629), rather than presenting a detailed study with specific acceptance criteria and performance metrics for an AI/algorithm-driven diagnostic aid.

    Here's an analysis based on the provided text, highlighting what is available and what is not:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not explicitly define acceptance criteria as a table with numerical thresholds for performance metrics (e.g., accuracy, sensitivity, specificity) for algorithm performance. Instead, it relies on the concept of "substantial equivalence" to a predicate device that has established safety and effectiveness.

    The "device performance" reported is at a high level, stating:

    • "Software testing, effectiveness, and functionality were successfully conducted and verified between VisionX 2.4 and image capture devices."
    • "Full functional software cross check testing was performed."
    • "The verification testing demonstrates that the device continues to meet its performance specifications and the results of the testing did not raise new issues of safety or effectiveness."

    2. Sample Size Used for the Test Set and Data Provenance:

    This information is not provided in the document. The submission is for an imaging software that manages and displays images, and while it mentions "supporting diagnosis," it does not seem to include a specific AI/algorithm for automated diagnosis where a test set with performance metrics would typically be required.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

    This information is not provided. Given the nature of the submission (imaging software for viewing and management, rather than a novel AI diagnostic algorithm), such detailed ground truth establishment is not typically a requirement for this type of 510(k). The document mentions a "Clinical Evaluation" which included "detailed review of literature data, data from practical tests in dental practices, and safety data," to conclude suitability for dental use, but this is distinct from establishing ground truth for an AI algorithm's performance.

    4. Adjudication Method for the Test Set:

    This information is not provided.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not reported. The submission focuses on the functionality and safety of the imaging software itself and its equivalence to other legally marketed imaging software, not on an AI's impact on human reader performance.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done:

    Given that VisionX 2.4 is described as "imaging software" for "viewing and diagnosis of image data" and includes "Filter optimization of the display of 2D and 3D images for improved diagnosis" and "supporting diagnosis," it functions as a tool for clinicians. The text explicitly states that "The software must only be used by authorized healthcare professionals in dental areas for the following tasks." This indicates a human-in-the-loop scenario. The document does not describe a standalone algorithm performance study in the way typically expected for an AI diagnostic tool.

    7. The Type of Ground Truth Used:

    This information is not explicitly stated as traditionally understood for AI performance. The nearest concept mentioned is that the "Clinical Evaluation" concluded that the software is suitable for dental use, based on "literature data, data from practical tests in dental practices, and safety data." This points more towards usability and safety in a clinical context rather than a specific ground truth for an automated diagnostic task.

    8. The Sample Size for the Training Set:

    This information is not provided. As this is an imaging and management software, not a deep learning AI model requiring a distinct training set for diagnostic capabilities, such data is not expected or presented.

    9. How the Ground Truth for the Training Set was Established:

    This information is not provided, as it's not a submission for a deep learning AI model with a training set requiring ground truth establishment in the typical sense.


    Summary of what is present and relevant to the request:

    The submission focuses on the functionality and software development process of VisionX 2.4, an imaging management and display software for dental use, seeking substantial equivalence to a predicate device (K190629). It highlights:

    • Compliance with IEC 62304 and FDA guidance for software in medical devices.
    • Successful software testing for effectiveness and functionality.
    • DICOM compliance.
    • Risk analysis, design reviews, and full functional cross-check testing.
    • A "Clinical Evaluation" assessing suitability for dental use based on literature and practical tests.

    The document does not provide specific quantitative acceptance criteria or detailed studies on the performance of a novel AI/algorithm in terms of diagnostic accuracy, sensitivity, or specificity against established ground truth, or its impact on human reader performance. This is consistent with the device being primarily an image management and viewing system with features that "support diagnosis" through display optimization, rather than a standalone AI diagnostic tool.

