Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K203287
    Manufacturer: Dürr Dental SE
    Date Cleared: 2020-11-18 (9 days)
    Regulation Number: 892.2050
    Why did this record match? Applicant Name (Manufacturer): Dürr Dental SE

    Intended Use

    DBSWIN and VistaEasy imaging software are intended for use by qualified dental professionals for Windows-based diagnostics. The software is a diagnostic aid for licensed radiologists, dentists, and clinicians, who perform the actual diagnosis based on their training, qualification, and clinical experience. DBSWIN and VistaEasy are clinical software applications that receive images and data from various imaging sources (i.e., radiography devices and digital video capture devices) that are manufactured and distributed by Duerr Dental and Air Techniques. It is intended to acquire, display, edit (i.e., resize, adjust contrast, etc.), and distribute images using standard PC hardware. In addition, DBSWIN enables the acquisition of still images from 3rd-party TWAIN-compliant imaging devices (e.g., generic image devices such as scanners) and the storage and printing of clinical exam data, while VistaEasy distributes the acquired images to 3rd-party TWAIN-compliant PACS systems for storage and printing. DBSWIN and VistaEasy software are not intended for mammography use.

    Device Description

    DBSWIN and VistaEasy imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. DBSWIN and VistaEasy software runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition. VistaEasy is included as part of DBSWIN and provides additional interfaces for third-party software; it can also be used on its own as a reduced-feature version of DBSWIN.
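
    As purely illustrative context, not Dürr Dental's code: the editing operations listed (resize, contrast adjustment, rotation) are generic raster transforms, of the kind expressible in a few lines with the Pillow library in Python:

    ```python
    # Illustrative only: generic raster edits of the kind DBSWIN describes
    # (resize, adjust contrast, rotate). Not the vendor's implementation.
    from PIL import Image, ImageEnhance

    def edit_radiograph(path_in: str, path_out: str) -> None:
        img = Image.open(path_in).convert("L")               # 8-bit grayscale for simplicity
        img = img.resize((img.width // 2, img.height // 2))  # resize
        img = ImageEnhance.Contrast(img).enhance(1.4)        # adjust contrast
        img = img.rotate(90, expand=True)                    # rotate
        img.save(path_out)                                   # store / export
    ```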

    AI/ML Overview

    The provided text is a 510(k) summary for a medical device software (DBSWIN and VistaEasy Imaging Software). It describes the software's purpose, its comparison to a predicate device, and the non-clinical testing performed to establish substantial equivalence.

    However, this document does NOT contain information about specific acceptance criteria for performance metrics (like sensitivity, specificity, or image quality scores) or a study proving the device meets those criteria in the traditional sense of a clinical performance study with human-in-the-loop or standalone AI performance.

    Instead, this 510(k) submission leverages the concept of substantial equivalence to a previously cleared predicate device (DBSWIN and VistaEasy Imaging Software K190629). The "proof" that the device meets acceptance criteria is primarily demonstrated through:

    1. Direct comparison of technological characteristics: Showing that the modified device has the same intended use, functionality, and performance as the predicate device.
    2. Verification testing: Ensuring that the software performs as intended and that the minor modifications (operating system changes, new hardware support) do not introduce new safety or effectiveness concerns. This is typically bench testing and does not involve clinical performance metrics; a hypothetical sketch of this kind of functional check appears after this list.
    3. Adherence to recognized standards and guidance documents: Demonstrating that the software development and risk management processes follow established regulatory guidelines.
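
    The summary gives no detail on these bench tests. Purely as a hypothetical sketch (pytest-style; none of these function names come from the submission), functional verification of an image management system typically asserts properties such as lossless storage and deterministic edits:

    ```python
    # Hypothetical sketch of functional verification checks (pytest-style).
    # None of these names come from the submission; the actual bench tests
    # are not disclosed in the 510(k) summary.
    import hashlib
    import numpy as np

    def _digest(arr: np.ndarray) -> str:
        return hashlib.sha256(arr.tobytes()).hexdigest()

    def test_store_and_retrieve_is_lossless(tmp_path):
        img = np.random.randint(0, 2**16, size=(512, 512), dtype=np.uint16)
        path = tmp_path / "frame.npy"
        np.save(path, img)                        # "store" step
        restored = np.load(path)                  # "retrieve" step
        assert _digest(restored) == _digest(img)  # no corruption on round trip

    def test_contrast_stretch_is_deterministic():
        def stretch(a: np.ndarray) -> np.ndarray:
            span = max(int(np.ptp(a)), 1)
            return ((a - a.min()) / span * 65535).astype(np.uint16)

        img = np.random.randint(0, 2**16, size=(64, 64), dtype=np.uint16)
        assert np.array_equal(stretch(img), stretch(img))
    ```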

    Therefore, many of the requested points about "acceptance criteria" for performance metrics and clinical study details (like sample size for test sets, expert consensus, MRMC studies, ground truth establishment) are not applicable or not detailed in this specific document, because the submission path is based on substantial equivalence to an existing device rather than demonstrating de novo clinical performance for a novel algorithm.

    Here's a breakdown of what can be extracted and what information is missing based on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    As explained above, specific quantitative performance acceptance criteria (e.g., sensitivity, specificity, AUC) are not detailed in this 510(k) summary because it's a substantial equivalence submission based on an existing device's functionality. The "performance" is implicitly deemed equivalent to the predicate.

    The acceptance criteria are more related to functional equivalence and safety:

    Acceptance Criterion (Implied/Functional) | Reported Device Performance (from comparison table)
    Indications for Use: Same as predicate | YES (same, unchanged)
    Patient Management: Same as predicate | YES
    Image Management: Same as predicate | YES
    Acquisition Sources (X-ray, Laser Fluorescence, Video, Photos, Documents): Same as predicate | YES (all listed sources are supported)
    Display Images: Same as predicate | YES
    Save/Store Images: Same as predicate | YES
    Produce Reports: Same as predicate | YES
    Print/Export Images: Same as predicate | YES
    Image Enhancement (Brightness, Contrast, Colorize, Crop, Rotate, Zoom, Invert, Sharpen, Measure, Over/Under Exposure, Annotate): Same as predicate | YES (all listed enhancements are supported)
    Run on standard PC-compatible computers: Same as predicate | YES
    Supported Devices: Same as predicate, plus new integrations | YES (supports ScanX, ProVecta S-Pan, CamX, and new SensorX)
    Computer operating systems: Updated but compatible subset of predicate's supported OS | Microsoft Windows 8.1, 64-bit; Microsoft Windows 10, 64-bit; Microsoft Windows Server 2012; Microsoft Windows Server 2016
    CPU, RAM, Drive, Hard Disk, Data Backup, Interface, Diagnostic Monitor, Resolution/Graphics: Same as predicate | No change
    Safety and Effectiveness: Minor modifications do not alter fundamental scientific technology; verification testing successfully conducted; no new safety/effectiveness issues raised | Confirmed through non-clinical testing and risk analysis update

    2. Sample size used for the test set and the data provenance

    The document mentions "Bench testing, effectiveness, and functionality were successfully conducted and verified with the compatible image capture devices."

    • Sample Size for Test Set: Not specified. This typically refers to the number of patient cases or images used in a clinical performance study. For a substantial equivalence claim based on functional changes to image management software, detailed test set sample sizes for performance metrics are often not provided in the public summary if the testing is primarily functional verification rather than clinical validation of an AI algorithm.
    • Data Provenance: Not specified (e.g., country of origin, retrospective or prospective), and not relevant given the type of submission.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. The submission is for image management software, not an AI diagnostic algorithm that requires expert-established ground truth for clinical performance evaluation. The "ground truth" for this type of device would be its ability to correctly acquire, display, edit, store, and distribute images as per its specifications, which is verified through functional testing.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. No clinical image-based test set requiring adjudication is described for this submission.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No. This is not an AI diagnostic device. It is an image management software. Therefore, an MRMC study is not relevant or described.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    Not Applicable. This is not an AI diagnostic algorithm. The software performs functions like image acquisition, display, editing, and distribution. Its "performance" is its ability to execute these functions correctly, which is evaluated through functional bench testing, not as a standalone diagnostic algorithm.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not Applicable. For a software acting as a PACS-like system, the "ground truth" is typically whether the software correctly implements its specified functions (e.g., does it display the image correctly? Does it store it without corruption? Can it apply the specified edits?). This isn't a diagnostic ground truth established by medical experts or pathology.


    8. The sample size for the training set

    Not Applicable. This is not an AI/Machine Learning device that undergoes "training."


    9. How the ground truth for the training set was established

    Not Applicable. As per point 8, this is not an AI/Machine Learning device requiring a training set and its associated ground truth.


    K Number: K202633
    Device Name: ScanX Edge
    Manufacturer: Dürr Dental SE
    Date Cleared: 2020-10-07 (26 days)
    Regulation Number: 872.1800
    Why did this record match? Applicant Name (Manufacturer): Dürr Dental SE

    Intended Use

    The ScanX Edge is intended to be used for scanning and processing digital images exposed on Phosphor Storage Plates (PSPs) in dental applications.

    Device Description

    The ScanX Edge is a dental device that scans photostimulable phosphor storage plates that have been exposed in place of dental X-ray film and allows the resulting images to be displayed on a personal computer monitor and stored for later retrieval. It will be used by licensed clinicians and authorized technicians for this purpose. The device is an intraoral plate scanner designed to read out intraoral plates of sizes 0, 1, and 2; intraoral plate scanners have been state of the art since 2002 and are available in various designs from many companies around the world. The intraoral plates are placed in the patient's mouth, exposed to X-rays, and then read out with the device. The read-out process is carried out with a 635 nm laser whose beam is moved across the surface of the plate by an oscillating MEMS mirror. The laser beam stimulates the top coating of the plate, which consists of X-ray-sensitive material; depending on the exposed dose, the coating emits different levels of light. This light is captured by an optical sensor (photomultiplier tube, PMT) and converted into an electrical output signal, which is digitized to become the data for the digital X-ray image. The data is transmitted via an Ethernet link to a computer. Before the plate is discharged, the remaining latent image is erased by an LED PCB. The user selects the plate size, prepares the device by inserting the appropriate plate insert, exposes the plate, and then pushes the plate out of its light-protection envelope directly into the insert. The user closes the light-protection cover and starts the read-out process. Afterward, the image is transmitted to the connected PC, where it can be viewed, and the imaging plate (IP) is erased and ready for the next acquisition.
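
    As a conceptual sketch of the signal chain described above (PMT output sampled and quantized into 16-bit pixel data), and not the device's actual firmware, the digitization step can be modeled like this:

    ```python
    # Conceptual model only: the analog PMT signal described above is sampled
    # and quantized into 16-bit grey levels. This illustrates the principle,
    # not Dürr Dental's actual firmware or signal processing.
    import numpy as np

    def digitize_pmt_signal(voltages: np.ndarray, v_full_scale: float) -> np.ndarray:
        """Map PMT voltages in [0, v_full_scale] to 16-bit pixel values."""
        clipped = np.clip(voltages, 0.0, v_full_scale)
        return np.round(clipped / v_full_scale * 65535).astype(np.uint16)

    # One scan line of simulated PMT samples becomes one row of the image.
    row = digitize_pmt_signal(np.random.uniform(0.0, 2.0, size=1024), v_full_scale=2.0)
    ```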

    AI/ML Overview

    The provided text describes the Dürr Dental SE ScanX Edge device (K202633) and its substantial equivalence to a predicate device, the ScanX Intraoral View (K170733). However, it does not contain a dedicated study that establishes acceptance criteria and then demonstrates, with performance data, that the device meets them.

    Instead, the document focuses on demonstrating that the ScanX Edge maintains similar performance characteristics and safety profiles compared to a previously cleared device. It references technical testing and standards compliance rather than a clinical performance study with defined acceptance criteria and performance metrics for the device itself.

    Therefore, I cannot provide a table of acceptance criteria and reported device performance from a dedicated study as that information is not present in the provided text. I will extract the available performance specifications and the context of their assessment.

    Here's a breakdown of the information that can be extracted from the document, with explanations for what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    As noted above, the document does not present a specific study that defines acceptance criteria and then reports performance against those criteria. Instead, it compares the ScanX Edge's technical specifications to those of its predicate device, often stating that the values are "similar." These comparisons are made against established standards for general X-ray imaging devices.

    Metric (acceptance criteria inferred as being "similar" or "more than XX% at YY lp/mm" relative to the predicate) | Predicate Device (ScanX Intraoral View) Performance | Subject Device (ScanX Edge) Performance
    Image quality: theoretical resolution | 10, 20, 25, or 40 lp/mm | Max. theoretical resolution approx. 16.7 lp/mm
    MTF (Modulation Transfer Function) at 3 lp/mm | More than 45% | More than 40%
    DQE (Detective Quantum Efficiency) at 3 lp/mm | More than 7.5% | More than 3.4%
    Image data bit depth | 16 bits | 16 bits

    Note: The document states that the "lower DQE did not adversely affect image quality as shown by our SSD testing," implying that despite a numerically lower DQE, the overall image quality was deemed acceptable, likely within the context of substantial equivalence to the predicate in dental applications.
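
    For context on how the two table metrics relate: in IEC 62220-1-style measurements, DQE at a spatial frequency f is commonly computed from the measured MTF and the normalized noise power spectrum (NNPS), so a lower DQE at comparable MTF indicates relatively more noise:

    ```latex
    % Common formulation of DQE used in IEC 62220-1-style measurements;
    % q is the incident photon fluence, NNPS the normalized noise power spectrum.
    \[
      \mathrm{DQE}(f) = \frac{\mathrm{MTF}^2(f)}{q \cdot \mathrm{NNPS}(f)}
    \]
    ```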


    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Not specified. The document refers to "MTF and DQE image performance testing" and "SSD testing" but does not provide details on the number of images or cases used for these tests.
    • Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective. Implicitly, as Dürr Dental SE is a German company, some testing might have occurred in Germany, but this is not stated for the performance data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.

    The performance tests referenced (MTF, DQE, noise power spectrum) are objective technical measurements performed on the device itself, often using phantoms or controlled inputs, rather than human interpretation of clinical images. Therefore, the concept of "ground truth established by experts," as typically applied in diagnostic AI studies, is not directly applicable to these technical metrics. The "SSD testing" mentioned in relation to DQE might involve human assessment, but details are not provided.


    4. Adjudication method for the test set

    Not applicable/Not specified for the referenced technical performance tests (MTF, DQE). These are objective measurements rather than subjective assessments requiring adjudication.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted or reported in this document.
    • Effect Size: Not applicable, as no MRMC study was performed.

    This device is described as an "Extraoral source x-ray system" for scanning phosphor plates and displaying digital images. It is not an AI algorithm intended to assist human readers in diagnosis. Therefore, an MRMC study assessing AI assistance is outside the scope of this device's function as described.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    A "standalone" performance assessment was done in the sense that the intrinsic physical image quality metrics (MTF, DQE, theoretical resolution) of the device were measured and reported. These metrics reflect the device's inherent capability to produce high-quality images, independent of human interpretation or assistance. However, it's not an algorithm that performs a diagnostic task.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth for the MTF, DQE, and theoretical resolution measurements is based on physical properties and standardized test methods as outlined in IEC 62220-1:2003 ("Medical electrical equipment - Characteristics of digital X-ray imaging devices - Part 1: Determination of the input properties of digital X-ray imaging devices"). This typically involves phantoms and known physical inputs rather than clinical ground truth such as pathology or expert consensus.


    8. The sample size for the training set

    Not applicable. This device is a hardware imaging system, not an artificial intelligence algorithm that requires a training set.


    9. How the ground truth for the training set was established

    Not applicable, as there is no training set for this hardware device.

