K Number
K061126
Device Name
ACCUREX
Manufacturer
CyberMed, Inc.
Date Cleared
2006-05-09

(15 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
V-works (K013878); Vimplant
Intended Use

Software for the display and 3D visualization of dental image files from scanning devices such as CT and MRI, allowing radiologists, dental clinicians and practitioners to acquire, process, render, review, store, print and distribute DICOM 3.0 compliant image studies using standard PC hardware.

Device Description

Accurex™ is dental imaging software that loads DICOM images from CT and MR scanners and provides 3D visualization, 2D image reformation and various diagnostic tools for dentists on a PC, laptop or PACS workstation. Dentists can perform more precise and accurate pre/post-operative diagnosis using the advanced image reformation and generation functionality of Accurex™, such as panoramic and cephalometric x-ray images as well as 3D volume rendering.
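The 2D reformation and projection functionality described above can be sketched in a few lines. This is a hypothetical illustration, not code from the submission: it builds a synthetic volume (a real viewer would load a DICOM slice stack) and extracts multiplanar reformation (MPR) planes plus a maximum-intensity projection (MIP).

```python
import numpy as np

def make_volume(depth=32, rows=64, cols=64):
    """Synthetic CT-like volume with a bright sphere standing in for anatomy."""
    z, y, x = np.mgrid[:depth, :rows, :cols]
    r2 = (z - depth / 2) ** 2 + (y - rows / 2) ** 2 + (x - cols / 2) ** 2
    return np.where(r2 < 12 ** 2, 1000.0, 0.0)

def mpr_slices(vol, k):
    """Axial, coronal and sagittal planes through index k of a (z, y, x) volume."""
    return vol[k, :, :], vol[:, k, :], vol[:, :, k]

def mip(vol, axis=0):
    """Maximum-intensity projection along one axis."""
    return vol.max(axis=axis)

vol = make_volume()
axial, coronal, sagittal = mpr_slices(vol, 16)
proj = mip(vol)  # 2D projection of the brightest voxel along each ray
```

A production viewer would additionally resample along oblique planes and apply the DICOM Pixel Spacing / Slice Thickness attributes so that reformatted images are geometrically correct.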

AI/ML Overview

The provided 510(k) summary for CyberMed, Inc.'s Accurex device (K061126) focuses on demonstrating substantial equivalence to predicate devices (V-works and Vimplant) based on its functional description and intended use. It does not contain information about specific acceptance criteria, a formal study design that proves the device meets acceptance criteria (beyond functional testing), reader performance, or detailed ground truth establishment for a test set.

Therefore, I cannot populate the requested table and details regarding a specific study proving acceptance criteria. The document primarily describes the device's features and its compliance with DICOM 3.0 standards, comparing it generally to predicate devices in terms of functionality for image processing and visualization in a dental context.

However, I can extract information related to the device's capabilities and its intended use, which inherently imply certain functional acceptance.

Here's what can be inferred and what information is missing:


1. Table of Acceptance Criteria and Reported Device Performance

As noted above, no specific quantitative acceptance criteria or reported performance metrics (like sensitivity, specificity, accuracy, or reader agreement statistics) are provided in the document. The document asserts the device's functionality and substantial equivalence to previously cleared devices.

| Acceptance Criteria Category (Inferred) | Specific Criteria (Inferred/Missing) | Reported Device Performance (Missing/Inferred) |
|---|---|---|
| DICOM 3.0 Compliance | Ability to load, store, display DICOM 3.0 data | "The Accurex™ 2.0 is adaptable technically for all data of DICOM 3.0." |
| Image Display & Manipulation | Correct display of 2D/3D images, basic operations (pan, zoom, rotate, WL) | Functionality described: CINE player, 2D/3D manipulation tools. Implied to perform correctly. |
| Image Reconstruction & Generation | Accurate Panoramic, Cross-sectional, MPR, 3D volume rendering, MIP/MinIP, Cephalometric x-ray image generation | Functionality described. Implied to perform as intended for "accurate anatomical knowledge." |
| Nerve Creation & Display | Ability to draw nerve lines and display intersections | Functionality described. Implied to perform correctly. |
| Reporting & Measurement | Capability to generate reports, perform 2D/3D measurements | Functionality described. Implied to perform correctly. |
| File Format Support | Load DCM; save as DCM, BMP, JPG | Specified. |
| Safety & Effectiveness | General safety and effectiveness for intended use | Concluded to be "safe and effective and substantially equivalent to predicate devices." |
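The "Reporting & Measurement" capability above is the kind of function whose accuracy is typically verified in software testing. As a hypothetical sketch (not from the 510(k)), a 2D distance measurement converts pixel coordinates to millimetres using the DICOM Pixel Spacing attribute (0028,0030); the spacing values here are illustrative.

```python
import math

def distance_mm(p1, p2, pixel_spacing):
    """Euclidean distance between two (row, col) points in millimetres.

    pixel_spacing = (row_spacing_mm, col_spacing_mm), as carried in
    DICOM tag (0028,0030) Pixel Spacing.
    """
    dr = (p2[0] - p1[0]) * pixel_spacing[0]
    dc = (p2[1] - p1[1]) * pixel_spacing[1]
    return math.hypot(dr, dc)

# Two landmarks 30 px apart vertically on a 0.2 mm/px image -> 6.0 mm
length = distance_mm((10, 10), (40, 10), (0.2, 0.2))
```

Verifying such measurements against phantoms of known dimensions is a common form of the functional testing the summary alludes to.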

2. Sample size used for the test set and the data provenance

  • Sample Size: Not specified.
  • Data Provenance: Not specified. The document mentions loading DICOM images from CT/MR devices, but does not detail the origin or nature of data used for any testing.
  • Retrospective/Prospective: Not specified.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of Experts: Not specified.
  • Qualifications of Experts: Not specified.

4. Adjudication method for the test set

  • Adjudication Method: Not specified.

5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance

  • MRMC Study: No, an MRMC study is not mentioned or implied. The device is described as an image processing and visualization tool, not an AI diagnostic aid that would directly assist human readers in interpretation or improve their performance. The purpose is to provide "further precise and accurate pre/post-operative diagnosis" by enabling the practitioner to view and manipulate images.

6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Standalone Performance: Not explicitly detailed with performance metrics. The document describes the algorithm's capabilities (e.g., 3D image construction via Volume Rendering) and states that the "3D Algorithm was cleared in V-works (K013878)." This implies that the core 3D processing algorithms underwent some form of validation in the predicate device, but specific standalone performance metrics for Accurex are not provided.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

  • Type of Ground Truth: Not specified in relation to any formal testing or validation within this document. The device is a display and processing software, implying that the "ground truth" would relate to the accuracy of image rendering, measurements, and reformations compared to the source DICOM data and established imaging principles, rather than a diagnostic outcome.

8. The sample size for the training set

  • Training Set Sample Size: Not applicable. The document describes image processing software, not a machine learning or AI algorithm that would typically require a "training set" in the conventional sense for a diagnostic model. While algorithms might have been developed using data, a dedicated "training set" for a submitted AI model is not indicated.

9. How the ground truth for the training set was established

  • Ground Truth Establishment for Training Set: Not applicable, as no training set for an AI model is indicated.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
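The "image segmentation" and "semi-automated measurements" the regulation names can be illustrated with a minimal sketch, assuming a simple intensity threshold (not the device's actual algorithm, which is not described in the summary): segment voxels above a cutoff, then quantify the segmented volume from the voxel size.

```python
import numpy as np

def segment_threshold(vol, level):
    """Binary mask of voxels at or above an intensity level (e.g. a crude bone cutoff)."""
    return vol >= level

def segmented_volume_mm3(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Total segmented volume in cubic millimetres: voxel count times voxel volume."""
    dz, dy, dx = voxel_size_mm
    return int(mask.sum()) * dz * dy * dx

vol = np.zeros((10, 10, 10))
vol[2:5, 2:5, 2:5] = 1200.0            # a 3x3x3 high-intensity block
mask = segment_threshold(vol, 400.0)   # threshold segmentation
volume = segmented_volume_mm3(mask, (0.5, 0.5, 0.5))  # 27 voxels x 0.125 mm^3
```

Such quantitative outputs are exactly the "complex quantitative functions" that place devices of this type under the Class II special controls above.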