The software is intended for the viewing and diagnosis of image data in relation to dental issues. Its proper use is documented in the operating instructions of the corresponding image-generating systems. Image-generating systems that can be used with the software include optical video cameras, image plate scanners, extraoral X-ray devices, intraoral scanners, and TWAIN-compatible image sources. The software must only be used by authorized healthcare professionals in dental areas for the following tasks:
- Filter optimization of the display of 2D and 3D images for improved diagnosis (illustrated in the sketch after the intended-use statement below)
- Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitized 2D and 3D images and videos
- Forwarding of images and additional data to external software (third-party software)
The software is not intended for mammography use.
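The "filter optimization" task above refers generically to display enhancement filters. As a minimal sketch of what such a filter can look like (this is not VisionX's actual algorithm; the function name, parameter values, and synthetic data are assumptions for illustration only), the following Python example applies unsharp masking to a radiograph-like array:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, sigma: float = 2.0, amount: float = 1.5) -> np.ndarray:
    """Sharpen a grayscale radiograph for display (illustrative only).

    Subtracts a Gaussian-blurred copy from the original to isolate
    fine detail, then adds the detail back, scaled by `amount`.
    """
    img = image.astype(np.float64)
    blurred = gaussian_filter(img, sigma=sigma)
    detail = img - blurred
    sharpened = img + amount * detail
    # Clip back to the intensity range of the input image.
    return np.clip(sharpened, img.min(), img.max())

# Example: enhance a synthetic 12-bit radiograph-like array.
radiograph = (np.random.rand(512, 512) * 4095).astype(np.uint16)
enhanced = unsharp_mask(radiograph, sigma=2.0, amount=1.5)
```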
VisionX 2.4 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 2.4 software runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition. This software was cleared in K181432 as part of the X-ray system ProVecta 3D Prime. With this submission, VisionX will be established as standalone software. Additionally, new hardware was integrated: support for the ScanX Touch / Duo Touch (K191623).
This document is a 510(k) premarket notification for the VisionX 2.4 imaging software. It primarily focuses on demonstrating substantial equivalence to a predicate device (DBSWIN and VistaEasy Imaging Software, K190629), rather than presenting a detailed study with specific acceptance criteria and performance metrics for an AI/algorithm-driven diagnostic aid.
Here's an analysis based on the provided text, highlighting what is available and what is not:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly define acceptance criteria as a table with numerical thresholds for performance metrics (e.g., accuracy, sensitivity, specificity) for algorithm performance. Instead, it relies on the concept of "substantial equivalence" to a predicate device that has established safety and effectiveness.
The "device performance" reported is at a high level, stating:
- "Software testing, effectiveness, and functionality were successfully conducted and verified between VisionX 2.4 and image capture devices."
- "Full functional software cross check testing was performed."
- "The verification testing demonstrates that the device continues to meet its performance specifications and the results of the testing did not raise new issues of safety or effectiveness."
2. Sample Size Used for the Test Set and Data Provenance:
This information is not provided in the document. The submission is for an imaging software that manages and displays images, and while it mentions "supporting diagnosis," it does not seem to include a specific AI/algorithm for automated diagnosis where a test set with performance metrics would typically be required.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:
This information is not provided. Given the nature of the submission (imaging software for viewing and management, rather than a novel AI diagnostic algorithm), such detailed ground truth establishment is not typically a requirement for this type of 510(k). The document mentions a "Clinical Evaluation" which included "detailed review of literature data, data from practical tests in dental practices, and safety data," to conclude suitability for dental use, but this is distinct from establishing ground truth for an AI algorithm's performance.
4. Adjudication Method for the Test Set:
This information is not provided.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not reported. The submission focuses on the functionality and safety of the imaging software itself and its equivalence to other legally marketed imaging software, not on an AI's impact on human reader performance.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Given that VisionX 2.4 is described as "imaging software" for "viewing and diagnosis of image data" and includes "Filter optimization of the display of 2D and 3D images for improved diagnosis" and "supporting diagnosis," it functions as a tool for clinicians. The text explicitly states that "The software must only be used by authorized healthcare professionals in dental areas for the following tasks." This indicates a human-in-the-loop scenario. The document does not describe a standalone algorithm performance study in the way typically expected for an AI diagnostic tool.
7. The Type of Ground Truth Used:
No ground truth, in the sense traditionally used for evaluating AI performance, is explicitly stated. The nearest concept mentioned is that the "Clinical Evaluation" concluded the software is suitable for dental use, based on "literature data, data from practical tests in dental practices, and safety data." This points more toward usability and safety in a clinical context than toward a specific ground truth for an automated diagnostic task.
8. The Sample Size for the Training Set:
This information is not provided. As this is image management and display software, not a deep learning AI model requiring a distinct training set for diagnostic capabilities, such data is not expected or presented.
9. How the Ground Truth for the Training Set was Established:
This information is not provided, as it's not a submission for a deep learning AI model with a training set requiring ground truth establishment in the typical sense.
Summary of what is present and relevant to the request:
The submission focuses on the functionality and software development process of VisionX 2.4, an imaging management and display software for dental use, seeking substantial equivalence to a predicate device (K190629). It highlights:
- Compliance with IEC 62304 and FDA guidance for software in medical devices.
- Successful software testing for effectiveness and functionality.
- DICOM compliance (see the sketch after this list).
- Risk analysis, design reviews, and full functional cross-check testing.
- A "Clinical Evaluation" assessing suitability for dental use based on literature and practical tests.
The document does not provide specific quantitative acceptance criteria or detailed studies on the performance of a novel AI/algorithm in terms of diagnostic accuracy, sensitivity, or specificity against established ground truth, or its impact on human reader performance. This is consistent with the device being primarily an image management and viewing system with features that "support diagnosis" through display optimization, rather than a standalone AI diagnostic tool.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
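To make the regulation's "semi-automated measurements" concrete, the following is a minimal sketch of one such function, assuming a user-supplied seed point and threshold; all names and values are illustrative assumptions and are not part of the regulation or the cleared device:

```python
import numpy as np
from scipy import ndimage

def measure_region_area(image: np.ndarray, seed: tuple[int, int],
                        threshold: float, pixel_spacing_mm: float) -> float:
    """Semi-automated area measurement (illustrative only).

    The user picks a seed pixel and a threshold; the software keeps
    the connected component containing the seed and reports its area.
    """
    mask = image >= threshold                    # user-chosen threshold
    labels, _ = ndimage.label(mask)              # connected components
    region = labels == labels[seed]              # component under the seed
    return region.sum() * pixel_spacing_mm ** 2  # pixel count -> mm^2

# Example on a synthetic image with a bright square region.
img = np.zeros((100, 100))
img[40:60, 40:60] = 1.0
area = measure_region_area(img, seed=(50, 50), threshold=0.5,
                           pixel_spacing_mm=0.1)
print(f"Region area: {area:.1f} mm^2")  # 400 px * 0.01 mm^2 = 4.0 mm^2
```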