DBSWIN and VistaEasy imaging software are intended for use by qualified dental professionals for Windows-based diagnostics. The software is a diagnostic aid for licensed radiologists, dentists, and clinicians, who perform the actual diagnosis based on their training, qualifications, and clinical experience. DBSWIN and VistaEasy are clinical software applications that receive images and data from various imaging sources (i.e., radiography devices and digital video capture devices) manufactured and distributed by Duerr Dental and Air Techniques. The software is intended to acquire, display, edit (i.e., resize, adjust contrast, etc.), and distribute images using standard PC hardware. In addition, DBSWIN enables the acquisition of still images from third-party TWAIN-compliant imaging devices (e.g., generic image devices such as scanners) and the storage and printing of clinical exam data, while VistaEasy distributes the acquired images to third-party TWAIN-compliant PACS systems for storage and printing. DBSWIN and VistaEasy software are not intended for mammography use.
DBSWIN and VistaEasy imaging software form an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. The software runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition. VistaEasy is included as part of DBSWIN and provides additional interfaces for third-party software. VistaEasy can also be used on its own, as a reduced-feature version of DBSWIN.
The provided text is a 510(k) summary for medical device software (DBSWIN and VistaEasy Imaging Software). It describes the software's purpose, its comparison to a predicate device, and the non-clinical testing performed to establish substantial equivalence.
However, this document does NOT contain information about specific acceptance criteria for performance metrics (like sensitivity, specificity, or image quality scores) or a study proving the device meets those criteria in the traditional sense of a clinical performance study with human-in-the-loop or standalone AI performance.
Instead, this 510(k) submission leverages the concept of substantial equivalence to a previously cleared predicate device (DBSWIN and VistaEasy Imaging Software K190629). The "proof" that the device meets acceptance criteria is primarily demonstrated through:
- Direct comparison of technological characteristics: Showing that the modified device has the same intended use, functionality, and performance as the predicate device.
- Verification testing: Ensuring that the software performs as intended and that the minor modifications (operating system changes, new hardware support) do not introduce new safety or effectiveness concerns. This is typically bench testing and does not involve clinical performance metrics.
- Adherence to recognized standards and guidance documents: Demonstrating that the software development and risk management processes follow established regulatory guidelines.
Therefore, many of the requested points about "acceptance criteria" for performance metrics and clinical study details (like sample size for test sets, expert consensus, MRMC studies, ground truth establishment) are not applicable or not detailed in this specific document, because the submission path is based on substantial equivalence to an existing device rather than demonstrating de novo clinical performance for a novel algorithm.
Here's a breakdown of what can be extracted and what information is missing based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
As explained above, specific quantitative performance acceptance criteria (e.g., sensitivity, specificity, AUC) are not detailed in this 510(k) summary because it's a substantial equivalence submission based on an existing device's functionality. The "performance" is implicitly deemed equivalent to the predicate.
The acceptance criteria are more related to functional equivalence and safety:
| Acceptance Criterion (Implied/Functional) | Reported Device Performance (from comparison table) |
|---|---|
| Indications for Use: Same as predicate | YES (same, unchanged) |
| Patient Management: Same as predicate | YES |
| Image Management: Same as predicate | YES |
| Acquisition Sources (X-ray, Laser Fluorescence, Video, Photos, Documents): Same as predicate | YES (all listed sources are supported) |
| Display Images: Same as predicate | YES |
| Save/Store Images: Same as predicate | YES |
| Produce Reports: Same as predicate | YES |
| Print/Export Images: Same as predicate | YES |
| Image Enhancement (Brightness, Contrast, Colorize, Crop, Rotate, Zoom, Invert, Sharpen, Measure, Over/Under Exposure, Annotate): Same as predicate | YES (all listed enhancements are supported) |
| Run on standard PC-compatible computers: Same as predicate | YES |
| Supported Devices: Same as predicate, plus new integrations | YES (supports ScanX, ProVecta S-Pan, CamX, and new SensorX) |
| Computer operating systems: Updated but compatible subset of predicate's supported OS | Microsoft Windows 8.1, 64-bit; Microsoft Windows 10, 64-bit; Microsoft Windows Server 2012; Microsoft Windows Server 2016 |
| CPU, RAM, Drive, Hard Disk, Data Backup, Interface, Diagnostic Monitor, Resolution/Graphics: Same as predicate | No change |
| Safety and Effectiveness: Minor modifications do not alter fundamental scientific technology; verification testing successfully conducted; no new safety/effectiveness issues raised | Confirmed through non-clinical testing and risk analysis update |
2. Sample size used for the test set and the data provenance
The document mentions "Bench testing, effectiveness, and functionality were successfully conducted and verified with the compatible image capture devices."
- Sample Size for Test Set: Not specified. This typically refers to the number of patient cases or images used in a clinical performance study. For a substantial equivalence claim based on functional changes to image management software, detailed test set sample sizes for performance metrics are often not provided in the public summary if the testing is primarily functional verification rather than clinical validation of an AI algorithm.
- Data Provenance: Not specified (e.g., country of origin, retrospective or prospective); not relevant given the type of submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. The submission is for image management software, not an AI diagnostic algorithm that requires expert-established ground truth for clinical performance evaluation. The "ground truth" for this type of device would be its ability to correctly acquire, display, edit, store, and distribute images as per its specifications, which is verified through functional testing.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. No clinical image-based test set requiring adjudication is described for this submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. This is not an AI diagnostic device; it is image management software. Therefore, an MRMC study is neither relevant nor described.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
Not Applicable. This is not an AI diagnostic algorithm. The software performs functions like image acquisition, display, editing, and distribution. Its "performance" is its ability to execute these functions correctly, which is evaluated through functional bench testing, not as a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not Applicable. For software acting as a PACS-like system, the "ground truth" is typically whether the software correctly implements its specified functions (e.g., does it display the image correctly? Does it store it without corruption? Can it apply the specified edits?). This is not a diagnostic ground truth established by medical experts or pathology.
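To make the distinction concrete, functional verification for this kind of software looks more like an automated bench test than a clinical ground-truth comparison. The snippet below is a minimal sketch of such checks, assuming a Python/Pillow test harness; the file names, image size, and brightness factor are illustrative assumptions and are not taken from the submission.

```python
# Illustrative functional checks for a PACS-like image manager (not from the 510(k)).
# Requires Pillow: pip install pillow
import hashlib
import tempfile
from pathlib import Path

from PIL import Image, ImageEnhance


def sha256_bytes(data: bytes) -> str:
    """Hash raw bytes so a stored copy can be compared to its source."""
    return hashlib.sha256(data).hexdigest()


def check_store_without_corruption(src: Path, dst_dir: Path) -> bool:
    """'Does it store it without corruption?' -- byte-for-byte round trip."""
    dst = dst_dir / src.name
    dst.write_bytes(src.read_bytes())
    return sha256_bytes(src.read_bytes()) == sha256_bytes(dst.read_bytes())


def check_brightness_edit(src: Path, factor: float = 1.2) -> bool:
    """'Can it apply the specified edits?' -- a brightened copy should have a
    higher mean pixel value than the original (for factor > 1)."""
    original = Image.open(src).convert("L")
    edited = ImageEnhance.Brightness(original).enhance(factor)

    def mean(im: Image.Image) -> float:
        return sum(im.getdata()) / (im.width * im.height)

    return mean(edited) > mean(original)


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        tmp_dir = Path(tmp)
        # A synthetic mid-gray test image stands in for an acquired exposure.
        sample = tmp_dir / "sample.png"
        Image.new("L", (64, 64), color=128).save(sample)

        archive = tmp_dir / "archive"
        archive.mkdir()
        assert check_store_without_corruption(sample, archive)
        assert check_brightness_edit(sample)
        print("Functional checks passed.")
```

In an actual verification protocol, checks of this kind would be traced to documented software requirements and run against the supported capture devices, as the summary's reference to bench testing suggests.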
8. The sample size for the training set
Not Applicable. This is not an AI/Machine Learning device that undergoes "training."
9. How the ground truth for the training set was established
Not Applicable. As per point 8, this is not an AI/Machine Learning device requiring a training set and its associated ground truth.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
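Although this particular summary does not describe a DICOM interface for the device, the special control above names the DICOM standard, and reviews of devices in this classification often start by inspecting the metadata a DICOM object carries. The sketch below uses the pydicom library and one of its bundled test files as a stand-in for an exported radiograph; it is an illustrative assumption, not a detail from the 510(k).

```python
# Illustrative only: reading basic metadata from a DICOM object with pydicom.
# Requires pydicom: pip install pydicom
from pydicom import dcmread
from pydicom.data import get_testdata_file

# A bundled pydicom test file stands in for an exported image.
path = get_testdata_file("CT_small.dcm")
ds = dcmread(path)

# A few attributes one might spot-check against the DICOM standard.
print("SOP Class UID:   ", ds.SOPClassUID)
print("Modality:        ", ds.Modality)
print("Rows x Columns:  ", ds.Rows, "x", ds.Columns)
print("Bits Stored:     ", ds.BitsStored)
print("Transfer Syntax: ", ds.file_meta.TransferSyntaxUID)
```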