DBSWIN and VistaEasy imaging software are intended for use by qualified dental professionals for Windows-based diagnostics. The software is a diagnostic aid for licensed radiologists, dentists, and clinicians, who perform the actual diagnosis based on their training, qualification, and clinical experience. DBSWIN and VistaEasy are clinical software applications that receive images and data from various imaging sources (i.e., radiography devices and digital video capture devices) manufactured and distributed by Duerr Dental and Air Techniques. The software is intended to acquire, display, edit (i.e., resize, adjust contrast, etc.), and distribute images using standard PC hardware. In addition, DBSWIN enables the acquisition of still images from 3rd-party TWAIN-compliant imaging devices (e.g., generic image devices such as scanners) and the storage and printing of clinical exam data, while VistaEasy distributes to 3rd-party TWAIN-compliant PACS systems for storage and printing.
DBSWIN and VistaEasy software are not intended for mammography use.
DBSWIN and VistaEasy imaging software form an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. The software runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition.
VistaEasy is included as part of DBSWIN and provides additional interfaces for third-party software. VistaEasy can also be used on its own as a defeatured version of DBSWIN.
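The editing functions named in the intended use above (resize, contrast adjustment) are generic raster-image operations. As an illustration only, and not DBSWIN/VistaEasy code, the following minimal Python sketch performs comparable operations with the Pillow library; the file name and adjustment factor are hypothetical.

```python
# Illustrative sketch only: generic resize and contrast adjustment of a
# radiograph image, of the kind described in the intended use. Not the
# clinical software's implementation; file names and factors are hypothetical.
from PIL import Image, ImageEnhance

# Load a previously acquired radiograph (hypothetical file name).
image = Image.open("intraoral_radiograph.png")

# Resize to half the original dimensions for on-screen review.
resized = image.resize((image.width // 2, image.height // 2))

# Increase contrast by 50% to make subtle structures easier to see.
enhanced = ImageEnhance.Contrast(resized).enhance(1.5)

# Save the edited copy to a new file so the original acquisition is untouched.
enhanced.save("intraoral_radiograph_edited.png")
```

Saving to a separate file in the sketch simply reflects the general expectation that edited views should not overwrite the originally acquired image.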
The provided text describes a 510(k) premarket notification for "DBSWIN and VistaEasy Imaging Software." The submission is a Special 510(k) Summary covering minor modifications to an already cleared predicate device (K143290), so the document focuses on demonstrating that the new version is substantially equivalent to the previous one and relies primarily on non-clinical testing, rather than extensive new clinical studies, to support acceptance.
Based on the provided text, a detailed breakdown of acceptance criteria, and of a study proving the device meets them, is difficult to construct: the document asserts substantial equivalence through comparison to a predicate device and relies on generalized non-clinical testing rather than specific new performance metrics for the modified device.
Here's an attempt to extract the requested information based on the available text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state numerical acceptance criteria like sensitivity, specificity, or accuracy for diagnostic performance. Instead, the "acceptance criteria" are implied to be the continued equivalent functionality and safety to the predicate device and compliance with relevant standards. The "reported device performance" is a confirmation that these functionalities are maintained.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Compliance with medical device software life cycle requirements (IEC 62304) | Developed in compliance with IEC 62304. |
| Maintains the intended use and functionality of the predicate | "Continues to meet its performance specifications." |
| Hardware compatibility interfaces (especially with 3rd party software) | "Same intended use, functionality, and hardware compatibility interfaces with 3rd party software." |
| Effective and functional with image capture devices | "Bench testing, effectiveness, and functionality were successfully conducted and verified." |
| DICOM compliance (see the sketch after this table) | DBSWIN is DICOM compliant. |
| No new issues of safety or effectiveness | "The results of the testing did not raise new issues of safety or effectiveness." |
| Meets minimum system requirements | Hardware requirements table provided for various OS, CPU, RAM, etc. |
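The table notes that DBSWIN is DICOM compliant. As an illustration only, and not part of the submission, the following sketch reads back an exported DICOM object with the open-source pydicom library and prints a few standard attributes; the file name is hypothetical.

```python
# Illustrative sketch only: a minimal read-back spot check of an exported
# DICOM object using pydicom. Not part of the 510(k) testing; the file
# name is hypothetical.
from pydicom import dcmread

# Read a DICOM object exported by the imaging software.
ds = dcmread("exported_image.dcm")

# Confirm that standard identifying attributes are present and readable.
print("SOP Class UID:", ds.SOPClassUID)
print("Modality:", ds.Modality)
print("Rows x Columns:", ds.Rows, "x", ds.Columns)
```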
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify a sample size for a test set in the context of clinical performance evaluation. The testing described is primarily non-clinical: "Bench testing," "Full functional software cross check testing." There is no mention of data provenance (country of origin, retrospective/prospective) because clinical data are not discussed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
No information is provided regarding experts, ground truth establishment, or their qualifications because the document does not describe a clinical study requiring such a test set. The software is described as a "diagnostic aid for licensed radiologists, dentists and clinicians, who perform the actual diagnosis based on their training, qualification, and clinical experience." This implies that the human expert is the ultimate arbiter of diagnosis, not the software.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
No information is provided regarding adjudication methods as no clinical test set requiring such expert review is described.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No MRMC study was mentioned or implied. The device is imaging software, not explicitly an AI-assisted diagnostic tool in the sense of providing automated interpretations or significant decision support that would require such a study. It's a tool for acquiring, displaying, and editing images.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
A "standalone" performance study for an algorithm in a diagnostic sense was not done. The software's performance is gauged through non-clinical functional testing and its ability to process images. Its role is as a "diagnostic aide" to a human professional.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
No specific type of "ground truth" (e.g., pathology, expert consensus) is mentioned because the document does not describe the evaluation of a diagnostic algorithm against such a truth. The testing focuses on functional verification and compliance with standards.
8. The sample size for the training set
No information about a training set is provided. This type of submission (Special 510(k)) does not typically involve the training of new algorithms but rather the verification of modified software versions against established functionalities of previously cleared devices.
9. How the ground truth for the training set was established
Not applicable, as no training set or specific diagnostic algorithm requiring ground truth for training is mentioned. The document describes a software update for an image management system.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).