VisionX 3.0
The software is intended for the viewing and diagnosis of image data in relation to dental issues. Its proper use is documented in the operating instructions of the corresponding image-generating systems. Image-generating systems that can be used with the software include optical video cameras, image plate scanners, extraoral X-ray devices, intraoral scanners and TWAIN compatible image sources.
The software must only be used by authorized healthcare professionals in dental areas for the following tasks:
- Filter optimisation of the display of 2D and 3D images for improved diagnosis
- Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitised 2D and 3D images and videos
- Forwarding of images and additional data to external software (third-party software)
The software is not intended for mammography use.
VisionX 3.0 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 3.0 software runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition.
The VisionX 3.0 device includes new AI-powered functions such as automatic nerve canal tracing, automatic image rotation, and improved panoramic curve detection. The 510(k) summary provided does not contain a specific study demonstrating that the device meets acceptance criteria for these AI functions. Instead, it states that "Full functional software cross check testing was performed" and that "The verification testing demonstrates that the device continues to meet its performance specifications and the results of the testing did not raise new issues of safety or effectiveness." This implies that the internal performance specifications were met, but the specifications themselves and the testing methodology are not detailed in the summary.
Here's the information that can be extracted from the provided text, along with details that are explicitly stated as not available or not applicable based on the given document:
1. Table of Acceptance Criteria and Reported Device Performance
| Feature/Function | Acceptance Criteria (Implied/General) | Reported Device Performance (per 510(k) summary) |
|---|---|---|
| General software functionality and effectiveness | No specific quantitative acceptance criteria are provided in the document. Implied: the device meets its performance specifications. | "Full functional software cross check testing was performed." "The verification testing demonstrates that the device continues to meet its performance specifications." |
| AI functions (automatic nerve canal calculation, automatic image rotation, in-line automatic image plate quality checks, improved panoramic curve detection) | No specific quantitative acceptance criteria (e.g., accuracy, sensitivity, specificity) for these AI functions are provided in the document. Implied: the functions operate as intended. | The new functions were added and underwent "full functional software cross check testing." The modifications did not raise new issues of safety or effectiveness. |
| Cybersecurity | Compliance with the FDA guidance "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices." | "Cybersecurity was addressed according to the FDA guidance document." |
| DICOM compliance | Compliance with the DICOM standard. | "VisionX 3.0 is DICOM compliant." |
| Software life cycle requirements | Compliance with IEC 62304. | "VisionX 3.0 was developed in compliance with the harmonized standard of IEC 62304." |
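The DICOM compliance row refers to conformance with the DICOM standard for storing and exchanging medical images. As a minimal illustration only (not material from the submission), a DICOM Part 10 file begins with a 128-byte preamble followed by the ASCII marker "DICM"; the sketch below checks for that signature:

```python
from pathlib import Path

def is_dicom_part10(path: str) -> bool:
    """Check for the DICOM Part 10 file signature: a 128-byte
    preamble followed by the ASCII marker b"DICM" (DICOM PS3.10)."""
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Illustrative only: write a stub file carrying a valid preamble.
stub = Path("example.dcm")
stub.write_bytes(b"\x00" * 128 + b"DICM")
print(is_dicom_part10("example.dcm"))  # True
```

This checks only the file signature, not full conformance; a real conformance claim covers transfer syntaxes, information object definitions, and service classes documented in a vendor's DICOM conformance statement.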
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not specified in the provided document.
- Data Provenance: Not specified in the provided document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not specified or implied in the provided document.
- Qualifications of Experts: Not specified or implied in the provided document.
4. Adjudication method for the test set
- Adjudication Method: Not specified or implied in the provided document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No, an MRMC comparative effectiveness study is not mentioned in the provided document.
- Effect Size: Not applicable, as no MRMC study was mentioned.
6. If a standalone study (i.e., algorithm-only performance without human-in-the-loop) was done
- The document implies that the AI features (e.g., automatic nerve canal tracing) operate in a standalone capacity within the software, as they are listed as "new functions." However, no specific standalone performance metrics (e.g., accuracy, sensitivity, specificity of the algorithm alone) are provided in the summary. The "full functional software cross check testing" suggests validation of the software's operation, but without specific performance data for the AI components in isolation.
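The summary names accuracy, sensitivity, and specificity as the kind of standalone metrics that are absent. For reference, these are derived from a binary confusion matrix as sketched below; all counts are hypothetical and do not come from the submission:

```python
def standalone_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard standalone performance metrics computed from a
    binary confusion matrix (tp/fp/fn/tn counts)."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,        # fraction of all cases correct
        "sensitivity": tp / (tp + fn),        # true positive rate
        "specificity": tn / (tn + fp),        # true negative rate
    }

# Hypothetical counts, for illustration only.
print(standalone_metrics(tp=90, fp=5, fn=10, tn=95))
# {'accuracy': 0.925, 'sensitivity': 0.9, 'specificity': 0.95}
```

A standalone evaluation of the AI functions would report metrics like these against an independent test set with a defined ground truth, none of which appears in this 510(k) summary.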
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not specified in the provided document.
8. The sample size for the training set
- Sample Size for Training Set: Not specified in the provided document.
9. How the ground truth for the training set was established
- Ground Truth Establishment for Training Set: Not specified in the provided document.