Viewer is a software device for the display of medical images and other healthcare data. It includes functions for image review, image manipulation, basic measurements, and 3D visualization (multiplanar reconstructions and 3D volume rendering). It is not intended for primary image diagnosis or the review of mammographic images.
Viewer is software for viewing DICOM data. The device provides basic measurement functionality for distances and angles.
These are the operating principles:
- On desktop PCs, interaction with the software is performed mainly with mouse and/or keyboard.
- On touch-screen PCs and mobile devices, the software is used mainly through a touch-screen interface.
- On Mixed Reality glasses, interaction is performed with a dedicated pointing device.
The subject device provides or integrates the following frequently used functions:
- Select medical images and other healthcare data to be displayed
- Select views (e.g. axial, coronal & sagittal reconstruction views and 3D volume rendering views)
- Change view layout (e.g. maximize/minimize views, close/open/reorder views)
- Manipulate views (e.g. scroll, zoom, pan, change windowing)
- Perform measurements (e.g. distance or angle measurements)
- Place annotations at points of interest
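The "change windowing" manipulation listed above can be sketched as a simplified linear window center/width transform. This is an illustrative approximation, not Brainlab's implementation; the function name is hypothetical, and the exact DICOM VOI LUT linear function includes additional half-unit offsets omitted here for clarity.

```python
import numpy as np

def apply_windowing(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw pixel values to 8-bit display values with a simplified
    linear window (center/width), in the spirit of the DICOM VOI LUT.
    Hypothetical helper for illustration only."""
    lower = center - width / 2.0
    upper = center + width / 2.0
    clipped = np.clip(pixels, lower, upper)
    return ((clipped - lower) / (upper - lower) * 255.0).astype(np.uint8)

# Example: a typical soft-tissue CT window (center 40 HU, width 400 HU)
ct_values = np.array([-1000.0, 40.0, 1000.0])  # air, soft tissue, dense bone
display = apply_windowing(ct_values, center=40.0, width=400.0)
# Values below the window clamp to 0, values above clamp to 255.
```

Narrowing the width increases displayed contrast within the window; shifting the center selects which tissue range occupies the mid-gray levels.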
The provided document is a 510(k) summary for the "Viewer" device from Brainlab AG. It describes the device, its intended use, and its comparison to a predicate device and a reference device to demonstrate substantial equivalence. However, it does not contain the detailed information required to fill out the table of acceptance criteria and the study that proves the device meets those criteria, specifically regarding device performance metrics (e.g., sensitivity, specificity, accuracy), sample sizes, ground truth establishment, or multi-reader multi-case studies for AI components.
The document primarily focuses on verifying the software's functionality, user interface, DICOM compatibility, and integration, rather than clinical performance metrics of an AI algorithm. The device is a "Picture Archiving And Communications System" (PACS) that displays medical images and other healthcare data and is not intended for primary image diagnosis. This indicates that the regulatory requirements for performance metrics such as sensitivity and specificity, which are common for AI algorithms involved in diagnosis, would not apply to this specific device.
Therefore, most of the information requested in your prompt cannot be extracted from this document because the device described is not an AI diagnostic algorithm, and the provided text focuses on software functionality verification rather than clinical performance studies.
Here's what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria Category | Test Method Summary | Reported Device Performance |
| --- | --- | --- |
| User interface | Interactive testing of the user interface | All tests passed |
| DICOM compatibility | Interactive testing with companywide test data, which are identical for consecutive versions of the software | All tests passed |
| Views | Interactive testing of the user interface | All tests passed |
| Unit tests / automated tests | Automated or semi-automated Cucumber tests or unit tests are written at the applicable level for new Viewer functionality relative to previous versions; existing tests must pass | All tests passed |
| Integration testing | Interactive testing on various platforms and in combination with other products following test protocols, combined with exploratory testing; the software is developed with daily builds, which are exploratively tested | All tests passed |
| Usability | Usability tests (ensuring the user interface can be used safely and effectively) | All tests passed |
| Communication & cybersecurity | Verification of communication and cybersecurity between the Viewer and Magic Leap Mixed Reality glasses | Successfully passed |
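For the unit/automated tests row above, the "ground truth" is simply the expected behavior of each feature. A minimal sketch of such a check for the device's distance and angle measurements (function names and tolerances are hypothetical, not taken from the submission) might look like:

```python
import math

def distance_mm(p, q):
    """Euclidean distance between two 3D points in patient coordinates (mm)."""
    return math.dist(p, q)

def angle_deg(a, vertex, b):
    """Angle at `vertex` formed by rays to points a and b, in degrees."""
    v1 = [a[i] - vertex[i] for i in range(3)]
    v2 = [b[i] - vertex[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Expected behavior serves as the test oracle: known geometry, known answer.
assert distance_mm((0, 0, 0), (3, 4, 0)) == 5.0        # 3-4-5 triangle
assert abs(angle_deg((1, 0, 0), (0, 0, 0), (0, 1, 0)) - 90.0) < 1e-9
```

Tests of this kind verify software correctness against geometric facts rather than clinical ground truth, consistent with the document's point that diagnostic-accuracy metrics do not apply.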
Missing Information/Not Applicable: The document does not provide acceptance criteria or performance metrics related to diagnostic accuracy (e.g., sensitivity, specificity, AUC) because the device is explicitly stated as not intended for primary image diagnosis.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified. The verification tests mention "companywide test data" and "various platforms and combination with other products" but do not provide specific numbers of cases or images.
- Data Provenance: Not specified. The document mentions "companywide test data" but does not detail the country of origin or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable/Not specified. Since the device is not for primary diagnosis and the tests focus on software functionality, there is no mention of experts establishing ground truth for diagnostic purposes. The "ground truth" for the software functionality tests would be the expected behavior of the software.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not specified. The testing methods described are interactive testing, automated/semi-automated tests, and usability tests. There is no mention of an adjudication method typical for diagnostic performance studies.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Not applicable. The document does not describe an AI algorithm intended to assist human readers in diagnosis. It's a DICOM viewer. Therefore, an MRMC study comparing human readers with and without AI assistance was not performed or reported.
6. Whether a standalone study (i.e. algorithm-only performance, without a human in the loop) was done
- Not applicable. This device is a viewer, not a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The concept of "ground truth" in the context of diagnostic accuracy (e.g., pathology, expert consensus) does not apply here as the device is not for primary diagnosis. For its stated functions, the "ground truth" would be the expected, correct functioning of the software features (e.g., correct display of DICOM data, accurate measurements of distance/angle).
8. The sample size for the training set
- Not applicable/Not specified. The device is a viewer, not an AI model that undergoes a "training" phase with a dataset.
9. How the ground truth for the training set was established
- Not applicable. (See point 8).
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).