510(k) Data Aggregation (165 days)
Clarus Viewer
Clarus Viewer® Version 1.0 is a software solution intended to be used for viewing, manipulation, storage, 3D-visualization, and comparison of medical images from multiple imaging modalities and/or multiple timepoints. The application supports images and anatomical datasets, limited to CT and MR.
Clarus Viewer® supports the interpretation of examinations and follow-up documentation of findings within healthcare institutions, for example, in Radiology and other Medical Imaging environments. It is intended to provide image and related information that is interpreted by a trained professional but does not directly generate any diagnosis or potential findings.
Note: The medical professional retains the ultimate responsibility for making the pertinent diagnosis based on their standard practices. Clarus Viewer® is a complement to these standard procedures. Clarus Viewer® is not intended for displaying digital mammography images for diagnosis.
Clarus Viewer® is a stand-alone software package that imports medical data in the Digital Imaging and Communications in Medicine (DICOM) standard, stored on local or remote PACS sources. Clarus Viewer® is intended to allow users to visualize and manipulate 2D and 3D images of CT and MRI datasets, as well as 3D volumetric models. Clarus Viewer® can present the 2D and 3D images in either a desktop mode or in Virtual Reality.
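For readers unfamiliar with the DICOM workflow described above, the following is a minimal sketch (in Python, using the open-source pydicom and NumPy libraries) of how a CT series is typically read from local storage and stacked into a 3D volume. It illustrates the general approach only and is not Clarus Viewer's actual implementation; the directory layout, function name, and tag handling are assumptions, and PACS retrieval (e.g., C-FIND/C-MOVE) and compressed transfer syntaxes are omitted.

```python
# Illustrative sketch: load a CT series from a local folder of .dcm files
# and stack it into a (z, y, x) NumPy volume in Hounsfield units.
from pathlib import Path

import numpy as np
import pydicom


def load_ct_volume(series_dir: str) -> np.ndarray:
    """Read every DICOM slice in a directory and return a spatially ordered volume."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Sort by physical slice position so adjacent array slices are adjacent in space.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    # Convert stored pixel values to Hounsfield units using the CT rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept


# Example usage (hypothetical path):
# volume = load_ct_volume("/data/ct_series")
```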
Within Clarus Viewer®, users can strip away layers of bone and tissue, revealing the relevant images for evaluation. Users can view and evaluate the 3D model from any angle. In the same way a doctor may hold and rotate a physical anatomical model in the real world, within Clarus Viewer® the image can be rotated and examined, or sliced away or apart to examine interior structures, tissues, and fluids.
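The "strip away layers of bone and tissue" capability described above is commonly implemented in volume viewers as intensity thresholding on CT Hounsfield units. The sketch below shows that general technique only, assuming a volume already expressed in HU (as produced by the loader above); the threshold value, function name, and background handling are assumptions, and the document does not describe Clarus Viewer's actual rendering pipeline.

```python
import numpy as np


def mask_out_bone(volume_hu: np.ndarray, bone_threshold: float = 300.0) -> np.ndarray:
    """Return a copy of the volume with voxels at or above the bone threshold suppressed.

    Cortical bone typically measures above roughly 300-400 HU, so pushing those
    voxels to the background value (or making them transparent in the renderer)
    reveals the soft tissue underneath.
    """
    stripped = volume_hu.copy()
    stripped[volume_hu >= bone_threshold] = np.min(volume_hu)
    return stripped
```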
The Clarus Viewer® system is intended to be used as a supplemental viewer by trained medical professionals. It allows the user to view images, models, and related medical information that can then be interpreted by a trained professional. Clarus Viewer® does not directly generate any diagnosis. The medical professional retains the ultimate responsibility for making the diagnosis.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics, as a typical validation report would. Instead, it lists the types of studies conducted to support substantial equivalence. The overall "acceptance criteria" can be inferred as demonstrating that the Clarus Viewer® performs as safely and effectively as the predicate device (ImmersiveTouch, K210726), and that any differences do not raise new questions of safety or effectiveness.
Therefore, a table of stated acceptance criteria and device performance cannot be directly extracted from the provided text. The document refers to "Performance Data" which includes various tests, implying successful completion of these tests serves as evidence of meeting unstated criteria for substantial equivalence.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify the sample size for any test set (e.g., number of medical images or patient cases). It also does not explicitly state the provenance of the data (e.g., country of origin, retrospective or prospective) for any of the studies mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide information on the number of experts used to establish ground truth or their specific qualifications for any of the performance tests. It states that the device is intended to provide image and related information that is "interpreted by a trained professional," implying that expert interpretation is involved in the clinical context, but not specifically for ground truth establishment in a test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not mention any adjudication method for establishing ground truth or resolving discrepancies in expert interpretations during testing.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not indicate that a multi-reader multi-case (MRMC) comparative effectiveness study was conducted, nor does it mention any AI assistance or effect sizes related to human reader improvement. The Clarus Viewer® is described as a "supplemental viewer" and states it "does not directly generate any diagnosis or potential findings," suggesting its role is primarily for visualization and manipulation, not AI-driven interpretation or diagnosis.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The Clarus Viewer® is a software solution for viewing, manipulating, and visualizing medical images. It does not generate diagnoses or findings independently. Therefore, a standalone (algorithm only) performance study in the context of generating diagnostic output would not be applicable, and the document does not suggest such a study was performed. The device's performance is tied to its capabilities for visualization and manipulation to aid trained professionals.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used for any of its performance evaluations. Given the device's function as a visualization and manipulation tool, the "Clinical Validation" and "Clinical Evaluation" likely involve assessing the accuracy and utility of the rendered 3D models and image aspects against observed anatomical structures or clinical findings, but the specific method of ground truth establishment is not detailed.
8. The sample size for the training set
The document does not provide information on the sample size for a training set. This is consistent with the device being described as a "viewer" and "image management and processing system" rather than an AI/ML diagnostic algorithm that typically requires extensive training data. The "Clarus data" mentioned in the software testing section likely refers to data processed by the viewer, not a training set for an algorithm.
9. How the ground truth for the training set was established
Since the document does not mention a training set, it does not describe how ground truth for such a set was established.