510(k) Data Aggregation (72 days)
ADVANCED DIAGNOSTIC VIEWER (ADV)
ADV is intended to create 3D images of the anatomy from a set of CT or MRI images.
ADV is software used for 3D image processing of CT and MR diagnostic images.
This document describes a 510(k) summary for the ADV software, a device intended for 3D image processing of CT and MR diagnostic images.
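For orientation, 3D image creation from a CT or MR series of this kind is conventionally done by stacking the 2D slices into a volume and then extracting a renderable surface (or rendering the volume directly). The sketch below illustrates the stacking-plus-isosurface approach; the library choices (NumPy, scikit-image), function names, and the bone-level threshold are illustrative assumptions, since the 510(k) summary does not disclose ADV's actual algorithms.

```python
# Minimal sketch of slice-stack 3D reconstruction. Assumptions: slices are
# already loaded as equally sized 2D arrays; threshold and libraries are
# illustrative, not taken from the 510(k) summary.
import numpy as np
from skimage import measure

def build_volume(slices):
    """Stack per-slice 2D arrays (each H x W) into one 3D volume."""
    return np.stack(slices, axis=0)  # shape: (num_slices, H, W)

def extract_surface(volume, level=300.0):
    """Extract an isosurface mesh (e.g., bone at roughly 300 HU for CT)
    with marching cubes; verts/faces can be fed to any mesh renderer."""
    verts, faces, normals, values = measure.marching_cubes(volume, level=level)
    return verts, faces
```

The resulting vertex and face arrays can be passed to any standard mesh viewer to produce the kind of 3D anatomical image the summary describes.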
Based on the provided information, the following can be extracted:
- A table of acceptance criteria and the reported device performance
No specific quantitative acceptance criteria are provided in the document. The general claim is that "ADV is substantially equivalent to the predicate device in its ability to render accurate 3D images for use in medical diagnosis." The reported performance is that "The ADV software renders a 3D image which exhibits great faithfulness to the original and does so with great speed." This is a qualitative statement, not a measurable performance metric.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not specified | Renders a 3D image exhibiting "great faithfulness to the original" with "great speed"; substantially equivalent to the predicate device (VoxelView® 2.5) in its ability to render accurate 3D images for medical diagnosis. |
- Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document explicitly states: "Clinical data have not been submitted as part of this premarket notification." Therefore, there is no mention of a test set, sample size, or data provenance from clinical data. The non-clinical tests likely involved internal testing of the software's functionality and accuracy, but details are not provided.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., a radiologist with 10 years of experience)
No clinical data were submitted, so there is no mention of experts establishing ground truth for a test set.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set
No clinical data were submitted, so there is no mention of an adjudication method for a test set.
- Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it
No clinical data were submitted, and the device is described as software for 3D image processing rather than an AI-assisted diagnostic tool. Therefore, an MRMC comparative effectiveness study was not performed or submitted.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document implies that non-clinical tests were performed on the software itself to determine its "faithfulness to the original" and its speed, which would constitute standalone performance relative to its stated function. However, no specific details or results of these standalone tests are provided beyond the qualitative statement of performance.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the non-clinical tests, the "ground truth" for evaluating the faithfulness of the 3D rendered image would likely be the original 2D CT or MR images themselves, and potentially the mathematical or graphical accuracy of the algorithms used for 3D reconstruction; a minimal sketch of such a comparison appears after this list. No external clinical "ground truth" (such as pathology or outcomes) is mentioned, as no clinical data were submitted.
- The sample size for the training set
The document does not describe the device as an AI/machine learning model that requires a training set in the conventional sense; it is described as "software used for 3D image processing." Therefore, no training set or its sample size is mentioned.
- How the ground truth for the training set was established
As the device is not described as an AI/machine learning model requiring a training set, this question is not applicable based on the provided text.
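Because the summary's "faithfulness to the original" claim is qualitative, one way a standalone non-clinical test could be made quantitative is to score the reconstructed volume against the source images. The sketch below is a minimal, hypothetical example assuming NumPy/SciPy: it measures the error introduced by a resampling round trip, a stand-in for the interpolation a 3D rendering pipeline performs. Nothing here reflects ADV's actual test protocol, which the document does not describe.

```python
# Hypothetical fidelity metric for a standalone non-clinical test: compare the
# volume against itself after a down/up resampling round trip. The scale
# factor, metric (MSE), and function name are illustrative assumptions.
import numpy as np
from scipy import ndimage

def roundtrip_fidelity(volume, scale=0.5):
    """MSE after resampling the volume down and back up with linear
    interpolation, a proxy for the loss a rendering pipeline's
    resampling steps could introduce."""
    down = ndimage.zoom(volume.astype(float), scale, order=1)
    back = ndimage.zoom(down, 1.0 / scale, order=1)
    # Crop both arrays to a common shape to absorb rounding in zoom's output size.
    shape = tuple(min(a, b) for a, b in zip(volume.shape, back.shape))
    region = tuple(slice(0, s) for s in shape)
    return float(np.mean((volume.astype(float)[region] - back[region]) ** 2))
```

A lower score under a metric like this would support the "faithfulness" claim with a number rather than a qualitative statement, which is precisely what the submission lacks.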