K Number
K081215
Device Name
VOLUMINA
Manufacturer
Date Cleared
2008-08-26

(118 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Predicate For
N/A
Intended Use

Volumina enables the display of 3D (MIP/MPR) visualization of CT, PET, and MR studies or other DICOM compliant images. Typical users are radiologists, technologists and clinicians. Not for mammographic purposes.

Device Description

Volumina is a Class II software application comprised of a client module and a server module. Volumina is intended to provide a diagnostic quality image to a qualified health care professional to visualize multimodal medical image data.
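As background on the MIP rendering named in the intended use (this sketch is illustrative and not taken from the submission), a maximum intensity projection collapses a 3D volume along a viewing axis by keeping the brightest voxel along each ray, which is why it is popular for displaying contrast-filled vessels and bone:

```python
def mip(slices):
    """Maximum intensity projection along the slice (z) axis:
    for each (row, col) position keep the brightest voxel
    across all slices of the volume."""
    rows, cols = len(slices[0]), len(slices[0][0])
    return [[max(s[r][c] for s in slices) for c in range(cols)]
            for r in range(rows)]

# Toy 3-slice "volume": the projection picks the per-pixel maximum.
vol = [
    [[10, 20], [30, 40]],
    [[50,  5], [ 0, 60]],
    [[ 1,  2], [ 3,  4]],
]
print(mip(vol))  # [[50, 20], [30, 60]]
```

A production system would of course operate on DICOM pixel data and support arbitrary view axes; the point here is only the per-ray maximum that defines MIP.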

AI/ML Overview

The provided document is a 510(k) summary for the Volumina device. It details the device's purpose, predicate devices, and the general testing methodology. However, it does not contain specific acceptance criteria or a detailed study proving the device meets them in the format requested.

The document states:

  • "Verification and Validation was conducted according to written protocols and the test outcomes were documented with test reports including pass/fail determination."
  • "Validation was undertaken to demonstrate that the Volumina Client and Server Modules together consistently fulfill the requirements within the intended use, [operate] as intended under actual operating conditions by accepting the required parameters as input and by returning the expected output, and that the user interface provides a display that is consistent with the data that has been given."
  • "A tabulation of Test Procedures, expected Results and Outcomes [is] included in Appendix 16-C Traceability Matrix"

This implies that such information exists in the full 510(k) submission, likely in "Appendix 16-C Traceability Matrix" and "Section 16. Software, Item 7. Verification & Validation Testing," but it is not present in the provided excerpt.

Therefore, the requested table cannot be populated, and most of the specific questions cannot be answered, from this excerpt alone.

Here's what can be inferred or stated based on the provided text:

  • Acceptance Criteria & Reported Performance: Not explicitly stated in the provided text. The document refers to "written protocols" and "pass/fail determination" in a "Traceability Matrix" (Appendix 16-C) which is not included.
  • Study Details: The document mentions "Verification and Validation" testing, but it does not describe a clinical study or a comparative performance study in the context of diagnostic accuracy, which is typically what acceptance criteria and studies are designed to demonstrate for AI/medical imaging devices. Volumina appears to be a 3D visualization and image processing system, not an AI-driven diagnostic algorithm.

Information that can be extracted or inferred:

  1. Table of Acceptance Criteria and Reported Device Performance:

    | Acceptance Criterion | Reported Device Performance |
    | --- | --- |
    | Not provided in the document | Not provided in the document |
    | (e.g., Image Fidelity Accuracy) | (e.g., Displayed images match source data within X% tolerance) |
    | (e.g., Processing Time) | (e.g., 3D reconstruction within Y seconds for Z volume size) |
    | (e.g., Software Functionality) | (e.g., All specified features operate as intended) |
  2. Sample size used for the test set and the data provenance:

    • Sample Size: Not specified.
    • Data Provenance: "The test data was retrieved from a PACS database containing actual patient Raw Data. Some synthetic data of known dimensions and values was also used for verification and validation." The country of origin is not specified, although the submission seeks FDA clearance for the US market. Given that the device is a visualization tool rather than a diagnostic algorithm, the specific patient data provenance (country, retrospective/prospective) is likely less critical than it would be for an AI diagnostic device.
  3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • This information is not provided. Given the device is for image display and visualization (MIP/MPR), the "ground truth" might pertain more to technical accuracy of the display and processing rather than diagnostic accuracy established by expert consensus on specific diseases.
  4. Adjudication method for the test set:

    • Not provided.
  5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No, a MRMC comparative effectiveness study is not mentioned. The device's primary function is 3D visualization, not AI-assisted diagnosis or enhancement of human reader performance.
  6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:

    • The document describes "Validation was undertaken to demonstrate that the Volumina Client and Server Modules together consistently fulfill the requirements within the intended use, [operate] as intended under actual operating conditions by accepting the required parameters as input and by returning the expected output, and that the user interface provides a display that is consistent with the data that has been given." This suggests a standalone functional verification focused on the software's ability to process and display images accurately. It is not "algorithm-only" performance in the sense of a standalone diagnostic algorithm.
  7. The type of ground truth used:

    • For the "actual patient Raw Data," the "ground truth" would likely be the inherent data integrity and accuracy as captured by the original imaging modalities (CT, PET, MR). For "synthetic data," the ground truth would be the known dimensions and values of the synthetic data itself, allowing for direct comparison to the device's output.
  8. The sample size for the training set:

    • Not applicable/Not provided. The document does not describe an AI/machine learning component that would require a "training set." This device is a software application for image processing and display.
  9. How the ground truth for the training set was established:

    • Not applicable. There is no mention of a training set as it's not an AI/ML device.
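The synthetic-data verification described in item 7 can be illustrated with a short sketch (the phantom values and the `mpr_slice` function below are hypothetical, not from the submission): build a volume whose voxel values are known analytically, run it through a processing step, and check the output against the expected result exactly.

```python
def mpr_slice(volume, z):
    """Hypothetical multiplanar-reformat step: extract the axial
    plane at depth z from a volume stored as nested lists."""
    return volume[z]

# Synthetic phantom of known dimensions and values:
# voxel at (z, y, x) holds 100*z + 10*y + x, so every output
# plane can be predicted analytically.
phantom = [[[100*z + 10*y + x for x in range(3)]
            for y in range(3)]
           for z in range(4)]

# Verification: the reformatted plane must equal the known values.
expected = [[200 + 10*y + x for x in range(3)] for y in range(3)]
assert mpr_slice(phantom, 2) == expected
print("verification pass")
```

Because the ground truth here is the construction rule of the phantom itself, pass/fail determination is a direct equality check, which matches the document's description of synthetic data "of known dimensions and values".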

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).