K Number
K150976
Device Name
Collage
Manufacturer
Date Cleared
2015-06-04 (52 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Collage is a software application intended for viewing of 3D medical image files from scanning devices, such as CT, MRI, or 3D Ultrasound, as well as 2D patient images, such as patient photographs, intraoral photographs, and dental x-rays. Images and data can be stored, communicated, processed, and displayed within the system and/or across computer networks at distributed locations. It is intended for use by doctors, clinicians, and other qualified individuals utilizing standard PC hardware. This device is not indicated for mammography use.

Device Description

Collage is interactive imaging software used for the visualization, storage, and management of 3D medical image files from scanning devices, such as CT, MRI, or 3D Ultrasound, as well as 2D patient images, such as patient photographs, intraoral photographs, and dental x-rays. Doctors, dental clinicians, and other qualified individuals can retrieve, process, render, review, store, and print images utilizing standard PC hardware. The software runs on Windows operating systems and visualizes medical imaging data on the computer screen. The Collage software is intended as a platform bridging different sets of patient data for comprehensive studies. With Collage software, doctors can manage all of their patient images, including both 2D and 3D image data, in a single application.

AI/ML Overview

Here's an analysis of the provided text regarding the acceptance criteria and study for the "Collage" device:

The provided text does not contain explicit acceptance criteria or a detailed study proving the device meets those criteria, as typically understood for AI/ML medical devices.

Instead, the document is a 510(k) premarket notification summary for a Picture Archiving and Communication System (PACS) software, "Collage." The evaluation focuses on demonstrating substantial equivalence to a predicate device (OsiriX MD, K101342) through software development quality assurance measures and bench testing.

Here's an attempt to answer your questions based on the available information, noting where information is absent:

  1. A table of acceptance criteria and the reported device performance

    The document does not specify formal, measurable "acceptance criteria" for clinical performance. The evaluation is based on demonstrating that the software functions as intended and is comparable to a predicate device.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|
    | Software is stable and operating as designed. | "Testing confirmed that the software is stable and operating as designed." |
    | Risk associated with the software is reduced to acceptable levels. | "Testing also confirmed that the software has been evaluated for hazards and that risk has been reduced to acceptable levels." |
    | Ability to render and manage 2D and 3D medical images. | "Bench testing of the software with predicate software was performed by evaluation of images rendered by Collage and predicate software. This testing and evaluation included testing of rendering both 2D and 3D images... This testing confirms that Collage is as effective as its predicate in its ability to perform its essential functions of rendering and managing medical images." |
  2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document mentions "Bench testing of the software with predicate software was performed by evaluation of images rendered by Collage and predicate software." It does not specify the sample size of images used for this bench testing, nor does it provide information on the provenance (country of origin, retrospective/prospective) of these images.

  3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    "Bench testing... was evaluated by an expert in the field of radiology." Only one expert is mentioned. Their specific qualifications (e.g., years of experience) are not detailed beyond "an expert in the field of radiology."

  4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Given that only "an expert" evaluated the bench testing results, there was no adjudication method mentioned or implied (like 2+1 or 3+1 consensus).

  5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC comparative effectiveness study was done. The device "Collage" is a PACS software for viewing and managing images, not an AI-assisted diagnostic tool. Therefore, there's no mention of how human readers improve with AI assistance.

  6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    The device is described as "a software application intended for viewing...images." "Diagnosis is not performed by this software but by doctors, clinicians and other qualified individuals." It is a tool for clinicians, not a standalone diagnostic algorithm. Therefore, no standalone (algorithm-only) performance was assessed in the context of diagnostic accuracy, as it's not its intended function. The "standalone" performance assessed was its ability to render and manage images as a PACS system.

  7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the bench testing described, the "ground truth" was essentially the visual evaluation and comparison of rendered images by a radiology expert against images rendered by the predicate device, to ensure "Collage" performs its essential functions effectively. It's not a diagnostic "ground truth" derived from pathology or outcomes, but rather a functional ground truth for image display.

  8. The sample size for the training set

    The document describes "Collage" as an imaging software, not an AI/ML model that would typically have a distinct "training set." Therefore, no sample size for a training set is mentioned. The software development followed standard quality assurance measures, but this is different from machine learning model training.

  9. How the ground truth for the training set was established

    As there is no mention of a training set for an AI/ML model, this question is not applicable based on the provided text.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).