K Number: K182230
Manufacturer: Vital Images, Inc.
Date Cleared: 2018-09-07 (21 days)
Product Code: not specified
Regulation Number: 892.2050
Panel: RA (Radiology)
Reference & Predicate Devices: not specified
Intended Use

Multi Modality Viewer is an option within Vitrea that allows the examination of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.

Device Description

Multi Modality Viewer is an option within Vitrea that allows the examination and manipulation of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.

The Multi Modality Viewer provides an overview of the study, facilitates side-by-side comparison including priors, allows reformatting of image data, enables clinicians to record evidence and return to previous evidence, and provides easy access to other Vitrea applications for further analysis.
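These viewer behaviors rest on standard DICOM metadata: series are grouped by study and series UIDs, and the modality tag (CT, MR, US, and so on) tells the viewer which series can be compared side by side. Below is a minimal sketch of that grouping step, assuming pydicom is available and a local `./anon_cases` directory of .dcm files; it is illustrative only, not Vitrea's implementation.

```python
# Minimal sketch: grouping DICOM files into studies/series the way a
# multi-modality viewer must before side-by-side display. Illustrative
# only -- this is NOT Vitrea's implementation.
from collections import defaultdict
from pathlib import Path

import pydicom  # assumed dependency


def group_series(dicom_dir: str) -> dict:
    """Map (StudyInstanceUID, SeriesInstanceUID, Modality) -> list of file paths."""
    series = defaultdict(list)
    for path in Path(dicom_dir).rglob("*.dcm"):
        # stop_before_pixels skips pixel data; only header metadata is needed here
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        key = (ds.StudyInstanceUID, ds.SeriesInstanceUID, ds.get("Modality", "OT"))
        series[key].append(path)
    return series


if __name__ == "__main__":
    for (study, series_uid, modality), files in group_series("./anon_cases").items():
        print(f"{modality}: series {series_uid} ({len(files)} images) in study {study}")
```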

AI/ML Overview

Here's a breakdown of the acceptance criteria and study information for the Multi Modality Viewer, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not present numerical acceptance criteria in the usual sense (e.g., sensitivity, specificity, or accuracy thresholds). Instead, it focuses on functional capabilities, stating that verification and validation testing confirmed the software functions according to its requirements, that "no negative feedback was received," and that "Multi Modality Viewer was rated as equal to or better than the reference devices."

The acceptance is primarily based on establishing substantial equivalence to predicate and reference devices, demonstrating that the new features function as intended and do not raise new questions of safety or effectiveness.

| Feature/Criterion | Acceptance Standard (Implied) | Reported Device Performance/Conclusion |
| --- | --- | --- |
| Overall Safety & Effectiveness | Safe and effective for its intended use; comparable to predicate and reference devices. | Clinical validations demonstrated clinical safety and effectiveness. |
| Functional Equivalence | New features operate according to defined requirements and function similarly to, or better than, features in reference devices. | Verification testing confirmed the software functions according to requirements. External validation evaluators confirmed the sufficiency of the software to read images and rated it "equal to or better than" the reference devices. |
| No Negative Feedback | No negative feedback from clinical evaluators regarding functionality or image quality of new features. | "No negative feedback received from the evaluators." |
| Substantial Equivalence | Device is substantially equivalent to predicate and reference devices regarding intended use, clinical effectiveness, and safety. | "This validation demonstrates substantial equivalence between Multi Modality Viewer and its predicate and reference devices with regards to intended use, clinical effectiveness and safety." |
| Risk Management | All risks reduced as low as possible; overall residual risk acceptable; benefits outweigh risks. | "All risks have been reduced as low as possible. The overall residual risk for the software product is deemed acceptable. The medical benefits of the device outweigh the residual risk..." |
| Software Verification (Internal) | Software fully satisfies all expected system requirements and features; all risk mitigations function properly. | "Verification testing confirmed the software functions according to its requirements and all risk mitigations are functioning properly." |
| Software Validation (Internal) | Software conforms to user needs and intended use; system requirements and features implemented properly. | "Workflow testing... provided evidence that the system requirements and features were implemented properly to conform to the intended use." |
| Cybersecurity | Follows FDA guidance for cybersecurity in medical devices, including hazard analysis, mitigations, controls, and an update plan. | Follows internal documentation based on the FDA guidance "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices." |
| Compliance with Standards | Complies with relevant voluntary consensus standards (DICOM, ISO 14971, IEC 62304). | The device "complies with the following voluntary recognized consensus standards" (DICOM, ISO 14971, and IEC 62304 are listed). |
| New Features Raise No New Safety/Effectiveness Questions | New features are similar enough to features already cleared in predicate/reference devices that they introduce no new concerns. | For each new feature (Full volume MIP, Volume image rendering, 3D Cutting Tool, Clip Plane Box, bone/base segmentation tools, 1 Click Visible Seed, Automatic table segmentation, Automatic bone segmentation, US 2D Cine Viewer, Automatic Rigid Registration), the document states that the added feature "does not raise different questions of safety and effectiveness" due to similarity with a cleared reference device. |
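Several features in this table are standard image-processing operations rather than novel algorithms. Full volume MIP (maximum intensity projection), for example, collapses a 3D volume along a viewing axis by keeping the brightest voxel on each ray. A minimal numpy sketch of the operation follows, assuming the volume is already loaded as a 3D array; it is in no way the device's actual code.

```python
import numpy as np


def full_volume_mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection: keep the brightest voxel along one axis.

    `volume` is a 3D array (e.g., a stacked CT series); the result is a 2D image.
    """
    return volume.max(axis=axis)


# Toy usage with a synthetic 64x128x128 volume standing in for a CT stack.
ct = np.random.default_rng(0).normal(size=(64, 128, 128))
mip_axial = full_volume_mip(ct, axis=0)  # project along the slice axis
print(mip_axial.shape)                   # (128, 128)
```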

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: The document repeatedly mentions "anonymized datasets" but does not specify the number of cases or images used in the external validation studies.
  • Data Provenance: The external validation studies used "anonymized datasets." The country of origin is not stated, but the evaluators came from "three different clinical locations," and since Vital Images, Inc. is based in Minnetonka, MN, USA, the data and clinical sites were most likely in the United States. The studies appear to have been retrospective, since they involved reviewing anonymized datasets rather than enrolling patients prospectively. A sketch of the de-identification step that "anonymized datasets" implies follows this list.
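The document does not describe how the datasets were anonymized. For illustration only, here is a minimal sketch of DICOM header de-identification using pydicom; real de-identification follows the DICOM PS3.15 confidentiality profiles, and the attribute list here is a deliberately small, non-exhaustive subset.

```python
# Minimal sketch of DICOM header de-identification of the sort "anonymized
# datasets" implies. Illustrative only; a real workflow follows the DICOM
# PS3.15 confidentiality profiles, not this short list.
import pydicom

PHI_KEYWORDS = [  # small, non-exhaustive subset of identifying attributes
    "PatientName", "PatientID", "PatientBirthDate",
    "OtherPatientIDs", "InstitutionName", "ReferringPhysicianName",
]


def anonymize(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for keyword in PHI_KEYWORDS:
        if keyword in ds:
            ds.data_element(keyword).value = ""  # blank rather than delete, keeps structure
    ds.remove_private_tags()  # private tags often carry identifying data
    ds.save_as(out_path)
```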

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

  • Number of Experts: Three evaluators.
  • Qualifications of Experts: The evaluators were "from three different clinical locations" and are described as "experienced professionals" in the context of simulated usability testing and clinical review. Their specific medical qualifications (e.g., radiologist, specific years of experience) are not explicitly detailed in the provided text.

4. Adjudication Method for the Test Set

The document does not describe an explicit adjudication method for establishing ground truth or resolving discrepancies between experts. The statements that "no negative feedback [was] received from the evaluators" and that "Multi Modality Viewer was rated as equal to or better than the reference devices" suggest a consensus or individual-evaluation model rather than a formal adjudication protocol such as 2+1 or 3+1. The evaluations appear to have focused on confirming functionality and subjective image quality rather than comparing against a pre-established ground truth for a diagnostic task.
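For context, "2+1" refers to a common ground-truthing protocol in which two readers label each case independently and a third reads only the disagreements. A minimal sketch of that rule, purely illustrative since the document describes no such protocol:

```python
def adjudicate_2plus1(reader_a: bool, reader_b: bool, adjudicator: bool | None) -> bool:
    """2+1 ground-truthing: agreement stands; disagreement goes to a third reader."""
    if reader_a == reader_b:
        return reader_a
    if adjudicator is None:
        raise ValueError("disagreement requires an adjudicating read")
    return adjudicator


# Example: the two primary readers disagree, so the adjudicator decides.
print(adjudicate_2plus1(True, False, adjudicator=True))  # True
```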

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

  • No. The document does not describe an MRMC comparative effectiveness study measuring human reader improvement with versus without AI assistance.
  • The "Substantial Equivalence Validation" had three evaluators compare the subject device against its predicate and reference devices, but that comparison focused on functionality and image quality and aimed to show equivalence or non-inferiority, not to quantify performance gains from AI assistance. Features such as automatic segmentation and rigid registration could assist readers, but the study was not designed as an MRMC to measure the effect of that assistance on human performance. A toy sketch of the per-reader contrast such a study would estimate appears below.

6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

  • The document describes "software verification testing" which confirms "the software functions according to its requirements." This implies a form of standalone testing for the algorithms and features. For example, "Automatic table segmentation" and "Automatic bone segmentation" are algorithms, and their functionality would have been tested independently.
  • However, no specific performance metrics (e.g., accuracy, precision) for these algorithms in a standalone capacity are reported from these tests. The external validation was a human-in-the-loop setting in which evaluators used the software. A sketch of the kind of standalone metric such testing could report appears below.
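For illustration, a standalone evaluation of a feature like Automatic bone segmentation would typically report an overlap metric such as the Dice coefficient against reference masks. A minimal sketch follows, with synthetic masks standing in for algorithm output and reference; no such metric appears in the document.

```python
# Sketch of a standalone segmentation metric (Dice coefficient) of the kind
# a standalone evaluation of "Automatic bone segmentation" could report.
import numpy as np


def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect (vacuous) agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)


# Toy masks: a predicted bone mask shifted one voxel against the reference.
truth = np.zeros((32, 32), dtype=bool)
truth[8:24, 8:24] = True
pred = np.zeros((32, 32), dtype=bool)
pred[9:25, 8:24] = True
print(f"Dice: {dice(pred, truth):.3f}")
```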

7. The Type of Ground Truth Used

The external validation involved "clinical review of anonymized datasets" where evaluators assessed "functionality and image quality." For new features like segmentation or registration, the "ground truth" would likely be based on the expert consensus or judgment of the evaluators during their review of the anonymized datasets, confirming if the segmentation was accurate or if the registration was correct and useful. There is no mention of pathology, direct clinical outcomes data, or a separate "ground truth" panel.

8. The Sample Size for the Training Set

The document does not specify the sample size for the training set. It details verification and validation steps for the software but does not provide information about the development or training of any AI/ML components within the software. While features like "Automatic table segmentation" and "Automatic bone segmentation" likely involve machine learning, the document does not elaborate on their training data.

9. How the Ground Truth for the Training Set Was Established

Because the document neither identifies a training set nor describes AI/ML development in the detail typical of deep learning submissions, it contains no information on how ground truth for any training set was established.


§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).