Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K182230
    Manufacturer:
    Date Cleared: 2018-09-07 (21 days)
    Product Code:
    Regulation Number: 892.2050
    Device Name: Multi Modality Viewer

    Intended Use

    Multi Modality Viewer is an option within Vitrea that allows the examination of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.

    Device Description

    Multi Modality Viewer is an option within Vitrea that allows the examination and manipulation of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.

    The Multi Modality Viewer provides an overview of the study, facilitates side-by-side comparison including priors, allows reformatting of image data, enables clinicians to record evidence and return to previous evidence, and provides easy access to other Vitrea applications for further analysis.
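To make the workflow concrete, here is a minimal sketch (not Vitrea's implementation) of how DICOM instances from a mixed-modality study can be grouped by series for side-by-side review, using pydicom; the directory path and file-extension filter are assumptions for illustration.

```python
# Minimal sketch: bucket a study's DICOM instances by series so two series can
# be laid out side by side, roughly the first step a multi-modality viewer
# performs before rendering. Paths and layout are illustrative only.
from collections import defaultdict
from pathlib import Path

import pydicom


def load_series(study_dir: str) -> dict:
    """Read every DICOM file under study_dir and group it by SeriesInstanceUID."""
    series = defaultdict(list)
    for path in Path(study_dir).rglob("*.dcm"):          # assumed .dcm extension
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        series[ds.SeriesInstanceUID].append(ds)
    # Sort each series by InstanceNumber so slices appear in acquisition order.
    for uid in series:
        series[uid].sort(key=lambda ds: int(getattr(ds, "InstanceNumber", 0)))
    return dict(series)


if __name__ == "__main__":
    grouped = load_series("./example_study")              # hypothetical path
    for uid, instances in grouped.items():
        first = instances[0]
        print(uid, first.Modality, getattr(first, "SeriesDescription", ""), len(instances))
```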

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Multi Modality Viewer, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of numerical "acceptance criteria" for performance metrics in the typical sense (e.g., sensitivity, specificity, accuracy thresholds). Instead, it focuses on functional capabilities and states that verification and validation testing confirmed the software functions according to requirements and that "no negative feedback was received," and "Multi Modality Viewer was rated as equal to or better than the reference devices."

    The acceptance is primarily based on establishing substantial equivalence to predicate and reference devices, demonstrating that the new features function as intended and do not raise new questions of safety or effectiveness.

| Feature/Criterion | Acceptance Standard (Implied) | Reported Device Performance/Conclusion |
| --- | --- | --- |
| Overall Safety & Effectiveness | Safe and effective for its intended use, comparable to predicate and reference devices. | Clinical validations demonstrated clinical safety and effectiveness. |
| Functional Equivalence | New features operate according to defined requirements and function similarly to or better than features in reference devices. | Verification testing confirmed the software functions according to requirements. External validation evaluators confirmed the sufficiency of the software to read images and rated it "equal to or better than" the reference devices. |
| No Negative Feedback | No negative feedback from clinical evaluators regarding functionality or image quality of new features. | "No negative feedback received from the evaluators." |
| Substantial Equivalence | Device is substantially equivalent to predicate and reference devices regarding intended use, clinical effectiveness, and safety. | "This validation demonstrates substantial equivalence between Multi Modality Viewer and its predicate and reference devices with regards to intended use, clinical effectiveness and safety." |
| Risk Management | All risks reduced as low as possible; overall residual risk acceptable; benefits outweigh risks. | "All risks have been reduced as low as possible. The overall residual risk for the software product is deemed acceptable. The medical benefits of the device outweigh the residual risk..." |
| Software Verification (Internal) | Software fully satisfies all expected system requirements and features; all risk mitigations function properly. | "Verification testing confirmed the software functions according to its requirements and all risk mitigations are functioning properly." |
| Software Validation (Internal) | Software conforms to user needs and intended use; system requirements and features implemented properly. | "Workflow testing... provided evidence that the system requirements and features were implemented properly to conform to the intended use." |
| Cybersecurity | Follows FDA guidance for cybersecurity in medical devices, including hazard analysis, mitigations, controls, and an update plan. | Follows internal documentation based on the FDA guidance "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices." |
| Compliance with Standards | Complies with relevant voluntary consensus standards (DICOM, ISO 14971, IEC 62304). | The device "complies with the following voluntary recognized consensus standards" (DICOM, ISO 14971, and IEC 62304 are listed). |
| New features do not raise new safety/effectiveness questions | New features are similar enough to existing cleared features in predicate/reference devices that they do not introduce new concerns. | For each new feature (Full volume MIP, Volume image rendering, 3D Cutting Tool, Clip Plane Box, bone/base segmentation tools, 1 Click Visible Seed, Automatic table segmentation, Automatic bone segmentation, US 2D Cine Viewer, Automatic Rigid Registration), the document states that the added feature "does not raise different questions of safety and effectiveness" because of its similarity to a cleared reference device. |
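The submission names automatic bone segmentation as a new feature but does not describe its algorithm. As a point of reference only, a common non-proprietary baseline for CT bone segmentation is Hounsfield-unit thresholding; the sketch below (NumPy/SciPy, with an assumed threshold and toy data) illustrates that baseline, not the cleared device's method.

```python
# Minimal sketch of a threshold-based bone segmentation baseline for CT.
# This is NOT the cleared device's algorithm; the 510(k) does not describe one.
# The HU cutoff and morphological cleanup steps are illustrative assumptions.
import numpy as np
from scipy import ndimage


def segment_bone(ct_hu: np.ndarray, threshold_hu: float = 300.0) -> np.ndarray:
    """Return a boolean bone mask from a CT volume expressed in Hounsfield units."""
    mask = ct_hu >= threshold_hu                       # bone is bright on CT
    mask = ndimage.binary_opening(mask, iterations=1)  # drop isolated bright voxels (noise)
    mask = ndimage.binary_closing(mask, iterations=1)  # fill small holes inside bone
    return mask


if __name__ == "__main__":
    volume = np.random.randint(-1000, 1500, size=(64, 128, 128)).astype(np.float32)  # stand-in CT
    bone = segment_bone(volume)
    print(f"bone voxels: {bone.sum()} / {bone.size}")
```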

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document repeatedly mentions "anonymized datasets" but does not specify the number of cases or images used in the external validation studies.
    • Data Provenance: The data used for the external validation studies were "anonymized datasets." The country of origin is not explicitly stated, but the evaluators were from "three different clinical locations." Given Vital Images, Inc. is located in Minnetonka, MN, USA, it's highly probable the data and clinical locations are from the United States. The studies were likely retrospective as they involved reviewing "anonymized datasets" rather than ongoing patient enrollment.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three evaluators.
    • Qualifications of Experts: The evaluators were "from three different clinical locations" and are described as "experienced professionals" in the context of simulated usability testing and clinical review. Their specific medical qualifications (e.g., radiologist, specific years of experience) are not explicitly detailed in the provided text.

    4. Adjudication Method for the Test Set

    The document does not describe an explicit "adjudication method" for establishing ground truth or resolving discrepancies between experts in the traditional sense. The phrases "no negative feedback received from the evaluators" and "Multi Modality Viewer was rated as equal to or better than the reference devices" suggest a consensus or individual evaluation model, but not a specific adjudication protocol such as 2+1 or 3+1. The evaluations appear to have focused on confirming functionality and subjective quality rather than comparing against a pre-established ground truth for a diagnostic task.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • No, an MRMC comparative effectiveness study was not explicitly stated to have been done in the context of measuring improvement with AI vs. without AI assistance.
    • The "Substantial Equivalence Validation" involved three evaluators comparing the subject device against its predicate and reference devices. However, this comparison focused on functionality and image quality and aimed to show the equivalence or non-inferiority of the new device and its features, rather than quantifying performance gains due to AI assistance in human readers. The new features mentioned (like automatic segmentation or rigid registration) are components that might assist, but the study design wasn't an MRMC to measure the effect size of this assistance on human performance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • The document describes "software verification testing" which confirms "the software functions according to its requirements." This implies a form of standalone testing for the algorithms and features. For example, "Automatic table segmentation" and "Automatic bone segmentation" are algorithms, and their functionality would have been tested independently.
    • However, no specific performance metrics (e.g., accuracy, precision) for these algorithms in a standalone capacity are reported from these tests. The external validation was a human-in-the-loop setting in which evaluators used the software. A generic sketch of how such a standalone segmentation check might be scored follows below.
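Although no such metrics are reported, a standalone segmentation check of this kind is typically scored with an overlap measure such as the Dice coefficient, 2|A ∩ B| / (|A| + |B|). A minimal sketch, with hypothetical mask arrays:

```python
# Minimal sketch: Dice overlap between a predicted segmentation mask and a
# reference mask. The 510(k) reports no such metric; this only illustrates how
# a standalone segmentation check is commonly scored.
import numpy as np


def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2*|pred AND ref| / (|pred| + |ref|); 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom


if __name__ == "__main__":
    a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
    b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True
    print(round(dice_coefficient(a, b), 3))  # 0.5: half the voxels overlap
```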

    7. The Type of Ground Truth Used

    The external validation involved "clinical review of anonymized datasets" where evaluators assessed "functionality and image quality." For new features like segmentation or registration, the "ground truth" would likely be based on the expert consensus or judgment of the evaluators during their review of the anonymized datasets, confirming if the segmentation was accurate or if the registration was correct and useful. There is no mention of pathology, direct clinical outcomes data, or a separate "ground truth" panel.

    8. The Sample Size for the Training Set

    The document does not specify the sample size for the training set. It details verification and validation steps for the software but does not provide information about the development or training of any AI/ML components within the software. While features like "Automatic table segmentation" and "Automatic bone segmentation" likely involve machine learning, the document does not elaborate on their training data.

    9. How the Ground Truth for the Training Set Was Established

    Since the document does not specify the training set or imply explicit AI/ML development in the detail often seen for deep learning algorithms, it does not describe how the ground truth for the training set was established.



    K Number: K163574
    Manufacturer:
    Date Cleared: 2017-01-26 (38 days)
    Product Code:
    Regulation Number: 892.2050
    Device Name: Multi Modality Viewer

    Intended Use

    Multi Modality Viewer is an option within Vitrea that allows the examination and manipulation of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.

    Device Description

    Multi Modality Viewer is a medical image viewer software application, available on the Vitrea software platform cleared by K150258. The application allows qualified clinicians, including physicians, radiologists and technologists, to display, navigate, manipulate and quantify medical images obtained from MRI, CT, CR, DX, RG, RF, XA, PET, and PET/CT modalities.

    The Multi-Modality Viewer provides an overview of the study, facilitates side-by-side comparison including priors, allows clinicians to record evidence and return to previous evidence, and provides easy access to other Vitrea applications for further analysis.

    AI/ML Overview

    The provided document, a 510(k) summary for the "Multi Modality Viewer" software, primarily focuses on demonstrating substantial equivalence to a predicate device rather than detailing specific acceptance criteria and a study proving those criteria are met for new features that would typically involve a performance study.

    The device is an update to an existing medical image viewer, and the clearance is based on the argument that new features (support for additional modalities like PET, PET/CT, CR, DX, RG, RF, XA) do not raise new questions of safety and effectiveness because similar functionalities exist in previously cleared reference devices.

    Therefore, the document explicitly states: "The subject of this 510(k) notification, Multi Modality Viewer software, did not require clinical studies to support safety and effectiveness of the software."

    However, it does mention "Verification and Validation" activities and "Performance Standards" in a general sense. Based on the provided text, I can extract the following information about general testing and the device's performance, as much as is available:


    1. Table of Acceptance Criteria and Reported Device Performance:

    Since no specific quantitative acceptance criteria or detailed performance metrics are provided for the new features in the context of a dedicated performance study, the "acceptance criteria" here are inferred from the general statements about software development and verification. The device's "performance" is reported as meeting these general standards.

| Acceptance Criteria (Inferred from General Software Practices) | Reported Device Performance (as stated in the document) |
| --- | --- |
| Feature functions according to its requirements. | "Verification confirmed that the feature functions according to its requirements." |
| Operates on the Vitrea software platform without degrading existing functionality. | "Software testing was completed to ensure the Multi Modality Viewer software functions according to its requirements and operates on the Vitrea software platform without degrading the existing functionality of the Vitrea software platform." |
| Meets all product release criteria. | "The Multi Modality Viewer software has achieved all product release criteria." |
| Risks are reduced as low as possible and probability of occurrence of harm is "Improbable." | "Every risk has been reduced as low as possible and has been evaluated to have a probability of occurrence of harm of at least 'Improbable.'" |
| Unresolved defects do not compromise safety and effectiveness. | "Of the unresolved defects remaining in the released application, each has been carefully evaluated and it has been determined that the software can be used safely and effectively." |
| Medical benefits outweigh residual risk. | "The medical benefits of the device outweigh the residual risk for each individual risk and all risks together." |
| Compliance with the DICOM standard for transfer and storage of data. | "The Vitrea software platform complies with the DICOM standard for transfer and storage of this data and does not modify the contents of DICOM instances." |
| Compliance with recognized consensus standards (IEC 62304, ISO 14971, NEMA PS 3.1-3.20). | "The Multi Modality Viewer software complies with the following voluntary recognized consensus standards: PS 3.1-3.20 (2011), ISO 14971:2007, IEC 62304:2006." |
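The DICOM compliance row above quotes the claim that the platform "does not modify the contents of DICOM instances." As an illustration of the kind of round-trip check that claim implies (not the vendor's actual test procedure; the file paths and the attributes compared are assumptions):

```python
# Minimal sketch: verify that reading and re-saving a DICOM instance leaves its
# identifying attributes and pixel data untouched. Illustrates the idea behind a
# "does not modify DICOM contents" claim; not the vendor's test procedure, and
# the file paths are hypothetical.
import pydicom


def roundtrip_unchanged(src_path: str, dst_path: str) -> bool:
    original = pydicom.dcmread(src_path)
    original.save_as(dst_path)                 # store without editing any element
    copy = pydicom.dcmread(dst_path)
    return (
        original.SOPInstanceUID == copy.SOPInstanceUID
        and original.SeriesInstanceUID == copy.SeriesInstanceUID
        and original.PixelData == copy.PixelData
    )


if __name__ == "__main__":
    print(roundtrip_unchanged("input.dcm", "copy.dcm"))  # expect True
```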

    2. Sample Size Used for the Test Set and Data Provenance:

    The document does not specify a distinct "test set" with a particular sample size for a performance study related to the new features. The testing mentioned is "internal verification and external validation" which included "simulated usability testing by experienced professionals." No details about the number of cases or data provenance (country of origin, retrospective/prospective) for these tests are provided.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    No specific number of experts or their detailed qualifications are provided for establishing ground truth for a test set. The document refers to "experienced professionals" for "simulated usability testing."

    4. Adjudication Method for the Test Set:

    No adjudication method (e.g., 2+1, 3+1, none) is mentioned as no specific performance study test set is described.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done:

    No MRMC comparative effectiveness study is mentioned. The document explicitly states: "The subject of this 510(k) notification, Multi Modality Viewer software, did not require clinical studies to support safety and effectiveness of the software."

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    The device is a medical image viewer software. Its function is to display, navigate, manipulate, and quantify medical images for clinicians. It is not an algorithm that performs automated detection or diagnosis. Therefore, a standalone (algorithm only) performance study as typically understood for AI algorithms would not be applicable and is not mentioned. Its performance is intrinsically tied to human interaction.

    7. The Type of Ground Truth Used:

    As no specific performance study with a dedicated test set is described, there is no mention of the type of ground truth used (e.g., expert consensus, pathology, outcomes data). The validation focuses on the software's ability to function as intended and display data from new modalities accurately, which relies on adherence to standards and internal software verification.

    8. The Sample Size for the Training Set:

    This device is image viewer software, not a machine learning or AI algorithm that requires a "training set" in the conventional sense. Therefore, no training set size is mentioned.

    9. How the Ground Truth for the Training Set Was Established:

    As there is no training set for this type of device, this information is not applicable.


    K Number: K161419
    Manufacturer:
    Date Cleared: 2016-07-13 (51 days)
    Product Code:
    Regulation Number: 892.2050
    Device Name: Multi Modality Viewer

    Intended Use

    Multi Modality Viewer is a software application within Vitrea® that allows the examination and manipulation of a series of medical images obtained from MRI and CT scanners.

    The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.

    Device Description

    Multi Modality Viewer is a software application which functions on the Vitrea Platform, cleared by K150258. This application allows intuitive navigation, and manipulation of medical images obtained from MRI and CT scanners. This application enables clinicians to compare multiple series of the same patient, side-by-side, and switch to other integrated applications to further examine the data.

    It provides clinical tools to review images to help qualified physicians provide efficient and effective patient care.

    Key features:
    General Viewing:

    • Linked 2D, MPR and 4D viewers for single and multi-study comparison
    • Creation of retrievable evidence and snapshots
    • User defined flexible display protocols

    Access to Advanced Applications and Workflows:

    • In application access to MR Stitching application
    • Evidence creation and sharing across workflows

    General Image Display, Manipulation, and Analysis Tools:

    • Maximum and Minimum Intensity Projection (MIP/MinIP)
    • Identification and Display of Regions of Interest (ROIs)
    • CINE image display
    • Multi-frame display
    • Color image display
    • Simultaneous multiple studies review
    • Cross-reference lines support
    • Display of selected images, series, or entire study
    • Comparison of multiple series or studies
    • Scroll
    • Pan
    • Zoom
    • Focus
    • Flip (vertically, horizontally)
    • Invert
    • Rotate (clockwise, counter-clockwise)
    • Arrow
    • Adjust Registration
    • Auto window level/width setting
    • Text/Arrow annotation (Label)
    • Measurement of distance (Ruler), Angle, Cobb Angle, Ellipse ROI, and Freehand ROI

    Specialized Tools:

    • Image subtraction of two series/datasets (illustrated, together with MIP/MinIP, in the sketch following this list)
    • Access to semi-automated image stitching
    • Study and series linking
    • Register two different series or groups that do not share a frame of reference to link them spatially
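Two of the tools listed above, MIP/MinIP projection and image subtraction of two series, reduce to simple array operations. The sketch below (NumPy, with assumed array shapes and axis choice) is illustrative only, not the product's implementation:

```python
# Minimal sketch of two listed tools as plain array operations:
# maximum/minimum intensity projection and subtraction of two co-registered series.
# Array shapes and the projection axis are illustrative assumptions.
import numpy as np


def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection along one axis of a (slices, rows, cols) volume."""
    return volume.max(axis=axis)


def minip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Minimum intensity projection along one axis."""
    return volume.min(axis=axis)


def subtract_series(series_a: np.ndarray, series_b: np.ndarray) -> np.ndarray:
    """Voxel-wise subtraction of two already co-registered series of equal shape."""
    if series_a.shape != series_b.shape:
        raise ValueError("series must be co-registered and identically shaped")
    return series_a.astype(np.float32) - series_b.astype(np.float32)


if __name__ == "__main__":
    vol = np.random.rand(32, 64, 64)
    print(mip(vol).shape, minip(vol).shape)   # (64, 64) (64, 64)
    print(subtract_series(vol, vol).max())    # 0.0
```
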
    AI/ML Overview

    The Multi Modality Viewer is a software application for examining and manipulating medical images from MRI and CT scanners.

    It is considered substantially equivalent to its predicate device, the MR Core Software (K151115), which only handled MRI images, and a reference device, Softread Software (K040305), which supports both CT and MRI.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The submission does not explicitly state quantitative acceptance criteria or a specific performance study comparison table for the Multi Modality Viewer beyond feature-by-feature comparison with predicate/reference devices. Instead, it focuses on demonstrating that the device has similar technological characteristics, intended use, and indications for use, and that verification and validation testing confirms its safety and effectiveness.

    The document primarily performs a feature-by-feature comparison to establish substantial equivalence.

| Criteria (for comparison/equivalence) | Subject Device: Multi Modality Viewer | Predicate Device: MR Core Software (K151115) | Reference Device: Softread (K040305) | Reported Performance (or comparison outcome) |
| --- | --- | --- | --- | --- |
| Classification Name | System, Image Processing, Radiological | System, Image Processing, Radiological | N/A | Same |
| Regulatory Number | 892.2050 | 892.2050 | N/A | Same |
| Product Code | LLZ | LLZ | N/A | Same |
| Classification | Class II | Class II | N/A | Same |
| Review Panel | Radiology | Radiology | N/A | Same |
| Indications for Use | Examines/manipulates MRI and CT images; compares multiple series side-by-side. | Examines/manipulates MRI images; compares multiple series side-by-side. | N/A | Added CT support compared to predicate; similar to reference. |
| Intended Users | Radiologists, Clinicians, Technologists | Radiologists, Clinicians, Technologists | N/A | Same |
| Patient Population | Not applicable (viewer software) | Not applicable (viewer software) | N/A | Same |
| Modality Support | CT and MRI | MRI | CT and MRI | Added CT support compared to predicate; same as reference. |
| DICOM Image Communication | Yes | Yes | Yes | Same |
| 2D Image Review | Yes | Yes | Yes | Same |
| 2D Comparative Review | Yes | Yes | Yes | Same |
| Multi-Planar Reformatting | Yes | Yes | Yes | Same |
| MIP/MinIP | Yes | Yes | Yes | Same |
| Image Editing, Setting, Saving | Yes | Yes | Yes | Same |
| Annotation & Tagging Tools | Yes | Yes | Yes | Same |
| Display Options (e.g., thickness) | Yes | Yes | Yes | Same |
| Quantitative Measurements | Yes | Yes | Yes | Same |
| Snapshot | Yes | Yes | Yes | Same |
| Cine Image Display | Yes | Yes | Yes | Same |
| Multi-frame Display | Yes | Yes | Yes | Same |
| Color Image Display | Yes | Yes | Yes | Same |
| Simultaneous Multiple Studies Review | Yes | Yes | Yes | Same |
| Cross-reference Lines Support | Yes | Yes | Yes | Same |
| Display of Selected Images/Series/Study | Yes | Yes | Yes | Same |
| Comparison of Multiple Series/Studies | Yes | Yes | Yes | Same |
| Scroll Image | Yes | Yes | Yes | Same |
| Zoom Image | Yes | Yes | Yes | Same |
| Pan Image | Yes | Yes | Yes | Same |
| Focus Image | Yes | Yes | Yes | Same |
| Rotate Image | Yes | Yes | Yes | Same |
| Flip Image - Vertical | Yes | Yes | Yes | Same |
| Flip Image - Horizontal | Yes | Yes | Yes | Same |
| Rotate Image - Clockwise | Yes | Yes | Yes | Same |
| Rotate Image - Counter-clockwise | Yes | Yes | Yes | Same |
| Invert Image | Yes | Yes | Yes | Same |
| Arrow | Yes | Yes | Yes | Same |
| Auto Window Level/Width Setting | Yes | Yes | Yes | Same |
| Measurement of Distance | Yes | Yes | Yes | Same |
| Measurement of Angle | Yes | Yes | Yes | Same |
| Measurement of Cobb Angle | Yes | Yes | Yes | Same |
| Identification & Display of Ellipse ROIs | Yes | Yes | Yes | Same |
| Identification & Display of Freehand ROIs | Yes | Yes | Yes | Same |
| Manual Registration | Yes | Yes | Yes | Same |
| Image Subtraction of two series/datasets | Yes | Yes | Yes | Same |
| Study and Series Linking | Yes | Yes | Yes | Same |
| Semi-automated Image Stitching | Yes | Yes | Yes | Same |
| Time Intensity Analysis | Yes | Yes | N/A (not listed for Softread) | Same (comparison with predicate) |
| Batch Save of MPR reformats | Yes | Yes | N/A (not listed for Softread) | Same (comparison with predicate) |

    Overall Conclusion: The device is considered substantially equivalent because the added CT modality feature is similar to the reference device and does not raise different questions of safety and effectiveness. The verification and validation testing performed "demonstrate the subject device is as safe and effective as the predicate and reference devices."
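Two rows of the comparison table above, auto window level/width and measurement of distance, correspond to standard textbook formulas. The sketch below (NumPy, with assumed values and a hypothetical pixel-spacing tuple) shows those formulas for illustration; it is not the cleared software's code.

```python
# Minimal sketch of two table rows as textbook formulas:
#  - window/level mapping of stored pixel values to a display grayscale range
#  - ruler distance in millimetres using a (row_mm, col_mm) pixel spacing
# Window values, spacing, and points are illustrative assumptions.
import numpy as np


def apply_window(image: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map pixel values to the 0..255 display range using a window center/width."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = (image.astype(np.float32) - lo) / (hi - lo)   # 0..1 inside the window
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)


def ruler_mm(p1: tuple, p2: tuple, pixel_spacing: tuple) -> float:
    """Distance between two (row, col) points, scaled by (row_mm, col_mm) spacing."""
    dr = (p2[0] - p1[0]) * pixel_spacing[0]
    dc = (p2[1] - p1[1]) * pixel_spacing[1]
    return float(np.hypot(dr, dc))


if __name__ == "__main__":
    img = np.random.randint(-1000, 1500, size=(256, 256))
    display = apply_window(img, center=40, width=400)           # a common soft-tissue window
    print(display.dtype, display.min(), display.max())
    print(round(ruler_mm((10, 10), (40, 50), (0.5, 0.5)), 2))   # 25.0 mm
```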

    2. Sample Size for Test Set and Data Provenance

    The document mentions "Verification of the software that included performance and safety testing" and "Validation of the software that included simulated usability testing by experienced professionals." However, it does not specify the sample size used for any test set or the country of origin of the data, nor whether it was retrospective or prospective. The information provided is high-level about the testing processes.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    For the "External Validation" mentioned under the "Validation" section, "experienced medical professionals evaluated the application." However, the exact number of experts and their specific qualifications (e.g., radiologist with 10 years of experience) are not specified.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or evaluating the test set. It only states that "All validators confirmed that the Multi Modality Viewer software fulfills its intended use."

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    The document does not mention a Multi Reader Multi Case (MRMC) comparative effectiveness study and therefore, there is no information on the effect size of human readers improving with AI vs. without AI assistance. This device is a viewer, not an AI-assisted diagnostic tool in the sense of providing automated interpretations.

    6. Standalone (Algorithm Only) Performance Study

    The primary evaluation appears to be of the software's functionality as a viewer, not as an algorithm performing a specific diagnostic task in standalone mode. Verification and validation tests were conducted to confirm proper function of the device's features, but no standalone performance study measuring diagnostic accuracy or similar metrics for an algorithm without human-in-the-loop was reported.

    7. Type of Ground Truth Used

    The document indicates that "simulated usability testing by experienced professionals" and "workflow testing" were conducted, and that "All validators confirmed that the Multi Modality Viewer software fulfills its intended use." This suggests "expert consensus" on usability and fulfillment of intended use as the form of "ground truth" or validation outcome, rather than pathology, outcomes data, or a specific diagnostic ground truth, as the device is a viewing platform.

    8. Sample Size for the Training Set

    The document does not refer to a "training set" because the Multi Modality Viewer is described as a software application for viewing and manipulating images, not an AI/ML algorithm that requires a training set for model development.

    9. How Ground Truth for Training Set Was Established

    Since there is no mention of a training set, the method for establishing ground truth for a training set is not applicable or discussed.

