Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K181926
    Date Cleared: 2018-09-21 (65 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    InVivo Web Viewer is a software application used for the display and 3D visualization of medical image files from scanning devices such as CT and MRI. It is intended for use by radiologists, clinicians, referring physicians, and other qualified individuals to retrieve, process, render, review, and assist in diagnosis, utilizing a web user interface. Additionally, InVivo Web Viewer is a preoperative software application used for the planning and evaluation of dental implants and surgical treatments. This device is not indicated for mammography use.

    Device Description

    InVivo Web Viewer is imaging software designed specifically for clinicians, doctors, physicians, and other qualified medical professionals. It is used for the visualization of patient image files from scanning devices, such as CT and MRI, and for assisting in case diagnosis, dental implant placement, and treatment planning. It displays 2D images and renders 3D volumetric images in a web browser environment such as Google Chrome or Mozilla Firefox. Users are able to examine anatomy on a computer screen and manipulate the rendered image by turning, zooming, clipping, adjusting brightness and contrast, and choosing different rendering color presets. Users can also make distance measurements on the image and use the implant placement tool to assist with implant placement and surgical treatment planning.
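    The brightness/contrast adjustment described above corresponds to the standard DICOM linear windowing transform (window center/width). As an illustration only — not the product's actual code — a minimal sketch in Python with NumPy:

```python
import numpy as np

def apply_window(pixels, center, width):
    """Linear DICOM-style windowing: map raw intensities (e.g. Hounsfield
    units) into 0-255 display values for a given window center/width."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    scaled = (np.asarray(pixels, dtype=np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# Synthetic CT-like values in Hounsfield units: air, water, soft tissue, bone
slice_hu = np.array([[-1000, 0], [40, 1000]])
display = apply_window(slice_hu, center=40, width=400)  # soft-tissue window
```

    With a soft-tissue window (center 40 HU, width 400 HU), air clips to black, dense bone clips to white, and values inside the window scale linearly between them.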

    AI/ML Overview

    The provided text describes a 510(k) submission for the "InVivo Web Viewer" and compares it to a predicate device, "InVivoDental." However, the document does not contain any information about acceptance criteria, device performance metrics, specific study results (like sensitivity, specificity, accuracy), sample sizes for test or training sets, ground truth establishment details, or details about expert involvement for establishing ground truth or adjudicating cases.

    The "Non-Clinical Test Results" section vaguely mentions "Performance testing" and "Bench testing of the software with predicate software," stating "This testing confirms that InVivo Web Viewer is as effective as its predicate in its ability to perform essential functions." This statement, however, lacks any quantitative performance data.

    Therefore, I cannot fulfill the request to describe the acceptance criteria and the study that proves the device meets the acceptance criteria with the provided information. The document focuses on regulatory equivalence based on intended use and technological characteristics rather than detailed performance study results against specific criteria.


    K Number: K150976
    Device Name: Collage
    Date Cleared: 2015-06-04 (52 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    Collage is a software application intended for viewing 3D medical image files from scanning devices, such as CT, MRI, or 3D ultrasound, as well as 2D patient images, such as patient photographs, intraoral photographs, and dental x-rays. Images and data can be stored, communicated, processed, and displayed within the system and/or across computer networks at distributed locations. It is intended for use by doctors, clinicians, and other qualified individuals utilizing standard PC hardware. This device is not indicated for mammography use.

    Device Description

    Collage is interactive imaging software used for the visualization, storage, and management of 3D medical image files from scanning devices, such as CT, MRI, or 3D ultrasound, as well as 2D patient images, such as patient photographs, intraoral photographs, and dental x-rays. Doctors, dental clinicians, and other qualified individuals can retrieve, process, render, review, store, and print images utilizing standard PC hardware. The software runs on Windows operating systems and visualizes medical imaging data on the computer screen. The Collage software is intended as a platform bridging different sets of patient data for comprehensive studies. With Collage, doctors can manage all of their patient images, including both 2D and 3D image data, in a single application.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the "Collage" device:

    The provided text does not contain explicit acceptance criteria or a detailed study proving the device meets those criteria, as typically understood for AI/ML medical devices.

    Instead, the document is a 510(k) premarket notification summary for a Picture Archiving and Communication System (PACS) software, "Collage." The evaluation focuses on demonstrating substantial equivalence to a predicate device (OsiriX MD, K101342) through software development quality assurance measures and bench testing.

    Here's an attempt to answer your questions based on the available information, noting where information is absent:

    1. A table of acceptance criteria and the reported device performance

      The document does not specify formal, measurable "acceptance criteria" for clinical performance. The evaluation is based on demonstrating the software functions as intended and is comparable to a predicate device.

      Acceptance Criteria (Implied) | Reported Device Performance
      Software is stable and operating as designed. | "Testing confirmed that the software is stable and operating as designed."
      Risk associated with the software is reduced to acceptable levels. | "Testing also confirmed that the software has been evaluated for hazards and that risk has been reduced to acceptable levels."
      Ability to render and manage 2D and 3D medical images. | "Bench testing of the software with predicate software was performed by evaluation of images rendered by Collage and predicate software. This testing and evaluation included testing of rendering both 2D and 3D images... This testing confirms that Collage is as effective as its predicate in its ability to perform its essential functions of rendering and managing medical images."
    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

      The document mentions "Bench testing of the software with predicate software was performed by evaluation of images rendered by Collage and predicate software." It does not specify the sample size of images used for this bench testing, nor does it provide information on the provenance (country of origin, retrospective/prospective) of these images.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

      "Bench testing... was evaluated by an expert in the field of radiology." Only one expert is mentioned. Their specific qualifications (e.g., years of experience) are not detailed beyond "an expert in the field of radiology."

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

      Given that only "an expert" evaluated the bench testing results, there was no adjudication method mentioned or implied (like 2+1 or 3+1 consensus).

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

      No MRMC comparative effectiveness study was done. The device "Collage" is a PACS software for viewing and managing images, not an AI-assisted diagnostic tool. Therefore, there's no mention of how human readers improve with AI assistance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

      The device is described as "a software application intended for viewing...images." "Diagnosis is not performed by this software but by doctors, clinicians and other qualified individuals." It is a tool for clinicians, not a standalone diagnostic algorithm. Therefore, no standalone (algorithm-only) performance was assessed in the context of diagnostic accuracy, as it's not its intended function. The "standalone" performance assessed was its ability to render and manage images as a PACS system.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

      For the bench testing described, the "ground truth" was essentially the visual evaluation and comparison of rendered images by a radiology expert against images rendered by the predicate device, to ensure "Collage" performs its essential functions effectively. It's not a diagnostic "ground truth" derived from pathology or outcomes, but rather a functional ground truth for image display.

    8. The sample size for the training set

      The document describes "Collage" as an imaging software, not an AI/ML model that would typically have a distinct "training set." Therefore, no sample size for a training set is mentioned. The software development followed standard quality assurance measures, but this is different from machine learning model training.

    9. How the ground truth for the training set was established

      As there is no mention of a training set for an AI/ML model, this question is not applicable based on the provided text.
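    The rendered-image comparison the bench testing alludes to can be made objective with even a crude per-pixel metric. The submission reports no such quantitative metric; the following is purely a hypothetical sketch in Python with NumPy of what one could compute:

```python
import numpy as np

def max_pixel_difference(subject_img, predicate_img):
    """Largest per-pixel absolute difference between two renderings --
    a crude objective proxy for 'both programs render the same image'."""
    a = np.asarray(subject_img, dtype=np.int32)
    b = np.asarray(predicate_img, dtype=np.int32)
    return int(np.abs(a - b).max())

# Toy 2x2 "renderings" standing in for subject and predicate output
subject = np.array([[10, 20], [30, 40]], dtype=np.uint8)
predicate = np.array([[10, 22], [30, 40]], dtype=np.uint8)
diff = max_pixel_difference(subject, predicate)
```

    A bench test could then pass or fail on a tolerance (e.g., diff below some threshold), which is exactly the kind of acceptance criterion the summary leaves unstated.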


    K Number: K140093
    Device Name: TABLE
    Date Cleared: 2014-03-07 (52 days)
    Regulation Number: 892.2050
    Intended Use

    Table is a software application used for the display and 3D visualization of medical image files from scanning devices such as CT and MRI. It is intended for use by radiologists, clinicians, referring physicians and other qualified individuals to retrieve, process, render, review, and assist in diagnosis, utilizing standard PC hardware.

    This device is not indicated for mammography use.

    Device Description

    Table is volumetric imaging software designed specifically for clinicians, doctors, physicians, and other qualified medical professionals. The software runs on Windows operating systems and visualizes medical imaging data on the computer screen. Users are able to examine anatomy on a computer screen and use software tools to move and manipulate images by turning, zooming, flipping, adjusting contrast and brightness, cutting, and slicing, using either touch control or a mouse. The software also has the ability to perform measurements of angle and length. There are multiple tools to annotate and otherwise mark areas of interest on the images. Additionally, Table has the ability to demonstrate pathology examples of patient data for educational purposes.
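    The "slicing" capability described above is, in imaging terms, multiplanar reconstruction: extracting orthogonal 2D planes from the 3D voxel array. A toy sketch in Python with NumPy — illustrative only, not Anatomage's implementation:

```python
import numpy as np

def orthogonal_slices(volume, i, j, k):
    """Multiplanar reconstruction: pull the axial, coronal, and sagittal
    planes through voxel (i, j, k) out of a 3D (slices, rows, cols) array."""
    axial = volume[i, :, :]      # plane of constant slice index
    coronal = volume[:, j, :]    # plane of constant row index
    sagittal = volume[:, :, k]   # plane of constant column index
    return axial, coronal, sagittal

# Toy 2x3x4 volume standing in for a CT stack
volume = np.arange(2 * 3 * 4).reshape(2, 3, 4)
axial, coronal, sagittal = orthogonal_slices(volume, 1, 2, 3)
```

    Each returned plane is a 2D view into the volume, which is why viewers can re-slice interactively without copying the underlying data.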

    AI/ML Overview

    The provided 510(k) summary (K140093) describes the Anatomage Table as a volumetric imaging software for 3D visualization of medical image files (CT, MRI) for diagnosis assistance.

    No specific quantitative acceptance criteria are explicitly stated in the provided document. The device's performance is primarily established through a qualitative comparison to a predicate device and general confirmation of stability and designed operation.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Implied) | Reported Device Performance
    Qualitative equivalence to predicate device: the software should be as effective as its predicate in essential functions. | "Testing confirms that Table is as effective as its predicates in its ability to perform its essential functions of measurement and rendering of DICOM data."
    Stability and operating as designed: the software should function reliably and as intended. | "Testing confirmed that the software is stable and operating as designed."
    Hazard evaluation and risk reduction: identified hazards should be evaluated, and risks reduced to acceptable levels. | "Testing also confirmed that the software has been evaluated for hazards and that risk has been reduced to acceptable levels."
    Accuracy of measurement tools: essential linear and angular measurements should be accurate. | "This testing included testing of measurement tools in both predicate and subject software..." (accuracy implied through expert evaluation)
    Rendering of DICOM data: the software should accurately visualize DICOM data. | "...and rendering of DICOM data." (accuracy implied through expert evaluation)
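    Length and angle measurements of the kind referenced above reduce to simple geometry once pixel coordinates are scaled by the image's physical pixel spacing (cf. the DICOM PixelSpacing attribute). An illustrative Python sketch, not the vendor's code:

```python
import math

def length_mm(p1, p2, spacing=(0.5, 0.5)):
    """Physical distance between two pixel coordinates (row, col),
    scaled by (row, col) pixel spacing in mm."""
    dr = (p2[0] - p1[0]) * spacing[0]
    dc = (p2[1] - p1[1]) * spacing[1]
    return math.hypot(dr, dc)

def angle_deg(vertex, a, b):
    """Angle in degrees at `vertex` formed by rays toward points a and b."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
```

    A bench test of measurement tools could compare such computed values against known phantom dimensions or against the predicate's readings, though the summary reports no such quantitative tolerance.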

    2. Sample Size Used for the Test Set and Data Provenance

    The document states "Bench testing of the software with predicate software was performed by evaluation of images rendered by Table and predicate software." However, it does not specify the sample size (number of images or cases) used for this test set nor the data provenance (e.g., country of origin, retrospective or prospective nature of the data).

    3. Number of Experts and their Qualifications for Ground Truth

    The testing and evaluation of the bench test were performed by "an expert in the field of radiology." Only one expert is mentioned. The document does not provide further specific qualifications (e.g., years of experience) for this expert.

    4. Adjudication Method for the Test Set

    The document mentions evaluation by "an expert." This suggests a single-expert assessment rather than a multi-expert adjudication method (like 2+1 or 3+1). Therefore, the adjudication method appears to be "none" in the sense of multiple experts resolving discrepancies.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The evaluation described is a bench test comparing the software's output to a predicate, assessed by a single expert. There is no mention of human readers improving with AI assistance or without.

    6. Standalone Performance Study

    Yes, a standalone performance study was done in the sense that the "Table" software's performance was evaluated independently in a "bench testing" scenario, comparing its outputs (rendered images and measurements) with those of a predicate software. This evaluation focused on the algorithm's capabilities without explicit human-in-the-loop performance measurement.

    7. Type of Ground Truth Used

    The ground truth for the test set was implicitly established through comparison to the predicate software's output and evaluation by a single radiology expert. This leans towards an expert-based assessment of accuracy and equivalence rather than pathology or outcomes data.

    8. Sample Size for the Training Set

    The document does not provide any information regarding a training set sample size. This suggests that if machine learning was used (which is not explicitly clear for this type of software described), the details of its training are not included in this summary. Given the description, it appears to be more of a deterministic image processing and visualization software rather than an AI/ML-driven diagnostic algorithm.

    9. How Ground Truth for the Training Set Was Established

    As no training set is mentioned or detailed, the document does not describe how ground truth for a training set was established.

