
510(k) Data Aggregation

    K Number: K141745
    Date Cleared: 2014-10-31 (123-day review)
    Regulation Number: 892.2050
    Device Name: IQQA-BODYIMAGING SOFTWARE
    Intended Use

    IQQA-BodyImaging is a PC-based, self-contained, non-invasive image analysis software application for reviewing body imaging studies (including thoracic, abdominal and pelvic) derived from CT and MR scanners. Combining image viewing, processing and reporting tools, the software is designed to support the visualization, evaluation, and reporting of body imaging studies and physician-identified lesions. The software supports a workflow based on automated image registration for viewing and analyzing multiphase and multiple time-point volume datasets. It includes tools for interactive segmentation and labeling of organ segments and vascular/ductal/airway structures. The software provides functionalities for manual or interactive segmentation of physician-identified lesions, interactive definition of virtual resection plane and virtual needle path, and allows for regional volumetric analysis of such lesions in terms of size, position, margin, and enhancement pattern, providing information for physician's evaluation and treatment planning, monitoring, and follow-up. The software is designed for use by trained professionals, including physicians and technicians. Image source: DICOM.

    Device Description

    The IQQA-BodyImaging Software is a self-contained, non-invasive radiographic image analysis application designed to run on standard PC hardware. The image input is DICOM. The data utilized is derived from CT and MR scanners and includes thoracic/abdominal/pelvic images. Combining image processing, viewing, and reporting tools, the software supports the visualization, evaluation, and reporting of body imaging scans and physician-identified lesions.

    Viewing tools include 2D original DICOM image viewing, window-level adjustment, pre-defined optimized window-level settings, synchronized viewing of multi-phase datasets or volumes from multiple time-points, MPR (orthogonal, oblique, and curved), MIP and MinIP, and volume rendering. Analysis and evaluation tools include automatic/interactive segmentation of structures utilizing user input of seeding points and bounding boxes, interactive labeling of segmented areas, user tracing and interactive editing, quantitative measurement derived from segmentation and labeling results, and measurement of the distance between physician-specified structures and landmarks. Reporting tools in the software automatically assemble information, including physician-identified lesion locations, measurement information, physician-input lesion characterization, lesion snapshot images across multiple phases or time-points, information on organ segments and vessels/ducts/airways, and illustrative snapshots of the GUI taken by physicians, for the physician's confirmation, further diagnosis, and patient-management note input.

    The IQQA-BodyImaging software supports a workflow based on automated registration for viewing and analyzing multiphase or multiple time-point volume datasets. The software automatically matches the spatial location of original DICOM images across contrasted phases or time-points and, with the physician's interactive adjustment, enables synchronized viewing of datasets simultaneously. The physician may also activate a temporal movie display of selected slice locations across phases to aid visualization and evaluation. After identifying and marking lesions on the 2D image display, physicians can either manually trace the lesion boundary or activate automated tools to segment the lesion. The software further includes tools for interactive segmentation and interactive labeling of organ segments and vascular/ductal/airway structures (such as liver lobes and major branches of vessels/ducts/airways), facilitating visualization of the spatial relationship between suspicious lesions and specified anatomical structures/landmarks. The software provides functionalities for interactive adjustment of a user-defined margin size around the lesion, interactive definition of a virtual resection plane, interactive definition of a virtual needle path to the lesion and local zone, regional analysis of lesions with respect to size, shape, position, margin, enhancement pattern, etc., and synchronized viewing of lesion information between the planning/baseline study and the monitoring/follow-up studies, thus providing information to support the physician's evaluation of physician-identified lesions as well as treatment planning, monitoring, and follow-up assessment.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information provided in the document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state formal "acceptance criteria" in a table format with pass/fail thresholds. Instead, it reports performance metrics from various tests. I will present the reported performance as if these were the evaluation metrics against which the device's performance was assessed.

    | Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
    | --- | --- | --- |
    | Volumetric measurement accuracy (simulated images) | Minimize difference compared to ground truth | Less than 1.5% volume measurement difference compared with the ground truth for simulated images (ellipsoid, crescent, and cylinder shapes) |
    | Intra-observer consistency (mean volume measurement difference between two physicians) | Minimize difference between observers | Retrospective clinical CT and MR patient studies: thoracic 0.4%, abdominal 1.5%, pelvic 2.5% |
    | Interactive registration error (phantom studies) | Minimize registration error | Mean 0.2394 mm (SD 0.2261 mm) on phantom studies scanned at different times with different positioning and orientations |
    | Initial automated registration error (patient studies with synthetic deformations) | Minimize registration error | Mean 0.5594 mm (SD 0.5448 mm) on patient studies with synthetic deformations |
    | Interactive registration error (retrospective patient studies) | Minimize registration error | Mean 0.5388 mm (SD 0.7150 mm) on retrospective patient studies scanned at different times during clinical practice |
    | Major functionality validation | N/A (qualitative feedback) | Software testing conducted at two clinical sites; physicians used the software to review CT and MR body imaging scans, validated major functionalities, and provided feedback in line with the intended use |
    | Overall safety and effectiveness | Device is safe and effective; no new safety risks | "The IQQA-BodyImaging labeling contains instructions for use and necessary cautions, warnings and notes to provide for safe and effective use of the device. Risk Management is ensured via the company's design control and risk management procedures. Potential hazards are controlled via software development and verification and validation testing." "Test results demonstrate that the device is safe, effective, and does not raise any new potential safety risks." |
    | Substantial equivalence | Equivalent to predicate devices in technical characteristics, principles of operation, and functional features, without new safety risks | "IQQA-BodyImaging and predicate devices are substantially equivalent in the areas of technical characteristics, principles of operation, and functional features. The new device does not raise any new potential safety risks and is equivalent in performance to the existing legally marketed devices." |
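The volumetric accuracy figure above (less than 1.5% versus ground truth) is a relative volume difference: a segmented volume is compared against the analytically known volume of a simulated shape. The sketch below illustrates the idea for one ellipsoid only; the grid size, voxel spacing, and semi-axes are invented for illustration and are not taken from the submission.

```python
import numpy as np

def ellipsoid_mask(shape, spacing, semi_axes):
    """Binary mask of an ellipsoid centered in a voxel grid.

    shape     : grid size in voxels (z, y, x)
    spacing   : voxel size in mm along each axis
    semi_axes : ellipsoid semi-axes (a, b, c) in mm
    """
    zz, yy, xx = np.indices(shape).astype(float)
    center = [(n - 1) / 2.0 for n in shape]
    # Physical coordinates of each voxel center relative to the grid center
    z = (zz - center[0]) * spacing[0]
    y = (yy - center[1]) * spacing[1]
    x = (xx - center[2]) * spacing[2]
    a, b, c = semi_axes
    return (z / a) ** 2 + (y / b) ** 2 + (x / c) ** 2 <= 1.0

spacing = (1.0, 1.0, 1.0)        # mm per voxel (illustrative)
semi_axes = (20.0, 15.0, 10.0)   # mm (illustrative)

mask = ellipsoid_mask((64, 64, 64), spacing, semi_axes)
voxel_volume = spacing[0] * spacing[1] * spacing[2]
measured = mask.sum() * voxel_volume               # voxel-counting volume
truth = 4.0 / 3.0 * np.pi * np.prod(semi_axes)     # analytic ground truth

pct_diff = abs(measured - truth) / truth * 100.0
print(f"volume difference vs ground truth: {pct_diff:.2f}%")
```

In a real validation, `mask` would come from the software's segmentation of the simulated image rather than from the shape definition itself; the comparison step is the same.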

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not explicitly stated as a single number for all tests.
      • Simulated images: An unspecified number of simulated images described as containing "ellipsoid, crescent, and cylinder shapes."
      • Retrospective clinical patient studies: An unspecified number of "clinical patient studies" of CT and MR for thoracic, abdominal, and pelvic body parts.
      • Phantom image pairs: An unspecified number of "phantom image pairs scanned at different times with different positioning and orientations."
      • Patient studies with synthetic deformations: An unspecified number of "patient studies with synthetic deformations."
      • Retrospective patient studies for registration: An unspecified number of "retrospective patient studies that are scanned at different times during clinical practice."
    • Data Provenance:
      • Simulated images: Artificially generated (simulated).
      • Clinical Patient Studies (for intra-observer consistency): Retrospective, specific country of origin not mentioned, but described as "clinical patient studies."
      • Phantom Studies (for registration): Acquired from physical phantoms, specific country/institution not mentioned.
      • Patient Studies (for automated and interactive registration): Retrospective, specific country of origin not mentioned, but described as "patient studies."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts:
      • For intra-observer consistency in volume measurement validation, "two physicians" were used.
      • For clinical site validation/feedback, "physicians" at "two clinical sites" were involved, but the specific number involved in establishing ground truth for the performance metrics is not detailed.
    • Qualifications of Experts: The document states "physicians" were used. No further details about their specific qualifications (e.g., specialty, years of experience, subspecialty) are provided.

    4. Adjudication Method for the Test Set

    • Intra-observer consistency: Implies a comparison between the measurements of two physicians, not necessarily an adjudication to establish a "ground truth" but rather to assess agreement between human operators using the device.
    • For other tests where "ground truth" is mentioned (e.g., volumetric measurement accuracy on simulated images, registration error relating to phantom studies), the method of establishing this ground truth is not specified as an adjudication process. The ground truth for simulated images would be known by design. For phantom studies, it's typically based on physical measurements or precise setup parameters.
    • No specific adjudication method like "2+1" or "3+1" is described.
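The registration results in section 1 are reported as a mean and standard deviation of a distance, which is consistent with a landmark-style target registration error (TRE): the residual distance between corresponding points after the computed transform is applied. The document does not describe how the errors were computed, so the following is only a generic sketch; the landmarks, noise level, and pure-translation "registration" are all invented.

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Per-landmark TRE: distance between fixed landmarks and the
    transformed moving landmarks, in the same units as the points."""
    mapped = transform(moving_pts)
    return np.linalg.norm(fixed_pts - mapped, axis=1)

# Illustrative landmarks (mm); the second scan is shifted and slightly noisy
rng = np.random.default_rng(0)
fixed = rng.uniform(0, 100, size=(10, 3))
true_shift = np.array([2.0, -1.0, 0.5])
moving = fixed - true_shift + rng.normal(0, 0.3, size=fixed.shape)

# Least-squares translation between the two point sets stands in for the
# registration algorithm in this toy example
estimated_shift = (fixed - moving).mean(axis=0)
errors = target_registration_error(fixed, moving, lambda p: p + estimated_shift)

print(f"mean TRE: {errors.mean():.4f} mm, SD: {errors.std(ddof=1):.4f} mm")
```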

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance

    • No MRMC comparative effectiveness study is described in the provided text. The document focuses on the performance of the software's sub-components (segmentation, measurement, registration) and intra-observer consistency when physicians use the tools.
    • There is no mention of comparing human readers with AI assistance versus without AI assistance to measure an effect size of improvement. The device is positioned as a tool to support physician evaluation, not explicitly to replace or augment their accuracy in a comparative study.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done

    • Yes, partially.
      • The "Volumetric Measurement Accuracy" test on simulated images would likely represent a standalone performance evaluation against a known ground truth.
      • The "Initial Automated Registration Error" on patient studies with synthetic deformations also appears to be a standalone measurement of the algorithm's performance before any interactive adjustment.
    • However, other tests, like "Intra-observer Consistency" and "Interactive Registration Error," specifically involve human interaction with the device.

    7. The Type of Ground Truth Used

    • Simulated Images: "Ground truth" was established by the design of the simulated objects (ellipsoid, crescent, cylinder shapes).
    • Phantom Studies (for registration): "Ground truth" would likely be derived from the known physical properties or precise measurements of the phantom and its setup.
    • For clinical patient studies (intra-observer consistency and general registration performance), the "ground truth" for the observed differences is not explicitly described as pathology or outcomes data. Instead, the evaluations focus on consistency between human measurements or the accuracy of the software against a reference (which for patient studies might be a manual measurement considered as the reference standard, though not specified).
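The consistency figures in section 1 (0.4% to 2.5%) are relative differences between paired volume measurements rather than comparisons against an external ground truth. Assuming each lesion was measured once by each of the two physicians, a mean pairwise percent difference could be computed as below; the measurement values are invented for illustration.

```python
import numpy as np

# Hypothetical paired volume measurements (mL) of the same lesions
physician_a = np.array([12.4, 33.1, 7.8, 51.0])
physician_b = np.array([12.5, 32.7, 7.9, 50.2])

# Relative difference per case, taken against the mean of the two readings
rel_diff = np.abs(physician_a - physician_b) / ((physician_a + physician_b) / 2.0)
print(f"mean volume measurement difference: {rel_diff.mean() * 100:.1f}%")
```

The submission does not state whether the denominator was one reading or the mean of both; the choice changes the result only slightly when the readings are close.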

    8. The Sample Size for the Training Set

    • Not provided. The document describes validation and testing, but it does not mention the sample size used for training the algorithms within the IQQA-BodyImaging Software.

    9. How the Ground Truth for the Training Set Was Established

    • Not provided. Since the training set sample size is not mentioned, neither is the method for establishing its ground truth.