510(k) Data Aggregation

    K Number: K992580
    Manufacturer:
    Date Cleared: 1999-10-18 (77 days)
    Product Code:
    Regulation Number: 892.1560
    Reference & Predicate Devices:
    Device Name: FETAL ASSESSMENT CAP

    Intended Use

    Fetal Assessment CAP® 1.1 is intended to retrieve, analyze, and store digital ultrasound images for computerized 3-dimensional image processing.

    Fetal Assessment CAP can import digital 2D or 3D image file formats for 3D display.

    The software can be used with ultrasound systems previously cleared for B-mode imaging in obstetrics, gynecology, small organ, abdominal, endocavity, neurological, and intraoperative uses.

    Device Description

    The Fetal Assessment CAP® 1.1 is a software module for high-performance computer systems based on the Microsoft Windows NT™ 4.0 operating system. Fetal Assessment CAP® 1.1 is proprietary software for the analysis, storage, retrieval, and reconstruction of ultrasound B-mode images.

    The data can be acquired by a TomTec acquisition station or by ultrasound systems capable of B-mode (2D) acquisition. With the Fetal Assessment CAP, a 3-dimensional display can be reconstructed.
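    The summary does not disclose how the reconstruction is implemented, but the basic idea it describes, assembling sequential 2D B-mode frames into a volume that can then be viewed from other orientations, can be sketched as follows. This is an illustrative example only; the function names and the toy data are assumptions, not the device's actual method.

    ```python
    # Illustrative sketch: stack sequential 2D B-mode frames into a 3D
    # volume and re-slice it along another axis (multiplanar reformatting).
    # This is NOT the device's disclosed algorithm, just the general idea.

    def stack_frames(frames):
        """Treat a list of 2D frames (rows x cols) as a volume indexed
        volume[z][y][x], where z is the acquisition order."""
        return list(frames)

    def sagittal_slice(volume, x):
        """Extract the plane at column x across all frames; the result
        is indexed result[z][y]."""
        return [[row[x] for row in frame] for frame in volume]

    # Two hypothetical 2x2 frames acquired at successive probe positions
    frames = [
        [[1, 2],
         [3, 4]],
        [[5, 6],
         [7, 8]],
    ]
    volume = stack_frames(frames)
    print(sagittal_slice(volume, 0))  # -> [[1, 3], [5, 7]]
    ```

    A real system would also handle probe-position registration and interpolation between frames; the sketch shows only the reindexing step that turns a stack of 2D acquisitions into an orthogonal view.
    
    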

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the "Fetal Assessment CAP® 1.1" software. It focuses primarily on the device description, its intended use, comparison to a predicate device, and a general statement about testing.

    Here's an analysis of the acceptance criteria and study information, based only on the provided text:

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria                  Reported Device Performance
    Design Intent                        Device performance satisfies design intent.
    System Performance Specifications    Device performance conforms to system performance specifications.

    2. Sample size used for the test set and the data provenance

    • Sample Size: Not specified. The document states "Testing was performed according to internal company procedures" and "Software testing and validation were done at the module and system level." This does not provide details on the number of cases or images used.
    • Data Provenance: Not specified. There is no mention of where the data came from (e.g., country of origin, specific clinics) or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This information is not provided in the document. The text only mentions "Test results were reviewed by designated technical professionals before software proceeded to release," but it does not specify the number or qualifications of these professionals, nor does it explicitly state they established "ground truth" for the test set in a medical sense.

    4. Adjudication method for the test set

    • This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No, an MRMC comparative effectiveness study was not done or at least not described. The submission is for a standalone software module that reconstructs 3D images. It states, "the Fetal Assessment CAP is a simple to use, qualitative 3D ultrasound tool, which provides 3D structures without any measurement functionality." There is no mention of human readers using the system in a comparative effectiveness study.

    6. If a standalone performance assessment (i.e., algorithm only, without human-in-the-loop) was done

    • Yes, a standalone performance assessment was done. The document describes the Fetal Assessment CAP as "proprietary software for the analysis, storage, retrieval and reconstruction of ultrasound B-mode images" and "can import digital 2D or 3D image file formats for 3D display." The testing focused on whether the software itself satisfied its design intent and conformed to system specifications. The "performance" described is the functionality of the software to process and display images, rather than a diagnostic performance metric.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • The term "ground truth" in a medical context (e.g., confirmed diagnosis by pathology) is not applicable here. The software's function is "3 dimensional display" and "reconstruction" of ultrasound images, without measurement functionality. The "ground truth" for this type of device would likely relate to the accuracy and fidelity of the 3D reconstruction based on the input 2D data, and the correct functioning of storage, retrieval, and analysis features. The document states "Test results were reviewed by designated technical professionals," implying validation against the software's specified functional requirements rather than medical ground truth.

    8. The sample size for the training set

    • This information is not provided. As the device is referred to as "proprietary software for the analysis, storage, retrieval and reconstruction of ultrasound B-mode images," and not explicitly an AI/ML algorithm (especially given the 1999 date), it's possible there wasn't a "training set" in the modern sense of machine learning. The focus was on software validation.

    9. How the ground truth for the training set was established

    • This information is not provided, and as noted above, a distinct "training set" with established ground truth as understood in current AI/ML might not have been relevant or formalized for this type of software in 1999.