
510(k) Data Aggregation

    K Number
    K200356
    Device Name
    MEDO ARIA
    Manufacturer
    Date Cleared
    2020-06-11 (119 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    MEDO ARIA is designed to view and quantify ultrasound image data using machine learning techniques to aid trained medical professionals in diagnosis of developmental dysplasia of the hip (DDH). The device is intended to be used on neonates and infants, aged 0 to 12 months.

    Device Description

    MEDO ARIA is a cloud-based standalone software as a medical device (SaMD) that helps qualified users with image-based assessment of developmental dysplasia of the hip (DDH) in pediatric patients (ages 0 to 12 months). It is designed to support the workflow by helping the radiologist evaluate, quantify, and generate reports for hip images. MEDO ARIA takes imported Digital Imaging and Communications in Medicine (DICOM) images from ultrasound scanners as input and allows users to upload, browse, and view images, measure alpha angle and acetabular coverage, and manipulate 2D and 3D infant hip ultrasound images, as well as create and finalize examination reports. It provides users with a specific toolset for viewing pediatric ultrasound hip images, placing landmarks, and creating reports.
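    The alpha angle and acetabular coverage mentioned above are geometric quantities derived from user-placed landmarks. The document does not describe MEDO ARIA's internal computation; the following is a minimal sketch, assuming the alpha angle is the angle between two landmark-defined lines (baseline and bony roof) and coverage is a simple distance ratio. All function and parameter names are illustrative, not taken from the device.

    ```python
    import math

    def alpha_angle(p1, p2, q1, q2):
        """Angle in degrees between line p1->p2 (e.g., iliac baseline)
        and line q1->q2 (e.g., bony roof line). Points are (x, y) tuples."""
        v = (p2[0] - p1[0], p2[1] - p1[1])
        w = (q2[0] - q1[0], q2[1] - q1[1])
        dot = v[0] * w[0] + v[1] * w[1]
        cos_a = dot / (math.hypot(*v) * math.hypot(*w))
        # Clamp to guard against floating-point drift outside [-1, 1].
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

    def coverage_ratio(covered_distance, head_diameter):
        """Distance ratio: covered portion of the femoral head over its diameter."""
        return covered_distance / head_diameter
    ```

    In practice such measurements would be taken on pixel coordinates and scaled by the DICOM pixel spacing; that step is omitted here.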

    AI/ML Overview

    The following analysis extracts the acceptance criteria and study details for the MEDO ARIA device from the provided text.

    The document, a 510(k) summary for the MEDO ARIA device, offers limited detail on acceptance criteria or on any dedicated study demonstrating that the device meets them. It focuses primarily on demonstrating substantial equivalence to a predicate device and on outlining the device's features and indications for use.

    However, we can infer some information from the "Performance Data" and the general context.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria or detailed performance metrics. It generally states: "Safety and performance of MEDO ARIA have been evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing."

    Without specific numerical targets, it's impossible to create a precise table. However, based on the functionalities listed, the implied performance criteria would likely revolve around the accuracy, precision, and usability of its quantitative analysis tools (alpha angle and coverage measurement), landmark placement (manual and semi-automatic), and Graf classification.

    Implied Acceptance Criteria and Reported Performance (Inferred)

    Acceptance Criterion (Inferred) | Reported Device Performance
    Accuracy of alpha angle measurement | No specific numerical accuracy reported. The device provides quantitative analysis for "Angle (alpha angle)"; its performance has been "evaluated and verified in accordance with software specifications."
    Accuracy of acetabular coverage measurement | No specific numerical accuracy reported. The device provides quantitative analysis for "Distance ratio (coverage)"; its performance has been "evaluated and verified in accordance with software specifications."
    Accuracy of semi-automatic landmark placement | No specific numerical accuracy reported. The device supports "Semi-automatic landmark placement," specified as "user-modifiable"; its performance has been "evaluated and verified in accordance with software specifications."
    Correctness of Graf hip classification | No specific agreement rate or correctness metric reported. The device provides "Lookup-table-based Graf Classification," which is "user-modifiable"; its performance has been "evaluated and verified in accordance with software specifications."
    Software functionality and usability (image display, navigation, report generation) | All key features listed (2D visualization, slice-scroll, manual/semi-automatic landmark placement, alpha/coverage measurements, Graf classification, report generation) are presumed to function as intended and meet internal specifications, based on "software verification and validation testing."
    Compliance with software standards | The device was evaluated in accordance with IEC 62304:2006/AC:2015 (Medical device software - Software life cycle processes) and the FDA guidance document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices."
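    The "Lookup-table-based Graf Classification" referenced in the submission is not specified further. As an illustration only, a lookup over commonly cited Graf alpha-angle bands might look like the sketch below; the actual table, thresholds, and subtyping logic used by MEDO ARIA are not disclosed in the document.

    ```python
    def graf_type(alpha_deg):
        """Map an alpha angle (degrees) to a Graf type using commonly cited
        thresholds. Illustrative only; not the device's actual lookup table."""
        if alpha_deg >= 60:
            return "I"         # mature hip
        if alpha_deg >= 50:
            return "IIa/IIb"   # immature; IIa vs IIb depends on patient age
        if alpha_deg >= 43:
            return "IIc"       # critical range
        return "D/III/IV"      # decentering/dislocated; beta angle needed to subtype
    ```

    Note that a full Graf classification also uses the beta angle and patient age, which is why the device keeps the result "user-modifiable."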

    2. Sample Size for the Test Set and Data Provenance

    The document does not provide any specific information regarding the sample size used for the test set or the provenance (e.g., country of origin, retrospective/prospective) of the data used for any testing.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not provide any information about the number or qualifications of experts used to establish ground truth for a test set.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for a test set.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it provide any effect size for human reader improvement with AI assistance. The device is described as an aid to "trained medical professionals," implying human-in-the-loop, but formal comparative studies are not detailed.

    6. If a Standalone Study (algorithm only without human-in-the-loop performance) was done

    The entire description of the software, particularly its role in "helping the radiologist to evaluate, quantify, and generate reports" and "aid trained medical professionals in diagnosis," suggests a human-in-the-loop context. However, the quantitative analyses (alpha angle, coverage, Graf classification) are likely performed by the algorithm in a standalone manner before the user's final decision. The document does not explicitly state if a standalone performance study (algorithm only) was conducted for these specific measurements, separate from interaction with a human.

    7. The Type of Ground Truth Used

    The document does not specify the type of ground truth used (e.g., expert consensus, pathology, outcomes data). Given the nature of hip ultrasound measurements, expert consensus from radiologists or orthopedic specialists reviewing the images and confirming measurements would be a likely ground truth, but this is not explicitly stated.

    8. The Sample Size for the Training Set

    The document does not provide any information regarding the sample size used for the training set.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide any information on how the ground truth for the training set was established.


    Summary of Missing Information:

    The provided 510(k) summary is very high-level regarding the performance evaluation. It lacks critical details that would typically be found in a detailed clinical or validation study report, such as:

    • Specific quantitative acceptance criteria (e.g., "alpha angle measurement must be within X degrees of ground truth").
    • Performance metrics (e.g., sensitivity, specificity, AUC, mean absolute error, inter-reader variability improvement).
    • Details about the datasets used (size, characteristics, provenance).
    • Information about expert review for ground truth (number, qualifications, adjudication).
    • Details of any comparative studies (MRMC or standalone algorithm performance).
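    A quantitative acceptance criterion of the kind listed above (e.g., "alpha angle measurement must be within X degrees of ground truth") would typically be verified as a mean-absolute-error check against expert annotations. The sketch below shows such a check under that assumption; the tolerance value and function names are hypothetical, not from the submission.

    ```python
    def mean_absolute_error(measured, reference):
        """Mean absolute error between paired angle measurements (degrees)."""
        if len(measured) != len(reference) or not measured:
            raise ValueError("need equal-length, non-empty sequences")
        return sum(abs(m - r) for m, r in zip(measured, reference)) / len(measured)

    def meets_criterion(measured, reference, tol_deg=3.0):
        """Hypothetical acceptance check: MAE within tol_deg degrees."""
        return mean_absolute_error(measured, reference) <= tol_deg
    ```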

    The current document focuses on demonstrating that software verification and validation activities were performed in accordance with applicable standards and that the device is substantially equivalent to a predicate, without delving into the specifics of how well it performs against defined metrics.
