
510(k) Data Aggregation

    K Number: K222745
    Device Name: Axial3D Insight
    Date Cleared: 2023-07-03 (294 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    Axial3D Insight is intended for use as a cloud-based service and image segmentation framework for the transfer of DICOM imaging information from a medical scanner to an output file.

    The Axial3D Insight output file can be used for the fabrication of physical replicas using additive manufacturing methods.

    The output file or physical replica can be used for treatment planning.

    The output file or physical replica can be used for diagnostic purposes in the fields of orthopedic trauma, orthopedic, maxillofacial, and cardiovascular applications.

    Axial3D Insight should be used in conjunction with other diagnostic tools and expert clinical judgment.
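
    The core pipeline described above, DICOM imaging data in and a 3D-printable output file out, can be illustrated with a minimal sketch. This is not Axial3D's implementation: a fixed Hounsfield-unit threshold stands in for the Axial™ machine-learning segmentation, and the pydicom, scikit-image, and numpy-stl libraries, the dicom_series_to_stl helper, and the threshold value are all illustrative choices.

```python
# Minimal sketch: DICOM series -> threshold segmentation -> STL mesh.
# NOT Axial3D's pipeline; a fixed HU threshold stands in for the Axial(TM)
# machine-learning segmentation described in this submission.
import glob

import numpy as np
import pydicom
from skimage import measure
from stl import mesh  # numpy-stl


def dicom_series_to_stl(dicom_dir, out_path, hu_threshold=300.0):
    """Load a CT series, segment bone by threshold, write a printable STL."""
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

    # Stack into a 3D volume and convert raw values to Hounsfield units.
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    volume = volume * float(slices[0].RescaleSlope) \
             + float(slices[0].RescaleIntercept)

    # Physical voxel spacing (z, y, x) so the mesh is true to scale.
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    dz = abs(float(slices[1].ImagePositionPatient[2])
             - float(slices[0].ImagePositionPatient[2]))

    # Marching cubes turns the voxel segmentation into a triangle mesh.
    verts, faces, _, _ = measure.marching_cubes(
        volume, level=hu_threshold, spacing=(dz, dy, dx))

    # Pack the triangles into an STL container and save.
    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    surface.vectors[:] = verts[faces]
    surface.save(out_path)


dicom_series_to_stl("./ct_series", "model.stl")
```

    Marching cubes is the standard route from a voxel segmentation to the triangle mesh that additive-manufacturing toolchains consume; a production service would replace the threshold with learned segmentation and add mesh repair and validation steps.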

    Device Description

    Axial3D Insight is a secure, highly available cloud-based image processing, segmentation, and 3D modelling framework for the transfer of imaging information, delivered either as a digital output file or as a 3D printed physical model.

    AI/ML Overview

    The acceptance criteria and the studies demonstrating that the device meets them are described below, based on the provided text.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not provide an explicit table of acceptance criteria with quantitative metrics. However, it describes two validation studies and their outcomes, implying that meeting those outcomes constituted acceptance.

    Inferred Acceptance Criteria & Reported Performance:

    • Clinical Segmentation Performance: consistent and diagnostically acceptable segmentation as judged by radiologists. Reported: "The Clinical Segmentation Performance study was conducted with 3 radiologists reviewing the segmentation of 12 cases across the fields of orthopedics, trauma, maxillofacial and cardiovascular. Axial3D adopted a peer reviewed medical imaging review framework of RADPEER to capture the assessment and feedback from the radiologists involved – all cases were scored within the acceptance criteria of 1 or 2a [1]." This indicates successful segmentation per expert review; a sketch of the implied pass/fail rule follows this list.
    • Intended Use Validation (3D Models): 3D models produced by the device satisfy end-user needs and indications for use. Reported: "The Intended Use validation study of the device was conducted with 9 physicians reviewing 12 cases across the fields of Orthopedics, Trauma, Maxillofacial, and Cardiovascular, as defined in the Intended Use statement of the device. This study concluded successful validation of the 3D models produced by Axial3D demonstrating the device outputs satisfied end user needs and indications for use."
    • Software Verification & Validation: all software requirements and risk analysis successfully verified and traced. Reported: "Axial3D has conducted software verification and validation, in accordance with the FDA guidance, General Principles of Software Validation; Final Guidance for Industry and FDA Staff, issued on January 11, 2002. All software requirements and risk analysis have been successfully verified and traced."
    • Machine Learning Model Validation: independent verification and validation of machine learning models before inclusion. Reported: "Axial™ machine learning models were independently verified and validated before inclusion in the Axial3D Insight device." (Table 5-4 details the number of images, slice spacing, and pixel size used to validate the Cardiac CT/CTa, Neuro CT/CTa, Ortho CT, and Trauma CT models.)
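
    For the clinical segmentation row above, the implied pass/fail rule is that every RADPEER score, across all 12 cases and 3 radiologists, is 1 or 2a. A minimal sketch of that check; the case IDs and scores are hypothetical, not the study's data.

```python
# Implied acceptance rule: every RADPEER score must be 1 or 2a.
ACCEPTABLE = {"1", "2a"}

# case_id -> scores from the 3 reviewing radiologists (hypothetical values)
scores = {
    "ortho-01":  ["1", "1", "2a"],
    "cardio-02": ["1", "2a", "1"],
    "maxfac-03": ["2a", "1", "1"],
}

failing = {case: s for case, s in scores.items() if not set(s) <= ACCEPTABLE}
print("all cases within acceptance criteria" if not failing
      else f"failing cases: {failing}")
```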

    2. Sample Sizes and Data Provenance

    • Test Set Sample Sizes:

      • Clinical Segmentation Performance Study: 12 cases
      • Intended Use Validation Study: 12 cases
      • Machine Learning Model Validation:
        • Cardiac CT/CTa: 4,838 images
        • Neuro CT/CTa: 4,041 images
        • Ortho CT: 10,857 images
        • Trauma CT: 19,134 images
    • Data Provenance: The document does not state the country of origin or whether the data was retrospective or prospective. It only mentions the imaging scanner manufacturers and models used in the validation datasets: GE Medical Systems, Siemens, Philips, and Toshiba.

    3. Number of Experts and Qualifications

    • Clinical Segmentation Performance Study: 3 radiologists. No specific years of experience are mentioned, but they are described as "radiologists."
    • Intended Use Validation Study: 9 physicians. No specific qualifications (e.g., orthopedic surgeon, maxillofacial surgeon, cardiologist) or years of experience are mentioned, only "physicians."

    4. Adjudication Method

    • For the Clinical Segmentation Performance Study, the RADPEER framework was adopted, and all cases were scored within the acceptance criteria of 1 or 2a. While RADPEER is a peer review system, the specific adjudication method for discrepancies among the 3 radiologists (e.g., majority vote, consensus meeting, 2+1, 3+1) is not explicitly detailed; the statement that all cases met the acceptance criteria suggests agreement or successful resolution. (A generic 2+1 rule is sketched after this list.)
    • For the Intended Use Validation Study, no adjudication method is explicitly described beyond "9 physicians reviewing 12 cases" and concluding "successful validation."
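
    Since the adjudication scheme is not specified, the sketch below shows a generic 2+1 rule of the kind listed above: two primary readers, with a third arbitrating only when they disagree. The function name and labels are hypothetical.

```python
# Generic "2+1" adjudication: the third reader is consulted only on ties.
def adjudicate_2_plus_1(reader1, reader2, reader3):
    """Return the adjudicated case-level label."""
    if reader1 == reader2:
        return reader1  # primary readers agree; third reader is unused
    return reader3      # disagreement: the third reader arbitrates

print(adjudicate_2_plus_1("acceptable", "acceptable", "unacceptable"))  # acceptable
print(adjudicate_2_plus_1("acceptable", "unacceptable", "acceptable"))  # acceptable
```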

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No. An MRMC comparative effectiveness study evaluating how much human readers improve with AI assistance versus without it is not mentioned. The studies described validate the device's output and the AI models rather than human-in-the-loop performance improvement. The text notes that the Axial™ machine learning models generate an initial segmentation while the final segmentation and validation are performed by "Axial3D trained staff", implying a human-in-the-loop process, but no comparative study measuring an effect size is presented. (A sketch of the effect-size computation such a study would report follows.)
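
    For context, the effect size an MRMC study would report is the change in reader performance with versus without AI assistance. A minimal sketch of that computation with a bootstrap confidence interval; the per-reader AUC values are invented, and a full MRMC analysis (e.g., Obuchowski-Rockette) would also model case variability.

```python
# Hypothetical MRMC effect size: mean per-reader AUC gain with AI assistance.
import numpy as np

rng = np.random.default_rng(0)
auc_unaided = np.array([0.82, 0.79, 0.85, 0.80, 0.83])  # one AUC per reader
auc_aided   = np.array([0.86, 0.84, 0.88, 0.83, 0.87])  # same readers, with AI

effect = auc_aided - auc_unaided
boot = [rng.choice(effect, size=effect.size, replace=True).mean()
        for _ in range(10_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean AUC improvement: {effect.mean():.3f} (95% CI {lo:.3f} to {hi:.3f})")
```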

    6. Standalone (Algorithm Only) Performance

    • Yes, standalone performance of the machine learning models was assessed. The document states: "Axial™ machine learning models were independently verified and validated before inclusion in the Axial3D Insight device." Table 5-4 lists the number of images used for validation in each clinical area (Cardiac, Neuro, Ortho, Trauma CT), indicating a quantitative assessment of the models themselves. However, the specific metrics (e.g., Dice score, sensitivity, specificity) for this standalone performance are not provided; their standard definitions are sketched below.
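
    Although those metrics are not reported, their definitions for binary segmentation masks are standard; a minimal NumPy sketch (the helper name is ours):

```python
# Standard segmentation metrics over boolean masks of equal shape.
import numpy as np

def segmentation_metrics(pred, truth):
    """Return (Dice, sensitivity, specificity) for boolean masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn)    # overlap between pred and truth
    sensitivity = tp / (tp + fn)          # fraction of truth recovered
    specificity = tn / (tn + fp)          # fraction of background kept clean
    return dice, sensitivity, specificity
```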

    7. Type of Ground Truth Used

    • For the Clinical Segmentation Performance Study: The ground truth was established by the consensus or review of the 3 radiologists, consistent with a form of expert consensus.
    • For the Intended Use Validation Study: The ground truth was based on the expert clinical judgment of the 9 physicians, who reviewed the 3D models and concluded their utility for intended use.
    • For the Machine Learning Model Validation: The document states that "The Axial™ machine learning model training data used during the algorithm development was explicitly kept separate and independent from the validation data used." While it does not explicitly state the type of ground truth for this segment, it can be inferred that the models' outputs were compared against expert-derived segmentations. (One common way to merge several experts' masks into a single reference is sketched after this list.)
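
    Where expert-derived segmentations serve as the reference, one common way (unconfirmed here) to merge several experts' masks into a single ground truth is a voxel-wise majority vote, sketched below; more sophisticated schemes such as STAPLE additionally weight experts by estimated reliability.

```python
# Voxel-wise majority vote over expert masks (one possible consensus scheme).
import numpy as np

def majority_vote(masks):
    """Consensus mask: a voxel is foreground if most experts marked it."""
    stacked = np.stack(masks)                    # shape: (n_experts, *volume)
    return stacked.sum(axis=0) > len(masks) / 2
```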

    8. Sample Size for the Training Set

    • The document states: "The Axial™ machine learning model training data used during the algorithm development was explicitly kept separate and independent from the validation data used." However, the sample size for the training set is not provided; only the validation sample sizes are listed (Table 5-4). (A sketch of the patient-level split such separation usually implies follows.)
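
    "Kept separate and independent" is typically enforced by splitting at the patient level rather than the image level, so that no patient's slices land in both sets. A minimal sketch of such a split, with hypothetical patient IDs:

```python
# Patient-level split: independence holds at the patient, not slice, level.
import random

def patient_level_split(patient_ids, val_fraction=0.2, seed=42):
    ids = sorted(patient_ids)
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_fraction)
    return set(ids[n_val:]), set(ids[:n_val])  # (train, validation)

train, val = patient_level_split([f"pt{i:03d}" for i in range(100)])
assert not (train & val)  # no patient appears in both sets
```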

    9. How Ground Truth for Training Set was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It only implies that training data was distinct from validation data. Given the nature of medical image segmentation, it is highly probable that the ground truth for the training set was established through manual segmentation by human experts (e.g., radiologists, clinical experts), but this is an inference and not explicitly stated in the provided text.