K Number
K232841
Device Name
Axial3D Insight
Date Cleared
2023-11-15

(62 days)

Product Code
Regulation Number
892.2050
Intended Use

Axial3D Insight is intended for use as a cloud-based service and image segmentation framework for the transfer of DICOM imaging information from a medical scanner to an output file.

The Axial3D Insight output file can be used for the fabrication of a physical replica using additive manufacturing methods.

The output file or physical replica can be used for treatment planning.

The output file or physical replica can be used for diagnostic purposes in the field of trauma, orthopedic, maxillofacial, and cardiovascular applications.

Axial3D Insight should be used with other diagnostic tools and expert clinical judgment.

Device Description

Axial3D Insight is a secure, highly available cloud-based image processing, segmentation, and 3D modelling framework for the transfer of imaging information, delivered either as a digital output file or as a 3D printed physical model.

AI/ML Overview

Here's a breakdown of the acceptance criteria and study information for the Axial3D Insight device, based on the provided text:

Acceptance Criteria and Device Performance

  • Clinical segmentation performance (RADPEER score): All cases scored within the acceptance criterion of 1 or 2a.
  • Intended use validation study: 3D models produced by Axial3D demonstrated satisfaction of end-user needs and indications for use.
  • Phantom testing (Origin One printer): The printer reproduced the required geometry within the acceptance criterion of ±0.3 mm.
  • Standalone performance of AI models: No direct acceptance criteria are stated, as AI outputs are not used in isolation.

Note: The document states that the update to the product does not affect the current software validation, and the software portion is not being updated. Therefore, the existing validation testing from the predicate device (K222745) is considered applicable.

Study Details

Clinical Segmentation Performance Study

  1. Sample Size for Test Set: 12 cases.
  2. Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective).
  3. Number of Experts and Qualifications: 3 radiologists. No specific years of experience or subspecialty are provided, beyond being "radiologists."
  4. Adjudication Method: Not explicitly stated as 2+1 or 3+1. The study adopted a "peer-reviewed medical imaging review framework of RADPEER" to capture assessment and feedback.
  5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study: Not mentioned. This study focused on the performance of the device's segmentation, not human reader improvement with AI assistance.
  6. Standalone Performance Study: The output of the machine learning models is not used in isolation. The segmentations are further refined and validated by Axial3D trained staff. Therefore, a standalone performance study for the AI component (without human oversight) is not presented as the final product.
  7. Type of Ground Truth: Not explicitly stated, though implicitly refers to the standard of radiologists reviewing the segmentation.
  8. Sample Size for Training Set: Not specified for the Clinical Segmentation Performance Study.
  9. How Ground Truth for Training Set was Established: Not specified for the Clinical Segmentation Performance Study.
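The RADPEER criterion above (every case must score 1 or 2a) amounts to a simple all-cases predicate. The sketch below illustrates that check; the function name and the example scores are hypothetical, not taken from the submission.

```python
# Hedged sketch: verifying the stated RADPEER acceptance criterion,
# i.e. every reviewed case scores 1 or 2a. Scores shown are invented.

ACCEPTABLE_SCORES = {"1", "2a"}

def meets_acceptance(case_scores):
    """Return True only if every case's RADPEER score is 1 or 2a."""
    return all(score in ACCEPTABLE_SCORES for score in case_scores)

# Example: 12 reviewed cases, matching the test-set size described above.
scores = ["1", "1", "2a", "1", "2a", "1", "1", "1", "2a", "1", "1", "2a"]
print(meets_acceptance(scores))           # True
print(meets_acceptance(scores + ["2b"]))  # False: one case outside criterion
```

A single out-of-criterion score (e.g. 2b or 3) fails the whole set, which matches the "all cases scored within the acceptance criteria" framing of the study.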

Intended Use Validation Study

  1. Sample Size for Test Set: 12 cases.
  2. Data Provenance: Not explicitly stated.
  3. Number of Experts and Qualifications: 9 physicians. No specific qualifications are provided beyond "physicians."
  4. Adjudication Method: Not explicitly stated.
  5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study: Not mentioned.
  6. Standalone Performance Study: Not applicable; this study validated the 3D models with physician review.
  7. Type of Ground Truth: Implicitly based on "end user needs and indications for use" as assessed by physicians.
  8. Sample Size for Training Set: Not specified.
  9. How Ground Truth for Training Set was Established: Not specified.

Axial™ Machine Learning Validation

This section describes the validation of the underlying machine learning models, which are used to generate initial segmentations, but their output is not used in isolation.

  1. Sample Size for Test Set (Validation Data):
    • Cardiac CT/CTa: 4,838 images
    • Neuro CT/CTa: 4,041 images
    • Ortho CT: 10,857 images
    • Trauma CT: 19,134 images
  2. Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). However, a list of various CT scanner manufacturers and models (GE Medical Systems, Siemens, Philips, Toshiba) indicates a diversity of acquisition sources.
  3. Number of Experts and Qualifications: Not mentioned for this specific validation as it focuses on model output.
  4. Adjudication Method: Not mentioned.
  5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study: Not mentioned.
  6. Standalone Performance Study: The document explicitly states that the "output of these models is not used in isolation to produce the final 3D patient specific model." The segmentations are "used by Axial3D trained staff who complete the final segmentation and validation." Therefore, this is not a standalone performance of the AI in a clinical workflow, but an internal validation of the AI component before human refinement.
  7. Type of Ground Truth: Not explicitly stated for this machine learning validation. Implicitly, it would be expertly generated ground truth for segmentation.
  8. Sample Size for Training Set: Not specified in the provided text, but it states that the "training data used during the algorithm development was explicitly kept separate and independent from the validation data used."
  9. How Ground Truth for Training Set was Established: Not specified in the provided text.
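The document does not name the metric used to score the model segmentations against expert ground truth. The Dice coefficient is a common choice for comparing binary segmentation masks and is sketched here purely as an illustration, not as the method Axial3D used.

```python
# Illustrative only: Dice similarity between a predicted binary mask and
# an expert ground-truth mask, both flattened to sequences of 0/1.

def dice_coefficient(pred, truth):
    """2|A∩B| / (|A|+|B|); returns 1.0 for two empty masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total

pred  = [0, 1, 1, 1, 0, 0]  # hypothetical model output
truth = [0, 1, 1, 0, 0, 0]  # hypothetical expert annotation
print(dice_coefficient(pred, truth))  # 0.8
```

A Dice score of 1.0 means perfect overlap; in practice, per-anatomy thresholds would be set during validation, with the training data kept separate from the validation data as the document describes.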

Phantom Testing (for 3D Printer Verification)

  1. Sample Size for Test Set: Not explicitly stated as a number of phantoms, but involves "3D test phantoms provided by the National Institute of Standards and Technology (NIST)."
  2. Data Provenance: NIST test phantoms.
  3. Number of Experts and Qualifications: Not applicable, as this is a technical verification of printer accuracy.
  4. Adjudication Method: Not applicable.
  5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study: Not applicable.
  6. Standalone Performance Study: Not applicable.
  7. Type of Ground Truth: Accuracy measurements against a known NIST test phantom.
  8. Sample Size for Training Set: Not applicable.
  9. How Ground Truth for Training Set was Established: Not applicable.
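The ±0.3 mm phantom criterion is a per-feature dimensional tolerance check. The sketch below shows that comparison; the feature dimensions are invented examples, not NIST phantom values.

```python
# Hedged sketch: checking printed phantom features against the stated
# ±0.3 mm acceptance criterion. Measurements below are hypothetical.

TOLERANCE_MM = 0.3  # acceptance criterion stated in the summary

def within_tolerance(nominal_mm, measured_mm, tol=TOLERANCE_MM):
    """True if the printed feature deviates from nominal by at most tol."""
    return abs(measured_mm - nominal_mm) <= tol

# Hypothetical (nominal, measured) feature pairs in millimetres:
features = [(10.0, 10.12), (5.0, 4.85), (20.0, 20.28)]
print(all(within_tolerance(n, m) for n, m in features))  # True
```

Verification of this kind is pass/fail per feature: the printer meets the criterion only if every measured dimension falls within the tolerance band.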

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).