510(k) Data Aggregation (220 days)
Axial3D Insight is intended for use as a cloud-based service and image segmentation framework for the transfer of DICOM imaging information from a medical scanner to an output file.
The Axial3D Insight output file can be used for the fabrication of physical replicas of the output file using additive manufacturing methods. The output file or physical replica can be used for treatment planning.
The output file or the physical replica can be used for diagnostic purposes in the field of trauma, orthopedic, maxillofacial and cardiovascular applications. Axial3D Insight should be used in conjunction with other diagnostic tools and expert clinical judgment.
Axial3D Insight is a secure, highly available cloud-based image processing, segmentation and 3D modelling framework for the transfer of imaging information either as a digital file or as a 3D printed physical model.
The FDA 510(k) clearance letter and supporting documentation for Axial3D Insight (K250369) detail the device's acceptance criteria and the studies performed to demonstrate its performance.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes two main validation studies: a "Clinical Segmentation Performance study" for the overall Axial3D Insight software, and "AxialML Machine Learning Validation" for the underlying machine learning models. The acceptance criteria for the Clinical Segmentation Performance study are described in terms of a peer-reviewed medical imaging review framework (RADPEER). For the AxialML Machine Learning Validation, the acceptance criteria are based on quantitative metrics demonstrating "equivalence or improvement" compared to the original model.
| Acceptance Criteria Category | Specific Metric/Mechanism | Acceptance Threshold/Method | Reported Device Performance |
|---|---|---|---|
| Clinical Segmentation Performance (Axial3D Insight) | Radiologist Review via RADPEER Framework | All cases scored within RADPEER acceptance criteria of 1 or 2a. | All cases were scored within the acceptance criteria of 1 or 2a. |
| Intended Use Validation (Axial3D Insight) | Physician Review of 3D Models | Successfully validated, satisfying end user needs and indications for use. | Concluded successful validation; 3D models satisfied end user needs and indications for use. |
| AxialML Machine Learning Model Validation (PCCP) | Quantitative 3D Medical Image Segmentation Metric Analysis (Dice Coefficient, Pixel Accuracy, AUC, Precision, Recall) | Performance must demonstrate equivalence or improvement compared to the original submission model version. | Not explicitly reported as a single summary metric, but the document states these metrics are used to ensure the model "consistently meet performance standards" and for successful validation in line with the modification protocol. |
| AxialML Machine Learning Model Validation (PCCP) | Qualitative Assessment by Medical Visualization Engineers | Fixed evaluation methodology to define improved, equivalent, or reduced performance against AxialML Model Design Input Specifications. | Confirmed validation by producing objective evidence that each AxialML Model Design Input Specification has been met and the model output supports Axial Staff in completing anatomical segmentation. |
| AxialML Machine Learning Model Validation (PCCP) | Quantitative Assessment using Expert Reference Standard (DICE, AUC, Precision, Accuracy, Recall) | Mean of identified quantitative metrics must demonstrate equivalence or an improvement for the proposed modified AxialML model. | Not explicitly reported as a single summary metric, but this is the criterion for successful validation. |
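The quantitative segmentation metrics named in the table are standard measures over binary masks. The sketch below shows how Dice coefficient, pixel accuracy, precision, and recall are conventionally computed; the function names, mask layout, and toy data are illustrative and are not taken from the K250369 submission (AUC is omitted here because it requires continuous model scores rather than binary masks).

```python
# Conventional binary-segmentation metrics (Dice, pixel accuracy,
# precision, recall) over flat 0/1 masks. Names and toy data are
# illustrative assumptions, not details from the 510(k) submission.

def confusion_counts(pred, truth):
    """Count TP/FP/FN/TN over two equal-length flat 0/1 masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    return tp, fp, fn, tn

def dice(pred, truth):
    tp, fp, fn, _ = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

def pixel_accuracy(pred, truth):
    tp, fp, fn, tn = confusion_counts(pred, truth)
    return (tp + tn) / (tp + fp + fn + tn)

def precision(pred, truth):
    tp, fp, _, _ = confusion_counts(pred, truth)
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(pred, truth):
    tp, _, fn, _ = confusion_counts(pred, truth)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Toy example: a 3x3 slice flattened to a list.
truth = [0, 1, 1, 0, 1, 1, 0, 0, 0]
pred  = [0, 1, 1, 0, 1, 0, 0, 0, 1]
# dice -> 0.75, precision -> 0.75, recall -> 0.75
```

In practice these counts would be accumulated per case over full 3D volumes and then averaged, which is consistent with the "mean of identified quantitative metrics" criterion in the last table row.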
2. Sample Sizes Used for the Test Set and Data Provenance
The document provides details for two primary studies and for the AxialML model validation.
Clinical Segmentation Performance study (for Axial3D Insight software):
- Sample Size: 12 cases
- Data Provenance: Not explicitly stated, but it is implied to be clinical medical imaging data. Specific country of origin is not mentioned. The data type is retrospective as it refers to existing medical imaging.
Intended Use validation study (for 3D models produced by Axial3D Insight):
- Sample Size: 12 cases (presumably the same cases as the Clinical Segmentation Performance study, though the document does not state this explicitly).
- Data Provenance: Not explicitly stated, but implied to be clinical medical imaging data for generating 3D models. Retrospective.
AxialML Machine Learning Model Validation (Validation Datasets):
- Sample Sizes:
- Cardiac CT/CTa: 4,838 images
- Neuro CT/CTa: 4,041 images
- Ortho CT: 10,857 images
- Trauma CT: 19,134 images
- Data Provenance: Not explicitly stated, but the datasets include various scanner manufacturers and models (GE, Siemens, Philips, Toshiba). The document states that for the "Quantitative Assessment using Expert Reference Standard," independently sourced datasets commissioned from US-only sites were used, so at least a portion of the validation data is from the US. The use of existing scans suggests the data is retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
Clinical Segmentation Performance study:
- Number of Experts: 3 radiologists
- Qualifications: "Radiologists" - no additional experience or specific subspecialty is detailed.
Intended Use validation study:
- Number of Experts: 9 physicians
- Qualifications: "Physicians" - no additional experience or specific subspecialty is detailed.
AxialML Machine Learning Model Validation (for expert reference standard):
- Number of Experts: Not explicitly stated; the plural "expert radiologists" (who independently segmented and reviewed the expert reference standards) implies more than one.
- Qualifications: "Expert radiologists" - no additional experience or specific subspecialty is detailed beyond being an expert radiologist.
- For Qualitative Assessment: A "pool, minimum of 3, of Axial3D Medical Visualization Engineers" review segmentations. These are internal staff, not external medical experts establishing ground truth.
4. Adjudication Method for the Test Set
Clinical Segmentation Performance study:
- The document states that "3 radiologists reviewing the segmentation of 12 cases" resulted in "all cases ... scored within the acceptance criteria of 1 or 2a" under the RADPEER framework. This suggests each radiologist reviewed the cases individually, with a possible consensus or adjudication step if scores differed; however, the specific adjudication method (e.g., 2+1, 3+1) is not detailed. The phrase "all cases were scored within the acceptance criteria" implies successful agreement or resolution.
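The acceptance rule described above reduces to a simple check: every score from every reviewer must fall in the RADPEER acceptance set of 1 or 2a. A minimal sketch, assuming an illustrative data layout (case IDs mapping to per-radiologist scores) that is not specified in the submission:

```python
# Pass/fail check for the RADPEER-based acceptance criterion: every
# reviewer score for every case must be 1 or 2a. The score labels and
# data layout are illustrative assumptions, not from the submission.

ACCEPTABLE = {"1", "2a"}

def all_cases_pass(scores_by_case):
    """scores_by_case maps a case id to the scores given by each radiologist."""
    return all(
        score in ACCEPTABLE
        for scores in scores_by_case.values()
        for score in scores
    )

# Toy example: 3 radiologists scoring 2 of the 12 cases.
scores = {"case-01": ["1", "1", "2a"], "case-02": ["1", "2a", "1"]}
# all_cases_pass(scores) -> True
```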
AxialML Machine Learning Model Validation:
- For the "Qualitative Assessment," a "fixed evaluation methodology" is used by a pool of Medical Visualization Engineers. This implies a standardized process for assessment, but not a specific consensus or adjudication method among the engineers beyond their individual reviews contributing to the overall assessment.
- For the "Quantitative Assessment using Expert Reference Standard," the ground truth is established by "expert radiologists" who independently segmented and reviewed the datasets. This implies these expert interpretations form the ground truth without a further adjudication step by the study designers, or at least no explicit adjudication process is described in the provided text.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
No explicit MRMC comparative effectiveness study is mentioned, nor is an effect size indicating human reader improvement with AI assistance vs. without AI assistance reported. The studies described focus on the device's performance in isolation or its output reviewed by human experts, rather than comparing human performance with and without the AI.
6. Standalone Performance Study
Yes, a standalone validation was performed for the AxialML machine learning models.
The document states that "AxialML machine learning models were independently verified and validated before inclusion in the Axial3D Insight device." This validation involved quantitative metrics (Dice Coefficient, Pixel Accuracy, AUC, Precision, Recall) directly assessing the performance of the ML models against ground truth.
However, the output of these ML models is not used in isolation in the final product. The text clarifies: "The segmentations produced by the AxialML machine learning models are used by Axial3D trained staff who complete the final segmentation and validation of the quality of each 3D patient specific model produced." This means the final device performance is human-in-the-loop, even if the ML component has a standalone validation.
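The PCCP criterion for a modified model, as quoted in the table, is that the mean of each quantitative metric must show "equivalence or an improvement" over the original submission model. A minimal sketch of such a decision rule, where the equivalence tolerance, metric names, and scores are all assumptions for illustration (the submission does not specify a numeric tolerance):

```python
# Sketch of an "equivalence or improvement" decision rule for a
# modified model under a PCCP: the mean of each metric must be at
# least the original model's mean, minus a small equivalence
# tolerance. Tolerance, metric names, and scores are assumptions.

def mean(xs):
    return sum(xs) / len(xs)

def passes_pccp(original, modified, tol=0.005):
    """original/modified map a metric name to per-case scores.
    Returns True only if every mean metric is equivalent or improved."""
    return all(
        mean(modified[m]) >= mean(original[m]) - tol
        for m in original
    )

# Toy example with two metrics over three cases each.
baseline  = {"dice": [0.91, 0.93, 0.92], "recall": [0.90, 0.94, 0.92]}
candidate = {"dice": [0.92, 0.94, 0.93], "recall": [0.90, 0.93, 0.92]}
# passes_pccp(baseline, candidate) -> True
```

A real protocol would likely also require statistical justification for the tolerance; the sketch only captures the mean-comparison logic described in the document.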
7. Type of Ground Truth Used
- Clinical Segmentation Performance study: Assessed by "3 radiologists" using the RADPEER framework. This is expert consensus/review (implicitly, given all cases met criteria).
- Intended Use validation study: Assessed by "9 physicians" reviewing 3D models. This is expert review of the device output usability.
- AxialML Machine Learning Model Validation: "Expert reference standards, independently sourced datasets... independently segmented and reviewed by expert radiologists." This is an expert-derived reference standard (segmentation ground truth).
8. Sample Size for the Training Set
The document explicitly states that "The AxialML machine learning model training data used during the algorithm development was explicitly kept separate and independent from the validation data used." However, the sample size for the training set is not provided in the given text. Only the validation dataset sizes are listed (e.g., 4,838 images for Cardiac CT/CTa).
9. How the Ground Truth for the Training Set Was Established
While the document mentions that training data was "explicitly kept separate and independent from the validation data," it does not describe how the ground truth for the training set was established. It only details how ground truth for the validation sets used for the PCCP was established (expert radiologists independently segmenting and reviewing).