Search Results
Found 5 results
510(k) Data Aggregation
(220 days)
Axial3D Insight is intended for use as a cloud-based service and image segmentation framework for the transfer of DICOM imaging information from a medical scanner to an output file.
The Axial3D Insight output file can be used for the fabrication of physical replicas of the output file using additive manufacturing methods. The output file or physical replica can be used for treatment planning.
The output file or the physical replica can be used for diagnostic purposes in the field of trauma, orthopedic, maxillofacial and cardiovascular applications. Axial3D Insight should be used in conjunction with other diagnostic tools and expert clinical judgment.
Axial3D Insight is a secure, highly available cloud-based image processing, segmentation and 3D modelling framework for the transfer of imaging information either as a digital file or as a 3D printed physical model.
The FDA 510(k) clearance letter and supporting documentation for Axial3D Insight (K250369) detail the device's acceptance criteria and the studies performed to demonstrate its performance.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes two main validation studies: a "Clinical Segmentation Performance study" for the overall Axial3D Insight software, and "AxialML Machine Learning Validation" for the underlying machine learning models. The acceptance criteria for the Clinical Segmentation Performance study are described in terms of a peer-reviewed medical imaging review framework (RADPEER). For the AxialML Machine Learning Validation, the acceptance criteria are based on quantitative metrics demonstrating "equivalence or improvement" compared to the original model.
Acceptance Criteria Category | Specific Metric/Mechanism | Acceptance Threshold/Method | Reported Device Performance |
---|---|---|---|
Clinical Segmentation Performance (Axial3D Insight) | Radiologist Review via RADPEER Framework | All cases scored within RADPEER acceptance criteria of 1 or 2a. | All cases were scored within the acceptance criteria of 1 or 2a. |
Intended Use Validation (Axial3D Insight) | Physician Review of 3D Models | Successfully validated, satisfying end user needs and indications for use. | Concluded successful validation; 3D models satisfied end user needs and indications for use. |
AxialML Machine Learning Model Validation (PCCP) | Quantitative 3D Medical Image Segmentation Metric Analysis (Dice Coefficient, Pixel Accuracy, AUC, Precision, Recall) | Performance must demonstrate equivalence or improvement compared to the original submission model version. | Not explicitly reported as a single summary metric, but the document states these metrics are used to ensure the model "consistently meet performance standards" and for successful validation in line with the modification protocol. |
AxialML Machine Learning Model Validation (PCCP) | Qualitative Assessment by Medical Visualization Engineers | Fixed evaluation methodology to define improved, equivalent, or reduced performance against AxialML Model Design Input Specifications. | Confirmed validation by producing objective evidence that each AxialML Model Design Input Specification has been met and the model output supports Axial Staff in completing anatomical segmentation. |
AxialML Machine Learning Model Validation (PCCP) | Quantitative Assessment using Expert Reference Standard (DICE, AUC, Precision, Accuracy, Recall) | Mean of identified quantitative metrics must demonstrate equivalence or an improvement for the proposed modified AxialML model. | Not explicitly reported as a single summary metric, but this is the criterion for successful validation. |
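The quantitative metrics named above (Dice coefficient, pixel accuracy, precision, recall; AUC additionally requires probabilistic model outputs) are standard overlap measures between a predicted and a reference binary mask. As a minimal sketch of how the overlap-based metrics are conventionally computed — the function name and example arrays are illustrative, not taken from the submission:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Overlap metrics between a predicted and a reference binary mask.

    Assumes at least one foreground pixel in each mask (no zero-division guard).
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # foreground in both masks
    fp = np.logical_and(pred, ~truth).sum()   # predicted but not in reference
    fn = np.logical_and(~pred, truth).sum()   # in reference but missed
    tn = np.logical_and(~pred, ~truth).sum()  # background in both masks
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "pixel_accuracy": (tp + tn) / pred.size,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }
```

A PCCP-style "equivalence or improvement" check would then compare the mean of these metrics for a modified model against the original model's values on the same held-out validation set.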
2. Sample Sizes Used for the Test Set and Data Provenance
The document provides details for two primary studies and for the AxialML model validation.
Clinical Segmentation Performance study (for Axial3D Insight software):
- Sample Size: 12 cases
- Data Provenance: Not explicitly stated, but it is implied to be clinical medical imaging data. Specific country of origin is not mentioned. The data type is retrospective as it refers to existing medical imaging.
Intended Use validation study (for 3D models produced by Axial3D Insight):
- Sample Size: 12 cases (presumably the same cases as the Clinical Segmentation Performance study, though it is not explicitly stated that they are the exact same dataset).
- Data Provenance: Not explicitly stated, but implied to be clinical medical imaging data for generating 3D models. Retrospective.
AxialML Machine Learning Model Validation (Validation Datasets):
- Sample Sizes:
- Cardiac CT/CTa: 4,838 images
- Neuro CT/CTa: 4,041 images
- Ortho CT: 10,857 images
- Trauma CT: 19,134 images
- Data Provenance: Not explicitly stated, but includes various scanner manufacturers and models (GE, Siemens, Philips, Toshiba). The document states that for "Quantitative Assessment using Expert Reference Standard," independently sourced datasets commissioned from US-only sites were used. This suggests at least a portion of the validation data is from the US. The nature of this data (e.g., existing scans) suggests a retrospective nature.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
Clinical Segmentation Performance study:
- Number of Experts: 3 radiologists
- Qualifications: "Radiologists" - no additional experience or specific subspecialty is detailed.
Intended Use validation study:
- Number of Experts: 9 physicians
- Qualifications: "Physicians" - no additional experience or specific subspecialty is detailed.
AxialML Machine Learning Model Validation (for expert reference standard):
- Number of Experts: Unspecified "expert radiologists" for independently segmenting and reviewing the expert reference standards. The exact number is not stated, but the plural phrasing ("expert radiologists") implies more than one.
- Qualifications: "Expert radiologists" - no additional experience or specific subspecialty is detailed beyond being an expert radiologist.
- For Qualitative Assessment: A "pool, minimum of 3, of Axial3D Medical Visualization Engineers" review segmentations. These are internal staff, not external medical experts establishing ground truth.
4. Adjudication Method for the Test Set
Clinical Segmentation Performance study:
- The document states "3 radiologists reviewing the segmentation of 12 cases" and that "all cases were scored within the acceptance criteria of 1 or 2a" using the RADPEER framework. This suggests an individual review by each radiologist, and potentially a consensus or adjudication if scores differed, but the specific adjudication method (e.g., 2+1, 3+1) is not detailed. The phrase "all cases were scored within the acceptance criteria" implies successful agreement or resolution.
AxialML Machine Learning Model Validation:
- For the "Qualitative Assessment," a "fixed evaluation methodology" is used by a pool of Medical Visualization Engineers. This implies a standardized process for assessment, but not a specific consensus or adjudication method among the engineers beyond their individual reviews contributing to the overall assessment.
- For the "Quantitative Assessment using Expert Reference Standard," the ground truth is established by "expert radiologists" who independently segmented and reviewed the datasets. This implies these expert interpretations form the ground truth without a further adjudication step by the study designers, or at least no explicit adjudication process is described in the provided text.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
No explicit MRMC comparative effectiveness study is mentioned, nor is an effect size indicating human reader improvement with AI assistance vs. without AI assistance reported. The studies described focus on the device's performance in isolation or its output reviewed by human experts, rather than comparing human performance with and without the AI.
6. Standalone Performance Study
Yes, a standalone validation was performed for the AxialML machine learning models.
The document states that "AxialML machine learning models were independently verified and validated before inclusion in the Axial3D Insight device." This validation involved quantitative metrics (Dice Coefficient, Pixel Accuracy, AUC, Precision, Recall) directly assessing the performance of the ML models against ground truth.
However, the output of these ML models is not used in isolation in the final product. The text clarifies: "The segmentations produced by the AxialML machine learning models are used by Axial3D trained staff who complete the final segmentation and validation of the quality of each 3D patient specific model produced." This means the final device performance is human-in-the-loop, even if the ML component has a standalone validation.
7. Type of Ground Truth Used
- Clinical Segmentation Performance study: Assessed by "3 radiologists" using the RADPEER framework. This is expert consensus/review (implicitly, given all cases met criteria).
- Intended Use validation study: Assessed by "9 physicians" reviewing 3D models. This is expert review of the device output usability.
- AxialML Machine Learning Model Validation: "Expert reference standards, independently sourced datasets... independently segmented and reviewed by expert radiologists." This is expert consensus/pathology-like reference (since it's a segmentation ground truth).
8. Sample Size for the Training Set
The document explicitly states that "The AxialML machine learning model training data used during the algorithm development was explicitly kept separate and independent from the validation data used." However, the sample size for the training set is not provided in the given text. Only the validation dataset sizes are listed (e.g., 4,838 images for Cardiac CT/CTa).
9. How the Ground Truth for the Training Set Was Established
While the document mentions that training data was "explicitly kept separate and independent from the validation data," it does not describe how the ground truth for the training set was established. It only details how ground truth for the validation sets used for the PCCP was established (expert radiologists independently segmenting and reviewing).
(237 days)
Axial3D Cloud Segmentation Service is intended for use as a cloud-based service and image segmentation framework for the transfer of DICOM imaging information from a medical scanner to an output file, which can be used for the fabrication of physical replicas of the output file using additive manufacturing methods.
The output file or physical replica can be used for treatment planning and/or diagnostic purposes in the field of orthopedic, maxillofacial, and cardiovascular applications in adults. The output file or physical replica may also be used for pediatric patients between the ages of 12 and 21 years in cardiovascular applications.
Axial3D Cloud Segmentation Service should be used with other diagnostic tools and expert clinical judgment.
Axial3D Cloud Segmentation Service is a secure, highly available cloud-based image processing, segmentation, and 3D modelling framework for the transfer of imaging information either as a digital file or as a 3D printed physical model.
This document describes the Axial3D Cloud Segmentation Service and its FDA clearance.
Here's an analysis of the acceptance criteria and study data based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Stated or Implied) | Reported Device Performance |
---|---|
Measurement accuracy for segmentation | +/- 0.7mm |
Validation for intended use | Successfully validated |
Printing accuracy of physical replica models | Demonstrated to be accurate with compatible printers |
Software requirements and risk analysis | Successfully verified and traced |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the sample size used for the test set in terms of the number of patient cases or images. It mentions "nonclinical testing" and "software design verification and validation testing on all three software components of the device."
The data provenance (country of origin, retrospective/prospective) is not specified in the provided text.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts used or their qualifications for establishing ground truth for the test set.
4. Adjudication Method for the Test Set
The adjudication method used for the test set is not specified in the provided text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
The document does not mention an MRMC comparative effectiveness study or any effect size related to human readers improving with AI vs. without AI assistance. The device is a "segmentation framework" and its output (digital file or 3D printed model) is to be "used in conjunction with other diagnostic tools and expert clinical judgment." This suggests the device's role is preparatory rather than directly diagnostic in an unassisted workflow.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study Was Done
Yes, a standalone performance assessment was conducted for the algorithm's segmentation accuracy. The "measurement accuracy and comparisons were performed and confirmed to be within specification of +/- 0.7mm," indicating an evaluation of the algorithm's output against a reference.
7. The Type of Ground Truth Used
The type of ground truth used is not explicitly stated. However, given the measurement accuracy reported ("+/- 0.7mm") for segmentation, it implies a comparison against a highly precise reference, likely either a gold standard segmentation created by expert manual contouring or a known physical dimension.
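The "highly precise reference" comparison inferred here is commonly quantified as a boundary (surface) distance between the algorithm's mask and a reference mask, converted to millimetres via the voxel spacing. A rough NumPy sketch under that assumption — all names are hypothetical, and it presumes small, non-empty masks that do not touch the image border (np.roll wraps around edges):

```python
import numpy as np

def boundary_points(mask: np.ndarray, spacing) -> np.ndarray:
    """Coordinates (in mm) of foreground voxels that touch the background."""
    mask = mask.astype(bool)
    interior = mask.copy()
    for axis in range(mask.ndim):  # interior voxels have foreground on both sides
        interior &= np.roll(mask, 1, axis) & np.roll(mask, -1, axis)
    return np.argwhere(mask & ~interior) * np.asarray(spacing, dtype=float)

def max_surface_distance(pred: np.ndarray, truth: np.ndarray, spacing) -> float:
    """Symmetric maximum boundary-to-boundary distance (Hausdorff) in mm."""
    a, b = boundary_points(pred, spacing), boundary_points(truth, spacing)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

A ±0.7mm criterion would then be checked as `max_surface_distance(pred, truth, spacing) <= 0.7`, or against a mean rather than maximum distance, depending on how the specification is defined.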
8. The Sample Size for the Training Set
The sample size used for the training set is not specified in the provided text.
9. How the Ground Truth for the Training Set Was Established
How the ground truth for the training set was established is not specified in the provided text.
(62 days)
Axial3D Insight is intended for use as a cloud-based service and image segmentation framework for the transfer of DICOM imaging information from a medical scanner to an output file.
The Axial3D Insight output file can be used for the fabrication of physical replicas of the output file using additive manufacturing methods.
The output file or physical replica can be used for treatment planning.
The output file or physical replica can be used for diagnostic purposes in the field of trauma, orthopedic, maxillofacial, and cardiovascular applications.
Axial3D Insight should be used with other diagnostic tools and expert clinical judgment.
Axial3D Insight is a secure, highly available cloud-based image processing, segmentation, and 3D modelling framework for the transfer of imaging information either as a digital file or as a 3D printed physical model.
Here's a breakdown of the acceptance criteria and study information for the Axial3D Insight device, based on the provided text:
Acceptance Criteria and Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Clinical Segmentation Performance (RADPEER score) | All cases scored within the acceptance criteria of 1 or 2a. |
Intended Use Validation Study | 3D models produced by Axial3D demonstrated satisfaction of end-user needs and indications for use. |
Phantom Testing (Origin One printer) | Reproduced the required geometry within the acceptance criterion of ± 0.3mm. |
Standalone performance of AI models | No direct acceptance criteria are stated, as AI outputs are not used in isolation. |
Note: The document states that the update to the product does not affect the current software validation, and the software portion is not being updated. Therefore, the existing validation testing from the predicate device (K222745) is considered applicable.
Study Details
Clinical Segmentation Performance Study
- Sample Size for Test Set: 12 cases.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective).
- Number of Experts and Qualifications: 3 radiologists. No specific years of experience or subspecialty are provided, beyond being "radiologists."
- Adjudication Method: Not explicitly stated as 2+1 or 3+1. The study adopted a "peer-reviewed medical imaging review framework of RADPEER" to capture assessment and feedback.
- Multi Reader Multi Case (MRMC) Comparative Effectiveness Study: Not mentioned. This study focused on the performance of the device's segmentation, not human reader improvement with AI assistance.
- Standalone Performance Study: The output of the machine learning models is not used in isolation. The segmentations are further refined and validated by Axial3D trained staff. Therefore, a standalone performance study for the AI component (without human oversight) is not presented as the final product.
- Type of Ground Truth: Not explicitly stated, though implicitly refers to the standard of radiologists reviewing the segmentation.
- Sample Size for Training Set: Not specified for the Clinical Segmentation Performance Study.
- How Ground Truth for Training Set was Established: Not specified for the Clinical Segmentation Performance Study.
Intended Use Validation Study
- Sample Size for Test Set: 12 cases.
- Data Provenance: Not explicitly stated.
- Number of Experts and Qualifications: 9 physicians. No specific qualifications are provided beyond "physicians."
- Adjudication Method: Not explicitly stated.
- Multi Reader Multi Case (MRMC) Comparative Effectiveness Study: Not mentioned.
- Standalone Performance Study: Not applicable; this study validated the 3D models with physician review.
- Type of Ground Truth: Implicitly based on "end user needs and indications for use" as assessed by physicians.
- Sample Size for Training Set: Not specified.
- How Ground Truth for Training Set was Established: Not specified.
AxialML Machine Learning Validation
This section describes the validation of the underlying machine learning models, which are used to generate initial segmentations, but their output is not used in isolation.
- Sample Size for Test Set (Validation Data):
- Cardiac CT/CTa: 4,838 images
- Neuro CT/CTa: 4,041 images
- Ortho CT: 10,857 images
- Trauma CT: 19,134 images
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). However, a list of various CT scanner manufacturers and models (GE Medical Systems, Siemens, Philips, Toshiba) indicates a diversity of acquisition sources.
- Number of Experts and Qualifications: Not mentioned for this specific validation as it focuses on model output.
- Adjudication Method: Not mentioned.
- Multi Reader Multi Case (MRMC) Comparative Effectiveness Study: Not mentioned.
- Standalone Performance Study: The document explicitly states that the "output of these models is not used in isolation to produce the final 3D patient specific model." The segmentations are "used by Axial3D trained staff who complete the final segmentation and validation." Therefore, this is not a standalone performance of the AI in a clinical workflow, but an internal validation of the AI component before human refinement.
- Type of Ground Truth: Not explicitly stated for this machine learning validation. Implicitly, it would be expertly generated ground truth for segmentation.
- Sample Size for Training Set: Not specified in the provided text, but it states that the "training data used during the algorithm development was explicitly kept separate and independent from the validation data used."
- How Ground Truth for Training Set was Established: Not specified in the provided text.
Phantom Testing (for 3D Printer Verification)
- Sample Size for Test Set: Not explicitly stated as a number of phantoms, but involves "3D test phantoms provided by the National Institute of Standards and Technology (NIST)."
- Data Provenance: NIST test phantoms.
- Number of Experts and Qualifications: Not applicable, as this is a technical verification of printer accuracy.
- Adjudication Method: Not applicable.
- Multi Reader Multi Case (MRMC) Comparative Effectiveness Study: Not applicable.
- Standalone Performance Study: Not applicable.
- Type of Ground Truth: Accuracy measurements against a known NIST test phantom.
- Sample Size for Training Set: Not applicable.
- How Ground Truth for Training Set was Established: Not applicable.
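The ± 0.3mm phantom criterion reduces to a tolerance check of measured feature sizes against the phantom's known nominal dimensions. A minimal sketch (the function name, values, and exact comparison are illustrative, not from the submission):

```python
import numpy as np

def passes_phantom_check(measured_mm, nominal_mm, tol_mm=0.3) -> bool:
    """True if every measured feature is within tol_mm of its nominal size."""
    dev = np.abs(np.asarray(measured_mm, float) - np.asarray(nominal_mm, float))
    return bool((dev <= tol_mm).all())
```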
(294 days)
Axial3D Insight is intended for use as a cloud-based service and image segmentation framework for the transfer of DICOM imaging information from a medical scanner to an output file.
The Axial3D Insight output file can be used for the fabrication of physical replicas of the output file using additive manufacturing methods.
The output file or physical replica can be used for treatment planning.
The output file or physical replica can be used for diagnostic purposes in the field of trauma, orthopedic, maxillofacial, and cardiovascular applications.
Axial3D Insight should be used with other diagnostic tools and expert clinical judgment.
Axial3D Insight is a secure, highly available cloud-based image processing, segmentation and 3D modelling framework for the transfer of imaging information either as a digital file or as a 3D printed physical model.
The acceptance criteria and the study proving the device meets them are described below, based on the provided text.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state a table of acceptance criteria with specific quantitative metrics. However, it describes two validation studies and their outcomes, implying that meeting these outcomes constituted the acceptance.
Inferred Acceptance Criteria & Reported Performance:
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Clinical Segmentation Performance: Consistent and diagnostically acceptable segmentation by radiologists. | Clinical Segmentation Performance Study: "The Clinical Segmentation Performance study was conducted with 3 radiologists reviewing the segmentation of 12 cases across the fields of orthopedics, trauma, maxillofacial and cardiovascular. Axial3D adopted a peer reviewed medical imaging review framework of RADPEER to capture the assessment and feedback from the radiologists involved – all cases were scored within the acceptance criteria of 1 or 2a [1]." (This indicates successful segmentation as per expert review). |
Intended Use Validation (3D Models): 3D models produced by the device satisfy end-user needs and indications for use. | Intended Use Validation Study: "The Intended Use validation study of the device was conducted with 9 physicians reviewing 12 cases across the fields of Orthopedics, Trauma, Maxillofacial, and Cardiovascular, as defined in the Intended Use statement of the device. This study concluded successful validation of the 3D models produced by Axial3D demonstrating the device outputs satisfied end user needs and indications for use." |
Software Verification & Validation: All software requirements and risk analysis successfully verified and traced. | "Axial3D has conducted software verification and validation, in accordance with the FDA guidance, General Principles of Software Validation; Final Guidance for Industry and FDA Staff, issued on January 11, 2002. All software requirements and risk analysis have been successfully verified and traced." |
Machine Learning Model Validation: Independent verification and validation of machine learning models before inclusion. | "AxialML machine learning models were independently verified and validated before inclusion in the Axial3D Insight device." (Detailed data on number of images, slice spacing, and pixel size used for validation of Cardiac CT/CTa, Neuro CT/CTa, Ortho CT, and Trauma CT models are provided in Table 5-4, indicating the scope of this validation). |
2. Sample Sizes and Data Provenance
- Test Set Sample Sizes:
- Clinical Segmentation Performance Study: 12 cases
- Intended Use Validation Study: 12 cases
- Machine Learning Model Validation:
- Cardiac CT/CTa: 4,838 images
- Neuro CT/CTa: 4,041 images
- Ortho CT: 10,857 images
- Trauma CT: 19,134 images
- Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. It only mentions the imaging scanner manufacturers and models used in the validation datasets: GE Medical Systems, Siemens, Philips, and Toshiba.
3. Number of Experts and Qualifications
- Clinical Segmentation Performance Study: 3 radiologists. No specific years of experience are mentioned, but they are described as "radiologists."
- Intended Use Validation Study: 9 physicians. No specific qualifications (e.g., orthopedic surgeon, maxillofacial surgeon, cardiologist) or years of experience are mentioned, only "physicians."
4. Adjudication Method
- For the Clinical Segmentation Performance Study, the "RADPEER" framework was adopted. All cases were scored within the acceptance criteria of 1 or 2a. While RADPEER is a peer review system, the specific adjudication method for discrepancies among the 3 radiologists (e.g., majority vote, consensus meeting, 2+1, 3+1) is not explicitly detailed. It only states that all cases met the acceptance criteria, suggesting agreement or successful resolution.
- For the Intended Use Validation Study, no adjudication method is explicitly described beyond "9 physicians reviewing 12 cases" and concluding "successful validation."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study was not explicitly mentioned as being done to evaluate how much human readers improve with AI vs. without AI assistance. The studies described focus on validation of the device's output and the AI models, rather than human-in-the-loop performance improvement. The text mentions that the AxialML machine learning models are used to generate an initial segmentation, but the final segmentation and validation are done by "Axial3D trained staff," implying a human-in-the-loop process, but no comparative study to measure effect size is presented in this document.
6. Standalone (Algorithm Only) Performance
- Yes, standalone performance of the machine learning models was conducted. The document states: "AxialML machine learning models were independently verified and validated before inclusion in the Axial3D Insight device." Table 5-4 provides the number of images used for validation for different clinical areas (Cardiac, Neuro, Ortho, Trauma CT), indicating a quantitative assessment of the models themselves. However, the specific metrics (e.g., Dice score, sensitivity, specificity) for this standalone performance are not provided in the text.
7. Type of Ground Truth Used
- For the Clinical Segmentation Performance Study: The ground truth was established by the consensus or review of the 3 radiologists, consistent with a form of expert consensus.
- For the Intended Use Validation Study: The ground truth was based on the expert clinical judgment of the 9 physicians, who reviewed the 3D models and concluded their utility for intended use.
- For the Machine Learning Model Validation: The document states that "The AxialML machine learning model training data used during the algorithm development was explicitly kept separate and independent from the validation data used." While it doesn't explicitly state the type of ground truth for this segment, it can be inferred that the ground truth for the validation of the machine learning models was also based on expert-derived segmentations used to compare against the model's output.
8. Sample Size for the Training Set
- The document states: "The AxialML machine learning model training data used during the algorithm development was explicitly kept separate and independent from the validation data used." However, the sample size for the training set is not provided. Only the sample sizes for the validation data are listed (Table 5-4).
9. How Ground Truth for Training Set was Established
- The document does not explicitly describe how the ground truth for the training set was established. It only implies that training data was distinct from validation data. Given the nature of medical image segmentation, it is highly probable that the ground truth for the training set was established through manual segmentation by human experts (e.g., radiologists, clinical experts), but this is an inference and not explicitly stated in the provided text.
(30 days)
Axial3D Cloud Segmentation Service is intended for use as a cloud based service and image segmentation system for the transfer of DICOM imaging information from a medical scanner to an output file.
The Axial3D Cloud Segmentation Service output file can be used for the fabrication of physical replicas of the output file using additive manufacturing methods.
The output file or physical replica can be used for treatment planning.
The physical replica can be used for diagnostic purposes in the field of orthopedic, maxillofacial and cardiovascular applications.
Axial3D Cloud Segmentation Service should be used in conjunction with other diagnostic tools and expert clinical judgment.
Axial3D Cloud Segmentation Service is a secure, highly available cloud based image processing, segmentation and 3D modeling framework for the transfer of imaging information either as a digital file or as a 3D printed physical model.
Axial3D Cloud Segmentation Service is made up of a number of component parts, which allow the production of patient-specific 1:1 scale replica models, either as a digital file or as a 3D printed physical model.
Here's a breakdown of the acceptance criteria and study information for the Axial3D Cloud Segmentation Service, based on the provided text:
1. Acceptance Criteria and Reported Device Performance
The provided document describes a substantial equivalence submission to the FDA. In this context, the "acceptance criteria" are implied by the claim of substantial equivalence to the predicate device, Mimics InPrint (K173619). The primary performance metric is the accuracy of the segmentation against a defined ground truth, demonstrating that the subject device performs at least as well as the predicate and within acceptable specifications.
Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Measurement Accuracy of Segmentation | "within specification" and "performs at least as well as the legally marketed predicate device." | "Measurement accuracy and comparisons were performed and confirmed to be within specification." "The validation highlighted that the subject device performed to a higher standard, than the predicate device." "minimal variances were visible between the Mesh generated from subject device and the predicate device" |
Accuracy of Physical Replica Printing | Demonstrated to be accurate when using compatible 3D printers. | "Validation of printing of physical replica models was performed and demonstrated to be accurate when using any of the compatible 3D printers." |
Equivalence in Design & Functionality | Similar to predicate device in intended use, design, functionality, operating principles, and performance characteristics. | "Comparison shows the Axial3D Cloud Segmentation Service is substantially equivalent in intended use, design, functionality, operating principles and performance characteristics of the predicate device." "Both devices use the same segmentation functionality and generate the same output files." |
Safety & Effectiveness | As safe and effective as the legally marketed predicate device. | "The conclusions drawn from the nonclinical tests demonstrate that the proposed subject device is as safe, as effective, and performs as well as the legally marketed predicate device." |
Minimal Variance from Original DICOM (Mesh) | Minimal variance from original DICOM images after smoothing. | "Axial3D apply minimal smoothing to the STL file generated from the labeled images to retain a higher level of accuracy to the original DICOM images." |
2. Sample Size and Data Provenance
- Test Set Sample Size: Not explicitly stated in the provided text. The document mentions "measurement accuracy and comparisons were performed" and "validation of printing of physical replica models was performed," but does not detail the number of cases or images used for these tests.
- Data Provenance: Not explicitly stated. The document does not mention the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: Not explicitly stated. The document refers to "expert clinical judgment" in the Indications for Use, which suggests clinical experts are involved in the overall use of the device, but it doesn't specify their role or qualifications in establishing the ground truth for the validation study.
4. Adjudication Method
- Adjudication Method: Not explicitly stated.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not mentioned or described in the provided text. The study focuses on the comparison of the device's output (segmentation accuracy) against a predicate device, not on the improvement of human readers' performance with AI assistance.
6. Standalone (Algorithm Only) Performance
- Standalone Performance: Yes, a standalone performance study was seemingly conducted. The description states "Measurement accuracy and comparisons were performed" for the device's segmentation output, and "The validation highlighted that the subject device performed to a higher standard, than the predicate device." This indicates an assessment of the algorithm's performance independent of human-in-the-loop interaction for the specific tasks evaluated.
7. Type of Ground Truth Used
- Type of Ground Truth: The document implies comparison against the output of the predicate device ("Mimics InPrint") as a reference for accuracy, and also refers to "original DICOM images" for mesh accuracy. It doesn't explicitly state whether expert consensus or pathology was used to establish the gold standard for the initial segmentation accuracy comparison. Given the nature of segmentation, it is highly probable that expert-annotated segmentations were used as ground truth, or a method accepted as gold standard in the field for anatomical model creation.
8. Sample Size for Training Set
- Training Set Sample Size: Not explicitly stated. The document focuses on the validation and verification of the device, not its development or training data.
9. How Ground Truth for Training Set Was Established
- Ground Truth for Training Set: Not explicitly stated. As with the test set, it's not detailed how ground truth was established for any potential training data used to develop the segmentation algorithms.