Aibolit 3D+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT and MRI imaging devices. Aibolit 3D+ is intended as software for preoperative surgical planning, patient information and as software for the intraoperative display of the multidimensional digital images. Aibolit 3D+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.
Aibolit 3D+ is a web-based, stand-alone application that can be accessed from a computer connected to the internet. Once the enhanced images are created, they can be used by the physician for case review, patient education, professional training and intraoperative reference.
Aibolit 3D+ is a software-only device that processes CT and MR images from a patient to create 3-dimensional images that may be manipulated to view the anatomy from virtually any perspective. The software also allows for transparent viewing of anatomical structures and artifacts inside organs, such as ducts, vessels, lesions and entrapped calcifications (stones). Anatomical structures are identified by name and by differential coloration to highlight them within the region of interest.
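The submission does not describe the reconstruction algorithm itself, but a common way to produce a 3D model from a segmented CT/MR volume is iso-surface extraction. Below is a minimal sketch using marching cubes; the function name `mask_to_mesh` and the synthetic test volume are illustrative assumptions, not the Aibolit 3D+ implementation.

```python
# Minimal sketch (not the vendor's pipeline): turn a binary segmentation
# mask of a CT/MR volume into a 3D triangle mesh via marching cubes.
import numpy as np
from skimage import measure

def mask_to_mesh(mask: np.ndarray, voxel_spacing: tuple):
    """Extract a triangle mesh from a 3D binary segmentation mask.

    mask          -- 0/1 array of shape (slices, rows, cols)
    voxel_spacing -- physical size of one voxel in mm, as (z, y, x)
    """
    # Marching cubes finds the iso-surface at level 0.5, i.e. the
    # boundary between segmented and background voxels.
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=voxel_spacing
    )
    return verts, faces, normals

# Example: a synthetic 64^3 volume containing a sphere as a stand-in "organ".
vol = np.zeros((64, 64, 64), dtype=np.float32)
zz, yy, xx = np.ogrid[:64, :64, :64]
vol[(zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = 1.0

verts, faces, normals = mask_to_mesh(vol, voxel_spacing=(1.0, 1.0, 1.0))
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```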
The software may facilitate the surgeon's decision-making during the planning, review and conduct of surgical procedures and, hence, may help to decrease or prevent errors caused by the misidentification of anatomical structures and their positional relationships.
Here's a summary of the acceptance criteria and the study details for the Aibolit 3D+ device, extracted from the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are generally focused on the validation of the software's ability to segment anatomical structures accurately and generate 3D models with conservation of shape dimensions and volume. The reported performance indicates that the device met these criteria through validation studies.
| Acceptance Criteria Category | Specific Criteria | Reported Device Performance/Validation |
|---|---|---|
| Software Validation | Software functions as intended and meets user needs. | Software verification and validation performed against defined requirements and user needs. |
| Segmentation Validation | Accurate segmentation of organs/structures. | Segmentation validation of the Customize software performed. R&R study on segmentation of multiple internal organ/structure anatomies performed. AI-based algorithm demonstrated identification of organs/structures based on trained dataset. |
| 3D Model Generation Accuracy | Accurate generation of 3D models from segmented data. | Accuracy study on 3D model generation for multiple organ structures performed. Validation demonstrated conservation of shape dimensions and volume of structures when compared to a "ground truth" accepted standard. |
| MRI Validation | Performance maintained when using MRI images. | Expansion of software validation to include MRI validation using multiple organ structures, multiple radiologists, and multiple view perspectives. Conducted per written protocol with pre-determined acceptance criteria. |
| Conservation of Shape/Volume | 3D models accurately represent original dimensions/volume. | The MRI validation demonstrated "conservation of shape dimensions, volume of the structures in a side-by-side testing comparison with a 'ground truth' accepted standard independent of radiologist, organ structure and view perspective" (see the sketch below the table). |
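The submission does not state which quantitative metrics backed the conservation-of-shape/volume criterion. The following is a minimal sketch, assuming standard overlap and volume-ratio metrics; the file names and the ±5% tolerance are illustrative assumptions, not thresholds from the submission.

```python
# Sketch: compare an algorithm-produced mask against a ground-truth mask
# using Dice overlap and a volume-conservation ratio.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Spatial-overlap agreement between two binary masks (1.0 = identical)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def volume_mm3(mask: np.ndarray, voxel_spacing=(1.0, 1.0, 1.0)) -> float:
    """Physical volume of a binary mask given voxel spacing in mm."""
    return float(mask.astype(bool).sum()) * float(np.prod(voxel_spacing))

pred = np.load("pred_mask.npy")    # algorithm output (hypothetical file)
truth = np.load("truth_mask.npy")  # expert ground truth (hypothetical file)

vol_ratio = volume_mm3(pred) / volume_mm3(truth)
print(f"Dice: {dice_coefficient(pred, truth):.3f}")
print(f"Volume ratio (pred/truth): {vol_ratio:.3f}")
# Accept if volume is conserved to within +/-5% (assumed tolerance).
assert abs(vol_ratio - 1.0) <= 0.05
```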
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Segmentation Training/Evaluation: The AI-based algorithm was trained on a dataset of 108 anatomical structures obtained from medical images (MRI scans) and their corresponding segmentations. While this is referred to as "trained to identify organs/structures," the subsequent statement that results were "evaluated from 3 perspectives by 4 radiologists" suggests this dataset may also have served as the test/evaluation set for the AI component. It is not explicitly stated whether a separate, distinct test set was used solely for post-training performance evaluation.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective collection). The images are MRI scans. The study involved "multiple radiologists and multiple view perspectives," which suggests multi-center or varied data collection, though specifics are missing.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: 4 radiologists were involved in evaluating the AI-based algorithm's segmentations. For the manual annotation process and the "final patient management decisions," the text specifies "Radiologist (MD)" and "Radiologist."
- Qualifications of Experts: All experts are identified as radiologists. No specific years of experience or subspecialty are mentioned.
4. Adjudication Method for the Test Set
- The AI-based algorithm's segmentations were "evaluated from 3 perspectives by 4 radiologists." This implies a review process.
- After the AI system produces additional segmentations, a radiologist reviews them.
- For image segmentation, the "Radiologist (MD) – Manual annotation is done for all CT and MRI slices with optional use of software as determined by Radiologist and with Radiologist's approval and control." This indicates that the radiologists act as the final decision-makers and can modify annotations.
The precise adjudication method (e.g., majority vote, or whether disagreements were resolved by a super-reviewer) is not explicitly detailed for the evaluation phase. However, the overall process shows a human-in-the-loop approach where radiologists have final say over segmentations.
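Because the adjudication rule is not specified, the following is a minimal sketch assuming, purely for illustration, a per-voxel majority vote across the four readers' masks; the function name and the random test masks are hypothetical.

```python
# Sketch: resolve disagreement among N reader masks by per-voxel majority vote.
import numpy as np

def majority_vote(masks: list) -> np.ndarray:
    """Per-voxel majority vote over N binary reader masks.

    A voxel is labeled foreground when more than half the readers marked it;
    ties (possible with an even reader count) go to background.
    """
    stacked = np.stack([m.astype(np.uint8) for m in masks], axis=0)
    votes = stacked.sum(axis=0)
    return votes > (len(masks) / 2)

# Four hypothetical reader masks for the same structure:
readers = [np.random.rand(64, 64, 64) > 0.5 for _ in range(4)]
consensus = majority_vote(readers)
print(f"consensus foreground voxels: {int(consensus.sum())}")
```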
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Not explicitly stated. The document describes a validation study focused on the device's accuracy against a ground truth and its substantial equivalence to predicate devices. It mentions that "multiple radiologists and multiple view perspectives" were used in the MRI validation, but it does not describe a comparative effectiveness study measuring the improvement of human readers with AI assistance versus without AI assistance. The device is described as assisting the clinician, but no quantitative measure of this assistance's effect on human reader performance is provided.
6. Standalone Performance Study (Algorithm Only)
- Yes, a standalone performance aspect is implied. The AI-based algorithm is described as being "trained to identify organs/structures" and then "produces additional segmentations for review by the radiologist." This indicates that the algorithm itself performs segmentation, which is then subject to human review. The "segmentation validation of the Customize software" and "accuracy study on 3D model generation" would also likely assess the algorithm's performance in isolation before human review. The validation demonstrated "conservation of shape dimensions, volume...independent of radiologist," which points to the intrinsic accuracy of the software's processing.
7. Type of Ground Truth Used
- Expert Consensus / Accepted Standard:
- For the AI training/evaluation, the ground truth was the "corresponding segmentation" of the MRI scans, which typically implies expert-labeled segmentations.
- For the MRI validation, "conservation of shape dimensions, volume of the structures in a side-by-side testing comparison with a 'ground truth' accepted standard independent of radiologist, organ structure and view perspective" was used. This suggests an established reference or gold standard for anatomical dimensions and volumes.
8. Sample Size for the Training Set
- The AI-based algorithm was trained using a dataset of 108 anatomical structures obtained from MRI scans.
9. How Ground Truth for the Training Set Was Established
- The ground truth for the training set (the "corresponding segmentation") was likely established by experts, as the device's workflow involves radiologists making annotations: "After a radiologist establishes contours, the system produces additional segmentations for review by the radiologist." The expert radiologists are central to the process of creating and validating segmented structures, which would form the basis of the ground truth for training.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).