iPlan's indications for use are the viewing, presentation, and documentation of medical imaging. It includes different modules for image processing, image fusion, atlas-assisted visualization and segmentation, and intraoperative functional planning, where the output can be used, e.g., with stereotactic image-guided surgery or other devices for further processing and visualization.
Example procedures include but are not limited to:
- Planning and simulation of cranial surgical procedures such as tumor resection, shunt placement, minimally invasive stereotactic interventions, biopsy, and planning and simulation of trajectories for stimulation and electrode recording
- ENT procedures such as sinus surgery, tumor surgery
- Spine procedures such as tumor surgery, pedicle screw planning, vertebroplasty planning
- iPlan View, an application intended for reviewing existing treatment plans
- Planning and simulation of cranio-maxillofacial procedures
Typical users of iPlan are medical professionals, including but not limited to surgeons and radiologists.
iPlan is a software-based treatment planning application providing functionality for the viewing, processing, and documentation of medical data. It includes different modules for image preparation, image fusion, and image segmentation; the result is a treatment plan that can be used, e.g., for stereotactic and/or image-guided surgery.
The provided 510(k) summary for iPlan focuses on demonstrating substantial equivalence to predicate devices through technical characteristics, intended use, and non-clinical performance data. It does not provide detailed acceptance criteria or a specific study proving the device meets those criteria in the format typically associated with AI/ML device performance evaluations (e.g., sensitivity, specificity, AUC).
However, I can extract the information on non-clinical performance data and application performance testing that served as the basis for the FDA's substantial equivalence determination for this software planning application.
Here's a breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria (e.g., minimum sensitivity, specificity, accuracy) for the device's performance. Instead, it relies on demonstrating equivalence to predicate devices and adherence to internal standards and specifications.
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| Usability Validation: The device is usable and meets user needs. | "Usability workshops were performed with prototype versions of the software which has no relevant user interface differences to the final version and is therefore equivalent to the final version in respect to the usability validation. Moreover an Expert Group Review has worked with Brainlab in order to tailor the existing iPlan planning functionalities to the specific needs of CMF surgeons." |
| Functional Equivalence: The software performs its intended functions correctly and reliably. | "On different levels of development (module, subsystem, system) specific bench and integration tests were conducted." "Internal standards were tested and documented as conformance report, environment compatibility and interfaces." "Compatibility with previous version and comparable workflows to predicate devices were documented in corresponding review protocols." |
| Clinical Relevance/Safety: The device's output is safe and effective for its indicated uses. | "The clinical evaluation has been based on literature studies." (This suggests reliance on existing clinical knowledge and predicate device performance rather than a new clinical trial for iPlan itself.) |
| Substantial Equivalence: The device is as safe and effective as legally marketed predicate devices. | The overall conclusion of the 510(k) summary is that the submitted information... is complete and supports a substantial equivalence decision. The FDA's letter concurs with this, stating that the device is "substantially equivalent... to legally marketed predicate devices." |
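As a concrete illustration of the "compatibility with previous version" testing quoted in the table above, a golden-file regression check is the usual pattern for this kind of verification. The following is a minimal sketch, not the submission's actual protocol: the file names, the JSON plan schema with a `trajectory_targets` field, and the 0.1 mm tolerance are all hypothetical assumptions.

```python
import json
import math

TOLERANCE_MM = 0.1  # hypothetical acceptance threshold, not from the 510(k)

def load_plan(path: str) -> dict:
    """Load a (hypothetical) JSON-serialized treatment plan."""
    with open(path) as f:
        return json.load(f)

def check_compatibility(golden_path: str, current_path: str) -> None:
    """Assert the current version reproduces targets from a prior-version plan."""
    golden = load_plan(golden_path)
    current = load_plan(current_path)
    for name, ref_point in golden["trajectory_targets"].items():
        new_point = current["trajectory_targets"][name]
        distance_mm = math.dist(ref_point, new_point)
        assert distance_mm <= TOLERANCE_MM, (
            f"{name}: target moved {distance_mm:.3f} mm between versions"
        )

# Usage (hypothetical files):
# check_compatibility("plan_saved_by_v1.json", "plan_recomputed_by_v2.json")
```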
2. Sample size used for the test set and the data provenance
The document does not specify a "test set" in the context of a dataset for a performance study.
- For Usability Workshops: The sample size (number of participants) is not mentioned. The quoted summary indicates the workshops were conducted with prototype versions of the software.
- For Application Performance Testing: The sample size of test cases or data used during bench and integration tests is not specified.
- Data Provenance: Not applicable in the context of a dataset for a performance study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This is not directly applicable, as the submission describes no traditional "test set" with ground truth established by experts for a performance study of this device.
- For the Expert Group Review in usability: The number of experts is not specified, but they are described as working with Brainlab "to tailor the existing iPlan planning functionalities to the specific needs of CMF surgeons." Their qualifications are implicitly that they are CMF (Cranio-Maxillofacial) surgeons.
4. Adjudication method for the test set
Not applicable, as a dedicated "test set" for performance evaluation with an adjudication process is not described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is mentioned. The device is a planning application, and the evaluation focuses on its functional correctness, usability, and equivalence to predicates, not on AI-assisted diagnostic improvement for human readers.
6. If a standalone performance evaluation (i.e., algorithm only, without human-in-the-loop) was done
This refers to the software performing its functions without human interaction. The "Application performance testing" ("bench and integration tests") would implicitly cover the standalone performance of the algorithms and modules within iPlan to ensure they meet internal standards. However, specific metrics are not provided. iPlan is described as a "software based treatment planning application," implying it's a tool for medical professionals, not a fully autonomous diagnostic or therapeutic AI.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "Application performance testing," the ground truth would be the expected output or behavior of the software modules based on their specifications and internal standards. This is typically established through:
- Functional Specifications: Defining what each module (e.g., Image Fusion, Object Creation) is supposed to do.
- Reference Data/Known Inputs: Using synthetic or previously validated medical data where the "correct" output of a segmentation, fusion, or trajectory calculation is known or can be analytically derived (see the sketch after this list)
- Comparison to Predicate Devices: Ensuring that functionally similar modules perform comparably to predicate devices.
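To make the "reference data / known inputs" approach concrete, here is a minimal sketch of such a bench test. It is not taken from the submission: the phase-correlation routine is a toy stand-in for a real fusion module, and the synthetic image, the known shift, and the exact-recovery acceptance criterion are illustrative assumptions.

```python
import numpy as np

def register_translation(fixed: np.ndarray, moving: np.ndarray) -> tuple:
    """Toy 2-D phase correlation standing in for a real image-fusion module."""
    cross_power = np.conj(np.fft.fft2(fixed)) * np.fft.fft2(moving)
    cross_power /= np.abs(cross_power) + 1e-12   # normalize to pure phase
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the array midpoint correspond to negative (wrapped) shifts.
    return tuple(int(p - n) if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))

def test_fusion_against_known_shift():
    rng = np.random.default_rng(0)
    fixed = rng.random((128, 128))
    known_shift = (5, -9)                         # analytically derived ground truth
    moving = np.roll(fixed, known_shift, axis=(0, 1))
    recovered = register_translation(fixed, moving)
    assert recovered == known_shift, f"expected {known_shift}, got {recovered}"

test_fusion_against_known_shift()
print("fusion bench test passed")
```

Because the input transform is constructed analytically, the expected output is known exactly before the algorithm runs; this is what distinguishes bench-test ground truth from the expert-annotated ground truth typical of AI/ML performance studies.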
8. The sample size for the training set
This document does not describe the development of an AI/ML model that would typically have a "training set." iPlan is presented as a software application with various processing and visualization functionalities.
9. How the ground truth for the training set was established
Not applicable, as a "training set" for an AI/ML model is not mentioned in this context.
§ 892.1750 Computed tomography x-ray system.
(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
(b) Classification. Class II.