CAS-One IR is a user controlled, stereotactic accessory intended to assist in planning, navigation and manual advancement of one or more instruments, as well as in verification of instrument position and performance during Computed Tomography (CT) guided procedures.
In planning, the desired needle configuration and performance is defined relative to the target anatomy.
In navigation, the instrument position is displayed relative to the patient and guidance for needle alignment is provided while respiratory levels are monitored.
In verification, the achieved instrument configuration and performance are displayed relative to the previously defined plan through an overlay of the pre- and post-treatment image data.
CAS-One IR is indicated for use with rigid straight instruments such as needles and probes used in CT guided interventional procedures performed by physicians trained for CT procedures.
CAS-One IR is intended to be used for patients older than 18 years and eligible for CT-guided percutaneous interventions.
The system consists of the following main components:
- A mobile navigation platform: this platform can be moved in and out of radiology rooms and is positioned next to the patient in front of the CT scanner. The platform includes two touch screens, a camera, and a computer.
- Instruments: The instrument set comprises a guide arm, aiming device, and a navigational pointer that are connected to each other and assist the user in aligning and positioning a needle trajectory relative to the patient. After positioning the aiming device using the guide arm, the aiming device is aligned with respect to the desired entry point (translational alignment) and rotationally oriented to the desired insertion angle.
- CAS-One IR software: The software provides step-by-step workflow assistance for needle navigation. It provides a means for users to precisely plan single or multiple needle trajectories, navigate a needle to the planned position, and validate the inserted needle's position against the planned position.
Below is a breakdown of the acceptance criteria and the study evidence that the device meets them, based on the provided FDA 510(k) summary for CAS-One IR (K232022).
First, it's important to note that this 510(k) submission primarily focuses on demonstrating substantial equivalence to a predicate device (CAS-One IR, K152473). Therefore, the "study" described is a non-clinical performance testing and algorithm validation study, specifically addressing the differences and new features of the updated device. It is not an MRMC comparative effectiveness study or a typical standalone performance study with clinical endpoints.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
The document explicitly mentions acceptance criteria for the segmentation algorithms.
| Segmentation Target | Acceptance Criterion (Mean DICE Coefficient) | Reported Device Performance |
|---|---|---|
| Liver | 0.9 | Passed |
| Tumor | 0.8 | Passed |
| Effective Treatment Volume | 0.8 | Passed |
| Kidney | 0.85 | Passed |
| Lung | 0.9 | Passed |
| Liver vessels (mean centerline DICE) | 0.6 | Passed |
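For reference, the DICE coefficient used in these acceptance criteria measures volumetric overlap between a predicted segmentation mask and a ground-truth mask. The following is a generic illustration of the metric, not the manufacturer's validation code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks: predicted region overlaps 4 of 6 ground-truth voxels
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True   # 4 voxels
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:4] = True  # 6 voxels
score = dice_coefficient(pred, truth)  # 2*4 / (4+6) = 0.8
```

A mean DICE of 0.9 for liver, as in the table above, would correspond to substantially tighter overlap than this toy example's 0.8.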
For instrument detection algorithms, the performance is generally described as "reliability was gauged by analyzing the ground truth positions and the positions identified by the algorithm," and "These validation efforts provide a robust foundation for asserting the accuracy and effectiveness of the algorithms." Specific quantitative performance metrics for instrument detection are not provided in this summary, but it states they were assessed against ground truth.
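Although the summary gives no quantitative metrics for instrument detection, comparing algorithm-identified positions against ground truth is typically expressed as a tip-position error and an axis angular deviation. The sketch below is an assumed, generic formulation of such a comparison, not the device's actual algorithm:

```python
import numpy as np

def instrument_errors(tip_det, tip_gt, axis_det, axis_gt):
    """Tip position error (same units as input, e.g. mm) and
    angular deviation between instrument axes (degrees)."""
    tip_err = float(np.linalg.norm(np.asarray(tip_det, float) - np.asarray(tip_gt, float)))
    d1 = np.asarray(axis_det, float); d1 /= np.linalg.norm(d1)
    d2 = np.asarray(axis_gt, float); d2 /= np.linalg.norm(d2)
    # A needle axis has no preferred sign, so take |cos| before arccos
    cos_a = np.clip(abs(np.dot(d1, d2)), 0.0, 1.0)
    ang_err = float(np.degrees(np.arccos(cos_a)))
    return tip_err, ang_err

# Detected tip 1 mm off target, axis tilted 45° from ground truth
tip_e, ang_e = instrument_errors((10.0, 0.0, 0.0), (10.0, 0.0, 1.0),
                                 (0.0, 0.0, 1.0), (0.0, 1.0, 1.0))
```

Reliability would then be gauged by aggregating these errors over the test cases, e.g. as a mean or 95th-percentile deviation.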
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document does not specify the sample size for the test set used for algorithm validation. It also doesn't provide information about the data provenance (e.g., country of origin, retrospective or prospective nature). It only mentions "ground truth data annotated by personnel considered expert in the domain."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
The document states that the ground truth data was "annotated by personnel considered expert in the domain." It does not specify the number of experts or their specific qualifications (e.g., years of experience, specific medical specialty like radiologist).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify any adjudication method for establishing the ground truth for the test set. It simply states "annotated by personnel considered expert in the domain."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement in human reader performance with AI assistance versus without?
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted. The document explicitly states: "Clinical testing was not required to demonstrate the safety and effectiveness of the device." The studies performed were non-clinical performance and algorithm validation.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, a standalone algorithm validation was performed. The "Algorithm validation" section describes testing the segmentation algorithms (comparing mean DICE coefficient with state-of-the-art algorithms) and instrument detection algorithms (gauging reliability by comparing algorithm-identified positions with ground truth). These are evaluations of the algorithm's performance in isolation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used for algorithm validation was expert annotation/segmentation. The document states, "Test protocols were systematically executed to assess the performance of the algorithmic validation procedures involved comparisons with ground truth data annotated by personnel considered expert in the domain." This implies the ground truth for segmentation and instrument positions was established by human experts.
8. The sample size for the training set
The document does not provide the sample size for the training set. It focuses on the validation of the algorithms rather than their development or training data.
9. How the ground truth for the training set was established
The document does not provide information on how the ground truth for the training set was established. Given the focus on substantial equivalence and non-clinical testing, this level of detail about training data is typically not required in a 510(k) summary if the primary claim relies on equivalence and validation of specific new features.
§ 892.1750 Computed tomography x-ray system.
(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II.