IRIS 1.0 System
The IRIS 1.0 System is intended as a medical imaging system that allows the processing, review, analysis, communication, and media interchange of multidimensional digital images acquired from CT imaging devices. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multidimensional digital images. The IRIS 1.0 System is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.
The IRIS 1.0 System is a software-only device that processes medical images and delivers segmented image studies (3D anatomical models) to clinicians. Diagnosis is not performed by the software; the end user (the physician) is ultimately responsible for reviewing and interpreting the 3D anatomical models and the original CT studies on which they are based.
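The document does not describe how segmentation output becomes a 3D model, but the general technique is standard. A minimal sketch, assuming a binary organ mask, of turning a segmented CT volume into a surface mesh with scikit-image's marching cubes (nothing here reflects Intuitive Surgical's actual method):

```python
# A minimal sketch, assuming a binary organ mask: one generic way to turn
# a segmented CT volume into a 3D surface mesh, using scikit-image's
# marching cubes. The document does not disclose Intuitive Surgical's
# actual segmentation or modeling method.
import numpy as np
from skimage import measure

ct_volume = np.random.rand(64, 64, 64)   # placeholder for CT voxel data
organ_mask = ct_volume > 0.7             # placeholder segmentation mask

# Extract a triangular surface mesh from the binary mask.
verts, faces, normals, values = measure.marching_cubes(
    organ_mask.astype(np.float32), level=0.5
)
print(f"Mesh: {len(verts)} vertices, {len(faces)} triangles")
```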
The IRIS 1.0 System delivers 3D renderings of patient anatomy to the physician's iOS device (iPad or iPhone). The physician can view and manipulate the labeled MPRs (multiplanar reconstructions) and the 3D model on their iOS device. The physician also has the option of using the da Vinci® Surgical System TilePro input to display the 3D models in the High Resolution Stereo Viewer (HRSV) via a hardwired connection from the iOS device.
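MPR display itself is a well-understood operation: re-slicing the volume along the three orthogonal anatomical planes. A minimal NumPy sketch (the axis convention is an assumption; the actual viewer is an iOS app):

```python
# A minimal sketch of multiplanar reconstruction (MPR): re-slicing a CT
# volume along the three orthogonal planes. The real viewer is an iOS
# app; the (slices, rows, cols) axis convention here is an assumption.
import numpy as np

volume = np.random.rand(128, 256, 256)   # placeholder CT volume

def mpr_slices(vol, z, y, x):
    """Return axial, coronal, and sagittal slices through voxel (z, y, x)."""
    axial = vol[z, :, :]       # perpendicular to the scanner axis
    coronal = vol[:, y, :]     # front-to-back plane
    sagittal = vol[:, :, x]    # left-to-right plane
    return axial, coronal, sagittal

ax, co, sa = mpr_slices(volume, 64, 128, 128)
print(ax.shape, co.shape, sa.shape)   # (256, 256) (128, 256) (128, 256)
```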
The physician can view and order patient CT studies using the IRIS website or the IRIS iOS interface. The original image study is anonymized and routed to Intuitive Surgical, where image segmentation and quality assurance steps are performed before the model is released to the physician for review. The physician can access the 3D model using the IRIS App on an iOS device, where they review it, compare it with the original image study, and approve or reject the model.
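The anonymization step is described only at the workflow level. A hedged sketch of what such a step could look like with the open-source pydicom library; the tags and file paths are illustrative assumptions, not the IRIS procedure:

```python
# A hedged sketch of the anonymization step the workflow describes, using
# the open-source pydicom library. The tags blanked here are a small
# illustrative subset, not a complete HIPAA de-identification profile,
# and the file paths are hypothetical.
import pydicom

ds = pydicom.dcmread("study/slice_001.dcm")

# Blank a few direct patient identifiers.
for keyword in ("PatientName", "PatientID", "PatientBirthDate"):
    if keyword in ds:
        setattr(ds, keyword, "")

ds.remove_private_tags()                  # drop vendor-specific tags
ds.save_as("anonymized/slice_001.dcm")
```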
Another component of the IRIS 1.0 System is the networking and software infrastructure used to route the image studies in and out of the hospital, manage ordering information, and manage data in accordance with HIPAA and cybersecurity requirements. This infrastructure is composed of "Gateway" hardware and software and the cloud storage system used to store and access original and segmented image studies.
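No interface details are given for the Gateway. Purely as a sketch of the routing concept, assuming a hypothetical HTTPS endpoint:

```python
# A minimal sketch, assuming a hypothetical HTTPS endpoint, of the kind
# of outbound transfer a "Gateway" could perform: pushing an anonymized
# study to cloud storage over TLS. The URL and request shape are
# assumptions; the document gives no interface details.
import requests

GATEWAY_URL = "https://example-gateway.invalid/studies"   # hypothetical

with open("anonymized/slice_001.dcm", "rb") as fh:
    resp = requests.post(GATEWAY_URL, files={"dicom": fh}, timeout=30)
resp.raise_for_status()   # surface transfer failures to the caller
```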
This document (K182643) describes the FDA 510(k) clearance for the IRIS 1.0 System, a medical image processing software. No specific acceptance criteria, nor a detailed study demonstrating that the device meets such criteria, are provided in the text, beyond general statements about "verification and validation."
The text states:
"The IRIS 1.0 System was verified and validated according to a Moderate Level of Concern software device. The subject device met all required specifications and functioned as intended. Safety and performance of the IRIS 1.0 System has been evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing."
This indicates that internal testing was conducted to ensure the device met its specifications and functioned as intended, but the specific acceptance criteria (e.g., target accuracy, precision, recall for specific tasks) and the study results demonstrating meeting these criteria are not detailed.
Therefore, many of the requested details cannot be extracted directly from the provided text. However, based on the information provided, here's what can be inferred or explicitly stated:
Acceptance Criteria and Reported Device Performance
No explicit quantitative acceptance criteria or corresponding reported device performance metrics are provided in the document. The document generally states that the device "met all required specifications and functioned as intended." This implies that the internal acceptance criteria were met, but the specific metrics are not disclosed.
| Acceptance Criteria (Example) | Reported Device Performance |
| --- | --- |
| Not specified in document | Not specified in document |
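For context on what quantitative acceptance criteria for segmentation software typically look like, a hedged sketch of per-voxel Dice, precision, and recall; everything here is a synthetic placeholder, not IRIS data:

```python
# Illustrative only: the clearance reports no quantitative metrics. This
# sketch shows the kind of per-voxel segmentation metrics (Dice,
# precision, recall) that such acceptance criteria commonly specify,
# computed here on synthetic placeholder masks.
import numpy as np

pred = np.random.rand(64, 64, 64) > 0.5    # hypothetical predicted mask
truth = np.random.rand(64, 64, 64) > 0.5   # hypothetical reference mask

tp = np.logical_and(pred, truth).sum()     # true positive voxels
fp = np.logical_and(pred, ~truth).sum()    # false positives
fn = np.logical_and(~pred, truth).sum()    # false negatives

dice = 2 * tp / (2 * tp + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"Dice={dice:.3f} precision={precision:.3f} recall={recall:.3f}")
```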
Study Details
Given the statement "The IRIS 1.0 System was verified and validated according to a Moderate Level of Concern software device. The subject device met all required specifications and functioned as intended," it can be inferred that some form of internal verification and validation study was conducted. However, granular details are largely absent.
1. Sample size used for the test set and the data provenance:
* Test Set Sample Size: Not specified.
* Data Provenance: Not specified (e.g., country of origin, retrospective or prospective). The text mentions "The original image study will be anonymized and routed to Intuitive Surgical where image segmentation and quality assurance steps will be performed," which implies a process where real-world data might be used, but the specifics of the test set are not provided.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
* Number of Experts: Not specified.
* Qualifications of Experts: Not specified. The document states a "physician... is ultimately responsible for reviewing and interpreting the 3D anatomical models and the original CT study it is based on," and that the physician "will review it, compare it with the original image study, and approve or reject the model." This suggests that physicians are involved in a quality assurance or review process, which might be related to ground truth establishment, but it's not explicitly confirmed for a formal test set ground truth.
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
* Adjudication Method: Not specified.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it:
* MRMC Study: Not mentioned in the provided text. The device is intended to "assist the clinician," but no study on the impact of this assistance on human reader performance (e.g., improved diagnostic accuracy, reduced reading time) is described.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
* Standalone Performance: The document states, "Diagnosis is not performed by the software; the end user (physician) is ultimately responsible for reviewing and interpreting the 3D anatomical models and the original CT study it is based on." This strongly suggests the device is not intended for standalone diagnostic performance. While the software performs "segmentation and quality assurance steps," the specific standalone performance metrics for these functions are not provided, nor is a formal standalone study described.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
* Type of Ground Truth: Not explicitly stated. The process mentions that physicians "review it [the 3D model], compare it with the original image study, and approve or reject the model," implying that the original CT image and subsequent physician verification serve as a form of ground truth for the 3D anatomical models. However, the exact methodology for establishing ground truth for testing purposes is not detailed.
7. The sample size for the training set:
* Training Set Sample Size: Not specified. The document only mentions that "The original image study will be anonymized and routed to Intuitive Surgical where image segmentation and quality assurance steps will be performed," which describes the operational flow, not necessarily the training data for the AI.
8. How the ground truth for the training set was established:
* Training Set Ground Truth Establishment: Not specified.