Arterys MICA software is a medical diagnostic application that displays, processes, stores, and transfers DICOM and non-DICOM medical data. It provides the capability to store images and patient information, and to perform filtering, digital manipulation, and quantitative measurements. The client software is designed to run on standard personal and business computers and on monitors/screens that meet appropriate technical specifications for image diagnosis.
Arterys MICA includes an optional Cardio AI module which is used to analyze the heart and its major vessels using multi-slice, multi-phase, and velocity-encoded cardiovascular magnetic resonance (MR) images. It provides clinically relevant, reproducible quantitative data, and has been tested and validated on MR images acquired from both 1.5T and 3.0T MR scanners.
Arterys MICA includes an optional Oncology AI module which provides analytical tools to help the user assess and document changes in morphological activity at diagnostic and therapy follow-up examinations. It is a tool used to support the oncological workflow by helping the user confirm the absence or presence of lesions, including evaluation, quantification, follow-up, and documentation of any such lesions.
Arterys MICA software is intended to be used as a support tool by trained healthcare professionals to aid in diagnosis. It is intended to provide images and related information that are interpreted by a trained professional to render findings and/or a diagnosis, but it does not directly generate any diagnosis or potential findings.
Arterys MICA, already cleared as the predicate device, is a dedicated software application used as a Digital Imaging and Communications in Medicine (DICOM) and non-DICOM information and data management system. Pre-existing DICOM images, such as CT or MR, are uploaded into Arterys MICA from a PACS or a scanner. Arterys MICA is completely hosted in the cloud and is accessed using a compatible web browser by navigating to https://app.arterys.com. Cloud servers are provided by Amazon Web Services (AWS), and the service is accessible globally.
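The document does not describe Arterys' actual ingestion API, so as a purely illustrative sketch, here is how the metadata of a pre-existing DICOM image of the kind such a system manages could be read in Python with the open-source pydicom library. The file path is hypothetical:

```python
import pydicom

# Hypothetical local file standing in for a pre-existing scanner/PACS image.
ds = pydicom.dcmread("example_mr_slice.dcm")

# Identifying metadata a viewer uses to group instances into series/studies.
print("Modality:         ", ds.Modality)                  # e.g. "MR" or "CT"
print("StudyInstanceUID: ", ds.StudyInstanceUID)
print("SeriesDescription:", ds.get("SeriesDescription", "<missing>"))

# Pixel data is decoded on first access; shape follows (Rows, Columns).
pixels = ds.pixel_array
print("Image size:", pixels.shape)
```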
The Viewer AI application of Arterys MICA is designed around a modular architecture of separate components that make up a basic image viewer. These include the Studylist, from which studies are selected and opened, the image display (2D, 3D, MIP, etc.), view synchronization, metadata information, and various navigational, measurement, and other action tools.
Functionality provided by the Viewer AI is extended by the additional Cardio AI and Oncology AI application modules which add support for specific clinical workflows:
- Cardiac Workflow Module: Evaluates multi-slice, multi-phase, and velocity-encoded cardiovascular magnetic resonance (MR) images to quantify blood flow and ventricular function. In addition, perfusion and delayed enhancement datasets are analyzed and quantified, and for parametric mapping, T1, T2, and T2* values are obtained to assess tissue changes in the myocardium, along with extracellular volume (ECV) calculations (see the sketch after this list).
- Oncology Workflow Module: Provides analytical tools to help the user assess and document changes in morphological activity at diagnostic and therapy follow-up examinations. It supports the oncological workflow by helping the user confirm the absence or presence of lesions, including evaluation, quantification, follow-up, and documentation of any such lesions.
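As context for the ECV calculation mentioned above, the following is a minimal sketch of the standard published ECV formula from pre- and post-contrast T1 values and hematocrit. The document does not disclose Arterys' actual implementation, and all numeric values below are illustrative:

```python
# ECV = (1 - Hct) * (dR1_myocardium / dR1_blood), where R1 = 1/T1 and
# dR1 is the post-contrast minus pre-contrast relaxation rate change.
def ecv_fraction(t1_myo_pre_ms: float, t1_myo_post_ms: float,
                 t1_blood_pre_ms: float, t1_blood_post_ms: float,
                 hematocrit: float) -> float:
    d_r1_myo = 1.0 / t1_myo_post_ms - 1.0 / t1_myo_pre_ms
    d_r1_blood = 1.0 / t1_blood_post_ms - 1.0 / t1_blood_pre_ms
    return (1.0 - hematocrit) * (d_r1_myo / d_r1_blood)

# Illustrative values only (not from the submission):
print(f"ECV = {ecv_fraction(1000.0, 450.0, 1600.0, 300.0, 0.42):.1%}")
```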
Arterys MICA uses deep learning algorithms to reduce tedious, time-consuming manual steps such as segmentation and landmark identification. The results of these AI models are available on-screen to the user for further review and editing. The software does not perform any functions that could not be accomplished by a trained user with a manual method; the purpose of the automation is to save time and automate potentially error-prone manual tasks, while allowing the results to be reviewed as per the normal clinical workflow.
NOTE: The clinician retains the ultimate responsibility for making the pertinent diagnosis based on their standard practices and visual comparison of images. Arterys MICA software is a complement to these standard procedures.
The provided text describes the Arterys MICA device and its regulatory submission (K203744). While it mentions some validation efforts and performance data, it does not contain a dedicated table of acceptance criteria paired with reported device performance results for a specific clinical study set up to prove the device meets these criteria. The information is fragmented and generally describes software verification and validation, along with a brief mention of a "standalone performance assessment" for the T1 workflow.
However, based on the provided text, I can extract and infer some information relevant to your request.
Here's an attempt to address your request based on the provided document, highlighting where information is present and where it is lacking:
Acceptance Criteria and Device Performance (Inferred/Extracted)
A direct table of acceptance criteria paired with reported device performance, as explicitly requested, is not present in the provided document. However, the document alludes to certain performance expectations and describes a specific assessment for the T1 workflow.
Inferred Acceptance Criteria / Performance Study Highlights:
| Feature/Metric | Acceptance Criteria (Inferred) | Reported Device Performance (Partial/Inferred) |
| --- | --- | --- |
| T1 Value Accuracy (for Cardio AI module) | Difference between calculated and expected T1 values within predefined accuracy requirements (i.e., less than or equal to 30 ms) | "The results showed that the difference between calculated and expected T1 and T2 values were within predefined accuracy requirements (i.e., 30 ms and 4 ms for T1 and T2 values, respectively)." |
| T2 Value Accuracy (for Cardio AI module) | Difference between calculated and expected T2 values within predefined accuracy requirements (i.e., less than or equal to 4 ms) | Same reported statement as quoted above for T1. |
| T1 Workflow Contour Generation (Deep Learning Model) | Device performs as intended compared to ground-truth contours (user-modifiable) | "As a result, the device performed as intended compared to ground-truth contours." |
| General Software Performance | Adherence to software specifications and applicable performance standards; defects fixed or assessed for approval | "Safety and performance of Arterys MICA has been evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing." |
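To make the stated tolerances concrete, here is a minimal sketch of how such an accuracy check could be expressed. The submission does not describe its actual test harness; the ROI values below are hypothetical:

```python
# Predefined accuracy limits stated in the document: 30 ms (T1), 4 ms (T2).
T1_TOL_MS, T2_TOL_MS = 30.0, 4.0

def within_tolerance(calculated_ms, expected_ms, tol_ms):
    """True when every |calculated - expected| difference is <= tol_ms."""
    return all(abs(c - e) <= tol_ms for c, e in zip(calculated_ms, expected_ms))

# Hypothetical per-ROI mean values (ms), not data from the submission:
t1_calc, t1_ref = [1012.0, 998.5], [1000.0, 1005.0]
t2_calc, t2_ref = [47.2, 51.8], [45.0, 50.0]

assert within_tolerance(t1_calc, t1_ref, T1_TOL_MS)
assert within_tolerance(t2_calc, t2_ref, T2_TOL_MS)
```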
Detailed Information on the Study Proving Device Meets Acceptance Criteria:
- A table of acceptance criteria and the reported device performance:
  - As shown above, an explicit table is not provided in the document. The performance information is embedded within text describing the validation process. The closest to quantitative acceptance criteria are the accuracy requirements for T1 and T2 values (30 ms and 4 ms, respectively), and the stated performance "within predefined accuracy requirements."
- Sample size used for the test set and the data provenance:
  - T1 Workflow Standalone Assessment: "90 Cardiac T1 Mapping MRI short axis (SAX) scans from 16 studies."
  - Data Provenance: Not explicitly stated (e.g., country of origin, number of sites). Whether the data was retrospective or prospective is also not specified; the general context of a "validation assessment" often implies retrospective data, but this is not confirmed.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Information Not Provided: The document does not specify the number of experts or their qualifications (e.g., radiologist with X years of experience) used to establish ground truth for any of the described tests.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  - Information Not Provided: The document does not mention any adjudication method for establishing ground truth for the test sets.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - No MRMC Study Described: The document does not describe an MRMC comparative effectiveness study evaluating human readers' performance with and without AI assistance. The device is generally described as a "support tool," and "the clinician retains the ultimate responsibility for making the pertinent diagnosis." The studies mentioned focus on quantitative accuracy and standalone performance of the AI components.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
  - Yes: A standalone performance assessment was explicitly done for the T1 workflow: "A standalone performance assessment was performed on 90 Cardiac T1 Mapping MRI short axis (SAX) scans from 16 studies." (One common way such contour comparisons are scored is sketched below.)
  - Additionally, the T1 and T2 value accuracy assessment implies a comparison of the algorithm's output to some "expected" value, which suggests a standalone component of evaluation.
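The document does not name the metric used to compare generated contours against ground truth; a common choice for standalone segmentation assessments of this kind is the Dice coefficient, sketched here over toy binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks A and B."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks standing in for predicted vs. ground-truth contours:
pred = np.array([[0,1,1,0], [0,1,1,0], [0,1,1,0], [0,0,0,0]])
truth = np.array([[0,1,1,0], [0,1,1,0], [0,0,1,0], [0,0,0,0]])
print(f"Dice = {dice_coefficient(pred, truth):.3f}")  # 0.909 for these masks
```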
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - For T1/T2 Accuracy: Ground truth was "the average intensity value obtained in a similar ROI delineated in OEM vendor generated inline maps." This implies a comparison to a known, perhaps previously validated, reference standard (an ROI-mean measurement of this kind is sketched below).
  - For T1 Workflow Contours: Ground truth was referred to simply as "ground-truth contours." It is not specified how these contours were established (e.g., expert manual segmentation, consensus).
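To illustrate the kind of reference measurement the text describes (the mean intensity inside an ROI delineated on an OEM vendor's inline map), here is a minimal sketch with a synthetic map; real maps would be loaded from DICOM:

```python
import numpy as np

def roi_mean(parametric_map: np.ndarray, roi_mask: np.ndarray) -> float:
    """Mean map value (e.g., T1 in ms) over the ROI's pixels."""
    return float(parametric_map[roi_mask.astype(bool)].mean())

t1_map = np.full((8, 8), 1000.0)   # synthetic inline T1 map (ms)
t1_map[2:5, 2:5] = 1020.0          # slightly elevated patch of tissue
roi = np.zeros((8, 8), dtype=bool)
roi[2:5, 2:5] = True               # ROI delineated over the patch
print(f"Expected T1 from inline map: {roi_mean(t1_map, roi):.1f} ms")
```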
- The sample size for the training set:
  - Information Not Provided: The document does not disclose the sample size used for training the deep learning models.
- How the ground truth for the training set was established:
  - Information Not Provided: The document does not specify how the ground truth for the training set was established.
Summary of Gaps in Information:
The provided document, being an FDA 510(k) summary, focuses on demonstrating substantial equivalence rather than providing a detailed clinical study report. Key information typically found in a comprehensive clinical validation study, such as details about human experts, adjudication methods, detailed ground truth establishment (especially for training data), and full MRMC study results, is largely absent. The document emphasizes software verification and validation, a specific standalone AI assessment for one component of the device (T1 workflow contours), and quantitative accuracy for T1/T2 values.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).