K Number
K230074
Manufacturer
Date Cleared
2023-07-27 (198 days)

Product Code
Regulation Number
892.2080
Panel
RA
Reference & Predicate Devices
Intended Use

Rapid Aneurysm Triage and Notification (Rapid ANRTN) is a radiological computer-assisted triage and notification software device for analysis of CT images of the head. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing studies with suspected saccular aneurysms during routine patient care. Rapid ANRTN uses an artificial intelligence algorithm to analyze images and highlight studies with suspected saccular aneurysms in a standalone application for study list prioritization or triage in parallel to ongoing standard of care. The device generates compressed preview images that are meant for informational purposes only and are not intended for diagnostic use. The device does not alter the original medical image and is not intended to be used as a diagnostic device. Analyzed images are available for review through PACS, email, and a mobile application. When viewed, the images are for informational purposes only and not for diagnostic use. The results of Rapid ANRTN, in conjunction with other clinical information and professional judgment, are to be used to assist with triage/prioritization of saccular aneurysm cases. Radiologists who read the original medical images are responsible for the diagnostic decision. Rapid ANRTN is limited to analysis of imaging data and should not be used in lieu of full patient evaluation or relied upon to make or confirm a diagnosis.

Rapid ANRTN is limited to detecting saccular aneurysms at least 4 mm in diameter in adults.

Device Description

Rapid ANRTN is a radiological computer-assisted image processing software device. It is a CTA processing module that operates within the integrated Rapid Platform to determine the suspicion of head saccular aneurysm(s). The software analyzes input CTA images provided in DICOM format and provides notification of suspected saccular aneurysm(s) along with a non-diagnostic, compressed preview image. Rapid ANRTN is an AI/ML image processing module that integrates within the Rapid Platform.
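
The summary gives no implementation details, so the following is only a minimal sketch, under stated assumptions, of how a triage module of this kind could consume a DICOM CTA series and emit a worklist-prioritization flag plus a non-diagnostic preview placeholder. Every name here (`load_cta_series`, `score_aneurysm_suspicion`, `triage`) is hypothetical and not taken from the submission.

```python
# Hypothetical CTA triage sketch; not vendor code. Assumes pydicom and numpy.
from pathlib import Path

import numpy as np
import pydicom


def load_cta_series(series_dir: str) -> np.ndarray:
    """Read a DICOM CTA series and stack its slices into a 3-D volume."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    return np.stack([ds.pixel_array for ds in slices]).astype(np.float32)


def score_aneurysm_suspicion(volume: np.ndarray) -> float:
    """Stand-in for the AI/ML model; returns a suspicion score in [0, 1]."""
    return float(volume.mean() > 0.0)  # placeholder logic only


def triage(series_dir: str, threshold: float = 0.5) -> dict:
    """Flag the study for worklist prioritization without altering the originals."""
    score = score_aneurysm_suspicion(load_cta_series(series_dir))
    return {
        "suspected_aneurysm": score >= threshold,
        "preview": "compressed, non-diagnostic preview would be attached here",
    }
```

Consistent with the intended use, such a module would only reorder the reading worklist; the original images and the radiologist's diagnostic read remain untouched.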

AI/ML Overview

The provided text describes the acceptance criteria and the study demonstrating that the device (Rapid Aneurysm Triage and Notification - Rapid ANRTN) meets those criteria.

Here's the breakdown of the requested information:

1. A table of acceptance criteria and the reported device performance

| Metric | Acceptance Criteria (Product Code QFM Definition) | Reported Device Performance |
|---|---|---|
| AUC (for overall performance) | > 0.95 (for high performance) | > 0.95 |
| Sensitivity | Not explicitly defined as a threshold, but reported as a key metric | 0.933 |
| Specificity | Not explicitly defined as a threshold, but reported as a key metric | 0.868 |
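
For context on the sensitivity and specificity figures, the snippet below applies the standard confusion-matrix definitions. The true-positive and true-negative counts are back-calculated from the 151 positive and 115 negative test cases reported in item 2 below; they are assumptions for illustration, not figures stated in the submission.

```python
# Standard definitions (illustrative only; counts below are back-calculated assumptions).
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)


# Counts roughly consistent with 151 positives / 115 negatives and the
# reported 0.933 / 0.868 -- not data from the 510(k) summary.
tp, fn = 141, 10
tn, fp = 100, 15
print(f"sensitivity = {sensitivity(tp, fn):.3f}")  # ~0.934
print(f"specificity = {specificity(tn, fp):.3f}")  # ~0.870
```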

2. Sample size used for the test set and the data provenance

  • Test Set Sample Size: 266 CTA cases (151 positive for aneurysm, 115 negative); a confidence-interval sketch based on these counts follows this list.
  • Data Provenance:
    • Country of Origin: Not explicitly stated in the provided text.
    • Retrospective or Prospective: Not explicitly stated, but the mention of cases "obtained from Siemens, GE, Toshiba, and Philips scanners" and "698 (633 training, 65 validation) CTA cases from multiple sites" suggests a retrospective collection of existing imaging data.
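
As a rough illustration of the statistical uncertainty attached to a test set of this size, the sketch below computes a Wilson 95% confidence interval for sensitivity. The 141/151 true-positive count is back-calculated from the reported 0.933 and is an assumption; the summary does not state which interval method, if any, was used.

```python
# Wilson 95% confidence interval for a proportion (illustrative only).
from math import sqrt


def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for k successes out of n trials."""
    p = k / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half


# Assumed 141 true positives among the 151 positive test cases.
print(wilson_ci(141, 151))  # roughly (0.88, 0.96)
```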

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of Experts: 3 experts.
  • Qualifications of Experts: Not explicitly stated beyond "experts." It is typically assumed these are trained medical professionals (e.g., radiologists) with relevant experience, but specific qualifications are not detailed in the provided text.

4. Adjudication method for the test set

  • Adjudication Method: "Ground truth established by 3 experts." This implies a consensus-based approach, but the specific adjudication scheme (e.g., 2+1, 3+1, majority vote, or unanimous agreement) is not explicitly detailed. It most likely refers to a consensus reading among the three experts.
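
Because the adjudication scheme is not documented, the snippet below is only a sketch of one common option, majority voting among an odd number of readers; it should not be read as the method actually used.

```python
# Majority-vote adjudication among an odd number of readers (assumed scheme).
from collections import Counter


def adjudicate(reads: list[bool]) -> bool:
    """Return the label chosen by the majority of readers (e.g., 2 of 3)."""
    if len(reads) % 2 == 0:
        raise ValueError("use an odd number of readers to avoid ties")
    return Counter(reads).most_common(1)[0][0]


# Example: two of three experts call the case positive.
print(adjudicate([True, True, False]))  # True
```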

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly stated or described. The study focused on the standalone performance of the algorithm. The device's intended use is to "assist hospital networks and trained radiologists in workflow triage," implying an assistive role to humans, but the provided data only shows the algorithm's performance, not human performance with and without assistance.

6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

  • Standalone Performance: Yes, a standalone performance validation was done. The text explicitly states: "Final device validation included standalone performance validation." and "This performance validation testing demonstrated the Rapid ANRTN device provides accurate representation of key processing parameters under a range of clinically relevant perturbations associated with the intended use of the software."
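
The summary does not say which perturbations were evaluated, so the sketch below only illustrates one generic way such a standalone robustness check could be structured: verifying that the suspicion score stays stable when small amounts of noise are added to the input volume. The `model` callable, noise level, and tolerance are all placeholders.

```python
# Generic perturbation-stability check (assumed test design, not the vendor's protocol).
import numpy as np


def perturbation_check(model, volume: np.ndarray, noise_hu: float = 5.0,
                       trials: int = 10, tol: float = 0.05) -> bool:
    """Return True if the score shifts by at most `tol` under small added noise."""
    rng = np.random.default_rng(0)
    baseline = model(volume)
    for _ in range(trials):
        noisy = volume + rng.normal(0.0, noise_hu, size=volume.shape)
        if abs(model(noisy) - baseline) > tol:
            return False
    return True


# Example with a stand-in model that thresholds the mean intensity.
dummy_model = lambda v: float(v.mean() > 100.0)
print(perturbation_check(dummy_model, np.full((8, 64, 64), 120.0)))  # True
```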

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • Type of Ground Truth: Expert consensus. The text states, "ground truth established by 3 experts."

8. The sample size for the training set

  • Training Set Sample Size: 633 CTA cases. (The broader algorithm development dataset included 698 total, split into 633 training and 65 validation cases, with the 266 cases being the final performance validation set).

9. How the ground truth for the training set was established

  • Ground Truth Establishment for Training Set: The text states, "Algorithm development was performed using 698 (633 training, 65 validation) CTA cases from multiple sites." While it mentions that the cases were selected to cover a wide range of suspected saccular aneurysms, the specific method for establishing ground truth for the training set (e.g., expert review, clinical reports, or a combination) is not explicitly detailed in the provided document. It is implied, but not stated, that a similar expert review process was used as for the test set.
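
For illustration only, the split below reproduces the reported 633/65 case counts from a pool of 698 identifiers; the actual splitting procedure (randomization, stratification, or site balancing) is not described in the summary.

```python
# Illustrative random split mirroring the reported 633 training / 65 validation cases.
import random


def split_cases(case_ids: list[str], n_val: int = 65, seed: int = 0):
    """Shuffle case identifiers and hold out n_val of them for validation."""
    rng = random.Random(seed)
    shuffled = case_ids[:]
    rng.shuffle(shuffled)
    return shuffled[n_val:], shuffled[:n_val]  # (training, validation)


train, val = split_cases([f"case_{i:03d}" for i in range(698)])
print(len(train), len(val))  # 633 65
```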

§ 892.2080 Radiological computer aided triage and notification software.

(a) Identification. Radiological computer aided triage and notification software is an image processing prescription device intended to aid in prioritization and triage of radiological medical images. The device notifies a designated list of clinicians of the availability of time sensitive radiological medical images for review based on computer aided image analysis of those images performed by the device. The device does not mark, highlight, or direct users' attention to a specific location in the original image. The device does not remove cases from a reading queue. The device operates in parallel with the standard of care, which remains the default option for all cases.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the notification and triage algorithms and all underlying image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, how the algorithm affects or relates to clinical practice or patient care, and any algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide effective triage (e.g., improved time to review of prioritized images for pre-specified clinicians).

(iii) Results from performance testing that demonstrate that the device will provide effective triage. The performance assessment must be based on an appropriate measure to estimate the clinical effectiveness. The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, associated diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals for these individual subsets can be characterized with the device for the intended use population and imaging equipment.

(iv) Stand-alone performance testing protocols and results of the device.

(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use;

(ii) A detailed description of the intended user and user training that addresses appropriate use protocols for the device;

(iii) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality for certain subpopulations), as applicable;

(iv) A detailed description of compatible imaging hardware, imaging protocols, and requirements for input images;

(v) Device operating instructions; and

(vi) A detailed summary of the performance testing, including: test methods, dataset characteristics, triage effectiveness (e.g., improved time to review of prioritized images for pre-specified clinicians), diagnostic accuracy of algorithms informing triage decision, and results with associated statistical uncertainty (e.g., confidence intervals), including a summary of subanalyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.