K Number
K240942
Device Name
CINA-CSpine
Manufacturer
Date Cleared
2024-09-12

(160 days)

Product Code
Regulation Number
892.2080
Panel
RA
Reference & Predicate Devices
Intended Use

CINA-CSpine is a radiological computer aided triage and notification software indicated for use in the analysis of cervical spine CT images.

The device is intended to assist hospital networks and appropriately trained physician specialists by flagging and communicating suspected positive findings compatible with acute cervical spine fractures, including non-displaced fracture lines and/or displaced fracture fragments.

CINA-CSpine uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings on a standalone application in parallel to the ongoing standard of care image interpretation. The device is not designed to detect vertebral compression fractures.

The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images that are meant for informational purposes only, and are not intended for diagnostic use beyond notification. The device does not alter the original medical image, and it is not intended to be used as a diagnostic device.

The results of CINA-CSpine are intended to be used in conjunction with other patient information and based on professional judgment to assist with triage/prioritization of medical images.

Notified clinicians are ultimately responsible for reviewing full images per the standard of care.

Device Description

CINA-CSpine is a radiological computer-assisted triage and notification software device.

CINA-CSpine runs on a standard "off the shelf" server/workstation and consists of the CSpine Image Processing Application, which can be integrated, deployed, and used with the CINA Platform (cleared under K200855) or other medical image communications devices. CINA-CSpine receives cervical spine CT scans identified by the CINA Platform or another medical image communications device and processes them using deep learning algorithms that execute multiple computational steps to identify suspected positive findings compatible with acute cervical spine fractures. It then generates results files that the CINA Platform, or a similar medical image communications device, transfers to a PACS system or workstation for worklist prioritization.
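
The format of these results files is not described in the submission. Purely as an illustration of the hand-off between the image processing application and the communications platform, a minimal sketch might look like the following; all names and fields are assumptions, not the device's actual interface.

```python
# Hypothetical sketch of a results file handed from the image processing
# application back to the communications platform. Field names and the JSON
# format are illustrative assumptions only.
import json
from dataclasses import dataclass, asdict


@dataclass
class TriageResult:
    study_instance_uid: str   # DICOM StudyInstanceUID of the analyzed study
    series_instance_uid: str  # DICOM SeriesInstanceUID of the analyzed series
    suspected_finding: bool   # True if a suspected cervical spine fracture was detected
    algorithm_version: str    # version of the deep learning model used


def write_results_file(result: TriageResult, path: str) -> None:
    """Serialize the triage result so the platform can route it to PACS/worklist."""
    with open(path, "w") as f:
        json.dump(asdict(result), f, indent=2)
```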

To identify the suspected presence of cervical fractures, the device uses a deep learning model trained end-to-end on 1,338 cases acquired from the US and France, representing a distribution of fracture presentations, locations, and acquisition protocols, including multiple scanner models from Siemens, Philips, GE, and Canon/Toshiba. Additional deep learning models are used to locate the individual vertebrae so that images that do not conform to the expected field of view can be excluded.

DICOM images are received, recorded, and filtered before processing. The series are processed chronologically by running the algorithms on each series to detect suspected positive findings of a cervical spine fracture; active notifications for the flagged series are then sent to the worklist application.
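
As a rough sketch of that receive/filter/process/notify flow (the filtering criteria, attribute checks, and stub functions below are assumptions, not the vendor's implementation):

```python
# Illustrative sketch of the receive -> filter -> chronological processing -> notify
# flow. The filtering criteria and the two stub functions are assumptions; the
# actual implementation is not described in the submission.
from datetime import datetime
from typing import Iterable
from pydicom import Dataset


def is_candidate_series(ds: Dataset) -> bool:
    """Keep only CT series whose DICOM attributes suggest cervical spine coverage."""
    body_part = str(getattr(ds, "BodyPartExamined", "")).upper()
    return getattr(ds, "Modality", "") == "CT" and "SPINE" in body_part


def acquisition_time(ds: Dataset) -> datetime:
    """Parse AcquisitionDate/AcquisitionTime (assumed present and well formed)."""
    return datetime.strptime(ds.AcquisitionDate + ds.AcquisitionTime[:6], "%Y%m%d%H%M%S")


def detect_fracture(ds: Dataset) -> bool:
    """Stub for the deep learning detector; always negative in this sketch."""
    return False


def notify_worklist(series_uid: str) -> None:
    """Stub for sending an active notification to the worklist application."""
    print(f"Active notification for series {series_uid}")


def process_incoming(series_list: Iterable[Dataset]) -> None:
    candidates = [ds for ds in series_list if is_candidate_series(ds)]
    for ds in sorted(candidates, key=acquisition_time):  # chronological order
        if detect_fracture(ds):
            notify_worklist(ds.SeriesInstanceUID)
```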

The Worklist Application displays an active notification for new studies with suspected findings as they arrive. All cervical spine CT studies received by the CINA-CSpine device that include at least 5 visible cervical vertebrae are displayed in the worklist, and those on which the algorithms have detected a suspected finding are marked with an icon (i.e., an active notification). In addition, a compressed, grayscale, unannotated image marked "not for diagnostic use" is displayed as a preview. This compressed preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for diagnostic use beyond notification.
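
A schematic of those display rules follows; only the at-least-5-visible-vertebrae criterion and the "not for diagnostic use" preview marking come from the description above, while all other names and fields are illustrative assumptions.

```python
# Schematic of the worklist display rules described above. The WorklistEntry
# fields are illustrative; only the >= 5 visible vertebrae criterion and the
# "not for diagnostic use" preview marking come from the device description.
from dataclasses import dataclass
from typing import Optional


@dataclass
class WorklistEntry:
    study_uid: str
    has_active_notification: bool  # icon shown when a suspected finding is detected
    preview_label: str             # caption on the compressed, unannotated preview


def build_worklist_entry(study_uid: str, visible_vertebrae: int,
                         suspected_finding: bool) -> Optional[WorklistEntry]:
    if visible_vertebrae < 5:
        return None  # study not displayed in the worklist
    return WorklistEntry(
        study_uid=study_uid,
        has_active_notification=suspected_finding,
        preview_label="Not for diagnostic use",  # preview is informational only
    )
```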

Presenting the trained physician specialist with a notification facilitates earlier triage by allowing image prioritization in the PACS, so a suspect case receives attention earlier than it would under the standard of care alone.

The CINA Platform is an example of a medical image communications platform for integrating and deploying the CINA-CSpine image processing application. The medical image communications device (i.e., the technical platform) provides the interoperability requirements based on the standardized DICOM protocol and the services needed to communicate with existing systems in the hospital radiology department, such as CT modalities or other DICOM nodes (e.g., a DICOM router or PACS). It is responsible for transferring data, converting formats, notifying users of suspected findings, and displaying medical device data such as radiological data. The medical image communications server includes the Worklist client application, which receives notifications from the CINA-CSpine Image Processing application.
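
For orientation, a generic DICOM C-STORE receiver of the kind such a communications platform is built on can be sketched with the open-source pynetdicom library. This is not the CINA Platform's implementation, only an illustration of the standardized DICOM service layer; the AE title and port are arbitrary choices.

```python
# Generic DICOM C-STORE receiver sketch using the open-source pynetdicom library.
# It only illustrates the kind of standardized DICOM service such a platform
# provides; it is not the CINA Platform's implementation.
from pynetdicom import AE, evt
from pynetdicom.sop_class import CTImageStorage


def handle_store(event):
    """Persist each received CT instance for downstream processing."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(f"{ds.SOPInstanceUID}.dcm")
    return 0x0000  # DICOM "Success" status


ae = AE(ae_title="TRIAGE_SKETCH")         # AE title chosen here for illustration
ae.add_supported_context(CTImageStorage)  # accept CT Image Storage from modalities/routers
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```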

AI/ML Overview

1. Table of Acceptance Criteria and Reported Device Performance

| Metric | Acceptance Criteria (Performance Goal) | Reported Device Performance (mean [95% CI]) | Predicate Device Performance (mean [95% CI]) |
|---|---|---|---|
| Sensitivity | ≥ 80% | 90.3% [84.5% - 94.5%] | 91.7% [82.7% - 96.9%] |
| Specificity | ≥ 80% | 91.9% [86.8% - 95.5%] | 88.6% [81.2% - 93.8%] |
| Time-to-Notification (All Cases) | Not specified (comparable to predicate) | 2.9 minutes [2.7 - 3.0] | Not specified |
| Time-to-Notification (True Positive Cases) | Not specified (comparable to predicate) | 2.8 minutes [2.6 - 3.0] | 3.9 minutes [3.8 - 4.1] |
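
The summary does not state which confidence interval method was used. As an illustration of how such intervals can be derived from case counts, the sketch below uses the exact (Clopper-Pearson) method; the counts in the example call are hypothetical and do not reflect the study's actual case split.

```python
# Sketch of an exact (Clopper-Pearson) 95% confidence interval for a proportion
# such as sensitivity or specificity. The submission does not state which interval
# method was actually used; this is one common choice, shown for illustration.
from scipy.stats import beta


def clopper_pearson(successes: int, total: int, alpha: float = 0.05):
    """Return (lower, upper) bounds of the exact two-sided (1 - alpha) CI."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, total - successes + 1)
    upper = 1.0 if successes == total else beta.ppf(1 - alpha / 2, successes + 1, total - successes)
    return lower, upper


# Hypothetical counts, for illustration only (not the study's actual case split):
print(clopper_pearson(90, 100))
```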

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: 328 clinical anonymized cases.
  • Data Provenance: Retrospective, multicenter, multinational. Data was acquired from:
    • US: 60.4% of cases, including a U.S. teleradiology company with a database from various U.S. hospitals across different territories.
    • OUS: 39.6% of cases.
    • Scanner Manufacturers: GE (31.1%), Philips (21.6%), Siemens (28.7%), Canon (18.3%), and 36 different scanner models.
    • Time Periods: The validation dataset was from independent sites and different time periods compared to the training data.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

  • Number of Experts: Three.
  • Qualifications of Experts: US-board-certified expert radiologists.

4. Adjudication Method for the Test Set

The ground truth was established by the consensus of the three US-board-certified expert radiologists. The specific adjudication rule (e.g., unanimous agreement, or majority vote in case of disagreement) is not further detailed.
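
For illustration only, a simple majority-vote rule, one of several possible adjudication schemes the summary leaves unspecified, could be expressed as:

```python
# Simple majority-vote consensus for three readers' case-level labels.
# This rule is an assumption for illustration; the actual adjudication scheme
# used to establish ground truth is not detailed in the summary.
from typing import Sequence


def consensus_label(reader_labels: Sequence[bool]) -> bool:
    """Return True (fracture present) if a majority of readers labeled it positive."""
    return sum(reader_labels) > len(reader_labels) / 2


print(consensus_label([True, True, False]))  # majority positive -> True
```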

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

No, an MRMC comparative effectiveness study was not reported. The study focused on the standalone performance of the AI device and compared its performance metrics (Sensitivity, Specificity, Time-to-Notification) to those reported for the predicate device. There is no mention of human readers improving with AI assistance.

6. Standalone Performance (Algorithm Only without Human-in-the-Loop)

Yes, standalone performance testing was performed. The described study evaluated the software's performance (sensitivity and specificity) in detecting cervical spine fractures on non-contrast CT scans without human intervention in the detection process.
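
In schematic terms, standalone evaluation reduces to comparing case-level device flags against the reference standard; the resulting counts are what feed interval calculations like the one sketched earlier. The function below is a generic illustration with hypothetical inputs.

```python
# Schematic of case-level standalone evaluation: compare device flags against the
# reference standard to obtain the confusion counts behind sensitivity/specificity.
# Inputs are hypothetical; real values would come from the test set and the algorithm.
from typing import Sequence, Tuple


def confusion_counts(device_flags: Sequence[bool],
                     reference: Sequence[bool]) -> Tuple[int, int, int, int]:
    """Return (TP, FP, FN, TN) over paired case-level results."""
    tp = sum(d and r for d, r in zip(device_flags, reference))
    fp = sum(d and not r for d, r in zip(device_flags, reference))
    fn = sum(not d and r for d, r in zip(device_flags, reference))
    tn = sum(not d and not r for d, r in zip(device_flags, reference))
    return tp, fp, fn, tn


tp, fp, fn, tn = confusion_counts([True, False, True], [True, False, False])
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```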

7. Type of Ground Truth Used

Expert Consensus: The ground truth was established by the consensus of three US-board-certified expert radiologists.

8. Sample Size for the Training Set

The deep learning model was trained end-to-end on 1,338 cases.

9. How the Ground Truth for the Training Set Was Established

The document states that the training data was acquired from the US and France, representing a distribution of fracture presentations, locations, and acquisition protocols. However, it does not explicitly detail how the ground truth was established for the training set (e.g., whether it was also expert consensus, pathology reports, or another method). A similarly rigorous annotation process to establish "true" fracture presence can be inferred, but specific details are not provided.

§ 892.2080 Radiological computer aided triage and notification software.

(a) Identification. Radiological computer aided triage and notification software is an image processing prescription device intended to aid in prioritization and triage of radiological medical images. The device notifies a designated list of clinicians of the availability of time sensitive radiological medical images for review based on computer aided image analysis of those images performed by the device. The device does not mark, highlight, or direct users' attention to a specific location in the original image. The device does not remove cases from a reading queue. The device operates in parallel with the standard of care, which remains the default option for all cases.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the notification and triage algorithms and all underlying image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, how the algorithm affects or relates to clinical practice or patient care, and any algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide effective triage (e.g., improved time to review of prioritized images for pre-specified clinicians).

(iii) Results from performance testing that demonstrate that the device will provide effective triage. The performance assessment must be based on an appropriate measure to estimate the clinical effectiveness. The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, associated diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals for these individual subsets can be characterized with the device for the intended use population and imaging equipment.

(iv) Stand-alone performance testing protocols and results of the device.

(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use;

(ii) A detailed description of the intended user and user training that addresses appropriate use protocols for the device;

(iii) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality for certain subpopulations), as applicable;

(iv) A detailed description of compatible imaging hardware, imaging protocols, and requirements for input images;

(v) Device operating instructions; and

(vi) A detailed summary of the performance testing, including: test methods, dataset characteristics, triage effectiveness (e.g., improved time to review of prioritized images for pre-specified clinicians), diagnostic accuracy of algorithms informing triage decision, and results with associated statistical uncertainty (e.g., confidence intervals), including a summary of subanalyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.