K Number
K200385
Device Name
CONTINUUM PACS
Date Cleared
2020-03-16

(27 days)

Product Code
Regulation Number
892.2050
Reference & Predicate Devices
Predicate For
N/A
Intended Use

CONTINUUM PACS is a software system to store, manage and display patient data, diagnostic data, videos and images from computerized ophthalmic diagnostic imaging devices.

Device Description

CONTINUUM PACS is an ophthalmic image management system designed to store, retrieve, and provide browser-based review of reports, videos, and images generated by ophthalmic imaging devices. CONTINUUM PACS has a central database for patient information and historical exams. It is installed on the user's server and communicates with the networked imaging devices. Users review images, reports, and videos via their existing browser software.
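As a rough illustration of the store-and-retrieve pattern described above (not vendor code; all names such as `ExamStore` and `get_history` are hypothetical, and a real PACS would persist to a server-side database and serve a browser client):

```python
# Minimal sketch of a central exam store, assuming a simple
# patient-ID-keyed history model. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Exam:
    patient_id: str
    modality: str      # e.g. "OPT" for ophthalmic tomography
    report_uri: str    # location of the stored report/image/video

@dataclass
class ExamStore:
    """Central database of patient information and historical exams."""
    _exams: list = field(default_factory=list)

    def add_exam(self, exam: Exam) -> None:
        self._exams.append(exam)

    def get_history(self, patient_id: str) -> list:
        # All historical exams for one patient, in insertion order.
        return [e for e in self._exams if e.patient_id == patient_id]

store = ExamStore()
store.add_exam(Exam("P001", "OPT", "/reports/p001-oct.pdf"))
store.add_exam(Exam("P002", "OP", "/images/p002-fundus.jpg"))
store.add_exam(Exam("P001", "OP", "/images/p001-fundus.jpg"))
print(len(store.get_history("P001")))  # → 2
```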

AI/ML Overview

The provided document, K200385 for CONTINUUM PACS, is a 510(k) summary for an ophthalmic image management system. This type of submission focuses on demonstrating substantial equivalence to a predicate device, rather than proving novel safety and effectiveness through extensive clinical trials or performance metrics against strict acceptance criteria.

Therefore, many of the requested details, such as specific acceptance criteria for a device's performance, the sample size of a test set, the number and qualifications of experts for ground truth, adjudication methods, MRMC studies, standalone algorithm performance, and the sample size/ground truth for a training set, are NOT PRESENT in this document because they are typically not required for a 510(k) submission of this nature. The document explicitly states "Clinical Performance Data: None required or submitted."

The document focuses on comparing the new device (CONTINUUM PACS) to a predicate device (Sonomed, Inc. AXIS Image Management System, K171098). The acceptance criteria, in this context, center on demonstrating functional equivalence and showing no new safety or effectiveness concerns relative to the predicate.

Here's a breakdown of what can be extracted from the document:

1. A table of acceptance criteria and the reported device performance:

The document doesn't provide a quantitative table of acceptance criteria and performance metrics in the way one would expect for an AI/ML device or a device requiring specific performance thresholds. Instead, the "acceptance criteria" are implied by the comparison to the predicate device, demonstrating functional parity and compliance with industry standards.

| Characteristic | Acceptance Criteria (Implied by Predicate Equivalence) | Reported Device Performance (CONTINUUM PACS) |
|---|---|---|
| Software-only system | Yes | Yes |
| Patient database | Yes | Yes |
| Imaging review capability | Yes | Yes |
| Image annotation and measurement capability | Yes | Yes |
| Browser-based application | Yes | Yes |
| Secure login | Yes | Yes |
| Interface with electronic medical records (EMR) | Yes | Yes |
| Connects to imaging instruments via DICOM and non-DICOM methods | Yes | Yes |
| Intended use | Store, manage, and display patient data, diagnostic data, videos, and images from computerized ophthalmic diagnostic imaging devices | Stores, manages, and displays patient data, diagnostic data, videos, and images from computerized ophthalmic diagnostic imaging devices |
| Compliance | DICOM compliant | DICOM compliant (as specified in its DICOM Conformance Statement) |
| Performance as intended | Performs as intended | Found during software verification and validation (V&V) testing to perform as intended |
| Safety & effectiveness | As safe, effective, and performs as well as or better than the predicate | Demonstrated to be as safe and effective as, and to perform as well as or better than, the predicate device |

The key difference highlighted is the User Interface design (HTML5 for CONTINUUM PACS vs. Silverlight for AXIS Image Management), which is stated to be a "minor difference" and does not raise new questions of safety or effectiveness.
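DICOM conformance itself is asserted via the device's DICOM Conformance Statement rather than shown in this summary. Purely as an illustration of what DICOM-level interoperability rests on (standard DICOM PS3.10 file structure, not anything specific to CONTINUUM PACS), a Part 10 DICOM file can be recognized by the ASCII marker "DICM" that follows a 128-byte preamble:

```python
# Sketch: recognize a DICOM Part 10 file by its header layout
# (128-byte preamble followed by the 4-byte ASCII marker "DICM").
# This reflects the DICOM PS3.10 standard, not vendor code.
import io

def looks_like_dicom(stream: io.BufferedIOBase) -> bool:
    header = stream.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Synthetic in-memory example with a valid preamble + marker.
fake = io.BytesIO(b"\x00" * 128 + b"DICM" + b"\x02\x00")
print(looks_like_dicom(fake))                 # → True
print(looks_like_dicom(io.BytesIO(b"JFIF")))  # → False
```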

2. Sample size used for the test set and the data provenance:

  • Sample Size: Not specified. The document states "Performance testing was performed on CONTINUUM PACS during software verification and validation." This implies internal testing against defined requirements, but not a specific "test set" in the context of clinical or AI performance evaluation with a defined sample size of cases.
  • Data Provenance: Not specified. As there are no clinical studies, no patient data or images were used for performance evaluation in the context of this submission beyond general software functionality testing.
  • Retrospective/Prospective: Not applicable, as no clinical study or test set of patient data was used in the described manner.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Number of Experts: Not applicable. No clinical test set requiring expert-established ground truth was part of this 510(k) summary.
  • Qualifications of Experts: Not applicable.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

  • Adjudication Method: Not applicable. No clinical test set requiring adjudication was part of this 510(k) summary.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

  • MRMC Study: No. The document explicitly states "Clinical Performance Data: None required or submitted."
  • Effect size of how much human readers improve with AI vs. without AI assistance: Not applicable, as no MRMC study or AI assistance component is described.

6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

  • Standalone Performance: No. This device is a PACS system, not an AI algorithm. Its function is to store, manage, and display data for human review, not to provide diagnostic output itself.

7. The type of ground truth used:

  • Type of Ground Truth: For the software functionality testing, the "ground truth" would be the expected behavior of the system based on design specifications and industry standards (e.g., DICOM conformance). No "clinical ground truth" (like pathology or outcomes data) was used or required for this type of submission.
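In software V&V of this kind, the "ground truth" is the specified behavior itself. A hedged sketch of what such a requirement check might look like (the function name and the requirement wording are illustrative, not taken from the actual V&V protocol):

```python
# Illustrative V&V-style check: the "ground truth" is the design
# specification, e.g. "a stored exam must be retrievable unchanged."
# Names here are hypothetical, not from the actual submission.
def verify_roundtrip(store: dict, key: str, value: bytes) -> bool:
    store[key] = value                 # store step
    return store.get(key) == value     # retrieved value must match spec

assert verify_roundtrip({}, "exam-001", b"\x89PNG")
print("requirement check passed")
```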

8. The sample size for the training set:

  • Sample Size for Training Set: Not applicable. This device is a PACS system and does not involve AI/ML models that require a training set.

9. How the ground truth for the training set was established:

  • Ground Truth Establishment for Training Set: Not applicable. As there is no training set for an AI/ML model, no ground truth needed to be established in this context.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards: Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).