K Number
K140054
Device Name
RESCAN
Manufacturer
ResMed
Date Cleared
2014-05-16

(127 days)

Product Code
Regulation Number
868.5905
Panel
AN
Reference & Predicate Devices
ResScan (K113815)
Intended Use

ResScan is intended to augment the standard follow-up care of patients by providing transfer of machine and therapeutic information. This includes the ability to remotely change settings in non-life support devices only.

It is intended to be used by Clinicians in conjunction with ResMed compatible therapy devices, using ResMed's proprietary communications protocol.

Device Description

The performance and functional characteristics of ResScan include all the user-friendly features of the predicate device.

ResScan allows the clinician to:

  • Download and view patient and machine data from ResMed flow generators
  • Store patient details
  • Set machine parameters (using Removable Media or a direct PC connection); non-life support devices only
  • Create and print reports
  • Use Removable Media or a direct PC connection as the interface between the flow generator and ResScan (a hypothetical sketch of this workflow follows the list)
  • Use a supported Data Card Reader
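
Purely for illustration, the sketch below models the kind of removable-media workflow described above: reading downloaded therapy data from a data card and queuing a settings change for a non-life support device. Every name in it (TherapySummary, therapy_summary.json, pending_settings.json, the field names) is an assumption; ResMed's proprietary file formats and communications protocol are not described in the submission.

```python
# Hypothetical illustration of the removable-media workflow described above.
# None of these file names, formats, or functions come from the 510(k) summary;
# ResMed's proprietary protocol and data layout are not disclosed there.
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class TherapySummary:
    """Minimal stand-in for the therapy summary data ResScan downloads."""
    patient_id: str
    usage_hours: float
    pressure_cm_h2o: float


def read_summary_from_media(media_root: Path) -> TherapySummary:
    """Read a (hypothetical) therapy summary file from the removable media."""
    raw = json.loads((media_root / "therapy_summary.json").read_text())
    return TherapySummary(**raw)


def write_settings_to_media(media_root: Path, settings: dict) -> None:
    """Queue a settings change (non-life support devices only) for the flow generator."""
    (media_root / "pending_settings.json").write_text(json.dumps(settings))


if __name__ == "__main__":
    # Self-contained demo using a temporary directory in place of a real data card.
    import tempfile
    with tempfile.TemporaryDirectory() as tmp:
        media = Path(tmp)
        (media / "therapy_summary.json").write_text(json.dumps(
            {"patient_id": "demo-001", "usage_hours": 7.2, "pressure_cm_h2o": 8.0}))
        summary = read_summary_from_media(media)
        write_settings_to_media(media, {"pressure_cm_h2o": 9.0})
        print(f"Usage for {summary.patient_id}: {summary.usage_hours} h")
```
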
AI/ML Overview

The provided document describes a 510(k) submission for the ResScan device, a software application intended to augment patient follow-up care by transferring machine and therapeutic information and by allowing remote changes to settings in non-life support devices. The submission aims to demonstrate substantial equivalence to a predicate device, also named ResScan (K113815).

Here's an analysis of the acceptance criteria and study information:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly present a table of quantitative acceptance criteria for specific performance metrics (e.g., accuracy, sensitivity, specificity) with corresponding reported device performance values. Instead, it discusses "predetermined acceptance criteria" in the context of verification testing.

The acceptance criteria appear to be qualitative and focused on successful communication and data transfer. The reported device performance is that these criteria were met.

| Feature/Test | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Settings Transfer (End-to-End Testing) | Settings successfully transferred between flow generator and ResScan | "All tests confirmed the product met the predetermined acceptance criteria." |
| Data Capture & Transmission (End-to-End Testing) | Data captured by the flow generator sent to ResScan | "All tests confirmed the product met the predetermined acceptance criteria." |
| Removable Media Functionality (Non-Clinical) | ResScan can receive patient usage, device settings, therapy summary data, detailed signal data, and device log information from the flow generator via removable media | "All tests confirmed the product met the predetermined acceptance criteria." (Specific mention: "ResScan met the predetermined pass/fail criteria as defined in the ResScan System Verification Report.") |
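
To make the pass/fail character of this verification concrete, here is a minimal end-to-end test sketch. It is not ResMed's test procedure; SimulatedFlowGenerator, its methods, and the specific settings and data fields are invented stand-ins used only to show how a binary "settings transferred / data received" criterion could be checked on the bench.

```python
# Illustrative end-to-end verification sketch (not ResMed's actual test suite).
# SimulatedFlowGenerator and its methods are hypothetical stand-ins for the
# real flow generator and ResMed's proprietary communications protocol.


class SimulatedFlowGenerator:
    """Hypothetical bench stand-in for a non-life support flow generator."""

    def __init__(self):
        self.settings = {"pressure_cm_h2o": 8.0}
        self.captured_data = [{"night": 1, "usage_hours": 7.2}]

    def apply_settings(self, new_settings: dict) -> None:
        self.settings.update(new_settings)

    def export_data(self) -> list:
        return list(self.captured_data)


def test_settings_transfer():
    """End-to-end check: settings sent from ResScan reach the device unchanged."""
    device = SimulatedFlowGenerator()
    requested = {"pressure_cm_h2o": 10.0}
    device.apply_settings(requested)          # stands in for the ResScan -> device transfer
    assert device.settings["pressure_cm_h2o"] == requested["pressure_cm_h2o"]


def test_data_capture_and_transmission():
    """End-to-end check: data captured by the device is received intact by ResScan."""
    device = SimulatedFlowGenerator()
    received = device.export_data()           # stands in for the device -> ResScan transfer
    assert received == device.captured_data   # pass/fail: payload arrives unmodified


if __name__ == "__main__":
    test_settings_transfer()
    test_data_capture_and_transmission()
    print("All tests passed the (illustrative) predetermined acceptance criteria.")
```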

2. Sample Size Used for the Test Set and the Data Provenance

The document describes "End-to-End bench testing" and "non-clinical verification activities." These refer to laboratory or bench testing rather than testing with patient data or in a clinical setting.

  • Sample Size for Test Set: Not applicable in the context of patient data. The "test set" would consist of various configurations, data types, and scenarios used in the bench testing. No numerical sample size for these test cases is provided.
  • Data Provenance: Not applicable as the testing was non-clinical/bench testing. The document states that ResScan "only obtains patient and machine information from therapeutic devices for which clinical trials have already been conducted, or compared with previous predicate comparison test results." This implies reliance on data generated by the therapeutic devices themselves, not data specifically collected for ResScan's performance evaluation.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

  • Number of Experts: Not applicable. Given that the testing was end-to-end bench testing of data transfer and settings changes, the "ground truth" would be established by the expected behavior of the system and direct observation/validation during engineering testing. There is no mention of human experts defining ground truth for this type of technical performance evaluation.

4. Adjudication Method for the Test Set

  • Adjudication Method: Not applicable. The testing described is technical verification (e.g., "end-to-end testing to confirm that settings were successfully transferred," "data captured by the flow generator was sent"). Adjudication methods like 2+1 or 3+1 are typically used for clinical image interpretation or diagnostic tasks where expert consensus is needed to resolve discrepancies in human assessments of often ambiguous data. Here, the outcome of a test case (e.g., "was the setting changed correctly?") is expected to be binary and objectively verifiable.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • Was it done?: No. The document explicitly states: "Clinical testing was not deemed necessary as identified in the Risk Analysis, as ResScan only obtains patient and machine information from therapeutic devices for which clinical trials have already been conducted, or compared with previous predicate comparison test results. Accordingly no clinical testing is required."
  • Effect size of human readers with vs. without AI assistance: Not applicable, as no MRMC study was conducted. ResScan is a data management and setting adjustment software, not an AI-assisted diagnostic or interpretation tool.

6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

  • Was it done?: Yes, in essence. The "end-to-end bench testing" described is a standalone performance test of the software's ability to correctly transfer data and settings. The performance evaluated is the software's internal functionality. While it is ultimately used by clinicians, the testing described focuses on the software's direct interaction with the flow generators and its own data processing capabilities, without evaluating a human-in-the-loop task.

7. Type of Ground Truth Used

  • Type of Ground Truth: Expected System Behavior/Technical Specification Adherence.
    • For settings transfer: The ground truth is whether the new setting was correctly applied to the flow generator as initiated by ResScan.
    • For data capture and transmission: The ground truth is whether the data generated by the flow generator was accurately received and stored by ResScan.
    • This is established by comparing the software's output/actions against the design specifications and expected functional behavior, not against pathology, outcomes data, or expert consensus in a clinical diagnostic sense.
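
Expressed in code, this notion of ground truth reduces to comparing what the software actually did against expected values taken from the design specification. The snippet below is a generic sketch of that comparison under assumed field names and values; it does not reflect the actual ResScan System Verification Report.

```python
# Sketch of "ground truth as specification adherence": the expected values come
# from the design specification, not from clinical labels. Field names are invented.


def verify_against_spec(expected: dict, actual: dict) -> dict:
    """Return a per-field pass/fail comparison of actual behaviour vs. the spec."""
    return {key: actual.get(key) == value for key, value in expected.items()}


# Expected behaviour per the (hypothetical) design specification ...
expected_settings = {"mode": "CPAP", "pressure_cm_h2o": 10.0}
# ... and what the flow generator reported after the ResScan-initiated change.
observed_settings = {"mode": "CPAP", "pressure_cm_h2o": 10.0}

results = verify_against_spec(expected_settings, observed_settings)
assert all(results.values()), f"Specification deviation: {results}"
```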

8. Sample Size for the Training Set

  • Sample Size for Training Set: Not applicable. This document describes a software system that manages data and settings, and interfaces with medical devices. It is not an AI/machine learning model that typically requires a distinct "training set" for model development. The software capabilities are likely developed through traditional software engineering paradigms, not statistical learning from a large dataset.

9. How the Ground Truth for the Training Set Was Established

  • How Ground Truth for Training Set Was Established: Not applicable, as there isn't a "training set" in the machine learning sense. The "ground truth" for developing the software (e.g., for unit and integration testing) would be derived from functional requirements, design specifications, and communication protocols.

§ 868.5905 Noncontinuous ventilator (IPPB).

(a) Identification. A noncontinuous ventilator (intermittent positive pressure breathing-IPPB) is a device intended to deliver intermittently an aerosol to a patient's lungs or to assist a patient's breathing.

(b) Classification. Class II (performance standards).