
510(k) Data Aggregation

    K Number: K982384
    Device Name: MERCATOR
    Date Cleared: 1999-01-27 (203 days)
    Product Code:
    Regulation Number: 870.2800
    Reference & Predicate Devices:
    Applicant Name (Manufacturer): HARLEY STREET SOFTWARE LTD.

    Intended Use

    Mercator is intended to be used as a data management tool for physicians and cardiac clinics to store, retrieve, communicate, and report ECG and ECG-related data acquired from a variety of ECG sources, including single- and multi-lead ECG devices. Users will be able to purchase specific modules for managing other patient cardiac-related data, such as pacemaker and rehabilitation data, that fit their patients' needs. Mercator is intended for use in clinics, hospitals, physicians' offices, or anywhere a medical doctor deems appropriate. Mercator does not offer diagnosis or medical alarms.

    Device Description

    Mercator is a software product designed for the Microsoft Windows 95 and Microsoft Windows NT operating systems running on an IBM compatible platform. Mercator consists of a user interface that enables health care professionals to input, store, and output data from a relational database.
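
    To make this concrete, the following is a minimal, purely illustrative sketch of one way ECG records could be laid out in a relational database; the tables, columns, and sample values are assumptions made for this sketch and do not come from the 510(k) submission.

```python
# Illustrative only: a minimal relational layout for ECG records of the kind
# Mercator is described as managing. Table and column names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id   INTEGER PRIMARY KEY,
    family_name  TEXT NOT NULL,
    given_name   TEXT NOT NULL,
    birth_date   TEXT
);

CREATE TABLE ecg_record (
    ecg_id       INTEGER PRIMARY KEY,
    patient_id   INTEGER NOT NULL REFERENCES patient(patient_id),
    acquired_at  TEXT NOT NULL,   -- timestamp of acquisition
    source       TEXT,            -- e.g. single- or multi-lead ECG device
    waveform     BLOB,            -- raw samples as delivered by the source
    notes        TEXT             -- physician annotations
);
""")

# Store one record and read it back.
conn.execute("INSERT INTO patient VALUES (1, 'Doe', 'Jane', '1950-04-01')")
conn.execute(
    "INSERT INTO ecg_record (patient_id, acquired_at, source, notes) "
    "VALUES (1, '1998-06-15T09:30:00', '12-lead resting ECG', 'routine follow-up')"
)
for row in conn.execute(
    "SELECT family_name, acquired_at, source FROM ecg_record "
    "JOIN patient USING (patient_id)"
):
    print(row)
```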

    The product consists of a set of modules that can be "plugged in" to customize the application to individual users' needs.
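
    As a hypothetical illustration of this "plug-in" module idea, here is one conventional way such a design could be expressed; the class and method names are invented for the sketch and are not Mercator APIs.

```python
# Illustrative only: a core application keeps a registry of optional modules
# (ECG, pacemaker, rehabilitation, ...) that users can purchase and plug in.
from abc import ABC, abstractmethod


class CardiologyModule(ABC):
    """Optional module plugged into the core data management application."""

    name: str

    @abstractmethod
    def handle(self, record: dict) -> None:
        """Process one patient record belonging to this module."""


class PacemakerModule(CardiologyModule):
    name = "pacemaker"

    def handle(self, record: dict) -> None:
        print(f"storing pacemaker data for patient {record['patient_id']}")


class ModuleRegistry:
    """Core application dispatches records to whichever modules are installed."""

    def __init__(self) -> None:
        self._modules: dict[str, CardiologyModule] = {}

    def register(self, module: CardiologyModule) -> None:
        self._modules[module.name] = module

    def dispatch(self, module_name: str, record: dict) -> None:
        self._modules[module_name].handle(record)


registry = ModuleRegistry()
registry.register(PacemakerModule())
registry.dispatch("pacemaker", {"patient_id": 1})
```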

    Mercator is capable of multi-tasking and supports the linking and embedding of related information objects in the ECG record. The software supports many aspects of a patient's cardiology record, including arrhythmia diagnosis, pathological diagnosis, ECGs, ECG information, doctor notes, pacemaker/ICD data, and associated reports. Mercator also supports appointment scheduling and stores information on physicians, facilities, allied health care professionals, and insurance providers.

    Mercator is not a life-supporting or life-sustaining system. It is intended that competent human intervention be involved before any impact on health occurs. Clinical judgment and experience are used to check and interpret the data.

    AI/ML Overview

    The Mercator 510(k) summary does not contain detailed information about specific acceptance criteria or a dedicated study proving device performance against those criteria in a quantitative manner. The document focuses on regulatory submission requirements for substantial equivalence.

    Here's a breakdown of the available information and what's missing, organized under the requested headings:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Software Functionality | "All software modules were tested at multiple levels including the unit level, post integration, and system level. All modules are tested thoroughly for proper functionality by trained staff using proven techniques. Testing included a combination of manual and automated test suites." |
    | Applicability to ANSI/AAMI EC38 - 1994 Standard | "the ECG module was tested against the applicable sections of the ANSI/AAMI EC38 - 1994 Standard for Ambulatory Electrocardiographs concerned with effectiveness and safety." |
    | Safety and Reliability | "Testing results indicated that the product is safe and reliable. All test plans were passed in accordance with the Mercator Product Release Test Plan." |
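
    The quoted statements describe the testing only qualitatively. As a purely hypothetical sketch of what one concrete, automated unit-level functional check for such a data-management module might look like (the function and table names below are assumptions, not Mercator APIs):

```python
# Hypothetical example only: an automated unit-level check that stored ECG data
# can be retrieved unchanged. Names are invented for illustration.
import sqlite3
import unittest


def store_and_fetch(waveform: bytes) -> bytes:
    """Round-trip a waveform through an in-memory relational store."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ecg (id INTEGER PRIMARY KEY, waveform BLOB)")
    conn.execute("INSERT INTO ecg (waveform) VALUES (?)", (waveform,))
    (fetched,) = conn.execute("SELECT waveform FROM ecg WHERE id = 1").fetchone()
    return fetched


class EcgStorageTest(unittest.TestCase):
    def test_round_trip_preserves_waveform(self):
        samples = bytes(range(256))
        self.assertEqual(store_and_fetch(samples), samples)


if __name__ == "__main__":
    unittest.main()
```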

    Missing:

    • Specific, quantifiable acceptance criteria (e.g., "Software shall perform X with Y% accuracy" or "System uptime shall be Z%").
    • Quantitative performance metrics and results against those criteria. The statements are qualitative ("thoroughly," "proper functionality," "safe and reliable").

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size: Not specified. The document mentions testing "all software modules" and "the ECG module," but does not provide details on the specific data sets or number of test cases used.
    • Data Provenance: Not specified. Given the nature of the device (ECG data management software), the "test set" would likely refer to the data used during software testing, not patient data for clinical evaluation. The document primarily discusses software functionality and compliance with a standard, not clinical performance with patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable. The "ground truth" concept, as typically applied to performance evaluation with expert adjudication, is not relevant here. The testing described is primarily focused on software functionality, adherence to design specifications, and compliance with a technical standard (ANSI/AAMI EC38 - 1994), not diagnostic accuracy against a clinical ground truth.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. As noted above, the testing is for software functionality and technical standard compliance, not clinical diagnostic accuracy requiring adjudicated ground truth.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No, this is not mentioned. Mercator is described as a "data management tool," and the summary explicitly states, "Mercator does not offer diagnosis or medical alarms." Therefore, an MRMC study comparing human reader performance with and without AI assistance is outside the scope of its intended use.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

    No. This is not applicable, as Mercator is a data management tool, not an algorithm providing a diagnostic output. The document explicitly states, "It is intended that competent human intervention be involved before any impact on health occurs. Clinical judgment and experience are used to check and interpret the data."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable. The "ground truth" in the context of Mercator's testing would be verifying that the software performed its intended functions (e.g., storing, retrieving, displaying ECG data correctly) according to its design specifications and the relevant technical standard. This is not a clinical ground truth regarding disease state.

    8. The sample size for the training set

    Not applicable. Mercator is a data management software, not a machine learning or AI-driven diagnostic device that would require a "training set" in the conventional sense.

    9. How the ground truth for the training set was established

    Not applicable, as there is no mention of a training set for Mercator.


    Summary of Device Evaluation in the 510(k) document:

    The 510(k) submission for Mercator primarily focuses on demonstrating substantial equivalence to predicate devices by comparing technological characteristics and showing compliance with relevant software development and testing practices. The "Performance Testing and Conclusions" section states that:

    • All software modules underwent multi-level testing (unit, integration, system) by trained staff using manual and automated test suites.
    • The ECG module was tested against "applicable sections of the ANSI/AAMI EC38 - 1994 Standard for Ambulatory Electrocardiographs concerned with effectiveness and safety."
    • The results indicated the product is "safe and reliable" and "All test plans were passed in accordance with the Mercator Product Release Test Plan."

    This type of documentation is typical for software products of this era (1998) that act as data management tools rather than advanced diagnostic or AI-powered devices. The focus is on demonstrating that the software functions as designed and meets recognized industry standards for medical devices of its class. The absence of detailed clinical performance data, expert adjudication, or AI-specific evaluation metrics is consistent with its intended use as a data management tool, not a diagnostic aid.
