
510(k) Data Aggregation

    K Number: K200360
    Device Name: TrackX
    Date Cleared: 2020-03-05 (21 days)
    Product Code:
    Regulation Number: 892.1650
    Intended Use

    TrackX v.2.0 is intended for use in any application where a fluoroscope is incorporated to aid in the diagnosis and treatment of disease.

    Device Description

    TrackX is a software application which captures diagnostic images from a fluoroscope via a video cable. In addition, TrackX interfaces with an off-the-shelf tracking system in order to track the position of surgical instruments relative to the fluoroscope. The user controls and views information via a primary monitor. The viewing monitor is not part of the subject device.

    TrackX tracks the location of the tip of a surgical instrument. It works by translating and rotating an X-ray image that contains the instrument on screen, following the surgeon's movement of that instrument. This real-time visual feedback shows the physician where the instrument has moved between X-ray images, allowing it to be repositioned with greater accuracy.
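The summary does not disclose TrackX's actual algorithm, so the following is only a minimal sketch of the kind of 2-D rigid transform the description implies: the last X-ray frame (and the tip overlay on it) is rotated and translated according to the tracked instrument motion. All names and parameters (`project_tip`, `pivot_xy`, etc.) are illustrative assumptions, not the vendor's implementation.

```python
import math

def project_tip(tip_xy, dx_mm, dy_mm, dtheta_rad, pivot_xy=(0.0, 0.0)):
    """Apply a 2-D rigid transform (rotation about pivot_xy, then a
    translation) to the last known tip position, giving where the tip
    overlay should be drawn on the previously captured X-ray image.
    Hypothetical helper; not from the 510(k) summary."""
    px, py = pivot_xy
    x, y = tip_xy[0] - px, tip_xy[1] - py
    c, s = math.cos(dtheta_rad), math.sin(dtheta_rad)
    # Rotate about the pivot, then translate by the tracked motion.
    return (c * x - s * y + px + dx_mm,
            s * x + c * y + py + dy_mm)

# A 90-degree rotation about the origin takes (10, 0) to (0, 10);
# adding a 5 mm x-translation gives (5, 10).
new_tip = project_tip((10.0, 0.0), 5.0, 0.0, math.pi / 2)
```

In practice the same transform would be applied to the whole image, so the static X-ray appears to move with the instrument between exposures.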

    AI/ML Overview

    The provided FDA 510(k) summary for TrackX v.2.0 (K200360) describes the device's acceptance criteria and the study demonstrating that the device meets them.

    Here's a breakdown of the requested information:

    1. A table of acceptance criteria and the reported device performance

    Instrument Tracking with Optical
    Acceptance Criteria: The location of projected markers should be within a 2mm mean of their expected 10mm increment from the detected instrument tip to each consecutive projected marker.
    Reported Device Performance: "The testing demonstrated that TrackX met specifications." (This implies the 2mm mean deviation criterion was met, but the exact quantitative result is not provided in this summary.)

    Software Verification
    Acceptance Criteria: Implement the intended changes for both the Projection feature and the Home Base Selector.
    Reported Device Performance: "The software testing performed for the modifications that are the subject of this Special 510(k) used established test methods that were used for the predicate device." (This confirms the new features were implemented and tested as intended.)
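The tracking criterion above can be made concrete with a small check: marker k is expected to sit 10 mm × (k + 1) from the detected tip, and the mean absolute deviation across markers must not exceed 2 mm. This is only an illustrative reading of the criterion, under assumed names; the summary reports no quantitative bench data, so the example distances below are hypothetical.

```python
def mean_marker_deviation_mm(tip_to_marker_mm, increment_mm=10.0):
    """Mean absolute deviation of measured tip-to-marker distances from
    their expected 10 mm increments. Hypothetical helper."""
    devs = [abs(d - increment_mm * (k + 1))
            for k, d in enumerate(tip_to_marker_mm)]
    return sum(devs) / len(devs)

def meets_criterion(tip_to_marker_mm, limit_mm=2.0):
    """True when the mean deviation is within the 2 mm acceptance limit."""
    return mean_marker_deviation_mm(tip_to_marker_mm) <= limit_mm

# Hypothetical bench measurements (mm): markers detected at 10.5, 19.5, 30.2
# against expected positions 10, 20, 30 -> mean deviation 0.4 mm.
deviation = mean_marker_deviation_mm([10.5, 19.5, 30.2])
passed = meets_criterion([10.5, 19.5, 30.2])
```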

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The provided document does not specify the sample size used for the bench testing of "Instrument Tracking with Optical" or the software verification testing. It also does not explicitly state the provenance of the data (e.g., country of origin, retrospective or prospective) for these tests. The testing described is "bench testing," which typically refers to lab-based, simulated scenarios rather than patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    The document does not mention the use of experts to establish a ground truth for the test set. The "Instrument Tracking with Optical" testing appears to rely on a predefined, expected 10mm increment for projected markers, suggesting a technical ground truth rather than expert interpretation.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    No adjudication method is mentioned in the provided summary. The testing described is bench testing against predefined technical specifications.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No MRMC comparative effectiveness study was performed or needed as indicated by the statement: "Clinical Study: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This device is a software application that assists with fluoroscopic procedures by tracking instruments and providing visual cues, not an AI-driven diagnostic reader.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the performance data presented (Bench Testing, Software Verification and Validation) represents standalone testing of the algorithm/software's functionality and accuracy against predefined specifications, without human-in-the-loop performance evaluation in a clinical setting.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the "Instrument Tracking with Optical" bench testing, the ground truth was based on expected technical measurements (e.g., the "expected 10mm increment" between projected markers). For the "Software Verification," the ground truth was the intended functionality and changes for the new features (Projection and Home Base Selector).

    8. The sample size for the training set

    The document does not provide any information about a training set. This is because TrackX v.2.0 is described as a software application that captures images and interfaces with a tracking system to translate and rotate images and track instrument location. It does not appear to be an AI/machine learning device that requires a training set in the conventional sense for image classification or prediction. The changes are "new features, Projection and Home Base Selector," which are functionally implemented, not learned.

    9. How the ground truth for the training set was established

    As no training set is mentioned or implied for this type of device and its new features, information on how its ground truth was established is not applicable.
