
510(k) Data Aggregation

    K Number: K170946
    Date Cleared: 2017-11-24 (239 days)
    Product Code:
    Regulation Number: 892.1650
    Reference & Predicate Devices:
    Device Name: Skan-C Mobile C-Arm X-Ray System - 230V Variant, Skan-C Mobile C-Arm X-Ray System - 110V Variant

    Intended Use

    The Skan-C, a Mobile Surgical C-Arm X-Ray System, is intended to provide Fluoroscopic and Radiographic images of the patient during Diagnostic, Surgical and Interventional procedures.

    Examples of clinical applications may include Orthopaedic procedures; GI procedures such as Endoscopy and Cholangiography; and Neurology, Urology, Vascular, Critical Care and Emergency Room procedures.

    Skan-C is not recommended for Cardiac Applications.

    The Skan-C Surgical C-Arm is indicated for real-time imaging and/or recording of the surgical region of interest and anatomy using X-ray imaging techniques.

    Device Description

    SKAN-C is a mobile X-ray C-Arm fluoroscopic device intended to assist in guiding interventional and surgical procedures. The device can also be used for radiographic applications. It is designed so that it can be moved around and positioned for the required anatomical/clinical/procedural position.

    SKAN-C, a Mobile Surgical C-Arm, consists of two units: a mobile image-intensified C-Arm unit with generator, and a workstation for image display, storage and manipulation. The C-Arm unit with generator is capable of the movements essential for patient positioning, such as horizontal travel, orbital movement, wig-wag movement and C rotation. The X-ray generator, X-ray control system and collimator controls are housed in the C-Arm unit.

    AI/ML Overview

    The provided document is a 510(k) premarket notification for a medical device (the Skan-C Mobile C-Arm X-Ray System). It outlines the device's indications for use, technological characteristics, and the non-clinical and clinical tests performed to demonstrate substantial equivalence to a predicate device, rather than providing the specific acceptance criteria and detailed study results typical of a performance evaluation tied to an AI algorithm.

    Based on the provided text, the device is an X-ray system, not an AI-powered diagnostic device. Therefore, the questions related to AI-specific metrics, ground truth establishment for AI training/testing, and multi-reader multi-case studies for AI assistance are not directly applicable to this document.

    However, I can extract information related to the performance evaluation of the X-ray system itself, which serves as its "acceptance criteria" and "study" for regulatory purposes.

    Here's an interpretation of the requested information based on the provided document, focusing on the device as an X-ray system:


    1. A table of acceptance criteria and the reported device performance

    The document doesn't present a direct "acceptance criteria" table in the way one might see for diagnostic performance metrics (e.g., sensitivity, specificity). Instead, it demonstrates compliance with recognized safety and performance standards and compares its technological characteristics to a predicate device. The "acceptance criteria" can be inferred as meeting these standards and showing comparable technical specifications.

    | Acceptance Criteria (Inferred from Compliance) | Reported Device Performance (Skan-C) |
    |---|---|
    | Safety & Essential Performance: compliance with IEC 60601-1 | Compliant |
    | Imaging Performance, Accuracy of Loading Factors, Reproducibility of Output: compliance with IEC 60601-2-54 | Compliant |
    | Recovery Management, Patient Data, Last Image Hold, Image Measuring: compliance with IEC 60601-2-43 | Compliant |
    | Radiation Safety (Half-Value Layer, Leakage/Stray Radiation): compliance with IEC 60601-1-3 | Compliant |
    | Electromagnetic Compatibility (EMC): conducted/radiated emissions, harmonics, voltage fluctuations, ESD, EFT, RF, surges, power-frequency magnetic field, voltage dips per IEC 60601-1-2 | Compliant |
    | Image Quality (DQE, Spatial Resolution, Dynamic Range, Beam Alignment, Recovery/Reuse Rate): per FDA guidance for solid-state X-ray imaging devices | Compliant |
    | FDA Performance Standards: 21 CFR 1020.30-1020.32 | Compliant |
    | Traceability to Predicate Device (Technological Characteristics) | "Equivalent in technological and other characteristics to the predicate device, GE OEC Fluorostar." |
    | Usability: user experience with device setup and post-imaging processes | "Did not reveal any discomfort or complex user interfaces." |
    | Image Adequacy for Indicated Use | "Acquired images were of adequate quality for the indicated use," per independent radiologists' views. |
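    The summary lists DQE among the image-quality metrics but does not define it. For context (this definition comes from standard imaging physics, e.g. the IEC 62220-1 measurement framework, not from the 510(k) document itself), detective quantum efficiency is the ratio of squared output to input signal-to-noise ratio as a function of spatial frequency, commonly evaluated from the modulation transfer function (MTF), the normalized noise power spectrum (NNPS), and the incident photon fluence q:

    ```latex
    \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
                   \;=\; \frac{\mathrm{MTF}^{2}(f)}{q \cdot \mathrm{NNPS}(f)}
    ```

    A DQE closer to 1 at a given spatial frequency means the detector preserves more of the information carried by the incident X-ray quanta.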

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify a numerical sample size for "test sets" in the context of diagnostic image analysis, as it's an X-ray system, not an AI algorithm being tested for diagnostic accuracy on a dataset.

    • Non-Clinical Tests: These involve testing the physical device against engineering and safety standards. There isn't a "sample size" of images or patients in the typical sense. It refers to testing the device's components and overall system functionality (e.g., radiation output measurements, EMC tests).
    • Clinical Tests (Usability and Image Quality):
      • Usability: "Usability aspects of the device were tested by the users and independent participants." No specific number provided.
      • Image Quality: "Independent views of Radiologists were obtained on the imaging performances and the acquired images were of adequate quality for the indicated use." No specific number of images or patients mentioned.
    • Data Provenance: Not explicitly stated for any "data." The company is based in India (Skanray Technologies Private Limited, Mysore, India). It's common for such tests to be conducted internally or by accredited labs in the manufacturer's region or contracted locations.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of Experts: "Independent views of Radiologists were obtained on the imaging performances." The number of radiologists is not specified, only that "Radiologists" (plural) were involved.
    • Qualifications: Stated as "Radiologists." No specific experience level (e.g., "10 years of experience") or subspecialty is detailed in the document.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document states "Independent views of Radiologists were obtained." This implies individual assessments. There is no mention of an adjudication method (like 2+1 or 3+1 consensus) being used for combining expert opinions or resolving discrepancies.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    There was no MRMC comparative effectiveness study done comparing human readers with AI assistance versus without AI assistance. This device is an X-ray imaging system, not an AI diagnostic tool. Its performance evaluation focuses on the safety, technical specifications, and image quality of the X-ray system itself.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This question is not applicable. The device is a C-Arm X-Ray system, not an AI algorithm. Therefore, there is no "standalone algorithm" performance to report.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "clinical tests" part of the evaluation:

    • Usability: The "ground truth" was user feedback and observational assessment by "users and independent participants."
    • Image Quality: The "ground truth" was "Independent views of Radiologists" regarding image adequacy. This is a form of expert opinion/consensus (though the consensus method isn't detailed). There's no mention of pathology or outcomes data for this specific evaluation in the provided summary.

    8. The sample size for the training set

    This question is not applicable. The device is a C-Arm X-Ray system, not an AI algorithm that requires a "training set" in the machine learning sense. The X-ray system is developed and validated through engineering standards and clinical evaluations demonstrating its functionality and safety, not through machine learning training.


    9. How the ground truth for the training set was established

    This question is not applicable, as there is no "training set" for an AI algorithm in the context of this device.
