Search Results

Found 2 results

510(k) Data Aggregation
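Listings like the ones below can typically be reproduced programmatically. The sketch that follows is an assumption-laden illustration, not part of this page: it builds lookup URLs against the public openFDA device/510(k) endpoint (`api.fda.gov/device/510k.json`) for the two K numbers aggregated here, without actually issuing the request.

```python
# Sketch: constructing openFDA 510(k) lookup URLs for the records below.
# The endpoint and the "k_number" field are based on the public openFDA
# device/510k dataset; they are assumptions, not stated in this page.
from urllib.parse import urlencode

BASE = "https://api.fda.gov/device/510k.json"

def build_510k_query(k_number: str, limit: int = 1) -> str:
    """Return a query URL that searches for a single 510(k) record by K number."""
    params = {"search": f'k_number:"{k_number}"', "limit": limit}
    return f"{BASE}?{urlencode(params)}"

# The two records aggregated on this page.
for k in ("K181943", "K170946"):
    print(build_510k_query(k))
```

Fetching each URL (e.g., with `urllib.request` or `requests`) would return JSON containing fields such as the decision date and product code shown in the listings.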

    K Number: K181943
    Manufacturer:
    Date Cleared: 2018-08-17 (28 days)
    Product Code:
    Regulation Number: 892.1650
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K091918

    Intended Use

    OSCAR is a mobile fluoroscopy system designed to provide fluoroscopic and spot film images of the patient during diagnostic, surgical and interventional procedures. Examples of clinical applications may include cholangiography, endoscopy, and urologic, orthopedic, neurologic, vascular, and stone localization procedures.

    Device Description

    OSCAR Prime and OSCAR Classic are distinguished by their image acquisition component: OSCAR Prime uses a flat panel detector, while OSCAR Classic uses an image intensifier. Both are marketed under the OSCAR brand name.

    OSCAR consists of an X-ray tube, X-ray tube assembly, X-ray controller, image receptor, and accessories. The device has no wireless functionality.

    The OSCAR C-Arm Mobile is intended to visualize anatomical structures by converting a pattern of X-radiation into a visible image through electronic amplification. It provides fluoroscopic and radiographic images of patient anatomy, especially during special procedures in hospitals and clinics. The fluoroscopic mode of operation allows the attending physician to view images in real time without the need to develop individual films.

    AI/ML Overview

    The provided text does not contain information about acceptance criteria or a study proving the device meets acceptance criteria in the manner requested (e.g., a detailed clinical trial or performance study with metrics, sample sizes, and ground truth establishment).

    This document is a 510(k) Premarket Notification from the FDA, which primarily focuses on demonstrating substantial equivalence to a legally marketed predicate device, rather than proving the device meets specific detailed acceptance criteria through a dedicated performance study.

    Here's what can be extracted from the document, organized as per your request, with explicit notes about missing information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a formal table of acceptance criteria with specific quantitative thresholds that a study then verifies. Instead, the "performance" is discussed in terms of technical specifications and compliance with industry standards.

    | Feature/Standard | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|---|
    | General | Safe and effective (equivalent to predicate) | "OSCAR is safe and effective as predicate device, and has no new indication for use. Therefore, OSCAR is substantially equivalent to predicate device." |
    | Safety | Compliance with relevant IEC standards and CFR | "OSCAR complies with industry standards such as IEC 60601-1 Series and 21 CFR 1020.30, 21 CFR 1020.31 and 21 CFR 1020.32 to minimize electrical, mechanical and radiation hazards." "Electrical, mechanical, environmental safety and performance testing according to standard IEC 60601-1, IEC 60601-1-3, IEC 60601-1-6, IEC 60601-2-28, IEC 60601-2-43, IEC 60601-2-54 and IEC 62366 were performed." |
    | EMC | Compliance with IEC 60601-1-2 | "EMC testing was conducted in accordance with standard IEC 60601-1-2." |
    | EPRC | Compliance with 21 CFR 1020.30, 1020.31 and 1020.32 | "OSCAR meets the EPRC standards (21 CFR 1020.30, 31, 32)." |
    | Software | Compliance with FDA guidance; software changes do not affect safety/effectiveness | "FDA guidance 'guidance for SSXI devices', and 'guidance for the Content of Premarket Submissions for Software Contained in Medical devices', was performed for OSCAR." "Changes to the predicate device software were tested and they do not affect the device safety and effectiveness. Also, the device software is moderate level of concern." |
    | DQE | For OSCAR Prime: more effective and safe than the predicate's DQE | For OSCAR Prime (flat panel): DQE of 59% (Option A) and 45% (Option B), compared to the predicate's DQE (image intensifier + CCD camera) of 51%. "the DQE of the OSCAR Prime is more effective and safety than predicate device." (This is a qualitative statement of comparison rather than a specific numeric acceptance criterion being met.) |

    Other technical specifications (e.g., resolution, kV, mA ranges) are listed but not presented with explicit acceptance criteria.
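    As background (this definition is not stated in the 510(k) summary itself), detective quantum efficiency (DQE) is the standard figure of merit for detector dose efficiency, conventionally defined per IEC 62220-1 as the ratio of squared output to squared input signal-to-noise ratio at spatial frequency f:

    ```latex
    \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
    ```

    A higher DQE means the detector preserves more of the SNR available in the incident X-ray beam, so the quoted 59% (Option A) flat-panel figure indicates a more dose-efficient detector than the predicate's 51% at the reported measurement condition.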

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not specified in the provided text. The document refers to "bench and clinical evaluation" but does not detail the size or nature of these evaluations for the purpose of a test set for performance.
    • Data Provenance: Not specified. The document does not indicate the country of origin of data or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    • This information is not provided in the document. The document refers to "bench and clinical evaluation" but does not detail how ground truth was established by experts.

    4. Adjudication Method for the Test Set

    • This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. No effect size of human reader improvement with AI vs. without AI assistance is present, primarily because the device described is a fluoroscopy system, not an AI-assisted diagnostic tool.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • This information is not applicable as the device is a medical imaging system (fluoroscopy system), not an AI algorithm designed to operate without human intervention. The performance mentioned relates to the technical specifications of the imaging equipment.

    7. Type of Ground Truth Used

    • The document does not explicitly state the type of ground truth used for any performance evaluation. It mentions "bench and clinical evaluation," which generally implies a comparison against existing clinical standards or established benchmarks for image quality and safety. However, specific types like expert consensus, pathology, or outcomes data are not cited.

    8. Sample Size for the Training Set

    • This information is not provided and is not applicable in the context of this device, as it is a fluoroscopy system and not an AI/machine learning algorithm requiring a separate training set.

    9. How the Ground Truth for the Training Set Was Established

    • This information is not provided and is not applicable, as the device described is a fluoroscopy system, not an AI/machine learning algorithm with a training set.

    K Number: K170946
    Date Cleared: 2017-11-24 (239 days)
    Product Code:
    Regulation Number: 892.1650
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K091918

    Intended Use

    The Skan-C, a Mobile Surgical C-Arm X-Ray System, is intended to provide Fluoroscopic and Radiographic images of the patient during Diagnostic, Surgical and Interventional procedures.

    Examples of clinical applications may include orthopaedic procedures; GI procedures such as endoscopy and cholangiography; neurology, urology, and vascular procedures; and critical care and emergency room procedures.

    Skan-C is not recommended for cardiac applications.

    The Skan-C Surgical C-Arm is indicated for real-time imaging and/or recording of the surgical region of interest and anatomy using X-ray imaging techniques.

    Device Description

    SKAN-C is a mobile X-ray C-Arm fluoroscopic device used to assist in guiding medical intervention and surgical procedures. The device can also be used for radiographic applications. It is designed so that it can be moved around and positioned for the required anatomical/clinical/procedural position.

    SKAN-C, a Mobile Surgical C-Arm, consists of two units: a mobile image-intensified C-Arm unit with generator, and a workstation for image display, storage, and manipulation. The C-Arm unit with generator is capable of the movements essential for patient positioning, such as horizontal travel, orbital movement, wig-wag movement, and C rotation. The X-ray generator, X-ray control system, and collimator controls are housed in the C-Arm unit.

    AI/ML Overview

    The provided document is a 510(k) premarket notification for a medical device (Skan C Mobile C-Arm X-Ray system). It outlines the device's indications for use, technological characteristics, and non-clinical/clinical tests performed to demonstrate substantial equivalence to a predicate device, rather than providing specific acceptance criteria and detailed study results typical of a performance evaluation directly tied to an AI algorithm.

    Based on the provided text, the device is an X-ray system, not an AI-powered diagnostic device. Therefore, the questions related to AI-specific metrics, ground truth establishment for AI training/testing, and multi-reader multi-case studies for AI assistance are not directly applicable to this document.

    However, I can extract information related to the performance evaluation of the X-ray system itself, which serves as its "acceptance criteria" and "study" for regulatory purposes.

    Here's an interpretation of the requested information based on the provided document, focusing on the device as an X-ray system:


    1. A table of acceptance criteria and the reported device performance

    The document doesn't present a direct "acceptance criteria" table in the way one might see for diagnostic performance metrics (e.g., sensitivity, specificity). Instead, it demonstrates compliance with recognized safety and performance standards and compares its technological characteristics to a predicate device. The "acceptance criteria" can be inferred as meeting these standards and showing comparable technical specifications.

    | Acceptance Criteria (Inferred from Compliance) | Reported Device Performance (Skan-C) |
    |---|---|
    | Safety & essential performance: compliance with IEC 60601-1 | Compliant |
    | Imaging performance, accuracy of loading factors, reproducibility of output: compliance with IEC 60601-2-54 | Compliant |
    | Recovery management, patient data, last image hold, image measuring: compliance with IEC 60601-2-43 | Compliant |
    | Radiation safety (half value layer, leakage/stray radiation): compliance with IEC 60601-1-3 | Compliant |
    | Electromagnetic compatibility (EMC): conducted/radiated emission, harmonics, voltage fluctuations, ESD, EFT, RF, surges, power frequency magnetic field, voltage dips as per IEC 60601-1-2 | Compliant |
    | Image quality (DQE, spatial resolution, dynamic range, beam alignment, recovery/reuse rate): as per FDA guidance for solid state X-ray imaging devices | Compliant |
    | FDA performance standards: 21 CFR 1020.30-1020.32 | Compliant |
    | Traceability to predicate device (technological characteristics) | "Equivalent in technological and other characteristics to the predicate device, GE OEC Fluorostar." |
    | Usability: user experience with device setup and post-imaging processes | "Did not reveal any discomfort or complex user interfaces." |
    | Image adequacy for indicated use | "Acquired images were of adequate quality for the indicated use" per independent radiologists' views. |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify a numerical sample size for "test sets" in the context of diagnostic image analysis, as it's an X-ray system, not an AI algorithm being tested for diagnostic accuracy on a dataset.

    • Non-Clinical Tests: These involve testing the physical device against engineering and safety standards. There isn't a "sample size" of images or patients in the typical sense. It refers to testing the device's components and overall system functionality (e.g., radiation output measurements, EMC tests).
    • Clinical Tests (Usability and Image Quality):
      • Usability: "Usability aspects of the device were tested by the users and independent participants." No specific number provided.
      • Image Quality: "Independent views of Radiologists were obtained on the imaging performances and the acquired images were of adequate quality for the indicated use." No specific number of images or patients mentioned.
    • Data Provenance: Not explicitly stated for any "data." The company is based in India (Skanray Technologies Private Limited, Mysore, India). It's common for such tests to be conducted internally or by accredited labs in the manufacturer's region or contracted locations.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of Experts: "Independent views of Radiologists were obtained on the imaging performances." The number of radiologists is not specified, only that "Radiologists" (plural) were involved.
    • Qualifications: Stated as "Radiologists." No specific experience level (e.g., "10 years of experience") or subspecialty is detailed in the document.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document states "Independent views of Radiologists were obtained." This implies individual assessments. There is no mention of an adjudication method (like 2+1 or 3+1 consensus) being used for combining expert opinions or resolving discrepancies.


    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    There was no MRMC comparative effectiveness study done comparing human readers with AI assistance versus without AI assistance. This device is an X-ray imaging system, not an AI diagnostic tool. Its performance evaluation focuses on the safety, technical specifications, and image quality of the X-ray system itself.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This question is not applicable. The device is a C-Arm X-Ray system, not an AI algorithm. Therefore, there is no "standalone algorithm" performance to report.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "clinical tests" part of the evaluation:

    • Usability: The "ground truth" was user feedback and observational assessment by "users and independent participants."
    • Image Quality: The "ground truth" was "Independent views of Radiologists" regarding image adequacy. This is a form of expert opinion/consensus (though the consensus method isn't detailed). There's no mention of pathology or outcomes data for this specific evaluation in the provided summary.

    8. The sample size for the training set

    This question is not applicable. The device is a C-Arm X-Ray system, not an AI algorithm that requires a "training set" in the machine learning sense. The X-ray system is developed and validated through engineering standards and clinical evaluations demonstrating its functionality and safety, not through machine learning training.


    9. How the ground truth for the training set was established

    This question is not applicable, as there is no "training set" for an AI algorithm in the context of this device.
