510(k) Data Aggregation

Found 4 results

    K Number
    K251167
    Device Name
    uDR Aurora CX
    Date Cleared
    2025-09-19 (157 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices: K213700

    Intended Use

    uDR Aurora CX is intended to acquire X-ray images of the human body, operated by a qualified technician; examples include acquiring two-dimensional X-ray images of the skull, spinal column, chest, abdomen, extremities, limbs, and trunk. The visualization of such anatomical structures provides visual evidence to radiologists and clinicians in making diagnostic decisions. This device is not intended for mammography.

    Device Description

    uDR Aurora CX is a Digital Medical X-ray Imaging System developed and manufactured by Shanghai United Imaging Healthcare Co., Ltd. (UIH). It comprises an X-ray Generator and an X-ray Imaging System. The X-ray Generator produces controlled X-rays via a high-voltage generator and X-ray tube assembly, ensuring stable energy output for penetration of the human body. The X-ray Imaging System converts X-ray photons into electrical signals via detectors, and the workstation generates DICOM-standard images that reflect density variations of the human body.
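
    Because the workstation output is DICOM-standard, any DICOM-aware toolkit can inspect such images. Below is a minimal sketch using the open-source pydicom library; the file name and the printed field values are illustrative assumptions, not details from the filing.

```python
# Minimal sketch (not from the 510(k)): reading a hypothetical radiograph
# exported by a DICOM-standard workstation. Requires pydicom and numpy.
import pydicom

ds = pydicom.dcmread("radiograph.dcm")   # hypothetical file name
print(ds.Modality)                       # e.g. "DX" for digital radiography
print(ds.Rows, ds.Columns)               # image dimensions in pixels
pixels = ds.pixel_array                  # 2-D array of detector values
print(pixels.min(), pixels.max())        # value range reflects density variation
```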

    AI/ML Overview

    This document describes the acceptance criteria and study details for two features of the uDR Aurora CX device: uVision and uAid.


    1. Acceptance Criteria and Reported Device Performance

    Feature: uVision
    Acceptance Criteria: When users employ the uVision function for automatic positioning, the automatically set system position and field size will meet clinical technicians' criteria with 95% compliance. This demonstrates that uVision can effectively assist clinical technicians in positioning tasks, specifically by aiming to reduce retake rates attributed to incorrect positioning (which studies indicate can range from 9% to 28%).
    Reported Device Performance: In 95% of patient positioning processes, the light field and equipment position automatically set by uVision met clinical positioning and shooting requirements for chest PA, whole-spine, and whole-lower-limb stitching exams. In the remaining 5% of cases, manual adjustments by technicians were needed.

    Feature: uAid
    Acceptance Criteria: The accuracy of non-standard image recognition (specifically, the rate of "Grade A" images recognized) should meet a 90% pass rate, aligning with industry standards derived from guidelines such as those from European Radiology and the ACR-AAPM-SPR Practice Parameter (which indicate Grade A image rates between 80% and 90% in public hospitals). This demonstrates that uAid can effectively assist clinical technicians in managing standardized image quality.
    Reported Device Performance: The uAid function can correctly identify four types of results (foreign object, incomplete lung fields, unexposed shoulder blades, and centerline deviation) and classify images into Green (qualified), Yellow (secondary), or Red (waste). It meets the requirement for checking examination and positioning quality. Specific quantitative performance (from the "Summary"): average algorithm time of 1.359 seconds (longest not exceeding 2 seconds); maximum memory occupation of no more than 2 GB; for foreign body, lung field integrity, and scapula opening, both sensitivity and specificity exceed 0.9.
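
    To make the stated 0.9 threshold concrete, here is a small sketch (ours, not from the submission) that computes sensitivity and specificity from confusion-matrix counts and applies the "both exceed 0.9" pass criterion. All counts below are invented placeholders; the filing reports only the aggregate result.

```python
# Hypothetical sketch of the acceptance check described above;
# the confusion-matrix counts are invented placeholders.
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

THRESHOLD = 0.9  # both metrics must exceed this per the stated criterion

findings = {
    # finding: (tp, fn, tn, fp) -- placeholder values only
    "foreign_body": (2900, 180, 1000, 78),
    "lung_field_integrity": (28, 3, 440, 25),
}
for name, (tp, fn, tn, fp) in findings.items():
    se, sp = sensitivity(tp, fn), specificity(tn, fp)
    verdict = "PASS" if se > THRESHOLD and sp > THRESHOLD else "FAIL"
    print(f"{name}: sensitivity={se:.3f} specificity={sp:.3f} -> {verdict}")
```

    The same harness could also assert the timing and memory criteria (average runtime under 2 seconds, peak memory under 2 GB) against measured values, though the submission does not describe how those were instrumented.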

    2. Sample Size and Data Provenance for the Test Set

    Feature: uVision
    Sample Size for Test Set: 348 cases (328 chest PA cases + 20 full-spine or full-lower-limb stitching cases) collected over one week, from 2024-12-17 to 2024-12-23. The device had been installed for over a year, with an average daily volume of ~80 patients, ~45 chest X-rays/day, and ~10-20 stitching cases/week. (A confidence-interval check on the 95% compliance rate at this sample size is sketched after this table.)
    Data Provenance: Prospective/retrospective hybrid: the data was collected prospectively from a device (serial number 11XT7E0001) in clinical use after installation and commissioning over a year prior to the reported test period. It was collected from individuals of all genders and varying heights (able to stand independently), with testing conducted in a real-world clinical setting. Country of origin: not explicitly stated, but the company is in Shanghai, China, suggesting the data is likely from China.

    Feature: uAid
    Sample Size for Test Set: Not explicitly stated as a single total number of cases. Instead, a data distribution is provided, with counts for different conditions across gender and age groups: for example, "lung field segmentation" had 465 negative and 31 positive cases, and "foreign object" had 1,078 negative and 3,080 positive cases. The sum of these individual counts suggests a total dataset of several thousand images.
    Data Provenance: Retrospective: data collection for uAid started in October 2017, with a wide range of data sources, including different cooperative hospitals. The data was cleaned and stored in DICOM format. Country of origin: not explicitly stated, but the company is in Shanghai, China, suggesting the data is likely from China.
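
    As a sanity check on the precision of a 95% rate at this sample size, the sketch below (our illustration, not an analysis from the filing) computes a Wilson score interval for 95% observed compliance over 348 cases.

```python
# Illustrative only: Wilson score interval for the reported 95% compliance
# rate over the 348 uVision test cases (not a calculation from the filing).
from math import sqrt

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score interval for a binomial proportion."""
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(0.95, 348)
print(f"95% compliance on n=348 -> 95% CI roughly ({lo:.3f}, {hi:.3f})")
# Prints approximately (0.922, 0.968).
```

    In other words, even if the point estimate exactly meets the 95% criterion, a sample of 348 cases leaves roughly a 2-3 percentage-point margin of statistical uncertainty on either side.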

    3. Number and Qualifications of Experts for Ground Truth (Test Set)

    Feature: uVision
    Number of Experts: Not explicitly stated. The statement says, "The results automatically set by the system are then statistically analyzed by clinical experts."
    Qualifications of Experts: "Clinical experts." No specific qualifications (e.g., years of experience, specialty) are provided.

    Feature: uAid
    Number of Experts: Not explicitly stated. The document mentions "The study was approved by the institutional review board of the hospitals," which implies expert review but does not detail the number or roles of experts in establishing the ground truth labels for the specific image characteristics tested.
    Qualifications of Experts: Not explicitly stated for establishing ground truth labels.

    4. Adjudication Method (Test Set)

    Feature: uVision
    Adjudication Method: Not explicitly stated. The data was "statistically analyzed by clinical experts." It does not specify if multiple experts reviewed cases or how disagreements were resolved.

    Feature: uAid
    Adjudication Method: Not explicitly stated. The process mentions data cleaning and sorting, and IRB approval, but not the specific adjudication method for individual image labels.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • uVision: No MRMC comparative effectiveness study was done to compare human readers with and without AI assistance. The study evaluates the AI's direct assistance in positioning, measured by compliance with clinical criteria, rather than comparing diagnostic performance of human readers.
    • uAid: No MRMC comparative effectiveness study was done. The study focuses on the standalone performance of the algorithm in identifying image quality issues, not on how it impacts human reader diagnostic accuracy or efficiency.

    6. Standalone Performance (Algorithm Only)

    • uVision: Yes, a standalone performance study was done. The "95% compliance" rate refers to the algorithm's direct ability to set system position and FOV that meet clinical technician criteria without a human actively adjusting or guiding the initial AI-generated settings during the compliance evaluation. Technicians could manually adjust those settings if needed.
    • uAid: Yes, a standalone performance study was done. The algorithm processes images and outputs a quality classification (Green, Yellow, Red) and identifies specific issues (foreign object, incomplete lung fields, etc.). Its sensitivity and specificity metrics are standalone performance indicators.

    7. Type of Ground Truth Used

    • uVision: Expert Consensus/Clinical Criteria: The ground truth for uVision's performance (i.e., whether the automatically set position/FOV was "compliant") was established by "clinical experts" based on "clinical technicians' criteria" for proper positioning and shooting requirements.
    • uAid: Expert Consensus/Manual Labeling: The ground truth for uAid's evaluation (e.g., presence of foreign objects, complete lung fields, open scapula, centerline deviation) was established through a "classification" process, implying manual labeling or consensus by experts after data collection and cleaning. The document mentions "negative" and "positive" data distributions for each criterion.
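
    The filing does not describe how the expert labels were combined. A common mechanism for this kind of manual labeling is simple majority vote with ties escalated to adjudication; the sketch below is a generic illustration under that assumption, not the submission's actual procedure.

```python
# Generic sketch of majority-vote consensus labeling (an assumption;
# the submission does not specify the mechanism used).
from collections import Counter

def consensus_label(labels: list[str]) -> str:
    """Majority vote across expert labels; ties are flagged for adjudication."""
    ranked = Counter(labels).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "NEEDS_ADJUDICATION"
    return ranked[0][0]

print(consensus_label(["positive", "positive", "negative"]))  # -> positive
print(consensus_label(["positive", "negative"]))              # -> NEEDS_ADJUDICATION
```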

    8. Sample Size for the Training Set

    • uVision: Not explicitly stated in the provided text. The testing data was confirmed to be "collected independently from the training dataset, with separated subjects and during different time periods."
    • uAid: Not explicitly stated in the provided text. The document mentions "The data collection started in October 2017, with a wide range of data sources" for training, but does not provide specific numbers for the training set size.

    9. How Ground Truth for Training Set was Established

    • uVision: Not explicitly stated for the training set. It can be inferred that a similar process to the test set, involving expert review against clinical criteria, would have been used.
    • uAid: Not explicitly stated for the training set. Given that the data was collected from "different cooperative hospitals," "multiple cleaning and sorting" was performed, and the study was "approved by the institutional review board," it is highly likely that the ground truth for the training set involved manual labeling by clinical experts/radiologists, followed by a review process (potentially consensus-based or single-expert) to establish the labels for image characteristics and quality.

    K Number
    K233543
    Device Name
    YSIO X.pree
    Date Cleared
    2024-05-21 (200 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices: K213700

    Intended Use

    The intended use of the device YSIO X.pree is to visualize anatomical structures of human beings by converting an X-ray pattern into a visible image.

    The device is a digital X-ray system to generate X-ray images from the whole body including the skull, chest, abdomen, and extremities. The acquired images support medical professionals to make diagnostic and/or therapeutic decisions.

    YSIO X.pree is not for mammography examinations.

    Device Description

    The YSIO X.pree is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with an X-ray tube, Bucky wall stand, Bucky table, X-ray generator, and portable wireless and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.

    The following modifications have been made to the cleared predicate device:

    • New camera model in collimator
    • New auto collimation function: Auto Long-Leg/Full-Spine
    • Two new wireless detectors

    AI/ML Overview

    The provided text is a 510(k) summary for the YSIO X.pree X-ray system. It describes the device, its intended use, and comparisons to predicate and reference devices. However, it does not contain the detailed clinical study information typically required to directly answer all aspects of your request regarding acceptance criteria and performance metrics for an AI/CADe device.

    Specifically, the document mentions:

    • A "Customer Use Test (CUT)" was performed at the "Universitätsklinikum Augsburg, Germany," focusing on "System function and performance-related clinical workflow, Image quality, Ease of use, Overall performance and stability."
    • "The results of the clinical test stated that the intended use of the system was met, and the clinical need covered."
    • "All images acquired with the new detectors were sufficiently acceptable for radiographic usage."

    This summary indicates that new features, particularly the "Auto Collimation Function: Auto Long-Leg/Full-Spine," which is AI-based (taken over from the MULTIX Impact algorithm, K213700), underwent testing. However, the FDA 510(k) summary does not include specific acceptance criteria with reported performance against those criteria, nor detailed information about the study design (sample size, ground truth establishment, expert qualifications, etc.) for the AI-based auto collimation feature. The "Customer Use Test" appears to be a general usability and performance test for the overall system and new detectors, rather than a rigorous performance study of an AI algorithm with specific quantitative metrics.

    Therefore, I cannot fully complete the table and answer all questions with the provided text. I can only extract what is present.

    Here's a breakdown of what can be extracted and what cannot:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not provide a table of explicit acceptance criteria for the AI-based auto collimation function with corresponding quantitative performance metrics (e.g., accuracy, precision for delimiting regions of interest). It only states that the overall system and new detectors' images were "sufficiently acceptable for radiographic usage" and that the "intended use of the system was met, and the clinical need covered."

    For the overall system and new detectors (from the Customer Use Test):

    Acceptance Criteria: System function and performance-related clinical workflow met criteria.
    Reported Device Performance: Intended use of the system was met, and the clinical need covered.

    Acceptance Criteria: Image quality acceptable.
    Reported Device Performance: All images acquired with the new detectors were sufficiently acceptable for radiographic usage.

    Acceptance Criteria: Ease of use acceptable.
    Reported Device Performance: Not explicitly quantified, but implied by overall "intended use met."

    Acceptance Criteria: Overall performance and stability acceptable.
    Reported Device Performance: Not explicitly quantified, but implied by overall "intended use met."

    For the AI-based Auto Collimation (Auto Long-Leg/Full-Spine): information not provided in the text.

    2. Sample size used for the test set and the data provenance:

    • Test set sample size for AI-based auto collimation: Not specified in the provided text.
    • Data Provenance: The Customer Use Test (CUT) was performed at "Universitätsklinikum Augsburg, Germany." This suggests prospective data collection in a clinical setting in Germany for the general system and new detectors. It is not explicitly stated whether the AI-based auto collimation performance was evaluated on this specific dataset, or whether a separate dataset (and its provenance) was used for validating the AI. Given that the AI algorithm was "taken over" from the MULTIX Impact (K213700), the previous 510(k) for that device might contain more details.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified for any specific ground truth establishment (especially for the AI-based auto collimation). The "Customer Use Test" involved clinical evaluation, implying healthcare professionals (presumably radiologists or radiographers) were involved, but their number and specific qualifications for establishing ground truth for AI performance are not detailed.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not specified.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No MRMC comparative effectiveness study is described for the AI-based auto collimation. The document focuses on device safety and substantial equivalence to a predicate, not enhancement of human reader performance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Not explicitly detailed. The AI auto collimation feature is integrated into the workflow, implying it assists, but a standalone technical performance study for the AI component itself is not described with quantitative results. The statement that the "Multix Impact algorithm has been taken over" suggests that its performance characteristics might have been established during the clearance of the MULTIX Impact (K213700), but those details are not in this document.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Not specified for the AI-based auto collimation. For the general system usability and image quality, the "Customer Use Test" implies a clinical assessment, likely representing expert (clinician) judgment.

    8. The sample size for the training set:

    • Not specified. The document states that the AI algorithm was "taken over" from the MULTIX Impact. This implies the training was done previously for the MULTIX Impact, but the size of that training set is not provided here.

    9. How the ground truth for the training set was established:

    • Not specified, for the same reasons as in point 8.

    K Number
    K233532
    Device Name
    MULTIX Impact E
    Date Cleared
    2023-11-29 (27 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices: K213700

    Intended Use

    MULTIX Impact E is a radiographic system used in hospitals, clinics, and medical practices. MULTIX Impact E enables radiographic exposures of the whole body including skull, chest, abdomen, and extremities, and may be used on pediatric, adult, and obese patients. Exposures may be taken with the patient sitting, standing, or in the prone position. MULTIX Impact E uses digital detectors for generating diagnostic images by converting X-rays into image signals. MULTIX Impact E is also designed to be used with conventional film/screen or Computed Radiography (CR) cassettes. MULTIX Impact E is not intended for mammography.

    Device Description

    The MULTIX Impact E Radiography X-ray system is a modular system of X-ray components (floor-mounted x-ray tube, bucky wall stand, bucky table, x-ray generator, portable wireless detector) based on the predicate device, the MULTIX Impact E (VB10) (K220919). The following modifications have been made to the predicate device: 1) A new elevating table (option) 2) Upgraded software version to VB20 to support hardware modifications. The modified system will be branded as the MULTIX Impact E.

    AI/ML Overview

    Please note, this document pertains to the MULTIX Impact E, which is a stationary X-ray system, and not an AI-powered diagnostic device. The submission focuses on demonstrating substantial equivalence to a predicate device, highlighting hardware modifications and software upgrades to support these changes. Therefore, the questions related to AI device performance, such as MRMC studies, ground truth establishment for AI, and training/test set sample sizes, are not applicable to this specific submission.

    The acceptance criteria and performance are based on general safety and effectiveness of X-ray systems, primarily through compliance with recognized standards and non-clinical testing.

    Here's a breakdown based on the provided document, addressing the applicable points:

    Acceptance Criteria and Device Performance for MULTIX Impact E (X-Ray System)

    1. A table of acceptance criteria and the reported device performance

    The document does not present explicit "acceptance criteria" in a quantitative performance table for a diagnostic outcome (like sensitivity/specificity for a disease). Instead, the acceptance is based on demonstrating substantial equivalence to a predicate device through meeting regulatory standards, functional performance of components, and safety considerations. The performance is reported in terms of comparison to the predicate device for various attributes.

    Here's an inferred "acceptance criteria" based on the document's structure, which is "Substantially Equivalent to Predicate Device (MULTIX Impact E VB10, K220919) and Reference Device (MULTIX Impact VA21, K213700) with no new safety risks":

    Attribute: Indications for Use
    Acceptance Criteria: Same as predicate device.
    Reported Device Performance: Same. Radiographic system for the whole body (skull, chest, abdomen, extremities) for pediatric, adult, and obese patients.

    Attribute: Detector
    Acceptance Criteria: Same as predicate device.
    Reported Device Performance: Same. Wireless detector Mars1717VS.

    Attribute: Tube Stand (TS)
    Acceptance Criteria: Integrated fully manual TS: same as predicate. Independent fully manual TS: movement range increase has no impact on safety/effectiveness.
    Reported Device Performance: Integrated fully manual TS: same functionality. Independent fully manual TS: movement range 33–180 cm (predicate: 50–185 cm); verified no impact on safety/effectiveness.

    Attribute: X-ray Tube
    Acceptance Criteria: Same as predicate device.
    Reported Device Performance: Same. RAY-12S_3 tube (170 kJ (230 kHU), 54 kW input, 50/60 Hz anode frequency).

    Attribute: Collimator
    Acceptance Criteria: Same as predicate device.
    Reported Device Performance: Same. Manual collimator with blade position and Cu filter status feedback.

    Attribute: Generator
    Acceptance Criteria: Same as predicate device.
    Reported Device Performance: Same. 50 kW and 40 kW high-frequency X-ray generators.

    Attribute: Automatic Exposure Control (AEC)
    Acceptance Criteria: Same as predicate device.
    Reported Device Performance: Same. 3-field AEC chamber with analog interface.

    Attribute: Patient Table
    Acceptance Criteria: Fixed table: same as predicate. Elevating table: increases clinical flexibility/versatility with no impact on safety/effectiveness.
    Reported Device Performance: Fixed table: same, with integrated rail. Elevating table (option): new, with independent rail mounting the tube stand; verified no impact on safety/effectiveness.

    Attribute: Human Machine Interface (HMI)
    Acceptance Criteria: Tube-side control module (TCM): same as predicate. Touch User Interface (TUI): software update to support the independent rail has no impact on safety/effectiveness.
    Reported Device Performance: TCM: same functions (SID display, tube angle display, release brakes). TUI: added Rotation Vertical Axis (RVA) button in software for independent rail support; verified no impact on safety/effectiveness.

    Attribute: UI on Imaging System
    Acceptance Criteria: Same as predicate device.
    Reported Device Performance: Same. Siemens UI concept.

    Attribute: Software Version
    Acceptance Criteria: Updated to support hardware modifications with no impact on safety/effectiveness.
    Reported Device Performance: VB20 (predicate: VB10); updated to support hardware modifications; verified no impact on safety/effectiveness.

    Attribute: Technical Specifications (Elevating Table vs. Reference)
    Acceptance Criteria: Motorized elevating table: function reduced for lower cost/low-end market with no impact on safety/effectiveness. Tracking: no tracking between tube stand and bucky (vs. auto-tracking on reference) with no impact on safety/effectiveness. Max weight capacity: same. Emergency stop: same.
    Reported Device Performance: Motorized elevating table (manual bucky movement vs. motorized on reference). Adjustable height: 52–96 cm (vs. 51.5–90 cm on reference). No tracking between tube stand and bucky. Max weight capacity: 300 kg. With emergency stop. Verified no impact on safety/effectiveness.

    Attribute: Safety and Effectiveness
    Acceptance Criteria: Device operates safely and effectively, with no new safety risks.
    Reported Device Performance: Risk management via hazard analysis and controls. Compliance with electrical, mechanical, and radiation safety standards.

    Attribute: Software Documentation
    Acceptance Criteria: Conforms to FDA's Content of Premarket Submissions for Device Software Functions guidance (basic level).
    Reported Device Performance: Software documentation submitted and shown to conform to the basic documentation level, demonstrating continued conformance with special controls for medical devices containing software.

    Attribute: Non-clinical Testing
    Acceptance Criteria: Compliance with relevant industry standards.
    Reported Device Performance: Complied with ANSI AAMI ES60601-1, IEC 60601-1-2, 60601-1-3, 60601-2-28, 60601-2-54, 60601-1-6, IEC 62366-1, ISO 14971, IEC 62304, IEC TR 60601-4-2, NEMA PS 3.1-3.20 (DICOM), and EN ISO 10993-1.

    Attribute: Verification & Validation
    Acceptance Criteria: Testing demonstrates intended performance and supports the substantial equivalence claim.
    Reported Device Performance: Non-clinical tests (integration and functional) were successful. Risk analysis complete, controls implemented. Test results support that all software specifications met acceptance criteria. Verification and validation found acceptable.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    This document describes a premarket notification (510(k)) for a conventional X-ray system, not an AI/ML-driven device. The "test set" here refers to non-clinical verification and validation testing of the device hardware and software against engineering specifications and regulatory standards.

    • Sample Size: Not applicable in the context of patient imaging data for an AI algorithm. The testing involves system-level and component-level verification, functional tests, and safety tests performed on representative units of the device. The document does not specify a "sample size" of devices tested, but rather indicates that such testing was "successfully completed."
    • Data Provenance: Not applicable in the context of patient data. The "data" comes from engineering tests and measurements on the device itself, not from clinical images.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable. Ground truth as typically defined for medical image analysis (e.g., diagnosis of disease) is not established for this type of device submission. The "ground truth" for the performance of an X-ray system revolves around its physical characteristics, image quality parameters, and safety compliance, which are measured and evaluated by engineers and quality assurance personnel against established technical specifications and regulatory standards.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. Adjudication methods are relevant for clinical studies where expert consensus is needed to establish ground truth for image interpretation. This submission is based on engineering and manufacturing verification and validation, not clinical image interpretation.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not applicable. This is not an AI-powered device, and no MRMC study was performed or required for this type of submission.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not applicable. This is not an AI algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The "ground truth" for this device's performance is established by engineering specifications, compliance with recognized industry standards (e.g., IEC, ISO, NEMA for X-ray systems), and adherence to manufacturing quality controls. This is demonstrated through physical measurements, electrical tests, safety circuit validation, software functionality tests, and image quality measurements, rather than clinical outcomes or diagnostic interpretations.

    8. The sample size for the training set

    Not applicable. This document does not describe an AI/ML device with a training set.

    9. How the ground truth for the training set was established

    Not applicable. This document does not describe an AI/ML device with a training set.


    K Number
    K231577
    Device Name
    MOBILETT Impact
    Date Cleared
    2023-07-25 (55 days)

    Product Code
    Regulation Number
    892.1720
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices: K213700

    Intended Use

    MOBILETT Impact is a mobile device intended to visualize anatomical structures of human beings by converting an X-ray pattern into a visible image. MOBILETT Impact is not intended for mammography examinations.

    Device Description

    MOBILETT Impact is a complete X-ray imaging system on wheels. It contains a single-tank high-voltage generator with an X-ray tube and collimator attached to the end of a telescopic support arm connected to a swiveling column. The system is manually driven, with no motor support for movement. The system includes a digital image acquisition system with an image display and graphical user interface. The digital detector Max wi-D can be stored in the built-in docking station in the system. In addition, the system can be used with the detectors Max mini and Core-L. All three detectors are equipped with rechargeable batteries, which can be charged by external battery chargers. The system can perform X-ray exposures when it is connected to mains power. Exposure can be released by a hand switch or remote control. The included system batteries power only the imaging system when it is not connected to the mains. Besides the detectors and imaging system, the hardware is the same as for the predicate device SEDECAL SM-V, which was cleared on 11/21/2022 under K222951.

    AI/ML Overview

    The provided text describes the Siemens MOBILETT Impact, a mobile X-ray system. However, it does not detail acceptance criteria or a study proving the device meets specific performance metrics for image quality or diagnostic accuracy in the way an AI/CADe device would.

    Instead, the document focuses on demonstrating substantial equivalence to a predicate device (SEDECAL SM-V, K222951) and a reference device (MULTIX Impact, K213700) based on regulatory compliance, technological characteristics, and safety.

    The summary highlights:

    • Regulatory Compliance: Adherence to various IEC, ANSI AAMI, ISO, NEMA, and FDA standards for medical electrical equipment, radiation protection, usability, risk management, software life cycle, and digital imaging.
    • Technological Equivalence: Comparison of features like regulation description, product code, indications for use (similar), high voltage generator, X-ray tube, tube voltage/current, collimator, touch screen control, movement (non-motorized), US Performance Standard, and power source (all same as predicate or similar).
    • Safety and Effectiveness Concerns: General statements about IFU, safety features (visual/audible warnings), error monitoring, and adherence to recognized industry practices to minimize electrical, mechanical, and radiation hazards.

    Regarding the specific information requested in your prompt for an AI/CADe device:

    1. Table of acceptance criteria and reported device performance: This information is not provided in the document. The document confirms regulatory compliance with various standards, but it doesn't state specific device performance metrics like sensitivity, specificity, or AUC for diagnostic tasks, as would be expected for an AI device.

    2. Sample size used for the test set and data provenance: This information is not provided. A "Customer Use Test (CUT)" was performed, but no details on sample size, data type, or provenance are given.

    3. Number of experts used to establish the ground truth for the test set and qualifications: This information is not provided. The CUT involved "gathering feedback on the device's usability in the clinical environment," but it doesn't describe a ground truth establishment process by experts for a diagnostic task.

    4. Adjudication method for the test set: This information is not provided.

    5. Multi-reader multi-case (MRMC) comparative effectiveness study: This information is not provided. The device is a mobile X-ray system, not an AI/CADe system designed to improve human reader performance.

    6. Standalone performance (algorithm only without human-in-the-loop performance): This information is not provided. The device is a hardware system for X-ray imaging, not a standalone algorithm.

    7. Type of ground truth used: For the "Customer Use Test (CUT)," the "ground truth" implicitly involved qualitative feedback on "system function and performance-related clinical workflow, image quality, ease of use, and overall performance and stability." This is not a quantitative ground truth for a diagnostic task like pathology or outcomes data.

    8. Sample size for the training set: This information is not provided. There is no mention of a training set as this is a hardware device.

    9. How the ground truth for the training set was established: This information is not provided.

    In summary, the provided FDA 510(k) clearance letter and its associated summary are for a conventional mobile X-ray imaging system, not an AI/CADe product. Therefore, the detailed performance metrics and study design elements typically associated with AI/CADe device validation (e.g., sensitivity, specificity, reader studies, ground truth establishment by experts) are not present in this document. The "study" mentioned is a "Customer Use Test (CUT)" focused on usability and overall system performance in a clinical workflow, rather than a diagnostic performance study.

