Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K241319
    Device Name: SKR 3000
    Date Cleared: 2024-11-21 (195 days)
    Product Code: (not listed)
    Regulation Number: 892.1680
    Reference Devices (why this record matched): K210619

    Intended Use

    The SKR 3000 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures.

    The SKR 3000 is not indicated for use in mammography, fluoroscopy, or angiography applications. The P-53 is for adult use only.

    Device Description

    The SKR 3000 digital radiography system performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal. The signal is input to an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.

    • This device is not intended for use in mammography.
    • This device is also used for carrying out exposures on children.

    The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a Basic documentation level. The CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversal, zooming, and grid removal/scattered radiation correction (Intelligent-Grid). Intelligent-Grid was cleared in K151465.
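
    The processing steps listed for the CS-7 form a conventional digital radiography post-processing chain. Purely as an illustration (this is not Konica Minolta's implementation; the function names, parameters, and tone curve below are hypothetical), a minimal NumPy sketch of what gradation, frequency-enhancement, and dynamic-range-compression stages might look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradation(image, lut):
    """Gradation processing: map raw detector values through a tone-curve LUT."""
    return lut[np.clip(image, 0, len(lut) - 1)]

def frequency_enhancement(image, sigma=3.0, gain=0.5):
    """Frequency processing via simple unsharp masking (boosts high frequencies)."""
    low_pass = gaussian_filter(image.astype(float), sigma)
    return image + gain * (image - low_pass)

def dynamic_range_compression(image, pivot=0.5, strength=0.3):
    """Pull very dark and very bright regions toward a pivot so both stay visible."""
    img = image / image.max()
    return img + strength * (pivot - img) * np.abs(img - pivot)

# Hypothetical pipeline: raw 14-bit detector frame -> processed display image
raw = np.random.randint(0, 2**14, size=(2048, 2048))
lut = (np.linspace(0.0, 1.0, 2**14) ** 0.6 * (2**14 - 1)).astype(np.uint16)  # example tone curve
processed = dynamic_range_compression(frequency_enhancement(gradation(raw, lut)))
```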

    This submission adds the new flat-panel x-ray detector (FPD) P-53 to the SKR 3000. The new P-53 panel shows improved performance compared to the predicate device. The P-53 employs the same surface material infused with silver ions (for antibacterial properties) as the reference device.

    The FPDs used in the SKR 3000 can communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC compliant). WPA2-PSK (AES) encryption is used to secure the wireless connection.

    The SKR 3000 is distributed under the commercial name AeroDR 3.

    AI/ML Overview

    The provided text does not contain detailed information about specific acceptance criteria or the study used to prove the device meets those criteria in the typical format of a clinical trial or performance study report. Instead, it is an FDA 510(k) clearance letter and a 510(k) Summary for the Konica Minolta SKR 3000 (K241319).

    This document focuses on demonstrating substantial equivalence to a predicate device (K151465 - AeroDR System2) rather than providing a detailed report of a performance study with specific acceptance criteria, sample sizes, expert involvement, and ground truth establishment, as one might find for a novel AI/software medical device.

    The "Performance Data" section mentions "comparative image testing was conducted to demonstrate substantially equivalent image performance for the subject device" and "the predetermined acceptance criteria were met." However, it does not specify what those acceptance criteria were, what the reported performance was against those criteria, or the methodology of the "comparative image testing."

    Therefore, I cannot populate the table or answer most of your specific questions based on the provided text.

    Here's what can be extracted and inferred from the text:

    1. A table of acceptance criteria and the reported device performance:

    This information is not explicitly provided in the document. The document states: "The performance testing was conducted according to the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices.' The comparative image testing was conducted to demonstrate substantially equivalent image performance for the subject device. The other verification and validation including the items required by the risk analysis for the SKR 3000 were performed and the results showed that the predetermined acceptance criteria were met. The results of risk management did not require clinical studies to demonstrate the substantial equivalence of the proposed device."

    The comparison table on page 6 compares specifications of the subject device (SKR 3000 with P-53) and the predicate device (AeroDR System2 with P-52), which may imply performance improvements that were part of the "acceptance criteria" for demonstrating substantial equivalence:

    Feature          | Subject Device (SKR 3000 / P-53) | Predicate Device (AeroDR System2 / P-52) | Implication (Potential "Performance")
    Pixel size       | 150 µm                           | 175 µm                                   | Improved spatial resolution
    Max. resolution  | 2.5 lp/mm                        | 2.0 lp/mm                                | Higher resolution
    DQE (1.0 lp/mm)  | 40% @ 1 mR                       | 35% @ 1 mR                               | Improved detective quantum efficiency
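
    For context on the pixel-size and resolution rows: the theoretical upper bound on resolvable spatial frequency is the Nyquist limit set by the pixel pitch, and the listed maximum resolutions sit below those limits, as is typical for real detectors. A quick arithmetic sketch (pitch values from the table above; everything else is generic):

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Nyquist limit in line pairs per mm for a given pixel pitch in micrometres."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

print(nyquist_lp_per_mm(150))  # P-53: ~3.33 lp/mm theoretical vs. 2.5 lp/mm listed
print(nyquist_lp_per_mm(175))  # P-52: ~2.86 lp/mm theoretical vs. 2.0 lp/mm listed
```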

    2. Sample size used for the test set and the data provenance:

    • Sample Size for Test Set: Not specified. The document mentions "comparative image testing" but does not detail the number of images or patients in the test set.
    • Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified. Given that it's a 510(k) for an X-ray system rather than an AI diagnostic algorithm, the "ground truth" for image quality assessment would likely be based on physical phantom measurements and potentially visual assessment by qualified individuals, but the details are not provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not specified.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it:

    • Not done/Not specified. This is not an AI-assisted device subject to typical MRMC studies. The device is a digital radiography system itself. The document states, "The results of risk management did not require clinical studies to demonstrate the substantial equivalence of the proposed device."

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Not applicable. This is a hardware device (X-ray detector and system), not a standalone algorithm.

    7. The type of ground truth used:

    • Inferred based on context: Likely objective physical measurements (e.g., resolution, DQE) and potentially qualitative image assessment against a known reference (predicate device or established norms). The phrase "comparative image testing" suggests direct comparison of images produced by the subject device vs. predicate device. Not explicitly stated to be expert consensus, pathology, or outcomes data.

    8. The sample size for the training set:

    • Not applicable / Not specified. This is a hardware device; typical "training sets" are associated with machine learning algorithms. Its design and manufacturing would be based on engineering principles and quality control, not a data-driven training set in the AI sense.

    9. How the ground truth for the training set was established:

    • Not applicable / Not specified. (See point 8)

    Summary of what is known/inferred:

    • Acceptance Criteria: "Predetermined acceptance criteria were met" for performance parameters related to image quality and safety. Specific numerical criteria are not detailed, but improved resolution and DQE over the predicate are highlighted.
    • Study Design: "Comparative image testing" and general "performance testing" were conducted according to FDA guidance for solid-state X-ray imaging devices.
    • Sample Size/Provenance/Experts/Adjudication/MRMC: Not specified, as expected for a hardware 510(k) demonstrating substantial equivalence through non-inferiority/improvement in physical specifications rather than through a diagnostic AI/CADe study.
    • Ground Truth: Likely objective physical performance metrics and visual comparison with a predicate, not clinical diagnoses or outcomes.
    • Training Set: Not applicable for a hardware device in the context of AI.

    K Number: K221803
    Manufacturer: (not listed)
    Date Cleared: 2022-07-18 (26 days)
    Product Code: (not listed)
    Regulation Number: 892.1720
    Reference Devices (why this record matched): K151465, K172793, K210619

    Intended Use

    This is a digital mobile diagnostic x-ray system intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography.

    Device Description

    This is a modified version of our previous predicate, the mobile PHOENIX. The predicate PHOENIX mobile is interfaced with Konica Minolta digital X-ray panels and CS-7 or Ultra image acquisition software. PHOENIX mobile systems will be marketed in the USA by Konica Minolta: models with the CS-7 software will be marketed as AeroDR TX m01, and models with the Ultra software as mKDR Xpress. The modification adds two new models of compatible Konica Minolta digital panels, the AeroDR P-65 and AeroDR P-75, cleared in K210619. These newly compatible panels are capable of a mode called DDR (Dynamic Digital Radiography), in which a series of radiographic exposures can be rapidly acquired at up to 15 frames per second, to a maximum of 300 frames.
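
    As a quick sanity check of what that DDR frame budget allows (numbers from the description above; the assumption that the 300-frame cap applies per acquisition is ours):

```python
max_frame_rate_fps = 15   # maximum DDR acquisition rate stated above
max_frames = 300          # stated frame cap (assumed per acquisition)

# Longest continuous DDR sequence at the maximum rate
max_duration_s = max_frames / max_frame_rate_fps
print(max_duration_s)  # 20.0 seconds
```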

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a mobile x-ray system. The document focuses on demonstrating substantial equivalence to a legally marketed predicate device rather than presenting a study to prove the device meets specific performance-based acceptance criteria for an AI/algorithm.

    Therefore, many of the requested details, such as specific acceptance criteria for algorithm performance, sample sizes for test sets, expert ground truth establishment, MRMC studies, or standalone algorithm performance, are not applicable or not present in this type of submission.

    The essence of this submission is that the entire mobile x-ray system, including its components (generator, panels, software), is deemed safe and effective because it is substantially equivalent to a previously cleared device, with only minor modifications (adding two new compatible digital panels and enabling a DDR function in the software, which is stated to be "unchanged firmware" and "moderate level of concern").

    Here's an attempt to address your questions based on the provided text, while acknowledging that many of them pertain to AI/algorithm performance studies, which are not the focus of this 510(k):

    1. A table of acceptance criteria and the reported device performance

    The document does not specify performance-based acceptance criteria for an AI/algorithm. Instead, it demonstrates substantial equivalence to a predicate device by comparing technical specifications. The "acceptance criteria" in this context are implicitly met if the new device's specifications (kW rating, kV range, mA range, collimator, power source, panel interface, image area sizes, pixel sizes, resolutions, MTF, DQE) are equivalent to or improve upon those of the predicate, and if it remains compliant with relevant international standards.

    Characteristic | Predicate: K212291 PHOENIX | Subject: PHOENIX/AeroDR TX m01 and PHOENIX/mKDR Xpress | Acceptance Criterion (Implicit) | Reported Performance
    Indications for Use | Digital mobile diagnostic x-ray for adults/pediatrics; skull, spine, chest, abdomen, extremities. Not for mammography. | SAME | Must be identical to predicate. | SAME (Identical)
    Configuration | Mobile system with digital x-ray panel and image acquisition computer | SAME | Must be identical to predicate. | SAME (Identical)
    X-ray Generator(s) | kW: 20, 32, 40, 50 kW; kV: 40-150 kV (1 kV steps); mA: 10-650 mA | SAME | Must be identical to predicate. | SAME (Identical)
    Collimator | Ralco R108F | SAME | Must be identical to predicate. | SAME (Identical)
    Meets US Performance Standard | Yes, 21 CFR 1020.30 | SAME | Must meet this standard. | Yes (Identical)
    Power Source | Universal, 100-240 V~, 1 phase, 1.2 kVA | SAME | Must be identical to predicate. | SAME (Identical)
    Software | Konica Minolta CS-7 or Ultra | CS-7 and Ultra modified for DDR mode | Functions must be equivalent/improved; DDR enabled. | CS-7 and Ultra modified for DDR mode
    Panel Interface | Ethernet or Wi-Fi wireless | SAME | Must be identical to predicate. | SAME (Identical)
    Image Area Sizes (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared
    Pixel Sizes (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared
    Resolutions (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared
    MTF (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Performance must be equivalent or better. | P-65: 0.62 (non-binning), 0.58 (2x2 binning); P-75: 0.62 (non-binning), 0.58 (2x2 binning)
    DQE (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Performance must be equivalent or better. | P-65: 0.56 @ 1 lp/mm; P-75: 0.56 @ 1 lp/mm
    Compliance Standards | N/A | IEC 60601-1, -1-2, -1-3, -2-54, -2-28, -1-6, IEC 62304 | Must meet relevant international safety standards. | Meets all listed IEC standards
    Diagnostic Quality Images | N/A | Produced diagnostic quality images as good as predicate | Must produce images of equivalent diagnostic quality. | Verified
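
    For readers unfamiliar with the MTF and DQE rows: these are standard detector metrics, not something specific to this submission. In textbook terms (our gloss, not taken from the 510(k)), MTF(f) describes how much contrast the detector preserves at spatial frequency f, and DQE(f) describes how efficiently it converts incident x-ray signal-to-noise into image signal-to-noise:

```latex
% General (textbook) definition of detective quantum efficiency
\[
  \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)},
  \qquad 0 \le \mathrm{DQE}(f) \le 1 .
\]
% So a reported DQE of 0.56 at 1 lp/mm means the panel preserves about 56% of the
% incident SNR^2 at that spatial frequency; higher is better.
```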

    2. Sample size used for the test set and the data provenance

    No specific test set or data provenance (country, retrospective/prospective) is mentioned for AI/algorithm performance. The "testing" involved "bench and non-clinical tests" to verify proper system operation and safety, and that the modified combination of components produced diagnostic quality images "as good as our predicate generator/panel combination." This implies physical testing of the device rather than a dataset for algorithm evaluation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. There was no specific test set requiring expert-established ground truth for an AI/algorithm evaluation. The determination of "diagnostic quality images" likely involved internal assessment by qualified personnel within the manufacturer's testing process.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. No adjudication method is described as there was no formal expert-read test set for algorithm performance.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it

    No. An MRMC study was not conducted as this submission is not about an AI-assisted diagnostic tool designed to improve human reader performance. It is for a mobile x-ray system.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    No. This submission is for a medical device (mobile x-ray system), not a standalone AI algorithm. The software components (CS-7 and Ultra) are part of the image acquisition process, and the only software "modification" mentioned is enabling the DDR function, which is a feature of the new panels, not an AI for diagnosis.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Not applicable. The substantial equivalence argument relies on comparing technical specifications and demonstrating that the physical device produces images of "diagnostic quality" equivalent to the predicate, rather than an AI producing diagnostic outputs against a specific ground truth.

    8. The sample size for the training set

    Not applicable. This is not an AI/ML algorithm submission requiring a training set. The software components are for image acquisition and processing, not for AI model training.

    9. How the ground truth for the training set was established

    Not applicable, as no training set for an AI/ML algorithm was used or mentioned.
