510(k) Data Aggregation

    K Number: K250665
    Device Name: SKR 3000
    Date Cleared: 2025-06-17 (104 days)
    Regulation Number: 892.1680
    Applicant Name (Manufacturer): KONICA MINOLTA, INC.
    Intended Use

    This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace a radiographic film/screen system in general-purpose diagnostic procedures. This device is not indicated for use in mammography, fluoroscopy, or angiography applications.

    Device Description

    The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal. The signal is input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.

    • This device is not intended for use in mammography.
    • This device is also used for carrying out exposures on children.
      The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a Basic documentation level. The CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal/scattered radiation correction (Intelligent-Grid). The Intelligent-Grid was cleared under K151465.
      The FPDs used in the SKR 3000 can communicate with the image processing device via wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC-compliant). WPA2-PSK (AES) encryption is adopted to secure the wireless connection.
      The SKR 3000 is distributed under the commercial name AeroDR 3.
      The purpose of the current premarket submission is to add pediatric use indications for the SKR 3000 imaging system.
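    The image-processing steps listed for the CS-7 (gradation processing, dynamic range compression, and so on) are standard digital radiography operations. As a generic illustration only, gradation processing can be modeled as mapping each pixel through a tone-curve lookup table; the LUT below is an assumption for demonstration, not Konica Minolta's proprietary curve.

```python
# Generic illustration of "gradation processing" as a lookup-table tone curve.
# The CS-7's actual curves are proprietary; this contrast-stretching LUT is an
# assumption for demonstration only.

def apply_gradation(image, lut):
    """Map every pixel of a grayscale image through a tone-curve LUT."""
    return [[lut[px] for px in row] for row in image]

# A simple contrast-stretching LUT over 8-bit values: clip below 50 and
# above 200, and stretch the window [50, 200] to the full [0, 255] range.
lut = [min(255, max(0, round((v - 50) * 255 / 150))) for v in range(256)]

print(apply_gradation([[50, 200, 0]], lut))  # -> [[0, 255, 0]]
```

    Real gradation curves are typically nonlinear (S-shaped) and anatomy-specific, but the pixel-wise LUT mechanism is the same.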
    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the SKR 3000 focus on adding a pediatric use indication. However, they do not contain the detailed performance data, acceptance criteria, or study specifics typically found in a clinical study report. The document states that "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'" and that "pediatric image evaluation using small-size phantoms was performed on the P-53." It also mentions that "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use."

    Based on the information provided, it's not possible to fully detail the acceptance criteria and the study that proves the device meets them according to your requested format. The document implies that the "acceptance criteria" likely revolved around demonstrating "substantially equivalent image performance" to a predicate device (AeroDR System 2 with P-52) for pediatric use, primarily through phantom studies, rather than a clinical study with human patients and detailed diagnostic performance metrics.

    Therefore, many of the requested fields cannot be filled directly from the provided text. I will provide the information that can be inferred or directly stated from the document and explicitly state when information is not available.

    Disclaimer: The information below is based solely on the provided 510(k) clearance letter and summary. For a comprehensive understanding, one would typically need access to the full 510(k) submission, which includes the detailed performance data and study reports.


    Acceptance Criteria and Device Performance Study for SKR 3000 (Pediatric Use Indication)

    The primary objective of the study mentioned in the 510(k) summary was to demonstrate substantial equivalence for the SKR 3000 (specifically with detector P-53) for pediatric use, compared to a predicate device (AeroDR System 2 with P-52).

    1. Table of Acceptance Criteria and Reported Device Performance

    Given the nature of the submission (adding a pediatric indication based on substantial equivalence), the acceptance criteria are not explicitly quantifiable metrics like sensitivity/specificity for a specific condition. Instead, the focus was on demonstrating "substantially equivalent image performance" through phantom studies.

    • Acceptance criterion (inferred from document): Image quality of the SKR 3000 with P-53 for pediatric applications must be "substantially equivalent" to the predicate device (AeroDR System 2 with P-52).
      Reported performance: "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use."
    • Acceptance criterion (inferred from document): Compliance with the "Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices" for pediatric image evaluation using small-size phantoms.
      Reported performance: "Image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'. Pediatric image evaluation using small-size phantoms was performed on the P-53."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Test Set): Not specified. The document indicates "small-size phantoms" were used, implying a phantom study, not a human clinical trial. The number of phantom images or specific phantom configurations is not detailed.
    • Data Provenance: Not specified. Given it's a phantom study, geographical origin is less relevant than for patient data. It's an internal study conducted to support the 510(k) submission. Retrospective or prospective status is not applicable as it's a phantom study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified. Given this was a phantom study, ground truth would likely be based on physical measurements of the phantoms and expected image quality metrics, rather than expert interpretation of pathology or disease. If human evaluation was part of the "comparative image evaluation," the number and qualifications of evaluators are not provided.
    • Qualifications: Not specified.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. For a phantom study demonstrating "substantially equivalent image performance," adjudication methods like 2+1 or 3+1 (common in clinical reader studies) are generally not applicable. The comparison would likely involve quantitative metrics from the generated images.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • MRMC Study: No. The document states "comparative image evaluation" and "pediatric image evaluation using small-size phantoms." This strongly implies a technical performance assessment using phantoms, rather than a clinical MRMC study with human readers interpreting patient cases. Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported, as AI assistance in image interpretation (e.g., CAD) is not the focus of this submission; it's about the imaging system's ability to produce quality images for diagnosis.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • Standalone Performance: Not applicable in the traditional sense of an AI algorithm's diagnostic performance. The device is an X-ray imaging system. The "performance" being evaluated is its ability to generate images, not to provide an automated diagnosis. The "Intelligent-Grid" feature mentioned is an image processing algorithm (scattered radiation correction), but its standalone diagnostic performance is not the subject of this specific submission; its prior clearance (K151465) is referenced.

    7. The Type of Ground Truth Used

    • Ground Truth Type: For the pediatric image evaluation, the ground truth was based on phantom characteristics and expected image quality metrics. This is inferred from the statement "pediatric image evaluation using small-size phantoms was performed."

    8. The Sample Size for the Training Set

    • Training Set Sample Size: Not applicable. The SKR 3000 is an X-ray imaging system, not an AI model that requires a "training set" in the machine learning sense for its primary function of image acquisition. While image processing algorithms (like Intelligent-Grid) integrated into the system might have been developed using training data, the submission focuses on the imaging system's performance for pediatric use.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: Not applicable, as no training set (in the context of an AI model's image interpretation learning) is explicitly mentioned or relevant for the scope of this 510(k) submission for an X-ray system.

    K Number: K241319
    Device Name: SKR 3000
    Date Cleared: 2024-11-21 (195 days)
    Regulation Number: 892.1680
    Applicant Name (Manufacturer): Konica Minolta, Inc.
    Intended Use

    The SKR 3000 is indicated for use in generating radiographic images of human anatomy. It is intended to replace a radiographic film/screen system in general-purpose diagnostic procedures.

    The SKR 3000 is not indicated for use in mammography, fluoroscopy and angiography applications. The P-53 is for adult use only.

    Device Description

    The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal. The signal is input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.

    • This device is not intended for use in mammography.
    • This device is also used for carrying out exposures on children.

    The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a Basic documentation level. The CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal/scattered radiation correction (Intelligent-Grid). The Intelligent-Grid was cleared under K151465.

    This submission adds the new flat-panel X-ray detector (FPD) P-53 to the SKR 3000. The new P-53 panel shows improved performance compared to the predicate device. The P-53 employs the same surface material infused with silver ions (for antibacterial properties) as the reference device.

    The FPDs used in the SKR 3000 can communicate with the image processing device via wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC-compliant). WPA2-PSK (AES) encryption is adopted to secure the wireless connection.

    The SKR 3000 is distributed under the commercial name AeroDR 3.

    AI/ML Overview

    The provided text does not contain detailed information about specific acceptance criteria or the study used to prove the device meets those criteria in the typical format of a clinical trial or performance study report. Instead, it is an FDA 510(k) clearance letter and a 510(k) Summary for the Konica Minolta SKR 3000 (K241319).

    This document focuses on demonstrating substantial equivalence to a predicate device (K151465 - AeroDR System2) rather than providing a detailed report of a performance study with specific acceptance criteria, sample sizes, expert involvement, and ground truth establishment, as one might find for a novel AI/software medical device.

    The "Performance Data" section mentions "comparative image testing was conducted to demonstrate substantially equivalent image performance for the subject device" and "the predetermined acceptance criteria were met." However, it does not specify what those acceptance criteria were, what the reported performance was against those criteria, or the methodology of the "comparative image testing."

    Therefore, I cannot populate the table or answer most of your specific questions based on the provided text.

    Here's what can be extracted and inferred from the text:

    1. A table of acceptance criteria and the reported device performance:

    This information is not explicitly provided in the document. The document states: "The performance testing was conducted according to the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices.' The comparative image testing was conducted to demonstrate substantially equivalent image performance for the subject device. The other verification and validation including the items required by the risk analysis for the SKR 3000 were performed and the results showed that the predetermined acceptance criteria were met. The results of risk management did not require clinical studies to demonstrate the substantial equivalence of the proposed device."

    The comparison table on page 6 provides a comparison of specifications between the subject device (SKR 3000 with P-53) and the predicate device (AeroDR System2 with P-52), which might imply performance improvements that were part of the "acceptance criteria" for demonstrating substantial equivalence:

    • Pixel size: 150 µm (subject, SKR 3000 / P-53) vs. 175 µm (predicate, AeroDR System2 / P-52), implying improved spatial resolution.
    • Max. resolution: 2.5 lp/mm vs. 2.0 lp/mm, implying higher limiting resolution.
    • DQE at 1.0 lp/mm: 40% @ 1 mR vs. 35% @ 1 mR, implying improved detective quantum efficiency.
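    The pixel-pitch and resolution figures can be cross-checked against the Nyquist limit, f_N = 1 / (2 × pitch). This is my own sanity check, not a calculation from the submission; the quoted maximum resolutions (2.5 and 2.0 lp/mm) sit below the respective Nyquist limits, which is consistent.

```python
# Nyquist spatial frequency implied by a detector's pixel pitch (in mm),
# in line pairs per mm. A reviewer's sanity check, not data from the 510(k).

def nyquist_lp_per_mm(pixel_pitch_mm):
    """Nyquist limit: one line pair spans two pixels."""
    return 1.0 / (2.0 * pixel_pitch_mm)

print(round(nyquist_lp_per_mm(0.150), 2))  # P-53, 150 um pitch -> 3.33
print(round(nyquist_lp_per_mm(0.175), 2))  # P-52, 175 um pitch -> 2.86
```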

    2. Sample size used for the test set and the data provenance:

    • Sample Size for Test Set: Not specified. The document mentions "comparative image testing" but does not detail the number of images or patients in the test set.
    • Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified. Given that it's a 510(k) for an X-ray system rather than an AI diagnostic algorithm, the "ground truth" for image quality assessment would likely be based on physical phantom measurements and potentially visual assessment by qualified individuals, but the details are not provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not specified.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • Not done/Not specified. This is not an AI-assisted device subject to typical MRMC studies. The device is a digital radiography system itself. The document states, "The results of risk management did not require clinical studies to demonstrate the substantial equivalence of the proposed device."

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Not applicable. This is a hardware device (X-ray detector and system), not a standalone algorithm.

    7. The type of ground truth used:

    • Inferred based on context: Likely objective physical measurements (e.g., resolution, DQE) and potentially qualitative image assessment against a known reference (predicate device or established norms). The phrase "comparative image testing" suggests direct comparison of images produced by the subject device vs. predicate device. Not explicitly stated to be expert consensus, pathology, or outcomes data.

    8. The sample size for the training set:

    • Not applicable / Not specified. This is a hardware device; typical "training sets" are associated with machine learning algorithms. Its design and manufacturing would be based on engineering principles and quality control, not a data-driven training set in the AI sense.

    9. How the ground truth for the training set was established:

    • Not applicable / Not specified. (See point 8)

    Summary of what is known/inferred:

    • Acceptance Criteria: "Predetermined acceptance criteria were met" for performance parameters related to image quality and safety. Specific numerical criteria are not detailed, but improved resolution and DQE over the predicate are highlighted.
    • Study Design: "Comparative image testing" and general "performance testing" were conducted according to FDA guidance for solid-state X-ray imaging devices.
    • Sample Size/Provenance/Experts/Adjudication/MRMC: Not specified; this is expected, as this is a hardware 510(k) demonstrating substantial equivalence through non-inferior or improved physical specifications rather than a diagnostic AI/CADe study.
    • Ground Truth: Likely objective physical performance metrics and visual comparison with a predicate, not clinical diagnoses or outcomes.
    • Training Set: Not applicable for a hardware device in the context of AI.

    K Number: K240281
    Date Cleared: 2024-05-31 (120 days)
    Regulation Number: 892.2050
    Applicant Name (Manufacturer): Konica Minolta, Inc.
    Intended Use

    Bone Suppression Software is an image processing technology to improve the visibility of soft tissues in the lung area by suppressing the signals of ribs and clavicles in chest x-ray images.

    Device Description

    The purpose of this software is to provide Bone Suppression images. The software receives exposed frontal plain chest X-ray images as inputs from the Senciafinder Gateway application and processes each image. It performs extraction of the lung field area, then applies the Bone Suppression process to attenuate bone signals, and outputs Bone Suppression images with attenuated rib and clavicle signals in the extracted lung field area.

    In conjunction with the Senciafinder Gateway, images are received from diagnostic imaging equipment via the network. Bone Suppression processing is applied to the received images, which are then output via the network to image display systems such as PACS or workstations.
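    The processing chain described above (receive image, extract lung field, attenuate bone signal, output) can be sketched schematically. The function names, the threshold segmentation, and the simple linear attenuation below are illustrative assumptions, not Konica Minolta's actual proprietary algorithm.

```python
# Schematic sketch of the described bone-suppression chain. All specifics here
# (threshold segmentation, linear attenuation factor) are hypothetical stand-ins
# for the undisclosed proprietary processing.

def extract_lung_field(image):
    """Placeholder segmentation: flag pixels above a fixed threshold as lung field."""
    return [[px > 100 for px in row] for row in image]

def suppress_bone(image, lung_mask, attenuation=0.5):
    """Attenuate pixel signal only inside the lung-field mask."""
    return [
        [int(px * attenuation) if inside else px
         for px, inside in zip(row, mask_row)]
        for row, mask_row in zip(image, lung_mask)
    ]

def process(image):
    """Full chain: segment the lung field, then suppress bone signal within it."""
    mask = extract_lung_field(image)
    return suppress_bone(image, mask)

print(process([[50, 200], [120, 80]]))  # -> [[50, 100], [60, 80]]
```

    In practice such software uses learned or model-based bone/soft-tissue decomposition rather than uniform attenuation, but the input-mask-suppress-output structure matches the description.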

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for "Bone Suppression Software" (K240281) by Konica Minolta, Inc. The core information regarding acceptance criteria and performance studies is present in Section VII, "PERFORMANCE DATA."

    However, the provided text explicitly states: "No clinical studies were required to support the substantial equivalence." This means that the 510(k) submission relied on non-clinical performance data and comparison to a predicate device, rather than new clinical trials with human subjects.

    Therefore, many of the requested points in your prompt, such as data provenance for a test set, number of experts for ground truth, adjudication methods, MRMC studies, standalone performance on a clinical test set, and detailed training set information, are not applicable or available from the provided text, because a clinical study (as typically understood in terms of human subjects and diagnostic performance analysis) was not performed or deemed necessary for this 510(k) clearance based on the provided document. The performance data relied on software verification and validation activities.

    Here's a breakdown of what can be answered based on the provided text, and what cannot:

    1. A table of acceptance criteria and the reported device performance

    • Acceptance Criteria: The text states, "All the verification and validation activities required by the specification and the risk analysis for the Bone Suppression Software were performed and the results showed that the predetermined acceptance criteria were met." However, the specific acceptance criteria (e.g., in terms of image quality metrics, bone suppression ratios, or processing speed) are not detailed in the provided document.
    • Reported Device Performance: Similarly, the specific quantitative performance metrics are not reported in the document. It only states that the device "performs as specified and functions as intended."

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Not applicable for a clinical test set (as no clinical study was required for substantial equivalence). The performance data referred to "Software Verification and Validation Testing," which typically involves simulated data, historical non-clinical data, or controlled laboratory tests, not a clinical "test set" in the diagnostic performance sense. The text does not provide details on the nature or origin of the data used for V&V activities.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not applicable. As no clinical study was performed or required for the 510(k) substantial equivalence, no expert-established ground truth for a clinical test set is mentioned.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not applicable. See point 3.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No. The document explicitly states, "No clinical studies were required to support the substantial equivalence." Therefore, no MRMC study evaluating human reader improvement with AI assistance was conducted or reported for this 510(k).

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • The document implies that standalone performance was assessed as part of "Software Verification and Validation Testing" to ensure it "performs as specified and functions as intended." However, the metrics of this standalone performance (e.g., objective image quality measurements, quantitative bone suppression metrics) are not provided in the text. It's a statement that V&V was successful, not a report of the results themselves.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not explicitly stated for the V&V activities. For "Software Verification and Validation Testing," ground truth would typically be established based on the specifications and expected outputs of the software (e.g., if a bone is known to be in a certain location, can the software correctly identify and suppress it?). This is different from clinical ground truth based on expert reads, pathology, or outcomes. The document does not elaborate on how this "ground truth" for V&V was established.

    8. The sample size for the training set

    • Not applicable/Not provided. The document does not mention details about the development (training) of the software. As it's a 510(k) submission, the focus is on verification and validation against a predicate, not detailing the original development process.

    9. How the ground truth for the training set was established

    • Not applicable/Not provided. See point 8.

    Summary based on the provided text:

    The acceptance criteria for the Bone Suppression Software were met through "Software Verification and Validation Activities." These activities confirmed that the device "performs as specified and functions as intended." However, the specific quantitative acceptance criteria and the detailed performance results are not disclosed in this 510(k) summary. Crucially, no clinical studies involving human subjects were required or conducted to support this substantial equivalence determination, meaning aspects like test set data, expert ground truth, adjudication, and MRMC studies are not part of this submission's evidence as described.


    K Number: K230906
    Date Cleared: 2023-04-25 (25 days)
    Regulation Number: 892.2050
    Applicant Name (Manufacturer): Konica Minolta, Inc.
    Intended Use

    KONICAMINOLTA DI-X1 is a software device that receives digital x-ray images and data from various sources (i.e. RF Units, digital radiographic devices or other imaging sources). Images and data can be stored, communicated, processed and displayed within the system and/or across computer networks at distributed locations. It is not intended for use in diagnostic review for mammography.

    Device Description

    KONICAMINOLTA DI-X1 is a software device that performs image processing and display using X-ray digital images (single-frame images, multi-frame images) generated by various diagnostic imaging modality consoles. It is a standalone software device intended to be installed on off-the-shelf servers and PCs.

    KONICAMINOLTA DI-X1 receives X-ray digital images, including serial images, processes the received images, and displays and sends the resulting images to PACS and other devices. In addition, KONICAMINOLTA DI-X1 can display images through a browser connection with a client that displays and processes images, and can instruct transmission of images.

    The personal computer used in KONICAMINOLTA DI-X1 stores the same data on two hard disks in real time using a RAID 1 mirroring function. Thus, even if one hard disk fails, operation can continue on the other hard disk, which holds the same data.
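    The RAID 1 scheme described works by duplicating every write across both disks so that a read can fall back to the surviving disk after a failure. A toy sketch of the concept (not the DI-X1 implementation, which operates at the block-device level):

```python
# Toy illustration of RAID 1 mirroring: every write goes to both "disks",
# and a read falls back to whichever disk is still healthy. Real RAID 1
# mirrors raw blocks in hardware or the OS, not key/value pairs.

class MirroredStore:
    def __init__(self):
        self.disks = [{}, {}]   # two hypothetical disks modeled as dicts
        self.failed = set()

    def write(self, key, value):
        """Duplicate the write to every healthy disk."""
        for i, disk in enumerate(self.disks):
            if i not in self.failed:
                disk[key] = value

    def fail(self, disk_index):
        """Simulate a disk failure."""
        self.failed.add(disk_index)

    def read(self, key):
        """Read from the first healthy disk holding the data."""
        for i, disk in enumerate(self.disks):
            if i not in self.failed and key in disk:
                return disk[key]
        raise IOError("data lost on all disks")
```

    With one disk failed, reads still succeed because the mirror holds an identical copy, which is exactly the continuity property the description claims.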

    Modifications are made to add TD-MODE, Position Tracking, and Signal Value Change. TD-MODE is designed to extract the initial contour of the tracheal wall and also displays the minimum and maximum tracheal diameter in each frame. Position Tracking tracks a reference point specified in a frame and displays a graph of the position-change data. Signal Value Change graphically displays the signal-value change within an ROI specified in a frame.
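    Of the new features, Signal Value Change is the most straightforward to illustrate: per frame, it plausibly reduces to an aggregate (e.g., mean) pixel value inside the specified ROI. A minimal sketch under that assumption; the DI-X1's actual computation is not disclosed in the summary.

```python
# Hypothetical sketch of the "Signal Value Change" graph data: the mean pixel
# value within a rectangular ROI, computed per frame of a multi-frame series.
# The ROI is (x0, y0, x1, y1) with exclusive upper bounds.

def roi_signal_series(frames, roi):
    """Return the mean in-ROI signal value for each frame."""
    x0, y0, x1, y1 = roi
    series = []
    for frame in frames:
        vals = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        series.append(sum(vals) / len(vals))
    return series

# Two tiny 2x2 frames, ROI covering the whole frame.
print(roi_signal_series([[[1, 2], [3, 4]], [[5, 6], [7, 8]]], (0, 0, 2, 2)))
# -> [2.5, 6.5]
```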

    AI/ML Overview

    The provided document is a 510(k) summary for the KONICAMINOLTA DI-X1 device. It describes the device, its intended use, and compares it to a predicate device. However, it explicitly states that "No clinical studies were required to support the substantial equivalence" and that "Performance tests demonstrate that the KONICAMINOLTA DI-X1 performs according to specifications and functions as intended."

    This indicates that the submission relies on non-clinical performance data and comparison to a predicate device rather than a comprehensive clinical study involving human readers and AI assistance. Therefore, many of the requested details, such as specific acceptance criteria for AI performance, sample sizes for test sets in a clinical AI study, expert qualifications, adjudication methods, MRMC studies, standalone AI performance, and ground truth establishment for a training set in an AI context, are not available in this document.

    The document discusses modifications to the device (TD-MODE, Position Tracking, and Signal Value Change) which are new features. While these new features clearly have specific functions (e.g., "TD-MODE is designed to extract the initial contour of the tracheal wall" and "displays the minimum and maximum tracheal diameter"), the document does not provide quantitative acceptance criteria or detailed study results for the performance of these specific new features. Instead, it broadly states that "All the verification activities required by the specification and the risk analysis for the KONICAMINOLTA DI-X1 were performed and the results showed that the predetermined acceptance criteria were met."

    Given this, I can only provide information based on what is stated in the document.


    Based on the provided FDA 510(k) summary for KONICAMINOLTA DI-X1:

    1. A table of acceptance criteria and the reported device performance:

    The document states that "All the verification activities required by the specification and the risk analysis for the KONICAMINOLTA DI-X1 were performed and the results showed that the predetermined acceptance criteria were met." However, the document does not detail these specific acceptance criteria in a table or quantify the reported device performance against them. It only provides a general statement of compliance.

    The new features added are:

    • TD-MODE: Designed to extract the initial contour of the tracheal wall and display minimum and maximum tracheal diameter in the frame.
    • Position Tracking: Used to track a specified reference point and display a graph of position change data.
    • Signal Value Change: Graphically displays signal value change within a specified ROI.

    No quantitative performance metrics or specific acceptance criteria for these features are provided in the summarized text.

    2. Sample size used for the test set and the data provenance:

    The document explicitly states: "No clinical studies were required to support the substantial equivalence." This means there was no clinical test set of patient data used in the typical sense for evaluating AI performance on diagnostic tasks. The evaluation was based on non-clinical performance tests and comparison to a predicate device. Therefore, no information on data provenance (country of origin, retrospective/prospective) for a clinical test set is available.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    Since no clinical study with a test set for diagnostic performance was conducted or reported in this summary, there is no information provided on the number or qualifications of experts used to establish ground truth.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    As no clinical test set for diagnostic performance was utilized, no adjudication method is mentioned.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    The document states: "No clinical studies were required to support the substantial equivalence." Thus, an MRMC comparative effectiveness study was not performed or, if it was, the results are not included in this 510(k) summary. Therefore, no effect size for human reader improvement with AI assistance is available.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    The document describes the device as a "software device that performs image processing and display" and includes features like TD-MODE, Position Tracking, and Signal Value Change. While these are algorithmic functions, the document only states that "Performance tests demonstrate that the KONICAMINOLTA DI-X1 performs according to specifications and functions as intended." It does not provide specific standalone quantitative performance metrics or a detailed study of the algorithm's performance independent of human interaction for diagnostic purposes.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    Given the lack of a clinical study assessing diagnostic performance, no type of ground truth (expert consensus, pathology, outcomes data) for such a study is mentioned. The "ground truth" for the device's functionality would likely be derived from the software's specified outputs for given inputs based on engineering verification tests, rather than clinical diagnostic ground truth.

    8. The sample size for the training set:

    The document does not describe the use of machine learning or AI models that would require a "training set" in the context of diagnostic image analysis. Instead, it describes a software device with image processing and display functions. Therefore, no information on the sample size of a training set is provided.

    9. How the ground truth for the training set was established:

    As no training set for machine learning or AI diagnostic models is mentioned, this information is not applicable and not provided.

    In summary, the provided 510(k) summary primarily focuses on demonstrating substantial equivalence to a predicate device based on common intended use, technological characteristics, and principle operations, supported by non-clinical performance verification. It does not contain the details of a clinical study for AI performance as typically seen for devices that provide diagnostic interpretations or assistance.


    K Number
    K223267
    Device Name
    SKR 3000
    Date Cleared
    2022-11-17

    (24 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Konica Minolta, INC.

    Intended Use

    The SKR 3000 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen system in general-purpose diagnostic procedures.

    The SKR 3000 is not indicated for use in mammography, fluoroscopy and angiography applications.

    Device Description

    The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.

    • This device is not intended for use in mammography
    • This device is also used for carrying out exposures on children.

    The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a Moderate level of concern. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal process/scattered radiation correction (Intelligent-Grid). The Intelligent-Grid is cleared in K151465. The CS-7 modifications have been made for wireless serial radiography.

    The SKR 3000 is distributed under a commercial name AeroDR 3.

    This submission introduces wireless serial radiography into the SKR 3000 system. The wireless serial radiography function of the P-65 / P-75 used with Phoenix was cleared under K221803. These detectors are wireless, and their serial radiography functions are not controlled by the x-ray generator; hence, no detector integration testing is necessary.

    AI/ML Overview

    The provided text is a 510(k) Summary for the Konica Minolta SKR 3000 device, which is a digital radiography system. This document focuses on demonstrating substantial equivalence to a predicate device (K213908), rather than presenting a detailed study proving the device meets specific acceptance criteria with performance metrics, sample sizes, expert involvement, or statistical analysis.

    The document states that the changes made to the SKR 3000 (specifically the addition of wireless serial radiography for P-65 and P-75 detectors) did not require clinical studies. Therefore, the information requested about a study demonstrating the device meets acceptance criteria regarding clinical performance is not available in this filing. The "Performance Data" section primarily addresses compliance with electrical safety and EMC standards.

    However, based on the information provided, here's what can be extracted and inferred regarding "acceptance criteria" in the context of this 510(k) submission:

    1. Table of Acceptance Criteria and Reported Device Performance

    Given that no clinical study specific to this submission's modifications is presented, the "acceptance criteria" here relate to general regulatory and technical compliance rather than clinical performance metrics (e.g., sensitivity, specificity for a particular pathology). The "reported device performance" is essentially a statement of compliance.

    Acceptance Criteria Category | Reported Device Performance
    Safety and Effectiveness | "The technological differences raised no new issues of safety or effectiveness as compared to its predicate device (K213908)."
    Performance to Specifications | "Performance tests demonstrate that the SKR 3000 performs according to specifications and functions as intended."
    Compliance with Standards | "The SKR 3000 is designed to comply with the following standard; AAMI/ANSI ES 60601-1 (Ed.3.1) and IEC 60601-1-2 (Ed.4.0)." (General electrical safety and electromagnetic compatibility standards are met.)
    Risk Analysis | "The verification and validation including the items required by the risk analysis for the SKR 3000 were performed and the results demonstrated that the predetermined acceptance criteria were met. The results of risk management did not require clinical studies to demonstrate the substantial equivalency of the subject device modifications."
    Functional Equivalence (Wireless Radiography) | The submission implies that the newly added wireless serial radiography functions (P-65 / P-75) are functionally equivalent to the wired serial radiography functions of the predicate device, especially since "no detector integration testing is necessary" because "their serial radiography functions are not being controlled by the x-ray generator."

    Note: This table reflects the nature of a 510(k) submission focused on substantial equivalence rather than a clinical performance study.


    Here's the breakdown for the other requested information, based on the limitations of the provided document:

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Not provided. The document states that "The results of risk management did not require clinical studies to demonstrate the substantial equivalency of the subject device modifications." This indicates that no clinical "test set" with patient data was used for this specific submission. The performance assessment was based on non-clinical testing (e.g., engineering verification, validation testing to internal specifications and regulatory standards).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not applicable. As no clinical study or test set with patient data was conducted or analyzed, there were no experts establishing ground truth for performance metrics like diagnostic accuracy.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not applicable. No clinical test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    • Not applicable. This submission is for a digital radiography system, not an AI-powered diagnostic aide. No MRMC study was performed or is relevant to this submission.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Not applicable. This device is an imaging system, not an algorithm for standalone diagnosis. The "performance tests" mentioned are related to the hardware and software functionality of the imaging system itself (e.g., image quality specifications, electrical safety, EMC), not an algorithm's diagnostic performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not applicable. For the purpose of this 510(k) filing for device modifications, the "ground truth" for performance was implicitly defined by the compliance with engineering specifications, safety standards (AAMI/ANSI ES 60601-1, IEC 60601-1-2), and the functional equivalence to the predicate device. No clinical ground truth (e.g., pathology, outcomes) was established for this submission.

    8. The sample size for the training set

    • Not applicable. This device is not an AI/ML algorithm that requires a "training set" in the sense of patient data for learning.

    9. How the ground truth for the training set was established

    • Not applicable. No training set for an AI/ML algorithm.

    K Number
    K220993
    Date Cleared
    2022-06-23

    (80 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Konica Minolta, INC.

    Intended Use

    The Ultrasound System SONIMAGE MX1 and its transducers are products designed to collect ultrasonic image data of the human body for diagnostic purposes. The system employs the ultrasonic pulse-echo method to visualize the anatomic structures, characteristics, and dynamics of the human body, and using an image display, Doppler display and/or Doppler sound, offers a procedure applied to the human body for medical diagnosis or examination.

    The range of intended clinical applications is same as other conventional ultrasound imaging systems for general purpose, such as small parts, abdomen, musculoskeletal, cardiac, and peripheral vascular.

    This device is intended for use in healthcare facilities, such as health clinics and hospitals.

    Intended user for the device are physician, sonographer, and other trained qualified healthcare professionals.

    Modes of operation include B-mode, PWD-mode, CWD-mode, Color Doppler-mode, Power Doppler-mode, and their Combined mode.

    Device Description

    The Ultrasound System SONIMAGE MX1 is a portable diagnostic ultrasound system for general purposes. The system provides ultrasound imaging information used for diagnosing the human body, visually representing the body's internal geometry, characteristics, and dynamics, and transmits/receives ultrasound waves to obtain the image data for that visual representation.

    This system provides ultrasound images in conventional modes of B-mode, M-mode, Color Doppler-mode, Power Doppler-mode, PW Doppler-mode and CW Doppler-mode.

    Optional items are available, such as a Cradle, a Three-port probe unit, an Additional Battery, and a Foot Switch with dual/triple pedals.

    The system can be connected to a LAN through wired Ethernet and is also capable of wireless LAN with an OTS USB-WiFi adapter supporting WPA/WPA2 and WEP security.

    This system conforms to Real Time Display of Thermal and Mechanical Output Indices on Diagnostic Ultrasound Equipment (Track 3). Transducers have their own characteristic applications and are brought into contact with the body surface.
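
    For context, the mechanical and thermal indices displayed under the Track 3 output-display requirement are conventionally defined as follows (these are the standard NEMA UD-3 / IEC 62359 formulations, not device-specific values from this filing):

    ```latex
    \mathrm{MI} = \frac{p_{r,0.3}}{\sqrt{f_c}}, \qquad
    \mathrm{TI} = \frac{W_p}{W_{\mathrm{deg}}}
    ```

    where $p_{r,0.3}$ is the peak rarefactional pressure (MPa) derated at 0.3 dB/cm/MHz, $f_c$ is the center frequency (MHz), $W_p$ is the relevant attenuated acoustic power, and $W_{\mathrm{deg}}$ is the power estimated to raise tissue temperature by 1 °C.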

    AI/ML Overview

    The provided text is a 510(k) summary for the Konica Minolta Ultrasound System SONIMAGE MX1. It does not contain any information about acceptance criteria or a study proving that the device meets specific performance criteria through a human-in-the-loop or standalone AI study.

    Instead, the document focuses on demonstrating substantial equivalence to a previously cleared predicate device (K180084). The "Performance Data" section primarily lists compliance with various medical device standards (e.g., AAMI ANSI ES60601-1, IEC 60601-1-2, IEC 60601-2-37, NEMA UD 2-2004, IEC 62304) and references relevant FDA guidance documents for diagnostic ultrasound systems, software, and cybersecurity.

    The key statement regarding performance is: "Risk Analysis and verification and validation activities demonstrate that the established specifications for these devices have been met. The results of risk management did not require clinical studies to demonstrate the substantial equivalency of the proposed device." This indicates that the device's performance was evaluated through engineering verification and validation (V&V) activities against technical specifications and established standards, rather than clinical studies involving human readers or AI algorithms with specific performance metrics like sensitivity, specificity, or AUC.

    Therefore, I cannot provide the requested information because it is not present in the provided document. The document describes a regulatory submission based on substantial equivalence, relying on standard compliance and V&V activities, not a performance study as typically seen for AI/ML-driven devices requiring clinical validation against defined acceptance criteria.


    K Number
    K213908
    Device Name
    SKR 3000
    Date Cleared
    2022-01-31

    (48 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Konica Minolta, INC.

    Intended Use

    The SKR 3000 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen system in general-purpose diagnostic procedures.

    The SKR 3000 is not indicated for use in mammography, fluoroscopy and angiography applications.

    Device Description

    The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.

    • This device is not intended for use in mammography
    • This device is also used for carrying out exposures on children.

    The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal process/scattered radiation correction (Intelligent-Grid). The Intelligent-Grid is cleared in K151465.

    This submission adds new flat-panel x-ray detectors (FPDs), the P-82 and P-85, to the SKR 3000. The P-82 and P-85 employ the same surface material infused with Silver ions (antibacterial properties) as the predicate device. The only difference between the P-82 and P-85 is the number of Li-ion capacitors: the P-85 has two and the P-82 has one. The new P-82 and P-85 do not support serial radiography, which acquires multiple radiographic frames in succession.

    The FPDs used in SKR 3000 can communicate with the image processing device through the wired Ethernet and/or the Wireless LAN (IEEE802.11a/n and FCC compliant). The WPA2-PSK (AES) encryption is adopted for a security of wireless connection.

    The SKR 3000 is distributed under a commercial name AeroDR 3.

    AI/ML Overview

    The provided text describes the Konica Minolta SKR 3000, a digital radiography system, and seeks 510(k) clearance by demonstrating substantial equivalence to a predicate device (K210619), which is also an SKR 3000 model. The submission focuses on adding new flat-panel x-ray detectors (FPDs), P-82 and P-85, to the existing system.

    Here's an analysis of the acceptance criteria and study information:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document implicitly defines "acceptance criteria" by comparing the specifications and performance of the subject device (SKR 3000 with P-82/P-85 FPDs) against its predicate device (SKR 3000 with P-65 FPD). The acceptance criteria are essentially the performance levels of the predicate device, which the new FPDs must meet or exceed.

    Feature / Performance Metric | Acceptance Criteria (Predicate P-65) | Reported Device Performance (Subject P-82/P-85) | Meets Criteria?
    Indications for Use | Same as Subject | Generates radiographic images of human anatomy, replaces film/screen in general diagnostic procedures, not for mammography, fluoroscopy, angiography. | Yes
    Detection method | Indirect conversion method | Indirect conversion method | Yes
    Scintillator | CsI (Cesium Iodide) | CsI (Cesium Iodide) | Yes
    TFT sensor substrate | Glass-based TFT substrate | Film-based TFT substrate | N/A (difference accepted, no new safety/effectiveness issues)
    Image area size | 348.8 × 425.6 mm (3,488 × 4,256 pixels) | 348.8 × 425.6 mm (3,488 × 4,256 pixels) | Yes
    Pixel size | 100 µm / 200 µm / 400 µm | 100 µm / 200 µm | Yes (smaller range still includes acceptable sizes)
    A/D conversion | 16 bit (65,536 gradients) | 16 bit (65,536 gradients) | Yes
    Max. resolution | 4.0 lp/mm | 4.0 lp/mm | Yes
    MTF (1.0 lp/mm) | 0.62 (non-binning), 0.58 (2x2 binning) | 0.62 (non-binning), 0.58 (2x2 binning) | Yes
    DQE (1.0 lp/mm) | 56% @ 1 mR | 59% @ 1 mR | Yes (exceeds)
    External dimensions | 384(W) × 460(D) × 15(H) mm | 384(W) × 460(D) × 15(H) mm | Yes
    IP Code (IEC 60529) | IPX6 | IP56 | N/A (minor difference, presumed acceptable)
    Battery type | Lithium-ion capacitor | Lithium-ion capacitor | Yes
    Number of batteries | Two | P-82: One, P-85: Two | N/A (difference in configuration, performance evaluated)
    Battery duration in standby | Approx. 13.2 hours | P-82: approx. 6.0 hours, P-85: approx. 13.2 hours | Yes (P-85 meets; P-82 differs but is acceptable for its configuration)
    Surface material | Surface infused with Silver ions (antibacterial properties) | Surface infused with Silver ions (antibacterial properties) | Yes
    Communication I/F | Wired and Wireless | Wired and Wireless | Yes
    Operator console (Software) | CS-7, AeroDR3 interface for P-65 (CTDS) | CS-7, AeroDR3 interface for P-82 and P-85 (CTDS) | Yes
    Image processing | Same complex image processing algorithms | Same complex image processing algorithms | Yes
    Serial radiography | Applicable | Not applicable | N/A (feature difference; the new FPDs do not support serial radiography)

    Note: The acceptance criteria are largely implied by the claim of substantial equivalence. The document primarily focuses on demonstrating that new FPDs (P-82 and P-85) either match or improve upon the predicate's performance for critical imaging parameters. Differences in the TFT substrate material, pixel size options, number of batteries, and serial radiography capability are noted but explained as not raising new safety or effectiveness concerns.
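
    As a quick arithmetic check on the comparison above, the quoted image area follows directly from the pixel matrix and the 100 µm pixel pitch (a consistency check only, using the figures as stated in the table):

    ```python
    # Consistency check: image area = pixel matrix x pixel pitch.
    # Figures taken from the comparison table; 100 um = 0.1 mm pitch.
    pitch_mm = 0.1
    cols, rows = 3488, 4256

    width_mm = round(cols * pitch_mm, 1)
    height_mm = round(rows * pitch_mm, 1)
    print(f"{width_mm} x {height_mm} mm")  # matches the stated 348.8 x 425.6 mm
    ```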

    2. Sample size used for the test set and the data provenance:

    The document states: "The performance tests according to the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices' and the other verification and validation including the items required by the risk analysis for the SKR 3000 were performed and the results demonstrated that the predetermined acceptance criteria were met."

    This indicates that specific performance tests were conducted. However, the document does not explicitly state the sample size used for the test sets (e.g., number of images, number of phantom studies, number of human subjects, if any) nor the data provenance (e.g., country of origin, retrospective or prospective nature of clinical data if used). Given the type of device (X-ray system component) and the nature of the submission (adding new FPDs to an existing cleared system), the "performance data" presented is primarily technical specifications and phantom-based measurements, not typically large-scale clinical trials with human subjects.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    The document does not mention the use of experts to establish ground truth. As this is a technical performance comparison of imaging hardware (FPDs), ground truth would likely be established through objective physical measurements and established technical standards (e.g., imaging phantoms, dosimeters) rather than expert human interpretation of medical images for diagnostic accuracy.

    4. Adjudication method for the test set:

    Since there is no mention of human experts or clinical image interpretation studies, there is no adjudication method described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    The document does not mention an MRMC study, nor does it refer to AI or AI-assisted improvements for human readers. This device is a digital radiography system (hardware), and the submission focuses on its technical performance compared to a predicate, not on AI algorithms or their impact on reader performance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    The device described is an X-ray imaging system, not an algorithm. Therefore, a standalone algorithm-only performance study is not applicable in this context. The performance evaluated is that of the hardware components (FPDs) within the system.

    7. The type of ground truth used:

    The ground truth for the performance parameters (e.g., Max. Resolution, MTF, DQE) would be established through objective physical measurements using standardized phantoms and test procedures as per industry standards (e.g., "Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices"). For other specifications like battery life or dimensions, ground truth is based on engineering measurements and design specifications.
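
    For reference, the DQE values compared in the table (56% vs. 59% at 1.0 lp/mm) are conventionally obtained from objective physical measurements of this kind; in the standard IEC 62220-1 formulation (not a device-specific equation from this filing), DQE is

    ```latex
    \mathrm{DQE}(f)
      = \frac{\mathrm{SNR}^2_{\mathrm{out}}(f)}{\mathrm{SNR}^2_{\mathrm{in}}(f)}
      = \frac{\mathrm{MTF}^2(f)}{\bar{q}\,\mathrm{NNPS}(f)}
    ```

    where $\bar{q}$ is the incident photon fluence per unit area and $\mathrm{NNPS}(f)$ is the normalized noise power spectrum, both measured with standardized beam conditions and test devices.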

    8. The sample size for the training set:

    The document does not refer to a training set. This is because the submission is for hardware components (FPDs) for an X-ray system, not for a machine learning or AI-based diagnostic algorithm that would require training data.

    9. How the ground truth for the training set was established:

    As there is no mention of a training set, there is no information on how its ground truth would be established.


    K Number
    K212685
    Date Cleared
    2021-09-13

    (20 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Konica Minolta, Inc.

    Intended Use

    KONICAMINOLTA DI-X1 is a software device that receives digital x-ray images and data from various sources (i.e. R/F Units, digital radiographic devices or other imaging sources). Images and data can be stored, communicated, processed and displayed within the system and/or across computer networks at distributed locations. It is not intended for use in diagnostic review for mammography.

    Device Description

    KONICAMINOLTA DI-X1 is a software device that performs image processing and display using X-ray digital images (single-frame and multi-frame images) generated by various diagnostic imaging modality consoles. It is a standalone software device intended to be installed onto off-the-shelf servers and PCs. KONICAMINOLTA DI-X1 receives X-ray digital images, including serial images, processes the received images, and displays and sends the resulting images to PACS and other devices. In addition, KONICAMINOLTA DI-X1 can display images through a browser connection with a client that displays and processes images, and can instruct transmission of images. The personal computer used in KONICAMINOLTA DI-X1 stores the same data on two hard disks in real time using a RAID1 mirroring function; thus, even if one hard disk is defective, operations can be continued with the other hard disk, which has the same data. The modifications to the identified predicate device are made in software: a PC client is added to connect to the server using a browser on a personal computer to display images for WEB reference, additional imaging processing MODES are implemented, and the graphical display is modified to compare past exam graphs based on measurement values in chronological order.
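
    The RAID1 behavior described above (every write duplicated to two disks, with operation continuing on the surviving disk) can be illustrated with a toy sketch. This is a generic illustration of mirroring, not Konica Minolta's actual storage code; the class and method names are invented for the example:

    ```python
    # Toy illustration of RAID1-style mirroring: every write goes to both
    # replicas, and a read succeeds as long as at least one replica survives.
    # Generic sketch only; not the device's actual implementation.

    class MirroredStore:
        def __init__(self):
            self.disks = [{}, {}]  # two replicas, stand-ins for two hard disks

        def write(self, key, data):
            for disk in self.disks:      # RAID1: identical data on every disk
                if disk is not None:
                    disk[key] = data

        def fail_disk(self, index):
            self.disks[index] = None     # simulate a defective disk

        def read(self, key):
            for disk in self.disks:      # fall back to a surviving replica
                if disk is not None and key in disk:
                    return disk[key]
            raise IOError("all replicas lost")

    store = MirroredStore()
    store.write("image-001", b"\x00\x01")
    store.fail_disk(0)                   # one disk dies...
    print(store.read("image-001"))       # ...the data is still readable
    ```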

    AI/ML Overview

    Let's break down the information about the acceptance criteria and performance study for the KONICAMINOLTA DI-X1 device based on the provided FDA 510(k) summary.

    It's important to note that the provided document does not contain a detailed performance study with human readers, specific metrics for AI performance (like sensitivity/specificity), or the methodologies for establishing ground truth for a test set. The submission states that "No clinical studies were required to support the substantial equivalence." This implies that the device's modifications are considered minor enough that extensive clinical validation, as would be expected for a novel AI diagnostic device, was not necessary.

    The focus of this submission is on demonstrating substantial equivalence to a predicate device (K182431) for a medical image management and processing system, not a new diagnostic AI algorithm. The "modifications are made on software... to add the PC client to connect to the server using the browser... and additional imaging processing MODES are implemented... The subject device also modifies the graphical display to compare the past exam graphs based on the measurement values in a chronological order." This suggests the "performance data" refers to validation of these functional additions and software changes, rather than a diagnostic accuracy study.

    Therefore, many of the requested points regarding AI performance and human reader studies cannot be precisely answered from this document.


    Here's the breakdown based on the provided text, with clarifications where information is absent:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criterion (Inferred from Device Description & Changes) | Reported Device Performance
    Functional equivalence to predicate device | The device has the same intended use, indications for use, technological characteristics, and principal operations as the predicate device (K182431).
    Correct operation of new features (PC client connectivity via browser for WEB reference; new imaging processing modes PH-MODE, PH2-MODE, LM-MODE; modified graphical display for chronological comparison of past exam graphs based on measurement values) | Demonstrated to function as intended. (Implied by the statement: "Performance tests demonstrate that the KONICAMINOLTA DI-X1 performs according to specifications and functions as intended.")
    Data integrity and reliability (RAID1 mirroring) | "The personal computer used in KONICAMINOLTA DI-X1 stores the same data in two hard disks in real time using RAID1 mirroring function. Thus, even if one hard disk is defective, operations can be continued with the other hard disk which has the same data." (This is a design feature; its successful implementation and testing would be part of the "performance tests.")
    Meeting all specifications and risk analysis requirements | "All the verification activities required by the specification and the risk analysis for the KONICAMINOLTA DI-X1 were performed and the results demonstrated that the predetermined acceptance criteria were met."
    No new issues of safety or effectiveness | "The technological differences raised no new issues of safety or effectiveness as compared to its predicate device (K182431)."

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a distinct "test set" in the context of a clinical performance study for an AI algorithm. The performance data section states: "All the verification activities required by the specification and the risk analysis for the KONICAMINOLTA DI-X1 were performed and the results demonstrated that the predetermined acceptance criteria were met. No clinical studies were required to support the substantial equivalence."

    This indicates that the "testing" was likely functional and verification testing of the software's new features and overall operation, rather than a diagnostic accuracy evaluation on a patient image dataset. Therefore, information regarding data provenance (country of origin, retrospective/prospective) and sample size for such a test set is not provided because such a clinical test set was not deemed necessary for this submission.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    Not applicable. As noted above, no clinical study requiring expert-established ground truth on a test set for diagnostic accuracy was reported or required for this 510(k) submission.

    4. Adjudication Method for the Test Set

    Not applicable. No clinical study requiring ground truth adjudication was reported.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. The document explicitly states: "No clinical studies were required to support the substantial equivalence." Therefore, an MRMC study comparing human readers with and without AI assistance was not performed or reported.

    6. Standalone (Algorithm Only) Performance

    No. This device is described as a "medical image management and processing system" with enhancements, not a standalone AI diagnostic algorithm performing a specific diagnostic task (like detecting a disease). Its primary function is image handling, processing, and display. Therefore, a standalone performance metric (e.g., sensitivity/specificity for a disease) is not provided or applicable in the context of this submission.

    7. Type of Ground Truth Used

    Based on the lack of a clinical study, specific "ground truth" for diagnostic accuracy (e.g., pathology, outcomes data, expert consensus) was not established or used for performance evaluation in this 510(k). The "performance tests" focused on verifying the software's functional specifications.

    8. Sample Size for the Training Set

    Not applicable. This device is described as an image management and processing system, not a device incorporating a machine learning model that requires a "training set" in the typical sense of AI development for diagnostic tasks. The "modifications" were software developments, not AI model training.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there was no AI model training set as described in typical AI/ML submissions.

    K Number
    K210619
    Device Name
    SKR 3000
    Date Cleared
    2021-08-24

    (176 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Konica Minolta, Inc.

    Intended Use

    This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace a radiographic film/screen system in general-purpose diagnostic procedures.

    This device is not indicated for use in mammography, fluoroscopy, and angiography applications.

    Device Description

    The digital radiography system SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is input into an image processing device; the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.

    The subject device SKR 3000 is not intended for use in mammography.
    This device is also used for carrying out exposures on children.

    The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal processing/scattered radiation correction (Intelligent-Grid). The Intelligent-Grid was cleared in K151465.
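
    Konica Minolta's actual CS-7 algorithms are proprietary and not disclosed in the submission. As a purely illustrative sketch of one generic technique in this family, frequency processing is often implemented as unsharp masking: subtract a blurred copy of the image from the original and add the difference back to boost edges. The `unsharp_mask` helper below is a hypothetical example, not the CS-7 implementation.

```python
import numpy as np

def unsharp_mask(image, kernel_size=5, amount=1.0):
    """Generic unsharp masking: sharpen by adding back the difference
    between the image and a box-blurred copy. Illustrative only."""
    img = np.asarray(image, dtype=float)
    pad = kernel_size // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(kernel_size) / kernel_size
    # Separable box blur: convolve rows, then columns ("valid" trims the padding)
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return img + amount * (img - blurred)
```

    A flat image is unchanged (the blur equals the original), while isolated bright pixels and edges are amplified in proportion to `amount`.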

    The proposed SKR 3000 is modified to include the new FPDs P-65 and P-75 in addition to the previously cleared P-61, P-71, and P-81, the Console CS-7, and other peripherals. The DR detector uses the exposure signal or exposure from the X-ray device to generate X-ray digital image data for diagnosis, including serial exposure images, and sends the data to the image processing controller.

    The operator console software, Console CS-7, is a software program for installation on an off-the-shelf PC. Software modules have been modified to support the new FPDs (P-65 and P-75) via the Cassette Type Detection Software (CTDS) and to support 40-second serial radiography (SIC).

    The FPDs used in the SKR 3000 can communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n and FCC compliant). WPA2-PSK (AES) encryption is used to secure the wireless connection.

    The new DR panels, P-65 and P-75, use a surface material containing an antibacterial agent on both sides of the panel. In the serial radiography settings, the maximum acquisition time has been extended from 20 seconds to 40 seconds to allow observation of a variety of dynamic objects. Other serial radiography control parameters are unchanged from the predicate device.

    The SKR 3000 is distributed under a commercial name AeroDR 3.

    AI/ML Overview

    This document describes the Konica Minolta SKR 3000, a digital radiography system, and its substantial equivalence to a predicate device. The information provided focuses on the device's design, specifications, and performance testing to demonstrate compliance with standards, but does not include a detailed study proving the device meets specific acceptance criteria related to diagnostic accuracy or clinical outcomes through a prospective trial involving human readers. The provided text primarily focuses on engineering and regulatory compliance, not clinical performance metrics in the context of AI assistance or human reader improvement.

    However, based on the provided text, here's a breakdown of the acceptance criteria met through performance testing as described, and the absence of certain study types:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document broadly states that "the performance tests according to the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices' and the other verification and validation including the items required by the risk analysis for the SKR3000 were performed and the results demonstrated that the predetermined acceptance criteria were met."

    While specific numerical acceptance criteria and their corresponding reported device performance values are not explicitly detailed in the text, the comparison table implicitly highlights characteristics where performance is expected to be equivalent or improved. For instance, the Signal-to-Noise Ratio (SNR) and Detective Quantum Efficiency (DQE) are critical performance metrics for X-ray detectors, and based on the equivalence asserted, one can infer that these metrics met predefined acceptance thresholds.
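
    For context on how these metrics relate: under the IEC 62220-1 formulation, DQE can be computed from the measured MTF, the normalized noise power spectrum (NNPS), and the incident photon fluence q (photons/mm²) as DQE(f) = MTF(f)² / (q · NNPS(f)). The function and the numbers below are an arithmetic illustration of that relation only; they are not values or methods taken from the submission.

```python
def dqe(mtf, nnps, q):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f))  -- IEC 62220-1 form, with
    NNPS the noise power spectrum normalized by the squared mean signal
    and q the incident photon fluence in photons/mm^2. Illustrative only."""
    return mtf ** 2 / (q * nnps)

# Hypothetical numbers: perfect transfer (MTF = 1) with purely
# quantum-limited noise (NNPS = 1/q) gives an ideal DQE of 1.0.
ideal = dqe(1.0, 1.0 / 50000, 50000)
```

    The practical reading is that a higher DQE means the detector preserves more of the signal-to-noise ratio available in the incident X-ray beam at a given dose.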

    Given the information in the "Comparison Table", the following can be inferred as performance aspects that were evaluated and met criteria for substantial equivalence:

    | Acceptance Criteria (Implied from comparison) | Reported Device Performance (Implied from comparison) |
    | --- | --- |
    | Image quality: MTF (1.0 cycle/mm), non-binning | 0.62 |
    | Image quality: MTF (1.0 cycle/mm), 2x2 binning | 0.58 |
    | Image quality: DQE (1.0 cycle/mm) | 56% @ 1 mR |
    | Image quality: DQE (0 cycle/mm) | 65% @ 0.02 mR |
    | Exposure acquisition time | Max. acquisition time: 40 seconds (for serial radiography) |
    | Battery duration in standby | P-65: approx. 13.2 hours; P-75: approx. 12.2 hours |
    | Antibacterial properties | Surface infused with silver ions (antibacterial properties) |
    | Environmental protection (IP rating) | IPX6 |
    | Regulatory compliance | AAMI/ANSI ES 60601-1 (Ed. 3.1), IEC 60601-1-2 (Ed. 4.0), and ISO 10993-1 (2018) met |
    | Software functionality | New FPD support (CTDS) and 40-second serial radiography support (SIC) operating as intended |
    | Absence of new safety/effectiveness issues | Performance tests demonstrated no new issues compared to predicate device |
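
    The MTF figures in the table are typically obtained with the edge method standardized in IEC 62220-1: image a slightly angled edge, form an oversampled edge spread function (ESF), differentiate it to get the line spread function (LSF), and take the normalized magnitude of its Fourier transform. The `mtf_from_esf` function below is a minimal 1-D sketch of that pipeline under simplifying assumptions (no angled-edge oversampling), not the procedure reported in the submission.

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch_mm):
    """Presampled-MTF sketch from a 1-D edge spread function:
    differentiate ESF -> LSF, then normalize |FFT(LSF)| so MTF(0) = 1.
    Real IEC 62220-1 measurements use an angled edge and oversampling."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                 # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf
```

    For a monotone edge the LSF is non-negative, so the MTF peaks at 1.0 at zero frequency and falls off toward the detector's Nyquist frequency (set by the pixel pitch).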

    2. Sample Size Used for the Test Set and Data Provenance

    The provided document does not detail any clinical test set or data provenance in terms of patient images or specific study populations. The performance data mentioned refers to engineering and quality assurance tests, not clinical performance studies with patient data.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    This information is not applicable or disclosed in the provided text. The document refers to engineering performance tests and compliance with regulatory standards, not expert-adjudicated clinical ground truth.

    4. Adjudication Method for the Test Set

    This information is not applicable or disclosed as there is no mention of a human-reviewed test set or adjudication process for diagnostic performance.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study is not mentioned in the provided text. The document indicates that "the results of risk management did not require clinical studies to demonstrate the substantial equivalency of the proposed device," which suggests that comparative effectiveness with human readers or AI assistance was not a component of this 510(k) submission.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study Was Done

    The device itself is a digital radiography system, which generates images. While there are software components (like Console CS-7 for image processing), the submission focuses on the hardware (FPDs) and overall system performance in generating X-ray images, not an AI algorithm's standalone diagnostic performance. Therefore, such a standalone diagnostic algorithm study is not mentioned. The "performance tests" refer to technical specifications and safety, not diagnostic accuracy.

    7. The Type of Ground Truth Used

    Based on the document, the "ground truth" for the acceptance criteria was primarily based on technical specifications, regulatory standards, and engineering performance requirements. These include metrics like MTF, DQE, mechanical dimensions, battery life, IPX ratings, and compliance with electrical safety and electromagnetic compatibility standards. No clinical ground truth (e.g., pathology, outcomes data, or expert consensus on disease presence) is mentioned as being used for performance evaluation in this submission.

    8. The Sample Size for the Training Set

    This is not applicable or disclosed. The document does not describe the development or training of an AI algorithm in the context of machine learning, so there is no mention of a training set of images.

    9. How the Ground Truth for the Training Set Was Established

    This is not applicable or disclosed as there is no mention of an AI training set.

    K Number
    K210066
    Device Name
    ImagePilot
    Date Cleared
    2021-05-06

    (115 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Konica Minolta, Inc.

    Intended Use

    The ImagePilot software is intended for installation on an off-the-shelf PC meeting or exceeding minimum specifications. The ImagePilot software primarily facilitates processing and presentation of medical images on display monitors suitable for the medical task being performed. The ImagePilot software can process and display medical images from the following modality types: Plain X-ray Radiography, X-ray Computed Tomography, Magnetic Resonance imaging, Ultrasound, Nuclear Medicine and other DICOM compliant modalities. The ImagePilot must not be used for primary image diagnosis in mammography.

    Device Description

    The ImagePilot software is intended for installation on an off-the-shelf PC meeting or exceeding minimum specifications. The ImagePilot software primarily facilitates processing and presentation of medical images on display monitors suitable for the medical diagnostic task being performed. The ImagePilot software can process and display medical images from the following modality types: Plain X-ray Radiography, X-ray Computed Tomography, Magnetic Resonance Imaging, Ultrasound, Nuclear Medicine, and other DICOM-compliant modalities, including mammography. When used for mammography, the ImagePilot should never be used as a diagnostic tool.

    AI/ML Overview

    This document (K210066) is a 510(k) Summary for a medical image management and processing system called ImagePilot. It addresses the device's substantial equivalence to previously cleared predicate devices.

    Crucially, this document states: "No clinical studies were required to support the substantial equivalence." This means the information requested regarding acceptance criteria, performance studies, sample sizes, expert ground truth, MRMC studies, and training data provenance is largely not applicable in the context of this 510(k) submission.

    The acceptance criteria here are related to design verification activities and demonstrating that the device performs according to specifications and functions as intended based on internal testing. There is no large-scale clinical performance study described to prove the device meets specific performance criteria against a clinical ground truth.

    However, I can extract information related to the device's technical characteristics and the basis for its clearance:


    Summary of Acceptance Criteria and Study Information as Implied by the 510(k):

    Given that "No clinical studies were required to support the substantial equivalence," the "acceptance criteria" and "study" are primarily focused on design verification and non-clinical performance testing to demonstrate that the updated software (ImagePilot v1.92) functions as intended and is equivalent to its predicates.

    1. Table of Acceptance Criteria and Reported Device Performance:

    Since no clinical study data is reported, the acceptance criteria are implicitly met by "design verification activities" and demonstration that the "ImagePilot performs according to specifications and functions as intended." No specific quantitative performance metrics against a clinical outcome are presented.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Device performs according to specifications. | "the results demonstrated that the predetermined acceptance criteria were met." |
    | Device functions as intended. | "the ImagePilot performs according to specifications and functions as intended." |
    | Modifications do not affect safety or effectiveness compared to predicates. | "These differences were found to not affect safety or effectiveness via design verification activities." |
    | Compatibility with specified operating systems (Microsoft Windows 10). | Device is intended for installation on an off-the-shelf PC meeting or exceeding minimum specifications. |
    | Ability to process and display images from specified modalities (Plain X-ray Radiography, CT, MRI, Ultrasound, Nuclear Medicine, other DICOM-compliant). | Explicitly stated as a function of ImagePilot and compared as equivalent to predicates. |
    | Ability to perform specified image processing functions (e.g., automatic tone adjustment, sharpness processing, noise suppression, bone suppression). | Explicitly stated as a function of ImagePilot and compared as equivalent to predicates. |
    | Must not be used for primary image diagnosis in mammography. | Stated as a limitation in the Indications for Use. |

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: Not applicable. No clinical test set (i.e., patient data for clinical performance evaluation) was used for substantial equivalence. The "test set" would refer to internal software testing/design verification, for which details (like specific number of images or test cases) are not provided in this public summary.
    • Data Provenance: Not applicable for a clinical test set. The document does not specify the origin of any images used for internal testing/verification, if applicable. These would likely be internally generated or representative test images.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • Not applicable. Since no clinical studies were required, there was no need for expert-established ground truth in the context of clinical performance evaluation for this 510(k). The "ground truth" for the device's functionality would be established by its adherence to engineering specifications and software quality assurance.

    4. Adjudication Method for the Test Set:

    • Not applicable. No clinical test set requiring adjudication was used.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No. The document explicitly states, "No clinical studies were required to support the substantial equivalence."

    6. Standalone (Algorithm Only) Performance:

    • This device is software that processes and presents medical images. Its "performance" in this context is primarily functional – correctly processing and displaying images as specified. It is not an AI algorithm designed to make a diagnostic determination independently. Its function is to facilitate image review by a human.
    • Therefore, performance is measured against its technical specifications, not typically as a standalone diagnostic algorithm.

    7. Type of Ground Truth Used:

    • Not applicable for clinical performance. The "ground truth" for this submission refers to the internal specifications and functional requirements of the software, and verifying that the software meets these requirements through design verification and non-clinical performance testing.

    8. Sample Size for the Training Set:

    • Not applicable. This device description does not indicate that it uses machine learning or an AI algorithm that requires a "training set" in the sense of a dataset for model development. Its function is image processing and presentation based on established algorithms, not learning from data.

    9. How the Ground Truth for the Training Set was Established:

    • Not applicable. As above, there's no mention or implication of a training set for an AI/ML model for this device.

    In conclusion, this 510(k) submission for ImagePilot v1.92 focused on demonstrating substantial equivalence through non-clinical performance testing and design verification, primarily to show that software modifications (e.g., OS update, incorporation of a previously cleared function) did not negatively impact the device's safety or effectiveness compared to its predicates. It does not contain information about clinical studies or AI model development and validation as would be found for a device leveraging advanced AI for diagnostic assistance.
