Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K133730
    Date Cleared
    2014-01-31

    (56 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Device Name
    CO PILOT/REGIUS UNITEA

    Intended Use

    The CO Pilot is intended for installation on an off-the-shelf PC (REGIUS Unitea / 510(k) number: K071436) meeting or exceeding minimum specifications. The CO Pilot software primarily facilitates processing and presentation of medical images on display monitors suitable for the medical task being performed. The CO Pilot software can process and display images from the following modality types: Plain X-ray Radiography, X-ray Computed Tomography, Magnetic Resonance Imaging, Ultrasound, Nuclear Medicine and other DICOM compliant modalities. The CO Pilot must not be used for primary image diagnosis in mammography.

    Device Description

    The "CO Pilot Software" is intended for installation on REGIUS Unitea (510(k) number: K071436) and provides a unit for creating display-use annotations such as line, curve, and character information; a unit for measuring the distance between two points and the angle formed by three points; and a unit for transferring GUI data to the GUI control module of the REGIUS Unitea System Control Program.
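    The two measurement functions described (distance between two points, angle formed by three points) reduce to elementary plane geometry. A minimal sketch in Python, purely illustrative and not the cleared device's implementation; point coordinates are assumed to be in pixel or calibrated units:

```python
import math

def distance(p1, p2):
    """Euclidean distance between two points (x, y)."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def angle_at_vertex(a, vertex, c):
    """Angle in degrees formed at `vertex` by the segments vertex-a and vertex-c."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (c[0] - vertex[0], c[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

print(distance((0, 0), (3, 4)))                 # 5.0
print(angle_at_vertex((1, 0), (0, 0), (0, 1)))  # 90.0
```

    A real PACS annotation tool would additionally convert pixel distances to millimetres using the DICOM pixel-spacing attributes, which this sketch omits.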

    AI/ML Overview

    The provided text is a 510(k) summary for a medical device called "CO Pilot" and the associated FDA clearance letter. It states that the CO Pilot is software intended for installation on REGIUS Unitea (a Picture Archiving and Communications System) and provides functions for creating annotations (line, curve, character information), measuring distances and angles, and transferring GUI data. It can process and display images from various modalities but "must not be used for primary image diagnosis in mammography."

    The summary explicitly states: "Verification and Validation showed equivalent evaluation outcome with the predicate devices, which has supported a fact that no impacts in technological characteristics such as design, material chemical composition energy sauce and other factors of the proposed device were recognized. The all evaluation results can assure that there is no safety, effectiveness and performance issue or no differences were found in further than the predicate devices have which has been regally marketed the United States. Therefore, we confirmed that the function quality of proposed device has the substantial equivalency with orthopedic or chiropractic supporting functions quality that predicate devices have."

    This indicates that Konica Minolta relied on demonstrating substantial equivalence to predicate devices rather than conducting a separate study with specific acceptance criteria and performance metrics for the CO Pilot itself. The summary does not include a detailed study that defines specific acceptance criteria, test sets, ground truth establishment, or multi-reader studies for the CO Pilot's performance. Instead, it argues that its performance is equivalent to already cleared devices through verification and validation activities.

    Therefore, many of the requested details cannot be extracted directly from the provided text because these types of studies were not presented in this 510(k) summary as a means to prove the device meets acceptance criteria.

    Here's a breakdown of what can and cannot be answered based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a table of acceptance criteria for the CO Pilot's performance beyond stating that its "function quality" is substantially equivalent to predicate devices. It doesn't report specific performance metrics like sensitivity, specificity, accuracy, or measurement precision for the CO Pilot itself.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    This information is not provided in the 510(k) summary. The summary refers to "Verification and Validation" but does not detail the datasets or studies used for these activities.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    This information is not provided.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    This information is not provided.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    An MRMC comparative effectiveness study is not mentioned. The device's primary function is annotation and measurement, not necessarily to assist human readers in diagnosis in a way that would typically be evaluated by an MRMC study demonstrating improved diagnostic accuracy.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    The document does not describe a standalone performance study in the traditional sense for diagnostic accuracy. The device "primarily facilitates processing and presentation of medical images on display monitors" and provides "display-use annotation" and "measuring" functions. Its performance would likely be assessed on the accuracy of these functions rather than diagnostic output.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    This information is not provided. Given the device's functions (annotation, measurement), ground truth would likely refer to the accuracy of these measurements relative to a known standard or expert measurement, but the details are not here.

    8. The sample size for the training set

    This information is not provided. Given the device's functions as annotation and measurement tools, it's less likely to involve a "training set" in the context of machine learning, unless specific automation features exist that weren't elaborated.

    9. How the ground truth for the training set was established

    This information is not provided.


    Summary of available information regarding acceptance criteria and study:

    The 510(k) summary for CO Pilot (K133730) does not present quantitative acceptance criteria or a detailed study with performance metrics for the device itself. Instead, it relies on demonstrating substantial equivalence to its predicate devices (REGIUS Unitea, K071436; Acies, K101842; Opal-RAD™, K063337) through "Verification and Validation."

    The core argument is:

    • Acceptance Criteria (Implicit): The CO Pilot's "function quality," safety, effectiveness, and performance should be equivalent to or not inferior to the legally marketed predicate devices.
    • Study Proving Acceptance (Method): "Verification and Validation" activities were conducted. These activities "showed equivalent evaluation outcome with the predicate devices" and confirmed "no impacts in technological characteristics" and "no differences were found" in comparison to the predicates.

    No specific data related to test sets, ground truth, expert involvement, or quantitative performance metrics for the CO Pilot itself are provided in this summary.


    K Number
    K071436
    Device Name
    REGIUS UNITEA
    Date Cleared
    2007-06-27

    (35 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Device Name
    REGIUS UNITEA

    Intended Use

    The REGIUS Unitea software is intended for installation on an off-the-shelf PC meeting or exceeding minimum specifications. The REGIUS Unitea software primarily facilitates processing and presentation of medical images on display monitors suitable for the medical task being performed. The REGIUS Unitea software can process and display medical images from the following modality types: Plain X-ray Radiography, X-ray Computed Tomography, Magnetic Resonance Imaging, Ultrasound, Nuclear Medicine and other DICOM compliant modalities. The REGIUS Unitea must not be used for primary image diagnosis in mammography.

    Device Description

    REGIUS Unitea is software intended for installation on an off-the-shelf PC meeting or exceeding minimum specifications. The REGIUS Unitea software controls and manages cassette-type CR (Computed Radiography) devices, such as the REGIUS MODEL 170, 190 and 110, connected via the network. REGIUS Unitea receives and displays images from other DICOM compliant modalities connected via the network, and can also use digital media such as DVD/CD-R, DSC and USB memory cards connected as disk drives.

    REGIUS Unitea software has the following set of features.

    1. Feature to automatically obtain patient demographic information (Name, Age, Sex, Date of Birth, etc) from Hospital Information Systems.
    2. Feature to specify the reading condition (Sampling Sensitivity of the sensor and so on) of the connected CR device.
    3. Receive and store image data from REGIUS MODEL 170, 190 and 100 CR, other DICOM compliant modalities and digital media (such as DVD/CD-R, DSC and USB memory cards).
    4. Display image data received from the REGIUS MODEL 170, 190 and 100 CR, other DICOM compliant modalities and digital media (such as DVD/CD-R, DSC and USB memory cards).
    5. Feature to apply image processing to images received from REGIUS MODEL 170, 190 and 100 CR.
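    Feature 1 in the list above corresponds to the standard DICOM patient module. As a rough illustration of what "obtaining patient demographics" amounts to, the sketch below uses a plain dict standing in for a parsed DICOM header; the tag numbers are from the DICOM data dictionary, but the `demographics` helper is hypothetical, not part of any Konica Minolta product:

```python
# Hypothetical sketch: extracting patient-module attributes that an
# HIS/worklist query would populate. Tag numbers (group, element) follow
# the DICOM data dictionary (PS3.6).
PATIENT_TAGS = {
    (0x0010, 0x0010): "PatientName",
    (0x0010, 0x0020): "PatientID",
    (0x0010, 0x0030): "PatientBirthDate",
    (0x0010, 0x0040): "PatientSex",
}

def demographics(header):
    """Map a parsed DICOM header (tag -> value dict) to named demographics."""
    return {name: header.get(tag, "") for tag, name in PATIENT_TAGS.items()}

hdr = {(0x0010, 0x0010): "DOE^JANE", (0x0010, 0x0040): "F"}
print(demographics(hdr))
```

    A production system would use a real DICOM toolkit and handle character sets, missing attributes, and HL7-to-DICOM mapping, none of which this sketch attempts.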
    AI/ML Overview

    This 510(k) summary (K071436) describes a medical image processing workstation called REGIUS Unitea. The submission primarily focuses on establishing substantial equivalence to a predicate device (REGIUS CS-2000 and CS-3000, K051523) and details the device's features and intended use.

    Crucially, this 510(k) summary does NOT contain information about specific acceptance criteria or an analytical study proving the device meets those criteria, as it relates to performance metrics like accuracy, sensitivity, or specificity for a specific medical task.

    Instead, the submission for REGIUS Unitea, being a "Picture archiving and communications system" (PACS) product (21 CFR 892.2050), falls under a general controls regulatory pathway. For such devices, the primary "acceptance criteria" revolve around demonstrating that the device functions as intended, handles various image modalities, complies with DICOM standards, and does not introduce new safety or efficacy concerns compared to a legally marketed predicate device.

    The study described is largely a comparative analysis for substantial equivalence, not a performance study measuring clinical accuracy or effectiveness in a typical sense for an AI/CAD product.

    Given the information provided, here's a breakdown of the requested points:


    1. Table of Acceptance Criteria and Reported Device Performance

    Based on the provided K071436 summary, there are no explicit quantitative acceptance criteria or reported device performance metrics (e.g., sensitivity, specificity, AUC) for a specific diagnostic task. The acceptance is primarily based on:

    Acceptance Criteria Category / Reported Device Performance or Compliance:

    • Functional Equivalence: "REGIUS Unitea is substantially equivalent to our REGIUS CS-2000 and CS-3000, 510(k) number: K051523. Comparison of the principal characteristics of these devices is shown in the Section 3." (Section 3 is not fully provided here, but it would detail a feature-by-feature comparison.) The new device offers similar image processing, display, and management capabilities to the predicate.
    • Safety and Efficacy (New Issues): "REGIUS Unitea introduces no new safety and efficacy issues other than those already identified with the predicate device. The results of a hazard analysis, combined with the appropriate preventive measure taken indicate that the device is of minor level of concern as per the May 11, 2005 issue of the 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices'."
    • Intended Use Compliance: Intended for installation on an off-the-shelf PC; processing and presentation of medical images on display monitors suitable for the medical task. Supports various modalities (Plain X-ray Radiography, CT, MRI, Ultrasound, Nuclear Medicine, other DICOM compliant). Explicitly not for primary image diagnosis in mammography. This matches the scope of a PACS device.
    • DICOM Compliance: Implicitly stated throughout in relation to receiving/displaying images from "DICOM compliant modalities" and outputting to "other DICOM devices."
    • Connectivity/Compatibility: Controls/manages cassette-type CR (REGIUS MODEL 170, 190, 110), receives/displays images from other DICOM modalities, utilizes digital media (DVD/CD-R, DSC, USB).
    • Image Processing Features: The device implements a range of image processing features as described (Contrast/Density adjustments, F-processing, E-processing, H-processing, I-processing, Masking, Rotating/Flipping, Re-sampling/Resizing, Stitching, Grid Suppression, Digital marker/Annotation). The "acceptance" here is that these features are present and function as specified, presumably verified through internal testing.

    2. Sample Size Used for the Test Set and Data Provenance

    Given this is a PACS device submission for substantial equivalence, there is no mention of a specific "test set" in the context of diagnostic performance (e.g., a set of patient cases to evaluate diagnostic accuracy).

    The evaluation typically involves:

    • Verification and Validation (V&V) testing: This would involve testing the software's functionalities (e.g., image loading, display, processing, storage, network communication) using various types of DICOM images and system configurations.
    • Hazard Analysis: To identify and mitigate risks.

    The K071436 summary does not specify the sample size of images or the origin of the data used for these V&V activities. Since it is a software device interacting with CR systems, the "data" would consist of DICOM images (potentially synthetic, proprietary, or from various sites) used to confirm proper handling, likely a mix of internal proprietary data and possibly public DICOM conformance test images. The provenance is not specified (e.g., country of origin, retrospective/prospective).


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    Not applicable. This submission doesn't describe a study requiring diagnostic "ground truth" established by medical experts. A PACS workstation stores, retrieves, processes, and displays images; it doesn't make diagnostic interpretations itself requiring ground truth for AI performance evaluation. The "ground truth" for its functions would be whether it accurately performs the specified operations (e.g., does it correctly adjust contrast, does it store the image without corruption, does it display it correctly according to the DICOM header). These are typically verified by engineers and quality assurance personnel, not medical experts establishing diagnostic ground truth.


    4. Adjudication Method for the Test Set

    Not applicable. As no diagnostic performance test set is described, there is no adjudication method mentioned.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    Not applicable. This device is a PACS workstation, not an AI or CAD system intended to assist readers in diagnostic interpretation. Therefore, no MRMC study or assessment of human reader improvement with AI assistance was performed or reported.


    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Not applicable. The REGIUS Unitea is an image management and processing system, entirely human-in-the-loop, as it's a tool for medical professionals to view and manipulate images. It does not perform any standalone diagnostic analysis or algorithmic interpretation where "algorithm only" performance would be relevant.


    7. The Type of Ground Truth Used

    Not applicable in the diagnostic sense. The "ground truth" for a PACS system would pertain to its functional performance and adherence to standards:

    • Functional correctness: Does the software perform its operations (e.g., image processing, storage, display) as specified? (Verified by testing against software requirements and design specifications).
    • DICOM conformance: Does it correctly interpret and generate DICOM objects? (Verified using DICOM conformance tools and test images).
    • Image integrity: Are images stored and displayed without degradation or loss of information? (Verified by comparing processed/stored images with their originals, often using objective quality metrics or visual inspection).
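    The "objective quality metrics" mentioned for image-integrity verification are typically MSE/PSNR-style comparisons of a stored or processed image against its original. A generic sketch, assuming flat sequences of 8-bit pixel values; nothing here comes from the submission itself:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite when images are identical,
    which is the expected result for a lossless storage round-trip."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(max_val ** 2 / e)

ref = [0, 64, 128, 255]
assert psnr(ref, list(ref)) == float("inf")  # lossless round-trip: no loss
print(psnr(ref, [0, 64, 128, 254]))          # one pixel off by one grey level
```

    For lossless PACS storage the pass criterion is simply bit-exactness (infinite PSNR); finite thresholds only become relevant if lossy compression or resampling is involved.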

    8. The Sample Size for the Training Set

    Not applicable. This device does not use machine learning or AI models that require a "training set." It is a software application developed using traditional programming paradigms.


    9. How the Ground Truth for the Training Set Was Established

    Not applicable. As there is no training set for an AI model, there is no corresponding ground truth establishment process.

