Search Results

Found 7 results

510(k) Data Aggregation

    K Number
    K240680
    Device Name
    CDM Insights
    Date Cleared
    2024-12-06

    (270 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    CDM Insights

    Intended Use

    CDM Insights is post-processing image analysis software that assists trained healthcare practitioners in viewing, analyzing, and evaluating MR brain images of adults > 45 years of age.

    CDM Insights provides the following functionalities:

    • Automated segmentation and quantitative analysis of individual brain structures and white matter hyperintensities
    • Quantitative comparison of brain structures and derived values with normative data from a healthy population
    • Presentation of results for reporting that includes numerical values as well as visualization of these results
    Device Description

    CDM Insights is automated post-processing medical device software that is used by radiologists, neurologists, and other trained healthcare practitioners familiar with the post-processing of magnetic resonance images. It accepts DICOM images using supported protocols and performs automatic segmentation and quantification of brain structures and lesions, automatic post-acquisition analysis of diffusion-weighted magnetic resonance imaging (DWI) data, and comparison of derived image metrics from multiple time-points.

    The values for a given patient are compared against age-matched percentile data from a population of healthy reference subjects. White matter hyperintensities can be visualized and quantified by volume. Output of the software provides numerical values and derived data as graphs and anatomical images with graphical color overlays.
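
    To make the normative-comparison step concrete, the sketch below shows one plausible way a regional measurement could be placed on an age-matched reference distribution. The function name, age window, and reference data are hypothetical illustrations, not CDM Insights' documented method.

    ```python
    # Illustrative only: place a patient's measurement on an age-matched normative
    # distribution. The reference data and age-band logic are hypothetical.
    import numpy as np
    from scipy import stats

    def age_matched_percentile(value, age, ref_values, ref_ages, age_window=5.0):
        """Percentile of `value` among reference subjects within +/- age_window years."""
        ref_values = np.asarray(ref_values, dtype=float)
        ref_ages = np.asarray(ref_ages, dtype=float)
        mask = np.abs(ref_ages - age) <= age_window
        if mask.sum() < 10:  # arbitrary minimum for a stable estimate
            raise ValueError("too few age-matched reference subjects")
        return stats.percentileofscore(ref_values[mask], value, kind="mean")

    # Hypothetical example: a regional volume (mL) for a 67-year-old patient
    rng = np.random.default_rng(0)
    ref_ages = rng.uniform(45, 90, 500)
    ref_vols = 4.2 - 0.02 * (ref_ages - 45) + rng.normal(0, 0.3, 500)
    print(f"{age_matched_percentile(3.1, 67.0, ref_vols, ref_ages):.1f}th percentile")
    ```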

    CDM Insights output is provided in standard DICOM format as a DICOM-encapsulated PDF report.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study that proves the device (CDM Insights) meets these criteria. Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Measure | Acceptance Criteria (from primary predicate) | Reported Device Performance |
    |---|---|---|
    | Accuracy of Segmentation for White Matter Hyperintensities (WMH) | Mean Dice overlap score ≥ 0.58 | Mean Dice overlap score = 0.66 (SD = 0.15) |
    | Accuracy of Segmentation for Cortical Regions | Mean Dice overlap score ≥ 0.58 for each region | Orbito-frontal: 0.58 (0.10); Superior-frontal: 0.72 (0.05); Sensorimotor: 0.69 (0.14); Ventral-temporal: 0.58 (0.05); Anterior-cingulate: 0.60 (0.09); Precuneus: 0.58 (0.08); Lateral-occipital: 0.59 (0.11); Medial-occipital: 0.63 (0.06) |
    | Visual Ratings of Segmentation Quality and Cortical Surface Quality | Not explicitly stated in numerical form, but implied "good" or "excellent" rating for acceptance | Typically rated by neuroradiologists as "good" or "excellent" |
    | Repeatability of Measurements | Not explicitly stated in numerical form, but implied successful confirmation | Confirmed on a total of 121 healthy individuals with two or three repeated MRI scans |
    | Reproducibility Across MRI Scanner Models and Protocols | Not explicitly stated in numerical form, but implied successful quantification | Quantified across a range of MRI scanner models and protocol parameters using scans from over 1500 unique subjects |
    | Accuracy of Percentiles (of normative data) | Not explicitly stated in numerical form, but implied successful testing | Tested with almost 2000 test scans |
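
    The acceptance criteria above are expressed as mean Dice overlap scores. As a reference for how that metric is defined, the following minimal NumPy sketch computes the Dice coefficient between a predicted mask and an expert mask; it is a generic illustration, not the manufacturer's implementation.

    ```python
    # Dice overlap between two binary segmentation masks: 2|A ∩ B| / (|A| + |B|).
    import numpy as np

    def dice_score(pred, truth):
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:          # both masks empty: treat overlap as perfect
            return 1.0
        return 2.0 * np.logical_and(pred, truth).sum() / denom

    # Toy example: two overlapping 2-D masks
    a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
    b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
    print(f"Dice = {dice_score(a, b):.2f}")   # 0.64 for this toy pair
    ```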

    2. Sample Sizes and Data Provenance

    • Test Set Sample Size:
      • Accuracy of segmentation: 60 cases for brain region and WMH segmentation accuracy.
      • Repeatability: 121 healthy individuals.
      • Reproducibility: Over 1500 unique subjects.
      • Accuracy of percentiles: Almost 2000 test scans.
    • Data Provenance: The data included scans acquired on different scanner models from multiple manufacturers. It included scans from a group of cognitively healthy individuals and a mix of individuals with disorders (Alzheimer's disease, mild cognitive impairment, frontotemporal dementia, multiple sclerosis). Data were obtained from 13 different source cohorts, with 7 of these based in the USA. The text indicates that information on race or ethnicity was available for the majority of individuals, with more than 20% non-white and more than 5% Hispanic. The studies included both retrospective (existing scans) and potentially some prospective components (implied by "repeated MRI scans" for repeatability), but they appear to be based primarily on existing datasets.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated as a number, but referred to as "US board-certified neuroradiologists."
    • Qualifications of Experts: US board-certified neuroradiologists. Their years of experience are not specified.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). It states "tested against a gold standard of US board-certified neuroradiologists," implying their consensus or individual expert delineation as the ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The provided text does not mention a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance. The study focuses on the standalone performance of the algorithm.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance study was done. The performance metrics (Dice scores, visual ratings) are presented for the algorithm's output directly against expert-established ground truth, without a human-in-the-loop component described in these performance tests.

    7. Type of Ground Truth Used

    The ground truth used for accuracy assessments (segmentation and cortical surfaces) was established by expert consensus/delineation (a "gold standard of US board-certified neuroradiologists").

    8. Sample Size for the Training Set

    The "almost 2000 test scans" for percentile accuracy are explicitly stated to be "independent of training scans used to derive percentiles." However, the sample size for the training set itself is not specified in this document.

    9. How Ground Truth for Training Set Was Established

    The document refers to "training scans used to derive percentiles," implying that the training set was used to construct the normative data. However, the method for establishing ground truth within that training set (e.g., for segmentation, if those scans were part of the training) is not detailed. It is implied that the normative data itself serves as the "ground truth" for the percentile comparisons.


    Why did this record match?
    Device Name :

    VARIANT II HEMOGLOBIN A1C PROGRAM (NEW), VARIANT II HEMOGLOBIN TESTING SYSTEM WITH CDM SOFTWARE, CDM

    Intended Use

    The Bio-Rad VARIANT™ II Hemoglobin A1c Program is intended for the percent determination of hemoglobin A1c in human whole blood using ion-exchange high-performance liquid chromatography (HPLC). The Bio-Rad VARIANT II Hemoglobin A1c Program is intended for Professional Use Only. For in vitro diagnostic use. Measurement of percent hemoglobin A1c is effective in monitoring long-term glucose control in individuals with diabetes mellitus.

    The VARIANT™ II β-thalassemia Short Program is intended for the separation and area percent determinations of hemoglobins A2 and F, and as an aid in the identification of abnormal hemoglobins in whole blood using ion-exchange high-performance liquid chromatography (HPLC). The Bio-Rad VARIANT II β-thalassemia Short Program is intended for use only with the Bio-Rad VARIANT II Hemoglobin Testing System. For in vitro diagnostic use. Measurement of percent hemoglobin A2 and F is effective in screening for β-thalassemia (i.e., hereditary hemolytic anemias characterized by decreased synthesis of one or more types of hemoglobin polypeptide chains).

    Device Description

    The VARIANT II Hemoglobin Testing System is a fully automated, high-throughput hemoglobin analyzer. The VARIANT II Hemoglobin Testing System provides an integrated method for sample preparation, separation and determination of the relative percent of specific hemoglobin in whole blood. It consists of two modules - the VARIANT II Chromatographic Station (VCS) and the VARIANT II Sampling Station (VSS). In addition, a personal computer is used to control the VARIANT System using Clinical Data Management (CDM) Software.

    A personal computer (PC) is used to control the VARIANT II Hemoglobin Testing System using Clinical Data Management (CDM™) software. The CDM software supports import of sample information from and export of patient results to a Laboratory Information System (LIS). Control results are displayed on Levey-Jennings charts and are exportable to Unity Real Time™.
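
    For context on the QC display mentioned above, a Levey-Jennings chart plots successive control results against limits derived from the control material's mean and standard deviation (commonly ±1, ±2, and ±3 SD). The sketch below is a generic matplotlib illustration with invented control values; it is not the CDM or Unity Real Time implementation.

    ```python
    # Generic Levey-Jennings plot: QC results vs. mean +/- 1/2/3 SD control limits.
    import numpy as np
    import matplotlib.pyplot as plt

    qc = np.array([5.1, 5.3, 5.0, 5.4, 5.2, 4.9, 5.6, 5.1, 5.0, 5.3])  # hypothetical %HbA1c QC results
    mean, sd = qc.mean(), qc.std(ddof=1)

    fig, ax = plt.subplots(figsize=(7, 3))
    ax.plot(np.arange(1, len(qc) + 1), qc, marker="o")
    ax.axhline(mean, color="k")
    for k, style in zip((1, 2, 3), ("--", "-.", ":")):
        ax.axhline(mean + k * sd, color="gray", linestyle=style)
        ax.axhline(mean - k * sd, color="gray", linestyle=style)
    ax.set_xlabel("QC run"); ax.set_ylabel("%HbA1c (control)")
    ax.set_title("Levey-Jennings chart (illustrative)")
    plt.tight_layout(); plt.show()
    ```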

    AI/ML Overview

    This K130860 submission is a Special 510(k) for a device modification, meaning the changes are to existing predicate devices (VARIANT II Hemoglobin A1c Program and VARIANT II ß-thalassemia Short Program) and aim to demonstrate substantial equivalence without impacting the core performance specifications, intended use, or operating principles. The modifications primarily involve software and firmware updates, along with a PC Board replacement.

    Therefore, the study focuses on verification and validation (V&V) testing to ensure the modified device remains safe, effective, and substantially equivalent to its predicate. It does not present new performance data against specific acceptance criteria for diagnostic accuracy as would be expected for a novel device or a device with significant performance changes. Instead, it asserts that the changes do not affect the previously established performance claims.

    Here's an analysis based on the provided text, focusing on how the device meets acceptance criteria related to its modifications:

    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a device modification submission where the performance specifications are stated to be unchanged from the predicate, the acceptance criteria are implicitly that the modified device's performance is at least as good as the predicate device and that the modifications do not introduce new risks or degrade existing performance. The "performance" reported is the outcome of the verification/validation and risk management processes.

    | Acceptance Criteria (Implicit for Device Modification) | Reported Device Performance (as stated in the submission) |
    |---|---|
    | No change to performance specifications | "When compared to the predicate device, there are no changes to the performance specifications, intended or indications for use, or operating principles." "No change or impact, claims transferred from predicate device." (for both programs) |
    | No adverse impact on product safety and effectiveness | "Risk Analysis and Verification/Validation testing results demonstrate that the changes do not affect product safety, effectiveness, and substantial equivalency claims." "Design verification/validation tests met established acceptance criteria." The review "deemed the modified product safe, effective, and comparable to the predicate device." |
    | Modifications developed under design controls | "In addition, these changes were designed, developed and implemented under established design control and GMP processes..." |
    | Compliance with risk management for modifications | "In accordance with ISO 14971, and internal risk management processes and procedures a defined risk analysis was used to identify, mitigate, or eliminate potential risks associated with the device modifications." "The risk evaluation for the device software and firmware modifications included the following tasks..." |

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a numerical sample size for a "test set" in the traditional sense of a clinical or analytical performance study. Given the nature of a Special 510(k) for software/firmware/hardware changes, the "test set" would refer to the data and scenarios used during verification and validation (V&V) testing.

    • Sample Size: Not explicitly stated as a number of patient samples. The V&V efforts would likely involve testing various functionalities, defect fixes using specific test cases, and potentially a range of instrument data (already available or specifically generated for V&V).
    • Data Provenance: Not explicitly stated. For "defect corrections," the data likely originated from "customer feedback" and scenarios that caused the identified defects. For general V&V, it would involve internal testing data. It's implied to be retrospective, as it addresses "customer feedback" and "defects" from prior versions, but specific details are absent.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This type of information is not applicable or not provided for this submission. The ground truth in this context is typically related to diagnostic accuracy, which is not being re-evaluated because performance claims are "transferred from predicate device."

    For defect fixes, the "ground truth" would be whether the defect is successfully resolved and the intended functionality works as designed. This is assessed by engineering and quality assurance teams during V&V. The document mentions "a trained risk assessment team" for FMEA, but not "experts" establishing a diagnostic ground truth for a test set.

    4. Adjudication Method for the Test Set

    Not applicable/not provided. No "adjudication method" for interpreting results from a test set is mentioned because the submission directly states that performance specifications and claims are unchanged and transferred from the predicate. The "ground truth" for V&V testing of software/firmware changes is based on successful execution of tests and resolution of identified bugs, not on expert consensus interpretation of diagnostic output.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and Effect Size of AI Assistance

    Not applicable. This device is an automated in vitro diagnostic (HPLC system) for measuring specific hemoglobin levels, not an AI-assisted diagnostic imaging or interpretation tool that involves human readers. Therefore, an MRMC study is not relevant to this submission.

    6. Standalone (Algorithm Only, Without Human-in-the-Loop) Performance Study

    The device itself (VARIANT II Hemoglobin Testing System with CDM Software) operates as a standalone automated system for measurement. The changes are to its software/firmware. The V&V testing would assess the functionality of this automated system in its modified state. So, the testing effectively evaluates the "standalone" performance of the modified system, but it's not a new standalone study; it's a re-validation of the existing standalone system after modifications. The performance claims are asserted to be the same as the predicate (which was a standalone device).

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    For the original predicate devices, the ground truth for establishing performance (e.g., accuracy of HbA1c or HbA2/F measurements) would have been based on comparison to reference methods or clinical outcomes.

    For this specific submission (device modification):

    • For defect corrections: The "ground truth" is the successful elimination of the defect and the proper functioning of the software features (e.g., CDM not crashing, calibrator reassignment working). This is validated through internal software testing.
    • For performance: The submission directly states "No change or impact, claims transferred from predicate device." This means the ground truth for performance measures (precision, accuracy, linearity, etc.) was established during the original predicate device's clearance and is implicitly inherited rather than re-established in detail for this modification. The V&V here confirmed that the modifications did not degrade the ability to achieve those previously established performance characteristics.

    8. The Sample Size for the Training Set

    Not applicable/not provided. This device is not described as involving a machine learning algorithm that requires a "training set." The software and firmware updates are for controlling the HPLC system and managing data, not for learning from data in the way an AI algorithm would.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable. As no training set for a machine learning algorithm is involved, this question is not relevant to the submission.


    K Number
    K082606
    Date Cleared
    2008-11-10

    (63 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    COMMISSIONING DATA MANAGEMENT SYSTEM (CDMS)

    Intended Use

    CDMS is a Microsoft Windows based software application designed to record and manage physics data acquired during acceptance testing, commissioning and calibration of radiation therapy treatment devices. In addition, CDMS uses the same physics data to allow users to perform MU calculations based on treatment field parameters that are either imported from the treatment planning system or entered manually. CDMS is also used to manage linac calibration using standard protocols.

    Device Description

    CDMS is a software program designed to record radiation beam data acquired during the commissioning, acceptance testing and calibration of a radiation therapy treatment device.

    AI/ML Overview

    The provided text describes the D3 Radiation Planning's CDMS (Commissioning Data Management System). Here's an analysis of the acceptance criteria and study information:

    Acceptance Criteria and Device Performance

    The document does not explicitly present a table of acceptance criteria with numerical targets and the reported device performance against those targets. Instead, it relies on a "side by side comparison" of monitor unit/dose calculations and linear accelerator calibration against established protocols (AAPM Task Group 51 and 40) as evidence of substantial equivalence to predicate devices (RadCalc V4.0 and IMSure).

    The non-clinical tests involved:

    • Importing measured physics data.
    • Performing numerous monitor unit/dose calculations.
    • Calibrating a linear accelerator according to TG-51.

    The conclusion states that "Side by side comparison tables are shown in the supporting Validation & Verification documentation," implying that the device's calculations and data management capabilities were found to align with the expected performance as defined by these existing radiation therapy physics standards and predicate devices.

    Implied Acceptance Criteria (based on comparison to predicate devices and standards):

    | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|
    | Accurate monitor unit/dose calculations (comparable to predicates and established physics data) | "Performed numerous monitor unit/dose calculations"; side-by-side comparison tables are shown in the supporting Validation & Verification documentation. |
    | Accurate linear accelerator calibration management (according to AAPM TG-51 and TG-40) | "Calibrated a linear accelerator according to TG-51." |
    | Effective management of physics data (recording, storage, accessibility) | "Allowed for proper storage of calibration parameters as well as better management of calibration reports." |
    | Substantial equivalence to predicate devices (RadCalc V4.0 and IMSure) | Concluded to be substantially equivalent based on intended use, technological characteristics, and non-clinical testing. |

    Study Information

    Due to the nature of the device (software for data management and calculation, not directly involved in patient treatment delivery), the study conducted was non-clinical.

    1. Sample Size used for the test set and the data provenance:

      • Test Set Sample Size: The document does not specify a numerical sample size for the "numerous monitor unit/dose calculations" or the calibration tests. It refers to "measured physics data" being imported, suggesting the use of real-world or simulated clinical data.
      • Data Provenance: Not explicitly stated, but the mention of "measured physics data acquired during the commissioning, acceptance testing and calibration of a radiation therapy treatment device" implies that the data would be typical of those used in radiation oncology departments. The comparison to "peer reviewed/published or manufacturer provided measured values" suggests a mix of external and internal data sources. It is implicitly retrospective as it involves existing measured physics data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not specified. The ground truth appears to be based on established physics principles (Khan (Classical) algorithm), "peer reviewed/published or manufacturer provided measured values," and compliance with American Association of Physicists in Medicine (AAPM) Task Group 51 and 40 recommendations. These are considered authoritative sources in medical physics.
    3. Adjudication method for the test set:

      • Not applicable as it was not a reader study. The "adjudication" was effectively a comparison against established physical models, measured data, and existing predicate device performance (RadCalc V4.0 and IMSure).
    4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance:

      • No. An MRMC study was not conducted. This device is a software tool for physics data management and calculation, not an AI-assisted diagnostic or interpretive tool for human readers.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Yes, implicitly. The non-clinical tests involved CDMS (the algorithm) importing data and performing calculations. The evaluation was of the software's output against known physical parameters and established methods, hence standalone performance. Human input is for data entry and reviewing results, but the core calculation is algorithmic.
    6. The type of ground truth used:

      • Expert Consensus/Established Standards: The ground truth for calculations and calibration was based on:
        • The Khan (Classical) algorithm for MU calculation (see the illustrative sketch after this list).
        • "Peer reviewed/published or manufacturer provided measured values."
        • Recommendations from AAPM Task Group 51 (for linac calibration) and AAPM Task Group 40 (for monthly QA parameters).
        • Comparison to the performance of predicate devices (RadCalc V4.0 and IMSure).
    7. The sample size for the training set:

      • This is not an AI/machine learning device that requires a training set in the conventional sense. The software uses established physics algorithms (e.g., Khan (Classical) algorithm) which are programmed based on physical laws and validated against measured data, rather than "trained" on a dataset. The "measured physics data" mentioned are used as input for calculations and management, not for training a model.
    8. How the ground truth for the training set was established:

      • Not applicable, as it's not a machine learning device. The algorithms are based on established scientific principles and formulas in radiation physics.
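
    To illustrate the kind of calculation referenced in item 6 above, the sketch below shows a simplified isocentric monitor-unit computation in the spirit of the Khan formalism, along with the TG-51 relation used for reference dosimetry. The factor names, values, and structure are illustrative assumptions, not CDMS's actual algorithm or beam data.

    ```python
    # Simplified isocentric MU calculation (Khan-style formalism, illustrative only).
    # MU = D / (k * Sc * Sp * TMR * WF * TF), with k the dose per MU under
    # calibration conditions. Real systems add inverse-square, off-axis and other terms.
    def monitor_units(dose_cGy, k_cGy_per_MU, Sc, Sp, TMR, wedge_factor=1.0, tray_factor=1.0):
        return dose_cGy / (k_cGy_per_MU * Sc * Sp * TMR * wedge_factor * tray_factor)

    # Hypothetical field: 200 cGy prescription, 1 cGy/MU calibration, open field
    mu = monitor_units(dose_cGy=200.0, k_cGy_per_MU=1.0, Sc=0.99, Sp=1.00, TMR=0.85)
    print(f"MU = {mu:.1f}")

    # For linac calibration, TG-51 relates the corrected chamber reading M to dose to
    # water: D_w = M * kQ * N_Dw (N_Dw being the Co-60 absorbed-dose-to-water factor).
    ```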

    K Number
    K063400
    Date Cleared
    2006-12-01

    (22 days)

    Product Code
    Regulation Number
    864.7470
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    VARIANT II TURBO HEMOGLOBIN A1C PROGRAM, HEMOGLOBIN TESTING SYSTEM WITH CDM 4.0

    Intended Use

    The Bio-Rad VARIANT™ II TURBO Hemoglobin A1c Program is intended for the percent determination of hemoglobin A1c in human whole blood using ion-exchange high-performance liquid chromatography (HPLC).

    The VARIANT II TURBO Hemoglobin A1c Program is intended for Professional Use Only. For In Vitro Diagnostic Use.

    Measurement of percent hemoglobin A1c is effective in monitoring long-term glucose control in individuals with diabetes mellitus.

    Device Description

    The VARIANT II TURBO Hemoglobin Testing System uses the principles of high performance liquid chromatography (HPLC). The VARIANT II TURBO Hemoglobin A1c Program is based on chromatographic separation of Hemoglobin A1c on a cation exchange cartridge.

    The new feature in this submission is an upgrade of the CDM software. The current software (CDM 3.6T) requires Windows NT, which is nearing the end of its lifecycle. CDM 4.0 is needed to move the CDM software to the Microsoft Windows XP operating system.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study conducted to demonstrate the substantial equivalence of the VARIANT™ II TURBO Hemoglobin A1c Program with CDM 4.0 software to its predicate device (VARIANT II TURBO Hemoglobin A1c Program with CDM 3.6T).

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated as numerical targets that the device needed to meet independently for clinical accuracy. Instead, the study's goal was to demonstrate that the new device's performance was substantially equivalent to the predicate device. Therefore, the "reported device performance" is a comparison to the predicate device.

    | Performance Metric | Acceptance Criteria (Implied: Substantially Equivalent to Predicate) | Reported Device Performance (Comparison to Predicate) |
    |---|---|---|
    | Accuracy (Method Correlation) | High correlation (r²) to predicate device, slope near 1, intercept near 0 | r² = 0.9991 (for 40 samples ranging 4.4%-11.6% HbA1c); slope = 1.0174; intercept = 0.0559 |
    | Precision (Within-run %CV) | Equivalent to predicate device | Normal sample: new device 0.9% CV, predicate 0.8% CV; diabetic sample: new device 0.9% CV, predicate 0.5% CV |
    | Precision (Total Precision %CV) | Equivalent to predicate device | Normal sample: new device 1.2% CV, predicate 1.9% CV; diabetic sample: new device 1.9% CV, predicate 2.6% CV |
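
    The accuracy and precision figures above are standard method-comparison statistics: new-device results regressed against predicate results (slope, intercept, r²) and %CV of replicate measurements. The following sketch shows how such statistics are typically computed; the paired values and replicates are invented for illustration.

    ```python
    # Method comparison: regress new-device results against predicate results.
    import numpy as np
    from scipy import stats

    predicate = np.array([4.4, 5.1, 5.8, 6.5, 7.2, 8.0, 9.1, 10.3, 11.6])  # %HbA1c, hypothetical
    new_dev   = np.array([4.5, 5.2, 5.9, 6.6, 7.3, 8.2, 9.2, 10.5, 11.8])

    fit = stats.linregress(predicate, new_dev)
    print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.4f}, r^2 = {fit.rvalue**2:.4f}")

    # Within-run precision is typically summarized as the %CV of replicate measurements:
    replicates = np.array([5.18, 5.22, 5.20, 5.25, 5.17])
    cv = 100 * replicates.std(ddof=1) / replicates.mean()
    print(f"%CV = {cv:.1f}")
    ```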

    2. Sample Sizes and Data Provenance

    • Accuracy (Test Set):
      • Sample size: 40 EDTA whole blood samples.
      • Data provenance: Not specified (e.g., country of origin). The samples were described as "whole blood samples," suggesting they were human clinical samples. It's retrospective in the sense that these samples likely existed prior to the test for the software upgrade.
    • Precision (Test Set):
      • Sample size:
        • New device (CDM 4.0): 40 for Normal Sample, 40 for Diabetic Sample. (Note: These refer to the 'n' in the precision table, likely representing the number of data points or runs contributing to the precision calculation, not unique individuals).
        • Predicate device (CDM 3.6T): 80 for Normal Sample, 80 for Diabetic Sample.
      • Data Provenance: Not specified (e.g., country of origin). Human EDTA whole blood patient samples were used. The samples for the new device and predicate were run at "different time periods," implying they may not be the exact same set of physical samples, but rather samples representative of similar clinical conditions. This suggests a retrospective analysis of previously run precision studies for the predicate, and a new prospective study for the updated device, using similar protocols.

    3. Number of Experts and Qualifications for Ground Truth

    • Not Applicable. This is an in-vitro diagnostic (IVD) device, specifically a lab instrument measuring a biochemical marker (HbA1c). The ground truth is established by the analytical method itself (HPLC), not by expert interpretation of images or clinical data. The "ground truth" for comparison is the measurement obtained by the predicate device.

    4. Adjudication Method for the Test Set

    • Not Applicable. As mentioned above, this is an IVD device for quantitative measurement. There is no human interpretation or adjudication involved in determining the "ground truth" or the device's output. The comparison is objective, based on analytical results.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Not Applicable. This device is an automated laboratory instrument. There are no human readers involved in the primary function (measuring HbA1c), so no MRMC study was performed or is relevant.

    6. Standalone (Algorithm Only) Performance Study

    • Yes, in essence. The study assessed the performance of the "VARIANT™ II TURBO Hemoglobin A1c Program run on the VARIANT II TURBO Hemoglobin Testing System using CDM 4.0" as a standalone system by comparing its results to results from the predicate system. The core software (CDM 4.0) is the "algorithm" here, and its performance is evaluated in direct comparison to the previous software version (CDM 3.6T) on essentially the same hardware.

    7. Type of Ground Truth Used

    • The ground truth used for comparison was the analytical measurement obtained from the predicate device (VARIANT II TURBO Hemoglobin A1c Program with CDM 3.6T). For the accuracy study, the individual sample results from the new device were compared against individual sample results from the predicate. For precision, the statistical measures (Mean, %CV) of replicate measurements were compared between the new device and the predicate.

    8. Sample Size for the Training Set

    • Not specified / Not applicable in the traditional sense of AI training. This submission describes a software upgrade for an existing analytical instrument, primarily focusing on migrating the operating system and making minor adjustments to data management. There is no mention of a machine learning or AI algorithm being trained on a specific dataset. The "development" of the CDM 4.0 software likely involved traditional software engineering and testing, not a data-driven training process in the way AI models are trained.

    9. How Ground Truth for the Training Set Was Established

    • Not applicable. As noted above, there's no indication of a machine learning or AI training set as part of this submission. The "ground truth" for software functionality and performance would have been established through standard software development and verification/validation processes against predefined specifications for the new operating system and database changes.

    K Number
    K053221
    Device Name
    CDMX-H
    Manufacturer
    Date Cleared
    2006-01-12

    (56 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    CDMX-H

    Intended Use

    The CDMx-H is intended for dental radiographic examination and diagnosis of the teeth, jaw and oral structure by using scanned dental x-ray images sent through the digital x-ray sensor.

    Device Description

    Not Found

    AI/ML Overview

    The provided document is a 510(k) clearance letter from the FDA for a device named CDMx-H. It states that the device is "substantially equivalent" to legally marketed predicate devices. Unfortunately, this document does not contain the specific acceptance criteria, performance data, or details about the studies conducted to demonstrate its performance.

    The FDA 510(k) clearance process primarily focuses on substantial equivalence to a predicate device: the device has the same intended use and technological characteristics as the predicate, or, if the technological characteristics differ, the differences do not raise new questions of safety and effectiveness and the device is as safe and effective as the predicate. The process does not typically require the submission of detailed performance studies in the same way a PMA (Premarket Approval) would.

    Therefore, based on the provided text, I cannot extract the information required to populate the table and answer the questions directly. The document is solely an FDA clearance letter and does not include the details of the studies conducted to establish the device's performance against specific acceptance criteria.

    To answer your request, I would need access to the actual 510(k) summary or the full submission document, which would detail the testing and performance data.


    K Number
    K010915
    Device Name
    CDMX
    Manufacturer
    Date Cleared
    2001-05-21

    (55 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    CDMX

    Intended Use
    Device Description
    AI/ML Overview

    K Number
    K960392
    Device Name
    CDM
    Manufacturer
    Date Cleared
    1996-07-08

    (161 days)

    Product Code
    Regulation Number
    862.2260
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    CDM

    Intended Use

    A software package and instrument interface that is used for quantitative analyses of high performance liquid chromatography (HPLC) test kits.

    Device Description

    The CDM makes use of a 486 IBM compatible computer and a separate communications interface. The system accepts signals from a detector, integrates the chromatograms, identifies reference peaks and performs calculations of percent area. In addition, the CDM can control pumps and an automated sampler and store quality control data.
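
    As a rough illustration of the integration step described above, the sketch below computes peak areas and percent area from a detector trace using crude baseline subtraction and trapezoidal integration over fixed retention-time windows. The trace, windows, and baseline handling are invented; the CDM's actual integration logic is not detailed in this summary.

    ```python
    # Illustrative chromatogram integration: peak areas and percent area.
    import numpy as np

    t = np.linspace(0, 10, 2001)                      # retention time (min), hypothetical trace
    signal = (1.0 * np.exp(-0.5 * ((t - 3.0) / 0.15) ** 2) +
              0.4 * np.exp(-0.5 * ((t - 6.5) / 0.20) ** 2) + 0.02)  # two peaks plus flat baseline

    baseline = np.percentile(signal, 10)              # crude baseline estimate
    corrected = np.clip(signal - baseline, 0, None)

    windows = {"peak A (3.0 min)": (2.4, 3.6), "peak B (6.5 min)": (5.7, 7.3)}
    areas = {name: np.trapz(corrected[(t >= lo) & (t <= hi)], t[(t >= lo) & (t <= hi)])
             for name, (lo, hi) in windows.items()}

    total = sum(areas.values())
    for name, area in areas.items():
        print(f"{name}: area {area:.3f}, {100 * area / total:.1f}% of total")
    ```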

    AI/ML Overview

    The provided document describes the Bio-Rad Laboratories Clinical Data Management (CDM) system, which is a software package and instrument interface used for quantitative analyses of high-performance liquid chromatography (HPLC) test kits. The study described aims to establish substantial equivalence of the CDM to existing predicate devices, specifically the Hewlett-Packard 3392A Reporting Integrator and, for analytical comparison, the Shimadzu CR501.

    Acceptance Criteria and Reported Device Performance

    The implicit acceptance criterion is comparability to the existing Shimadzu CR501 integrator for analytical sensitivity and accuracy across various HPLC tests. The device's performance is deemed acceptable if the results (standard deviation, coefficient of variation, correlation coefficient, y-intercept, and slope) are comparable to those obtained with the CR501.

    | Test / Analyte | Performance Metric | CR501 (Reference) | CDM (Device Performance) | Acceptance Criteria Met (Yes/No) |
    |---|---|---|---|---|
    | HVA by HPLC | Analytical Sensitivity (SD) | 0.018 | 0.020 | Yes (Comparable) |
    | | Analytical Sensitivity (CV) | 3.6% | 4.0% | Yes (Comparable) |
    | | Accuracy (Correlation Coefficient) | N/A (implied high) | 0.9999 | Yes (Comparable) |
    | | Accuracy (y-intercept) | N/A (implied near 0) | 0.011 | Yes (Comparable) |
    | | Accuracy (slope) | N/A (implied near 1) | 0.994 | Yes (Comparable) |
    | Urinary Metanephrines by HPLC: Metanephrine | Analytical Sensitivity (SD) | 0.986 | 0.796 | Yes (Comparable) |
    | | Analytical Sensitivity (CV) | 6.8% | 5.5% | Yes (Comparable) |
    | | Accuracy (Correlation Coefficient) | N/A | 0.9999 | Yes (Comparable) |
    | | Accuracy (y-intercept) | N/A | -1.074 | Yes (Comparable) |
    | | Accuracy (slope) | N/A | 1.017 | Yes (Comparable) |
    | Urinary Metanephrines by HPLC: Normetanephrine | Analytical Sensitivity (SD) | 1.118 | 1.345 | Yes (Comparable) |
    | | Analytical Sensitivity (CV) | 6.0% | 7.6% | Yes (Comparable) |
    | | Accuracy (Correlation Coefficient) | N/A | 0.9999 | Yes (Comparable) |
    | | Accuracy (y-intercept) | N/A | -2.867 | Yes (Comparable) |
    | | Accuracy (slope) | N/A | 1.023 | Yes (Comparable) |
    | Urinary Metanephrines by HPLC: 3-Methoxytyramine | Analytical Sensitivity (SD) | 0.750 | 0.546 | Yes (Comparable) |
    | | Analytical Sensitivity (CV) | 9.6% | 7.8% | Yes (Comparable) |
    | | Accuracy (Correlation Coefficient) | N/A | 0.999 | Yes (Comparable) |
    | | Accuracy (y-intercept) | N/A | -0.552 | Yes (Comparable) |
    | | Accuracy (slope) | N/A | 0.999 | Yes (Comparable) |
    | VMA by HPLC | Analytical Sensitivity (SD) | 0.010 | 0.010 | Yes (Comparable) |
    | | Analytical Sensitivity (CV) | 1.9% | 1.9% | Yes (Comparable) |
    | | Accuracy (Correlation Coefficient) | N/A | 0.9998 | Yes (Comparable) |
    | | Accuracy (y-intercept) | N/A | -0.002 | Yes (Comparable) |
    | | Accuracy (slope) | N/A | 0.994 | Yes (Comparable) |
    | Plasma Catecholamines by HPLC: Epinephrine | Analytical Sensitivity (SD) | 2.72 | 1.32 | Yes (Comparable) |
    | | Analytical Sensitivity (CV) | 18% | 13% | Yes (Comparable) |
    | | Accuracy (Correlation Coefficient) | N/A | 0.9998 | Yes (Comparable) |
    | | Accuracy (y-intercept) | N/A | -2.90 | Yes (Comparable) |
    | | Accuracy (slope) | N/A | 0.979 | Yes (Comparable) |
    | Plasma Catecholamines by HPLC: Norepinephrine | Analytical Sensitivity (SD) | 5.04 | 3.74 | Yes (Comparable) |
    | | Analytical Sensitivity (CV) | 14% | 12% | Yes (Comparable) |
    | | Accuracy (Correlation Coefficient) | N/A | 0.9999 | Yes (Comparable) |
    | | Accuracy (y-intercept) | N/A | -4.43 | Yes (Comparable) |
    | | Accuracy (slope) | N/A | 1.003 | Yes (Comparable) |
    | Benzodiazepines by HPLC | Mean Analytical Sensitivity (SD) | 1.64 | 1.59 | Yes (Comparable) |
    | | Mean Analytical Sensitivity (CV) | 6.44% | 5.92% | Yes (Comparable) |
    | | Mean Accuracy (Correlation Coefficient) | N/A | 0.9998 | Yes (Comparable) |
    | | Mean Accuracy (y-intercept) | N/A | -0.156 | Yes (Comparable) |
    | | Mean Accuracy (slope) | N/A | 0.9982 | Yes (Comparable) |
    | Tricyclic Antidepressants by HPLC | Mean Analytical Sensitivity (SD) | 1.37 | 1.24 | Yes (Comparable) |
    | | Mean Analytical Sensitivity (CV) | 5.23% | 4.74% | Yes (Comparable) |
    | | Mean Accuracy (Correlation Coefficient) | N/A | 0.9997 | Yes (Comparable) |
    | | Mean Accuracy (y-intercept) | N/A | 4.25 | Yes (Comparable) |
    | | Mean Accuracy (slope) | N/A | 0.955 | Yes (Comparable) |

    1. Sample sizes used for the test set and the data provenance:

    The document does not explicitly state the sample sizes (e.g., number of specimens or chromatograms) used for the analytical comparison. It mentions "each of the following five tests" and provides sensitivity and accuracy data for various analytes within these tests. The data provenance is not specified, but it implies a laboratory setting where these HPLC tests were performed. It is a retrospective analysis of the device's performance against a comparative device rather than a prospective clinical study involving patient data.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

    This study is an analytical comparison of two laboratory instruments (integrators) for quantitative analysis. The "ground truth" here is the quantitative analytical results obtained by a well-established and accepted method (Shimadzu CR501 integrator). Therefore, there were no human experts establishing a "ground truth" in the diagnostic sense (like radiologists interpreting images). The accuracy is measured against the output of the reference instrument.

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    Not applicable. This is an analytical device comparison, not a diagnostic study requiring adjudication of expert interpretations.

    4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance:

    Not applicable. This is not an AI-assisted diagnostic study or an MRMC study. It's a comparison of two analytical instruments.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    Yes, this study represents a standalone comparison of the CDM system's performance (algorithm-driven integration and calculation) against another standalone instrument (CR501 integrator). There is no "human-in-the-loop performance" in evaluating the integrators' quantitative outputs. The humans operate the HPLC systems, but the comparison is on the data processing output.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    The "ground truth" is established by the analytical measurements and calculations performed by the well-accepted and commercially available Shimadzu CR501 integrator. This is an analytical reference standard rather than a clinical ground truth like pathology or expert consensus. The assumption is that the CR501 provides accurate and precise measurements.

    7. The sample size for the training set:

    Not applicable. This document describes a performance evaluation of a device, not the development or training of a machine learning algorithm. The CDM is a "Clinical Data Management System" that performs calculations, integration, and control, rather than a learning AI system requiring a specific training set as understood in modern AI.

    8. How the ground truth for the training set was established:

    Not applicable, as there is no mention of a training set for an AI algorithm.

