Search Results

Found 17 results

510(k) Data Aggregation

    K Number: K240909
    Date Cleared: 2024-08-02 (122 days)
    Product Code:
    Regulation Number: 870.2345
    Reference & Predicate Devices:
    Device Name: Samsung ECG App v1.3 (ECG)

    Intended Use

    The Samsung ECG app with IHRN is an over-the-counter (OTC) software-only, mobile medical application operating on a compatible Samsung Galaxy Watch and Phone for informational use only in adults 22 years and older. The app analyzes pulse rate data to identify episodes of irregular heart rhythms suggestive of atrial fibrillation (AFib) and provides a notification suggesting the user record an ECG to analyze the heart rhythm. The Irregular Heart Rhythm Notification Feature is not intended to provide a notification on every episode of irregular rhythm suggestive of AFib and the absence of a notification is not intended to indicate no disease process is present; rather the feature is intended to opportunistically acquire pulse rate data when the user is still and analyze the data when determined sufficient toward surfacing a notification.

    Following this prompt, or based on the user's own initiative, the app is intended to create, record, store, transfer, and display a single-channel ECG, similar to a Lead I ECG. Classifiable traces are labeled by the app as sinus rhythm, AFib, high heart rate (non-AFib), or AFib with high heart rate, with the intention of aiding heart rhythm identification.

    The app is not intended for users with other known arrhythmias, and it is not intended to replace traditional methods of diagnosis or treatment. Users should not interpret or take clinical action based on the device output without consultation of a qualified healthcare professional.

    Device Description

    The Samsung ECG App v1.3 is a software as a medical device (SaMD) that consists of a pair of mobile medical apps: one app on a compatible Samsung wearable and the other on a compatible Samsung phone, both general-purpose computing platforms.

    When enabled, the wearable application of the SaMD uses a wearable photoplethysmography (PPG) sensor to monitor the user's cardiac signals in the background. The application examines beat-to-beat intervals and generates an irregular rhythm notification indicative of atrial fibrillation (AFib). Upon receiving an irregular rhythm notification, or at their discretion, the user can record a single-lead ECG using the same wearable. The wearable application then calculates the average heart rate from the ECG recording and produces a rhythm classification. The wearable application also securely transmits the data to the ECG phone application on the paired phone. The phone application shows a time-stamped irregular rhythm notification history with heart rate information and an ECG measurement history, and generates a PDF file of the ECG signal, which the user can share with their healthcare provider.
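    The notification logic itself is not described in the submission. Purely as an illustration of the kind of beat-to-beat interval analysis mentioned above, the sketch below flags a window of RR intervals whose successive-difference variability exceeds a threshold; the function name, threshold, and minimum beat count are hypothetical and are not Samsung's algorithm.

```python
import numpy as np

def irregularity_screen(rr_intervals_ms, rmssd_threshold_ms=100.0, min_beats=30):
    """Toy screen for irregular beat-to-beat timing in a window of RR intervals.

    Returns True when successive-difference variability exceeds the threshold,
    loosely mimicking how an irregular-rhythm feature might flag a window of
    pulse data and prompt the user to record a confirmatory ECG.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    if rr.size < min_beats:
        return False  # not enough beats in this window; keep collecting data
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))  # root mean square of successive differences
    return rmssd > rmssd_threshold_ms

# Example: large beat-to-beat swings trigger the flag, a steady rhythm does not.
rng = np.random.default_rng(0)
irregular = 800 + rng.normal(0, 150, size=60)  # RR intervals in ms
regular = 800 + rng.normal(0, 20, size=60)
print(irregularity_screen(irregular), irregularity_screen(regular))
```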

    AI/ML Overview

    Acceptance Criteria and Device Performance for Samsung ECG App v1.3

    1. Acceptance Criteria and Reported Device Performance

    | Parameter | Acceptance Criteria (Reference Device: Apple ECG 2.0 App K201525) | Reported Device Performance (Samsung ECG App v1.3) |
    |---|---|---|
    | Heart Rate 50-150 BPM | | |
    | AFib Sensitivity | 98.5% (95% CI 97.3%, 99.6%) | 96.0% (95% CI 94.0%, 97.8%) |
    | Sinus Rhythm Specificity | 99.3% (95% CI 98.4%, 100%) | 98.7% (95% CI 94.0%, 97.8%) |
    | Heart Rate 100-150 BPM | | |
    | AFib Sensitivity | 90.7% (95% CI 86.7%, 94.6%) | 93.6% (95% CI 88.5%, 97.5%) |
    | Sinus Rhythm Specificity | 83% (95% CI 77.8%, 88%) | 96.3% (95% CI 93.5%, 98.9%) |
    | Visually Interpretable Waveforms | Not explicitly stated for reference device, but implied by "sufficient" signal quality | 98.7% of cases |
    | Accuracy of Key Intervals (RR, PR, QRS) and R-wave Amplitude | Not explicitly stated for reference device, but implied by "sufficient" signal quality | Accurately measured when compared against standard Lead I ECG |

    Note: The reported performance for Samsung ECG App v1.3's "Sinus rhythm (HR 50-150 BPM)" and "AFib (HR 50-150 BPM)" is presented with the same 95% CI: (94.0%, 97.8%). This might be a transcription error in the document, as specificity and sensitivity for different conditions would typically have distinct confidence intervals. Assuming independent calculations, these values are presented as they appear in the source.
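    As an aside for readers checking the table, sensitivity and specificity are simple proportions of correctly classified recordings, and the 95% confidence intervals can be reproduced with a standard binomial interval. The sketch below uses the Wilson score interval with hypothetical correct-classification counts (435/453 AFib, 682/691 sinus rhythm) chosen only to approximate the reported 96.0% and 98.7%; the actual counts and the interval method used in the submission are not stated.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical counts, chosen to roughly match the reported point estimates.
tp, afib_n = 435, 453      # AFib recordings correctly labeled AFib
tn, sinus_n = 682, 691     # sinus recordings correctly labeled sinus rhythm

lo, hi = wilson_ci(tp, afib_n)
print(f"AFib sensitivity {tp / afib_n:.1%} (95% CI {lo:.1%}, {hi:.1%})")
lo, hi = wilson_ci(tn, sinus_n)
print(f"Sinus specificity {tn / sinus_n:.1%} (95% CI {lo:.1%}, {hi:.1%})")
```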

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 1,013 subjects. These subjects contributed to 453 AFib recordings (heart rate 50 to 150 BPM) and 691 Sinus rhythm recordings (heart rate 50 to 150 BPM) for the primary endpoint analysis.
    • Data Provenance: The study was a multi-center study, implying data from multiple locations, likely within the US given the FDA submission context and the racial demographics provided (predominantly Caucasian). The study was likely prospective as it involved recruiting subjects and collecting data for validation.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not explicitly state the "number of experts" or their specific "qualifications" used to establish the ground truth for the test set. It mentions "Clinical Validation showing comparable clinical performance...compared to the reference device" and that the "ECG function accurately classified...compared against the standard Lead I ECG," implying that comparison was made to physician-adjudicated or expertly interpreted ECGs, but the details of the ground truth establishment are not provided.

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method used for establishing the ground truth for the test set.

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    There is no mention of a Multi Reader Multi Case (MRMC) comparative effectiveness study being done, or any effect size of how much human readers improve with AI vs without AI assistance. The study focuses on the standalone performance of the device's ECG rhythm classification compared to a reference device.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone study was conducted. The "Clinical Validation" section details the performance of the "ECG rhythm classification of the Samsung ECG App v1.3" in terms of sensitivity and specificity against a clinical ground truth, without explicit human-in-the-loop interaction for the classification task itself. The device "accurately classified" recordings.

    7. Type of Ground Truth Used

    The ground truth used was clinical diagnosis based on "446 subjects diagnosed with AFib, 536 subjects without AFib, and 31 subjects diagnosed with another type of irregular rhythm." The performance was evaluated by comparing the device's classifications against "standard Lead I ECG" interpretation, implying expert consensus (from qualified healthcare professionals interpreting the standard ECGs) or clinical diagnosis as the ground truth.

    8. Sample Size for the Training Set

    The document does not specify the sample size for the training set. It focuses on the validation study.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established.


    K Number: K232072
    Date Cleared: 2024-02-09 (212 days)
    Product Code:
    Regulation Number: 878.4400
    Reference & Predicate Devices:
    Device Name: AMICA-GEN AGN-H-1.3, AMICA-GEN AGN 3.3, AMICA-PROBE 17G & 18G

    Intended Use

    Coagulation (thermoablation) of soft tissue. Not for use in cardiac procedures.

    Device Description

    The devices that are the subject of this submission, the AGN-H-1.3 and AGN-3.3 generators and the AMICA-PROBE 17G & 18G applicators, belong to the HS AMICA device family and represent an evolution of previous models already authorized by FDA under K182605. The working configuration of the HS AMICA commercial system consists of:

    • A programmable generator for the generation and control of the energy required for the thermoablative treatment;
    • Disposable applied parts (applicators) for the direct release of energy into the patient or for temperature measurements inside the patient's body.

    The result is an integrated system for thermoablation of tissues through controlled emission of non-ionizing electromagnetic radiation in the microwave and radiofrequency ranges. The generators and their accessories can emit only microwaves (MW, 2450 MHz), only radiofrequency waves (RF, 450 kHz), or either microwaves or radiofrequency waves (not simultaneously). The new electrosurgical devices introduced by H.S. Hospital Service S.p.A. are the AMICA-GEN models AGN-H-1.3 and AGN-3.3, along with the 17G and 18G AMICA-PROBE disposable applicators.

    AI/ML Overview

    This document describes the premarket notification (510(k)) for the HS AMICA devices family, specifically the AMICA-GEN AGN-H-1.3 and AGN-3.3 generators, and AMICA-PROBE 17G & 18G applicators. The submission aims to establish substantial equivalence to a previously cleared device (K182605).

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly based on demonstrating substantial equivalence to the predicate devices through adherence to safety and performance standards for electrosurgical cutting and coagulation devices. The reported device performance primarily focuses on the device's functionality, adherence to electrical safety and EMC standards, and a single ex-vivo bench test for the new applicators.

    Table of Acceptance Criteria (Implied) and Reported Device Performance:

    | Feature/Test | Acceptance Criteria (Implied by Predicate Equivalence and Standards) | Reported Device Performance |
    |---|---|---|
    | Indications for Use | Coagulation (thermoablation) of soft tissue. Not for use in cardiac procedures. | Coagulation (thermoablation) of soft tissue. Not for use in cardiac procedures. (Unchanged, met) |
    | Generators (AGN-H-1.3, AGN-3.3) | Evolution of predicate models (AGN-H-1.2, AGN-3.2) with increased MW rated power (190W to 250W) in pulsed mode, while maintaining principles of operation, design, architecture, and critical components. | Increased MW rated power from 190W to 250W (CW max) and 140W to 180W (pulsed mode). This change "solely affects the PULSED energy delivery mode and does not alter the total amount of microwave energy that the HS AMICA system may administer to a patient in a single treatment session." Tested for electrical safety and EMC. |
    | Applicators (AMICA-PROBE 17G & 18G) | New, smaller-diameter needles (17G, 18G) compared to predicate (11G, 14G, 16G); same performance specifications and manufacturing materials as authorized applicators. Sterile and disposable. | New applicators are 17G and 18G. They share the same performance specifications and manufacturing materials with already approved applicators. Sterilization method (ethylene oxide) revalidated. Ex-vivo test conducted to validate functional ablation performance, supporting the "moving shot" technique. |
    | Electrical Safety | Compliance with IEC 60601-2-6 and IEC 60601-2-2. | Complies with IEC 60601-2-6 and IEC 60601-2-2. (Met) |
    | Electromagnetic Compatibility (EMC) | Compliance with IEC 60601-1-2. | Complies with IEC 60601-1-2. (Met) |
    | Software Verification & Validation | Validation according to FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (May 2005) and IEC 62304. Moderate Level of Concern (Class B). | Software validated to the specified guidance and standards. Considered "Moderate Level of Concern." "Changes introduced to adapt the functioning of the new applicators can be considered marginal and therefore only a few validation tests have been performed." |
    | Biocompatibility | Not necessary if using the same materials as predicate devices. | Not necessary, as manufactured using the same materials as the predicate devices. (Met) |
    | Performance (Functional Ablation) | Demonstrated ability of new applicators to perform functional ablation, particularly for the thinnest probe (18G). | Ex-vivo test conducted for the 18G AMICA-PROBE to validate functional ablation performance, supporting the "moving shot" technique. Specific quantitative metrics of performance (e.g., ablation zone size, consistency) are not detailed in this summary. |

    2. Sample Size Used for the Test Set and Data Provenance:

    • The document mentions "ex-vivo test" for the 18G-gauge AMICA-PROBE.
    • Sample Size: Not specified for the ex-vivo test.
    • Data Provenance: The ex-vivo test was conducted by H.S. Hospital Service S.p.A. The country of origin for the test is not explicitly stated, but the company is based in Italy. The study is a bench test rather than a clinical study involving human subjects or patient data, so the retrospective/prospective distinction does not strictly apply.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • This information is not provided. The study appears to be a bench test rather than a clinical study requiring expert consensus on image interpretation or clinical outcomes. The "ground truth" for a bench test would be objective measurements of physical parameters (e.g., ablation volume, temperature distribution), which are typically established by the testing methodology itself, not by expert readers.

    4. Adjudication Method for the Test Set:

    • Not applicable as this is a bench test, not a study requiring reader adjudication on a test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No, an MRMC comparative effectiveness study was not performed. This submission focuses on demonstrating substantial equivalence for an electrosurgical device family, particularly new models and smaller-gauge applicators, primarily through engineering principles, electrical safety, EMC, software validation, and bench testing. It does not involve AI software or human-in-the-loop performance evaluation.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

    • No, a standalone algorithm performance study was not performed. This device is a hardware electrosurgical system with integrated software, not a standalone diagnostic AI algorithm.

    7. Type of Ground Truth Used:

    • For the ex-vivo bench test, the ground truth would be objective physical measurements related to ablation performance (e.g., thermal lesion size, shape, or temperature profiles) achieved in the ex-vivo tissue model. The document states "validate the functional ablation performances," implying that quantitative or qualitative measurements of the ablation were the "ground truth" for this specific test.

    8. Sample Size for the Training Set:

    • Not applicable. This device is an electrosurgical system, not an AI/ML model that undergoes a "training" phase with a dataset. The software validation is based on standard software engineering principles and testing, not machine learning.

    9. How the Ground Truth for the Training Set Was Established:

    • Not applicable, as there is no "training set" for this type of medical device submission.

    K Number: K232981
    Date Cleared: 2023-10-11 (20 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices:
    Device Name: Synq Software Version 1.3

    Intended Use

    The Synq Software is indicated for use in conjunction with Synq, a magnetic resonance diagnostic device (MRDD) that produces axial, sagittal, coronal, and oblique cross-sectional images and displays the internal structure and/or function of the head. Depending on the region of interest, contrast agents may be used. These images when interpreted by a trained physician yield information that may assist in diagnosis.

    Device Description

    The Synq Software allows a user to configure and initiate a magnetic resonance scan of a subject. In doing so, the software coordinates the interactions of the magnetic field, gradients, radio frequency (RF) transmitter and receiver coil in Synq (Previously known as EVRY, K200327) to produce axial, sagittal, coronal, and oblique cross-sectional images that represent the spatial distribution of protons with spin. The Synq Software Version 1.3 upgrades the current software version in the Synq system to include additional imaging applications, functionality, and minor bug fixes. The Software should be used only by qualified medical professionals who are trained in magnetic resonance diagnostic devices.

    AI/ML Overview

    This document describes Synaptive Medical Inc.'s Synq Software Version 1.3, software for a magnetic resonance diagnostic device (MRDD). The 510(k) submission (K232981) asserts its substantial equivalence to the predicate device (EVRY, K200327).

    Acceptance Criteria and Reported Device Performance

    The provided text does not contain a specific table detailing acceptance criteria alongside reported device performance. However, it states:

    "As per Bench Testing document attached under Bench Testing the image performance testing and safety testing meet all predefined acceptance criteria. Together, with an attestation from a U.S. Board certified radiologist, demonstrate substantial equivalence to the predicate device (EVRY K200327) by conforming to FDA recognized standards and addressing all requirements in FDA MRDD Guidance."

    This indicates that the acceptance criteria were based on FDA recognized standards (NEMA MS 1, 2, 3, 4, 5, 8, 9; IEC 62464-1, IEC-60601-2-33) and FDA MRDD Guidance, focusing on image performance and safety. The reported performance is that the device meets all these predefined acceptance criteria.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criterion Category | Specific Criteria (Inferred from Text) | Reported Device Performance |
    |---|---|---|
    | Image Performance | Conformance to NEMA MS 1, 2, 3, 4, 5, 8, 9 standards | Met all predefined criteria |
    | Image Performance | Conformance to IEC 62464-1 (Magnetic resonance equipment for medical imaging - Determination of essential image quality parameters) | Met all predefined criteria |
    | Image Performance | Compliance with FDA MRDD Guidance | Met all requirements |
    | Safety Testing | Conformance to IEC 60601-2-33 (Medical electrical equipment - Particular requirements for the basic safety and essential performance of magnetic resonance equipment for medical diagnosis) | Met all predefined criteria |
    | Safety Testing | Compliance with FDA MRDD Guidance | Met all requirements |

    2. Sample Size Used for the Test Set and Data Provenance

    The text states: "A small, representative subset of clinical images are provided along with this 510(k) submission, as per guidance document 'Submission of Premarket Notifications for Magnetic Resonance Diagnostic Devices' issued November 18, 2016."

    • Sample Size: "A small, representative subset of clinical images." The exact number is not specified in the provided text.
    • Data Provenance: The text does not explicitly state the country of origin or whether the data was retrospective or prospective. It refers to "clinical images," which implies human subject data, but further details are absent.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The text mentions: "Together, with an attestation from a U.S. Board certified radiologist, demonstrate substantial equivalence to the predicate device (EVRY K200327) by conforming to FDA recognized standards and addressing all requirements in FDA MRDD Guidance."

    • Number of Experts: At least one U.S. Board certified radiologist provided an attestation. It's unclear if more than one was involved for ground truth establishment.
    • Qualifications: U.S. Board certified radiologist. The number of years of experience is not specified.

    4. Adjudication Method for the Test Set

    The document does not describe a formal adjudication method (like 2+1 or 3+1) for establishing the ground truth of the test set. It mentions an "attestation from a U.S. Board certified radiologist," suggesting a single expert's assessment was used, possibly against benchmarks.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, and its Effect Size

    No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Synq Software Version 1.3 did not require clinical tests since substantial equivalence to the legally marketed predicate device was proven with the verification and validation testing." Therefore, there is no effect size of human readers improving with AI vs. without AI assistance to report.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    Yes, a standalone performance assessment was effectively conducted through "image performance testing and safety testing" against predefined acceptance criteria derived from FDA recognized standards. This implies the software's output (images) was evaluated independently as part of the verification and validation. The role of the radiologist was for "attestation" and interpretation of images, rather than as part of a human-in-the-loop performance study.

    7. The Type of Ground Truth Used

    The ground truth for the verification and validation testing appears to be based on conformance to established technical performance standards and safety requirements (NEMA MS and IEC standards, FDA MRDD Guidance). For the clinical images, the "attestation from a U.S. Board certified radiologist" suggests a form of expert consensus/opinion regarding the quality and diagnostic utility of the images produced by the device, likely assessed against expected clinical standards of MRI imaging. Pathology or outcomes data are not mentioned.

    8. The Sample Size for the Training Set

    The document does not provide information regarding a training set. The submission focuses on verification and validation testing for an updated software version (1.3) of an already cleared device, implying the device's core algorithms were likely developed and validated previously. This 510(k) is for updates and does not detail the original training data for the base software.

    9. How the Ground Truth for the Training Set Was Established

    Since no training set information is provided, there is no information on how its ground truth was established.


    K Number: K150099
    Date Cleared: 2015-03-23 (62 days)
    Product Code:
    Regulation Number: 888.3030
    Reference & Predicate Devices:
    Device Name: DePuy Synthes Variable Angle Locking Hand System (1.3 mm and 2.0 mm Plates and Screws)

    Intended Use

    The DePuy Synthes Variable Angle Locking Hand System is intended for fracture fixation of the hand and other small bones and small bone fragments, in adults and adolescents (12-21), particularly in osteopenic bone.

    System indications include the following:
    o Open reduction and internal fixation of fractures, mal-unions, and non-unions
    o Following excision of benign bone tumors
    o Replantations and reconstructions
    o Arthrodeses of joints involving small bones
    o Osteotomies, including deformity correction such as rotation, lengthening, shortening
    o Pathological fractures, including impending pathologic fractures

    Device Description

    The DePuy Synthes Variable Angle Locking Hand System (1.3 mm and 2.0 mm Plates and Screws) consists of stainless steel and titanium plates and screws that offer screw-to-plate locking designed for various fracture modes of the hand. The plates and screws contained in the DePuy Synthes Variable Angle Locking Hand System (1.3 mm and 2.0 mm Plates and Screws) are offered in a range of configurations to accommodate patient anatomy and surgical need. The subject system contains two plate and screw sizes, 1.3 mm and 2.0 mm, general instruments, and device specific instruments. The 1.3 mm plates in this submission are designed to accept new 1.3 mm cortex and locking screws. The 2.0 mm plates are designed to accept existing 2.0 mm cortex screws, 2.0 mm locking screws, and new 2.0 mm Variable Angle (VA) locking screws. The new 2.0 mm VA locking plates and screws feature existing variable angle locking technology (K100776).

    AI/ML Overview

    This document is a 510(k) premarket notification for the DePuy Synthes Variable Angle Locking Hand System. It is a submission to the FDA for a medical device, which means it describes the device and its intended use, but it does not contain information about an AI/ML-driven device or study results of the type you're asking for.

    Therefore, I cannot provide the requested information about acceptance criteria and study details for an AI/ML device from this document. This document pertains to a physical medical implant (plates and screws for hand fracture fixation).


    K Number: K143564
    Manufacturer:
    Date Cleared: 2015-03-05 (79 days)
    Product Code:
    Regulation Number: 878.4350
    Reference & Predicate Devices:
    Device Name: Visual-ICE Cryoablation System, Software Revision 1.3.1

    Intended Use

    The Visual-ICE Cryoablation System is indicated for use as a cryosurgical tool in the fields of general surgery, dermatology, neurology (including cryoanalgesia), thoracic surgery, ENT, gynecology, oncology, and urology. This system is designed to destroy tissue (including prostate and kidney tissue, liver metastases, tumors, skin lesions) by the application of extremely cold temperatures. The Visual-ICE Cryoablation System has the following specific indications:

    • Urology: Ablation of prostate tissue in cases of prostate cancer and Benign Prostatic Hyperplasia (BPH)
    • Oncology: Ablation of cancerous or malignant tissue and benign tumors, and palliative intervention
    • Dermatology: Ablation or freezing of skin cancers and other cutaneous disorders, destruction of warts or lesions, angiomas, sebaceous hyperplasia, basal cell tumors of the eyelid or canthus area, ulcerated basal cell tumors, dermatofibromas, small hemangiomas, mucocele cysts, multiple warts, actinic and seborrheic keratosis, cavernous hemangiomas, peri-anal condylomata, and palliation of tumors of the skin
    • Gynecology: Ablation of malignant neoplasia or benign dysplasia of the female genitalia
    • General surgery: Palliation of the rectum, hemorrhoids, anal fissures, pilonidal cysts, and recurrent cancerous lesions, and ablation of breast fibroadenomas
    • ENT: Palliation of tumors of the oral cavity and ablation of leukoplakia of the mouth
    • Thoracic surgery: Ablation of arrhythmic cardiac tissue and cancerous lesions
    • Proctology: Ablation of benign or malignant growths of the anus or rectum, and hemorrhoids
    Device Description

    The Visual-ICE Cryoablation System is a mobile console system intended for cryoablative tissue destruction using a minimally invasive procedure. The system is computer-controlled with a touch screen user interface that allows the user to control and monitor the procedure. The therapy delivered by the system is based on the Joule-Thomson effect displayed by compressed gases. The Visual-ICE System uses high-pressure argon gas that circulates through closed-tip cryoablation needles to induce tissue freezing. Active tissue thawing is achieved by circulating helium gas through the needles or, alternatively, by the use of Galil Medical i-Thaw® technology in which a heating element inside the cryoablation needle can be energized to cause thawing.
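    The freeze/thaw asymmetry between argon and helium follows from the sign of the Joule-Thomson coefficient; this is standard gas thermodynamics rather than anything stated in the submission. For a gas throttled at constant enthalpy,

```latex
\mu_{JT} \;=\; \left(\frac{\partial T}{\partial P}\right)_{H},
\qquad
\Delta T \;\approx\; \mu_{JT}\,\Delta P .
```

    Near room temperature argon has \mu_{JT} > 0, so expanding it from high pressure (\Delta P < 0) drops the needle-tip temperature and freezes tissue, while helium's inversion temperature lies far below room temperature, so \mu_{JT} < 0 and the same expansion warms the gas, which is what enables active thawing.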

    This Special 510(k) is being submitted to modify the software with a variety of changes to enhance usability of the system. The functions of the system that users use to deliver the cryoablation treatment remain unchanged.

    AI/ML Overview

    The provided text is a 510(k) Summary for the Visual-ICE® Cryoablation System, Software Revision 1.3.1. It describes a medical device and its intended use, but it does not contain information about acceptance criteria, device performance metrics, sample sizes for test or training sets, expert qualifications, adjudication methods, or comparative effectiveness studies (MRMC or standalone).

    The document is a regulatory submission demonstrating substantial equivalence to a predicate device, focusing on software changes and their impact on safety and effectiveness. It states that "Software Revision 1.3.1 passed all verification and validation testing," implying that it met internal development and regulatory requirements, but it does not detail those specific acceptance criteria or the study data that proves the device meets them in terms of clinical performance.

    Therefore, many of the requested elements cannot be extracted from this document.

    Here's what can be inferred and what is explicitly not present:

    1. Table of acceptance criteria and reported device performance:

    • Acceptance Criteria: Not explicitly stated in terms of specific performance metrics or thresholds. The document implies an acceptance criterion of "passed all verification and validation testing" for the software changes, but no numerical or qualitative performance targets are provided.
    • Reported Device Performance: Not reported in terms of clinical outcomes or specific performance metrics. The document states "no new unacceptable risks were identified" and "no changes were made to the Visual-ICE System hardware," focusing on the safety and functional integrity of the software update rather than clinical performance data.

    2. Sample size used for the test set and the data provenance:

    • Sample Size for Test Set: Not specified. "Complete software verification and validation testing" was performed, but the size or nature of the test set (e.g., number of test cases, simulated procedures, or patient data if applicable) is not detailed.
    • Data Provenance: Not specified. Given it's a software update for an existing device, testing likely involved internal simulations, unit tests, integration tests, and system-level tests. There is no mention of patient data (retrospective or prospective, or country of origin) being used for this specific software revision's validation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable/Not mentioned. The document describes software verification and validation, which typically involves engineering and quality assurance personnel, not clinical experts establishing ground truth for diagnostic or therapeutic accuracy in the same way an AI model for image interpretation would require.

    4. Adjudication method for the test set:

    • Not applicable/Not mentioned. Adjudication is relevant when multiple experts interpret data to establish a consensus ground truth. This type of process is not described for software verification and validation.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance:

    • No. This is not an AI-assisted diagnostic or therapeutic device that would involve human readers. The device is a cryoablation system, and this submission relates to a software update for its control system.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • No. This device is a cryoablation system, which is inherently a human-in-the-loop interventional device. The software update affects its control and usability. Standalone algorithm performance is not applicable in this context.

    7. The type of ground truth used:

    • Not explicitly defined in the context of clinical ground truth (e.g., pathology, outcomes data). For software verification, the "ground truth" would be the expected behavior/output as defined by the software requirements and design specifications. For example, if a timer should display "X" seconds, the ground truth is "X" seconds, and the test verifies that it does.

    8. The sample size for the training set:

    • Not applicable/Not mentioned. This is not a machine learning or AI device that requires a training set in the conventional sense. The software was developed and verified, not "trained."

    9. How the ground truth for the training set was established:

    • Not applicable/Not mentioned, as there is no training set for this type of device.

    In summary: The provided document is a regulatory statement for a software update to a cryoablation system. It confirms that the software passed internal verification and validation, thus maintaining substantial equivalence to previously cleared devices. However, it does not include the detailed performance study information typically associated with AI/ML-based diagnostic or therapeutic devices such as acceptance criteria based on accuracy/sensitivity/specificity, clinical test sets, expert ground truth establishment, or comparative effectiveness studies.


    K Number: K101597
    Manufacturer:
    Date Cleared: 2010-10-18 (132 days)
    Product Code:
    Regulation Number: 862.1345
    Reference & Predicate Devices:
    Device Name: WAVESENSE DIABETES MANAGER MODEL VERSION 1.3.4

    Intended Use

    The WaveSense Diabetes Manager (WDM) application (app) is intended for use in the home and professional settings to aid individuals with diabetes and their healthcare professionals in the review, analysis, and evaluation of blood glucose readings to support an effective diabetes management program. The WaveSense Diabetes Manager application is a digital logbook and diabetes management tool designed to operate using the iPhone Operating System platform. The application can be used alone or with the WaveSense Direct Connect Cable and a WaveSense-enabled blood glucose meter (BGM) with a mini-USB port.

    Device Description

    The WaveSense Diabetes Manager (WDM) application (app) is a digital logbook and diabetes management tool for the iPhone operating system platform. The application can be used alone or with the WaveSense Direct Connect Cable and a WaveSense-enabled Blood Glucose Meter (BGM) with a mini-USB port.
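    The submission does not describe the app's internal data model. Purely as a sketch of what a digital logbook of this kind stores and summarizes, the hypothetical schema below records glucose readings with optional insulin and carbohydrate context and computes a simple review statistic; all class and field names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class LogEntry:
    """One logbook record: a glucose reading plus optional insulin/carb context."""
    timestamp: datetime
    glucose_mg_dl: float
    insulin_units: Optional[float] = None
    carbs_g: Optional[float] = None
    source: str = "manual"  # or "meter" when imported over the connect cable

@dataclass
class Logbook:
    entries: List[LogEntry] = field(default_factory=list)

    def add(self, entry: LogEntry) -> None:
        self.entries.append(entry)

    def average_glucose(self) -> float:
        """The kind of summary a review/analysis view might display."""
        return sum(e.glucose_mg_dl for e in self.entries) / len(self.entries)

book = Logbook()
book.add(LogEntry(datetime(2010, 10, 1, 7, 30), 112.0, carbs_g=45.0, source="meter"))
book.add(LogEntry(datetime(2010, 10, 1, 12, 15), 145.0, insulin_units=4.0))
print(f"Average glucose: {book.average_glucose():.0f} mg/dL")
```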

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the AgaMatrix WaveSense Diabetes Manager application:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text focuses on demonstrating substantial equivalence to a predicate device rather than explicitly defining and meeting specific analytical or clinical performance acceptance criteria for the WaveSense Diabetes Manager application itself. The study's focus was on the ease of use and functional equivalence as a data management tool.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|
    | Ease of Operation | Demonstrated ease of operating the WaveSense Diabetes Manager application as intended. |
    | Intended Use Equivalence to Predicate | The application is equivalent in performance to the predicate device for its intended use (review, analysis, evaluation of blood glucose results to support diabetes management). |
    | Accessory to BGM Equivalence | Shares the same accessory relationship with WaveSense Blood Glucose Monitoring Meters as the predicate. |
    | Logbook Functionality | Provides blood glucose readings logbook; adds insulin and carbohydrate intake logging compared to predicate. |
    | Platform Compatibility | Operates on the iPhone Operating System platform (predicate operated on PC). |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not explicitly stated. The text mentions "Clinical setting by persons with diabetes," but does not provide a number for the participants in this evaluation.
    • Data Provenance: The study was conducted "in house and in a Clinical setting." The country of origin is not specified but is presumed to be the USA, given the submission to the FDA. The study appears to be prospective in nature, as it involved actively evaluating the device.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This information is not provided in the document. The study primarily focused on user experience and functional equivalence rather than a diagnostic performance evaluation requiring expert ground truth establishment.

    4. Adjudication Method for the Test Set

    This information is not provided. Given the nature of the study (ease of use and functional equivalence), a formal adjudication method for diagnostic accuracy would likely not be relevant or necessary.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No, an MRMC comparative effectiveness study was not done. The WaveSense Diabetes Manager application is a data management tool, not an AI-powered diagnostic device, and therefore this type of study is not applicable.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The device is a standalone application in the sense that it collects and displays data. However, it's not an "algorithm-only" device for diagnostic or predictive purposes without human interaction. Its function is to facilitate human review and analysis of blood glucose data. The performance assessment was based on its operational ease and functional equivalence.

    7. The Type of Ground Truth Used

    The concept of "ground truth" as it applies to diagnostic accuracy (e.g., pathology, expert consensus) is not applicable to this device. The "ground truth" for this application would be the accurate transfer and display of blood glucose readings, which are generated by an external BGM, and the user's ability to easily navigate and utilize the app's features. The study implicitly evaluated the functional correctness and user experience as its "ground truth."

    8. The Sample Size for the Training Set

    This information is not applicable/not provided. The WaveSense Diabetes Manager is an application for data management, not a machine learning or AI model that requires a "training set" in the conventional sense.

    9. How the Ground Truth for the Training Set Was Established

    This information is not applicable. As stated above, this device does not utilize a "training set" in the context of an AI/ML model.


    K Number: K102556
    Date Cleared: 2010-10-07 (30 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Device Name: VOLPARA, VERSION 1.3

    Intended Use

    Volpara is a software application intended for use with digital mammography systems. Volpara calculates volumetric breast density as a ratio of fibroglandular tissue and total breast volume estimates. Volpara provides these numerical values for each image to aid radiologists in the assessment of breast tissue composition. Volpara produces adjunctive information. It is not an interpretive or diagnostic aid. Volpara is a software application which runs on Windows or Linux based computers.

    Device Description

    Volpara™ analyzes raw ("for processing") digital mammograms in a fully automated, volumetric fashion and produces a quantitative assessment of breast composition, namely the volume of fibroglandular tissue in cubic centimeters (cm³), the volume of breast tissue in cm³, and their ratio, volumetric breast density. Volpara v1.3 handles DICOM files as input. Volpara v1.3 has been built and tested on Windows XP and Linux. Volpara software is a component which accepts as input digital mammography images along with associated calibration data. The software processes each image according to proprietary algorithms. It provides measures of the volume of fibroglandular tissue, the volume of the breast, and breast density. The software does not perform image display but outputs to the console.
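    Volpara's segmentation and tissue-volume estimation steps are proprietary, but the headline output described above is simply the ratio of the two volume estimates. A minimal sketch of that final step (the function name and example volumes are hypothetical):

```python
def volumetric_breast_density(fibroglandular_cm3: float, breast_cm3: float) -> float:
    """Volumetric breast density: fibroglandular volume divided by total breast volume."""
    if breast_cm3 <= 0:
        raise ValueError("total breast volume must be positive")
    return fibroglandular_cm3 / breast_cm3

# Example: 60 cm^3 of fibroglandular tissue in a 500 cm^3 breast -> 12% density
print(f"{volumetric_breast_density(60.0, 500.0):.0%}")
```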

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the Volpara Imaging Software, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text describes several verification and validation tests, implying that acceptance criteria were met for each, but it does not explicitly list numerical acceptance criteria. Instead, it states that "All verification and validation testing was successful in that established acceptance criteria was met for all of the tests conducted."

    | Acceptance Criteria (Implicit) | Reported Device Performance (Implied) |
    |---|---|
    | Verification Bench Testing: | |
    | 1. Volpara measurements compared to known values of standardized and calibrated breast phantoms. | Test successful; Volpara measurements met established acceptance criteria when compared to known values from phantoms. |
    | 2. Volpara results compared with BI-RADS scores from MQSA qualified radiologists for X-ray images. | Test successful; Volpara results showed agreement with BI-RADS scores provided by MQSA qualified radiologists, meeting acceptance criteria for this comparison. |
    | 3. Volpara estimates of fibroglandular tissue compared with 3D breast MRI data for X-ray images. | Test successful; Volpara's fibroglandular tissue estimates showed acceptable correlation or agreement with 3D breast MRI data. |
    | 4. Volpara breast density results compared with expected decrease in breast density with age in substantial datasets. | Test successful; Volpara's results aligned with the known physiological decrease in breast density with age in large datasets. |
    | 5. Volpara results for left and right breasts and CC and MLO views compared to confirm similarity. | Test successful; Volpara consistently produced similar results across different views (CC, MLO) and between left and right breasts, indicating robustness and consistency. |
    | 6. Volpara results compared for the same woman imaged on GE and Hologic systems one year apart, to confirm similarity. | Test successful; Volpara provided similar results (within acceptance criteria) for the same individual when imaged on different mammography systems (GE and Hologic) over a one-year period, demonstrating inter-system consistency over time. |
    | Clinical Validation Testing: | |
    | 1. Beta site testing to assess the ability of physicians to successfully integrate the software into existing systems and assess usability for target users. | Test successful; physicians were able to successfully integrate and use the software in existing systems, indicating good usability and integration capabilities. |
    | 2. Beta site testing to collect minimum, average, and maximum Volpara breast densities and compare these to other existing databases. | Test successful; Volpara's breast density measurements (min, avg, max) were within acceptable ranges or demonstrated comparison with existing databases, meeting established acceptance criteria. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The text mentions "substantial datasets" for several tests (e.g., comparison with age, left/right and CC/MLO views, GE/Hologic system comparison). However, it does not provide specific numerical sample sizes for any of the test sets.
    • Data Provenance: The images used for Verification and Validation testing were acquired from detectors manufactured by both GE and Hologic. The country of origin of the data is not specified. The studies appear to be retrospective as they involve existing images and data (e.g., images with existing BI-RADS scores, 3D breast MRI data, and data where women were imaged one year apart).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • For the comparison with BI-RADS scores, ground truth was established by "a MQSA qualified radiologist." It does not specify how many radiologists were involved, only stating "a radiologist" in the singular.
    • No specific qualifications beyond "MQSA qualified radiologist" are provided (e.g., years of experience).

    4. Adjudication Method for the Test Set

    • For the comparison with BI-RADS scores, it just states "a MQSA qualified radiologist" provided the score, implying no adjudication for this specific ground truth.
    • For other tests (e.g., phantoms, MRI, age correlation, left/right breast comparison), the "ground truth" seems to be objective measurements (phantoms, 3D MRI) or established medical knowledge (density change with age) rather than expert consensus requiring adjudication.
    • The text does not mention any expert adjudication methods (e.g., 2+1, 3+1) for establishing ground truth on any of the test sets.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned as performed in the provided text.
    • The device is stated to "aid radiologists in the assessment of breast tissue composition" and "produces adjunctive information," but the studies described focus on the device's accuracy and consistency in calculating volumetric breast density rather than its impact on human reader performance, either with or without AI assistance. Therefore, no effect size for human reader improvement is provided.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone performance assessment was conducted through the "Verification Bench testing." These tests evaluated Volpara's outputs against objective measures (phantoms, 3D MRI), established clinical knowledge (density and age), and consistency checks (left/right, CC/MLO views, different systems over time). The software's output is numerical values, and these tests directly assess the algorithm's accuracy in calculating these values without a human-in-the-loop for interpretation of the device's output.

    7. The Type of Ground Truth Used

    The ground truth types varied depending on the specific test:

    • Known values from standardized and calibrated breast phantoms.
    • BI-RADS scores from an MQSA qualified radiologist.
    • 3D breast MRI data (for fibroglandular tissue estimates).
    • Expected and known decrease in breast density with age (established medical knowledge).
    • Consistency across different views and breasts (internal consistency checks).

    8. The Sample Size for the Training Set

    • The provided text does not mention any specific sample size for a training set. The descriptions focus on the testing of the software.

    9. How the Ground Truth for the Training Set Was Established

    • Since no information about a training set or its sample size is provided, there is also no information on how its ground truth was established. The document describes verification and validation of the developed software.

    K Number: K080951
    Date Cleared: 2008-05-29 (56 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Device Name: 19-INCH (48CM) 1.3M COLOR LCD MONITOR CDL1909A

    Intended Use

    19-inch (48 cm) 1.3M Color LCD Monitor CDL1909A is to be used in displaying and viewing medical images for diagnosis by trained medical practitioners. It is not meant to be used in digital mammography.

    Device Description

    CDL1909A is a 19-inch (48 cm) 1.3M color LCD monitor with a multi-scanning function supporting resolutions from VGA (640 x 400) to SXGA (1280 x 1024). It is also compliant with the VESA standard display modes.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a medical display monitor, the 19-inch (48 cm) 1.3M Color LCD Monitor CDL1909A. However, it does not contain the detailed acceptance criteria or the study data that typically proves a device meets such criteria for diagnostic performance of an AI or image-processing algorithm.

    Instead, this document focuses on establishing substantial equivalence to a predicate device (CDL1904A, K051403) for regulatory clearance. The key information provided is about the device itself, its intended use, and the regulatory classification.

    Therefore, most of the requested information cannot be extracted from the given text.

    Here's what can be inferred or stated based on the provided text, and what cannot:

    1. A table of acceptance criteria and the reported device performance

    • Cannot be provided. The document does not define specific performance acceptance criteria for image quality or diagnostic accuracy in the way an AI or image processing algorithm would. As a display monitor, its "performance" is implicitly tied to its ability to accurately render images, which is typically assessed during manufacturing and quality control according to industry standards for medical displays, but not detailed here as specific acceptance criteria for a 510(k) submission in this format.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Not applicable/Cannot be provided. This document does not describe a performance study involving a test set of medical data for diagnostic purposes. It's a submission for a monitor, not an algorithm that interprets medical data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not applicable/Cannot be provided. No ground truth establishment activity is described for this monitor.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not applicable/Cannot be provided. No test set or adjudication process is described for this monitor.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No. This is a medical display monitor, not an AI or image processing algorithm intended to assist human readers. Therefore, an MRMC comparative effectiveness study involving AI assistance would not be part of its submission.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • No. This is a medical display monitor, not a standalone algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Not applicable/Cannot be provided. No ground truth is mentioned or relevant to the regulatory submission for a display monitor.

    8. The sample size for the training set

    • Not applicable/Cannot be provided. This device is a monitor, not an AI algorithm requiring a training set.

    9. How the ground truth for the training set was established

    • Not applicable/Cannot be provided. No training set or ground truth establishment is relevant to this device.

    In summary: The provided 510(k) summary is for a medical display monitor and focuses on its substantial equivalence to a predicate device. It does not include information about diagnostic performance studies, AI algorithms, or the detailed acceptance criteria and study data typically associated with such criteria for an AI-powered diagnostic device. The "study" here is implicitly the demonstration of functional equivalence and adherence to relevant standards for medical displays (which are not explicitly detailed in this summary).


    K Number: K072517
    Date Cleared: 2007-09-26 (19 days)
    Product Code:
    Regulation Number: 892.1560
    Reference & Predicate Devices:
    Device Name: ILAB ULTRASOUND IMAGING SYSTEM, VERSION 1.3

    Intended Use

    The iLab™ Ultrasound Imaging System is intended for ultrasound examinations of intravascular pathology. Intravascular ultrasound is indicated in patients who are candidates for transluminal interventional procedures such as angioplasty and atherectomy.

    Device Description

    The iLab™ Ultrasound Imaging System is designed for real-time viewing of intravascular anatomy and is intended to be a basic diagnostic tool for imaging and evaluation of patients who are candidates for transluminal procedures. The iLab™ System consists of two compact PC units (one for Image Processing and one for Data Acquisition) and up to two displays (one primary and an optional secondary). The imaging and processing PCs are used during an intravascular procedure; at the end of the IVUS procedure, the processing PC supports archiving of the images obtained during the procedure. The processing PC converts the native iLab images into DICOM-format images prior to archiving to removable media such as a CD, DVD, or removable hard disk cartridge. Images can also be archived to a DICOM network server. The iLab™ System is available in two configurations: a Cart-based Configuration and an Installed Configuration.
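    The summary states only that native iLab images are converted to DICOM before archiving and says nothing about how the export is implemented. As a hedged illustration of what wrapping a single grayscale frame as a DICOM Secondary Capture object involves, the sketch below uses the pydicom library; the SOP class choice, attribute values, and function name are assumptions, not Boston Scientific's implementation.

```python
import datetime
import numpy as np
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

def frame_to_dicom(frame: np.ndarray, path: str) -> None:
    """Wrap one 8-bit grayscale frame as a DICOM Secondary Capture file."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = FileDataset(path, {}, file_meta=meta, preamble=b"\x00" * 128)
    ds.SOPClassUID = meta.MediaStorageSOPClassUID
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.Modality = "US"  # intravascular ultrasound frame
    ds.StudyInstanceUID = generate_uid()
    ds.SeriesInstanceUID = generate_uid()
    ds.ContentDate = datetime.date.today().strftime("%Y%m%d")

    # Pixel description for an 8-bit monochrome frame
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.Rows, ds.Columns = frame.shape
    ds.BitsAllocated = 8
    ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = frame.astype(np.uint8).tobytes()

    ds.is_little_endian = True
    ds.is_implicit_VR = False
    ds.save_as(path, write_like_original=False)

frame_to_dicom(np.zeros((512, 512), dtype=np.uint8), "ivus_frame.dcm")
```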

    AI/ML Overview

    The document provided is a 510(k) Summary for the Boston Scientific iLab™ Ultrasound Imaging System. It describes the device, its intended use, and comparison to a predicate device. The document details various non-clinical testing performed, including software and hardware verification and validation efforts. However, it does not contain any information about clinical studies with human participants, expert review of data, or establishment of ground truth in the way typically expected for AI/ML device evaluations.

    The testing described is primarily focused on engineering verification and validation of the system's components and software against predefined requirements, rather than a performance study involving a test set with established ground truth.

    Therefore, many of the requested elements for acceptance criteria and study details cannot be extracted from this document, as they are not relevant to the type of submission provided.

    Here's a breakdown of what can be inferred or explicitly stated based on the provided text, and what cannot:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria (Inferred from testing types) | Reported Device Performance |
    |---|---|
    | Non-clinical electrical safety met performance requirements | Met or exceeded performance requirements |
    | Non-clinical acoustic output safety met performance requirements | Met or exceeded performance requirements |
    | Integrated Installation Configuration Option EMC compliance | No new external testing for EMC was required |
    | Software risk mitigations effective | FMEA determined risk mitigations effective |
    | Software unit and system level acceptance criteria met | All requirements in Software Requirements Specifications verified |
    | Hardware risk mitigations effective | FMEA determined risk mitigations effective |
    | Video interface requirements for external monitors defined, verified, and validated | Supported specified imaging medical device vendors |
    | Customer acceptance of image quality with external monitors | On-site validation in progress at time of submission |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Not applicable. The document describes software verification and validation, and hardware testing, not a clinical performance study with a "test set" in the context of patient data.
    • Data Provenance: Not applicable for a clinical test set. The verification and validation involved "multiple configurations of PC systems" and "production equivalent" systems. On-site validation was planned for "3 or more customer sites."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of Experts: Not applicable. No clinical expert review process is described for establishing ground truth for a test set.
    • Qualifications of Experts: Not applicable. The submission states that the "software validation effort will be performed by testers with iLab clinical experience," but these are testers performing engineering validation, not clinical experts establishing ground truth for a performance study.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Adjudication Method: Not applicable. No clinical test set or adjudication process is described.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without

    • MRMC Study: No. This document describes a traditional 510(k) for an ultrasound imaging system update, not an AI/ML device requiring an MRMC study.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone Performance: Not applicable. This is an ultrasound imaging system, not an algorithm-only device. The testing focuses on the system's functional integrity.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Type of Ground Truth: Not applicable. The "ground truth" in this context refers to the system meeting its engineering specifications and requirements, as outlined in "Software Requirements Specifications" and "Product and Marketing requirements." No clinical "ground truth" (e.g., pathology, outcomes) for diagnostic accuracy is mentioned.

    8. The sample size for the training set

    • Training Set Sample Size: Not applicable. This is not an AI/ML device that requires a training set in that context.

    9. How the ground truth for the training set was established

    • Training Set Ground Truth: Not applicable.

    K Number
    K072066
    Date Cleared
    2007-08-14

    (18 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    1.3M MONOCHROME LCD MONITOR MDL1908A

    Intended Use

    19-inch (48cm) 1.3M Monochrome LCD Monitor MDL1908A is to be used in displaying and viewing medical images for diagnosis by trained medical practitioners. It is not meant to be used in digital mammography.

    Device Description

    MDL1908A is a 19-inch (48cm) 1.3 megapixel Monochrome LCD monitor that supports DVI video signal and provides UXGA (1280 x 1024) resolution for both landscape and portrait display.

    AI/ML Overview

    The provided text describes a 510(k) submission for a medical display monitor, not a device that processes or analyzes medical images using AI or other algorithms. Therefore, many of the requested criteria such as expert consensus, ground truth, training/test sets, and AI assistance are not applicable to this type of device.

    This document focuses on establishing substantial equivalence to a predicate device (another medical monitor) based on physical and performance characteristics typically assessed for displays.

    Here's a breakdown of the relevant information from the provided text, and an explanation of why other criteria are not applicable:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state "acceptance criteria" in a tabulated format suitable for direct comparison, as a submission for a diagnostic device might. Instead, it implies that the device meets the performance characteristics expected of a 1.3M monochrome LCD monitor for diagnostic medical image viewing, and that these characteristics are substantially equivalent to those of its predicate device.

    The characteristics of the device itself are:

    Acceptance Criteria (Implied) | Reported Device Performance (Characteristics)
    Display Type | 19-inch (48cm) 1.3M Monochrome LCD Monitor
    Resolution | UXGA (1280 x 1024)
    Signal Input | Supports DVI video signal
    Display Modes | Both landscape and portrait display
    Intended Use | Displaying and viewing medical images for diagnosis by trained medical practitioners (excluding digital mammography)
    Substantial Equivalence | Shares same characteristics with predicate device ME183L (MDL1812A) (K030272) except for a board, LCD panel, and power supply

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    Not applicable. This is a display monitor, not a diagnostic algorithm or device that analyzes patient data. There is no "test set" of patient data in the context of conventional diagnostic testing for this device. The evaluation would involve technical performance tests of the monitor itself (e.g., brightness, contrast, uniformity, resolution, color accuracy if applicable for color monitors, etc.), not a set of medical images for diagnostic accuracy.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable. There is no "ground truth" to establish for a display monitor in the same way there would be for a diagnostic tool. The ground truth for a display is its adherence to technical specifications and display standards (e.g., DICOM Part 14 Grayscale Standard Display Function).
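
    The DICOM Part 14 Grayscale Standard Display Function (GSDF) referenced above defines a target luminance for each just-noticeable-difference (JND) index, and display conformance testing compares measured luminances against that curve. The sketch below is a minimal illustration and not part of the submission; the polynomial coefficients are assumed to match those published in DICOM PS3.14 and should be verified against the standard before any real use.

```python
# Minimal sketch of the DICOM PS3.14 GSDF model: target luminance as a function
# of JND index. Coefficients assumed from the published standard (verify there).
import math

# Numerator coefficients (a, c, e, g, m) by power of ln(j);
# denominator coefficients (1, b, d, f, h, k) by power of ln(j).
_COEF_NUM = (-1.3011877, 8.0242636e-2, 1.3646699e-1, -2.5468404e-2, 1.3635334e-3)
_COEF_DEN = (1.0, -2.5840191e-2, -1.0320229e-1, 2.8745620e-2, -3.1978977e-3, 1.2992634e-4)

def gsdf_luminance(jnd: float) -> float:
    """Target luminance in cd/m^2 for a JND index (valid roughly for 1..1023)."""
    x = math.log(jnd)  # natural log, per the GSDF definition
    num = sum(c * x**i for i, c in enumerate(_COEF_NUM))
    den = sum(c * x**i for i, c in enumerate(_COEF_DEN))
    return 10.0 ** (num / den)

if __name__ == "__main__":
    # Example: print the target luminance at a few JND indices spanning the curve.
    for j in (1, 128, 512, 1023):
        print(f"JND {j:4d}: {gsdf_luminance(j):9.3f} cd/m^2")
```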

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable, for the same reasons as points 2 and 3.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without

    Not applicable. This device is not an AI-assisted diagnostic tool. No MRMC study would be performed for a medical display monitor in this context. Its purpose is to accurately display images, not to assist in interpreting them directly or to perform AI analysis.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not applicable. This device is purely a display monitor; there is no embedded algorithm for standalone performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable. The "ground truth" for a monitor would be its technical specifications and compliance with relevant industry standards (e.g., DICOM Part 14 for grayscale display functions, NEMA display requirements).

    8. The sample size for the training set

    Not applicable. This device does not use a training set as it's a hardware display and not a learning algorithm.

    9. How the ground truth for the training set was established

    Not applicable. (See point 8).

    In summary, the 510(k) for a medical display monitor like the MDL1908A focuses on demonstrating that the device's technical specifications and performance are adequate for its intended use (displaying medical images for diagnosis) and that it is substantially equivalent to a legally marketed predicate device. The criteria typically associated with diagnostic algorithms (AI, ground truth, reader studies) are not relevant here.

