
Found 5 results

510(k) Data Aggregation

    K Number: K252500
    Device Name: CARA System
    Manufacturer:
    Date Cleared: 2026-02-20 (196 days)
    Product Code:
    Regulation Number: 892.1650
    Age Range: 18 - 120
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    The Cara System is intended for preplanning and guidance of medical interventions in an area known to contain or be adjacent to the cardiac conduction system, such as percutaneous or surgical procedures, for example, transcatheter aortic valve replacement (TAVR), as well as medical procedures where the physician desires to deliver therapy to the patient's cardiac conduction system or to a targeted location within it (CSP).

    The Cara System uses computed tomography angiography (CTA)-based and user manually marked landmarks to identify the cardiac conduction axis and generate a three-dimensional (3D) map of the individual patient's cardiac conduction system. The system also overlays the anatomical location of the cardiac conduction system (generated by the Cara Metis Simulator using pre-procedure CT data) onto live fluoroscopic images.

    The software utilizes AI/ML algorithms to provide OCR detection, automated segmentation of anatomical structures, and detection of catheters.

    The CARA System is intended for use in adult patients (18 years of age and older).

    Device Description

    The CARA System is a medical device comprising two integrated functions. The CARA System device components include the CARA Metis Simulator and the CARA Atlas Navigator. Both components provide diagnostic imaging software and hardware functions that identify the personalized anatomical location of the cardiac conduction system in relation to other heart anatomies based on a patient's computed tomographic angiography (CTA). The former is intended for preplanning (1) a medical intervention in an area known to contain or be adjacent to the cardiac conduction system or (2) a medical procedure(s) where the physician desires to deliver therapy to the patient's cardiac conduction system. The latter identifies the personalized anatomical location of the cardiac conduction system overlaid on real-time, intra-procedural, fluoroscopic imaging and provides guidance during interventional structural heart disease procedures in an area known to contain or be adjacent to the cardiac conduction system or where the physician desires to deliver therapy to the patient's cardiac conduction system.

    The CARA Metis Simulator uses computed tomography angiography (CTA)-based landmarks to accurately identify the cardiac conduction axis and run a simulation generating the personalized three-dimensional (3D) map of the individual patient's cardiac conduction system.

    This 3D map is then used by the clinical operator to plan any procedure that must either target the cardiac conduction system (as in direct pacing) or avoid it (as in structural heart disease interventions). As described below, this technology is based on methodical translational studies investigating the 3D location of the cardiac conduction system relative to cardiac structures visible by clinical imaging, with initial assessment and validation in the clinical setting.

    The CARA Atlas Navigator is designed to overlay the personalized anatomical location of the cardiac conduction system (generated by the Cara Metis Simulator using pre-procedure CT data) onto live fluoroscopic images. This functionality assists clinicians during fluoroscopy-guided interventional heart procedures.

    The Cara Atlas Navigator consists of both software and hardware components:

    1. Fluoroscopy Splitter (F-Splitter) – This device splits the live fluoroscopy image for integration with the CARA System.

    2. CARA Box – A standard workstation that receives live fluoroscopy images from the Fluoroscopy Splitter and enhances them by adding anatomical landmarks. The CARA Box acts as the system's central processing unit, handling data analysis and image processing. It is equipped with user interface devices, such as a mouse and keyboard.

    3. CARA Monitor – Displays the enhanced fluoroscopy images, including the analysis performed by the CARA Box. This monitor is typically located in the control room. The same output is also projected onto the main display in the operating room.

    The CARA System utilizes a specific on-premises workflow to ensure data integrity and clinical accuracy. Prior to physician use, a certified CARA Clinical Expert (CCE) must be physically present on-site. The CCE logs into the CARA Box workstation to prepare the CARA Metis pre-planning process. This includes initiating the automated segmentation, verifying the anatomical output, annotating landmarks, and saving the results to the local storage.

    The physician subsequently logs into the same workstation using distinct credentials to load, review, and confirm the pre-planned case. This workflow ensures that all generated outputs are professionally prepared and verified before clinical review.
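    The two-role, on-premises workflow above (CCE preparation followed by physician confirmation under distinct credentials) can be sketched as a simple state model. This is a hypothetical illustration only; the class, role names, and states below are not from the 510(k):

    ```python
    from dataclasses import dataclass

    @dataclass
    class PrePlannedCase:
        """Hypothetical model of the two-role CARA pre-planning workflow."""
        segmented: bool = False
        verified: bool = False
        confirmed: bool = False

        def prepare(self, role: str) -> None:
            # Only the on-site CARA Clinical Expert may run segmentation,
            # verify the anatomical output, and save the case.
            if role != "CCE":
                raise PermissionError("only a CARA Clinical Expert may prepare a case")
            self.segmented = True
            self.verified = True

        def confirm(self, role: str) -> None:
            # The physician loads, reviews, and confirms under separate credentials;
            # confirmation is blocked until the case has been prepared and verified.
            if role != "physician":
                raise PermissionError("only the physician may confirm a case")
            if not self.verified:
                raise RuntimeError("case must be prepared and verified first")
            self.confirmed = True
    ```

    The ordering constraint (no confirmation before verified preparation) is the point of the sketch: it mirrors the document's statement that all generated outputs are professionally prepared and verified before clinical review.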

    The CARA System utilizes AI/ML algorithms to provide OCR (Optical Character Recognition), automated segmentation, and device tracking:

    • OCR detection – automatically extracts metadata from the live feed of the fluoroscopy machine (e.g., C-arm position, focal distance).
    • Segmentation – deep learning models automatically generate anatomical segmentations of the heart chambers and aorta.
    • Device detection – a segmentation model detects the distal tips of specific interventional devices (e.g., pigtail catheters, CS catheters, pacing leads) in the fluoroscopic image to support real-time tracking and overlay.

    AI-based segmentations are provided to assist the workflow but may contain inaccuracies. The AI output should not be used as the sole basis for clinical decision-making. Clinical oversight is mandatory.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the CARA System, based on the provided FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance Study for the CARA System

    The CARA System's performance was evaluated through non-clinical, AI/ML validation, and retrospective clinical performance testing to demonstrate substantial equivalence to the predicate device, Cydar EV (K212442).

    1. Acceptance Criteria and Reported Device Performance

    Feature / Metric | Acceptance Criteria | Reported Device Performance
    Non-Clinical Performance
    CT-to-fluoroscopy registration error | Mean ≤ 2.0 mm; max ≤ 3.0 mm | Mean registration error ≤ 2.0 mm; maximum error ≤ 3.0 mm
    System latency (95% upper bound) | ≤ 133 ms | ≤ 133 ms
    Image fidelity (PSNR) | ≥ 35 dB | ≥ 35 dB
    Image fidelity (SSIM) | ≥ 0.95 | ≥ 0.95
    AI/ML Performance
    OCR error rate | 0 errors (≤ 5% upper 95% CI bound) | 0 failures observed
    Anatomical segmentation (cardiac chambers), Dice similarity coefficient (DSC) | ≥ 0.85 | All evaluated structures met criteria
    Anatomical segmentation (cardiac chambers), average surface distance (ASD) | ≤ 1.5 mm | All evaluated structures met criteria
    Aortic segmentation (DSC) | ≥ 0.85 | Mean DSC = 0.962
    Catheter & lead detection, median distal tip localization error | ≤ 0.9 mm | All evaluated catheter types met criteria
    Clinical Performance
    TAVR cohort: association between CARA-visualized CSA and permanent pacemaker implantation (PPI) rates | Association consistent with clinical expectations | 11.2% PPI with implantation above the CARA-visualized CSA vs. 33.9% when not above
    CSP cohort: association between CARA-visualized LBBP and LVEF improvement | Association consistent with clinical expectations | +11.2% LVEF improvement with pacing at CARA-identified LBBP vs. +0.3% for non-specific septal pacing
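    The segmentation and image-fidelity criteria in the table rest on standard metric definitions. As a minimal illustrative sketch (plain Python over flat pixel/mask lists, not the CARA validation code), the Dice similarity coefficient and PSNR can be computed as:

    ```python
    import math

    def dice(a, b):
        """Dice similarity coefficient between two flat binary masks:
        2|A ∩ B| / (|A| + |B|)."""
        inter = sum(1 for x, y in zip(a, b) if x and y)
        total = sum(a) + sum(b)
        return 2.0 * inter / total if total else 1.0

    def psnr(ref, test, max_val=255.0):
        """Peak signal-to-noise ratio in dB between two flat grayscale images:
        10 * log10(MAX^2 / MSE)."""
        mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
        return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
    ```

    Higher is better for both: a DSC of 1.0 means the predicted and ground-truth masks coincide exactly, and the ≥ 35 dB PSNR criterion bounds how much the splitter/overlay pipeline may degrade the fluoroscopy image.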

    2. Sample Size and Data Provenance for AI/ML Test Set

    • OCR Test Set: 61 fluoroscopic images (retrospective, multi-site clinical datasets).
    • Anatomical Segmentation (Cardiac Chambers) Test Set: 50 retrospective CT scans (retrospective, multi-site clinical datasets).
    • Aortic Segmentation Test Set: 480 fluoroscopic images (retrospective, multi-site clinical datasets).
    • Catheter & Lead Detection Test Set: 2,139 fluoroscopic images (retrospective, multi-site clinical datasets).

    The specific country of origin for the retrospective, multi-site clinical datasets is not detailed in the provided information.
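    For the OCR criterion ("0 errors, ≤ 5% upper 95% CI bound"), the document does not state which interval method was used, but with zero observed failures the exact one-sided Clopper-Pearson upper bound reduces to 1 − α^(1/n), since the bound p solves (1 − p)^n = α. A quick sketch shows the 61-image test set is just large enough to meet a ≤ 5% bound:

    ```python
    def upper_ci_zero_failures(n, alpha=0.05):
        """Exact one-sided upper (1 - alpha) confidence bound on the error rate
        when 0 failures are observed in n trials (Clopper-Pearson)."""
        return 1.0 - alpha ** (1.0 / n)

    # With the 61-image OCR test set and 0 observed failures,
    # the upper 95% bound is roughly 4.8%, just under the 5% criterion.
    bound = upper_ci_zero_failures(61)
    ```

    By the same formula, at least 59 error-free trials are needed before the upper bound drops below 5%, which is consistent with a deliberately sized 61-image set.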

    3. Number and Qualifications of Experts for Ground Truth

    • Anatomical Segmentation (Cardiac Chambers) Ground Truth: Manual segmentation by trained technologists, adjudicated by a U.S. Board-Certified Interventional Cardiologist.
    • Aortic Segmentation Ground Truth: Manual contour annotation, adjudicated by a U.S. Board-Certified Interventional Cardiologist.
    • Catheter & Lead Detection Ground Truth: Manual distal tip annotation, adjudicated by a U.S. Board-Certified Interventional Cardiologist.
    • OCR Ground Truth: Manual verification of extracted parameters (no specific expert qualifications mentioned beyond "manual verification").

    The number of experts (U.S. Board-Certified Interventional Cardiologists) used for adjudication is not specified (e.g., whether it was one individual or a panel).

    4. Adjudication Method for the Test Set

    The document states only that ground truth was "adjudicated by a U.S. Board-Certified Interventional Cardiologist." This implies a single-expert review of the preliminary ground truth established by trained technologists/manual annotators; it does not indicate a 2+1 or 3+1 consensus method.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was mentioned in the provided document. The study described focuses on the device's standalone performance and a retrospective clinical correlation, not on comparing human reader performance with and without AI assistance.

    6. Standalone Performance Study

    Yes, a standalone (algorithm only without human-in-the-loop performance) study was done for the AI/ML algorithms. The "AI/ML Performance Summary" table directly details the performance of the OCR, anatomical segmentation, and catheter/lead detection algorithms on independent test datasets, measured against ground truth.

    7. Type of Ground Truth Used

    • AI/ML Algorithms: Expert consensus (adjudication by a U.S. Board-Certified Interventional Cardiologist) applied to initial manual annotations by trained technologists for anatomical segmentations and catheter/lead detection. Manual verification for OCR.
    • Clinical Performance Data: Retrospective clinical outcomes data (Permanent Pacemaker Implantation rates, Left Ventricular Ejection Fraction improvement) associated with the CARA-visualized Conduction System Axis and Left Bundle Branch Pacing.

    8. Sample Size for the Training Set

    The document states, "Algorithms were trained using retrospective, multi-site clinical datasets," but does not specify the sample size used for the training set. It only mentions that "Training and test datasets were independent."

    9. How the Ground Truth for the Training Set was Established

    The document states, "Algorithms were trained using retrospective, multi-site clinical datasets." While it describes how ground truth was established for the validation/test set (manual segmentation by trained technologists with physician adjudication), it does not explicitly detail how the ground truth for the training set was established. It is implied that similar methods would have been used, but it's not directly stated.


    K Number: K110869
    Device Name: CARA
    Manufacturer:
    Date Cleared: 2011-07-14 (107 days)
    Product Code:
    Regulation Number: 892.2050
    Age Range: All
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    CARA is a comprehensive software platform intended for importing, processing, and storing color fundus images, as well as visualization of original and enhanced images, through computerized networks.

    Device Description

    CARA is a software platform that collects, enhances, stores, and manages color fundus images. Through the internet, CARA software collects and manages color fundus images from a range of approved computerized digital imaging devices. CARA enables a real-time review of retinal image data (both original and enhanced) from an internet-browser-based user interface to allow authorized users to access and view data saved in a centralized database. The system utilizes state-of-the-art encryption tools to ensure a secure networking environment.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a device named CARA, a software platform for managing color fundus images.

    Here's an analysis based on the provided information, addressing your requested points:

    1. Table of Acceptance Criteria and Reported Device Performance

    The submission does not specify quantitative acceptance criteria or provide a table of device performance against such criteria. The document states "The results of performance and software validation and verification testing demonstrate that CARA performs as intended and meets the specifications. This supports the claim of substantial equivalence," but the specific metrics are not detailed.

    2. Sample Size Used for the Test Set and Data Provenance

    No specific test set or sample size for evaluating performance is mentioned. The submission states, "Since the CARA system currently is not a stand-alone tool, does not make any diagnostic claims and does not replace the existing retinal images or the treating physician, the sponsor believes that the software testing and validation presented in this 510(k) are sufficient and that there is no need for a clinical trial." This indicates that no human-read test set data was used to demonstrate performance. The country of origin for any internal software testing data is not specified, but the applicant's address is in Canada.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    Not applicable. No clinical test set requiring expert ground truth was used for this 510(k) submission.

    4. Adjudication Method for the Test Set

    Not applicable. No clinical test set requiring adjudication was used.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No MRMC comparative effectiveness study was done. The device is not making diagnostic claims and "does not replace the existing retinal images or the treating physician," therefore, a study on human reader improvement with or without AI assistance was not deemed necessary by the sponsor.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The CARA system is explicitly stated as "not a stand-alone tool" and "does not make any diagnostic claims." The document does not describe any standalone performance testing of an algorithm making diagnostic claims. The "software testing and validation" mentioned are likely related to functional performance, security, and compatibility as a picture archiving and communication system, not diagnostic accuracy.

    7. The Type of Ground Truth Used

    For the purposes of this 510(k), which focuses on the device as a Picture Archiving and Communications System, the concept of "ground truth" for diagnostic accuracy (e.g., pathology, outcomes data) is not applicable. The system's "performance" is based on its ability to import, process, store, and visualize fundus images as intended by its specifications.

    8. The Sample Size for the Training Set

    Not applicable. As CARA is described as a software platform for managing and enhancing images, not a diagnostic AI algorithm, there is no mention of a "training set" in the context of machine learning model development.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no mention of a training set for a machine learning model.


    K Number: K082856
    Date Cleared: 2008-10-15 (16 days)
    Product Code:
    Regulation Number: N/A
    Age Range: All
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    CARAPASTE® Oral Wound Dressing forms a protective layer over the oral mucosa by adhering to the mucosal surface, which allows it to protect against further irritation and relieve pain. The paste may be used in the management of mouth lesions of all types, including aphthous ulcers, stomatitis, mucositis, minor lesions, chafing, traumatic ulcers, abrasions caused by braces and ill-fitting dentures, and lesions associated with oral surgery.

    Device Description

    CARAPASTE® Oral Wound Dressing, Sucralfate HCl Topical Paste, is an amorphous hydrogel paste formed by the controlled reaction of sucralfate with a limited quantity of hydrochloric acid. The amorphous hydrogel paste formed by this reaction binds reversibly to wounds and is intended to form a protective film that covers lesions where gastric acid or local wound bed acidity is not available or is inconsistently present. CARAPASTE® Oral Wound Dressing may be administered directly to an accessible oral wound to provide an adherent physical covering of the wound bed. Although prepared by reaction of sucralfate with strong acid, the polymerized sucralfate self-buffers to a pH of approximately 3.5.

    AI/ML Overview

    The provided document is a 510(k) summary for the CARAPASTE® Oral Wound Dressing, K082856. This type of submission relies on demonstrating substantial equivalence to a legally marketed predicate device, rather than requiring the submission of new clinical or performance data to establish safety and effectiveness.

    Therefore, the document does not contain information regarding a study that proves the device meets specific acceptance criteria based on its own performance. Instead, it asserts substantial equivalence based on technological characteristics and intended use.

    Here's a breakdown of the requested information based on the provided document:


    1. Table of Acceptance Criteria and Reported Device Performance

    This information is not provided in the 510(k) summary. For a 510(k) submission, the "acceptance criteria" are generally that the new device has substantially equivalent technological characteristics and intended use to a predicate device, and does not raise new questions of safety or effectiveness. Direct performance metrics are typically not required unless there are significant technological differences or new indications for use.

    Acceptance Criteria | Reported Device Performance
    Not specified directly in the document. The primary "acceptance criterion" for a 510(k) is demonstrating substantial equivalence to a predicate device in terms of intended use, technological characteristics, safety, and effectiveness. | CARAPASTE® Oral Wound Dressing forms a protective layer over the oral mucosa, adheres to the mucosal surface, relieves pain, and promotes wound healing of mouth lesions. (This describes its mechanism and intended effects, not quantitative performance data against specific criteria.)

    2. Sample size used for the test set and the data provenance

    This information is not applicable/not provided as this 510(k) submission does not include primary clinical studies with a test set. The submission relies on demonstrating substantial equivalence to a predicate device (Sucralfate HCl Topical Paste K043587) rather than presenting new performance data from a specific test set.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not applicable/not provided. There is no mention of a test set with ground truth established by experts in this 510(k) summary.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not applicable/not provided. There is no mention of a test set requiring adjudication in this 510(k) summary.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    This information is not applicable/not provided. The device is an oral wound dressing and does not involve AI or human readers for diagnostic interpretation.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done

    This information is not applicable/not provided. The device is a physical wound dressing and does not involve algorithms.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    This information is not applicable/not provided. The 510(k) summary does not describe a study that established a "ground truth" for the device's performance, as it relies on substantial equivalence.

    8. The sample size for the training set

    This information is not applicable/not provided. There is no mention of a training set as this is not an AI/machine learning device.

    9. How the ground truth for the training set was established

    This information is not applicable/not provided. There is no mention of a training set or ground truth establishment in this 510(k) summary.


    K Number: K961758
    Date Cleared: 1996-07-11 (69 days)
    Product Code:
    Regulation Number: N/A
    Age Range: All
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    Carrasyn™ wound dressings are either smooth, nonoily clear hydrogels or freeze-dried preparations of the same. They are supplied in either a liquid or dry state and are intended for the management of wounds.

    Device Description

    Carrasyn™ wound dressings are either smooth, nonoily clear hydrogels or freeze-dried preparations of the same. They are supplied in either a liquid or dry state and are intended for the management of wounds.

    AI/ML Overview

    The provided document describes the safety and effectiveness of Carrington's Carrasyn® wound dressings. It primarily focuses on demonstrating biocompatibility and some clinical observations. However, it does not include detailed acceptance criteria or a study designed to rigorously prove that the device meets specific performance metrics in the way that would typically be expected for a diagnostic or AI-based device's acceptance criteria.

    The information provided is more akin to a 510(k) premarket notification summary for a medical device, which generally focuses on demonstrating substantial equivalence to a predicate device and outlining safety and efficacy through biocompatibility and some clinical experience.

    Given this, I will interpret "acceptance criteria" as the overall goal of demonstrating safety and effectiveness as outlined in the summary, and "reported device performance" as the outcomes of the studies described.

    Here's an analysis based on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Interpreted) | Reported Device Performance
    Safety: device is not a primary dermal or eye irritant. | Biocompatibility studies: primary dermal irritation testing demonstrated it is not a primary dermal irritant; primary eye irritation testing demonstrated it is not a primary eye irritant.
    Safety: device does not cause adverse events. | Clinical experience (radiation dermatitis & diabetic ulcers): no adverse events mentioned. Clinical trial (aphthous ulcers, Carrington Patch™): no adverse events reported in either the randomized double-blind study or the open-label study.
    Effectiveness: improves wound healing / manages wounds as intended. | Clinical experience (radiation dermatitis & diabetic ulcers): evaluated "acceptability... to clinicians, to wound and skin appearance, and to the wound healing environment"; concluded "safe and effective for their intended use."
    Effectiveness: reduces discomfort (specifically for aphthous ulcers). | Randomized double-blind study: found to "reduce discomfort." Open-label study: found to "significantly reduce discomfort within 2 minutes."

    2. Sample Size Used for the Test Set and Data Provenance

    Due to the nature of the device (wound dressing, not AI/diagnostic), the concept of a "test set" in the context of an AI model doesn't directly apply. The document describes clinical studies that serve as evidence of safety and effectiveness.

    • Clinical Experience (Radiation Dermatitis & Diabetic Ulcers):
      • Sample Size: 4 patients with radiation dermatitis, 30 patients with diabetic ulcers.
      • Data Provenance: Not specified, but generally implies a prospective clinical observation.
    • The Carrington™ Patch - Randomized, Double-Blind Study (Aphthous Ulcers):
      • Sample Size: 60 healthy volunteer patients (30 in treatment group, 30 in control group).
      • Data Provenance: Not specified, but implies a prospective clinical trial.
    • The Carrington™ Patch - Open-Label Study (Aphthous Ulcers):
      • Sample Size: 30 healthy volunteer patients.
      • Data Provenance: Not specified, but implies a prospective clinical trial.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts

    • Biocompatibility Studies: These were laboratory tests conforming to GLP regulations using animal models. The "ground truth" (i.e., irritation levels) would be established by trained technicians/toxicologists following standard protocols. Specific numbers or qualifications are not provided but are inherent in GLP compliance.
    • Clinical Experience (Radiation Dermatitis & Diabetic Ulcers): "Clinicians" were involved in evaluating acceptability and wound healing. The number and specific qualifications (e.g., dermatologists, wound care specialists) are not specified.
    • Clinical Trials (Aphthous Ulcers): Patients themselves provided input on discomfort via diaries and adverse event reports. Clinicians would have conducted assessments but their number and specific qualifications are not detailed.

    4. Adjudication Method for the Test Set

    • Biocompatibility Studies: Not applicable in the sense of expert adjudication. Results were objectively measured based on irritation scores.
    • Clinical Experience (Radiation Dermatitis & Diabetic Ulcers): Adjudication method not described. It appears to be clinician observation without a formal multi-reader adjudication process.
    • Clinical Trials (Aphthous Ulcers): Patient diaries and adverse event reports were primary data sources. Clinicians would have overseen the study, but a specific adjudication method for their observations is not detailed. The randomized double-blind nature of one study partially addresses bias for the discomfort assessment.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, an MRMC comparative effectiveness study was not done. The document does not describe human readers interpreting images or data with and without AI assistance. This device is a wound dressing, not an AI diagnostic tool.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Not applicable. The device is a physical wound dressing, not an algorithm.

    7. The Type of Ground Truth Used

    • Biocompatibility Studies: Objective measurements of dermal and ocular irritation in animal models.
    • Clinical Experience (Radiation Dermatitis & Diabetic Ulcers): Clinician observations and assessments of wound/skin appearance and healing environment. Patient acceptability. This is a form of expert clinical assessment.
    • Clinical Trials (Aphthous Ulcers):
      • Discomfort: Patient-reported outcome (via diary), which is a subjective but direct measure of a patient's experience.
      • Adverse Events: Patient-reported and clinically observed events.

    8. The Sample Size for the Training Set

    Not applicable. As this is not an AI/ML device, there is no "training set" in the conventional sense. The "training" for the device's formulation likely involved laboratory research and development, but not data-driven machine learning.

    9. How the Ground Truth for the Training Set was Established

    Not applicable, as there is no training set for an AI/ML model for this device. The development process for the dressing would have involved standard chemical and material science techniques, preclinical testing, and potentially iterative formulation based on performance in those settings.


    K Number: K953118
    Device Name: CARACELL
    Manufacturer:
    Date Cleared: 1996-02-23 (235 days)
    Product Code:
    Regulation Number: 868.5830
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A
    Intended Use
    Device Description
    AI/ML Overview
