Search Results

Found 30 results

510(k) Data Aggregation

    K Number
    K251726
    Date Cleared
    2025-09-03

    (90 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    SignalNED System (Model RE)


    K Number
    K242306
    Date Cleared
    2024-09-04

    (30 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    SignalNED System (Model RE)

    Intended Use

    The SignalNED Device is intended to record and display Quantitative EEG (qEEG) (relative band power, e.g., alpha, beta, delta, theta), which is intended to help the user analyze the EEG. The SignalNED does not provide any diagnostic conclusion about the patient's condition. The device is intended to be used on adults by qualified medical and clinical professionals.

    The SignalNED is intended to be used in a professional healthcare environment.

    Device Description

    The SignalNED Model RE machine uses 10 patient electrodes (4 left, 4 right, 2 midline), which are used to form the 8 channels. The SignalNED machine requires the use of the SignalNED Sensor Cap, and the system includes the following components:

    • Portable EEG machine (Device)
    • Battery & External Battery Charger
    • SignalNED Sensor Cap
    • SignalNED Sensor Cap Cable

    The primary function of the SignalNED Model RE is to rapidly record EEG and derive the Quantitative EEG (qEEG) measurement of Relative Band Power for multiple bands (e.g., alpha, beta, theta) at each electrode. These measurements are intended to help the user analyze the underlying EEG. The SignalNED Model RE (client) achieves its intended use without relying on wireless connectivity. The SignalNED RE does not provide any diagnostic conclusion about the patient's condition.
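    The summary does not state how Relative Band Power is computed. As context only, the conventional definition divides the power in each band by the total signal power at each electrode; the sketch below assumes that definition, with illustrative band edges, sampling rate, and a Welch PSD estimate, none of which are taken from the submission.

```python
# Illustrative sketch of conventional relative band power (RBP); not the
# manufacturer's implementation. Band edges, sampling rate, and PSD settings
# are assumptions chosen for demonstration only.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_power(eeg, fs=250):
    """eeg: array of shape (n_channels, n_samples); returns {band: RBP per channel}."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)  # PSD per channel
    total = psd.sum(axis=-1)                                  # total power (bin width cancels in the ratio)
    return {name: psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=-1) / total
            for name, (lo, hi) in BANDS.items()}

# Example: 8 channels of 10 s of synthetic data sampled at 250 Hz
rbp = relative_band_power(np.random.randn(8, 2500))
```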

    AI/ML Overview

    This summary describes the acceptance criteria and the study that proves the SignalNED System (Model RE) meets those criteria, based on the provided FDA 510(k) summary.

    1. Table of Acceptance Criteria and Reported Device Performance

    Test | Acceptance Criteria | Reported Device Performance
    Lead Off Detection | Ability to detect disconnected electrodes. | All testing passed acceptance criteria.
    Signal Acquisition Noise Levels | Acceptable noise levels in signal acquisition. | All testing passed acceptance criteria.
    Software ADC Conversion Accuracy | Accuracy of software in Analog-to-Digital Converter (ADC) conversion. | All testing passed acceptance criteria.
    Quantitative Electroencephalogram (QEEG) | Accuracy of the QEEG Relative Band Power calculation. | All testing passed acceptance criteria.
    EC12:2020 Electrical Performance | Compliance with EC12:2020 electrical standards. | All testing passed acceptance criteria.
    Essential Performance Tests (IEC 80601-2-26) | Compliance with IEC 80601-2-26 essential performance requirements. | All testing passed acceptance criteria.
    Electrical Performance (IEC 60601-1, IEC 60601-1-2) | Compliance with IEC 60601-1 and IEC 60601-1-2. | All testing passed.
    Biocompatibility (ISO 10993-1, -5, -10, -23) | Compliance with ISO 10993 for Cytotoxicity, Sensitization, and Irritation (for limited contact, intact skin). | All testing passed.

    2. Sample Size Used for the Test Set and Data Provenance

    The provided document does not specify the sample sizes (e.g., number of subjects, number of EEG recordings) used for the non-clinical performance testing. It only states that "All testing passed acceptance criteria and details are contained in the test report." The data provenance (e.g., country of origin, retrospective or prospective) is also not detailed in this summary.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The provided document describes non-clinical performance testing (lead-off detection, noise levels, ADC accuracy, QEEG calculation accuracy, electrical performance, biocompatibility). These tests do not typically involve human experts establishing ground truth in the way a clinical study for diagnostic accuracy would. The ground truth for these tests would be established through defined technical specifications, measurement standards, and validated testing protocols. Therefore, information about the number and qualifications of experts for establishing ground truth is not applicable in this context.

    4. Adjudication Method for the Test Set

    As the performance testing described is non-clinical and based on technical specifications and standards, an adjudication method (like 2+1 or 3+1) used in clinical studies for discrepancies in expert readings is not applicable here. The acceptance criteria for each test inherently define the "ground truth" to which the device's performance is compared.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. without AI Assistance

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or described in the provided document. The SignalNED System is intended to record and display QEEG, which "is intended to help the user analyze the EEG." It explicitly states, "The SignalNED does not provide any diagnostic conclusion about the patient's condition." This indicates that the device is a tool for professional analysis rather than an AI-driven diagnostic aid for human readers. Therefore, an MRMC study comparing human readers with and without AI assistance is not described.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The document describes several standalone performance tests for the device's components and calculations (e.g., Lead Off Detection, Signal Acquisition Noise Levels, Software ADC Conversion Accuracy, Quantitative Electroencephalogram (QEEG) accuracy). These tests are conducted on the algorithm and hardware without human interpretation as part of the primary outcome assessment. For instance, the "Software ADC Conversion Accuracy" and "Quantitative Electroencephalogram (QEEG)" accuracy tests evaluate the algorithm's performance in generating calculated EEG measures.

    7. The Type of Ground Truth Used

    The ground truth used for the reported performance tests is based on:

    • Defined Technical Specifications and Engineering Standards: For tests like Lead Off Detection, Signal Acquisition Noise Levels, Software ADC Conversion Accuracy, EC12:2020 Electrical Performance, and Essential Performance Tests (IEC 80601-2-26).
    • Validated Calculation Methods: For the Quantitative Electroencephalogram (QEEG) Relative Band Power calculation, the ground truth would be based on established mathematical and signal processing principles for deriving these metrics from raw EEG data.
    • International Biocompatibility Standards: For ISO 10993 series tests (Cytotoxicity, Sensitization, Irritation).

    8. The Sample Size for the Training Set

    The provided document describes performance testing for substantial equivalence, not the development or validation of a machine learning model with distinct training and test sets in the typical sense. While the device calculates QEEG, the details on how the underlying algorithms were developed or "trained" (if machine learning is involved beyond standard signal processing) are not provided. Therefore, a specific sample size for a "training set" is not mentioned.

    9. How the Ground Truth for the Training Set Was Established

    As information about a distinct "training set" for machine learning algorithms is not provided, the method for establishing ground truth for such a set is also not described. The document focuses on performance testing against established engineering, electrical, and biocompatibility standards.


    K Number
    K230842
    Device Name
    SignalHF (IM008)
    Manufacturer
    Date Cleared
    2023-10-25

    (211 days)

    Product Code
    Regulation Number
    870.2210
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    SignalHF (IM008)

    Intended Use

    The SignalHF System is intended for use by qualified healthcare professionals (HCP) managing patients over 18 years old who are receiving physiological monitoring for Heart Failure surveillance and who are implanted with a compatible Cardiac Implantable Electronic Device (CIED) (i.e., compatible pacemakers, ICDs, and CRTs).

    The SignalHF System provides additive information to use in conjunction with standard clinical evaluation.

    The SignalHF HF Score is intended to calculate the risk of HF for a patient in the next 30 days.

    This System is intended for adjunctive use with other physiological vital signs and patient symptoms and is not intended to independently direct therapy.

    Device Description

    SignalHF is a software as a medical device (SaMD) that uses a proprietary and validated algorithm, the SignalHF HF Score, to calculate the risk of a future worsening condition related to Heart Failure (HF). The algorithm computes this HF score using data obtained from (i) a diverse set of physiologic measures generated in the patient's remotely accessible pre-existing cardiac implant (activity, atrial burden, heart rate variability, heart rate, heart rate at rest, thoracic impedance (for fluid retention), and premature ventricular contractions per hour), and (ii) the patient's available Personal Health Records (demographics). SignalHF provides information regarding the patient's health status (such as a stable HF condition) and also provides alerts based on the SignalHF HF evaluation. Based on an alert threshold and a recovery threshold on the SignalHF score, established during the learning phase of the algorithm and fixed for all patients, the monitoring system is expected to raise an alert a median of 30 days before a predicted HF hospitalization event.

    SignalHF does not provide a real-time alert. Rather, it is designed to detect chronic worsening of HF status. SignalHF is designed to provide a score linked to the probability of a future decompensated heart failure event specific to each patient. Using this adjunctive information, healthcare professionals can make adjustments for the patient based on their clinical judgement and expertise.
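    The paragraphs above describe a fixed alert threshold and a lower recovery threshold applied to the score. As a minimal sketch of how such a two-threshold (hysteresis) scheme raises and clears alerts over a score series, consider the following; the threshold values and the score trace are purely hypothetical and are not SignalHF's actual operating points.

```python
# Hypothetical two-threshold (hysteresis) alerting over a daily risk-score series.
# The thresholds and scores below are illustrative only; they are not the
# SignalHF algorithm or its actual operating points.
def hysteresis_alerts(scores, alert_thr=0.7, recovery_thr=0.4):
    """Return the indices (e.g., days) at which a new alert is raised."""
    alerts, in_alert = [], False
    for i, s in enumerate(scores):
        if not in_alert and s >= alert_thr:
            alerts.append(i)        # score crossed the alert threshold: raise an alert
            in_alert = True
        elif in_alert and s <= recovery_thr:
            in_alert = False        # score fell below the recovery threshold: clear the alert
    return alerts

print(hysteresis_alerts([0.2, 0.5, 0.8, 0.75, 0.3, 0.9]))  # -> [2, 5]
```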

    The score and score-based alerts provided through SignalHF can be displayed on any compatible HF monitoring platform, including the Implicity platform. The healthcare professional (HCP) can utilize the SignalHF HF score as adjunct information when monitoring CIED patients with remote monitoring capabilities.

    The HCP's decision is not based solely on the device data which serves as adjunct information, but rather on the full clinical and medical picture and record of the patient.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for SignalHF:

    Acceptance Criteria and Device Performance for SignalHF

    The SignalHF device was evaluated through the FORESEE-HF Study, a non-interventional clinical retrospective study.

    1. Table of Acceptance Criteria and Reported Device Performance

    For ICD/CRT-D Devices:

    Endpoints | Acceptance Criteria (Objective) | SignalHF Performance (ICD/CRT-D Devices)
    Sensitivity for detecting HF hospitalization (%) | > 40% | 59.8% [54.0%; 65.4%]
    Unexplained Alert Rate PPY | 15 days | 35.0 [27.0; 52.0]

    For Pacemaker/CRT-P Devices:

    Endpoints | Acceptance Criteria (Objective) | SignalHF Performance (Pacemaker/CRT-P Devices)
    Sensitivity for detecting HF hospitalization (%) | > 30% | 45.9% [38.1%; 53.8%]
    Unexplained Alert Rate PPY | 15 days | 37 [24.5; 53.0]
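
    The summary reports these endpoints without stating their formulas. Under the usual definitions (an assumption here; the submission may define them differently), sensitivity is the percentage of HF hospitalizations preceded by an alert, and the unexplained alert rate is normalized per patient-year (PPY) of follow-up:

    \[
    \text{Sensitivity} = \frac{\#\{\text{HF hospitalizations preceded by an alert}\}}{\#\{\text{all HF hospitalizations}\}} \times 100\%,
    \qquad
    \text{Unexplained alert rate (PPY)} = \frac{\#\{\text{alerts not followed by an HF event}\}}{\text{total follow-up time in patient-years}}
    \]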

    2. Sample Size and Data Provenance for the Test Set

    • Test Set (Clinical Cohort) Sample Size: 6,740 patients. The document also lists device subgroups of PM 7,360, ICD 5,642, CRT-D 4,116, and CRT-P 856; these subgroup counts sum to 17,974 and so cannot all belong to the test set, but "6,740" is explicitly stated as the 'Clinical cohort', which is the test set.
    • Data Provenance: Retrospective study using data from the French national health database "SNDS" (SYSTÈME NATIONAL DES DONNÉES DE SANTÉ) and Implicity proprietary databases. The follow-up period was 2017-2021.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not explicitly state the number of experts used to establish ground truth or their specific qualifications (e.g., radiologist with 10 years of experience). However, the ground truth was "hospitalizations with HF as primary diagnosis" as recorded in the national health database, implying that these diagnoses were made by qualified healthcare professionals as part of routine clinical care documented within the SNDS.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method like 2+1 or 3+1 for establishing the ground truth diagnoses. The study relies on “hospitalizations with HF as primary diagnosis” from the national health database, suggesting that these are established clinical diagnoses within the healthcare system.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    There is no indication that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to evaluate human reader improvement with AI assistance. The study focuses solely on the standalone performance of the SignalHF algorithm.

    6. Standalone Performance

    Yes, a standalone (algorithm only without human-in-the-loop performance) study was done. The FORESEE-HF study evaluated the SignalHF algorithm's performance in predicting heart failure hospitalizations based on CIED data and personal health records.

    7. Type of Ground Truth Used

    The ground truth used was outcomes data, specifically "hospitalizations with HF as primary diagnosis" recorded in the French national health database (SNDS).

    8. Sample Size for the Training Set

    • Training Cohort Sample Size: 7,556 patients

    9. How the Ground Truth for the Training Set Was Established

    The document states that the algorithm computes the HF score using physiological measures from compatible CIEDs and available Personal Health Records (demographics). It also mentions that the "recovery threshold on the SignalHF score established during the learning phase of the algorithm and fixed for all patients". This implies that the ground truth for the training set, similar to the test set, was derived from the same data sources: "hospitalizations with HF as primary diagnosis" documented within the SNDS database. The training process would have used these documented HF hospitalizations as the target outcome for the algorithm to learn from.


    K Number
    K230655
    Date Cleared
    2023-05-03

    (55 days)

    Product Code
    Regulation Number
    888.3040
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    PEEK RCI Screw; Bio-Composite Screw; SignaLoc Screw

    Intended Use

    The Signature Orthopaedics PEEK RCI, Bio-Composite and SignaLoc Screws are intended for use in fixation of soft tissue, including ligament or tendon, to bone for cruciate ligament reconstruction surgeries of the knee. The screws are also intended for use in the following procedures:

    • ACL repairs
    • PCL repairs
    • Extra-capsular repairs
      • Medial collateral ligament
      • Lateral collateral ligament
      • Posterior oblique ligament
    • Patellar realignment and tendon repairs
      • Vastus medialis obliquus advancement
    • Iliotibial band tenodesis
    Device Description

    The PEEK RCI, Bio-Composite and SignaLoc Screws are interference screws which provide compression of the graft or tendon to the bony wall for biological fixation of the ligament, tendon or soft tissue to bone. The screws feature an internal cannulation to accept a guide wire and have the same drive feature. The screws have an external variable thread along the length of the tapered shape and a rounded head. Each screw is provided individually packaged sterile for single use. The PEEK RCI is manufactured from unreinforced PEEK, and the SignaLoc and Bio-Composite are manufactured from a PEEK/Hydroxyapatite composite.

    AI/ML Overview

    The provided document is a 510(k) premarket notification decision letter from the FDA for the PEEK RCI, Bio-Composite, and SignaLoc Screws. This document focuses on demonstrating substantial equivalence to previously cleared predicate devices, primarily through non-clinical performance testing. It does not describe a study involving an AI device or human-in-the-loop performance.

    Therefore, the requested information regarding acceptance criteria and study details for an AI-powered device, multi-reader multi-case studies, standalone algorithm performance, and ground truth establishment for AI training sets is not applicable to this document.

    However, I can provide the acceptance criteria and study information related to the non-clinical performance of the PEEK RCI, Bio-Composite, and SignaLoc Screws as detailed in the document.


    Acceptance Criteria and Device Performance for PEEK RCI, Bio-Composite, and SignaLoc Screws (Non-Clinical)

    The document primarily states that "Non-clinical testing and engineering evaluations were conducted to verify that the performance of the PEEK RCI, Bio-Composite and Signaloc Screws are adequate for anticipated in-vivo use." The acceptance criteria for these tests are implied to be meeting the requirements and performance characteristics of the referenced ASTM and ISO standards, and demonstrating substantial equivalence to the predicate devices. Specific quantitative acceptance values are not explicitly stated in this FDA letter but would have been part of the manufacturer's detailed testing report.

    1. Table of Acceptance Criteria and Reported Device Performance:

    Acceptance Criteria Category | Standard Reference | Reported Device Performance
    Insertion Torque Testing | ASTM F543 | Performance adequate for anticipated in-vivo use (meets standard requirements)
    Torque to Failure Testing | ASTM F543 | Performance adequate for anticipated in-vivo use (meets standard requirements)
    Pullout Testing | ASTM F543 | Performance adequate for anticipated in-vivo use (meets standard requirements)
    Biocompatibility Evaluation | ISO 10993-1 | Meets biocompatibility requirements
    Pyrogenicity and Endotoxin Testing | AAMI ST72 | Meets pyrogenicity and endotoxin requirements
    Packaging and Shelf-Life Testing | ASTM F1980 | Meets packaging and shelf-life requirements
    Sterilization Validation | AAMI TIR 56, inclusive of EO and ECH Residual Testing per ISO 10993-7 | Meets sterilization requirements (effective and residuals within limits)
    Substantial Equivalence to Predicates | N/A (Comparative analysis) | Found substantially equivalent in intended use, indications for use, materials, design features, and sterilization to predicate devices.

    2. Sample Size for Test Set and Data Provenance:

    • Sample Size: Not specified in the provided document. Non-clinical testing typically involves a sufficient number of samples to ensure statistical validity per the relevant standards, but exact numbers are not disclosed here.
    • Data Provenance: The tests are non-clinical (laboratory/bench top) and engineering evaluations performed by the manufacturer, Signature Orthopaedics Pty Ltd. The country of origin for the data generation would be Australia (where the manufacturer is based) or potentially related testing facilities. The data is prospective in the sense that it was generated specifically for this submission to support the device's performance.

    3. Number of Experts and Qualifications for Ground Truth:

    • Not applicable. This document describes non-clinical performance testing of medical screws, not an AI device or a study requiring human expert ground truth for image interpretation or diagnosis.

    4. Adjudication Method for Test Set:

    • Not applicable. As this is non-clinical bench testing, there is no "adjudication method" in the context of expert consensus or dispute resolution. The results are typically determined by calibrated equipment and adherence to standard protocols.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No. An MRMC study was not done as this is for a physical medical device (screws) and not an AI-assisted diagnostic or interpretative system.

    6. Standalone (Algorithm Only) Performance:

    • Not applicable. This device is a physical medical implant, not a software algorithm.

    7. Type of Ground Truth Used:

    • Not applicable in the context of expert consensus or pathology for clinical conditions. For non-clinical testing, the "ground truth" is established by the specifications and performance requirements defined in the referenced industry standards (ASTM, ISO, AAMI) and the properties of the predicate devices. The screws' ability to meet these specified physical and biological material properties serves as the ground truth.

    8. Sample Size for Training Set:

    • Not applicable. This device is not an AI algorithm that requires a training set.

    9. How Ground Truth for Training Set was Established:

    • Not applicable. This device is not an AI algorithm.

    K Number
    K221168
    Date Cleared
    2023-02-01

    (285 days)

    Product Code
    Regulation Number
    874.3400
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    Tinnitogram Signal Generator

    Intended Use

    Tinnitogram™ Signal Generator is sound-generating software used in a Tinnitus Management Program designed to provide temporary relief for people experiencing tinnitus symptoms. It is intended primarily for adults over 18 years of age, but may also be used for children 5 years of age or older.

    Tinnitogram™ Signal Generator is for use by hearing healthcare professionals who are familiar with the evaluation and treatment of tinnitus and hearing losses. A hearing healthcare professional should recommend that the patient listen to the Tinnitogram™ Signal Generator signal for 30 minutes twice a day at the barely audible level (minimally detectable level).

    Device Description

    GOLDENEAR COMPANY's TINNITOGRAM™ SIGNAL GENERATOR is a software as a medical device recommended for use on a PC (desktop or laptop computer). TINNITOGRAM™ SIGNAL GENERATOR is fitted to the patient by the healthcare professional. The software enables a qualified professional to create customized sounds within a specific frequency range for sound therapy/masking.

    The device type is stand-alone software as a medical device. The tinnitus masking signal is generated through a pre-process of securing the patient's customized signal. The test to find tinnitus frequencies (the pre-process) is performed automatically, and the masking signal is generated at the patient's barely audible level.
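
    As context for how such a masking signal might be produced in software, the sketch below synthesizes a pure tone at a chosen frequency with a small relative amplitude standing in for the "barely audible level". The frequency, amplitude, and sampling rate are illustrative assumptions, and mapping a digital amplitude to an actual presentation level in dB SPL requires calibrated playback hardware, which is not modeled here.

```python
# Illustrative only: synthesize a pure-tone masking signal at a given frequency.
# All values are assumptions; the real presentation level (dB SPL) depends on
# the calibration of the playback chain, not on the digital amplitude alone.
import numpy as np

def pure_tone(freq_hz, duration_s, fs=44100, amplitude=0.01):
    """Return a sine tone; `amplitude` is a fraction of digital full scale."""
    t = np.arange(int(duration_s * fs)) / fs
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

tone = pure_tone(freq_hz=8000.0, duration_s=1.0)  # 1 s of an 8 kHz tone
```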

    AI/ML Overview

    The provided document is an FDA 510(k) clearance letter and summary for the Tinnitogram Signal Generator, a software device intended to provide temporary relief for tinnitus symptoms.

    Based on the content, the device functions as a sound generator for tinnitus management. The primary method of demonstrating acceptance and safety/effectiveness for this device is by showing substantial equivalence to an existing predicate device (KW Ear Lab's REVE134, K151719), rather than through a complex clinical study with specific performance acceptance criteria like those seen for diagnostic or therapeutic devices.

    Therefore, the requested information about acceptance criteria and a study proving the device meets those criteria (especially regarding performance metrics like sensitivity, specificity, or improvement in human reader performance) is not applicable in the traditional sense for this submission. The "study" here is essentially the non-clinical performance data (software verification and validation) to establish that the new device functions as intended and safely, despite some differences from the predicate.

    Here's an analysis based on the document's content, explaining why some sections of your request cannot be fulfilled and providing information where available:


    1. A table of acceptance criteria and the reported device performance

    This type of table, with quantitative performance metrics (e.g., sensitivity, specificity, accuracy) and corresponding acceptance thresholds, is typically required for diagnostic or AI-driven decision support devices. For the Tinnitogram Signal Generator, which is a sound-generating software for tinnitus masking, the "acceptance criteria" are related to its functional operation, safety, and equivalence to a predicate device.

    • Acceptance Criteria (Implied from the submission):

      • The software generates sounds for tinnitus masking as intended.
      • The software's functions (e.g., automated tinnitus frequency finding, signal generation) operate correctly.
      • The software's safety and effectiveness are comparable to the predicate device, despite minor technological differences (e.g., maximum output, how tests are performed).
      • The software adheres to relevant medical device software and risk management standards.
    • Reported Device Performance:
      "In all verification and validation process, GOLDENEAR COMPANY's TINNITOGRAM™ SIGNAL GENERATOR functioned properly as intended and the performance observed was as expected."

      Note: Specific quantitative performance metrics (e.g., sound output precision, accuracy of frequency determination) are not provided in numerical form in this summary, beyond the specifications listed (e.g., max output 104 dB SPL, frequency range 262-11840 Hz). The "performance" is primarily demonstrated through successful completion of software verification and validation activities.

    Table (Best approximation based on available information):

    Acceptance Criteria (Implied) | Reported Device Performance
    Software generates customized sounds for tinnitus masking. | "Functioned properly as intended."
    Software properly performs automated tinnitus frequency finding. | "Functioned properly as intended."
    Software's safety is comparable to predicate device. | "Bench performance testing... demonstrated these differences do not affect safety."
    Software's effectiveness is comparable to predicate device. | "Bench performance testing... demonstrated these differences do not affect effectiveness."
    Adherence to medical device software development standards (IEC 62304). | "Software development, verification, and validation have been carried out in accordance with FDA guidelines."
    Adherence to risk management standards (ISO 14971). | "Software Hazard analysis was completed and risk control implemented."
    All software specifications meet acceptance criteria. | "The testing results support that all the software specifications have met the acceptance criteria of each module and interaction of processes."

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: This kind of "test set" (e.g., a set of patient data or images) is not applicable here as this is not a diagnostic or AI-based image analysis device. The "test set" in this context refers to the software testing environment.
    • Data Provenance: Not applicable. The "data" being tested is the software's functionality, not patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not Applicable. This device uses software verification and validation, not clinical experts establishing ground truth from patient data cases.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Not Applicable. As above, this is for software verification, not expert adjudication of clinical cases.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No, an MRMC study was not done. This device is a sound generator, not an AI-assisted diagnostic tool that would involve human readers.
    • Effect Size: Not applicable.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, in spirit, a form of standalone testing was done. The "software as a medical device" was "verified and validated" for its intended functions (e.g., generating signals, performing the automated test to find tinnitus frequencies). This testing assesses the algorithm's performance in isolation from patient interaction, ensuring it produces the correct outputs for given inputs. The summary states: "The software was tested against the established Software Design Specifications for each of the test plans to assure the device performs as intended." This constitutes the "algorithm only" performance assessment.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • The "ground truth" for this device is the software's design specifications and expected functional behavior. For instance, if the software is designed to generate a 1 kHz tone at 54 dB SPL, the "ground truth" is that 1 kHz tone at 54 dB SPL, and the testing verifies if the software actually produces it. It's a functional "ground truth" rather than a clinical "ground truth."

    8. The sample size for the training set

    • Not Applicable. This device is not described as using machine learning models that require a training set of data. It is a rule-based or algorithmic sound generator.

    9. How the ground truth for the training set was established

    • Not Applicable. (See #8).

    K Number
    K202174
    Date Cleared
    2021-02-10

    (190 days)

    Product Code
    Regulation Number
    882.1835
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    Digital NeuroPort Biopotential Signal Processing System

    Intended Use

    The Digital NeuroPort Biopotential Signal Processing System supports recording, processing, and display of biopotential signals from user-supplied electrodes. Biopotential signals include: Electrocorticography (ECoG), electroencephalography (EEG), electromyography (EMG), electrocardiography (ECG), electrooculography (EOG), and Evoked Potential (EP).

    Device Description

    The Digital NeuroPort Biopotential Signal Processing System is used to acquire, process, visualize, archive/record signals as acquired from user-supplied electrodes for biopotential monitoring. Signals are acquired using a headstage relay that attaches to the pedestal interface and digitizes the signal through the hub. The Digital NeuroPort System uses preamplifiers, analog to digital converters, a signal processing unit, and software running on a personal computer to visualize and record biopotentials from electrodes in contact with the body.

    AI/ML Overview

    The document describes the Digital NeuroPort Biopotential Signal Processing System, which is a physiological signal amplifier. The device's substantial equivalence to a predicate device (K090957, NeuroPort Biopotential Signal Processing System) is affirmed based on various performance data.

    Here's an analysis of the acceptance criteria and the supporting studies:

    1. Table of Acceptance Criteria and Reported Device Performance:

      Test / Characteristic | Acceptance Criteria | Reported Device Performance
      NeuroPlex E Functional Testing
      Mating | Screws down on pedestal and LED turns green | Pass
      Crosstalk | Isolation resistance of 1 kΩ at 500 V DC | Pass
      Label Durability | IEC 60601-1:2005/A1:2012, Edition 3.1, 7.1.3 | Pass
      Digital Accuracy | Appropriate voltages for different filters (0.02-10 kHz Wide, 0.3-7.5 kHz Standard); peak-to-peak of 500 mV ±10% | Pass
      Input Impedance | ≥10 MΩ | Pass
      Impedance Measurement | 820 ± 15% kOhms and 170 ± 15% kOhms | Pass
      Current Rating

    K Number
    K200484
    Date Cleared
    2020-11-25

    (272 days)

    Product Code
    Regulation Number
    870.1425
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    CARTO® 3 EP Navigation System with Signal Processing Unit

    Intended Use

    The intended use of the CARTO® 3 System is catheter-based cardiac electrophysiological (EP) procedures. The CARTO® 3 System provides information about the electrical activity of the heart and about catheter location during the procedure. The system can be used on patients who are eligible for a conventional electrophysiological procedure. The system has no special contraindications.

    Device Description

    The CARTO® 3 EP Navigation System with SPU is a catheter-based atrial and ventricular mapping system designed to acquire and analyze data points and use this information to display 3D anatomical and electroanatomical maps of the human heart. The location information needed to create the cardiac maps and the local electrograms are acquired using a specialized mapping catheters and reference devices. The system allows electrograms and cardiac maps display based on the received intracardiac signals from the catheters. The CARTO® 3 System uses two distinct types of location technology – magnetic sensor technology and Advanced Catheter Location (ACL) technology.

    The CARTO® 3 System with SPU consists of the following components:
    • Patient Interface Unit (PIU)
    • Graphic User Interface (GUI)
    • Wide-Screen monitors, keyboard, and mouse
    • Intracardiac In Port
    • Intracardiac Out Port
    • Power Supply
    • Patches Connection Box and Cables
    • Pedals
    • Location Pad
    • SPU

    AI/ML Overview

    It appears you've provided an FDA 510(k) Summary for the CARTO® 3 EP Navigation System with Signal Processing Unit (K200484). This document primarily focuses on demonstrating substantial equivalence to a predicate device, rather than providing a detailed clinical study demonstrating the device's performance against specific acceptance criteria for a new and unproven diagnostic or AI-driven medical device.

    The provided document describes the device, its intended use, and its technological characteristics compared to a predicate device. It states that the differences are the addition of a Signal Processing Unit (SPU), an upgrade to Windows 10, and blocking certain optional features.

    Therefore, the study design and acceptance criteria outlined in your request (particularly related to AI performance, human reader improvement, and extensive ground truth establishment) are not applicable to this specific 510(k) submission. This is because the submission is for a modification of an already cleared device, where the key is to show that the modifications do not negatively impact safety or effectiveness, rather than a de novo submission or a submission for a novel AI diagnostic algorithm.

    However, I can extract the information that is present and explain why certain sections of your request cannot be fulfilled from this document.

    Here's an analysis based on the provided document:


    Acceptance Criteria and Device Performance (Not Applicable in the Requested Format)

    • The document primarily focuses on demonstrating that the modifications (addition of SPU, OS upgrade, blocked features) to the CARTO® 3 system do not negatively affect its safety and effectiveness, and that the new accessory (SPU) functions as intended.
    • It does not present acceptance criteria in the format of specific sensitivity, specificity, accuracy, or other performance metrics, nor does it typically involve a clinical study demonstrating diagnostic accuracy with human readers or standalone AI performance.
    • Instead, the "acceptance criteria" were met through:
      • Bench Testing: Verifying visualization of catheters, system behavior, location accuracy of catheters position, ECG channel characteristics and performance, load capacity, and data synchronization.
      • Functional Verification: Testing functional requirements, hardware configurations with SPU, regression testing of legacy features, system functionality for supported catheters, and usability.
      • Animal Testing: Evaluating functionality under simulated clinical workflow.
    • The reported performance is that "All testing passed in accordance with appropriate test criteria and standards, and the modified device did not raise new questions of safety or effectiveness." and "All testing performed met the acceptance criteria." and "All system features were found to perform according to specifications and met the tests acceptance criteria."

    1. Table of Acceptance Criteria and Reported Device Performance

    As explained, discrete quantitative performance metrics (like sensitivity/specificity for a diagnostic AI) are not presented because this is a 510(k) for a device modification, not a novel diagnostic AI.

    Acceptance Criteria Category | Specific Tests/Verification | Reported Device Performance
    Electrical Safety & EMI | IEC 60601-1, IEC 60601-1-2 | In compliance with standards
    Proof of Design | Visualization of catheters, system behavior, location accuracy, ECG channel characteristics/performance, load capacity, data synchronization | All testing met acceptance criteria
    Functional Verification | Functional requirements & HW configurations with SPU, regression for legacy features, supported catheter functionality, usability (IEC 60601-1-6) | All system features performed according to specifications and met test acceptance criteria
    Simulated Clinical Use | Animal testing for SPU functionality under simulated clinical workflow | All test protocol steps successfully completed and expected results achieved

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not applicable in the context of human patient data for an AI performance study. The "test set" consisted of bench models, simulated conditions, and animal subjects. No specific number of "cases" or "patients" for a diagnostic accuracy study is provided.
    • Data Provenance: The bench and animal testing were conducted by Biosense Webster, Inc., with manufacturing facilities in Israel and the USA. The data is pre-clinical/bench data, not clinical patient data from specific countries.
    • Retrospective/Prospective: The testing described is prospective, in that it was specifically designed and executed to evaluate the device changes for this 510(k) submission.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Not applicable. This submission does not involve evaluation of an AI or diagnostic algorithm requiring expert-established ground truth on medical images or data. The "ground truth" for the engineering and animal tests would be the known electrical and positioning parameters of the system, established by engineering specifications and direct measurement.

    4. Adjudication Method for the Test Set

    • Not applicable. No human experts were adjudicating clinical cases for ground truth in this type of submission.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No. This was not a comparative effectiveness study for human readers with or without AI assistance. The submission is about the performance of the CARTO® 3 system itself, including its new SPU accessory, for its intended use in cardiac EP procedures, not about a diagnostic AI.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop)

    • Not applicable in the AI sense. While the device performs complex calculations (e.g., location determination, signal processing), it is inherently a system that assists a human clinician during EP procedures. It's not a standalone diagnostic AI algorithm that provides a diagnosis without human interaction. The "performance data" presented relates to the system's ability to accurately process signals and determine catheter location, which is a standalone function of the device's software and hardware.

    7. The Type of Ground Truth Used

    • For bench testing: Engineering specifications, known physical parameters, and reference measurements.
    • For animal testing: Physiological responses within the animal model relative to expected system behavior.
    • This is not "expert consensus, pathology, or outcomes data" in the typical sense of a diagnostic AI study.

    8. The Sample Size for the Training Set

    • Not applicable. This document does not describe the development or validation of a new AI algorithm requiring a training set. The CARTO® 3 system's underlying algorithms would have been developed and refined over many years, potentially using large internal datasets, but this 510(k) is about a modification, not the initial development or a significant new AI feature.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable. As above, this document does not refer to a "training set" for a new AI algorithm.

    In summary, the provided FDA 510(k) notification focuses on demonstrating the safety and effectiveness of a modified medical device through engineering and pre-clinical verification, rather than the clinical validation of a new diagnostic AI system. Therefore, many of the questions related to AI study design and clinical ground truth establishment are not addressed in this specific document.


    K Number
    K190689
    Date Cleared
    2019-08-14

    (149 days)

    Product Code
    Regulation Number
    878.4300
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    SignalMark Breast Marker

    Intended Use

    The SignalMark Breast Marker is intended to provide accuracy in marking a surgical site and/or a biopsy location for visualization during surgical resection.

    Device Description

    The SignalMark Breast Marker is a medical device used by a physician to percutaneously place a small implantable hydrogel marker in breast tissue to "mark" the location of the biopsy or surgical site. It is intended to be used on adults undergoing open surgical breast biopsy or percutaneous breast biopsy, in a surgical setting, such as a hospital or medical clinic with operating suites. The SignalMark Breast Marker consists of two components:

    • Applicator: Component made of plastic and stainless steel that pushes the marker into the tissue.
    • Marker Pad: Component made of USP-grade porcine gelatin-based hydrogel with methylene blue-colored silicon dioxide microspheres. The marker aids in the visualization of tissue allowing surgeons to readily locate the biopsy site for subsequent tissue or tumor resection.
    AI/ML Overview

    The provided document, a 510(k) summary for the SignalMark Breast Marker, describes non-clinical testing performed to demonstrate substantial equivalence to predicate devices, rather than a study designed to prove the device meets specific acceptance criteria in the context of clinical performance or diagnostic accuracy. Therefore, a table of acceptance criteria and reported device performance in those terms is not available from this document.

    However, the document does detail other aspects of the testing performed, which can be extracted and summarized.

    No clinical studies involving human patients, multi-reader multi-case (MRMC) comparative effectiveness studies, or standalone algorithm performance studies are described in this 510(k) summary, as it focuses on non-clinical (bench and animal) testing. The device is a physical marker and not an AI/algorithm-based diagnostic tool, so certain sections of your request (e.g., effect size of human readers improving with AI, standalone performance, training set details) are not applicable.


    1. A table of acceptance criteria and the reported device performance

    As mentioned, this document does not provide a table of acceptance criteria for diagnostic performance or clinical effectiveness, nor does it report device performance against such criteria. The testing focused on demonstrating equivalence to predicate devices through physical and biological testing.

    The document lists the following non-clinical tests performed:

    Test Category | Specific Tests | Reported Device Performance and Acceptance Criteria
    Biocompatibility | ISO 10993-1 Biological evaluation of medical devices – Part 1: Evaluation and testing with a risk management process | Patient-contacting material was subjected to biocompatibility testing in compliance with ISO 10993-1. This implies the device met the requirements of this standard.
    Performance (Bench) | Visual Inspection of the Applicator | Not explicitly stated, but implied to meet internal specifications for visual quality.
    | Applicator Deployment Test | Not explicitly stated, but implied to meet internal specifications for proper deployment.
    | Applicator Dimensional Inspection | Not explicitly stated, but implied to meet internal specifications for dimensions.
    | Applicator Stroke Length Test | Not explicitly stated, but implied to meet internal specifications for stroke length.
    | Applicator Compression Test | Not explicitly stated, but implied to meet internal specifications for compression limits.
    | Applicator Tensile Test | Not explicitly stated, but implied to meet internal specifications for tensile strength.
    | Visual Inspection of the Marker Pad | Not explicitly stated, but implied to meet internal specifications for visual quality.
    | Marker Pad Diameter | Not explicitly stated, but implied to meet internal specifications for diameter.
    | Marker Pad Hydration | Not explicitly stated, but implied to meet internal specifications for hydration properties and expansion.
    | Marker Pad Ultrasound Visual Test | The document states both the subject device and predicate are visible under ultrasound, implying the test confirmed this visibility.
    | Labeling wipe test with 70% IPA | Not explicitly stated, but implied to meet internal specifications for labeling durability.
    | Packaged Contents Verification | Not explicitly stated, but implied to meet internal specifications for package integrity and contents.
    Performance (Animal) | Biodistribution in rodents | Not explicitly stated, but implied to demonstrate acceptable biodistribution without adverse effects or to be comparable to predicate.
    | Safety and efficacy in porcine | Not explicitly stated, but implied to demonstrate safety and function in an animal model, comparable to predicate.
    | Biologic response in porcine | Not explicitly stated, but implied to demonstrate an acceptable biological response in an animal model.

    Note: The document states, "No FDA performance standards have been established for the SignalMark Breast Marker." This indicates that the equivalence was demonstrated against the predicate's known performance for these non-clinical aspects, rather than against specific regulatory performance metrics for this type of device.


    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size: The document does not specify the exact sample sizes for each of the bench and animal tests. It only lists the types of tests conducted. For example, for "Biodistribution in rodents" and "Safety and efficacy in porcine," the specific number of animals used is not provided.
    • Data Provenance: The document does not mention the country of origin of the data. The studies were non-clinical (bench and animal), so the terms "retrospective" or "prospective" as they apply to human clinical studies are not directly relevant. These were experimental studies.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This section is not applicable as the document describes non-clinical (bench and animal) testing, not a diagnostic accuracy study requiring expert establishment of ground truth.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This section is not applicable as the document describes non-clinical (bench and animal) testing, not a diagnostic accuracy study requiring adjudication.


    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    This section is not applicable. The SignalMark Breast Marker is a physical implantable marker, not an AI or imaging diagnostic device. No MRMC study or AI assistance is mentioned or relevant to this device's 510(k) submission.


    6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done

    This section is not applicable. The SignalMark Breast Marker is a physical implantable marker, not an algorithm, and therefore does not have "standalone performance" in the context of AI or software.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the non-clinical tests described:

    • For Biocompatibility, the ground truth is adherence to the ISO 10993-1 standard.
    • For Bench Performance tests (e.g., dimensional inspection, deployment, hydration, ultrasound visibility), the ground truth would be the pre-defined engineering specifications, material properties, and expected physical behaviors of the device, often compared to the predicate device's characteristics.
    • For Animal Performance tests (biodistribution, safety, efficacy, biologic response), the ground truth is derived from the observed biological reactions, histology, and functional outcomes in the animal models, compared to established norms or the predicate device's effects.

    8. The sample size for the training set

    This section is not applicable. The SignalMark Breast Marker is a physical medical device, not a machine learning or AI model, and therefore does not have a "training set" in that context. The device's design and manufacturing processes are developed through traditional engineering and material science principles.


    9. How the ground truth for the training set was established

    This section is not applicable for the same reason as point 8.


    K Number
    K182635
    Device Name
    Signal Catheter
    Date Cleared
    2019-01-10

    (108 days)

    Product Code
    Regulation Number
    876.5130
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    Signal Catheter

    Intended Use

    The Signal Catheter is indicated for urological bladder drainage, with a maximum patient indwelling time of 29 days.

    Device Description

    The Signal Catheter is a 16 French, 2-way silicone Foley catheter, designed to be inserted into the bladder through the urethra to drain urine. The unique signal balloon included in the catheter hub is designed to inflate during excessive pressure in the retention balloon. This typically occurs when the retention balloon is constricted and cannot be inflated at the nominal inflation pressure of the catheter. In this case, the signal balloon inflates to alleviate the fluid and resulting pressure in the retention balloon.

    AI/ML Overview

    The Signal Catheter is a medical device for urological bladder drainage. Based on the provided 510(k) summary, here's an analysis of its acceptance criteria and the supporting study information:

    1. A table of acceptance criteria and the reported device performance:

    The document acts as a 510(k) summary, which typically focuses on demonstrating substantial equivalence to a predicate device rather than explicitly stating acceptance criteria and direct performance metrics in a readily quantifiable "reported device performance" table format for a novel performance claim. However, it does indicate the studies performed and their objectives. The "acceptance criteria" here are implicitly linked to compliance with recognized standards and successful demonstration of substantial equivalence.

    Acceptance Criteria (Implied) | Reported Device Performance (Summary)
    Biocompatibility: Meet biological safety standards for patient-contacting materials. | Patient contacting material was subjected to biocompatibility testing according to the recommendations of ISO 10993-1. (Implies successful completion and meeting the standard's requirements, as it supports substantial equivalence.)
    Dimensional Verification: Conform to design specifications. | Performance testing included dimensional verification. (Implies successful verification that dimensions are as designed and comparable to the predicate, as it supports substantial equivalence.)
    Functional and Performance Testing: Device operates as intended, particularly its unique "signal balloon" mechanism to alleviate pressure. | Performance testing included functional and performance testing. The "signal balloon" in the catheter hub inflates during excessive pressure in the retention balloon, alleviating fluid and pressure. This technological characteristic underwent testing to ensure substantial equivalence. (Implies that the mechanism functions as designed and demonstrates comparable performance to predicates in terms of function.)
    Compliance to ASTM F623: Meet standard performance specifications for Foley Catheters. | Performance testing showed compliance to ASTM F623 Standard Performance Specification for Foley Catheter requirements. (Implies successful adherence to all relevant criteria within this standard.)
    Compliance to EN 1616: Meet standards for sterile urethral catheters for single use. | Performance testing showed compliance to EN 1616 Sterile urethral catheters for single use. (Implies successful adherence to all relevant criteria within this standard.)
    Overall Safety and Effectiveness: Does not raise new issues of safety or effectiveness compared to predicates. | The results of these tests indicate that the Signal Catheter is substantially equivalent to the predicate devices. "Based on the testing performed... it can be concluded that the subject device does not raise new issues of safety or effectiveness compared to the predicate devices." (This is the overarching conclusion based on all non-clinical tests.)

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document primarily describes non-clinical (bench) testing. For such tests, the concept of "sample size for the test set" is usually described in terms of the number of tested devices or batches, which is not explicitly provided in this summary. The data provenance (country of origin, retrospective/prospective) is not applicable or provided for these types of non-clinical tests in this summary.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not applicable to the non-clinical (bench) testing described. "Ground truth" established by experts is typically relevant for clinical studies or studies involving diagnostic accuracy, which were not performed here.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable to the non-clinical (bench) testing described. Adjudication methods are usually used in clinical studies for disagreement resolution among expert readers.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what effect size was observed for how much human readers improve with AI versus without AI assistance

    No such study was done or is referenced. This device is a catheter, not an AI-powered diagnostic tool, so MRMC studies involving human readers and AI assistance are not relevant.

    6. If a standalone performance study (i.e., algorithm only, without a human in the loop) was done

    Not applicable. This device is a physical medical device (catheter), not an algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the non-clinical tests performed:

    • Biocompatibility: The "ground truth" is compliance with ISO 10993-1, which is a recognized international standard based on established scientific principles for biological evaluation.
    • Dimensional Verification: The "ground truth" is the engineering design specifications and possibly comparative measurements to predicate devices.
    • Functional and Performance Testing: The "ground truth" is the designed functional specification of the device (e.g., the signal balloon inflates under specific pressure conditions) and compliance with performance standards like ASTM F623 and EN 1616. These standards themselves define the "ground truth" for acceptable performance.

    8. The sample size for the training set

    Not applicable. This device is a physical medical device, not a machine learning algorithm that requires a training set.

    9. How the ground truth for the training set was established

    Not applicable, as no training set was used.


    K Number
    K180175
    Date Cleared
    2018-12-07

    (319 days)

    Product Code
    Regulation Number
    878.4750
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    SignalMark Lung Biopsy Site Marker

    Intended Use

    The SignalMark Lung Biopsy Site Marker is intended to provide accuracy in marking a biopsy location for visualization during surgical resection.

    Device Description

    The SignalMark Lung Biopsy Site Marker is a medical device used by a physician to percutaneously place a small implantable hydrogel marker in lung tissue at the time of biopsy to "mark" the location of the biopsy site. It is intended to be used on adults undergoing percutaneous lung biopsies, in surgical settings such as hospitals or medical clinics with operating suites. The SignalMark Lung Biopsy Site Marker consists of two components:

    • Applicator: Component made of plastic and stainless steel that pushes the marker into the tissue.
    • Marker: Component made of USP-grade porcine gelatin-based hydrogel with methylene blue-colored silicon dioxide microspheres. The marker aids in the visualization of tissue allowing surgeons to readily locate the biopsy site for subsequent tissue or tumor resection.
    AI/ML Overview

    This document is a 510(k) summary for the SignalMark Lung Biopsy Site Marker. It details the process used to demonstrate substantial equivalence to a predicate device, rather than proving the device meets specific acceptance criteria based on clinical performance in an AI/imaging context. Therefore, most of the requested information regarding acceptance criteria, specific study design (e.g., MRMC, standalone), ground truth establishment, expert qualifications, and sample sizes for training/test sets is not applicable to, or extractable from, this document.

    This device is not an AI/imaging device. It is an implantable marker used to physically mark a biopsy site for later surgical resection. The acceptance criteria and study detailed in this document are primarily focused on non-clinical performance (bench and animal testing) to demonstrate its safety and biological compatibility, and technical equivalence to a previously cleared predicate device.

    However, information related to the device's performance testing can be extracted from the "Summary of Non-Clinical Testing" section and interpreted in the context of "acceptance criteria" for this type of device.

    The relevant information that can be extracted, and the points where the requested information is not applicable, are summarized below:

    1. A table of acceptance criteria and the reported device performance

    For this device, "acceptance criteria" are implied by the non-clinical testing performed to demonstrate equivalence and safety. The document states "No FDA performance standards have been established for SignalMark Lung Biopsy Site Marker," meaning there are no quantitative metrics such as accuracy, sensitivity, or specificity that must be met. Instead, "acceptance" is demonstrated through successful completion of the listed tests, ensuring the device functions as intended and is safe.

    Acceptance Criteria (Implied) | Reported Device Performance
    Biocompatibility in compliance with ISO 10993-1 | Patient-contacting material was subjected to biocompatibility testing (implicitly passed)
    Applicator functionality and dimensions | Visual Inspection of the Applicator (Passed); Applicator Deployment Test (Passed); Applicator Dimensional Inspection (Passed); Applicator Stroke Length Test (Passed); Applicator Compression Test (Passed); Applicator Tensile Test (Passed)
    Marker Pad visual and physical characteristics | Visual Inspection of the Marker Pad (Passed); Marker Pad Diameter (Passed); Marker Pad Length (Passed); Marker Pad Hydration (Passed)
    Marker Pad imaging visibility | Marker Pad Ultrasound Visual Test (Passed)
    Cleanliness and packaging integrity | Wipe test with 70% IPA (Passed); Packaged Contents Verification (Passed)
    In-vivo biodistribution and safety/efficacy | Biodistribution in rodents (Passed); Safety and efficacy in porcine (Passed); Biologic response in porcine (Passed)

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: The document does not specify exact sample sizes for each of the bench or animal tests. It only lists the types of tests performed (e.g., "Biodistribution in rodents," "Safety and efficacy in porcine").
    • Data Provenance: The document does not specify the country of origin of the data or whether the studies were retrospective or prospective. Given the nature of a 510(k) submission, these are typically pre-market studies conducted specifically for regulatory submission, implying they are prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not applicable. This device is not an AI/imaging device, and thus there is no "ground truth" to be established by experts in the context of image interpretation or diagnostic accuracy. The "ground truth" for this device's performance testing would be the physical properties measured in bench testing and biological responses observed in animal studies.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not applicable. This applies to establishing ground truth for diagnostic accuracy in imaging studies, which is not relevant here.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what effect size was observed for how much human readers improve with AI versus without AI assistance

    • Not applicable. This type of study is relevant for AI-assisted diagnostic tools, not for an implantable biopsy site marker.

    6. If a standalone performance study (i.e., algorithm only, without a human in the loop) was done

    • Not applicable. This also applies to AI algorithms.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • The "ground truth" for this device's evaluation is based on engineering measurements and biological observations from bench and animal testing. This includes:
      • Physical dimensions and deployment success (bench testing).
      • Material biocompatibility (ISO 10993-1).
      • Biodistribution, safety, and efficacy in animal models.

    8. The sample size for the training set

    • Not applicable. This device does not involve machine learning or a "training set."

    9. How the ground truth for the training set was established

    • Not applicable. See point 8.
