510(k) Data Aggregation

Search Results: Found 2,518 results

    K Number: K242314
    Date Cleared: 2025-09-11 (402 days)
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Why did this record match? Device Name: Minimally Invasive Prostate Surgery Navigation System (Model: AmaKris SR1-A-2)


    K Number: K251573
    Date Cleared: 2025-09-10 (111 days)
    Regulation Number: 886.5925
    Reference & Predicate Devices: N/A
    Why did this record match? Device Name: VizionFocus (somofilcon A) Silicone Hydrogel Soft (hydrophilic) Contact Lens


    Why did this record match? Device Name: F&P OptiNIV Hospital Vented Full Face Mask Compatible with Single-limb Circuits – Size A (ONIV117A); Hospital Vented Full Face Mask with optional Expiratory Filter Compatible with Single-limb Circuits – Size A


    K Number: K250132
    Date Cleared: 2025-09-05 (231 days)
    Regulation Number: 876.1500
    Reference & Predicate Devices: N/A
    Why did this record match? Device Name: Ureteral Access Sheath


    K Number: K251108
    Date Cleared: 2025-08-29 (140 days)
    Regulation Number: 878.4400
    Why did this record match? Device Name: Erbe ESU Model VIO® 3n with Accessories

    Intended Use

    The Erbe Electrosurgical Unit (ESU/Generator) model VIO 3n with instruments and accessories is intended to deliver high frequency (HF) electrical current for the cutting and/or coagulation of tissue.

    Device Description

    The Erbe ESU Model VIO® 3n is an electrosurgical unit (ESU) that delivers high-frequency (HF) electrical current for cutting and/or coagulation of tissue. The unit can be mounted/secured to a cart/system carrier or on a ceiling mount. Different footswitches are available for activating the ESU. The ESU VIO® 3n has several clearly defined monopolar and bipolar cutting and coagulation modes with different electrical waveforms and electrical parameters that are programmed with defined effect levels. Each effect level corresponds to a defined maximum power output and a voltage limitation. In combination with the compatible argon plasma coagulation unit APC 3 (K191234), it offers monopolar modes for argon plasma coagulation and argon-supported modes. The ESU has a touchscreen monitor that provides the user with an onscreen tutorial as well as settings and operational information. It also provides a small number of physical controls, such as the power switch, instrument sockets and a neutral electrode receptacle. Connections for the central power supply, for footswitches, for potential equalization of the operating theatre and Erbe Communication Bus (ECB) connections to other Erbe modules are located at the rear. The ESU emits sounds when instruments are activated, and messages are signaled. The actual application is carried out using compatible electrosurgical instruments that are connected to the generator. The VIO® 3n can be combined with matching Erbe devices and modules, instruments, and accessories.

    To address various clinical needs, the ESU is available in five configurations, called "Fire", "Metal", "Stone", "Water" and "Timber". The "Fire" configuration includes all available modes and functionalities; the other configurations offer a reduced subset.
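
    As a rough illustration of the two ideas above (each effect level corresponds to a defined maximum power output and a voltage limitation, and each configuration offers only a subset of the available modes), here is a minimal Python sketch; the mode names and numeric values are invented and are not taken from the VIO 3n specifications.

        # Hypothetical lookup tables; all values and mode names are illustrative only.
        EFFECT_LIMITS = {
            1: {"max_power_w": 20, "voltage_limit_vp": 300},
            2: {"max_power_w": 40, "voltage_limit_vp": 350},
            3: {"max_power_w": 60, "voltage_limit_vp": 400},
        }

        CONFIGURATION_MODES = {
            "Fire": {"monopolar_cut", "monopolar_coag", "bipolar_cut", "bipolar_coag"},  # full set
            "Water": {"monopolar_cut", "bipolar_coag"},                                  # reduced set
        }

        def output_limits(effect_level: int) -> dict:
            """Return the defined output limits for a programmed effect level."""
            return EFFECT_LIMITS[effect_level]

        def mode_available(configuration: str, mode: str) -> bool:
            """Check whether a cutting/coagulation mode is offered in a given configuration."""
            return mode in CONFIGURATION_MODES.get(configuration, set())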

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the Erbe ESU Model VIO® 3n with Accessories do not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and a specific study proving the device meets those criteria, particularly for an AI/software as a medical device (SaMD).

    This document pertains to an electrosurgical unit, which is a hardware device for cutting and coagulating tissue using high-frequency electrical current. The "software" mentioned in the document refers to the operating software of the ESU itself, not an AI or diagnostic algorithm, and thus the type of performance metrics, ground truth, and study designs you're asking about (e.g., MRMC, standalone performance, expert consensus on diagnostic images) are not applicable to this type of device submission.

    Therefore, I cannot provide a table of acceptance criteria and device performance in the context of an AI/SaMD, nor detailed information on test set sample sizes, data provenance, number of experts for ground truth, adjudication methods, MRMC studies, or specific training set details, because this document describes a hardware device.

    However, I can extract the information that is present about the non-clinical performance testing and what it aims to demonstrate:


    1. Table of Acceptance Criteria and Reported Device Performance

    As this is a hardware electrosurgical unit, the "acceptance criteria" are generally related to compliance with electrical safety, EMC, and functional performance standards for tissue cutting/coagulation. The document does not provide specific quantitative acceptance criteria values or detailed performance metrics in a table. It states that the device "performs as intended per the product specifications and requirements."

    | Acceptance Criteria Category (Inferred from testing) | Reported Device Performance (Summary from submission) |
    |---|---|
    | Functional Performance (Cutting and Coagulation Mode) | "Validated the cutting and coagulation mode performance compared to the predicate device(s)." "Performs as intended and meets design specifications." |
    | Electromagnetic Compatibility (EMC) | "Tested in compliance with IEC 60601-1-2 and FDA Guidance 'Electromagnetic Compatibility (EMC) of Medical Devices'." |
    | Electrical Safety | "Tested in compliance with IEC 60601-1 and IEC 60601-2-2, as applicable." |
    | Software Verification | "Provided for an enhanced documentation level in compliance with IEC 62304 and FDA Guidance 'Content of Premarket Submissions for Device Software Functions'." |
    | Cybersecurity | "Tested and assessed according to FDA Guidance 'Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions'." |

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: Not specified for any of the non-clinical tests.
    • Data Provenance: Not specified, but the testing was non-clinical, implying laboratory or bench testing rather than clinical patient data. The manufacturer is Erbe Elektromedizin GmbH (Germany).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This question is not applicable to the non-clinical testing of an electrosurgical hardware device. Ground truth, in the context of AI/SaMD, usually refers to labeled diagnostic data. For this device, "ground truth" would be the measurable physical parameters and effects on tissue.

    4. Adjudication method for the test set

    • This question is not applicable. Adjudication methods like 2+1 or 3+1 are used for expert consensus on ambiguous diagnostic cases in AI/SaMD studies. For an ESU, performance is measured against engineering specifications and observed physical effects.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No, an MRMC study was not done. This type of study is relevant for diagnostic AI tools that assist human readers (e.g., radiologists interpreting images). The Erbe ESU Model VIO® 3n is an interventional hardware device, not a diagnostic AI.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • The term "standalone performance" for an AI algorithm is not directly applicable here. However, the non-clinical performance testing (functional, EMC, electrical safety) can be considered "standalone" in the sense that the device's technical capabilities were tested independently against specifications without a human operator's diagnostic interpretation loop. The device directly performs an action (cutting/coagulation) rather than providing information for human interpretation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • For the functional testing, the "ground truth" would be the observable physical effects on tissue (e.g., degree of cutting, coagulation depth, eschar formation) and measured electrical parameters (power output, voltage, current) compared to established engineering specifications and the performance of predicate devices.
    • For safety and EMC, the "ground truth" is compliance with international standards (e.g., IEC 60601 series).

    8. The sample size for the training set

    • This question is not applicable. The Erbe ESU Model VIO® 3n is an electrosurgical hardware device. It does not use a "training set" in the machine learning sense to learn and develop an algorithm. Its operating software is developed through traditional software engineering processes, not machine learning model training.

    9. How the ground truth for the training set was established

    • This question is not applicable as there is no machine learning training set for this device.

    K Number: K243901
    Date Cleared: 2025-08-28 (252 days)
    Regulation Number: 880.5860
    Why did this record match? Device Name: SmartPilot YpsoMate NS-A2.25

    Intended Use

    The SmartPilot YpsoMate NS-A2.25 is indicated for use with the compatible disposable autoinjector to capture and record injection information that provides feedback to the user.

    Device Description

    The SmartPilot YpsoMate NS-A2.25 is an optional, battery operated, reusable device designed to be used together with a compatible autoinjector (a single use, needle based, pre-filled injection device for delivery of a drug or biologic into subcutaneous tissue). Figure 1 shows the SmartPilot YpsoMate NS-A2.25 with the paired autoinjector. The SmartPilot YpsoMate NS-A2.25 records device data, injection data and injection process status. The SmartPilot YpsoMate NS-A2.25 also provides guidance feedback to the user during the injection.

    Note that the SmartPilot YpsoMate NS-A2.25 does not interfere with autoinjector function.

    AI/ML Overview

    The provided 510(k) clearance letter details the substantial equivalence of the SmartPilot YpsoMate NS-A2.25 device to its predicate. While it lists various performance tests and standards met, it does not contain specific acceptance criteria values or detailed study results for metrics like sensitivity, specificity, or improvement in human reader performance. This document primarily focuses on demonstrating that the new device does not raise new questions of safety and effectiveness compared to the predicate, due to similar technological characteristics and adherence to relevant safety standards.

    Therefore, many of the requested details about acceptance criteria, study design (sample size, data provenance, expert adjudication, MRMC studies), and ground truth establishment (especially for AI-driven performance) cannot be extracted directly from this regulatory document. The information primarily pertains to hardware, software, and usability testing.

    However, based on the provided text, here's what can be inferred or stated about the device's acceptance criteria and proven performance:

    Device: SmartPilot YpsoMate NS-A2.25

    Indication for Use: The SmartPilot YpsoMate NS-A2.25 is indicated for use with the compatible disposable autoinjector to capture and record injection information that provides feedback to the user. Specifically compatible with Novartis/Sandoz Secukinumab (Cosentyx).


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not provide a table with quantitative acceptance criteria and reported performance values for metrics typically associated with AI/software performance (e.g., sensitivity, specificity, accuracy of data capture in a clinical context). Instead, it focuses on meeting established engineering, safety, and quality standards.

    Here's a summary of the types of performance criteria implied by the successful completion of the listed tests:

    | Acceptance Criterion (Implied) | Reported Device Performance (Achieved) | Supporting Test / Standard |
    |---|---|---|
    | Biocompatibility | Meets requirements for intact skin contact. | ISO 10993-1, -5, -10, -23 |
    | Compatibility with Autoinjector | No negative impact on Essential Performance Requirements (EPRs) of compatible YpsoMate 2.25ml autoinjector. | ISO 11608-1:2022, ISO 11608-5:2022 (Influence Testing) |
    | Basic Safety | Complies with general safety standards. | IEC 60601-1, Ed.3.2 2020-08 |
    | Electromagnetic Compatibility (EMC) | Complies with EMC standards. | IEC 60601-1-2:2014 incl. AMD 1:2021 |
    | Battery Safety | Complies with battery safety standards. | IEC 62133-2:2017 + A1:2021 |
    | Wireless Communication (FCC) | Complies with FCC regulations for wireless devices. | FCC 47 CFR Part 15B, Part 15.225, Part 15.247 |
    | Wireless Coexistence | Complies with standards for wireless coexistence. | IEEE ANSI USEMCSC C63.27-2021; AIM 7351731:2021 |
    | Software Verification & Validation | Documentation level "enhanced," meets requirements for safety, cybersecurity, and interoperability. Software classified as B per ANSI AAMI ISO 62304:2006/A1:2016. | FDA Guidance on Software Functions, ANSI AAMI ISO 62304, Cybersecurity Testing, Interoperability testing |
    | Electrical Hardware Functionality | BLE, NFC, inductance measurement, electromechanical switches, motion detection, temperature measurement all functional. | Electrical Hardware Requirements Testing |
    | Indicator & Feedback Systems | Visual (LEDs with specified wavelength/intensity) and acoustic (adjustable sound volume) feedback systems are functional. | Electrical Hardware Requirements Testing |
    | Durability & Lifetime | Meets specifications for switching cycles, 3-year storage, 2-year or 120-use operational lifespan, and operational tolerances. | Electrical Hardware Requirements Testing, Lifetime and Shelf Life Testing |
    | Mechanical Integrity | Withstands use force, axial/twisting loads on inserted autoinjector, and maintains locking flag visibility. | Mechanical Testing |
    | Shelf Life | Achieves a 3-year shelf life. | Shelf Life Testing |
    | Human Factors/Usability | Complies with human factors engineering standards; formative and summative usability evaluations completed. | IEC 60601-1-6:2010/AMD2:2020, ANSI AAMI IEC 62366-1:2015 + AMD1 2020 |
    | Transportation Safety | Maintains integrity after transportation simulation. | ASTM D4169-22 |
    | Dose Accuracy (Influence) | Meets ISO 11608-1 requirements when evaluated with compatible YpsoMate AutoInjectors. This is related to the autoinjector's performance when used with the SmartPilot, not the SmartPilot's accuracy in measuring dose itself, as it states the SmartPilot "does not capture dosing information." | Influence Testing based on ISO 11608-1:2022 |

    Note: The device's primary function is to "capture and record injection information that provides feedback to the user," and it "does not capture dosing information" or provide "electronically controlled dosing." Therefore, criteria related to dosing volume accuracy or AI interpretation of medical images/signals for diagnosis are not applicable to this device. The focus is on the accurate capture of event data (injection start/end, result) and providing timely feedback, as well as general device safety and functionality.


    2. Sample Size Used for the Test Set and Data Provenance

    The document describes various types of tests (e.g., Biocompatibility, EMC, Software V&V, Mechanical, Lifetime, Human Factors), but does not specify the sample sizes used for each test dataset.

    Data Provenance: The document does not explicitly state the country of origin for the data or whether the studies were retrospective or prospective. Given that Ypsomed AG is based in Switzerland and the testing references international and US standards, the testing likely involved a mix of internal validation, third-party lab testing, and possibly user studies in relevant regions. All tests described are part of preclinical (non-clinical) performance validation, making them inherently prospective for the purpose of demonstrating device function and safety prior to marketing.


    3. Number of Experts and Qualifications for Ground Truth

    The document does not mention the use of experts in the context of establishing ground truth for the device's functional performance, as it is not an AI-driven diagnostic or interpretative device that relies on human expert consensus for its output. Its performance is evaluated against engineering specifications and physical/software functional requirements. The "Human Factors" testing would involve users, but not necessarily "experts" adjudicating correctness in the sense of accuracy for a diagnostic task.


    4. Adjudication Method for the Test Set

    Not applicable. The device's performance is determined by meeting pre-defined engineering and regulatory standards and testing protocols, not by expert adjudication of its output, as it does not produce subjective or interpretative results like an AI diagnostic algorithm.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    Not performed/applicable. An MRMC study is relevant for AI systems that assist human readers in tasks like image interpretation to demonstrate improved diagnostic accuracy. This device is an "Injection Data Capture Device" providing feedback and recording information; it does not involve human readers interpreting data that the device enhances.


    6. Standalone (Algorithm Only) Performance

    While the device has software and algorithms to detect injection events and provide feedback, the document does not report "standalone" performance metrics in the way an AI diagnostic algorithm would (e.g., sensitivity, specificity). Its performance is demonstrated through the verification and validation of its hardware and software components (e.g., ability to detect spring position, successful data transfer, correct LED/audible feedback). The "Influence Testing" evaluates its performance in conjunction with the autoinjector, proving it does not negatively interfere.


    7. Type of Ground Truth Used

    The ground truth for the verification and validation of this device is engineering specifications, physical measurements, and adherence to established regulatory and industry standards. For example:

    • Biocompatibility: Measured against established thresholds for cytotoxicity, sensitization, and irritation.
    • EMC/Safety: Compliance with current versions of IEC standards.
    • Software V&V: Compliance with software lifecycle processes and cybersecurity standards, and correct execution of defined functions (e.g., data recording, feedback activation).
    • Mechanical/Lifetime: Physical measurements (e.g., activation force, dimension checks), cycle counts, and functional checks after simulated use/aging.
    • Human Factors: User performance and subjective feedback against usability goals.

    There is no "expert consensus," "pathology," or "outcomes data" ground truth in the context of its direct function (data capture and feedback).


    8. Sample Size for the Training Set

    Not applicable. This device is not an AI/machine learning system that requires a "training set" in the conventional sense (i.e., for learning to perform a complex, data-driven task like image recognition or diagnosis). Its functionality is based on programmed logic and sensor readings, not statistical learning from a large dataset.


    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no "training set" for the type of device described. Input signals (e.g., from the inductive sensor about spring position) are processed based on predefined engineering parameters and logical rules to determine injection status, not learned from a dataset.
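
    To make that distinction concrete, the following is a minimal, hypothetical Python sketch of rule-based status detection from a single sensor reading; the thresholds, state names, and function are invented for illustration and are not the SmartPilot's actual firmware logic.

        # Hypothetical thresholds (mm of spring travel); not actual device parameters.
        READY_MM = 2.0
        STARTED_MM = 5.0
        COMPLETE_MM = 18.0

        def injection_status(spring_position_mm: float) -> str:
            """Map an inductively measured spring position to a discrete injection status
            using fixed engineering thresholds rather than a learned model."""
            if spring_position_mm >= COMPLETE_MM:
                return "injection complete"
            if spring_position_mm >= STARTED_MM:
                return "injection in progress"
            if spring_position_mm >= READY_MM:
                return "ready"
            return "idle"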


    K Number: K251604
    Date Cleared: 2025-08-22 (87 days)
    Regulation Number: 866.3987
    Why did this record match? Device Name: CareSuperb COVID-19/Flu A&B Antigen Combo Home Test

    Intended Use

    The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test is a lateral flow immunochromatographic assay intended for the qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens and SARS-CoV-2 nucleocapsid antigens directly in anterior nasal swab samples from individuals with signs and symptoms of respiratory tract infection. Symptoms of respiratory infections due to SARS-CoV-2 and influenza can be similar. This test is for non-prescription home use by individuals aged 14 years or older testing themselves, or adults testing individuals aged 2 years or older.

    All negative results are presumptive and should be confirmed with an FDA-cleared molecular assay when determined to be appropriate by a healthcare provider. Negative results do not rule out infection with influenza, SARS-CoV-2, or other pathogens.

    Individuals who test negative and experience continued or worsening respiratory symptoms, such as fever, cough, and/or shortness of breath, should seek follow up care from their healthcare provider.

    Positive results do not rule out co-infection with other respiratory pathogens and therefore do not substitute for a visit to a healthcare provider for appropriate follow-up.

    Device Description

    The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test is a lateral flow immunoassay intended for the qualitative detection and differentiation of SARS-CoV-2 nucleocapsid antigen, Influenza A nucleoprotein antigen, and Influenza B nucleoprotein antigen from anterior nasal swab specimens.

    The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test utilizes an adaptor-based lateral flow assay platform integrating a conjugate wick filter to facilitate sample processing. Each test cassette contains a nitrocellulose membrane with immobilized capture antibodies for SARS-CoV-2, Influenza A, Influenza B, and internal control. Following specimen application to the sample port, viral antigens, if present, bind to labeled detection antibodies embedded in the conjugate wick filter. The resulting immune complexes migrate along the test strip and are captured at the respective test lines (C19 for SARS-CoV-2, A for Influenza A, and B for Influenza B), forming visible colored lines. A visible control line (Cont) confirms proper sample migration and test validity. The absence of a control line invalidates the test result.

    Each kit includes a single-use test cassette, assay buffer dropper vial, nasal swab, and Quick Reference Instructions (QRI). Test results are visually interpreted 10 minutes after swab removal.

    AI/ML Overview

    The provided document describes the CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test, an over-the-counter lateral flow immunoassay for lay users. The study aimed to demonstrate its substantial equivalence to a predicate device and its performance characteristics for qualitative detection and differentiation of SARS-CoV-2, Influenza A, and Influenza B antigens in anterior nasal swab samples.

    Here's an analysis of the acceptance criteria and the study proving the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    While specific acceptance criteria (i.e., pre-defined thresholds the device must meet for clearance) are not explicitly stated as numbered points in this 510(k) summary, they can be inferred from the reported performance data and common FDA expectations for such devices. The performance data presented serves as the evidence that the device met these implied criteria.

    | Performance Characteristic | Implied Acceptance Criteria (e.g., typical FDA expectations) | Reported Device Performance |
    |---|---|---|
    | Clinical Performance (vs. Molecular Assay) | | |
    | SARS-CoV-2 - Positive Percent Agreement (PPA) | High PPA (e.g., >80-90%) | 92.5% (95% CI: 86.4%-96.0%) |
    | SARS-CoV-2 - Negative Percent Agreement (NPA) | Very high NPA (e.g., >98%) | 99.6% (95% CI: 99.1%-99.8%) |
    | Influenza A - PPA | High PPA (e.g., >80-90%) | 85.6% (95% CI: 77.9%-90.9%) |
    | Influenza A - NPA | Very high NPA (e.g., >98%) | 99.0% (95% CI: 98.4%-99.4%) |
    | Influenza B - PPA | High PPA (e.g., >80-90%) | 86.0% (95% CI: 72.7%-93.4%) |
    | Influenza B - NPA | Very high NPA (e.g., >98%) | 99.7% (95% CI: 99.3%-99.9%) |
    | Analytical Performance | | |
    | Precision (1x LoD) | ≥95% agreement | 99.2% for SARS-CoV-2, 99.2% for Flu A, 99.7% for Flu B (all at 1x LoD) |
    | Precision (3x LoD) | 100% agreement expected at higher concentrations | 100% for all analytes at 3x LoD |
    | Limit of Detection (LoD) | Lowest detectable concentration with ≥95% positive agreement | Confirmed LoDs provided for various strains (e.g., SARS-CoV-2 Omicron: 7.50 x 10^0 TCID₅₀/Swab at 100% agreement) |
    | Co-spike LoD | ≥95% result agreement in presence of multiple analytes | Met for Panel I and II (e.g., 98% for SARS-CoV-2, 97% for Flu A in Panel I) |
    | Inclusivity (Analytical Reactivity) | Demonstrate reactivity with diverse strains | Low reactive concentrations established for a wide range of SARS-CoV-2, Flu A, Flu B strains, with 5/5 replicates positive |
    | Competitive Interference | No interference from high concentrations of other analytes | 100% agreement, no competitive interference observed |
    | Hook Effect | No false negatives at high antigen concentrations | 100% positive result agreement, no hook effect observed |
    | Analytical Sensitivity (WHO Std) | Demonstrate sensitivity using international standard | LoD of 8 IU/Swab with 95% (19/20) agreement |
    | Cross-Reactivity/Microbial Interference | No false positives (cross-reactivity) or reduced performance (interference) | No cross-reactivity or microbial interference observed (100% agreement for positive samples, 0% for negative) |
    | Endogenous/Exogenous Substances Interference | No false positives or reduced performance | No cross-reactivity or interference observed (all target analytes accurately detected) |
    | Biotin Interference | Clearly define impact of biotin; specify concentration for potential interference | False negatives for Influenza A at 3,750 ng/mL and 5,000 ng/mL (important finding for labeling) |
    | Real-time Stability | Support claimed shelf-life | 100% expected results over 15 months, supporting 13-month shelf-life |
    | Transportation Stability | Withstand simulated transport conditions | 100% expected results, no false positives/negatives under extreme conditions |
    | Usability Study | High percentage of correct performance and interpretation by lay users | >98% correct completion of critical steps, 98.7% observer agreement with user interpretation, >94% found instructions easy/test simple |
    | Readability Study | High percentage of correct interpretation from QRI by untrained lay users | 94.8% correct interpretation of mock devices from QRI without assistance |
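
    For readers unfamiliar with the agreement statistics above, the sketch below shows how PPA, NPA, and binomial 95% confidence intervals can be computed. The Wilson score method shown here is a common choice for such tables, but the submission's exact interval method is not stated; the 2x2 counts are hypothetical and are not the study's data.

        from math import sqrt

        def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
            """Two-sided Wilson score interval for a binomial proportion."""
            p = successes / total
            denom = 1 + z**2 / total
            centre = p + z**2 / (2 * total)
            half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
            return (centre - half) / denom, (centre + half) / denom

        # Hypothetical counts against the molecular comparator (not the study's data):
        tp, fn = 124, 10      # device positive / negative among comparator positives
        tn, fp = 1480, 6      # device negative / positive among comparator negatives

        ppa = tp / (tp + fn)  # positive percent agreement
        npa = tn / (tn + fp)  # negative percent agreement
        print(f"PPA = {ppa:.1%}, 95% CI = {wilson_ci(tp, tp + fn)}")
        print(f"NPA = {npa:.1%}, 95% CI = {wilson_ci(tn, tn + fp)}")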

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Clinical Performance Test Set (Human Samples): N=1644 total participants.
      • Self-collecting: N=1447 (individuals aged 14 or older testing themselves)
      • Lay-user/Tester Collection: N=197 (adults testing individuals aged 2-17 years)
    • Data Provenance:
      • Country of Origin: United States ("13 clinical sites across the U.S.").
      • Retrospective/Prospective: The clinical study was prospective, as samples were collected "between November of 2023 and March of 2025" from "symptomatic subjects, suspected of respiratory infection."
    • Analytical Performance Test Sets (Contrived/Spiked Samples): Sample sizes vary per study:
      • Precision Study 1: 360 results per panel member (negative, 1x LoD positive, 3x LoD positive).
      • Precision Study 2: 36 sample replicates/lot (for negative and 0.75x LoD positive samples).
      • LoD Confirmation: 20 replicates per LoD concentration.
      • Co-spike LoD: 20 replicates per panel (multiple panels tested).
      • Inclusivity: 5 replicates per strain (for identifying lowest reactive concentration).
      • Competitive Interference: 3 replicates for each of 19 sample configurations.
      • Hook Effect: 5 replicates per concentration.
      • WHO Standard LoD: 20 replicates for confirmation.
      • Cross-Reactivity/Microbial Interference: 3 replicates per microorganism (in absence and presence of analytes).
      • Endogenous/Exogenous Substances Interference: 3 replicates per substance (in absence and presence of analytes).
      • Biotin Interference: 3 replicates per biotin concentration.
      • Real-time Stability: 5 replicates per lot at each time point.
      • Transportation Stability: 5 replicates per sample type per lot for each condition.
    • Usability Study: 1,795 participants.
    • Readability Study: 50 participants.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Clinical Performance (Reference Method - Test Set Ground Truth): The ground truth for the clinical test set was established using FDA-cleared molecular RT-PCR comparator assays for SARS-CoV-2, Influenza A, and Influenza B.

      • This implies that the "experts" were the established and validated molecular diagnostic platforms, rather than human expert readers/adjudicators for visual interpretation.
    • Usability/Readability Studies:

      • Usability Study: "Observer agreement with user-interpreted results was 98.7%." This suggests trained observers (likely not "experts" in the sense of clinical specialists, but rather study personnel trained in test interpretation as per IFU) established agreement with user results.
      • Readability Study: The study focused on whether lay users themselves could interpret results after reading the QRI. Ground truth for the mock devices would be pre-determined by the device manufacturer based on their design.

    4. Adjudication Method for the Test Set

    • Clinical Performance: No human adjudication method (e.g., 2+1, 3+1) is mentioned for the clinical test set. The direct comparison was made against molecular RT-PCR as the gold standard, which serves as the definitive ground truth for the presence or absence of the viruses. This type of diagnostic test typically relies on a definitive laboratory method for ground truth, not human interpretation consensus.
    • Usability/Readability Studies: The usability study mentioned "Observer agreement with user-interpreted results," implying direct comparison between user interpretation and a pre-defined correct interpretation or an observer's interpretation. The readability study involved participants interpreting mock devices based on the QRI, with performance measured against the pre-determined correct interpretation of those mock devices.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No AI Component: This device (CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test) is a lateral flow immunoassay for visual interpretation. It is not an AI-powered diagnostic device, nor does it have a human-in-the-loop AI assistance component. Therefore, an MRMC study related to AI assistance was not applicable and not performed.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Not Applicable: As this is a visually interpreted antigen test, there is no "algorithm only" or standalone algorithm performance to evaluate. The device's performance is intrinsically linked to its chemical reactions and subsequent visual interpretation by the user (or observer in studies).

    7. The Type of Ground Truth Used

    • Clinical Performance Test Set: FDA-cleared molecular RT-PCR comparator assays (molecular ground truth). This is generally considered a highly reliable and objective ground truth for viral detection.
    • Analytical Performance Test Sets: Generally contrived samples with known concentrations of viral analytes or microorganisms against negative pooled swab matrix. This allows for precise control of the 'ground truth' concentration and presence/absence.
    • Usability/Readability Studies: For readability, it was pre-defined correct interpretations of "mock test devices." For usability, it was observation of correct procedural steps and comparison of user interpretation to trained observer interpretation.

    8. The Sample Size for the Training Set

    • Not explicitly stated in terms of a "training set" for the device itself. As a lateral flow immunoassay, this device is developed through biochemical design, antigen-antibody interactions, and manufacturing processes, rather than through machine learning models that require distinct training datasets.
    • The document describes the analytical studies (LoD, inclusivity, interference, etc.) which inform the device's technical specifications and ensure it's robust. The clinical study and usability/readability studies are typically considered validation/test sets for the final manufactured device.
    • If this were an AI/ML device, a specific training set size would be crucial. For this type of IVD, the "training" analogous to an AI model would be the research, development, and optimization of the assay components (antibodies, membrane, buffer, etc.) using various known positive and negative samples in the lab.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable in the context of a machine learning training set.
    • For the development and optimization of the assay (analogous to training), ground truth would have been established through:
      • Using quantified viral stocks (e.g., TCID₅₀/mL, CEID₅₀/mL, FFU/mL, IU/mL) to precisely spike into negative matrix (PNSM) to create known positive and negative samples at various concentrations.
      • Employing established laboratory reference methods (e.g., molecular assays) to confirm the presence/absence and concentration of analytes in developmental samples.
      • Utilizing characterized clinical samples (if available) with confirmed statuses from gold-standard methods early in development.
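
    As a simple illustration of the dilution arithmetic behind spiking a quantified stock into negative matrix (the first bullet above), a standard C1 x V1 = C2 x V2 calculation applies; the stock titer, target level, and helper function in the Python sketch below are hypothetical.

        def spike_volume_ul(stock_conc_per_ml: float, target_conc_per_ml: float, final_volume_ul: float) -> float:
            """Volume of stock (in microlitres) to add so that the final sample reaches the
            target concentration: C1 * V1 = C2 * V2  =>  V1 = C2 * V2 / C1."""
            return target_conc_per_ml * final_volume_ul / stock_conc_per_ml

        # Hypothetical example: stock titer 1.0e5 TCID50/mL, target of 7.5 TCID50 per 0.5 mL of matrix.
        stock = 1.0e5            # TCID50/mL
        target = 7.5 / 0.5       # = 15 TCID50/mL, so that 0.5 mL contains 7.5 TCID50
        print(spike_volume_ul(stock, target, 500.0))  # -> 0.075 uL (in practice reached via serial dilution)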

    K Number: K251471
    Date Cleared: 2025-08-20 (99 days)
    Regulation Number: 872.3630
    Panel: Dental
    Why did this record match? Device Name: IPD Dental Implant Abutments

    Intended Use

    IPD Dental Implant Abutments are intended to be used in conjunction with endosseous dental implants in the maxillary or mandibular arch to provide support for single or multiple dental prosthetic restorations.

    Device Description

    IPD Dental Implant Abutments is a dental implant abutment system composed of dental abutments, screws, and other dental abutment accessories, intended to be placed into dental implants to provide support for dental prosthetic restorations.

    Abutments provide the basis for single- or multiple-tooth prosthetic restorations. They are available in a variety of connection types to enable compatibility with commercially available dental implant systems.

    IPD Dental Implant Abutments includes the following categories of dental abutment designs:

    • Titanium base (Interface) abutments (INC3D);
    • Multi-Unit abutments (MUA);
    • Overdenture Abutments (PSD);
    • Temporary Abutments (PP);
    • Healing Abutments (TC).

    The system also includes the use of the corresponding screws intended to attach the prosthesis to the dental implant. Specifically:

    • Ti Screw (TT): Used during restoration fabrication.
    • TiN Screw (TTN): Used in finished restorations, with TiN coating.
    • TPA Screw (TPA): Used in finished angulated restorations, with TiN coating.

    The metallic components of the subject abutments and screws are made of titanium alloy conforming to ISO 5832-3 "Implant for surgery – Metallic materials – Part 3: Wrought titanium 6-aluminium 4-vanadium alloy".

    The purpose of this submission is to expand the IPD Dental Implant Abutments offering with:
    • New compatible dental implant systems;
    • New abutment-category-specific angulations;
    • A new in-house TiN coating.

    IPD dental implant abutments and screws are compatible with the following commercially available dental implant systems:
    (Table 2, summarizing IPD abutment categories and compatible OEM implant/abutment systems with the maximum angulations included in this submission, is provided in the original text.)

    Ti Base (Interface) abutments are attached (screw-retained) to the implant/abutment and cemented to the zirconia superstructure.

    The Ti Base is a two-piece abutment composed of the titanium component, as the bottom-half, and the zirconia superstructure, as the top-half. It consists of a pre-manufactured prosthetic component in Titanium alloy per ISO 5832-3, as well as the supporting digital library file for FDA-cleared design software (3Shape Abutment Designer™ Software, cleared under K151455) which enables the design of a patient-specific superstructure by the laboratory/clinician and which will be manufactured in FDA-cleared Zirconia (e.g., DD Bio Z, K142987) according to digital dentistry workflow at the point of care, or at a dental laboratory.

    The design and fabrication of the zirconia superstructure for the Ti Base (Interface) will be conducted using a digital dentistry workflow requiring the following equipment, software, and materials:
    • Scanner: 3D Scanner D850
    • Design software: 3Shape Abutment Designer Software, K151455
    • Zirconia material: DD Bio Z, K142987
    • Milling machine: Dental Concept System, Model DC1 Milling System
    • Cement: Multilink® Automix, K123397

    Ti Base (Interface) abutment design parameters for the zirconia superstructure are defined as follows:
    • Minimum gingival height: 1.5 mm
    • Maximum gingival height: 6.0 mm
    • Minimum wall thickness: 0.43 mm
    • Minimum post height for single-unit restorations: 4.75 mm (1)
    • Maximum angulation of the final abutment: 30° (2)
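
    Purely as an illustration of how the limits above could be applied, the hypothetical Python helper below checks a proposed superstructure design against them; the function and its argument names are invented, and only the numeric limits come from the text.

        def check_superstructure(gingival_height_mm: float,
                                 wall_thickness_mm: float,
                                 post_height_mm: float,
                                 angulation_deg: float,
                                 single_unit: bool) -> list[str]:
            """Return a list of violated limits (an empty list means all limits are met)."""
            problems = []
            if not (1.5 <= gingival_height_mm <= 6.0):
                problems.append("gingival height must be 1.5-6.0 mm")
            if wall_thickness_mm < 0.43:
                problems.append("wall thickness must be >= 0.43 mm")
            if single_unit and post_height_mm < 4.75:
                problems.append("post height must be >= 4.75 mm for single-unit restorations")
            if angulation_deg > 30:
                problems.append("final abutment angulation must be <= 30 degrees")
            return problems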

    The resulting final prosthetic restoration is screwed to the dental implant. All subject abutments are single-use and provided non-sterile. Final restoration (which includes the corresponding screw) is intended to be sterilized at the dental clinic before it is placed in the patient.

    AI/ML Overview

    The provided FDA 510(k) clearance letter pertains to IPD Dental Implant Abutments, a medical device, not an AI/ML-driven software product. Therefore, the information requested regarding acceptance criteria and study data for an AI/ML device (e.g., sample size for test/training sets, expert ground truthing, MRMC studies, standalone performance) is not applicable to this document.

    The document describes the device, its intended use, comparison to predicate devices, and the non-clinical performance testing conducted to demonstrate substantial equivalence. These tests are physical and chemical in nature, not related to the performance of an AI/ML algorithm.

    Here's a breakdown of why an AI/ML-focused response is not possible, based on the provided text:

    • Device Type: The device is "IPD Dental Implant Abutments," which are physical components used in dentistry (titanium alloy abutments, screws, designed for zirconia superstructures). It is not software, a diagnostic imaging tool, or an AI/ML algorithm.
    • Purpose of Submission: The submission aims to expand compatibility with new dental implant systems and include new angulations and in-house TiN coating. This is a modification of a physical medical device, not a new AI/ML development.
    • Performance Data (Section VII): This section explicitly lists non-clinical performance testing such as:
      • Sterilization validation (ISO 17665-1)
      • Biocompatibility testing (Cytotoxicity, Sensitization, Irritation per ISO 10993)
      • Reverse engineering and dimensional analysis for compatibility
      • Validation of the digital workflow and software system (but this refers to the CAD/CAM software used to design the physical abutments, not an AI/ML diagnostic tool)
      • Static and dynamic fatigue testing (ISO 14801)
      • Modified Surfaces Information
      • MRI safety review

    Conclusion:

    The provided document describes a 510(k) clearance for a physical dental implant component. It does not contain any information about the acceptance criteria or study design for an AI/ML driven medical device. Therefore, a table of acceptance criteria and reported device performance related to AI/ML, sample sizes for test/training sets, details on expert ground truthing, MRMC studies, or standalone performance of an algorithm cannot be extracted from this text.


    K Number: K251563
    Date Cleared: 2025-08-20 (90 days)
    Regulation Number: 866.3987
    Why did this record match? Device Name: WELLlife Flu A&B Home Test; WELLlife Influenza A&B Test

    Intended Use

    WELLlife Flu A&B Home Test:
    The WELLlife Flu A&B Home Test is a lateral flow immunochromatographic assay intended for the qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens directly in anterior nasal swab samples from individuals with signs and symptoms of respiratory tract infection. This test is for non-prescription home use by individuals aged 14 years or older testing themselves, or adults testing other individuals aged 2 years or older.

    All negative results are presumptive and should be confirmed with an FDA-cleared molecular assay when determined to be appropriate by a healthcare provider. Negative results do not rule out infection with influenza or other pathogens. Individuals who test negative and experience continued or worsening respiratory symptoms, such as fever, cough and/or shortness of breath, should seek follow-up care from their healthcare provider.

    Positive results do not rule out co-infection with other respiratory pathogens, and therefore do not substitute for a visit to a healthcare provider or appropriate follow-up.

    WELLlife Influenza A&B Test:
    The WELLlife Influenza A&B Test is a lateral flow immunochromatographic assay intended for the qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens directly in anterior nasal swab samples from individuals with signs and symptoms of respiratory tract infection. This test is for use by individuals aged 14 years or older testing themselves, or adults testing other individuals aged 2 years or older.

    All negative results are presumptive and should be confirmed with an FDA-cleared molecular assay when determined to be appropriate by a healthcare provider. Negative results do not rule out infection with influenza or other pathogens. Individuals who test negative and experience continued or worsening respiratory symptoms, such as fever, cough and/or shortness of breath, should seek follow-up care from their healthcare providers.

    Positive results do not rule out co-infection with other respiratory pathogens.

    Test results should not be used as the sole basis for treatment or other patient management decisions.

    Device Description

    The WELLlife Flu A&B Home Test and the WELLlife Influenza A&B Test are lateral flow immunochromatographic assays intended for the qualitative detection and differentiation of influenza A and influenza B protein antigens. The test has two versions, one for over-the-counter (OTC) use (WELLlife Flu A&B Home Test) and one for professional use (WELLlife Influenza A&B Test). Both versions have an identical general design and are intended for the qualitative detection of protein antigens directly in anterior nasal swab specimens from individuals with respiratory signs and symptoms. Results are for the identification and differentiation of nucleoprotein antigen from influenza A virus and nucleoprotein antigen from influenza B virus. The test cassette in the test kit is assembled with a test strip in a plastic housing that contains a nitrocellulose membrane with three lines: two test lines (Flu A line, Flu B line) and a control line (C line). The device is for in vitro diagnostic use only.

    AI/ML Overview

    The provided FDA Clearance Letter for the WELLlife Flu A&B Home Test includes details on the device's performance based on non-clinical and clinical studies. Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria for performance are generally implicit in these types of submissions, aiming for high agreement with a comparative method. The reported performance is presented through Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA).

    Table 1: Acceptance Criteria and Reported Device Performance (Implicit Criteria)

    | Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Influenza A) | Reported Device Performance (Influenza B) |
    |---|---|---|---|
    | Clinical Performance (Agreement) | | | |
    | Positive Percent Agreement (PPA) | High agreement, typically >90% for acute infections [Implied] | 92.4% (95% CI: 87.2%-95.6%) | 91.4% (95% CI: 77.6%-97.0%) |
    | Negative Percent Agreement (NPA) | Very high agreement, typically >98% [Implied] | 100% (95% CI: 99.3%-100%) | 100.0% (95% CI: 99.4%-100%) |
    | Non-clinical Performance (Precision) | | | |
    | Lot-to-Lot Repeatability (1x LoD, positive) | 100% agreement over multiple lots, operators, and days [Implied] | 100% (180/180) | 100% (180/180) |
    | Lot-to-Lot Repeatability (Negative) | 0% false positives [Implied] | 0% (0/180) | 0% (0/180) |
    | Site-to-Site Reproducibility (1x LoD, positive) | Near 100% agreement across sites and operators [Implied] | 97.0% (131/135) | 99.3% (134/135) |
    | Site-to-Site Reproducibility (Negative) | 0% false positives [Implied] | 0% (0/135) for Negative Sample | 0.7% (1/135) for Flu B High Negative (0.1x LoD) |
    | Non-clinical Performance (Analytical Sensitivity) | | | |
    | Limit of Detection (LoD) | Specific concentrations where ≥95% detection is achieved | Ranges from 3.89 x 10^0 to 4.17 x 10^2 TCID50/mL for A strains | Ranges from 1.17 x 10^1 to 1.05 x 10^3 TCID50/mL for B strains |
    | Non-clinical Performance (Analytical Specificity) | | | |
    | Cross-reactivity / Microbial Interference | No cross-reactivity or interference with listed organisms/viruses | 0/3 for all microorganisms/viruses tested | 0/3 for all microorganisms/viruses tested |
    | Endogenous Interfering Substances | No interference with listed substances at specific concentrations | No interference with most substances, except FluMist Quadrivalent Live Intranasal Influenza Virus Vaccine (false positive at high concentrations) | No interference with most substances, except FluMist Quadrivalent Live Intranasal Influenza Virus Vaccine (false positive at high concentrations) |
    | High Dose Hook Effect | No hook effect observed at high viral concentrations | 9/9 positive for Flu A strains | 9/9 positive for Flu B strains |
    | Competitive Interference | Detection of low levels of one analyte in presence of high levels of another | 100% detection for all tested combinations | 100% detection for all tested combinations |
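
    As a worked illustration of the LoD criterion above (the lowest concentration at which at least 95% of replicates are detected): with the 20-replicate confirmatory design described under Study Details, a candidate concentration qualifies only if at least 19 of 20 replicates are positive. The small Python check below uses hypothetical counts.

        def lod_confirmed(positive_replicates: int, total_replicates: int, threshold: float = 0.95) -> bool:
            """True if the detection rate at a candidate concentration meets the >=95% rule."""
            return positive_replicates / total_replicates >= threshold

        print(lod_confirmed(19, 20))  # 0.95 -> True: concentration qualifies as the LoD
        print(lod_confirmed(18, 20))  # 0.90 -> False: test the next higher concentration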

    Study Details

    1. A table of acceptance criteria and the reported device performance

    • See Table 1 above. The acceptance criteria are inferred from what is typically expected for a diagnostic device of this type seeking FDA clearance (e.g., high sensitivity and specificity, consistent performance).

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test Set Sample Size:
      • Clinical Study: 680 evaluable subjects (from 766 enrolled) were used for clinical performance evaluation.
      • Non-clinical Studies: Sample sizes vary by study:
        • Lot-to-Lot Precision: 180 results per sample type (3 lots x 3 operators x 2 replicates x 2 runs per day x 5 days).
        • Site-to-Site Reproducibility: 135 replicates per sample type (3 sites x 3 operators x 5 days).
        • LoD: 20 replicates for confirmatory testing.
        • Analytical Reactivity: Triplicates for initial range finding, then triplicates for two-fold dilutions.
        • Cross-Reactivity/Microbial Interference: 3 replicates per organism/virus.
        • Endogenous Interfering Substances: 3 replicates per substance.
        • High Dose Hook Effect: 9 replicates (across 3 lots).
        • Competitive Interference: 9 replicates for each combination.
    • Data Provenance:
      • Clinical Study: "A prospective study was performed... between January 2025 and March 2025... at six (6) clinical sites." The country of origin is not explicitly stated, but the FDA clearance implies US-based or FDA-accepted international clinical trials. It's a prospective study.
      • Non-clinical Studies: Performed internally at one site (Lot-to-Lot Precision) or at three external sites (Site-to-Site Reproducibility). These are also prospective experimental studies.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • The ground truth for the clinical test set was established using an "FDA-cleared molecular comparator method." This is a laboratory-based, highly sensitive, and specific molecular test, which serves as the gold standard for detecting influenza RNA/DNA.
    • There is no mention of human experts (e.g., radiologists, pathologists) being used to establish the ground truth for this in vitro diagnostic device. The comparator method itself is the "expert" ground truth.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • The document does not describe an adjudication method for conflicting results between the investigational device and the comparator method. Results from the WELLlife Flu A&B Home Test were compared directly to the FDA-cleared molecular comparator method. For an in-vitro diagnostic, typically the molecular comparator is considered the definitive truth.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No MRMC study was performed. This device is a lateral flow immunochromatographic assay, a rapid antigen test that produces visible lines interpreted directly by the user (either a lay user at home or a professional user). It does not involve "human readers" interpreting complex images or AI assistance in the interpretation of results in the way an imaging AI device would. Therefore, this question is not applicable to the WELLlife Flu A&B Home Test.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • This question is primarily relevant for AI/ML-driven software as a medical device (SaMD) where an algorithm provides an output. The WELLlife Flu A&B Home Test is a rapid diagnostic test interpreted visually. Its performance is inherent to the chemical reactions on the test strip, and it's designed for human interpretation (either self-testing or professional use). Therefore, a "standalone algorithm-only" performance study is not applicable in the context of this device's technology. The "device performance" metrics (PPA, NPA) are effectively its standalone performance as interpreted by a human user following instructions.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • The ground truth for the clinical study was an FDA-cleared molecular comparator method (e.g., PCR or equivalent), considered the gold standard for influenza detection.

    8. The sample size for the training set

    • The provided document describes clinical and non-clinical performance evaluation studies. For IVD devices like this one, it's common that the "training set" is not a distinct, formally defined dataset as it would be for a machine learning model. Instead, the device's design, reagent formulation, and manufacturing processes are optimized and validated through iterative development and verification testing (analogous to "training" and "internal validation"). The studies described in this summary are primarily validation studies demonstrating the final product's performance. Therefore, a specific "training set sample size" as one might see for an AI model is not applicable/not explicitly defined in this context.

    9. How the ground truth for the training set was established

    • As mentioned above, for a rapid diagnostic test, there isn't a "training set" in the sense of a machine learning model. Instead, the development process involves:
      • Analytical Validation: Establishing LoD, reactivity, specificity (cross-reactivity, interference) using reference strains, cultured microorganisms, and purified substances with known concentrations and characteristics. This essentially acts as the "ground truth" during the development phase.
      • Design Iteration: The test components (antibodies, membrane, buffer) are optimized to achieve desired sensitivity and specificity against known influenza strains and potential interferents. This iterative process, using well-characterized samples, ensures the device learns (is developed) to correctly identify targets.
      • The FDA-cleared molecular comparator assays serve as the ultimate "ground truth" against which the device's overall clinical performance is measured.

    K Number: K251877
    Date Cleared: 2025-08-15 (58 days)
    Regulation Number: 876.5540
    Why did this record match? Device Name: JMS CAVEO A.V. Fistula Needle Set

    Intended Use

    JMS CAVEO A.V. Fistula Needle Set is intended for temporary cannulation to vascular access for extracorporeal blood treatment for hemodialysis. This device is intended for single use only. The anti-needlestick safety feature aids in prevention of needle stick injuries when removing and discarding the needle after dialysis. The device also has an integrated safety mechanism that is designed to automatically generate a partial occlusion of the internal fluid path and trigger the hemodialysis machine to alarm and shut off if a complete dislodgement of the venous needle from the arm inadvertently occurs. In vitro testing supports that this feature triggers the hemodialysis machine to alarm and shut off.

    Device Description

    The subject device is the JMS CAVEO A.V. Fistula Needle Set (CAVEO) with an anti-needlestick safety feature. The Caveo is predicted to protect patients from the risks associated with venous needle dislodgement (VND) based on bench testing results. It contains an integrated stainless steel torsion spring mechanism and bottom footplate that provides an open fluid path when the AV fistula set is fully cannulated into the access site. When the venous needle becomes completely dislodged from the patient's arm, this mechanism enables the footplate to partially occlude the blood path, generating an increased venous line pressure high enough to trigger an automatic alarm and halt further blood pumping by the hemodialysis machine. In vitro testing supports that this feature triggers the hemodialysis machine to alarm and shut off. Based on bench testing results, this may significantly reduce patient blood loss in the event of a complete VND. The Caveo has a pre-attached anti-stick needle guard for prevention of needlestick injury at the time of needle withdrawal after completion of a hemodialysis procedure.

    In vitro performance testing using the Fresenius 2008K dialysis machine supports the function of the Caveo VND feature with a venous pressure limit set to 200 mmHg in symmetric mode, a maximum dialyzer membrane surface area of 2.5 m2, a minimum blood flow rate of 200 mL/min, a maximum ultrafiltration rate of 4000 mL/hour, and a simulated treatment duration of 8 hours. If a different machine and/or different settings are used, refer to the Directions for Use before introducing the device.
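    To make that tested operating envelope concrete, the following is a minimal sketch that encodes the conditions above and checks a planned prescription against them. The field names and the example prescription are illustrative assumptions, not part of the submission; per the Intended Use, a different machine or different settings call for consulting the Directions for Use.

        # Minimal sketch: comparing a planned prescription against the in vitro
        # test conditions quoted above. Field names and the example prescription
        # are illustrative assumptions, not part of the submission.

        VALIDATED_ENVELOPE = {
            "venous_pressure_limit_mmHg": 200,      # symmetric mode
            "max_dialyzer_surface_area_m2": 2.5,
            "min_blood_flow_rate_mL_min": 200,
            "max_ultrafiltration_rate_mL_h": 4000,
            "max_treatment_duration_h": 8,
        }

        def within_envelope(rx: dict) -> bool:
            """Return True if the planned settings fall inside the tested envelope."""
            return (
                rx["dialyzer_surface_area_m2"] <= VALIDATED_ENVELOPE["max_dialyzer_surface_area_m2"]
                and rx["blood_flow_rate_mL_min"] >= VALIDATED_ENVELOPE["min_blood_flow_rate_mL_min"]
                and rx["ultrafiltration_rate_mL_h"] <= VALIDATED_ENVELOPE["max_ultrafiltration_rate_mL_h"]
                and rx["treatment_duration_h"] <= VALIDATED_ENVELOPE["max_treatment_duration_h"]
            )

        if __name__ == "__main__":
            planned = {  # hypothetical prescription
                "dialyzer_surface_area_m2": 1.8,
                "blood_flow_rate_mL_min": 300,
                "ultrafiltration_rate_mL_h": 1500,
                "treatment_duration_h": 4,
            }
            print("Within tested conditions:", within_envelope(planned))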

    AI/ML Overview

    This document describes the FDA 510(k) clearance for the JMS CAVEO A.V. Fistula Needle Set. This device is a physical medical device, specifically a needle set for hemodialysis, and does not involve Artificial Intelligence (AI). Therefore, many of the requested criteria related to AI/software performance, ground truth establishment, expert adjudication, MRMC studies, and training datasets are not applicable.

    The document primarily focuses on bench testing (in vitro performance) and a simulated clinical usability study to demonstrate device safety and effectiveness.

    Here's a breakdown based on the provided text:

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria are primarily demonstrated through various performance tests, with "Passed" as the reported result. The document doesn't explicitly state numerical acceptance thresholds for all tests (e.g., how much force to depress the footplate is acceptable), but it implies successful completion. For some, such as needle penetration resistance and retraction final lock strength, numerical criteria are provided (a sketch of how such thresholds can be checked follows the table).

    Acceptance Criterion (Test) | Reported Device Performance
    Needle Penetration Resistance, 14G | ≤ 40g (Predicate: ≤ 40g)
    Needle Penetration Resistance, 15G | ≤ 35g (Predicate: ≤ 35g)
    Needle Penetration Resistance, 16G | ≤ 30g (Predicate: ≤ 30g)
    Needle Penetration Resistance, 17G | ≤ 30g (Predicate: ≤ 30g)
    Needle Retention Strength | > 6.0kgf (Predicate: > 6.0kgf)
    Needle Surface (Visual) | No dented/damaged needle (Predicate: No dented/damaged needle)
    Product Leak | No air bubble when subjected to air pressure of 0.40 kgf/cm2 and immersed in water (Predicate: Same)
    Needle Retraction Final Lock Strength | ≤ 2.0kgf (Predicate: ≤ 2.0kgf)
    Connector (Air Tightness, Luer Fit) | Passed (ISO 80369-7 compliant)
    Connection Strength (Tube to Connector/Joint, Tube to Pivot Valve Core) | > 6.0kgf (Predicate: > 3.0kgf for Tube to Connector/Joint, > 6.0kgf for Tube to Hub)
    Leakage by Pressure Decay (Female Luer Lock) | Passed
    Positive Pressure Liquid Leakage (Female Luer Lock) | Passed
    Sub-atmospheric Pressure Air Leakage (Female Luer Lock) | Passed
    Stress Cracking (Female Luer Lock) | Passed
    Resistance to Separation from Axial Load (Female Luer Lock) | Passed
    Resistance to Separation from Unscrewing (Female Luer Lock) | Passed
    Resistance to Overriding (Female Luer Lock) | Passed
    Tube to Connector Pull Test (Female Luer Lock) | Passed
    Luer Lock Cover Open Torque Test (Female Luer Lock) | Passed
    Activation of the Sharps Injury Protection Feature | Passed
    Needle Pushback Strength Test | Passed
    Needle Guard Detachment Strength Test | Passed
    Appearance Check (Caveo) | Passed
    Cover Pull with Hub (Caveo) | Passed
    Air Leak Test (Caveo) | Passed
    Positive Pressure Leak Test (Caveo) | Passed
    Negative Pressure Leak Test (Caveo) | Passed
    Needle Guard Retraction Final Lock Test (Caveo) | Passed
    Tube to Hub Pull Test (Caveo) | Passed
    Cannula to Hub Tensile Test (Caveo) | Passed
    Dimensional Analysis of Footplate to Pivot Valve Core (Caveo) | Passed
    TPE Front & Back Ends Internal Diameter (Y-axis) Measurements (Caveo) | Passed
    TPE Surface Roughness (Caveo) | Passed
    Cannulation at 15- and 45-Degree Angles (Caveo) | Passed
    Occlusion After Taping (Caveo) | Passed
    VND (Venous Needle Dislodgement) Performance | Passed
    Baseline Pressure Comparison (Caveo) | Passed
    Force to Depress the Footplate (Caveo) | Passed
    Mechanical Hemolysis Testing (Caveo) | Passed
    Simulated Clinical Usability Study | Successful
    Transportation Test | Passed
    Human Factors Testing | Passed
    Biocompatibility (Cytotoxicity, Sensitization, Irritation, Hemocompatibility, Pyrogenicity, Acute Systemic Toxicity, Subacute Toxicity, Genotoxicity) | Passed (per ISO 10993 standards)
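    For the entries that carry numeric thresholds, verification amounts to comparing each measurement against its specification. A minimal sketch of that check, reusing three of the thresholds above with hypothetical measured values:

        # Minimal sketch: checking bench measurements against numeric acceptance
        # criteria from the table above. Measured values are hypothetical.

        SPECS = {
            "penetration_resistance_14G_g": ("<=", 40.0),
            "needle_retention_strength_kgf": (">", 6.0),
            "retraction_final_lock_strength_kgf": ("<=", 2.0),
        }

        def passes(spec: tuple, value: float) -> bool:
            op, limit = spec
            return value <= limit if op == "<=" else value > limit

        if __name__ == "__main__":
            measured = {  # hypothetical results
                "penetration_resistance_14G_g": 32.5,
                "needle_retention_strength_kgf": 7.1,
                "retraction_final_lock_strength_kgf": 1.6,
            }
            for name, value in measured.items():
                print(name, "PASS" if passes(SPECS[name], value) else "FAIL")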

    2. Sample Size for Test Set and Data Provenance

    • Test Set (Clinical Trial): 15 subjects (2 females, 13 males).
    • Data Provenance: The document does not explicitly state the country of origin. It describes recruitment from "the general hemodialysis population" and mentions racial/ethnic demographics, but not geographic origin. Given that the company is "JMS North America Corporation" (Hayward, CA), the study was most likely conducted in the US. The study appears to be prospective, as it involved recruitment and device use to confirm safety, performance, and usability.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    This question is largely not applicable as the device is a physical medical device. The "ground truth" for performance is established through bench testing (objective physical measurements) and the success of the device in a simulated clinical setting. There is no mention of human experts establishing a "ground truth" for diagnostic or AI-related interpretations.

    For the simulated clinical usability study, the "ground truth" is whether the device performed as intended and was usable, as observed by clinicians/researchers during the study. The qualifications of those assessing the usability are not specified, beyond the implication that they are competent to conduct a clinical trial for hemodialysis devices.

    4. Adjudication Method for the Test Set

    Adjudication methods (like 2+1, 3+1) are typically used in studies involving human interpretation (e.g., radiology reads) where there might be disagreement in expert opinions needing a tie-breaker. This is not applicable here as:

    • The primary "test set" involves objective performance characteristics (bench testing).
    • The clinical usability study likely involved observing successful function and user feedback, not a diagnostic interpretation needing adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No. An MRMC study is relevant for evaluating the performance of AI systems or diagnostic tools where multiple human readers interpret cases, often with and without AI assistance, to see if the AI improves human performance. This device is a physical hemodialysis needle set, not an AI or diagnostic tool.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Not applicable. This refers to the performance of an AI algorithm on its own. The device is a physical product. Its "standalone" performance is assessed through bench testing (e.g., VND Performance, Mechanical Hemolysis Testing, Needle Penetration Resistance). These tests evaluate the device's inherent functional characteristics independently of human interaction during critical failure modes.

    7. The Type of Ground Truth Used

    • Bench Testing: The ground truth is based on objective physical measurements and engineering specifications (e.g., force measurements, leak tests, dimensional analyses) and on functional success/failure, for example whether the VND feature triggered the alarm (a sketch of this check appears after this list).
    • Simulated Clinical Usability Study: The ground truth is based on observed device performance during simulated use and the successful delivery of hemodialysis without impedance by the device's novel features. This is akin to outcomes data in a controlled simulated environment.
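    As an illustration of the VND pass criterion, the bench judgment reduces to whether the occlusion-induced pressure excursion leaves the machine's venous pressure alarm window. A minimal sketch follows, assuming "200 mmHg symmetric mode" means a window of ±200 mmHg around the running baseline; this interpretation and the sample trace are assumptions, not the Fresenius 2008K implementation or data from this submission.

        # Minimal sketch of the bench-test pass criterion for the VND feature:
        # did the venous line pressure leave the alarm window after dislodgement?
        # The symmetric-window interpretation and the sample trace are assumptions.

        def alarm_triggered(baseline_mmhg: float, trace_mmhg: list, window_mmhg: float = 200.0) -> bool:
            """Return True if any reading deviates from baseline by more than the window."""
            return any(abs(p - baseline_mmhg) > window_mmhg for p in trace_mmhg)

        if __name__ == "__main__":
            baseline = 150.0                      # hypothetical pre-dislodgement venous pressure
            trace = [150.0, 155.0, 240.0, 390.0]  # hypothetical rise after footplate occlusion
            print("Alarm expected:", alarm_triggered(baseline, trace))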

    8. The Sample Size for the Training Set

    Not applicable. The device is a physical product and does not involve AI or machine learning models that require a "training set."

    9. How the Ground Truth for the Training Set was Established

    Not applicable, as there is no training set for an AI model.

    In summary, the provided document details the non-clinical and limited clinical testing of a physical medical device (hemodialysis needle set). The acceptance criteria are largely met through rigorous bench testing demonstrating physical and functional robustness, and a small simulated clinical study confirming usability and safety in a controlled environment. AI-specific criteria are not relevant to this type of device.

    Ask a Question

    Ask a specific question about this device
