
510(k) Data Aggregation

    K Number: K031292
    Date Cleared: 2003-05-22 (29 days)
    Regulation Number: 876.4620
    Intended Use

    The Fossa Ureteral Stone Sweeper is indicated for use as an indwelling ureteral catheter to promote drainage of urine from the kidney to the bladder, and/or for the manipulation, capture and removal of urinary calculi.

    Device Description

    The Fossa Ureteral Stone Sweeper set consists of a flexible, pigtail-tipped, self-expanding stent with an insertion sheath, a "pusher," and an optional pre-attached suture to facilitate stent removal. The stent is offered in various diameters and working lengths. Short slits along the working length of the stent allow radial expansion of the device, permitting fluid flow through and around the device and stone passage into the stent's inner lumen.

    AI/ML Overview

    The provided text does not contain specific acceptance criteria, quantifiable performance metrics, a multi-reader multi-case (MRMC) comparative effectiveness study, or a standalone algorithm performance study for the Fossa Ureteral Stone Sweeper.

    This submission is a Special 510(k): Device Modification, meaning it relies primarily on demonstrating substantial equivalence to a predicate device (Fossa Ureteral Stone Sweeper, K021602) and compliance with design control requirements and risk analysis. The performance testing mentioned relates to material and mechanical properties, not clinical efficacy or diagnostic accuracy.

    Therefore, the following information cannot be extracted from the provided document:

    • A table of acceptance criteria and reported device performance related to clinical outcomes or diagnostic accuracy.
    • Sample size used for a test set, its data provenance, or the number/qualifications of experts for ground truth.
    • Adjudication method for a test set.
    • Information about a multi-reader multi-case (MRMC) comparative effectiveness study or its effect size.
    • Information about a standalone (algorithm only) performance study.
    • Type of ground truth used (expert consensus, pathology, outcomes data).
    • Sample size for a training set or how its ground truth was established.

    Here's what can be extracted regarding the "acceptance criteria" and "study" as presented in the document, albeit in a different context than a typical AI/diagnostic device study:

    1. A table of acceptance criteria and the reported device performance

    The document mentions "Performance testing conducted in support of this submission includes dimensional inspection, elongation/yield and tensile strength testing, and lubricity evaluation." However, it does not provide specific acceptance criteria values (e.g., "tensile strength must be > X N") or the corresponding reported device performance values. The conclusion states the device "has been shown to be safe and effective for its intended use" based on these tests and comparison to the predicate, implying these tests met internal acceptance criteria for substantial equivalence.

    Acceptance Criteria Category | Reported Device Performance
    Dimensional Inspection       | Passed (implied)
    Elongation/Yield Testing     | Passed (implied)
    Tensile Strength Testing     | Passed (implied)
    Lubricity Evaluation         | Passed (implied)

    Note: The document explicitly states these tests were performed but does not list specific numerical criteria or results; "Passed (implied)" is an inference from the overall conclusion of safety and effectiveness.
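
    For illustration only, here is a minimal sketch of how bench-test results of this kind could be checked against acceptance thresholds. The test names, measured values, and limits below are invented placeholders; the submission itself reports none of these figures.

    # Hypothetical sketch only: the 510(k) summary does not disclose numerical
    # acceptance criteria or test results, so every value and threshold below
    # is a placeholder, not data from the submission.

    # test name -> (measured value, acceptance limit, unit)
    HYPOTHETICAL_RESULTS = {
        "tensile_strength": (52.0, 45.0, "N"),
        "elongation_at_yield": (12.5, 10.0, "%"),
        "lubricity_friction_coeff": (0.08, 0.10, "-"),  # lower is better here
    }

    def evaluate(results):
        """Compare each measurement to its limit and report pass/fail."""
        all_pass = True
        for name, (measured, limit, unit) in results.items():
            # The friction-coefficient placeholder passes at or below its limit;
            # the strength/elongation placeholders pass at or above theirs.
            passed = measured <= limit if name == "lubricity_friction_coeff" else measured >= limit
            all_pass = all_pass and passed
            print(f"{name}: {measured} {unit} (limit {limit} {unit}) -> {'PASS' if passed else 'FAIL'}")
        return all_pass

    if __name__ == "__main__":
        evaluate(HYPOTHETICAL_RESULTS)

    Running the sketch simply prints a pass/fail line per test; in the actual submission, the corresponding evidence would be the bench-test reports compared against the manufacturer's internal specifications.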

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    Not applicable. The performance testing described (dimensional, tensile, lubricity) would involve material samples, not a clinical "test set" of patients or images. Therefore, data provenance in a clinical sense is not relevant here.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable, as there is no mention of a "test set" requiring expert ground truth in the context of clinical or diagnostic performance. The testing pertains to the physical and mechanical properties of the device.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable, as there is no "test set" requiring adjudication.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No. The device is a ureteral stent/retrieval basket, not an AI or diagnostic system. Therefore, an MRMC study is not relevant to this submission.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

    No. This is a physical medical device, not an algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the mechanical and material tests, the "ground truth" would be established by standard engineering and material science testing methods and specifications (e.g., ASTM standards for tensile strength). There is no "expert consensus," "pathology," or "outcomes data" in the traditional sense for these types of tests.

    8. The sample size for the training set

    Not applicable. This device does not involve a "training set" in the context of AI or machine learning.

    9. How the ground truth for the training set was established

    Not applicable.
