
510(k) Data Aggregation

    K Number: K242763
    Manufacturer:
    Date Cleared: 2025-05-02 (232 days)
    Product Code:
    Regulation Number: 880.5440
    Reference & Predicate Devices:
    Device Name: JetCan® Pro Safety Huber Needle

    Intended Use

    The JetCan® Pro Safety Huber Needle is indicated for use in the delivery of fluids and drugs, as well as blood sampling through surgically implanted vascular access ports for up to 24 hours. It is compatible with power injection ports and associated power injection procedures up to 325 psi.

    The JetCan® Pro Safety Huber Needle incorporates a passive safety mechanism, activated upon withdrawal from a port catheter, to help prevent accidental needle sticks and minimize exposure to hazardous fluids.

    Device Description

    The JetCan® Pro Safety Huber Needle is a non-coring Huber needle used to access the septum of a surgically implanted vascular access port for the delivery of intravenous fluids, blood sampling, and power injection of contrast media at up to 5 mL/s and 325 psi.

    The JetCan® Pro Safety Huber Needle is a single use, external communicating device, with direct blood contact and a duration of use of less than or equal to 24 hours.

    AI/ML Overview

    The provided FDA 510(k) clearance letter for the JetCan® Pro Safety Huber Needle primarily focuses on the device's technical characteristics, regulatory compliance, and a comparison to a predicate device. It does not contain information about a study proving the device meets specific acceptance criteria based on AI/ML performance, nor does it refer to human reader studies (MRMC), standalone algorithm performance, or the establishment of ground truth for such studies.

    The document describes performance testing related to the physical and functional aspects of the needle, such as burst pressure, flow rate, and safety mechanism activation. These are engineering and performance specifications, not acceptance criteria for an AI/ML model's diagnostic accuracy or similar AI-driven performance.

    Therefore, I cannot extract the requested information about AI/ML performance acceptance criteria or supporting studies from this document. The device is a medical needle, not an AI/ML-powered diagnostic or assistive tool.

    To illustrate what a response would look like if the document did contain such information, a hypothetical example follows, assuming the JetCan® Pro Safety Huber Needle were an AI-powered device for, e.g., predicting proper needle placement.


    Hypothetical Study Description for an AI-Powered JetCan® Pro Safety Huber Needle (not present in the provided document):

    Assume, for the sake of this hypothetical example, that the "JetCan® Pro Safety Huber Needle" were an AI-powered imaging analysis device designed to identify the optimal insertion point for a Huber needle on a vascular access port, helping to prevent accidental needle sticks and unsuccessful insertions.


    1. Table of Acceptance Criteria and Reported Device Performance (Hypothetical)

    | Performance Metric | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Primary Endpoints | | |
    | Sensitivity (optimal insertion point detection) | ≥ 95% | 96.2% |
    | Specificity (non-optimal insertion point rejection) | ≥ 90% | 91.5% |
    | False positive rate (FPR) per image | ≤ 0.05 | 0.03 |
    | Secondary Endpoints | | |
    | Time to detect (average) | ≤ 3 seconds | 2.1 seconds |
    | Reader overlap with AI (Jaccard index) | ≥ 0.85 | 0.88 |
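    The pass/fail logic implied by such a table reduces to comparing each reported value against a minimum or maximum threshold. A minimal sketch, hypothetical like everything in this example; the metric names, thresholds, and values simply mirror the illustrative table above and are not from the 510(k) record:

```python
# Hypothetical sketch: check reported metrics against acceptance thresholds.
# All names and numbers are illustrative, not from the actual 510(k) record.

ACCEPTANCE = {
    "sensitivity": ("min", 0.95),      # >= 95%
    "specificity": ("min", 0.90),      # >= 90%
    "fpr_per_image": ("max", 0.05),    # <= 0.05
    "time_to_detect_s": ("max", 3.0),  # <= 3 seconds
    "jaccard_overlap": ("min", 0.85),  # >= 0.85
}

REPORTED = {
    "sensitivity": 0.962,
    "specificity": 0.915,
    "fpr_per_image": 0.03,
    "time_to_detect_s": 2.1,
    "jaccard_overlap": 0.88,
}

def evaluate(acceptance, reported):
    """Return {metric: bool} indicating whether each criterion is met."""
    results = {}
    for metric, (kind, threshold) in acceptance.items():
        value = reported[metric]
        results[metric] = value >= threshold if kind == "min" else value <= threshold
    return results

if __name__ == "__main__":
    for metric, passed in evaluate(ACCEPTANCE, REPORTED).items():
        print(f"{metric}: {'PASS' if passed else 'FAIL'}")
```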

    2. Sample Size Used for the Test Set and Data Provenance (Hypothetical)

    • Test Set Sample Size: 500 unique patient images (e.g., ultrasound or fluoroscopic images of vascular access ports).
    • Data Provenance: Retrospective data collected from five major medical centers across the United States (40%), Europe (30%), and Asia (30%). Data was anonymized and de-identified prior to analysis.

    3. Number, Qualifications, and Adjudication Method of Experts for Ground Truth (Hypothetical)

    • Number of Experts: A panel of 5 board-certified interventional radiologists and vascular surgeons.
    • Qualifications of Experts: Each expert had a minimum of 10-15 years of experience in vascular access procedures, with specific expertise in port placement and troubleshooting. They were blinded to the device's performance during ground truth establishment.
    • Adjudication Method: A "3+1" adjudication method was used. Initially, three experts independently reviewed each image and marked the optimal insertion point. If at least two out of three experts agreed on a location, that became the preliminary ground truth. If there was no majority consensus (e.g., 1-1-1 split), a fourth senior expert, blinded to the initial ratings, was brought in as a tie-breaker. All discrepancies were resolved through consensus meetings.
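    The "3+1" rule described above can be sketched as a small function; the function name and labels below are invented for illustration:

```python
from collections import Counter

def adjudicate(primary_votes, tiebreaker=None):
    """Hypothetical '3+1' adjudication: three experts read independently;
    if at least two agree, that label becomes the preliminary ground truth,
    otherwise a fourth, blinded senior expert breaks the 1-1-1 tie."""
    assert len(primary_votes) == 3, "expects exactly three primary reads"
    label, count = Counter(primary_votes).most_common(1)[0]
    if count >= 2:
        return label
    if tiebreaker is None:
        raise ValueError("1-1-1 split requires a fourth-reader tiebreak")
    return tiebreaker

# Majority of the three primary reads wins; otherwise the tiebreaker decides.
print(adjudicate(["site_A", "site_A", "site_B"]))            # site_A
print(adjudicate(["site_A", "site_B", "site_C"], "site_B"))  # site_B
```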

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study (Hypothetical)

    • Yes, an MRMC study was performed.
    • Design: A crossover design was employed where 10 independent vascular access specialists (not involved in ground truth establishment) reviewed the test set images.
      • Phase 1 (Without AI Assistance): Specialists individually identified the optimal insertion point on all 500 images.
      • Phase 2 (With AI Assistance): After a washout period, the same specialists reviewed the same 500 images but with the AI device providing its predicted optimal insertion point. Specialists could accept, reject, or modify the AI's suggestion.
    • Effect Size: The study demonstrated a statistically significant improvement in human reader performance with AI assistance.
      • Improvement in Sensitivity: Human readers' average sensitivity for identifying optimal insertion points increased from 82% (without AI) to 94% (with AI assistance), representing a 12% absolute improvement.
      • Reduction in Time to Decision: The average time taken by human readers to identify the optimal point decreased from 15 seconds (without AI) to 8 seconds (with AI assistance), representing a 47% reduction in time.
      • Reduction in Critical Errors: The number of significant misidentification errors (leading to potential adverse events) decreased by 60% when readers used AI assistance.
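    The effect sizes quoted above reduce to simple arithmetic: an absolute improvement is a difference in percentage points, and a relative reduction is the difference scaled by the baseline. A small sketch with the hypothetical numbers:

```python
def absolute_improvement_pts(before_pct, after_pct):
    """Absolute improvement, in percentage points."""
    return after_pct - before_pct

def relative_reduction_pct(before, after):
    """Relative reduction, as a percentage of the baseline value."""
    return (before - after) / before * 100.0

# Hypothetical MRMC effect sizes from the example above:
print(absolute_improvement_pts(82, 94))      # 12-point gain in sensitivity
print(round(relative_reduction_pct(15, 8)))  # ~47% shorter time to decision
```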

    5. Standalone (Algorithm Only) Performance (Hypothetical)

    • Yes, a standalone performance evaluation was conducted.
    • Metrics: The algorithm's standalone performance on the test set against the established ground truth showed:
      • Sensitivity: 96.2%
      • Specificity: 91.5%
      • Accuracy: 93.8%
      • Area Under the Receiver Operating Characteristic Curve (AUC): 0.97
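    Sensitivity, specificity, and accuracy in a standalone evaluation are all derived from a confusion matrix against the adjudicated ground truth. A minimal sketch with illustrative counts (not the hypothetical study's actual tallies; AUC additionally needs per-case scores, so it is omitted):

```python
def standalone_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),           # true-positive rate
        "specificity": tn / (tn + fp),           # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only:
m = standalone_metrics(tp=48, fp=5, tn=45, fn=2)
print(m)  # sensitivity 0.96, specificity 0.90, accuracy 0.93
```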

    6. Type of Ground Truth Used (Hypothetical)

    • Expert Consensus: The ground truth was established by the consensus of multiple, highly experienced interventional radiologists and vascular surgeons through a detailed, adjudicated review process of imaging data.

    7. Sample Size for the Training Set (Hypothetical)

    • Training Set Sample Size: 20,000 unique patient images (e.g., ultrasound and fluoroscopic images) of vascular access ports.

    8. How Ground Truth for the Training Set Was Established (Hypothetical)

    • Hybrid Approach:
      • Initial Annotation: A team of trained clinical annotators (e.g., medical imaging technicians or nurses with vascular access experience) performed initial annotations of optimal insertion points on all 20,000 images under the supervision of a senior radiologist.
      • Expert Review/Correction: A subset of 2,000 (10%) randomly selected images from the training set, along with all images flagged as challenging or ambiguous by the annotators, underwent expert review by two board-certified interventional radiologists. Discrepancies were resolved through discussion to refine the ground truth.
      • Automated Quality Control: Automated scripts were used to check for consistency in annotations (e.g., size, shape, location of marked areas) and flag outliers for further manual review.
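    The automated quality-control step could be as simple as a statistical outlier check on annotation geometry. A hypothetical sketch; the z-score rule and threshold are assumptions for illustration, not something the document specifies:

```python
from statistics import mean, stdev

def flag_outliers(areas, z_threshold=3.0):
    """Return indices of annotations whose marked-region area deviates more
    than z_threshold sample standard deviations from the batch mean."""
    mu, sigma = mean(areas), stdev(areas)
    if sigma == 0:
        return []  # all annotations identical; nothing to flag
    return [i for i, a in enumerate(areas) if abs(a - mu) / sigma > z_threshold]

# One wildly oversized annotation among twenty typical ones gets flagged:
print(flag_outliers([10.0] * 20 + [100.0]))  # [20]
```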