Search Results

Found 166 results

510(k) Data Aggregation
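
Each result below repeats the same handful of fields: K number, clearance date, the review time in days, regulation number, device name where listed, intended use, device description, and predicate relationships. As a minimal sketch only, here is one way such a record could be modeled; the class and field names are hypothetical (not this site's actual schema), and the date helper simply back-computes an approximate receipt date on the assumption that the "(N days)" figure is the time from receipt to decision.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List, Optional


@dataclass
class Clearance510k:
    """One entry from the listing below (hypothetical schema)."""
    k_number: str
    date_cleared: date
    review_days: int                 # the "(N days)" shown next to the clearance date
    regulation_number: str
    device_name: Optional[str] = None
    intended_use: str = ""
    device_description: str = ""
    predicates: List[str] = field(default_factory=list)

    @property
    def approx_date_received(self) -> date:
        # If "(N days)" is the decision time, the submission was received
        # roughly N days before clearance.
        return self.date_cleared - timedelta(days=self.review_days)


# First record in this listing:
hopkins = Clearance510k(
    k_number="K223923",
    date_cleared=date(2023, 3, 30),
    review_days=90,
    regulation_number="876.1500",
)
print(hopkins.approx_date_received)  # 2022-12-30
```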

    K Number: K223923
    Date Cleared: 2023-03-30 (90 days)
    Regulation Number: 876.1500
    Intended Use

    For telescopes with diameters ranging from 3 mm to 5 mm:
    The HOPKINS Telescopes are intended to provide visualization during laparoscopy and general surgery in adults and pediatrics.
    For telescopes with diameters ranging from 5.5 mm to 11 mm:
    The HOPKINS Telescopes are intended to provide visualization during laparoscopy and general surgery in adults.

    Device Description

    The HOPKINS Telescopes are rigid telescopes that use rod lens technology. The lens sits at the distal end of the telescope's shaft, and the proximal end of the shaft is attached to the eyepiece. Throughout the central lumen of the HOPKINS Telescopes, optical glass rods transmit and magnify the image received from the lens. The HOPKINS Telescopes are available with 0°, 6°, 30°, and 45° directions of view, diameters ranging from 3 mm to 11 mm, and working lengths from 18 cm to 50 cm.

    AI/ML Overview

    The provided text describes the submission of a 510(k) summary for the HOPKINS Telescopes, comparing it to a predicate device. It details non-clinical performance data and references published literature for clinical performance, but it does not contain information about acceptance criteria or a specific study proving the device meets those criteria, especially in the context of an AI/ML device.

    The device described, "HOPKINS Telescopes," is a rigid endoscope that utilizes rod lens technology for visualization during surgery. There is no mention of AI or machine learning components. Therefore, many of the requested points, such as sample sizes for test/training sets, expert ground truth adjudication, MRMC studies, and standalone AI performance, are not applicable to the information provided in this document.

    However, based on the non-clinical performance data section, we can infer some "acceptance criteria" related to standards compliance and general performance.

    Here's an attempt to answer the questions based only on the provided text, recognizing that it does not pertain to AI/ML device performance:

    1. A table of acceptance criteria and the reported device performance

    Since this is not an AI/ML device, the "acceptance criteria" are related to compliance with recognized standards and successful bench testing. The document states:
    "the HOPKINS Telescopes has met all its design specification and is substantially equivalent to its predicate device."

    | Criteria Type | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- | --- |
    | Functional Standards (Endoscopy) | Compliance with ISO 8600-1, ISO 8600-3, ISO 8600-5, ISO 8600-6 | Met these standards. |
    | Biocompatibility | Compliance with ISO 10993-5, ISO 10993-10, ISO 10993-11 | Passed Cytotoxicity, Acute Systemic Toxicity, Intracutaneous Irritation, and Maximization Sensitization tests. |
    | Electrical & Thermal Safety | Compliance with IEC 60601-2-18:2009 (3rd Edition) | Met this standard. |
    | Reprocessing (Cleaning & Sterilization) | Compliance with AAMI TIR12, TIR30, ST8, ST77, ST79, ST81, ISO 14937, ISO 17665-1 | Met these standards during reprocessing validation. |
    | General Performance | Substantial equivalence to predicate device in bench testing | Demonstrated substantial equivalence; met all design specifications. |

    2. Sample size used for the test set and the data provenance

    The document does not specify a "test set" in the context of data for an AI/ML model. The evaluation involves non-clinical bench testing and comparison to a predicate device. No specific sample sizes for these tests are mentioned, nor is there information about data provenance (country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This is not applicable as the device is a physical endoscope, not an AI/ML diagnostic tool requiring expert ground truth for image interpretation.

    4. Adjudication method for the test set

    Not applicable.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not applicable, as this is not an AI-assisted device.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Not applicable, as this is not an AI/ML algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the non-clinical testing, the "ground truth" would be established by physical measurements, chemical analyses, and adherence to the performance parameters specified in the referenced standards. For the pediatric indication expansion, the ground truth is based on "published literature" supporting safety and effectiveness, which implicitly relies on clinical outcomes data.

    8. The sample size for the training set

    Not applicable, as this is not an AI/ML device with a training set.

    9. How the ground truth for the training set was established

    Not applicable.


    K Number: K202957
    Date Cleared: 2020-10-29 (29 days)
    Regulation Number: 876.1500
    Intended Use

    The Flexible Video Cysto-Urethroscope C-VIEW is used to provide visualization and operative access during diagnostic and therapeutic endoscopic procedures of urinary tract including the urethra, bladder, ureters, and kidneys.

    Device Description

    The Flexible Video Cysto-Urethroscope C-VIEW (Part Number: 11272VUE) is intended to be used with multiple compatible CCUs: the C-MAC (8403ZX, cleared via K182186) and the C-HUB II (20290301, cleared via K182186). As with the predicates, the scope cannot be operated on its own: it is a videoscope whose image data output is provided as video signals sent to the CCU for decoding and display. When the scope is connected to a compatible CCU, it becomes the Flexible Video Cysto-Urethroscope C-VIEW System, which provides visualization and operative access in the urinary tract.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a medical device, the Flexible Video Cysto-Urethroscope (C-view). This type of submission aims to demonstrate substantial equivalence to a legally marketed predicate device, rather than proving efficacy through extensive clinical trials as would be required for a PMA. As such, the information typically provided to meet acceptance criteria, specifically for software-based medical devices or AI/ML-driven devices, is not present here.

    Based on the provided document, the "device" in question is a hardware endoscope, not an AI/ML-driven device or a device that typically has "acceptance criteria" for software performance. The document focuses on demonstrating substantial equivalence to predicate devices through technical specifications, physical characteristics, and compliance with recognized standards.

    Therefore, many of the requested items (e.g., sample size for test/training sets, experts for ground truth, MRMC study, standalone algorithm performance, ground truth type) are not applicable to this submission as they pertain to the evaluation of AI/ML software or diagnostic image processing, which is not the primary function of this endoscope.

    However, I can extract and interpret the available information relevant to device performance and the study conducted.

    Here's an analysis of the provided information in relation to your request:

    Acceptance Criteria and Device Performance (Interpreted)

    Since this is a hardware device submission focused on substantial equivalence, the "acceptance criteria" are implicitly defined by the technical specifications and performance characteristics compared to the predicate devices. The "study" proving it meets these involves non-clinical bench testing and adherence to recognized standards.

    Table 1: Acceptance Criteria (Implied) and Reported Device Performance

    | Feature/Metric | Acceptance Criteria (Implied by Predicate/Standards) | Reported Device Performance (Subject Device) |
    | --- | --- | --- |
    | Device Type | Flexible Video Cysto-Urethroscope | Flexible Video Cysto-Urethroscope |
    | Insertion Shaft Diameter | Similar to predicates (5.5 mm) | 5.2 mm |
    | Insertion Shaft Length | Same as primary predicate (37 cm) | 37 cm |
    | Working Channel Diameter | Similar to primary predicate (2.3 mm) | 2.3 mm |
    | Suction Port/Channel | Present (via working channel) | Via working channel |
    | Deflection (°) - Up | Similar to predicates (210°) | 210° |
    | Deflection (°) - Down | Similar to predicates (140°) | 140° |
    | Type of Imager | CMOS | CMOS |
    | Field of View | Similar to predicates (100°) | 100° |
    | Direction of View | Similar to predicates (0°) | |
    | Depth of Field | Similar to predicates (5-50 mm) | 5-50 mm |
    | On-axis Resolution (minimal) | Comparable to predicates (e.g., 2.5 lp/mm @ 3 mm, 40 lp/mm @ 50 mm for primary predicate) | 1.8 lp/mm @ 5 mm, 16 lp/mm @ 50 mm |
    | Light Source | Internal LED | Internal LED |
    | Compatible CCUs | Ability to connect to compatible CCUs | C-MAC (8403ZX), C-HUB II (P/N 20290301) |
    | Electrical Safety | Compliance with IEC 60601-1, IEC 60601-1-2, etc. | Compliant |
    | EMC | Compliance with IEC 60601-1-2 | Compliant |
    | Biocompatibility | Compliance with ISO 10993 | Compliant |
    | Performance (Optical) | Color Reproduction & Color Contrast, Illumination Detection Uniformity, Spatial Resolution & Depth of Field, Latency, Distortion, Signal-to-Noise Ratio (SNR) & Sensitivity | Tested and met design specifications |
    | Reprocessing Validation | Compliance with FDA Guidance Document | Compliant |

    Note on "Acceptance Criteria" for this specific device: For a 510(k) of a hardware device like an endoscope, acceptance criteria are generally not explicitly stated as performance thresholds in the same way they would be for an AI/ML algorithm (e.g., "sensitivity must be >X%"). Instead, the acceptance criteria are implicitly met by demonstrating that the device is as safe and effective as the predicate devices and meets applicable consensus standards. The "study" mainly involves engineering tests and comparisons.

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • Not applicable in the context of a typical "test set" for an AI/ML algorithm. The "test set" for this hardware device primarily refers to the physical units subjected to bench testing and measurements. No specific sample size (e.g., N=X images or patient cases) is provided as it's not a software performance study.
      • Data Provenance: The "data" comes from engineering measurements and tests performed on the device itself, not from patient studies or image datasets.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

      • Not applicable. "Ground truth" in the sense of expert annotation for images or clinical outcomes is not relevant here. The "ground truth" for the device's physical and optical properties is established by objective engineering measurements and validated against industry standards and the predicate device's specifications.
    3. Adjudication Method for the Test Set:

      • Not applicable. Adjudication methods (e.g., 2+1, 3+1) are used for resolving disagreements among human readers/annotators in clinical studies, particularly for creating ground truth for AI algorithms. This is a hardware validation.
    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:

      • No. An MRMC study is a clinical study design used to evaluate the diagnostic performance of a system (often AI-assisted) by comparing multiple readers' interpretations of cases. This device is an endoscope providing visualization; it's not an AI-based diagnostic tool. Therefore, an MRMC study was not performed, nor is it relevant to demonstrate substantial equivalence for this type of device.
    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

      • Not applicable. This is a hardware device (an endoscope). There is no "algorithm only" performance to be evaluated. Its function is to provide real-time visualization for a human operator.
    6. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.):

      • For this hardware device, the "ground truth" for its performance characteristics (e.g., resolution, field of view, deflection angles) is established by objective engineering measurements and adherence to recognized industry standards (e.g., IEC, ISO standards for electrical safety, biocompatibility, optical performance). The comparison to the predicate devices also serves as a benchmark for "ground truth" in terms of what is "safe and effective."
    7. The Sample Size for the Training Set:

      • Not applicable. There is no AI/ML algorithm training involved with this hardware device.
    8. How the Ground Truth for the Training Set Was Established:

      • Not applicable. As there is no training set, this question is not relevant.

    In summary: The provided document is a 510(k) submission for a conventional medical device (an endoscope). The "acceptance criteria" and "study" described are focused on engineering performance, safety, and substantial equivalence to existing predicate devices, rather than the rigorous statistical evaluation and ground truth establishment typically required for AI/ML-driven diagnostic software.


    K Number: K200318
    Date Cleared: 2020-10-22 (258 days)
    Regulation Number: 874.4250
    Predicate For: N/A
    Intended Use

    The UNIDRIVE® S III ENT system consists of an active control unit used in conjunction with the High-Speed Micro Motor and DrillCut-X® II Shaver handpieces. The system is intended for use by qualified surgeons to provide controlled cutting, drilling, debriding, sawing, and shaving for the ablation, excision, removal, or transection of tissue or bone during head, neck, ENT, or otoneurological surgical procedures.

    Device Description

    The UNIDRIVE S III ENT is a motorized surgical device system used for the excision, ablation, removal, or transection of bones/tissues during head, neck, ENT, or otoneurological surgical procedures. The system components include a control unit used in conjunction with the High-Speed Micro Motor that houses the high-speed handpieces and the DrillCut-X® II Shaver handpieces. The modification made to the UNIDRIVE® S III ENT system is the addition of the DrillCut-X® II-35 and DrillCut-X® II-35 N Shaver handpieces. Additional accessories used with the UNIDRIVE S III ENT system include the shaver blades, sinus burrs, and the sinus burr 35k.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the UNIDRIVE S III ENT system. This document focuses on demonstrating substantial equivalence to a predicate device, rather than presenting a study for a new AI/CAD device. Therefore, many of the requested criteria related to AI/CAD system performance, such as human reader improvement with AI assistance, sample sizes for AI training/test sets, expert qualifications for ground truth, and adjudication methods, are either not applicable or not found in this document.

    However, I can extract the information relevant to the device's technical performance and the non-clinical testing performed to demonstrate its acceptance criteria.

    1. A table of acceptance criteria and the reported device performance

    The document doesn't present acceptance criteria in a quantitative table with specific performance metrics for the device itself (like accuracy, sensitivity, specificity for an AI system). Instead, its acceptance appears to be based on adherence to recognized consensus standards and successful completion of specific non-clinical tests to demonstrate safety and effectiveness, and substantial equivalence to a predicate device.

    Here's a summary of the non-clinical performance data provided, which serves as the "acceptance criteria" through compliance:

    | Acceptance Criteria Category (Testing Type) | Reported Device Performance / Compliance |
    | --- | --- |
    | Electrical Safety Testing | Passed; certified to be Class I protection against electrical shock. |
    | EMC Testing | Passed. |
    | Consensus Standards Compliance | Follows FDA-recognized consensus standards: IEC 60601-1, IEC 60601-1-2, ISO 14971, ISO 10993. |
    | Cleaning and Sterilization Validations (for patient-contacting components) | Conducted; complies with ANSI/AAMI ST81:2004/(R)2010, ANSI/AAMI ST79:2010/A4:2013, AAMI TIR 12:2010, ANSI/AAMI/ISO 17665-1:2006, ANSI/AAMI/ISO 17665-2:2009, and AAMI TIR 39:2009. |
    | Bench Testing (to meet design specifications) | Performed, specifically "Inspection of rotation speed and torque"; bench testing demonstrated substantial equivalence to the predicate device. |
    | Risk Evaluation on Modification | Conducted; concluded that the differences do not raise new questions of safety and effectiveness. |
    | Biological Evaluation | Conducted and summarized. |

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    Not applicable. This device is a surgical drill system, not an AI/CAD system evaluated on patient data. The "test set" here refers to the device itself undergoing various engineering and biological safety tests. No patient data or associated provenance is mentioned.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable. Ground truth in the context of AI/CAD systems usually refers to verified diagnoses or findings from medical images. For this surgical device, "ground truth" would be established by engineering standards, material science, and biological safety standards, which are inherent to the testing methodologies used. The document does not specify the number or qualifications of experts involved in the direct testing of the device, beyond implying qualified personnel conducting the tests and regulatory bodies reviewing the submissions.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. Adjudication methods are typically used to resolve discrepancies in expert interpretations for establishing ground truth in AI/CAD image analysis studies. This document discusses mechanical, electrical, and biological testing of a surgical device.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not applicable. An MRMC study is relevant for AI/CAD systems that assist human interpretation of medical images. This document describes a surgical drill system.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Not applicable. This device is a surgical hardware system, not an AI algorithm. Its performance is inherent to its mechanical and electrical function.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The "ground truth" for this device's acceptance is established by engineering standards, material science, and biological safety standards, as demonstrated through the various non-clinical tests (electrical safety, EMC, cleaning/sterilization validity, rotation speed/torque inspection, biological evaluation) and compliance with national and international standards (IEC, ISO, ANSI/AAMI).

    8. The sample size for the training set

    Not applicable. This document describes a hardware device, not an AI algorithm that requires a training set.

    9. How the ground truth for the training set was established

    Not applicable. As noted above, this is not an AI/CAD algorithm.


    K Number: K201135
    Date Cleared: 2020-09-01 (126 days)
    Regulation Number: 876.1500
    Intended Use

    The Image 1 S is a camera control unit (CCU) for use with camera heads or video endoscopes for the visualization, image recording and documentation during general endoscopic and microscopic procedures.

    The Image1 S 4U camera head is intended to be used with the Image1 S Camera Control Unit (CCU) and a compatible endoscope for visualization, image recording, and documentation during general endoscopic and microscopic procedures.

    Device Description

    The Image1 S camera control unit is a medical device which consists of the Image1 S Connect (TC200US), Image1 S Connect II (TC201US) modules and the link modules. The link modules are Image1 S H3-Link (TC300US), Image1 S X-Link (TC301US), Image1 S D3-Link (TC302US) and Image1 S 4U-Link (TC304US).

    The Image1 S Connect (TC200US) and Image1 S Connect II (TC201US) modules can be connected to a minimum of one and a maximum of three link modules. This modularity enables customers to customize their Image1 S system to their specific current and future video needs.

    The Image1 S includes, but not limited to, the following features:
    • Brightness control
    • Enhancement Control
    • Automatic Light Source Control
    • Shutter Control
    • Image/Video Capture

    When the Image1 S Connect II module is used with the 4U-Link and the Image1 S 4U camera head, it can output a 4K image to the monitor and also offers 7 increments of zoom ranging from 1x to 2.5x.

    The software of the Image1 S camera control unit is upgraded to version 4.0. Software version 4.0 introduces the KS HIVE, an Ethernet-based interface that allows communication between the Image1 S camera control unit and certain KARL STORZ devices.

    AI/ML Overview

    This is a premarket notification (510(k)) for an endoscopic video imaging system, the Image1 S CCU and Image1 S 4U Camera Head. The primary purpose of this 510(k) is to demonstrate that the new device is substantially equivalent to a previously cleared predicate device (Image1 S, K160044). Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" in this context refer to the testing and analysis performed to demonstrate this substantial equivalence, rather than a clinical outcome-based performance study typically associated with AI/ML devices.

    Here's the breakdown of the information requested, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    Since this is a substantial equivalence submission for an imaging system rather than an AI/ML diagnostic or prognostic device, the "acceptance criteria" are related to safety, effectiveness, and technological characteristics compared to the predicate device. The "reported device performance" are the results of the non-clinical tests demonstrating these aspects.

    | Acceptance Criteria Category | Specific Criteria/Test | Reported Device Performance / Outcome |
    | --- | --- | --- |
    | Optical Performance | Comparative testing of optical parameters (e.g., imager type, sensor resolution, zoom capabilities) | Both subject and predicate use a CMOS imager. Sensor resolution increased from 1920x1080p (predicate) to 3840x2160p (subject). Zoom capabilities expanded from 1x, 1.2x, 1.5x, 1.75x, 2x (predicate) to include 2.25x, 2.5x, and "Adaptive Zoom" (subject). |
    | Software Performance | Software verification and validation testing, including compliance with the FDA guidance for "Software Contained in Medical Devices" | Software version upgraded from 2.4 (predicate) to 4.0 (subject), introducing KS HIVE (an Ethernet-based interface). Testing demonstrated the software functions as intended and is safe. (Level of concern: Moderate) |
    | Electrical Safety & EMC | Compliance with international standards: IEC 60601-1, IEC 60601-1-2, IEC 60601-2-18 | Device complies with all listed electrical safety and electromagnetic compatibility standards. |
    | Reprocessing (Cleaning & Sterilization) | Validation of cleaning and sterilization for the Image1 S 4U camera head against specified standards (e.g., ANSI/AAMI/ISO 14937, AAMI TIR 12, ANSI/AAMI ST81, ST79, ST58, ISO 14161) | Reprocessing data submitted is in compliance with all relevant standards. |
    | Design Specifications | Bench testing to ensure the device meets all its design specifications | Bench testing verified and validated that the Image1 S met all its design specifications. |
    | Substantial Equivalence | Demonstrated through differences that do not raise new questions of safety and effectiveness compared to the predicate device | Conclusions from all technical and performance tests demonstrated the subject device is as safe and effective as the predicate, and the differences do not raise new questions of safety and effectiveness. |

    2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: The document does not specify a "test set" in the context of clinical images or patient data that would typically be used for AI/ML performance evaluation. The testing primarily involved non-clinical bench testing, software verification/validation, and compliance with standards. Therefore, the "sample size" would refer to the various components and systems tested, but no numerical count is provided for this.
    • Data Provenance: Not applicable as no clinical patient data was used for performance evaluation in this 510(k). The testing is primarily engineering and regulatory compliance based.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of Experts & Qualifications: Not applicable. Ground truth, in the sense of clinical annotations by experts for AI/ML performance, was not established as this is not an AI/ML diagnostic device with a clinical performance study. The "ground truth" here is compliance with engineering standards, design specifications, and the functionality of the device as tested by engineers and technicians.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Adjudication Method: Not applicable. There was no clinical ground truth established requiring expert adjudication.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No, an MRMC comparative effectiveness study was not done. This device is an endoscopic video imaging system, not an AI/ML diagnostic assistant, so such a study is not relevant to this submission. The document explicitly states: "Clinical testing was not required to demonstrate the substantial equivalence to the predicate devices. Non-clinical bench testing was sufficient to establish the substantial equivalence of the modifications."

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Not applicable. This is not an AI/ML algorithm. The "standalone performance" refers to the device's functional operation (imaging, brightness control, zoom, etc.) as detailed in the non-clinical performance data and bench testing, which was indeed performed without human-in-the-loop diagnostic interpretations.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Type of Ground Truth: For this device, the "ground truth" used is primarily engineering specifications, compliance with recognized international standards (e.g., IEC 60601 series, ANSI/AAMI reprocessing standards), and the functional design requirements of the imaging system. It's not clinical ground truth like pathology or expert consensus on disease.

    8. The sample size for the training set

    • Training Set Sample Size: Not applicable. This device does not involve an AI/ML algorithm that would require a "training set" of clinical data.

    9. How the ground truth for the training set was established

    • Training Set Ground Truth Establishment: Not applicable, as there is no AI/ML training set.

    K Number: K191357
    Date Cleared: 2019-09-18 (120 days)
    Regulation Number: 876.1500
    Intended Use

    The Flexible HD Cysto-Urethroscope System is used to provide visualization and operative access during diagnostic and therapeutic endoscopic procedures of urinary tract including the urethra, bladder, ureters, and kidneys.

    Device Description

    The components subject of this submission are: the Flexible HD Cysto-Urethroscope (Part Number: 11272VH(U)), the LUER ports (Part Number: 11014L(U)), the Suction Valve (Part Number: 11301CE1/20), and the IMAGE1 S CCU. The CCU consists of the IMAGE1 S Connect Module (Model Number: TC200US) and IMAGE1 S X-Link (Model Number: TC301US).

    AI/ML Overview

    The provided information is a 510(k) summary for the KARL STORZ Flexible HD Cysto-Urethroscope System. This type of submission demonstrates substantial equivalence to a legally marketed predicate device; it is not intended to prove that a device meets specific acceptance criteria based on AI performance or clinical efficacy. The changes submitted in this 510(k) relate to adding new reprocessing methods (V-PRO 60 sterilization and high-level disinfection).

    Therefore, the document does not contain the kind of information requested regarding acceptance criteria and performance studies for an AI/ML driven device, specifically:

    • A table of acceptance criteria and the reported device performance: This document does not describe performance metrics for a diagnostic or AI-driven device, but rather refers to reprocessing validation and biocompatibility.
    • Sample size used for the test set and the data provenance: Not applicable as there's no mention of a test set for AI performance. The studies mentioned are non-clinical (biocompatibility and reprocessing validation).
    • Number of experts used to establish the ground truth...: Not applicable as there's no AI component or ground truth establishment for diagnostic performance.
    • Adjudication method: Not applicable.
    • If a multi-reader multi-case (MRMC) comparative effectiveness study was done: The document explicitly states "Clinical testing was not required to demonstrate the substantial equivalence to the predicate devices. Non-clinical bench testing was sufficient to establish the substantial equivalence of the modifications." This means no MRMC study was performed.
    • If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable as this is not an AI/ML device.
    • The type of ground truth used: Not applicable.
    • The sample size for the training set: Not applicable.
    • How the ground truth for the training set was established: Not applicable.

    Summary of Device Performance and Acceptance Criteria from the Provided Document:

    The acceptance criteria and performance data in this 510(k) summary are related to the safety and functionality of the endoscope itself, particularly regarding its reprocessing and biocompatibility, as opposed to diagnostic performance of an AI component.

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Acceptance Criteria Category | Specific Criteria/Tests | Reported Device Performance / Conclusion |
    | --- | --- | --- |
    | Biocompatibility | ISO 10993-5:2009/(R) 2014 Cytotoxicity | Performed according to ISO 10993-1 and FDA Guidance. |
    | | ISO 10993-10:2010 Sensitization & Irritation | Performed according to ISO 10993-1 and FDA Guidance. |
    | | ISO 10993-11:2006/(R) 2010 Systemic Toxicity | Performed according to ISO 10993-1 and FDA Guidance. |
    | | Overall Conclusion | The biocompatibility evaluation for patient-contacting components was performed and deemed acceptable. |
    | Reprocessing Validation | Cleaning Validation | Validation activities performed according to FDA Guidance. The device maintained functionality after reprocessing cycles. |
    | | Sterilization Validation (V-PRO 60 and HLD) | Validation activities performed according to FDA Guidance and relevant standards (AAMI TIR 12, ISO 15883-5, AAMI TIR 30, AAMI/ANSI/ISO 11737-1, ASTM E1837-96). Confirmed effective sterilization for the specified methods. |
    | | Overall Conclusion | The reprocessing data submitted is in compliance with relevant standards, demonstrating the device can be effectively cleaned, sterilized, and high-level disinfected. |
    | Functional Equivalence | Comparison to Predicate Device (K182723) | The nonclinical tests demonstrate that the subject device performs as well as or better than the legally marketed predicate device (KARL STORZ Flexible HD Cysto-Urethroscope, K182723). |

    2. Sample size used for the test set and the data provenance:

    • The document does not specify sample sizes for the biocompatibility or reprocessing validation tests. These tests are typically performed on a limited number of device samples or representative materials.
    • Data provenance is not specified beyond being "non-clinical bench testing." There's no mention of country of origin or retrospective/prospective nature as this applies to clinical study data, which was explicitly not required.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable. Ground truth establishment by experts pertains to diagnostic or clinical performance evaluations, which were not part of this 510(k) submission.

    4. Adjudication method for the test set:

    • Not applicable.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not required to demonstrate the substantial equivalence to the predicate devices. Non-clinical bench testing was sufficient to establish the substantial equivalence of the modifications." This device is an endoscope system, not an AI-driven diagnostic tool.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Not applicable. This is not an algorithm or AI-driven device.

    7. The type of ground truth used:

    • Not applicable in the context of diagnostic performance. For biocompatibility and reprocessing, the "ground truth" is adherence to established international standards and FDA guidance for these types of non-clinical tests.

    8. The sample size for the training set:

    • Not applicable. No training set is mentioned as this is not an AI/ML device.

    9. How the ground truth for the training set was established:

    • Not applicable.

    K Number: K182723
    Date Cleared: 2019-04-23 (207 days)
    Regulation Number: 876.1500
    Intended Use

    The Flexible HD Cysto-Urethroscope System is used to provide visualization and operative access during diagnostic and therapeutic endoscopic procedures of urinary tract including the urethra, bladder, ureters, and kidneys.

    Device Description

    The components subject of this submission are: the Flexible HD Cysto-Urethroscope (Part Number: 11272VH(U)), the LUER ports (Part Number: 11014L(U)), the Suction Valve (Part Number: 091011-20), and the IMAGE1 S CCU. The CCU consists of the IMAGE1 S Connect Module (Model Number: TC200US) and IMAGE1 S X-Link (Model Number: TC301US). The Flexible HD Cysto-Urethroscope (Part Number: 11272VH(U)) is a reusable, flexible video scope with an insertion shaft OD of 5.5 mm and length of 37 cm, a working channel OD of 2.3 mm, and a suction channel. Users can choose to attach either a LUER port with stopcocks (Part Number: 11014L) or a double LUER port (Part Number: 11014LU) to the working channel port. In terms of optics, it has direction of view of 0 degrees and field of view of 100 degrees.

    AI/ML Overview

    The provided document does not describe the acceptance criteria or a study that proves the device meets those criteria in the context of diagnostic performance (e.g., sensitivity, specificity, accuracy). Instead, it focuses on the substantial equivalence of the Flexible HD Cysto-Urethroscope System to predicate devices based on non-clinical performance data, primarily concerning electrical safety, EMC, biocompatibility, and reprocessing validation.

    Here's an analysis of the available information:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not provide a table with specific performance metrics like sensitivity, specificity, or image quality scores that would typically be associated with acceptance criteria for diagnostic devices. Instead, the "performance" is demonstrated through compliance with various safety and technical standards for medical devices.

    | Acceptance Criteria Category | Reported Device Performance (Compliance) |
    | --- | --- |
    | Electrical Safety | Compliant with ANSI/AAMI ES:60601-1:2005 |
    | Electromagnetic Compatibility | Compliant with IEC 60601-1-2:2007 |
    | Biocompatibility | Compliant with ISO 10993-1, ISO 10993-5:2009/(R) 2014, ISO 10993-10:2010, ISO 10993-11:2006/(R) 2010, and FDA Guidance |
    | Photobiological Safety | Compliant with IEC 62471:2006 |
    | Reprocessing Validation (Cleaning & Sterilization) | Compliant with AAMI TIR 12:2010, ISO 15883-5:2005, AAMI TIR 30:2011, AAMI/ANSI/ISO 11737-1:2006/(R)2011, ASTM E1837-96:2014, and FDA Guidance |

    2. Sample Size Used for the Test Set and Data Provenance:

    No sample size for a "test set" in the context of diagnostic performance (e.g., patient data, image dataset) is mentioned. The studies performed were non-clinical bench testing to evaluate electrical safety, EMC, biocompatibility, and reprocessing. These tests involve laboratory procedures on device components or the entire device, not on human subjects or patient data. Therefore, data provenance (country of origin, retrospective/prospective) is not applicable in this context.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    Not applicable. Since the studies were non-clinical bench tests (e.g., electrical measurements, material testing, sterilization efficacy), there was no "ground truth" to be established by experts in the diagnostic sense. The results of these tests are typically evaluated against established engineering specifications and regulatory standards.

    4. Adjudication Method for the Test Set:

    Not applicable for the same reasons as above. There was no need for expert adjudication for non-clinical bench testing.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not required to demonstrate the substantial equivalence to the predicate devices. Non-clinical bench testing was sufficient to establish the substantial equivalence of the modifications." This indicates that there was no human reader study, and therefore, no effect size of AI assistance could be determined.

    6. If a Standalone (algorithm only without human-in-the-loop performance) was done:

    No, a standalone algorithm performance study was not done. This device is a physical endoscope system, not an AI algorithm.

    7. The Type of Ground Truth Used:

    As noted, there was no "ground truth" in the diagnostic context. For the non-clinical tests, the "ground truth" implicitly refers to the established scientific and regulatory standards (e.g., specific voltage limits for electrical safety, acceptable cytotoxicity levels for biocompatibility, sterility assurance levels for reprocessing).

    8. The Sample Size for the Training Set:

    Not applicable. This device is not an AI/ML algorithm that requires a training set.

    9. How the Ground Truth for the Training Set was Established:

    Not applicable for the same reason as above.


    K Number: K182186
    Date Cleared: 2019-03-22 (221 days)
    Regulation Number: 874.4760
    Intended Use

    The CMOS Video-Rhino-Laryngoscope System is indicated to provide visualization of the nasal lumens and airway anatomy (including nasopharyngeal and trachea) during diagnostic procedures.

    Device Description

    The CMOS Video-Rhino-Laryngoscope System includes two main components: (1) the CMOS Video-Rhino-Laryngoscope (11102CM) and (2) the CCU. The CMOS Video-Rhino-Laryngoscope is compatible with two KARL STORZ CCUs: C-HUB and C-MAC.

    AI/ML Overview

    The document provided is a 510(k) Summary for the KARL STORZ CMOS Video-Rhino-Laryngoscope System (K182186). This type of submission is for demonstrating substantial equivalence to a legally marketed predicate device, not for proving specific clinical performance or to meet acceptance criteria through a clinical study in the way an AI/ML device would.

    Therefore, the requested information regarding acceptance criteria, device performance, sample sizes, ground truth establishment, expert qualifications, adjudication methods, and multi-reader multi-case studies is not applicable in this context.

    This document focuses on comparing the new device's technical specifications and non-clinical performance data (electrical safety, EMC, bench testing, biocompatibility, reprocessing validation) against recognized standards and its predicate device to show that it is substantially equivalent and does not raise new questions of safety or effectiveness.

    Here's a summary of the provided information, addressing the spirit of your request where possible, but highlighting the absence of specific clinical performance acceptance criteria:

    1. Table of Acceptance Criteria and Reported Device Performance

    This type of request for an AI/ML device would typically involve metrics like sensitivity, specificity, AUC, F1-score, etc., with predefined thresholds. For this traditional medical device, the "acceptance criteria" are compliance with established medical device standards and demonstrating substantial equivalence to a predicate device.

    | Category | Acceptance Criteria (Compliance with Standards / Predicate Equivalence) | Reported Device Performance |
    | --- | --- | --- |
    | Electrical Safety & EM Compatibility | Compliance with ANSI/AAMI ES:60601-1:2005 and IEC 60601-1-2:2007. | Data included in the submission in compliance with these standards. |
    | Bench Testing (Performance Standards) | Compliance with ISO 8600-1:2015, ISO 8600-3:1997, ISO 8600-4:2014, ISO 8600-5:2005, IEC 62471:2006, IEC 60601-2-18:2009. | Data included in the submission in compliance with these standards. |
    | Biocompatibility | Evaluation performed according to ISO 10993-1 and FDA Guidance, including specific tests (ISO 10993-5, ISO 10993-10, ISO 10993-11) based on contact type and duration. | Tests conducted: ISO 10993-5:2009/(R) 2014, ISO 10993-10:2010, ISO 10993-11:2006/(R) 2010. |
    | Reprocessing Validation | Compliance with FDA Guidance and standards for cleaning, sterilization, and high-level disinfection (AAMI TIR 12:2010, ISO 15883-5:2005, AAMI TIR 30:2011, AAMI/ANSI/ISO 11737-1:2006/(R)2011, ASTM E1837-96:2014), for semi-critical device status. | Reprocessing data submitted in compliance with the cited standards, validated for cleaning, sterilization, and HLD. |
    | Technological Characteristics (Comparison to Predicate K103467) | Demonstrate substantial equivalence without raising new questions of safety/effectiveness. Key characteristics compared: type of scope, insertion shaft diameter, length, deflection, type of imager, direction of view, light source, field of view, depth of field, on-axis resolution. | Subject device K182186 vs. predicate K103467: insertion shaft diameter 2.9 mm (vs. 3.7 mm); field of view 100° (vs. 85°); depth of field 5-50 mm (vs. 8-50 mm); on-axis resolution 16 lp/mm at 5 mm and 1.8 lp/mm at 50 mm (vs. 8.0 lp/mm at 8 mm and 1.4 lp/mm at 50 mm); other characteristics (scope type, shaft length, deflection, imager type, direction of view, light source, reprocessing methods) are the same as the predicate device. |
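
    As a rough consistency check on the resolution figures above (our back-of-the-envelope calculation, not part of the submission), a resolution of R lp/mm measured at working distance d corresponds to an angular frequency of about R·d line pairs per radian, since 1 mm at distance d subtends roughly 1/d radians; dividing by 57.3 deg/rad converts this to lp/degree:

    ```latex
    \nu_{\text{angular}} \approx R\,d:\qquad
    16\ \tfrac{\text{lp}}{\text{mm}} \times 5\ \text{mm} = 80\ \tfrac{\text{lp}}{\text{rad}} \approx 1.4\ \tfrac{\text{lp}}{\text{deg}},\qquad
    1.8\ \tfrac{\text{lp}}{\text{mm}} \times 50\ \text{mm} = 90\ \tfrac{\text{lp}}{\text{rad}} \approx 1.6\ \tfrac{\text{lp}}{\text{deg}}
    ```

    The two quoted values therefore describe roughly the same angular resolving power at both distances, which is what one would expect from a fixed-focus video endoscope.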

    2. Sample size used for the test set and the data provenance:

    • Not Applicable. This document does not describe a clinical study with a "test set" in the context of an AI/ML algorithm evaluation. The assessments are based on engineering bench tests, biocompatibility tests, and reprocessing validations rather than clinical data sets.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not Applicable. Ground truth, in the AI/ML sense, is not established for this type of device submission.

    4. Adjudication method for the test set:

    • Not Applicable.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • Not Applicable. This device is a visualization tool, not an AI-assisted diagnostic or interpretive system. Clinical studies comparing human reader performance are not described.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Not Applicable. This is a physical endoscope system, not an algorithm.

    7. The type of ground truth used:

    • Not Applicable. The "truth" in this context is defined by adherence to engineering specifications, safety standards, and performance characteristics as verified through non-clinical testing.

    8. The sample size for the training set:

    • Not Applicable. This device does not involve a "training set" as it is not an AI/ML system.

    9. How the ground truth for the training set was established:

    • Not Applicable.

    K Number: K183264
    Device Name: Flex-THOR scope
    Date Cleared: 2019-01-18 (56 days)
    Regulation Number: 876.1500
    Predicate For: N/A
    Intended Use

    The Flex-THOR System is indicated for use in providing access to, and visualization of, the thoracic and abdominal cavities, to allow for the performance of various diagnostic and therapeutic surgical procedures.

    Device Description

    The Flex-THOR System includes two main components: (1) the Flex-THOR Scope (Part Number: 11292VS(U)A-THOR), and (2) the Camera Control Unit (CCU). The insertion shaft of the Flex-THOR Scope (Part Number: 11292VS(U)A-THOR) has an outer diameter of 2.9 mm and a working length of 675 mm with 8.5 Fr elliptical shaped distal tip (major diameter of 3.2 mm and minor diameter of 2.4 mm). Users can access the 1.2 mm working channel through the Luer ports.

    AI/ML Overview

    This FDA 510(k) summary for the KARL STORZ Flex-THOR System does not contain information about acceptance criteria or a study proving the device meets specific performance criteria related to diagnostic accuracy or clinical effectiveness.

    Instead, it focuses on demonstrating substantial equivalence to a predicate device through non-clinical performance data (electrical safety, EMC, and reprocessing validation) and a comparison of technological characteristics.

    Therefore, I cannot fulfill the request to provide details about acceptance criteria and a study that proves the device meets them in the context of clinical performance or diagnostic accuracy.

    Here's what can be extracted from the provided text:


    Acceptance Criteria and Device Performance (Not applicable for clinical performance/diagnostic accuracy)

    As mentioned, there are no specific acceptance criteria for clinical performance or diagnostic accuracy provided in this document. The focus for this 510(k) submission is on demonstrating safety and effectiveness through substantial equivalence, primarily via non-clinical testing.

    The document does report performance against safety and reprocessing standards:

    | Acceptance Criteria (Non-Clinical) | Reported Device Performance |
    | --- | --- |
    | ANSI/AAMI ES:60601-1:2005 | Compliance |
    | IEC 60601-1-2:2007 | Compliance |
    | AAMI TIR 12:2010 | Compliance |
    | ISO 15883-5:2005 | Compliance |
    | AAMI TIR 30:2011 | Compliance |
    | AAMI/ANSI/ISO 11737-1:2006/(R)2011 | Compliance |
    | ASTM E1837-96:2014 | Compliance |

    Study Information (Primarily Non-Clinical)

    1. Sample size used for the test set and the data provenance (country of origin of the data, retrospective or prospective):

      • Test Set Sample Size: Not applicable. The document discusses non-clinical bench testing for electrical safety, EMC, and reprocessing. These types of tests typically involve devices/prototypes and simulated conditions, not a "test set" of patient data in the sense of clinical performance or diagnostic accuracy.
      • Data Provenance: Not specified, but generally, bench testing is performed in a controlled laboratory environment. It is not patient data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. Ground truth in the context of clinical expert consensus is not part of this 510(k) submission for this device. The non-clinical tests rely on established engineering and microbiological methodologies.

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable. Adjudication methods are relevant for expert review of images or data, which is not described here.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: No MRMC study was done, as this device (an endoscope) does not involve AI or image interpretation for diagnostic assistance that would require such a study.

    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable. This device is an endoscope, not an algorithm.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc): For the non-clinical tests:

      • Electrical Safety & EMC: Compliance with international standards.
      • Reprocessing: Verification based on microbiological testing (reduction of microbial load) and visual inspection against established reprocessing protocols outlined in the standards.
    7. The sample size for the training set: Not applicable. This device is not an AI algorithm requiring a training set.

    8. How the ground truth for the training set was established: Not applicable, as there is no training set for an AI algorithm.


    Summary of what the document does provide:

    • Clinical Performance Data: The document explicitly states: "Clinical testing was not required to demonstrate the substantial equivalence to the predicate devices. Non-clinical bench testing was sufficient to establish the substantial equivalence of the modifications."
    • Purpose of Non-Clinical Data: The non-clinical data covered electrical safety, electromagnetic compatibility (EMC), and reprocessing validation, all to demonstrate the device is as safe and effective as the predicate device.
    • Conclusion: The submission concludes that based on non-clinical performance data and comparison of device characteristics, the Flex-THOR System is substantially equivalent to the predicate device, and the differences do not raise new questions of safety and effectiveness.

    K Number: K182696
    Device Name: Telepack X LED
    Date Cleared: 2018-11-20 (54 days)
    Regulation Number: 876.1500
    Intended Use

    The TELE PACK X LED is an all-in-one Imaging System, which comprises a light source for illumination, Camera Control Unit (CCU) for use with compatible camera heads or video endoscopes for image processing, as well as a monitor for image display, intended for the visualization of endoscopic and microscopic procedures.

    Device Description

    The Telepack X LED is a portable and compact all-in-one imaging system that includes a 15-inch screen display, a camera control unit, and an internal LED light source. It is intended to be connected to a compatible device (camera head or video endoscope) for the visualization and documentation of endoscopic and microscopic procedures as well as stroboscopy.

    The Telepack X LED includes an LED light source to illuminate the intended area and a 15-inch monitor for display. It also allows users to redefine the functions that take place when a button is pressed. The Telepack X LED is non-patient contacting and requires only a wipe-down as needed.

    AI/ML Overview

    The provided document describes a 510(k) submission for the KARL STORZ Endoscopy-America, Inc. Telepack X LED, an all-in-one imaging system for endoscopic and microscopic procedures. It is a traditional 510(k) submission, demonstrating substantial equivalence to a predicate device.

    The study presented focuses primarily on non-clinical performance and bench testing to demonstrate substantial equivalence, rather than a clinical study involving human readers or AI. Therefore, many of the requested elements for an AI study (e.g., acceptance criteria for AI performance, MRMC study, expert ground truth for test sets) are not applicable to this submission.

    Here's a breakdown based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a formal table of acceptance criteria with numerical targets against which performance of an AI algorithm is measured. Instead, it relies on comparison to a predicate device and adherence to recognized consensus standards for its non-AI related functions.

    However, it lists various performance testing conducted to ensure the device met its design specifications and is substantially equivalent to its predicate. These can be considered the "acceptance criteria" for the device's fundamental image capture and display capabilities in a non-AI context.

    Acceptance Criteria Category | Reported Device Performance / Evaluation Method

    Electrical Safety & EMC | Tested and passed:
    - IEC 60601-1 (Medical electrical equipment – Part 1: General requirements for basic safety and essential performance)
    - IEC 60601-1-2 (Medical electrical equipment – Part 1-2: General requirements for basic safety and essential performance – Collateral Standard: Electromagnetic compatibility – Requirements and tests)
    - IEC 60601-2-18 (Medical electrical equipment – Part 2-18: Particular requirements for the basic safety and essential performance of endoscopic equipment)
    - IEC 62471 (Photobiological safety of lamps and lamp systems), applicable because of the internal light source
    - Certified as Class I protection against electrical shock
    - Type BF protection against electrical shock for the stroboscopy and camera applied parts
    - Type CF protection against electrical shock for the light applied part
    - Drip-water protection against moisture per IPX1

    Software V&V | Software verification and validation followed the FDA "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices"; the software level of concern is Moderate.

    Performance Testing | Additional bench testing was performed and the results verified that the device met all design specifications. This included:
    - Minimum Illumination
    - Spatial Resolution
    - Color Performance
    - Latency
    - White Balance
    - AE Step Response (Auto Exposure)
    - Head Button Functionality

    Image Quality Evaluation | Substantial equivalence in effectiveness is supported by a comparison of images and standard image quality characteristics (resolution, latency, white balance, AE step response) between the subject and predicate devices.
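
    To make the bench criteria above more concrete, here is a minimal sketch, assuming entirely hypothetical parameter limits and measured values (none are taken from the submission), of how such measurements might be checked against design specifications.

    ```python
    # Minimal sketch with hypothetical limits and measurements; none of these
    # numbers come from the 510(k) submission. It only illustrates checking
    # bench results against design specifications.

    # Hypothetical design specifications: parameter -> (limit, comparison).
    DESIGN_SPECS = {
        "minimum_illumination_lux": (3.0, "<="),   # usable image at or below this scene illumination
        "spatial_resolution_lp_mm": (4.0, ">="),   # line pairs per mm resolved on a test chart
        "latency_ms": (100.0, "<="),               # camera-to-display delay
        "white_balance_error_dE": (5.0, "<="),     # color error on a neutral target after white balance
    }

    # Hypothetical bench results for one unit under test.
    measured = {
        "minimum_illumination_lux": 2.4,
        "spatial_resolution_lp_mm": 4.5,
        "latency_ms": 72.0,
        "white_balance_error_dE": 3.1,
    }

    def evaluate(measured, specs):
        """Print a pass/fail line per parameter and return the overall result."""
        overall = True
        for name, (limit, comparison) in specs.items():
            value = measured[name]
            ok = value <= limit if comparison == "<=" else value >= limit
            overall = overall and ok
            print(f"{name}: {value} (spec {comparison} {limit}) -> {'PASS' if ok else 'FAIL'}")
        return overall

    print("Overall:", "PASS" if evaluate(measured, DESIGN_SPECS) else "FAIL")
    ```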

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not applicable. The submission primarily relies on bench testing and comparison to a predicate device's technical specifications and previously cleared performance, rather than a "test set" of clinical cases for an AI algorithm.
    • Data Provenance: Not applicable for an AI test set. Bench testing data is typically generated internally by the manufacturer. If any clinical "data" were used for comparison (e.g., images), the provenance is not specified. The document mentions that "Clinical published literatures were provided to support the effectiveness of NIR imaging," but this is cited for reference, not as a direct clinical test of this specific device.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    Not applicable as this is not an AI device validation requiring ground truth established by clinical experts on a test set. This device is an imaging system, not an AI diagnostic algorithm.

    4. Adjudication Method for the Test Set

    Not applicable.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done

    No. An MRMC study is typically performed to evaluate the diagnostic performance of an AI system, often in comparison to or in assistance of human readers. This submission is for an imaging system hardware, not an AI diagnostic algorithm.

    6. If a Standalone (algorithm only without human-in-the-loop performance) was Done

    Not applicable. There is no AI algorithm being evaluated for standalone performance.

    7. The Type of Ground Truth Used

    Not applicable for an AI algorithm. The "ground truth" for this device's performance is its ability to meet engineering specifications and produce images comparable to the predicate device, as verified through bench testing.

    8. The Sample Size for the Training Set

    Not applicable. This document pertains to the submission of an imaging system, not the training of an AI algorithm.

    9. How the Ground Truth for the Training Set was Established

    Not applicable.


    Summary of the study that proves the device meets acceptance criteria (as per this 510(k) submission):

    The KARL STORZ Telepack X LED demonstrated that it meets acceptance criteria, primarily through non-clinical bench testing and comparison to its predicate device (Image 1 Video Imaging System K070716).

    The studies performed included:

    • Electrical Safety and EMC Testing: Conformance to IEC 60601-1, IEC 60601-1-2, IEC 60601-2-18, and IEC 62471. The device was certified for various classes of electrical shock protection and drip-water protection (IPX1).
    • Software Verification and Validation (V&V) Testing: Conducted in accordance with FDA guidance for software in medical devices, with a "Moderate" level of concern.
    • Performance Testing: Bench tests were conducted to verify parameters such as minimum illumination, spatial resolution, color performance, latency, white balance, AE step response, and head button functionality. These tests aimed to ensure the device met its design specifications and performed comparably to the predicate device in terms of image quality characteristics.

    The conclusion drawn was that the Telepack X LED's intended use, operating principles, technological characteristics, and features are similar, if not identical, to those of the predicate device. Differences identified (e.g., integrated light source, storage methods, absence of interoperability/split-screen enhancement) were determined not to raise new or different questions of safety and effectiveness, as the underlying principles, functions, and compliance with standards were maintained. Clinical performance data was not required or provided to establish substantial equivalence for this type of device, as non-clinical bench testing was deemed sufficient.
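
    As one concrete reading of the AE (auto exposure) step response test listed above, the following sketch estimates how long an auto-exposure loop takes to settle after a step change in scene brightness. The frame rate, tolerance band, and brightness samples are invented for illustration and do not come from the submission.

    ```python
    # Hypothetical illustration of an AE (auto exposure) step-response measurement:
    # estimate how long the mean image brightness needs to settle within a
    # tolerance band of its final value after a step change in scene illumination.
    # The sample data, frame rate, and tolerance are invented for this sketch.

    FRAME_RATE_HZ = 50.0          # hypothetical capture rate
    TOLERANCE = 0.05              # settle within +/-5 % of the final brightness

    # Mean frame brightness (arbitrary units) recorded after the light level steps up.
    brightness = [210, 198, 175, 152, 138, 129, 124, 121, 120, 120, 119, 120]

    def settling_time_s(samples, tolerance, frame_rate_hz):
        """Return the time after which brightness stays within the tolerance band."""
        final = samples[-1]
        lo, hi = final * (1 - tolerance), final * (1 + tolerance)
        settled_at = len(samples) - 1
        # Walk backwards to find the first frame after which every sample is in band.
        for i in range(len(samples) - 1, -1, -1):
            if lo <= samples[i] <= hi:
                settled_at = i
            else:
                break
        return settled_at / frame_rate_hz

    print(f"AE step response settled in {settling_time_s(brightness, TOLERANCE, FRAME_RATE_HZ):.2f} s")
    ```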


    K Number
    K180977
    Date Cleared
    2018-07-18

    (96 days)

    Product Code
    Regulation Number
    876.1500
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    KARL STORZ New Generation Trocars are intended to be used during endoscopic and laparoscopic procedures in general surgery and thoracoscopy in adult and pediatric patients to create and maintain a port of entry.

    Device Description

    The KARL STORZ New Generation Trocars provide a port of entry during endoscopic and laparoscopic procedures in pediatric and adult patients. The New Generation Trocars are available in diameters from 2.5 mm to 13.5 mm and consist of a cannula, a trocar, and a valve seal. The trocars combine single-use and reusable components: the valve seal is single-use, while the cannula and trocar are reusable. The trocars are color-coded by size and are available with pyramidal, conical, or conical-blunt tips. Reducers can be used to reduce the size of the trocar to accommodate a smaller instrument without losing pneumoperitoneum.

    AI/ML Overview

    The KARL STORZ New Generation Trocars are intended to be used during endoscopic and laparoscopic procedures in general surgery and thoracoscopy in adult and pediatric patients to create and maintain a port of entry.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    Feature/Test | Acceptance Criteria (Implied) | Reported Device Performance

    Mechanical Performance
    - Seal Leak Test | Adequate sealing to maintain pneumoperitoneum | Bench verification performance testing was performed.
    - Seal Leak Test while Under Torque | Adequate sealing under torque conditions | Bench verification performance testing was performed.
    - Instrument Insertion and Retention Test | Instruments can be inserted smoothly and retained securely | Bench verification performance testing was performed.
    - Penetration Force | Acceptable force required for tissue penetration | Bench verification performance testing was performed.
    - Compression Testing of the Cannula | Cannula withstands compressive forces without failure | Bench verification performance testing was performed.
    - Bending Testing of the Cannula | Cannula withstands bending forces without failure | Bench verification performance testing was performed.
    - Torsional Testing of the Cannula | Cannula withstands torsional forces without failure | Bench verification performance testing was performed.

    Biocompatibility
    - Biocompatibility | Compliant with ISO 10993 standards | Biocompatibility evaluation was performed to ISO 10993.

    Sterility (Reusable Components)
    - Sterility (Reusable Cannula, Trocar, Reducer) | Compliant with ISO 11135 (Steam sterilization) | Sterility of the Reusable Components was validated in accordance with ISO 11135.

    Sterility (Sterile Single-Use Components)
    - Sterility (Sterile Single-Use Valve Seals) | Compliant with ISO 11135 and ISO 10993 | Sterility of the Sterile Single Use Components was validated in accordance with ISO 11135 and ISO 10993.

    Packaging (Sterile Single-Use Components)
    - Packaging (Sterile Single-Use Valve Seals) | Compliant with ISO 11607 and ISO 11737 | Packaging for the Sterile Single Use Components was validated to ISO 11607 and ISO 11737.

    - Clinical Performance | Demonstrate substantial equivalence to predicate device in terms of safety and effectiveness for intended use | Clinical testing was not required; non-clinical bench testing and medical literature review were considered sufficient to assess safety and effectiveness and establish substantial equivalence.
    - Usage in Pediatric Population | Safe and effective for use in pediatric patients | Supported by real-world evidence (medical literature) referencing pediatric minimally invasive surgery effectiveness and safety.
    - Electromagnetic Compatibility (EMC) | Not applicable | The subject device does not require electromagnetic compatibility, electrical safety, or software validation documentation.
    - Electrical Safety | Not applicable | The subject device does not require electromagnetic compatibility, electrical safety, or software validation documentation.
    - Software Validation | Not applicable | The subject device does not require electromagnetic compatibility, electrical safety, or software validation documentation.
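
    To illustrate how a seal leak test of the kind listed above could be quantified, here is a hypothetical sketch that fits a pressure-decay rate to insufflation-pressure readings and compares it with an invented acceptance limit; the readings and the limit are placeholders, not values from the submission.

    ```python
    # Hypothetical illustration of a seal leak evaluation: fit a linear
    # pressure-decay rate to insufflation-pressure readings taken with the
    # gas supply closed, then compare it against an invented limit.
    # The readings and the acceptance limit are placeholders, not submission data.

    MAX_DECAY_MMHG_PER_MIN = 1.0   # hypothetical acceptance limit

    # (time in seconds, cavity pressure in mmHg) with the gas supply shut off.
    readings = [(0, 15.0), (15, 14.9), (30, 14.85), (45, 14.7), (60, 14.65)]

    def decay_rate_mmhg_per_min(readings):
        """Least-squares slope of pressure vs. time, converted to mmHg per minute."""
        n = len(readings)
        mean_t = sum(t for t, _ in readings) / n
        mean_p = sum(p for _, p in readings) / n
        num = sum((t - mean_t) * (p - mean_p) for t, p in readings)
        den = sum((t - mean_t) ** 2 for t, _ in readings)
        slope_per_s = num / den          # mmHg per second (negative when leaking)
        return -slope_per_s * 60.0       # report the decay as a positive rate

    rate = decay_rate_mmhg_per_min(readings)
    print(f"Pressure decay: {rate:.2f} mmHg/min ->",
          "PASS" if rate <= MAX_DECAY_MMHG_PER_MIN else "FAIL")
    ```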

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify exact sample sizes for each bench test performed. It generally states that "Bench verification performance testing has been performed" for the listed parameters.

    • Data Provenance: The studies were bench tests performed by the manufacturer, KARL STORZ Endoscopy-America, Inc. No country of origin is specified beyond the manufacturer's location in El Segundo, CA. All testing was completed prior to the 510(k) submission.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    Not applicable. The "ground truth" for the performance tests was established through direct measurement against pre-defined engineering and regulatory standards (e.g., ISO standards, specific force/leakage thresholds set by the manufacturer for functional performance). There is no mention of human experts establishing ground truth for these objective bench tests.

    4. Adjudication Method for the Test Set

    Not applicable. For objective bench tests, results are typically determined by measurement against specified criteria, not by human adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    Not applicable. This device is a mechanical surgical instrument (trocar), not an AI-powered diagnostic or assistive tool that would involve human readers or AI assistance. No MRMC study was performed.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Not applicable. This device is a mechanical surgical instrument; it does not involve algorithms or AI.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    The ground truth for the bench tests was based on engineering specifications and established regulatory standards (e.g., ISO standards). For example, for biocompatibility, the ground truth was compliance with ISO 10993. For sterility, the ground truth was validation against ISO 11135 and ISO 11737. For mechanical tests, the ground truth would be the defined acceptable ranges for forces, leakage, etc.

    For the expansion of indications to include a pediatric population, the "ground truth" was established through medical literature review (real-world evidence) from various peer-reviewed publications covering pediatric minimally invasive surgery.
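
    As a hypothetical illustration of the point that the ground truth for the mechanical tests is a defined acceptance range, the sketch below checks a batch of penetration-force measurements against an invented maximum; none of the values come from the submission.

    ```python
    # Hypothetical illustration of a mechanical acceptance check: every penetration
    # force measured across a batch of trocars must stay at or below a defined
    # maximum. The sample values and the limit are invented for this sketch.

    MAX_PENETRATION_FORCE_N = 30.0   # hypothetical acceptance limit

    forces_n = [21.4, 24.8, 19.9, 26.3, 23.1, 25.7]   # one measurement per sample

    worst = max(forces_n)
    mean = sum(forces_n) / len(forces_n)
    result = "PASS" if worst <= MAX_PENETRATION_FORCE_N else "FAIL"
    print(f"n={len(forces_n)}, mean={mean:.1f} N, worst-case={worst:.1f} N, "
          f"limit={MAX_PENETRATION_FORCE_N} N -> {result}")
    ```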

    8. The Sample Size for the Training Set

    Not applicable. This device does not use an algorithm that requires a training set. The term "training set" is typically associated with machine learning or AI models.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable. As there is no algorithm or AI, there is no training set or ground truth associated with it.

