
510(k) Data Aggregation

    K Number: K130718
    Device Name: PEERSCOPE SYSTEM
    Manufacturer: PEERMEDICAL LTD.
    Date Cleared: 2013-04-16 (29 days)
    Regulation Number: 876.1500
    Intended Use

    The PeerScope System is intended for diagnostic visualization of the digestive tract. The system also provides access for therapeutic interventions using standard endoscopy tools. The PeerScope System is indicated for use within the lower digestive tract (including the anus, rectum, sigmoid colon, colon and ileocecal valve) for adult patients. The PeerScope System consists of PeerMedical camera heads, endoscopes, video system, light source and other ancillary equipment.

    Device Description

    The PeerScope System Model H is a GI platform for diagnostic visualization and therapeutic intervention in the lower digestive tract. Model H is a modification of the legally marketed (K120289) PeerScope System Model B (the predicate device). The system consists of an endoscopic Main Control Unit (MCU H) and the PeerScope CS colonoscope, providing physicians a high-resolution, wide field of view of up to 300°. The system is labeled for use in healthcare facilities/hospitals for adult patients.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category | Reported Device Performance
    Risk Analysis | Conducted in accordance with ISO 14971; tests performed and met criteria.
    Software Validation | Carried out in accordance with the FDA guidance document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (May 2005).
    Reprocessing Validation | Carried out in accordance with the FDA draft guidance "Processing/Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling" (May 2011).
    Device Safety & Performance | Verified by PeerMedical and accredited third-party laboratories.
    Usability | Tested in a clinical environment by seven experienced GI physicians; device meets specifications; "at least as safe and effective for its intended use as the predicate device."
    Technological Differences Impact | Impact of differences between Model H and Model B (video resolution, user interface, I/O ports, video signal output/input, monitor display options) deemed "insignificant in terms of the device safety and effectiveness for the device intended use" based on verification and performance testing.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Usability Study: 7 experienced GI physicians.
      • Bench Data (Verification Tests): Not explicitly stated; the text implies that sufficient tests were run to meet the ISO 14971 risk analysis, software validation, and reprocessing validation criteria.
    • Data Provenance: Clinical environment in a US medical center (for usability testing). Other bench data provenance is not specified beyond "PeerMedical and accredited third party laboratories." The study is prospective in nature for the usability and verification testing of the new Model H.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: 7
    • Qualifications of Experts: Experienced GI physicians. No further details (e.g., years of experience, specific board certifications) are provided.

    4. Adjudication Method (for the test set)

    The document does not explicitly state an adjudication method (such as 2+1 or 3+1) for the usability testing. It mentions "Device Usability was carried out by means of testing within a clinical environment in a US medical center by seven experienced GI physicians," and that "The conclusions drawn... demonstrate that the device meets its specifications." This suggests a consensus or individual assessment by the physicians during the usability testing, rather than a formal adjudication process comparing their findings against a pre-established ground truth for diagnostic accuracy.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This submission is for a device modification (PeerScope System Model H) and focuses on demonstrating substantial equivalence to its predicate device (PeerScope System Model B). The "improvements" are in technological features (e.g., HD video, digital output) and usability, not in AI-assisted diagnostic performance or human reader improvement with AI.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    No, a standalone (algorithm only) performance study was not done. The device is a colonoscope system, an endoscopic tool for visualization and intervention, not an AI algorithm intended to perform standalone diagnostic functions.

    7. The Type of Ground Truth Used

    • Usability: User satisfaction and observed performance against device specifications in a clinical setting.
    • Bench Data: Compliance with established engineering standards (e.g., ISO 14971, IEC 60601 series, ISO 10993 series, ASTM E 1837-96), software validation guidance, and reprocessing validation guidance.
    • The "ground truth" here is adherence to regulatory and performance standards and user satisfaction, rather than a diagnostic 'truth' (like pathology or clinical outcomes for disease detection) because the device's primary function is visualization and access for intervention, not autonomous diagnostic interpretation.

    8. The Sample Size for the Training Set

    No training set is mentioned or applicable here. This device is a hardware system (colonoscope and accessories) for visualization and therapeutic intervention, not a machine learning or AI model that requires a training set.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no training set for this device.


    K Number: K120289
    Device Name: PEERSCOPE SYSTEM
    Manufacturer: PEERMEDICAL
    Date Cleared: 2012-09-28 (241 days)
    Regulation Number: 876.1500
    Intended Use

    The PeerScope System is intended for diagnostic visualization of the digestive tract. The system also provides access for therapeutic interventions using standard endoscopy tools. The PeerScope System is indicated for use within the lower digestive tract (including the anus, rectum, sigmoid colon, colon and ileocecal valve) for adult patients.

    Device Description

    The PeerScope System consists of an endoscopic Main Control Unit (MCU) and the PeerScope CS colonoscope. The MCU controls the endoscope and, like other legally marketed endoscopic systems, includes a video system, a light source, and interfaces to other ancillary equipment. The device is labeled for use in a healthcare facility/hospital for endoscopy and endoscopic treatment within the lower digestive tract. The operating principles of the PeerScope System are similar to those of other legally marketed standard colonoscopy systems. The system provides a standard 160° front field of view and a 300° wide field of view.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the PeerScope System, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Although explicit "acceptance criteria" and "reported device performance" in a quantitative table are not provided in this 510(k) summary, the document states that the device "passed the experiment criteria" and "met its intended use and specifications" in both bench and clinical validations. The criteria are implicit in meeting the device's intended use and specifications, with a focus on safety and effectiveness comparable to the predicate device.

    Acceptance Criteria (Implicit) | Reported Device Performance
    Device meets its intended use and specifications for diagnostic visualization and therapeutic interventions within the lower digestive tract for adult patients. | Clinical Data: "The results of the clinical validation passed the experiment criteria. No adverse events were observed." "The conclusions drawn from the bench and clinical tests demonstrate that the device meets its specifications, intended use and indication for use." "The device met its intended use and specifications."
    Device demonstrates safety and effectiveness without raising new questions compared to the predicate device. | "Based on the results of verification, validation and performance testing, the impact of the above differences [from predicate] is insignificant in terms of the device safety and effectiveness for its intended use." "No adverse events were observed." "The PeerScope System does not raise different questions of safety and effectiveness."
    Device's wide 300° field of view feature is efficient and effective. | Bench Data (Simulated Clinical Conditions): "The results of the bench validation passed the experiment criteria. The device met its intended use and specifications. Hazardous conditions were not observed." (This specifically covers the wide 300° field of view evaluation.)
    Reprocessing validation in accordance with relevant draft guidance. | "Reprocessing validation was carried out in accordance with 'Processing/Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling Draft Guidance / May 2011'."
    Jet irrigation for auxiliary jet water supply is efficient despite reduced flow rate compared to predicate. | "The design of the reduced jet water flow has been verified & validated. The results demonstrated that the jet irrigation is efficient."
    Compliance with relevant medical device safety, performance, and biocompatibility standards. | "The device safety and performance were verified by tests by PeerMedical and accredited third party laboratories. List of standards was used / relied upon for testing: [lists various IEC, ISO, ASTM standards]."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set:
      • Clinical Data: Fifty (50) patients.
      • Bench Data (Simulated Clinical Conditions): Not explicitly stated; the text implies multiple trials or evaluations of "a final production unit."
    • Data Provenance:
      • Clinical Data: Prospective, conducted at Elisha Medical Center, Haifa, Israel.
      • Bench Data (Simulated Clinical Conditions): Prospective, performed by physicians at three US medical centers.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Clinical Data: The procedure was performed by "US, European and Israeli physicians." The exact number of physicians (experts) is not specified. Their qualifications are implied as being practicing physicians performing endoscopic procedures, but specific experience levels (e.g., "radiologist with 10 years of experience") are not provided.
    • Bench Data (Simulated Clinical Conditions): "The procedure was performed by physicians at three US medical centers." The exact number of physicians is not specified, nor are their specific qualifications beyond being "physicians."

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set. It simply states that physicians performed the procedures and the results "passed the experiment criteria."

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or performed. This device is a colonoscope, not an AI-assisted diagnostic tool designed to improve human reader performance. The focus is on the safety and effectiveness of the endoscope itself.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • This question is not applicable. The PeerScope System is an endoscope controlled by a human operator, not an AI algorithm. Therefore, no standalone algorithm performance was assessed.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    • The ground truth for the clinical data is implicitly based on the expert judgment/observations of the performing physicians during the endoscopic procedures, combined with the successful completion of the procedures without adverse events and meeting intended use. There is no mention of independent pathology or specific outcomes data beyond the successful completion of the procedure itself.

    8. The Sample Size for the Training Set

    • The document describes validation and verification testing of a "final production unit" and clinical data from "50 patients." It does not mention any "training set" in the context of machine learning or AI. This is a conventional medical device submission, not an AI/ML device.

    9. How the Ground Truth for the Training Set Was Established

    • This question is not applicable as there is no mention of a training set for machine learning.
