510(k) Data Aggregation
(147 days)
PEERSCOPE SYSTEM
The PeerScope System is intended for diagnostic visualization of the digestive tract. The system also provides access for therapeutic interventions using standard endoscopy tools. The PeerScope System is indicated for use within the upper digestive tract (including the esophagus, stomach, and duodenum).
The PeerScope System Model HG is a GI platform for diagnostic visualization and therapeutic intervention of the digestive tract, for use in healthcare facilities/hospitals. The system consists of the Main Control Unit (MCU) videoprocessor, which provides the device controls, user interfaces, image processing, and pneumatic controls, and interfaces with various external accessories, and of the PeerScope GS flexible video gastroscope, labeled for repeated clinical use within the upper digestive tract. The operating principles of the PeerScope System are similar to those of other legally marketed standard gastroscopic systems. The system also provides physicians with two viewing capabilities: a standard 160° front field of view and a 210° wide field of view.
The provided document describes the PeerScope System (Model HG), a gastroscope used for diagnostic visualization and therapeutic interventions in the upper digestive tract. The submission to the FDA focuses on establishing substantial equivalence to a predicate device, the EVIS EXERA II 180 System (K100584), rather than a standalone clinical study demonstrating specific performance metrics against pre-defined acceptance criteria for a new AI-powered diagnostic device.
Therefore, many of the requested elements for an AI device study are not applicable to this submission, as it concerns a medical device with similar technological characteristics to existing devices, seeking clearance under the 510(k) pathway.
However, I can extract information related to the device's performance, testing, and basis for demonstrating safety and effectiveness as presented in the document.
Summary of Acceptance Criteria and Device Performance (as inferable from the 510(k) submission):
The submission does not present a table of explicit acceptance criteria for a novel AI algorithm's performance. Instead, the "acceptance criteria" are implied by demonstrating that the PeerScope System Model HG performs equivalently to, and is as safe and effective as, its legally marketed predicate devices. The performance data presented focuses on design verification, software validation, reprocessing validation, and usability, all of which met their respective internal and regulatory acceptance criteria.
Category | Acceptance Criteria (Implied by Predicate Equivalence & Regulatory Standards) | Reported Device Performance (PeerScope GS Gastroscope) |
---|---|---|
Functional Equivalence | Must have the same intended use and indications for use as the predicate device (diagnostic visualization and therapeutic interventions in the upper digestive tract). | The PeerScope System Model HG has the same intended use and indications for use in the upper digestive tract. It provides diagnostic visualization and access for therapeutic interventions using standard endoscopy tools. |
Technological Characteristics | Must utilize similar technologies to its predicate device(s) or demonstrate that differences do not raise new questions of safety and effectiveness. | The PeerScope System Model HG uses similar technologies. Key differences like having a side view, a wider field of view (210° vs. 140° for predicate's standard view), and integral LED illumination were assessed and deemed to have insignificant impact on device safety and effectiveness. Conversely, features present in the predicate but not the PeerScope (e.g., narrow band illumination) were noted as not impacting the PeerScope's safety/effectiveness for its intended use. |
Risk Management | Risk analysis must be conducted in accordance with ISO 14971, identifying and mitigating unacceptable risks. | Risk analysis was conducted in accordance with ISO 14971. |
Design Verification | Design verification tests must be identified, performed, and meet their acceptance criteria to ensure the device performs as intended and meets specifications. | Design verification tests were identified, performed, and met their acceptance criteria. |
Software Validation | Must be carried out in accordance with FDA Guidance Document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (May 2005)". | Software validation was carried out and met the requirements. |
Reprocessing Validation | Must be carried out in accordance with FDA Guidance Document "Processing/Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling Draft Guidance (May 2011)". | Reprocessing validation was carried out and met the requirements. |
Usability | Must demonstrate safe and effective use in a clinical environment by experienced physicians. | Device usability was carried out by five experienced GI physicians in a US medical center, demonstrating that the device meets its specifications. |
Compliance with Standards | Must comply with relevant international and national standards for medical devices, electrical safety, biocompatibility, and sterilization (e.g., AAMI / ANSI ES 60601-1, IEC 60601-1-2, IEC 60601-2-18, IEC 62304, ISO 10993, ISO 8600, ASTM E 1837). | The device was tested against and relies upon a comprehensive list of standards, including AAMI / ANSI ES 60601-1, IEC 60601-1-2, IEC 60601-2-18, IEC 62304, ISO 10993 (Parts #1, #5, #10, #12), ISO 8600 (Parts 1, 3, 4, 6), and ASTM E 1837-96. |
Detailed Responses to Specific Questions:
1. A table of acceptance criteria and the reported device performance
- Please refer to the table above. The acceptance criteria are largely implied by demonstrating substantial equivalence to the predicate device and compliance with applicable standards.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Usability Study: Five experienced GI physicians participated in the usability testing.
- Data Provenance: The usability testing was conducted in a U.S. medical center. The nature of this testing (observational use in a clinical environment) makes it prospective rather than retrospective.
- For other tests (Risk Analysis, Design Verification, Software Validation, Reprocessing Validation, Bench Data), the sample sizes are not explicitly stated, as these typically refer to engineering verification and validation activities rather than clinical study cohorts. The "device safety and performance were verified by EndoChoice Innovation Center Ltd. and accredited third party laboratories" (Israel and likely other locations for third-party labs).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- For the usability study, five experienced GI physicians were used. Their specific years of experience are not detailed, but "experienced" implies sufficient expertise for evaluating a gastroscope.
- For other technical verification tests, "ground truth" is established by engineering specifications, regulatory standards, and validated test methods, rather than expert consensus on diagnostic interpretations.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- The document does not describe an adjudication method for the usability study. It states that device usability was "carried out by means of testing within a clinical environment ... by five experienced GI physicians." It implies a consensus or individual assessment by these physicians that the device meets its specifications for usability. Given the nature of a 510(k) for an endoscope, a formal adjudication process for diagnostic accuracy (like in an AI study) is not typically required.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not done. This submission is for a medical device (gastroscope) and not an AI-powered diagnostic tool. The focus is on demonstrating substantial equivalence to a predicate device, not on improving human reader performance with AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Not applicable. This submission is for an endoscope system, not an AI algorithm. There is no standalone algorithm performance to report.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the usability study, the "ground truth" was the device's adherence to its specifications and satisfactory performance in a clinical environment, as assessed by experienced GI physicians.
- For bench testing and design verification, the "ground truth" comprises engineering specifications, performance standards (e.g., scope angulation, field of view, depth of field), and regulatory compliance requirements.
8. The sample size for the training set
- Not applicable. As this is not an AI device, there is no training set mentioned or required.
9. How the ground truth for the training set was established
- Not applicable. As there is no training set, there is no ground truth establishment for it.
In conclusion, this 510(k) submission for the PeerScope System (Model HG) demonstrates its safety and effectiveness primarily through bench testing, adherence to recognized standards, and a small-scale usability study, all aimed at proving substantial equivalence to a previously cleared predicate device. It does not involve the type of clinical performance study with acceptance criteria and ground truth validation that would be typical for a novel AI-driven diagnostic device.
(29 days)
PEERSCOPE SYSTEM
The PeerScope System is intended for diagnostic visualization of the digestive tract. The system also provides access for therapeutic interventions using standard endoscopy tools. The PeerScope System is indicated for use within the lower digestive tract (including the anus, rectum, sigmoid colon, colon and ileocecal valve) for adult patients. The PeerScope System consists of PeerMedical camera heads, endoscopes, video system, light source and other ancillary equipment.
The PeerScope System Model H is a GI platform for diagnostic visualization and therapeutic intervention of the lower digestive tract. The PeerScope System Model H is a modification of the legally marketed (K120289) PeerScope System Model B (the predicate device). The system consists of the endoscopic Main Control Unit (MCU H) and the PeerScope CS colonoscope, enabling physicians to view a high-resolution wide field of view of up to 300°. The system is labeled for use in healthcare facilities/hospitals for adult patients.
Here's a breakdown of the acceptance criteria and study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Reported Device Performance |
---|---|
Risk Analysis | Conducted in accordance with ISO 14971; tests performed and met criteria. |
Software Validation | Carried out in accordance with FDA Guidance Document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (May 2005)". |
Reprocessing Validation | Carried out in accordance with FDA Guidance Document "Processing/Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling Draft Guidance (May 2011)". |
Device Safety & Performance | Verified by PeerMedical and accredited third-party laboratories. |
Usability | Tested in a clinical environment by seven experienced GI physicians; device meets specifications; "at least as safe and effective for its intended use as the predicate device." |
Technological Differences Impact | Impact of differences between Model H and Model B (video resolution, user interface, I/O ports, video signal output/input, monitor display options) deemed "insignificant in terms of the device safety and effectiveness for the device intended use" based on verification and performance testing. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- Usability Study: 7 experienced GI physicians.
- Bench Data (Verification Tests): Not explicitly stated; the text implies only that sufficient tests were run to meet the ISO 14971 risk analysis, software validation, and reprocessing validation criteria.
- Data Provenance: Clinical environment in a US medical center (for usability testing). Other bench data provenance is not specified beyond "PeerMedical and accredited third party laboratories." The study is prospective in nature for the usability and verification testing of the new Model H.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: 7
- Qualifications of Experts: Experienced GI physicians. No further details (e.g., years of experience, specific board certifications) are provided.
4. Adjudication Method (for the test set)
The document does not explicitly state an adjudication method (such as 2+1 or 3+1) for the usability testing. It mentions "Device Usability was carried out by means of testing within a clinical environment in a US medical center by seven experienced GI physicians," and that "The conclusions drawn... demonstrate that the device meets its specifications." This suggests a consensus or individual assessment by the physicians during the usability testing, rather than a formal adjudication process comparing their findings against a pre-established ground truth for diagnostic accuracy.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This submission is for a device modification (PeerScope System Model H) and focuses on demonstrating substantial equivalence to its predicate device (PeerScope System Model B). The "improvements" are in technological features (e.g., HD video, digital output) and usability, not in AI-assisted diagnostic performance or human reader improvement with AI.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
No, a standalone (algorithm only) performance study was not done. The device is a colonoscope system, an endoscopic tool for visualization and intervention, not an AI algorithm intended to perform standalone diagnostic functions.
7. The Type of Ground Truth Used
- Usability: User satisfaction and observed performance against device specifications in a clinical setting.
- Bench Data: Compliance with established engineering standards (e.g., ISO 14971, IEC 60601 series, ISO 10993 series, ASTM E 1837-96), software validation guidance, and reprocessing validation guidance.
- The "ground truth" here is adherence to regulatory and performance standards and user satisfaction, rather than a diagnostic 'truth' (like pathology or clinical outcomes for disease detection) because the device's primary function is visualization and access for intervention, not autonomous diagnostic interpretation.
8. The Sample Size for the Training Set
No training set is mentioned or applicable here. This device is a hardware system (colonoscope and accessories) for visualization and therapeutic intervention, not a machine learning or AI model that requires a training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no training set for this device.
Ask a specific question about this device
(241 days)
PEERSCOPE SYSTEM
The PeerScope System is intended for diagnostic visualization of the digestive tract. The system also provides access for therapeutic interventions using standard endoscopy tools. The PeerScope System is indicated for use within the lower digestive tract (including the anus, rectum, sigmoid colon, colon and ileocecal valve) for adult patients.
The PeerScope System consists of the endoscopic Main Control Unit (MCU) and the PeerScope CS colonoscope. The MCU controls the endoscope. Like other legally marketed endoscopic systems, it includes a video system, a light source, and interfaces to other ancillary equipment. The device is labeled for use in healthcare facilities/hospitals for endoscopy and endoscopic treatment within the lower digestive tract. The operating principles of the PeerScope System are similar to those of other legally marketed standard colonoscopy systems. The PeerScope System provides a standard 160° front field of view and a 300° wide field of view.
Here's a breakdown of the acceptance criteria and study information for the PeerScope System, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Although explicit "acceptance criteria" and "reported device performance" in a quantitative table are not provided in this 510(k) summary, the document states that the device "passed the experiment criteria" and "met its intended use and specifications" in both bench and clinical validations. The criteria are implicit in meeting the device's intended use and specifications, with a focus on safety and effectiveness comparable to the predicate device.
Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|
Device meets its intended use and specifications for diagnostic visualization and therapeutic interventions within the lower digestive tract for adult patients. | Clinical Data: "The results of the clinical validation passed the experiment criteria. No adverse events were observed." "The conclusions drawn from the bench and clinical tests demonstrate that the device meets its specifications, intended use and indication for use." "The device met its intended use and specifications." |
Device demonstrates safety and effectiveness without raising new questions compared to the predicate device. | "Based on the results of verification, validation and performance testing, the impact of the above differences [from predicate] is insignificant in terms of the device safety and effectiveness for its intended use." "No adverse events were observed." "The PeerScope System does not raise different questions of safety and effectiveness." |
Device's wide 300° field of view feature is efficient and effective. | Bench Data (Simulated Clinical Conditions): "The results of the bench validation passed the experiment criteria. The device met its intended use and specifications. Hazardous conditions were not observed." (This specifically mentions the wide 300° field of view evaluation). |
Reprocessing validation in accordance with relevant draft guidance. | "Reprocessing validation was carried out in accordance with 'Processing/Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling Draft Guidance / May 2011'." |
Jet irrigation for auxiliary jet water supply is efficient despite reduced flow rate compared to predicate. | "The design of the reduced jet water flow has been verified & validated. The results demonstrated that the jet irrigation is efficient." |
Compliance with relevant medical device safety, performance, and biocompatibility standards. | "The device safety and performance were verified by tests by PeerMedical and accredited third party laboratories. List of standards was used / relied upon for testing: [lists various IEC, ISO, ASTM standards]." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set:
- Clinical Data: Fifty (50) patients.
- Bench Data (Simulated Clinical Conditions): Not explicitly stated, but implies multiple trials or evaluations of "a final production unit."
- Data Provenance:
- Clinical Data: Prospective, conducted at Elisha Medical Center, Haifa, Israel.
- Bench Data (Simulated Clinical Conditions): Prospective, performed by physicians at three US medical centers.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Clinical Data: The procedure was performed by "US, European and Israeli physicians." The exact number of physicians (experts) is not specified. Their qualifications are implied as being practicing physicians performing endoscopic procedures, but specific experience levels (e.g., "radiologist with 10 years of experience") are not provided.
- Bench Data (Simulated Clinical Conditions): "The procedure was performed by physicians at three US medical centers." The exact number of physicians is not specified, nor are their specific qualifications beyond being "physicians."
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set. It simply states that physicians performed the procedures and the results "passed the experiment criteria."
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
- No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or performed. This device is a colonoscope, not an AI-assisted diagnostic tool designed to improve human reader performance. The focus is on the safety and effectiveness of the endoscope itself.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- This question is not applicable. The PeerScope System is an endoscope controlled by a human operator, not an AI algorithm. Therefore, no standalone algorithm performance was assessed.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
- The ground truth for the clinical data is implicitly based on the expert judgment/observations of the performing physicians during the endoscopic procedures, combined with the successful completion of the procedures without adverse events and meeting intended use. There is no mention of independent pathology or specific outcomes data beyond the successful completion of the procedure itself.
8. The Sample Size for the Training Set
- The document describes validation and verification testing of a "final production unit" and clinical data from "50 patients." It does not mention any "training set" in the context of machine learning or AI. This is a conventional medical device submission, not an AI/ML device.
9. How the Ground Truth for the Training Set Was Established
- This question is not applicable as there is no mention of a training set for machine learning.