510(k) Data Aggregation (147 days)
The PeerScope System is intended for diagnostic visualization of the digestive tract. The system also provides access for therapeutic interventions using standard endoscopy tools. The PeerScope System is indicated for use within the upper digestive tract (including the esophagus, stomach, and duodenum).
The PeerScope System Model HG is a G1 platform for diagnostic visualization and therapeutic intervention of the digestive tract, for use in healthcare facilities/hospitals. The system consists of the Main Control Unit (MCU) videoprocessor, which provides the device controls, user interfaces, image processing, pneumatic controls, and interfaces with various external accessories, and of the PeerScope GS flexible video gastroscope, labeled for repeated clinical use within the upper digestive tract. The operating principles of the PeerScope System are similar to those of other legally marketed standard gastroscope systems. The system also provides physicians with two viewing capabilities: a standard 160° front field of view and a 210° wide field of view.
The provided document describes the PeerScope System (Model HG), a gastroscope used for diagnostic visualization and therapeutic interventions in the upper digestive tract. The submission to the FDA focuses on establishing substantial equivalence to a predicate device, the EVIS EXERA II 180 System (K100584), rather than on a standalone clinical study demonstrating specific performance metrics against pre-defined acceptance criteria, as would be expected for a new AI-powered diagnostic device.
Therefore, many of the requested elements for an AI device study are not applicable to this submission, as it concerns a medical device with similar technological characteristics to existing devices, seeking clearance under the 510(k) pathway.
However, I can extract information related to the device's performance, testing, and basis for demonstrating safety and effectiveness as presented in the document.
Summary of Acceptance Criteria and Device Performance (as inferable from the 510(k) submission):
The submission does not present a table of explicit acceptance criteria for a novel AI algorithm's performance. Instead, the "acceptance criteria" are implied by demonstrating that the PeerScope System Model HG performs equivalently to, and is as safe and effective as, its legally marketed predicate devices. The performance data presented focuses on design verification, software validation, reprocessing validation, and usability, all of which met their respective internal and regulatory acceptance criteria.
| Category | Acceptance Criteria (Implied by Predicate Equivalence & Regulatory Standards) | Reported Device Performance (PeerScope GS Gastroscope) |
|---|---|---|
| Functional Equivalence | Must have the same intended use and indications for use as the predicate device (diagnostic visualization and therapeutic interventions in the upper digestive tract). | The PeerScope System Model HG has the same intended use and indications for use in the upper digestive tract. It provides diagnostic visualization and access for therapeutic interventions using standard endoscopy tools. |
| Technological Characteristics | Must utilize similar technologies to its predicate device(s) or demonstrate that differences do not raise new questions of safety and effectiveness. | The PeerScope System Model HG uses similar technologies. Key differences like having a side view, a wider field of view (210° vs. 140° for predicate's standard view), and integral LED illumination were assessed and deemed to have insignificant impact on device safety and effectiveness. Conversely, features present in the predicate but not the PeerScope (e.g., narrow band illumination) were noted as not impacting the PeerScope's safety/effectiveness for its intended use. |
| Risk Management | Risk analysis must be conducted in accordance with ISO 14971, identifying and mitigating unacceptable risks. | Risk analysis was conducted in accordance with ISO 14971. |
| Design Verification | Design verification tests must be identified, performed, and meet their acceptance criteria to ensure the device performs as intended and meets specifications. | Design verification tests were identified, performed, and met. |
| Software Validation | Must be carried out in accordance with FDA Guidance Document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (May 2005)". | Software validation was carried out and met the requirements. |
| Reprocessing Validation | Must be carried out in accordance with FDA Guidance Document "Processing/Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling Draft Guidance (May 2011)". | Reprocessing validation was carried out and met the requirements. |
| Usability | Must demonstrate safe and effective use in a clinical environment by experienced physicians. | Device usability was carried out by five experienced GI physicians in a US medical center, demonstrating that the device meets its specifications. |
| Compliance with Standards | Must comply with relevant international and national standards for medical devices, electrical safety, biocompatibility, and sterilization (e.g., AAMI / ANSI ES 60601-1, IEC 60601-1-2, IEC 60601-2-18, IEC 62304, ISO 10993, ISO 8600, ASTM E 1837). | The device was tested against and relies upon a comprehensive list of standards, including AAMI / ANSI ES 60601-1, IEC 60601-1-2, IEC 60601-2-18, IEC 62304, ISO 10993 (Parts #1, #5, #10, #12), ISO 8600 (Parts 1, 3, 4, 6), and ASTM E 1837-96. |
Detailed Responses to Specific Questions:
A table of acceptance criteria and the reported device performance:
- Please refer to the table above. The acceptance criteria are largely implied by demonstrating substantial equivalence to the predicate device and compliance with applicable standards.
Sample size used for the test set and the data provenance (e.g., country of origin of the data; retrospective or prospective):
- Usability Study: Five experienced GI physicians participated in the usability testing.
- Data Provenance: The usability testing was conducted in a U.S. medical center. The nature of this testing (observational use in a clinical environment) makes it prospective rather than retrospective.
- For other tests (Risk Analysis, Design Verification, Software Validation, Reprocessing Validation, Bench Data), the sample sizes are not explicitly stated, as these typically refer to engineering verification and validation activities rather than clinical study cohorts. Per the submission, "device safety and performance were verified by EndoChoice Innovation Center Ltd. and accredited third party laboratories" (EndoChoice Innovation Center is located in Israel; the third-party laboratory locations are not specified).
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
- For the usability study, five experienced GI physicians were used. Their specific years of experience are not detailed, but "experienced" implies sufficient expertise for evaluating a gastroscope.
- For other technical verification tests, "ground truth" is established by engineering specifications, regulatory standards, and validated test methods, rather than expert consensus on diagnostic interpretations.
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document does not describe an adjudication method for the usability study. It states that device usability was "carried out by means of testing within a clinical environment ... by five experienced GI physicians." It implies a consensus or individual assessment by these physicians that the device meets its specifications for usability. Given the nature of a 510(k) for an endoscope, a formal adjudication process for diagnostic accuracy (like in an AI study) is not typically required.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI versus without AI assistance:
- No, an MRMC comparative effectiveness study was not done. This submission is for a medical device (gastroscope) and not an AI-powered diagnostic tool. The focus is on demonstrating substantial equivalence to a predicate device, not on improving human reader performance with AI assistance.
If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not applicable. This submission is for an endoscope system, not an AI algorithm. There is no standalone algorithm performance to report.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the usability study, the "ground truth" was the device's adherence to its specifications and satisfactory performance in a clinical environment, as assessed by experienced GI physicians.
- For bench testing and design verification, the "ground truth" comprises engineering specifications, performance standards (e.g., scope angulation, field of view, depth of field), and regulatory compliance requirements.
The sample size for the training set:
- Not applicable. As this is not an AI device, there is no training set mentioned or required.
How the ground truth for the training set was established:
- Not applicable. As there is no training set, there is no ground truth establishment for it.
In conclusion, this 510(k) submission for the PeerScope System (Model HG) demonstrates its safety and effectiveness primarily through bench testing, adherence to recognized standards, and a small-scale usability study, all aimed at proving substantial equivalence to a previously cleared predicate device. It does not involve the type of clinical performance study with acceptance criteria and ground truth validation that would be typical for a novel AI-driven diagnostic device.