Search Results
Found 65 results
510(k) Data Aggregation
(234 days)
| Attribute | Subject Device | Predicate Device | Comparison |
|---|---|---|---|
| Regulatory Class | II | II | Same |
| Regulation Number | 21 CFR 876.1500 | 21 CFR 876.1500; 21 CFR 882.1480 | See note |

Note: the subject device does not provide visualization within the central nervous system, meaning that regulation number 21 CFR 882.1480 does not apply to it.
The Confocal Microprobe Imaging System can enter the human body cavity or surgical channel through an endoscope, allowing confocal laser imaging of the microstructure of tissues, including but not limited to the identification of cells, vessels and their organization or architecture.
The working principle of the Confocal Microprobe Imaging System is based on probe-based confocal laser endomicroscopy (pCLE) technology, which combines confocal imaging with fiber-bundle imaging. The Fiber Optic Microprobe can enter the body cavity through the endoscope's working channel and contact the tissue through the objective lens at the front end of the Fiber Optic Microprobe. The imaging principle of the device is as follows:
The scanning beam emitted by the laser in the Laser Scanning System passes through a pinhole to form a point light source, which is transmitted through the Fiber Optic Microprobe to the focal plane within the fluorescently labeled tissue. The fluorescent substance in the tissue emits fluorescence under laser excitation. The fluorescence signal is collected by the objective lens at the front end of the Fiber Optic Microprobe, transmitted back through the fiber bundle to the detecting hole, and then to the photomultiplier tube (PMT) of the photoelectric detector, which passes it to the host for signal analysis and processing. Finally, the image is formed on the computer monitor after software processing.
Light emitted from above and below the focal plane of the tested tissue produces a spot at the detecting hole that is much larger than the hole itself, so only a very small fraction of that light reaches the detector. The farther a point is from the focal plane of the objective lens, the larger the diffuse spot it produces at the detecting hole and the less energy passes through (falling from roughly 10% toward 1% and asymptotically to 0%), so out-of-focus tissue generates a correspondingly weaker signal at the detector and has a smaller impact on the image. Because confocal microscopy images only the focal plane of the target tissue, it effectively rejects diffracted and scattered light, giving it higher resolution than ordinary microscopy; the technique is widely used in biology.
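The pinhole-rejection principle described above can be illustrated with a toy geometric model. Note that the pinhole diameter and spot-growth rate below are arbitrary illustration values, not device specifications: the farther a plane sits from focus, the larger its spot at the detecting hole and the smaller the fraction of its light that is admitted.

```python
# Toy geometric model of confocal pinhole rejection (illustrative only;
# parameters are assumptions, not specifications of this device).

def pass_fraction(defocus_um, pinhole_diam_um=10.0, growth_per_um=2.0):
    """Approximate fraction of light from a plane `defocus_um` away from
    the focal plane that passes the detecting hole. The defocused spot
    diameter is modeled as growing linearly with defocus."""
    spot_diam = pinhole_diam_um + growth_per_um * abs(defocus_um)
    # Energy admitted scales with (pinhole area) / (spot area).
    return min(1.0, (pinhole_diam_um / spot_diam) ** 2)

if __name__ == "__main__":
    for z in (0, 5, 20, 100):
        print(z, round(pass_fraction(z), 4))
```

In-focus light (defocus 0) passes in full, while light from 100 µm away falls below 1%, matching the qualitative 10% → 1% → 0% decay the text describes.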
It appears that the provided FDA 510(k) Clearance Letter does not contain detailed information about a clinical study involving human readers or a specific "acceptance criteria" table with reported performance metrics for an AI component.
The document discusses the Confocal Microprobe Imaging System, which is a hardware device for imaging tissues. While it mentions "software" and "cybersecurity," these sections focus on general software validation and cybersecurity considerations, not the performance evaluation of an AI algorithm designed to interpret or analyze the images beyond the device's basic function.
The "Performance Testing" section states that "Performance Verification Test has been conducted in accordance with the internal performance requirements stated in the Performance Validation Scheme (HRD0003932 & HRD0004124)" and lists technical performance requirements like "Field of view, Horizontal resolution, Depth of Observation, Frame rate." These relate to the imaging system's hardware performance, not an AI's diagnostic accuracy.
Therefore, based on the provided FDA 510(k) clearance letter, I cannot fulfill your request for detailed information regarding acceptance criteria for an AI component and the study that proves the device meets those criteria. The letter primarily addresses the clearance of the Confocal Microprobe Imaging System (hardware), focusing on its substantial equivalence to predicate devices based on technological characteristics, biocompatibility, reprocessing, electrical safety, and general software/cybersecurity validation.
There is no mention of an AI-specific component, its performance criteria, or any clinical studies (e.g., MRMC studies) pertaining to AI algorithm performance within this document. The "Performance Testing" section refers to the optical and functional performance of the imaging system itself, not the diagnostic performance of an AI that might interpret the images generated.
If such an AI component exists, its performance evaluation would typically be described in a separate section with specific metrics like sensitivity, specificity, or AUC, and details about the study design (test set, ground truth, expert adjudication, etc.). This information is absent in the provided text.
(197 days)
Device Name: HJY VisualNext 3D Endoscopic Vision System (HDSES01 / HDSES301)
Trade name: HJY VisualNext 3D Endoscopic Vision System
Product code: GWG
Regulation number: 21 CFR 882.1480
Device class: II
510(k) number: K222735
The provided FDA 510(k) Clearance Letter for the HJY VisualNext 3D Endoscopic Vision System focuses on the device's substantial equivalence to a predicate device, as opposed to a detailed standalone or comparative effectiveness study of an AI-powered diagnostic device. Therefore, many of the requested details, particularly those related to AI algorithm performance (e.g., sample size for test/training sets, data provenance, ground truth establishment, MRMC studies, and effect size of human reader improvement with AI assistance), are not present in this document.
However, based on the information available, here's a breakdown of the acceptance criteria and the study that proves the device meets them:
Device Type: The HJY VisualNext 3D Endoscopic Vision System is an endoscopic vision system, not an AI-powered diagnostic device. Its primary function is to provide 3D visualization during surgical procedures, differentiating it from an AI-based system that might perform automated image analysis or diagnosis.
Acceptance Criteria and Reported Device Performance:
The document outlines acceptance criteria implicitly through the performance of various non-clinical tests. The criteria are met if the device "Pass[es]" the respective tests and demonstrates performance metrics comparable to predefined standards or the predicate device.
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| Sterility (Device must be sterile as labeled) | Testing completed in accordance with FDA guidance. (Result: Met) |
| Biocompatibility (Safe for contact with neural tissue, bone, dentin, blood) | All acceptance criteria for cytotoxicity, sensitization, irritation/intracutaneous reactivity, acute systemic toxicity, neurotoxicity, and hemocompatibility met. (Result: Favorable biocompatibility profile) |
| Software Validation (Software functions as intended and safely) | Completed in accordance with FDA guidance document "Content of Premarket Submissions for Device Software Functions". (Result: Met requirements) |
| Electromagnetic Compatibility (EMC) & Thermal Safety (Meets safety standards for electrical and thermal properties) | Completed in accordance with IEC 60601-1, IEC 60601-1-2, IEC 60601-2-18. (Result: Met standards) |
| Photobiological Safety (No hazardous light emission) | Completed in accordance with IEC 62471. (Result: Met standards) |
| Bench Testing - Image Quality & Performance (FOV, DOV, DOF, optical magnification, distortion, image intensity uniformity, signal-to-noise ratio, sensitivity, resolution (MTF) of aged and non-aged devices comparable to predicate) | Both aged and non-aged subject devices met the predefined acceptance criteria, demonstrating consistent image quality metrics comparable to the predicate device. (Result: Pass) |
| Animal Study Testing - 3D Visualization Performance (Clear and stable 3D visualization of brain and spine tissues, with resolution, color representation, contrast, and noise comparable to predicate, and compatibility with 3D monitor) | The subject device provided clear and stable 3D visualization of brain and spine tissues across all tested conditions. Image quality parameters, including resolution, color representation, contrast, and noise, met the predefined acceptance criteria when compared to the predicate device. Testing also validated compatibility with the Sony LMD-2451MT 3D Monitor. (Result: Pass) |
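Among the bench-test metrics, signal-to-noise ratio is commonly estimated by imaging a uniform target repeatedly and comparing the mean signal to the temporal noise. A minimal sketch of that general approach (the readings and function name below are hypothetical, not taken from the submission):

```python
# Illustrative SNR estimate from repeated readings of a uniform patch.
# This is a generic measurement sketch, not the submitter's protocol.
import math
import statistics

def snr_db(samples):
    """SNR in decibels: mean signal over temporal standard deviation."""
    mean = statistics.fmean(samples)
    noise = statistics.stdev(samples)
    return 20 * math.log10(mean / noise)

if __name__ == "__main__":
    readings = [200, 202, 198, 201, 199, 200]  # hypothetical sensor values
    print(round(snr_db(readings), 1))
```

A higher value means the signal dominates frame-to-frame noise; aged and non-aged units would each be required to stay above a predefined threshold.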
Study Details (for the Non-Clinical Performance Testing):
Since the device is a vision system and not an AI algorithm, the traditional "test set" and "training set" concepts as applied to AI models do not directly apply in the same way. The non-clinical testing evaluates the physical and functional performance of the device itself.
- Sample size used for the Test Set and Data Provenance:
- Bench Testing: The sample size is not explicitly stated, but it involved "aged and non-aged subject devices" and direct comparison to the predicate device. The data provenance would be laboratory-generated data from device performance measurements.
- Animal Study Testing: "A porcine animal model" was used. The specific number of animals or trials within the animal study is not provided. The data provenance is described as being from a porcine animal model. This would be prospective data collection, specifically for this study.
- Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- This metric is not applicable in the context of this device's testing. The "ground truth" for a vision system's performance is typically established by objective physical measurements (e.g., MTF for resolution, calibrated light meters for illumination) and expert subjective evaluation of visual quality in a controlled setting, rather than a consensus on diagnostic findings. The document does not specify the number or qualifications of any human evaluators involved in the image quality assessment during bench or animal testing, only that the data "met the predefined acceptance criteria."
- Adjudication Method for the Test Set:
- Not applicable as the testing involves objective performance measurements and comparison against predefined criteria, not diagnostic interpretations requiring adjudication.
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- No. An MRMC study is typically performed for diagnostic devices where human readers interpret medical images, often with and without AI assistance, to measure diagnostic accuracy and efficiency. This device is a surgical visualization tool, not a diagnostic imaging device.
- If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study was done:
- This is not an AI algorithm, so the concept of "standalone performance" of an algorithm is not applicable. The core function of the device is to provide images for human viewing. The non-clinical tests assess the device's ability to produce high-quality images and function as intended.
- The Type of Ground Truth Used:
- For Bench Testing: Objective physical measurements (e.g., resolution targets, light sensors, distortion grids) served as the "ground truth" for parameters like FOV, DOF, resolution, etc., along with comparison to the known performance of the predicate device.
- For Animal Study Testing: The "ground truth" for image quality (resolution, illumination, color representation, contrast, noise) was likely based on objective evaluation against predefined standards and comparative assessment by skilled observers (e.g., surgeons, imaging specialists) who could judge the clarity and utility of the visualization in an anatomical context, compared to the predicate device's 2D view. Anatomical structures within the porcine model served as the "true" objects being visualized.
- The Sample Size for the Training Set:
- Not applicable. This device is a hardware system, not an AI algorithm trained on data. There is no "training set" in the context of machine learning.
- How the Ground Truth for the Training Set was Established:
- Not applicable, as there is no training set.
(28 days)
AURORA Surgiscope System (ASX15/60); AURORA Surgiscope System (ASX15/80)
Regulation Number: 21 CFR 882.1480
| Classification Name | Endoscope, Neurological |
| Regulation Number | 21 CFR 882.1480 |
| Device Type | Neurological endoscope |
The AURORA Surgiscope System is intended for use in neurosurgery and endoscopic neurosurgery and pure neuroendoscopy (i.e. ventriculoscopy) for visualization, diagnostic, and/or therapeutic procedures, such as ventriculostomies, biopsies and removal of cysts, tumors, and other obstructions.
The Aurora Surgiscope System consists of two components: (1) a sterile, single use, Sheath with integrated illumination LEDs and camera, with an Obturator, and (2) a non-sterile, reusable control unit, Image Control Box (ICB).
The Sheath is intended to provide access to the surgical site by acting as the insertable portion of the device, as well as the instrument channel to accommodate other surgical tools. Depth markers are present along the length of the Sheath for user reference. The proximal end of the Sheath also incorporates a Tab, which serves as the location for fixation arm to hold the device.
At the proximal end of the Sheath is the Imager, which comprises the following components: LEDs (light emitting diodes), camera (and optical components), and focus knob.
- The LEDs provide illumination to the surgical field by directing light down the Sheath, along the working channel.
- The camera captures videos of the surgical field.
- The focus knob allows the user to adjust the focus of the camera to obtain the desired image quality.
To facilitate insertion of the Sheath into the surgical site, an Obturator is provided with the device. During device insertion, the Obturator is fully inserted into the Sheath, and the entire AURORA Surgiscope is advanced to the desired surgical location. The distal end of the Obturator is conical in shape to minimize tissue damage during device insertion. In addition, the proximal handle of the Obturator is designed to accommodate various stereotactic instruments for neuronavigation, which can further aid in device placement. The Obturator is removed after insertion.
The ICB is a non-sterile device that provides three main functions in the AURORA Surgiscope System:
- To power the LEDs and camera of the AURORA Surgiscope.
- To relay the video feed captured by the AURORA Surgiscope camera to a connected Medical Grade Surgical Monitor for real-time image visualization.
- To allow the user to make adjustments to the displayed video feed (e.g., contrast, brightness), and to vary the light output of the LEDs.
The user interface is a membrane keypad with buttons located on the ICB that can be depressed for image adjustment, such as zoom, contrast, and brightness. The connection ports to the AURORA Surgiscope, Medical Grade Surgical Display Monitor, and Power are located on the side of the ICB, along with the ON/OFF switch.
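Video adjustments of the kind the ICB exposes (brightness, contrast) are commonly implemented as a per-pixel linear point operation. A hedged sketch under that assumption; the function, parameters, and values below are illustrative, not taken from the 510(k):

```python
# Generic brightness/contrast point operation on 8-bit pixel values.
# Illustrative only; not the ICB's actual image pipeline.

def adjust(pixel, contrast=1.0, brightness=0):
    """out = clip(contrast * in + brightness) to the 0-255 range."""
    return max(0, min(255, round(contrast * pixel + brightness)))

if __name__ == "__main__":
    frame = [10, 128, 250]  # hypothetical pixel values
    print([adjust(p, contrast=1.2, brightness=10) for p in frame])
```

Dark pixels are lifted slightly while bright pixels saturate at 255, which is why aggressive contrast settings can wash out highlights.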
The provided FDA 510(k) clearance letter for the AURORA Surgiscope System (K250752) does not contain the detailed information necessary to fully answer all the questions regarding acceptance criteria and the study that proves the device meets them.
The document primarily focuses on demonstrating substantial equivalence to a predicate device (K201840) based on technological characteristics and functional requirements. It explicitly states that "No clinical test/studies were required or performed as all conducted performance tests appropriately support a determination of substantial equivalence compared with the predicate device (K201840)."
Therefore, for many of the requested points, the answer will be that the information is not available in the provided text.
Here's a breakdown of what can and cannot be answered based on the input:
1. A table of acceptance criteria and the reported device performance
The document mentions "functional requirements" and "performance tests" but does not detail specific acceptance criteria or quantitative performance results. It states only that the device meets these requirements after sterilization, after environmental and transit conditioning, and after aging equivalent to a 1-year shelf life.
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Functional requirements after 2X EO sterilization | Device meets functional requirements |
| Functional requirements after environmental and transit conditioning | Device meets functional requirements |
| Functional requirements after equivalent of 1-year claimed shelf-life | Device meets functional requirements |
| Obturator handle strength (improved connection) | Met (due to design modification with two bridge features) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample size for test set: Not specified. The document states "non-clinical testing was performed" but does not detail the number of units tested.
- Data provenance: Not specified. The nature of the testing (functional performance, sterilization effects) suggests it would be laboratory testing rather than patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable / Not specified. Since no clinical studies were performed, there was no need for expert review of clinical data to establish ground truth. The "ground truth" here would be the successful function of the device in engineering tests.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable / Not specified. No clinical data was being adjudicated.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No. The document explicitly states: "No clinical test/studies were required or performed". This device is a surgical endoscope, not an AI-assisted diagnostic tool, so an MRMC study comparing human readers with and without AI assistance is not relevant to its clearance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This is a physical medical device (endoscope), not a standalone algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For the non-clinical performance tests mentioned, the "ground truth" would be the engineering specifications and functional integrity of the device. This is typically verified through direct measurement, visual inspection, and functional tests (e.g., image quality assessment, illumination intensity, camera function, mechanical integrity) against predefined specifications. It is not based on clinical "ground truth" like pathology or expert consensus.
8. The sample size for the training set
- Not applicable. This notice does not describe an AI/machine learning device that requires a training set.
9. How the ground truth for the training set was established
- Not applicable. No training set was involved for this device.
(324 days)
110 Centennial, CO 80112
Re: K233391
Trade/Device Name: cCeLL - In vivo
Regulation Number: 21 CFR 882.1480

| Classification Name | Endoscope, Neurological |
| Regulation Number | 21 CFR 882.1480; 21 CFR 876.1500 |
| Device Class | II |
| Product Codes | GWG, OWN |
The cCeLL - In vivo is an optic scanner probe placed in direct contact with tissue to create images of the internal microstructure of tissues. It is indicated for use with indocyanine green (ICG) for fluorescence imaging as an aid in the visualization of vessels (micro- and macro-vasculature) and blood flow in the cerebrovasculature before, during, or after cranial diagnostic and therapeutic procedures, such as tumor biopsy and resection.
The cCeLL - In vivo is used to provide real-time endoscopic images of near-infrared (NIR) indocyanine green (ICG) dye fluorescence during minimally invasive neurosurgery in adults.
The overall system includes a 6 mm Pixection ICG/NIR Endoscope (0°) for use in neurosurgery, a light source for emission of NIR illumination, a photo-multiplier tube capable of capturing NIR imaging, and a sterile probe sheath intended for maintaining a sterile barrier between the subject device and the patient. The cCeLL - In vivo can be used with any medical grade high definition (HD) monitor with a DVI-D or RGB input. The patient contacting components contact tissue or bone with a duration of less than 24 hours.
The provided text describes the cCeLL - In vivo device and its performance testing to demonstrate substantial equivalence to a predicate device. However, it does not include specific quantitative acceptance criteria or a dedicated study demonstrating the device meets those criteria in the typical sense of showing numerical thresholds for performance. Instead, the performance testing focuses on equivalence to a predicate device.
Here's an analysis of the provided information:
Acceptance Criteria and Reported Device Performance
The document describes various performance tests conducted to demonstrate the substantial equivalence of the cCeLL - In vivo device to its predicate. The "acceptance criteria" are implied to be achieving performance comparable to or equivalent to the predicate device, or meeting recognized safety and design specifications. The reported "device performance" is consistently "PASS" for all tests, indicating that the device met these implicit criteria of equivalence or specification conformance.
| Test | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Image Sensitivity Analysis | Ability to visualize cerebral microstructures and vascular systems, including tumor tissue, surrounding normal tissue, and blood vessels using clinically relevant ICG concentrations. | PASS |
| Image Comparison Analysis | Visualize vessels of various sizes and changes in blood flow with image quality comparable to the predicate device. | PASS |
| Detection Linearity | Equivalent performance to the predicate device in capturing fluorescence intensity. | PASS |
| Geometric Distortion | Equivalent performance to the predicate device regarding geometric distortion. | PASS |
| Dynamic Range | Equivalent performance to the predicate device in gradation performance across its dynamic range. | PASS |
| Illumination & Detection Uniformity | Equivalent performance to the predicate device in illumination and detection uniformity (average intensity of fluorescent dots, illumination uniformity). | PASS |
| SNR & Sensitivity | Equivalent performance to the predicate device in signal-to-noise ratio (SNR) and sensitivity. | PASS |
| Video Latency | Equivalent performance to the predicate device in dynamic vision capability (initialization and stoppage of motion on screen). | PASS |
| Sterile Probe Sheath Tear Resistance | Withstand forces greater than those expected during clinical use for breaking strength at different joint interfaces on aged samples. | PASS |
| Electrical Safety / EMC | Conformity to IEC 60601-1:2005 + A1:2012 + A2:2021 and IEC 60601-1-2:2014 + A1:2020. | PASS |
| Software / Cybersecurity (Enhanced Level) | Conformity to FDA's "Content of Premarket Submissions for Device Software Functions" (June 14, 2023) and "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions, Postmarket Management of Cybersecurity in Medical Devices". | PASS |
| Biocompatibility | Biocompatibility of patient-contacting components (Sterile Probe Sheath) according to FDA's guidance and ISO 10993-1 for various endpoints (Cytotoxicity, Sensitization, Intracutaneous reactivity, Acute systemic toxicity, Material-mediated pyrogenicity, Hemocompatibility (indirect), Neurotoxicity). | PASS |
| Sterility / Shelf Life | Sterilization and shelf-life testing demonstrated the device is and can remain sterile and functional for the documented shelf life, conforming to ISO 11135:2014 + Amd1:2018, ASTM F1980-21, and ISO 11607-1:2019. | PASS |
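Detection-linearity testing of the sort listed above is commonly evaluated by fitting measured fluorescence intensity against known dye concentration and checking the goodness of fit. An illustrative least-squares sketch (this is an assumed generic procedure, not the submitter's actual method, and the concentrations and intensities are made-up demonstration values):

```python
# Generic ordinary-least-squares linearity check (illustrative only).

def linear_fit(xs, ys):
    """Return (slope, intercept, r_squared) of the best-fit line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

if __name__ == "__main__":
    xs = [0.0, 1.0, 2.0, 4.0]          # hypothetical ICG concentrations
    ys = [5.0, 105.0, 205.0, 405.0]    # hypothetical measured intensities
    slope, intercept, r2 = linear_fit(xs, ys)
    print(round(slope, 2), round(intercept, 2), round(r2, 4))
```

An R² close to 1.0 over the clinically relevant concentration range would support a claim of linear detection comparable to the predicate.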
Study Details for Performance Testing:
The document does not detail a single comprehensive "study" but rather a series of "Performance Testing" activities.
- Sample size used for the test set and the data provenance:
- For "Image Sensitivity Analysis" and "Image Comparison Analysis," a "small animal model" was used. The exact number of animals is not specified.
- For other tests like Detection Linearity, Geometric Distortion, Dynamic Range, Illumination & Detection Uniformity, SNR & Sensitivity, and Video Latency, the tests were conducted using the subject and predicate devices, likely in a laboratory setting, without specific mention of "samples" in a patient data context.
- Data provenance is not explicitly stated as retrospective or prospective, or country of origin for the animal model. Given the context of equivalence testing against a predicate device, it is likely that these were controlled laboratory/pre-clinical tests.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For the "Image Comparison Analysis," the results state: "as assessed across users with a range of experience." This implies human assessment, but the number of experts, their specific qualifications (e.g., radiologist with X years of experience), and how their input established ground truth are not specified.
- For all other tests, no experts were mentioned for establishing ground truth; the "ground truth" was likely defined by established physical properties or engineering measurements (e.g., optical power, brightness, tear resistance force).
- Adjudication method for the test set:
- The document does not specify any adjudication method (e.g., 2+1, 3+1, none). For the "Image Comparison Analysis" where "users with a range of experience" assessed images, the method of combining their assessments or resolving discrepancies is not provided.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The document describes performance testing for substantial equivalence, not a comparative effectiveness study involving human readers with and without AI assistance. The device is a medical imaging device, but the testing focuses on its technical performance and equivalence to a predicate, not on improving human reader performance.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done:
- The device is described as an "optic scanner probe" and a "medical fluorescence imaging device." The tests described are primarily focused on the intrinsic technical performance of this imaging device (e.g., image sensitivity, detection linearity, dynamic range, SNR, video latency). While "Image Comparison Analysis" involved "users," most tests, particularly the quantitative ones, appear to be standalone algorithm/device performance tests. The device itself is an imaging tool, not one that implies an "algorithm only" component separate from the imaging hardware.
- The type of ground truth used:
- For "Image Sensitivity Analysis" and "Image Comparison Analysis," the ground truth was inferred from the "visualization of cerebral microstructures and vascular systems" in a small animal model, which represents a biological/physiological ground truth.
- For other engineering-focused tests (Detection Linearity, Geometric Distortion, Dynamic Range, Illumination & Detection Uniformity, SNR & Sensitivity, Video Latency, Sterile Probe Sheath Tear Resistance, Electrical Safety / EMC), the ground truth would be based on physical measurements and established standards/specifications.
- For Biocompatibility and Sterility/Shelf Life, the ground truth is adherence to validated international standards and protocols.
- The sample size for the training set:
- The document does not provide any information regarding a "training set" or "sample size for the training set." This suggests the device (cCeLL - In vivo) is an imaging system, likely based on established optical and fluorescence imaging principles, and not necessarily an AI/machine learning product that requires a distinct training dataset in the typical sense for its core functionality.
- How the ground truth for the training set was established:
- Since no training set information is provided, this question is not applicable based on the given text.
(351 days)
Trade/Device Name: Digital ClarusScope System; Digital NeuroPEN System
Regulation Number: 21 CFR 882.1480

Predicate Device:

| Classification Name | Endoscope, Neurological |
| Product Code | GWG |
| Regulation Number | 21 CFR 882.1480 |
| Device Type | Neurological endoscope |
| Device Class | II |
The Digital ClarusScope System and Digital NeuroPEN System are intended for use in neurosurgery, endoscopic neurosurgery, and ventriculoscopy for visualization of ventricles and structures within the brain during neurological surgical procedures, diagnostic and/or therapeutic procedures such as ventriculostomies, biopsies and removal of cysts, tumors and other obstructions.
The Digital ClarusScope System and Digital NeuroPEN System are neurological endoscopes which provide a light source, camera, and HDMI output for visualization. Irrigation is provided for flushing during the procedure. The working channel facilitates the use of tools necessary for neurological procedures (Digital ClarusScope versions only). The Digital ClarusScope and Digital NeuroPEN are intended to be used with the non-sterile, reusable Clarus Digital Control Module with standard HDMI video output. The proximal end of the Digital ClarusScope and Digital NeuroPEN terminates in two fittings: the endoscope connector, which attaches to the Clarus Digital Control Module and interfaces with a standard off-the-shelf HDMI video monitor (not provided by Clarus and not part of this 510(k) application); and an irrigation extension tube with a female Luer lock connector.
The provided text describes the regulatory clearance of the Clarus Medical Digital ClarusScope System and Digital NeuroPEN System, but it does not contain the specific acceptance criteria or a study proving that the device meets those criteria, as typically found in clinical performance study results.
The document details non-clinical performance data and a comparison to predicate and reference devices to establish substantial equivalence. It lists various tests performed, such as dimensional verification, mechanical strength, functional tests like fluid patency and image output, simulated use, sterility validation, shelf-life, environmental conditioning, distribution, biocompatibility, electrical safety, and electromagnetic compatibility.
However, it explicitly states:
"H. Non-Clinical Performance Data: The Digital ClarusScope, Digital NeuroPEN, and Digital Control Module have been thoroughly tested through verification of product specifications and user requirements. The following quality assurance and performance measures were applied during the development of the systems:
...
- Performance Testing (Verification):
  - Endoscope dimensional verification
  - Mechanical strength requirements
- Functional Tests:
  - Endoscope fluid patency
  - System image output
- Simulated Use Test:
  - Interconnection testing between endoscope and control module and accessories
  - Compatibility with introducer
  - Compatibility of endoscope working channel with accessory devices"
This section indicates that performance testing was conducted for verification, but it does not provide:
- A table of acceptance criteria and reported device performance against those criteria.
- Details of a clinical study with patient data, ground truth establishment, or expert reviews, which would be typical for proving performance in a diagnostic or image-interpretation context (e.g., accuracy, sensitivity, specificity).
- Information regarding sample sizes for test sets, data provenance, number or qualifications of experts, or adjudication methods for establishing clinical ground truth.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done to assess human reader improvement with AI assistance (the device is a visualization system and not explicitly described as an AI-enabled diagnostic aid in this document).
- Details on standalone algorithm performance.
- The type of ground truth used for performance evaluation in a clinical context.
- Sample size or ground truth establishment for a training set, as this document focuses on the device performance and not the performance of an embedded AI algorithm that would typically require such training data.
Based on the provided text, the device primarily focuses on visualization and mechanical/electrical safety and functionality, not on diagnostic accuracy based on image interpretation by an algorithm. Therefore, the information requested regarding acceptance criteria and clinical study details for diagnostic performance is not present in this document.
The document's conclusion of "Substantial Equivalence" is based on "performance testing, design and non-clinical testing," which aligns with the details provided in section H.
Classification Name: 21 CFR 876.1500 (Endoscope and Accessories); 21 CFR 882.1480
KARL STORZ ICG Imaging System
The KARL STORZ ICG Imaging System is intended to provide real-time visible (VIS) and near-infrared (NIR) fluorescence imaging.
Endoscopic ICG System
Upon intravenous administration and use of ICG consistent with its approved label, the KARL STORZ Endoscopic ICG System enables surgeons to perform minimally invasive surgery using standard endoscopic visible light as well as visual assessment of vessels, blood flow and related tissue perfusion in adults and pediatric patients ≥ 1 month of age, and of at least one of the major extra-hepatic bile ducts (cystic duct, common bile duct and common hepatic duct) in adults and pediatric patients ≥ 12 years of age, using near infrared imaging in accordance with the appropriately indicated endoscope. Fluorescence imaging of biliary ducts with the KARL STORZ Endoscopic ICG System is intended for use with standard of care white light and, when indicated, intraoperative cholangiography. The device is not intended for standalone use for biliary duct visualization.
Additionally, the KARL STORZ Endoscopic ICG System enables surgeons to perform minimally invasive cranial neurosurgery in adults and pediatric patients and endonasal skull base surgery in adults and pediatric patients > 6 years of age using standard endoscopic visible light as well as visual assessment of vessels, blood flow and related tissue perfusion using near infrared imaging.
Upon interstitial administration and use of ICG consistent with its approved label, the KARL STORZ Endoscopic ICG System is used to perform intraoperative fluorescence imaging and visualization of the lymphatic system, including lymphatic vessels and lymph nodes.
VITOM II ICG System
The KARL STORZ VITOM II ICG System is intended for capturing fluorescent images for the visual assessment of blood flow, as an adjunctive method for the evaluation of tissue perfusion, and related tissue-transfer circulation in tissue and free flaps used in plastic, micro- and reconstructive surgical procedures. The VITOM II ICG System is intended to provide a magnified view of the surgical field in standard white light.
KARL STORZ Image1 S CCU
The Image1 S camera control unit (CCU) in combination with either a compatible camera head or an appropriately indicated video endoscope is intended for real-time visualization, image recording and documentation during general endoscopic and microscopic procedures in adults and pediatric patients.
KARL STORZ ICG Imaging System
The KARL STORZ ICG Imaging System is intended to provide real-time visible (VIS) and near-infrared (NIR) fluorescence imaging for general surgical sites including the abdomen, bile ducts, brain/skull base, and the lymph nodes/lymphatic vessels. Components of the system include:
Scopes:
- 3D TIPCAM®1 Rubina videoendoscope: 26006ACA/BCA, 26616ACA/BCA
- Rigid Endoscope: 26003ACA/ARA/BCA/BRA/FCA/FRA/FCEA/FREA, 26046ACA/ARA/BCA/BRA/FCA/FRA, 28164AC/BC/FC
- VITOM II ICG/NIR Telescope: 20916025AGA
Light Source:
- Power LED Rubina (TL400)
- Foot Switch (UF101)
- Fiber Optic Light Cable (495TIP/NCSC/NAC)

Camera Head:
- Image1 S 4U Rubina (TH121)

Camera Control Unit (CCU):
- Image1 S Connect II (TC201US)
- Image1 S 4U-Link (TC304US)
KARL STORZ Image1 S CCU
The KARL STORZ IMAGE1 S Camera Control Unit (CCU) is a modular CCU that consists of the Image1 S Connect and Connect II modules and the link modules. A Connect module can be connected to a minimum of one and a maximum of three link modules. This modularity enables customers to customize their Image1 S system to their specific video needs.
The Image1 S includes, but is not limited to, the following features:
- Brightness control
- Enhancement control
- Automatic light source control
- Shutter control
- Image/video capture
- Seven increments of zoom from 1-2.5x and adaptive zoom

Modules of the Image1 S CCU include:
- Image1 S Connect (TC200US)
- Image1 S Connect II (TC201US)
- Image1 S H3-Link (TC300US)
- Image1 S X-Link (TC301US)
- Image1 S D3-Link (TC302US)
- Image1 S 4U-Link (TC304US)

Accessories to the Image1 S CCU include:
- Microscope Footswitch (TC019)
- Image1 S Pilot (TC014)
- LINK Cable (TC011, TC012)
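The zoom range above comes from the CCU's feature list; the document does not describe how the zoom is implemented. As a generic illustration only, digital zoom on a video frame is commonly a center-crop followed by resampling back to the original resolution. The sketch below (all names hypothetical, nearest-neighbor resampling chosen for brevity) shows the idea:

```python
import numpy as np

def digital_zoom(frame: np.ndarray, factor: float) -> np.ndarray:
    """Center-crop the frame by 1/factor, then resample the crop back to the
    original size using nearest-neighbor index maps."""
    h, w = frame.shape[:2]
    ch, cw = int(h / factor), int(w / factor)          # crop dimensions
    y0, x0 = (h - ch) // 2, (w - cw) // 2              # centered crop origin
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    rows = (np.arange(h) * ch / h).astype(int)         # nearest-neighbor row map
    cols = (np.arange(w) * cw / w).astype(int)         # nearest-neighbor column map
    return crop[rows][:, cols]

# Hypothetical 64x64 8-bit test frame.
frame = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
zoomed = digital_zoom(frame, 2.5)   # output keeps the original 64x64 size
```

A real CCU would typically use higher-quality interpolation (bilinear or better), but the crop-then-resample structure is the same.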
The provided text describes the regulatory submission for the KARL STORZ ICG Imaging System and KARL STORZ Image 1S Camera Control Unit.
However, the document explicitly states that "Clinical testing was not required to demonstrate substantial equivalence to the predicate devices." This means that the submission does not contain information about a study proving the device meets acceptance criteria based on clinical performance metrics (like sensitivity, specificity, accuracy, or human reader improvement with AI assistance).
The acceptance criteria and performance data mentioned in the document are non-clinical performance data, specifically related to electrical safety, electromagnetic compatibility, and software verification and validation. This type of information is usually presented as compliance with established standards rather than a clinical study with a test set, ground truth, or expert readers.
Therefore, most of the requested information regarding clinical study design (sample size, data provenance, expert ground truth, adjudication, MRMC studies, standalone performance, training set details) cannot be extracted from this document because such a clinical study was not required or provided for this specific submission as per the FDA's determination of substantial equivalence to predicate devices (K212695 and K201135).
Here's what can be extracted and inferred from the document:
1. A table of acceptance criteria and the reported device performance
Based on the "Non-Clinical Performance Data" section, the acceptance criteria are compliance with relevant safety and software standards.
| Acceptance Criteria Category | Specific Standard/Requirement | Reported Device Performance/Compliance |
|---|---|---|
| Electrical Safety | IEC 60601-1:2005 + A1:2012 + A2:2021, Medical electrical equipment - Part 1: General requirements for basic safety and essential performance | Electrical safety testing was conducted in accordance with the specified standard (compliance is implied by the successful 510(k) submission). |
| Electromagnetic Compatibility (EMC) | IEC 60601-1-2:2014 + A1:2020, Medical Electrical Equipment - Part 1-2: General requirements for basic safety and essential performance - Electromagnetic Compatibility | Electromagnetic compatibility testing was conducted in accordance with the specified standard (compliance implied). |
| Software Verification and Validation | FDA Guidance for Industry and FDA Staff, "Content of Premarket Submissions for Device Software Functions," issued June 14, 2023; Basic Level of documentation (no identified risks in which a failure or flaw of any device software function could present a hazardous situation with a probable risk of death or serious injury to a patient, user, or others). | Software verification and validation testing was conducted and documentation was provided as recommended by the FDA guidance; the documentation conforms to the Basic Level (successful verification and validation implied). |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not applicable / not provided. The document states "Clinical testing was not required." The "performance data" refers to non-clinical engineering and software testing, which does not involve a "test set" of clinical cases in the sense of imaging data for diagnostic performance evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable / not provided. No clinical ground truth was established from experts as clinical testing was not required.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable / not provided. No clinical ground truth was established, therefore no adjudication method was used for clinical interpretation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- Not applicable / not provided. No MRMC study was performed as clinical testing was not required for this submission. The device is an imaging system, not an AI-based diagnostic aid that would assist human readers in interpretation.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable / not provided. The device is an imaging system, not a standalone diagnostic algorithm. No such performance study was conducted or required.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not applicable / not provided for clinical performance. For the non-clinical performance data, the "ground truth" or standard was compliance with specified international and FDA-recognized standards for electrical safety, EMC, and software validation.
8. The sample size for the training set
- Not applicable / not provided. As this is an imaging system and not an AI/ML-based diagnostic algorithm, there is no mention of a "training set" of data in the context of machine learning model development. The software testing mentioned refers to standard software verification and validation, not machine learning model training.
9. How the ground truth for the training set was established
- Not applicable / not provided. See above.
(52 days)
Regulation: 21 CFR 876.1500 (Endoscope and Accessories); 21 CFR 882.1480
The KARL STORZ ICG Imaging System is intended to provide real-time visible (VIS) and near-infrared (NIR) fluorescence imaging.
Upon intravenous administration and use of ICG consistent with its approved label, the KARL STORZ Endoscopic ICG System enables surgeons to perform minimally invasive surgery using standard endoscopic visible light as well as visual assessment of vessels, blood flow and related tissue perfusion, and of at least one of the major extra-hepatic bile ducts (cystic duct, common bile duct and common hepatic duct), using near infrared imaging. Fluorescence imaging of biliary ducts with the KARL STORZ Endoscopic ICG System is intended for use with standard of care white light and, when indicated, intraoperative cholangiography. The device is not intended for standalone use for biliary duct visualization.
Additionally, the KARL STORZ Endoscopic ICG System enables surgeons to perform minimally invasive cranial neurosurgery in adults and pediatric patients and endonasal skull base surgery in adults and pediatric patients > 6 years of age using standard endoscopic visible light as well as visual assessment of vessels, blood flow and related tissue perfusion using near infrared imaging.
The KARL STORZ VITOM ICG System is intended for capturing fluorescent images for the visual assessment of blood flow, as an adjunctive method for the evaluation of tissue perfusion, and related tissue-transfer circulation in tissue and free flaps used in plastic, micro- and reconstructive surgical procedures in adults and pediatric patients > 1 month of age. The VITOM ICG System is intended to provide a magnified view of the surgical field.
Upon interstitial administration and use of ICG consistent with its approved label, the KARL STORZ Endoscopic ICG System is used to perform intraoperative fluorescence imaging and visualization of the lymphatic system, including lymphatic vessels and lymph nodes.
The subject device KARL STORZ ICG System includes the following components:
- VITOM EAGLE (TH201): a 3D video exoscope with 4K resolution used during open procedures for the evaluation of tissue perfusion and related tissue-transfer circulation in tissue and free flaps used in plastic, micro- and reconstructive surgical procedures. The subject device VITOM EAGLE System is indicated for use in adults and pediatric patients > 1 month of age.
- Fiber Light Cable (495VTE): used to transmit visible and NIR light from the Power LED Rubina light source to the VITOM Eagle.
- IMAGE1 Pilot (TC014): used to control the optical functions of the VITOM EAGLE.
- Microscope Footswitch (TC019): alternatively used to control the optical functions of the VITOM EAGLE.
- The Power LED Rubina light source (TL400) along with the footswitch (UF101): previously cleared in K201399, K202925 and K212695.
- Image1 S Camera Control Unit (TC201US, TC304US): previously cleared in K201399, K202925 and K212695.
The provided text describes the KARL STORZ ICG Imaging System and its acceptance criteria, along with a summary of the non-clinical performance data used to demonstrate substantial equivalence to a predicate device. However, it does not describe a study involving an AI algorithm. The device is an imaging system that uses Indocyanine Green (ICG) fluorescence for various surgical visualizations.
Here's a breakdown of the requested information based only on the provided text, heavily noting limitations due to the absence of AI-specific study details:
1. Table of Acceptance Criteria and Reported Device Performance
Since this is a non-AI imaging system without specific AI performance metrics, the acceptance criteria are generally related to the technical performance of the imaging capabilities. The document states that the KARL STORZ ICG Imaging System (subject device) was compared to the predicate VITOM II ICG/NIR telescope of the KARL STORZ ICG Imaging System (K212695). The performance was demonstrated by testing for:
| Acceptance Criteria (Performance Metric) | Reported Device Performance (Subject Device vs. Predicate) |
|---|---|
| Spatial Resolution | Successfully demonstrated by comparison |
| Signal to Noise Ratio and Noise | Successfully demonstrated by comparison |
| Dynamic Range | Successfully demonstrated by comparison |
| Geometric Distortion | Successfully demonstrated by comparison |
| Depth of Field | Successfully demonstrated by comparison |
| Illumination Detection Uniformity | Successfully demonstrated by comparison |
| Latency | Successfully demonstrated by comparison |
| Penetration Depth | Successfully demonstrated by comparison |
| Simultaneous Color Contrast | Successfully demonstrated by comparison |
| Minimum Detectable Concentration of ICG | Successfully demonstrated by comparison |
| 3D Zoom and Rotation | Successfully demonstrated by comparison |
| 2D and 3D Mode Transition | Successfully demonstrated by comparison |
| Image Alignment | Successfully demonstrated by comparison |
| Photobiological Safety | Successfully demonstrated by comparison |
| Electrical Safety and EMC (IEC 60601-1, IEC 60601-1-2) | Follows FDA-recognized consensus standards and tested accordingly |
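The document reports only pass/fail comparisons for these bench metrics, not how they were measured. Purely as an illustration of what one such metric involves, a signal-to-noise-ratio measurement on a flat-field test capture can be sketched as below (all names, levels, and noise figures hypothetical, not taken from the submission):

```python
import numpy as np

def patch_snr_db(frame: np.ndarray, rows: slice, cols: slice) -> float:
    """SNR of a nominally uniform patch: mean signal level over noise
    standard deviation, expressed in dB."""
    patch = frame[rows, cols].astype(np.float64)
    return 20.0 * np.log10(patch.mean() / patch.std())

# Hypothetical flat-field capture: uniform level 200 plus Gaussian read noise (sigma = 2).
rng = np.random.default_rng(0)
flat = rng.normal(200.0, 2.0, size=(64, 64)).clip(0, 255)

snr_db = patch_snr_db(flat, slice(16, 48), slice(16, 48))  # central 32x32 patch, ~40 dB here
```

In an actual bench test the frame would come from the camera under controlled illumination, and the result would be compared against the predicate device's figure rather than an absolute threshold.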
2. Sample size used for the test set and the data provenance
The document does not specify a "test set" in the context of an AI algorithm or patient data. The performance evaluation was based on non-clinical bench testing comparing the subject device's imaging capabilities to a predicate device. Therefore, there's no mention of sample size in terms of patient data or data provenance (country of origin, retrospective/prospective). The "sample" here would refer to the physical devices and controlled test scenarios.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. As this is a non-AI imaging system being evaluated via bench testing, there are no "experts" establishing ground truth for a test set of images or patient data. The ground truth for the technical performance criteria would be established by validated measurement techniques and instrumentation during the bench tests.
4. Adjudication method for the test set
Not applicable, as there is no "test set" in the context of human expert review or an AI algorithm's output. The evaluation was based on objective technical measurements.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance
No. The document explicitly states "Clinical testing was not required to demonstrate the substantial equivalence to the predicate devices. Non-clinical bench testing was sufficient to establish the substantial equivalence of the modifications." Furthermore, this is not an AI-assisted device, so MRMC studies on AI assistance would not be relevant.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Not applicable. The device is an imaging system, not a standalone AI algorithm. It produces images for human surgeons to interpret.
7. The type of ground truth used
For the non-clinical performance data, the ground truth was based on objective measurements from bench testing (e.g., measuring spatial resolution, signal-to-noise ratio, etc.) against established technical specifications or a predicate device's performance.
8. The sample size for the training set
Not applicable. The device does not involve an AI algorithm that requires a training set.
9. How the ground truth for the training set was established
Not applicable, as there is no AI algorithm or training set.
Summary of AI-related information (or lack thereof):
The provided text describes a submission for a KARL STORZ ICG Imaging System, which is a medical device for real-time visible and near-infrared fluorescence imaging during surgery. The entire document focuses on demonstrating the substantial equivalence of this updated imaging system to a previously cleared predicate device through non-clinical bench testing. There is no mention of any artificial intelligence (AI) component, machine learning model, or any studies related to AI performance, human-in-the-loop improvements with AI, or standalone algorithm performance. Therefore, most of the questions regarding AI-specific criteria cannot be answered from the provided text.
(59 days)
California 92618

Re: K232618
Trade/Device Name: Aurora Surgiscope System
Regulation Number: 21 CFR 882.1480
The Aurora Surgiscope System is intended for use in endoscopic neurosurgery and pure neuroendoscopy (i.e., ventriculoscopy) for visualization, diagnostic and/or therapeutic procedures such as ventriculostomies, biopsies and removal of cysts, tumors and other obstructions.
The Aurora Surgiscope System consists of two components: (1) a sterile, single use, sheath with integrated illumination LEDs and camera, with an obturator, and (2) a non-sterile, reusable control unit, Image Control Box (ICB).
The sheath is intended to provide access to the surgical site by acting as the insertable portion of the device, as well as the instrument channel to accommodate other surgical tools. Depth markers are present along the length of the sheath for user reference.
At the proximal end of the sheath is the imager, which comprises the following components: LEDs (light emitting diodes), camera (and optical components), and focus knob.
- The LEDs provide illumination to the surgical field by directing light down the sheath, along the instrument channel.
- The camera captures video images of the surgical field.
The proximal end of the sheath also contains a tab, which may be used to manually hold the device. To facilitate insertion into the surgical site, an obturator is provided with the device. During insertion, the obturator is fully inserted into the sheath, and the entire unit is advanced to the desired location. The distal end of the obturator is conical in shape to minimize tissue damage. In addition, the proximal handle of the obturator is designed to accommodate various stereotactic instruments for neuronavigation. Once inserted, the obturator is removed. Two sterile, single-use accessories optional for use are provided with the Aurora Surgiscope System: an Irrigation Device and a 12 French Suction Device.
The ICB is a non-sterile device that provides three main functions in the Aurora Surgiscope System:
- To power the Surgiscope LEDs and camera
- To relay the video feed captured by the Surgiscope camera to a display monitor for real-time image visualization
- To allow the user to make adjustments to the displayed video feed (e.g., contrast, brightness), as well as vary the LED light output.
The user interface is a membrane keypad with buttons located on the ICB that can be depressed for image adjustment, such as zoom, contrast, brightness, and orientation. The ICB is supplied with two cables: A power cable for connection to an AC wall outlet, and a display cable for connection to a high definition surgical monitor.
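The document describes the ICB's image adjustments only at the level of keypad functions; nothing is given about the internal processing. As a generic sketch of how per-frame brightness and contrast controls are commonly implemented in video pipelines (all names hypothetical, not Aurora's actual firmware):

```python
import numpy as np

def adjust_frame(frame: np.ndarray, brightness: float = 0.0, contrast: float = 1.0) -> np.ndarray:
    """Scale pixel values about mid-gray (contrast), then shift them
    (brightness), clamping back to the 8-bit range."""
    out = (frame.astype(np.float64) - 128.0) * contrast + 128.0 + brightness
    return out.clip(0, 255).astype(np.uint8)

frame = np.full((4, 4), 100, dtype=np.uint8)       # stand-in for one video frame
brighter = adjust_frame(frame, brightness=20.0)    # 100 -> 120
punchier = adjust_frame(frame, contrast=2.0)       # 100 -> 72, pushed away from mid-gray
```

Keypad presses on such a device would typically just step these two parameters up or down between frames.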
The provided FDA 510(k) clearance letter and summary for the Aurora Surgiscope System do not contain information typically found in a clinical study report or performance evaluation for an AI/software device. The document focuses on demonstrating substantial equivalence to a predicate device, which means proving that the new device is as safe and effective as a legally marketed device, rather than rigorously quantifying performance against defined acceptance criteria in a study setting.
Specifically, the document does not include:
- A table of acceptance criteria and reported device performance related to a diagnostic or AI function.
- Sample sizes for test sets or data provenance for AI model validation.
- Details about expert readers, ground truth establishment, or adjudication methods for AI performance.
- Information on multi-reader multi-case (MRMC) comparative effectiveness studies.
- Standalone algorithm performance data.
- Training set details for an AI model.
The "testing" mentioned in the document pertains to traditional medical device testing for hardware, biocompatibility, electrical safety, and mechanical aspects. While it states "Software verification and validation testing" was conducted and "documentation provided as recommended by the FDA Guidance Content of Premarket Submissions for Device Software Functions," it does not provide any specific performance metrics or acceptance criteria for software functionality that would typically be associated with an AI/ML-driven device's diagnostic performance. The "Image Control Box" software mentioned focuses on image adjustment and display, not diagnostic interpretation.
Therefore, based solely on the provided text, it is not possible to describe the acceptance criteria and the study proving the device meets those criteria from an AI/ML perspective. The device, as described, appears to be a neurological endoscope system for visualization, diagnostic, and therapeutic procedures, with software for image display and adjustment, not an AI-powered diagnostic tool.
If this were an AI-powered device, the information requested would be crucial for its evaluation. Without it, I cannot fulfill the request for AI-related performance criteria.
(269 days)
The Neuroblade System is a neuroendoscopy system indicated for the illumination and visualization of intracranial tissue and fluids, controlled aspiration of tissue and or fluid, powered cutting of soft tissue, and coagulation of tissue under direct visualization during surgery of the ventricular system or cerebrum.
The Neuroblade System comprises three components: the Neuroblade, Neuropad and Cart. The Neuroblade is a hand-held, sterile neuroendoscope with lighting and a camera at its distal end. The camera images are displayed on the Neuropad via a connecting cable that extends from the proximal end of the Neuroblade. It has integrated irrigation and aspiration functions. The distal bipolar electrode allows the application of RF energy from a third-party radiofrequency (RF) generator for coagulation of bleeding vessels in the neuro space. A cutting window on the side of the distal end is for the removal of blood clots.

The Neuroblade System, like other neuroendoscopes, is advanced into the brain through a burr hole created in the patient's skull. The tip of the Neuroblade is advanced under visualization via the illuminated camera image transmitted from the distal tip of the Neuroblade to the Neuropad. The Neuroblade has irrigation and vacuum tubing (2.0 m) that allows for connection to a third-party saline infusion bag and a vacuum waste bucket, respectively. The waste bucket is connected to a vacuum regulator, which is attached to the hospital's vacuum system.

The bipolar electrode is incorporated into the distal tip of the Neuroblade and has a bipolar plug (20 cm) on the proximal end that allows for connection to a third-party bipolar cord. The proximal end of that bipolar cord connects to the RF generator. An RF generator and bipolar cord are common accessories in the operating room. The Neuroblade has a working channel that will facilitate the introduction of flexible endoscopic surgical devices (≤1.7 mm outer diameter) into the surgical site. That same working channel also facilitates irrigation of the target site and supports the aspiration of fluid and tissues. The Neuropad can be installed onto the Cart and adjusted by the user for height and tilt.
The Neuropad allows the user to input patient data, control some aspects of the image, and record the case.
I am sorry, but the provided text does not contain the specific acceptance criteria for the device, nor the detailed results of a study that proves the device meets those criteria in a format that would allow me to populate the requested table directly. The document primarily describes the device, compares it to predicate devices, and lists various tests performed (biocompatibility, electrical safety, bench testing, an animal study, etc.) with a "Pass" result, but without specifying the quantitative or qualitative acceptance criteria for each of those tests or linking them to a comprehensive performance evaluation in the way requested.
Specifically, the document lacks:
- A table of acceptance criteria and reported device performance: While tests are listed, the specific criteria for "Pass" are not detailed, nor are numerical or descriptive performance metrics provided for each criterion.
- Sample size used for the test set and data provenance: A general animal study is mentioned, but specific sample sizes for particular performance tests are not given.
- Number of experts used to establish ground truth and their qualifications: Not explicitly stated for any specific test.
- Adjudication method: Not discussed.
- MRMC comparative effectiveness study: No mention of such a study or effect sizes of human reader improvement with AI. The device is a neuroendoscopy system, not an AI-assisted diagnostic tool.
- Standalone performance: The tests are generally standalone device performance evaluations, but the specific metrics are not provided as requested.
- Type of ground truth: For the animal study, necropsy and histopathology were used for confirmation, but for other tests, "ground truth" in the requested sense is not clearly defined.
- Sample size for the training set: Not applicable as this is not an AI/ML device with a separate training set.
- How ground truth for the training set was established: Not applicable.
The document mainly focuses on proving substantial equivalence to predicate devices through various engineering and safety tests, rather than presenting a clinical performance study with detailed acceptance criteria and results.
(63 days)
Dublin, California 94568
Re: K232159
Trade/Device Name: QEVO System
Regulation Number: 21 CFR 882.1480
The QEVO System is intended for viewing internal surgical sites during general surgical procedures and for use in visualization of ventricles and structures within the brain during neurological surgical procedures, as well as for viewing internal surgical sites during anterior spinal procedures, such as nucleotomy, discectomy, and foraminotomy.
The QEVO System comprises the QEVO ECU (Endoscope Control Unit) and the QEVO endoscope. The system is intended for viewing internal surgical sites and for use in visualization during general and certain neurosurgical and spinal procedures. The QEVO System must be installed and integrated with a host display device (a surgical microscope, a monitor, etc.). Requirements for physical integration, connectivity, power supply, display resolution, and software integration are established and tested.
The provided document is a 510(k) premarket notification summary for the QEVO System, declaring its substantial equivalence to a predicate device (QEVO System with KINEVO 900). This type of submission focuses on demonstrating that a new device is as safe and effective as a legally marketed device, primarily by showing similar technological characteristics and intended use.
Crucially, this document does NOT contain information about acceptance criteria or a study proving the device meets those criteria in the context of AI/ML performance testing. The "Summary of Studies" section only mentions:
- Sterilization and Shelf Life: The device is reusable and the reprocessing instructions are identical to the predicate device.
- Biocompatibility: Testing was done in accordance with ISO 10993 for the patient-contacting component (insertion tube).
- Performance Testing - Bench: Optical safety was assessed according to IEC 62471:2006. It explicitly states: "The determination of substantial equivalence was not based on an assessment of performance data." This indicates that no clinical performance study (like an MRMC study or standalone algorithm performance) was submitted or required for this 510(k) clearance, as the device is an endoscopic visualization system, not an AI/ML diagnostic tool.
Therefore, I cannot extract the information required to answer your prompt because the provided text pertains to a traditional medical device (an endoscope system) clearance, not an AI/ML-driven device that would involve the rigorous testing methodologies you've asked about (e.g., ground truth, reader studies, test set sizes, etc.).
To summarize why I cannot provide the requested information based on the given text:
- No AI/ML Component: The QEVO System is described as a visualization system (endoscope) that displays images. There's no mention of an embedded AI/ML algorithm for image analysis, diagnosis, or decision support.
- No Performance Data for Clinical Effectiveness: The submission explicitly states that the substantial equivalence determination was not based on performance data. This implies a reliance on technological similarity to the predicate and standard bench testing for safety (electrical, optical) and functionality (image resolution, field of view, etc.).
- Focus on Substantial Equivalence: The entire document is about demonstrating that the new QEVO System is "substantially equivalent" to an existing predicate device, primarily by comparing their specifications and intended use, rather than proving a new diagnostic capability through clinical performance studies.
If you have a document related to an AI/ML medical device, please provide that, and I would be able to address your specific questions about acceptance criteria, study design, and ground truth establishment.