Search Results
Found 143 results
510(k) Data Aggregation
(270 days)
Goleta, California 93117
Re: K243077
Trade/Device Name: Affirm 800
Regulation Number: 21 CFR 892.1600
DEVICE CLASS: Class II
CLASSIFICATION NAME: System, X-Ray, Angiographic
REGULATION NUMBER: 892.1600
| Submission # | K243077 | K202391 | |
|---|---|---|---|
| Product Code | IZI | IZI | Identical |
| Regulation Number | 892.1600 | 892.1600 | Identical |
| Class No. | II | II | Identical |

Indications for Use:
The Affirm 800 is used in viewing intra-operative blood flow in the cerebral vascular area including, but not limited to, assessing cerebral vessel branch occlusion, as well as intraoperative blood flow and vessel patency in bypass surgical procedures in neurosurgery.
The Affirm 800 module is designed to work in conjunction with the Class I Digital Surgical Microscope (DSM) RE3. The Affirm 800 module, a Class II device, comprises hardware and software components that enable the Digital Surgical Microscope to emit excitation light in specific wavelengths to activate the fluorescence properties of Indocyanine Green (ICG). The fluorescence signal emitted by the patient represents the distribution of the infrared dye in the patient's blood vessels during surgery. The emitted light is then captured by the optics of the digital microscope, passed through filters to remove unwanted wavelengths of light, and finally detected by the image sensors. This detected signal is then projected on a 3D monitor, which is part of the microscope system, enabling the surgeon to view the magnified image.
The integrated Affirm 800 fluorescence module is used to visualize infrared fluorescent areas in the surgical field and includes features to record and play back a video clip of the area of focus where fluorescent light is emitted. The module has been designed for excitation in the wavelength range from 740 nm to 800 nm and for fluorescence visualization in the wavelength range from 820 nm to 900 nm. The fluorescence feature generates an image in the infrared spectrum, which means it cannot be seen by the naked eye.
Here's a breakdown of the acceptance criteria and the study details for the Affirm 800 device, based on the provided FDA 510(k) Clearance Letter:
Note: The provided document is an FDA 510(k) Clearance Letter and a 510(k) Summary. These documents summarize the manufacturer's performance testing and justification for substantial equivalence. They don't typically include granular details about every aspect of the study design (like specific expert qualifications, adjudication methods, or sample sizes for specific training sets) that might be found in a full study report or a more comprehensive submission. Therefore, some information requested may not be explicitly available in this type of document and will be noted as "Not explicitly stated."
Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Metric (as implied by test methods) | Acceptance Criteria (implied) | Reported Device Performance |
---|---|---|---|
Image Quality (NIR Fluorescence) | Sensitivity to ICG molar concentration: Weber contrast, signal-to-noise ratio (SNR), fluorescence pixel intensity, limit of detection (LOD), limit of quantification (LOQ), signal linearity (up to 3.2 micromolar) | Equivalent to predicate (implied by "confirm equivalent performance") | Pass (for: Weber contrast, SNR, fluorescence pixel intensity, LOD, LOQ, signal linearity) |
Spatial & Temporal Noise | Spatial noise; temporal noise | Within acceptable limits | Pass |
Stereoscopic Display | Left- and right-eye comparison | Consistent performance | Pass |
Stereoscopic Display | Stereoscopic crosstalk: stereoscopic extinction ratio, stereoscopic crosstalk | Within acceptable limits | Pass (for: stereoscopic extinction ratio, stereoscopic crosstalk) |
ICG Depth Penetration | Sensitivity to NIR fluorescence depth penetration: Weber contrast, SNR | Equivalent to predicate (implied by "confirm equivalent performance") | Pass (for: Weber contrast, SNR) |
Clinical Image Quality | Surgeons' image quality evaluation (scale 1-5) | Satisfactory clinical image quality | Pass (image quality evaluated in 5 porcine models) |
Software Functionality | Software Verification & Validation (V&V) | Compliance with FDA guidance | Pass |
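For readers unfamiliar with the bench metrics listed in the table above, the sketch below illustrates how Weber contrast, SNR, and a limit of detection might be computed from fluorescence phantom images. It is a minimal, hypothetical illustration (the array names, synthetic values, and the 3-sigma LOD convention are assumptions of this sketch), not the manufacturer's actual test method.

```python
import numpy as np

def weber_contrast(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Weber contrast: (mean signal - mean background) / mean background."""
    i_s, i_b = signal_roi.mean(), background_roi.mean()
    return (i_s - i_b) / i_b

def snr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """SNR: mean signal above background divided by background noise (std)."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

def limit_of_detection(concentrations, mean_intensities, blank_std, k=3.0):
    """Estimate LOD from a linear fit of intensity vs. ICG concentration.

    Uses the common convention LOD = k * sigma_blank / slope (k = 3),
    assuming the fluorescence response is linear over the fitted range.
    """
    slope, _intercept = np.polyfit(concentrations, mean_intensities, 1)
    return k * blank_std / slope

# Synthetic phantom data (purely illustrative values).
rng = np.random.default_rng(0)
background = rng.normal(100, 5, size=(64, 64))   # non-fluorescent region
signal = rng.normal(180, 5, size=(64, 64))       # ICG-filled well
print(f"Weber contrast: {weber_contrast(signal, background):.2f}")
print(f"SNR: {snr(signal, background):.1f}")

conc_umol = np.array([0.1, 0.4, 0.8, 1.6, 3.2])  # up to 3.2 micromolar, per the table
mean_int = np.array([110, 140, 180, 260, 420])
print(f"LOD (umol/L): {limit_of_detection(conc_umol, mean_int, blank_std=5.0):.3f}")
```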
Study Details
- Sample Size used for the test set and the data provenance:
- Sample Size (Clinical/Animal Study): 5 porcine models.
  - Data Provenance: Not explicitly stated, but the study was likely conducted by the manufacturer or a contracted research organization in a controlled setting, given that it was an animal study. As a purpose-conducted animal study, it was by nature prospective.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated. The document mentions "Surgeons evaluated image quality." The exact number of surgeons is not provided.
- Qualifications of Experts: Only "Surgeons" are mentioned. No specific experience levels (e.g., "radiologist with 10 years of experience") are provided.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Adjudication Method: Not explicitly stated. The document only says "Surgeons evaluated image quality." It doesn't specify if this was an individual assessment, a consensus, or involved an adjudication process for discrepancies.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: No, a multi-reader multi-case comparative effectiveness study was not explicitly mentioned or described. The performance testing focused on device characteristics and an animal study for image quality evaluation, not on human reader performance with or without AI assistance. The Affirm 800 as described is an imaging module, not an AI-assisted diagnostic tool.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Standalone Performance: Not applicable in the context of an "algorithm only" device. The Affirm 800 is an imaging system designed for intra-operative visualization with a human surgeon-in-the-loop. Its performance is evaluated on its ability to acquire and display clear fluorescent images, not on an algorithm making a diagnostic decision by itself. However, the technical aspects of image quality (e.g., sensitivity, SNR, depth penetration) could be considered standalone measurements of the imaging capability.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Ground Truth: For the animal study, the "ground truth" for image quality was based on the subjective evaluation by surgeons on a defined scale (1 to 5). For the technical image quality tests, the "acceptance criteria" likely served as the ground truth, comparing measured values (e.g., Weber contrast, SNR) against pre-defined thresholds or performance of the predicate device. There is no mention of pathology or outcomes data as ground truth for this device's performance.
- The sample size for the training set:
- Training Set Sample Size: Not explicitly stated. The document describes verification and validation testing, but it does not detail a separate "training set" for a machine learning model, as the device is specified as an angiographic X-ray system module, not an AI algorithm for diagnosis.
- How the ground truth for the training set was established:
- Ground Truth for Training Set: Not explicitly stated or applicable, as a discrete "training set" for an AI model (with associated ground truth) is not described in the context of this device's submission summary. The device's validation focuses on engineering specifications and direct performance measures rather than training an AI.
(55 days)
Endoscopic Imaging System
Classification Name: 21 CFR 876.1500: Endoscope and accessories, 21 CFR 892.1600
The Arthrex Synergy Vision Endoscopic Imaging System is intended to be used as an endoscopic video camera to provide visible light imaging in a variety of endoscopic diagnostic and surgical procedures, including laparoscopy, orthopedic, plastic surgery, sinuscopy, spine, urology, and procedures within the thoracic cavity. The device is also intended to be used as an accessory for microscopic surgery.
The Arthrex Synergy Vision Endoscopic Imaging System is indicated for use to provide real time endoscopic visible and near-infrared fluorescence imaging. Upon intravenous administration and use of ICG consistent with its approved label, the system enables surgeons to perform minimally invasive surgery using standard endoscope visible light as well as visualization of vessels, blood flow and related tissue perfusion, and at least one of the major extra-hepatic bile ducts (cystic duct, common bile duct or common hepatic duct), using near-infrared imaging. Fluorescence imaging of biliary ducts with the Arthrex Synergy Vision Endoscopic Imaging System is intended for use with standard of care white light, and when indicated, intraoperative cholangiography. The device is not intended for standalone use for biliary duct visualization.
Upon interstitial administration and use of ICG consistent with its approved label, the Arthrex Synergy Vision Endoscopic Imaging System is used to perform intraoperative fluorescence imaging and visualization of the lymphatic system, including lymphatic vessels and lymph nodes.
Upon administration and use of pafolacianine consistent with its approved label, the Arthrex Synergy Vision Endoscopic Imaging System is used to perform intraoperative fluorescence imaging of tissues that have taken up the drug.
The Arthrex NanoNeedle Scope when used with the Synergy Vision Endoscopic Imaging System is intended to be used as an endoscopic video camera to provide visible light imaging in a variety of endoscopic diagnostic and surgical procedures, including laparoscopy, orthopedic, plastic surgery, sinuscopy, spine, urology, and procedures within the thoracic cavity. The device is also intended to be used as an accessory for microscopic surgery. For pediatric patients, the Arthrex NanoNeedle Scope is indicated for laparoscopy and orthopedic procedures.
The Arthrex Synergy Vision Endoscopic Imaging System includes a camera control unit (CCU) console, camera heads, endoscopes, and a laser light source. The system provides real-time visible light and near-infrared (NIR) illumination and imaging.
The Arthrex Synergy Vision Endoscopic Imaging System uses an integrated LED light to provide visible light illumination and imaging of a surgical site. For NIR imaging, the system interacts with the laser light source to visualize the presence of a fluorescence contrast agent, indocyanine green (ICG) and pafolacianine. The contrast agent fluoresces when illuminated through the laparoscope with NIR excitation light from the laser light source and the fluorescent response is then imaged with the camera, processed, and displayed on a monitor.
The provided FDA 510(k) clearance letter and summary for the Arthrex Synergy Vision Endoscopic Imaging System (K250728) describes the device and its indications for use, but does not contain the detailed information necessary to fully answer all the questions regarding acceptance criteria and a study that proves the device meets those criteria.
Specifically, the document states: "The Arthrex Synergy Endoscopic Imaging System did not require animal testing or human clinical studies to support the determination of substantial equivalence." This implies that performance data demonstrating device capabilities against specific acceptance criteria (beyond general functional testing and compliance with standards) were not generated through clinical studies or animal studies for this submission.
However, based on the information provided, here's what can be inferred and what is missing:
Acceptance Criteria and Device Performance (Inferred/Missing)
Since no human or animal clinical studies were performed, there appears to be no specific clinical performance acceptance criteria listed in this document. The "Performance Data" section primarily focuses on engineering and regulatory compliance testing.
Acceptance Criterion | Reported Device Performance |
---|---|
Functional Performance (General) | "The test results confirm the subject device met Arthrex product requirements and design specifications for the device." |
Biocompatibility | "Leveraged from the biocompatibility testing... of the additional predicate devices, as there were no modifications made to the subject device... that would affect the biocompatibility..." |
Sterility | "Leveraged from the... sterilization validation of the additional predicate devices, as there were no modifications made to the subject device... that would affect the... sterility of the device." |
Electrical Safety (EMT) | "The test results confirm the subject device conforms with EMT safety... standards." |
Electromagnetic Compatibility (EMC) | "The test results confirm the subject device conforms with... EMC standards." |
Software Performance | "Software testing was conducted and documentation was provided in this submission. The test results confirm the Arthrex software updates met product requirements and design specifications established for the device." |
Image Resolution | (Implicitly met by technical specifications matching predicate) 3840 x 2160, 400 x 400 (Nano), 720 x 720 (Nano) |
Frame Rate | (Implicitly met by technical specifications matching predicate) 60 fps, 30 fps (Nano) |
NIR Wavelengths/Detection | (Implicitly met by technical specifications matching predicate) Excitation Wavelength: 785 nm, Detection Bandwidth: 810 – 940 nm |
Clinical Performance (e.g., Sensitivity, Specificity for specific disease/task) | NOT APPLICABLE / NOT PROVIDED. The document explicitly states no human or animal studies were required for substantial equivalence. Therefore, there are no reported clinical performance metrics like sensitivity or specificity. |
Study Details (Based on Provided Document and Inferences)
- Sample Size used for the test set and the data provenance:
- Test set sample size: Not applicable for clinical performance as the document explicitly states: "The Arthrex Synergy Endoscopic Imaging System did not require animal testing or human clinical studies to support the determination of substantial equivalence."
- Data provenance: Not applicable for clinical performance. The data provenance for engineering tests (e.g., EMT, EMC, software) would be internal Arthrex testing labs.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable as no human clinical studies establishing ground truth for clinical performance were conducted for this submission. Ground truth for engineering tests is established through technical specifications and industry standards.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable for clinical performance.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC study was done. The device description does not indicate AI assistance features. The system is described as an endoscopic video camera and imaging system, not an AI-powered diagnostic aide.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Not applicable. The device is hardware (imaging system) with associated software, not a standalone algorithm.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the engineering and regulatory compliance tests: Ground truth was established by adherence to design specifications, industry standards (e.g., for biocompatibility, sterility, EMT, EMC), and internal product requirements.
- For clinical performance (e.g., diagnostic accuracy for a specific disease): No such ground truth was established or presented as no clinical studies were performed.
- The sample size for the training set:
- Not applicable, as this device does not appear to utilize machine learning for clinical interpretation or outcome prediction in a way that requires a "training set" in the context of AI/ML models. "Software testing" refers to verification against technical requirements, not AI model training.
- How the ground truth for the training set was established:
- Not applicable, for the same reason given for the training set above: the device does not use an AI/ML model that would require a training set with associated ground truth.
Summary of Key Takeaways from the Document:
The 510(k) clearance for the Arthrex Synergy Vision Endoscopic Imaging System (K250728) is based on substantial equivalence to existing predicate devices (K241361 and K243008). The justification for this clearance relies heavily on:
- Similar intended use and indications for use.
- Similar technological characteristics (e.g., components, imaging modes, specifications).
- Successful completion of engineering and regulatory compliance testing (e.g., functional testing, biocompatibility, sterility, EMT, EMC, software).
Crucially, the document explicitly states that no animal testing or human clinical studies were required or performed to demonstrate the device meets acceptance criteria related to clinical performance. This means the clearance is not based on a study proving diagnostic accuracy or clinical effectiveness in patients, but rather on the device's technological similarity and safety/performance in a non-clinical, engineering test environment compared to previously cleared devices.
(119 days)
2316XG
NETHERLANDS
Re: K243769
Trade/Device Name: QFR (3.0)
Regulation Number: 21 CFR 892.1600
Classification Name: Angiographic X-ray system
Regulatory Class: II
Regulation: 21 CFR 892.1600
| | DEVICE: QFR (K243769) | PREDICATE DEVICE: QANGIO XA 3D (K182611) |
|---|---|---|
| Regulation | 892.1600 – Angiographic X-ray System | 892.1600 – Angiographic X-ray System |
| Product Code | QHA (Class 2) | |
QFR is indicated for use in clinical settings where validated and reproducible quantified results are needed to support the assessment of coronary vessels in X-ray angiographic images, for use on individual patients with coronary artery disease.
When the quantified results provided by QFR are used in a clinical setting on X-ray images of an individual patient, the results are intended for use only by the responsible clinicians.
QFR is delivered as a standalone software package which is installed and running on a server system in the server room of the cathlab or the hospital. The server offers all functionalities that are required to work with the quantitative measurement in X-ray Angiographic (XA) patient studies supported by the QFR device.
QFR will be used by interventional cardiologists and researchers to obtain quantifications of lesions in coronary vessels. QFR has been developed as a web-based application to run in a web browser in the control room of the cathlab or in a hospital image review room. The import of images and the export of analysis results are via PACS.
The QFR device calculates the QFR value based on an anatomical model which is the result of a 3D reconstruction using the 2D contours obtained from two angiographic projections with angles >=25 degrees apart. These projections are acquired through monoplane or biplane XA systems. The algorithm involves three key steps: (1) Vessel Selection, (2) Contours Detection, and (3) QFR Analysis:
1. Vessel Selection: Angiograms are pre-classified by a deep learning model, identifying main epicardial vessels such as RCA, LAD, and LCx. The user then chooses the segment for analysis, and the software automatically selects end-diastolic image frames. The end-diastolic frame is determined as the angiogram frame with the vessel lumen adequately filled with contrast in both image sequences. This selection is either based on the patient's electrocardiogram when available or performed by the software using a deep learning model. It is essential for the user to verify this selection before proceeding with the analysis. The chosen end-diastolic frame serves as the projection view for the subsequent 3D reconstruction of the vessel.
2. Contours Detection: First, the system runs another deep learning model for coronary vessel segmentation, whose output is used to identify anatomically corresponding points on both projections for automatic correction of the system distortions introduced by the isocenter offset and the respiration-induced heart motion. Second, the system automatically detects the start and end positions of the vessel segment to be reconstructed on the projection views and extracts its contours and centerline. Third, the positions of the start and end points must be confirmed by the user.
3. QFR Analysis: The QFR value is computed from the arterial and reference diameter function calculated from the 3D reconstruction based on the contours detected on the cross-sections of the vessel segment, and the patient-specific volumetric flow rate calculated from the automated TIMI frame count. The reference diameter and bifurcations are used to determine the flow distribution at coronary bifurcations and calculate the reference diameter function. The reconstructed 3D model is used to calculate the QFR value.
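Step 3 above mentions a patient-specific volumetric flow rate derived from an automated TIMI frame count. As a rough, hypothetical illustration of that idea (the symbols, the 15 fps frame rate, and the assumption of a circular lumen are placeholders of this sketch, not details taken from the submission), contrast transit time can be converted into a mean flow velocity and then into a volumetric flow rate:

```python
import math

def flow_velocity_from_frame_count(segment_length_mm: float,
                                   frame_count: float,
                                   frame_rate_fps: float = 15.0) -> float:
    """Mean contrast velocity (mm/s): segment length divided by transit time,
    where transit time = frame count / acquisition frame rate."""
    transit_time_s = frame_count / frame_rate_fps
    return segment_length_mm / transit_time_s

def volumetric_flow_rate(velocity_mm_s: float, mean_diameter_mm: float) -> float:
    """Volumetric flow (mm^3/s), assuming a circular lumen cross-section."""
    area_mm2 = math.pi * (mean_diameter_mm / 2.0) ** 2
    return velocity_mm_s * area_mm2

# Illustrative numbers only.
v = flow_velocity_from_frame_count(segment_length_mm=45.0, frame_count=9, frame_rate_fps=15.0)
q = volumetric_flow_rate(v, mean_diameter_mm=3.0)
print(f"velocity = {v:.1f} mm/s, flow = {q:.0f} mm^3/s")
```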
A report is generated by QFR that shows patient information, image acquisition information (both obtained from the DICOM input), analysis results (vessel sizing and QFR value) and snapshot images showing the vessel boundaries.
The provided FDA 510(k) Clearance Letter for QFR (3.0) outlines the device's acceptance criteria and performance data from a study. Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated in a single, clear table. Instead, they are defined for specific algorithmic improvements. The reported performance is then compared against these defined criteria.
Feature Evaluated | Acceptance Criteria | Reported Device Performance | Result (Met/Not Met) |
---|---|---|---|
Vessel Classification | 80% for correct vessel classification (since it supports the user, not fully automates) | RCA: 96% correct; LAD: 88% correct; LCx: 78% correct. "On average the 80% acceptance criterion was satisfied." | Met |
Start and End Point Detection | 80% for correct result, with only 10% allowed to be completely wrong (since it supports the user, not fully automates) | AI/ML model using image data: 77% correct result, 11% small deviation (needed no correction), 8% wrong result (needed correction), 4% gave no result. "In conclusion, 88% satisfies the 80% criterion and 8% satisfies the 10% criterion." (Note: The 88% is derived from 77% correct + 11% small deviation which needed no correction. The 8% wrong result is within the 10% allowed. The 4% "no result" is not explicitly addressed by criteria but the overall conclusion is favorable.) | Met |
End-Diastolic (ED) Frame Detection | 80% for correct detection of the ED (since it supports the user, not fully automates) | Analytical algorithm (using ECG data): 83% on a representative dataset. AI/ML model (using image data): 81% on a representative dataset. | Met |
New Flow Velocity Calculation (influencing QFR) | Acceptance criterion "significantly stricter for the resulting QFR measurement than the reproducibility of FFR measurements." | "This ensured that, the automatic flow calculation, was not outperformed by manual indication." (The specific numerical values of the "stricter" criterion and validation results are not provided, only a general statement of meeting the intent.) | Met (implied) |
2. Sample Size Used for the Test Set and the Data Provenance
- Test Set Sample Size: For the "Vessel classification," "Start and end point detection," and "End-Diastolic (ED) frame detection" evaluations, the document mentions being performed "on a representative dataset." However, the exact sample size (number of patients or images) for these test sets is not explicitly stated.
- Data Provenance: The document does not specify the country of origin of the data or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
The document does not explicitly state the number of experts used to establish the ground truth for the test set or their specific qualifications (e.g., "radiologist with 10 years of experience"). The context implies that for functionalities "supporting the user and not to completely automate the functionality," human review and correction are part of the process, suggesting expert involvement, but the formal ground truth establishment process is not detailed.
4. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set
The document does not specify an adjudication method (such as 2+1 or 3+1) for establishing the ground truth for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
The document does not describe a formal Multi Reader Multi Case (MRMC) comparative effectiveness study designed to measure the improvement of human readers with AI assistance versus without AI assistance. The performance evaluations stated are for the algorithm's ability to assist (e.g., correct classification or detection rates) rather than human performance metrics. The statement "For all of these algorithmic improvements the user is able to review and correct the results before the QFR value is calculated" implies that the AI is assistive, but no data on human performance improvement is presented.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
Yes, standalone performance of the algorithm components (e.g., vessel classification, start/end point detection, ED frame detection by AI/ML model alone) was evaluated and reported against the acceptance criteria. For example, for vessel classification, 96% correct for RCA, 88% for LAD, and 78% for LCx are standalone algorithmic performance numbers before human review and correction. Similarly, the 77% correct for start/end point detection and 81% for ED frame detection are standalone algorithmic performances.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The document implies that the ground truth for the "correct" classifications/detections was based on some form of human reference standard or expert review, as the system is designed to "support the user" and allows for "manual correction." However, the specific method of establishing this ground truth (e.g., expert consensus, comparison to a gold standard, or clinical outcomes) is not explicitly detailed. For the flow velocity calculation influencing QFR, the ground truth is implicitly related to QFR results and their validation against FFR (Fractional Flow Reserve) reproducibility, which is a physiological measurement.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set used for the deep learning models. It only mentions that the angiograms are "pre-classified by a deep learning model" and that the system "runs another deep learning model for coronary vessel segmentation."
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly describe how the ground truth for the training set was established. It is implied that for deep learning models, labeled data would have been required, but the process of labeling (e.g., by experts, automated methods, or a combination) is not detailed.
(118 days)
Classification: 21 CFR 876.1500: Endoscope and accessories, 21 CFR 892.1600
The Arthrex NanoScope System is intended to be used as an endoscopic video camera to provide visible light imaging in a variety of endoscopic diagnostic and surgical procedures, including laparoscopy, orthopedic, plastic surgery, sinuscopy, spine, and urology. The device is also intended to be used as an accessory for microscopic surgery.
For pediatric patients, the Arthrex NanoScope System is indicated for use in laparoscopy and orthopedic procedures.
The Arthrex NanoScope System provides real-time visible light illumination and imaging. The system includes a non-sterile reusable camera control unit (CCU) console and sterile disposable camera handpieces. The system integrates high-definition camera technology, LED lighting, and an imaging management system into a single console with touchscreen interface.
The provided document describes the Arthrex NanoScope System, a medical device used for endoscopic video imaging. However, this document does not contain information about a study proving the device meets specific acceptance criteria related to a specific performance metric (e.g., accuracy, sensitivity, specificity) for an AI/ML algorithm that identifies or classifies something.
The performance data section (pages 5 and 6) outlines various non-clinical bench testing, including:
- Biocompatibility testing: Performed according to ISO 10993 standards.
- Electrical, Mechanical, and Thermal (EMT) safety testing: Performed according to ANSI/AAMI ES60601-1 and IEC 60601-2-18 standards.
- Electromagnetic Compatibility (EMC) testing: Performed according to IEC 60601-1-2 standards.
- Software testing: Performed according to FDA guidance and IEC 62304.
- Design verification testing: Included inspection, engineering analysis, and functional testing.
These tests confirm the device's conformance to safety, performance, and software quality standards relevant to an endoscopic imaging system, rather than demonstrating the performance of an AI model against specific clinical metrics like sensitivity or specificity. The submission aims to expand indications and report software and device modifications, relying on equivalence to predicate devices rather than a de novo clinical study proving AI performance.
Therefore, I cannot fulfill the request to describe the acceptance criteria and the study that proves the device meets those criteria in the context of an AI/ML algorithm's performance, as the provided text does not describe such a study. The document explicitly states: "The Arthrex NanoScope System did not require animal testing or human clinical studies to support the determination of substantial equivalence." This further indicates that no clinical performance study, particularly one involving AI/ML and human-in-the-loop or standalone performance, was conducted as part of this submission.
(265 days)
CAAS Workstation features segmentation of cardiovascular structures, 3D reconstruction of vessel segments and catheter path based on multiple angiographic images, measurement and reporting tools to facilitate the following use:
- Calculate the dimensions of cardiovascular structures;
- Quantify stenosis in coronary vessels;
- Determine C-arm position for optimal imaging of cardiovascular structures;
- Quantify pressure drop in coronary vessels;
- Enhance stent visualization and measure stent dimensions;
CAAS Workstation is intended to be used by or under supervision of a cardiologist.
CAAS Workstation is an image post-processing software package for advanced visualization and analysis in the field of cardiology or radiology and offers functionality to view X-Ray angiographic images, to segment cardiovascular structures in these images, to analyze and quantify these cardiovascular structures, and to present the results in different formats.
CAAS Workstation is a client-server solution intended for usage in a network environment or standalone usage and runs on a PC with a Windows operating system. It can read DICOM X-ray images from a directory, or receive DICOM images from the X-ray or PACS system.
CAAS Workstation is composed out of the following analysis workflows: StentEnhancer and vFFR for calculating dimensions of coronary vessels, quantification of stenosis and calculating the pressure drop and vFFR value based on two 2D X-Ray angiographic images. Semi-automatic contour detection forms the basis for the analyses.
Results can be displayed on the screen, printed or saved in a variety of formats to hard disk, network, PACS system or CD. Results and clinical images with overlay can also be printed as a hardcopy and exported in various electronic formats. The functionality is independent of the type of vendor acquisition equipment.
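To make the vFFR concept above concrete, here is a toy sketch of how a pressure drop along a reconstructed vessel segment could translate into a fractional pressure index (vFFR approximated as distal pressure divided by aortic pressure). The quadratic loss model, its coefficients, and all numerical values are illustrative assumptions of this sketch, not Pie Medical's actual algorithm.

```python
def pressure_drop(flow_ml_s: float, f_viscous: float, s_separation: float) -> float:
    """Toy stenosis model: dP = f*Q + s*Q^2 (viscous + flow-separation losses), in mmHg."""
    return f_viscous * flow_ml_s + s_separation * flow_ml_s ** 2

def vffr(aortic_pressure_mmhg: float, flow_ml_s: float,
         f_viscous: float, s_separation: float) -> float:
    """Fractional index: estimated distal pressure divided by aortic pressure."""
    dp = pressure_drop(flow_ml_s, f_viscous, s_separation)
    return (aortic_pressure_mmhg - dp) / aortic_pressure_mmhg

# Illustrative values only: a moderate lesion at hyperemic-like flow.
print(f"vFFR ~ {vffr(aortic_pressure_mmhg=95.0, flow_ml_s=3.0, f_viscous=2.0, s_separation=1.5):.2f}")
```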
The provided text describes a 510(k) premarket notification for the CAAS Workstation, a software package for advanced visualization and analysis in cardiology and radiology. However, it does not contain specific details about acceptance criteria or a study proving the device meets those criteria with quantitative performance metrics for AI/ML components.
The document states: "Performance testing demonstrated that the numerical results for the analysis workflows StentEnhancer and vFFR, as already available in predicate device K180019, were comparable." This is a qualitative statement of comparability to a predicate device, not a detailed presentation of acceptance criteria and the results of a study designed to meet them.
Therefore, I cannot fulfill all parts of your request with the provided input. I will outline what can be extracted and note what information is missing.
Summary of Device and Approval:
- Device Name: CAAS Workstation
- Applicant: Pie Medical Imaging B.V.
- FDA K-Number: K232147
- Regulation Name: Angiographic X-Ray System
- Regulatory Class: Class II
- Product Codes: QHA, LLZ
- Predicate Device: CAAS Workstation (K180019) – an earlier version of the same product.
- Basis for Clearance: Substantial Equivalence to the predicate device.
Indications for Use (Key Features):
CAAS Workstation features segmentation of cardiovascular structures, 3D reconstruction of vessel segments and catheter path based on multiple angiographic images, measurement and reporting tools to facilitate the following use:
- Calculate the dimensions of cardiovascular structures;
- Quantify stenosis in coronary vessels;
- Determine C-arm position for optimal imaging of cardiovascular structures;
- Quantify pressure drop in coronary vessels;
- Enhance stent visualization and measure stent dimensions;
Missing Information:
The provided text focuses on the regulatory clearance process through 510(k) substantial equivalence. This pathway often relies on demonstrating that a new device is as safe and effective as a legally marketed predicate device, rather than requiring extensive de novo clinical performance studies with specific acceptance criteria as you've requested for an AI/ML component. The document mentions "Performance testing," but it does not provide the details required to answer your specific questions about acceptance criteria, study design, sample sizes, ground truth establishment, or expert involvement for a new AI/ML model's performance.
The "AI" mentioned appears to refer more to automated image processing algorithms (semi-automatic contour detection, vFFR workflow involving pressure drop quantification, StentEnhancer workflow) rather than a novel, deep learning-based AI/ML algorithm that would typically necessitate the detailed performance study described in your prompt. The emphasis is on comparability of "numerical results" to the predicate, implying validation of existing algorithms, possibly with minor improvements, not a new AI/ML model with distinct performance criteria.
Based on the provided text, here's what can be inferred or explicitly stated, and what remains unknown:
1. A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: Not explicitly stated in quantitative terms in the provided text. The document broadly indicates that "numerical results for the analysis workflows StentEnhancer and vFFR...were comparable" to the predicate. This implies the acceptance criterion was "comparability" to the predicate's performance, but no specific thresholds (e.g., accuracy > X%, ROC AUC > Y) are given.
- Reported Device Performance: No quantitative performance metrics (e.g., sensitivity, specificity, accuracy, precision, recall) are provided in the text. The statement is qualitative: "numerical results...were comparable."
Criterion Type | Acceptance Criterion (as described) | Reported Device Performance (as described) |
---|---|---|
Numerical Results | Comparability to predicate device (K180019) for StentEnhancer and vFFR workflows. | "Numerical results...were comparable" to the predicate. |
Safety & Effectiveness | As safe and effective as predicate device (K180019). | Demonstrated through verification and validation results. |
Usability | Conformance to IEC 62366-1 standard. | User is able to use CAAS Workstation for its purpose. |
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified.
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The document mentions reading DICOM X-ray images, but not the source of the test data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified. The device is intended for use by or under the supervision of a cardiologist, suggesting expert cardiac imaging knowledge would be relevant, but details about ground truth establishment are absent.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- Not described. The focus is on the device's standalone performance compared to a predicate, not on a human-in-the-loop MRMC study.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, implicitly. The "Performance testing demonstrated that the numerical results for the analysis workflows StentEnhancer and vFFR...were comparable" indicates an evaluation of the algorithm's output. This is consistent with a standalone performance assessment, as the comparison is about the output of the software itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated. Given the functionalities (quantifying stenosis, dimensions, pressure drop), the ground truth for these "numerical results" would likely involve comparison against a gold standard derived from established imaging methods, potentially quantitative measurements from calibrated imaging devices, or expert consensus measurements, but the document does not elaborate.
8. The sample size for the training set:
- Not specified. The document mentions "semi-automatic contour detection forms the basis for the analyses" for the vFFR workflow, which could imply a training process, but no details are given.
9. How the ground truth for the training set was established:
- Not specified.
In conclusion, the K232147 FDA clearance document for the CAAS Workstation confirms its regulatory pathway via substantial equivalence to a predicate device. While it mentions "Performance testing" and "comparable numerical results," it does not provide the detailed quantitative acceptance criteria, study methodology, or specific performance metrics that would typically be found in an in-depth clinical validation study for a novel AI/ML device. The information provided is sufficient for a 510(k) submission based on predicate equivalence but lacks the granularity for the specific technical and clinical performance questions asked.
(264 days)
800 Toronto, ON, M5V 3B1 Canada
Re: K231986
Trade/Device Name: Modus IR Regulation Number: 21 CFR 892.1600
| Classification Name: | System, X-Ray, Angiographic |
| Regulation Number: | 21 CFR 892.1600
Modus IR used with the Synaptive Surgical Exoscope is indicated for fluorescence imaging in conjunction with indocyanine green to aid in the visualization of vessels (micro- and macro-vasculature) and blood flow in the cerebrovasculature before, during, and after neurosurgery, plastic, and reconstructive surgeries.
Modus IR is an accessory of the Synaptive surgical exoscope. Modus IR provides surgical staff with a means to visualize vessels and blood flow during surgical procedures that may not be visible under white light conditions. When used with the appropriate imaging agent, light output at a specific wavelength excites the imaging agent, which emits light at a specific wavelength that is detected by the optical system, thereby allowing the user to differentiate the structure that the imaging agent has concentrated in. The imaging agent is not packaged or sold as part of Modus IR. It is the responsibility of the user to source and administer the applicable imaging agent according to the excitation and observation wavelengths of Modus IR. Modus IR is selectively enabled by authorized personnel using software configuration management.
Here's a breakdown of the acceptance criteria and study information for the Modus IR device based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria / Test | Acceptance/Pass Status | Reported Device Performance |
---|---|---|
Performance Bench Testing: | ||
Signal-to-noise ratio (Sensitivity) | Pass | Modus IR was found to have a lower limit of detection and limit of quantification than the predicate at comparable working distances. |
Fluorescence excitation and emissions | Pass | Although the excitation and emission wavelengths of Modus IR and the predicate device are not identical, they are considered equivalent (Modus IR: 748-802nm excitation, 820-1000nm detection; Predicate: 700-780nm excitation, 820-900nm detection). |
Non-deformed, non-rotated, non-mirrored, and centered video image | Pass | IR images from Modus IR were found to be non-deformed, non-rotated, non-mirrored, and centered relative to the white light image, based on assessment during in-vivo animal study. |
Spatial resolution | Pass | Modus IR was found to have a higher spatial resolution than the predicate at each working distance and zoom configuration, including at maximum zoom. |
Photometric resolution | Pass | Photometric resolution was found to be equivalent between Modus IR and the predicate. |
Latency to external monitor | Pass | Average latency of Modus IR was found to be lower than the predicate device. |
In vivo Animal Study: | ||
Functionally equivalent visualization of intraoperative blood flow and vessel architecture | 100% confirmation | In all 40 comparative evaluations by 10 neurosurgeons, intraoperative blood flow and vessel architecture visualization was found to be functionally equivalent between Modus IR and the predicate device. |
Suitability of IR Fusion video clips for visualization of intraoperative blood flow against background anatomical structures | 100% confirmation | All IR Fusion video clips from Modus IR were found to be suitable for this purpose. |
2. Sample Size and Data Provenance for the Test Set
- Sample Size: 4 comparative video sets, resulting in 40 comparative evaluations (4 video sets * 10 neurosurgeons).
- Data Provenance: Prospective, in vivo animal study using two healthy porcine models. The specific country of origin is not mentioned, but the sponsor is based in Canada.
3. Number of Experts and Qualifications for Ground Truth for the Test Set
- Number of Experts: 10 neurosurgeons.
- Qualifications of Experts: The document explicitly states "neurosurgeons," implying they are medical professionals specialized in neurosurgery, making them qualified to assess the visualization of cerebrovasculature. Their specific years of experience are not mentioned.
4. Adjudication Method for the Test Set
- The study used an assessment by 10 neurosurgeons who were blinded to which system they were evaluating. The video sets were presented in a randomized order.
- The results state "In all evaluations... was found to be functionally equivalent... (100% confirmation)." This implies a consensus or unanimous agreement among the neurosurgeons regarding the functional equivalence and suitability. It doesn't explicitly state a 2+1 or 3+1 method, but the 100% confirmation suggests that all 10 neurosurgeons agreed on the equivalence.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC-like study was done. The in vivo animal study involved multiple readers (10 neurosurgeons) assessing multiple cases (4 comparative video sets) for comparative effectiveness between the subject device and the predicate.
- Effect Size of Human Readers Improve with AI vs. without AI assistance: This information is not provided in the document. The study aimed to demonstrate functional equivalence between the new device (Modus IR) and a predicate device (INFRARED 800 with FLOW 800 option) for aiding human visualization, not to quantify the improvement of human readers with AI assistance compared to without. Modus IR is an imaging accessory, not an AI-powered diagnostic tool in the sense of AI-driven image analysis or decision support for the clinician.
6. Standalone (Algorithm Only) Performance Study
- No, a standalone algorithm-only performance study was not explicitly mentioned or performed. The Modus IR is an imaging accessory that aids human visualization, not an algorithm that performs a task without human interpretation. Its performance is assessed in the context of aiding a human surgeon.
7. Type of Ground Truth Used
- For the in vivo animal study, the ground truth was expert consensus (10 neurosurgeons' unanimous agreement on functional equivalence and suitability of visualization).
- For the bench tests, the ground truth was based on objective measurements and comparisons against the predicate device using established metrics (e.g., limit of detection, spatial resolution measurements, qualitative assessment for image integrity).
8. Sample Size for the Training Set
- The document does not refer to or describe a training set for the Modus IR in the context of an AI/ML algorithm. Modus IR is described as an optical imaging accessory, not a software algorithm that performs diagnostic or analytical functions requiring a training set for machine learning. The software verification and validation mentioned are typically for ensuring software functions as intended, not for training a model.
9. How Ground Truth for the Training Set Was Established
- As no training set is described for an AI/ML algorithm, this information is not applicable.
EXPLORER AIR® II (8001, 8002, 8003); EXPLORER AIR® Sterile Drape (8004)
Regulation Number: 21 CFR 892.1600
Angiographic X-Ray System
Classification Name: System, X-Ray, Angiographic
Classification: 21 CFR 892.1600
Upon intravenous administration and use of an ICG (Indocyanine green for Injection) consistent with its approved label, the EXPLORER AIR® II is used in capturing fluorescent images for the visual assessment of blood flow and tissue perfusion, before, during, and after vascular, gastrointestinal, organ transplant, and plastic, micro- and reconstructive surgeries. The EXPLORER AIR® II is indicated for use in adult and pediatric patients one month of age and older.
Upon administration and use of pafolacianine consistent with its approved labeling, the EXPLORER AIR® II is used to perform intraoperative fluorescence imaging of tissues that have taken up the drug.
EXPLORER AIR® II consists of an imaging system that contains two cameras (one (1) for fluorescence, one (1) for color images) suspended by an articulated arm attached to a trolley. A touch screen and secondary monitor are also mounted on the trolley.
EXPLORER AIR® II enhances the surgeon's vision with use of near infrared fluorescence (NIR) imaging. The technology is based on the exposure of the tissue of interest to light after fluorescent dye such as indocyanine green (ICG) or pafolacianine has been administered to the patient. The EXPLORER AIR® II visualizes fluorescence excited by infrared light (740-760nm) and emitted in the band around 800nm. After image acquisition, the composite image (overlay of fluorescence and color images) is displayed along with the fluorescent and color images. The user can tag and compare images, play the recorded videos, and export the selected files.
The EXPLORER AIR® II must be used with EXPLORER AIR® Sterile Drape for use under sterile conditions.
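The composite (overlay) display described above can be illustrated with a minimal image-blending sketch. This is a generic, hypothetical example of mapping a NIR fluorescence channel to a green pseudo-color layer and alpha-blending it onto the color frame; the function name, threshold, and blend weights are assumptions of this sketch, not the EXPLORER AIR® II's actual rendering pipeline.

```python
import numpy as np

def fluorescence_overlay(color_rgb: np.ndarray,
                         nir_frame: np.ndarray,
                         alpha: float = 0.6,
                         threshold: float = 0.1) -> np.ndarray:
    """Blend a NIR fluorescence frame onto a color image.

    color_rgb : (H, W, 3) float array in [0, 1]
    nir_frame : (H, W) array of raw fluorescence intensity
    """
    # Normalize fluorescence to [0, 1] and suppress low-level noise.
    nir = nir_frame.astype(float)
    nir = (nir - nir.min()) / (nir.max() - nir.min() + 1e-9)
    nir[nir < threshold] = 0.0

    # Map fluorescence to a green pseudo-color layer.
    overlay = np.zeros_like(color_rgb)
    overlay[..., 1] = nir

    # Alpha-blend, weighting by local fluorescence strength.
    weight = alpha * nir[..., None]
    return (1.0 - weight) * color_rgb + weight * overlay

# Tiny synthetic example (values are placeholders).
color = np.full((4, 4, 3), 0.5)
nir = np.zeros((4, 4))
nir[1:3, 1:3] = 200.0
composite = fluorescence_overlay(color, nir)
print(composite.shape)  # (4, 4, 3)
```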
The provided text is a 510(k) premarket notification for the EXPLORER AIR® II device. It primarily focuses on demonstrating substantial equivalence to a previously cleared predicate device (K222240) rather than presenting a de novo clinical study with detailed acceptance criteria and performance data for a novel AI/software component.
Therefore, the information required to fully answer your request (acceptance criteria, study details like sample size for test sets, number of experts for ground truth, adjudication methods, MRMC studies, standalone performance, ground truth types, training set details) is not present in the provided document.
The document states:
- "No clinical tests were conducted to support this submission."
- The primary argument for clearance is that the device is "substantially equivalent" to its predicate due to "minor differences for HW design 2.2 (due to manufacturability and EOL of certain components), Packaging 2.0 (same materials, improvements for repacking in case of service), and SW 2.2 (OTS updates, image visualization improvements, and removal of alignment step by the user prior to every procedure)."
- It mentions "Software verification and validation testing were updated and conducted" to meet FDA guidance, but it does not provide the specific acceptance criteria or the results of these tests in a detailed manner.
In summary, this document describes a submission based on substantial equivalence and non-clinical testing (software verification and validation), not a performance study of an AI algorithm with specific acceptance criteria and detailed clinical study results as you've requested.
(90 days)
Common Name: 21 CFR 876.1500: Endoscope and accessories, 21 CFR 892.1600
The Arthrex Synergy Vision Endoscopic Imaging System is intended to be used as an endoscopic video camera to provide visible light imaging in a variety of endoscopic diagnostic and surgical procedures, including but not limited to: orthopedic, spine, laparoscopic, urologic, sinuscopic, plastic surgical procedures, and procedures within the thoracic cavity. The device is also intended to be used as an accessory for microscopic surgery.
The Arthrex Synergy Vision Endoscopic Imaging System is indicated for use to provide real time endoscopic visible and near-infrared fluorescence imaging. Upon intravenous administration and use of ICG consistent with its approved label, the system enables surgeons to perform minimally invasive surgery using standard endoscope visible light as well as visualization of vessels, blood flow and related tissue perfusion, and at least one of the major extra-hepatic bile ducts (cystic duct, common bile duct or common hepatic duct), using near-infrared imaging. Fluorescence imaging of biliary ducts with the Arthrex Synergy Vision Endoscopic Imaging System is intended for use with standard of care white light, and when indicated, intraoperative cholangiography. The device is not intended for standalone use for biliary duct visualization.
Upon interstitial administration and use of ICG consistent with its approved label, the Arthrex Synergy Vision Endoscopic Imaging System is used to perform intraoperative fluorescence imaging and visualization of the lymphatic system, including lymphatic vessels and lymph nodes.
The Arthrex Synergy Vision Endoscopic Imaging System includes a non-sterile camera control unit (CCU) console, camera heads, a laser light source, and endoscope. The system integrates ultra-high-definition camera technology, light emitting diode (LED) lighting, and an image management system into a single console with a tablet interface. The system provides real-time visible and near-infrared light illumination and imaging.
The Arthrex Synergy Vision Endoscopic Imaging System interacts with the laser light source to be able to provide near-infrared (NIR) imaging to visualize the presence of Indocyanine Green (ICG). The ICG fluoresces when illuminated through a laparoscope with NIR excitation light from the laser light source and the fluorescence response is then imaged with the camera, processed, and displayed on a monitor.
The provided text describes the Arthrex Synergy Vision Endoscopic Imaging System. However, it does not contain specific acceptance criteria, performance metrics, or details of a study that would demonstrate the device meets such criteria in terms of clinical performance or diagnostic accuracy.
Instead, the document focuses on regulatory clearance for a medical device by demonstrating substantial equivalence to a predicate device. The "Performance Data" section details non-clinical testing related to safety and functionality, rather than clinical efficacy or accuracy.
Therefore, I cannot populate the requested table or answer most of the questions as the information is not present in the provided text.
Here's what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance:
Acceptance Criteria Category | Reported Device Performance (from non-clinical testing) |
---|---|
Biocompatibility | Device is biocompatible for its intended use (conforms to ISO 10993 standards) |
Electrical Safety | Conforms with electrical safety standards (e.g., IEC 60601-1) |
Electromagnetic Compatibility (EMC) | Conforms with EMC standards (e.g., IEC 60601-1-2) |
Software Functionality | Arthrex software functions met product requirements and design specifications |
Design Verification | Met Arthrex product requirements and design specifications for system and individual components (through inspection, engineering analysis, functional testing) |
Clinical Performance/Accuracy | Not specified in the provided text. |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Not applicable for clinical performance testing. The reported "testing" refers to non-clinical engineering and safety validations. No patient data or clinical test sets are mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. No expert review or ground truth establishment for clinical performance is mentioned.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. No adjudication method for clinical performance is mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No such study was done. The document explicitly states: "The Arthrex Synergy Endoscopic Imaging System did not require animal testing or human clinical studies to support the determination of substantial equivalence."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Not applicable. The device is an imaging system providing visible and near-infrared fluorescence imaging, not an AI-driven diagnostic algorithm that would typically have standalone performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not applicable for clinical performance. The "ground truth" for the non-clinical tests involved compliance with engineering standards and predefined design specifications.
8. The sample size for the training set
- Not applicable. The non-clinical testing does not involve training data in the context of machine learning.
9. How the ground truth for the training set was established
- Not applicable.
(67 days)
Fluorescence Accessories (YELLOW 560 and INFRARED 800 with FLOW 800 Option) Regulation Number: 21 CFR 892.1600
Classification Name: Fluorescence accessories for surgical microscope
Carl Zeiss Meditec AG
Regulation Number: 21 CFR 892.1600
· INFRARED 800 with FLOW 800 Option is a surgical microscope accessory intended to be used with a compatible surgical microscope in viewing and visual assessment of intraoperative blood flow in cerebral vascular area including, but not limited to, assessing cerebral aneurysm and vessel branch occlusion, as well as patency of very small perforating vessels. It also aids in the real-time visualization of blood flow and visual assessment of vessel types before and after Arteriovenous Malformation (AVM) surgery. Likewise, INFRARED 800 with FLOW 800 Option used during fluorescence guided surgery aids in the visual assessment of intra-operative blood flow as well as vessel patency in bypass surgical procedures in neurosurgery, plastics and reconstructive procedures and coronary artery bypass graft surgery.
· YELLOW 560 is a surgical microscope accessory intended to be used with a compatible surgical microscope in viewing and visual assessment of intraoperative blood flow in cerebral vascular area including, but not limited to, assessing cerebral aneurysm and vessel branch occlusion, as well as patency of very small perforating vessels. It also aids in the real-time visualization of blood flow and visual assessment of vessel types before and after Arteriovenous Malformation (AVM) surgery.
Fluorescence accessories (YELLOW 560 and INFRARED 800 with FLOW 800 Option) are an accessory to a surgical microscope and are intended for viewing and visual assessment of intra-operative blood flow, as well as aiding in the real-time visualization of blood flow and visual assessment of vessel types before and after Arteriovenous Malformation (AVM) surgery. The functionality of these filters is derived from their ability to highlight fluorescence emitted from tissue that has been treated with a fluorescence agent, by applying appropriate wavelengths of light and utilizing selected filters. This helps a surgeon to visualize different structural body elements (such as vessels, tissue, blood flow, occlusions, aneurysms, etc.) during various intraoperative procedures. The fluorescence accessory can be activated by the user via the Graphical User Interface (GUI), foot control panel, or the handgrips, for example.
For these accessories to be used with a qualified surgical microscope, the critical components of the surgical microscope need to fulfill the clinically relevant parameters for the Indications for Use of YELLOW 560 and INFRARED 800 with FLOW 800 Option.
The fluorescence accessories are embedded into the surgical microscope. The emission filter wheels are located within the head of the microscope, and two emission filters (one for each eyepiece) are placed into these filter wheels. A separate filter wheel is present in front of the light source, into which the excitation filter is installed.
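As a concrete illustration of how excitation and emission filters are paired so that only fluorescence light reaches the detection path, the sketch below checks that an emission filter's passband rejects the excitation band entirely while still passing the dye's emission peak. The passband numbers are placeholders chosen only to show the logic; they are not Zeiss's filter specifications.

```python
# Illustrative sketch, not Zeiss's filter specification. The wavelength
# values below are placeholders used only to demonstrate the pairing logic.
from dataclasses import dataclass

@dataclass
class BandpassFilter:
    low_nm: float   # lower cut-on wavelength
    high_nm: float  # upper cut-off wavelength

    def passes(self, wavelength_nm: float) -> bool:
        return self.low_nm <= wavelength_nm <= self.high_nm

def filters_are_compatible(excitation: BandpassFilter,
                           emission: BandpassFilter,
                           dye_emission_peak_nm: float) -> bool:
    """The emission filter must pass the dye's fluorescence peak while
    rejecting the entire excitation band (no spectral overlap)."""
    no_overlap = (emission.low_nm > excitation.high_nm
                  or emission.high_nm < excitation.low_nm)
    return no_overlap and emission.passes(dye_emission_peak_nm)

# Placeholder numbers for illustration only:
excitation = BandpassFilter(low_nm=700, high_nm=780)
emission = BandpassFilter(low_nm=820, high_nm=900)
print(filters_are_compatible(excitation, emission, dye_emission_peak_nm=835))  # True
```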
The provided text is a 510(k) summary for the Carl Zeiss Meditec Inc. "Fluorescence Accessories (YELLOW 560 and INFRARED 800 with FLOW 800 Option)". This document focuses on demonstrating substantial equivalence to predicate devices rather than providing detailed acceptance criteria and a study proving the device meets those criteria.
The 510(k) summary primarily addresses:
- Indications for Use: The device is a surgical microscope accessory for viewing and visual assessment of intraoperative blood flow in the cerebral vascular area (e.g., assessing cerebral aneurysm, vessel branch occlusion, patency of small perforating vessels, and vessel types before/after Arteriovenous Malformation (AVM) surgery). It also aids in real-time visualization of blood flow and vessel patency in bypass surgical procedures in neurosurgery, plastics, reconstructive procedures, and coronary artery bypass graft surgery.
- Technological Characteristics: Comparison to predicate devices (YELLOW 560 (K162991) and INFRARED 800 with FLOW 800 Option (K100468)) is presented, showing substantial equivalence in application, patient population, device description, fluorescent agents used, visualization of real-time images, display, physical method, fluorescence excitation/detection, white light application, camera adaption, zoom, autofocus, autogain, control system, storage, and upgrade options. Minor differences are noted and deemed not to affect substantial equivalence.
- Non-Clinical Testing: A list of performance testing parameters for the system is provided, confirming that the "functional and system level testing showed that the system met the defined specifications."
Therefore, based on the provided text, a detailed table of acceptance criteria and a study proving the device meets those criteria (with specific performance metrics) cannot be fully constructed as requested. The document attests that the device met internal specifications through software verification and non-clinical system testing, but does not provide the specific numerical acceptance criteria or the study results themselves.
Here's a breakdown of what can be extracted and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
Cannot be fully provided as specific numerical acceptance criteria and reported device performance are not detailed in the provided document. The document states that "functional and system level testing showed that the system met the defined specifications" and lists the parameters tested. However, the values for these specifications and the results of the testing are not included.
Acceptance Criteria (Implied / Stated) | Reported Device Performance (Not detailed in document) |
---|---|
Brightness of fluorescence ocular image | Met defined specifications |
Excitation wavelength | Met defined specifications |
Excitation filter | Met defined specifications |
Emission wavelength | Met defined specifications |
Emission filter | Met defined specifications |
Color reproduction of fluorescence ocular images | Met defined specifications |
Spatial resolution of the ocular image | Met defined specifications |
Color reproduction of fluorescence video images | Met defined specifications |
Non-mirrored video image | Met defined specifications |
Non-rotated video image | Met defined specifications |
Non-deformed video image | Met defined specifications |
Centered video image | Met defined specifications |
Photometric resolution of video image | Met defined specifications |
Signal-to-noise ratio of the video image (sensitivity) | Met defined specifications |
Latency of the video image (external monitor) | Met defined specifications |
Spatial resolution of the video image | Met defined specifications |
Irradiance (minimum irradiance at maximum illumination) | Met defined specifications |
Color reproduction of non-fluorescence ocular images | Met defined specifications |
Color reproduction of non-fluorescence video images | Met defined specifications |
Software performing as intended | Performed as intended |
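As an example of the kind of bench measurement that could sit behind a row such as "Signal-to-noise ratio of the video image (sensitivity)", the sketch below computes an SNR from a single captured frame of a fluorescence test target. The region masks and the particular SNR definition are assumptions for illustration, not the specifications Zeiss used.

```python
# Illustrative bench-style metric, not Zeiss's actual test protocol.
# Assumes a grayscale frame captured from a fluorescence test target,
# plus boolean pixel masks marking the target and background regions.
import numpy as np

def video_snr(frame: np.ndarray,
              signal_mask: np.ndarray,
              background_mask: np.ndarray) -> float:
    """SNR defined here as the mean background-subtracted signal divided
    by the standard deviation of the background pixels."""
    signal = frame[signal_mask].astype(np.float64)
    background = frame[background_mask].astype(np.float64)
    noise = background.std()
    if noise == 0:
        return float("inf")
    return float((signal.mean() - background.mean()) / noise)
```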
2. Sample size used for the test set and the data provenance
- Sample Size: Not specified. The document mentions "non-clinical system testing" and "software verification testing" but does not provide details on the number of samples, test cases, or images used.
- Data Provenance: Not specified. This appears to be internal company testing (bench testing) rather than a study involving patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable/Not specified. This was a non-clinical bench and software performance testing; it does not involve expert ground truth for clinical assessment.
4. Adjudication method for the test set
- Not applicable/Not specified. As noted above, this was non-clinical bench and software performance testing.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI assistance vs. without it
- No MRMC study was mentioned. The device is an accessory to a surgical microscope providing visualization, not an AI diagnostic tool that assists human readers in interpreting images.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- Not applicable. The device provides "real-time visualization" and "visual assessment," which implies human interpretation of the images/data it presents. It's an accessory, not a standalone automated diagnostic algorithm. The testing described is for the functional and system performance of the accessory.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not applicable for the non-clinical and software testing described. The "ground truth" for the performance testing would be the predefined specifications that the system components were designed to meet.
8. The sample size for the training set
- Not applicable. The description does not suggest this device uses machine learning or AI that would require a "training set" in the conventional sense for image analysis. It's a fluorescence visualization system.
9. How the ground truth for the training set was established
- Not applicable, as there is no mention of a training set.
(81 days)
Trade/Device Name: SPY Portable Handheld Imaging (SPY-PHI) System Regulation Number: 21 CFR 892.1600
Fluorescence Angiographic System
Classification Name: Angiographic X-Ray System, 21 CFR 892.1600
Upon intravenous administration of SPY AGENT GREEN (indocyanine green for injection, USP), the SPY-PHI System is used with SPY AGENT GREEN to perform intraoperative fluorescence angiography. The SPY-PHI System is indicated for use in adult and pediatric patients one month of age and older.
The SPY-PHI System is indicated for fluorescence imaging of blood flow and tissue perfusion before, during, and after vascular, gastrointestinal, organ transplant, and plastic, micro- and reconstructive surgical procedures.
Upon interstitial administration of SPY AGENT GREEN, the SPY-PHI System is used to perform intraoperative fluorescence imaging and visualization of the lymphatic system, including lymphatic vessels and lymph nodes.
Upon intradermal administration of SPY AGENT GREEN, the SPY-PHI System is indicated for fluorescence imaging of lymph nodes and delineation of lymphatic vessels during lymphatic mapping in adults with breast cancer for which this procedure is a component of intraoperative management.
The SPY-PHI System is a real-time white-light and near-infrared illumination/fluorescence imaging system used during open-field surgical procedures. Near-infrared illumination is used for fluorescence imaging using SPY AGENT® GREEN for the visual assessment of blood flow, tissue perfusion and visualization of the lymphatic system, including lymphatic vessels and lymph nodes. It consists of the SPY Portable Handheld Imager/imaging head with an integrated light guide and camera cable and the Video Processor/Illuminator. Additionally, SPY-QP Fluorescence Assessment Software is provided as an optional upgrade to the SPY-PHI System that enables relative quantification of NIR fluorescence.
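To illustrate what relative quantification of NIR fluorescence can mean in practice, here is a minimal sketch that expresses the mean intensity of a region of interest as a percentage of a user-selected reference region. This is a generic normalization example under those assumptions, not the SPY-QP algorithm.

```python
# Minimal sketch of *relative* fluorescence quantification: an ROI is
# normalized against a user-chosen reference region. This is not Stryker's
# SPY-QP algorithm; the mask-based interface is an assumption.
import numpy as np

def relative_fluorescence(nir_frame: np.ndarray,
                          roi_mask: np.ndarray,
                          reference_mask: np.ndarray) -> float:
    """Mean NIR intensity in the ROI expressed as a percentage of the
    mean intensity in the reference region."""
    roi_mean = float(nir_frame[roi_mask].mean())
    ref_mean = float(nir_frame[reference_mask].mean())
    if ref_mean <= 0:
        raise ValueError("Reference region has no measurable fluorescence")
    return 100.0 * roi_mean / ref_mean
```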
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Acceptance Criteria (Quantitative/Qualitative) | Reported Device Performance |
---|---|---|
Optical Imaging - Dynamic Range | The user shall be able to visualize SPY AGENT GREEN in physiology applications. The response on the system monitor to the minimum clinically relevant concentration of SPY AGENT GREEN shall be at least 6.9 (ΔE), and the system response to the maximum clinically relevant concentration shall be at least twice that at low concentrations. | PASS (Stated in "Performance Testing - Bench": "In accordance with design input specifications including optical imaging performance specifications") |
Optical Imaging - Localization | The user shall be able to visualize SPY AGENT GREEN in anatomy applications. The response on the system monitor shall be at least 10.35 (ΔE) under clinically relevant conditions. | PASS (Stated in "Performance Testing - Bench": "In accordance with design input specifications including optical imaging performance specifications") |
Electromagnetic Compatibility (EMC) | In accordance with IEC 60601-1-2:2014, Medical electrical equipment... Electromagnetic compatibility - Requirements and tests. | PASS |
Electrical Safety | In accordance with IEC 60601-1:2005+A1:2012 (Medical electrical equipment, Part 1) and IEC 60601-1-6:2020-07 (Medical electrical equipment - Part 1-6). | PASS |
Laser Safety | In accordance with IEC 60825-1:2014, Safety of laser products - Part 1: Equipment classification and requirements. | PASS |
Usability | In accordance with IEC 62366-1:2015, Medical devices -- Part 1: Application of usability engineering to medical devices. | PASS |
Software | In accordance with IEC 62304:2006, Medical device software - Software life cycle processes. | PASS |
Clinical Effectiveness (Primary Endpoint) | SPY AGENT GREEN and SPY-PHI should demonstrate a statistically significant and greater proportion of lymph nodes identified compared to Tc99m / gamma probe. (Implicit: showing superiority or non-inferiority to a specified degree). | 89% of confirmed LNs identified by SPY AGENT GREEN and SPY-PHI vs. 66% identified by Tc99m / gamma probe, a difference of 23% [95% CI 3.67% to 9.48%]; p |
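The Dynamic Range and Localization criteria in the table above are stated as minimum color differences (ΔE) on the system monitor. The document does not say which ΔE formula was applied; the sketch below uses the simple CIE76 form (Euclidean distance in CIELAB space) purely as an illustration, and the Lab values in the example are hypothetical.

```python
# Hedged illustration: CIE76 is only one of several ΔE formulas, and the
# 510(k) summary does not specify which one was used for these criteria.
import math

def delta_e_cie76(lab1: tuple[float, float, float],
                  lab2: tuple[float, float, float]) -> float:
    """CIE76 color difference between two CIELAB colors (L*, a*, b*)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical example: displayed pixel color with and without fluorescence.
baseline = (52.0, -3.0, 1.5)       # background pixel in Lab (made up)
with_signal = (58.0, -20.0, 12.0)  # fluorescing pixel in Lab (made up)
print(delta_e_cie76(baseline, with_signal) >= 6.9)  # True for these values
```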