510(k) Data Aggregation
(90 days)
NQQ
The HyperVue Software is intended to be used only with compatible HyperVue Imaging Systems and Starlight Imaging Catheter.
The HyperVue Imaging System is intended for the imaging of coronary arteries and is indicated in patients who are candidates for transluminal interventional procedures.
The Starlight Imaging Catheter is intended for use in vessels 2.0 to 5.2 mm in diameter.
The Starlight Imaging Catheter is not intended for use in a target vessel which has undergone a previous bypass procedure.
The NIRS capability of the HyperVue Imaging System is intended for the detection of lipid core containing plaques of interest.
The NIRS capability of the HyperVue Imaging System is intended for the assessment of coronary artery lipid core burden.
The NIRS capability of the HyperVue Imaging System is intended for the identification of patients and plaques at increased risk of major adverse cardiac events.
The HyperVue Software (2.0) is resident on the HyperVue Imaging System (K230691) and is used with the Starlight Imaging Catheter (K243016). The HyperVue Software provides a user interface for executing clinical workflows, acquiring and processing OCT-NIRS data, and exporting patient data. The software update introduces the ability to connect to hospital PACS servers for data export.
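The letter does not describe how the PACS export is implemented. As a purely illustrative sketch of what a DICOM push to a PACS server can look like, the following uses the open-source pynetdicom library to send one stored image via C-STORE; the server address, port, AE titles, and file path are hypothetical and are not taken from the submission.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import SecondaryCaptureImageStorage

# Hypothetical example: push a single exported frame to a PACS node via DICOM C-STORE.
ae = AE(ae_title="OCT_CONSOLE")                        # calling AE title (assumed)
ae.add_requested_context(SecondaryCaptureImageStorage)

ds = dcmread("export/pullback_frame.dcm")              # hypothetical exported DICOM file

assoc = ae.associate("pacs.hospital.example", 104, ae_title="HOSP_PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)                    # returns a status Dataset (0x0000 = success)
    print(f"C-STORE status: 0x{status.Status:04X}" if status else "No response from PACS")
    assoc.release()
else:
    print("Association with the PACS server was rejected or could not be established")
```

In practice, the SOP class and how the OCT-NIRS data are packaged would depend on the console's export design, which the letter does not specify.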
The provided FDA 510(k) clearance letter for the HyperVue™ Software primarily focuses on demonstrating substantial equivalence to a predicate device based on technological characteristics and general software verification and validation. It does not contain detailed information regarding clinical performance studies (e.g., MRMC studies, standalone performance), specific acceptance criteria, or the methodology for establishing ground truth for medical image analysis tasks, especially related to the NIRS capabilities like plaque assessment.
The text states that the software update "introduces the ability to connect to hospital PACS servers for data export" and discusses "historical software and algorithm changes." However, it does not provide specifics on how these "historical algorithm changes" were validated in terms of clinical performance metrics that would typically be included in an AI/ML medical device submission.
Based on the provided document, here's what can be extracted and what is missing:
Acceptance Criteria and Device Performance
The document does not provide a specific table of acceptance criteria for clinical performance (e.g., sensitivity, specificity, accuracy) or reported device performance metrics related to diagnostic tasks (like lipid core detection or plaque assessment). The performance data section focuses on software engineering aspects (verification, validation, cybersecurity, and adherence to design controls) rather than clinical accuracy or effectiveness.
Table of Acceptance Criteria and Reported Device Performance (Based only on available information)
Acceptance Criteria Category | Specific Criteria (Expected but not found in document) | Reported Device Performance (Not quantified in document) |
---|---|---|
Software Functionality | All functions performed by the software are evaluated and passed. | Passed all pre-determined acceptance criteria identified in the test plan. |
Design Control Compliance | Verification and validation testing completed per company's Design Control process (21 CFR Part 820.30) and FDA guidance for software. | Verification and validation testing completed in accordance with the company's Design Control process in compliance with 21 CFR Part 820.30 and FDA "Guidance on Software Contained in Medical Devices". |
Cybersecurity | Static Code Analysis, Vulnerability Scanning, Penetration Testing, Security Controls verified, Interoperability Assessment, Risk Analysis & Mitigation. | Performed as per FDA guidance "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions." Risks analyzed and satisfactorily mitigated. |
Clinical Performance (e.g., for NIRS capability) | Not specified in the document (e.g., sensitivity, specificity, AUC for lipid core detection) | Not reported in the document. |
Study Details (Based only on available information, with many points missing)
- Sample sizes used for the test set and data provenance:
- Test Set Sample Size: Not specified. The document mentions "an established test plan that fully evaluated all functions performed by the software," but it does not specify the number of cases or patients used for performance testing, especially not for clinical performance.
- Data Provenance: Not specified. There is no mention of the country of origin of data or whether it was retrospective or prospective. The testing described appears to be primarily software-level functional and cybersecurity testing rather than a clinical performance study.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified. The document does not describe the establishment of a clinical ground truth, suggesting that the primary validation for this 510(k) was based on software engineering and safety, not on a new clinical performance claim requiring expert ground truth.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified. Since a clinical performance study with expert ground truth establishment is not detailed, adjudication methods are not mentioned.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC study is mentioned or implied. The submission emphasizes substantial equivalence based on technological characteristics and software updates rather than a new clinical claim supported by a reader study.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Not explicitly stated in terms of clinical performance metrics. The document claims that the software "processes reflected optical signals to construct images" and makes "mathematical comparisons of image data." However, it does not provide standalone performance metrics (e.g., sensitivity/specificity for lipid plaque detection) for these algorithmic functions. The clearance is for the software (2.0) that is resident on the imaging system, implying it's part of the overall system that assists physicians, but no specific standalone diagnostic performance is reported.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not specified for clinical claims. For the "software functions," ground truth would likely be based on technical specifications and expected software behavior. For the NIRS capabilities (lipid core detection, plaque assessment), the method for establishing ground truth for performance evaluation is not described in this document. This suggests that the current 510(k) submission did not hinge on a new clinical efficacy claim for these NIRS functionalities that would require a new, robust clinical study with defined ground truth. Instead, it seems to rely on the predicate device's existing clearance for these capabilities.
- The sample size for the training set:
- Not specified. The document does not discuss any machine learning model training or associated training sets. The primary focus of this 510(k) is a software update (version 2.0) mainly involving PACS connectivity and "historical" algorithm changes, which doesn't necessarily imply retraining a new ML model that would require a dedicated training set description in this context.
- How the ground truth for the training set was established:
- Not applicable/Not specified. Since a training set is not mentioned, the method for establishing its ground truth is also not described.
Summary of Gaps:
The provided FDA 510(k) clearance letter is for a software update (HyperVue™ Software 2.0) that appears to be primarily a software modification/upgrade (PACS connectivity, historical algorithm changes) to an existing cleared device. As such, the submission focuses heavily on software engineering verification and validation, cybersecurity, and demonstrating substantial equivalence to the predicate device based on technological characteristics and intended use.
It does not contain the detailed clinical performance study information (e.g., specific acceptance criteria for diagnostic performance, quantitative performance metrics, sample sizes for clinical test sets, expert qualifications, ground truth methodology for clinical data) that would typically be seen for a novel AI/ML device making new clinical claims or demonstrating significantly improved diagnostic performance. The NIRS capabilities listed appear to be carried over from the predicate device's clearance.
Therefore, for aspects related to clinical accuracy and effectiveness of features like "detection of lipid core containing plaques," this document does not provide the specific study details you requested.
(118 days)
NQQ
The OPUSWAVE System with DualView Catheter is intended for the intravascular imaging of coronary arteries and is indicated in patients who are candidates for transluminal interventional procedures.
The OPUSWAVE Dual Sensor Imaging System consists of a wheeled console with monitor, keyboard, mouse, a software graphical user interface, and a Motor Drive Unit (MDU) protected by an MDU cover. The MDU is connected to a DualView Catheter capable of imaging in both Optical Coherence Tomography (OCT) and Intravascular Ultrasound (IVUS) modalities, either simultaneously or asynchronously, without removing the catheter from the imaging site. The system allows image data to be exported and stored on external media (USB, DVD) and supports integration with Cath Lab imaging technologies (angio, ECG).
The sterile operator (physician) is able to control image acquisition by manually positioning the imaging sensor as well as performing pullback (automatically or manually) for defined regions of interest. The system provides analysis tools such as area and linear measurements.
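The submission does not describe how these analysis tools are implemented. As a minimal, purely illustrative sketch (not the OPUSWAVE implementation), a cross-sectional area measurement can be derived from a traced lumen contour with the shoelace formula; the contour points and pixel spacing below are hypothetical.

```python
from typing import List, Tuple

def polygon_area_mm2(contour_px: List[Tuple[float, float]], mm_per_px: float) -> float:
    """Area enclosed by a closed contour (pixel coordinates), converted to mm^2."""
    area_px2 = 0.0
    n = len(contour_px)
    for i in range(n):
        x1, y1 = contour_px[i]
        x2, y2 = contour_px[(i + 1) % n]   # shoelace formula over consecutive vertices
        area_px2 += x1 * y2 - x2 * y1
    return abs(area_px2) / 2.0 * mm_per_px ** 2

# Hypothetical 1 mm x 1 mm square traced at 0.01 mm/pixel -> 1.0 mm^2
square = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
print(polygon_area_mm2(square, 0.01))
```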
The provided FDA 510(k) Clearance Letter for the OPUSWAVE Dual Sensor Imaging System focuses on establishing substantial equivalence to predicate devices, primarily through engineering and regulatory compliance testing. While it mentions various verification and validation activities, it does not contain the detailed clinical study data typically found when a device’s performance against specific acceptance criteria is being proven, especially for AI-enabled devices requiring human reader studies or detailed standalone performance metrics.
Based on the information given, here's an analysis of what can be extracted or inferred, and what cannot be answered:
Acceptance Criteria and Study for OPUSWAVE Dual Sensor Imaging System
1. Table of Acceptance Criteria and Reported Device Performance:
The document primarily focuses on demonstrating equivalence to predicate devices for imaging capabilities (OCT and IVUS modalities), safety (electrical safety, EMC, laser output, acoustic output), and software functionality. There are no specific quantitative performance metrics or acceptance criteria reported similar to what would be found for an algorithm that provides diagnostic outputs (e.g., sensitivity, specificity, accuracy).
However, based on the provided text, we can infer the "acceptance criteria" were met through various tests that showed compliance with standards and equivalence to predicates.
Acceptance Criteria Category (Inferred) | Reported Device Performance (Inferred/Directly Stated) |
---|---|
Electrical Safety | Complies with IEC 60601-1 standard. |
Electromagnetic Compatibility (EMC) | Complies with IEC 60601-1-2 standard. |
Software Functionality | Software verification and validation testing successfully completed; fulfillment documentation provided as recommended by FDA guidance ("Enhanced" level, implying potential for serious injury/death from failure). |
Design Verification | Performs pursuant to defined design input requirements. |
Design Validation (Simulated Use) | Meets user needs and intended use. |
Acoustic Output (IVUS) | Does not exceed Track 3 limits (equivalent to predicate 1 meeting 60601-2-37 requirements). |
Laser Output (OCT) | Class 1 Laser Output per 60825-1 (equivalent to predicate 2 being Class 1M). |
Image Quality/Clinical Equivalence | Demonstrated substantial equivalence to predicate devices through animal testing. No quantitative imaging performance metrics (e.g., resolution, penetration depth, signal-to-noise ratio) are explicitly provided as acceptance criteria or results beyond "real-time grayscale image". |
2. Sample Size for the Test Set and Data Provenance:
- Test Set Sample Size: Not specified. The document mentions "animal testing" and "simulated use testing" for design validation. For software, general "verification and validation testing" is mentioned, but specific test set sizes (e.g., number of test cases, images, patients) are not provided.
- Data Provenance:
- Animal Study: Animal data. Details on geographic origin or whether it was retrospective/prospective are not provided.
- Simulated Use Testing: Implies a controlled environment, likely within the manufacturer's facility, but specifics are not given.
- Software V&V: Internal testing.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:
- Not Applicable / Not Provided: The document does not describe the establishment of a "ground truth" in the clinical sense (e.g., for diagnostic accuracy) as it's not a device focusing on automated interpretation or diagnosis. The studies mentioned (animal, simulated use, software V&V) would have their own internal verification against design specifications or a recognized standard, but not a human expert-adjudicated ground truth as would be typical for AI/CADx devices.
4. Adjudication Method for the Test Set:
- Not Applicable / Not Provided: As there's no mention of expert-established ground truth for a diagnostic test set, there's no adjudication method described.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size of how much human readers improve with AI vs. without AI assistance:
- No: The document explicitly states: "Clinical testing was not required to demonstrate the substantial equivalence of the OPUSWAVE Dual Sensor Imaging System to the predicate devices and is not included as part of this premarket notification." Therefore, an MRMC study demonstrating human reader improvement with AI assistance was not conducted or reported. This device is an imaging system, not an AI interpretation tool.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Not Applicable (in the classic AI sense): This device is an imaging system, not an AI algorithm designed to provide standalone diagnostic outputs. Its "performance" would be related to image quality, system functionality, and safety parameters. While the system's software underwent "verification and validation testing," this is to ensure the software itself functions correctly and safely within the imaging system, not to assess its diagnostic accuracy as a standalone algorithm.
7. The Type of Ground Truth Used:
- Design Specifications, Regulatory Standards, and Animal Models:
- For electrical safety and EMC: Compliance with IEC 60601-1 and IEC 60601-1-2.
- For software: Compliance with FDA Software Guidance ("Enhanced" level), suggesting verification against defined requirements.
- For design: Compliance with "defined design input requirements."
- For clinical equivalence/imaging capabilities: Demonstrated equivalence in "animal testing," implying that normal anatomy and pathology in animal models served as a reference.
8. The Sample Size for the Training Set:
- Not Applicable / Not Provided: This device is not described as an AI/machine learning device that requires a "training set" in the context of model development. The verification and validation activities are for the entire system, not for training a specific algorithm.
9. How the Ground Truth for the Training Set was Established:
- Not Applicable / Not Provided: As there is no mention of a training set for an AI algorithm, there is no ground truth establishment process described for one.
Summary of Device and Evidence Focus:
The OPUSWAVE Dual Sensor Imaging System is an intravascular imaging system (OCT and IVUS). The 510(k) clearance relied on demonstrating substantial equivalence to existing predicate devices, rather than proving novel clinical efficacy or superior diagnostic accuracy through large-scale human clinical trials or AI performance evaluations. The "studies" primarily referenced are engineering verification and validation testing, animal studies for equivalence, and compliance with recognized safety and software standards. The document does not suggest that the device incorporates AI in a way that requires AI-specific performance criteria (e.g., sensitivity, specificity, MRMC studies) for its clearance.
(272 days)
NQQ
OPXION Optical Skin Viewer is a non-invasive imaging system intended to be used for real-time visualization of the external tissues of the human body. The two-dimensional, cross-sectional, three-dimensional, and en-face images of tissue microstructures can be obtained.
OPXION Optical Skin Viewer is composed of two parts consisting of a handheld probe and a mainframe, connected by an optical fiber cable. The device comes with three accessories: a USB 3.0 cable, a power adapter, and a power cord. The Optical Skin Viewer needs to be connected to a laptop or a personal computer. The device uses Optical Coherence Tomography (OCT) technology with a Superluminescent diode, 840 nm, 6 mW light source.
Based on the provided FDA 510(k) clearance letter for the OPXION Optical Skin Viewer, an optical device that visualizes external tissue and is not an AI/ML powered device, the document does not contain the specific information requested about acceptance criteria and a study that proves the device meets the acceptance criteria in the context of AI/ML performance.
The 510(k) summary focuses on demonstrating substantial equivalence to a predicate device (VivoSight Topical OCT System) primarily based on intended use, technology (Optical Coherence Tomography), and general performance (image quality accepted by a qualified medical professional for visualization).
Therefore, I cannot provide a table of acceptance criteria, sample sizes for test sets, number of experts for ground truth, adjudication methods, MRMC studies, standalone performance, or details about training sets, as these specific details are not present in the provided document, nor are they typically required for a Class II medical imaging device like this one unless it incorporates AI/ML for diagnostic or interpretive functions.
However, I can extract the general acceptance criteria and the type of study conducted for this device based on the provided text:
Acceptance Criteria and Study:
The document describes the device's performance in terms of its ability to produce images for visualization, rather than offering specific quantitative metrics for diagnostic accuracy, sensitivity, or specificity that would be typical for an AI/ML driven device.
Here's an interpretation based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (General) | Reported Device Performance (General) |
---|---|
Image quality confirmed and accepted by a qualified medical professional. | The OPXION Optical Skin Viewer demonstrated consistent performance in producing images of a quality that is substantially equivalent to that produced by the cited predicate device. The device successfully displayed anatomical features of skin. |
Safe and effective clinical imaging device capable of generating two-dimensional, cross-sectional, three-dimensional, and en-face images of external tissue microstructure. | No adverse events or safety concerns were reported. The scanning process was well-tolerated by all subjects. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The study included three subjects with healthy skin and five subjects with diseased skin conditions.
- Data Provenance: Not explicitly stated, but implies a prospective study given the "Study Design" description of scanning "each target area in three sessions." The country of origin of the data is not specified in the 510(k) summary.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: The document states that "Image quality was confirmed and accepted by a qualified medical professional." This implies at least one reviewer, but the exact number is not specified.
- Qualifications of Experts: Described as "a qualified medical professional." No specific specialty (e.g., dermatologist, radiologist) or years of experience are provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated as an adjudication method in the context of multiple readers reaching consensus. The acceptance criterion notes "Image quality was confirmed and accepted by a qualified medical professional," which suggests a single reviewer or possibly an internal review process where consensus was reached without a formal adjudication method described.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- MRMC Study: No, an MRMC comparative effectiveness study was not conducted or described in the provided document. The study focuses on the device's ability to produce images and its substantial equivalence to a predicate, not on how human readers perform with or without the device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Standalone Performance: This device is an imaging system for visualization, not an AI/ML algorithm that provides diagnostic outputs. Therefore, the concept of "standalone performance" of an algorithm is not applicable or described. Its "performance" is its ability to acquire and display images.
7. The Type of Ground Truth Used
- Type of Ground Truth: The ground truth for this device's performance evaluation was the visual assessment and acceptance of image quality by a qualified medical professional, based on the successful display of "anatomical features of skin" for both healthy and diseased conditions. This is a form of expert consensus/acceptance on display quality.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable. This is an optical imaging device, not an AI/ML algorithm that undergoes a training phase with a specific dataset.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as there is no mention of an AI/ML training set.
In summary, the provided FDA 510(k) letter describes a traditional medical imaging device focused on visualization, not an AI/ML-powered device. Therefore, the detailed criteria and study designs typically associated with AI/ML device validation are absent from this document.
(267 days)
NQQ
Cornaris Intravascular Imaging System with Imaging Catheter is intended for the imaging of coronary arteries and is indicated in patients who are candidates for transluminal interventional procedures. LumenCross Imaging Catheter is intended for use in vessels 2.0 to 3.5 mm in diameter. The Imaging Catheter is not intended for use in the left main coronary artery or in a target vessel which has undergone a previous bypass procedure.
The LumenCross Imaging Catheter (referred to as LumenCross) is used with the Cornaris Intravascular Imaging System produced by Vivolight; it is intended for intravascular imaging and is indicated for use in coronary arteries in patients who are candidates for transluminal interventional procedures. The LumenCross is intended for use in vessels 2.0 to 3.5 mm in diameter. The LumenCross is not indicated for use in the left main coronary artery or in a target vessel that has undergone a previous bypass procedure.
The Cornaris Intravascular Imaging System is a cart-mounted computer and imaging engine (optical engine) housed in an ergonomically designed mobile cart with the cabling routed inside the cart. There are two models: the P80-E is mainly composed of a trolley, mouse, keyboard, two display monitors, optical engine, and computer; the Mobile-E is mainly composed of a trolley, mouse, keyboard, one display monitor, monitor bracket, optical engine, and computer. Both models also include the catheter connection unit (PIU), which provides the interconnection between the Cornaris Intravascular Imaging System and the LumenCross Imaging Catheter. The P80-E and Mobile-E have the same software features.
The imaging catheter contains two main components: the catheter body and the imaging core (an internal rotating fiber optic). The outer diameter of the distal shaft is 2.67 F (0.89 mm, 0.035 in.), the distal shaft is 280 mm long, and the catheter has a working length of 1350 mm. The imaging catheter is compatible with a 0.014" (0.356 mm) guidewire and has a guidewire lumen length of 16 mm; the guidewire enters through the tip entrance and exits through the RX port. A hydrophilic coating is applied to the outer surface of the distal shaft. The LumenCross Imaging Catheter is a single-use device, sterilized by ethylene oxide gas to achieve a SAL of 10⁻⁶, and supplied in a sterility maintenance package that maintains the sterility of the device during its two-year shelf life.
The provided FDA 510(k) clearance letter and its summary do not contain detailed information regarding the acceptance criteria, nor the specific study design and results typically associated with proving a device meets those criteria, especially in terms of algorithm performance for an imaging system. The submission focuses more on general product specifications, non-clinical bench testing, and animal studies related to the physical and material performance of the imaging catheter and system, rather than the quantitative performance of any imaging interpretation algorithm or AI component.
However, based on the information provided, we can infer some aspects and highlight what is missing for a complete answer to your request.
Here's a breakdown based on the provided text, with explicit notes where information is missing:
Overview of Device and Purpose
The Cornaris Intravascular Imaging System (P80-E, Mobile-E) and LumenCross Imaging Catheter (F2) are intended for imaging coronary arteries during transluminal interventional procedures. The system utilizes Optical Coherence Tomography (OCT) to visualize vessel structures. The 510(k) submission focuses on demonstrating substantial equivalence to predicate devices (ILUMIEN OPTIS and DRAGONFLY OPTIS IMAGING CATHETER).
1. Table of Acceptance Criteria and Reported Device Performance
The submission describes various performance tests conducted. These are primarily for hardware, optical parameters, and catheter physical characteristics, rather than interpretive accuracy.
Acceptance Criterion (Measured Parameter) | Reported Device Performance (or Compliance Statement) | Notes on Relevance to AI/Imaging Interpretation |
---|---|---|
Cornaris Intravascular Imaging System: | ||
Scan range | (Compliance implied by substantial equivalence) | Ensures adequate imaging area. |
Axial resolution | (Compliance implied by substantial equivalence) | Affects image quality and detail. |
Luminous Sensitivity | (Compliance implied by substantial equivalence) | Affects image quality and signal strength. |
A-line speed | (Compliance implied by substantial equivalence) | Affects imaging speed. |
Dynamic range | (Compliance implied by substantial equivalence) | Affects image contrast and detail. |
Frame rate | (Compliance implied by substantial equivalence) | Affects imaging speed. |
Pullback time and range | (Compliance implied by substantial equivalence) | Relates to image acquisition protocol. |
Fiber Optic Rotary Joint (FORJ) Insertion loss | (Compliance implied by substantial equivalence) | Ensures proper optical signal transmission. |
Fiber Optic Rotary Joint (FORJ) Rotational homogeneity | (Compliance implied by substantial equivalence) | Ensures consistent imaging across the rotation. |
Fiber Optic Rotary Joint (FORJ) Return loss | (Compliance implied by substantial equivalence) | Ensures proper optical signal transmission. |
LumenCross Imaging Catheter: | ||
Visual & Dimensional Inspection | (Compliance implied by substantial equivalence) | Basic quality control. |
Catheter bond Strength | (Compliance implied by substantial equivalence) | Safety and durability. |
Simulated use | (Compliance implied by substantial equivalence) | Evaluates real-world performance. |
Leakage | (Compliance implied by substantial equivalence) | Safety. |
Corrosion | (Compliance implied by substantial equivalence) | Safety and durability. |
Torque | (Compliance implied by substantial equivalence) | Ease of use and maneuverability in vessel. |
Particulates | (Compliance implied by substantial equivalence) | Safety (embolism risk). |
Coating integrity | (Compliance implied by substantial equivalence) | Safety and ease of use. |
Flexibility and Kink | (Compliance implied by substantial equivalence) | Ease of use and safety (prevents damage). |
Endotoxin | (Compliance implied by substantial equivalence) | Safety (prevents systemic reactions). |
Biological Safety Testing (LumenCross): | ||
Cytotoxicity | (Compliance met per ISO 10993-1) | Biocompatibility. |
Sensitization | (Compliance met per ISO 10993-1) | Biocompatibility. |
Mouse Lymphoma Assay | (Compliance met per ISO 10993-1) | Biocompatibility (genotoxicity). |
Bacterial Reverse Mutation Assay | (Compliance met per ISO 10993-1) | Biocompatibility (mutagenicity). |
Intracutaneous Reactivity | (Compliance met per ISO 10993-1) | Biocompatibility. |
Acute Systemic Toxicity | (Compliance met per ISO 10993-1) | Biocompatibility. |
Material Mediated Pyrogenicity | (Compliance met per ISO 10993-1) | Biocompatibility. |
Hemolysis (Direct and Indirect) | (Compliance met per ISO 10993-1) | Biocompatibility. |
Complement SC5b-9 | (Compliance met per ISO 10993-1) | Biocompatibility. |
In Vivo Thrombogenicity | (Compliance met per ISO 10993-1) | Biocompatibility. |
Pre-clinical testing (Animal Study): | ||
Clear Image Length (CIL) | "no significant differences between the subject device and the predicate device" | Directly relates to imaging performance. |
Device Performance (system stability, ease of operation, usability of sterile cover and PIU, catheter crossability, catheter vulnerability, catheter marker visualization) | "no significant differences between the subject device and the predicate device" | Relates to practical usability and image quality. |
In vivo thrombus formation | "no significant differences between the subject device and the predicate device" | Safety. |
Safety | "no significant differences between the subject device and the predicate device" | Overall safety. |
Crucially, this submission does not describe acceptance criteria or performance for an AI algorithm's interpretation of images. It focuses on the hardware's ability to produce images and the catheter's physical characteristics. If there were an AI component for image analysis (e.g., automated lumen segmentation, plaque characterization), specific performance metrics (e.g., accuracy, sensitivity, specificity, Dice score for segmentation) would be listed here, along with their acceptance thresholds. This information is not present in the provided document.
Specific Study Details (as inferable from the document, with noted gaps):
Since the document focuses on showing substantial equivalence through non-clinical and pre-clinical tests, and explicitly states "No clinical study is included in this submission," most of the questions about AI algorithm performance studies cannot be answered from this text.
2. Sample size used for the test set and the data provenance:
* Test Set Sample Size: Not specified for a data-driven algorithm test set. The pre-clinical animal study "conducted to support substantial equivalence" did involve a "test set" of animal cases, but the exact number of animals or images generated from that study is not provided.
* Data Provenance: The animal study is an in vivo (likely prospective) study, but the country of origin is not specified. It's safe to assume it was conducted under the company's control, likely in China given their location.
* Retrospective/Prospective: The biological and pre-clinical studies were prospective tests on either ex-vivo materials or in-vivo animal models.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
* Not Applicable / Not Specified: The ground truth for the biological and physical properties was established by adherence to ISO standards and direct measurements. For the animal study performance metrics, "no significant differences" were found, implying comparison to a predicate device or expert assessment, but the number or qualifications of experts involved in this specific assessment are not detailed. An AI study would typically involve expert readers.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
* Not Applicable / Not Specified: For the physical and biological tests, adjudication methods are not relevant. For the animal study, it's not specified how the "no significant differences" conclusion was reached, or if formal adjudication was used for subjective performance measures like "ease of operation."
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs without AI assistance:
* No MRMC Study: The document explicitly states: "No clinical study is included in this submission." Therefore, no MRMC study comparing human readers with and without AI assistance was performed or reported here.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
* Not Indicated: The submission does not describe any standalone algorithm performance testing. This suggests that the device, as cleared, does not include an AI algorithm that performs any automated image analysis or diagnosis requiring such testing. The "Software Features for Imaging" mentioned seem to refer to basic imaging display and system control, not AI.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
* Defined Standards/Measurements & Comparative Animal Study:
* For the physical and biological tests, ground truth was implicitly defined by the relevant ISO standards (e.g., ISO 10993-1) and direct physical measurements.
* For the pre-clinical animal study, the "ground truth" was the observed performance of the device and its direct comparison to a predicate device (e.g., "Clear Image Length (CIL)" and "in vivo thrombus formation"). This is an in-vivo assessment.
8. The sample size for the training set:
* Not Applicable / Not Specified: Since no AI algorithm training is described, there is no mention of a training set.
9. How the ground truth for the training set was established:
* Not Applicable / Not Specified: As no AI training data is mentioned, the method for establishing its ground truth is also not described.
Conclusion:
The provided FDA 510(k) summary focuses on demonstrating the substantial equivalence of the Cornaris Intravascular Imaging System and LumenCross Imaging Catheter to existing predicate devices based on physical, mechanical, optical, biological, and pre-clinical animal performance. It does not present any information related to the performance of an artificial intelligence (AI) component for image interpretation or analysis. Therefore, it cannot address the questions concerning acceptance criteria and study details for an AI algorithm's performance. The "device performance" described pertains to the system's ability to acquire and display images, and the catheter's physical safety and functionality, rather than any automated interpretive function.
(174 days)
NQQ
Starlight Imaging Catheter with Hyper Vue Imaging System is intended for imaging of coronary arteries and is indicated in patients who are candidates for transluminal interventional procedures.
The Starlight Imaging Catheter is intended for use in vessels 2.0 to 5.2 mm in diameter.
The Starlight Imaging Catheter is not intended for use in a target vessel which has undergone a previous bypass procedure.
The Starlight Imaging Catheter is a sterile, single-use, non-pyrogenic device and consists of two main assemblies: the catheter body and the internal rotating fiber optic imaging core. The catheter has an insertable length of 141 cm and a 2.5 Fr imaging window. It is a rapid exchange design with monorail tip, designed for compatibility with 0.014" (0.355mm) steerable guidewires used during coronary interventional procedures.
The Starlight Imaging Catheter connects to the HyperVue Imaging System through the HyperVue Controller (Controller), a reusable catheter connection allowing direct control of basic data acquisition. All fiber optic rotation and translational pullback is driven by the Controller and occurs inside the catheter.
The provided text is a 510(k) summary for the Starlight Imaging Catheter. It discusses the device's substantial equivalence to a predicate device and details performance testing. However, it does not contain the specific information requested in your prompt regarding acceptance criteria, reported device performance, sample sizes for test and training sets, data provenance, expert qualifications, adjudication methods, MRMC studies, or standalone algorithm performance.
The summary states that no clinical testing was provided in this pre-market notification (Section 7.9), and usability evaluation testing was not required for the modifications (Section 7.8). This indicates that the device's performance against specific clinical acceptance criteria, as evaluated through human-in-the-loop or standalone algorithm studies with detailed ground truth establishment, is not described in this document.
The performance testing described (Sections 7.1-7.7) includes:
- Bench testing: Optical performance, catheter deliverability, pullback performance, trackability, kink resistance, tensile strength. These tests were performed using "well-established methods used for the predicate devices."
- Biocompatibility testing: In accordance with ISO 10993-1.
- Animal testing: Performed in 3 porcine models (18 imaging passes) to evaluate vascular injury, thrombogenicity, device safety, and device performance.
Therefore, I cannot provide the requested table and details because the information is not present in the provided document.
(17 days)
NQQ
The Gentuity® HF-OCT Imaging System with Vis-Rx® Micro-Imaging Catheter is intended for intravascular imaging and is indicated for use in coronary arteries in patients who are candidates for transluminal procedures. The Vis-Rx Micro-Imaging Catheter is intended for use in vessels 1.3 to 6.0 mm in diameter. The Vis-Rx Micro-Imaging Catheter is also intended for use prior to or following transluminal interventional procedures. The Vis-Rx Micro-Imaging Catheter is not intended for use in a target vessel that has undergone a previous bypass procedure.
The Gentuity® Imaging System provides images of the coronary arteries in patients who are candidates for transluminal interventional procedures. The system utilizes fiber-optic technology to deliver near-infrared light and receive light reflected from coronary tissue to produce high resolution, real-time images. The Gentuity Imaging System consists of the following components:
- The Gentuity Imaging Console: A mobile system that houses the Optical Engine, the Computer and application software, and the Probe Interface Module (PIM). It also includes two monitors, keyboard, mouse, and cord storage as well as external interfaces to the system. The PIM provides the interconnection between the Gentuity Imaging Console and the Vis-Rx® Catheter.
- Vis-Rx® Micro-Imaging Catheter: The Vis-Rx catheter is a sterile, single-use catheter that consists of an external sheath and an optical imaging core. The external sheath facilitates placement of the device into the coronary artery, and houses the optical imaging core. An optical fiber and lens assembly rotates inside the optical imaging core. The optical fiber and lens deliver near-infrared light to the tissue and receive reflected light. The Vis-Rx catheter is a rapid exchange design, compatible with an 0.014″ guidewire. The catheter attaches to the PIM, which is mounted outside the sterile field on the table bed rail. A sterile 3 ml purge syringe is provided with the Vis-Rx catheter.
- Optional Gentuity Review Station: The Gentuity Review Station (GRS) is an optional standalone computer with the Gentuity application software that provides analysis and review capabilities similar to what may be performed on the Gentuity Console. The GRS allows physicians to review images for research, presentation and publication preparation outside the catheterization lab without the Gentuity Console.
The provided document (K242239) is a 510(k) summary for the Gentuity® HF-OCT Imaging System with Vis-Rx® Micro-Imaging Catheter. This particular 510(k) states that "No additional non-clinical and clinical performance testing was required to support review of this 510(k) Premarket Notification" as the proposed device is identical to the predicate device (K230620).
Therefore, the acceptance criteria and study information would be found in the 510(k) for the predicate device (K230620), not in the provided document (K242239).
As this document does not contain the information requested regarding acceptance criteria and performance testing for the current device, I cannot provide an answer based solely on the provided text.
(126 days)
NQQ
deepLive is intended to be used as a non-invasive imaging tool in the evaluation of external human tissue microstructure by providing three-dimensional, cross-sectional and en-face real-time depth visualization for assessment by physicians to support in forming a clinical judgment.
deepLive was designed for an easy integration into clinical practices. The device is composed of:
- A. A mobile cart, allowing the user to move the whole device and including a cart tablet for accessories.
- B. A touchscreen, fixed on the cart mast, displaying the software interfaces to the user.
- C. A hand-held probe, integrating the LC-OCT imaging system (interferometric microscope, OCT camera). The probe is connected to the CPU front panel by a sheathed cable bundle, and stored in a dedicated probe-holder fixed on the cart tablet. The probe is the interface between the device, the doctor and the patient: its measuring head (tip) must be positioned in contact with the patient's skin.
- D. A central power unit (CPU), mounted on the cart, integrating various imaging and electronic peripherals (laser, computer, electronic cards, drivers, power supplies, etc.), driving and powering the imaging probe.
- E. A software running on the device's computer, which controls the components of the system, acquires and processes images, and provides user interfaces for performing examinations and managing data.
deepLive hardware interfaces are located on the front-panel of the CPU. Input/output connections include:
- 1 Display port to connect the screen
- 3 USB ports to connect external drives (Wi-Fi key, hard disk drive, etc.)
deepLive software runs on a computer embedded in the CPU of the device. The computer uses Windows Enterprise LTSC operating system. The software executable and all dynamic libraries needed for program execution are deployed at a specific location in the file system.
Secured access to the computer operating system, the deepLive software, and data folders is managed by the Windows session authentication system. The computer hosting deepLive is also likely to have applications installed by DAMAE Medical:
- Synology Drive: used to retrieve device data for maintenance and software improvement purposes.
- TeamViewer: remote-control software used for manual software updates and for resolving software issues.
The provided text is an FDA 510(k) clearance letter and associated summary for the deepLive device. It outlines the device's characteristics, indications for use, and a comparison to a predicate device. However, it does not contain the detailed information necessary to fully address all parts of your request regarding acceptance criteria and the comprehensive study that proves the device meets these criteria.
Specifically, the document states: "Safety and performance of the deepLive device have been evaluated and verified in accordance with product and software specifications and applicable performance standards through verification, validation, nonclinical performance, and safety testing." and "Verification, validation, nonclinical performance, and safety test results established that the device meets its design requirements and indications for use, that it is as safe and as effective as the predicate device, and that no new questions of safety and effectiveness have been raised." However, it does not provide specific numerical acceptance criteria or the reported performance data from these tests in a detailed manner. There's also no mention of a clinical human-in-the-loop study (MRMC) or an AI-specific standalone performance study.
Given the information provided in the input, here's what can be extracted and what cannot:
Acceptance Criteria and Study for deepLive Device
The deepLive device is an imaging system, not an AI/ML diagnostic tool in the sense that would require typical AI performance metrics like sensitivity, specificity, AUC, or reader studies for decision-making support. Its "performance" in this context primarily refers to its imaging capabilities, technical specifications, and safety.
Since the document does not provide a table of acceptance criteria and reported device performance related to a diagnostic task or specific image interpretation metrics, it's impossible to create such a table in the requested format. The performance testing section broadly states that the device was evaluated according to product and software specifications and applicable standards, and that it met its design requirements.
Here's a breakdown of what can be answered based on the provided text:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria (Implicit): The text states that "Safety and performance of the deepLive device have been evaluated and verified in accordance with product and software specifications and applicable performance standards through verification, validation, nonclinical performance, and safety testing." and that "Verification, validation, nonclinical performance, and safety test results established that the device meets its design requirements and indications for use..." This implies the acceptance criteria were met internally based on design requirements, but these specific requirements and corresponding performance values are not detailed in the publicly available 510(k) summary.
- Reported Device Performance: The document provides technical specifications of the deepLive device and compares them to the predicate device. These are performance characteristics of the imaging system itself, rather than performance on a diagnostic task (e.g., classifying disease).
Parameter | Acceptance Criteria (Implied: "Meets or Exceeds Predicate/Design Specs") | Reported Device Performance (deepLive) | Predicate Device (VivoSight Dx) | Substantially Equivalent? |
---|---|---|---|---|
Imaging Modality | Be OCT | Optical Coherence Tomography | Optical Coherence Tomography | Yes |
Near Infrared Wavelength | Yes (700-1400 nm) | Yes | Yes | Yes |
Light Source Wavelength | Compatible for imaging | 800 nm | 1305 nm | Yes |
Frame Rate (B-scan) | Adequate for real-time imaging | 8 fps | 5 fps | Yes |
Frame Rate (A-scan) | Adequate for real-time imaging | 8 fps | N/A (not specified for predicate) | Yes |
Lateral Resolution | ≤ predicate resolution for comparable detail | 1.3 μm | 7.5 μm | Yes (deepLive is superior) |
Axial Resolution | ≤ predicate resolution for comparable detail | 1.1 μm | 10 μm | Yes (deepLive is superior) |
Lateral Scanning Range | Adequate for tissue assessment | 1.2 mm | 6 mm | Yes |
Axial Scanning Range | Adequate for tissue assessment | 0.5 mm | 1 mm | Yes |
Optical Safety | Class 1 medical device | Class 1 | Class 1 | Yes |
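For context on the resolution figures above, a standard OCT relation (general optics background, not stated in the 510(k) summary) ties axial resolution to the center wavelength and spectral bandwidth of the light source; the 250 nm bandwidth in the worked example below is a hypothetical value, since the summary does not report source bandwidths.

```latex
% Axial resolution of OCT for a Gaussian source spectrum (in air):
\[
  \Delta z \;=\; \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda}
  \;\approx\; 0.44\,\frac{\lambda_0^{2}}{\Delta\lambda}
\]
% Worked example with a hypothetical bandwidth:
% \lambda_0 = 800\,\mathrm{nm},\ \Delta\lambda = 250\,\mathrm{nm}
% \Rightarrow \Delta z \approx 0.44 \times (800\,\mathrm{nm})^2 / 250\,\mathrm{nm} \approx 1.1\,\mu\mathrm{m},
% the same order of magnitude as the 1.1 μm axial resolution listed for deepLive above.
```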
2. Sample size used for the test set and the data provenance
- The document mentions "verification, validation, nonclinical performance, and safety testing" but does not provide sample sizes for any test sets (e.g., number of images, patients, or tissue samples).
- Data provenance is not specified. The type of device (optical coherence tomography for external human tissue microstructure) suggests that if human data was used for validation of imaging quality, it would likely be prospective clinical data, but this is speculative given the lack of detail. There is no indication of country of origin.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This information is not provided because the nature of the device's clearance appears to be based on technical specifications and safety rather than a diagnostic performance study requiring expert ground truth beyond device design verification.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- This information is not applicable/not provided as no diagnostic ground truth establishment process is described.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs without AI assistance
- No MRMC study is mentioned. The device is described as an "imaging tool" for physicians to assess and support clinical judgment, but there is no mention of an AI component that assists human readers or an evaluation of such assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- No standalone algorithm performance study is mentioned. The device provides image visualization; it is not presented as an AI algorithm making diagnoses.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For the technical specifications, the "ground truth" would be engineering measurements and adherence to specified performance metrics. For safety, it would be compliance with international standards (IEC, EN, ASTM, ISO listed in the document). There is no mention of clinical ground truth (e.g., pathology, outcomes) in the context of a diagnostic performance study.
8. The sample size for the training set
- Not applicable/not provided. The document describes the device as an imaging system, not explicitly an AI/ML model that would require a "training set" in the context of machine learning.
9. How the ground truth for the training set was established
- Not applicable/not provided for the same reason as point 8.
(30 days)
NQQ
The AptiVue™ E series software is intended to be used only with compatible OPTIS™ imaging systems.
The OPTIS imaging system with a compatible Dragonfly™ imaging catheter is intended for the imaging of coronary arteries and is indicated in patients who are candidates for transluminal interventional procedures. The compatible Dragonfly imaging catheters are intended for use in vessels 2.0 to 3.5 mm in diameter. The compatible Dragonfly imaging catheters are not intended for use in the left main coronary artery or in a target vessel which has undergone a previous bypass procedure.
The OPTIS imaging system is intended for use in the catheterization and related cardiovascular specialty laboratories and will further compute and display various physiological parameters based on the output from one or more electrodes, transducers, or measuring devices. The physician may use the acquired physiological parameters, along with knowledge of patient history, medical expertise and clinical judgment to determine if therapeutic intervention is indicated.
OPTIS™ Systems with AptiVue™ Imaging Software (version E.6) perform Optical Coherence Tomography (OCT), Fractional Flow Reserve (FFR), and Resting Full-cycle Ratio (RFR) procedures and provides images of the coronary arteries in patients who are candidates for transluminal interventional procedures. Version E.6 adds cloud connectivity to enable remote installation of software updates and transmission of system telemetry data.
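The submission does not describe how these indices are computed; as general background (not taken from the letter), FFR and RFR are pressure-derived indices conventionally defined as shown below, where P_d is distal coronary pressure and P_a is aortic pressure.

```latex
% Conventional definitions (general background, not from the 510(k) letter):
\[
  \mathrm{FFR} \;=\; \left.\frac{\overline{P_d}}{\overline{P_a}}\right|_{\text{maximal hyperemia}}
  \qquad\qquad
  \mathrm{RFR} \;=\; \min_{t\,\in\,\text{cardiac cycle}} \left.\frac{P_d(t)}{P_a(t)}\right|_{\text{rest}}
\]
```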
The provided document (K232386) describes the premarket notification for the "ILUMIEN™ OPTIS™ System, OPTIS™ Integrated System, OPTIS™ Mobile System, with AptiVue™ Imaging Software version E.6". This submission primarily focuses on the addition of cloud connectivity features (remote software updates and telemetry data transmission) to an existing device (predicate device K183320).
Crucially, the document explicitly states that "No clinical testing is provided in this pre-market notification" (page 4). Therefore, the information requested regarding acceptance criteria and study proving device performance (including sample sizes, expert involvement, ground truth, MRMC studies, and standalone performance) cannot be extracted from this document, as such studies were not performed or reported for this particular submission.
The document focuses on demonstrating substantial equivalence based on the updated software's functional similarity and verification and validation (V&V) testing.
Here's what can be inferred from the document regarding the acceptance criteria and the study that proves the device meets the acceptance criteria, based on the non-clinical testing performed:
1. Acceptance Criteria and Reported Device Performance (Non-Clinical)
Acceptance Criteria (Inferred from V&V) | Reported Device Performance (as stated in the document) |
---|---|
Functionality of new cloud connectivity features (remote software updates, telemetry data transmission) | The device performs the stated functions. The core intent of the submission is that "Version E.6 adds cloud connectivity to enable remote installation of software updates and transmission of system telemetry data." The acceptance for these features would be their successful operation and secure data transfer. |
Adherence to user needs and product specifications | "The results demonstrate that the AptiVue Software version E.6 meets the user needs and product specifications and is appropriate for its intended use and does not raise any new issues of safety and effectiveness." This implies that the software performed as designed and met its functional and non-functional requirements. |
Compliance with internal design control procedures | "Software verification and validation tests were performed on OPTIS Systems with AptiVue E.6 Software in compliance with internal design control procedures." This indicates that the testing followed the company's established quality system for software development and validation. |
Safety and effectiveness (no new issues) | "does not raise any new issues of safety and effectiveness." This is a key regulatory acceptance criterion for 510(k) applications demonstrating substantial equivalence. The V&V testing confirms that the added features do not negatively impact the safety or effectiveness of the device compared to the predicate. This would involve risk analysis and mitigation for the new features. |
The remaining requested information (2-9) pertains to clinical studies and performance evaluation of an AI/ML algorithm against a ground truth, which is explicitly stated as not included in this submission.
Here's a breakdown of why the other points cannot be answered:
- Sample sizes used for the test set and the data provenance: Not applicable, as no clinical test set for AI/ML performance was used. The V&V testing would have involved engineering and software test cases, not patient data in the context of an AI performance study.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as no ground truth for AI/ML performance was established or reviewed for this submission.
- Adjudication method for the test set: Not applicable.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance: Not applicable. The submission is about adding cloud connectivity, not about AI-assisted interpretation or improvement of human reader performance.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done: Not applicable. This submission concerns software and hardware systems for imaging, FFR, and RFR; the "AptiVue™ Imaging Software" is part of the system, but the document does not indicate a distinct AI/ML algorithm for standalone diagnostic performance beyond what the predicate device already provided. The predicate's capabilities (OCT, FFR, RFR) involve algorithms, but this submission does not describe a new or modified AI/ML algorithm requiring standalone clinical validation.
- The type of ground truth used: Not applicable, as no clinical ground truth assessment (e.g., pathology, outcomes data) was reported for this submission. The "ground truth" for the V&V of the new features would be the expected functional behavior according to design specifications. A minimal illustrative example of such a functional check appears after this list.
- The sample size for the training set: Not applicable, as no machine learning model training was described or modified for this submission.
- How the ground truth for the training set was established: Not applicable.
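To make the preceding point concrete: a functional V&V check of this kind compares observed software behavior against behavior defined in a design specification. The sketch below is purely illustrative; the requirement IDs, version rule, and function names are hypothetical and not taken from the submission.

```python
import unittest

def parse_version(version: str) -> tuple:
    """Parse a dotted numeric version string, e.g. '2.1.0' -> (2, 1, 0)."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical behavior under test: a remote update package is accepted only if
# its version is strictly newer than the installed version (illustrative
# requirement, not taken from the submission).
def accept_remote_update(installed: str, offered: str) -> bool:
    return parse_version(offered) > parse_version(installed)

class RemoteUpdateVerification(unittest.TestCase):
    """Each case traces to a hypothetical design requirement ID (e.g., REQ-SW-101)."""

    def test_newer_version_is_accepted(self):           # REQ-SW-101 (hypothetical)
        self.assertTrue(accept_remote_update("2.0.0", "2.1.0"))

    def test_equal_or_older_version_is_rejected(self):  # REQ-SW-102 (hypothetical)
        self.assertFalse(accept_remote_update("2.1.0", "2.1.0"))
        self.assertFalse(accept_remote_update("2.1.0", "2.0.5"))

if __name__ == "__main__":
    unittest.main()
```

In this framing, the "ground truth" is simply the specified expected outcome of each test case, not a clinical reference standard.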
In summary, the provided FDA 510(k) clearance letter and summary primarily address an update to existing software with new cloud-based functionalities. It clearly states that "No clinical testing is provided." Therefore, the detailed questions about acceptance criteria for AI/ML performance, clinical test sets, experts, and ground truth are not relevant to the information contained within this specific document.
(86 days)
NQQ
The HyperVue™ Imaging System is intended for the imaging of coronary arteries and is indicated in patients who are candidates for transluminal interventional procedures.
The Starlight™ Imaging Catheter is intended for use in vessels 2.0 to 5.2 mm in diameter.
The Starlight Imaging Catheter is not intended for use in a target vessel which has undergone a previous bypass procedure.
The NIRS capability of the HyperVue Imaging System is intended for the detection of lipid core containing plaques of interest.
The NIRS capability of the HyperVue Imaging System is intended for the assessment of coronary artery lipid core burden.
The NIRS capability of the HyperVue Imaging System is intended for the identification of patients and plaques at increased risk of major adverse cardiac events.
The HyperVue™ Imaging System is an intravascular imaging device with the ability to simultaneously assess vessel composition and structure by combining Optical Coherence Tomography (OCT) and Near Infrared Spectroscopy (NIRS) in a single catheter-based system.
The HyperVue™ Imaging System consists of the following components:
- Console: A mobile platform containing the optical and computing engine, physician and technologist touch displays, power distribution system, and input/output interface.
- Software: A proprietary application software that orchestrates the control, acquisition, processing, and display of the OCT-NIRS data.
- Catheter Interface Unit (CIU): A tethered CIU that controls the motion of the fiber optic imaging core within the Catheter sheath and connects the Catheter to the Console.
- Imaging Catheter: A sterile, single patient use 2.5 French dual-modality imaging catheter containing a rotating fiber optic imaging core inside a protective sterile sheath.
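As a purely conceptual illustration of the "control, acquisition, processing, and display of the OCT-NIRS data" described above (all structures and names below are hypothetical; the actual pipeline is proprietary and not detailed in the document), the dual-modality recording might be modeled as:

```python
from dataclasses import dataclass
from typing import List

# Illustrative data structures only; the real frame contents, sampling rates,
# and processing chain are not described in the 510(k) summary.
@dataclass
class Frame:
    pullback_position_mm: float     # position of the imaging core along the vessel
    oct_a_lines: List[List[float]]  # one depth-reflectance profile per rotational angle
    nirs_spectrum: List[float]      # near-infrared absorbance samples at this position

@dataclass
class PullbackRecording:
    catheter_id: str
    frames: List[Frame]

def assemble_pullback(catheter_id: str, raw_frames: List[Frame]) -> PullbackRecording:
    """Order frames by pullback position so OCT cross-sections and the NIRS
    data can be displayed against a common longitudinal axis."""
    ordered = sorted(raw_frames, key=lambda f: f.pullback_position_mm)
    return PullbackRecording(catheter_id=catheter_id, frames=ordered)
```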
The provided text is a 510(k) Summary for the HyperVue™ Imaging System, focusing on a software update. It is important to note that this document does not contain the detailed performance testing results, acceptance criteria, or the study design (e.g., sample size, ground truth establishment, expert qualifications, MRMC study details) that would prove the device meets specific acceptance criteria for clinical performance.
The document states:
- "Design verification and validation (V&V) of the HyperVue™ Imaging System with the updated software were performed in compliance with external standards and internal design control procedures. V&V testing comprised of system/software verification and summative usability testing to confirm device performance."
- "Software verification and validation were conducted to FDA regulations, standards, and guidance document requirements. The results of this testing conclude the software has met these requirements."
- "Benchtop testing of the entire device was conducted to evaluate certain system-level features, such as measurements, that require both hardware and software to evaluate. The results of this testing conclude the system has met these requirements."
- "Usability evaluation was conducted to establish that the updated software for the HyperVue™ Imaging System meets the needs of the intended users to perform OCT-NIRS imaging safely and effectively according to ANSI/AAMI/IEC 62366-1."
- "No clinical testing is provided in this pre-market notification."
The FDA's 510(k) process primarily relies on demonstrating substantial equivalence to a predicate device. For software updates like this, the focus is often on verifying that the changes do not introduce new safety or effectiveness concerns and that the device continues to perform as intended. Detailed clinical performance studies (like MRMC studies) with specific acceptance criteria are often reserved for novel devices or significant changes that introduce new claims or potential risks not addressed by the predicate.
Therefore, based on the provided text, I cannot provide the requested information regarding specific acceptance criteria and the study that clinically proves the device meets them, because the document explicitly states "No clinical testing is provided in this pre-market notification." The performance testing described is primarily focused on software verification, bench testing of system-level features, and usability, demonstrating that the software update does not adversely affect the known performance characteristics of the predicate device.
Here's what I can extract and what remains unknown based on the provided text:
Acceptance Criteria and Reported Device Performance
Since no specific clinical performance metrics or thresholds are provided in this regulatory document for the software update, a table for clinical acceptance criteria and reported device performance cannot be generated from the given text. The "acceptance" by the FDA in this context is based on demonstrating substantial equivalence and ensuring the software update does not negatively impact existing validated performance.
Study Details (as much as can be inferred from the document)
1. A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: Not explicitly stated in terms of quantitative clinical performance metrics (e.g., sensitivity, specificity, accuracy for disease detection). The criteria are implicitly related to software verification, usability, and system-level functional performance (e.g., measurements, image display) as demonstrated by bench testing.
- Reported Device Performance:
- "The results of this testing conclude the software has met these requirements." (Referring to FDA regulations, standards, and guidance document requirements for software V&V).
- "The results of this testing conclude the system has met these requirements." (Referring to benchtop testing of system-level features).
- "The updated software for the HyperVue™ Imaging System has been found to be safe and effective for the intended users, uses, and use environments." (From Usability Study).
2. Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: Not specified for any performance testing.
- Data Provenance: Not specified. The testing seems to be internal verification and validation, possibly using simulated data, phantom data, or existing (de-identified) data for bench testing. The document states "No clinical testing is provided in this pre-market notification," meaning patient data for new clinical performance claims was not used for this submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable / Not specified. Given that no clinical testing was performed for this submission, there's no mention of expert-established ground truth for a clinical test set. Usability testing would involve users, but they are evaluating the software interface, not providing ground truth for diagnostic accuracy.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable / Not specified. No clinical test set requiring ground truth adjudication is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, an MRMC study was NOT done. The document explicitly states: "No clinical testing is provided in this pre-market notification." Therefore, no effect size of human reader improvement can be reported from this document.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- The document implies that "software verification and validation" and "benchtop testing" evaluate the software's functional performance. However, specific standalone performance metrics (e.g., for automated detection of features) are not provided in this summary. The software "orchestrates the control, acquisition, processing, and display of the OCT-NIRS data" and provides "computer-aided measurement tools." Whether these tools have undergone standalone performance validation (and what those metrics are) is not detailed here.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable / Not specified for clinical performance. For software verification and bench testing, ground truth would likely be established through engineering specifications, known correct behaviors, or physical measurements on phantoms, but this is not a clinical ground truth.
8. The sample size for the training set:
- Not applicable / Not specified. This document describes a software update and its verification, not the original development and training of a machine learning model. If any ML components were part of the predicate device, their training details are not provided here, and no new training due to this software update is indicated.
9. How the ground truth for the training set was established:
- Not applicable / Not specified. See point 8.
Conclusion from the document's perspective:
The submission for the HyperVue™ Imaging System (K230691) is for a software update to an existing cleared device. The manufacturer demonstrated substantial equivalence to its predicate (SpectraWAVE Imaging System and Catheter, K221257) by showing that the software modifications do not introduce new questions of safety or effectiveness. This was supported by:
- Software verification and validation in compliance with IEC 62304 and FDA requirements.
- Benchtop testing to confirm system-level features and measurements.
- Human factors engineering (HFE) usability testing to ensure safe and effective use.
The application explicitly states that no clinical testing was provided as part of this pre-market notification, indicating that the clearance is based on substantial equivalence and rigorous non-clinical verification of the software update.
(302 days)
NQQ
The SpectraWAVE Imaging System is intended for the imaging of coronary arteries and is indicated in patients who are candidates for transluminal interventional procedures.
The SpectraWAVE Imaging Catheter is intended for use in vessels 2.0 to 5.2 mm in diameter.
The SpectraWAVE Imaging Catheter is not intended for use in a target vessel which has undergone a previous bypass procedure.
The NIRS capability of the SpectraWAVE Imaging System is intended for the detection of lipid core containing plaques of interest.
The NIRS capability of the SpectraWAVE Imaging System is intended for the assessment of coronary artery lipid core burden.
The NIRS capability of the SpectraWAVE Imaging System is intended for the identification of patients and plaques at increased risk of major adverse cardiac events.
The SpectraWAVE Imaging System is an intravascular imaging device with the ability to simultaneously assess vessel composition and structure by combining Optical Coherence Tomography (OCT) and Near Infrared Spectroscopy (NIRS) in a single catheter-based system.
The SpectraWAVE Imaging System consists of the following components:
- Console: A mobile platform containing the optical and computing engine, physician and technologist touch displays, power distribution system, and input/output interface.
- Software: A proprietary application software that orchestrates the control, acquisition, processing, and display of the OCT-NIRS data.
- Catheter Interface Unit (CIU): A tethered CIU that controls the motion of the fiber optic imaging core within the Catheter sheath and connects the Catheter to the Console.
- Imaging Catheter: A sterile, single patient use 2.5 French dual-modality imaging catheter containing a rotating fiber optic imaging core inside a protective sterile sheath.
This document describes the SpectraWAVE Imaging System, an intravascular imaging device combining Optical Coherence Tomography (OCT) and Near Infrared Spectroscopy (NIRS). It aims to demonstrate substantial equivalence to predicate devices, K192019 Dragonfly OpStar™ Imaging Catheter and K183599 Makoto Intravascular Imaging System™.
Here's an analysis of the acceptance criteria and the study that proves the device meets the acceptance criteria, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not contain a specific table detailing quantitative acceptance criteria for device performance (e.g., accuracy, sensitivity, specificity for NIRS detection of lipid-core plaques) and corresponding reported performance values for each criterion. Instead, it broadly states that "All testing passed the acceptance criteria" for bench testing and that the animal study met acceptance criteria for acute performance and safety.
The Indications for Use (page 3) serve as a high-level set of intended performance characteristics for the NIRS capability:
- Detection of lipid core containing plaques of interest.
- Assessment of coronary artery lipid core burden.
- Identification of patients and plaques at increased risk of major adverse cardiac events.
However, specific numerical acceptance criteria for these indications (e.g., a minimum sensitivity or specificity for lipid core detection) are not provided in this document. The Device Comparison Tables (Table 1 and Table 2, pages 5-6) compare various technical specifications (e.g., catheter diameter, image collection time, rotational rate) to predicate devices, indicating that the SpectraWAVE device's specifications are "Substantially equivalent," which implies they offer functional performance comparable to that of the predicates.
For instance, an implicit performance and acceptance criterion for the OCT portion is the statement that the "SpectraWAVE Imaging System allows imaging of vessels up to 5.2mm in diameter, which covers the expected range of left main coronary arteries" ("Discussion of Equivalence & Differences" column, page 6), made in comparison to the primary predicate, which is limited to 3.5mm. This implies the SpectraWAVE device meets or exceeds the OCT imaging range of its predicate.
Similarly, Table 2 states that "NIRS Verification & Validation summarizes the NIRS performance of the SpectraWAVE system, with the predicate device as a reference," indicating that NIRS performance was evaluated against the predicate and implying an acceptance criterion of comparable performance. However, the specific quantitative comparison is not detailed here.
Given the absence of a detailed quantitative table in the provided text, a summary is provided below based on the implicit and some explicit performance claims.
Acceptance Criteria (Inferred from Comparisons & Indications for Use) | Reported Device Performance (General Statements) |
---|---|
OCT Imaging: Imaging of coronary arteries, vessel diameter 2.0 to 5.2 mm. | "SpectraWAVE Imaging System allows imaging of vessels up to 5.2mm in diameter, which covers the expected range of left main coronary arteries." (page 6). Bench testing "successfully completed, raising no new issues of safety or effectiveness. All testing passed the acceptance criteria." (page 17) |
NIRS Capability: Detection of lipid core containing plaques. | "NIRS Verification & Validation summarizes the NIRS performance of the SpectraWAVE system, with the predicate device as a reference." (page 1-2 of Table 2 discussion, page 7). Bench testing "successfully completed, raising no new issues of safety or effectiveness. All testing passed the acceptance criteria." (page 17). |
NIRS Capability: Assessment of coronary artery lipid core burden. | "NIRS Verification & Validation summarizes the NIRS performance of the SpectraWAVE system, with the predicate device as a reference." (page 1-2 of Table 2 discussion, page 7). Bench testing "successfully completed, raising no new issues of safety or effectiveness. All testing passed the acceptance criteria." (page 17). |
NIRS Capability: Identification of patients and plaques at increased risk of MACE. | "NIRS Verification & Validation summarizes the NIRS performance of the SpectraWAVE system, with the predicate device as a reference." (page 1-2 of Table 2 discussion, page 7). Bench testing "successfully completed, raising no new issues of safety or effectiveness. All testing passed the acceptance criteria." (page 17). |
Catheter Safety & Performance: Acute performance and vascular injury in vivo. | Animal study: "the test device met the acceptance criteria for the study and should be considered to have acceptable acute performance and safety." (page 17) |
General System Performance: Compliance with technical specifications and safety standards. | Bench testing: "demonstrates its system meets its performance specifications." "All testing passed the acceptance criteria." (page 17). Compliance with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-6, IEC 62366-1, and IEC 60825-1 (EMC/Basic Electrical Safety, page 16). Software V&V met FDA regulations, standards, and guidance (page 16). Usability met ANSI/AAMI/IEC 62366-1 (page 17). Sterilization SAL 10^-6 (page 16). |
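The table above restates the NIRS claims for lipid core detection and lipid core burden assessment without quantitative criteria. For context only: in the published NIRS literature (e.g., for the predicate Makoto system), lipid core burden is commonly summarized by a Lipid Core Burden Index (LCBI), the fraction of valid chemogram pixels classified as lipid-positive scaled to a 0-1000 range. The document does not state that the SpectraWAVE system uses this metric, and the threshold below mirrors values commonly cited in the literature rather than anything in this submission; the sketch is a generic illustration:

```python
from typing import List, Optional

# Generic LCBI-style computation on a per-pixel lipid map ("chemogram").
# Pixels are lipid probabilities in [0, 1]; None marks invalid/unreadable pixels.
# The 0.6 threshold is the value commonly reported in NIRS literature and is
# not taken from this 510(k) document.
def lipid_core_burden_index(chemogram: List[Optional[float]], threshold: float = 0.6) -> float:
    valid = [p for p in chemogram if p is not None]
    if not valid:
        return 0.0
    lipid_positive = sum(1 for p in valid if p >= threshold)
    return 1000.0 * lipid_positive / len(valid)

example = [0.1, 0.7, None, 0.8, 0.2, 0.65, 0.3, None]
print(f"LCBI = {lipid_core_burden_index(example):.0f}")  # 3 of 6 valid pixels -> 500
```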
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not specify exact sample sizes for test sets in a numerical sense (e.g., number of cases or images) for performance studies related to lipid core detection or OCT imaging metrics. It mentions "a series of bench tests" (page 17) and "a GLP animal study" (page 17).
- Data Provenance:
- Bench Testing: In vitro, conducted internally by SpectraWAVE.
- Animal Testing: In vivo, conducted in a porcine coronary artery model, GLP (Good Laboratory Practice) study (page 17).
- Clinical Testing: "No clinical testing is provided in this pre-market notification." (page 17)
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
This information is not provided in the document. Since no clinical testing was performed and detailed performance metrics for NIRS/OCT interpretation by experts are not discussed, there's no mention of experts establishing ground truth for a test set. Ground truth for the animal study would typically be established through pathological examination by veterinary pathologists, but details are not provided.
4. Adjudication Method for the Test Set
This information is not provided in the document. Without details on expert review or ground truth establishment by multiple parties, an adjudication method cannot be inferred.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study focusing on human readers' improvement with or without AI assistance was not explicitly described or provided in this pre-market notification. The document states, "No clinical testing is provided in this pre-market notification." (page 17).
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
The document broadly mentions "Software verification and validation were conducted to FDA regulations, standards, and guidance document requirements. The results of this testing conclude the software has met these requirements." (page 16). While this confirms the software's functional correctness and validation, it does not specifically describe a standalone performance study of the algorithm's diagnostic capabilities (e.g., NIRS lipid core detection accuracy) without human intervention. The NIRS capability is intended for detection and assessment, implying an algorithmic component, but its standalone performance against a ground truth is not detailed in terms of metrics.
7. The Type of Ground Truth Used
- Bench Testing: Ground truth would be derived from known physical properties and measurements of phantoms or test objects.
- Animal Testing: Ground truth for acute performance and safety in the porcine model would typically involve direct observation, physiological measurements, and subsequent histopathological analysis of the vessel tissue. The document refers to "vascular injury" assessment, which implies pathological ground truth (page 17).
- NIRS Performance (implicit): The NIRS capabilities are compared against a predicate device, suggesting the predicate's established performance serves as a comparative reference, rather than explicitly an independent "ground truth" like pathology for novel claims. However, the predicate device (Makoto Intravascular Imaging System) itself utilizes NIRS for lipid core detection, which would have been validated against pathology in its original submission.
8. The Sample Size for the Training Set
This information is not provided. The document does not describe the development or training of any AI/ML models that would typically require a training set. Given that it's a 510(k) submission, the focus is on substantial equivalence rather than novel AI algorithm validation with separate training/test sets. Performance is demonstrated through equivalency to predicates and standard engineering verification and validation activities.
9. How the Ground Truth for the Training Set was Established
This information is not provided as no training set or AI/ML model training is described.