510(k) Data Aggregation: Search Results
Found 3 results

Paieon IC-PRO System (119 days)
IC-PRO, an image acquisition and processing modular software package, is indicated for use as follows:
- Assists in the evaluation of coronary lesions by enabling the creation of 3D images of coronary vessel segments based on two to three 2D angiography images obtained from single-plane angiography.
- Provides quantitative information regarding the calculated dimensions of arterial segments based on the 3D image.
- Performs quantitative analysis of the left ventricle based on left ventricular angiograms.
- Enhances visualization of the stent deployment region and provides quantitative data based on manual stent tracings.
- Assists in device positioning by providing real-time localization on predefined roadmaps.
- Assists in projection selection using 3D modeling based on 2D images.
- Performs dimensional measurements based on DICOM images.
- To be used in-procedure in the catheterization lab and off-line for post-procedural analysis.
The IC-PRO (version 3.5, model B) system is an image acquisition and processing modular software package designed as an add-on to conventional X-ray angiography systems. It improves the output of cardiovascular angiography by providing software modules that assist in diagnosis, procedure planning, the therapeutic stage, and post-deployment analysis. The IC-PRO provides quantitative data on vessel, left-ventricular, and stent dimensions; enhances visualization; localizes devices on predefined roadmaps; and assists in projection selection. It is used in patients with vascular, congenital, valvular, and myopathic heart disease and in patients undergoing vascular stenting and artificial valve deployment.
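The summary does not describe how IC-PRO builds its 3D vessel model. Purely as a hedged illustration of the geometry involved, the sketch below triangulates a single 3D vessel landmark from two calibrated 2D angiographic views using linear (DLT) triangulation; the projection matrices and the `triangulate_point` helper are assumptions for the example, not Paieon's algorithm.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 projection matrices for the two angiographic views
             (assumed known from C-arm geometry/calibration).
    x1, x2 : (u, v) pixel coordinates of the same vessel landmark
             in each 2D image.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null vector of A: take the right singular
    # vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Quick self-check with two idealized (non-physical) views:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.array([[0., 0., 1., 0.],
               [0., 1., 0., 0.],
               [1., 0., 0., 2.]])
X_true = np.array([1., 2., 5., 1.])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))  # ~ [1. 2. 5.]
```

Repeating such a step along matched vessel centerlines would yield a 3D segment from which dimensions can be measured, which is consistent with (though not confirmed by) the indications quoted above.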
The provided documentation does not contain detailed acceptance criteria or a specific study demonstrating that the device meets such criteria with statistical rigor. It consists primarily of a 510(k) summary for the Paieon IC-PRO System, outlining the device's intended use and description and claiming substantial equivalence to predicate devices.
Here's an analysis of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document states: "Performance Data: Testing included software validation and performance evaluation. The performance tests were made to evaluate the IC-PRO System and yield accuracy and precision results within the predetermined specifications."
However, the predetermined specifications (acceptance criteria) are not explicitly stated, nor are the specific accuracy and precision results reported in a quantifiable manner (e.g., specific percentages, ranges, or statistical measures).
Therefore, a table cannot be constructed with the information provided.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document discusses "software validation and performance evaluation" but does not directly mention a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it provide any effect size for human reader improvement with or without AI assistance. The IC-PRO is described as assisting in diagnosis and analysis, implying a human-in-the-loop workflow, but no study quantifying its comparative effectiveness is detailed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document does not explicitly state whether a standalone performance study was conducted. Given the description of the system as providing "quantitative data and vessel measurements" and enhanced visualization, the output is evidently intended to be used and interpreted by clinicians, suggesting a human-in-the-loop context. However, it cannot be ruled out that some components were evaluated in a standalone manner; that level of detail is simply not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
This information is not provided in the document.
8. The sample size for the training set
This information is not provided in the document. The document describes the system as "an image acquisition and processing modular software package," implying an algorithm, but details about its development and training are absent.
9. How the ground truth for the training set was established
This information is not provided in the document.
Summary of what is present:
- Device: Paieon IC-PRO System (version 3.5, model B), an image acquisition and processing modular software package for cardiovascular angiography.
- Intended Use: Assists in evaluating coronary lesions (3D imaging, quantitative dimensions), quantitative analysis of the left ventricle, enhancing stent deployment visualization, device positioning, projection selection, and dimensional measurements on DICOM images. Used in-procedure and off-line.
- Performance Data Mentioned: "Testing included software validation and performance evaluation. The performance tests were made to evaluate the IC-PRO System and yield accuracy and precision results within the predetermined specifications."
- Substantial Equivalence Claim: The device is claimed to be substantially equivalent to several predicate devices (IC-PRO v3.2, Integris 3D-RA, Vitrea 2, iConnection PRO Stent Positioning System).
- Key Missing Information: All the detailed specifics requested about acceptance criteria, test set/training set sizes, data provenance, ground truth establishment, expert involvement, and study types (MRMC, standalone performance metrics) are absent from this 510(k) summary. This type of summary typically focuses on substantial equivalence rather than detailed study protocols and results.
Guided Medical Positioning System (gMPS™) (121 days)
The Guided Medical Positioning System (gMPS™) is intended for the evaluation of vascular and cardiac anatomy. It is intended to enable real-time tip positioning and navigation of a gMPS™ enabled (equipped with a gMPS™ sensor) diagnostic or therapeutic invasive device used in vascular or cardiac interventions in the Cath Lab environment, on either live fluoroscopy or a recorded background. The System is indicated for use as an adjunct to fluoroscopy.
The gMPS™, used in conjunction with an X-ray system, employs magnetic positioning technology to track the 3D position of a gMPS™ enabled diagnostic or therapeutic invasive device relative to any X-ray image, in real time or on a previously recorded cine-loop.
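The summary does not say how the tracked 3D position is rendered onto the 2D X-ray image. As an assumption-laden sketch only, the snippet below shows the standard pinhole projection such a system could use once the magnetic-tracker coordinate frame has been registered to the C-arm; the matrix `P` and the function name are hypothetical, not the vendor's API.

```python
import numpy as np

def project_sensor(P, sensor_xyz):
    """Project a tracked 3D sensor position onto 2D detector pixels.

    P          : 3x4 projection matrix mapping the (registered)
                 magnetic-tracker frame to X-ray pixel coordinates.
    sensor_xyz : 3D position reported by the magnetic sensor.
    """
    u, v, w = P @ np.append(sensor_xyz, 1.0)  # homogeneous coordinates
    return u / w, v / w                       # pixel (column, row)
```

Overlaying the projected point on each live or recorded frame is what would allow tip display "relative to any X-ray image," as the summary puts it.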
The gMPS™ system is intended to provide the following:
Catheter tip positioning and navigation - The real-time position of the gMPS™ sensor (and thus of the gMPS™ enabled device) is displayed in real-time ("Live") fluoroscopy mode or in a recorded mode.
Smart trace (foreshortening indication) - A 3D trace of the gMPS™ enabled device trajectory is projected and superimposed on the 2D X-ray images (either on live fluoroscopy, recorded cine-loop or recorded still image).
3D reconstructed model - The system reconstructs a 3D model of the inspected anatomical structure.
Quantitative longitudinal measurements - The measurements are based on the 3D trace, thus overcoming length measurement errors induced by the foreshortening effect (a numeric illustration follows this list).
Quantitative Coronary Angiography (QCA) - While working in conjunction with a gMPS™ enabled coronary device, the gMPS™ provides 3D QCA.
Virtual landmarking - A manually marked point or region of interest superimposed on X-ray images (real-time angiography and cine-loop) and on the 3D reconstruction.
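To make the foreshortening claim concrete, here is a hedged numeric illustration (not taken from the submission): a curved segment angled away from the detector plane has a 2D projected length noticeably shorter than its true 3D arc length. The trajectory and helper below are invented for the example.

```python
import numpy as np

# Hypothetical 3D device trajectory: a quarter-turn helical segment
# tilted out of the detector (x-y) plane. Units: mm.
t = np.linspace(0.0, np.pi / 2, 200)
trace = np.column_stack([20 * np.cos(t), 20 * np.sin(t), 15 * t])

def arc_length(points):
    """Sum of Euclidean distances between consecutive points."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

length_3d = arc_length(trace)         # true length along the 3D trace
length_2d = arc_length(trace[:, :2])  # length of its 2D projection
print(f"3D arc length:       {length_3d:.1f} mm")  # ~39.3 mm
print(f"Projected 2D length: {length_2d:.1f} mm")  # ~31.4 mm (foreshortened)
```

In this invented geometry a 2D measurement would understate the true length by roughly 20%, which is exactly the class of error the 3D-trace-based measurement is said to overcome.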
Here's a breakdown of the acceptance criteria and study information for the Guided Medical Positioning System (gMPS™), based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not explicitly list quantitative acceptance criteria or specific performance metrics with target values. Instead, it makes a general statement about performance testing.
2. Sample Size Used for the Test Set and Data Provenance
The provided text does not specify the sample size used for the test set or the data provenance (e.g., country of origin, retrospective/prospective). It only mentions "Performance testing was conducted."
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The provided text does not specify the number of experts used or their qualifications for establishing ground truth for the test set.
4. Adjudication Method for the Test Set
The provided text does not specify any adjudication method (e.g., 2+1, 3+1, none) for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The provided text does not mention if a multi-reader multi-case (MRMC) comparative effectiveness study was conducted. Therefore, no effect size of human readers improving with AI vs. without AI assistance can be determined from this document.
6. Standalone (Algorithm Only) Performance Study
The provided text describes the device's function as "real time tip positioning and navigation" and states "Performance testing was conducted in order to demonstrate the performance and accuracy of the gMPS™ and to verify that it does not raise any new safety and effectiveness issues in comparison to its predicate devices." This implies that the algorithm's performance in guiding the device was evaluated, which aligns with a standalone performance study. However, specific details of this study are not provided.
7. Type of Ground Truth Used
The provided text does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). Given the device's functions of "real time tip positioning and navigation" and "3D reconstructed model" of anatomical structures, the ground truth would most likely involve either:
- Direct measurement/imaging: comparing the system's reported position or reconstruction to a known physical measurement or a high-accuracy imaging modality.
- Expert validation: clinical experts confirming the accuracy of the system's output against their anatomical knowledge or other reference images.
8. Sample Size for the Training Set
The provided text does not specify the sample size for the training set.
9. How Ground Truth for the Training Set Was Established
The provided text does not specify how ground truth for the training set was established.
Innova 4100 IQ / 3100 IQ / 2100 IQ with StentViz (54 days)
The Innova systems are indicated for use in generating fluoroscopic images of human anatomy for vascular angiography, diagnostic and interventional procedures, and optionally, rotational imaging procedures. They are also indicated for generating fluoroscopic images of human anatomy for cardiology, diagnostic, and interventional procedures. They are intended to replace fluoroscopic images obtained through image intensifier technology. These devices are not intended for mammography applications.
The Innova 4100 IQ, 3100 IQ, 2100 IQ Systems are modified with an optional software feature called StentViz. The StentViz feature enhances the visibility of stents in the x-ray images produced by the Innova systems. Specifically, StentViz provides an enhanced static image of the stent that is derived from the video image sequence as recorded during fluoroscopic guidance. It does not provide real-time guidance.
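The summary does not disclose the StentViz algorithm itself. The sketch below is a generic, hedged illustration of how such "enhanced static image from a recorded sequence" features are commonly built: register every frame on the stent (e.g., via tracked balloon-marker positions) and average, so the stent reinforces while the moving background blurs out. The marker-tracking input and function names are assumptions, not GE's implementation.

```python
import cv2          # OpenCV (pip install opencv-python)
import numpy as np

def enhance_stent(frames, marker_tracks, ref_index=0):
    """Motion-compensated temporal averaging of a recorded cine sequence.

    frames        : list of 2D grayscale arrays (the recorded frames).
    marker_tracks : per-frame (2, 2) arrays with pixel coordinates of two
                    balloon markers (assumed detected by an upstream step).
    Each frame is aligned to the reference frame by the similarity
    transform mapping its markers onto the reference markers; averaging
    the aligned frames reinforces the stent and blurs the background.
    """
    ref = np.float32(marker_tracks[ref_index]).reshape(-1, 1, 2)
    h, w = frames[ref_index].shape
    acc = np.zeros((h, w), np.float32)
    for frame, markers in zip(frames, marker_tracks):
        src = np.float32(markers).reshape(-1, 1, 2)
        M, _ = cv2.estimateAffinePartial2D(src, ref)  # rotation+scale+shift
        acc += cv2.warpAffine(np.float32(frame), M, (w, h))
    return acc / len(frames)
```

This would match the summary's statement that the output is an enhanced static image "derived from the video image sequence" rather than real-time guidance.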
The information provided focuses on demonstrating substantial equivalence to predicate devices rather than establishing specific acceptance criteria and detailed performance studies for the StentViz feature itself. Therefore, many of the requested details about acceptance criteria, sample sizes, expert involvement, and ground truth establishment are not explicitly stated in the provided text.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly present a table of acceptance criteria with numerical targets. Instead, it relies on a qualitative comparison to predicate devices and general statements about image enhancement.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Enhanced visibility of stents (compared to Innova without StentViz) | Bench tests were performed "to assess the enhancement of stent visibility." The text states, "When StentViz is used, the image quality and visibility of the stent is improved." |
| Comparable image quality/stent enhancement to predicate devices (StentOp and StentBoost) | Bench tests were performed "to compare the performance of the Innova with StentViz to... the performance of the similar feature contained in the predicate device StentOp." The document concludes, "The image quality of the stent is enhanced in a comparable way with StentViz than with StentOp and StentBoost," and that its performance is "substantially equivalent to the predicate devices." |
| No adverse impact on safety or effectiveness | The improvement "does not adversely impact safety or effectiveness." |
| Compliance with voluntary standards | The Innova systems with StentViz comply with voluntary standards as detailed in Sections 9 and 17 of the premarket submission (details not provided in the extract). |
2. Sample size used for the test set and the data provenance:
- Sample Size: "Bench tests were performed based on a library of clinical images." The exact number of images in this "library" is not specified.
- Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It only mentions "clinical images."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The evaluation of "enhancement of stent visibility" appears to have been part of bench testing, but who performed these assessments and their qualifications are not mentioned.
4. Adjudication method for the test set:
- This information is not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- An MRMC comparative effectiveness study is not explicitly mentioned. The evaluation described consists of "bench tests" to assess enhancement and perform comparisons. The document does not discuss human reader performance or improvement with AI assistance (StentViz).
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The evaluation was a "bench test" comparing the StentViz feature to the Innova without StentViz and to a predicate device's feature (StentOp). While human readers would ultimately interpret the enhanced images, the description of "bench tests... to assess the enhancement of stent visibility" and to compare performance suggests an evaluation of the algorithm's output (the enhanced image) itself, making it akin to a standalone assessment of the image-enhancement capability. However, it is not explicitly described as an algorithm-only, human-out-of-the-loop evaluation in the strict sense of a diagnostic task.
7. The type of ground truth used:
- The document mentions "a library of clinical images" being used for bench tests to assess "enhancement of stent visibility." The ground truth for assessing this enhancement is not explicitly defined; it most likely relied on a subjective or objective assessment of image clarity and stent outline in the enhanced images compared with unenhanced images or with images from predicate devices. There is no mention of pathology or outcomes data being used as ground truth for this enhancement feature.
8. The sample size for the training set:
- The document does not provide information about a specific training set. The StentViz is described as an "optional software feature" that "enhances the visibility of stents." The development process mentioned includes "Risk Analysis, Requirements Reviews, Design Reviews, Testing on unit level (Module verification), Integration testing (System verification), Final acceptance testing (Validation), Performance testing (Verification), Safety testing (Verification)." This suggests a standard software development and testing cycle rather than a machine learning model that would typically have a distinct "training set."
9. How the ground truth for the training set was established:
- As no specific training set for a machine learning model is mentioned, this information is not applicable and not provided. The feature's development would have followed established image-processing engineering practices rather than a machine-learning training paradigm with a labeled ground-truth dataset.