510(k) Data Aggregation (314 days)
CARDNAV
CardNav, an image acquisition and processing modular software package, is indicated for use as follows:
- Assist in projection selection using 3D modeling based on 2D images.
- Perform quantitative analysis on coronary veins based on fluoroscopy images.
- Assist in device positioning by providing real-time localization on predefined roadmaps and live fluoroscopy.
- Perform motion analysis on coronary veins based on fluoroscopy images.
- To be used in cardiac procedures and off-line for post-procedural analysis.
CardNav (version 1.0) is an image acquisition and processing modular software package designed as an add-on to conventional X-ray angiography systems. It enhances the output of cardiovascular angiography with software modules that assist in diagnosis, procedure planning, and therapeutic staging. These enhancements are obtained without altering the basic angiography procedure.
The provided text is a 510(k) summary for the CardNav device, and it states that the device is "substantially equivalent" to predicate devices. However, the document does not contain an acceptance criteria table or specific quantitative performance metrics for the device. It generally mentions "in-house software verification testing, on-site system evaluation, bench testing using phantoms and retrospective clinical data, and animal study comparing device results with a sonomicrometry measurement method." It concludes that "The results of the testing indicate that CardNav 1.0 performs as intended and is safe for its intended use."
Therefore, I cannot populate the table with specific acceptance criteria and reported device performance from the provided text.
Based on the provided text, here's what I can extract regarding the study and ground truth:
- Table of Acceptance Criteria and Reported Device Performance: This information is not provided in the document. The 510(k) summary states that "The results of the testing indicate that CardNav 1.0 performs as intended and is safe for its intended use" but does not offer specific quantitative acceptance criteria or corresponding performance statistics.
- Sample size used for the test set and the data provenance:
  - Test set sample size: Not specified.
  - Data provenance: The document mentions "retrospective clinical data" and an "animal study." No countries of origin are given for the clinical data.
- Number of experts used to establish the ground truth for the test set and their qualifications: Not specified.
- Adjudication method for the test set: Not specified.
- Multi-reader multi-case (MRMC) comparative effectiveness study and the effect size of human reader improvement with versus without AI assistance: No MRMC comparative effectiveness study is mentioned in the document. The 510(k) focuses on the device performing as intended and on substantial equivalence to predicate devices, not on human reader improvement with AI assistance.
- Standalone (algorithm-only, without human-in-the-loop) performance: The studies mentioned (in-house software verification, on-site system evaluation, bench testing, animal study) evaluate the device's performance directly. Although standalone testing is not explicitly stated, these test types imply algorithm-only evaluation. The clinical use cases describe the software assisting an operator, so the intended use is human-in-the-loop, while the underlying algorithms would be evaluated in standalone settings.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - For the animal study, the ground truth was a "sonomicrometry measurement method."
  - For the "retrospective clinical data," the type of ground truth is not explicitly stated; it would typically involve clinical assessments or established diagnostic methods.
- The sample size for the training set: Not specified. The document relates to a 510(k) submission, which typically focuses on validation data rather than training data for AI/ML models.
- How the ground truth for the training set was established: Not specified, as training set details are not provided.