Search Results
Found 2 results
510(k) Data Aggregation
(42 days)
Vitrea® Coronary Artery Analysis (CT Cardiac Analysis) is intended for investigating coronary obstructive disease by providing a non-invasive survey of a patient's coronary arteries.
Clinicians can select any coronary artery to view the following anatomical references: the highlighted vessel in 3D, two rotatable curved MPR vessel views displayed at 90-degree angles to each other, and cross sections of the vessel. The clinician can semi-automatically determine contrasted lumen boundaries, stenosis measurements, and maximum and minimum lumen diameters. In addition, clinicians can edit lumen boundaries and examine Hounsfield unit statistics.
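For illustration only (the submission contains no code), here is a minimal sketch of the kind of computation behind these measurements, assuming the CT slice and an editable binary lumen mask are available as NumPy arrays. The function names and the circle-equivalent diameter estimate are assumptions of this write-up, not Vitrea's method:

```python
import numpy as np

def lumen_stats(hu_slice: np.ndarray, lumen_mask: np.ndarray, pixel_mm: float) -> dict:
    """Hounsfield-unit statistics and a crude diameter estimate for one vessel
    cross section. lumen_mask is a boolean array. Illustrative only; the
    product's contour models are not described in the submission."""
    hu = hu_slice[lumen_mask]                          # HU values inside the lumen
    area_mm2 = lumen_mask.sum() * pixel_mm ** 2        # lumen area in mm^2
    eff_diameter_mm = 2.0 * np.sqrt(area_mm2 / np.pi)  # circle-equivalent diameter
    return {
        "hu_mean": float(hu.mean()), "hu_std": float(hu.std()),
        "hu_min": float(hu.min()), "hu_max": float(hu.max()),
        "area_mm2": float(area_mm2), "eff_diameter_mm": float(eff_diameter_mm),
    }

def percent_diameter_stenosis(lesion_mm: float, reference_mm: float) -> float:
    """Percent diameter stenosis relative to a healthy reference segment."""
    return 100.0 * (1.0 - lesion_mm / reference_mm)
```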
This submission is for the addition of a new feature, the Multi-Vessel preset, to the Vitrea CT Cardiac Analysis application. This application was originally cleared as "CT Coronary Analysis" in the predicate submission K052632 (Vitrea Image Processing Software). The application resides on the Vitrea AV platform, most recently cleared as K172855 (Vitrea Advanced Visualization, Version 7.6).
The submission is also intended to notify the Agency of the following non-significant changes to the previous clearance, documented by Letters to File (LTF):
· Vessel Tracking for Low kV Scans
· Angiographic View
The previously cleared Vitrea CT Cardiac Analysis option (cleared in the predicate submission K052632 under the name "CT Coronary Analysis") provides a variety of tools for use with clinical CT images of the coronary arteries, heart, and surrounding tissue. The software supports CTA studies acquired by 4-slice and above multislice CT scanners, and includes the following features:
- · Automatic segmentation of the heart from the rest of the anatomy
- · Zero-click coronary vessel tree segmentation and automatic labeling of the three main coronary arteries
- · Selection of any coronary artery for viewing with the Vessel Probe tool with easy centerline review and editing
- · Full Vessel Probe capabilities for coronary arteries including the Lesion Tool, Vessel Walk, and Cath View
- · A flythrough preset configured for flying through the coronary vessels (Global Illumination Rendering is not available in the flythrough view)
- · Unique Heart Mode to automatically orient oblique MPR views to show one short-axis view and two long-axis views
- · Key findings classification during reading of the study for semi-automated structured report generation
- · Measurement of plaque burden between the lumen and the outer wall with the SUREPlaque tool
- · Display of a Transluminal Attenuation Gradient (TAG) for probed vessels (a sketch of the common TAG computation follows this list)
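The submission does not define how TAG is computed; in the imaging literature it is commonly the slope of a least-squares line fit of mean luminal attenuation against distance along the vessel centerline, reported per 10 mm. A minimal sketch under that assumption, which may not match this product's implementation:

```python
import numpy as np

def transluminal_attenuation_gradient(distance_mm: np.ndarray,
                                      mean_lumen_hu: np.ndarray) -> float:
    """Slope (HU per 10 mm) of a least-squares line fit of mean luminal
    attenuation versus centerline distance, per the common literature
    definition of TAG; not necessarily this product's implementation."""
    slope_hu_per_mm, _ = np.polyfit(distance_mm, mean_lumen_hu, 1)
    return 10.0 * slope_hu_per_mm  # conventionally reported per 10 mm

# Example: attenuation falling along the vessel yields a negative TAG.
d = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
hu = np.array([420.0, 410.0, 395.0, 385.0, 370.0])
print(transluminal_attenuation_gradient(d, hu))  # -25.0 HU per 10 mm
```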
This submission adds the new Multi-Vessel preset feature. Whereas the predicate offered initial automated probing and labeling of the three main coronary arteries, the new feature adds initial automated probing (without labeling) of up to seventeen additional vessels in the vessel trees associated with the main coronary arteries: the left anterior descending artery (LAD) tree, the left circumflex artery (LCX) tree, and the right coronary artery (RCA) tree. The tree structure allows edits to the trunk to be reflected on all the branch vessels. In both the subject and the predicate devices, the user can manually probe an unlimited number of additional vessels. The capability of either manually or automatically "probing" vessels was present in the predicate software.
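To picture the trunk-to-branch behavior described above, here is a hypothetical tree structure in which each branch stores its centerline relative to an attachment point on its parent, so a trunk edit propagates to every downstream branch. This is an illustrative data model only, not the product's internal representation:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Vessel:
    """One vessel in a coronary tree. Branch centerlines are stored relative
    to their attachment point on the parent, so an edit to the trunk's
    centerline is automatically reflected in every branch (illustrative)."""
    name: str
    local_points: np.ndarray            # (N, 3) points, relative to attachment
    attach_index: int = 0               # index into the parent's centerline
    parent: "Vessel | None" = None
    children: list = field(default_factory=list)

    def add_branch(self, branch: "Vessel", attach_index: int) -> None:
        branch.parent, branch.attach_index = self, attach_index
        self.children.append(branch)

    def world_points(self) -> np.ndarray:
        if self.parent is None:
            return self.local_points
        anchor = self.parent.world_points()[self.attach_index]
        return anchor + self.local_points

rca = Vessel("RCA", np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 10.0], [0.0, 0.0, 20.0]]))
pda = Vessel("PDA", np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 5.0]]))
rca.add_branch(pda, attach_index=2)
rca.local_points[2] += np.array([1.0, 0.0, 0.0])  # edit the trunk centerline
print(pda.world_points())                          # the branch follows the edit
```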
The provided text describes a 510(k) premarket notification for a medical device called "Vitrea CT Cardiac Analysis" and includes information about its functionalities and validation. However, it does not contain specific acceptance criteria, reported device performance metrics in a defined table format, sample sizes for test sets, data provenance, details on experts, adjudication methods, or results of MRMC or standalone studies.
The submission focuses on a new feature, the "Multi-Vessel preset," and general software verification and validation. It states that "the software achieved all product release criteria" but does not enumerate what those criteria are or present quantitative results against them.
Therefore, many of the requested information points cannot be extracted directly from the provided text. I will provide the information that is available and explicitly state where the requested information is not present.
1. A table of acceptance criteria and the reported device performance
Unfortunately, the provided text does not contain a specific table of acceptance criteria with corresponding reported device performance metrics. It generally states that "the software achieved all product release criteria" and that "testing confirmed that the software functions according to its requirements," but does not detail these criteria or the quantitative results.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The text mentions "executing cardiac CTA test cases" for validation but does not specify the sample size of the test set nor the data provenance (e.g., country of origin, retrospective/prospective nature).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the text. The document refers to internal software verification and validation, but no details about expert involvement in establishing ground truth for testing are mentioned.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The text does not indicate that a multi-reader multi-case (MRMC) comparative effectiveness study was done. The focus is on the device's functionality and its internal validation.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The provided text describes the device as a tool with which clinicians can "semi-automatically determine contrasted lumen boundaries, stenosis measurements, and maximum and minimum lumen diameters" and "edit lumen boundaries." This implies human-in-the-loop interaction. While the new "Multi-Vessel preset" offers "initial automated probing (without labeling)," the device's overall design is for clinical use with clinician interaction. A standalone (algorithm-only) performance study is not explicitly mentioned or detailed in the text.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The text mentions "internal software verification and validation" and "cardiac CTA test cases," but it does not specify the type of ground truth used for these tests.
8. The sample size for the training set
The text describes "Vitrea CT Cardiac Analysis" as an application with new features being added. It does not provide information about a "training set" or its sample size. This type of information is usually associated with machine learning model development, and while the device has "automatic segmentation" and "automatic labeling" features, the document focuses on the validation of software functionality rather than the development and training of new AI models.
9. How the ground truth for the training set was established
As no training set is discussed, this information is not provided in the text.
(21 days)
Multi Modality Viewer is an option within Vitrea that allows the examination and manipulation of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.
The Multi Modality Viewer provides an overview of the study, facilitates side-by-side comparison including priors, allows reformatting of image data, enables clinicians to record evidence and return to previous evidence, and provides easy access to other Vitrea applications for further analysis.
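To make the multi-series bookkeeping concrete, here is a minimal sketch (assuming the pydicom library and a directory of .dcm files; not Vitrea's implementation) of grouping DICOM instances by patient, modality, and series so they can be laid out side by side:

```python
from collections import defaultdict
from pathlib import Path
import pydicom

def group_series(dicom_dir: str) -> dict:
    """Group DICOM files by (PatientID, Modality, SeriesInstanceUID) so that
    series from MRI, CT, US, PET, etc. can be laid out side by side.
    Sketch of the bookkeeping a multi-modality viewer needs."""
    series = defaultdict(list)
    for path in Path(dicom_dir).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # headers only
        key = (ds.PatientID, ds.get("Modality", "?"), ds.SeriesInstanceUID)
        series[key].append(path)
    return series

for (patient, modality, uid), files in group_series("study/").items():
    print(patient, modality, uid[:12], f"{len(files)} images")
```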
Here's a breakdown of the acceptance criteria and study information for the Multi Modality Viewer, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of numerical "acceptance criteria" for performance metrics in the typical sense (e.g., sensitivity, specificity, accuracy thresholds). Instead, it focuses on functional capabilities and states that verification and validation testing confirmed the software functions according to requirements and that "no negative feedback was received," and "Multi Modality Viewer was rated as equal to or better than the reference devices."
The acceptance is primarily based on establishing substantial equivalence to predicate and reference devices, demonstrating that the new features function as intended and do not raise new questions of safety or effectiveness.
| Feature/Criterion | Acceptance Standard (Implied) | Reported Device Performance/Conclusion |
|---|---|---|
| Overall Safety & Effectiveness | Safe and effective for its intended use, comparable to predicate and reference devices. | Clinical validations demonstrated clinical safety and effectiveness. |
| Functional Equivalence | New features operate according to defined requirements and functions similarly to or better than features in reference devices. | Verification testing confirmed software functions according to requirements. External validation evaluators confirmed sufficiency of software to read images and rated it "equal to or better than" reference devices. |
| No Negative Feedback | No negative feedback from clinical evaluators regarding functionality or image quality of new features. | "No negative feedback received from the evaluators." |
| Substantial Equivalence | Device is substantially equivalent to predicate and reference devices regarding intended use, clinical effectiveness, and safety. | "This validation demonstrates substantial equivalence between Multi Modality Viewer and its predicate and reference devices with regards to intended use, clinical effectiveness and safety." |
| Risk Management | All risks reduced as low as possible; overall residual risk acceptable; benefits outweigh risks. | "All risks have been reduced as low as possible. The overall residual risk for the software product is deemed acceptable. The medical benefits of the device outweigh the residual risk..." |
| Software Verification (Internal) | Software fully satisfies all expected system requirements and features; all risk mitigations function properly. | "Verification testing confirmed the software functions according to its requirements and all risk mitigations are functioning properly." |
| Software Validation (Internal) | Software conforms to user needs and intended use; system requirements and features implemented properly. | "Workflow testing... provided evidence that the system requirements and features were implemented properly to conform to the intended use." |
| Cybersecurity | Follows FDA guidance for cybersecurity in medical devices, including hazard analysis, mitigations, controls, and update plan. | Follows internal documentation based on FDA Guidance: "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices." |
| Compliance with Standards | Complies with relevant voluntary consensus standards (DICOM, ISO 14971, IEC 62304). | The device "complies with the following voluntary recognized consensus standards" (DICOM, ISO 14971, IEC 62304 listed). |
| New features don't raise new safety/effectiveness questions | New features are similar enough to existing cleared features in predicate/reference devices that they don't introduce new concerns. | For each new feature (Full volume MIP, Volume image rendering, 3D Cutting Tool, Clip Plane Box, bone/base segmentation tools, 1 Click Visible Seed, Automatic table segmentation, Automatic bone segmentation, US 2D Cine Viewer, Automatic Rigid Registration), the document states that the added feature "does not raise different questions of safety and effectiveness" due to similarity with a cleared reference device. |
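As background for the "Full volume MIP" row above (the submission gives no implementation detail), a maximum intensity projection keeps the brightest voxel along each viewing ray; for an axis-aligned projection this reduces to a single NumPy reduction:

```python
import numpy as np

def axial_mip(volume: np.ndarray) -> np.ndarray:
    """Full-volume maximum intensity projection along the axial (z) axis:
    each output pixel is the brightest voxel on the ray through it.
    volume has shape (z, y, x)."""
    return volume.max(axis=0)
```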
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document repeatedly mentions "anonymized datasets" but does not specify the number of cases or images used in the external validation studies.
- Data Provenance: The data used for the external validation studies were "anonymized datasets." The country of origin is not explicitly stated, but the evaluators were from "three different clinical locations." Given Vital Images, Inc. is located in Minnetonka, MN, USA, it's highly probable the data and clinical locations are from the United States. The studies were likely retrospective as they involved reviewing "anonymized datasets" rather than ongoing patient enrollment.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Three evaluators.
- Qualifications of Experts: The evaluators were "from three different clinical locations" and are described as "experienced professionals" in the context of simulated usability testing and clinical review. Their specific medical qualifications (e.g., radiologist, specific years of experience) are not explicitly detailed in the provided text.
4. Adjudication Method for the Test Set
The document does not describe an explicit "adjudication method" for establishing ground truth or resolving discrepancies between experts in the traditional sense. The phrase "no negative feedback received from the evaluators" and "Multi Modality Viewer was rated as equal to or better than the reference devices" suggests a consensus or individual evaluation model, but not a specific adjudication protocol like 2+1 or 3+1. It appears the evaluations focused on confirming functionality and subjective quality rather than comparing against a pre-established ground truth for a diagnostic task.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not explicitly stated to have been done in the context of measuring improvement with AI vs. without AI assistance.
- The "Substantial Equivalence Validation" involved three evaluators comparing the subject device against its predicate and reference devices. However, this comparison focused on functionality and image quality and aimed to show the equivalence or non-inferiority of the new device and its features, rather than quantifying performance gains due to AI assistance in human readers. The new features mentioned (like automatic segmentation or rigid registration) are components that might assist, but the study design wasn't an MRMC to measure the effect size of this assistance on human performance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- The document describes "software verification testing" which confirms "the software functions according to its requirements." This implies a form of standalone testing for the algorithms and features. For example, "Automatic table segmentation" and "Automatic bone segmentation" are algorithms, and their functionality would have been tested independently.
- However, no specific performance metrics (e.g., accuracy, precision, or segmentation overlap) for these algorithms in a standalone capacity are provided from these tests. The external validation was a human-in-the-loop setting where evaluators used the software. A sketch of a typical standalone segmentation metric follows.
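For reference, the Dice coefficient sketched below is a standard standalone metric for segmentation algorithms of this kind; it is an assumption of this write-up, not something the submission reports:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 1.0 means
    perfect overlap, 0.0 means none. A typical standalone segmentation
    metric; the submission does not report one."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0
```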
7. The Type of Ground Truth Used
The external validation involved "clinical review of anonymized datasets" where evaluators assessed "functionality and image quality." For new features like segmentation or registration, the "ground truth" would likely be based on the expert consensus or judgment of the evaluators during their review of the anonymized datasets, confirming if the segmentation was accurate or if the registration was correct and useful. There is no mention of pathology, direct clinical outcomes data, or a separate "ground truth" panel.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. It details verification and validation steps for the software but does not provide information about the development or training of any AI/ML components within the software. While features like "Automatic table segmentation" and "Automatic bone segmentation" likely involve machine learning, the document does not elaborate on their training data.
9. How the Ground Truth for the Training Set Was Established
Since the document does not specify the training set or imply explicit AI/ML development in the detail often seen for deep learning algorithms, it does not describe how the ground truth for the training set was established.