510(k) Data Aggregation
(258 days)
The FACT Medical Imaging System is a medical software application that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from a variety of imaging devices. The FACT Medical Imaging System is not meant for primary image interpretation in mammography. In addition, the FACT Medical Imaging System has the following indications:
(1) Pulmonary Review and Analysis Application
The Pulmonary Review and Analysis application is an option intended to support physicians in the diagnosis and documentation of pulmonary tissue abnormalities from pulmonary CT images. Three-dimensional segmentation and isolation of sub-compartments, volumetric analysis, density evaluations, and reporting tools are combined with a dedicated workflow.
The automated image registration facilitates the synchronous display and navigation of multiple datasets, simplifying data review and follow-up comparison. The summary report of findings helps the user track findings and note changes, such as in shape or size, over time.
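The submission does not describe how the density evaluations are computed, but CT lung density is conventionally summarized over a segmented lung mask in Hounsfield units, for example as the fraction of voxels below a low-attenuation threshold. A minimal sketch, where `ct_hu`, `lung_mask`, and the -950 HU threshold are illustrative assumptions rather than details from the document:

```python
import numpy as np

def density_summary(ct_hu: np.ndarray, lung_mask: np.ndarray,
                    voxel_volume_mm3: float, threshold_hu: float = -950.0) -> dict:
    """Summarize lung density over a segmented lung mask.

    ct_hu: CT volume in Hounsfield units; lung_mask: boolean mask of the
    same shape. The -950 HU low-attenuation threshold is a common
    convention, not a value taken from the submission.
    """
    hu = ct_hu[lung_mask]                        # HU values inside the lungs
    volume_l = hu.size * voxel_volume_mm3 / 1e6  # mm^3 -> litres
    return {
        "lung_volume_l": volume_l,
        "mean_hu": float(hu.mean()),
        "low_attenuation_fraction": float((hu < threshold_hu).mean()),
    }
```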
(2) Vessel Review and Analysis Application
The Vessel Review and Analysis application is an option intended for viewing the anatomy and pathology of a patient's coronary arteries.
Physicians can select any artery to view the following anatomical references: the highlighted vessel in 3D, a curved MPR vessel view, and a straightened MPR view with cross sections of the vessel. Physicians can semi-automatically obtain stenosis measurements and maximum and minimum lumen diameters. They can also measure vessel dimensions along the centerline in the curved MPR view and examine Hounsfield unit or signal intensity statistics.
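The summary does not state how the stenosis measurements are derived, but percent diameter stenosis is conventionally computed from the narrowest lumen diameter relative to a reference diameter. A minimal sketch under that assumption (function and parameter names are illustrative):

```python
import numpy as np

def percent_diameter_stenosis(lumen_diameters_mm: np.ndarray,
                              reference_diameter_mm: float) -> float:
    """Conventional percent diameter stenosis:
    %DS = (1 - d_min / d_ref) * 100, where d_min is the narrowest
    cross-sectional lumen diameter sampled along the centerline.
    """
    d_min = float(np.min(lumen_diameters_mm))
    return (1.0 - d_min / reference_diameter_mm) * 100.0

# e.g. diameters of 3.1, 2.9, 1.4, 2.8 mm against a 3.0 mm reference
# give (1 - 1.4/3.0) * 100 ≈ 53.3% diameter stenosis.
```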
The FACT Medical Imaging System is self-contained, non-invasive medical image processing software. It can be marketed as software only, as well as packaged with standard off-the-shelf PC hardware. It can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network.
The FACT Medical Imaging System allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from a variety of imaging devices. This device addresses physicians' needs through various application options for pulmonary and vessel applications.
The provided document is a 510(k) summary for the FACT Medical Imaging System. It focuses on establishing substantial equivalence to predicate devices rather than detailing the device's acceptance criteria or comprehensive study results. While it states that non-clinical tests were performed and "successfully passed," it does not provide the specific quantitative acceptance criteria or detailed results of those studies.
Therefore, for aspects like specific performance metrics, sample sizes, ground-truth establishment methods for test data, and detailed information about expert involvement, the answers below are "Not provided" or "Not applicable" because this information is not present in the given text.
Here's the information that can be extracted or inferred from the document:
1. Table of Acceptance Criteria and Reported Device Performance
The document states that "Pass/Fail criteria were based on the requirements and intended use of the proposed device. Test results showed that all tests successfully passed." However, specific numerical acceptance criteria and reported device performance are not explicitly provided in the given text. The tests performed include:
- Functional test
- Functional regression test
- Performance test
- Portability test
- Graphical user interface test
- Usability test
- Interoperability test
- Measurement accuracy evaluation and validation (see the scoring sketch after this list):
- Accuracy of pulmonary segmentation and measurements
- Accuracy of fissure segmentation and measurements
- Accuracy of airway segmentation and measurements
- Accuracy of vessel segmentation and measurements
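The submission reports only that these accuracy tests "successfully passed." Segmentation-accuracy evaluations of this kind are commonly scored with an overlap metric such as the Dice coefficient against a reference mask; the document does not confirm which metric was used, so the following is only a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two boolean segmentation masks:
    DSC = 2|A ∩ B| / (|A| + |B|).
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = int(pred.sum()) + int(truth.sum())
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * int(np.logical_and(pred, truth).sum()) / denom
```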
2. Sample size used for the test set and the data provenance
- Sample size for the test set: Not provided in the document.
- Data provenance (country of origin, retrospective/prospective): Not provided in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of experts: Not provided in the document.
- Qualifications of experts: Not provided in the document.
4. Adjudication method for the test set
- Adjudication method: Not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance
- MRMC study: No. The document states "Discussion of Clinical Tests Performed: None." This indicates that no clinical studies, including MRMC studies comparing human readers with and without AI assistance, were performed or submitted.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- The document implies that accuracy evaluations were performed for segmentation and measurements, which would typically be standalone algorithm performance. However, specific standalone performance metrics are not given. The document focuses on demonstrating that the software functions as intended and is comparable to predicate devices in its capabilities.
7. The type of ground truth used
- For the "Measurement accuracy evaluation and validation" tests (pulmonary, fissure, airway, and vessel segmentation and measurements), the ground truth type is not explicitly stated. It's common for such evaluations to use expert-annotated data, but this is not confirmed in the text.
8. The sample size for the training set
- The document does not mention a "training set" or "training data," as it describes medical image processing software (a PACS-type device) rather than an AI/ML device requiring a distinct training phase. If any internal parameters of the segmentation or measurement algorithms were optimized, the datasets used for such optimization are not specified.
9. How the ground truth for the training set was established
- Not applicable, as no training set is discussed for this type of device and submission.
(87 days)
Vitrea® CT Myocardial Perfusion is intended to assist a trained user for the visualization of hypo/hyper dense areas in patients with angina or with a previous myocardial infarction to assess the disease state and treatment. This software provides semi-automated heart and left ventricle segmentation and color polar maps of the myocardial tissue.
The information provided is intended to be qualitative in nature and, when used by a qualified physician, may aid in the identification of myocardial enhancement defects and the follow up of such findings.
Vitrea CT Myocardial Perfusion is a post-processing software option for the already cleared Vitrea software platform (K071331). It leverages the existing Vitrea platform functionalities such as Multi-Planar Reconstruction (MPR) images, Maximum Intensity Projections (MIP) and volume rendering.
Vitrea CT Myocardial Perfusion enables the visualization and analysis of perfusion defects in the myocardium. The software is intended for use with cardiac CT (Computed Tomography) studies to analyze cardiovascular anatomy and pathology and to assess the presence of hypo/hyper dense areas of myocardial tissue.
The software visualization tools provide semi-automated heart and Left Ventricle (LV) segmentation, and color overlay and polar maps of the myocardial tissue based on the Hounsfield attenuation (HU) values. The software displays the values associated with the generation of the Perfusion Index (PI) and Transmural Perfusion Ratio (TPR) maps. Perfusion Index (PI) is the ratio of the Mean Myocardial CT value to the LV blood pool CT value. Transmural Perfusion Ratio (TPR) is provided as a ratio per sector of the Endocardial CT value to the mean Epicardial CT value. The defect size scoring tool allows a user to delineate one or more contiguous 3D regions within the myocardium for independent size measurements. The user may observe the interpolated result as they are constructing the defect.
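Following the definitions above, PI and TPR reduce to simple ratios of Hounsfield-unit values. A minimal sketch; the array shapes and the sectorization scheme (e.g., a 17-segment polar map) are assumptions, as the document does not specify them:

```python
import numpy as np

def perfusion_index(mean_myocardial_hu: float, lv_blood_pool_hu: float) -> float:
    """PI = mean myocardial CT value / LV blood-pool CT value."""
    return mean_myocardial_hu / lv_blood_pool_hu

def transmural_perfusion_ratio(endocardial_hu: np.ndarray,
                               epicardial_hu: np.ndarray) -> np.ndarray:
    """Per-sector TPR = endocardial CT value of each sector divided by
    the mean epicardial CT value. Whether the epicardial mean is global
    or per-slice is not specified in the document; a global mean is
    assumed here.
    """
    return endocardial_hu / float(epicardial_hu.mean())
```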
The software analysis tools include measurements and comparison ratios. The CT Myocardial Perfusion application allows users to load one or two volumes. In dual-volume cases, the volumes are displayed based on time. It also includes reporting tools for formatting findings and user selected areas of interest.
The provided text describes the Vitrea CT Myocardial Perfusion software, its intended use, a comparison to predicate devices, and the non-clinical tests performed for its 510(k) submission. However, it does not contain specific acceptance criteria or a study that directly proves the device meets those criteria with numerical performance metrics.
The document states that clinical studies were not required to support the safety and effectiveness of the software. Instead, it relies on design control measures, software verification and validation, and internal/external validation using phantoms and experienced users.
Therefore, providing a table of acceptance criteria and reported device performance, sample sizes for test sets, number of experts, adjudication methods, MRMC study results, standalone performance, type of ground truth, training set size, or how training set ground truth was established is not possible based on the provided text. This information is typically found in clinical study reports or detailed validation documentation, which is not present here.
Summary of available information regarding testing:
- Table of acceptance criteria and reported device performance: Not explicitly stated with numerical performance. The document describes general quality assurance activities rather than specific performance metrics against pre-defined acceptance criteria.
- Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
  - Test set sample size: "various phantoms and patient based CT datasets" were used for internal validation, and "experienced users" for external validation. No specific number of cases or patients is provided.
  - Data provenance: Not specified.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience): For external validation, "experienced users" evaluated the software. No specific number or qualifications are provided beyond "experienced users."
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not mentioned.
- Multi-reader multi-case (MRMC) comparative effectiveness study and the effect size of how much human readers improve with AI vs. without AI assistance: No MRMC study was mentioned. The document explicitly states that "clinical studies to support safety and effectiveness of the software" were not required.
- Standalone (i.e., algorithm-only, without human-in-the-loop) performance: The software is intended to "assist a trained user," implying a human-in-the-loop application. No standalone algorithm performance is reported.
- Type of ground truth used (expert consensus, pathology, outcomes data, etc.): For internal validation, numerical quantities were verified using "CT semi-synthetic phantoms and patient based CT datasets." For external validation, "experienced users evaluated" the software. This suggests a form of expert review or comparison to phantom-derived ground truth, but not pathology or outcomes data.
- Sample size for the training set: Not applicable and not mentioned; the documentation covers verification and validation of a software enhancement that leverages already-cleared Vitrea platform functionality, not development of a de novo AI algorithm requiring a separate training set.
- How the ground truth for the training set was established: Not applicable and not mentioned.
In conclusion, the document details the rigorous design control measures, software verification, and validation activities (including phantom testing and usability testing by experienced users) undertaken to ensure the software met its requirements and user needs. However, it does not present a formal study with quantitative acceptance criteria and corresponding performance data as might be found in a clinical trial or a performance validation study for a device with specific objective performance claims. The basis for clearance appears to be substantial equivalence to predicate devices and adherence to quality system regulations, rather than specific performance against numerical criteria.
(186 days)
Vitrea CT Transcatheter Aortic Valve Replacement (TAVR) Planning is a non-invasive postprocessing application designed to assist medical professionals with the assessment of the aortic valve and in pre-operational planning and post-operative evaluation of transcatheter aortic valve replacement procedures.
Vitrea CT Transcatheter Aortic Valve Replacement (TAVR) Planning is a non-invasive postprocessing application designed to assist medical professionals with the assessment of the aortic root and in pre-operational planning and post-operative evaluation of transcatheter aortic valve replacement procedures.
It allows cardiologists, radiologists and clinical specialists to select patient CT studies from various data sources, view them, and process the images with the help of a comprehensive set of tools. It provides assessment and measurement of different structures of the heart and vessels relevant to approach planning. It provides simple techniques to assess the feasibility of a trans-apical, iliac, or subclavian approach to heart structures for replacement or repair procedures.
The provided text describes Vitrea CT Transcatheter Aortic Valve Replacement (TAVR) Planning, a non-invasive post-processing application. The 510(k) summary focuses on demonstrating substantial equivalence to a predicate device (Pie Medical Imaging B.V., 3mensio Workstation (K120367)) rather than establishing specific acceptance criteria and proving performance against them in a detailed study.
Therefore, much of the requested information (acceptance criteria, reported performance values, specific sample sizes for test and training sets, expert qualifications, adjudication methods, MRMC study details, ground truth types) is not explicitly stated in the provided document in the format typically associated with a robust performance study.
However, based on the non-clinical testing section and external validation summary, we can infer some aspects and report what is available.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a formal table of quantitative acceptance criteria with corresponding reported device performance values. Instead, it describes general validation goals and outcomes.
Feature/Functionality | Acceptance Criteria (Inferred from Validation Goals) | Reported Device Performance |
---|---|---|
Accuracy of spatial measurements (distance, angle) | Spatial accuracy of image rendering, distance, angular measurement, and navigational tools aligned with known values from imaging phantoms. | Verified using imaging phantoms with known positions, distances, and angles. Specific numerical accuracy values are not provided, but the verification confirmed alignment with expected results for spatial accuracy, distance measurement, angular measurement, navigational tools, and orientation markers (see the verification sketch after this table). |
Accuracy of automated segmentation for 3D anatomical representation | Automated segmentation should enable accurate 3D representation of relevant anatomy. | During external validation with 70 TAVR cases, "Each user felt that the automated segmentation within the application enabled an accurate 3D representation of the relevant anatomy." |
Accuracy of automated oblique for annulus valve plane | Automated oblique should provide an accurate starting point for determining the annulus valve plane. | During external validation with 70 TAVR cases, "The users also felt the automated oblique provided an accurate starting point for determining the annulus valve plane." |
User ability to review, verify, adjust, measure, and report | Users should be able to effectively use the software's tools (review 2D/3D images, verify/correct segmentation, create measurements, generate reports). | During external validation with 70 TAVR cases, "All of the users were able to review the 2D and 3D images, verify and correct the results of segmentations and initialization, create measurements, and generate reports." |
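
The phantom verification summarized in the first table row is, in essence, a comparison of measured distances and angles against known geometry within some tolerance. A minimal sketch; the helper names and the absolute tolerance are assumptions, since the submission reports no numerical criteria:

```python
import numpy as np

def distance_mm(p, q) -> float:
    """Euclidean distance between two phantom marker positions (mm)."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def angle_deg(u, v) -> float:
    """Angle between two direction vectors, in degrees."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def passes(measured: float, known: float, tol: float) -> bool:
    """Pass/fail against the phantom's known value; `tol` is an assumed
    absolute tolerance."""
    return abs(measured - known) <= tol
```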
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 70 TAVR cases were used for "External Validation."
- Data Provenance: Not explicitly stated (e.g., country of origin, specific institutions). It is referred to as "previously acquired medical images," suggesting a retrospective nature, but this is not explicitly confirmed. The type of study is "non-clinical tests" and "evaluation of previously acquired medical images."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not specified. The document states that "external cardiologists and radiologists evaluated 70 TAVR cases" during external validation. It does not clarify how many individual experts were involved.
- Qualifications of Experts: "External cardiologists and radiologists." No specific details on their years of experience or expertise are provided beyond their general medical specializations.
- Ground Truth Establishment for Test Set: The exact method for establishing ground truth for the 70 TAVR cases is not detailed. The "external cardiologists and radiologists" evaluated the application's performance, but how their evaluations were aggregated or compared against a definitive "ground truth" (e.g., surgical outcome, independent expert consensus) is not described. The statements suggest they evaluated the device's output against their clinical judgment (e.g., "felt that the automated segmentation...enabled an accurate 3D representation").
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (like 2+1 or 3+1 consensus) for the external validation or for establishing ground truth for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- An MRMC comparative effectiveness study, comparing human readers with and without AI assistance, was not described. The external validation involved users (cardiologists and radiologists) evaluating the application, but it doesn't indicate a comparison against a baseline without the software or quantifies an improvement in human reader performance.
6. If a Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done
- The document describes "automated segmentation" and "automated oblique" as features of the software. The verification testing involved "applying risk management," "performance testing (Verification)," and "safety testing (Verification)," which implies testing of the algorithm's functions. Measurement accuracy tests using phantoms would also be a standalone assessment of the algorithm's precision.
- However, the overall validation described (internal and external) focuses on the user's interaction with and ability to verify/adjust the software's outputs, indicating a human-in-the-loop context. No specific "standalone performance study" with metrics like sensitivity, specificity, or accuracy for the algorithm in isolation (without user intervention/correction) is explicitly reported.
7. The Type of Ground Truth Used
- For Measurement Accuracy: Known positions, distances, and angles from imaging phantoms were used as the ground truth.
- For External Validation (70 TAVR cases): Not explicitly stated. The document implies a qualitative assessment by "external cardiologists and radiologists" who "felt" the segmentation and oblique were accurate and "were able to" perform their tasks. This suggests a form of expert subjective evaluation rather than a direct comparison against a definitive objective ground truth like pathology, surgical outcomes, or a pre-established consensus gold standard.
8. The Sample Size for the Training Set
- The document does not provide the sample size used for the training set for the Vitrea CT TAVR Planning application. The focus is on validation against the predicate device and usability.
9. How the Ground Truth for the Training Set Was Established
- Since the training set size is not mentioned, the method for establishing its ground truth is also not described in the provided document. The 510(k) summary focuses on validating the device for regulatory clearance, primarily through demonstrating substantial equivalence and usability, rather than detailing the underlying machine learning model development.