510(k) Data Aggregation (221 days)
The SQuEEZ Software is an image analysis software application for cardiac Computed Tomography (CT) studies to assist cardiologists and radiologists in assessing cardiac function when producing a cardiac evaluation. The software calculates a Stretch Quantifier for Endocardial Engraved Zones (SQuEEZ) value to highlight and color-code the wall motions of the heart. Tools are provided to display regional motion properties of the heart.
Cardiowise SQuEEZ Software is a post-processing image analysis software application for cardiac Computed Tomography (CT) studies to assist cardiologists and radiologists in assessing left ventricle function when producing a cardiac evaluation. SQuEEZ Software enables a quantitative evaluation of regional myocardial strain using data from an ultra-high-resolution cardiac CT scan with an acquisition time as short as a single heartbeat. This is done by calculating a Stretch Quantifier for Endocardial Engraved Zones (SQuEEZ) value to highlight and color-code the wall motions of the heart. The resulting analysis is graphically presented in three-dimensional color contour maps or regional graphical representations that indicate the level of strain the left ventricle experiences during the cardiac cycle.
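The document does not detail how the SQuEEZ value is computed; published descriptions of the SQuEEZ method describe it as the square root of the ratio of an endocardial surface patch's area at a given cardiac phase to its area at end-diastole. As a minimal, illustrative sketch only (the device's actual implementation is not disclosed here, and the patch areas below are hypothetical):

```python
import numpy as np

def squeez_value(patch_area_t, patch_area_ed):
    """Illustrative SQuEEZ-style regional function metric.

    Per published descriptions of the method, the value for an endocardial
    surface patch is the square root of the ratio of the patch's area at
    time t to its area at end-diastole (ED). Values below 1 indicate
    contraction; a value near 1 indicates little or no wall motion.
    """
    return np.sqrt(np.asarray(patch_area_t) / np.asarray(patch_area_ed))

# Hypothetical patch areas (mm^2) at end-diastole and end-systole
ed_areas = np.array([10.0, 10.0, 10.0])
es_areas = np.array([6.4, 9.0, 10.0])  # contracting, hypokinetic, akinetic

print(squeez_value(es_areas, ed_areas))  # -> [0.8, ~0.949, 1.0]
```

Values like these could then be mapped onto a three-dimensional endocardial surface as a color contour, which matches the display described above.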
The provided text does not contain the detailed acceptance criteria and study results in the format requested. While it mentions performance data and studies, it lacks specific numerical acceptance criteria, the reported device performance against those criteria, and explicit details about the ground truth establishment, expert qualifications, and sample sizes for the test and training sets.
Here's a breakdown of what can be extracted and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
- Missing: The document does not provide a table with specific, measurable acceptance criteria (e.g., minimum accuracy, precision, or correlation thresholds) and the corresponding reported performance values from the device during a validation study.
2. Sample size used for the test set and the data provenance:
- Mentioned/Implied:
- "digital phantom study": No sample size given. Likely synthetically generated data.
- "Comparison to tagged MRI (magnetic resonance imaging) in acutely infarcted dogs": This is an animal study, not human clinical data. No specific sample size (number of dogs) is mentioned.
- "peer-reviewed publication describing a study of SQuEEZ values in human subjects with normal left ventricle function": This is a human study, but it's not explicitly stated as the test set for the regulatory submission, nor is its sample size provided in this document. The provenance (country of origin, retrospective/prospective) is also not stated for any of these.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Missing: This information is not provided for any of the mentioned studies. There is no mention of experts establishing ground truth for the digital phantom study or the animal study. For the human study, no details about expert involvement or qualifications are given.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Missing: No information on adjudication methods is present.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Missing: The document does not describe an MRMC study comparing human readers with and without AI assistance, nor does it provide an effect size for such a study. The device is described as "image analysis...to assist cardiologists," which implies aiding human readers, but no comparative effectiveness study with human readers is detailed.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Implied: The "digital phantom study" and the "comparison to tagged MRI in acutely infarcted dogs" likely represent standalone performance, as they evaluate the algorithm's output against a reference without direct human intervention in the interpretation process of the SQuEEZ values. However, specific performance metrics are not given.
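A standalone validation of this kind typically reduces to comparing the algorithm's regional values against a reference measurement (here, strain derived from tagged MRI) with agreement statistics. The document reports no such metrics, so the sketch below is purely illustrative, using synthetic data in place of any real study results:

```python
import numpy as np

# Illustrative standalone-performance analysis: correlate algorithm output
# with a reference measurement (e.g., regional strain from tagged MRI).
# No actual study metrics are reported in this document; the data below
# are synthetic and exist only to show the shape of such an analysis.
rng = np.random.default_rng(0)
reference = rng.uniform(0.6, 1.0, size=50)            # reference regional values
algorithm = reference + rng.normal(0, 0.02, size=50)  # simulated algorithm output

r = np.corrcoef(reference, algorithm)[0, 1]  # Pearson correlation
bias = np.mean(algorithm - reference)        # Bland-Altman style mean bias
print(f"r = {r:.3f}, bias = {bias:.4f}")
```

An acceptance criterion for such a study would typically be a minimum correlation and a bias/limits-of-agreement bound; it is exactly these thresholds and results that the document omits.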
7. The type of ground truth used:
- Mentioned/Implied:
- Digital phantom study: Ground truth would likely be the known, simulated properties of the phantom.
- Comparison to tagged MRI in acutely infarcted dogs: Ground truth is "tagged MRI," which is a validated imaging technique for assessing myocardial motion.
- Peer-reviewed publication in human subjects: The document states this publication describes "normal left ventricle function," implying a ground truth of "normal function" based on clinical assessment, but the specific method of establishing this normal function (e.g., expert consensus, other imaging, pathology, outcomes data) is not detailed.
8. The sample size for the training set:
- Missing: The document solely focuses on performance testing and does not mention the training set size or how the model was developed/trained.
9. How the ground truth for the training set was established:
- Missing: No information provided regarding the training set or its ground truth.
In summary, the document states what kinds of studies were done (digital phantom, animal, human publication) to support substantial equivalence but does not provide the quantitative details of the acceptance criteria and performance results that would typically be found in such a study description.