GECHO · Narnar, LLC (510(k) decision time: 179 days)
GECHO is a software package intended to visually assess contrast-enhanced echocardiography for left ventricular function and myocardial blood flow by displaying enhanced images of the heart and Time-To-Replenish images. GECHO is intended for use by a cardiologist.
GECHO is for use on images of adult patients who underwent contrast-enhanced echocardiography.
GECHO is an image review platform and analysis software that assists cardiologists in the interpretation of left ventricular function and myocardial blood replenishment from two-dimensional, contrast-enhanced echocardiograms.
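For readers unfamiliar with parametric contrast-echo imaging, the following is a minimal sketch of how a pixel-wise Time-To-Replenish map could, in principle, be derived from a post-destruction image sequence. The 510(k) summary does not describe GECHO's actual algorithm; the mono-exponential-style threshold crossing, the 90%-of-plateau definition of TTR, and all names below are illustrative assumptions, not the device's method.

```python
import numpy as np

def ttr_map(frames: np.ndarray, times: np.ndarray, fraction: float = 0.9) -> np.ndarray:
    """Illustrative pixel-wise Time-To-Replenish estimate (not GECHO's method).

    frames : (T, H, W) contrast intensities acquired after a bubble-destruction pulse
    times  : (T,) acquisition times in seconds, starting at the destruction pulse
    Returns an (H, W) map of the first time each pixel reaches `fraction` of its
    plateau (late-frame) intensity; NaN where that level is never reached.
    """
    plateau = frames[-3:].mean(axis=0)          # assume the last frames approximate steady state
    target = fraction * plateau                  # per-pixel replenishment threshold
    reached = frames >= target[None, :, :]       # (T, H, W) boolean: threshold crossed yet?
    first_idx = reached.argmax(axis=0)           # index of the first frame that crosses it
    ttr = times[first_idx].astype(float)
    ttr[~reached.any(axis=0)] = np.nan           # pixels that never replenish to the threshold
    return ttr
```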
The FDA 510(k) Clearance Letter for GECHO provides information about the device's intended use and design, but it does not detail specific acceptance criteria or the study results proving the device meets those criteria. The letter primarily focuses on establishing substantial equivalence to a predicate device (QLAB Advanced Quantification Software, K181264) and outlines general regulatory compliance.
However, based on the information provided in the "510(k) Summary," particularly the "Performance Data" section, we can infer some aspects of the study and the implicit acceptance criteria. The summary explicitly states:
- "The TTR (Time-to-Replenish) algorithm demonstrated strong performance with these key metrics:
- RMSE of 0.98 seconds, showing high accuracy
- Minimal bias of 0.0025 seconds
- Can detect TTR values down to 0.5 seconds
- Performs well even with noise (NMSE up to 0.05)"
- "Additionally, an expert survey was conducted to ensure the TTR image correctly represents the information present in raw images and is intuitive and useful, in combination with raw images, for the interpretation of myocardial blood flow."
Given these statements, the acceptance criteria would likely be defined by target values for RMSE, bias, minimum detectable TTR, and performance under noise, along with positive feedback from an expert survey regarding the clinical utility and accurate representation of information.
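To make the reported metrics concrete, here is a minimal sketch of how RMSE and bias could be computed for TTR estimates against known values, and of one way noise at a specified NMSE level might be injected into test signals. The summary does not define its normalization for NMSE or state how these quantities were actually computed, so the definitions and names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse_and_bias(ttr_true: np.ndarray, ttr_est: np.ndarray) -> tuple[float, float]:
    """RMSE and mean bias of TTR estimates against known values (illustrative)."""
    err = ttr_est - ttr_true
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(err))  # reported: 0.98 s, 0.0025 s

def add_noise_at_nmse(signal: np.ndarray, nmse: float) -> np.ndarray:
    """Add Gaussian noise whose power is `nmse` times the mean signal power.

    Assumes "NMSE up to 0.05" describes the noise level applied to the synthetic
    test signals; the summary does not specify the normalization it used.
    """
    noise_power = nmse * np.mean(signal ** 2)
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
```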
Let's organize the available and inferred information:
Acceptance Criteria and Device Performance
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Inferred from Reported Performance) | Reported Device Performance |
|---|---|
| Root Mean Square Error (RMSE) of the TTR algorithm | RMSE of 0.98 seconds |
| Bias of the TTR algorithm | Bias of 0.0025 seconds |
| Minimum detectable TTR value | Detects TTR values down to 0.5 seconds |
| Performance under noise (Normalized Mean Square Error, NMSE) | Performs well even with noise (NMSE up to 0.05) |
| Expert consensus on TTR image representation and utility | Expert survey "ensured the TTR image correctly represents the information present in raw images and is intuitive and useful" |
Study Details
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: The document mentions "synthetic data across a wide range of representative clinical data parameters" was used for technical performance assessment of the TTR algorithm. However, the specific sample size (number of synthetic cases or data points) for this test set is not explicitly stated.
- Data Provenance: The primary data for the technical performance assessment was synthetic. The document does not specify the country of origin of the parameters used to generate it, or whether any real (even de-identified) patient data informed that generation. Because the assessment used synthetic rather than patient data, the terms "retrospective" and "prospective" do not apply to its collection.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: The document states "an expert survey was conducted." The exact number of experts involved is not specified.
- Qualifications of Experts: The Indications for Use identifies the intended user as a cardiologist, and the survey participants are described only as "experts." It is implied that they are cardiologists, but their specific qualifications (e.g., years of experience, sub-specialty) are not detailed.
4. Adjudication method for the test set
- For the quantitative metrics (RMSE, bias, etc.), the ground truth was defined by the construction of the synthetic data, which presumably embedded known TTR values; no human adjudication was therefore needed for these comparisons (see the sketch after this list).
- For the "expert survey" regarding image representation and utility, the adjudication method (e.g., consensus, majority vote) is not specified. It simply states the survey "ensured" these aspects.
5. If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study was explicitly NOT done. The document states: "No clinical performance data was necessary to claim substantial equivalence." The "expert survey" was qualitative regarding image utility, not a quantitative MRMC study measuring reader performance improvement with assistance.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, a standalone algorithm performance assessment was conducted. The "Technical Performance Assessment" section details the algorithm's performance on synthetic data (RMSE, bias, minimum detectable TTR, performance with noise), which is the algorithm's standalone performance.
7. The type of ground truth used
- For the quantitative assessment of the TTR algorithm (RMSE, bias, etc.), the ground truth was based on known values embedded within the synthetic data.
- For the qualitative assessment of image representation and utility, the ground truth was established by expert opinion/consensus via the expert survey.
8. The sample size for the training set
- The document does not provide any information regarding the sample size of the training set for the GECHO algorithm.
9. How the ground truth for the training set was established
- The document does not provide any information regarding how the ground truth for the training set was established, as details about the training set itself are absent.
In summary: the 510(k) submission rests on substantial equivalence to a predicate device in technical characteristics and functionality, on software verification and validation, and on a technical performance assessment of the key TTR algorithm using synthetic data. It explicitly states that no clinical performance data was deemed necessary for clearance, so there was no demonstration of human-in-the-loop performance improvement or of comparative effectiveness against human readers.