SYSTEM, TOMOGRAPHY, COMPUTED EMISSION
The Emory Cardiac Toolbox™ (CEqual®, EGS™) 3.1 software program should be used for the quantification of myocardial perfusion (CEqual®); for the display of wall motion and quantification of left-ventricular function parameters from gated Tc-99m SPECT and PET myocardial perfusion studies (EGS™); for the 3D alignment of coronary artery models from CT coronary angiography onto the left-ventricular 3D epicardial surface; for the assessment of cardiac mechanical dyssynchrony using phase analysis; and for generation of the short-axis, vertical, and horizontal long-axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM).
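For context on the iterative reconstruction named above, here is a minimal sketch of the MLEM update rule; OSEM accelerates convergence by cycling the same update through ordered subsets of the projection bins. This is a generic illustration with a toy system matrix, not the Emory Cardiac Toolbox's actual implementation.

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """Maximum-Likelihood Expectation-Maximization reconstruction.

    A : (n_bins, n_voxels) system matrix (forward projector)
    y : (n_bins,) measured projection counts
    Returns the reconstructed activity estimate x (n_voxels,).
    """
    x = np.ones(A.shape[1])                # flat initial estimate
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image, A^T 1
    for _ in range(n_iters):
        ratio = y / (A @ x + eps)          # measured / modeled counts per bin
        x *= (A.T @ ratio) / (sens + eps)  # multiplicative MLEM update
    return x

# Toy demo: 4-voxel "image", 6 detector bins, noiseless data.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
x_true = np.array([1.0, 4.0, 2.0, 0.5])
print(mlem(A, A @ x_true, n_iters=500))  # approaches x_true
```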
The Emory Cardiac Toolbox™ 3.1 is used to display gated wall motion and to quantify parameters of left-ventricular perfusion and function from gated SPECT and PET myocardial perfusion studies. These parameters are: perfusion, ejection fraction, end-diastolic volume, end-systolic volume, myocardial mass, transient ischemic dilation (TID), and cardiac mechanical dyssynchrony. In addition, the program offers the capability of providing the following diagnostic information: computer-assisted visual scoring, prognostic information, expert-system image interpretation, and patient-specific 3D coronary overlay. The program can also be used for the 3D alignment of coronary artery models from CT coronary angiography onto the left-ventricular 3D epicardial surface and for generation of the short-axis, vertical, and horizontal long-axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM). The Emory Cardiac Toolbox can be used with any of the following myocardial SPECT protocols: same-day and two-day Sestamibi, dual-isotope (Tc-99m/Tl-201), Tetrofosmin, Thallium, Rubidium-82, N-13 ammonia, and FDG protocols, as well as user-defined normal databases. The program was developed to run in the IDL environment and can be executed on any nuclear medicine computer system that supports IDL and the Aladdin (General Electric) software development environment. The program processes studies automatically; however, user verification of output is required, and manual processing capability is provided.
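The functional parameters listed above have standard textbook definitions. As a minimal sketch (standard formulas, not necessarily the Toolbox's exact implementation), ejection fraction and the TID ratio are computed from LV volumes as follows:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """LV ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def tid_ratio(stress_lv_volume_ml: float, rest_lv_volume_ml: float) -> float:
    """Transient ischemic dilation: ratio of stress to rest LV volume."""
    return stress_lv_volume_ml / rest_lv_volume_ml

print(ejection_fraction(edv_ml=120.0, esv_ml=50.0))                    # ~58.3 %
print(tid_ratio(stress_lv_volume_ml=130.0, rest_lv_volume_ml=110.0))  # ~1.18
```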
The information provided in the 510(k) summary (K071503) focuses on demonstrating substantial equivalence to predicate devices and detailing the validation studies conducted for different versions and features of the Emory Cardiac Toolbox. However, it does not explicitly state specific acceptance criteria in a quantitative manner or present a table comparing reported device performance against such criteria for the entire Emory Cardiac Toolbox 3.1.
Instead, the document describes the types of validation performed and the sample sizes used for various features. It asserts the software's safety and effectiveness based on these studies and its similarity to predicate devices.
Based on the provided text, here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
As noted, the document states that the "expected accuracy of the initial program can be found in the multicenter trial results listed in the article by Vansant et al" (Emory Cardiac Toolbox™ (CEqual®, EGS™) Version 2.0, Ref. 510(k) #: K992450, and Version 2.1, Ref. 510(k) #: K014033). However, specific quantitative acceptance criteria for the Emory Cardiac Toolbox 3.1 (or even for versions 2.0/2.1 in detail) are not defined within the provided text, nor is there a table directly presenting performance against such criteria. The document discusses "accuracy" in general terms for previous versions and then describes "development and prospective validation" for the new features in 3.1.
Therefore, a table of explicit acceptance criteria cannot be fully constructed, as they are not detailed in this document. The document refers to external 510(k)s (K992450 and K014033) for baseline accuracy; for the version 3.1 features, it describes the validation studies conducted.
Summary of Device Performance/Validation Studies (as described within the document):
| Feature/Version | Study Type/Description | Sample Size | Outcome (descriptive; specific metrics not provided in this document) |
|---|---|---|---|
| Emory Cardiac Toolbox™ 2.0 | Phantom and computer simulations | N/A | Established effectiveness |
| Emory Cardiac Toolbox™ 2.0 | In-house trial (LV functional parameters) | 217 patients | Established effectiveness |
| Emory Cardiac Toolbox™ 2.0 | Multicenter trial (general effectiveness) | 80 patients | Established effectiveness ("expected accuracy" referred to other 510(k)s) |
| Emory Cardiac Toolbox™ 2.0 | Computer assisted visual scoring | 20 patients | Successfully evaluated |
| Emory Cardiac Toolbox™ 2.0 | Prognosis algorithms | 504 patients | Successfully evaluated |
| Emory Cardiac Toolbox™ 2.0 | Expert system algorithms | 461 patients | Successfully evaluated |
| Emory Cardiac Toolbox™ 2.0 | Coronary fusion algorithms | 9 patients | Successfully evaluated |
| Emory Cardiac Toolbox™ 2.1 | Rb-82 normal limits development & validation | 176 patients | Successfully completed |
| Emory Cardiac Toolbox™ 2.1 | PET tools for perfusion-metabolism match-mismatch | 90 patients | Successfully completed |
| Emory Cardiac Toolbox™ 2.6 | N-13 ammonia normal limits development & validation | 144 patients | Successfully completed |
| Emory Cardiac Toolbox™ 2.6 | 3D CT coronary artery alignment (phantom & patient) | 8 patients | Successfully completed |
| Emory Cardiac Toolbox™ 3.1 | SPECT reconstruction validation | 10 patients (prospective) | Successfully validated |
| Emory Cardiac Toolbox™ 3.1 | Phase analysis development (cardiac mechanical dyssynchrony) | 90 normal patients | Successfully developed |
| Emory Cardiac Toolbox™ 3.1 | Phase analysis validation (cardiac mechanical dyssynchrony) | 75 additional patients (prospective) | Successfully validated |
2. Sample Size Used for the Test Set and the Data Provenance
The document describes several test sets for different features and versions of the software.
- Emory Cardiac Toolbox 2.0 (initial program):
  - "In-house trial validations which included an evaluation of left ventricular functional parameter calculations": 217 patients. (Provenance not specified, but "in-house" suggests internal data, likely US-based, retrospective or mixed.)
  - "Multicenter trial validation": 80 patients. (Provenance not specified; a multicenter trial typically implies multiple, often US, centers and could be retrospective or prospective.)
  - Computer assisted visual scoring: 20 patients. (Provenance not specified.)
  - Prognosis: 504 patients. (Provenance not specified.)
  - Expert system: 461 patients. (Provenance not specified.)
  - Coronary fusion: 9 patients. (Provenance not specified.)
- Emory Cardiac Toolbox 2.1:
  - Rb-82 normal limits validation: 176 patients. (Provenance not specified.)
  - PET tools for match-mismatch validation: 90 patients. (Provenance not specified.)
- Emory Cardiac Toolbox 2.6:
  - N-13 ammonia normal limits validation: 144 patients. (Provenance not specified.)
  - 3D CT coronary artery alignment validation (patient studies): 8 patients. (Provenance not specified.)
- Emory Cardiac Toolbox 3.1 (new features):
  - SPECT reconstruction validation: 10 patients, described as a prospective validation. (Provenance not specified.)
  - Phase analysis validation: 75 patients, described as a prospective validation. (Provenance not specified.)
Overall Data Provenance (General): The document does not explicitly state the country of origin of the data for most studies, but the company (Syntermed, Inc.) and the submission location (Atlanta, GA) suggest a US context for the clinical trials. Most studies involve patient data, retrospective or prospective; the validation studies for the version 3.1 features are explicitly described as prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
The document does not specify the number of experts or their qualifications used to establish ground truth for any of the studies mentioned. It simply refers to "validation" or "evaluations" that likely rely on expert review or established methods.
4. Adjudication Method for the Test Set
The document does not specify any adjudication methods (e.g., 2+1, 3+1) for establishing ground truth in any of the described test sets.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. without AI Assistance
The document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI assistance vs. without it. The device is referred to as a "display and processing program to aid in the diagnostic interpretation," implying it assists physicians. However, it does not quantify this assistance in a comparative MRMC study. The "expert system interpretation" mentioned for version 2.0 suggests an AI component, but its impact on human reader performance is not detailed.
6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done
The device is explicitly described as serving "merely as a display and processing program to aid in the diagnostic interpretation of a patient's study. It was not meant to replace or eliminate the standard visual analysis of the gated SPECT & PET study." Furthermore, it states that "user verification of output is required." This strongly indicates that the device is not designed or evaluated as a standalone AI system without human-in-the-loop performance. Its role is assistive, and the physician retains final responsibility. Therefore, a standalone performance study as a replacement for human interpretation was likely not performed, nor would it be appropriate for the intended use.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The document refers to various types of "validation" but does not explicitly detail the specific type of ground truth used for each. However, based on the context of cardiac imaging and functional parameters, the ground truth likely involved a combination of:
- Clinical outcomes/patient data: For prognostic information and normal limits.
- Expert interpretation/consensus: For visual scoring, expert system interpretation, and possibly for validating quantitative measurements against a clinical standard.
- Pathology: Not directly mentioned, but could be a component for definitive diagnoses if included in certain patient cohorts.
- Phantom studies: Explicitly mentioned for version 2.0 (effectiveness) and version 2.6 (3D CT coronary artery alignment), which would use known, controlled configurations as ground truth.
- Animal studies: Explicitly mentioned for version 3.1 (development for SPECT reconstruction and phase analysis), where ground truth could be established through invasive measurements or controlled conditions.
8. The Sample Size for the Training Set
The document mentions "development" for some features but doesn't explicitly refer to "training sets" in the modern machine learning sense with specific sample sizes. It mentions:
- For Emory Cardiac Toolbox™ 2.1: "development and validation of Rb-82 normal limits (n=176)". This 176-patient cohort likely served as a development/training set for establishing these normal ranges.
- For Emory Cardiac Toolbox™ 2.6: "development and validation of N-13 ammonia normal limits (n=144)". This 144-patient cohort likely served as a development/training set.
- For Emory Cardiac Toolbox™ 3.1: "development... in 90 normal patients" for phase analysis. This 90-patient cohort effectively served as a training/development set for the phase analysis algorithm.
Other studies are described as "evaluations" or "validations" rather than "development" directly tied to a specific cohort, suggesting pre-existing algorithms were tested on these patient populations.
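Although the submission gives no algorithmic detail, normal-limit databases of this kind are conventionally built as the per-sample mean and standard deviation of count-normalized polar maps from the normal cohort, with a patient flagged where counts fall a set number of SDs below the normal mean. A minimal sketch under that assumption (the 2.5 SD threshold is a common convention, not taken from this document):

```python
import numpy as np

def build_normal_limits(normal_polar_maps):
    """Per-sample mean and SD over a cohort of count-normalized polar maps.

    normal_polar_maps : (n_patients, n_samples) array.
    """
    maps = np.asarray(normal_polar_maps, dtype=float)
    return maps.mean(axis=0), maps.std(axis=0, ddof=1)

def flag_hypoperfused(patient_map, mean_map, sd_map, z=2.5):
    """Flag polar-map samples more than z SDs below the normal mean
    (z = 2.5 is an assumed convention, not stated in the 510(k))."""
    return np.asarray(patient_map, dtype=float) < (mean_map - z * sd_map)
```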
9. How the Ground Truth for the Training Set Was Established
Similar to the ground truth for the test set, the document is not specific on this point. For the "development" cohorts mentioned:
- Normal limits (Rb-82, N-13 ammonia): Ground truth for "normal" would typically be established based on rigorous clinical criteria, including detailed patient history, lack of cardiac symptoms, clean EKG, and other physiological measurements, often confirmed by expert clinical review.
- Phase analysis (90 normal patients): Ground truth for "normal" in phase analysis (assessing cardiac mechanical dyssynchrony) would involve established clinical criteria defining normal cardiac rhythm and function, likely based on expert cardiological assessment, EKG, echocardiography, and other standard diagnostic tests confirming the absence of dyssynchrony (a sketch of the underlying phase-analysis computation follows at the end of this section).
- Phantom and animal studies: As mentioned for other versions, these would use known physical properties or invasively measured physiological parameters to establish ground truth.
In general, it relies on conventional medical diagnostic methods and expert clinical assessment as the reference standard.
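For reference, phase analysis of this kind is typically computed by fitting the first Fourier harmonic to each region's count variation over the gated cardiac cycle and summarizing the resulting phase distribution. A minimal sketch of that computation (a generic first-harmonic approach; the Toolbox's exact method is not described in this document):

```python
import numpy as np

def phase_analysis(count_curves):
    """First-harmonic phase analysis of regional gated-SPECT count curves.

    count_curves : (n_regions, n_frames) counts over one cardiac cycle.
    Returns per-region contraction phase in degrees (later contraction maps
    to larger phase) plus two common dyssynchrony indices: phase range (a
    crude stand-in for histogram bandwidth) and phase standard deviation.
    """
    spectrum = np.fft.rfft(np.asarray(count_curves, dtype=float), axis=1)
    phase = np.degrees(-np.angle(spectrum[:, 1])) % 360.0  # first harmonic
    return phase, phase.max() - phase.min(), phase.std(ddof=1)

# Toy demo: two regions contracting in sync, one delayed by 90 degrees.
t = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
curves = np.stack([np.cos(t), np.cos(t), np.cos(t - np.pi / 2)]) + 2.0
phase, bandwidth, sd = phase_analysis(curves)
print(phase, bandwidth, sd)  # phases ~[0, 0, 90]
```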