510(k) Data Aggregation
(74 days)
The Emory Cardiac Toolbox™ 3.2 software program should be used for the quantification of myocardial perfusion, for the display of wall motion and quantification of left-ventricular function parameters from gated Tc-99m SPECT & PET myocardial perfusion studies (EGS™), for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface, for the assessment of cardiac mechanic dyssynchrony using phase analysis, for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM), and for the quantitative analysis and display of SPECT AdreView™ (¹²³I-mIBG) data sets used for evaluation of patients with congestive heart failure.
The product is intended for use by trained nuclear technicians and nuclear medicine or nuclear cardiology physicians. The clinician remains ultimately responsible for the final interpretation and diagnosis based on standard practices and visual interpretation of all SPECT and PET data.
The Emory Cardiac Toolbox™ 3.2 is used to display gated wall motion and for quantifying parameters of left-ventricular perfusion and function from gated SPECT & PET myocardial perfusion studies. These parameters are: perfusion, ejection fraction, end-diastolic volume, end-systolic volume, myocardial mass, transient ischemic dilatation (TID), and cardiac mechanic dyssynchrony. In addition, the program offers the capability of providing the following diagnostic information: computer assisted visual scoring, prognostic information, expert system image interpretation, and patient specific 3D coronary overlay. The program can also be used for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface and for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM). The Emory Cardiac Toolbox can be used with any of the following Myocardial SPECT Protocols: Same Day and Two Day Sestamibi, Dual-Isotope (Tc-99m/Tl-201), Tetrofosmin, and Thallium, Rubidium-82, N-13-ammonia, FDG protocols, and user defined normal databases. The program can also be used for the quantitative analysis and display of SPECT AdreView™ (¹²³I-mIBG) data sets. This program was developed to run in the IDL operating environment and can be executed on any nuclear medicine computer system that supports IDL and the Aladdin (General Electric) software development environment. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
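Two of the function parameters listed above follow standard textbook definitions. As a minimal illustrative sketch (the function names are mine, not the product's, and this is not the Toolbox's actual implementation), ejection fraction and the TID ratio are conventionally computed as:

```python
# Illustrative sketch of two standard LV function parameters named above.
# Conventional definitions only; not the Emory Cardiac Toolbox's code.

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """LVEF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def tid_ratio(stress_lv_volume_ml: float, rest_lv_volume_ml: float) -> float:
    """Transient ischemic dilatation: ratio of stress to rest LV volume."""
    return stress_lv_volume_ml / rest_lv_volume_ml

print(round(ejection_fraction(120.0, 50.0), 1))  # 58.3
print(round(tid_ratio(130.0, 110.0), 2))         # 1.18
```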
Here's an analysis of the provided text to extract the acceptance criteria and study details:
1. Acceptance Criteria and Reported Device Performance
The document does not explicitly state acceptance criteria in a quantitative, pass/fail format for the new features of Emory Cardiac Toolbox™ 3.2. Instead, it relies on substantial equivalence to previous versions and validates the new AdreView™ analysis through its ability to differentiate abnormal and normal uptake.
| Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Visual scoring, prognosis, expert system, coronary fusion algorithms | Successfully evaluated in their respective patient populations. | Successfully evaluated in 20, 504, 461, and 9 patients, respectively. |
| Rb-82 normal limits | Development and validation successful. | Developed and validated in 176 patients. |
| PET tools for perfusion-metabolism match-mismatch | Successful evaluation. | Successfully completed in 90 patients. |
| N-13 ammonia normal limits | Development and validation successful. | Developed and validated in 144 patients. |
| Alignment method for 3D CT coronary arteries onto the LV 3D epicardial surface | Successful validation using phantom and patient studies. | Validated using phantom and patient studies (n=8). |
| SPECT reconstruction | Prospective validation. | Validated in 10 patients. |
| Phase analysis (SyncTool™) | Development and prospective validation. | Developed in 90 normal patients and prospectively validated in 75 additional patients. |
| SPECT AdreView™ (¹²³I-mIBG) data analysis (new in v3.2) | Capability to differentiate subjects with abnormal and normal AdreView™ uptake using the H/M ratio index. | Demonstrated the capability to differentiate subjects with abnormal and normal AdreView™ uptake using the H/M ratio index in a validation group of 1,016 patients. |
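The H/M (heart-to-mediastinum) ratio index referenced above is, in general terms, the mean count in a heart region of interest divided by the mean count in a mediastinal region. A hedged sketch of that computation (the ROI handling and the cutoff value are illustrative assumptions, not values stated in this submission):

```python
import numpy as np

# Generic H/M ratio sketch; ROI masks and the 1.6 cutoff are illustrative
# assumptions, not values taken from the 510(k) summary.

def hm_ratio(image: np.ndarray,
             heart_roi: np.ndarray,
             mediastinum_roi: np.ndarray) -> float:
    """Mean counts in the heart ROI divided by mean counts in the
    mediastinum ROI (boolean masks select the ROI pixels)."""
    return float(image[heart_roi].mean() / image[mediastinum_roi].mean())

def classify_uptake(hm: float, cutoff: float = 1.6) -> str:
    """Label uptake relative to an assumed (illustrative) cutoff."""
    return "normal" if hm >= cutoff else "abnormal"
```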
2. Sample Size and Data Provenance for the Test Set
- Sample Size for AdreView™ Validation: 1,016 patients
- Data Provenance for AdreView™ Validation: The document does not explicitly state the country of origin or whether cases were retrospective or prospective for the AdreView™ validation group. It only mentions "two groups of patients used in this study, a pilot group consisting of 67 patients... and a validation group consisting of 1,016 patients." The pilot group was "used to develop the heart volume regions of interest," implying a development set, while the 1,016 patients were specifically for validation.
- Other Sample Sizes Mentioned (for previous versions or other features):
- Initial program (v2.0) LV functional parameters: 217 patients (in-house), 80 patients (multicenter trial).
- Visual scoring: 20 patients
- Prognosis: 504 patients
- Expert system: 461 patients
- Coronary fusion algorithms: 9 patients
- Rb-82 normal limits: 176 patients
- PET tools (perfusion-metabolism): 90 patients
- N-13 ammonia normal limits: 144 patients
- 3D CT coronary artery alignment: 8 patients
- SPECT reconstruction: 10 patients (prospective)
- Phase analysis (SyncTool™): 90 normal patients (development), 75 additional patients (prospective validation).
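Phase analysis of the kind SyncTool™ performs is generally described in the SPECT dyssynchrony literature as fitting the first Fourier harmonic to each region's count-versus-gate curve and summarizing the spread of the resulting phases. A generic sketch under that assumption (not the product's actual code):

```python
import numpy as np

# Generic first-harmonic phase analysis; an illustration of the published
# technique, not SyncTool's implementation.

def contraction_phase_deg(counts_per_gate: np.ndarray) -> float:
    """Phase (degrees, 0-360) of the first Fourier harmonic of a regional
    count-versus-cardiac-gate curve."""
    h1 = np.fft.rfft(counts_per_gate)[1]  # first-harmonic coefficient
    return float(np.degrees(np.angle(h1)) % 360.0)

def dyssynchrony_indices(phases_deg: np.ndarray):
    """Two common summary indices: phase standard deviation and
    phase histogram bandwidth (max - min), both in degrees."""
    return float(phases_deg.std()), float(phases_deg.max() - phases_deg.min())
```

A more synchronous ventricle yields a tighter phase histogram, i.e. smaller standard deviation and bandwidth.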
3. Number of Experts and Qualifications for Ground Truth
The document does not explicitly state the number of experts used to establish ground truth for the AdreView™ validation. For previous versions and other features, it refers to "standard visual analysis," "physician interpretation," and "expert system interpretation" but does not detail the number or specific qualifications of these experts.
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth in the validation studies. The statement "The physician should integrate all of the patients' clinical and diagnostic information...prior to making his final interpretation" implies a physician-led decision process, but not a specific adjudication protocol involving multiple experts.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study demonstrating how much human readers improve with AI vs. without AI assistance is explicitly described for the new features in v3.2. The document focuses on the performance of the algorithm itself in differentiating normal/abnormal AdreView™ uptake, rather than the AI's assistance to human readers. The general statement about the program serving "merely as a display and processing program to aid in the diagnostic interpretation" suggests it's an assistive tool, but no study quantifies this assistance.
6. Standalone (Algorithm Only) Performance
Yes, a standalone performance evaluation was done for the new AdreView™ analysis. The study focused on the "capability to differentiate subjects with abnormal and normal AdreView™ uptake using the H/M ratio index" from the algorithm's output. This is a direct evaluation of the algorithm's performance on its own. Similarly, many of the performance evaluations for previous versions (e.g., calculation of LV functional parameters, Rb-82 normal limits, PET tools) appear to be standalone evaluations of the algorithm's output.
7. Type of Ground Truth Used
For the AdreView™ validation, the ground truth was implicitly derived from the clinical classification of patients into "abnormal and normal AdreView™ uptake." The document does not specify the exact methods for this classification (e.g., based on clinical diagnosis, follow-up, or a single expert's visual assessment). For earlier features, "standard visual analysis of the gated SPECT & PET study" and "physician interpretation" are mentioned, suggesting expert consensus or clinical diagnosis as the implicit ground truth.
8. Sample Size for the Training Set
- Pilot Group for AdreView™: 67 patients were used "to develop the heart volume regions of interest on the SPECT reconstructions." This functions as a development/training set for establishing parts of the algorithm.
- Other Development Sets:
- Phase analysis (SyncTool™): Developed in 90 normal patients.
- SPECT reconstruction: "development (phantom, animal, and patients n=4)".
- Rb-82 normal limits: 176 patients used for "development and validation."
- N-13 ammonia normal limits: 144 patients used for "development and validation."
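Normal-limits databases of this kind are generally applied by comparing a patient's regional values against the normal cohort's mean and standard deviation. A generic sketch (the 2.5-SD threshold and function name are illustrative assumptions, not values from this submission):

```python
import numpy as np

# Generic normal-database comparison; the 2.5-SD default is an
# illustrative assumption, not a value stated in the 510(k).

def abnormal_regions(patient: np.ndarray,
                     normal_mean: np.ndarray,
                     normal_sd: np.ndarray,
                     n_sd: float = 2.5) -> np.ndarray:
    """Boolean mask of regions whose value falls below
    (normal mean - n_sd * normal SD)."""
    return patient < (normal_mean - n_sd * normal_sd)
```

For example, with a regional normal mean of 85 counts and an SD of 5, a patient value below 72.5 would be flagged as abnormal.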
9. How Ground Truth for the Training Set Was Established
For the 67-patient pilot group for AdreView™ to "develop the heart volume regions of interest," the method for establishing ground truth is not specified. It likely involved expert input or established anatomical guidelines for defining these regions.
For the "development" portions of other features (e.g., 90 normal patients for SyncTool™ phase analysis, 4 patients/phantoms/animals for SPECT reconstruction development), the document doesn't detail how ground truth was explicitly established, but it would typically involve expert interpretation, phantoms with known properties, or histological/pathological correlation where applicable, though none are specifically stated here beyond the nature of the data itself (e.g., "normal patients" for SyncTool).
(55 days)
The Emory Cardiac Toolbox™ (CEqual®, EGS™) 3.1 software program should be used for the quantification of myocardial perfusion (CEqual®), for the display of wall motion and quantification of left-ventricular function parameters from gated Tc-99m SPECT & PET myocardial perfusion studies (EGS™), for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface, for the assessment of cardiac mechanic dyssynchrony using phase analysis, and for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM).
The Emory Cardiac Toolbox™ 3.1 is used to display gated wall motion and for quantifying parameters of left-ventricular perfusion and function from gated SPECT & PET myocardial perfusion studies. These parameters are: perfusion, ejection fraction, end-diastolic volume, end-systolic volume, myocardial mass, transient ischemic dilatation (TID), and cardiac mechanic dyssynchrony. In addition, the program offers the capability of providing the following diagnostic information: computer assisted visual scoring, prognostic information, expert system image interpretation, and patient specific 3D coronary overlay. The program can also be used for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface and for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM). The Emory Cardiac Toolbox can be used with any of the following Myocardial SPECT Protocols: Same Day and Two Day Sestamibi, Dual-Isotope (Tc-99m/Tl-201), Tetrofosmin, and Thallium, Rubidium-82, N-13-ammonia, FDG protocols, and user defined normal databases. This program was developed to run in the IDL operating environment and can be executed on any nuclear medicine computer system that supports IDL and the Aladdin (General Electric) software development environment. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
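For context on the iterative reconstruction option (MLEM/OSEM) named in the indications, the classic MLEM update is x ← x · Aᵀ(y / Ax) / Aᵀ1. A toy sketch with a dense stand-in system matrix (a real SPECT projector models acquisition geometry, attenuation, and scatter; this is not the product's implementation):

```python
import numpy as np

# Toy MLEM reconstruction; the dense matrix A is a stand-in for a real
# SPECT system model. Illustrative only.

def mlem(A: np.ndarray, y: np.ndarray, n_iters: int = 50) -> np.ndarray:
    """MLEM: x <- x * A^T(y / Ax) / (A^T 1).
    A: system matrix (projection bins x voxels), y: measured projections."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = np.maximum(A @ x, 1e-12)  # guard against divide-by-zero
        x = x * (A.T @ (y / proj)) / sens
    return x
```

On noiseless, consistent data this multiplicative update converges toward the true activity distribution; OSEM accelerates it by cycling the update over subsets of the projections.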
The information provided in the 510(k) summary (K071503) focuses on demonstrating substantial equivalence to predicate devices and detailing the validation studies conducted for different versions and features of the Emory Cardiac Toolbox. However, it does not explicitly state specific acceptance criteria in a quantitative manner or present a table comparing reported device performance against such criteria for the entire Emory Cardiac Toolbox 3.1.
Instead, the document describes the types of validation performed and the sample sizes used for various features. It asserts the software's safety and effectiveness based on these studies and its similarity to predicate devices.
Based on the provided text, here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
As noted, the document states that the "expected accuracy of the initial program can be found in the multicenter trial results listed in the article by Vansant et al" (Emory Cardiac Toolbox™ (CEqual®, EGS™) Version 2.0, Ref. 510(k) #: K992450, and Version 2.1, Ref. 510(k) #: K014033). However, specific quantitative acceptance criteria for the Emory Cardiac Toolbox 3.1 (or even for 2.0/2.1 in detail) are not defined in the provided text, nor is there a table directly presenting performance against such criteria. The document discusses "accuracy" in general terms for previous versions, and then "development and prospective validation" for new features in 3.1.
Therefore, a table cannot be fully constructed for explicit acceptance criteria as they are not explicitly detailed in this document. The document refers to external 510(k)s (K992450 and K014033) for baseline accuracy. For version 3.1 features, it describes validation studies conducted.
Summary of Device Performance/Validation Studies (as described within the document):
| Feature/Version | Study Type/Description | Sample Size | Outcome (descriptive; specific metrics not provided in this document) |
|---|---|---|---|
| Emory Cardiac Toolbox™ 2.0 | Phantom and computer simulations | N/A | Established effectiveness |
| | In-house trial (LV functional parameters) | 217 patients | Established effectiveness |
| | Multicenter trial (general effectiveness) | 80 patients | Established effectiveness ("expected accuracy" referred to other 510(k)s) |
| | Computer assisted visual scoring | 20 patients | Successfully evaluated |
| | Prognosis algorithms | 504 patients | Successfully evaluated |
| | Expert system algorithms | 461 patients | Successfully evaluated |
| | Coronary fusion algorithms | 9 patients | Successfully evaluated |
| Emory Cardiac Toolbox™ 2.1 | Rb-82 normal limits development & validation | 176 patients | Successfully completed |
| | PET tools for perfusion-metabolism match-mismatch | 90 patients | Successfully completed |
| Emory Cardiac Toolbox™ 2.6 | N-13 ammonia normal limits development & validation | 144 patients | Successfully completed |
| | 3D CT coronary artery alignment (phantom & patient) | 8 patients | Successfully completed |
| Emory Cardiac Toolbox™ 3.1 | SPECT reconstruction validation | 10 patients (prospective) | Successfully validated |
| | Phase analysis development (cardiac mechanic dyssynchrony) | 90 normal patients | Successfully developed |
| | Phase analysis validation (cardiac mechanic dyssynchrony) | 75 additional patients (prospective) | Successfully validated |
2. Sample Size Used for the Test Set and the Data Provenance
The document describes several test sets for different features and versions of the software.
- Emory Cardiac Toolbox 2.0 (initial program):
  - "In-house trial validations which included an evaluation of left ventricular functional parameter calculations": 217 patients. (Provenance not specified, but "in-house" suggests internal data, likely US based, retrospective or mixed.)
  - "Multicenter trial validation": 80 patients. (Provenance not specified; typically implies multiple centers, often US; could be retrospective or prospective.)
  - Computer assisted visual scoring: 20 patients. (Provenance not specified.)
  - Prognosis: 504 patients. (Provenance not specified.)
  - Expert system: 461 patients. (Provenance not specified.)
  - Coronary fusion: 9 patients. (Provenance not specified.)
- Emory Cardiac Toolbox 2.1:
  - Rb-82 normal limits validation: 176 patients. (Provenance not specified.)
  - PET tools for match-mismatch validation: 90 patients. (Provenance not specified.)
- Emory Cardiac Toolbox 2.6:
  - N-13 ammonia normal limits validation: 144 patients. (Provenance not specified.)
  - 3D CT coronary artery alignment validation (patient studies): 8 patients. (Provenance not specified.)
- Emory Cardiac Toolbox 3.1 (new features):
  - SPECT reconstruction validation: 10 patients. This was a prospective validation. (Provenance not specified.)
  - Phase analysis validation: 75 patients. This was a prospective validation. (Provenance not specified.)
Overall Data Provenance (General): The document does not explicitly state the country of origin for the data for most studies, but the company (Syntermed, Inc.) and the submission location (Atlanta, GA) suggest a US context for clinical trials. Many studies involve "patients," implying retrospective or prospective clinical data. Multiple "validation" studies are mentioned as "prospective" for the 3.1 version's new features.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
The document does not specify the number of experts or their qualifications used to establish ground truth for any of the studies mentioned. It simply refers to "validation" or "evaluations" that likely rely on expert review or established methods.
4. Adjudication Method for the Test Set
The document does not specify any adjudication methods (e.g., 2+1, 3+1) for establishing ground truth in any of the described test sets.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and the Effect Size of Human Reader Improvement with vs. without AI Assistance
The document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI assistance vs. without it. The device is referred to as a "display and processing program to aid in the diagnostic interpretation," implying it assists physicians. However, it does not quantify this assistance in a comparative MRMC study. The "expert system interpretation" mentioned for version 2.0 suggests an AI component, but its impact on human reader performance is not detailed.
6. Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance
The device is explicitly described as serving "merely as a display and processing program to aid in the diagnostic interpretation of a patients' study. It was not meant to replace or eliminate the standard visual analysis of the gated SPECT & PET study." Furthermore, it states that "user verification of output is required." This strongly indicates that the device is not designed or evaluated as a standalone AI system without human-in-the-loop performance. Its role is assistive, and the physician retains final responsibility. Therefore, a standalone performance study as a replacement for human interpretation was likely not performed, nor would it be appropriate for its intended use.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The document refers to various types of "validation" but does not explicitly detail the specific type of ground truth used for each. However, based on the context of cardiac imaging and functional parameters, the ground truth likely involved a combination of:
- Clinical outcomes/patient data: For prognostic information and normal limits.
- Expert interpretation/consensus: For visual scoring, expert system interpretation, and possibly for validating quantitative measurements against a clinical standard.
- Pathology: Not directly mentioned, but could be a component for definitive diagnoses if included in certain patient cohorts.
- Phantom studies: Explicitly mentioned for version 2.0 (effectiveness) and version 2.6 (3D CT coronary artery alignment), which would use known, controlled configurations as ground truth.
- Animal studies: Explicitly mentioned for version 3.1 (development for SPECT reconstruction and phase analysis), where ground truth could be established through invasive measurements or controlled conditions.
8. The Sample Size for the Training Set
The document mentions "development" for some features but doesn't explicitly refer to "training sets" in the modern machine learning sense with specific sample sizes. It mentions:
- For Emory Cardiac Toolbox™ 2.1: "development and validation of Rb-82 normal limits (n=176)". This 176-patient cohort likely served as a development/training set for establishing these normal ranges.
- For Emory Cardiac Toolbox™ 2.6: "development and validation of N-13 ammonia normal limits (n=144)". This 144-patient cohort likely served as a development/training set.
- For Emory Cardiac Toolbox™ 3.1: "development... in 90 normal patients" for phase analysis. This 90-patient cohort effectively served as a training/development set for the phase analysis algorithm.
Other studies are described as "evaluations" or "validations" rather than "development" directly tied to a specific cohort, suggesting pre-existing algorithms were tested on these patient populations.
9. How the Ground Truth for the Training Set Was Established
Similar to the ground truth for the test set, the document is not specific on this point. For the "development" cohorts mentioned:
- Normal limits (Rb-82, N-13 ammonia): Ground truth for "normal" would typically be established based on rigorous clinical criteria, including detailed patient history, lack of cardiac symptoms, clean EKG, and other physiological measurements, often confirmed by expert clinical review.
- Phase analysis (90 normal patients): Ground truth for "normal" in phase analysis (assessing cardiac mechanic dyssynchrony) would involve established clinical criteria defining normal cardiac rhythm and function, likely based on expert cardiological assessment, EKG, echocardiography, and other standard diagnostic tests to confirm the absence of dyssynchrony.
- Phantom and animal studies: As mentioned for other versions, these would use known physical properties or invasively measured physiological parameters to establish ground truth.
In general, it relies on conventional medical diagnostic methods and expert clinical assessment as the reference standard.