
510(k) Data Aggregation

    K Number: K130902
    Manufacturer: SYNTERMED, INC.
    Date Cleared: 2013-06-14 (74 days)
    Regulation Number: 892.1200
    Intended Use

    The Emory Cardiac Toolbox™ 3.2 software program should be used for the quantification of myocardial perfusion, for the display of wall motion and quantification of left-ventricular function parameters from gated Tc-99m SPECT & PET myocardial perfusion studies (EGS™), for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface, for the assessment of cardiac mechanical dyssynchrony using phase analysis, for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM), and for the quantitative analysis and display of SPECT AdreView™ (¹²³I-mIBG) data sets used for evaluation of patients with congestive heart failure.

    The product is intended for use by trained nuclear technicians and nuclear medicine or nuclear cardiology physicians. The clinician remains ultimately responsible for the final interpretation and diagnosis based on standard practices and visual interpretation of all SPECT and PET data.

    Device Description

    The Emory Cardiac Toolbox™ 3.2 is used to display gated wall motion and for quantifying parameters of left-ventricular perfusion and function from gated SPECT & PET myocardial perfusion studies. These parameters are: perfusion, ejection fraction, end-diastolic volume, end-systolic volume, myocardial mass, transient ischemic dilatation (TID), and cardiac mechanical dyssynchrony. In addition, the program offers the capability of providing the following diagnostic information: computer assisted visual scoring, prognostic information, expert system image interpretation, and patient specific 3D coronary overlay. The program can also be used for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface and for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM). The Emory Cardiac Toolbox can be used with any of the following myocardial SPECT protocols: Same Day and Two Day Sestamibi, Dual-Isotope (Tc-99m/Tl-201), Tetrofosmin, and Thallium, Rubidium-82, N-13-ammonia, FDG protocols, and user defined normal databases. The program can also be used for the quantitative analysis and display of SPECT AdreView™ (¹²³I-mIBG) data sets. This program was developed to run in the IDL environment, which can be executed on any nuclear medicine computer system that supports IDL and the Aladdin (General Electric) software development environment. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
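
    For orientation, two of the function parameters listed above reduce to simple ratios of ventricular volumes. Below is a minimal worked sketch with hypothetical volume values; the actual program derives volumes from 3D surfaces fitted to the gated SPECT/PET frames.

```python
# Minimal sketch of two LV function parameters named above.
# All volume values are hypothetical illustrations.

edv_ml = 120.0  # end-diastolic volume (mL), hypothetical
esv_ml = 48.0   # end-systolic volume (mL), hypothetical

# Ejection fraction: fraction of the diastolic volume ejected per beat.
ef_percent = 100.0 * (edv_ml - esv_ml) / edv_ml  # -> 60.0%

# Transient ischemic dilatation (TID): ratio of stress to rest LV volume;
# values well above 1.0 suggest stress-induced dilation.
stress_volume_ml, rest_volume_ml = 130.0, 115.0  # hypothetical
tid_ratio = stress_volume_ml / rest_volume_ml    # -> ~1.13

print(f"EF = {ef_percent:.1f}%, TID = {tid_ratio:.2f}")
```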

    AI/ML Overview

    Here's an analysis of the provided text to extract the acceptance criteria and study details:

    1. Acceptance Criteria and Reported Device Performance

    The document does not explicitly state acceptance criteria in a quantitative, pass/fail format for the new features of Emory Cardiac Toolbox™ 3.2. Instead, it relies on substantial equivalence to previous versions and validates the new AdreView™ analysis through its ability to differentiate abnormal and normal uptake.

    | Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
    |---|---|---|
    | Visual scoring, prognosis, expert system, coronary fusion algorithms | Successfully evaluated in their respective patient populations. | Successfully evaluated in 20, 504, 461, and 9 patients, respectively. |
    | Rb-82 normal limits | Development and validation successful. | Developed and validated in 176 patients. |
    | PET tools for perfusion-metabolism match-mismatch | Successful evaluation. | Successfully completed in 90 patients. |
    | N-13-ammonia normal limits | Development and validation successful. | Developed and validated in 144 patients. |
    | Alignment method for 3D CT coronary artery onto LV 3D epicardial surface | Successful validation using phantom and patient studies. | Validated using phantom and patient studies (n=8). |
    | SPECT reconstruction | Prospective validation. | Validated in 10 patients. |
    | Phase analysis (SyncTool™) | Development and prospective validation. | Developed in 90 normal patients and prospectively validated in 75 additional patients. |
    | SPECT AdreView™ (¹²³I-mIBG) data analysis (new in v3.2) | Capability to differentiate subjects with abnormal and normal AdreView™ uptake using the H/M ratio index. | Demonstrated the capability to differentiate abnormal from normal AdreView™ uptake using the H/M ratio index in a validation group of 1,016 patients. |
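
    The H/M (heart-to-mediastinum) ratio cited in the last row is a simple count-ratio index. Below is a minimal sketch of how such an index could be computed from ROI means; the image data and ROI positions are entirely hypothetical, since the summary does not describe the Toolbox's actual ROI placement or counting method.

```python
import numpy as np

# Minimal sketch of a heart-to-mediastinum (H/M) ratio computation.
# The image is a random stand-in for a 123I-mIBG acquisition, and the
# ROI coordinates are hypothetical.

rng = np.random.default_rng(5)
image = rng.poisson(50, size=(128, 128)).astype(float)
image[52:68, 42:58] += 30.0  # hypothetical elevated cardiac uptake

def roi_mean(img, center, half_size):
    """Mean counts inside a square ROI centered at `center`."""
    r, c = center
    return img[r - half_size:r + half_size, c - half_size:c + half_size].mean()

heart_mean = roi_mean(image, center=(60, 50), half_size=8)        # heart ROI, hypothetical position
mediastinum_mean = roi_mean(image, center=(30, 64), half_size=4)  # mediastinal ROI, hypothetical position

hm_ratio = heart_mean / mediastinum_mean
print(f"H/M ratio = {hm_ratio:.2f}")  # lower values are associated with abnormal uptake
```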

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size for AdreView™ Validation: 1,016 patients

    • Data Provenance for AdreView™ Validation: The document does not explicitly state the country of origin or whether cases were retrospective or prospective for the AdreView™ validation group. It only mentions "two groups of patients used in this study, a pilot group consisting of 67 patients... and a validation group consisting of 1,016 patients." The pilot group was "used to develop the heart volume regions of interest," implying a development set, while the 1,016 patients were specifically for validation.

    • Other Sample Sizes Mentioned (for previous versions or other features):

      • Initial program (v2.0) LV functional parameters: 217 patients (in-house), 80 patients (multicenter trial).
      • Visual scoring: 20 patients
      • Prognosis: 504 patients
      • Expert system: 461 patients
      • Coronary fusion algorithms: 9 patients
      • Rb-82 normal limits: 176 patients
      • PET tools (perfusion-metabolism): 90 patients
      • N-13-ammonia normal limits: 144 patients
      • 3D CT coronary artery alignment: 8 patients
      • SPECT reconstruction: 10 patients (prospective)
      • Phase analysis (SyncTool™): 90 normal patients (development), 75 additional patients (prospective validation).

    3. Number of Experts and Qualifications for Ground Truth

    The document does not explicitly state the number of experts used to establish ground truth for the AdreView™ validation. For previous versions and other features, it refers to "standard visual analysis," "physician interpretation," and "expert system interpretation" but does not detail the number or specific qualifications of these experts.

    4. Adjudication Method for the Test Set

    The document does not describe a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth in the validation studies. The statement "The physician should integrate all of the patients' clinical and diagnostic information...prior to making his final interpretation" implies a physician-led decision process, but not a specific adjudication protocol involving multiple experts.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study demonstrating how much human readers improve with AI vs. without AI assistance is explicitly described for the new features in v3.2. The document focuses on the performance of the algorithm itself in differentiating normal/abnormal AdreView™ uptake, rather than the AI's assistance to human readers. The general statement about the program serving "merely as a display and processing program to aid in the diagnostic interpretation" suggests it's an assistive tool, but no study quantifies this assistance.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone performance evaluation was done for the new AdreView™ analysis. The study focused on the "capability to differentiate subjects with abnormal and normal AdreView™ uptake using the H/M ratio index" from the algorithm's output. This is a direct evaluation of the algorithm's performance on its own. Similarly, many of the performance evaluations for previous versions (e.g., calculation of LV functional parameters, Rb-82 normal limits, PET tools) appear to be standalone evaluations of the algorithm's output.

    7. Type of Ground Truth Used

    For the AdreView™ validation, the ground truth was implicitly derived from the clinical classification of patients into "abnormal and normal AdreView™ uptake." The document does not specify the exact methods for this classification (e.g., based on clinical diagnosis, follow-up, or a single expert's visual assessment). For earlier features, "standard visual analysis of the gated SPECT & PET study" and "physician interpretation" are mentioned, suggesting expert consensus or clinical diagnosis as the implicit ground truth.

    8. Sample Size for the Training Set

    • Pilot Group for AdreView™: 67 patients were used "to develop the heart volume regions of interest on the SPECT reconstructions." This functions as a development/training set for establishing parts of the algorithm.
    • Other Development Sets:
      • Phase analysis (SyncTool™): Developed in 90 normal patients.
      • SPECT reconstruction: "development (phantom, animal, and patients n=4)".
      • Rb-82 normal limits: 176 patients used for "development and validation."
      • N-13-ammonia normal limits: 144 patients used for "development and validation."

    9. How Ground Truth for the Training Set Was Established

    For the 67-patient pilot group for AdreView™ to "develop the heart volume regions of interest," the method for establishing ground truth is not specified. It likely involved expert input or established anatomical guidelines for defining these regions.

    For the "development" portions of other features (e.g., 90 normal patients for SyncTool™ phase analysis, 4 patients/phantoms/animals for SPECT reconstruction development), the document doesn't detail how ground truth was explicitly established, but it would typically involve expert interpretation, phantoms with known properties, or histological/pathological correlation where applicable, though none are specifically stated here beyond the nature of the data itself (e.g., "normal patients" for SyncTool).


    K Number: K130451
    Device Name: NEUROQ 3.6
    Manufacturer: SYNTERMED, INC.
    Date Cleared: 2013-05-17 (84 days)
    Regulation Number: 892.1200
    Intended Use
    1. assist with regional assessment of human brain scans, through automated quantification of mean pixel values lying within standardized regions of interest (S-ROIs),
    2. assist with comparisons of the activity in brain regions of individual scans relative to normal activity values found for brain regions in FDG-PET scans, through quantitative and statistical comparisons of S-ROIs,
    3. assist with comparisons of activity in brain regions of individual scans between two studies from the same patient and between symmetric regions of interest within the brain PET study, and to perform an image fusion of the patient's PET and CT data, and
    4. provide added functionality (new in NeuroQ 3.6) for the analysis of amyloid uptake levels in brain regions.
    Device Description

    NeuroQ™ 3.6 has been developed to aid in the assessment of human brain scans through quantification of mean pixel values lying within standardized regions of interest, and to provide quantified comparisons with brain scans derived from FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC). The program provides automated analysis of brain PET scans, with output that includes quantification of relative activity in 240 different brain regions, as well as measures of the magnitude and statistical significance with which activity in each region differs from mean activity values of brain regions in the AC database. The program can also be used to compare activity in brain regions of individual scans between two studies from the same patient and between symmetric regions of interest within the brain PET study, and to perform an image fusion of the patient's PET and CT data. The program can also be used to provide analysis of amyloid uptake levels in the brain. This program was developed to run in the IDL environment, which can be executed on any nuclear medicine computer system that supports the IDL software platform. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
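
    The regional comparison described above amounts to scoring each regional mean against the control database. Below is a minimal sketch of that idea on simulated numbers; the region count of 240 comes from the description, while the values, normalization, and 2-SD threshold are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of region-wise comparison against a normal
# (asymptomatic-control) database, as NeuroQ's output describes.
# All numbers are simulated.

rng = np.random.default_rng(0)
n_regions = 240

ac_mean = rng.uniform(0.8, 1.2, n_regions)         # AC database: per-region mean activity
ac_sd = rng.uniform(0.05, 0.15, n_regions)         # AC database: per-region standard deviation
patient = ac_mean + rng.normal(0, 0.1, n_regions)  # patient's normalized regional means

# Magnitude and statistical significance of each regional deviation,
# expressed as a z-score relative to the control database.
z = (patient - ac_mean) / ac_sd
flagged = np.flatnonzero(np.abs(z) > 2.0)  # regions deviating > 2 SD from normal
print(f"{flagged.size} of {n_regions} regions deviate by more than 2 SD")
```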

    AI/ML Overview

    The provided text describes the NeuroQ™ 3.6 device, its intended use, and its equivalence to previously cleared devices. However, it does not contain specific acceptance criteria for performance metrics (like sensitivity, specificity, accuracy, or statistical thresholds) or a detailed study description with specific results that would "prove" the device meets such criteria. Instead, it references previous validation studies and states general conclusions about safety and effectiveness.

    Therefore, I cannot populate a table of acceptance criteria and reported performance, nor can I provide detailed information for many of the requested points because that specific data is not present in the provided 510(k) summary. The summary focuses on establishing substantial equivalence based on prior versions and a general statement of in-house testing.

    Here's a breakdown of what can and cannot be answered based on the provided text:


    1. A table of acceptance criteria and the reported device performance

    • Cannot be provided. The document does not specify any quantitative acceptance criteria or reported performance metrics (e.g., specific accuracy, sensitivity, specificity values, or statistical thresholds) that the device was tested against. It states that validation for modifications can be found in "Item H. Testing & Validation," but this item itself is not included in the provided text.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Cannot be fully provided. The document mentions "clinical validation studies submitted in our previous 510(k) K041022 and 510(k) #: K072307" for the initial program and earlier versions. For the current version (3.6) it only refers to "in-house testing" and "final in-house validation results."
      • Sample Size (Test Set): Not specified for NeuroQ™ 3.6.
      • Data Provenance (country of origin, retrospective/prospective): Not specified for NeuroQ™ 3.6. The reference database is described as "brain scans derived from FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC)," but no details on its origin are given.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Cannot be provided. The document does not describe how ground truth was established for any test set or mention specific experts involved in such a process for NeuroQ™ 3.6.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Cannot be provided. The document does not describe any adjudication method for a test set.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No, an MRMC comparative effectiveness study is not described as being performed. The document states the program "serves merely as a display and processing program to aid in the diagnostic interpretation...it was not meant to replace or eliminate the standard visual analysis." It emphasizes the physician's ultimate responsibility and integration of all information. There is no mention of a study comparing human reader performance with and without AI assistance, or any effect size.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, implicitly, to some extent, but with a critical caveat. The device itself performs "automated analysis" and provides "quantification of relative activity." This suggests standalone algorithmic processing.
    • Caveat: The document explicitly states, "The program processes the studies automatically, however, user verification of output is required and manual processing capability is provided." It also says it is "not meant to replace or eliminate the standard visual analysis" and that the physician "should integrate all of the patients' clinical and diagnostic information." This strongly indicates that while the algorithm runs automatically, its performance is not intended to be "standalone" in a diagnostic sense, as human-in-the-loop verification and interpretation are always required. No specific "standalone performance" metrics are provided.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Cannot be explicitly stated. The document describes a "reference database" of "asymptomatic controls (AC)" used for comparison. This implies a ground truth of "normalcy" based on the absence of identified neuropsychiatric disease or symptoms. However, the precise method of establishing this normal status (e.g., through long-term follow-up, expert clinical assessment, other diagnostic tests) is not detailed. For patient studies, the tool provides quantitative results to be integrated by a physician, but doesn't mention a specific "ground truth" used to validate its diagnostic accuracy in patient cases.

    8. The sample size for the training set

    • Cannot be provided. The training set size for the algorithms is not mentioned. The document refers to a "reference database" of asymptomatic controls, but its size is not specified.

    9. How the ground truth for the training set was established

    • Partially described for the "reference database." The "reference database" consists of "brain scans derived from FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC)." This indicates that the ground truth for this reference is the absence of neuropsychiatric disease or symptoms. However, the specific methods (e.g., detailed clinical evaluation, exclusion criteria, follow-up) used to establish this "asymptomatic" status are not provided. The document does not explicitly discuss a separate "training set" and associated ground truth, but rather a "reference database" for comparison.

    K Number: K123646
    Manufacturer: SYNTERMED, INC.
    Date Cleared: 2013-02-22 (87 days)
    Regulation Number: 892.1200
    Reference & Predicate Devices: N/A
    Intended Use

    The Emory Cardiac Toolbox™ 4.0 software program should be used for the quantification of myocardial perfusion, for the display of wall motion and quantification of left-ventricular function parameters from SPECT & PET myocardial perfusion studies (EGS™), for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface, for the assessment of cardiac mechanical dyssynchrony using phase analysis, for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM), for the quantification of myocardial blood flow and coronary flow reserve, and for decision support in interpretation (LVX) and automatic structured reporting of the study.

    The product is intended for use by trained nuclear technicians and nuclear medicine or nuclear cardiology physicians. The clinician remains ultimately responsible for the final interpretation and diagnosis based on standard practices and visual interpretation of all SPECT and PET data.

    Device Description

    The Emory Cardiac Toolbox™ 4.0 is used to display gated wall motion, for quantifying parameters of left-ventricular perfusion and function from gated SPECT & PET myocardial perfusion studies, and for the evaluation of dynamic PET studies. These parameters are: perfusion, ejection fraction, end-diastolic volume, end-systolic volume, myocardial mass, transient ischemic dilatation (TID), analysis of coronary blood flow and coronary flow reserve, and assessment of cardiac mechanical dyssynchrony. In addition, the program offers the capability of providing the following diagnostic information: computer assisted visual scoring, prognostic information, and expert system image interpretation. The program can also be used for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface and for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM). The Emory Cardiac Toolbox can be used with any of the following myocardial SPECT protocols: Same Day and Two Day Sestamibi, Dual-Isotope (Tc-99m/Tl-201), Tetrofosmin, and Thallium, Rubidium-82, Rubidium-82 with CT-based attenuation correction, N-13-ammonia, FDG protocols, and user defined normal databases. This program was developed to run in the .NET environment and can be executed on any PC, any nuclear medicine computer system, or through a web browser. In addition, the program can be used for decision support in interpretation and automatic structured reporting of the study. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
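
    Of the new quantities, coronary flow reserve is the simplest to state: it is the ratio of stress to rest myocardial blood flow. Below is a minimal worked sketch with hypothetical flow values; the actual flows come from kinetic modeling of the dynamic PET data, which is not detailed in the summary.

```python
# Minimal sketch of coronary flow reserve (CFR), one of the quantities
# version 4.0 adds. Flow values are hypothetical illustrations.

rest_flow = 0.9    # myocardial blood flow at rest (mL/min/g), hypothetical
stress_flow = 2.7  # myocardial blood flow under stress (mL/min/g), hypothetical

cfr = stress_flow / rest_flow  # -> 3.0; substantially lower values suggest reduced reserve
print(f"CFR = {cfr:.1f}")
```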

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the Emory Cardiac Toolbox™ 4.0, extracted from the provided 510(k) summary:

    1. Acceptance Criteria and Device Performance

    The 510(k) summary for Emory Cardiac Toolbox™ 4.0 primarily focuses on demonstrating substantial equivalence to predicate devices and detailing the validation studies for its new features. It does not explicitly state pre-defined acceptance criteria in a quantitative format as one might find in a clinical trial protocol. Instead, it describes successful validations and accuracy against established methods or predicate devices.

    However, based on the information provided, we can infer the performance goals for certain aspects of the device.

    | Feature/Parameter | Acceptance Criteria (Inferred) | Reported Device Performance |
    |---|---|---|
    | IDL to .NET conversion accuracy (14 perfusion, function, and dyssynchrony variables) | Demonstrate that "similar values" are derived when converting from the IDL to the .NET environment. | Successfully achieved "accuracy greater than 99%." |
    | Coronary blood flow validation | Comparable performance to cleared predicate devices (INVIA's CORRIDOR 4DM V2010 and cfrQuant). | Validation conducted successfully in 44 patient studies, supporting the claim of safety and effectiveness and substantial equivalence to predicate devices. |
    | Decision support validation (LVX) | Comparable performance to cleared predicate devices (Emory Cardiac Toolbox v2.0 K992450 (PERFEX) and Rcadia's COR Analyzer K110071). | Validation conducted successfully in 126 studies, supporting the claim of safety and effectiveness and substantial equivalence to predicate devices. |
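
    The IDL-to-.NET check in the first row is essentially a large paired-agreement comparison between two implementations. Below is a minimal sketch of such a check on simulated data; the 1% tolerance and the simulated values are assumptions, since the summary only reports "accuracy greater than 99%" over 301 patients and 14 variables.

```python
import numpy as np

# Minimal sketch of a cross-implementation agreement check: values for
# 14 perfusion/function/dyssynchrony variables computed by a legacy
# (IDL) and a ported (.NET) code path are compared patient by patient.
# The data here are simulated.

rng = np.random.default_rng(1)
n_patients, n_vars = 301, 14

idl_values = rng.normal(50, 10, size=(n_patients, n_vars))
net_values = idl_values * rng.normal(1.0, 0.002, size=(n_patients, n_vars))  # near-identical port

# Count value pairs agreeing to within a 1% relative tolerance (assumed).
agree = np.isclose(net_values, idl_values, rtol=0.01)
print(f"agreement: {100.0 * agree.mean():.2f}%")
```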

    Study Details

    2. Sample Sizes Used for the Test Set and Data Provenance

    • IDL to .NET Conversion: 301 patients. Data provenance is not explicitly stated (e.g., country, retrospective/prospective).
    • Coronary Blood Flow Validation: 44 patient studies. Data provenance is not explicitly stated.
    • Decision Support Validation (LVX): 126 studies. Data provenance is not explicitly stated.
    • Previous Versions Referenced (ECT 2.0, 2.1, 2.6, 3.1):
      • ECT 2.0: 217 patients for LV functional parameter evaluation (in-house trials), 80 patients for a multicenter trial validation. Computer assisted visual scoring (20 patients), prognosis (504 patients), expert system (461 patients), and coronary fusion (9 patients).
      • ECT 2.1: Rb-82 normal limits (n=176), PET tools for perfusion-metabolism match-mismatch (n=90).
      • ECT 2.6: N-13-ammonia normal limits (n=144), alignment method for 3D CT coronary artery onto LV 3D epicardial surface (phantom and 8 patients).
      • ECT 3.1: SPECT reconstruction (phantom, animal, and 4 patients for development; 10 patients for prospective validation); Phase analysis (90 normal patients for development; 75 additional patients for prospective validation).

    The provided text does not specify the country of origin for the data or if the studies were retrospective or prospective, except for two instances in ECT 3.1: "prospective validation in 10 patients" for SPECT reconstruction and "prospective validation in 75 additional patients" for phase analysis.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not explicitly state the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience") for establishing ground truth for the test sets described for version 4.0.

    However, for the original Emory Cardiac Toolbox™ 2.0, it references an article by Vansant et al. (Emory Cardiac Toolbox™ (CEqual®, EGS™) Version 2.0, Ref. 510(k) #: K992450, and Version 2.1, Ref. 510(k) #: K014033) for initial program accuracy. The details of how ground truth was established in those studies would likely be found in those referenced 510(k)s or publications.

    For the decision support system (LVX), given its intended use "to assist a trained physician to analyze nuclear cardiology images" and that the "physician should integrate all of the patients' clinical and diagnostic information... prior to making his final interpretation," it implies that the physicians using the system acted as the ultimate arbiters of truth, but how "ground truth" for the validation study itself was established for the 126 studies is not detailed.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe an adjudication method (like 2+1 or 3+1 consensus) for establishing ground truth for the test sets mentioned in the 4.0 validation or earlier versions.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to quantify how much human readers improve with AI vs. without AI assistance. The new features for version 4.0 (coronary blood flow, coronary flow reserve, and decision support) were primarily validated by demonstrating their derivation accuracy or by comparing their results to predicate devices, rather than through a comparative reader study with and without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, standalone performance was assessed for the new features. The validation of the "accuracy greater than 99%" for the IDL to .NET conversion, the "coronary blood flow validation" in 44 patients, and the "decision support validation" in 126 studies describe the algorithm's performance in generating specific measurements or interpretations. While user verification is required and manual processing is possible, the reported validations focus on the program's ability to process studies and produce results, which is a form of standalone evaluation. The device is intended to provide "decision support," meaning its output is fed to a human for final interpretation, but its internal validation steps would typically evaluate its standalone computational accuracy.

    7. The Type of Ground Truth Used

    • IDL to .NET Conversion: The ground truth would likely be the values derived from the previous, established IDL version of the software. The goal was to show the new .NET version produced "similar values."
    • Coronary Blood Flow and Coronary Flow Reserve: The ground truth for these quantitative measurements would typically be established against other validated quantitative methods for measuring blood flow and reserve, implicitly by comparison/equivalence to the predicate devices (INVIA's CORRIDOR 4DM V2010 and cfrQuant).
    • Decision Support System (LVX): For decision support systems, ground truth is often clinical outcomes, invasive procedures (like angiography for coronary artery disease), or expert consensus on image interpretation. The text states equivalence to PERFEX and COR Analyzer, implying the ground truth methods used to validate those predicates (which are not detailed here) would be relevant. The phrasing "physician should integrate all of the patients' clinical and diagnostic information... prior to making his final interpretation" suggests that the ultimate "truth" is the comprehensive clinical diagnosis.

    8. The Sample Size for the Training Set

    The document does not explicitly provide a sample size for a training set directly associated with the new features of version 4.0. The studies described for version 4.0 (IDL to .NET conversion, coronary blood flow, decision support) are presented as validation studies, which typically use independent test sets, not training sets.

    However, the text mentions "development" for earlier versions:

    • ECT 2.1: "development and validation of Rb-82 normal limits (n=176)"
    • ECT 2.6: "development and validation of N-13-ammonia normal limits (n=144)"
    • ECT 3.1: "development (phantom, animal, and patients n=4)" for SPECT reconstruction and "development in 90 normal patients" for phase analysis.

    These "development" activities would involve data used to build or refine algorithms, akin to training data, but they are tied to earlier versions of the product.

    9. How the Ground Truth for the Training Set Was Established

    As no specific training sets are described for the version 4.0 updates, the method for establishing ground truth for such sets is not mentioned. For the "development" phases of earlier versions (Rb-82 normal limits, N-13-ammonia normal limits, phase analysis in normal patients), the ground truth for "normal limits" would typically be derived from a healthy population cohort, likely using established clinical criteria to define "normal." For animal and phantom studies, the "ground truth" is typically the known physical properties or induced conditions.


    K Number: K072307
    Manufacturer: SYNTERMED, INC.
    Date Cleared: 2008-03-14 (210 days)
    Regulation Number: 892.1200
    Intended Use

    The NeuroQ™-PET DP program is indicated to perform a quantitative analysis of FDG-PET brain scans using an ROI count method. NeuroQ™ 3.0 provides added functionality which allows for analyzing the difference between two FDG-PET brain studies for the same patient, calculating values within user-defined regions of interest, and displaying CT and PET brain studies for the patient.

    Device Description

    NeuroQ™ 3.0 has been developed to aid in the assessment of human brain scans through quantification of mean pixel values lying within standardized regions of interest, and to provide quantified comparisons with brain scans derived from FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC). The program provides automated analysis of brain PET scans, with output that includes quantification of relative activity in 240 different brain regions, as well as measures of the magnitude and statistical significance with which activity in each region differs from mean activity values of brain regions in the AC database. The program can also be used to compare activity in brain regions of individual scans between two studies from the same patient and between symmetric regions of interest within the brain PET study, and to perform an image fusion of the patient's PET and CT data. This program was developed to run in the IDL environment, which can be executed on any nuclear medicine computer system that supports the IDL software platform. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
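
    The two-study comparison added in this version reduces to a region-by-region change measurement between a baseline and a follow-up scan. Below is a minimal sketch with simulated regional means; the 10% flag threshold is an arbitrary illustration, not a documented NeuroQ setting.

```python
import numpy as np

# Minimal sketch of comparing two FDG-PET studies of the same patient,
# region by region. All values are simulated.

rng = np.random.default_rng(2)
n_regions = 240

baseline = rng.uniform(0.8, 1.2, n_regions)           # baseline regional means
followup = baseline * rng.normal(1.0, 0.05, n_regions)  # follow-up regional means

percent_change = 100.0 * (followup - baseline) / baseline
declining = np.flatnonzero(percent_change < -10.0)  # >10% decline, arbitrary threshold
print(f"{declining.size} regions declined by more than 10% between studies")
```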

    AI/ML Overview

    The provided document is a 510(k) summary for NeuroQ 3.0, a display and analysis program for PET Brain studies. It details the device's intended use and substantial equivalence to a predicate device but does not contain detailed information about specific acceptance criteria or a dedicated study proving performance against such criteria for NeuroQ 3.0 itself.

    Instead, it refers to the validation of the initial program, NeuroQ™ - PET DP, and states that "The validation for modifications in version 3.0 can be found in Item F, Testing & Validation of this 510(k)". However, Item F is not included in the provided text.

    Therefore, many of the requested details cannot be extracted directly from the given information.

    Here's what can be inferred or stated based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    • Cannot be provided. The document refers to validation in "Item F, Testing & Validation" which is not included. It broadly claims "safety and effectiveness" based on in-house testing and clinical validation studies for the previous version, but no specific quantifiable acceptance criteria or performance metrics for NeuroQ 3.0 are given.

    2. Sample size used for the test set and the data provenance

    • Cannot be provided directly for NeuroQ 3.0. The document mentions "clinical validation studies submitted in our previous 510(k) K041022" for the initial program, but details for NeuroQ 3.0's test set are not present.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Cannot be provided. This information would typically be detailed within the "Testing & Validation" section (Item F) which is missing.

    4. Adjudication method

    • Cannot be provided. This information is not present in the given text.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • Not explicitly stated for NeuroQ 3.0. The device is described as "a display and processing program to aid in the diagnostic interpretation" and not meant to "replace or eliminate the standard visual analysis." It performs automated analysis but requires "user verification of output." While it assists physicians, an MRMC comparative effectiveness study to quantify human reader improvement with AI assistance is not described in this summary.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, implicitly. The device "provides automated analysis of brain PET scans" and "quantification of relative activity in 240 different brain regions, as well as measures of the magnitude and statistical significance." This suggests the algorithm performs an analysis independently of immediate human interaction, though "user verification of output is required." However, specific standalone performance metrics are not given.

    7. The type of ground truth used

    • Cannot be provided directly. The document refers to comparing patient studies to a "reference database" of "FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC)." This suggests "normal" or "healthy" controls form a basis for comparison, but the specific type of ground truth for assessing the accuracy of its quantitative analyses (e.g., against pathology, clinical outcomes, or expert consensus on abnormal findings) is not detailed.

    8. The sample size for the training set

    • Cannot be provided directly. The document mentions an "AC database" (asymptomatic controls) used for comparison, but the size of this database, or any other specific training set for model development (if applicable to this type of analysis program), is not disclosed.

    9. How the ground truth for the training set was established

    • Implicit for the AC database: The text states the AC database consists of "FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms." This implies a clinical assessment or screening process to confirm the asymptomatic status, which serves as their "ground truth" for normality. However, the specific methodology of this establishment is not detailed.

    K Number: K071503
    Manufacturer: SYNTERMED, INC.
    Date Cleared: 2007-07-26 (55 days)
    Regulation Number: 892.1200
    Intended Use

    The Emory Cardiac Toolbox™ (CEqual®, EGS™) 3.1 software program should be used for the quantification of myocardial perfusion (CEqual®), for the display of wall motion and quantification of left-ventricular function parameters from gated Tc-99m SPECT & PET myocardial perfusion studies (EGS™), for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface, for the assessment of cardiac mechanical dyssynchrony using phase analysis, and for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM).

    Device Description

    The Emory Cardiac Toolbox™ 3.1 is used to display gated wall motion and for quantifying parameters of left-ventricular perfusion and function from gated SPECT & PET myocardial perfusion studies. These parameters are: perfusion, ejection fraction, end-diastolic volume, end-systolic volume, myocardial mass, transient ischemic dilatation (TID), and cardiac mechanical dyssynchrony. In addition, the program offers the capability of providing the following diagnostic information: computer assisted visual scoring, prognostic information, expert system image interpretation, and patient specific 3D coronary overlay. The program can also be used for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface and for generation of the short axis, vertical, and horizontal long axis tomograms from the SPECT raw data using either filtered backprojection (FBP) or iterative reconstruction (MLEM/OSEM). The Emory Cardiac Toolbox can be used with any of the following myocardial SPECT protocols: Same Day and Two Day Sestamibi, Dual-Isotope (Tc-99m/Tl-201), Tetrofosmin, and Thallium, Rubidium-82, N-13-ammonia, FDG protocols, and user defined normal databases. This program was developed to run in the IDL environment, which can be executed on any nuclear medicine computer system that supports IDL and the Aladdin (General Electric) software development environment. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
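
    The phase analysis feature described above is conventionally computed by reducing each regional count curve over the cardiac cycle to the phase of its first Fourier harmonic; the spread of phases across the myocardium indicates dyssynchrony. Below is a minimal sketch with synthetic curves; the frame count, region count, and the simple max-minus-min bandwidth statistic are illustrative assumptions, not the documented SyncTool™ method.

```python
import numpy as np

# Minimal sketch of first-harmonic phase analysis for mechanical
# dyssynchrony. Regional time-activity curves here are synthetic.

rng = np.random.default_rng(3)
n_regions, n_frames = 600, 8  # gated acquisition, 8 frames per cycle (assumed)

t = np.arange(n_frames) / n_frames
onset = rng.normal(0.25, 0.02, n_regions)  # per-region contraction timing (cycle fraction)
curves = 1.0 + 0.3 * np.cos(2 * np.pi * (t[None, :] - onset[:, None]))

# Phase of the first Fourier harmonic of each regional curve.
first_harmonic = np.fft.rfft(curves, axis=1)[:, 1]
phase_deg = np.degrees(np.angle(first_harmonic)) % 360.0

bandwidth = phase_deg.max() - phase_deg.min()  # a wide phase spread suggests dyssynchrony
print(f"phase bandwidth = {bandwidth:.1f} degrees")
```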

    AI/ML Overview

    The information provided in the 510(k) summary (K071503) focuses on demonstrating substantial equivalence to predicate devices and detailing the validation studies conducted for different versions and features of the Emory Cardiac Toolbox. However, it does not explicitly state specific acceptance criteria in a quantitative manner or present a table comparing reported device performance against such criteria for the entire Emory Cardiac Toolbox 3.1.

    Instead, the document describes the types of validation performed and the sample sizes used for various features. It asserts the software's safety and effectiveness based on these studies and its similarity to predicate devices.

    Based on the provided text, here's a breakdown of the requested information:


    1. Table of Acceptance Criteria and Reported Device Performance

    As noted, the document states that the "expected accuracy of the initial program can be found in the multicenter trial results listed in the article by Vansant et al." (Emory Cardiac Toolbox™ (CEqual®, EGS™) Version 2.0, Ref. 510(k) #: K992450, and Version 2.1, Ref. 510(k) #: K014033). However, specific quantitative acceptance criteria for the Emory Cardiac Toolbox 3.1 (or even 2.0/2.1) are not defined within the provided text, nor is a table directly presenting performance against such criteria. The document discusses "accuracy" in general terms for previous versions, and then "development and prospective validation" for new features in 3.1.

    Therefore, a table cannot be fully constructed for explicit acceptance criteria as they are not explicitly detailed in this document. The document refers to external 510(k)s (K992450 and K014033) for baseline accuracy. For version 3.1 features, it describes validation studies conducted.

    Summary of Device Performance/Validation Studies (as described within the document):

    | Feature/Version | Study Type/Description | Sample Size | Outcome (descriptive; specific metrics not provided in this document) |
    |---|---|---|---|
    | Emory Cardiac Toolbox™ 2.0 | Phantom and computer simulations | N/A | Established effectiveness |
    | | In-house trial (LV functional parameters) | 217 patients | Established effectiveness |
    | | Multicenter trial (general effectiveness) | 80 patients | Established effectiveness ("expected accuracy" referred to other 510(k)s) |
    | | Computer assisted visual scoring | 20 patients | Successfully evaluated |
    | | Prognosis algorithms | 504 patients | Successfully evaluated |
    | | Expert system algorithms | 461 patients | Successfully evaluated |
    | | Coronary fusion algorithms | 9 patients | Successfully evaluated |
    | Emory Cardiac Toolbox™ 2.1 | Rb-82 normal limits development & validation | 176 patients | Successfully completed |
    | | PET tools for perfusion-metabolism match-mismatch | 90 patients | Successfully completed |
    | Emory Cardiac Toolbox™ 2.6 | N-13-ammonia normal limits development & validation | 144 patients | Successfully completed |
    | | 3D CT coronary artery alignment (phantom & patient) | 8 patients | Successfully completed |
    | Emory Cardiac Toolbox™ 3.1 | SPECT reconstruction validation | 10 patients (prospective) | Successfully validated |
    | | Phase analysis development (cardiac mechanical dyssynchrony) | 90 normal patients | Successfully developed |
    | | Phase analysis validation (cardiac mechanical dyssynchrony) | 75 additional patients (prospective) | Successfully validated |
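
    For reference, the iterative reconstruction option (MLEM, of which OSEM is a subset-accelerated variant) appearing in several rows above uses a simple multiplicative update. Below is a toy sketch on a made-up system matrix, not a real SPECT geometry, and with noiseless data for simplicity.

```python
import numpy as np

# Toy sketch of the MLEM update rule behind "iterative reconstruction
# (MLEM/OSEM)". OSEM applies the same update to ordered subsets of the
# projections. The system matrix and data here are simulated.

rng = np.random.default_rng(4)
A = rng.uniform(0.0, 1.0, size=(12, 6))  # maps 6 image pixels to 12 projection bins
truth = rng.uniform(1.0, 5.0, size=6)    # "true" activity distribution
y = A @ truth                            # simulated noiseless projection data

x = np.ones(6)                           # nonnegative initial estimate
sensitivity = A.sum(axis=0)              # per-pixel sensitivity, A^T 1
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sensitivity  # multiplicative MLEM step

print(np.round(x, 2))      # approaches `truth` as iterations increase
print(np.round(truth, 2))
```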

    2. Sample Size Used for the Test Set and the Data Provenance

    The document describes several test sets for different features and versions of the software.

    • Emory Cardiac Toolbox 2.0 (initial program):

      • "In-house trial validations which included an evaluation of left ventricular functional parameter calculations": 217 patients. (Provenance not specified, but "in-house" suggests internal data, likely US based, retrospective or mixed).
      • "Multicenter trial validation": 80 patients. (Provenance not specified, typically implies multiple centers, often US, could be retrospective or prospective).
      • Computer assisted visual scoring: 20 patients. (Provenance not specified).
      • Prognosis: 504 patients. (Provenance not specified).
      • Expert system: 461 patients. (Provenance not specified).
      • Coronary fusion: 9 patients. (Provenance not specified).
    • Emory Cardiac Toolbox 2.1:

      • Rb-82 normal limits validation: 176 patients. (Provenance not specified).
      • PET tools for match-mismatch validation: 90 patients. (Provenance not specified).
    • Emory Cardiac Toolbox 2.6:

      • N-13 ammonia normal limits validation: 144 patients. (Provenance not specified).
      • 3D CT coronary artery alignment validation (patient studies): 8 patients. (Provenance not specified).
    • Emory Cardiac Toolbox 3.1 (new features):

      • SPECT reconstruction validation: 10 patients. This was a prospective validation. (Provenance not specified).
      • Phase analysis validation: 75 patients. This was a prospective validation. (Provenance not specified).

    Overall Data Provenance (General): The document does not explicitly state the country of origin for the data for most studies, but the company (Syntermed, Inc.) and the submission location (Atlanta, GA) suggest a US context for clinical trials. Many studies involve "patients," implying retrospective or prospective clinical data. Multiple "validation" studies are mentioned as "prospective" for the 3.1 version's new features.


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    The document does not specify the number of experts or their qualifications used to establish ground truth for any of the studies mentioned. It simply refers to "validation" or "evaluations" that likely rely on expert review or established methods.


    4. Adjudication Method for the Test Set

    The document does not specify any adjudication methods (e.g., 2+1, 3+1) for establishing ground truth in any of the described test sets.


    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    The document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI assistance vs. without it. The device is referred to as a "display and processing program to aid in the diagnostic interpretation," implying it assists physicians. However, it does not quantify this assistance in a comparative MRMC study. The "expert system interpretation" mentioned for version 2.0 suggests an AI component, but its impact on human reader performance is not detailed.


    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done

    The device is explicitly described as serving "merely as a display and processing program to aid in the diagnostic interpretation of a patients' study. It was not meant to replace or eliminate the standard visual analysis of the gated SPECT & PET study." Furthermore, it states that "user verification of output is required." This strongly indicates that the device is not designed or evaluated as a standalone AI system without human-in-the-loop performance. Its role is assistive, and the physician retains final responsibility. Therefore, a standalone performance study as a replacement for human interpretation was likely not performed, nor would it be appropriate for its intended use.


    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    The document refers to various types of "validation" but does not explicitly detail the specific type of ground truth used for each. However, based on the context of cardiac imaging and functional parameters, the ground truth likely involved a combination of:

    • Clinical outcomes/patient data: For prognostic information and normal limits.
    • Expert interpretation/consensus: For visual scoring, expert system interpretation, and possibly for validating quantitative measurements against a clinical standard.
    • Pathology: Not directly mentioned, but could be a component for definitive diagnoses if included in certain patient cohorts.
    • Phantom studies: Explicitly mentioned for version 2.0 (effectiveness) and version 2.6 (3D CT coronary artery alignment), which would use known, controlled configurations as ground truth.
    • Animal studies: Explicitly mentioned for version 3.1 (development for SPECT reconstruction and phase analysis), where ground truth could be established through invasive measurements or controlled conditions.

    8. The Sample Size for the Training Set

    The document mentions "development" for some features but doesn't explicitly refer to "training sets" in the modern machine learning sense with specific sample sizes. It mentions:

    • For Emory Cardiac Toolbox™ 2.1: "development and validation of Rb-82 normal limits (n=176)". This 176-patient cohort likely served as a development/training set for establishing these normal ranges.
    • For Emory Cardiac Toolbox™ 2.6: "development and validation of N-13-ammonia normal limits (n=144)". This 144-patient cohort likely served as a development/training set.
    • For Emory Cardiac Toolbox™ 3.1: "development... in 90 normal patients" for phase analysis. This 90-patient cohort effectively served as a training/development set for the phase analysis algorithm.

    Other studies are described as "evaluations" or "validations" rather than "development" directly tied to a specific cohort, suggesting pre-existing algorithms were tested on these patient populations.


    9. How the Ground Truth for the Training Set Was Established

    Similar to the ground truth for the test set, the document is not specific on this point. For the "development" cohorts mentioned:

    • Normal limits (Rb-82, N-13 ammonia): Ground truth for "normal" would typically be established based on rigorous clinical criteria, including detailed patient history, lack of cardiac symptoms, clean EKG, and other physiological measurements, often confirmed by expert clinical review.
    • Phase analysis (90 normal patients): Ground truth for "normal" in phase analysis (assessing cardiac mechanic dyssynchrony) would involve established clinical criteria defining normal cardiac rhythm and function, likely based on expert cardiological assessment, EKG, echocardiography, and other standard diagnostic tests to confirm the absence of dyssynchrony.
    • Phantom and animal studies: As mentioned for other versions, these would use known physical properties or invasively measured physiological parameters to establish ground truth.

    In general, it relies on conventional medical diagnostic methods and expert clinical assessment as the reference standard.


    K Number: K070089
    Device Name: SYNTERMED LIVE
    Manufacturer: SYNTERMED, INC.
    Date Cleared: 2007-03-02 (52 days)
    Regulation Number: 892.2050
    Intended Use

    The Syntermed Live™ software program should be used for the transfer of medical images and data to a secure server for storage, where they can subsequently be accessed, displayed, and processed on any PC connected to the Internet using either proprietary software applications, a DICOM viewer, or a web browser.

    Device Description

    Syntermed Live™ is an Internet-based system used to securely access, transfer, display, archive, and process medical images and data generated from a hospital or clinic. The Syntermed Live™ project handles the transmission and retrieval of output data files from Syntermed proprietary applications, DICOM image files, and other medical digital data files to a Syntermed web server, so that they can be archived and delivered to Syntermed customers for remote storage and review.

    AI/ML Overview

    The provided text describes a 510(k) submission for the Syntermed Live™ software program, which is an internet-based system for accessing, transferring, displaying, archiving, and processing medical images and data.

    However, the document does not contain specific acceptance criteria, a detailed study description with a table of reported performance against acceptance criteria, or most of the other requested information for a typical medical device study that would establish such criteria.

    Here's a breakdown of what can be gleaned and what is missing, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    • Missing. The document states that "The expected accuracy of the program can be found in our initial 510(k) submission of the Emory Cardiac Toolbox, K980914." However, these specific acceptance criteria and performance metrics are not included in the provided text.

    2. Sample sized used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: 30 patient studies.
    • Data Provenance: Not specified (country of origin, retrospective/prospective).
    • Study Design: The validation "compared the visual interpretation of 30 patient studies using the previous analysis to visual interpretation of the Syntermed Live generated images." This suggests a comparative study against a previous method of displaying these images.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Missing. The document mentions "visual interpretation of... studies" but does not specify the number or qualifications of the physicians/experts performing this interpretation for the ground truth or comparison.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Missing. No information on adjudication methods is provided.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • Likely Not in the traditional sense. The study compares "visual interpretation of 30 patient studies using the previous analysis to visual interpretation of the Syntermed Live generated images." This is a comparison of display methods, not necessarily an AI-assisted interpretation versus unassisted human reading to assess improvement. It evaluates the equivalence of the display platform. No effect size for human reader improvement with AI assistance is mentioned.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Not applicable for this device as described. Syntermed Live™ is described as a "display program to aid in the visual interpretation" and a "tool to display the patient's medical images." It is described as a software for transfer, storage, and display, not an AI algorithm that makes interpretations independently. The "final responsibility for interpretation of the study lies with the physician."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Implied Expert Interpretation. The ground truth for the comparison seems to be derived from the "visual interpretation of... studies." This suggests an expert consensus or interpretation, but the specifics are not detailed.

    8. The sample size for the training set

    • Not applicable / Not explicitly mentioned. Since the device is a display and data transfer system, not an AI model that learns from data in a training set, the concept of a "training set" for an algorithm doesn't directly apply in the usual sense. The document discusses "software development" and "in-house testing," but not a data-driven training process for an AI model.

    9. How the ground truth for the training set was established

    • Not applicable. As above, there is no mention of a training set for an AI algorithm.

    Summary of the "Study" (as much as can be discerned):

    The study described is a comparison of the visual interpretation of 30 patient studies. It aimed to show that interpretations made using images generated/displayed by Syntermed Live™ were equivalent to those made using a "previous analysis" method. This implies a non-inferiority or equivalence study for the display and transfer capabilities, rather than an efficacy study for a diagnostic algorithm. The primary finding mentioned is that the program is "substantially equivalent" to the predicate device (INTEGRADWeb MPR/MIP™) for its intended purpose of image display and transfer.


    K Number
    K042258
    Device Name
    BP-SPECT V1.0
    Manufacturer
    Date Cleared
    2004-10-04

    (45 days)

    Product Code
    Regulation Number
    892.1200
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    SYNTERMED, INC.

    Intended Use

    The BP-SPECT™ software program should be used for the display of wall motion and quantification of left and right ventricular function parameters from gated Tc99m blood pool SPECT studies.

    Device Description

    The BP-SPECT™ is used to display gated wall motion and for quantifying parameters of left and right ventricular function from gated blood pool SPECT studies. These parameters are: ejection fraction, end-diastolic volume, end-systolic volume, stroke volume, maximum and average emptying and filling rates, ejection and filling periods and times, and regional ejection fraction. This program was developed to run in the IDL operating system environment, which can be executed on any nuclear medicine computer system which supports the IDL software development environment. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
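
    The submission lists these derived quantities without describing how they are computed. For orientation only, here is a minimal Python sketch (the actual program runs in IDL, and Syntermed's algorithms are not disclosed here) of how such global parameters follow from a gated volume-vs-time curve; normalizing the emptying and filling rates to EDV/s is an assumption based on common nuclear cardiology convention, not documented BP-SPECT™ behavior.

```python
import numpy as np

def lv_function_parameters(volume_ml, frame_duration_s):
    """Global function parameters from a gated ventricular volume-vs-time
    curve (one volume per gate over the cardiac cycle)."""
    edv = float(np.max(volume_ml))   # end-diastolic volume (mL)
    esv = float(np.min(volume_ml))   # end-systolic volume (mL)
    sv = edv - esv                   # stroke volume (mL)
    ef = 100.0 * sv / edv            # ejection fraction (%)

    # Emptying/filling rates from the first difference of the curve,
    # normalized to end-diastolic volumes per second (EDV/s).
    dv_dt = np.diff(volume_ml) / frame_duration_s
    peak_emptying = float(-dv_dt.min()) / edv   # fastest volume loss
    peak_filling = float(dv_dt.max()) / edv     # fastest volume gain

    return {"EDV_ml": edv, "ESV_ml": esv, "SV_ml": sv, "EF_pct": ef,
            "peak_emptying_EDV_per_s": peak_emptying,
            "peak_filling_EDV_per_s": peak_filling}

# Synthetic 16-gate cycle, 50 ms per gate: EF = (150 - 70) / 150 = 53.3%
vols = np.array([150, 140, 120, 100, 85, 75, 70, 72,
                 80, 95, 110, 125, 135, 142, 147, 150], dtype=float)
print(lv_function_parameters(vols, 0.05))
```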

    AI/ML Overview

    The provided text describes the Syntermed, Inc. BP-SPECT™ software program, which is used for displaying wall motion and quantifying left and right ventricular function parameters from gated Tc99m blood pool SPECT studies. However, the document does not provide specific acceptance criteria or a detailed study methodology with the requested information (sample sizes, expert qualifications, adjudication methods, MRMC studies, ground truth details, etc.).

    The document states:

    • "The expected accuracy of the program can be found in Item H. Testing & Validation"
    • "Specific details and results concerning the validation of the . BP-SPECT™ program are listed in Item H, Testing & Validation."

    Unfortunately, Item H. Testing & Validation is not included in the provided text. Therefore, I cannot extract the detailed information requested regarding the acceptance criteria and the study that proves the device meets them.

    The text does make general statements about validation:

    • "The effectiveness of the program has been established in in-house testing and clinical validation studies."
    • "We contend that the method employed for the development and the final in-house validation results of this medical display software program, . BP-SPECT™ program, have proven its safety and effectiveness."

    Without access to "Item H. Testing & Validation," the specific details of the acceptance criteria and the study proving compliance cannot be provided.


    K Number
    K041022
    Device Name
    NEUROQ - PET DP
    Manufacturer
    Date Cleared
    2004-06-17

    (58 days)

    Product Code
    Regulation Number
    892.1200
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    SYNTERMED, INC.

    Intended Use

    NeuroQ™ - PET DP Program is indicated to:

      1. assist with regional assessment of human brain scans, through automated quantification of mean pixel values lying within standardized regions of interest (S-ROI's), and
      2. assist with comparisons of the activity in brain regions of individual scans relative to normal activity values found for brain regions in FDG-PET scans, through quantitative and statistical comparisons of S-ROI's.
    Device Description

    The NeuroQ™ - PET DP Display and Analysis Program has been developed to aid in the assessment of human brain scans through quantification of mean pixel values lying within standardized regions of interest, and to provide quantified comparisons with brain scans derived from FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC). The Program provides automated analysis of brain PET scans, with output that includes quantification of relative activity in brain regions, as well as measures of the magnitude and statistical significance with which activity in each region differs from mean activity values of brain regions in the AC database.

    This program was developed to run in the IDL operating system environment, which can be executed on any nuclear medicine computer system which supports the IDL software platform. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
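
    Neither the S-ROI definitions nor the statistical model are given in this excerpt. As a minimal sketch of the stated approach — regional means inside standardized ROIs, expressed as relative activity and compared to an asymptomatic-control database via z-scores — the following Python fragment may help; the mask representation, dictionary layout, and choice of a whole-brain normalization region are all assumptions, not documented NeuroQ™ behavior.

```python
import numpy as np

def sroi_z_scores(scan, roi_masks, ac_mean, ac_sd, norm_region="whole_brain"):
    """Mean activity per standardized ROI (S-ROI), expressed relative to a
    normalization region and compared to asymptomatic-control (AC) norms."""
    # Relative activity: regional mean divided by a reference region's mean,
    # so scans acquired at different global intensities are comparable.
    ref = scan[roi_masks[norm_region]].mean()
    results = {}
    for name, mask in roi_masks.items():
        rel = scan[mask].mean() / ref
        z = (rel - ac_mean[name]) / ac_sd[name]   # SDs from the AC mean
        results[name] = {"relative_activity": rel, "z_score": z}
    return results
```

    Under this reading, a strongly negative z-score for a region would flag relative hypometabolism against the AC database, subject, as the summary stresses, to the physician's interpretation.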

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the NeuroQ™ - PET DP program, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary (K041022) does not explicitly state quantified acceptance criteria (e.g., minimum sensitivity, specificity, accuracy percentages) or corresponding reported device performance metrics in a readily digestible format. Instead, it relies on demonstrating substantial equivalence to a predicate device (Mirage/NeuroGam™) and general statements about safety and effectiveness.

    The closest we get to "performance" is the description of the device's function:

    • Assist with regional assessment of human brain scans — The NeuroQ™ program provides automated quantification of mean pixel values lying within standardized regions of interest (S-ROI's).
    • Assist with comparisons of activity relative to normal values — The program performs quantitative and statistical comparisons of S-ROI's from individual scans against normal activity values found for brain regions in FDG-PET scans from a database of asymptomatic controls. The output includes quantification of relative activity in brain regions, as well as measures of the magnitude and statistical significance with which activity in each region differs from mean activity values of brain regions in the AC database. User verification of output is required and manual processing capability is provided. The program was determined safe and effective through initial design, coding, debugging, testing, and software development, together with in-house and external validation; specific details and results concerning the validation of the NeuroQ™ – PET DP program are listed in Item H, Testing & Validation (not provided in this excerpt).
    • Safety and effectiveness — "The program is substantially equivalent to the Mirage NeuroGam™ program which has been in the marketplace for over two years" and "it is intended for the same purpose and raises no new issues of safety or effectiveness." The program has no direct adverse effect on health, as the physician's interpretation remains crucial.

    2. Sample Size Used for the Test Set and Data Provenance

    This information is not provided in the excerpt. The document mentions "in-house and external validation" and states that specific details and results concerning the validation of the NeuroQ™ – PET DP program are listed in Item H, Testing & Validation, but Item H is not included.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    This information is not provided in the excerpt. The nature of the ground truth (e.g., based on expert consensus for clinical diagnosis) is implied by the software's function to assist physicians, but specific details on expert involvement in validation are missing.

    4. Adjudication Method for the Test Set

    This information is not provided in the excerpt.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study demonstrating an effect size of human readers improving with AI vs without AI assistance is not mentioned in this document. The device is presented as an aid to the physician, but no formal comparative effectiveness study against unassisted reading is described.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    The NeuroQ™ is designed as an "aid in the assessment" of brain scans, intended to "co-register and display brain PET scans and compare the patient's study to a reference database." The document explicitly states, "It was not meant to replace or to be used as a primary diagnostic interpretation of a patient's PET brain scan." This suggests that performance was not evaluated in a standalone capacity and that the intended use always keeps a human in the loop.

    7. The Type of Ground Truth Used

    Based on the device's function, the ground truth for the "normal activity values found for brain regions" likely came from a database of "FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC)."

    For comparison to patient scans, the implied ground truth would be the physician's final interpretation based on integrating all clinical and diagnostic information, including the NeuroQ™ results, but the document doesn't explicitly detail how this ground truth was established for a test set.

    8. The Sample Size for the Training Set

    The document mentions "defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC)" as the source of its normal database. However, the sample size for this training (or reference) set is not provided.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the "training set" (referred to as the "AC database" or "reference database") was established by using FDG-PET studies of "defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC)." This implies a clinical assessment of individuals to confirm they are "normal" before their scans contribute to the reference database.


    K Number
    K040141
    Manufacturer
    Date Cleared
    2004-01-30

    (8 days)

    Product Code
    Regulation Number
    892.1200
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    SYNTERMED, INC.

    Intended Use

    The Emory Cardiac Toolbox™ (CEqual®, EGS™) 2.6 software program should be used for the quantification of myocardial perfusion (CEqual®), for the display of wall motion and quantification of left-ventricular function parameters from gated Tc-99m SPECT & PET myocardial perfusion studies (EGS™), and for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface.

    Device Description

    The Emory Cardiac Toolbox™ 2.6 is used to display gated wall motion and for quantifying parameters of left-ventricular perfusion and function from gated SPECT & PET myocardial perfusion studies. These parameters are: perfusion, ejection fraction, end-diastolic volume, end-systolic volume, myocardial mass, and transient ischemic dilatation (TID). In addition, the program offers the capability of providing the following diagnostic information: computer assisted visual scoring, prognostic information, expert system image interpretation, and patient specific 3D coronary overlay. The program can also be used for the 3D alignment of coronary artery models from CT coronary angiography onto the left ventricular 3D epicardial surface. This program was developed to run in the IDL operating system environment, which can be executed on any nuclear medicine computer system which supports IDL and the Aladdin (General Electric) software development environment. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
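
    The summary names the quantified outputs but not the algorithms behind them. One widely used approach to perfusion quantification in SPECT — and a plausible reading of CEqual®'s comparison of patient data against normal databases — is a polar-map comparison to normal limits. The Python sketch below illustrates that general technique only; the normalization scheme and the 2.5 SD threshold are illustrative assumptions, not documented CEqual® parameters.

```python
import numpy as np

def perfusion_defect_map(polar_map, normal_mean, normal_sd, sd_threshold=2.5):
    """Flag polar-map samples whose normalized perfusion falls more than
    `sd_threshold` standard deviations below a normal-limits database."""
    # Scale the patient map to its own maximum so it sits on the same
    # relative scale as the normal database.
    norm = polar_map / polar_map.max()
    z = (norm - normal_mean) / normal_sd
    defect = z < -sd_threshold               # hypoperfused samples
    extent_pct = 100.0 * defect.mean()       # defect extent, % of the map
    return defect, extent_pct
```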

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Emory Cardiac Toolbox™ 2.6, based on the provided 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state pre-defined acceptance criteria with numerical targets. Instead, it describes a series of validation studies with their corresponding patient numbers. The "device performance" in this context refers to the successful evaluation and validation of various algorithms, implying they met the implicit criteria for functionality and accuracy.

    | Feature/Parameter Tested | Number of Patients/Cases | Study Type/Purpose | Reported Performance (Implicit Acceptance) |
    | --- | --- | --- | --- |
    | Left ventricular functional parameter calculations | 217 (initial program, in-house) & 80 (multicenter) | Phantom & multicenter trial validation (accuracy) | Successfully evaluated and validated |
    | Computer assisted visual scoring | 20 patients | Validation | Successfully evaluated |
    | Prognosis program | 504 patients | Validation | Successfully evaluated |
    | Expert system | 461 patients | Validation | Successfully evaluated |
    | Coronary fusion algorithms | 9 patients | Validation | Successfully evaluated |
    | Normal limits (Emory Cardiac Toolbox™ 2.1) | 176 patients | Validation | Successfully completed |
    | PET tools for assessment of perfusion–metabolism match-mismatch | 90 patients | Validation | Successfully completed |
    | N-13-ammonia normal limits | 144 patients | Validation | Successfully completed |
    | Alignment of 3D coronary artery models onto the left ventricular 3D epicardial surface | Not explicitly given | Validation | Successfully completed |

    2. Sample Sizes and Data Provenance

    • Test Set Sample Sizes:

      • Left ventricular functional parameter calculations: 217 (in-house) and 80 (multicenter).
      • Computer assisted visual scoring: 20 patients.
      • Prognosis program: 504 patients.
      • Expert system: 461 patients.
      • Coronary fusion algorithms: 9 patients.
      • Normal limits (Emory Cardiac Toolbox™ 2.1): 176 patients.
      • PET tools for perfusion-metabolism: 90 patients.
      • N-13-ammonia normal limits: 144 patients.
      • Alignment of 3D models of coronary arteries: Not explicitly stated as a separate patient cohort, but implied to be validated with existing data or a subset.
    • Data Provenance: The document mentions "in-house" validation and a "multicenter trial validation," indicating that data was gathered from more than one institution. Specific countries of origin are not detailed, nor is it explicitly stated whether the data was retrospective or prospective, though multicenter trials are often prospective or a mix.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not specify the number of experts used to establish ground truth or their qualifications. It mentions "computer assisted visual scoring," implying human visual interpretation, but the details of the ground truth establishment for the various functional parameter calculations and algorithms are not provided.

    4. Adjudication Method

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) for establishing ground truth on the test sets.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    An MRMC comparative effectiveness study quantifying how much human readers improve with AI versus without AI assistance is not described in the provided text. The document refers to the program as "merely a display and alignment and processing program to aid in the standard visual analysis" and states it is "not meant to replace or eliminate the recall of the patients' clinical and diagnostic information." This positions the device as an aid rather than a replacement, but its efficacy over human-only reading is not quantified.

    6. Standalone (Algorithm Only) Performance Study

    Yes, standalone (algorithm only) performance was done. The description of the validation studies for "left ventricular functional parameter calculations," "computer assisted visual scoring," "prognosis program," "expert system," "coronary fusion algorithms," "normal limits," "PET tools," and "N-13-ammonia normal limits" indicates that the algorithms within the Toolbox were evaluated for their accuracy and functionality. The general wording of "successfully evaluated" for these specific functions implies their standalone performance was assessed.

    7. Type of Ground Truth Used

    The type of ground truth used is not explicitly detailed for each study. However, given the context of myocardial perfusion, ejection fraction, ventricular volumes, and coronary artery models, it's highly probable that a combination of:

    • Expert Consensus: For visual scoring, qualitative interpretation, and possibly for establishing "normal limits."
    • Reference Standards/Clinical Data: For validating quantitative measurements (e.g., ejection fraction, volumes) against established methods or clinical outcomes, although this is not explicitly stated as "outcomes data."
    • Pathology: Not directly mentioned, but could be a component for certain diagnoses.

    The statement that the program itself is "not perfect, and will be accompanied with some false positive and false negative results" suggests that there was a ground truth against which these "false positives and negatives" were defined.
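
    No such counts are reported in this summary, but the mention of false positives and false negatives presupposes a standard 2×2 tabulation against a binary ground truth. For reference only, a minimal sketch of those definitions (no Emory Cardiac Toolbox™ figures are implied):

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Standard 2x2 metrics against a binary ground truth."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv
```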

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size used for the training set of the algorithms. The numbers provided (217, 80, 20, 504, 461, 9, 176, 90, 144 patients) relate to validation studies. For a device cleared in 2004, common practice at the time often involved using the same data sets (or subsets thereof) for both development/training and validation, or it might not have been standard to strictly delineate "training" and "test" sets in the same way as modern machine learning development.

    9. How Ground Truth for the Training Set Was Established

    Since an explicit training set size isn't provided, the method for establishing its ground truth is also not detailed. However, it's reasonable to infer that if a training set were used, its ground truth would have been established through methodologies similar to those used for validation sets (e.g., expert consensus, reference standards, and clinical data), but these details are not present in the provided text.


    K Number
    K020300
    Manufacturer
    Date Cleared
    2002-04-23

    (84 days)

    Product Code
    Regulation Number
    892.1200
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    SYNTERMED, INC.

    Intended Use

    The Northwestern Gated Blood Pool SPECT (NUMUGAS™) software program is indicated for use in the display of wall motion and quantification of left ventricular functional parameters from gated Tc-99m Blood Pool SPECT studies.

    Device Description

    The Northwestern Gated Blood Pool SPECT™ (NUMUGAS™) is used to display gated wall motion and for quantifying parameters of left-ventricular function from gated blood pool SPECT studies. These parameters are: ejection fraction, end-diastolic volume, end-systolic volume, stroke volume, maximum and average emptying and filling rates, ejection and filling periods and times, and regional ejection fraction. This program was developed to run in the IDL operating system environment which can be executed on any nuclear medicine computer system which supports the IDL software development environment. The program processes the studies automatically; however, user verification of output is required and manual processing capability is provided.
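
    The global parameters mirror those of BP-SPECT™ above; the one item worth illustrating separately is regional ejection fraction, which applies the same EDV/ESV arithmetic to each region's own volume curve. A minimal Python sketch, assuming regional volume curves are available as rows of an array (the data layout is hypothetical, not NUMUGAS™'s internal representation):

```python
import numpy as np

def regional_ejection_fractions(regional_volumes):
    """Regional EF per segment from gated regional volume curves.

    regional_volumes: array of shape (n_regions, n_gates), one row per
    region, one column per gate of the cardiac cycle.
    """
    edv = regional_volumes.max(axis=1)    # per-region end-diastolic volume
    esv = regional_volumes.min(axis=1)    # per-region end-systolic volume
    return 100.0 * (edv - esv) / edv      # regional EF in percent
```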

    AI/ML Overview
