510(k) Data Aggregation

Found 3 results

    K Number: K241868
    Device Name: xR IVD
    Manufacturer:
    Date Cleared: 2025-09-19 (449 days)
    Product Code:
    Regulation Number: 866.6080
    Reference & Predicate Devices:
    Predicate For: N/A

    Intended Use

    The Tempus xR IVD assay is a qualitative next generation sequencing-based in vitro diagnostic device that uses targeted high throughput hybridization-based capture technology for detection of rearrangements in two genes, using RNA isolated from formalin-fixed paraffin embedded (FFPE) tumor tissue specimens from patients with solid malignant neoplasms.

    Information provided by xR IVD is intended to be used by qualified health care professionals in accordance with professional guidelines in oncology for patients with previously diagnosed solid malignant neoplasms. Results from xR IVD are not intended to be prescriptive or conclusive for labeled use of any specific therapeutic product.

    Device Description

    xR IVD is a next generation sequencing (NGS)-based assay for the detection of alterations from RNA that has been extracted from routinely obtained FFPE tumor samples. Extracted RNA undergoes conversion to double stranded cDNA and library construction, followed by hybridization-based capture using a whole-exome targeting probe set with supplemental custom Tempus-designed probes. Using the Illumina® NovaSeq 6000 platform qualified by Tempus, hybrid-capture–selected libraries are sequenced, targeting > 6 million unique deduplicated reads. Sequencing data is processed and analyzed by a bioinformatics pipeline to detect gene rearrangements, including rearrangements in BRAF and RET.

    Alterations are classified for reporting on the clinical report as Level 2 or Level 3 alterations, in accordance with the FDA Fact Sheet describing CDRH's Approach to Tumor Profiling for Next Generation Sequencing Tests, as follows:

    • Level 2: Genomic Findings with Evidence of Clinical Significance
    • Level 3: Genomic Findings with Potential Clinical Significance

    xR IVD is intended to be performed with the following key components, each qualified and controlled by Tempus under its Quality Management System (QMS):

    • Reagents
    • Specimen Collection Box
    • Software
    • Sequencing Instrumentation

    1. Reagents

    All reagents used in the operation of xR IVD are qualified by Tempus.

    2. Test Kit Contents

    xR IVD includes a specimen collection and shipping box (the Specimen Box). The Specimen Box contains the following components:

    • Informational Brochure with Specimen Requirements
    • Collection Box Sleeve
    • Collection Box Tray
    • Seal Sticker
    • ISO Label

    3. Software

    The proprietary xR IVD bioinformatics pipeline comprises data analysis software necessary for the xR IVD assay (software version is displayed on the xR IVD clinical report). The software is used with sequence data generated from NovaSeq 6000 instruments qualified by Tempus. Data generated from the pipeline is saved to a cloud infrastructure.

    4. Instrument

    xR IVD uses the Illumina NovaSeq 6000 Sequencer, a high throughput sequencing system employing sequencing-by-synthesis chemistry. The xR IVD device is intended to be performed with serial number-controlled instruments. All instruments are qualified by Tempus utilizing the Tempus Quality Management System (QMS).

    5. Sample preparation

    FFPE (formalin-fixed, paraffin-embedded) tumor specimens are received either as unstained tissue sections on slides or as an FFPE block, using materials supplied in the Specimen Box, and are prepared following standard pathology practices. Preparation and review of a Hematoxylin and Eosin (H&E) slide is performed prior to initiation of the xR IVD assay. H&E stained slides are reviewed by a board-certified pathologist to ensure that adequate tissue, tumor content, and sufficient nucleated cells are present to satisfy the minimum tumor content (tumor purity) requirement.

    Specifically, the minimum recommended tumor purity for detection of alterations by xR IVD is 20%, with macrodissection required for specimens with tumor purity lower than 20%. The recommended tumor size and minimum tumor content needed for testing are shown in Table 1, below.

    Table 1: Tumor Volume and Minimum Tumor Content

    Tissue Type: FFPE blocks or 5 μm slides
    Recommended Size: 1 mm³ of total tissue is recommended
    Minimum Tumor Content: 20%
    Macro-Dissection Requirements*: Macro-dissection must be done if the tumor content/purity is less than 20%
    Limitations: Archival paraffin embedded material subjected to acid decalcification is unsuitable for analysis. Samples decalcified in EDTA are accepted.
    Storage: Room temperature

    *These requirements are based on the specimen's tumor content

    6. RNA extraction

    Nucleic acids are extracted from tissue specimens using a magnetic bead-based automated methodology followed by DNase treatment. The remaining RNA is assessed for quantity and quality (sizing) at RNA QC1, a quality check (QC) to ensure adequate RNA extraction. The minimum amount of RNA required to perform the test is 50 ng. RNA is fragmented using heat and magnesium, with variable parameters, to yield similarly sized fragments from RNA inputs with different starting size distributions.

    7. Library preparation

    Strand-specific RNA library preparation is performed by synthesizing the first-strand cDNA using a reverse transcriptase (RT) enzyme, followed by second-strand synthesis using a DNA polymerase to create double-stranded cDNA. Adapters are ligated to the cDNA, and the adapter-ligated libraries are cleaned using a magnetic bead-based method. The libraries are amplified with high-fidelity, low-bias PCR using primers complementary to adapter sequences. Amplified libraries are subjected to a 1X magnetic bead-based clean-up to eliminate unused primers, and quantity is assessed (QC2) to ensure that pre-captured libraries were successfully prepared. Each amplified sample library must contain a minimum of 150 ng of cDNA to proceed to hybridization.

    8. Hybrid capture

    After library preparation and amplification, the adapter-ligated library targets are captured by hybridization, clean-up of hybridized targets is performed, and unbound fragments are washed away. The captured targets are enriched by PCR amplification followed by a magnetic bead-based clean-up to remove primer dimers and residual reagents. To reduce non-specific binding of untargeted regions, human COT DNA and blockers are included in the hybridization step. Each post-capture library pool must satisfy a minimum calculated molarity (≥2.7 nM) to proceed to sequencing (QC3). The molarity is used to load the appropriate concentration of library pools onto sequencing flow cells.
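
    The QC3 molarity gate can be illustrated with a short calculation. The sketch below estimates pool molarity from a fluorometric concentration and an average fragment length using the standard double-stranded DNA approximation (about 660 g/mol per base pair); the function names, inputs, and conversion are illustrative assumptions, as the summary does not describe how the calculated molarity is derived.

```python
# Minimal sketch of a QC3-style molarity check (illustrative only).
# Uses the common dsDNA approximation of ~660 g/mol per base pair; only the
# 2.7 nM threshold comes from the 510(k) summary.

QC3_MIN_MOLARITY_NM = 2.7  # minimum calculated molarity to proceed to sequencing


def library_molarity_nm(conc_ng_per_ul: float, avg_fragment_bp: float) -> float:
    """Estimate library pool molarity in nM from concentration and mean fragment size."""
    # ng/uL -> nM: (ng/uL) / (660 g/mol/bp * fragment length in bp) * 1e6
    return conc_ng_per_ul / (660.0 * avg_fragment_bp) * 1.0e6


def passes_qc3(conc_ng_per_ul: float, avg_fragment_bp: float) -> bool:
    return library_molarity_nm(conc_ng_per_ul, avg_fragment_bp) >= QC3_MIN_MOLARITY_NM


if __name__ == "__main__":
    # Example: a 350 bp library pool at 1.5 ng/uL is ~6.5 nM, which passes QC3.
    print(round(library_molarity_nm(1.5, 350), 2), passes_qc3(1.5, 350))
```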

    9. Sequencing

    The amplified target-captured libraries are sequenced with a 2x76 read length to an average of 50 million total reads on an Illumina NovaSeq 6000 System using patterned flowcells (SP, S1, S2, or S4). Pooled sample libraries are fluorometrically quantified and normalized into a sequencing pool of up to 28 samples (SP flowcell), 56 samples (S1 flowcell), 140 samples (S2 flowcell), or 336 samples (S4 flowcell), with each flowcell including 2 external controls. Partial batches are supported down to a defined percentage of flowcell loading capacity, as sketched below. Pooled sample libraries are loaded on a sequencing flow cell and sequenced.
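
    As a rough illustration of the pooling constraints just described, the sketch below maps each flowcell type to its stated test-sample capacity (each flowcell also carrying 2 external controls) and flags partial batches against a minimum fill fraction. The flowcell-selection rule and the MIN_FILL_FRACTION value are hypothetical placeholders, since the summary does not state the defined percentage or how flowcells are chosen.

```python
# Illustrative sketch of batch pooling against flowcell capacity.
# Capacities come from the 510(k) summary; the flowcell-selection rule and
# MIN_FILL_FRACTION are hypothetical placeholders.

FLOWCELL_CAPACITY = {"SP": 28, "S1": 56, "S2": 140, "S4": 336}  # test samples per flowcell
EXTERNAL_CONTROLS_PER_FLOWCELL = 2
MIN_FILL_FRACTION = 0.5  # hypothetical partial-batch threshold (not stated in the summary)


def choose_flowcell(n_samples: int) -> str:
    """Pick the smallest flowcell that can hold the batch (illustrative rule)."""
    for name in ("SP", "S1", "S2", "S4"):
        if n_samples <= FLOWCELL_CAPACITY[name]:
            return name
    raise ValueError("Batch exceeds the largest supported flowcell (S4)")


def acceptable_partial_batch(n_samples: int, flowcell: str) -> bool:
    """Partial batches are supported down to a minimum fraction of loading capacity."""
    return n_samples >= MIN_FILL_FRACTION * FLOWCELL_CAPACITY[flowcell]


if __name__ == "__main__":
    fc = choose_flowcell(100)  # -> "S2"
    total_libraries = 100 + EXTERNAL_CONTROLS_PER_FLOWCELL
    print(fc, total_libraries, acceptable_partial_batch(100, fc))
```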

    10. Data Analysis

    a. Data Management System (DMS): Sequence data is automatically processed using software that tracks sample names, sample metadata, and processing status from sequencing through analysis and reporting. Reports of identified alterations are available in a web-based user interface for download. Sequencing and sample metrics are available in run and case reports, including sample and sequencing quality.

    b. Demultiplexing and FASTQ Generation: Demultiplexing software generates FASTQ files containing sequence reads and quality scores for each of the samples on a sequencing run. The FASTQ formatted data files are used for subsequent processing of samples.

    c. Indexing QC Check: Each sample is checked for an expected yield of identified sequence reads in order to detect sample pooling mistakes. Samples outside the expected range are marked as failed.

    d. Read Alignment and BAM Generation: Genome alignment is performed to map sequence reads for each sample to the human reference genome (hg19). Alignments are saved as Binary Alignment Map (BAM) formatted files, which contain read placement information relative to the reference genome with quality scores. Aligned BAM files are further processed in a pipeline to identify genomic alterations.

    e. Sample QC check: A sample QC check (QC4) evaluates the quality of the samples processed through the bioinformatics pipeline (sample-level metrics in Table 2). Samples are evaluated for contamination by measuring the percent of a tumor sample contaminated with foreign nucleic acid, with a threshold below 5%. Sample sequencing coverage is assessed through the RNA gene IDs expressed metric, a count of genes with raw expression abundance (>12,000), and the RNA GC distribution (45-59%). The sample mapping rate (>80%), RNA strand % sense (>88%), and RNA strand % failed (≤10%) metrics provide confidence in the sample quality.
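
    The sample-level thresholds above (summarized again in Table 2) amount to a simple rule-based gate. The sketch below is a minimal illustration of such a QC4 check using the thresholds stated in this summary; the metric names and the dict-based interface are assumptions, not the actual Tempus pipeline interface.

```python
# Minimal sketch of a QC4-style sample-level gate using the thresholds stated
# in the 510(k) summary (see Table 2). Field names and the dict-based
# interface are illustrative assumptions.

QC4_RULES = {
    "rna_gene_ids_expressed": lambda v: v > 12_000,
    "rna_gc_distribution_pct": lambda v: 45 <= v <= 59,
    "mapping_rate_pct": lambda v: v > 80,
    "rna_strand_pct_sense": lambda v: v > 88,
    "rna_strand_pct_failed": lambda v: v <= 10,
    "unique_deduplicated_reads": lambda v: v > 6_000_000,
    "junction_saturation_50_100_pct": lambda v: v > 1,
    "contamination_fraction_pct": lambda v: v < 5,
}


def qc4_failures(sample_metrics: dict) -> list[str]:
    """Return the names of any sample-level metrics that fail QC4."""
    return [name for name, passes in QC4_RULES.items() if not passes(sample_metrics[name])]


if __name__ == "__main__":
    metrics = {
        "rna_gene_ids_expressed": 14_250,
        "rna_gc_distribution_pct": 51,
        "mapping_rate_pct": 92,
        "rna_strand_pct_sense": 95,
        "rna_strand_pct_failed": 3,
        "unique_deduplicated_reads": 7_400_000,
        "junction_saturation_50_100_pct": 2.4,
        "contamination_fraction_pct": 1.1,
    }
    print(qc4_failures(metrics) or "sample passes QC4")
```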

    f. Alteration calling: A fully automated bioinformatics pipeline is used to identify gene rearrangements. The assay is validated to report specific gene rearrangements. Gene rearrangements are identified from reads in the genomic alignments that support a rearrangement, i.e., discordantly mapped read pairs or split reads.
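
    As a rough sketch of how split-read and discordant-pair evidence can be tallied, the snippet below uses pysam (a tool choice assumed purely for illustration; the summary does not name the software in the Tempus pipeline) to count reads near a candidate breakpoint whose mate maps to the partner locus or that carry a supplementary-alignment (SA) tag. The coordinates, window, and MAPQ cutoff are hypothetical; only the threshold of at least 4 supporting reads comes from Table 2.

```python
# Illustrative sketch of counting rearrangement-supporting reads with pysam.
# Tool choice, coordinates, and MAPQ cutoff are assumptions; only the >= 4
# supporting-read threshold comes from the 510(k) summary (Table 2).
import pysam

MIN_SUPPORTING_READS = 4  # analyte-level threshold for gene rearrangements
MIN_MAPQ = 20             # hypothetical mapping-quality cutoff


def supporting_reads(bam_path: str, chrom: str, start: int, end: int,
                     partner_chrom: str) -> int:
    """Count discordant pairs and split reads near a candidate breakpoint."""
    count = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(chrom, start, end):
            if read.is_unmapped or read.mapping_quality < MIN_MAPQ:
                continue
            discordant = (read.is_paired and not read.is_proper_pair
                          and read.next_reference_name == partner_chrom)
            split = read.has_tag("SA")  # supplementary alignment implies a split read
            if discordant or split:
                count += 1
    return count


if __name__ == "__main__":
    # Hypothetical example: evidence for a RET rearrangement in an hg19 BAM.
    n = supporting_reads("sample.bam", "chr10", 43_609_000, 43_612_000, "chr10")
    print("reported" if n >= MIN_SUPPORTING_READS else "not reported", n)
```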

    11. Controls

    a. Negative control: A no template control (NTC) is processed to serve as a negative control to validate the acceptability of all the test samples processed through extraction, library preparation and hybridization and capture steps by testing for sample or reagent contamination. The NTC is not included on the sequencing run.

    b. Positive control: xR IVD uses multiple external controls consisting of contrived material with synthetically derived alterations or a pool of multiple cell lines. A positive control sample containing known gene rearrangements will be included with each sequencing run. The external controls are processed from library preparation through sequencing to serve as an end to end control to demonstrate assay performance. The external controls are checked during library preparation and after sequencing. Failure of the external control to meet the pre-defined quality metrics will result in all test samples on the run being reported as Quality Control (QC) failure.

    12. Result reporting

    xR IVD reports oncologically relevant gene rearrangements as genomic findings with evidence of clinical significance or with potential clinical significance. Gene rearrangements are assessed as oncogenic based on required genomic regions specified in a Tempus-developed curated database. Gene rearrangements that retain the genomic region(s) required for oncogenicity are assigned a level of clinical significance consistent with FDA's Fact Sheet and reported. Gene rearrangements that do not retain the region(s) required for oncogenicity are not reported.
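
    One way to picture this reporting rule is a lookup of required regions per gene followed by a containment check. The sketch below is illustrative only: the curated database contents, region names, and level assignments are placeholders, since the summary does not disclose them.

```python
# Illustrative sketch of the reporting rule: a rearrangement is reported only
# if it retains all genomic region(s) required for oncogenicity per a curated
# database. Region names and level assignments below are placeholders.

# Hypothetical curated entries: gene -> regions that must be retained.
REQUIRED_REGIONS = {
    "RET": {"kinase_domain"},
    "BRAF": {"kinase_domain"},
}

# Hypothetical mapping of curated evidence to the FDA Fact Sheet levels.
CLINICAL_LEVEL = {"RET": "Level 2", "BRAF": "Level 2"}


def report_rearrangement(gene: str, retained_regions: set[str]) -> str | None:
    """Return the reported level of clinical significance, or None if not reported."""
    required = REQUIRED_REGIONS.get(gene)
    if required is None:
        return None  # gene is outside the validated reportable range
    if required <= retained_regions:  # all required regions are retained
        return CLINICAL_LEVEL.get(gene, "Level 3")
    return None


if __name__ == "__main__":
    print(report_rearrangement("RET", {"kinase_domain", "coiled_coil"}))  # reported
    print(report_rearrangement("RET", {"five_prime_utr"}))                # not reported
```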

    13. Quality metrics

    Reporting takes into account the quality metrics outlined in Table 2. Quality metrics are assessed across the following categories:

    • Batch-level: Metrics that are quantified per sequencing run; if the positive control fails these criteria, no results are reported for the entire batch of samples.
    • Sample-level: Metrics that are quantified per sample; no device results are generated for samples failing these metrics. These metrics are also referred to as sequencing quality control (QC4).
    • Analyte-level: Metrics that are quantified for individual alteration types. Alterations passing analyte-level metrics (threshold) are reported.

    Table 2: Summary of xR IVD Post-Sequencing Key Quality Metrics at Batch, Sample (QC4), and Analyte Levels

    Quality Metric | Batch/Sample/Analyte | Required Value
    Positive Control | Batch level | Known sequence mutations are detected
    Expression Positive Control | Batch level | ≥0.9 r²
    RNA gene IDs expressed | Sample level | >12,000
    RNA GC distribution | Sample level | 45-59%
    Mapping rate | Sample level | >80%
    RNA strand percent sense | Sample level | >88%
    RNA strand percent failed | Sample level | ≤10%
    Unique deduplicated reads | Sample level | >6,000,000
    Tumor RNA junction saturation 50_100 | Sample level | >1%
    Contamination fraction | Sample level | <5%
    Gene Rearrangements (BRAF, RET) | Analyte level | ≥4 reads

    AI/ML Overview

    N/A

    K Number: K250119
    Device Name: Tempus ECG-Low EF
    Manufacturer:
    Date Cleared: 2025-07-15 (180 days)
    Product Code:
    Regulation Number: 870.2380
    Reference & Predicate Devices:
    Predicate For: N/A

    Intended Use

    Tempus ECG-Low EF is software intended to analyze resting, non-ambulatory 12-lead ECG recordings and detect signs associated with having a low left ventricular ejection fraction (LVEF less than or equal to 40%). It is for use on clinical diagnostic ECG recordings collected at a healthcare facility from patients 40 years of age or older at risk of heart failure. This population includes but is not limited to patients with atrial fibrillation, aortic stenosis, cardiomyopathy, myocardial infarction, diabetes, hypertension, mitral regurgitation, and ischemic heart disease.

    Tempus ECG-Low EF only analyzes ECG data and provides a binary output for interpretation. Tempus ECG-Low EF is not intended to be a stand-alone diagnostic tool for cardiac conditions, should not be used for patient monitoring, and should not be used on ECGs with paced rhythms. Results should be interpreted in conjunction with other diagnostic information, including the patient's original ECG recordings and other tests, as well as the patient's symptoms and clinical history.

    A positive result may suggest the need for further clinical evaluation in order to establish a diagnosis of low LVEF. Patients receiving a negative result should continue to be evaluated in accordance with current medical practice standards using all available clinical information.

    Device Description

    Tempus ECG-Low EF is cardiovascular machine learning-based software intended for the analysis of 12-lead resting ECG recordings to detect signs of cardiovascular conditions for further referral or diagnostic follow-up. The software employs machine learning techniques to analyze ECG recordings and detect signs associated with a patient experiencing low left ventricular ejection fraction (LVEF), less than or equal to 40%. The device is designed to extract otherwise unavailable information from ECGs conducted under the standard of care, to help health care providers better identify patients who may be at risk for undiagnosed low LVEF and evaluate them for further referral or diagnostic follow-up.

    As input, the software takes data from a patient's 12-lead resting ECG (including age and sex). It is only compatible with ECG recordings collected using 'wet' Ag/AgCl electrodes with conductive gel/paste, and using FDA authorized 12-lead resting ECG machines manufactured by GE Medical Systems or Philips Medical Systems with a 500 Hz sampling rate. It checks the format and quality of the input data, analyzes the data via a trained and 'locked' machine-learning model to generate an uncalibrated risk score, converts the model results to a binary output (or reports that the input data are unclassifiable), and evaluates the uncalibrated risk score against pre-set operating points (thresholds) to produce a final result. Uncalibrated risk scores at or above the threshold are returned as 'Low LVEF Detected,' and uncalibrated risk scores below the threshold are returned as 'Low LVEF Not Detected.' This information is used to support clinical decision making regarding the need for further referral or diagnostic follow-up. Typical diagnostic follow-up could include transthoracic echocardiogram (TTE) to detect previously undiagnosed low LVEF, as described in device labeling. Results should not be used to direct any therapy against low LVEF itself. Tempus ECG-Low EF is not intended to replace other diagnostic tests.
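
    The score-to-result conversion described above can be pictured with a small sketch. The function below is an illustration under stated assumptions: the operating point, the score scale, and the handling of unclassifiable inputs as a None score are placeholders, since the actual threshold and model interface are not disclosed in the summary.

```python
# Illustrative sketch of converting an uncalibrated risk score into the binary
# device output described in the summary. The operating point and the None
# convention for unclassifiable inputs are assumptions.

OPERATING_POINT = 0.5  # hypothetical pre-set threshold; the real value is not disclosed


def classify_low_ef(risk_score: float | None) -> str:
    """Map an uncalibrated risk score to the device's binary output."""
    if risk_score is None:
        return "Unclassifiable"  # input data failed the format/quality checks
    if risk_score >= OPERATING_POINT:
        return "Low LVEF Detected"
    return "Low LVEF Not Detected"


if __name__ == "__main__":
    print(classify_low_ef(0.72))  # Low LVEF Detected
    print(classify_low_ef(0.18))  # Low LVEF Not Detected
    print(classify_low_ef(None))  # Unclassifiable
```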

    Tempus ECG-Low EF does not have a dedicated user interface (UI). Input data, comprising ECG tracings and tracing metadata (e.g., sample count, sample rate, patient age/sex), are provided to Tempus ECG-Low EF through standard communication protocols (e.g., file exchange) with other medical systems (e.g., electronic health record systems, hospital information systems, or other data display, transfer, storage, or format-conversion software). Results from Tempus ECG-Low EF are returned to users in an equivalent manner.

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study that proves the Tempus ECG-Low EF device meets them, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    Criteria | Acceptance Criteria | Reported Device Performance
    Sensitivity (for LVEF ≤ 40%) | ≥ 80% (lower bound of 95% CI) | 86% (point estimate); 84% (lower bound of 95% CI)
    Specificity (for LVEF > 40%) | ≥ 80% (lower bound of 95% CI) | 83% (point estimate); 82% (lower bound of 95% CI)
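
    Because the acceptance criteria are stated as lower bounds of 95% confidence intervals, one way to check them from raw counts is to compute each proportion's interval and compare its lower bound to 80%. The sketch below uses a two-sided Wilson score interval and hypothetical confusion-matrix counts; the clearance summary does not state which interval method or counts were actually used.

```python
# Sketch: check sensitivity/specificity acceptance criteria against the lower
# bound of a 95% Wilson score interval. Counts are hypothetical and the
# interval method is an assumption; only the >= 80% criterion comes from the text.
from math import sqrt

Z = 1.959964  # z-value for a two-sided 95% interval


def wilson_lower(successes: int, n: int, z: float = Z) -> float:
    """Lower bound of the Wilson score interval for a binomial proportion."""
    if n == 0:
        return 0.0
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom


def meets_criterion(successes: int, n: int, required_lower: float = 0.80) -> bool:
    return wilson_lower(successes, n) >= required_lower


if __name__ == "__main__":
    tp, fn = 860, 140  # hypothetical counts giving ~86% sensitivity
    tn, fp = 830, 170  # hypothetical counts giving ~83% specificity
    print(meets_criterion(tp, tp + fn), round(wilson_lower(tp, tp + fn), 3))
    print(meets_criterion(tn, tn + fp), round(wilson_lower(tn, tn + fp), 3))
```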

    Study Details

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Greater than 15,000 ECGs (specifically, 14,924 patient records are detailed in Table 1, with each patient having one ECG).
    • Data Provenance: Retrospective observational cohort study. The data was derived from 4 geographically distinct US clinical sites.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state the number of experts used or their qualifications for establishing the ground truth. It mentions that a clinical diagnosis of Low EF (LVEF ≤ 40%) was determined by a Transthoracic Echocardiogram (TTE), which is considered the gold standard for LVEF measurement. The interpretation of these TTE results to establish the ground truth would typically be done by cardiologists or trained echocardiography specialists, but the specific number and qualifications are not provided in this document.

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (such as 2+1 or 3+1) for the ground truth of the test set. The ground truth was established by correlating ECGs with TTEs to determine the presence or absence of a clinical diagnosis of low EF. It is implied that the TTE results themselves, as the gold standard, served as the definitive ground truth without a further adjudication process by multiple human readers for the TTE results in the context of this AI device validation.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    The document does not indicate that an MRMC comparative effectiveness study was performed, nor does it provide an effect size for human reader improvement with AI assistance. The study focuses on the standalone performance of the AI device.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone study was done. The described clinical performance validation evaluated the device's ability to "detect signs associated with a clinical diagnosis of low LVEF" and provided sensitivity and specificity metrics for the algorithm's output. The device "only analyzes ECG data and provides a binary output for interpretation," indicating a standalone performance assessment.

    7. The Type of Ground Truth Used

    The ground truth used was established by Transthoracic Echocardiogram (TTE), specifically used to determine the presence or absence of a clinical diagnosis of Low EF (LVEF ≤ 40%). This is a form of outcomes data / reference standard as TTE is the established clinical diagnostic method for LVEF.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: More than 930,000 ECGs (specifically, 930,689 ECGs are detailed in Table 1).

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly state how the ground truth for the training set was established. However, given that the model was trained to "detect signs associated with having a low left ventricular ejection fraction (LVEF less than or equal to 40%)" and the validation set used TTE for ground truth, it is highly probable that the training set also used LVEF measurements (likely from echocardiograms) as the ground truth. The description states the model was trained "on data from more than 930,000 ECGs," but does not detail the specific methodology for establishing the LVEF ground truth for each of these training examples.

    K Number: K233549
    Device Name: Tempus ECG-AF
    Manufacturer:
    Date Cleared: 2024-06-21 (231 days)
    Product Code:
    Regulation Number: 870.2380
    Reference & Predicate Devices:
    Predicate For: N/A

    Intended Use

    Tempus ECG-AF is intended for use to analyze recordings of 12-lead ECG devices and detect signs associated with a patient experiencing atrial fibrillation and/or atrial flutter (collectively referred to as AF) within the next 12 months. It is for use on resting 12-lead ECG recordings collected at a healthcare facility from patients:

    • 65 years of age or older,
    • without pre-existing or concurrent documentation of atrial fibrillation and/or atrial flutter,
    • who do not have a pacemaker or implantable cardioverter defibrillator, and
    • who did not have cardiac surgery within the preceding 30 days.

    Performance of repeated testing of the same patient over time has not been evaluated and results SHOULD NOT be used for patient monitoring.

    Tempus ECG-AF only analyzes ECG data. Results should be interpreted in conjunction with other diagnostic information, including the patient's original ECG recordings and other tests, as well as the patient's symptoms and clinical history. Tempus ECG-AF is not for use in patients with a history of AF, unless the AF occurred after a cardiac surgery procedure and resolved within 30 days of the procedure. It is not for use to assess risk of occurrence of AF related to cardiac surgery.

    Results do not describe a person's overall risk of experiencing AF or serve as the sole basis for diagnosis of AF, and should not be used as the basis for treatment of AF.

    Results are not intended to rule out AF follow-up.

    Device Description

    Tempus ECG-AF is cardiovascular machine learning-based notification software intended to analyze recordings of 12-lead ECG devices from patients 65 years of age and older. The software employs machine learning techniques to analyze ECG recordings and detect signs associated with a patient experiencing atrial fibrillation and/or atrial flutter (collectively referred to as AF) within the next 12 months. The device is designed to extract otherwise unavailable information from ECGs conducted under the standard of care, to help health care providers better identify patients who may be at risk for undiagnosed AF, to evaluate them for referral or further diagnostic follow-up, and to address the unmet need of reducing the number of undiagnosed AF patients.

    As input, the software takes data from a patient's 12-lead resting ECG (including age and sex). It is only compatible with ECG recordings collected using 'wet' Ag/AgCl electrodes with conductive gel/paste, and using FDA authorized 12-lead resting ECG machines manufactured by GE Medical Systems and Philips Medical Systems with a 500 Hz sampling rate. It checks the format and quality of the input data, analyzes the data via a trained and 'locked' machine-learning model to generate an uncalibrated risk score, converts the model results to a binary output (or reports that the input data are unclassifiable), and evaluates the uncalibrated risk score against pre-set operating points (thresholds) to produce a final result. Uncalibrated risk scores at or above the threshold are returned as 'increased risk' information; uncalibrated risk scores below the threshold are returned as 'no increased risk' information. This information is used to support clinical decision making regarding the need for further referral or diagnostic follow-up. Typical diagnostic follow-up could include ambulatory ECG monitoring to detect previously undiagnosed AF, as described in device labeling. Results should not be used to direct any therapy against AF itself, including anticoagulation therapy.

    Tempus ECG-AF does not have a dedicated user interface (UI). Input data, comprising ECG tracings, tracing metadata (sample count, sample rate, etc.), patient age, and patient sex, will be provided to Tempus ECG-AF through standard communication protocols (e.g., API, file exchange) with other medical systems (e.g., electronic health record systems, hospital information systems, or other medical device data display, transfer, storage, or format-conversion software). Results from Tempus ECG-AF will be returned to users in an equivalent manner.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA clearance document does not explicitly state "acceptance criteria" in a table format with specific thresholds that the device had to meet. However, it presents the results of the clinical performance validation study, which implicitly represent the device's performance against expected clinical utility. The study endpoints of sensitivity and specificity were "met," indicating they were within an acceptable range for the intended use.

    Here's a table based on the provided performance metrics:

    Performance Metric | Reported Device Performance (95% Confidence Interval)
    Sensitivity | 31% (31% - 37%)
    Specificity | 92% (91% - 92%)
    Positive Predictive Value (PPV) | 19% (15% - 23%)
    Negative Predictive Value (NPV) | 95% (95% - 96%)

    Note: While the document states "Study endpoints of sensitivity and specificity were met," it does not explicitly define what specific numerical thresholds were considered "met" for acceptance.
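
    For context, PPV and NPV depend on the prevalence of AF in the tested population as well as on sensitivity and specificity. The short sketch below applies Bayes' rule with the reported point estimates and an assumed prevalence of roughly 6%; the prevalence figure is illustrative and not taken from the document, but it approximately reproduces the reported PPV and NPV.

```python
# Sketch: relate the reported sensitivity and specificity to PPV and NPV via
# Bayes' rule. The ~6% AF prevalence is an illustrative assumption; it is not
# stated in the document.

def ppv_npv(sensitivity: float, specificity: float, prevalence: float) -> tuple[float, float]:
    """Positive and negative predictive value at a given disease prevalence."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    )
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence
    )
    return ppv, npv


if __name__ == "__main__":
    ppv, npv = ppv_npv(sensitivity=0.31, specificity=0.92, prevalence=0.06)
    print(f"PPV ~ {ppv:.0%}, NPV ~ {npv:.0%}")  # roughly 20% and 95%
```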

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 4017 patients, with one ECG analyzed per patient. (Page 6)
    • Data Provenance: Retrospective observational cohort study. Data was collected from 3 geographically distinct clinical sites (real-world data). (Page 6)

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document states that the AF status of each patient in the test set was determined based on "duplicate manual chart review" (Page 6). It does not specify the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience"). This suggests the ground truth was derived from existing medical records interpreted by qualified personnel, but the specific details of those adjudicators are not provided.

    4. Adjudication Method for the Test Set

    The document mentions "duplicate manual chart review" (Page 6) for establishing the ground truth. This implies at least two reviewers independently reviewed charts to determine AF status. It does not explicitly state a 2+1, 3+1, or other specific adjudication method if there were discrepancies between the duplicate reviews.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study described is a standalone performance validation of the AI algorithm. The document focuses on the algorithm's performance in detecting signs of AF risk rather than how human readers' performance might improve with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    Yes, a standalone performance study was done. The "Summary of Clinical Studies" section describes the evaluation of "the Tempus ECG-AF model" and its observed performance metrics (sensitivity, specificity, PPV, NPV). This is a direct evaluation of the algorithm's performance without the explicit involvement of human readers in the loop as part of the study design for device performance. The device is then intended to provide information to clinicians, but the study described is an algorithm-only evaluation against ground truth.

    7. The Type of Ground Truth Used

    The ground truth used was based on "a clinical diagnosis of AF in the next 12 months" (Page 6), determined through "duplicate manual chart review" (Page 6) and "sufficient pre- and post-ECG data available to determine that the patient was part of the intended use patient population and to enable at least 1 year of follow-up to determine the presence of a clinical diagnosis of AF." This suggests a combination of medical record review and outcomes data (clinical diagnosis of AF within 12 months based on follow-up).

    8. The Sample Size for the Training Set

    • Training Set Sample Size: > 1,500,000 ECGs and > 450,000 patients. (Page 6)

    9. How the Ground Truth for the Training Set was Established

    The document states that the "Tempus ECG-AF model was trained on data from > 1,500,000 ECGs and > 450,000 patients, with 80% of data used for training and 20% of the data used for model tuning." (Page 6)

    While it doesn't explicitly detail the methodology for establishing ground truth for the training set, it can be inferred that it involved similar processes to the test set, likely leveraging existing clinical diagnoses of AF from electronic health records or other forms of medical documentation, given the large scale of the dataset. The text does not provide specific details on manual review or expert involvement for the training set's ground truth beyond "data from" ECGs and patients.
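
    Because the training data contain far more ECGs than patients, an 80/20 split like the one described is commonly performed at the patient level so that no patient's ECGs appear in both partitions. The sketch below shows that pattern with scikit-learn's GroupShuffleSplit; the patient-level grouping and the synthetic data are assumptions for illustration, since the document only states the 80/20 proportions.

```python
# Sketch of an 80/20 train/tune split grouped by patient, so that ECGs from
# the same patient never land in both partitions. Grouping by patient is an
# assumption; the document only states the 80/20 proportions.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_ecgs = 10_000
patient_ids = rng.integers(0, 3_000, size=n_ecgs)  # hypothetical patient identifiers
X = rng.normal(size=(n_ecgs, 8))                   # placeholder features (real input: ECG waveforms)
y = rng.integers(0, 2, size=n_ecgs)                # placeholder AF-within-12-months labels

splitter = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=0)
train_idx, tune_idx = next(splitter.split(X, y, groups=patient_ids))

# No patient should appear in both partitions.
assert not set(patient_ids[train_idx]) & set(patient_ids[tune_idx])
print(len(train_idx), len(tune_idx))
```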
