
510(k) Data Aggregation

    K Number: K252029
    Device Name: AI-CVD
    Date Cleared: 2025-12-19 (172 days)
    Product Code:
    Regulation Number: 892.2050
    Age Range: All

    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    AI-CVD® is an opportunistic AI-powered quantitative imaging tool that provides automated CT-derived anatomical and density-based measurements for clinician review. The device does not provide diagnostic interpretation or risk prediction. It is solely intended to aid physicians and other healthcare providers in determining whether additional diagnostic tests are appropriate for implementing preventive healthcare plans. AI-CVD® has a modular structure where each module is intended to report quantitative imaging measurements for each specific component of the CT scan. AI-CVD® quantitative imaging measurement modules include coronary artery calcium (CAC) score, aortic wall calcium score, aortic valve calcium score, mitral valve calcium score, cardiac chambers volumetry, epicardial fat volumetry, aorta and pulmonary artery sizing, lung density, liver density, bone mineral density, and muscle & fat composition.

    Using AI-CVD® quantitative imaging measurements and their clinical evaluation, healthcare providers can investigate patients who are unaware of their risk of coronary heart disease, heart failure, atrial fibrillation, stroke, osteoporosis, liver steatosis, diabetes, and other adverse health conditions that may warrant additional risk assessment, monitoring or follow-up. AI-CVD® quantitative imaging measurements are to be reviewed by radiologists or other medical professionals and should only be used by healthcare providers in conjunction with clinical evaluation.

    AI-CVD® is not intended to rule out the risk of cardiovascular diseases. AI-CVD® opportunistic screening software can be applied to non-contrast thoracic CT scans such as those obtained for CAC scans, lung cancer screening scans, and other chest diagnostic CT scans. Similarly, AI-CVD® opportunistic screening software can be applied to contrast-enhanced CT scans such as coronary CT angiography (CCTA) and CT pulmonary angiography (CTPA) scans. AI-CVD® opportunistic bone density module and liver density module can be applied to CT scans of the abdomen and pelvis. All volumetric quantitative imaging measurements from the AI-CVD® opportunistic screening software are adjusted by body surface area (BSA) and reported both in cubic centimeter volume (cc) and percentiles by gender reference data from people who participated in the Multi-Ethnic Study of Atherosclerosis (MESA) and Framingham Heart Study (FHS). Except for coronary artery calcium scoring, other AI-CVD® modules should not be ordered as a standalone CT scan but instead should be used as an opportunistic add-on to existing and new CT scans.
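The BSA adjustment and percentile reporting described above can be sketched as follows. The clearance letter does not name the BSA formula or the percentile method, so this sketch assumes the Du Bois formula and a simple rank lookup against a sorted, sex-specific reference sample; the reference values below are stand-ins, not MESA/FHS data.

```python
import bisect

def bsa_du_bois(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Du Bois formula (an assumption;
    the clearance letter does not specify which formula is used)."""
    return 0.007184 * height_cm**0.725 * weight_kg**0.425

def bsa_indexed(volume_cc: float, bsa_m2: float) -> float:
    """Volume adjusted (indexed) by body surface area, in cc/m^2."""
    return volume_cc / bsa_m2

def percentile_rank(value: float, reference: list[float]) -> float:
    """Percentile of `value` within a sorted sex-specific reference
    distribution (MESA/FHS participants in the real device)."""
    return 100.0 * bisect.bisect_left(sorted(reference), value) / len(reference)

# Hypothetical numbers for illustration only.
bsa = bsa_du_bois(175, 80)                   # roughly 1.96 m^2
indexed = bsa_indexed(620.0, bsa)            # total cardiac volume, cc/m^2
ref = [250.0, 280.0, 300.0, 320.0, 350.0]    # stand-in reference sample
pct = percentile_rank(indexed, ref)
```

Both the raw cc value and the percentile would be reported, mirroring the dual reporting the intended-use statement describes.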

    Device Description

    AI-CVD® is an opportunistic AI-powered modular tool that provides automated quantitative imaging reports on CT scans and outputs the following measurements:

    • Coronary Artery Calcium Score
    • Aortic Wall and Valves Calcium Scores
    • Mitral Valve Calcium Score
    • Cardiac Chambers Volume
    • Epicardial Fat Volume
    • Aorta and Main Pulmonary Artery Volume and Diameters
    • Liver Attenuation Index
    • Lung Attenuation Index
    • Muscle and Visceral Fat
    • Bone Mineral Density

    The above quantitative imaging measurements enable care providers to take necessary actions to prevent adverse health outcomes.

    AI-CVD® modules are installed by trained personnel only. AI-CVD® is executed via parent software which provides the necessary inputs and receives the outputs. The software itself does not offer user controls or access.

    AI-CVD® reads a CT scan (in DICOM format) and extracts scan-specific information such as acquisition time, pixel size, and scanner type. AI-CVD® uses trained AI models that automatically segment and report quantitative imaging measurements specific to each AI-CVD® module. The output of each AI-CVD® module is fed into the parent software, which exports the results for review and confirmation by a human expert.

    AI-CVD® is a post-processing tool that works on existing and new CT scans.

    AI-CVD® passes if the human expert confirms that the segmentation highlighted by the AI-CVD® module is correctly placed on the target anatomical region. For example, the software passes if the human expert sees that the AI-CVD® cardiac chamber volumetry module has highlighted the heart anatomy.

    AI-CVD® fails if the human expert sees that the segmentation highlighted by the AI-CVD® module is not correctly placed on the target anatomical region. For example, the software fails if the human expert sees that the AI-CVD® cardiac chamber volumetry module has highlighted the lung anatomy, a portion of the sternum, or any adjacent organ. Furthermore, the software fails if the human expert sees that the quality of the CT scan is compromised by image artifacts, severe motion, or excessive noise.

    The user cannot change or edit the segmentation or results of the device. The user must accept or reject the segmentation where the AI-CVD® quantitative imaging measurements are performed.
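The accept/reject gate described above is a simple decision rule. The sketch below is a hypothetical encoding of it, not vendor code: all field names are illustrative, and the expert's judgment is represented as boolean inputs.

```python
from dataclasses import dataclass

@dataclass
class ExpertReview:
    # Inputs a human expert supplies after viewing the highlighted segmentation.
    segmentation_on_target: bool   # e.g. cardiac module highlights the heart
    has_artifacts: bool            # image artifacts present
    has_severe_motion: bool
    has_excessive_noise: bool

def review_passes(r: ExpertReview) -> bool:
    """Encodes the pass/fail rules described above: scan quality must be
    acceptable AND the segmentation must sit on the target anatomy.
    The user can only accept or reject; there is no editing path."""
    if r.has_artifacts or r.has_severe_motion or r.has_excessive_noise:
        return False
    return r.segmentation_on_target
```

A rejected review would end the workflow for that scan, since the user cannot edit the segmentation.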

    AI-CVD® is an AI-powered post-processing tool that works on non-contrast and contrast-enhanced CT scans of the chest and abdomen.

    AI-CVD® is a multi-module deep learning-based software platform developed to automatically segment and quantify a broad range of cardiovascular, pulmonary, musculoskeletal, and metabolic biomarkers from standard chest or whole-body CT scans. The AI-CVD® system builds upon the open-source TotalSegmentator as its foundational segmentation framework, incorporating additional supervised learning and model-training layers specific to each module's clinical task.

    AI/ML Overview

    The provided FDA 510(k) Clearance Letter for AI-CVD® outlines several modules, each with its own evaluation. However, the document does not provide a single, comprehensive table of acceptance criteria with reported device performance for all modules. Instead, it describes clinical validation studies and agreement analyses, generally stating "acceptable bias and reproducibility" or "acceptable agreement and reproducibility" without specific numerical thresholds or metrics. Similarly, detailed information on sample sizes, ground truth establishment methods (beyond general "manual reference standards" or "human expert knowledge"), and expert qualifications is quite limited for most modules.

    Here's an attempt to extract and synthesize the information based on the provided text, recognizing the gaps:

    Acceptance Criteria and Study Details for AI-CVD®

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state numerical acceptance criteria for each module. Instead, it describes performance in terms of agreement with manual measurements or gold standard references, generally stating "acceptable bias and reproducibility" or "comparable performance." The table below summarizes what is reported.

    | AI-CVD® Module | Acceptance Criteria (Implicit/General) | Reported Device Performance |
    |---|---|---|
    | Coronary Artery Calcium Score | Comparative safety and effectiveness with expert manual measurements. | Demonstrated comparative safety and effectiveness between expert manual measurements and both automated Agatston CAC scores and AI-derived relative density-based calcium scores. |
    | Aortic Wall & Aortic Valve Calcium Scores | Acceptable bias and reproducibility compared to manual reference standards. | Bland-Altman agreement analyses demonstrated acceptable bias and reproducibility across imaging protocols. |
    | Mitral Valve Calcium Score | Reproducible quantification compared to manual measurements. | Agreement analyses demonstrated reproducible mitral valve calcium quantification across imaging protocols. |
    | Cardiac Chambers Volume | Based on previously FDA-cleared technology (AutoChamber™ K240786). | (No new performance data presented for this specific module as it leverages a cleared predicate.) |
    | Epicardial Fat Volume | Acceptable agreement and reproducibility with manual measurements. | Agreement studies comparing AI-derived epicardial fat volumes with manual measurements and across non-contrast and contrast-enhanced CT acquisitions demonstrated acceptable agreement and reproducibility. |
    | Aorta & Main Pulmonary Artery Volume & Diameters | Low bias and comparable performance with manual reference measurements. | Agreement studies comparing AI-derived measurements with manual reference measurements demonstrated low bias and comparable performance across gated and non-gated CT acquisitions. Findings support reliability. |
    | Liver Attenuation Index | Acceptable reproducibility across imaging protocols. | Agreement analysis comparing AI-derived liver attenuation measurements across imaging protocols demonstrated acceptable reproducibility. |
    | Lung Attenuation Index | Reproducible measurements across CT acquisitions. | Agreement studies demonstrated reproducible lung density measurements across gated and non-gated CT acquisitions. |
    | Muscle & Visceral Fat | Acceptable reproducibility across imaging protocols. | Agreement analyses between AI-derived fat and muscle measurements demonstrated acceptable reproducibility across imaging protocols. |
    | Bone Mineral Density | Based on previously FDA-cleared technology (AutoBMD K213760). | (No new performance data presented for this specific module as it leverages a cleared predicate.) |

    2. Sample Size and Data Provenance for the Test Set

    • Coronary Artery Calcium (CAC) Score:
      • Sample Size: 913 consecutive coronary calcium screening CT scans.
      • Data Provenance: "Real-world" data acquired across three community imaging centers. This suggests a retrospective collection from a U.S. or similar healthcare system, though the specific country of origin is not explicitly stated. The term "consecutive" implies that selection bias was minimized.
    • Other Modules (Aortic Wall/Valve, Mitral Valve, Epicardial Fat, Aorta/Pulmonary Artery, Liver, Lung, Muscle/Visceral Fat):
      • The document refers to "agreement analyses" and "agreement studies" but does not specify the sample size for the test sets used for these individual modules.
      • Data Provenance: The document generally states that "clinical validation studies were performed based upon retrospective analyses of AI-CVD® measurements performed on large population cohorts such as the Multi-Ethnic Study of Atherosclerosis (MESA) and Framingham Heart Study (FHS)." It is unclear if these cohorts were solely used for retrospective analysis, or if the "real-world" data mentioned for CAC was also included for other modules. MESA and FHS are prospective, longitudinal studies conducted primarily in the U.S.

    3. Number of Experts and Qualifications for Ground Truth

    • Coronary Artery Calcium (CAC) Score:
      • Number of Experts: Unspecified, referred to as "expert manual measurements."
      • Qualifications: Unspecified, but implied to be human experts capable of performing manual Agatston scoring.
    • Other Modules:
      • Number of Experts: Unspecified, generally referred to as "manual reference standards" or "manual measurements."
      • Qualifications: Unspecified.

    4. Adjudication Method for the Test Set

    The document does not describe a specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth on the test set. It mentions "expert manual measurements" or "manual reference standards," suggesting that the ground truth was established by human experts, but the process of resolving discrepancies among multiple experts (if any were used) is not detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, the document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The performance data presented focuses on the standalone AI performance compared to human expert measurements.

    • Effect Size of Human Reader Improvement: Not applicable, as an MRMC study was not described.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes, the described performance evaluations for all modules (where new performance data was presented) are standalone performance studies. The studies compare the AI-CVD® algorithm's output directly against manual measurements or established reference standards.

    7. Type of Ground Truth Used

    • Coronary Artery Calcium Score: Expert manual measurements (Agatston scores).
    • Aortic Wall and Aortic Valve Calcium Scores: Manual reference standards.
    • Mitral Valve Calcium Score: Manual measurements.
    • Epicardial Fat Volume: Manual measurements.
    • Aorta and Main Pulmonary Artery Volume and Diameters: Manual reference measurements.
    • Liver Attenuation Index: (Implicitly) Manual reference measurements or established methods for hepatic attenuation.
    • Lung Attenuation Index: (Implicitly) Manual reference measurements or established methods for lung density.
    • Muscle and Visceral Fat: (Implicitly) Manual reference measurements.
    • Cardiac Chambers Volume & Bone Mineral Density: Leveraged previously cleared predicate devices, suggesting the ground truth for their original clearance would apply.

    8. Sample Size for the Training Set

    The document provides information on the foundational segmentation framework (TotalSegmentator) and hints at customization for AI-CVD® modules:

    • TotalSegmentator (Foundational Framework):
      • General anatomical segmentation: 1,139 total body CT cases.
      • High-resolution cardiac structure segmentation: 447 coronary CT angiography (CCTA) scans.
    • AI-CVD® Custom Datasets: The document states that "Custom datasets were constructed for coronary artery calcium scoring, aortic and valvular calcifications, cardiac chamber volumetry, epicardial and visceral fat quantification, bone mineral density assessment, liver fat estimation, muscle mass and quality, and lung attenuation analysis." However, it does not provide the specific sample sizes for these custom training datasets for each AI-CVD® module.

    9. How Ground Truth for the Training Set Was Established

    • TotalSegmentator (Foundational Framework): The architecture utilizes nnU-Net, which was trained on the described CT cases. Implicitly, these cases would have had expert-derived ground truth segmentations for training the neural network.
    • AI-CVD® Custom Datasets: "For each module, iterative model enhancement was applied: human reviewers evaluated model-generated segmentations and corrected any inaccuracies, and these corrections were looped back into the training process to improve performance and generalizability." This indicates that human experts established and refined the ground truth by reviewing and correcting model-generated segmentations, which were then used for retraining. The qualifications of these "human reviewers" are not specified.
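The iterative correct-and-retrain loop described above can be sketched as a toy function. This is an illustration of the general human-in-the-loop pattern, not the sponsor's pipeline; `model`, `reviewers`, and their methods are all hypothetical names.

```python
def iterative_model_enhancement(model, unlabeled_scans, reviewers, rounds=3):
    """Toy sketch of the enhancement loop: the model segments each scan,
    human reviewers correct inaccuracies, and the corrected labels are
    looped back into the training pool before retraining."""
    training_pool = []
    for _ in range(rounds):
        for scan in unlabeled_scans:
            prediction = model.segment(scan)           # model-generated segmentation
            corrected = reviewers.correct(scan, prediction)  # expert corrections
            training_pool.append((scan, corrected))
        model.fit(training_pool)                       # retrain on accumulated corrections
    return model
```

Each round grows the pool, so later retraining rounds see both earlier and newly corrected examples, which is what "improve performance and generalizability" refers to in the quoted passage.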

    K Number: K240786
    Device Name: AutoChamber
    Date Cleared: 2024-10-10 (202 days)
    Product Code:
    Regulation Number: 892.2050
    Age Range: All

    Reference & Predicate Devices
    Predicate For:
    Intended Use

    The AutoChamber software is an opportunistic AI-powered quantitative imaging tool that measures and reports cardiac chambers volumes comprising left atrium (LA), left ventricle (LV), right atrium (RA), right ventricle (RV), and left ventricular wall (LVW) from non-contrast chest CT scans including coronary artery calcium (CAC) scans and lung CT scans. AutoChamber is not intended to rule out the risk of a cardiovascular disease, and the results should not be used for any purpose other than to enable physicians to investigate patients that AutoChamber shows signs of enlarged heart (cardiomegaly), enlarged cardiac chambers, and left ventricular hypertrophy (LVH) whose conditions are otherwise missed by human eyes in non-contrast chest CT scans. AutoChamber similarly measures and reports LA, LV, RA, RV, and LVW in contrast-enhanced coronary CT angiography (CCTA) scans. Additionally, AutoChamber measures and reports cardiothoracic ratio (CTR) in both contrast and non-contrast CT scans where the entire thoracic cavity is in the axial field of view. AutoChamber quantitative imaging measurements are adjusted by body surface area (BSA) and are reported both in cubic centimeter volume (cc) and percentiles by gender using reference data from 5830 people who participated in the Multi-Ethnic Study of Atherosclerosis (MESA). AutoChamber should not be ordered as a standalone CT scan but instead should be used as an opportunistic add-on to existing and new CT scans of the chest, such as CAC and lung CT scans, as well as CCTA scans.

    Using AutoChamber quantitative imaging measurements and their clinical evaluation, healthcare providers can investigate asymptomatic patients who are unaware of their risk of heart failure, atrial fibrillation, stroke and other life-threatening conditions associated with enlarged cardiac chambers, and LVH that may warrant additional risk-assessment or follow-up. AutoChamber quantitative imaging measurements are to be reviewed by radiologists or other medical professionals and should only be used by healthcare providers in conjunction with clinical evaluation.

    Device Description

    The AutoChamber Software is an opportunistic AI-powered quantitative imaging tool that provides an estimate of cardiac volume, cardiac chambers volumes and left ventricular (LV) mass from non-contrast chest CT scans as well as contrast-enhanced chest CT scans. In addition to cardiac chambers volume and LV mass, AutoChamber measures and reports cardiothoracic ratio (CTR).

    AutoChamber Software reads a CT scan (in DICOM format) and extracts scan-specific information such as acquisition time, pixel size, and scanner type. The AutoChamber Software uses an AI-trained model to identify cardiac chambers in the field of view and measure the volume of each chamber, including left atrium (LA), left ventricle (LV), right atrium (RA), right ventricle (RV), and LV wall (LVW). AutoChamber calculates the volume of each chamber as well as the corresponding total volume of all cardiac chambers and, if the field of view contains the entire width of the thoracic cavity in the axial view, it calculates and reports the cardiothoracic ratio (CTR).

    AutoChamber calculates the volume of each chamber as the volume of each pixel multiplied by the number of pixels in the region of interest per slice, multiplied by the number of slices included in each chamber's segmentation. The total volume per chamber is reported in cubic centimeters (cc). In addition to the measured volume in cc per chamber, the report shows volumes adjusted by body surface area (BSA) and corresponding percentiles using reference data from 5830 people who participated in the Multi-Ethnic Study of Atherosclerosis (MESA). The default cut-off value for further investigation is the 75th percentile, but this is optional and subject to the provider's judgment.
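The per-chamber volume arithmetic described above (voxel volume times voxel count, summed across slices) can be sketched directly. The function and the demo mask below are illustrative; spacings come from the DICOM header in the real device.

```python
import numpy as np

def chamber_volume_cc(mask: np.ndarray,
                      pixel_spacing_mm: tuple,
                      slice_thickness_mm: float) -> float:
    """Volume of a segmented chamber: the volume of one voxel multiplied
    by the number of voxels inside the segmentation mask across all
    slices. `mask` is a boolean array of shape (slices, rows, cols)."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    return float(mask.sum()) * voxel_mm3 / 1000.0  # mm^3 -> cc

# Hypothetical illustration: 10 slices each containing a 10x10-voxel
# region, 0.5 x 0.5 mm pixels, 3 mm slices.
demo = np.zeros((10, 20, 20), dtype=bool)
demo[:, :10, :10] = True
vol = chamber_volume_cc(demo, (0.5, 0.5), 3.0)  # 1000 voxels * 0.75 mm^3 = 0.75 cc
```

The reported value would then be BSA-adjusted and mapped to a MESA percentile, with the 75th percentile as the default (optional) cut-off.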

    AutoChamber does not provide a numerical individualized risk score/prediction or categorical assessment of whether an individual patient will develop cardiovascular disease over a specified period; it reports only the measured volumes and their percentile(s).

    AutoChamber is a post-processing quantitative imaging software that works on existing and new CT scans. The AutoChamber Software is a software module installed by trained personnel only. The AutoChamber Software is executed via a parent software which provides the necessary input and visualizes the output data. The software itself does not offer user controls or access. The user cannot change or edit the segmentation or results of the device. The user must accept or reject the region where the cardiac chamber volume measurement is done. If rejected, the user must retry with a new series of images or conduct an alternate method to measure cardiac chamber volume. The expert's review solely pertains to the region of interest being properly located.

    Software passes if the healthcare provider sees the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are correctly placed on the cardiac region based upon expert knowledge. Software fails if the healthcare provider sees the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are incorrectly placed outside of the cardiac anatomy. Software fails if the healthcare provider sees that the quality of the CT scan is compromised by image artifacts, motion, or excessive noise.

    AI/ML Overview

    Based on the provided text, here's a description of the acceptance criteria and the study proving the device meets them:

    Acceptance Criteria and Device Performance

    The document does not explicitly state a table of quantitative acceptance criteria for the performance of the AutoChamber software (e.g., a specific mean absolute error for volume measurements or a target F1-score for segmentation). Instead, the software validation section states: "Software Verification and Validation testing was completed to demonstrate the safety and effectiveness of the device. Testing demonstrates the AutoChamber Software meets all its functional requirements and performance specifications."

    The closest the document comes to defining acceptance criteria is in the "Principles of Ops" section, describing conditions for software pass/fail from a user's perspective:

    • Software passes if the healthcare provider sees the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are correctly placed on the cardiac region based upon expert knowledge.
    • Software fails if the healthcare provider sees the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are incorrectly placed outside of the cardiac anatomy.
    • Software fails if the healthcare provider sees that the quality of the CT scan is compromised by image artifacts, motion, or excessive noise.
    • The only user interaction is to accept or reject the region where the cardiac chamber volume measurement is done, with rejection leading to a retry or alternate method. "The expert's review solely pertains to the region of interest being properly located."

    Given this, the qualitative acceptance criteria appear to be centered on the correct anatomical localization of the measured cardiac chambers by the AI, as confirmed by expert review.

    Reported Device Performance:
    The document does not provide specific metrics (e.g., mean absolute error, Dice coefficient, accuracy, sensitivity, specificity) for the performance of the AutoChamber software against its ground truth. It only states that "AutoChamber results were compared with measurements previously made by cardiac MRI" and other CT scans. Therefore, a table of acceptance criteria vs. reported device performance cannot be fully constructed from the provided text, as the specific performance outcomes are not detailed, nor are the quantitative acceptance thresholds.

    Study Details:

    The clinical validation of the AutoChamber software was based on retrospective analyses.

    1. Sample sizes used for the test set and data provenance:

      • Study 1: 5003 cases where AutoChamber results from non-contrast cardiac CT scans were compared with measurements previously made by cardiac MRI.
      • Study 2: 1433 patients with paired non-contrast and contrast-enhanced cardiac CT scans.
      • Study 3: 171 patients who underwent both ECG-gated cardiac CT scan and non-gated full chest lung scan.
      • Study 4: 131 cases where AutoChamber results were compared directly with a Reference device (K060937).
      • Data Provenance: The reference data for percentiles is from 5830 people who participated in the Multi-Ethnic Study of Atherosclerosis (MESA). The specific country of origin for the test set data (the 5003, 1433, 171, and 131 cases) is not explicitly stated, but MESA is a US-based study. All studies were retrospective analyses of existing databases.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document implies that "expert knowledge" is used to confirm the correct placement of cardiac chamber volumes and LV mass. However, it does not specify the number of experts, their qualifications (e.g., specific board certifications, years of experience), or the process by which they established ground truth for the volumes themselves (e.g., manual segmentation by experts, or if the "cardiac MRI" measurements served as the primary ground truth, and if so, how those were established).
    3. Adjudication method for the test set:

      • The document does not specify a formal adjudication method (e.g., 2+1, 3+1 consensus) for the expert review or the establishment of ground truth for the test set. It mentions "The expert's review solely pertains to the region of interest being properly located," implying individual expert qualitative assessment of the AI's output, rather than a multi-reader consensus process for establishing the ground truth values themselves.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

      • No, a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure how much human readers improve with AI vs. without AI assistance is not described. The document states that the AI measurements were compared against existing data (e.g., MRI measurements, other CT scans). The AI is presented as a "post-processing quantitative imaging software" that helps physicians investigate patients and is to be reviewed by radiologists or medical professionals. This implies an assistive role, but a formal MRMC study demonstrating enhancement of human reader performance is not mentioned.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, the clinical validation involved comparing the AutoChamber software's measurements directly with other established measurement methods (cardiac MRI, other CT scans, and a reference device). This indicates a standalone performance evaluation of the algorithm's output against a reference. The "Principles of Ops" section states, "The user cannot change or edit the segmentation or results of the device. The user must accept or reject the region where the cardiac chamber volume measurement is done." This suggests the algorithm performs autonomously, and its output is then presented for acceptance/rejection based on anatomical placement.
    6. The type of ground truth used:

      • The primary ground truth appears to be measurements previously made by cardiac MRI in one key study (5003 cases), and measurements from other CT scans or a cleared reference device (K060937) in other studies. The document does not explicitly state that these "measurements" were derived from pathology or clinical outcomes data, but rather from other imaging modalities considered reference standards (MRI) or other devices. The qualitative "expert knowledge" mentioned for passing/failing the software seems to be about the anatomical correctness of the AI's segmentation/placement rather than the true quantitative values themselves.
    7. The sample size for the training set:

      • The sample size for the training set is not specified in the provided text. It only mentions that the AutoChamber Software uses an "AI trained model."
    8. How the ground truth for the training set was established:

      • The method for establishing ground truth for the training set is not specified in the provided text.

    K Number: K213760
    Device Name: ABMD Software
    Date Cleared: 2022-07-29 (240 days)
    Product Code:
    Regulation Number: 892.1170
    Age Range: All

    Reference & Predicate Devices
    Predicate For:
    Intended Use

    The Automated Bone Mineral Density Software Module (ABMD) is a post-processing AI-powered software intended to measure bone mineral density (BMD) from existing CT scans by averaging Hounsfield units in the trabecular region of vertebral bones. ABMD is not intended to replace DXA or any other tests dedicated to BMD measurement. It is solely designed for measuring BMD in existing CT scans ordered for reasons other than BMD measurement. In summary, ABMD is an opportunistic AI-powered tool that enables: (1) retrospective assessment of bone density from CT scans acquired for other purposes, (2) assessment of bone density in conjunction with another medically appropriate procedure involving CT scans, and (3) assessment of bone density without a phantom as an independent measurement procedure.

    Device Description

    The Automated Bone Mineral Density (ABMD) Software is a software module that estimates bone mineral density in the vertebral bones by averaging Hounsfield Units (HU) in the trabecular area. ABMD Software is a post-processing software that works on existing CT scans. ABMD Software measurements are to be reviewed by radiologists and should be used by healthcare providers in conjunction with clinical evaluation.
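The core computation described above, averaging Hounsfield units over the trabecular region, is simple once a segmentation exists. In this sketch the trabecular mask is a precomputed boolean array standing in for the AI model's output, and all values are hypothetical.

```python
import numpy as np

def abmd_estimate_hu(ct_slice: np.ndarray, trabecular_mask: np.ndarray) -> float:
    """Mean Hounsfield units inside the trabecular region of a vertebral
    body: the BMD estimate described above. Segmenting the trabecular
    region is the AI model's job; here the mask is supplied directly."""
    return float(ct_slice[trabecular_mask].mean())

# Hypothetical 4x4 HU grid: soft-tissue background with a denser
# trabecular patch in the center.
ct = np.full((4, 4), 50.0)
ct[1:3, 1:3] = 180.0
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
bmd_hu = abmd_estimate_hu(ct, mask)  # 180.0
```

In the actual workflow this per-vertebra HU average would be mapped to BMD, T-score, and Z-score values and placed before a radiologist for review.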

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the ABMD Software based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Correlation with manual QCT-based BMD measurement | Strong correlation reported (r = 0.97, p < 0.01). |
    | Correlation with DXA BMD measurement | Significant correlation reported (r = 0.72, p < 0.01), closely matching correlations reported in the literature between DXA and manual QCT (r = 0.5 to r = 0.75). |
    | Sample volume placement (relative to cortical bone) | Software passes if the sample volume is at least 1 pixel away from the cortical border. |
    | Functional requirements and performance specifications | All functional requirements and performance specifications were met. |
    | Agreement with manual QCT-based BMD measurement | Strong agreement reported. |
    | Agreement with DXA BMD measurement | Modest but significant agreement reported. |
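The r-values quoted above are Pearson correlation coefficients. As a reference point for readers, here is a plain reimplementation of the statistic on hypothetical data; this is not the sponsor's analysis code.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear hypothetical data gives r = 1.0; real ABMD-vs-QCT
# pairs would scatter around the line, pulling r below 1.
r = pearson_r([100, 150, 200, 250], [110, 160, 210, 260])
```

A high r only certifies linear association, not agreement in absolute value, which is why the clearance summary also reports agreement analyses separately.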

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Study 1 (Reference Dataset):
      • Sample Size: 993 cases.
      • Data Provenance: Not explicitly stated, but indicated as a "cohort of asymptomatic cases who underwent CT scans." The geographical origin (country) is not specified. It is retrospective as it uses "existing CT scans."
    • Study 2:
      • Sample Size: 172 asymptomatic cases.
      • Data Provenance: Not explicitly stated, but indicated as cases who underwent "whole-body DXA scans as well as CT scans." The geographical origin (country) is not specified. It is retrospective as it uses "existing CT scans."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not explicitly stated in terms of a specific count. However, the ground truth was established by "trained operators."
    • Qualifications of Experts: Described as "trained operators" for both manual QCT measurements and DXA scan derivations. Specific qualifications like "radiologist with 10 years of experience" are not provided.

    4. Adjudication Method for the Test Set

    • The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set's ground truth. It states that "QCT BMD, T-score, and Z-score values derived from manual measurement by trained operators" were used for ground truthing. This implies a single measurement by a "trained operator" formed the ground truth for QCT, and DXA values were also used, presumably from standard reports.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance?

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described in the provided text. The studies focused on comparing the ABMD software to manual QCT measurements and DXA measurements, not on how human readers perform with or without AI assistance.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done

    • Yes, standalone performance was assessed. The studies directly compare the ABMD Software's measurements (which are algorithm-driven) to established ground truth methods (manual QCT and DXA). The text states, "The ABMD Software strongly correlated with manual QCT-based BMD measurement" and "The ABMD Software also correlated with DXA BMD measurement." This indicates the algorithm's performance independent of human-in-the-loop interaction for the measurement itself, though the results are intended for review by radiologists.

    7. The Type of Ground Truth Used

    • Study 1:
      • Type: Expert consensus (implicitly, from "trained operators") for QCT BMD, T-score, and Z-score values derived from manual measurements.
    • Study 2:
      • Type: Combined expert consensus (implicitly, from "trained operators") for QCT BMD, T-score, and Z-score values derived from manual measurements, and clinical outcomes/measurements from DXA scans (DXA T-score and Z-score values).

    8. The Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. It mentions the ABMD Software uses "an AI trained model," but the details of the training data are not provided in this summary.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly state how the ground truth for the training set was established. It only mentions the ground truthing process for the test/reference datasets.
