Search Results
Found 2 results
510(k) Data Aggregation
AutoChamber Software (202 days)
The AutoChamber software is an opportunistic AI-powered quantitative imaging tool that measures and reports cardiac chamber volumes, comprising the left atrium (LA), left ventricle (LV), right atrium (RA), right ventricle (RV), and left ventricular wall (LVW), from non-contrast chest CT scans, including coronary artery calcium (CAC) scans and lung CT scans. AutoChamber is not intended to rule out the risk of cardiovascular disease, and the results should not be used for any purpose other than to enable physicians to investigate patients in whom AutoChamber shows signs of an enlarged heart (cardiomegaly), enlarged cardiac chambers, or left ventricular hypertrophy (LVH), conditions that are otherwise missed by the human eye in non-contrast chest CT scans. AutoChamber similarly measures and reports LA, LV, RA, RV, and LVW in contrast-enhanced coronary CT angiography (CCTA) scans. Additionally, AutoChamber measures and reports the cardiothoracic ratio (CTR) in both contrast and non-contrast CT scans where the entire thoracic cavity is in the axial field of view. AutoChamber quantitative imaging measurements are adjusted by body surface area (BSA) and are reported both as volumes in cubic centimeters (cc) and as percentiles by gender, using reference data from 5830 participants in the Multi-Ethnic Study of Atherosclerosis (MESA). AutoChamber should not be ordered as a standalone CT scan; instead, it should be used as an opportunistic add-on to existing and new chest CT scans, such as CAC and lung CT scans, as well as CCTA scans.
Using AutoChamber quantitative imaging measurements and their clinical evaluation, healthcare providers can investigate asymptomatic patients who are unaware of their risk of heart failure, atrial fibrillation, stroke, and other life-threatening conditions associated with enlarged cardiac chambers and LVH that may warrant additional risk assessment or follow-up. AutoChamber quantitative imaging measurements are to be reviewed by radiologists or other medical professionals and should only be used by healthcare providers in conjunction with clinical evaluation.
The AutoChamber Software is an opportunistic AI-powered quantitative imaging tool that provides an estimate of cardiac volume, cardiac chamber volumes, and left ventricular (LV) mass from non-contrast chest CT scans as well as contrast-enhanced chest CT scans. In addition to cardiac chamber volumes and LV mass, AutoChamber measures and reports the cardiothoracic ratio (CTR).
AutoChamber Software reads a CT scan (in DICOM format) and extracts scan-specific information such as acquisition time, pixel size, and scanner type. The AutoChamber Software uses a trained AI model to identify cardiac chambers in the field of view and measure the volume of each chamber, including the left atrium (LA), left ventricle (LV), right atrium (RA), right ventricle (RV), and LV wall (LVW). AutoChamber calculates the volume of each chamber as well as the corresponding total volume of all cardiac chambers and, if the field of view contains the entire width of the thoracic cavity in the axial view, it calculates and reports the cardiothoracic ratio (CTR).
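For illustration, the metadata-extraction step described above can be sketched with pydicom. The attribute names below are standard DICOM tags; which fields AutoChamber actually reads is not disclosed, so this is a hypothetical sketch, not the device's implementation.

```python
# Hypothetical sketch of the metadata-extraction step; the fields AutoChamber
# actually reads are not disclosed. Requires pydicom (pip install pydicom).
import pydicom

def read_scan_metadata(dicom_path: str) -> dict:
    """Read scan-specific header fields from one DICOM file."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return {
        "acquisition_time": getattr(ds, "AcquisitionTime", None),    # (0008,0032)
        "pixel_spacing_mm": getattr(ds, "PixelSpacing", None),       # (0028,0030)
        "slice_thickness_mm": getattr(ds, "SliceThickness", None),   # (0018,0050)
        "scanner": (getattr(ds, "Manufacturer", None),               # (0008,0070)
                    getattr(ds, "ManufacturerModelName", None)),     # (0008,1090)
    }
```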
AutoChamber calculates the volume of each chamber as the volume of each pixel multiplied by the number of pixels in the region of interest per slice, multiplied by the number of slices included in each chamber's segmentation. The total volume per chamber is reported in cubic centimeters (cc). In addition to the measured volume in cc per chamber, the report shows volumes adjusted by body surface area (BSA) and corresponding percentiles using reference data from 5830 participants in the Multi-Ethnic Study of Atherosclerosis (MESA). The default cut-off value for further investigation is the 75th percentile, but this threshold is optional and subject to the provider's judgment.
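The volume arithmetic described above is straightforward to sketch: per-voxel volume times the count of segmented voxels, converted to cc, then indexed to BSA. The snippet below is a minimal, hypothetical version; the Du Bois BSA formula is an assumption, as the summary does not say which BSA formula AutoChamber uses.

```python
import numpy as np

def chamber_volume_cc(mask: np.ndarray, pixel_spacing_mm, slice_thickness_mm: float) -> float:
    """Volume = per-voxel volume x number of voxels in the chamber segmentation.

    mask: boolean array (slices, rows, cols) marking one chamber's region of interest.
    """
    voxel_mm3 = float(pixel_spacing_mm[0]) * float(pixel_spacing_mm[1]) * slice_thickness_mm
    return float(mask.sum()) * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3

def bsa_dubois_m2(height_cm: float, weight_kg: float) -> float:
    """Du Bois body surface area -- an assumed formula; the summary does not
    disclose which BSA formula AutoChamber uses."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

# Example: BSA-indexed left-ventricle volume, reported alongside the raw cc value.
lv_mask = np.zeros((40, 64, 64), dtype=bool)
lv_mask[10:30, 20:40, 20:40] = True                  # toy segmentation, not real data
lv_cc = chamber_volume_cc(lv_mask, (0.7, 0.7), 1.0)
lv_indexed = lv_cc / bsa_dubois_m2(170.0, 70.0)      # cc per m^2
```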
AutoChamber does not provide a numerical individualized risk score/prediction or a categorical assessment of whether an individual patient will develop cardiovascular disease over a specified period based on their percentile(s).
AutoChamber is a post-processing quantitative imaging software that works on existing and new CT scans. The AutoChamber Software is a software module installed by trained personnel only. The AutoChamber Software is executed via a parent software which provides the necessary input and visualizes the output data. The software itself does not offer user controls or access. The user cannot change or edit the segmentation or results of the device. The user must accept or reject the region where the cardiac chamber volume measurement is done. If rejected, the user must retry with a new series of images or conduct an alternate method to measure cardiac chamber volume. The expert's review solely pertains to the region of interest being properly located.
The software passes if the healthcare provider sees that the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are correctly placed on the cardiac region, based upon expert knowledge. The software fails if the healthcare provider sees that the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are incorrectly placed outside of the cardiac anatomy. The software also fails if the healthcare provider sees that the quality of the CT scan is compromised by image artifacts, motion, or excessive noise.
Based on the provided text, here's a description of the acceptance criteria and the study proving the device meets them:
Acceptance Criteria and Device Performance
The document does not explicitly state a table of quantitative acceptance criteria for the performance of the AutoChamber software (e.g., a specific mean absolute error for volume measurements or a target F1-score for segmentation). Instead, the software validation section states: "Software Verification and Validation testing was completed to demonstrate the safety and effectiveness of the device. Testing demonstrates the AutoChamber Software meets all its functional requirements and performance specifications."
The closest the document comes to defining acceptance criteria is in the "Principles of Operation" section, which describes conditions for software pass/fail from a user's perspective:
- Software passes if the healthcare provider sees that the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are correctly placed on the cardiac region, based upon expert knowledge.
- Software fails if the healthcare provider sees that the cardiac chamber volumes and left ventricular mass highlighted by AutoChamber are incorrectly placed outside of the cardiac anatomy.
- Software fails if the healthcare provider sees that the quality of the CT scan is compromised by image artifacts, motion, or excessive noise.
- The only user interaction is to accept or reject the region where the cardiac chamber volume measurement is done, with rejection leading to a retry or alternate method. "The expert's review solely pertains to the region of interest being properly located."
Given this, the qualitative acceptance criteria appear to be centered on the correct anatomical localization of the measured cardiac chambers by the AI, as confirmed by expert review.
Reported Device Performance:
The document does not provide specific metrics (e.g., mean absolute error, Dice coefficient, accuracy, sensitivity, specificity) for the performance of the AutoChamber software against its ground truth. It only states that "AutoChamber results were compared with measurements previously made by cardiac MRI" and with other CT scans. Therefore, a table of acceptance criteria vs. reported device performance cannot be fully constructed from the provided text: the specific performance outcomes are not detailed, nor are the quantitative acceptance thresholds.
Study Details:
The clinical validation of the AutoChamber software was based on retrospective analyses.
Sample sizes used for the test set and data provenance:
- Study 1: 5003 cases where AutoChamber results from non-contrast cardiac CT scans were compared with measurements previously made by cardiac MRI.
- Study 2: 1433 patients with paired non-contrast and contrast-enhanced cardiac CT scans.
- Study 3: 171 patients who underwent both ECG-gated cardiac CT scan and non-gated full chest lung scan.
- Study 4: 131 cases where AutoChamber results were compared directly with a Reference device (K060937).
- Data Provenance: The reference data for percentiles is from 5830 people who participated in the Multi-Ethnic Study of Atherosclerosis (MESA). The specific country of origin for the test set data (the 5003, 1433, 171, and 131 cases) is not explicitly stated, but MESA is a US-based study. All studies were retrospective analyses of existing databases.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document implies that "expert knowledge" is used to confirm the correct placement of cardiac chamber volumes and LV mass. However, it does not specify the number of experts, their qualifications (e.g., specific board certifications, years of experience), or the process by which they established ground truth for the volumes themselves (e.g., manual segmentation by experts, or if the "cardiac MRI" measurements served as the primary ground truth, and if so, how those were established).
Adjudication method for the test set:
- The document does not specify a formal adjudication method (e.g., 2+1, 3+1 consensus) for the expert review or the establishment of ground truth for the test set. It mentions "The expert's review solely pertains to the region of interest being properly located," implying individual expert qualitative assessment of the AI's output, rather than a multi-reader consensus process for establishing the ground truth values themselves.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure how much human readers improve with AI vs. without AI assistance is not described. The document states that the AI measurements were compared against existing data (e.g., MRI measurements, other CT scans). The AI is presented as a "post-processing quantitative imaging software" that helps physicians investigate patients and is to be reviewed by radiologists or medical professionals. This implies an assistive role, but a formal MRMC study demonstrating enhancement of human reader performance is not mentioned.
If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, the clinical validation involved comparing the AutoChamber software's measurements directly with other established measurement methods (cardiac MRI, other CT scans, and a reference device). This indicates a standalone performance evaluation of the algorithm's output against a reference. The "Principles of Operation" section states, "The user cannot change or edit the segmentation or results of the device. The user must accept or reject the region where the cardiac chamber volume measurement is done." This suggests the algorithm performs autonomously, and its output is then presented for acceptance/rejection based on anatomical placement.
The type of ground truth used:
- The primary ground truth appears to be measurements previously made by cardiac MRI in one key study (5003 cases), and measurements from other CT scans or a cleared reference device (K060937) in other studies. The document does not explicitly state that these "measurements" were derived from pathology or clinical outcomes data, but rather from other imaging modalities considered reference standards (MRI) or other devices. The qualitative "expert knowledge" mentioned for passing/failing the software seems to be about the anatomical correctness of the AI's segmentation/placement rather than the true quantitative values themselves.
The sample size for the training set:
- The sample size for the training set is not specified in the provided text. It only mentions that the AutoChamber Software uses an "AI trained model."
How the ground truth for the training set was established:
- The method for establishing ground truth for the training set is not specified in the provided text.
Automated Bone Mineral Density Software Module (ABMD) (240 days)
The Automated Bone Mineral Density Software Module (ABMD) is a post-processing AI-powered software intended to measure bone mineral density (BMD) from existing CT scans by averaging Hounsfield units in the trabecular region of vertebral bones. ABMD is not intended to replace DXA or any other tests dedicated to BMD measurement. It is solely designed for measuring BMD in existing CT scans ordered for reasons other than BMD measurement. In summary, ABMD is an opportunistic AI-powered tool that enables: (1) retrospective assessment of bone density from CT scans acquired for other purposes, (2) assessment of bone density in conjunction with another medically appropriate procedure involving CT scans, and (3) assessment of bone density without a phantom as an independent measurement procedure.
The Automated Bone Mineral Density (ABMD) Software is a software module that estimates bone mineral density in the vertebral bones by averaging Hounsfield Units (HU) in the trabecular area. ABMD Software is a post-processing software that works on existing CT scans. ABMD Software measurements are to be reviewed by radiologists and should be used by healthcare providers in conjunction with clinical evaluation.
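Because ABMD's core measurement is an average of Hounsfield units over a trabecular region of interest, it can be sketched in a few lines. The HU-to-BMD calibration shown is purely hypothetical; the summary does not disclose ABMD's phantomless calibration coefficients.

```python
import numpy as np

def trabecular_mean_hu(ct_hu: np.ndarray, roi_mask: np.ndarray) -> float:
    """Average Hounsfield units inside the trabecular ROI of a vertebral body."""
    return float(ct_hu[roi_mask].mean())

def bmd_from_hu(mean_hu: float, slope: float = 0.8, intercept: float = 0.0) -> float:
    """Hypothetical linear HU -> BMD (mg/cm^3) mapping; ABMD's phantomless
    calibration coefficients are not given in the summary."""
    return slope * mean_hu + intercept
```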
Here's a breakdown of the acceptance criteria and study information for the ABMD Software based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Correlation with Manual QCT-based BMD Measurement | Strong correlation reported (r = 0.97, p<0.01). |
| Correlation with DXA BMD Measurement | Significant correlation reported (r = 0.72, p<0.01). This closely matched correlations reported in literature between DXA and manual QCT (r=0.5 to r=0.75). |
| Sample Volume Placement (relative to cortical bone) | Software passes if the sample volume is at least 1 pixel away from the cortical border. |
| Functional Requirements and Performance Specifications | All functional requirements and performance specifications were met. |
| Agreement with Manual QCT-based BMD Measurement | Strong agreement reported. |
| Agreement with DXA BMD Measurement | Modest but significant agreement reported. |
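The sample-volume-placement criterion in the table is a geometric check: the sampled trabecular region must stay at least one pixel away from the cortical border. A minimal sketch, assuming binary masks for the ROI and the cortical bone (the names are illustrative, not ABMD's):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def roi_clears_cortical_border(roi_mask: np.ndarray, cortical_mask: np.ndarray) -> bool:
    """Pass if every ROI voxel is at least 1 pixel from the cortical border,
    i.e. the ROI does not intersect the cortical mask grown by one pixel."""
    grown = binary_dilation(cortical_mask, iterations=1)
    return not bool(np.any(roi_mask & grown))
```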
2. Sample Sizes Used for the Test Set and Data Provenance
- Study 1 (Reference Dataset):
- Sample Size: 993 cases.
- Data Provenance: Not explicitly stated, but indicated as a "cohort of asymptomatic cases who underwent CT scans." The geographical origin (country) is not specified. It is retrospective as it uses "existing CT scans."
- Study 2:
- Sample Size: 172 asymptomatic cases.
- Data Provenance: Not explicitly stated, but indicated as cases who underwent "whole-body DXA scans as well as CT scans." The geographical origin (country) is not specified. It is retrospective as it uses "existing CT scans."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not explicitly stated in terms of a specific count. However, the ground truth was established by "trained operators."
- Qualifications of Experts: Described as "trained operators" for both manual QCT measurements and DXA scan derivations. Specific qualifications like "radiologist with 10 years of experience" are not provided.
4. Adjudication Method for the Test Set
- The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set's ground truth. It states that "QCT BMD, T-score, and Z-score values derived from manual measurement by trained operators" were used for ground truthing. This implies a single measurement by a "trained operator" formed the ground truth for QCT, and DXA values were also used, presumably from standard reports.
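For context, the QCT T-score and Z-score referenced here follow the standard definitions: the measured BMD expressed in standard deviations from a young-adult reference mean (T-score) or from an age- and sex-matched reference mean (Z-score). A sketch with hypothetical reference statistics, not values from the summary:

```python
def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """Standard deviations of the measured BMD from the young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd: float, matched_mean: float, matched_sd: float) -> float:
    """Standard deviations from the age- and sex-matched reference mean."""
    return (bmd - matched_mean) / matched_sd

# Hypothetical reference values, for illustration only:
print(t_score(bmd=110.0, young_adult_mean=160.0, young_adult_sd=35.0))  # ~ -1.43
```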
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance?
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described in the provided text. The studies focused on comparing the ABMD software to manual QCT measurements and DXA measurements, not on how human readers perform with or without AI assistance.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done
- Yes, standalone performance was assessed. The studies directly compare the ABMD Software's measurements (which are algorithm-driven) to established ground truth methods (manual QCT and DXA). The text states, "The ABMD Software strongly correlated with manual QCT-based BMD measurement" and "The ABMD Software also correlated with DXA BMD measurement." This indicates the algorithm's performance independent of human-in-the-loop interaction for the measurement itself, though the results are intended for review by radiologists.
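The standalone comparison described here amounts to correlating paired measurements. A minimal sketch of that computation, using placeholder arrays rather than the (unavailable) study data:

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder arrays standing in for paired measurements; the actual study
# data are not available in the summary.
abmd_bmd = np.array([142.0, 118.5, 96.2, 131.7, 104.9])        # ABMD output, mg/cm^3
manual_qct_bmd = np.array([140.2, 120.1, 98.0, 129.5, 103.3])  # manual QCT, mg/cm^3

r, p = pearsonr(abmd_bmd, manual_qct_bmd)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```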
7. The Type of Ground Truth Used
- Study 1:
- Type: Expert consensus (implicitly, from "trained operators") for QCT BMD, T-score, and Z-score values derived from manual measurements.
- Study 2:
- Type: Combined expert consensus (implicitly, from "trained operators") for QCT BMD, T-score, and Z-score values derived from manual measurements, and clinical outcomes/measurements from DXA scans (DXA T-score and Z-score values).
8. The Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set. It mentions the ABMD Software uses "an AI trained model," but the details of the training data are not provided in this summary.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly state how the ground truth for the training set was established. It only mentions the ground truthing process for the test/reference datasets.