510(k) Data Aggregation
(150 days)
SDJ
Us2.ca processes acquired transthoracic cardiac ultrasound images to support qualified cardiologists, sonographers, or other licensed professional healthcare practitioners in their diagnosis of cardiac amyloidosis. Us2.ca is intended for use only in adult patients with increased left ventricular wall thickness, defined as an interventricular septal thickness (IVSd) or left ventricular posterior wall thickness (LVPWd) ≥ 12mm. Us2.ca is not intended to provide a diagnosis and does not replace current standards of care. The results from Us2.ca are not intended to exclude the need for further follow-up on cardiac amyloidosis.
The Us2.ai platform is a clinical decision support tool that analyzes echocardiogram images to generate a series of AI-derived measurements. Fully automated functional reporting with disease indications is also provided, in line with ASE and ESC guidelines. Echo images are sent to the Us2.ai platform, where they are processed, analyzed, and measured. Results that meet the confidence threshold for both image quality and measurement accuracy are passed through to a report for review by clinical users. Report text is also generated and presented with the measurements, providing functional reporting and disease indications. The ultimate clinical decision and interpretation reside solely with the clinician.

Us2.ca is an enhancement to Us2.ai's existing Us2.v2 software, adding the capability to detect cardiac amyloidosis. It is an image post-processing analysis software device for viewing and quantifying cardiovascular ultrasound images in DICOM format, intended to aid diagnostic review and analysis of echocardiographic data, patient record management, and reporting. The primary intended function of Us2.ca is to automatically identify patients who require additional follow-up for cardiac amyloidosis; the primary benefit is an improved echocardiographic workflow, enabling clinicians to generate and edit reports faster, with precision and full control. The final clinical decision remains with the clinician.
Here's a breakdown of the acceptance criteria and the study proving Us2.ca meets them, based on the provided FDA 510(k) Clearance Letter:
Us2.ca Device Performance Study Summary
Us2.ca is an AI-powered software designed to analyze transthoracic cardiac ultrasound images to support healthcare practitioners in the diagnosis of cardiac amyloidosis in adult patients with increased left ventricular wall thickness (IVSd or LVPWd ≥ 12mm). The device is not intended as a standalone diagnostic tool but as an adjunctive clinical decision support system.
1. Acceptance Criteria and Reported Device Performance
The primary performance metrics for Us2.ca were sensitivity and specificity for the detection of cardiac amyloidosis. The benchmarks for acceptance criteria were established with reference to current standards of care and existing relevant publications.
Table of Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria | Reported Device Performance (95% CI)
---|---|---
Sensitivity | Not stated numerically; benchmarked against current standards of care and relevant publications | 86.9% (84.2%-89.7%)
Specificity | Not stated numerically; benchmarked against current standards of care and relevant publications | 87.4% (85.2%-89.7%)
Overall yield | Sufficiently high | 87.1%
Note: The document states "The benchmark used in deriving the acceptance criteria of Us2.ca was made with reference to current standards of care and existing relevant publications." However, explicit numerical acceptance thresholds for sensitivity and specificity are not provided in the excerpt. The reported performance metrics are presented as the results that met the unstated acceptance criteria.
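For orientation, here is a minimal sketch of how sensitivity, specificity, and their confidence intervals are derived from confusion-matrix counts. The counts below are back-calculated from the reported point estimates and the cohort sizes (664 CA cases, 983 controls), not stated in the clearance letter; the sponsor's CI method is also not disclosed, so a normal-approximation (Wald) interval is used purely for illustration:

```python
from math import sqrt

def binomial_metric(successes, total, z=1.96):
    """Point estimate with a normal-approximation (Wald) 95% CI."""
    p = successes / total
    half = z * sqrt(p * (1 - p) / total)
    return p, (p - half, p + half)

# Counts inferred from the reported cohort sizes and point estimates
# (illustrative only -- the letter does not report raw counts).
tp, ca_total = 577, 664        # 577/664 ~ 86.9% sensitivity
tn, control_total = 859, 983   # 859/983 ~ 87.4% specificity

sens, sens_ci = binomial_metric(tp, ca_total)
spec, spec_ci = binomial_metric(tn, control_total)
print(f"sensitivity {sens:.1%}, 95% CI ({sens_ci[0]:.1%}, {sens_ci[1]:.1%})")
print(f"specificity {spec:.1%}, 95% CI ({spec_ci[0]:.1%}, {spec_ci[1]:.1%})")
```

The Wald interval computed this way lands close to, but not exactly on, the reported bounds, consistent with the sponsor having used a different (e.g., Wilson or exact) method.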
2. Sample Sizes and Data Provenance
- Training Set Sample Size: 4,371 patients (2,241 CA Cases, 2,130 Control Cases)
- Test Set (External Validation) Sample Size: 1,647 patients (664 CA Cases, 983 Control Cases)
- Data Provenance:
- Country of Origin: The external validation cohort was sourced from six clinical sites across the United States (USA) and Japan. The training data came from "entirely separate data providers," implying diverse origins as well.
- Retrospective or Prospective: All echocardiographic studies were retrospectively obtained from routine clinical evaluations.
3. Number of Experts and Qualifications for Ground Truth
The document does not explicitly state the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set. However, it indicates that the device "supports qualified cardiologists, sonographers, or other licensed professional healthcare practitioners in their diagnosis of cardiac amyloidosis," implying that the ground truth would have been established by such qualified professionals.
4. Adjudication Method for the Test Set
The document does not describe the specific adjudication method (e.g., 2+1, 3+1) used for establishing the ground truth of the test set. It mentions the "testing data involved two cohorts: Cardiac Amyloidosis Group (CA Group) and Control Group," but not the process for classifying patients into these groups.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to assess how human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of the Us2.ca algorithm.
6. Standalone (Algorithm Only) Performance
Yes, a standalone performance study was conducted. The reported sensitivity of 86.9% and specificity of 87.4% are results of the Us2.ca algorithm's performance on the test set, without human intervention or assistance during the evaluation phase. The overall yield of 87.1% also reflects the algorithm's ability to generate confident predictions.
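The yield figure reflects the confidence gating described in the device description: only studies clearing the image-quality/measurement-accuracy threshold produce a reported result. A minimal sketch of that gating logic, with an invented threshold value and score field (the letter discloses neither):

```python
def triage(studies, threshold=0.8):
    """Split studies into reported results and withheld (low-confidence) ones."""
    reported = [s for s in studies if s["confidence"] >= threshold]
    withheld = [s for s in studies if s["confidence"] < threshold]
    yield_rate = len(reported) / len(studies)
    return reported, withheld, yield_rate

# Hypothetical per-study confidence scores for illustration.
studies = [{"id": i, "confidence": c} for i, c in enumerate(
    [0.95, 0.91, 0.62, 0.88, 0.79, 0.97, 0.85, 0.90, 0.93, 0.45])]
reported, withheld, yield_rate = triage(studies)
print(f"yield: {yield_rate:.0%}")  # 7 of 10 studies cleared the threshold
```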
7. Type of Ground Truth Used
The type of ground truth used was expert consensus / clinical diagnosis implicitly. Patients were categorized into a "Cardiac Amyloidosis Group (CA Group)" and "Control Group," indicating that established clinical diagnoses of cardiac amyloidosis (or lack thereof) were used as the reference standard. The "diagnosis of cardiac amyloidosis" is the target of the device's support to "qualified cardiologists, sonographers, or other licensed professional healthcare practitioners."
8. Sample Size for the Training Set
The sample size for the training set was 4,371 patients.
9. How Ground Truth for the Training Set Was Established
The document states that the training and external validation datasets were "sourced from entirely separate data providers." While it doesn't explicitly detail the methodology for establishing ground truth for the training set, it can be inferred that it followed similar clinical diagnostic processes as the test set, leading to the classification of "CA Cases" and "Control Cases." This would typically involve clinical evaluation, imaging interpretation by experts, and potentially confirmatory tests as standard clinical practice for cardiac amyloidosis diagnosis.
(155 days)
SDJ
InVision Precision Cardiac Amyloid is an automated machine learning-based decision support system, indicated as a screening tool for adult patients aged 65 years and over undergoing cardiovascular assessment using echocardiography.
When utilized by an interpreting physician, this device provides information alerting the physician for referral to confirmatory investigations.
InVision Precision Cardiac Amyloid is indicated in adult populations over 65 years of age. Patient management decisions should not be made solely on the results of the InVision Precision Cardiac Amyloid.
The InVision Precision Cardiac Amyloid (InVision PCA) is a Software as a Medical Device (SaMD) machine-learning screening algorithm to identify high suspicion of cardiac amyloidosis from routinely obtained echocardiogram videos. The device assists clinicians in the diagnosis of cardiac amyloidosis.
The InVision PCA algorithm uses a machine learning process to identify the presence of cardiac amyloidosis. The device inputs images and videos from echocardiogram studies, and it outputs a report suggestive or not suggestive of cardiac amyloidosis.
The device has no physical form and is installed as a third-party application to an institution's PACS system.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter:
InVision Precision Cardiac Amyloid: Acceptance Criteria and Performance Study
InVision Precision Cardiac Amyloid (InVision PCA) is a Software as a Medical Device (SaMD) machine-learning screening algorithm developed to identify a high suspicion of cardiac amyloidosis from routinely obtained echocardiogram videos. It acts as a decision support system, alerting interpreting physicians for referral to confirmatory investigations for adult patients aged 65 years and over undergoing cardiovascular assessment using echocardiography.
The device's performance was validated through a comprehensive study, demonstrating its substantial equivalence to the predicate device.
1. Acceptance Criteria and Reported Device Performance
The primary acceptance criteria for the InVision PCA device were established based on its ability to reliably screen for cardiac amyloidosis. The reported performance metrics from the validation study are as follows:
Performance Metric | Reported Device Performance
---|---
Sensitivity | 0.607 (60.7%)
Specificity | 0.990 (99.0%)
Note: Explicit numerical acceptance thresholds (e.g., "must achieve >X% sensitivity") are not stated. The reported values are presented as having met the predefined endpoints of the validation study, implying they satisfied the acceptance criteria deemed necessary for clearance.
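A high-specificity/moderate-sensitivity profile like this is easiest to interpret as expected counts in a screened population. The sketch below assumes an illustrative 5% disease prevalence, which is not from the clearance letter, to show what sensitivity 0.607 and specificity 0.990 imply per 1,000 patients screened:

```python
def screen_per_1000(sensitivity, specificity, prevalence, n=1000):
    """Expected confusion-matrix counts when screening n patients."""
    positives = n * prevalence
    negatives = n - positives
    return {
        "true_positives": sensitivity * positives,
        "false_negatives": (1 - sensitivity) * positives,
        "false_positives": (1 - specificity) * negatives,
        "true_negatives": specificity * negatives,
    }

# 5% prevalence is an assumption for illustration only.
counts = screen_per_1000(sensitivity=0.607, specificity=0.990, prevalence=0.05)
for name, value in counts.items():
    print(f"{name}: {value:.1f}")
```

Under this assumption, false positives stay rare (spec 0.990), while a substantial share of true cases would be missed by the algorithm alone, which is why the indications restrict the output to alerting physicians toward confirmatory investigations.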
2. Sample Size and Data Provenance for Test Set
- Sample Size: 1221 unique echocardiogram studies.
- Data Provenance: The data were selected from three geographically distinct U.S. sites. The study was conducted on "previously acquired" images, indicating a retrospective design.
3. Number of Experts and Qualifications for Ground Truth
The provided document does not explicitly state the number of experts used to establish the ground truth nor their specific qualifications. It mentions "confirmatory reference data," which could imply a consensus of expert opinion but does not detail the process.
4. Adjudication Method for Test Set
The document does not explicitly state the adjudication method used (e.g., 2+1, 3+1). It refers to the ground truth being established by "confirmatory reference data, such as diagnostic imaging or pathology," suggesting a definitive diagnostic pathway rather than a multi-reader visual interpretation adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was explicitly described in the provided text, meaning there is no information on how much human readers improve with AI vs. without AI assistance. The study focused on the standalone performance of the AI model.
6. Standalone (Algorithm Only) Performance
Yes, a standalone (algorithm only) performance study was conducted. The reported sensitivity of 0.607 and specificity of 0.990 are directly attributable to the InVision PCA algorithm's performance in analyzing echocardiogram studies against confirmed ground truth.
7. Type of Ground Truth Used
The ground truth for the test set was established using confirmatory reference data, such as diagnostic imaging or pathology. This indicates a high-fidelity ground truth derived from definitive diagnostic procedures rather than solely expert consensus on images.
8. Sample Size for Training Set
The document does not specify the sample size used for the training set. It only details the sample size for the validation/test set.
9. How Ground Truth for Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. It is assumed that similar rigorous methods involving confirmatory diagnostic imaging or pathology would have been used for the training data, consistent with the approach for the test set, but this is not directly mentioned.
(232 days)
SDJ
EchoGo Amyloidosis 1.0 is an automated machine learning-based decision support system, indicated as a screening tool for adult patients aged 65 years and over with heart failure undergoing cardiovascular assessment using echocardiography.
When utilised by an interpreting physician, this device provides information alerting the physician for referral to confirmatory investigations.
EchoGo Amyloidosis 1.0 is indicated in adult patients aged 65 years and over with heart failure. Patient management decisions should not be made solely on the results of the EchoGo Amyloidosis 1.0 analysis.
EchoGo Amyloidosis 1.0 takes a 2D apical four-chamber (A4C) echocardiogram as its input and outputs a binary classification decision suggestive of the presence of Cardiac Amyloidosis (CA).
The binary classification decision is derived from an AI algorithm developed using a convolutional neural network that was pre-trained on a large dataset of cases and controls.
The A4C echocardiogram should be acquired without contrast and contain at least one full cardiac cycle. Independent training, tuning, and test datasets were used for training and performance assessment of the device.
EchoGo Amyloidosis 1.0 is fully automated without a graphical user interface.
The ultimate diagnostic decision remains the responsibility of the interpreting clinician using patient presentation, medical history, and the results of available diagnostic tests, one of which may be EchoGo Amyloidosis 1.0.
EchoGo Amyloidosis 1.0 is a prescription only device.
The provided text describes the acceptance criteria and a study proving that the EchoGo Amyloidosis 1.0 device meets these criteria.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as clear, quantitative thresholds in a "table" format within the provided text. Instead, the document describes the study that was conducted to demonstrate performance against generally accepted metrics for such devices (e.g., sensitivity, specificity, PPV, NPV, repeatability, reproducibility).
However, based on the results presented in the "10.2 Essential Performance" and "10.4 Precision" sections, we can infer the achieved performance metrics. The text states: "All measurements produced by EchoGo Amyloidosis 1.0 were deemed to be substantially equivalent to the predicate device and met pre-specified levels of performance." It does not, however, explicitly list those "pre-specified levels."
Here's a table summarizing the reported device performance:
Metric | Reported Device Performance (95% CI) | Notes |
---|---|---|
Essential Performance | |
Sensitivity | 84.5% (80.3%, 88.5%) | Based on native disease proportion (36.7% prevalence) |
Specificity | 89.7% (87.0%, 92.4%) | Based on native disease proportion (36.7% prevalence) |
Positive Predictive Value (PPV) | 82.7% (78.8%, 86.5%) | At 36.7% prevalence |
Negative Predictive Value (NPV) | 90.9% (88.8%, 93.2%) | At 36.7% prevalence |
PPV (Inferred) | 15.6% (11.0%, 20.8%) | At 2.2% prevalence |
NPV (Inferred) | 99.6% (99.5%, 99.7%) | At 2.2% prevalence |
No-classifications Rate | 14.0% | Proportion of data for which the device returns "no classification" |
Precision | |
Repeatability (Positive Agreement) | 100% | Single DICOM clip analyzed multiple times |
Repeatability (Negative Agreement) | 100% | Single DICOM clip analyzed multiple times |
Reproducibility (Positive Agreement) | 85.5% (82.4%, 88.2%) | Different DICOM clips from the same individual |
Reproducibility (Negative Agreement) | 79.9% (76.5%, 83.2%) | Different DICOM clips from the same individual |
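The prevalence-adjusted PPV/NPV rows follow from Bayes' rule applied to the sensitivity and specificity point estimates. A minimal sketch reproducing the inferred 2.2%-prevalence values (small discrepancies against the table are expected because only rounded point estimates are available here):

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Prevalence-adjusted predictive values via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 0.845, 0.897  # point estimates reported in the table

ppv, npv = ppv_npv(sens, spec, prevalence=0.367)  # study prevalence
print(f"36.7% prevalence: PPV {ppv:.1%}, NPV {npv:.1%}")

ppv, npv = ppv_npv(sens, spec, prevalence=0.022)  # assumed screening prevalence
print(f" 2.2% prevalence: PPV {ppv:.1%}, NPV {npv:.1%}")
```

This is why the same sensitivity/specificity yields a much lower PPV at screening-population prevalence: false positives come to dominate the positive calls as true cases become rare.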
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 1,164 patients
- 749 controls
- 415 cases
- Data Provenance: Retrospective case-control study, collected from multiple sites spanning nine states in the USA. A subgroup analysis table also lists some "Non-USA" data, though the description suggests the test data were primarily U.S.-based.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state the number of experts or their specific qualifications (e.g., radiologists with X years of experience) used to establish the ground truth for the test set. It mentions that clinical validation was conducted to "assess agreement with reference ground truth" but does not detail how this ground truth was derived or by whom.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1, none) used for the test set's ground truth establishment.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, the document does not describe an MRMC comparative effectiveness study where human readers improve with AI vs. without AI assistance. The study described is a standalone performance validation of the algorithm against a defined ground truth.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The results presented (sensitivity, specificity, PPV, NPV) are for the algorithm's performance without a human-in-the-loop. The device is described as "fully automated without a graphical user interface" and is a "decision support system" that "provides information alerting the physician for referral." The performance metrics provided are directly from the algorithm's output compared to ground truth.
7. The Type of Ground Truth Used
The document states: "The clinical validation study was used to demonstrate consistency of the device output as well as to assess agreement with reference ground truth." However, it does not specify the nature of this "reference ground truth" (e.g., expert consensus, pathology, outcomes data).
8. The Sample Size for the Training Set
The training data characteristics table shows the following sample sizes:
- Controls: 1,262 (sum of age categories: 118+197+337+388+222)
- Cases: 1,302 (sum of age categories: 122+206+356+389+229)
- Total Training Set Sample Size: 2,564 patients
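The stated totals can be sanity-checked by summing the age-category counts transcribed from the training data table:

```python
# Age-category counts as transcribed from the training data table.
controls_by_age = [118, 197, 337, 388, 222]
cases_by_age = [122, 206, 356, 389, 229]

controls, cases = sum(controls_by_age), sum(cases_by_age)
print(controls, cases, controls + cases)  # 1262 1302 2564
```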
9. How the Ground Truth for the Training Set Was Established
The document states: "The binary classification decision is derived from an AI algorithm developed using a convolutional neural network that was pre-trained on a large dataset of cases and controls." It mentions that "Algorithm training data was collected from collaborating centres." However, it does not explicitly describe how the ground truth labels (cases/controls) for the training set were established. It is implied that these were clinically confirmed diagnoses of cardiac amyloidosis (cases) and non-amyloidosis (controls), but the method (e.g., biopsy, clinical diagnosis based on multiple tests, expert review) is not detailed.