510(k) Data Aggregation (101 days)
The device is intended for noninvasive processing of ultrasound images to detect, measure, and calculate relevant medical parameters of structures and function of patients aged 18 years or older with suspected disease.
Blineslide is a cloud service application that helps qualified users with image-based assessment of lung ultrasound (LUS) cines acquired from the anterior or anterolateral chest regions during a physician-led LUS examination of patients aged 18 years or older. It does not directly interface with ultrasound systems.
Blineslide takes user-uploaded B-Mode LUS video clips (cines) in MP4 format as input and allows users to detect the relevant medical parameters of structures and function (LUS artifacts). Key features of the software include:
- B Line Artifact Module: an AI-assisted tool for detecting the presence or absence of B line artifacts in LUS cines
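Because Blineslide is a cloud service that does not interface directly with ultrasound systems, cines have to be uploaded by the user or an integrating system. The sketch below is purely hypothetical: the endpoint URL, authentication scheme, and field names are invented for illustration and are not described in the 510(k) summary; only the MP4 input format comes from the device description.

```python
import requests

# Hypothetical endpoint and credentials; the actual Blineslide API is not described in the summary.
UPLOAD_URL = "https://example.invalid/blineslide/api/cines"
API_TOKEN = "replace-with-real-credentials"


def upload_cine(path: str) -> dict:
    """Upload a B-Mode LUS cine (MP4) and return the service's JSON response."""
    with open(path, "rb") as f:
        response = requests.post(
            UPLOAD_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"cine": (path, f, "video/mp4")},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()
```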
Blineslide is incompatible with:
- Cines acquired with linear-array ultrasound transducers;
- Cines acquired at less than 18 frames per second;
- Cines that require more than 2048 megabytes of memory;
- Cines shorter than 2600 milliseconds in duration; and
- Cines longer than 7800 milliseconds in duration.
Each of these exclusion criteria is automatically assessed by the software. If any criterion is met, a Cannot Evaluate output is returned to the user to minimize the risk of false LUS artifact detections.
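A minimal sketch of how such a pre-screening gate could look in Python, using the thresholds listed above; the metadata fields, the transducer-type check, and the function names are illustrative assumptions, since the summary does not describe the implementation:

```python
from dataclasses import dataclass

# Thresholds taken directly from the exclusion criteria listed above.
MIN_FPS = 18
MAX_SIZE_MB = 2048
MIN_DURATION_MS = 2600
MAX_DURATION_MS = 7800


@dataclass
class CineMetadata:
    """Hypothetical container for properties extracted from an uploaded MP4 cine."""
    frames_per_second: float
    size_mb: float
    duration_ms: float
    linear_transducer: bool  # how transducer type is determined is not described in the summary


def pre_screen(cine: CineMetadata) -> str:
    """Return 'Cannot Evaluate' if any exclusion criterion applies, otherwise 'Evaluate'."""
    excluded = (
        cine.linear_transducer
        or cine.frames_per_second < MIN_FPS
        or cine.size_mb > MAX_SIZE_MB
        or not (MIN_DURATION_MS <= cine.duration_ms <= MAX_DURATION_MS)
    )
    return "Cannot Evaluate" if excluded else "Evaluate"


# Example: a 3-second, 30 fps, 500 MB cine from a non-linear transducer passes pre-screening.
print(pre_screen(CineMetadata(30, 500, 3000, False)))  # Evaluate
```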
Blineslide does not perform any function that could not be accomplished by a trained user manually. Patient management decisions should not be made solely on the results of Blineslide's analysis.
Here's a breakdown of the acceptance criteria and study details for the Blineslide device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Device Performance
1. A table of acceptance criteria and the reported device performance:
| Metric | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Sensitivity | Not explicitly stated, but high agreement expected | 0.91 (95% CI: 0.88 – 0.94) |
| Specificity | Not explicitly stated, but high agreement expected | 0.84 (95% CI: 0.81 – 0.86) |
Note: The FDA 510(k) summary does not explicitly state pre-defined acceptance criteria for statistical metrics like sensitivity and specificity. Instead, the reported performance is presented to demonstrate substantial equivalence to the predicate device. The "implied" acceptance criteria are derived from the need for the device to be "as safe and as effective as the predicate device."
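For readers who want to sanity-check the confidence intervals, a standard binomial interval on the reported point estimates and class sizes (326 positive and 679 negative cines, listed under Study Details below) lands close to the reported ranges. This is a sketch assuming a Wilson score interval; the summary does not state which interval method was actually used:

```python
import math


def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half


# Reported point estimates and class sizes from the summary.
print("Sensitivity 95% CI:", wilson_ci(0.91, 326))  # roughly (0.87, 0.94)
print("Specificity 95% CI:", wilson_ci(0.84, 679))  # roughly (0.81, 0.87)
```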
Study Details
2. Sample size used for the test set and the data provenance:
- Sample Size for Test Set: 1005 cines, comprising 326 positive class examples (B Line Artifacts Present) and 679 negative class examples (B Line Artifacts Absent). Cines with poor image quality (for which expert consensus could not be reached) were excluded before arriving at this final count.
- Data Provenance:
- Country of Origin: Not explicitly stated. The summary describes data from "various clinical sites in cities with diverse race and ethnicity populations" that are "geographically distinct from the data sources used in the development set," implying a multi-site, geographically diverse origin.
- Retrospective or Prospective: Not explicitly stated. As is typical for this type of study, the data were likely collected retrospectively from existing archives and then curated into a test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: "Two or more experts."
- Qualifications of Experts: Not explicitly stated beyond "experts." However, given the context of identifying B line artifacts in lung ultrasound, it can be inferred that these experts would be physicians credentialed to use lung ultrasound clinically, such as intensivists, emergency physicians, pulmonologists, or other clinicians interpreting LUS cines, as described in the "Intended User" section.
4. Adjudication method for the test set:
- Adjudication Method: Consensus agreement of two or more experts. In rare cases where consensus could not be reached due to poor image quality, clips were excluded.
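A minimal sketch of this adjudication rule, assuming that consensus means full agreement among the expert reads; the exact rule (full agreement vs. majority) and the label vocabulary are assumptions, since the summary does not describe the annotation workflow:

```python
from typing import Optional


def adjudicate(expert_labels: list[str]) -> Optional[str]:
    """Return the ground-truth label when all expert reads agree, else None (clip excluded)."""
    if len(expert_labels) < 2:
        raise ValueError("Consensus requires reads from two or more experts")
    if len(set(expert_labels)) == 1:
        return expert_labels[0]  # full agreement
    return None  # no consensus: exclude the clip from the test set


print(adjudicate(["present", "present"]))            # 'present'
print(adjudicate(["present", "absent", "present"]))  # None -> excluded under this strict rule
```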
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly reported in this 510(k) summary. The evaluation focused on the standalone performance of the AI algorithm against expert ground truth.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance assessment was done. The summary explicitly states: "The performance of the B Line Artifact Detection Module was successfully evaluated on a test dataset..." and "Performance was assessed by measuring agreement using sensitivity and specificity as co-primary endpoints with Cannot Evaluate outputs scored as false predictions." This directly describes standalone performance.
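The scoring rule described, counting Cannot Evaluate outputs as false predictions, can be made concrete as follows. This is a sketch under the assumption that each clip yields one of 'present', 'absent', or 'cannot_evaluate'; the function and label names are illustrative, not taken from the submission:

```python
def sensitivity_specificity(predictions: list[str], truths: list[str]) -> tuple[float, float]:
    """Compute sensitivity and specificity, scoring 'cannot_evaluate' as a wrong prediction."""
    tp = fn = tn = fp = 0
    for pred, truth in zip(predictions, truths):
        if truth == "present":
            # A 'cannot_evaluate' output on a positive clip counts as a miss.
            if pred == "present":
                tp += 1
            else:
                fn += 1
        else:
            # A 'cannot_evaluate' output on a negative clip counts as a false positive.
            if pred == "absent":
                tn += 1
            else:
                fp += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity


preds = ["present", "cannot_evaluate", "absent", "absent"]
truth = ["present", "present", "absent", "present"]
print(sensitivity_specificity(preds, truth))  # (0.333..., 1.0)
```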
7. The type of ground truth used:
- Type of Ground Truth: Expert consensus (two or more experts).
8. The sample size for the training set:
- The sample size for the training set is not explicitly stated in the provided 510(k) summary. It only mentions that the "test data was entirely separated from that used for development" and the "data sources used in the test set were entirely different and geographically distinct from the data sources used in the development set."
9. How the ground truth for the training set was established:
- How the ground truth for the training set was established is not explicitly stated in the provided 510(k) summary. It is implied that ground truth was established during the development phase to train the "non-adaptive machine learning algorithms." This would typically involve expert annotations or labels, similar to the test set, but the specific methodology is not detailed.