510(k) Data Aggregation
(125 days)
Imbio, LLC
The Imbio RV/LV Software device is designed to measure the maximal diameters of the right and left ventricles of the heart from a volumetric CTPA acquisition and report the ratio of those measurements. RV/LV analyzes cases using an artificial intelligence algorithm to identify the location and measurements of the ventricles. The RV/LV software provides the user with annotated images showing ventricular measurements. Its results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of CTPA cases.
The Imbio CT RV/LV Software is a set of medical image post-processing computer algorithms that together perform automated image segmentation and diameter measurements on computed tomography pulmonary angiography (CTPA) images. The device then reports the ratio of those diameter measurements. The Imbio CT RV/LV Software is a single command-line executable program that may be run directly from the command line or through scripting; the user interface is therefore minimal.
Imbio RV/LV Software is Software as a Medical Device (SaMD) intended to provide annotated images and a PDF report that will be read most typically at a PACS workstation. Imbio RV/LV Software is intended only as an aid to support a physician in the analysis of CTPA images.
The Imbio RV/LV Software program reads in DICOM CTPA image datasets, processes the data, then writes output DICOM files and summary reports to a specified directory. Imbio RV/LV Software outputs DICOMs of the original input DICOM CTPA images overlaid with color-codings representing the results of RV/LV computer caliper measurement. Additionally, a summary PDF report is output.
Imbio RV/LV Software does not interface directly with any CT scanner or data collection equipment; instead, the software imports data files previously generated by such equipment and is integrated into the radiological workflow, reducing the risk of use errors.
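The core quantity the device reports is simple once the ventricular diameters are available. A minimal sketch of that computation (hypothetical function name; the actual device derives the diameters from AI-based segmentation of the CTPA volume):

```python
# Illustrative sketch only -- not Imbio's implementation.
# The RV/LV ratio is the maximal right-ventricular diameter divided by
# the maximal left-ventricular diameter, both measured on CTPA.

def rv_lv_ratio(rv_diameter_mm: float, lv_diameter_mm: float) -> float:
    """Return the ratio of maximal RV to LV diameter.

    A ratio above 1.0 is commonly read as right-ventricular enlargement,
    a finding associated with right heart strain on CTPA.
    """
    if lv_diameter_mm <= 0:
        raise ValueError("LV diameter must be positive")
    return rv_diameter_mm / lv_diameter_mm


if __name__ == "__main__":
    # Example: RV 45 mm, LV 40 mm -> ratio 1.125
    print(round(rv_lv_ratio(45.0, 40.0), 3))
```

The clinical interpretation of any threshold on this ratio remains with the reading physician, consistent with the device's stated aid-only role.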
Here's an analysis of the Imbio RV/LV Software based on the provided FDA 510(k) summary, structured to answer your questions:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA 510(k) summary provided does not explicitly state numerical acceptance criteria in terms of performance metrics (e.g., accuracy thresholds, sensitivity, specificity targets). Instead, it describes clinical performance testing designed to demonstrate improvement in agreement among general radiologists and accuracy compared to radiologists' measurements.
Performance Metric | Acceptance Criteria (Not Explicitly Stated in Document) | Reported Device Performance (Summary of Findings) |
---|---|---|
Reader Study I: Improvement in Agreement Among General Radiologists | (Implicit: Demonstrates improved agreement with AI assistance) | Demonstrated improvement of agreement among general radiologists with the assistance of the RV/LV output report. |
Reader Study II: Accuracy of RV/LV Diameter Ratios | (Implicit: Demonstrates accuracy compared to radiologists' measurements) | Demonstrated accuracy of RV/LV diameter ratios compared to radiologists' measurement of the RV/LV diameter ratio. |
2. Sample Size Used for the Test Set and Data Provenance
The document states: "Anonymized CTPA datasets were utilized in the reader study." However, it does not specify the exact sample size of the test set or the country of origin of the data. The use of anonymized, pre-existing CTPA datasets suggests a retrospective design.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document states: "The second test (Reader Study-II) will demonstrated [sic] the accuracy of RVLV diameter ratios compared to radiologist's measurement of the RVLV diameter ratio." This implies that radiologists' measurements were used as a reference for accuracy. However, the document does not specify the number of radiologists or their specific qualifications (e.g., years of experience, subspecialty) used to establish this ground truth.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing the ground truth or resolving discrepancies in Reader Study II. It only mentions "radiologist's measurement," implying individual radiologist readings were used as comparison. For Reader Study I, which focused on "agreement among general radiologists," the specific adjudication or consensus method to determine the "improved agreement" is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Yes, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed in the form of Reader Study I. This study "demonstrated the improvement of the agreement among general radiologist with the assistance of the RVLV output report."
Effect size of how much human readers improve with AI vs. without AI assistance: The document states that Reader Study I "demonstrated the improvement of the agreement among general radiologist with the assistance of the RVLV output report." However, it does not provide quantitative effect size results (e.g., specific statistical metrics like AUC improvement, sensitivity/specificity changes, or inter-reader agreement measures like kappa values) directly in this summary.
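The summary names no agreement statistic, but inter-reader agreement in reader studies of this kind is often quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch of that metric for two readers' categorical calls (illustrative data only, not from the submission):

```python
# Cohen's kappa for two readers rating the same cases -- a common way to
# quantify the "agreement among general radiologists" this study reports.
from collections import Counter


def cohens_kappa(reader_a, reader_b):
    """Cohen's kappa for two readers' categorical ratings of the same cases."""
    assert len(reader_a) == len(reader_b) and reader_a
    n = len(reader_a)
    # Observed proportion of cases where the two readers agree.
    observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n
    # Chance agreement from each reader's marginal category frequencies.
    counts_a = Counter(reader_a)
    counts_b = Counter(reader_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical example: two readers classifying 8 CTPA cases as
# RV/LV ratio > 1 ("pos") or not ("neg").
a = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos"]
b = ["pos", "pos", "neg", "pos", "pos", "neg", "neg", "neg"]
```

Here observed agreement is 6/8 = 0.75, chance agreement is 0.5, giving kappa = 0.5 (moderate agreement). Reporting such a statistic with and without AI assistance is how an effect size for Reader Study I would typically be expressed.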
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone performance evaluation was performed, in the form of Reader Study II. This study "demonstrated the accuracy of RVLV diameter ratios compared to radiologist's measurement of the RVLV diameter ratio." This test assesses the algorithm's direct output (RV/LV diameter ratios) against a human reference, indicating a standalone performance evaluation.
7. Type of Ground Truth Used
The ground truth used for performance evaluation (specifically for Reader Study II, or for comparison in general) was based on expert (radiologist) measurements. The document states, "accuracy of RVLV diameter ratios compared to radiologist's measurement of the RVLV diameter ratio."
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set of the AI algorithm.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established.
(58 days)
Imbio, LLC
The Imbio Segmentation Editing Tool Software is used by trained medical professionals as a tool to modify the contours of segmentation masks produced by Imbio algorithms or to manually create segmentation mask contours. The Segmentation Editing Tool can provide further support to the users of Imbio's algorithms.
Imbio Segmentation Editing Tool (SET) Software is a segmentation editing tool designed to allow users to optimize segmentations calculated by Imbio's fully-automated suite of algorithms (each algorithm is a separate Imbio program that has been or will be submitted for regulatory clearance independently). Imbio is building a suite of medical image post-processing applications that run automatically after data transfer off the medical imaging scanner. Automatic image segmentation is often an essential step in Imbio's analyses. To date, the automatic segmentation algorithms used in Imbio's applications have been robust; however, segmentation failures do occur. The purpose of the Segmentation Editing Tool is to provide customers with a tool to locally correct poor segmentations. Additionally, if the Imbio automatic segmentation fails such that it is unable to produce a result, this tool can be used to semi-manually draw the segmentation required for analysis. SET reads in anatomical images used in an automatic segmentation algorithm and the results of the automated segmentation algorithm (if available). The user is then able to locally correct insufficiencies in the segmentation result, or create a segmentation mask from scratch. The finalized segmentation mask is then pushed back to Imbio's Core Computing Platform and the job is re-processed.
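The local-correction operation such a tool performs can be illustrated on a binary mask. A toy NumPy sketch of a circular "brush" that paints or erases voxels (an assumption for illustration, not Imbio's actual editing logic):

```python
# Toy illustration of local segmentation-mask editing -- not the SET
# implementation. A circular brush adds (or erases) pixels in a 2-D mask.
import numpy as np


def apply_brush(mask: np.ndarray, center: tuple, radius: float,
                add: bool = True) -> np.ndarray:
    """Locally edit a 2-D binary segmentation mask with a circular brush.

    add=True paints pixels into the mask; add=False erases them, mimicking
    the local corrections a segmentation editing tool provides.
    """
    yy, xx = np.ogrid[:mask.shape[0], :mask.shape[1]]
    brush = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = mask.copy()
    out[brush] = 1 if add else 0
    return out


# Example: start from an empty 10x10 mask (as when automatic segmentation
# fails outright) and paint a small region near the center.
mask = np.zeros((10, 10), dtype=np.uint8)
edited = apply_brush(mask, center=(5, 5), radius=2)
```

A real tool would operate slice-by-slice on 3-D DICOM-derived volumes and push the finalized mask back for re-processing, but the edit primitive is the same idea.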
The provided FDA 510(k) summary for the "Imbio Segmentation Editing Tool Software" (K180129) does not contain details about specific acceptance criteria or a study that proves the device meets such criteria.
Instead, it focuses on demonstrating substantial equivalence to a predicate device (MIM 5.2 Brachy K103576). The document mentions "Non-clinical testing was done to show validity of SET software" and "Design validation was performed using the Imbio Segmentation Editing Tool Software in actual and simulated use settings," but it does not provide the results of these tests, specific acceptance criteria, or detailed methodologies.
Here's a breakdown based on the information provided and not provided in the document:
1. A table of acceptance criteria and the reported device performance:
- Not provided. The document does not list any specific quantitative acceptance criteria (e.g., Dice score, mean surface distance, sensitivity, specificity) or present device performance metrics against such criteria.
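For reference, the Dice score named above is the standard overlap metric for comparing segmentations; a minimal NumPy sketch (illustrative only, not part of the submission):

```python
# Dice similarity coefficient between two binary masks: 1.0 means the
# masks are identical, 0.0 means no overlap. Illustrative sketch only.
import numpy as np


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom


# Example: two small masks sharing 2 of their 3 foreground pixels each.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
```

For these masks the Dice score is 2*2 / (3+3) = 2/3. A validation of an edited segmentation against a reference would typically report this alongside a surface-distance metric.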
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- Not provided. The document states "Design validation was performed using the Imbio Segmentation Editing Tool Software in actual and simulated use settings," but it does not specify the number of cases (sample size) used for these tests, nor the origin or nature (retrospective/prospective) of the data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Not provided. Since specific test sets and ground truth establishment are not detailed, this information is absent. The device is a segmentation editing tool meant to be used by "trained medical professionals," suggesting a human-in-the-loop context, but not a fully automated algorithm that produces its own segmentations against a pre-established ground truth for validation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not provided. This information is typically relevant for studies where multiple readers determine ground truth or interpret results, which is not described for this device's validation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Not provided. The document states, "This technology is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device."
- While "Usability testing was completed for this product to ensure proper use of the product by intended users," this is not equivalent to an MRMC comparative effectiveness study measuring improved human performance with AI assistance. The device is a tool to edit segmentations, not an AI that provides initial interpretations.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not explicitly stated as a primary validation method for this specific 510(k). The device is described as "a segmentation editing tool designed to allow users to optimize segmentations calculated by Imbio's fully-automated suite of algorithms." It's an editing tool for other Imbio algorithms' outputs. Therefore, its performance is inherently linked to human interaction. The validation focuses on the tool's functionality for editing, not on a standalone algorithmic diagnostic performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not provided for the "non-clinical testing" or "design validation." Since the device is an editing tool, the "ground truth" in operational use would be the expert's final, corrected segmentation. How the accuracy of the editing tool itself was measured against a truth standard is not detailed.
8. The sample size for the training set:
- Not applicable / Not provided. The Imbio Segmentation Editing Tool Software is described as a software tool for editing segmentation masks, not an AI algorithm that learns from a training set to produce segmentations itself. The document mentions "Imbio's fully-automated suite of algorithms" which do perform segmentation, but this 510(k) is for the editing tool, not for those underlying algorithms. Therefore, a training set for the editing tool is not relevant in the same way it would be for a segmentation algorithm.
9. How the ground truth for the training set was established:
- Not applicable / Not provided. As mentioned above, the editing tool itself does not have a training set in the context of learning to perform segmentation.
(146 days)
Imbio, LLC
The Imbio CT Lung Density Analysis Software provides reproducible CT values for pulmonary tissue, which is essential for providing quantitative support for diagnosis and follow up examinations. The Imbio CT Lung Density Analysis Software can be used to support the diagnosis and documentation of pulmonary tissue images (e.g., abnormalities) from CT thoracic datasets. Three-D segmentation of sub-compartments, volumetric analysis, density evaluations and reporting tools are provided.
The Imbio CT Lung Density Analysis Software (Imbio LDA) is a set of image post-processing algorithms that perform image segmentation, registration, thresholding, and classification on CT images of human lungs. The algorithms within the Imbio CT Lung Density Analysis Software are combined into a single command-line executable program that may be run directly from the command-line or through scripting. The Imbio CT Lung Density Analysis Software program performs segmentation, then registration, then thresholding and classification. The program reads in DICOM datasets, processes the data, then writes output DICOM files to a specified directory. The Imbio CT Lung Density Analysis Software is a command-line software application that analyzes DICOM CT lung image datasets and generates reports and DICOM output that show the lungs segmented and overlaid with color-codings representing the results of its thresholding and classification rules. It has simple file management functions for input and output, and separate modules that implement the CT image-processing algorithms. Imbio CT Lung Density Analysis Software does not interface directly with any CT or data collection equipment; instead the software imports data files previously generated by such equipment.
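The thresholding-and-classification step described here is commonly expressed as the percentage of lung voxels below a Hounsfield-unit cutoff, e.g. %LAA-950 (low attenuation area below -950 HU) as an emphysema index. A minimal sketch under that assumption (the actual Imbio classification rules are not disclosed in the summary):

```python
# Illustrative density-thresholding sketch -- not Imbio's classification
# rules. Computes the percent of lung voxels below a HU cutoff, e.g. the
# common %LAA-950 emphysema index.
import numpy as np


def low_attenuation_percentage(hu: np.ndarray, lung_mask: np.ndarray,
                               threshold_hu: float = -950.0) -> float:
    """Percent of lung voxels below a HU threshold (e.g. %LAA-950).

    hu        : CT volume in Hounsfield units
    lung_mask : binary lung mask from a prior segmentation step
    """
    lung_voxels = hu[lung_mask.astype(bool)]
    if lung_voxels.size == 0:
        raise ValueError("empty lung mask")
    return 100.0 * float((lung_voxels < threshold_hu).sum()) / lung_voxels.size


# Example: 4 lung voxels, one below -950 HU -> 25.0 percent
hu = np.array([-980.0, -900.0, -850.0, -940.0])
mask = np.ones(4, dtype=bool)
```

In a full pipeline this runs after segmentation and registration, and the per-lobe or per-lung percentages are what the color-coded overlays and reports summarize.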
Here's a breakdown of the acceptance criteria and study details for the Imbio CT Lung Density Analysis Software, based on the provided FDA 510(k) summary:
This device is cleared under a 510(k), which means it demonstrates substantial equivalence to a legally marketed predicate device rather than proving de novo safety and effectiveness through extensive clinical trials. Therefore, the "acceptance criteria" here refers more to the demonstration that the device's performance aligns with its specifications and is comparable to the predicate, rather than meeting specific clinical efficacy thresholds.
Study that proves the device meets the acceptance criteria:
The study primarily focused on non-clinical testing to demonstrate substantial equivalence to the predicate device (VIDA Pulmonary Workstation 2 (PW2), K083227).
1. Table of Acceptance Criteria and Reported Device Performance:
Since this is a 510(k) summary focused on substantial equivalence through non-clinical testing, specific quantitative "acceptance criteria" and "reported performance" in a typical clinical trial sense are not explicitly provided with numerical thresholds. Instead, the acceptance criteria were implicitly "functional equivalence," "accurate segmentation," and "accurate thresholding" compared to the predicate device and the ground truth derived from the datasets.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Functional Equivalence to Predicate: Performs similar image post-processing (segmentation, registration, thresholding, classification) for CT lung images. | The Imbio CT Lung Density Analysis Software imports CT DICOM data, analyzes it, and produces reports with quantitative and graphical results, similar to the predicate. "Direct quantitative comparisons using the same CT lung scans yielded similar results." Differences (command-line interface vs. GUI, automated vs. manual inspiration/expiration registration, lack of interactive visualization, low-density cluster analysis, and airway report compared to predicate) were deemed not to affect efficacy and safety. |
Accurate Scan Processing Completion: Software successfully processes all scans as expected. | "Direct predicate comparison for scan processing completion" was performed, indicating successful processing. |
Accurate Segmentation: Correctly identifies and separates anatomical structures (e.g., lungs). | "Direct predicate comparison for... segmentation" was performed, indicating accurate segmentation. |
Accurate Thresholding: Applies density thresholds correctly for classification. | "Direct predicate comparison for... thresholding" was performed, indicating accurate thresholding. |
Specification Compliance: Software functions according to its stated specifications. | "Software verification and validation testing for each requirement specification" was conducted. |
Algorithmic Functionality: Each algorithmic function performs as intended. | "Software verification and validation testing for each algorithmic function" was conducted. |
System Reliability: Software operates reliably at unit, integration, and system levels. | "Software verification and validation testing at the unit, integration, and system level" was conducted. |
Safety and Effectiveness: No new questions of safety or effectiveness are raised compared to the predicate device. | The conclusion states: "It has been shown in this 510(k) submission that the differences between the Imbio CT Lung Density Analysis Software and the VIDA PW2 (K0832277) do not raise any questions regarding safety and effectiveness." |
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size: Not explicitly stated as a number of patients or cases. The document mentions "CT datasets available upon request from the COPDGene study (www.copdgene.org) and the DIR-Lab (www.dir-lab.com)." These are large, publicly available research datasets, implying a potentially substantial number of cases were available for testing.
- Data Provenance:
- Country of Origin: Not specified for individual cases, but COPDGene is a multi-center study conducted in the United States, and DIR-Lab data also commonly come from US institutions, so the data are likely predominantly from the USA.
- Retrospective or Prospective: The use of "available upon request" datasets like COPDGene and DIR-Lab strongly suggests a retrospective analysis of existing data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- This information is not provided in the summary. Since the testing was primarily non-clinical and involved comparing software output to established datasets and a predicate device, explicit expert ground truth labeling for a test set (e.g., by radiologists) is not detailed. The "ground truth" for segmentation and density measurements would be derived from the inherent data characteristics and comparison to the predicate's outputs, which are themselves based on accepted medical imaging principles.
4. Adjudication Method for the Test Set:
- Not applicable/Not specified. Given the nature of the non-clinical testing focused on software functionality and comparison to a predicate, an expert adjudication process (like 2+1 reading) is not described. The "direct predicate comparison" served as a primary reference.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size of Human Reader Improvement:
- No, an MRMC comparative effectiveness study was not done. The summary explicitly states: "This technology is not new, therefore a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device."
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done:
- Yes, a standalone performance assessment was effectively done. The non-clinical testing detailed ("Direct predicate comparison for scan processing completion, segmentation, and thresholding," and "Software verification and validation testing for each requirement specification," "each algorithmic function," and "at the unit, integration, and system level") assesses the algorithm's performance directly, without human intervention during the processing steps.
7. The Type of Ground Truth Used:
- The primary "ground truth" for the non-clinical testing was:
- Reference standard from public datasets: The inherent, accepted characteristics of the COPDGene and DIR-Lab CT datasets regarding lung anatomy and density.
- Comparison to Predicate Device Output: The outputs of the legally marketed predicate device (VIDA PW2) on the same CT lung scans were used as a primary comparative reference, implying that the predicate's performance served as a de facto "ground truth" for equivalence.
- Software Specifications: The internal design specifications and expected behaviors of the Imbio software itself were also used as a basis for verification and validation.
8. The Sample Size for the Training Set:
- Not specified. The document mentions the use of COPDGene and DIR-Lab datasets for "non-clinical testing" and verifying function. It does not provide details on how the training of the algorithms (if applicable, for example, for segmentation models) was performed or the specific datasets and sample sizes used for that purpose. This summary is focused on the verification and validation of the final product for regulatory submission.
9. How the Ground Truth for the Training Set Was Established:
- Not specified. As the training set details are not provided, neither is the method for establishing its ground truth. However, for algorithms like segmentation and density analysis, ground truth for training would typically involve manual annotation by expert radiologists or technologists, or derived from other well-established imaging techniques or pathological correlation, depending on the algorithm's specifics.