Search Results
Found 4 results
BriefCase-Quantification (254 days)
BriefCase-Quantification is a radiological image management and processing system software indicated for use in the analysis of CT exams with contrast that include the abdominal aorta, in adults or transitional adolescents aged 18 and older.
The device is intended to assist appropriately trained medical specialists by providing the user with the maximum abdominal aortic axial diameter measurement for cases that include the abdominal aorta (M-AbdAo). BriefCase-Quantification is indicated to evaluate normal and aneurysmal abdominal aortas and is not intended to evaluate postoperative aortas.
The BriefCase-Quantification results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of cases. These measurements are unofficial, are not final, and are subject to change after review by a radiologist. For final clinically approved measurements, please refer to the official radiology report. Clinicians are responsible for viewing full images per the standard of care.
BriefCase-Quantification is a radiological medical image management and processing device. The software consists of a single module built around an algorithmic component and is intended to run on a Linux-based server in a cloud environment.
The BriefCase-Quantification receives filtered DICOM images and processes them chronologically, running the algorithm on relevant series to measure the maximum abdominal aortic diameter. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application) and forwarded for user review in the PACS.
The BriefCase-Quantification produces a preview image annotated with the maximum axial diameter measurement. The diameter marking is not intended to be a final output, but serves the purpose of visualization and measurement. The original, unmarked series remains available in the PACS as well. The preview image presents an unofficial, non-final measurement, and the user is instructed to review the full image and any other clinical information before making a clinical decision. The image includes a disclaimer: "Not for diagnostic use. The measurement is unofficial, not final, and must be reviewed by a radiologist."
BriefCase-Quantification is not intended to evaluate post-operative aortas.
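The submission does not disclose the measurement algorithm itself. Purely as an illustrative sketch (not the device's actual method), a maximum axial diameter could be derived from per-slice binary segmentation masks by taking the largest Feret diameter across slices; all names below are hypothetical:

```python
import numpy as np
from itertools import combinations
from scipy.spatial import ConvexHull

def max_axial_diameter_mm(slice_masks, pixel_spacing_mm):
    """Largest per-slice Feret diameter (mm) of a binary aorta segmentation.

    slice_masks: iterable of 2D boolean arrays, one per axial slice.
    pixel_spacing_mm: (row_spacing, col_spacing) from the DICOM header.
    Illustrative proxy only; the cleared device's algorithm is not disclosed.
    """
    best = 0.0
    for mask in slice_masks:
        ys, xs = np.nonzero(mask)
        if len(xs) < 2:
            continue
        pts = np.column_stack([ys * pixel_spacing_mm[0],
                               xs * pixel_spacing_mm[1]])
        if len(pts) >= 3:
            try:
                pts = pts[ConvexHull(pts).vertices]  # hull points suffice
            except Exception:
                pass  # degenerate (e.g., collinear) region: use raw points
        for p, q in combinations(pts, 2):
            best = max(best, float(np.hypot(*(p - q))))
    return best
```

Clinical aortic diameters are usually measured perpendicular to the vessel centerline, so a per-slice Feret diameter is only a rough stand-in.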
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance |
---|---|
Mean absolute error between ground truth measurement and algorithm | 1.95 mm (95% CI: 1.59 mm, 2.32 mm) |
Performance Goal | Mean absolute error estimate below prespecified performance goal (specific numerical goal not explicitly stated, but was met) |
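For context, here is a minimal sketch of how an MAE point estimate and a 95% CI of the kind reported above can be computed; the nonparametric bootstrap is one common choice, and the submission does not state which CI method was actually used:

```python
import numpy as np

def mae_with_ci(truth_mm, pred_mm, n_boot=10_000, seed=0):
    """MAE of paired measurements plus a nonparametric bootstrap 95% CI."""
    err = np.abs(np.asarray(pred_mm, float) - np.asarray(truth_mm, float))
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(err), size=(n_boot, len(err)))
    boot_means = err[idx].mean(axis=1)  # MAE of each resampled cohort
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    return err.mean(), (lo, hi)

# Pass rule per the summary: the MAE estimate must fall below the
# prespecified performance goal.
```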
2. Sample size used for the test set and the data provenance
- Test set sample size: 160 cases
- Data provenance: Retrospective, from 6 US-based clinical sites (both academic and community centers). The cases were distinct in time and/or center from the training data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of experts: 3
- Qualifications of experts: US board-certified radiologists.
4. Adjudication method for the test set
The document does not explicitly state the adjudication method (e.g., 2+1, 3+1). It only mentions that the ground truth was "determined by three US board-certified radiologists." This implies a consensus-based approach, but the specific decision rule is not detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human reader improvement with AI assistance versus without AI assistance was not done. This study focused on the standalone performance of the algorithm against a ground truth established by experts.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, a standalone performance study of the algorithm was done. The "Primary Endpoint" section details the algorithm's performance (mean absolute error) compared to the ground truth.
7. The type of ground truth used
Expert consensus was used as the ground truth. The ground truth measurements were "determined by three US board-certified radiologists."
8. The sample size for the training set
The document does not specify the sample size for the training set. It only states that the cases collected for the pivotal dataset (test set) were "distinct in time and/or center from the cases used to train the algorithm."
9. How the ground truth for the training set was established
The document does not explicitly describe how the ground truth for the training set was established. It only mentions that the cases were "distinct in time and/or center from the cases used to train the algorithm," implying that training data also had a ground truth, likely established by similar expert review, but this is not detailed.
CoLumbo (121 days)
CoLumbo is an image post-processing and measurement software tool that provides quantitative spine measurements from previously-acquired DICOM lumbar spine Magnetic Resonance (MR) images for users' review, analysis, and interpretation. It provides the following functionality to assist users in visualizing, measuring and documenting out-of-range measurements:
- Feature segmentation;
- Feature measurement;
- Threshold-based labeling of out-of-range measurements; and
- Export of measurement results to a written report for user's review and approval.
CoLumbo does not produce or recommend any type of medical diagnosis or treatment. Instead, it simply helps users to more easily identify and classify features in lumbar MR images and compile a report. The user is responsible for confirming/modifying settings, reviewing and verifying the software-generated measurements, inspecting out-of-range measurements, and approving draft report content using their medical judgment and discretion.
The device is intended to be used only by hospitals and other medical institutions.
Only DICOM images of MRI acquired from lumbar spine exams of patients aged 18 and above are considered to be valid input. CoLumbo does not support DICOM images of patients who are pregnant, underwent an MRI scan with contrast media, or have post-operative complications, scoliosis, tumors, infections, or fractures.
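The paragraph above defines the valid-input envelope but not how it is enforced. A hypothetical header-level pre-filter (the attribute checks and accepted BodyPartExamined values are assumptions, not CoLumbo's implementation) might look like:

```python
import pydicom

def is_valid_input(path):
    """Hypothetical gate for the stated envelope: adult lumbar-spine MR.

    The clinical exclusions (pregnancy, contrast media, post-operative
    changes, scoliosis, tumors, infections, fractures) are generally not
    machine-readable from DICOM headers and are left to site workflow
    and the reviewing user.
    """
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    if getattr(ds, "Modality", "") != "MR":
        return False
    body_part = str(getattr(ds, "BodyPartExamined", "")).upper()
    if body_part not in {"LSPINE", "LUMBAR SPINE", "SPINE"}:  # assumed values
        return False
    age = str(getattr(ds, "PatientAge", ""))  # DICOM AS form, e.g. "045Y"
    return age.endswith("Y") and int(age[:-1] or 0) >= 18
```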
CoLumbo is a medical device (software) for viewing and interpreting magnetic resonance imaging (MRI) of the lumbar spine. The software is a quantitative imaging tool that assists radiologists and neuro- and spine surgeons ("users") to identify and measure lumbar spine features in medical images and record their observations in a report. The users then confirm whether the out-of-range measurements represent any true abnormality versus a spurious finding, such as an artifact or normal variation of the anatomy. The segmentation and measurements are classified using "modifiers" based on rule-based algorithms and thresholds set by each software user and stored in the user's individualized software settings. The user also identifies and classifies any other observations that the software may not annotate.
The purpose of CoLumbo is to provide information regarding common spine measurements, confirmed by the user, against the pre-determined thresholds confirmed or defined by the user. Every feature annotated by the software, based on the user-defined settings, must be reviewed and affirmed by the radiologist before the measurements of these features can be stored and reported. The software initiates adjustable measurements resulting from semi-automatic segmentation. If the user rejects a measurement, the corresponding segmentation is rejected too. Segmentations are not intended to be a final output but serve the purpose of visualization and calculating measurements. The device outputs are intended to be a starting point for a clinical workflow and should not be interpreted or used as a diagnosis. The user is responsible for confirming segmentation and all measurement outputs. The output is an aid to the clinical workflow of measuring patient anatomy and should not be misused as a diagnostic tool.
User-confirmed settings control the sensitivity of the software for labeling measurements in an image. The user (not the software) controls the threshold for identifying out-of-range measurements, and, in every case, once an out-of-range measurement is identified, the user must confirm or reject its presence. The software facilitates this process by annotating or drawing contours (segmentations) around features of the relevant anatomy and displaying measurements based on these contours. The user maintains control of the process by inspecting the segmentation, measurements, and annotations upon which the measurements are based. The user may also examine other features of the imaging not annotated by the software to form a complete impression and diagnostic judgment of the overall state of disease, disorder, or trauma.
Here's a breakdown of the acceptance criteria and the study that proves CoLumbo meets them, based on the provided FDA submission:
1. Acceptance Criteria and Reported Device Performance
Primary Endpoint (Measurement Accuracy):
- Acceptance Criteria: The maximum Mean Absolute Error (MAE), defined as the upper limit of the 95% confidence interval for MAE, is below a predetermined allowable error limit (MAE_Limit) for each measurement listed.
- Reported Performance: All primary endpoints were met.
(For the MAE row, the upper CI bound must fall below the limit; for the unitless rows, the lower CI bound must exceed the limit.)

Measurement | Reported Value | 95% Confidence Interval (CI) | Limit | Meets Criteria? |
---|---|---|---|---|
Dural Sac Area (Axial) | 14.8 mm² | 12.4 - 17.3 mm² | 20 mm² | Yes (17.3 < 20) |
Vertebral Arch and Adjacent Ligaments (Axial) | 0.87 | 0.86 - 0.88 | 0.8 | Yes (0.86 > 0.8) |
Dural Sac (Axial) | 0.92 | 0.92 - 0.93 | 0.8 | Yes (0.92 > 0.8) |
Nerve Roots (Axial) | 0.75 | 0.72 - 0.78 | 0.6 | Yes (0.72 > 0.6) |
Disc Material Outside Intervertebral Space (Axial) | 0.76 | 0.72 - 0.80 | 0.6 | Yes (0.72 > 0.6) |
Disc (Sagittal) | 0.93 | 0.93 - 0.94 | 0.8 | Yes (0.93 > 0.8) |
Vertebral Body (Sagittal) | 0.95 | 0.94 - 0.95 | 0.8 | Yes (0.94 > 0.8) |
Sacrum S1 (Sagittal) | 0.93 | 0.92 - 0.94 | 0.8 | Yes (0.92 > 0.8) |
Disc Mat. Outside IV Space and/or Bulging Part | 0.69 | 0.66 - 0.72 | 0.6 | Yes (0.66 > 0.6) |
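The values and pass rules in rows two onward (e.g., 0.86 > 0.8 on a unitless 0-1 scale) have the shape of segmentation overlap scores; the Dice similarity coefficient is the usual metric of this form, though the document does not name it. A minimal sketch, assuming Dice:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Pass rule in the style of the table: the LOWER bound of the score's
# 95% CI must exceed the prespecified limit, e.g. 0.72 > 0.6.
```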
2. Sample Size and Data Provenance
- Test Set Sample Size: 101 MR image studies from 101 patients.
- Data Provenance:
- Country of Origin: Collected from seven (7) sites across the U.S.
- Retrospective/Prospective: The document does not explicitly state whether the data was retrospective or prospective, but the phrasing "collected from seven (7) sites across the U.S." typically implies retrospective collection for this type of validation.
3. Number and Qualifications of Experts for Ground Truth
- Number of Experts: Three (3) U.S. radiologists.
- Qualifications: The document states they were "U.S. radiologists" but does not provide details on their years of experience, subspecialty, or specific certifications.
4. Adjudication Method for the Test Set
- Ground Truth Method: For segmentations, the per-pixel majority opinion of the three radiologists established the ground truth. For measurements, the median of the three radiologists' measurements established the ground truth. This is a form of multi-reader consensus.
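Both stated rules are simple to express; a minimal sketch of per-pixel majority voting and median-of-readers ground truth (variable names are illustrative):

```python
import numpy as np

def consensus_mask(reader_masks):
    """Per-pixel majority vote over an odd number of binary reader masks."""
    stack = np.stack([np.asarray(m, bool) for m in reader_masks])
    return stack.sum(axis=0) > len(reader_masks) // 2

def consensus_measurement(reader_values_mm):
    """Median of the readers' measurements (robust to one outlier of three)."""
    return float(np.median(reader_values_mm))
```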
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly reported. The study conducted was a "standalone software performance assessment study," meaning it evaluated the algorithm's performance against ground truth without human readers in the loop.
- Effect Size: N/A, as an MRMC study comparing human readers with and without AI assistance was not performed.
6. Standalone (Algorithm Only) Performance Study
- Was it done? Yes. A standalone software performance assessment study was conducted.
- Details: The study "compared the CoLumbo software outputs without any editing by a radiologist to the ground truth defined by 3 radiologists on segmentations and measurements."
7. Type of Ground Truth Used
- Ground Truth Type: Expert consensus.
- For segmentations: Per-pixel majority opinion of three radiologists using a specialized pixel labeling tool.
- For measurements: Median of three radiologists' measurements using a commercial software tool.
8. Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated in the provided text. The document only mentions that the "training and testing data used during the algorithm development, as well as validation data used in the U.S. standalone software performance assessment study were all independent data sets." It does not specify the size of the training set.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth Establishment for Training Set: Not explicitly stated. The document only mentions that the training data and validation data were independent. It does not detail the method by which ground truth was established for the training data used in algorithm development.
AVIEW (161 days)
AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. The software can be used to support the physician quantitatively in the diagnosis and follow-up evaluation of CT lung tissue images by providing image segmentation of sub-structures in the lung, lobes, airways, and heart; registration of inspiration and expiration series, which can yield quantitative information such as the air-trapping index and inspiration/expiration ratio; and volumetric and structural analysis, density evaluation, and reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on premise and in a cloud environment, allowing users to connect from various environments such as mobile devices and the Chrome browser. It characterizes nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter from the volume of the nodule, volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Maximum HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the longest diameter of the solid part, measured in sections perpendicular to the major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system automatically performs the measurements, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared Mevis CAD (computer-aided detection) (K043617). It also provides CAC analysis by segmenting the four main coronary arteries (right coronary artery, left main coronary, left anterior descending, and left circumflex artery) and extracting calcium on the coronary arteries to provide the Agatston score, volume score, and mass score for the whole tree and for each segmented artery. Based on the score, it provides CAC risk stratified by age and gender.
AVIEW is a software product that can be installed on a PC. It displays images acquired from various storage devices via DICOM 3.0, the digital imaging and communications standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using the software's tools. It is intended for use in diagnostic patient imaging, for the review and analysis of CT scans. It provides features such as semi-automatic nodule management, maximal-plane measurement, 3D and volumetric measures, and automatic nodule detection through integration with a third-party CAD. It also provides the Brock model, which calculates a malignancy score from numerical or Boolean inputs; follow-up support with automated nodule matching; and automatic categorization of the Lung-RADS score, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings. It also automatically analyzes coronary artery calcification, supporting the user in detecting cardiovascular disease at an early stage and reducing the medical burden.
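Of the functions listed above, VDT has a standard published definition (exponential growth assumed between two scans). A sketch under that standard formula, which may differ in detail from AVIEW's implementation:

```python
import math

def volume_doubling_time_days(v1_mm3, v2_mm3, interval_days):
    """Standard exponential-growth VDT: interval * ln(2) / ln(V2 / V1).

    v1_mm3, v2_mm3: nodule volumes at the earlier and later scan.
    Returns +inf when the nodule did not grow (no doubling occurs).
    """
    if v2_mm3 <= v1_mm3:
        return math.inf
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Example: 500 mm^3 growing to 650 mm^3 over 90 days gives a VDT of ~238 days.
```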
The provided FDA 510(k) summary for the AVIEW 2.0 device (K200714) primarily focuses on establishing substantial equivalence to a predicate device (AVIEW K171199, among others) rather than presenting a detailed clinical study demonstrating its performance against specific acceptance criteria.
However, based on the nonclinical performance testing section and the overall description, we can infer some aspects and present the available information regarding the device's capabilities and how it was tested. It is important to note that explicit acceptance criteria and detailed clinical study results are not fully elaborated in the provided text. The document states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device."
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Note: The document does not explicitly state "acceptance criteria" with numerical or performance targets. Instead, it describes general validation methods and "performance tests" that were conducted to ensure functionality and reliability. The "Reported Device Performance" here refers to the successful completion or validation of these functions.
Feature/Function | Acceptance Criteria (Inferred from Validation) | Reported Device Performance (as per 510(k) Summary) |
---|---|---|
Software Functionality & Reliability | Absence of 'Major' or 'Moderate' defects. | All tests passed based on pre-determined Pass/Fail criteria. No 'Major' or 'Moderate' defects found during System Test. Minor defects, if any, did not impact intended use. |
Unit Test (Major Software Components) | Functional test conditions, performance test conditions, algorithm analysis met. | Performed using Google C++ Unit Test Framework; included functional, performance, and algorithm analysis for image processing. Implied successful completion. |
System Test | No 'Major' or 'Moderate' defects identified. | Conducted by installing software to hardware with recommended specifications. New errors from 'Exploratory Test' were managed. Successfully passed as no 'Major' or 'Moderate' defects were found. |
Specific Performance Tests | (Implied: Accurate, reliable, and consistent output) | |
Auto Lung & Lobe Segmentation | (Implied: Accurate segmentation) | Performed. The device features "Fully automatic lung/lobe segmentation using deep-learning algorithms." |
Airway Segmentation | (Implied: Accurate segmentation) | Performed. The device features "Fully automatic airway segmentation using deep-learning algorithms." |
Nodule Matching Experiment Using Lung Registration | (Implied: Accurate nodule matching and registration) | Performed. The device features "Follow-up support with nodule matching and comparison." |
Validation on DVF Size Optimization with Sub-sampling | (Implied: Optimized DVF size with sub-sampling) | Performed. |
Semi-automatic Nodule Segmentation | (Implied: Accurate segmentation) | Performed. The device features "semi-automatic nodule management" and "semi-automatic nodule measurement (segmentation)." |
Brock Model (PANCAN) Calculation | (Implied: Accurate malignancy score calculation) | Performed. The device "provides Brocks model which calculated the malignancy score based on numerical or Boolean inputs" and "PANCAN risk calculator." |
VDT Calculation | (Implied: Accurate volume doubling time calculation) | Performed. The device offers "Automatic calculation of VDT (volume doubling time)." |
Lung RADS Calculation | (Implied: Accurate Lung-RADS categorization) | Performed. The device "automatically categorize Lung-RADS score" and integrates with "Lung-RADS (classification proposed to aid with findings)." |
Validation LAA Analysis | (Implied: Accurate LAA analysis) | Performed. The device features "LAA analysis (LAA-950HU for INSP, LAA-856HU for EXP), LAA size analysis (D-Slope), and true 3D analysis of LAA cluster sizes." |
Reliability Test for Airway Wall Measurement | (Implied: Reliable airway wall thickness measurement) | Performed. The device offers "Precise airway wall thickness measurement" and "Robust measurement using IBHB (Integral-Based Half-BAND) method" and "Precise AWT-Pi10 calculation." |
CAC Performance (Coronary Artery Calcification) | (Implied: Accurate Agatston, volume, mass scores, and segmentation) | Performed. The device "automatically analyzes coronary artery calcification," "Extracts calcium on coronary artery to provide Agatston score, volume score and mass score," and "Automatically segments calcium area of coronary artery based on deep learning... Segments and provides overlay of four main artery." Also "Provides CAC risk based on age and gender." |
Air Trapping Analysis | (Implied: Accurate air trapping analysis) | Performed. The device features "Air-trapping analysis using INSP/EXP registration." |
INSP/EXP Registration | (Implied: Accurate non-rigid elastic registration) | Performed. The device features "Fully automatic INSP/EXP registration (non-rigid elastic) algorithm." |
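Among the CAC outputs in the table, the Agatston score likewise has a conventional published definition: per-lesion plaque area multiplied by a weight from the lesion's peak HU, summed over axial slices. A sketch under that convention (not AVIEW's disclosed code; the 1 mm² minimum-lesion area is a common but optional rule):

```python
import numpy as np
from scipy import ndimage

def agatston_score(slice_hu, pixel_area_mm2, min_area_mm2=1.0):
    """Agatston contribution of one axial slice under the standard definition.

    slice_hu: 2D array of CT values in HU (classically 3 mm slices).
    A lesion is a connected component of pixels >= 130 HU; it contributes
    its area (mm^2) times a weight set by its peak HU. Sum over slices
    (and over per-artery masks for per-vessel scores) for the total.
    """
    labels, n = ndimage.label(slice_hu >= 130)
    score = 0.0
    for i in range(1, n + 1):
        region = labels == i
        area = region.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue  # drop sub-millimeter specks (common convention)
        peak = float(slice_hu[region].max())
        weight = min(4, int(peak // 100))  # 130-199:1, 200-299:2, 300-399:3, >=400:4
        score += area * weight
    return score
```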
2. Sample Size Used for the Test Set and Data Provenance
The 510(k) summary does not specify the sample size used for the test set(s) used in the performance evaluation, nor does it detail the data provenance (e.g., country of origin, retrospective or prospective). It simply mentions "software verification and validation" and "nonclinical performance testing."
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not provide information on the number of experts used to establish ground truth or their specific qualifications for any of the nonclinical or performance tests mentioned. Given that no clinical study was performed, it is unlikely that medical experts were involved in establishing ground truth for a test set in the conventional sense for clinical performance.
4. Adjudication Method
No information is provided regarding an adjudication method. Since the document states no clinical study was conducted, adjudication by multiple experts would not have been applicable for a clinical performance evaluation.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not reported. The document explicitly states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device." Therefore, there is no mention of an effect size for human readers with or without AI assistance.
6. Standalone Performance Study
Yes, a standalone (algorithm only without human-in-the-loop) performance evaluation was conducted, implied by the "Nonclinical Performance Testing" and "Software Verification and Validation" sections. The "Performance Test" section specifically lists several automatic and semi-automatic functions (e.g., "Auto Lung & Lobe Segmentation," "Airway Segmentation," "CAC Performance") that were tested for the device's inherent capability.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used for each specific performance test. For software components involving segmentation, it is common to use expert-annotated images (manual segmentation by experts) as ground truth for a quantitative comparison. For calculations like Agatston score, or VDT, the ground truth would likely be mathematical computations based on established formulas or reference standards applied to the segmented regions. However, this is inferred, not explicitly stated.
8. Sample Size for the Training Set
The document does not specify the sample size for any training set. It mentions the use of "deep-learning algorithms" for segmentation, which implies a training phase, but details about the training data are absent.
9. How Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for any training set was established. While deep learning is mentioned for certain segmentation tasks, the methodology for creating the labeled training data is not detailed.
AI-Rad Companion (Musculoskeletal) (111 days)
AI-Rad Companion (Musculoskeletal) is an image processing software that provides quantitative and qualitative analysis from previously acquired Computed Tomography DICOM images to support radiologists and physicians from emergency medicine, specialty care, urgent care, and general practice in the evaluation and assessment of musculoskeletal disease. It provides the following functionality:
- Segmentation of vertebras
- Labelling of vertebras
- Measurements of heights in each vertebra and indication if they are critically different
- Measurement of mean Hounsfield value in volume of interest within vertebra.
Only DICOM images of adult patients are considered to be valid input.
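The two quantitative outputs above (per-vertebra heights with a "critically different" flag, and mean HU in a volume of interest) reduce to simple computations. A hypothetical sketch; the 20% height-loss threshold is an assumption for illustration, since the summary does not state the device's criterion:

```python
import numpy as np

def mean_hu_in_voi(ct_hu, voi_mask):
    """Mean Hounsfield value inside a vertebral volume of interest."""
    return float(ct_hu[np.asarray(voi_mask, bool)].mean())

def heights_critically_different(anterior_mm, middle_mm, posterior_mm,
                                 threshold=0.20):
    """Flag a vertebra whose smallest height falls well below its largest.

    The 20% relative-loss threshold is illustrative; the summary does not
    state the device's actual criterion for 'critically different'.
    """
    heights = (anterior_mm, middle_mm, posterior_mm)
    return (max(heights) - min(heights)) / max(heights) >= threshold
```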
AI-Rad Companion (Musculoskeletal) is a software-only image post-processing application that uses deep learning algorithms to post-process CT data of the thorax. AI-Rad Companion (Musculoskeletal) supports workflows for visualization and various measurements of musculoskeletal disease, including:
- Segmentation of vertebras
- Labelling of vertebras
- Measurements of heights in each vertebra and indication if they are critically different
- Measurement of mean Hounsfield value in volume of interest within vertebra
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Vertebra height measurements within 95% limits of agreement (LoA) for thin slices (≤1 mm slice thickness) | 95.1% |
Vertebra height measurements within 95% limits of agreement (LoA) for thicker slices (>1 mm slice thickness) | 87.5% |
Vertebra density measurements within 95% limits of agreement (LoA) | 98.8% |
Note: The document explicitly states that the device's performance was "consistent for all critical subgroups, such as vendors or reconstruction parameters and patient age."
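The endpoint is a Bland-Altman-style check. Assuming the conventional definition of 95% limits of agreement (mean difference ± 1.96 SD, here taken from paired reference-reader differences, a construction the summary does not spell out), the pass ratio could be computed like this:

```python
import numpy as np

def loa_pass_ratio(device_mm, truth_mm, reader_a_mm, reader_b_mm):
    """Fraction of device-vs-truth differences inside 95% limits of agreement.

    LoA are built Bland-Altman style (mean difference +/- 1.96 SD) from
    paired reference-reader differences -- an assumed construction; the
    summary does not describe how the LoA were derived.
    """
    ref = np.asarray(reader_a_mm, float) - np.asarray(reader_b_mm, float)
    lo = ref.mean() - 1.96 * ref.std(ddof=1)
    hi = ref.mean() + 1.96 * ref.std(ddof=1)
    diff = np.asarray(device_mm, float) - np.asarray(truth_mm, float)
    return float(np.mean((diff >= lo) & (diff <= hi)))
```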
Study Details
2. Sample size used for the test set and the data provenance:
- Sample Size: N=140
- Data Provenance: Retrospective performance study on chest CT data from multiple clinical sites across the United States and Europe.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Four radiologists.
- Qualifications: Not explicitly stated beyond "radiologists."
4. Adjudication method for the test set:
- Adjudication Method: Two readers per case, plus a third reader for adjudication (effectively a 2+1 method for cases with disagreement).
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- MRMC Study: No, a comparative effectiveness study with human readers (MRMC) was not explicitly described. The study focused on the device's standalone performance compared to human ground truth.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Standalone Performance: Yes, the described study evaluated the "performance of the AI-Rad Companion (Musculoskeletal) device," which is an algorithm-only image processing software. The reported performance metrics (ratio of measurements within LoA) are for the device itself against the human-established ground truth.
7. The type of ground truth used:
- Type of Ground Truth: Expert consensus, established using manual vertebra height and density measurements performed by four radiologists with adjudication.
8. The sample size for the training set:
- Sample Size for Training Set: Not specified in the provided text. The document mentions that the device "uses the same deep learning technology as in the previously cleared reference device Siemens AI-Rad Companion (Cardiovascular) (K183268)," implying a pre-existing training process, but details of that training set are not in this document.
9. How the ground truth for the training set was established:
- Ground Truth for Training Set: Not specified in the provided text. Since the device leverages deep learning, it would have required a large, annotated dataset for training, but the specifics of how that training ground truth was established are not detailed in this document.