510(k) Data Aggregation
CoLumboX (107 days)
Smart Soft Healthcare AD
Here's a breakdown of the acceptance criteria and the study proving the CoLumboX device meets them, based on the provided FDA approval letter:
1. Acceptance Criteria and Reported Device Performance
The provided document doesn't explicitly state "acceptance criteria" in a tabulated format with numerical targets. However, the performance study section outlines the overall goal of the validation: to compare CoLumboX's output (both standalone and with physician assistance) against a ground truth for segmentation and measurements, demonstrating its intended performance.
Based on the information, the implied acceptance criteria revolve around the software's ability to accurately perform "Feature segmentation" and "Feature measurement," and its utility in assisting users.
Here's a table summarizing the reported device performance as described:
Acceptance Criteria (Implied) | Reported Device Performance (from "Software Performance Validation on Clinical Data")
---|---
Accurate Feature Segmentation | CoLumboX software outputs compared to ground truth defined by 3 radiologists. Specific metrics (e.g., Dice score, mean average precision) are not provided in the document.
Accurate Feature Measurement | CoLumboX software outputs compared to ground truth defined by 3 radiologists. Specific metrics (e.g., mean absolute error, correlation) are not provided in the document.
Physician Workflow Improvement | Output of a physician using CoLumboX compared to that of a physician not using CoLumboX. No specific effect size or improvement metric is provided in the document.
Cybersecurity Performance | Satisfactory security performance with no critical and high-risk vulnerabilities.
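Since the table names Dice score and mean absolute error as examples of metrics the document does not report, here is a minimal sketch of how those two metrics are conventionally computed. The NumPy-based implementation and function names are illustrative assumptions, not anything described in the submission.

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice coefficient between predicted and ground-truth binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, gt).sum() / denom)

def mean_absolute_error(pred_vals, gt_vals) -> float:
    """MAE between software measurements and ground-truth measurements."""
    pred = np.asarray(pred_vals, dtype=float)
    gt = np.asarray(gt_vals, dtype=float)
    return float(np.mean(np.abs(pred - gt)))
```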
Details of the Study Proving Device Acceptance
2. Sample size used for the test set and the data provenance:
- Sample Size: 100 image studies for 100 patients.
- Data Provenance:
- Country of Origin: U.S.
- Retrospective/Prospective: Not explicitly stated. The references to "previously-acquired DICOM lumbar spine radiograph x-ray images" (Indications for Use) and a "clinical data-based software performance assessment study" (Performance Data) point toward retrospective data collection, but without further details this is not definitively confirmed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: 3 radiologists.
- Qualifications of Experts: Only stated as "radiologists." No information on their years of experience, subspecialty, or board certification is provided in this document.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document states the ground truth was "defined by 3 radiologists." This implies a consensus-based approach, but the specific adjudication method (e.g., majority vote, unanimous agreement, or a third reader resolving disagreements) is not specified; the document says only that the radiologists defined the ground truth, not how they did so collaboratively.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study Conducted?: Yes, a comparative study was done. The study "compared the CoLumboX software outputs, and the output of a physician using and physician not using CoLumboX to the ground truth." This structure suggests an MRMC-like approach comparing human readers with and without AI assistance.
- Effect Size: The document does not provide any specific effect size or quantitative results on how much human readers improved with AI assistance compared to without AI assistance. It only states that this comparison was made.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Standalone Performance?: Yes. The study "compared the CoLumboX software outputs... to the ground truth," indicating an evaluation of the algorithm's performance in isolation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Type of Ground Truth: Expert consensus. Specifically, "ground truth defined by 3 radiologists on segmentations and measurements."
8. The sample size for the training set:
- The document does not provide any information regarding the sample size used for the training set.
9. How the ground truth for the training set was established:
- The document does not provide any information regarding how the ground truth was established for the training set. It only discusses the ground truth for the test set.
CoLumbo (121 days)
Smart Soft Healthcare AD
CoLumbo is an image post-processing and measurement software tool that provides quantitative spine measurements from previously-acquired DICOM lumbar spine Magnetic Resonance (MR) images for users' review, analysis, and interpretation. It provides the following functionality to assist users in visualizing, measuring and documenting out-of-range measurements:
- Feature segmentation;
- Feature measurement;
- Threshold-based labeling of out-of-range measurements; and
- Export of measurement results to a written report for the user's review and approval.
CoLumbo does not produce or recommend any type of medical diagnosis or treatment. Instead, it simply helps users to more easily identify and classify features in lumbar MR images and compile a report. The user is responsible for confirming/modifying settings, reviewing and verifying the software-generated measurements, inspecting out-of-range measurements, and approving draft report content using their medical judgment and discretion.
The device is intended to be used only by hospitals and other medical institutions.
Only DICOM images of MRI acquired from lumbar spine exams of patients aged 18 and above are considered valid input. CoLumbo does not support DICOM images of patients who are pregnant, underwent an MRI scan with contrast media, or have post-operative complications, scoliosis, tumors, infections, or fractures.
CoLumbo is a medical device (software) for viewing and interpreting magnetic resonance imaging (MRI) of the lumbar spine. The software is a quantitative imaging tool that assists radiologists and neuro- and spine surgeons ("users") to identify and measure lumbar spine features in medical images and record their observations in a report. The users then confirm whether the out-of-range measurements represent any true abnormality versus a spurious finding, such as an artifact or normal variation of the anatomy. The segmentation and measurements are classified using "modifiers" based on rule-based algorithms and thresholds set by each software user and stored in the user's individualized software settings. The user also identifies and classifies any other observations that the software may not annotate.
The purpose of CoLumbo is to provide information regarding common spine measurements confirmed by the user and the pre-determined thresholds confirmed or defined by the user. Every feature annotated by the software, based on the user-defined settings, must be reviewed and affirmed by the radiologist before the measurements of these features can be stored and reported. The software initiates adjustable measurements resulting from semi-automatic segmentation. If the user rejects a measurement, the corresponding segmentation is rejected too. Segmentations are not intended to be a final output but serve the purpose of visualization and calculating measurements. The device outputs are intended to be a starting point for a clinical workflow and should not be interpreted or used as a diagnosis. The user is responsible for confirming segmentation and all measurement outputs. The output is an aid to the clinical workflow of measuring patient anatomy and should not be misused as a diagnostic tool.
User-confirmed settings control the sensitivity of the software for labelling measurements in an image. The user (not the software) controls the threshold for identifying out-of-range measurements, and once an out-of-range measurement is identified, the user must in every case confirm or reject its presence. The software facilitates this process by annotating or drawing contours (segmentations) around features of the relevant anatomy and displaying measurements based on these contours. The user maintains control of the process by inspecting the segmentation, measurements and annotations upon which the measurements are based. The user may also examine other features of the imaging not annotated by the software to form a complete impression and diagnostic judgment of the overall state of disease, disorder, or trauma.
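The threshold mechanism described above is mechanical enough to illustrate in a few lines. The following is a minimal sketch assuming per-user numeric ranges stored in settings; all names and example values (MeasurementThreshold, label_out_of_range, the dural sac figures) are hypothetical and do not come from the submission.

```python
from dataclasses import dataclass

@dataclass
class MeasurementThreshold:
    """A hypothetical user-defined normal range for one spine measurement."""
    name: str
    low: float
    high: float

def label_out_of_range(value: float, t: MeasurementThreshold) -> dict:
    """Draft an out-of-range flag; the user must confirm or reject it."""
    flagged = not (t.low <= value <= t.high)
    return {
        "measurement": t.name,
        "value": value,
        "out_of_range": flagged,
        "user_confirmed": None,  # starts unresolved, per the described workflow
    }

# Example: a user-configured lower bound on dural sac cross-sectional area.
dsa = MeasurementThreshold("dural_sac_area_mm2", low=100.0, high=float("inf"))
print(label_out_of_range(82.5, dsa))  # flagged; awaits user confirmation
```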
Here's a breakdown of the acceptance criteria and the study that proves CoLumbo meets them, based on the provided FDA submission:
1. Acceptance Criteria and Reported Device Performance
Primary Endpoints:
- Measurement Accuracy: the maximum Mean Absolute Error (MAE), defined as the upper limit of the 95% confidence interval for the MAE, must be below a predetermined allowable error limit (MAE_Limit) for each measurement listed.
- Segmentation Accuracy (implied by the reported pass criteria, e.g., "0.86 > 0.8"): the lower limit of the 95% confidence interval of each segmentation score must be above a predetermined limit. The document reports these scores as unitless values but does not name the specific overlap metric.
- Reported Performance: All primary endpoints were met.

Measurement accuracy:

Measurement | Reported MAE | 95% Confidence Interval (CI) | MAE_Limit | Meets Criteria?
---|---|---|---|---
Dural Sac Area (Axial) | 14.8 mm² | 12.4 - 17.3 mm² | 20 mm² | Yes (17.3 < 20)

Segmentation accuracy:

Segmentation | Reported Score | 95% Confidence Interval (CI) | Limit | Meets Criteria?
---|---|---|---|---
Vertebral Arch and Adjacent Ligaments (Axial) | 0.87 | 0.86 - 0.88 | 0.8 | Yes (0.86 > 0.8)
Dural Sac (Axial) | 0.92 | 0.92 - 0.93 | 0.8 | Yes (0.92 > 0.8)
Nerve Roots (Axial) | 0.75 | 0.72 - 0.78 | 0.6 | Yes (0.72 > 0.6)
Disc Material Outside Intervertebral Space (Axial) | 0.76 | 0.72 - 0.80 | 0.6 | Yes (0.72 > 0.6)
Disc (Sagittal) | 0.93 | 0.93 - 0.94 | 0.8 | Yes (0.93 > 0.8)
Vertebral Body (Sagittal) | 0.95 | 0.94 - 0.95 | 0.8 | Yes (0.94 > 0.8)
Sacrum S1 (Sagittal) | 0.93 | 0.92 - 0.94 | 0.8 | Yes (0.92 > 0.8)
Disc Mat. Outside IV Space and/or Bulging Part | 0.69 | 0.66 - 0.72 | 0.6 | Yes (0.66 > 0.6)
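The pass/fail logic implied by the two tables (an MAE endpoint passes when the upper 95% CI bound falls below its limit; a segmentation endpoint passes when the lower bound rises above its limit) can be restated compactly. This sketch assumes a normal-approximation confidence interval; the submission does not describe the actual statistical method used.

```python
import numpy as np

Z95 = 1.96  # two-sided 95% normal quantile

def ci95(values) -> tuple[float, float, float]:
    """Mean and normal-approximation 95% CI for a vector of per-case values."""
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    half = Z95 * v.std(ddof=1) / np.sqrt(v.size)
    return mean, mean - half, mean + half

def meets_mae_criterion(abs_errors, mae_limit: float) -> bool:
    """Pass when the upper 95% CI bound of the MAE is below the allowable limit."""
    _, _, upper = ci95(np.abs(abs_errors))
    return upper < mae_limit

def meets_segmentation_criterion(scores, limit: float) -> bool:
    """Pass when the lower 95% CI bound of the mean score is above the limit."""
    _, lower, _ = ci95(scores)
    return lower > limit
```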
2. Sample Size and Data Provenance
- Test Set Sample Size: 101 MR image studies from 101 patients.
- Data Provenance:
- Country of Origin: Collected from seven (7) sites across the U.S.
- Retrospective/Prospective: The document does not explicitly state whether the data were retrospective or prospective, but the phrasing "collected from seven (7) sites across the U.S." typically implies retrospective collection for this type of validation.
3. Number and Qualifications of Experts for Ground Truth
- Number of Experts: Three (3) U.S. radiologists.
- Qualifications: The document states they were "U.S. radiologists" but does not provide details on their years of experience, subspecialty, or specific certifications.
4. Adjudication Method for the Test Set
- Ground Truth Method: For segmentations, the per-pixel majority opinion of the three radiologists established the ground truth. For measurements, the median of the three radiologists' measurements established the ground truth. This is a form of multi-reader consensus.
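Both ground-truth rules are simple to state in code. Below is a minimal NumPy sketch assuming three binary reader masks and per-reader measurement vectors; array shapes and names are illustrative, not taken from the submission.

```python
import numpy as np

def majority_vote_mask(reader_masks: np.ndarray) -> np.ndarray:
    """Per-pixel majority opinion over masks of shape (readers, H, W).

    With 3 readers, a pixel is foreground when at least 2 readers marked it.
    """
    n = reader_masks.shape[0]
    return reader_masks.sum(axis=0) >= (n // 2 + 1)

def median_measurements(per_reader_values: np.ndarray) -> np.ndarray:
    """Per-measurement median over readers, shape (readers, n_measurements)."""
    return np.median(per_reader_values, axis=0)

# Three readers labeling a tiny 2x2 region:
masks = np.array([[[1, 0], [1, 1]],
                  [[1, 0], [0, 1]],
                  [[0, 1], [1, 1]]])
print(majority_vote_mask(masks))
# [[ True False]
#  [ True  True]]
print(median_measurements(np.array([[14.2, 3.1], [15.0, 2.9], [14.6, 3.4]])))
# [14.6  3.1]
```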
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly reported. The study conducted was a "standalone software performance assessment study," meaning it evaluated the algorithm's performance against ground truth without human readers in the loop.
- Effect Size: N/A, as an MRMC study comparing human readers with and without AI assistance was not performed.
6. Standalone (Algorithm Only) Performance Study
- Was it done? Yes. A standalone software performance assessment study was conducted.
- Details: The study "compared the CoLumbo software outputs without any editing by a radiologist to the ground truth defined by 3 radiologists on segmentations and measurements."
7. Type of Ground Truth Used
- Ground Truth Type: Expert consensus.
- For segmentations: Per-pixel majority opinion of three radiologists using a specialized pixel labeling tool.
- For measurements: Median of three radiologists' measurements using a commercial software tool.
8. Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated in the provided text. The document only mentions that the "training and testing data used during the algorithm development, as well as validation data used in the U.S. standalone software performance assessment study were all independent data sets." It does not specify the size of the training set.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth Establishment for Training Set: Not explicitly stated. The document only mentions that the training data and validation data were independent. It does not detail the method by which ground truth was established for the training data used in algorithm development.