510(k) Data Aggregation
IB Lab GmbH (192 days)
IB Lab LAMA is a fully-automated radiological image processing software device intended to aid users in the measurement of limb-length discrepancy and quantitative knee alignment parameters on uni- and bilateral AP full leg radiographs of individuals at least 22 years of age. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis. The software device is intended to be used by healthcare professionals trained in radiology.
IB Lab LAMA is not indicated for use on radiographs on which Ankle Arthroplasties and/or Unicompartmental Knee Arthroplasties are present.
IB Lab LAMA uses deep learning technology to provide precise fully-automated geometric length and angle measurements of the lower limb on full leg X-ray images. The outputs aid healthcare professionals who are interested in the analysis of leg-length discrepancy and knee alignment in adult patients with suspected or present deformities of the lower extremities. IB Lab LAMA provides the following measurements:
- mechanical axis deviation
- full leg length
- femur length
- tibia length
- leg length discrepancy
- hip knee ankle angle
- anatomical tibiofemoral angle
- anatomical mechanical angle
- joint-line convergence angle
- mechanical lateral proximal femoral angle
- mechanical lateral distal femoral angle
- mechanical medial proximal tibia angle
- mechanical lateral distal tibia angle
The user does not interact directly with IB Lab LAMA except to accept or reject the generated report findings via cleared third-party medical viewers. The measurements are compared to fixed, predetermined norm ranges, based on standard state-of-the-art clinical practices hard-coded into the software. Outputs are summarized in reports that can be viewed on any cleared medical DICOM viewer. IB Lab LAMA operates in a Linux environment and can be deployed on any operating system that supports the third-party software Docker. The integration environment must support IB Lab LAMA data input and output requirements. The device does not interact with the patient directly, nor does it control any life-sustaining devices.
Below is the requested information regarding the acceptance criteria and study proving the device meets those criteria for IB Lab LAMA.
1. Table of Acceptance Criteria and the Reported Device Performance
Measure | Acceptance Criteria (Implicit from Range of Agreement) | Reported Device Performance (Mean Difference +/- Std. Dev. (Lower LOA, Upper LOA) or Sensitivity/Specificity) |
---|---|---|
MAD (Mechanical Axis Deviation) | (Range of Agreement: -6.89 to 4.15 mm) | -1.37 +/- 2.54 mm (-6.89 mm, 4.15 mm) |
Femur Length | (Range of Agreement: -0.26 to 0.45 cm) | 0.09 +/- 0.16 cm (-0.26 cm, 0.45 cm) |
Tibia Length | (Range of Agreement: -0.27 to 0.28 cm) | 0.01 +/- 0.13 cm (-0.27 cm, 0.28 cm) |
Leg Length | (Range of Agreement: -0.23 to 0.32 cm) | 0.05 +/- 0.12 cm (-0.23 cm, 0.32 cm) |
LLD (Leg Length Discrepancy) | (Range of Agreement: -3.16 to 3.43 mm) | 0.13 +/- 1.4 mm (-3.16 mm, 3.43 mm) |
HKA (Hip Knee Ankle Angle) | (Range of Agreement: -1.79 to 1.4 degrees) | -0.19 +/- 0.73 degrees (-1.79 degrees, 1.4 degrees) |
aTFA (Anatomical Tibiofemoral Angle) | (Range of Agreement: -3.08 to 2.05 degrees) | -0.51 +/- 1.18 degrees (-3.08 degrees, 2.05 degrees) |
AMA (Anatomical Mechanical Angle) | (Range of Agreement: -1.88 to 1.99 degrees) | 0.06 +/- 0.89 degrees (-1.88 degrees, 1.99 degrees) |
JLCA (Joint-Line Convergence Angle) | (Range of Agreement: -2.72 to 3.15 degrees) | 0.22 +/- 1.35 degrees (-2.72 degrees, 3.15 degrees) |
mLPFA (Mechanical Lateral Proximal Femoral Angle) | (Range of Agreement: -2.89 to 7.85 degrees) | 2.48 +/- 2.45 degrees (-2.89 degrees, 7.85 degrees) |
mLDFA (Mechanical Lateral Distal Femoral Angle) | (Range of Agreement: -2.46 to 1.72 degrees) | -0.37 +/- 0.96 degrees (-2.46 degrees, 1.72 degrees) |
mMPTA (Mechanical Medial Proximal Tibia Angle) | (Range of Agreement: -2.79 to 2.77 degrees) | -0.01 +/- 1.28 degrees (-2.79 degrees, 2.77 degrees) |
mLDTA (Mechanical Lateral Distal Tibia Angle) | (Range of Agreement: -5.08 to 3.85 degrees) | -0.62 +/- 2.05 degrees (-5.08 degrees, 3.85 degrees) |
Arthroplasty Detection Sensitivity | Not explicitly defined as an acceptance criterion in the table; reported as a performance metric. | 95.05% (90.29%, 98.96%) |
Arthroplasty Detection Specificity | Not explicitly defined as an acceptance criterion in the table; reported as a performance metric. | 99.80% (99.39%, 100.00%) |
Failure Rate | Stated as not exceeding 5% | ~2.8% (9 of 324 legs failed) |
Note: The acceptance criteria for the length and angle measurements are implicitly derived from the reported Lower and Upper Limits of Agreement (LOA) in the Bland-Altman analysis, which demonstrates the agreement between the device and ground truth. The failure rate is explicitly given an acceptable threshold.
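Bland-Altman limits of agreement are conventionally computed as the mean of the paired differences plus or minus 1.96 standard deviations of those differences. The sketch below follows that textbook convention; the function name and the 1.96 multiplier are assumptions, since the submission does not state exactly how its LOAs were derived (the reported intervals do not precisely equal mean ± 1.96·SD).

```python
import math

def bland_altman_limits(device, truth, k=1.96):
    """Mean difference, SD of paired differences, and limits of
    agreement (mean +/- k*SD) between device and reference values."""
    assert len(device) == len(truth) and len(device) > 1
    diffs = [d - t for d, t in zip(device, truth)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample standard deviation (n - 1 denominator)
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    return mean, sd, (mean - k * sd, mean + k * sd)
```

Applied to each measure in the table, this yields the "Mean Difference +/- Std. Dev. (Lower LOA, Upper LOA)" triplet reported in the performance column.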
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The test set comprised 189 bilateral AP lower extremity radiographs of adults, yielding 325 legs.
- Data Provenance: The data was obtained from US clinical sites affiliated with the University of Texas Southwestern Medical Center (UTSW). The study was a standalone performance study. While not explicitly stated as retrospective or prospective, obtaining images from clinical sites and analyzing them for a standalone performance study typically implies a retrospective collection for the purpose of validation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Two (2) US Board certified musculoskeletal radiologists.
- Qualifications: Both radiologists had at least 5 years post-fellowship expertise in the assessment of lower limb length and alignment on AP lower extremity radiographs.
4. Adjudication Method for the Test Set
- Measurements from the two radiologists were averaged to form the initial ground truth.
- If any pair of assessments differed by more than a pre-defined threshold, the respective leg was subjected to a consensus read by the two truthers to establish a reliable ground truth.
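The averaging-with-fallback scheme described above can be sketched as follows. The function name, the threshold handling, and the `consensus_read` callable are hypothetical illustrations, not details taken from the submission.

```python
def ground_truth(reader1, reader2, threshold, consensus_read):
    """Average two readers' per-leg measurements; legs whose readings
    differ by more than `threshold` go to a consensus re-read instead."""
    gt = []
    for a, b in zip(reader1, reader2):
        if abs(a - b) > threshold:
            # Consensus read by the two truthers resolves the discrepancy
            gt.append(consensus_read(a, b))
        else:
            gt.append((a + b) / 2)
    return gt
```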
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly stated or described in the provided text. The study described is a standalone performance study comparing the algorithm's measurements against expert-established ground truth.
6. If a Standalone Study Was Done
- Yes, a standalone performance study (algorithm only without human-in-the-loop performance) was done. The study specifically states, "To validate the outputs of IB Lab LAMA, a clinical data-based standalone performance study was conducted in the U.S."
7. The Type of Ground Truth Used
- Expert Consensus: The ground truth was established by two US Board certified musculoskeletal radiologists, with an adjudication process for discrepancies (averaging and subsequent consensus reading if differences exceeded a threshold).
8. The Sample Size for the Training Set
- The document does not provide information regarding the sample size used for the training set. It only mentions that "The performance of the individual deep-neural networks was tested on hold-out sets" in the non-clinical tests, implying a separate dataset for training and validation beyond the described clinical test set.
9. How the Ground Truth for the Training Set Was Established
- The document does not provide information on how the ground truth for the training set was established. It only refers to "hold-out sets" for non-clinical testing of the deep-neural networks.
IB Lab GmbH (92 days)
IB Lab KOALA is a fully-automated radiological image processing software device for computed radiography (CR) or directly digital (DX) images intended to aid medical professionals in the measurement of minimum joint space width; the assessment of the presence or absence of sclerosis, joint space narrowing, and osteophytes based on the OARSI criteria for these parameters; and the presence or absence of radiographic knee OA based on Kellgren & Lawrence grading of standing, fixed-flexion radiographs of the knee. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis. The system is to be used by trained professionals including, but not limited to, radiologists, orthopedists, physicians, and medical technicians.
The Knee OsteoArthritis Labeling Assistant (KOALA) software provides metric measurements of the joint space width and indicators for the presence of radiographic features of osteoarthritis (OA) on posterior-anterior/anterior-posterior (PA/AP) knee X-ray images. The outputs aid clinical professionals who are interested in the analysis of knee OA in adult patients, either suffering from knee OA or having an elevated risk of developing the disease.
Outputs are summarized in a KOALA report that can be viewed on any FDA-approved DICOM viewer workstation. KOALA operates in a Linux environment and can be deployed to be compatible with any operating system supporting the third-party software Docker. The integration environment must support KOALA data input and output requirements. The device does not interact with the patient directly, nor does it control any life-sustaining devices.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Kellgren-Lawrence status (KL ≥ 2) | Sensitivity: 0.87 (95% CI: 0.84, 0.90); Specificity: 0.83 (95% CI: 0.80, 0.86) |
Joint Space Narrowing status (JSN OARSI grade > 0) | Sensitivity: 0.83 (95% CI: 0.80, 0.86); Specificity: 0.80 (95% CI: 0.76, 0.83) |
Osteophytosis status (Ost OARSI grade > 0) | Sensitivity: 0.86 (95% CI: 0.81, 0.90); Specificity: 0.79 (95% CI: 0.76, 0.83) |
Sclerosis status (Scl OARSI grade > 0) | Sensitivity: 0.82 (95% CI: 0.80, 0.87); Specificity: 0.80 (95% CI: 0.76, 0.83) |
Joint Space Width (JSW), medial | Slope: 1.02 (0.99; 1.05); Intercept [mm]: -0.08 (-0.22; 0.03) |
Joint Space Width (JSW), lateral | Slope: 0.97 (0.93; 1.00); Intercept [mm]: 0.08 (-0.15; 0.30) |
Note: The document states that the "analysis supports good agreement between the two sets of measurements" for JSW, indicating these values meet the unstated acceptance criteria for agreement. The sensitivity and specificity values are direct reported performance against implied thresholds for clinical utility.
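The statistics in the table can be reproduced with standard formulas: sensitivity and specificity are proportions of correctly classified positives and negatives, and the JSW slope/intercept describe a regression of device measurements on reader measurements. The sketch below is a hedged illustration only; the submission does not state which confidence-interval method or regression model was used, so the Wilson interval and ordinary least squares here are assumptions.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity, each with a Wilson interval."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

def ols_line(x, y):
    """Ordinary least-squares slope and intercept, e.g. for
    device JSW (y) regressed on reader JSW (x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx
```

A slope near 1 and an intercept near 0 mm, as reported for both compartments, is what "good agreement" means in this context.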
Study Details
- Sample size used for the test set and the data provenance:
- Sample Size: 6597 radiographs.
- Data Provenance: From a large longitudinal US study, the Osteoarthritis Initiative (OAI) study. The data is retrospective.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Three physicians.
- Qualifications: The document does not specify their exact qualifications (e.g., number of years of experience, specific specialty like "radiologist"), only that they are "physicians."
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The ground truth was established by three physicians "following adjudication procedures for discrepancies." This implies a form of consensus-based adjudication, likely majority rule or discussion to resolve disagreements. The specific method (e.g., 2+1 meaning if two agree, that's the truth, otherwise a third arbitrates) is not explicitly detailed.
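A 2+1-style majority adjudication of the kind speculated above can be sketched as follows. This is hypothetical: the document does not detail the actual procedure, and the `resolve` fallback (e.g. a consensus discussion) is an assumption.

```python
from collections import Counter

def adjudicate(grades, resolve):
    """Take the majority grade among three readers; if all three
    disagree, fall back to `resolve` (e.g. a consensus discussion)."""
    grade, votes = Counter(grades).most_common(1)[0]
    return grade if votes >= 2 else resolve(grades)
```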
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not reported in this section. The study described is a standalone performance validation of the AI system.
- If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done:
- Yes, a standalone clinical performance validation was done. The reported sensitivities, specificities, and JSW measurements are for the KOALA algorithm's performance alone.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Expert Consensus: The ground truth was established by three physicians based on Kellgren-Lawrence grades and OARSI guidelines for osteophyte, sclerosis, and joint space narrowing grades, following adjudication procedures. This is a form of expert consensus derived from image interpretation.
- The sample size for the training set:
- The document does not specify the sample size for the training set. It only mentions that the machine-learning algorithms were "trained on medical images."
- How the ground truth for the training set was established:
- The document does not specify how the ground truth for the training set was established. It only implies that the algorithms were "trained on medical images" with sufficient ground truth to achieve high accuracy.