
510(k) Data Aggregation

    K Number: K240793
    Device Name: MSKai
    Date Cleared: 2024-12-16 (269 days)
    Regulation Number: 892.2050
    Reference Devices: K193290, K193267

    Intended Use

    MSKai is an image identification, post-processing, measurement, and reporting software tool that provides qualitative viewing and quantitative spine measurements from previously-acquired T2 weighted DICOM lumbar spine Magnetic Resonance Imaging (MRI) images for users' review, evaluation and analysis. It provides the following functionality to assist users in identifying, observing, measuring and reporting measurements:

    • Anatomy segmentation;
    • Anatomy labeling;
    • Anatomy measurement; and
    • Export of measurement results to a qualitative and quantitative report for user's evaluation, amendment and authorization

    MSKai does not serve as a diagnostic device by providing or recommending any type of medical diagnosis or treatment. MSKai simply provides users with objective and repeatable identification, segmentation, and measurement of the lumbar spine, and reports those measurements. The user is responsible for indicating preferences and settings, confirming the software-generated measurements, and reviewing, confirming, and approving draft reports based on their medical training.

    The device is intended to be used only by physicians, radiologists, hospitals, and other medical institutions. Only T2-weighted DICOM MRI images acquired from lumbar spine exams of patients aged 18 and above are acceptable input. MSKai does not support DICOM images of patients who are pregnant or who have post-operative complications, tumors, infections, or complex hardware.

    Device Description

    MSKai is a medical device (software) for inspecting and evaluating T2-weighted magnetic resonance imaging (MRI) of the lumbar spine. The software is an imaging interpretation tool that assists radiologists and neuro/ortho spine surgeons ("users") to identify and measure lumbar spine features in medical images and document their interpretations in a report. The segmentation and measurements are classified using "alerts" based on rule-based algorithms. The user also identifies and classifies any other observations that the software may not annotate.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the MSKai device meets those criteria, based on the provided FDA 510(k) clearance letter:


    MSKai Device Performance Study Summary

    The MSKai device is an image identification, post-processing, measurement, and reporting software tool for T2-weighted lumbar spine MRI images. A standalone software performance assessment study was conducted to demonstrate its accuracy in segmentation and measurement, meeting pre-defined acceptance criteria.

    1. Table of Acceptance Criteria and Reported Device Performance:

      A. Segmentation Performance (Mean Dice Coefficient - MDC)

  | Anatomy Segmentation | View | Acceptance Criteria (MDC Limit) | Reported Mean Dice Coefficient | 95% CI | Met Criteria? |
  |---|---|---|---|---|---|
  | Vertebral Body (L1) | Sagittal | 0.8 | 0.968 | 0.92-0.98 | Yes |
  | Vertebral Body (L2) | Sagittal | 0.8 | 0.977 | 0.93-0.98 | Yes |
  | Vertebral Body (L3) | Sagittal | 0.8 | 0.981 | 0.94-0.99 | Yes |
  | Vertebral Body (L4) | Sagittal | 0.8 | 0.963 | 0.92-0.98 | Yes |
  | Vertebral Body (L5) | Sagittal | 0.8 | 0.985 | 0.91-0.98 | Yes |
  | Vertebral Body (S1) | Sagittal | 0.8 | 0.945 | 0.93-0.99 | Yes |
  | L5/S1 Disc | Sagittal | 0.8 | 0.993 | 0.91-0.99 | Yes |
  | L4/L5 Disc | Sagittal | 0.8 | 0.991 | 0.93-0.99 | Yes |
  | L3/L4 Disc | Sagittal | 0.8 | 0.992 | 0.93-0.99 | Yes |
  | L2/L3 Disc | Sagittal | 0.8 | 0.989 | 0.91-0.99 | Yes |
  | L1/L2 Disc | Sagittal | 0.8 | 0.986 | 0.94-0.99 | Yes |
  | Cord Canal | Sagittal | 0.8 | 0.983 | 0.93-0.99 | Yes |
  | Axial Disc | Axial | 0.8 | 0.984 | 0.89-0.97 | Yes |
  | Vertebral Body | Axial | 0.8 | 0.991 | 0.93-0.99 | Yes |
  | Dural Sac | Axial | 0.8 | 0.978 | 0.90-0.98 | Yes |
  | Nerve Root | Axial | 0.8 | 0.952 | 0.90-0.95 | Yes |
  | Posterior Arch | Axial | 0.8 | 0.911 | 0.90-0.96 | Yes |

      All reported Mean Dice Coefficients (MDC) met or exceeded the acceptance criterion of 0.8.
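The Mean Dice Coefficient used as the acceptance metric above measures the overlap between the algorithm's segmentation mask and the expert-drawn ground truth. A minimal sketch of the metric on toy binary masks (hypothetical data, not the study images):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return float(2.0 * intersection / total) if total else 1.0

# Toy 4x4 masks: predicted vs. expert segmentation (illustrative only)
pred  = np.array([[0,1,1,0],[0,1,1,0],[0,1,1,0],[0,0,0,0]])
truth = np.array([[0,1,1,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]])
print(round(dice_coefficient(pred, truth), 3))
```

A perfect match gives 1.0; the 0.8 limit in the table allows moderate boundary disagreement while still requiring substantial overlap.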

      B. Measurement Performance (Mean Absolute Error - MAE)

  | Structural Measurement | View | Acceptance Criteria (MAE Limit) | Reported MAE | 95% CI | Met Criteria? |
  |---|---|---|---|---|---|
  | Protruding Disc Material (L5/S1) | Sagittal | 2 mm | 1.19 mm | 1.11-1.68 mm | Yes |
  | Protruding Disc Material (L4/L5) | Sagittal | 2 mm | 1.22 mm | 1.12-1.71 mm | Yes |
  | Protruding Disc Material (L3/L4) | Sagittal | 2 mm | 1.23 mm | 1.14-1.65 mm | Yes |
  | Protruding Disc Material (L2/L3) | Sagittal | 2 mm | 1.19 mm | 1.07-1.61 mm | Yes |
  | Protruding Disc Material (L1/L2) | Sagittal | 2 mm | 1.21 mm | 1.11-1.63 mm | Yes |
  | Intervertebral Angle (L5/S1) | Sagittal | 6° | 2.6° | 1.58-2.45° | Yes |
  | Intervertebral Angle (L4/L5) | Sagittal | 6° | 2.7° | 1.61-2.59° | Yes |
  | Intervertebral Angle (L3/L4) | Sagittal | 6° | 2.7° | 1.57-2.54° | Yes |
  | Intervertebral Angle (L2/L3) | Sagittal | 6° | 2.9° | 1.64-2.62° | Yes |
  | Intervertebral Angle (L1/L2) | Sagittal | 6° | 2.4° | 1.66-2.48° | Yes |
  | Vertebral Body Height (Anterior) (L1) | Sagittal | 2 mm | 0.66 mm | 0.62-0.91 mm | Yes |
  | Vertebral Body Height (Anterior) (L2) | Sagittal | 2 mm | 0.68 mm | 0.61-0.88 mm | Yes |
  | Vertebral Body Height (Anterior) (L3) | Sagittal | 2 mm | 0.69 mm | 0.61-0.93 mm | Yes |
  | Vertebral Body Height (Anterior) (L4) | Sagittal | 2 mm | 0.64 mm | 0.58-0.91 mm | Yes |
  | Vertebral Body Height (Anterior) (L5) | Sagittal | 2 mm | 0.67 mm | 0.61-0.91 mm | Yes |
  | Vertebral Body Height (Midline) (L1) | Sagittal | 2 mm | 0.94 mm | 0.62-0.87 mm | Yes |
  | Vertebral Body Height (Midline) (L2) | Sagittal | 2 mm | 0.93 mm | 0.54-1.03 mm | Yes |
  | Vertebral Body Height (Midline) (L3) | Sagittal | 2 mm | 0.96 mm | 0.61-1.01 mm | Yes |
  | Vertebral Body Height (Midline) (L4) | Sagittal | 2 mm | 0.97 mm | 0.57-1.13 mm | Yes |
  | Vertebral Body Height (Midline) (L5) | Sagittal | 2 mm | 0.94 mm | 0.57-0.99 mm | Yes |
  | Vertebral Body Height (Posterior) (L1) | Sagittal | 2 mm | 0.92 mm | 0.67-0.99 mm | Yes |
  | Vertebral Body Height (Posterior) (L2) | Sagittal | 2 mm | 0.93 mm | 0.61-1.01 mm | Yes |
  | Vertebral Body Height (Posterior) (L3) | Sagittal | 2 mm | 0.91 mm | 0.68-0.99 mm | Yes |
  | Vertebral Body Height (Posterior) (L4) | Sagittal | 2 mm | 0.92 mm | 0.71-1.06 mm | Yes |
  | Vertebral Body Height (Posterior) (L5) | Sagittal | 2 mm | 0.93 mm | 0.68-1.09 mm | Yes |
  | Disc Height (Anterior) (L5/S1) | Sagittal | 2 mm | 0.91 mm | 0.67-0.99 mm | Yes |
  | Disc Height (Anterior) (L4/L5) | Sagittal | 2 mm | 0.90 mm | 0.57-1.06 mm | Yes |
  | Disc Height (Anterior) (L3/L4) | Sagittal | 2 mm | 0.87 mm | 0.62-1.03 mm | Yes |
  | Disc Height (Anterior) (L2/L3) | Sagittal | 2 mm | 0.89 mm | 0.78-1.06 mm | Yes |
  | Disc Height (Anterior) (L1/L2) | Sagittal | 2 mm | 0.93 mm | 0.71-1.23 mm | Yes |
  | Disc Height (Midline) (L5/S1) | Sagittal | 2 mm | 0.93 mm | 0.73-1.12 mm | Yes |
  | Disc Height (Midline) (L4/L5) | Sagittal | 2 mm | 0.90 mm | 0.68-1.01 mm | Yes |
  | Disc Height (Midline) (L3/L4) | Sagittal | 2 mm | 0.89 mm | 0.71-1.13 mm | Yes |
  | Disc Height (Midline) (L2/L3) | Sagittal | 2 mm | 0.91 mm | 0.64-1.03 mm | Yes |
  | Disc Height (Midline) (L1/L2) | Sagittal | 2 mm | 0.92 mm | 0.69-1.11 mm | Yes |
  | Disc Height (Posterior) (L5/S1) | Sagittal | 2 mm | 0.87 mm | 0.58-1.03 mm | Yes |
  | Disc Height (Posterior) (L4/L5) | Sagittal | 2 mm | 0.93 mm | 0.67-0.99 mm | Yes |
  | Disc Height (Posterior) (L3/L4) | Sagittal | 2 mm | 0.87 mm | 0.66-1.07 mm | Yes |
  | Disc Height (Posterior) (L2/L3) | Sagittal | 2 mm | 0.93 mm | 0.72-1.21 mm | Yes |
  | Disc Height (Posterior) (L1/L2) | Sagittal | 2 mm | 0.89 mm | 0.58-0.91 mm | Yes |
  | Anterio-Lithesis (L5/S1) | Sagittal | 2 mm | 1.04 mm | 0.81-1.43 mm | Yes |
  | Anterio-Lithesis (L4/L5) | Sagittal | 2 mm | 1.02 mm | 0.77-1.52 mm | Yes |
  | Anterio-Lithesis (L3/L4) | Sagittal | 2 mm | 1.05 mm | 0.88-1.61 mm | Yes |
  | Anterio-Lithesis (L2/L3) | Sagittal | 2 mm | 1.07 mm | 0.84-1.43 mm | Yes |
  | Anterio-Lithesis (L1/L2) | Sagittal | 2 mm | 1.02 mm | 0.79-1.33 mm | Yes |
  | Retro-Lithesis (L5/S1) | Sagittal | 2 mm | 1.07 mm | 0.82-1.51 mm | Yes |
  | Retro-Lithesis (L4/L5) | Sagittal | 2 mm | 1.049 mm | 0.78-1.42 mm | Yes |
  | Retro-Lithesis (L3/L4) | Sagittal | 2 mm | 1.01 mm | 0.81-1.29 mm | Yes |
  | Retro-Lithesis (L2/L3) | Sagittal | 2 mm | 1.05 mm | 0.77-1.34 mm | Yes |
  | Retro-Lithesis (L1/L2) | Sagittal | 2 mm | 1.08 mm | 0.83-1.27 mm | Yes |
  | Lordotic Angle | Sagittal | 6° | 2.99° | 2.01-3.62° | Yes |
  | Protruding Disc Material | Axial | 2 mm | 0.97 mm | 0.72-1.42 mm | Yes |
  | Dural Sac Diameter | Axial | 2 mm | 1.3 mm | 0.87-1.39 mm | Yes |

      All reported Mean Absolute Errors (MAE) were below the acceptance criteria of 2 mm for linear measurements and 6° for angular measurements.
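The pass/fail logic above amounts to computing a Mean Absolute Error against the expert ground truth and comparing it to the fixed limit. A minimal sketch with hypothetical values (not the study data):

```python
import numpy as np

def mean_absolute_error(auto, manual) -> float:
    """MAE between automated measurements and expert ground truth."""
    auto = np.asarray(auto, dtype=float)
    manual = np.asarray(manual, dtype=float)
    return float(np.abs(auto - manual).mean())

def meets_criterion(mae: float, limit: float) -> bool:
    """Pass/fail against the acceptance limit (2 mm linear, 6° angular)."""
    return mae < limit

# Hypothetical disc-height measurements in mm (illustrative only)
auto   = [9.1, 10.4, 8.8, 11.2]
manual = [9.0, 10.0, 9.5, 11.0]
mae = mean_absolute_error(auto, manual)
print(round(mae, 3), meets_criterion(mae, 2.0))
```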

    2. Sample Size and Data Provenance:

      • Test set sample size: 238 MR image studies (from 238 patients).
      • Data Provenance: Images were collected from five sites across the U.S.
      • Timeframe: Not explicitly stated whether retrospective or prospective. The description "collected from five sites across the U.S." does not specify whether the images were collected specifically for this study, which suggests retrospective data, as is typical for 510(k) performance studies.
    3. Number of Experts and Qualifications for Ground Truth:

      • Total number of experts for ground truth establishment (initial curation for training/testing): 5 experts.
      • Qualifications of these experts: 3 neurosurgeons, 1 interventional radiologist, and 1 PhD in Biomechanics.
      • Number of experts for measurement analysis in the testing dataset (independent group): 4 separate and independent experts.
      • Qualifications of these specific measurement experts: 2 neurosurgeons, 2 radiologists.
    4. Adjudication Method for the Test Set:

      • The document implies a consensus method for ground truth, stating that "Ground truth data, curated by five experts in a two-phase process, underpins model training" and that the independent testing dataset was "measured by an independent group of 4 experts."
      • For the testing dataset measurements, it states "Measurement analysis was performed by 4 separate and independent experts" and mentions that "Inter-rater reliabilities were conducted in the experts who perform the training/testing measurements."
      • However, it does not explicitly state a formal adjudication method such as "2+1" or "3+1" (where a majority or a third reader resolves discrepancies). The language suggests a consensus or multiple-read approach, but the specific rule for resolving disagreements is not detailed.
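For context, a formal "2+1" adjudication rule, which the document does not confirm was used here, resolves a two-reader disagreement by consulting a third reader. A hypothetical sketch:

```python
def adjudicate_2plus1(reader_a: str, reader_b: str, adjudicator) -> str:
    """'2+1' rule: if the two primary readers agree, their label stands;
    otherwise a third reader (the adjudicator) is consulted and decides.
    `adjudicator` is a callable so the third read only happens on disagreement."""
    if reader_a == reader_b:
        return reader_a
    return adjudicator()

# Hypothetical case labels (illustrative only)
print(adjudicate_2plus1("stenosis", "stenosis", lambda: "normal"))  # agreement, third reader not consulted
print(adjudicate_2plus1("stenosis", "normal", lambda: "stenosis"))  # disagreement, third reader decides
```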
    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No MRMC study was done. The document explicitly states: "No human clinical study was conducted to support the pre-market clearance." This means there is no data provided on how much human readers improve with AI vs. without AI assistance. The study described is a standalone performance study.
    6. Standalone Performance:

      • Yes, a standalone (algorithm only without human-in-the-loop performance) study was done. The study's title is "Standalone Software Performance Study" and it states, "the MSKai software outputs without any editing by a radiologist" were compared to ground truth.
    7. Type of Ground Truth Used:

      • Expert Consensus: The ground truth was established by human experts (3 neurosurgeons, 1 interventional radiologist, and 1 PhD in Biomechanics) who curated and measured anatomical segmentations and structures.
    8. Sample Size for the Training Set:

      • Training Data: The document states that "Three blind independent data sets were used to train, test and measure within the model" and identifies a "Ground Truth dataset: 255 patient images" used to underpin model training. The exact number of images used only for the training phase, as opposed to internal testing/validation during development, is not broken out.
    9. How Ground Truth for the Training Set Was Established:

      • The ground truth for the training set (referred to as the "Ground Truth dataset") was "curated by five experts in a two-phase process." These experts were 3 neurosurgeons, 1 interventional radiologist, and 1 PhD in Biomechanics. This involved establishing the accurate segmentations and measurements that the algorithm was trained to reproduce.

    K Number: K222406
    Device Name: Clarius AI
    Date Cleared: 2023-01-23 (167 days)
    Regulation Number: 892.2050
    Reference Devices: K220497, K193267

    Intended Use

    Clarius AI is intended to semi-automatically place calipers for non-invasive measurements of musculoskeletal structures (e.g., Achilles' tendon, plantar fascia, patellar tendon) on ultrasound data acquired by the Clarius Ultrasound Scanner (i.e., L7 and L15). The user shall be a healthcare professional trained and qualified in MSK (musculoskeletal) ultrasound. The user shall retain the ultimate responsibility of ascertaining the measurements based on standard practices and clinical judgment.

    Device Description

    Clarius AI is a radiological (ultrasound) image processing software application which implements artificial intelligence (AI), including non-adaptive machine learning algorithms, and is incorporated into the Clarius App software for use as part of the complete Clarius Ultrasound Scanner system product offering in musculoskeletal (MSK) ultrasound imaging applications. Clarius AI (MSK model) is intended for use by trained healthcare practitioners for non-invasive measurements of ultrasound data from musculoskeletal (MSK) ultrasound imaging acquired by the Clarius Ultrasound Scanner system using an artificial intelligence (AI) image segmentation algorithm. Clarius AI (MSK model) is intended to semi-automatically place adjustable calipers and provide supplementary information to the user regarding tendon thickness measurements (i.e., foot/plantar fascia, ankle/Achilles' tendon, knee/patellar tendon). Clarius AI is intended to inform clinical management and is not intended to replace clinical decision-making. The clinician retains the ultimate responsibility of ascertaining the measurements based on standard practices and clinical judgment. Clarius AI is indicated for use in adult patients only.

    During the ultrasound imaging procedure, the anatomical site is selected through a preset software selection (e.g., foot, ankle, knee), upon which Clarius AI engages to segment the corresponding tendon. Clarius AI analyzes ultrasound images in real time and outputs a probability for each pixel within the image for determination of the particular tendon thickness.

    The combination of all pixels meeting a programmed threshold is rendered as an overlay displayed on top of the ultrasound image with a pre-programmed transparency, so that the ultrasound greyscale remains visible. Once the user has obtained the best view, imaging can be manually paused, at which point Clarius AI further analyzes the tendon segmentation to determine the greatest thickness, in number of pixels, and subsequently places two measurement calipers corresponding to the top and bottom of the tendon at its thickest region, outputting a value in millimeters. The user can then manually adjust the measurement calipers if desired. Clarius AI does not perform any functions that could not be accomplished manually by a trained and qualified user.
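The threshold-then-measure pipeline described above (per-pixel probabilities, binary mask, thickest region, caliper endpoints) can be sketched as follows; the threshold value and the toy array are illustrative assumptions, not Clarius internals:

```python
import numpy as np

def place_calipers(prob_map: np.ndarray, threshold: float = 0.5):
    """From a per-pixel tendon probability map, threshold to a binary mask,
    find the image column where the tendon is thickest, and return the
    (column, top row, bottom row) -- the two caliper positions."""
    mask = prob_map >= threshold
    thickness = mask.sum(axis=0)           # tendon pixels per column
    col = int(thickness.argmax())          # thickest column
    rows = np.flatnonzero(mask[:, col])
    return col, int(rows[0]), int(rows[-1])

# Toy 5x4 probability map (hypothetical values)
prob = np.array([
    [0.1, 0.2, 0.9, 0.1],
    [0.2, 0.8, 0.9, 0.2],
    [0.6, 0.9, 0.8, 0.3],
    [0.1, 0.7, 0.9, 0.1],
    [0.0, 0.1, 0.2, 0.0],
])
col, top, bottom = place_calipers(prob)
print(col, top, bottom)  # thickness in pixels = bottom - top + 1
```

In the real device the pixel thickness would then be converted to millimeters using the image's physical pixel spacing.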

    Clarius AI (MSK model) is incorporated into the Clarius App software and is intended for use with the following Clarius Ultrasound Scanner system transducers (previously 510(k)-cleared in K180799, K192107, and K213436):

    Clarius Ultrasound Transducers: L7 and L15
    Clarius App Software: Clarius Ultrasound App (Clarius App) for iOS; Clarius Ultrasound App (Clarius App) for Android

    AI/ML Overview

    The Clarius AI device is intended for semi-automatic placement of calipers for non-invasive measurements of musculoskeletal structures (e.g., Achilles' tendon, plantar fascia, patellar tendon) on ultrasound data. The device's performance was evaluated through non-clinical and clinical testing to demonstrate its safety and effectiveness.

    Here’s a breakdown of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the Clarius AI device, particularly regarding measurement accuracy, are based on achieving non-inferiority to manual measurements made by expert clinicians. The specific criterion for clinical significance was defined as a difference of greater than 20% from normal thickness measurements.

    | Acceptance Criteria (Measurement Accuracy) | Reported Device Performance |
    |---|---|
    | Automatic thickness measurement to be non-inferior to mean manual measurements. | Non-inferior (p = 9.0 x 10^-5). The mean difference between automated and manual measurements was 0.03% (95% CI: -0.05% to -0.01%). |
    | Clinically significant difference defined as >20% absolute percent difference between automatic measurement and mean reviewer measurement, corresponding to 0.6 mm for plantar fascia, 1 mm for patellar tendon, and 1.2 mm for Achilles' tendon. | The automatic measurement was found to be non-inferior to manual measurements within this clinically significant margin. |
    | Automatic segmentation to have a high degree of overlap with ground truth. | Average Dice score of 96% and mean IoU of 94% for tendon segmentation. |
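The Dice score and IoU reported for segmentation are related overlap metrics: for any single image, Dice D and IoU J satisfy D = 2J / (1 + J), though the averaged values reported across a dataset need not satisfy the identity exactly. A minimal sketch:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

def dice_from_iou(j: float) -> float:
    """Per-image identity relating the two overlap metrics: D = 2J / (1 + J)."""
    return 2 * j / (1 + j)

# A per-image IoU of 0.94 corresponds to a Dice of about 0.969
print(round(dice_from_iou(0.94), 3))
```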

    2. Sample Sizes and Data Provenance

    • Test Set (Validation Phase):

      • Sample Size: 73 subjects, resulting in a total of 2,503 ultrasound images/frames.
      • Data Provenance: Retrospective analysis of anonymized ultrasound images. Images were captured in-house from volunteer subjects and by clinical partners, mainly located in the USA. The data was anonymized and queried from Clarius Cloud storage.
    • Training Set:

      • Sample Size: A total of 20,287 images.
      • Data Provenance: Images were captured in-house from volunteer subjects and by clinical partners, mainly located in the USA. The data was anonymized and queried from Clarius Cloud storage.

    3. Number of Experts and Qualifications for Ground Truth - Test Set

    The document mentions "expert clinicians" and "licensed clinicians with relevant (i.e., musculoskeletal) ultrasound experience" were involved in establishing the ground truth for the test set (manual measurements). However, the exact number of experts and their specific qualifications (e.g., years of experience, board certification) are not explicitly stated in the provided text. The study states "reviewer pairs" for calculating manual measurement differences, implying at least two reviewers per case for comparison.

    4. Adjudication Method for the Test Set

    The provided text does not explicitly detail an adjudication method (e.g., 2+1, 3+1). It states that the "difference between auto-measurements and mean manual measurements" was compared to the "mean difference between manual measurements within the clinically significant margin." This suggests that multiple manual measurements were obtained, and their mean was used for comparison, rather than a specific consensus or adjudication process described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The provided text describes a verification test where "Clarius AI auto-measurements are non-inferior to manual measurements performed by licensed clinicians." While this compares AI performance against human performance, it doesn't describe a typical MRMC study designed to assess how human readers improve with AI assistance versus without AI assistance (i.e., an effect size of AI assistance on human readers). The focus was on the AI's standalone accuracy relative to human measurements.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop)

    Yes, a standalone performance evaluation was largely conducted for the AI algorithm. The core of the "Measurement Accuracy" testing compared the automatic thickness measurement directly against the mean manual measurements, indicating the algorithm's performance without specific human-in-the-loop assistance influencing the AI's initial output for that specific evaluation. The segmentation accuracy (Dice score and IoU) also represents standalone algorithm performance. While the AI semi-automatically places calipers and allows for manual adjustment, the non-inferiority study specifically evaluated the AI's auto-measurement against expert manual measurements.

    7. Type of Ground Truth Used

    • Expert Consensus / Manual Measurements: For the measurement accuracy study, the ground truth was established by "mean manual measurements performed by licensed clinicians with relevant (i.e., musculoskeletal) ultrasound experience."
    • Expert Annotation (Segmentation): For the segmentation evaluation, the ground truth was described as "segmentation ground truth" where "the tendon regions in the images were annotated by a clinical scientist as the ground truth."

    8. Sample Size for the Training Set

    A total of 20,287 images were used for the training phase.

    9. How the Ground Truth for the Training Set was Established

    For the training phase, "the tendon regions in the images were annotated by a clinical scientist as the ground truth." This indicates that human experts (clinical scientists) manually outlined or labeled the tendon regions in the ultrasound images to create the reference data used to train the machine learning model.


    K Number: K220905
    Date Cleared: 2022-11-17 (234 days)
    Regulation Number: 882.4560
    Reference Devices: K193267, K211254

    Intended Use

    The xvision Spine System, with xvision System Software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous spine procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the spine or pelvis, can be identified relative to CT imagery of the anatomy. This can include the following spinal implant procedures:

    • Posterior Pedicle Screw Placement in the thoracic and sacro-lumbar region.
    • Posterior Screw Placement in C3-C7 vertebrae
    • Iliosacral Screw Placement

    The Headset of the xvision System displays 2D stereotaxic screens and a virtual anatomy screen. The stereotaxic screen is indicated for correlating the tracked instrument location to the registered patient imagery. The virtual screen is indicated for displaying the virtual instrument location to the virtual anatomy to assist in percutaneous visualization and trajectory planning.

    The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed stereotaxic information.

    Device Description

    The xvision Spine (XVS) system is an image-guided navigation system that is designed to assist surgeons in placing pedicle screws accurately, during open or percutaneous computer-assisted spinal surgery. The system consists of a dedicated software, Headset, single use passive reflective markers and reusable components. It uses wireless optical tracking technology and displays to the surgeon the location of the tracked surgical instruments relative to the acquired intraoperative patient's scan, onto the surgical field. The 2D scanned data and 3D reconstructed model, along with tracking information, are projected to the surgeons' retina using a transparent near-eye-display Headset, allowing the surgeon to both look at the patient and the navigation data at the same time.

    The following modifications have been applied to the previously cleared XVS system:

    The indications for use of the subject device are expanded compared to the cleared predicate device and include screw instrumentation in additional spine segments, i.e., cervical C3-C7 vertebrae and the iliosacral region. Additionally, an Artificial Intelligence (AI) spine segmentation algorithm, based on a Convolutional Neural Network (CNN), has been added to provide an improved virtual 3D spine model. The virtual 3D model can be built from the original CT scan or from the AI-segmented CT scan. Neither of these modifications alters the intended use of the device as an aid in localization during spine surgery or its principles of operation.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the xvision Spine System, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document provides the "System Accuracy Requirement" as the primary acceptance criterion related to performance. The study then reports on validation studies that demonstrate the device meets these specifications.

    | Acceptance Criterion / Performance Parameter | Reported Device Performance |
    |---|---|
    | System-level accuracy: mean 3D positional error of 2.0 mm | Validated in two cadaver studies; positional errors calculated as the difference between actual and virtual screw tip position. |
    | System-level accuracy: mean trajectory error of 2° | Validated in two cadaver studies; trajectory errors calculated as the difference between screw orientation and its recorded virtual trajectory. |
    | AI segmentation (no quantitative acceptance criterion explicitly stated) | Mean Dice coefficient calculated, compared to manual segmentations approved by US physicians. |
    | Clinical accuracy (no explicit acceptance criterion stated) | Evaluated using the Gertzbein-Robbins score by viewing post-operative scans. |
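The positional and trajectory errors described above are, respectively, a 3D Euclidean distance between actual and virtual screw tip positions and the angle between the actual and planned screw axes. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

def positional_error(actual_tip, virtual_tip) -> float:
    """3D Euclidean distance between actual and virtual screw tip (mm)."""
    return float(np.linalg.norm(np.asarray(actual_tip, dtype=float)
                                - np.asarray(virtual_tip, dtype=float)))

def trajectory_error(actual_axis, virtual_axis) -> float:
    """Angle in degrees between the actual and planned screw axes."""
    a = np.asarray(actual_axis, dtype=float)
    v = np.asarray(virtual_axis, dtype=float)
    cos = np.dot(a, v) / (np.linalg.norm(a) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical screw tip positions (mm) and axis vectors
print(round(positional_error([10, 20, 30], [11, 20.5, 30]), 3))
print(round(trajectory_error([0, 0, 1], [0, 0.02, 1]), 2))
```

The `np.clip` guards against floating-point round-off pushing the cosine slightly outside [-1, 1].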

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Two cadaver studies were conducted. The specific number of cases, screws, or segments tested within these cadaver studies is not explicitly stated in the provided document.
    • Data Provenance: The document states "two cadaver studies." This suggests the data is prospective (generated for this specific testing) and likely from a laboratory or research setting. The country of origin of the cadavers or the study location is not specified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • For the cadaver studies (positional and trajectory errors), the method of establishing ground truth (e.g., through physical measurements) is implied by "actual... screw tip position" and "actual... screw orientation" but the number or qualifications of experts involved in these measurements are not specified.
    • For the AI segmentation algorithm validation: "manual segmentations that were approved by US physicians" were used as ground truth. The number of physicians/experts and their specific qualifications (e.g., years of experience as radiologists or surgeons) are not specified.

    4. Adjudication Method for the Test Set

    • For the cadaver studies, no adjudication method is described. Measurements for positional and trajectory errors are typically objective and can be directly measured.
    • For the AI segmentation validation, the manual segmentations were "approved by US physicians." This suggests a consensus or review process, but the specific adjudication method (e.g., 2+1, 3+1) is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • The document does not indicate that an MRMC comparative effectiveness study was done to evaluate how human readers improve with AI vs. without AI assistance. The AI component is described as providing an "improved virtual 3D spine model" but its impact on human reader performance is not measured in this submission.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, a standalone performance evaluation was conducted for the AI segmentation algorithm. The "mean Dice coefficient was calculated" to measure the quality of the algorithm's segmentation compared to manual ground truth. This is a common metric for evaluating the performance of segmentation algorithms independently.

    7. The Type of Ground Truth Used

    • For system accuracy (positional/trajectory errors): The ground truth was based on physical measurements of actual screw tip position and orientation in cadavers.
    • For AI segmentation algorithm: The ground truth was established by manual segmentations approved by US physicians.

    8. The Sample Size for the Training Set

    • The document does not specify the sample size for the training set used for the Convolutional Neural Network (CNN) based AI spine segmentation algorithm. It only mentions that the algorithm has been "added to provide an improved virtual 3D spine model."

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly state how the ground truth for the training set of the AI algorithm was established. While it mentions manual segmentations by US physicians for the validation set, it does not detail the process for the training data. It's common practice for training data ground truth to also be established by expert annotation, but this is not explicitly confirmed in the provided text.

    K Number: K222361
    Date Cleared: 2022-10-20 (77 days)
    Regulation Number: 892.1750
    Reference Devices: K193267

    Intended Use

    AI-Rad Companion (Musculoskeletal) is an image processing software that provides quantitative analysis from previously acquired Computed Tomography DICOM images to support radiologists and physicians from emergency medicine, specialty care, urgent care, and general practice in the evaluation and assessment of musculoskeletal disease.

    It provides the following functionality:

    • Segmentation of vertebrae
    • Labelling of vertebrae
    • Measurements of heights in each vertebra and indication if they are critically different
    • Measurement of mean Hounsfield value in a volume of interest within each vertebra

    Only DICOM images of adult patients are considered to be valid input.

    Device Description

    AI-Rad Companion (Musculoskeletal) SW version VA20 is an enhancement to the previously cleared device AI-Rad Companion (Musculoskeletal) K193267 that utilizes deep learning algorithms to provide quantitative and qualitative analysis to computed tomography DICOM images to support qualified clinicians in the evaluation and assessment of the spine.

    As an update to the previously cleared device, the following modifications have been made:

    • Enhanced AI Algorithm: the vertebrae segmentation accuracy has been improved by retraining the algorithm.
    • DICOM Reports: the reports generated by the system have been enhanced to support both human- and machine-readable formats. Additionally, an updated version of the system changed the DICOM structured report format to TID 1500 for applicable content.
    • Architecture Enhancement for on-premises Edge Deployment: the system supports the existing cloud deployment as well as an on-premises "edge" deployment. The system remains hosted in the teamplay digital health platform and remains driven by the AI-Rad Companion Engine. With the edge deployment, the processing of clinical data and the generation of results can be performed on-premises within the customer network. The edge system is fully connected to the cloud for remote monitoring and maintenance.
    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets those criteria, based on the provided document:

    Acceptance Criteria and Device Performance Study

    1. Table of Acceptance Criteria and Reported Device Performance

    | Validation Type | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Mislabeling of vertebrae or absence of height measurement | Ratio of cases that are mislabeled or missing measurements shall be below a predefined limit | |
    | Accuracy of height and density measurements (slice thickness > 1.0 mm) | The difference should be within the LoA for ≥ 85% of cases | |
    | Consistency of height and density measurement across critical subgroups | For each sub-group, the ratio of measurements within the corresponding LoA should not drop by more than 5% compared to the ratio for all data sets | Overall failure rate of the subject device was consistent with the predicate, and results of all sub-group analyses were rated equal or superior to the predicate regarding the ratio of measurements within the corresponding LoA. |
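The criteria above are phrased in terms of limits of agreement (LoA). As a rough illustration, assuming Bland-Altman style 95% limits (mean difference ± 1.96·SD, a standard construction that the summary does not explicitly confirm), the "ratio of measurements within the LoA" can be computed like this:

```python
import statistics

def limits_of_agreement(diffs):
    """95% limits of agreement for device-minus-reference differences,
    using the common Bland-Altman form: mean ± 1.96 * SD."""
    m = statistics.mean(diffs)
    s = statistics.stdev(diffs)
    return m - 1.96 * s, m + 1.96 * s

def ratio_within_loa(diffs, loa):
    """Fraction of cases whose difference falls inside the given LoA."""
    lo, hi = loa
    return sum(lo <= d <= hi for d in diffs) / len(diffs)
```

A criterion such as "within the LoA for ≥ 85% of cases" would then amount to checking `ratio_within_loa(device_diffs, reference_loa) >= 0.85`.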

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 140 Chest CT scans, comprising 1,553 thoracic vertebrae.
    • Data Provenance: The document lists two sources for the data:
      • KUM (N=80): Primary indications and various medical conditions are detailed (e.g., Lung/airways, infect focus, malignancy, cardiovascular, trauma).
      • NLST (N=60): Comorbidities are detailed (e.g., diabetes, heart disease, hypertension, cancer, smoking history).
      • The patient demographics (sex, age, manufacturer of CT scanner, slice thickness, dose, reconstruction method, kernel, contrast enhancement) are provided.
      • The document implies this is retrospective data collected from existing patient studies, since the "testing data information" describes pathologies and patient information as already recorded.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Four board-certified radiologists.
    • Qualifications of Experts: Board-certified radiologists. (No specific years of experience are mentioned).

    4. Adjudication Method for the Test Set

    • Adjudication Method: Each case was read independently by two radiologists in randomized order.
      • For outliers (cases where the initial two radiologists' annotations potentially differed significantly or inconsistently), a third annotation was blindly provided by a radiologist who had not previously annotated that specific case.
      • The ground truth was then generated by the average of the two most concordant measurements.
      • For all other cases (non-outliers), the two initial annotations were used as ground truth. This suggests a form of 2+1 adjudication for outliers and 2-reader consensus for non-outliers.
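The outlier rule described above (ground truth as the average of the two most concordant of three measurements) can be sketched as follows; the tie-breaking behavior is an assumption, since the document does not specify it:

```python
from itertools import combinations

def concordant_mean(measurements):
    """Average the two closest ("most concordant") of three reader
    measurements, per the outlier adjudication described above.

    Ties are resolved by min()'s first-wins behavior, an assumption
    not spelled out in the submission.
    """
    a, b = min(combinations(measurements, 2),
               key=lambda pair: abs(pair[0] - pair[1]))
    return (a + b) / 2
```

For readings of 21.0, 21.4, and 25.0 mm, the 21.0/21.4 pair is the most concordant, giving a ground-truth value of 21.2 mm.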

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • The document describes a validation study comparing the device's performance against ground truth established by human readers. However, it does not describe a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure the effect size of how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of the AI algorithm.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, a standalone study was performed. The "Summary Performance Data" directly reports the "Failure Rate" and "Inter-reader variability" (difference between AIRC and ground truth) of the AI-Rad Companion (Musculoskeletal) itself. The study's design of comparing device measurements to expert-established ground truth evaluates the algorithm's standalone accuracy.

    7. The Type of Ground Truth Used

    • Expert Consensus. The ground truth for the test set was established by the manual measurements and annotations of four board-certified radiologists, utilizing an adjudication process to determine the most concordant measurements for vertebra heights and average density (HU) values.

    8. The Sample Size for the Training Set

    • The document does not specify the exact sample size for the training set. It only states that the "training data used for the training of the post-processing algorithm is independent of the data used to test the algorithm."

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It only mentions that the "vertebrae segmentation accuracy has been improved through retraining the algorithm," implying that training data with associated ground truth was used for this process, but the method of establishing that ground truth is not detailed in this submission.

    K Number
    K220497
    Device Name
    CoLumbo
    Date Cleared
    2022-06-23

    (121 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K183268, K193267

    Intended Use

    CoLumbo is an image post-processing and measurement software tool that provides quantitative spine measurements from previously-acquired DICOM lumbar spine Magnetic Resonance (MR) images for users' review, analysis, and interpretation. It provides the following functionality to assist users in visualizing, measuring and documenting out-of-range measurements:

    • Feature segmentation;
    • Feature measurement;
    • Threshold-based labeling of out-of-range measurements; and
    • Export of measurement results to a written report for the user's revision and approval.

    CoLumbo does not produce or recommend any type of medical diagnosis or treatment. Instead, it simply helps users to more easily identify and classify features in lumbar MR images and compile a report. The user is responsible for confirming/modifying settings, reviewing and verifying the software-generated measurements, inspecting out-of-range measurements, and approving draft report content using their medical judgment and discretion.

    The device is intended to be used only by hospitals and other medical institutions.

    Only DICOM images of MRI acquired from lumbar spine exams of patients aged 18 and above are considered to be valid input. CoLumbo does not support DICOM images of patients that are pregnant, underwent an MRI scan with contrast media, or have post-operational complications, scoliosis, tumors, infections, or fractures.

    Device Description

    CoLumbo is a medical device (software) for viewing and interpreting magnetic resonance imaging (MRI) of the lumbar spine. The software is a quantitative imaging tool that assists radiologists and neuro- and spine surgeons ("users") to identify and measure lumbar spine features in medical images and record their observations in a report. The users then confirm whether the out-of-range measurements represent any true abnormality versus a spurious finding, such as an artifact or normal variation of the anatomy. The segmentation and measurements are classified using "modifiers" based on rule-based algorithms and thresholds set by each software user and stored in the user's individualized software settings. The user also identifies and classifies any other observations that the software may not annotate.

    The purpose of CoLumbo is to provide information regarding common spine measurements confirmed by the user and the pre-determined thresholds confirmed or defined by the user. Every feature annotated by the software, based on the user-defined settings, must be reviewed and affirmed by the radiologist before the measurements of these features can be stored and reported. The software initiates adjustable measurements resulting from semi-automatic segmentation. If the user rejects a measurement, the corresponding segmentation is rejected too. Segmentations are not intended to be a final output but serve the purpose of visualization and calculating measurements. The device outputs are intended to be a starting point for a clinical workflow and should not be interpreted or used as a diagnosis. The user is responsible for confirming segmentation and all measurement outputs. The output is an aid to the clinical workflow of measuring patient anatomy and should not be misused as a diagnostic tool.

    User-confirmed settings control the sensitivity of the software for labelling measurements in an image. The user (not the software) controls the threshold for identifying out-of-range measurements, and, in every case, once an out-of-range measurement is identified, the user must confirm or reject its presence. The software facilitates this process by annotating or drawing contours (segmentations) around features of the relevant anatomy and displaying measurements based on these contours. The user maintains control of the process by inspecting the segmentation, measurements, and annotations upon which the measurements are based. The user may also examine other features of the imaging not annotated by the software to form a complete impression and diagnostic judgment of the overall state of disease, disorder, or trauma.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves CoLumbo meets them, based on the provided FDA submission:

    1. Acceptance Criteria and Reported Device Performance

    Primary Endpoint (Measurement Accuracy):

    • Acceptance Criteria: The maximum Mean Absolute Error (MAE), defined as the upper limit of the 95% confidence interval for MAE, is below a predetermined allowable error limit (MAE_Limit) for each measurement listed.
    • Reported Performance: All primary endpoints were met.
    | Measurement | Reported Value | 95% Confidence Interval (CI) | Limit | Meets Criteria? |
    | --- | --- | --- | --- | --- |
    | Dural Sac Area (Axial), MAE | 14.8 mm² | 12.4 - 17.3 mm² | 20 mm² | Yes (17.3 < 20) |
    | Vertebral Arch and Adjacent Ligaments (Axial) | 0.87 | 0.86 - 0.88 | 0.8 | Yes (0.86 > 0.8) |
    | Dural Sac (Axial) | 0.92 | 0.92 - 0.93 | 0.8 | Yes (0.92 > 0.8) |
    | Nerve Roots (Axial) | 0.75 | 0.72 - 0.78 | 0.6 | Yes (0.72 > 0.6) |
    | Disc Material Outside Intervertebral Space (Axial) | 0.76 | 0.72 - 0.80 | 0.6 | Yes (0.72 > 0.6) |
    | Disc (Sagittal) | 0.93 | 0.93 - 0.94 | 0.8 | Yes (0.93 > 0.8) |
    | Vertebral Body (Sagittal) | 0.95 | 0.94 - 0.95 | 0.8 | Yes (0.94 > 0.8) |
    | Sacrum S1 (Sagittal) | 0.93 | 0.92 - 0.94 | 0.8 | Yes (0.92 > 0.8) |
    | Disc Mat. Outside IV Space and/or Bulging Part | 0.69 | 0.66 - 0.72 | 0.6 | Yes (0.66 > 0.6) |

    The first row reports an error measure (lower is better); the remaining rows report unitless segmentation agreement scores (higher is better), as reflected in the direction of each comparison.
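The primary-endpoint logic can be made concrete. The sketch below computes an MAE with a normal-approximation 95% CI (MAE ± 1.96·SE); the submission does not state which CI method was used, so that choice is an assumption.

```python
import math
import statistics

def mae_with_ci(abs_errors):
    """Mean absolute error and a normal-approximation 95% CI."""
    n = len(abs_errors)
    mae = statistics.mean(abs_errors)
    se = statistics.stdev(abs_errors) / math.sqrt(n)
    return mae, (mae - 1.96 * se, mae + 1.96 * se)

def mae_endpoint_met(ci_upper, mae_limit):
    """The stated criterion: the upper bound of the 95% CI for MAE
    must fall below the predetermined allowable limit (MAE_Limit)."""
    return ci_upper < mae_limit
```

For the dural sac area row, the reported upper bound of 17.3 mm² against a 20 mm² limit gives `mae_endpoint_met(17.3, 20) == True`.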

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 101 MR image studies from 101 patients.
    • Data Provenance:
      • Country of Origin: Collected from seven (7) sites across the U.S.
      • Retrospective/Prospective: The document does not explicitly state whether the data was retrospective or prospective, but the phrasing "collected from seven (7) sites across the U.S." typically implies retrospective collection for this type of validation.

    3. Number and Qualifications of Experts for Ground Truth

    • Number of Experts: Three (3) U.S. radiologists.
    • Qualifications: The document states they were "U.S. radiologists" but does not provide details on their years of experience, subspecialty, or specific certifications.

    4. Adjudication Method for the Test Set

    • Ground Truth Method: For segmentations, the per-pixel majority opinion of the three radiologists established the ground truth. For measurements, the median of the three radiologists' measurements established the ground truth. This is a form of multi-reader consensus.
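Both ground-truth constructions described above are straightforward to sketch. Here masks are represented as equal-length lists of 0/1 pixel labels, an illustrative simplification of the actual pixel-labeling tooling:

```python
import statistics

def majority_mask(masks):
    """Per-pixel majority vote over an odd number of binary reader masks
    (three in this study), as used for segmentation ground truth."""
    need = len(masks) // 2 + 1
    return [1 if sum(pixel) >= need else 0 for pixel in zip(*masks)]

def median_measurement(values):
    """Measurement ground truth as the median of the readers' values."""
    return statistics.median(values)
```

With three readers, a pixel is labeled foreground in the ground truth exactly when at least two readers marked it.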

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done? No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly reported. The study conducted was a "standalone software performance assessment study," meaning it evaluated the algorithm's performance against ground truth without human readers in the loop.
    • Effect Size: N/A, as an MRMC study comparing human readers with and without AI assistance was not performed.

    6. Standalone (Algorithm Only) Performance Study

    • Was it done? Yes. A standalone software performance assessment study was conducted.
    • Details: The study "compared the CoLumbo software outputs without any editing by a radiologist to the ground truth defined by 3 radiologists on segmentations and measurements."

    7. Type of Ground Truth Used

    • Ground Truth Type: Expert consensus.
      • For segmentations: Per-pixel majority opinion of three radiologists using a specialized pixel labeling tool.
      • For measurements: Median of three radiologists' measurements using a commercial software tool.

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated in the provided text. The document only mentions that the "training and testing data used during the algorithm development, as well as validation data used in the U.S. standalone software performance assessment study were all independent data sets." It does not specify the size of the training set.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth Establishment for Training Set: Not explicitly stated. The document only mentions that the training data and validation data were independent. It does not detail the method by which ground truth was established for the training data used in algorithm development.
