
510(k) Data Aggregation

    K Number: K223553
    Manufacturer:
    Date Cleared: 2023-08-02 (250 days)
    Product Code:
    Regulation Number: 882.4560
    Reference & Predicate Devices:
    Device Name: Spine Planning (2.0), Elements Spine Planning, Elements Planning Spine

    Intended Use

    Spine Planning is intended for pre- and intraoperative planning of open and minimally invasive spinal procedures. It displays digital patient images (CT, Cone Beam CT, MR, X-ray) and allows measurement and planning of spinal implants like screws and rods.

    Device Description

    The Spine Planning software allows the user to plan spinal surgery pre-operatively or intra-operatively. The software can display 2D X-ray images and 3D datasets (e.g., CT or MR scans), and includes features for automated labelling of vertebrae, proposals for screw and rod implants, and measurement of spinal parameters.

    The device can be used in combination with spinal navigation software during surgery, where preplanned or intra-operatively created information can be displayed, or solely as a pre-operative tool to prepare the surgery.

    AI/ML algorithms are used in Spine Planning for:

    • Detection of landmarks on 2D images for vertebrae labeling and measurement, and
    • Vertebra detection on Digitally Reconstructed Radiograph (DRR) images of 3D datasets for atlas registration (labeling of the vertebrae).

    The AI/ML algorithm is a Convolutional Neural Network (CNN) developed using a Supervised Learning approach. The algorithm was developed under a controlled internal process that covers every step from inspection of the input data through training and verification of the algorithm.
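    The submission does not describe the network architecture beyond this. For illustration only, a heatmap-regression CNN is one common way to implement 2D landmark detection of this kind; the sketch below (PyTorch) is not taken from the submission, and the layer sizes, the number of landmarks, and the heatmap_peaks helper are assumptions made purely for the example.

```python
# Minimal sketch (not the cleared device's implementation): a small CNN that
# regresses one heatmap per landmark from a single-channel 2D X-ray. Landmark
# coordinates are read off as the peak of each predicted heatmap.
import torch
import torch.nn as nn


class LandmarkHeatmapCNN(nn.Module):
    def __init__(self, num_landmarks: int = 17):  # illustrative landmark count
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/4 resolution
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, num_landmarks, 1),       # one heatmap per landmark
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def heatmap_peaks(heatmaps: torch.Tensor) -> torch.Tensor:
    """Convert (N, L, H, W) heatmaps into (N, L, 2) pixel coordinates (row, col)."""
    n, l, h, w = heatmaps.shape
    flat = heatmaps.view(n, l, -1).argmax(dim=-1)
    return torch.stack((flat // w, flat % w), dim=-1)


if __name__ == "__main__":
    model = LandmarkHeatmapCNN(num_landmarks=17)
    xray = torch.randn(1, 1, 256, 256)             # placeholder image batch
    coords = heatmap_peaks(model(xray))
    print(coords.shape)                            # torch.Size([1, 17, 2])
```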

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Spine Planning 2.0 device, based on the provided document:

    Acceptance Criteria and Device Performance

    The document does not explicitly present a table of acceptance criteria with corresponding performance metrics in a pass/fail format. However, based on the Performance Data section, we can infer the areas of assessment and general performance claims. The "Reported Device Performance" column will reflect the general findings described in the text.

    | Acceptance Criteria (Inferred from Performance Data) | Reported Device Performance |
    | --- | --- |
    | Software Verification | Requirements met through integration and unit tests, including SOUP items and cybersecurity. Newly added components underwent integration tests. |
    | AI/ML Detected X-Ray Landmarks Assessment | Quantified object detection; quantified quality of vertebra level assignment; quantified quality of landmark predictions; quantified performance of observer view direction for 2D X-rays. |
    | Screw Proposal Algorithm Evaluation (Comparison to Predicate) | Thoracic and lumbar pedicle screw proposals generated by the new algorithm were found to be similar to those generated by the predicate algorithm. |
    | Usability Evaluation | No critical use-related problems identified. |
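    The summary reports these assessments only qualitatively. As an illustration of the kind of quantitative measure such an evaluation could use, the sketch below computes a point-to-point landmark error and a simple screw-proposal similarity (entry-point distance plus axis angle). None of the function names, metrics, or numbers come from the submission.

```python
# Illustrative metrics only, not the manufacturer's acceptance criteria.
import numpy as np


def landmark_errors(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """Euclidean distance between predicted and reference landmarks.
    pred, truth: (L, 2) arrays of landmark coordinates (e.g. in mm)."""
    return np.linalg.norm(pred - truth, axis=1)


def screw_similarity(entry_a, tip_a, entry_b, tip_b):
    """Compare two pedicle-screw proposals by entry-point distance (mm) and
    axis angle (degrees). Points are 3D coordinates in the same patient frame."""
    entry_a, tip_a, entry_b, tip_b = map(np.asarray, (entry_a, tip_a, entry_b, tip_b))
    axis_a = (tip_a - entry_a) / np.linalg.norm(tip_a - entry_a)
    axis_b = (tip_b - entry_b) / np.linalg.norm(tip_b - entry_b)
    angle = np.degrees(np.arccos(np.clip(axis_a @ axis_b, -1.0, 1.0)))
    return np.linalg.norm(entry_a - entry_b), angle


if __name__ == "__main__":
    errs = landmark_errors(np.array([[10.0, 12.0]]), np.array([[11.0, 13.0]]))
    dist, ang = screw_similarity([0, 0, 0], [0, 0, 40], [1, 0, 0], [2, 3, 41])
    print(errs.mean(), dist, ang)
```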

    Study Details

    The provided text describes several evaluations rather than a single, unified study with a comprehensive design. Information for some of the requested points is not explicitly stated in the document.

    2. Sample size used for the test set and the data provenance:

    • AI/ML Detected X-Ray Landmarks Assessment:
      • Sample Size: Not explicitly stated.
      • Data Provenance: "2D X-rays from the Universal Atlas Transfer Performer 6.0." This suggests either a curated dataset or potentially synthetic data within a software environment, but specific origin (e.g., country, hospital) or whether it was retrospective/prospective is not provided.
    • Screw Proposal Algorithm Evaluation:
      • Sample Size: Not explicitly stated.
      • Data Provenance: Not explicitly stated, but implies the use of test cases to generate screw proposals for comparison.
    • Usability Evaluation:
      • Sample Size: Not explicitly stated.
      • Data Provenance: Not explicitly stated.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • AI/ML Detected X-Ray Landmarks Assessment: Not explicitly stated in the provided text. The document mentions "quantifying" various quality aspects, which implies a comparison to a known standard or expert annotation, but details are missing.
    • Screw Proposal Algorithm Evaluation: Not explicitly stated. The comparison is "to the predicate and back-up algorithms," suggesting an algorithmic ground truth or comparison standard rather than human expert ground truth for individual screw proposals.
    • Usability Evaluation: Not applicable, as usability testing focuses on user interaction rather than ground truth for clinical accuracy.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not explicitly stated for any of the described evaluations.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance is described in the provided text. The evaluations focus on the algorithm's performance or similarity to a predicate, and usability for the intended user group.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    • Yes, the AI/ML Detected X-Ray Landmarks Assessment and the Screw Proposal Algorithm Evaluation appear to be standalone algorithm performance assessments.
      • The "AI/ML Detected X-Ray Landmarks Assessment" explicitly evaluates the AI/ML detected landmarks.
      • The "Screw Proposal Algorithm Evaluation" compares the new algorithm's proposals to existing algorithms, indicating a standalone algorithmic assessment.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • AI/ML Detected X-Ray Landmarks Assessment: Inferred to be a form of expert-defined or algorithm-defined gold standard given the quantification of object detection, quality of labeling, and landmark predictions. The source "Universal Atlas Transfer Performer 6.0" might imply a reference standard built within that system.
    • Screw Proposal Algorithm Evaluation: The ground truth used for comparison was the output of the "predicate and back-up algorithms," implying an algorithmic gold standard.
    • Usability Evaluation: Ground truth is not applicable in the sense of clinical accuracy; rather, the measure is the identification of "critical use-related problems" by users during testing.

    8. The sample size for the training set:

    • Not explicitly stated for the AI/ML algorithms mentioned. The document only states that the algorithm is a Convolutional Neural Network (CNN) developed using a Supervised Learning approach, under a controlled internal process covering inspection of the input data through training and verification of the algorithm.

    9. How the ground truth for the training set was established:

    • Not explicitly stated. Given that it is a Supervised Learning approach, the training data would have been labeled, likely by experts (e.g., radiologists or orthopedic surgeons) or through a highly curated process, but the document does not elaborate on this; a sketch of what such a labeling-and-training pipeline could look like follows below.
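    Purely as an illustration of how expert point annotations are commonly turned into supervised training targets for a landmark CNN, the sketch below builds Gaussian heatmap targets from annotated coordinates and runs a few MSE training steps. The annotation format, heatmap size, sigma, and the stand-in model are all assumptions, not details from the submission.

```python
# Illustrative supervised-learning sketch: expert landmark annotations become
# Gaussian heatmap targets, and a tiny stand-in CNN is fitted with MSE loss.
import torch
import torch.nn as nn


def gaussian_heatmaps(coords: torch.Tensor, size=(256, 256), sigma: float = 4.0) -> torch.Tensor:
    """coords: (L, 2) expert-annotated (row, col) landmark positions.
    Returns (L, H, W) Gaussian target heatmaps centred on each annotation."""
    h, w = size
    ys = torch.arange(h).view(1, h, 1)
    xs = torch.arange(w).view(1, 1, w)
    rows = coords[:, 0].view(-1, 1, 1)
    cols = coords[:, 1].view(-1, 1, 1)
    return torch.exp(-((ys - rows) ** 2 + (xs - cols) ** 2) / (2 * sigma ** 2))


# A deliberately tiny stand-in model; any heatmap-regression CNN would fit here.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

image = torch.randn(1, 1, 256, 256)                        # placeholder X-ray
expert_points = torch.tensor([[60.0, 100.0], [120.0, 110.0], [180.0, 118.0]])
targets = gaussian_heatmaps(expert_points).unsqueeze(0)    # (1, 3, 256, 256)

for step in range(3):                                      # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(image), targets)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```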