K Number
K243292
Manufacturer
Date Cleared
2025-03-20 (153 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Predicate For
N/A
Intended Use

brAIn™ Shoulder Positioning is intended to be used as an information tool to assist in the preoperative surgical planning and visualization of a primary total shoulder replacement.

Device Description

The brAIn™ Shoulder Positioning software is a cloud-based application intended for shoulder surgeons. It is used to plan primary anatomic and reverse total shoulder replacement surgeries using FX Shoulder Solutions implants. Through a web-based interface, the user uploads the patient's shoulder CT scan (DICOM series) together with the patient's information. The software automatically segments the scapula and humerus anatomy contained in the DICOM series using machine learning and performs measurements on the segmented anatomy. These segmentations are used for planning, which includes an interactive 3D viewer that allows for soft tissue visualization. Implants for the glenoid and humerus are positioned in this same 3D interface through a dedicated manipulation panel. The changes in shoulder anatomy resulting from the implants are relayed in a post-position interface that displays information related to distalization. The software outputs a multimodal planning summary that includes textual information (patient information, pre- and post-op measurements) and visual information (screen captures of the shoulder pre- and post-implantation).
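
To make the pipeline concrete, the sketch below shows the typical first steps of such a tool: reading a CT DICOM series into a 3D volume and handing it to a segmentation model. This is a generic illustration, not FX Shoulder's actual implementation; `segment_scapula_humerus` is a hypothetical placeholder for the proprietary machine-learning model.

```python
# Minimal sketch of a DICOM-to-segmentation pipeline (illustrative only).
import SimpleITK as sitk
import numpy as np

def load_ct_series(dicom_dir: str) -> sitk.Image:
    """Read a DICOM series from a directory into a single 3D volume."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    return reader.Execute()

def segment_scapula_humerus(volume: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the proprietary ML segmentation model.
    Would return an integer label map (0 = background, 1 = scapula, 2 = humerus)."""
    raise NotImplementedError("the actual model is proprietary")

if __name__ == "__main__":
    ct = load_ct_series("path/to/dicom/series")
    hu = sitk.GetArrayFromImage(ct)        # (slices, rows, cols), Hounsfield units
    labels = segment_scapula_humerus(hu)   # label map used downstream for planning
```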

AI/ML Overview

The following summarizes the acceptance criteria and the study that demonstrates the device meets them, based on the provided text:

Acceptance Criteria and Device Performance

| Acceptance Criterion | Reported Device Performance |
| --- | --- |
| Segmentation performance: mean Dice Similarity Coefficient (DSC) ≥ 0.95 on the testing set for automatic segmentation validated against manual segmentation. | All tests confirmed that segmentation performance meets the acceptance criterion of a mean DSC of 0.95 or higher. |
| Shoulder side detection: correct detection of the shoulder side (right or left) in DICOM images compared to manual assessment. | All shoulder side detection validation tests were completed with no deviations, confirming compliance with the required performance standards. |
| Measurement accuracy: accuracy of software measurements when editing landmark positions similar to the reported accuracy of the predicate device. | All measurement accuracy validation tests were completed with no deviations. No specific numerical criterion is given; the text states the device met the "required performance standards" by performing similarly to the predicate. |
| Landmark performance: mean distance of 3 mm between predicted landmark positions and final positions adjusted by experts. | All landmark validation tests were completed with no deviations, meeting the 3 mm mean-distance acceptance criterion. |
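
The two numerical criteria above (mean DSC ≥ 0.95 and a 3 mm mean landmark distance) are standard metrics with precise definitions. The sketch below shows how they are conventionally computed; it assumes binary NumPy label masks and paired (N, 3) landmark arrays in millimetres, and is not the manufacturer's validation code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def mean_landmark_distance(pred_pts: np.ndarray, truth_pts: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between paired (N, 3) landmark arrays."""
    return float(np.linalg.norm(pred_pts - truth_pts, axis=1).mean())

# Acceptance checks as stated in the summary (variable names are illustrative):
# np.mean([dice_coefficient(a, m) for a, m in test_pairs]) >= 0.95
# mean_landmark_distance(auto_pts, expert_pts) <= 3.0   # millimetres
```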

Study Details

1. Sample size used for the test set and the data provenance:

  • Test Set Sample Size: 173 samples (pairs of 3D images with segmentation labels).
  • Data Provenance: The data corresponds to patients who underwent arthroplasty with an FX Shoulder implant, without specific selection. It represents diversity in shoulder types, imaging equipment, institutions, study year, and geographical provenance.
    • Geographical Origin (Test Set):
      • Left shoulder (79 samples): 58.2% Europe (46), 41.8% USA (33)
      • Right shoulder (94 samples): 56.4% Europe (53), 43.6% USA (41)
  • Retrospective/Prospective: Not explicitly stated, but the description of the data as patients who underwent arthroplasty with an FX Shoulder implant, without any further specific selection, suggests it is likely retrospective.

2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Number of Experts: Not explicitly stated.
  • Qualifications of Experts: For segmentation, the labels were created "manually by medical professionals." For shoulder side detection, ground truth was a "manual assessment performed by a Clinical Solutions Specialist." For landmark performance, ground truth involved "final positions adjusted by experts." Specific qualifications (e.g., years of experience, specialty) are not provided beyond "medical professionals" and "Clinical Solutions Specialist."

3. Adjudication method for the test set:

  • Not explicitly stated. The text mentions "manual segmentation performed" for the segmentation ground truth, "manual assessment" for shoulder side detection, and "final positions adjusted by experts" for landmark performance. It does not detail whether multiple experts performed these tasks or how discrepancies were resolved (e.g., 2+1, 3+1).

4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance:

  • No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human reader improvement with AI assistance was not described in the provided text. The study focused on the standalone performance of the AI algorithm.

5. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:

  • Yes, standalone performance testing was done. The "Segmentation Performance Testing," "Shoulder Side Detection performance testing," and "Landmark Performance Testing" sections describe the algorithm's performance against established ground truth.

6. The type of ground truth used:

  • Expert Consensus/Manual Annotation:
    • For segmentation: Manual segmentation performed by "medical professionals."
    • For shoulder side detection: Manual assessment performed by a "Clinical Solutions Specialist."
    • For landmark performance: "Final positions adjusted by experts."

7. The sample size for the training set:

  • Training Set Sample Size: 335 samples (pairs of 3D images with segmentation labels).

8. How the ground truth for the training set was established:

  • The text states, "The labels [for segmentation] were created manually by medical professionals." This implies the same method of ground truth establishment (manual annotation by medical professionals) was used for the training set as for the test set.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).