The Preview Shoulder software is intended as a tool for orthopedic surgeons to develop pre-operative shoulder plans based on a patient's CT imaging study.
The import process allows the user to select a DICOM CT scan series from any file source accessible to the user's computer.
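The submission does not describe the import mechanics, but a typical DICOM CT import orders the axial slices by position and converts stored pixel values to Hounsfield units using the standard DICOM RescaleSlope/RescaleIntercept attributes. The sketch below is illustrative only; the slice dictionaries stand in for real DICOM datasets and are not from the device:

```python
# Illustrative sketch (not the vendor's implementation): ordering a CT series
# by slice position and rescaling stored pixel values to Hounsfield units,
# per the DICOM RescaleSlope/RescaleIntercept definition (HU = SV * slope + intercept).

def sort_slices(slices):
    """Order axial slices by the z component of ImagePositionPatient."""
    return sorted(slices, key=lambda s: s["ImagePositionPatient"][2])

def to_hounsfield(stored_value, rescale_slope, rescale_intercept):
    """Convert a stored pixel value to Hounsfield units."""
    return stored_value * rescale_slope + rescale_intercept

# Stand-in slice metadata (a real import would read these from DICOM datasets,
# e.g., via pydicom).
series = [
    {"ImagePositionPatient": (0.0, 0.0, 2.5)},
    {"ImagePositionPatient": (0.0, 0.0, 0.0)},
    {"ImagePositionPatient": (0.0, 0.0, 1.25)},
]

ordered = sort_slices(series)
# Typical CT rescale: slope 1, intercept -1024, so stored 1024 maps to 0 HU (water).
hu = to_hounsfield(1024, 1.0, -1024.0)
```

Sorting by ImagePositionPatient rather than file name is the usual choice because file ordering is not guaranteed to match anatomical order.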
3D digital representations of various implant models are available in the planning software. Preview Shoulder allows the user to perform surgical planning digitally on a 3D model of the patient's shoulder anatomy and allows the surgeon to place the implant in the patient's anatomy.
The software allows the surgeon to generate a report detailing the output of the planning activity. Experience and clinical assessment are necessary for proper use of the software. It is to be used for adult patients only and should not be used for diagnostic purposes.
The Preview Shoulder, a 3D total shoulder arthroplasty (TSA) surgical planning software, is a standalone software application which assists the surgeon in planning reverse and anatomic shoulder arthroplasty. Preview Shoulder includes 3D digital representations of implants for placement in images used for surgical planning. Preview Shoulder is a secure software application used by qualified or trained surgeons and is accessed by authorized users.
The primary function of Preview Shoulder is to receive and process DICOM CT images of patients. Preview Shoulder can be used to place an implant in the original CT image and in the 3D model of reconstructed bone. Preview Shoulder allows the user to perform surgical planning and generate an output surgical report. Preview Shoulder does not provide a diagnosis or surgical recommendation. The surgeon is responsible for selecting and placing the implant model for pre-surgical planning purposes.
The provided text focuses on the 510(k) summary for the Preview Shoulder software, outlining its substantial equivalence to a predicate device and general non-clinical testing. However, it does not include detailed information about specific acceptance criteria for performance metrics, nor does it describe a study that explicitly proves the device meets such criteria with reported performance values.
The document states:
- "Software Verification and Validation testing was performed on the Preview Shoulder, and documentation is provided as recommended by FDA's Guidance for Industry and FDA Staff, 'Content of Premarket Submissions for Device Software Functions'."
- "Testing verified that the system performs as intended."
- "The measurement capabilities of the Preview Shoulder were validated to be significantly equivalent to a benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)."
- "All validation testing was performed on a fully configured system using anonymized patient shoulder CT images to emulate intended use."
- "All user features have been validated by surgeons."
- "Clinical testing was not necessary to demonstrate substantial equivalence of the Preview Shoulder to the predicate device."
Given this, I cannot provide the requested information in full detail. Here's what can be inferred and what is missing:
1. A table of acceptance criteria and the reported device performance
This information is not provided in the document. While it mentions "Testing verified that the system performs as intended" and "measurement capabilities... were validated to be significantly equivalent to a benchmark tool," it does not specify what those performance metrics, acceptance criteria, or reported performance values are.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not explicitly stated. The document mentions "anonymized patient shoulder CT images" were used for validation testing, but the number of such images (sample size) is not given.
- Data Provenance: "Anonymized patient shoulder CT images" were used. The country of origin and whether the data was retrospective or prospective is not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- The document states, "All user features have been validated by surgeons." It does not specify the number of surgeons or their qualifications or how they established "ground truth" for quantitative assessments.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- The document mentions "All user features have been validated by surgeons" but does not detail any adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not necessary to demonstrate substantial equivalence of the Preview Shoulder to the predicate device." The software is not an AI for diagnosis or interpretation but a surgical planning tool.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Not explicitly detailed as a standalone performance study in the context of typical AI performance evaluation. The document states that "The measurement capabilities of the Preview Shoulder were validated to be significantly equivalent to a benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)." This suggests a comparison of the software's output with a benchmark, which could be considered a form of standalone validation for its measurement capabilities. However, specific metrics and results are not provided. The device itself is described as a "tool for orthopedic surgeons," implying human-in-the-loop operation.
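A benchmark comparison of measurement outputs like the one described above is often implemented as a paired-difference check: the same anatomical measurements are taken in both tools and each pair must agree within a tolerance. The sketch below is purely illustrative; the tolerance, function names, and values are hypothetical and do not come from the submission:

```python
# Hypothetical paired-difference equivalence check against a benchmark tool.
# Tolerance and data are illustrative assumptions, not from the 510(k).

def max_paired_difference(device_vals, benchmark_vals):
    """Largest absolute difference between paired measurements (e.g., in mm)."""
    return max(abs(a - b) for a, b in zip(device_vals, benchmark_vals))

def within_equivalence(device_vals, benchmark_vals, tol_mm):
    """True if every paired measurement agrees within the stated tolerance."""
    return max_paired_difference(device_vals, benchmark_vals) <= tol_mm

# Hypothetical paired measurements (mm) from the device and the benchmark tool.
device = [24.1, 31.0, 27.4]
benchmark = [24.3, 30.8, 27.5]

ok = within_equivalence(device, benchmark, tol_mm=0.5)
```

In practice such comparisons are usually reported with summary statistics (e.g., mean difference and limits of agreement) rather than a single pass/fail threshold, but the submission does not state which form was used.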
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The document implies that "benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)" served as a reference for validating measurement capabilities. For user features, "validation by surgeons" might have implicitly used their clinical judgment as a form of ground truth for usability and functionality, but this is not explicitly defined in terms of a formal ground truth process.
8. The sample size for the training set
- This device is described as a "software application" for surgical planning and emphasizes its validation through comparison of measurement capabilities against a benchmark tool and user feature validation by surgeons. It does not explicitly state that it uses machine learning/AI models that require a "training set" in the conventional sense. If there are AI algorithms as implied by "Post-processing algorithm is added to further refine the 3D mesh quality" and "Algorithm is added to calculate humerus-side features," the training set size is not provided.
9. How the ground truth for the training set was established
- As the existence of a "training set" is not confirmed or described, the method for establishing its ground truth is also not provided.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).