The Arthrex OrthoVis Preoperative Plan is a preoperative plan created via the OrthoVis software that facilitates accurate preoperative planning and intraoperative placement of the glenoid component in total shoulder replacement.
The Arthrex Preoperative Plan is indicated for use in planning the placement of the central glenoid guide pin for the Arthrex Univers™ II and Univers™ Apex Keeled glenoid component, Univers™ II and Univers™ Apex Pegged glenoid component, and the Univers Revers™ Baseplate.
The indications for use of the Arthrex shoulder systems with which the Arthrex OrthoVis Preoperative Plan is intended to be used are the same as those described in the labeling for those shoulder systems.
The Arthrex OrthoVis Preoperative Plan is a preoperative plan document created in the OrthoVis software. A patient CT scan is loaded into the OrthoVis software, and the desired bony anatomy is separated and segmented with OrthoVis tools; the extracted, segmented bones (e.g., scapula, humerus) can then be virtually implanted with shoulder replacement implants.
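To make the workflow above concrete, here is a minimal sketch of a generic threshold-and-label bone segmentation in Python. This is not Arthrex's actual pipeline: the synthetic volume, the ~300 HU cutoff, and all variable names are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a CT volume in Hounsfield units (HU):
# soft-tissue background (~40 HU) with one bright "bone" block (~1000 HU).
rng = np.random.default_rng(0)
ct = rng.normal(40, 15, size=(64, 64, 64))
ct[20:40, 25:45, 25:45] = rng.normal(1000, 50, size=(20, 20, 20))

# Threshold at an assumed cortical-bone cutoff (~300 HU).
bone_mask = ct > 300

# Separate structures with connected-component labeling so that each
# bone (e.g., scapula vs. humerus) receives its own integer label.
labels, n_components = ndimage.label(bone_mask)

# Keep the largest component as the bone of interest for planning.
sizes = ndimage.sum(bone_mask, labels, index=range(1, n_components + 1))
bone_of_interest = labels == (int(np.argmax(sizes)) + 1)

print(f"{n_components} component(s); largest spans "
      f"{int(bone_of_interest.sum())} voxels")
```

In a real planning tool, the resulting mask would typically be converted to a surface mesh (e.g., via marching cubes) before virtual implant placement.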
Here's a breakdown of the acceptance criteria and study information for the Arthrex OrthoVis Preoperative Plan, based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of acceptance criteria with numerical targets. Instead, it describes a "Non-Clinical Testing" section that outlines the studies performed to demonstrate substantial equivalence to a predicate device. The primary performance metric mentioned relates to the accuracy of central guide pin placement, which the preoperative plan is intended to improve.
Given the information, we can infer the acceptance criteria and reported performance qualitatively from the "Inter and Intra User Surgical Planning Comparison Study". The goal of this study would be to show that the Arthrex OrthoVis Preoperative Plan, with the new Arthrex implants, provides comparable or improved accuracy in planning features (like the central glenoid guide pin) compared to the predicate device with its original implants.
Inferred Acceptance Criteria Table:
| Acceptance Criteria Category | Specific Acceptance Criterion (Inferred) | Reported Device Performance (Inferred) |
|---|---|---|
| Software Functionality | The OrthoVis software, with the integration of Arthrex implants, functions reliably and as intended, producing accurate preoperative plan documents. | "Software verification and validation" was performed, indicating the software meets its functional specifications and presumably operates without critical errors. The creation of a .pdf document with text, images, and a rotatable 3D model, as described, implies successful functionality. |
| Planning Accuracy | The Arthrex OrthoVis Preoperative Plan facilitates comparable or improved central glenoid guide pin placement accuracy compared to the predicate device (a sketch of one possible angular metric follows this table). | An "Inter and Intra User Surgical Planning Comparison Study" was conducted. While no numerical results are provided in this summary, the assertion of substantial equivalence based on this testing implies that the study demonstrated satisfactory accuracy. The document states both the subject and predicate devices "aim to improve central guide pin placement accuracy," suggesting the subject device achieved this aim. |
| User Consistency | Preoperative plans generated by different users (inter-user) and by the same user multiple times (intra-user) are consistent. | An "Inter and Intra User Surgical Planning Comparison Study" was performed. The execution of this study suggests that consistency in planning across different users and within the same user was evaluated and found to be acceptable for substantial equivalence. |
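Although the summary reports no numeric thresholds, the Planning Accuracy row suggests an obvious candidate metric: the angle between a planned guide-pin axis and a reference axis. Below is a hypothetical sketch of that metric; the function name and example vectors are invented, not taken from the document.

```python
import numpy as np

def pin_axis_deviation_deg(planned_axis, reference_axis):
    """Angle (degrees) between a planned guide-pin axis and a
    reference axis, each given as a 3D direction vector."""
    a = np.asarray(planned_axis, dtype=float)
    b = np.asarray(reference_axis, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A plan tilted roughly 5 degrees off the reference axis:
print(pin_axis_deviation_deg([0.0872, 0.0, 0.9962], [0.0, 0.0, 1.0]))
```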
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified in the provided document. The "Inter and Intra User Surgical Planning Comparison Study" is mentioned, but details on the number of cases or users included are absent.
- Data Provenance: Not specified. There is no information regarding the country of origin of the CT scans or whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- The document does not explicitly state how ground truth was established, nor does it specify the number or qualifications of experts for the test set. The study type ("Inter and Intra User Surgical Planning Comparison Study") suggests that human planners (surgeons or trained personnel) were likely involved in generating plans that were then compared, but it does not clarify whether those plans themselves served as the ground truth or whether an independent "true" surgical plan was used for comparison.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- The adjudication method is not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- The document mentions an "Inter and Intra User Surgical Planning Comparison Study." This study design often involves multiple users (readers/planners) creating plans for multiple cases, which aligns with the "multi-reader multi-case" concept. However, it is not strictly an AI-assistance study in the sense of comparing human performance with and without an AI diagnosis/recommendation; the device is a tool that facilitates the planning process itself.
- The study's goal was to demonstrate substantial equivalence to a predicate device. It likely compared planning outcomes (e.g., pin placement accuracy) when using the OrthoVis software with the new Arthrex implants against the predicate device's software with its specific implants, or against a manual planning method.
- Effect Size: The document does not provide any effect size or numerical improvement metrics for human readers using the device. It only states that the testing was performed to demonstrate substantial equivalence. (A hypothetical sketch of how consistency from such a study might be summarized follows this list.)
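Because no numbers are reported, the following is purely a hypothetical sketch of how inter- and intra-user consistency from such a study could be summarized; the version angles, case count, and user count are invented for illustration.

```python
import numpy as np

# Hypothetical data: planned glenoid version angle (degrees) for five
# cases, each planned independently by three users (inter-user arm).
inter = np.array([
    [2.1, 2.8, 1.9],    # case 1: users A, B, C
    [5.0, 4.6, 5.3],
    [-1.2, -0.8, -1.5],
    [3.3, 3.9, 3.0],
    [0.4, 0.9, 0.2],
])

# Hypothetical intra-user arm: one user re-planning each case three times.
intra = np.array([
    [2.0, 2.2, 1.9],
    [4.9, 5.1, 5.0],
    [-1.1, -1.3, -1.2],
    [3.4, 3.2, 3.3],
    [0.5, 0.4, 0.6],
])

# Summarize each arm as the mean within-case standard deviation:
# lower values indicate more consistent planning.
for name, arr in (("inter-user", inter), ("intra-user", intra)):
    print(f"mean {name} SD: {arr.std(axis=1, ddof=1).mean():.2f} deg")
```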
7. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- No, a standalone (algorithm only) performance study is not described. The device is explicitly a "preoperative plan created via the OrthoVis software" that "facilitates accurate preoperative planning and intraoperative placement." This implies a human-in-the-loop process where the software is a tool for a planner/surgeon. The "Inter and Intra User Surgical Planning Comparison Study" further supports this by involving human users.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The type of ground truth is not explicitly stated. For a planning tool, ground truth would ideally be the "optimal" or "correct" surgical plan/placement. This could be established by:
- Expert Consensus: Multiple highly experienced surgeons or planners agreeing on an optimal plan.
- Pathology/Intraoperative Images: Post-operative imaging or intraoperative photos confirming the actual placement of components (though this is more for verification than establishment of the preoperative ground truth).
- Biomechanical Simulation: Computational models determining ideal placement based on patient-specific anatomy and biomechanics.
- Given the nature of the device, it is most probable that an expert consensus or a gold-standard manual planning method was used as a reference for comparison, but the document does not confirm this; a toy sketch of one consensus approach follows.
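If expert consensus were the reference, one simple way to construct it (hypothetical; the document does not confirm any such method) is to average the guide-pin direction vectors proposed by several experts and renormalize. Planned axes could then be scored against this consensus with an angular deviation metric like the one sketched earlier.

```python
import numpy as np

# Hypothetical expert-proposed guide-pin axes (unit direction vectors).
expert_axes = np.array([
    [0.05, 0.02, 0.998],
    [0.03, -0.01, 0.999],
    [0.06, 0.00, 0.998],
])

# Consensus reference axis: mean direction, renormalized to unit length.
consensus = expert_axes.mean(axis=0)
consensus /= np.linalg.norm(consensus)
print(consensus)
```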
8. The sample size for the training set
- The document does not refer to a training set. This product is described as software that uses patient CT scans to create a preoperative plan. While the software itself would have been developed and "trained" in a broader sense (e.g., development of segmentation algorithms), this 510(k) summary focuses on the validation of the software's output for a specific clinical application (new implants). There is no mention of machine learning or deep learning models that require a distinct "training set" in the context of this regulatory submission. "Software verification and validation" generally refers to traditional software engineering testing.
9. How the ground truth for the training set was established
- As no training set is mentioned in the context of this regulatory submission, there is no information on how its ground truth would have been established.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).