510(k) Data Aggregation
The OrthoVis Preoperative Plan is a preoperative plan document created via the OrthoVis software that facilitates accurate preoperative planning and intraoperative placement of the glenoid component in total shoulder replacement. The OrthoVis Software and the resulting Preoperative Plan are indicated for use with the DePuy Global AP™ Shoulder glenoid, Global Shoulder StepTech™ Anchor Peg glenoid, or Delta Xtend™ Reverse Shoulder metaglene components.
The indications for use of the DePuy shoulder systems with which the OrthoVis Preoperative Plan is intended to be used are the same as those described in 510(k) K123122.
The OrthoVis Preoperative Plan is a preoperative plan document created in the OrthoVis software. A patient CT scan is loaded into the software, and the desired bony anatomy can be separated and segmented with OrthoVis tools, allowing the extracted, segmented bones (e.g., scapula, humerus) to be virtually implanted with shoulder replacement implants. OrthoVis is currently used only with the DePuy Global AP glenoid, Global Shoulder StepTech glenoid, and Delta Xtend components for total shoulder arthroplasty. OrthoVis then produces a preoperative plan document (a .pdf file), the OrthoVis Preoperative Plan, containing text, images, and, in electronic format, one or more rotatable 3D models of the implanted component and bone. The plan document is labeled, via a watermark, as unapproved until the ordering surgeon approves the plan, at which point the watermark is removed and the final plan is provided to the ordering surgeon.
The following information describes the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly state formal acceptance criteria with numerical targets. However, the non-clinical testing performed aims to demonstrate the device's ability to accurately perform its intended use and function, particularly in aiding accurate guide pin placement in total shoulder arthroplasty.
| Feature Tested/Metric | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Software Accuracy (Dimensional Validation) | Accurate measurement and portrayal of known lengths in CT scanned objects. | Showed that known lengths of actual CT scanned objects could be accurately measured and portrayed within the OrthoVis software. |
| Accuracy of Guide Pin Placement (Version) | Improvement in accuracy compared to standard of care. | Improved accuracy by 4.5° (±1° s.d.) compared to standard of care instruments (p<0.001). |
| Accuracy of Guide Pin Placement (Inclination) | Improvement in accuracy compared to standard of care. | Improved accuracy by 3.3° (±1.3° s.d.) compared to standard of care instruments (p=0.013). |
| Accuracy of Guide Pin Placement (Pin Placement) | Improvement in accuracy compared to standard of care. | Improved accuracy by 0.4 mm (±0.2 mm) compared to standard of care instruments (p=0.042). |
Study Proving Device Meets Acceptance Criteria:
The "Sawbones study" section details the bench testing performed to demonstrate the device's efficacy.
2. Sample Size Used for the Test Set and Data Provenance:
The test set for the Sawbones study involved:
- Sample Size: 9 different pathologies, each studied twice, implying a total of 18 simulated cases (9 pathologies × 2 instances).
- Data Provenance: The study used "Sawbones," which are synthetic bone models. This indicates that the data is prospective and simulated (bench testing), not derived from human patients or retrospective clinical data. The country of origin of the Sawbones itself is not specified, but the study was conducted by Custom Orthopaedic Solutions, Inc., located in Cleveland, Ohio, USA.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
The document does not explicitly state how "ground truth" was established for the Sawbones study for the specific measurements of version, inclination, and pin placement. However, it indicates that three different surgeons performed the study. Their qualifications are not detailed, but given the context of orthopedic device development, it is reasonable to infer they are qualified orthopedic surgeons. The study measures the improvement in accuracy relative to standard-of-care instruments, implying a baseline was established, but the method for determining the "true" anatomical values for each Sawbone pathology is not elaborated.
4. Adjudication Method for the Test Set:
The document does not describe an explicit adjudication method for the test set in the traditional sense of resolving discrepancies between expert readings. Instead, the study design appears to be a comparative effectiveness study where the performance of surgeons with the OrthoVis Preoperative Plan is compared to their performance without it (using standard of care instruments). The measurements (version, inclination, pin placement) were quantified, and statistical analysis (p-values) was applied to the results.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, the effect size of how much human readers improve with vs. without AI assistance:
Yes, a form of comparative effectiveness study was done using human readers (surgeons) with and without the device. While not explicitly termed an "MRMC study" in the document, it involved multiple readers (three surgeons) and multiple cases (9 pathologies, each performed twice).
The "AI assistance" in this context is the OrthoVis Preoperative Plan. The effect sizes (improvement in accuracy) reported are:
- Version: Improved by 4.5° (±1° s.d.) with the OrthoVis Preoperative Plan compared to standard of care instruments (p<0.001).
- Inclination: Improved by 3.3° (±1.3° s.d.) with the OrthoVis Preoperative Plan compared to standard of care instruments (p=0.013).
- Pin Placement: Improved by 0.4 mm (±0.2 mm) with the OrthoVis Preoperative Plan compared to standard of care instruments (p=0.042).
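The reported mean improvements with standard deviations and p-values are consistent with a paired comparison of placement error with vs. without the plan. As an illustration only, the sketch below runs that analysis shape on invented error values; the numbers, trial count, and the assumption of a paired t-test are all hypothetical, not taken from the 510(k) summary.

```python
# Hypothetical sketch: paired comparison of placement error with vs.
# without a preoperative plan. All error values below are invented for
# illustration; only the shape of the analysis is meant to be shown.
from statistics import mean, stdev
from math import sqrt

# Hypothetical version error (degrees) for the same 6 trials,
# measured with standard-of-care instruments vs. with the plan.
standard_of_care = [8.0, 9.5, 7.2, 10.1, 8.8, 9.0]
with_plan        = [3.4, 4.9, 3.1, 5.6, 4.2, 4.5]

diffs = [s - p for s, p in zip(standard_of_care, with_plan)]
mean_improvement = mean(diffs)   # analogous in form to the reported 4.5 deg
sd_improvement = stdev(diffs)
# Paired t statistic: mean difference over its standard error.
t_stat = mean_improvement / (sd_improvement / sqrt(len(diffs)))

print(f"mean improvement: {mean_improvement:.2f} deg (s.d. {sd_improvement:.2f})")
print(f"paired t statistic: {t_stat:.2f} on {len(diffs) - 1} df")
```

In practice the p-values reported in the summary would come from such a t statistic (or a similar paired test) evaluated against the appropriate degrees of freedom.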
6. If Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Testing was done:
Yes, components of standalone testing were done, though not for the entire end-to-end surgical simulation aspect.
- Software verification and validation: Performed to show that bones segmented in the OrthoVis software were accurate to their actual size, and for quality purposes according to guidance on Software in Medical Devices.
- Dimensional validation: Showed that known lengths of actual CT scanned objects could be accurately measured and portrayed within the OrthoVis software.
These tests evaluate the accuracy of the software's core functions (segmentation, measurement) independent of a surgeon's interaction using the plan for an intervention.
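The dimensional validation described above amounts to comparing software measurements of CT-scanned objects against their known physical lengths. The sketch below shows one minimal form such a check could take; the lengths, readouts, and the 0.5 mm tolerance are assumptions for illustration, not values from the submission.

```python
# Hypothetical sketch of a dimensional-validation check: objects of known
# physical length are CT scanned, measured in the planning software, and
# the absolute measurement error is compared against a tolerance.
# All values below are assumed for illustration.
known_lengths_mm    = [10.0, 25.0, 50.0, 100.0]
measured_lengths_mm = [10.1, 24.9, 50.2, 99.8]  # hypothetical software readouts
TOLERANCE_MM = 0.5                              # assumed acceptance limit

errors = [abs(m - k) for k, m in zip(known_lengths_mm, measured_lengths_mm)]
max_error = max(errors)
passed = max_error <= TOLERANCE_MM
print(f"max absolute error: {max_error:.2f} mm -> {'PASS' if passed else 'FAIL'}")
```

A real validation protocol would also specify the CT acquisition parameters and repeat measurements across scans, since slice thickness and segmentation settings affect the measured lengths.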
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For software/dimensional validation: Known physical dimensions of CT scanned objects were used as ground truth. This is a form of physical measurement/calibration ground truth.
- For the Sawbones study: The document doesn't explicitly state how the "true" preoperatively planned implant location or pin placement for each Sawbone was defined as ground truth. It implies a comparison against "standard of care instruments," suggesting that the baseline for measurement might have been derived from typical surgical practice or a meticulously prepared "ideal" placement for each Sawbone model. However, the precise method for establishing the absolute ground truth for the optimal version, inclination, and pin placement for each simulated pathology is not detailed.
8. The Sample Size for the Training Set:
The document does not provide any information regarding a training set size. The device is described as a "Preoperative Planning tool" created via software, which segments bony anatomy from CT scans. It is likely that the segmentation algorithms and measurement tools would have been developed and refined using a dataset of CT scans, but the size or characteristics of such a dataset are not mentioned in this 510(k) summary. Given the context of a 2013 submission, "AI" in the modern machine-learning sense might not have been a primary component; it's more likely rule-based or image-processing algorithms were used, which don't always involve explicit "training sets" in the same way deep learning models do.
9. How the Ground Truth for the Training Set was Established:
As no training set is mentioned, there is no information on how its ground truth was established.