Search Results
Found 2 results
510(k) Data Aggregation
(199 days)
e-Ortho Shoulder Software v1.1
e-Ortho Shoulder is intended to be used as an information tool to assist in the preoperative surgical planning and visualization of a primary total shoulder replacement.
e-Ortho Shoulder is a web-based surgical planning software application that provides a pre-surgical planning tool to help surgeons understand their patient's anatomy prior to surgery. Compared with the two-dimensional (2D) images FH-Orthopedic surgeons currently use to plan a shoulder arthroplasty, e-Ortho supplies information that helps surgeons prepare an intraoperative plan: it allows them to work in three-dimensional (3D) visualization and to visualize and position implants within the specific patient's bone model (scapula and humerus) using reliable landmarks. Surgeons can thereby preoperatively select the needed implant and determine its desired position.
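The summary does not disclose how e-Ortho computes its measurements from these landmarks, but a common landmark-based definition of glenoid version is the Friedman method. The following is a minimal sketch of that idea in Python, assuming landmark coordinates have already been extracted from the CT-derived bone model; the function and landmark names are illustrative, not the vendor's API, and sign conventions are simplified.

```python
import numpy as np

def angle_deg(u: np.ndarray, v: np.ndarray) -> float:
    """Unsigned angle in degrees between two 3D vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def glenoid_version_deg(glenoid_center: np.ndarray,
                        trigonum_scapulae: np.ndarray,
                        anterior_rim: np.ndarray,
                        posterior_rim: np.ndarray,
                        axial_normal: np.ndarray = np.array([0.0, 0.0, 1.0])) -> float:
    """Friedman-style version: 90 degrees minus the angle between the
    scapular axis (glenoid center -> trigonum scapulae) and the glenoid
    line (anterior rim -> posterior rim), both projected onto the axial
    plane. Ante-/retroversion signing is omitted for brevity."""
    def onto_axial(v: np.ndarray) -> np.ndarray:
        # Drop the component along the axial-plane normal.
        return v - np.dot(v, axial_normal) * axial_normal
    scapular_axis = onto_axial(trigonum_scapulae - glenoid_center)
    glenoid_line = onto_axial(posterior_rim - anterior_rim)
    return 90.0 - angle_deg(scapular_axis, glenoid_line)
```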
The subject submission seeks to add humeral planning capabilities to the previously cleared FH E-Ortho Shoulder Software. Additional changes to the software have been made to improve functionality within the previously cleared intended use.
The e-Ortho Shoulder Software v1.1 is intended to be used as an information tool to assist in the preoperative surgical planning and visualization of a primary total shoulder replacement. The performance testing for this device focuses primarily on verification and validation activities and on a comparison against "gold standard" software.
Here's a breakdown of the requested information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly lay out acceptance criteria in a quantitative table with corresponding device performance metrics like sensitivity, specificity, or AUC as might be seen for diagnostic AI. Instead, the acceptance is demonstrated through the successful completion of verification and validation processes and equivalence to a "gold standard" software.
| Acceptance Criterion Type | Description | Reported Device Performance |
|---|---|---|
| Functional Verification | Verification of functional components of the subject device through test campaigns. | Five test campaigns carried out by five different evaluators in two different environments. Three minor bugs were identified but are "not expected to impact the planning itself." |
| Usability Validation | Validation of critical features through usability testing. | A usability test campaign conducted with five surgeons. "The result of the validation tests coincides with the expected results for each test case and no test failed." |
| Accuracy Testing | Comparison of implant values (version and inclination) obtained from the subject device against "gold standard" software (Materialise Innovation Suite, i.e. Mimics V22 and 3matic V14, and SolidWorks 2016) in simulations of dangerous situations and potential harms, including right- and left-sided scenarios, head-first and feet-first supine positioning, varying reaming depths, and implant visualization from varying angles. | "All tests passed." "Thus, the accuracy of e-Ortho is adequate to provide safe use of the product." |
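The submission reports only pass/fail outcomes for this comparison, not numeric tolerances. As an illustration of what such a check involves, here is a hedged sketch of a tolerance-based comparison against gold-standard values; the 1.0-degree threshold and the `CaseResult` fields are assumptions, not figures from the 510(k).

```python
from dataclasses import dataclass

TOLERANCE_DEG = 1.0  # hypothetical acceptance threshold; not stated in the 510(k)

@dataclass
class CaseResult:
    case_id: str
    version_deg: float
    inclination_deg: float

def case_passes(device: CaseResult, reference: CaseResult,
                tol: float = TOLERANCE_DEG) -> bool:
    """A virtual case passes when both implant angles fall within the
    tolerance of the gold-standard software's values."""
    return (abs(device.version_deg - reference.version_deg) <= tol and
            abs(device.inclination_deg - reference.inclination_deg) <= tol)
```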
2. Sample size used for the test set and the data provenance:
- Functional Verification Test Set: The sample size is not explicitly stated in terms of patient cases or images. Instead, the document refers to "five test campaigns" carried out by "five different evaluators in two different environments" to verify "functional components." This suggests a focus on software functionality testing rather than performance on patient data.
- Usability Validation Test Set: "Usability test campaign, with critical features requiring validation by five surgeons." The number of "test cases" or patient data involved in this usability validation is not specified.
- Accuracy Testing Test Set: Not specified in terms of patient cases. The testing involved "different virtual cases including right and left sided scenarios in head-first supine positioning of patient as well as feet-first supine patient positioning, varying reaming depths, and implant visualization from varying angles." This implies synthetically generated or modified cases rather than a specific set of retrospective or prospective patient data from a particular country; the data provenance is described as "simulated in different virtual cases." A sketch of such a parameterized case matrix follows below.
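As a sketch of how such a synthetic case matrix might be enumerated, the snippet below crosses the scenario factors named in the summary; the concrete depth and angle values are placeholders, since the submission does not state how many virtual cases were generated.

```python
from itertools import product

sides = ["left", "right"]
positions = ["head-first supine", "feet-first supine"]
reaming_depths_mm = [0.0, 2.0, 4.0]  # assumed depths; not stated in the submission
view_angles_deg = [0, 45, 90]        # assumed viewing angles

test_cases = [
    {"side": s, "position": p, "reaming_mm": d, "view_deg": a}
    for s, p, d, a in product(sides, positions, reaming_depths_mm, view_angles_deg)
]
print(len(test_cases), "virtual cases")  # 2 * 2 * 3 * 3 = 36
```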
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For Functional Verification and Usability Validation, the "experts" were the "five different evaluators" and the "five surgeons," respectively. Their qualifications are not detailed beyond the usability evaluators being identified as "surgeons."
- For Accuracy Testing, the ground truth was established by "gold standard" software: Materialise Innovation Suite (Mimics V22 and 3matic V14) and SolidWorks 2016. No human experts are described as establishing the ground truth directly for this part of the testing.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
The document does not describe an adjudication method for establishing ground truth, as the accuracy testing relied on "gold standard" software rather than human consensus. For functional and usability testing, it appears to be direct observation of test results and comparison to expected outcomes.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No MRMC comparative effectiveness study is mentioned in the document. The study described focuses on the standalone performance and accuracy of the software against "gold standard" software, and its usability. There is no comparison of human reader performance with and without AI assistance described.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
Yes, a form of standalone performance assessment was done, particularly in the "Accuracy Testing" section. The device's output for implant values (version and inclination) in various simulated scenarios was compared directly to the output of Materialise Innovation Suite (Mimics V22 and 3matic V14) and SolidWorks 2016, which serve as the "gold standard" for these measurements. This is an evaluation of the algorithm's performance in generating specific measurements.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
The ground truth for the accuracy testing was established by results from "gold standard" commercial software: Materialise Innovation Suite (Mimics V22 and 3matic V14) and SolidWorks 2016. For the functional and usability testing, the ground truth was based on expected software behavior and user experience.
8. The sample size for the training set:
The document does not mention the use of a "training set" or machine learning algorithms in the conventional sense that would require a separate training dataset. The device is described as "web-based surgical planning software" that provides analysis tools and 3D visualization. The performance testing focuses on verification, validation, and accuracy against "gold standard" software, rather than the performance of a machine learning model trained on a specific dataset.
9. How the ground truth for the training set was established:
Since no training set is mentioned or described as part of the device's development or evaluation in the document, there is no information on how its ground truth was established.
(82 days)
e-Ortho Shoulder Software
e-Ortho Shoulder is intended to be used as an information tool to assist in the preoperative surgical planning and visualization of a primary total shoulder replacement.
e-Ortho Shoulder is a web-based surgical planning software application that provides a pre-surgical planning tool to help surgeons understand their patient's anatomy prior to surgery. Compared with the two-dimensional (2D) images FH-Orthopedic surgeons currently use to plan a shoulder arthroplasty, e-Ortho supplies information that helps surgeons prepare an intraoperative plan: it allows them to work in three-dimensional (3D) visualization and to visualize and position implants within the specific patient's bone model (scapula and humerus) using reliable landmarks. Surgeons can thereby preoperatively select the needed implant and determine its desired position.
The provided text describes the e-Ortho Shoulder Software, a web-based surgical planning tool for primary total shoulder replacement. It outlines the device's intended use and its substantial equivalence to a predicate device, and gives a general overview of performance testing, but it lacks specific details regarding acceptance criteria, study design, and results.
Here's an analysis based on the provided information, with explicit statements about what is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
The document mentions that "The result of the validation tests coincides with the expected results for each test case and no test failed." However, it does not provide a specific table of acceptance criteria or quantitative performance metrics for the e-Ortho Shoulder Software. It vaguely states that accuracy testing was carried out to "guarantee the performance," but no specific results are shared.
| Acceptance Criteria (Missing) | Reported Device Performance (Missing specific metrics) |
|---|---|
| Specific quantitative thresholds for implant sizing accuracy, positioning accuracy, or visualization fidelity. | The validation tests coincided with expected results, and no test failed. Accuracy testing was carried out. |
| Usability metrics (e.g., time to complete a planning task, error rate in planning). | A usability test campaign was conducted with multiple surgeons; results coincided with expected results. |
| Software stability and reliability (e.g., uptime, crash rate). | The verification process was implemented through multiple test campaigns carried out by various evaluators in various environments. |
2. Sample Size Used for the Test Set and Data Provenance:
The document states:
- "The validation process was implemented through a usability test campaign, with critical features requiring validation by multiple surgeons."
- "Additional accuracy testing was carried out to guarantee the performance of e-Ortho."
The exact sample size used for the test set is not specified. We only know that "multiple surgeons" were involved in the usability testing.
The data provenance (e.g., country of origin, retrospective or prospective) for the test set is not mentioned.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
The document mentions "multiple surgeons" for usability testing, but it does not explicitly state how many experts were used to establish the ground truth for any accuracy testing, nor does it detail their qualifications (e.g., years of experience, subspecialty).
4. Adjudication Method for the Test Set:
No adjudication method (e.g., 2+1, 3+1, none) for establishing ground truth from experts is mentioned or described in the provided text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was conducted comparing human readers with and without AI assistance. The focus is on the device as a planning tool for surgeons rather than an assistive AI for diagnostic reading. It mentions that "Compared to using two-dimensional (2D) images to plan a shoulder arthroplasty (current method used by FH-Orthopedic surgeons), e-Ortho supplies information to surgeons to help prepare an intraoperative plan," suggesting a comparison, but no formal MRMC study is detailed.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop) Performance Was Done:
The e-Ortho Shoulder Software is described as an "information tool to assist in the preoperative surgical planning" and states that "the chosen procedure is the responsibility of the Surgeon." It clarifies that an "e-Ortho engineer provides inputs via the e-Ortho software." This suggests that the device operates within a human-in-the-loop workflow, providing tools and visualizations.
While accuracy testing is mentioned, it is not clear whether a standalone performance evaluation of the algorithm, without human interaction, was performed and reported for implant sizing, positioning, and related measurements. The summary emphasizes the device's role as an assistive tool for surgeons, with an engineer providing inputs.
7. The Type of Ground Truth Used:
The document does not explicitly state the type of ground truth used for any accuracy testing. Given its use for surgical planning, potential ground truth sources could include:
- Expert Consensus: Likely for the "expected results" in validation tests.
- Pathology/Outcomes Data: Not mentioned, but ideal for long-term clinical validation.
- Physical measurements / cadaveric studies: Not mentioned.
The specific source of ground truth for accuracy claims is not detailed.
8. The Sample Size for the Training Set:
The document does not provide any information regarding the sample size of a training set used to develop or train the e-Ortho Shoulder Software. Such details about model development are often included in premarket submissions, but none appear here.
9. How the Ground Truth for the Training Set Was Established:
Similar to question 8, since no information regarding a training set is provided, there is no mention of how the ground truth for any potential training set was established.