Navbit Rapid Surgical Plan (RSP-SW-001)
Navbit Rapid Surgical Plan is intended for pre-operative planning for primary total hip arthroplasty. Rapid Surgical Plan is intended to be used as a tool to assist the surgeon in the selection and positioning of components in primary total hip arthroplasty.
Rapid Surgical Plan is indicated for individuals undergoing primary hip surgery.
The Navbit Rapid Surgical Plan is non-invasive total hip arthroplasty planning software. It is software as a medical device (SaMD) that provides pre-operative planning of acetabular component orientation for orthopaedic surgeons. The software provides a recommended cup target intended to reduce impingement based on each patient's spinopelvic mobility. The ultimate decision to use the cup target recommended by Navbit rests with the surgeon, based on their clinical judgement.
The provided text details the 510(k) premarket notification for the Navbit Rapid Surgical Plan (RSP-SW-001) device. It outlines the device's intended use, comparison to a predicate device (RI.HIP MODELER), and the non-clinical testing performed to demonstrate substantial equivalence.
Here's the breakdown of the acceptance criteria and the study proving the device meets them:
1. A table of acceptance criteria and the reported device performance
The document details the following acceptance criteria (referred to as "Device Measurement Accuracy Evaluation" in the text) and the reported device performance:
| Measurement Type | Acceptance Criteria (95% Confidence) | Reported Device Performance |
|---|---|---|
| Linear Measurements | ±0.1 mm | Met |
| Angular Measurements | ±0.3° | Met |
| Ratio Measurements | ±0.1 | Met |
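As an illustration only (the 510(k) summary does not describe the statistical method or test-pattern data used), the sketch below shows one common way a criterion of this kind can be checked: compute the device-minus-reference errors and confirm that the 95% confidence interval of the mean error lies entirely within the stated limit. The data, function names, and the choice of a t-based interval are assumptions, not the manufacturer's verification protocol.

```python
# Illustrative check of measurement errors against a ±limit at 95% confidence.
# Hypothetical data and helper names; not the manufacturer's verification protocol.
import numpy as np
from scipy import stats

def within_limit_95(measured, reference, limit):
    """Return True if the 95% CI of the mean error lies entirely within ±limit."""
    errors = np.asarray(measured, float) - np.asarray(reference, float)
    n = errors.size
    mean = errors.mean()
    sem = errors.std(ddof=1) / np.sqrt(n)            # standard error of the mean
    half_width = stats.t.ppf(0.975, df=n - 1) * sem  # two-sided 95% CI half-width
    return -limit <= mean - half_width and mean + half_width <= limit

# Example with synthetic angular measurements (degrees) against a ±0.3° limit:
rng = np.random.default_rng(0)
reference = rng.uniform(30, 60, size=25)           # known test-pattern angles
measured = reference + rng.normal(0, 0.05, 25)     # simulated device readings
print(within_limit_95(measured, reference, limit=0.3))
```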
2. Sample size used for the test set and the data provenance
The document states that in the "Clinical Measurement Accuracy Evaluation," "representative patient images" were used. However, it does not specify the exact sample size for this test set.
Regarding data provenance:
- The data used for the test set in the "Clinical Measurement Accuracy Evaluation" consisted of "representative patient images." The country of origin and whether the data was retrospective or prospective are not explicitly stated in the provided text.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- The ground truth for the test set in the "Clinical Measurement Accuracy Evaluation" was "established by surgeons."
- The number of surgeons used to establish this ground truth is not specified.
- Their specific qualifications (e.g., years of experience) are not explicitly stated, beyond them being "surgeons."
4. Adjudication method for the test set
The document states that the ground truth for the "Clinical Measurement Accuracy Evaluation" was "established by surgeons." It does not explicitly detail an adjudication method (e.g., 2+1, 3+1, consensus process) for these surgeons in establishing the ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- A formal MRMC comparative effectiveness study, evaluating human readers with and without AI assistance, was not described for this specific device.
- The focus was on demonstrating that the performance of the planning technicians using RSP is "equivalent to that of the surgeons for the scope of the landmarking tasks involved" during user testing. This is a comparison of two user groups (technicians vs. surgeons) on a reference dataset, rather than an MRMC study on human improvement with AI assistance. Therefore, an effect size of human reader improvement with AI assistance is not provided.
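The equivalence claim between planning technicians and surgeons is stated without the underlying statistics. Purely as a hedged sketch of how such a between-group equivalence comparison is often framed (not necessarily the method used in this submission), a two one-sided tests (TOST) procedure on each group's landmarking errors could look like the following; the equivalence margin, sample sizes, and data are hypothetical.

```python
# Illustrative two one-sided tests (TOST) for equivalence of landmarking error
# between two reader groups. Margin and data are hypothetical; the filing does
# not state which statistical method was used.
import numpy as np
from scipy import stats

def tost_equivalence(group_a, group_b, margin, alpha=0.05):
    """Conclude equivalence if mean(group_a) - mean(group_b) lies within ±margin."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    df = a.size + b.size - 2                  # simple pooled-df approximation
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return max(p_lower, p_upper) < alpha      # both one-sided tests must reject

# Hypothetical landmarking errors (mm) with a ±0.5 mm equivalence margin:
rng = np.random.default_rng(1)
technicians = rng.normal(0.80, 0.3, 30)
surgeons = rng.normal(0.75, 0.3, 30)
print(tost_equivalence(technicians, surgeons, margin=0.5))
```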
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, a standalone evaluation was implicitly done as part of the "Device Measurement Accuracy Evaluation." This section describes the performance of the device's underlying measurement capabilities (linear, angular, ratio measurements) against defined limits, independent of human interaction beyond inputting test patterns.
7. The type of ground truth used
- For the "Clinical Measurement Accuracy Evaluation," the ground truth was "established by surgeons" based on landmarking clinical features on radiographs, which can be categorized as expert consensus/opinion (though the consensus process is not detailed).
- For the "Device Measurement Accuracy Evaluation," the ground truth was derived from "test patterns," implying a known, engineered truth against which the device's fundamental measurements were compared.
8. The sample size for the training set
The document does not specify the sample size for the training set used for the Navbit Rapid Surgical Plan algorithm.
9. How the ground truth for the training set was established
The document does not explicitly describe how the ground truth for the training set was established. It mentions that "Navbit RSP uses an algorithm based on clinical recommendations for spinopelvic mobility to provide cup target recommendations" and that "RI.HIP Modeler uses an algorithm based on spinopelvic mobility classifications and hip kinematics in literature to recommend cup targets." This suggests reliance on existing clinical knowledge and literature for algorithm development, but the specifics of how this translated into labeled data for a training set are not provided.
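For orientation only: the summary says both devices base cup target recommendations on spinopelvic mobility but does not disclose the rules. The sketch below illustrates the general idea using a literature-style classification of the standing-to-sitting change in sacral slope; the thresholds, the 40°/20° base target, and the anteversion offset are illustrative assumptions, not the Navbit RSP (or RI.HIP Modeler) algorithm.

```python
# Illustrative spinopelvic-mobility-based cup target adjustment.
# Thresholds, base target, and offset are placeholders for illustration only;
# the actual Navbit RSP rules are not disclosed in the 510(k) summary.
from dataclasses import dataclass

@dataclass
class CupTarget:
    inclination_deg: float
    anteversion_deg: float

def classify_mobility(delta_sacral_slope_deg: float) -> str:
    """Classify mobility from the standing-to-sitting change in sacral slope."""
    if delta_sacral_slope_deg < 10:
        return "stiff"
    if delta_sacral_slope_deg > 30:
        return "hypermobile"
    return "normal"

def recommend_cup_target(delta_sacral_slope_deg: float) -> CupTarget:
    """Start from a generic 40/20 degree target and nudge anteversion for a stiff pelvis."""
    target = CupTarget(inclination_deg=40.0, anteversion_deg=20.0)
    if classify_mobility(delta_sacral_slope_deg) == "stiff":
        target.anteversion_deg += 5.0  # placeholder adjustment, not a clinical recommendation
    return target

print(recommend_cup_target(delta_sacral_slope_deg=6.0))
# -> CupTarget(inclination_deg=40.0, anteversion_deg=25.0)
```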
CORIOGRAPH Pre-Op Planning and Modeling Services
Intended Use
CORIOGRAPH Pre-Op Planning and Modeling Services are intended to provide preoperative planning for surgical procedures based on patient imaging, provided that anatomic landmarks necessary for generating the plan are identifiable on patient imaging scans.
Indications for Use
The CORIOGRAPH Pre-op Planning and Modeling Services are indicated for use for the following procedures:
- Unicondylar Knee Replacement (UKR)
- Total Knee Arthroplasty (TKA)
- Primary Total Hip Arthroplasty (THA)
The subject CORIOGRAPH Hip Pre-Op Plan and CORIOGRAPH Modeler are medical function modules within the CORIOGRAPH Pre-Op Planning and Modeling Services. They are additional offerings being introduced by Blue Belt Technologies, Inc. to allow for pre-operative planning of primary total hip arthroplasty (THA) based on patient imaging. The CORIOGRAPH Hip Pre-Op Plan system will utilize Smith and Nephew personnel to generate patient-specific bone models and preoperative plans for primary THA, which will be viewable and editable on CORIOGRAPH Modeler 1.0. Together, CORIOGRAPH Hip Pre-Op Plan and CORIOGRAPH Modeler are the subject of this submission.
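Because the intended use conditions planning on anatomic landmarks being identifiable on the imaging, a minimal generic sketch of deriving an angular measurement from landmark coordinates is shown below. The landmark names and coordinates are hypothetical; this is not the CORIOGRAPH processing pipeline.

```python
# Generic illustration of deriving an angular measurement from anatomic landmark
# coordinates identified on patient imaging. Landmark names and coordinates are
# hypothetical; this is not the CORIOGRAPH processing pipeline.
import numpy as np

def angle_between_deg(v1, v2):
    """Angle in degrees between two 3D vectors."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical landmarks (mm) picked on a patient-specific bone model:
femoral_head_center = np.array([0.0, 0.0, 0.0])
neck_base = np.array([25.0, -30.0, 0.0])
shaft_distal = np.array([30.0, -130.0, 0.0])

neck_axis = neck_base - femoral_head_center   # femoral neck direction
shaft_axis = shaft_distal - neck_base         # femoral shaft direction
neck_shaft_angle = 180.0 - angle_between_deg(neck_axis, shaft_axis)
print(f"Neck-shaft angle ≈ {neck_shaft_angle:.1f}°")
```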
Conformance to the acceptance criteria for the CORIOGRAPH Pre-Op Planning and Modeling Services V2.0 was demonstrated through verification and validation testing and summative usability testing. The provided document does not explicitly list numerical acceptance criteria values for metrics like accuracy, sensitivity, or specificity. Instead, it broadly states that testing demonstrated the safety and effectiveness of the software applications and that all design inputs were met.
Acceptance Criteria and Reported Device Performance
Note: The document does not provide specific quantitative acceptance criteria or detailed performance metrics. It indicates that the device met its design inputs and was found to be safe and effective.
| Acceptance Criteria Category | Reported Device Performance (as stated in document) |
|---|---|
| Verification and Validation Testing | Demonstrated the safety and effectiveness of the software applications used in CORIOGRAPH Pre-Op Planning & Modeling Services V2.0. All design inputs were met. |
| Summative Usability Testing | Demonstrated that participating surgeons were able to use the subject device safely and effectively in a simulated use environment. |
| Credibility Evaluation | Demonstrated that the kinematic models and Activities of Daily Living (ADLs) utilized in the subject device are clinically relevant. |
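The credibility evaluation concerns the clinical relevance of the kinematic models and ADLs, but the document does not say how they are applied during planning. As a loose, assumed illustration of how an ADL library might feed a planning check, the sketch below flags ADLs whose required hip flexion exceeds a modeled impingement-free limit; the ADL values and the toy limit model are placeholders, not CORIOGRAPH's kinematic model.

```python
# Toy illustration: flag ADLs whose required hip flexion exceeds a modeled
# impingement-free limit for a given cup orientation. The ADL flexion values
# and the limit model are placeholders, not CORIOGRAPH's kinematic model.
ADL_FLEXION_DEG = {          # hypothetical peak hip flexion required per activity
    "walking": 40.0,
    "sit_to_stand": 95.0,
    "tying_shoes": 110.0,
}

def impingement_free_flexion_limit(anteversion_deg: float) -> float:
    """Placeholder model: more cup anteversion allows more flexion before impingement."""
    return 90.0 + 0.8 * anteversion_deg

def adls_at_risk(anteversion_deg: float) -> list[str]:
    limit = impingement_free_flexion_limit(anteversion_deg)
    return [adl for adl, flexion in ADL_FLEXION_DEG.items() if flexion > limit]

print(adls_at_risk(anteversion_deg=20.0))  # ['tying_shoes'] with these placeholder numbers
```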
Study Details:

- Sample size used for the test set and the data provenance:
  - The document mentions that summative usability testing was performed, indicating a test set was used, but does not specify the sample size for the test set.
  - The document does not specify the data provenance (e.g., country of origin, retrospective or prospective) for the data used in testing.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - The document implies that "participating surgeons" were involved in the summative usability testing, but does not state the number of experts used to establish ground truth or their specific qualifications (e.g., years of experience, subspecialty).
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  - The document does not describe any specific adjudication method used for the test set.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - The document does not mention an MRMC comparative effectiveness study or report any effect size for human reader improvement with AI assistance. The study focuses on the device's ability to generate pre-operative plans and its usability.
- If a standalone (i.e., algorithm only without human-in-the-loop) performance evaluation was done:
  - The document states that the software applications underwent "verification and validation testing," implying a standalone component to ensure functional correctness. However, it also highlights summative usability testing with participating surgeons, which indicates that human-in-the-loop performance was also evaluated, particularly for the overall service. It is not explicitly stated whether fully standalone performance was evaluated as a separate metric without any human involvement.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - For the credibility evaluation, the document states it "demonstrated that the kinematic models and Activities of Daily Living (ADLs) utilized in the subject device are clinically relevant." This suggests clinical relevance or expert opinion/consensus as a form of ground truth for these aspects. For the outputs of the pre-operative planning, the implied ground truth is agreement with surgical principles and objectives, likely assessed by participating surgeons during usability testing.
- The sample size for the training set:
  - The document does not specify the sample size for a training set. It describes the device as providing "Pre-Op Planning and Modeling Services" based on patient imaging and does not detail a machine learning model's training process or associated dataset sizes.
- How the ground truth for the training set was established:
  - Since the document does not explicitly mention a training set or machine learning components requiring labeled training data (beyond general software development and functionality), it does not describe how ground truth for a training set was established. The "services" aspect implies that plans are generated by human personnel (Smith and Nephew personnel, as stated in the device description), so any "training" refers to the development and refinement of these human-led processes and software functionalities under established surgical guidelines.