CORI is indicated for use in surgical procedures in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined. These procedures include:
- unicondylar knee replacement (UKR),
- total knee arthroplasty (TKA),
- revision knee arthroplasty, and
- total hip arthroplasty (THA).
The subject of this Special 510(k) is REAL INTELLIGENCE CORI (CORI), a robotic-assisted orthopedic surgical navigation and burring system. CORI uses established technologies of navigation via a passive infrared tracking camera. Based on intraoperatively-defined bone landmarks and known geometry of the surgical implant, the system aids the surgeon in establishing a bone surface model for the target surgery and planning the surgical implant location. For knee applications, CORI then aids the surgeon in executing the surgical plan by controlling the cutting engagement of the surgical bur.
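Landmark-based model building of this kind typically starts with a rigid registration that maps intraoperatively collected bone landmarks into the coordinate frame of the reference model. The sketch below uses the standard Kabsch/SVD least-squares fit and is illustrative only; the submission does not disclose CORI's actual registration algorithm, and all names here are hypothetical.

```python
import numpy as np

def register_landmarks(source, target):
    """Rigid (rotation + translation) least-squares alignment of two
    corresponding 3-D point sets via the Kabsch/SVD method.

    source, target: (N, 3) arrays of corresponding points, e.g. probed
    bone landmarks (tracker frame) and model landmarks (model frame).
    Returns (R, t) such that target ~= source @ R.T + t.
    """
    src_c = source - source.mean(axis=0)     # center both clouds
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t
```

With exact, noise-free correspondences this recovers the ground-truth transform; in practice the fit is computed over several probed landmarks and refined against additional surface points.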
CORI knee application software controls the cutting engagement of the surgical bur based on its proximity to the planned target surface. The cutting control is achieved with two modes:
- Exposure control adjusts the bur's exposure with respect to a guard. If the surgeon encroaches on a portion of bone that is not to be cut, the robotic system retracts the bur inside the guard, disabling cutting.
- Speed control regulates the signal going to the tool control unit and limits the speed of the drill as the target surface is approached.
Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill.
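As a rough illustration, the two modes can be modeled as simple functions of the bur tip's distance to the planned target surface. This is a minimal sketch of the described behavior, not CORI's actual control law; the parameter names and thresholds (`guard_margin_mm`, `ramp_start_mm`, `max_rpm`) are hypothetical and not taken from the submission.

```python
# Hypothetical sketch of the two cutting-control modes described above.

def exposure_control(distance_to_target_mm, guard_margin_mm=0.5):
    """Exposure control: retract the bur inside its guard (disabling
    cutting) once the tip comes within a margin of the planned surface."""
    if distance_to_target_mm > guard_margin_mm:
        return "EXPOSED"     # bur extended past the guard, cutting enabled
    return "RETRACTED"       # bur pulled inside the guard, cutting disabled

def speed_control(distance_to_target_mm, max_rpm=80000, ramp_start_mm=3.0):
    """Speed control: scale the commanded bur speed down linearly as the
    planned target surface is approached, reaching zero at the surface."""
    if distance_to_target_mm >= ramp_start_mm:
        return max_rpm
    if distance_to_target_mm <= 0.0:
        return 0
    return int(max_rpm * distance_to_target_mm / ramp_start_mm)
```

Disabling both functions corresponds to the fallback mode in which the tool behaves as a standard navigated surgical drill.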
The provided text describes a 510(k) submission for the REAL INTELLIGENCE™ CORI™ surgical navigation and burring system. This Special 510(k) supports an update to the CORI system to allow its use with porous Unicondylar Knee Replacement (UKR) implants.
The core of this submission is to demonstrate substantial equivalence to a previously cleared predicate device (also REAL INTELLIGENCE™ CORI™, K221224). Therefore, the provided documentation focuses on showing that the modifications to support porous UKR implants do not impact the system's intended use, indications for use, or fundamental scientific technology, and that the device remains safe and effective.
Crucially, the document does NOT contain a detailed report of a study proving the device meets specific performance acceptance criteria for the AI component, nor does it explicitly mention an AI component as typically understood in medical image analysis (e.g., for diagnosis or prognosis). The device is described as a "robotic-assisted orthopedic surgical navigation and burring system" that uses "intraoperative data collection (image-free or non-CT data generation)" and "predefined boundaries generated during the planning process to control the motion of the surgical bur." This suggests a system for surgical precision and control, not necessarily one that employs machine learning/AI for diagnostic or predictive tasks from imaging data.
Given the information provided, it's not possible to populate all the requested fields as they pertain to a study proving an AI/ML-based device meets acceptance criteria through performance metrics like sensitivity, specificity, or reader improvement. The document focuses on demonstrating that the modified surgical control system still performs as expected and is safe and effective for its intended surgical guidance purpose.
However, I can extract information related to the closest aspects of "acceptance criteria" and "study" described, which in this context refer to verification and validation testing of the updated surgical system.
Here's an attempt to answer based on the provided text, highlighting where information is absent for an AI/ML context:
Device: REAL INTELLIGENCE™ CORI™ (K231963 Special 510(k) update)
Predicate Device: REAL INTELLIGENCE™ CORI™ (K221224)
Purpose of Submission: Update to allow use with porous Unicondylar Knee Replacement (UKR) Implants.
1. A table of acceptance criteria and the reported device performance
The document does not provide a quantitative table of acceptance criteria with specific performance metrics (e.g., accuracy, precision measurements for anatomical structures, or percentages for successful burring). Instead, it makes a general statement about meeting design inputs.
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| All design inputs (for the updated system) are met. | "Blue Belt Technologies has concluded that all design inputs have been met." |
| Safety and efficacy of CORI for porous UKR implants demonstrated. | "Verification and validation testing demonstrated the safety and efficacy of CORI when the system is used to place porous UKR implants..." |
| Usability for surgeons to safely and effectively use the device. | "Summative usability testing (including labeling validation) demonstrated that participating surgeons were able to use the subject device safely and effectively in a simulated use environment." |
| No new questions of safety or effectiveness raised by testing. | "...the verification and validation testing performed did not raise any new questions of safety or effectiveness." |
| Substantial equivalence to predicate device maintained. | "The information presented in this 510(k) premarket notification demonstrates that CORI may be used for the placement of porous UKR implants, and that CORI is as safe and effective as the predicate CORI system (K221224)." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified. The document mentions "verification and validation testing" and "summative usability testing," but does not provide details on the number of cadaveric specimens, phantom models, or simulated cases used.
- Data Provenance: Not specified. The studies appear to be bench and simulated use tests, not clinical data from patients.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified. For "summative usability testing," the document states "participating surgeons," but provides no number or qualifications beyond "surgeons."
- Qualifications of Experts: Assumed to be surgeons, but no specific professional experience or certification details are given.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: Not applicable/not mentioned. This device is a surgical navigation and burring system, not an AI for diagnostic image interpretation that would typically involve an MRMC study comparing human readers with and without AI assistance. The "AI" in "REAL INTELLIGENCE" appears to refer more broadly to advanced computational control/guidance rather than a diagnostic AI algorithm.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance: Not explicitly detailed as a separate "algorithm only" study. The system provides "software-defined spatial boundaries" and "controls the motion of the surgical bur." The testing mentioned is for the integrated system's performance (human-in-the-loop operation, or mechanical performance of the robotic arm/burring based on software control), not a standalone AI algorithm validating its own output without physical action.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not explicitly stated but implied to be based on:
- Design Inputs: The system's performance is verified against its predefined engineering design specifications for spatial accuracy and burring control.
- Simulated Surgical Environment: For usability testing, the "ground truth" would be the successful and safe completion of the simulated surgical task according to established surgical principles and desired outcomes for UKR implant placement. This would likely involve measuring accuracy of bone cuts against a planned target.
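Cut accuracy of this kind is typically quantified as the deviation of the achieved cut surface from the planned target. A minimal sketch, assuming a planar resection target and measured surface points (the document does not specify the actual metric or any tolerance values):

```python
import numpy as np

def cut_deviation(points, plane_point, plane_normal):
    """Signed point-to-plane distances of measured cut-surface points
    from the planned resection plane (positive along the normal,
    i.e. under-resected; negative = over-resected)."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                 # unit normal
    return (np.asarray(points, dtype=float) - plane_point) @ n

def max_abs_deviation(points, plane_point, plane_normal):
    """Worst-case absolute deviation, a simple accuracy figure."""
    return float(np.max(np.abs(cut_deviation(points, plane_point, plane_normal))))
```

A verification protocol would compare such deviations against a predefined tolerance; the tolerance itself is part of the (undisclosed) design inputs.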
8. The sample size for the training set
- Sample Size for Training Set: Not applicable/not mentioned. This document describes an update to a surgical system, not the development of a new machine learning model. The system uses "intraoperative data collection (image-free or non-CT data generation) to create a model of the patient's femur and/or tibia" and "predefined boundaries." This implies real-time data capture and processing based on established anatomical models and surgical plans, rather than a system trained on a large dataset of patient images to learn patterns.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable/not mentioned. As per point 8, there's no indication of a machine learning training set in the conventional sense. The "ground truth" for the system's operational principles would be based on anatomical science, biomechanics, engineering specifications, and surgical planning principles.
§ 882.4560 Stereotaxic instrument.

(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.

(b) Classification. Class II (performance standards).