510(k) Data Aggregation
(90 days)
Blue Belt Technologies, Inc.
Intended Use
CORIOGRAPH Pre-Op Planning and Modeling Services are intended to provide preoperative planning for surgical procedures based on patient imaging, provided that anatomic landmarks necessary for generating the plan are identifiable on patient imaging scans.
Indications for Use
The CORIOGRAPH Pre-op Planning and Modeling Services are indicated for use for the following procedures:
- Unicondylar Knee Replacement (UKR)
- Total Knee Arthroplasty (TKA)
- Primary Total Hip Arthroplasty (THA)
- Primary Anatomic Total Shoulder Arthroplasty (aTSA)
- Primary Reverse Total Shoulder Arthroplasty (rTSA)
CORIOGRAPH Pre-Op Planning and Modeling Services (CORIOGRAPH) is Software as a Medical Device (SaMD) that provides pre-operative planning for orthopedic surgical procedures based on patient imaging, surgeon preferences, and implant geometry. CORIOGRAPH comprises several medical software systems (modules), and these modules share a set of non-medical-function software applications called the Case Processing System.
CORIOGRAPH is a software medical device system. The medical-function modules share a graphical user interface where the surgeon provides case preferences and patient imaging, retrieves PDF plans generated by the Pre-Op Plan modules, and launches the Modeler modules.
CORIOGRAPH Pre-Op Plan Modules
Using patient-specific information, patient imaging, and surgeon inputs, a pre-operative plan is generated by Smith+Nephew personnel. The plan provides initial alignment recommendations to the surgeon on implant placement based on the geometries of the implant and of the generated bone models.
CORIOGRAPH Hip Modeler and CORIOGRAPH Shoulder Modeler Modules
CORIOGRAPH Hip Modeler and CORIOGRAPH Shoulder Modeler are web-based software applications intended for surgeons to modify the patient-specific pre-operative plan and to evaluate the impact of the modifications through impingement analysis tools such as impingement-free Range-of-Motion (ROM) and Activities of Daily Living (ADL) simulations.
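The idea behind such simulations can be pictured with a toy model: sweep candidate joint angles against a collision test and check whether each activity's required range fits inside the impingement-free arc. The collision model, angle limits, and ADL ranges below are illustrative assumptions for the sketch, not Smith+Nephew's simulation.

```python
# Toy model of an impingement-free range-of-motion (ROM) sweep with an ADL
# check. The collision model, limits, and ADL ranges are illustrative
# assumptions, not Smith+Nephew's simulation.

def impingement_free_arc(collides, lo=-30.0, hi=150.0, step=1.0):
    """Sweep flexion angles outward from neutral (0 deg) and return the
    (min, max) of the contiguous collision-free arc around neutral."""
    hi_free, a = 0.0, 0.0
    while a <= hi and not collides(a):
        hi_free, a = a, a + step
    lo_free, a = 0.0, 0.0
    while a >= lo and not collides(a):
        lo_free, a = a, a - step
    return lo_free, hi_free

def make_rim_model(rim_limit=120.0, ext_limit=-15.0):
    """Toy impingement test: the prosthetic neck contacts the cup rim past
    rim_limit deg of flexion or past ext_limit deg of extension."""
    return lambda flexion: flexion > rim_limit or flexion < ext_limit

# Activities of Daily Living as required flexion ranges (deg); made-up values.
ADLS = {"walking": (-10, 40), "sit_to_stand": (0, 95), "deep_squat": (0, 130)}

lo_free, hi_free = impingement_free_arc(make_rim_model())
for name, (need_lo, need_hi) in ADLS.items():
    ok = lo_free <= need_lo and need_hi <= hi_free
    print(f"{name}: {'clear' if ok else 'impinges'}")
```

With these made-up parameters, an activity fails the check only when its required flexion range leaves the impingement-free arc, which is the kind of output a surgeon would use to adjust cup placement in the plan.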
The provided text is a 510(k) clearance letter and summary for a medical device called CORIOGRAPH Pre-Op Planning and Modeling Services. It details the device's intended use, indications for use, and a comparison to predicate devices, focusing on the addition of shoulder arthroplasty procedures.
However, the document does not contain the specific information requested regarding acceptance criteria and the detailed study proving the device meets these criteria. It mentions non-clinical bench testing, software verification testing (IEC 62304), credibility evaluation of kinematic models and ADL simulations, and summative usability validation testing (IEC 62366). It states that "Design verification and validation testing demonstrated that CORIOGRAPH Pre-Op Planning and Modeling Services (v 3.0) meets all design requirements and is as safe and effective as its primary and secondary predicate devices." and "The credibility evaluation demonstrated that the kinematic models and Activities of Daily Living (ADLs) simulations utilized in the subject device are clinically relevant."
Crucially, it does not provide:
- Specific acceptance criteria for accuracy, precision, or performance metrics.
- Quantifiable reported device performance against those criteria.
- Sample sizes for test sets used for performance evaluation (other than mentioning "representative users" for usability).
- Data provenance (country of origin, retrospective/prospective) for performance data.
- Details on expert ground truth establishment (number of experts, qualifications, adjudication method).
- Information on MRMC comparative effectiveness studies.
- Standalone (algorithm-only) performance.
- Type of ground truth used (e.g., pathology, outcomes data) for clinical performance, if any.
- Sample size for the training set or how its ground truth was established.
The document is a high-level summary confirming regulatory clearance based on substantial equivalence, implying that the detailed testing data would have been part of the full 510(k) submission, but is not included in this public-facing summary.
Therefore, based only on the provided text, I cannot complete the requested tables and details about acceptance criteria and study proof. The document confirms that testing was done and results met requirements, but does not provide the specifics of those results or the criteria themselves.
(106 days)
Blue Belt Technologies, Inc.
Intended Use
CORIOGRAPH Pre-Op Planning and Modeling Services are intended to provide preoperative planning for surgical procedures based on patient imaging, provided that anatomic landmarks necessary for generating the plan are identifiable on patient imaging scans.
Indications for Use
The CORIOGRAPH Pre-op Planning and Modeling Services are indicated for use for the following procedures:
- Unicondylar Knee Replacement (UKR)
- Total Knee Arthroplasty (TKA)
- Primary Total Hip Arthroplasty (THA)
The subject CORIOGRAPH Hip Pre-Op Plan and CORIOGRAPH Modeler are medical-function modules within CORIOGRAPH Pre-Op Planning and Modeling Services, introduced as additional offerings by Blue Belt Technologies, Inc. to allow pre-operative planning based on patient imaging for primary total hip arthroplasty (THA). The CORIOGRAPH Hip Pre-Op Plan system uses Smith and Nephew personnel to generate patient-specific bone models and preoperative plans for primary THA, which are viewable and editable in CORIOGRAPH Modeler 1.0. Together, CORIOGRAPH Hip Pre-Op Plan and CORIOGRAPH Modeler are the subject of this submission.
The acceptance criteria for the CORIOGRAPH Pre-Op Planning and Modeling Services V2.0 were demonstrated through verification and validation testing, and summative usability testing. The provided document does not explicitly list numerical acceptance criteria values for metrics like accuracy, sensitivity, or specificity. Instead, it broadly states that testing demonstrated the safety and effectiveness of the software applications and that all design inputs were met.
Acceptance Criteria and Reported Device Performance
Note: The document does not provide specific quantitative acceptance criteria or detailed performance metrics. It indicates that the device met its design inputs and was found to be safe and effective.
| Acceptance Criteria Category | Reported Device Performance (as stated in document) |
|---|---|
| Verification and Validation Testing | Demonstrated the safety and effectiveness of the software applications used in CORIOGRAPH Pre-Op Planning & Modeling Services V2.0. All design inputs were met. |
| Summative Usability Testing | Demonstrated that participating surgeons were able to use the subject device safely and effectively in a simulated use environment. |
| Credibility Evaluation | Demonstrated that the kinematic models and Activities of Daily Living (ADLs) utilized in the subject device are clinically relevant. |
Study Details:

- Sample size used for the test set and the data provenance:
  - The document mentions that "summative usability testing" was performed, indicating a test set was used, but does not specify its sample size.
  - The document does not specify the data provenance (e.g., country of origin, retrospective or prospective) for the data used in testing.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - The document implies that "participating surgeons" were involved in the summative usability testing, but does not state the number of experts used to establish ground truth or their specific qualifications (e.g., years of experience, subspecialty).
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  - The document does not describe any specific adjudication method used for the test set.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with versus without AI assistance:
  - The document does not mention an MRMC comparative effectiveness study or report any effect size for human reader improvement with AI assistance. The study focuses on the device's ability to generate pre-operative plans and its usability.
- If standalone (i.e., algorithm-only, without human-in-the-loop) performance was evaluated:
  - The document states that the software applications underwent "verification and validation testing," implying a standalone component to ensure functional correctness. It also highlights "summative usability testing" with "participating surgeons," which indicates that human-in-the-loop performance was evaluated, particularly for the overall service. It is not explicitly stated whether fully standalone performance was evaluated as a separate metric without any human involvement.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - For the "credibility evaluation," the document states it "demonstrated that the kinematic models and Activities of Daily Living (ADLs) utilized in the subject device are clinically relevant." This suggests clinical relevance or expert opinion/consensus as a form of ground truth for these aspects. For the outputs of the pre-operative planning, the implied ground truth is agreement with surgical principles and objectives, likely assessed by participating surgeons during usability testing.
- The sample size for the training set:
  - The document does not specify a training-set sample size. It describes the device as providing "Pre-Op Planning and Modeling Services" based on patient imaging and does not detail a machine learning model's training process or associated dataset sizes.
- How the ground truth for the training set was established:
  - Since the document does not explicitly mention a training set or machine learning components requiring labeled training data (beyond general software development and functionality), it does not describe how ground truth for a training set was established. The "services" aspect implies the system is used by human personnel (Smith and Nephew personnel, as stated in the device description) to generate plans, so the "training" may refer to the development and refinement of these human-led processes and software functionalities under established surgical guidelines.
(60 days)
Blue Belt Technologies, Inc.
REAL INTELLIGENCE CORI is indicated for use in surgical procedures, in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined. These procedures include:
- unicondylar knee replacement (UKR),
- total knee arthroplasty (TKA),
- revision knee arthroplasty, and
- total hip arthroplasty (THA).
The subject of this Traditional 510(k) is REAL INTELLIGENCE CORI (CORI), a robotic-assisted orthopedic surgical navigation and burring system. CORI uses established technologies of navigation via a passive infrared tracking camera. For knee applications, CORI aids the surgeon in planning the surgical implant location and in executing the surgical plan by controlling the cutting engagement of the surgical bur.
For knee applications, CORI software controls the cutting engagement of the surgical bur based on its proximity to the planned target surface. The cutting control is achieved with two modes:
- Exposure control adjusts the bur's exposure with respect to a guard. If the surgeon encroaches on a portion of bone that is not to be cut, the robotic system retracts the bur inside the guard, disabling cutting.
- Speed control regulates the signal going to the tool control unit itself and limits the speed of the drill if the target surface is approached.
Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill.
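The two control modes can be sketched as simple functions of the bur tip's signed distance to the planned target surface. All names, thresholds, and units below are assumptions for illustration, not Smith+Nephew's actual control laws.

```python
# Illustrative sketch of the two cutting-control modes described above, driven
# by the bur tip's signed distance to the planned target surface (positive =
# bone still to be removed). Names, thresholds, and units are assumptions,
# not Smith+Nephew's actual control laws.

def exposure_control(dist_to_target_mm: float, max_exposure_mm: float = 6.0) -> float:
    """Exposure control: extend the bur past its guard only as far as bone
    remains to be cut; retract fully (disabling cutting) at or past the
    planned surface."""
    return max(0.0, min(max_exposure_mm, dist_to_target_mm))

def speed_control(dist_to_target_mm: float,
                  full_speed_rpm: int = 60_000,
                  slow_zone_mm: float = 2.0) -> int:
    """Speed control: full speed away from the target surface, ramped down
    linearly inside the slow zone, and zero at or past the surface."""
    if dist_to_target_mm <= 0.0:
        return 0
    if dist_to_target_mm >= slow_zone_mm:
        return full_speed_rpm
    return int(full_speed_rpm * dist_to_target_mm / slow_zone_mm)

print(exposure_control(10.0))   # deep in bone: bur fully exposed (6.0 mm)
print(speed_control(1.0))       # halfway into the slow zone: 30000 rpm
print(speed_control(-0.5))      # past the planned surface: 0 rpm
```

Disabling both controls, as the text notes the surgeon may do, amounts to bypassing these functions and driving the bur at a fixed exposure and speed.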
This document, K240139, describes a 510(k) premarket notification for a modification to the REAL INTELLIGENCE™ CORI™ device. The modification integrates an image-based planning option into the Total Knee Arthroplasty (TKA) and Unicondylar Knee Replacement (UKR) applications. The provided text, however, does not include detailed acceptance criteria or a specific study proving the device meets those criteria with performance data in the format requested.
The document primarily focuses on demonstrating substantial equivalence to a predicate device (K231963) by outlining similarities in intended use, indications for use, technological characteristics, and environmental use. It mentions "Design verification and validation testing demonstrated that CORI meets all design requirements" and "Comprehensive testing demonstrated that the system meets required design inputs," but does not provide the specific quantitative acceptance criteria or performance results from these tests.
Therefore, the following points address the requested information based only on what is available in the provided text. Many points will be marked as "Not provided" or "Not applicable" due to the nature of the submission (a 510(k) for a modification focusing on substantial equivalence rather than a full de novo study with detailed performance metrics).
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| System Meets Design Requirements | "CORI meets all design requirements." (No specific quantitative criteria or metrics provided) |
| Meets Required Design Inputs | "Comprehensive testing demonstrated that the system meets required design inputs." (No specific quantitative criteria or metrics provided) |
| Image-Based Planning Registration is Safe and Effective | "The key determining factor in establishing substantial equivalence is whether CORI can register the pre-operative plans safely and effectively." "Comprehensive verification testing demonstrated that the system meets required design inputs." (No specific quantitative metrics for safety and effectiveness of registration, e.g., accuracy, precision, or success rates, are provided.) |
| Usability (Safe & Effective Use in Simulated Environment) | "Summative usability validation testing demonstrating that representative users were able to use the subject device safely and effectively in a simulated use environment." "The study demonstrated that the surgeon users were able to safely and effectively perform tasks necessary to execute TKA and UKA procedures using the CORI system." (No specific quantitative usability metrics, like task completion rates, error rates, or time to completion, are provided.) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not specified for the software verification, design verification, or image-based planning registration testing. For the summative usability validation testing, it mentions "representative users," but the number is not provided.
- Data Provenance: Not specified. (e.g., country of origin, retrospective/prospective). This is a modification of an existing device, so the testing mainly focuses on the modified functionalities.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Number of Experts: Not specified.
- Qualifications of Experts: For the usability study, "surgeon users" are mentioned, implying qualified medical professionals, but specific qualifications (e.g., years of experience, specialty) are not detailed. For other testing, the method of establishing ground truth for technical performance (e.g., registration accuracy) is not explicitly described as relying on "experts" in the sense of clinical reviewers, but rather on engineering verification.
4. Adjudication Method for the Test Set
- Not applicable/Not specified. The document refers to "design verification and validation testing," "software verification testing," and "summative usability validation testing." There is no mention of an adjudication process typically associated with reviewer consensus on medical images or clinical outcomes.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of Human Reader Improvement with vs. Without AI Assistance
- Not done/Not applicable. This submission is for a surgical navigation system, not an AI-assisted diagnostic tool that would typically involve a multi-reader, multi-case study to assess human reader performance improvement. The "image-based planning option" implies the import and use of pre-operative imaging data, but the device itself is not described as an AI intended to assist human readers in image interpretation.
6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was done
- Not applicable in the conventional sense of a diagnostic algorithm. The device, REAL INTELLIGENCE™ CORI™, is described as a "robotic-assisted orthopedic surgical navigation and burring system" with an integrated image-based planning option. Its function inherently involves human interaction (a surgeon) and mechanical components. The "standalone" performance would relate to the accuracy of its navigation and burring control, which is implied by "Comprehensive testing demonstrated that the system meets required design inputs," but specific standalone performance metrics are not given.
7. The Type of Ground Truth Used
- The type of ground truth for general "design requirements" and "design inputs" would typically be engineering specifications, physical measurements, and computational models. For the "image-based planning option," the ground truth for registration accuracy would likely be established metrologically against known anatomical landmarks or fiducials in the pre-operative image data. For usability, the ground truth is simply the ability of users to safely and effectively complete tasks, assessed through observation and user feedback. No mention of pathology or outcomes data in this context.
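Landmark- or fiducial-based registration accuracy of this kind is conventionally quantified with rigid point-set registration and its post-fit residual. Below is a generic textbook sketch (the Kabsch/SVD method), not CORI's actual registration algorithm.

```python
# Generic rigid point-set registration (Kabsch/SVD) with an RMS residual
# check, the textbook way landmark/fiducial registration accuracy is
# quantified. Illustrative sketch only, not CORI's actual algorithm.

import numpy as np

def rigid_register(src, dst):
    """Return rotation R and translation t minimizing ||(src @ R.T + t) - dst||
    over paired landmark sets src, dst of shape (n, 3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def rms_error(src, dst, R, t):
    """Root-mean-square residual between mapped and probed landmarks."""
    mapped = src @ R.T + t
    return float(np.sqrt(((mapped - dst) ** 2).sum(axis=1).mean()))
```

In a verification protocol, the RMS residual between plan landmarks mapped by the fitted transform and the probed (or metrologically known) positions is the natural registration-accuracy metric.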
8. The Sample Size for the Training Set
- Not applicable/Not specified. The document does not describe the development of a machine learning or AI model with a distinct "training set." The modification is described as integrating an "image-based planning option," implying a capability to import and use image data, rather than a system trained on image data in an AI development pipeline.
9. How the Ground Truth for the Training Set was Established
- Not applicable. As noted in point 8, no training set for an AI model is described.
(29 days)
Blue Belt Technologies, Inc.
CORI is indicated for use in surgical procedures, in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined. These procedures include:
- unicondylar knee replacement (UKR),
- total knee arthroplasty (TKA),
- revision knee arthroplasty, and
- total hip arthroplasty (THA).
The subject of this Special 510(k) is REAL INTELLIGENCE CORI (CORI), a robotic-assisted orthopedic surgical navigation and burring system. CORI uses established technologies of navigation via a passive infrared tracking camera. Based on intraoperatively-defined bone landmarks and known geometry of the surgical implant, the system aids the surgeon in establishing a bone surface model for the target surgery and planning the surgical implant location. For knee applications, CORI then aids the surgeon in executing the surgical plan by controlling the cutting engagement of the surgical bur.
CORI knee application software controls the cutting engagement of the surgical bur based on its proximity to the planned target surface. The cutting control is achieved with two modes:
- Exposure control adjusts the bur's exposure with respect to a guard. If the surgeon encroaches on a portion of bone that is not to be cut, the robotic system retracts the bur inside the guard, disabling cutting.
- Speed control regulates the signal going to the tool control unit itself and limits the speed of the drill if the target surface is approached.
Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill.
The provided text describes a 510(k) submission for the REAL INTELLIGENCE™ CORI™ surgical navigation and burring system. This submission is a "Special 510(k)" which supports an update to the CORI system to allow it to be used with porous Unicondylar Knee Replacement (UKR) implants.
The core of this submission is to demonstrate substantial equivalence to a previously cleared predicate device (also REAL INTELLIGENCE™ CORI™, K221224). Therefore, the provided documentation focuses on showing that the modifications to support porous UKR implants do not impact the system's intended use, indications for use, or fundamental scientific technology, and that the device remains safe and effective.
Crucially, the document does NOT contain a detailed report of a study proving the device meets specific performance acceptance criteria for the AI component, nor does it explicitly mention an AI component as typically understood in medical image analysis (e.g., for diagnosis or prognosis). The device is described as a "robotic-assisted orthopedic surgical navigation and burring system" that uses "intraoperative data collection (image-free or non-CT data generation)" and "predefined boundaries generated during the planning process to control the motion of the surgical bur." This suggests a system for surgical precision and control, not necessarily one that employs machine learning/AI for diagnostic or predictive tasks from imaging data.
Given the information provided, it's not possible to populate all the requested fields as they pertain to a study proving an AI/ML-based device meets acceptance criteria through performance metrics like sensitivity, specificity, or reader improvement. The document focuses on demonstrating that the modified surgical control system still performs as expected and is safe and effective for its intended surgical guidance purpose.
However, I can extract information related to the closest aspects of "acceptance criteria" and "study" described, which in this context refer to verification and validation testing of the updated surgical system.
Here's an attempt to answer based on the provided text, highlighting where information is absent for an AI/ML context:
Device: REAL INTELLIGENCE™ CORI™ (K231963 Special 510(k) update)
Predicate Device: REAL INTELLIGENCE™ CORI™ (K221224)
Purpose of Submission: Update to allow use with porous Unicondylar Knee Replacement (UKR) Implants.
1. A table of acceptance criteria and the reported device performance
The document does not provide a quantitative table of acceptance criteria with specific performance metrics (e.g., accuracy, precision measurements for anatomical structures, or percentages for successful burring). Instead, it makes a general statement about meeting design inputs.
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| All design inputs (for the updated system) are met. | "Blue Belt Technologies has concluded that all design inputs have been met." |
| Safety and efficacy of CORI for porous UKR implants demonstrated. | "Verification and validation testing demonstrated the safety and efficacy of CORI when the system is used to place porous UKR implants..." |
| Usability for surgeons to safely and effectively use the device. | "Summative usability testing (including labeling validation) demonstrated that participating surgeons were able to use the subject device safely and effectively in a simulated use environment." |
| No new questions of safety or effectiveness raised by testing. | "...the verification and validation testing performed did not raise any new questions of safety or effectiveness." |
| Substantial equivalence to predicate device maintained. | "The information presented in this 510(k) premarket notification demonstrates that CORI may be used for the placement of porous UKR implants, and that CORI is as safe and effective as the predicate CORI system (K221224)." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified. The document mentions "verification and validation testing" and "summative usability testing," but does not provide details on the number of cadaveric specimens, phantom models, or simulated cases used.
- Data Provenance: Not specified. The studies appear to be bench and simulated use tests, not clinical data from patients.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified. For "summative usability testing," the document states "participating surgeons," but provides no number or qualifications beyond "surgeons."
- Qualifications of Experts: Assumed to be surgeons, but no specific professional experience or certification details are given.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with vs. without AI assistance
- MRMC Study: Not applicable/not mentioned. This device is a surgical navigation and burring system, not an AI for diagnostic image interpretation that would typically involve an MRMC study comparing human readers with and without AI assistance. The "AI" in "REAL INTELLIGENCE" appears to refer more broadly to advanced computational control/guidance rather than a diagnostic AI algorithm.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Not explicitly detailed as a separate "algorithm only" study. The system provides "software-defined spatial boundaries" and "controls the motion of the surgical bur." The testing mentioned is for the integrated system's performance (human-in-the-loop operation, or mechanical performance of the robotic arm/burring based on software control), not a standalone AI algorithm validating its own output without physical action.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not explicitly stated but implied to be based on:
- Design Inputs: The system's performance is verified against its predefined engineering design specifications for spatial accuracy and burring control.
- Simulated Surgical Environment: For usability testing, the "ground truth" would be the successful and safe completion of the simulated surgical task according to established surgical principles and desired outcomes for UKR implant placement. This would likely involve measuring accuracy of bone cuts against a planned target.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable/not mentioned. This document describes an update to a surgical system, not the development of a new machine learning model. The system uses "intraoperative data collection (image-free or non-CT data generation) to create a model of the patient's femur and/or tibia" and "predefined boundaries." This implies real-time data capture and processing based on established anatomical models and surgical plans, rather than a system trained on a large dataset of patient images to learn patterns.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable/not mentioned. As per point 8, there's no indication of a machine learning training set in the conventional sense. The "ground truth" for the system's operational principles would be based on anatomical science, biomechanics, engineering specifications, and surgical planning principles.
(113 days)
Blue Belt Technologies, Inc.
CORI is indicated for use in surgical procedures, in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined.
These procedures include unicondylar knee replacement (UKR), total knee arthroplasty (TKA), and total hip arthroplasty (THA).
The subject of this Traditional 510(k) is REAL INTELLIGENCE CORI (CORI), a robotic-assisted orthopedic surgical navigation and burring system. CORI uses established technologies of navigation via a passive infrared tracking camera. Based on intraoperatively-defined bone landmarks and known geometry of the surgical implant, the system aids the surgeon in establishing a bone surface model for the target surgery and planning the surgical implant location. For knee applications, CORI then aids the surgeon in executing the surgical plan by controlling the cutting engagement of the surgical bur.
CORI knee application software controls the cutting engagement of the surgical bur based on its proximity to the planned target surface. The cutting control is achieved with two modes:
- Exposure control adjusts the bur's exposure with respect to a guard. If the surgeon encroaches on a portion of bone that is not to be cut, the robotic system retracts the bur inside the guard, disabling cutting.
- Speed control regulates the signal going to the tool control unit itself and limits the speed of the drill if the target surface is approached.
Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill.
Here's a breakdown of the acceptance criteria and study information for the Real Intelligence Cori (Cori) device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document primarily focuses on demonstrating substantial equivalence to a predicate device, rather than explicit numerical acceptance criteria for a new clinical study. The "acceptance criteria" are implied through various verification and validation activities.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Design Requirements Met | Design verification and validation testing demonstrated that CORI meets all design requirements. |
| Safety and Effectiveness Equivalence to Predicate | CORI is as safe and effective as its primary predicate device (K220255) and secondary predicate device, the OMNIBotics Knee System (K200888). |
| Physical Performance | Comprehensive performance testing demonstrated that the system meets required design inputs. Performance data consisted of physical performance testing for all system components. |
| Biocompatibility (for patient-contacting components) | Biocompatibility evaluation demonstrated that the system satisfies the requirements of ISO 10993-1. |
| Safety and Electromagnetic Compatibility (EMC) | Safety and EMC testing demonstrated that the device complies with IEC 60601-1 and IEC 60601-1-2. |
| Software Verification and Validation | Software verification testing, including software integration and workflow testing, was completed. Software was developed in accordance with IEC 62304. The submission contains documentation per the requirements of FDA's Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices. |
| Usability (safe and effective use by representative users) | Usability engineering validation testing demonstrated that representative users were able to safely and effectively use CORI and TENSIONER in a simulated use environment. Human factors and usability engineering processes were followed per IEC 62366-1:2015+A1:2020. Additionally, "usability testing demonstrated that users are able to successfully perform gap balancing using TENSIONER and the CORI system; therefore, the difference of the technological characteristics does not introduce new questions of safety or effectiveness." |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly mention a "test set" in the context of a clinical study with real patient data for the TENSIONER update. The testing described is primarily bench testing, software verification, and usability engineering validation.
- Bench Testing: No specific sample sizes for physical components are detailed, but "comprehensive performance testing" and "physical performance testing for all system components" are mentioned.
- Usability Engineering Validation: It involved "representative users" in a "simulated use environment." The exact number of users is not provided, nor is the data provenance (e.g., country) specified. Because the testing took place in a simulated environment, no patient data was involved.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. The testing described does not seem to involve the establishment of "ground truth" by experts in the context of a diagnostic or predictive device study, but rather verification against design requirements and usability assessments.
4. Adjudication Method for the Test Set
This information is not provided as the document does not describe a study involving expert review for establishing ground truth in a clinical data set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The submission focuses on demonstrating substantial equivalence through non-clinical testing and usability, particularly regarding the addition of the TENSIONER accessory and associated software update. There is no information on how human readers (or surgeons, in this context) improve with or without AI assistance, as the device itself is a surgical navigation and robotic-assisted system, not an AI diagnostic tool that assists human readers in interpreting images.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
This concept is not directly applicable in the terms of a diagnostic AI algorithm. The CORI system is a robotic-assisted surgical navigation and burring system. Its "performance" is inherently tied to its interaction with a human surgeon during a procedure (e.g., controlling bur engagement, providing navigation). The document describes rigorous software verification and validation, which would assess the algorithm's functionality in a standalone manner prior to human interaction, but not as an "algorithm only without human-in-the-loop performance" in the typical sense of a diagnostic AI system's clinical performance.
7. The Type of Ground Truth Used
For the software and system performance, the "ground truth" would be the design specifications and requirements that the device and its algorithms are designed to meet. For usability, the ground truth is the ability of users to safely and effectively operate the device as intended, which is assessed through usability engineering validation. There is no mention of pathology, outcomes data, or expert consensus on clinical cases.
8. The Sample Size for the Training Set
This submission is for an update to an existing system (CORI K220255) to integrate the TENSIONER accessory and software upgrade. The document does not describe the development or training of a new AI algorithm for which a "training set" would typically be referenced. Therefore, no sample size for a training set is provided. The "training" for this submission would involve the development of the software to manage the TENSIONER, which is verified against design requirements.
9. How the Ground Truth for the Training Set was Established
As no training set for a new AI algorithm is described, this information is not applicable/not provided. The software associated with the TENSIONER is verified against engineering and design specifications.
(117 days)
Blue Belt Technologies, Inc.
CORI is indicated for use in surgical procedures, in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined. These procedures include:
- unicondylar knee replacement (UKR),
- total knee arthroplasty (TKA),
- revision knee arthroplasty, and
- total hip arthroplasty (THA).
The subject of this Traditional 510(k) is REAL INTELLIGENCE CORI (CORI), a robotic-assisted orthopedic surgical navigation and burring system. CORI uses established technologies of navigation via a passive infrared tracking camera. Based on intraoperatively-defined bone landmarks and known geometry of the surgical implant, the system aids the surgeon in establishing a bone surface model for the target surgery and planning the surgical implant location. For knee applications, CORI then aids the surgeon in executing the surgical plan by controlling the cutting engagement of the surgical bur.
CORI knee application software controls the cutting engagement of the surgical bur based on its proximity to the planned target surface. The cutting control is achieved with two modes:
- Exposure control adjusts the bur's exposure with respect to a guard. If the surgeon encroaches on a portion of bone that is not to be cut, the robotic system retracts the bur inside the guard, disabling cutting.
- Speed control regulates the signal going to the tool control unit itself and limits the speed of the drill if the target surface is approached.
Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill.
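The speed-control mode described above can be sketched as a simple proximity-based limiter. This is an illustrative model only, not Smith+Nephew's actual control law; the zone thresholds and maximum speed are assumed values.

```python
def bur_speed_limit(distance_to_target_mm: float,
                    max_rpm: float = 60000.0,
                    slowdown_zone_mm: float = 2.0,
                    stop_zone_mm: float = 0.5) -> float:
    """Illustrative proximity-based speed limiter (assumed thresholds).

    Full speed outside the slowdown zone, a linear ramp-down inside it,
    and zero speed once the bur reaches the stop zone around the planned
    target surface.
    """
    if distance_to_target_mm >= slowdown_zone_mm:
        return max_rpm
    if distance_to_target_mm <= stop_zone_mm:
        return 0.0
    # Linear ramp between the stop zone and the slowdown zone
    fraction = (distance_to_target_mm - stop_zone_mm) / (slowdown_zone_mm - stop_zone_mm)
    return max_rpm * fraction
```

Exposure control would follow the same structure, with the output driving bur retraction behind the guard rather than spindle speed.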
This document describes the Real Intelligence CORI surgical navigation and burring system. The current submission (K220958) is an update to a previously cleared device (K220255), specifically updating the Indications for Use to include revision knee arthroplasty procedures.
Here's an analysis of the acceptance criteria and study information provided:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly state specific numerical acceptance criteria for performance metrics like accuracy or precision. Instead, it relies on demonstrating that the device's performance is not negatively impacted by the expanded indications and is "as safe and effective" as the predicate device.
The reported device performance primarily focuses on the successful completion of a simulated revision knee arthroplasty procedure and confirmation that existing accuracy is maintained.
Acceptance Criteria (Implied/General) | Reported Device Performance
---|---
Safe and effective use for revision knee arthroplasty procedures. | Summative usability testing demonstrated that participating surgeons were able to use the subject device safely and effectively to complete a revision knee arthroplasty procedure in a simulated use environment. The usability testing also validated the instructions for use. |
Maintenance of implant and cut guide position accuracy. | Analysis confirmed implant position and cut guide position accuracy is not impacted by the addition of revision knee arthroplasty to the CORI indications for use statement since no modifications have been made to the CORI system, reusable or disposable components, software, implant/cut guide database, functional or performance requirements, or bone preparation methods. |
Compliance with design input requirements. | Verification testing demonstrated that the system meets required design inputs. |
Substantial equivalence to predicate device (K220255). | The submission concludes that CORI is as safe and effective as the predicate CORI system (K220255) and is substantially equivalent. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Usability Testing: Not explicitly stated as a number. The document mentions "participating surgeons" and "representative users."
- Sample Size for Accuracy Analysis: Not explicitly stated. The analysis focused on confirming non-impact rather than new testing on a specific sample size.
- Data Provenance: The usability testing was performed in a "simulated use environment," implying a lab or controlled setting. The document does not specify a country of origin, but Blue Belt Technologies, Inc. is based in Plymouth, Minnesota, USA. The testing appears to be prospective as it was conducted for this submission.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Usability Testing: The document mentions "participating surgeons" and "representative users." The specific number is not provided, nor are their detailed qualifications (e.g., years of experience, specific orthopedic specialties). However, it implies they were qualified surgeons capable of performing revision knee arthroplasty procedures.
- Accuracy Analysis: For the accuracy analysis, no experts are explicitly mentioned as establishing ground truth in the context of a test set, as the analysis primarily confirmed that no changes were made that would impact the existing accuracy parameters, which would have been established during the predicate device's clearance.
4. Adjudication Method for the Test Set
Not applicable/not specified. The studies described are usability testing and an analysis of impact on accuracy. There is no mention of an adjudication process typically associated with diagnostic performance studies involving multiple readers.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. This device is a surgical navigation and burring system, not an AI-assisted diagnostic imaging device for "human readers." The evaluation focused on usability and maintenance of surgical accuracy.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
The device itself is inherently "human-in-the-loop" as it assists a surgeon. The system controls bur engagement based on proximity to a planned target surface, but the surgeon operates the bur and plans the surgical implant location. No "standalone" algorithm-only performance is described, as it's an intraoperative assistance system.
7. The Type of Ground Truth Used
- Usability Testing: The "ground truth" for usability was the ability of surgeons to "safely and effectively" complete a revision knee arthroplasty procedure in the simulated environment, and validation of the instructions for use. This can be considered a form of expert observation and task completion verification against defined procedural steps and safety metrics.
- Accuracy Analysis: For accuracy, the ground truth would be the established accuracy parameters of the predicate device. The analysis confirmed that the updated indications did not alter the physical system or software in a way that would modify these established accuracy limits.
8. The Sample Size for the Training Set
Not applicable. This document describes a surgical navigation system, not a machine learning model that requires a training set in the conventional sense. The "training" of the system would imply its design, development, and validation based on engineering principles and previous surgical data, but not a distinct "training set" of patient data for an AI algorithm. The device uses "intraoperatively-defined bone landmarks and known geometry of the surgical implant" but this is real-time operation, not a pre-trained model on a dataset.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no traditional "training set" for a machine learning algorithm described in this submission. The system's functionality is based on established engineering principles, navigation technologies, and predefined anatomical and implant geometries.
(57 days)
Blue Belt Technologies, Inc.
CORI is indicated for use in surgical procedures, in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined. These procedures include unicondylar knee replacement (UKR), total knee arthroplasty (TKA), and total hip arthroplasty (THA).
The subject of this Traditional 510(k) is REAL INTELLIGENCE CORI (CORI), a robotic-assisted orthopedic surgical navigation and burring system. CORI uses established technologies of navigation via a passive infrared tracking camera. Based on intraoperatively-defined bone landmarks and known geometry of the surgical implant, the system aids the surgeon in establishing a bone surface model for the target surgery and planning the surgical implant location. For knee applications, CORI then aids the surgeon in executing the surgical plan by controlling the cutting engagement of the surgical bur.
CORI knee application software controls the cutting engagement of the surgical bur based on its proximity to the planned target surface. The cutting control is achieved with two modes:
- Exposure control adjusts the bur's exposure with respect to a guard. If the surgeon encroaches on a portion of bone that is not to be cut, the robotic system retracts the bur inside the guard, disabling cutting.
- Speed control regulates the signal going to the tool control unit itself and limits the speed of the drill if the target surface is approached.
Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill.
I am sorry, but the provided text does not contain the specific information required to answer your request.
Here's why:
- Acceptance Criteria and Reported Device Performance: The document states that "Verification and validation testing demonstrated the safety and efficacy of CORI when the system is used to place porous total knee implants intended for use without bone cement." and "Comprehensive verification testing demonstrated that the system meets required design inputs." However, it does not list the specific acceptance criteria (e.g., target accuracy, precision thresholds) or report the numerical performance results against these criteria.
- Sample Size and Data Provenance for Test Set: The document mentions "Verification and validation testing" and "Summative usability testing" but does not provide details on the sample size used for these tests (e.g., number of procedures, number of patients, number of simulated cases) or the provenance of the data (e.g., country of origin, retrospective/prospective).
- Experts for Ground Truth: There is no mention of experts used to establish a ground truth for a test set, nor their qualifications or number.
- Adjudication Method: No information regarding an adjudication method is provided.
- MRMC Comparative Effectiveness Study: The document does not describe any multi-reader multi-case (MRMC) comparative effectiveness study, nor does it mention any effect size for human reader improvement with AI assistance.
- Standalone Performance: While the document describes the device's functionality, it does not explicitly state or provide data for a standalone (algorithm only without human-in-the-loop) performance evaluation. The description focuses on its function as an assistant to the surgeon.
- Type of Ground Truth: The document does not specify the type of ground truth used (e.g., expert consensus, pathology, outcomes data) for any of its testing.
- Sample Size for Training Set & Ground Truth for Training Set: The document is a 510(k) summary for a premarket notification, which focuses on device modifications and substantial equivalence. It does not provide details about the training set used for the underlying AI/software components, including its size or how its ground truth was established.
The document primarily focuses on establishing substantial equivalence to a predicate device (K212537) by highlighting that the subject device (CORI) has the same intended use and technological characteristics, with the main difference being an updated indication for use to support porous total knee implants. It confirms that non-clinical testing and usability testing were performed but does not delve into the detailed results or methodologies that would be needed to answer your specific questions about acceptance criteria and study particulars.
(254 days)
Blue Belt Technologies, Inc.
RI.HIP MODELER is intended for preoperative planning for primary total hip arthroplasty. RI.HIP MODELER is intended to be used as a tool to assist the surgeon in the selection and positioning of components in primary total hip arthroplasty.
RI.HIP MODELER is indicated for individuals undergoing primary hip surgery.
RI.HIP MODELER is a non-invasive standalone Total Hip Arthroplasty (THA) planning software application intended to provide preoperative planning for hip implant acetabular cup rotational selection and placement. RI.HIP MODELER allows surgeons to visualize and perform analysis of digital images for assessment of spinopelvic mobility of the patient. The app is used to characterize patient conditions, manage patient performance expectations, and help surgeons determine acetabular cup placement based on spinopelvic mobility.
The software provides a baseline cup orientation recommendation intended to reduce incidences of implant impingement based on patient condition and implant specifications. The surgeon is reminded to verify and adjust to the parameters based on their clinical judgment.
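A cup orientation recommendation like the one described above is often discussed in terms of the classic Lewinnek safe zone (inclination 40° ± 10°, anteversion 15° ± 10°). The sketch below checks an orientation against that fixed zone for illustration only; RI.HIP MODELER's actual recommendation is patient-specific and driven by spinopelvic mobility, not a static zone.

```python
def within_safe_zone(inclination_deg: float, anteversion_deg: float,
                     incl_target: float = 40.0, incl_tol: float = 10.0,
                     ante_target: float = 15.0, ante_tol: float = 10.0) -> bool:
    """Check a cup orientation against the classic Lewinnek safe zone
    (inclination 40 deg +/- 10, anteversion 15 deg +/- 10).

    Illustrative only: a patient-specific planner adjusts these targets
    based on spinopelvic mobility rather than using fixed bounds.
    """
    return (abs(inclination_deg - incl_target) <= incl_tol
            and abs(anteversion_deg - ante_target) <= ante_tol)
```

A mobility-aware planner would effectively shift `incl_target` and `ante_target` per patient, which is the kind of adjustment the surgeon is reminded to verify clinically.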
The provided text does not contain a detailed study with specific acceptance criteria and reported device performance values in a direct table format. However, it does discuss performance testing and verification/validation activities.
Here's an attempt to extract and synthesize the information based on the provided text, addressing your points where possible:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in a table. It refers to general design requirements, safety and effectiveness, and meeting intended use. The closest to a quantifiable performance metric is:
Acceptance Criteria (Implied) | Reported Device Performance
---|---
Accuracy of landmarking measurements in worst-case conditions | Within 2 degrees (for image capture verification testing) |
Ability to provide a baseline cup orientation that increases distance to implant impingement | Verification and validation indicate no safety or performance concerns. Tested by computational models compliant with ASME V&V 40: 2018. |
Workflow execution and safe/effective use by representative users | Performance testing and Human Factors validation confirmed successful, safe, and effective use. |
Generation of clinically relevant image with minimal distortion | Verification confirmed. |
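An angular criterion like the 2-degree landmarking bound in the table above is typically verified by comparing measured landmark directions against known reference directions. A minimal sketch of that comparison (assumed data and tolerance handling, not the submission's actual test protocol):

```python
import math

def angular_error_deg(measured, reference):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(m * r for m, r in zip(measured, reference))
    norm_m = math.sqrt(sum(m * m for m in measured))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp to guard against floating-point values just outside [-1, 1]
    cos_theta = max(-1.0, min(1.0, dot / (norm_m * norm_r)))
    return math.degrees(math.acos(cos_theta))

def passes_landmark_criterion(measured, reference, tol_deg=2.0):
    """True if the measured landmark direction is within tolerance."""
    return angular_error_deg(measured, reference) <= tol_deg
```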
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not specify a numerical sample size for the test set used in "Image Capture Verification Testing" or "Performance Testing." It mentions "representative users" for Usability Engineering Validation Testing.
- Data Provenance: Not explicitly stated. The text notes "No human clinical testing was required," implying that the testing was primarily bench and simulated. The mention of "typical patient activities of daily living (accessed through a simulation database)" suggests some simulated patient data was used, but the origin of this database is not provided.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided. The study relies on simulated data and computational models rather than expert-established ground truth on patient scans for the primary performance claims. For "Usability Engineering Validation Testing," it states "representative users" were involved, implying surgeons, but their number and specific qualifications are not detailed.
4. Adjudication Method for the Test Set
Not applicable. The reported testing focuses on technical performance and usability, not on diagnostic agreement among experts.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The document explicitly states, "No human clinical testing was required to determine the safety and effectiveness of RI.HIP MODELER." The device's primary function is described as assisting the surgeon with a baseline cup orientation recommendation, rather than directly improving human reader performance in diagnosis or measurement.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, testing was done on the standalone software's performance. The document describes "Software Verification Testing" and "Image Capture Verification Testing" that assessed the algorithm's performance and accuracy (e.g., landmarking accuracy within 2 degrees). The computational models used in the design are also verified and validated, supporting the algorithm's ability to provide a baseline cup orientation.
7. The Type of Ground Truth Used
The ground truth for the core functionality appears to be derived from:
- Computational Models: The algorithm for baseline cup orientation relies on "computational models used in the design of RI.HIP MODELER comply with ASME V&V 40: 2018, Assessing Credibility of Computational Modeling through Verification and Validation." These models simulate implant kinematics and distance to impingement.
- Design Requirements/Input Specifications: Performance testing demonstrated that the device "meets the required design inputs."
- Simulated Database: "Implant kinematics and distance to impingement are calculated for each activity, given a set of implant specifications," using information "accessed through a simulation database."
For the image capture verification, the ground truth for "accuracy of landmarking measurements" would likely be based on known reference points or measurements on the test images, though this isn't explicitly detailed.
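The per-activity kinematic check described above can be sketched as comparing each simulated activity's required motion arc against the implant construct's available impingement-free arc. This is a deliberately simplified toy model (assumed arc values and margin), not the ASME V&V 40-validated computational model the submission refers to.

```python
def impingement_free(adl_arcs_deg, oscillation_angle_deg, margin_deg=5.0):
    """Toy impingement check over activities of daily living (ADLs).

    For each activity, the angular clearance is the implant's available
    impingement-free oscillation angle minus the arc the activity
    requires. All values and the safety margin are assumed for
    illustration.
    """
    clearances = {name: oscillation_angle_deg - arc
                  for name, arc in adl_arcs_deg.items()}
    ok = all(c >= margin_deg for c in clearances.values())
    return clearances, ok

# Hypothetical ADL arcs (degrees) drawn from a simulation database
adls = {"sitting": 90.0, "shoe tying": 110.0, "stand from low chair": 105.0}
clearances, ok = impingement_free(adls, oscillation_angle_deg=124.0)
```

A real model computes these arcs from cup orientation, implant geometry, and patient-specific spinopelvic motion rather than from fixed inputs.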
8. The Sample Size for the Training Set
This information is not provided. The document focuses on verification and validation of the device, not the details of its training phase. The device "employs an algorithm" and uses a "simulation database," but there is no mention of a traditional machine learning "training set" with a specified size.
9. How the Ground Truth for the Training Set Was Established
This information is not provided because details about a "training set" in the context of machine learning are not explicitly discussed. The algorithm's behavior is based on "computational models" and a "simulation database," whose internal mechanisms and ground truth establishment are not described in this summary.
(83 days)
Blue Belt Technologies, Inc.
CORI is indicated for use in surgical procedures, in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined.
These procedures include unicondylar knee replacement (UKR), total knee arthroplasty (TKA) and total hip arthroplasty (THA).
For Knee applications, CORI is indicated for use with cemented implants only.
CORI is a computer-assisted orthopedic surgical navigation and surgical burring system. CORI uses established technologies of navigation, via a passive infrared tracking camera.
For robotic knee applications, CORI software can control the cutting engagement of the surgical bur based on its proximity to the planned target surface. Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill.
For the hip navigation application, CORI incorporates the Brainlab HIP7 software (K193307), which will be marketed by Smith & Nephew as RI.HIP NAVIGATION. It uses instruments and reference arrays, which are tracked by the IR camera, to determine pelvis and femur anatomical landmarks as well as implant orientation. RI.HIP NAVIGATION assists with the orientation of prosthetic hip implants and enables measurement of leg length and offset when used intraoperatively in combination with CORI.
This document does not contain the requested information about acceptance criteria and a study proving device performance as it is a 510(k) summary for a medical device (REAL INTELLIGENCE CORI) seeking clearance for a new indication (Total Hip Arthroplasty - THA).
Here's why the requested information is not present in this document:
- 510(k) Summaries Focus on Substantial Equivalence: A 510(k) summary aims to demonstrate that a new device is "substantially equivalent" to a legally marketed predicate device, meaning it has the same intended use and similar technological characteristics, or different characteristics that do not raise new questions of safety or effectiveness.
- Performance Data is Summarized, Not Detailed: While the document mentions "Design verification and validation tests were performed on CORI to demonstrate that the changes presented in this submission meet all design input requirements and that CORI is as safe and effective as its predicate device," it does not provide the detailed acceptance criteria, specific study designs, sample sizes, expert qualifications, or ground truth methodologies that you've asked for. These details would typically be found in the full 510(k) submission, not in the publicly available summary.
- "No human clinical testing was required": The document explicitly states this, which means there wouldn't be studies involving human readers, comparative effectiveness, or certain types of ground truth (like pathology from live patients) that might be expected from a device requiring clinical trials. The focus here is on bench testing and software validation.
- Focus on Bench Testing and Software Validation: The section on "Non-Clinical Testing (Bench)" mentions software integration testing (adhering to IEC 62304 and FDA guidance) and usability engineering validation testing (for safe and effective use in a simulated environment). This indicates the type of testing performed, but not the specific quantitative performance metrics you've requested.
Therefore, I cannot extract the table of acceptance criteria, reported device performance, sample sizes, data provenance, expert details, adjudication methods, MRMC study results, standalone performance, type of ground truth, or training set information from the provided text. This document is a high-level summary for regulatory clearance, not a detailed technical report of performance studies.
(53 days)
Blue Belt Technologies, Inc.
CORI is indicated for use in surgical knee procedures, in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined. These procedures include unicondylar knee replacement (UKR) and total knee arthroplasty (TKA).
CORI is indicated for use with cemented implants only.
CORI is a computer-assisted orthopedic surgical navigation and surgical burring system. Using established navigation technologies via a passive infrared tracking camera, together with intraoperatively-defined bone landmarks and the known geometry of the surgical implant, CORI aids the surgeon in establishing a bone surface model for the target surgery and in planning the surgical implant location.
CORI software controls the cutting engagement of the surgical bur based on its proximity to the planned target surface. The cutting control is achieved with two modes:
- Exposure control adjusts the bur's exposure with respect to a guard. If the surgeon encroaches on a portion of bone that is not to be cut, the robotic system retracts the bur inside the guard, disabling cutting.
- Speed control regulates the signal going to the drill motor controller itself and limits the speed of the drill if the target surface is approached.
Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill.
The acceptance criteria and study proving device performance are described below, based on the provided text for the Real Intelligence Cori (CORI) device.
1. A table of acceptance criteria and the reported device performance
The document mentions "Comprehensive performance testing demonstrated that the system meets required design inputs" and "The comparative results of the cut-to-plan accuracy data is acceptable and equivalent to the predicate devices." However, specific quantitative acceptance criteria and detailed reported performance metrics are not explicitly stated in the provided text. The submission focuses on substantial equivalence to predicate devices (K193120 and K191223) rather than presenting new, specific acceptance criteria with distinct performance numbers for this iteration of CORI.
It can be inferred that the acceptance criteria for accuracy are implicitly tied to demonstrating equivalence to the predicate devices, which are also navigational systems for orthopedic surgery.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document primarily describes bench testing and states, "No human clinical testing was required to determine the safety and effectiveness of CORI." Therefore, the concept of a "test set" in the context of clinical data provenance (country, retrospective/prospective) does not directly apply here.
The performance data consisted of "physical performance test for all system components and system accuracy testing." The sample size for these non-clinical tests is not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Since "No human clinical testing was required" and the testing primarily involved "physical performance test for all system components and system accuracy testing," the concept of "experts" establishing ground truth for a clinical test set is not applicable in the traditional sense. The "ground truth" for accuracy testing would have been established through validated measurement techniques (e.g., precise optical metrology, CMM measurements) under controlled laboratory conditions, not by human expert assessment of medical images or outcomes.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Given that clinical data from human subjects was not used for this submission, an adjudication method for a clinical test set is not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance
No MRMC comparative effectiveness study was conducted, as stated: "No human clinical testing was required." The CORI system is an orthopedic stereotaxic instrument for surgical guidance, not an AI-assisted diagnostic imaging tool that would typically involve human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The document describes the CORI device as a "computer-assisted orthopedic surgical navigation and surgical burring system" that "assists the surgeon" and provides "software-defined spatial boundaries." It also states, "Alternatively, the surgeon can disable both controls and operate the robotic drill as a standard navigated surgical drill." This indicates that the device is designed to be human-in-the-loop, providing assistance to a surgeon.
However, "Performance data consisted of physical performance test for all system components and system accuracy testing," which implies that the accuracy of the algorithm's guidance and burring control was evaluated in a standalone bench setting, separate from a surgeon's subjective usage. The "cut-to-plan accuracy data" would represent the standalone performance of the algorithm and hardware combined.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the non-clinical "system accuracy testing" and "cut-to-plan accuracy data," the ground truth would have been established through precise, objective physical measurements (e.g., using metrology equipment) of the planned bone resection versus the actual bone resection created by the device on a test medium. This is not expert consensus, pathology, or outcomes data.
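The submission does not describe how "cut-to-plan accuracy" is computed. Purely as an illustration, bench accuracy of this kind is often summarized as per-point deviations between the planned resection surface and the measured (actual) resection, reported as maximum and RMS error. A minimal hypothetical sketch (all function names and values are invented, not from the submission):

```python
import math

def cut_to_plan_errors(planned, measured):
    """Per-point deviations (mm) between planned and measured resection points.

    Hypothetical illustration: real metrology would register both point
    clouds in a common coordinate frame before comparing them.
    """
    return [math.dist(p, m) for p, m in zip(planned, measured)]

def rms(errors):
    """Root-mean-square of a list of deviations."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Invented example data (mm): three planned points vs. measured counterparts.
planned = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0)]
measured = [(0.1, 0.0, 0.05), (10.0, 0.2, 0.0), (9.9, 10.1, 0.1)]

errors = cut_to_plan_errors(planned, measured)
print(f"max error: {max(errors):.3f} mm, RMS: {rms(errors):.3f} mm")
```

The "ground truth" side of such a comparison is the measured point set itself, acquired with calibrated metrology equipment rather than human judgment, which is consistent with the submission's non-clinical approach.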
8. The sample size for the training set
The document refers to the device as a "computer-assisted orthopedic surgical navigation and surgical burring system" that uses "established technologies." While it mentions "software verification testing" and development in accordance with IEC 62304, there is no specific mention of a "training set" in the context of machine learning. If machine learning models were used, details about their training setup are not provided in this regulatory submission summary. The equivalence is primarily based on the functional and technological similarities to predicate devices and performance in physical accuracy tests.
9. How the ground truth for the training set was established
As no "training set" in the context of machine learning is explicitly mentioned, how its ground truth was established is not provided. If the system is primarily rule-based or model-based on known geometry and intraoperative data rather than a data-driven machine learning model, then a traditional "training set" with established ground truth would not be applicable. The "ground truth" for the system's underlying models would be based on engineering specifications and anatomical models.