Search Results
Found 3 results
510(k) Data Aggregation
(29 days)
The 360CAS Knee is intended to be used as a planning and intraoperative guidance system in open or percutaneous image guided surgical procedures. The 360CAS Knee is indicated for patients undergoing orthopaedic surgery and where reference to a rigid anatomical structure, such as the pelvis, femur, or tibia, can be identified. The 360CAS Knee is indicated for the following surgical procedures:
- Total Knee Arthroplasty (TKA)
- For conditions of the knee joint in which the use of computer assisted surgery may be appropriate
The 360 Computer Assisted Surgery (360CAS) is a stereotaxic surgical navigation system designed for orthopedic surgical procedures. The 360CAS device consists of four main components: 360CAS Navigation Software, Surgical Instruments, Spatial Tracking Components, and the Navigation Cart. It is intended to be used as a planning and intraoperative guidance system in open or percutaneous orthopedic surgical procedures. This device utilizes optical tracking technology, enabling surgeons to map patient morphology, navigate surgical instruments, and assess joint conditions throughout the surgery. Optical trackers are attached to navigation instruments, and the Spatial Tracking Components locate the 3D position of these instruments in space. The coordinates are relayed to the 360CAS Navigation Software, which provides the user with relevant orientation and position information. The 360CAS Knee is a specific application of the 360CAS Navigation Software tailored for knee replacement surgery.
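The summary does not describe the tracking mathematics, but optical navigation systems of this kind commonly recover an instrument's pose by fitting a rigid transform between the marker geometry known from calibration and the marker positions reported by the camera. The sketch below illustrates that generic approach (the Kabsch/SVD method); the function, marker layout, and values are illustrative assumptions, not details of the 360CAS software.

```python
import numpy as np

def rigid_transform(model_pts, observed_pts):
    """Fit R, t so that observed ~= R @ model + t (generic Kabsch/SVD method).

    model_pts    : (N, 3) marker coordinates in the instrument's own frame
    observed_pts : (N, 3) the same markers as located by the tracking camera
    """
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                     # proper rotation, det = +1
    t = oc - R @ mc
    return R, t

# Hypothetical 4-marker array (mm) and a simulated camera observation of it.
model = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30]], dtype=float)
theta = np.deg2rad(30.0)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
observed = model @ true_R.T + np.array([100.0, -20.0, 350.0])
R, t = rigid_transform(model, observed)
print(np.allclose(R @ model.T + t[:, None], observed.T))   # True: pose recovered
```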
The provided text describes a 510(k) premarket notification for the "360CAS Knee" device. However, it does not include detailed acceptance criteria or a comprehensive study report proving the device meets these criteria. The document states that "Design verification and validation testing demonstrated that the 360CAS System meets all design requirements and is as safe and effective as its predicate device (K223927)." It lists the types of performance tests performed but does not provide the specific acceptance criteria or the reported device performance values for these tests.
Based on the provided text, I can infer some general information about the performance tests, but I cannot fill in the table with specific quantitative acceptance criteria or reported performance values as that information is not present.
Here's a breakdown of what can and cannot be answered:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Required Value) | Reported Device Performance (Achieved Value) |
|---|---|
| Software Verification and Validation Testing (IEC 62304) | "Meets all design requirements" (Specifics not provided) |
| Positional Accuracy Testing (ASTM F2554) | "Achieve the same system accuracy" as predicate (Specific values not provided) |
| Systems accuracy testing | "Achieve the same system accuracy" as predicate (Specific values not provided) |
| Non-clinical surgical simulations on bone models | "Meets all design requirements" (Specifics not provided) |
Explanation: The document states that these tests were performed and that the system "meets all design requirements" and "achieve[s] the same system accuracy" as the predicate device (K223927). However, the specific numerical acceptance criteria (e.g., accuracy within X mm or X°) and the actual measured performance values are not disclosed in this summary.
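For context, positional accuracy testing of the kind referenced by ASTM F2554 generally compares positions reported by the tracking system against reference positions of known geometry (e.g., measured with a coordinate measuring machine). The sketch below shows how such a translational error metric could be computed; the 2 mm threshold and all data values are placeholders, not figures from this submission.

```python
import numpy as np

def translational_error(measured_mm, reference_mm):
    """Per-point Euclidean error (mm) between navigated and reference positions."""
    return np.linalg.norm(measured_mm - reference_mm, axis=1)

# Placeholder data: reference landmark positions (e.g., from a CMM) and the
# positions reported by the navigation system for the same landmarks.
reference = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 80.0, 0.0]])
measured = reference + np.array([[0.3, -0.2, 0.1], [-0.4, 0.5, 0.0], [0.2, 0.1, -0.3]])

err = translational_error(measured, reference)
print(f"mean = {err.mean():.2f} mm, max = {err.max():.2f} mm")
# A criterion such as "mean translational error < 2 mm" would then be checked as:
print("PASS" if err.mean() < 2.0 else "FAIL")
```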
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified in the provided text. The tests mentioned are "Software Verification and Validation Testing," "Positional Accuracy Testing (ASTM F2554)," "Systems accuracy testing," and "Non-clinical surgical simulations conducted on bone models." These typically involve controlled test environments and/or cadaveric/bone models rather than patient data.
- Data Provenance: Not specified. Given the nature of the tests (software V&V, positional accuracy, non-clinical simulations), it primarily involves engineering and lab data, not patient data in the typical sense of retrospective/prospective clinical studies.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. The tests performed are engineering and lab-based (software, positional accuracy, simulations on bone models), not clinical studies involving expert interpretation of patient data to establish ground truth for diagnostics or treatment efficacy.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. This is typically relevant for studies involving human interpretation (e.g., image reading) where consensus is needed. The described tests are objective engineering measurements and simulations.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. The document describes a computer-assisted surgery system, not an AI-assisted diagnostic imaging system that would typically involve human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, in the sense that the "Performance Data" section discusses "Positional Accuracy Testing," "Systems accuracy testing," and "Software Verification and Validation Testing." These would primarily evaluate the algorithm/system's performance independently of a specific human operator for its core functions (tracking, calculations, etc.). While it's a surgical navigation system and human surgeons are the ultimate users, these tests assess the technical accuracy and functionality of the device itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The ground truth would be based on:
- For Software V&V: Defined software requirements and specifications.
- For Positional Accuracy/Systems Accuracy: Precisely measured physical references and known values.
- For Non-clinical surgical simulations: Predefined surgical plans, anatomical landmarks on the bone models, and accurate measurement tools (e.g., CMMs, highly accurate optical trackers).
8. The sample size for the training set
Not applicable. This device is described as a "stereotaxic surgical navigation system" that uses "optical tracking technology." The description does not indicate that it is an AI/machine learning model that requires a "training set" in the conventional sense of supervised learning on a large dataset of patient images or outcomes. It appears to be a rule-based or model-based system, for which the term "training set" is not relevant.
9. How the ground truth for the training set was established
Not applicable, as there's no indication of a training set as would be found with an AI/ML model.
(28 days)
The 360CAS is indicated for patients undergoing orthopaedic surgery and where reference to a rigid anatomical structure can be identified.
The 360CAS Knee is indicated for the following surgical procedures:
- Any form of Total Knee Arthroplasty (TKA)
- For conditions of the knee joint in which the use of computer assisted surgery may be appropriate
The 360CAS Hip is indicated for the following surgical procedures:
- Any form of Total Hip Arthroplasty (THA), e.g., open or minimally invasive, where a posterior or anterior approach is used
- For conditions of the hip joint in which the use of computer assisted surgery may be appropriate
The 360 Computer Assisted Surgery (360CAS) is a stereotaxic surgical navigation system for orthopaedic surgical procedures. The 360CAS is intended to be used as a planning and intraoperative guidance system with any manufacturer's implant in open or percutaneous orthopaedic surgical procedures. The 360CAS uses optical tracking technology that allows surgeons to map a subject's morphology, navigate surgical instruments and implants, and assess the state of the joint throughout the surgery. The system consists of the 360CAS navigation software (comprising two modules, 360CAS Knee and 360CAS Hip), surgical instruments, spatial tracking components, and a navigation cart. 360CAS Knee is the navigation software module for knee replacement surgery; 360CAS Hip is the module for hip replacement surgery. The navigation software interfaces with the optical trackers which are attached to navigation instruments (e.g., pointer, bone fixator(s)).
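The summary mentions a tracked pointer among the navigation instruments but gives no algorithmic detail. A standard technique in optical navigation for locating a pointer's tip is pivot calibration: the tip is held on a fixed point while the marker frame is pivoted, and the tip offset is solved by least squares. The sketch below shows that generic formulation on simulated data; it is an illustrative assumption, not taken from the 360CAS software.

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Least-squares pivot calibration.

    Given poses (R_i, t_i) of a pointer's marker frame recorded while the tip
    pivots about a fixed point, solve R_i @ p_tip + t_i = p_pivot for the tip
    offset p_tip (marker frame) and the pivot point p_pivot (camera frame).
    """
    n = len(rotations)
    A, b = np.zeros((3 * n, 6)), np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, :3] = R
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)
        b[3 * i:3 * i + 3] = -t
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]   # p_tip, p_pivot

# Simulated pivoting: a known tip offset and pivot point observed under 20 poses.
rng = np.random.default_rng(1)
true_tip, true_pivot = np.array([0.0, 0.0, 150.0]), np.array([50.0, 20.0, 300.0])
Rs, ts = [], []
for _ in range(20):
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = rng.uniform(-0.5, 0.5)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues
    Rs.append(R)
    ts.append(true_pivot - R @ true_tip)
tip, pivot = pivot_calibration(Rs, ts)
print(np.allclose(tip, true_tip), np.allclose(pivot, true_pivot))  # True True
```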
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| System Accuracy (Translational Error) | < ±2 mm |
| System Accuracy (Rotational Error) | < ±1° |
| Electrical Safety | Complied with IEC 60601-1-2:2014 |
| Electromagnetic Compatibility (EMC) | Complied with IEC 60601-1-2:2014 |
| Software Verification & Validation | Successfully completed (software considered "MAJOR" level of concern) |
| Functionality | All functional requirements fulfilled |
| System Integration & Compatibility | All system components (application, computer, accessories) are compatible |
| Risk Control Measures Effectiveness | Safety testing verified effectiveness of all risk controls |
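The ±1° criterion in the table above suggests how rotational accuracy is typically quantified: as the angle of the relative rotation between a measured orientation and its reference. The minimal sketch below illustrates that computation; the threshold check and the example rotation are assumptions for illustration only, not values from the submission.

```python
import numpy as np

def rotational_error_deg(R_measured, R_reference):
    """Angle (degrees) of the relative rotation R_measured @ R_reference.T."""
    R_rel = R_measured @ R_reference.T
    # Clip guards against tiny numerical overshoot outside [-1, 1].
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Placeholder: the reference alignment is the identity; the "measured"
# orientation is off by 0.7 degrees about the z axis.
theta = np.deg2rad(0.7)
R_meas = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
err = rotational_error_deg(R_meas, np.eye(3))
print(f"{err:.2f} deg -> {'PASS' if err < 1.0 else 'FAIL'}")  # 0.70 deg -> PASS
```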
2. Sample size used for the test set and the data provenance
The document indicates that "Sawbones mimicking patient's anatomy" were used for system accuracy testing and clinical workflow verification. However, the specific sample size (i.e., number of Sawbones or tests conducted) is not explicitly provided.
- Data Provenance: The tests were conducted using Sawbones, which are synthetic bone models, rather than actual patient data or cadavers. This suggests a bench-top testing environment, and thus the data is not tied to a specific country of origin in the same way clinical data would be. The testing is prospective in the sense that the device's performance was evaluated against a predefined protocol using these models.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts used to establish ground truth or their qualifications for the bench performance testing.
4. Adjudication method for the test set
The document does not describe any adjudication method for the test set. Given that the testing involved Sawbones and objective measurements against known accuracy standards, a formal adjudication process involving multiple human reviewers might not have been deemed necessary in the same way it would be for subjective clinical assessments.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was done. The device (360CAS Knee and 360CAS Hip) is a computer-assisted surgical navigation system, not an AI-assisted diagnostic tool that would typically involve human readers interpreting images. Therefore, the concept of "human readers improve with AI vs without AI assistance" does not directly apply to this device's reported evaluation. No effect size is mentioned.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the performance testing described appears to be a standalone (algorithm only) evaluation. The "System accuracy testing verifying the specified accuracy" and the "functional testing" assess the device's ability to perform its core functions and meet accuracy specifications independently of a human surgeon's real-time subjective assessment or intervention during the measurement process. The context is measuring the device's inherent accuracy and functionality.
7. The type of ground truth used
The ground truth used for performance testing was based on:
- Standardized test procedures and known measurements: For the ASTM accuracy testing (ASTM F2554-18 and ASTM F2554-22).
- Engineered precision/known values: For the "System accuracy testing verifying the specified accuracy of ±2mm and ±1º using Sawbones." This implies that the 'true' or target measurements on the Sawbones were precisely known or set.
8. The sample size for the training set
The document does not provide information on a "training set" as this device is a navigation system and not a machine learning or AI-based diagnostic algorithm that typically undergoes a distinct training phase with a labeled dataset. While software verification and validation were conducted, these generally involve testing the implemented code against requirements, rather than training a model.
9. How the ground truth for the training set was established
As there is no mention of a training set in the context of machine learning, there is no information on how its ground truth would have been established. The device instead relies on established mechanical and optical principles for navigation.
(90 days)
The 360CAS is intended to be used as a planning and intraoperative guidance system to enable open or percutaneous image guided surgical procedures.
The 360CAS is indicated for patients undergoing orthopaedic surgery and where reference to a rigid anatomical structure, such as the pelvis, femur, or tibia, can be identified.
The 360CAS Knee is indicated for the following surgical procedures:
- Total Knee Arthroplasty (TKA)
- For conditions of the knee joint in which the use of computer assisted surgery may be appropriate
The 360CAS Hip is indicated for the following surgical procedures:
- Total Hip Arthroplasty (THA) e.g., open or minimally invasive, where a posterior or anterior approach is used
- For conditions of the hip joint in which the use of computer assisted surgery may be appropriate
The 360 Computer Assisted Surgery (360CAS) is a stereotaxic surgical navigation system for orthopaedic surgical procedures. The 360CAS is intended to be used as a planning and intraoperative guidance system with any manufacturer's implant in open or percutaneous orthopaedic surgical procedures. The 360CAS uses optical tracking technology that allows surgeons to map a patient's morphology, navigate surgical instruments and implants and assess the state of the joint throughout the surgery. The system consists of four main components: the 360CAS navigation software (comprising two modules, 360CAS Knee and 360CAS Hip), surgical instruments, spatial tracking components and a navigation cart. 360CAS Knee is the navigation software module for knee replacement surgery; 360CAS Hip is the module for hip replacement surgery. The navigation software interfaces with the optical trackers which are attached to navigation instruments (e.g., pointer, bone fixator(s)).
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| System Accuracy (Knee) | The system enables the determination of the mechanical axes of the lower limb as well as cut and component alignment with a mean translational error of < ±2 mm and a mean rotational error of < ±1°. |
| System Accuracy (Hip) | The system enables the determination of the mechanical axes of the lower limb as well as cut and component alignment with a mean translational error of < ±2 mm and a mean rotational error of < ±1°. (Note: the predicate Hip device specified a rotational error of < ±2°, so the subject device's stated criterion is at least as stringent.) |
| Electrical Safety | Conducted in accordance with AS/NZS 3551:2012 and IEC 60601-1-2:2014. |
| Electromagnetic Compatibility | Conducted in accordance with IEC 60601-1-2:2014. |
| Software Verification/Validation | Performed according to FDA guidance (Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (2005) and Off-the-Shelf Software Use in Medical Devices (2019)). Considered a "MAJOR" level of concern. |
| Functional Testing | All functional requirements are fulfilled. |
| Safety Testing | Effectiveness of all risk controls determined in the device risk analysis was verified. |
| Clinical Workflow | Verified that all system components (application, computer platform and accessories) are compatible through complete knee and hip arthroplasty procedures simulated using Sawbones mimicking the patient's anatomy and cadaver laboratories. |
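The accuracy rows above refer to determining the mechanical axes of the lower limb. As a purely illustrative sketch (not the device's documented algorithm), a hip-knee-ankle alignment angle can be derived from digitized hip, knee, and ankle centres expressed in a common coordinate frame; the landmark coordinates below are hypothetical.

```python
import numpy as np

def hka_angle_deg(hip_center, knee_center, ankle_center):
    """Angle (degrees) between the femoral and tibial mechanical axes.

    Femoral mechanical axis: hip centre -> knee centre.
    Tibial mechanical axis:  knee centre -> ankle centre.
    In this convention, 180 degrees corresponds to a neutrally aligned limb.
    """
    femoral = knee_center - hip_center
    tibial = ankle_center - knee_center
    cos_a = np.dot(femoral, tibial) / (np.linalg.norm(femoral) * np.linalg.norm(tibial))
    return 180.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical joint centres (mm) digitized in a shared coordinate frame.
hip = np.array([0.0, 0.0, 900.0])
knee = np.array([10.0, 0.0, 450.0])
ankle = np.array([0.0, 0.0, 0.0])
print(f"HKA = {hka_angle_deg(hip, knee, ankle):.1f} deg")  # ~177.5 deg
```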
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document primarily describes bench testing and cadaveric laboratory testing. Specific sample sizes for each test are not explicitly provided, but the types of materials used are:
- ASTM accuracy testing: Not specified, but uses a standardized test procedure according to ASTM F2554-18.
- System accuracy testing: Sawbones mimicking patient's anatomy.
- Clinical accuracy testing: Not specified, but states "in a cadaveric laboratory."
- Clinical workflow testing: Sawbones mimicking the patient's anatomy and cadaver laboratory.
The data provenance is from laboratory and cadaveric studies, not real-world patient data. The country of origin of the data is not explicitly stated, but the applicant company is located in Australia.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The document describes verification and validation activities but does not detail how ground truth was established for these tests or the experts involved.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document. The document describes accuracy and workflow testing but does not mention any adjudication method for establishing ground truth or evaluating disagreements.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was done. The device is a surgical navigation system, and the performance testing focuses on its accuracy and functionality, not its impact on human reader performance. No AI-assistance claims are made that would necessitate such a study.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, standalone performance testing was done for the device's accuracy. The "System Accuracy" and "Clinical Accuracy" testing (translational and rotational error) are examples of standalone performance evaluations for the navigation system's output. The entire performance data section (bench testing, software V&V) implicitly describes standalone performance, as it assesses the device's inherent characteristics. The design of the device as a "planning and intraoperative guidance system" suggests it assists a human surgeon, but the accuracy metrics are for the system itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The ground truth for the performance testing appears to be based on:
- Standardized test procedures and physical measurements: ASTM F2554-18 for spatial tracking accuracy.
- Engineered or known physical values: Sawbones mimicking patient anatomy and cadaveric laboratories are used to assess the device's ability to measure and guide with specified accuracy (±2mm and ±1°). The "specified accuracy" itself serves as the benchmark for evaluation.
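Because the stated criteria cover cut and component alignment to within ±2 mm and ±1°, one natural way to express such a comparison is to measure an achieved cut plane against the planned plane on the Sawbones model. The sketch below is a generic illustration under that assumption; the plane definitions and numeric values are placeholders, not data from the submission.

```python
import numpy as np

def plane_alignment_error(planned_normal, planned_point, cut_normal, cut_point):
    """Compare an achieved cut plane against a planned plane.

    Returns (angular error in degrees, offset in mm along the planned normal).
    Each plane is described by a normal vector and a point lying on it.
    """
    planned_normal = planned_normal / np.linalg.norm(planned_normal)
    cut_normal = cut_normal / np.linalg.norm(cut_normal)
    cos_a = np.clip(abs(np.dot(planned_normal, cut_normal)), 0.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_a))
    offset_mm = abs(np.dot(cut_point - planned_point, planned_normal))
    return angle_deg, offset_mm

# Placeholder planes: the planned resection and a slightly misaligned cut.
angle, offset = plane_alignment_error(
    planned_normal=np.array([0.0, 0.0, 1.0]),
    planned_point=np.array([0.0, 0.0, 0.0]),
    cut_normal=np.array([0.01, 0.0, 1.0]),
    cut_point=np.array([0.0, 0.0, 0.8]),
)
print(f"angle = {angle:.2f} deg, offset = {offset:.2f} mm")  # ~0.57 deg, 0.80 mm
```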
8. The sample size for the training set
This information is not applicable/not provided. The document describes a "stereotaxic surgical navigation system" and "optical tracking technology." While it includes "360CAS navigation software," it does not explicitly mention machine learning or AI models that would require a distinct "training set" in the context of deep learning. The software verification and validation, along with functional testing, imply that the software's performance was evaluated against its design specifications, not through a machine learning training/validation split.
9. How the ground truth for the training set was established
This information is not applicable/not provided as there is no mention of a machine learning training set. The software's "ground truth" would be its design requirements and specifications, validated through standard software V&V processes and functional testing.