Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K141601
    Manufacturer:
    Date Cleared: 2014-09-11 (87 days)
    Product Code:
    Regulation Number: 882.4560
    Reference & Predicate Devices:
    Predicate For:

    Intended Use

    The iASSIST Knee System is a computer assisted stereotaxic surgical instrument system to assist the surgeon in the positioning of orthopedic implant system components intra-operatively. It involves surgical instruments and position sensors to determine alignment axes in relation to anatomical landmarks and to precisely position alignment instruments and implant components relative to these axes.

    Example orthopedic surgical procedures include but are not limited to: Total Knee Arthroplasty.

    Device Description

    As in the predicate, the iASSIST Knee System consists of tracking sensors ('pods'), a computer system, software, and surgical instruments designed to assist the surgeon in the placement of Total Knee Replacement components. The pods combined with the surgical instruments provide positional information to help orient and locate the main femoral and tibial cutting planes as required in knee replacement surgery. This includes means for the surgeon to determine and thereafter track each of the bones' alignment axes relative to which the cutting planes are set.
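
    The positioning workflow described above reduces to computing each bone's alignment axis from digitized landmarks and then expressing a cutting plane's orientation relative to that axis. The 510(k) summary does not disclose how this is actually implemented, so the following is only a minimal sketch under assumed conventions; the landmark coordinates, reference frame, and the idea of reporting a total angular deviation are illustrative, not taken from the filing.

```python
import numpy as np

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

# Hypothetical digitized landmarks, in millimetres, in the tracker frame.
hip_center  = np.array([12.0, 405.0, 88.0])   # functional hip centre
knee_center = np.array([10.0, 18.0, 80.0])    # centre of the distal femur

# Femoral mechanical axis, pointing from the hip centre to the knee centre.
mech_axis = unit(knee_center - hip_center)

# Normal of a planned distal femoral cut plane, in the same frame.
# For a cut perpendicular to the mechanical axis, this normal is ideally
# parallel (or anti-parallel) to the axis.
cut_normal = unit(np.array([0.03, -0.999, 0.02]))

# Total angular deviation of the cut plane from perpendicularity to the axis.
cos_a = np.clip(abs(np.dot(cut_normal, mech_axis)), -1.0, 1.0)
deviation_deg = np.degrees(np.arccos(cos_a))
print(f"Cut-plane deviation from the mechanical axis: {deviation_deg:.1f} deg")
```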

    AI/ML Overview

    The provided document is a 510(k) Pre-Market Notification for the iASSIST™ Knee System. It details the device, its intended use, and comparisons to a predicate device. However, it does not explicitly provide a table of acceptance criteria with reported device performance statistics in the way that would typically be seen for AI/ML device performance.

    Instead, the document states that "Non-clinical tests were performed to assess that no new safety and efficacy issues were raised in the device." The performance data section describes the types of tests conducted, rather than specific quantitative acceptance criteria and results against those criteria.

    Therefore, I cannot fulfill all parts of your request with the provided information. I will, however, extract all available information about the study and acceptance criteria as described.

    Here's an analysis based on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    As noted above, a detailed table with specific quantitative acceptance criteria and corresponding reported device performance (e.g., accuracy, sensitivity, specificity, or specific error margins with numerical results) is not provided in this document. The document focuses on demonstrating that modifications to an existing device (predicate) did not introduce new safety or efficacy issues and that the device still meets its intended functionality.

    The closest to "acceptance criteria" are implied by the types of tests described, indicating that the system must maintain its required functionality, robust performance, and compatibility.

    Acceptance criteria (implied) and reported device performance (summary from text):

    • Required functionalities maintained or correctly updated without hazardous anomalies: Software system tests were performed to ensure functionalities were maintained/updated correctly.
    • Performance of bone registration related functionalities verified: Performance tests were performed under simulated bench test conditions, together with analyses.
    • Robustness and compatibility of added/modified instruments verified: Bench tests and analyses were performed.
    • Resistance of pods to electrostatic discharges verified: Bench tests and analyses were performed.
    • Sufficiency of the pods' expected battery lifetime verified: Bench tests and analyses were performed.
    • Overall system performance, usage, surgical flow, and instrument ergonomics validated: Full-use simulation tests using sawbones were performed.
    • Electrical safety certification (IEC 60601-1:2005) met: Electrical certification tests related to the update were performed.

    2. Sample Size Used for the Test Set and Data Provenance

    The document describes "Non-clinical tests," "simulated bench test conditions," and "Full use simulations tests using sawbones." This indicates the testing was conducted in a controlled, non-human, simulated environment rather than on patient data.

    • Sample Size: Not explicitly mentioned.
    • Data Provenance: Simulated bench tests and sawbone simulations. This is not patient data; therefore, "country of origin" is not applicable in the typical sense. It implies lab or manufacturing environment testing.
    • Retrospective/Prospective: Not applicable, as it's not patient-level data. The tests would have been performed prospectively during the development and modification phases.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    This information is not provided. Since the tests were primarily engineering and functional verification on simulated or bench setups (e.g., software, hardware, mechanical components, sawbones), "ground truth" would likely be established by engineering specifications, calibration standards, and comparison to the predicate device's known performance, rather than clinical expert consensus.

    4. Adjudication Method for the Test Set

    Not applicable, as "adjudication" typically refers to resolving discrepancies between human readers or ground truth experts for clinical data. The tests described are engineering validations.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study is not mentioned. The document primarily focuses on verifying that device modifications do not introduce new safety or efficacy concerns compared to its own predicate, rather than comparing its performance against humans or quantifying human improvement with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The device is described as a "computer assisted stereotaxic surgical instrument system to assist the surgeon." This inherently implies a "human-in-the-loop" system. While the software and hardware components were tested individually ("Software system tests," "Performance tests were performed...to verify the implementation of the performance of the bone registration related functionalities"), these tests were meant to ensure the components functioned correctly for the purpose of assisting a surgeon. Standalone operation without human interaction is not the intended use, and standalone performance was therefore not explicitly evaluated in isolation as a primary metric in the way it might be for an AI diagnostic tool.

    7. The Type of Ground Truth Used

    For the engineering tests:

    • Software tests: Likely against defined software requirements and specifications.
    • Performance tests (bone registration): Likely against known, calibrated physical measurements or established mathematical models for bone alignment (see the sketch after this list).
    • Robustness/Compatibility/Battery life/ESD: Against engineering specifications, industry standards, and predicate device performance.
    • Sawbone simulations: Likely against established surgical techniques and expected outcomes for total knee arthroplasty, possibly with objective measurements of alignment.
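
    The bone-registration item above is the kind of functionality typically verified on the bench by applying a known rigid transform to a set of reference points and checking that the registration recovers it within tolerance. The submission does not name its registration method or tolerances, so the sketch below uses a generic SVD-based (Kabsch) rigid fit and made-up 1 mm / 1 degree limits purely to illustrate the shape of such a check.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Reference landmark cloud on a bone model (hypothetical coordinates, mm).
rng = np.random.default_rng(0)
model_pts = rng.uniform(-40, 40, size=(8, 3))

# Simulated "acquired" points: a known rotation + translation plus sensor noise.
angle = np.radians(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -3.0, 12.0])
acquired = model_pts @ R_true.T + t_true + rng.normal(0.0, 0.2, model_pts.shape)

R_est, t_est = rigid_fit(model_pts, acquired)

# Bench-style acceptance check against the known ground-truth transform
# (the 1 mm / 1 degree tolerances here are illustrative, not from the filing).
trans_err = np.linalg.norm(t_est - t_true)
rot_err = np.degrees(np.arccos(np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1, 1)))
print(f"translation error {trans_err:.2f} mm, rotation error {rot_err:.2f} deg")
assert trans_err < 1.0 and rot_err < 1.0
```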

    8. The Sample Size for the Training Set

    This device is not described as an AI/ML device that undergoes "training" in the typical sense of a deep learning model. It's a computer-assisted surgical instrument system using predefined algorithms and sensors. Therefore, a "training set" as understood in machine learning is not applicable here. The software development would involve traditional software engineering and testing cycles.

    9. How the Ground Truth for the Training Set Was Established

    As explained above, there isn't a "training set" in the AI/ML context for this device. The algorithms are likely based on biomechanical principles, geometry, and surgical protocols, with "ground truth" derived from engineering specifications and clinical understanding of proper implant positioning.


    K Number: K131129
    Device Name: CAS PSI SHOULDER
    Manufacturer:
    Date Cleared: 2013-08-20 (120 days)
    Product Code:
    Regulation Number: 888.3660
    Reference & Predicate Devices:
    Predicate For:

    Intended Use

    The CAS PSI Shoulder is intended to be used as a surgical instrument to construct and transfer a pre-surgical plan to orthopaedic surgical procedures. The CAS PSI Shoulder is indicated, based on patient-specific radiological images with identifiable placement anatomical landmarks, to assist in pre-operative planning and/or intra-operative guiding of surgical instruments for shoulder replacement surgical procedures on patients not otherwise precluded from being radiologically scanned.

    The CAS PSI Shoulder is to be used with the Zimmer® Trabecular Metal™ Reverse Shoulder Baseplate in accordance with the implant system's indications and contraindications.

    The CAS PSI Shoulder hardware components (jigs and bone model) are intended for single use only.

    Device Description

    The CAS PSI Shoulder consists of both software and hardware components and requires the patient to be radiologically scanned. The CAS PSI Shoulder has been developed with the fundamental goals to assist in pre-operative planning (using the CAS PSI Shoulder Software) and to accurately construct and transfer a pre-operative plan to orthopedic surgical procedures (using the CAS PSI Shoulder Hardware). The hardware components (jigs and bone model) have features designed to mate with legally marketed instruments and thus indirectly aid in the placement of legally marketed Class II implant devices. The software is developed in the C++ programming language for the Windows operating system. The hardware (jigs and bone guide) is made from biocompatible polyamide (Duraform) with press-fit 304 and 17-4 stainless steel components.

    AI/ML Overview

    The provided text describes the "CAS PSI Shoulder" device, a surgical planning and instrument guidance system for shoulder replacement procedures. However, the document does not contain the level of detail requested in the prompt regarding acceptance criteria and a specific study proving the device meets those criteria.

    Instead, it provides a general overview of non-clinical performance studies and a high-level conclusion.

    Here's a breakdown of what can and cannot be extracted from the given text based on your request:

    What can be extracted:

    1. A table of acceptance criteria and the reported device performance:

    The document states: "Non-clinical testing demonstrated that the CAS PSI Shoulder meets performance requirements as defined by Design Control activities and is substantially equivalent to the predicate device in terms of safety and efficacy."

    However, it does not provide a specific table of acceptance criteria or quantified device performance metrics from these tests. It only lists the types of non-clinical studies conducted.

    • Acceptance criteria: Not specified in detail. The document generally indicates meeting "performance requirements as defined by Design Control activities" and achieving "substantial equivalence."
    • Reported device performance: "Meets performance requirements as defined by Design Control activities and is substantially equivalent to the predicate device in terms of safety and efficacy."

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

    • Sample Size for Test Set: Not specified for any of the listed non-clinical studies.
    • Data Provenance: Not specified for any of the listed non-clinical studies (e.g., country of origin, retrospective/prospective). The studies are listed as "Simulated Use Testing," "Cadaveric Testing," etc., implying laboratory or cadaver-based testing rather than clinical patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

    • This information is not provided in the document. The non-clinical studies (Simulated Use, Cadaveric) would likely involve internal experts or engineers evaluating performance, but their number and qualifications are not detailed.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No, an MRMC comparative effectiveness study was not done or reported. The document focuses on non-clinical testing and states that "clinical data and conclusions were not needed to demonstrate substantial equivalence." The device assists in planning but the studies listed are not designed to measure human reader improvement with or without AI.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • The document describes "Software Verification and Validation" as one of the non-clinical studies. This would likely involve evaluating the algorithm's performance in a standalone capacity internally. However, specific metrics of its standalone performance (e.g., accuracy, precision) are not reported.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

    • For the non-clinical studies:
      • Simulated Use Testing & Cadaveric Testing: The ground truth would likely be based on established anatomical landmarks, surgical protocols, engineering specifications, and possibly measurements of implant or instrument placement accuracy against a predefined ideal (a sketch of such a measurement follows this list).
      • Software Verification and Validation: Ground truth would be based on expected software output given specific inputs, adherence to design specifications, and computational accuracy.
    • Specific methodologies for establishing ground truth are not detailed.
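
    For the simulated-use and cadaveric testing mentioned above, placement accuracy against a predefined ideal is commonly expressed as an entry-point offset and an axis-to-axis angle between the planned and achieved guide-pin trajectories. The summary reports no such numbers, so the values and the 2 mm / 3 degree thresholds in the sketch below are invented for illustration only.

```python
import numpy as np

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

# Planned guide-pin trajectory from the pre-surgical plan (glenoid frame, mm).
planned_entry = np.array([0.0, 0.0, 0.0])
planned_dir   = unit(np.array([0.10, -0.05, 0.99]))

# Trajectory actually achieved with the patient-specific jig, as digitized
# during a simulated-use or cadaveric test (hypothetical values).
achieved_entry = np.array([1.1, -0.6, 0.2])
achieved_dir   = unit(np.array([0.13, -0.03, 0.99]))

# Entry-point offset and axis-to-axis angle are the usual accuracy measures.
entry_offset_mm = np.linalg.norm(achieved_entry - planned_entry)
angle_deg = np.degrees(np.arccos(np.clip(np.dot(planned_dir, achieved_dir), -1.0, 1.0)))

print(f"entry offset: {entry_offset_mm:.1f} mm, axis deviation: {angle_deg:.1f} deg")

# Illustrative acceptance thresholds -- not taken from the 510(k) summary.
assert entry_offset_mm <= 2.0 and angle_deg <= 3.0
```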

    8. The sample size for the training set:

    • Not applicable/Not provided. The document describes a "Software Verification and Validation" study and does not mention machine learning or AI in a way that suggests a distinct training set for an AI model. The software component aids in pre-operative planning, which in this document is presented as a rules-based, computational task rather than one performed by a machine learning model that would require a training set.

    9. How the ground truth for the training set was established:

    • Not applicable/Not provided for the same reasons as point 8.

    In summary:

    The provided 510(k) summary focuses on demonstrating substantial equivalence through non-clinical performance studies (simulated use, cadaveric, biocompatibility, sterilization, dimensional stability, drop testing, and software verification/validation). It explicitly states that clinical data and conclusions were not needed. As such, it lacks the detailed performance metrics, sample sizes for test/training sets, expert qualifications, and ground truth establishment methods typically found in studies evaluating AI diagnostic or prognostic devices against specific acceptance criteria.


    K Number: K110054
    Manufacturer:
    Date Cleared: 2011-03-24 (76 days)
    Product Code:
    Regulation Number: 882.4560
    Reference & Predicate Devices:
    Predicate For:

    Intended Use

    The Navitrack® System – OS Knee Universal is indicated for use as a stereotaxic instrument to assist in the positioning of Total Knee Replacement components intraoperatively.

    It is a computer controlled image-guidance system equipped with a three-dimensional tracking sub-system. It is intended to assist the surgeon in determining reference alignment axes in relation to anatomical landmarks, and in precisely positioning the alignment instruments relative to these axes by displaying their locations.

    Device Description

    The Navitrack System - OS Knee Universal device consists of software, a computer workstation, an optical tracking system, surgical instruments, and tracking accessories, designed to assist the surgeon in the placement of total knee replacement components. Tracking devices are incorporated into given surgical instruments, as well as onto fixation bases that attach to the femur and the tibia, allowing their respective positions to be tracked and displayed to the user intra-operatively. The femur and tibia are displayed to the user in the form of their main alignment axes. The alignment axes are determined and recorded intra-operatively by identifying the key anatomical references that are used clinically to align and position the components.
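
    At the heart of a tracking sub-system like the one described above is a chain of rigid transforms: the camera reports each tracker's pose, and the instrument tip is re-expressed in the bone's reference frame before being displayed against the recorded alignment axes. The filing gives no implementation detail, so the poses and tip offset in the sketch below are invented; it only illustrates that coordinate-frame chain.

```python
import numpy as np

def pose(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(deg):
    """Rotation about the z axis by the given angle in degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Poses reported by the optical camera (hypothetical numbers, mm).
T_cam_femur = pose(rot_z(15.0), np.array([100.0, 250.0, 1200.0]))   # femur tracker
T_cam_tool  = pose(rot_z(-40.0), np.array([140.0, 230.0, 1150.0]))  # instrument tracker

# Instrument tip offset in the instrument tracker's own frame (from calibration).
tip_in_tool = np.array([0.0, 0.0, 180.0, 1.0])

# Re-express the tip in the femur reference frame:
#   tip_femur = inv(T_cam_femur) @ T_cam_tool @ tip_tool
T_femur_tool = np.linalg.inv(T_cam_femur) @ T_cam_tool
tip_in_femur = T_femur_tool @ tip_in_tool
print("instrument tip in femur frame (mm):", np.round(tip_in_femur[:3], 1))
```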

    AI/ML Overview

    The provided text is a 510(k) summary for the Navitrack® System - OS Knee Universal, a computer-assisted surgical navigation system for Total Knee Replacement. It focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed clinical study with specific acceptance criteria and performance metrics typically found in efficacy trials for novel devices.

    Therefore, much of the requested information regarding detailed acceptance criteria, specific performance metrics, sample sizes for test and training sets, expert qualifications, and adjudication methods is not explicitly available in this document. The document describes non-clinical tests and validation on cadavers, but does not provide specific quantitative results against pre-defined acceptance criteria.

    Below is an attempt to address the request based on the available information, noting where information is absent:


    1. Table of Acceptance Criteria and Reported Device Performance

    • Acceptance criteria: No new safety and efficacy issues are raised by the modifications to the predicate device.
    • Reported device performance: Non-clinical tests (bench tests, simulated use on cadavers) confirmed that the proposed bone references function as required, provide adequate fixation, and do not interfere with other instrumentation. For other changes, similar non-clinical verification and validation testing was performed. The device was deemed substantially equivalent to the predicate.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not explicitly stated. The document mentions "simulated use on cadavers," implying a cadaveric study, but numbers are not provided.
    • Data Provenance: The tests were conducted internally by the manufacturer ("Non-clinical tests were performed to assess...") as part of a 510(k) submission. The exact country of origin for the cadavers or test facility is not specified but the applicant is based in Montreal, Quebec, Canada. The testing was prospective for the device modifications.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly mentioned. Given the nature of cadaveric testing for a surgical navigation system, ground truth would likely be established through direct measurement or observation during the simulated surgical procedures, potentially involving qualified surgical personnel, but the specific method is not detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No, an MRMC comparative effectiveness study was not done. The document describes non-clinical testing for substantial equivalence, not a comparative study with human readers or clinical outcomes.

    6. Standalone (Algorithm Only) Performance Study

    • Standalone Performance: The performance data described is a form of standalone performance in a simulated environment (cadaveric testing) to assess the device's functionality. The focus is on the device's ability to track and display anatomical information and instrument positions accurately, rather than an evaluation of an algorithm's diagnostic output. Specific metrics of standalone accuracy (e.g., angular deviation, translational error) are not quantified in this summary.

    7. Type of Ground Truth Used

    • Ground Truth Type: Based on "simulated use on cadavers" and "Non-clinical tests," the ground truth was likely established through direct physical measurement and observation of anatomical landmarks and instrument positions within the cadaveric environment, to verify the system's accuracy and functionality. It would be a technical/engineering ground truth related to mechanical and tracking accuracy (a sketch of such a check follows).
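
    A typical form for such an engineering ground truth is a set of reference point positions measured with a calibrated fixture and compared against the tracking system's readings, summarized as per-point, RMS, and maximum errors. None of these figures appear in the document; the sketch below uses made-up values solely to show what that comparison looks like.

```python
import numpy as np

# Positions of the same divot points measured two ways (hypothetical, mm):
# by a calibrated reference fixture and by the tracking system under test.
reference = np.array([[0.0, 0.0, 0.0],
                      [50.0, 0.0, 0.0],
                      [50.0, 50.0, 0.0],
                      [0.0, 50.0, 0.0],
                      [25.0, 25.0, 30.0]])
tracked = reference + np.array([[0.2, -0.1, 0.3],
                                [-0.3, 0.2, 0.1],
                                [0.1, 0.1, -0.2],
                                [0.0, -0.3, 0.2],
                                [0.2, 0.2, 0.1]])

# Per-point error vectors, then RMS and maximum error across the set.
errors = np.linalg.norm(tracked - reference, axis=1)
print(f"per-point error (mm): {np.round(errors, 2)}")
print(f"RMS error: {np.sqrt(np.mean(errors**2)):.2f} mm, max: {errors.max():.2f} mm")
```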

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable in the traditional sense of machine learning training data. This device is a stereotaxic instrument for surgical guidance, not a diagnostic AI system that learns from a large dataset. The "training" in this context refers to the development and refinement of the software and hardware, which occurs iteratively rather than through a distinct "training set" of data.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set Establishment: Not applicable as there isn't a "training set" in the machine learning sense. The underlying physics and algorithms for tracking and navigation are based on established principles, and their accuracy would be continuously validated during the device's development cycle against known physical properties and measurements.
