Found 3 results

510(k) Data Aggregation

    K Number
    K221943
    Device Name
    EmbedMed
    Date Cleared
    2023-02-01

    (211 days)

    Product Code
    Regulation Number
    888.3030
    Reference & Predicate Devices
    Matched on Reference Devices: K193614

    Intended Use

    EmbedMed is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the system, and the result is an output data file. This file may then be provided as digital models or used as input to an additive manufacturing portion of the system. The additive manufacturing portion of the system produces physical outputs including anatomical models and surgical guides for use in the marking of bone and/or in guiding surgical instruments in non-acute, non-joint replacing osteotomies, including the resection of bone tumors, for the appendicular skeleton. EmbedMed is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.

    Device Description

    EmbedMed utilizes Commercial Off-The-Shelf (COTS) software to manipulate 3D medical images to create digital and additive manufactured, patient-specific physical anatomical models and surgical guides for use in non-joint replacing orthopedic surgical procedures for the appendicular skeleton.

    Imaging data files are obtained from the surgeons for treatment planning, and the various patient-specific products are manufactured with biocompatible photopolymer resins using additive manufacturing (stereolithography).

    AI/ML Overview

    The provided FDA 510(k) clearance letter for EmbedMed focuses on demonstrating substantial equivalence to predicate devices, primarily through comparison of intended use, design, materials, and manufacturing processes, rather than detailed performance against specific acceptance criteria for an AI/algorithm-driven device.

    Therefore, the document does not contain the level of detail typically found in a study proving an AI/software device meets specific acceptance criteria. Specifically, it lacks information regarding:

    • A table of acceptance criteria and reported device performance metrics (e.g., accuracy, precision for segmentation).
    • Sample sizes and provenance for test sets designed to evaluate algorithmic performance (only mentions "worst-case features" for verification and "real patient scan data" for validation without specific numbers).
    • Number of experts, their qualifications, and adjudication methods for establishing ground truth.
    • Details of MRMC comparative effectiveness studies or standalone algorithmic performance.
    • Specific types of ground truth used beyond "real patient scan data."
    • Sample size for training sets and how ground truth for training was established.

    This is because EmbedMed's primary function, as described, revolves around human-guided image segmentation using COTS software and subsequent additive manufacturing of physical models and guides, rather than an automated AI algorithm making diagnostic or interpretive outputs that would necessitate such detailed performance metrics for regulatory clearance. The "image segmentation system" appears to be a tool used by trained personnel rather than an autonomous AI making critical decisions.

    The document emphasizes physical output accuracy and biological safety, as well as the manufacturing process, which aligns with its classification as an orthopedic surgical planning and instrument guide system, rather than a diagnostic AI.

    However, based on the limited information related to performance testing in the document, here's a summary of what can be inferred or directly stated, and what is missing:


    1. A table of acceptance criteria and the reported device performance

    The document does not provide a quantitative table of acceptance criteria and reported numerical performance metrics for the software's image segmentation capabilities (e.g., Dice score, Hausdorff distance for segmentation accuracy).
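    For reference, the two metrics named above are straightforward to compute for binary segmentation masks. The sketch below (plain NumPy, purely illustrative; no such figures appear in the submission) shows a Dice score and a brute-force symmetric Hausdorff distance:

    ```python
    import numpy as np

    def dice_score(a: np.ndarray, b: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        inter = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 2.0 * inter / total if total else 1.0

    def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Symmetric Hausdorff distance between the foreground pixels of two
        binary masks (brute force; fine for small illustrative examples)."""
        pa = np.argwhere(a)
        pb = np.argwhere(b)
        # Pairwise Euclidean distances between all foreground coordinates
        d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
        return max(d.min(axis=1).max(), d.min(axis=0).max())

    # Toy 2D "segmentations": ground truth vs. a prediction shifted by one row
    gt = np.zeros((8, 8), dtype=bool)
    gt[2:6, 2:6] = True
    pred = np.zeros((8, 8), dtype=bool)
    pred[3:7, 2:6] = True

    print(dice_score(gt, pred))          # 0.75 (overlap 12, sizes 16 + 16)
    print(hausdorff_distance(gt, pred))  # 1.0 (masks offset by one row)
    ```

    Metrics of this kind, with predefined pass thresholds, are what a quantitative acceptance-criteria table for a segmentation tool would typically contain.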

    Instead, it states:

    • Acceptance Criteria (Implied): The physical outputs (anatomical models and surgical guides) should meet "feature and dimensional accuracy requirements" and "meet the intended use of the product and its design requirements."
    • Reported Performance: "Verification testing performed on coupons... demonstrated that the EmbedMed physical outputs meets the feature and dimensional accuracy requirements for patient-specific surgical guides and anatomical models." And "The validation testing on the EmbedMed physical outputs manufactured from real patient scan data demonstrated through simulated use testing that the system produces patient-specific outputs that meet the intended use of the product and its design requirements."
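
    To make the implied criterion concrete, a "feature and dimensional accuracy" verification on coupons reduces to comparing measured features against nominal values within tolerances. The sketch below is illustrative only; the feature names and tolerance values are hypothetical and do not come from the submission:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        nominal_mm: float
        tolerance_mm: float  # allowed +/- deviation (hypothetical values)

    def check_coupon(measured: dict[str, float], specs: list[Feature]) -> dict[str, bool]:
        """Pass/fail per feature: |measured - nominal| <= tolerance."""
        return {
            f.name: abs(measured[f.name] - f.nominal_mm) <= f.tolerance_mm
            for f in specs
        }

    specs = [
        Feature("guide_slot_width", 3.00, 0.20),
        Feature("drill_hole_diameter", 2.50, 0.15),
    ]
    measured = {"guide_slot_width": 3.12, "drill_hole_diameter": 2.70}
    results = check_coupon(measured, specs)
    print(results)  # slot passes; hole deviates 0.20 mm, over its 0.15 mm limit
    ```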

    2. Sample sizes used for the test set and the data provenance

    • Test Set Sample Size: Not explicitly stated. The document mentions "coupons, which were designed with the worst-case features and dimensions" for verification testing and "real patient scan data" for validation testing. No specific number of instances or patients is provided.
    • Data Provenance: Not explicitly stated (e.g., country of origin, specific hospitals). The data consists of "patient specific medical imaging files" and "real patient scan data." The document does not specify whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This information is not provided. Given the description of the software being used to "segment the image file" and the "digital output is then reviewed and approved by the prescribing clinician," the "ground truth" for the utility of the output seems to be clinician approval. However, for the accuracy of the segmentation itself, the ground truth establishment method is not detailed.

    4. Adjudication method for the test set

    • No adjudication method is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done

    • No, an MRMC comparative effectiveness study was not done. The document states: "Clinical testing was not necessary for the demonstration of substantial equivalence." This implies that the study focused on technical performance and comparison to predicate devices rather than a human-reader performance study.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    • The document implies that the "image segmentation system" works with human input/guidance ("processed by the system," "reviewed and approved by the prescribing clinician"). It focuses on the accuracy of the physical outputs derived from this process. It does not provide metrics for a standalone algorithmic performance of the segmentation software itself, independent of human review/guidance.

    7. The type of ground truth used

    • For the physical outputs: The ground truth appears to be "design requirements" and "intended use of the product" as validated through "simulated use testing."
    • For the software's segmentation capabilities directly: Not explicitly defined beyond the implication that a "prescribing clinician" reviews and approves the digital output. This suggests the "ground truth" for clinical acceptability is clinician approval, not an independent, pre-established expert consensus or pathology.

    8. The sample size for the training set

    • Information about a training set is not provided, as the "image segmentation system" uses "Commercial Off-The-Shelf (COTS) software." This implies it is a commercially available, established segmentation tool (Simpleware Scan IP) rather than a newly developed AI model requiring a bespoke training set.

    9. How the ground truth for the training set was established

    • Not applicable/Not provided, as the software is COTS and not a newly trained AI model for which the submitter would establish a training ground truth.

    K Number
    K212237
    Device Name
    3D-Cut
    Manufacturer
    Date Cleared
    2021-11-29

    (133 days)

    Product Code
    Regulation Number
    888.3030
    Reference & Predicate Devices
    Matched on Reference Devices: K192979, K193614

    Intended Use

    3D-Cut is intended to be used as a surgical instrument to assist in preoperative planning and/or in guiding the marking of bone and/or in guiding surgical instruments in non-acute, non-joint replacing osteotomies, including the resection of bone tumors, for femur, tibia and pelvis including sacrum.

    Device Description

    3D-Cut is a patient-matched, additively manufactured, single-use surgical instrument (PSI). Based on preoperative planning, the instruments are intended to assist physicians in guiding the marking of bone and guiding surgical instruments in bone tumor resection surgery, excluding joint replacement surgeries.

    The 3D-Cut instruments are designed starting from patient medical images acquired with computed tomography (CT) and magnetic resonance imaging (MRI) devices. The clinician delineates the tumor on the MRI; the MRI and the delineated tumor are then merged onto the CT, which is used to extract the 3D CAD model of the bone. A draft treatment plan is submitted for evaluation to the treating clinician. Upon the surgeon's approval, a PSI is designed and again submitted to the clinician. After validation, the PSI is produced using additive manufacturing.
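
    The merge step described above amounts to mapping the tumor delineation from MRI coordinates into CT coordinates, typically via a rigid transform found by image registration. The sketch below illustrates only the coordinate mapping itself; the transform values are hypothetical, and real MRI/CT registration would be done with dedicated imaging tooling:

    ```python
    import numpy as np

    def apply_rigid_transform(points_mm: np.ndarray, T: np.ndarray) -> np.ndarray:
        """Map an Nx3 array of points through a 4x4 homogeneous rigid transform."""
        homog = np.hstack([points_mm, np.ones((len(points_mm), 1))])
        return (homog @ T.T)[:, :3]

    # Hypothetical MRI->CT transform: 90-degree rotation about z plus a translation
    theta = np.pi / 2
    T = np.array([
        [np.cos(theta), -np.sin(theta), 0.0, 10.0],
        [np.sin(theta),  np.cos(theta), 0.0, -5.0],
        [0.0,            0.0,           1.0,  2.0],
        [0.0,            0.0,           0.0,  1.0],
    ])

    # Two toy points from a tumor contour delineated in MRI space (mm)
    tumor_contour_mri = np.array([[1.0, 0.0, 0.0],
                                  [0.0, 1.0, 0.0]])
    tumor_contour_ct = apply_rigid_transform(tumor_contour_mri, T)
    print(np.round(tumor_contour_ct, 3))
    ```

    Once the contour lives in CT coordinates, it can be combined with the bone surface extracted from the CT to drive the CAD design of the guide.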

    AI/ML Overview

    The provided text describes the 3D-Cut device and its 510(k) submission, focusing on regulatory aspects, indications for use, and a high-level summary of performance data. However, this document does not contain the detailed information necessary to fully answer the specific questions regarding acceptance criteria, sample sizes, expert qualifications, ground truth establishment, or clinical study specifics like MRMC study results or effect sizes.

    The text states: "Several tests have been conducted to demonstrate the output of the manufacturing process conforms to the device specifications. A combination of bench, cadaveric and clinical (OUS published case series) testing was executed to demonstrate the subject device is substantially equivalent to the predicate device and performs in accordance with its intended use." It also mentions "Software verification and validation were performed, and documentation was included in this submission in accordance with FDA Guidance 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices'".

    This indicates that internal performance specifications and software verification/validation were performed, but the document does not elaborate on the specific acceptance criteria for these tests, nor does it provide details of a clinical study that would assess algorithm performance in the way suggested by the questions (e.g., number of experts, adjudication methods, MRMC studies, standalone performance data).

    Therefore, I can only provide an answer that reflects the absence of the requested detailed information in the provided document.


    Here's an assessment based on the provided document, highlighting the missing information:

    1. A table of acceptance criteria and the reported device performance

    The document mentions that "Several tests have been conducted to demonstrate the output of the manufacturing process conforms to the device specifications." and "Software verification and validation were performed". However, the specific acceptance criteria and the quantitative reported device performance for these tests are NOT provided in this document.

    2. Sample sizes used for the test set and the data provenance

    The document refers to "bench, cadaveric and clinical (OUS published case series) testing."

    • Bench and Cadaveric Testing: No sample sizes are specified.
    • Clinical Testing ("OUS published case series"): No sample size for the "case series" is provided, nor are details about the data provenance (e.g., specific country of origin, retrospective or prospective nature of these case series).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document states that the "clinician delineates the tumor on the MRI" and a "draft treatment plan is submitted for evaluation to the treating clinician. Upon surgeon's approval, a PSI is designed and again submitted to the clinician. After validation, the PSI is produced." This implies clinical input for planning, but it does not specify the number of experts, their qualifications, or how a 'ground truth' for evaluating the device's performance (e.g., guiding surgery accuracy) was established for a test set. The clinical "validation" mentioned likely refers to the surgeon's approval of the design for a specific patient, not a generalized ground truth for a test set.

    4. Adjudication method for the test set

    No information on adjudication methods for a test set is provided.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    There is no mention of an MRMC comparative effectiveness study, nor any data on how human readers (or surgeons in this context) improve with or without AI (device) assistance. The device is a physical surgical instrument resulting from preoperative planning, not explicitly an AI diagnostic tool for image interpretation.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    The device is a "patient-matched additively manufactured single use surgical instrument (PSI)" based on "preoperative planning." The planning process involves a "clinician delineat[ing] the tumor on the MRI" and approving the treatment plan and PSI design. This indicates a human-in-the-loop process. No standalone algorithm performance data without human input is mentioned or applicable given the nature of the device.

    7. The type of ground truth used

    For the design and approval process, the "treating clinician's approval" and "validation" serve as the ground truth for shaping the PSI. For the performance of the manufactured device, "dimensional stability," "mechanical testing," and "simulated use testing on cadaveric specimen" would rely on physical measurements and surgical outcomes on cadavers. The "OUS published case series" would likely rely on clinical outcomes. However, a specific, generalized "ground truth" definition for a test set that would validate the device's accuracy in a structured study format (e.g., expert consensus on image interpretation, pathology, or long-term patient outcomes for a large cohort) is not described.

    8. The sample size for the training set

    The document describes a custom manufacturing process where the device is "designed starting from patient medical images." This implies a patient-specific design, not a general algorithm that is "trained" on a large dataset in the typical sense of machine learning. While there might be internal design rules or algorithms, the concept of a "training set" as understood in a machine learning context for diagnostic AI is not explicitly described or applicable in the provided information about this custom-manufactured surgical instrument.

    9. How the ground truth for the training set was established

    As there is no described "training set" in the context of an AI algorithm, this question is not applicable based on the provided text. The "ground truth" for the device's design is the clinician's approval of the proposed plan and PSI.


    K Number
    K203697
    Date Cleared
    2021-03-12

    (84 days)

    Product Code
    Regulation Number
    888.3520
    Reference & Predicate Devices
    Matched on Reference Devices: K181302, K163700, K193614

    Intended Use

    The patient-specific BC Reflex Uni™ is indicated for unicompartmental knee arthroplasty (UKA) in patients with advanced knee osteoarthritis (OA) of the medial compartment with evidence of adequate healthy bone to support the implanted components. Candidates for unicompartmental knee replacement include those with:

    · joint impairment due to osteoarthritis or traumatic arthritis of the knee,

    · varus deformity of the knee, and

    · as an alternative to tibial osteotomy in patients with unicompartmental OA.

    The patient-specific BC Reflex Uni™ components fit within an envelope of dimensions that are specific to each patient. The BC Reflex Uni™ femoral component and tibial baseplate are intended for cemented fixation.

    Device Description

    The BC Reflex Uni™ Knee System is a patient-specific unicompartmental knee system that consists of femoral and tibial implants for replacement of the medial tibiofemoral compartment of the knee. The patient-specific femoral and tibial implants and single-use instruments are manufactured from CAD and CAM files generated from Bodycad software, which are based on MRI or CT images of the patient's knee and input from the surgeon. The BC Reflex Uni™ is for cemented use only and is sterilized by gamma radiation.

    The subject device of this 510(k) is the same as the primary predicate device. The purpose of this Special 510(k) Device Modification is to notify the FDA of minor changes to the design and contents of the patient specific kits and reusable instruments for the BC Reflex Uni™ Knee System.

    Materials: Wrought Cobalt-28Chromium-6Molybdenum Alloy (CoCrMo; ASTM F1537-11) for the femoral component, wrought Titanium-6Aluminum-4Vanadium ELI (Extra Low Interstitial) Alloy (Ti6Al4V ELI; ASTM F136-13) for the tibial baseplate and locking pin, and Ultra-High-Molecular-Weight Polyethylene (UHMWPE; ASTM F648-14) for the tibial insert.

    AI/ML Overview

    This document, a 510(k) Premarket Notification from Bodycad Laboratories, Inc., describes a medical device, the BC Reflex Uni™ Knee System, and its substantial equivalence to previously cleared devices. However, it does not contain the detailed information necessary to answer your specific questions about acceptance criteria and a study proving the device meets those criteria, particularly in the context of an AI/algorithm-driven medical device requiring performance metrics like sensitivity, specificity, or human reader improvement.

    The document primarily focuses on demonstrating substantial equivalence (a regulatory pathway for medical devices) for a traditional knee implant, not an AI-powered diagnostic or therapeutic tool. The "Performance Data" section mentions "Software V&V accounting for all changes" and "Verification testing of usability of updated devices," but these are general statements about engineering and design controls, not clinical performance studies with acceptance criteria for an AI system.

    Here's why the provided text cannot fulfill your request, along with what would be needed if this were an AI device:

    Missing Information & Why it's Absent from This Document:

    1. A table of acceptance criteria and the reported device performance: This document is about a physical knee implant. Acceptance criteria for such devices typically revolve around mechanical properties (fatigue, wear, strength), biocompatibility, and sterilization, along with manufacturing process validation. It wouldn't include AI performance metrics like sensitivity, specificity, or AUC.
    2. Sample sizes used for the test set and data provenance: Not relevant for this type of device submission.
    3. Number of experts used to establish ground truth & qualifications: Not relevant. Ground truth for an implant is its physical and material properties meeting specifications, and clinical outcomes for safety and effectiveness.
    4. Adjudication method for the test set: Not relevant.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done: MRMC studies are for evaluating diagnostic image interpretation by humans, often with and without AI assistance. This device is an implant, not an imaging interpretation tool.
    6. If a standalone (algorithm only) performance was done: Not relevant, as there is no standalone algorithm being validated here in the typical AI sense. The "Bodycad software" mentioned is for design and manufacturing, not for making diagnostic or treatment decisions.
    7. The type of ground truth used: For this knee implant, ground truth would be based on engineering specifications, material standards (ASTM), and potentially clinical outcomes from previous versions or similar devices. It's not based on expert consensus on image interpretation or pathology in the way an AI diagnostic would be.
    8. The sample size for the training set: Not applicable. The "Bodycad software" is a design tool, not a machine learning model that undergoes "training."
    9. How the ground truth for the training set was established: Not applicable.

    What little "performance data" is mentioned:

    • "Software V&V accounting for all changes per Bodycad procedures, which are the same procedures presented to FDA previously for the predicate and reference devices."
    • "Risk analysis and design control review confirming no new or changed risks relative to the indications for use and efficacy of product."
    • "Verification testing of usability of updated devices."

    These indicate standard regulatory steps for medical device changes, focusing on ensuring the software and design changes for the manufacturing and design of the physical implant do not introduce new safety or effectiveness concerns. They do not describe performance evaluation of an AI-driven decision-making system in a clinical trial context.

    In summary, the provided document is a 510(k) clearance letter for a mechanical knee implant system and does not contain the information requested about AI device acceptance criteria and performance studies.

