Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K221943
    Device Name: EmbedMed
    Date Cleared: 2023-02-01 (211 days)
    Product Code:
    Regulation Number: 888.3030
    Reference & Predicate Devices:
    Applicant Name (Manufacturer): 3D LifePrints UK Ltd.

    Intended Use

    EmbedMed is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the system, and the result is an output data file. This file may then be provided as digital models or used as input to an additive manufacturing portion of the system. The additive manufacturing portion of the system produces physical outputs including anatomical models and surgical guides for use in the marking of bone and/or in guiding surgical instruments in non-acute, non-joint replacing osteotomies, including the resection of bone tumors, for the appendicular skeleton. EmbedMed is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.

    Device Description

    EmbedMed utilizes Commercial Off-The-Shelf (COTS) software to manipulate 3D medical images to create digital and additive manufactured, patient-specific physical anatomical models and surgical guides for use in non-joint replacing orthopedic surgical procedures for the appendicular skeleton.

    Imaging data files are obtained from the surgeons for treatment planning, and the various patient-specific products are manufactured with biocompatible photopolymer resins using additive manufacturing (stereolithography).

    AI/ML Overview

    The provided FDA 510(k) clearance letter for EmbedMed focuses on demonstrating substantial equivalence to predicate devices, primarily through comparison of intended use, design, materials, and manufacturing processes, rather than detailed performance against specific acceptance criteria for an AI/algorithm-driven device.

    Therefore, the document does not contain the level of detail typically found in a study proving an AI/software device meets specific acceptance criteria. Specifically, it lacks information regarding:

    • A table of acceptance criteria and reported device performance metrics (e.g., accuracy, precision for segmentation).
    • Sample sizes and provenance for test sets designed to evaluate algorithmic performance (only mentions "worst-case features" for verification and "real patient scan data" for validation without specific numbers).
    • Number of experts, their qualifications, and adjudication methods for establishing ground truth.
    • Details of MRMC comparative effectiveness studies or standalone algorithmic performance.
    • Specific types of ground truth used beyond "real patient scan data."
    • Sample size for training sets and how ground truth for training was established.

    This is because EmbedMed's primary function, as described, revolves around human-guided image segmentation using COTS software and subsequent additive manufacturing of physical models and guides, rather than an automated AI algorithm making diagnostic or interpretive outputs that would necessitate such detailed performance metrics for regulatory clearance. The "image segmentation system" appears to be a tool used by trained personnel rather than an autonomous AI making critical decisions.

    The document emphasizes physical output accuracy and biological safety, as well as the manufacturing process, which aligns with its classification as an orthopedic surgical planning and instrument guide system, rather than a diagnostic AI.

    However, based on the limited information related to performance testing in the document, here's a summary of what can be inferred or directly stated, and what is missing:


    1. A table of acceptance criteria and the reported device performance

    The document does not provide a quantitative table of acceptance criteria and reported numerical performance metrics for the software's image segmentation capabilities (e.g., Dice score, Hausdorff distance for segmentation accuracy).
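    For context, the segmentation metrics named above are straightforward to define. The sketch below shows how a Dice coefficient and a symmetric Hausdorff distance are conventionally computed for binary masks; it is purely illustrative and reflects no computation described in the 510(k).

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice_coefficient(mask_a, mask_b):
            """Dice similarity between two same-shape binary segmentation masks."""
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            total = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

        def symmetric_hausdorff(points_a, points_b):
            """Symmetric Hausdorff distance between two point sets, e.g.
            boundary-voxel coordinates extracted from each mask."""
            return max(directed_hausdorff(points_a, points_b)[0],
                       directed_hausdorff(points_b, points_a)[0])

        # Toy example on two overlapping 2-D masks.
        m1 = np.zeros((64, 64), dtype=bool); m1[10:40, 10:40] = True
        m2 = np.zeros((64, 64), dtype=bool); m2[15:45, 12:42] = True
        print(dice_coefficient(m1, m2))
        print(symmetric_hausdorff(np.argwhere(m1), np.argwhere(m2)))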

    In place of such metrics, the document states:

    • Acceptance Criteria (Implied): The physical outputs (anatomical models and surgical guides) should meet "feature and dimensional accuracy requirements" and "meet the intended use of the product and its design requirements."
    • Reported Performance: "Verification testing performed on coupons... demonstrated that the EmbedMed physical outputs meets the feature and dimensional accuracy requirements for patient-specific surgical guides and anatomical models." And "The validation testing on the EmbedMed physical outputs manufactured from real patient scan data demonstrated through simulated use testing that the system produces patient-specific outputs that meet the intended use of the product and its design requirements."
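    To make "feature and dimensional accuracy requirements" concrete: verification of this kind typically reduces to comparing measured coupon features against nominal design dimensions within a tolerance, as in the sketch below. The feature names, dimensions, and tolerances are hypothetical and do not come from the submission.

        # Hypothetical acceptance check: each tuple is
        # (feature name, nominal mm, measured mm, tolerance mm).
        features = [
            ("guide slot width",     2.00,  2.04, 0.10),
            ("drill hole diameter",  3.50,  3.43, 0.10),
            ("model landmark span", 55.00, 55.30, 0.50),
        ]

        for name, nominal, measured, tol in features:
            deviation = measured - nominal
            status = "PASS" if abs(deviation) <= tol else "FAIL"
            print(f"{name}: {deviation:+.2f} mm vs ±{tol:.2f} mm -> {status}")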

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: Not explicitly stated. The document mentions "coupons, which were designed with the worst-case features and dimensions" for verification testing and "real patient scan data" for validation testing. No specific number of instances or patients is provided.
    • Data Provenance: Not explicitly stated (e.g., country of origin, specific hospitals). The data consists of "patient specific medical imaging files" and "real patient scan data." The document does not specify whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This information is not provided. Given that the software is described as being used to "segment the image file" and that the "digital output is then reviewed and approved by the prescribing clinician," the "ground truth" for the utility of the output appears to be clinician approval. However, for the accuracy of the segmentation itself, the method of establishing ground truth is not detailed.

    4. Adjudication method for the test set

    • No adjudication method is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done

    • No, an MRMC comparative effectiveness study was not done. The document states: "Clinical testing was not necessary for the demonstration of substantial equivalence." This implies that the evaluation focused on technical performance and comparison to predicate devices rather than on human-reader performance.

    6. If a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done

    • The document implies that the "image segmentation system" works with human input/guidance ("processed by the system," "reviewed and approved by the prescribing clinician"). It focuses on the accuracy of the physical outputs derived from this process. It does not provide metrics for a standalone algorithmic performance of the segmentation software itself, independent of human review/guidance.

    7. The type of ground truth used

    • For the physical outputs: The ground truth appears to be "design requirements" and "intended use of the product" as validated through "simulated use testing."
    • For the software's segmentation capabilities directly: Not explicitly defined beyond the implication that a "prescribing clinician" reviews and approves the digital output. This suggests the "ground truth" for clinical acceptability is clinician approval, not an independent, pre-established expert consensus or pathology.

    8. The sample size for the training set

    • Information about a training set is not provided, as the "image segmentation system" uses "Commercial Off-The-Shelf (COTS) software." This implies it is a commercially available, established segmentation tool (Simpleware ScanIP) rather than a newly developed AI model requiring a bespoke training set.

    9. How the ground truth for the training set was established

    • Not applicable/Not provided, as the software is COTS and not a newly trained AI model for which the submitter would establish a training ground truth.

    K Number: K220366
    Device Name: EmbedMed
    Date Cleared: 2022-09-30 (234 days)
    Product Code:
    Regulation Number: 872.4120
    Panel: Dental
    Reference & Predicate Devices:
    Applicant Name (Manufacturer): 3D LifePrints UK Ltd.

    Intended Use

    EmbedMed is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the system, and the result is an output data file. This file may then be provided as digital models or used as input to an additive manufacturing portion of the system. The additive manufacturing portion of the system produces physical outputs including anatomical models and surgical guides for use in maxillofacial surgeries. EmbedMed is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.

    Device Description

    EmbedMed utilizes Commercial Off-The-Shelf (COTS) software to manipulate 3D medical images to create digital and additive manufactured, patient-specific physical anatomical models and surgical guides for use in surgical procedures. Imaging data files are obtained from the surgeons for treatment planning, and the various patient-specific products are manufactured with biocompatible photopolymer resins using additive manufacturing (stereolithography).

    AI/ML Overview

    The provided text describes the 3D LifePrints UK Ltd. EmbedMed device (K220366), an image segmentation software and additive manufacturing system for creating patient-specific anatomical models and surgical guides for maxillofacial surgeries.

    Here's an analysis of the acceptance criteria and the study conducted:

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA 510(k) summary does not explicitly present a table of quantitative acceptance criteria with corresponding performance metrics for the EmbedMed device in terms of clinical accuracy (e.g., sensitivity, specificity, or deviation). Instead, the performance data presented is focused on demonstrating the physical and functional aspects of the manufactured outputs and their compliance with general medical device standards.

    However, based on the provided text, we can infer some "acceptance criteria" from the verification and validation testing performed. These are general compliance points rather than precise numerical performance targets for the AI component's diagnostic accuracy.

    Acceptance Criteria Category: Stated Verification/Validation/Performance

    • Biocompatibility: EmbedMed meets the requirements of ISO 10993-1:2018, ISO 14971:2019, and the FDA Guidance Document "Use of International Standard ISO 10993-1" (2016) for short-term (≤ 24 hours) contact with tissue and bone. Tested endpoints: cytotoxicity, sensitization, acute systemic toxicity, and material-mediated pyrogenicity.
    • Sterilization Validation (End User): Sterilization process validated to a sterility assurance level (SAL) of 10⁻⁶ using the overkill method according to ISO 17665-1:2006. Drying time validation was also conducted.
    • Functional/System Performance (Software & Manufacturing): Installation, Operational, and Performance Qualification (IQ/OQ/PQ) confirmed.
    • Dimensional Accuracy (Physical Outputs): Verified that physical outputs (anatomical models, surgical guides) meet dimensional accuracy requirements across the range of possible patient-specific devices.
    • Feature Accuracy (Physical Outputs): Verified that physical outputs meet feature accuracy requirements across the range of possible patient-specific devices.
    • Simulated Use Testing: Performed to confirm that EmbedMed physical outputs meet requirements.
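    As a side note on the SAL of 10⁻⁶: under the standard first-order inactivation model, the overkill method sizes the cycle so that a resistant biological-indicator population (conventionally 10⁶ spores) is reduced by twelve orders of magnitude. A minimal sketch of that arithmetic follows; the D-value is assumed for illustration and is not taken from the submission.

        import math

        def exposure_for_sal(d_value_min, n0, sal=1e-6):
            """Minimum equivalent exposure time (minutes) to reduce an initial
            population n0 to the target SAL, using N(t) = n0 * 10**(-t / D)."""
            return d_value_min * (math.log10(n0) - math.log10(sal))

        # Overkill convention: 10**6 indicator spores driven to SAL 10**-6 is
        # a 12-log reduction, i.e. 12 D-values (D-value assumed, not stated
        # in the 510(k)).
        print(exposure_for_sal(d_value_min=1.5, n0=1e6))  # -> 18.0 minutes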

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state a specific "test set" for evaluating the performance of the image segmentation system in terms of diagnostic accuracy or clinical utility. The testing described focuses on the physical outputs and system performance rather than the AI's ability to accurately segment anatomical structures on a test dataset.

    • Sample Size for Test Set: Not explicitly stated for the image segmentation component. The "Verification and Validation Testing" indicates testing was performed "across the range of possible patient-specific devices," implying a variety of cases were used for dimensional and feature accuracy, but not a specific count or dataset description.
    • Data Provenance: Not specified. The input imaging information is stated to come from "a medical scanner such as a CT based system." There is no mention of country of origin or whether data was retrospective or prospective for any internal testing of the image segmentation.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This information is not provided in the document. The study described does not involve establishing ground truth for image segmentation using expert consensus for a test set, nor does it quantify the performance of the image segmentation algorithm in terms of accuracy against such a ground truth. The "ground truth" for the physical outputs (dimensional and feature accuracy) would likely be based on the digital models generated by the system and engineering specifications, not expert clinical interpretation.

    4. Adjudication Method for the Test Set

    This information is not provided. As no specific test set for image segmentation performance against expert ground truth is described, an adjudication method is not mentioned.

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. without AI Assistance

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document explicitly states under "5.8.4. Clinical Studies": "Clinical testing was not necessary for the demonstration of substantial equivalence." The focus was on the substantial equivalence of the system and the physical outputs, not on comparing reader performance with and without AI assistance.

    6. If a Standalone (i.e., Algorithm Only, Without Human-in-the-Loop) Performance Evaluation Was Done

    A standalone performance evaluation of the image segmentation algorithm in terms of its accuracy (e.g., Dice score, Hausdorff distance, etc.) against a clinical ground truth is not explicitly described or provided. The system is described as a "software system and image segmentation system," but its performance metrics as an algorithm by itself are not detailed. The digital output is "reviewed and approved by the prescribing clinician prior to delivery of the final outputs," indicating a human-in-the-loop workflow.

    7. The Type of Ground Truth Used

    For the "Feature Accuracy" and "Dimensional Accuracy" validation, the ground truth would likely be the digital design files generated by the EmbedMed software itself, against which the physical 3D-printed outputs are compared. For the biocompatibility and sterilization validation, the ground truth is established by international standards (ISO) and FDA guidance documents.

    There is no mention of ground truth derived from expert consensus, pathology, or outcomes data for the performance of the image segmentation component of the software.

    8. The Sample Size for the Training Set

    The document does not provide any information about a training set size for the image segmentation software. Given that the software is described as utilizing "Commercial Off-The-Shelf (COTS) software to manipulate 3D medical images," it is possible that the underlying segmentation algorithms were developed and trained externally or are based on traditional image processing techniques rather than a large, custom-trained deep learning model.

    9. How the Ground Truth for the Training Set Was Established

    Since information regarding a specific training set is not provided, how its ground truth was established is also not mentioned.

