Search Results

Found 5 results

510(k) Data Aggregation

    K Number
    K213969
    Date Cleared
    2022-10-07

    (291 days)

    Product Code
    Regulation Number
    878.3720
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K073468, K101342

    Intended Use

    The VisionAir Patient-Specific Airway Stent is indicated for the treatment of adults ≥22 years of age with symptomatic stenosis of the airway. The silicone stent is intended for implantation into the airway by a physician using the recommended deployment system or an equivalent rigid bronchoscope and stent placement system that accepts the maximum stent diameter being placed. The stent is intended to be in the patient up to 12 months after initial placement.

    Device Description

    The subject device, the VisionAir Patient-Specific Airway Stent, comprises a cloud-based software suite and the patient-specific airway stent. These two function together as a system to treat symptomatic stenosis of the airway per the indications for use. The implantable patient-specific airway stent is designed by a physician, using a CT scan as a guide, in the cloud-based software suite. The airway is segmented from the CT scan and used by the physician in designing a patient-specific stent. When the design is complete, the stent is manufactured via silicone injection into a 3D-printed mold and delivered to the treating physician non-sterile, to be sterilized before use.

    The implantable patient-specific airway stent includes the following general features:

    • Deployed through a compatible rigid bronchoscope system
    • Made of biocompatible, implant-grade silicone
    • Steam sterilizable by the end user
    • Anti-migration branched design
    • Anti-migration studs on anterior surface of main branch
    • Single-use

    The cloud-based software suite has the following general features:

    • Upload of CT scans
    • Segmentation of the airway
    • Design of a patient-specific stent from segmented airway
    • Order management of designed stents
    AI/ML Overview

    The provided text is a 510(k) Summary for the VisionAir Patient-Specific Airway Stent, which focuses on demonstrating substantial equivalence to a predicate device. It primarily discusses the device description, indications for use, technological characteristics, and a list of nonclinical performance and functional tests conducted.

    However, the document does not contain the detailed information required to fulfill the request regarding acceptance criteria and the study that proves the device meets those criteria. Specifically, it lacks:

    1. A table of acceptance criteria and reported device performance: While it lists types of tests, it does not provide specific quantitative acceptance criteria or the actual results from these tests.
    2. Sample size used for the test set and data provenance: No information is given about the sample size for any clinical or performance test, nor the origin or nature of the data (retrospective/prospective, country).
    3. Number of experts used to establish ground truth and qualifications: This information is completely absent.
    4. Adjudication method for the test set: Not mentioned.
    5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study details: No MRMC study is described; the testing mentioned is primarily non-clinical or related to software validation/verification, not human-AI comparative effectiveness.
    6. Standalone (algorithm-only) performance: While "Software Verification and Validation Testing" and "Airway Segmentation Process Testing" are mentioned, no specific standalone performance metrics (e.g., accuracy, precision for segmentation) or acceptance criteria are provided.
    7. Type of ground truth used: The document mentions "Airway Segmentation Process Testing" and refers to a predicate device (Mimics) for "performance reference specification" for dimensional testing of airway segmentation. This implies that the ground truth for segmentation would likely be derived from expert-reviewed segmentations or potentially from known anatomical measurements, but the method is not explicitly detailed.
    8. Sample size for the training set: There is no mention of a "training set" or any machine learning model that would require one. The software aspect described is for physician-guided design and semi-automated segmentation, not explicitly an AI/ML model that undergoes a training phase in the typical sense for medical image analysis.
    9. How the ground truth for the training set was established: Not applicable, as no training set is described.
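
    The summary gives no standalone segmentation metrics. Purely as an illustration of what such a metric could look like, the sketch below computes a Dice similarity coefficient between a predicted and a reference airway mask (synthetic data; nothing here is from the submission):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks (True = airway voxel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Synthetic 3D masks standing in for a segmented airway and its reference
truth = np.zeros((4, 4, 4), dtype=bool)
truth[1:3, 1:3, 1:3] = True      # 8-voxel reference "airway"
pred = truth.copy()
pred[1, 1, 1] = False            # prediction misses one voxel
print(round(dice_coefficient(pred, truth), 3))  # → 0.933
```

    A real validation would report such a score over many scans against expert-drawn reference segmentations; the 510(k) summary discloses neither the metric nor the data used.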

    The document states: "Reference devices, Mimics (K073468) and Osirix MD (K101342) were used for reference software performance specifications." and "Dimensional Testing of Airway Segmentation (reference device Mimics K073468 used for performance reference specification)". These statements hint at software validation, especially for the segmentation component, but do not provide the detailed study design, acceptance criteria, or results.

    In summary, the provided text does not contain the necessary information to answer the request in detail, as it focuses on demonstrating substantial equivalence through non-clinical performance and functional testing rather than a clinical study with acceptance criteria for device performance based on human reader interaction or AI model performance.


    K Number
    K182743
    Manufacturer
    Date Cleared
    2019-10-23

    (390 days)

    Product Code
    Regulation Number
    878.3720
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K073468, K894380, K121048

    Intended Use

    The Patient-Specific Airway Stent is indicated for the treatment of adults ≥22 years of age with symptomatic stenosis of the airway. The silicone stent is intended for implantation into the airway by a physician using the recommended deployment system or an equivalent rigid bronchoscope and stent placement system that accepts the maximum stent diameter being placed. The stent is intended to be in the patient up to 12 months after initial placement.

    Device Description

    The Patient-Specific Airway Stent is comprised of Web Software and the Patient-Specific Silicone Y-Stent. These two function together as a system to treat symptomatic stenosis of the airway. The Patient-Specific Silicone Stent is designed by a physician using a CT scan as a guide in the Web Software. The Web Software gives the user (physician) the ability to upload a scan, view the airway, and design a stent. The stent, after physician approval, is manufactured via silicone injection into a 3D-printed mold and delivered to the treating physician's medical center nonsterile.

    AI/ML Overview

    This document (K182743) is a 510(k) Premarket Notification for a Patient-Specific Airway Stent. It primarily focuses on demonstrating substantial equivalence to a predicate device (ENDOXANE, K971509) and the safety/effectiveness of the device.

    Based on the provided text, the device in question is a Patient-Specific Airway Stent System, which includes Web Software and the Patient-Specific Silicone Y-Stent. The software allows physicians to design the stent based on a CT scan, and then the stent is manufactured via silicone injection into a 3D-printed mold.

    The acceptance criteria and study that proves the device meets them are mostly related to non-clinical performance testing and software verification/validation, rather than a full clinical study with human patients evaluating the AI's diagnostic performance. Therefore, many of the typical acceptance criteria and study elements for an AI-powered diagnostic device are not explicitly detailed in this 510(k) summary.

    Here's an attempt to extract the relevant information based on your request, acknowledging the limitations of the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't provide a direct table of acceptance criteria with specific numerical performance metrics for the software's ability to accurately design the stent or for the final stent's intended clinical outcome in terms of patient-specific fit. Instead, it describes various tests performed to ensure the device functions as intended and is substantially equivalent to the predicate.

    The closest to "acceptance criteria" are the objectives of the non-clinical performance tests, and "reported device performance" is described qualitatively as "supports the claim" or "confirms."

    Please note: The "device performance" here refers to the engineering and functional performance of the stent and software, not clinical outcome metrics (e.g., patient improvement rates).

    Acceptance Criteria (Stated Objective of Test) | Reported Device Performance (as described in document)
    Sterilization Testing: Ability to be steam sterilized to a SAL of 10⁻⁶. | Confirms that the subject device can be steam sterilized to a SAL of 10⁻⁶, using a common cycle in medical centers.
    Material Equivalence (Tear Strength): Subject device material equivalent to predicate's in tear strength. | Supports the claim that the subject device's material is equivalent to the predicate's material in tear strength.
    Fatigue Testing: Device does not fatigue when cyclically compressed over intended life (1 year). | Supports the claim that the subject device does not fatigue when cyclically compressed over the intended life of the implant (1 year).
    Stent Deployment Testing: Able to be deployed by common applicator and rigid bronchoscope system. | Supports the claim that the subject device is able to be deployed by a common applicator and rigid bronchoscope system.
    Biocompatibility Testing: Acceptable for implant up to one year. | Confirmed that the Patient-Specific Silicone Stent is acceptable for use as a medical device following ISO 10993-1. (Specific tests: Cytotoxicity, Sensitization, Irritation, Toxicity, Pyrogenicity, Subacute/Sub-Chronic Toxicity, Genotoxicity, Chemical Characterization.)
    Software Verification and Validation: Software functions as designed; risk mitigations are effective. | Supports the claim that the software functions as designed, including any design mitigations.
    Human Factors and Usability Testing (Web Software): Software is safe and effective when used by intended users in its intended use-environment. | Supports the claim that the software is as safe and as effective as the predicate device when used by its intended users in its intended use-environment.
    Dimensional Testing of Airway Segmentation: Accuracy of segmented airway rendering in proprietary software. | Evaluated the accuracy of segmented airway rendering in proprietary segmentation software. (No specific metric provided, just that it was evaluated.)
    Airway Segmentation Process Validation: Validation of the process of segmenting an airway from a CT scan in proprietary software. | Validated the process of segmenting an airway from a CT scan in proprietary segmentation software. (No specific metric provided, just that it was validated.)
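
    The fatigue test above spans a 1-year implant life; the number of compression cycles such a test must cover can be roughly estimated from respiratory rate (the rate below is an assumed illustrative figure, not taken from the submission):

```python
# Rough cycle count for a 1-year airway-stent fatigue test.
# The 15 breaths/min resting rate is an assumption for illustration;
# the 510(k) summary does not state the cycle count actually used.
breaths_per_minute = 15
minutes_per_year = 60 * 24 * 365
cycles_per_year = breaths_per_minute * minutes_per_year
print(f"{cycles_per_year:,} compression cycles")  # → 7,884,000 compression cycles
```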

    Note on Quantitative Acceptance Criteria: The document explicitly mentions some differences, such as the subject device having lower flat-plate compression strength than the predicate device. However, it states that "Any risks related to these technological differences have been mitigated to an acceptable level," implying that these differences did not prevent meeting overall safety/effectiveness. For the AI component (segmentation and design), specific quantitative acceptance criteria (e.g., Dice score for segmentation accuracy, deviation from ideal stent dimensions) are not provided in this summary document.

    2. Sample Size for Test Set and Data Provenance

    • The document does not specify a sample size for the "test set" in the context of typical AI performance evaluation (e.g., number of CT scans used to validate segmentation or design accuracy).
    • The closest mentions are "Dimensional Testing of Airway Segmentation" and "Airway Segmentation Process Validation." It's implied that some CT scan data was used for these, but neither the sample size nor the provenance (country, retrospective/prospective) of this data is mentioned.

    3. Number of Experts and Qualifications for Ground Truth

    • The document describes the device as a "Patient-Specific Airway Stent" where creation involves a "physician using a CT scan as a guide in the Web Software" and the stent is manufactured "after physician approval."
    • The "Web Software" allows "COS technicians to segment the airway and automatically calculate a centerline."
    • "Ground truth" for the AI component (segmentation, centerline calculation, stent design) is not explicitly defined in terms of expert consensus or pathological verification in this summary. Instead, it appears the software's output is reviewed and approved by a single physician for an individual patient.
    • Therefore, it's not a panel of experts establishing ground truth for a general test set, but rather an individual physician performing the critical review and approval step for each patient. No specific number of experts used to establish ground truth for a general test set is mentioned, nor are their qualifications.
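
    The centerline calculation mentioned above is proprietary and undescribed. As a naive stand-in for illustration only, a centerline can be approximated as the per-slice centroid of the segmented airway mask:

```python
import numpy as np

def naive_centerline(mask: np.ndarray) -> list:
    """Per-slice centroid of a binary (z, y, x) airway mask; a crude
    illustrative stand-in for the undisclosed proprietary method."""
    points = []
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if ys.size:  # skip slices with no airway voxels
            points.append((z, float(ys.mean()), float(xs.mean())))
    return points

# A tiny synthetic airway drifting across three slices
mask = np.zeros((3, 5, 5), dtype=bool)
mask[0, 2, 2] = mask[1, 2, 3] = mask[2, 2, 4] = True
print(naive_centerline(mask))  # → [(0, 2.0, 2.0), (1, 2.0, 3.0), (2, 2.0, 4.0)]
```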

    4. Adjudication Method for the Test Set

    • Given that the "ground truth" for the AI's output is implied to be physician review and approval for each specific case, there is no multi-reader adjudication method (like 2+1 or 3+1) described for a general test set. The process involves one physician approving the design.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC study comparing human readers with and without AI assistance is mentioned or implied. The product is a custom-designed stent system, not a diagnostic AI intended to assist in interpreting medical images. The AI (software) assists in the design and manufacturing process, which is then approved by a physician.

    6. Standalone (Algorithm Only) Performance

    • No standalone (algorithm only) performance metrics are explicitly provided. The software acts as a design tool that then requires physician approval before manufacturing. The document describes "Software Verification and Validation Testing" and "Dimensional Testing of Airway Segmentation" which implies internal testing of the algorithm, but specific standalone metrics (e.g., segmentation accuracy against ground truth) are not reported in this summary.

    7. Type of Ground Truth Used

    • For the software's AI components (segmentation, centerline calculation), the implicit "ground truth" during real-world use is the physician's subjective review and approval based on the CT scan.
    • For the non-clinical tests (e.g., tear strength, fatigue), the ground truth relates to engineering specifications and established test methods.
    • No pathology or outcomes data is mentioned as ground truth for the software's performance, as this is a device design tool, not a diagnostic algorithm.

    8. Sample Size for the Training Set

    • The document does not specify the sample size for the training set used for any AI component (segmentation, centerline calculation). It only refers to a "proprietary software" used by "COS technicians" to segment the airway and calculate the centerline.

    9. How Ground Truth for Training Set Was Established

    • The document does not describe how ground truth for any potential AI training set was established.

    K Number
    K180239
    Date Cleared
    2018-05-16

    (107 days)

    Product Code
    Regulation Number
    888.3030
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K151692, K170272, K073468

    Intended Use

    The Additive Orthopaedics 3D Printed Bone Segments are intended to be used for internal bone fixation for bone fractures or osteotomies in the ankle and foot, such as:

    • Cotton (opening wedge) osteotomies of the medial cuneiform
    • Evans lengthening osteotomies

    The Additive Orthopaedics 3D Printed Bone Segments are intended for use with ancillary plating fixation.

    The Additive Orthopaedics 3D Printed Bone Segments are not intended for use in the spine.

    It is a patient specific device.

    Device Description

    The Additive Orthopaedics Patient Specific 3D Printed Bone Segments is a simple one-piece device constructed individually for each patient using CT image data. It is intended to be used for internal bone fixation for bone fractures or osteotomies in the foot and ankle. The segments are additively manufactured from medical-grade titanium alloy (Ti-6Al-4V ELI). It is a patient-specific device. The bone segments come in a variety of configurations that depend on the geometry of the application. The surgeon approves the design of the 3D Printed Bone Segments by comparing his/her design requirements to engineering drawings prior to the construction of the implant device.

    AI/ML Overview

    This is a 510(k) premarket notification for a medical device, not an AI/ML device. Therefore, the requested information about acceptance criteria and studies (such as MRMC, standalone performance, ground truth, sample sizes for training/test sets, and expert qualifications) is not typically found in this type of document because it pertains to the evaluation of AI/ML algorithm performance.

    Here's what can be extracted from the document regarding the device's evaluation, rephrased to align with the spirit of the request, focusing on how the device meets the regulatory requirements for "substantial equivalence":

    Device Name: Additive Orthopaedics Patient Specific 3D Printed Bone Segments
    510(k) Number: K180239


    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a non-AI/ML device submission, there are no "acceptance criteria" in the traditional sense of performance metrics like AUC, sensitivity, or specificity. Instead, the "acceptance criteria" are related to demonstrating substantial equivalence to a predicate device. The "performance" is shown through comparative testing against that predicate.

    Feature/Test | Acceptance Criteria (for Substantial Equivalence to Predicate) | Reported Device Performance
    Indications for Use | Nearly identical to predicate device. | Verified to be nearly identical to predicate.
    Material | Identical to predicate device (medical-grade titanium alloy, Ti-6Al-4V ELI). | Verified to be identical.
    Manufacturing Process | Identical to predicate device (additive manufacturing). | Verified to be identical.
    Dimensions & Geometry (patient-specific) | Within the range of sizes claimed for the predicate device, developed in a process equivalent to the reference device, and verified by the surgeon. | Demonstrated to meet these conditions.
    Morphological Characterization | Comparable to predicate device. | Results demonstrated identity to the predicate device.
    Mechanical Testing (friction, roughness, durability/abrasion, compressive fatigue) | Comparable to predicate device. | Results demonstrated identity to the predicate device.
    Biocompatibility Testing | Comparable to predicate device. | Results demonstrated identity to the predicate device.

    Study Proving Device Meets Criteria (Substantial Equivalence Study):

    The submission highlights a substantial equivalence study based on non-clinical evidence.

    2. Sample Size Used for the Test Set and Data Provenance:

    This information is not applicable and not provided in the document. The evaluation is based on demonstrating equivalence in materials, manufacturing, indications, and non-clinical performance characteristics (morphological, mechanical, biocompatibility) rather than a "test set" of patient data for an algorithm. The "data" here refers to test results from the device itself and the predicate.

    3. Number of Experts Used to Establish Ground Truth and Qualifications:

    This information is not applicable. "Ground truth" in the context of an AI/ML device (e.g., expert consensus on medical images) is not relevant for this type of device submission. The verification of the patient-specific design is done by the surgeon.

    4. Adjudication Method for the Test Set:

    Not applicable. There is no "test set" in the sense of evaluating diagnostic performance.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    No, an MRMC study was not done. This type of study is relevant for evaluating the impact of AI on human reader performance, which doesn't apply to this 3D-printed bone segment device.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    Not applicable. This device is a physical, patient-specific implant, not a standalone algorithm.

    7. The type of ground truth used:

    The concept of "ground truth" (e.g., pathology, outcomes data) as it applies to AI/ML diagnostic or prognostic devices is not relevant here. For device design, the "ground truth" for the patient-specific geometry is derived from the patient's CT image data and subsequently "verified by the surgeon."

    8. The Sample Size for the Training Set:

    Not applicable. There is no machine learning "training set" for this device. The device is custom-designed for each patient based on their CT scan data.

    9. How the Ground Truth for the Training Set was Established:

    Not applicable, as there is no training set. The patient-specific designs are generated using individual patient CT image data, and the final design is approved by the surgeon.


    K Number
    K132636
    Date Cleared
    2013-10-17

    (56 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K073468

    Intended Use

    The DYONICS PLAN Hip Impingement Planning System software is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner to an output file. It is also intended as pre-operative or post-operative software for simulating/evaluating hip preservation surgical treatment options and historical case review, respectively.

    Device Description

    The Smith & Nephew DYONICS PLAN Hip Impingement Planning System (hereinafter referred to as DYONICS PLAN software) is a software product that allows orthopedic surgeons and other healthcare professionals to visualize and perform analysis of digital images for assessment of hip preservation treatment options pre-operatively or post-operatively. The software enables the user to import computed tomography (CT) images, display various 2D views of the images, execute image segmentation and 3D rendering of the femur and pelvis, generate anatomic measurements, identify the areas and degree of conflict, simulate the resection of bony lesions, perform a dynamic range of motion analysis of the hip joint, and export the results in an output report. The software automatically generates a default estimate for each step of the analysis based on published literature, and the surgeon should always verify and make adjustments of the parameters based on their clinical judgment. The purpose of the software is to support other clinical findings and patient examination when assessing hip preservation treatment options.

    The software is designed to be installed and run locally on a PC-compatible personal computer with a Windows operating system and a graphics card that meets the specified minimum requirements. The software facilitates the importation of CT images in DICOM format and allows the export of the output report in PDF or HTML format which can be referenced pre-operatively, intraoperatively or post-operatively. The user is provided with installation instructions which include the following: a link to a secure website, steps to download the installation file along with a license activation code and password.
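
    The actual DYONICS PLAN segmentation algorithm is not disclosed. As a rough illustration of the kind of CT bone-segmentation step described above, the sketch below thresholds a synthetic slice in Hounsfield units (the threshold value is an assumption for illustration, not a parameter of the device):

```python
import numpy as np

# Synthetic CT slice in Hounsfield units: soft tissue near 40 HU,
# cortical bone near 700 HU.
slice_hu = np.full((8, 8), 40, dtype=np.int16)
slice_hu[2:6, 2:6] = 700

BONE_THRESHOLD_HU = 300                  # illustrative cutoff, not from the device
bone_mask = slice_hu >= BONE_THRESHOLD_HU
print(int(bone_mask.sum()))              # → 16 (pixels classified as bone)
```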

    AI/ML Overview

    The Smith & Nephew DYONICS PLAN Hip Impingement Planning System is a software product designed for orthopedic surgeons and other healthcare professionals to visualize and analyze digital images for assessing hip preservation treatment options.

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not contain a specific table outlining quantitative acceptance criteria and reported device performance metrics. Instead, it states that "Software verification and validation testing demonstrates that the DYONICS PLAN does not raise any new questions of safety and efficacy as compared to the predicate device Mimics cleared in K073468." This implies that the device's performance was evaluated to ensure it functions as intended and is comparable to its predicate device, Mimics (K073468), which is a general surgical planning tool.

    Given the information, the general acceptance criteria can be inferred as:

    Acceptance Criteria CategoryDescriptionReported Device Performance
    Functional EquivalenceThe device performs the same core functionalities as the predicate device relevant to hip impingement planning (e.g., image import, segmentation, 3D rendering, measurement tools, surgical simulation/planning)."Software verification and validation testing demonstrates that the DYONICS PLAN does not raise any new questions of safety and efficacy as compared to the predicate device Mimics cleared in K073468."
    Safety and EfficacyThe device does not introduce new safety concerns or demonstrate a lack of efficacy compared to the predicate device."Software verification and validation testing demonstrates that the DYONICS PLAN does not raise any new questions of safety and efficacy as compared to the predicate device Mimics cleared in K073468."
    Intended UseThe device fulfills its intended use of simulating/evaluating hip preservation surgical treatment options pre-operatively or post-operatively, and historical case review.The device's intended use is clearly stated and is considered met through its functional capabilities.
    Output GenerationThe device accurately generates an output report in PDF or HTML format that can be referenced pre-operatively, intra-operatively, or post-operatively.The software exports the results in an output report.
    User AdjustabilityThe software allows users to verify and adjust automatically generated estimates based on their clinical judgment."The software automatically generates a default estimate for each step of the analysis based on published literature, and the surgeon should always verify and make adjustments of the parameters based on their clinical judgment."

    2. Sample Size Used for the Test Set and the Data Provenance

    The provided text does not specify the sample size used for the test set or the data provenance (e.g., country of origin, retrospective/prospective). The statement "Software verification and validation testing" is a general declaration without specific details about the clinical data used for this testing.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    The document does not provide information on the number of experts used to establish ground truth for a test set, nor their specific qualifications.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method for a test set.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    The provided text does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study, nor does it quantify any effect size of human readers improving with or without AI assistance. The submission focuses on substantial equivalence to a predicate device rather than a comprehensive clinical effectiveness study.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The device is described as "software product that allows orthopedic surgeons and other healthcare professionals to visualize and perform analysis." It also states, "The software automatically generates a default estimate... and the surgeon should always verify and make adjustments... based on their clinical judgment." This indicates that the device is intended to be used with a human-in-the-loop, allowing for surgeon oversight and adjustment. There is no information suggesting a standalone (algorithm-only) performance evaluation.

    7. The Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used for any testing. However, given the nature of a surgical planning system, it's plausible that ground truth would involve:

    • Expert Consensus: For verifying the accuracy of measurements, segmentations, and simulated resections against clinical best practices.
    • Radiological Interpretation: Expert review of images and software outputs.
    • Published Literature: The software generates default estimates "based on published literature," which effectively serves as a form of established ground truth for these estimates.

    8. The Sample Size for the Training Set

    The document does not provide information regarding a training set sample size. This type of detail is often associated with machine learning models, and while the device uses "image segmentation and 3D rendering," the specific details of its underlying algorithms and whether they involve a distinct training phase with a labeled dataset are not disclosed.

    9. How the Ground Truth for the Training Set Was Established

    Since no information about a training set or its sample size is provided, there is no description of how ground truth for a training set was established.


    K Number
    K124051
    Device Name
    THE VAULT SYSTEM
    Date Cleared
    2013-05-17

    (137 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K073468, K073714

    Intended Use

    The VAULT® System is intended for use as a software interface and image manipulation system for the transfer of imaging information from a medical scanner such as Computerized Axial Tomography (CT) or Magnetic Resonance Imaging (MRI). It is also intended as pre-operative software for simulating/evaluating implant placement and surgical treatment options. The physician chooses the output data file for printing and/or subsequent use in CAD modeling or CNC/rapid prototyping.

    Device Description

    The VAULT® System software described here was developed in conformance with the FDA guidance document for industry "Guidance for the Submission of Premarket Notifications for Medical Image Management Devices," July 27, 2000. Based on the information contained in Section G of that document, a final determination of submission content was developed. A secondary reference entitled "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices," May 11, 2005, was also used and resulted in a determination of a "MODERATE" level of concern for the software.

    The VAULT® System software is made available to the user via a web-accessed software interface. The program is a surgeon-directed surgical planning package primarily, but not exclusively, directed at trauma and orthopedic indications. After secure log-in, the user requests, creates, reviews, and finally authorizes their desired surgical plan. When authorizing, the surgeon/user may choose additional options such as implant sizing and/or various file output options.

    AI/ML Overview

    The VAULT® System Surgery Planning Software received 510(k) clearance (K124051) from the FDA. The submission focused on demonstrating substantial equivalence to predicate devices (Mimics Software, K073468, and TraumaCAD Software, K073714) rather than a direct study against predefined acceptance criteria for a novel device. The performance data was evaluated through non-clinical testing.

    Here's a breakdown based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a 510(k) submission demonstrating substantial equivalence, explicit "acceptance criteria" in the sense of predefined thresholds for diagnostic performance metrics (like sensitivity, specificity, AUC) are not presented in the same way as for a novel diagnostic AI device. Instead, the "acceptance criteria" are implied by the functional and safety requirements defined for the VAULT® System and its performance being "equivalent" to the predicates.

    | Feature/Requirement | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- | --- |
    | Functional Equivalence | | |
    | Image Transfer | Transfer imaging information from CT/MRI scanners. | The VAULT® System is intended for use as a software interface and image manipulation system for the transfer of imaging information from a medical scanner such as Computerized Axial Tomography (CT) or Magnetic Resonance Imaging (MRI). |
    | Preoperative Planning | Simulate/evaluate implant placement and surgical treatment options. | It is also intended as pre-operative software for simulating/evaluating implant placement and surgical treatment options. |
    | Output File Generation | Physician chooses output data file for printing/CAD modeling/CNC/rapid-prototyping. | The physician chooses the output data file for printing and/or subsequent use in CAD modeling or CNC/rapid-prototyping. Additional options include implant sizing and/or various file output options. |
    | DICOM Image Use | Use DICOM images. | Yes, uses DICOM images (from feature comparison table). Digital file image upload controlled by the DICOM process met specifications. The VAULT® System performs initial conversion of image files to graphical formats (JPEG, BMP, PNG, TIFF) before planning, an improvement over predicates, which convert post-plan. |
    | Overlays & Templates | Support overlays and templates. | Yes, supports overlays and templates (from feature comparison table). |
    | Accuracy & Integrity | | |
    | Anatomical Model Testing | Required level of accuracy and functionality for anatomical and phantom models. | Anatomical and phantom model digital file testing demonstrated the required level of accuracy and functionality. |
    | Image File Integrity | Image file integrity, accuracy, and suitability after conversion, save, and transfer operations. | Image file integrity, accuracy, and suitability following required conversion, save, and transfer operations met all specifications. |
    | Image Calculations & Measurement | Calculations and measurement of anatomic features and landmarks meet specifications. | Image calculations and measurement of anatomic features and landmarks meet specifications. |
    | Safety | No control over life-saving devices; adherence to safety risk/hazard analysis. | Does not control life-saving devices (from feature comparison table). Safety requirements were developed using a safety risk/hazard analysis based on the ISO 14971:2007 approach. |
    | Software Validation | Traceability, boundary values, and stress testing per FDA guidance. | Functional requirements defined by the VAULT® System Software Requirements Specification (SRS) were tested, and traceability was documented using FDA's "General Principles of Software Validation" guidance document. Validation included boundary values and stress testing as defined by FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." |

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a distinct "test set" with a particular sample size of patient data. The non-clinical performance data relied on:

    • "Anatomical and phantom model digital file testing": The exact number of models used is not specified.
    • The testing of various software functionalities (DICOM process, image file integrity, calculations, measurements).

    The data provenance is not explicitly stated as country of origin or retrospective/prospective data for a clinical study. The testing appears to be primarily software functional and performance testing using internal data (anatomical and phantom models) rather than a large clinical dataset.
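    The image-file-integrity testing mentioned above is conventionally done by hashing a file before and after a save/transfer round-trip and comparing digests. A minimal standard-library sketch of that idea (the file names and stand-in pixel data are illustrative, not from the submission):

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "slice.raw")
    dst = os.path.join(tmp, "slice_copy.raw")
    with open(src, "wb") as f:
        f.write(bytes(range(256)) * 64)  # stand-in for image pixel data
    before = sha256_of(src)
    shutil.copyfile(src, dst)            # simulated save/transfer step
    intact = sha256_of(dst) == before

print(intact)  # → True
```

    A real verification suite would run the same comparison across every supported conversion and transfer path, not just a file copy.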

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not describe the use of human experts to establish ground truth for a diagnostic test set in the conventional sense. The "ground truth" for the software's performance seems to be based on:

    • Specifications: Whether the software performed according to its defined functional and safety specifications ("met specifications").
    • Accuracy against known physical/digital models: For anatomical and phantom model testing, the "ground truth" would be the known parameters of these models.

    There are no details provided about experts involved in establishing this "ground truth" or their qualifications.
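    For phantom-model testing of this kind, "ground truth" is simply the model's designed geometry, and acceptance amounts to checking the software's reported landmark positions against those knowns within a tolerance. A minimal sketch under assumed values (the landmark names, coordinates, and 0.5 mm tolerance are illustrative, not from the submission):

```python
import math

# Known (designed) phantom landmark coordinates, in mm.
known = {"A": (0.0, 0.0, 0.0), "B": (30.0, 0.0, 0.0)}
# Coordinates as reported by the measurement software.
measured = {"A": (0.1, -0.1, 0.0), "B": (29.8, 0.1, 0.1)}

TOL_MM = 0.5  # illustrative acceptance tolerance

# Per-landmark localization error: Euclidean distance known vs. measured.
errors = {name: math.dist(known[name], measured[name]) for name in known}
within_spec = all(err <= TOL_MM for err in errors.values())
print(within_spec)  # → True
```

    The same pattern extends to derived quantities (lengths, angles) by comparing each computed value against the phantom's design parameters.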

    4. Adjudication Method for the Test Set

    Not applicable. The document does not describe an adjudication method as would be used for a clinical study involving human readers or interpretation of results. The testing was focused on meeting software specifications.

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human Reader Improvement with AI Assistance

    No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not described or conducted. This submission focused on the functional equivalence of the software to existing predicate devices.

    6. Whether Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Testing Was Done

    Yes, the performance testing described is essentially "standalone" in the sense that it evaluates the software's inherent functions (image processing, calculations, file handling) without explicitly measuring its impact on human reader performance or a human-in-the-loop scenario. The assessment is of the software itself fulfilling its defined requirements.
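    Standalone verification of this sort, including the boundary-value testing the submission cites, reduces to exercising a function at the edges of its valid input range and confirming invalid inputs are rejected. A minimal sketch (the `scale_measurement` function is a hypothetical stand-in, not part of the VAULT® System):

```python
def scale_measurement(pixels, mm_per_pixel):
    """Convert a pixel-count measurement to millimetres."""
    if pixels < 0:
        raise ValueError("pixel count cannot be negative")
    return pixels * mm_per_pixel

# Boundary values: zero, the smallest positive count, and a large count.
assert scale_measurement(0, 0.5) == 0.0
assert scale_measurement(1, 0.5) == 0.5
assert scale_measurement(10_000, 0.5) == 5_000.0

# Invalid input must be rejected, not silently accepted.
try:
    scale_measurement(-1, 0.5)
    raised = False
except ValueError:
    raised = True
assert raised

print("boundary checks passed")
```

    Stress testing extends the same idea to extreme volumes and sizes of input rather than extreme single values.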

    7. The Type of Ground Truth Used

    The ground truth used for testing appears to be primarily:

    • Software Specifications: The software's ability to "meet specifications" for various functions (DICOM process, image integrity, calculations, measurements).
    • Reference Data/Models: For "anatomical and phantom model digital file testing," the ground truth would be the known, accurate parameters of these models against which the software's output was compared.

    It does not mention ground truth derived from expert consensus, pathology, or outcomes data in a clinical trial setting.

    8. The Sample Size for the Training Set

    The document does not describe a "training set" in the context of machine learning or AI algorithm development. The VAULT® System appears to be a rule-based or traditional image processing software rather than an AI/ML-driven device that requires training data. No training set size is mentioned.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as a training set for machine learning is not mentioned or implied by the device's description or the validation approach.
