
Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K124051
    Device Name: THE VAULT SYSTEM
    Date Cleared: 2013-05-17 (137 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K073468, K073714

    Intended Use

    The VAULT® System is intended for use as a software interface and image manipulation system for the transfer of imaging information from a medical scanner such as Computerized Axial Tomography (CT) or Magnetic Resonance Imaging (MRI). It is also intended as pre-operative software for simulating/evaluating implant placement and surgical treatment options. The physician chooses the output data file for printing and/or subsequent use in CAD modeling or CNC/rapid prototyping.
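    The hand-off to CAD modeling and CNC/rapid prototyping mentioned above typically happens through a neutral mesh format such as STL. As a generic, minimal illustration only (this is not the VAULT® System's actual exporter, and the file name and geometry are invented), here is a pure-Python writer for the ASCII STL format:

```python
# Minimal ASCII STL writer -- a generic sketch of the kind of
# CAD/rapid-prototyping output file described above. NOT the
# VAULT(R) System's exporter; geometry and names are invented.
def write_ascii_stl(path, triangles):
    """triangles: iterable of ((nx, ny, nz), [(x, y, z)] * 3)."""
    with open(path, "w") as f:
        f.write("solid plan_export\n")
        for normal, verts in triangles:
            f.write("  facet normal {} {} {}\n".format(*normal))
            f.write("    outer loop\n")
            for v in verts:
                f.write("      vertex {} {} {}\n".format(*v))
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write("endsolid plan_export\n")


# One toy triangle in the z = 0 plane.
write_ascii_stl("plan.stl", [((0, 0, 1), [(0, 0, 0), (10, 0, 0), (0, 10, 0)])])
```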

    Device Description

    The VAULT® System software described here was developed with reference to the FDA guidance document "Guidance for the Submission of Premarket Notifications for Medical Image Management Devices" (July 27, 2000). Based on the information contained in Section G of that document, a final determination of submission content was developed. A secondary reference, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (May 11, 2005), was also used and resulted in a determination of a "MODERATE" level of concern for the software.

    The VAULT® System software is made available to the user via a web-accessed software interface. The program is a surgeon-directed surgical planning package, primarily but not exclusively directed at trauma and orthopedic indications. After secure log-in, the user requests, creates, reviews, and finally authorizes their desired surgical plan. When authorizing, the surgeon/user may choose additional options such as implant sizing and/or various file output options.
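    The request, create, review, authorize flow described above is, in effect, a forward-only state machine. Purely as a hypothetical sketch (the class, states, and options below are invented and do not reflect the actual VAULT® System design), it could be modeled like this:

```python
# Hypothetical model of the request -> create -> review -> authorize
# workflow described above. All names are illustrative; this is not
# the actual VAULT(R) System implementation.
from enum import Enum, auto


class PlanState(Enum):
    REQUESTED = auto()
    CREATED = auto()
    REVIEWED = auto()
    AUTHORIZED = auto()


class SurgicalPlan:
    # Plans may only advance one step at a time, never backward.
    _ORDER = [PlanState.REQUESTED, PlanState.CREATED,
              PlanState.REVIEWED, PlanState.AUTHORIZED]

    def __init__(self):
        self.state = PlanState.REQUESTED
        self.options = {}

    def advance(self, target):
        """Move the plan one step forward; reject skipped/backward steps."""
        if self._ORDER.index(target) != self._ORDER.index(self.state) + 1:
            raise ValueError(
                "cannot go from %s to %s" % (self.state.name, target.name))
        self.state = target

    def authorize(self, implant_size=None, output_format="STL"):
        """On authorization the surgeon may pick sizing and output options."""
        if implant_size is not None:
            self.options["implant_size"] = implant_size
        self.options["output_format"] = output_format
        self.advance(PlanState.AUTHORIZED)


plan = SurgicalPlan()
plan.advance(PlanState.CREATED)
plan.advance(PlanState.REVIEWED)
plan.authorize(implant_size="size 4")
print(plan.state.name, plan.options)
```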

    AI/ML Overview

    The VAULT® System Surgery Planning Software received 510(k) clearance (K124051) from the FDA. The submission focused on demonstrating substantial equivalence to predicate devices (Mimics Software, K073468, and TraumaCAD Software, K073714) rather than on a direct study against predefined acceptance criteria for a novel device. The performance data was evaluated through non-clinical testing.

    Here's a breakdown based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a 510(k) submission demonstrating substantial equivalence, explicit "acceptance criteria" in the sense of predefined thresholds for diagnostic performance metrics (like sensitivity, specificity, AUC) are not presented in the same way as for a novel diagnostic AI device. Instead, the "acceptance criteria" are implied by the functional and safety requirements defined for the VAULT® System and its performance being "equivalent" to the predicates.

    | Feature/Requirement | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- | --- |
    | Functional Equivalence | | |
    | Image Transfer | Transfer imaging information from CT/MRI scanners. | The VAULT® System is intended for use as a software interface and image manipulation system for the transfer of imaging information from a medical scanner such as Computerized Axial Tomography (CT) or Magnetic Resonance Imaging (MRI). |
    | Preoperative Planning | Simulate/evaluate implant placement and surgical treatment options. | It is also intended as pre-operative software for simulating/evaluating implant placement and surgical treatment options. |
    | Output File Generation | Physician chooses the output data file for printing/CAD modeling/CNC/rapid prototyping. | The physician chooses the output data file for printing and/or subsequent use in CAD modeling or CNC/rapid prototyping. Additional options include implant sizing and/or various file output options. |
    | DICOM Image Use | Use DICOM images. | Yes, uses DICOM images (from feature comparison table). Digital file image upload controlled by the DICOM process met specifications. The VAULT® System performs initial conversion of image files to graphical formats (JPEG, BMP, PNG, TIFF) before planning, an improvement over the predicates, which convert post-plan. |
    | Overlays & Templates | Support overlays and templates. | Yes, supports overlays and templates (from feature comparison table). |
    | Accuracy & Integrity | | |
    | Anatomical Model Testing | Required level of accuracy and functionality for anatomical and phantom models. | Anatomical and phantom model digital file testing demonstrated the required level of accuracy and functionality. |
    | Image File Integrity | Image file integrity, accuracy, and suitability after conversion, save, and transfer operations. | Image file integrity, accuracy, and suitability following the required conversion, save, and transfer operations met all specifications. |
    | Image Calculations & Measurement | Calculations and measurements of anatomic features and landmarks meet specifications. | Image calculations and measurements of anatomic features and landmarks meet specifications. |
    | Safety | No control over life-saving devices; adherence to safety risk/hazard analysis. | Does not control life-saving devices (from feature comparison table). Safety requirements were developed using a safety risk/hazard analysis based on the ISO 14971:2007 approach. |
    | Software Validation | Traceability, boundary values, and stress testing per FDA guidance. | Functional requirements defined by the VAULT® System Software Requirements Specification (SRS) were tested, and traceability was performed and documented per FDA's General Principles of Software Validation guidance. Validation included boundary-value and stress testing as defined by FDA's Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices. |
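    The DICOM row above notes that image files are converted to graphical formats (JPEG, BMP, PNG, TIFF) before planning. As a generic sketch of that kind of step using the open-source pydicom, NumPy, and Pillow libraries (this is not the VAULT® System's actual pipeline, and the file paths are placeholders):

```python
# Generic DICOM -> PNG conversion sketch (pydicom + NumPy + Pillow).
# Illustrates the kind of "initial conversion to graphical formats"
# described above; it is NOT the VAULT(R) System's code.
import numpy as np
import pydicom
from PIL import Image


def dicom_to_png(dicom_path, png_path):
    ds = pydicom.dcmread(dicom_path)            # read the DICOM dataset
    pixels = ds.pixel_array.astype(np.float64)  # decode raw pixel data

    # Rescale intensities to the full 8-bit range so the PNG is viewable.
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels /= pixels.max()
    Image.fromarray((pixels * 255).astype(np.uint8)).save(png_path)


# dicom_to_png("slice.dcm", "slice.png")  # placeholder paths
```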

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a distinct "test set" with a particular sample size of patient data. The non-clinical performance data relied on:

    • "Anatomical and phantom model digital file testing": The exact number of models used is not specified.
    • The testing of various software functionalities (DICOM process, image file integrity, calculations, measurements).

    Data provenance (e.g., country of origin, retrospective vs. prospective collection) is not stated, and no clinical study is described. The testing appears to be primarily software functional and performance testing using internal data (anatomical and phantom models) rather than a large clinical dataset.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not describe the use of human experts to establish ground truth for a diagnostic test set in the conventional sense. The "ground truth" for the software's performance seems to be based on:

    • Specifications: Whether the software performed according to its defined functional and safety specifications ("met specifications").
    • Accuracy against known physical/digital models: For anatomical and phantom model testing, the "ground truth" would be the known parameters of these models.

    There are no details provided about experts involved in establishing this "ground truth" or their qualifications.

    4. Adjudication Method for the Test Set

    Not applicable. The document does not describe an adjudication method as would be used for a clinical study involving human readers or interpretation of results. The testing was focused on meeting software specifications.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not described or conducted. This submission focused on the functional equivalence of the software to existing predicate devices.

    6. If a Standalone (i.e., Algorithm Only, Without Human-in-the-Loop) Performance Study Was Done

    Yes, the performance testing described is essentially "standalone" in the sense that it evaluates the software's inherent functions (image processing, calculations, file handling) without explicitly measuring its impact on human reader performance or a human-in-the-loop scenario. The assessment is of the software itself fulfilling its defined requirements.

    7. The Type of Ground Truth Used

    The ground truth used for testing appears to be primarily:

    • Software Specifications: The software's ability to "meet specifications" for various functions (DICOM process, image integrity, calculations, measurements).
    • Reference Data/Models: For "anatomical and phantom model digital file testing," the ground truth would be the known, accurate parameters of these models against which the software's output was compared (a toy example of such a comparison follows below).

    It does not mention ground truth derived from expert consensus, pathology, or outcomes data in a clinical trial setting.
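    To make the model-based comparison concrete, here is a toy sketch of how reported measurements might be verified against a phantom's known dimensions. Every feature name, value, and tolerance below is invented for illustration; the document itself gives no such numbers:

```python
# Toy verification of software measurements against known phantom
# dimensions (the "ground truth"). All values are invented; the 510(k)
# summary reports no actual numbers.
PHANTOM_TRUTH_MM = {"femoral_head_diameter": 50.0, "canal_width": 14.0}

# Measurements a planning tool hypothetically reported for the phantom.
reported_mm = {"femoral_head_diameter": 49.8, "canal_width": 14.1}

TOLERANCE_MM = 0.5  # acceptance threshold, purely illustrative

for feature, truth in PHANTOM_TRUTH_MM.items():
    error = abs(reported_mm[feature] - truth)
    status = "PASS" if error <= TOLERANCE_MM else "FAIL"
    print("%s: error=%.2f mm -> %s" % (feature, error, status))
```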

    8. The Sample Size for the Training Set

    The document does not describe a "training set" in the context of machine learning or AI algorithm development. The VAULT® System appears to be rule-based or traditional image-processing software rather than an AI/ML-driven device that requires training data. No training-set size is mentioned.

    9. How the Ground Truth for the Training Set was Established

    Not applicable, as a training set for machine learning is not mentioned or implied by the device's description or the validation approach.


    K Number: K120115
    Device Name: ORTHOSIZE
    Manufacturer:
    Date Cleared: 2012-03-14 (61 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K073714

    Intended Use

    Orthosize is indicated for assisting healthcare professionals in preoperative planning of orthopedic surgery. The device allows for overlaying of prosthesis templates on radiological images, and includes tools for performing measurements on the image and for positioning the templates. Clinical judgments and experience are required to properly use the software.

    Device Description

    Orthosize software uses digital templates (template overlays provided by orthopedic manufacturers) to estimate the size of joints. Orthosize software allows the user to place a template over a radiographic image. The user may then select an overlay that best approximates the size of the joint in the image. The user may also translate and rotate the overlay such that it substantially matches the shape and outline of the joint in the image. In this way, Orthosize software enables the user to estimate the size and shape of the implant that most closely approximates the joint presented in the image. Orthosize also allows the user to make simple measurements.
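    Geometrically, the translate-and-rotate overlay operation described above is a 2D rigid transform. A minimal NumPy sketch of that idea follows; the template outline, angle, and shift are invented, and this is not Orthosize's actual implementation:

```python
# 2D rigid transform of a template outline, sketching the
# translate/rotate overlay operation described above. The outline,
# angle, and shift are invented; this is not Orthosize's code.
import numpy as np


def place_template(outline_px, angle_deg, shift_px):
    """Rotate an outline about its centroid, then translate it (pixels)."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    centroid = outline_px.mean(axis=0)
    return (outline_px - centroid) @ rot.T + centroid + np.asarray(shift_px)


# A toy rectangular "prosthesis template" outline in pixel coordinates.
template = np.array([[0.0, 0.0], [40.0, 0.0], [40.0, 120.0], [0.0, 120.0]])
placed = place_template(template, angle_deg=12.0, shift_px=(310.0, 205.0))
print(placed.round(1))
```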

    AI/ML Overview

    This document describes the Orthosize software, a medical device for preoperative planning in orthopedic surgery. The information provided focuses on the software's functional and safety requirements testing, rather than a clinical study evaluating its performance in terms of surgical outcomes or accuracy against a ground truth.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- | --- |
    | Functional Requirements | All functional requirements as defined by the Orthosize Software Requirements Specification (SRS). | "The Orthosize software passed all tests." "Final evaluation showed that testing of all software requirements was completed with passing results." |
    | Safety Requirements | All safety requirements identified by a safety risk analysis performed in accordance with ISO 14971:2007. | "Safety requirements were tested as identified by a safety risk analysis ... The Orthosize software passed all tests." |
    | Software Validation | Traceability performed and documented as defined by FDA's General Principles of Software Validation (January 2002) guidance document. | "traceability was performed and documented as defined by FDA's General Principles of Software Validation" |
    | Stress/Boundary Testing | Boundary values and stress testing as defined by FDA's Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (May 2005) guidance document. | "Validation included boundary values and stress testing as defined by the FDA's Guidance..." |
    | Overall Software Quality | No test faults, no test variances, all software requirements addressed, performs as intended, and meets product specifications. | "No test faults were found. Additionally, no test variances were found during testing. Final assessment using a requirements coverage matrix showed that all software requirements were addressed by the tests." "Evaluation of the test results demonstrates that the software performs as intended and meets product specifications." |
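    As a hedged illustration of what SRS-traceable functional and boundary-value tests of this kind might look like, here is a short pytest sketch. The requirement IDs, the functions under test, and all limits are invented; they are not taken from Orthosize's actual SRS:

```python
# Hypothetical SRS-traceable tests in pytest. Requirement IDs, the
# functions under test, and all limits are invented for illustration.
import pytest

MAX_IMAGE_DIM_PX = 4096  # invented boundary from a hypothetical SRS


def measure_distance_mm(p1, p2, mm_per_px):
    """Toy measurement tool: Euclidean distance scaled to millimetres."""
    return ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5 * mm_per_px


def validate_image_dims(width_px, height_px):
    """Toy input validator enforcing the hypothetical SRS size boundary."""
    return 0 < width_px <= MAX_IMAGE_DIM_PX and 0 < height_px <= MAX_IMAGE_DIM_PX


# SRS-042 (hypothetical): distance measurement is exact for known input.
def test_srs_042_known_distance():
    assert measure_distance_mm((0, 0), (100, 0), mm_per_px=0.2) == pytest.approx(20.0)


# SRS-043 (hypothetical): boundary values for image dimensions.
def test_srs_043_image_dimension_boundaries():
    assert validate_image_dims(MAX_IMAGE_DIM_PX, MAX_IMAGE_DIM_PX)
    assert not validate_image_dims(MAX_IMAGE_DIM_PX + 1, 100)
    assert not validate_image_dims(0, 100)
```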

    2. Sample Size Used for the Test Set and the Data Provenance

    The provided text does not describe a test set in terms of patient data or images. The testing described is intrinsic to the software itself (functional, safety, and validation testing). Therefore, there is no information on sample size for a test set or data provenance (country of origin, retrospective/prospective).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This information is not provided as the document describes software validation and verification, not a clinical study involving expert interpretation of medical images.

    4. Adjudication Method for the Test Set

    This information is not applicable/not provided as there was no test set of medical images requiring adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    An MRMC comparative effectiveness study was not performed or described in the provided text. The document focuses on the software's internal validation, not its clinical impact on human readers.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The document describes the standalone performance of the software in meeting its functional and safety requirements. The performance report confirms that the software, as an algorithm, passed all its defined tests. However, this is in the context of software engineering validation, not in the context of a standalone diagnostic or predictive performance against a ground truth from patient data. The device's indication for use explicitly states it is for "assisting healthcare professionals," implying human-in-the-loop operation.

    7. The Type of Ground Truth Used

    The "ground truth" for the tests performed was against the Software Requirements Specification (SRS) and safety risk analysis. This is an internal ground truth for software development, not a clinical ground truth like pathology, expert consensus on images, or patient outcomes data.

    8. The Sample Size for the Training Set

    The document does not mention a training set. This type of software, in which the user overlays, translates, and rotates digital templates, is likely rule-based or template-driven rather than a machine learning model that requires a training set.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or implied, this information is not applicable/not provided.

