K Number
K222577
Date Cleared
2023-01-06

(134 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The TECHFIT Diagnostic Models are patient-specific devices intended to be used as a pre-operative planning tool for treatment in the field of maxillofacial surgery.

The input data file (DICOM imaging information from a medical scanner file) is processed by commercial off-the-shelf software, and the result is an output data file that may then be provided as digital models or used as input to produce physical anatomic models using additive manufacturing.

The physical replica can be used for diagnostic purposes in the field of maxillofacial applications.

TECHFIT Diagnostic Models should be used in conjunction with other diagnostic tools and expert clinical judgment.

TECHFIT Diagnostic Models are not intended to enter the operating room.

Device Description

TECHFIT Diagnostic Models are virtual and additively manufactured anatomic models intended for diagnostic use during maxillofacial surgery planning.

The models are created from a CT scan of the patient's anatomy, which is segmented through Commercial-Off-The-Shelf (COTS) software and converted into virtual 3D models. The surgeon uses these 3D models to make the initial plan/diagnosis based on examination or physical measurement of the patient's anatomy. This includes planning movements of anatomic structures (for example, maxilla and mandible movements for occlusion), planning resections, measuring anatomic distances (e.g., facial symmetry), and determining fixation points and, if requested, the size and shape of implants. These are functions the surgeon performs, not functions the subject device provides by itself.
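
For illustration only, a minimal sketch of a CT-to-mesh pipeline of this general kind is shown below. It is not the cleared COTS workflow (the actual segmentation is performed in Mimics Medical, K183105); the libraries, the bone threshold, and the file names are assumptions.

```python
# Minimal sketch of a DICOM -> segmentation -> printable-mesh pipeline.
# NOT the cleared COTS software; pydicom/scikit-image/trimesh, the HU
# threshold, and all names here are illustrative assumptions.
from pathlib import Path

import numpy as np
import pydicom
import trimesh
from skimage import measure

def load_ct_volume(dicom_dir: str) -> np.ndarray:
    """Read a CT series and stack it into a Hounsfield-unit volume."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

def bone_surface_to_stl(volume: np.ndarray, out_path: str, hu_threshold: float = 300.0) -> None:
    """Threshold bone at an assumed HU level, extract a surface mesh via
    marching cubes, and export it for additive manufacturing.
    A real pipeline would also apply PixelSpacing/SliceThickness scaling."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=hu_threshold)
    trimesh.Trimesh(vertices=verts, faces=faces).export(out_path)

# Example usage (paths are hypothetical):
# bone_surface_to_stl(load_ct_volume("ct_series/"), "mandible.stl")
```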

TECHFIT creates a design proposal for the case based on the information given by the medical professional and the process continues until the final design proposal is approved. Finally, the digital file can be used as an input to produce physical anatomic models through additive manufacturing.

TECHFIT Diagnostic Models are intended for single use only.

AI/ML Overview

The provided text describes the TECHFIT Diagnostic Models, which are patient-specific devices intended for pre-operative planning in maxillofacial surgery. The device processes DICOM imaging data to create virtual 3D models or physical anatomical models through additive manufacturing.

Here's an analysis of the acceptance criteria and study information provided:

1. Table of Acceptance Criteria and Reported Device Performance:

| Test Performed | Acceptance Criteria (Implied) | Reported Device Performance |
| --- | --- | --- |
| 3D printing process validation | The manufacturing process correctly prints TECHFIT Diagnostic Models. | "The acceptance criteria were met." (Details not specified; likely refers to successful execution of Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ).) |
| Dimensional validation | All measurements must comply with the acceptance criterion (implied accuracy within a certain tolerance when comparing original models to scanned 3D-printed models; an illustrative check is sketched below). | "All the measurements complied with the acceptance criterion." (The specific numerical tolerance for dimensional accuracy is not provided.) |
| Packaging validation | The packaging system must withstand simulated shipment and environmental conditioning per ASTM D4169-16 and ISTA 3A without compromising the device. | "The acceptance criteria were met." (Indicates successful completion of tests without damage to the product or loss of packaging integrity.) |
| Diagnostic qualitative evaluation | Maxillofacial surgeons must deem the models diagnostically significant and helpful for identifying pathologies and planning surgical interventions when used with other diagnostic tools. | "The interviewed surgeons deemed TECHFIT Diagnostic models a significant help when it comes to identifying different pathologies and plan a more precise surgical intervention when used in conjunction of other diagnostic tools." |
| Fidelity validation of detectable anatomical landmarks | All selected landmarks in the educational anatomic model must be identified in both the virtual and 3D-printed models while maintaining dimensional accuracy (also illustrated in the sketch below). | "All selected landmarks in the educational anatomic model were identified in the virtual models and the 3D printed models." (Implies successful replication of landmarks; specific accuracy metrics are not detailed.) |
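
To make the pass/fail logic of the dimensional and landmark-fidelity rows concrete, here is a hypothetical sketch; the tolerance value and landmark sets are placeholders, since the submission does not disclose the actual numeric criterion.

```python
# Hypothetical acceptance checks for the dimensional and fidelity rows.
# TOLERANCE_MM and the landmark names are assumptions, not from the 510(k).
TOLERANCE_MM = 0.5  # placeholder tolerance

def dimensional_check(original_mm, scanned_mm, tol=TOLERANCE_MM):
    """Pass if every paired measurement (original model vs. scanned
    3D-printed model) deviates by no more than the tolerance."""
    deviations = [abs(o - s) for o, s in zip(original_mm, scanned_mm)]
    return all(d <= tol for d in deviations), max(deviations)

def fidelity_check(expected_landmarks, found_virtual, found_printed):
    """Pass only if every expected landmark is identified in both the
    virtual model and the 3D-printed model."""
    expected = set(expected_landmarks)
    return expected <= set(found_virtual) and expected <= set(found_printed)
```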

2. Sample size used for the test set and the data provenance:

  • Dimensional Validation: "multiple anatomic models from different patients (mandibles and maxilla)" were used. The exact number of patients or models is not specified.
  • Diagnostic Qualitative Evaluation: The number of clinicians interviewed is not specified, only referred to as "interviewed surgeons".
  • Fidelity Validation of detectable anatomical landmarks: "educational anatomic models" were used. The number of models or specific types is not provided.
  • Data Provenance: Not explicitly stated whether the patient data used for dimensional validation was retrospective or prospective, or its country of origin. The use of "educational anatomic models" for landmark fidelity implies a non-clinical, potentially synthetic, origin for that specific test.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Dimensional Validation: The ground truth for dimensional accuracy would likely be the original CAD model or physical measurement of the source object. No human experts are explicitly mentioned for establishing this ground truth; rather, the comparison is against the "original model".
  • Diagnostic Qualitative Evaluation: "Maxillofacial surgeons" were interviewed. Their specific qualifications (e.g., years of experience, subspecialty) are not provided beyond being maxillofacial surgeons. The number of surgeons is not specified.
  • Fidelity Validation of detectable anatomical landmarks: The ground truth would be the known anatomical landmarks on the "educational anatomic models." No experts are explicitly mentioned for establishing this ground truth, as it is inherent to the models.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

  • No specific adjudication method is mentioned for any of the studies. For the qualitative evaluation, individual surgeon opinions appear to have been aggregated, but no formal adjudication process (such as consensus reading or tie-breaking) is described. For context, a sketch of a common "2+1" scheme follows this item.
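
Purely for context, here is a minimal sketch of what a "2+1" adjudication rule looks like; no such scheme is described for this device, and the function name is illustrative.

```python
# Illustrative "2+1" adjudication (NOT used in this submission): two primary
# readers label each case independently; a third reader breaks disagreements.
def adjudicate_2_plus_1(reader_a, reader_b, adjudicator):
    """Return the final label for one case under a 2+1 scheme."""
    return reader_a if reader_a == reader_b else adjudicator
```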

5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

  • No MRMC comparative effectiveness study is mentioned. The qualitative evaluation focused on the perceived utility of the models as a "significant help" to surgeons rather than quantifying improvement in human reader performance with or without the device. The device itself is described as a "pre-operative planning tool" and "diagnostic models" to be used "in conjunction with other diagnostic tools and expert clinical judgment," implying it is an aid, not a standalone diagnostic tool or an AI for image interpretation. A sketch of what an MRMC effect size would measure follows this item.
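
For context only, below is a simplified sketch of the quantity an MRMC study would report: the reader-averaged AUC gain with assistance. No such study was performed here, and a real analysis would use a variance model such as Obuchowski-Rockette rather than this naive average; the data shapes and names are assumptions.

```python
# Context-only sketch of an MRMC effect size: mean reader AUC with
# assistance minus mean reader AUC without. All names are hypothetical.
from sklearn.metrics import roc_auc_score

def mrmc_effect_size(truth, scores_unaided, scores_aided):
    """scores_*: dict mapping reader id -> per-case scores aligned with
    `truth`. Returns the naive reader-averaged AUC difference."""
    def mean_auc(by_reader):
        return sum(roc_auc_score(truth, s) for s in by_reader.values()) / len(by_reader)
    return mean_auc(scores_aided) - mean_auc(scores_unaided)
```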

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

  • The studies described are primarily focused on the physical and qualitative aspects of the generated models. The device's function involves human input (processing of DICOM data by "commercial off-the-shelf software" and subsequent human interaction for planning), and the output (digital or physical models) is for a human surgeon to use in conjunction with their judgment. Therefore, a standalone algorithm-only performance study, in the sense of a purely interpretative AI, is neither applicable nor described. The "Fidelity Validation" and "Dimensional Validation" assess the accuracy of the output models in replicating anatomical features, which is a form of standalone performance for the manufacturing process, but not for diagnostic interpretation by an AI.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

  • Dimensional Validation: Original digital model or measured physical dimensions.
  • Diagnostic Qualitative Evaluation: Clinical experience and qualitative assessment by maxillofacial surgeons.
  • Fidelity Validation of detectable anatomical landmarks: Known anatomical landmarks on educational models.
  • No mention of pathology or outcomes data being used for ground truth.

8. The sample size for the training set:

  • The document describes the device as processing "DICOM imaging information" through "commercial off-the-shelf software" (Mimics Medical, K183105) and then having "biomedical engineers" work with surgeons for planning. It also states that the "anatomic models are created by TechFit versus the end user" and that they use "the same segmentation/3D reconstruction software to create 3D anatomic models Mimics Medical (K183105)."
  • This suggests that the core segmentation and 3D reconstruction is performed by a previously cleared software (Mimics Medical). The "TECHFIT Diagnostic Models" appear to be the output of a service that utilizes this software, along with human expertise, to create patient-specific models.
  • Therefore, there isn't a "training set" for the "TECHFIT Diagnostic Models" in the typical AI/ML sense, as the core algorithms for image processing and segmentation likely belong to the COTS software (Mimics Medical) and would have their own validation. The TECHFIT process leverages this and human expertise.

9. How the ground truth for the training set was established:

  • As elaborated in point 8, a dedicated training set for "TECHFIT Diagnostic Models" is not described as it is not a de novo AI/ML algorithm being trained by TECHFIT. The ground truth establishment for the underlying COTS software (Mimics Medical) would be relevant here, but is not detailed in this document.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).