K Number
K203486
Device Name
Otoplan
Manufacturer
Date Cleared
2021-08-20 (266 days)

Product Code
Regulation Number
892.2050
Panel
EN
Reference & Predicate Devices
Intended Use

OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological procedures (e.g., cochlear implantation).

Device Description

OTOPLAN consolidates a DICOM viewer, a ruler function, and a calculator function into one software platform. The user can

  • import DICOM-conformant medical images and view them.
  • navigate through the images and segment ENT-relevant structures (semi-automatically), which can be highlighted in the 2D images and the 3D view.
  • use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency (see the sketch after this description).
  • create a virtual trajectory, which can be displayed in the 2D images and the 3D view.
  • identify electrode array contacts of a cochlear implant to assess electrode insertion and position.
  • input audiogram-related data generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.

OTOPLAN also allows the visualization of third-party information, that is, a cochlear implant electrode array portfolio.

The information provided by OTOPLAN is solely assistive and intended for the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform for performing tasks that are otherwise performed manually. The user is therefore required to have clinical experience and judgment.
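The 510(k) summary does not disclose which "established formulae" the calculator implements. As an illustration only, here is a minimal Python sketch of the kind of calculation involved, assuming two relations commonly published in the cochlear implant literature: a linear regression of cochlear duct length (CDL) on the manually measured basal-turn diameter (the "A value"), attributed to Alexiades et al., and Greenwood's frequency-position function for the human cochlea. The constants and the example measurement are assumptions, not OTOPLAN's actual implementation.

```python
# Illustrative only -- the 510(k) does not disclose OTOPLAN's actual formulae.
# Assumed relations (both published in the cochlear implant literature):
#   CDL (lateral wall) ~ 4.16 * A - 5.05 mm, where A is the largest basal-turn
#   diameter measured with the virtual ruler (Alexiades et al.).
#   Greenwood frequency-position function for the human cochlea:
#   f(x) = 165.4 * (10**(2.1 * x) - 0.88) Hz, with x in [0, 1] from apex to base.

def cochlear_duct_length_mm(a_value_mm: float) -> float:
    """Estimate cochlear duct length from the manually measured A value."""
    return 4.16 * a_value_mm - 5.05


def greenwood_frequency_hz(relative_position: float) -> float:
    """Characteristic frequency at a relative cochlear position (0 = apex)."""
    return 165.4 * (10 ** (2.1 * relative_position) - 0.88)


if __name__ == "__main__":
    a = 9.3  # mm -- hypothetical ruler measurement, not a real patient value
    print(f"A = {a:.1f} mm -> estimated CDL = {cochlear_duct_length_mm(a):.1f} mm")
    for x in (0.25, 0.50, 0.75):
        print(f"x = {x:.2f} -> {greenwood_frequency_hz(x):,.0f} Hz")
```

In the workflow described above, the A value would come from the user's virtual ruler measurement on the CT image rather than a hard-coded constant, which is consistent with the document's emphasis that all tasks require user interaction.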
OTOPLAN is designed to run on a PC and requires the 64-bit Microsoft Windows 10 operating system. A PDF reader such as Adobe Acrobat is recommended for accessing the instructions for use.

For computation and usability purposes, the software is designed to be executed on a computer with touch-screen capabilities. The minimum hardware requirements are (a hypothetical startup check is sketched after the list):

  • 12.3-inch widescreen display
  • 8 GB of RAM
  • dual-core CPU (such as a 5th-generation Intel i5 or i7)
  • dedicated GPU with OpenGL 4.0 capabilities
  • 250 GB hard drive
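As a hedged illustration of how such minimums might be enforced, the sketch below checks the OS, RAM, and CPU-core requirements at startup. It assumes the third-party psutil package for memory detection; the display size and OpenGL 4.0 capability are not portably queryable from Python and are left as manual checks. Nothing here reflects OTOPLAN's actual behavior.

```python
# Hypothetical startup check against the minimum requirements listed above.
# Assumes the third-party `psutil` package (pip install psutil) for RAM
# detection; screen size and GPU/OpenGL support are omitted (manual checks).
import os
import platform

import psutil

MIN_RAM_GB = 8
MIN_CPU_CORES = 2


def check_minimum_requirements() -> list[str]:
    """Return a list of human-readable problems; empty means all checks pass."""
    problems = []
    if platform.system() != "Windows":
        problems.append(f"expected 64-bit Windows 10, found {platform.system()}")
    elif platform.machine() not in ("AMD64", "x86_64"):
        problems.append(f"expected a 64-bit platform, found {platform.machine()}")
    ram_gb = psutil.virtual_memory().total / 2**30
    if ram_gb < MIN_RAM_GB:
        problems.append(f"insufficient RAM: {ram_gb:.1f} GB < {MIN_RAM_GB} GB")
    cores = os.cpu_count() or 0
    if cores < MIN_CPU_CORES:
        problems.append(f"insufficient CPU cores: {cores} < {MIN_CPU_CORES}")
    return problems


if __name__ == "__main__":
    issues = check_minimum_requirements()
    print("OK" if not issues else "\n".join(issues))
```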
AI/ML Overview

The provided text is a 510(k) summary for the OTOPLAN device. This document primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed clinical study report with specific acceptance criteria and performance metrics for an AI/algorithm component.

Based on the provided text, OTOPLAN is described as a software interface for displaying, segmenting, and transferring medical image data for pre-operative planning and post-operative assessment. It does include functions like semi-automatic segmentation and calculations based on manual 2D measurements, but it largely appears to be a tool that assists human users and does not replace their judgment or perform fully autonomous diagnostics. Therefore, it's unlikely to have the kind of acceptance criteria typically seen for AI/ML diagnostic algorithms (e.g., sensitivity, specificity, AUC).

The document states that "Clinical testing was not required to demonstrate the safety and effectiveness of OTOPLAN. This conclusion is based upon a comparison of intended use, technological characteristics, and nonclinical performance data (Software Verification and Validation Testing, Human Factors and Usability Validation, and Internal Test Standards)." This explicitly means there was no clinical study of the type that would prove the device meets acceptance criteria related to diagnostic performance.

However, I can extract the closest analogues to "acceptance criteria" and a "study that proves the device meets the acceptance criteria" from the provided text, focusing on the software's functional performance and usability. Since this is not a diagnostic AI/ML device that makes independent clinical decisions, the "acceptance criteria" relate to its intended functions and safety.

Here's a breakdown based on the information available:

1. A table of acceptance criteria and the reported device performance

The document does not provide a formal table of specific, quantifiable performance acceptance criteria (e.g., segmentation accuracy, measurement precision) with numerical results as one would expect for an AI diagnostic algorithm. Instead, the "performance" is demonstrated through various validation activities.

| Category | Acceptance Criteria (implied from testing focus) | Reported Device Performance |
| --- | --- | --- |
| Software Functionality | Software functions as intended; outputs are accurate and reliable (e.g., correct calculation of cochlear length, correct display of information, accurate 2D measurements). Software is a "moderate" level of concern. | "All tests have been passed and demonstrate that no question on safety and effectiveness is raised by this technological difference." "The internal tests demonstrate that the subject device can fulfill the expected performance characteristics and no questions of safety or performance were raised." (Referencing comparison with known dimensions.) |
| Human Factors & Usability | Device is safe and effective for the intended users, uses, and use environments; users can successfully perform tasks and there are no critical usability errors. Conformance to FDA guidance and AAMI/ANSI/IEC 62366-1:2015. | "OTOPLAN has been found to be safe and effective for the intended users, uses and use environments." |
| Safety and Effectiveness | No questions of safety or effectiveness are raised by technological differences or overall device operation. | "The subject device is equivalent to the predicate device with regard to intended use, safety and efficacy." "The subject device is substantially equivalent to the predicate device with regard to device performance." |

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Software Verification and Validation Testing & Internal Test Standards:
    • The document mentions "tests with known dimensions which were loaded into OTOPLAN." No specific sample size of medical images or data is given for these internal software tests, nor is the provenance of the "known dimension" data stated (e.g., synthetic versus real anonymized clinical data). Given that this is internal testing of software functionality rather than clinical performance, the test cases are likely proprietary.
  • Human Factors and Usability Validation:
    • Sample Size: "15 users from each user group." (User groups are not specified, but typically refer to the intended users like otologists and neurotologists).
    • Data Provenance: "to be carried out in the US". This implies prospective usability testing with human users.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

  • Software Verification and Validation & Internal Test Standards: The concept of "ground truth" as established by experts for medical image interpretation is not directly applicable to these functional tests. Here, the ground truth refers to "known dimensions" or expected calculation results, which are defined by the software developers and internal quality processes rather than by expert radiologists.
  • Human Factors and Usability Validation: No "ground truth" in the diagnostic sense is established by experts for this type of testing. The "ground truth" for usability testing relates to whether users can successfully complete tasks and whether the device performs as the user expects.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

Not applicable. Adjudication methods (such as 2+1 or 3+1 consensus) are used to establish ground truth in clinical image interpretation studies, typically when expert readers disagree. Since no clinical study involving multi-reader image interpretation was performed (the summary explicitly states that clinical testing was not required), no adjudication method was used.

5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance

No. The document explicitly states: "Clinical testing was not required to demonstrate the safety and effectiveness of OTOPLAN." Therefore, no MRMC comparative effectiveness study was conducted.

6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

  • Standalone Performance: The documentation focuses on the software's functional correctness. It states that OTOPLAN "does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually," that "All tasks performed with OTOPLAN require user interaction," and that "the user is required to have clinical experience and judgment."
    • The internal tests appear to evaluate standalone computational aspects (e.g., "correct calculation according to the published formula and display of the information," and "tests with known dimensions which were loaded into OTOPLAN and results compared to the known dimension"). This validates the algorithm's performance for specific computational tasks, but not standalone clinical diagnostic performance that would replace human judgment. A sketch of a test in this "known dimensions" style follows below.
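The summary does not describe the test harness, so the following is only a hedged illustration of what a "known dimensions" verification test might look like: two points at a known physical distance in a volume with known voxel spacing, with an assertion that a ruler-style measurement reproduces the expected value within tolerance. The measure_distance_mm helper and the tolerance are hypothetical, not OTOPLAN's actual API.

```python
# Hypothetical verification test in the spirit of the "known dimensions" checks
# described in the 510(k) summary. measure_distance_mm stands in for the
# software's virtual-ruler function; it is not OTOPLAN's actual API.
import math


def measure_distance_mm(p1, p2, spacing_mm):
    """Euclidean distance between two voxel indices, scaled by voxel spacing."""
    return math.sqrt(sum(((a - b) * s) ** 2
                         for a, b, s in zip(p1, p2, spacing_mm)))


def test_ruler_against_known_dimension():
    spacing = (0.5, 0.5, 0.5)            # mm per voxel, as in the DICOM header
    p1, p2 = (10, 10, 10), (30, 10, 10)  # 20 voxels apart along one axis
    expected_mm = 10.0                   # known dimension: 20 voxels * 0.5 mm
    measured = measure_distance_mm(p1, p2, spacing)
    assert abs(measured - expected_mm) < 0.01, f"{measured} != {expected_mm}"


if __name__ == "__main__":
    test_ruler_against_known_dimension()
    print("known-dimension ruler test passed")
```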

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • Software Verification and Validation & Internal Test Standards: "Known dimensions" and "published formulas" for calculations. This indicates a ground truth based on pre-defined, mathematically verifiable inputs and outputs.
  • No ground truth from expert consensus, pathology, or outcomes data was used for a clinical study, as no clinical study was performed.

8. The sample size for the training set

The document describes OTOPLAN as a software interface with functions like segmentation and measurement, driven by user interaction or published formulas. It does not describe a machine learning or deep learning model that requires a "training set" in the conventional sense. "Semi-automatic" segmentation is mentioned, but if it relies on algorithms that learn from data, no information is provided about the size of any training set. The device appears to be a software tool with algorithmic functions rather than a continuously learning AI model.

9. How the ground truth for the training set was established

Not applicable, as no "training set" for a machine learning model is described in the document.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).