Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K242120
    Device Name
    OTOPLAN
    Manufacturer
    Date Cleared
    2025-04-11

    (266 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name: OTOPLAN

    Intended Use

    OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological and neurotological procedures (e.g., cochlear implantation).

    Device Description

    OTOPLAN is a Software as a Medical Device (SaMD) which consolidates a DICOM viewer, ruler function, and calculator function into one software platform. The user can

    • import DICOM-conformant medical images, fuse supported images, and view these images (a generic loading sketch follows this description).
    • navigate through the images and segment ENT-relevant structures (semi-automatic/automatic), which can be highlighted in the 2D images and 3D view.
    • use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency (see the sketch after this list).
    • create a virtual trajectory, which can be displayed in the 2D images and 3D view.
    • identify electrode array contacts, lead, and housing of a cochlear implant to assess electrode insertion and position.
    • input audiogram-related data that were generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.
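
    The summary does not state which formulae OTOPLAN applies. As a rough illustration of the two calculations named above, the sketch below measures a straight-line distance from voxel coordinates and maps a cochlear position to a characteristic frequency with the Greenwood frequency-position function; the use of Greenwood's function, the coordinates, and the spacing values are assumptions for illustration, not details taken from the submission.

```python
import numpy as np

def ruler_distance_mm(p1_voxel, p2_voxel, spacing_mm):
    """Euclidean distance between two voxel coordinates, scaled to millimetres.

    p1_voxel, p2_voxel: (row, col, slice) indices picked in a viewer.
    spacing_mm: per-axis voxel spacing, e.g. from DICOM PixelSpacing/SliceThickness.
    """
    delta = (np.asarray(p2_voxel) - np.asarray(p1_voxel)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(delta))

def greenwood_frequency_hz(relative_position_from_apex):
    """Greenwood frequency-position function for the human cochlea.

    relative_position_from_apex: 0.0 at the apex, 1.0 at the base (fraction of
    basilar-membrane length). Constants A=165.4 Hz, a=2.1, k=0.88 are
    Greenwood's published human values.
    """
    x = np.asarray(relative_position_from_apex, dtype=float)
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

# Hypothetical example: two points on a CT with 0.2 mm isotropic voxels
print(ruler_distance_mm((10, 14, 3), (42, 30, 3), (0.2, 0.2, 0.2)))  # ~7.2 mm
# Characteristic frequency roughly halfway along the cochlea
print(greenwood_frequency_hz(0.5))  # ~1.7 kHz
```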

    OTOPLAN allows the visualization of third-party information, that is, cochlear implant electrodes, implant housings and audio processors.

    The information provided by OTOPLAN is solely assistive and for the benefit of the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually. Therefore, the user is required to have clinical experience and judgment.
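
    The DICOM import and viewing functions are not detailed further in the summary. For orientation only, a generic way to load a single DICOM series into a volume with the pydicom library is sketched below; the directory layout, tag usage, and function name are assumptions and do not describe OTOPLAN's implementation.

```python
import pathlib
import numpy as np
import pydicom

def load_dicom_series(directory):
    """Load one DICOM series from a directory into a 3D volume plus voxel spacing.

    Assumes the directory holds a single axial series as .dcm files with the
    usual geometry tags (ImagePositionPatient, PixelSpacing, SliceThickness).
    """
    slices = [pydicom.dcmread(path) for path in pathlib.Path(directory).glob("*.dcm")]
    # Order slices along the patient z-axis before stacking
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices])
    spacing_mm = (float(slices[0].SliceThickness),
                  float(slices[0].PixelSpacing[0]),
                  float(slices[0].PixelSpacing[1]))
    return volume, spacing_mm
```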

    AI/ML Overview

    The provided document describes the acceptance criteria and the study that proves the device (OTOPLAN version 3.1) meets these criteria for several new functionalities.

    Here's the breakdown:

    Acceptance Criteria and Device Performance Study for OTOPLAN v3.1

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes performance tests for several new automatic functions introduced in OTOPLAN v3.1, broadly categorized into temporal bone, skin, and inner ear segmentation and thickness mapping, and CT-CT and CT-MR image fusion. (A sketch of how the metrics used here are typically computed follows the table.)

    Table: Acceptance Criteria and Reported Device Performance

    Temporal Bone Thickness Mapping
        Acceptance Criteria: Mean Absolute Difference (MAD) ≤ 0.6 mm; 95% Confidence Interval (CI) upper limit ≤ 0.8 mm
        Reported Performance: MAD: 0.17–0.20 mm; CI: 0.19–0.22
        Result: Pass

    Temporal Bone 3D Reconstruction
        Acceptance Criteria: Mean DICE coefficient ≥ 0.85; 95% CI lower limit ≥ 0.85
        Reported Performance: DICE coefficient (R1): 0.88 [CI: 0.87–0.89]; (R2): 0.86 [CI: 0.85–0.87]; (R3): 0.89 [CI: 0.88–0.90]
        Result: Pass

    Skin Thickness Mapping
        Acceptance Criteria: Mean Absolute Difference (MAD) ≤ 0.6 mm; 95% CI upper limit ≤ 0.8 mm
        Reported Performance: MAD: 0.21–0.23 mm; CI: 0.23–0.26
        Result: Pass

    Skin 3D Reconstruction
        Acceptance Criteria: Mean DICE coefficient ≥ 0.68; 95% CI lower limit ≥ 0.68
        Reported Performance: DICE coefficient (R1): 0.89 [CI: 0.88–0.90]; (R2): 0.87 [CI: 0.86–0.88]; (R3): 0.86 [CI: 0.84–0.88]
        Result: Pass

    Scala Tympani 3D Reconstruction
        Acceptance Criteria: Mean DICE coefficient ≥ 0.65; 95% CI lower limit ≥ 0.65
        Reported Performance: DICE coefficient: 0.76 [CI: 0.75–0.77]
        Result: Pass

    Inner Ear (Cochlea, Semi-circular canals, internal auditory canal) 3D Reconstruction (CT)
        Acceptance Criteria: Mean DICE coefficient ≥ 0.80; 95% CI lower limit ≥ 0.80
        Reported Performance: DICE coefficient (R1): 0.82 [CI: 0.81–0.83]; (R2): 0.84 [CI: 0.83–0.85]; (R3): 0.85 [CI: 0.84–0.86]
        Result: Pass

    Inner Ear (Cochlea, Semi-circular canals, internal auditory canal) 3D Reconstruction (MR)
        Acceptance Criteria: Mean DICE coefficient ≥ 0.80; 95% CI lower limit ≥ 0.80
        Reported Performance: DICE coefficient (R1): 0.81 [CI: 0.80–0.82]; (R2): 0.83 [CI: 0.82–0.84]; (R3): 0.84 [CI: 0.83–0.85]
        Result: Pass

    Cochlear Parameters (CT)
        Acceptance Criteria: Mean absolute error (MAE) of the CDLoc measurement ≤ 1.5 mm
        Reported Performance: MAE (±SD) for CDLoc: R1: 0.59 ± 0.37 mm; R2: 0.64 ± 0.44 mm; R3: 0.62 ± 0.39 mm
        Result: Pass

    Cochlear Parameters (MR)
        Acceptance Criteria: Mean absolute error (MAE) of the CDLoc measurement ≤ 1.5 mm
        Reported Performance: MAE (±SD) for CDLoc: R1: 0.56 ± 0.42 mm; R2: 0.70 ± 0.39 mm; R3: 0.64 ± 0.43 mm
        Result: Pass

    Image Fusion (CT-CT) – Semitones
        Acceptance Criteria: Maximum mean absolute semitone error per electrode contact
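
    The table relies on standard agreement metrics: the DICE coefficient for overlap between binary segmentations, mean absolute difference/error for thickness and length measurements, semitone error for frequency deviations, and 95% confidence intervals around the means. A minimal sketch of how such metrics are commonly computed is given below; it is illustrative only (normal-approximation confidence interval, hypothetical inputs) and is not the manufacturer's test code.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DICE overlap between two binary segmentation masks of equal shape."""
    a, b = np.asarray(mask_a, dtype=bool), np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_absolute_error(measured, reference):
    """Mean absolute difference/error between paired measurements (e.g. CDLoc in mm)."""
    return float(np.mean(np.abs(np.asarray(measured, float) - np.asarray(reference, float))))

def semitone_error(f_measured_hz, f_reference_hz):
    """Frequency deviation in semitones: 12 * log2(measured / reference)."""
    return 12.0 * np.log2(np.asarray(f_measured_hz, float) / np.asarray(f_reference_hz, float))

def ci95_of_mean(samples):
    """Normal-approximation 95% confidence interval of the sample mean."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean()
    sem = samples.std(ddof=1) / np.sqrt(samples.size)
    return mean - 1.96 * sem, mean + 1.96 * sem
```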

    K Number
    K220300
    Device Name
    OTOPLAN
    Manufacturer
    Date Cleared
    2022-06-24

    (142 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name: OTOPLAN

    Intended Use

    OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological procedures (e.g., cochlear implantation).

    Device Description

    OTOPLAN consolidates a DICOM viewer, ruler function, and calculator function into one software platform. The user can

    • import DICOM-conformant medical images and view these images.
    • navigate through the images and segment ENT-relevant structures (semi-automatic), which can be highlighted in the 2D images and 3D view.
    • use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency.
    • create a virtual trajectory, which can be displayed in the 2D images and 3D view.
    • identify electrode array contacts of a cochlear implant to assess electrode insertion and position.
    • input audiogram-related data that were generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.

    OTOPLAN allows the visualization of third-party information, that is, a cochlear implant electrode array portfolio. The information provided by OTOPLAN is solely assistive and for the benefit of the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually. Therefore, the user is required to have clinical experience and judgment.

    OTOPLAN is designed to run on a PC and requires the 64-bit Microsoft Windows 10 operating system. A PDF reader such as Adobe Acrobat is recommended to access the instructions for use. For computation and usability purposes, the software is designed to be executed on a computer with touch screen capabilities.

    AI/ML Overview

    The provided text discusses the OTOPLAN device (v2.0) and its substantial equivalence to a predicate device (OTOPLAN v1.3). Acceptance criteria and a detailed study proving that the device meets them are not presented in full for every aspect below. However, based on the available text, I can extract and infer the following:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of acceptance criteria with specific numerical targets and performance metrics for the OTOPLAN v2.0 device itself. Instead, it focuses on demonstrating substantial equivalence to the predicate device, OTOPLAN v1.3, and verifying the new features.

    However, for the new feature of "Electrode Contact Identification," performance testing was conducted. While specific numerical acceptance criteria (e.g., accuracy percentages) are not explicitly stated in a table, the conclusion states that the "testing demonstrated that the algorithm can accurately identify the electrode contacts."

    Since the document stresses "substantial equivalence" and the safety/effectiveness of the updated device, the implicit acceptance criteria are that OTOPLAN v2.0 performs at least as well as the predicate device, that it does not adversely affect safety and effectiveness, and that the new features perform "accurately."

    All Existing Functions
        Acceptance Criteria (Implicit): Substantially equivalent to OTOPLAN v1.3; does not adversely affect safety and effectiveness. Software design verification and validation, hazard analysis, and established moderate level of concern.
        Reported Performance: OTOPLAN v2.0 maintains the same intended use and functions as OTOPLAN v1.3 for cochlear parametrization, audiogram, virtual trajectory planning, postoperative quality checks, and export report. Existing 3D reconstruction functions (temporal bone, incus, malleus, stapes, facial nerve, chorda tympani, external ear canal) are also the same. Performance is demonstrated through internal testing and software validation.

    New 3D Reconstruction Functions (Cochlea, Sigmoid sinus, Cochlear bony overhang, Cochlear round window)
        Acceptance Criteria (Implicit): Same technological characteristics as functions in the predicate device (e.g., uses similar reconstruction methods). Safety and performance demonstrated through software validation activities and documentation.
        Reported Performance: These functions use the same reconstruction methods and processes as existing functions in the predicate device; for example, the cochlea uses the same method as temporal bone reconstruction. This was verified through software validation.

    New 3D Reconstruction Function (Electrode contacts – automatic detection)
        Acceptance Criteria (Implicit): Accurate identification of electrode contacts; does not adversely affect the safety and effectiveness of the subject device.
        Reported Performance: "The testing demonstrated that the algorithm can accurately identify the electrode contacts." Performance was demonstrated through specific non-clinical performance testing and software validation using human temporal bone cadaver specimens.

    Overall Safety and Effectiveness
        Acceptance Criteria (Implicit): Substantially equivalent to the predicate device with regard to intended use, safety, and effectiveness.
        Reported Performance: The subject device is concluded to be substantially equivalent to the predicate device based on comparison of intended use, technological characteristics, and non-clinical performance testing (Software Verification and Validation, Human Factors and Usability Validation, Internal Test Standards).

    2. Sample Size Used for the Test Set and Data Provenance

    For the specific new feature of "Electrode Contact Identification":

    • Sample Size for Test Set: "human temporal bone cadaver specimens" (the exact number is not specified).
    • Data Provenance: The specimens were "scanned with a Micro CT" (for ground truth) and "clinical CTs" (for test datasets), which implies a laboratory or research setting. The country of origin is not explicitly stated. The study is likely retrospective, as it uses pre-existing or specially prepared cadaver specimens rather than living patients. (A sketch of how such a detection task could be scored follows this list.)
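
    The summary reports only the qualitative conclusion that the algorithm "can accurately identify the electrode contacts." A common way to score this kind of detection task is to match each ground-truth contact (marked on the micro-CT) to the nearest detected contact (found on the registered clinical CT) and report the localization error; the sketch below assumes that style of evaluation and hypothetical coordinates, and is not taken from the submission.

```python
import numpy as np

def contact_localization_errors_mm(detected_mm, ground_truth_mm):
    """Distance (mm) from each ground-truth electrode contact to the nearest detection.

    detected_mm, ground_truth_mm: (N, 3) arrays of contact-centre coordinates in a
    common physical (mm) space, e.g. after registering the clinical CT to the micro-CT.
    """
    detected = np.asarray(detected_mm, dtype=float)
    truth = np.asarray(ground_truth_mm, dtype=float)
    pairwise = np.linalg.norm(truth[:, None, :] - detected[None, :, :], axis=-1)
    return pairwise.min(axis=1)

# Hypothetical example with two contacts and a 0.5 mm tolerance
errors = contact_localization_errors_mm(
    detected_mm=[[1.0, 2.1, 0.9], [3.0, 4.0, 1.1]],
    ground_truth_mm=[[1.0, 2.0, 1.0], [3.1, 4.0, 1.0]],
)
print(errors.mean(), bool(errors.max() < 0.5))
```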

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • The document does not explicitly state the number or qualifications of experts used to establish the ground truth for the "Electrode Contact Identification" test set. It only states that electrode contacts were "marked for the ground truth dataset."

    4. Adjudication Method for the Test Set

    • The document does not describe an explicit adjudication method (e.g., 2+1, 3+1). It only mentions that electrode contacts were "marked for the ground truth dataset" for the micro CT scans.

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. without AI Assistance

    • No, an MRMC comparative effectiveness study was not done. The document primarily focuses on demonstrating substantial equivalence to a predicate device and verifying new features, not on the comparative effectiveness of human readers with vs. without AI assistance. The device is described as "assistive" and requiring "user interaction," but no study on human performance improvement is detailed. Human Factors and Usability Validation was performed on the predicate device, not a comparative effectiveness study with AI assistance.

    6. Whether Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Testing Was Done

    • Yes, a standalone performance test was done for the "Electrode Contact Identification" algorithm. The text states: "The electrode contact identification algorithm has been applied on the test dataset. The testing demonstrated that the algorithm can accurately identify the electrode contacts." This confirms standalone algorithm testing. The user then "reviews the result and can manually adjust the contacts points," indicating the human-in-the-loop aspect during clinical use, but the initial detection was algorithm-only.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • For the "Electrode Contact Identification" feature: The ground truth was established by "electrode contacts marked" on "human temporal bone cadaver specimens" scanned with a Micro CT. This suggests expert marking/annotation on high-resolution imaging (Micro CT is considered a gold standard for anatomical detail beyond clinical CT).

    8. The Sample Size for the Training Set

    • The document does not provide information on the sample size for the training set for any of the algorithms or features. It focuses on the validation of the new features.

    9. How the Ground Truth for the Training Set Was Established

    • Since the sample size for the training set is not provided, the method for establishing its ground truth is also not described in this document.

    K Number
    K203486
    Device Name
    Otoplan
    Manufacturer
    Date Cleared
    2021-08-20

    (266 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name: Otoplan

    Intended Use

    OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological procedures (e.g., cochlear implantation).

    Device Description

    OTOPLAN consolidates a DICOM viewer, ruler function, and calculator function into one software platform. The user can

    • import DICOM-conformant medical images and view these images.
    • navigate through the images and segment ENT-relevant structures (semi-automatic), which can be highlighted in the 2D images and 3D view.
    • use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency.
    • create a virtual trajectory, which can be displayed in the 2D images and 3D view.
    • identify electrode array contacts of a cochlear implant to assess electrode insertion and position.
    • input audiogram-related data that were generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.
      OTOPLAN allows the visualization of third-party information, that is, a cochlear implant electrode array portfolio.

      The information provided by OTOPLAN is solely assistive and for the benefit of the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually. Therefore, the user is required to have clinical experience and judgment.

      OTOPLAN is designed to run on a PC and requires the 64-bit Microsoft Windows 10 operating system. A PDF Reader such as Adobe Acrobat is recommended to access the instructions for use.

      For computation and usability purposes, the software is designed to be executed on a computer with touch screen capabilities. The minimum hardware requirements are:

    • 12.3-inch wide screen
    • 8 GB of RAM
    • 2-core CPU (such as a 5th-generation i5 or i7)
    • dedicated GPU with OpenGL 4.0 capabilities
    • 250 GB hard drive
    AI/ML Overview

    The provided text is a 510(k) summary for the OTOPLAN device. This document primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed clinical study report with specific acceptance criteria and performance metrics for an AI/algorithm component.

    Based on the provided text, OTOPLAN is described as a software interface for displaying, segmenting, and transferring medical image data for pre-operative planning and post-operative assessment. It does include functions like semi-automatic segmentation and calculations based on manual 2D measurements, but it largely appears to be a tool that assists human users and does not replace their judgment or perform fully autonomous diagnostics. Therefore, it's unlikely to have the kind of acceptance criteria typically seen for AI/ML diagnostic algorithms (e.g., sensitivity, specificity, AUC).

    The document states that "Clinical testing was not required to demonstrate the safety and effectiveness of OTOPLAN. This conclusion is based upon a comparison of intended use, technological characteristics, and nonclinical performance data (Software Verification and Validation Testing, Human Factors and Usability Validation, and Internal Test Standards)." This explicitly means there was no clinical study of the type that would prove the device meets acceptance criteria related to diagnostic performance.

    However, I can extract information related to the closest aspects of "acceptance criteria" and "study that proves the device meets the acceptance criteria" from the provided text, focusing on the software's functional performance and usability. Since this is not a diagnostic AI/ML device in the sense of making independent clinical decisions, the "acceptance criteria" will be related to its intended functions and safety.

    Here's a breakdown based on the information available:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a formal table of specific, quantifiable performance acceptance criteria (e.g., segmentation accuracy, measurement precision) with numerical results as one would expect for an AI diagnostic algorithm. Instead, the "performance" is demonstrated through various validation activities.

    CategoryAcceptance Criteria (Implied from testing focus)Reported Device Performance
    Software FunctionalitySoftware functions as intended; outputs are accurate and reliable (e.g., correct calculation of cochlear length, correct display of information, accurate 2D measurements). Software is "moderate" level of concern."All tests have been passed and demonstrate that no question on safety and effectiveness is raised by this technological difference."
    "The internal tests demonstrate that the subject device can fulfill the expected performance characteristics and no questions of safety or performance were raised." (Referencing comparison with known dimensions).
    Human Factors & UsabilityDevice is safe and effective for intended users, uses, and use environments; users can successfully perform tasks and there are no critical usability errors. Conformance to FDA guidance and AAMI/ANSI/IEC 62366-1:2015."OTOPLAN has been found to be safe and effective for the intended users, uses and use environments."
    Safety and EffectivenessNo questions of safety or effectiveness are raised by technological differences or overall device operation."The subject device is equivalent to the predicate device with regard to intended use, safety and efficacy."
    "The subject device is substantially equivalent to the predicate device with regard to device performance."

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Software Verification and Validation Testing & Internal Test Standards:
      • The document mentions "tests with known dimensions which were loaded into OTOPLAN." No specific "sample size" of medical images or data is mentioned for these internal software tests, nor is the provenance of the "known dimension" data explicitly stated (e.g., synthetic or real anonymized clinical data). Given that this is internal testing of software functionality rather than clinical performance, these are likely proprietary test cases. (A sketch of this style of check follows this list.)
    • Human Factors and Usability Validation:
      • Sample Size: "15 users from each user group." (User groups are not specified, but typically refer to the intended users like otologists and neurotologists).
      • Data Provenance: "to be carried out in the US". This implies prospective usability testing with human users.
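
    Those "known dimensions" checks are described only at a high level. A minimal illustration of that style of verification, measuring a synthetic object of known size and asserting the result is within a tolerance, is sketched below; the phantom, spacing, tolerance, and function names are assumptions for illustration, not details from the submission.

```python
import numpy as np

def measure_length_mm(p1_voxel, p2_voxel, spacing_mm):
    """Physical distance between two voxel coordinates given the voxel spacing."""
    delta = (np.asarray(p2_voxel) - np.asarray(p1_voxel)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(delta))

def test_known_dimension():
    # Hypothetical phantom: a 10.0 mm rod along one axis in a 0.5 mm-spaced volume
    known_length_mm = 10.0
    measured = measure_length_mm((0, 0, 0), (20, 0, 0), (0.5, 0.5, 0.5))
    assert abs(measured - known_length_mm) <= 0.1, measured

test_known_dimension()
```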

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Software Verification and Validation & Internal Test Standards: The concept of "ground truth" as established by experts for medical image interpretation is not directly applicable here for these functional tests. The ground truth refers to "known dimensions" or expected calculation results, which are determined by the software developers and internal quality processes rather than expert radiologists.
    • Human Factors and Usability Validation: No "ground truth" in the diagnostic sense is established by experts for this type of testing. The "ground truth" for usability testing relates to whether users can successfully complete tasks and if the device performs as expected according to the user.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Not applicable. "Adjudication" methods (like 2+1 or 3+1 consensus) are used to establish ground truth in clinical image interpretation studies, typically when there's ambiguity or disagreement among expert readers. Since no clinical study involving image interpretation by multiple readers in this manner was performed (as explicitly stated that clinical testing was not required), no such adjudication method was used.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance

    No. The document explicitly states: "Clinical testing was not required to demonstrate the safety and effectiveness of OTOPLAN." Therefore, no MRMC comparative effectiveness study was conducted.

    6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

    • Standalone Performance: The documentation focuses on the software's functional correctness. It states that OTOPLAN "does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually." It emphasizes that "All tasks performed with OTOPLAN require user interaction" and "the user is required to have clinical experience and judgment."
      • The internal tests seem to evaluate the standalone computational aspects (e.g., "correct calculation according to the published formula and display of the information," "tests with known dimensions which were loaded into OTOPLAN and results compared to the known dimension"). This validates the algorithm's performance for specific computational tasks but not its overall clinical diagnostic performance in a "standalone" fashion that replaces human judgment.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Software Verification and Validation & Internal Test Standards: "Known dimensions" and "Published formulas" for calculations. This indicates a ground truth based on pre-defined, mathematically verifiable inputs and outputs.
    • No ground truth from expert consensus, pathology, or outcomes data was used for a clinical study, as no clinical study was performed.

    8. The sample size for the training set

    The document describes OTOPLAN as a software interface with functions like segmentation and measurement, often based on user interaction or published formulas. It does not describe a machine learning or deep learning model that requires a "training set" in the conventional sense. The "semi-automatic" segmentation is mentioned, but if it uses algorithms that learn from data, no information is provided about such a training set size. This device appears to be a software tool with algorithmic functions rather than a continuously learning AI model.

    9. How the ground truth for the training set was established

    Not applicable, as no "training set" for a machine learning model is described in the document.

