510(k) Data Aggregation

    K Number
    K242120
    Device Name
    OTOPLAN
    Manufacturer
    Date Cleared
    2025-04-11

    (266 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological and neurotological procedures (e.g., cochlear implantation).

    Device Description

    OTOPLAN is Software as a Medical Device (SaMD) that consolidates a DICOM viewer, a ruler function, and a calculator function into one software platform. The user can

    • import DICOM-conform medical images, fuse supported images and view these images.
    • navigate through the images and segment ENT relevant structures (semi-automatic/automatic), which can be highlighted in the 2D images and 3D view.
    • use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency.
    • create a virtual trajectory, which can be displayed in the 2D images and 3D view.
    • identify electrode array contacts, lead, and housing of a cochlear implant to assess electrode insertion and position.
    • input audiogram-related data that were generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.

    OTOPLAN allows the visualization of third-party information, that is, cochlear implant electrodes, implant housings and audio processors.

    The information provided by OTOPLAN is solely assistive and for the benefit of the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually. Therefore, the user is required to have clinical experience and judgment.

    AI/ML Overview

    The provided document describes the acceptance criteria and the study that proves the device (OTOPLAN version 3.1) meets these criteria for several new functionalities.

    Here's the breakdown:

    Acceptance Criteria and Device Performance Study for OTOPLAN v3.1

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes performance tests for several new automatic functions introduced in OTOPLAN v3.1. These are broadly categorized into Temporal Bone, Skin, and Inner Ear segmentation and thickness mapping, and CT-CT and CT-MR Image Fusion.

    Table: Acceptance Criteria and Reported Device Performance

| Functionality Tested | Acceptance Criteria | Reported Device Performance | Pass/Fail |
|---|---|---|---|
| Temporal Bone Thickness Mapping | Mean Absolute Difference (MAD) ≤ 0.6 mm; 95% CI upper limit ≤ 0.8 mm | MAD: 0.17–0.20 mm; CI upper limit: 0.19–0.22 mm | Pass |
| Temporal Bone 3D Reconstruction | Mean DICE coefficient ≥ 0.85; 95% CI lower limit ≥ 0.85 | DICE R1: 0.88 [CI: 0.87–0.89]; R2: 0.86 [CI: 0.85–0.87]; R3: 0.89 [CI: 0.88–0.90] | Pass |
| Skin Thickness Mapping | MAD ≤ 0.6 mm; 95% CI upper limit ≤ 0.8 mm | MAD: 0.21–0.23 mm; CI upper limit: 0.23–0.26 mm | Pass |
| Skin 3D Reconstruction | Mean DICE coefficient ≥ 0.68; 95% CI lower limit ≥ 0.68 | DICE R1: 0.89 [CI: 0.88–0.90]; R2: 0.87 [CI: 0.86–0.88]; R3: 0.86 [CI: 0.84–0.88] | Pass |
| Scala Tympani 3D Reconstruction | Mean DICE coefficient ≥ 0.65; 95% CI lower limit ≥ 0.65 | DICE: 0.76 [CI: 0.75–0.77] | Pass |
| Inner Ear (Cochlea, Semicircular Canals, Internal Auditory Canal) 3D Reconstruction (CT) | Mean DICE coefficient ≥ 0.80; 95% CI lower limit ≥ 0.80 | DICE R1: 0.82 [CI: 0.81–0.83]; R2: 0.84 [CI: 0.83–0.85]; R3: 0.85 [CI: 0.84–0.86] | Pass |
| Inner Ear (Cochlea, Semicircular Canals, Internal Auditory Canal) 3D Reconstruction (MR) | Mean DICE coefficient ≥ 0.80; 95% CI lower limit ≥ 0.80 | DICE R1: 0.81 [CI: 0.80–0.82]; R2: 0.83 [CI: 0.82–0.84]; R3: 0.84 [CI: 0.83–0.85] | Pass |
| Cochlear Parameters (CT) | Mean absolute error (MAE) of CDLoc measurement ≤ 1.5 mm | MAE (±SD) for CDLoc: R1: 0.59 ± 0.37 mm; R2: 0.64 ± 0.44 mm; R3: 0.62 ± 0.39 mm | Pass |
| Cochlear Parameters (MR) | MAE of CDLoc measurement ≤ 1.5 mm | MAE (±SD) for CDLoc: R1: 0.56 ± 0.42 mm; R2: 0.70 ± 0.39 mm; R3: 0.64 ± 0.43 mm | Pass |
| Image Fusion (CT-CT), Semitones | Maximum mean absolute semitone error per electrode contact < 7.0 semitones | Max semitone error per rater: R1: 5.34; R2: 4.43; R3: 4.20 | Pass |
| Image Fusion (CT-CT), Landmark Distances | Mean point distance error at each anatomical landmark per rater < 0.88 mm | RWP: 0.49–0.51 mm; LWP: 0.53–0.66 mm; IWP: 0.47–0.52 mm; SWP: 0.42–0.53 mm | Pass |
| Image Fusion (CT-MR), Semitones | Maximum mean absolute semitone error per electrode contact < 7.0 semitones | Max semitone error per rater: R1: 3.94; R2: 3.90; R3: 3.97 | Pass |
| Image Fusion (CT-MR), Landmark Distances | Mean point distance error at each anatomical landmark per rater < 1.25 mm | RWP: 0.82–0.84 mm; LWP: 0.68–0.85 mm; IWP: 0.63–0.74 mm; SWP: 0.63–0.76 mm | Pass |
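The metrics above follow standard definitions. As a rough illustration only (not OTOPLAN's actual implementation), the Dice coefficient, mean absolute difference, and semitone error can be sketched in a few lines:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks (1.0 = perfect overlap)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def mean_absolute_difference(algo_mm, ref_mm):
    """MAD between algorithm outputs and reference measurements (in mm)."""
    return float(np.mean(np.abs(np.asarray(algo_mm) - np.asarray(ref_mm))))

def semitone_error(f_algo_hz, f_ref_hz):
    """Absolute frequency-allocation error expressed in semitones."""
    return abs(12.0 * np.log2(f_algo_hz / f_ref_hz))

# Toy example: two 2x2 masks sharing one voxel -> Dice = 2*1/(2+1) ≈ 0.667
print(dice_coefficient([[1, 1], [0, 0]], [[1, 0], [0, 0]]))
```

The acceptance thresholds in the table (e.g., mean DICE ≥ 0.85, MAD ≤ 0.6 mm, semitone error < 7.0) would be checked against such values aggregated over the whole test set, together with their 95% confidence intervals.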

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Temporal Bone Thickness Mapping and Skin Thickness Mapping:
      • Test set: 43 temporal bones (29 patients)
      • Data provenance: Pooled from 4 clinical sites (retrospective, implied clinical data).
    • Temporal Bone 3D Reconstruction and Skin 3D Reconstruction:
      • Test set: 31 temporal bones (23 patients)
      • Data provenance: Pooled from 4 clinical sites (retrospective, implied clinical data).
    • Scala Tympani 3D Reconstruction:
      • Test set: 450 clinical-resolution CBCT datasets derived from 75 cochleae
      • Data provenance: Not explicitly stated beyond "clinical-resolution CBCT datasets".
    • Inner Ear (Cochlea, Semi-circular canals, internal auditory canal) 3D Reconstruction (CT):
      • Test set: 44 ears (27 patients)
      • Data provenance: Pooled from 1 clinical site (retrospective, implied clinical data).
    • Inner Ear (Cochlea, Semi-circular canals, internal auditory canal) 3D Reconstruction (MR):
      • Test set: 41 ears (24 patients)
      • Data provenance: Pooled from 4 clinical sites (retrospective, implied clinical data).
    • Cochlear Parameters (CT):
      • Test set: 61 ears (53 patients)
      • Data provenance: Pooled from 4 clinical sites (retrospective, implied clinical data).
    • Cochlear Parameters (MR):
      • Test set: 63 ears (52 patients)
      • Data provenance: Pooled from 4 clinical sites (retrospective, implied clinical data).
    • Image Fusion (CT-CT):
      • Test set: 32 temporal bones (32 patients)
      • Data provenance: Pooled from 4 clinical sites (retrospective, implied clinical data).
    • Image Fusion (CT-MR):
      • Test set: 31 temporal bones (25 patients)
      • Data provenance: Pooled from 4 clinical sites (retrospective, implied clinical data).

    General Note on Data Provenance: The document consistently describes the data as "Pooled (X clinical sites)". This implies retrospective patient data drawn from multiple clinical centers, but the countries of origin and the exact collection mechanism (prospective vs. retrospective) are not detailed beyond this.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • General across all segmentation/measurement tasks: Three qualified surgeons
    • Qualifications: The document specifies "three qualified surgeons" (for anatomical annotations and measurements), "three experienced otologists" (for the accuracy review of the Scala Tympani binary masks), and "three experienced surgeons" (for cochlear parameter measurements and image fusion landmark points). It does not state their years of experience or board certification status, but "qualified" and "experienced" imply relevant expertise in ENT/otology/neurotology.

    4. Adjudication Method for the Test Set

    The adjudication method appears to be either consensus-based or independent review by multiple raters, followed by measurement and comparison.

    • For 3D Reconstruction ground truth, "Three surgeons annotated each CT slice using 3D Slicer" or "annotated the entire inner ear slice by slice" or "Binary masks were generated for each sample and independently reviewed for accuracy by three experienced otologists."
    • For Thickness Mapping and Cochlear Parameters ground truth, "Thickness manually measured at 5 locations on each CT image by three surgeons" or "The cochlear parameters were manually measured in each ear by three experienced surgeons."
    • For Image Fusion ground truth, "Cochlear parameters were manually measured by three experienced surgeons. Electrode contact positions were defined, and the software calculated the insertion metrics and frequency allocation" or "3D coordinates of points were manually measured on each post-operative image by 3 experienced surgeons."

    A specific "2+1" or "3+1" adjudication scheme is not named. The repeated use of three surgeons or otologists suggests that discrepancies were resolved implicitly or through aggregation (e.g., averaging their measurements). For some metrics (DICE coefficient, MAE, and maximum semitone error reported per rater), the algorithm's performance is compared against each rater individually, implying that each rater's annotation served as a separate reference, or was used to characterize inter-rater variability before comparison with the algorithm. For the primary ground truth, however, multiple experts appear to have been used for consistency.
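The per-rater comparison described above can be sketched as follows; the rater labels match the tables, but all measurement values here are purely illustrative:

```python
import numpy as np

# Hypothetical reference measurements (mm) from three raters for the same ears,
# plus the algorithm's outputs. Values are illustrative, not from the submission.
rater_measurements = {
    "R1": np.array([35.1, 33.8, 36.0]),
    "R2": np.array([35.4, 33.5, 36.3]),
    "R3": np.array([34.9, 33.9, 35.8]),
}
algorithm_output = np.array([35.0, 34.0, 36.1])

# MAE of the algorithm against each rater individually, as reported per rater.
mae_per_rater = {
    rater: float(np.mean(np.abs(algorithm_output - ref)))
    for rater, ref in rater_measurements.items()
}

# Alternative: a consensus ground truth by averaging the three raters.
consensus = np.mean(np.stack(list(rater_measurements.values())), axis=0)
mae_consensus = float(np.mean(np.abs(algorithm_output - consensus)))
```

Reporting MAE per rater (rather than only against a consensus) also exposes how much of the error budget is attributable to inter-rater variability.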

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done

    No, an MRMC comparative effectiveness study was not performed. The studies described are validation studies of device performance (algorithm only) against expert-established ground truth, not a comparison of human reader performance with and without AI assistance. The data provided in the tables explicitly show the algorithm's performance (DICE coefficient, MAD, MAE) against the ground truth.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, multiple standalone (algorithm only) performance studies were done. The "Automatic Outputs Validation" section (Table 2 and Table 3) explicitly details the performance of the OTOPLAN algorithms for 3D reconstruction, thickness mapping, cochlear parameter calculation, and image fusion against expert-generated ground truth. These tests evaluate the accuracy of the software's automated outputs directly.

    7. The Type of Ground Truth Used

    The primary type of ground truth used is expert consensus/annotation and manual measurement.

    • For 3D reconstructions and Scala Tympani, ground truth was established by manual annotation of image slices by three surgeons using 3D Slicer to generate binary masks. For Scala Tympani, these masks were also "independently reviewed for accuracy by three experienced otologists."
    • For thickness mapping and cochlear parameters, ground truth was established by manual measurements at specified locations or of specific parameters by three experienced surgeons.
    • For Image Fusion, ground truth involved manual measurement of cochlear parameters and 3D coordinates of landmark points by three experienced surgeons.

    8. The Sample Size for the Training Set

    The document explicitly states: "Algorithm not trained on a dataset. Use established segmentation methods that don't require training." and "The data from the different sites were pooled based on a prior review to confirm consistency in key image acquisition parameters per validated feature. The data was then separated into a development dataset and validation. The dataset used for algorithm development is entirely separate from the dataset used for performance testing. Prior development, the available data was systematically divided into distinct development and test datasets. At no point was data from the test dataset used during algorithm development."

    This indicates that for the "automatic" functions (e.g., 3D reconstruction, thickness mapping, Scala Tympani segmentation), Cascination AG claims either to use rule-based/classical image processing methods that require no machine learning training, or to have developed any trainable component on a "development dataset" strictly separate from the test dataset. The first statement suggests the algorithms may not be deep-learning-based; the second indicates rigorous data separation if trainable components are involved.
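The strict development/test separation described above is commonly enforced with a deterministic patient-level split, so that all images from one patient land in the same partition. A minimal sketch (the hashing scheme is an illustration, not Cascination's actual procedure):

```python
import hashlib

def assign_split(patient_id: str, test_fraction: float = 0.3) -> str:
    """Deterministically assign a patient to 'development' or 'test'.

    Hash-based, so the assignment is stable across runs and independent of
    processing order; splitting by patient (not image) avoids leakage of a
    patient's other scans into the development set.
    """
    digest = hashlib.sha256(patient_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform-ish value in [0, 1]
    return "test" if bucket < test_fraction else "development"
```

Because the split is keyed on the patient ID, re-running the pipeline or adding new sites never moves a patient between partitions.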

    9. How the Ground Truth for the Training Set Was Established

    Given the statement "Algorithm not trained on a dataset. Use established segmentation methods that don't require training.", there was no "training set" in the traditional machine-learning sense requiring a separate ground-truth establishment process. If a "development dataset" was used (as the data-separation statement implies), the document does not specify how its ground truth was established, only that it was kept entirely separate from the test set. For "established segmentation methods," ground truth derives implicitly from the underlying principles of those methods (e.g., definitions of anatomical structures and the physics of image formation).
