OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological procedures (e.g., cochlear implantation).
OTOPLAN consolidates a DICOM viewer, ruler function, and calculator function into one software platform. The user can:
- import DICOM-conformant medical images and view them;
- navigate through the images and segment ENT-relevant structures (semi-automatically), which can be highlighted in the 2D images and the 3D view;
- use a virtual ruler to measure distances geometrically and a calculator to apply established formulae to estimate cochlear length and frequency (see the illustrative sketch below);
- create a virtual trajectory, which can be displayed in the 2D images and the 3D view;
- identify electrode array contacts of a cochlear implant to assess electrode insertion and position; and
- input audiogram-related data generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.

OTOPLAN allows the visualization of third-party information, that is, a cochlear implant electrode array portfolio. The information provided by OTOPLAN is solely assistive and for the benefit of the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform for performing tasks that are otherwise performed manually. The user is therefore required to have clinical experience and judgment. OTOPLAN is designed to run on a PC and requires the 64-bit Microsoft Windows 10 operating system. A PDF reader such as Adobe Acrobat is recommended to access the instructions for use. For computation and usability purposes, the software is designed to be executed on a computer with touch-screen capabilities.
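The document does not state which formulae OTOPLAN applies. As an illustration only, the sketch below combines two well-known relations of the kind such a calculator might use: the Greenwood place-frequency function for the human cochlea and a published linear estimate of cochlear duct length from the basal-turn "A" diameter (the coefficients 4.16 and 4 are taken from one literature estimate and are not confirmed by this document). All input values are hypothetical.

```python
import math


def greenwood_frequency(relative_place: float) -> float:
    """Greenwood place-frequency map for the human cochlea.

    relative_place: fraction of cochlear length from the apex (0.0) to the
    base (1.0). Constants are the commonly cited human values
    (A = 165.4 Hz, a = 2.1, k = 0.88); treat them as illustrative.
    """
    return 165.4 * (10 ** (2.1 * relative_place) - 0.88)


def estimate_cdl_mm(a_value_mm: float) -> float:
    """Estimate cochlear duct length (CDL) from the basal-turn 'A' diameter.

    Uses a linear relation of the form CDL ~= 4.16 * A - 4 (one published
    estimate); the formula OTOPLAN actually implements is not specified here.
    """
    return 4.16 * a_value_mm - 4.0


if __name__ == "__main__":
    a_value = 9.0  # hypothetical A-value measured with the virtual ruler, in mm
    print(f"Estimated cochlear duct length: {estimate_cdl_mm(a_value):.1f} mm")
    # Frequency at a hypothetical place 60% of the way from apex to base:
    print(f"Greenwood frequency at that place: {greenwood_frequency(0.6):.0f} Hz")
```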
The provided text discusses the OTOPLAN device (v2.0) and its substantial equivalence to a predicate device (OTOPLAN v1.3). Acceptance criteria and a detailed study demonstrating that the device meets them are not presented in the standalone form requested for every field below. However, based on the available text, I can extract and infer the following:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with specific numerical targets and performance metrics for the OTOPLAN v2.0 device itself. Instead, it focuses on demonstrating substantial equivalence to the predicate device, OTOPLAN v1.3, and verifying the new features.
However, for the new feature of "Electrode Contact Identification," performance testing was conducted. While specific numerical acceptance criteria (e.g., accuracy percentages) are not explicitly stated in a table, the conclusion states that the "testing demonstrated that the algorithm can accurately identify the electrode contacts."
Since the document stresses "substantial equivalence" and the safety/effectiveness of the updated device, the implicit acceptance criteria are that OTOPLAN v2.0 performs at least as well as the predicate device, that the changes do not adversely affect safety and effectiveness compared to the predicate device, and that the new features perform "accurately."
| Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| All Existing Functions | Substantially equivalent to OTOPLAN v1.3; does not adversely affect safety and effectiveness. Software design verification and validation, hazard analysis, and established moderate level of concern. | OTOPLAN v2.0 maintains the same intended use and functions as OTOPLAN v1.3 for cochlear parametrization, audiogram, virtual trajectory planning, postoperative quality checks, and export report. Existing 3D reconstruction functions (temporal bone, incus, malleus, stapes, facial nerve, chorda tympani, external ear canal) are also the same. Performance is demonstrated through internal testing and software validation. |
| New 3D Reconstruction Functions (Cochlea, Sigmoid sinus, Cochlear bony overhang, Cochlear round window) | Same technological characteristics as functions in the predicate device (e.g., uses similar reconstruction methods). Safety and performance demonstrated through software validation activities and documentation. | These functions use the same reconstruction methods and processes as existing functions in the predicate device. For example, Cochlea uses the same method as temporal bone reconstruction. This was verified through software validation (a generic sketch of this class of reconstruction follows the table). |
| New 3D Reconstruction Function (Electrode contacts - automatic detection) | Accurate identification of electrode contacts. Does not adversely affect the safety and effectiveness of the subject device. | "The testing demonstrated that the algorithm can accurately identify the electrode contacts." Performance was demonstrated through specific non-clinical performance testing and software validation using human temporal bone cadaver specimens. |
| Overall Safety and Effectiveness | Substantially equivalent to the predicate device with regard to intended use, safety, and effectiveness. | The subject device is concluded to be substantially equivalent to the predicate device based on comparison of intended use, technological characteristics, and non-clinical performance testing (Software Verification and Validation, Human Factors and Usability Validation, Internal Test Standards). |
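The table notes that the new 3D reconstruction functions reuse the predicate's reconstruction methods but does not describe those methods. For orientation only, a common generic approach to surface reconstruction from CT data is threshold-based segmentation followed by marching cubes; the Python sketch below illustrates that general technique with scikit-image on a synthetic volume. It is not OTOPLAN's algorithm, and the threshold value is an assumption.

```python
import numpy as np
from skimage import measure  # pip install scikit-image


def reconstruct_surface(volume_hu: np.ndarray, threshold_hu: float = 300.0):
    """Generic surface reconstruction from a CT volume: binarize at a
    Hounsfield-unit threshold, then extract a triangle mesh with marching
    cubes. Returns (vertices, faces) in voxel coordinates."""
    mask = (volume_hu >= threshold_hu).astype(np.float32)
    verts, faces, _normals, _values = measure.marching_cubes(mask, level=0.5)
    return verts, faces


# Hypothetical usage with a synthetic volume (a bright sphere on a dark background).
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
volume = np.where((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 < 20 ** 2,
                  1000.0, -1000.0)
verts, faces = reconstruct_surface(volume)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```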
2. Sample Size Used for the Test Set and Data Provenance
For the specific new feature of "Electrode Contact Identification":
- Sample Size for Test Set: "human temporal bone cadaver specimens" (the exact number is not specified).
- Data Provenance: The specimens were "scanned with a Micro CT" (for ground truth) and "clinical CTs" (for test datasets). This implies a laboratory or research setting. The country of origin is not explicitly stated. The study is likely retrospective as it uses pre-existing or specially prepared cadaver specimens rather than living patients.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- The document does not explicitly state the number or qualifications of experts used to establish the ground truth for the "Electrode Contact Identification" test set. It only states that electrode contacts were "marked for the ground truth dataset."
4. Adjudication Method for the Test Set
- The document does not describe an explicit adjudication method (e.g., 2+1, 3+1). It only mentions that electrode contacts were "marked for the ground truth dataset" for the micro CT scans.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No, an MRMC comparative effectiveness study was not done. The document primarily focuses on demonstrating substantial equivalence to a predicate device and verifying new features, not on the comparative effectiveness of human readers with vs. without AI assistance. The device is described as "assistive" and requiring "user interaction," but no study on human performance improvement is detailed. Human Factors and Usability Validation was performed on the predicate device, not a comparative effectiveness study with AI assistance.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
- Yes, a standalone performance test was done for the "Electrode Contact Identification" algorithm. The text states: "The electrode contact identification algorithm has been applied on the test dataset. The testing demonstrated that the algorithm can accurately identify the electrode contacts." This confirms standalone algorithm testing. The user then "reviews the result and can manually adjust the contact points," indicating the human-in-the-loop aspect during clinical use, but the initial detection was algorithm-only.
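The document does not describe how "accurately" was quantified. A typical standalone metric for this kind of test is the per-contact localization error between algorithm-detected contact positions and the micro-CT-derived ground-truth positions. The sketch below shows such a computation on hypothetical coordinates; the array size, noise level, and the assumption that both point sets are already registered and listed in the same order are illustrative, not taken from the submission.

```python
import numpy as np


def contact_localization_errors(detected_mm: np.ndarray,
                                ground_truth_mm: np.ndarray) -> np.ndarray:
    """Per-contact Euclidean error (mm) between algorithm-detected and
    ground-truth contact centers, assuming both arrays are N x 3, listed in
    the same basal-to-apical order, and registered to a common frame."""
    assert detected_mm.shape == ground_truth_mm.shape
    return np.linalg.norm(detected_mm - ground_truth_mm, axis=1)


# Hypothetical example: 12 contacts, coordinates in mm (not real study data).
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 10.0, size=(12, 3))
detected = truth + rng.normal(scale=0.15, size=truth.shape)

errors = contact_localization_errors(detected, truth)
print(f"mean error {errors.mean():.2f} mm, max error {errors.max():.2f} mm")
```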
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- For the "Electrode Contact Identification" feature: The ground truth was established by "electrode contacts marked" on "human temporal bone cadaver specimens" scanned with a Micro CT. This suggests expert marking/annotation on high-resolution imaging (Micro CT is considered a gold standard for anatomical detail beyond clinical CT).
8. The Sample Size for the Training Set
- The document does not provide information on the sample size for the training set for any of the algorithms or features. It focuses on the validation of the new features.
9. How the Ground Truth for the Training Set Was Established
- Since the sample size for the training set is not provided, the method for establishing its ground truth is also not described in this document.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
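Both the virtual ruler described in the intended use and the "semi-automated measurements" contemplated by this regulation rely on converting pixel coordinates to physical distances via DICOM metadata. The minimal pydicom sketch below shows an in-plane (single-slice) distance measurement using the PixelSpacing attribute; the file path, pixel coordinates, and single-slice simplification are hypothetical, and this is not OTOPLAN's implementation.

```python
import math

import pydicom  # pip install pydicom


def in_plane_distance_mm(dicom_path: str,
                         point_a: tuple[int, int],
                         point_b: tuple[int, int]) -> float:
    """Distance in mm between two (row, col) pixel positions on one CT slice.

    Uses the DICOM PixelSpacing attribute (row spacing, column spacing, in mm).
    A full 3D ruler would also use ImagePositionPatient and
    ImageOrientationPatient; this single-slice version is deliberately minimal."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    row_spacing, col_spacing = (float(v) for v in ds.PixelSpacing)
    d_row = (point_b[0] - point_a[0]) * row_spacing
    d_col = (point_b[1] - point_a[1]) * col_spacing
    return math.hypot(d_row, d_col)


# Hypothetical usage: the path and pixel coordinates are placeholders.
# print(in_plane_distance_mm("slice_001.dcm", (120, 200), (168, 240)))
```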