The iotaSOFT™ Insertion System is intended to aid the surgeon in placement of cochlear implant electrode arrays into a radiographically normal cochlea by controlling the speed of implant insertion. The iotaSOFT Insertion System is intended for use in cochlear implant patients ages 12 years and older during cochlear implant procedures using either a round window or cochleostomy approach.
The iotaSOFT™ Insertion System is designed to assist the surgeon during cochlear implantation by controlling electrode array insertion speed. The system consists of a single-use, sterile drive unit (iotaSOFT™ DRIVE Unit) connected to a reusable, non-sterile touch screen control console and footpedal (iotaSOFT Controller and Accessories) (Figure 1).
Device Acceptance Criteria and Performance Study: iotaSOFT™ Insertion System
The iotaSOFT™ Insertion System is a powered insertion device designed to aid surgeons in placing cochlear implant electrode arrays. The acceptance criteria for the device, and the studies conducted to demonstrate that it meets them, are detailed below, synthesizing information from the provided document.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the iotaSOFT™ Insertion System are primarily outlined in the "SPECIAL CONTROLS" section and implicitly through the "SUMMARY OF NONCLINICAL/BENCH STUDIES" and "SUMMARY OF CLINICAL TESTING" sections. The reported performance is extracted from the respective study summaries.
| Acceptance Criteria Category | Specific Criteria | Reported Device Performance |
|---|---|---|
| Clinical Performance | Performs as intended under anticipated conditions of use, including evaluation of all adverse events. | - No clinically significant patient safety or outcomes differences based on surgeon experience. - Electrode array impedance measures within normal limits in all subjects at 1-month follow-up. - Neural response telemetry (NRT) measurements present in all subjects with normal cochlear anatomy (n=20/21) at post-operative follow-up. - Adverse events generally as expected for cochlear implant procedures (1 CSF leak noted, but not attributable to device use per instructions). - Insertion time ranged from 57 sec to 6 min, 1 sec (mean 3 min 15 sec). - Post-insertion cochlea view x-ray imaging satisfactory on all insertions with normal cochlear anatomy. |
| Non-Clinical Performance | (i) Verification of CI attachment force, release force, and insertion speed. | - Insertion speed verified to be within 20% of both 0.1 and 1 mm/s. - Maximum insertion force (with array) of <300mN met at both 0.1mm/s and 1mm/s. - Slip force specification of >80mN met at both 1mm/s and 0.1mm/s. - Maximum drive wheel pinch force of <5.6N. - Drive head removal force (when decoupling array) of <100mN. |
| | (ii) Device does not damage or degrade the CI. | - Not explicitly stated as a separate acceptance criterion, but implied by successful insertion and normal impedance/NRT results. Bench testing on insertion force parameters (pinch force, removal force) suggests no damage. |
| | (iii) Comparison testing with manual insertion to evaluate: | |
| | (A) Differences in CI array insertion force. | - Average maximum insertion force for iotaSOFT-assisted insertions was lower than for manual insertions (51% reduction at 0.1 mm/s, 32% reduction at 1 mm/s). - Average insertion force variation for iotaSOFT-assisted insertions was 78% lower than manual at 0.1 mm/s and 70% lower at 1 mm/s. |
| | (B) Intracochlear placement of CI array. | - Cadaveric comparison: iotaSOFT showed results similar to manual insertion across all surgeons and arrays: - Scala Tympani Insertion: Manual 44% (7/16), iotaSOFT 50% (6/12)* - Scala Media Translocation: Manual 43% (7/16), iotaSOFT 42% (5/12)* - Scala Vestibuli Translocation: Manual 13% (2/16), iotaSOFT 8% (1/12)* - Tip Fold-Over: Manual 19% (3/16), iotaSOFT 8% (1/12)* - Insertion Angle: Manual 309.98 ± 126.6°, iotaSOFT 307.4 ± 95.9°. *Four iotaSOFT samples excluded post-insertion. |
| Usability Testing | (i) Successful use to aid in placement of electrode array. | - Data show that surgeons with nurse staff support can successfully use iotaSOFT in the hospital environment. |
| | (ii) Harms caused by use errors observed. | - Harms associated with use errors were deemed minor: prolonged operative/anesthesia time due to delay during surgery and setup. - Mitigated with design changes to the screws and screwdriver, and a training update. - For all tasks, use errors were deemed not to lead to serious patient harm. |
| CI Compatibility Validation | Changes in CI compatibility are determined to significantly affect safety/effectiveness and must be validated. | - Supported with Advanced Bionics HiFocus SlimJ, Cochlear Slim Straight, MED-EL Flex 24/28 (rationale provided for Flex 28 leveraging Flex 24 data). |
| Biocompatibility | Patient-contacting components demonstrated to be biocompatible. | - Patient-contacting components (right/left drive housings, stage base, arm heat shrink, bone screws, silicone) passed all biocompatibility requirements for an external communicating device in limited contact (<24hr). - Testing performed per ISO 10993-1:2009 and FDA Guidance. |
| EMC/Electrical Safety | Performance testing must demonstrate EMC, electrical safety, and thermal safety. | - Passed all tests for Electrical Safety (compliant with AAMI ES60601-1:2012). - Passed all tests for EMC (compliant with IEC 60601-1-2:2014) including conducted/radiated emissions, ESD immunity, radiated/conducted RF immunity, fast transient/burst immunity, surge immunity, and voltage dips/interruptions/variations. |
| Sterility/Non-Pyrogenicity | Patient-contacting components sterile and non-pyrogenic. | - Single-use components (iotaSOFT Drive Unit) provided sterile via EO sterilization (SAL 10-6). - EO/ECH residuals comply with ISO 10993-7. - Endotoxin testing performed (LAL kinetic turbidimetric method), all samples met acceptance criteria (< 2.15 EU/device). |
| Shelf Life | Performance testing supports shelf life (sterility, package, functionality). | - Labeled 6-month shelf life. - Packaging validation: Samples subjected to EO, environmental conditioning, transportation simulation; met acceptance criteria for seal strength (≥1 lb/in) and gross leak (no bubble leak). - Shelf life validation: Samples subjected to EO, environmental conditioning, accelerated aging; met acceptance criteria for seal strength (≥1 lb/in) and gross leak (no bubble leak). |
| Software V&V | Software V&V and hazard analysis for any software components. | - Software and firmware described, verified, and validated (Control Console Software v1.0.3, Drive Unit PCB Firmware v0.1.1). - Documentation provided: SRS, Device Hazard Analysis, Traceability Analysis, V&V, Revision Level History. - Cybersecurity addressed per FDA guidances, threat model analysis (STRIDE) performed. |
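The percent reductions reported in the force-comparison rows above follow from simple arithmetic on the mean force values. The Python sketch below illustrates that calculation with hypothetical force values (the underlying measurements are not reported in the document); `percent_reduction` is an illustrative helper, not part of any study protocol.

```python
def percent_reduction(manual: float, assisted: float) -> float:
    """Percent reduction of the assisted value relative to the manual baseline."""
    return 100.0 * (manual - assisted) / manual

# Hypothetical mean maximum insertion forces (mN), chosen only to
# illustrate the calculation; not taken from the study data.
manual_max_mn = 200.0
assisted_max_mn = 98.0
print(f"{percent_reduction(manual_max_mn, assisted_max_mn):.0f}% reduction")  # prints "51% reduction"
```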
2. Sample Sizes and Data Provenance
The document describes several studies with varying sample sizes and data provenance:
- Bench Performance Testing (Device Characteristics):
  - Sample Size: "All samples tested." (The specific number of devices/arrays is not quantified, but is implied to be sufficient for statistical confidence based on the sterilization/accelerated aging protocols.)
  - Data Provenance: Non-clinical, bench testing.
- Synthetic Cochlea Insertion Force Comparison (Surgeon Bench Comparison Testing):
  - Sample Size: Devices "having undergone sterilization" (specific number not quantified); involved "multiple surgeons" (number not specified).
  - Data Provenance: Non-clinical, bench testing using synthetic cochlea models.
- Cadaveric Comparison Testing:
  - Sample Size: 16 cadaver heads (1 cadaver per surgeon).
  - Data Provenance: Non-clinical, cadaveric study.
  - Country of Origin: Not specified, though such studies are commonly conducted in the US.
  - Retrospective/Prospective: Prospective, as it involves active experimentation.
- Usability Testing:
  - Sample Size: 16 surgeon/nurse teams, 16 cadavers (1 cadaver per team).
  - Data Provenance: Non-clinical, simulated-use environment in cadavers.
  - Country of Origin: Not specified.
  - Retrospective/Prospective: Prospective.
- Clinical Testing:
  - Sample Size: 21 subjects.
  - Data Provenance: Clinical study.
  - Country of Origin: Not specified, but generally US-based for an FDA De Novo submission.
  - Retrospective/Prospective: Prospective.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
For the cadaveric comparison testing and usability testing, the "experts" are the surgeons involved.
- Cadaveric Comparison Testing:
  - Number of Experts: 16 surgeons.
  - Qualifications: Not explicitly stated for each of the 16 surgeons, but they are implied to be experienced CI surgeons. The usability testing section notes: "Surgeon CI experience level ranged from 1 to 30+ years: approximately equal numbers of surgeons with less than 10 years, 10 to 20 years, and greater than 20 years. Between 10 and 180 implantations were performed manually in the year preceding this study, across all surgeons." This provides a strong indication of their expertise.
  - Ground Truth Establishment: Array intracochlear position was determined via 3D x-ray, which serves as an objective measurement of array placement.
- Usability Testing:
  - Number of Experts: 16 surgeon/nurse teams.
  - Qualifications: Same as above for the surgeons.
  - Ground Truth Establishment: "Was task successful?" Y/N responses, observed use errors, user comments, and Likert-scale responses. This is a subjective assessment of usability and safety by the users themselves.
For the clinical testing, the assessment is based on clinical outcomes relevant to device safety and function.
- Number of Experts: 3 surgeons.
- Qualifications: "The principal investigator (PI) of the study... a neurotologist with nearly 40 years experience who implanted ~75 CIs manually in the year prior to this study." The other two surgeons "implanted 25 to 30 CIs each in their careers prior to commencement of this study." This establishes high levels of expertise.
- Ground Truth Establishment: Objective clinical measurements (electrode impedance, NRT measurements, x-ray imaging) and observed adverse events.
For bench performance testing, the ground truth is established by objective engineering measurements (e.g., speed, force values) using calibrated equipment.
4. Adjudication Method for the Test Set
- Cadaveric Comparison Testing: The method for 3D x-ray determination of array position is not detailed in terms of adjudication. It appears to be an objective measurement, not requiring expert consensus adjudication in the typical sense of radiological review.
- Usability Testing: Adjudication is implicitly through observation of tasks, noting of use errors, and direct feedback from the surgeon/nurse teams. No explicit multi-rater adjudication process is described for this feedback.
- Clinical Testing: Clinical outcomes are observed and reported. Although medical staff would monitor these outcomes, a formal adjudication panel (e.g., for adverse events) is not explicitly described, even though such panels are standard practice in clinical trials. The outcomes (normal impedance, NRT presence, satisfactory x-ray, expected AEs) suggest general medical consensus in interpreting these results.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No explicit MRMC comparative effectiveness study, as typically seen in AI diagnostic imaging (comparing human readers with vs. without AI assistance), was performed. The studies are more focused on validating the device's performance compared to manual insertion or evaluating its safe and effective use.
- Cadaveric Comparison: This study compares device-assisted insertions to manual insertions, which is a form of comparative effectiveness, but it's human + device vs. human, not human reader with AI vs. human reader without AI assistance in interpretation. The outcomes are about insertion quality, not diagnostic accuracy.
- Effect Size: Not applicable in the context of an MRMC diagnostic study. The "effect" here is the device's impact on insertion parameters and placement. The device assisted in achieving similar array placement outcomes as manual insertion, while showing lower maximum insertion force and lower insertion force variation.
6. Standalone Performance
The "Bench Performance Testing" section details the standalone performance of the algorithm/device (meaning, the device's mechanical and software functions) without human-in-the-loop performance influencing the intrinsic numerical output of speed, force, etc. This shows whether the device itself can meet its operational specifications.
- Insertion speed verification within 20% of set rates (0.1 and 1 mm/s).
- Maximum insertion force <300mN.
- Slip force >80mN.
- Maximum drive wheel pinch force <5.6N.
- Drive head removal force <100mN.
These tests evaluate the device's inherent mechanical and control capabilities.
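The pass/fail logic behind these bench specifications amounts to simple tolerance checks. As a minimal sketch, assuming hypothetical measured values (none of the actual bench measurements are given in the document), the checks could be expressed as:

```python
def within_tolerance(measured: float, nominal: float, tol: float = 0.20) -> bool:
    """True if measured is within ±tol (as a fraction) of nominal."""
    return abs(measured - nominal) <= tol * nominal

# Hypothetical bench measurements, for illustration only.
speed_ok = within_tolerance(measured=0.108, nominal=0.1)  # speed within 20% of 0.1 mm/s
force_ok = 285.0 < 300.0                                  # max insertion force < 300 mN
slip_ok = 95.0 > 80.0                                     # slip force > 80 mN
pinch_ok = 5.1 < 5.6                                      # drive wheel pinch force < 5.6 N
print(all([speed_ok, force_ok, slip_ok, pinch_ok]))       # prints "True"
```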
7. Type of Ground Truth Used
Different types of ground truth were used depending on the study:
- Bench Performance Testing: Objective engineering measurements (e.g., physical force transducers, speed sensors) and device specifications.
- Synthetic Cochlea Insertion Force Comparison: Objective measurements of force (mN) and force variation (mN/sec) from the synthetic models.
- Cadaveric Comparison Testing: Objective anatomical imaging (3D x-ray) to determine intracochlear position, translocations, tip fold-over, and insertion angle.
- Usability Testing: User-reported success/failure, observed use errors, and subjective Likert scale responses.
- Clinical Testing: Objective clinical outcomes (electrode impedance, neural response telemetry (NRT), post-insertion x-ray imaging findings) and observed adverse events.
8. Sample Size for the Training Set
The document does not describe a separate "training set" in the context of machine learning. The iotaSOFT Insertion System is a robotic/mechanical device with sophisticated software, but there is no indication that it uses machine learning algorithms that require a specific "training set" in the way a diagnostic AI model would. The software and firmware are verified and validated, which is a different process from training a machine learning model.
9. How the Ground Truth for the Training Set was Established
Since there is no explicit machine learning training set mentioned, this question is not applicable. The device's operational parameters are based on engineering design and validated through bench, cadaveric, and clinical testing, not "training" in the ML sense.