Search Results
Found 3 results
510(k) Data Aggregation
(147 days)
Advanced Surgical Concepts Ltd
The RedEx contained extraction system is indicated to contain and isolate tissue during, or prior to, surgical removal and/or extracorporeal manual morcellation.
The Advanced Surgical Concepts Ltd. RedEx is a contained extraction system proposed under classification regulation 21 CFR 876.1500, device class II, product code GCJ.
The device is provided sterile for single use.
The RedEx consists of a flexible specimen containment Bag with an integrated Opening Ring and Bag Tether, and a separate Guard component to protect the Bag and incision.
The Bag is made from polyurethane (PU) film and comes preloaded in an Introducer. There is a Plunger to deploy the Bag into the abdominal cavity. Any FDA-cleared 12 mm trocar may be used as an accessory for device deployment; this is a standard-sized trocar for use in laparoscopic surgery. A blue arrow on the Introducer provides the user with the correct orientation for insertion of the Introducer to ensure the Bag is correctly deployed.
After the Bag is ejected from the Introducer into the abdominal cavity, the mouth of the Bag returns to its original circular shape. The nitinol wire Opening Ring facilitates placement of the specimen in the Bag. When the specimen is encapsulated and ready for removal or extracorporeal manual morcellation, the Bag Tether is pulled, closing the Bag. The Bag Tether and Opening Ring exit through the 12 mm trocar, indicating the Bag is fully closed. The incision is then increased to the required size, 2.5-6 cm, the trocar is removed, and the mouth of the Bag is opened outside the abdomen. The free end of the Guard, which includes the Guard Petals and is opposite the end with the Guard Ring, is then inserted through the mouth of the Bag, followed by the Anchor Ring. The Guard is actuated by flipping the Rolling Ring inward until the incision is maximized. The Guard Petals, which are made from a tough polyethylene (PE) film, overlap and conform to the incision, protecting the incision and Bag material from inadvertent scalpel strikes and from the traumatic graspers used to grasp and hold the tissue specimen at the incision.
The physician then performs extracorporeal manual morcellation using manual surgical instruments (e.g., a grasper and a scalpel). When the tissue specimen has been removed, the surgeon flips the Rolling Ring in the opposite direction two or three times and pulls on the Removal Ribbon to remove the Guard. The Bag is removed by grasping the Opening Ring and carefully removing the Bag from the incision.
The provided text describes the RedEx device, a contained extraction system for tissue removal during surgery, and its comparison to a predicate device to establish substantial equivalence for FDA clearance. However, it does not contain a study that establishes clear acceptance criteria or reports device performance against such criteria in the quantifiable way requested. Instead, it details various performance tests conducted to demonstrate substantial equivalence to a predicate device.
Therefore, I cannot populate the table or answer all sub-questions as requested, because the specific acceptance criteria and detailed device performance outcomes are not explicitly stated in the provided text in a quantifiable manner (e.g., sensitivity, specificity, accuracy, or specific thresholds for durability, puncture resistance, etc.). The document focuses on demonstrating that the RedEx device performs "as intended" and "as expected" and is "as safe and effective" as the predicate device through various testing, rather than presenting a study with pre-defined, quantifiable acceptance criteria and the RedEx's performance against them.
Here's what can be extracted and what is missing based on the provided text:
1. A table of acceptance criteria and the reported device performance:
This information is not explicitly provided in the document in a quantifiable format. The document states that "In all instances, the RedEx functioned as intended and the results observed were as expected" for the various tests. It also mentions testing areas like "Bag material and seal strength," "Bag material puncture-resistance," "Guard puncture-resistance," "Guard coverage and security," and "Simulated use," but does not provide specific numerical acceptance criteria (e.g., minimum tensile strength, maximum puncture force, minimum coverage percentage) or the corresponding measured performance values for the RedEx device for these tests. The comparison is framed in terms of "Substantial Equivalence," implying that its performance was comparable to the predicate for these aspects.
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated. The document mentions "Side-by-side testing" and "Additional testing" but does not provide the number of units or cases used in these tests.
- Data Provenance: The document implies the tests were conducted by the manufacturer, Advanced Surgical Concepts Ltd, potentially with independent laboratory assistance for biocompatibility (Toxikon). The data is associated with the device's premarket notification (K211234). There is no mention of country of origin for data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of experts and qualifications: Not mentioned. The testing described is primarily bench-top and simulated use. There is no mention of human experts establishing ground truth for the performance of the device in a clinical context within this document.
4. Adjudication method for the test set:
- Adjudication method: Not mentioned. Given the nature of the bench and simulated use testing, a formal adjudication method by experts is unlikely to have been part of the evaluation for these specific performance tests.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC study: Not applicable/Not mentioned. The device is a contained extraction system (medical device), not an AI-based diagnostic tool for interpretation by human readers. Therefore, an MRMC study or AI-related improvement metrics are not relevant or discussed in this document.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Standalone algorithm performance: Not applicable/Not mentioned. This is not an AI-driven device.
7. The type of ground truth used:
- Type of ground truth: For the performance tests listed (e.g., material strength, puncture resistance, simulated use), the "ground truth" would be established by physical measurements, engineering standards, and functional assessments against design specifications or a predicate device's performance. There is no biological or diagnostic "ground truth" (like pathology or clinical outcomes) being assessed in the context of these specific performance tests described.
8. The sample size for the training set:
- Training set sample size: Not applicable/Not mentioned. This is a physical medical device, not an AI model requiring a training set.
9. How the ground truth for the training set was established:
- How ground truth was established for training set: Not applicable/Not mentioned for the same reason as above.
(81 days)
Advanced Surgical Concepts Ltd.
The ContainOR is a bag containment system intended for use by qualified surgeons for tissue extraction and/or power morcellation during general laparoscopic procedures. The ContainOR is compatible with bipolar or electromechanical laparoscopic power morcellators that are between 15 mm and 18 mm in shaft outer diameter and 135 mm and 180 mm in shaft working length and which have an external component that allows for the proper orientation of the laparoscope to perform a contained morcellation.
The ContainOR device consists of two main components:
- A laparoscopic multi-instrument port.
- Tissue pouch (Bag) intended to provide a contained space in the abdomen for the safe morcellation of tissue.
Here's a breakdown of the acceptance criteria and the studies conducted for the ContainOR device, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria are primarily derived from the "Special Controls" section and the results reported in the "Design Verification" table and "Immersion Testing" section. Due to the nature of the device (a containment system), many acceptance criteria are "Pass" for integrity or functionality. Quantitative criteria are present for immersion testing, shelf life, and some simulated use cases.
Test Category / Acceptance Criteria | Reported Device Performance |
---|---|
Material Biocompatibility (Special Control 1) | Device components demonstrated to be biocompatible (per ISO 10993-1, leveraging previous DEN150028 evaluation). |
Device Sterility (Special Control 2) | Achieves a sterility assurance level (SAL) of 10^-6 (per ISO 11137:2006). |
Shelf Life (Special Control 3) | Supports a 1-year labeled shelf-life. Visual inspection, barrier properties (seal strength, bubble leak) demonstrated to be maintained after accelerated aging. Device functionality maintained after 1 year accelerated aging, with no leaks in simulated use. |
Device Functionality – In Vitro (Non-clinical Performance Data) (Special Control 4) | |
Impermeability to tissue, cells, and fluids (Filter Test) (Special Control 4a) | Zero failures in 32 samples (estimated lower bound for passing leakage test ≥ 0.893 with 95% CI) when challenged with B. diminuta. |
Impermeability to tissue, cells, and fluids (Immersion Test - Post Morcellation) (Special Control 4a) | 32 test samples without any failures (upper bound on 95% CI for failure rate of 0.107), designed to detect superiority against a 12.5% failure rate. |
Allows for insertion/withdrawal of laparoscopic instruments while maintaining pneumoperitoneum (Special Control 4b) | Demonstrated through various design verification tests (e.g., Test 3: flow rate, leakage rate – passed after design revision due to initial issue; Test 4: time to insert/remove ContainOR system). |
Provides adequate space to perform morcellation and adequate visualization (Special Control 4c) | Demonstrated by the ability to successfully morcellate tissue in simulated use, with operators making observations that led to additional safety statements in labeling (e.g., morcellator could contact pouch at extreme angle, but not relevant to expected use). Device design allows for direct visualization. |
Compatible laparoscopic instruments and morcellators do not compromise integrity (Special Control 4d) | Preliminary bench testing showed some tenacula could damage material but led to safety statements. Powered morcellation test with 5 different morcellators showed potential for contact at extreme angles, also leading to safety statements. Design verification tests (e.g., Test 5) demonstrate no leakage. |
Users can adequately deploy the device, morcellate a specimen without compromising integrity, and remove without spillage (Special Control 4e) | Animal Model Training Validation: 34 participants, 102 ContainOR systems used in total, no leaks observed (estimated lower bound on 95% CI for leakage = 0.898, exceeding 0.875 limit). Animal Model Design Validation: 31 participants, no observed leaks (lower bound on 95% CI for success = 0.889, exceeding 0.875 minimum). Stone testing showed no bag damage or leaks. |
Design Verification (Table 1) | |
Test 1: Inspection of Components (Components match color/description, free from damage/sharp edges) | Pass |
Test 2: Performance and Set-up of Retractor (Retract abdomen, maintain incision opening, removal force, time to set-up) | Pass |
Test 3: Set-up and Use of Valve Assembly (Time to attach/remove valve/reducer, flow rate, leakage rate) | Pass (Leakage rate: Pass* after design revision) |
Test 4: Set-up and Use of ContainOR System (Time to insert/remove) | Pass |
Test 5: Inspection of components, assemblies seams (No leakage when ContainOR filled) | Pass |
Test 6: Base Retractor Assembly (Weld integrity) | Pass (all listed welds) |
Test 7: Valve Assembly (Bond integrity) | Pass (all listed bonds) |
Test 8: ContainOR Pouch Assembly (Weld/tab integrity) | Pass (all listed welds/tabs/crimps) |
Test 9: Forces required to use ContainOR system (Insert, retract, attach, eject, remove various components) | Pass (all listed actions) |
Ability to handle stones in tissue (Specific to kidney stones due to label claim) | Simulant stones did not lead to bag damage or leaks, and were retrieved. Additional testing with general surgeons demonstrated safe and effective use without compromising bag integrity, even with stones. |
Boxed Warning/Contraindications/Limitations/Training (Special Control 6) | Labeling includes all required warnings, contraindications, limitations, and training requirements. Training was developed and validated to ensure users can follow IFU. |
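
The confidence bounds quoted in the table above (the 0.898 and 0.889 lower bounds measured against the 0.875 limit, and the 0.107 upper bound on the failure rate) are consistent with an exact binomial (Clopper-Pearson) interval applied to zero-failure results, although the summary does not state which interval method the sponsor used. A minimal sketch, assuming a two-sided 95% exact interval and the sample sizes reported above (function names and rounding are illustrative, not the sponsor's analysis):

```python
# Hedged reconstruction: the summary does not say which interval method was used,
# so this sketch assumes an exact (Clopper-Pearson) binomial bound at a two-sided
# 95% confidence level. Function names are illustrative only.
from scipy.stats import beta


def clopper_pearson_lower(successes: int, n: int, conf: float = 0.95) -> float:
    """Lower bound on the true success probability (exact binomial interval)."""
    alpha = (1.0 - conf) / 2.0  # 0.025 for a two-sided 95% interval
    if successes == 0:
        return 0.0
    return float(beta.ppf(alpha, successes, n - successes + 1))


def clopper_pearson_upper(failures: int, n: int, conf: float = 0.95) -> float:
    """Upper bound on the true failure probability (exact binomial interval)."""
    alpha = (1.0 - conf) / 2.0
    if failures == n:
        return 1.0
    return float(beta.ppf(1.0 - alpha, failures + 1, n - failures))


if __name__ == "__main__":
    # Filter test: 0 failures in 32 samples -> lower bound on pass rate ~0.891
    print(clopper_pearson_lower(32, 32))
    # Same data as an upper bound on the failure rate -> ~0.109
    print(clopper_pearson_upper(0, 32))
    # Training validation: no leaks in 34 systems -> ~0.897 (limit 0.875)
    print(clopper_pearson_lower(34, 34))
    # Design validation: no leaks in 31 tests -> ~0.888 (limit 0.875)
    print(clopper_pearson_lower(31, 31))
```

Under these assumptions the computed bounds land close to, but not exactly at, the figures reported above (0.893, 0.107, 0.898, 0.889), which likely reflects rounding or a slightly different interval method in the original analysis.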
2. Sample Sizes Used for the Test Set and Data Provenance
Due to the nature of the device and the studies, "test sets" often refer to multiple samples of the physical device or simulated procedures.
- Filter Testing (Impermeability to bacteria):
- Initial: 25 samples of pouch material.
- After accelerated aging: 32 test samples (and 1 control).
- Immersion Testing (Pouch Integrity Post-Morcellation):
- Initial: 25 samples.
- First Morcellation Test Group: 22 test samples (35 initially, 6 excluded for initial leak, 4 excluded for aberrant bacteria).
- Second Morcellation Test Group (Revised Protocol): 10 test samples (24 initially, 3 excluded for initial leak, 6 excluded for contamination, 2 excluded for leak post-incubation).
- Total for analysis: 32 test samples.
- Shelf Life Testing:
- Visual Inspection & Bubble Leak: 15 samples.
- Seal Strength: 60 samples.
- Device Functionality (simulated use): 35 samples.
- Preliminary Bench Testing:
- Laparoscope puncture: 30 test samples.
- Tenaculum damage: 5 different tenacula, each with 30 material samples (150 total).
- Powered Morcellation (contact with liner): 5 morcellators, used once each.
- Pressure/Burst Testing: 30 ContainOR system samples.
- Obstruction Testing: 30 samples.
- Design Verification (Table 1): Each of the 9 separate tests included 30 or more device samples.
- Clinical Simulation of Morcellation (Animal tissue in SSTR): 34 ContainOR pouches, 5 ContainOR system valve assembly and retractors.
- Additional Testing to Support Use with Stones:
- Stone validation: Not specified, but involved urologists.
- ContainOR performance with stones: Not specified, but involved porcine kidneys in a simulator.
- Surgeon training with stones: 5 general surgeons.
- Animal Model (Training Validation): 34 participants, total of 102 ContainOR systems used.
- Animal Model (Design Validation): 31 participants, 31 ContainOR systems used.
Data Provenance: All data appears to be from in-house studies conducted by the manufacturer, within a controlled laboratory or simulated environment, and an animal model. The country of origin for the device contact is Ireland, implying studies were likely conducted either there or at a contract research organization. The studies are prospective in nature, as they are designed to test the device's performance against pre-defined criteria.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Experts for Stone Simulant Validation: Two urologists. Their specific qualifications (e.g., years of experience) are not detailed beyond "urologists." They confirmed that simulant stones accurately mimicked actual kidney stones.
- Experts for Animal Model Training and Design Validation:
- Training Validation: 34 participants with a range of experience in laparoscopic procedures. "Experienced" was defined as having previously performed at least 5 power morcellation procedures. "Inexperienced" was the remainder.
- Design Validation: 31 participants (from the training validation study, minus 3 inexperienced subjects), same qualification breakdown.
- Additional Training with Stones: 5 general surgeons. Their specific experience level is not detailed, but they successfully completed validated training for the PneumoLiner and proposed ContainOR IFU.
For most bench and simulated use tests (like impermeability, leakage, strength, setup times), the "ground truth" is established by direct measurement or observation of the physical properties and performance of the device against engineering specifications, rather than expert consensus on an interpretation of data.
4. Adjudication Method for the Test Set
Adjudication methods like 2+1 or 3+1 are typically used for clinical endpoints, especially when interpreting subjective data (e.g., image reads). For the ContainOR device, which is evaluated primarily through objective performance metrics (leakage, strength, setup times, successful operation), such formal adjudication methods are not explicitly mentioned or typically necessary.
- Leakage Tests (Immersion, Animal Model): The presence or absence of a leak was determined by inspection (e.g., visual check for water/bacterial growth). This is a binary, objective outcome.
- Design Verification Tests: "Pass" results are based on meeting pre-defined quantitative or qualitative engineering criteria.
- User Performance (Animal Model): Success was judged by the participant's ability to "successfully set up and use the ContainOR system" and the subsequent "no leaks" observation, confirmed by a test coordinator.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was done for this device. This type of study is more common for diagnostic imaging AI algorithms where the question is how AI assistance changes human reader performance in making diagnoses. The ContainOR is a surgical containment device, and its evaluation focuses on its physical and functional integrity, and the ability of surgeons to effectively use it, rather than interpret data.
6. Standalone (Algorithm Only) Performance Study
Not applicable. The ContainOR is a physical medical device, not an AI algorithm. Its performance is evaluated "standalone" only in the sense that its intrinsic properties are tested directly (e.g., material strength, impermeability), or that it is used by a human surgeon without any AI assistance during the evaluated procedure.
7. Type of Ground Truth Used
The ground truth for most of the studies is based on:
- Direct Physical Measurement/Observation: For strength, leakage, impermeability to bacteria, setup times, flow rates, and visual inspection criteria.
- Engineering Specifications/Acceptance Criteria: For quantitative and qualitative "Pass/Fail" determinations in design verification.
- Expert Consensus (Limited): Two urologists provided consensus on the accuracy of stone simulants.
- Performance Outcome: Successful completion of a surgical procedure in a simulated or animal model, determined by the absence of leaks in the device and the ability of users to follow instructions.
8. Sample Size for the Training Set
The concept of a "training set" as understood in AI (data used to train a machine learning model) does not apply to this physical device.
However, if "training set" refers to the data or procedures used to train human users of the device:
- Training Validation Study: 34 participants were explicitly "trained" in the use of the ContainOR system through a structured program. This involved both assisted and unassisted use in a training rig and a porcine model.
9. How the Ground Truth for the Training Set Was Established
Again, this question directly applies to AI/ML context. For the training of human users:
- The "ground truth" for the training program was established by the manufacturer through their validated instructional methods and procedures for the ContainOR system, referencing the Instructions for Use (IFU).
- The effectiveness of this human training was then "validated" by observing whether participants (both experienced and inexperienced) could successfully set up and use the device without compromising its integrity (i.e., without leaks), as determined by objective leak tests conducted by a test coordinator. User feedback also contributed to refining the IFU and training.
(293 days)
ADVANCED SURGICAL CONCEPTS LTD.
The PneumoLiner device is intended for use as a multiple instrument port and tissue containment system during minimally invasive gynecologic laparoscopic surgery to enable the isolation and containment of tissue considered benign, resected during single-port or multi-site laparoscopic surgery during power morcellation and removal. The PneumoLiner is compatible with bipolar or electromechanical laparoscopic power morcellators that are between 15 mm and 18 mm in shaft outer diameter and 135 mm and 180 mm in shaft working length and which have an external component that allows for the proper orientation of the laparoscope to perform a contained morcellation.
The PneumoLiner System consists of two main components:
- A laparoscopic multi-instrument port
- Tissue pouch (PneumoLiner) intended to provide a separately contained space within the abdomen for the safe morcellation of tissue
As depicted in Figure 1 below, the laparoscopic multi-instrument port consists of the Retractor, Retractor Introducer and the Boot Assembly.
The provided text describes the acceptance criteria and performance of the PNEUMOLINER system, but it does not include a study that directly compares human readers with and without AI assistance (a multi-reader multi-case comparative effectiveness study). The device itself is a medical containment system, not an AI-powered diagnostic or assistive tool. Therefore, some of the requested information regarding AI-specific studies will not be present.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document details extensive performance testing. Here, we compile a table based on the "Design Verification" section, which includes quantitative acceptance criteria, and other key performance tests.
Test Category | Specific Test / Performance Area | Acceptance Criteria | Reported Device Performance |
---|---|---|---|
Barrier Testing (Impermeability) | Filter Test (against Brevundimonas diminuta) | Superiority against an 85% rate of passing the leakage test. | 0 failures in 32 samples (accelerated aged). Estimated lower bound for passing leakage test: 0.893 (95% CI). |
Barrier Testing (Impermeability) | Immersion Test (post-morcellation integrity, against B. diminuta) | Maximum allowable failure rate of 0.125 (12.5%) for detecting superiority against a set failure rate (one-sided significance level of 0.025, 90% power). | 0 failures in 32 samples (first group) + 0 failures in 10 samples (second group) after morcellation. Total 38 samples considered in analysis (out of 59 selected). Upper bound on 95% CI for failure rate: 0.107. |
Shelf Life/Sterility | Sterility (SAL) | 10⁻⁶ Sterility Assurance Level | Achieved. |
Shelf Life/Sterility | Package integrity (visual, bubble leak, seal strength) | Visual inspection per ASTM F1886; Bubble Leak per ASTM F2096; Seal Strength per ASTM F88. | All samples passed. |
Shelf Life/Sterility | Device functionality (after accelerated aging, leakage assessment) | Mimics design verification, no leakage. | Tested samples met test acceptance criteria. |
Design Verification (Table 1 & footnote) | Test 1: Inspection of Components | Components match color and description, free from damage, no sharp edges, features. | Pass |
Design Verification (Table 1 & footnote) | Test 2: Performance and Set-up of Retractor (Incision Opening Maintenance) | Incision remains retracted after 3 hours. | Pass |
Design Verification (Table 1 & footnote) | Test 2: Performance and Set-up of Retractor (Removal Force) | (b) (4) | Pass |
Design Verification (Table 1 & footnote) | Test 2: Performance and Set-up of Retractor (Time to set-up retractor) | (b) (4) | Pass |
Design Verification (Table 1 & footnote) | Test 3: Set-up and Use of Boot Assembly (Leakage rate) | (b) (4) | Pass* (Initial failure related to large instrument passage was resolved with revised design meeting criteria). |
Design Verification (Table 1 & footnote) | Test 4: Set-up and Use of PneumoLiner System (Time to insert) | (b) (4) | Pass |
Training Validation (Porcine Model) | Successful set-up and use of the device by trained participants (no leaks) | Lower bound on 95% CI for success > 0.875. | No leaks observed in 34 PneumoLiner Systems used by participants. Estimated lower bound on 95% CI for leakage: 0.898 (> 0.875). |
Design Validation (Porcine Model) | Device integrity (no damage to pouch by surgeons in clinical setting) and Successful use (no leaks) | Lower bound on 95% confidence interval for success > 0.875 (derived from a simple superiority test with 90% power and 0.025 alpha against a 0.875 success limit). | No device failures (leaks) noted in 31 tests. Lower bound on 95% CI for success: 0.889 (> 0.875). |
Note: (b) (4) indicates redacted information, typically quantitative values or specific methods.
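
The immersion-test criterion above (maximum allowable failure rate of 0.125, one-sided significance level of 0.025, 90% power) describes an exact binomial superiority design in which observing zero failures in a large enough sample rules out the unacceptable failure rate. The summary does not state the assumed true failure rate used for the power calculation, so the following is only a minimal sketch with an illustrative value; the function and its parameters are assumptions, not the sponsor's actual protocol.

```python
# Hypothetical sketch of a zero-failure exact binomial superiority design
# (one-sided alpha = 0.025, 90% power). The assumed "true" failure rate p1 is
# not given in the summary; the value used below is illustrative only.
from typing import Optional

from scipy.stats import binom


def zero_failure_design(p0: float, p1: float, alpha: float = 0.025,
                        power: float = 0.90, max_n: int = 200) -> Optional[int]:
    """Smallest n such that observing 0 failures rejects 'failure rate >= p0'
    at level alpha, with at least the requested power when the true rate is p1."""
    for n in range(1, max_n + 1):
        type_i_error = binom.pmf(0, n, p0)     # P(0 failures | failure rate p0)
        achieved_power = binom.pmf(0, n, p1)   # P(0 failures | true rate p1)
        if type_i_error <= alpha and achieved_power >= power:
            return n
    return None


if __name__ == "__main__":
    # Against a 12.5% maximum allowable failure rate, assuming (hypothetically)
    # a 0.3% true failure rate -> 28 samples under these illustrative inputs.
    print(zero_failure_design(p0=0.125, p1=0.003))
```

With these illustrative inputs the smallest qualifying sample size is 28, in the same range as the 32 samples ultimately analyzed in the immersion testing.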
2. Sample Size Used for the Test Set and Data Provenance
- Barrier Testing (Filter Test):
- Accelerated Aged: 32 test samples (plus 1 control).
- Provenance: Bench testing, in-vitro.
- Barrier Testing (Immersion Test):
- Initial Group: 22 test samples (from an initial 35, with 6 excluded for initial leaks and 4 for aberrant bacteria).
- Additional Group: 10 test samples (from an initial 24, with 3 excluded for initial leaks and 6 for contamination).
- Total for Analysis: 32 samples (plus controls).
- Provenance: Bench testing, using TSB (Tryptone Soya Broth) and B. diminuta.
- Bench Testing (Preliminary):
- Laparoscope Puncture: 30 test samples.
- Tenaculum Damage: 30 material samples per each of 5 different tenacula.
- Pressure/Burst Testing: 30 PneumoLiner System samples.
- Obstruction Testing: 30 samples.
- Provenance: Bench testing, in-vitro.
- Design Verification (Table 1): Over 30 device samples per test. (Specific counts not always provided but stated as "30 or more").
- Provenance: Bench testing, in-vitro.
- Clinical Simulation of Morcellation: 34 PneumoLiner pouches and 5 PneumoLiner System boot assembly and retractors.
- Provenance: Simulated use in a surgical simulation test rig (SSTR) using animal tissue (lamb heart, beef tongue).
- Training Validation: 34 participants, each using at least 3 PneumoLiner Systems. A total of 102 PneumoLiner Systems were used.
- Provenance: Porcine model (animal study, in-vivo simulation).
- Design Validation: 31 participants, each using one PneumoLiner System.
- Provenance: Porcine model (animal study, in-vivo simulation).
No Human Data: All listed studies are non-clinical (bench or animal models). Therefore, there is no country of origin for human data, as no human data was used directly to support device performance. The studies are prospective in design.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This device is not an AI diagnostic device. Therefore, the concept of "experts establishing ground truth" in the traditional sense of medical image interpretation (e.g., radiologists reviewing images) does not directly apply here.
However, for the Training Validation and Design Validation studies, participants included "surgeons with advanced training in laparoscopic techniques," with varying levels of experience (categorized as "Experienced" and "Inexperienced"). These are the "users" of the device, whose ability to correctly use the device and avoid damage established a form of "ground truth" for usability and safety in a simulated clinical scenario.
- Training Validation: 34 participants (experts/users).
- Design Validation: 31 participants (experts/users).
Their qualifications are described as participants with a "range of experience in laparoscopic procedures," with the device being intended for "surgeons with advanced training in laparoscopic techniques."
4. Adjudication Method for the Test Set
Adjudication methods (like 2+1, 3+1) are typically used for establishing consensus on ground truth in studies involving expert review of data (e.g., radiology reads). As this is a study about the physical performance and usability of a medical device, such an adjudication method is not applicable.
Instead, the "ground truth" for the performance tests was based on:
- Pre-defined objective criteria (e.g., absence of bacterial growth, lack of leaks, force thresholds, time limits, visual inspection for defects).
- For the simulated use studies (Training and Design Validation), the "ground truth" on successful device use and absence of leaks was assessed by a "test coordinator" or the test team against clearly defined pass/fail criteria (e.g., visual inspection for leaks after a water test).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done.
MRMC studies typically compare multiple human readers' diagnostic performance on multiple cases, often to evaluate the impact of an AI algorithm on reader accuracy or efficiency. The PNEUMOLINER is a physical medical device, not an AI diagnostic tool, so this type of study is not relevant to its assessment.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
No, a standalone performance study in the context of an AI algorithm was not done.
This terminology applies to AI algorithms. The PNEUMOLINER is a physical device that requires human operation. Its "standalone" performance is measured through bench testing and simulated use, where the device itself is tested against physical parameters (e.g., impermeability, strength, ability to contain tissue). These tests were conducted without human interaction beyond operating the test equipment or, in simulations, the device itself.
7. The Type of Ground Truth Used
The ground truth used for the various performance tests includes:
- Objective Physical Measurements: Force thresholds, pressure levels, time metrics, visual inspection for damage, dimensional accuracy.
- Biological Impermeability: Absence of bacterial growth (B. diminuta) after challenge.
- Integrity Assessment: Absence of leaks after water filling and visual inspection.
- Usability/Safety: Successful setup and operation of the device by trained users, and absence of device compromise (damage/leakage) in simulated clinical scenarios (porcine models). This is based on observation by test coordinators against predefined success/failure criteria.
8. The Sample Size for the Training Set
The concept of a "training set" typically refers to data used to train machine learning models. As the PNEUMOLINER is a physical medical device and not an AI algorithm, there is no "training set" in this sense.
However, if we interpret "training set" as the data used to inform the design and development of the device (before formal verification/validation), or the data used for the training program for users, then:
- Device Development Data: The document mentions "preliminary tests intended to generate acceptance criteria for their design verification tests as well as to validate the surgical simulator and training rig." These tests (e.g., Laparoscope puncture (30 samples), Tenaculum damage (150 samples), Powered Morcellation (5 samples), Pressure/Burst (30 samples), Obstruction Testing (30 samples)) could be considered analogous to data used in the formative stages.
- User Training Program Data: The "Training Validation" study itself involved 34 participants using a total of 102 PneumoLiner Systems in a porcine model. This study validated the effectiveness of the user training program, which is crucial for the device's safe and effective use.
9. How the Ground Truth for the Training Set Was Established
Again, applying the AI model analogy, there is no "ground truth" for an AI training set here.
If referring to the "Training Validation" study for users:
- The ground truth was established by objective observation of user performance by a study coordinator against predefined criteria for successful setup, use, and removal of the device, as well as post-procedure leak testing of the PneumoLiner pouch. The Instructions for Use (IFU) served as the standard against which user performance was evaluated. The outcome was binary: successful execution of steps and no leaks.