510(k) Data Aggregation
(122 days)
The CoreDx Pulmonary Mini-Forceps are specifically designed to collect tissue endoscopically for histologic examination. These forceps can be used with endoscopes for ultrasound-guided mini-forceps biopsy (MFB) of submucosal and extramural lesions of the tracheobronchial tree. These forceps should not be used for any purpose other than their intended use.
The CoreDx Pulmonary Mini-Forceps is a sterile, single-use device consisting of jaws at the distal tip attached to a flexible coil, with a spool and a thumb-ring handle at the proximal end of the device. The two radial jaws are attached to an actuation mechanism and can be opened and closed by sliding the spool handle. The jaws are designed to tear and retain tissue within the jaws. Once the CoreDx Pulmonary Mini-Forceps is positioned at the target area, the radial jaws are opened and a sample of tissue is collected for histological examination. The CoreDx Pulmonary Mini-Forceps are designed to be compatible with scopes that have a working channel with a minimum inner diameter (ID) of 1.2 mm. The CoreDx™ Pulmonary Mini-Forceps is designed for use in the pulmonary system. The proposed device can also be used following an EBUS Transbronchial Needle Aspiration (EBUS-TBNA) procedure to pass through the airway wall and access a lymph node via an EBUS scope.
The provided text describes the 510(k) clearance for the CoreDx™ Pulmonary Mini-Forceps, a medical device used for collecting tissue samples. The majority of the document focuses on regulatory information, device description, indications for use, and a comparison to predicate devices, rather than a detailed study proving the device meets specific performance acceptance criteria.
While the document states that "the performance of the proposed device meets the requirements of its pre-defined acceptance criteria and intended use," it does not provide the specific numerical acceptance criteria for each test or detailed study results. The "Performance Data" section lists the types of tests performed but lacks the quantitative outcomes and the methodology of a study designed to prove the device meets pre-defined acceptance criteria.
Therefore, many of the requested details cannot be extracted from this document, as it primarily focuses on the regulatory submission and substantial equivalence argument rather than the in-depth results of a clinical or performance study.
Here's an attempt to answer based on the available information and explicitly noting what is not present:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not explicitly provided in the document. The document states "the performance of the proposed device meets the requirements of its pre-defined acceptance criteria," but it does not list these criteria or the specific numerical performance results against them.
The types of tests performed (without specific results or criteria) include:
| Test Type | Reported Device Performance (Qualitative) | Acceptance Criteria (Not Detailed) |
|---|---|---|
| Biocompatibility | Met requirements of ISO 10993: Cytotoxicity, Sensitization, Intracutaneous Reactivity, Acute Systemic Toxicity, USP Rabbit Pyrogen Testing. | Meets ISO 10993 standards and specific tests. |
| Sterilization | Met requirements of ISO 11135-1 (Ethylene Oxide) and ISO 10993-7 (ETO residuals). | Meets ISO 11135-1 and ISO 10993-7 standards. |
| Bench Tests | Passability, Pushability, Working Length, Device Reliability, Forceps Operation, Forceps Integrity, Smooth Edges. | Device performed as intended and met pre-defined requirements (details not provided). |
| Ultrasound Visibility | Confirmed through user feedback assessment in a swine lung model. | Visible under ultrasound. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not specified for any of the performance tests.
- Data Provenance:
- Bench tests suggest laboratory testing.
- Ultrasound visibility was assessed in a "swine lung model," indicating an animal study.
- No human clinical data (retrospective or prospective) is mentioned for performance validation against acceptance criteria.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided. The document mentions a "user feedback assessment" for ultrasound visibility in a swine model but does not detail the number or qualifications of these users/experts. For the other mechanical and biocompatibility tests, ground truth typically comes from laboratory measurements against recognized standards rather than expert judgment.
4. Adjudication Method for the Test Set
This information is not provided. Given that the tests primarily involve bench testing and an animal model, formal "adjudication" in the sense of human reader consensus for imaging interpretation would not apply.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, What was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
A Multi-Reader Multi-Case (MRMC) study was not performed, nor is it applicable. This device is a mechanical biopsy forceps, not an AI-assisted diagnostic tool.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
This question is not applicable. The device is a physical medical instrument, not an algorithm or AI.
7. The Type of Ground Truth Used
- For biocompatibility: Adherence to ISO standards and specific test methods (e.g., cell viability, toxicity assays).
- For sterilization: Adherence to ISO standards for ethylene oxide sterilization and residuals.
- For bench tests (Passability, Pushability, etc.): Likely engineering specifications and measurements to determine if the device performed within defined parameters (details not provided).
- For ultrasound visibility: User feedback assessment in a swine model, implying qualitative or semi-quantitative observation by an unspecified number of users.
8. The Sample Size for the Training Set
This question is not applicable. The device is not an AI/machine learning algorithm that requires a "training set."
9. How the Ground Truth for the Training Set was Established
This question is not applicable. The device is not an AI/machine learning algorithm.
In summary, the provided FDA 510(k) clearance letter and summary primarily focus on demonstrating substantial equivalence to predicate devices through regulatory compliance, material testing, and functional bench testing. The document does not contain the detailed quantitative performance results and study methodologies typically expected for an AI/software as a medical device (SaMD) submission. The "acceptance criteria" it mentions are qualitative statements about meeting standards and pre-defined requirements, not specific numerical thresholds.