Search Results
Found 2 results
510(k) Data Aggregation
(77 days)
The Bard® Mission® Disposable Core Biopsy Instrument is intended for use in obtaining biopsy samples from soft tissues such as lung, liver, spleen, kidney, prostate, lymph nodes, breast, thyroid, and various soft tissue tumors.
The subject device BARD® MISSION® Disposable Core Biopsy Instrument is a single use core biopsy device. It is available in several needle gauge sizes and lengths. The plunger is color coded according to the various gauge sizes, e.g. yellow = 20 gauge, pink = 18 gauge, purple = 16 gauge, and green = 14 gauge.
The provided text describes the 510(k) submission for the Bard® Mission® Disposable Core Biopsy Instrument. This document focuses on demonstrating substantial equivalence to a predicate device through non-clinical performance testing and biocompatibility.
Crucially, this document does not contain information about a study involving AI assistance, multi-reader multi-case (MRMC) studies, or the establishment of ground truth by human experts for an algorithm's performance. The device in question is a physical medical instrument for obtaining biopsy samples, not a diagnostic imaging AI or similar software.
Therefore, many of the requested criteria regarding AI performance, human reader improvement, expert consensus, and ground truth establishment for a diagnostic algorithm are not applicable to this document.
However, I can extract information related to the device's acceptance criteria and performance based on the non-clinical (bench) testing described in the 510(k) summary.
Here's a breakdown of the available information:
1. Table of acceptance criteria and the reported device performance:
The document states: "The subject device, BARD® MISSION® Disposable Core Biopsy Instrument, met all predetermined acceptance criteria of design verification and validation as specified by applicable standards, guidance, test protocols and/or customer inputs."
While specific numerical acceptance criteria and their corresponding reported performance values are not explicitly detailed in a table, the document lists the in vitro tests performed and generally states that the device "performed as expected."
| Acceptance Criteria Category (Derived from Tests Performed) | Reported Device Performance |
|---|---|
| Number of Samples (ability to collect samples) | Performed as expected |
| Penetration Depths (accuracy of needle penetration) | Performed as expected |
| Stylet / Cannula to Handle Tensile Strength (durability) | Performed as expected |
| Corresponding Working Needle Length and Cutting Cannula OD, and Stylet/Cannula Working Needle Lengths (dimensional accuracy) | Performed as expected |
| Integrity of the Sterile Barrier (sterility maintenance) | Performed as expected |
| Performance After Ship Testing (durability during transport) | Performed as expected |
| Needle Protection After Shipping and Storage (safety and integrity) | Performed as expected |
2. Sample size used for the test set and the data provenance:
- Sample Size: The document does not specify exact numerical sample sizes for each in vitro test; it refers broadly to testing of "the subject device BARD® MISSION®."
- Data Provenance: The tests are described as "in vitro tests," meaning they were performed in a laboratory (bench) setting rather than on human or animal subjects. The testing was performed internally by C.R. Bard. The document does not specify a country of origin for these tests, but the submission was made to the US FDA. The testing is prospective in the sense that it was performed for this submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not Applicable. This document describes the performance of a physical biopsy instrument through bench testing, not a diagnostic algorithm requiring expert interpretation or ground truth establishment in a clinical context. The "ground truth" for these engineering tests would simply be the objective measurements and adherence to specifications.
4. Adjudication method for the test set:
- Not Applicable. As above, this is about physical instrument performance tests, not diagnostic interpretations requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it:
- Not Applicable. This is a physical core biopsy instrument, not an AI-assisted diagnostic tool.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not Applicable. This is not an algorithm.
7. The type of ground truth used:
- For the physical tests, the "ground truth" is based on engineering specifications, measurements, and established testing protocols (e.g., ISO 10993-1 for biocompatibility). There is no "expert consensus" or "pathology" in the sense of interpreting images for diagnosis.
8. The sample size for the training set:
- Not Applicable. This is a physical device, not an AI model requiring a training set.
9. How the ground truth for the training set was established:
- Not Applicable. As above, no training set for an AI model.
In summary, the provided document focuses on the substantial equivalence of a physical medical device (biopsy instrument) based on non-clinical (bench) performance testing and biocompatibility, as opposed to the performance of a diagnostic AI algorithm.
(22 days)
The SenoMark® Ultra Breast Tissue Marker is intended to radiographically and sonographically mark breast tissue during a percutaneous breast biopsy procedure.
The SenoMark® Ultra Breast Tissue Marker is a sterile, single-use device comprising a disposable applicator and an implantable marker. The marker contains three PGA pads, which are visible via ultrasound imaging for approximately 3 weeks and are essentially resorbed by the body after approximately 12 weeks. The center PGA pad contains a metallic wireform interwoven with a PVA polymer; the non-resorbable PVA polymer enhances viewing under ultrasound. The wireform is made of Titanium or BioDur™ 108, in a ribbon or coil shape respectively, and is visible radiographically on a permanent basis. The SenoMark® Ultra Breast Tissue Marker is intended for breast tissue marking during a breast biopsy procedure.
The provided text is a 510(k) summary for a Special 510(k) device modification. It describes the SenoMark® Ultra Breast Tissue Marker and compares it to a predicate device. However, this document does not contain any information about acceptance criteria or a study proving the device meets certain performance criteria beyond a general statement of "performance specifications" that are "identical to the specifications provided in the reference devices."
The document focuses on demonstrating substantial equivalence to a predicate device through technological comparison and nonclinical testing that assessed product characteristics and deployment mechanics. It explicitly states:
- "Performance specifications - With the change to the legally marketed UltraClip II US wireforms, the performance specifications for ultrasound imaging and MRI compatibility will now include permanent ultrasound visibility and scanning in up to a 3-Tesla MR system. These specifications are identical to the specifications provided in the reference devices."
- "To demonstrate substantial equivalence of the subject device to the predicate device, the technological characteristics and performance criteria were evaluated. Using the FDA guidance document, 'Design Control Guidance for Medical Device Manufacturers,' dated March 11, 1997, and internal risk assessment procedures, the following nonclinical tests were performed:
- Visual Inspection of Product for Pre-Deployment
- Inversion Test
- Deployment Force
- Read and Understand the IFU"
Therefore, based solely on the provided text, I cannot fill in the requested table or detail a study proving specific acceptance criteria related to clinical performance (e.g., sensitivity, specificity, or reader improvement). The document focuses on physical and functional equivalence to a predicate device, not on diagnostic performance metrics.
Here's what I can extract based on the provided text, and identify what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Explicitly Stated in Document) | Reported Device Performance (Explicitly Stated in Document) |
|---|---|
| Visual Inspection of Product for Pre-Deployment (nonclinical test) | Results demonstrate technological characteristics and performance criteria comparable to the predicate device; performed as safely and effectively as the legally marketed predicate device. (Specific quantitative results or pass/fail criteria are not detailed in this summary.) |
| Inversion Test (nonclinical test) | Same as above. |
| Deployment Force (nonclinical test) | Same as above. |
| Read and Understand the IFU (nonclinical test) | Same as above. |
| Permanent ultrasound visibility (performance specification) | Now includes permanent ultrasound visibility, identical to the specifications in the reference devices (K042341 and K090547). |
| MRI compatibility: scanning in up to a 3-Tesla MR system (performance specification) | Now includes scanning in up to a 3-Tesla MR system, identical to the specifications in the reference devices (K042341 and K090547). |
Missing Information: The document does not provide quantitative acceptance criteria (e.g., minimum deployment force, specific visibility metrics) or detailed quantitative results for these tests. It states only that performance is "comparable" and that the device "performs as safely and as effectively as the legally marketed predicate device."
Study Details (Based on Provided Text):
The document describes "nonclinical tests" rather than a clinical study. The goal was to demonstrate "substantial equivalence" by evaluating technological characteristics and performance criteria, referencing the FDA guidance document 'Design Control Guidance for Medical Device Manufacturers,' dated March 11, 1997, and internal risk assessment procedures.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample size: Not specified. The document only mentions "nonclinical tests."
- Data provenance: Not applicable, as this describes nonclinical tests, not data from patients.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not applicable. This was a technical/engineering evaluation of the device itself, not a study requiring clinical ground truth from experts.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable. No adjudication method described for nonclinical tests.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- No MRMC study was done, nor is this device an AI system.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Not applicable. This is a physical medical device, not an algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not applicable in the clinical sense. For the nonclinical tests, the "ground truth" would be established engineering specifications or benchmarks for the predicate device, which are not detailed here.
8. The sample size for the training set
- Not applicable. No training set for an algorithm.
9. How the ground truth for the training set was established
- Not applicable. No training set for an algorithm.