The FibermarX™ Radiopaque Tissue Marker is intended to be implanted into the body to accurately visualize and constitute the reference frame for stereotactic radiosurgery and radiotherapy target localization. In addition, the markers are indicated in situations where tissue needs to be marked for future medical procedures such as IMRT/IGRT.
The device is a sterile, single-patient-use, barium sulfate-infused, non-absorbable polymer monofilament that is visible on standard radiographs (x-ray, CT, mammography). FibermarX™ is passed through soft tissue and tied into place during open, percutaneous, or arthroscopic/laparoscopic/endoscopic procedures; standard surgeon's knots or a continuous running outline are used to quickly mark the soft tissue for subsequent imaging or radiotherapy target localization.
The provided document is a 510(k) premarket notification for a medical device called the FibermarX™ Radiopaque Tissue Marker. This type of submission focuses on demonstrating substantial equivalence to a legally marketed predicate device rather than providing extensive clinical study data that would typically include detailed acceptance criteria and performance metrics of the new device in a standalone clinical trial.
Therefore, the document does not contain the requested information about acceptance criteria or a study proving the device meets them in the sense of a comparative effectiveness study, standalone AI algorithm performance, or expert-established ground truth. The submission instead relies on non-clinical testing to demonstrate substantial equivalence to a predicate device.
Here's a breakdown of what can be extracted or inferred from the document based on the standard 510(k) submission process:
1. A table of acceptance criteria and the reported device performance
The document does not provide a formal table of acceptance criteria and reported device performance in the manner one would see for a clinical trial or AI algorithm validation. Instead, it describes non-clinical tests conducted to support substantial equivalence. The "performance" is generally demonstrated by showing that the device is comparable to the predicate device in specific aspects.
Inferred "Acceptance Criteria" (based on non-clinical tests) and "Reported Device Performance":
| Acceptance Criteria (inferred from testing) | Reported Device Performance (summary from submission) |
|---|---|
| Biocompatibility: Non-cytotoxic, non-irritating, non-sensitizing, non-pyrogenic, and no acute systemic toxicity. | Passed ISO 10993 biocompatibility testing (cytotoxicity, intracutaneous irritation, sensitization, intramuscular implantation, pyrogenicity, and acute systemic toxicity) and a toxicological risk assessment of extractables. |
| Material Safety: Extractables within safe limits. | Extractables characterized by GC-MS, LC-MS, and ICP-MS; the toxicological assessment found them within acceptable limits. |
| Sterility: Achieve a sterility assurance level (SAL) of 10^-6 (see the note after this table). | Sterilization resistance and bioburden testing established ethylene oxide (EO) cycle parameters that achieve a 10^-6 SAL. |
| Radiographic Visibility: Visible on standard radiographs (x-ray, CT, mammography) and comparable to the predicate. | CT/x-ray imaging at low, medium, and high doses, plus mammography, showed radiographic visualization substantially equivalent to the predicate device. |
| Shelf-Life & Packaging Integrity: Maintain performance over the labeled shelf life; packaging remains suitable. | Packaging validation and post-shelf-life performance testing were conducted. |
| Radiation Impact: Mechanical properties, visibility, and cytotoxicity unaffected by radiation exposure (for radiotherapy applications). | Evaluation of radiation impact on mechanical properties, visibility, and cytotoxicity showed no negative effects. |
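Note on the Sterility row: as a minimal sketch, and assuming the conventional bioburden/log-reduction interpretation of a 10^-6 SAL (the submission does not publish its cycle calculations, and the bioburden value below is purely illustrative):

$$\text{SAL} = 10^{-6} \quad\Longleftrightarrow\quad P(\text{a viable microorganism survives on a sterilized unit}) \le 10^{-6}$$

$$\text{required log reduction} \;\ge\; \log_{10} N_0 + 6, \qquad \text{e.g. } N_0 = 10^{3}\ \text{CFU/unit} \;\Rightarrow\; 3 + 6 = 9\ \text{logs}$$

where $N_0$ is the measured pre-sterilization bioburden per unit. This is presumably why the submission pairs bioburden testing (which estimates $N_0$) with sterilization resistance testing (which confirms the EO cycle delivers the required reduction).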
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not applicable in the context of human data. The "test set" here consists of physical device samples used for non-clinical testing (e.g., individual markers for biocompatibility, imaging, and radiation-impact testing). The submission summary does not report the number of units tested for each study.
- Data Provenance: Not applicable, as this is non-clinical bench testing of the device itself rather than human data; the tests were performed by or for the manufacturer.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not applicable. Ground truth, in the context of clinical or AI studies, is not established for non-clinical device testing. The results of the physical tests are the "ground truth" for the device's characteristics.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable. This concept applies to expert review of clinical data, which is not part of this 510(k) non-clinical submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Comparative Effectiveness Study: No, an MRMC comparative effectiveness study was not conducted and is not mentioned. This type of study would be relevant for evaluating diagnostic imaging devices or AI-assisted systems reading images from human patients, which is not the nature of this device or its submission.
6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done
- Standalone Performance: No, a standalone algorithm performance study was not done. This device is a physical tissue marker, not an algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth: For the non-clinical tests, the "ground truth" is derived from objective, quantitative measurements and laboratory analyses (e.g., chemical analysis results for extractables, physical measurements for mechanical properties after radiation, optical/imaging assessments of visibility).
8. The sample size for the training set
- Sample Size for Training Set: Not applicable. This device is not an AI algorithm, so there is no training set in the conventional sense.
9. How the ground truth for the training set was established
- Ground Truth for Training Set Establishment: Not applicable, as there is no training set for this device.