Search Results
Found 103 results
510(k) Data Aggregation
(29 days)
Chemfort® 28-day 20 mm Vial Adaptor is a single use, sterile Closed System Transfer Device (CSTD) that mechanically prohibits the release of drugs, including antineoplastic and hazardous drugs, in vapor, aerosol or liquid form during preparation, reconstitution, compounding and administration, thus minimizing exposure of individuals, healthcare personnel, and the environment to hazardous drugs.
Chemfort® 28-day 20 mm Vial Adaptor prevents the introduction of microbial and airborne contaminants into the drug or fluid path for up to 28 days or 10 activations.
The Chemfort® Closed System Transfer Device (CSTD) is a system of components that allows the reconstitution of liquid or pre-dissolved powder drugs into infusion bags, flexible bottles or syringes. Single, partial or multiple vials can be used for each infusion solution container. The Chemfort® CSTD prevents contamination of the user or the environment by the drug through the use of elastomeric seals and an active carbon filter.
The components of the predicate Chemfort® CSTD system are:
- Vial Adaptor 20 mm with 13 mm Vial Converter
- Vial Adaptor 28 mm
- Vial Adaptor 32 mm
- Syringe Adaptor
- Syringe Adaptor Lock
- Luer Lock Adaptor
- Bag Adaptor SP
Each of the Chemfort® system components is available separately.
This submission introduces a new version of the 20 mm Vial Adaptor to the Chemfort® CSTD system, called the Chemfort® 28-day 20 mm Vial Adaptor, as a range extension. This new Vial Adaptor differs from the predicate Vial Adaptor only with respect to the usage time limitation, which is extended from 7 to 28 days, but with the same limit of 10 activations. This change is reflected in the Indications for Use statement and the device labeling.
N/A
(223 days)
The Archimedes APD system is an Automated Peritoneal Dialysis system indicated for acute and chronic peritoneal dialysis for adult patients in clinical and home use. A care partner is not required. The following therapies are supported: Continuous Cyclic Peritoneal Dialysis (CCPD) and Intermittent Peritoneal Dialysis (IPD). Mid-day exchanges are not supported.
The proposed system is an automated peritoneal dialysis (APD) cycler which consists of a heater unit, control unit, cart, drain containers, and disposable tubing set.
N/A
(90 days)
Chemfort® is a single use, sterile Closed System Transfer Device (CSTD) that mechanically prohibits the release of drugs, including antineoplastic and hazardous drugs, in vapor, aerosol or liquid form during administration and preparation, thus minimizing exposure of individuals, healthcare personnel, and the environment to hazardous drugs. Chemfort® prevents the introduction of microbial and airborne contaminants into the drug or fluid path for up to 7 days.
The Chemfort® Closed System Transfer Device (CSTD) is developed by Simplivia Healthcare Ltd. The system is used by pharmacists, nurses or other healthcare professionals to prepare drugs, including cytotoxic drugs, and allows the safe reconstitution of powder and liquid drugs and their transfer to infusion containers (infusion bags, semi-rigid bottles, and collapsible plastic containers), for injection or administration. It is supplied sterile with a sterility assurance level (SAL) of 10⁻⁶.
The Chemfort® Female Luer Lock Adaptor is part of the Chemfort® system of devices. The Chemfort® Female Luer Lock Adaptor is intended for safe drug transfer from one syringe to another and allows closed access via Chemfort® devices to any standard male Luer connection (see below for more details).
- Syringe to Syringe connection:
The Chemfort® Female Luer Lock Adaptor is connected to the Chemfort® Luer Lock Adaptor. The Chemfort® Luer Lock Adaptor port is connected to an empty or saline-containing syringe (syringe "A"), equipped with a Chemfort® Syringe Adaptor or Chemfort® Syringe Adaptor Lock. A drug-containing syringe (syringe "B"), equipped with a Chemfort® Syringe Adaptor or Chemfort® Syringe Adaptor Lock, is connected to the Chemfort® Female Luer Lock Adaptor. This assembly of devices allows drug transfer between syringe "A" and syringe "B", for drug dilution (if syringe "A" contains saline) or drug dosage (if syringe "A" is empty), providing safe drug transfer from one syringe to another. The drug in syringe "A" can then be injected into an intravenous (IV) bag through the Chemfort® spike or given as a bolus through another Chemfort® Luer Lock Adaptor connected to a Y-site on an IV set.
Note that this procedure also involves the Chemfort® Vial Adaptor, which allows the drug to be withdrawn from the drug vial into syringe "B".
- Connection to IV sets:
The Chemfort® Female Luer Lock Adaptor is connected to an IV set through the luer lock connection (proximal end or infusion line). The Chemfort® port can then connect to one of the Chemfort® Closed Administration (CADM) IV sets. This setup converts an open IV set connection into a closed connection.
The Chemfort® Female Luer Lock Adaptor can be in contact with concentrated or diluted drugs.
The Chemfort® Female Luer Lock Adaptor is a single-use device intended for use on adults, children and infants.
The provided FDA 510(k) clearance letter and supporting documentation (Chemfort® Female Luer Lock Adaptor 510(k) Summary) describe the performance testing and acceptance criteria for a physical medical device, not a software or AI-driven diagnostic device.
Therefore, many of the requested categories in your prompt (e.g., number of experts for ground truth, adjudication method, MRMC study, sample size for training set, how ground truth for training set was established, standalone performance) are not applicable to this type of device submission. These categories are typically relevant for AI/ML-based diagnostic devices where performance data relies heavily on expert annotations, comparative effectiveness studies involving human readers, and distinct training/test datasets.
However, I can extract the relevant acceptance criteria and performance data for the Chemfort® Female Luer Lock Adaptor based on the provided document.
Acceptance Criteria and Device Performance for Chemfort® Female Luer Lock Adaptor
This document outlines the performance data and acceptance criteria for the Chemfort® Female Luer Lock Adaptor, a physical medical device. The study performed demonstrates the device's adherence to established safety and performance standards for intravascular administration sets.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for this device are primarily based on established international standards and internal validation procedures for medical devices of this type. The "Results" column from the provided Table 2 in the 510(k) summary indicates that all tests met their acceptance criteria, demonstrating the device's compliance.
| Test Name | Description | Acceptance Criteria (Implied by Standard/Procedure) | Reported Device Performance |
|---|---|---|---|
| Particulate Analysis | Chemfort® Female Luer Lock Adaptor fluid path was examined for particles. | Compliance with USP <788> "Particulate Matter in Injections, Method 1- Light Obscuration Particle Count Test" (i.e., particulate count within specified limits for injectables). | Pass |
| Bidirectional Flow | The ability of the device to deliver liquid throughout the system was verified. | Fluid delivery demonstrated to be effective and unimpeded as per internal procedure. (Specific quantitative criteria not provided but implied by "Pass"). | Pass |
| Assembly's Connection | Evaluation of the connection force between Chemfort® Syringe Adaptor and Chemfort® Female Luer Lock Adaptor ports. | Connection forces within acceptable ranges to ensure secure attachment and proper function without excessive effort or accidental disconnection, as per internal procedure. | Pass |
| Air Tightness | This test demonstrated that there is no leakage at the connection between the Chemfort® Female Luer Lock Adaptor and Chemfort® Syringe Adaptor ports. | No detectable air leakage between connected ports, ensuring a closed system, as per internal procedure. | Pass |
| Fluid Leakage | Verification that the Chemfort® Female Luer Lock Adaptor's luer connector does not leak fluid. | No detectable fluid leakage from the luer connector, as per internal procedure. | Pass |
| Luer Test | The luer lock connection complies with ISO 80369-20. This specifically refers to the functional and dimensional integrity of the luer connections, preventing misconnections and ensuring secure fit. | Compliance with ISO 80369-7:2021 "Small-bore connectors for liquids and gases in healthcare applications Part 7: Connectors for intravascular or hypodermic applications" requirements for luer connections. | Pass |
| Biocompatibility | All device parts that contact the patient comply with ISO 10993-1. (This is a general statement from the summary implying testing was done to ensure no adverse biological reactions). | Compliance with ISO 10993 series (e.g., cytotoxicity, irritation, sensitization, systemic toxicity, hemocompatibility) for materials in contact with body fluids. | Compliance (Implicit) |
| Sterilization Residuals | Ethylene Oxide sterilization residuals. | Compliance with ISO 10993-7 requirements for acceptable levels of ethylene oxide and its byproducts. | Compliance (Implicit) |
| Shelf Life | The device is safe and effective throughout its intended shelf life (3 years). (This is a general statement, implying stability testing was conducted over time to support this claim). | Device maintains its safety and effectiveness characteristics over the declared 3-year shelf life, as demonstrated by stability testing (e.g., maintaining sterility, material integrity, functional performance). | Not explicitly detailed but implied by overall "Pass" and "safe and effective". |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not explicitly state the numerical sample sizes for each performance test (e.g., number of units tested for particulate analysis, bidirectional flow, etc.). However, it indicates that "Simplivia conducted several performance tests to demonstrate that the Chemfort® Female Luer Lock Adaptor is safe and effective..." implying a sufficient number of samples were tested to meet the requirements of the listed standards and internal procedures.
- Data Provenance: The tests were conducted by Simplivia Healthcare LTD. (an Israeli company) for regulatory submission to the FDA. The data provenance is laboratory testing performed by the manufacturer, rather than clinical data from human subjects. The tests are prospective in nature, as they involve testing newly manufactured devices against predetermined specifications.
3. Number of Experts Used to Establish Ground Truth and Their Qualifications
This question is not applicable to the type of device being cleared. The "ground truth" for the performance of a physical device like the Chemfort® Female Luer Lock Adaptor is established by adherence to validated engineering specifications, material properties, and functionality defined by international standards (e.g., ISO, USP) and internal quality control procedures. It does not involve expert interpretations of images or signals for diagnostic purposes.
4. Adjudication Method for the Test Set
This question is not applicable. Adjudication methods (like 2+1, 3+1) are used to resolve discrepancies in expert annotations or interpretations, typically in studies involving human readers or AI outputs for diagnostic tasks. For a physical device, performance is evaluated against objective, measurable criteria with pass/fail outcomes, not subjective interpretations requiring adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not done. MRMC studies are specific to evaluating the diagnostic performance of medical imaging devices or AI algorithms, often comparing human reader performance with and without AI assistance across multiple cases. This device is an intravascular administration set, not an imaging or diagnostic AI device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
This question is not applicable. There is no "algorithm" to be evaluated in a standalone manner for this physical device. Its function is mechanical and fluidic.
7. The Type of Ground Truth Used
The "ground truth" for this device is based on engineering specifications, material science, and compliance with recognized international standards (e.g., ISO 80369-7, ISO 10993 series, USP monographs). These standards define the acceptable performance characteristics, physical properties, and safety profiles for devices of this type. For example, for "Luer Test," the ground truth is defined by the dimensional and functional requirements of ISO 80369-7:2021. For "Biocompatibility," the ground truth is defined by the specific tests and acceptance criteria within the ISO 10993 series.
8. The Sample Size for the Training Set
This question is not applicable. This device is a physical product, not an AI/ML model that requires a training set.
9. How the Ground Truth for the Training Set Was Established
This question is not applicable for the same reason as #8.
(29 days)
TumorSight Viz is intended to be used in the visualization and analysis of breast magnetic resonance imaging (MRI) studies for patients with biopsy proven early-stage or locally advanced breast cancer. TumorSight Viz supports evaluation of dynamic MR data acquired from breast studies during contrast administration. TumorSight Viz performs processing functions (such as image registration, subtractions, measurements, 3D renderings, and reformats).
TumorSight Viz also includes user-configurable features for visualizing and analyzing findings in breast MRI studies. Patient management decisions should not be made based solely on the results of TumorSight Viz.
TumorSight Viz is an image processing system designed to assist in the visualization and analysis of breast DCE-MRI studies.
TumorSight reads DICOM magnetic resonance images. TumorSight processes and displays the results on the TumorSight web application.
Available features support:
- Visualization (standard image viewing tools, MIPs, and reformats)
- Analysis (registration, subtractions, kinetic curves, parametric image maps, segmentation and 3D volume rendering)
The TumorSight system consists of proprietary software developed by SimBioSys, Inc. hosted on a cloud-based platform and accessed on an off-the-shelf computer.
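The processing functions listed above (registration, subtractions, kinetic curves, parametric maps) are standard DCE-MRI operations. As a purely illustrative sketch, and not SimBioSys's actual implementation, the following Python/NumPy snippet shows how a post-minus-pre contrast subtraction image and a simple percent-enhancement value might be computed from already-registered volumes; the array names and sizes are hypothetical.

```python
import numpy as np

def subtraction_image(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Voxel-wise subtraction of a registered pre-contrast volume from a post-contrast volume."""
    return post.astype(np.float32) - pre.astype(np.float32)

def percent_enhancement(pre: np.ndarray, post: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Simple kinetic measure: percent signal enhancement relative to the pre-contrast baseline."""
    pre = pre.astype(np.float32)
    return 100.0 * (post.astype(np.float32) - pre) / np.maximum(pre, eps)

# Hypothetical registered volumes (in practice these would be read from DICOM series).
pre_contrast = np.random.rand(32, 128, 128).astype(np.float32)
post_contrast = pre_contrast + 0.2 * np.random.rand(32, 128, 128).astype(np.float32)

sub = subtraction_image(pre_contrast, post_contrast)
pe = percent_enhancement(pre_contrast, post_contrast)
print(sub.shape, float(pe.mean()))
```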
Here's a breakdown of the acceptance criteria and the study details for the TumorSight Viz device, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the reported performance metrics, where the device's performance is deemed "adequate" and "clinically acceptable" if the variability is similar to inter-radiologist variability or differences in error are clinically insignificant.
| Measurement Description | Units | Acceptance Criterion (Implicit) | Reported Device Performance (Mean Abs. Error ± Std. Dev.) |
|---|---|---|---|
| Tumor Volume (n=218) | cubic centimeters (cc) | Similar to inter-radiologist variability | 5.2 ± 12.5 |
| Tumor-to-breast volume ratio (n=218) | % | Clinically acceptable | 0.4 ± 1.2 |
| Tumor longest dimension (n=242) | centimeters (cm) | Similar to inter-radiologist variability (e.g., 1.02 cm ± 1.33 cm) | 1.32 ± 1.65 |
| Tumor-to-nipple distance (n=241) | centimeters (cm) | Similar to inter-radiologist variability (e.g., 0.88 cm ± 1.12 cm) | 1.17 ± 1.55 |
| Tumor-to-skin distance (n=242) | centimeters (cm) | Similar to inter-radiologist variability (e.g., 0.42 cm ± 0.45 cm) | 0.60 ± 0.52 |
| Tumor-to-chest distance (n=242) | centimeters (cm) | Similar to inter-radiologist variability (e.g., 0.79 cm ± 1.14 cm) | 0.86 ± 1.22 |
| Tumor center of mass (n=218) | centimeters (cm) | Clinically acceptable | 0.60 ± 1.47 |
| Segmentation Accuracy | | | |
| Volumetric Dice (n=218) | unitless | High agreement with reference standard | 0.76 ± 0.26 |
| Surface Dice (n=218) | unitless | High agreement with reference standard (particularly for 3D rendering) | 0.92 ± 0.21 |
The document states: "We found that all tests met the acceptance criteria, demonstrating adequate performance for our intended use." This indicates that the reported performance metrics were considered acceptable by the regulatory body. For measurements where inter-radiologist variability is provided (e.g., longest dimension, tumor-to-skin), the device's error is compared to this variability. For other metrics, the acceptance is based on demonstrating "adequate performance," implying that the reported values themselves were within a predefined acceptable range.
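For readers unfamiliar with the segmentation metrics in the table above, volumetric Dice quantifies voxel-level overlap between the device's mask and the reference mask, and surface Dice applies the same idea to boundary voxels within a tolerance. Below is a minimal NumPy sketch of volumetric Dice with hypothetical masks; it illustrates the metric and is not SimBioSys's code.

```python
import numpy as np

def volumetric_dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Hypothetical device and reference segmentations.
device_mask = np.zeros((32, 64, 64), dtype=bool)
device_mask[10:20, 20:40, 20:40] = True
reference_mask = np.zeros_like(device_mask)
reference_mask[12:20, 22:42, 20:40] = True

print(round(volumetric_dice(device_mask, reference_mask), 3))  # 0.8
```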
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 266 patients (corresponding to 267 samples, accounting for bilateral disease).
- Data Provenance:
- Country of Origin: U.S.
- Retrospective/Prospective: The document does not explicitly state "retrospective" or "prospective." However, the description of "DCE-MRI were obtained from... patients" and establishment of ground truth by reviewing images suggests a retrospective acquisition of data for validation. The mention of "All patients had pathologically confirmed invasive, early stage or locally advanced breast cancer" further supports a retrospective gathering of existing patient data.
- Clinical Sites: More than eight (8) clinical sites in the U.S.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three (3) U.S. Board Certified radiologists.
- Qualifications: U.S. Board Certified radiologists. (No specific experience in years is mentioned, but Board Certification implies a high level of expertise.)
4. Adjudication Method for the Test Set
- Adjudication Method: 2+1 (as described in the document).
- For each case, two radiologists independently measured various characteristics and determined if the candidate segmentation was appropriate.
- In cases of disagreement between the first two radiologists ("did not agree on whether the segmentation was appropriate"), a third radiologist provided an additional opinion, and the ground truth was established by majority consensus (a minimal sketch of this logic follows below).
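The 2+1 scheme above reduces to simple majority logic once each reader's opinion is recorded. The sketch below is only an illustration of that logic with a hypothetical function, not part of the submission.

```python
def adjudicate(reader_a_ok: bool, reader_b_ok: bool, reader_c_ok=None) -> bool:
    """2+1 adjudication: two independent reads; a third reader breaks disagreements."""
    if reader_a_ok == reader_b_ok:
        return reader_a_ok  # the two primary readers agree, no adjudication needed
    if reader_c_ok is None:
        raise ValueError("Disagreement: a third reader's opinion is required")
    # Majority consensus among the three opinions.
    return sum([reader_a_ok, reader_b_ok, reader_c_ok]) >= 2

print(adjudicate(True, True))          # True: both primary readers accept the segmentation
print(adjudicate(True, False, False))  # False: the third reader breaks the tie
```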
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done
The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance is directly measured and compared.
Instead, it compares the device's performance to:
- Ground Truth: Radiologist consensus measurements.
- Predicate Device: Its own previous version.
- Inter-radiologist Variability: The inherent variability between human expert readers.
Therefore, no effect size of how much human readers improve with AI vs. without AI assistance is provided, as this type of MRMC study was not detailed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The sections titled "Performance Tests" and the tables detailing "Validation Testing (Mean Abs. Error ± Std. Dev.)" describe the algorithm's performance in comparison to the established ground truth. This is a standalone evaluation, as it assesses the device's output intrinsically against expert-derived truth without measuring human interaction or improvement. The statement "The measurements generated from the device result directly from the segmentation methodology and are an inferred reflection of the performance of the deep learning algorithm" supports this.
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert Consensus (specifically, pathologist-confirmed lesions measured and evaluated by a consensus of U.S. Board Certified radiologists). The initial diagnosis of early-stage or locally advanced breast cancer for patient selection was based on pathology ("biopsy proven"). However, the ground truth for measurements and segmentation appropriateness for the study was established by radiologists.
8. The Sample Size for the Training Set
- Sample Size for Training Set: One thousand one hundred fifty-six (1156) patients/samples.
9. How the Ground Truth for the Training Set Was Established
The document states: "DCE-MRI were obtained from one thousand one hundred fifty-six (1156) patients from more than fifteen (15) clinical sites in the U.S. for use in training and tuning the device."
However, the document does not explicitly detail how the ground truth for this training set was established. It describes the ground truth establishment method only for the validation dataset (by three U.S. Board Certified radiologists with 2+1 adjudication). For training data, it is common practice to use similar rigorous methods for labeling, but the specifics are not provided in this excerpt.
(262 days)
The Shadow Catheter™ is intended to be used in conjunction with steerable guidewires in order to access discrete regions of the coronary and peripheral arterial vasculature, to facilitate placement and exchange of guidewires and other interventional devices, for use during two guidewire procedures and to subselectively infuse/deliver diagnostic or therapeutic agents.
The Shadow Guidewire Positioning Catheter is a dual lumen over-the-wire catheter, compatible with a 6F or larger guiding catheter and 0.014" guidewires. The Catheter is designed to support and aim steerable guidewires. The Catheter consists of a distal tip with markers, torque shaft and guidewire introducer. The distal end of the Catheter (the nosecone) has two ports, one to load a tracking guidewire axially (via distal port) and the other for the positioning guidewire to exit radially (via side port). The distal nosecone is coated with a hydrophilic coating.
The Shadow Catheter™ has three markers in the nosecone. The guidewire introducer accommodates an on-axis tracking guidewire, a tracking guidewire flush port, as well as a positioning guidewire introducer port that doubles as a flush port. The handle may be rotated or orientated to adjust or aim the direction of the positioning guidewire exiting the side port at the distal end.
The Catheter is placed in a packaging hoop, sealed in a Tyvek® pouch, and packaged in a shelf carton. There are no diagnostic or therapeutic agents known to be incompatible with the Shadow Catheter™. The Shadow Catheter™ is sterilized with ethylene oxide.
The provided text is a 510(k) Premarket Notification from the FDA for a medical device called the "Shadow Catheter™." It details the device's description, intended use, technological characteristics, and non-clinical tests performed to demonstrate substantial equivalence to a predicate device.
However, the text does not contain any information about acceptance criteria or a study that proves the device meets those criteria related to an AI/ML-based medical device. The product is a physical catheter, and the testing described focuses on physical properties and biocompatibility. There is no mention of a test set, data provenance, expert ground truth, MRMC studies, or standalone algorithm performance.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and reported device performance related to AI/ML: The document discusses physical device performance (e.g., bond strength, kink test, flow rate), but not AI/ML performance metrics.
- Sample size used for the test set and data provenance: No test set is described for AI/ML evaluation.
- Number of experts used to establish ground truth and qualifications: Not applicable, as there's no AI/ML component.
- Adjudication method for the test set: Not applicable.
- MRMC comparative effectiveness study: Not mentioned, as it's not an AI-assisted device.
- Standalone (algorithm only) performance: Not applicable.
- Type of ground truth used: Not applicable for AI/ML.
- Sample size for the training set: No training set is mentioned.
- How the ground truth for the training set was established: Not applicable.
The document explicitly states: "Clinical Study: Not Applicable. The Shadow Catheter™ was not evaluated in a clinical study." and "The results of the testing met the specified acceptance criteria and did not raise new questions of safety or effectiveness; therefore, the subject device is substantially equivalent to the predicate device." This refers to non-clinical, bench testing of the physical catheter.
In summary, the provided FDA 510(k) document is for a physical medical catheter, not an AI/ML device, and thus does not contain the information required to answer your specific questions regarding AI/ML acceptance criteria and study details.
(148 days)
Sim&Size enables visualization of cerebral blood vessels for preoperational planning and sizing for neurovascular interventions and surgery. Sim&Size also allows for the ability to computationally model the placement of neurointerventional devices.
General functionalities are provided such as:
- Segmentation of neurovascular structures
- Automatic centerline detection
- Visualization of X-ray based images for 2D review and 3D reconstruction
- Placing and sizing tools
- Reporting tools
Information provided by the software is not intended in any way to eliminate, replace or substitute for, in whole or in part, the healthcare provider's judgment and analysis of the patient's condition.
Sim&Size is a Software as a Medical Device (SaMD) for simulating neurovascular implantable medical devices. The product enables visualization of cerebral blood vessels for preoperational planning for neurovascular interventions and surgery. It uses an image of the patient produced by 3D rotational angiography. It offers clinicians the possibility of simulating neurovascular implantable medical devices in the artery or in the aneurysm to be treated through endovascular surgery and provides support in the treatment for the sizing and positioning of implantable medical devices.
Each type of implant device is simulated in a simulation module of Sim&Size:
- FDsize, a module that allows pre-operationally planning Flow-Diverter (FD) devices.
- IDsize, a module that allows pre-operationally planning Intrasaccular (ID) devices.
- STsize, a module that allows pre-operationally planning Stent (ST) devices.
- FCsize, a module that allows pre-operationally planning First and filling coils (FC) devices.
Associated with these four modules, a common module is intended to import DICOM and to provide a 3D reconstruction of the vascular tree in the surgical area.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Sim&Size:
Acceptance Criteria and Device Performance
The provided document highlights performance testing without explicitly stating quantitative acceptance criteria. However, the nature of the tests implies that the predicted behavior of the implantable medical device must match its theoretical behavior, that simulated device placement must match the placement observed in a silicone phantom model, and that the device simulations must match the in vitro retrospective cases.
Given the context of a 510(k) submission, the implicit acceptance criterion is that the device's performance is substantially equivalent to the predicate device and that the new features do not raise new questions of safety and effectiveness.
Here's a table based on the types of performance tests conducted:
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| Verification Testing: Predictive behavior matches theoretical behavior of implantable medical devices. | "Verification testing, which compares the predictive behavior of the implantable medical device with its theoretical behavior." (Implies successful verification based on "Conclusion" stating device "performs as intended.") |
| Bench Testing: Simulated device placement matches physical placement in a silicone phantom model. | "Bench testing, which compares the device placement in a silicone phantom model with the device simulation." (Implies successful bench testing based on "Conclusion" stating device "performs as intended.") |
| Retrospective In Vivo Testing: Simulated cases match actual in vivo outcomes (or in vitro representations of retrospective in vivo data). | "Retrospective in vivo testing, which compares the in vitro retrospective cases with the device simulation." (Implies successful retrospective testing based on "Conclusion" stating device "performs as intended.") This suggests the retrospective cases were either in vitro models derived from in vivo data or in vitro analyses of actual in vivo outcomes. The document specifically says "in vitro retrospective cases," which could mean a lab-based re-creation or analysis from real patient data. |
| Overall Performance: New features do not introduce new safety or effectiveness concerns and the device is substantially equivalent to the predicate. | The Conclusion states: "The subject and predicate devices are substantially equivalent. The results of the verification and validation tests demonstrate that the Sim&Size device performs as intended. The new features added to the subject device do not raise new questions of safety and effectiveness." |
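To make the bench-testing comparison described above concrete, the sketch below is a purely hypothetical illustration (the function, values, and tolerance are not from the 510(k) summary) of checking whether a simulated deployed-device length agrees with the length measured in a silicone phantom within a chosen tolerance.

```python
def within_tolerance(simulated_mm: float, measured_mm: float, tol_mm: float) -> bool:
    """Pass/fail check: absolute simulation-vs-bench error within a tolerance."""
    return abs(simulated_mm - measured_mm) <= tol_mm

# Hypothetical example: simulated vs. phantom-measured deployed flow-diverter length.
simulated_length_mm = 17.4
phantom_length_mm = 16.8
print(within_tolerance(simulated_length_mm, phantom_length_mm, tol_mm=1.0))  # True
```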
Study Details:
Based on the provided document, here's what can be inferred about the studies conducted:
- Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated in the document.
- Data Provenance:
- "Retrospective in vivo testing" suggests real-world patient data, but the phrase "in vitro retrospective cases" implies these were lab-based re-creations or analyses of that data. The specific country of origin is not mentioned, but given the company's address (Montpellier, France), it's plausible the data could originate from Europe, although this is not confirmed.
- "Bench testing" uses a "silicone phantom model," which is an experimental setup, not clinical data provenance.
- "Verification testing" involves comparing theoretical behavior, which doesn't involve a dataset in the same way clinical or phantom models do.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The document refers to "theoretical behavior," "silicone phantom model," and "in vitro retrospective cases" as benchmarks, but it doesn't detail how the ground truth for "in vitro retrospective cases" was established or if experts were involved in defining the "theoretical behavior" or validating the phantom results.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- This information is not provided in the document.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC study is explicitly mentioned. The device "enables visualization of cerebral blood vessels" and "allows for the ability to computationally model the placement of neurointerventional devices," but it's stated that "Information provided by the software is not intended in any way to eliminate, replace or substitute for, in whole or in part, the healthcare provider's judgment and analysis of the patient's condition." This indicates it's a tool for assistance, but the document does not detail studies on human reader performance improvement with this AI.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The "Verification testing," "Bench testing," and "Retrospective in vivo testing" (comparing simulations to "in vitro retrospective cases") all describe methods that would assess the algorithm's standalone performance without a human in the loop for the actual comparison/measurement, although human input (e.g., in segmentation, placing/sizing tools) is part of the device's intended use. The wording "compares the predictive behavior... with its theoretical behavior" and "compares the device placement... with the device simulation" explicitly refers to the device's performance, implying a standalone assessment of the algorithmic component.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Theoretical Behavior: Used for "Verification testing" (e.g., physical laws, engineering models of device deployment).
- Physical Phantom Model: Used for "Bench testing" (measurements from a physical silicone model).
- "In vitro retrospective cases": Used for "Retrospective in vivo testing." This implies a ground truth derived from actual patient data, analyzed or re-created in a laboratory (in vitro). It's not explicitly stated if this ground truth was pathology or outcomes data, but rather a representation of the in vivo reality.
- The sample size for the training set:
- This information is not provided in the document. This section focuses on validation testing, not the training of any underlying models.
- How the ground truth for the training set was established:
- This information is not provided as the document does not detail the training process.
(25 days)
TumorSight Viz is intended to be used in the visualization and analysis of breast magnetic resonance imaging (MRI) studies for patients with biopsy proven early-stage or locally advanced breast cancer. TumorSight Viz supports evaluation of dynamic MR data acquired from breast studies during contrast administration. TumorSight Viz performs processing functions (such as image registration, subtractions, measurements, 3D renderings, and reformats).
TumorSight Viz also includes user-configurable features for visualizing findings in breast MRI studies. Patient management decisions should not be made based solely on the results of TumorSight Viz.
TumorSight Viz is an image processing system designed to assist in the visualization and analysis of breast DCE-MRI studies.
TumorSight reads DICOM magnetic resonance images. TumorSight processes and displays the results on the TumorSight web application.
Available features support:
- . Visualization (standard image viewing tools, MIPs, and reformats)
- . Analysis (registration, subtractions, kinetic curves, parametric image maps, segmentation and 3D volume rendering)
- . Communication and storage (DICOM import, retrieval, and study storage)
The TumorSight system consists of proprietary software developed by SimBioSys, Inc. hosted on a cloud-based platform and accessed on an off-the-shelf computer.
Here's a breakdown of the acceptance criteria and study details for the TumorSight Viz device, based on the provided document:
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria implicitly relate to the device's performance in comparison to expert variability and the predicate device. The study aims to demonstrate that the error in measurements produced by TumorSight Viz is consistent with the variability observed among expert radiologists.
The table below summarizes the performance metrics from the validation testing, which serves as the reported device performance against the implicit acceptance criterion of being comparable to inter-radiologist variability.
| Measurement Description | Units | Acceptance Criteria (Implicit: Comparable to Inter-Radiologist Variability) | Reported Device Performance (Mean Abs. Error ± Std. Dev.) |
|---|---|---|---|
| Tumor Volume (n=184) | cubic centimeters (cc) | Error consistent with inter-radiologist variability (NA for direct comparison) | 5.22 ± 15.58 |
| Tumor-to-breast volume ratio (n=184) | % | Error consistent with inter-radiologist variability (NA for direct comparison) | 0.51 ± 1.48 |
| Tumor longest dimension (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 1.60 ± 1.93 |
| Tumor-to-nipple distance (n=200) | centimeters (cm) | Error consistent with inter-radiologist variability | 1.20 ± 1.37 |
| Tumor-to-skin distance (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 0.63 ± 0.61 |
| Tumor-to-chest distance (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 0.91 ± 1.14 |
| Tumor center of mass (n=184) | centimeters (cm) | Error consistent with inter-radiologist variability (NA for direct comparison) | 0.72 ± 1.42 |
Segmentation Accuracy:
| Performance Measurement | Metric | Acceptance Criteria (Implicit: Adequate for intended use) | Reported Device Performance (Mean ± Std. Dev.) |
|---|---|---|---|
| Tumor segmentation (n=184) | Volumetric Dice | Adequate for intended use | 0.75 ± 0.24 |
| Tumor segmentation (n=184) | Surface Dice | Adequate for intended use | 0.88 ± 0.24 |
Comparison to Predicate Device and Inter-Radiologist Variability:
| Performance Measurement | N | Metric | Predicate/TumorSight Viz (Mean ± Std. Dev.) | TumorSight Viz/Ground Truth (Mean ± Std. Dev.) | Predicate/Ground Truth (Mean ± Std. Dev.) | Inter-radiologist Variability (Mean ± Std. Dev.) |
|---|---|---|---|---|---|---|
| Longest Dimension | 197 | Abs. Distance Error | 1.33 cm ± 1.80 cm | 1.59 cm ± 1.93 cm | 1.27 cm ± 1.34 cm | 1.30 cm ± 1.34 cm |
| Tumor to Skin | 197 | Abs. Distance Error | 0.24 cm ± 0.39 cm | 0.61 cm ± 0.60 cm | 0.55 cm ± 0.48 cm | 0.51 cm ± 0.48 cm |
| Tumor to Chest | 197 | Abs. Distance Error | 0.64 cm ± 1.13 cm | 0.89 cm ± 1.12 cm | 0.69 cm ± 0.88 cm | 0.97 cm ± 1.16 cm |
| Tumor to Nipple | 195 | Abs. Distance Error | 0.89 cm ± 1.03 cm | 1.15 cm ± 1.30 cm | 1.01 cm ± 1.23 cm | 1.03 cm ± 1.30 cm |
| Tumor Volume | 197 | Abs. Volume Error | 4.42 cc ± 11.03 cc | 5.22 cc ± 15.58 cc | 6.50 cc ± 21.40 cc | NA |
The study concludes that "all tests met the acceptance criteria, demonstrating adequate performance for our intended use," and that the "differences in error between the mean absolute errors (MAE) for the predicate and subject device are clinically acceptable because they are on the order of one to two voxels for the mean voxel size in the dataset. These differences are clinically insignificant."
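The error statistics in these tables are mean absolute errors with standard deviations, computed per case between the device measurement and the radiologist-consensus ground truth. The following sketch shows that calculation on hypothetical values; it is not the submission's data or code.

```python
import numpy as np

def mae_and_sd(device_vals, truth_vals):
    """Mean absolute error and its standard deviation across cases."""
    errors = np.abs(np.asarray(device_vals, dtype=float) - np.asarray(truth_vals, dtype=float))
    return errors.mean(), errors.std(ddof=1)

# Hypothetical longest-dimension measurements (cm) for five cases.
device = [3.1, 4.8, 2.2, 6.0, 5.1]
ground_truth = [2.8, 4.0, 2.5, 7.2, 5.0]
mae, sd = mae_and_sd(device, ground_truth)
print(f"{mae:.2f} cm ± {sd:.2f} cm")
```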
2. Sample Size and Data Provenance
- Test Set (Validation Dataset) Sample Size: 216 patients, corresponding to 217 samples (when accounting for bilateral disease).
- Data Provenance:
- Country of Origin: U.S. (from more than 7 clinical sites).
- Retrospective/Prospective: Not explicitly stated, but the description of data collection and review for ground truth suggests it was retrospective. The data was "obtained" and "collected," implying pre-existing data.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Three (3)
- Qualifications: U.S. Board Certified radiologists. No specific years of experience are mentioned.
4. Adjudication Method for the Test Set
- Method: Majority Consensus (2+1). For each case, two radiologists independently reviewed measurements and segmentation appropriateness. "In cases where the two radiologists did not agree on whether the segmentation was appropriate, a third radiologist provided an additional opinion and established a ground truth by majority consensus."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No, an MRMC study comparing human readers with and without AI assistance was not performed as described in the document. The study primarily focused on the standalone performance of the AI algorithm (TumorSight Viz) and its comparison to the predicate device, with ground truth established by expert radiologists. It did compare the device's measurements to inter-radiologist variability, but not in a human-in-the-loop setup.
- Effect Size: Not applicable, as an MRMC comparative effectiveness study was not performed.
6. Standalone (Algorithm Only) Performance
- Was it done? Yes. The performance metrics listed in the tables (Mean Absolute Error, Volumetric Dice, Surface Dice) are indicators of the standalone performance of the TumorSight Viz algorithm against the established ground truth.
7. Type of Ground Truth Used
- Type: Expert Consensus. The ground truth was established by three (3) U.S. Board Certified radiologists through a defined review and adjudication process (majority consensus).
- For measurements: Radiologists measured various characteristics including longest dimensions and tumor to landmark distances.
- For segmentation: Radiologists reviewed and deemed the candidate segmentation "appropriate."
8. Sample Size for the Training Set
- Training Dataset: 676 samples.
- Tuning Dataset: 240 samples.
- Total Patients for Training and Tuning: 833 patients (corresponding to 916 samples total for training and tuning).
9. How the Ground Truth for the Training Set was Established
The document states that the training and tuning data were used to "train and tune the device," but it does not explicitly describe how the ground truth for this training data was established. It only details the ground truth establishment for the validation dataset. It is common for deep learning models to require labeled data for training, but the process for obtaining these labels for the training set is not provided here.
(250 days)
TumorSight Viz is intended to be used in the visualization and analysis of breast magnetic resonance imaging (MRI) studies for patients with biopsy proven early-stage or locally advanced breast cancer. TumorSight Viz supports evaluation of dynamic MR data acquired from breast studies during contrast administration. TumorSight Viz performs processing functions (such as image registration, subtractions, measurements, 3D renderings, and reformats).
TumorSight Viz also includes user-configurable features for visualizing and analyzing findings in breast MRI studies. Patient management decisions should not be made based solely on the results of TumorSight Viz.
TumorSight Viz is an image processing system designed to assist in the visualization and analysis of breast DCE-MRI studies.
TumorSight reads DICOM magnetic resonance images. TumorSight processes and displays the results on the TumorSight web application.
Available features support:
- Visualization (standard image viewing tools, MIPs, and reformats)
- Analysis (registration, subtractions, kinetic curves, parametric image maps, segmentation and 3D volume rendering)
- Communication and storage (DICOM import, retrieval, and study storage)
The TumorSight system consists of proprietary software developed by SimBioSys, Inc. hosted on a cloud-based platform and accessed on an off-the-shelf computer.
Here's a summary of the acceptance criteria and study details for TumorSight Viz, based on the provided text:
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by demonstrating that the device's performance (Mean Absolute Error and Dice Coefficients) is comparable to inter-radiologist variability and the predicate device, CADstream, and that "all tests met the acceptance criteria".
| Measurement Description | Units | Acceptance Criteria (Implied) | Validation Testing (Mean Abs. Error ± Std. Dev.) |
|---|---|---|---|
| Tumor Volume (n=157) | cubic centimeters (cc) | Comparable to inter-radiologist variability | 6.48 ± 12.67 |
| Tumor-to-breast volume ratio (n=157) | % | Comparable to inter-radiologist variability | 0.56 ± 0.93 |
| Tumor longest dimension (n=163) | centimeters (cm) | Comparable to inter-radiologist variability | 1.48 ± 1.46 |
| Tumor-to-nipple distance (n=161) | centimeters (cm) | Comparable to inter-radiologist variability | 1.00 ± 1.03 |
| Tumor-to-skin distance (n=163) | centimeters (cm) | Comparable to inter-radiologist variability | 0.63 ± 0.60 |
| Tumor-to-chest distance (n=163) | centimeters (cm) | Comparable to inter-radiologist variability | 0.94 ± 1.34 |
| Tumor center of mass (n=157) | centimeters (cm) | Comparable to inter-radiologist variability | 0.735 ± 1.26 |
| Performance Measurement | Metric | Acceptance Criteria (Implied) | Validation Testing (Mean ± Std. Dev.) |
|---|---|---|---|
| Tumor segmentation (n=157) | Volume Dice | Sufficient for indicating location, volume, surface agreement | 0.676 ± 0.289 |
| Tumor segmentation (n=157) | Surface Dice | Sufficient for indicating location, volume, surface agreement | 0.873 ± 0.264 |
Additionally, for the direct comparison with the CADstream predicate device and ground truth:
| Performance Measurement | Metric | TumorSight Viz / Ground Truth (Mean Abs. Error ± Std. Dev.) | CADStream / Ground Truth (Mean Abs. Error ± Std. Dev.) | Inter-radiologist Variability (Mean Abs. Error ± Std. Dev.) |
|---|---|---|---|---|
| Longest Dimension (n=136) | Abs. Distance Error | 1.40 cm ± 1.43 cm | 1.11 cm ± 1.52 cm | 1.17 cm ± 1.38 cm |
| Tumor to Skin (n=136) | Abs. Distance Error | 0.61 cm ± 0.46 cm | 0.49 cm ± 0.56 cm | 0.49 cm ± 0.54 cm |
| Tumor to Chest (n=136) | Abs. Distance Error | 0.77 cm ± 0.90 cm | 1.37 cm ± 1.01 cm | 0.79 cm ± 1.01 cm |
| Tumor to Nipple (n=134) | Abs. Distance Error | 0.98 cm ± 1.06 cm | 0.80 cm ± 0.86 cm | 0.82 cm ± 0.98 cm |
| Tumor Volume (n=134) | Abs. Volume Error | 6.69 cc ± 13.53 cc | 8.09 cc ± 17.42 cc | N/A (not provided for inter-radiologist variability) |
The document states: "The mean absolute error and variability between the automated measurements (Validation Testing) and ground truth for tumor volume (measured in cc) and landmark distances (measured in cm) was similar to the variability between device-to-radiologist measurements and inter-radiologist variability. This demonstrates that the error in measurements is consistent to the variability between expert readers." It also notes: "We found that all tests met the acceptance criteria, demonstrating adequate performance for our intended use." And for the comparison to the predicate: "The differences in error between the mean absolute errors (MAE) for the predicate and subject device are clinically acceptable because they are on the order of one to two voxels for the mean voxel size in the dataset. These differences are clinically insignificant."
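The landmark distances reported in these tables (tumor-to-skin, tumor-to-nipple, tumor-to-chest) can in principle be derived from a tumor segmentation mask, a landmark location, and the voxel spacing. The sketch below is a simplified, hypothetical illustration of such a measurement, not the method described in the submission.

```python
import numpy as np

def min_distance_to_point(mask: np.ndarray, point_idx, spacing_mm) -> float:
    """Shortest Euclidean distance (cm) from any tumor voxel to a landmark voxel index."""
    spacing = np.asarray(spacing_mm, dtype=float)
    coords = np.argwhere(mask) * spacing                    # voxel indices -> mm
    target = np.asarray(point_idx, dtype=float) * spacing   # landmark index -> mm
    return float(np.linalg.norm(coords - target, axis=1).min()) / 10.0  # mm -> cm

# Hypothetical tumor mask, nipple landmark, and voxel spacing (z, y, x) in mm.
mask = np.zeros((40, 128, 128), dtype=bool)
mask[18:24, 60:75, 60:75] = True
nipple_index = (20, 120, 70)
print(round(min_distance_to_point(mask, nipple_index, spacing_mm=(3.0, 1.0, 1.0)), 2), "cm")
```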
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Validation (Test Set): 161 patients, corresponding to 163 samples (accounting for bilateral disease).
- Data Provenance: Obtained from six (6) clinical sites in the U.S. All patients had pathologically confirmed invasive, early stage or locally advanced breast cancer. The data was collected to ensure adequate coverage of MRI manufacturer and field strength and similarity with the broader U.S. population for patient age, breast cancer subtype, T stage, histologic subtype, and race/ethnicity. This data is retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Seven (7) U.S. Board Certified radiologists.
- Qualifications of Experts: U.S. Board Certified radiologists. Specific experience level (e.g., years of experience) is not explicitly stated beyond "expert readers."
4. Adjudication Method for the Test Set
- Adjudication Method: For each case, two radiologists independently measured various characteristics. If the two radiologists did not agree on whether the candidate segmentation was appropriate, a third radiologist provided an additional opinion and established a ground truth by majority consensus (2+1 adjudication).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- The document describes a performance comparison between TumorSight Viz, CADstream (predicate), and ground truth, as well as inter-radiologist variability. However, it does not describe an MRMC comparative effectiveness study directly measuring how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of TumorSight Viz and its comparability to a predicate device and human variability.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done
- Yes, a standalone study was done. The reported performance metrics (Mean Absolute Error, Dice Coefficients) for TumorSight Viz against a radiologist-established ground truth represent the standalone performance of the algorithm. The document explicitly states: "The measurements generated from the device result directly from the segmentation methodology and are an inferred reflection of the performance of the deep learning algorithm."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Pathologically confirmed breast cancer cases (for patient inclusion) combined with expert consensus of U.S. Board Certified radiologists for specific image measurements and segmentation appropriateness. The ground truth was established by two radiologists measuring characteristics, with a third radiologist adjudicating disagreements by majority consensus.
8. The sample size for the training set
- Training Set Sample Size: 390 samples (from 736 patients mentioned for training and tuning).
- Tuning Set Sample Size: 376 samples (from 736 patients mentioned for training and tuning).
9. How the ground truth for the training set was established
- The document states that 736 patients (766 samples) were used for "training and tuning the device." It explicitly mentions that for the validation set, "Seven (7) U.S. Board Certified radiologists reviewed 163 validation samples to establish the ground truth for the dataset..."
- The method for establishing ground truth for the training set is not explicitly detailed in the provided text. It is generally implied that such ground truth would also be established by experts, but the specifics are not given for the training portion.
(87 days)
SIMDA Abutments are intended for use with dental implants as a support for single or multiple tooth prostheses in the maxilla or mandible of a partially or fully edentulous patient.
SIMDA Abutment is made of titanium alloy (Ti-6Al-4V ELI, ASTM F136) intended for use as an aid in prosthetic restoration. It consists of Pre-Milled Blank and Ti-Base abutment. It has a premanufactured connection interface that fits directly to an endosseous dental implant.
This FDA 510(k) summary describes a dental device, the SIMDA Abutment. However, it does not include information that directly addresses the specific questions you've asked about acceptance criteria and a study proving a device meets them, particularly for an AI/ML powered device.
Here's why and what information is provided:
The document is a premarket notification for a Class II medical device, specifically dental implant abutments. These are physical components used in dental restorations, not an AI/ML powered diagnostic or therapeutic device. Therefore, the concepts of "acceptance criteria for an AI model," "test set," "ground truth," "MRMC studies," "effect size of human readers with AI assistance," or "standalone algorithm performance" are not applicable to this submission.
The "studies" mentioned are non-clinical (mechanical, biological) tests demonstrating the physical safety and performance of the abutments and their compatibility with existing dental implant systems.
Here's a breakdown of the relevant information provided, framed as closely as possible to your request, but acknowledging the device type:
Device: SIMDA Abutments (K232271)
Device Type: Endosseous Dental Implant Abutment (physical medical device, not AI/ML powered)
1. A table of acceptance criteria and the reported device performance
The document sets design limits and then demonstrates conformity through non-clinical testing. The "acceptance criteria" here are rather design specifications and performance standards for dental abutments.
| Acceptance Criteria (Design Parameters/Limitations) | Reported Device Performance (Demonstrated through testing) |
|---|---|
| Pre-Milled Blank (for Patient-specific abutment): | |
| - Minimum and Maximum Gingival (Cuff) Height: 0.5~5mm | "The minor difference between the two products in the design parameters [...] was evaluated as part of the performance testing and was determined to not impact the performance of the device." - Implies device meets these parameters and performs acceptably. |
| - Minimum and Maximum diameter at abutment/implant interface: Ø4.0~Ø8.0 | |
| - Minimum and Maximum length of abutment: 4.5~13mm | |
| - Minimum and Maximum length of abutment post (length above the abutment collar/gingival height): 4~8mm | |
| - Minimum wall thickness at abutment/implant interface: 0.4mm (Predicate: 0.4mm, Proposed: 0.39~0.55mm) | "This change in technological characteristics [minimum thickness] was evaluated as part of the performance testing and was determined to not impact the performance of the device." - Indicates the slightly wider range for the proposed device (0.39-0.55mm) still met performance requirements. |
| - Minimum and Maximum abutment angle: 0~25° | |
| Ti-Base (for Zirconia top-half): | |
| - Post Angle (°): 0~15 | Identical to predicate. Non-clinical testing results "demonstrated the substantial equivalence with the primary predicate." |
| - Cuff Height (mm): 0.5~5.0 | |
| - Post Length (mm): 4.0~6.0 | |
| - Diameter (Ø, mm): 5.0~8.0 | |
| - Thickness (mm): 0.4 | |
| General Performance: | |
| - Fatigue Resistance: Must meet ISO 14801 and FDA special controls guidance. | Fatigue testing followed ISO 14801 and the FDA special controls guidance document. Results "demonstrated the substantial equivalence with the primary predicate." |
| - Sterilization Efficacy: Must meet ISO 17665-1:2006, 17665-2:2009, ANSI/AAMI ST79:2010. | End User Steam Sterilization Test according to ISO 17665-1:2006, 17665-2:2009 and ANSI/AAMI ST79:2010. Results "demonstrated the substantial equivalence with the primary predicate." |
| - Biocompatibility: Must meet ISO 10993-1:2009, ISO 10993-5:2009, and ISO 10993-10:2010. | Biocompatibility tests according to ISO 10993-1:2009, ISO 10993-5:2009, and ISO 10993-10:2010. Results "demonstrated the substantial equivalence with the primary predicate." |
| - MRI Safety: Must address magnetically induced displacement force and torque (per FDA guidance "Testing and Labeling Medical Devices for Safety in the Magnetic Resonance (MR) Environment"). | "Non-clinical worst-case MRI review was performed... using scientific rationale and published literature... Rationale addressed parameters per the FDA guidance... including magnetically induced displacement force and torque." - Implies the device is deemed safe in the MR environment based on this review. |
| - Compatibility with OEM Implant Systems: Precision implant/abutment interface. | Dimensional analysis and reverse engineering of critical features... Cross-sectional images were provided to demonstrate substantially equivalent compatibility. The testing assessed implant-to-abutment compatibility and established substantial equivalence of the proposed device to the predicate device. |
2. Sample sized used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- This information is not provided. For physical tests (fatigue, biocompatibility, sterilization), sample sizes would typically be determined by the relevant ISO standards.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. "Ground truth" in the context of AI/ML is not relevant here. The "truth" is established by physical measurement, adherence to material standards, and documented mechanical performance.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. Adjudication methods are typically for subjective assessments, whereas these are objective physical tests.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- Not applicable. This is a physical dental device, not an AI-assisted diagnostic tool.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This is a physical dental device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For physical tests, the "ground truth" is defined by the ISO standards and FDA guidance documents to which the device is tested. This includes established methods for fatigue testing, biocompatibility evaluation, and sterilization efficacy. For compatibility, it's about precise dimensional matching and mechanical fit to existing OEM implant systems.
8. The sample size for the training set
- Not applicable. This is a physical dental device, not an AI/ML powered device that requires a "training set."
9. How the ground truth for the training set was established
- Not applicable. See point 8.
In summary, this document is for a traditional medical device (dental abutments), and thus the questions formulated for an AI/ML device do not directly apply. The acceptance criteria are based on established engineering and materials standards, and performance is demonstrated through non-clinical laboratory testing rather than clinical or observational studies on diagnostic performance.
Ask a specific question about this device
(90 days)
Chemfort® Catheter Adaptor is a single use, sterile Closed System Transfer Device (CSTD) that mechanically prohibits the release of drugs, including antineoplastic and hazardous drugs, in vapor, aerosol or liquid form during administration, thus minimizing exposure of individuals, healthcare personnel, and the environment to hazardous drugs. Chemfort® Catheter Adaptor prevents the introduction of microbial and airborne contaminants into the drug or fluid path for up to 7 days.
The Chemfort® Catheter Adaptor enables drug transfer to the catheter, thus allowing drug administration to the patient's urinary bladder. The use of elastomeric seals of the Chemfort® Catheter Adaptor prevents hazardous drug contamination of healthcare professionals, the patient, and the environment.
The Chemfort® Catheter Adaptor is an addition to the cleared Chemfort® system (K192866). The Catheter Adaptor provides closed system protection during the following procedures:
- a) Drug transfer from a standard luer lock syringe to the Catheter Adaptor through the Chemfort® Syringe Adaptor (K192866).
- b) Closed system drug administration to the urinary bladder, through a urinary catheter. The Catheter Adaptor fits a wide range of standard catheter sizes and converts an open catheter connection to a closed Chemfort® connection.
The Chemfort® Catheter Adaptor allows the healthcare professional to have the option for safe drug administration to the urinary catheter and safe disconnection of the Chemfort® Syringe Adaptor (K192866) from the patient's urinary catheter.
The provided text describes a medical device submission (K231286) for the Chemfort® Catheter Adaptor. This document is a 510(k) summary submitted to the FDA to demonstrate substantial equivalence to a legally marketed predicate device.
Crucially, the provided text does NOT contain information about acceptance criteria for a study, nor does it detail a study that proves the device meets specific performance criteria in the way requested.
The text mentions several performance tests were conducted to demonstrate compliance with standards and intended function, but it does not present the acceptance criteria for these tests or the reported device performance against those criteria. It lists various ISO standards and USP tests that the device complies with, but this is different from presenting specific acceptance criteria and detailed study results.
Therefore, I cannot fulfill all parts of your request based on the provided text. I can, however, extract relevant information about the device and the nature of its evaluation.
Here's what can be extracted based on the provided document:
1. A table of acceptance criteria and the reported device performance
- Cannot be provided. The document states that "Simplivia Healthcare conducted several performance tests to demonstrate that the Chemfort® Catheter Adaptor complies with the following standards and that it functions as intended." However, it does not present a table with specific acceptance criteria (e.g., "Drug leakage must be less than X mg") and the quantitative reported device performance for these criteria. It only lists the standards against which various tests were performed (e.g., ISO 10993 series for biocompatibility, ISO 11135 for sterilization, USP tests for endotoxins and particulate matter).
2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Cannot be provided definitively. The document mentions "performance tests" but does not detail their methodology, including sample sizes, nor does it specify the provenance (country of origin, retrospective/prospective nature) of the data from these tests.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable / Cannot be provided. This type of information is relevant for studies involving human interpretation (e.g., diagnostic imaging studies). The Chemfort® Catheter Adaptor is a physical medical device (Closed System Transfer Device - CSTD), and its performance evaluation would typically involve laboratory testing rather than expert-established ground truth on diagnostic cases.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable / Cannot be provided. As above, adjudication methods are typically used in studies where human readers are involved in making subjective assessments or interpretations, which is not the nature of the device testing described here.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Not applicable. This device is not an AI-powered diagnostic tool, so an MRMC comparative effectiveness study involving human readers and AI assistance is not relevant.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Not applicable. This device is not an algorithm. Its "performance" refers to its mechanical and biological integrity, and its ability to prevent contamination and drug release.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not applicable / Implicitly based on technical standards. For a physical device like this, "ground truth" refers to meeting established engineering, chemical, and biological specifications defined by standards (e.g., a device must be sterile, meaning it passes a sterility test; it must not leak hazardous drugs, meaning it passes a containment test). The document indicates compliance with various ISO and USP standards which define quantitative and qualitative benchmarks for device performance.
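As a hedged illustration of one such quantitative benchmark (standard sterilization-validation conventions, not figures taken from this 510(k) summary): terminal sterilization of devices like this is typically validated to a sterility assurance level (SAL) of 10^-6, with microbial inactivation modeled as

$$ N(t) = N_{0}\cdot 10^{-t/D} $$

where N_0 is the starting population of a resistant biological indicator, D is the exposure time producing a one-log (90%) reduction, and t is the total exposure time. Under the common overkill approach with N_0 = 10^6, reaching SAL 10^-6 corresponds to a 12-log reduction, i.e., an exposure of t = 12D.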
8. The sample size for the training set
- Not applicable. This device is not an AI algorithm that requires a training set.
9. How the ground truth for the training set was established
- Not applicable. This device is not an AI algorithm that requires a training set.
Summary of what the document does provide regarding the device's evaluation:
The document states that the Chemfort® Catheter Adaptor has similar indications for use, technological characteristics, and principles of operation as its predicate device, Tevadaptor® Catheter Adaptor (K180489).
It highlights one specific difference in claims:
- Predicate Device (Tevadaptor®): Tested and proven to prevent contaminants from entering the drug or fluid path for up to 3 days.
- Proposed Device (Chemfort® Catheter Adaptor): Tested and approved to prevent the introduction of microbial and airborne contaminants into the drug or fluid path for up to 7 days.
The document lists the following standards against which performance tests were conducted (but does not provide the specific acceptance criteria or results for each):
- Biocompatibility:
- ISO 10993-1:2018 (Biological Evaluation - General)
- ISO 10993-4:2017 (Interactions with blood)
- ISO 10993-5:2009 (In vitro cytotoxicity)
- ISO 10993-10:2021 (Irritation and skin sensitization)
- ISO 10993-11:2017 (Systemic toxicity)
- ISO 10993-18:2020 (Chemical characterization)
- Sterilization:
- ISO 10993-7:2008/Amd 1:2019 (Ethylene oxide sterilization residuals)
- ISO 11135:2014 + Amd.1:2018 (Ethylene oxide sterilization requirements)
- Packaging:
- ISO 11607-1:2019 (Packaging for terminally sterilized medical devices)
- Risk Management:
- ISO 14971:2019 (Application of risk management to medical devices)
- Pharmacopeial Tests:
- USP <85> (Bacterial Endotoxins Test)
- USP <161> (Transfusion and Infusion Assemblies and Similar Medical Devices)
- USP <788> (Particulate Matter in Injections)
The conclusion is that "Performance data demonstrated that the Chemfort® Catheter Adaptor is as safe and as effective as its predicate and does not raise any new safety and effectiveness issues." However, the specifics of these performance data (acceptance criteria, methodologies, sample sizes, and quantitative results) are not provided in this 510(k) summary.
Ask a specific question about this device