Search Results
Found 3 results
510(k) Data Aggregation
(155 days)
The BraidE Embolization Assist Device is indicated for use in the peripheral vasculature as a temporary endovascular device to assist in the coil embolization of wide-necked peripheral aneurysms with a neck width ≤ 10 mm. A wide-necked peripheral aneurysm is defined as one with a neck width > 4 mm or a dome-to-neck ratio < 2.
The BraidE Embolization Assist Device is a sterile, single-use endovascular device intended to provide temporary assistance for the coil embolization of wide-necked peripheral aneurysms. The BraidE consists of a nitinol braided mesh, a stainless steel shaft, a nitinol core wire, and a handle. The braided mesh at the distal portion of the device is shown in Figure 2. The shaft connects the mesh to the handle via the core wire, which runs inside the shaft from the distal end of the mesh to the slider activation element in the handle. The mesh expands when the physician pulls the slider. Because the wires of the mesh are completely radiopaque, the physician can visualize the mesh under fluoroscopy and control it until it conforms to the aneurysm neck morphology and vessel requirements.
This document describes the regulatory acceptance of the BraidE Embolization Assist Device, leveraging data primarily from the Comaneci Embolization Assist Device due to shared design features. The core of the acceptance criteria and supporting studies are presented below:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are derived from the "Special Controls" section, which outlines the requirements for clinical, animal, and non-clinical performance, as well as biocompatibility and labeling. The reported device performance is extracted from the "Summary of Clinical Information" and "Summary of Nonclinical/Bench Studies" sections.
| Acceptance Criteria (Special Control) | Reported Device Performance |
|---|---|
| 1. Clinical Performance Testing: Must demonstrate the device performs as intended under anticipated conditions of use and evaluate all adverse events (tissue/vessel damage, thromboembolic events, coil ensnarement). | Neurovascular Retrospective Study (Comaneci Device):- N=63 patients (64 intracranial aneurysms) treated.- Technical Success: 93.65% (59/63 patients) had successful coil embolization without coil entanglement, ensnarement, prolapse, or protrusion into the parent vessel. (Table 6)- Adverse Events: 11.1% (7/63) of patients experienced a serious neurological AE within 3 months post-procedure. (Table 5) Specific AEs detailed in Table 4 (e.g., symptomatic thrombotic event, vasospasm, hemorrhage). No mortality or subject device-coil entanglements reported. Peripheral Case Studies (Comaneci Device):- 6 patients with peripheral VRAAs reported across 3 publications.- Effectiveness: All visceral aneurysms completely excluded/occluded. All but one procedure concluded with complete patency of parent and branch vessels. (Table 7)- Safety: Generally no immediate or periprocedural complications reported, except for one case of coil entanglement leading to non-target embolization of the splenic artery. (Table 7) |
| 2. Animal Testing: Must demonstrate device delivery to target, compatibility with coils, and evaluate adverse events (vessel/tissue damage). | Rabbit Model Study (Comaneci Device):- Evaluated acute (4 days) and chronic (28 days) safety and performance.- Delivery & Performance: Successful delivery and coil embolization in 23 aneurysms (20 animals). No post-procedural mortalities, no angiographically-visible coil protrusions (acute). Patent parent vessels with normal aneurysmal sac embolization (chronic).- Adverse Events: No morbidity, thrombosis, infection, hemorrhage, or downstream ischemia (acute). Mild embolic coil protrusion in 2 Comaneci-treated aneurysms (chronic). No perforations, dissections, erosions, or thrombus formation in device contact zones. Absence of thrombus in distal skeletal muscles. |
| 3. Non-clinical Performance Testing: Demonstrates device performs as intended, including: a. Mechanical testing (tensile, torsional, compressive, tip deflection forces). b. Mechanical testing (radial forces). c. Simulated use testing (delivery in tortuous vasculature, coil compatibility). d. Dimensional verification. e. Radiopacity testing. | Bench Testing (leveraged from Comaneci):- a. Mechanical: Tensile Strength (verified compliance of joints), Kink Resistance (ability to reach tortuous vasculature), Tip Flexibility (maximum force deflected), Tracking Force/Torque (withstand typical forces/torquing). (Table 2)- b. Radial Forces: Radial Force/Crush (withstand external forces, retain integrity, measure outward forces to ensure no serious vessel damage). (Table 2)- c. Simulated Use: Functional and Microcatheter Compatibility (delivery in recommended microcatheter through tortuous silicone model), Simulated Use (device performance in in vitro anatomical model through femoral artery to target site). (Table 2, 4)- d. Dimensional Verification: Verified various dimensional attributes. (Table 2)- e. Radiopacity: Clinical study evaluated radiopacity (can be visualized under fluoroscopic guidance). |
| 4. Biocompatibility: Patient-contacting components must be demonstrated to be biocompatible. | Biocompatibility Testing (leveraged from Comaneci):- Classified as external communicating, limited contact (<24 hours), blood-contacting device.- Evaluated hemocompatibility (complement activation, thrombogenicity, indirect/direct hemolysis), cytotoxicity, sensitization, intracutaneous reactivity, acute systemic toxicity, material-mediated pyrogenicity per ISO 10993-1. (Table 1) |
| 5. Sterility and Pyrogenicity: Performance data must support. | Sterility & Pyrogenicity:- Adopted into existing, validated ethylene oxide (EtO) sterilization cycle (AAMI TIR28:2016) in accordance with ISO 11135:2014, achieving SAL of at least 10^-6. - Material-mediated pyrogenicity testing performed (leveraged from Comaneci). |
| 6. Shelf Life: Performance data must support continued sterility, package integrity, and device functionality over labeled shelf life. | Shelf Life: Established at 2.5 years based on real-time and accelerated aging (ASTM F1980-07). Post-aging, package integrity tested per ASTM F1929, F2096, F1886. (leveraged from Comaneci). |
| 7. Labeling: Must include detailed technical parameters, clinical testing summary, and shelf-life. | The labeling includes instructions for use, expertise needed, detailed technical parameters (including compatible delivery catheter dimensions), summary of clinical testing results (including technical success, complications, AEs), and shelf life. |
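The headline clinical rates in the table above follow directly from the raw counts. As a minimal sketch, the proportions can be reproduced (the Wilson confidence interval is an illustrative addition for context, not part of the submission):

```python
from math import sqrt

def rate_pct(events: int, n: int, decimals: int = 2) -> float:
    """Simple binomial proportion expressed as a percentage."""
    return round(100.0 * events / n, decimals)

def wilson_ci(events: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - half, center + half)

# Technical success: 59 of 63 patients (Table 6) -> 93.65%
success_pct = rate_pct(59, 63)
# Serious neurological AEs within 3 months: 7 of 63 patients (Table 5) -> 11.1%
ae_pct = rate_pct(7, 63, decimals=1)
```

With only 63 patients, the Wilson interval around the 93.65% technical success rate is fairly wide (roughly 85–98%), which is worth keeping in mind when reading point estimates from small retrospective cohorts.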
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- Clinical (Neurovascular): 63 patients / 64 intracranial aneurysms.
- Clinical (Peripheral): 6 patients across 3 case studies.
- Animal: 20 rabbits (used to create 23 aneurysms).
- Data Provenance:
- Clinical (Neurovascular): Retrospective collection from two sites outside the United States: Walton Center in Liverpool, United Kingdom, and University Hospital St. Ivan Rilski in Sofia, Bulgaria. Data collected between March and December 2017.
- Clinical (Peripheral): Reported in the clinical literature (3 publications) - specifics on country of origin for these individual cases are not provided.
- Animal: Conducted in a rabbit model (Good Laboratory Practice (GLP) standards). Location not specified.
- Non-clinical/Bench: Performed internally or by contract labs.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Clinical (Neurovascular - Retrospective Study):
- Imaging Data (Technical Success and Safety): Independently analyzed by the Angiography and Noninvasive Imaging Core Lab at the University of California - Los Angeles (UCLA) in Los Angeles, California. No specific number or qualifications of individual experts from the core lab are provided, but a core lab typically implies specialized radiologists/neuro-interventionalists.
- Adverse Events (AE): Adjudicated independently by the Department of Neurology at the University of Southern California (USC). No specific number or qualifications of individual experts from the department are provided, but a neurology department implies neurologists with relevant expertise.
- Clinical (Peripheral - Case Studies): The ground truth for effectiveness (aneurysm exclusion/occlusion, patency) and safety observations would have been established by the treating physicians and reported in their publications. No specific number or qualifications of experts beyond the authors of these publications are provided.
- Animal Study: Aneurysmal healing was characterized by light microscopy and en face assessment by scanning electron microscopy (SEM). Histologic indicators of vessel wall healing were determined by light microscopy and SEM. These analyses would be performed by trained pathologists/histologists. No specific number or qualifications are mentioned.
4. Adjudication Method for the Test Set
- Clinical (Neurovascular - Retrospective Study):
- Imaging data for technical success and safety were independently analyzed by the Angiography and Noninvasive Imaging Core Lab at UCLA.
- Adverse events (AE) were adjudicated independently by the Department of Neurology at USC.
- This suggests independent review by specialized entities, but no specific 'X+Y' method (e.g., 2+1, 3+1) for individual case agreement is explicitly stated within these "independent" reviews.
- Clinical (Peripheral - Case Studies): The data presented are observational reports from published case studies. There's no indication of an independent adjudication method for these specific cases beyond the reporting clinicians' assessments.
- Animal Study: The analyses (microscopy, SEM, angiographic evaluations) would involve expert interpretation. No explicit adjudication method for disagreements is described beyond the reporting of findings.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not explicitly done in the provided text. The clinical data focuses on device performance in specific populations, and a "control device" was used in the animal study for comparison, but this is not framed as a human MRMC comparative effectiveness study. Human readers (clinicians) were involved in the treatment and assessment, but the study design was not an MRMC study comparing human readers' performance with and without AI assistance. The device itself is an assistive tool, not an AI diagnostic/interventional algorithm.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
This question is not applicable as the BraidE Embolization Assist Device is a physical medical device intended for human-in-the-loop (physician) use to assist in procedures, not an AI algorithm.
7. The Type of Ground Truth Used
- Clinical (Neurovascular - Retrospective Study):
- Technical Success: Established by independent analysis of imaging data (angiography) by a core lab.
- Safety/Adverse Events: Established by independent adjudication by a neurology department based on clinical observation and reported events.
- Clinical (Peripheral - Case Studies): Clinical outcomes (aneurysm exclusion, vessel patency) observed and reported by treating physicians, likely based on imaging (e.g., CTA) and clinical assessment.
- Animal Study: Gross, histological, and clinical chemistry evaluations, combined with angiographic assessments and scanning electron microscopy (SEM), constituted the ground truth for safety and performance in the animal model.
- Non-clinical/Bench Studies: Physical measurements, material analysis, and simulated performance tests against predetermined specifications.
8. The Sample Size for the Training Set
The document does not describe a "training set" in the context of an AI algorithm or a de novo clinical trial for this device. The clinical "training" data for the BraidE (a physical device) would be the experience gained with the predecessor Comaneci device.
- The BraidE device leverages data from the Comaneci device. The Comaneci device itself was evaluated in a "historical post-market data" retrospective study involving 63 patients (64 intracranial aneurysms). This data effectively serves as the "prior experience" or "training" knowledge base for the regulatory acceptance of the BraidE, especially since BraidE shares clinically relevant design features.
- The "training plan for the use of the device with the novel adjustment feature of the mesh region was provided as part of the review," suggesting educational materials for physicians, rather than an algorithmic training set.
9. How the Ground Truth for the Training Set Was Established
As noted above, "training set" here refers to the clinical experience/data gathered from the Comaneci device, which is leveraged for the BraidE.
- For the neurovascular retrospective study of the Comaneci device:
- Technical success and safety imaging data: Independently analyzed by the Angiography and Noninvasive Imaging Core Lab at UCLA.
- Adverse events: Independently adjudicated by the Department of Neurology at USC.
- For the peripheral case studies of the Comaneci device: Ground truth was established by the treating physicians' assessments based on procedural outcomes, follow-up imaging (e.g., CTA), and clinical observations, as reported in the literature.
(136 days)
Braid is a software teleradiology system used to receive DICOM images, scheduling information, and textual reports; organize and store them in an internal format; and make that information available across a network via the web. Braid is used by hospitals, clinics, imaging centers, and radiologist reading practices.
Braid can optionally be used for mobile diagnostic use for review and analysis of CR, DX, CT, and MR images and medical reports. Braid mobile diagnostic use is not intended to replace workstations and should only be used when there is no access to a workstation. Braid mobile diagnostic use is not intended for the display of mammography images for diagnosis.
When images are reviewed and used as an element of diagnosis, it is up to the trained physician to determine whether the image quality is suitable for their clinical application.
Contraindications: Braid is not intended for the acquisition of mammographic image data and is meant to be used only by qualified medical personnel who are trained to create and diagnose radiological image data.
Braid is a web-based software platform that allows a user to view DICOM-compliant images for diagnostic and mobile-diagnostic purposes. Braid may be used with FDA-cleared diagnostic monitors and mobile devices, including iPhones and iPads. It is a picture archiving and communication system (PACS), product code LLZ, intended to provide an interface for the display, annotation, and review of reports and demographic information. Braid allows for multispecialty viewing of medical images including Computed Radiography (CR), Computed Tomography (CT), Digital Radiography (DX), and Magnetic Resonance (MR), as well as associated non-imaging data such as report text.
- Braid can be used for primary diagnosis on FDA-cleared diagnostic monitors. Braid is intended for use by trained and instructed healthcare professionals, including (but not limited to) physicians, surgeons, nurses, and administrators to review patient images, perform non-destructive manipulations, annotations, and measurements. When used for diagnosis, the final decision regarding diagnoses resides with the trained physician, and it is up to the physician to determine if image quality is suitable for their clinical application.
- Braid can also be used for reference and diagnostic viewing on mobile devices. Braid diagnostic use on mobile devices is not intended to replace full diagnostic workstations and should be used only when there is no access to a workstation. When used for diagnosis, the final decision regarding diagnoses resides with the trained physician, and it is up to the physician to determine if image quality is suitable for their clinical application.
Braid has the following viewer technology and features:
- Grayscale Image Rendering
- Localizer Lines
- Localizer Point
- Orientation Markers
- Distance Markers
- Study Data Overlays
- Stack Navigation Tool
- Window/Level Tool
- Zoom Tool
- Panning Tool
- Color Inversion
- Text Annotation
- Maximum Intensity Projection
- Reslicing (MPR)
- Area Measurement Annotation
- Angle Measurement Annotation
In addition, Braid has:
- Hardware-accelerated rendering
- Support for high resolution Retina displays
- Keyboard shortcuts for all tools and all annotation types
- Touchscreen support
- Quick image manipulation and navigation via multitouch gestures, on touchscreens or multitouch capable trackpads
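Several of the tools listed above are standard DICOM display operations. As an illustration of what the Window/Level tool does, here is a minimal NumPy sketch of the linear VOI windowing transform; the function name and the chosen window values are hypothetical, not taken from Braid:

```python
import numpy as np

def window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Linear window/level (VOI) mapping to 8-bit grayscale:
    values at or below (center - width/2) map to 0, values at or
    above (center + width/2) map to 255, with a linear ramp between."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    scaled = (pixels.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# Example: a tiny CT patch in Hounsfield units, viewed with a
# soft-tissue window (center 40 HU, width 400 HU)
ct = np.array([[-1000, 0], [40, 400]])
display = window_level(ct, center=40, width=400)
```

The Color Inversion feature in the list is then simply `255 - display` on the mapped image, which is one reason viewers apply windowing before any other display transform.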
The provided text is a 510(k) summary for the device "Braid" and contains information about its performance data and substantial equivalence to predicate devices. However, it does not include a table of acceptance criteria and reported device performance with the numerical metrics typically found in a clinical study report. It states that "clinical validation testing was conducted to support the diagnostic quality of Braid™ on mobile devices as well as the use of Braid™ features such as reslicing (MPR)" and that "Results demonstrated that the images displayed by Braid™ were of appropriate diagnostic quality in all conditions," but it does not elaborate on specific acceptance criteria or quantitative performance metrics.
Therefore, the following response will extract all available information related to your request and explicitly state where information is not provided in the document.
Description of Acceptance Criteria and Study Proving Device Meets Criteria (Based on Provided Text)
The provided submission does not explicitly detail a quantitative table of acceptance criteria with corresponding reported device performance metrics in the way a typical clinical study report for an AI/CADe device would. Instead, the "Performance Data" section describes bench testing and clinical validation testing intended to demonstrate that the device's image display quality is "appropriate for Braid™'s intended use" and "of appropriate diagnostic quality." The acceptance is based on the subjective evaluation of image quality by board-certified radiologists, rather than specific numerical thresholds for metrics like sensitivity, specificity, or AUC, which are common for diagnostic AI algorithms.
1. A table of acceptance criteria and the reported device performance
A direct table of acceptance criteria with numerical performance metrics is not provided in the given document. The document states general qualitative acceptance regarding image quality:
| Acceptance Criterion (Inferred from text) | Reported Device Performance (Qualitative) |
|---|---|
| Images displayed by Braid™ on FDA-cleared diagnostic monitors are of appropriate diagnostic quality. | "Results demonstrated that the images displayed by Braid™ were of appropriate diagnostic quality in all conditions." |
| Images displayed by Braid™ on intended mobile devices (iPhone 11, iPad Pro 3) are of appropriate diagnostic quality. | "Results demonstrated that the images displayed by Braid™ were of appropriate diagnostic quality in all conditions." |
| Functionality of Braid™ features (e.g., reslicing/MPR) is acceptable for diagnostic purposes. | Board-certified radiologists "were asked to evaluate all braid features and to provide multiple scores for the quality of the Braid™ images... These performance data including image quality evaluations by qualified radiologists are adequate to support substantial equivalence of Braid™ to the predicate devices." |
| Mobile device screens (iPhone 11, iPad Pro 3) meet display quality standards for the proposed indications. | Bench testing "demonstrated that the designated hardware platforms are appropriate for Braid™'s intended use." |
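The reslicing (MPR) and Maximum Intensity Projection features evaluated above are simple volume operations at their core. A minimal sketch of both, assuming an axial volume laid out as (z, y, x); function names are illustrative, not Braid's API:

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum Intensity Projection: keep the brightest voxel
    along the chosen axis, collapsing a 3D volume to a 2D image."""
    return volume.max(axis=axis)

def reslice_coronal(volume: np.ndarray, y_index: int) -> np.ndarray:
    """Trivial orthogonal MPR: extract one coronal plane from an
    axial (z, y, x) volume by fixing the y index."""
    return volume[:, y_index, :]

vol = np.arange(24).reshape(2, 3, 4)  # tiny synthetic (z, y, x) volume
axial_mip = mip(vol, axis=0)          # 2D image of shape (3, 4)
coronal = reslice_coronal(vol, 1)     # 2D plane of shape (2, 4)
```

Full MPR in a clinical viewer additionally handles oblique planes and anisotropic voxel spacing via interpolation; the orthogonal case above is only the simplest instance.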
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Test Set Sample Size: The document does not specify the number of images or cases used in the clinical validation testing. It mentions "Images were evaluated across all intended imaging modalities."
- Data Provenance: The document does not specify the country of origin of the data or whether the study was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: The document states "Board-certified radiologists were asked to evaluate all braid features..." The exact number of board-certified radiologists is not specified.
- Qualifications of Experts: "Board-certified radiologists." The document does not specify their years of experience or other detailed qualifications beyond board certification.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- The document does not specify an adjudication method for establishing ground truth or resolving discrepancies among readers. It simply states that radiologists "were asked to evaluate all braid features and to provide multiple scores for the quality of the Braid™ images."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- An MRMC comparative effectiveness study involving AI assistance for human readers was not conducted or described for this device. Braid is a PACS/viewer, not an AI diagnostic algorithm, so such a study would not be directly applicable to its stated function in this context. The clinical validation focused on the diagnostic quality of the displayed images via human expert review.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- As Braid is a Picture Archiving and Communications System (PACS) and viewer, not a diagnostic AI algorithm, the concept of "standalone (algorithm only)" performance metrics like sensitivity/specificity for a specific condition is not applicable here. The performance evaluation focuses on the image display capabilities that a human uses for diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The "ground truth" for the diagnostic quality of the displayed images was established by expert opinion/evaluation of board-certified radiologists who provided "multiple scores for the quality of the Braid™ images." This is a subjective assessment of image quality for diagnostic interpretation.
8. The sample size for the training set
- Braid is described as a PACS/viewer, not a machine learning model that requires a "training set" in the conventional sense for a diagnostic algorithm. Therefore, information about a training set sample size is not applicable and not provided.
9. How the ground truth for the training set was established
- Given that Braid is a PACS/viewer and not an AI diagnostic algorithm, the concept of "ground truth for a training set" is not applicable in this context and is not provided.