Braid is a software teleradiology system used to receive DICOM images, scheduling information, and textual reports; to organize and store them in an internal format; and to make that information available across a network via a web interface. Braid is used by hospitals, clinics, imaging centers, and radiologist reading practices.
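To make the DICOM receiving step concrete, here is a minimal sketch of a DICOM Storage SCP (C-STORE receiver) written with the open-source pynetdicom library. This is an illustration only, not Braid's actual implementation; the AE title, port, and storage layout are assumptions chosen for the example.

```python
# Minimal sketch of a DICOM Storage SCP (C-STORE receiver) using pynetdicom.
# Illustrative only; Braid's internal receive/storage design is not described
# in the submission. The AE title, port, and output directory are assumptions.
from pathlib import Path

from pynetdicom import AE, evt, AllStoragePresentationContexts

STORAGE_DIR = Path("incoming_dicom")  # hypothetical storage location
STORAGE_DIR.mkdir(exist_ok=True)


def handle_store(event):
    """Write each received DICOM instance to disk, grouped by study."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    study_dir = STORAGE_DIR / str(ds.StudyInstanceUID)
    study_dir.mkdir(parents=True, exist_ok=True)
    ds.save_as(study_dir / f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
    return 0x0000  # DICOM Success status


ae = AE(ae_title="BRAID_SKETCH")  # hypothetical AE title
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```

A production receiver would also validate transfer syntaxes, accept scheduling and report objects, and index instances in a database rather than a flat directory tree.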
Braid can optionally be used on mobile devices for diagnostic review and analysis of CR, DX, CT, and MR images and medical reports. Braid mobile diagnostic use is not intended to replace workstations and should only be used when there is no access to a workstation. Braid mobile diagnostic use is not intended for the display of mammography images for diagnosis.
When images are reviewed and used as an element of diagnosis, it is up to the trained physician to determine whether the image quality is suitable for their clinical application.
Contraindications: Braid is not intended for the acquisition of mammographic image data and is meant to be used only by medical personnel who are qualified to create and diagnose radiological image data.
Braid is a web-based software platform that allows a user to view DICOM-compliant images for diagnostic and mobile-diagnostic purposes. Braid may be used with FDA-cleared diagnostic monitors and mobile devices, including iPhones and iPads. It is a picture archiving and communication system (PACS), product code LLZ, intended to provide an interface for the display, annotation, and review of reports and demographic information. Braid allows for multispecialty viewing of medical images including Computed Radiography (CR), Computed Tomography (CT), Digital Radiography (DX), and Magnetic Resonance (MR), as well as associated non-imaging data such as report text.
- Braid can be used for primary diagnosis on FDA-cleared diagnostic monitors. Braid is intended for use by trained and instructed healthcare professionals, including (but not limited to) physicians, surgeons, nurses, and administrators to review patient images, perform non-destructive manipulations, annotations, and measurements. When used for diagnosis, the final decision regarding diagnoses resides with the trained physician, and it is up to the physician to determine if image quality is suitable for their clinical application.
- Braid can also be used for reference and diagnostic viewing on mobile devices. Braid diagnostic use on mobile devices is not intended to replace full diagnostic workstations and should only be used when there is no access to a workstation. When used for diagnosis, the final decision regarding diagnoses resides with the trained physician, and it is up to the physician to determine if image quality is suitable for their clinical application.
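As a rough illustration of the non-imaging data a viewer like Braid surfaces alongside images, the sketch below reads a DICOM object with the open-source pydicom library and pulls out demographics plus, for Basic Text SR objects, the report text. The file name, selected tags, and helper function are hypothetical; the submission does not describe Braid's internal data model.

```python
# Sketch: extracting demographics and report text from DICOM objects with pydicom.
# Illustrative only; the tag selection and helper name are assumptions.
from pydicom import dcmread


def summarize_instance(path):
    """Return display-oriented metadata for one DICOM instance."""
    ds = dcmread(path)
    summary = {
        "PatientName": str(ds.get("PatientName", "")),
        "PatientID": ds.get("PatientID", ""),
        "StudyDescription": ds.get("StudyDescription", ""),
        "Modality": ds.get("Modality", ""),
    }
    # Basic Text SR objects carry report text in their content sequence.
    if ds.get("Modality") == "SR" and "ContentSequence" in ds:
        texts = [
            item.TextValue
            for item in ds.ContentSequence
            if getattr(item, "ValueType", "") == "TEXT"
        ]
        summary["ReportText"] = "\n".join(texts)
    return summary


print(summarize_instance("example_instance.dcm"))  # hypothetical file name
```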
Braid has the following viewer technology and features:
- Grayscale Image Rendering
- Localizer Lines
- Localizer Point
- Orientation Markers
- Distance Markers
- Study Data Overlays
- Stack Navigation Tool
- Window/Level Tool (see the rendering sketch after these feature lists)
- Zoom Tool
- Panning Tool
- Color Inversion
- Text Annotation
- Maximum Intensity Projection
- Reslicing (MPR)
- Area Measurement Annotation
- Angle Measurement Annotation
In addition, Braid has:
- Hardware-accelerated rendering
- Support for high-resolution Retina displays
- Keyboard shortcuts for all tools and all annotation types
- Touchscreen support
- Quick image manipulation and navigation via multitouch gestures, on touchscreens or multitouch capable trackpads
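To make the rendering-related items above (grayscale rendering, the Window/Level Tool, and Maximum Intensity Projection) more concrete, here is a minimal numpy sketch of linear window/level mapping and a maximum intensity projection. It is a simplified illustration under assumed inputs (a synthetic CT-like volume and example window settings), not Braid's rendering pipeline; modality/VOI LUTs, presentation states, and display calibration are omitted.

```python
# Sketch: window/level mapping and maximum intensity projection with numpy.
# Simplified illustration; the volume and window values are assumptions, and a
# real renderer would also apply modality/VOI LUTs and display calibration.
import numpy as np


def window_level(image, window_center, window_width):
    """Map raw pixel values to 8-bit grayscale with a linear window/level."""
    low = window_center - window_width / 2.0
    high = window_center + window_width / 2.0
    clipped = np.clip(image.astype(np.float32), low, high)
    return ((clipped - low) / (high - low) * 255.0).astype(np.uint8)


def max_intensity_projection(volume, axis=0):
    """Collapse a 3D volume along one axis by keeping the maximum value."""
    return volume.max(axis=axis)


# Example with a synthetic CT-like volume (slices x rows x columns, HU values).
volume = np.random.randint(-1000, 1000, size=(40, 256, 256), dtype=np.int16)
axial = window_level(volume[20], window_center=40, window_width=400)
mip = window_level(max_intensity_projection(volume), window_center=40, window_width=400)
print(axial.shape, mip.shape)  # (256, 256) (256, 256)
```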
The provided text is a 510(k) summary for the device "Braid" and contains information about its performance data and substantial equivalence to predicate devices. However, it does not include a specific table of acceptance criteria and reported device performance with numerical metrics typically found in a clinical study report. It states that "clinical validation testing was conducted to support the diagnostic quality of Braid™ on mobile devices as well as the use of Braid™ features such as reslicing (MPR)." and "Results demonstrated that the images displayed by Braid™ were of appropriate diagnostic quality in all conditions." but does not elaborate on the specific acceptance criteria or quantitative performance metrics.
Therefore, the following response will extract all available information related to your request and explicitly state where information is not provided in the document.
Description of Acceptance Criteria and Study Proving Device Meets Criteria (Based on Provided Text)
The provided submission does not explicitly detail a quantitative table of acceptance criteria with corresponding reported device performance metrics in the way a typical clinical study report for an AI/CADe device would. Instead, the "Performance Data" section describes bench testing and clinical validation testing intended to demonstrate that the device's image display quality is "appropriate for Braid™'s intended use" and "of appropriate diagnostic quality." The acceptance is based on the subjective evaluation of image quality by board-certified radiologists, rather than specific numerical thresholds for metrics like sensitivity, specificity, or AUC, which are common for diagnostic AI algorithms.
1. A table of acceptance criteria and the reported device performance
A direct table of acceptance criteria with numerical performance metrics is not provided in the given document. The document states general qualitative acceptance regarding image quality:
| Acceptance Criterion (Inferred from text) | Reported Device Performance (Qualitative) |
|---|---|
| Images displayed by Braid™ on FDA-cleared diagnostic monitors are of appropriate diagnostic quality. | "Results demonstrated that the images displayed by Braid™ were of appropriate diagnostic quality in all conditions." |
| Images displayed by Braid™ on intended mobile devices (iPhone 11, iPad Pro 3) are of appropriate diagnostic quality. | "Results demonstrated that the images displayed by Braid™ were of appropriate diagnostic quality in all conditions." |
| Functionality of Braid™ features (e.g., reslicing/MPR) is acceptable for diagnostic purposes. | Board-certified radiologists "were asked to evaluate all braid features and to provide multiple scores for the quality of the Braid™ images... These performance data including image quality evaluations by qualified radiologists are adequate to support substantial equivalence of Braid™ to the predicate devices." |
| Mobile device screens (iPhone 11, iPad Pro 3) meet display quality standards for the proposed indications. | Bench testing "demonstrated that the designated hardware platforms are appropriate for Braid™'s intended use." |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Test Set Sample Size: The document does not specify the number of images or cases used in the clinical validation testing. It mentions "Images were evaluated across all intended imaging modalities."
- Data Provenance: The document does not specify the country of origin of the data or whether the study was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: The document states "Board-certified radiologists were asked to evaluate all braid features..." The exact number of board-certified radiologists is not specified.
- Qualifications of Experts: "Board-certified radiologists." The document does not specify their years of experience or other detailed qualifications beyond board certification.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- The document does not specify an adjudication method for establishing ground truth or resolving discrepancies among readers. It simply states that radiologists "were asked to evaluate all braid features and to provide multiple scores for the quality of the Braid™ images."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- An MRMC comparative effectiveness study involving AI assistance for human readers was not conducted or described for this device. Braid is a PACS/viewer, not an AI diagnostic algorithm, so such a study would not be directly applicable to its stated function in this context. The clinical validation focused on the diagnostic quality of the displayed images via human expert review.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- As Braid is a Picture Archiving and Communications System (PACS) and viewer, not a diagnostic AI algorithm, the concept of "standalone (algorithm only)" performance metrics like sensitivity/specificity for a specific condition is not applicable here. The performance evaluation focuses on the image display capabilities that a human uses for diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The "ground truth" for the diagnostic quality of the displayed images was established by expert opinion/evaluation of board-certified radiologists who provided "multiple scores for the quality of the Braid™ images." This is a subjective assessment of image quality for diagnostic interpretation.
8. The sample size for the training set
- Braid is described as a PACS/viewer, not a machine learning model that requires a "training set" in the conventional sense for a diagnostic algorithm. Therefore, information about a training set sample size is not applicable and not provided.
9. How the ground truth for the training set was established
- Given that Braid is a PACS/viewer and not an AI diagnostic algorithm, the concept of "ground truth for a training set" is not applicable in this context and is not provided.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).