510(k) Data Aggregation (134 days)
BR-FHUS Viewer 1.0
BR-FHUS Viewer 1.0 is intended as standalone software installed on a standalone Windows-based computer to assist physicians, through manipulation and analysis tools, in reviewing breast ultrasound images. Images and data are previously recorded from various imaging systems and other sources, such as calibrated spatial positioning devices. BR-FHUS Viewer 1.0 provides the capability to visualize two-dimensional ultrasound images along with the scanning paths and probe position information previously stored in the DICOM file.
BR-FHUS Viewer 1.0 is an electronic image review and reporting software program intended to operate on a Windows-based computer. The device allows the review of previously recorded ultrasound examinations, performed on standard ultrasound systems together with other sources such as calibrated spatial positioning devices, whose images were recorded digitally. The images are displayed on a computer monitor and can be reviewed individually or as a self-playing sequence, with adjustable playback speed. In addition, the software allows the user to save screenshots as DICOM-compatible files and to generate electronic reports.
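The viewer's input and output both rely on the DICOM Part 10 file format. As an illustrative sketch (not taken from the 510(k) summary), a minimal check that a byte stream follows that format, which per the DICOM standard begins with a 128-byte preamble followed by the magic bytes "DICM", can be written in pure Python as:

```python
def is_dicom_part10(data: bytes) -> bool:
    """Return True if the byte stream starts with a DICOM Part 10
    file header: a 128-byte preamble followed by the magic b'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"


# Synthetic example: 128 zero bytes of preamble, then the magic marker.
sample = bytes(128) + b"DICM"
print(is_dicom_part10(sample))        # True
print(is_dicom_part10(b"not dicom"))  # False
```

A real viewer would go on to parse the data elements after the header (e.g., with a DICOM library) to recover the images and any stored probe position information; this snippet only demonstrates the file-format signature.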
Here's an analysis of the acceptance criteria and study information for the BR-FHUS Viewer 1.0, based on the provided text:
Important Note: The provided document is a 510(k) summary for a Picture Archiving and Communications System (PACS) device. The primary purpose of such devices is to display and manipulate medical images, not to perform diagnostic analysis or provide AI-driven insights directly. Therefore, the "acceptance criteria" and "study" described here are related to the functional performance and safety of the imaging viewer itself, rather than diagnostic accuracy metrics typically associated with AI algorithms. The document explicitly states that the device is intended to "assist physicians with manipulation and analysis tools in reviewing breast ultrasound images," implying a role in image presentation and review, not automated diagnosis.
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Criterion (Implied/Stated) | Reported Device Performance |
|---|---|---|
| Functional Requirements | All functional requirements met. | "All functional requirements have been met." |
| Core Function Execution | Core functions execute as expected. | "Core functions execute as expected." |
| Validation to Specification | Validation results are within specification. | "The result of these tests demonstrate that BR-FHUS Viewer 1.0 validation is within specification." |
| Intended Operation | Device functions as intended, and operation is as expected. | "In all instances, BR-FHUS Viewer 1.0 functioned as intended and the operation observed was as expected." |
| Safety and Effectiveness | Device is as safe and effective as predicate devices. | "BR-FHUS Viewer 1.0 is as safe and effective as the predicate devices and is substantially equivalent to existing products on the market today." (Stated by manufacturer) |
| No New Safety/Effectiveness Issues | No new safety or effectiveness issues raised by features. | "The features provided by BR-FHUS Viewer 1.0 do not in themselves raise new concerns of safety or effectiveness." |
Study Details
1. Sample Size Used for the Test Set and Data Provenance:
   - Sample Size: Not explicitly stated in terms of patient cases. The testing was conducted using a "breast phantom," which suggests a limited, controlled environment rather than a broad patient data set.
   - Data Provenance: Not applicable in the context of patient data for diagnostic accuracy. The data used for testing was generated from a "breast phantom" in a "simulated work environment."
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
   - Number of Experts: Not specified.
   - Qualifications of Experts: Not specified. Given the nature of the device as a viewer (not a diagnostic AI), external expert ground truth might not be the primary focus for its clearance. The "ground truth" here likely refers to the expected functional behavior of the software and the accurate display of the phantom's ultrasound images.
3. Adjudication Method for the Test Set:
   - Not applicable/Not described. The text indicates that "trained personnel" performed the testing following "internal procedures," implying verification against predefined functional expectations rather than adjudication of expert interpretations.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study (and, if done, the effect size of human reader improvement with vs. without AI assistance):
   - No, an MRMC comparative effectiveness study was not done. The BR-FHUS Viewer 1.0 is described as a "standalone software device... to assist physicians with manipulation and analysis tools in reviewing breast ultrasound images." It is not an AI diagnostic or assistance tool in the sense of providing automated interpretations or improving human reader diagnostic accuracy; its function is to facilitate image review.
5. Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance:
   - Yes, in the context of its function as a viewer. The performance data described is purely for the software's ability to display, manipulate, and analyze images as intended, assessing the software itself against its functional specifications. It is a "standalone" performance assessment of the viewer's capabilities, not of a diagnostic algorithm.
6. The Type of Ground Truth Used:
   - Functional/Technical Ground Truth: The "ground truth" for this device appears to be its predefined functional requirements and expected display behavior. Testing against a "breast phantom" would verify that the phantom's images are accurately displayed and that the manipulation tools work correctly. This is not a "diagnostic ground truth," such as pathology for a lesion.
7. The Sample Size for the Training Set:
   - Not applicable/Not mentioned. As the device is an image viewer and not an AI algorithm performing diagnostic tasks, there is no "training set" in the machine learning sense.
8. How the Ground Truth for the Training Set Was Established:
   - Not applicable. (See point 7.)