510(k) Data Aggregation (28 days)
Ambra PACS software is intended for use as a primary diagnostic and analysis tool for diagnostic images for hospitals, imaging centers, radiologists, reading practices, and any user who requires and is granted access to patient image, demographic, and report information.
Ambra ProViewer, a component of Ambra PACS, displays, modifies, and manages diagnostic quality DICOM images, including 3D visualization and reordering functionality.
Lossy compressed mammographic images and digitized film-screen images must not be reviewed for primary diagnosis or image interpretation. Mammographic images may only be viewed using cleared monitors intended for mammography display.
Not intended for diagnostic use on mobile devices.
Ambra PACS software is intended for use as a primary diagnostic and analysis tool for diagnostic images and reporting for hospitals, imaging centers, radiologists, reading practices and any user who requires and is granted access to patient image, demographic and supplementary information.
Ambra PACS is considered a 'Continuous Use' device and is compliant with HIPAA and 21 CFR Part 11 regulations regarding patient privacy (such as restricting access to particular studies and logging access to data), data integrity, patient safety, and best software development and validation practices.
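The document does not describe Ambra's actual audit mechanism, but HIPAA-style access logging generally amounts to recording who accessed which study, when, and how. As a hypothetical minimal sketch (all names and fields below are illustrative, not taken from the Ambra PACS submission):

```python
import datetime
from dataclasses import dataclass, field, asdict

@dataclass(frozen=True)
class AccessLogEntry:
    """One audit-trail record: who accessed which study, when, and how.

    Field names are illustrative; they are not from the Ambra PACS submission.
    """
    user_id: str
    study_uid: str  # DICOM Study Instance UID of the accessed study
    action: str     # e.g. "view", "download", "share"
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(
            datetime.timezone.utc).isoformat())

def log_access(audit_trail: list, user_id: str, study_uid: str, action: str) -> dict:
    """Append an immutable access record to the trail and return it as a dict."""
    entry = AccessLogEntry(user_id, study_uid, action)
    audit_trail.append(entry)
    return asdict(entry)
```

Making each record a frozen dataclass is one simple way to keep individual audit entries tamper-resistant in memory; a real compliant system would also persist and protect the trail itself.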
Ambra PACS provides common diagnostic and analytic radiology functionality. Specifically, Ambra PACS enables:
- Real-time viewing and management of DICOM images for diagnostic, clinical, research and education purposes;
- Ingestion and normalization of DICOM content for review and archiving;
- Electronic distribution and secure storage of images;
- Off-site viewing and reporting (distance education, tele-diagnosis).
Ambra ProViewer, a component of Ambra PACS, displays, modifies, and manages diagnostic quality DICOM images, including 3D visualization and reordering functionality.
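Whether a stored image is lossy-compressed can often be inferred from its DICOM Transfer Syntax UID, which is one way a viewer could enforce the mammography restriction stated above. A minimal sketch, assuming a non-exhaustive mapping of common transfer syntaxes (UIDs are from the DICOM standard; this is not Ambra's actual mechanism):

```python
# Common DICOM transfer syntaxes that permit lossy compression
# (per DICOM PS3.5/PS3.6; illustrative, not exhaustive).
LOSSY_TRANSFER_SYNTAXES = {
    "1.2.840.10008.1.2.4.50",  # JPEG Baseline (Process 1)
    "1.2.840.10008.1.2.4.51",  # JPEG Extended (Process 2 & 4)
    "1.2.840.10008.1.2.4.81",  # JPEG-LS Near-Lossless
    "1.2.840.10008.1.2.4.91",  # JPEG 2000 (lossy permitted)
}

def is_lossy(transfer_syntax_uid: str) -> bool:
    """Flag transfer syntaxes that permit lossy compression, so that such
    mammograms could be excluded from primary-diagnosis workflows."""
    return transfer_syntax_uid in LOSSY_TRANSFER_SYNTAXES
```

For example, JPEG Baseline images would be flagged, while uncompressed Explicit VR Little Endian (1.2.840.10008.1.2.1) would not.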
The provided text does not contain detailed acceptance criteria or a study that proves the device meets specific performance metrics. Instead, it focuses on demonstrating substantial equivalence to a predicate device (Ambra PACS with ProViewer, K202335) through software verification and validation testing, especially concerning a change in the programming language for the transcoding component.
Therefore, many of the requested details, such as specific performance metrics, sample sizes for test sets, expert qualifications, and details of clinical studies, are not available in this document.
However, based on the information provided, here's what can be inferred and what is not available:
1. Table of Acceptance Criteria and Reported Device Performance:
The document states: "All verification and validation acceptance criteria were met." However, it does not provide the specific numerical acceptance criteria or the reported device performance. It generally refers to functional, design, measurement, and deployment requirements.
| Acceptance Criteria Category | Reported Device Performance |
|---|---|
| Functional Requirements | Met all requirements |
| Design Requirements | Met all requirements |
| Measurement Requirements | Met all requirements |
| Deployment Requirements | Met all requirements |
| Simulated Use Testing | Achieved intended use |
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified. The document mentions "validation testing of DICOM images" and "simulated use testing," but no numbers for the test set size or number of images are provided.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable as no clinical study or ground truth establishment by experts is mentioned for this substantial equivalence submission. The testing was verification and validation against software requirements.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable as no expert adjudication or clinical study is mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study was done or mentioned. This submission is for a PACS system, not an AI-assisted diagnostic tool for human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not applicable in the context of specific performance metrics for AI algorithms. The testing was for the overall PACS software functionality.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated for a test set. The validation focused on verifying that the software continues to meet defined requirements, rather than establishing diagnostic accuracy against a clinical ground truth.
8. The sample size for the training set:
- Not applicable, as this is a PACS system and the document describes verification and validation rather than the training of a machine learning model.
9. How the ground truth for the training set was established:
- Not applicable, as there is no mention of a training set or machine learning model in this context.