CentarisOne™ RIS/PACS is intended for use by physicians and other medical staff as a full-service, DICOM- and IHE-compliant RIS/PACS solution for the storage, distribution, and display of medical images from various imaging sources such as CT, MRI, NM, PT, DR, CR, US, and XA scanners and secondary capture devices (film digitizers, etc.). The device may be used to archive mammography (MG) images but is not intended for primary diagnostic viewing of mammography images.
The CentarisOne™ RIS/PACS is a comprehensive, DICOM- and IHE-compliant, software-based radiology solution for small to mid-sized imaging centers. The RIS maintains control over all patient-related information and integrates the imaging side of the center with its front-office operations, giving all users of the system more complete information about a patient's activity; this supports smoother workflow and better patient care. The PACS component manages image storage and image display for end users and is tightly integrated with the RIS, so that users have richer patient context within a single workflow.
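To make concrete what "storage, distribution and display" means at the protocol level, below is a minimal sketch of a modality pushing an image to a PACS archive over the DICOM C-STORE service. It assumes the open-source pydicom and pynetdicom libraries; the hostname, port, AE titles, and file name are hypothetical placeholders, not details taken from the 510(k).

```python
# Minimal DICOM C-STORE sketch using pydicom/pynetdicom.
# "pacs.example.com", port 11112, the AE titles, and "ct_slice.dcm"
# are hypothetical placeholders for illustration only.
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

ae = AE(ae_title="MODALITY_SCU")          # the sending device
ae.add_requested_context(CTImageStorage)  # negotiate CT image storage

ds = dcmread("ct_slice.dcm")              # a CT instance to archive

assoc = ae.associate("pacs.example.com", 11112, ae_title="PACS_SCP")
if assoc.is_established:
    status = assoc.send_c_store(ds)       # push the image to the archive
    print(f"C-STORE status: 0x{status.Status:04X}")  # 0x0000 is success
    assoc.release()
```

The same association machinery underlies the distribution side (C-FIND for queries, C-MOVE/C-GET for retrieval), which is the baseline that "DICOM compliant" commits a PACS to supporting.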
The provided text from K063037 does not contain the specific details you are requesting about acceptance criteria, study design, sample sizes, ground truth establishment, or expert qualifications.
The document is a 510(k) summary for the CentarisOne™ RIS/PACS system, which is a Picture Archiving and Communication System (PACS). It declares substantial equivalence to a predicate device (IMAGEQUBE PACS) and outlines the device's intended use and general safety claims.
Here's what I can infer from the document regarding your numbered points, and what is explicitly missing:
1. A table of acceptance criteria and the reported device performance:
- Missing. The document states: "The device has been tested in non-clinical and clinical locations and proven to be safe and effective." However, it does not provide any specific quantitative acceptance criteria (e.g., accuracy, sensitivity, specificity, workflow efficiency metrics) or data showing how the device performed against such criteria.
2. Sample size used for the test set and the data provenance:
- Missing. No information on the sample size of any test set or the provenance (country of origin, retrospective/prospective) of data used for testing is provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Missing. There is no mention of ground truth establishment, the number of experts, or their qualifications. Given the nature of a PACS system (storage, distribution, and display), performance might be evaluated on technical aspects (e.g., image fidelity, retrieval speed, system uptime; see the sketch after this list) rather than diagnostic accuracy, which would typically involve expert-established ground truth. However, no such details are given.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Missing. No adjudication method is described.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it:
- Missing. No MRMC study is mentioned. This device is a PACS, which is infrastructure for handling medical images, not an AI-powered diagnostic tool. Therefore, an MRMC study comparing human readers with and without "AI assistance" would not be relevant to this specific device as described.
6. If a standalone performance evaluation (i.e., algorithm only, without a human in the loop) was done:
- Missing. There is no mention of a standalone algorithm performance evaluation. Again, this is a PACS system, not an algorithm performing a diagnostic task independently.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Missing. No information on the type of ground truth used is provided.
8. The sample size for the training set:
- Missing. As this is a PACS system (software for image management), it is unlikely to have a "training set" in the context of an AI/machine learning model. The relevant "training" would likely refer to software development and testing, not AI model training.
9. How the ground truth for the training set was established:
- Missing. (See point 8).
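As noted under point 3, a PACS is more naturally evaluated on technical metrics such as retrieval speed than on diagnostic accuracy. Purely as an illustration of what such a measurement could look like (nothing of the kind is reported in K063037), here is a sketch timing a patient-level C-FIND query with pynetdicom; the host, port, AE titles, and patient ID are all hypothetical.

```python
# Hypothetical retrieval-speed check: time a patient-level C-FIND
# query against a PACS. All connection details are placeholders.
import time

from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

ae = AE(ae_title="QUERY_SCU")
ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

query = Dataset()                  # the query identifier
query.QueryRetrieveLevel = "PATIENT"
query.PatientID = "12345"          # hypothetical patient ID

assoc = ae.associate("pacs.example.com", 11112, ae_title="PACS_SCP")
if assoc.is_established:
    start = time.perf_counter()
    responses = list(assoc.send_c_find(
        query, PatientRootQueryRetrieveInformationModelFind))
    elapsed = time.perf_counter() - start
    print(f"{len(responses)} responses in {elapsed:.3f} s")
    assoc.release()
```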
In summary, the provided document focuses on establishing substantial equivalence to a predicate device based on comparable features for image storage, communication, and display, rather than detailing a performance study with specific acceptance criteria and ground truth validation, which are more common for diagnostic AI devices.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
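For context on the "image manipulation, enhancement, or quantification" functions this regulation describes, the most elementary example is the window/level (VOI) transform a PACS viewer applies before display. Below is a minimal sketch using pydicom and NumPy, assuming a CT instance with the standard rescale attributes; the file name and window values are illustrative.

```python
# Minimal window/level (VOI) sketch: map CT pixel data to 8-bit gray
# for display. "ct_slice.dcm" and the window values are illustrative.
import numpy as np
from pydicom import dcmread

ds = dcmread("ct_slice.dcm")
pixels = ds.pixel_array.astype(np.float64)

# Rescale stored values to output units (Hounsfield units for CT).
slope = float(getattr(ds, "RescaleSlope", 1))
intercept = float(getattr(ds, "RescaleIntercept", 0))
hu = pixels * slope + intercept

# Window/level: map [center - width/2, center + width/2] to [0, 255].
center, width = 40.0, 400.0  # a common soft-tissue window
low, high = center - width / 2, center + width / 2
display = (np.clip((hu - low) / (high - low), 0.0, 1.0) * 255).astype(np.uint8)
```

The advanced functions the regulation names (segmentation, multimodality registration, 3D visualization) build on the same pixel-level access, which is why they fall within this device category.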