510(k) Data Aggregation (59 days)
JiveX is a software-only Picture Archiving and Communication System intended to display, process, read, report, communicate, distribute, store, and archive medical data available as DICOM or HL7 data, including mammographic images and biosignals. JiveX also converts case-related non-image documents, archives them as DICOM data, and serves as a vendor neutral archive.
It supports the physician in diagnosis.
For primary image diagnosis in mammography, only uncompressed or losslessly compressed images must be used.
Typical users of this system are trained professionals, including but not limited to physicians, radiologists, nurses, medical technicians, and assistants.
Note: Web-based image distribution and mobile device display of mammographic images are not intended for diagnostic purposes.
For users in the United States of America: Mobile device display is not intended for diagnostic purposes.
JiveX is PACS software with a Moderate level of concern.
A Communication Server communicates, stores, and archives images, documents, and signal data via DICOM, HL7, and proprietary interfaces. It also renders images for the web-based image distribution.
The fat clients can be used as workstations for medical reading and reporting. They provide extensive functions for image display and image processing. The reporting of digital mammography images is also supported.
The web-based clients are mainly intended for image distribution on personal computers and mobile devices. They offer fewer functions than the fat clients. Where their functionality permits, the web clients can also be used for reading and reporting on personal computers.
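The mammography restriction stated above (only uncompressed or losslessly compressed images for primary diagnosis) can be enforced in a PACS workflow by checking the DICOM Transfer Syntax UID of each image before routing it to a diagnostic workstation. The sketch below is illustrative only: the UID constants are standard DICOM values, but the function name and gating workflow are assumptions, not part of JiveX.

```python
# Standard DICOM Transfer Syntax UIDs that are uncompressed or lossless.
LOSSLESS_TRANSFER_SYNTAXES = {
    "1.2.840.10008.1.2",       # Implicit VR Little Endian (uncompressed)
    "1.2.840.10008.1.2.1",     # Explicit VR Little Endian (uncompressed)
    "1.2.840.10008.1.2.2",     # Explicit VR Big Endian (retired, uncompressed)
    "1.2.840.10008.1.2.5",     # RLE Lossless
    "1.2.840.10008.1.2.4.57",  # JPEG Lossless, Non-Hierarchical (Process 14)
    "1.2.840.10008.1.2.4.70",  # JPEG Lossless, First-Order Prediction
    "1.2.840.10008.1.2.4.80",  # JPEG-LS Lossless
    "1.2.840.10008.1.2.4.90",  # JPEG 2000 Lossless Only
}


def suitable_for_mammo_diagnosis(transfer_syntax_uid: str) -> bool:
    """Return True if the image encoding is uncompressed or lossless,
    i.e. acceptable for primary mammography diagnosis per the
    restriction in the intended use statement (hypothetical check,
    not a JiveX API)."""
    return transfer_syntax_uid in LOSSLESS_TRANSFER_SYNTAXES
```

For example, a JPEG Baseline image (Transfer Syntax UID 1.2.840.10008.1.2.4.50, a lossy encoding) would fail this check and should not be presented for primary mammography reading.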
The provided text describes a 510(k) premarket notification for the JiveX (Model Number / Release: 5.3) Picture Archiving and Communication System (PACS).
Please note: The document explicitly states that the "subject device was found to have a safety and effectiveness profile that is similar to the predicate device" based on non-clinical performance testing. It also outlines verification and validation activities throughout development. However, the document does not contain details of specific acceptance criteria (numerical targets) or the results of a dedicated study to "prove" the device meets such criteria in comparison to a predicate device. The submission primarily focuses on demonstrating substantial equivalence to a predicate device (JiveX 5.2) without reporting a comparative clinical effectiveness study against specific performance metrics for the new release.
Therefore, many of the requested details related to quantitative acceptance criteria, a specific study proving those criteria, sample sizes used for test sets, data provenance, expert ground truth establishment, adjudication methods, MRMC studies, or standalone performance are not available in the provided text.
Here's a breakdown of what can be extracted based on the provided information:
1. A table of acceptance criteria and the reported device performance
As mentioned above, the document does not provide a table of explicit, quantifiable acceptance criteria or reported numerical performance data from a specific study comparing the subject device to a predicate device against such criteria. The "Performance Data" section describes the verification and validation process rather than specific performance metrics and their achievement.
The conclusion states: "Based on the non-clinical performance testing the subject device was found to have a safety and effectiveness profile that is similar to the predicate device."
2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
Not provided in the document. The "Performance Data" section describes general verification and validation activities but does not specify the sample sizes, data provenance, or study design (retrospective/prospective) for any test sets used to compare performance between the subject and predicate device.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not provided in the document. No specific "test set" with expert-established ground truth is detailed.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not provided in the document.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. This device is a PACS system, which primarily focuses on image management and display, and not an AI-assisted diagnostic tool that would typically involve improving human reader performance.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
This is a PACS system, a software-only medical device (Software as a Medical Device, SaMD) for image management, display, and processing. It does not appear to be an algorithm designed for standalone diagnostic performance. The document focuses on its functions as an archiving and communication system that supports physicians in diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not provided and not applicable given the nature of the device and the information presented. The document does not describe studies requiring "ground truth" for diagnostic accuracy.
8. The sample size for the training set
Not applicable. This is a PACS system, not a device that relies on a specific "training set" for an AI algorithm in the traditional sense of machine learning for diagnostic tasks. Its development involves software engineering principles and testing, not AI model training.
9. How the ground truth for the training set was established
Not applicable as there is no mention of a training set or AI model development in the provided text.