NovaPACS is intended for the viewing, analysis, annotation, registration, distribution, editing, fusion, and processing of digital medical images and data acquired from diagnostic imaging devices and all DICOM devices, etc.
NovaPACS is intended for use by trained healthcare professionals, including radiologists, physicians, technologists, clinicians, and nurses. NovaPACS allows the end user to display, manipulate, archive, and evaluate images.
Mobile devices are not intended to replace a full workstation and should be used only when there is no access to a workstation. They are not to be used for mammography. Mobile devices are used for diagnosis of medical images from different modalities including CT, MR, US, CR/DX, NM, PT, and XA. For a list of compatible mobile platforms see NovaPACS Diagnostic Viewer User Manual.
While the NovaPACS full workstation provides tools to assist the healthcare professional in determining diagnostic viability, it is the user's responsibility to ensure that image quality, display contrast, ambient light conditions, and image compression ratios are consistent with the generally accepted standards of the clinical application.
NovaPACS is a picture archiving and communication system software that retrieves, archives, and displays images and data from all common modalities. NovaPACS uses a variety of workstations, including a Technologist Workstation, Enterprise Radiologist Workstation, Cardio Viewer and Workstation, NovaMG Workstation, and NovaWeb Web Viewer. NovaPACS uses a variety of mobile platforms and browsers including iPad 2 (Safari Browser), Nexus 7 (Chrome Browser), and iPad mobileRAD (Native Application). For a list of possible browser choices for one platform that are valid for diagnostic viewing, see the NovaPACS Diagnostic Viewer User Manual.
The NovaPACS software makes images and data available in digital format from all common modalities. The images are viewed on a computer monitor or portable device. NovaPACS tools/features include the following: window, level, zoom, pan, digital subtraction, ejection, cross localization, note-taking ability, voice dictation, and other similar tools. It includes the capability to measure distance and image intensity values, such as standardized uptake value. NovaPACS displays measurement lines, annotations, regions of interest, and fusion blending control functionality. Advanced features include 3D image rendering, virtual fly-through, time domain imaging, and vessel analysis.
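The window/level transform and standardized uptake value (SUV) measurement mentioned above are well-defined operations, although the submission does not describe NovaPACS's internal implementation. The following is a minimal sketch, assuming a simplified linear window/level mapping and the common body-weight SUV formula; the function names and example values are illustrative and not taken from the product.

```python
import numpy as np

def apply_window_level(pixels: np.ndarray, window: float, level: float) -> np.ndarray:
    """Simplified linear window/level: map raw pixel values to 8-bit display values."""
    low = level - window / 2.0
    high = level + window / 2.0
    clipped = np.clip(pixels, low, high)
    return ((clipped - low) / (high - low) * 255.0).astype(np.uint8)

def suv_bw(activity_bq_per_ml: float, injected_dose_bq: float, body_weight_g: float) -> float:
    """Body-weight SUV: tissue activity concentration / (injected dose / body weight)."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Example: a typical soft-tissue CT window (width 400, center 40, in Hounsfield units),
# applied to a synthetic slice so the snippet runs without a DICOM file.
ct_slice = np.random.randint(-1000, 1000, size=(512, 512), dtype=np.int16)
display = apply_window_level(ct_slice, window=400, level=40)

# Example SUV: 5 kBq/mL tissue uptake, 370 MBq injected, 70 kg patient (tissue density ~1 g/mL assumed).
print(round(suv_bw(5_000.0, 370_000_000.0, 70_000.0), 2))
```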
Images and data are stored on a digital archive with multiple redundancies; images and data are available on-site and off-site. Novarad provides all software, including third party software (i.e. Windows® OS). NovaPACS software resides on third party hardware, which may vary depending on the client's PACS needs. All hardware is connected to the radiology department local area network.
NovaPACS integrates with NovaRIS and may integrate with any other third party RIS software that has HL7 interface capabilities.
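The summary does not describe the HL7 interface itself, only that RIS integration requires HL7 capability. As a hedged illustration of what passes over such an interface, the sketch below parses a small HL7 v2 ORM-style order message with plain string splitting; the sample message, sending/receiving application names, and field positions follow common HL7 v2 conventions and are illustrative, not taken from NovaRIS or NovaPACS documentation.

```python
# Minimal HL7 v2 parsing sketch: segments are separated by carriage returns,
# fields within a segment by '|'. Field positions follow common HL7 v2
# conventions (e.g., PID-5 = patient name); the message content is illustrative only.
SAMPLE_ORM = "\r".join([
    "MSH|^~\\&|NovaRIS|RAD|NovaPACS|RAD|202401151030||ORM^O01|12345|P|2.3",
    "PID|1||MRN0001||DOE^JANE||19700101|F",
    "ORC|NW|ACC123",
    "OBR|1|ACC123||71020^CHEST XRAY 2 VIEWS",
])

def parse_segments(message: str) -> dict:
    """Map segment name -> list of fields for the first occurrence of each segment."""
    segments = {}
    for raw in message.split("\r"):
        fields = raw.split("|")
        segments.setdefault(fields[0], fields)
    return segments

segs = parse_segments(SAMPLE_ORM)
print("Patient:", segs["PID"][5])       # DOE^JANE
print("Placer order:", segs["ORC"][2])  # ACC123
print("Procedure:", segs["OBR"][4])     # 71020^CHEST XRAY 2 VIEWS
```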
NovaPACS integrates with the Novarad Mobile Rad application and web viewers to display data on third party mobile platforms. Mobile devices are not intended to replace a full workstation and should be used only when there is no access to a workstation. They are not to be used for mammography.
Here's a breakdown of the acceptance criteria and study details for the NovaPACS device, extracted from the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly listed in a quantitative table format with specific thresholds. Instead, they are described qualitatively through the results of the clinical and performance testing.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Mobile Display Quality (based on AAPM TG18 guidelines): | Bench Testing: |
| - Acceptable Contrast Response | Results regarding measured luminance with respect to target luminance using ND plots were provided. (Specific quantitative results are not provided in the summary; see the luminance-response sketch following this table.) |
| - No Geometric Distortions | Evaluated, but specific results not provided. |
| - Acceptable Resolution | Evaluated, but specific results not provided. |
| - Acceptable Noise Levels | Evaluated, but specific results not provided. |
| - Acceptable Non-Uniformity | Evaluated, but specific results not provided. |
| - Acceptable Viewing Angle | Evaluated, but specific results not provided. |
| - Acceptable Luminance Response | Evaluated, but specific results not provided beyond the general statement about ND plots. |
| - Acceptable Specular Reflectance | Evaluated, but specific results not provided. |
| - Acceptable Diffuse Reflectance | Evaluated, but specific results not provided. |
| Clinical Equivalence to Predicate Workstation: | Clinical Testing: |
| - Image quality (contrast, sharpness, artifact, overall quality) comparable to predicate workstation | Each radiologist individually rated contrast, sharpness, artifact, and overall quality as acceptable in comparison to the predicate NovaPACS workstation. |
| - Adequate for clinical use/diagnostic assurance | Each radiologist agreed that the mobile devices were comparable to the predicate NovaPACS workstation across all seven modalities and of adequate quality for clinical use, and was comfortable with the diagnoses made on the mobile devices. |
| - Overall clinical image display quality equivalent for identification of clinically relevant details | All radiologists agreed that the overall clinical image display quality on the mobile devices was equivalent to that of the NovaPACS workstation for the identification of clinically relevant details. |
| - Acceptable for regular use | Each radiologist indicated that the software and mobile devices provide acceptable quality for regular use and that they were satisfied reviewing images on the mobile devices. |
| - Same diagnosis made on mobile devices as on predicate workstation (across lighting conditions) | Each radiologist agreed that the same diagnosis would be made on the mobile devices with NovaPACS as on the predicate NovaPACS workstation in low lighting, office lighting, and bright light conditions. |
| Software Performance & Safety: | Performance Testing: |
| - All requirements have passing test cases | All requirements in the iteration have a test case, and each test case has run and passed. |
| - All Acceptance tests have passed | All Acceptance tests have passed. |
| - All Current tests have passed | All Current tests have passed. |
| - All high-impact bugs corrected and verified | All high-impact bugs have been corrected and verified by Quality Assurance. |
| - Unresolved anomalies do not pose a safety risk or substantially affect performance | Any unresolved anomalies have been assessed in a risk meeting and found not to pose a safety risk to the end user (or their patients) and not to substantially affect the performance of the NovaPACS software. |
| - Software features operate correctly and safely and meet equivalent objectives/functions as the predicate devices | Of over 1,200 test cases run, 99% passed, 1% failed, and 0% were blocked. Failed tests were mostly minor UI errors. Conclusion: testing was sufficient to conclude that the features/functionality are substantially equivalent to the predicate devices and raise no new safety concerns. |
| - Functional usability across mobile platforms | Verification and validation activities were performed, including functional usability across the Native App (mobileRad), iOS, and Android. (Successful validation is implied, as the device was cleared.) |
| Auto-detection of Unsupported Mobile Platforms: | The new version auto-detects unsupported mobile platforms at HTML5 login and displays a persistent on-screen message of "Not for diagnostic viewing". (This is a specific feature in the new version, demonstrating that it meets a functional requirement for safety; see the platform-detection sketch following this table.) |
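The bench-testing rows above state only that measured luminance was compared with target luminance; no numbers are given. As a rough illustration of how an AAPM TG18 luminance-response check is typically summarized, the sketch below compares measured per-step contrast against a target curve (for example, one derived from the DICOM Grayscale Standard Display Function) and flags deviations beyond a tolerance. The 10% tolerance, the helper names, and all luminance values are assumptions, not figures from the submission.

```python
# Hedged sketch of a TG18-style luminance-response check: compare measured per-step
# contrast against a target curve. The target values, measured values, and the 10%
# tolerance below are placeholders, not data from the NovaPACS submission.

def per_step_contrast(luminances):
    """Relative contrast dL / L_mean between successive luminance levels."""
    return [
        (hi - lo) / ((hi + lo) / 2.0)
        for lo, hi in zip(luminances[:-1], luminances[1:])
    ]

def check_luminance_response(measured, target, tolerance=0.10):
    """Flag steps where measured contrast deviates from target contrast by more than tolerance."""
    failures = []
    for i, (m, t) in enumerate(zip(per_step_contrast(measured), per_step_contrast(target))):
        deviation = abs(m - t) / t
        if deviation > tolerance:
            failures.append((i, deviation))
    return failures

# Illustrative values only (cd/m^2), not TG18-LN measurements from the submission.
target_cd_m2 = [1.0, 2.2, 4.5, 9.0, 17.0, 32.0, 60.0, 110.0, 200.0]
measured_cd_m2 = [1.1, 2.3, 4.4, 8.7, 16.5, 33.0, 63.0, 112.0, 205.0]
print(check_luminance_response(measured_cd_m2, target_cd_m2))
```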
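The last row of the table notes that the new version auto-detects unsupported mobile platforms at HTML5 login and shows a persistent "Not for diagnostic viewing" message. The submission does not say how detection is implemented; a common approach is to match the browser's user-agent string against the validated platform/browser pairs, as sketched below. The allow-list patterns and function names are hypothetical.

```python
import re

# Hypothetical allow-list of platform/browser pairs validated for diagnostic viewing;
# the authoritative list is in the NovaPACS Diagnostic Viewer User Manual.
VALIDATED_UA_PATTERNS = [
    r"iPad.*Safari",     # e.g., iPad with the Safari browser
    r"Nexus 7.*Chrome",  # e.g., Nexus 7 with the Chrome browser
]

def diagnostic_viewing_allowed(user_agent: str) -> bool:
    """Return True only if the user agent matches a validated platform/browser pair."""
    return any(re.search(p, user_agent) for p in VALIDATED_UA_PATTERNS)

def login_banner(user_agent: str):
    """Persistent banner text to display after HTML5 login, or None if no banner is needed."""
    if diagnostic_viewing_allowed(user_agent):
        return None
    return "Not for diagnostic viewing"

print(login_banner("Mozilla/5.0 (Linux; Android 4.2; Nexus 7) Chrome/32.0"))  # None
print(login_banner("Mozilla/5.0 (Windows NT 10.0) Firefox/120.0"))            # banner shown
```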
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: For the clinical testing, the radiologists evaluated six typical cases for each imaging modality, across seven modalities (CT, MR, US, CR/DX, NM, PT, and XA). This means 6 cases/modality * 7 modalities = 42 cases were used in the clinical evaluation. These cases were evaluated on multiple mobile device platforms (Native App (mobileRad), iOS, Android, and Windows).
- Data Provenance: The clinical testing was conducted by a panel of board-certified radiologists in the United States. The description refers to "typical cases," suggesting these were retrospective cases, although it's not explicitly stated as retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three board-certified radiologists were used.
- Qualifications: They were described as "board certified radiologists in the United States." No specific years of experience are mentioned.
4. Adjudication Method for the Test Set
- The text states: "For each study the radiologist individually rated the contrast, sharpness, artifact and overall quality..." and "Each radiologist agreed that the Native App (mobileRad), iOS, Android, and Windows mobile devices were comparable..." and "Each radiologist agreed that the same diagnosis would be made..."
- This indicates a consensus-based approach among the three radiologists rather than a formal pre-defined adjudication method like 2+1 or 3+1. It appears they reached a unanimous agreement on the comparability and diagnostic quality.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
- Yes, a form of MRMC comparative effectiveness study was done. The clinical testing involved multiple readers (3 radiologists) evaluating multiple cases across different mobile platforms against a predicate workstation.
- Effect Size: The study's primary conclusion regarding effect size is qualitative: the radiologists agreed that the mobile devices were "comparable," "adequate quality for clinical use," and "equivalent" to the predicate workstation for diagnostic purposes. A quantitative effect size (e.g., specific metrics like AUC difference or sensitivity/specificity improvement) is not provided in this summary, as the study focused on demonstrating non-inferiority/equivalence qualitatively for diagnostic performance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- No, a standalone (algorithm only) performance study was not done. NovaPACS is a PACS system designed for human use (viewing, analysis, annotation) and its evaluation focused on its utility with a human user, not as an autonomous diagnostic algorithm. While there's "performance testing" of the software itself (99% passed), this is system-level functional testing, not a diagnostic accuracy assessment in a standalone manner.
7. The Type of Ground Truth Used
- The ground truth for the clinical study was based on expert consensus (the agreement of the three board-certified radiologists) regarding the image quality, diagnostic assurance, and the ability to make the same diagnosis compared to the predicate workstation. The predicate NovaPACS workstation itself serves as the de facto "ground truth" or reference standard for comparison in this 510(k) submission, as the goal is to show substantial equivalence.
- There's no mention of pathology or outcomes data being used as ground truth.
8. The Sample Size for the Training Set
- The document does not specify a sample size for a training set. This is because NovaPACS is a Picture Archiving and Communications System (PACS), not an AI/ML-driven diagnostic algorithm that typically relies on a distinct training phase with labeled data. Its "training" would primarily involve software development, bug fixing, and internal quality assurance, rather than machine learning model training on specific image datasets.
9. How the Ground Truth for the Training Set Was Established
- As NovaPACS is not an AI/ML diagnostic algorithm, the concept of a "training set" with established ground truth for diagnostic purposes (e.g., presence/absence of disease) is not applicable in the context of this 510(k) summary. The "ground truth" for its development would be its functional specifications, software requirements, and the expected behavior of a PACS system relative to established standards and predicate devices.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).