Search Results
Found 2 results
510(k) Data Aggregation
Vitrea View (246 days)
Vitrea View software is a medical image viewing and information distribution solution that provides access, through the internet and within the enterprise, to multi-modality softcopy medical images (including mammography and digital breast tomosynthesis), reports, and other patient-related information. This data is hosted within disparate archives and repositories for diagnosis, review, communication, and reporting of DICOM and non-DICOM data.
Lossy-compressed mammography images and digitized film-screen images must not be reviewed for primary image interpretation. Mammographic images may only be interpreted using an FDA-cleared display that meets technical specifications reviewed and accepted by FDA, or displays accepted by the appropriate regulatory agency for the country in which the device is used.
Display monitors used for reading medical images for diagnostic purposes must comply with the applicable regulatory approvals and quality control requirements for their use and maintenance.
Vitrea View software is indicated for use by qualified healthcare professionals including, but not restricted to, radiologists, non-radiology specialists, physicians, and technologists.
When accessing Vitrea View software from a mobile device, images viewed are for informational purposes only and not intended for diagnostic use.
The Vitrea View software is a web-based, cross-platform, zero-footprint enterprise image viewer solution capable of displaying both DICOM and non-DICOM medical images. The Vitrea View software enables clinicians and other medical professionals to access patients' medical images with integrations into a variety of medical record systems, such as Electronic Health Record (EHR), Electronic Medical Record (EMR), Health Information Exchange (HIE), Personal Health Record (PHR), and image exchange systems. The Vitrea View software is a communication tool, which supports the physician in the treatment and planning process by delivering access to images at the point of care.
The Vitrea View software offers medical professionals an enterprise viewer for accessing imaging data in context with reports from enterprise patient health information databases, fosters collaboration, and provides workflows and interfaces appropriate for referring physicians and clinicians. IT departments will not have to install client systems, due to the web-based, zero-footprint nature of the Vitrea View software. The Vitrea View software offers scalability to add new users as demand grows, and may be deployed in a virtualized environment. Some of the general features include:
- Fast time-to-first-image
- Contextual launch integration with single sign-on
- Easy study navigation and search capability
- Supports multi-modality, vendor-neutral DICOM images
- Supports non-DICOM images
- Images display at full diagnostic quality (with appropriate hardware)
- Basic 2D review tools (zoom, pan, measure)
- Basic 3D and MPR viewing
- Radiology key images
- Comparative side-by-side review, regardless of image type
- Collaboration tools
- Leverages traditional DICOM as well as next-generation DICOMweb image transfer protocols (see the sketch after this list)
- Enables federated access to data across multiple sources and sites
- Web-based, zero-footprint architecture
- Secure access on various Windows® and Mac computers through standard internet browsers
- Secure access on various iOS®, Android™, and Windows® tablet devices through the device's internet browser
- Secure access on various iOS and Android smartphones through the device's internet browser
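The DICOMweb protocols named above (QIDO-RS for search, WADO-RS for retrieval) are standard HTTP services defined in DICOM PS3.18, which is what makes a zero-footprint browser client practical. Below is a minimal sketch of how any DICOMweb client might search a server; the endpoint URL and patient ID are hypothetical, and the submission does not describe Vitrea View's internal implementation.

```python
import requests

# Hypothetical DICOMweb base URL; a real deployment exposes its own endpoint.
BASE_URL = "https://pacs.example.org/dicomweb"

# QIDO-RS study search: find mammography studies for one (made-up) patient,
# returned in the standard DICOM JSON encoding.
resp = requests.get(
    f"{BASE_URL}/studies",
    params={"PatientID": "12345", "ModalitiesInStudy": "MG"},
    headers={"Accept": "application/dicom+json"},
    timeout=30,
)
resp.raise_for_status()

for study in resp.json():
    # DICOM JSON keys attributes by hex tag; 0020000D is Study Instance UID.
    uid = study["0020000D"]["Value"][0]
    # WADO-RS retrieval of the same study would be a GET on this URL.
    print(f"retrieve via GET {BASE_URL}/studies/{uid}")
```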
Here's the analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Device Name: Vitrea View
510(k) Number: K163232
1. Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria (from Radiologists' Rating) | Reported Device Performance (Vitrea View vs. Reference System) |
|---|---|
| Visualization of the adipose and fibroglandular tissue | Met clinical equivalence for diagnostic quality |
| Visualization of the breast tissue and underlying pectoralis muscle | Met clinical equivalence for diagnostic quality |
| Image contrast for differentiation of subtle tissue density differences | Met clinical equivalence for diagnostic quality |
| Sharpness, assessment of the edges of fine linear structures, tissue borders and benign calcifications | Met clinical equivalence for diagnostic quality |
| Tissue visibility at the skin line | Met clinical equivalence for diagnostic quality |
| Artifacts due to image processing, detector failure and other external factors to the breast | Met clinical equivalence for diagnostic quality |
| Overall clinical image quality | Met clinical equivalence for diagnostic quality |
2. Sample Size Used for the Test Set and Data Provenance:
- Mammography Image Quality Validation: 50 studies.
- Data Provenance: Studies were "chosen randomly from existing patient studies obtained over a two-day time-frame at the designated Breast Imaging center." This indicates retrospective data from a specific imaging center.
- Digital Breast Tomosynthesis Image Quality Validation: 50 studies.
- Data Provenance: Studies were "chosen randomly from existing patient studies obtained at the designated Breast Imaging center." This also indicates retrospective data from a specific imaging center.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts:
- Mammography Image Quality Validation: Four experienced radiologists. No further specific qualifications (e.g., years of experience) are provided.
- Digital Breast Tomosynthesis Image Quality Validation: Three experienced radiologists. No further specific qualifications are provided.
4. Adjudication Method for the Test Set:
The studies were multi-reader, multi-case tests where radiologists were asked to rate image quality equivalence. The text states:
- "The radiologists found all of the images displayed met the clinical equivalence for diagnostic quality when displayed using the Vitrea View software as compared to the same studies displayed using the McKesson system."
- "The radiologists found all of the images displayed met the clinical equivalence for diagnostic quality when displayed using Vitrea View as compared to the same studies using the sites existing Phillips Radiology system."
This suggests a consensus or agreement among the radiologists was reached, rather than a formal adjudication method such as a 2+1 or 3+1 rule. The radiologists rated image quality on a scale of 1 to 3, but the specifics of how these individual ratings were combined or adjudicated to reach the overall "met clinical equivalence" conclusion are not detailed.
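Since the combination rule is not given, the following is purely an illustrative sketch of one plausible aggregation rule, assuming hypothetical reader names, a made-up data layout, and an assumed threshold; it is not the rule used in the K163232 studies.

```python
# Hypothetical aggregation of multi-reader image-quality ratings (scale 1-3).
# The actual rule used in the K163232 studies is not described in the text.

ratings = {
    # study_id -> {reader -> {criterion -> rating}}; all values are made up.
    "study_01": {
        "reader_A": {"sharpness": 3, "contrast": 3, "overall": 3},
        "reader_B": {"sharpness": 2, "contrast": 3, "overall": 3},
    },
}

THRESHOLD = 2  # assumed minimum rating counted as "clinically equivalent"

def study_is_equivalent(per_reader: dict) -> bool:
    """True only if every reader rates every criterion at or above THRESHOLD."""
    return all(
        rating >= THRESHOLD
        for criteria in per_reader.values()
        for rating in criteria.values()
    )

for study_id, per_reader in ratings.items():
    verdict = "equivalent" if study_is_equivalent(per_reader) else "not equivalent"
    print(f"{study_id}: {verdict}")
```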
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
Yes, MRMC studies were done for both mammography and digital breast tomosynthesis. However, these were not comparative effectiveness studies evaluating human reader improvement with AI assistance. Instead, they were image quality equivalence studies comparing the device's display quality against a cleared predicate/reference device. The purpose was to show that images displayed by "Vitrea View software... met the clinical equivalence for diagnostic quality" when compared to a reference system displaying the same images. Therefore, an effect size of human improvement with AI vs. without AI is not applicable to these studies.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
No, the studies described were focused on the display quality of the Vitrea View software in conjunction with diagnostic monitors, for review by human radiologists. It was not a standalone algorithmic performance evaluation.
7. The Type of Ground Truth Used:
The ground truth was established by the subjective rating of "experienced radiologists" on the "overall clinical image quality" and other specific image quality parameters. This is effectively expert consensus on image quality and clinical equivalence for diagnostic use. It does not appear to involve pathology or outcomes data to define disease presence or absence.
8. The Sample Size for the Training Set:
The document does not specify a training set or its size. The studies described are verification and validation of the device's performance, not the training of a machine learning model.
9. How the Ground Truth for the Training Set Was Established:
Not applicable, as no training set or ground truth establishment for a training set is mentioned in the provided text.
Philips IntelliSpace Portal Platform (88 days)
Philips IntelliSpace Portal Platform is a software medical device that allows multiple users to remotely access clinical applications from compatible computers on a network.
The system allows networking, selection, processing and filming of multimodality DICOM images.
This software is for use with off-the-shelf PC computer technology that meets defined minimum specifications.
Philips IntelliSpace Portal Platform is intended to be used by trained professionals, including but not limited to physicians and medical technicians.
This medical device is not to be used for mammography.
The device is not intended for diagnosis of lossy compressed images.
Philips IntelliSpace Portal Platform is a software medical device that allows multiple users to remotely access clinical applications from compatible computers on a network. The system allows networking, selection, processing and filming of multimodality DICOM images. This software is for use with off-the-shelf PC computer technology that meets defined minimum specifications.
The IntelliSpace Portal Platform communicates with imaging systems of different modalities using the DICOM-3 standard.
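DICOM-3 network connectivity of this kind is conventionally smoke-tested with a C-ECHO ("DICOM ping"). Below is a minimal sketch using the open-source pynetdicom library, with placeholder host, port, and AE titles; it illustrates the standard Verification service, not Philips' implementation.

```python
from pynetdicom import AE

# Placeholder peer details; a real deployment uses its configured AE titles.
PEER_HOST, PEER_PORT, PEER_AE_TITLE = "192.168.0.10", 104, "ARCHIVE"

ae = AE(ae_title="PORTAL_TEST")
# Verification SOP Class UID -- the standard "DICOM ping" service.
ae.add_requested_context("1.2.840.10008.1.1")

assoc = ae.associate(PEER_HOST, PEER_PORT, ae_title=PEER_AE_TITLE)
if assoc.is_established:
    status = assoc.send_c_echo()
    if status:
        print(f"C-ECHO status: 0x{status.Status:04X}")  # 0x0000 = success
    assoc.release()
else:
    print("Association rejected, aborted, or never connected")
```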
Here's an analysis of the provided text regarding the acceptance criteria and study for the IntelliSpace Portal Platform (K162025):
The submitted document is a 510(k) Premarket Notification for the Philips IntelliSpace Portal Platform. This submission aims to demonstrate substantial equivalence to a legally marketed predicate device (GE AW Server K081985).
Important Note: The document focuses on demonstrating substantial equivalence for a Picture Archiving and Communications System (PACS) and related functionalities. Unlike AI/ML-driven diagnostic devices, the information provided here does not detail performance metrics like sensitivity, specificity, or AUC against a specific clinical condition using a test set of images with established ground truth from a clinical study. Instead, the acceptance criteria and "study" refer to engineering and functional verification and validation testing to ensure the software performs as intended and safely, consistent with a PACS system.
Here's the breakdown based on your requested information:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table with specific quantitative acceptance criteria or reported performance results in the classical sense (e.g., sensitivity, specificity, accuracy percentages) because it's for a PACS platform, not a diagnostic AI algorithm for a specific clinical task.
Instead, the "acceptance criteria" for a PACS platform primarily relate to its functional performance, compliance with standards, and safety. The reported "performance" is a successful demonstration of these aspects.
| Acceptance Criteria (Inferred from regulatory requirements and description) | Reported Device Performance (as stated in the submission) |
|---|---|
| Compliance with ISO 14971 (Risk Management) | Demonstrated compliance with ISO 14971. (p. 9) |
| Compliance with IEC 62304 (Medical Device Software Lifecycle Processes) | Demonstrated compliance with IEC 62304. (p. 9) |
| Compliance with NEMA PS 3.1-PS 3.20 (DICOM Standard) | Demonstrated compliance with NEMA PS 3.1-PS 3.20 (DICOM). (p. 9) |
| Compliance with FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices | Demonstrated compliance with the relevant FDA guidance document. (p. 9) |
| Meeting defined functionality requirements and performance claims (e.g., networking, selection, processing, filming of multimodality DICOM images, multi-user access, various viewing/manipulation tools as listed in comparison tables) | Verification and validation tests performed to address intended use, technological characteristics, requirement specifications, and risk management results. Tests demonstrated the system meets all defined functionality requirements and performance claims. (p. 9) |
| Safety and effectiveness equivalent to predicate device | Demonstrated substantial equivalence in terms of safety and effectiveness, confirming no new safety or effectiveness concerns. (p. 9, 10) |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
This type of information is not provided in this document. Since the submission is for a PACS platform and not a diagnostic AI algorithm, there is no mention of a "test set" of clinical cases or patient data in the context of diagnostic performance evaluation. The "testing" refers to software verification and validation, which would involve testing functionalities rather than analyzing a dataset of medical images.
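To make "testing functionalities" concrete, a functional verification test for a DICOM-handling platform might look like the sketch below, which uses the open-source pydicom library and its bundled sample file. This is illustrative only and not Philips' actual test suite.

```python
import pydicom
from pydicom.data import get_testdata_file

def test_dicom_dataset_loads_with_required_attributes():
    """Functional check: a DICOM file parses and exposes required attributes."""
    path = get_testdata_file("CT_small.dcm")  # sample file shipped with pydicom
    ds = pydicom.dcmread(path)
    assert ds.Modality == "CT"
    assert ds.SOPInstanceUID                   # non-empty unique identifier
    assert ds.pixel_array.shape == (128, 128)  # image data decodes as expected
```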
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
This information is not applicable/not provided. As explained above, there is no "test set" of clinical cases with ground truth established by medical experts for diagnostic performance.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not applicable/not provided. There is no clinical "test set" requiring adjudication for diagnostic performance.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. This device is a PACS platform, not an AI-assisted diagnostic tool designed to improve human reader performance for a specific clinical task. The submission explicitly states: "The subject of this premarket submission, ISPP does not require clinical studies to support equivalence." (p. 9).
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
No, a standalone performance study (in the context of an AI algorithm performing a diagnostic task) was not done. This device is a software platform for image management and processing, intended for use by trained professionals (humans-in-the-loop) for visualization and administrative functions.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
This information is not applicable/not provided. There is no ground truth data in the context of diagnostic accuracy for this PACS platform submission. The "ground truth" for its functionality would be defined by its requirement specifications, and testing would verify if those specifications are met.
8. The sample size for the training set
This information is not applicable/not provided. This device is a PACS platform, not an AI/ML algorithm that requires a "training set" of data in the machine learning sense. The software development process involves design and implementation, followed by verification and validation, but not training on a dataset of images to learn a specific task.
9. How the ground truth for the training set was established
This information is not applicable/not provided. As there is no "training set," there is no ground truth establishment for it.