Image Management is intended to provide complete and scalable local and wide area PACS solutions for hospitals and related institutions/sites, which will archive, retrieve, process and display medical images and data from hospital medical imaging and information systems. The device contains clinical applications that assist in the processing, analysis and comparison of medical images. It is a single device that integrates review, dictation and reporting tools to create a productive work environment for radiologists and physicians.
The Image Management viewers are used by clinicians for patient exam management in order to access and display patient data, medical reports and medical images from different modalities, including but not limited to CR, DR, CT, MR, NM, ECG, US, MG*, DBT*, OP and OPT. The device provides wireless and portable access to medical images for remote reading or referral purposes from web browsers using current standard HTML.
*For primary interpretation and review of mammography images, only use display hardware that is specifically designed for and cleared by FDA for mammography.
Image Management V15 is a software-based system and is intended to provide completely scalable local and wide area PACS (Picture Archiving and Communication System) solutions for hospitals and related institutions/sites, which will archive, distribute, retrieve, process and display images and data from all hospital modalities and information systems. The device is to be used by trained professionals including, but not limited to, physicians and medical technicians. The device contains clinical applications that assist in the processing, analysis and comparison of medical images. It is a single device that integrates review, dictation and reporting tools to create a productive work environment for radiologists and physicians.
IM V15 supports receiving, sending, printing, storing and displaying studies received from the following modality types via DICOM (Digital Imaging and Communications in Medicine): Computed Tomography (CT), Magnetic Resonance (MR), Nuclear Medicine (NM), Ultrasound (US), X-Ray (XR), X-Ray Angiography (XA), Positron Emission Tomography (PET), Computed Radiography (CR), Digital Radiography (DR; an abbreviation not defined by the DICOM standard), Radio Fluoroscopy (RF), Radiation Therapy (RT), Mammography (MG), Secondary Capture (SC), Visible Light (VL), Optical Coherence Tomography (OCT), Electrocardiogram (ECG) and Ophthalmic Photography (OP), as well as hospital/radiology information systems.
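For illustration only, the interoperability described above hinges on DICOM attributes such as Modality (0008,0060) carried in each received object. The sketch below is an assumption, not taken from the submission: it shows how a receiving system might inspect that attribute with pydicom and compare it against an approximate mapping of the listed modality types to DICOM modality codes (e.g., PET is coded as "PT"). The file path and the SUPPORTED_MODALITIES set are hypothetical.

```python
# Hypothetical sketch: check the DICOM Modality tag (0008,0060) of an incoming
# object against the modality types listed above. Not the vendor's implementation.
from pydicom import dcmread

# Approximate mapping of the listed modality types to DICOM Modality codes (assumed).
SUPPORTED_MODALITIES = {
    "CT", "MR", "NM", "US", "XA", "PT", "CR", "DR",
    "RF", "RTIMAGE", "MG", "SC", "OCT", "ECG", "OP",
}

ds = dcmread("incoming/slice_0001.dcm")          # hypothetical file path
modality = ds.get("Modality", "")
print(f"SOP Class UID: {ds.get('SOPClassUID', '')}, Modality: {modality}")
if modality not in SUPPORTED_MODALITIES:
    print("Modality not in the supported list; route the study for manual review.")
```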
The provided document, a 510(k) summary for Philips Medical Systems' Image Management V15, does not contain the detailed information necessary to fully answer the request.
Specifically, an AI/algorithm-centric study proving the device meets acceptance criteria for specific performance metrics is not described. The document primarily focuses on demonstrating substantial equivalence to a predicate device (CARESTREAM PACS K110919) based on indications for use, technological characteristics, and compliance with general medical device standards.
However, based on the information provided, here's what can be inferred and what remains unknown regarding acceptance criteria and a "study" of device performance:
Acceptance Criteria and Reported Device Performance:
The document states that "Verification and validation tests have been performed to address intended use, technological characteristics, specifications and risk management results" and that "The test results in this 510(k) premarket notification demonstrate that Image Management v15 complies with the aforementioned international and FDA-recognized consensus standards and FDA guidance documents, and is substantially equivalent to the primary predicate device."
This implies that the acceptance criteria are tied to:
- Compliance with ISO 14971, IEC 62304, and NEMA PS 3.1-3.22 (DICOM Standard).
- Meeting "intended characteristics" and "technological specifications."
- Successful mitigation of risks identified through risk management.
- Demonstrating substantial equivalence to the predicate device.
Without explicit performance metrics (e.g., accuracy, sensitivity, specificity for a specific clinical task), a table of acceptance criteria and reported device performance cannot be generated as requested. The document doesn't provide quantitative results of these "verification and validation tests" beyond a statement of compliance.
Study Information (based on what is and is not in the document):
1. A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: As inferred above, these include compliance with specified standards (ISO 14971, IEC 62304, DICOM), meeting intended characteristics and technological specifications, and addressing risk management results. However, no specific quantitative performance metrics (e.g., accuracy, precision, recall) are listed as acceptance criteria, nor are their corresponding reported device performance values provided.
- Reported Device Performance: Not explicitly stated in quantitative terms in this summary. The summary focuses on compliance and equivalence, not on specific performance results that would be typically seen in a study evaluating an AI algorithm's diagnostic capabilities.
2. Sample size used for the test set and the data provenance:
- Not provided. The document mentions "verification and validation tests" but gives no details about the data (e.g., sample size, type of images, country of origin, retrospective/prospective collection). Given the nature of the device as an "Image Management System" (PACS), the testing might involve functional performance, data integrity, and display capabilities rather than diagnostic accuracy on a specific disease.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not provided. Ground truth establishment is typically relevant for diagnostic AI algorithms. Since this is an image management system, the "ground truth" for its testing would likely be related to correct image archiving, retrieval, processing, and display, rather than a clinical diagnosis.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not provided. Adjudication methods are relevant for establishing ground truth in diagnostic studies, which is not described here.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No evidence of an MRMC comparative effectiveness study involving AI assistance and human readers is present. The device is described as an "Image Management System" with "clinical applications that assist the processing, analyzing and comparing medical images." While it processes images, it is not described as an AI intended to directly assist or augment human diagnostic performance in the way an AI CADx (Computer-Aided Detection/Diagnosis) system would. Therefore, an MRMC study aimed at quantifying human improvement with AI assistance would not be applicable or described for this type of device based on this summary.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- No evidence of a standalone algorithm performance study is present. Again, the summarized device appears to be a PACS system, not a standalone diagnostic AI algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated and likely not applicable in the typical diagnostic sense. For an image management system, "ground truth" would relate to the correctness of data handling, image fidelity, display accuracy, and functionality as per DICOM standards and internal specifications, rather than a clinical diagnosis confirmed by pathology or outcomes.
8. The sample size for the training set:
- Not applicable and not provided. This device is described as a software system primarily for image management. While it may contain "clinical applications that assist the processing, analyzing and comparing medical images," it is not described as a deep learning or machine learning-based AI that requires a "training set" in the conventional sense for a diagnostic task.
9. How the ground truth for the training set was established:
- Not applicable and not provided. (See point 8).
In Summary:
The provided document describes a 510(k) submission for an "Image Management V15" system, which is a PACS. The focus of the submission is on demonstrating substantial equivalence to an existing predicate device and compliance with general medical device standards for software and risk management. It does not describe a clinical study of an AI algorithm with specific diagnostic performance acceptance criteria, test sets, or ground truth establishment relevant to AI diagnostic capabilities. The "clinical applications" mentioned within the PACS are for "processing, analyzing and comparing medical images" and would likely refer to standard image manipulation tools (e.g., 3D reconstruction, volume matching, MPR) rather than novel AI diagnostic algorithms requiring specific performance validation studies as requested in the prompt.
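For context, the "standard image manipulation tools" referred to above can be as basic as multiplanar reformatting (MPR). The following is a minimal, hypothetical sketch of MPR with NumPy; it is not the device's implementation, and the array shape and slice indices are assumptions.

```python
# Hypothetical MPR sketch: reslice an axial (slices, rows, cols) volume into
# coronal and sagittal planes. Real viewers also resample to account for
# anisotropic voxel spacing, which is omitted here.
import numpy as np

volume = np.random.rand(200, 512, 512)    # stand-in for a loaded CT stack
axial = volume[100, :, :]                  # native acquisition plane
coronal = volume[:, 256, :]                # reformat along the row axis
sagittal = volume[:, :, 256]               # reformat along the column axis
print(axial.shape, coronal.shape, sagittal.shape)
```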
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
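To make the "complex quantitative functions" wording in paragraph (a) concrete, here is a minimal, hypothetical sketch of a semi-automated volume measurement using SimpleITK. The file name and threshold values are assumptions, and the code is illustrative rather than a description of the cleared device's functionality.

```python
# Hypothetical semi-automated measurement: threshold a CT volume, label the
# connected components, and report each region's physical volume in mm^3.
import SimpleITK as sitk

image = sitk.ReadImage("ct_series.nii.gz")                  # hypothetical file
mask = sitk.BinaryThreshold(image, lowerThreshold=200.0,     # assumed HU window
                            upperThreshold=3000.0)
stats = sitk.LabelShapeStatisticsImageFilter()
stats.Execute(sitk.ConnectedComponent(mask))
for label in stats.GetLabels():
    print(f"Region {label}: {stats.GetPhysicalSize(label):.1f} mm^3")
```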