K Number
K250181
Device Name
AV Viewer
Date Cleared
2025-07-15

(174 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The AV Viewer is an advanced visualization software intended to process and display images and associated data in a clinical setting.

The software displays images of different modalities and timepoints, and performs digital image processing, measurement, manipulation, quantification and communication.

The AV Viewer is not to be used for mammography.

Device Description

AV Viewer is an advanced visualization software which processes and displays clinical images from the following modality types: CT, CBCT – CT format, Spectral CT, MR, EMR, NM, PET, SPECT, US, XA (iXR, DXR), DX, CR and RF.

The main features of the AV Viewer are:
• Viewing of current and prior studies
• Basic image manipulation functions such as real-time zooming, scrolling, panning, windowing, and rolling/rotating
• Advanced processing tools assisting in the interpretation of clinical images, such as 2D slice view, 2D and 3D measurements, user-defined regions of interest (ROIs), 3D segmentation and editing, 3D model visualization, MPR (Multi-Planar Reconstruction) generation, image fusion, and more
• A finding dashboard used for capturing and displaying a patient's findings as an overview
• Customizable workflows that let users create their own workflows
• Tools to export customizable reports to the Radiology Information System (RIS) or PACS (Picture Archiving and Communication System) in different formats
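As a simplified illustration of one of the basic manipulation functions listed above, windowing maps raw pixel intensities into a display range via a linear window/level (center/width) transform. The sketch below is a generic implementation for illustration only, not Philips' actual code; the function name and parameters are hypothetical:

```python
def apply_window(pixels, center, width, out_max=255):
    """Linearly map raw pixel values to a display range using a
    window/level (center/width) transform; values outside the
    window are clamped to the display extremes."""
    lo = center - width / 2
    hi = center + width / 2
    out = []
    for p in pixels:
        if p <= lo:
            out.append(0)
        elif p >= hi:
            out.append(out_max)
        else:
            out.append(round((p - lo) / width * out_max))
    return out
```

For example, `apply_window([-10, 0, 40, 80, 200], center=40, width=80)` clamps values at or below 0 to black and values at or above 80 to white, scaling everything in between.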

AV Viewer is based on the AV Framework, an infrastructure that provides the basis for the AV Viewer and common functionalities such as image viewing, image editing tools, measurement tools, and the finding dashboard.

AV Viewer can be hosted on multiple platforms and devices, such as the Philips AVW, a Philips CT/MR scanner console, or in the cloud.

AI/ML Overview

The provided FDA 510(k) clearance letter for the AV Viewer device indicates that the device has met its acceptance criteria through various verification and validation activities. However, the document does not include detailed quantitative acceptance criteria (e.g., specific thresholds for accuracy, sensitivity, specificity, or measurement error) or comprehensive performance data that would typically be presented in a clinical study report. The submission focuses on demonstrating "substantial equivalence" to a predicate device rather than presenting detailed performance efficacy of the algorithm itself.

Therefore, much of the requested information regarding specific performance metrics, sample sizes for test and training sets, expert qualifications, and detailed study methodologies is not explicitly stated in this 510(k) summary. The sections below extract and infer what is present and explicitly note where information is missing.

Here's a breakdown based on the provided document:

Acceptance Criteria and Device Performance

The document describes comprehensive verification and validation activities, including "Bench Testing" for measurements and segmentation algorithms. However, specific quantitative acceptance criteria (e.g., "accuracy > 95%") and the reported performance values are not detailed in this summary. The general statement is that "Product requirement specifications were tested and found to meet the requirements" and "The validation objectives have been fulfilled, and the validation results provide evidence that the product meets its intended use and user requirements."

Table of Acceptance Criteria and Reported Device Performance

| Feature/Metric | Acceptance Criteria (Quantified) | Reported Device Performance (Quantified) | Supporting Study Type Mentioned |
|---|---|---|---|
| General Functionality | Meets product requirement specifications | Meets product requirements | Verification, Validation |
| Clinical Use Simulation | Successful performance in use case scenarios | Passed successfully by clinical expert | Expert Test |
| Measurement Accuracy | Not explicitly stated | "Correctness of the various measurement functions" | Bench Testing |
| Segmentation Accuracy | Not explicitly stated | "Performance" validated for segmentation algorithms | Bench Testing |
| User Requirements | Meets user requirement specifications | Meets user requirements | Validation |
| Safety and Effectiveness | Equivalent to predicate device | Safe and effective; substantially equivalent to predicate | Verification, Validation, Substantial Equivalence Comparison |

Note: The quantitative details for the "Acceptance Criteria" and "Reported Device Performance" for measurement accuracy and segmentation accuracy are missing from this 510(k) summary. The document only confirms that these tests were performed and the results were positive.
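To illustrate what a measurement bench test of this kind typically looks like (the summary does not disclose the actual protocol, tolerances, or reference data, so the function name and the tolerance value below are hypothetical), a minimal sketch in Python:

```python
def check_measurements(measured_mm, reference_mm, tol_mm=0.5):
    """Compare algorithm-produced measurements against reference values.

    Returns a (passed, errors) pair: passed is True when every absolute
    error is within the tolerance; errors lists the per-case deviations.
    """
    errors = [abs(m - r) for m, r in zip(measured_mm, reference_mm)]
    return all(e <= tol_mm for e in errors), errors
```

A real bench test would additionally report the tolerance derivation, the reference-measurement method, and summary statistics, none of which appear in this 510(k) summary.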


Study Details Based on the Provided Document:

2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Test Set Sample Size: Not explicitly stated. The document mentions "Verification," "Validation," "Expert Test," and "Bench Testing" were performed, implying the use of test data, but no specific number of cases or images in the test set is provided.
  • Data Provenance: Not explicitly stated. The document does not specify the country of origin of the data used for testing, nor does it explicitly state whether the data was retrospective or prospective.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of Experts: Not explicitly stated. The "Expert Test" mentions "a clinical expert" (singular) was used to test use case scenarios. It does not mention experts establishing ground truth for broader validation.
  • Qualifications of Experts: The "Expert Test" mentions "a clinical expert." For intended users, the document states "trained professionals, including but not limited to, physicians and medical technicians" (Subject Device), and "trained professionals, including but not limited to radiologists" (Predicate Device). It can be inferred that the "clinical expert" would hold one of these qualifications, but specific details (e.g., years of experience, subspecialty) are not provided.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

  • Adjudication Method: Not explicitly stated. The document refers to "Expert test" where "a clinical expert" tested scenarios, implying individual assessment rather than a multi-reader adjudication process for establishing ground truth for a test set.

5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

  • MRMC Comparative Effectiveness Study: Not explicitly stated or implied. The document focuses on the device's substantial equivalence to a predicate device and its internal verification and validation. There is no mention of a human-in-the-loop MRMC study to compare reader performance with and without AV Viewer assistance. The AV Viewer is described as an "advanced visualization software" and not specifically an AI-driven diagnostic aid that would typically warrant such a study for demonstrating improved reader performance.

6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Standalone Performance Study: The "Bench Testing" section states that it "was performed on the measurements and segmentation algorithms to validate their performance and the correctness of the various measurement functions." This implies a standalone evaluation of these specific algorithms. However, the quantitative results (e.g., accuracy, precision) of this standalone performance are not provided.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • Type of Ground Truth: For the "Bench Testing" of measurement and segmentation algorithms, the ground truth would likely be based on reference measurements/segmentations, possibly done manually by experts or using highly accurate, non-clinical methods. For other verification/validation activities, the ground truth would be against the pre-defined product and user requirements. However, explicit details about how this ground truth was established (e.g., expert consensus, comparison to gold standard devices/methods) are not specified.
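One common way segmentation output is scored against a reference mask in bench testing (not confirmed by this summary; shown only as an illustration) is an overlap metric such as the Dice coefficient. A minimal sketch, assuming masks are represented as sets of voxel coordinates:

```python
def dice_coefficient(predicted, reference):
    """Dice overlap between a predicted segmentation mask and a
    reference (ground-truth) mask, each given as a set of voxel
    coordinates; 1.0 means perfect agreement, 0.0 means no overlap."""
    if not predicted and not reference:
        return 1.0  # two empty masks agree perfectly
    overlap = len(predicted & reference)
    return 2 * overlap / (len(predicted) + len(reference))
```

Whether Philips used Dice, Hausdorff distance, or another metric for the AV Viewer's segmentation bench testing is not stated in the document.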

8. The sample size for the training set

  • Training Set Sample Size: Not explicitly stated. The document does not mention details about the training data used to develop the AV Viewer's algorithms. The focus is on validation for regulatory clearance. Since the product is primarily an "advanced visualization software" with general image processing tools, much of its functionality might not rely on deep learning requiring large, distinct training sets in the same way a specific AI diagnostic algorithm would.

9. How the ground truth for the training set was established

  • Ground Truth for Training Set: Not explicitly stated. As no training set details are provided, the method for establishing its ground truth is also not mentioned.

Summary of Missing Information:

This 510(k) summary provides a high-level overview of the device's intended use, comparison to a predicate, and the types of verification and validation activities conducted. It largely focuses on demonstrating "substantial equivalence" based on similar indications for use and technological characteristics. Critical quantitative details about the performance of specific algorithms (measurements, segmentation), the size and characteristics of the datasets used for testing, and the methodology for establishing ground truth are not included in this public summary. Such detailed performance data is typically found in the full 510(k) submission, which is not publicly released in its entirety.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).