Search Results

Found 6 results

510(k) Data Aggregation

    K Number
    K143397
    Device Name
    ICIS View
    Date Cleared
    2015-06-01

    (187 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K103785, K022292, K111164

    Intended Use

    ICIS® View is a software application used for reference viewing of medical images and associated reports and, as such, fulfills a key role in Agfa HealthCare's Imaging Clinical Information System (ICIS). ICIS® View enables healthcare professionals, including (but not limited to) physicians, surgeons, nurses, and administrators to receive and view patient images and data from multiple departments and organizations within one multi-disciplinary viewer.

    Users may access the product directly via a web-browser, select mobile devices, healthcare portal or within the Electronic Medical Record (EMR). ICIS® View allows users to perform basic image manipulations and measurements (for example window/level, rotation, zoom, and markups).
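    Window/level is the core image manipulation named above. As a minimal illustrative sketch (not Agfa's implementation), a linear window/level mapping from raw pixel values to an 8-bit display range looks like this:

```python
def window_level(pixels, center, width, out_max=255):
    # Linear window/level mapping: values below the window floor
    # clamp to 0, values above the ceiling clamp to out_max.
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = []
    for p in pixels:
        v = (p - lo) / (hi - lo) * out_max
        out.append(int(round(min(max(v, 0), out_max))))
    return out

# Example: a soft-tissue-style window (center 40, width 400)
print(window_level([-1000, -160, 40, 240, 3000], center=40, width=400))
# [0, 0, 128, 255, 255]
```

    Narrowing the width increases displayed contrast; shifting the center changes brightness, which is exactly what a window/level control exposes to the user.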

    ICIS® View can optionally be configured for Full Fidelity mode, which is intended for diagnostic use, review and analysis of CR, DX, CT, MR, US images and medical reports. ICIS® View Full Fidelity is not intended to replace full diagnostic workstations and should only be used when there is no access to a workstation. ICIS® View Full Fidelity is not intended for the display of digital mammography images for diagnosis.

    Device Description

    Agfa's ICIS® View system is a picture archiving and communication system (PACS), product code LLZ, intended to provide an interface for the display, annotation, review, printing, storage and distribution of multimodality medical images, reports and demographic information for review and diagnostic purposes within the system and across computer networks.

    The new device is substantially equivalent to the predicate devices (K103785, K022292, & K111164). It is a multidisciplinary viewer that allows the user to securely access patient images and reports from any PACS or vendor-neutral archive. Images and reports can be viewed directly via a web browser, select mobile devices, a healthcare portal or an Electronic Medical Record (EMR). The new device includes some of the clinical tools of the predicate devices, specifically the functionality to retrieve original lossless renditions of stored images for diagnostic purposes.

    The optional Full Fidelity functionality allows the retrieval of original lossless renditions of stored CR, DX, CT, MR, and US images for diagnostic purposes on select mobile devices or FDA cleared display monitors when there is no access to a full workstation.

    AI/ML Overview

    The provided text describes the 510(k) submission for AGFA Healthcare's ICIS® View device. This device is a Picture Archiving and Communications System (PACS) software intended for reference viewing of medical images and associated reports. The submission aims to demonstrate substantial equivalence to previously marketed predicate devices.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a formal "acceptance criteria table" with specific quantitative metrics (e.g., sensitivity, specificity, or numerical performance thresholds for image quality). Instead, the acceptance is based on demonstrating substantial equivalence to predicate devices, primarily through qualitative evaluations of image quality and functional parity.

    The core acceptance criterion is that the ICIS® View system performs equivalently to the predicate devices in terms of diagnostic image quality when used in "Full Fidelity" mode.

    Acceptance Criteria (implied by substantial equivalence) and Reported Device Performance:

    • Criterion: Diagnostic image quality equivalent to the predicate PACS workstation (IMPAX 6.6.1) for CR, DX, CT, MR, and US modalities on both desktop and mobile (iPad® 3, iPad® 4).
    Performance: Qualified radiologists evaluated a sample set of images across three platforms: ICIS® View Desktop (with an FDA-cleared diagnostic monitor), ICIS® View Mobile (with calibrated iPad® 3 and iPad® 4), and Agfa's full diagnostic PACS workstation IMPAX 6.6.1 (the predicate). They provided an "acceptable" or "unacceptable" score when comparing diagnostic quality, including evaluations of contrast, sharpness, artifacts, and overall image quality. The overall finding was that "Performance data including resolution testing and image quality evaluations by qualified radiologists are adequate to ensure equivalence to the predicates." This implies the performance was "acceptable" and equivalent.

    • Criterion: Compliance with TG18 image quality assessment parameters.
    Performance: The "Assessment of Display Performance for Medical Imaging Systems" test patterns (TG18-QC, TG18-BR, TG18-LP) were used for display device assessment. "All results met acceptance criteria."

    • Criterion: Functional equivalence/parity with predicate devices.
    Performance: The device demonstrates functional parity with the predicate devices in terms of communication (DICOM), no mammographic use, support for CR, DX, CT, MR, and US modalities, operating systems (Windows and iOS), mobile device support, transfer/storage/display of images, network access, user authentication, window/level, rotate/pan/zoom, measurements, and annotations. The differences (server- vs. app-based) do not alter the intended diagnostic effect.

    • Criterion: Product and manufacturing processes conform to relevant standards.
    Performance: The product, manufacturing, and development processes conform to ACR/NEMA PS3.1-3.20: 2011 DICOM, ISO 14971:2012 (Risk Management), and ISO 13485:2003 (Quality Management Systems).

    • Criterion: Risk assessment demonstrates acceptable residual risk.
    Performance: "During the final risk analysis meeting, the risk management team concluded that the medical risk is no greater than with a conventional PACS system previously released to the field. For ICIS® View there are a total of 20 risks in the broadly acceptable region and two risks in the ALARP region. Zero risks were identified in the Not Acceptable Region." The overall benefits are determined to outweigh the residual risks.
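    The radiologist scoring procedure described above reduces to a simple pass/fail tally: equivalence is supported only if every study on every platform is scored "acceptable" against the predicate. This is a hypothetical illustration of that logic, not the actual study tooling; the platform names and counts are assumptions taken from the text:

```python
def all_acceptable(platform_scores):
    # Equivalence holds only if every study on every platform was
    # scored "acceptable" relative to the predicate workstation.
    return all(score == "acceptable"
               for scores in platform_scores.values()
               for score in scores)

# Hypothetical scores: ~6 studies per modality across 5 modalities (~30)
scores = {
    "ICIS View Desktop": ["acceptable"] * 30,
    "ICIS View Mobile (iPad)": ["acceptable"] * 30,
}
print(all_acceptable(scores))  # True
```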

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: A "sample set of an average of 6 imaging studies per modality (CR, DX, CT, MR, US)" was evaluated. This means approximately 30 studies in total (6 studies × 5 modalities).
    • Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. It refers to "laboratory data" and "image quality evaluations conducted with qualified radiologists," suggesting existing image datasets were used for testing, which typically points to retrospective data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: The document states "qualified radiologists" were used. The exact number of radiologists is not specified.
    • Qualifications of Experts: The experts are described as "qualified radiologists," indicating they are medical professionals specializing in radiology. No specific experience levels (e.g., "10 years of experience") are provided in the text.

    4. Adjudication Method for the Test Set

    The document states that qualified radiologists "were asked to provide an acceptable or unacceptable score when comparing the diagnostic quality...to the IMPAX predicate." This suggests a comparative evaluation rather than a direct adjudication for "ground truth" of disease presence. The adjudication here seems to be on the equivalence of display quality for diagnostic interpretation rather than agreement on specific findings. No specific multi-reader adjudication method (e.g., 2+1, 3+1) is described for establishing a definitive "ground truth" diagnosis for each case within the test set itself, as the assessment was comparative to a predicate display rather than a diagnostic accuracy study measuring against a reference standard.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • The study described is a comparative evaluation between the new device and a predicate device by "qualified radiologists." While radiologists evaluated multiple cases, the study's primary goal was to establish display equivalence for diagnostic purposes, rather than a formal MRMC study aimed at quantifying the effect size of AI assistance on human reader performance.
    • Effect Size of AI vs. Without AI Assistance: This study did not involve AI assistance. The ICIS® View device is a PACS viewer, not an AI diagnostic tool. Therefore, an MRMC comparative effectiveness study regarding AI assistance was not applicable and not conducted. The comparison was between two different display systems (ICIS® View vs. IMPAX Workstation).

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • As ICIS® View is a PACS viewer software, not a diagnostic algorithm, the concept of "standalone performance" in the context of an algorithm's diagnostic accuracy is not directly applicable.
    • The performance evaluation focused on the visual equivalence of the images displayed by the device compared to a predicate, as interpreted by human radiologists. It is a human-in-the-loop assessment of the display quality.

    7. The Type of Ground Truth Used

    • The "ground truth" in this context is not a pathological diagnosis or clinical outcome, but rather the "diagnostic quality" provided by the predicate system (Agfa's IMPAX 6.6.1 workstation). Radiologists were asked to assess whether the ICIS® View's display quality was "acceptable" when compared to the established diagnostic quality of the predicate. This is a form of expert consensus/comparison against an established standard (the predicate's display) for the purpose of demonstrating substantial equivalence of image rendering.

    8. The Sample Size for the Training Set

    The document does not mention a "training set" for the ICIS® View device, as it is a PACS viewing software, not a machine learning or AI model that requires a data-driven training phase. The development and testing focused on software functionality and image display fidelity.

    9. How the Ground Truth for the Training Set was Established

    Since there is no mention of a training set or an AI/ML component, this question is not applicable based on the provided text.


    K Number
    K133135
    Date Cleared
    2014-03-07

    (154 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K022292, K043441

    Intended Use

    IMPAX Volume Viewing software is a visualization package for PACS workstations. It is intended to support radiographer, medical imaging technician, radiologist and referring physician in the reading, analysis and diagnosis of DICOM compliant volumetric medical datasets. The software is intended as a general purpose digital medical image processing tool, with optional functionality to facilitate visualization and measurement of vessel features.

    Other optional functionality is intended for the registration of anatomical (CT) on functional volumetric image data (MR) to facilitate the comparison of various lesions. Volume and distance measurements are intended for evaluation and quantification of tumor measurements, and other analysis and evaluation of both hard and soft tissues. The software also supports interactive segmentation of a region of interest (ROI).

    Web-browser access is available for review purposes. It should not be used to arrive at a diagnosis, treatment plan, or other decision that may affect patient care.

    Device Description

    The new device is similar to the predicate devices. All are PACS system accessories that allow the user to view and manipulate 3D image data sets. This new version includes automated removal of bone-like structures, stenosis measurement and web-browser access.

    Principles of operation and technological characteristics of the new and predicate devices are the same.
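    The stenosis measurement mentioned above, in the common diameter-based formulation, compares a minimal lumen diameter against a reference (normal) diameter. A hedged sketch follows; the formula is the standard diameter-stenosis calculation, but the document does not state which formulation the device actually uses:

```python
def percent_stenosis(d_min_mm, d_ref_mm):
    # Diameter stenosis: relative narrowing of the minimal lumen
    # diameter versus a reference (normal) segment, in percent.
    if d_ref_mm <= 0:
        raise ValueError("reference diameter must be positive")
    return (1.0 - d_min_mm / d_ref_mm) * 100.0

# A 2 mm minimal lumen within a 5 mm reference vessel:
print(percent_stenosis(2.0, 5.0))  # 60.0
```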

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes verification and validation testing, with acceptance criteria for various functionalities. The reported performance consistently "met acceptance criteria" for all described tests. Specific quantitative performance values are generally not provided, instead emphasizing meeting the established criteria for equivalence and usability.

    Functionality Tested, Acceptance Criteria, and Reported Device Performance:

    • Measurement algorithms. Criteria: identical output to the predicates (Volume Viewing 2.0 (K111638) and Registration and Fusion (K080013)); measurements within +/- scanner resolution. Performance: results met the established acceptance criteria.
    • Crosshair positioning. Criteria: within half a voxel (to allow for rounding differences across graphics video cards). Performance: results met the established acceptance criteria.
    • Centerline computation/vessel visualization (contrast-filled vessels in CT angiography). Criteria: adequate tracing and visualization (via side-by-side comparison with the IMPAX Volume Viewing 2.2 predicate). Performance: results met acceptance criteria.
    • Stenosis measurement. Criteria: the user can determine the amount of stenosis (via side-by-side comparison with the Voxar 3D predicate). Performance: results met acceptance criteria.
    • Bone-like structure removal (CT angiography of thorax, abdomen, pelvis, upper/lower extremities). Criteria: adequate removal from view (via side-by-side comparison with the Voxar predicate). Performance: results met acceptance criteria.
    • Volume measurements (manual/semi-automated). Criteria: the user can perform measurements in a user-friendly and intuitive way. Performance: results met acceptance criteria.
    • Image quality (2D and 3D rendering). Criteria: adequate (via side-by-side comparison with the IMPAX Volume Viewing 2.2 predicate). Performance: results met acceptance criteria.
    • Web-browser component (XERO Clinical Applications 1.0) for non-diagnostic review. Criteria: usability of features and functionalities for non-diagnostic review of CT and MR data sets using 3D and multi-planar reconstructions. Performance: validation successfully completed; scored as acceptable.
    • Stereoscopic 3D viewing. Criteria: equivalent to "regular" 3D viewing, with no distinct medical or clinical benefit over "regular" 3D viewing. Performance: concluded that both viewing methods are equivalent.
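    The half-voxel crosshair criterion above is easy to express directly. A minimal sketch, assuming crosshair positions are reported in voxel index coordinates (the function name is hypothetical):

```python
def crosshairs_linked(pos_a, pos_b, tol_voxels=0.5):
    # Linked viewports pass if their crosshair positions agree to
    # within half a voxel along every axis (allowing for rounding
    # differences across graphics cards).
    return all(abs(a - b) <= tol_voxels for a, b in zip(pos_a, pos_b))

print(crosshairs_linked((10.0, 20.0, 5.0), (10.3, 19.6, 5.0)))  # True
print(crosshairs_linked((10.0, 20.0, 5.0), (10.6, 20.0, 5.0)))  # False
```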

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Validation (Clinical Studies): 154 anonymized clinical studies.
    • Sample Size for Web-browser Component Validation: 42 anonymized clinical data sets.
    • Data Provenance: The anonymized clinical studies were used for validation in test labs and hospitals in the United Kingdom, the United States, Ireland, and Belgium. The document doesn't specify if the data originated from these countries or elsewhere, only where the validation was performed. The data is retrospective, as it consists of "anonymized clinical studies."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Main Validation: 29 medical professionals participated. Their specific qualifications (e.g., years of experience, subspecialty) are not explicitly stated, beyond being "medical professionals."
    • Web-browser Component Validation: 11 medical professionals participated. Their specific qualifications are also not explicitly stated.
    • Stereoscopic 3D Viewing Concept Tests: 6 medical professionals from 4 different hospitals in Belgium and the Netherlands. Their specific qualifications are not explicitly stated.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe a formal adjudication method (like 2+1 or 3+1 consensus) for the test sets. Instead, it mentions that a "scoring scale was implemented and acceptance criteria established" for the main validation. For the web-browser component, "examiners focused on the usability of features and functionalities." For the stereoscopic 3D viewing, "Tests by medical professionals showed" and "The tests concluded." This suggests a consensus-based approach or individual assessments against established scoring scales rather than a formal arbitration process.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No Multi-Reader Multi-Case (MRMC) comparative effectiveness study explicitly designed to measure the effect size of human readers improving with AI vs without AI assistance was done.

    The validation involved side-by-side comparisons with predicate devices, where medical professionals evaluated whether the new device's functionality (e.g., centerline computation, stenosis measurement, bone removal, image quality) was "adequate" or allowed the user to "determine the amount of stenosis" comparable to the predicates. This is more of a non-inferiority or equivalence assessment against predicate functionality, rather than an explicit measure of human reader performance improvement with the new device (AI assistance, in this context) versus without it.

    The testing of stereoscopic 3D viewing concluded "no specific medical or clinical benefits to using the stereoscopic 3D view" over the "regular" 3D view, indicating no improvement for human readers in that specific aspect.

    6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study

    Yes, standalone performance was implicitly tested, particularly for "measurement algorithms" and "crosshair positioning."

    • Measurement Accuracy: Regression testing assured "the different measurement algorithms still provide the same output as the predicates." Testers "made identical measurements of diameters, areas and volumes and compared those against reference values."
    • Crosshair Positioning: Tests verified "whether viewports link to the same location in every dataset."

    While human testers initiated these measurements and observations, the assessment was of the algorithm's output against a defined reference or predicate, rather than human diagnostic performance with the algorithm's output.
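    The regression check described above (same output as the predicates, within plus-or-minus one scanner resolution of the reference values) amounts to a toleranced comparison. A hypothetical sketch of that pass/fail logic:

```python
def regression_pass(measured, references, resolution_mm):
    # Each measurement must fall within +/- one scanner resolution
    # of its reference value for the regression test to pass.
    return all(abs(m - r) <= resolution_mm
               for m, r in zip(measured, references))

# Diameters in mm against reference values, 0.5 mm scanner resolution
print(regression_pass([4.8, 10.2, 25.1], [5.0, 10.0, 25.0], 0.5))  # True
print(regression_pass([4.2, 10.2], [5.0, 10.0], 0.5))  # False
```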

    7. Type of Ground Truth Used

    • Reference Values / Predicate Comparisons: For measurement accuracy tests (diameters, areas, volumes), the ground truth was established by "reference values" and comparison against "equivalent" measurements from predicate devices (Voxar 3D Enterprise for vessel measurements, Mirage 5.5 for semi-automatic region growing volumes).
    • Expert Consensus/Qualitative Assessment: For many validation objectives (e.g., adequacy of centerline tracing, vessel visualization, bone removal, image quality, usability of volume measurements), the ground truth was essentially a qualitative assessment by medical professionals against established scoring scales and side-by-side comparisons with predicate devices. For stereoscopic 3D viewing, "Concept tests involving 6 medical professionals... were asked to score" and "The tests concluded."
    • Technical Specifications: For crosshair positioning, the ground truth was defined by technical specifications (half a voxel).

    8. Sample Size for the Training Set

    The document does not provide any information about the sample size used for a training set. The testing described focuses on verification and validation of specific functionalities in comparison to predicates or against predefined criteria. This product appears to be a PACS accessory with advanced viewing and manipulation tools, and while it involves "automated" features (like bone removal), the process description suggests a rule-based or algorithmic approach rather than a machine learning model that would typically require a distinct training set. If machine learning was used, the training data information is absent.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or implied for machine learning, there is no information on how its ground truth would have been established. The ground truth described in the document pertains to the validation and verification sets.


    K Number
    K062878
    Date Cleared
    2007-06-27

    (274 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K030781, K022292

    Intended Use

    EZPACS® is intended for use by radiologists (for primary diagnosis) and other medical professionals who need access to radiological images and reports.

    Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 Mpixel resolution and meets other technical specifications reviewed and accepted by FDA.

    Device Description

    EZPACS® is a PACS system that runs on off-the-shelf monitors and PCs running the Microsoft Windows® operating system. It consists of software that displays images and provides functions for image manipulation, enhancement, compression and quantification.

    AI/ML Overview

    The provided document (K062878) does not describe a study proving the device meets specific acceptance criteria in the way a clinical performance study for an AI-powered CADe or CADx device would. This 510(k) pertains to a Picture Archiving Communications System (PACS) software called EZPACS®, which is a medical image management system, not an AI diagnostic or assistive tool.

    Therefore, many of the requested elements (like sample size for test set, number of experts for ground truth, MRMC study effect size, standalone performance, training set details) are not applicable to this type of device and submission.

    The submission focuses on establishing substantial equivalence to predicate devices. This means demonstrating that the new device has the same intended use, features, safety, and effectiveness as already legally marketed devices.

    Here's how to address the questions based on the provided information:


    1. Table of Acceptance Criteria and Reported Device Performance

    For a PACS system like EZPACS®, the "acceptance criteria" are primarily related to its functional equivalence and conformity to standards, rather than diagnostic performance metrics. The "performance" is its ability to perform the stated functions.

    Acceptance Criteria (implied from the substantial equivalence claim) and Reported Device Performance:

    Functional equivalence to predicate PACS systems:
    • Runs on commercial PCs (Windows® OS): Yes
    • Supports commercial monitors: Yes
    • Multi-monitor support: Yes
    • JPEG/wavelet compression: Yes
    • DICOM conformance: Yes
    • Measurement tools (ROI, distance, angle): Yes
    • Viewing tools (window/level, magnify, pan, annotation): Yes
    • Comparison cases functionality: Yes
    • Cine/stack view functionality: Yes
    • Supports teleradiology: Yes
    • 3D viewing functionality: Yes
    Intended use: same as predicate devices
    Safety and effectiveness: same as predicate devices
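    The feature-by-feature equivalence argument can be sketched as a set comparison: any feature present in a predicate but absent from the new device would weaken the parity claim. The feature names here are illustrative shorthand for the entries listed above:

```python
def missing_vs_predicate(new_features, predicate_features):
    # Features the predicate has that the new device lacks; an empty
    # result supports functional parity on the listed features.
    return predicate_features - new_features

predicate = {"dicom", "multi-monitor", "cine", "teleradiology", "3d"}
new_device = {"dicom", "multi-monitor", "cine", "teleradiology", "3d"}
print(missing_vs_predicate(new_device, predicate))  # set()
```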

    Note: The "study" that proves the device meets "acceptance criteria" in this context is the substantiation of substantial equivalence through comparison of features and intended use with the identified predicate devices (IMPAX by Agfa and DirectView by Kodak). No quantitative performance metrics are provided because it's not an AI diagnostic device.


    2. Sample size used for the test set and the data provenance

    • Not Applicable. This submission is for a PACS software system, not a diagnostic AI algorithm that requires a test set of medical images for performance evaluation. The "test" is a feature-by-feature comparison against predicate devices.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not Applicable. There is no "test set" of medical images for diagnostic performance evaluation for this type of device.

    4. Adjudication method for the test set

    • Not Applicable. No test set requiring expert adjudication.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No. This is not an AI diagnostic or assistive device. An MRMC study is not relevant to demonstrating substantial equivalence for a PACS system.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Not Applicable. This device is a PACS system, primarily for image management and display, not a standalone diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not Applicable. For this submission, the "ground truth" is effectively the functionality and regulatory status of the predicate devices. The claim is that EZPACS® performs the same functions as those already approved devices.

    8. The sample size for the training set

    • Not Applicable. As this is a PACS software system, there is no "training set" in the context of machine learning or AI.

    9. How the ground truth for the training set was established

    • Not Applicable. No training set is involved for this type of device.

    Summary of the "Study" and "Proof":

    The "study" in this context is a feature-by-feature comparison and analysis for substantial equivalence against legally marketed predicate devices. The "proof" is the detailed comparison table provided in the submission (see "Substantial Equivalence" section in the document) which demonstrates that EZPACS® shares the same core functionalities, hardware requirements, and intended use as the predicate devices. The conclusion reached by the FDA (as stated in the letter) is that based on this comparison, the device is substantially equivalent, and therefore, its safety and effectiveness are established.


    K Number
    K050751
    Manufacturer
    Date Cleared
    2005-04-21

    (29 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K014113, K022292, K993532

    Intended Use

    The Agfa IMPAX OT3000 Orthopedic workstation is designed to help orthopedic surgeons and specialists access images, plan surgical procedures and monitor patient progress in a digital environment. As an add-on component to the IMPAX client, the OT3000 orthopedic application adds digital planning to the PACS system. These plans are aimed at helping the surgeon plan the actual surgical placement of the prosthetic implant. They can also be shown to patients before the procedure they will undergo, to help them understand the pathology present. The application consists of an IMPAX Diagnostic Workstation and templates: guides intended for selecting or positioning orthopedic implants or guiding the marking of tissue before cutting.

    Device Description

    Concentrating within the specialty of joint replacement, the IMPAX® OT3000 will provide an orthopedic surgeon with the ability to produce pre-surgical plans and distribute those plans as intra-operative guidelines. It will also support the proper workflow necessary to effectively compare pre- and post-operative radiograph studies for a unique understanding of the patient's surgical outcome. Integrating this workflow with the orthopedic surgeon's own workflow, and combining it with the data produced from the patient physical exam, provides a comprehensive data set for the continued prescription of a patient's relevant treatment and therapy.

    The proper choice of prosthesis implant, size and placement is critical to postoperative success and minimizing intra operative complications. Proper pre-surgical planning is key for identifying the correct choices and decisions an orthopedic surgeon makes.
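    Digital templating of the kind described depends on correcting radiographic magnification, typically with a calibration marker of known size placed at the level of the joint. A hedged sketch of that correction (the 25 mm marker is a common convention, not something the document specifies):

```python
def true_size_mm(measured_mm, marker_measured_mm, marker_true_mm=25.0):
    # The marker's apparent size gives the projection magnification;
    # dividing a same-plane measurement by it recovers true size.
    magnification = marker_measured_mm / marker_true_mm
    return measured_mm / magnification

# The marker images at 30 mm (magnification 1.2), so a structure that
# appears 60 mm on the radiograph is really 50 mm.
print(true_size_mm(60.0, 30.0))  # 50.0
```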

    AI/ML Overview

    The provided text does not contain any information about acceptance criteria, study details, or performance metrics for the Agfa IMPAX® OT3000 Orthopedic Workstation.

    The document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed performance study. It states that "Technological and functional characteristics of the Agfa's IMPAX® OT3000 software are identical to those of the predicate device."

    Therefore, I cannot populate the requested table or answer the specific questions about acceptance criteria, study design, sample sizes, ground truth, or MRMC studies.

    Summary of what is missing from the provided text:

    • Acceptance Criteria and Reported Performance: No specific performance metrics or thresholds are mentioned.
    • Study Details: No study is described that evaluates the device's accuracy or effectiveness. The 510(k) submission relies on substantial equivalence to a predicate device (Siemens' EndoMap).
    • Sample Sizes: No information on test sets or training sets.
    • Data Provenance: No details on where any data (if used for testing) originated.
    • Experts and Ground Truth: No mention of experts, how ground truth was established, or adjudication methods.
    • MRMC Study: No information about a comparative effectiveness study with human readers.
    • Standalone Performance: No standalone performance study is described.
    • Type of Ground Truth: Not applicable since no performance study is detailed.
    • Training Set Sample Size and Ground Truth: Not applicable since no training or performance study is detailed.

    K Number
    K040896
    Manufacturer
    Date Cleared
    2004-07-16

    (101 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K022292, K954860, K994210

    Intended Use

    Encompass™ is a picture archiving and communications system intended to be used as a networked cardiovascular information management system. Encompass™ is software comprised of modular software programs that run on standard "off-the-shelf" personal computers and servers running the Windows 2000/XP operating system. Encompass™ is image data storage and display software that accepts DICOM (Digital Imaging and Communications in Medicine) data from laboratories which support DICOM standard image transfer. The system provides the capability to: consolidate images generated by equipment from multiple OEM vendors, view images, enter clinical findings while viewing the associated images, perform digital subtraction, create graphical representations of coronary arteries, perform quantitative measurements on both cath and ultrasound images, perform quantitative analysis on cath images, generate and review patient reports with additional measurement and report-writer capabilities, and provide an accessible digital image archive. Encompass™ is a scalable network system designed to service customers ranging in size from small departments (with 2 or 3 users) to large hospital networks (with tens of users).

    Device Description

    Encompass™ is a picture archiving and communications system intended to be used as a networked cardiovascular information management system. Encompass™ is software comprised of modular software programs that run on standard "off-the-shelf" personal computers and servers running the Windows 2000/XP operating system. Encompass™ is image data storage and display software that accepts DICOM (Digital Imaging and Communications in Medicine) data from laboratories, which support DICOM standard imaging transfer. The system provides the capability to: consolidate images generated by equipment from multiple OEM vendors, view images, enter clinical findings while viewing the associated images, perform digital subtraction, create graphical representations of coronary arteries, perform quantitative measurements on both cath and ultrasound images, perform quantitative analysis on cath images, generate and review patient reports with additional measurement and report-writer capabilities, and provide an accessible digital image archive. Encompass™ is a scalable network system designed to service customers ranging in size from small departments (with 2 or 3 users) to large hospital networks (with tens of users). The original core functionality is detailed below:

    1. Review of x-ray angiography, ultrasound, intravascular ultrasound, computed tomography (CT), nuclear medicine, and magnetic resonance imaging (MRI) images.
    2. Compare images from different studies on one or two monitors, regardless of modality.
    3. Perform report data entry and view the report.
    4. Print, save as mpeg/avi, save as bitmap, and copy to clipboard any image(s).
    5. Perform standard image processing such as brightness, contrast, gamma, sharpen, window/level, invert, pan, zoom, and digital subtraction.
    6. Perform standard stop, start, single frame advance and reverse, previous/next image playback, and various standard clinical presentations.
    7. Control the speed of playback.
    8. Use a supported jog wheel to control playback.
    9. Play/repeat a subset of a loop with user defined begin and endpoints.
    10. Search sources for studies based on demographic information.
    11. Copy any study or subset of study from any supported source to any supported destination.
    12. Write DICOM study to CD/DVD for interchange.
    13. Calibrate the monitor to a SMPTE pattern.
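    Item 5 above lists window/level and digital subtraction among the standard image-processing operations. A minimal sketch of both, using NumPy; the function names are hypothetical illustrations of the general techniques, not Encompass™'s actual implementation:

```python
import numpy as np

def window_level(img, window, level):
    """Map raw pixel values to an 8-bit display range using a
    window/level (width/center) transform."""
    lo = level - window / 2.0
    hi = level + window / 2.0
    out = np.clip((img.astype(np.float64) - lo) / (hi - lo), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

def digital_subtraction(contrast_frame, mask_frame):
    """Subtract a pre-contrast mask frame from a contrast frame so
    that only the opacified vessels remain visible."""
    diff = contrast_frame.astype(np.int32) - mask_frame.astype(np.int32)
    # Re-center around mid-gray so negative differences stay visible.
    return np.clip(diff + 128, 0, 255).astype(np.uint8)

# Toy example: a 12-bit image windowed with width 2000, center 1000.
raw = np.array([[0, 1000], [2000, 4095]], dtype=np.uint16)
display = window_level(raw, window=2000, level=1000)
```

    In practice the window/level values would come from the user's interactive adjustment or the DICOM VOI LUT attributes, and the subtraction mask from a pre-contrast frame of the same run.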
    AI/ML Overview

    The provided text does not contain detailed information about acceptance criteria and a specific study proving that the device meets these criteria in the format requested. The document is a 510(k) summary for the Encompass™ Cardiac Network Image Processing System, which describes the device, its intended use, and substantial equivalence to predicate devices, along with design control activities.

    However, based on the Summary of Design Control Activities section, the document states:

    "All verification and validation activities were performed by the designated individual(s), and the results demonstrated that the predetermined acceptance criteria were met."

    This indicates that some form of testing was done to verify the device met pre-defined criteria, but the specifics of these criteria, the study design, and the reported performance are not detailed for most of the requested points.

    Here's a breakdown of what can be inferred or directly stated from the text, and where information is missing:


    1. A table of acceptance criteria and the reported device performance

    The document broadly mentions "predetermined acceptance criteria were met" but does not provide a table specifying what these criteria were or the quantitative performance metrics achieved by the device. The listed standards (DICOM, 21 CFR 1020.10, SMPTE, etc.) imply compliance, but actual performance against these is not detailed.


    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    This information is not provided in the document. The text mentions "Integration testing (System verification)" and "Final acceptance testing (Validation)" but does not specify the dataset used for these tests, its size, or its provenance.


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the document. The device is for "diagnostic quality image review" and "aid in diagnosis," implying expert involvement, but details on how ground truth was established for testing are absent.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided in the document.


    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    This information is not provided in the document. The device offers "analysis and measurement capabilities" and "graphical representation," but there's no mention of a comparative effectiveness study with human readers or AI assistance effect sizes.


    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The document describes the device as "image data storage and display software" with capabilities for "quantitative measurements" and "quantitative analysis." While it performs these functions, it's considered a system for medical professionals to use. The text does not explicitly detail a standalone algorithm-only performance study without human-in-the-loop interaction. Its design implies it's an assistive tool.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    This information is not provided in the document.


    8. The sample size for the training set

    This information is not provided in the document. The document primarily focuses on a modification to existing software and its compliance with standards and design control. It does not describe a machine learning model's training process.


    9. How the ground truth for the training set was established

    This information is not provided in the document.


    K Number
    K040344
    Manufacturer
    Date Cleared
    2004-05-12

    (90 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K022292, K993532

    Intended Use

    The Agfa IMPAX OT3000 Orthopedic Workstation is designed as an x-ray imaging system software option, which allows the planning of orthopedic surgeries on a workstation. Along with basic diagnostic display station functionality, the software is intended to read in diagnostic images (e.g., digitized x-rays) for use with a database of orthopedic implant geometries and dimensions. The software constructs an image from this data and overlays the constructed implant images onto the diagnostic image to aid surgeons in their planning of orthopedic surgeries.

    The Agfa IMPAX OT3000 Orthopedic workstation is designed to help orthopedic surgeons and specialists access images, plan surgical procedures, educate patients and monitor patient progress in a digital environment.

    As an add-on component to the IMPAX client, the OT3000 orthopedic application provides digital planning to images acquired through the PACS system. These images can be utilized to place digital templates that reflect actual prosthetic implants on patients' images helping the surgeon plan the surgical placement of the implant. These plans can be shown to patients to explain the procedure they will undergo and to help them understand the pathology present.

    The application consists of the following components:

    • Hip Prosthetic Planning
    • Knee Prosthetic Planning
    • Biometry Planning: takes into account patient motion and metrics
    • Coxometry: tracking of known measurement values in pediatrics to determine surgical intervention
    • Osteotomy: determines optimum osteotomy locations
    • Impax Diagnostic Workstation
    Device Description

    Concentrating within the specialty of joint replacement, the IMPAX® OT3000 will provide an orthopedic surgeon with the ability to produce presurgical plans and distribute those plans for intra operative guidelines. It will also support the proper workflow necessary to effectively compare pre and post operative radiograph studies for a unique understanding of the patient's surgical outcome. Integrating this workflow with the orthopedic surgeons existing workflow and combining it with the data produced from the patient physical exam, provides a comprehensive data set for the continued prescription of a patient's relevant treatment and therapy.

    The proper choice of prosthesis implant, size and placement is critical to postoperative success and minimizing intra operative complications. Proper pre-surgical planning is key for identifying the correct choices and decisions an orthopedic surgeon makes.
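    Digital templating of the kind described above depends on scaling implant geometries to the image's true size, since radiographs are magnified relative to the patient's anatomy. A minimal sketch of the calibration arithmetic, assuming a calibration-marker workflow; the function names are hypothetical and not taken from the OT3000 product:

```python
def calibrated_spacing(marker_diameter_mm, marker_diameter_px):
    """Derive effective pixel spacing (mm/pixel) from a calibration
    marker of known physical diameter placed in the radiograph,
    which accounts for radiographic magnification."""
    return marker_diameter_mm / marker_diameter_px

def template_to_pixels(implant_size_mm, pixel_spacing_mm):
    """Convert a physical implant dimension (mm) to on-screen pixels
    so the template overlay matches the patient's anatomy."""
    return implant_size_mm / pixel_spacing_mm

# A 25 mm calibration ball spanning 125 px implies 0.2 mm/pixel;
# a 50 mm femoral-head template then spans 250 px at that scale.
spacing = calibrated_spacing(25.0, 125)
head_px = template_to_pixels(50.0, spacing)
```

    Getting this scale factor right is what lets a surgeon read implant size directly off the plan rather than estimating from a magnified film.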

    AI/ML Overview

    Here's an analysis of the provided text regarding the Agfa IMPAX® OT3000 Orthopedic Workstation, focusing on the acceptance criteria and study information:

    Based on the provided 510(k) summary, there is no detailed information regarding specific acceptance criteria or an explicit study proving the device meets them. The document primarily focuses on establishing substantial equivalence to a predicate device.

    Here's a breakdown of the requested information based on the text:

    1. Table of Acceptance Criteria and Reported Device Performance:

      • Acceptance Criteria: Not explicitly stated in the document. The submission is based on substantial equivalence, implying that the device's performance aligns with that of the predicate device (Siemens' EndoMap).
      • Reported Device Performance: Not explicitly enumerated in performance metrics. The document states that the "Technological and functional characteristics of the Agfa's IMPAX® OT3000 software are identical to those of the predicate devices."
    2. Sample Size Used for the Test Set and Data Provenance:

      • This information is not provided in the document. No specific test set or study data is mentioned.
    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • This information is not provided in the document. No information on ground truth establishment for a test set is present.
    4. Adjudication Method for the Test Set:

      • This information is not provided in the document. No test set or related adjudication method is mentioned.
    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No MRMC study is mentioned or referenced. The document does not discuss human reader performance with or without AI assistance.
    6. Standalone (Algorithm Only) Performance Study:

      • No standalone performance study for the algorithm is described. The focus is on the software's functionality and its substantial equivalence to a predicate device, not on specific performance metrics of the algorithm itself.
    7. Type of Ground Truth Used:

      • This information is not provided. As no specific study or test set is detailed, the type of ground truth used is not mentioned.
    8. Sample Size for the Training Set:

      • This information is not provided. There is no mention of a training set or its size.
    9. How Ground Truth for the Training Set Was Established:

      • This information is not provided. As no training set is mentioned, the method for establishing its ground truth is also absent.

    Summary of what the document does provide:

    • Intended Use: The workstation is designed for orthopedic surgical planning, accessing images, educating patients, and monitoring progress. It allows for placing digital templates of orthopedic implants on patient images to aid in surgical planning.
    • Predicate Device: The Agfa IMPAX® OT3000 is deemed substantially equivalent to the Siemens' EndoMap (K014113).
    • Technological Identity: The document explicitly states, "Technological and functional characteristics of the Agfa's IMPAX® OT3000 software are identical to those of the predicate devices." This is the primary "proof" of its suitability for market, based on the 510(k) pathway for substantial equivalence.

    Conclusion:

    The provided 510(k) summary is typical for a substantial equivalence submission, where detailed performance studies with acceptance criteria, sample sizes, and ground truth establishment are often not included if the device is found to be sufficiently similar to an already cleared predicate device. The "proof" the device meets acceptance criteria essentially relies on its "identical" technological and functional characteristics to a legally marketed predicate. It does not contain the granular study details typically found for novel devices or those undergoing more rigorous performance testing.
