Search Results

Found 5 results

510(k) Data Aggregation

    K Number
    K153653
    Device Name
    DICOM Viewer
    Manufacturer
    Date Cleared
    2016-04-13

    (114 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    DICOM Viewer

    Intended Use

    DICOM Viewer is a software device for display of medical images and other healthcare data. It includes functions for image review, image manipulation, basic measurements and 3D visualization (MPR reconstructions and 3D volume rendering).

    It is not intended for primary image diagnosis or the review of mammographic images.
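    A note on the 3D visualization functions mentioned above: MPR treats a stack of 2D slices as a single 3D array and re-slices it along the other axes. The sketch below is a generic illustration of that idea in Python/NumPy; the synthetic volume and the slice indices are assumptions for illustration, not the cleared device's implementation.

```python
# Generic MPR illustration: orthogonal reconstructions are plain slices of
# a (z, y, x) voxel array. Synthetic data; not the device's actual code.
import numpy as np

volume = np.random.randint(0, 4096, size=(200, 512, 512), dtype=np.uint16)

axial = volume[100, :, :]     # native acquisition plane
coronal = volume[:, 256, :]   # re-sliced along the y axis
sagittal = volume[:, :, 256]  # re-sliced along the x axis

# A crude volume-rendering stand-in: maximum intensity projection along z.
mip = volume.max(axis=0)
print(axial.shape, coronal.shape, sagittal.shape, mip.shape)
```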

    Device Description

    The DICOM Viewer is software for web based viewing of DICOM data.

    AI/ML Overview

    The provided document is a 510(k) summary for a DICOM Viewer. It describes the device's intended use, features, and declares substantial equivalence to predicate devices. However, it does not contain information about specific acceptance criteria or a detailed study proving the device meets those criteria, especially in terms of diagnostic performance metrics.

    The document states that the "DICOM Viewer is a software device for display of medical images and other healthcare data," and explicitly clarifies: "It is not intended for primary image diagnosis or the review of mammographic images." This means the device is for general viewing and not for a specific diagnostic task that would require rigorous performance metrics like sensitivity, specificity, or AUC, as these would be associated with a "primary image diagnosis" function.

    Therefore, many of the requested details, such as specific performance metrics, sample sizes for test and training sets, ground truth establishment, expert qualifications, and MRMC studies, would not be applicable or expected for a device with this stated intended use.

    Here's an attempt to answer the questions based only on the provided text, highlighting where information is absent or not relevant given the device's purpose:


    1. A table of acceptance criteria and the reported device performance

    Based on the provided text, the device's intended use is not for primary image diagnosis. As such, the acceptance criteria are focused on functionality, safety, and substantial equivalence to predicate devices, rather than diagnostic performance metrics (e.g., sensitivity, specificity, accuracy) that would be relevant for a diagnostic AI device.

    | Acceptance Criterion (Inferred from Text) | Reported Device Performance (Inferred from Text) |
    | --- | --- |
    | Display medical images and other healthcare data | DICOM Viewer is software for web-based viewing of DICOM data. |
    | Functions for image review, manipulation, basic measurements, 3D visualization (MPR, VRT) | Includes these functions. |
    | Not intended for primary image diagnosis or mammography review | Explicitly stated (this is a limitation, not a performance metric). |
    | Safety and effectiveness similar to predicate devices | Verified and validated activities ensure design specifications met and no new safety/effectiveness issues. |
    | Substantial equivalence to predicate devices (K093117, K130624) | Found to have similar functionality, intended use, technological characteristics, and typical users. |
    | Software risks analyzed, no non-acceptable risks identified | Stated directly. |
    | User interface is substantially equivalent to previous version (2.2) | Formative usability tests performed; prototype substantially equivalent to final device with minimal changes. |
    | Meets design specifications | Verification of the System DICOM Viewer thoroughly carried out. |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    Not applicable. This device is not an AI/ML algorithm performing a diagnostic task that typically involves a defined "test set" of patient data for performance evaluation in the way an AI diagnostic tool would. The validation focused on functional verification and safety, not on diagnostic accuracy on a dataset. The document mentions reviews of MAUDE, BfArM, and Brainlab internal complaint databases for incidents of similar products, but this is not a test set for performance.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable. As the device is for viewing and not primary diagnosis, there is no "ground truth" establishment for diagnostic accuracy purposes on a test set mentioned in the document.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. No test set for diagnostic performance requiring adjudication is mentioned.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    No. This document does not suggest an MRMC study was performed, nor would it be expected given the device's stated intended use (not for primary diagnosis). The device displays images but does not actively assist in interpretation beyond basic viewing tools.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This question is largely irrelevant for a DICOM Viewer whose primary function is image display. The device is a "standalone" software in the sense that it operates independently to display images, but it doesn't perform diagnostic interpretations that would be measured for standalone performance as an AI algorithm would.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable. No ground truth for diagnostic accuracy is mentioned in context of performance evaluation.

    8. The sample size for the training set

    Not applicable. This device is a software viewer, not an AI/ML algorithm that requires a "training set" in the conventional sense for learning and inference.

    9. How the ground truth for the training set was established

    Not applicable. (See #8)


    K Number
    K151957
    Device Name
    BOX DICOM Viewer
    Manufacturer
    Date Cleared
    2015-09-01

    (47 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    BOX DICOM Viewer

    Intended Use

    The BOX DICOM Viewer™ is a software teleradiology system used to receive DICOM images, scheduling information, and textual reports, and to organize and store them in an internal format available across a network via web and customized user interfaces. The BOX DICOM Viewer™ is used by hospitals, imaging centers, and radiologist reading practices.

    Contraindications: The BOX DICOM Viewer™ is not intended for the acquisition of mammographic image data and is meant to be used only by qualified medical personnel who are qualified to create and diagnose radiological image data.

    Device Description

    The BOX DICOM Viewer™ is a software system used to view stored DICOM-compliant studies. The BOX DICOM Viewer™ is intended for professional use only, as a viewing tool for medical image studies.

    The BOX DICOM VIEWER software allows for acquisition of images from DICOM devices and lets users view those images from their personal computers. A third-party DICOM device sends to the Box DICOM Proxy listener; the files are then sent to the Upload Proxy, then to the DICOM processor, where the DICOM header data is extracted. Finally, the BOX DICOM VIEWER communicates with a database component to store all the information required for patients, users, studies, and configuration settings.
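    The header-extraction step in that pipeline is what most DICOM indexers do: read only the file's metadata and record a few identifying fields. A minimal sketch using the pydicom library follows; the field choices and the SQLite schema are assumptions for illustration, not BOX's actual processor or database design.

```python
# Sketch of a "DICOM processor" stage: read only the header and index a few
# identifying fields into a database. Schema and field set are hypothetical.
import sqlite3
import pydicom

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE studies (patient_id TEXT, study_uid TEXT, "
    "series_uid TEXT, modality TEXT)"
)

def index_dicom_file(path: str) -> None:
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only, no pixels
    conn.execute(
        "INSERT INTO studies VALUES (?, ?, ?, ?)",
        (ds.get("PatientID"), ds.get("StudyInstanceUID"),
         ds.get("SeriesInstanceUID"), ds.get("Modality")),
    )
    conn.commit()
```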

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the BOX DICOM Viewer, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria for device performance. Instead, it relies on a comparative approach, demonstrating substantial equivalence to a predicate device (CLARISO PACS, K132799). The performance is reported by stating that the subject device (BOX DICOM Viewer) matches or improves upon the predicate's functionalities without introducing new safety risks.

    Each comparison entry below gives the functionality, the predicate (CLARISO PACS) performance, the subject device (BOX DICOM Viewer) performance, and the stated impact on safety and/or efficacy (if different):

    Core Functionalities (1-37): Web Browser, Intended Use, Intended User, Network, Monitor, User Interaction/Input, Import/Export Images, Acquisition Devices, Image Organization, Image Search, Image Storage, Database Software, Greyscale Image Rendering, RGB Image Rendering, Localizer Lines, Localizer Point, Orientation Markers, Distance Markers, Study Data Overlays, Stack Navigation, Window Level, Zoom, Panning, Horizontal/Vertical Flip, Clockwise/Counterclockwise Rotate, Invert Image, Text Annotation, Area Measurement Annotation, Angle Measurement Annotation, Cobb Angle Measurement Annotation, Image Annotation, Security, DICOM 3.0 Conformance, Worklist, Thumbnail Viewing, Login, Audit
    • Predicate: All "Yes" or specific descriptions (e.g., Google Chrome for browsers, 10/100/100 Ethernet, above 19-inch monitor, MySQL database).
    • Subject device: "Same as predicate" for all of these functionalities. The document states "The full features and functions of Clariso have been imported to the BOX DICOM Viewer".
    • Impact: "No differences between predicate and subject device" or "No difference"; "No impact on safety or efficacy" for all of these.

    User Interface (38): Text styles, colors, fonts, and icons
    • Predicate: CLARISO PACS styles.
    • Subject device: BOX styles (new styles, colors, fonts, and icons were added).
    • Impact: "Yes, there are differences; however, these changes do not affect device functions and do not raise new potential safety risks. Therefore, it is our determination that there is 'No impact on safety or efficacy'."

    WebGL Rendering Optimizations (39): Hardware acceleration
    • Predicate: No hardware acceleration.
    • Subject device: Yes, hardware acceleration is used; "WebGL rendering optimizations" are added.
    • Impact: "Yes, there are differences between the predicate and the subject device for WebGL, since the subject device uses hardware acceleration and the predicate does not. However, this difference does not affect the device IFU and does not raise new potential safety risks. The device has been tested and has passed predetermined criteria; therefore, it is our determination that there is 'No impact on safety or efficacy'."

    Support for high-resolution Retina displays (40)
    • Predicate: Pixelated display on high-DPI displays (i.e., "Retina Displays").
    • Subject device: Full pixel density on all displays; "Added support for high resolution Retina displays."
    • Impact: "Yes, there is a difference. The subject device will display the full pixel density of the saved image, where the predicate device only did so if it was set to 'Retina Display' mode. This difference actually aids the viewer to always see the image as captured by the modality. The difference does not affect the device IFU and does not raise new potential safety risks. Therefore, it is our determination that there is 'No impact on safety or efficacy'."

    Keyboard shortcuts for all tools and all annotation types (41)
    • Predicate: Limited keyboard shortcut support.
    • Subject device: Keyboard shortcuts allowed for tools and all annotation types; "Added keyboard shortcuts for all tools and all annotation types."
    • Impact: "Yes, there is a difference. In the predicate, it was not possible for the User to use the inherent functions of the operating system to create keyboard shortcuts. In the subject device, keyboard shortcuts are allowed for tools and all annotation types, which may help the User view images."
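    Window Level, listed among the core functionalities above, is standard viewer arithmetic: a linear remap of stored pixel values onto the display range. As a hedged illustration, simplified from the DICOM linear VOI LUT function and unrelated to either vendor's code:

```python
# Simplified linear window/level transform; data and window values are synthetic.
import numpy as np

def window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map stored values inside [center - width/2, center + width/2] onto
    the 8-bit display range, clipping everything outside the window."""
    lo = center - width / 2.0
    scaled = (pixels.astype(np.float32) - lo) / width
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

# Synthetic CT slice shown with a typical soft-tissue window (C=40, W=400).
ct = np.random.randint(-1000, 3000, size=(512, 512)).astype(np.int16)
display = window_level(ct, center=40.0, width=400.0)
```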

    2. Sample size used for the test set and the data provenance

    The document does not specify a distinct "test set" in the context of image data or patient cases for performance evaluation. The testing described is nonclinical testing related to the system's functionalities.

    • Sample Size for Test Set: Not applicable or not specified in terms of clinical cases/images. The testing appears to be functional verification and validation of the software itself.
    • Data Provenance: Not specified, as clinical data is not mentioned as part of the validation testing.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided in the document. As the validation is focused on technical functionalities and equivalence to a predicate, there is no mention of establishing ground truth by medical experts for image interpretation.

    4. Adjudication method for the test set

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The device is a DICOM viewer, not an AI-powered diagnostic tool, and the focus of the submission is on substantial equivalence to an existing viewer, not on improving human reader performance.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, in essence. The testing described as "Nonclinical Testing" and the "Verification & Validation Test Plan" appears to be a standalone evaluation of the software's functionalities against predetermined criteria. The document states: "The BOX DICOM Viewer™ system and configuration has been assessed and tested at BOX Inc. and has passed all pre-determined testing criteria." and "Validation testing indicated, that as required by the risk analysis, designated individuals performed all verification and validation activities and that the results demonstrated that the predetermined acceptance criteria were met."

    However, it's crucial to note that this "standalone" performance refers to the functional capabilities of the viewer (e.g., rendering correctly, performing measurements, handling DICOM data), not a diagnostic algorithm's accuracy in identifying medical conditions. The device is explicitly described as a "viewing tool" for medical image studies and emphasizes that "A physician, providing ample opportunity for competent human intervention interprets images and information being displayed and printed."

    7. The type of ground truth used

    The concept of "ground truth" in the clinical diagnostic sense (e.g., pathology, outcomes data) is not applicable to the nonclinical testing described for this device. The ground truth for this device's validation is based on:

    • Functional Specifications: The software's ability to perform its stated functions (e.g., display images, perform measurements, store data) as defined in its design.
    • Predicate Device Equivalence: The functions of the BOX DICOM Viewer are compared directly against the known and accepted functionalities of the CLARISO PACS predicate device.

    8. The sample size for the training set

    This information is not provided. The BOX DICOM Viewer is described as a software system that imports features from a previously cleared device and adds new functionalities. There is no mention of a "training set" in the context of machine learning or AI algorithm development, as the viewer itself is not presented as an AI diagnostic tool.

    9. How the ground truth for the training set was established

    This information is not provided, as there is no mention of a training set for an AI algorithm.


    K Number
    K100236
    Date Cleared
    2010-03-29

    (62 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    STAIR SYSTEMS PACS & DICOM VIEWER

    Intended Use

    The Stair Systems Constellation Suite Software system is a picture archiving and communications system (PACS) intended to be used as a networked Digital Imaging and Communications in Medicine (DICOM) and non-DICOM information and data management system. The STAIR System PACS & DICOM Viewer Software is comprised of modular software programs that run on standard "off-the-shelf" personal computers, business computers, and servers running standard operating systems. The STAIR System PACS & DICOM Viewer Software system is image and data storage and display software that accepts DICOM data from any OEM modality which supports DICOM standard imaging data. The system provides the capability to organize images generated by OEM vendor equipment, perform digital manipulation, create graphical representations of anatomical areas, and perform quantitative measurements.

    The STAIR Systems Constellation Suite Software system should not be used for Diagnostic review of full-field digital mammograms.
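    On the quantitative-measurement capability in the intended use above: an in-plane DICOM distance measurement reduces to scaling pixel offsets by the image's PixelSpacing attribute (mm per pixel, given as row and column spacing). A generic sketch; the file path and the click coordinates are hypothetical.

```python
# Distance measurement sketch: scale a pixel-space offset into millimetres.
# File path and the two "user-clicked" points are made up for illustration.
import math
import pydicom

ds = pydicom.dcmread("example.dcm", stop_before_pixels=True)
row_mm, col_mm = (float(v) for v in ds.PixelSpacing)  # mm per pixel (row, col)

(r1, c1), (r2, c2) = (100, 120), (180, 260)  # two user-selected points
distance_mm = math.hypot((r2 - r1) * row_mm, (c2 - c1) * col_mm)
print(f"{distance_mm:.1f} mm")
```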

    Device Description

    STAIR Constellation Suite is a collection of applications coded in the Microsoft C# application language. A color scheme was defined as a way to identify highlighted or important information within the user interface and as a "not so grey" environment that could stand up to a user who was likely to work 10-12 hours a day in front of it. The layout and design of the layers of the UI evolved from the "old design" of having 3 separate computers, each with a single-monitor copy of a single element of the system, into a modern, multi-monitor native design which incorporates large performance gains through the use of Microsoft's DirectX technologies.

    The core of the Suite is the database, an adaptive star relational design for Microsoft SQL Server 2008 Enterprise. The data server provides a dedicated, central place to provide disaster recovery servicing and the large multi-terabyte storage required for PACS. Attaching to this are Northstar PACS clients, Cosmos Enterprise Management clients, Apex servers, and Cascade servers, all of which are STAIR-produced software products.
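    The patient/study/series/image hierarchy a PACS database tracks is standardized by DICOM, even though STAIR's actual SQL Server schema is not described here. A minimal sketch of that hierarchy follows; SQLite stands in for SQL Server purely for illustration, and every table and column name below is an assumption.

```python
# Hypothetical illustration of the DICOM information hierarchy behind a PACS
# database; not STAIR's schema, and SQLite replaces SQL Server 2008.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients (patient_id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE studies  (study_uid  TEXT PRIMARY KEY,
                       patient_id TEXT REFERENCES patients, study_date TEXT);
CREATE TABLE series   (series_uid TEXT PRIMARY KEY,
                       study_uid  TEXT REFERENCES studies, modality TEXT);
CREATE TABLE images   (sop_uid    TEXT PRIMARY KEY,
                       series_uid TEXT REFERENCES series, file_path TEXT);
""")
```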

    The Northstar PACS client is intended to be a desktop replacement product, with the interface dominating the screen space on a workstation computer. This design decision was made to accommodate non-technical doctors who, we found, typically prefer to have a simplified and unobtrusive environment to work in. Color-based exam status listings were evolutionary, and grew from the initial 'field of green' into the more advanced dynamic tree view seen today in the client. It is a multi-monitor capable client, currently configurable in a 1, 2, or 3 monitor footprint.

    Cosmos represents the nerve center of the STAIR Constellation Suite, providing services for securing the STAIR network, maintaining paperless workflow, and other such critical day-to-day system maintenance tasks. Typical users will include hospital or practice administrators and functional personnel who may need to make modifications to STAIR database entries. It is an OLTP client, and requires port-level access to the main STAIR database central server installation to function (we use DSN-less connections to the database, usually requiring port 1433 to be excepted in the workstation's firewall rule set). The client performs many tasks, some of which are not relevant to every customer, though we encourage adoption of STAIR electronic processes by our clients in order to help them streamline their office efficiency. RIS integration is partially available (per vendor; STAIR offers no supported HL7 interface at this time), and can be incorporated in most cases to allow a RIS system to perform synchronization and workflow tasks in harmony with STAIR-kept data records.

    The Apex DICOM Storage Server provides the primary server capabilities of the STAIR system. It was built to be automatable such that it typically runs unattended, but a user interface is used to both help monitor and control the various DICOM storage processes necessary to import each study from an external source (modality, another PACS, etc.). The Apex server incorporates a DICOM SCP service grouping that allows for DICOM ping response, client protocol negotiation, transfer syntax negotiation, and several other system-level DICOM negotiations. As each case is received, it is stored via a service host to the local hard drive (into the STAIRIMAGE hierarchy), it is queued for storage at the top, and it is stored as data to the main database in one of 8 available simultaneous-thread 'putaway' processes (shown as 'idle').
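    The SCP behavior described here (responding to a DICOM "ping", i.e. C-ECHO, negotiating associations and transfer syntaxes, and storing incoming instances) is what any generic DICOM storage server does. A hedged sketch using the pynetdicom library follows; the AE title, port, and file-naming convention are assumptions, and this is not Apex itself.

```python
# Generic DICOM storage SCP sketch with pynetdicom; all parameters are made up.
from pynetdicom import AE, evt, AllStoragePresentationContexts

def handle_store(event):
    """Persist each received instance, keyed by its SOP Instance UID."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(ds.SOPInstanceUID + ".dcm", write_like_original=False)
    return 0x0000  # Success status

ae = AE(ae_title="STORESCP")
ae.supported_contexts = AllStoragePresentationContexts
ae.add_supported_context("1.2.840.10008.1.1")  # Verification ("DICOM ping")
ae.start_server(("0.0.0.0", 11112),
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```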

    The STAIR Cascade is a DICOM SCU and auxiliary compression server product originally designed to allow the send and receive functions of DICOM to be split from the STAIR Apex product and balanced onto a separate physical server. This was to allow for load balancing within the STAIR Enterprise, and the product has evolved now into a secondary server role, handling transmission queues for all enterprise clients. To allow DICOM transmissions to be seamless, we use a database table, XMISSION QUES, and each client transmission request to the database is handled as a threaded process (8 simultaneous). Each process begins by downloading the imageset for a requested case from the STAIR database to a local cache, where it is then added to a queue within the program for processing. As each queue slot opens, another case is promoted until the queue empties.
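    The queue mechanics described above (a transmission queue drained by 8 simultaneous threads, with the next case promoted as each slot opens) map onto a standard worker-pool pattern. A minimal sketch in Python; the send function is a stub, not STAIR's transmitter.

```python
# Worker-pool sketch of an 8-slot transmission queue; send_study is a stub.
import queue
import threading

send_queue: queue.Queue = queue.Queue()

def send_study(study_uid: str) -> None:
    print(f"sending {study_uid}")  # stand-in for a real DICOM send (C-STORE)

def worker() -> None:
    while True:
        study_uid = send_queue.get()
        try:
            send_study(study_uid)
        finally:
            send_queue.task_done()  # frees the slot for the next case

for _ in range(8):  # eight simultaneous transmission slots
    threading.Thread(target=worker, daemon=True).start()

for uid in ("1.2.3.1", "1.2.3.2", "1.2.3.3"):
    send_queue.put(uid)
send_queue.join()  # block until the queue empties
```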

    AI/ML Overview

    Here's an analysis of the provided text regarding the Stair Systems Constellation Suite, focusing on the acceptance criteria and study information:

    Based on the provided 510(k) Summary, there is no specific information detailing acceptance criteria for device performance or a study proving that the device meets such criteria.

    The document primarily focuses on establishing "substantial equivalence" of the Stair Systems Constellation Suite to predicate devices, as required for a 510(k) premarket notification. This process typically relies on demonstrating that the new device has the same intended use, similar technological characteristics, and raises no new issues of safety or effectiveness compared to legally marketed devices.

    Let's address each point of your request based on the available text:

    1. A table of acceptance criteria and the reported device performance

      • Not available in the provided text. The document does not define specific performance metrics or acceptance criteria for the device (e.g., image display accuracy, processing speed, measurement precision, etc.).
    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

      • Not available. The document does not describe any specific test set of data used for performance evaluation, nor does it mention data provenance. The "verification and validation testing" mentioned in the Conclusion is a general statement, without details on the scope, methodology, or data used.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

      • Not available. Since no specific test set or performance evaluation study is described, there's no mention of experts or ground truth establishment.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

      • Not available. No adjudication method is described.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

      • No. There is no mention of a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. This device is a PACS/DICOM viewer software, not an AI-assisted diagnostic tool.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

      • Not applicable. This device is a PACS system designed for human interaction and image management/display. It's not an AI algorithm performing standalone diagnostic tasks. The closest equivalent would be its performance in managing and displaying images according to DICOM standards, but specific standalone performance metrics are not detailed.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

      • Not available. As no specific performance study is detailed, no ground truth type is mentioned. The "Conclusions" state: "A comparison of the labelling, substantial equivalence table, and verification and validation testing has established that the device meets its intended use and design specifications." This indicates that the validation focused on compliance with design specifications and intended use, likely through functional testing and adherence to standards (like DICOM), rather than clinical "ground truth" performance in a diagnostic sense.
    8. The sample size for the training set

      • Not applicable. The device described is a PACS and DICOM viewer software, which is an infrastructure and display system, not a machine learning or AI model that requires a training set.
    9. How the ground truth for the training set was established

      • Not applicable. As it's not an AI/ML device, there's no training set or ground truth in that context.

    K Number
    K083910
    Manufacturer
    Date Cleared
    2009-04-15

    (106 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name:

    VIDISTAR PACS & DICOM VIEWER SOFTWARE SERVER SOFTWARE SYSTEM, HEART VIEW, STANDALONE VIEWER

    Intended Use

    The VidiStar PACS & DICOM Viewer Software system is a picture archiving and communications system (PACS) intended to be used as a networked Digital Imaging and Communications in Medicine (DICOM) and non-DICOM information and data management system. The VidiStar PACS & DICOM Viewer Software is comprised of modular software programs that run on standard "off-the-shelf" personal computers, business computers, and servers running standard operating systems. The VidiStar PACS & DICOM Viewer Software system is image, data storage and display software that accepts DICOM data from laboratories which support DICOM standard imaging data and structured reporting transfer(s). The system provides the capability to organize images generated by OEM vendor equipment, perform digital manipulation, create graphical representations of anatomical areas, perform quantitative measurements, and create DICOM structured reports, all over the Internet.

    All quantitative data ranges are derived from the clinical experience of laboratories and are included in observation libraries for VidiStar users. VidiStar strongly recommends that users review these ranges with their individual diagnostic needs in mind prior to using the VidiStar PACS & DICOM Viewer Software system for clinical reporting. The VidiStar PACS & DICOM Viewer Software system should not be used for reviewing full-field digital mammograms.

    Device Description

    The VidiStar PACS & DICOM Viewer Software System is a picture archiving and communications system software used to process, display, transfer, enable reports, communicate, store and archive digital medical images using Transmission Control Protocol/Internet Protocol (TCP/IP). It supports DICOM structured reports for creating, rendering, storage and archiving.
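    Since the description highlights DICOM structured report (SR) support, here is a hedged sketch of reading an SR content tree with pydicom. The file name is hypothetical, and rendering a report in a real viewer is considerably more involved than this traversal.

```python
# Walk a DICOM SR content tree and print each content item.
# The file name is made up for illustration.
import pydicom

def walk(items, depth=0):
    for item in items:
        name = (item.ConceptNameCodeSequence[0].CodeMeaning
                if "ConceptNameCodeSequence" in item else "")
        text = getattr(item, "TextValue", "")
        print("  " * depth + f"{item.ValueType}: {name} {text}".rstrip())
        if "ContentSequence" in item:  # recurse into nested content items
            walk(item.ContentSequence, depth + 1)

ds = pydicom.dcmread("report_sr.dcm")
walk(ds.ContentSequence)
```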

    AI/ML Overview

    The provided text describes the VidiStar PACS & DICOM Viewer Software System and its substantial equivalence to other PACS devices on the market. However, it does not contain information about specific acceptance criteria, a detailed study proving the device meets those criteria, or the methodology (sample sizes, expert qualifications, adjudication methods, MRMC studies, standalone performance, or ground truth establishment) typically associated with such studies for AI/CAD devices.

    The document is a 510(k) summary focused on demonstrating "substantial equivalence" to predicate devices, which is a regulatory pathway for medical devices. This pathway often relies on comparing features and performance to existing, legally marketed devices rather than presenting novel clinical performance studies with acceptance criteria in the manner requested.

    Therefore, most of the requested information cannot be extracted from the provided text.

    Here's what can be inferred or explicitly stated from the document:

    1. A table of acceptance criteria and the reported device performance

    The document does not specify formal "acceptance criteria" for clinical performance. Instead, it demonstrates substantial equivalence by comparing features to predicate devices. The "performance" is implied by matching or exceeding the capabilities of the predicate devices.

    | Feature | Acceptance Criteria (Implied by Predicate) | Reported VidiStar PACS & DICOM Viewer Software Performance |
    | --- | --- | --- |
    | Operating System | Windows NT/2000/2003/XP | Linux and Windows 2000/XP |
    | Image Source | DICOM | DICOM |
    | Display Rates | Over 30 fps | Over 30 fps |
    | Multiple Windows | Yes | Yes |
    | Image Export | bmp, jpg, mpg, avi | bmp, jpg, png, avi |
    | Network Access | Yes | Yes |
    | Analysis | Yes | Yes |
    | Reporting | Yes | Yes |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    Not mentioned. The 510(k) summary focuses on design control activities and comparison to predicates, not a specific clinical performance test set.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not mentioned. Ground truth establishment for a specific test set is not detailed as there is no described clinical performance study of this nature.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not mentioned.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC study is mentioned. The device is a PACS and DICOM viewer, not an AI/CAD algorithm intended to assist human readers in a diagnostic capacity that would be evaluated by such a study in this document.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Not applicable. The device is a PACS system and viewer, not a standalone algorithm with diagnostic performance. Its function is to process, display, store, and manage images.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Not applicable for a clinical performance study as none is described for specific diagnostic tasks. The "ground truth" for the device's functionality would be adherence to DICOM standards and correct display/storage of images, which would be verified through functional testing (ALPHA, BETA testing), not clinical ground truth as defined for diagnostic AI.

    8. The sample size for the training set

    Not mentioned. A training set is typically associated with machine learning or AI algorithms, which is not the primary focus or nature of this PACS software as described for regulatory submission.

    9. How the ground truth for the training set was established

    Not mentioned, as no training set is described.


    Summary of available information:

    The document describes the VidiStar PACS & DICOM Viewer Software System as a networked PACS intended for processing, displaying, storing, and managing DICOM and non-DICOM medical images and data. It outlines design control activities like validation planning and ALPHA/BETA testing. The core of its regulatory submission relies on demonstrating substantial equivalence to existing PACS products by comparing features such as operating system, image source, display rates, multiple window support, image export formats, network access, analysis capabilities, and reporting features. No specific clinical performance study with acceptance criteria, sample sizes, expert ground truth, or AI-specific evaluations (like MRMC or standalone performance) is detailed in this 510(k) summary.


    K Number
    K971181
    Manufacturer
    Date Cleared
    1997-07-29

    (120 days)

    Product Code
    Regulation Number
    892.1600
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    THE MDVIEW-MEDCON'S DICOM VIEWER

    Intended Use

    The MDVIEW is a software device intended to retrieve, display, print and save cardiac catheterization laboratory image studies from CD-R media, and to view these images on a PC computer in real-time mode.

    Device Description

    The MDVIEW is a software device intended to retrieve, display, print and save cardiac catheterization laboratory image studies from CD-R media, and to view these images on a PC computer in real time mode. Digital imaging from the cardiac catheterization laboratory is recorded on the CD-R in conformance with the DICOM standard. These loss free images provide physicians with a valuable tool for diagnostic review and analysis.
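    DICOM media such as the CD-Rs described here carry a DICOMDIR index that a viewer reads before loading any images. A minimal sketch with pydicom, assuming a disc mounted at a hypothetical path:

```python
# List the studies recorded on a DICOM CD-R by reading its DICOMDIR index.
# The mount point is hypothetical.
import pydicom

dicomdir = pydicom.dcmread("/media/cdrom/DICOMDIR")
for record in dicomdir.DirectoryRecordSequence:
    if record.DirectoryRecordType == "STUDY":
        print(record.get("StudyDate"), record.get("StudyDescription"))
```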

    AI/ML Overview

    The provided document is a 510(k) premarket notification for a medical image communication and storage device (MDVIEW-MEDCON's DICOM Viewer). It does not contain typical acceptance criteria or a detailed study proving device performance in the way modern AI/ML medical devices would.

    This document is from 1997, a time before the widespread use of AI/ML in medical devices, and it concerns a Picture Archiving and Communication System (PACS) device. For such devices, "performance" relates to their ability to accurately display, store, and retrieve medical images, ensuring image integrity and compatibility with standards like DICOM. The review process for such devices would focus on demonstrating technical functionality and equivalence to existing devices rather than a detailed clinical performance study with statistical endpoints.

    Therefore, many of the requested sections about acceptance criteria, study details, human reader performance, and ground truth establishment, as they apply to clinical performance of an AI/ML algorithm, are not applicable or available in this document.

    However, I can extract the relevant information that is present:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria | Reported Device Performance |
    | --- | --- |
    | Functional Equivalence to Predicate Device: The device must demonstrate substantial equivalence in its intended use for retrieving, displaying, printing, and saving cardiac catheterization laboratory image studies. | The MDVIEW is "substantially equivalent to The CRS 2000 subsystem of the Kodak Science Digital Image System." The differences between MDVIEW and the predicate device "raise no new issues of safety or effectiveness." |
    | Image Integrity: The device must handle "loss free images" from cardiac catheterization laboratories. | The device is designed to handle "loss free images" recorded on CD-R in conformance with the DICOM standard, providing a "valuable tool for diagnostic review and analysis." |
    | Standard Conformance: Compatibility with established medical imaging standards. | Digital imaging is recorded on CD-R "in conformance with the DICOM standard." |
    | Platform Compatibility & Real-time Viewing: Ability to view images on a PC computer in real time. | The device is intended "to view these images on a PC computer in real time mode." |

    2. Sample size used for the test set and the data provenance

    • Not Applicable / Not Provided. This document does not describe a clinical performance study with a test set of patient data to evaluate diagnostic accuracy. The assessment focuses on technical and functional equivalence to a predicate device.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not Applicable / Not Provided. No ground truth establishment for a test set is mentioned, as there was no clinical performance study in the context of an AI/ML diagnostic aid.

    4. Adjudication method for the test set

    • Not Applicable / Not Provided. No test set or adjudication method is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • No. This device is a PACS viewer, not an AI diagnostic aid. Therefore, no MRMC study or assessment of human reader improvement with AI assistance was performed or is relevant to this submission.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Not Applicable / Not Provided. This is a PACS viewer, not an algorithm. Standalone performance as understood for AI/ML algorithms is not relevant to this device's regulatory review.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not Applicable / Not Provided. The concept of "ground truth" for a diagnostic outcome is not relevant to a PACS device's functional equivalence review. The "truth" in this context refers to the accurate display and storage of the original, unadulterated medical images.

    8. The sample size for the training set

    • Not Applicable / Not Provided. There is no mention of a training set as this is not an AI/ML device.

    9. How the ground truth for the training set was established

    • Not Applicable / Not Provided. No training set or ground truth establishment is described.