
510(k) Data Aggregation

    K Number: K222781
    Device Name: Augmento
    Date Cleared: 2023-04-11 (208 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K203744, K162011

    Intended Use

    Augmento is a web-based PACS and radiology workflow management solution. It receives digital images and data from various DICOM-compliant sources (e.g., CT scanners, ultrasound systems, RF units, PET units, computed & digital radiographic devices, secondary capture devices, imaging gateways, and other imaging sources). Images and data can be stored, communicated, processed, and displayed within the system and/or across computer networks at distributed locations.

    Only preprocessed DICOM "for presentation" images can be interpreted for primary image diagnosis in mammography. Lossy compressed images and digitized film screens of mammographic images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using a monitor that meets technical specifications identified by the FDA.

    This system is meant to be used by trained and qualified medical professionals, e.g., physicians, radiologists, nurses, and medical technicians.

    Device Description

    Augmento is a web-based PACS and radiology workflow management solution. It is used to receive DICOM images from multiple systems, organize and store them into a centrally managed worklist, and distribute the information across a web-based network. It is used by hospitals, imaging centers, radiologists, radiology professional services providers, and any user who requires and is granted access to the patient image, information, and reports. It is intended to be used as a platform for the diagnosis and analysis of radiology images by trained and qualified medical personnel such as radiologists, physicians, nurses, and medical technicians.

    It receives digital images and data from various DICOM-compliant sources (e.g., CT scanners, ultrasound systems, RF units, computed & digital radiographic devices, secondary capture devices, imaging gateways, and other imaging sources). It provides MPR/MIP post-processing components that allow enhanced visualization for radiologists and assist them in the diagnostic analysis and quantification of Computed Tomography (CT) and Magnetic Resonance (MR) images. When images are reviewed and used for diagnosis, it is the responsibility of the medical professional to determine whether the images are suitable for clinical application.
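To illustrate the kind of post-processing mentioned above, here is a minimal, hypothetical sketch of a Maximum Intensity Projection (MIP): for each in-plane position, the brightest voxel along the projection axis is kept. This is an illustration of the general technique, not Augmento's implementation; the toy volume and function name are this sketch's own.

```python
# Hypothetical MIP sketch: project a 3D volume (list of 2D slices) along the
# depth axis by taking, per (row, col), the maximum voxel value across slices.

def mip_axial(volume):
    """Maximum Intensity Projection of `volume` along the slice (depth) axis."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [
        [max(volume[z][r][c] for z in range(depth)) for c in range(cols)]
        for r in range(rows)
    ]

# Tiny 2-slice, 2x2 volume: the projection keeps the brighter voxel per position.
volume = [
    [[1, 7], [3, 2]],
    [[5, 4], [0, 9]],
]
print(mip_axial(volume))  # [[5, 7], [3, 9]]
```

Real viewers operate on full DICOM pixel volumes and support oblique projection planes, but the per-position maximum shown here is the core of the technique.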

    It provides optional integration with FDA-cleared 3rd-party AI models. The solution only supports visualization of the outputs of 3rd-party AI models "as-is". The safety and effectiveness of a 3rd-party model is covered under the original manufacturer's regulatory clearance. Augmento merely receives and displays the model's output, and the original image is always accessible. It is the responsibility of qualified medical practitioners to review the AI output, confirm the finding, and perform the diagnosis.
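The integration testing described later confirms that the source DICOM file and the AI output are "unaltered during transmission". A common way to verify that property is to compare cryptographic digests before and after transfer; the sketch below shows that idea in general form. It is a hedged illustration, not a description of Augmento's actual mechanism, and the byte payload is a stand-in.

```python
# Hedged sketch: verify that a payload (e.g., a DICOM file or an AI model's
# output) survives transmission bit-for-bit by comparing SHA-256 digests.
# Illustrative only; not the device's documented integrity mechanism.
import hashlib

def sha256_digest(payload: bytes) -> str:
    """Return the SHA-256 hex digest of a byte payload."""
    return hashlib.sha256(payload).hexdigest()

def transmission_intact(sent: bytes, received: bytes) -> bool:
    """True if the received payload matches what was sent, bit for bit."""
    return sha256_digest(sent) == sha256_digest(received)

original = b"\x02\x00\x00\x00UL\x04\x00"   # stand-in for DICOM bytes
print(transmission_intact(original, original))            # True
print(transmission_intact(original, original + b"\x00"))  # False
```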

    AI/ML Overview

    The provided text focuses on the 510(k) summary for the device Augmento, primarily demonstrating its substantial equivalence to a predicate device (NubeX) rather than detailing a specific external clinical or performance study proving that the device meets acceptance criteria. The information regarding acceptance criteria and a study to prove they are met is primarily limited to non-clinical technical performance tests.

    Here's an analysis of the provided text based on your requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    Based solely on the provided text, the acceptance criteria are not explicitly laid out as a table of quantitative performance metrics for disease detection (e.g., sensitivity, specificity for a specific medical condition). Instead, the non-clinical performance data focuses on functional equivalence, software integrity, and measurement accuracy of specific tools.

    | Acceptance Criteria (Inferred/Stated, Non-Clinical) | Reported Device Performance (as stated in the document) |
    |---|---|
    | Risk Management & Cybersecurity Compliance | Device hazard analysis and mitigations detailed per ISO 14971:2019. Vulnerability assessment and penetration testing conducted to check for adequate security controls and mitigation of cybersecurity risks. |
    | Usability Compliance | Usability testing conducted per IEC 62366-1:2015. |
    | Software Verification & Validation | User Acceptance Testing (features comply with intended use, verified against SRS/SDS documents); Software Unit Test (end-to-end workflow-based units, code compliance, intended-use verification); Software Integration Test and System Test (end-to-end integration of components; functional testing for configurations, user management, DICOM viewer/acceptor, search, assignment, smart tags, conversation, report management, audit log, error handling, hospital management). All test cases were addressed and successfully passed the acceptance criteria. |
    | Angle Measurement Tool Accuracy | An angle measurement study demonstrated that the angle tool of Augmento is equivalent to an FDA-cleared DICOM viewer (Softneta MedDream Viewer, K162011) for measurements within the range of 0° to 180°. |
    | 3rd-Party AI Model Integration Integrity | The integration interface between Augmento and 3rd-party AI models was tested to confirm that the integrity of the source DICOM file and the AI model output is unaltered during transmission. |
    | Substantial Equivalence (Overall Conclusion) | Non-clinical testing demonstrates that the device performs in accordance with its intended use, complies with FDA-recognized consensus standards, has a "moderate" software level of concern, and does not raise new safety/effectiveness concerns, and is thus substantially equivalent to the predicate. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: For the Angle Measurement Study, the text specifies "measurements made by two readers in twelve independent X-ray scans from four different categories: frontal chest, AP knee, lateral ankle, and AP pelvis." This means 12 cases were used. For other non-clinical tests (Hazard Analysis, Usability, Software V&V, AI Integration), no specific "sample size" of medical images/cases is mentioned as these are system-level tests.
    • Data Provenance: The document does not specify the country of origin for the X-ray scans used in the Angle Measurement Study. It indicates the company is based in Pune, Maharashtra, INDIA, suggesting the data could be from India, but this is not explicitly stated for the test set. All mentioned studies are described as "Non-Clinical Performance Data," implying they are internal verification and validation studies rather than external prospective or retrospective clinical trials.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Angle Measurement Study: "Two readers" performed measurements. Their qualifications are not stated (e.g., radiologist, technician, years of experience). The 'ground truth' in this case was comparative analysis against an FDA-cleared DICOM viewer, not an independent clinical gold standard.
    • For other non-clinical tests (Hazard Analysis, Usability, Software V&V, AI Integration), the "experts" would be the engineering and quality assurance teams, but their numbers and specific qualifications are not detailed as they are for a clinical study.

    4. Adjudication Method for the Test Set

    • Angle Measurement Study: The text mentions "measurements made by two readers" but does not specify adjudication methods (e.g., 2+1, 3+1, consensus reading). It implies independent measurements that were then statistically compared.
    • No adjudication method is relevant or detailed for the other non-clinical software/system tests.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC comparative effectiveness study was performed or detailed in the provided text. The document states that Augmento "receives the output and merely displays the output of the integrated FDA-cleared 3rd party AI models, 'as-is'." It explicitly states that "The safety and effectiveness of the 3rd party model is covered under the original 3rd party manufacturer's regulatory clearance." This means Augmento itself is not claiming to improve human reader performance through AI assistance; it's a platform that displays results from other FDA-cleared AI tools.

    6. Standalone (Algorithm Only) Performance

    • No standalone (algorithm only) performance study of Augmento's diagnostic capabilities was detailed. Augmento itself is described as a PACS, workflow management solution, and viewer, not a diagnostic algorithm. Its role concerning AI is to display outputs from third-party FDA-cleared AI models.

    7. Type of Ground Truth Used

    • For the Angle Measurement Study, the "ground truth" was essentially the measurements obtained from a comparative FDA-cleared DICOM viewer, which served as the reference standard against which Augmento's measurements were compared using statistical tests (equivalence test and T-test). This is a technical comparison rather than a clinical ground truth like pathology or patient outcomes.
    • For the other non-clinical studies (Risk, Usability, Software V&V, AI Integration), the "ground truth" would be compliance with specified standards, functional requirements, and freedom from defects or vulnerabilities.
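The angle-tool comparison described above can be sketched in general form: compute the same angle in both viewers and check that the paired differences stay within an equivalence margin. This is an illustration only; the study's actual statistics (equivalence test and T-test) are not reproduced, and the 1° margin is an assumption of this sketch, not a value from the document.

```python
# Hedged sketch of an angle-tool comparison. The function computes the angle
# at `vertex` formed by rays to p1 and p2; `within_margin` checks that paired
# measurements from two tools agree within an assumed equivalence margin.
import math

def angle_deg(vertex, p1, p2):
    """Angle at `vertex` between rays to p1 and p2, in degrees (0-180)."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def within_margin(candidate_angles, reference_angles, margin_deg=1.0):
    """True if every paired difference is within the equivalence margin."""
    return all(abs(a - b) <= margin_deg
               for a, b in zip(candidate_angles, reference_angles))

# A right angle measured by the candidate tool vs. a 90.0° reference reading:
a = angle_deg((0, 0), (1, 0), (0, 1))
print(round(a, 1))                 # 90.0
print(within_margin([a], [90.0]))  # True
```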

    8. Sample Size for the Training Set

    • Not applicable / not provided. Augmento is a PACS and viewer system, not an AI model that requires a training set. The AI models it integrates with are developed and cleared by third parties, and information about their training sets is outside the scope of this document.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable / not provided for the same reason as point 8.

    K Number: K200546
    Device Name: ZeeroMED View
    Manufacturer:
    Date Cleared: 2020-05-05 (63 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K162011

    Intended Use

    ZeeroMED View software is intended for use as a diagnostic and analysis tool for diagnostic images for hospitals, imaging centers, radiologists, reading practices and any user who requires and is granted access to patient image, demographic and report information. ZeeroMED View displays and manages diagnostic quality DICOM images. ZeeroMED View is not intended for diagnostic use with mammography images. Usage for mammography is for reference and referral only. ZeeroMED View is not intended for diagnostic use on mobile devices.

    Device Description

    The ZeeroMED View Software, or ZeeroMED View, is a web-based DICOM medical image viewer that allows downloading, reviewing, manipulating, visualizing, and printing multi-modality medical image data in DICOM format from a client machine. ZeeroMED View is a server-based solution that connects to any PACS and displays DICOM images within the hospital, securely from remote locations, or as an integrated part of an EHR or portal. ZeeroMED View enables health professionals to access, manipulate, and measure DICOM images and to collaborate in real time over full-quality medical images using any web browser, without installing client software.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study proving the device meets those criteria:

    The provided document (K200546) is a 510(k) summary for the ZeeroMED View software, establishing substantial equivalence to a predicate device, MedDream.

    Crucially, this document does not describe a study that proves the device meets specific acceptance criteria for diagnostic performance (e.g., sensitivity, specificity, accuracy). Instead, it demonstrates substantial equivalence to a legally marketed predicate device based on technical characteristics and functionality.

    Therefore, most of the requested information regarding "acceptance criteria" for diagnostic performance and a "study that proves the device meets the acceptance criteria" (in the sense of a clinical diagnostic performance study) is not present in the provided text.

    The "acceptance criteria" here are implicitly tied to the demonstration of substantial equivalence, meaning the device must perform similarly and be as safe and effective as the predicate. The "study" proving this is primarily the non-clinical product evaluation, including software verification and validation, and performance testing for measurement accuracy, rather than a clinical trial assessing diagnostic performance against a ground truth.

    Here's an explanation based on the available information:


    1. A table of acceptance criteria and the reported device performance

    The document does not specify quantitative diagnostic performance acceptance criteria (e.g., sensitivity, specificity thresholds) or report such performance metrics. The "performance" being assessed and demonstrated is the similarity in technical characteristics and functionality compared to the predicate device.

    | Acceptance Criteria (Implicit for Substantial Equivalence) | Reported Device Performance (Demonstrated Similarity to Predicate) |
    |---|---|
    | Safety and Effectiveness: no new questions regarding safety or effectiveness compared to the predicate. | "There are no differences between the devices that affect the usage, safety and effectiveness, thus no new question is raised regarding the safety and effectiveness." |
    | Measurement Accuracy: ability to accurately perform various distance and area measurements. | "Performance Testing (Measurement Accuracy) was conducted on the ZeeroMED View system to determine measurement accuracy when performing the various distance and area measurements." (Specific results are not provided in this summary, but accuracy was presumably sufficient for the intended use.) |
    | Software Reliability/Robustness: software functions as intended with a "moderate" level of concern. | "Software verification and validation testing were conducted on the ZeeroMED View system... Documentation includes level of concern [moderate], software requirements and specifications, design architecture, risk analysis and software validation and verification." |
    | Functional Equivalence: possesses similar features and functionality to the predicate. | A detailed "Feature Comparison" table (on page 5) shows near-identical functionality: DICOM image loading/visualization, patient study search, user authentication, image display operations (flip, rotate, zoom, scroll, layout, PET fusion, volumetric rendering), measurement functions (line, angle, polyline, area), annotations, report generation, etc. |
    | Technical Equivalence: basic technical features are the same as the predicate. | "The basic and main technical features of the subject device are the same as the predicated device." |
    | Intended Use Equivalence: shares the same intended use as the predicate (with specific contraindications/limitations). | Both devices are "intended for use as a diagnostic and analysis tool for diagnostic images..." with specific exclusions for mammography and mobile devices; the comparison table on page 5 details this. |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not describe a clinical test set with a "sample size" in the context of diagnostic performance evaluation. The "test set" for the software verification and validation would refer to the internal software testing data, not a patient image dataset for diagnostic performance assessment. No information on data provenance (country of origin, retrospective/prospective) is provided.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable, as no external "test set" requiring expert-established ground truth for diagnostic performance is described in this 510(k) summary.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable, as no clinical test set requiring adjudication for ground truth is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC study was done or described. This device is a PACS viewer, not an AI-assisted diagnostic tool in the sense of a CADe/CADx system that would typically undergo MRMC studies to assess AI's impact on human reader performance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This is not an AI diagnostic algorithm; it's a medical image viewer. Standalone performance as commonly understood for AI algorithms is not relevant to this device's regulatory pathway as presented. The "standalone performance" here relates to its software functionality and measurement accuracy as a display and analysis tool, which was tested during software V&V.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the software verification and validation, the "ground truth" would be the expected performance of the software functions (e.g., a measurement tool should calculate distances correctly based on known image properties, display functions should work as per specifications). This is established through internal engineering testing and validation against defined software requirements. It's not a clinical ground truth like pathology or expert consensus on disease presence.

    8. The sample size for the training set

    Not applicable. This device is not an AI/machine learning product that requires a "training set" of medical images in the common sense.

    9. How the ground truth for the training set was established

    Not applicable, as there is no "training set" described for this non-AI device.


    Summary of what the document does convey regarding validation:

    The validation for ZeeroMED View primarily consists of:

    • Software Verification and Validation: This assesses the software's functionality, adherence to specifications, and reliability according to FDA guidance (specifically, for a "moderate" level of concern). This includes risk analysis.
    • Performance Testing (Measurement Accuracy): This specific non-clinical test confirms that the device's measurement tools (distance, area) provide accurate results.
    • Comparison to Predicate Device: The core of the 510(k) submission relies on demonstrating that ZeeroMED View shares the same intended use, technical characteristics, and functionality as a legally marketed predicate device (MedDream), and that any differences do not raise new questions of safety or effectiveness. This comparison serves as the "proof" for substantial equivalence.
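The measurement-accuracy testing summarized above exercises the conversion from pixel coordinates to physical units. The sketch below shows the standard form of that computation: scale pixel offsets by the image's per-pixel spacing before computing distance or area. It is a hedged illustration, not ZeeroMED View's implementation; the 0.5 mm spacing is a hypothetical value (in DICOM, spacing comes from the PixelSpacing attribute).

```python
# Hedged sketch of pixel-to-physical measurement, the kind of computation a
# "measurement accuracy" test exercises. The spacing values are hypothetical.
import math

def distance_mm(p1, p2, spacing=(0.5, 0.5)):
    """Euclidean distance between two (row, col) pixel coordinates,
    scaled to millimeters using (row_spacing, col_spacing)."""
    dr = (p2[0] - p1[0]) * spacing[0]
    dc = (p2[1] - p1[1]) * spacing[1]
    return math.hypot(dr, dc)

def polygon_area_mm2(points, spacing=(0.5, 0.5)):
    """Shoelace-formula area of a polygon given in pixel coordinates,
    scaled to square millimeters."""
    scaled = [(r * spacing[0], c * spacing[1]) for r, c in points]
    n = len(scaled)
    acc = 0.0
    for i in range(n):
        r1, c1 = scaled[i]
        r2, c2 = scaled[(i + 1) % n]
        acc += r1 * c2 - r2 * c1
    return abs(acc) / 2.0

print(distance_mm((0, 0), (0, 10)))                            # 5.0 (mm)
print(polygon_area_mm2([(0, 0), (0, 10), (10, 10), (10, 0)]))  # 25.0 (mm^2)
```

An accuracy test would compare these outputs against known phantom geometry; only the computation itself is sketched here.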
