510(k) Data Aggregation

    K Number: K140269
    Date Cleared: 2014-05-08 (94 days)
    Regulation Number: 892.1200
    Reference Devices: K021656, K130884, K123528

    Intended Use

    The HERMES Medical Imaging Suite provides software applications used to process, display, analyze, and manage nuclear medicine and other medical imaging data transferred from other workstations or acquisition stations.

    Device Description

    The base product design of the Hermes Medical Imaging Suite v5.4 is the same as that of the Hermes Medical Imaging Suite v5.3 (K131233). The product has been modified: the image processing application BRASS™ has been ported from the Oracle® Solaris environment to the Microsoft® Windows environment. BRASS™ has also been updated with improved support for management and analysis of amyloid PET imaging, as described in the 510(k) submission. The Hermes Medical Imaging Suite provides software applications used to process, display, analyze, and manage nuclear medicine imaging data transferred from other workstations or acquisition stations.
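
    The submission does not describe how the amyloid analysis is computed. For orientation only, below is a minimal sketch of the standardized uptake value ratio (SUVR), the metric conventionally used to quantify amyloid PET uptake; the function and mask names are hypothetical, and this is not presented as BRASS™'s actual method.

```python
import numpy as np

def suvr(pet: np.ndarray, target_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Standardized uptake value ratio: mean tracer uptake in a cortical
    target region divided by mean uptake in a reference region (often the
    cerebellum). Illustrative sketch only, not the BRASS(TM) algorithm."""
    target_mean = pet[target_mask].mean()        # mean voxel value in the target ROI
    reference_mean = pet[reference_mask].mean()  # mean voxel value in the reference ROI
    return float(target_mean / reference_mean)
```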

    AI/ML Overview

    The provided text is a 510(k) summary for the HERMES Medical Imaging Suite v5.4. It describes the device, its intended use, and substantial equivalence to predicate devices. However, it does not contain specific details about acceptance criteria, device performance metrics, or a study design with sample sizes, ground truth establishment, or expert involvement as requested.

    The summary states that "The testing results supports that all the software specifications have met the acceptance criteria" but does not elaborate on what those criteria were or how performance was measured against them. It focuses instead on substantial equivalence to the predicate devices, based on technological characteristics and indications for use.

    Therefore, I cannot fulfill your request to describe the acceptance criteria and the study that proves the device meets them, nor can I provide information for most of the numbered points, as that information is not present in the provided document.

    Here's a breakdown of what can and cannot be extracted from the provided text based on your request:


    Acceptance Criteria and Device Performance Study (Information Not Provided in Document)

    The document states that "The testing results supports that all the software specifications have met the acceptance criteria." However, it does not provide:

    • A table of acceptance criteria.
    • Reported device performance metrics.
    • Details of the study that proves the device meets the acceptance criteria.

    Therefore, the following points cannot be addressed from the given text:

    1. A table of acceptance criteria and the reported device performance
    * Not provided in the document. The document only states that acceptance criteria were met.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
    * Not provided in the document.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
    * Not provided in the document.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
    * Not provided in the document.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it
    * Not provided in the document. The document describes a comparison to predicate devices, focusing on technological equivalence, not a comparative effectiveness study with human readers.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
    * Not provided in the document.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
    * Not provided in the document.

    8. The sample size for the training set
    * Not provided in the document. This document focuses on a 510(k) submission for a software update and comparison to predicate devices, not on the deep learning aspects of an AI model's training.

    9. How the ground truth for the training set was established
    * Not provided in the document.


    What the document does state:

    • Device Description: The HERMES Medical Imaging Suite v5.4 is an update to v5.3 (K131233). The primary change is the transfer of the BRASS™ image processing application from the Oracle® Solaris environment to the Microsoft® Windows environment, with improved support for management and analysis of amyloid PET imaging.
    • Intended Use: To process, display, analyze, and manage nuclear medicine and other medical imaging data transferred from other workstations or acquisition stations.
    • Testing: "The tests for verification and validation followed Hermes Medical Solutions AB design controlled procedures. The Risk analysis was completed and risk control implemented to mitigate identified hazards. The testing results supports that all the software specifications have met the acceptance criteria."
    • Substantial Equivalence: The device is deemed substantially equivalent to predicate devices (HERMES Medical Imaging Suite v5.3 (K131233), HERMES HDAQ Acquisition Station and Hermes Workstation (K021656), Xeleris 3.1 processing and review workstation (K130884), and Scenium 3.0 (K123528)) based on similar technology, fundamental concepts, and operation, with the specific modification for BRASS™ noted. The submission states that the "results showed a good compliance."

    In summary, this 510(k) focuses on demonstrating that a software update to an existing device, which transfers a feature to a new operating system and enhances support for amyloid PET imaging, maintains substantial equivalence without introducing new safety or effectiveness concerns. It therefore does not report detailed clinical performance studies or quantifiable acceptance criteria.


    K Number: K130451
    Device Name: NEUROQ 3.6
    Date Cleared: 2013-05-17 (84 days)
    Regulation Number: 892.1200
    Reference Devices: K072307, K123528

    Intended Use
    1. assist with regional assessment of human brain scans, through automated quantification of mean pixel values lying within standardized regions of interest (S-ROIs);
    2. assist with comparisons of the activity in brain regions of individual scans relative to normal activity values found for brain regions in FDG-PET scans, through quantitative and statistical comparisons of S-ROIs;
    3. assist with comparisons of activity in brain regions of individual scans between two studies from the same patient and between symmetric regions of interest within the brain PET study, and to perform an image fusion of the patient's PET and CT data; and
    4. NeuroQ 3.6 provides added functionality for analysis of amyloid uptake levels in brain regions.
    Device Description

    NeuroQ™ 3.6 has been developed to aid in the assessment of human brain scans through quantification of mean pixel values lying within standardized regions of interest, and to provide quantified comparisons with brain scans derived from FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC). The program provides automated analysis of brain PET scans, with output that includes quantification of relative activity in 240 different brain regions, as well as measures of the magnitude and statistical significance with which activity in each region differs from mean activity values of brain regions in the AC database. The program can also be used to compare activity in brain regions of individual scans between two studies from the same patient and between symmetric regions of interest within the brain PET study, to perform an image fusion of the patient's PET and CT data, and to provide analysis of amyloid uptake levels in the brain. The program runs in the IDL software environment and can be executed on any nuclear medicine computer system that supports the IDL platform. It processes studies automatically; however, user verification of the output is required, and a manual processing capability is provided.
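
    The description above is concrete enough to sketch the core computation: mean pixel values within standardized ROIs, each expressed as a deviation from the asymptomatic-control (AC) database in standard-deviation units. The sketch below is a hedged illustration of that region-wise z-score pattern, assuming NumPy arrays and hypothetical names; it is not NeuroQ's actual implementation, which also involves spatial normalization and covers 240 regions.

```python
import numpy as np

def region_z_scores(scan, roi_masks, ac_means, ac_stds):
    """For each standardized ROI (S-ROI), compute the mean pixel value in
    the patient scan and express its deviation from the asymptomatic-control
    (AC) database as a z-score. Hypothetical sketch, not NeuroQ's code.

    scan:      2D/3D numpy array of pixel values
    roi_masks: dict of {region name: boolean mask array}
    ac_means:  dict of {region name: mean activity in the AC database}
    ac_stds:   dict of {region name: std dev of activity in the AC database}
    """
    scores = {}
    for name, mask in roi_masks.items():
        roi_mean = float(scan[mask].mean())
        scores[name] = (roi_mean - ac_means[name]) / ac_stds[name]
    return scores

def asymmetry_index(left_mean, right_mean):
    """Percent difference between symmetric ROIs, the kind of left-right
    comparison named in intended-use item 3. Illustrative only."""
    return 100.0 * (left_mean - right_mean) / ((left_mean + right_mean) / 2.0)
```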

    AI/ML Overview

    The provided text describes the NeuroQ™ 3.6 device, its intended use, and its equivalence to previously cleared devices. However, it does not contain specific acceptance criteria for performance metrics (like sensitivity, specificity, accuracy, or statistical thresholds) or a detailed study description with specific results that would "prove" the device meets such criteria. Instead, it references previous validation studies and states general conclusions about safety and effectiveness.

    Therefore, I cannot populate a table of acceptance criteria and reported performance, nor can I provide detailed information for many of the requested points because that specific data is not present in the provided 510(k) summary. The summary focuses on establishing substantial equivalence based on prior versions and a general statement of in-house testing.

    Here's a breakdown of what can and cannot be answered based on the provided text:


    1. A table of acceptance criteria and the reported device performance

    • Cannot be provided. The document does not specify any quantitative acceptance criteria or reported performance metrics (e.g., specific accuracy, sensitivity, specificity values, or statistical thresholds) that the device was tested against. It states that validation for modifications can be found in "Item H. Testing & Validation," but this item itself is not included in the provided text.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Cannot be fully provided. The document mentions "clinical validation studies submitted in our previous 510(k) K041022 and 510(k) #: K072307" for the initial program and earlier versions. For the current version (3.6) it only refers to "in-house testing" and "final in-house validation results."
      • Sample Size (Test Set): Not specified for NeuroQ™ 3.6.
      • Data Provenance (country of origin, retrospective/prospective): Not specified for NeuroQ™ 3.6. The reference database is described as "brain scans derived from FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC)," but no details on its origin are given.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Cannot be provided. The document does not describe how ground truth was established for any test set or mention specific experts involved in such a process for NeuroQ™ 3.6.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Cannot be provided. The document does not describe any adjudication method for a test set.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it

    • No, an MRMC comparative effectiveness study is not described as being performed. The document states the program "serves merely as a display and processing program to aid in the diagnostic interpretation...it was not meant to replace or eliminate the standard visual analysis." It emphasizes the physician's ultimate responsibility and integration of all information. There is no mention of a study comparing human reader performance with and without AI assistance, or any effect size.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    • Yes, implicitly, to some extent, but with a critical caveat. The device itself performs "automated analysis" and provides "quantification of relative activity." This suggests standalone algorithmic processing.
    • Caveat: The document explicitly states, "The program processes the studies automatically, however, user verification of output is required and manual processing capability is provided." It also says it is "not meant to replace or eliminate the standard visual analysis" and that the physician "should integrate all of the patients' clinical and diagnostic information." This strongly indicates that while the algorithm runs automatically, its performance is not intended to be "standalone" in a diagnostic sense, as human-in-the-loop verification and interpretation are always required. No specific "standalone performance" metrics are provided.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Cannot be explicitly stated. The document describes a "reference database" of "asymptomatic controls (AC)" used for comparison. This implies a ground truth of "normalcy" based on the absence of identified neuropsychiatric disease or symptoms. However, the precise method of establishing this normal status (e.g., through long-term follow-up, expert clinical assessment, other diagnostic tests) is not detailed. For patient studies, the tool provides quantitative results to be integrated by a physician, but doesn't mention a specific "ground truth" used to validate its diagnostic accuracy in patient cases.

    8. The sample size for the training set

    • Cannot be provided. The training set size for the algorithms is not mentioned. The document refers to a "reference database" of asymptomatic controls, but its size is not specified.

    9. How the ground truth for the training set was established

    • Partially described for the "reference database." The "reference database" consists of "brain scans derived from FDG-PET studies of defined groups having no identified neuropsychiatric disease or symptoms, i.e., asymptomatic controls (AC)." This indicates that the ground truth for this reference is the absence of neuropsychiatric disease or symptoms. However, the specific methods (e.g., detailed clinical evaluation, exclusion criteria, follow-up) used to establish this "asymptomatic" status are not provided. The document does not explicitly discuss a separate "training set" and associated ground truth, but rather a "reference database" for comparison.

    K Number: K130884
    Date Cleared: 2013-04-12 (14 days)
    Regulation Number: 892.2050
    Reference Devices: K021656, K123528

    Intended Use

    The system is intended for use by Nuclear Medicine (NM) or Radiology practitioners and referring physicians for display, processing, archiving, printing, reporting and networking of NM data, including planar scans (Static, Whole Body, Dynamic, Multi-Gated) and tomographic scans (SPECT, Gated SPECT, dedicated PET or Camera-Based-PET) acquired by gamma cameras or PET scanners.

    The system can run on a dedicated workstation or in a server-client configuration.

    The NM or PET data can be coupled with registered and/or fused CT or MR scans, and with physiological signals in order to depict, localize, and/or quantify the distribution of radionuclide tracers and anatomical structures in scanned body tissue for clinical diagnostic purposes.

    The optional DaTQUANT application enables visual evaluation and quantification of 123I-ioflupane (DaTscan™) images. The DaTQUANT Normal Database option enables quantification of 123I-ioflupane (DaTscan™) images relative to normal-population databases.

    These applications may assist in detection of loss of functional dopaminergic neuron terminals in the striatum, which is correlated with Parkinson disease.

    Device Description

    The Xeleris 3.1 is a Nuclear Medicine Workstation system intended for general nuclear medicine processing and review procedures for detection of radioisotope tracer uptake in the patient body, using a variety of processing modes supported by various types of clinical applications and various features designed to enhance image quality. The components of the Xeleris 3.1 NM Workstation system are an operation console, a monitor, and peripherals. The Xeleris 3.1 is a modification of its predicate device, Xeleris 3, providing enhanced workflow for existing operations and broader access to Xeleris applications through support for PACS, the GE AW Server, and an offline client-server configuration. Xeleris 3.1 also enables normal-database comparison together with quantification analysis of 123I-ioflupane brain NM images. Similar functionality for NM/PET brain image analysis also resides in the predicate devices K021656 and K123528.
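
    The record does not spell out the 123I-ioflupane quantification formula. As a hedged illustration only: DaT SPECT quantification conventionally uses a specific binding ratio (striatal uptake relative to a non-specific background region), which a normal-database option can then express as a deviation from normals. All names below are hypothetical; this is not claimed to be DaTQUANT's exact method.

```python
import numpy as np

def specific_binding_ratio(spect: np.ndarray,
                           striatum_mask: np.ndarray,
                           background_mask: np.ndarray) -> float:
    """Specific binding ratio (SBR) = (striatal mean - background mean)
    / background mean. A conventional DaT quantification metric;
    hypothetical sketch, not necessarily DaTQUANT's formula."""
    striatum = spect[striatum_mask].mean()
    background = spect[background_mask].mean()
    return float((striatum - background) / background)

def z_versus_normals(sbr: float, normal_mean: float, normal_std: float) -> float:
    """Express a patient's SBR as a deviation from a normal-population
    database, the kind of comparison the Normal Database option enables."""
    return (sbr - normal_mean) / normal_std
```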

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Xeleris 3.1 Processing and Review Workstation, specifically focusing on the DaTQUANT application:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state formal acceptance criteria with specific numerical thresholds for the DaTQUANT application's accuracy. Instead, it describes a comparison study: the study for the DaTQUANT application compared "DaTQUANT analysis results to manual analysis results," and the reported performance is that "DaTQUANT results were found to be as accurate as manual results."

    Acceptance Criteria: DaTQUANT analysis results are accurate compared to manual analysis results.
    Reported Device Performance: DaTQUANT results were found to be as accurate as manual results.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The document mentions that the data used for testing were "taken from brain phantoms injected symmetrically and asymmetrically." It does not specify the number of phantoms or the number of acquisitions/images used.
    • Data Provenance: The data were derived from brain phantoms injected symmetrically and asymmetrically, simulating normal and abnormal uptakes, with "different contrast levels used to simulate different signal to noise ratio levels." This indicates a controlled, artificial data set (phantoms) rather than human clinical data, analyzed retrospectively; the country of origin of the phantom data is not specified. (A sketch of what such an automated-versus-manual comparison could look like follows this list.)
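
    The summary does not say which statistics underlie "as accurate as manual results." Purely as an illustration of what a paired automated-versus-manual phantom comparison could look like, here is a minimal agreement check; the statistics chosen (bias, SD of differences, Pearson correlation) are assumptions, not the submission's actual protocol.

```python
import numpy as np

def agreement_summary(automated: np.ndarray, manual: np.ndarray) -> dict:
    """Compare paired automated (DaTQUANT-style) and manual phantom
    measurements: mean difference (bias), standard deviation of the
    differences, and Pearson correlation. Hypothetical check only; the
    510(k) does not state which statistics were used."""
    diff = automated - manual
    return {
        "bias": float(diff.mean()),
        "sd_of_diff": float(diff.std(ddof=1)),
        "pearson_r": float(np.corrcoef(automated, manual)[0, 1]),
    }
```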

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The ground truth in this specific test was established by "manual analysis results," which inherently implies human expert involvement. However, the document does not specify the number of experts who performed the manual analysis, nor their specific qualifications (e.g., "radiologist with 10 years of experience").

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1). It simply states that the DaTQUANT results were compared to "manual analysis results," implying a direct comparison without detailing how discrepancies in manual analysis (if multiple experts were involved) would have been resolved.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned for the DaTQUANT application in this document. The testing described focuses on comparing the algorithm's output to manual analysis, not on how human readers' performance might improve with or without AI assistance.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)

    Yes, a standalone performance test was done for the DaTQUANT application. The description, "Testing the accuracy of using the DaTQUANT application by comparing DaTQUANT analysis results to manual analysis results," indicates that the algorithm's output (DaTQUANT results) was directly evaluated against a ground truth (manual analysis) without an explicit human-in-the-loop interaction for the DaTQUANT itself during this specific accuracy test.

    7. Type of Ground Truth Used

    The ground truth used for the DaTQUANT accuracy testing was expert consensus / manual analysis results derived from phantom data.

    8. Sample Size for the Training Set

    The document does not specify the sample size or details regarding a training set for the DaTQUANT application. The description focuses solely on the accuracy testing using phantom data.

    9. How the Ground Truth for the Training Set Was Established

    Since a training set is not mentioned, the method for establishing its ground truth is also not provided in this document.
