Found 2 results

510(k) Data Aggregation

    K Number: K242215
    Manufacturer:
    Date Cleared: 2024-10-25 (88 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K170981
    Intended Use

    Neurophet AQUA is intended for automatic labeling, visualization, and volumetric quantification of segmentable brain structures and lesions from a set of MR images. Volumetric data may be compared to reference percentile data.

    Device Description

    Neurophet AQUA is a fully automated MR imaging post-processing medical device software that provides automatic labeling, visualization, and volumetric quantification of brain structures from a set of MR images and returns segmented images and morphometric reports. The resulting output is provided in morphometric reports that can be displayed on Picture Archive and Communications Systems (PACS). The high throughput capability makes the software suitable for use in routine patient care as a support tool for clinicians in assessment of structural MRIs.

    Neurophet AQUA provides morphometric measurements based on T1 MRI series. The output of the software includes volumes that have been annotated with color overlays, with each color representing a particular segmented region, and morphometric reports that provide comparison of measured volumes to age and gender-matched reference percentile data. In addition, the adjunctive use of the T2 FLAIR MR series allows for improved identification of some brain abnormalities such as lesions, which are often associated with T2 FLAIR hyperintensities.
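    The comparison of a measured volume against age- and gender-matched reference percentile data can be sketched as a simple rank lookup. The reference values and the structure chosen below are hypothetical illustrations, not Neurophet's actual normative data:

```python
import bisect

def percentile_rank(volume_cm3, reference_volumes):
    """Percentile of a measured volume within a sorted reference
    distribution (hypothetical normative data for one age/sex stratum)."""
    ref = sorted(reference_volumes)
    rank = bisect.bisect_left(ref, volume_cm3)
    return 100.0 * rank / len(ref)

# Hypothetical normative hippocampal volumes (cm^3)
reference = [2.9, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.9, 4.1]
print(percentile_rank(3.45, reference))  # -> 50.0 (mid-distribution)
```

    Real normative lookups also adjust for intracranial volume and scanner effects; this sketch shows only the percentile step.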

    Neurophet AQUA processing architecture includes a proprietary automated internal pipeline that performs segmentation, volume calculation and report generation.

    The results are displayed in a dedicated graphical user interface, allowing the user to:

    • Browse the segmentations and the measures,
    • Compare the results of segmented brain structures to a reference healthy population,
    • Read and print a PDF report

    Additionally, safety measures include automated quality control functions, such as scan protocol verification, which validates that the imaging protocols adhere to system requirements.
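    A scan-protocol check of this kind can be sketched as a validation over DICOM header fields. The field names below follow standard DICOM attribute keywords, but the required values and thresholds are illustrative assumptions, not the device's actual requirements:

```python
# Illustrative scan-protocol verification; required values are hypothetical.
REQUIRED = {
    "MRAcquisitionType": "3D",
    "ScanningSequence": "GR",  # gradient echo, typical for 3D T1 series
}
MAX_SLICE_THICKNESS_MM = 1.5

def protocol_ok(header: dict) -> list:
    """Return a list of protocol violations (empty list = scan accepted)."""
    problems = []
    for key, expected in REQUIRED.items():
        if header.get(key) != expected:
            problems.append(f"{key}: expected {expected!r}, got {header.get(key)!r}")
    if header.get("SliceThickness", float("inf")) > MAX_SLICE_THICKNESS_MM:
        problems.append("SliceThickness exceeds limit")
    return problems

scan = {"MRAcquisitionType": "3D", "ScanningSequence": "GR", "SliceThickness": 1.0}
print(protocol_ok(scan))  # [] -> passes verification
```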

    AI/ML Overview

    The provided text does not contain an explicit table of acceptance criteria, nor does it describe a multi-reader, multi-case (MRMC) comparative effectiveness study. The performance-study information that can be extracted is summarized below.

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Metric | Acceptance Criteria (implied by comparison to predicate/reference) | Reported Device Performance (Neurophet AQUA V3.1) |
    |---|---|---|
    | T2 FLAIR lesion segmentation accuracy | Dice's coefficient exceeds 0.80 | Dice's coefficient exceeds 0.80 |
    | T2 FLAIR lesion segmentation reproducibility | Mean absolute lesion volume difference less than 0.25 cc | Less than 0.25 cc |
    | All other performance metrics (T1 image analysis) | Same as previous Neurophet AQUA v2.1 (K220437) | Same as previous Neurophet AQUA v2.1 (K220437) |

    (Note: The acceptance criteria for the T2 FLAIR analysis features are implied to be met because the text states, "The test results meet acceptance criteria based on the performance of the reference device, NeuroQuant v2.2 (K170981)." However, the exact numerical acceptance criteria for the reference device are not explicitly provided in the text. For the purpose of this table, the reported device performance is used as the implied acceptance criterion for the new T2 FLAIR features, as the device is stated to meet them.)
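    Dice's coefficient, the overlap metric behind the segmentation accuracy criterion, can be computed from two binary masks (here flattened to 0/1 lists; the masks are illustrative, not study data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks
    (1.0 = perfect overlap, 0.0 = no overlap)."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

automated = [1, 1, 1, 0, 0, 1, 0, 0]
manual    = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice(automated, manual))  # -> 0.75, below the 0.80 criterion
```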

    2. Sample sizes used for the test set and data provenance:

    • Accuracy test dataset: 136 images
    • Reproducibility test dataset: 52 images
    • Data Provenance: Primarily sourced from U.S. hospitals; multi-site data collection.

    3. Number of experts used to establish the ground truth for the test set and their qualifications:

    • Number of experts: Three
    • Qualifications of experts: U.S.-based neuroradiologists. (Specific years of experience are not mentioned).

    4. Adjudication method for the test set:

    • Adjudication method: Ground truth was established by "consensus among three U.S.-based neuroradiologists." This implies consensus-based adjudication, but the specific mechanics (e.g., majority vote, a 2+1 or 3+1 scheme, or discussion until agreement) are not detailed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • The provided text does not mention a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs without AI assistance. The performance data focuses on the algorithm's accuracy and reproducibility against expert manual segmentation.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, performance was evaluated in a standalone (algorithm only) manner. The text describes comparing the device's segmentation accuracy and reproducibility directly with expert manual segmentations.

    7. The type of ground truth used:

    • Type of ground truth: Expert manual segmentations. The text states: "Neurophet AQUA performance was then evaluated by comparing segmentation accuracy with expert manual segmentations..."

    8. The sample size for the training set:

    • The text does not specify the sample size for the training set. It only notes that the device was "trained and tested" on healthy subjects and on patients with mild cognitive impairment, Alzheimer's disease, and multiple sclerosis, ranging from young adults to the elderly.

    9. How the ground truth for the training set was established:

    • The text does not explicitly state how the ground truth for the training set was established. It only discusses the ground truth establishment for the test set.

    K Number: K192051
    Device Name: THINQ
    Manufacturer:
    Date Cleared: 2020-09-30 (427 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K170981
    Intended Use

    THINQ is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures from a set of MR images. Volumetric measurements may be compared to reference percentile data.

    Device Description

    THINQ™ is a software-only, non-interactive, medical device for quantitative imaging, accepting as input 3D T1-weighted MRI scan data of the human head. THINQ™ produces as output a quantitative neuromorphometry report in PDF format. The report contains morphometric (volume) measurements and visualizations of various structures in the brain, and compares these measures to age and gender-matched reference percentile data. The report includes images of the brain with color-coded segmentations, as well as plots showing how measurements compare to reference data. Additionally, in order to visually confirm the accuracy of the results, three segmentation overlays are created in DICOM-JPEG format; one in each anatomical plane: sagittal, coronal and axial.

    The THINQ™ processing pipeline performs an atlas-based segmentation of brain structures followed by measurement of those structures and a comparison to a reference dataset. The pipeline includes automated QA checks on the input DICOM 3D T1 MRI series to ensure adherence to imaging sequence requirements, checks on the data elements generated during the processing pipeline, and usage of a classifier to filter potentially incorrect reports due to corrupted image input.

    THINQ™ is packaged as a container, for deployment and operation in a high-performance computing environment within a clinical workflow.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the THINQ device, based on the provided text:

    1. Acceptance Criteria and Device Performance

    The provided document does not explicitly state acceptance criteria in a quantitative format (e.g., "Dice similarity coefficient must be >= 0.85"). Instead, it presents the reported device performance and implies that these results meet an acceptable standard, likely derived from a "literature review of neuroimaging publications."

    Table of Reported Device Performance

    | Structure | Dice, Mean (StDev) | AVE (cm³), Mean (StDev) | RVE, Mean (StDev) |
    |---|---|---|---|
    | Whole Brain | 0.94 (0.01) | 327.00 (111.48) | 0.30 (0.13) |
    | Total Gray Matter | 0.82 (0.02) | 174.63 (46.42) | 0.24 (0.05) |
    | Total White Matter | 0.87 (0.04) | 65.67 (37.38) | 0.18 (0.12) |
    | Left Cortical Gray Matter | 0.92 (0.06) | 10.59 (6.51) | 0.05 (0.03) |
    | Right Cortical Gray Matter | 0.92 (0.07) | 10.43 (6.67) | 0.05 (0.03) |
    | Left Frontal Lobe | 0.90 (0.06) | 5.44 (3.83) | 0.07 (0.05) |
    | Right Frontal Lobe | 0.90 (0.06) | 5.07 (3.86) | 0.06 (0.05) |
    | Left Parietal Lobe | 0.88 (0.08) | 4.06 (3.04) | 0.08 (0.06) |
    | Right Parietal Lobe | 0.88 (0.08) | 3.85 (2.86) | 0.07 (0.06) |
    | Left Occipital Lobe | 0.82 (0.07) | 1.57 (1.22) | 0.07 (0.05) |
    | Right Occipital Lobe | 0.82 (0.08) | 1.92 (1.97) | 0.08 (0.09) |
    | Left Temporal Lobe | 0.89 (0.06) | 2.08 (1.89) | 0.04 (0.04) |
    | Right Temporal Lobe | 0.89 (0.06) | 2.11 (1.81) | 0.04 (0.04) |
    | Left Cerebral White Matter | 0.86 (0.04) | 32.60 (18.80) | 0.18 (0.12) |
    | Right Cerebral White Matter | 0.86 (0.04) | 33.07 (18.88) | 0.18 (0.12) |
    | Left Lateral Ventricle | 0.86 (0.07) | 2.32 (1.69) | 0.17 (0.15) |
    | Right Lateral Ventricle | 0.85 (0.07) | 2.19 (1.59) | 0.18 (0.14) |
    | Left Hippocampus | 0.78 (0.03) | 0.45 (0.29) | 0.14 (0.09) |
    | Right Hippocampus | 0.79 (0.03) | 0.39 (0.28) | 0.12 (0.10) |
    | Left Amygdala | 0.66 (0.05) | 0.60 (0.17) | 0.68 (0.24) |
    | Right Amygdala | 0.64 (0.06) | 0.69 (0.19) | 0.74 (0.27) |
    | Left Caudate | 0.78 (0.07) | 0.50 (0.35) | 0.17 (0.14) |
    | Right Caudate | 0.78 (0.07) | 0.53 (0.34) | 0.18 (0.13) |
    | Left Putamen | 0.82 (0.04) | 0.83 (0.35) | 0.20 (0.10) |
    | Right Putamen | 0.82 (0.03) | 0.89 (0.35) | 0.21 (0.08) |
    | Left Thalamus | 0.82 (0.03) | 1.51 (0.53) | 0.19 (0.05) |
    | Right Thalamus | 0.83 (0.03) | 1.38 (0.45) | 0.18 (0.04) |
    | Left Cerebellum | 0.91 (0.02) | 2.10 (1.43) | |
    | Right Cerebellum | 0.92 (0.02) | 2.07 (1.49) | 0.03 (0.02) |
    | Intracranial Volume (ICV) | APD: 3.42 (2.05) | 50.18 (31.92) | 0.03 (0.02) |

    (For ICV, the first metric reported is the absolute percent difference (APD) rather than Dice; no RVE is listed for the left cerebellum.)
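    The document does not expand the abbreviations AVE and RVE; assuming they denote absolute and relative volume error against the expert reference segmentation, as is conventional, they reduce to:

```python
def volume_errors(measured_cm3, reference_cm3):
    """Absolute volume error (AVE, cm^3) and relative volume error (RVE)
    of a measured structure volume against the expert reference volume.
    Assumed definitions; the 510(k) summary does not spell them out."""
    ave = abs(measured_cm3 - reference_cm3)
    rve = ave / reference_cm3
    return ave, rve

# e.g. a hippocampus measured at 3.6 cm^3 against a 4.0 cm^3 reference
ave, rve = volume_errors(3.6, 4.0)
print(ave, rve)  # roughly 0.4 cm^3 absolute, 0.1 (10%) relative
```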

    Reproducibility Results:

    | Structure | Reproducibility APD, Mean (StDev) |
    |---|---|
    | Whole Brain | 0.34 (0.29) |
    | Total Gray Matter | 0.83 (0.80) |
    | Total White Matter | 1.04 (1.12) |
    | Left Cortical Gray Matter | 1.08 (0.89) |
    | Right Cortical Gray Matter | 1.04 (0.88) |
    | Left Frontal Lobe | 1.31 (1.30) |
    | Right Frontal Lobe | 1.57 (2.41) |
    | Left Parietal Lobe | 1.50 (1.31) |
    | Right Parietal Lobe | 1.67 (2.73) |
    | Left Occipital Lobe | 1.49 (1.16) |
    | Right Occipital Lobe | 2.00 (1.41) |
    | Left Temporal Lobe | 1.24 (1.37) |
    | Right Temporal Lobe | 1.39 (1.15) |
    | Left Cerebral White Matter | 1.15 (1.09) |
    | Right Cerebral White Matter | 1.10 (1.20) |
    | Left Lateral Ventricle | 1.44 (1.21) |
    | Right Lateral Ventricle | 1.55 (1.12) |
    | Left Hippocampus | 1.56 (1.76) |
    | Right Hippocampus | 1.49 (1.57) |
    | Left Amygdala | 1.25 (1.23) |
    | Right Amygdala | 1.72 (1.36) |
    | Left Caudate | 1.14 (1.22) |
    | Right Caudate | 1.24 (1.10) |
    | Left Putamen | 1.41 (1.15) |
    | Right Putamen | 1.29 (0.90) |
    | Left Thalamus | 0.86 (0.59) |
    | Right Thalamus | 0.75 (0.63) |
    | Left Cerebellum | 0.62 (0.58) |
    | Right Cerebellum | 0.60 (0.54) |
    | Intracranial Volume (ICV) | 0.30 (0.29) |
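    The reproducibility metric, APD (absolute percent difference), compares the volumes obtained from two repeated scans of the same subject. Assuming the common mean-referenced definition (the summary does not give the exact formula):

```python
def apd(volume_scan1, volume_scan2):
    """Absolute percent difference between test-retest volume measurements,
    referenced to their mean (one common definition; assumed here)."""
    mean = (volume_scan1 + volume_scan2) / 2.0
    return 100.0 * abs(volume_scan1 - volume_scan2) / mean

print(apd(3.50, 3.52))  # small test-retest difference: APD well under 1%
```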

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The "validation dataset was composed of 645 unique MR images."
    • Data Provenance: The document states that the dataset included "a wide range of patient characteristics (e.g. age, gender, disease case) and image acquisition varieties (e.g. scanner manufacturer, image acquisition protocols, data noise and artifacts)." However, specific details such as the country of origin or whether the data was retrospective or prospective are not provided.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: The document states that performance testing involved comparisons "to expert-labeled brain images" but does not specify the number of experts used.
    • Qualifications of Experts: The document refers to them as "expert-labeled," but does not provide specific qualifications (e.g., radiologist with X years of experience). It does refer to "gold-standard computer-aided expert manual segmentation" in the conclusion, implying a high standard of expertise.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method for establishing ground truth, beyond stating it was "expert-labeled."

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done

    • No, an MRMC comparative effectiveness study was not explicitly mentioned or performed to assess how much human readers improve with AI vs without AI assistance. The study focuses on the standalone performance of the THINQ device (segmentation accuracy and reproducibility) against established ground truth.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone study was done. The performance data presented (Dice, AVE, RVE, APD for accuracy and reproducibility) directly represents the algorithm's performance in automatically segmenting brain structures and quantifying volumes without human intervention during the segmentation process. The device explicitly states it is "a software-only, non-interactive, medical device for quantitative imaging."

    7. The Type of Ground Truth Used

    • Expert Consensus / Manual Segmentation: The segmentation accuracy was evaluated by comparing THINQ's output to "expert-labeled brain images" and "gold-standard computer-aided expert manual segmentation."

    8. The Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. It only mentions the "validation dataset" of 645 unique MR images, which is typically distinct from the training set.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It only mentions the process for the validation/test set, which involved "expert-labeled brain images."