Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K233620
    Manufacturer: (not listed)
    Date Cleared: 2024-05-20 (189 days)
    Product Code: (not listed)
    Regulation Number: 892.2050

    Reference & Predicate Devices
    Predicate For: N/A
    Why did this record match? Reference Devices: K223800, K060816

    Intended Use

    MIM software is used by trained medical professionals as a tool to aid in evaluation and information management of digital medical images. The medical image modalities include, but are not limited to, CT, MR, CR, DX, MG, US, SPECT, PET and XA as supported by ACR/NEMA DICOM 3.0. MIM assists in the following indications:

    · Receive, transmit, store, retrieve, display, print, and process medical images and DICOM objects.

    · Create, display, and print reports from medical images.

    · Registration, fusion display, and review of medical images for diagnosis, treatment evaluation, and treatment planning.

    · Evaluation of cardiac left ventricular function and perfusion, including left ventricular end-diastolic volume, end-systolic volume, and ejection fraction.

    · Localization and definition of objects such as tumors and normal tissues in medical images.

    · Creation, transformation, and modification of contours for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.

    · Quantitative and statistical analysis of PET/SPECT brain scans by comparing to other registered PET/SPECT brain scans.

    · Planning and evaluation of permanent implant brachytherapy procedures (not including radioactive microspheres).

    · Calculating absorbed radiation dose as a result of administering a radionuclide.

    When using the device clinically, within the United States, the user should only use FDA approved radiopharmaceuticals. If used with unapproved ones, this device should only be used for research purposes.

    Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Images that are printed to film must be printed using an FDA-approved printer for the diagnosis of digital mammography images. Mammographic images must be viewed on a display system that has been cleared by the FDA for the diagnosis of digital mammography images. The software is not to be used for mammography CAD.

    Device Description

    MIM - Centiloid Scaling extends the features of MIM - Additional Tracers (K223800). It is designed for use in medical imaging and operates on Windows, Mac, and Linux computer systems. The intended use and indications for use in MIM - Centiloid Scaling are unchanged from the predicate device, MIM - Additional Tracers (K223800).

    MIM - Centiloid Scaling is a standalone software application that extends the functionality of the predicate device by providing:

    · Conversion of SUVr calculations to a standardized Centiloid scale for PET-based amyloid burden measurement with Florbetapir (Amyvid®), Florbetaben (Neuraceq®), and Flutemetamol (Vizamyl™)

    AI/ML Overview

    The MIM - Centiloid Scaling device is intended to convert SUVr (Standardized Uptake Value ratio) calculations to a standardized Centiloid scale for PET-based amyloid burden measurement using specific radiopharmaceuticals (Florbetapir (Amyvid®), Florbetaben (Neuraceq®), and Flutemetamol (Vizamyl™)).
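    As background, the Centiloid scale defined by the Centiloid Project anchors 0 to the mean uptake of young amyloid-negative controls and 100 to the mean uptake of typical Alzheimer's disease patients, with non-PiB tracers mapped onto that scale through tracer-specific linear equations. The sketch below illustrates only that general form; it is not MIM's implementation, and the calibration coefficients are left as parameters rather than real published values.

```python
def pib_centiloid(suvr, young_control_mean, typical_ad_mean):
    """Level-1 (PiB) Centiloid anchor equation: 0 CL at the mean SUVr of young
    amyloid-negative controls, 100 CL at the mean SUVr of typical AD patients."""
    return 100.0 * (suvr - young_control_mean) / (typical_ad_mean - young_control_mean)


def tracer_centiloid(suvr, slope, intercept):
    """Tracer-specific linear conversion, CL = slope * SUVr + intercept.
    slope and intercept come from the tracer's calibration against PiB-based
    Centiloid values; real coefficients are intentionally not reproduced here."""
    return slope * suvr + intercept
```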

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly derived from the validation methods and desired outcomes of the Centiloid Project and comparisons to expert visual reads.

    | Criterion | Reported Device Performance |
    | --- | --- |
    | SUVr Calculation Accuracy (against GAAIN-published values) | |
    | - Linear regression R² for GAAIN Regions (across all 3 tracers) | > 0.97 |
    | - Linear regression R² for Clark Regions (across all 3 tracers) | > 0.96 |
    | | (Comparable to Navitsky et al.²: GAAIN R² = 0.89, Clark R² = 0.90) |
    | Centiloid Conversion Equation Validation | |
    | - Linear regression R² for MIM-calculated SUVr (Clark regions) vs. GAAIN-published SUVr (PiB scans), across all 3 tracers | > 0.91 (Acceptance criterion: R² > 0.70) |
    | Centiloid Calculation Accuracy (against GAAIN-published Centiloid values) | |
    | - Linear regression R² for Amyvid | 0.97 |
    | - Linear regression R² for Neuraceq | 0.98 |
    | - Linear regression R² for Vizamyl | 0.96 |
    | - Bland-Altman bias | Minimal (< 1.51 Centiloids for all tracers), no trending differences |
    | Accuracy against Consensus Expert Visual Reads (Binary Classification) | |
    | - Combined Overall Accuracy | 95.1% (range 92.0-98.7%) |
    | - Combined Kappa | 0.90 (range 0.84-0.97) |
    | - Specific Accuracy for Amyvid | Overall: 92.0%, Kappa: 0.84, Negative Read: 90.7%, Positive Read: 93.5% |
    | - Specific Accuracy for Neuraceq | Overall: 95.4%, Kappa: 0.90, Negative Read: 92.3%, Positive Read: 98.3% |
    | - Specific Accuracy for Vizamyl | Overall: 98.7%, Kappa: 0.97, Negative Read: 97.2%, Positive Read: 100.0% |
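    For orientation, the table mixes three kinds of statistics: regression R² against reference values, Bland-Altman bias, and accuracy/kappa agreement with binary expert reads. The sketch below shows how the latter two are conventionally computed from paired data; it is a generic illustration with assumed inputs, not the sponsor's analysis code.

```python
import numpy as np


def bland_altman_bias(device_cl, reference_cl):
    """Mean difference (bias) between paired device and reference Centiloid values."""
    diff = np.asarray(device_cl, dtype=float) - np.asarray(reference_cl, dtype=float)
    return diff.mean()


def accuracy_and_kappa(device_positive, expert_positive):
    """Overall accuracy and Cohen's kappa for binary (amyloid positive/negative) reads."""
    a = np.asarray(device_positive, dtype=bool)
    b = np.asarray(expert_positive, dtype=bool)
    observed = np.mean(a == b)
    # Chance agreement: both positive by chance plus both negative by chance.
    expected = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa
```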

    2. Sample Sizes Used for the Test Set and Data Provenance

    The testing involved several datasets:

    • SUVr Calculation & Centiloid Conversion Equation Validation:
      • Test set: "Tracer-specific patient cohorts available from the GAAIN database." No specific number of subjects is provided in this section, but it is implied to be diverse enough to establish robust linear regressions.
      • Data Provenance: GAAIN (Global Alzheimer's Association Interactive Network) database. GAAIN is a multi-national effort, so the data likely originate from multiple countries. Whether the data are retrospective or prospective is not explicitly stated, but large research databases such as GAAIN typically contain retrospective patient data.
    • Centiloid Calculation Accuracy (against GAAIN-published Centiloid values):
      • Test set: "Tracer data cohorts from the GAAIN database." Again, specific numbers are not given but are implied to be from the same cohorts used in the validation above.
      • Data Provenance: GAAIN database. Multi-national, likely retrospective.
    • Accuracy against Consensus Expert Visual Reads:
      • Test Set:
        • Amyvid: 100 scans from the ADNI2 study.
        • Neuraceq: 109 scans from a multi-center Phase II clinical trial.
        • Vizamyl: 79 scans from a multi-center Phase II clinical trial.
      • Data Provenance:
        • ADNI2: Alzheimer's Disease Neuroimaging Initiative, primarily US-based, prospective and longitudinal study.
        • Multi-center Phase II clinical trials: These are typically prospective studies conducted in multiple sites, often internationally, but specific countries are not mentioned for Neuraceq and Vizamyl beyond being "multi-center."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document mentions "consensus expert visual reads as the standard of truth" for the accuracy assessment of Centiloid quantification. However, it does not explicitly state:

    • The number of experts who performed these visual reads.
    • The qualifications of these experts (e.g., "radiologist with 10 years of experience").

    It refers to various clinical trials and studies (ADNI2, Phase II clinical trials for Neuraceq and Vizamyl) which typically involve trained medical professionals for such assessments, but precise details about the adjudicating experts for the visual reads are missing from this summary.

    4. Adjudication Method for the Test Set

    The document uses the term "consensus expert visual reads." This implies that multiple experts reviewed the scans and reached an agreement on the amyloid status. However, the specific adjudication method (e.g., 2-out-of-3 majority vote, discussion to consensus, initial independent reads followed by consensus panel) is not explicitly stated.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size of how much human readers improve with AI vs without AI assistance

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described in this document. The study described focuses on the standalone performance of the algorithm in calculating Centiloid values and comparing these to established quantitative methods (GAAIN-published values) and clinical expert visual reads. There is no mention of human readers using or not using the AI (MIM - Centiloid Scaling) and measuring their performance improvement.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, a standalone study was done. The entire "Testing and Performance Data" section describes the performance of the MIM - Centiloid Scaling algorithm by itself:

    • It calculates SUVr values and compares them to published GAAIN values.
    • It generates Centiloid conversion equations using linear regression (a simplified sketch of this step appears below).
    • It calculates Centiloid values and compares them to GAAIN-published Centiloid values.
    • It compares its Centiloid quantification results to "consensus expert visual reads."

    In all these cases, the MIM - Centiloid Scaling algorithm is operating independently to produce results that are then compared against established standards or expert interpretations.
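    The regression step mentioned in the list above reduces to an ordinary least-squares fit over a calibration cohort, with R² reported as the acceptance metric. The sketch below is a simplified, assumed version of that step (hypothetical function name, NumPy only), not the actual MIM workflow.

```python
import numpy as np


def fit_conversion_equation(mim_suvr, reference_values):
    """Fit reference ~ slope * SUVr + intercept over a calibration cohort and
    report the goodness of fit (R^2). Returns (slope, intercept, r_squared)."""
    x = np.asarray(mim_suvr, dtype=float)
    y = np.asarray(reference_values, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```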

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The study uses a combination of ground truth types:

    • Quantitative Reference Standard: For SUVr calculation and the initial Centiloid calculation accuracy, the "GAAIN-published SUVr values" and "GAAIN-published Centiloid values" are used as the reference standard. These are highly standardized and accepted quantitative metrics in the field.
    • Expert Consensus (Binary Classification): For the assessment of accuracy against visual interpretations, "consensus expert visual reads" are used. This typically involves human experts making a qualitative judgment (e.g., amyloid positive/negative) which serves as the ground truth for that part of the evaluation.

    8. The Sample Size for the Training Set

    The document does not explicitly mention the sample size for the training set. The described testing focuses on validation using external datasets (GAAIN, ADNI2, clinical trials). It states that the device "extends the features of MIM - Additional Tracers" but does not detail how the Centiloid scaling functionality itself was developed or trained if it involved machine learning models requiring a specific training dataset. The methodology references "The Centiloid Project," which defines a standardized method, implying the device is implementing this method rather than purely learning from data in a traditional machine learning sense.

    9. How the Ground Truth for the Training Set Was Established

    Since the document does not explicitly describe a "training set" for the Centiloid Scaling feature (but rather implementation and validation of a standardized project), this information is not provided. The ground truth for the validation process is, as described above, GAAIN-published values and expert consensus visual reads.


    K Number: K223800
    Manufacturer: (not listed)
    Date Cleared: 2023-01-17 (29 days)
    Product Code: (not listed)
    Regulation Number: 892.2050

    Reference & Predicate Devices
    Predicate For: (not listed)
    Why did this record match? Reference Devices: K060816

    Intended Use

    MIM software is used by trained medical professionals as a tool to aid in evaluation and information management of digital medical images. The medical image modalities include, but are not limited to, CT, MRI, CR, DX, MG, US, SPECT, PET and XA as supported by ACR/NEMA DICOM 3.0. MIM assists in the following indications:

    · Receive, transmit, store, retrieve, display, print, and process medical images and DICOM objects.

    · Create, display and print reports from medical images.

    · Registration, fusion display, and review of medical images for diagnosis, treatment evaluation, and treatment planning.

    · Evaluation of cardiac left ventricular function and perfusion, including left ventricular end-systolic volume, and ejection fraction.

    · Localization and definition of objects such as tumors and normal tissues in medical images.

    · Creation, transformation, and modification of contours for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.

    · Quantitative and statistical analysis of PET/SPECT brain scans by comparing to other registered PET/SPECT brain scans.

    · Planning and evaluation of permanent implant brachytherapy procedures (not including radioactive microspheres).

    · Calculating absorbed radiation dose as a result of administering a radionuclide.

    When using this device clinically within the United States, the user should only use FDA-approved radiopharmaceuticals. If used with unapproved ones, this device should only be used for research purposes.

    Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Images that are printed to film must be printed using an FDA-approved printer for the diagnosis of digital mammography images. Mammographic images must be viewed on a display system that has been cleared by the FDA for the diagnosis of digital mammography images. The software is not to be used for mammography CAD.

    Device Description

    MIM - Additional Tracers is an expansion of the standalone software application MIM - On Linux (K190379). The MIM - Additional Tracers Indications for Use have not been modified, and the Intended Use is the same as in MIM - On Linux. Because the change consists of adding tracers to the standalone MIM software, engineering drawings, schematics, etc. are not applicable to the device.

    These Indications for Use include quantitative and statistical analysis of PET/SPECT brain scans by comparing to other registered PET/SPECT brain scans. MIM - Additional Tracers includes the above features and capabilities and adds support for new PET/SPECT tracers. While MIM - On Linux supported FDG and HMPAO tracers, this Special 510(k) submission adds tracer support for Amyvid™ (Florbetapir), Vizamyl™ (Flutemetamol), Neuraceq™ (Florbetaben), and DaTscan™ (Ioflupane).

    MIM - Additional Tracers operates on Windows, Mac, and Linux computer systems.

    AI/ML Overview

    The provided text describes the acceptance criteria and a study proving the device meets those criteria, specifically for the "MIM - Additional Tracers" software (K223800). The key information is extracted below.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly list numerical acceptance criteria. Instead, it describes a validation process focused on the accuracy of template registration and quantitative analysis for the newly supported PET/SPECT tracers. The "reported device performance" is a qualitative statement of approval by experts.

    | Acceptance Criteria (Inferred from testing) | Reported Device Performance |
    | --- | --- |
    | Accurate template registration for each tracer (Amyvid™, Vizamyl™, Neuraceq™, DaTscan™) and corresponding clinical use case. | Assessed and approved by a radiologist and MIM technical experts. Risk mitigation for automatic registration error is built in, with adjustment and verification steps. |
    | Accurate placement of reference and analysis regions (specifically for DaTscan™). | Verified with built-in affine registration to the template and individual hemisphere verification and adjustment. |
    | Creation of accurate normal patient databases for each tracer. | Alignment to template space and accuracy of analysis contours for each tracer were approved by a radiologist and MIM technical experts before inclusion. |
    | Quantitative analysis results on representative population scans align with expert reads. | Compared to expert reads and reviewed by physicians and MIM technical experts. Discrepancies only acceptable for misregistered, borderline, or poor-quality scans. |
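    In practice, "quantitative and statistical analysis of PET/SPECT brain scans by comparing to other registered PET/SPECT brain scans" generally means comparing a patient's regional uptake, after registration to template space, against a tracer-specific normals database. The sketch below is a generic illustration of that kind of comparison under assumed data structures, not MIM's implementation.

```python
import numpy as np


def regional_z_scores(patient_regions, normals_database):
    """Compare a patient's regional uptake values (already registered to template
    space) against a tracer-specific normals database.

    patient_regions:   {region_name: mean uptake, e.g. SUVr or SBR}
    normals_database:  {region_name: array of values from normal subjects}
    Returns {region_name: z-score relative to the normals database}."""
    z = {}
    for region, value in patient_regions.items():
        normals = np.asarray(normals_database[region], dtype=float)
        z[region] = (value - normals.mean()) / normals.std(ddof=1)
    return z
```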

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: "Normal patient scans were curated to span demographics appropriate for each tracer." The exact number of scans is not specified.
    • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). It is implied that these are existing patient scans curated for the purpose of creating normal databases and performing quantitative analysis.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: A "radiologist" (singular) and "MIM technical experts" (plural) were involved. The exact number of MIM technical experts is not specified.
    • Qualifications: "Radiologist" and "MIM technical experts" are provided as qualifications. No specific experience levels (e.g., "10 years of experience") are given for the radiologist.

    4. Adjudication Method for the Test Set

    The adjudication method appears to be a consensus-based approach with expert review and approval. For template registration accuracy, a radiologist and MIM technical experts assessed and approved. For normal patient database creation, the same group approved. For quantitative analysis, results were compared to expert reads and reviewed by physicians and MIM technical experts. Discrepancies were noted and deemed acceptable only under specific conditions (misregistered, borderline, or poor-quality scans). This suggests a process where expert input directly defines or validates the acceptable outcomes.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study is mentioned. The study described focuses on the standalone performance of the software with expert review, not on human readers' improvement with AI assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone performance study was done. The description indicates that the software's template registration accuracy and quantitative analysis results were assessed by experts. The "risk mitigation for automatic registration error" built into the software, which includes "registration adjustment and verification steps" by the user, implies a workflow involving a human in the loop. However, the initial assessment and comparison of quantitative results to expert reads suggest an evaluation of the algorithm's output independently, even if human verification is part of the clinical workflow.

    7. Type of Ground Truth Used

    The ground truth was established through expert consensus/reads. For template registration and contour accuracy, a radiologist and MIM technical experts approved the alignment and contours. For quantitative analysis, results were compared to "expert reads" and reviewed by physicians and MIM technical experts.

    8. Sample Size for the Training Set

    The document does not explicitly mention a separate "training set" or its sample size. The description focuses on creating "normal patient scans...curated to span demographics appropriate for each tracer" which were then aligned to template space to create "databases for each tracer." These databases could be considered analogous to a reference/training set for the software's internal operations or for comparison, but it's not explicitly labeled as such for a machine learning model.

    9. How the Ground Truth for the Training Set Was Established

    If the "normal patient scans" and "databases for each tracer" are considered part of a training or reference set, then their ground truth was established by:

    • Expert Approval: "Alignment to template space and accuracy of analysis contours for each tracer were approved by a radiologist and MIM technical experts before each series could be included in the normals database."
