
510(k) Data Aggregation

    K Number: K232765
    Date Cleared: 2024-02-29 (174 days)
    Product Code:
    Regulation Number: 892.1000
    Reference Devices: K202014, K221733, K220575

    Intended Use

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    The subject device, MAGNETOM Cima.X Fit with software syngo MR XA61A, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA50A (K213693).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Cima.X Fit with syngo MR XA61:

    Hardware

    New Hardware:

    • 3D Camera

    Modified Hardware:

    • Host computers (syngo MR Acquisition Workplace (MRAWP) and syngo MR Workplace (MRWP))
    • MaRS (Measurement and Reconstruction System)
    • Gradient Coil
    • Cover
    • Cooling/ACSC
    • SEP
    • GPA
    • RFCEL Temp
    • Body Coil
    • Tunnel light

    Software

    New Features and Applications:

    • GRE_PC
    • Physio logging
    • Deep Resolve Boost HASTE
    • Deep Resolve Boost EPI Diffusion
    • Open Recon
    • Ghost reduction (DPG)
    • Fleet Ref Scan
    • Manual Mode
    • SAMER
    • MR Fingerprinting (MRF)

    Modified Features and Applications:

    • BEAT_nav (re-naming only)
    • myExam Angio Advanced Assist (Test Bolus)
    • Beat Sensor (all sequences)
    • Stimulation monitoring
    • Complex Averaging

    AI/ML Overview

    The provided text does not contain the acceptance criteria or the comprehensive study details requested for the "MAGNETOM Cima.X Fit" device, in particular point-by-point information on a multi-reader multi-case (MRMC) comparative effectiveness study or specific quantitative acceptance criteria for its AI features such as Deep Resolve Boost or Deep Resolve Sharp.

    The document is a 510(k) summary for a Magnetic Resonance Diagnostic Device (MRDD), highlighting its substantial equivalence to a predicate device. While it mentions AI features and their training/validation, it does not provide the detailed performance metrics or study design to fully answer your request.

    Here's what can be extracted based on the provided text, and where information is missing:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document mentions that the impact of the AI networks (Deep Resolve Boost and Deep Resolve Sharp) has been characterized by "several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM)," and evaluated by "visual comparisons to evaluate e.g., aliasing artifacts, image sharpness and denoising levels" and "perceptual loss." For Deep Resolve Sharp, "an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp" was also conducted.

    However, specific numerical acceptance criteria (e.g., PSNR > X, SSIM > Y), or the actual reported performance values against these criteria are not provided in the text. The document states that the conclusions from the non-clinical data suggest that the features bear an equivalent safety and performance profile to that of the predicate device, but no quantitative data to support this for the AI features is included in this summary.

    Deep Resolve Boost
    • Acceptance criteria (no numerical values stated): PSNR (implied to be high), SSIM (implied to be high), visual comparisons (e.g., absence of aliasing artifacts, good image sharpness, effective denoising levels).
    • Reported performance: impact characterized by these metrics and visual comparisons; claims an equivalent safety and performance profile to the predicate device. No specific quantitative performance values (e.g., actual PSNR/SSIM scores) are reported in this document.

    Deep Resolve Sharp
    • Acceptance criteria (no numerical values stated): PSNR (implied to be high), SSIM (implied to be high), perceptual loss, visual rating, image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp.
    • Reported performance: impact characterized by these metrics, verified and validated by in-house tests; claims an equivalent safety and performance profile to the predicate device. No specific quantitative performance values are reported in this document.
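As a reference for how the quality metrics named above behave, here is a minimal NumPy sketch of PSNR and a simplified single-window SSIM. Standard SSIM averages over a sliding Gaussian window, and none of the values below come from the submission; this is an illustration of the metrics only.

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(reference, test, data_range=1.0):
    """Single-window SSIM over the whole image (illustration only;
    standard SSIM averages the statistic over a sliding Gaussian window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = reference.mean(), test.mean()
    var_x, var_y = reference.var(), test.var()
    cov_xy = np.mean((reference - mu_x) * (test - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0.0, 1.0)
print(psnr(img, img))         # a perfect match gives infinite PSNR
print(ssim_global(img, img))  # and an SSIM of 1.0
print(psnr(img, noisy), ssim_global(img, noisy))  # both drop with added noise
```

Higher is better for both metrics, which is why the summary can only say they were "implied to be high" in the absence of stated thresholds.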

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):

    • Deep Resolve Boost:
      • Test Set Description: The text mentions that "the performance was evaluated by visual comparisons." It does not explicitly state a separate test set size beyond the validation data used during development. It implies the performance evaluation was based on the broad range of data covered during training and validation.
      • Data Provenance: Not specified (country of origin or retrospective/prospective). The data was "retrospectively created from the ground truth by data manipulation and augmentation."
    • Deep Resolve Sharp:
      • Test Set Description: The text mentions "in-house tests. These tests include visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp." Similar to Deep Resolve Boost, a separate test set size is not explicitly stated. It implies these tests were performed on data from the more than 10,000 high-resolution 2D images used for training and validation.
      • Data Provenance: Not specified (country of origin or retrospective/prospective). The data was "retrospectively created from the ground truth by data manipulation."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified. The document mentions "visual comparisons" and "visual rating" as part of the evaluation but does not detail how many experts were involved or their qualifications.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:

    • No, an MRMC comparative effectiveness study is not mentioned in this document as being performed to establish substantial equivalence for the AI features. The document relies on technical metrics and visual comparisons of image quality to demonstrate equivalence.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    • The evaluations mentioned, using metrics like PSNR, SSIM, perceptual loss, and intensity profile comparisons, are indicative of standalone algorithm performance in terms of image quality. Visual comparisons and ratings would involve human observers, but the primary focus described is the image output quality of the algorithm itself. However, no specific "standalone" study design with comparative performance metrics (e.g., standalone diagnostic accuracy) is detailed.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation." This implies the high-quality, full-data MRI scans before artificial undersampling or noise addition served as the ground truth. This is a technical ground truth based on the original acquired MRI data, not a clinical ground truth like pathology or expert consensus on a diagnosis.
    • Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation." Similar to Deep Resolve Boost, this refers to technical ground truth from high-resolution 2D images before manipulation.

    8. The sample size for the training set:

    • Deep Resolve Boost:
      • TSE: more than 25,000 slices
      • HASTE: pre-trained on the TSE dataset and refined with more than 10,000 HASTE slices
      • EPI Diffusion: more than 1,000,000 slices
    • Deep Resolve Sharp: more than 10,000 high-resolution 2D images.

    9. How the ground truth for the training set was established:

    • Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation. This process includes further under-sampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."
    • Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation. k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation."
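The retrospective degradation quoted for Deep Resolve Boost, discarding k-space lines and adding noise to a fully sampled acquisition, can be sketched as follows. The function name, acceleration factor, and noise level are illustrative assumptions, not values from the submission.

```python
import numpy as np

def make_boost_input(image, accel=2, noise_std=0.02, seed=0):
    """Synthesize a degraded training input from a 'ground truth' slice:
    keep every accel-th k-space line (undersampling) and add complex
    Gaussian noise (lowered SNR). Hypothetical parameters for illustration."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))   # image -> k-space
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[::accel] = True                           # k-space lines to keep
    undersampled = kspace * mask[:, None]          # discard the other lines
    undersampled = undersampled + noise_std * (
        rng.standard_normal(kspace.shape)
        + 1j * rng.standard_normal(kspace.shape))  # add complex noise
    return np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

gt = np.outer(np.hanning(64), np.hanning(64))      # stand-in "acquired" slice
degraded = make_boost_input(gt)                    # network input; gt is the target
```

The original acquisition stays untouched and serves as the training target, which is what makes this a technical rather than clinical ground truth.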

    In summary, the document focuses on the technical aspects of the AI features and their development, demonstrating substantial equivalence through non-clinical performance tests and image quality assessments, rather than clinical efficacy studies with specific diagnostic accuracy endpoints or human-AI interaction evaluations.


    K Number: K231587
    Device Name: MAGNETOM Cima.X
    Date Cleared: 2023-12-18 (201 days)
    Product Code:
    Regulation Number: 892.1000
    Reference Devices: K202014, K221733, K220575

    Intended Use

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    The subject device, MAGNETOM Cima.X with software syngo MR XA61A, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA50A (K213693).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Cima.X with syngo MR XA61:

    Hardware

    New Hardware:

    • 3D Camera

    Modified Hardware:

    • Host computers (syngo MR Acquisition Workplace (MRAWP) and syngo MR Workplace (MRWP))
    • MaRS (Measurement and Reconstruction System)
    • Gradient Coil
    • Cover
    • Cooling/ACSC
    • SEP
    • GPA
    • RFCEL Temp
    • Body Coil
    • Tunnel light

    Software

    New Features and Applications:

    • GRE_PC
    • Physio logging
    • Deep Resolve Boost HASTE
    • Deep Resolve Boost EPI Diffusion
    • Open Recon
    • Ghost reduction (DPG)
    • Fleet Ref Scan
    • Manual Mode
    • SAMER

    Modified Features and Applications:

    • BEAT_nav (re-naming only)
    • myExam Angio Advanced Assist (Test Bolus)
    • Beat Sensor (all sequences)
    • Stimulation monitoring
    • Complex Averaging

    Additionally, the pulse sequence MR Fingerprinting (MRF) (K213805) is now available for the subject device MAGNETOM Cima.X with syngo MR XA61A.

    AI/ML Overview

    The provided text is a 510(k) Summary for a medical device (MAGNETOM Cima.X) and outlines how the device, particularly its AI features, meets acceptance criteria through studies.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the performance characteristics used to evaluate the AI features. The reported device performance is presented in terms of quality metrics and visual evaluations.

    Deep Resolve Boost (TSE, HASTE, EPI Diffusion)
    • Implied acceptance criterion: image quality (e.g., aliasing artifacts, sharpness, denoising levels).
    • Reported performance: characterized by peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM); evaluated by visual comparisons to assess aliasing artifacts, image sharpness, and denoising levels.

    Deep Resolve Sharp
    • Implied acceptance criterion: image quality (e.g., sharpness).
    • Reported performance: characterized by PSNR, SSIM, and perceptual loss; verified and validated by in-house tests, including visual rating and evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp.
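An intensity-profile comparison of the kind described for Deep Resolve Sharp can be illustrated with a toy 1-D edge. The gradient-peak metric and the sigmoid blur model below are illustrative assumptions, not the vendor's method.

```python
import numpy as np

def edge_sharpness(profile):
    """Peak absolute gradient along a 1-D intensity profile; a sharper
    edge produces a larger peak. Illustrative metric, not the vendor's."""
    return np.max(np.abs(np.diff(profile)))

x = np.linspace(-1.0, 1.0, 101)
crisp = (x > 0).astype(float)             # ideal step edge
blurred = 1.0 / (1.0 + np.exp(-x / 0.2))  # the same edge, smoothed
print(edge_sharpness(crisp), edge_sharpness(blurred))
```

Comparing such profiles across the same anatomy with and without the reconstruction is how "increased edge sharpness" can be shown without a clinical endpoint.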

    2. Sample Sizes Used for the Test Set and Data Provenance

    The document does not explicitly delineate a separate "test set" with a dedicated sample size after the training and validation phase for Deep Resolve Boost and Deep Resolve Sharp. Instead, it seems the "validation" mentioned in the context of training and validation data encompasses the evaluation of device performance.

    • Deep Resolve Boost:

      • TSE: More than 25,000 slices (used for training and validation).
      • HASTE: Pre-trained on TSE dataset and refined with more than 10,000 HASTE slices (used for training and validation).
      • EPI Diffusion: More than 1,000,000 slices (used for training and validation).
      • Data Provenance: Retrospectively created from acquired datasets. The document does not specify the country of origin.
    • Deep Resolve Sharp:

      • Sample Size: More than 10,000 high-resolution 2D images (used for training and validation).
      • Data Provenance: Retrospectively created from acquired datasets. The document does not specify the country of origin.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not mention the use of experts to establish ground truth for the test set of the AI features. The "visual comparisons" and "visual rating" described are internal evaluations for feature performance but are not linked to expert-established ground truth for a formal test set described as such.

    4. Adjudication Method for the Test Set

    Not applicable, as no external expert-adjudicated test set is explicitly described for the AI features. The evaluations mentioned (visual comparisons, visual rating) appear to be internal.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No MRMC comparative effectiveness study is mentioned in the provided text for the AI features. The document focuses on the technical performance of the AI algorithms rather than their impact on human reader performance.

    6. If a Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done

    Yes, the performance evaluation for Deep Resolve Boost and Deep Resolve Sharp appears to be standalone algorithm performance. The metrics (PSNR, SSIM, perceptual loss) and visual comparisons/ratings are related to the image quality produced by the algorithm itself, without direct assessment of human-in-the-loop performance.

    7. The Type of Ground Truth Used

    • Deep Resolve Boost: The acquired datasets (MRI raw data or images) were considered the "ground truth" for training and validation. Input data for the AI was then retrospectively created from this ground truth by data manipulation and augmentation (discarding k-space lines, lowering SNR, mirroring k-space data) to simulate different acquisition conditions.
    • Deep Resolve Sharp: The acquired datasets (high-resolution 2D images) were considered the "ground truth" for training and validation. Low-resolution input data for the AI was retrospectively created from this ground truth by cropping k-space data, so the high-resolution data served as the output/ground truth.

    8. The Sample Size for the Training Set

    The document combines training and validation data, so the sample sizes listed in point 2 apply:

    • Deep Resolve Boost:
      • TSE: More than 25,000 slices
      • HASTE: More than 10,000 HASTE slices (refined)
      • EPI Diffusion: More than 1,000,000 slices
    • Deep Resolve Sharp: More than 10,000 high-resolution 2D images

    9. How the Ground Truth for the Training Set Was Established

    • Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation." This implies that the raw, original MRI data or images acquired under standard, full-sampling conditions were considered the reference. The AI was then trained to recover information from artificially degraded or undersampled versions of this ground truth.
    • Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation." Similar to Deep Resolve Boost, the original high-resolution acquired 2D images were used as the ground truth. Low-resolution data was then derived from these high-resolution images to create the input for the AI, with the original high-resolution images serving as the target output (ground truth).
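The k-space center-cropping procedure quoted above for Deep Resolve Sharp can be sketched in NumPy. The function name and keep_frac value are illustrative assumptions, not values from the submission.

```python
import numpy as np

def make_sharp_pair(hires, keep_frac=0.5):
    """Build a (low-resolution input, high-resolution target) training pair
    by keeping only the central part of k-space. keep_frac is a
    hypothetical choice for illustration."""
    kspace = np.fft.fftshift(np.fft.fft2(hires))
    ny, nx = kspace.shape
    ky, kx = int(ny * keep_frac), int(nx * keep_frac)
    y0, x0 = (ny - ky) // 2, (nx - kx) // 2
    center = kspace[y0:y0 + ky, x0:x0 + kx]  # keep central k-space only
    lowres = np.abs(np.fft.ifft2(np.fft.ifftshift(center)))
    return lowres, hires                     # (network input, ground truth)

hires = np.outer(np.hanning(128), np.hanning(128))  # stand-in acquired image
lowres, target = make_sharp_pair(hires)
```

Because the high-resolution acquisition itself is the target, the "ground truth" here is technical (the original image), not a clinical reference such as pathology.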

    K Number: K223343
    Date Cleared: 2023-03-28 (147 days)
    Product Code:
    Regulation Number: 892.1000
    Reference Devices: K221733, K220575

    Intended Use

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    MAGNETOM Amira and MAGNETOM Sempra with syngo MR XA50M include new and modified features compared to the predicate devices MAGNETOM Amira and MAGNETOM Sempra with syngo MR XA12M (K183221, cleared on February 14, 2019).

    AI/ML Overview

    The provided document is a 510(k) summary for the Siemens MAGNETOM Amira and Sempra MR systems, detailing their substantial equivalence to predicate devices. It describes new and modified hardware and software features, including AI-powered "Deep Resolve Boost" and "Deep Resolve Sharp."

    However, the document does not contain the detailed information necessary to fully answer the specific questions about acceptance criteria and a study proving the device meets those criteria, particularly in the context of AI performance. The provided text is a summary for regulatory clearance, not a clinical study report.

    Specifically, it lacks:

    • Concrete, quantifiable acceptance criteria for the AI features (e.g., a specific PSNR threshold that defines "acceptance").
    • A comparative effectiveness study (MRMC) to show human reader improvement with AI assistance.
    • Stand-alone algorithm performance metrics for the AI features (beyond general quality metrics like PSNR/SSIM, which are not explicitly presented as acceptance criteria).
    • Details on expert involvement, adjudication, or ground truth establishment for a test set used for regulatory acceptance, as the "test statistics and test results" section refers to quality metrics and visual inspection, and "clinical settings with cooperation partners" rather than a formal test set for regulatory submission.

    The "Test statistics and test results" section for Deep Resolve Boost mentions "After successful passing of the quality metrics tests, work-in-progress packages of the network were delivered and evaluated in clinical settings with cooperation partners." It also mentions "seven peer-reviewed publications" covering 427 patients which "concluded that the work-in-progress package and the reconstruction algorithm can be beneficially used for clinical routine imaging." This indicates real-world evaluation but does not provide specific acceptance criteria or detailed study results for the regulatory submission itself.

    Based on the provided text, here's what can be extracted and what is missing:

    1. Table of acceptance criteria and reported device performance:

    The document does not explicitly state quantifiable "acceptance criteria" for the AI features (Deep Resolve Boost and Deep Resolve Sharp) that were used for regulatory submission. Instead, it describes general successful evaluation methods:

    For Deep Resolve Boost:
    • Acceptance criteria (inferred/methods used): successful passing of quality metrics tests (PSNR, SSIM); visual inspection to detect potential artifacts; evaluation in clinical settings with cooperation partners; no misinterpretation, alteration, suppression, or introduction of anatomical information.
    • Reported performance: impact characterized by PSNR and SSIM, with visual inspection for artifacts; evaluated in clinical settings with cooperation partners; seven peer-reviewed publications (427 patients on 1.5T and 3T systems, covering prostate, abdomen, liver, knee, hip, ankle, shoulder, hand and lumbar spine) concluded beneficial use for clinical routine imaging; no reported cases of misinterpreted, altered, suppressed, or introduced anatomical information; significant time savings reported in most cases by enabling faster image acquisition.

    For Deep Resolve Sharp:
    • Acceptance criteria (inferred/methods used): successful passing of quality metrics tests (PSNR, SSIM, perceptual loss); in-house visual rating; evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp.
    • Reported performance: impact characterized by PSNR, SSIM, and perceptual loss; verified and validated by in-house tests, including visual rating and intensity profile comparisons; both tests showed increased edge sharpness.

    2. Sample sizes used for the test set and the data provenance:

    The document mixes "training" and "validation" datasets. It doesn't explicitly refer to a separate "test set" for regulatory evaluation with clear sample sizes for that purpose. The "Test statistics and test results" section refers to general evaluations and published studies.

    • "Validation" Datasets (internal validation, not explicitly a regulatory test set):
      • Deep Resolve Boost: 1,874 2D slices
      • Deep Resolve Sharp: 2,057 2D slices
    • Data Provenance (Training/Validation):
      • Source: For Deep Resolve Boost: "in-house measurements and collaboration partners." For Deep Resolve Sharp: "in-house measurements."
      • Origin: Not specified by country.
      • Retrospective/Prospective: "Input data was retrospectively created from the ground truth by data manipulation and augmentation" (for Boost) and "retrospectively created from the ground truth by data manipulation" (for Sharp). This implies the underlying acquired datasets were retrospective.
    • "Clinical Settings" / Publications (Implied real-world evaluation, not a regulatory test set):
      • Deep Resolve Boost: "a total of seven peer-reviewed publications" covering 427 patients
      • Data Provenance: Not specified by origin or retrospective/prospective for these external evaluations.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided in the document. It mentions "visual inspection" and "visual rating," but does not detail the number or qualifications of experts involved in these processes for the "validation" sets or any dedicated regulatory "test set." For the "seven peer-reviewed publications," the expertise of the authors is implied but not detailed as part of the regulatory submission.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:

    A formal MRMC comparative effectiveness study demonstrating human reader improvement with AI assistance is not described in this document. The document focuses on the technical performance of the AI features themselves and their general clinical utility as reported in external publications (e.g., faster imaging, no misinterpretation), but not a comparative study of human performance with and without the AI.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    Yes, the sections on "Test statistics and test results" for both Deep Resolve Boost and Deep Resolve Sharp describe evaluation of the algorithm's performance using quality metrics (PSNR, SSIM, perceptual loss) and visual/intensity profile comparisons. This implies standalone algorithm evaluation. No specific quantifiable results for these metrics are provided as acceptance criteria, only that tests were successfully passed and showed increased sharpness for Deep Resolve Sharp.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

    The ground truth for the AI training and validation datasets is described as:

    • Deep Resolve Boost: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation." This implies that the original, full-quality MR images serve as the ground truth.
    • Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation." Similarly, the original, high-resolution MR images are the ground truth.

    This indicates the ground truth is derived directly from the originally acquired (presumably high-quality/standard) MRI data, rather than an independent clinical assessment like pathology or expert consensus. The AI's purpose is to reconstruct a high-quality image from manipulated or undersampled input, so the "truth" is the original high-quality image.

    8. The sample size for the training set:

    • Deep Resolve Boost: 24,599 2D slices
    • Deep Resolve Sharp: 11,920 2D slices

    Note that the document states: "due to reasons of data privacy, we did not record how many individuals the datasets belong to. Gender, age and ethnicity distribution was also not recorded during data collection."

    9. How the ground truth for the training set was established:

    As described in point 7:

    • Deep Resolve Boost: The "acquired datasets" (original, full-quality MR images) served as the ground truth. Input data for the AI model was then "retrospectively created from the ground truth by data manipulation and augmentation," including undersampling, adding noise, and mirroring k-space data.
    • Deep Resolve Sharp: The "acquired datasets" (original MR images) served as the ground truth. Input data was "retrospectively created from the ground truth by data manipulation," specifically by cropping k-space data so only the center part was used as low-resolution input, with the original full data as the high-resolution output/ground truth.
