
510(k) Data Aggregation

    K Number: K250901
    Date Cleared: 2025-07-22 (118 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices:
    Matched on: Device Name

    Vantage Fortian/Orian 1.5T, MRT-1550, V10.0 with AiCE Reconstruction Processing Unit for MR

    Intended Use

    Vantage Fortian/Orian 1.5T systems are indicated for use as a diagnostic imaging modality that produces cross-sectional transaxial, coronal, sagittal, and oblique images that display anatomic structures of the head or body. Additionally, this system is capable of non-contrast enhanced imaging, such as MRA.

    MRI (magnetic resonance imaging) images correspond to the spatial distribution of protons (hydrogen nuclei) that exhibit nuclear magnetic resonance (NMR). The NMR properties of body tissues and fluids are:

    • Proton density (PD) (also called hydrogen density)
    • Spin-lattice relaxation time (T1)
    • Spin-spin relaxation time (T2)
    • Flow dynamics
    • Chemical Shift

    Depending on the region of interest, contrast agents may be used. When interpreted by a trained physician, these images yield information that can be useful in diagnosis.
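    For background, a standard textbook relation (not text from the submission) shows how these properties shape contrast in a spin-echo acquisition, where TR and TE are the operator-selected repetition and echo times:

    $$ S \;\propto\; \mathrm{PD}\,\bigl(1 - e^{-TR/T_1}\bigr)\, e^{-TE/T_2} $$

    Short TR with short TE emphasizes T1 differences, long TR with long TE emphasizes T2 differences, and long TR with short TE yields proton-density weighting.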

    Device Description

    The Vantage Fortian (Model MRT-1550/WK, WM, WO, WQ)/Vantage Orian (Model MRT-1550/U3, U4, U7, U8) is a 1.5 Tesla Magnetic Resonance Imaging (MRI) System. These Vantage Fortian/Orian models use a short (1.4 m), lightweight (4.1 ton) magnet. They include the Canon Pianissimo™ Sigma and Pianissimo Zen scan-noise-reduction technology. The design of the gradient coil and the whole-body coil of these models provides a maximum field of view of 55 x 55 x 50 cm, and these models include the standard (STD) gradient system.

    The Vantage Orian (Model MRT-1550/UC, UD, UG, UH, UK, UL, UO, UP) is a 1.5 Tesla Magnetic Resonance Imaging (MRI) System. These Vantage Orian models use the same short (1.4 m), lightweight (4.1 ton) magnet and include the Canon Pianissimo™ and Pianissimo Zen scan-noise-reduction technology. The design of the gradient coil and the whole-body coil of these models provides a maximum field of view of 55 x 55 x 50 cm, and these models include the XGO gradient system.

    This system is based upon the technology and materials of previously marketed Canon Medical Systems MRI systems and is intended to acquire and display cross-sectional transaxial, coronal, sagittal, and oblique images of anatomic structures of the head or body. The Vantage Fortian/Orian MRI System is comparable to the current 1.5T Vantage Fortian/Orian MRI System (K240238), cleared April 12, 2024, with the modifications discussed in the AI/ML overview below (primarily the 4D Flow, Zoom DWI, and extended PIQE features).

    AI/ML Overview

    Acceptance Criteria and Study for Canon Medical Systems Vantage Fortian/Orian 1.5T with AiCE Reconstruction Processing Unit for MR

    This document outlines the acceptance criteria and the study conducted to demonstrate that the Canon Medical Systems Vantage Fortian/Orian 1.5T with AiCE Reconstruction Processing Unit for MR (V10.0) device meets these criteria, specifically focusing on the new features: 4D Flow, Zoom DWI, and PIQE.

    The provided text focuses on the updates in V10.0 of the device, which primarily include software enhancements: 4D Flow, Zoom DWI, and an extended Precise IQ Engine (PIQE). The acceptance criteria and testing are described for these specific additions.

    1. Table of Acceptance Criteria and Reported Device Performance

    The general acceptance criterion for all new features appears to be demonstrating clinical acceptability and performance that is either equivalent to or better than conventional methods, maintaining image quality, and confirming intended functionality. Specific quantitative acceptance criteria are not explicitly detailed in the provided document beyond qualitative assessments and comparative statements.

    Feature: 4D Flow
    Acceptance criteria (implied from testing): Velocity measurement, with and without PIQE, of a phantom should meet the acceptance criteria for known flow values. Images in volunteers should demonstrate velocity streamlines consistent with physiological flow.
    Reported device performance: Testing confirmed that the flow velocity of the 4D Flow sequence met the acceptance criteria. Images in volunteers demonstrated velocity streamlines. (A hypothetical illustration of this kind of phantom check follows below.)

    Feature: Zoom DWI
    Acceptance criteria (implied from testing): Effective suppression of wraparound artifacts in the phase-encode (PE) direction. Reduction of image distortion when a smaller PE-FOV is set. Accurate measurement of ADC values.
    Reported device performance: Testing confirmed that Zoom DWI is effective for suppressing wraparound artifacts in the PE direction, that setting a smaller PE-FOV in a Zoom DWI scan can reduce the image distortion level, and that ADC values can be measured accurately.

    Feature: PIQE (bench testing)
    Acceptance criteria (implied from testing): Generate higher in-plane matrix images from low-matrix images. Mitigate ringing artifacts. Maintain similar or better contrast and SNR compared to standard clinical techniques. Achieve sharper edges.
    Reported device performance: Bench testing demonstrated that PIQE generates images with sharper edges while mitigating smoothing and ringing effects and maintaining similar or better contrast and SNR compared to standard clinical techniques (zero-padding interpolation and typical clinical filters).

    Feature: PIQE (clinical image review)
    Acceptance criteria (implied from testing): Images reconstructed with PIQE should be scored clinically acceptable or better by radiologists/cardiologists across various categories (ringing, sharpness, SNR, overall image quality (IQ), and feature conspicuity). PIQE should generate higher in-plane spatial resolution images from lower-resolution images (e.g., tripling the matrix dimensions, a 9x factor). PIQE should contribute to ringing artifact reduction, denoising, and increased sharpness. PIQE should be able to accelerate scanning by reducing the acquisition matrix while maintaining clinical matrix size and image quality. PIQE benefits should be obtainable on regular clinical protocols without requiring acquisition parameter adjustment. Reviewer agreement should be strong.
    Reported device performance: The resulting reconstructions were scored on average at, or above, clinically acceptable, with strong agreement at the "good" and "very good" level in the IQ metrics; the reviewers' scoring confirmed all the specific criteria listed (higher spatial resolution, ringing reduction, denoising, sharpness, acceleration, and applicability to regular protocols).
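    The submission does not give the manufacturer's tolerance or analysis method for the phantom flow check. Purely as a hypothetical illustration of what such an acceptance check can look like, with made-up velocities, tube names, and a 10% tolerance that are not from the document:

```python
# Hypothetical 4D Flow phantom acceptance check. Known velocities, measured
# values, and the 10% tolerance are illustrative assumptions only.
known_velocities_cm_s = {"tube_1": 20.0, "tube_2": 60.0, "tube_3": 100.0}
measured_cm_s = {"tube_1": 20.8, "tube_2": 57.9, "tube_3": 103.4}  # e.g. mean ROI velocity per tube
TOLERANCE = 0.10  # allowed fractional error per tube

def passes(known: float, measured: float, tol: float = TOLERANCE) -> bool:
    """True if the measured velocity is within a fractional tolerance of the known value."""
    return abs(measured - known) / known <= tol

for tube, v_known in known_velocities_cm_s.items():
    v_meas = measured_cm_s[tube]
    status = "PASS" if passes(v_known, v_meas) else "FAIL"
    print(f"{tube}: known {v_known:.1f} cm/s, measured {v_meas:.1f} cm/s -> {status}")
```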

    2. Sample Size Used for the Test Set and Data Provenance

    • 4D Flow & Zoom DWI: Evaluated utilizing phantom images and "representative volunteer images." Specific numbers for volunteers are not provided.
    • PIQE Clinical Image Review Study:
      • Subjects: A total of 75 unique subjects.
      • Scans: Comprising a total of 399 scans.
      • Reconstructions: Each scan was reconstructed multiple ways with or without PIQE, totaling 1197 reconstructions for scoring (see the count check after this list).
      • Data Provenance: Subjects were from two sites in USA and Japan. The study states that although the dataset includes subjects from outside the USA, the population is expected to be representative of the intended US population due to PIQE being an image post-processing algorithm that is not disease-specific and not dependent on factors like population variation or body habitus.
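    For reference, the counts above are mutually consistent if each scan was reconstructed three ways, a deduction from the totals rather than an explicit statement in the document:

    $$ 1197 \text{ reconstructions} = 399 \text{ scans} \times 3 \text{ reconstructions per scan}, \qquad \frac{399 \text{ scans}}{75 \text{ subjects}} \approx 5.3 \text{ scans per subject.} $$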

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • PIQE Clinical Image Review Study:
      • Number of Experts: 14 USA board-certified radiologists/cardiologists.
      • Distribution: 3 experts per anatomy (Body, Breast, Cardiac, Musculoskeletal (MSK), and Neuro).
      • Qualifications: "USA board-certified radiologists/cardiologists." Specific years of experience are not mentioned.

    4. Adjudication Method for the Test Set

    • PIQE Clinical Image Review Study: The study describes a randomized, blinded clinical image review study. Images reconstructed with either the conventional method or the new PIQE method were randomized and blinded to the reviewers. Reviewers scored the images independently using a modified 5-point Likert scale. Analytical methods used included Gwet's Agreement Coefficient for reviewer agreement and Generalized Estimating Equations (GEE) for differences between reconstruction techniques, implying a statistical assessment of agreement and comparison across reviewers rather than a simple consensus adjudication method (e.g., 2+1, 3+1).
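    The submission names Gwet's Agreement Coefficient but does not show the computation. As a rough illustration only, here is a minimal sketch of the unweighted Gwet AC1 statistic for a design in which every subject is scored by the same number of raters; the function, the toy scores, and the choice of the unweighted variant (ordinal Likert data is often analyzed with a weighted variant) are assumptions, not details from the study:

```python
# Illustrative sketch of Gwet's AC1 (unweighted) for a fixed number of raters
# per subject. Toy data; not the manufacturer's analysis.
import numpy as np

def gwet_ac1(ratings: np.ndarray, n_categories: int = 5) -> float:
    """ratings: (n_subjects, n_raters) integers in 0..n_categories-1."""
    n_subjects, n_raters = ratings.shape
    # counts[i, q] = number of raters who put subject i into category q
    counts = np.stack([(ratings == q).sum(axis=1) for q in range(n_categories)], axis=1)
    # Observed agreement: average pairwise agreement across subjects
    p_a = ((counts * (counts - 1)).sum(axis=1) / (n_raters * (n_raters - 1))).mean()
    # Chance agreement as defined by Gwet
    pi_q = counts.mean(axis=0) / n_raters
    p_e = (pi_q * (1 - pi_q)).sum() / (n_categories - 1)
    return float((p_a - p_e) / (1 - p_e))

# Toy example: 6 images, 3 raters, 5-point Likert scores coded 0..4
scores = np.array([[3, 3, 4], [4, 4, 4], [2, 3, 3], [4, 4, 3], [3, 3, 3], [4, 4, 4]])
print(round(gwet_ac1(scores), 3))
```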

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC comparative effectiveness study was done for PIQE.
    • Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance: The document states that "the Reviewers' scoring confirmed that: (a) PIQE generates higher spatial in-plane resolution images from lower resolution images (with the ability to triple the matrix dimensions in both in-plane directions, i.e. a factor of 9x); (b) PIQE contributes to ringing artifact reduction, denoising and increased sharpness; (c) PIQE is able to accelerate scanning by reducing the acquisition matrix only, while maintaining clinical matrix size and image quality; and (d) PIQE benefits can be obtained on regular clinical protocols without requiring acquisition parameter adjustment."
      • While it reports positive outcomes ("scored on average at, or above, clinically acceptable," "strong agreement at the 'good' and 'very good' level"), it does not provide a quantitative effect size (e.g., AUC difference, diagnostic accuracy improvement percentage) of how much human readers improve with AI (PIQE) assistance compared to without it. The focus is on the quality of PIQE-reconstructed images as perceived by experts, rather than the direct impact on diagnostic accuracy or reader performance metrics. It confirms that the performance is "similar or better" compared to conventional methods.

    6. Standalone (Algorithm Only) Performance Study

    • Yes, standalone performance was conducted for PIQE and other features.
      • 4D Flow and Zoom DWI: Evaluated using phantom images, which represents standalone, objective measurement of the algorithm's performance against known physical properties.
      • PIQE: Bench testing was performed on typical clinical images to evaluate metrics like Edge Slope Width (sharpness), Ringing Variable Mean (ringing artifacts), Signal-to-Noise ratio (SNR), and Contrast Ratio. This is an algorithmic-only evaluation against predefined metrics, without direct human interpretation as part of the performance metric. (A generic illustration of such metrics appears after this list.)
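    The exact metric definitions used by the manufacturer are not given in the document. As a generic illustration only, ROI-based metrics of this general kind can be computed as below; the ROI choices, the 10%-90% edge-width convention, and the function names are assumptions:

```python
# Generic illustrations of bench-style image quality metrics (not the
# manufacturer's definitions): SNR, contrast ratio, and an edge slope width.
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Mean signal intensity divided by the standard deviation of a background ROI."""
    return float(signal_roi.mean() / noise_roi.std())

def contrast_ratio(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Ratio of mean intensities between two tissue ROIs."""
    return float(roi_a.mean() / roi_b.mean())

def edge_slope_width(profile: np.ndarray, spacing_mm: float = 1.0) -> float:
    """10%-90% rise distance (mm) of a monotonically rising profile taken across an edge."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    i10 = int(np.argmax(p >= 0.1))  # first sample at or above 10%
    i90 = int(np.argmax(p >= 0.9))  # first sample at or above 90%
    return abs(i90 - i10) * spacing_mm

rng = np.random.default_rng(1)
tissue = 100 + rng.normal(0, 2, (20, 20))    # toy bright-tissue ROI
background = rng.normal(0, 2, (20, 20))      # toy background ROI
print(round(snr(tissue, background), 1))
```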

    7. Type of Ground Truth Used

    • 4D Flow & Zoom DWI:
      • Phantom Studies: Known physical values (e.g., known flow values for velocity measurement, known distortion levels, known ADC values).
    • PIQE:
      • Bench Testing: Quantitative imaging metrics derived from the images themselves (Edge Slope Width, Ringing Variable Mean, SNR, Contrast Ratio) are used to assess the impact of the algorithm. No external ground truth (like pathology) is explicitly mentioned here, as the focus is on image quality enhancement.
      • Clinical Image Review Study: Expert consensus/opinion (modified 5-point Likert scale scores from 14 board-certified radiologists/cardiologists) was used as the ground truth for image quality, sharpness, ringing, SNR, and feature conspicuity, compared against images reconstructed with conventional methods. No pathology or outcomes data is mentioned as ground truth.

    8. Sample Size for the Training Set

    The document explicitly states that the 75 unique subjects used in the PIQE clinical image review study were "separate from the training data sets." However, it does not specify the sample size for the training set used for the PIQE deep learning model.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set for PIQE was established. It only mentions that the study test data sets were separate from the training data sets.


    K Number: K243321
    Date Cleared: 2025-02-07 (107 days)
    Product Code:
    Regulation Number: 876.1500
    Reference & Predicate Devices:
    Matched on: Device Name

    Endoscopic Video Image Processor (RP-IPD-V1000F)

    Intended Use

    The Endoscopic Video Image Processor is used in conjunction with the Single-Use Video Flexible Cysto-Nephroscope (Models: RP-U-C01F, RP-U-C01FS) to process the images collected by the video endoscope and send them to the display, and provide power for the endoscope.

    Device Description

    The Endoscopic Video Image Processor is a video processing system intended for use during endoscopic procedures. It receives and processes image signals from a compatible video endoscope and produces live video images during endoscopic procedures. Apart from the image processing functions, it also provides the power supply for the endoscope.

    The Endoscopic Video Image Processor is a reusable device. It does not require sterilization before use, as there is no direct/indirect patient contact. The device needs to be cleaned and disinfected before use, and the cleaning and disinfection method is outlined in the Instructions for Use.

    AI/ML Overview

    The provided text describes the Endoscopic Video Image Processor (RP-IPD-V1000F) as a video processing system for endoscopic procedures. It details its functions, such as processing image signals from compatible video endoscopes, producing live video images, and providing power to the endoscope. The document specifies that the device does not require sterilization as there is no direct/indirect patient contact but needs cleaning and disinfection before use.

    Here's an analysis of the provided information regarding acceptance criteria and the supporting study:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of "acceptance criteria" with specific pass/fail thresholds alongside "reported device performance" in a quantitative manner as typically expected. Instead, it lists general performance characteristics that were tested and states that the "Performance Testing demonstrated that the subject device and the predicate device have similar performance, and the subject device is as safe and effective as the predicate device."

    Here's a reconstruction based on the available information:

    Because the reported result is the same for every characteristic, the table collapses to a single statement: for each of the following (implied) acceptance criteria, testing showed performance similar to the predicate device:

    • Direction of view
    • Field of view
    • Depth of field
    • Resolution
    • Signal-to-noise ratio
    • Geometric distortion
    • Image intensity uniformity
    • Dynamic range
    • Color performance
    • Image frame frequency
    • System delay

    The standards referenced are:

    • ISO 8600-1:2015 Endoscopes - Medical endoscopes and endotherapy devices. General requirements
    • ISO 8600-3:2019 Endoscopes. Medical endoscopes and endotherapy devices. Part 3: Determination of field of view and direction of view of endoscopes with optics

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not provide information on the sample size used for the test set or the data provenance (e.g., country of origin, retrospective/prospective). It mentions "the Cysto-Nephroscope System" as the subject of testing.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not provide information regarding the number of experts used or their qualifications for establishing ground truth. The testing appears to be primarily technical performance testing against ISO standards rather than a clinical evaluation requiring expert interpretation of medical images.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method. Given that the testing appears to be technical performance testing of the device's imaging capabilities, a traditional adjudication method for medical image interpretation would likely not be applicable.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document explicitly states "IX. Clinical Evidence N/A." This indicates that no human factors or comparative effectiveness study involving human readers with and without AI assistance was conducted or provided for this submission. The device is an image processor, not an AI-assisted diagnostic tool.

    6. Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance)

    Yes, a standalone performance study was done in the form of "non-clinical performance testing." This testing evaluated the device's technical specifications and imaging capabilities (e.g., resolution, signal-to-noise ratio, color performance, image frame frequency, system delay) against relevant ISO standards. This is considered standalone performance as it assesses the device's intrinsic functional properties independent of human interaction or a clinical scenario.
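    None of the ISO test procedures are reproduced in the document. Purely as a generic illustration of the kind of objective measurement involved, and not the ISO 8600 method itself, percent geometric distortion can be expressed as the deviation of a measured grid spacing from its true value; the grid geometry and values below are made up:

```python
# Generic percent-distortion measurement from a calibration grid target.
# Not the ISO 8600 procedure; spacings and values are illustrative only.
def percent_distortion(true_mm: float, measured_mm: float) -> float:
    """Signed percent deviation of a measured grid spacing from its true value."""
    return 100.0 * (measured_mm - true_mm) / true_mm

center = percent_distortion(true_mm=5.0, measured_mm=5.02)  # near the image center
edge = percent_distortion(true_mm=5.0, measured_mm=4.71)    # near the image edge
print(f"center: {center:+.1f}%  edge: {edge:+.1f}%")  # barrel distortion appears as negative values at the edge
```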

    7. Type of Ground Truth Used

    The ground truth for the non-clinical performance testing was based on technical specifications and compliance with international standards (ISO 8600-1:2015 and ISO 8600-3:2019), rather than expert consensus on medical findings, pathology, or outcomes data, as this device primarily processes images.

    8. Sample Size for the Training Set

    The document does not mention a training set. This is expected as the device described is an "Endoscopic Video Image Processor" and is not presented as an AI/ML-driven diagnostic algorithm that would typically require a training set. It processes existing video signals rather than performing analysis for diagnostic insights.

    9. How the Ground Truth for the Training Set Was Established

    Since there is no mention of a training set, this information is not applicable.


    K Number: K243335
    Date Cleared: 2025-01-07 (75 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices:
    Matched on: Device Name

    Vantage Galan 3T, MRT-3020, V10.0 with AiCE Reconstruction Processing Unit for MR

    Intended Use

    Vantage Galan 3T systems are indicated for use as a diagnostic imaging modality that produces cross-sectional transaxial, coronal, sagittal, and oblique images that display anatomic structures of the head or body. Additionally, this system is capable of non-contrast enhanced imaging, such as MRA.

    MRI (magnetic resonance imaging) images correspond to the spatial distribution of protons (hydrogen nuclei) that exhibit nuclear magnetic resonance (NMR). The NMR properties of body tissues and fluids are:

    • Proton density (PD) (also called hydrogen density)
    • Spin-lattice relaxation time (T1)
    • Spin-spin relaxation time (T2)
    • Flow dynamics
    • Chemical Shift

    Depending on the region of interest, contrast agents may be used. When interpreted by a trained physician, these images yield information that can be useful in diagnosis.

    Device Description

    The Vantage Galan (Model MRT-3020) is a 3 Tesla Magnetic Resonance Imaging (MRI) System, previously cleared under K241496. This system is based upon the technology and materials of previously marketed Canon Medical Systems MRI systems and is intended to acquire and display cross-sectional transaxial, coronal, sagittal, and oblique images of anatomic structures of the head or body.

    AI/ML Overview

    This document describes a 510(k) premarket notification for the Vantage Galan 3T, MRT-3020, V10.0 with AiCE Reconstruction Processing Unit for MR. This submission concerns a modification to an already cleared device, primarily involving the addition of a standard gradient system and the extension of the Precise IQ Engine (PIQE) to new scan families, weightings, and anatomical areas.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of quantitative acceptance criteria for PIQE performance. Instead, it describes acceptance in qualitative terms based on expert review.

    Metric/Category: Image quality metrics (bench testing)
    Acceptance criteria (implicit): Improvement in sharpness, mitigation of ringing, and maintenance or improvement of SNR and contrast compared to standard techniques.
    Reported device performance (PIQE): Generates images with sharper edges, mitigates smoothing and ringing effects, and maintains similar or better contrast and SNR compared to zero-padding interpolation and typical clinical filters.

    Metric/Category: Clinical image review (Likert scale)
    Acceptance criteria (implicit): Scored "at or above, clinically acceptable" on average, with strong agreement at the "good" and "very good" level for all IQ metrics.
    Reported device performance (PIQE): All reconstructions scored on average at, or above, clinically acceptable, and exhibited strong agreement at the "good" and "very good" level for all IQ metrics (ringing, sharpness, SNR, overall IQ, feature conspicuity).

    Metric/Category: Functionality
    Acceptance criteria (implicit): Generate higher in-plane spatial resolution from lower-resolution images (up to a 9x factor); reduce ringing artifacts, denoise, and increase sharpness; accelerate scanning by reducing the acquisition matrix while maintaining clinical matrix size and image quality; obtain benefits on regular clinical protocols without requiring acquisition parameter adjustment.
    Reported device performance (PIQE): PIQE achieves these functionalities as confirmed by expert review and the technical description.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 106 unique subjects.
    • Data Provenance: Two sites in the USA and one in Japan. This data is described as "separate from the training data sets." The document states that the multinational study population is expected to be representative of the intended US population for PIQE, as PIQE is an image post-processing algorithm not disease-specific or dependent on acquisition parameters that might be affected by population variation. Comparisons were internal (each subject as its own control).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: 14 USA board-certified radiologists and cardiologists (3 reviewers per anatomy).
    • Qualifications: "USA board-certified radiologists and cardiologists." Specific experience levels (e.g., years of experience) are not provided.

    4. Adjudication Method for the Test Set

    The document describes a scoring process by multiple reviewers but does not specify a formal adjudication method (e.g., 2+1, 3+1). It states: "scored by 3 reviewers per anatomy in various clinically-relevant categories... Reviewer scoring data was analyzed for reviewer agreement and differences between reconstruction techniques using Gwet's Agreement Coefficient and Generalized Estimating Equations, respectively." This suggests that the scores from the three reviewers were aggregated and analyzed statistically, rather than undergoing a consensus or tie-breaking adjudication process for each individual case.
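    The submission names Generalized Estimating Equations but gives no model specification. Purely to illustrate how such an analysis is commonly set up, a sketch using statsmodels follows; the column names, the Gaussian working model, the exchangeable correlation structure, and the toy scores are assumptions, not details from the study:

```python
# Hypothetical GEE comparison of Likert scores between reconstruction methods,
# treating repeated scores from the same subject as correlated.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per scored image: subject, reconstruction method, 5-point Likert score (toy data).
df = pd.DataFrame({
    "subject_id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "method": ["conventional", "PIQE"] * 6,
    "score": [3, 4, 3, 4, 4, 5, 2, 4, 3, 3, 4, 5],
})

model = smf.gee("score ~ C(method)", groups="subject_id", data=df,
                family=sm.families.Gaussian(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```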

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • MRMC Study: Yes, a multi-site, randomized, blinded clinical image review study was conducted.
    • Effect Size (AI-assisted vs. without AI assistance): This was not an AI-assisted reader study. The study compared images reconstructed with the conventional method (matrix expansion with Fine Reconstruction and typical clinical filter) against images reconstructed with PIQE. The purpose was to evaluate the image quality produced by PIQE, not to assess reader performance with or without AI assistance. Therefore, no effect size on human reader improvement with AI assistance is reported. The study aimed to demonstrate that PIQE-reconstructed images are clinically acceptable and offer benefits like sharpness and ringing reduction.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance evaluation of the PIQE algorithm was conducted through "bench testing." This involved evaluating metrics like Edge Slope Width, Ringing Variable Mean, Signal-to-Noise ratio, and Contrast Change Ratio on typical clinical images from various anatomical regions. This bench testing demonstrated that PIQE "generates images with sharper edges while mitigating the smoothing and ringing effects and maintaining similar or better contrast and SNR."

    7. The Type of Ground Truth Used

    • For Bench Testing: The "ground truth" implicitly referred to established quantitative image quality metrics (Edge Slope Width, Ringing Variable Mean, Signal-to-Noise ratio, and Contrast Change Ratio) and comparisons against conventional reconstruction methods.
    • For Clinical Image Review Study: The "ground truth" was established by expert consensus/evaluation, where 14 board-certified radiologists and cardiologists scored images on various clinically-relevant categories (ringing, sharpness, SNR, overall IQ, and feature conspicuity) using a modified 5-point Likert scale.

    8. The Sample Size for the Training Set

    The document explicitly states that the "106 unique subjects... from two sites in USA and one in Japan... were scanned... to provide the test data sets (separate from the training data sets)." The sample size for the training set is not provided in the document.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established, as details about the training set itself are omitted.


    K Number: K241496
    Date Cleared: 2024-08-20 (84 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices:
    Matched on: Device Name

    Vantage Galan 3T, MRT-3020, V10.0 with AiCE Reconstruction Processing Unit for MR

    Intended Use

    Vantage Galan 3T systems are indicated for use as a diagnostic imaging modality that produces cross-sectional transaxial, coronal, sagittal, and oblique images that display anatomic structures of the head or body. Additionally, this system is capable of non-contrast enhanced imaging, such as MRA.

    MRI (magnetic resonance imaging) images correspond to the spatial distribution of protons (hydrogen nuclei) that exhibit nuclear magnetic resonance (NMR). The NMR properties of body tissues and fluids are:

    • Proton density (PD) (also called hydrogen density)
    • Spin-lattice relaxation time (T1)
    • Spin-spin relaxation time (T2)
    • Flow dynamics
    • Chemical Shift

    Depending on the region of interest, contrast agents may be used. When interpreted by a trained physician, these images yield information that can be useful in diagnosis.

    Device Description

    The Vantage Galan (Model MRT-3020) is a 3 Tesla Magnetic Resonance Imaging (MRI) System, previously cleared under K230355. This system is based upon the technology and materials of previously marketed Canon Medical Systems MRI systems and is intended to acquire and display cross-sectional transaxial, coronal, sagittal, and oblique images of anatomic structures of the head or body.

    AI/ML Overview

    The provided document describes a 510(k) premarket notification for a modified MRI system (Vantage Galan 3T, MRT-3020, V10.0 with AiCE Reconstruction Processing Unit for MR) by Canon Medical Systems Corporation. The primary purpose of this submission is to demonstrate substantial equivalence to a previously cleared predicate device (Vantage Galan 3T, MRT-3020, V9.0 with AiCE Reconstruction Processing Unit for MR, K230355) despite hardware and software changes.

    The document primarily focuses on verifying that the changes do not adversely affect the device's safety and effectiveness and that the modified device maintains performance comparable to the predicate. It does not describe a study proving the device meets specific acceptance criteria in the context of diagnostic accuracy, particularly for an AI-assisted diagnostic device, as the "AiCE Reconstruction Processing Unit" is for image reconstruction, not for AI-based diagnosis.

    Therefore, many of the requested fields related to diagnostic performance studies (like multi-reader multi-case studies, expert consensus ground truth, effect size of AI assistance for human readers, or standalone AI performance) are not applicable or not provided in this regulatory submission, as this is a modification of an imaging device itself, not a new AI diagnostic algorithm.

    Based on the provided text, here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a formal table of "acceptance criteria" for diagnostic accuracy or clinical utility that an AI diagnostic algorithm would typically have, nor does it report performance metrics against such criteria. Instead, the testing focuses on ensuring the new features and hardware maintain image quality, safety, and functionality comparable to the predicate device.

    However, the document does list testing performed for new features. We can infer the "acceptance criteria" for these were successful confirmation of functionality and image quality.

    Feature tested: 4D Flow
    Acceptance criteria (inferred): Accurate visualization of blood flow conditions when combined with external analytical software, including quantitative analysis (streamline, path line, velocity). Proper functioning of Cine or Retro modes with PS3D for time-phase information.
    Reported device performance: Bench testing included velocity measurement in a phantom with known flow values. Images in volunteers demonstrated velocity streamlines. (Implied: the system successfully produced the intended flow visualizations and quantitative data.)

    Feature tested: Zoom DWI
    Acceptance criteria (inferred): Effective suppression of wraparound artifacts, reduction of image distortion, and provision of accurate ADC values for smaller-FOV diffusion imaging by selective excitation and outer volume suppression (OVS).
    Reported device performance: Evaluated utilizing phantom images and representative volunteer images. Confirmed that Zoom DWI is effective for suppressing wraparound artifacts, reducing image distortion, and providing accurate ADC values. (Implied: the system successfully met these image quality objectives. A short note on how ADC is conventionally computed follows below.)

    Feature tested: 3D-QALAS
    Acceptance criteria (inferred): Acquisition of signals with FFE3D using a T2prep pulse and an IR pulse in combination. Production of multiple weighted images suitable for quantitative analysis using external analytical software. Image quality metrics (overall contrast, signal strength) comparable to reference images in the literature.
    Reported device performance: Bench testing included scanning multiple volunteers. Three experienced reviewers compared the resulting multiple weighted images on image quality metrics (overall contrast and signal strength) against reference images published in the literature. (Implied: the image quality was found to be comparable and suitable for its intended use with external analytical software.)

    Feature tested: General system
    Acceptance criteria (inferred): Safety parameters (static field strength, operational modes, safety parameter display, operating mode access requirements, maximum SAR, maximum dB/dt, potential emergency conditions and shutdown means) remain identical to the predicate device and comply with relevant IEC standards. Image quality (overall diagnostic capability) is maintained from the predicate device despite hardware/software changes.
    Reported device performance: Static field strength: 3T (same as predicate). Operational modes: Normal and 1st Operating Mode (same as predicate). Safety parameter display: SAR, dB/dt (same as predicate). Operating mode access requirements: allows screen access to the 1st level operating mode (same as predicate). Maximum SAR: 4 W/kg for whole body (1st operating mode specified in IEC 60601-2-33) (same as predicate). Maximum dB/dt: 1st operating mode specified in IEC 60601-2-33 (same as predicate). Potential emergency condition and means provided for shutdown: shutdown by Emergency Ramp Down Unit for the collision hazard of ferromagnetic objects (same as predicate). "No change from the previous predicate submission, K230355" for imaging performance parameters. Risk analysis and verification/validation testing through bench testing demonstrate that system requirements are met. Image quality testing confirmed the acceptance criteria were met. Conclusion: the modifications do not change the indications for use or intended use, and the subject device is safe and effective for its intended use.
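    For context, the ADC accuracy check referenced for Zoom DWI rests on a standard diffusion-MRI relation (not text from the submission): with one acquisition without diffusion weighting (b = 0, signal S_0) and one with diffusion weighting b (signal S_b),

    $$ \mathrm{ADC} \;=\; \frac{1}{b}\,\ln\!\frac{S_0}{S_b} $$

    so an accuracy check in a phantom compares the ADC computed this way against the phantom's known diffusion coefficient.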

    2. Sample Size Used for the Test Set and Data Provenance

    • 4D Flow: "a phantom with known flow values" and "volunteers." Specific numbers are not provided.
    • Zoom DWI: "phantom images" and "representative volunteer images." Specific numbers are not provided.
    • 3D-QALAS: "multiple volunteers." Specific numbers are not provided.
    • Data Provenance: Not explicitly stated; given that Canon Medical Systems Corporation (the manufacturer) is based in Japan and its agent in the U.S., the data likely originate from one or both countries. The studies are described as "bench testing" and using "volunteers," implying prospective data collection for these specific tests.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • 3D-QALAS: "three experienced reviewers" compared images. Their specific qualifications (e.g., "radiologist with 10 years of experience") are not detailed, but their role as "reviewers" suggests they are professionals qualified to assess image quality.
    • Other features (4D Flow, Zoom DWI): The ground truth appears to be established by comparison to known phantom values or visual confirmation of expected image quality improvements (e.g., artifact suppression for Zoom DWI). No external "experts" beyond the testing team are mentioned for establishing ground truth in these cases, which is typical for image quality and functional assessments.

    4. Adjudication Method for the Test Set

    • For 3D-QALAS, comparison was made by "three experienced reviewers." The document does not specify an adjudication method (e.g., 2+1, 3+1 consensus). It simply states they "compared" the images.
    • For other features, adjudication methods are not applicable as the "ground truth" relies on phantom measurements or visual confirmation against expected technical performance.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, an MRMC study comparing human readers with and without AI assistance was not reported. This submission concerns hardware and image reconstruction software changes for an MRI system, not an AI diagnostic algorithm intended for human reader assistance in interpretation. The "AiCE Reconstruction Processing Unit" processes raw MR data into images; it does not interpret those images for diagnostic findings. Therefore, the effect size of human readers improving with AI vs without AI assistance is not relevant or measured here.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    This refers to the performance of the image reconstruction itself. The testing described (e.g., for 4D Flow, Zoom DWI, 3D-QALAS) demonstrates the standalone technical performance of these new imaging capabilities and the AiCE reconstruction unit in producing images with desired characteristics (e.g., flow visualization, artifact suppression, specific contrast weighting). The "performance" is that the images are generated accurately according to the algorithms' design and meet technical quality metrics.

    7. The Type of Ground Truth Used

    • 4D Flow: Phantom with "known flow values" (objective physical ground truth) and visual assessment from "volunteer images."
    • Zoom DWI: Phantom images and visual assessment from "volunteer images" (technical image quality and accuracy of ADC values).
    • 3D-QALAS: Comparison against "reference images published in the literature" (literature-based reference) and assessment by "three experienced reviewers" on image quality metrics (expert qualitative assessment against a standard).
    • General System Performance: Compliance with recognized consensus standards (e.g., IEC, NEMA) and comparison to the characteristics of the predicate device (regulatory/technical ground truth).

    8. The Sample Size for the Training Set

    The document does not describe a "training set" in the context of supervised machine learning for diagnostic tasks. The AiCE (Artificial intelligence Clear Engine) is mentioned as a "Reconstruction Processing Unit," suggesting it's an AI reconstruction algorithm, not an AI diagnostic algorithm. Image reconstruction algorithms may use learned models, but the source document does not provide details on their training data.

    9. How the Ground Truth for the Training Set was Established

    Not applicable, as a "training set" in the context of a diagnostic AI algorithm is not described. If the AiCE reconstructor uses a deep learning approach, its "training" would likely involve large datasets of raw MR data and corresponding high-quality reference images (e.g., from conventional reconstruction or higher-resolution scans) to learn the mapping from raw data to reconstructed images; however, this level of detail is not provided in a 510(k) summary focused on substantial equivalence of an entire MRI system.


    K Number: K233976
    Date Cleared: 2024-07-19 (217 days)
    Product Code:
    Regulation Number: 870.2880
    Reference & Predicate Devices:
    Matched on: Device Name

    VasoGuard (V10, V8, V6, V4, V2)

    Intended Use

    The VasoGuard devices are intended for use in the non-invasive evaluation of peripheral vascular pathology in patients. The devices are not intended to replace other means of evaluating vital patient physiological testing, are not intended to be used in neonatal applications, and are not intended to be used inside the sterile field. The intended use is attended use by trained medical professionals in hospitals, clinics, and physician offices by prescription or on the order of a medical doctor.

    Device Description

    The VasoGuard is a family of products designed for non-invasive peripheral vascular diagnostic testing. It uses Doppler probes and photoplethysmography (PPG) sensors positioned on the body to measure physiologic signals and report data to the interpreting clinician. The system consists of up to 10 independent pneumatic pressure channels, up to five PPG ports, up to three Doppler ports that support 4MHz and 8MHz continuous wave (CW) Doppler probes, and a touchless temperature sensor. The V series indicates the products are all made from the same parts with the only differences being that certain parts are not installed when assembled during manufacturing or some features are not enabled.

    The VasoGuard family consists of five different configurations of the same device, each varying in the number of pressure channels and sensor ports accessible through one of two available enclosures. The Full-Size enclosure supports Models V6, V8 and V10, and the Mini enclosure supports Models V2 and V4.

    All VasoGuard models contain the same main printed circuit board (PCB), manifold PCB(s), built-in power supply, built-in USB hub, and connect to a dedicated Windows-based medical grade PC via USB. Internally the models all utilize the same manifold assemblies and only differ in the quantity installed. The models include physical connections for control of 10 BP cuffs simultaneously, up to five PPG sensors simultaneously, one of up to three Doppler probes (4 MHz or 8 MHz), one USB camera, one IR remote control, and one touch-free infrared skin thermometer.

    The VasoGuard software is pre-installed on the Windows PC. It controls all the models and automatically recognizes which model is connected, exposing only the software capabilities available on that model.

    Each model includes certain components and accessories in addition to the VasoGuard device, including:

    • Medical Grade Touchscreen PC
    • Windows® 10 Enterprise LTSC
    • Washable Keyboard with Trackpad
    • Mobile Cart with Height-adjustable Rolling Cart
    • Set of Pneumatic Hoses with Articulating Support Arm
    • Set of Blood Pressure Cuffs (Shenzhen Vistar Medical Supplies Co., Ltd. K152468)
    • Set of PPG Sensors
    • Doppler Probes (4 MHz and/or 8 MHz)
    • USB Camera
    • Infrared Remote Control and Receiver
    • Touch-free Infrared Skin Thermometer (Tecnimed SRL - 510(k) K122412)

    The VasoGuard software features include patient database management; patient and exam search; importing and exporting of exams; facility management; importing of settings; exam protocol management, custom segment and vessel naming; customizable testing screens, reports, and graphs; backup and restore database; and keyboard shortcuts.

    The quantitative measurements are the same for all VasoGuard models. One of the primary measurements is the Ankle Brachial Index (ABI), which uses the Doppler probe to determine the ratio of the systolic pressure at the ankle to the highest systolic pressure at the arm. Other primary measurements are segmental blood pressures and waveforms representing blood flow for each heartbeat; the system is capable of recording Doppler, PPG (photoplethysmography), and PVR (Pulse Volume Recording) waveforms.
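    The ABI arithmetic itself is simple. As a minimal, hypothetical illustration (the pressure values and the convention of taking the higher of the two ankle measurements per leg are illustrative, not taken from the submission):

```python
# Hypothetical ABI calculation; values are illustrative only, and the device's
# exact measurement conventions are not described in the summary.
def ankle_brachial_index(ankle_pressures_mmhg: list[float],
                         brachial_pressures_mmhg: list[float]) -> float:
    """ABI for one leg: highest ankle systolic pressure over the highest brachial systolic pressure."""
    return max(ankle_pressures_mmhg) / max(brachial_pressures_mmhg)

# Right leg: dorsalis pedis and posterior tibial pressures; both arms' brachial pressures
abi_right = ankle_brachial_index([128.0, 122.0], [130.0, 126.0])
print(f"Right ABI = {abi_right:.2f}")  # ~0.98; values at or below about 0.9 are commonly taken to suggest PAD
```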

    AI/ML Overview

    The provided document, an FDA 510(k) summary for the VasoGuard device, does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and a study proving the device meets those criteria. The document focuses on demonstrating substantial equivalence to a predicate device through technological characteristics and non-clinical testing, rather than presenting a performance study with defined acceptance criteria.

    However, based on the information available, here's what can be extracted and inferred:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state quantitative acceptance criteria for clinical performance nor does it provide reported device performance in a format that would directly fulfill this request from a clinical study standpoint.

    Instead, the document focuses on bench testing for comparability to the predicate device. The "acceptance criteria" here are implicitly that the VasoGuard device performs "comparably" to the predicate in terms of waveform quality, sensitivity, and accuracy of reported values.

    Acceptance criteria (implied from bench testing) and reported performance:

    • Comparable waveform quality to the predicate: found to be substantially equivalent in waveform quality.
    • Comparable sensitivity to the predicate: found to be substantially equivalent in sensitivity.
    • Comparable accuracy of reported values to the predicate: found to be substantially equivalent in accuracy of reported values.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document primarily refers to "bench testing with the predicate device using simulators and live signals" and "unit, design verification, performance, accelerated aging, and validation testing." This indicates the tests were largely non-clinical and involved simulators and controlled signals, not patient data. Therefore, there isn't a "test set" in the sense of clinical patient data, nor is there information on data provenance (country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Given that the testing described is non-clinical bench testing with simulators and live signals, there wouldn't be "experts" establishing ground truth in the clinical interpretation sense. Ground truth in this context would likely be derived from the known parameters of the simulators or the specifications of the "live signals" themselves, as per engineering and quality control standards. No specific number or qualifications of experts are mentioned for this type of ground truth establishment.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    No adjudication method is mentioned, as the testing described is non-clinical bench testing, not a clinical study involving human interpretation.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No such study (MRMC or AI assistance) is mentioned or implied. The VasoGuard device is described as a diagnostic system for non-invasive peripheral vascular testing (Doppler, PPG, PVR), not an AI-powered diagnostic tool aiding human readers.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The device is described as producing quantitative measurements such as Ankle Brachial Index (ABI), segmental blood pressures, and waveforms (Doppler, PPG, PVR). While the device performs calculations and generates data, it explicitly states: "The software automatically places a cursor at the time location which is suspected as being the systolic pressure, yet it is the responsibility of the medical staff to modify the cursor location to define the correct segmental pressure." This indicates a human-in-the-loop design where medical staff are responsible for final interpretation and adjustment, rather than a purely standalone AI algorithm. It's a measurement device, not an AI for diagnosis.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the non-clinical bench testing, the "ground truth" would be the known, controlled inputs from the simulators and "live signals" used during the testing. This is a technical ground truth based on engineering specifications rather than clinical ground truth (expert consensus, pathology, outcomes data).

    8. The sample size for the training set

    The document does not mention any training set. This suggests that the VasoGuard device is a measurement instrument, not a machine learning model that requires a training set.

    9. How the ground truth for the training set was established

    Since no training set is mentioned, this question is not applicable.


    K Number: K233687
    Date Cleared: 2024-05-03 (168 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices:
    Matched on: Device Name

    ECHELON Synergy V10.0

    Intended Use

    The ECHELON Synergy System is an imaging device and is intended to provide the physician with physiological and clinical information, obtained non-invasively and without the use of ionizing radiation. The MR system produces transverse, coronal, sagittal, oblique, and curved cross-sectional images that display the internal structure of the head, body, or extremities. The images produced by the MR system reflect the spatial distribution of protons (hydrogen nuclei) exhibiting magnetic resonance. The NMR properties that determine the image appearance are proton density, spin-lattice relaxation time (T1), spin-spin relaxation time (T2), and flow. When interpreted by a trained physician, these images provide information that can be useful in diagnosis determination.

    Anatomical Region: Head, Body, Spine, Extremities
    Nucleus excited: Proton

    Diagnostic uses:

    • T1, T2, proton density weighted imaging
    • Diffusion weighted imaging
    • MR Angiography
    • Image processing
    • Spectroscopy
    • Whole Body
    Device Description

    The ECHELON Synergy is a Magnetic Resonance Imaging System that utilizes a 1.5 Tesla superconducting magnet in a gantry design. Magnetic resonance imaging (MRI) is based on the fact that certain atomic nuclei have electromagnetic properties that cause them to act as small spinning bar magnets. The most ubiquitous of these nuclei is hydrogen, which makes it the primary nucleus currently used in magnetic resonance imaging. When placed in a static magnetic field, these nuclei assume a net orientation or alignment with the magnetic field, referred to as a net magnetization vector. The introduction of a short burst of radiofrequency (RF) excitation, of a wavelength specific to the magnetic field strength and to the atomic nuclei under consideration, can cause a re-orientation of the net magnetization vector. When the RF excitation is removed, the protons relax and return to their original vector. The rate of relaxation is exponential and varies with the character of the proton and its adjacent molecular environment. This re-orientation process is characterized by two exponential relaxation times, called T1 and T2. An RF emission or echo that can be measured accompanies these relaxation events.

    The emissions are used to develop a representation of the relaxation events in a three-dimensional matrix. Spatial localization is encoded into the echoes by varying the RF excitation, applying appropriate magnetic field gradients in the x, y, and z directions, and changing the direction and strength of these gradients. Images depicting the spatial distribution of the NMR characteristics can be reconstructed by using image processing techniques similar to those used in computed tomography.
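    For background, the RF frequency "specific to the magnetic field strength and to the atomic nuclei" is given by the Larmor relation (a standard result, not text from the submission); for protons at this system's 1.5 T field strength it works out to roughly 64 MHz:

    $$ f \;=\; \frac{\gamma}{2\pi}\,B_0 \;\approx\; 42.58~\mathrm{MHz/T} \times 1.5~\mathrm{T} \;\approx\; 63.9~\mathrm{MHz} $$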

    AI/ML Overview

    The provided document describes the Fujifilm ECHELON Synergy V10.0 MRI system, which is an updated version of a previously cleared device. The submission focuses on demonstrating substantial equivalence to the predicate device (ECHELON Synergy MRI System K223426) by highlighting changes and providing performance evaluations.

    Here's an analysis of the acceptance criteria and study information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of "acceptance criteria" for the overall device in a quantifiable format. Instead, it demonstrates the new features' performance through clinical image testing and phantom studies, comparing them to conventional methods or manual positioning. The acceptance criteria for "DLR Clear" are implied through achieving statistical significance for superiority in certain image quality metrics over conventional imaging and clinical acceptability. For "AutoPose," the criteria are implied through reduction or equivalence in time and steps for slice positioning.

    Here's a summary of the performance results for the new features (DLR Clear and AutoPose):

    Feature: DLR Clear

    Acceptance criteria (implied):
    • Phantom testing: reduce truncation artifact, improve image sharpness, and improve spatial resolution (Total Validation, Relative Edge Sharpness, FWHM).
    • Clinical testing: superiority or equivalence to conventional images in truncation artifact reduction, image sharpness, lesion conspicuity, and overall image quality (statistically significant if superior); clinical acceptability across all images with DLR Clear.
    • High-resolution vs. low-resolution (clinical): superiority in overall image quality for high-resolution DLR Clear images compared to low-resolution conventional images from the same data, and clinical acceptability.

    Reported device performance:
    • Phantom testing: demonstrated reduction of truncation artifact, improvement of image sharpness, and improvement of spatial resolution (reported metrics: Total Validation, Relative Edge Sharpness, FWHM; a generic FWHM sketch follows below).
    • Clinical testing: truncation artifact reduction, image sharpness, and overall image quality in images with DLR Clear were superior to conventional images, with a statistically significant difference.
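    FWHM (full width at half maximum) is named above as a spatial-resolution metric but not defined in this excerpt. As a generic illustration only, one common way to compute it from a 1-D point- or line-spread profile is shown below; the sampling step, linear-interpolation convention, and toy Gaussian profile are assumptions, not the manufacturer's method:

```python
# Generic FWHM computation for a single-peaked 1-D spread profile.
import numpy as np

def fwhm(profile: np.ndarray, step_mm: float = 0.5) -> float:
    """Full width at half maximum (mm), with linear interpolation at the half-max crossings."""
    p = profile - profile.min()
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    x_left = (left - 1) + (half - p[left - 1]) / (p[left] - p[left - 1]) if left > 0 else float(left)
    x_right = right + (p[right] - half) / (p[right] - p[right + 1]) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * step_mm

# Toy Gaussian line-spread profile with sigma = 2 samples; theoretical FWHM = 2.355 * sigma
x = np.arange(-10, 11)
profile = np.exp(-x**2 / (2 * 2.0**2))
print(round(fwhm(profile, step_mm=0.5), 2))  # ~2.4 mm (2.355 * 2 samples * 0.5 mm/sample)
```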

    K Number: K231748
    Date Cleared: 2023-09-12 (89 days)
    Product Code:
    Regulation Number: 892.1200
    Reference & Predicate Devices:
    Matched on: Device Name

    Cartesion Prime (PCD-1000A/3) V10.15

    Intended Use

    The device is a diagnostic imaging system that combines Positron Emission Tomography (PET) and X-ray Computed Tomography (CT) systems. The CT component produces cross-sectional images of the body by computer reconstruction of X-ray transmission data. The PET component images the distribution of PET radiopharmaceuticals in the patient body. The PET component utilizes CT images for attenuation correction and anatomical reference in the fused PET and CT images.

    This device is to be used by a trained health care professional to gather metabolic and functional information from the distribution of the radiopharmaceutical in the body for the assessment of metabolic and physiologic functions. This information can assist in the evaluation, detection, diagnosis, staging, restaging, follow-up, therapeutic planning and therapeutic outcome assessment of (but not limited to) oncological, cardiovascular, neurological diseases and disorders. Additionally, this device can be operated independently as a whole body multi-slice CT scanner.

    AiCE-i for PET is intended to improve image quality and reduce image noise for FDG whole body data by employing deep learning artificial neural network methods which can explore the statistical properties of PET data. The AiCE algorithm can be applied to improve image quality and denoising of PET images.

    Deviceless PET Respiratory-gating system, for use with Cartesion Prime PET/CT system, is intended to automatically generate a gating signal from the list-mode PET data. The generated signal can be used to reconstruct motion corrected PET images affected by respiratory motion. In addition, a single motion corrected volume can automatically be generated. Resulting motion corrected PET images can be used to aid clinicians in detection, evaluation, diagnosis, staging, restaging, follow-up of diseases and disorders, radiotherapy planning, as well as their therapeutic planning, and therapeutic outcome assessment. Images of lesions in the thorax, abdomen and pelvis are mostly affected by respiratory motion. Deviceless PET Respiratory-gating system may be used with PET radiopharmaceuticals, in patients of all ages, with a wide range of sizes, body habitus, and extent/type of disease.
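    The manufacturer's gating algorithm is not described in this summary. Purely as a generic illustration of one well-known class of data-driven approaches (tracking the axial centre of mass of coincidence counts over short time frames, then binning the trace into amplitude gates), and not a description of the actual device:

```python
# Generic data-driven respiratory-gating sketch; the frame length, gate count,
# and centre-of-mass approach are illustrative assumptions only.
import numpy as np

def respiratory_trace(event_times_s: np.ndarray, event_axial_mm: np.ndarray,
                      frame_s: float = 0.5) -> tuple[np.ndarray, np.ndarray]:
    """Bin list-mode events into short frames and track the axial centre of mass of counts."""
    t_edges = np.arange(0.0, event_times_s.max() + frame_s, frame_s)
    frame_idx = np.digitize(event_times_s, t_edges) - 1
    n_frames = len(t_edges) - 1
    com = np.array([event_axial_mm[frame_idx == i].mean() if np.any(frame_idx == i) else np.nan
                    for i in range(n_frames)])
    return t_edges[:-1] + frame_s / 2, com

def gate_bins(com: np.ndarray, n_gates: int = 4) -> np.ndarray:
    """Assign each time frame to an amplitude-based gate using quantiles of the trace."""
    edges = np.nanquantile(com, np.linspace(0, 1, n_gates + 1))
    return np.clip(np.digitize(com, edges[1:-1]), 0, n_gates - 1)

# Toy usage: synthetic events whose axial position oscillates at ~0.25 Hz (a breathing-like rate)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 60, 20000))
z = 10 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 5, t.size)
_, com = respiratory_trace(t, z)
print(gate_bins(com)[:10])
```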

    Device Description

    Cartesion Prime (PCD-1000A/3) V10.15 system combines a high-end CT and a high-throughput PET designed to acquire CT, PET and fusion images. The high-end CT system is a multi-slice helical CT scanner with a gantry aperture of 780 mm and a maximum scan field of view (FOV) of 700 mm. The high-throughput PET system has a digital PET detector utilizing SIPM sensors with temporal resolution of

    AI/ML Overview

    This document describes the marketing authorization for the Cartesion Prime (PCD-1000A/3) V10.15 system, which combines Positron Emission Tomography (PET) and X-ray Computed Tomography (CT). The submission focuses on two new features: AiCE-i for PET and Deviceless PET Respiratory-gating system.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of acceptance criteria with corresponding performance metrics for AiCE-i for PET or the Deviceless PET Respiratory-gating system in a quantifiable manner (e.g., target SUV accuracy and achieved SUV accuracy).

    However, for the Deviceless PET Respiratory-gating system, a qualitative assessment is provided.

    | Feature/Parameter | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|---|
    | AiCE-i for PET | Improve image quality and reduce image noise for FDG whole body data by exploring statistical properties of PET data; better differentiation of signal from noise. | Intended to improve image quality and reduce image noise for FDG whole body data by employing deep learning artificial neural network methods which can more fully explore the statistical properties of the signal and noise of PET data. The AiCE algorithm is able to better differentiate signal from noise and can be applied to improve image quality and denoising of PET images. |
    | Deviceless PET Respiratory-gating system | Substantially equivalent to current device-based respiratory gating; improved quantitative parameters (SUV max/mean, tumor volume) over device-based gating; diagnostic quality of images. | Demonstrated to be substantially equivalent to the current method of respiratory gating using an external device. Quantitative parameters such as SUV (max and mean) accuracy and tumor volume are improved over the current device-based gating method. All images were of diagnostic quality; deviceless gating had better or the same performance as non-gated or device-based gated images. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Deviceless PET Respiratory-gating system (Clinical Images Test Set): 10 patients.
    • Data Provenance: The document does not explicitly state the country of origin or if the data was retrospective or prospective. It refers to "clinical images," which typically implies prospective data collection, but this is not explicitly confirmed.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three physicians.
    • Qualifications: "having at least 20 years of experience in nuclear medicine."

    4. Adjudication Method for the Test Set

    The document states: "All three physicians determined that all images were of diagnostic quality and images with deviceless had better or same performance as non-gated images or device-based gated images." This implies a consensus-based adjudication method, where all three experts had to agree on the diagnostic quality and comparative performance. It doesn't specify if a majority rule (e.g., 2+1) or an independent assessment followed by discussion was used, but the phrasing "All three physicians determined" suggests a unanimous agreement or a strong consensus.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, and Effect Size

    Yes, a comparative effectiveness study involving human readers (the three physicians) was performed for the Deviceless PET Respiratory-gating system.

    • Effect Size: The document qualitatively states that the deviceless system "had better or same performance as non-gated images or device-based gated images" and that "quantitative parameters such as accuracy of SUV (max and mean) and tumor volume, is improved over the current device-based gating method." However, no specific numerical effect size or statistical measure of improvement is provided (e.g., % improvement in SUV accuracy, statistically significant difference in diagnostic confidence scores).

    6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • AiCE-i for PET: The description of AiCE-i for PET implies a standalone algorithm's performance in improving image quality and denoising. However, no specific standalone study details (e.g., metrics, dataset) are provided in this summary.
    • Deviceless PET Respiratory-gating system: Bench tests were conducted to demonstrate substantial equivalence and improvement in quantitative parameters (SUV max/mean, tumor volume) against device-based gating. This suggests an evaluation of the algorithm's output without direct human interpretation in that specific test phase, although the subsequent clinical image review involved human readers. Therefore, yes, a standalone performance evaluation for quantitative parameters was performed for the Deviceless PET Respiratory-gating system.

    7. The Type of Ground Truth Used

    • Deviceless PET Respiratory-gating system (Clinical Images Test Set): The ground truth for the clinical image review was based on expert consensus (the three physicians' determination of diagnostic quality and comparative performance). The statement regarding "accuracy of SUV (max and mean) and tumor volume" in bench testing implies the use of a quantitative ground truth, likely derived from established phantom measurements or gold-standard methodologies, though not explicitly detailed.
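    For reference, the SUV figures discussed above are conventionally computed by normalizing the measured activity concentration to the decay-corrected injected dose per unit body weight; SUVmax and SUVmean are then the maximum and mean over a lesion ROI. A minimal illustrative sketch using the standard body-weight formula and generic F-18 constants (not the vendor's implementation):

```python
import numpy as np

def suv_body_weight(activity_kbq_per_ml, injected_mbq, body_weight_kg,
                    uptake_time_min=60.0, half_life_min=109.8):
    """Body-weight SUV for an F-18 FDG study.

    The injected dose is decay-corrected to scan time (F-18 half-life ~109.8 min),
    and tissue density is approximated as 1 g/mL so grams ~ millilitres.
    """
    decayed_mbq = injected_mbq * 0.5 ** (uptake_time_min / half_life_min)
    dose_kbq_per_g = decayed_mbq * 1000.0 / (body_weight_kg * 1000.0)
    return np.asarray(activity_kbq_per_ml) / dose_kbq_per_g

roi = np.array([12.4, 15.1, 9.8, 14.3])                  # kBq/mL in a lesion ROI
suv = suv_body_weight(roi, injected_mbq=370, body_weight_kg=75)
print(f"SUVmax = {suv.max():.2f}, SUVmean = {suv.mean():.2f}")
```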

    8. The Sample Size for the Training Set

    The document does not provide information on the sample size for the training set for either AiCE-i for PET or the Deviceless PET Respiratory-gating system.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established.


    K Number
    K232400
    Device Name
    VariSeed (v10)
    Date Cleared
    2023-09-08

    (29 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    VariSeed (v10)

    Intended Use

    VariSeed is intended for use as a software application used by medical professionals to plan, guide, optimize, and document low dose rate brachytherapy and procedures based on template guided needle insertion.

    VariSeed is indicated for use as a treatment planning software application used by medical professionals to plan, guide, optimize and document low-dose-rate brachytherapy procedures and for use as a biopsy procedure tracking software application used by medical professionals to plan, guide, and document biopsy procedures based on template guided needle insertion. VariSeed may be used on any patient considered suitable for this type of treatment and is intended to be used outside of the sterile field in an operating room environment or in a normal office environment.

    Device Description

    VariSeed 10 is a free-standing PC based treatment planning software designed for preoperative and intraoperative planning of LDR implants, tracking of the implant procedure, and postoperative evaluation of completed implants. VariSeed also provides tools for supporting intraoperative template guided biopsy and using those results to guide future treatment.

    AI/ML Overview

    The provided text is an FDA 510(k) clearance letter for Varian Medical Systems' VariSeed (v10) device. It states that the device is substantially equivalent to a legally marketed predicate device (VariSeed v9.0).

    However, the core of your request is to describe the acceptance criteria and the study that proves the device meets those criteria, specifically concerning a study that evaluates device performance through quantitative measures, such as accuracy metrics, and details about the test set, expert involvement, and ground truth establishment.

    The provided FDA letter explicitly states: "No clinical tests have been included in this pre-market submission." and "Verification testing was performed to demonstrate that the performance and functionality of the VariSeed v10 treatment planning software meets the design input requirements. Validation testing was performed on production equivalent devices, under clinically representative conditions and by qualified personnel."

    This indicates that the clearance was based on non-clinical performance data and verification/validation testing against design requirements, rather than a clinical study evaluating the device's diagnostic performance (e.g., accuracy against ground truth in a clinical setting).

    Therefore, I cannot provide the detailed information you requested about acceptance criteria and a study that proves the device meets the acceptance criteria in the context of clinical performance, sample sizes, expert involvement, or ground truth for a clinical test set, because such a study was not included in this submission as per the document.

    The "acceptance criteria" referred to in the document are implicitly the design input requirements and the successful completion of verification and validation against those requirements.

    Here's a breakdown of what can be inferred from the document and why most of your requested points cannot be answered:

    1. A table of acceptance criteria and the reported device performance:

    • Acceptance Criteria (Implied): The device meets its design input requirements and performs its intended functions (planning, guiding, optimizing, and documenting low dose rate brachytherapy and biopsy procedures, including support for PET and MR images).
    • Reported Device Performance: "Verification testing was performed to demonstrate that the performance and functionality of the VariSeed v10 treatment planning software meets the design input requirements." And "Validation testing was performed on production equivalent devices, under clinically representative conditions and by qualified personnel."
    • Quantitative Metrics: The document does not provide specific quantitative performance metrics (e.g., accuracy, sensitivity, specificity, or error margins) often seen in studies evaluating AI or diagnostic devices. This implies the focus was on functional verification and validation against engineering specifications, not clinical outcomes or diagnostic accuracy.

    2. Sample size used for the test set and the data provenance:

    • Not specified. Since no clinical tests were included, there is no information about a "test set" in the context of clinical data. The validation was likely performed on synthetic, phantom, or previously acquired (non-patient-identifiable) data to test software functionality and accuracy of calculations, rather than on a true patient "test set" for clinical performance evaluation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable. No clinical ground truth establishment process is described given the lack of clinical study data.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not applicable.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No. The document explicitly states "No clinical tests have been included in this pre-market submission." Therefore, no MRMC study or AI assistance evaluation was performed as part of this submission. The device is described as "treatment planning software," not an AI-driven diagnostic assistance tool in the way you might expect. The "AI" implied in your question (human readers improving with AI assistance) is not detailed or alluded to in this context.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • The software itself operates "stand-alone" in the sense that it performs its calculations and functions without constant human input for each step once initiated. However, its intended use is "by medical professionals to plan, guide, optimize, and document," implying it's a tool used by a human, not a fully autonomous diagnostic device. The document does not describe "algorithm only" performance in the context of clinical decision-making or diagnosis.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Not specified. For internal verification and validation of a treatment planning system, ground truth would typically refer to known geometric properties of phantoms, known dose distributions from physical measurements, or validated computational models. It would not typically involve pathology or outcomes data for this type of software clearance.
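    As an illustration of the computational ground truth such verification typically relies on, LDR brachytherapy planning systems generally compute seed dose with the AAPM TG-43 formalism and compare against published or measured reference values. The sketch below uses the TG-43 point-source approximation with made-up radial dose coefficients and an isotropic anisotropy factor; it is not VariSeed's algorithm or commissioning data.

```python
import numpy as np

# Illustrative constants only (typical of a low-energy I-125 seed):
LAMBDA = 0.965            # dose-rate constant, cGy / (h * U)
HALF_LIFE_H = 59.4 * 24   # I-125 half-life, hours
R0 = 1.0                  # TG-43 reference distance, cm

def radial_dose_g(r_cm):
    """Crude polynomial stand-in for the tabulated radial dose function g(r)."""
    return np.clip(1.0 + 0.05 * (R0 - r_cm) - 0.09 * (r_cm - R0) ** 2, 0.0, None)

def implant_dose(seed_positions, air_kerma_strength_u, grid_points):
    """Total dose (cGy, to complete decay) at each grid point from a permanent
    implant, using the TG-43 point-source approximation."""
    tau = HALF_LIFE_H / np.log(2.0)          # integral of exp(-lambda*t) to infinity
    dose = np.zeros(len(grid_points))
    for seed in seed_positions:
        r = np.linalg.norm(grid_points - seed, axis=1)
        r = np.maximum(r, 0.05)              # avoid the singularity at the seed
        rate = air_kerma_strength_u * LAMBDA * (R0 / r) ** 2 * radial_dose_g(r)
        dose += rate * tau
    return dose

seeds = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])   # seed centres, cm
grid = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])    # evaluation points, cm
print(implant_dose(seeds, air_kerma_strength_u=0.5, grid_points=grid))
```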

    8. The sample size for the training set:

    • Not applicable. This section does not describe an AI/ML device that requires a distinct "training set" for model development. It's a software application following a traditional software development lifecycle, not a deep learning model requiring vast amounts of labeled training data.

    9. How the ground truth for the training set was established:

    • Not applicable.

    In summary, the FDA clearance for VariSeed (v10) was based on its substantial equivalence to its predicate device (VariSeed v9.0) and on "non-clinical data," specifically "verification and validation" against "design input requirements" under "clinically representative conditions." It was not based on a clinical study demonstrating performance against a diagnostic ground truth with human experts and patient data.


    K Number
    K223726
    Date Cleared
    2023-03-07

    (84 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    Aquilion Precision (TSX-304A/4) V10.14 with AiCE

    Intended Use

    This device is indicated to acquire and display cross-sectional volumes of the whole body, including the head. The Aquilion Precision has the capability to provide volume sets. These volume sets can be used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician.

    FIRST is an iterative reconstruction algorithm intended to reduce exposure dose and improve high contrast spatial resolution for abdomen, pelvis, chest, cardiac, extremities, and head applications.

    AiCE is a noise reduction algorithm that improves image quality and reduces image noise by employing Deep Convolutional Network methods for abdomen, pelvis, lung, cardiac, extremities, head, and inner ear applications.

    Device Description

    Aquilion Precision (TSX-304A/4) V10.14 with AiCE is an ultra-high resolution whole body multislice helical CT scanner, consisting of a gantry, couch and a console used for data processing and display. Aquilion Precision incorporates a 160-row, 0.25 mm detector, a 5.7- MHU large-capacity tube, and 0.35 s scanning, enabling wide-range scanning with short scan times to capture cross sectional volume data sets used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician. In addition, the subject device incorporates the latest reconstruction technology, FIRST, intended to reduce exposure dose while maintaining and/or improving image quality as well as, AiCE (Advanced intelligent Clear-IQ Engine), intended to reduce image noise and improve image quality by utilizing Deep Convolutional Neural Network methods to 1024x1024 HR/SHR images. These methods can more fully explore the statistical properties of the signal and noise. By learning to differentiate structure from noise, the algorithm produces fast, high quality CT reconstruction.
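    The AiCE network architecture itself is not described in the submission. As a generic illustration of the residual-denoising pattern that deep convolutional reconstruction engines commonly use (the network predicts the noise component, which is subtracted from the input slice), here is a minimal PyTorch sketch; it is not Canon's network, and its size, depth, and training are assumptions.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Toy residual CNN denoiser: predict the noise, subtract it from the input."""
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)          # subtract the predicted noise

# Inference on a single 1024x1024 reconstructed slice (random data here).
model = ResidualDenoiser().eval()
with torch.no_grad():
    noisy = torch.randn(1, 1, 1024, 1024)
    denoised = model(noisy)
print(denoised.shape)                    # torch.Size([1, 1, 1024, 1024])
```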

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for the Aquilion Precision (TSX-304A/4) V10.14 with AiCE:

    Context: The submission is for a modification to an existing device, where the AiCE (Advanced intelligent Clear-IQ Engine) noise reduction algorithm is expanded to new anatomical regions: extremities, head (brain CTA), and inner ear. Therefore, the focus of the performance testing is to demonstrate sufficient image quality and potential improvements in spatial resolution in these new applications.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly list "acceptance criteria" as pass/fail thresholds with specific numerical values for each image quality metric. Instead, the performance testing aims to demonstrate equivalence to the predicate device and highlight quantitative improvements where applicable. The stated performance demonstrates that "AiCE is substantially equivalent to the predicate device as demonstrated by the results of the above testing" and that "the reconstructed images using the subject device were of diagnostic quality."

    Implicit Acceptance Criteria & Reported Performance:

    | Image Quality Metric / Capability | Implicit Acceptance Criteria (Demonstrated) | Reported Device Performance |
    |---|---|---|
    | General Image Quality (across all applications) | Diagnostic quality of reconstructed images; equivalence to predicate device in previously cleared applications. | "reconstructed images using the subject device were of diagnostic quality." "AiCE is substantially equivalent to the predicate device" (based on Phantom Test Results). |
    | Contrast-to-Noise Ratios (CNR) | Maintained or improved compared to predicate/baseline. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | CT Number Accuracy | Maintained accuracy. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | Uniformity | Maintained uniformity. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | Slice Sensitivity Profile (SSPz) | Maintained or improved profile. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | Modulation Transfer Function (MTF)-Wire | Maintained or improved spatial resolution. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | Modulation Transfer Function (MTF)-Edge | Maintained or improved spatial resolution. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | Standard Deviation of Noise (SD) | Maintained or reduced noise. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | Noise Power Spectra (NPS) | Acceptable noise characteristics. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | Low Contrast Detectability (LCD) | Maintained or improved detectability. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | Pediatric conditions | Acceptable performance for pediatric imaging. | Assessed (specific values not provided, but deemed "substantially equivalent"). |
    | High Contrast Spatial Resolution (Quantitative Improvement) | Demonstrable improvement in specific applications for newly added AiCE regions. | Bone in HR mode: 5.37 lp/cm improvement (at 10% MTF and same dose). Inner Ear in HR mode: 7.50 lp/cm improvement (at 10% MTF and same dose). Brain CTA in HR mode: 11.31 lp/cm improvement (at 10% MTF and same dose). |
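    The lp/cm figures above are quoted at 10% MTF. One common way to obtain such numbers from an edge phantom is to differentiate the edge-spread function into a line-spread function, take the magnitude of its Fourier transform, and read off the spatial frequency where the MTF falls to 0.10. The NumPy sketch below does this on synthetic data; it is a generic illustration, not the submission's measurement pipeline.

```python
import numpy as np

def mtf_from_edge(edge_profile, pixel_mm):
    """MTF from an oversampled edge-spread function (ESF):
    differentiate to the line-spread function (LSF), window it, then |FFT|."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.gradient(esf) * np.hanning(esf.size)    # window suppresses noisy tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq_lp_per_mm = np.fft.rfftfreq(esf.size, d=pixel_mm)
    return freq_lp_per_mm, mtf

def freq_at_mtf(freq_lp_per_mm, mtf, level=0.10):
    """Spatial frequency (in lp/cm) where the MTF first falls to `level`."""
    below = np.where(mtf <= level)[0][0]
    f = np.interp(level, [mtf[below], mtf[below - 1]],
                  [freq_lp_per_mm[below], freq_lp_per_mm[below - 1]])
    return f * 10.0                                  # lp/mm -> lp/cm

# Synthetic edge blurred by a Gaussian PSF (sigma = 0.3 mm), sampled at 0.05 mm.
px = 0.05
x = np.arange(-10, 10, px)
psf = np.exp(-x**2 / (2 * 0.3**2))
esf = np.cumsum(psf) / psf.sum()
f, mtf = mtf_from_edge(esf, px)
print(f"10% MTF at ~{freq_at_mtf(f, mtf):.1f} lp/cm")   # ~11 lp/cm for this PSF
```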

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The document refers to "Representative Abdomen Bone, Brain CTA, and Inner Ear images." This implies a qualitative selection rather than a large, statistically defined sample size of clinical cases. The quantitative spatial resolution improvement was likely assessed using phantoms, not clinical images, which is common for such technical measurements.
    • Data Provenance: Not explicitly stated regarding country of origin. The images were "obtained using the subject device," implying prospective acquisition for the purpose of this evaluation, but it's not explicitly stated if they were newly acquired for this submission or if they were existing images. The evaluation states "reviewed by an American Board Certified Radiologist," which might suggest clinical images from the US, but this is not definitive.
    • Retrospective or Prospective: Unspecified, but the phrasing "obtained using the subject device" suggests they could be prospectively acquired.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: One.
    • Qualifications of Experts: "an American Board Certified Radiologist." No specific years of experience are provided.

    4. Adjudication Method for the Test Set

    No explicit adjudication method is described. The single radiologist reviewed the images and "it was confirmed that the reconstructed images using the subject device were of diagnostic quality." This implies a sole reader assessment for the qualitative clinical image evaluation.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not reported in this summary. The clinical evaluation was a single radiologist's qualitative assessment of diagnostic quality.


    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance assessment was conducted for objective image quality metrics using phantoms. This includes:

    • Contrast-to-Noise Ratios (CNR)
    • CT Number Accuracy
    • Uniformity
    • Slice Sensitivity Profile (SSPz)
    • Modulation Transfer Function (MTF)-Wire
    • Modulation Transfer Function (MTF)-Edge
    • Standard Deviation of Noise (SD)
    • Noise Power Spectra (NPS)
    • Low Contrast Detectability (LCD)
    • Pediatric conditions

    The quantitative spatial resolution improvement claims (5.37 lp/cm, 7.50 lp/cm, 11.31 lp/cm) are also results of standalone (algorithm only) performance tests, likely performed on phantoms.
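    For readers unfamiliar with the ROI-based phantom metrics in the list above, CNR and the noise standard deviation are usually computed from simple region statistics on a uniform phantom containing a contrast insert. A minimal illustrative sketch with the generic definitions (not the submission's protocol or phantom):

```python
import numpy as np

def roi_stats(image, center, half_size):
    """Mean and standard deviation inside a square ROI."""
    r, c = center
    roi = image[r - half_size:r + half_size, c - half_size:c + half_size]
    return roi.mean(), roi.std()

def contrast_to_noise_ratio(image, insert_center, background_center, half_size=10):
    """CNR = |mean_insert - mean_background| / SD_background."""
    m_ins, _ = roi_stats(image, insert_center, half_size)
    m_bkg, sd_bkg = roi_stats(image, background_center, half_size)
    return abs(m_ins - m_bkg) / sd_bkg

# Synthetic phantom slice: uniform background (0 HU, noise SD ~12 HU)
# with a 30 HU low-contrast insert.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 12.0, size=(256, 256))
img[100:140, 100:140] += 30.0
print(f"CNR = {contrast_to_noise_ratio(img, (120, 120), (40, 40)):.2f}")   # ~2.5
```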


    7. The Type of Ground Truth Used

    • For objective image quality metrics (phantoms): The ground truth is the known physical properties and measurements derived from the phantoms themselves (e.g., known dimensions for MTF, known densities for CT number accuracy).
    • For clinical image quality: The ground truth for "diagnostic quality" was established by the expert opinion of a single "American Board Certified Radiologist." There's no mention of pathology or outcomes data being used as ground truth for this aspect.

    8. The Sample Size for the Training Set

    The document does not provide any information about the sample size (or any details) for the training set used for the AiCE Deep Convolutional Network. This information is typically not included in FDA 510(k) summaries, as the focus is on validation and verification rather than the internal development process of the AI model.


    9. How the Ground Truth for the Training Set was Established

    As no information is provided about the training set, there is also no information on how its ground truth was established.


    K Number
    K222542
    Date Cleared
    2022-09-21

    (30 days)

    Product Code
    Regulation Number
    876.4300
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    MCB UNIT Model: V10GMCBUS

    Intended Use

    Electrosurgical unit « MCB » is intended for use for the ablation, removal, resection, and coagulation of soft tissue, and where associated hemostasis is required in endoscopic urological surgical procedures.

    The device is intended for use by qualified medical personnel trained in the use of electrosurgical equipment.

    Device Description

    MCB is a reusable, non-sterile electrosurgical bipolar generator with cutting and coagulation modes. The maximum output power is 500 W.

    The front panel GUI (graphical user interface) features soft keys and digital displays for:
    • the connection status of accessories connected to the electrosurgical generator
    • the current settings of the chosen output mode (Cut/Coag), with the ability to adjust them
    • sound level adjustment and LEDs (green for sound, yellow/blue for output activation)
    • electrode short-circuit alarm reset

    At switch-on, the serial number and software version are displayed.

    AI/ML Overview

    The provided text is a 510(k) summary for the MCB UNIT Model: V10GMCBUS electrosurgical unit. It includes information about the device, its intended use, and the studies conducted to demonstrate substantial equivalence to a predicate device. However, it does not contain the detailed acceptance criteria and a specific study proving the device meets those criteria, as typically found in clinical performance studies.

    The document primarily focuses on non-clinical testing and regulatory compliance, not on clinical performance metrics with acceptance criteria.

    Therefore, I cannot fulfill all parts of your request based on the provided text. I can, however, extract the relevant non-clinical performance data and what is described regarding the validation studies.

    Here's what can be extracted and inferred:

    1. A table of acceptance criteria and the reported device performance:

    The document does not explicitly state numerical acceptance criteria for specific device performance metrics in a format that would allow for a table comparing "acceptance criteria" against "reported device performance" in a clinical context (e.g., sensitivity, specificity, accuracy for an AI device).

    Instead, the non-clinical performance data section refers to validation studies based on recognized standards and FDA guidance for electrosurgical devices. The conclusion states that "Slight differences do not raise any questions regarding safety and effectiveness," implying that the device's performance, as evaluated against these standards, demonstrated sufficient safety and effectiveness to be substantially equivalent to the predicate.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):

    The document mentions "Thermal Effect studies on representative tissues for urological Application" in the "Summary of the Non-Clinical performance data" section. However, it does not provide any details regarding:

    • The sample size of these "representative tissues."
    • The data provenance (e.g., country of origin, retrospective or prospective nature of the tissue samples).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):

    This information is not provided in the document. The studies mentioned are non-clinical, focusing on the electrosurgical unit's physical characteristics and compliance with electrical safety and usability standards. The concept of "ground truth" established by experts, as it would apply to diagnostic AI devices, does not directly apply here.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    This information is not provided as the studies are non-clinical and do not involve human interpretation of diagnostic data requiring adjudication.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    This device is an electrosurgical unit, not an AI diagnostic tool. Therefore, a multi-reader multi-case (MRMC) comparative effectiveness study to assess human reader improvement with AI assistance is not applicable and was not performed.

    6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done:

    This is an electrosurgical hardware device, not an AI algorithm. So, a standalone algorithm performance study is not applicable.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

    For the non-clinical performance data, the "ground truth" would be established by the physical and electrical safety standards outlined (e.g., IEC 60601-2-2 for Safety of Electrosurgical Generator, IEC 60601-1-2 for EMC). The "Thermal Effect studies on representative tissues" would presumably use objective measurements of tissue effects (e.g., lesion depth, coagulation extent) against expected outcomes based on the device's settings and the predicate device's known performance. However, the specific methodology and objective measures are not detailed.
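    As a rough sense of scale for such thermal-effect measurements (and explicitly not data from this submission), a zeroth-order energy-balance estimate bounds the temperature rise from resistive heating. Real bench studies measure tissue response directly, since perfusion, conduction, and vaporization are all ignored here; the specific heat used is a generic soft-tissue textbook value.

```python
def temperature_rise_c(power_w, activation_s, heated_mass_g,
                       specific_heat_j_per_g_k=3.6):
    """Upper-bound temperature rise: delta_T = E / (m * c), with E = P * t."""
    energy_j = power_w * activation_s
    return energy_j / (heated_mass_g * specific_heat_j_per_g_k)

# e.g. 50 W delivered for 2 s into ~1 g of tissue near the electrode
print(f"{temperature_rise_c(50, 2, 1.0):.0f} C rise (upper-bound estimate)")
```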

    8. The sample size for the training set:

    This device is an electrosurgical unit. There is no mention of a training set as it is not an AI/ML device that requires data for training algorithms.

    9. How the ground truth for the training set was established:

    As there is no training set for an AI/ML algorithm, this question is not applicable.


    Summary of what is present:

    • Non-clinical performance data: Validation studies were based on recognized standards (ISO 14971, IEC 62304, IEC 62366-1, IEC 60601-2-2, IEC 60601-1-2) and FDA guidance for electrosurgical devices (March 9, 2020), specifically for "Thermal Effect studies on representative tissues for urological Application."
    • Software validation: Based on "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (May 11, 2005), with the software considered a "Moderate Level of Concern."
    • Usability: Assessed and found to be safe and effective for its intended uses.
    • Overall Conclusion: Substantial equivalence to the predicate device (GYRUS ACMI Inc. PK SUPERPULSE SYSTEM GENERATOR MODEL 744000, 510k Number : K100816) based on same Indications for Use, similar technological and technical characteristics, and results of non-clinical tests.

    The document primarily demonstrates compliance with regulatory and safety standards, rather than clinical performance against specific acceptance criteria like an AI diagnostic device would.

