510(k) Data Aggregation

Found 261 results

    K Number: K242925
    Device Name: MR Contour DL
    Applicant (Manufacturer): GE HealthCare
    Date Cleared: 2025-04-01 (189 days)
    Regulation Number: 892.2050
    Intended Use

    MR Contour DL generates a Radiotherapy Structure Set (RTSS) DICOM with segmented organs at risk which can be used by trained medical professionals. It is intended to aid in radiation therapy planning by generating initial contours to accelerate workflow for radiation therapy planning. It is the responsibility of the user to verify the processed output contours and user-defined labels for each organ at risk and correct the contours/labels as needed. MR Contour DL is intended to be used with images acquired on MR scanners, in adult patients.

    Device Description

    MR Contour DL is a post-processing application intended to assist a clinician by generating contours of organs at risk (OARs) from MR images in the form of a DICOM Radiotherapy Structure Set (RTSS) series. MR Contour DL is designed to automatically contour the organs in the head/neck and in the pelvis for Radiation Therapy (RT) planning of adult cases. The output of MR Contour DL is intended to be used by radiotherapy (RT) practitioners after reviewing the contours, editing them if necessary, and confirming their accuracy for use in radiation therapy planning.

    MR Contour DL uses customizable input parameters that define RTSS description, RTSS labeling, organ naming and coloring. MR Contour DL does not have a user interface of its own and can be integrated with other software and hardware platforms. MR Contour DL has the capability to transfer the input and output series to the customer desired DICOM destination(s) for review.

    MR Contour DL uses deep learning segmentation algorithms that have been designed and trained specifically for the task of generating organ at risk contours from MR images. MR Contour DL is designed to contour 37 different organs or structures using the deep learning algorithms in the application processing workflow.

    The input of the application is MR DICOM images in adult patients acquired from compatible MR scanners. In the user-configured profile, the user has the flexibility to choose both the covered anatomy of input scan and the specific organs for segmentation. The proposed device has been tested on GE HealthCare MR data.
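
As an illustration of the per-organ output described above, the sketch below splits a multi-label segmentation volume into per-organ binary masks. The label values and organ names are hypothetical examples; the actual device's 37-organ mapping and its RTSS encoding are not specified in this summary.

```python
import numpy as np

# Hypothetical label map (integer voxel label -> organ name), a stand-in
# for the user-configurable organ naming the summary describes.
ORGAN_LABELS = {1: "brainstem", 2: "chiasm", 3: "bladder"}

def split_organ_masks(label_volume: np.ndarray) -> dict:
    """Split a multi-label segmentation volume into per-organ binary masks."""
    return {name: (label_volume == value) for value, name in ORGAN_LABELS.items()}

# Toy 2x2x2 "volume" with two labeled organs (0 = background).
vol = np.array([[[0, 1], [1, 2]],
                [[0, 0], [2, 2]]])
masks = split_organ_masks(vol)
```

Each mask could then be converted to contours and written into one ROI of the RTSS series.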

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for MR Contour DL:

    1. Table of Acceptance Criteria and Reported Device Performance

    Device: MR Contour DL

    Metric | Organ Anatomy Region | Acceptance Criteria | Reported Performance (Mean) | Outcome
    DICE Similarity Coefficient (DSC) | Small Organs (e.g., chiasm, inner ear) | ≥ 50% | 67.4% - 98.8% (across all organs) | Met
    DICE Similarity Coefficient (DSC) | Medium Organs (e.g., brainstem, eye) | ≥ 65% | 79.6% - 95.5% (across relevant organs) | Met
    DICE Similarity Coefficient (DSC) | Large Organs (e.g., bladder, head-body) | ≥ 80% | 90.3% - 99.3% (across relevant organs) | Met
    95th percentile Hausdorff Distance (HD95) Comparison | All Organs | Improved or equivalent to predicate device | Improved or equivalent in 24/28 organs analyzed; average HD95 of 4.7 mm |
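
For reference, the DICE Similarity Coefficient used as the primary acceptance metric is defined as DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks A and B. A minimal sketch (not the manufacturer's implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# |A| = 3, |B| = 3, |A ∩ B| = 2 -> DSC = 4/6 ≈ 0.667
```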

    K Number: K241300
    Device Name: ViewPoint 6
    Applicant (Manufacturer): GE Healthcare GmbH
    Date Cleared: 2024-07-02 (54 days)
    Regulation Number: 892.2050
    Intended Use

    ViewPoint 6 is intended to be used in medical practices and in clinical departments and serves the purposes of diagnostic interpretation of images, electronic documentation of examinations in the form of text and images and generation of medical reports primarily for diagnostic ultrasound.

    ViewPoint 6 provides the user the ability to include images, drawings, and charts into medical reports. ViewPoint 6 is designed to accept, transfer, display, calculate, store and process medical images and data, and enables the user to measure and annotate the images. The medical images, which ViewPoint 6 displays to the user, can be used for diagnostic purposes.

    ViewPoint 6 is intended for professional use only. ViewPoint 6 is not intended to be used as an automated diagnosis system.

    ViewPoint 6 is not intended to operate medical devices in surgery related procedures.

    Device Description

    ViewPoint 6 is an image archiving and reporting software for medical practices and clinical radiological departments. It is used for diagnostic interpretation of images and other data. ViewPoint 6 is for professional use only and enables quick diagnostic reporting with standardized terminology. It is designed with intuitive graphical user interfaces (GUIs) and is based on Microsoft Windows® with defined hardware requirements.

    Viewpoint 6 provides exam type specific reporting forms for various medical care areas. Forms are composed of different sections with data entry fields. The documentation can include measurements, exam findings, images and graphs. All data is saved in the ViewPoint 6 database and can be compiled to a professional report. Images and image sequences can be reviewed in the ViewPoint 6 display area based on user preference.

    ViewPoint 6 supports both a single-workstation setup and a client/browser-server setup. The number of user licenses determines how many workstations in the network have concurrent access to the database. Access can be limited to read-only functionality.

    ViewPoint 6 software is a server-based application with client-server architecture, accessed via client computers or mobile devices as well as browser-based systems. Viewpoint 6 is installed on client provided servers within a hospital network.

    The software comes with features to view, annotate, measure, calculate, save and retrieve clinical data (including images via DICOM format) to support patient documentation and record keeping related to ultrasound image scans. Additionally, the software is available for patient administrative tasks such as appointment scheduling and exam billing.

    This product does not control or alter any of the medical devices providing data across the hospital network.

    AI/ML Overview

    The provided text is a 510(k) Summary for a medical device called "ViewPoint 6." This type of document focuses on demonstrating substantial equivalence to a predicate device and typically does not contain detailed primary study results with acceptance criteria and specific performance metrics as would be found in a clinical study report. It primarily outlines the scope of V&V activities and voluntary standards adhered to.

    Based on the provided text, the following information can be extracted regarding the device performance and acceptance criteria:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a specific table of acceptance criteria with corresponding reported device performance values in terms of clinical accuracy (e.g., sensitivity, specificity). Instead, it states that:

    "Successful completion of design verification and validation testing was performed to confirm that software and user requirements have been met."

    This implies that the acceptance criteria are tied to the fulfillment of software and user requirements, which are assessed through various testing activities, but the specific numerical performance metrics are not detailed in this summary. The general statement about "Performance dependent on customer hardware but minimum hardware requirements for acceptable performance are defined in the System Requirements" suggests that performance acceptance is related to system specifications rather than clinical efficacy metrics.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not specify any sample sizes for a clinical test set, nor does it provide data provenance (e.g., country of origin, retrospective or prospective nature). The V&V activities mentioned (Risk Analysis, Requirements Reviews, Design Reviews, Testing on unit level, Integration testing, Performance testing, Safety testing) are typically software engineering and system-level tests and do not involve a clinical test set with patient data for performance evaluation in the context of diagnostic accuracy.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    No clinical test set is described, and therefore, there is no information about the number or qualifications of experts used to establish ground truth.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    No clinical test set is described, so no adjudication method is mentioned.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document states:

    "The similarities and differences between the subject device and the predicate device, were determined not to have a significant impact on the device's performance, the clinical performance, and the actual use scenarios. Therefore, the subject of this premarket submission, ViewPoint 6, did not require clinical studies to support substantial equivalence."

    This explicitly indicates that no MRMC comparative effectiveness study, or any clinical study, was conducted or deemed necessary for this 510(k) submission. Therefore, there is no information about AI assistance or its effect size on human reader improvement.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The device is described as "an image archiving and reporting software for medical practices and clinical radiological departments. It is used for diagnostic interpretation of images and other data." The indications for use state it "is not intended to be used as an automated diagnosis system." This confirms it's a tool to assist clinicians, not a standalone diagnostic algorithm. No standalone algorithmic performance study was conducted or mentioned.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    As no clinical studies were performed for diagnostic accuracy, no specific ground truth type (expert consensus, pathology, outcomes data, etc.) is mentioned. The V&V activities focused on software and system requirements.

    8. The sample size for the training set

    The document does not refer to any AI/ML components that would require a training set. Therefore, no training set sample size is provided.

    9. How the ground truth for the training set was established

    Since no training set is mentioned for AI/ML, there is no information on how its ground truth would be established.

    In summary:

    This 510(k) submission for ViewPoint 6 primarily relies on demonstrating substantial equivalence to a predicate device (ViewPoint 6 v6.12 K203677) through software validation and verification activities, adherence to voluntary standards, and a comparison of technological characteristics. It explicitly states that clinical studies were not required because the changes from the predicate device were not deemed to have a significant impact on clinical performance. Therefore, detailed information regarding clinical performance acceptance criteria, sample sizes for test or training sets, expert adjudication, or AI performance metrics is not present in this document.


    K Number: K233698
    Device Name: True Enhance DL
    Applicant (Manufacturer): GE Healthcare Japan Corporation
    Date Cleared: 2024-04-11 (146 days)
    Regulation Number: 892.1750
    Intended Use

    True Enhance DL is a deep learning-based image processing method trained to estimate monochromatic, 50 keV GSI images. The algorithm is intended to improve the contrast of 120 kVp, single energy images of the body.

    This device is intended to provide non-quantitative, adjunct information and should not be interpreted without the original 120 kVp image.

    True Enhance DL may be used for patients of all ages.

    Device Description

    True Enhance DL is a deep learning-based image processing method for contrast enhanced images of the body obtained using the Revolution Ascend Family (K213938), which consists of multiple commercial configurations: Revolution Ascend Elite, Revolution Ascend Plus, and Revolution Ascend Select. True Enhance DL is intended to post-process single energy, 120 kVp images to output nonquantitative, adjunctive information with better contrast than single energy input data.

    True Enhance DL provides four deep learning models that the user can choose depending on the contrast enhancement phase. These four models are CT Angiography, Arterial, Portal/Venous, and Delayed True Enhance DL.

    True Enhance DL is not intended to replace hardware based Monochromatic Imaging by Gemstone Spectral Imaging (GSI) technology or replicate GSI dual energy acquisitions. The device was trained to estimate monochromatic, 50 keV GSI images, and only enhances images from 120 kVp acquisitions on non-GSI Revolution Ascend systems.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Acceptance Criteria (Implicit) | Reported Device Performance
    Primary goal: improve the contrast of 120 kVp, single-energy images of the body. | "The result of this reader study and head-to-head material comparison validated that True Enhance DL software provides additional benefit by improving contrast in the True Enhance output when compared to the original 120 kVp single energy images."
    Provide non-quantitative, adjunct information. | The device's indication explicitly states it "is intended to provide non-quantitative, adjunct information."
    Not replace hardware-based Monochromatic Imaging by Gemstone Spectral Imaging (GSI) technology or replicate GSI dual-energy acquisitions. | "True Enhance DL is not intended to replace hardware based Monochromatic Imaging by Gemstone Spectral Imaging (GSI) technology or replicate GSI dual energy acquisitions."
    Output estimable as 50 keV GSI images. | "The device was trained to estimate monochromatic, 50 keV GSI images."
    No new or different questions of safety or effectiveness compared to the predicate device. | "GE's quality system's design verification, and risk management processes did not identify any new questions of safety or effectiveness, hazards, unexpected results, or adverse effects stemming from the changes to the predicate."
    Achieve adequate image quality. | "The changes associated with True Enhance DL do not create a new Intended Use and represent technological characteristics that produce images that have demonstrated adequate image quality..."

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: Not explicitly stated as a number of cases. The text mentions "sample clinical data" and "Additional representative clinical cases and anthropomorphic phantom cases."
    • Data Provenance: Retrospective. The study used "retrospectively collected representative clinical cases." The country of origin is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

    • Number of Experts: Four.
    • Qualifications of Experts: "Four board certified radiologists." Specific years of experience are not mentioned.

    4. Adjudication Method for the Test Set:

    • The text does not explicitly state a formal adjudication method (e.g., 2+1, 3+1). It indicates that the four radiologists each provided a comparative assessment of image quality related to diagnostic use. This suggests individual reader assessment rather than a consensus-building adjudication process for ground truth. However, they were asked to "rate the contrast enhancement in the True Enhance DL series vs the native image series," which implies a comparative evaluation.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:

    • A MRMC-like study was done, as "four board certified radiologists" read the images.
    • Effect Size: The text states, "the readers were asked to rate the contrast enhancement in the True Enhance DL series vs the native image series" and "validated that True Enhance DL software provides additional benefit by improving contrast." However, a quantitative effect size of human readers' improvement with AI vs. without AI assistance is not provided in this summary. The focus was on the software's ability to improve contrast rather than a comparative effectiveness of human performance with and without the tool.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:

    • Yes, a standalone evaluation was conducted to assess image characteristics. The text mentions "Additional representative clinical cases and anthropomorphic phantom cases from a GSI system generating both single energy 120 kVp and 50 keV monochromatic images were evaluated for CT number in various anatomical regions to study image characteristics for different materials of the device output compared to 50 keV and 120 kVp reference images." This assesses the algorithm's output properties directly against a reference, which constitutes a standalone performance aspect.

    7. The Type of Ground Truth Used:

    • Expert Consensus / Reader Assessment: For the image quality and contrast improvement aspects, the subjective assessment of "four board certified radiologists" served as the ground truth.
    • Reference Images / Clinical Data: For the standalone evaluation, "50 keV and 120 kVp reference images" (likely derived from GSI systems with known energy characteristics) were used to study the algorithm's output. Clinical cases with "disease/pathology" were used, implying the presence of known conditions, although how these conditions served as "ground truth" for the AI's performance beyond simply being present in the data is not fully detailed.

    8. The Sample Size for the Training Set:

    • The sample size for the training set is not provided in this document. The text only states, "The device was trained to estimate monochromatic, 50 keV GSI images."

    9. How the Ground Truth for the Training Set Was Established:

    • The document implies that the training was based on "to estimate monochromatic, 50 keV GSI images." This suggests that 50 keV monochromatic GSI images (likely acquired from dual-energy CT scans, which serve as a form of ground truth for spectral decomposition) were used as the target output for the deep learning model during training. The process of generating these reference 50 keV GSI images themselves would involve the CT system's physics and reconstruction algorithms.
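
To make the paired-training idea concrete, the toy sketch below fits a single global gain/offset from synthetic (120 kVp input, 50 keV target) image pairs by least squares. This is only a stand-in under stated assumptions: the real device uses a deep network rather than a linear model, and the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired training data: 120 kVp input images (x) and
# 50 keV GSI target images (y). Here y is a synthetic contrast-boosted
# version of x; the real targets would be acquired GSI reconstructions.
x = rng.normal(40.0, 10.0, size=(5, 16, 16))  # HU-like input values
y = 1.8 * x + 5.0                             # stand-in "50 keV" targets

# Fit y ≈ gain * x + offset by least squares over all paired pixels.
A = np.stack([x.ravel(), np.ones(x.size)], axis=1)
(gain, offset), *_ = np.linalg.lstsq(A, y.ravel(), rcond=None)
```

A deep model replaces the two scalar parameters with millions of learned weights, but the supervision signal (the 50 keV reference image) plays the same role.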

    K Number: K233728
    Device Name: SIGNA Champion
    Applicant (Manufacturer): GE Healthcare (Tianjin) Company Limited
    Date Cleared: 2024-01-19 (59 days)
    Regulation Number: 892.1000
    Reference & Predicate Devices: N/A
    Intended Use

    The SIGNA™ Champion is a whole body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan times. It is indicated for use as a diagnostic imaging device to produce axial, sagittal, coronal, and oblique images, spectroscopic images, parametric maps, and/or spectra, dynamic images of the structures and/or functions of the entire body, including, but not limited to, head, neck, TMJ, spine, breast, heart, abdomen, pelvis, joints, prostate, blood vessels, and musculoskeletal regions of the body.

    Depending on the region of interest being imaged, contrast agents may be used. The images produced by SIGNA™ Champion reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. These images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    Device Description

    SIGNA™ Champion is a whole body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan time. The system uses a combination of time-varying magnet fields (Gradients) and RF transmissions to obtain information regarding the density and position of elements exhibiting magnetic resonance. The system can image in the sagittal, coronal, axial, oblique, and double oblique planes, using various pulse sequences, imaging techniques and reconstruction algorithms. The system features a 1.5T superconducting magnet with 70cm bore size. The system is designed to conform to NEMA DICOM standards (Digital Imaging and Communications in Medicine).

    AI/ML Overview

    The provided text does not contain information about acceptance criteria and a study proving the device meets those criteria in the context of an AI/human reader performance study. The document is a 510(k) premarket notification for a Magnetic Resonance Diagnostic Device (SIGNA™ Champion).

    The relevant sections state:

    • "The subject of this premarket submission, the SIGNA™ Champion, did not require clinical studies to support substantial equivalence. Sample clinical images have been included in this submission."
    • "The sample clinical images demonstrate acceptable diagnostic image performance of the SIGNA™ Champion in accordance with the FDA Guidance 'Submission of Premarket Notifications for Magnetic Resonance Diagnostic Devices' issued on October 10, 2023. The image quality of the SIGNA™ Champion is substantially equivalent to that of the predicate device."

    This indicates that the FDA clearance for the SIGNA™ Champion MR system was based on demonstrating substantial equivalence to a previous predicate device (SIGNA™ Voyager) through non-clinical testing and the review of sample clinical images, rather than a prospective clinical study involving human readers and AI assistance for diagnostic tasks.

    Therefore, I cannot provide the requested information regarding acceptance criteria and study details for an AI-assisted diagnostic device performance study because such a study was not conducted or reported in this 510(k) submission.


    K Number: K232346
    Device Name: Digital Expert Access with Remote Scanning
    Applicant (Manufacturer): GE Healthcare
    Date Cleared: 2023-10-27 (84 days)
    Regulation Number: 892.1000
    Intended Use

    Digital Expert Access with Remote Scanning is intended as a remote collaboration tool to view and review MR images, to remotely control MR Imaging Devices and to initiate MRI scans remotely.

    Digital Expert Access with Remote Scanning is a remote scan assistance solution which allows remote control of an MR Imaging Device including the ability to initiate a scan remotely. This access provides real time communication mechanisms between the remote and onsite users to facilitate the acquisition occurring on the device. Access must be granted by the onsite user operating the system. Images reviewed remotely are not for diagnostic use.

    Device Description

    Digital Expert Access is a variation of the Customer Remote Console cleared under K150193. It is a remote scan assistance solution designed to address the skill variability in technologists and their need for on-demand support by allowing them to interact directly with a remote expert connected to the hospital network. By enabling the collaboration between an Onsite Technologist and Remote Expert, Digital Expert Access helps the onsite technologist to seek guidance and real-time support on scanning-related queries including but not limited to training, procedure assessment, and scanning parameter management.

    Digital Expert Access with Remote Scanning introduces a feature that enables the Remote Expert to initiate a scan and make changes in real time during the scanning session. This remote scan feature is only available when Digital Expert Access is connected to a compatible GE HealthCare MRI system.

    Digital Expert Access with Remote Scanning enables the following capabilities for the Onsite Technologist and the Remote Expert:

      1. Collaborative session between an Onsite Technologist and Remote Expert
      2. Real-time scanner screen share and live annotation
      3. Remote console access and control
      4. Remote scan initiation

    Digital Expert Access with Remote Scanning is not intended for diagnostic use or patient safety-related management. This solution is not intended to be used by individuals who are not properly trained in the operation of GE HealthCare Medical Imaging systems. Digital Expert Access with Remote Scanning does not directly interface with any patients and requires the Onsite Technologist to be continuously present throughout the scanning procedure. Digital Expert Access with Remote Scanning does not acquire any MRI images, nor does it do any post image processing. All image acquisition and image processing is conducted by the GE HealthCare MRI system.
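
The access-granting workflow described above (a remote scan may only be initiated after the onsite user grants access) can be sketched as a small state machine. The class and method names are hypothetical illustrations, not GE HealthCare's API:

```python
from enum import Enum, auto

class SessionState(Enum):
    IDLE = auto()
    ACCESS_REQUESTED = auto()
    ACCESS_GRANTED = auto()
    SCANNING = auto()

class RemoteSession:
    """Toy model: remote scan initiation requires an onsite grant first."""

    def __init__(self):
        self.state = SessionState.IDLE

    def request_access(self):
        # Remote expert asks to join the scanning session.
        self.state = SessionState.ACCESS_REQUESTED

    def grant_access(self):
        # Only the onsite technologist can approve a pending request.
        if self.state is SessionState.ACCESS_REQUESTED:
            self.state = SessionState.ACCESS_GRANTED

    def initiate_scan(self) -> bool:
        # Remote scan initiation is refused unless access was granted.
        if self.state is SessionState.ACCESS_GRANTED:
            self.state = SessionState.SCANNING
            return True
        return False
```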

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a device called "Digital Expert Access with Remote Scanning." This document outlines the device's purpose, intended use, and substantial equivalence to predicate devices, but it does not contain information about specific acceptance criteria or an analytical study proving the device meets those criteria in terms of performance metrics like accuracy, sensitivity, or specificity.

    The document emphasizes that the device did not require clinical studies to support substantial equivalence. Instead, the focus was on non-clinical tests, primarily design verification and validation testing, to ensure proper implementation of design requirements and that no new safety or effectiveness issues were introduced.

    Therefore, many of the requested details about acceptance criteria for performance, sample sizes, ground truth establishment, expert involvement, and MRMC studies are not present in this document because the submission strategy relied on substantial equivalence based on technological similarity and non-clinical testing rather than performance studies against defined quantitative acceptance criteria.

    However, based on the information provided, here's what can be extracted and inferred regarding the "acceptance criteria" and "study":

    Acceptance Criteria (Inferred from Non-Clinical Testing and Device Functionality):

    Since no performance metrics are given, the acceptance criteria are implicitly related to the successful functionality and safety of the remote scanning feature. These would likely be qualitative or functional:

    Acceptance Criterion (Inferred) | Reported Device Performance (Summary)
    1. Successful remote control of the MR imaging device | Demonstrated through non-clinical verification and validation testing on a subset of GE HealthCare MRI systems.
    2. Successful remote scan initiation | Demonstrated through non-clinical verification and validation testing on a subset of GE HealthCare MRI systems.
    3. Real-time communication mechanisms (onsite and remote users) | Device described as providing real-time communication mechanisms; functionality tested during design verification.
    4. Access-granting mechanism by onsite user | Functionality described and tested: "Access must be granted by the onsite user operating the system."
    5. No new potential safety risks introduced | Concluded based on design verification and validation testing; no unexpected test results or safety issues observed.
    6. Proper implementation of design requirements | Verified through design control testing per GE HealthCare's quality system (e.g., Requirement Definition, Risk Analysis, Technical Design Reviews).
    7. Maintenance of safety and effectiveness | Confirmed by design verification and validation testing, concluding it has not been affected.

    Study Information (Based on Non-Clinical Testing):

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: Not explicitly stated as a number of "cases" or "patients" in a clinical study. The testing was conducted on a "subset of GE HealthCare MRI systems" on the bench. This implies a sample of hardware configurations rather than patient data.
      • Data Provenance: Not applicable as it was non-clinical "bench" testing, not involving patient data or a specific country of origin for such data. It was performed internally by GE HealthCare.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

      • Not applicable. As this was non-clinical, functional testing, there was no "ground truth" in the clinical diagnostic sense established by experts. Testing would have involved engineers and quality assurance personnel verifying the system's intended functionality.
    3. Adjudication Method for the Test Set:

      • Not applicable for functional verification testing. Adjudication methods are typically used when comparing human interpretations or algorithmic outputs against a gold standard in diagnostic or clinical performance studies.
    4. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance:

      • No, an MRMC comparative effectiveness study was not done. The document explicitly states: "The subject of this premarket submission, Digital Expert Access with Remote Scanning did not require clinical studies to support substantial equivalence..." The device is a remote control and collaboration tool, not an AI-assisted diagnostic device meant to improve human reader performance.
    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

      • Not applicable in the context of performance. The device is inherently a "human-in-the-loop" collaboration tool. While its technical components were likely tested in isolation (standalone components), the overall device's "performance" is tied to its functional remote control and communication capabilities, not an algorithmic diagnostic output.
    6. The Type of Ground Truth Used:

      • For the non-clinical testing, the "ground truth" was the pre-defined design requirements and expected functional behavior of the device. This is typical for engineering verification and validation testing, where the "truth" is whether the system performs as designed and specified (e.g., does the remote expert successfully initiate a scan?).
    7. The Sample Size for the Training Set:

      • Not applicable. The document does not describe the device as employing machine learning or AI that would require a "training set" of data in the sense of a deep learning model. It is a control and communication software system.
    8. How the Ground Truth for the Training Set Was Established:

      • Not applicable. As no training set for a machine learning model is mentioned, ground truth establishment for such a set is irrelevant to this submission.

    In summary, the FDA 510(k) clearance for this device was based on a demonstration of "substantial equivalence" to existing predicate devices through non-clinical design verification and validation testing, focusing on ensuring the new remote scanning feature did not introduce new safety or effectiveness issues, rather than requiring quantitative performance studies with clinical acceptance criteria.


    K Number
    K223523
    Device Name
    Sonic DL
    Date Cleared
    2023-05-30

    (188 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    GE Medical Systems, LLC (GE Healthcare)

    Intended Use

    Sonic DL is a Deep Learning based image reconstruction technique that is available for use on GE Healthcare 1.5T and 3.0T MR systems. Sonic DL reconstructs MR images from highly under-sampled data, and thereby enables highly accelerated acquisitions. Sonic DL is intended for cardiac imaging, and for patients of all ages.

    Device Description

    Sonic DL is a new software feature intended for use with GE Healthcare MR systems. It consists of a deep learning based reconstruction algorithm that is applied to data from MR cardiac cine exams obtained using a highly accelerated acquisition technique.

    Sonic DL is an optional feature that is integrated into the MR system software and activated through a purchasable software option key.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Sonic DL device, based on the provided document:

    Sonic DL Acceptance Criteria and Study Details

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes the performance of Sonic DL in comparison to conventional ASSET Cine images. While explicit numerical acceptance criteria for regulatory clearance are not stated, the studies aim to demonstrate non-inferiority or superiority in certain aspects. The implicit acceptance criteria are:

    • Diagnostic Quality: Sonic DL images must be rated as being of diagnostic quality.
    • Functional Measurement Agreement: Functional cardiac measurements (e.g., LV volumes, EF, CO) from Sonic DL images must agree closely with those from conventional ASSET Cine images, ideally within typical inter-reader variability.
    • Reduced Scan Time: Sonic DL must provide significantly shorter scan times.
    • Preserved Image Quality: Image quality must be preserved despite higher acceleration.
    • Single Heartbeat Imaging (Functional): Enable functional imaging in a single heartbeat.
    • Rapid Free-Breathing Functional Imaging: Enable rapid functional imaging without breath-holds.
    Implicit Acceptance Criterion and Reported Device Performance:

    • Diagnostic Quality: "on average the Sonic DL images were rated as being of diagnostic quality" (second reader study).
    • Functional Measurement Agreement: "the inter-method variability (coefficient of variability comparing functional measurements taken with Sonic DL images versus measurements using the conventional ASSET Cine images) was smaller than the inter-observer intra-method variability for the conventional ASSET Cine images for all parameters, indicating that Sonic DL is suitable for performing functional cardiac measurements" (first reader study). "Functional measurements using Sonic DL 1 R-R free breathing images from 10 subjects were compared to functional measurements using the conventional ASSET Cine breath hold images, and showed close agreement" (additional clinical testing for 1 R-R free breathing).
    • Reduced Scan Time: "providing a significant reduction in scan time compared to the conventional ASSET Cine images" (second reader study). "the Sonic DL feature provided significantly shorter scan times than the conventional Cine imaging" (overall conclusion).
    • Preserved Image Quality: "capable of reconstructing Cine images from highly under sampled data that are similar to the fully sampled Cine images in terms of image quality and temporal sharpness" (nonclinical testing). "the image quality of 13 Sonic DL 1 R-R free breathing cases was evaluated by a U.S. board certified radiologist, and scored higher than the corresponding conventional free breathing Cine images from the same subjects" (additional clinical testing for 1 R-R free breathing).
    • Single Heartbeat Functional Imaging: "Sonic DL is capable of achieving a 12 times acceleration factor and obtaining free-breathing images in a single heartbeat (1 R-R)" (additional clinical testing).
    • Rapid Free-Breathing Functional Imaging: "Sonic DL is capable of... obtaining free-breathing images in a single heartbeat (1 R-R)" (additional clinical testing).
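    The agreement criterion above rests on comparing an inter-method coefficient of variation against inter-observer variability. As a minimal numeric sketch (all ejection-fraction values and the CV definition are illustrative assumptions, not the study's actual data or statistical protocol):

    ```python
    import numpy as np

    # Hypothetical paired ejection-fraction measurements (%) from the same
    # subjects: two observers reading conventional ASSET Cine, and one reading
    # Sonic DL images. Values are illustrative only.
    ef_conventional_r1 = np.array([55.0, 60.0, 48.0, 62.0, 51.0])
    ef_conventional_r2 = np.array([57.0, 58.0, 50.0, 60.0, 53.0])
    ef_sonic_dl        = np.array([56.0, 59.0, 49.0, 61.0, 52.0])

    def coefficient_of_variation(a, b):
        """CV of paired readings: SD of within-pair differences (scaled by
        1/sqrt(2)) divided by the grand mean, as a percentage."""
        within_pair_sd = np.std(a - b, ddof=1) / np.sqrt(2)
        grand_mean = np.mean(np.concatenate([a, b]))
        return 100.0 * within_pair_sd / grand_mean

    inter_method   = coefficient_of_variation(ef_sonic_dl, ef_conventional_r1)
    inter_observer = coefficient_of_variation(ef_conventional_r1, ef_conventional_r2)

    # The study's criterion: inter-method CV smaller than inter-observer CV
    print(inter_method < inter_observer)
    ```

    With these toy numbers the inter-method CV is smaller than the inter-observer CV, which is the pattern the submission reports for all functional parameters.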

    2. Sample Size Used for the Test Set and Data Provenance

    The document describes two primary reader evaluation studies and additional clinical testing.

    • First Reader Study (Functional Measurements):
      • Sample Size: 107 image series from 57 unique subjects (46 patients, 11 healthy volunteers).
      • Data Provenance: Data from 7 sites: 2 GE Healthcare facilities and 5 external clinical collaborators. This indicates data from multiple sources, likely a mix of prospective and retrospective collection. The geographic origin of these sites is not explicitly stated but implies a multi-center study potentially from different countries where GE Healthcare operates or collaborates.
    • Second Reader Study (Image Quality Assessment):
      • Sample Size: 127 image sets, which included a subset of the subjects from the first study.
      • Data Provenance: Same as the first reader study (clinical sites and healthy volunteers at GE Healthcare facilities).
    • Additional Clinical Testing (1 R-R Free Breathing):
      • Functional Measurements: 10 subjects.
      • Image Quality Evaluation: 13 subjects.
      • Data Provenance: In vivo cardiac cine images from 19 healthy volunteers. This implies prospective collection or a subset of prospectively collected healthy volunteer data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • First Reader Study (Functional Measurements): Three radiologists. Qualifications are not explicitly stated, but their role in making quantitative measurements implies expertise in cardiac MRI.
    • Second Reader Study (Image Quality Assessment): Three radiologists. Qualifications are not explicitly stated, but their role in blinded image quality assessments implies expertise in cardiac MRI interpretation.
    • Additional Clinical Testing (1 R-R Free Breathing Image Quality): One U.S. board certified radiologist.

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (like 2+1, 3+1, or none) for either the functional measurements or the image quality assessments. For the first study, it mentions "inter-method variability" and "inter-observer intra-method variability," suggesting that the readings from the three radiologists were compared against each other and against the conventional method, but not necessarily adjudicated to establish a single "ground truth" per case. For the second study, "blinded image quality assessments" were performed, and ratings were averaged, but no adjudication process is described.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size

    A clear MRMC comparative effectiveness study, in the sense of measuring human reader improvement with AI vs. without AI assistance, is not explicitly described.

    The studies compare the performance of Sonic DL images (algorithm output) against conventional images, with human readers evaluating both.

    • The first reader study compares quantitative measurements from Sonic DL images to conventional images, indicating suitability for performing functional cardiac measurements by showing smaller inter-method variability than inter-observer intra-method variability for conventional images. This suggests Sonic DL is at least as reliable as the variability between conventional human measurements.
    • The second reader study involves blinded image quality assessments of both conventional and Sonic DL images, confirming that Sonic DL images were rated as diagnostic quality.
    • The additional clinical testing for 1 R-R free breathing shows that Sonic DL images were "scored higher than the corresponding conventional free breathing Cine images" by a U.S. board-certified radiologist.

    These are comparisons of the image quality and output from the AI system versus conventional imaging, interpreted by readers, rather than measuring human reader performance assisted by the AI system.

    Therefore, the effect size of how much human readers improve with AI vs. without AI assistance is not provided because the studies were designed to evaluate the image output quality and measurement agreement of the AI-reconstructed images themselves, not to assess an AI-assisted workflow for human readers.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, standalone performance was assessed for image quality metrics.

    • Nonclinical Testing: "Model accuracy metrics such as Peak-Signal-to-Noise (PSNR), Root-Mean-Square Error (RMSE), Structural Similarity Index Measure (SSIM), and Mean Absolute Error (MAE) were used to compare simulated Sonic DL images with different levels of acceleration and numbers of phases to the fully sampled images." This is a standalone evaluation of the algorithm's output quality against a reference.
    • In Vivo Testing: "model accuracy and temporal sharpness evaluations were conducted using in vivo cardiac cine images obtained from 19 healthy volunteers." This is also a standalone technical evaluation of the algorithm's output on real data.
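    The accuracy metrics named above (PSNR, RMSE, MAE) can be sketched in a few lines of numpy; SSIM is omitted here since it is usually computed with a library such as scikit-image's `structural_similarity`. The images below are stand-ins, not the submission's data:

    ```python
    import numpy as np

    def rmse(ref, img):
        # Root-mean-square error between reference and reconstruction
        return np.sqrt(np.mean((ref - img) ** 2))

    def mae(ref, img):
        # Mean absolute error
        return np.mean(np.abs(ref - img))

    def psnr(ref, img):
        # Peak signal-to-noise ratio in dB, using the reference's peak value
        return 20.0 * np.log10(ref.max() / rmse(ref, img))

    rng = np.random.default_rng(0)
    reference = rng.uniform(0.0, 1.0, size=(64, 64))          # stand-in fully sampled image
    recon = reference + rng.normal(0.0, 0.01, size=(64, 64))  # simulated reconstruction

    print(f"RMSE={rmse(reference, recon):.4f}  MAE={mae(reference, recon):.4f}  "
          f"PSNR={psnr(reference, recon):.1f} dB")
    ```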

    7. The Type of Ground Truth Used

    • Nonclinical Testing (Simulated Data): The ground truth was the "fully sampled images" generated from an MRXCAT phantom and a digital phantom.
    • Clinical Testing (Reader Studies):
      • Functional Measurements: The "ground truth" for comparison was the measurements taken from the "conventional ASSET Cine images." The variability of these conventional measurements across readers also served as a baseline for comparison. This is a form of clinical surrogate ground truth (comparing to an established accepted method).
      • Image Quality Assessments: The "ground truth" was the expert consensus/opinion of the radiologists during their blinded assessments of diagnostic quality.
      • Additional Clinical Testing (1 R-R Free Breathing): Functional measurements were compared to "conventional ASSET Cine breath hold images" (clinical surrogate ground truth). Image quality was based on the scoring by a "U.S. board certified radiologist" (expert opinion).

    No pathology or outcomes data were used as ground truth. The ground truth in the clinical setting was primarily based on established imaging techniques (conventional MR) and expert radiologist assessments.

    8. The Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set used for the deep learning model. It only describes the data used for testing the device.

    9. How the Ground Truth for the Training Set Was Established

    Since the training set size is not provided, the method for establishing its ground truth is also not described in the provided text. Typically, for deep learning reconstruction, the "ground truth" for training often involves fully sampled or high-quality reference images corresponding to the undersampled input data.
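    The pairing described above, where the fully sampled acquisition serves as the training target for an undersampled input, can be sketched as follows. This is an illustrative supervised setup, not GE HealthCare's actual training pipeline; the 4x undersampling mask and MSE loss are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fully_sampled_img = rng.uniform(0.0, 1.0, size=(32, 32))  # reference (training target)

    # Simulate 4x undersampling: keep every 4th k-space line, zero-fill the rest
    kspace = np.fft.fft2(fully_sampled_img)
    mask = np.zeros((32, 32))
    mask[::4, :] = 1.0
    zero_filled_img = np.abs(np.fft.ifft2(kspace * mask))     # aliased network input

    def mse_loss(pred, target):
        return np.mean((pred - target) ** 2)

    # A reconstruction network would be trained to minimize this loss;
    # the zero-filled input is the baseline it must improve on.
    print(f"baseline loss = {mse_loss(zero_filled_img, fully_sampled_img):.4f}")
    ```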


    K Number
    K223212
    Device Name
    Precision DL
    Manufacturer
    Date Cleared
    2023-04-27

    (192 days)

    Product Code
    Regulation Number
    892.1200
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    GE Healthcare

    Intended Use

    Precision DL is a deep learning-based image processing method intended to enhance image quality of non-ToF PET images for clinical oncology purpose, using F-18 FDG. Precision DL may be used for patients of all ages.

    Device Description

    Precision DL is a deep learning-based image processing method intended for PET oncology 18F-FDG images obtained using the predicate device Omni Legend PET/CT system. Precision DL enhances the non-ToF Q.Clear images to have image quality performance similar to PET images obtained using ToF capable PET systems, including enhancement in image Contrast Recovery (CR), Contrast to Noise Ratio (CNR), and quantitation accuracy. Precision DL's training used clinical data from diverse clinical sites, accounting for relevant variations in patients and sites' protocols.

    Precision DL brings three deep learning models to provide users the choice between different strengths of contrast enhancement and noise reduction. The three models, Low, Medium, and High Precision DL, are trained such that the High Precision DL brings the highest contrast enhancement and lowest noise reduction, while the Low Precision DL brings the lowest contrast enhancement and highest noise reduction. Medium Precision DL brings contrast-noise tradeoff in between High and Low Precision DL.

    Precision DL is deployed within the acquisition and processing software of Omni Legend, for processing images produced using non-ToF Q.Clear image reconstruction.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study for Precision DL, based on the provided FDA 510(k) summary:

    Device: Precision DL (Deep Learning-based image processing for non-ToF PET images)

    Intended Use: Enhance image quality of non-ToF PET images for clinical oncology using F-18 FDG.


    1. Table of Acceptance Criteria and Reported Device Performance

    The 510(k) summary describes performance improvements over non-ToF Q.Clear reconstruction. It does not explicitly state discrete acceptance criteria values but rather demonstrates general improvements in imaging metrics and equivalence to ToF images.

    Metric / Acceptance Criteria and Reported Device Performance (Precision DL vs. non-ToF Q.Clear):

    • Quantitation Accuracy: Improved accuracy. Performance similar to ToF images.
    • Contrast Recovery (CR): Enhanced. Performance similar to ToF images.
    • Contrast-to-Noise Ratio (CNR): Enhanced. Performance similar to ToF images.
    • Lesion Detectability: Explicitly tested, with improvement implied by the CR and CNR enhancements.
    • Dose / Time Reduction: Explicitly tested. (Specific results not detailed, but likely aims to show maintenance of quality with reduced dose/time, or enhanced quality at standard dose/time.)
    • Overall Image Quality (Clinical Assessment): Acceptable diagnostic results by board-certified radiologists, demonstrating acceptable image quality.
    • Preference (Clinical Assessment): Readers preferred Precision DL images over unassisted images, and found them similar to Discovery MI ToF images.
    • Safety and Effectiveness (Regulatory Acceptance): No new questions of safety or effectiveness, hazards, unexpected results, or adverse effects were identified compared to the predicate device. Substantially Equivalent.
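    The phantom metrics listed above, Contrast Recovery and Contrast-to-Noise Ratio, are typically computed from region-of-interest statistics. A hedged sketch follows; the NEMA-style CR definition and all pixel values are illustrative assumptions, not GE HealthCare's exact procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_ratio = 4.0                              # known lesion:background activity ratio
    background_roi = rng.normal(100.0, 5.0, 500)  # simulated background pixel values
    lesion_roi = rng.normal(360.0, 5.0, 50)       # simulated lesion values (partial-volume loss)

    # Percent contrast recovery: measured contrast relative to the known contrast
    measured_ratio = lesion_roi.mean() / background_roi.mean()
    cr = 100.0 * (measured_ratio - 1.0) / (true_ratio - 1.0)

    # Contrast-to-noise ratio: lesion-background difference over background noise
    cnr = (lesion_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

    print(f"CR = {cr:.1f}%  CNR = {cnr:.1f}")
    ```

    A reconstruction that raises CR without inflating background noise raises CNR as well, which is the tradeoff the Low/Medium/High Precision DL models expose.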

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Clinical Data for Bench Testing): 80 PET-CT exams.
      • 40 exams from an Omni Legend system.
      • 40 exams from Discovery MI systems (with hardware-based ToF).
    • Sample Size (Clinical Reader Study): Not explicitly stated precisely for the number of cases and images. It mentions "clinical cases of the same patients obtained on Discovery MI and Omni Legend with Precision DL."
    • Data Provenance: Multiple clinical sites in North America, Europe, and Israel. The data was "segregated, completely independent, and not used in any stage of the algorithm development, including training." This indicates prospective or retrospectively collected data used for testing only.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts (for Ground Truth related to phantom data): Not applicable for phantom data, as ground truth is known from inserted lesions.
    • Number of Experts (for Clinical Reader Study): "Board certified radiologists." The exact number is not explicitly stated in the summary, nor are their specific years of experience. However, the study involved reviews and preference questions by these experts.

    4. Adjudication Method

    • The summary mentions a "clinical reader study" where "board certified radiologists... answered blinded preference questions comparing clinical cases." This suggests individual reader assessments were aggregated, but it does not explicitly state an adjudication method like 2+1 or 3+1 for resolving discrepancies in diagnostic findings. The focus appears to be on overall image quality and preference rather than a specific diagnostic consensus for each case.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, a clinical reader study was performed, which included "board certified radiologists" reviewing "clinical cases." They answered "blinded preference questions comparing clinical cases of the same patients obtained on Discovery MI and Omni Legend with Precision DL."
    • Effect Size of Human Readers with AI vs. Without AI Assistance: The summary states, "The results of the reader study and preference questions support the determination of substantial equivalence. All readers attested that their assessments of Precision DL demonstrated acceptable diagnostic results." While it indicates positive results and physician acceptance, it does not quantify an effect size of how much human readers improved their performance (e.g., in diagnostic accuracy, confidence, or reduced read time) with AI assistance compared to reading without AI assistance (i.e., using only non-ToF Q.Clear images). The study primarily focused on image quality acceptability and preference.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Yes, a standalone performance assessment was conducted as part of the "additional engineering bench testing." This included quantitative assessments using both clinical and phantom data for metrics such as Quantitation Accuracy, Contrast Recovery, Contrast-to-Noise Ratio, Lesion Detectability, and Dose/Time Reduction. This part of the testing directly evaluated the algorithm's output (processed images) against established ground truths/references.

    7. Type of Ground Truth Used

    • For Bench Testing (Quantitative Metrics):
      • Phantom Data: Known quantitation from inserted lesions of known size, location, and contrast.
      • Clinical Data: Discovery MI's ToF PET images served as a reference for comparison, implying they are considered the gold standard for high-quality images that Precision DL aims to emulate.
    • For Clinical Reader Study: The "ground truth" here is implied to be the expert consensus on acceptable diagnostic quality and preference rather than a definitive diagnosis based on pathology or long-term outcomes for each case. The goal was to confirm that the enhanced images retained or improved diagnostic acceptability.

    8. Sample Size for the Training Set

    • The summary states, "Precision DL's training used clinical data from diverse clinical sites, accounting for relevant variations in patients and sites' protocols."
    • However, the specific sample size for the training set is not provided in the given text.

    9. How the Ground Truth for the Training Set Was Established

    • The summary indicates that Precision DL "is trained to enhance non-ToF images to have IQ performance similar to ToF images." This implies that the ground truth for training would likely be high-quality ToF PET images (potentially from a system like Discovery MI) that the algorithm was designed to mimic or achieve certain quality metrics aligned with ToF performance.
    • The text doesn't detail the process of establishing ground truth for individual images within the training set, but it's reasonable to infer a reference standard from ToF images was used.

    K Number
    K230807
    Date Cleared
    2023-04-20

    (28 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    GE Healthcare Japan Corporation

    Intended Use

    The Deep Learning Image Reconstruction software is a deep learning based reconstruction method intended to produce cross-sectional images of the head and whole body by computer reconstruction of X-ray transmission data taken at different angles and planes, including Axial, (Volumetric), and Cardiac acquisitions, for all ages.

    Deep Learning Image Reconstruction software can be used for head, whole body, cardiac, and vascular CT applications.

    Device Description

    Deep Learning Image Reconstruction is an image reconstruction method that uses a dedicated Deep Neural Network (DNN) that has been designed and trained specifically to generate CT Images to give an image appearance, as shown on axial NPS plots, similar to traditional FBP images while maintaining the performance of ASiR-V in the following areas: image noise (pixel standard deviation), low contrast detectability, high-contrast spatial resolution, and streak artifact suppression.

    The images produced are branded as "TrueFidelity™ CT Images". Reconstruction times with Deep Learning Image Reconstruction support a normal throughput for routine CT.

    The deep learning technology is integrated into the scanner's existing raw data-based image reconstruction chain to produce DICOM compatible "TrueFidelity™ CT Images".

    The system allows user selection of three strengths of Deep Learning Image Recon: Low, Medium or High. The strength selection will vary with individual users' preference for the specific clinical need.

    The DLR algorithm was modified on the Revolution CT/Apex platform for improved reconstruction speed and image quality and cleared in February 2022 with K213999. The same modified DLIR is now being ported to Revolution EVO (K131576) /Revolution Maxima (K192686), Revolution Ascend (K203169, K213938) and Discovery CT750 HD family CT systems including Discovery CT750 HD, Revolution Frontier and Revolution Discovery CT (K120833).

    AI/ML Overview

    The provided text describes that the Deep Learning Image Reconstruction software was tested for substantial equivalence to a predicate device (K213999). The study performed was largely an engineering bench testing, comparing various image quality metrics between images reconstructed with Deep Learning Image Reconstruction (DLIR) and ASiR-V from the same raw datasets.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The text indicates that the device aims to maintain the performance of ASiR-V in specific areas while offering an image appearance similar to traditional FBP images. The "acceptance criteria" can be inferred from the list of image quality metrics evaluated, with the performance goal being comparable or improved relative to ASiR-V.

    Acceptance Criteria (implied goal: performance comparable to or better than ASiR-V) and Reported Device Performance (implied: met acceptance criteria, no adverse findings):

    • Image noise (pixel standard deviation): DLIR maintains ASiR-V performance.
    • Low contrast detectability (LCD): Evaluation performed. (Implied: met acceptance criteria.)
    • High-contrast spatial resolution (MTF): Evaluation performed. (Implied: met acceptance criteria.)
    • Streak artifact suppression: DLIR maintains ASiR-V performance.
    • Spatial resolution, longitudinal (FWHM slice sensitivity profile): Evaluation performed. (Implied: met acceptance criteria.)
    • Noise Power Spectrum (NPS) and standard deviation of noise: Evaluation performed (NPS plots similar to FBP). (Implied: met acceptance criteria.)
    • CT number uniformity: Evaluation performed. (Implied: met acceptance criteria.)
    • CT number accuracy: Evaluation performed. (Implied: met acceptance criteria.)
    • Contrast-to-Noise Ratio (CNR): Evaluation performed. (Implied: met acceptance criteria.)
    • Artifact analysis (metal objects, unintended motion, truncation): Evaluation performed. (Implied: met acceptance criteria.)
    • Pediatric phantom IQ performance evaluation: Evaluation performed. (Implied: met acceptance criteria.)
    • Low-dose lung cancer screening protocol IQ performance evaluation: Evaluation performed. (Implied: met acceptance criteria.)
    • Image appearance (NPS plots similar to traditional FBP): Designed to give an image appearance, as shown on axial NPS plots, similar to traditional FBP images.
    • No additional risks/hazards, warnings, or limitations: No additional hazards were identified, and no unexpected test results were observed.
    • Maintains normal throughput for routine CT: Reconstruction times with Deep Learning Image Reconstruction support a normal throughput for routine CT.
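    Two of the bench metrics above, pixel standard deviation of noise and the Noise Power Spectrum, can be sketched from noise-only ROIs. The ROI size, pixel spacing, and Gaussian noise model below are assumptions for illustration; a real evaluation would use subtracted phantom scans:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    pixel_mm = 0.5                                 # assumed in-plane pixel spacing (mm)
    rois = rng.normal(0.0, 8.0, size=(32, 64, 64)) # 32 simulated noise-only ROIs (HU)

    # Pixel standard deviation of noise across all ROIs
    noise_sd = rois.std(ddof=1)

    # NPS(fx, fy) = (dx*dy / (Nx*Ny)) * <|DFT of zero-mean ROI|^2>, averaged over ROIs
    centered = rois - rois.mean(axis=(1, 2), keepdims=True)
    nps = (pixel_mm**2 / (64 * 64)) * np.mean(np.abs(np.fft.fft2(centered))**2, axis=0)

    # Parseval check: integrating the NPS over frequency recovers the noise variance
    variance_from_nps = nps.sum() / (64 * 64 * pixel_mm**2)
    print(f"noise SD = {noise_sd:.2f} HU, from NPS = {np.sqrt(variance_from_nps):.2f} HU")
    ```

    The shape of the NPS (not just its integral) is what the submission compares against FBP: DLIR is designed so its axial NPS plots resemble the FBP noise texture.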

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The text states "the identical raw datasets obtained on GEHC's Revolution Ascend, Revolution Frontier and Discovery CT750 HD systems". However, the number of cases or specific sample size for these datasets is not explicitly stated.
    • Data Provenance: The raw datasets were "obtained on GEHC's Revolution Ascend, Revolution Frontier and Discovery CT750 HD systems". The country of origin is not specified, and it is stated that the study used retrospective raw datasets (i.e., existing data, not newly acquired data for the study).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The provided text focuses on engineering bench testing and image quality metrics. It does not mention the use of experts to establish ground truth for the test set or their qualifications. The evaluation primarily relies on quantitative image quality metrics.

    4. Adjudication Method for the Test Set

    Since experts were not explicitly used to establish ground truth, there is no mention of an adjudication method for the test set in the provided text.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size

    An MRMC comparative effectiveness study was not performed according to the provided text. The study focused on technical image quality comparisons at the algorithm level, not human reader performance with or without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance evaluation was done. The study described is primarily a standalone evaluation of the algorithm's image quality output (e.g., noise, resolution, artifacts, detectability) when compared to images reconstructed with ASiR-V from the same raw data.

    7. The Type of Ground Truth Used

    The "ground truth" for the test set was essentially:

    • Quantitative Image Quality Metrics: Performance relative to ASiR-V for metrics like image noise, LCD, spatial resolution, streak artifact suppression, CT uniformity, CT number accuracy, CNR, spatial resolution (longitudinal), NPS, and artifact analysis.
    • Reference Image Appearance: The stated goal was an image appearance similar to traditional FBP images shown on axial NPS plots.

    There is no mention of pathology, expert consensus on clinical findings, or outcomes data being used as ground truth for this particular substantial equivalence study.

    8. The Sample Size for the Training Set

    The text states that the Deep Neural Network (DNN) is "trained on the CT scanner" and models the scanned object using "information obtained from extensive phantom and clinical data." However, the specific sample size for the training set is not provided.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set is implicitly established through the "extensive phantom and clinical data" mentioned as being used to train the DNN. The text indicates the DNN is trained to model noise propagation and identify noise characteristics to remove it, and to generate images with an appearance similar to traditional FBP while maintaining ASiR-V performance. This suggests the training involves learning from "ground truth" as defined by:

    • Reference Image Quality: Likely images reconstructed with proven methods (e.g., FBP, ASiR-V) or images from phantoms with known properties.
    • Noise Characteristics: The DNN is trained to understand and model noise.

    However, the specific methodology for establishing this ground truth for the training data (e.g., expert annotation, simulated data, pathology confirmed disease) is not detailed in the provided text.


    Why did this record match?
    Applicant Name (Manufacturer) :

    GE Healthcare Finland Oy

    Intended Use

    Indications for Use for CARESCAPE Canvas 1000:

    CARESCAPE Canvas 1000 is a multi-parameter patient monitor intended for use in multiple areas within a professional healthcare facility.

    CARESCAPE Canvas 1000 is intended for use on adult, pediatric, and neonatal patients one patient at a time.

    CARESCAPE Canvas 1000 is indicated for monitoring of:

    · hemodynamic (including ECG, ST segment, arrhythmia detection, ECG diagnostic analysis and measurement, invasive pressure, non-invasive blood pressure, pulse oximetry, regional oxygen saturation, total hemoglobin concentration, cardiac output (thermodilution and pulse contour), temperature, mixed venous oxygen saturation, and central venous oxygen saturation),

    · respiratory (impedance respiration, airway gases (CO2, O2, N2O, and anesthetic agents), spirometry, gas exchange), and

    · neurophysiological status (including electroencephalography, Entropy, Bispectral Index (BIS), and neuromuscular transmission).

    CARESCAPE Canvas 1000 is able to detect and generate alarms for ECG arrhythmias: atrial fibrillation, accelerated ventricular rhythm, asystole, bigeminy, bradycardia, ventricular couplet, irregular, missing beat, multifocal premature ventricular contractions (PVCs), pause, R on T, supra ventricular tachycardia, trigeminy, ventricular bradycardia, ventricular fibrillation/ ventricular tachycardia, ventricular tachycardia, and VT>2. CARESCAPE Canvas 1000 also shows alarms from other ECG sources.

    CARESCAPE Canvas 1000 also provides other alarms, trends, snapshots and events, and calculations and can be connected to displays, printers and recording devices.

    CARESCAPE Canvas 1000 can interface to other devices. It can also be connected to other monitors for remote viewing and to data management software devices via a network.

    CARESCAPE Canvas 1000 is intended for use under the direct supervision of a licensed healthcare practitioner, or by personnel trained in proper use of the equipment in a professional healthcare facility.

    CARESCAPE Canvas 1000 is not intended for use in an MRI environment.

    Indications for Use for CARESCAPE Canvas Smart Display:

    CARESCAPE Canvas Smart Display is a multi-parameter patient monitor intended for use in multiple areas within a professional healthcare facility.

    CARESCAPE Canvas Smart Display is intended for use on adult, pediatric, and neonatal patients one patient at a time.

    CARESCAPE Canvas Smart Display is indicated for monitoring of:

    · hemodynamic (including ECG, ST segment, arrhythmia detection, ECG diagnostic analysis and measurement, invasive pressure, non-invasive blood pressure, pulse oximetry, regional oxygen saturation, total hemoglobin concentration, cardiac output (thermodilution), and temperature), and

    · respiratory (impedance respiration, airway gases (CO2)).

    CARESCAPE Canvas Smart Display is able to detect and generate alarms for ECG arrhythmias: atrial fibrillation, accelerated ventricular rhythm, asystole, bigeminy, bradycardia, ventricular couplet, irregular, missing beat, multifocal premature ventricular contractions (PVCs), pause, R on T, supraventricular tachycardia, trigeminy, ventricular bradycardia, ventricular fibrillation/ventricular tachycardia, ventricular tachycardia, and VT>2. CARESCAPE Canvas Smart Display also shows alarms from other ECG sources.

    CARESCAPE Canvas Smart Display also provides other alarms, trends, snapshots and events. CARESCAPE Canvas Smart Display can use CARESCAPE ONE or CARESCAPE Patient Data Module (PDM) as patient data acquisition devices. It can also be connected to other monitors for remote viewing and to data management software devices via a network.

    CARESCAPE Canvas Smart Display is intended for use under the direct supervision of a licensed healthcare practitioner, or by personnel trained in proper use of the equipment in a professional healthcare facility.

    CARESCAPE Canvas Smart Display is not intended for use in an MRI environment.

    Indications for Use for CARESCAPE Canvas D19:

    CARESCAPE Canvas D19 is intended for use as a secondary display with a compatible host device. It is intended for displaying measurement and parametric data from the host device and providing visual and audible alarms generated by the host device.

    CARESCAPE Canvas D19 enables controlling the host device, including starting and discharging a patient case, changing parametric measurement settings, changing alarm limits and disabling alarms.

    Using CARESCAPE Canvas D19 with a compatible host device enables real-time multi-parameter patient monitoring and continuous evaluation of the patient's ventilation, oxygenation, hemodynamic, circulation, temperature, and neurophysiological status.

    Indications for Use for F2 Frame; F2-01:

    The F2 Frame, a module frame with two slots, is intended to be used with compatible GE multi-parameter patient monitors to interface with two single-width parameter modules, a CARESCAPE ONE with a slide mount, and a recorder.

    The F2 Frame is intended for use in multiple areas within a professional healthcare facility. The F2 Frame is intended for use under the direct supervision of a licensed healthcare practitioner, or by personnel trained in proper use of the equipment in a professional healthcare facility.

    The F2 Frame is intended for use on adult, pediatric, and neonatal patients and on one patient at a time.

    Device Description

    Hardware and software modifications to the legally marketed predicate device CARESCAPE B850 V3.2 resulted in the new products CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display, along with the CARESCAPE Canvas D19 and F2 Frame (F2-01), all of which are the subject of this submission.

    CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display are new modular multi-parameter patient monitoring systems. In addition, the CARESCAPE Canvas D19 and F2 Frame (F2-01) are a new secondary display and a new module frame, respectively.

    The CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display patient monitors incorporate a 19-inch display with a capacitive touch screen, and the screen content is user-configurable. They have an integrated alarm light and USB connectivity for other user input devices. The user interface is touchscreen-based and can also be used with a mouse, a keyboard, or a remote controller. The system also includes the medical application software (CARESCAPE Software version 3.3). The CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display include features and subsystems that are optional or configurable.

    The CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display are compatible with the CARESCAPE Patient Data Module and CARESCAPE ONE acquisition device via F0 docking station (cleared separately).

    For the CARESCAPE Canvas 1000 patient monitor, other acquisition modules, the E-modules (cleared separately), can be chosen based on care requirements and patient needs. Interfacing subsystems that can be used to connect the E-modules to the CARESCAPE Canvas 1000 include a new two-slot parameter module F2 frame (F2-01), a five-slot parameter module F5 frame (F5-01), and a seven-slot parameter module F7 frame (F7-01).

    The CARESCAPE Canvas 1000 can also be used together with the new secondary CARESCAPE Canvas D19 display. The CARESCAPE Canvas D19 display provides a capacitive touch screen, and the screen content is user configurable. The CARESCAPE Canvas D19 display integrates audible and visual alarms and provides USB connectivity for other user input devices.

    AI/ML Overview

    Please note that the provided text is a 510(k) summary for a medical device and primarily focuses on demonstrating substantial equivalence to a predicate device through non-clinical bench testing and adherence to various standards. It explicitly states that clinical studies were not required to support substantial equivalence. Therefore, some of the requested information regarding clinical studies, human expert involvement, and ground truth establishment from patient data will likely not be present.

    Based on the provided text, here's the information regarding acceptance criteria and device performance:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a formal table of specific, quantifiable acceptance criteria alongside reported performance data. Instead, it states that various tests were conducted to demonstrate that the design meets specifications and complies with consensus standards. The performance is generally reported as "meets the specifications," "meets the EMC requirements," "meets the electrical safety requirements," and "fulfilled through compliance."

    However, we can infer some "acceptance criteria" based on the standards and tests mentioned:

    Category: General Performance
    Inferred Acceptance Criteria (Based on Compliance): Device design meets specifications relevant to its intended use (multi-parameter patient monitoring, ECG, ST segment, arrhythmia detection, various physiological measurements).
    Reported Device Performance: "demonstrating the design meets the specifications"

    Category: Hardware
    Inferred Acceptance Criteria (Based on Compliance): Hardware functions as intended and meets safety/performance standards.
    Reported Device Performance: "Hardware Bench Testing conducted"

    Category: Alarms
    Inferred Acceptance Criteria (Based on Compliance): Alarm system (classification, notification, adjustment, critical limits, On/Off, audio silencing) functions correctly and meets relevant standards (IEC 60601-1-8).
    Reported Device Performance: "Alarms Bench Testing conducted." "Alarm management core functionalities: Classification and notification of alarms, Adjustment of alarm settings, Possibility to set critical alarm limits, Alarm On/Off functionality and audio silencing - Identical (to predicate)." "meets the specifications listed in the requirements." "Additional data is provided for compliance to: IEC 60601-1-8: 2020..."

    Category: EMC
    Inferred Acceptance Criteria (Based on Compliance): Meets Electromagnetic Compatibility (EMC) requirements as per IEC 60601-1-2 Edition 4.1 2020 and FDA guidance.
    Reported Device Performance: "meet the EMC requirements described in IEC 60601-1-2 Edition 4.1 2020." "evaluated for electromagnetic compatibility and potential risks from common emitters."

    Category: Electrical Safety
    Inferred Acceptance Criteria (Based on Compliance): Meets electrical safety requirements as per IEC 60601-1:2020 "Edition 3.2" and 21 CFR Part 898, § 898.12 (electrode lead wires and cables).
    Reported Device Performance: "meet the electrical safety requirements of IEC 60601-1:2020 'Edition 3.2'." "performed by a recognized independent and Certified Body Testing Laboratory (CBTL)." "fulfilled through compliance with IEC 60601-1:2020... clause 8.5.2.3."

    Category: Specific Parameters
    Inferred Acceptance Criteria (Based on Compliance): Meets performance standards for various physiological measurements (ECG, ST segment, NIBP, SpO2, temperature, etc.) as detailed by specific IEC/ISO standards (e.g., IEC 60601-2-25, IEC 60601-2-27, IEC 80601-2-30, ISO 80601-2-55). Includes the EK-Pro arrhythmia detection algorithm performing equivalently to the predicate.
    Reported Device Performance: "Additional data is provided for compliance to: IEC 60601-2-25:2011, IEC 60601-2-27:2011, IEC 80601-2-30: 2018, IEC 60601-2-34: 2011, IEC 80601-2-49: 2018, ISO 80601-2-55: 2018, ISO 80601-2-56: 2017+AMD1:2018, ISO 80601-2-61: 2017, IEC 80601-2-26:2019, IEC 60601-2-40: 2016, ANSI/AAMI EC57:2012." "EK-Pro arrhythmia detection algorithm: EK-Pro V14 - Identical (to predicate)."

    Category: Environmental
    Inferred Acceptance Criteria (Based on Compliance): Operates and stores safely within specified temperature, humidity, and pressure ranges. Withstands mechanical stress, fluid ingress, and packaging requirements.
    Reported Device Performance: "confirmed to meet the specifications listed in the requirements." "Environmental (Mechanical, and Thermal Safety) testing" conducted. "Fluid ingress." "Packaging Bench Testing."

    Category: Reprocessing
    Inferred Acceptance Criteria (Based on Compliance): Reprocessing efficacy validation meets acceptance criteria based on documented instructions and worst-case devices/components, following FDA guidance "Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling."
    Reported Device Performance: "Reprocessing efficacy validation has been conducted." "The reprocessing efficacy validation met the acceptance criteria for the reprocessing efficacy validation tests."

    Category: Human Factors/Usability
    Inferred Acceptance Criteria (Based on Compliance): Meets usability requirements as per IEC 60601-1-6: 2020 and IEC 62366-1: 2020, and complies with FDA guidance "Applying Human Factors and Usability Engineering to Medical Devices."
    Reported Device Performance: "Summative Usability testing has been concluded with 16 US Clinical, 16 US Technical and 15 US Cleaning users." "follows the FDA Guidance for Industry and Food and Drug Administration Staff 'Applying Human Factors and Usability Engineering to Medical Devices'."

    Category: Software
    Inferred Acceptance Criteria (Based on Compliance): Complies with FDA software guidance documents (e.g., Content of Premarket Submissions for Software, General Principles of Software Validation, Off-The-Shelf Software Use) and software standards IEC 62304: 2015 and ISO 14971:2019, addressing patient safety, security, and privacy risks.
    Reported Device Performance: "follows the FDA software guidance documents as outlined in this submission." "Software testing was conducted." "Software for this device is considered as a 'Major' level of concern." "Software standards IEC 62304: 2015 ... and risk management standard ISO 14971:2019 ... were also applied." "patient safety, security, and privacy risks have been addressed."

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document implies that the "test set" for performance evaluation was the device itself and its components as described ("CARESCAPE Canvas 1000, CARESCAPE Canvas Smart Display, CARESCAPE Canvas D19 and F2 Frame (F2-01)").
      • For usability testing, "16 US Clinical, 16 US Technical and 15 US Cleaning users" were involved.
    • Data Provenance: The testing described is non-clinical bench testing.
      • For usability testing, the users were located in the US.
      • No direct patient data or retrospective/prospective study data is mentioned beyond the device's inherent functional characteristics being tested according to standards.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not applicable in the context of establishing "ground truth" for patient data, as no clinical studies with patient data requiring expert adjudication were conducted or reported to establish substantial equivalence.
    • For usability testing, "16 US Clinical, 16 US Technical and 15 US Cleaning users" participated. Their specific qualifications (e.g., years of experience, types of healthcare professionals) are not detailed in this summary.

    4. Adjudication Method for the Test Set

    • Not applicable, as no clinical studies with patient data requiring adjudication were conducted or reported.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No MRMC study was done, as the document explicitly states: "The subjects of this premarket submission... did not require clinical studies to support substantial equivalence." The device is a patient monitor, not an AI-assisted diagnostic tool for image interpretation or similar.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • The performance evaluations mentioned (e.g., for general device functionality, electrical safety, EMC, specific parameter measurements like ECG/arrhythmia detection) represent the device's standalone performance in a bench setting, demonstrating its adherence to established standards and specifications. There is no separate "algorithm only" performance study reported distinctly from integrated device testing. The EK-Pro V14 algorithm, which is part of the device, is noted as "identical" to the predicate, implying its performance characteristics are maintained.

    7. The Type of Ground Truth Used

    • For the non-clinical bench testing, the "ground truth" was established by conformance to internationally recognized performance and safety standards (e.g., IEC, ISO, AAMI/ANSI) and the engineering specifications of the device/predicate. These standards define the acceptable range of performance for various parameters.
    • For usability testing, the "ground truth" was the successful completion of tasks and overall user feedback/satisfaction as assessed by human factors evaluation methods.
    • No ground truth from expert consensus on patient data, pathology, or outcomes data was used, as clinical studies were not required.

    8. The Sample Size for the Training Set

    • Not applicable. This document describes a 510(k) submission for a patient monitor, not a machine learning or AI model trained on a dataset. The device contains "Platform Software that has been updated from version 3.2 to version 3.3," but this refers to traditional software development and not a machine learning model requiring a "training set" in the AI sense.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable, as there is no mention of a "training set" in the context of machine learning. The software development likely followed conventional software engineering practices, with ground truth established through design specifications, requirements, and verification/validation testing.

    K Number
    K221680
    Manufacturer
    Date Cleared
    2023-03-01

    (265 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Applicant Name (Manufacturer): GE Healthcare

    Intended Use

    The system is intended for use by Nuclear Medicine (NM) or Radiology practitioners and referring physicians for display, processing, archiving, printing, reporting and networking of NMI data, including planar scans (Static, Whole Body, Dynamic, Multi-Gated) and tomographic scans (SPECT, dedicated PET or Camera-Based PET) acquired by gamma cameras or PET scanners. The system can run on a dedicated workstation or in a server-client configuration.

    The NM or PET data can be coupled with registered and/or fused CT or MR scans, and with physiological signals, in order to depict, localize, and/or quantify the distribution of radionuclide tracers and anatomical structures in scanned body tissue for clinical diagnostic purposes.

    The DaTQUANT optional application enables visual evaluation and quantification of 123I-ioflupane (DaTscan™) images. The DaTQUANT Normal Database option enables quantification of 123I-ioflupane (DaTscan™) images relative to normal population databases. These applications may assist in detection of loss of functional dopaminergic neuron terminals in the striatum, which is correlated with Parkinson disease.
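
    The summary does not describe DaTQUANT's actual computation, but striatal uptake in 123I-ioflupane imaging is commonly quantified as a specific binding ratio (SBR) of striatal counts relative to a non-specific background region such as the occipital cortex. A minimal illustrative sketch under that common definition — the function name and voxel values below are hypothetical, not taken from the device:

```python
import numpy as np

def specific_binding_ratio(striatal_roi, background_roi):
    """SBR = (mean striatal counts - mean background counts) / mean background counts."""
    striatum = float(np.mean(striatal_roi))
    background = float(np.mean(background_roi))
    return (striatum - background) / background

# Hypothetical voxel-count arrays, for illustration only.
striatum_voxels = np.array([120.0, 130.0, 125.0])
occipital_voxels = np.array([50.0, 50.0, 50.0])
print(specific_binding_ratio(striatum_voxels, occipital_voxels))  # 1.5
```

    A normal-database option, as described above, would then compare such a ratio against the distribution observed in a reference population.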

    The Q.Lung AI application may aid physicians in:

    - Diagnosis of Pulmonary Embolism (PE), Chronic Obstructive Pulmonary Disease (COPD), Emphysema, and other lung deficiencies.

    - Assessing the fraction of total lung function provided by a lobe or whole lung for lung cancer resection requiring removal of an entire lobe, bilobectomy, or pneumonectomy.

    The Q.Brain application allows the user to visualize and quantify relative changes in the brain's metabolic function or blood flow activity between a subject's images and controls, which may result from brain conditions such as:

    - Epileptic seizures

    - Dementia, such as Alzheimer's disease, Lewy body dementia, Parkinson's disease with dementia, vascular dementia, and frontotemporal dementia

    - Inflammation

    - Brain death

    - Cerebrovascular disease, such as acute stroke and chronic and acute ischemia

    - Traumatic Brain Injury (TBI)

    When integrated with the patient's clinical and diagnostic information, the application may aid the physician in the interpretation of cognitive complaints, neuro-degenerative disease processes, and brain injuries.

    The Alcyone CFR application allows for the quantification of coronary vascular function by deriving Myocardial Blood Flow (MBF) and then calculating Coronary Flow Reserve (CFR) indices on data acquired on PET scanners and on stationary SPECT scanners with the capacity for dynamic SPECT imaging. These indices may add information for physicians using Myocardial Perfusion Imaging for the diagnosis of Coronary Artery Disease (CAD).
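
    Alcyone CFR's formulas are not given in the summary, but CFR is conventionally defined as the ratio of hyperemic (stress) to resting myocardial blood flow. A minimal sketch under that standard definition — the MBF values are hypothetical:

```python
def coronary_flow_reserve(mbf_stress, mbf_rest):
    """CFR index: stress myocardial blood flow divided by rest MBF (both in mL/min/g)."""
    if mbf_rest <= 0:
        raise ValueError("rest MBF must be positive")
    return mbf_stress / mbf_rest

# Hypothetical global MBF values in mL/min/g.
print(coronary_flow_reserve(3.0, 1.5))  # 2.0
```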

    The Exini Bone application is intended to be used with NM bone scans for the evaluation of adult male patients with bone metastases from prostate cancer. Exini Bone quantifies the selected lesions and provides a Bone Scan Index value as adjunct information related to the progression of disease.
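
    Exini Bone's implementation is not detailed here, but the Bone Scan Index is defined as the percentage of total skeletal mass involved by segmented metastatic hotspots. A toy sketch of that summation — the lesion mass fractions below are hypothetical:

```python
def bone_scan_index(lesion_mass_fractions):
    """BSI: percent of total skeletal mass involved by lesions, summed over
    all segmented hotspots (each expressed as a fraction of skeleton mass)."""
    return 100.0 * sum(lesion_mass_fractions)

# Three hypothetical hotspots covering 0.4%, 1.0%, and 0.6% of the skeleton.
print(round(bone_scan_index([0.004, 0.010, 0.006]), 2))  # 2.0
```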

    The Q.Liver application provides processing, quantification, and multidimensional review of Liver SPECT/PET and CT images for display, segmentation, and calculation of the SPECT 'liver to lung' shunt value and the patient's Body Surface Area (BSA) for use in calculating a therapeutic dose for Selective Internal Radiation Therapy (SIRT) treatment using a user-defined formula.
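
    The user-defined SIRT dose formula itself is not specified, but the two inputs the text names have conventional definitions: the lung shunt fraction (lung counts over lung-plus-liver counts) and body surface area, for which the DuBois formula is one common choice. A sketch under those assumptions — the count values are hypothetical, and the device's actual formulas may differ:

```python
def lung_shunt_fraction(lung_counts, liver_counts):
    """Fraction of administered activity shunting from the liver to the lungs."""
    return lung_counts / (lung_counts + liver_counts)

def bsa_dubois(height_cm, weight_kg):
    """DuBois & DuBois body surface area estimate in m^2."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

# Hypothetical planar/SPECT counts and patient dimensions.
print(round(lung_shunt_fraction(5_000, 95_000), 3))  # 0.05
print(round(bsa_dubois(180, 75), 2))                 # about 1.94 m^2
```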

    The Q.Thera AI application allows physicians to review and monitor patient radiation doses derived from nuclear medicine imaging data, including SPECT/CT, PET/CT, and Whole-body Planar images, and from biological samples from the patient. The application provides estimates of isotope residence time, absorbed dose, and equivalent dose at the whole organ level, as well as estimates of whole-body effective dose. The output from Q.Thera AI may aid physicians in monitoring patient radiation doses.

    For use with internally administered radioactive products. Q.Thera AI should not be used to deviate from approved product dosing and administration instructions. Refer to the product's prescribing information.

    Device Description

    Xeleris V Processing and Review System is a Nuclear Medicine Software system that is designed for general nuclear medicine processing and review procedures for detection of radioisotope tracer uptake in the patient's body, using a variety of individual processing applications oriented to specific clinical applications. It includes all of the clinical applications and features in the current production version of the predicate Xeleris V and introduces two clinical applications:

    Q.Thera AI: The Q.Thera AI application allows physicians to review and monitor patient radiation doses derived from nuclear medicine imaging data, including SPECT/CT and Whole-body Planar images, and from biological samples from the patient. The application provides estimates of isotope residence time, absorbed dose, and equivalent dose at the whole organ level, as well as estimates of whole-body effective dose. The output from Q.Thera AI may aid physicians in monitoring patient radiation doses.

    Q.Thera AI is a modification of the predicate's Dosimetry Toolkit application, enhancing a site's dosimetry workflow through the following updates:

    • Image Pre-Processing: Q.Thera AI uses the predicate's Q.Volumetrix MI application for image preprocessing, bringing additional automated organ segmentations as well as enabling dosimetry on PET/CT imaging data.
    • Dosimetry Calculations: Q.Thera AI adds calculation of radiation doses to Dosimetry Toolkit's previous determination of isotope residence time. Similar to the reference Olinda/EXM (K163687), the added calculations follow the guidelines published by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine (SNM) and models from Publication No. 89 of the International Commission on Radiological Protection (ICRP).
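
    The MIRD schema referenced above computes absorbed dose to a target organ as a sum over source organs of time-integrated activity times a tabulated S factor: D(target) = Σ_source Ã(source) · S(target ← source). A minimal sketch of that summation — the activity and S-factor values below are placeholders for illustration, not tabulated data:

```python
def absorbed_dose_mgy(a_tilde_mbq_h, s_factors_mgy_per_mbq_h):
    """MIRD-style absorbed dose (mGy) to one target organ: sum over source
    organs of time-integrated activity * S(target <- source)."""
    return sum(
        a_tilde_mbq_h[source] * s
        for source, s in s_factors_mgy_per_mbq_h.items()
    )

# Hypothetical time-integrated activities (MBq*h) and placeholder S factors.
a_tilde = {"liver": 500.0, "spleen": 120.0}
s_to_liver = {"liver": 0.04, "spleen": 0.001}  # mGy per MBq*h, not real values
print(absorbed_dose_mgy(a_tilde, s_to_liver))
```

    In a real implementation, the time-integrated activities come from the residence-time step, and the S factors from MIRD/ICRP tabulations for the chosen isotope and phantom model.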

    Generate Planar: The Generate Planar application produces 2D derived planar images from 3D SPECT images that are acquired using GE Healthcare's StarGuide SPECT-CT system (K210173). Generate Planar was first cleared on Xeleris 4.0 (K153355). It was also included in StarGuide's 510(k) clearance for producing derived planar images from hybrid SPECT-CT studies. Xeleris V brings the Generate Planar application from Xeleris 4.0 and expands it to also produce derived planar images from SPECT-only studies.
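
    The reprojection method Generate Planar uses is not described in the summary; conceptually, a derived planar image collapses the 3D SPECT volume along a viewing direction. A toy sketch of that idea only — real derived-planar generation would also model attenuation and detector geometry:

```python
import numpy as np

def derived_planar(volume, axis=1):
    """Toy reprojection: collapse a 3D count volume (z, y, x) along one
    axis to form a 2D planar-like image."""
    return volume.sum(axis=axis)

volume = np.ones((4, 3, 2))      # hypothetical 3D count volume
planar = derived_planar(volume)  # project along the y axis
print(planar.shape)  # (4, 2)
```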

    AI/ML Overview

    This document does not contain the specific acceptance criteria or a detailed study proving the device meets those criteria, as typically found in a clinical study report. The document is a 510(k) summary for the Xeleris V Processing and Review System, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting a de novo clinical trial with detailed performance metrics and acceptance thresholds.

    However, based on the information provided, we can infer some aspects related to the evaluation of the new applications, Q.Thera AI and Generate Planar, that are part of the Xeleris V system.

    Here's a breakdown of the available information:

    1. Table of acceptance criteria and reported device performance:

    The document does not provide a table with explicit acceptance criteria (e.g., minimum sensitivity, specificity, accuracy) or quantitative reported device performance for the Q.Thera AI and Generate Planar applications against predefined thresholds.

    Instead, the non-clinical testing sections describe the scope of testing for these new applications:

    • Q.Thera AI: "Bench testing for Q.Thera AI confirmed the correctness of the resulting radiation doses across different possible combinations (e.g. models, organs, isotopes) of calculations."
    • Generate Planar: "For Generate Planar, bench testing demonstrated similarity between derived planar images produced from SPECT only studies to derived planar images produced from SPECT-CT studies. Similarity was demonstrated using representative clinical datasets for a variety of factors that impact attenuation levels (e.g. body region, BMI)."

    These statements highlight that the "acceptance criteria" were qualitative demonstrations of "correctness" for Q.Thera AI calculations and "similarity" for Generate Planar images. There are no numerical performance metrics or thresholds mentioned.
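
    The similarity metric used in the bench testing is not stated. Normalized cross-correlation is one common way to score agreement between two images and illustrates the kind of comparison described — the arrays below are hypothetical:

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Similarity score in [-1, 1]; 1.0 means identical up to scale and offset."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

reference = np.array([[1.0, 2.0], [3.0, 4.0]])
rescaled = 2.0 * reference  # same pattern, different intensity scale
print(round(normalized_cross_correlation(reference, rescaled), 6))  # 1.0
```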

    2. Sample size used for the test set and the data provenance:

    • Q.Thera AI: The document mentions "different possible combinations (e.g. models, organs, isotopes) of calculations" for bench testing, but does not specify a sample size for the test set or the number of cases. The data provenance is also not explicitly stated (e.g., country of origin, retrospective/prospective).
    • Generate Planar: "representative clinical datasets for a variety of factors that impact attenuation levels (e.g. body region, BMI)" were used. Again, the specific sample size, number of cases, and data provenance are not provided.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided. The testing described is bench testing focusing on internal correctness and similarity, not necessarily involving expert-derived ground truth on a test set of patient cases for diagnostic accuracy.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    Not applicable, as no external expert review or adjudication of performance on a clinical test set is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    No MRMC comparative effectiveness study is mentioned. The document explicitly states: "The proposed Xeleris V did not require clinical studies to support substantial equivalence." This implies that no studies comparing human reader performance with and without AI assistance were conducted as part of this submission.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    The descriptions of "bench testing" for both Q.Thera AI and Generate Planar imply standalone evaluations of the algorithms' outputs against expected "correctness" or "similarity" without human intervention for interpretation or diagnosis. However, specific standalone performance metrics (e.g., accuracy against a gold standard) are not provided.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Q.Thera AI: The "correctness of the resulting radiation doses" implies a ground truth based on established dosimetric models and calculations (e.g., "MIRD committee of SNM and ICRP Publication 89"). This would be a ground truth derived from established scientific/medical formulas and guidelines rather than expert consensus on patient data or pathology.
    • Generate Planar: "similarity between derived planar images" suggests a ground truth or reference for comparison were other derived planar images (from SPECT-CT studies as cleared on Xeleris 4.0), rather than a clinical ground truth like pathology.

    8. The sample size for the training set:

    The document does not provide information about the training set size for the AI components of Q.Thera AI or Generate Planar. Given the nature of the description (dosimetry calculations based on models and similarity of image generation), it's possible that these are more rule-based or model-based applications rather than deep learning models requiring large training datasets, but this is not explicitly stated.

    9. How the ground truth for the training set was established:

    This information is not provided, as details about a training set are absent.

