510(k) Data Aggregation
The uMI Panvivo is a PET/CT system designed for providing anatomical and functional images. The PET provides the distribution of specific radiopharmaceuticals. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. PET and CT scans can be performed separately. The system is intended for assessing metabolic (molecular) and physiologic functions in various parts of the body. When used with radiopharmaceuticals approved by the regulatory authority in the country of use, the uMI Panvivo system generates images depicting the distribution of these radiopharmaceuticals. The images produced by the uMI Panvivo are intended for analysis and interpretation by qualified medical professionals. They can serve as an aid in detection, localization, evaluation, diagnosis, staging, re-staging, monitoring, and/or follow-up of abnormalities, lesions, tumors, inflammation, infection, organ function, disorders, and/or diseases, in several clinical areas such as oncology, cardiology, neurology, infection and inflammation. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.
The CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
The proposed device uMI Panvivo combines a 295/235 mm axial field of view (FOV) PET and 160-slice CT system to provide high quality functional and anatomical images, fast PET/CT imaging and better patient experience. The system includes PET system, CT system, patient table, power distribution unit, control and reconstruction system (host, monitor, and reconstruction computer, system software, reconstruction software), vital signal module and other accessories.
The uMI Panvivo has been previously cleared by FDA via K243538. The main modifications to the uMI Panvivo (K243538) in this submission are the addition of Deep MAC (also named AI MAC), Digital Gating (also named Self-gating), OncoFocus (also named uExcel Focus and RMC), NeuroFocus (also named HMC), DeepRecon.PET (also named HYPER DLR or DLR), uExcel DPR (also named HYPER DPR or HYPER AiR), and uKinetics. Details about the modifications are listed below:
- Deep MAC, Deep Learning-based Metal Artifact Correction (also named AI MAC), is an image reconstruction algorithm that combines physical beam-hardening correction with deep learning. It is intended to correct artifacts caused by metal implants and external metal objects.
- Digital Gating (also named Self-gating, cleared via K232712) automatically extracts a respiratory motion signal from the list-mode data during acquisition, a data-driven (DD) method. The respiratory motion signal is calculated by tracking the location of the center of distribution (COD) within a body-cavity mask. Using this signal, the system can perform gated reconstruction without a respiratory capture device.
- OncoFocus (also named uExcel Focus and RMC, cleared via K232712) is an AI-based algorithm that reduces respiratory motion artifacts in PET/CT images while also reducing PET/CT misalignment.
- NeuroFocus (also named HMC) is a head motion correction solution that employs a statistics-based method to correct motion artifacts automatically, using the centroid of distribution (COD) without manual parameter tuning, to generate motion-free images.
- DeepRecon.PET (also named HYPER DLR or DLR, cleared via K193210) uses a deep learning technique to produce images with better SNR (signal-to-noise ratio) in a post-processing procedure.
- uExcel DPR (also named HYPER DPR or HYPER AiR, cleared via K232712) is a deep learning-based PET reconstruction algorithm designed to enhance the SNR of reconstructed images. High-SNR images improve clinical diagnostic efficacy, particularly under low-count acquisition conditions (e.g., low-dose radiotracer administration or fast scanning protocols).
- uKinetics (cleared via K232712) is a kinetic modeling toolkit for indirect parametric analysis of dynamic images and direct parametric analysis of multi-pass dynamic data. An image-derived input function (IDIF) can be extracted from anatomical CT images and dynamic PET images. Both the IDIF and a population-based input function (PBIF) can serve as the input function of the Patlak model to generate kinetic images that reveal the biodistribution of the metabolized molecule, using indirect or direct methods.
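For context, the Patlak model named above is a graphical method: for an irreversibly trapped tracer, plotting the tissue-to-plasma ratio against "Patlak time" (the running integral of the input function divided by its current value) gives a late-time straight line whose slope is the net influx rate Ki. A minimal sketch of the indirect (reconstruct-then-fit) approach, with all function and variable names hypothetical:

```python
import numpy as np

def patlak_ki(t, ct, cp, t_star=20.0):
    """Indirect Patlak analysis (illustrative sketch).

    t      : PET frame mid-times in minutes
    ct     : tissue time-activity curve (a voxel or ROI)
    cp     : plasma input function (IDIF or PBIF), same time grid
    t_star : time after which uptake is treated as irreversible

    Returns (Ki, V): slope (net influx rate) and intercept.
    """
    # Running integral of the input function (trapezoidal rule)
    int_cp = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2.0))
    )
    x = int_cp / cp            # "Patlak time"
    y = ct / cp                # tissue-to-plasma ratio
    late = t >= t_star         # fit only the linear late-time portion
    ki, v = np.polyfit(x[late], y[late], 1)
    return ki, v
```

In the indirect method this fit is applied per voxel to reconstructed frames; the direct method folds the same linear model into the reconstruction itself.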
The provided FDA 510(k) clearance letter describes the uMI Panvivo PET/CT System and mentions several new software functionalities (Deep MAC, Digital Gating, OncoFocus, NeuroFocus, DeepRecon.PET, uExcel DPR, and uKinetics). The document includes performance data for four of these functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC.
The following analysis focuses on the acceptance criteria and study details for these four AI-based image processing/reconstruction algorithms as detailed in the document. The document presents these as "performance verification" studies.
Overview of Acceptance Criteria and Device Performance (for DeepRecon.PET, uExcel DPR, OncoFocus, DeepMAC)
The document details the evaluation of four specific software functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC. Each of these has its own set of acceptance criteria and reported performance results, detailed below.
1. Table of Acceptance Criteria and Reported Device Performance
Software Functionality | Evaluation Item | Evaluation Method | Acceptance Criteria | Reported Performance
---|---|---|---|---
DeepRecon.PET | Image consistency | Mean SUV of phantom background and liver ROIs (regions of interest) measured and bias calculated; used to evaluate image bias. | The bias is less than 5%. | Pass
DeepRecon.PET | Image background noise | a) Background variation (BV) in the IQ phantom; b) liver and white-matter signal-to-noise ratio (SNR) in patient cases. Used to evaluate noise-reduction performance. | DeepRecon.PET has lower BV and higher SNR than OSEM with Gaussian filtering. | Pass
DeepRecon.PET | Image contrast-to-noise ratio | a) Contrast-to-noise ratio (CNR) of the hot spheres in the IQ phantom; b) CNR of lesions. CNR measures signal level in the presence of noise; used to evaluate lesion detectability. | DeepRecon.PET has higher CNR than OSEM with Gaussian filtering. | Pass
uExcel DPR | Quantitative evaluation | Contrast recovery (CR), background variability (BV), and contrast-to-noise ratio (CNR) calculated from NEMA IQ phantom data reconstructed with uExcel DPR and OSEM at 1 to 5 minutes per bed; coefficient of variation (COV) calculated from uniform cylindrical phantom data reconstructed with both methods. | The averaged CR, BV, and CNR of the uExcel DPR images should be superior to those of the OSEM images; uExcel DPR requires fewer counts to achieve a matched COV compared to OSEM. | Pass. NEMA IQ phantom analysis: an average noise reduction of 81% and an average SNR enhancement of 391% were observed. Uniform cylindrical phantom analysis: 1/10 of the counts achieves a matching noise level.
uExcel DPR | Qualitative evaluation | uExcel DPR images reconstructed at lower counts qualitatively compared with full-count OSEM images. | uExcel DPR reconstructions at reduced count levels demonstrate comparable or superior image quality relative to higher-count OSEM reconstructions. | Pass. 1.7~2.5 MBq/kg radiopharmaceutical injection combined with 2~3 minutes of whole-body scanning (4~6 bed positions) achieves comparable diagnostic image quality; clinical evaluation by radiologists showed images sufficient for clinical diagnosis, with uExcel DPR exhibiting lower noise, better contrast, and superior sharpness compared to OSEM.
OncoFocus | Volume relative to no motion correction (∆Volume) | Calculate the volume relative to no-motion-correction images. | The ∆Volume value is less than 0%. | Pass
OncoFocus | Maximal standardized uptake value relative to no motion correction (∆SUVmax) | Calculate the SUVmax relative to no-motion-correction images. | The ∆SUVmax value is larger than 0%. | Pass
DeepMAC | Quantitative evaluation | For PMMA phantom data, the average CT value in the metal-affected area was compared with the same area of the control image, before and after DeepMAC. | After using DeepMAC, the difference between the average CT value in the metal-affected area and the same area of the control image does not exceed 10 HU. | Pass
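The phantom metrics in the table (CR, BV, CNR) follow NEMA-style definitions. The formulas below are illustrative reconstructions of those metrics, not the vendor's exact implementation:

```python
import numpy as np

def contrast_recovery(hot_mean, bkg_mean, true_ratio):
    """NEMA-style percent contrast of a hot sphere:
    ((measured hot/background) - 1) / ((true hot/background) - 1) * 100."""
    return (hot_mean / bkg_mean - 1.0) / (true_ratio - 1.0) * 100.0

def background_variability(bkg_roi_means):
    """BV: standard deviation of background ROI means over their average, in %."""
    m = np.asarray(bkg_roi_means, dtype=float)
    return m.std(ddof=1) / m.mean() * 100.0

def cnr(hot_mean, bkg_mean, bkg_sd):
    """Contrast-to-noise ratio of a sphere against background noise."""
    return (hot_mean - bkg_mean) / bkg_sd
```

The "superior to OSEM" criteria then amount to comparing these numbers between the two reconstructions of the same phantom acquisition.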
2. Sample Sizes Used for the Test Set and Data Provenance
-
DeepRecon.PET:
- Phantoms: NEMA IQ phantoms.
- Clinical Patients: 20 volunteers.
- Data Provenance: "collected from various clinical sites" and explicitly stated to be "different from the training data." The document does not specify country of origin or if it's retrospective/prospective, but "volunteers were enrolled" suggests prospective collection for the test set.
-
uExcel DPR:
- Phantoms: Two NEMA IQ phantom datasets, two uniform cylindrical phantom datasets.
- Clinical Patients: 19 human subjects.
- Data Provenance: "derived from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites and during separated time periods," and "different from the training data." "Study cohort" and "human subjects" imply prospective collection for the test set.
-
OncoFocus:
- Clinical Patients: 50 volunteers.
- Data Provenance: "collected from general clinical scenarios" and explicitly stated to be "on cases different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.
-
DeepMAC:
- Phantoms: PMMA phantom datasets.
- Clinical Patients: 20 human subjects.
- Data Provenance: "from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites" and explicitly stated to be "different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state that experts established "ground truth" for the quantitative metrics (e.g., SUV, CNR, BV, CR, ∆Volume, ∆SUVmax, HU differences) for the test sets. These seem to be derived from physical measurements on phantoms or calculations from patient image data using established methods.
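The SUV and bias figures referenced here are simple ratios computed from image data. A sketch of the standard body-weight SUV and a percent-bias check of the kind used for the "less than 5%" criterion (illustrative only; names are hypothetical):

```python
def suv_bw(activity_bq_ml, injected_dose_bq, weight_kg):
    """Body-weight SUV, assuming 1 g/mL tissue density and a
    decay-corrected activity concentration (illustrative)."""
    return activity_bq_ml / (injected_dose_bq / (weight_kg * 1000.0))

def percent_bias(measured, reference):
    """Bias metric of the kind used for the 'less than 5%' criterion."""
    return abs(measured - reference) / reference * 100.0
```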
-
For qualitative evaluation/clinical diagnosis assessment:
- DeepRecon.PET: Two American Board of Radiology-certified physicians.
- uExcel DPR: Two American board-certified nuclear medicine physicians.
- OncoFocus: Two American Board of Radiology-certified physicians.
- DeepMAC: Two American Board of Radiology-certified physicians.
The exact years of experience for these experts are not provided, only their board certification status.
4. Adjudication Method for the Test Set
The document states that the radiologists/physicians evaluated images "independently" (uExcel DPR) or simply "were evaluated by" (DeepRecon.PET, OncoFocus, DeepMAC). There is no mention of an adjudication method (such as 2+1 or 3+1 consensus) for discrepancies between reader evaluations for any of the functionalities. The evaluations appear to be separate assessments, with no stated consensus mechanism.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- The document describes qualitative evaluations by radiologists/physicians comparing the AI-processed images to conventionally processed images (OSEM/no motion correction/no MAC). These are MRMC comparative studies in the sense that multiple readers evaluated multiple cases.
- However, these studies were designed to evaluate the image quality (e.g., diagnostic sufficiency, noise, contrast, sharpness, lesion detectability, artifact reduction) of the AI-processed images compared to baseline images, rather than to measure an improvement in human reader performance (e.g., diagnostic accuracy, sensitivity, specificity, reading time) when assisted by AI vs. without AI.
- Therefore, the studies were not designed as comparative effectiveness studies measuring the effect size of human reader improvement with AI assistance. They focus on the perceived quality of the AI-processed images themselves.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, for DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC, quantitative (phantom and numerical) evaluations were conducted that represent the standalone performance of the algorithms in terms of image metrics (e.g., SUV bias, BV, SNR, CNR, CR, COV, ∆Volume, ∆SUVmax, HU differences). These quantitative results are directly attributed to the algorithm's output without human intervention for the measurement/calculation.
- The qualitative evaluations by the physicians (described in point 3 above) also assess the output of the algorithm, but with human interpretation.
7. The Type of Ground Truth Used
-
For Quantitative Evaluations:
- Phantoms: The "ground truth" for phantom studies is implicitly the known physical properties and geometry of the NEMA IQ and PMMA phantoms, allowing for quantitative measurements (e.g., true SUV, true CR, true signal-to-noise).
- Clinical Data (DeepRecon.PET, uExcel DPR): For these reconstruction algorithms, "ground-truth images were reconstructed from fully-sampled raw data" for the training set. For the test set, comparisons seem to be made against OSEM with Gaussian filtering or full-count OSEM images as reference/comparison points, rather than an independent "ground truth" established by an external standard.
- Clinical Data (OncoFocus): Comparisons are made relative to "no motion correction images" (∆Volume and ∆SUVmax), implying these are the baseline for comparison, not necessarily an absolute ground truth.
- Clinical Data (DeepMAC): Comparisons are made to a "control image" without metal artifacts for quantitative assessment of HU differences.
-
For Qualitative Evaluations:
- The "ground truth" is based on the expert consensus / qualitative assessment by the American Board-certified radiologists/nuclear medicine physicians, who compared images for attributes like noise, contrast, sharpness, motion artifact reduction, and diagnostic sufficiency. This suggests a form of expert consensus, although no specific adjudication is described. There's no mention of pathology or outcomes data as ground truth.
8. The Sample Size for the Training Set
The document provides the following for the training sets:
- DeepRecon.PET: "image samples with different tracers, covering a wide and diverse range of clinical scenarios." No specific number provided.
- uExcel DPR: "High statistical properties of the PET data acquired by the Long Axial Field-of-View (LAFOV) PET/CT system enable the model to better learn image features. Therefore, the training dataset for the AI module in the uExcel DPR system is derived from the uEXPLORER and uMI Panorama GS PET/CT systems." No specific number provided.
- OncoFocus: "The training dataset of the segmentation network (CNN-BC) and the mumap synthesis network (CNN-AC) in OncoFocus was collected from general clinical scenarios. Each subject was scanned by UIH PET/CT systems for clinical protocols. All the acquisitions ensure whole-body coverage." No specific number provided.
- DeepMAC: Not explicitly stated for the training set. Only validation dataset details are given.
9. How the Ground Truth for the Training Set Was Established
- DeepRecon.PET: "Ground-truth images were reconstructed from fully-sampled raw data. Training inputs were generated by reconstructing subsampled data at multiple down-sampling factors." This implies that the "ground truth" for training was derived from high-quality, fully-sampled (and likely high-dose) PET data.
- uExcel DPR: "Full-sampled data is used as the ground truth, while corresponding down-sampled data with varying down-sampling factors serves as the training input." Similar to DeepRecon.PET, high-quality, full-sampled data served as the ground truth.
- OncoFocus:
- For CNN-BC (body cavity segmentation network): "The input data of CNN-BC are CT-derived attenuation coefficient maps, and the target data of the network are body cavity region images." This suggests the target (ground truth) was pre-defined body cavity regions.
- For CNN-AC (attenuation map (umap) synthesis network): "The input data are non-attenuation-corrected (NAC) PET reconstruction images, and the target data of the network are the reference CT attenuation coefficient maps." The ground truth was "reference CT attenuation coefficient maps," likely derived from actual CT scans.
- DeepMAC: Not explicitly stated for the training set. The mention of pre-trained neural networks suggests an established training methodology, but the specific ground truth establishment is not detailed.
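The "down-sampled data with varying down-sampling factors" used as training inputs for DeepRecon.PET and uExcel DPR is commonly simulated by randomly discarding a fraction of recorded events. The document does not state the method; the sketch below shows the standard binomial-thinning idea as an assumption:

```python
import numpy as np

def thin_counts(counts, keep_fraction, rng=None):
    """Simulate a low-count acquisition from full-count data (assumed method).

    Each recorded event is kept independently with probability
    `keep_fraction`; a Poisson-distributed count map stays Poisson
    with a proportionally reduced mean (binomial thinning).
    """
    rng = np.random.default_rng() if rng is None else rng
    return rng.binomial(np.asarray(counts), keep_fraction)
```

Applying this at several `keep_fraction` values yields paired low-count inputs and full-count targets for supervised training.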
The uEXPLORER is a diagnostic imaging system that combines two existing imaging modalities PET and CT. The quantitative distribution information of PET radiopharmaceuticals within the patient body measured by PET can assist healthcare providers in assessing metabolic and physiological functions. CT provides diagnostic tomographic anatomical information as well as photon attenuation for the scanned region. The accurate registration and fusion of PET and CT images provides anatomical reference for the findings in the PET images.
This system is intended to be operated by qualified healthcare professionals to assist in the detection, diagnosis, staging, restaging, treatment planning and treatment response evaluation for diseases, inflammation, infection and disorders in, but not limited to oncology, cardiology and neurology. The system maintains independent functionality of the CT device, allowing for single modality CT diagnostic imaging.
This CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.*

*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The proposed device uEXPLORER combines a 194 cm axial field of view (AFOV) PET and multi-slice CT system to provide high quality functional and anatomical images, fast PET/CT imaging and better patient experience. The system includes PET gantry, CT gantry, patient table, power supply cabinet, console and reconstruction system, chiller, vital signal module.
The uEXPLORER has been previously cleared by FDA via K182938. The main modifications to the uEXPLORER (K182938) in this submission are the addition of HYPER Iterative, HYPER DLR, Digital Gating, remote assistance, and CT system modifications.
Details about the modifications are listed as below:
- HYPER Iterative (cleared in K193241) uses a regularized iterative reconstruction algorithm, which allows for more iterations while keeping image noise at an acceptable level by incorporating a noise penalty term into the objective function.
- HYPER DLR (cleared in K193210) uses a deep learning technique to produce a better SNR (signal-to-noise ratio).
- Digital Gating (cleared in K193241) uses a motion correction method to provide better alternatives for reducing motion effects without sacrificing image statistics.
- Remote assistance.
- PET recon matrix: 1024×1024.
- TG-66 compliant flat tabletop.
- Update the performance according to the NEMA NU 2-2018 standard.
- Update the operation system.
- CT system modification: add Low Dose CT Lung Cancer Screening, Auto ALARA kVp, Organ-Based Auto ALARA mA, EasyRange, Injector Linkage, Shuttle Perfusion, Online MPR and Dual Energy analysis function. All functions have been cleared via K230162.
This document appears to be a 510(k) Premarket Notification from Shanghai United Imaging Healthcare Co., Ltd. for their uEXPLORER device.
Here's an analysis of the provided text to extract information about the acceptance criteria and study that proves the device meets them:
Crucial Observation: The document explicitly states: "No Clinical Study is included in this submission." This means that the information typically found in an FDA submission regarding "acceptance criteria" through a clinical performance study (like an MRMC study or standalone performance) is not present here. Instead, the substantial equivalence relies on non-clinical testing and comparison to predicate devices, particularly regarding modifications to previously cleared components.
Therefore, many of the requested points below cannot be fully answered as they pertain to clinical or human-in-the-loop performance studies that were not conducted or provided in this submission for the specific device being reviewed.
However, I can extract information related to the "non-clinical testing" and the rationale for substantial equivalence.
Acceptance Criteria and Device Performance (Based on Non-Clinical Testing and Substantial Equivalence Rationale):
Given the statement "No Clinical Study is included in this submission," the acceptance criteria are primarily related to non-clinical performance, safety, and functionality demonstrating equivalence to predicate devices and adherence to relevant standards. The "reported device performance" is essentially that it met these non-clinical criteria and maintained safety/effectiveness equivalent to the predicate.
1. Table of acceptance criteria and the reported device performance:
Acceptance Criteria Category | Specific Criteria (Implied from document) | Reported Device Performance (Implied from document)
---|---|---
Functional Equivalence | Maintains the same basic operating principles/fundamental technologies as the predicate. | "The uEXPLORER employs the same basic operating principles and fundamental technologies... The differences above between the proposed device and predicate device do not affect the intended use, technology characteristics, safety and effectiveness."
Indications for Use Equivalence | Has similar indications for use as the predicate. | "The uEXPLORER has ... the similar indications for use as the predicate device." (Indications for Use are listed in detail in section 6 of the document, matching the predicate's intent.)
Physical/Technical Specifications | Key specifications (e.g., gantry bore, scintillator, axial FOV, maximum table load) remain equivalent to the predicate device. | Confirmed: PET gantry bore (760 mm), scintillator material (LYSO), number of detector rings (672), axial FOV (194 cm), and maximum table load (250 kg) are identical to the predicate (K182938).
Addition of New Features (Non-Clinical Validation) | New features (HYPER Iterative, HYPER DLR, Digital Gating, CT system modifications) are either identical to previously cleared devices or validated through non-clinical testing. | HYPER Iterative: cleared in K193241; "uses a regularized iterative reconstruction algorithm, which allows for more iterations while keeping the image noise at an acceptable level by incorporating a noise penalty term into the objective function" (implies non-clinical validation in a prior submission).
 | | HYPER DLR: cleared in K193210; "uses a deep learning technique to produce better SNR" (implies non-clinical validation in a prior submission).
 | | Digital Gating: cleared in K193241; "uses motion correction method..." (implies non-clinical validation in a prior submission).
 | | CT system modification: "All functions have been cleared via K230162." Non-clinical tests were conducted for "Algorithm and Image performance."
Safety - Electrical Safety & EMC | Conformance to relevant electrical safety and electromagnetic compatibility (EMC) standards. | Claims conformance to ANSI AAMI ES60601-1, IEC 60601-1-2, IEC 60601-2-44, IEC 60601-1-3, and IEC 60825-1 (implies positive test results against these standards).
Safety - Software | Conformance to software development and cybersecurity standards. | Claims conformance to IEC 60601-1-6 (usability), IEC 62304 (software life cycle processes), NEMA PS 3.1-3.20 (DICOM), FDA guidance for software contained in medical devices, and FDA guidance for cybersecurity (implies software development and testing followed these standards).
Safety - Biocompatibility | Conformance to biocompatibility standards for patient-contact materials (patient table and touch points). | Claims conformance to ISO 10993-1, ISO 10993-5, and ISO 10993-10 (implies positive results for relevant components).
Performance - PET | Conformance to PET performance measurement standards. | Claims conformance to NEMA NU 2-2018 (Performance Measurements of Positron Emission Tomographs): "Update the performance according to the NEMA NU 2-2018 standard" (implies the device meets or exceeds the specifications in this standard).
Risk Management | Application of risk management processes. | Claims conformance to ISO 14971:2019 (application of risk management to medical devices), implying risks were identified, assessed, and mitigated.
Quality System | Compliance with the Quality System Regulation. | Claims conformance to 21 CFR Part 820 Quality System Regulation (a general requirement for all medical device manufacturers).
Radiological Health | Compliance with radiological health regulations. | Claims conformance to Code of Federal Regulations, Title 21, Subchapter J - Radiological Health (a general requirement for X-ray emitting devices).
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not applicable in the context of clinical data. For non-clinical performance and algorithm testing, the "sample size" would refer to the types and number of phantoms/datasets used. The document states "Algorithm and Image performance tests were conducted," but does not specify the number or nature of these test sets.
- Data Provenance: Not specified for any test data. The company is based in China.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not applicable, as no clinical study with expert ground truth establishment was conducted or presented in this submission.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable, as no clinical study requiring adjudication was conducted or presented.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study was explicitly NOT done. The submission states: "No Clinical Study is included in this submission." The new features (HYPER Iterative, HYPER DLR, Digital Gating, and CT modifications) had "been cleared" in other predicate devices via non-clinical performance evaluations, not human reader studies.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, in essence, standalone performance validation of the algorithms was done, but as part of prior submissions for the predicate components. The document states "Algorithm and Image performance tests were conducted for the uEXPLORER during the product development." The key new features, HYPER Iterative, HYPER DLR, and Digital Gating, as well as the CT system modifications, are explicitly stated as having been "cleared" in previous 510(k) submissions (K193241, K193210, K230162). This implies their standalone performance was evaluated and accepted in those prior submissions through non-clinical means (e.g., phantom studies, image quality metrics like SNR, spatial resolution, noise reduction). The details of those prior standalone studies are not provided here, but the current submission leverages their previous clearance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the non-clinical "Algorithm and Image performance tests," the ground truth would typically be established based on well-defined physical phantoms with known properties or simulated data, rather than expert consensus, pathology, or outcomes data, which are associated with clinical studies. The specific details are not provided.
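Phantom-based ground truth in such non-clinical tests typically reduces to known geometry plus simple statistics. For example, image noise is often summarized as a coefficient of variation over a uniform region; this is an illustrative definition, not taken from the document:

```python
import numpy as np

def cov_percent(roi_values):
    """Coefficient of variation (noise level) over a uniform-phantom ROI, in %."""
    v = np.asarray(roi_values, dtype=float)
    return v.std(ddof=1) / v.mean() * 100.0
```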
8. The sample size for the training set
- Not applicable directly to this submission. The algorithms (HYPER DLR being deep learning) would have had training data, but those details pertain to their original development and previous clearances (K193210, K193241), not this particular 510(k) submission.
9. How the ground truth for the training set was established
- Not applicable directly to this submission. This information would be found in the documentation for the previous 510(k) clearances for the HYPER DLR and Digital Gating algorithms if they involved supervised learning that required established ground truth. Typically, for medical imaging algorithms, this could involve large datasets with expertly annotated images, but no specifics are in this document.