Search Results
Found 5 results
510(k) Data Aggregation
(253 days)
Medical Image Post-processing Software (uOmnispace.CT)
uOmnispace.CT is a software for viewing, manipulating, evaluating and analyzing medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
- The uOmnispace.CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.
- The uOmnispace.CT Dental Application is intended to provide the user a tool to reconstruct panoramic and paraxial views of jaw.
- The uOmnispace.CT Lung Density Analysis application is intended to segment pulmonary lobes and airway, providing the user quantitative parameters and structure information to evaluate the lung and airway.
- The uOmnispace.CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.
- The uOmnispace.CT Vessel Analysis application is intended to provide a tool for viewing, manipulating, and evaluating CT vascular images.
- The uOmnispace.CT Brain Perfusion application is intended to calculate parameters such as CBV, CBF, etc. in order to analyze functional blood flow information about a region of interest (ROI) in the brain.
- The uOmnispace.CT Heart application is intended to segment heart and extract coronary artery. It also provides analysis of vascular stenosis, plaque and heart function.
- The uOmnispace.CT Calcium Scoring application is intended to identify calcifications and calculate the calcium score.
- The uOmnispace.CT Dynamic Analysis application is intended to support visualization of the CT datasets over time with the 3D/4D display modes.
- The uOmnispace.CT Bone Structure Analysis application is intended to provide visualization and labels for the ribs and spine, and support batch function for intervertebral disk.
- The uOmnispace.CT Liver Evaluation application is intended to provide processing and visualization for liver segmentation and vessel extraction. It also provides a tool for the user to perform liver separation and residual liver segments evaluation.
- The uOmnispace.CT Dual Energy is a post-processing software package that accepts UIH CT images acquired using different tube voltages and/or tube currents of the same anatomical location. The Dual Energy application is intended to provide information on the chemical composition of the scanned body materials and/or contrast agents. Additionally, it enables images to be generated at multiple energies within the available spectrum.
- The uOmnispace.CT Cardiovascular Combined Analysis is an image analysis software package for evaluating contrast enhanced CT images. The CT Cardiovascular Combined Analysis is intended to analyze vascular and cardiac structures. It can be used for the qualitative and quantitative analysis of head-neck, abdomen, multi-body-part combined, and TAVR (Transcatheter Aortic Valve Replacement) CT data as input for the planning of cardiovascular procedures.
- The uOmnispace.CT Body Perfusion is intended to analyze blood flow information of dynamic CT images, by providing various perfusion-related parameters of the body parts.
The uOmnispace.CT is a post-processing software based on the uOmnispace platform for viewing, manipulating, evaluating and analyzing medical images; it can run alone or with other advanced, commercially cleared applications.
uOmnispace.CT contains the following applications:
- uOmnispace.CT Calcium Scoring
- uOmnispace.CT Lung Nodule
- uOmnispace.CT Colon Analysis
- uOmnispace.CT Lung Density Analysis
- uOmnispace.CT Dental Application
- uOmnispace.CT Bone Structure Analysis
- uOmnispace.CT Dual Energy
- uOmnispace.CT Vessel Analysis
- uOmnispace.CT Heart
- uOmnispace.CT Brain Perfusion
- uOmnispace.CT Dynamic Analysis
- uOmnispace.CT Liver Evaluation
- uOmnispace.CT Cardiovascular Combined Analysis
- uOmnispace.CT Body Perfusion
The modifications to the uOmnispace.CT (K233209) in this submission include the following changes:
- Addition of the new Body Perfusion application.
- Extension of the intended patient population for some applications.
- Introduction of deep-learning algorithms in the Lung Density Analysis, Vessel Analysis, Heart, Liver Evaluation and Cardiovascular Combined Analysis applications.
These modifications do not affect the intended use or alter the fundamental scientific technology of the device.
This document describes the acceptance criteria and performance of the Medical Image Post-processing Software (uOmnispace.CT) for several AI-based segmentation algorithms, based on the provided FDA 510(k) clearance letter.
Acceptance Criteria and Reported Device Performance
Application | Algorithm | Validation Type | Acceptance Criteria (Dice Score) | Reported Device Performance (Dice Score) |
---|---|---|---|---|
Lung Density Analysis | Lung segmentation | Dice | 0.97 | 0.9801 |
Lung Density Analysis | Airway segmentation | Dice | 0.86 | 0.8954 |
Vessel Analysis | Bone removal (Abdomen & Limbs) | Dice | 0.90 | 0.96957 |
Vessel Analysis | Bone removal (Head & Neck) | Dice | 0.93 | 0.955 |
Heart | Coronary artery extraction | Dice | 0.870 | 0.916 |
Heart | Heart chamber segmentation | Dice | 0.910 | 0.970 |
Liver Evaluation | Liver segmentation | Dice | 0.97 | 0.981 |
Liver Evaluation | Hepatic artery segmentation | Dice | 0.85 | 0.927 |
Liver Evaluation | Hepatic portal vein segmentation | Dice | 0.89 | 0.933 |
Liver Evaluation | Hepatic vein segmentation | Dice | 0.86 | 0.914 |
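The Dice (Sørensen-Dice) scores above compare each algorithm's output mask against the expert-annotated ground-truth mask. The submission does not disclose how the metric was implemented; the following is a minimal illustrative sketch in Python, with placeholder mask shapes and values chosen only for demonstration.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * intersection / denominator)

# Toy example: a predicted lung mask meets the lung-segmentation criterion in the
# table above only if its Dice score against ground truth is at least 0.97.
pred_mask = np.zeros((64, 64, 64), dtype=bool)
truth_mask = np.zeros((64, 64, 64), dtype=bool)
pred_mask[20:45, 20:45, 20:45] = True
truth_mask[21:45, 20:45, 20:45] = True

score = dice_coefficient(pred_mask, truth_mask)
print(f"Dice = {score:.4f}; meets 0.97 criterion: {score >= 0.97}")
```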
Study Details for AI-Based Algorithms
The software features described in the submission are based on deep learning algorithms. The performance evaluation includes the following details for each application:
1. Lung Density Analysis (Lung Segmentation & Airway Segmentation)
- Sample size used for the test set and data provenance:
  - Sample Size: 100 subjects.
  - Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. It notes the test dataset comprises 100 cases of Chest CT scans covering different gender, age, and anatomical variants.
- Number of experts used to establish the ground truth for the test set and their qualifications:
  - Number of Experts: Not explicitly stated as a specific number of individual experts. The process mentions "well-trained annotators" and "a senior clinical specialist" for review and modification.
  - Qualifications: "well-trained annotators" and "a senior clinical specialist" (no further details on experience provided).
- Adjudication method for the test set:
  - Ground truth annotations are initially done by "well-trained annotators." A "senior clinical specialist" then checks and modifies these annotations to ensure correctness. This implies a form of expert review and potential consensus or single expert finalization.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No, an MRMC comparative effectiveness study was not done. The evaluation focuses on standalone algorithm performance against ground truth.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Yes, the performance testing explicitly evaluates the algorithm's output (Dice coefficient) against a reference standard (ground truth), indicating a standalone performance evaluation.
- The type of ground truth used: Expert consensus, through a process of initial annotation by trained individuals and subsequent review/modification by a senior clinical specialist.
- The sample size for the training set: Not specified in the provided document. It only states that the training data is "independent of the data used to test the algorithm."
- How the ground truth for the training set was established: Not specified in the provided document. It only mentions the training data is independent from the test data.
2. Vessel Analysis (Automatic Bone Removal - Abdomen & Limbs, Head & Neck)
- Sample size used for the test set and data provenance:
  - Sample Size: 156 subjects.
  - Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. It notes the test dataset comprises 156 cases of CTA scans covering different gender, age, and anatomical variants.
- Number of experts used to establish the ground truth for the test set and their qualifications:
  - Number of Experts: Not explicitly stated. The process mentions "well-trained annotators" and "a senior clinical specialist" for review and modification.
  - Qualifications: "well-trained annotators" and "a senior clinical specialist."
- Adjudication method for the test set: Similar to Lung Density Analysis, ground truth annotations are done by "well-trained annotators," with a "senior clinical specialist" checking and modifying them.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Yes.
- The type of ground truth used: Expert consensus.
- The sample size for the training set: Not specified.
- How the ground truth for the training set was established: Not specified.
3. Heart (Coronary Artery Extraction & Heart Chamber Segmentation)
- Sample size used for the test set and data provenance:
  - Sample Size: 72 subjects.
  - Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. It notes the test dataset comprises 72 cases of CCTA scans covering different gender, age, and anatomical variants.
- Number of experts used to establish the ground truth for the test set and their qualifications:
  - Number of Experts: Not explicitly stated. The process mentions "well-trained annotators" and "a senior clinical specialist" for review and modification.
  - Qualifications: "well-trained annotators" and "a senior clinical specialist."
- Adjudication method for the test set: Similar to previous sections, ground truth annotations are done by "well-trained annotators," with a "senior clinical specialist" checking and modifying them.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Yes.
- The type of ground truth used: Expert consensus.
- The sample size for the training set: Not specified.
- How the ground truth for the training set was established: Not specified.
4. Liver Evaluation (Liver, Hepatic Artery, Hepatic Portal Vein, and Hepatic Vein Segmentation)
- Sample size used for the test set and data provenance:
  - Sample Size: 74 subjects for liver and hepatic artery segmentation; 80 subjects for hepatic portal vein and hepatic vein segmentation.
  - Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. It notes the test datasets comprise Chest CT scans covering different gender, age, and anatomical variants.
- Number of experts used to establish the ground truth for the test set and their qualifications:
  - Number of Experts: Not explicitly stated. The process mentions "well-trained annotators" and "a senior clinical specialist" for review and modification.
  - Qualifications: "well-trained annotators" and "a senior clinical specialist."
- Adjudication method for the test set: Similar to previous sections, ground truth annotations are done by "well-trained annotators," with a "senior clinical specialist" checking and modifying them.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Yes.
- The type of ground truth used: Expert consensus.
- The sample size for the training set: Not specified.
- How the ground truth for the training set was established: Not specified.
(232 days)
uOmnispace.CT
uOmnispace.CT is a software for viewing, manipulating, evaluating and analyzing medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
- The uOmnispace.CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.
- The uOmnispace.CT Dental application is intended to provide the user a tool to reconstruct panoramic and paraxial views of jaw.
- The uOmnispace.CT Lung Density Analysis application is intended to segment pulmonary lobes and airway, providing the user quantitative parameters and structure information to evaluate the lung and airway.
- The uOmnispace.CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.
- The uOmnispace.CT Vessel Analysis application is intended to provide a tool for viewing and evaluating CT vascular images.
- The uOmnispace.CT Brain Perfusion is intended to calculate parameters such as CBV, CBF, etc. in order to analyze functional blood flow information about a region of interest (ROI) in brain.
- The uOmnispace.CT Heart application is intended to segment heart and extract coronary artery. It also provides analysis of vascular stenosis, plaque and heart function.
- The uOmnispace.CT Calcium Scoring application is intended to identify calcifications and calculate the calcium score.
- The uOmnispace.CT Dynamic Analysis application is intended to support visualization of the CT datasets over time with the 3D/4D display modes.
- The uOmnispace.CT Bone Structure Analysis application is intended to provide visualization and labels for the ribs and spine, and support batch function for intervertebral disk.
- The uOmnispace.CT Liver Evaluation application is intended to provide processing and visualization for liver segmentation and vessel extraction. It also provides a tool for the user to perform liver separation and residual liver segments evaluation.
- The uOmnispace.CT Dual Energy is a post-processing software package that accepts UIH CT images acquired using different tube voltages and/or tube currents of the same anatomical location. The uOmnispace.CT Dual Energy application is intended to provide information on the chemical composition of the scanned body materials and/or contrast agents. Additionally, it enables images to be generated at multiple energies within the available spectrum.
- The uOmnispace.CT Cardiovascular Combined Analysis is an image analysis software package for evaluating contrast enhanced CT images. The CT Cardiovascular Combined Analysis is intended to analyze vascular and cardiac structures. It can be used for the qualitative and quantitative analysis of head-neck, abdomen, multi-body-part combined, and TAVR (Transcatheter Aortic Valve Replacement) CT data as input for the planning of cardiovascular procedures.
The uOmnispace.CT is a post-processing software based on the uOmnispace platform for viewing, manipulating, evaluating and analyzing medical images; it can run alone or with other advanced, commercially cleared applications.
The provided text describes the performance data for three AI/ML algorithms integrated into the uOmnispace.CT software: Spine Labeling Algorithm, Rib Labeling Algorithm, and TAVR Analysis Algorithm.
Here's a breakdown of the acceptance criteria and study details for each:
1. Spine Labeling Algorithm
Acceptance Criteria Table:
Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
---|---|---|---|
Score based on ground truth | The average score of the proposed device results is higher than 4 points. | 5.0 points | Yes |
Study Proving Device Meets Acceptance Criteria:
- Sample Size for Test Set: 120 subjects.
- Data Provenance: Retrospective, with data collected from five major CT manufacturers (GE, Philips, Siemens, Toshiba, UIH). Clinical subgroups for ethnicity included U.S. (90 subjects) and Asia (30 subjects).
- Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials.
- Qualifications of Experts: Licensed physicians with U.S. credentials.
- Adjudication Method: Ground truth annotations were made by "well-trained annotators" using an interactive tool to set annotation points and assign anatomical labels. All ground truth was finally evaluated by two licensed physicians with U.S. credentials. This suggests a post-annotation review/adjudication by experts.
- MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth.
- Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on a scoring system against ground truth.
- Type of Ground Truth Used: Expert consensus (annotators + review by licensed physicians).
- Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the spine labeling algorithm is independent of the data used to test the algorithm."
- How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.
2. Rib Labeling Algorithm
Acceptance Criteria Table:
Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
---|---|---|---|
Score based on ground truth | The average score of the proposed device results is higher than 4 points. | 5.0 points | Yes |
Study Proving Device Meets Acceptance Criteria:
- Sample Size for Test Set: 120 subjects.
- Data Provenance: Retrospective, with data collected from five major CT manufacturers (GE, Philips, Siemens, Toshiba, UIH). Clinical subgroups for ethnicity included U.S. (80 subjects) and Asia (40 subjects).
- Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials.
- Qualifications of Experts: Licensed physicians with U.S. credentials.
- Adjudication Method: Ground truth annotations were made by "well-trained annotators" using an interactive tool to generate initial rib masks, which were then refined, and anatomical labels assigned. After the first round, annotators "checked each other's annotation." Finally, all ground truth was evaluated by two licensed physicians with U.S. credentials. This indicates a multi-step adjudication process.
- MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth.
- Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on a scoring system against ground truth.
- Type of Ground Truth Used: Expert consensus (annotators + cross-checking + review by licensed physicians).
- Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the rib labeling algorithm is independent of the data used to test the algorithm."
- How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.
3. TAVR Analysis Algorithm
Acceptance Criteria Table:
Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
---|---|---|---|
Verify the consistency with ground truth (Mean Landmark Error) | The mean landmark error between the proposed device results and ground truth is less than the threshold, 1 mm. | 0.86 mm | Yes |
Subjective Scoring of doctors with U.S. professional qualifications | The average score of the evaluation criteria is higher than 2. | 3 points | Yes |
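The 1 mm criterion is a mean landmark error: the average Euclidean distance between the landmark coordinates produced by the algorithm and the expert-annotated ground-truth coordinates. The document does not describe how this was computed; the sketch below is a plausible minimal implementation, assuming 3-D coordinates in millimetres, with hypothetical landmark values.

```python
import numpy as np

def mean_landmark_error(pred_points: np.ndarray, truth_points: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between predicted and ground-truth landmarks.

    Both arrays have shape (n_landmarks, 3), with coordinates already in millimetres.
    """
    distances = np.linalg.norm(pred_points - truth_points, axis=1)
    return float(distances.mean())

# Hypothetical example: three aortic-root landmarks for one case.
pred = np.array([[10.2, 31.5, 48.0], [12.8, 29.9, 50.1], [9.7, 33.2, 47.6]])
truth = np.array([[10.0, 31.0, 48.3], [13.0, 30.4, 49.8], [10.1, 33.0, 47.9]])

error = mean_landmark_error(pred, truth)
print(f"Mean landmark error = {error:.2f} mm; meets 1 mm criterion: {error < 1.0}")
```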
Study Proving Device Meets Acceptance Criteria:
- Sample Size for Test Set: 60 subjects.
- Data Provenance: Retrospective. Clinical subgroups for ethnicity included Asia (25 subjects) and U.S. (35 subjects), with the U.S. data drawn from U.S. Facility 1 (25 subjects) and U.S. Facility 2 (10 subjects).
- Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials for the final evaluation of the ground truth.
- Qualifications of Experts: Licensed physicians with U.S. credentials (specifically, "two MD with the American Board of Radiology Qualification" for the subjective scoring).
- Adjudication Method: Ground truth annotations were made by "well-trained annotators." After the first round of annotation, they "checked each other's annotation." Finally, all ground truth was evaluated by two licensed physicians with U.S. credentials. This indicates a multi-step adjudication process.
- MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth and subjective expert scoring.
- Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on landmark error and subjective expert scoring.
- Type of Ground Truth Used: Expert consensus (annotators + cross-checking + review by licensed physicians).
- Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the post-processing algorithm is independent of the data used to test the algorithm."
- How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.
(202 days)
uOmnispace.MR
uOmnispace.MR is a software solution intended to be used for viewing, manipulating and analyzing medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
- The uOmnispace.MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.
- The uOmnispace.MR Dynamic application is intended to provide a general postprocessing tool for time course studies.
- The uOmnispace.MR MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.
- The uOmnispace.MR MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.
- The uOmnispace.MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced time-course images.
- The uOmnispace.MR Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.
- The uOmnispace.MR Vessel Analysis is intended to provide a tool for viewing, manipulating, and evaluating MR vascular images.
- The uOmnispace.MR DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.
- The uOmnispace.MR United Neuro is intended to view and manipulate MR neurological images.
- The uOmnispace.MR Cardiac Function is intended to view and evaluate functional analysis of cardiac MR images.
- The uOmnispace.MR Flow Analysis is intended to view and evaluate flow analysis of flow MR images.
The uOmnispace.MR is a post-processing software based on the uOmnispace platform (cleared in K230039) for viewing, manipulating, evaluating and analyzing MR images; it can run alone or with other advanced, commercially cleared applications.
This proposed device contains the following applications:
- uOmnispace.MR Stitching
- uOmnispace.MR Dynamic
- uOmnispace.MR MRS
- uOmnispace.MR MAPs
- uOmnispace.MR Breast Evaluation
- uOmnispace.MR Brain Perfusion
- uOmnispace.MR Vessel Analysis
- uOmnispace.MR DCE Analysis
- uOmnispace.MR United Neuro
- uOmnispace.MR Cardiac Analysis
- uOmnispace.MR Flow Analysis
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Validation Type | Acceptance Criteria | Reported Device Performance |
---|---|---|
Dice | To evaluate the proposed device of automatic ventricular segmentation, we compared the results with those of the cardiac function application of predicate device. The Sørensen-Dice coefficient is used to evaluate consistency. If dice > 0.95, it is considered consistent between the two devices. | 1.00 |
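Per the criterion above, consistency is judged case by case: the Sørensen-Dice coefficient between the proposed device's ventricular segmentation and the predicate's is computed for each case, and Dice > 0.95 counts as consistent. A minimal sketch of such a check, assuming paired binary masks per case (function and variable names are illustrative, not taken from the submission):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(a, b).sum() / denom)

def consistency_with_predicate(proposed_masks, predicate_masks, threshold=0.95):
    """Per-case Dice between the two devices' outputs and whether each case passes."""
    scores = [dice(p, q) for p, q in zip(proposed_masks, predicate_masks)]
    return scores, [s > threshold for s in scores]

# Hypothetical usage with two toy ventricular masks per device:
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
scores, passed = consistency_with_predicate([mask, mask], [mask, mask.copy()])
print(scores, passed)  # e.g. [1.0, 1.0] [True, True]
```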
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 114 samples from 114 different patients.
- Data Provenance: The data includes patients of various genders (35 Male, 20 Female, 59 Unknown), ages (5 between 14-25, 12 between 25-40, 22 between 40-60, 13 between 60-79, 62 Unknown), and ethnicities (50 Europe, 53 Asia, 11 USA). The data was acquired using MR scanners from various manufacturers: UIH (58), GE (2), Philips (2), Siemens (52), and with different magnetic field strengths: 1.5T (23), 3.0T (41), 50 Unknown. The text does not explicitly state if the data was retrospective or prospective, but the mention of a "deep learning-based Automatic ventricular segmentation Algorithm for the LV&RV Contour Segmentation feature" and "The performance testing for deep learning-based Automatic ventricular segmentation Algorithm was performed on 114 subjects...during the product development" implies a retrospective study using existing data to validate the developed algorithm.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The test set's ground truth was established by comparing the proposed device's results with those of the predicate device. The text does not explicitly state that human experts established the ground truth for the test set by manually segmenting the images for direct comparison against the algorithm's output. Instead, it seems the predicate device's output serves as the "ground truth" for the comparison of the new device's algorithm.
However, for the training ground truth, the following was stated:
- Number of Experts: Two cardiologists.
- Qualifications: Both cardiologists had "more than 10 years of experience each."
4. Adjudication Method for the Test Set
The study does not describe an adjudication method for the test set in the conventional sense of multiple human readers independently assessing the cases. Instead, the comparison is made between the proposed device's algorithm output and the predicate device's output.
For the training ground truth, the following adjudication method was used:
- Manual tracing was performed by an experienced user.
- Validation of these contours was done by two independent experts (more than 10 years experience).
- If there was a disagreement, a consensus between the experts was reached.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size
No MRMC comparative effectiveness study was done to assess how much human readers improve with AI vs without AI assistance. The study focuses on comparing the proposed device's algorithm performance directly against a predicate device's cardiac function application based on the Dice coefficient.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done for the "deep learning-based Automatic ventricular segmentation Algorithm" for the LV&RV Contour Segmentation feature. The device's algorithm output was directly compared to the output of the predicate device's cardiac function application using the Dice coefficient.
7. The Type of Ground Truth Used
For the test set, the "ground truth" for comparison was the output of the cardiac function application of the predicate device.
For the training set, the ground truth was expert consensus based on manual tracing by an experienced user and validated by two independent cardiologists with over 10 years of experience.
8. The Sample Size for the Training Set
The document states: "The training data used for the training of the cardiac ventricular segmentation algorithm is independent of the data used to test the algorithm." However, it does not provide the specific sample size for the training set.
9. How the Ground Truth for the Training Set Was Established
The ground truth for the training set was established through manual annotation and expert consensus:
- It was "manually drawn on short axis slices in diastole and systole by two cardiologists with more than 10 years of experience each."
- "Manual tracing of the cardiac was performed by an experienced user."
- "The validation of these contours was done by two independent expert (more than 10 years) in this domain."
- "If there is a disagreement, a consensus between the experts was done."
(83 days)
uOmnispace.MI
uOmnispace.MI is a software solution intended to be used for viewing, processing, evaluating and analyzing of PET, CT, MR, SPECT images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
- The uOmnispace.MI MM Fusion application is intended to provide tools for viewing, analyzing and reporting PET, CT, MR, SPECT data with its flexible workflow and optimized layout protocols for dedicated reporting purposes on oncology, neurology, cardiology.
- The uOmnispace.MI MM Oncology application is intended to provide tools to display and analyze the follow-up PET, CT, MR data, with which users can do image registration, lesion segmentation, and statistical analysis.
- The uOmnispace.MI Dynamic Analysis application is intended to display PET data and anatomical data such as CT or MR, and supports to do lesion segmentation and output associated time activity curve.
- The uOmnispace.MI NeuroQ application is intended to analyze the brain PET scan, give quantitative results of the relative activity of different brain regions, and make comparison of activity of normal brain regions in AC database or between two studies from the same patient, as well as provide analysis of amyloid uptake levels in the brain.
- The uOmnispace.MI Emory Cardiac Toolbox application is intended to provide cardiac short axis reconstruction, browsing function. And it also performs perfusion analysis, activity analysis and cardiac function analysis of the cardiac short axis.
The uOmnispace.MI is a post-processing software based on the uOmnispace platform (cleared in K230039) for viewing, manipulating, evaluating and analyzing PET, CT, MR, SPECT images; it can run alone or with other advanced, commercially cleared applications.
This proposed device contains the following applications:
- uOmnispace.MI MM Fusion
- uOmnispace.MI MM Oncology
- uOmnispace.MI Dynamic Analysis
Additionally, uOmnispace.MI offers users the option to run the following third-party applications in uOmnispace.MI:
- uOmnispace.MI NeuroQ
- uOmnispace.MI Emory Cardiac Toolbox
Here's an analysis of the acceptance criteria and study detailed in the provided document, addressing each of your requested points:
Acceptance Criteria and Study Details for uOmnispace.MI
1. Table of Acceptance Criteria and Reported Device Performance
For Spine Labeling Algorithm:
Acceptance Criteria | Reported Device Performance (Average Score) | Meets Criteria? |
---|---|---|
Average score higher than 4 points | 4.951 points | Yes |
For Rib Labeling Algorithm:
Acceptance Criteria | Reported Device Performance (Average Score) | Meets Criteria? |
---|---|---|
Average score higher than 4 points | 5 points | Yes |
Note: The document also states that an average score "higher than 4 points is equivalent to the mean identification rate of spine labeling is greater than 92% (>83.3%, correctly labeled vertebrae number ≥23, total vertebrae number = 25, 23/25 = 92%), and the mean identification rate of rib labeling is greater than 91.7% (>91.5%, correctly labeled rib number ≥22, total rib number = 24, 22/24 ≈ 91.7%)." This indicates the acceptance criteria are linked to established identification rates from literature, ensuring clinical relevance.
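The equivalence that the note draws between the 4-point score threshold and label-level identification rates reduces to simple ratio arithmetic; the sketch below reproduces the quoted ratios and shows a hypothetical label-by-label comparison (the document does not describe the actual mapping from per-case scores to labels, so the example inputs are invented).

```python
def identification_rate(predicted_labels, truth_labels):
    """Fraction of anatomical labels (e.g., vertebra or rib names) assigned correctly."""
    correct = sum(p == t for p, t in zip(predicted_labels, truth_labels))
    return correct / len(truth_labels)

# Ratios quoted in the note above: at least 23 of 25 vertebrae, at least 22 of 24 ribs.
print(23 / 25, 22 / 24)  # 0.92 and ~0.9167

# Hypothetical per-case check against ground-truth vertebra labels:
truth = ["T12", "L1", "L2", "L3", "L4"]
pred = ["T12", "L1", "L2", "L4", "L5"]  # last two labels shifted by one level
print(f"identification rate = {identification_rate(pred, truth):.0%}")  # 60%
```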
2. Sample Size Used for the Test Set and Data Provenance
For Spine Labeling Algorithm:
- Sample Size: 286 CT scans, corresponding to 267 unique patients.
- Data Provenance:
- Countries of Origin: Asian (Chinese) data (106 samples), European data (160 samples), The United States data (20 samples).
- Retrospective/Prospective: Not explicitly stated, but typically such large datasets collected for algorithm validation are retrospective.
For Rib Labeling Algorithm:
- Sample Size: 160 CT scans, corresponding to 156 unique patients.
- Data Provenance:
- Countries of Origin: Asian (Chinese) data (80 samples), The United States data (80 samples).
- Retrospective/Prospective: Not explicitly stated, but likely retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: At least one "senior clinical specialist" is explicitly mentioned for final review and modification. "Well-trained annotators" performed the initial annotations. The exact number of annotators is not specified.
- Qualifications of Experts:
- Annotators: Described as "well-trained annotators." Specific professional qualifications (e.g., radiologist, technician) or years of experience are not provided.
- Reviewer: "A senior clinical specialist." Specific professional qualifications or years of experience are not provided.
4. Adjudication Method for the Test Set
The adjudication method involved a multi-step process:
- Initial annotations were done by "well-trained annotators" using an interactive tool.
- For rib labeling, annotators "check each other's annotation."
- A "senior clinical specialist" performed a final check and modification to ensure correctness.
This indicates a multi-annotator review with a senior specialist as the final adjudicator. It is not explicitly a 2+1 or 3+1 method as such, but rather a hierarchical review process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described in the provided text. The performance verification focused on the standalone algorithm's accuracy against a ground truth, rather than comparing human reader performance with and without AI assistance.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, a standalone (algorithm only) performance study was done. The entire performance verification section describes how the deep learning-based algorithms for spine and rib labeling were tested against ground truth annotations to assess their accuracy in an automated fashion. The reported scores explicitly reflect the algorithm's performance.
7. The Type of Ground Truth Used
The ground truth for both spine and rib labeling was established through expert consensus based on manual annotations, followed by review and modification by a senior clinical specialist. It is not directly pathology or outcome data.
8. The Sample Size for the Training Set
The document explicitly states: "The training data used for the training of the spine labeling algorithm is independent of the data used to test the algorithm." and "The training data used for the training of the rib labeling algorithm is independent of the data used to test the algorithm."
However, the actual sample size for the training set is not provided in the given text.
9. How the Ground Truth for the Training Set Was Established
The document states that the training data and test data were independent. While it describes how the ground truth for the test set was established (well-trained annotators + senior clinical specialist review), it does not explicitly describe the methodology for establishing the ground truth for the training set. It can be inferred that a similar expert annotation process would have been used, but details are not provided.
(195 days)
uOmnispace
uOmnispace is a software solution intended to be used for viewing, manipulation, communication and storage of medical images. It allows processing and filming of multimodality DICOM images.
It can be used as a stand-alone device or together with a variety of cleared and unmodified software options, and also supports plugging in multi-vendor applications which meet interface requirements.
uOmnispace is intended to be used by trained professionals, including but not limited to physicians and medical technicians.
The system is not intended for the displaying of digital mammography images for diagnosis in the U.S.
uOmnispace is a software only medical device, the hardware itself is not seen as part of the medical device and therefore not in the scope of this product.
uOmnispace provides 2D and 3D viewing, annotation and measurement tools, manual and automatic segmentation tools (the rib extraction algorithm is based on Machine Learning), and filming and reporting features to cover the radiological tasks of reading images and reporting. uOmnispace supports DICOM formatted images and objects; CT, MRI, PET and DR modalities are supported.
uOmnispace is a software medical device that allows multiple users to remotely access clinical applications from compatible computers on a network. The system allows processing and filming of multimodality DICOM images. This software is for use with off the-shelf PC computer technology that meets defined minimum specifications.
uOmnispace communicates with imaging systems of different modalities and medical information systems of the hospital using the DICOM3.0 standard.
The system is not intended for the displaying of digital mammography images for diagnosis in the U.S.
The acceptance criteria and the study proving the device meets accepted criteria for the uOmnispace Medical Image Post-processing Software are described below.
1. Table of Acceptance Criteria and Reported Device Performance:
Validation Type | Acceptance Criteria | Reported Device Performance |
---|---|---|
Average DICE | The average dice of testing data is higher than 0.8 | The average dice on testing data set is 0.855 |
2. Sample size used for the test set and data provenance:
- Sample Size: 60 chest CTs.
- Data Provenance: The data was collected during product development. The document does not specify the country of origin of the data, nor does it explicitly state whether it was retrospective or prospective, though the context of "product development" often implies some level of retrospective analysis of collected data.
3. Number of experts used to establish the ground truth for the test set and their qualifications:
- Number of Experts: Not explicitly stated as a specific number, but it involved multiple annotators and a "senior clinical specialist".
- Qualifications: "Annotators" and a "senior clinical specialist". Specific details like years of experience or medical certifications (e.g., radiologist) are not provided for the individual annotators or the senior specialist.
4. Adjudication method for the test set:
- Adjudication Method: A multi-step process:
- An initial rib mask was generated using a threshold-based interactive tool.
- Annotators refined the first-round annotation.
- Annotators checked each other's annotations.
- A senior clinical specialist checked and modified annotations to ensure ground truth correctness.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly done to measure human reader improvement with AI assistance. The study focused on validating the standalone performance of the ML-based rib segmentation algorithm against ground truth.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance study of the ML-based rib segmentation algorithm was done. The performance was evaluated by comparing its output to established ground truth.
7. The type of ground truth used:
- Type of Ground Truth: Expert consensus, established through a multi-step annotation and refinement process involving annotators and a senior clinical specialist.
8. The sample size for the training set:
- The document states that the training data was independent of the test data but does not specify the sample size for the training set.
9. How the ground truth for the training set was established:
- The document does not explicitly describe how the ground truth for the training set was established, only that the training data and testing data were independent.