510(k) Data Aggregation
(83 days)
uOmnispace.MI
uOmnispace.MI is a software solution intended to be used for viewing, processing, evaluating, and analyzing PET, CT, MR, and SPECT images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
- uOmnispace.MI MM Fusion application is intended to provide tools for viewing, analyzing, and reporting PET, CT, MR, and SPECT data, with a flexible workflow and optimized layout protocols for dedicated reporting purposes in oncology, neurology, and cardiology.
- uOmnispace.MI MM Oncology application is intended to provide tools to display and analyze follow-up PET, CT, and MR data, with which users can perform image registration, lesion segmentation, and statistical analysis.
- uOmnispace.MI Dynamic Analysis application is intended to display PET data and anatomical data such as CT or MR, and supports lesion segmentation and output of the associated time-activity curve.
- uOmnispace.MI NeuroQ application is intended to analyze brain PET scans, give quantitative results of the relative activity of different brain regions, and compare activity against normal brain regions in the AC database or between two studies from the same patient, as well as provide analysis of amyloid uptake levels in the brain.
- uOmnispace.MI Emory Cardiac Toolbox application is intended to provide cardiac short-axis reconstruction and browsing functions, and it also performs perfusion analysis, activity analysis, and cardiac function analysis of the cardiac short axis.
uOmnispace.MI is post-processing software based on the uOmnispace platform (cleared in K230039) for viewing, manipulating, evaluating, and analyzing PET, CT, MR, and SPECT images. It can run alone or together with other commercially cleared advanced applications.
This proposed device contains the following applications:
- uOmnispace.MI MM Fusion
- uOmnispace.MI MM Oncology
- uOmnispace.MI Dynamic Analysis
Additionally, uOmnispace.MI offers users the option to run the following third-party applications within uOmnispace.MI:
- uOmnispace.MI NeuroQ
- uOmnispace.MI Emory Cardiac Toolbox
Here is an analysis of the acceptance criteria and study detailed in the provided document, addressing each point below:
Acceptance Criteria and Study Details for uOmnispace.MI
1. Table of Acceptance Criteria and Reported Device Performance
For Spine Labeling Algorithm:
| Acceptance Criteria | Reported Device Performance (Average Score) | Meets Criteria? |
|---|---|---|
| Average score higher than 4 points | 4.951 points | Yes |
For Rib Labeling Algorithm:
| Acceptance Criteria | Reported Device Performance (Average Score) | Meets Criteria? |
|---|---|---|
| Average score higher than 4 points | 5 points | Yes |
Note: The document also states that an average score "higher than 4 points is equivalent to the mean identification rate of spine labeling is greater than 92% (>83.3%, correctly labeled vertebrae number ≥23, total vertebrae number=25, 23/25=92%), and the mean identification rate of rib labeling is greater than 91.7% (>91.5%, correctly labeled rib number ≥22, total rib number=24, 22/24 ≈ 91.7%)." This indicates the acceptance criteria are linked to established identification rates from literature, ensuring clinical relevance.
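The rate figures in the quoted note are simple per-scan arithmetic: correctly labeled structures divided by total structures. A minimal sketch of that calculation, using only the counts quoted above (the function name is illustrative, not taken from the submission):

```python
def identification_rate(correct: int, total: int) -> float:
    """Fraction of structures in one scan that were labeled correctly."""
    return correct / total

# Spine: 23 of 25 vertebrae labeled correctly (counts quoted in the note above)
print(f"spine: {identification_rate(23, 25):.1%}")  # 92.0%

# Rib: 22 of 24 ribs labeled correctly
print(f"rib:   {identification_rate(22, 24):.1%}")  # 91.7%
```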
2. Sample Size Used for the Test Set and Data Provenance
For Spine Labeling Algorithm:
- Sample Size: 286 CT scans, corresponding to 267 unique patients.
- Data Provenance:
- Countries of Origin: Asian (Chinese) data (106 samples), European data (160 samples), United States data (20 samples).
- Retrospective/Prospective: Not explicitly stated, but typically such large datasets collected for algorithm validation are retrospective.
For Rib Labeling Algorithm:
- Sample Size: 160 CT scans, corresponding to 156 unique patients.
- Data Provenance:
- Countries of Origin: Asian (Chinese) data (80 samples), United States data (80 samples).
- Retrospective/Prospective: Not explicitly stated, but likely retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: At least one "senior clinical specialist" is explicitly mentioned for final review and modification. "Well-trained annotators" performed the initial annotations. The exact number of annotators is not specified.
- Qualifications of Experts:
- Annotators: Described as "well-trained annotators." Specific professional qualifications (e.g., radiologist, technician) or years of experience are not provided.
- Reviewer: "A senior clinical specialist." Specific professional qualifications or years of experience are not provided.
4. Adjudication Method for the Test Set
The adjudication method involved a multi-step process:
- Initial annotations were done by "well-trained annotators" using an interactive tool.
- For rib labeling, annotators "check each other's annotation."
- A "senior clinical specialist" performed a final check and modification to ensure correctness.
This indicates a multi-annotator review with a senior specialist as the final adjudicator. It is not explicitly a 2+1 or 3+1 method as such, but rather a hierarchical review process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described in the provided text. The performance verification focused on the standalone algorithm's accuracy against a ground truth, rather than comparing human reader performance with and without AI assistance.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, a standalone (algorithm only) performance study was done. The entire performance verification section describes how the deep learning-based algorithms for spine and rib labeling were tested against ground truth annotations to assess their accuracy in an automated fashion. The reported scores explicitly reflect the algorithm's performance.
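As a rough illustration of how a standalone acceptance check of this kind works, the sketch below averages per-case scores and compares the result against the stated threshold of 4 points; the individual scores are invented for illustration and are not the study data:

```python
from statistics import mean

# Hypothetical per-case scores on the submission's rating scale, assigned when
# comparing algorithm output against ground-truth annotations. Values are
# placeholders, not the reported results.
spine_scores = [5, 5, 4, 5, 5]
rib_scores = [5, 5, 5, 5, 5]

THRESHOLD = 4  # acceptance criterion: average score higher than 4 points

for name, scores in (("spine labeling", spine_scores), ("rib labeling", rib_scores)):
    avg = mean(scores)
    verdict = "meets" if avg > THRESHOLD else "does not meet"
    print(f"{name}: average score {avg:.3f} -> {verdict} the acceptance criterion")
```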
7. The Type of Ground Truth Used
The ground truth for both spine and rib labeling was established through expert consensus based on manual annotations, followed by review and modification by a senior clinical specialist. It is not directly pathology or outcome data.
8. The Sample Size for the Training Set
The document explicitly states: "The training data used for the training of the spine labeling algorithm is independent of the data used to test the algorithm." and "The training data used for the training of the rib labeling algorithm is independent of the data used to test the algorithm."
However, the actual sample size for the training set is not provided in the given text.
9. How the Ground Truth for the Training Set Was Established
The document states that the training data and test data were independent. While it describes how the ground truth for the test set was established (well-trained annotators + senior clinical specialist review), it does not explicitly describe the methodology for establishing the ground truth for the training set. It can be inferred that a similar expert annotation process would have been used, but details are not provided.