510(k) Data Aggregation (210 days)
Surgical Reality Viewer is medical imaging visualization software intended to assist trained healthcare professionals with preoperative and intraoperative visualization by displaying 2D and 3D renderings of DICOM-compliant patient images and of normal anatomic segmentations derived from those images, along with functions for manipulating segmentations and 3D models.
Surgical Reality Viewer assists the trained healthcare professional who is responsible for making all final patient management decisions.
The machine learning algorithms in use by Surgical Reality Viewer are intended for use on adult patients aged 22 years and over.
Surgical Reality Viewer is medical imaging visualization software that accepts DICOM-compliant images (e.g., CT scans or MR images) and segmentation files in various 3D object file formats (e.g., NIfTI, OBJ, MHD, STL). The device can generate preliminary segmentations of normal anatomy on demand using machine learning and computer vision algorithms, and it provides tools for editing and creating segmentations with built-in 2D and 3D image manipulation functions. The software generates a 3D segmented view of the loaded patient data on a supported 2D or 3D screen and offers features such as preoperative (re)viewing of DICOM data overlaid with segmentations; intraoperative and postoperative visualization of anatomical structures; 2D viewing; volume rendering; surface rendering; immersive and interactive 3D viewing; 2D and 3D measurement of DICOM image data; local storage; anatomic labeling, including segmentation tools; and tools for annotating, brushing, or carving anatomical structures. Surgical Reality Viewer runs on a dedicated computer within the customer environment that meets specific hardware requirements: a Windows operating system (version 10 or higher), an NVIDIA GeForce 2070 GPU, an Intel i7 CPU, 16GB of RAM, and at least 100GB of free hard drive space.
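The submission does not describe how the Viewer ingests these formats internally, but as an illustration of what reading them involves, here is a minimal Python sketch using the open-source pydicom and nibabel libraries (the libraries, function names, and data layout are assumptions for illustration, not the vendor's implementation):

```python
import numpy as np
import pydicom         # reads DICOM files (assumed library, not the vendor's stack)
import nibabel as nib  # reads NIfTI label maps (assumed library)

def load_dicom_series(paths):
    """Stack a CT series into a 3D volume, slices ordered by z-position."""
    slices = [pydicom.dcmread(p) for p in paths]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices])

def load_nifti_labels(path):
    """Load a NIfTI segmentation (label map) as an integer array."""
    return np.asanyarray(nib.load(path).dataobj).astype(np.int16)
```

Note that OBJ and STL are surface-mesh formats rather than voxel label maps, so a real loader would dispatch on file type.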
Here's a breakdown of the acceptance criteria and study details for the Surgical Reality Viewer, based on the provided FDA 510(k) clearance letter and summary:
Acceptance Criteria and Reported Device Performance
The provided document details the performance of the machine learning algorithms for various anatomical segmentations using the Sørensen–Dice coefficient (DSC). Additionally, it describes a qualitative assessment of suitability.
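The DSC quantifies voxel overlap between a predicted mask A and a ground-truth mask B as DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect overlap). For reference, here is a minimal NumPy sketch of the metric (the function name and the empty-mask convention are illustrative choices, not taken from the submission):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Sørensen–Dice coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks count as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```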
Table of Acceptance Criteria (Implicit) and Reported Device Performance
| Anatomical Structure | Metric (Implicit Acceptance Criteria) | Reported Device Performance |
|---|---|---|
| Lobe segmentation | Average Sørensen–Dice coefficient (DSC) | 0.97 |
| - LUL (left upper lobe) | DSC | 0.98 |
| - LLL (left lower lobe) | DSC | 0.98 |
| - RUL (right upper lobe) | DSC | 0.98 |
| - RLL (right lower lobe) | DSC | 0.98 |
| - RML (right middle lobe) | DSC | 0.96 |
| Vessel segmentation | Average Sørensen–Dice coefficient (DSC) | 0.84 |
| - Artery | DSC | 0.84 |
| - Vein | DSC | 0.83 |
| Airway segmentation | Sørensen–Dice coefficient (DSC) | 0.96 |
| Aorta segmentation | Sørensen–Dice coefficient (DSC) | 0.96 |
| Pulmonary segmentation | Average Sørensen–Dice coefficient (DSC) | 0.85 |
| - Left segments | DSC | 0.85 |
| - Right segments | DSC | 0.85 |
Table of Qualitative Suitability Scores (scored 1-5, higher is better)
| Segmentation Type | Metric | Reported Score |
|---|---|---|
| Airway segmentations | Suitability score | 4.8 |
| Artery segmentations | Suitability score | 4.8 |
| Vein segmentations | Suitability score | 4.9 |
| Lobe segmentations | Suitability score | 5.0 |
| Pulmonary lobe segments | Suitability score | 4.7 |
| Aorta segmentations | Suitability score | 5.0 |
Note on Acceptance Criteria: The document presents the performance metrics (DSC and qualitative scores) directly. Explicit numerical acceptance criteria (e.g., "must be >= 0.95 DSC") are not stated; the high reported figures are implicitly treated as demonstrating that the device performs at acceptable levels for these metrics.
Study Details
1. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 102 CT images (each study belonged to a unique patient).
- Data Provenance: 60 scans (n=60) were obtained from the United States. The country of origin of the remaining 42 scans is not specified, although the document lists "geographical location" as a subgroup for assessing generalizability.
- Retrospective/Prospective: Not explicitly stated, but the reference to "curated datasets" and a "clinical testing dataset" without ongoing patient enrollment suggests a retrospective study.
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not stated as a specific number. The document mentions "trained professionals" who generated the initial segmentations and "thoracic surgeons with a minimum of 2 years professional working experience" who verified them, implying at least two distinct groups of experts, each potentially comprising multiple individuals.
- Qualifications of Experts:
- Initial Segmentation Generation: "Trained professionals" (their specific professional background and experience level are not detailed).
- Segmentation Verification: "Thoracic surgeons with a minimum of 2 years professional working experience."
3. Adjudication Method (for the Test Set)
- Adjudication Method: Not explicitly stated. The process described is "segmented by trained professionals and the segmentations were verified by thoracic surgeons." This suggests a single ground truth was established after the verification step, but the specific process for resolving discrepancies (e.g., consensus or tie-breaking by a third expert) is not detailed; no 2+1 or 3+1 adjudication method is mentioned.
4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- MRMC Study: No, an MRMC comparative effectiveness study is not described. The study focuses on the standalone performance of the algorithm against ground truth, plus a separate qualitative scoring of segmentation suitability. There is no comparison of human readers with and without AI assistance, and therefore no reported effect size of improvement.
5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Standalone Study: Yes, a standalone performance study was done: "Performance was verified by comparing segmentations generated by the machine learning models against ground truth segmentations generated by trained professionals." This directly assesses the accuracy of the algorithm's segmentation output without a human in the loop; a minimal sketch of such an evaluation loop follows.
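To make the standalone evaluation concrete, a per-structure DSC harness might look like the sketch below (it reuses the dice_coefficient helper sketched earlier; the data layout and usage are assumptions, as the submission does not describe its test harness):

```python
def evaluate_standalone(cases, structures):
    """Mean DSC per anatomical structure over a held-out test set.

    `cases` is an iterable of (model_masks, truth_masks) pairs, one per
    CT study; each element maps a structure name to a binary mask.
    Relies on dice_coefficient() from the earlier sketch.
    """
    scores = {s: [] for s in structures}
    for model_masks, truth_masks in cases:
        for s in structures:
            scores[s].append(dice_coefficient(model_masks[s], truth_masks[s]))
    return {s: sum(v) / len(v) for s, v in scores.items()}

# Hypothetical usage with the lung lobes reported above:
# evaluate_standalone(test_cases, ["LUL", "LLL", "RUL", "RLL", "RML"])
```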
6. The Type of Ground Truth Used
- Type of Ground Truth: For the quantitative (DSC) analysis, the ground truth was expert-verified segmentation: "segmentations generated by trained professionals and the segmentations were verified by thoracic surgeons." For the qualitative assessment, "medical professionals were tasked to qualitatively score the suitability of the segmentations provided through the Viewer," which is likewise an expert-based evaluation of the AI output.
7. The Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The document notes that "Each of the algorithms has been trained and tuned on curated datasets representative of the intended patient population" but gives no number for the training set. It states only that a "CT image was either part of the tuning or testing dataset and not in both," indicating that the 102 test images were held out from training and tuning; a sketch of such a leakage-free split appears below.
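The submission does not describe how this separation was enforced, but the stated constraint is typically achieved with a patient-level split, so that no patient's imaging appears in both sets. A hypothetical sketch (all names illustrative):

```python
import random
from collections import defaultdict

def split_by_patient(studies, test_fraction=0.2, seed=0):
    """Split (patient_id, study) pairs so no patient spans both sets."""
    by_patient = defaultdict(list)
    for patient_id, study in studies:
        by_patient[patient_id].append(study)
    patients = sorted(by_patient)          # deterministic order before shuffling
    random.Random(seed).shuffle(patients)
    n_test = max(1, int(len(patients) * test_fraction))
    test_ids = set(patients[:n_test])
    train = [s for p in patients if p not in test_ids for s in by_patient[p]]
    test = [s for p in test_ids for s in by_patient[p]]
    return train, test
```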
8. How the Ground Truth for the Training Set Was Established
- Training Set Ground Truth: Not explicitly stated. The document says the algorithms were "trained and tuned on curated datasets representative of the intended patient population." It is reasonable to infer that an expert-driven labeling process similar to the test-set ground truth establishment was used, but the submission does not confirm this.