510(k) Data Aggregation (51 days)
CONVERGENT LIFE SCIENCES, INC.
target3D Fusion is a software application intended to be used by physicians in a clinic or hospital for visualization in 2D and 3D, registration, and fusion of Ultrasound (US), Magnetic Resonance (MR) and Computed Tomography (CT) images of the prostate. The software features also include multi-modality data communication, surface and volume rendering, segmentation, multi-planar reconstruction, organ and regions of interest delineation, landmark selection, measurements, patient database management, and data reporting.
target3D Fusion allows a physician to segment the prostate gland and to identify and label various structures, including regions of interest (ROIs), on a pre-procedural DICOM image. The software further allows the physician to fuse the prepared pre-procedural DICOM image files with one or more intra-procedure live DICOM image files to guide the procedure.
The software can delineate the gland boundary, as well as the boundaries of other anatomical landmarks, on a pre-procedure DICOM image. Structures, including regions of interest, are identified through visualization and stored as meshes in a standard surface format; each structure is labeled uniquely.
target3D Fusion provides the physician with image fusion such that information from a pre-procedure or planning imaging modality, such as MR or CT, is mapped to the frame of reference of the intra-procedure or live imaging modality, such as ultrasound, for real-time guidance while retaining the diagnostic capabilities of the pre-procedural planning image. The mapped information contains at least one structural image and the target area to be treated. The pre-procedure image is registered with the intra-procedure image using a combination of rigid, affine, and non-rigid elastic registration, which produces a correspondence or deformation map used to map planning information from the frame of reference of the planning image to the intra-procedure image.
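To make the mapping step concrete, here is a minimal sketch of how planning-space information (for example, ROI points delineated on MR) could be carried into the live ultrasound frame of reference once a registration has produced an affine transform and a dense deformation field. The transform values, field, and function names below are illustrative assumptions, not outputs of target3D Fusion.

```python
import numpy as np

def map_points(points_mm, affine_4x4, disp_field, field_origin, field_spacing):
    """Apply an affine transform, then a non-rigid displacement, to Nx3 points (mm)."""
    # 1) Affine stage (covers the rigid + affine parts of the registration chain).
    homo = np.hstack([points_mm, np.ones((points_mm.shape[0], 1))])
    moved = (affine_4x4 @ homo.T).T[:, :3]

    # 2) Non-rigid correction: look up the displacement field at the nearest
    #    voxel (a real pipeline would interpolate, e.g. trilinearly).
    idx = np.round((moved - field_origin) / field_spacing).astype(int)
    idx = np.clip(idx, 0, np.array(disp_field.shape[:3]) - 1)
    disp = disp_field[idx[:, 0], idx[:, 1], idx[:, 2]]
    return moved + disp

# Illustrative inputs only: a small translation plus a smooth synthetic
# displacement field on a 32^3 grid with 2 mm spacing.
affine = np.eye(4)
affine[:3, 3] = [1.5, -0.8, 0.4]
field = np.zeros((32, 32, 32, 3))
field[..., 2] = 0.6                      # uniform 0.6 mm shift along z
roi_points = np.array([[10.0, 12.0, 20.0],
                       [11.0, 13.0, 21.0]])

print(map_points(roi_points, affine, field,
                 field_origin=np.zeros(3),
                 field_spacing=np.full(3, 2.0)))
```

In this toy setup the deformation field plays the role of the correspondence map described above: once it exists, any labeled planning structure can be pushed into the live frame point by point.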
The target3D Fusion device has acceptance criteria related to its segmentation accuracy, affine registration accuracy, and overall registration accuracy.
Here's a breakdown of the information requested:
- Table of acceptance criteria and the reported device performance:
| Acceptance Criteria Category | Specific Criterion | Reported Device Performance |
|---|---|---|
| Segmentation Accuracy | Not explicitly stated as a numerical criterion; aims for accurate segmentation compared to ground truth. | Average absolute volume difference error: 2.8525% |
| Affine Registration Accuracy | Errors measured as the overlap between objects being registered. | Overlap errors: under 0.0001 mm |
| Overall Registration Accuracy | Target registration error (TRE) measured as the average distance between landmarks (beads) across datasets. | Average distance (TRE): 1.7093 mm; standard deviation: 0.4008 mm |
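The summary does not spell out the exact metric definitions, so the following is a hedged sketch of two common formulations consistent with the table above: absolute volume difference error for segmentation, and target registration error (TRE) as the mean distance between corresponding bead landmarks. The arrays are toy data; the overlap-based affine metric is omitted because its precise definition (reported in mm) is not given.

```python
import numpy as np

def abs_volume_difference_pct(seg_mask, truth_mask, voxel_volume_mm3=1.0):
    """|V_seg - V_truth| / V_truth, expressed in percent."""
    v_seg = seg_mask.sum() * voxel_volume_mm3
    v_truth = truth_mask.sum() * voxel_volume_mm3
    return 100.0 * abs(v_seg - v_truth) / v_truth

def target_registration_error(mapped_beads_mm, true_beads_mm):
    """Mean and standard deviation of distances between corresponding beads."""
    d = np.linalg.norm(mapped_beads_mm - true_beads_mm, axis=1)
    return d.mean(), d.std()

# Toy data only.
seg = np.zeros((20, 20, 20), dtype=bool)
seg[2:18, 2:18, 2:18] = True
gt = np.zeros((20, 20, 20), dtype=bool)
gt[2:18, 2:18, 2:17] = True
print(abs_volume_difference_pct(seg, gt))      # volume error in percent

beads_mapped = np.array([[10.0, 10.0, 10.0], [30.0, 12.0, 8.0]])
beads_true = beads_mapped + np.array([[1.2, -0.5, 0.9], [-0.8, 1.1, 0.4]])
print(target_registration_error(beads_mapped, beads_true))  # (mean, std) in mm
```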
- Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated as a number of cases or images. The document mentions "datasets" for overall registration accuracy and "surfaces" for affine registration.
- Data Provenance: Not specified. It's unclear if the data was retrospective or prospective, or its country of origin. The test for overall registration accuracy mentions "phantoms containing beads," indicating some artificial data was used alongside potentially real clinical data for other tests.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the given text.
- Adjudication method for the test set:
- This information is not provided in the given text.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- An MRMC comparative effectiveness study was not described for target3D Fusion. The studies focused on the performance of the software itself rather than its impact on human reader performance.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, the described tests ("Segmentation Accuracy," "Affine Registration Accuracy," "Overall Registration Accuracy") appear to be standalone performance evaluations of the algorithm's capabilities against ground truth or synthetic deformations. The phrasing "compared segmentation algorithms in target3D Fusion with ground truth data" and "errors measured as the overlap between objects being registered" indicates algorithmic performance.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For Segmentation Accuracy, it mentions "ground truth data" generally.
- For Affine Registration Accuracy, "synthetic deformations between surfaces" were used, meaning the ground truth for the deformation was known (see the sketch after this list).
- For Overall Registration Accuracy, "phantoms containing beads used as landmarks" were used, where the positions of these beads likely served as the ground truth for registration accuracy.
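To illustrate why a synthetic deformation gives a usable ground truth, the sketch below applies a known affine warp to a toy point set, recovers it by least squares from the correspondences, and reports the residual error. This mirrors the general idea of testing registration against a known deformation; the specific transform, point set, and recovery method are assumptions, not the submission's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
surface = rng.uniform(-30.0, 30.0, size=(500, 3))   # toy surface points (mm)

# Known ("synthetic") affine deformation: small rotation, scale, translation.
theta = np.deg2rad(5.0)
A_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.02]])
t_true = np.array([2.0, -1.0, 0.5])
warped = surface @ A_true.T + t_true

# Recover the affine transform from point correspondences (least squares).
X = np.hstack([surface, np.ones((surface.shape[0], 1))])
params, *_ = np.linalg.lstsq(X, warped, rcond=None)
recovered = X @ params

# Residual error against the known ground truth, per point.
err = np.linalg.norm(recovered - warped, axis=1)
print(f"mean residual error: {err.mean():.2e} mm")
```

With exact correspondences and a purely affine warp, the residual is at machine precision, which is consistent with overlap errors being reported as vanishingly small in such tests.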
- The sample size for the training set:
- This information is not provided in the given text.
- How the ground truth for the training set was established:
- This information is not provided in the given text.