DeepCatch analyzes CT images and auto-segments anatomical structures (skin, bone, muscle, visceral fat, subcutaneous fat, internal organs, and central nervous system). The volumes and proportions of these structures are then calculated and provided along with the corresponding 3D model.
DeepCatch makes it possible to obtain accurate values for the volume and proportion of each anatomical structure through secondary use of CT images acquired for various purposes in the medical field. The input data type is whole-body CT. This device is intended to be used in conjunction with professional clinical judgement. The physician is responsible for inspecting and confirming all results.
DeepCatch is medical image processing software that provides 3D reconstruction and visualization of ROIs, advanced image quality improvement, automatic segmentation of specific targets, texture analysis, and more. Accurate 3D measurements of the amount and distribution of skeletal muscle and adipose tissue in the body can be used as baseline data in various fields.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text.
DeepCatch Device Performance Study Analysis
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the null and alternative hypotheses of the performance tests, and the reported device performance is the outcome of those tests; a sketch of the underlying metrics follows the table.
| Test Type & Metric | Acceptance Criteria (Alternative Hypothesis) | Reported Device Performance |
|---|---|---|
| Internal Datasets (n=100) | ||
| DSC (between GT & DeepCatch segmentation results) | Group's DSC mean ≥ 0.900 | DSC means ≥ 90% (met) |
| External Datasets (n=580) | ||
| DSC (between GT & DeepCatch segmentation results) | Group's DSC mean ≥ 0.900 | DSC mean > 90% in all areas (met) |
| Volume (Difference between GT & DeepCatch measurement results) | Mean within-group difference within ±10% (0.10) | Mean volume difference < 10% (met) |
| Area (Difference between GT & DeepCatch measurement results) | Mean within-group difference within ±10% (0.10) | Mean area difference < 10% (met) |
| Ratio (Difference between GT & DeepCatch measurement results) | Mean within-group difference within ±1% (0.01) | Mean ratio difference < 1% (met) |
| Body Circumference (Difference between GT & DeepCatch measurement results) | Mean within-group difference within ±5% (0.05) | Mean body circumference difference < 5% (met) |
| US-based Datasets (n=167) | ||
| DSC (all anatomical structures) | Group's DSC mean ≥ 0.900 | DSC mean > 90% (met) |
| Volume | Mean within-group difference within ±10% | Mean volume difference < 10% (met) |
| Area | Mean within-group difference within ±10% | Mean area difference < 1% (the text implies the same criterion applies to the ratio metric) (met) |
| Abdominal Circumference (error measurement) | Mean within-group difference within ±5% | Abdominal circumference error relative to GT < 5% (met) |
| Comparative Performance Test: | ||
| DSC (DeepCatch vs. MEDIP PRO) | DeepCatch DSC not inferior to MEDIP PRO | DeepCatch DSC was not inferior to MEDIP PRO, and showed better performance for Muscle segmentation (met) |
| Volume, Ratio, Area, Body Circumference (DeepCatch vs. Synapse 3D) | DeepCatch performance not inferior to Synapse 3D for these metrics | DeepCatch showed no difference compared to Synapse 3D, and showed better performance in AVF Area (AW) and SF Area (AW) (met) |
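To make the table concrete, here is a minimal sketch of how the two families of metrics it references are typically computed: the Dice similarity coefficient (DSC) between a ground-truth mask and a DeepCatch mask, and the signed percent difference of a derived measurement such as volume. The array shapes, voxel spacing, and values below are illustrative assumptions, not data from the 510(k) summary.

```python
import numpy as np

def dice_coefficient(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary segmentation masks."""
    gt, pred = gt_mask.astype(bool), pred_mask.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    denom = gt.sum() + pred.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def percent_difference(gt_value: float, measured_value: float) -> float:
    """Signed percent difference of a measurement (volume, area, circumference) vs. ground truth."""
    return 100.0 * (measured_value - gt_value) / gt_value

# Illustrative example: a 20x20x20-voxel structure with a one-voxel shift in the prediction.
gt = np.zeros((64, 64, 64), dtype=bool)
pred = np.zeros_like(gt)
gt[20:40, 20:40, 20:40] = True
pred[21:41, 20:40, 20:40] = True

voxel_volume_mm3 = 1.0  # assumed isotropic 1 mm voxel spacing
print(f"DSC: {dice_coefficient(gt, pred):.3f}")  # ~0.95 for this synthetic case
print(f"Volume difference: "
      f"{percent_difference(gt.sum() * voxel_volume_mm3, pred.sum() * voxel_volume_mm3):.2f}%")
```

The acceptance criteria in the table are then applied to the per-group means of these per-case values.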
2. Sample Sizes and Data Provenance
- Internal Datasets: n=100. The provenance of the internal datasets is not explicitly stated; the context implies they were used internally for development and initial testing.
- External Datasets: n=580.
- Country of Origin: Korea (562 scans), France (18 scans).
- Retrospective/Prospective: Not explicitly stated, but typically external datasets for validation are retrospective.
- US-based Datasets: n=167.
- Country of Origin: US-based locations (East River Medical Imaging).
- Retrospective/Prospective: Not explicitly stated, but likely retrospective.
- Comparative Performance Test (MEDIP PRO): n=100.
- Country of Origin: US (Siemens Healthineers scanners).
- Retrospective/Prospective: Not explicitly stated, but likely retrospective.
- Comparative Performance Test (Synapse 3D): n=100.
- Country of Origin: US (Siemens Healthineers scanners).
- Retrospective/Prospective: Not explicitly stated, but likely retrospective.
3. Number of Experts and Qualifications for Ground Truth
- The text states: "Ground truthing for each image was created by a licensed physician" for the comparative performance tests (MEDIP PRO and Synapse 3D comparison sets).
- For the internal, external, and US-based datasets, the summary refers to "GT" (Ground Truth) but does not explicitly state how many experts established this ground truth or what their qualifications were, only that DeepCatch's results were compared against it. It is strongly implied that the GT was expert-created, as is standard practice for medical image segmentation and measurement.
4. Adjudication Method for the Test Set
- The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1 consensus) for establishing the ground truth on any of the test sets. It only mentions that "Ground truthing for each image was created by a licensed physician." This might imply a single expert per case, or a process not detailed in the summary.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, a "Multi-Reader Multi-Case (MRMC) comparative effectiveness study" involving human readers improving with AI vs. without AI assistance was not conducted or described.
- The comparative performance tests focused on comparing the algorithm's performance directly against predicate devices (other algorithms), not on human reader performance with and without AI assistance.
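The summary does not name the statistical method behind the non-inferiority claims, so the following is only a hedged sketch of one common approach: compare the one-sided lower confidence bound of the mean paired DSC difference (DeepCatch minus predicate) against a pre-specified margin. The margin of 0.02 and the simulated per-case values are assumptions for illustration, not figures from the submission.

```python
import numpy as np
from scipy import stats

def lower_confidence_bound(diff: np.ndarray, alpha: float = 0.05) -> float:
    """One-sided lower (1 - alpha) confidence bound for the mean paired difference."""
    n = diff.size
    sem = diff.std(ddof=1) / np.sqrt(n)
    return diff.mean() - stats.t.ppf(1 - alpha, df=n - 1) * sem

# Non-inferiority is concluded if the lower bound stays above -margin.
margin = 0.02  # illustrative non-inferiority margin; not stated in the summary
rng = np.random.default_rng(1)
dsc_device = rng.normal(0.95, 0.02, size=100)     # simulated DeepCatch per-case DSC
dsc_predicate = rng.normal(0.94, 0.02, size=100)  # simulated predicate per-case DSC
print("non-inferior:", lower_confidence_bound(dsc_device - dsc_predicate) > -margin)
```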
6. Standalone (Algorithm Only) Performance Study
- Yes, standalone performance studies were done.
- The "Performance test using data set from Korea (562) and France (18)" and the "Performance test using data set from US-based locations with DeepCatch" are examples of standalone algorithm performance evaluations, where the DeepCatch algorithm's segmentation and measurement results were compared directly against the established Ground Truth (GT).
- The "Comparative Performance Test" sections also evaluate the algorithm's standalone performance against other algorithms (predicate devices), rather than human-in-the-loop performance.
7. Type of Ground Truth Used
- The ground truth used was expert consensus / expert-labeled data.
- The text explicitly states for the comparative performance tests: "Ground truthing for each image was created by a licensed physician."
- For the other performance tests, "GT" is referenced, implying a similar expert-derived ground truth. There's no mention of pathology or outcomes data being used as ground truth for segmentation or volume measurements.
8. Sample Size for the Training Set
- The document states: "All data used images independent of the images used to learn the algorithm."
- However, the specific sample size for the training set is not provided in this summary.
9. How the Ground Truth for the Training Set Was Established
- The document implies that the training data had its own ground truth ("images used to learn the algorithm"), but it does not describe how the ground truth for the training set was established. This information is typically found in a more detailed technical report.
MEDIP PRO is intended for use as a software interface and image segmentation system for the transfer of DICOM imaging information from a medical scanner to an output file. It is also used as pre-operative software for treatment planning.
The 3D printed models generated from the output file are meant for non-diagnostic use. MEDIP PRO should be used in conjunction with other diagnostic tools and expert clinical judgement.
MEDIP PRO is medical image processing software that provides 3D reconstruction and visualization of ROIs, advanced image quality improvement, automatic segmentation of specific targets, texture analysis, and more, by loading DICOM files acquired from CT or MRI by the user (doctors). It also supports exporting STL data for 3D printing.
The provided text describes the MEDIP PRO device, but it lacks detailed information regarding specific acceptance criteria and the comprehensive study results to explicitly prove the device meets these criteria. The document focuses on regulatory approval (510(k)) by demonstrating substantial equivalence to a predicate device, rather than presenting a detailed performance study with quantifiable acceptance criteria.
However, based on the available information, we can extract some relevant details and acknowledge the missing ones:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance | Study to Prove Performance |
|---|---|---|
| Segmentation Performance: Similarity of auto-segmentation results to ground truth. | "Segmentation Performance" was evaluated. Specific metrics (e.g., Dice score, Jaccard index, accuracy, precision, recall) and their target values are not specified in the provided document. | Comparative Performance Test with a predicate device. Details on specific metrics and results are not provided. |
| Measurement Accuracy: Accuracy of distance measurements within the software. | "Measurement of Distance Phantom study" was conducted. Specific accuracy limits (e.g., ±X mm) are not specified. | Comparative Performance Test with a predicate device. Details on specific metrics and results are not provided. |
| Usability: User experience and ease of use of the software. | "Usability test System measurements & segmentation" was conducted. Specific acceptance criteria for usability (e.g., task completion rate, time on task, satisfaction scores) are not specified. | Comparative Performance Test with a predicate device. Details on specific metrics and results are not provided. |
| Software Validation: Adherence to software development processes and functional requirements. | "The MEDIP PRO... was designed and developed according to a software development process and was verified and validated." | Each independent software subsystem was verified against its defined requirements, the interfaces between subsystems were verified, and the integrated system was validated against overall system requirements. |
Study Details Based on Provided Information:
2. Sample size used for the test set and the data provenance:
- The document does not specify the sample size (number of cases or images) used for the "Comparative Performance Test" which included segmentation and measurement evaluation.
- The data provenance (e.g., country of origin, retrospective or prospective) for the test set is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It only mentions that the device "should be used in conjunction with other diagnostic tools and expert clinical judgement."
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document does not specify any adjudication method used for the test set.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- An MRMC comparative effectiveness study involving human readers with and without AI assistance was not mentioned or performed. The study mentioned is a "Comparative Performance Test" between MEDIP PRO and a predicate device, focusing on the device's inherent performance.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance assessment was done. The "Comparative Performance Test" and "Performance Test" sections describe evaluating the device's (the MEDIP PRO application's) functionalities and performance (segmentation, measurement) against a predicate device, which implies that the algorithm's performance was evaluated without human-in-the-loop interaction.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The document does not explicitly state the type of ground truth used for the segmentation and measurement performance evaluations. It refers to a "Distance Phantom study" for measurements, implying physical phantoms as ground truth for that aspect. For segmentation, it's likely expert-derived segmentations, but this is not confirmed.
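Since a distance phantom is mentioned as the likely ground truth for measurement accuracy, here is a hedged sketch of how such a check is typically scored: the phantom's known distances are compared with the distances reported by the software, and absolute and percent errors are checked against a tolerance. The distance values and the 1 mm tolerance below are illustrative assumptions; the summary does not report the actual limits or results.

```python
import numpy as np

true_distances_mm = np.array([10.0, 25.0, 50.0, 100.0])     # known phantom geometry (assumed)
measured_distances_mm = np.array([10.1, 24.8, 50.3, 99.6])  # readings from the software (assumed)

abs_error_mm = np.abs(measured_distances_mm - true_distances_mm)
pct_error = 100.0 * abs_error_mm / true_distances_mm

print("max absolute error (mm):", abs_error_mm.max())
print("max percent error:", round(pct_error.max(), 2))
print("within 1 mm tolerance:", bool((abs_error_mm <= 1.0).all()))
```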
8. The sample size for the training set:
- The document does not provide any information regarding the sample size used for the training set for the MEDIP PRO software (if it uses machine learning/AI for features like auto-segmentation, which is implied by "auto segmentation for specific target").
9. How the ground truth for the training set was established:
- The document does not provide any information on how ground truth was established for the training set.