510(k) Data Aggregation (62 days)
Materialise Mimics Enlight
Materialise Mimics Enlight is intended for use as a software interface and image segmentation system for the transfer of DICOM imaging information from a medical scanner to an output file.
It is also intended as software to aid in interpreting DICOM-compliant images for structural heart and vascular treatment options. For this purpose, Materialise Mimics Enlight provides additional visualisation and measurement tools to enable the user to screen and plan the procedure.
The Materialise Mimics Enlight output file can be used for the fabrication of physical replicas of the output file using traditional or additive manufacturing methods. The physical replica can be used for diagnostic purposes in the field of cardiovascular applications.
Materialise Mimics Enlight should be used in conjunction with other diagnostic tools and expert clinical judgement.
Materialise Mimics Enlight for structural heart and vascular planning is a software interface organized in a workflow approach. At a high level, each structural heart and vascular workflow follows the same four-step structure, which enables the user to plan the procedure:
- Analyse anatomy
- Plan device
- Plan delivery
- Output
To perform these steps, the software provides different methods and tools to visualize and measure based on the medical images.
The user is a medical professional, such as a cardiologist or clinical specialist. To start the workflow, DICOM-compliant medical images must be imported. The software reads the images and converts them into a project file. The user can then start the workflow and follow the steps visualized in the software. The basis of the workflow is a 3D reconstruction of the anatomy created from the medical images, which is then used together with the 2D medical images to plan the procedure.
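As general background (not Materialise's actual implementation), the basic concept behind such a workflow — reading a DICOM series, segmenting it, and reconstructing a 3D surface — can be sketched with open-source tools. The sketch below uses pydicom, NumPy, and scikit-image; the threshold-based segmentation and the file path are assumptions made purely for illustration.

```python
# Minimal sketch: DICOM series -> thresholded segmentation -> 3D surface mesh.
# Illustrative only; the threshold value and file layout are assumptions,
# not the device's actual segmentation method.
from pathlib import Path

import numpy as np
import pydicom
from skimage import measure

def load_series(dicom_dir):
    """Read a single-series DICOM folder and stack it into a 3D volume in HU."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Apply the rescale slope/intercept to obtain Hounsfield units.
    slope = float(getattr(slices[0], "RescaleSlope", 1))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0))
    return volume * slope + intercept

def reconstruct_surface(volume, threshold_hu=200.0):
    """Segment by a simple HU threshold and extract a triangle surface mesh."""
    mask = volume >= threshold_hu  # crude stand-in for the segmentation step
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.float32), level=0.5)
    return verts, faces  # the mesh could then be exported, e.g. as STL

if __name__ == "__main__":
    vol = load_series("path/to/dicom/series")  # hypothetical path
    verts, faces = reconstruct_surface(vol)
    print(f"Reconstructed mesh: {len(verts)} vertices, {len(faces)} faces")
```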
The provided text describes the Materialise Mimics Enlight device and its 510(k) submission for FDA clearance. However, it does not contain specific details about acceptance criteria, numerical performance data, details of the study (sample sizes, ground truth provenance, number/qualifications of experts, adjudication methods, MRMC studies, or standalone performance), or training set information.
The document mainly focuses on:
- Defining Materialise Mimics Enlight's intended use and indications.
- Establishing substantial equivalence to predicate devices (Mimics Medical, 3mensio Workstation, Mimics inPrint).
- Describing general technological similarities and differences between the subject device and predicates.
- Stating that software verification and validation were performed according to FDA guidance, including bench testing and end-user validation.
- Mentioning "geometric accuracy" assessments for virtual models and physical replicas, and interrater consistency for the semi-automatic neo-LVOT tool, with the conclusion that "deviations were within the acceptance criteria."
Therefore, based only on the provided text, I cannot complete the requested tables and descriptions with specific numerical values for acceptance criteria or study results.
Here's a summary of what can be extracted and what is missing:
1. Table of acceptance criteria and reported device performance
| Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Geometric accuracy (virtual models) | Not specified numerically in the document | "Deviations were within the acceptance criteria." |
| Geometric accuracy (physical replicas) | Not specified numerically in the document | "Deviations were within the acceptance criteria." |
| Semi-automatic neo-LVOT tool | Not specified numerically in the document (e.g., a target interrater consistency percentage or statistical threshold) | "demonstrated a higher interrater consistency/repeatability." |
Missing Information: Specific numerical values for the acceptance criteria for geometric accuracy (e.g., a tolerance in mm) and for interrater consistency of the neo-LVOT tool; a sketch of how such a tolerance check might look follows below.
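Although the submission gives no numerical tolerances, a geometric-accuracy assessment of this kind typically compares landmark measurements taken on the virtual model or printed replica against reference measurements and checks that the deviations stay within a predefined tolerance. The sketch below illustrates that idea; the landmark names, measurement values, and the 1.0 mm tolerance are all hypothetical and do not come from the 510(k).

```python
# Hypothetical geometric-accuracy check: compare model measurements to
# reference measurements and flag deviations outside an assumed tolerance.
# The 1.0 mm tolerance and all values below are illustrative, not from the 510(k).
REFERENCE_MM = {"annulus_diameter": 27.4, "lvot_diameter": 21.1, "sov_height": 18.9}
MEASURED_MM  = {"annulus_diameter": 27.9, "lvot_diameter": 20.8, "sov_height": 19.2}
TOLERANCE_MM = 1.0  # assumed acceptance criterion

def check_geometric_accuracy(reference, measured, tolerance):
    """Return per-landmark deviations and whether each is within tolerance."""
    results = {}
    for name, ref in reference.items():
        deviation = abs(measured[name] - ref)
        results[name] = (deviation, deviation <= tolerance)
    return results

for name, (dev, ok) in check_geometric_accuracy(REFERENCE_MM, MEASURED_MM, TOLERANCE_MM).items():
    print(f"{name}: deviation {dev:.2f} mm -> {'PASS' if ok else 'FAIL'}")
```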
2. Sample size used for the test set and data provenance
- Sample size for test set: Not specified. The document mentions "Bench testing" and "a set of 3D printers" for physical replicas, but no case numbers.
- Data provenance (country of origin, retrospective/prospective): Not specified.
3. Number of experts used to establish the ground truth for the test set and their qualifications
- Number of experts: Not specified.
- Qualifications of experts: Not specified. The document mentions "medical professional, like cardiologists or clinical specialists" as intended users, but not specifically for ground truth establishment in a test set.
4. Adjudication method for the test set
- Adjudication method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size
- The document implies general "end-user validation" and mentions the neo-LVOT tool showing "higher interrater consistency/repeatability," which suggests some form of human reader involvement. However, it does not explicitly state that a multi-reader, multi-case (MRMC) comparative effectiveness study was performed in the context of human readers improving with AI vs. without AI assistance.
- Effect size: Not specified.
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done
- "Software verification and validation were performed... This includes verification against defined requirements, and validation against user needs. Both end-user validation and bench testing were performed." This implies that the device's performance was evaluated, potentially including standalone aspects, but it doesn't separate out a clear standalone performance study result. The "semi-automatic" nature of the Neo-LVOT tool means it's not purely algorithmic.
7. The type of ground truth used
- While not explicitly stated, the context of "geometric accuracy of virtual models" and "physical replicas" suggests ground truth would be based on:
- Geometric measurements: Reference measurements from the original DICOM data or CAD models for virtual models, and precise measurements of the physical replicas for comparison.
- For the neo-LVOT tool, ground truth for "interrater consistency/repeatability" would likely be derived from expert measurements (one way such consistency could be quantified is sketched below).
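The document does not say how interrater consistency was quantified. One common way to express the repeatability of a continuous measurement such as neo-LVOT area is an intraclass correlation coefficient; the sketch below computes ICC(2,1) on an invented subjects-by-raters matrix purely to illustrate the concept, not to reproduce the submission's analysis.

```python
# Hypothetical interrater-consistency analysis for a measurement such as
# neo-LVOT area: an ICC(2,1) computed from a subjects x raters matrix.
# The measurement values below are invented for illustration only.
import numpy as np

def icc_2_1(y):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    y: array of shape (n_subjects, n_raters), one measurement per cell.
    """
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    col_means = y.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-rater mean square
    sse = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented example: 5 cases measured by 3 raters (e.g., neo-LVOT area in mm^2).
measurements = [
    [182.0, 185.5, 179.8],
    [240.1, 243.0, 238.6],
    [150.3, 148.9, 152.7],
    [205.4, 208.8, 203.9],
    [310.2, 307.5, 312.1],
]
print(f"ICC(2,1) = {icc_2_1(measurements):.3f}")
```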
8. The sample size for the training set
- Sample size for training set: Not specified. The document focuses on verification and validation, not development or training data.
9. How the ground truth for the training set was established
- Ground truth for training set: Not specified. As above, the document does not detail the training set.