510(k) Data Aggregation
(237 days)
LARALAB enables visualization, assessment and measurement of cardiovascular structures for:
- Preprocedural planning and sizing for cardiovascular interventions and surgery
- Postprocedural image review
To facilitate the above, LARALAB provides general functionality such as:
- Automatic segmentation of cardiovascular structures and other objects of interest (calcifications)
- Automatic measurements
- Manual measurement and adjustment tools
- Visualization and image reconstruction techniques: Multiplanar Reconstruction (MPR), Surface rendering
- Reporting tools
LARALAB is a stand-alone software developed to enable cardiologists, radiologists, heart surgeons and healthcare professionals ("Users") to import, view and process Medical Images. In particular, the software generates pre-calculated automatic segmentations and measurements based on deterministic Deep Learning Algorithms. Based on the output of the Deep Learning Algorithms, the User is able to further visualize, assess and measure ("Case Planning") various anatomical structures of the heart in the context of cardiovascular procedures (e.g., TAVR) such as heart valves, heart chambers, cardiac tissue and vessels, as well as such vessels and tissue relevant as access routes.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) Clearance Letter for LARALAB:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Metric | Acceptance Criterion | Reported Device Performance |
|---|---|---|---|
| Segmentation Accuracy | Dice score for primary cardiovascular structures (LA, LV, RV, RA) | Met predefined acceptance criteria | Ranged from 0.89 to 0.98 |
| Segmentation Accuracy | Dice score for secondary and tertiary structures | Met predefined acceptance criteria | Met predefined acceptance criteria |
| Segmentation Accuracy | Mean Surface Distance (MSD) | Not stated numerically; "met predefined acceptance criteria" | Met predefined acceptance criteria |
| Segmentation Accuracy | 95th percentile Hausdorff distance (95% HD) | Not stated numerically; "met predefined acceptance criteria" | Met predefined acceptance criteria |
| Measurement Accuracy | Bland-Altman analysis: mean bias and 95% limits of agreement for all assessed parameters | Within predefined acceptance criteria | Within predefined acceptance criteria |
| Measurement Consistency (Ground Truth) | Intraclass Correlation Coefficient (ICC) for clinical experts' manual measurements | ICC > 0.75 | Above 0.75 for all measurements |
| Cybersecurity | Medium- or high-risk vulnerabilities | No medium- or high-risk vulnerabilities identified | No medium- or high-risk vulnerabilities identified |
| Cybersecurity | Overall security posture | Strong overall security posture with no critical issues | Strong overall security posture with no critical issues |
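For readers unfamiliar with the segmentation metrics in the table, the following is a minimal, illustrative sketch of how the Dice score, mean surface distance, and 95th percentile Hausdorff distance are commonly computed. It operates on toy point sets rather than full CT volumes and is not code from the submission:

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two voxel sets (1.0 = perfect overlap)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def directed_distances(a, b):
    """Distance from each point in a to its nearest neighbour in b (brute force)."""
    return [min(math.dist(p, q) for q in b) for p in a]

def mean_surface_distance(a, b):
    """Symmetric mean surface distance between two point sets."""
    d = directed_distances(a, b) + directed_distances(b, a)
    return sum(d) / len(d)

def hd95(a, b):
    """95th percentile of the symmetric surface distances (robust Hausdorff)."""
    d = sorted(directed_distances(a, b) + directed_distances(b, a))
    return d[min(len(d) - 1, math.ceil(0.95 * len(d)) - 1)]
```

In practice these metrics are computed on dense voxel masks or surface meshes; the brute-force nearest-neighbour search here is only for clarity.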
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 60 patient datasets
- Data Provenance: Multi-centric observational cohort study. The document does not state the country of origin, but it notes data diversity across CT manufacturers and imaging parameters (slice thickness, contrast enhancement). The study appears to have been retrospective: the statement that "No datasets were included that were used for training the deep learning models" indicates pre-existing datasets that were held out from model training.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not stated as a specific number; the document refers only to "clinical experts" who generated the ground truth using the predicate device. The ICC values (above 0.75) indicate that multiple experts were involved and showed good agreement.
- Qualifications: "Expert clinicians" (implied to be cardiologists, radiologists, heart surgeons, or other healthcare professionals as per the device's intended users and the "Comparison" section referencing these specialists). No specific years of experience are provided, but their status as "experts" and their use of the predicate device for ground truth generation supports their qualification.
4. Adjudication Method for the Test Set
The document does not state a specific adjudication method (e.g., 2+1 or 3+1). Because the Intraclass Correlation Coefficient (ICC) was calculated to assess consistency between the clinical experts' manual measurements, it appears that multiple experts measured independently and their agreement was quantified, rather than disagreements being resolved through a formal adjudication process.
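The ICC variant used in the study is not specified. Purely as an illustration (assuming a one-way random-effects model, ICC(1,1), which may differ from the submission's choice), rater consistency can be computed like this:

```python
from statistics import mean

def icc1(ratings):
    """One-way random-effects ICC(1,1).

    ratings[i][j] = reading j of subject i. Illustrative only; the 510(k)
    summary does not say which ICC form was used.
    """
    n = len(ratings)        # number of subjects
    k = len(ratings[0])     # readings per subject
    grand = mean(v for row in ratings for v in row)
    subj_means = [mean(row) for row in ratings]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((v - m) ** 2 for row, m in zip(ratings, subj_means) for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC above 0.75, as reported, is conventionally read as good agreement.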
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- Not in the formal sense. The study is not described as an MRMC comparative effectiveness study of human readers with vs. without AI assistance. Instead, expert clinicians generated the ground truth using the predicate device (the device LARALAB is compared against), which establishes a performance baseline relative to the current standard.
- Effect Size of Human Reader Improvement with AI vs. without AI Assistance: Not reported. The study compares LARALAB's automatic segmentations and measurements against manual ground-truth measurements obtained by clinicians using the predicate device. The document states, "The study concluded that LARALAB's automatic pre-calculated segmentations and measurements are as accurate and reliable as those obtained using the predicate device." This validates the algorithm's standalone performance against human-derived ground truth, but provides no numerical effect size for human improvement with AI assistance.
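The Bland-Altman analysis referenced above can be sketched as follows. This is a generic toy implementation, since the submission reports only that results fell within predefined (unpublished) limits:

```python
from statistics import mean, stdev

def bland_altman(auto, manual):
    """Mean bias and 95% limits of agreement between paired measurements.

    Illustrative only; the actual acceptance limits in the 510(k) submission
    are not published in the document.
    """
    diffs = [a - m for a, m in zip(auto, manual)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A small mean bias with narrow limits of agreement indicates that the automatic measurements track the manual reference closely.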
6. If a Standalone Performance (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Yes. The study directly evaluates the "automatic pre-calculated segmentations and measurements" generated by LARALAB's "deterministic Deep Learning Algorithms." These automatic outputs are then compared against the ground truth. This is a standalone performance evaluation of the algorithm. The device then allows the user to "further visualize, assess and measure" and "review/adjust/approve" the pre-calculated outputs, indicating that the algorithm's initial output is standalone.
7. The Type of Ground Truth Used
- Expert Consensus/Manual Measurements using a Predicate Device. The ground truth was established by "expert clinicians with the predicate device." Specifically, manual measurements generated by these experts using the predicate device served as the reference. The ICC was used to confirm the consistency of these expert measurements.
8. The Sample Size for the Training Set
- The document states, "No datasets were included that were used for training the deep learning models" for the test set. However, the actual sample size for the training set is not provided in this document.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It only mentions that the deep learning algorithms were used to generate "pre-calculated automatic segmentations and measurements." Without further information, one would infer similar methods (e.g., expert annotation) were likely used, but this is not confirmed in the text.
(62 days)
Materialise Mimics Enlight is intended for use as a software interface and image segmentation system for the transfer of DICOM imaging information from a medical scanner to an output file.
It is also intended as software to aid in interpreting DICOM compliant images for structural heart and vascular treatment options. For this purpose, Materialise Mimics Enlight provides additional visualisation and measurement tools to enable the user to screen and plan the procedure.
The Materialise Mimics Enlight output file can be used for the fabrication of physical replicas using traditional additive manufacturing methods. The physical replica can be used for diagnostic purposes in the field of cardiovascular applications.
Materialise Mimics Enlight should be used in conjunction with other diagnostic tools and expert clinical judgement.
Materialise Mimics Enlight for structural heart and vascular planning is a software interface organized in a workflow approach. At a high level, each structural heart and vascular workflow follows the same four-step structure, which enables the user to plan the procedure:
- Analyse anatomy
- Plan device
- Plan delivery
- Output
To perform these steps the software provides different methods and tools to visualize and measure based on the medical images.
The user is a medical professional, such as a cardiologist or clinical specialist. To start the workflow, DICOM compliant medical images need to be imported. The software reads the images and converts them into a project file. The user can then start the workflow and follow the steps visualized in the software. The basis of the workflow is to create a 3D reconstruction of the anatomy from the medical images, which is then used alongside the 2D medical images to plan the procedure.
The provided text describes the Materialise Mimics Enlight device and its 510(k) submission for FDA clearance. However, it does not contain specific details about acceptance criteria, numerical performance data, details of the study (sample sizes, ground truth provenance, number/qualifications of experts, adjudication methods, MRMC studies, or standalone performance), or training set information.
The document mainly focuses on:
- Defining Materialise Mimics Enlight's intended use and indications.
- Establishing substantial equivalence to predicate devices (Mimics Medical, 3mensio Workstation, Mimics inPrint).
- Describing general technological similarities and differences between the subject device and predicates.
- Stating that software verification and validation were performed according to FDA guidance, including bench testing and end-user validation.
- Mentioning "geometric accuracy" assessments for virtual models and physical replicas, and interrater consistency for the semi-automatic neo-LVOT tool, with the conclusion that "deviations were within the acceptance criteria."
Therefore, based only on the provided text, I cannot complete the requested tables and descriptions with specific numerical values for acceptance criteria or study results.
Here's a summary of what can be extracted and what is missing:
1. Table of acceptance criteria and reported device performance
| Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Geometric Accuracy (Virtual Models) | Not specified numerically in document | "Deviations were within the acceptance criteria." |
| Geometric Accuracy (Physical Replicas) | Not specified numerically in document | "Deviations were within the acceptance criteria." |
| Semi-automatic Neo-LVOT Tool | Not specified numerically in document (e.g., target interrater consistency percentage or statistical threshold) | "Demonstrated a higher interrater consistency/repeatability." |
Missing Information: Specific numerical values for the acceptance criteria for geometric accuracy (e.g., tolerance in mm) and for interrater consistency of the neo-LVOT tool.
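Because no numeric tolerance is published, a geometric-accuracy check can only be illustrated with a placeholder. The `tol_mm=0.5` default below is hypothetical and not taken from the document:

```python
def deviations_within_tolerance(measured_mm, reference_mm, tol_mm=0.5):
    """Compare model or replica measurements against reference values.

    tol_mm is a hypothetical placeholder; the 510(k) summary does not
    publish the actual acceptance tolerance used.
    """
    deviations = [abs(m - r) for m, r in zip(measured_mm, reference_mm)]
    return max(deviations) <= tol_mm, deviations
```

In a real validation, the reference values would come from the original DICOM data or CAD models, and the measured values from the virtual model or the printed replica.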
2. Sample size used for the test set and data provenance
- Sample size for test set: Not specified. The document mentions "Bench testing" and "a set of 3D printers" for physical replicas, but no case numbers.
- Data provenance (country of origin, retrospective/prospective): Not specified.
3. Number of experts used to establish the ground truth for the test set and their qualifications
- Number of experts: Not specified.
- Qualifications of experts: Not specified. The document mentions "medical professional, like cardiologists or clinical specialists" as intended users, but not specifically for ground truth establishment in a test set.
4. Adjudication method for the test set
- Adjudication method: Not specified.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and its effect size
- The document implies general "end-user validation" and mentions the neo-LVOT tool showing "higher interrater consistency/repeatability," which suggests some form of human reader involvement. However, it does not explicitly state that a multi-reader, multi-case (MRMC) comparative effectiveness study was performed in the context of human readers improving with AI vs. without AI assistance.
- Effect size: Not specified.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- "Software verification and validation were performed... This includes verification against defined requirements, and validation against user needs. Both end-user validation and bench testing were performed." This implies that the device's performance was evaluated, potentially including standalone aspects, but it doesn't separate out a clear standalone performance study result. The "semi-automatic" nature of the Neo-LVOT tool means it's not purely algorithmic.
7. The type of ground truth used
- While not explicitly stated, the context of "geometric accuracy of virtual models" and "physical replicas" suggests ground truth would be based on:
- Geometric measurements: Reference measurements from the original DICOM data or CAD models for virtual models, and precise measurements of the physical replicas for comparison.
- For the neo-LVOT tool, ground truth for "interrater consistency/repeatability" would likely be derived from expert measurements.
8. The sample size for the training set
- Sample size for training set: Not specified. The document focuses on verification and validation, not development or training data.
9. How the ground truth for the training set was established
- Ground truth for training set: Not specified. As above, the document does not detail the training set.