Sim&Size enables visualization of cerebral blood vessels for preoperational planning and sizing for neurovascular interventions and surgery. Sim&Size also allows users to computationally model the placement of neurointerventional devices.
General functionalities are provided, such as the following (the segmentation and centerline steps are sketched after the list):
- Segmentation of neurovascular structures
- Automatic centerline detection
- Visualization of X-ray based images for 2D review and 3D reconstruction
- Placing and sizing tools
- Reporting tools
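To make the segmentation and centerline functionalities listed above more concrete, here is a minimal, hypothetical sketch using NumPy and scikit-image. The threshold-based segmentation, the skeleton-as-centerline approximation, and all numeric values are illustrative assumptions and are not drawn from the Sim&Size implementation.

```python
# Hypothetical sketch of vessel segmentation and centerline extraction on a
# 3D angiographic volume. Thresholds and sizes are illustrative only.
import numpy as np
from skimage import measure, morphology

def segment_vessels(volume: np.ndarray, threshold: float = 300.0) -> np.ndarray:
    """Crude intensity-threshold segmentation of contrast-enhanced vessels."""
    mask = volume > threshold
    labels = measure.label(mask)
    if labels.max() == 0:
        return mask
    # Keep only the largest connected component (the main vascular tree).
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    return labels == largest

def extract_centerline(mask: np.ndarray) -> np.ndarray:
    """Approximate the centerline as the 3D skeleton of the vessel mask."""
    return morphology.skeletonize(mask)

if __name__ == "__main__":
    # Synthetic stand-in for a 3D rotational angiography volume: a bright tube.
    vol = np.zeros((64, 64, 64), dtype=np.float32)
    vol[20:44, 30:34, 30:34] = 500.0
    vessel_mask = segment_vessels(vol)
    centerline = extract_centerline(vessel_mask)
    print("vessel voxels:", int(vessel_mask.sum()),
          "centerline voxels:", int(centerline.sum()))
```

In practice, clinical tools use considerably more robust segmentation and centerline algorithms; this sketch only shows the general shape of the pipeline.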
Information provided by the software is not intended in any way to eliminate, replace or substitute for, in whole or in part, the healthcare provider's judgment and analysis of the patient's condition.
Sim&Size is a Software as a Medical Device (SaMD) for simulating neurovascular implantable medical devices. The product enables visualization of cerebral blood vessels for preoperational planning for neurovascular interventions and surgery. It uses a patient image produced by 3D rotational angiography. It offers clinicians the possibility of simulating neurovascular implantable medical devices in the artery or in the aneurysm to be treated through endovascular surgery, and it supports treatment by assisting in the sizing and positioning of implantable medical devices.
Each type of implantable device is simulated in a dedicated Sim&Size simulation module:
- FDsize, a module for pre-operational planning of Flow-Diverter (FD) devices.
- IDsize, a module for pre-operational planning of Intrasaccular (ID) devices.
- STsize, a module for pre-operational planning of Stent (ST) devices.
- FCsize, a module for pre-operational planning of First and Filling Coil (FC) devices.
Associated with these four modules, a common module imports DICOM images and provides a 3D reconstruction of the vascular tree in the surgical area.
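The common module's DICOM-import and 3D-reconstruction role can be illustrated with a minimal sketch, assuming a 3D rotational angiography series on disk. SimpleITK, scikit-image's marching cubes, the directory path, and the iso-value are all illustrative assumptions, not details of Sim&Size's actual implementation.

```python
# Hypothetical DICOM-import and surface-reconstruction sketch.
import SimpleITK as sitk
from skimage import measure

def load_dicom_series(directory: str) -> sitk.Image:
    """Read a DICOM series from a directory into a single 3D image."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(directory))
    return reader.Execute()

def reconstruct_surface(image: sitk.Image, iso_value: float = 300.0):
    """Extract a triangulated vessel surface at an illustrative iso-value."""
    volume = sitk.GetArrayFromImage(image)   # (z, y, x) NumPy array
    spacing = image.GetSpacing()[::-1]       # reorder spacing to (z, y, x)
    verts, faces, _normals, _values = measure.marching_cubes(
        volume, level=iso_value, spacing=spacing
    )
    return verts, faces

if __name__ == "__main__":
    img = load_dicom_series("/path/to/3dra_series")  # hypothetical path
    verts, faces = reconstruct_surface(img)
    print(f"surface: {len(verts)} vertices, {len(faces)} triangles")
```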
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Sim&Size:
Acceptance Criteria and Device Performance
The provided document highlights performance testing without explicitly stating quantitative acceptance criteria. However, the nature of the tests implies that the device simulation must accurately match the theoretical behavior of the implantable medical device, the device placement observed in a silicone phantom model, and the in vitro retrospective cases.
Given the context of a 510(k) submission, the implicit acceptance criterion is that the device's performance is substantially equivalent to the predicate device and that the new features do not raise new questions of safety and effectiveness.
Here's a table based on the types of performance tests conducted (an illustrative sketch of the kind of quantitative comparison these tests imply follows the table):
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| Verification Testing: Predictive behavior matches the theoretical behavior of the implantable medical devices. | "Verification testing, which compares the predictive behavior of the implantable medical device with its theoretical behavior." (Implies successful verification, based on the Conclusion stating the device "performs as intended.") |
| Bench Testing: Simulated device placement matches physical placement in a silicone phantom model. | "Bench testing, which compares the device placement in a silicone phantom model with the device simulation." (Implies successful bench testing, based on the Conclusion stating the device "performs as intended.") |
| Retrospective In Vivo Testing: Simulated cases match actual in vivo outcomes (or in vitro representations of retrospective in vivo data). | "Retrospective in vivo testing, which compares the in vitro retrospective cases with the device simulation." (Implies successful retrospective testing, based on the Conclusion stating the device "performs as intended.") The phrase "in vitro retrospective cases" suggests either lab-based re-creations of, or laboratory analyses derived from, real patient data. |
| Overall Performance: New features do not introduce new safety or effectiveness concerns, and the device is substantially equivalent to the predicate. | The Conclusion states: "The subject and predicate devices are substantially equivalent. The results of the verification and validation tests demonstrate that the Sim&Size device performs as intended. The new features added to the subject device do not raise new questions of safety and effectiveness." |
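The summary does not report numeric thresholds, but the bench-test description implies a simulated-versus-measured comparison. The sketch below only illustrates that kind of check; the measurement names (deployed length, distal landing distance) and the 1 mm tolerance are assumptions for the example, not values from the 510(k) summary.

```python
# Illustrative comparison of simulated vs. bench-measured device placement.
# Measurements, tolerance, and pass/fail logic are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Deployment:
    deployed_length_mm: float    # device length after deployment
    distal_landing_mm: float     # distal end distance from a reference marker

def compare(simulated: Deployment, measured: Deployment, tol_mm: float = 1.0) -> dict:
    """Report absolute errors and a pass/fail flag against an assumed tolerance."""
    length_err = abs(simulated.deployed_length_mm - measured.deployed_length_mm)
    landing_err = abs(simulated.distal_landing_mm - measured.distal_landing_mm)
    return {
        "length_error_mm": length_err,
        "landing_error_mm": landing_err,
        "within_tolerance": length_err <= tol_mm and landing_err <= tol_mm,
    }

if __name__ == "__main__":
    sim = Deployment(deployed_length_mm=18.2, distal_landing_mm=3.1)
    phantom = Deployment(deployed_length_mm=17.6, distal_landing_mm=3.4)
    print(compare(sim, phantom))
```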
Study Details:
Based on the provided document, here's what can be inferred about the studies conducted:
- Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated in the document.
- Data Provenance:
- "Retrospective in vivo testing" suggests real-world patient data, but the phrase "in vitro retrospective cases" implies these were lab-based re-creations or analyses of that data. The specific country of origin is not mentioned, but given the company's address (Montpellier, France), it's plausible the data could originate from Europe, although this is not confirmed.
- "Bench testing" uses a "silicone phantom model," which is an experimental setup, not clinical data provenance.
- "Verification testing" involves comparing theoretical behavior, which doesn't involve a dataset in the same way clinical or phantom models do.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The document refers to "theoretical behavior," "silicone phantom model," and "in vitro retrospective cases" as benchmarks, but it doesn't detail how the ground truth for "in vitro retrospective cases" was established or if experts were involved in defining the "theoretical behavior" or validating the phantom results.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- This information is not provided in the document.
- If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No MRMC study is explicitly mentioned. The device "enables visualization of cerebral blood vessels" and "allows for the ability to computationally model the placement of neurointerventional devices," but it's stated that "Information provided by the software is not intended in any way to eliminate, replace or substitute for, in whole or in part, the healthcare provider's judgment and analysis of the patient's condition." This indicates it's a tool for assistance, but the document does not detail studies on human reader performance improvement with this AI.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- The "Verification testing," "Bench testing," and "Retrospective in vivo testing" (comparing simulations to "in vitro retrospective cases") all describe methods that would assess the algorithm's standalone performance without a human in the loop for the actual comparison/measurement, although human input (e.g., in segmentation, placing/sizing tools) is part of the device's intended use. The wording "compares the predictive behavior... with its theoretical behavior" and "compares the device placement... with the device simulation" explicitly refers to the device's performance, implying a standalone assessment of the algorithmic component.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Theoretical Behavior: Used for "Verification testing" (e.g., physical laws, engineering models of device deployment).
- Physical Phantom Model: Used for "Bench testing" (measurements from a physical silicone model).
- "In vitro retrospective cases": Used for "Retrospective in vivo testing." This implies a ground truth derived from actual patient data, analyzed or re-created in a laboratory (in vitro). It's not explicitly stated if this ground truth was pathology or outcomes data, but rather a representation of the in vivo reality.
- The sample size for the training set:
- This information is not provided in the document, which focuses on verification and validation testing rather than on the training of any underlying models.
- How the ground truth for the training set was established:
- This information is not provided as the document does not detail the training process.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).