K Number
K212373
Device Name
Sim&Size
Manufacturer
Date Cleared
2022-01-27

(181 days)

Product Code
Regulation Number
892.2050
Panel
NE
Reference & Predicate Devices
Intended Use

Sim&Size enables visualization of cerebral blood vessels for pre-operative planning and sizing for neurovascular interventions and surgery. Sim&Size also allows clinicians to computationally model the placement of neurointerventional devices.

General functionalities are provided such as:

  • Segmentation of neurovascular structures
  • Automatic centerline detection
  • Visualization of X-Ray based images for 2D review and 3D reconstruction
  • Placing and sizing tools
  • Reporting tools
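
The submission does not describe how these functionalities are implemented. As an illustrative sketch only (not the vendor's algorithm), vessel segmentation of a contrast-enhanced 3D volume can be approximated by intensity thresholding followed by keeping the largest connected component; the function name and threshold below are assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_vessels(volume: np.ndarray, threshold: float) -> np.ndarray:
    """Illustrative vessel segmentation: threshold the contrast-enhanced
    volume, then keep only the largest connected component."""
    mask = volume > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask  # nothing above threshold
    # Component sizes, ignoring the background (label 0).
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    return labels == largest
```

A real product would use far more robust segmentation (e.g., region growing or model-based methods), but the sketch shows the input/output shape of the step: a 3D intensity volume in, a binary vessel mask out.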

Information provided by the software is not intended in any way to eliminate, replace or substitute for, in whole or in part, the healthcare provider's judgment and analysis of the patient's condition.

Device Description

The Sim&Size software is a medical device intended to provide a 3D view of the final placement of implants. It uses an image of the patient produced by 3D rotational angiography, and offers clinicians the possibility of computationally modeling neurovascular implantable medical devices (IMDs), such as flow diverters (FD) and intrasaccular devices (ISD), in the artery or in the aneurysm to be treated through endovascular surgery.

Sim&Size is designed with three modules: FDsize, for pre-operative planning of the choice of size of flow-diverter devices; IDsize, for the choice of size of intrasaccular devices; and STsize, for the choice of size of stent devices.

Common to these three modules is functionality intended to import DICOM images and to provide a 3D reconstruction of the vascular tree in the surgical area.
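
The DICOM import step is not detailed in the submission. As a minimal sketch of what assembling a 3D volume from an imported slice series involves, the helper below orders slices by their z position and stacks their pixel data; the attribute names (`ImagePositionPatient`, `pixel_array`) match what pydicom datasets expose, but the function itself is hypothetical:

```python
import numpy as np

def stack_slices(slices) -> np.ndarray:
    """Illustrative: order DICOM slices by their z position and stack
    their pixel arrays into a 3D volume.  `slices` are objects exposing
    ImagePositionPatient and pixel_array, as pydicom datasets do."""
    ordered = sorted(slices, key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in ordered])
```

With pydicom, usage would look like `stack_slices(pydicom.dcmread(p) for p in paths)`; a production importer would also handle spacing, orientation, and rescale tags.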

AI/ML Overview

Here's a breakdown of the acceptance criteria and study information for the Sim&Size device, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The FDA submission details a type of "performance testing - bench" for the computational modeling of neurovascular devices. This includes verification tests (checking mathematical definitions) and validation tests (experimental bench tests, in vitro datasets). The document states that "All performance testing has been performed and passed. The Sim&Size software version 1.1.2 has met the required specifications for the completed tests."

While specific numerical acceptance criteria (e.g., tolerance ranges for length or apposition) are not explicitly stated in the provided text, the successful completion of these tests serves as the reported performance, meeting the implicit acceptance criteria of conforming to mathematical definitions, validity for new IMD devices, proper calibration, and predictability.

| Acceptance Criteria Category | Specific Criteria (Implicit/Derived) | Reported Device Performance |
|---|---|---|
| Software Functionality | DICOM image import successful | Tests passed (continuous, supervised, acceptance) |
| | Patient manager functions correctly | Tests passed (continuous, supervised, acceptance) |
| | Image display & processing correct | Tests passed (continuous, supervised, acceptance) |
| | Anatomic reconstruction visualization correct | Tests passed (continuous, supervised, acceptance) |
| | Report creation and visualization correct | Tests passed (continuous, supervised, acceptance) |
| | Fusion correction (auto/manual) correct | Tests passed (continuous, supervised, acceptance) |
| | Cybersecurity requirements met | Tests passed (continuous, supervised, acceptance) |
| | No regression from predicate device | No regression identified between predicate and version 1.1.2 |
| FDsize Module Performance | Flow Diverter final length conforms to mathematical definition | Verification tests passed |
| | Flow Diverter apposition conforms to mathematical definition | Verification tests passed |
| | Simulation model valid for new IMDs | Validation tests passed |
| | Proper calibration with device geometrical parameters | Validation tests passed |
| | Predictability of Sim&Size output for new IMDs | Validation tests passed |
| STsize Module Performance | Laser-cut stents verified via in-house numerical solver | Verification tests passed |
| | IMD final length (braided stents) conforms to mathematical definition | Verification tests passed |
| | IMD apposition (braided stents) conforms to mathematical definition | Verification tests passed |
| | Simulation model valid for new stent IMDs | Validation tests passed |
| | Proper calibration with device geometrical parameters | Validation tests passed |
| | Predictability of Sim&Size output for new IMDs | Validation tests passed |
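
The submission does not disclose the "mathematical definition" used for IMD final length. As a hedged illustration of the kind of relation such a verification test could check, a first-order kinematic model of a braided stent holds wire length and turn count fixed, so deployment in a narrower vessel lengthens the device; this model and its parameter names are assumptions, not the vendor's formulation:

```python
import math

def braided_stent_length(nominal_length: float, nominal_diameter: float,
                         vessel_diameter: float, turns: float) -> float:
    """First-order kinematic model (illustrative, not the vendor's model):
    each braid wire is a helix of fixed total length S with a fixed number
    of turns n, so S^2 = L^2 + (n*pi*D)^2 and the deployed length in a
    vessel of diameter D is L = sqrt(S^2 - (n*pi*D)^2)."""
    wire_len_sq = nominal_length**2 + (turns * math.pi * nominal_diameter)**2
    arg = wire_len_sq - (turns * math.pi * vessel_diameter)**2
    if arg <= 0:
        raise ValueError("vessel too wide for this braid geometry")
    return math.sqrt(arg)
```

A verification test of the kind described in the table would then check that the software's computed length matches such a closed-form definition for known inputs.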

2. Sample Size for Test Set and Data Provenance

  • Sample Size for Test Set:
    • For the FDsize and STsize modules' validation tests, the text mentions:
      • "Experimental bench tests using optical imaging of new IMD devices samples in both unconstrained and constrained configurations." (The exact number of samples is not specified.)
      • "In vitro (silicon model) datasets in which the predictability of the simulation model is assessed comparing in-vitro and virtual Flow Diverter devices implanted in silicone phantom based on anatomy of patients presenting with intracranial aneurysms." (The exact number of datasets/phantoms/patients is not specified.)
  • Data Provenance: The phrasing implies the use of in-vitro (silicone model) datasets and experimental bench tests, rather than patient data directly, for the validation of the modeling. The silicone models are "based on anatomy of patients presenting with intracranial aneurysms," suggesting a derivation from real patient data rather than direct use of patient images for this specific performance validation. There is no explicit mention of the country of origin of the underlying anatomical data. The study is retrospective in the sense that existing anatomical data, or models derived from them, are used.

3. Number of Experts and their Qualifications for Ground Truth (Test Set)

The provided text does not specify the number of experts used to establish ground truth for the test set, nor their specific qualifications. The validation tests rely on physical measurements from "optical imaging of new IMD devices samples" and "in vitro (silicon model) datasets" comparing "in-vitro and virtual" results. This suggests the ground truth for these performance tests is derived from direct physical and mathematical conformity, rather than expert consensus on imaging interpretation for the test set performance.

4. Adjudication Method for the Test Set

The document does not describe an adjudication method (like 2+1 or 3+1) for the test set. The performance validation seems to rely on the comparison of the software's computational model outputs with physical measurements from bench tests and in-vitro models, rather than human interpretation.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • No, an MRMC comparative effectiveness study was not explicitly mentioned or described. The focus of the performance data is on the standalone accuracy and predictability of the computational modeling, not on how the software improves human reader performance compared to a baseline without AI assistance.
  • The device explicitly states in its Indications for Use: "Information provided by the software is not intended in any way to eliminate, replace or substitute for, in whole or in part, the healthcare provider's judgment and analysis of the patient's condition." This reinforces that it's an assistive tool, but the study described does not quantify its impact on human readers.

6. Standalone (Algorithm Only) Performance Study

  • Yes, standalone performance (algorithm only) was done and is the primary focus of the performance testing. The "Performance Testing - Bench" section describes verification and validation tests for the "computational modeling of neurovascular devices" in both the FDsize and STsize modules. These tests assess the software's ability to accurately predict IMD length, apposition, conform to mathematical definitions, and be predictive in in-vitro models. This is an assessment of the algorithm's performance in isolation from clinical human-in-the-loop use.

7. Type of Ground Truth Used

The ground truth used for the performance testing (FDsize and STsize modules) includes:

  • Mathematical definitions: For verifying that computed lengths and appositions conform.
  • Experimental bench tests: Using "optical imaging of new IMD devices samples" providing physical measurements.
  • In-vitro (silicon model) datasets: Where the software's predictions are compared against physical implantations in phantoms.

Essentially, the ground truth is a combination of physical measurements, mathematical conformity, and in-vitro experimental results.
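
The submission reports only that the validation tests "passed," without numeric criteria. A pass/fail rule for comparing a virtual deployment against its bench-measured counterpart could be expressed as a relative-error tolerance; the helper and the 10% default below are purely illustrative assumptions:

```python
def within_tolerance(predicted_mm: float, measured_mm: float,
                     rel_tol: float = 0.10) -> bool:
    """Illustrative pass/fail rule for a validation test: the simulated
    IMD length must agree with the bench measurement within a relative
    tolerance (the 10% default is an assumption, not from the submission)."""
    return abs(predicted_mm - measured_mm) <= rel_tol * measured_mm
```

Under such a rule, each in-vitro/virtual pair yields a boolean, and "all performance testing passed" corresponds to every pair satisfying the tolerance.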

8. Sample Size for the Training Set

The document does not provide any information regarding a specific training set size. The description of performance testing focuses on verification and validation of the algorithms without detailing a machine learning training phase or associated dataset. Given the context, it's possible the computational models are based on established biomechanical principles and manufacturer device specifications, rather than being "trained" on a large image dataset in the context of deep learning.

9. How Ground Truth for the Training Set Was Established

As no explicit training set is mentioned in the context of typical machine learning, the question of how its ground truth was established is not addressed. The "simulation model" for the IMD devices is established via "device geometrical parameters that were provided by each of the IMD device manufacturers," and the "in-house numerical solver" for the STsize module, suggesting a rules-based or physics-based modeling approach rather than a data-driven machine learning approach requiring a separate training ground truth.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).