510(k) Data Aggregation
(203 days)
SIMPLEWARE LTD.
ScanIP is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging scanner to an output file. It is also intended as pre-operative software for simulating/evaluating surgical treatment options. ScanIP is not intended to be used for mammography imaging.
ScanIP represents a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging scanner to an output file. ScanIP provides a core image processing interface with several additional modules available to users – these include +CAD, +FE and +NURBS - which provide further options for working with image data. +CAD enables the integration of computer-aided design (CAD) drawings such as implants with patient-specific data; +FE allows segmented image data to be exported as computational models for physics-based simulations in other software; +NURBS is designed to allow users to export segmented data as NURBS IGES files to CAD software.
ScanIP is written in C++ and developed using the Microsoft Visual Studio integrated development environment (IDE). Supported operating systems are Windows XP, Windows Vista, Windows 7, and Windows 8; 32-bit and 64-bit versions of the software are available. The minimum processor requirement is an Intel Core i3 or equivalent, minimum memory is 4096 MB (4 GB), and an OpenGL-compatible graphics card with 32 MB of RAM is required. The screen resolution of a workstation should be a minimum of 1024 x 768 in high colour (16-bit), and 10 GB of disk space is recommended as a minimum.
The software must be able to visualise and process medical images using a range of filters and tools, and can export models as output files. ScanIP meets DICOM standards for the transfer of medical images. The software is also intended for use in the early stages of pre-surgical planning: visualising patient-specific data, taking measurements and obtaining statistics (such as bone density, distances, and angles between arteries), and integrating computer drawings of implants with patient data to evaluate fitness for use. This functionality has applications in implant evaluation and in exporting models for simulation in other software. Output files can be used in these other applications; ScanIP does not integrate with them directly.
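The measurements described above (bone density, distances) follow directly from standard DICOM metadata. A minimal sketch, assuming the CT rescale tags and voxel spacing are known; the helper names below are hypothetical illustrations, not ScanIP's API:

```python
import numpy as np

def to_hounsfield(stored_pixels, rescale_slope=1.0, rescale_intercept=-1024.0):
    """Convert raw CT pixel values to Hounsfield units (HU) using the
    DICOM RescaleSlope (0028,1053) and RescaleIntercept (0028,1052) tags."""
    return stored_pixels * rescale_slope + rescale_intercept

def physical_distance(voxel_a, voxel_b, spacing_mm):
    """Euclidean distance in mm between two voxel indices, scaled by the
    voxel spacing (row, column, slice) from PixelSpacing/SliceThickness."""
    delta = (np.asarray(voxel_a) - np.asarray(voxel_b)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(delta))

# Under the default rescale, water maps to 0 HU and air to about -1000 HU.
raw = np.array([1024, 24, 2024], dtype=np.int32)
hu = to_hounsfield(raw)  # -> [0, -1000, 1000]
dist = physical_distance((0, 0, 0), (3, 4, 0), spacing_mm=(0.5, 0.5, 1.0))  # 2.5 mm
```

Real pipelines would read these tags from the DICOM headers rather than hard-coding them.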
Processed medical images can also be exported as output files to 3D printing processes for the creation of physical models that can be used in pre-surgical planning (inspection of implant fit), and as computational models to other software programs for running simulations (e.g. stress/strain limits in bone, fluid flow through vessels and airways).
ScanIP has FDA clearance to generate 3D models and export these models in a format suitable for 3D printing to be used as physical models for visualization or educational purposes only. This clearance does not cover medical devices manufactured from those output files.
This document primarily describes the regulatory clearance of a software device, ScanIP; it does not contain a typical study performed to meet acceptance criteria with specific performance metrics such as sensitivity, specificity, or reader improvement. The "performance data" section states that "Software Verification and Validation Testing" was conducted, and the conclusions state that "Verification and validation testing of ScanIP, and inclusion of the subject device's Reference Guide and the predicate's Reference Guide supports substantial equivalence based on performance testing and detailed descriptive criteria." This generally refers to internal software testing, not a clinical performance study with human subjects or readers as one might typically expect for evaluating the efficacy of an AI-powered diagnostic tool.
The document focuses on establishing substantial equivalence to a predicate device (Mimics K073468) based on technological characteristics and software verification/validation activities.
Therefore, many of the requested items cannot be extracted directly from the provided text.
Here's an attempt to answer the questions based on the information available:
Acceptance Criteria and Study Details for ScanIP
1. Table of Acceptance Criteria and Reported Device Performance
Based on the document, the "acceptance criteria" appear to be related to demonstrating substantial equivalence through software verification and validation, aligning with the predicate device's functionality, and mitigating potential risks associated with software failure. Specific quantitative performance metrics (e.g., sensitivity, specificity, F-score) are not provided.
| Acceptance Criteria Category | Description from Document | Reported Device Performance/Status |
|---|---|---|
| Technological Equivalence to Predicate | The device should have equivalent technological elements to the predicate device (Mimics, K073468), including: visualization, segmentation, processing, and file export of medical images; application of software algorithms, filters, and tools; compatibility with scanner data (MRI, CT, micro-CT); visualization of data in 2D and 3D; tools to take measurements and record statistics; algorithms to create surface meshes (e.g., STL); filters for morphological image processing; tools for 3D editing (e.g., paint); tools for segmenting images (e.g., thresholding); and file export for FEA, CAD, and 3D printing. | ScanIP is stated to be "substantially equivalent" to Mimics (K073468) based on satisfying these technological characteristics, which the document explicitly lists as "equivalent technological elements." The only difference noted is that the predicate has dedicated surgical planning modules, which the subject device does not; this difference was evidently deemed not to affect substantial equivalence for the stated indications. |
| Software Verification and Validation (V&V) | As recommended by FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." This includes system-level tests and validation testing. | "Software verification and validation testing were conducted and documentation was provided." The document concludes that "Verification and validation testing of ScanIP... supports substantial equivalence based on performance testing and detailed descriptive criteria." The software was assigned a "Moderate" level of concern, and appropriate steps were taken to ensure mitigation of potential risks (e.g., misinterpreting scanned data leading to minor injury through an incorrect surgical implant design). |
| Risk Mitigation | Mitigation of hazards where a failure of the device could result in minor injury (e.g., misinterpreting scanned data causing an incorrect surgical implant design). | Software documentation, including V&V activities and related performance data, was provided to demonstrate that "appropriate steps have been taken to ensure mitigation of potential risks." |
| DICOM Compliance | Voluntary compliance with the ACR/NEMA Digital Imaging and Communication in Medicine (DICOM) Standard (Version 3.0). | "ScanIP meets DICOM standards for the transfer of medical images." Both subject and predicate devices are "voluntarily compliant with the ACR/NEMA Digital Imaging and Communication in Medicine (DICOM) Standard (Version 3.0)." |
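The segmentation tools named in the table (intensity thresholding followed by morphological filtering) can be sketched in a few lines of numpy. This is an illustrative assumption about how such a pipeline works, not ScanIP's implementation:

```python
import numpy as np

def threshold_segment(volume, low, high):
    """Binary mask of voxels whose intensity lies in [low, high].
    Illustrative sketch, not ScanIP's segmentation code."""
    return (volume >= low) & (volume <= high)

def binary_erode(mask):
    """One step of 6-connected 3D erosion: a voxel survives only if all
    six face neighbours are also set (zero-padded at the border)."""
    padded = np.pad(mask, 1, mode="constant")
    out = padded[1:-1, 1:-1, 1:-1].copy()
    for axis in range(3):
        for shift in (1, -1):
            out &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return out

# Toy volume: a 3x3x3 block of "bone-like" intensity inside background.
vol = np.zeros((5, 5, 5))
vol[1:4, 1:4, 1:4] = 100.0
mask = threshold_segment(vol, 50, 150)  # 27 voxels selected
core = binary_erode(mask)               # only the centre voxel survives
```

Morphological opening (erosion followed by dilation) built from steps like `binary_erode` is a common way to remove thresholding noise before meshing.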
2. Sample size used for the test set and the data provenance
The document does not describe a "test set" in the context of a clinical performance study with patient data. The "performance data" refers to software verification and validation testing. Therefore, details about sample size (e.g., number of cases/patients) and data provenance (country of origin, retrospective/prospective) are not applicable/provided for a clinical test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not applicable/provided as the document does not describe a clinical evaluation with a test set requiring expert ground truth for specific diagnostic outcomes.
4. Adjudication method for the test set
This information is not applicable/provided as there is no described test set requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?
A multi-reader multi-case (MRMC) comparative effectiveness study was not done and is not described in this document. The device is a software interface and image segmentation system, not an AI-assisted diagnostic tool designed to directly improve human reader performance in a diagnostic task. Its purpose is for image processing, segmentation, measurement, and export for pre-surgical planning or other downstream applications.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
A standalone performance study in the sense of evaluating diagnostic accuracy of an algorithm without human intervention was not conducted or described. The document focuses on the software's functionality and its role as a tool for trained professionals. Its outputs (segmented images, measurements, models) are intended to be used by clinicians who "retain the ultimate responsibility for making a decision."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The concept of "ground truth" used in typical clinical performance studies (e.g., against pathology or clinical outcomes) is not applicable/provided for the software verification and validation described. The "ground truth" for software testing would typically involve expected outputs based on specified inputs and functional requirements, rather than clinical diagnostic ground truth.
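To make the distinction concrete, a software-verification "ground truth" is just a specified input paired with a required output, checked automatically. A purely illustrative sketch, not ScanIP's actual test suite:

```python
def mean_intensity(voxels):
    """Functional requirement under test: report the mean intensity
    of a region of interest. Hypothetical example function."""
    return sum(voxels) / len(voxels)

# The "ground truth" here is fixed by the specification, not by
# pathology or clinical outcomes: this input must yield this output.
spec_input = [10.0, 20.0, 30.0]
expected_output = 20.0
result = mean_intensity(spec_input)
```

System-level V&V of the kind the document cites would stack many such specification-derived cases across the software's features.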
8. The sample size for the training set
This document does not describe an AI model that undergoes "training." ScanIP is an image processing and segmentation software; it is not presented as a machine learning or AI-powered model that would require a "training set" in the conventional sense. Therefore, this information is not applicable/provided.
9. How the ground truth for the training set was established
As there is no described "training set" for an AI model, this information is not applicable/provided.