Search Results
Found 2 results
510(k) Data Aggregation
(155 days)
Simpleware ScanIP Medical
Simpleware ScanIP Medical is intended for use as a software interface and image segmentation system for the transfer of medical imaging information to an output file. It is also intended as pre-operative software for diagnostic and surgical planning.
For these purposes, output files can also be used for the fabrication of physical replicas using additive manufacturing methods. The physical replicas can be used for diagnostic purposes in the fields of orthopedic, maxillofacial, and cardiovascular applications.
The software is intended to be used in conjunction with other diagnostic tools and expert clinical judgment.
Simpleware ScanIP Medical is image processing software that enables users to import, visualize, and segment medical images, and export digital 3D models. These models can be used in the software for pre-surgical tasks, and can also be used to produce output files suitable for additive manufacturing (3D printing). Simpleware ScanIP Medical also has functionality for transferring from and to third-party software packages.
Simpleware ScanIP Medical is a modular product, including the following functionalities:
- Import of medical images in various formats
- Transferring files from and to computer-aided design (CAD) software packages
- Image filtering and segmentation tools
- 2D and 3D visualization of image data and CAD drawings
- Analysis, measurements, and statistics from 3D image data and CAD drawings
- Generating and exporting meshes to Finite Element (FE) packages
- Generating and exporting models to CAD software
- Support for scripting in a number of programming languages
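The 510(k) summary does not document ScanIP's scripting interface, so the following is purely an illustration of the import, segment, and mesh-export pipeline listed above, written with open-source Python libraries (SimpleITK, scikit-image, trimesh) as stand-ins. The DICOM directory path and the 300 HU threshold are assumptions, and none of these calls represent ScanIP's actual API.

```python
# Illustrative stand-in pipeline (NOT ScanIP's API): import a CT series,
# segment it by thresholding, and export a printable surface mesh.
import numpy as np
import SimpleITK as sitk
import trimesh
from skimage import measure

# Import: read a DICOM series from a directory (hypothetical path).
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_series/"))
image = reader.Execute()
volume = sitk.GetArrayFromImage(image)        # intensities ordered (z, y, x)
spacing = np.array(image.GetSpacing())[::-1]  # voxel size reordered to (z, y, x)

# Segment: a simple global threshold, e.g. for bright cortical bone on CT.
mask = (volume > 300).astype(np.uint8)        # 300 HU is an assumed threshold

# Export: triangulate the mask surface and write an STL for 3D printing.
verts, faces, _, _ = measure.marching_cubes(mask, level=0.5, spacing=spacing)
trimesh.Trimesh(vertices=verts, faces=faces).export("bone_model.stl")
```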
The provided text is a 510(k) summary for the device "Simpleware ScanIP Medical". It describes the device, its intended use, and compares it to a predicate device. However, it does not contain specific acceptance criteria, detailed study designs with sample sizes for test sets, expert qualifications, or adjudication methods for establishing ground truth. It states that "Validation was carried out for the workflow of going from 3D image to printed model, demonstrating that the anatomical models for cardiovascular, orthopedic, and maxillofacial applications can be printed accurately when using compatible 3D printers," but it does not provide the specifics of this validation study in the format you requested.
The document focuses on demonstrating substantial equivalence to a predicate device (Simpleware ScanIP, K142779) through non-clinical bench testing and technological comparisons, rather than providing detailed performance studies with acceptance criteria for a new clinical efficacy claim.
Therefore, many of your requested items cannot be extracted from this document.
Here's an attempt to answer what can be gathered:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not explicitly provided in the document in a quantifiable format with specific acceptance criteria and reported performance metrics. The document broadly states: "Validation of the subject device shows it to be equivalent in performance to the predicate device" and "Non-clinical bench-testing results demonstrate that the subject device is as safe, effective, and functional as the predicate device." It also mentions "demonstrating that the anatomical models for cardiovascular, orthopedic, and maxillofacial applications can be printed accurately when using compatible 3D printers," without providing specific accuracy metrics or thresholds.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not explicitly stated and quantified in the document. | Not explicitly stated and quantified in the document. |
2. Sample size used for the test set and the data provenance
The document mentions "Validation was carried out for the workflow of going from 3D image to printed model, demonstrating that the anatomical models for cardiovascular, orthopedic, and maxillofacial applications can be printed accurately when using compatible 3D printers." However, it does not specify the sample size for this validation. The provenance of the data (e.g., country of origin, retrospective/prospective) is also not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document. The document describes non-clinical bench testing, not studies involving expert interpretation for ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The document explicitly states: "No clinical tests were conducted to determine substantial equivalence." The device is intended as a software interface and image segmentation system, and pre-operative software, not as an AI-powered diagnostic aid that improves human reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document describes "bench-testing" and "validation for the workflow of going from 3D image to printed model". This implies a standalone assessment of the software's ability to produce accurate models from images. However, it does not provide specific metrics or a detailed study design for this standalone performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "validation for the workflow of going from 3D image to printed model," the ground truth likely refers to the "accurate" measurements or representations in the physical replicas compared to the original imaging data. The method for establishing this "accuracy" (e.g., precise measurements of physical models, comparison to established standards) is not detailed but it's reasonable to infer a comparison against the input image data or engineering specifications, rather than clinical ground truth like pathology.
8. The sample size for the training set
This document describes a medical device software (Simpleware ScanIP Medical) which is a tool for image processing and segmentation, not a machine learning or AI model that typically requires a "training set" in the context of deep learning. Therefore, the concept of a training set for this type of device is not applicable and is not mentioned in the document.
9. How the ground truth for the training set was established
As the device described is not an AI/ML model with a typical "training set," this question is not applicable.
(203 days)
SCANIP; SCANIP: MEDICAL EDITION; SCANIP: MED
ScanIP is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging scanner to an output file. It is also intended as pre-operative software for simulating/evaluating surgical treatment options. ScanIP is not intended to be used for mammography imaging.
ScanIP represents a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging scanner to an output file. ScanIP provides a core image processing interface with several additional modules available to users (+CAD, +FE, and +NURBS), which provide further options for working with image data. +CAD enables the integration of computer-aided design (CAD) drawings such as implants with patient-specific data; +FE allows segmented image data to be exported as computational models for physics-based simulations in other software; +NURBS is designed to allow users to export segmented data as NURBS IGES files to CAD software.
ScanIP is written in C++ and designed using the Microsoft Visual Studio integrated development environment (IDE). Supported operating systems are Windows XP, Windows Vista, Windows 7, and Windows 8; 32- and 64-bit versions of the software are available. The minimum processor requirement is an Intel Core i3 or equivalent; the minimum memory requirement is 4096 MB (4 GB), and an OpenGL-compatible graphics card with 32 MB of RAM is required. The screen resolution of a workstation should be a minimum of 1024 x 768 in high colour (16-bit), and a minimum of 10 GB of disk space is recommended.
The software is required to be able to visualise and process medical images using a range of filters and tools, and can export models as output files. ScanIP meets DICOM standards for the transfer of medical images. The software is also intended for use in the early stages of pre-surgical planning for visualising patient-specific data, taking measurements and obtaining statistics (such as bone density, distances and angles between arteries), and for integrating computer drawings of implants with patient data to evaluate fitness for use. This functionality has applications to implant evaluation and export of models for simulation in other software. Output files can be used in these other applications; ScanIP does not integrate with them directly.
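The summary only asserts DICOM compliance and lists measurement features; as a hedged illustration of what reading DICOM data and deriving an intensity statistic (such as the bone-density figures mentioned above) can look like, here is a sketch using the open-source pydicom library. The file name and the 300 HU bone threshold are assumptions, and this is not ScanIP's importer.

```python
# Illustration using pydicom, not ScanIP's importer; the file path is hypothetical.
import numpy as np
import pydicom

ds = pydicom.dcmread("slice_0001.dcm")            # one CT slice
pixels = ds.pixel_array.astype(np.float64)

# Convert stored values to Hounsfield Units using the DICOM rescale tags.
hu = pixels * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

# A simple intensity statistic over a bone-like region (threshold is an assumption).
bone = hu[hu > 300]
print("mean HU in thresholded bone region:", bone.mean())
print("pixel spacing (mm):", ds.PixelSpacing)
```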
Processed medical images can also be exported as output files to 3D printing processes for the creation of physical models that can be used in pre-surgical planning (inspection of implant fit), and as computational models to other software programs for running simulations (e.g. stress/strain limits in bone, fluid flow through vessels and airways).
ScanIP has FDA clearance to generate 3D models and export these models in a format suitable for 3D printing to be used as physical models for visualization or educational purposes only. This clearance does not cover medical devices manufactured from those output files.
This document primarily describes the regulatory clearance of a software device, ScanIP; it does not contain a typical study performed to meet acceptance criteria with specific performance metrics such as sensitivity, specificity, or reader improvement. The "performance data" section states that "Software Verification and Validation Testing" was conducted, and the conclusions state that "Verification and validation testing of ScanIP, and inclusion of the subject device's Reference Guide and the predicate's Reference Guide supports substantial equivalence based on performance testing and detailed descriptive criteria." This generally refers to internal software testing, not a clinical performance study with human subjects or readers as one might typically expect for evaluating the efficacy of an AI-powered diagnostic tool.
The document focuses on establishing substantial equivalence to a predicate device (Mimics K073468) based on technological characteristics and software verification/validation activities.
Therefore, many of the requested items cannot be extracted directly from the provided text.
Here's an attempt to answer the questions based on the information available:
Acceptance Criteria and Study Details for ScanIP
1. Table of Acceptance Criteria and Reported Device Performance
Based on the document, the "acceptance criteria" appear to be related to demonstrating substantial equivalence through software verification and validation, aligning with the predicate device's functionality, and mitigating potential risks associated with software failure. Specific quantitative performance metrics (e.g., sensitivity, specificity, F-score) are not provided.
| Acceptance Criteria Category | Description from Document | Reported Device Performance/Status |
|---|---|---|
| Technological Equivalence to Predicate | The device should have equivalent technological elements to the predicate device (Mimics K073468), including: visualization, segmentation, processing, and file export of medical images; application of software algorithms, filters, and tools; compatibility with scanner data (MRI, CT, micro-CT); ability to visualize data in 2D and 3D; use of tools to take measurements and record statistics; use of algorithms to create surface meshes (e.g., STL); use of filters for morphological image processing; use of tools for 3D editing (e.g., paint); use of tools for segmenting images (e.g., thresholding); and export of files for FEA, CAD, and 3D printing. | ScanIP is stated to be "substantially equivalent" to Mimics (K073468) based on satisfying these technological characteristics, which the document explicitly lists as "equivalent technological elements." The only difference noted is that the predicate has dedicated surgical planning modules, which the subject device does not; this difference was likely deemed not to impact substantial equivalence for the stated indications. |
| Software Verification and Validation (V&V) | Conducted as recommended by FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices," including system-level tests and validation testing. | "Software verification and validation testing were conducted and documentation was provided." The document concludes that "Verification and validation testing of ScanIP... supports substantial equivalence based on performance testing and detailed descriptive criteria." The software was considered a "Moderate" level of concern, and appropriate steps were taken to ensure mitigation of potential risks (e.g., misinterpreting scanned data leading to minor injury from an incorrect surgical implant design). |
| Risk Mitigation | Mitigation of hazards where a failure of the device could result in minor injury (e.g., misinterpreting scanned data causing an incorrect surgical implant design). | Software documentation, including V&V activities and related performance data, was provided to demonstrate that "appropriate steps have been taken to ensure mitigation of potential risks." |
| DICOM Compliance | Voluntary compliance with the ACR/NEMA Digital Imaging and Communication in Medicine (DICOM) Standard (Version 3.0). | "ScanIP meets DICOM standards for the transfer of medical images." Both the subject and predicate devices are "voluntarily compliant with the ACR/NEMA Digital Imaging and Communication in Medicine (DICOM) Standard (Version 3.0)." |
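The segmentation and morphological-filtering entries in the table above describe generic image-processing operations. The sketch below shows what such post-processing of a threshold segmentation commonly looks like, using scipy.ndimage as a stand-in rather than ScanIP's own filters; the input file and the intensity threshold of 300 are assumptions.

```python
# Stand-in illustration with scipy.ndimage; not ScanIP's filter implementations.
import numpy as np
from scipy import ndimage

volume = np.load("ct_volume.npy")    # hypothetical pre-loaded CT array
mask = volume > 300                  # threshold segmentation (assumed level)

# Morphological post-processing: close small gaps and fill internal holes.
mask = ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))
mask = ndimage.binary_fill_holes(mask)

# Keep only the largest connected component (e.g., the bone of interest).
labels, n = ndimage.label(mask)
if n > 0:
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    mask = labels == (int(np.argmax(sizes)) + 1)
```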
2. Sample size used for the test set and the data provenance
The document does not describe a "test set" in the context of a clinical performance study with patient data. The "performance data" refers to software verification and validation testing. Therefore, details about sample size (e.g., number of cases/patients) and data provenance (country of origin, retrospective/prospective) are not applicable/provided for a clinical test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not applicable/provided as the document does not describe a clinical evaluation with a test set requiring expert ground truth for specific diagnostic outcomes.
4. Adjudication method for the test set
This information is not applicable/provided as there is no described test set requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study was not done and is not described in this document. The device is a software interface and image segmentation system, not an AI-assisted diagnostic tool designed to directly improve human reader performance in a diagnostic task. Its purpose is for image processing, segmentation, measurement, and export for pre-surgical planning or other downstream applications.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
A standalone performance study in the sense of evaluating diagnostic accuracy of an algorithm without human intervention was not conducted or described. The document focuses on the software's functionality and its role as a tool for trained professionals. Its outputs (segmented images, measurements, models) are intended to be used by clinicians who "retain the ultimate responsibility for making a decision."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The concept of "ground truth" used in typical clinical performance studies (e.g., against pathology or clinical outcomes) is not applicable/provided for the software verification and validation described. The "ground truth" for software testing would typically involve expected outputs based on specified inputs and functional requirements, rather than clinical diagnostic ground truth.
8. The sample size for the training set
This document does not describe an AI model that undergoes "training." ScanIP is an image processing and segmentation software; it is not presented as a machine learning or AI-powered model that would require a "training set" in the conventional sense. Therefore, this information is not applicable/provided.
9. How the ground truth for the training set was established
As there is no described "training set" for an AI model, this information is not applicable/provided.