SIS Software is an application intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).
SIS Software uses machine learning and image processing to enhance standard clinical images for the visualization of the subthalamic nucleus ("STN"). The SIS Software supplements the information available through standard clinical methods, providing adjunctive information for use in visualization and planning stereotactic surgical procedures. SIS Software provides a patient-specific, 3D anatomical model of the patient's own brain structures that supplements other clinical information to facilitate visualization in neurosurgical procedures. The version of the software that is the subject of the current submission (Version 3.3.0) can also be employed to co-register a post-operative CT scan with the clinical scan of the same patient from before a surgery (on which the software has already visualized the STN) and to segment in the CT image (where needed), to further assist with visualization.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria and performance data are presented for three main functionalities: STN Visualization, Co-Registration, and Segmentation.
| Functionality | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| STN Visualization | 90% of center of mass distances and surface distances not greater than 2.0mm. Significantly greater than the conservative literature estimate of 20% successful visualizations. | 98.3% of center of mass distances were not greater than 2.0mm (95% CI: 91-100%). 100% of surface distances were not greater than 2.0mm (95% CI: 94-100%). 90% of center of mass distances were below 1.66mm. 90% of surface distances were below 0.63mm. The rate of successful visualizations (98.3%) was significantly greater than 20% (p<0.0001). Dice coefficient was 0.69. |
| Co-Registration | 95% confidence that 90% of registrations will have corresponding reference point distances below 2 mm. | 95% confidence that the error will be below 0.454 mm 90% of the time. (Mean of Maximum Error: 0.242 mm, STD: 0.062 mm). This meets the 2mm criterion. |
| Segmentation (COM) | 95% confidence that 90% of segmentations will have COM distances below 1 mm. | 95% chance that 90% of the cases will be lower than 0.491 mm from the center of mass of the real contact. (Average Mean: 0.30 mm, STD: 0.12 mm). This meets the 1 mm criterion. |
| Segmentation (Orientation) | 95% confidence that 90% of segmentations will have orientation differences below 5 degrees. | 95% chance that 90% of the cases will be lower than 2.486 degrees from the real orientation of the lead. (Average Mean: 1.00 degrees, STD: 0.90 degrees). This meets the 5-degree criterion. |
| Anomaly Detection | Minimize False Negatives; acceptable Sensitivity and Specificity; improved overall visualization success compared to version 1.0.0. | Version 3.3.0 showed improved sensitivity (50.00% vs 0.00% for 1.0.0) and a marginally decreased specificity (89.39% vs 92.31% for 1.0.0). Overall system performance (success with AD) improved from 95.24% (1.0.0) to 98.33% (3.3.0). |
| STN Smoothing Functionality | The smoothed STN visualizations should produce acceptable results for COM, DC, and SD; overall system performance remains in line with the verification criteria for the predicate device. | Testing produced acceptable results for COM, DC, and SD. Significant correlation found between smoothed and non-smoothed STN objects, demonstrating that the overall system performance remains in line with the predicate device's verification criteria. |
2. Sample Size Used for the Test Set and Data Provenance
- STN Visualization Test Set: 68 STNs (from 34 subjects).
  - Data Provenance: Not explicitly stated regarding country of origin. The data was "completely separate from the data set that was used for development" and "none of the 68 STNs were part of the company's database for algorithm development and none were used to optimize or design the company's software." This indicates an independent, held-out test set; whether the data were collected retrospectively or prospectively is not stated.
- Co-Registration Test Set: 5 MR series and 1 CT series of a phantom brain. This suggests a synthetic, controlled test environment rather than patient data.
- Segmentation Test Set: 26 post-surgical CT scans that contained leads, with a total sample size of 45 electrodes.
  - Data Provenance: Not explicitly stated regarding country of origin or whether it was retrospective or prospective patient data, but it involved "post-surgical CT scans."
- Anomaly Detection Test Set: The same 68 cases (68 total STNs; 65 successful/3 failed for v1.0.0 and 66 successful/2 failed for v3.3.0) used for STN Visualization.
  - Data Provenance: Same as STN Visualization.
- STN Smoothing Functionality Test Set: The shapes of the visualized targets from the "verification accuracy testing" were compared. This likely refers to the same 68 STNs from the STN Visualization study.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- STN Visualization: The text mentions "ground truth STNs (manually segmented clinical images superimposed)", but it doesn't specify the number or qualifications of experts who performed these manual segmentations.
- Co-Registration: "6 fiducial points were marked by an expert." The qualification of this expert is not provided.
- Segmentation: "ground truth segmentations were generated by 2 experts." The qualifications of these experts are not provided.
- Anomaly Detection: Ground truth for anomaly detection was defined by whether visualizations were "Inaccurate visualization" or "Accurate visualization," based on the STN visualization success criteria (>2mm vs <=2mm distance relative to ground truth). The establishment of this underlying ground truth (manual segmentation of STNs) is not detailed beyond what's mentioned for STN Visualization.
- STN Smoothing Functionality: Ground truth for accuracy was based on "verification accuracy testing," which likely refers back to the STN visualization ground truth.
4. Adjudication Method for the Test Set
- STN Visualization: Not explicitly stated. The "ground truth STNs (manually segmented clinical images superimposed)" implies a reference standard, but how discrepancies or initial ground truth was agreed upon if multiple experts were involved is not mentioned.
- Co-Registration: A single expert marked points. No adjudication method mentioned.
- Segmentation: "ground truth segmentations were generated by 2 experts." It does not mention an adjudication process if their segmentations differed (e.g., 2+1, 3+1). It's possible they reached consensus, or one might have corrected the other, but this is not stated.
- Anomaly Detection: No (applicable) adjudication as the ground truth was based on quantitative metrics from STN visualization.
- STN Smoothing Functionality: No (applicable) adjudication, as it relies on quantitative comparison to ground truth from STN visualization.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
No MRMC comparative effectiveness study was mentioned. The study focuses on the device's standalone performance in providing aid for visualization and measurement. The claim is that the device provides "adjunctive information" and is an "aid in visualization." No human reader performance data (with or without AI) is provided.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the studies described for STN Visualization, Co-Registration, and Segmentation report the performance of the algorithm itself, without human-in-the-loop interaction for the specific quantitative metrics used. The anomaly detection component also describes the algorithm's performance in identifying anomalies.
7. The Type of Ground Truth Used
- STN Visualization:
  - Expert Consensus/Manual Segmentation: The ground truth for STN visualization was "manually segmented clinical images superimposed" and High Field (7T) MRI. The 7T MRI serves as a high-resolution reference considered superior for STN visualization, and the manual segmentations on these images would form the core of the ground truth.
- Co-Registration:
  - Expert Marking on Phantom: Ground truth was based on fiducial points marked by an expert on a physical phantom.
- Segmentation:
  - Expert Segmentation: Ground truth was established by "2 experts" who generated segmentations of electrodes from CT images and manually aligned 3D components to those segmentations.
- Anomaly Detection:
  - Metric-Based (Derived from STN Visualization GT): Ground truth for anomaly detection was defined by the quantitative "accuracy" of the STN visualization (<=2mm vs >2mm distance to the expert-derived ground truth).
- STN Smoothing Functionality:
  - Metric-Based (Derived from STN Visualization GT): Ground truth for evaluating smoothing was based on "COM, SD and DC" relative to the STN visualization ground truth.
8. The Sample Size for the Training Set
- The document states that the STN visualization validation data set (68 STNs) was "completely separate from the data set that was used for development" and "none were used to optimize or design the company's software."
- Regarding the anomaly detection component, it mentions "two separate commonly used outlier detection machine learning models were trained using the brains from the training set." The specific sample size for this training set is not provided.
- For co-registration, there's no mention of a training set as it appears to be a direct registration process, not a machine learning model.
- For segmentation, it's not explicitly stated if a training set was used for the automated segmentation; the validation focuses on the comparison to expert ground truth.
9. How the Ground Truth for the Training Set Was Established
- For the anomaly detection component, it states the models were "trained using the brains from the training set, from which the same brain geometry characteristics were extracted." It then describes how anomaly scores were combined. However, the method for establishing the ground truth on this training set (i.e., what constituted an "anomaly" vs "non-anomaly" during training) is not detailed in the provided text. It presumably involved similar principles of accurate vs. inaccurate visualizations, but the source and method of that ground truth for training are not specified.
- For any other machine learning components (like the core STN visualization algorithm), the document states the methodology "relies on a reference database of high-resolution brain images (7T MRI) and standard clinical brain images (1.5T or 3T MRI)." The algorithm "uses the 7T images from a database to find regions of interest within the brain (e.g., the STN) on a patient's clinical (1.5 or 3T MRI) image." This implies the 7T MRI data serves as a form of ground truth for training the algorithm to identify STNs on clinical MRI, but the specific process of creating that ground truth from the 7T data (e.g., manual segmentation by experts on 7T) is not detailed.
Surgical Information Sciences, Inc.
% Ms. Janice M. Hogan
Regulatory Counsel
Hogan Lovells US LLP
1735 Market Street, 23rd Floor
Philadelphia, PA 19103
March 19, 2019
Re: K183019
Trade/Device Name: SIS Software version 3.3.0
Regulation Number: 21 CFR 892.2050
Regulation Name: Picture Archiving and Communications System
Regulatory Class: Class II
Product Code: LLZ
Dated: February 15, 2019
Received: February 15, 2019
Dear Ms. Hogan:
We have reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act.

Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database located at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.
If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.
Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part
801); medical device reporting of medical device-related adverse events (21 CFR 803) for devices or postmarketing safety reporting (21 CFR 4, Subpart B) for combination products (see https://www.fda.gov/CombinationProducts/GuidanceRegulatoryInformation/ucm597488.htm); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act); 21 CFR 1000-1050.
Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR Part 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to http://www.fda.gov/MedicalDevices/Safety/ReportaProblem/default.htm.
For comprehensive regulatory information about radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/) and CDRH Learn
(http://www.fda.gov/Training/CDRHLearn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (http://www.fda.gov/DICE) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).
Sincerely,
Michael D. O'Hara
Thalia Mills, Ph.D.
Director
Division of Radiological Health
Office of In Vitro Diagnostics and Radiological Health
Center for Devices and Radiological Health
Enclosure
510(k) Number (if known)
Device Name
SIS Software (version 3.3.0)

Indications for Use (Describe)
SIS Software is an application intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).
Typical users of the SIS Software are medical professionals, including but not limited to surgeons, neurologists and radiologists.
Type of Use (Select one or both, as applicable)
[X] Prescription Use (Part 21 CFR 801 Subpart D)
[ ] Over-The-Counter Use (21 CFR 801 Subpart C)
CONTINUE ON A SEPARATE PAGE IF NEEDED.
This section applies only to requirements of the Paperwork Reduction Act of 1995.
DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.
The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:
Department of Health and Human Services
Food and Drug Administration
Office of Chief Information Officer
Paperwork Reduction Act (PRA) Staff
PRAStaff@fda.hhs.gov
"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."
510(k) SUMMARY
Surgical Information Sciences, Inc.'s SIS Software
Sponsor's Name, Contact Information, and Date Prepared
Surgical Information Sciences, Inc.
50 South 6th Street, Suite 1310
Minneapolis, MN 55402
Contact Person: Ann Quinlan-Smith
Phone: 612-325-0187
E-mail: ann.quinlan.smith@surgicalis.com
Date Prepared: February 15, 2019
Trade Name of Device: SIS Software version 3.3.0
Common or Usual Name/Classification Name: System, Image Processing, Radiological (Product Code: LLZ; 21 C.F.R. 892.2050)
Regulatory Class: Class II
Predicate and Reference Devices
Predicate device: Surgical Information Sciences SIS Software version 1.0 (K162830) Reference device: Merge Healthcare's Merge PACS™ (K173475)
Intended Use / Indications for Use
SIS Software is an application intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).
Typical users of the SIS Software are medical professionals, including but not limited to surgeons, neurologists and radiologists.
Technological Characteristics
SIS Software uses machine learning and image processing to enhance standard clinical images for the visualization of the subthalamic nucleus ("STN"). The SIS Software supplements the information available through standard clinical methods, providing adjunctive information for use in visualization and planning stereotactic surgical procedures. SIS Software provides a patient-specific, 3D anatomical model of the patient's own brain structures that supplements other clinical information to facilitate visualization in neurosurgical procedures. The version of the software that is the subject of the current submission (Version 3.3.0) can also be employed to co-register a post-operative CT scan with the clinical scan of the same patient from before a surgery (on which the software has already visualized the STN) and to segment in the CT image (where needed), to further assist with visualization.
The software makes use of the fact that some structures in the brain are better visualized using high-resolution and high-contrast 7T MRI than via 1.5T or 3T clinical MRI. The methodology relies on a reference database of high-resolution brain images (7T MRI) and standard clinical brain images (1.5T or 3T MRI). The algorithm uses the 7T images from a database to find regions of interest within the brain (e.g., the STN) on a patient's clinical (1.5 or 3T MRI) image.
With regard to the updated functionality to process post-operative CT images, co-registration of the clinical MR and CT images allows alignment of the spatial positioning of the brains, and segmentation of objects (e.g., when an electrode is present) is performed to ensure that the software accurately reflects their proper position.
STN visualization, image co-registration and the optional additional CT segmentation are incorporated in the standard-of-care clinical workflow protocols. Use of the device does not require any additional visualization software or hardware platforms.
The subject and predicate devices rely on the same core technological principles. The only major differences between the two are that version 3.3.0 (the subject device) includes the added optional functionality to process post-operative CT images and an updated user interface. The user interface/labeling has also been enhanced to clarify this optional follow-on process for the clinician.
Performance Data
STN Visualization
Pivotal validation testing of the subject device was completed to confirm performance with device modifications. A set of 68 STNs (from 34 subjects) were scanned with both clinical MRI (1.5T and 3T) and High Field (7T) MRI. None of the 68 STNs were part of the company's database for algorithm development and none were used to optimize or design the company's software. Thus, this validation data set was completely separate from the data set that was used for development. The software development was frozen and labeled before being tested on this validation set.
Three measurements were used to compare the SIS visualization via the subject software and ground truth STNs (manually segmented clinical images superimposed): (1) Center of mass distance; (2) Surface distance; and (3) Dice coefficient values.
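Two of these measurements are standard comparisons between binary segmentation masks. The sketch below is illustrative only (not SIS's implementation): the cubic masks and isotropic 1 mm voxels are hypothetical, and a surface-distance metric would additionally require a distance transform.

```python
import numpy as np

def center_of_mass_distance(a, b, voxel_mm=1.0):
    """Euclidean distance (mm) between centers of mass of two binary masks."""
    com_a = np.argwhere(a).mean(axis=0)
    com_b = np.argwhere(b).mean(axis=0)
    return float(np.linalg.norm(com_a - com_b) * voxel_mm)

def dice_coefficient(a, b):
    """Dice overlap: 2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum()))

# Toy example: two 10x10x10 cubes, one shifted by 2 voxels along one axis.
vol_a = np.zeros((20, 20, 20), dtype=bool)
vol_b = np.zeros((20, 20, 20), dtype=bool)
vol_a[5:15, 5:15, 5:15] = True
vol_b[7:17, 5:15, 5:15] = True

print(center_of_mass_distance(vol_a, vol_b))  # 2.0
print(dice_coefficient(vol_a, vol_b))         # 0.8
```

Note that even a small rigid shift of a compact structure like the STN noticeably reduces the Dice coefficient, which is consistent with the document's remark that a Dice of 0.69 "was expected given the small size of the STN."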
In sum, 90% of the center of mass distances and surface distances were below 1.66mm and 0.63mm, respectively. Specifically, 98.3% of the center of mass distances and 100% of the surface distances were not greater than 2.0mm. Thus, the study met the pre-specified criteria of 90% of center of mass distances and surface distances not greater than 2.0mm. Furthermore, the proportion of visualizations not greater than 2.0mm was conservatively estimated from the literature to be 20%. Therefore, the rate of successful visualizations from SIS Software (98.3% of the center of mass distances not greater than 2.0mm) is significantly greater than the standard of care (p<0.0001). The corresponding two-sided confidence intervals are as follows:
- (a) 90% of the center of mass distances and surface distances were below 1.66mm and 0.63mm, respectively (95% CI: 79.5-96.2%);
- (b) 98.3% of the center of mass distances were not greater than 2.0mm (95% CI: 91-100%);
- (c) 100% of the surface distances were not greater than 2.0mm (95% CI: 94-100%).
In addition, the Dice coefficient in this dataset was 0.69, which was expected given the small size of the STN and substantially similar to the predicate device. In sum, the SIS Software performed as intended and clinical validation data results observed were as expected.
Co-Registration
To ensure that 3D transformation to the CT is accurate, SIS collected 5 MR series and 1 CT series of a phantom brain. For each of the 5 MR series, 6 fiducial points were marked by an expert. Marking the fiducial points allowed SIS to test 30 points of reference. These points were used as reference points in the image series.
If the distance between the fiducial points was smaller than 2 mm, the test passed. This criterion was justified based on SIS' maximum acceptable slice thickness for MRI scans of 2mm. SIS success criteria is to show 95% confidence that 90% of the registrations will have corresponding reference point distances below 2 mm.
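The check described above reduces to measuring Euclidean distances between corresponding fiducial points in the registered images. A minimal sketch with hypothetical coordinates (the point values below are illustrative, not SIS's phantom data):

```python
import math

def fiducial_distances(points_a, points_b):
    """Euclidean distance (mm) between each corresponding fiducial pair."""
    return [math.dist(p, q) for p, q in zip(points_a, points_b)]

# Hypothetical post-registration coordinates (mm) of 6 fiducials in MR and CT space.
mr = [(10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 0.0, 10.0),
      (5.0, 5.0, 0.0), (0.0, 5.0, 5.0), (5.0, 0.0, 5.0)]
ct = [(10.1, 0.0, 0.0), (0.0, 10.2, 0.0), (0.0, 0.0, 9.9),
      (5.0, 5.1, 0.0), (0.0, 5.0, 5.2), (4.9, 0.0, 5.0)]

distances = fiducial_distances(mr, ct)
passed = all(d < 2.0 for d in distances)  # 2 mm acceptance criterion
print(max(distances), passed)
```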
The table below summarizes the test data. For each of the MR images, the 6 distances were recorded. The average of all distances and its standard deviation are detailed in the table below:
| | N | Mean of Maximum Error | STD |
|---|---|---|---|
| Distance | 5 | 0.242 mm | 0.062 mm |
Based on the results from the table above, the tolerance interval was calculated. SIS demonstrated it can register MR images to the CT space. SIS statistics show there is 95% confidence that the error will be below 0.454 mm 90% of the time.
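The reported 0.454 mm figure is consistent with a standard one-sided normal tolerance limit computed from the table's summary statistics. The sketch below uses the tabulated k-factor for n = 5, 90% coverage, 95% confidence (an assumption; the document does not state which table or formula SIS used), and the small difference from 0.454 mm is attributable to rounding of the reported mean and STD.

```python
# One-sided (95% confidence, 90% coverage) normal tolerance upper limit,
# reproduced from the reported co-registration summary statistics.
N = 5
MEAN = 0.242      # mm, reported mean of maximum error
STD = 0.062       # mm, reported standard deviation
K_FACTOR = 3.407  # tabulated one-sided tolerance factor for n=5, p=0.90, conf=0.95

upper_limit = MEAN + K_FACTOR * STD
print(round(upper_limit, 3))  # ~0.453 mm, matching the reported 0.454 mm up to rounding
assert upper_limit < 2.0      # well inside the 2 mm acceptance criterion
```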
Segmentation
In addition to the above testing, to validate the optional segmentation feature and ensure any present leads are accurately represented in the co-registered 3D output, SIS used 26 post-surgical CT scans that contained leads, with a total sample size of 45 electrodes. For each of the CT scans, ground truth segmentations were generated by 2 experts. To generate the ground truth data, the experts used the same set of 3D components (STL files) that are used by SIS Software version 3.3.0.
First, the experts segmented the electrode(s) from each CT image. Second, the 3D components were aligned manually to the segmentation from step one (ground truth). Once the system generated the segmentations of the electrode components, and calculated the location and orientation of these components, the differences between the ground truth and the automated objects were calculated:
- Distance between center of mass (COM) of the electrode tip and contacts of the ground truth and the corresponding automatically segmented objects. If the COM distance was less than 1 mm, the test passed; otherwise it was declared a failure.
- Angle between the orientation of contacts in the ground truth and the corresponding automatically segmented orientation. If the difference between the orientations relative to the ground truth electrode shaft was less than 5 degrees, the test passed; otherwise it was declared a failure.
These acceptance criteria of 1 mm and 5 degrees were justified based on SIS' maximum acceptable slice thickness of the image, which is 1 mm. SIS success criteria for the tests is to show 95% confidence that 90% of the segmentations will have center of mass distances below 1 mm and orientation differences below 5 degrees.
The table below summarizes the test data:

| | N | Average Mean | STD |
|---|---|---|---|
| COM | 45 | 0.30 mm | 0.12 mm |
| Orientation | 45 | 1.00 degrees | 0.90 degrees |
SIS uses a tolerance interval formula to calculate the upper tolerance limit for the 2 measurements:
- For the center of mass distance, SIS shows there is a 95% chance that 90% of the cases will be lower than 0.491 mm from the center of mass of the real contact.
- For the difference in orientation, SIS shows there is a 95% chance that 90% of the cases will be lower than 2.486 degrees from the real orientation of the lead.
In both cases the criteria of 1 mm and 5 degrees are met with a high level of confidence.
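The two pass/fail tests above reduce to a Euclidean distance and an angle between direction vectors. A minimal sketch (the vectors and COM distance are hypothetical, not SIS's data):

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two lead-orientation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def segmentation_passes(com_dist_mm, angle_diff_deg):
    """Apply the 1 mm COM and 5 degree orientation acceptance criteria."""
    return com_dist_mm < 1.0 and angle_diff_deg < 5.0

gt_axis = (0.0, 0.0, 1.0)      # hypothetical ground-truth shaft direction
auto_axis = (0.0, 0.02, 1.0)   # hypothetical automated estimate, slightly tilted
angle = angle_deg(gt_axis, auto_axis)
print(round(angle, 2), segmentation_passes(0.30, angle))  # small tilt, well under 5 degrees
```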
Modified Anomaly Detection
The functionality of this Anomaly Detection component is the same as the original SIS Software version 1.0.0, and while the implementation of that functionality has been modified, the validation testing methodology is identical to what was used in the original version and the results were similarly acceptable.
Briefly, two separate commonly used outlier detection machine learning models were trained using the brains from the training set, from which the same brain geometry characteristics were extracted as described below:

- One of these models is an elliptic envelope, which defines a volume in feature space based on the distributions of feature values from the training set: visualizations with characteristics (features) that fall outside the envelope will be considered anomalies.
- The second model is an isolation forest, which contains a population of decision trees based on random partitioning of the training set.

The scores from each of these models are combined to yield an overall anomaly score, with a threshold separating anomalous from non-anomalous classifications. The anomaly detection in SIS 1.0.0 used a single random forest classifier.
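A minimal sketch of this two-model scheme using scikit-learn's EllipticEnvelope and IsolationForest (illustrative only: the synthetic feature set, contamination rate, and score-averaging step are assumptions, not SIS's implementation):

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in for the "brain geometry characteristics" extracted from the training set.
train_features = rng.normal(size=(200, 4))

envelope = EllipticEnvelope(contamination=0.05).fit(train_features)
forest = IsolationForest(random_state=0).fit(train_features)

def combined_anomaly_score(x):
    """Average the two models' decision scores; lower means more anomalous."""
    x = np.atleast_2d(x)
    return 0.5 * (envelope.decision_function(x) + forest.decision_function(x))

typical_score = combined_anomaly_score(np.zeros((1, 4)))[0]
extreme_score = combined_anomaly_score(np.full((1, 4), 8.0))[0]
print(typical_score > extreme_score)  # the far-out point scores as more anomalous
```

A threshold on the combined score would then separate anomalous from non-anomalous visualizations, mirroring the classification step described above.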
During system verification and validation (V&V) testing, there are 4 possible outcomes:
- True Positive (TP) - Inaccurate visualization that was classified as an anomaly.
- True Negative (TN) - Accurate visualization that was classified as a non-anomaly.
- False Positive (FP) - Accurate visualization that was classified as an anomaly.
- False Negative (FN) - Inaccurate visualization that was classified as a non-anomaly.
SIS' approach for improving the anomaly detection component was to further minimize the number of False Negatives, which would represent inaccurate STN predictions and be reported out to the physician user (i.e., not be flagged as an anomaly). As such, the Sensitivity and Specificity of the anomaly detection component, as well as the overall visualization success of the system, are the criteria used to demonstrate the acceptable performance of this component.
These data demonstrate that more true anomalies were identified with the Version 3.3.0, such that sensitivity was improved, and specificity was only marginally decreased. The tables below demonstrate that the overall performance of version 3.3.0 is improved by the anomaly detection component compared to the original functionality of version 1.0.0.
Table 1: Anomaly Detection Analysis

| Version | Total cases | Successful visualizations (<2 mm) | Failed visualizations (>2 mm) | TP | TN | FP | FN | Sensitivity | Specificity |
|---|---|---|---|---|---|---|---|---|---|
| 1.0.0 | 68 | 65 | 3 | 0 | 60 | 5 | 3 | 0.00% | 92.31% |
| 3.3.0 | 68 | 66 | 2 | 1 | 59 | 7 | 1 | 50.00% | 89.39% |
Table 2: Overall System Performance

| Version | Success without AD | Success with AD |
|---|---|---|
| 1.0.0 | 95.59% | 95.24% |
| 3.3.0 | 97.06% | 98.33% |
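The sensitivity and specificity figures in Table 1 follow directly from the TP/TN/FP/FN counts, since sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP):

```python
def sensitivity(tp, fn):
    """Fraction of inaccurate visualizations correctly flagged as anomalies."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of accurate visualizations correctly passed as non-anomalies."""
    return tn / (tn + fp)

# Counts from Table 1.
print(f"v1.0.0: sensitivity {sensitivity(0, 3):.2%}, specificity {specificity(60, 5):.2%}")
print(f"v3.3.0: sensitivity {sensitivity(1, 1):.2%}, specificity {specificity(59, 7):.2%}")
# v1.0.0: sensitivity 0.00%, specificity 92.31%
# v3.3.0: sensitivity 50.00%, specificity 89.39%
```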
STN Smoothing Functionality
SIS validated the smoothed STN visualizations that were produced by the system, based on Center of Mass (COM), Dice Coefficient (DC) and Surface Distance (SD). Testing produced acceptable results.
In addition, SIS also analyzed the results of the difference between the smoothed STN visualization and the non-smoothed STN visualizations to compare the effect of this change at a unit level. The shapes of the visualized targets from the verification accuracy testing were compared using COM, SD and DC. The results demonstrated significant correlation between the smoothed and non-smoothed STN objects. These results, in addition to the overall system accuracy, demonstrate that the overall system performance remains in line with the verification criteria for the predicate device.
Substantial Equivalence
Both the subject and predicate versions of the SIS Software are applications used for visualization, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 2D or 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. In addition, the SIS Software, like the identified predicate and reference devices, uses proprietary algorithms to generate 3D segmented anatomical models from patients' MRI scans. The subject device additionally segments post-operative CT scans (when needed) of a patient whose pre-operative MR has already been processed by the software, and enables co-registration of the two images. These additional functionalities serve the same fundamental purpose as those carried over from the predicate: to assist the clinician in surgical case management. Finally, the new features of version 3.3.0 as compared to the version 1.0 predicate device are supported by other cleared PACS systems, which perform image registration/fusion including CT and MR, such as the reference device (K173475), as well as by validation testing. The table below provides a summary comparison between the SIS Software and the predicate and reference devices.
SIS Software Technological Characteristics Comparison Table

| Feature | SIS Software version 3.3.0 (subject) | SIS Software version 1.0 (K162830) | Merge PACS (K173475) |
|---|---|---|---|
| Allows for importing of digital imaging sets | Yes | Yes | Yes |
| Uses proprietary software algorithm for 3D image processing | Yes | Yes | Yes |
| Allows for review and analysis of data in various 2D and 3D presentation formats | Yes | Yes | Yes |
| Performs image fusion of datasets using automated or manual image matching technique | Yes | Yes | Yes |
| Segments structures in images with manual and automated tools and converts them into 3D objects for display | Yes | Yes | Unclear from publicly available information; but these features are already supported by the predicate |
| Creates hybrid datasets by filling in segmented regions slice-by-slice on anatomical datasets | Yes | Yes | |
| Results can be uploaded to planning system | Yes | Yes | Yes |
| Segmentation of CT scan to identify structures in relation to those visualized on MR | Yes | No | Processes images to enable cross-registration or cross-referencing |
| Cross-registration of two multi-modality images and creation of 3D (fused) model | Yes | No | Yes |
| Uploading and viewing images via web-based portal or directly via separately cleared PACS | Yes | No | Yes |
| Anomaly Detection | Yes | Yes | No |
| STN Smoothing Functionality | Yes; supported by testing demonstrating new feature does not alter device output compared to predicate device | No | No |
Conclusions
The updated SIS Software (version 3.3.0) is as safe and effective as the version previously cleared in K162830 (predicate device). The subject device has the same intended use and indications for use as the predicate, and very similar technological characteristics and principles of operation. The minor differences are supported by clearance of the reference device (K173475), as well as by performance validation testing demonstrating that the subject device is as safe and effective as the predicate device and performs as intended. Thus, the minor technological differences between SIS Software (version 3.3.0) and its predicate device raise no new issues of safety or effectiveness, and the updated SIS Software (version 3.3.0) is substantially equivalent.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).