510(k) Data Aggregation
(56 days)
Microsoft Corp.
Microsoft Radiomics App v1.0 is a software-only medical device intended for use by trained radiation oncologists, dosimetrists and physicists to derive optimal organ and tumor contours for input to radiation treatment planning. Supported image modalities are Computed Tomography and Magnetic Resonance. Radiomics App assists in the following scenarios:
· Load, save and display of medical images and contours for treatment evaluation and treatment planning.
· Creation, transformation, and modification of contours for applications including, but not limited to: transferring contours to radiotherapy treatment planning systems, aiding adaptive therapy, and archiving contours for patient follow-up.
· Localization and definition of both solid tumors and healthy anatomical structures.
· Fusion display of compatible images for treatment planning.
· Three-dimensional rendering of medical images and the segmented contours.
Images reviewed using the Radiomics App software should not be used for primary image interpretations. Radiomics App is not for use with digital mammography.
Microsoft Radiomics App v1.0 is a software-only medical device intended for use by trained radiation oncologists, dosimetrists and medical physicists for radiation treatment planning.
Radiomics App stems from more than eight years of research in computerized medical image analysis, computer vision, and machine learning. It applies well-tested, state-of-the-art algorithms for the assisted delineation of anatomical structures of interest in three-dimensional clinical radiological scans.
Radiomics App works on computed tomography (CT) and magnetic resonance (MR) scans, and is designed to contour/delineate both healthy anatomical structures as well as lesions such as solid tumors.
Radiomics App integrates into the clinical data network of radiation therapy treatment centers, receiving data from imaging devices such as CT and MR scanners. The purpose of the tool is to assist the expert user in producing segmentations (three-dimensional contours) of anatomical structures, for both solid tumors and healthy tissue structures. The following segmentation tools are provided:
- Assisted Contouring. This module allows for the manual, user-guided segmentation of structures of interest in both CT and MR images.
- Machine-learning based contouring. This module uses machine-learning (ML) algorithms to provide an initial segmentation of certain structures of interest automatically. The user has the option to accept this initial segmentation or edit and refine it.
- Contour refinement. This module allows the user to edit and improve segmentations created by either the machine-learning or the assisted contouring algorithms.
These segmentations are then exported back into the clinical data network, and subsequently utilized in a radiotherapy treatment planning system to generate a treatment plan for a patient.
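The accept-or-refine loop described above can be illustrated with a small sketch (a hypothetical illustration, not Microsoft's implementation): an automatic binary segmentation mask is merged with user "add" and "erase" brush strokes, all represented as boolean arrays of the same shape.

```python
import numpy as np

def refine_contour(auto_mask: np.ndarray,
                   add_strokes: np.ndarray,
                   erase_strokes: np.ndarray) -> np.ndarray:
    """Merge an automatic segmentation mask with user brush edits.

    All inputs are boolean arrays of the same shape. Voxels painted
    with the 'add' brush are included; voxels painted with the
    'erase' brush are removed (erase wins on conflict).
    """
    refined = auto_mask | add_strokes
    refined &= ~erase_strokes
    return refined
```

The refined mask would then be exported (e.g., as a DICOM RT Structure Set) to the treatment planning system, as the summary describes.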
The provided document is a 510(k) Summary for the Microsoft Radiomics App v1.0. It describes the device, its intended use, and compares it to a predicate device (MIM 5.2). However, it does not contain detailed acceptance criteria or a specific study proving the device meets those criteria with performance metrics, sample sizes, expert qualifications, or ground truth details.
The document primarily focuses on software verification and validation, asserting that the software performs in accordance with specifications and that its performance is comparable to the predicate device, but without providing quantitative results from validation of specific features.
Therefore, for many of your requested items, the information is explicitly stated as "Not applicable" or is not provided in detail.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not provided in this document. The document states that "Validation testing of the following functions of the Radiomics App demonstrated that the software meets user needs and intended uses and to support substantial equivalence," and lists functions like measurements, volumetric rendering, and contouring. However, it does not specify quantitative acceptance criteria (e.g., "accuracy > X%", "Dice score > Y%") nor does it report specific performance metrics against such criteria.
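For reference, a Dice similarity coefficient of the kind such a criterion would use can be computed from two binary masks as follows (a generic sketch of the standard definition, not taken from the submission):

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |pred ∩ ref| / (|pred| + |ref|); 1.0 means perfect overlap.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

An acceptance criterion such as "Dice score > 0.8" would then be checked per structure against expert reference contours; no such threshold or result appears in this document.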
2. Sample size used for the test set and the data provenance
This information is not provided in this document. The document mentions "Validation Testing" but does not detail the number of cases or images used for these tests, nor the origin (country, retrospective/prospective) of any data used.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in this document. While the device is intended for use by "trained radiation oncologists, dosimetrists and physicists," there is no mention of experts being used to establish ground truth for testing purposes.
4. Adjudication method for the test set
This information is not provided in this document.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The document explicitly states under "Clinical Study": "Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This device is an AI-assisted contouring tool, but the submission does not include a study on its comparative effectiveness with human readers.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document mentions "Automatic Contouring - Validation Test" as one of the validation tests performed. This implies some level of standalone algorithm performance was assessed. However, no specific performance metrics or acceptance criteria for this standalone performance are provided. The device description also states: "Machine-learning based contouring. This module uses machine learning algorithms (ML) to provide an initial segmentation of certain structures of interest automatically. The user has the option to accept this initial segmentation or edit and refine it." This confirms a standalone component, but performance details are absent.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
This information is not explicitly detailed for the validation testing. Given the context of contouring for radiation treatment planning, it's highly probable that expert-generated contours would serve as ground truth, but the document does not confirm this or specify the method of ground truth establishment.
8. The sample size for the training set
This information is not provided in this document. The document mentions "Machine-learning based contouring" and that "Radiomics App stems from more than eight years of research in computerized medical image analysis, computer vision and machine learning," implying a training phase, but the sample size used for training is not disclosed.
9. How the ground truth for the training set was established
This information is not provided in this document.
(146 days)
Microsoft Corp.
The Microsoft® Amalga™ UIS Image Processing Module (IPM) is used in conjunction with Microsoft® Amalga™ UIS and Microsoft® Amalga™ UIS Medical Imaging Module to receive medical images from acquisition devices and imaging systems adhering to DICOM protocol. Medical images received from volumetric or planar imaging modalities are processed to derive certain information. The information thus derived is transmitted using the DICOM or HTTP protocol to other devices supporting these standard protocols.
The IPM enables physicians and other healthcare providers to rapidly navigate across images by selecting tagged organs and review them through multi-planar reconstruction (MPR). IPM is not intended to be used for primary diagnosis.
This device is not intended to be used for mammography.
The Microsoft® Amalga™ UIS Image Processing Module (IPM) is used in conjunction with Microsoft® Amalga™ UIS and Microsoft® Amalga™ UIS Medical Imaging Module (MIM). These two software products (Amalga™ UIS and Amalga™ MIM) are Class 1 devices that perform consolidation of disparate health information (Amalga™ UIS) and medical image communication (Amalga™ MIM). Amalga™ IPM enables healthcare providers to rapidly navigate across images by selecting tagged organs and to review these images using multi-planar reconstruction (MPR).
Amalga™ IPM delivers a basic MPR rendering platform that constructs a three-dimensional view from a set of CT, MR, or PET images. Amalga™ IPM also enables organ label based navigation through DICOM CT scans both in 2D image display and MPR viewing mode.
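At its core, multi-planar reconstruction re-slices a volume stacked from axial DICOM slices along the other two anatomical axes. A minimal sketch (assuming a NumPy volume indexed as (slice, row, column); function and variable names are illustrative):

```python
import numpy as np

def mpr_views(volume: np.ndarray, z: int, y: int, x: int):
    """Extract the three orthogonal MPR planes through voxel (z, y, x).

    `volume` is a 3-D array stacked from axial slices, indexed
    (slice, row, column).
    """
    axial = volume[z, :, :]     # original acquisition plane
    coronal = volume[:, y, :]   # front-to-back plane
    sagittal = volume[:, :, x]  # left-to-right plane
    return axial, coronal, sagittal
```

A production MPR viewer would additionally resample for anisotropic voxel spacing and apply window/level, which this sketch omits.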
Here’s a summary of the acceptance criteria and study details for the Microsoft® Amalga™ UIS Image Processing Module:
1. Table of Acceptance Criteria and Reported Device Performance
| Feature/Metric | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Accuracy of image tagging algorithm | Exceeding threshold of 81% | Ranged between 84% and 99% for 21 organs |
| Multiplanar Reconstruction (MPR) function | Operates properly | Confirmed to operate properly |
| Navigation using semantic tagging of organs | Operates properly | Confirmed to operate properly |
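The pass/fail logic implied by the first criterion (every organ's tagging accuracy must exceed the 81% threshold) can be sketched as follows; the organ names and values in the test are illustrative, not the submission's data:

```python
def all_organs_pass(per_organ_accuracy: dict, threshold: float = 0.81) -> bool:
    """True if every organ's tagging accuracy exceeds the threshold."""
    return all(acc > threshold for acc in per_organ_accuracy.values())
```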
2. Sample Size Used for the Test Set and Data Provenance
The exact sample size (number of images or patients) for the test set is not explicitly stated. The document mentions "human images from a variety of databases."
- Data Provenance: The data came from "a variety of databases" comprising patients with "a wide variety of medical conditions and body shapes." Scans exhibited "large differences in image cropping, resolution, scanner type and use of contrast agents." The country of origin is not specified, and it is stated as retrospective data (from existing databases).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: The document refers to "physician identified" boundaries used for comparison, but the number of physicians involved is not specified.
- Qualifications of Experts: The experts are referred to as "physicians." Their specific specialties (e.g., radiologists) or years of experience are not provided.
4. Adjudication Method for the Test Set
The adjudication method used to establish ground truth is not specified. It mentions "physician identified" boundaries as the basis for comparison with the software's performance, but the process for resolving discrepancies among physicians or establishing a consensus is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The study focused on the standalone performance of the algorithm for image tagging.
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone study was conducted. The "image tagging algorithm" was validated for its accuracy, sensitivity, specificity, and precision in identifying organ boundaries. The reported performance metrics (accuracy, sensitivity, specificity, precision) were determined for the algorithm itself.
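The four reported metric types follow the standard confusion-matrix definitions; a generic sketch (the submission does not describe its actual computation, and the counts here are illustrative):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard metrics from confusion-matrix counts.

    tp/fp/tn/fn = true/false positives and negatives, e.g. voxels
    classified as inside/outside a physician-identified organ boundary.
    """
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall / true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "precision": tp / (tp + fp),     # positive predictive value
    }
```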
7. Type of Ground Truth Used
The type of ground truth used for the test set was expert consensus / expert identification. Specifically, "physician identified" boundaries were used as the reference against which the software's identified boundaries were compared.
8. Sample Size for the Training Set
The sample size for the training set is not explicitly stated. The document only mentions using "human images from a variety of databases" for validation. It doesn't differentiate between training and validation/test sets in terms of sample size.
9. How Ground Truth for the Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. It only refers to "physician identified" boundaries for the validation/test phase. It's implied that the training data would also have had some form of expert-derived ground truth, but the method is not detailed.