Search Results
Found 3 results
510(k) Data Aggregation
(26 days)
MIM software is used by trained medical professionals as a tool to aid in the evaluation and information management of digital medical image modalities including, but not limited to, CT, MR, CR, DX, MG, US, SPECT, PET, and XA as supported by ACR/NEMA DICOM 3.0. MIM assists in the following indications:
- Receive, transmit, store, retrieve, display, print, and process medical images and DICOM objects.
- Create, display, and print reports from medical images.
- Registration, fusion display, and review of medical images for diagnosis, treatment evaluation, and treatment planning.
- Evaluation of cardiac left ventricular function and perfusion, including left ventricular end-diastolic volume, end-systolic volume, and ejection fraction (see the first sketch after this list).
- Localization and definition of objects such as tumors and normal tissues in medical images.
- Creation, transformation, and modification of contours for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.
- Quantitative and statistical analysis of PET/SPECT brain scans by comparison to other registered PET/SPECT brain scans.
- Planning and evaluation of permanent implant brachytherapy procedures (not including radioactive microspheres).
- Calculating the absorbed radiation dose resulting from administering a radionuclide (see the second sketch after this list).
- Assisting with the planning and evaluation of ablation procedures by providing visualization and analysis, including energy zone visualization through the placement of virtual ablation devices validated for inclusion in MIM-Ablation. The software is not intended to predict specific ablation zone volumes or predict ablation success.
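For context on the cardiac indication above, ejection fraction follows directly from the two listed volumes. A minimal sketch of the standard relationship (the function name and units are illustrative, not MIM's API):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic volume (EDV)
    and end-systolic volume (ESV): EF = (EDV - ESV) / EDV * 100."""
    if edv_ml <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Example: EDV = 120 mL, ESV = 50 mL  ->  EF = 58.3%
print(ejection_fraction(120.0, 50.0))
```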
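The radionuclide dosimetry indication is conventionally expressed with the MIRD schema, in which the absorbed dose to a target region is a sum over source regions of time-integrated activity multiplied by an S-value. A hedged sketch under that assumption (all names and numbers are illustrative; nothing here is taken from the submission):

```python
# MIRD schema: D(target) = sum over source regions of A~(source) * S(target <- source),
# where A~ is the time-integrated activity and S the dose factor per unit A~.
def absorbed_dose_mgy(time_integrated_activity: dict[str, float],
                      s_values: dict[tuple[str, str], float],
                      target: str) -> float:
    """Absorbed dose to `target` in mGy, given A~ in MBq*s and S in mGy/(MBq*s)."""
    return sum(a_tilde * s_values[(target, source)]
               for source, a_tilde in time_integrated_activity.items())

# Illustrative numbers only (not from the submission):
a_tilde = {"liver": 1.2e6, "kidneys": 3.0e5}                      # MBq*s
s = {("liver", "liver"): 2.5e-5, ("liver", "kidneys"): 1.1e-6}    # mGy/(MBq*s)
print(absorbed_dose_mgy(a_tilde, s, "liver"))                      # ~30.3 mGy
```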
When using the device clinically within the United States, the user should only use FDA-approved radiopharmaceuticals. If used with unapproved radiopharmaceuticals, this device should be used for research purposes only.
Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretation. Images that are printed to film must be printed using an FDA-approved printer for the diagnosis of digital mammography images. Mammographic images must be viewed on a display system that has been cleared by the FDA for the diagnosis of digital mammography images. The software is not to be used for mammography CAD.
MIM – Symphony HDR Fusion extends the existing features and capabilities of MIM – Monte Carlo Dosimetry (K232862) by offering enhanced capabilities to better support the High Dose Rate (HDR) brachytherapy workflow. It is designed for use in medical imaging and operates on Windows, Mac, and Linux computer systems. The intended use and indications for use of MIM – Symphony HDR Fusion are unchanged from the predicate device MIM – Monte Carlo Dosimetry.
MIM – Symphony HDR Fusion is a standalone software application within the MIM software suite that uses the existing functionality of the predicate device, applied now in the context of a High Dose Rate (HDR) brachytherapy clinical workflow.
MIM – Symphony HDR Fusion leverages the foundational functionalities that were introduced in the predicate device to support Low Dose Rate (LDR) brachytherapy clinical workflows. These features are extended with necessary enhancements and optimizations to optimally support the HDR workflow. Specifically, the subject device MIM – Symphony HDR Fusion provides the following core processes:
- Reslicing and Predictive Fusion: presents data to inform the user's placement of medical devices (in this case, brachytherapy applicators). MIM receives and displays 2D images from a trans-rectal ultrasound (TRUS) probe and overlays contours from the registered pre-op image volume. The user is able to modify the position of the TRUS probe in the patient to match the visible pre-op contours. The user may also manually adjust the registration using software tools.
- Ultrasound Capture: MIM receives an image feed from a US machine and position information from a stepper that holds the TRUS probe. The 2D TRUS images are processed into 3D image volumes, enabling their registration, fusion display, and storage as DICOM objects (see the first sketch after this list).
- Catheter Digitization: provides tools for the user to localize and define HDR brachytherapy applicators (catheters) in medical images.
- Registration Chaining: allows the user to transfer information (contours) from the pre-op image (typically MR) through to the final planning image (US or CT). This is achieved using existing rigid registration tools from the predicate device to sequentially register each new image to its immediate predecessor in the clinical workflow (see the second sketch after this list).
- Export Data: the end of the MIM Symphony HDR Fusion workflow is to export the final planning image and user-defined structures, including organs and brachytherapy applicator models, into DICOM files for use in third-party radiation therapy treatment planning systems. Structured reports may also be created.
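One plausible reading of the Ultrasound Capture step is that stepper-tracked 2D frames are stacked at known offsets along the probe axis to form a volume. A minimal sketch under that assumption, using only numpy (the uniform-spacing assumption and all names are illustrative, not MIM's implementation):

```python
import numpy as np

def stack_trus_frames(frames: list[np.ndarray], step_mm: float):
    """Stack 2D TRUS frames acquired at uniform stepper increments into a 3D
    volume; returns the volume and its (z, y, x) voxel spacing in mm.
    Assumes identical frame shapes and 1 mm in-plane pixel spacing."""
    volume = np.stack(frames, axis=0)      # shape: (n_frames, rows, cols)
    return volume, (step_mm, 1.0, 1.0)

# Example with synthetic frames captured at 5 mm stepper increments:
frames = [np.zeros((256, 256), dtype=np.uint8) for _ in range(20)]
vol, spacing_mm = stack_trus_frames(frames, step_mm=5.0)
print(vol.shape, spacing_mm)               # (20, 256, 256) (5.0, 1.0, 1.0)
```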
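Registration Chaining amounts to composing the pairwise rigid registrations so a contour defined on the pre-op image can be mapped to the planning image. A sketch with 4×4 homogeneous matrices (illustrative only; MIM's actual transform handling is not described in the document):

```python
import numpy as np

def chain_rigid_transforms(transforms: list[np.ndarray]) -> np.ndarray:
    """Compose pairwise 4x4 rigid transforms T1, ..., Tn (each mapping an image
    to its successor) into one transform from the first image to the last:
    T_total = Tn @ ... @ T2 @ T1."""
    total = np.eye(4)
    for t in transforms:
        total = t @ total
    return total

# Example: 3 mm x-translation (pre-op MR -> US), then 2 mm z-translation (US -> CT).
t1, t2 = np.eye(4), np.eye(4)
t1[0, 3], t2[2, 3] = 3.0, 2.0
point = np.array([0.0, 0.0, 0.0, 1.0])             # a contour point in MR space
print(chain_rigid_transforms([t1, t2]) @ point)    # -> [3. 0. 2. 1.] in CT space
```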
Here's a breakdown of the acceptance criteria and study information for MIM – Symphony HDR Fusion, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Feature | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Reslicing and Predictive Fusion | Accurately reslices images and predicts information for medical device placement and registration. | Met "acceptance criteria defined for the verification and validation tests." |
| Ultrasound Capture | Receives and processes 2D ultrasound images into 3D image volumes for storage and display. | Met "acceptance criteria defined for the verification and validation tests." |
| Catheter Digitization | Allows users to accurately localize and define HDR brachytherapy applicators (catheters) in medical images. | Met "acceptance criteria defined for the verification and validation tests." |
| Registration Chaining | Successfully transfers information (contours) between co-registered medical images using existing rigid registration tools to facilitate radiation therapy treatment. | Met "acceptance criteria defined for the verification and validation tests." |
| Data Export | Accurately exports final planning images and user-defined structures (organs, brachytherapy applicator models) into DICOM files for third-party systems and generates structured reports. | Met "acceptance criteria defined for the verification and validation tests." |
| Overall Software Safety & Effectiveness | Safe and effective for clinical use. | "the entire software product was determined to be safe and effective for clinical use." |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not specify an exact numerical sample size for the test set. It mentions that "each of the five core features" underwent testing.
- Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Unspecified, but referred to as "trained medical professionals with extensive experience in HDR brachytherapy."
- Qualifications of Experts: "trained medical professionals with extensive experience in HDR brachytherapy."
4. Adjudication Method for the Test Set
- The document does not describe a specific adjudication method (e.g., 2+1, 3+1). It states that "external validation by trained medical professionals" was conducted.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No. A multi-reader multi-case (MRMC) comparative effectiveness study measuring the effect size of human reader improvement with AI assistance versus without was not reported. The validation involved external medical professionals, but its purpose was to validate the software's functionality, not to compare reader performance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Yes, in effect. The validation included "internal verification by MIM's own qualified testers" and "external validation by trained medical professionals." The external validation implies human interaction to assess the software's utility in a workflow, while the internal verification likely involved standalone testing of the algorithms comprising each feature. The description of the software as a "tool to aid in evaluation" implies a human-in-the-loop context for clinical use, but the individual feature testing suggests standalone performance evaluation.
7. The Type of Ground Truth Used
- The document implies an expert consensus/determination based on the involvement of "trained medical professionals with extensive experience in HDR brachytherapy" for external validation. For internal verification, "MIM's own qualified testers" would have established the ground truth based on predefined specifications and expected outputs for each feature.
8. The Sample Size for the Training Set
- The document does not specify a sample size for a training set. This is consistent with MIM – Symphony HDR Fusion being an extension of existing features from a predicate device (MIM – Monte Carlo Dosimetry) that leverages "foundational functionalities." The description focuses on verification and validation of the new and extended functionalities rather than on the development of entirely new machine learning algorithms requiring a distinct training set.
9. How the Ground Truth for the Training Set Was Established
- As a training set is not explicitly mentioned, the method for establishing its ground truth is also not described.
(58 days)
The Imbio Segmentation Editing Tool Software is used by trained medical professionals as a tool to modify the contours of segmentation masks produced by Imbio algorithms or to manually create segmentation mask contours. The Segmentation Editing Tool can provide further support to the users of Imbio's algorithms.
Imbio Segmentation Editing Tool (SET) Software is a segmentation editing tool designed to allow users to optimize segmentations calculated by Imbio's fully automated suite of algorithms (each algorithm is a separate Imbio program and either has been or will be submitted for regulatory approval independently). Imbio is building a suite of medical image post-processing applications that run automatically after data transfer off the medical imaging scanner. Automatic image segmentation is often an essential step in Imbio's analyses. To date, the automatic segmentation algorithms used in Imbio's applications have been robust; however, segmentation failures do occur. The purpose of the Segmentation Editing Tool is to provide customers with a tool to locally correct poor segmentations. Additionally, if the Imbio automatic segmentation fails such that it is unable to produce a result, this tool can be used to semi-manually draw the segmentation required for analysis. SET reads in the anatomical images used in an automatic segmentation algorithm and the results of the automated segmentation algorithm (if available). The user is then able to locally correct insufficiencies in the segmentation result, or create a segmentation mask from scratch. The finalized segmentation mask is then pushed back to Imbio's Core Computing Platform and the job is re-processed.
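As a rough illustration of the kind of local correction described, a brush-style edit can be modeled as painting a value into a spherical neighborhood of a mask. This sketch is a generic mock-up, not Imbio's implementation:

```python
import numpy as np

def paint_sphere(mask: np.ndarray, center: tuple, radius: float, value: int) -> np.ndarray:
    """Set every voxel within `radius` of `center` to `value` -- the kind of
    local brush edit a segmentation-editing tool applies to a mask."""
    zz, yy, xx = np.ogrid[:mask.shape[0], :mask.shape[1], :mask.shape[2]]
    dist2 = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2
    edited = mask.copy()
    edited[dist2 <= radius**2] = value
    return edited

mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask = paint_sphere(mask, center=(32, 32, 32), radius=6.0, value=1)  # add a region
mask = paint_sphere(mask, center=(32, 32, 38), radius=3.0, value=0)  # erase spill-over
print(int(mask.sum()))
```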
The provided FDA 510(k) summary for the "Imbio Segmentation Editing Tool Software" (K180129) does not contain details about specific acceptance criteria or a study that proves the device meets such criteria.
Instead, it focuses on demonstrating substantial equivalence to a predicate device (MIM 5.2 Brachy K103576). The document mentions "Non-clinical testing was done to show validity of SET software" and "Design validation was performed using the Imbio Segmentation Editing Tool Software in actual and simulated use settings," but it does not provide the results of these tests, specific acceptance criteria, or detailed methodologies.
Here's a breakdown based on the information provided and not provided in the document:
1. A table of acceptance criteria and the reported device performance:
- Not provided. The document does not list any specific quantitative acceptance criteria (e.g., Dice score, mean surface distance, sensitivity, specificity) or present device performance metrics against such criteria.
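For reference, the Dice score named above as an example criterion measures overlap between a candidate mask A and a reference mask B as 2|A∩B| / (|A| + |B|). A minimal sketch:

```python
import numpy as np

def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two masks sharing half their voxels -> Dice = 0.5
a = np.zeros((10, 10), dtype=bool); a[:, :6] = True
b = np.zeros((10, 10), dtype=bool); b[:, 3:9] = True
print(dice_score(a, b))
```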
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- Not provided. The document states "Design validation was performed using the Imbio Segmentation Editing Tool Software in actual and simulated use settings," but it does not specify the number of cases (sample size) used for these tests, nor the origin or nature (retrospective/prospective) of the data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Not provided. Since specific test sets and ground truth establishment are not detailed, this information is absent. The device is a segmentation editing tool meant to be used by "trained medical professionals," suggesting a human-in-the-loop context; it is not a fully automated algorithm that produces its own segmentations to be validated against a pre-established ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not provided. This information is typically relevant for studies where multiple readers determine ground truth or interpret results, which is not described for this device's validation.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- Not provided. The document states, "This technology is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device."
- While "Usability testing was completed for this product to ensure proper use of the product by intended users," this is not equivalent to an MRMC comparative effectiveness study measuring improved human performance with AI assistance. The device is a tool to edit segmentations, not an AI that provides initial interpretations.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Not explicitly stated as a primary validation method for this specific 510(k). The device is described as "a segmentation editing tool designed to allow users to optimize segmentations calculated by Imbio's fully-automated suite of algorithms." It's an editing tool for other Imbio algorithms' outputs. Therefore, its performance is inherently linked to human interaction. The validation focuses on the tool's functionality for editing, not on a standalone algorithmic diagnostic performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not provided for the "non-clinical testing" or "design validation." Since the device is an editing tool, the "ground truth" in operational use would be the expert's final, corrected segmentation. How the accuracy of the editing tool itself was measured against a truth standard is not detailed.
8. The sample size for the training set:
- Not applicable / Not provided. The Imbio Segmentation Editing Tool Software is described as a software tool for editing segmentation masks, not an AI algorithm that learns from a training set to produce segmentations itself. The document mentions "Imbio's fully-automated suite of algorithms" which do perform segmentation, but this 510(k) is for the editing tool, not for those underlying algorithms. Therefore, a training set for the editing tool is not relevant in the same way it would be for a segmentation algorithm.
9. How the ground truth for the training set was established:
- Not applicable / Not provided. As mentioned above, the editing tool itself does not have a training set in the context of learning to perform segmentation.
(134 days)
Keosys Medical Imaging Suite (KSWVWR) is intended to be used by trained medical professionals including, but not limited to, radiologists, nuclear medicine physicians, and physicists.
Keosys Medical Imaging Suite is a software application intended to aid in the diagnosis and evaluation of medical image data. Although this device allows the visualization of mammography images, it is not intended as a tool for primary diagnosis in mammography.
Keosys Medical Imaging Suite can be used to display, process, temporarily store, and print 2D and 3D multimodal DICOM medical image data, as well as to create and print reports from it. The imaging data can be Computed Tomography (CT), Magnetic Resonance (MR), X-ray Radiography (CR, DX, XRF, MG), Nuclear Medicine (NM) including planar imaging (Static, Whole body, Dynamic, Gated) and tomographic imaging (SPECT, Gated SPECT), Positron Emission Tomography (PT), or Ultrasound (US).
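A viewer that accepts these modalities typically dispatches on the DICOM Modality attribute (0008,0060). A sketch using the pydicom library (the supported set and routing logic are assumptions for illustration, not Keosys code):

```python
import pydicom

# Illustrative subset of DICOM modality codes a viewer might accept:
SUPPORTED = {"CT", "MR", "CR", "DX", "MG", "NM", "PT", "US"}

def check_modality(path: str) -> str:
    """Read a DICOM header and report whether its Modality (0008,0060)
    is in the supported set."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    modality = ds.get("Modality", "")
    return modality if modality in SUPPORTED else f"unsupported: {modality!r}"
```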
Keosys Medical Imaging Suite provides tools like rulers, markers, or regions of interest (e.g., it can be used in an oncology clinical workflow for tumor burden assessment or therapeutic response evaluation).
It is the user's responsibility to check that the ambient luminosity conditions, the image compression ratio, and the interpretation monitor specifications are consistent with clinical diagnostic use of the data.
Keosys' Advanced Medical Imaging Software Suite (aka Viewer, aka KSWVWR) is a multimodality diagnostic workstation for visualization and 3D post-processing of radiological and nuclear medicine images. It includes dedicated applications for the therapeutic response evaluation process in a multi-vendor, multi-modal, and multi-time-point context. The solution also includes the latest recommendations for SUV calculation.
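The body-weight SUV referenced here normalizes measured activity concentration by the injected dose per gram of body weight, with the dose decay-corrected to scan time. A hedged sketch of that common formulation (not Keosys's implementation; names and values are illustrative):

```python
import math

def suv_bw(activity_bq_per_ml: float, injected_dose_bq: float,
           body_weight_kg: float, delay_s: float, half_life_s: float) -> float:
    """Body-weight SUV: tissue activity concentration divided by the
    decay-corrected injected dose per gram of body weight
    (assumes ~1 g/mL tissue density, so Bq/mL vs Bq/g cancel)."""
    decayed_dose = injected_dose_bq * math.exp(-math.log(2) * delay_s / half_life_s)
    return activity_bq_per_ml / (decayed_dose / (body_weight_kg * 1000.0))

# Example: F-18 (half-life ~6586 s), 70 kg patient, 370 MBq injected,
# imaged 3600 s post-injection, voxel reads 5000 Bq/mL -> SUV ~ 1.4
print(suv_bw(5000.0, 370e6, 70.0, 3600.0, 6586.0))
```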
Here's an analysis of the provided text to extract the acceptance criteria and details about the study, as requested.
Note: The provided document is a 510(k) summary for the "Advanced Medical Imaging Software Suite (KSWVWR)". It outlines the device's indications for use and compares it to predicate devices. However, it does not contain detailed acceptance criteria, specific study results, or information about sample sizes, ground truth establishment methods, or expert qualifications for a performance study. The document primarily focuses on demonstrating substantial equivalence to previously cleared devices rather than providing a standalone performance study report. Therefore, many of your requested points cannot be directly addressed from this text.
1. Table of Acceptance Criteria and Reported Device Performance
As noted, the document does not explicitly state quantitative acceptance criteria or detailed reported device performance in a study. The focus is on functionality and equivalence.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Ability to display, process, temporarily store, print, and create reports from 2D and 3D multimodal DICOM medical image data. | Stated as a core function and intention of the device. |
| Support for various imaging modalities (CT, MR, X-ray Radiography, NM, PET, US). | Stated as compatible with these modalities. |
| Provision of tools like rulers, markers, or regions of interest. | Stated as a feature (e.g., for tumor burden assessment). |
| Software functionality and performance as described in specifications. | "Performance and functional testing are an integral part of Keosys's software development process." (No specific results provided.) |
| Substantial equivalence to predicate devices regarding intended use, diagnostic aid, display/manipulation/fusion tools, and multi-modality support. | Claimed substantial equivalence based on a comparison of technical characteristics. |
2. Sample size used for the test set and the data provenance
The document does not provide details on a specific "test set" with sample sizes or data provenance (e.g., country of origin, retrospective/prospective) for a performance study. The testing mentioned is part of the general software development process.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not provided in the document.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
The document does not describe an MRMC comparative effectiveness study where human readers used the AI. The device is a "viewer" and "software application intended to aid in diagnostic and evaluation." It provides tools but is not explicitly an "AI" in the sense of providing automated diagnostic suggestions or classifications to be compared with human performance with/without its assistance. It enables the display and manipulation of images and provides measurement tools.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The document describes the device as a "software application intended to aid in diagnostic and evaluation of medical image data" and that it "provides tools like rulers, markers or region of interests." It is a diagnostic workstation for visualization and 3D post-processing, and its use is by "trained medical professionals." This implies a human-in-the-loop device. There is no mention of a standalone algorithm performance study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
This information is not provided in the document.
8. The sample size for the training set
The document does not describe any machine learning or AI components that would historically require a "training set" in the context of supervised learning, nor does it mention a sample size for such.
9. How the ground truth for the training set was established
Not applicable, as no training set is described.