Search Results
Found 6 results
510(k) Data Aggregation
(132 days)
Simplicit90Y is a standalone software device that is used by trained medical professionals as a tool to aid in evaluation and information management of digital medical images.
Simplicit90Y supports the reading and display of a range of DICOM compliant imaging and related formats including but not limited to CT, PT, NM, SPECT, MR, SC, RTSS. Simplicit90Y enables the saving of sessions in a proprietary format as well as the export of formats including CSV and PDF files.
Simplicit90Y is indicated, as an accessory to TheraSphere®, to provide pre-treatment dosimetry planning support including Lung Shunt Fraction estimation (based on planar scintigraphy) and liver single-compartment MIRD schema dosimetry, in accordance with TheraSphere® labelling. Simplicit90Y provides tools to create, transform, and modify contours/Regions of Interest for calculation of Lung Shunt Fraction and Perfused Volume. Simplicit90Y includes features to aid in TheraSphere® dose vial selection, dose vial ordering and creation of customizable reports.
Simplicit90Y is indicated for post-treatment dosimetry and evaluation following Yttrium-90 (Y-90) microsphere treatment. Simplicit90Y provides tools to create, transform, and modify contours/Regions of Interest for the user to define objects in medical image volumes to support TheraSphere® post-Y-90 treatment calculation. The objects include, but are not limited to, tumors and normal tissues, and liver volumes.
Simplicit90Y is indicated for registration, fusion display and review of medical images, allowing medical professionals to incorporate images such as CT, MRI, PET, CBCT and SPECT in TheraSphere® Yttrium-90 (Y-90) microspheres pre-treatment planning and post-Y-90 treatment evaluation.
For post-Yttrium-90 (Y-90) treatment, Simplicit90Y should only be used for the retrospective determination of dose and should not be used to prospectively calculate dose or for the case where there is a need for retreatment using Y-90 microspheres.
Simplicit90Y is a software device which provides features and tools for use in pre-treatment dosimetry planning of TheraSphere® Y-90 microspheres treatment and post-treatment evaluation of Y-90 microspheres treatment, and operates on Windows computer systems.
Simplicit90Y pre-treatment dosimetry planning features include Lung Shunt Fraction estimation (based on planar scintigraphy) and liver single-compartment MIRD schema dosimetry in accordance with TheraSphere® labelling. Simplicit90Y additionally provides tools to support TheraSphere® dose vial selection and dose vial ordering, and includes the creation of customizable reports.
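As an aside, the single-compartment MIRD calculation referenced here has a well-known form in the TheraSphere literature. The sketch below is purely illustrative and not the device's actual implementation; it assumes the commonly cited ≈49.67 J/GBq total energy released per GBq of Y-90, a geometric-mean LSF from anterior/posterior planar ROI counts, and an optional residual-activity fraction:

```python
import math

def lung_shunt_fraction(lung_ant, lung_post, liver_ant, liver_post):
    """Estimate LSF from planar scintigraphy counts using the
    geometric mean of anterior/posterior ROI counts."""
    gm_lung = math.sqrt(lung_ant * lung_post)
    gm_liver = math.sqrt(liver_ant * liver_post)
    return gm_lung / (gm_lung + gm_liver)

def mird_single_compartment_dose(activity_gbq, liver_mass_kg,
                                 lsf=0.0, residual_fraction=0.0):
    """Mean absorbed dose (Gy) to the perfused liver compartment.
    49.67 J/GBq is the commonly cited total energy per GBq of Y-90."""
    absorbed_j = 49.67 * activity_gbq * (1.0 - lsf) * (1.0 - residual_fraction)
    return absorbed_j / liver_mass_kg  # J/kg == Gy
```

With, say, 2 GBq delivered to a 1.5 kg perfused volume at a 10% lung shunt, this yields roughly 60 Gy; actual vial selection follows the TheraSphere® labelling, not this sketch.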
After administration of Y-90 microspheres, Simplicit90Y provides post-treatment dosimetry evaluation with multi-compartment and voxel-wise MIRD techniques, applying the Local Deposition Model with scaling for known injected activity, to assist the clinician in assessing treatment efficacy and quality assurance, including assessment of absorbed dose to structures such as liver, lung and tumors.
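The Local Deposition Model mentioned above is commonly described as assigning each voxel a dose proportional to its image counts, with the image total scaled to match the known injected activity. A minimal sketch under those assumptions (again not the device's actual code; the 1.05 g/mL default tissue density is an illustrative assumption):

```python
import numpy as np

def ldm_voxel_dose(counts, injected_activity_gbq, voxel_volume_ml,
                   tissue_density_g_per_ml=1.05):
    """Voxel-wise absorbed dose (Gy) via the Local Deposition Model:
    Y-90 energy is assumed deposited locally within each voxel, and
    counts are scaled so the image total matches the injected activity."""
    counts = np.asarray(counts, dtype=float)
    total_energy_j = 49.67 * injected_activity_gbq
    energy_per_voxel = total_energy_j * counts / counts.sum()
    voxel_mass_kg = voxel_volume_ml * tissue_density_g_per_ml / 1000.0
    return energy_per_voxel / voxel_mass_kg  # J/kg == Gy
```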
Simplicit90Y provides 2D and 3D image display and advanced dosimetry analysis tools, including isodose contour line plans and dose volume histograms.
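A cumulative dose-volume histogram of the kind mentioned reports, for each dose level, the fraction of a structure's volume receiving at least that dose. An illustrative sketch (the binning is arbitrary here):

```python
import numpy as np

def cumulative_dvh(voxel_doses_gy, dose_bins_gy):
    """Cumulative DVH: for each dose level d, the fraction of the
    structure's voxels receiving >= d Gy."""
    doses = np.asarray(voxel_doses_gy, dtype=float)
    return np.array([(doses >= d).mean() for d in dose_bins_gy])
```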
Simplicit90Y provides tools for multi-modal image fusion using rigid and deformable registration, capable of manual, semi-automated and fully automated operation. Simplicit90Y includes evaluation tools for assessment of registration quality.
Simplicit90Y provides semi-automated and automated tools for segmentation of Regions/Volumes of Interest on multi-modal images.
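Semi-automated segmentation tools vary by vendor, and the document does not say which algorithms Simplicit90Y uses. Purely as an illustration of one common approach (thresholding plus region growing from a user-supplied seed point):

```python
import numpy as np

def threshold_roi(image, threshold, seed):
    """Illustrative semi-automated 2D ROI: threshold the image, then
    grow a region from the seed over 4-connected above-threshold pixels."""
    img = np.asarray(image, dtype=float)
    above = img >= threshold
    mask = np.zeros_like(above)
    if not above[seed]:
        return mask
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < img.shape[0] and 0 <= c < img.shape[1]):
            continue
        if mask[r, c] or not above[r, c]:
            continue
        mask[r, c] = True
        stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask
```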
Simplicit90Y supports the reading, rendering and display of a range of DICOM compliant imaging and related formats including but not limited to CT, PT, NM, SC and RTSS. Simplicit90Y enables the saving of sessions in a proprietary format as well as the export of RTSS, CSV and PDF files.
Simplicit90Y should not be used to change a treatment plan after treatment has been delivered with Yttrium-90 (Y-90) microsphere implants.
The provided text does not contain detailed acceptance criteria or a specific study proving the device meets those criteria with numerical results. It focuses on regulatory approval based on substantial equivalence to a predicate device.
However, based on the information provided, here's a breakdown of what can be inferred and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not explicitly provided in the document. The text states: "The results of performance, functional and algorithmic testing demonstrate that Simplicit90Y meets the user needs and requirements of the device, which are demonstrated to be substantially equivalent to those of the listed predicate device." This is a general statement of compliance, not a table of specific criteria and corresponding performance metrics.
2. Sample Size Used for the Test Set and Data Provenance
This information is not explicitly provided. The document makes general statements about "performance, functional and algorithmic testing" but does not detail the size or nature of the test sets used.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not explicitly provided. The document mentions general validation and verification but does not detail the process of establishing ground truth with experts.
4. Adjudication Method for the Test Set
This information is not explicitly provided.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with vs. without AI Assistance
This information is not explicitly provided. The document describes Simplicit90Y as a "standalone software device" and "a tool to aid in evaluation and information management of digital medical images," but it does not mention MRMC studies comparing human readers with and without AI assistance.
6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
Yes, the device is described as a "standalone software device." The document implies that "performance, functional and algorithmic testing" was conducted for the device itself.
7. The Type of Ground Truth Used
This information is not explicitly provided. While the device aids in dosimetry planning and evaluation, the method for establishing "ground truth" for the testing mentioned generally is not detailed.
8. The Sample Size for the Training Set
This information is not explicitly provided. The document does not discuss the machine learning aspects of the software, and therefore, does not mention training sets.
9. How the Ground Truth for the Training Set was Established
This information is not explicitly provided.
Summary of what is present in the document:
The provided text is a 510(k) summary for the Simplicit90Y device. The core argument for its acceptance is substantial equivalence to an existing predicate device (MIM – Y90 Dosimetry, K172218) and several reference devices (Mirada RT, Mirada RTx, Mirada XD).
The study that proves the device meets (implicit) acceptance criteria is described as:
- Testing: "Simplicit90Y is validated and verified against its user needs and intended use by the successful execution of planned performance, functional and algorithmic testing included in this submission. The results of performance, functional and algorithmic testing demonstrate that Simplicit90Y meets the user needs and requirements of the device, which are demonstrated to be substantially equivalent to those of the listed predicate device."
- Compliance: "Verification and Validation for Simplicit90Y has been carried out in compliance with the requirements of CFR 21 Part 820 and in adherence to the DICOM standard."
- Conclusion: "In conclusion, performance testing demonstrates that Simplicit90Y is substantially equivalent to, and performs at least as safely and effectively as the listed predicate device. Simplicit90Y meets requirements for safety and effectiveness and does not introduce any new potential safety risks."
Essentially, the "study" is a set of "performance, functional and algorithmic testing" designed to show that Simplicit90Y performs similarly to its predicate device for its intended use, particularly for:
- Pre-treatment dosimetry planning (Lung Shunt Fraction, liver MIRD schema)
- Post-treatment dosimetry and evaluation (multi-compartment and voxel-wise MIRD techniques)
- Image registration, fusion, and review
- Contouring/Segmentation for regions of interest.
The document highlights that the proposed device has a narrower scope of indications compared to the predicate, focusing specifically on Y-90 dosimetry. This narrower scope, coupled with similar technology, is used to argue that it does not raise new safety or effectiveness issues.
(26 days)
Workflow Box is a software system designed to allow users to route DICOM-compliant data to and from automated processing components. Supported modalities include CT, MR and RTSTRUCT.
Workflow Box includes processing components for automatically contouring imaging data using deformable image registration to support atlas based contouring of the same patient and machine learning based contouring.
Workflow Box is a data routing and image processing tool which automatically applies contours to data which is sent to one or more of the included image processing workflows. Contours generated by Workflow Box may be used as an input to clinical workflows including, but not limited to, radiation therapy treatment planning.
Workflow Box must be used in conjunction with appropriate software to review and edit results generated automatically by Workflow Box components, for example image visualization software must be used to facilitate the review and edit of contours generated by Workflow Box component applications.
Workflow Box is intended to be used by trained medical professionals.
Workflow Box is not intended to automatically detect lesions.
Workflow Box is a software application that enables the routing of image data and structures to automatic image processing workflows, including atlas based contouring, image registration based re-contouring and machine learning based contouring.
Workflow Box data routing and contouring workflows support CT, MR and RTSTRUCT image data and structures. Workflow Box supports the routing of data to and from DICOM nodes within a hospital network.
Once data is routed to the auto contouring workflows there is no user interaction required and no user interface for visualizing image data. The configuration of workflows and data routing rules are managed via an administration interface.
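The administration interface and rule format are not described in detail. Purely as an illustration of how modality-based routing rules of this kind might be expressed (the workflow names and rule schema below are hypothetical, not Workflow Box's actual configuration):

```python
def route(dataset_tags, rules):
    """Return the names of workflows whose rule conditions all match
    the incoming dataset's DICOM tags (illustrative routing model)."""
    matched = []
    for workflow, conditions in rules.items():
        if all(dataset_tags.get(tag) == value for tag, value in conditions.items()):
            matched.append(workflow)
    return matched

# Hypothetical rule set: route CT data to atlas contouring, MR to ML contouring.
rules = {
    "atlas_contouring_ct": {"Modality": "CT"},
    "ml_contouring_mr": {"Modality": "MR"},
}
```

An incoming CT series would then match only the first workflow, and unsupported modalities would match none.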
Workflow Box must be used in conjunction with appropriate software to review and edit results generated automatically by Workflow Box components. Image visualization software, such as a treatment planning system, must be used to facilitate the review and edit of contours generated by Workflow Box component applications.
The provided text is a 510(k) Summary of Safety and Effectiveness for Mirada Medical Ltd.'s "Workflow Box" device. It describes the device, its intended use, and compares it to a predicate device (Mirada RTx). However, it does not contain the specific details about acceptance criteria, device performance, study design, ground truth establishment, or sample sizes as requested in the prompt.
The document states that "The results of performance, functional and algorithmic testing demonstrate that Workflow Box meets the user needs and requirements of the device," but it does not present these results or the criteria for acceptance.
Therefore, I cannot populate the table or answer most of the questions because the information is not present in the provided text.
Here's what I can extract and what is missing:
Information Present:
- Device Name: Workflow Box
- Predicate Device: Mirada RTx (K130393)
- Modality: CT, MR, RTSTRUCT
- Functionality: Data routing, automatic contouring using deformable image registration for atlas-based contouring, and machine learning-based contouring.
- Operating System: Microsoft Windows 10 (64-bit) and Microsoft Windows Server 2016.
- Validation Statement: "Workflow Box is validated and verified against its user needs and intended use by the successful execution of planned performance, functional and algorithmic testing included in this submission. The results of performance, functional and algorithmic testing demonstrate that Workflow Box meets the user needs and requirements of the device, which are demonstrated to be substantially equivalent to those of the listed predicate device."
- Safety Statement: "Workflow Box meets requirements for safety and effectiveness and does not introduce any new potential safety risks."
Missing Information (and thus cannot create the table or answer the specific questions):
- Acceptance Criteria (quantifiable metrics): The document states testing was done, but no specific performance targets or thresholds are listed.
- Reported Device Performance: No actual performance metrics (e.g., Dice similarity coefficient, mean distance to agreement, sensitivity, specificity) are reported.
- Sample Size (Test Set): Not mentioned.
- Data Provenance (Test Set): Not mentioned (e.g., country, retrospective/prospective).
- Number of Experts for Ground Truth (Test Set): Not mentioned.
- Qualifications of Experts: Not mentioned.
- Adjudication Method (Test Set): Not mentioned.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: Not mentioned as being performed or its results.
- Effect size of human readers with/without AI assistance: Not mentioned.
- Standalone Performance Study: The document indirectly implies standalone performance by stating the device "automatically applies contours" and "processing components for automatically contouring." However, no specific study details are provided.
- Type of Ground Truth: Not explicitly stated how the "ground truth" for contouring was established (e.g., expert consensus, pathology).
- Sample Size (Training Set): Not mentioned.
- Ground Truth for Training Set Establishment: Not mentioned.
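For reference, the Dice similarity coefficient named above as an example contouring metric is conventionally computed as twice the overlap divided by the sum of the two mask volumes; a minimal sketch:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```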
Conclusion:
Based on the provided text, a comprehensive answer to your request cannot be given as the detailed study findings, acceptance criteria, and specific performance metrics are not included in this 510(k) summary. The document asserts that testing was performed and demonstrates substantial equivalence, but it does not provide the granular data you are seeking.
(246 days)
LMS is indicated for use as a magnetic resonance diagnostic device software application for non-invasive liver evaluation that enables the generation, display and review of 2D magnetic resonance medical image data and pixel maps for MR relaxation times.
LMS is designed to utilize DICOM 3.0 compliant magnetic resonance image datasets, acquired from Siemens MAGNETOM Skyra MR Scanners, to display the internal structure of the abdomen including the liver. Other physical parameters derived from the images may also be produced.
LMS provides a number of quantification tools such as rulers and region of interest to be used for the assessment of regions of an image to support existing clinical workflows.
These images and the physical parameters derived from the images, when interpreted by a trained physician, yield information that may assist in diagnosis.
LiverMultiScan (LMS) is a standalone software application for displaying 2D Magnetic Resonance medical image data acquired from Siemens MAGNETOM Skyra MR Scanners. LMS runs on a workstation with color monitor, keyboard and mouse.
LMS is designed to allow the review of DICOM 3.0 compliant datasets stored on the workstation and the user may also create, display, print, store and distribute reports resulting from interpretation of the datasets.
LMS allows the display and comparison of combinations of magnetic resonance images and provides a number of tools for the quantification of magnetic resonance images, including the determination of triglyceride fat fraction in the liver.
LMS provides a number of tools such as rulers and circular region of interest to be used for the assessment of regions of an image to support a clinical workflow.
LMS allows users to create relaxometry parameter maps of the abdomen which can be used by clinicians to help determine different tissue characteristics to support a clinical workflow. Examples of such workflows include, but are not limited to, the evaluation of the presence or absence of liver fat.
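Relaxometry maps such as T2* are conventionally derived by fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2*) across echo times. A log-linear least-squares sketch of that conventional fit (illustrative only; the document does not describe LMS's actual fitting method):

```python
import numpy as np

def fit_t2star(echo_times_ms, signals):
    """Estimate (T2* in ms, S0) by log-linear least-squares fit of
    the mono-exponential decay S(TE) = S0 * exp(-TE / T2*)."""
    te = np.asarray(echo_times_ms, dtype=float)
    s = np.asarray(signals, dtype=float)
    slope, intercept = np.polyfit(te, np.log(s), 1)
    return -1.0 / slope, np.exp(intercept)
```

Applied per pixel across a multi-echo series, this produces the kind of parameter map described; production implementations typically use nonlinear fits with noise-floor handling.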
LiverMultiScan (LMS) is intended to be used by trained healthcare professionals including, but not limited to, radiologists, gastroenterologists, radiographers and physicists.
LiverMultiScan is an aid to diagnosis. When interpreted by a trained physician, the results provide information, which may be used as an input into existing clinical procedures and diagnostic workflows.
LiverMultiScan offers:
- Advanced visualization of MR data
- Processing of MR data to quantify tissue characteristics, including MR relaxivity constants such as T2*, T1, cT1 and liver fat percentage
- Circular region-of-interest statistics
- Snapshots of images to include in a report
- Reports including region statistics, snapshot images and user-entered text
- Export of snapshot images and reports to storage
- Integration with Mirada DBx, a software module that maintains a local temporary cache of DICOM data and can interact with PACS, from which it can receive data; Mirada DBx is a medical device data system (MDDS, product code OUG, regulation number 880.6310) used for DICOM connectivity with other systems
- Ability to send data from Mirada DBx to PACS or other DICOM nodes for archive and distribution
The provided documents (FDA 510(k) summary and letters for LiverMultiScan) describe the device and its intended use, and generally state that it has been validated and verified. They assert that LiverMultiScan is substantially equivalent to the predicate device, Siemens syngo MR E11A software, for both safety and effectiveness.
However, the provided text does not include specific acceptance criteria, detailed study results, or quantitative performance metrics that would allow for a comprehensive breakdown as requested. The document focuses on regulatory compliance and comparison to a predicate device, rather than providing the granular technical study details.
Here's a breakdown of what can be extracted and what is missing:
1. A table of acceptance criteria and the reported device performance
- Missing. The document generally states that "performance, functional and algorithmic testing demonstrate that LiverMultiScan meets the user needs and requirements of the device," and "performs at least as safely and effectively as the listed predicate device." However, no specific acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or error rates) or reported device performance values are provided.
2. Sample size used for the test set and the data provenance
- Missing. The documents mention "validated with volunteer and phantom scans" for performance but do not specify the sample size for any test set or the provenance (country of origin, retrospective/prospective nature) of the data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Missing. The documents mention that images and parameters should be "interpreted by a trained physician" but do not specify the number or qualifications of experts used to establish ground truth for testing.
4. Adjudication method for the test set
- Missing. No adjudication method (e.g., 2+1, 3+1, none) for a test set is described.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and the effect size
- Missing. The document does not describe a multi-reader, multi-case comparative effectiveness study, nor does it quantify any effect size of AI assistance on human readers. The comparison is primarily at the device level, concluding substantial equivalence based on intended use and technical characteristics.
6. If a standalone (algorithm only without human-in-the-loop performance) was done
- Implied, but details are missing. LiverMultiScan is described as a "standalone software application." The validation mentions "algorithmic testing," suggesting standalone performance testing, but no specific results or study design for this are provided. The "intended use" explicitly states that the results, "when interpreted by a trained physician, yield information that may assist in diagnosis," indicating that the device is intended to be used with human-in-the-loop.
7. The type of ground truth used
- Partially available, but vague. The document mentions "validated with volunteer and phantom scans." This implies that physical phantoms with known properties were used for part of the validation, and "volunteer" scans suggest potentially healthy subjects or subjects with characterized conditions. However, the exact nature of the "ground truth" (e.g., expert consensus, pathology, outcome data) for the volunteer scans or for clinical validation is not specified.
8. The sample size for the training set
- Missing. The documents do not provide any information about the training set size, or even explicitly state that a machine learning model requiring a training set is part of the device (though "algorithmic testing" might imply this).
9. How the ground truth for the training set was established
- Missing. As the training set size and existence are not described, the method for establishing its ground truth is also not mentioned.
Summary of what is available from the document:
- Device Name: LiverMultiScan (LMS)
- Intended Use: Non-invasive liver evaluation by generating, displaying, and reviewing 2D MR medical image data and pixel maps for MR relaxation times from Siemens MAGNETOM Skyra MR Scanners. Provides quantification tools (rulers, ROI) and physical parameters (like triglyceride fat fraction, T1, cT1, T2* mapping) for interpretation by a trained physician to assist in diagnosis.
- Regulatory Class: Class II (892.1000) Product Code: LNH
- Predicate Device: Software syngo MR E11A for the MAGNETOM systems Aera/Skyra (Siemens AG, 510(k) K141977)
- Basis for Equivalence: Substantial equivalence based on similar intended use, technological characteristics (e.g., utilizing DICOM 3.0 compliant MR datasets, supporting multi-slice MR data, providing quantification tools like ROI measurements, generating reports), and performance validated with volunteer and phantom scans.
- Standards Met: IEC 62304, DICOM 3.0
- Compatibility: Compatible with data from Siemens Skyra 3T MR scanners and Microsoft Windows.
- Validation Statement: "LiverMultiScan is validated and verified against its user needs and intended use by the successful execution of planned performance, functional and algorithmic testing included in this submission."
The provided document serves as a regulatory submission focused on demonstrating substantial equivalence rather than a detailed technical study report. Therefore, it lacks the specific quantitative data points typically found in clinical validation studies.
(33 days)
RTx is intended to be used by trained medical professionals including, but not limited to, radiologists, nuclear medicine physicians, radiation oncologists, dosimetrists and physicists.
RTx is a software application intended to display and visualize 2D & 3D multi-modal medical image data. The user may process, render, review, store, print and distribute DICOM 3.0 compliant datasets within the system and/or across computer networks. Supported modalities include static and gated CT, PET, MR, SPECT and planar NM. The user may also create, display, print, store and distribute reports resulting from interpretation of the datasets.
RTx allows the user to register combinations of anatomical and functional images and display them with fused and non-fused displays to facilitate the comparison of image data by the user. The result of the registration operation can assist the user in assessing changes in image data, either within or between examinations and aims to help the user obtain a better understanding of the combined information that would otherwise have to be visually compared disjointedly.
RTx provides a number of tools such as rulers and region of interests, which are intended to be used for the assessment of regions of an image to support a clinical workflow. Examples of such workflows include, but are not limited to, the evaluation of the presence or absence of lesions, determination of treatment response and follow-up.
RTx supports the loading and saving of DICOM RT objects and allows the user to define, import, display, transform, store and export such objects including regions of interest structures and dose volumes to radiation therapy planning systems. RTx allows the user to transform regions of interest associated with a particular imaging dataset to another, supporting atlas-based contouring and rapid re-contouring of the same patient.
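Transforming regions of interest from one dataset to another, as described, amounts to mapping contour points through the registration transform; for the rigid or affine case this is a homogeneous-coordinates matrix multiply. An illustrative sketch (not RTx's implementation):

```python
import numpy as np

def transform_contour(points_xyz, matrix_4x4):
    """Map contour points (N x 3, in mm) from one image's coordinate
    frame to another via a 4x4 homogeneous rigid/affine transform."""
    pts = np.asarray(points_xyz, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
    mapped = homog @ np.asarray(matrix_4x4, dtype=float).T
    return mapped[:, :3]
```

Deformable re-contouring replaces the single matrix with a per-voxel displacement field, but the point-mapping idea is the same.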
RTx is a software application for displaying and visualizing 2D & 3D multi-modal medical image data such as static and gated CT, PET, MR, SPECT and planar NM. RTx runs on a workstation with color monitor(s), keyboard, mouse and optional CD-RW or may be deployed on a server. RTx is designed to enable rendering, reviewing, storing, printing and distribution of DICOM 3.0 compliant datasets and reports within the system and/or across computer networks.
RTx enables automatic and manual registration of combinations of anatomical and functional images that can be displayed with fused and non-fused displays to facilitate the comparison of image data by the user.
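Automatic multi-modal registration of this kind typically optimizes an intensity-based similarity metric; mutual information is the standard choice for multi-modal pairs, though the document does not state which metric RTx uses. It can be sketched from a joint intensity histogram:

```python
import numpy as np

def mutual_information(image_a, image_b, bins=32):
    """Mutual information between two images via their joint intensity
    histogram -- the similarity metric commonly maximized in automatic
    multi-modal rigid registration."""
    joint, _, _ = np.histogram2d(np.ravel(image_a), np.ravel(image_b), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration loop would repeatedly resample the moving image under a candidate transform and keep the transform that maximizes this value.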
RTx provides a number of tools such as rulers and semi-automated and manual regions of interest for the assessment of regions of an image to support a clinical workflow. RTx supports the loading and saving of DICOM RT objects and allows the user to define, import, display, transform and store and export such objects including regions of interest structures and dose volumes to radiation therapy planning systems.
Here's an analysis of the provided text regarding the acceptance criteria and supporting study for the RTx device:
The provided document describes the RTx device as a software application for displaying and visualizing 2D & 3D multi-modal medical image data, with tools for image registration and assessment. However, it does not contain specific, quantitative acceptance criteria for performance metrics such as accuracy, sensitivity, or specificity, nor does it detail a specific study with a test set, ground truth, or expert involvement to prove these criteria.
Instead, the submission for K130393 relies on a general statement of verification and validation. It asserts that RTx meets user needs, requirements, and demonstrates substantial equivalence to predicate devices.
Here's a breakdown of the requested information based on the provided text, highlighting what is present and what is absent:
Acceptance Criteria and Study Details for RTx (K130393)
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Specific quantitative performance metrics (e.g., accuracy, sensitivity, specificity for image registration or lesion detection) | NOT provided. The document states only: "The results of performance, functional and algorithmic testing demonstrate that RTx meets the user needs and requirements of the device, which are demonstrated to be substantially equivalent to those of the listed predicate devices." |
| Meeting user needs and requirements | "RTx meets the user needs and requirements of the device..." |
| Substantial equivalence to predicate devices (K102687, K091373, K093982, K081076) | "...demonstrated to be substantially equivalent to those of the listed predicate devices." |
| Compliance with ISO 13485:2003, CFR 21 Part 820, and DICOM standard | "Verification and Validation for RTx has been carried out in compliance with the requirements of ISO 13485:2003, CFR 21 Part 820 and in adherence to the DICOM standard." |
| No new potential safety risks | "RTx meets requirements for safety and effectiveness and does not introduce any new potential safety risks." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not specified. The document refers generally to "performance, functional and algorithmic testing" but does not detail the size or nature of any test dataset(s) used.
- Data Provenance: Not specified. The country of origin or whether the data was retrospective or prospective is not mentioned.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts: Not specified. The document does not describe the use of experts to establish ground truth for testing.
- Qualifications of Experts: Not specified.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable/Not specified. Since no expert-adjudicated test set is described, there is no mention of an adjudication method (e.g., 2+1, 3+1).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No. An MRMC comparative effectiveness study is not mentioned or described in the provided text.
- Effect Size of Human Reader Improvement with AI vs. without AI: Not applicable, as no MRMC study or AI assistance comparison is described. The device is a viewing and processing tool, not explicitly an AI-assisted diagnostic device in the context of human reader performance improvement.
6. Standalone (Algorithm Only) Performance Study
- Standalone Performance Study: No specific standalone performance study with quantitative results (e.g., for algorithms like image registration) is detailed in the submission. The "algorithmic testing" mentioned is general and no metrics are provided.
7. Type of Ground Truth Used
- Type of Ground Truth: Not specified. The document does not describe how ground truth was established for any performance evaluations.
8. Sample Size for the Training Set
- Sample Size: Not applicable/Not specified. The document does not refer to a "training set" in the context of machine learning. RTx is described as a software application providing tools for display, visualization, and processing, rather than an AI/ML model that would typically require a training set.
9. How the Ground Truth for the Training Set was Established
- Ground Truth for Training Set: Not applicable/Not specified, as no training set for a machine learning model is mentioned.
Summary Observation:
The submission for RTx 510(k) K130393 follows a traditional approach for medical image display and processing software. It focuses on functional verification and validation, adherence to standards, and substantial equivalence to existing predicate devices, rather than establishing quantitative performance metrics through specific clinical studies with expert-adjudicated ground truth, as would be expected for an AI/ML-based diagnostic device. The "testing" mentioned is broad and refers to meeting user needs and requirements through performance, functional, and algorithmic testing, without providing specific details on the tests or their outcomes in a quantitative manner.
(17 days)
Mirada RT is intended to be used by trained medical professionals including, but not limited to, radiologists, nuclear medicine physicians, and physicists.
Mirada RT is a software application intended to display and visualize 2D & 3D multi-modal medical image data. The user may process, render, review, store, print and distribute DICOM 3.0 compliant datasets within the system and/or across computer networks. Supported modalities include, static and gated CT and PET, and static MR, SPECT and planar NM. The user may also create, display, print, store and distribute reports resulting from interpretation of the datasets.
Mirada RT allows the user to register combinations of anatomical and functional images and display them with fused and non-fused displays to facilitate the comparison of image data by the user. The result of the registration operation can assist the user in assessing changes in image data, either within or between examinations, and aims to help the user obtain a better understanding of the combined information, which would otherwise have to be compared visually and disjointedly.
Mirada RT provides a number of tools, such as rulers and regions of interest, which are intended to be used for the assessment of regions of an image to support a clinical workflow. Examples of such workflows include, but are not limited to, the evaluation of the presence or absence of lesions, determination of treatment response, and follow-up.
Mirada RT allows the user to define, import, transform, store, and export regions of interest structures and dose volumes in DICOM RT format for use in radiation therapy planning systems.
Mirada RT is a software application for displaying and visualizing 2D & 3D multi-modal medical image data such as static and gated CT and PET, and static MR, SPECT and planar NM. Mirada RT runs on a workstation with color monitor(s), keyboard, mouse and optional CD-RW. Mirada RT is designed to enable rendering, reviewing, storing, printing and distribution of DICOM 3.0 compliant datasets and reports within the system and/or across computer networks.
Mirada RT enables automatic and manual registration of combinations of anatomical and functional images that can be displayed with fused and non-fused displays to facilitate the comparison of image data by the user.
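Registration of this kind ultimately produces a spatial transform that maps coordinates from one image into the frame of another, which is what makes fused display possible. As an illustration only (this is not Mirada's algorithm, and real medical image registration works in 3D with interpolation and similarity metrics), applying a rigid 2D transform (rotation plus translation) to points can be sketched as:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def rigid_transform(points: List[Point], angle_rad: float,
                    tx: float, ty: float) -> List[Point]:
    """Apply a 2D rigid transform (rotation about the origin,
    then translation) to a list of (x, y) points."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Map a point from the "moving" image into the "fixed" image frame:
# a 90-degree rotation plus a (10, 0) shift.
moved = rigid_transform([(1.0, 0.0)], math.pi / 2, 10.0, 0.0)
# moved[0] is approximately (10.0, 1.0)
```

Automatic registration would search for the `angle_rad`, `tx`, `ty` parameters that best align the two images; manual registration lets the user set them interactively.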
Mirada RT provides a number of tools, such as rulers and regions of interest with SUV calculation, for the assessment of regions of an image to support a clinical workflow. Mirada RT allows the user to define, import, transform, store, and export regions of interest structures and dose volumes in DICOM RT format for use in radiation therapy planning systems.
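The SUV tooling mentioned above rests on a standard formula. As a hedged sketch (not the device's implementation), body-weight-normalized SUV divides the measured tissue activity concentration by the injected dose per unit of body weight:

```python
def suv_bw(activity_conc_bq_ml: float, injected_dose_bq: float,
           body_weight_g: float) -> float:
    """Body-weight-normalized Standardized Uptake Value.

    SUVbw = tissue activity concentration [Bq/mL]
            / (injected dose [Bq] / body weight [g])

    Assuming 1 mL of tissue is roughly 1 g, the result is dimensionless.
    Decay-correcting the injected dose to scan time is assumed to have
    been done by the caller.
    """
    if injected_dose_bq <= 0 or body_weight_g <= 0:
        raise ValueError("dose and weight must be positive")
    return activity_conc_bq_ml * body_weight_g / injected_dose_bq

# Example: 5 kBq/mL measured in tissue, 370 MBq injected, 70 kg patient.
suv = suv_bw(5_000.0, 370_000_000.0, 70_000.0)
# suv is approximately 0.946
```

In a viewer, this scaling is typically applied per-voxel so that ROI statistics (SUVmax, SUVmean) can be read directly from the region.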
The provided text describes Mirada RT as a software application for displaying and visualizing 2D & 3D multi-modal medical image data, intended for use by trained medical professionals. The document focuses on showing substantial equivalence to predicate devices rather than providing detailed acceptance criteria or a specific study to prove performance against such criteria.
Here's the information extracted and observations based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a table of acceptance criteria with specific quantitative targets for performance (e.g., accuracy, sensitivity, specificity) for the Mirada RT device. Instead, it states that:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Substantial Equivalence to Predicate Devices | "The results of performance, functional and algorithmic testing demonstrate that Mirada RT meets the user needs and requirements of the device, which are demonstrated to be substantially equivalent to those of the listed predicate devices." "In conclusion, performance testing demonstrates that Mirada RT is substantially equivalent to, and performs at least as safely and effectively as the listed predicate devices. Mirada RT meets requirements for safety and effectiveness and does not introduce any new potential safety risks." |
| Compliance with User Needs and Requirements | "Mirada RT is validated and verified against its user needs and intended use by the successful execution of planned performance, functional and algorithmic testing included in this submission." |
| Compliance with Standards | "Verification and Validation for Mirada RT has been carried out in compliance with the requirements of ISO 13485:2003 and in adherence to the DICOM standard." |
Observation: The document focuses on demonstrating substantial equivalence to predicate devices rather than establishing novel quantitative performance criteria for Mirada RT. The "performance testing" mentioned serves to demonstrate this equivalence, not to meet predefined operating characteristics for an AI component, as would be typical of current AI/ML device submissions.
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the sample size used for any test set or the data provenance (e.g., country of origin, retrospective/prospective). It generally refers to "performance, functional and algorithmic testing."
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not specify the number of experts used to establish ground truth or their qualifications.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for a test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC comparative effectiveness study was not mentioned or described in the provided text. The document does not discuss human reader performance with or without AI assistance. Mirada RT is described as a software application with tools for image display, processing, and analysis, not specifically as an AI solution designed to augment human reader performance in the sense of a standalone diagnostic aid.
6. Standalone (Algorithm Only) Performance
The document does not provide details on standalone (algorithm only) performance. While it mentions "algorithmic testing," it does not present specific metrics or results for such testing in isolation from a human user. The device's intended use clearly involves trained medical professionals, suggesting a human-in-the-loop context.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data).
8. Sample Size for the Training Set
The document does not mention or specify any training set sample size, as it does not describe the development or evaluation of an AI/ML model in the typical sense that would require a dedicated training set. The descriptions of "algorithmic testing" are generic and do not refer to machine learning model training.
9. How Ground Truth for the Training Set Was Established
As no training set is discussed, the method for establishing ground truth for a training set is not mentioned.
Summary Observation: This 510(k) summary from 2010 predates the heightened focus on specific AI/ML performance metrics and study designs that are common in more recent submissions. The submission frames Mirada RT as a medical image processing and visualization tool that is substantially equivalent to existing predicate devices, rather than an AI-driven diagnostic or assistive device that would require extensive validation against specific ground truths using large, expertly annotated datasets for training and testing. The validation described is primarily focused on demonstrating the software functions as intended and safely, aligning with general medical device regulations and established standards like DICOM and ISO 13485.
(51 days)
Mirada XD is intended to be used by trained medical professionals including, but not limited to, radiologists, nuclear medicine physicians, and physicists.
Mirada XD is a software application intended to display and visualize 2D & 3D multi-modal medical image data. The user may process, render, review, store, print and distribute DICOM 3.0 compliant datasets within the system and/or across computer networks. Supported modalities include CT, MR, PET, SPECT, and planar NM. The user may also create, display, print, store and distribute reports resulting from interpretation of the datasets.
Mirada XD allows the user to register combinations of anatomical and functional images and display them with fused and non-fused displays to facilitate the comparison of image data by the user. The result of the registration operation can assist the user in assessing changes in image data, either within or between examinations, and aims to help the user obtain a better understanding of the combined information, which would otherwise have to be compared visually and disjointedly.
Mirada XD provides a number of tools, such as rulers and regions of interest, which are intended to be used for the assessment of regions of an image to support a clinical workflow. Examples of such workflows include, but are not limited to, the evaluation of the presence or absence of lesions, determination of treatment response, and follow-up.
Mirada XD allows the user to define, transform, and export regions of interest structures in DICOM format, including RT format, for use in radiation therapy planning systems.
Mirada XD is a software application for displaying and visualizing 2D & 3D multi-modal medical image data such as CT, MR, PET, SPECT and planar NM. Mirada XD runs on a workstation with color monitor(s), keyboard, mouse and optional CD-RW. Mirada XD is designed to enable rendering, reviewing, storing, printing and distribution of DICOM 3.0 compliant datasets and reports within the system and/or across computer networks.
Mirada XD enables automatic and manual registration of combinations of anatomical and functional images that can be displayed with fused and non-fused displays to facilitate the comparison of image data by the user.
Mirada XD provides a number of tools, such as rulers and regions of interest with SUV calculation, for the assessment of regions of an image to support a clinical workflow. Mirada XD also allows users to define, transform, store, and export regions of interest structures in DICOM format, including RTSS format, for use in radiation therapy planning systems.
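The RTSS export mentioned above stores each planar contour as a flat list of coordinates: in a DICOM RT Structure Set, the Contour Data attribute (3006,0050) holds x1, y1, z1, x2, y2, z2, ... triplets in patient coordinates (mm). A minimal sketch of that packing, for illustration only (a real export via a DICOM library would also populate the ROI and referenced-image sequences):

```python
from typing import List, Tuple

def pack_contour(points: List[Tuple[float, float, float]]) -> List[float]:
    """Flatten (x, y, z) triplets into the flat x1,y1,z1,x2,y2,z2,...
    layout used by the DICOM RTSS ContourData attribute (3006,0050)."""
    return [coord for p in points for coord in p]

def unpack_contour(data: List[float]) -> List[Tuple[float, float, float]]:
    """Inverse of pack_contour: regroup a flat list into (x, y, z) triplets."""
    if len(data) % 3 != 0:
        raise ValueError("ContourData length must be a multiple of 3")
    return [(data[i], data[i + 1], data[i + 2])
            for i in range(0, len(data), 3)]

# A triangular contour on the axial slice at z = 5.0 mm.
triangle = [(0.0, 0.0, 5.0), (10.0, 0.0, 5.0), (0.0, 10.0, 5.0)]
flat = pack_contour(triangle)
# flat == [0.0, 0.0, 5.0, 10.0, 0.0, 5.0, 0.0, 10.0, 5.0]
```

A treatment planning system receiving the exported file would regroup these flat lists per slice to reconstruct each structure's 3D surface.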
The provided text is a 510(k) summary for the Mirada XD device. It outlines the intended use, device description, and a general statement about testing. However, it does not contain specific acceptance criteria, detailed study results, or the other requested information (sample sizes, expert qualifications, adjudication methods, MRMC study details, standalone performance, ground truth types, or training set details).
The summary states: "Mirada XD is validated and verified against its user needs and intended use by the successful execution of planned performance, functional and algorithmic testing included in this submission. The results of performance, functional and algorithmic testing demonstrate that Mirada XD meets the user needs and requirements of the device, which are demonstrated to be substantially equivalent to those of the listed predicate devices."
This implies that detailed testing was performed and submitted to the FDA, but the summary itself only provides a high-level overview of the testing process and its conclusion regarding substantial equivalence to predicate devices. It specifically states "performance testing demonstrates that Mirada XD is substantially equivalent to, and performs at least as safely and effectively as the listed predicate devices," but it does not specify what those performance metrics or acceptance criteria were.
Therefore, for almost all the requested information, the answer is "Information not provided in the document."
Here is the breakdown based on the provided text:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not specified. The document generally states that "Mirada XD meets the user needs and requirements of the device" and "performs at least as safely and effectively as the listed predicate devices." | Not specified. No specific metrics or performance results (e.g., accuracy, precision, sensitivity, specificity for image registration or ROI tools) are provided. |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Information not provided in the document.
- Data Provenance (Country of Origin, Retrospective/Prospective): Information not provided in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Information not provided in the document.
- Qualifications of Experts: Information not provided in the document.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Information not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Comparative Effectiveness Study: The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. It focuses on the device's substantial equivalence to predicate devices, not on human-AI collaboration.
- Effect Size of Human Improvement with AI: Information not provided in the document.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance Study: The document mentions "algorithmic testing" was performed, but does not provide details on whether this involved a standalone performance evaluation of the algorithms without human-in-the-loop, nor does it provide any results from such testing.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth: Information not provided in the document. The document refers to "user needs and requirements," implying validation against operational criteria, but it does not specify a clinical "ground truth" for algorithmic performance.
8. The sample size for the training set
- Sample Size for Training Set: Information not provided in the document.
9. How the ground truth for the training set was established
- Ground Truth Establishment for Training Set: Information not provided in the document.