The MIRAGE system is indicated for the acquisition, formatting, and storage of scintigraphy camera output data. It can process and display the acquired information in traditional formats, in pseudo-three-dimensional renderings, and in various forms of animated sequences showing kinetic attributes of the imaged organs.
The product (MIRAGE) is essentially a camera-driven acquisition, processing, and post-processing (visualization and analysis) software system with a Patient Data Management system, to which minimal processing (e.g., tomographic reconstruction) has been added.
The provided text describes a 510(k) submission for the "MIRAGE" Nuclear Medicine Planar and SPECT Image Processing Software by Segami Corporation. However, it does not contain the detailed information required to describe acceptance criteria and the extensive study methodologies you've requested.
The document focuses on establishing substantial equivalence to predicate devices, rather than presenting a detailed clinical study with specific acceptance criteria and performance metrics.
Here's a breakdown of why the requested information cannot be fully extracted from the provided text:
- No specific acceptance criteria are listed for the device itself. The document states that "Substantial equivalence was shown non-clinically by demonstrating that Generic Image Operations, Generic Tomographic post-processing, First Pass Ventriculography and Planar Gated Blood-pool Analysis did not yield substantially different results from those of the predicate devices." It also mentions "Substantial equivalence was evidenced clinically by demonstrating that in the clinical applications (of current clinical cases or archival cases) the clinician's conclusion would not have differed substantially." This points to a comparative assessment against predicate devices, but not a set of predefined performance thresholds for MIRAGE itself.
- No specific study is described in detail. Instead, the document refers to "demonstrating" certain aspects for substantial equivalence. It doesn't outline a formal study design, sample sizes, ground truth establishment, or expert involvement as a typical performance study would.
Therefore, for most of your questions, the answer will be "Information not provided in the document."
Despite the limitations, here's what can be inferred or explicitly stated based on the provided text, aligned with your request:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Inferred from Substantial Equivalence Claim) | Reported Device Performance (Inferred from Substantial Equivalence Claim) |
|---|---|
| Non-Clinical: Generic Image Operations yield results not substantially different from predicate devices. | MIRAGE's Generic Image Operations did not yield substantially different results from the ICON COMPUTER SYSTEM (K914350) and SOPHY NXT (K913641). |
| Non-Clinical: Generic Tomographic post-processing yields results not substantially different from predicate devices. | MIRAGE's Generic Tomographic post-processing did not yield substantially different results from the ICON COMPUTER SYSTEM (K914350) and SOPHY NXT (K913641). |
| Non-Clinical: First Pass Ventriculography yields results not substantially different from predicate devices. | MIRAGE's First Pass Ventriculography did not yield substantially different results from the ICON COMPUTER SYSTEM (K914350) and SOPHY NXT (K913641). |
| Non-Clinical: Planar Gated Blood-pool Analysis yields results not substantially different from predicate devices. | MIRAGE's Planar Gated Blood-pool Analysis did not yield substantially different results from the ICON COMPUTER SYSTEM (K914350) and SOPHY NXT (K913641). |
| Clinical: The clinician's conclusion in clinical applications (current or archival cases) would not differ substantially from that reached with predicate devices. | In clinical applications (of current clinical cases or archival cases), the clinician's conclusion would not have differed substantially when using MIRAGE vs. the predicate devices. |
| Technical: Data validity and integrity in Patient Data Management. | The Patient Data Management underwent a separate technical test in which the validity and integrity of the data were evaluated and concluded to be similar to the predicate devices. |
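The submission gives no quantitative definition of "not substantially different." Purely as an illustration (none of this is described in the document), one hypothetical way to compare a MIRAGE output image against a predicate device's output is a normalized root-mean-square difference with an assumed tolerance:

```python
import numpy as np

def normalized_rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square difference between two images, normalized
    by the dynamic range of the reference image `a`."""
    a = a.astype(float)
    b = b.astype(float)
    rng = a.max() - a.min()
    if rng == 0:
        return 0.0 if np.array_equal(a, b) else float("inf")
    return float(np.sqrt(np.mean((a - b) ** 2)) / rng)

# Hypothetical acceptance rule: outputs agree within 1% of dynamic range.
# These arrays stand in for processed images from the two systems.
mirage_out = np.array([[10.0, 20.0], [30.0, 40.0]])
predicate_out = np.array([[10.1, 19.9], [30.0, 40.2]])
assert normalized_rmse(mirage_out, predicate_out) < 0.01
```

The metric and the 1% threshold are assumptions for the sketch; the actual 1997 comparison method is not disclosed.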
2. Sample sizes used for the test set and the data provenance
- Sample Size (Test Set): Not specified. The document mentions "current clinical cases or archival cases" but does not quantify the number of cases or images.
- Data Provenance: "Current clinical cases or archival cases." The country of origin is not specified, nor is the mix of retrospective (archival) and current cases.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not specified.
- Qualifications of Experts: The document mentions "the clinician's conclusion." This implies that qualified clinical professionals (likely Nuclear Physicians, given the intended use) were involved in forming conclusions, but their specific qualifications or experience level are not detailed.
4. Adjudication method for the test set
- Adjudication Method: Not specified. The phrase "the clinician's conclusion would not have differed substantially" suggests a comparison, but the method of adjudication (e.g., 2+1, 3+1, or independent reviews) is not described.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Comparative Effectiveness Study: Not explicitly described. The study was focused on demonstrating "substantial equivalence" of the MIRAGE system (an image processing software) to predicate devices, rather than assessing human reader improvement with or without AI assistance. The MIRAGE system itself is an image processing tool, not an AI assistant in the modern sense.
- Effect Size: Not applicable/not measured.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- Standalone Performance: The non-clinical demonstrations ("Generic Image Operations," "Generic Tomographic post-processing," etc.) suggest an assessment of the algorithm's output compared to predicate devices, implying a form of standalone evaluation. However, precise metrics for this standalone performance (e.g., accuracy, precision) relative to a true ground truth are not provided; the comparison is to the output of predicate devices. The "Patient Data Management" also underwent a "separate technical test" for validity and integrity.
7. The type of ground truth used
- Ground Truth Type:
- For non-clinical performance (image operations, tomographic processing, etc.): The 'ground truth' was effectively the output/results from the predicate devices (ICON COMPUTER SYSTEM and SOPHY NXT). The goal was to show MIRAGE's output was "not substantially different."
- For clinical conclusions: The 'ground truth' was "the clinician's conclusion" from using predicate devices in "current clinical cases or archival cases." The aim was to show the clinician's conclusion "would not have differed substantially" with MIRAGE.
- For Patient Data Management: "validity and integrity of the data" was evaluated, suggesting a comparison against expected data standards.
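The document does not describe the "separate technical test" of the Patient Data Management beyond evaluating data "validity and integrity." One common way such integrity is verified, sketched here purely as an assumption (the storage format and function names are hypothetical), is a checksum round-trip:

```python
import hashlib
import json
import os
import tempfile

def store_record(path: str, record: dict) -> None:
    """Write a patient record with an embedded SHA-256 checksum
    of its payload (hypothetical storage format)."""
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    with open(path, "w") as f:
        json.dump({"payload": record, "sha256": digest}, f, sort_keys=True)

def load_record(path: str) -> dict:
    """Read a record back and verify its checksum; raise on corruption."""
    with open(path) as f:
        wrapper = json.load(f)
    payload = json.dumps(wrapper["payload"], sort_keys=True)
    if hashlib.sha256(payload.encode()).hexdigest() != wrapper["sha256"]:
        raise ValueError("record integrity check failed")
    return wrapper["payload"]

# Round-trip: the record read back must equal the record written.
record = {"patient_id": "0001", "study": "gated blood-pool"}
path = os.path.join(tempfile.mkdtemp(), "record.json")
store_record(path, record)
assert load_record(path) == record
```

A 1997-era system would likely have used its own database mechanisms; the point of the sketch is only that "integrity" is typically testable as write-then-verify.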
8. The sample size for the training set
- Training Set Sample Size: Not specified. As this is a 510(k) for an image processing system from 1997, the concept of a "training set" in the context of modern machine learning/AI might not be directly applicable in the same way. The software likely relied on established algorithms and mathematical models, rather than data-driven machine learning models requiring extensive training data.
9. How the ground truth for the training set was established
- Training Set Ground Truth Establishment: Not specified. (See point 8 regarding the applicability of "training set" in this context).
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
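The special controls cite the DICOM standard. A DICOM Part 10 file begins with a 128-byte preamble followed by the four-byte magic value `DICM`, so a minimal conformance-of-format check needs only a few lines (a sketch of the file signature check, not a DICOM parser):

```python
import os
import tempfile

def looks_like_dicom(path: str) -> bool:
    """Check for the DICOM Part 10 file signature: a 128-byte
    preamble followed by the magic bytes b'DICM'."""
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Demo with a synthetic file; real files would come from a camera or PACS.
path = os.path.join(tempfile.mkdtemp(), "demo.dcm")
with open(path, "wb") as f:
    f.write(b"\x00" * 128 + b"DICM")
assert looks_like_dicom(path)
```

Full DICOM handling (transfer syntaxes, the file meta information group) is far more involved; this only verifies the signature defined in DICOM PS3.10.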