Imaging of:
- The Whole Body (including head, abdomen, breast, heart, pelvis, joints, neck, TMJ, spine, blood vessels, limbs, and extremities). [Application terms include MRCP (MR Cholangiopancreatography), MR Urography, MR Myelography, MR Fluoroscopy, SAS (Surface Anatomy Scan), Dynamic Scan, Cine Imaging, and Cardiac tagging.]
- Fluid Visualization
- 2D/3D Imaging
- MR Angiography/MR Vascular Imaging
- Blood Oxygenation Level Dependent (BOLD) Imaging
This submission consists of a software upgrade to the MRT-50GP/E2 (FLEXART™), MRT-50GP/H2 (FLEXART™/Hyper), MRT-150/F1 (VISART™), and MRT-150/F2 (VISART™/Hyper) systems.
Here is an analysis of the provided 510(k) summary as it relates to acceptance criteria and the study conducted:
Disclaimer: The provided document (K983110) is a 510(k) Premarket Notification summary from 1998 for a software upgrade to existing Magnetic Resonance Diagnostic Devices (FLEXART™ and VISART™). It focuses on demonstrating substantial equivalence to previously cleared devices. It primarily discusses safety parameters and imaging performance specifications rather than a typical clinical study with acceptance criteria for a new AI/CAD device.
This document predates widespread AI in medical imaging and the standard AI/CAD study structure. Therefore, many of the requested fields (such as sample sizes for test/training sets, ground truth establishment methods, MRMC studies, effect sizes, and standalone performance) are not directly addressed in the provided text, because they pertain to a different type of device evaluation than the one described here.
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of the document, the "acceptance criteria" are more akin to specifications that the software upgrade must maintain, and the "reported device performance" indicates that these specifications are maintained and remain comparable to those of the predicate devices.
| Parameter/Criteria | Acceptance Criteria (predicate, V3.5 software) | Reported Device Performance (V4.0 software) | Outcome/Met? |
|---|---|---|---|
| Safety Parameters | | | |
| Maximum static field strength (FLEXART™) | 0.5 T | 0.5 T | Met |
| Maximum static field strength (VISART™) | 1.5 T | 1.5 T | Met |
| Rate of change of magnetic field (FLEXART™) | 11 T/sec. | 11 T/sec. | Met |
| Rate of change of magnetic field (FLEXART™/Hyper) | 13.3 T/sec. | 13.3 T/sec. | Met |
| Rate of change of magnetic field (VISART™) | 13.3 T/sec. | 13.3 T/sec. | Met |
| Rate of change of magnetic field (VISART™/Hyper) | 19.5 T/sec. | 19.5 T/sec. | Met |
| Maximum RF power deposition (FLEXART™) | <0.4 W/kg | <0.4 W/kg | Met |
| Maximum RF power deposition (VISART™) | <1.0 W/kg | <1.0 W/kg | Met |
| Acoustic noise levels (FLEXART™) | 100.2 dB(A) | 100.2 dB(A) | Met |
| Acoustic noise levels (FLEXART™/Hyper) | 98.5 dB(A) | 98.5 dB(A) | Met |
| Acoustic noise levels (VISART™) | 105.3 dB | 105.3 dB | Met |
| Acoustic noise levels (VISART™/Hyper) | 105.1 dB | 105.1 dB | Met |
| Imaging Performance Parameters | | | |
| Specification volume: Head | 16 cm dsv | 16 cm dsv | Met |
| Specification volume: Body | 28 cm dsv | 28 cm dsv | Met |
| Functionality: New sequences (e.g., cardiac tagging, Cine imaging) | Not explicitly listed as "acceptance criteria" but included as new features | "Sample clinical images are presented for new sequences" and substantial equivalence claimed. | Implied Met |
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a distinct "test set" in the context of an algorithmic performance evaluation. The evaluation primarily relies on:
- Engineering specifications and measurements: For safety and imaging performance parameters (e.g., static field strength, SAR, acoustic noise, specification volume).
- Sample clinical images: Presented to demonstrate the new sequences; the number of images or patients is not specified.
- Comparison to predicate devices: The core of a 510(k) submission is to show substantial equivalence.
Data Provenance: Not explicitly stated; the manufacturer is Toshiba Corporation, Japan. The context suggests the data come from internal testing and validation, with the "sample clinical images" potentially representing actual patient scans from clinical sites. The evaluation is retrospective in the sense that it compares the upgrade against existing cleared software versions.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
This information is not provided in the document. The evaluation focuses on engineering specifications and the presented "sample clinical images", but details on expert review or ground truth establishment are absent.
4. Adjudication Method for the Test Set
This information is not provided in the document. Given the type of submission (a software upgrade demonstrating substantial equivalence), no formal adjudication process for a clinical test set is described in the summary.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not reported or described in the provided document. The submission is a 510(k) for a software upgrade, demonstrating substantial equivalence, not a comparative effectiveness study comparing human readers with and without AI assistance.
6. If a Standalone Performance Study Was Done
A standalone (algorithm-only, without human-in-the-loop) performance study was not described or reported. This submission concerns a software upgrade to an MRI device, not an AI or CAD algorithm.
7. The Type of Ground Truth Used
The "ground truth" for this submission primarily consists of:
- Engineering measurements and specifications: For safety and scanner performance parameters (e.g., measured static field strength, SAR, acoustic noise, image volume).
- Clinical observation/demonstration: "Sample clinical images" are presented to show the functionality and quality of new sequences. The specific type of "ground truth" for these images (e.g., pathology, clinical follow-up) is not specified.
8. The Sample Size for the Training Set
The concept of a "training set" in the context of machine learning is not applicable to this document. This submission does not describe an AI or machine learning device that requires a training set; it is a software upgrade to an existing MRI system.
9. How the Ground Truth for the Training Set Was Established
As stated in point 8, the concept of a "training set" is not applicable to this submission.