510(k) Data Aggregation (78 days)
Imaging of the whole body (including the head, abdomen, heart, pelvis, spine, blood vessels, limbs and extremities), fluid visualization, 2D/3D Imaging, MR Angiography, MR Fluoroscopy.
The MRT-35A v9.0 is a modification of the MRT-35A system which uses a 0.35T superconducting magnet. The MRT-35A v9.0 is an incremental upgrade to an MRT-35A system which is configured with version 8 hardware and software. The 9.0 upgrade replaces the version 8 computer system. The computer architecture, operational characteristics and user software follow the same design considerations cleared by Flexart™ and Visart™ systems. This 9.0 upgrade makes no change to the MRT-35A version 8 series magnet, gradient system or coil set.
Here's a breakdown of the acceptance criteria and study information for the MRT-35A v9.0 device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the MRT-35A v9.0 are implicitly defined by its equivalency to previously cleared devices (MRT-35A v8.0 and Flexart™) and its demonstrated adherence to "consensus standards requirements" for image quality. The performance is reported by comparing key safety and image performance parameters to these predicate devices.
| Acceptance Criteria Category | Specific Parameter | Acceptance Criterion (Implicitly Based on Predicate Device Performance) | Reported MRT-35A v9.0 Performance |
|---|---|---|---|
| Safety Parameters | Maximum static field strength | 0.35 T (MRT-35A v8.0) / 0.5 T (Flexart™) | 0.35 T |
| Safety Parameters | Rate of change of magnetic field | 6.83 T/sec (MRT-35A v8.0) / 11 T/sec (Flexart™) | 6.83 T/sec |
| Safety Parameters | Max. radio frequency power deposition | 0.34 W/kg (MRT-35A v8.0) / <0.256 W/kg (Flexart™) | 0.3 W/kg |
| Safety Parameters | Acoustic noise level | 101.6 dB (max) (MRT-35A v8.0) / 100.2 dB (max) (Flexart™) | 110.4 dB (max) |
| Image Performance | Specification volume: head | 10 cm dsv (MRT-35A v8.0) / 10.4 cm dsv (Flexart™) | 10 cm dsv |
| Image Performance | Specification volume: body | 20 cm dsv (MRT-35A v8.0) / 10.4 cm dsv (Flexart™) | 20 cm dsv |
| Image Performance | Signal-to-noise ratio | Conformance with consensus standards requirements | Demonstrated conformance |
| Image Performance | Uniformity | Conformance with consensus standards requirements | Demonstrated conformance |
| Image Performance | Slice profiles | Conformance with consensus standards requirements | Demonstrated conformance |
| Image Performance | Geometric distortion | Conformance with consensus standards requirements | Demonstrated conformance |
| Image Performance | Slice thickness / interslice spacing | Conformance with consensus standards requirements | Demonstrated conformance |
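Since the acceptance criteria are defined only implicitly by the predicate values, a simple way to see where the new version sits relative to its predicates is to compare each reported value against both. The sketch below does exactly that using the numbers transcribed from the table; the dictionary layout and the blanket "higher is worse" assumption are illustrative, not part of the submission.

```python
# Sketch: flag safety parameters where the reported MRT-35A v9.0 value
# exceeds BOTH predicate values. Numbers are transcribed from the table
# above; the assumption that higher is worse for every listed parameter
# is an illustrative simplification.

params = {
    # name: (MRT-35A v8.0 predicate, Flexart predicate, reported v9.0)
    "static_field_T":         (0.35, 0.5, 0.35),
    "dB_dt_T_per_sec":        (6.83, 11.0, 6.83),
    "rf_deposition_W_per_kg": (0.34, 0.256, 0.3),
    "acoustic_noise_dB":      (101.6, 100.2, 110.4),
}

def exceeds_both_predicates(v8: float, flexart: float, reported: float) -> bool:
    """True when the reported value is strictly above both predicate values."""
    return reported > v8 and reported > flexart

flagged = [name for name, (v8, fx, rep) in params.items()
           if exceeds_both_predicates(v8, fx, rep)]
print(flagged)
```

Running this flags only the acoustic noise level, which is consistent with the table: 110.4 dB is above both the 101.6 dB (v8.0) and 100.2 dB (Flexart™) predicate figures, while every other reported parameter matches or stays below at least one predicate.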
2. Sample size used for the test set and the data provenance
The document does not explicitly state the sample size for a test set in terms of clinical cases or patient numbers. It mentions that "Sample phantom images and clinical images were presented for all new sequences." This implies a qualitative assessment rather than a quantitative, numerically defined test set.
The data provenance (country of origin of the data, retrospective or prospective) is not mentioned. Given the nature of a 510(k) submission for a device upgrade, it's likely previous clinical data from the predicate devices or internal testing data were leveraged, but this is not definitively stated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. The assessment appears to be based on "consensus standards requirements" for image quality, which implies an understanding of established medical imaging quality metrics by the reviewers.
4. Adjudication method for the test set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for the test set. The evaluation seems to be based on a comparison to "consensus standards requirements" and visual assessment of "sample phantom images and clinical images."
5. Was a multi-reader multi-case (MRMC) comparative effectiveness study done? If so, what was the effect size of how much human readers improve with AI vs. without AI assistance?
No MRMC comparative effectiveness study is mentioned. This device is an upgrade to an existing MRI system and does not involve AI for interpretation or improvement of human reader performance.
6. Was a standalone (i.e., algorithm-only, without human-in-the-loop) performance study done?
This section is not applicable. The device is an MRI system, not an algorithm, and its performance is inherently tied to human operation and interpretation of images.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for image performance appears to be based on "consensus standards requirements" for various image quality metrics (Signal-to-Noise ratio, Uniformity, Slice Profiles, Geometric Distortion, and Slice Thickness/Interslice Spacing). For the clinical images, the implicit ground truth would be the expected visual quality and diagnostic utility that a radiologist would expect from an MRI system. There's no mention of pathology or outcomes data being used for the performance evaluation in this summary.
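To make the "consensus standards requirements" concrete, the sketch below computes two of the cited metrics in the style commonly used for MRI phantom acceptance testing: signal-to-noise ratio (mean signal over the standard deviation of a background-noise region) and percent integral uniformity. The tiny hard-coded "ROI" pixel lists are purely illustrative; real testing would use regions drawn on actual phantom images per the applicable standard.

```python
# Illustrative computation of two consensus-style image-quality metrics
# (SNR and percent integral uniformity). The pixel values below stand in
# for regions of interest (ROIs) drawn on a phantom image; they are
# hypothetical, not data from the submission.
from statistics import mean, pstdev

signal_roi = [98.0, 101.0, 99.0, 102.0]   # pixels inside the phantom
noise_roi = [1.5, -0.5, 0.5, -1.0]        # pixels in background air

# SNR: mean signal divided by the standard deviation of the noise ROI.
snr = mean(signal_roi) / pstdev(noise_roi)

# Percent integral uniformity over the signal ROI:
#   100 * (1 - (max - min) / (max + min))
piu = 100.0 * (1 - (max(signal_roi) - min(signal_roi))
               / (max(signal_roi) + min(signal_roi)))

print(f"SNR = {snr:.1f}, PIU = {piu:.1f}%")
```

A pass/fail decision would then compare these numbers against the thresholds in the relevant consensus standard, which is the kind of "demonstrated conformance" the table reports without giving the underlying values.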
8. The sample size for the training set
The document does not describe a training set. As this is an upgrade to an MRI system, the "training" would have been the development and optimization of the underlying software and hardware components, rather than a machine learning training set in the modern sense.
9. How the ground truth for the training set was established
Not applicable, as no training set (in the machine learning context) is described. The development of the system's "software technology" and "applications software" would have been guided by established engineering principles and medical imaging requirements, rather than a specific "ground truth" derived for a training set.