Search Results
Found 2 results
510(k) Data Aggregation
(104 days)
Swoop Point-of-Care Magnetic Resonance Imaging (POC MRI) Scanner System
The Swoop Point-of-Care Magnetic Resonance Imaging System is a bedside magnetic resonance imaging device for producing images that display the internal structure of the head where full diagnostic examination is not clinically practical. When interpreted by a trained physician, these images provide information that can be useful in determining a diagnosis.
The Swoop™ Point-of-Care MRI System is a portable MRI device that allows for patient bedside imaging. The system enables visualization of the internal structures of the head using standard magnetic resonance imaging contrasts. The main interface is a commercial off-the-shelf device that is used for operating the system, providing access to patient data, exam setup, exam execution, viewing MRI image data for quality control purposes, and cloud storage interactions. The system can generate MRI data sets with a broad range of contrasts. The Swoop™ Point-of-Care MRI System user interface includes touchscreen menus, controls, indicators, and navigation icons that allow the operator to control the system and to view imagery.
The subject device in this submission includes a change to the image reconstruction algorithm of the Swoop POC MRI device for the T1W, T2W, and FLAIR sequences. The image reconstruction change utilizes deep learning to provide improved image quality, specifically reductions in image noise and blurring. This change replaces the non-uniform FFT gridding operation in the reconstruction pipeline with Advanced Gridding and adds an Advanced Denoising step in the image postprocessing stage. All other sections of the image reconstruction pipeline are unchanged with respect to those used in the previously cleared system (K201722/K211818).
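To make the pipeline change concrete, here is a minimal, hypothetical sketch of how a learned gridding stage and an appended denoising stage could slot into a reconstruction chain. The module names mirror the summary's terminology, but the architectures, sizes, and weights are invented for illustration; Hyperfine's actual models are not disclosed in the submission.

```python
import torch
import torch.nn as nn

class AdvancedGridding(nn.Module):
    """Learned stand-in for the NUFFT gridding stage (illustrative only)."""

    def __init__(self, n_samples: int, img_size: int):
        super().__init__()
        # Dense mapping from non-Cartesian k-space samples to a Cartesian grid;
        # a real model would be far more structured than one linear layer.
        self.interp = nn.Linear(n_samples, img_size * img_size, bias=False)
        self.img_size = img_size

    def forward(self, kspace: torch.Tensor) -> torch.Tensor:
        # kspace: (batch, n_samples), real and imaginary parts stacked as reals
        grid = self.interp(kspace).view(-1, 1, self.img_size, self.img_size)
        return torch.fft.ifft2(grid).abs()  # inverse FFT -> magnitude image

class AdvancedDenoising(nn.Module):
    """Small residual CNN denoiser in postprocessing (illustrative only)."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return img - self.net(img)  # predict the noise component, subtract it

def reconstruct(kspace, gridder, denoiser):
    """Pipeline stages other than these two are unchanged and omitted here."""
    return denoiser(gridder(kspace))

# Example: 8192 stacked real/imag samples -> 64x64 magnitude image
img = reconstruct(torch.randn(1, 8192),
                  AdvancedGridding(8192, 64), AdvancedDenoising())
```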
The provided text describes the acceptance criteria for a new image reconstruction algorithm in the Swoop™ Point-of-Care Magnetic Resonance Imaging (POC MRI) System and the testing conducted to demonstrate substantial equivalence.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| **Advanced Reconstruction Verification** | |
| Advanced reconstruction models do not alter image features or introduce artifacts. | Passed |
| Expert-mode users can toggle between linear reconstruction and advanced reconstruction. | Passed |
| Image quality with advanced reconstruction is acceptable. | Passed (specifically, provides "improved image quality, specifically in terms of reductions in image noise and blurring") |
| Basic software functionality is unchanged between releases. | Passed |
| NESSUS scan to identify vulnerabilities and serve as a security baseline. | Passed |
| **Advanced Reconstruction Performance Analysis** | |
| Robustness, stability, and generalizability of the advanced reconstruction models. | Passed |
| **Image Performance** | |
| Meets all image quality criteria (based on NEMA and ACR standards). | Passed |
| **Advanced Reconstruction Validation** | |
| Meets user needs and performs as intended. | Passed |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not explicitly state the sample size for the test set or the data provenance (country of origin, retrospective/prospective). It mentions "testing to verify image quality with advanced reconstruction is acceptable" and "Validation studies to confirm that the device meets user needs and performs as intended" but lacks specific details about the patient data used for these tests.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions "expert-mode users" and "trained physician" (in the Indications for Use), but no further details about their roles in evaluating the test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any specific adjudication method (like 2+1 or 3+1) used for the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any effect size related to human reader improvement with AI assistance. The focus of the submission is on the image reconstruction algorithm itself, which utilizes deep learning to improve image quality, rather than on a human-in-the-loop performance study.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, standalone performance was assessed for the algorithm. The testing described focuses on the "image reconstruction algorithm" and its "image quality" improvements, such as "reductions in image noise and blurring." This implies an evaluation of the algorithm's output (images) without necessarily involving human interpretation as the primary endpoint for all tests. The "Advanced Reconstruction Verification," "Advanced Reconstruction Performance Analysis," and "Image Performance" tests are all indicative of standalone algorithm evaluation.
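For illustration only, a standalone evaluation of this kind might compare the linear and advanced reconstructions of the same acquisition using simple quantitative proxies for noise and blur. The ROI placement and both metrics below are assumptions, not the endpoints Hyperfine actually used:

```python
import numpy as np

def background_noise(img: np.ndarray,
                     roi=(slice(0, 32), slice(0, 32))) -> float:
    """Standard deviation in a signal-free background ROI (noise estimate)."""
    return float(np.std(img[roi]))

def sharpness(img: np.ndarray) -> float:
    """Mean gradient magnitude as a simple proxy for edge sharpness/blur."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def compare(linear_img: np.ndarray, advanced_img: np.ndarray) -> dict:
    """Relative noise and sharpness changes of advanced vs. linear recon."""
    return {
        "noise_reduction_pct": 100.0 * (1 - background_noise(advanced_img)
                                        / background_noise(linear_img)),
        "sharpness_change_pct": 100.0 * (sharpness(advanced_img)
                                         / sharpness(linear_img) - 1),
    }
```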
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used for evaluating the image quality and performance of the advanced reconstruction algorithm. Given the nature of MRI image quality assessment, it is highly likely that expert visual assessment and potentially quantitative metrics (derived from NEMA and ACR standards, which are referenced) against established benchmarks or phantom data would have been used. However, "expert consensus," "pathology," or "outcomes data" are not directly cited as the ground truth.
8. The sample size for the training set
The document does not specify the sample size used for the training set of the deep learning image reconstruction algorithm.
9. How the ground truth for the training set was established
The document does not describe how the ground truth for the training set was established. It only states that the device "utilizes deep learning to provide improved image quality" but does not elaborate on the training process or ground truth generation.
(26 days)
Swoop Point-of-Care Magnetic Resonance Imaging (POC MRI) Scanner System
The Swoop Point-of-Care Magnetic Resonance Imaging Device is a bedside magnetic resonance imaging device for producing images that display the internal structure of the head where full diagnostic examination is not clinically practical. When interpreted by a trained physician, these images provide information that can be useful in determining a diagnosis.
The Swoop POC MRI Scanner System is a portable MRI device that allows for patient bedside imaging. The system enables visualization of the internal structures of the head using standard magnetic resonance imaging contrasts. The main interface is a commercial off-the-shelf device that is used for operating the system, providing access to patient data, exam execution, viewing MRI image data for quality control purposes, and cloud storage interactions. The system can generate MRI data sets with a broad range of contrasts. The Swoop POC MRI Scanner System user interface includes touchscreen menus, controls, indicators and navigation icons that allow the operator to control the system and to view imagery.
The purpose of this submission is to gain clearance for updates to the software to include automatic alignment and motion correction features.
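Neither the automatic alignment nor the motion-correction algorithm is described in the summary. As a rough illustration of the general class of technique, the sketch below estimates a rigid translation between two images via FFT phase correlation and resamples one onto the other; Hyperfine's actual approach may differ entirely:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_shift(ref: np.ndarray, moving: np.ndarray) -> tuple[int, ...]:
    """Estimate the translation to apply to `moving` so it aligns with `ref`."""
    cross_power = np.fft.fftn(ref) * np.conj(np.fft.fftn(moving))
    cross_power /= np.abs(cross_power) + 1e-12  # normalize: phase correlation
    corr = np.fft.ifftn(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond half the image size wrap around to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

def motion_correct(ref: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Resample `moving` onto `ref` using the estimated translation."""
    return nd_shift(moving, estimate_shift(ref, moving))
```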
The provided text describes a 510(k) summary for the Hyperfine Swoop™ Point-of-Care Magnetic Resonance Imaging (POC MRI) Scanner System (K211818). However, it focuses primarily on demonstrating substantial equivalence to a predicate device (K201722) through non-clinical performance data and risk analysis of software updates (automatic alignment and motion correction, UI updates, security features).
Crucially, the provided document does NOT contain information about a clinical study that proves the device meets specific acceptance criteria based on human reader performance, nor does it detail a standalone AI performance study. The content explicitly states that the submission is for software updates and that performance testing was conducted "to evaluate the modifications" and demonstrates that the device "passed all the testing in accordance with internal requirements and applicable standards to support substantial equivalence." This refers to engineering and regulatory compliance testing rather than clinical performance evaluation against specific diagnostic accuracy metrics.
Therefore, I cannot fulfill all parts of your request with the provided information. I will address the parts that can be inferred or directly stated from the text, and clearly mark where the requested information is not available in the provided document.
Here's a breakdown based on the provided text, indicating where information is present and where it is absent:
Acceptance Criteria and Device Performance:
The document does not specify quantitative clinical acceptance criteria (e.g., minimum sensitivity, specificity, or reader agreement) for diagnostic performance or present a table of device performance against such criteria. The "performance testing" mentioned relates to non-clinical verification of software and hardware changes for safety and effectiveness in the context of substantial equivalence.
Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not specified for clinical diagnostic performance. The document states only: "The subject device passed all the testing in accordance with internal requirements and applicable standards to support substantial equivalence." | Not specified for clinical diagnostic performance. The document does not provide quantitative results for the non-clinical tests, only a statement that the device "passed all the testing." Specific values for SNR, uniformity, etc., are not reported in this summary. |

The testing referenced primarily covers:

- Software Verification (IEC 62304)
- Cybersecurity Information
- Biocompatibility (ISO 10993-1)
- Electrical Safety, EMC, and Essential Performance (ANSI/AAMI ES 60601-1, IEC 60601-2-33, IEC 60601-1-2)
- Usability (IEC 60601-1-6)
- SNR (NEMA MS 1)
- Image Uniformity (NEMA MS 3)
- SAR (NEMA MS 8)
- Phased Array Coils (NEMA MS 9)
- Geometric Distortion (NEMA MS 12)
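For context on the NEMA MS 1 entry above: SNR under that standard is commonly measured with a two-acquisition subtraction method. The sketch below illustrates the computation on synthetic phantom data; the ROI placement and noise level are invented, and the summary reports no numeric SNR values:

```python
import numpy as np

def nema_ms1_snr(img1: np.ndarray, img2: np.ndarray, roi) -> float:
    """SNR from two identical phantom acquisitions.

    Signal is the ROI mean of the first image; noise is the ROI standard
    deviation of the difference image divided by sqrt(2), since subtracting
    two acquisitions doubles the noise variance.
    """
    signal = float(np.mean(img1[roi]))
    noise = float(np.std(img1[roi] - img2[roi])) / np.sqrt(2)
    return signal / noise

# Example with synthetic data: a uniform phantom plus Gaussian noise.
rng = np.random.default_rng(0)
phantom = np.full((128, 128), 100.0)
a = phantom + rng.normal(0, 5, phantom.shape)
b = phantom + rng.normal(0, 5, phantom.shape)
print(nema_ms1_snr(a, b, roi=(slice(32, 96), slice(32, 96))))  # ~20
```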
Missing Information Regarding Clinical Study:
The document does not describe a clinical study designed to demonstrate diagnostic performance of the device itself (or the AI functions if they were to assist human readers). Thus, the following information is not available from the provided text:
- Sample sizes used for the test set and the data provenance: Not available. The document refers to "testing" but not in the context of human reader performance or a clinical test set.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not available.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not available.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not available. The document mentions "automatic alignment and motion correction features" as software updates, but it does not describe a study evaluating their impact on human reader performance. The device is described as "for producing images that display the internal structure of the head where full diagnostic examination is not clinically practical. When interpreted by a trained physician, these images provide information that can be useful in determining a diagnosis." This implies the image quality is assessed, but no study details are provided.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not available. The software updates are applied to the device, but there is no mention of an AI algorithm performing diagnostic tasks in a standalone manner.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not available.
- The sample size for the training set: Not applicable and not available. The document describes software updates to an existing device, not the development of a new AI model requiring a separate training set description. The "automatic alignment and motion correction features" are likely engineering solutions rather than AI models needing large image datasets for training in a diagnostic context.
- How the ground truth for the training set was established: Not applicable and not available.
Summary of what the document implies about "study that proves the device meets the acceptance criteria":
The "study" in this context refers to the non-clinical performance testing and risk analysis described in the 510(k) summary. These tests ensure the device, with its updated software features, continues to meet safety and technical standards for an MRI scanner intended for bedside use, consistent with its predicate. The "acceptance criteria" are the passing criteria for these engineering and regulatory tests, which are not listed specifically but are implied by the statement "passed all the testing." The document implies that by passing these tests and demonstrating substantial equivalence to a predicate, the device is considered to meet the necessary criteria for market clearance. It does not disclose a clinical study evaluating the device's diagnostic accuracy or the impact of its features on human diagnostic performance.