Search Results
Found 3 results
510(k) Data Aggregation
(136 days)
Hyperfine Research, Inc.
BrainInsight is intended for automatic labeling, spatial measurement, and volumetric quantification of brain structures from a set of low-field MR images and returns annotated images, color overlays, and reports.
BrainInsight is fully automated MR imaging post-processing medical software that performs image alignment, whole brain segmentation, ventricle segmentation, and midline shift measurement of brain structures from a set of MR images from patients aged 18 or older. The output annotated and segmented images are provided in a standard image format using segmented color overlays and reports that can be displayed on third-party workstations and FDA-cleared Picture Archive and Communications Systems (PACS). The high-throughput capability makes the software suitable for use in routine patient care as a support tool for clinicians in the assessment of low-field (64 mT) structural MRIs. BrainInsight provides overlays and reports based on 64 mT 3D MRI series of T1- and T2-weighted sequences. The outputs of the software are DICOM images that include volumes annotated with color overlays, with each color representing a particular segmented region; spatial measurements of anatomical structures; and information reports computed from the image data, segmentations, and measurements. The BrainInsight processing architecture includes a proprietary automated internal pipeline that performs whole brain segmentation, ventricle segmentation, and midline shift measurement based on machine learning tools. Additionally, the system's automated safety measures include automated quality-control functions, such as tissue contrast checks and scan protocol verification. The system is installed on a standard computing platform (e.g., a server that may be in the cloud) and is designed to support file transfer for input and output of results.
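To make the midline-shift output concrete, the toy sketch below estimates a shift from binary segmentation masks on a single axial slice. It is a hypothetical illustration only; the function, masks, and approach are assumptions for exposition, not BrainInsight's actual pipeline.

```python
import numpy as np

def midline_shift_mm(septum_mask: np.ndarray,
                     brain_mask: np.ndarray,
                     pixel_spacing_mm: float) -> float:
    """Estimate midline shift on a single axial slice (toy approach).

    The ideal midline is taken as the midpoint of the brain's left/right
    extent; the measured midline is the mean column of the segmented
    midline structure (e.g., septum pellucidum). The shift is their
    distance, converted to millimeters via the pixel spacing.
    """
    cols = np.where(brain_mask.any(axis=0))[0]
    ideal_midline = (cols.min() + cols.max()) / 2.0
    measured_midline = np.argwhere(septum_mask)[:, 1].mean()
    return abs(measured_midline - ideal_midline) * pixel_spacing_mm

# Toy 1 mm/pixel example: brain spans columns 10..90 (ideal midline 50),
# septum segmented at columns 53..54 (measured midline 53.5).
brain = np.zeros((100, 100), dtype=bool)
brain[20:80, 10:91] = True
septum = np.zeros_like(brain)
septum[45:55, 53:55] = True
print(midline_shift_mm(septum, brain, pixel_spacing_mm=1.0))  # → 3.5
```

A production system would of course work in 3D, register the head to a canonical orientation first, and calibrate against expert measurements; this sketch only shows the geometric idea.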
The provided text describes the BrainInsight device and references its 510(k) summary (K202414). However, it does not contain specific acceptance criteria or a detailed study description with performance metrics, sample sizes, or ground truth establishment relevant to those criteria. The "Non-clinical Performance Data" section lists areas of evaluation but doesn't provide the results against specific criteria.
Therefore, I cannot fulfill the request to provide a table of acceptance criteria and reported device performance based solely on the provided text.
However, I can extract information related to the studies mentioned and other requested points:
1. Table of Acceptance Criteria and Reported Device Performance
- Not explicitly provided in the text. The document lists areas of non-clinical performance data (Cybersecurity and PHI protection, Midline shift, 3D Coordinates and alignment, Segmentation, Data Quality Control, Audit trail, User Manual information, Software control, Ventricle segmentation, Midline shift measurement, Skull stripping). However, it does not state specific acceptance criteria (e.g., "midline shift accuracy > X%") or the actual performance achieved against such criteria.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not specified in the provided text.
- Data Provenance: Not specified in the provided text. The device processes MRI scans from "Hyperfine FSE MRI scans acquired with specified protocols." Whether these were retrospective or prospective, or from specific countries, is not mentioned.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Not explicitly provided in the text. The document states that "Results must be reviewed by a trained physician," implying human review, but does not detail how ground truth for a test set was established (e.g., number of experts, their qualifications, or their role in defining ground truth for segmentation or measurement accuracy).
4. Adjudication Method for the Test Set
- Not explicitly provided in the text.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC study mentioned. The document focuses on the device's standalone capabilities and its equivalence to a predicate. There is no mention of a study involving human readers with and without AI assistance or effect sizes.
6. Standalone (Algorithm Only Without Human-in-the-loop) Performance
- Yes, a standalone evaluation was performed. The "Non-clinical Performance Data" section describes software evaluations conducted to confirm various aspects like midline shift, 3D coordinates and alignment, segmentation, ventricle segmentation, and skull stripping. This indicates an assessment of the algorithm's performance independent of human input during the processing phase.
7. Type of Ground Truth Used
- Not explicitly provided in the text. The document describes the device's function (automatic labeling, spatial measurement, volumetric quantification, segmentation, midline shift measurements) and states "Performance data was limited to software evaluations to confirm...". While this implies comparison to some form of truth, the type of ground truth (e.g., expert consensus, manual tracings, pathology, outcomes data) for the segmentation, measurements, and other evaluated features is not detailed.
8. Sample Size for the Training Set
- Not explicitly provided in the text. The device uses "machine learning tools" for its processing architecture, indicating the use of a training set, but its size is not disclosed.
9. How the Ground Truth for the Training Set Was Established
- Not explicitly provided in the text. While it states machine learning is used, the process for establishing the ground truth labels or segmentations used to train these models is not detailed.
(49 days)
Hyperfine Research, Inc.
The Point-of-Care Magnetic Resonance Imaging Device is a bedside magnetic resonance imaging device for producing images that display the internal structure of the head where full diagnostic examination is not clinically practical. When interpreted by a trained physician, these images provide information that can be useful in determining a diagnosis.
The Point-Of-Care Magnetic Resonance Imaging (POC MRI) system is an MRI device that is portable allowing patient bedside imaging. It enables visualization of the internal structures of the head using standard magnetic resonance imaging contrasts. The main interface is a commercial off-the-shelf device, used to operate the system, provide access to patient data, exam set-up, exam execution, and MRI image data viewing for quality control purposes as well as for cloud storage interactions. The POC MRI system can generate MRI data sets with a broad range of contrasts.
The user interface includes touch screen menus, controls, indicators and navigation icons that allow the operator to control the system and to view imagery.
Based on the provided text, the device in question is the Hyperfine Point-Of-Care Magnetic Resonance Imaging Scanner System.
Here's an analysis of the acceptance criteria and the study information, extracting what's available in the document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a specific table of quantitative acceptance criteria for image quality or diagnostic performance, nor does it report specific performance metrics for the device against such criteria. Instead, it refers to verification and validation activities conducted to demonstrate that the device meets "predetermined performance specifications" and several recognized standards.
2. Sample Size Used for the Test Set and Data Provenance
The document does not provide any details regarding the sample size used for a test set, nor does it specify the data provenance (e.g., country of origin, retrospective/prospective) for any studies involving human subjects or patient data. The evaluation presented in this 510(k) summary appears to focus on technical standards and phantom testing rather than clinical study results.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
Given the lack of information on a clinical test set with human subjects, the document does not specify the number of experts used to establish ground truth or their qualifications. The "Indications for Use" section mentions that images, "When interpreted by a trained physician, these images provide information that can be useful in determining a diagnosis." However, this is a general statement about usage, not a description of a ground truth establishment process for a study.
4. Adjudication Method for the Test Set
As no test set with human subjects or expert assessment is described, there is no information on any adjudication method.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done; If so, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it mention any AI assistance or the effect size of such assistance on human readers. The summary focuses on the MRI system itself and its equivalence to a predicate device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The document does not describe any standalone algorithm performance testing. The device is a diagnostic imaging system, and its output is intended for interpretation by a trained physician.
7. The Type of Ground Truth Used
The document does not specify the type of ground truth used for any clinical performance evaluation, as no such evaluation is detailed in this summary. The standards listed (e.g., NEMA standards for SNR, uniformity, geometric distortion) relate to technical performance characteristics typically evaluated using phantoms and standardized testing procedures, not clinical ground truth.
8. The Sample Size for the Training Set
The document does not provide any information about a training set or its sample size. This suggests the 510(k) summary does not involve a machine learning model that would require a distinct training set for diagnostic classification in the typical sense.
9. How the Ground Truth for the Training Set Was Established
Since no training set is mentioned, there is no information on how ground truth for a training set was established.
Summary of Information from the Document:
The provided 510(k) summary focuses on demonstrating substantial equivalence to a predicate device (K192002 - Lucy Point-of-Care Magnetic Resonance Imaging Device, Hyperfine Research, Inc.) based on technological characteristics, safety, and performance testing against recognized standards. It does not contain details about specific clinical performance studies involving patient data, expert interpretations, or AI-driven diagnostic capabilities. The performance evaluation primarily refers to adherence to engineering and safety standards using in-house protocols.
Standards cited for evaluating performance:
- ANSI AAMI 60601-1: 2005 (+ Amendment 1), Medical Electrical Equipment - Part 1: General Requirements for Safety
- IEC 60601-2-33: Edition 3.2, 2015, Medical Electrical Equipment - Part 2-33: Particular Requirements for the Basic Safety and Essential Performance of Magnetic Resonance Equipment for Medical Diagnosis
- IEC 60601-1-2: Edition 4.0, 2014, Medical Electrical Equipment - Part 1-2: General Requirements for Safety - Collateral Standard: Electromagnetic Compatibility - Requirements and Tests
- IEC 60601-1-6: Edition 3.1, 2013, Medical Electrical Equipment - Part 1-6: General Requirements for Basic Safety and Essential Performance - Collateral Standard: Usability
- ISO 10993-1: Edition 5, 2018, Biological Evaluation of Medical Devices - Part 1
- ISO 14971: 2007 (R2010), Application of Risk Management to Medical Devices
- IEC 62304: 2006 (+ Amendment 1), Medical Device Software - Software Life Cycle Processes
- NEMA MS 1-2008 (R2014), Determination of Signal-to-Noise Ratio (SNR) in Diagnostic Magnetic Resonance Imaging
- NEMA MS 3-2008 (R2014), Determination of Image Uniformity in Diagnostic Magnetic Resonance Images
- NEMA MS 8-2008, Characterization of the Specific Absorption Rate for Magnetic Resonance Imaging Systems
- NEMA MS 9-2008 (R2014), Characterization of Phased Array Coils for Diagnostic Magnetic Resonance Images
- NEMA MS 12-2016, Quantification and Mapping of Geometric Distortion for Special Applications
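For context on the NEMA MS 1 citation above: that standard determines SNR with a two-acquisition subtraction method on a uniform phantom. The sketch below illustrates the idea on synthetic data; it is a simplified illustration of the method's core calculation, not the standard's full procedure or Hyperfine's actual test protocol.

```python
import numpy as np

def snr_two_acquisition(img1: np.ndarray, img2: np.ndarray,
                        roi: tuple) -> float:
    """SNR via the two-acquisition subtraction method (in the spirit of NEMA MS 1).

    Signal: mean of both acquisitions inside the ROI.
    Noise: std of the difference image in the ROI, divided by sqrt(2),
    because subtracting two independent acquisitions doubles the variance.
    """
    signal = 0.5 * (img1[roi].mean() + img2[roi].mean())
    noise = (img1[roi] - img2[roi]).std() / np.sqrt(2.0)
    return signal / noise

# Synthetic uniform phantom: true signal 100, noise sigma 5, so SNR ≈ 20.
rng = np.random.default_rng(0)
phantom = np.full((64, 64), 100.0)
roi = (slice(16, 48), slice(16, 48))
img_a = phantom + rng.normal(0.0, 5.0, phantom.shape)
img_b = phantom + rng.normal(0.0, 5.0, phantom.shape)
print(snr_two_acquisition(img_a, img_b, roi))  # roughly 20
```

The subtraction trick matters because a single image's ROI standard deviation would conflate noise with residual shading and other structured non-uniformity.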
(195 days)
Hyperfine Research, Inc.
The Lucy Point-of-Care Magnetic Resonance Imaging Device is a bedside magnetic resonance imaging device for producing images that display the internal structure of the head where full diagnostic examination is not clinically practical. When interpreted by a trained physician, these images provide information that can be useful in determining a diagnosis.
Lucy is a magnetic resonance imaging (MRI) device. Its portability allows patient bedside imaging. It enables visualization of the internal structures of the head using standard magnetic resonance imaging contrasts. The main interface is a commercial off the shelf device, used to operate the system, provide access to patient data, exam set-up, exam execution, and MRI image data viewing for quality control purposes as well as for cloud storage interactions. Lucy can generate MRI data sets with a broad range of contrasts. The user interface includes touch screen menus, controls, indicators and navigation icons that allow the operator to control the system and to view imagery.
The provided text is a 510(k) summary for the Lucy Point-of-Care Magnetic Resonance Imaging Device. While it describes the device, its intended use, and a comparison to a predicate device, it does not contain information regarding an AI component or a study that specifically proves the device meets AI-related acceptance criteria. The acceptance criteria and performance data outlined below are based on general medical device regulatory submissions and what one would expect for an AI/ML-driven medical device, assuming the device had such a component.
Therefore, many of the requested details, particularly those related to AI algorithm performance (e.g., sample sizes for test and training sets, expert adjudication, MRMC studies, standalone performance, ground truth for AI), cannot be extracted from this specific document.
Based on the provided text, the device described is a Magnetic Resonance Imaging (MRI) device, and the submission is for its substantial equivalence to a predicate MRI device. There is no mention of an AI/ML component in the provided documentation, nor any study proving an AI component meets acceptance criteria.
Therefore, the following information is what would be expected for an AI-powered medical device, but cannot be directly extracted or inferred from the provided text.
Hypothetical Acceptance Criteria and Study (if the Lucy device had an AI component):
Given that the provided text describes a hardware medical device (MRI scanner) rather than an AI/ML algorithm, the concept of "acceptance criteria" and "study that proves the device meets the acceptance criteria" in the context of AI applies to the performance of an algorithm, not the hardware. Since no AI algorithm is mentioned in this document, the following is a hypothetical structure for what such a response would look like if an AI component were present.
1. Table of Acceptance Criteria and Reported Device Performance
If the Lucy device included an AI component (e.g., for automated lesion detection or image quality assessment), the acceptance criteria would typically revolve around diagnostic accuracy metrics.
| Metric (Hypothetical for AI Component) | Acceptance Criteria (Hypothetical) | Reported Device Performance (Hypothetical) |
|---|---|---|
| Primary Endpoint (e.g., Sensitivity for detecting XYZ condition) | ≥ [Target %] for primary indication | [Achieved %] |
| Secondary Endpoint (e.g., Specificity for detecting XYZ condition) | ≥ [Target %] | [Achieved %] |
| ROC AUC (for classification tasks) | ≥ [Target value] | [Achieved value] |
| Negative Predictive Value (NPV) | ≥ [Target %] | [Achieved %] |
| Positive Predictive Value (PPV) | ≥ [Target %] | [Achieved %] |
| Detection Rate (for certain pathologies) | Within [X]% of expert consensus | [Achieved %] |
| False Positives per scan | ≤ [Target number] | [Achieved number] |
| False Negatives per scan | ≤ [Target number] | [Achieved number] |
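If such a hypothetical AI component were evaluated, the first several metrics in the table would be derived from a confusion matrix over the adjudicated test set. A minimal sketch with toy labels (the data here are illustrative, not from any real study):

```python
def diagnostic_metrics(y_true: list, y_pred: list) -> dict:
    """Confusion-matrix-derived diagnostic metrics for binary labels
    (1 = condition present, 0 = condition absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # recall on positives
        "specificity": tn / (tn + fp),   # recall on negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Toy test set: 6 positives, 4 negatives; the model misses one positive
# and raises one false alarm.
truth = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1]
m = diagnostic_metrics(truth, pred)
print(m)
```

Acceptance testing would then simply assert each value against its pre-specified target (e.g., `m["sensitivity"] >= 0.80`).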
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Hypothetical): Typically, several hundreds to thousands of relevant cases are used for a robust test set for AI/ML medical devices. For example, 500-1000 unique patient studies.
- Data Provenance (Hypothetical): Data from diverse geographic locations (e.g., multi-center studies including US, Europe, Asia) to ensure generalizability. Data would ideally be a mix of retrospective (for efficiency) and prospective (for real-world validation) collection. For initial clearance, often retrospectively collected data is used, but for broader clinical claims, prospective data is valuable.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts (Hypothetical): Typically 3 to 5 independent experts are common, sometimes more for complex or ambiguous cases.
- Qualifications (Hypothetical): Board-certified radiologists with specific subspecialty expertise related to the device's intended use (e.g., neuroradiologists for head MRI), with significant years of experience (e.g., 5-10+ years) in interpreting the relevant imaging studies.
4. Adjudication Method for the Test Set
- Adjudication Method (Hypothetical): Common methods include:
- Majority Rule (e.g., 2+1 or 3+1): If 2 out of 3, or 3 out of 4, experts agree, that serves as the consensus ground truth. If no majority, a senior expert or a consensus discussion among experts may be employed for final arbitration.
- Consensus Panel: Experts meet and discuss all discordant cases to reach a unanimous decision.
- Primary Reader + Adjudicator: One expert makes the initial read, and another adjudicates discordant cases or a percentage of cases for quality control.
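The "2+1"-style majority rule described above can be sketched as follows; the function name, labels, and arbitration fallback are illustrative assumptions, not a prescribed procedure.

```python
from collections import Counter

def adjudicate(reads: list, arbitrator: str = None) -> str:
    """Majority-rule adjudication of independent expert reads.

    Returns the strict-majority label if one exists; otherwise falls
    back to a senior arbitrator's read (modeling a '2+1' scheme where
    a tie or three-way split is resolved by an additional expert).
    """
    label, count = Counter(reads).most_common(1)[0]
    if count > len(reads) / 2:
        return label
    if arbitrator is None:
        raise ValueError("no majority and no arbitrator provided")
    return arbitrator

print(adjudicate(["positive", "positive", "negative"]))             # → positive
print(adjudicate(["positive", "negative"], arbitrator="negative"))  # → negative
```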
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- MRMC Study (Hypothetical for AI): Yes, for devices intended to assist human readers, an MRMC study is standard practice.
- Effect Size (Hypothetical): The effect size would quantify the improvement in diagnostic performance (e.g., AUC, sensitivity, specificity, accuracy) of human readers with AI assistance compared to without AI assistance. For example, an MRMC study might show that radiologists' diagnostic accuracy for a specific condition increased by X% (e.g., 5-10%) and/or their reading time decreased by Y% when using the AI tool.
6. If a Standalone (algorithm-only without human-in-the-loop performance) was done
- Standalone Performance (Hypothetical for AI): Yes, standalone performance is almost always assessed for AI algorithms to understand the intrinsic capability of the algorithm before combining it with human input. This would be reported against the adjudicated ground truth.
7. The Type of Ground Truth Used
- Type of Ground Truth (Hypothetical for AI):
- Expert Consensus: The most common for imaging-based AI, established by independent highly-qualified experts.
- Pathology: Biopsy-proven results, considered the gold standard for many disease states (e.g., cancer).
- Clinical Outcomes Data: Longitudinal patient follow-up, lab tests, or other clinical findings that confirm the presence or absence of a condition.
- Hybrid: A combination of the above, often employing pathology or clinical outcomes where available, and expert consensus for cases where definitive pathological or outcome data is not feasible.
8. The Sample Size for the Training Set
- Training Set Sample Size (Hypothetical for AI): This varies significantly depending on the complexity of the task, the variety of conditions, and the imaging modality. It could range from thousands to hundreds of thousands or even millions of images/studies, often augmented with data synthesis techniques. For medical imaging, tens of thousands of studies are often used for robust training.
9. How the Ground Truth for the Training Set Was Established
- Training Set Ground Truth (Hypothetical for AI): This is typically less rigorously established than the test set ground truth due to the sheer volume of data, but must still be reliable. Methods include:
- Single Expert Annotation: A single trained expert (e.g., radiologist, technologist) labels the data.
- Automated Labeling from Reports: NLP tools might extract labels from existing clinical reports, followed by human review of a subset.
- Crowdsourcing (with Quality Control): For certain tasks, a large group of annotators might be used, with mechanisms for quality control and consensus.
- Referral to Clinical Records/EHR: Labels derived from the electronic health record (e.g., diagnosis codes, lab results) can serve as weak labels.
- Existing Clinical Labels: Utilizing labels already present in de-identified clinical datasets.
In summary, the provided document from the FDA clearance K192002 for the "Lucy Point-of-Care Magnetic Resonance Imaging Device" describes a hardware MRI system and its substantial equivalence to a predicate MRI system. It does not refer to any AI/ML component or associated performance studies.