Search Results
Found 3 results
510(k) Data Aggregation
(265 days)
The DormoTech Vlab is a physiological data recorder intended to collect and record data from multiple physiological channels for use by clinical software used in polysomnography and sleep disorder studies. It is intended for use by or on the order of a physician and is intended for use on adults in a supervised (hospital) or unsupervised (home) environment.
The DormoTech Vlab is a physiological data recorder intended to collect and record data from multiple physiological channels for use by clinical software used in polysomnography and sleep disorder studies. It consists of three components: the Head Unit, the Body Unit, and the Central Unit.
This FDA 510(k) summary describes the DormoTech Vlab, a physiological data recorder intended for polysomnography and sleep disorder studies. The acceptance criteria and the study proving it meets these criteria are detailed below.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document doesn't explicitly state "acceptance criteria" in a tabular format with specific thresholds. However, it presents a clinical study comparing the DormoTech Vlab to a "gold standard PSG study" by evaluating the agreement of various physiological parameters. The "conclusion" section of the clinical study acts as the implicit acceptance criteria, indicating that "Most parameters show good agreement between the devices, as indicated by the mean difference values close to zero and narrow limits of agreement." This implies that the device is considered acceptable if its measurements are statistically comparable to the gold standard.
The table below summarizes the device's performance based on the clinical study's agreement analysis (Bland-Altman statistics) between the DormoTech Vlab and a gold standard PSG (likely the NOX Sleep System, K192469, which is used as a reference).
Parameter | Mean Difference (Lower CI, Upper CI) | Upper Limit of Agreement (Lower CI, Upper CI) | Lower Limit of Agreement (Lower CI, Upper CI) |
---|---|---|---|
AHI (events/h) | -0.1927 (-1.323, 0.9372) | 6.823 (4.866, 8.78) | -7.209 (-9.166, -5.252) |
ODI (events/h) | -0.3244 (-1.108, 0.4597) | 4.544 (3.186, 5.902) | -5.193 (-6.551, -3.835) |
Snore (%) | 1.085 (-0.525, 2.523) | 10.01 (7.524, 12.5) | -7.843 (-10.33, -5.353) |
Sleep Latency (Minutes) | 4.653 (-0.9411, 10.25) | 38.01 (28.32, 47.7) | -28.7 (-38.39, -19.01) |
REM Latency (Minutes) | -15.64 (-25.95, -5.327) | 44.98 (27.12, 62.85) | -76.27 (-94.13, -58.4) |
Wake after Sleep Onset (Minutes) | -4.300 (-10.53, 1.926) | 31.77 (20.98, 42.55) | -40.37 (-51.15, -29.58) |
REM (%) | 0.4816 (-0.801, 1.764) | 8.129 (5.908, 10.35) | -7.166 (-9.388, -4.945) |
N1 (%) | 0.3263 (-1.839, 2.492) | 13.24 (9.488, 16.99) | -12.59 (-16.34, -8.836) |
N2 (%) | -2.484 (-5.084, 0.1152) | 13.02 (8.513, 17.52) | -17.98 (-22.49, -13.48) |
N3 (%) | 1.011 (-0.07236, 2.093) | 7.468 (5.592, 9.343) | -5.447 (-7.322, -3.571) |
Wake (%) | 0.1972 (-1.301, 1.696) | 8.877 (6.282, 11.47) | -8.483 (-11.08, -5.887) |
Total Sleep Time (Minutes) | 0.72222 (-6.869, 8.313) | 44.69 (31.55, 57.84) | -43.25 (-56.4, -30.1) |
Sleep Efficiency (%) | -0.03333 (-1.536, 1.47) | 8.673 (6.07, 11.28) | -8.74 (-11.34, -6.136) |
Position (Up) (%) | 0.01316 (-0.4649, 0.4913) | 2.864 (2.036, 3.692) | -2.838 (-3.666, -2.01) |
Position (Supine) (%) | 0.9974 (-0.3433, 2.338) | 8.991 (6.669, 11.31) | -6.997 (-9.319, -4.675) |
Position (Left) (%) | 0.3579 (-0.9967, 1.712) | 8.435 (6.089, 10.78) | -7.719 (-10.07, -5.373) |
Position (Right) (%) | -0.3974 (-1.61, 0.8149) | 6.831 (4.732, 8.931) | -7.626 (-9.726, -5.526) |
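The Bland-Altman quantities in the table above (mean difference, upper and lower limits of agreement, and their confidence intervals) can be reproduced from paired per-subject measurements with a short calculation. The sketch below is a minimal, generic Python illustration, not code from the submission; the function name and the example `vlab`/`psg` values are hypothetical.

```python
import numpy as np
from scipy import stats

def bland_altman(a, b, alpha=0.05):
    """Mean difference and 95% limits of agreement (with CIs) for paired data."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                       # per-subject difference (e.g., Vlab minus reference PSG)
    n = diff.size
    mean_diff = diff.mean()
    sd = diff.std(ddof=1)
    t = stats.t.ppf(1 - alpha / 2, n - 1)

    upper_loa = mean_diff + 1.96 * sd  # limits of agreement: mean difference +/- 1.96 SD
    lower_loa = mean_diff - 1.96 * sd

    se_mean = sd / np.sqrt(n)          # standard error of the mean difference
    se_loa = sd * np.sqrt(3.0 / n)     # approximate SE of each limit (Bland & Altman, 1986)

    return {
        "mean_diff": (mean_diff, mean_diff - t * se_mean, mean_diff + t * se_mean),
        "upper_loa": (upper_loa, upper_loa - t * se_loa, upper_loa + t * se_loa),
        "lower_loa": (lower_loa, lower_loa - t * se_loa, lower_loa + t * se_loa),
    }

# Hypothetical paired AHI values (events/h), for illustration only
vlab = [5.1, 12.3, 30.2, 8.7, 15.0, 22.4]
psg = [5.6, 11.8, 31.0, 9.4, 14.2, 23.1]
print(bland_altman(vlab, psg))
```

With the 47 paired subjects reported below, the t-based confidence intervals are close to the usual normal-approximation intervals.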
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 47 subjects.
- Data Provenance: Prospective clinical study conducted in two sleep labs in Israel:
- Shamir Medical Center – Be'er Ya'akov, Israel
- Millenium Sleep Clinic - Be'er Sheva, Israel
The study was "comparative, self-controlled, randomized, prospective study designed to assess the Vlab and compare its performance to a gold standard polysomnogram (PSG) conducted over 1 night in a sleep lab."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not explicitly state the number of experts used for ground truth establishment or their specific qualifications (e.g., "radiologist with 10 years of experience"). However:
- The "gold standard polysomnogram (PSG)" implies scoring by trained sleep technologists or physicians, as PSG analysis typically requires specialized expertise.
- The "Conclusion" section mentions "the role of human scoring," which suggests human experts were involved in generating the ground truth from the gold standard PSG data.
4. Adjudication Method for the Test Set
The document does not specify the adjudication method used for the test set (e.g., 2+1, 3+1, none). It only mentions that the study compared the Vlab's performance to a "gold standard PSG study" with implied human scoring.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No. An MRMC comparative effectiveness study as typically described (measuring human reader improvement with AI assistance versus without it) was not done. The study focused on the standalone performance of the DormoTech Vlab device in comparison to a gold standard PSG system, not on how the Vlab system assists human interpretation.
6. Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance)
Yes, a standalone performance study was done. The clinical study directly compared the measurements obtained from the DormoTech Vlab ("the device") to those from a "gold standard PSG study." The Vlab device collects and records physiological data for use by "clinical software," and the performance reported is of the device's data collection compared to the gold standard, rather than evaluating the accuracy of any integrated AI for interpretation or how it assists a human.
The 510(k) summary states: "The subject and predicate device are sensor arrays for use in collecting and transmitting data from sleep studies that is analyzed by automated, FDA-cleared software, which is not part of this submission." This reinforces that the Vlab's performance study is focused on its ability to acquire signals accurately, which is then fed into other (cleared) software for analysis.
7. Type of Ground Truth Used
The ground truth used was established via a "gold standard PSG study." This gold standard involves the comprehensive recording of physiological signals during sleep, which are then typically scored and interpreted by trained professionals according to established clinical guidelines (e.g., AASM rules). The measurements from these gold standard PSG studies (e.g., AHI, sleep stages) served as the reference against which the DormoTech Vlab's measurements were compared.
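To make the reported parameters concrete, the summary measures in the agreement table (total sleep time, sleep efficiency, sleep latency, WASO, stage percentages, and AHI) are conventionally derived from an expert-scored hypnogram and an event count under the AASM convention of 30-second epochs. The sketch below is a minimal, generic illustration of those definitions with hypothetical stage labels and an assumed event count; it is not code or data from the submission.

```python
import numpy as np

EPOCH_SEC = 30  # AASM scoring uses 30-second epochs

def sleep_summary(hypnogram, apnea_hypopnea_events):
    """Summary measures from a scored hypnogram.

    hypnogram: per-epoch stage labels, e.g. "W", "N1", "N2", "N3", "R" (R = REM)
    apnea_hypopnea_events: total count of scored apneas plus hypopneas
    """
    h = np.asarray(hypnogram)
    time_in_bed_min = h.size * EPOCH_SEC / 60.0

    asleep = h != "W"
    tst_min = asleep.sum() * EPOCH_SEC / 60.0                    # total sleep time
    sleep_efficiency = 100.0 * tst_min / time_in_bed_min

    first_sleep = int(np.argmax(asleep)) if asleep.any() else h.size
    sleep_latency_min = first_sleep * EPOCH_SEC / 60.0
    waso_min = (~asleep[first_sleep:]).sum() * EPOCH_SEC / 60.0  # wake after sleep onset

    stage_pct = {s: 100.0 * (h == s).sum() / max(asleep.sum(), 1)
                 for s in ("N1", "N2", "N3", "R")}               # percent of total sleep time

    ahi = apnea_hypopnea_events / (tst_min / 60.0) if tst_min else float("nan")
    return dict(TST_min=tst_min, sleep_efficiency_pct=sleep_efficiency,
                sleep_latency_min=sleep_latency_min, WASO_min=waso_min,
                AHI=ahi, **stage_pct)

# Hypothetical 2-hour hypnogram (240 epochs), for illustration only
example = ["W"] * 20 + ["N1"] * 10 + ["N2"] * 120 + ["N3"] * 60 + ["R"] * 30
print(sleep_summary(example, apnea_hypopnea_events=12))
```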
8. Sample Size for the Training Set
The document does not provide any information about a training set or its sample size. This is a performance study comparing the device to a gold standard, not a study describing the development or training of an AI algorithm within the DormoTech Vlab itself. The Vlab is described as a "physiological data recorder," and any software for analysis is mentioned as "clinical software used in polysomnography and sleep disorder studies" that is "FDA-cleared software, which is not part of this submission."
9. How the Ground Truth for the Training Set Was Established
As no information about a training set is provided, there is no description of how ground truth for a training set was established.
(116 days)
The MAGNETOM MR system is indicated for use as a magnetic resonance diagnostic device (MRDD), which produces transverse, sagittal, coronal, and oblique cross sectional images that display the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images may also be produced. Depending on the region of interest, contrast agents may be used.
These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist in diagnosis.
The subject device software version, syngo MR XA50A, can support the following two MRI systems:
- MAGNETOM Free.Max, which has been cleared with its initial software version syngo MR XA40A, through K210611 on July 1, 2021;
- MAGNETOM Free.Star, a new product.
With the introduction of MAGNETOM Free.Star, the Free. platform, our high-value MRI platform, is extended to two products with a field strength of 0.55 Tesla. The main difference between these two products is the bore size: MAGNETOM Free.Star is equipped with a 60 cm patient bore, while the MAGNETOM Free.Max is equipped with an 80 cm patient bore. The gradient system, body coil, and system cover for MAGNETOM Free.Star are modified based on those of the predicate device MAGNETOM Free.Max with syngo MR XA40A (K210611) to accommodate the smaller bore diameter. The other main components of the new device MAGNETOM Free.Star are the same as those of MAGNETOM Free.Max as cleared with K210611.
Apart from the hardware adaptations applied to MAGNETOM Free.Star for the smaller bore diameter, the new/modified hardware and software features of the subject devices compared to the predicate device MAGNETOM Free.Max with software version syngo MR XA40A (K210611, cleared on July 1, 2021) are listed below:
MAGNETOM Free.Max software version syngo MR XA50A
New/Modified Hardware
Common for both subject devices:
- Scanner User Interface (SUI): introduces the option of two SUI sets, one on each side of the scanner, whereas only one set on the left-hand side is available as the standard configuration of the predicate device MAGNETOM Free.Max with software version syngo MR XA40A (K210611). Swapping the orientation of the patient pictogram on Select&GO is supported in syngo MR XA50A.
- myExam 3D Camera: auto registration with detection of patient height, weight, and orientation is supported in the subject device software version syngo MR XA50A. The hardware remains unchanged as cleared with K210611 on July 1, 2021.
- New Patient Video: A new patient video with 1920×1080 pixels is introduced.
Applicable to the following subject device(s) | MAGNETOM Free.Max | MAGNETOM Free.Star |
---|---|---|
New Local Coils | Contour M Coil | Contour M Coil |
New Patient table | High-load patient table: a new fixed patient table with vertical movement for heavy-load patients is introduced. | N/A |
Software Features
Common for both subject devices:
New Software Platform/Workflow
myExam Autopilot is extended to support the shoulder body region:
- myExam Shoulder Autopilot: it helps users to automate a shoulder examination.
Migrated Software Features
- EP2D FID: Single-shot FID EPI pulse sequence type optimized for perfusion imaging in the brain.
- Inline Perfusion: Automatic real-time calculation of parameter maps with Inline technology based on image data acquired with the ep2d fid pulse sequence type (a generic sketch of this kind of calculation follows below).
- Access-i: Provides an interface to enable the connection of a 3rd-party workstation to the MR syngo Acquisition Workplace via a network router and a secure local network connection.
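For context on the Inline Perfusion feature, relative perfusion parameter maps from a dynamic gradient-echo EPI series are conventionally obtained by converting the signal-time course to a change in R2* and summing over the bolus passage. The sketch below is a generic, textbook-style illustration of that idea, not Siemens' Inline implementation; the echo time, baseline window, and array shapes are assumptions.

```python
import numpy as np

def relative_cbv_map(series, te_s=0.030, n_baseline=10):
    """Relative CBV-like parameter map from a dynamic gradient-echo EPI series.

    series: 3-D array (time, rows, cols) of magnitude signal intensities
    te_s: echo time in seconds (assumed value)
    n_baseline: number of pre-bolus time points used as the signal baseline
    """
    series = np.asarray(series, float)
    s0 = series[:n_baseline].mean(axis=0)        # pre-contrast baseline signal per voxel
    eps = 1e-6
    delta_r2s = -np.log(np.clip(series / (s0 + eps), eps, None)) / te_s  # change in R2*(t)
    # Relative CBV is proportional to the area under the delta-R2* curve;
    # a simple sum serves as the integral here because the map is only relative.
    return delta_r2s.sum(axis=0)

# Hypothetical 60-timepoint, 64x64 series with a simulated bolus-induced signal drop
rng = np.random.default_rng(0)
demo = 1000.0 + 5.0 * rng.standard_normal((60, 64, 64))
demo[20:35] -= 150.0
print(relative_cbv_map(demo).shape)  # (64, 64)
```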
Modified Software Platform/Workflow
Modified Scan assistance: modified guidance for off-center imaging is provided to users who encounter a scan suspension caused by a shim volume that is too far off-center.
The provided text is a 510(k) summary for the MAGNETOM Free.Max and MAGNETOM Free.Star with syngo MR XA50A. This document asserts substantial equivalence to a predicate device (MAGNETOM Free.Max with syngo MR XA40A) and does not describe specific acceptance criteria or a study designed to prove the device meets those criteria in detail as requested.
Instead, it states that:
- "The results from each set of tests demonstrate that the devices perform as intended and are thus substantially equivalent to the predicate device to which it has been compared."
- "No clinical tests were conducted to support substantial equivalence for the subject device; however, as stated above, sample clinical images were provided."
- "The device labeling contains instructions for use and any necessary cautions and warnings to ensure safe and effective use of the device."
Therefore, it is not possible to provide the specific details requested in your prompt based solely on the provided text, as this document focuses on demonstrating substantial equivalence to a previously cleared device through non-clinical testing, rather than reporting on a study with detailed acceptance criteria for standalone performance or comparative effectiveness.
The document mentions non-clinical tests conducted, which include:
- Sample clinical images: To evaluate coils, new and modified software features, and pulse sequence types.
- Performance bench test: SNR and image uniformity measurements for the coils, and heating measurements for the coils (a generic sketch of the SNR and uniformity calculations appears below).
- Software verification and validation: Mainly for new and modified software features.
These tests were conducted in accordance with guidance documents like "Guidance for Submission of Premarket Notifications for Magnetic Resonance Diagnostic Devices" and "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices."
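As an illustration of the kind of bench measurements referenced above, SNR is commonly estimated from two identical phantom acquisitions with the difference-image method, and image uniformity as the percent integral uniformity over a region of interest. The sketch below follows these widely used NEMA-style definitions in a simplified, generic form (no image filtering or phantom segmentation); it is not the protocol from the submission, and the phantom images and ROI are hypothetical.

```python
import numpy as np

def snr_difference_method(img1, img2, roi):
    """SNR from two identical acquisitions of a uniform phantom (difference-image method)."""
    signal = 0.5 * (img1[roi].mean() + img2[roi].mean())
    # Noise std from the difference image, divided by sqrt(2) because
    # subtracting two images doubles the noise variance.
    noise = (img1 - img2)[roi].std(ddof=1) / np.sqrt(2.0)
    return signal / noise

def percent_integral_uniformity(img, roi):
    """PIU = 100 * (1 - (max - min) / (max + min)) over the ROI."""
    vals = img[roi]
    smax, smin = vals.max(), vals.min()
    return 100.0 * (1.0 - (smax - smin) / (smax + smin))

# Hypothetical phantom images and ROI, for illustration only
rng = np.random.default_rng(1)
base = np.full((128, 128), 200.0)
roi = np.zeros(base.shape, dtype=bool)
roi[32:96, 32:96] = True
img_a = base + 2.0 * rng.standard_normal(base.shape)
img_b = base + 2.0 * rng.standard_normal(base.shape)
print(snr_difference_method(img_a, img_b, roi), percent_integral_uniformity(img_a, roi))
```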
In summary, the provided document does not contain the specific information requested in your prompt regarding detailed acceptance criteria and a study proving the device meets them, sample sizes, expert qualifications, adjudication methods, or MRMC study results for human reader improvement. The document focuses on demonstrating substantial equivalence through non-clinical testing against established standards and predicate devices.
(128 days)
Your MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
Your MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
MAGNETOM Vida, MAGNETOM Sola, MAGNETOM Lumina, MAGNETOM Altea with software syngo MR XA31A includes new and modified hardware and software compared to the predicate device, MAGNETOM Vida with software syngo MR XA20A.
This document describes the Siemens MAGNETOM MR system (various models) with syngo MR XA31A software, and it does not describe an AI device. The information provided is a 510(k) summary for a Magnetic Resonance Diagnostic Device (MRDD). The "Deep Resolve Sharp" and "Deep Resolve Gain" features are mentioned as using "trained convolutional neuronal networks" but the document does not provide details on acceptance criteria or studies specific to the AI components as requested.
Therefore, many of the requested items (e.g., sample sizes for training/test sets for AI, expert consensus for ground truth, MRMC studies) cannot be extracted from this document because it is primarily focused on the substantial equivalence of the overall MR system and its general technological characteristics, not a specific AI algorithm requiring detailed performance studies against a clinical ground truth.
However, I can extract the available information, especially concerning the "Deep Resolve Sharp" and "Deep Resolve Gain" features, and note where the requested information is not present.
Here's the breakdown of available information, with specific answers to your questions where possible:
1. A table of acceptance criteria and the reported device performance
The document does not specify quantitative acceptance criteria for the "Deep Resolve Sharp" or "Deep Resolve Gain" features, nor does it present a table of reported device performance metrics for these features in the context of clinical accuracy or diagnostic improvement specifically. The performance testing mentioned is general for the entire system ("Image quality assessments," "Performance bench test," "Software verification and validation"), concluding that devices "perform as intended and are thus substantially equivalent."
2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not explicitly stated for specific features like "Deep Resolve Sharp" or "Deep Resolve Gain." The document broadly mentions "Sample clinical images" were used for "Image quality assessments."
- Data Provenance (Country/Retrospective/Prospective): Not specified in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not specified. The document states "Image quality assessments by sample clinical images" and that the "images...when interpreted by a trained physician yield information that may assist in diagnosis," but it does not detail the number or qualifications of experts involved in these assessments for specific software features or for establishing ground truth for any AI component.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
An MRMC study was not described for the "Deep Resolve Sharp" or "Deep Resolve Gain" features or any other AI component. The document references clinical publications for some features (e.g., Prostate Dot Engine, GRE_WAVE, SVS_EDIT) but these are general publications related to the underlying clinical concepts or techniques, not comparative effectiveness studies of the system's AI features versus human performance. The statement "No additional clinical tests were conducted to support substantial equivalence for the subject devices" reinforces this.
6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done
While "Deep Resolve Sharp" and "Deep Resolve Gain" involve "trained convolutional neuronal networks," the document does not describe standalone performance studies for these algorithms. Their inclusion is framed as an enhancement to the overall MR system's image processing capabilities, rather than a separate diagnostic AI tool. The stated purpose of Deep Resolve Sharp is to "increases the perceived sharpness of the interpolated images" and Deep Resolve Gain "improves the SNR of the scanned images," both being image reconstruction/enhancement features.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not specified for any AI-related features. For general image quality assessment, the "trained physician" is mentioned as interpreting images to assist in diagnosis, implying clinical interpretation, but no formal ground truth establishment process is detailed.
8. The sample size for the training set
Not specified for the "trained convolutional neuronal networks" used in "Deep Resolve Sharp" or "Deep Resolve Gain."
9. How the ground truth for the training set was established
Not specified.