510(k) Data Aggregation
K241882
Trade/Device Name: Fetal & Maternal Monitor (F15A, F15A Air)
Regulation Number: 21 CFR 884.2740
Regulation Name: Perinatal monitoring system
Fetal & Maternal Monitor (Model: F15A, F15A Air) is intended for providing Non-Stress testing or fetal monitoring for pregnant women from the 28th week of gestation. It is intended to be used only by trained and qualified personnel in antepartum examination rooms, labor and delivery rooms.
Fetal & Maternal Monitor (Model: F15A, F15A Air) is intended for real time monitoring of fetal and maternal physiological parameters, including non-invasive monitoring and invasive monitoring:
Non-invasive physiological parameters:
- Maternal heart rates (MHR)
- Maternal ECG (MECG)
- Maternal temperature (TEMP)
- Maternal oxygen saturation (SpO2) and pulse rates (PR)
- Fetal heart rates (FHR)
- Fetal movements (FM)
- FTS-3
Note: SpO2 and PR are not available in F15A Air.
Invasive physiological parameters:
- Uterine activity
- Direct ECG (DECG)
The F15A series fetal and maternal monitor can monitor multiple physiological parameters of the fetus/mother in real time. F15A series can display, store, and print patient information and parameters, provide alarms of fetal and maternal parameters, and transmit patient data and parameters to Central Monitoring System.
The primary features of the F15A series fetal and maternal monitors are the non-invasive and invasive physiological parameter measurements listed above.
The provided FDA 510(k) clearance letter and summary for the Fetal & Maternal Monitor (F15A, F15A Air) do not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and the study that proves the device meets them.
The document focuses primarily on demonstrating substantial equivalence to a predicate device (Edan Instruments, Inc., F9 Express Fetal & Maternal Monitor, K173042) through comparison of intended use, technological characteristics, and conformance to various safety and performance standards. It mentions "functional and system level testing to validate the performance of the devices" and "results of the bench testing show that the subject device meets relevant consensus standards," but it does not specify quantitative acceptance criteria for each individual physiological parameter (e.g., FHR accuracy, SpO2 accuracy) nor the specific results of those tests beyond stating that they comply with standards.
Specifically, the document does not include information on:
- A table of acceptance criteria with specific quantitative targets for each parameter and the reported device performance values against those targets. It only states compliance with standards.
- Sample sizes used for a "test set" in the context of clinical performance evaluation (it mentions "bench testing," but this is typically laboratory-based and doesn't involve patient data in a "test set" sense for AI/algorithm performance validation).
- Data provenance for such a test set (e.g., country of origin, retrospective/prospective).
- Number or qualifications of experts used to establish ground truth.
- Adjudication methods.
- Multi-Reader Multi-Case (MRMC) studies or human reader improvement data with AI assistance.
- Standalone (algorithm-only) performance, as this is a monitoring device, not a diagnostic AI algorithm.
- Type of ground truth (beyond "bench testing" which implies engineered signals or controlled environments).
- Sample size for a training set or how ground truth for a training set was established. This device is a traditional medical device, not an AI/ML-driven diagnostic or interpretative algorithm in the way your request implies.
Therefore, based solely on the provided text, I can only address what is present or infer what is missing.
Here's a breakdown based on the available information:
Analysis of Acceptance Criteria and Performance Testing based on Provided Document
The provided 510(k) summary focuses on demonstrating substantial equivalence to a predicate device (F9 Express Fetal & Maternal Monitor, K173042) by showing that the new device (F15A, F15A Air) has the same intended use and fundamentally similar technological characteristics, with any differences not raising new safety or effectiveness concerns.
1. A table of acceptance criteria and the reported device performance
The document does not provide a specific table with quantitative acceptance criteria for each physiological parameter (e.g., FHR accuracy, SpO2 accuracy) and the corresponding reported performance values obtained in testing. Instead, it states that the device was assessed for conformity with relevant consensus standards. For example, it lists:
- IEC 60601-2-37:2015: Particular requirements for the basic safety and essential performance of ultrasonic medical diagnostic and monitoring equipment (relevant for FHR).
- ISO 80601-2-61:2017+A1:2018: Particular requirements for basic safety and essential performance of pulse oximeter equipment (relevant for SpO2).
- ISO 80601-2-56:2017+A1:2018: Particular requirements for basic safety and essential performance of clinical thermometers for body temperature measurement (relevant for TEMP).
- IEC 60601-2-27:2011: Particular requirements for the basic safety and essential performance of electrocardiographic monitoring equipment (relevant for MECG/DECG).
Acceptance Criteria (Inferred from standards compliance): The acceptance criteria are implicitly the performance requirements specified within these listed consensus standards. These standards set limits for accuracy, precision, response time, and other performance metrics for each type of measurement.
Reported Device Performance: The document states: "The results of the bench testing show that the subject device meets relevant consensus standards." This implies that the measured performance statistics (e.g., accuracy, bias, precision) for each parameter fell within the acceptable limits defined by the respective standards. However, the specific measured values are not provided in this summary.
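The pass/fail logic implied by standards compliance can be sketched as a simple tolerance check. The parameter names and tolerance values below are illustrative placeholders only; the actual limits come from the listed consensus standards and are not stated in the 510(k) summary.

```python
# Hypothetical sketch: checking bench-measured parameter errors against
# assumed tolerance limits. The limits below are illustrative placeholders,
# NOT the actual requirements of IEC 60601-2-37, ISO 80601-2-61, etc.

ASSUMED_LIMITS = {
    "FHR_bpm": 2.0,    # placeholder: max allowed FHR error in bpm
    "SpO2_pct": 3.0,   # placeholder: max allowed SpO2 error in percent
    "TEMP_degC": 0.3,  # placeholder: max allowed temperature error in deg C
}

def bench_pass(measured_errors: dict) -> dict:
    """Return per-parameter pass/fail against the assumed limits."""
    return {
        param: abs(err) <= ASSUMED_LIMITS[param]
        for param, err in measured_errors.items()
    }

results = bench_pass({"FHR_bpm": 1.4, "SpO2_pct": 2.1, "TEMP_degC": 0.5})
print(results)  # FHR and SpO2 within limits; TEMP exceeds the placeholder limit
```

In a real submission each tolerance would be traced to a clause of the corresponding standard, and "meets relevant consensus standards" means every such check passed.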
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document mentions "Bench Testing" which implies laboratory-based testing using simulators, controlled signals, or phantoms, rather than a "test set" involving patient data. There is no information provided regarding:
- Sample size (e.g., number of recordings, duration of recordings, number of simulated cases) for the bench tests for each parameter.
- Data provenance (e.g., country of origin, retrospective or prospective) as this is not a study involving patient data collection for performance validation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This is not applicable and not provided. For a traditional physiological monitor, ground truth for bench testing is typically established using:
- Calibrated reference equipment/simulators: e.g., ECG simulators to generate known heart rates, SpO2 simulators to generate known oxygen saturation levels.
- Physical standards/phantoms: e.g., temperature baths at known temperatures.
- Known physical properties: e.g., precise weights for pressure transducers.
Clinical experts are not involved in establishing ground truth for bench performance of these types of physiological measurements.
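The simulator-based ground truth described above amounts to comparing device readings against a programmed reference value. The sketch below illustrates that comparison; all numbers are invented for illustration and are not from the submission.

```python
import statistics

# Hypothetical sketch: ground truth from a calibrated simulator.
# An ECG/ultrasound simulator is programmed to a known heart rate, and the
# device's repeated readings are compared against that reference.
# All values here are invented for illustration.

reference_hr = 140.0  # bpm programmed into the simulator (the "ground truth")
device_readings = [139.0, 140.5, 141.0, 139.5, 140.0]  # bpm reported by device

errors = [r - reference_hr for r in device_readings]
bias = statistics.mean(errors)        # systematic offset from the reference
precision = statistics.stdev(errors)  # spread of repeated readings
rmse = statistics.mean([e * e for e in errors]) ** 0.5  # overall error

print(f"bias={bias:+.2f} bpm, sd={precision:.2f} bpm, rmse={rmse:.2f} bpm")
```

Statistics like these, computed per parameter, are what would be compared against the standards' accuracy limits in bench testing.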
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This is not applicable and not provided. Adjudication methods are relevant for human expert review of complex clinical data (e.g., medical images for AI validation) to establish a consensus ground truth. For bench testing of physiological monitors, ground truth is objectively determined by calibrated instruments or defined physical parameters.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
This is not applicable and not provided. An MRMC study is typically performed to evaluate the diagnostic accuracy of AI-assisted human interpretations versus unassisted human interpretations for AI-driven diagnostic devices. The Fetal & Maternal Monitor is a physiological monitoring device, not an AI-assisted diagnostic imaging or interpretation system. It measures and displays physiological parameters; it does not provide AI-driven assistance for human "readers" to interpret complex clinical information.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The device is a monitor that directly measures physiological parameters. It is not an "algorithm only" device in the sense of an AI model providing a diagnostic output. Its performance (e.g., FHR accuracy) is its standalone performance, as it directly measures these parameters. The document states "functional and system level testing to validate the performance of the devices," which would represent this type of standalone performance for the measurement functionalities. However, specific quantitative results are not given, only compliance with standards.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
As explained in point 3, the ground truth for bench testing of physiological monitors is established using calibrated reference equipment/simulators and physical standards.
8. The sample size for the training set
This is not applicable and not provided. This device is a traditional physiological monitor, not a machine learning model that requires a "training set." Its algorithms for parameter measurement are based on established physiological principles and signal processing techniques, not on statistical learning from large datasets.
9. How the ground truth for the training set was established
This is not applicable and not provided for the same reasons as point 8.
Saint Paul, Minnesota 55114
Re: K241368
Trade/Device Name: Sonicaid Team3
Regulation Number: 21 CFR 884.2740
The Team3 fetal monitors are indicated for use by trained healthcare professionals in non-invasive and invasive monitoring of physiological parameters in pregnant women and fetuses, during the antepartum and intrapartum periods of pregnancy. The Team3 fetal monitors are intended for pregnant women from the 28th week of gestation, through to term and delivery. The devices are intended for use in clinical and hospital-type facilities.
Sonicaid Team3 Antepartum is suitable for use when there is a need to monitor the following physiological applications:
- Single or twin fetal heart rates by means of ultrasound
- Uterine activity externally sensed
- Fetal movement maternally sensed and externally via ultrasound
- Maternal heart rate and oxygen saturation via pulse oximetry
- Maternal non-invasive blood pressure
- CTG analysis that advises whether a number of defined criteria indicative of a normal cardiotocograph record have been met
Sonicaid Team3 Intrapartum is suitable for use when there is a need to monitor the following physiological applications:
- Single or twin fetal heart rates by means of ultrasound and/or FECG
- Maternal heart rate via ECG electrodes
- Uterine activity externally or internally sensed
- Fetal movement maternally sensed and externally via ultrasound
- Maternal heart rate and oxygen saturation via pulse oximetry
- Maternal non-invasive blood pressure
The Sonicaid Team3, subject device, is a fetal monitoring device designed for perinatal monitoring and includes a software function, the Dawes-Redman CTG Analysis, previously cleared under K992607. The subject device provides non-invasive and invasive monitoring of physiological parameters in pregnant women and fetuses during antepartum and intrapartum periods. The subject device includes systems and accessories intended to perform perinatal monitoring as aligned with product code HGM.
Features included in the subject device are the Dawes-Redman analysis, used to assess clinically indicated antepartum cardiotocographs (CTGs) in pregnancies from 26 weeks gestation onwards, assisting physicians in identifying normal and nonreassuring traces. The Dawes-Redman software is embedded in the subject device, ensuring integration with the existing hardware. The device is not intended for use in latent or established labor due to the influence of additional factors such as labor contractions and pharmacological agents.
The provided text indicates that the Sonicaid Team3 fetal monitor has incorporated the Dawes-Redman CTG Analysis software, which was previously cleared under K992607, into its hardware. The submission for K241368 aims to demonstrate substantial equivalence by addressing this integration.
However, the provided document does not contain the specific acceptance criteria or performance study details for the Dawes-Redman CTG Analysis software as requested in the prompt. The "Performance Data" section (page 10), under "Software Performance Testing," generically states: "Software verification and validation testing were conducted, and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, 'Guidance for the Content for the Premarket Submissions for Software Contained in Medical Devices'. The software for this device was considered as a 'major' level of concern."
This statement confirms that software testing was performed and documentation provided, and that the software was considered "major" in terms of concern, but it does not include the acceptance criteria, reported performance, sample size, ground truth establishment, expert qualifications, or MRMC study details.
Therefore, I cannot fulfill all parts of your request based on the provided text. I can only infer what was stated:
Here's what can be extracted/inferred from the provided text, and what cannot:
1. A table of acceptance criteria and the reported device performance:
* Cannot be provided. The document states software V&V was performed, but does not specify the acceptance criteria for the Dawes-Redman CTG Analysis or the performance metrics achieved against those criteria.
2. Sample size used for the test set and the data provenance:
* Cannot be provided. The document does not specify the sample size for the test set or the provenance of the data (e.g., country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
* Cannot be provided. The document does not mention the number or qualifications of experts used for ground truth establishment.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
* Cannot be provided. The document does not describe any adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
* Cannot be provided. The document specifically mentions the Dawes-Redman CTG Analysis assists physicians in "identifying normal and nonreassuring traces," which implies a human-in-the-loop scenario. However, it does not state whether an MRMC study was performed or any effect size related to human reader improvement with AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
* Cannot be explicitly confirmed or denied. While the indication for use states it "assists physicians," the document does not detail individual study types (standalone vs. human-in-the-loop).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
* Cannot be provided. The document does not specify the type of ground truth used to evaluate the Dawes-Redman CTG Analysis.
8. The sample size for the training set:
* Cannot be provided. The document does not mention any details about a training set for the software.
9. How the ground truth for the training set was established:
* Cannot be provided. Since no training set details are given, how its ground truth was established is also not available.
In summary, the provided FDA premarket notification document for K241368 focuses on demonstrating substantial equivalence by integrating a previously cleared software (Dawes-Redman CTG Analysis) into new hardware. It confirms that general software verification and validation were conducted according to FDA guidance for "major" level of concern software and cybersecurity testing was performed. However, it does not include the detailed performance study results, acceptance criteria, sample sizes, ground truth methodologies, or expert qualifications for the Dawes-Redman CTG analysis itself. Such specific performance data would typically be found in more detailed test reports, which are not part of this summary document.
7510002 Israel
Re: K241009
Trade/Device Name: PeriCALM Patterns 3.0
Regulation Number: 21 CFR 884.2740
PeriCALM Patterns is intended for use to provide additional secondary information as an adjunct to qualified clinical decision-making during antepartum or intrapartum obstetrical monitoring at ≥32 weeks gestation for annotation and summary of the fetal heart rate recording for baseline, accelerations and the uterine pressure recording for contractions.
WARNING: Evaluation of FHR during labor and patient management decisions should not be based solely on PeriCALM Patterns annotations or summaries and should include inspection of the fetal monitor tracing and consideration of all pertinent clinical information.
PeriCALM Patterns 3.0 is a software device to be used with fetal/maternal monitoring systems. The subject device is a software algorithm to detect, label and measure features (accelerations, decelerations, baseline, and contractions) in electronic fetal monitoring (EFM) records. PeriCALM Patterns 3.0 uses fetal monitor data imported through an interface with an external source or with a third-party clinical information system. PeriCALM Patterns can function in a networked environment or as a standalone workstation.
The subject device includes present-day Long Short-Term Memory (LSTM) neural networks to identify segments of a fetal heart rate tracing corresponding to accelerations, decelerations, and baseline, as well as uninterpretable segments where tracing is missing. Contraction detection is achieved using the same processes as the predicate device.
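The LSTM recurrence behind this kind of segment labeling can be sketched as a single cell step. The code below is a generic textbook LSTM cell in pure Python, not PeriGen's actual network; all weights, dimensions, and inputs are toy values.

```python
import math

# Generic, textbook LSTM cell -- a sketch of the recurrence that lets an LSTM
# carry context along a fetal heart rate tracing. This is NOT PeriGen's
# network; weights and inputs are toy values for illustration only.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step with scalar input and hidden state.
    w maps each gate ('i', 'f', 'o', 'g') to weights (w_x, w_h, bias)."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    c = f * c_prev + i * g   # new cell state: keep some memory, add some new
    h = o * math.tanh(c)     # new hidden state
    return h, c

w = {k: (0.5, 0.1, 0.0) for k in ("i", "f", "o", "g")}
h, c = 0.0, 0.0
for x in [0.2, 0.8, -0.3, 0.5]:  # toy normalized FHR samples
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

In a segment-labeling setup, the per-timestep hidden states would feed a classifier over {acceleration, deceleration, baseline, uninterpretable}; only the recurrence itself is shown here.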
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Metric / Parameter | Acceptance Criteria (Non-inferiority Margin) | Reported Device Performance
---|---|---
Acceleration & Deceleration Detection | |
Sensitivity (Se) | Non-inferiority margin of 15% (device Se not worse than the Clinician Readers' Se by more than 15%) for 12 co-primary endpoints (term & preterm) | Passed all acceptance criteria for pattern detection (specific values not provided in the text).
Specificity (Sp) | Non-inferiority margin of 15% (device Sp not worse than the Clinician Readers' Sp by more than 15%) for 12 co-primary endpoints (term & preterm) | Passed all acceptance criteria for pattern detection.
Positive Predictive Value (PPV) | Non-inferiority margin of 15% (device PPV not worse than the Clinician Readers' PPV by more than 15%) for 12 co-primary endpoints (term & preterm) | Passed all acceptance criteria for pattern detection.
FHR Baseline Measurements | |
Bias (mean difference) | Non-inferiority margin of 10% (for 4 co-primary endpoints) | Passed all acceptance criteria (specific values not provided).
95% CI of bias | ≤ 5 bpm (for 4 co-primary endpoints) | Passed all acceptance criteria (specific values not provided). The text states: "The acceptance criteria for 95% CI of Bias for the subject device was set to threshold of ≤ 5 bpm."
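The two kinds of acceptance checks in the table can be sketched as follows. The sensitivity values, sample size, and the simple Wald confidence interval are illustrative assumptions, not the study's actual statistics or methodology.

```python
import math
import statistics

# Hypothetical sketch of the two acceptance checks described above.
# All numbers, and the Wald-style confidence interval, are illustrative
# assumptions -- not the study's actual data or statistical method.

def noninferior(p_device, p_reader, n, margin=0.15, z=1.96):
    """Se/Sp/PPV check: True if the lower 95% CI bound of
    (device - clinician readers) stays above the -15% margin."""
    diff = p_device - p_reader
    se = math.sqrt(p_device * (1 - p_device) / n
                   + p_reader * (1 - p_reader) / n)
    return diff - z * se > -margin

def baseline_bias_ok(diffs_bpm, limit_bpm=5.0, z=1.96):
    """FHR baseline check: True if the 95% CI of the mean device-vs-truth
    baseline difference lies entirely within +/- limit_bpm."""
    m = statistics.mean(diffs_bpm)
    half = z * statistics.stdev(diffs_bpm) / math.sqrt(len(diffs_bpm))
    return abs(m) + half <= limit_bpm

# Invented numbers: device Se 0.88 vs readers' 0.90 over 500 events,
# and a handful of per-tracing baseline differences in bpm.
print(noninferior(0.88, 0.90, 500))            # True
print(baseline_bias_ok([1, -2, 0, 3, -1, 2]))  # True
```

The study would run one such test per co-primary endpoint; "passed all acceptance criteria" means every endpoint's check returned true.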
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: A total of 70 subjects were used in the study, with one tracing per subject. This included 30 preterm subjects and 40 term subjects.
- Data Provenance: Tracings were obtained from hospitals using a variety of fetal monitor models and manufacturers (Corometrics 170 and 250 series by GE HealthCare, Avalon FM50 and FM40 by Philips Medical, and S1 by Neoventa). The study was retrospective. The specific country of origin is not explicitly stated; PeriGen, Inc. is located in Israel, but the data were obtained through collaborations with US hospitals via various monitor OEMs.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: 3 experts (referred to as "Truthers") were used.
- Qualifications: They were specified as "a panel of 3 experts." While the specific medical specialty (e.g., Obstetrician) is implied by the context of "fetal heart rate tracings" and "obstetrical monitoring," their exact qualifications (e.g., years of experience, specific board certifications) are not detailed in the provided text.
4. Adjudication Method for the Test Set Ground Truth
- Adjudication Method: A majority opinion approach was used to resolve inter-observer variation among the 3 experts.
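The majority-opinion adjudication can be sketched as a per-segment vote among the three experts. The labels and segments below are invented for illustration; note that with three voters and more than two possible labels, a 1-1-1 split would need an explicit tie-break rule, which is not modeled here.

```python
from collections import Counter

# Sketch of majority-opinion adjudication among 3 "Truthers": each expert
# labels every segment, and the label chosen by at least 2 of 3 becomes the
# ground truth. Labels and segments are invented for illustration.

def adjudicate(labels_per_expert):
    """labels_per_expert: list of 3 equal-length label lists, one per expert."""
    ground_truth = []
    for votes in zip(*labels_per_expert):
        label, _count = Counter(votes).most_common(1)[0]
        ground_truth.append(label)
    return ground_truth

expert_a = ["accel", "decel", "baseline", "baseline"]
expert_b = ["accel", "baseline", "baseline", "decel"]
expert_c = ["accel", "decel", "decel", "decel"]
print(adjudicate([expert_a, expert_b, expert_c]))
# -> ['accel', 'decel', 'baseline', 'decel']
```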
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC study was done. The study design is described as "a retrospective study with multi-reader/multi-case technique."
- Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance: The study's primary objective was to demonstrate the non-inferiority of the device (PeriCALM Patterns 3.0) to a set of qualified and experienced Obstetrician Gynecologists (Clinician Readers) in terms of pattern detection and baseline measurement. It does not describe a study where human readers were assisted by AI and then compared to human readers without AI assistance. Therefore, an effect size of human improvement with AI assistance is not reported in this text.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Yes, implicitly. The study compared the "performance of PeriCALM Patterns 3.0" directly to "Ground Truth" and "Clinician Readers." This suggests the algorithm's performance was evaluated in isolation (without human interaction) in comparison to both human consensus and individual human expert performance. The metrics (Sensitivity, Specificity, PPV, Bias, LoA) are typical of standalone algorithm evaluations.
7. Type of Ground Truth Used
- Expert Consensus: The ground truth for this study was established by a panel of 3 experts, with a majority opinion approach used to resolve inter-observer variation.
8. Sample Size for the Training Set
- The provided text does not specify the sample size used for the training set. It only describes the test set.
9. How the Ground Truth for the Training Set Was Established
- The provided text does not specify how the ground truth for the training set was established. It only describes the ground truth establishment for the test set.
Trade/Device Name: Avalon CL Fetal & Maternal (F&M) Pod (866488), Avalon CL Fetal & Maternal (F&M) Patch (989803196341)
Regulation Number: 21 CFR 884.2740
The Avalon CL Fetal & Maternal (F&M) Pod & Patch is a device indicated for use by healthcare professionals in a clinical setting for non-invasive monitoring of maternal heart rate (aHR), fetal heart rate (aFHR), and uterine activity (aToco) in women who are at >36 completed weeks, in labor, with singleton pregnancy, using surface electrodes on the maternal abdomen.
The Avalon CL Fetal & Maternal (F&M) Pod and the Avalon CL Fetal & Maternal (F&M) Patch is a beltless battery-powered maternal-fetal monitoring system that non-invasively measures abdominal fetal heart rate (aFHR), abdominal uterine activity (aToco), and abdominal maternal heart rate (aHR). The Avalon CL Fetal & Maternal (F&M) Patch is a single-use disposable adhesive electrode patch designed to be affixed to the maternal abdomen. The Avalon CL Fetal & Maternal (F&M) Pod is a reusable device which, when connected to the Avalon CL Fetal & Maternal (F&M) Patch, picks up electrical signals and converts it to Short Range Radio (SRR). The Avalon CL Fetal & Maternal Pod communicates the data measurement values to the Avalon CL Base Station using Short-Range Radio (SRR). The Avalon CL Base Station in turn relays the information to the connected Philips Fetal-Maternal (FM) Monitor (i.e., FM20, FM30, FM40, and FM50).
The provided FDA 510(k) summary for the Philips Avalon CL Fetal & Maternal (F&M) Pod & Patch focuses heavily on demonstrating substantial equivalence to a predicate device through non-clinical testing and comparison of technical characteristics rather than a detailed clinical study report with specific acceptance criteria and performance metrics for the device's accuracy in monitoring FHR, MHR, and UA.
Therefore, much of the requested information regarding "acceptance criteria and the study that proves the device meets the acceptance criteria" in terms of clinical performance (e.g., accuracy, sensitivity, specificity, agreement with ground truth for FHR, MHR, and UA) is not explicitly detailed in this document. The document primarily discusses non-clinical tests for safety, electrical performance, and biocompatibility.
However, based on the information provided, here's what can be extracted and inferred:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a table with specific clinical performance acceptance criteria (e.g., accuracy ranges for FHR) and reported device performance from an effectiveness standpoint. Instead, it details non-clinical technical acceptance criteria related to safety, electrical performance, and biocompatibility, which the device met.
Criterion Category | Specific Criterion / Test | Acceptance Criterion (Implicit) | Reported Device Performance (Implicit)
---|---|---|---
Biocompatibility | Cytotoxicity (ISO 10993-5) | Met acceptance criteria as defined in test requirements | Met
 | Sensitization (ISO 10993-10) | Met acceptance criteria as defined in test requirements | Met
 | Irritation (ISO 10993-10) | Met acceptance criteria as defined in test requirements | Met
Electrical Safety | ANSI AAMI ES60601-1 | Compliance with standard for basic safety and essential performance | Passed
EMC/Wireless | IEC 60601-1-2 | Compliance with standard for electromagnetic disturbances | Passed
 | IEEE ANSI C63.27 | Compliance with standard for evaluation of wireless coexistence | Passed
 | IEC/TR 60601-4-2 | Compliance with standard for electromagnetic immunity | Passed
Alarm Systems | IEC 60601-1-8 | Compliance with standard for alarm systems | Passed
Battery Safety | IEC 62133-2 | Compliance with standard for lithium systems | Passed
Software/Firmware | FDA guidance compliance | Compliance with "Content of Premarket Submissions for Device Software Functions" | Documentation provided and reviewed
Cybersecurity | FDA guidance compliance | Compliance with "Cybersecurity in Medical Devices" guidance | Documentation provided and reviewed
Performance Bench | Inspection of labeling and pouch sealing | N/A (visual inspection) | Met
 | Impedance, tensile strength, pull-off force, noise level, conductivity, offset voltage, defibrillation overload (new and aged patches) | Met acceptance criteria as defined in test requirements | Met
 | In vivo testing: integrity, detachment/reattachment, and performance (impedance, noise level, MHR, conductivity) after shower and usage | Met acceptance criteria as defined in test requirements | Met
 | Peel-off force of each electrode and central sticker | Met acceptance criteria as defined in test requirements | Met
 | MHR/FHR/UA accuracy after storage at various temperatures | Met acceptance criteria as defined in test requirements | Met
 | Signal transmission continuity | Met acceptance criteria as defined in test requirements | Met
Regarding MHR/FHR/UA accuracy, the document states for "Performance Bench" that "MHR/FHR/UA accuracy after stored in room (23℃), high (32℃) and low (2-8℃) temperature" were conducted and "met the acceptance criteria as defined in the test requirements." However, the specific numerical acceptance criteria for accuracy (e.g., mean absolute difference, percentage agreement, etc.) and the reported numerical performance regarding MHR/FHR/UA accuracy are not provided in this summary. This suggests that these accuracy tests were likely bench tests under controlled conditions, not a clinical trial comparing device readings to a clinical ground truth.
2. Sample Size for Test Set and Data Provenance
The document does not explicitly mention a "test set" in the context of a clinical performance study with human subjects to evaluate the accuracy of FHR, MHR, and UA measurements. The in-vivo testing mentioned under "Performance Bench" refers only to "integrity, detachment/reattachment, and performance (impedance, noise level, MHR, conductivity) after shower and usage (8 hours/32 hours) for the patch (Novii Patch)." This does not sound like a large-scale clinical accuracy study.
Therefore, based on the provided text alone:
- Sample size for the test set: Not explicitly stated for clinical performance as commonly understood for device accuracy. The "in vivo testing" details are too limited to determine sample size or its direct relation to device accuracy claims.
- Data provenance: Not explicitly stated. The type of testing suggests it might be internal company testing rather than an independent clinical trial.
3. Number of Experts and Qualifications for Ground Truth
Given the lack of a detailed clinical performance study report, there is no information provided regarding the number or qualifications of experts used to establish ground truth for a clinical test set for FHR, MHR, or UA.
4. Adjudication Method
Again, due to the absence of a detailed clinical performance study, there is no information provided on any adjudication method (e.g., 2+1, 3+1) for a clinical test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No. The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it discuss human readers or AI assistance in this context. This device appears to be a monitoring system for physiological parameters, not an AI-assisted diagnostic imaging or interpretation tool.
6. Standalone Performance
The device itself is a "standalone" monitoring system in the sense that it performs its measurements (aHR, aFHR, aToco) via its electrodes and pod, then relays this data to a Philips Fetal-Maternal (FM) Monitor for display. The performance tests ("Performance Bench") assess the device's ability to measure these parameters. However, the exact "standalone" clinical accuracy metrics (e.g., sensitivity, specificity, accuracy vs. a gold standard) are not provided. The phrase "standalone performance" is generally associated with diagnostic algorithms, which doesn't seem to be the primary claim here.
7. Type of Ground Truth Used
For the non-clinical performance "MHR/FHR/UA accuracy after stored in room (23℃), high (32℃) and low (2-8℃) temperature," the type of ground truth used is not specified. It likely refers to controlled laboratory measurements against calibrated reference standards, rather than clinical ground truth like pathology, expert consensus, or outcomes data. For clinical performance data (which is not detailed), common ground truths for FHR, MHR, and UA would be internal fetal monitoring (IUPC for UA, fetal scalp electrode for FHR) or expert interpretation of existing monitoring tracings (though this isn't mentioned).
8. Sample Size for the Training Set
No information is provided about a "training set." This term is typically associated with machine learning or AI algorithm development. While the device uses signal processing (template matching, filtering, confidence tagging) to identify fECG and mECG complexes, the document does not describe the development or training of such algorithms or any associated data sets used for this purpose.
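The template matching mentioned here is commonly implemented as normalized cross-correlation of the abdominal signal against a beat template, with a refractory period to suppress double detections. The sketch below is a generic illustration of that idea only; it is not the device's proprietary algorithm, and all names and thresholds are assumptions.

```python
import numpy as np

def detect_complexes(signal, template, threshold=0.8, refractory=50):
    """Locate occurrences of a QRS-like template in a 1-D signal via
    normalized cross-correlation (illustrative sketch only)."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(t)
    scores = np.zeros(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores[i] = np.dot(w, t) / n   # correlation in [-1, 1]
    # peak picking: above threshold, locally maximal, outside refractory window
    peaks, last = [], -refractory
    for i, s in enumerate(scores):
        if s > threshold and i - last >= refractory:
            if s == scores[max(0, i - 5):i + 6].max():
                peaks.append(i)
                last = i
    return peaks
```

A confidence tag, as described in the summary, could then be derived from the correlation score at each detected peak.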
9. How Ground Truth for the Training Set Was Established
As no training set is discussed, there is no information provided on how ground truth for a training set was established.
In summary, the provided FDA summary focuses on demonstrating substantial equivalence through technical and non-clinical performance and safety testing. It lacks detailed clinical performance data (e.g., accuracy, sensitivity, specificity) against a clinical ground truth, specific sample sizes for clinical evaluations, or information about expert consensus or adjudication methods for such clinical data, which are typically found in clinical study reports for devices claiming diagnostic or interpretative capabilities.
(158 days)
Wauwatosa, WI 53226
Re: K231964 Trade/Device Name: Novii+ Wireless Patch System Regulation Number: 21 CFR 884.2740
Regulation Number: 21 CFR 884.2740
The Novii+ Pod is an antepartum and intrapartum Maternal/Fetal Monitor that non-invasively measures fetal heart rate (FHR), uterine activity (UA) and maternal heart rate (MHR). The Novii+ Pod acquires the FHR tracing from abdominal surface electrodes that pick up the fetal ECG (fECG) signal. Using the same surface electrodes, the Pod also acquires the UA tracing from the uterine electromyography (EMG) signal and the MHR tracing from the maternal ECG signal (mECG). The Pod is indicated for use on women who are at 34 weeks and 0/7 days and greater with singleton pregnancies, using surface electrodes on the maternal abdomen.
The Novii Patch is an accessory to the Novii+ Pod that connects directly to the Novii+ Pod and contains the surface electrodes that attach to the abdomen.
The Novii+ Interface is an accessory to the Novii+ Pod which provides a means of interfacing the wireless output of the Novii+ Pod to the transducer inputs of a Maternal/Fetal Monitor. The Novii+ Interface enables signals collected by the Novii+ Pod to be printed and displayed on a Maternal/Fetal Monitor and sent to a central network, if connected.
The Novii+ Pod Maternal/Fetal Monitor and its accessories are intended for use by healthcare professionals in a clinical setting.
The Novii+ Wireless Patch System (Novii+ system) is a battery-powered maternal-fetal monitoring system that measures abdominal fetal heart rate (FHR), abdominal uterine activity (UA), and abdominal maternal heart rate (MHR). The Novii+ Wireless Patch system is designed as an ambulatory device for the monitoring of a pregnant mother. The monitor enables the abdominal electrophysiological signal to be picked up from three different positions on the maternal abdomen using the 5 electrodes on the Novii Patch. The monitor filters the abdominal signals, converts the abdominal electrophysiological data into a digital format and then processes it in real time to extract the fetal heart rate, maternal heart rate and uterine activity. The result of the processing is transmitted via a Bluetooth connection to the Novii+ Interface device which is an accessory to the Novii+ Pod.
The Novii+ Pod (proposed device) is indicated for use on women who are at 34 weeks and 0/7 days and greater with singleton pregnancies with cephalic fetal presentation.
Here's a breakdown of the acceptance criteria and study details for the Novii+ Wireless Patch System, based on the provided FDA 510(k) summary:
1. Acceptance Criteria and Reported Device Performance
Parameter (Metric) | Acceptance Criteria (Lower limit of 95% two-sided CI) | Reported Device Performance | Outcome |
---|---|---|---|
FHR (PA) | >80% | 83.45% | PASS |
MHR (PA) | >80% | 97.26% | PASS |
UA (RI) | >80% | 100% | PASS |
UA (PPA) | >80% | 84.67% | PASS |
FHR Deming Slope | 0.958 - 1.042 (95% two-sided CI) | 1.02 | PASS |
FHR Deming Intercept | -10 to 10 BPM (95% two-sided CI) | -3.18 BPM | PASS |
MHR Deming Slope | 0.958 - 1.042 (95% two-sided CI) | 1.01 | PASS |
MHR Deming Intercept | -10 to 10 BPM (95% two-sided CI) | -1.18 BPM | PASS |
MHR RMSE (Novii vs. GS) |
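The Deming slope and intercept criteria in the table come from errors-in-variables regression, which, unlike ordinary least squares, allows measurement error in both the device and the reference method. A minimal sketch of the point estimates follows; this is the generic statistical formula, not the submitter's actual analysis code, and the confidence-interval computation used for the acceptance criteria is not shown in the summary.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression slope/intercept for paired measurements x
    (reference) and y (device). delta is the assumed ratio of the
    y-error variance to the x-error variance (1.0 = equal errors)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xbar, ybar = x.mean(), y.mean()
    sxx = ((x - xbar) ** 2).mean()
    syy = ((y - ybar) ** 2).mean()
    sxy = ((x - xbar) * (y - ybar)).mean()
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ybar - slope * xbar
    return slope, intercept
```

A perfect device would yield a slope of 1 and an intercept of 0, which is why the acceptance windows are centered on those values.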
(195 days)
Francisco, CA 94105
Re: K222327
Trade/Device Name: Bloomlife MFM-Pro Regulation Number: 21 CFR 884.2740
Common Name: Maternal-Fetal Monitor
Regulation Number: 21 CFR 884.2740
Classification: 884.2740
Bloomlife MFM-Pro is indicated for monitoring of maternal heart rate (MHR) and fetal heart rate (FHR) during the antepartum period for singleton pregnancies 32 weeks gestation or later.
It is to be used by healthcare professionals, clinics, physicians' offices, antepartum rooms, and in the patient's home on the order of a physician.
Bloomlife MFM-Pro is not intended for use in critical care situations or those patients hospitalized for or suspected to have preterm labor.
Bloomlife MFM-Pro is not intended to be used for antepartum monitoring (e.g., non-stress testing).
Bloomlife MFM-Pro is a non-invasive, wireless, external monitoring system used to measure fetal heart rate (FHR) and maternal heart rate (MHR) during the antepartum period on pregnant women with a singleton pregnancy at 32 weeks gestation or later. The healthcare professional applies the device to the patient and uses Bloomlife MFM-Pro to generate the Bloomlife MFM-Pro report that provides 5 minutes of fetal heart rate (FHR) and maternal heart rate (MHR) monitoring to the clinic. A typical Bloomlife MFM-Pro session is expected to take 12 minutes; approximately 7 minutes to perform system quality checks and 5 minutes to record data. Signal quality checks continue during recording to ensure data quality.
Bloomlife MFM-Pro consists of three main components: the Bloomlife Sensor, the Bloomlife App, and the Bloomlife Cloud.
The Bloomlife Sensor measures biopotential signals picked up on the abdominal surface using electrodes and transfers the data to the Bloomlife App via Bluetooth Low Energy.
The Bloomlife App is used by a healthcare professional to enter patient information, start/stop recording sessions, and get feedback on data quality and recording status during a recording. The App does not process or visualize data; it acts as a gateway for the raw data measured by the sensor.
The Bloomlife Cloud receives the data from the app, stores and processes the data into a Report, which is provided to the clinic via electronic fax. The Cloud Algorithm extracts the maternal heart rate from 50 bpm to 240 bpm within ±7 beats per minute, and fetal heart rate from 50 bpm to 240 bpm within ±10 beats per minute at a sample rate of 4 samples per second.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Metric | Acceptance Criteria (Pre-defined acceptable range of LoA) | Reported Device Performance (95% CI of LoA) |
---|---|---|
Fetal Heart Rate (FHR) | Not explicitly stated as a single value, but implied to be sufficient for an "acceptable range of LoA" | Lower Limit: -6.80 bpm; Upper Limit: 7.19 bpm |
Maternal Heart Rate (MHR) | Not explicitly stated as a single value, but implied to be sufficient for an "acceptable range of LoA" | Lower Limit: -2.00 bpm; Upper Limit: 2.93 bpm |
Note: The document states that the reported performance values were "within the pre-defined acceptable range of LoA for this pivotal study." However, the exact numerical thresholds for the acceptance criteria were not explicitly provided in the text. I've inferred that the reported values met these unstated criteria.
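The limits of agreement (LoA) reported above are Bland-Altman statistics: the mean device-minus-reference difference (bias) plus or minus 1.96 standard deviations of the paired differences. A minimal sketch of the standard computation is below; the study's exact method for deriving the 95% CI of each limit is not detailed in the summary.

```python
import numpy as np

def limits_of_agreement(device, reference, z=1.96):
    """Bland-Altman bias and 95% limits of agreement for paired
    device vs. reference measurements (standard textbook method)."""
    d = np.asarray(device, float) - np.asarray(reference, float)
    bias = d.mean()
    sd = d.std(ddof=1)          # sample standard deviation of differences
    return bias - z * sd, bias, bias + z * sd
```

With FHR differences of roughly -6.80 to 7.19 bpm, the reported limits sit well inside the +/-10 bpm extraction tolerance claimed for the Cloud Algorithm.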
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (for analysis): 56 participants
- Sample Size (enrolled): 61 participants
- Data Provenance: Prospective, collected from two clinical sites in the USA.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not explicitly state the number of experts or their qualifications used to establish the ground truth. It mentions that the Philips Avalon FM30 was used as a "standard of cardiotocograph (CTG) monitoring device comparator." This implies that the ground truth was established by comparing the Bloomlife MFM-Pro's readings against the output of this established medical device, rather than through expert consensus on the raw data.
4. Adjudication Method for the Test Set
The document does not describe a specific adjudication method (like 2+1 or 3+1). The ground truth was established by comparison to a "standard of cardiotocograph (CTG) monitoring device comparator" (Philips Avalon FM30).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted. The study focused on the performance of the Bloomlife MFM-Pro device itself in comparison to a reference device (Philips Avalon FM30) for heart rate measurements, not on the improvement of human readers with AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the study appears to be a standalone performance evaluation of the Bloomlife MFM-Pro device. The "Bloomlife Cloud Algorithm" processes the data, and the report is generated from the device's output. While healthcare professionals apply the device and enter information, the core performance assessment is on the device's ability to accurately measure FHR and MHR. The algorithm's performance is implicit in the "Bloomlife MFM-Pro 5-minute Report data" used for analysis.
7. The Type of Ground Truth Used
The ground truth used was comparison to a legally marketed predicate device (Philips Avalon FM30 CTG monitor), which is described as a "standard of cardiotocograph (CTG) monitoring device comparator."
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. The clinical study described is for validation/performance testing, not for training.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established, as it focuses on the performance of the device in a pivotal study.
(101 days)
Wisconsin 53226
Re: K220732
Trade/Device Name: Mural Perinatal Surveillance Regulation Number: 21 CFR 884.2740
Regulation Number: 21 CFR 884.2740
Mural Perinatal Surveillance is a perinatal monitoring system intended for electronic collection, display, and documentation of clinical data with optional features to store, export, annotate, calculate, and retrieve clinical data. Data is acquired from medical devices, electronic health records, or other data sources on a hospital's network. This device is intended for use by healthcare professionals in clinical support settings for obstetric patients during and after pregnancy.
This product is not intended to control or alter any of the medical devices providing data across the hospital network. All information or indications provided are intended to support the judgment of medical professionals and are not intended to be the sole source of information for decision making.
Mural Perinatal Surveillance is a software only, information management system designed for the obstetrical (OB) care environment. Its use covers patients during pregnancy, labor, birth and covers newborn documentation. The software interfaces with a healthcare facility's Electronic Medical Records (EMR) and patient monitoring network to collect, display and document relevant patient data.
The software combines patient surveillance and alarm capabilities with patient documentation and record keeping into a single application to support patient care for their complete obstetrical care journey.
The provided document is a 510(k) summary for the GE Medical Systems Information Technologies, Inc. Mural Perinatal Surveillance system. This document outlines the device's indications for use, comparison to a predicate device, and a summary of non-clinical tests.
However, it does not contain information about specific acceptance criteria and the study that proves the device meets those criteria, particularly regarding algorithmic performance. The document focuses on software validation, risk analysis, cybersecurity, and interoperability, confirming that the software was developed according to GE Healthcare's Quality Management System and relevant IEC standards.
The "Mural Perinatal Surveillance" is described as a software-only information management system intended for electronic collection, display, and documentation of clinical data, with optional features to store, export, annotate, calculate, and retrieve clinical data primarily for obstetric patients. It explicitly states that it is "not intended to control or alter any of the medical devices providing data across the hospital network" and that "all information or indications provided are intended to support the judgment of medical professionals and are not intended to be the sole source of information for decision making."
This indicates that the device functions as a data management and display tool, rather than an AI-driven diagnostic or prognostic tool that would require extensive performance studies with acceptance criteria based on metrics like sensitivity, specificity, or AUC, or comparative effectiveness studies with human readers.
Therefore, I cannot provide the requested information for acceptance criteria and a study proving the device meets those criteria in the context of AI performance, because such a study is not described in the provided document, nor would it typically be required for a device of this nature (a perinatal monitoring system for data management, not an AI for diagnosis or risk assessment). The "computed items & assessment tools" (Shoulder Dystocia Risk, Postpartum Hemorrhage Risk Score, and Bishop Score) are stated to be derived from "standard general computes widely accepted" or "well-established industry standards or evidence-based studies and peer-reviewed research journals," implying they are based on established clinical rules or formulas, not novel AI algorithms requiring new performance validation studies.
However, based on the provided text, I can infer the "acceptance criteria" related to the device's functions and the "study" (non-clinical tests) demonstrating its adherence to regulatory and quality standards.
Here's an interpretation based on the provided information, focusing on what is present in the document rather than what is absent related to AI performance:
Inferred Acceptance Criteria and Proof of Meeting Criteria for Mural Perinatal Surveillance
Given that Mural Perinatal Surveillance is a data management and display system, not a diagnostic AI, the acceptance criteria and proof of meeting them are primarily centered around its functionality, reliability, security, and adherence to quality systems.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria/Goals Implicit in Document | Reported Device Performance (Summary from document) |
---|---|---|
Functional Performance | Electronic collection, display, and documentation of clinical data; optional features for storing, exporting, annotating, calculating (based on established computes), and retrieving data; data acquisition from medical devices, EHRs, etc. | Successfully collects, displays, and documents clinical data; software capabilities for clinical annotations & record archive demonstrated; software capabilities for alarms demonstrated (capable of generating alarm conditions within the software); acquires physiological data from compatible measuring devices (HL7, fetal monitors on a network). |
Safety (Software & Cybersecurity) | Software operates safely without unintended harm or error. | Safety classification and performance testing in accordance with IEC 62304 Edition 1.1 2015 successfully completed; Risk Analysis / Management Requirements Reviews successfully completed; cybersecurity evaluated as recommended in the 2014 FDA guidance document, successfully completed. |
Effectiveness (Software & Interoperability) | Software performs as intended and integrates effectively with hospital systems. | Software Verification and Software Validation successfully completed, confirming that software and user requirements have been met; interoperability evaluated as recommended in the 2017 FDA guidance document, successfully completed. |
Usability | Device is user-friendly for healthcare professionals. | Usability Testing successfully completed. |
Quality System & Regulatory Compliance | Developed under a robust quality management system; adheres to relevant standards. | Developed following the GE Healthcare Quality Management System (QMS); Design Reviews successfully completed; testing in accordance with IEC 60601-1-8 Edition 2.2 2020-07 for alarm functionality successfully completed. |
2. Sample size used for the test set and the data provenance
The document does not specify a "test set" in the context of a dataset for validating AI performance. Instead, it refers to "Non-Clinical Tests" which include software verification and validation activities. These tests typically involve:
- Unit testing: Testing individual components of the software.
- Integration testing: Testing the interaction between different software components.
- System testing: Testing the complete integrated system.
- Regression testing: Ensuring changes don't break existing functionality.
- Usability testing: Testing with intended users to ensure ease of use.
- Cybersecurity testing: Penetration testing, vulnerability scanning.
- Interoperability testing: Testing data exchange with other systems (e.g., HL7 interfaces).
The document does not specify the "sample size" of data records or patient cases used for these non-clinical tests, nor does it mention data provenance (country of origin, retrospective/prospective). This level of detail is typically not required for the type of software validation described, which focuses on code quality, functional correctness, and adherence to software engineering best practices and relevant standards rather than a clinical performance study.
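As an illustration of the kind of data exchange exercised by HL7 interoperability testing, the sketch below does a minimal split of a pipe-delimited HL7 v2 result message into segments and fields. This is illustrative only: real interoperability testing uses full HL7 conformance tooling, and the sample message content is invented, not taken from Mural's actual interface.

```python
def parse_hl7(message):
    """Split an HL7 v2 message (segments separated by carriage returns,
    fields separated by '|') into a dict of segment-name -> field lists."""
    segments = {}
    for seg in filter(None, message.strip().split("\r")):
        fields = seg.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

# Hypothetical ORU^R01 result message carrying one fetal heart rate value.
msg = ("MSH|^~\\&|FETALMON|L&D|MURAL|HOSP|202301010930||ORU^R01|123|P|2.6\r"
       "OBX|1|NM|FHR^FetalHeartRate||140|bpm|||||F")
parsed = parse_hl7(msg)
```

In a standard OBX segment, the observation value sits in field OBX-5 (`parsed["OBX"][0][5]`) with its units in OBX-6.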
3. Number of experts used to establish the ground truth for the test set and their qualifications
This information is not applicable as the document does not describe a study involving human experts establishing "ground truth" for a performance test set in the way it would be for an AI diagnostic algorithm. The acceptance criteria are based on software engineering principles, regulatory standards, and functional specifications, not on expert consensus on clinical data for diagnostic accuracy.
4. Adjudication method for the test set
This information is not applicable for the same reasons as point 3. There is no mention of adjudication for a clinical test set.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and its effect size.
No, a multi-reader, multi-case (MRMC) comparative effectiveness study was not reported. This type of study is relevant for evaluating the impact of an AI (or other support tool) on human reader performance, typically in diagnostic imaging or similar fields. Since Mural Perinatal Surveillance is a data management and display system intended to support judgment rather than provide independent diagnostic conclusions, such a study would not be expected or relevant based on the information provided.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done.
The document implies that the "computed items & assessment tools" (e.g., Shoulder Dystocia Risk, Postpartum Hemorrhage Risk Score, Bishop Score) perform "standalone" calculations based on pre-defined rules/algorithms. However, these are based on "widely accepted" or "well-established industry standards," indicating they are deterministic calculations, not AI algorithms requiring standalone performance validation against ground truth data in the context of novel algorithm output. The document explicitly states the information is "not intended to be the sole source of information for decision making," meaning the system outputs are always viewed by human clinicians.
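For deterministic computes of this kind, validation reduces to checking outputs against the published clinical rule ("known good outputs"). As an illustration, the classic Bishop score is a simple sum of five categorical items; the sketch below uses the commonly published textbook cut-offs, and is not necessarily the exact table implemented in Mural.

```python
def bishop_score(dilation_cm, effacement_pct, station, consistency, position):
    """Classic Bishop score using common textbook cut-offs (illustrative)."""
    def bucket(value, cutoffs):
        # score = number of cut-offs the value meets or exceeds (0-3)
        return sum(value >= c for c in cutoffs)

    score = 0
    score += bucket(dilation_cm, (1, 3, 5))        # 0: closed, 1: 1-2 cm, 2: 3-4 cm, 3: >=5 cm
    score += bucket(effacement_pct, (40, 60, 80))  # 0: 0-30%, 1: 40-50%, 2: 60-70%, 3: >=80%
    score += bucket(station, (-2, -1, 1))          # 0: -3, 1: -2, 2: -1/0, 3: +1/+2
    score += {"firm": 0, "medium": 1, "soft": 2}[consistency]
    score += {"posterior": 0, "mid": 1, "anterior": 2}[position]
    return score
```

Because the rule is a fixed lookup-and-sum, its "ground truth" is simply the correct arithmetic result for each input combination, which is what software verification would exercise.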
7. The type of ground truth used
For the non-clinical software verification and validation, the "ground truth" is typically defined by:
- Functional specifications: How the software is designed to behave.
- Requirements documents: What the software is supposed to do.
- Industry standards: Adherence to IEC standards (e.g., 62304 for software lifecycle, 60601-1-8 for alarms).
- Known good outputs: For calculations, the "ground truth" is the correct mathematical or rule-based output.
- Security best practices: For cybersecurity.
- Clinical workflows: For usability testing.
No "ground truth" in the sense of expert consensus, pathology, or outcomes data is described as being used for a performance study of the device's own outputs.
8. The sample size for the training set
This information is not applicable. The device is described as operating based on "standard general computes" and "complex computes derived directly from well-established industry standards or evidence-based studies and peer-reviewed research journals." This suggests rule-based or empirically derived algorithms rather than a machine learning model that requires a "training set."
9. How the ground truth for the training set was established
This information is not applicable for the same reason as point 8. There is no mention of a machine learning training set or associated ground truth establishment process.
(540 days)
Regulation Name: 21 CFR 884.2720 (External uterine contraction monitor and accessories); 21 CFR 884.2740
The LaborView ™ LV1000 Wireless Electrode System is a transabdominal electromyography and electrocardiography intrapartum maternal-fetal sensor. It works non-invasively via surface electrodes on the maternal abdomen with appropriate monitors to measure fetal heart rate (FHR), uterine activity (UA), and maternal heart rate (MHR). It is indicated for use on women who are at term (>36 completed weeks), in labor, with singleton pregnancies. It is intended for use by a healthcare professional in a clinical setting.
The LaborView™ LV1000 Wireless Electrode System is a uterine activity (UA), maternal (MHR) and fetal (FHR) heart rate sensor replacement intended to interface with existing Philips Avalon fetal monitors in hospital delivery environments.
LaborView™ LV1000 Wireless Electrode System is comprised of an electrode array, a wireless transmitter ("Transmitter"), a computational base station ("Base Station"), a power supply module, and adapters to connect to compatible fetal monitors. The electrode array is sensitive to changes in the electrical activity at the skin surface due to muscle contractions, maternal, and fetal ECG when placed on the expectant mother's abdomen. These signals are passed to the device, converted to a contraction curve, maternal heart rate (MHR), and fetal heart rate (FHR), and subsequently passed to the Philips monitor.
All the components of LaborView™ LV1000 Wireless Electrode System work together with the compatible fetal monitors to complete a system that can detect maternal contractions, MHR and FHR during labor. The fetal monitor, in turn, may interface to a central monitoring system in order to conveniently present contraction information to clinicians.
The provided text describes a 510(k) premarket notification for the LaborView™ LV1000 Wireless Electrode System, but it primarily focuses on demonstrating substantial equivalence to a predicate device rather than presenting a performance study with acceptance criteria and a detailed study report for a novel AI device.
Therefore, the requested information on acceptance criteria, device performance, sample sizes, expert ground truth establishment, adjudication methods, MRMC studies, standalone performance, and training data details is not available in the provided document.
The document discusses non-clinical testing which includes:
- Biocompatibility
- Software Verification (compliance with FDA guidance, but no specific performance metrics)
- Electrical Safety, EMC, and Wireless Capability (compliance with standards like ANSI/AAMI ES60601-1, IEC 60601-1-2014)
- Performance Testing (bench testing verifying performance to specifications, including Electrode Array, Transmitter, Base Station, Monitor Interface Cable, System Validation, EC13 Compliance Verification for Maternal Heart Rate (MHR), Comparative Testing, and Testing with compatible patient monitors).
While "Performance Testing" is mentioned, no specific acceptance criteria for these tests or the results demonstrating the device meets them are provided. The focus is on verifying compliance with design specifications and industry standards rather than a clinical performance study with predefined acceptance metrics for accuracy, sensitivity, or specificity.
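For context, EC13 compliance verification for MHR typically checks heart rate readouts against the commonly cited allowance of +/-10% of the input rate or +/-5 bpm, whichever is greater. The sketch below encodes that criterion as an assumption; the submission does not state the exact limits it applied.

```python
def ec13_hr_within_tolerance(displayed_bpm, input_bpm):
    """Check a heart rate readout against the commonly cited EC13-style
    allowance: +/-10% of the input rate or +/-5 bpm, whichever is greater
    (assumed criterion; not quoted from the submission)."""
    allowance = max(0.10 * input_bpm, 5.0)
    return abs(displayed_bpm - input_bpm) <= allowance
```

Bench testing of this kind feeds simulated ECG at known rates and confirms every displayed value stays inside the allowance.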
In summary, based on the provided text, it is not possible to fill out the requested table or answer most of the detailed questions regarding acceptance criteria and performance study specifics for an AI device, as the document describes a 510(k) for a medical device (a wireless electrode system) that is seeking substantial equivalence to a predicate, not a performance study of a novel AI algorithm.
(72 days)
Trade/Device Name: Sonicaid Team3 Antepartum, Sonicaid Team3 Intrapartum Regulation Number: 21 CFR 884.2740
Regulation Number: 21 CFR 884.2740
The Sonicaid Team3 Antepartum and Sonicaid Team3 Intrapartum fetal monitors are indicated for use by trained healthcare professionals in non-invasive monitoring of physiological parameters in pregnant women and fetuses, during the antepartum and intrapartum periods of pregnancy. The Team3 fetal monitors are intended for pregnant women from the 28th week of gestation, through to term and delivery. The devices are intended for use in clinical and hospital-type facilities.
Sonicaid Team3 Antepartum is suitable for use when there is a need to monitor the following physiological applications:
- Single or twin fetal heart rates by means of ultrasound
- Uterine activity externally sensed
- Fetal movement maternally sensed and externally via ultrasound
- Maternal heart rate and oxygen saturation via pulse oximetry
- Maternal non-invasive blood pressure
Sonicaid Team3 Intrapartum is suitable for use when there is a need to monitor the following physiological applications:
- Single or twin fetal heart rates by means of ultrasound and/or FECG
- Maternal heart rate via ECG electrodes
- Uterine activity externally or internally sensed
- Fetal movement maternally sensed and externally via ultrasound
- Maternal heart rate and oxygen saturation via pulse oximetry
- Maternal non-invasive blood pressure
The Sonicaid Team3 is a mains / battery powered multi-function fetal monitor designed for use in clinical and hospital environments during antepartum and intrapartum phases of pregnancy.
The Sonicaid Team3 is designed for use at the bedside; there is a wall mounting bracket available as well as a trolley for fixed or transportable use. The unit may also be used free-standing on a work surface.
The units are powered either from local mains electrical supply or an optional internal rechargeable battery.
The Sonicaid Team3 fetal monitors include the following:
- 8.4" Color LCD Display with LED backlighting.
- Touch screen user interface.
- Monitoring of up to two fetal heart rates via independent ultrasound transducers.
- Monitoring of maternal uterine activity via external tocodynamometer (Toco) or internal intra-uterine pressure (IUP) transducers.
- Monitoring of maternal oxygen saturation (SpO2) and heart rate via pulse oximetry sensor.
- Monitoring of maternal Non-Invasive Blood Pressure . (NIBP).
- Monitoring of fetal heart rate via ECG.
- Maternal heart rate (eMHR).
- Capture of maternally sensed fetal movements via a cabled switch.
- Chart printout via (optional) inbuilt thermal printer
- Data output via RS232.
The provided text describes the acceptance criteria and study for a medical device (Sonicaid Team3 Antepartum and Sonicaid Team3 Intrapartum fetal monitors) in the context of an FDA 510(k) submission. However, it does not contain information about acceptance criteria in terms of numerical performance thresholds (e.g., specific accuracy, sensitivity, or specificity targets for the physiological parameters being monitored). Instead, it primarily focuses on demonstrating substantial equivalence to a predicate device through adherence to recognized standards and various types of engineering and performance testing.
Therefore, the following information is extracted based on the provided text, and where specific details are not present, it will be noted.
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of numerical "acceptance criteria" and "reported device performance" in the typical sense of a clinical performance study with specific metrics like sensitivity, specificity, or accuracy targets. Instead, it demonstrates performance through compliance with recognized standards and various engineering and functional tests. The implied acceptance criterion is "demonstrating substantial equivalence to the predicate device" by meeting safety and performance standards and showing that technological differences do not raise new questions of safety or effectiveness.
Category | Acceptance Criteria (Implied) | Reported Device Performance and Compliance |
---|---|---|
General Device Comparison | Be substantially equivalent to the predicate device (Sonicaid FM820E and FM830E (K090285)) in intended use, safety, and effectiveness. | The Sonicaid Team3 Antepartum and Sonicaid Team3 Intrapartum have the same intended use as the predicate device – to monitor the progress of labor and fetal status. Although there are different technological characteristics, these do not raise different questions of safety and effectiveness. |
Biocompatibility | Compliance with ISO 10993-1:2009 for skin-contacting devices ( |
(258 days)
Trade/Device Name: Philips IntelliSpace Perinatal Revision K.00 Regulation Number: 21 CFR 884.2740
Obstetrical and gynecological devices: §884.2740
The Philips IntelliSpace Perinatal Obstetrical Information Management System is indicated for obstetrical patients, during and after pregnancy, who require monitoring in a healthcare setting.
The Philips IntelliSpace Perinatal system provides:
- Basic and advanced fetal trace alarming for both antepartum and intrapartum patients.
- Central monitoring of maternal alarming.
- Documentation capabilities and data storage.
- Viewing and alarming of patient physiologic data, at remote locations, via the healthcare facility Remote Desktop Session (RDS).
- An interface to launch the Philips IntelliVue XDS Application for remote viewing and operating of compatible patient monitors.
The Philips IntelliSpace Perinatal Rev. K.00 is a patient-oriented, departmental information management system for the obstetrical care environment. It covers OB care in so far as it is relevant for GYN visits, pregnancy, labor, birth and newborn documentation. It combines surveillance and alarming with comprehensive patient documentation and data storage into one system that covers the continuum of obstetrical care across one or more pregnancies, from the first antepartum visit until delivery and discharge.
The provided document is a 510(k) summary for the Philips IntelliSpace Perinatal Revision K.00. This document describes the device, its intended use, and the testing performed to demonstrate its substantial equivalence to a predicate device.
However, it does not contain details about specific acceptance criteria related to a performance study (e.g., accuracy, sensitivity, specificity) for an AI/ML-based device. The document primarily focuses on non-clinical verification and validation (V&V) activities for software modifications and does not describe a clinical performance study with human readers or an algorithm-only standalone performance study.
Therefore, I cannot populate the requested table and answer many of the questions directly from the provided text. The information given is at a higher level, focusing on regulatory compliance for a software system that manages obstetrical information and integrates with other monitoring devices, rather than a diagnostic AI algorithm with quantifiable performance metrics.
Here is what I can extract and infer based on the document:
1. Table of Acceptance Criteria and Reported Device Performance:
The document states that "All specified pass/fail criteria have been met" for the nonclinical V&V activities. However, it does not specify what those exact performance criteria are in terms of numerical metrics (e.g., sensitivity, specificity, accuracy) for an AI/ML component. The "performance" described is about the functionality and safety of the software modification, not a diagnostic accuracy metric.
| Acceptance Criterion Type | Specific Criterion (If available in document) | Reported Device Performance (If available in document) |
| --- | --- | --- |
| Functionality | Correctly hand over start-up parameters to XDS software | Met: "conducted tests demonstrate that start-up parameters provided by the modified IntelliSpace Perinatal software are correctly handed over to the XDS software" |
| Functionality | Correct operation of patient monitors via XDS | Met: "compatible IntelliVue patient monitors can be correctly operated via the XDS software." |
| Safety | Effectiveness of implemented risk mitigation measures | Met: "Test results confirmed the effectiveness of implemented risk mitigation measures." |
| General V&V | All specified pass/fail criteria | Met: "All specified pass/fail criteria have been met." |
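To make the "pass/fail criterion" notion concrete, the following is a minimal, hypothetical sketch of a functional V&V check in the spirit described above: verify that start-up parameters are handed over intact to a downstream viewer. All names here (`build_xds_launch_args`, `verify_handover`, `REQUIRED_PARAMS`) are invented for illustration and do not reflect the actual IntelliSpace Perinatal or XDS implementation.

```python
# Hypothetical pass/fail functional check: every required start-up
# parameter must appear, unmodified, in the generated launch arguments.

REQUIRED_PARAMS = ("patient_id", "monitor_id", "session_token")

def build_xds_launch_args(params: dict) -> list:
    """Assemble launch arguments for a downstream viewer from a
    parameter dict (illustrative stand-in for the real handover)."""
    return [f"--{key}={params[key]}" for key in REQUIRED_PARAMS]

def verify_handover(params: dict) -> bool:
    """Pass/fail criterion: each required parameter is present and
    unchanged in the launch arguments."""
    args = build_xds_launch_args(params)
    return all(f"--{key}={params[key]}" in args for key in REQUIRED_PARAMS)

params = {"patient_id": "P001", "monitor_id": "M42", "session_token": "abc"}
print(verify_handover(params))  # True
```

A real V&V protocol would pair each such check with a documented requirement ID and record the result as part of the traceability matrix.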
2. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated, as this was a software engineering V&V activity, not a clinical trial or performance study on a specific dataset. The tests were likely performed in a simulated environment or with test cases designed to validate software functionality.
- Data Provenance: Not applicable in the context of a "test set" for an AI/ML performance study. The testing was of the software's functionality and safety, not its diagnostic accuracy on patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
- Not applicable. The document describes software verification and validation, which typically involves software engineers and quality assurance professionals, not clinical experts establishing ground truth for diagnostic outputs.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable. As no clinical ground truth was established by experts for a test set, no adjudication method was mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC study was performed or described. The document explicitly states: "Therefore, a clinical study was not needed for the changes provided with the Philips IntelliSpace Perinatal Obstetrical Information Management System software revision Rev.K.00." The modifications were software updates related to integration and data management, not an AI assisting human readers in diagnosis.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- No standalone algorithm performance study was performed or described. The device is a perinatal information management system, not a standalone diagnostic AI algorithm. Its function is to provide "Basic and advanced fetal trace alarming," "Central monitoring of maternal alarming," and "Documentation capabilities and data storage." These are system functions, not typically evaluated as a "standalone algorithm" in the same way a diagnostic AI would be. The closest described is functional testing of the software's ability to hand off data and control.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable. The ground truth in this context refers to the expected behavior and output of the software functions as defined by the system requirements and design specifications, not clinical markers. This is a V&V process for software, where "ground truth" would be software specifications and expected functional outcomes.
8. The sample size for the training set:
- Not applicable. The document describes the V&V of a non-AI software system; there is no mention of a training set as would be found with an AI/ML model.
9. How the ground truth for the training set was established:
- Not applicable, as no training set was used or mentioned.
Summary of what the document does convey regarding validation:
- The modifications to the Philips IntelliSpace Perinatal system (Revision K.00) are primarily software-based, focusing on:
- Launching the Philips IntelliVue XDS Application for remote access to patient monitors.
- Adapting web TS client resolution for mobile devices.
- Adding configurable columns to electronic chalkboards.
- Providing additional options for documentation and reporting.
- HL7-based interfacing with ADT systems.
- Updating Windows Server compatibility.
- Adding more supported HL7-IHE profiles.
- Introducing roles and permission-based patient data access control.
- Adjusting alarm sound pressure levels.
- Adjusting system scalability (fetal monitors and client sessions).
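Of the modifications listed above, the HL7-based ADT interfacing is the most protocol-oriented. As a rough illustration of what an HL7 v2 ADT message looks like on the wire, the sketch below parses a pipe-delimited ADT^A01 (admit) message using only field positions defined by the HL7 v2.x segment conventions (MSH-9 = message type, PID-5 = patient name). This is an illustrative parser only; the actual IntelliSpace Perinatal interface and its supported profiles are not described at this level in the document.

```python
# Minimal sketch: split an HL7 v2 message into segments and read a few
# well-known fields. Segments are separated by carriage returns and
# fields by "|". In MSH, the field separator itself counts as MSH-1,
# so MSH-9 lands at split index 8.

def parse_hl7_segments(message: str) -> dict:
    """Return {segment_id: [fields]} for the first occurrence of each
    segment in a pipe-delimited HL7 v2 message."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], fields)
    return segments

# Example ADT^A01 (admit) message with invented, illustrative content.
adt_a01 = "\r".join([
    "MSH|^~\\&|SENDING_APP|HOSP|PERINATAL|OB|202401011200||ADT^A01|MSG0001|P|2.5",
    "PID|1||123456^^^HOSP^MR||DOE^JANE||19900101|F",
    "PV1|1|I|L&D^101^A",
])

segs = parse_hl7_segments(adt_a01)
message_type = segs["MSH"][8]      # MSH-9 (message type)
patient_name = segs["PID"][5]      # PID-5 (patient name)
print(message_type, patient_name)  # ADT^A01 DOE^JANE
```

Production interfaces would use a conformant HL7 engine (handling escape sequences, repetitions, and the IHE profiles the document mentions) rather than naive string splitting.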
- Nonclinical V&V activities were performed:
- Safety and Performance testing according to ANSI/AAMI/IEC 62304:2006.
- Tests required by Hazard Analysis.
- Functional tests of the modified software, including the new XDS launch feature, based on FDA guidance for software in medical devices.
- The software was categorized as having a "MAJOR" level of concern.
- Conclusion from V&V: The tests demonstrated that start-up parameters are correctly handed over to XDS, and patient monitors can be operated via XDS. All pass/fail criteria were met.
- Clinical Data: "a clinical study was not needed" because the "similarities and differences... were determined not to have a significant impact on the device's performance, the clinical performance, and the actual use scenarios."
In essence, this document is a regulatory submission for a software update to an existing information management system, not a submission for a novel AI/ML diagnostic tool requiring a performance study against a clinical ground truth.