Search Results
Found 5 results
510(k) Data Aggregation
(235 days)
QOCA® image Smart RT Contouring System is a post-processing software intended to automatically contour DICOM CT imaging data using deep-learning-based algorithms.
Contours generated by QOCA® image Smart RT Contouring System may be used as input for clinical workflows including external beam radiation therapy treatment planning. QOCA® image Smart RT Contouring System must be used in conjunction with appropriate software, such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by QOCA® image Smart RT Contouring System. The output of QOCA® image Smart RT Contouring System, in the form of RTSTRUCT objects, is intended to be used by the radiation oncology department.
QOCA® image Smart RT Contouring System does not provide a user interface for data visualization. System settings, user settings, progress status, and other functionalities are managed via a web-based interface.
The software is not intended to automatically detect or contour lesions. Only DICOM images of adult patients are considered to be valid input.
QOCA® image Smart RT Contouring System is a post-processing software used to automatically contour DICOM CT imaging data using deep-learning-based algorithms. The QOCA® image Smart RT Contouring System contouring workflow supports CT input data and produces RTSTRUCT outputs. Contours generated by QOCA® image Smart RT Contouring System may be used as input for clinical workflows including external beam radiation therapy treatment planning.
The output of QOCA® image Smart RT Contouring System, in the form of RTSTRUCT objects, is intended to be used by the radiation oncology department. The output must be used in conjunction with appropriate software, such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by QOCA® image Smart RT Contouring System.
QOCA® image Smart RT Contouring System includes the following functionality:
- Automated contouring of organs at risk (OAR) workflow
- Input - DICOM CT
- Output - DICOM CT (Original), DICOM RTSTRUCT
- Web-based interface of system settings, user settings, and checking progress status
QOCA® image Smart RT Contouring System is intended to be used on adults undergoing treatment that requires the identification of anatomical structures in the body considered to be OAR. QOCA® image Smart RT Contouring System is intended to be used in the head, neck, and pelvis regions.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
Body Part | OARs | Acceptance Criteria (DSC) | Model Performance (DSC ± std dev) | Model Performance (HD95 ± std dev)
---|---|---|---|---
Head and Neck | Brain stem | 0.87 | 0.942 (±0.0215) | 4.173 (±20.9737)
 | Esophagus | 0.76 | 0.875 (±0.0859) | 4.694 (±5.5237)
 | Mandible | 0.93 | 0.956 (±0.0167) | 1.413 (±0.9036)
 | Pharyngeal constrictor muscle | 0.70 | 0.820 (±0.0692) | 2.232 (±1.3013)
 | Spinal cord | 0.87 | 0.931 (±0.0282) | 2.330 (±3.3562)
 | Thyroid | 0.83 | 0.873 (±0.1756) | 3.249 (±5.7852)
 | Right eye | 0.91 | 0.956 (±0.0149) | 2.038 (±0.9599)
 | Left lens | 0.80 | 0.876 (±0.1150) | 1.526 (±1.0436)
 | Left optic nerve | 0.66 | 0.805 (±0.0849) | 3.548 (±3.0927)
 | Right parotid | 0.86 | 0.924 (±0.0303) | 3.825 (±2.7730)
Pelvis | Anorectum | 0.70 | 0.929 (±0.0755) | 7.929 (±14.2608)
 | Bladder | 0.82 | 0.959 (±0.0912) | 4.402 (±9.7696)
 | Bowel bag | 0.70 | 0.944 (±0.0338) | 11.237 (±8.5063)
 | Lumbar spine L5 | 0.90 | 0.960 (±0.0648) | 5.985 (±31.2018)
 | Bilateral seminal vesicles | 0.64 | 0.818 (±0.3178) | 3.638 (±6.6927)
 | Right iliac | 0.90 | 0.985 (±0.0111) | 10.108 (±51.8553)
 | Right proximal femur | 0.90 | 0.980 (±0.0195) | 13.193 (±68.4094)
Note: The reported device performance (Model Performance) shows the Dice Similarity Coefficient (DSC) as the primary metric for acceptance criteria. Hausdorff Distance 95 (HD95) is also provided as a secondary metric for model performance, but specific acceptance criteria for HD95 are not explicitly stated in the table. The text states "The subject device achieved a median DSC > 0.80," indicating an overarching criterion as well.
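For readers unfamiliar with the two metrics, a minimal sketch of how DSC and HD95 can be computed, assuming masks represented as sets of voxel coordinates. This is an illustrative brute-force version, not the submitter's implementation; real implementations compute HD95 over surface points using distance transforms.

```python
import math

def dsc(a: set, b: set) -> float:
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def hd95(a: set, b: set) -> float:
    """95th-percentile symmetric Hausdorff distance (brute-force sketch)."""
    def nearest_dists(src, dst):
        # distance from each point in src to its nearest point in dst
        return [min(math.dist(p, q) for q in dst) for p in src]
    d = sorted(nearest_dists(a, b) + nearest_dists(b, a))
    return d[int(round(0.95 * (len(d) - 1)))]

# Toy 2D example: two 3x3 squares offset by one voxel along x.
m1 = {(x, y) for x in range(3) for y in range(3)}
m2 = {(x, y) for x in range(1, 4) for y in range(3)}
print(round(dsc(m1, m2), 3))  # 2*6 / (9+9) ≈ 0.667
```

A DSC of 1.0 means perfect overlap; HD95 is reported in the same spatial units as the coordinates (mm for CT voxel spacing).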
Study Details
- Sample Size used for the test set and the data provenance:
- Sample Size (Test Set): 220 cases (110 head and neck CT images and 110 pelvis CT images).
- Data Provenance:
- 50 cases from Taiwan (for each anatomical site, totaling 100 cases).
- 60 cases from United States public datasets (TCIA - The Cancer Imaging Archive) (for each anatomical site, totaling 120 cases).
- Type of Study: Retrospective performance study.
- Independence: This test set is explicitly stated to be independent of the data used for nonclinical tests (which included training, validation, and a smaller test set).
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document does not explicitly state the number of experts used to establish the ground truth for the test set, nor their specific qualifications.
- It only mentions that "Ground truth annotations were established following CT-based delineation of organs at risk in the head and neck region: DAHANCA, EORTC, GORTEC, HKNPCSG, NCIC CTG, NCRI, NRG Oncology and TROG consensus guidelines and Pelvic Normal Tissue Contouring Guidelines for Radiation Therapy: A Radiation Therapy Oncology Group Consensus Panel Atlas." This implies that the ground truth was created by human experts adhering to well-established clinical guidelines for radiation therapy contouring.
- Adjudication method for the test set:
- The document does not specify an adjudication method (e.g., 2+1, 3+1). It only mentions that ground truth was "established following" various consensus guidelines.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not done. The study described is a standalone performance validation of the AI algorithm.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone performance study was done. The section title is "Segmentation Performance Test" and it states, "The standalone performance of the subject device has been validated in a retrospective performance study on CT data previously acquired for RT treatment planning."
- The type of ground truth used:
- Expert Consensus/Clinical Guidelines: The ground truth annotations for the test set were established by human experts "following CT-based delineation of organs at risk" based on several recognized consensus guidelines (DAHANCA, EORTC, GORTEC, HKNPCSG, NCIC CTG, NCRI, NRG Oncology and TROG for Head and Neck; RTOG Consensus Panel Atlas for Pelvis).
- The sample size for the training set:
- Total Initial Data: 317 cases of head and neck images and 351 cases of pelvic images (668 cases total).
- Training Set Size: These initial cases were distributed in an "8:1:1 ratio into Training datasets, Validation datasets, and Test datasets," so the training set is approximately 8/10ths of the total initial data:
- Head and neck training: 0.8 × 317 ≈ 254 cases
- Pelvic training: 0.8 × 351 ≈ 281 cases
- Total Training Set: Approximately 535 cases (254 + 281).
- Note: This training set data is distinct from the 220 cases used for the final standalone performance test.
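The 8:1:1 split arithmetic above can be sanity-checked directly (a quick sketch; the exact per-site counts were not reported in the document):

```python
# Approximate training-set sizes implied by the reported 8:1:1 split
# of 317 head-and-neck and 351 pelvic cases.
head_neck_total, pelvic_total = 317, 351

train_head_neck = round(0.8 * head_neck_total)  # ≈ 254 cases
train_pelvic = round(0.8 * pelvic_total)        # ≈ 281 cases

print(train_head_neck, train_pelvic, train_head_neck + train_pelvic)  # 254 281 535
```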
- How the ground truth for the training set was established:
- The document states that the initial data (used for training, validation, and an internal test set) was "retrospectively collected from 2000 to 2021 from two hospitals in Taiwan".
- It doesn't explicitly detail the process of ground truth establishment for the training set, but given the context of medical imaging for radiation therapy, it's highly implied that these contours were also created by clinical experts (e.g., radiation oncologists or dosimetrists) at those hospitals, likely following standard clinical practices. The subsequent "Segmentation Performance Test" details how ground truth for the final evaluation set was established ("following CT-based delineation... consensus guidelines"), suggesting a similar rigorous approach for the data used in training.
(214 days)
QOCA® image Smart CXR Image Processing System is a software as a medical device (SaMD) that uses artificial intelligence/deep learning technology to analyze chest X-ray images of adult patients and identify cases with suspected pneumothorax. This product shall be used in conjunction with the Picture Archiving and Communication System (PACS) at the hospital. It automatically analyzes the DICOM files pushed from PACS and makes a notation next to cases with suspected pneumothorax. This product is only used to remind radiologists to prioritize reviewing cases with suspected pneumothorax. Its results cannot be used as a substitute for a diagnosis by a radiologist, nor can it be used on a stand-alone basis for clinical decision-making.
This product, QOCA® image Smart CXR Image Processing System, is a web-based medical device using a locked artificial intelligence algorithm. It provides features such as cases sorting and image viewing, and supports multiple users at a time.
After connecting to the Picture Archiving and Communication System (PACS) at the hospital, this product is capable of automatically analyzing either posteroanterior (PA) view or anteroposterior (AP) erect view chest X-ray images automatically pushed from PACS. Once a case with suspected pneumothorax is identified, a notation is made next to the case in question, so the radiologist can prioritize reviewing cases with suspected pneumothorax in the Viewer Page. However, this product does not directly indicate the specific regions or anomalies on the image.
Here's a breakdown of the acceptance criteria and the study details for the QOCA® image Smart CXR Image Processing System:
1. Acceptance Criteria and Reported Device Performance
Metric | Acceptance Criteria (Predicate Device K190362 Performance) | Reported Device Performance (QOCA® image Smart CXR) | Overall Performance |
---|---|---|---|
AUC | 98.3% (95% CI: [97.40%, 99.02%]) | 97.8% (95% CI: [97.0%, 98.5%]) | Met |
Sensitivity | 93.15% (95% CI: [87.76%, 96.67%]) | 92.5% (95% CI: [90.5%, 94.2%]) | Met |
Specificity | 92.99% (95% CI: [90.19%, 95.19%]) | 94.0% (95% CI: [93.9%, 94.6%]) | Met |
Average Performance Time | 22.1 seconds | 4.94 seconds | Met |
Note: The reported device performance is an overall performance across both the MIMIC and Taiwanese datasets. Individual performance for each dataset is also provided in the document.
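For context on how the table's operating points relate to raw counts, a small sketch follows. The confusion-matrix counts below are hypothetical, chosen only to be consistent with the reported overall test set (808 positive and 5,244 negative radiographs) and the reported rates; the actual counts are not disclosed.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts consistent with ~92.5% sensitivity
# and ~94.0% specificity on 808 positives and 5,244 negatives.
tp, fn = 747, 61
tn, fp = 4929, 315
print(f"sensitivity={sensitivity(tp, fn):.3f} specificity={specificity(tn, fp):.3f}")
```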
2. Sample Size Used for the Test Set and Data Provenance
The device's performance was assessed using two separate pivotal studies/datasets:
- MIMIC Dataset:
- Sample Size: 3,105 radiographs (336 positive pneumothorax cases, 2,769 negative pneumothorax cases).
- Data Provenance: US patient population (MIMIC dataset). This was an independent medical institution from the training dataset.
- Taiwanese Dataset:
- Sample Size: 2,947 radiographs (472 positive pneumothorax cases, 2,475 negative pneumothorax cases).
- Data Provenance: Taiwanese hospital. This was an independent medical institution from the training dataset.
Overall Test Set: 6,052 radiographs (3,105 from MIMIC + 2,947 from Taiwan). Both datasets were retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
For both the MIMIC dataset and the Taiwanese dataset:
- Number of Experts: Three radiologists.
- Qualifications: The document states "truthed by three radiologists" without specifying their years of experience or sub-specialty.
4. Adjudication Method for the Test Set
The document does not explicitly state the adjudication method (e.g., 2+1, 3+1). It only mentions that the datasets were "truthed by three radiologists," implying a consensus-based approach, but the specific process for resolving disagreements is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no mention of a Multi-Reader Multi-Case (MRMC) comparative effectiveness study being performed to assess how much human readers improve with AI vs. without AI assistance. The study focused on the standalone performance of the AI algorithm.
6. Standalone Performance Study
Yes, a standalone performance study was done. The document states that, based on the results of the standalone performance assessment, the product achieves an identification accuracy of AUC > 95%, with Sensitivity > 91% and Specificity > 92%. The performance metrics provided in section 1 (AUC, sensitivity, specificity) reflect the algorithm's performance without a human in the loop.
7. Type of Ground Truth Used
The ground truth for the test sets (MIMIC and Taiwanese) was established by "three radiologists," which indicates expert consensus diagnosis.
8. Sample Size for the Training Set
The document states: "The training dataset is used to train the model, and divided into three sets: the training set, the validation set, and the test set." However, the specific sample size for the entire training dataset (including training, validation, and its own internal test set used during development) is not provided in the summary. It only indicates that it was "collected from two hospitals, and additional data from the US National Institutes of Health (NIH) was added to the test set to improve its US patient population representativeness during training."
9. How the Ground Truth for the Training Set Was Established
The document states that the "model training dataset was collected from two hospitals, and additional data from the US National Institutes of Health (NIH) was added to the test set." While it implies the data was labeled for training, it does not explicitly describe how the ground truth for the training set was established (e.g., whether it was also by expert radiologists, pathology, etc.).
(557 days)
The QOCA Portable ECG Monitoring Device is intended to capture continuous electrocardiogram (ECG) information for long-term use (up to 14 days). It is indicated for use on adult patients 21 years or older who may be asymptomatic or who may suffer from transient symptoms such as palpitations, shortness of breath, dizziness, light-headedness, pre-syncope, syncope, fatigue, or anxiety. ECG and heart rate data are stored in the device for later viewing by healthcare professionals.
The QOCA Portable ECG Monitoring Device consists of 3 parts: a rechargeable and reusable ECG sensor with Bluetooth technology, a single-use electrode and hydrogel patch, and an optional mobile platform app (QOCA ecg App). The device provides a continuous, single-channel recording for up to 14 days. It records ECG without patient interaction, and patients have the option of pressing the power button on the ECG sensor to trigger an event record. Patients can also use the device along with the mobile app, so that an event record can be triggered from the App and the data can be transmitted via BLE from the ECG sensor to the mobile platform for display.

The subject device provides operational alarms such as lead-off detection and battery monitoring. The operational alarms display on both the sensor and the QOCA ecg App to inform the patient of the sensor's operating status. The subject device does not provide any alarms based on physiological data.

After the recording period (up to 14 days) ends, the patient returns to his/her healthcare provider, and the data stored in the sensor can be transferred to a computer via a USB cable and viewed with QOCA ecg Reader, a non-device MDDS for displaying data. The device is intended to be used on general care patients who are 21 years of age or older. This device is solely intended for manual interpretation of the recorded ECG and heart rate detection using the integrated software. The ECG signal recorded by this device is not intended for automated analysis. This device is not intended to be used for real-time and/or continuous patient monitoring.
The provided text describes the QOCA Portable ECG Monitoring Device and its substantial equivalence to a predicate device, the ZIO® SkyRunner (SR) Electrocardiogram (ECG) Monitoring Service. However, it does not contain information about:
- Specific acceptance criteria values for device performance (e.g., accuracy percentages for heart rate detection or ECG capture).
- A clinical study performed to prove the device meets these criteria. The document explicitly states: "The ECG signal recorded by this device is not intended for automated analysis. This device is solely intended for manual interpretation of the recorded ECG and heart rate detection using the integrated software." This indicates that there was no algorithm with its own performance metrics validated.
- Sample sizes used for a test set, data provenance, number of experts for ground truth, adjudication methods, multi-reader multi-case studies, standalone algorithm performance, type of ground truth used for testing, training set sample size, or how ground truth was established for training data. These details are typically associated with clinical validation studies for AI/algorithm-driven devices, which this ECG monitoring device, as described, is not.
The document primarily focuses on demonstrating equivalence to a predicate device through physical and technical characteristics, as well as adherence to various safety and performance standards.
Here's a summary of the information that is present, organized as per your request where applicable, noting the absence of the requested details related to performance studies:
1. Table of Acceptance Criteria and Reported Device Performance
No specific numerical acceptance criteria (e.g., sensitivity, specificity, accuracy for arrhythmia detection) are provided in the document. The performance evaluation focuses on compliance with recognized medical device standards rather than algorithm-specific performance metrics.
Acceptance Criteria (Not explicitly quantified in terms of algorithmic performance) | Reported Device Performance (Compliance with standards) |
---|---|
Biocompatibility | Conformed to ISO 10993-1, ISO 10993-5, ISO 10993-10 |
Electrical Safety | Conformed to ANSI AAMI ES60601-1, IEC 60601-1 |
Electromagnetic Compatibility (EMC) | Conformed to IEC 60601-1-2 |
Home Healthcare Environment Requirements | Conformed to IEC 60601-1-11 |
Ambulatory ECG System Performance | Conformed to IEC 60601-2-47 |
Wireless Coexistence | Conformed to ANSI IEEE C63.27-2017 |
Software Life Cycle Processes | Conformed to IEC 62304 |
2. Sample size used for the test set and the data provenance
Not applicable. The document does not describe a clinical performance study with a test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. No clinical performance study with expert-established ground truth is described.
4. Adjudication method for the test set
Not applicable. No clinical performance study involving adjudication of a test set is described.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC study was mentioned. The device's ECG data is intended for manual interpretation by healthcare professionals, and the device itself does not provide automated analysis.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
No standalone algorithm performance study was mentioned. The device's ECG signal is "not intended for automated analysis" but for "manual interpretation."
7. The type of ground truth used
Not applicable. The document does not describe a performance study requiring ground truth for an algorithm.
8. The sample size for the training set
Not applicable. The device does not employ an AI algorithm requiring a training set, according to the provided text.
9. How the ground truth for the training set was established
Not applicable. No training set is mentioned for an AI algorithm.
(140 days)
The Quanta Pulse Oximeter, Model no. Pulse Link 1000 (or QH100), is a handheld pulse oximeter with alarm. It is intended to be used by trained healthcare professionals in hospitals and hospital-type facilities, as well as in the home care environment.
The Pulse Link 1000 Pulse Oximeter is indicated for non-invasive continuous monitoring of functional oxygen saturation of arterial hemoglobin (SpO₂) and pulse rate of patients on fingers (forefinger or middle finger).
The Pulse Link 1000 Pulse Oximeter is re-useable. It is indicated for adult patients under no-motion conditions.
The Quanta Pulse Oximeter, Model no. Pulse Link 1000 (or QH100), is a digital handheld pulse oximeter that displays numerical values for blood oxygen saturation (%SpO2) and pulse rate. It provides audible and visual alarms for both medium- and high-priority conditions.
The device will typically operate for 24 hours continuously between alkaline battery replacements. It requires no routine calibration or maintenance other than replacement of alkaline batteries and basic cleaning.
The device determines functional oxygen saturation of arterial hemoglobin (SpO2) by measuring the absorption of red and infrared light passing through perfused tissue. Changes in absorption caused by the pulsation of blood in the vascular bed are used to determine oxygen saturation and pulse rate. Oxygen saturation and pulse rate values are displayed on the LCM monitor. On each detected pulse, an LED indicates the patient's condition: if the condition is poor (under specific criteria), the LED blinks red and an alarm beeps from the speaker. A sensor disconnect is indicated by the LED blinking yellow and an audible alarm. The remaining battery charge is shown by the battery indicator scale on the LCM monitor.
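The light-absorption principle described above is commonly implemented as a "ratio of ratios" calculation. The sketch below uses a textbook linear calibration (SpO2 ≈ 110 − 25R), which is an assumption for illustration, not this device's actual calibration curve:

```python
def spo2_estimate(ac_red: float, dc_red: float, ac_ir: float, dc_ir: float) -> float:
    """Estimate SpO2 from pulsatile (AC) and baseline (DC) absorbances
    at the red and infrared wavelengths."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)  # ratio of ratios
    return 110.0 - 25.0 * r  # textbook linear approximation, NOT this device's curve

# Example: red pulsatile component half that of infrared -> R = 0.5
print(round(spo2_estimate(0.02, 1.0, 0.04, 1.0), 1))  # 97.5
```

Commercial oximeters replace the linear mapping with an empirically calibrated lookup table derived from controlled desaturation studies.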
The provided text does not contain specific acceptance criteria or a detailed study proving the device meets those criteria. The submission is a 510(k) summary for a pulse oximeter, and it focuses on establishing substantial equivalence to predicate devices rather than providing detailed performance study results against specific acceptance criteria.
However, based on the information provided, here's what can be extracted and inferred regarding performance and compliance:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria with specific numerical values for accuracy or other performance metrics. Instead, it states that the device conforms to applicable standards. The "performance" section mentions:
Acceptance Criteria (Inferred from Standards) | Reported Device Performance |
---|---|
Operating specifications | Conforms to ISO 9919 |
Safety requirements | Conforms to IEC 60601-1 |
EMC requirements | Conforms to IEC 60601-1-2 |
It is inferred that the acceptance criteria are the requirements outlined in these standards. The device's reported performance is simply that it "conforms" to these standards, implying it meets their respective requirements. Specific accuracy ranges for SpO2 or pulse rate are not provided in this summary.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the given 510(k) summary. The document states "bench testing contained in this submission demonstrate that any differences in their technological characteristics do not raise any new questions of safety or effectiveness," indicating that some testing was done, but details regarding sample size, data provenance, or study design (retrospective/prospective) are absent.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided. Given that this is a pulse oximeter and the claim is conformance to standards, the "ground truth" would typically come from reference instrumentation in a controlled environment, not expert human assessment in the way it might for imaging diagnostics.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided. Adjudication methods are not typically relevant for performance testing of a pulse oximeter where objective measurements against a reference are paramount.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
An MRMC study is not applicable and was not done. This device is a standalone pulse oximeter, not an AI-assisted diagnostic tool that relies on human readers interpreting results.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, a standalone performance assessment was done, as detailed in the "bench testing" mentioned in the conclusions. The device's performance is determined by its ability to accurately measure SpO2 and pulse rate independently. The summary states: "bench testing contained in this submission demonstrate that any differences in their technological characteristics do not raise any new questions of safety or effectiveness." This implies the device was tested to perform its intended function on its own.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The direct ground truth used is not explicitly stated, but for a pulse oximeter conforming to ISO 9919, the ground truth for SpO2 accuracy would typically be established using invasive arterial blood gas analysis (CO-Oximetry) as a reference method in controlled desaturation studies on healthy volunteers. The ground truth for pulse rate would be a highly accurate heart rate monitor.
8. The sample size for the training set
This information is not provided. Pulse oximeters generally do not use "training sets" in the same way machine learning algorithms do. Their operation is based on established physical principles of light absorption by oxygenated and deoxygenated hemoglobin. If any calibration or algorithm tuning occurred, the data used for that is not disclosed here.
9. How the ground truth for the training set was established
This information is not provided and is likely not relevant in the context of a traditional pulse oximeter device design.
(41 days)
The Quanta Blood Pressure Meter, Model no. Cardiac Elite 1000 (or QH200), is intended to measure blood pressure (systolic and diastolic) and pulse rate by the oscillometric method. The measurements are conducted using a cuff wrapped around the upper arm. The device is designed for the adult patient population.
The Quanta Blood Pressure Meter, Model no. Cardiac Elite 1000 (or QH200), is designed to measure the systolic and diastolic blood pressure and pulse rate (heart rate) of an individual. The device uses an inflatable cuff wrapped around the upper arm. The cuff is inflated by an electrical air pump. The systolic and diastolic blood pressures are determined by the oscillometric method. The deflation rate is controlled by a preset mechanical valve at a constant rate. At any moment of measurement, the user can deflate the cuff. The measurement results are displayed on the LCD.
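The oscillometric method described above is commonly implemented with a maximum-amplitude algorithm. The sketch below uses textbook amplitude ratios (0.5 systolic, 0.8 diastolic), which are assumptions for illustration, not this device's proprietary coefficients:

```python
def oscillometric_estimate(pressures, amplitudes, sys_ratio=0.5, dia_ratio=0.8):
    """pressures: cuff pressures during deflation (descending, mmHg);
    amplitudes: pulse-oscillation amplitude measured at each pressure."""
    peak = max(amplitudes)
    map_p = pressures[amplitudes.index(peak)]  # oscillations peak near MAP
    # systolic: highest pressure where amplitude first reaches sys_ratio * peak
    sys_p = next(p for p, a in zip(pressures, amplitudes) if a >= sys_ratio * peak)
    # diastolic: lowest pressure where amplitude still reaches dia_ratio * peak
    dia_p = next(p for p, a in zip(reversed(pressures), reversed(amplitudes))
                 if a >= dia_ratio * peak)
    return sys_p, map_p, dia_p

# Toy deflation sweep with a synthetic oscillation envelope.
pressures  = [160, 150, 140, 130, 120, 110, 100, 90, 80, 70, 60]
amplitudes = [0.2, 0.4, 0.6, 0.8, 1.0, 0.95, 0.85, 0.7, 0.5, 0.3, 0.1]
print(oscillometric_estimate(pressures, amplitudes))  # (140, 120, 100)
```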
No acceptance criteria related to device performance (e.g., accuracy, precision) were explicitly stated or reported in the provided text. The document primarily focuses on regulatory approval and substantial equivalence to predicate devices based on intended use, technological characteristics, and conformance to general safety and EMC standards.
Therefore, the requested tables and specific details about a study proving the device meets acceptance criteria are not available in the given text.
Based on the provided information:
- Table of acceptance criteria and the reported device performance: Not available. The document mentions conformance to technical standards but does not provide specific performance metrics or acceptance criteria for blood pressure accuracy.
- Sample size used for the test set and the data provenance: Not available. The document mentions "bench testing" but does not detail the sample size, specifics of the test set, or data provenance.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as no specific test set or ground truth establishment by experts for performance evaluation is described.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable, as no specific test set or adjudication process is described.
- Multi-reader multi-case (MRMC) comparative effectiveness study and the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. This device is a non-invasive blood pressure monitor, not an AI-assisted diagnostic imaging device or similar system where human reader performance would be a relevant metric.
- Standalone (i.e., algorithm-only, without human-in-the-loop) performance: Not applicable. This device is a standalone blood pressure monitor, and its performance is inherently standalone. However, no specific performance study results are provided.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable, as no specific ground truth for performance evaluation is described. For blood pressure monitors, the "ground truth" typically involves comparison against a reference standard (e.g., the auscultatory method by trained observers). This information is not present.
- The sample size for the training set: Not applicable. This document does not describe the development or training of an algorithm in the context of machine learning or AI.
- How the ground truth for the training set was established: Not applicable.