Search Results
Found 10 results
510(k) Data Aggregation
(226 days)
Regulation Number: 870.2330
WARD-CSS is a clinical decision support system that remotely integrates, analyzes and displays continuous vital sign data (via a mobile or web application) from medical devices for nonpediatric hospitalized patients within non-critical care units.
WARD-CSS uses a set of standardized rules based on scientific and clinical evidence to detect and alert on clinically relevant vital sign deviations when used by trained health care professionals in hospitals.
WARD-CSS is not intended to replace current monitoring practices or replace health care professionals' judgment. WARD-CSS is a tool intended to help health care professionals manage monitored patients and make clinical care decisions.
WARD-CSS is stand-alone software intended for use in continuous monitoring of patients and near real-time analysis of vital signs for the purpose of notifying healthcare professionals in case of clinically relevant vital sign deviations.
WARD-CSS utilizes knowledge-based algorithms to evaluate clinically relevant vital signs deviations to help drive clinical management.
The system is intended to be used as an adjunct to current monitoring practice on the general med/surg floors of hospitals.
The system assists healthcare professionals when monitoring patients on their wards by:
- Providing a real-time monitoring overview of vital signs for all patients.
- Alerting the healthcare professionals when a patient deteriorates.
The following types of alerts are detected by WARD-CSS in the vital sign data:
- Desaturation
- Hypertension
- Hypotension
- Bradypnea
- Tachypnea
- Tachycardia
- Bradycardia
- Hypotension and Bradycardia
- Hypotension and Tachycardia
- Bradypnea and Desaturation
- Fever
The WARD-CSS consists of a Mobile App, Web App and Backend Server. The Mobile App is used by healthcare professionals (HCPs) to monitor patients. The HCP will receive notifications of the alerts to their mobile phones. Within this app, the HCP can also document vital signs into an electronic health record system. The Web App is used by administrative users to manage hospitals, wards, users, and monitors. The Backend Server is used to receive and process all incoming data and manage all data used in the apps.
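The rule-based detection described above — alerting on a vital-sign deviation only when a threshold is crossed and, per the testing summary, sustained for a time duration — can be sketched as follows. The threshold value and minimum duration here are illustrative assumptions, not the cleared device's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float      # seconds since monitoring start
    spo2: float   # oxygen saturation, %

def desaturation_alerts(samples, threshold=85.0, min_duration=60.0):
    """Return (start, end) intervals where SpO2 stays below `threshold`
    for at least `min_duration` seconds. Threshold and duration are
    illustrative, not the device's actual parameters."""
    alerts, below_since = [], None
    for s in samples:
        if s.spo2 < threshold:
            if below_since is None:
                below_since = s.t
        else:
            if below_since is not None and s.t - below_since >= min_duration:
                alerts.append((below_since, s.t))
            below_since = None
    # close out an episode still open at the end of the recording
    if below_since is not None and samples and samples[-1].t - below_since >= min_duration:
        alerts.append((below_since, samples[-1].t))
    return alerts
```

Note that a brief transient below the threshold produces no alert at all — this is precisely how a duration criterion suppresses clinically irrelevant alerts relative to threshold-only monitoring.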
The provided text describes the acceptance criteria and a study demonstrating that the device, WARD-CSS, meets these criteria, focusing primarily on alert reduction.
Here's the breakdown of the information requested:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria here are implicitly related to the reduction of alert overload, as this is the primary focus of the clinical testing described for WARD-CSS. The performance is measured by the reduction in alert rates compared to a baseline (thresholds only) and an intermediate step (thresholds with time durations).
| Acceptance Criteria Category | Specific Criteria | Reported Device Performance (WARD-CSS: Thresholds, Time Durations, and Alert Filters) |
|---|---|---|
| Alert Reduction (Overall) | Significant reduction in clinically irrelevant alerts compared to standard monitoring practices. | 97.8% total reduction in alerts compared to monitors alerting only upon thresholds (from 417.0 median alerts to 9.0 median alerts over all alert types). |
| Alert Reduction (Specific Alert Groups) | Reduction in alerts for individual vital sign deviation categories. | Hypertension + Hypotension: 0.0 median (mean 0.3); Bradypnea + Tachypnea: 1.6 median; Tachycardia + Bradycardia: 0.0 median (mean 1.0); Desaturation + Desaturation/Bradypnea: 4.7 median; Hypotension/Tachycardia + Hypotension/Bradycardia: 0.0 median (mean 0.0) |
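As a quick sanity check, the headline figure follows directly from the two reported medians:

```python
baseline_median = 417.0   # median alerts with thresholds only (reported)
filtered_median = 9.0     # median alerts with the full WARD-CSS algorithm (reported)

# percent reduction = (baseline - filtered) / baseline * 100
reduction = (baseline_median - filtered_median) / baseline_median * 100
print(f"{reduction:.1f}% reduction")  # prints "97.8% reduction"
```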
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 794 patients
- Data Provenance: Retrospective analysis of four cohorts from prospective clinical safety studies conducted from 2020–2024. The country of origin is not explicitly stated, but the submission was made to the US FDA. The data consist of vital sign recordings.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. The study focuses on quantifying alert rates based on set thresholds and algorithms rather than human expert-established ground truth for specific events that led to the alerts. The 'ground truth' here is the objective vital sign data and the predefined rules/thresholds that trigger alerts.
4. Adjudication Method for the Test Set
This information is not provided as the study's focus is on algorithmic alert reduction based on vital sign data and predefined rules, not on expert adjudication of alert significance.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size of Human Improvement with AI vs. Without AI Assistance
A MRMC study comparing human readers with and without AI assistance was not done. The study's objective was to quantify the reduction in system-generated alerts due to the WARD-CSS algorithms, not to measure human performance improvement.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, the study described is a standalone (algorithm only) performance assessment of the WARD-CSS system's alert reduction methodology. It retrospectively analyzes vital sign data against the different alert generation methodologies (thresholds only, thresholds + time durations, and WARD-CSS's full algorithm including alert filters) to demonstrate the reduction in the number of alerts produced by the algorithm itself.
7. The Type of Ground Truth Used
The ground truth used in this study is objective vital sign data combined with predefined, standardized rules and thresholds based on scientific and clinical evidence. The analysis quantifies how often these pre-defined rules would trigger an alert under different algorithmic conditions (basic thresholds, thresholds with time durations, and thresholds with time durations and alert filters). It is not based on expert consensus, pathology, or outcomes data in the traditional sense of clinical event validation.
8. The Sample Size for the Training Set
The document does not explicitly mention a separate training set size. The clinical testing section refers to a "literature review to support software algorithm development and determine the alert thresholds," suggesting that the rules and thresholds were established based on existing clinical knowledge and literature. The 794 patients were used for the retrospective analysis of alert rates, which effectively acts as a test/validation set for the alert reduction logic rather than a dataset for algorithm training in a machine learning sense.
9. How the Ground Truth for the Training Set Was Established
Since an explicit training set (for machine learning) is not detailed, the "ground truth" for the algorithm's rules and thresholds was established through a "literature review to support software algorithm development and determine the alert thresholds." This implies that the rules are based on scientific and clinical evidence from medical literature.
(28 days)
Trade/Device Name: Ultra ICE Plus - PI 9 MHz Peripheral Imaging Catheter Regulation Number: 21 CFR 870.2330
Echocardiograph (DXK) is Class II per 21 CFR 870.2330.
The Ultra ICE Plus - PI 9 MHz Peripheral Imaging Catheter is indicated for patients with vascular occlusive disease for which angioplasty, atherectomy, the placement of stents, or other intervention is contemplated.
Ultra ICE Plus - PI 9 MHz Peripheral Imaging Catheter is intended for use with the Boston Scientific iLab™ equipment and motor drive unit, MDU5 PLUS™. When used together, the catheter, motor drive unit (MDU), and iLab equipment form a complete imaging system that allows for ultrasonic visualization of peripheral intravascular structures.
The catheter consists of two main components: the catheter body and the imaging core.
The catheter body consists of three sections: the braided proximal shaft, single lumen mid-shaft, and the sonolucent distal tip. The catheter body comprises the usable length of the catheter (110 cm).
The braided proximal shaft provides the pushability to the catheter and serves as a lumen to the imaging core. The mid-shaft provides a flexible transition between the stiffer proximal shaft and the acoustically transparent distal tip. The distal tip serves as the imaging window and houses a septum situated between the inner lumen and the atraumatic rounded tip of the catheter. The self-sealing septum serves as the distal-flush entry point, as the catheter is flushed with water prior to use. This provides the acoustic coupling media required for ultrasonic imaging.
The imaging core consists of a proximal hub assembly, a rotating drive cable, and a radiopaque tip. The hub assembly provides an electro-mechanical interface between the catheter and the motor drive unit. The drive cable houses the low frequency piezoelectric (PZT) transducer at the distal imaging window.
The PZT transducer and the drive cable rotate independently from the sheath to provide 360° image resolution. The transducer converts electrical impulses sent by the motor drive into transmittable acoustic energy. Reflected ultrasound signals are converted back to electrical impulses, returned to the motor drive unit, and are ultimately processed by the iLab equipment for live visualization of intravascular structures.
The provided text is a 510(k) summary for the Ultra ICE Plus - PI 9 MHz Peripheral Imaging Catheter. It outlines the device's characteristics, intended use, and substantial equivalence to a predicate device based on non-clinical performance data. Since this is a submission for a medical device that relies on physical characteristics and performance rather than an AI/ML algorithm that predicts or classifies, many of the requested categories are not applicable.
Here's an analysis based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't present specific numerical acceptance criteria in a table format, nor does it present specific numerical performance results against those criteria. Instead, it describes general performance categories tested. The primary "acceptance criterion" is demonstrating substantial equivalence to the predicate device.
Performance Category | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Physical Integrity | Device maintains structural integrity over expected use. | Bench testing performed to evaluate physical integrity. |
Functionality | Device operates as intended. | Bench testing performed to evaluate functionality. |
Image Quality | Provides clear ultrasonic visualization of intravascular structures. | Bench testing performed to evaluate image quality and general imaging capabilities. |
Measurement Accuracy | Provides accurate measurements. | Bench testing performed to evaluate measurement accuracy. |
Deliverability | Catheter can be delivered to target safely. | Bench testing performed to evaluate deliverability. |
Guide Catheter Compatibility | Compatible with specified guide catheters. | Bench testing performed to evaluate guide catheter compatibility. |
Non-Uniform Rotational Distortion (NURD) | Meets acceptable levels of NURD for accurate imaging. | Bench testing performed to evaluate non-uniform rotational distortion. |
Dimensional Requirements | Meets specified dimensional requirements. | Bench testing performed to evaluate dimensional requirements. |
Visibility Under Fluoroscopy | Visible under fluoroscopy. | Bench testing performed to evaluate visibility under fluoroscopy. |
Interface with Ancillary Devices | Interfaces correctly with Boston Scientific iLab™ equipment and MDU5 PLUS™. | Bench testing performed to evaluate interface with ancillary devices. |
Environmental Requirements | Withstands specified environmental conditions. | Bench testing performed to evaluate environmental requirements. |
User Interface Requirements | Meets user interface specifications. | Bench testing performed to evaluate user interface requirements. |
Catheter Fatigue | Withstands expected fatigue during use. | Bench testing performed to evaluate catheter fatigue. |
Bending Stiffness | Meets specified bending stiffness. | Bench testing performed to evaluate bending stiffness. |
Biocompatibility | Biocompatible for its intended use. | Biocompatibility testing in accordance with ISO 10993-1, microbial assessments (bioburden, endotoxin), and pyrogenicity/sterility assurance testing performed. Device is shown to be biocompatible. |
Acoustic Output | Below FDA Track 1 limits. | Acoustic Output evaluated in accordance with FDA Guidance (September 9, 2008). Results are below FDA Track 1 limits. |
Electromagnetic Compatibility | Compliant with IEC 60601-1-2 (3rd Edition). | Electromagnetic compatibility testing conducted, demonstrating compliance. |
Packaging Integrity | Maintains sterility and protects device post-sterilization and during distribution. | Packaging validation in accordance with ISO 11607-1 and ISO 11607-2, with testing after E-beam sterilization, climatic conditioning, and distribution challenge. |
Sterility | Sterile for its intended use. | Sterility assurance testing and electron beam (E-Beam) irradiation method. |
2. Sample Size Used for the Test Set and Data Provenance:
The document describes non-clinical bench testing. It does not specify a "test set" in terms of patient data. The "samples" would be the catheters themselves. The exact number of catheters used for each type of bench test (e.g., how many were fatigue tested, how many were dimensionally checked) is not provided.
- Sample size for test set: Not specified (refers to physical devices, not patient data).
- Data provenance: Not applicable as it's non-clinical bench testing.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
Not applicable. This is a physical medical device (intravascular ultrasound catheter) where performance is measured using engineering and biological standards, not expert interpretation of images or data to establish a ground truth for an AI/ML algorithm.
4. Adjudication Method for the Test Set:
Not applicable, as it's non-clinical bench testing of a physical device.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance:
Not applicable. This is not an AI/ML device, and no MRMC study was performed or required. The submission explicitly states "Clinical Performance Data: Not applicable; determination of substantial equivalence is based on an assessment of non-clinical performance data."
6. If a Standalone (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Study Was Done:
Not applicable. This is not an AI/ML device.
7. The Type of Ground Truth Used:
For the non-clinical performance tests, "ground truth" refers to established engineering specifications, regulatory standards (e.g., ISO, FDA guidance), and physical measurements. For example:
- Dimensional requirements: Engineering specifications for length, diameter, etc.
- Biocompatibility: ISO 10993-1 standards.
- Acoustic Output: FDA guidance for diagnostic ultrasound systems.
- Sterility: Sterility assurance level (SAL).
- Electrical/Mechanical Safety: IEC 60601-1-2.
8. The Sample Size for the Training Set:
Not applicable. This is not an AI/ML device, so there is no "training set."
9. How the Ground Truth for the Training Set Was Established:
Not applicable. This is not an AI/ML device.
(254 days)
Volcano Angio-IVUS Mapping system (K060483), Echocardiograph, 21 CFR 870.2330
CAAS Workstation is a modular software product intended to be used by or under supervision of a cardiologist or radiologist in order to aid in reading, co-registering and interpreting cardiovascular X-Ray images to support diagnoses and for assistance during intervention of cardiovascular conditions.
CAAS Workstation features segmentation of cardiovascular structures, 3D reconstruction of vessel segments and catheter path based on multiple angiographic images, measurement and reporting tools to facilitate the following use:
- Calculate the dimensions of cardiovascular structures;
- Quantify stenosis in coronary and peripheral vessels;
- Quantify the motion of the left and right ventricular wall;
- Perform density measurements;
- Determine C-arm position for optimal imaging of cardiovascular structures;
- Enhance stent visualization and measure stent dimensions;
- Co-registration of angiographic X-Ray images with IVUS and OCT images.
CAAS Workstation is intended to be used by or under supervision of a cardiologist or radiologist. When the results provided by CAAS Workstation are used in a clinical setting to support diagnoses and for assistance during intervention of cardiovascular conditions, the results are explicitly not to be regarded as the sole, irrefutable basis for clinical decision making.
CAAS Workstation is designed as a stand-alone modular software product for viewing and quantification of X-ray angiographic images intended to run on a PC with a Windows operating system. CAAS Workstation contains the analysis modules QCA, QCA3D, QVA, LVA, RVA and StentEnhancer of the previously cleared predicate device CAAS Workstation (K133993) for calculating dimensions of coronary and peripheral vessels and the left and right ventricles, quantification of stenosis, performing density measurements, determination of optimal C-arm position for imaging of vessel segments and functionality to enhance the visualization of a stent and to measure stent dimension. Semi-automatic contour detection forms the basis for the analyses.
Functionality to co-register X-ray angiographic images and intravascular imaging techniques (such as intravascular ultrasound and optical coherence tomography) is added by means of the analysis module IV-LINQ. With co-registration a common frame of intravascular imaging techniques with X-ray angiographic images is provided using a three-dimensional model. This functionality is based on the Volcano Angio-IVUS Mapping system (K060483).
In the IV-LINQ workflow the user has to select two angiographic X-ray images in DICOM format. The user indicates a catheter path starting at the imaging tip. This path can be optimized manually by adding, deleting or moving control points on the drawn path. After the catheter path is drawn in both angiographic X-ray images, a 3D reconstruction of the catheter path is calculated.
The user then has to select one IVUS or OCT dataset in DICOM format or the data is streamed from the intravascular imaging console with a DVI streamer. The IVUS or OCT pullback must be acquired using a motorized pullback device. After the 3D catheter path from X-ray angiographic images is calculated and the IVUS or OCT pullback is loaded, IV-LINQ co-registers each IVUS or OCT frame with a position on the 3D catheter path using a distance mapping algorithm. On intravascular images diameter and area measurements can be performed.
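The distance-mapping step described above — assigning each pullback frame a position along the reconstructed 3D catheter path — can be sketched as below. The constant pullback speed, frame rate, and polyline representation are illustrative assumptions; the actual IV-LINQ algorithm is not disclosed in the summary:

```python
import math

def cumulative_lengths(path):
    """Cumulative arc length along a 3D polyline [(x, y, z), ...]."""
    lengths = [0.0]
    for p, q in zip(path, path[1:]):
        lengths.append(lengths[-1] + math.dist(p, q))
    return lengths

def frame_positions(path, n_frames, pullback_speed_mm_s, frame_rate_hz):
    """Map each pullback frame to a 3D point on the catheter path by
    arc-length interpolation. A motorized pullback at constant speed is
    assumed, so frame i lies at distance i * speed / frame_rate from the tip."""
    lengths = cumulative_lengths(path)
    positions = []
    for i in range(n_frames):
        d = min(i * pullback_speed_mm_s / frame_rate_hz, lengths[-1])
        # find the segment containing distance d and interpolate linearly
        j = next(k for k in range(1, len(lengths)) if lengths[k] >= d)
        f = (d - lengths[j - 1]) / (lengths[j] - lengths[j - 1])
        p0, p1 = path[j - 1], path[j]
        positions.append(tuple(a + f * (b - a) for a, b in zip(p0, p1)))
    return positions
```

With the frame-to-position mapping in hand, each IVUS/OCT cross-section can be displayed alongside the corresponding location on the angiographic image, which is the essence of the co-registration the module provides.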
The quantitative results of CAAS Workstation support diagnosis and intervention of cardiovascular conditions. The analysis results are available on screen, and can be exported in various electronic formats. The functionality is independent of the type of vendor acquisition equipment.
The provided text describes the CAAS Workstation and its regulatory submission. It mentions performance data and validation efforts but does not provide explicit acceptance criteria in a table format, nor does it detail a specific study with quantitative results proving adherence to such criteria.
However, I can extract the information provided regarding the device's validation and testing:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not provide a table of acceptance criteria with corresponding performance metrics. Instead, it offers a general statement about performance:
Acceptance Criteria Category | Reported Device Performance |
---|---|
System Requirements | System testing showed that the system requirements were implemented correctly. |
Algorithm Functioning | For each analysis workflow, a validation approach is created, and the proper functioning of the algorithms is validated. |
Regression Testing | For analysis workflows already implemented in earlier versions of CAAS Workstation, regression testing is performed to verify equivalence in numerical results. |
Distance Mapping Algorithm (IV LINQ) | The validation of the distance mapping algorithm used in IV LINQ demonstrated that the length on which co-registration is based meets the accuracy and reproducibility requirements. (Specific accuracy/reproducibility values are not provided). |
Usability Testing (IV-LINQ) | Usability testing is performed to validate the IV-LINQ workflow of CAAS Workstation and demonstrated that the user is able to use IV LINQ for the purpose it was developed for. |
2. Sample Size Used for the Test Set and Data Provenance:
The document does not specify the sample size used for any test sets, nor does it provide information on the data provenance (e.g., country of origin, retrospective or prospective) for training or testing.
3. Number of Experts Used to Establish Ground Truth and Qualifications:
The document does not specify the number of experts used to establish ground truth or their qualifications for any part of the testing.
4. Adjudication Method for the Test Set:
The document does not mention any adjudication method used for a test set.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study:
The document does not mention a Multi Reader Multi Case (MRMC) comparative effectiveness study being done, nor does it provide any effect size for human reader improvement with or without AI assistance.
6. Standalone Performance Study:
The document implies a standalone performance for the algorithm through statements like "System testing showed that the system requirements were implemented correctly" and "proper functioning of the algorithms is validated." However, it does not explicitly detail a dedicated standalone study with specific metrics. The focus is on the software's functionality and accuracy of its calculations.
7. Type of Ground Truth Used:
The document describes "validation approaches" and "proper functioning of the algorithms," and "accuracy and reproducibility requirements" for length measurements. This suggests the ground truth was likely established through:
- Reference measurements or calculations for quantitative aspects (e.g., vessel dimensions, stenosis quantification).
- Comparison to accepted standards or methods for qualitative aspects or algorithmic outputs.
However, the document does not explicitly state the specific type of ground truth used (e.g., expert consensus, pathology, outcomes data).
8. Sample Size for the Training Set:
The document does not provide any information regarding the sample size used for the training set.
9. How Ground Truth for the Training Set Was Established:
As no training set size is provided, the document does not explain how ground truth for a training set was established.
(30 days)
Trade/Device Name: Ultra ICE Plus 9 MHz IntraCardiac Echo Catheter Regulation Number: 21 CFR 870.2330
Ultrasound, Echocardiograph (DXK) has been classified as Class II per 21 CFR 870.2330.
The Ultra ICE Plus rounded tip catheter is indicated for enhanced ultrasonic visualization of intracardiac structures.
Ultra ICE Plus is intended for use with Boston Scientific's (BSC) iLab™ equipment and latest motor drive unit, MDU5 PLUS™. When used together, the catheter, motor drive unit (MDU), and iLab equipment form a complete imaging system that allows for ultrasonic visualization of intracardiac structures. The catheter consists of two main components: the catheter body and the imaging core. The catheter body consists of three sections: the braided proximal shaft, single lumen mid-shaft, and the sonolucent distal tip. The catheter body comprises the usable length of the catheter (110 cm). The braided proximal shaft provides pushability to the catheter and serves as a lumen to the imaging core. The mid-shaft provides a flexible transition between the stiffer proximal shaft and the acoustically transparent distal tip. The distal tip serves as the imaging window and houses a septum situated between the inner lumen and the atraumatic rounded tip of the catheter. The self-sealing septum serves as the distal-flush entry point, as the catheter must be flushed with water prior to use. This provides the acoustic coupling media required for ultrasonic imaging. The imaging core consists of a proximal hub assembly and a rotating drive cable that houses a low frequency piezoelectric (PZT) transducer at the distal imaging window. The hub assembly provides an electro-mechanical interface between the catheter and the motor drive unit. The drive cable and PZT transducer rotate independently of the sheath to provide 360° image resolution. The transducer converts electrical impulses sent by the motor drive into transmittable acoustic energy. Reflected ultrasound signals are converted back to electrical impulses, returned to the motor drive unit, and are ultimately processed by the iLab equipment for live visualization of intracardiac structures.
This document is a 510(k) summary for the Ultra ICE Plus 9 MHz IntraCardiac Echo Catheter. It focuses on demonstrating substantial equivalence to a predicate device through non-clinical performance data, rather than outlining a study to prove the device meets specific acceptance criteria for a new clinical claim. Therefore, much of the requested information regarding clinical studies and ground truth cannot be extracted directly from this document.
However, I can extract the acceptance criteria and the type of studies conducted for non-clinical performance, as well as what the document reports about the device's performance against these criteria.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Performance Criteria Evaluated (Implicit/Explicit from text) | Reported Device Performance |
|---|---|---|
| Bench Testing | Deliverability | Successfully met evaluation conditions |
| | Guide catheter compatibility | Successfully met evaluation conditions |
| | Image quality | Successfully met evaluation conditions |
| | Non-uniform rotational distortion | Successfully met evaluation conditions |
| | Measurement accuracy | Successfully met evaluation conditions |
| | General imaging capabilities | Successfully met evaluation conditions |
| | Dimensional requirements | Successfully met evaluation conditions |
| | Visibility under fluoroscopy | Successfully met evaluation conditions |
| | Interface with ancillary devices | Successfully met evaluation conditions |
| | Environmental requirements | Successfully met evaluation conditions |
| | User interface requirements | Successfully met evaluation conditions |
| | Catheter fatigue and bending stiffness | Successfully met evaluation conditions |
| Biological Safety | Biocompatibility (ISO 10993-1) | Successfully met evaluation conditions |
| | Microbial assessments (bioburden, endotoxin, pyrogenicity) | Successfully met evaluation conditions |
| | Sterility assurance | Successfully met evaluation conditions |
| Electrical & Mechanical Safety | Acoustic output (FDA Guidance, below Track 1 limits) | Below FDA Track 1 limits |
| | Electromagnetic compatibility (IEC 60601-1-2, 3rd Edition) | Demonstrated compliance |
| Packaging Validation | Integrity of packaging configuration (ISO 11607-1, ISO 11607-2) | Successfully met evaluation conditions |
Explanation of "Successfully met evaluation conditions": The document states for bench testing, biological safety, and packaging validation that "Non-clinical performance evaluations, as described above, indicate that the subject device is substantially equivalent to, and at least as safe and effective as the predicate device (Ultra ICE, K902245)." This implies that the device performed acceptably against the established criteria for each of these tests.
2. Sample size used for the test set and the data provenance:
- Test Set (Non-clinical): Specific sample sizes for each non-clinical test (e.g., number of catheters tested for fatigue, number of units for packaging validation) are not provided in this summary.
- Data Provenance: The studies were non-clinical bench, lab, and engineering tests conducted by the manufacturer (Boston Scientific Corporation). There is no mention of country of origin of data in terms of patient data, as no clinical studies were performed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
- This information is not applicable as the studies described are non-clinical (bench testing, biocompatibility, electrical/mechanical safety, packaging validation). No human experts were used to establish ground truth in the context of diagnostic performance or clinical outcomes for these tests.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- This information is not applicable as the studies described are non-clinical tests. Adjudication methods are typically used in clinical studies.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not stated as part of this submission. The submission explicitly states "Clinical Not applicable; determination of substantial equivalence is based on an assessment of non-clinical performance data." This device is an imaging catheter itself, not an AI-assisted diagnostic tool.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- No, a standalone algorithm-only performance study was not stated as part of this submission. This device is an imaging catheter.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the non-clinical tests, the "ground truth" was based on engineering specifications, regulatory standards (e.g., ISO, IEC, FDA guidance), and established scientific methods for evaluating device performance, safety, and compatibility. For example, acoustic output limits are defined by FDA guidance, and biocompatibility by ISO 10993-1.
8. The sample size for the training set:
- This information is not applicable. This is a submission for a medical device (catheter), not an AI algorithm requiring a training set.
9. How the ground truth for the training set was established:
- This information is not applicable. This is a submission for a medical device (catheter), not an AI algorithm requiring a training set.
(77 days)
Echo Imaging System (892.1560) with a Diagnostic Ultrasonic Transducer (892.1570) or Echocardiograph (870.2330)
The ImaCor Zura system with ClariTEE probe is intended for use in the episodic assessment of cardiac function using transesophageal echocardiography. It is indicated for use in clinical settings, including long-term settings such as the ICU, for an indwelling time period not to exceed 72 hours. The ImaCor Zura system with ClariTEE probe is not intended for pediatric use.
The ImaCor Zura system with ClariTEE probe consists of three main components:
- Ultrasound Machine: A TEE predicate device optimized for use with ImaCor ClariTEE probe.
- Ultrasound Probe (ClariTEE formerly known as the Blue Probe in K080223): A miniaturized TEE probe optimized for longer dwell time relative to standard TEE probes enables use in longer term clinical settings such as the ICU. The probe distal tip is flexed upward transiently to obtain standard TEE images.
- Ultrasound Imaging Software: The software controls standard ultrasound machine functions such as imaging, recording, and measuring. Continuous imaging is limited by a 20-minute software interlock should the operator mistakenly leave the machine in continuous imaging mode, limiting unintentional exposure of the patient's mucosal tissue to acoustic energy. Maximum probe face temperature is limited according to FDA consensus standard IEC 60601-2-37. The latest software version enables two modes of operation: type B and color flow Doppler. The earlier version, cleared for marketing under K080223, enabled type B only.
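The 20-minute interlock described above is, in essence, a watchdog on continuous imaging time. The following is a minimal, purely illustrative sketch of such a mechanism; the class name, structure, and injectable clock are assumptions for demonstration, not the actual device software:

```python
import time

INTERLOCK_LIMIT_S = 20 * 60  # hypothetical 20-minute continuous-imaging limit


class ImagingInterlock:
    """Illustrative watchdog that halts continuous imaging after a fixed limit."""

    def __init__(self, limit_s=INTERLOCK_LIMIT_S, clock=time.monotonic):
        self._limit = limit_s
        self._clock = clock        # injectable clock so the logic is testable
        self._started_at = None    # None while imaging is stopped

    def start_imaging(self):
        self._started_at = self._clock()

    def stop_imaging(self):
        self._started_at = None

    def imaging_allowed(self):
        """False once continuous imaging has exceeded the limit."""
        if self._started_at is None:
            return True
        return (self._clock() - self._started_at) < self._limit

    def tick(self):
        """Called periodically by the imaging loop; trips the interlock."""
        if not self.imaging_allowed():
            self.stop_imaging()   # interlock fires: imaging halted
            return False
        return True


# Basic usage: imaging just started, so the interlock has not tripped.
interlock = ImagingInterlock()
interlock.start_imaging()
assert interlock.imaging_allowed()  # well under the limit
```

The key design point this models is that the interlock is time-based and operator-independent: it fires even if no one interacts with the machine, which is exactly the "mistakenly left in continuous mode" scenario the summary describes.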
The ImaCor Zura system with ClariTEE probe, a Transesophageal Echo Imaging System, received 510(k) clearance for software modification. The primary change was the addition of a color flow Doppler mode of operation.
1. Table of Acceptance Criteria and Reported Device Performance
Feature | Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|---|
Intended Use | Identical to predicate device (ImaCor Zura K080223) | "The intended use of the ImaCor Zura with ClariTEE is identical to that of the ImaCor Zura (K080223)." |
Indications for Use | For episodic assessment of cardiac function using TEE; indicated for clinical settings including ICU for indwelling time not exceeding 72 hours; not for pediatric use. | "The ImaCor Zura system with ClariTEE probe is intended for use in the episodic assessment of cardiac function using transesophageal echocardiography. It is indicated for use in clinical settings, including long term settings such as the ICU, for an indwelling time period not to exceed 72 hours. The ImaCor Zura system with ClariTEE probe is not intended for pediatric use." (Verbatim from K100989) |
Technological Characteristics | Similar to predicate devices, with specified modifications related to the probe and software. | System consists of ultrasound machine (predicate optimized for ClariTEE), ClariTEE probe (miniaturized TEE probe for longer dwell time, flexed for images), and ultrasound imaging software (controls functions, 20-minute continuous imaging interlock, max probe face temp limited by IEC 60601-2-37). |
Software Modes | Addition of color flow Doppler mode to support cardiac function assessment. | "Latest software version enables two modes of operation; type B and color flow Doppler." |
Safety - Acoustic Energy | Limit potential unintentional exposure of patient's mucosal tissue to acoustic energy. | "Continuous imaging is limited by a 20 minute software interlock should the operator mistakenly leave the machine in continuous imaging mode, thus limiting the potential unintentional exposure of the patient's mucosal tissue to acoustic energy." |
Safety - Probe Temperature | Maximum probe face temperature limited by FDA consensus standard IEC 60601-2-37. | "Maximum probe face temperature is limited according to FDA consensus standard IEC 60601-2-37." |
Effectiveness (New Feature) | Color flow mode effectively demonstrates design modification for assessing cardiac function. | "Performance data using a flow phantom demonstrates the effectiveness of the design modification; the addition of a color flow mode of operation to support the assessment of cardiac function." |
Overall Safety & Effectiveness | As safe and effective as predicate devices. | "The ImaCor Zura System with ClariTEE probe is as safe and effective as the predicate devices." (Claim of Substantial Equivalence) |
2. Sample Size Used for the Test Set and Data Provenance
The document explicitly states: "Performance data using a flow phantom demonstrates the effectiveness of the design modification; the addition of a color flow mode of operation to support the assessment of cardiac function."
- Test Set Sample Size: Not specified for the flow phantom. Flow phantom studies typically involve simulating various flow conditions but don't involve human subject sample sizes in the traditional sense.
- Data Provenance: The study was conducted on a "flow phantom," which is a controlled, laboratory-based setup designed to simulate fluid flow. This implies an in-vitro or ex-vivo experimental setting, not human subject data. The country of origin is not specified, but the submission is to the US FDA, so presumably the testing was done in a manner compliant with US regulatory expectations, likely in the US or a country with comparable standards. It is a prospective test specifically for this submission.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable. The performance data was gathered from a "flow phantom" for the new color flow mode. This type of testing typically relies on engineering specifications and physical measurements (e.g., flow velocity, direction accuracy) rather than expert human interpretation to establish ground truth for the device's technical function.
4. Adjudication Method for the Test Set
Not applicable. As noted above, the performance data cited is from a "flow phantom," which means the "ground truth" for evaluating the function of the color flow mode would be based on the known parameters of the phantom and direct measurements from the device, not on expert adjudication of medical images.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not done. The document only mentions performance data from a "flow phantom" and a claim of substantial equivalence to predicate devices based on this and other characteristics.
6. If a Standalone (i.e., algorithm-only without human-in-the-loop performance) Study Was Done
Yes, a standalone performance assessment was done for the device's new feature (color flow mode) using a "flow phantom." This is an algorithm-only evaluation in the sense that the device's ability to accurately depict flow was tested independently of human interpretation of clinical images. The data pertains to the technical function of the device itself.
7. The Type of Ground Truth Used
For the specific performance data cited (color flow mode), the ground truth was known physical properties and measurements of the flow phantom. This is typically based on pre-established parameters of the phantom and/or reference measurements from other calibrated equipment, not on expert consensus, pathology, or outcomes data, as those relate more to clinical accuracy studies.
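As a purely illustrative sketch (not taken from the submission), phantom-based verification of this kind amounts to comparing device-reported values against the phantom's calibrated parameters and checking the error against a predefined engineering tolerance. All function names, velocities, and tolerances below are hypothetical:

```python
def verify_against_phantom(known, measured, tolerance_pct=10.0):
    """Return (passed, worst_error_pct) for paired known/measured values.

    `known` are the phantom's calibrated reference values (the ground truth);
    `measured` are the device-reported values for the same test conditions.
    """
    if len(known) != len(measured):
        raise ValueError("paired measurements required")
    # Percent error of each measurement relative to the calibrated value.
    errors = [abs(m - k) / k * 100.0 for k, m in zip(known, measured)]
    worst = max(errors)
    return worst <= tolerance_pct, worst


# Hypothetical phantom run: three calibrated flow velocities (cm/s)
# versus what the color flow mode reported for each.
known_velocities = [20.0, 50.0, 100.0]
measured_velocities = [21.0, 48.5, 103.0]

passed, worst_error = verify_against_phantom(known_velocities, measured_velocities)
# worst error here is 5.0%, within the assumed 10% tolerance
```

This is why no expert adjudication is needed for such tests: the reference values come from the phantom's calibration, so pass/fail is a direct numerical comparison.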
8. The Sample Size for the Training Set
Not applicable to this 510(k) submission. This submission is for modifications to an existing device (software change to add color flow Doppler), not the initial development of an AI algorithm that would require a distinct training set. The "performance data" refers to validation testing, not model training.
9. How the Ground Truth for the Training Set was Established
Not applicable, as there is no mention of a training set or an AI algorithm being independently trained in this 510(k) summary. The submission focuses on modifications to an existing, cleared ultrasound system and probe.
(147 days)
Echo Imaging System (892.1560) with a Diagnostic Ultrasonic Transducer (892.1570) or Echocardiograph (870.2330)
The ImaCor Zura TEE System is intended for use in the episodic assessment of cardiac function using transesophageal echocardiography. It is indicated for use in clinical settings, including long-term settings such as the ICU, for an indwelling time period not to exceed 72 hours. The ImaCor Zura TEE System is not intended for pediatric use.
The ImaCor TEE System consists of three main components:
- Ultrasound Machine: A TEE predicate device optimized for use with the ImaCor miniaturized probe.
- Ultrasound Probe (the "Blue Probe"): A miniaturized TEE probe optimized for a longer dwell time relative to standard TEE probes, enabling use in longer-term clinical settings such as the ICU. The probe's distal tip is flexed upward transiently to obtain standard TEE images.
- Ultrasound Imaging Software: The software controls standard ultrasound machine functions such as imaging, recording, and measuring. Continuous imaging is limited by a 20-minute software interlock should the operator mistakenly leave the machine in continuous imaging mode, limiting unintentional exposure of the patient's mucosal tissue to acoustic energy. Maximum probe face temperature is limited according to FDA consensus standard IEC 60601-2-37.
The provided text is a 510(k) summary for the ImaCor Zura TEE System, which indicates it's a submission for marketing clearance based on substantial equivalence to existing predicate devices, not a de novo clearance requiring extensive clinical performance studies. As such, the information you're requesting regarding acceptance criteria and detailed study results is largely not present in this type of document.
The document focuses on demonstrating that the new ImaCor Zura TEE System is "as safe and effective as the predicate devices" by having "the same intended uses and similar indications, technological characteristics, and principles of operation." The "minor technological characteristics" are asserted to "raise no new issues of safety or effectiveness."
However, I can extract the available information and highlight what is not present in this specific 510(k) summary regarding performance studies:
Key Takeaways from the document:
- Basis for Clearance: Substantial Equivalence to predicate devices (Ultrasonix Modulo, Sonosite Ultrasound Diagnostic System, GE Vivid 7). This means specific, quantitative acceptance criteria for this new device's performance are typically not established and proven in the same way they would be for a novel device.
- Performance Data Mentioned: "Phantom measurement data shows that the ImaCor Zura TEE device is equivalent to predicate TEE devices with respect to effectiveness." It also mentions "preclinical (animal) studies and confirmatory clinical studies." However, no details on their design, sample sizes, methodology, or specific acceptance criteria are provided in this summary.
Here's a breakdown of your requested information based on the provided text, indicating where information is absent (as is common for 510(k) summaries relying on substantial equivalence):
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance |
---|---|
Equivalence to Predicate TEE devices with respect to effectiveness | Phantom measurements showed the ImaCor Zura TEE device is equivalent to predicate TEE devices in effectiveness. |
Safety | Performance data demonstrate that the miniaturization of the ImaCor probe (relative to standard size) and ultrasound transducer does not impact safety. Maximum probe face temperature is limited according to FDA consensus standard IEC 60601-2-37. Continuous imaging is limited by a 20-minute software interlock to prevent unintentional patient exposure. |
Intended Use / Indications for Use: episodic assessment of cardiac function using transesophageal echocardiography in clinical settings (including ICU) for up to 72 hours, not for pediatric use. | The device meets these stated indications for use (implied through substantial equivalence and confirmed by FDA clearance letter). |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size (Test Set): Not specified in the 510(k) summary. The document mentions "phantom measurement data" and "confirmatory clinical studies" but does not provide details on sample sizes for either.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not specified. This information is typically found in detailed study protocols and reports, which are not part of this 510(k) summary.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
- Not applicable/mentioned. The device is an ultrasound imaging system ("algorithm only" in the context of image acquisition and processing), not an AI-assisted diagnostic tool for human readers as described in an MRMC study setup. The summary focuses on the physical device's performance compared to predicates.
6. If a standalone (i.e., algorithm-only without human-in-the-loop performance) study was done
- Yes, implicitly. The "phantom measurement data" evaluates the device directly, demonstrating its standalone effectiveness in acquiring images and measurements, without human interpretation as part of the primary performance metric for substantial equivalence here.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For "phantom measurement data," the ground truth would typically be the known physical characteristics or measurements of the phantom itself.
- For "confirmatory clinical studies," the precise ground truth is not specified, but for a TEE system, it would generally involve comparing images and measurements obtained from the device to established diagnostic criteria or other validated imaging modalities.
8. The sample size for the training set
- Not applicable/specified. This is not an AI/ML algorithm that undergoes a training phase in the conventional sense for a diagnostic output. The "software" component controls standard ultrasound machine functions.
9. How the ground truth for the training set was established
- Not applicable/specified. (See #8)
Summary of Study that Proves the Device Meets Acceptance Criteria:
The document states:
- "Performance data demonstrate that the miniaturization of the ImaCor probe and ultrasound transducer relative to standard size probes does not impact safety or effectiveness."
- "Phantom measurement data shows that the ImaCor Zura TEE device is equivalent to predicate TEE devices with respect to effectiveness."
- "The ImaCor Zura TEE system was also subject to preclinical (animal) studies and confirmatory clinical studies."
The "study" that proves the device meets the (implicit) acceptance criteria for substantial equivalence primarily relies on phantom measurement data showing equivalence to predicate devices, supported by preclinical and clinical studies (details not provided in the summary). The FDA's clearance (K080223) indicates they found this evidence sufficient to establish substantial equivalence based on the device having the "same intended uses and similar indications, technological characteristics, and principles of operation as its predicate devices."
It's important to understand that 510(k) submissions, particularly for devices relying on "substantial equivalence," often do not include the same level of detailed clinical trial data and statistical analysis that would be present in a PMA (Premarket Approval) application or a de novo clearance pathway for novel technologies that raise new questions of safety and effectiveness.
(119 days)
Classification Name, Number, Product Code: Echocardiograph, 21 CFR 870.2330
The Volcano Angio-IVUS Mapping System is intended to provide a common frame of reference for intravascular ultrasound and angiographic images during diagnostic and/or interventional procedures for the coronary vasculature. The system provides indication of the location of the IVUS cross-sectional imaging plane for any given IVUS image as it relates to a two-dimensional (2D) angiographic image and/or the associated three-dimensional (3D) model formulated from multiple 2D image projections.
The Angio-IVUS Mapping System is an image acquisition and processing system on the In-Vision Gold Imaging console designed to process traditional X-ray angiographic images. A standard Ethernet cable links the In-Vision Gold Imaging System to the catheterization lab DICOM network, allowing the angiographic images to be transferred to the Angio-IVUS Mapping System. The system can be used to make a 3D reconstruction of a coronary subtree from two angiogram images taken at different angles. IVUS pullbacks acquired with the In-Vision Gold System can be mapped to the 3D reconstruction and the original 2D angiograms, resulting in a better tool for IVUS orientation and interpretation. In addition, Volcano's VH IVUS can (optionally) be performed, combining the detailed information of the arterial wall with the familiar angiographic roadmap.
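To illustrate why two angiograms taken at different angles determine 3D position, here is a deliberately simplified sketch assuming ideal parallel projection and two exactly orthogonal views (an AP view recording (x, y) and a lateral view recording (z, y)). Real systems use calibrated perspective geometry and least-squares fitting, so this is an assumption-laden illustration, not the vendor's algorithm:

```python
def project_ap(p):
    """Idealized anterior-posterior view: records (x, y), drops depth z."""
    x, y, z = p
    return (x, y)


def project_lateral(p):
    """Idealized lateral view: records (z, y), drops x."""
    x, y, z = p
    return (z, y)


def reconstruct(ap, lateral):
    """Recover the 3D point from the two idealized orthogonal projections."""
    x, y_ap = ap
    z, y_lat = lateral
    # The shared coordinate (y) is seen in both views; averaging it is a
    # toy stand-in for the least-squares consistency step a real system uses.
    return (x, (y_ap + y_lat) / 2.0, z)


point = (12.0, -3.5, 40.0)  # a hypothetical point on a vessel centerline
recovered = reconstruct(project_ap(point), project_lateral(point))
# recovered == (12.0, -3.5, 40.0)
```

Each single projection loses one dimension; the second view, taken at a different angle, supplies exactly the missing coordinate, which is the geometric basis for reconstructing a coronary subtree from two angiogram projections.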
The provided text is a 510(k) summary for the Volcano Angio-IVUS Mapping System, which describes the device and claims substantial equivalence to predicate devices. However, it does not contain information about specific acceptance criteria or a detailed study proving the device meets those criteria.
Instead, the document states:
- Performance Data: "The information provided in the premarket notification demonstrates that the Angio-IVUS Mapping System is substantially equivalent to the predicate devices, for which there is FDA clearance. This equivalence was demonstrated through comparison of intended use and fundamental scientific technology to a commercially available device."
- Conclusion: "The Volcano Angio-IVUS Mapping System has the same indication for use, and utilizes the same fundamental scientific technology as that of the predicate devices. The information provided in this premarket notification submission, along with the Declaration of Conformity to design controls supports a determination of substantial equivalence of the Volcano Angio-IVUS Mapping System to the predicate devices."
This indicates that the submission relies on comparison to predicate devices rather than a new performance study with explicit acceptance criteria for the new device's performance. Therefore, I cannot fill out the requested table or additional study details as that information is not present in the provided text.
The 510(k) process for this device type primarily focuses on demonstrating substantial equivalence in intended use and technological characteristics to a legally marketed predicate device, rather than requiring a detailed de novo clinical performance study against specific acceptance metrics for the new device.
(48 days)
Rancho Cordova, CA 95670
Re: K031148
Trade Name: In-Vision™ Imaging System
Regulation Number: 21 CFR 870.2330
The In-Vision™ Imaging System is used for the qualitative and quantitative evaluation of vascular morphology in the coronary arteries and vessels of the peripheral vasculature. It is also indicated as an adjunct to conventional angiographic procedures to provide an image of vessel lumen and wall structures.
ChromaFlo is indicated for qualitative blood flow information from peripheral and coronary vasculature; flow information can be an adjunct to other methods of estimating blood flow and blood perfusion.
The JOMED Inc. In-Vision™ Imaging System consists of the imaging catheter, the patient interface module, and the system console. The system console gathers and displays high-resolution intraluminal images that can be analyzed both qualitatively and quantitatively. In addition to supplying diagnostic information, the In-Vision™ Imaging System can be an adjunct to interventional therapies, such as balloon angioplasty. With ChromaFlo™, a two-dimensional color map of relative blood flow is overlaid on the grayscale image, providing additional information for vessel analysis. The In-Line Digital™ option displays a two-dimensional, 360° rotational, and longitudinal view of the vessel.
Here's an analysis of the provided text regarding the In-Vision™ Imaging System's acceptance criteria and studies:
Based on the provided 510(k) summary, the document does not contain specific acceptance criteria tables or detailed performance study results in the manner requested. The submission is a "Special 510(k)" for a software modification (version upgrade to V4.1) of an existing device. For such submissions, the focus is often on demonstrating that the changes do not raise new questions of safety or efficacy, rather than presenting extensive de novo performance data.
Therefore, many of the requested fields cannot be filled directly from this document. However, I can extract the available information and highlight what is missing.
Acceptance Criteria and Device Performance
Not explicitly stated in the document. The document states: "Applicable testing was performed to evaluate the modifications to the In-Vision™ Imaging System. The test results were found to be acceptable as required by the respective test plans and protocols."
This indicates that internal acceptance criteria and protocols were used, but the specific metrics and performance values are not disclosed in this 510(k) summary.
Study Details (Based on Available Information)
Since detailed performance data is not provided, many of these sections will indicate "Not specified in document."
# | Feature | Description |
---|---|---|
1 | A table of acceptance criteria and the reported device performance | Not specified in document. The document states "Applicable testing was performed... The test results were found to be acceptable as required by the respective test plans and protocols," but no specific criteria or performance values are provided. |
2 | Sample size used for the test set and data provenance | Not specified in document. The document does not mention sample sizes for any test sets or the origin (country, retrospective/prospective) of any data. |
3 | Number of experts used to establish the ground truth for the test set and qualifications | Not specified in document. No information on expert involvement or ground truth establishment is provided for any testing. |
4 | Adjudication method (e.g., 2+1, 3+1, none) for the test set | Not specified in document. No information on adjudication methods is present. |
5 | If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size | Not specified in document. The document does not mention an MRMC study or any comparison of human readers with vs. without AI assistance. This device is an imaging system, not an AI-assisted diagnostic tool in the modern sense. |
6 | If a standalone (i.e., algorithm-only without human-in-the-loop performance) assessment was done | Not specified in document. While "performance data" is mentioned, there's no detail on whether this involved standalone algorithm performance, or whether the "device" implicitly includes a human operator for interpretation. Given the era and device type, human interpretation is almost certainly integral. |
7 | The type of ground truth used (expert consensus, pathology, outcomes data, etc.) | Not specified in document. |
8 | The sample size for the training set | Not specified in document. It's unlikely a "training set" in the modern machine learning sense was employed given the context of a software modification to an intravascular ultrasound system in 2003. "Training" would more likely refer to system configuration or calibration. |
9 | How the ground truth for the training set was established | Not specified in document. |
Summary of Information from the Document:
- Device Name: In-Vision™ Imaging System
- Purpose of Submission: Software modification (version upgrade to V4.1) to an existing intravascular ultrasound imaging system.
- Comparison: Claims substantial equivalence to predicate devices based on fundamental scientific technology, intended use, and clinical applications.
- Performance Data Statement: "Applicable testing was performed to evaluate the modifications... The test results were found to be acceptable as required by the respective test plans and protocols." This indicates internal testing was done and met predefined criteria, but the specifics are not public in this summary.
- Conclusion: The software modification "raises no new questions about safety and efficacy."
This 510(k) summary focuses on demonstrating that a software update to an already cleared device does not alter its fundamental safety or effectiveness profile, rather than providing a detailed performance study as might be expected for a novel device or an AI/ML product today.
(346 days)
Class II in 21 CFR:
- 870.2100 Cardiovascular Blood Flowmeter
- 870.2120 Extravascular Blood Flow Probe
- 870.2330 Echocardiograph
This modification has no effect on intended use of the HP ultrasound systems.
The modification addressed in this submission is a change from analog to digital circuit technology for the front end of the HP ultrasound imaging systems listed above.
This document is a 510(k) summary for a modification to an existing ultrasound imaging system. It focuses on demonstrating substantial equivalence to predicate devices for a change from analog to digital circuit technology in the front end. Therefore, it does not contain the detailed performance study information typically found in submissions for novel devices or those requiring clinical efficacy studies with specific acceptance criteria.
Based on the provided text, here's what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria and reported device performance in the way typically expected for a new device's clinical performance. Since this is a modification to an existing device, the "acceptance criteria" discussed are primarily related to safety, compliance with medical device standards, and functional equivalence to predicate devices.
Acceptance Criteria | Reported Device Performance |
---|---|
Compliance to medical device safety standards (e.g., IEC 601, UL 2601) | Stated as "shown by compliance" |
Software safety verified by hazard analysis and software validation | Stated as "verified by hazard analysis and software validation to ensure performance specifications are met" |
Performance specifications met | Stated as "ensure performance specifications are met" |
Substantial equivalence to legally marketed predicate devices (ATL HDI 3000, Toshiba SSA-380A) regarding safety, effectiveness, and intended use | Stated as "demonstrate that the modified HP ultrasound imaging systems are substantially equivalent to legally marketed predicate devices" |
2. Sample size used for the test set and the data provenance
Not applicable. This submission is for a technological modification and primarily relies on engineering validation, safety testing, and comparison to predicate devices, not a clinical "test set" with a specific sample size of patient data. There is no mention of patient data being used for this specific filing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. Ground truth for a clinical test set is not discussed as this is a technological modification submission.
4. Adjudication method for the test set
Not applicable. There is no mention of a clinical test set or adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
Not applicable. This document pertains to an ultrasound imaging system and does not mention AI or MRMC studies.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Not applicable. This is not an algorithm-only device; it's a hardware modification to an imaging system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable. Ground truth in the context of clinical data is not discussed. For this submission, the "ground truth" for the claims of safety and equivalence would be the established safety standards, the validated performance specifications of the device, and the characteristics of the predicate devices.
8. The sample size for the training set
Not applicable. There is no mention of a training set as this is not a machine learning or AI-driven device.
9. How the ground truth for the training set was established
Not applicable (as above).
(140 days)
21 CFR:
- 870.2100 Cardiovascular Blood Flowmeter
- 870.2120 Extravascular Blood Flow Probe
- 870.2330 Echocardiograph
This modification expands the intended use statement for the HP SONOS 100CF Ultrasound Imaging System to include obstetrics and gynecology applications.
This 510(k) submission is to add an endovaginal transducer and a new EV(EndoVaginal)/Pelvic study type to the HP SONOS 100CF Ultrasound Imaging System.
This document is a 510(k) summary for a modification to an ultrasound imaging system. It focuses on demonstrating substantial equivalence to predicate devices for regulatory approval, rather than providing a detailed study that proves the device meets specific acceptance criteria in the manner typically seen for novel AI/software devices. Therefore, much of the requested information cannot be found in this summary.
Here's a breakdown of what can and cannot be extracted:
- Acceptance Criteria and Reported Device Performance: This document does not present specific quantitative acceptance criteria or reported device performance in the form of a table for clinical effectiveness. The acceptance is based on compliance to general safety standards and substantial equivalence to predicate devices.
- Sample size used for the test set and the data provenance: Not applicable. There is no mention of a clinical test set with specific data provenance for performance evaluation.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. Ground truth for clinical performance evaluation is not discussed.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance: Not applicable. This summary predates common AI integration in medical devices and does not describe such a study. The device is an ultrasound imaging system, not an AI-powered diagnostic tool.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable. This is not an AI algorithm.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable for clinical performance. The "ground truth" here is compliance with safety standards and functional equivalence to predicate devices.
- The sample size for the training set: Not applicable. This is not a machine learning device that requires a training set.
- How the ground truth for the training set was established: Not applicable.
Based on the provided 510(k) summary, here's what can be stated:
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicit and revolve around:
Acceptance Criterion | Reported Device Performance |
---|---|
Safety Compliance | Compliance to medical device safety standards (IEC 601 and UL 544). Acoustic output data provided. Software safety verified by hazard analysis and software validation. |
Effectiveness (Functional) | Performance specifications are met (due to software validation). Identical OEM endovaginal transducer to predicate device (Sharplan Usight 9010). All other technological characteristics consistent with currently marketed HP SONOS 100CF. |
Intended Use Expansion | The modification expands the intended use to include obstetrics and gynecology applications. |
Substantial Equivalence | Demonstrated substantial equivalence to legally marketed predicate devices (HP/Philips P800 Ultrasound Imaging System and Sharplan Usight 9010 Laparoscopic Ultrasound System) with regards to safety, effectiveness, and intended use. |
2. Sample size used for the test set and the data provenance: Not applicable as this submission relies on compliance to standards, predicate device comparison, and internal validation, not a clinical performance test set with retrospective/prospective data or specific data provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance: No, an MRMC comparative effectiveness study was not done. The device is an ultrasound imaging system, and this submission predates the widespread use of AI in medical imaging.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable. This device is an imaging system, not a standalone algorithm.
7. The type of ground truth used:
* Safety: Compliance with established medical device safety standards (IEC 601, UL 544).
* Functionality/Effectiveness: Verification that performance specifications are met through software validation, and comparison to the known performance of predicate devices.
* Acoustic Output: Measured acoustic output data.
8. The sample size for the training set: Not applicable. This is not a machine learning device that requires a training set.
9. How the ground truth for the training set was established: Not applicable.