510(k) Data Aggregation
(77 days)
EchoPAC Software Only / EchoPAC Plug-in is intended for diagnostic review and analysis of ultrasound images, patient record management and reporting, for use by, or on the order of a licensed physician. EchoPAC Software Only / EchoPAC Plug-in allows post-processing of raw data images from GE ultrasound scanners and DICOM ultrasound images.
Ultrasound images are acquired via B (2D), M, Color M modes, Color, Power, Pulsed and CW Doppler modes, Coded Pulse, Harmonic, 3D, and Real time (RT) 3D Mode (4D).
Clinical applications include: Fetal/Obstetrics; Abdominal (including renal and GYN); Urology (including prostate); Pediatric; Small organs (breast, testes, thyroid); Neonatal and Adult Cephalic; Cardiac (adult and pediatric); Peripheral Vascular; Transesophageal (TEE); Musculo-skeletal Conventional; Musculo-skeletal Superficial; Transrectal (TR); Transvaginal (TV); Intraoperative (vascular); Intra-Cardiac; Thoracic/Pleural and Intra-Luminal.
EchoPAC Software Only / EchoPAC Plug-in provides image processing, annotation, analysis, measurement, report generation, communication, storage and retrieval functionality for ultrasound images that are acquired via the GE Healthcare Vivid family of ultrasound systems, as well as DICOM images from other ultrasound systems. EchoPAC Software Only is offered as software to be installed directly on customer PC hardware, while EchoPAC Plug-in is intended to be hosted by a generalized PACS host workstation. EchoPAC Software Only / EchoPAC Plug-in is DICOM compliant, transferring images and data via LAN between systems, hard copy devices, file servers and other workstations.
The provided 510(k) clearance letter and summary discuss the EchoPAC Software Only / EchoPAC Plug-in, including a new "AI Cardiac Auto Doppler" feature. The acceptance criteria and the study proving the device meets these criteria are primarily detailed for this AI-driven feature.
Here's an organized breakdown of the information:
1. Acceptance Criteria and Reported Device Performance (AI Cardiac Auto Doppler)
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Feasibility score of more than 95% | The verification requirement included a step to check for a feasibility score of more than 95%. (Implies this was met for the AI Cardiac Auto Doppler). |
| Expected accuracy threshold calculated as the mean absolute difference in percentage for each measured parameter. | The verification requirement included a step to check mean percent absolute error across all cardiac cycles against a threshold. All clinical parameters, as performed by AI Cardiac Auto Doppler without user edits, passed this check. These results indicate that observed accuracy of each of the individual clinical parameters met the acceptance criteria. |
| For Tissue Doppler performance metric: Threshold not explicitly stated, but comparative values for BMI groups are provided. | BMI < 25: Mean performance metric = -0.002 (SD = 0.077); BMI ≥ 25: Mean performance metric = -0.006 (SD = 0.081) |
| For Flow Doppler performance metric: Threshold not explicitly stated, but comparative values for BMI groups are provided. | BMI < 25: Mean performance metric = 0.021 (SD = 0.073); BMI ≥ 25: Mean performance metric = 0.003 (SD = 0.057) |
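The two acceptance checks in the table can be sketched as simple computations. The function names and the numeric example below are illustrative assumptions, not definitions taken from the 510(k) summary:

```python
import numpy as np

def feasibility_score(n_successful, n_total):
    # Fraction of recordings the algorithm could measure automatically;
    # the acceptance criterion requires this to exceed 0.95.
    return n_successful / n_total

def mean_percent_absolute_error(ai_values, reference_values):
    # Mean absolute difference, expressed as a percentage of the reference
    # value and averaged across cardiac cycles (illustrative definition).
    ai = np.asarray(ai_values, dtype=float)
    ref = np.asarray(reference_values, dtype=float)
    return float(np.mean(np.abs(ai - ref) / np.abs(ref)) * 100)

# Illustrative acceptance check with made-up numbers:
print(feasibility_score(980, 1000) > 0.95)                     # True
print(mean_percent_absolute_error([10.5, 9.8], [10.0, 10.0]))  # 3.5
```

In the actual verification, each clinical parameter's mean percent absolute error would be compared against its pre-specified threshold.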
2. Sample Size and Data Provenance for the Test Set
- Sample Size:
  - Tissue Doppler: 4106 recordings from 805 individuals.
  - Doppler Trace: 3390 recordings from 1369 individuals.
  - BMI Sub-analysis: 41 patients, 433 Doppler measurements (subset of Vivid Pioneer dataset).
- Data Provenance: Retrospective, collected from standard clinical practices.
  - Countries of Origin: USA (several locations), Australia, France, Spain, Norway, Italy, Germany, Thailand, Philippines.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts:
  - Annotators: Two cardiologists.
  - Review Panel: Five clinical experts.
- Qualifications of Experts:
  - Annotators: Cardiologists, implying medical expertise in cardiac imaging and diagnosis. They followed US ASE (American Society of Echocardiography) based annotation guidelines.
  - Review Panel: Clinical experts, implying medical professionals with experience in the relevant clinical domain.
4. Adjudication Method for the Test Set
The ground truth establishment process involved:
- Two cardiologists performed initial annotations.
- A review panel of five clinical experts provided feedback on these annotations.
- Annotations were corrected (as needed) until a consensus agreement was achieved between the annotators and reviewers. This suggests an iterative consensus-based adjudication method.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was explicitly mentioned. The provided document focuses on the standalone performance of the AI algorithm against expert-derived ground truth, not human-in-the-loop performance.
- Therefore, an effect size of how much human readers improve with AI vs. without AI assistance is not provided.
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance evaluation was done. The "AI Auto Doppler Summary of Testing" section describes the performance of the AI Cardiac Auto Doppler algorithm itself, without human intervention for the critical performance metrics (e.g., "All clinical parameters, as performed by AI Cardiac Auto Doppler without user edits passed this check").
7. Type of Ground Truth Used
- The ground truth was established by expert consensus (two cardiologists performing annotations, reviewed and corrected by a panel of five clinical experts until consensus).
- It was based on manual measurements and assessments of Doppler signal quality and ECG signal quality on curated images, following US ASE based annotation guidelines.
8. Sample Size for the Training Set
- Tissue Doppler: 1482 recordings from 4 unique clinical sites.
- Doppler Trace: 2070 recordings from 4 unique clinical sites.
9. How the Ground Truth for the Training Set Was Established
- The ground truth for both development (training) and verification (testing) datasets was established using the same "truthing" process:
- Annotators (two cardiologists) performed manual measurements after assessing Doppler signal quality and ECG signal quality of curated images.
- These annotations followed US ASE based annotation guidelines.
- A review panel of five clinical experts provided feedback, and corrections were made until a consensus agreement was achieved between the annotators and reviewers.
- It is explicitly stated that the development dataset was selected from clinical sites not used for the testing dataset, ensuring independence between training and test data.
(117 days)
EchoConfidence is Software as a Medical Device (SaMD) that displays images from a Transthoracic Echocardiogram, and assists the user in reviewing the images, making measurements and writing a report.
The intended medical indication is for patients requiring review or analysis of their echocardiographic images acquired for their cardiac anatomy, structure and function. This includes automatic view classification; segmentation of cardiac structures including the left and right ventricle, chamber walls, left and right atria and great vessels; measures of cardiac function; and Doppler assessments.
The intended patient population is both healthy individuals and patients in whom an underlying cardiac disease is known or suspected; the intended patient age range is adults (>= 22 years old) and adolescents in the age range 18–21 years old.
Here's an analysis of the provided FDA 510(k) clearance letter for EchoConfidence (USA), incorporating all the requested information:
Acceptance Criteria and Device Performance Study for EchoConfidence (USA)
The EchoConfidence (USA) device, a Software as a Medical Device (SaMD) for reviewing, measuring, and reporting on Transthoracic Echocardiogram images, underwent a clinical evaluation to demonstrate its performance against predefined acceptance criteria.
1. Acceptance Criteria and Reported Device Performance
The primary acceptance criteria for EchoConfidence were based on the "mean absolute error" (MAE) of the AI's measurements compared to three human experts. The reported performance details indicate that the device met these criteria.
| Acceptance Criteria Category | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Primary Criteria (AI vs. Human Expert MAE) | The upper 95% confidence interval of the difference between the MAE of the AI (against 3 human experts) and the MAE of the 3 human experts (against each other) must be less than +25%. | In the majority of cases, the point estimate (of the difference between AI MAE and human expert MAE) was substantially below 0% (indicating the AI agrees with humans more than they agree with each other). The reporting consistently showed that the upper 95% confidence interval was <0%, and well below the +25% criterion standard. |
| Subgroup Analysis (Consistency) | The performance criteria should be met across various demographic and technical subgroups to ensure robust and generalizable performance. | Across 20 subgroups (by age, gender, ethnicity, cardiac pathologies, ultrasound equipment vendor/model, year of scan, and qualitative image quality), the finding was consistent: the point estimation showed the AI agreed with human experts better than the humans agreed with themselves, and the upper 95% confidence interval was <0% and well below the +25% criterion. |
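The primary criterion above can be made concrete with a small sketch: the AI's MAE against each of the three experts is compared to the experts' pairwise MAE. Function names and data are illustrative, and the confidence-interval computation (e.g., whether bootstrapping was used) is not described in the summary:

```python
import itertools
import numpy as np

def ai_vs_experts_mae(ai, experts):
    # MAE of the AI's measurements against each expert, averaged over experts.
    return float(np.mean([np.mean(np.abs(ai - e)) for e in experts]))

def inter_expert_mae(experts):
    # MAE between every pair of experts, averaged over pairs.
    pairs = list(itertools.combinations(experts, 2))
    return float(np.mean([np.mean(np.abs(a - b)) for a, b in pairs]))

def relative_mae_difference_pct(ai, experts):
    # Point estimate of the criterion: negative values mean the AI agrees
    # with the experts better than the experts agree with each other.
    human = inter_expert_mae(experts)
    return 100.0 * (ai_vs_experts_mae(ai, experts) - human) / human

# Made-up measurements for one parameter across two cases:
experts = [np.array([10.0, 20.0]), np.array([11.0, 21.0]), np.array([12.0, 22.0])]
ai = np.array([11.0, 21.0])
print(relative_mae_difference_pct(ai, experts))  # -50.0, well below the +25% bound
```

This mirrors the summary's approach of preserving each expert's individual measurements rather than averaging them into a single reference.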
2. Sample Size and Data Provenance
- Test Set Sample Size: 200 echocardiographic cases from 200 different patients.
- Data Provenance: All cases were delivered via a US Echocardiography CoreLab. The data used for validation was derived from non-public, US-based sources and was kept on servers controlled by the CoreLab, specifically to prevent it from entering the training dataset. The study was retrospective.
3. Number and Qualifications of Experts for Ground Truth
- Number of Experts: Three (3) human experts.
- Qualifications of Experts: The experts were US accredited and US-based, employed by the US CoreLab that supplied the data. While specific years of experience are not mentioned, their accreditation and employment by a CoreLab imply significant expertise in echocardiography and clinical measurements.
4. Adjudication Method for the Test Set
The ground truth was established by having each of the three human experts independently perform the measurements for each echocardiogram, as if for clinical use. A physician then reviewed and adjusted, if needed, approximately 10% of the measurements. This could be interpreted as a form of a 3-expert reading with a final physician review/adjudication for a subset of cases. The primary analysis method, however, preserved the individual measurements of each expert rather than averaging them, by comparing the AI's MAE to each expert's measurements and then comparing inter-expert MAE.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The provided text does not explicitly describe an MRMC comparative effectiveness study in which human readers' performance with AI assistance is compared to their performance without it to measure improvement (effect size). Rather, the study compares the AI's performance to human experts directly and characterizes inter-expert variability. The device is described as assisting the user in reviewing images, making measurements, and writing reports, suggesting a human-in-the-loop application, but a specific MRMC study measuring reader improvement with AI assistance is not detailed.
6. Standalone (Algorithm Only) Performance
Yes, a standalone performance study was done. The primary acceptance criteria directly evaluate the "mean absolute error" (MAE) of the AI against the 3 human expert reads. This directly assesses the algorithm's performance in generating measurements without human intervention during the measurement process, assuming the output measurements are directly from the AI. The comparison with inter-expert variability helps contextualize this standalone AI performance.
7. Type of Ground Truth Used
The ground truth used was expert consensus / expert measurements. The process involved three human experts independently performing measurements, with a physician reviewing and potentially adjusting ~10% of these measurements. This establishes a "clinical expert gold standard" based on their interpretation and measurement.
8. Sample Size for the Training Set
The sample size for the training set is not explicitly stated in the provided document. It only mentions that the dataset used for development and internal testing was derived from a separate source and was not from the US-based CoreLab that provided the validation data.
9. How Ground Truth for the Training Set Was Established
The method for establishing ground truth for the training set is not explicitly described in the provided document. It only states that the development dataset was separate from the validation dataset and that within the development dataset, source patients were specifically tagged as being used for either training or internal testing.
(166 days)
EchoGuide is a vascular ultrasound imaging device meant to aid in identification of the cannulation site on the skin of mature arteriovenous fistulas/grafts (AVFs/AVGs) in adult patients by appropriately trained healthcare providers in clinical settings. This device is not meant to replace the current standard of care cannulation methods.
EchoGuide is a 3D automated ultrasound solution designed to provide the benefits of ultrasound for arteriovenous fistula/graft cannulation without the need for extensive training. EchoGuide uses a three-dimensional probe to acquire live coronal plane images, in addition to automating imaging settings, to allow users to quickly assess the position, trajectory, and size of an arteriovenous fistula/graft. Users can then mark the position and trajectory of the access on the patient's skin before removing the probe and proceeding with cannulation.
The EchoGuide probe houses a 2D array on a track. The piezoelectric material in the transducer is used as an ultrasound source to transmit sound waves into the body. Sound waves are reflected back to the transducer and converted to electrical signals that are processed. The transducer is a 52 mm linear (192 element) motorized probe capable of capturing a volume of data. The motorized probe collects a series of 2D images to capture a volume. Through image analysis and processing, the volumes are sliced to create live coronal plane renderings.
The ultrasound system has a laptop form factor, with a bottom touch screen for user interaction and an additional top screen for display. The transmitter and receiver are self-contained within the case. The ultrasound system interfaces with the probe through a port on the right side of the system.
The EchoGuide user interface defaults to a conventional 2D ultrasound image when the system powers on. Users can switch between this view and the live coronal imaging via controls on the bottom screen of the ultrasound system. Users can also freeze the imaging and capture a snapshot of the fistula/graft. The snapshot displays a static view of the fistula/graft in the coronal and transverse planes.
EchoGuide is intended to be used in a clinical setting at the point of hemodialysis care.
This document primarily focuses on the FDA 510(k) clearance process for "EchoGuide (Version 1)" and its substantial equivalence to predicate devices, rather than a detailed report of a study proving the device meets specific performance acceptance criteria for an AI algorithm. The provided text touches on non-clinical and clinical testing but does not provide the granular details required to answer all parts of your request, particularly regarding specific performance metrics (e.g., sensitivity, specificity, accuracy), expert qualification for ground truth, and the specifics of AI-assisted human reader studies.
Here's an analysis based on the information provided, highlighting what can be discerned and what is missing:
Acceptance Criteria and Device Performance for EchoGuide (Version 1)
The provided FDA 510(k) clearance letter and summary primarily discuss the substantial equivalence of the EchoGuide device to existing predicate devices, focusing on its intended use, technological characteristics, and conformance to general safety and performance standards for imaging devices. It does not explicitly state acceptance criteria in terms of specific performance metrics (e.g., sensitivity, specificity, accuracy for identifying cannulation sites) for an AI component.
The document mentions "imaging accuracy and quality" as confirmed by a clinical study, but without providing the quantitative acceptance criteria or the reported performance values against these criteria.
Missing Information:
- A specific table of acceptance criteria for AI-driven performance metrics (e.g., a specific target sensitivity or accuracy).
- Reported device performance values against these specific AI-driven criteria. The text only vaguely states "confirms the imaging accuracy and quality of EchoGuide for in vivo use."
Study Details (Based on available information):
1. Table of Acceptance Criteria and Reported Device Performance:
As noted above, this level of detail is not provided in the given FDA 510(k) document. The document focuses on showing substantial equivalence and conformance to general device standards.
2. Sample Size and Data Provenance for Test Set:
- Sample Size: The document states, "Data from a non-significant risk observational study was used to confirm in vivo imaging adequacy of EchoGuide for the exam of hemodialysis accesses." It further specifies that "Images were collected on upper arm arteriovenous fistulae." However, the exact sample size (number of patients or images) for this "test set" is not specified.
- Data Provenance: The study was described as a "prospective, single arm, non-randomized, observational study." The country of origin is not explicitly stated, but given the FDA clearance, it's highly likely to be US data or data suitable for US regulatory submission.
3. Number and Qualifications of Experts for Ground Truth:
- The document does not provide any information on the number of experts used to establish ground truth or their specific qualifications (e.g., "Radiologist with X years of experience").
- It mentions the device is "meant to aid in identification of the cannulation site...by appropriately trained healthcare providers in clinical settings," but doesn't detail how the 'true' cannulation sites were established for the study.
4. Adjudication Method:
- The document does not provide any information on the adjudication method used for establishing ground truth (e.g., 2+1, 3+1 consensus).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance. The focus is on the device's imaging quality for in vivo use.
- Therefore, there is no information on the effect size of how much human readers improve with AI vs. without AI assistance.
6. Standalone (Algorithm-Only) Performance:
- The device is described as an "ultrasound solution designed to provide the benefits of ultrasound for arteriovenous fistula/graft cannulation" and automates "imaging settings" to allow users to "assess the position, trajectory, and size." It also states, "EchoGuide is a vascular ultrasound imaging device meant to aid in identification of the cannulation site...by appropriately trained healthcare providers."
- This phrasing suggests that the device (likely including algorithmic processing for image presentation and perhaps automated measurements/guidance, but without explicit AI claims) is intended to be used by a human operator to assist in a task. It is not presented as a standalone diagnostic algorithm that outputs a decision on its own.
- The document does not explicitly describe a standalone ("algorithm-only") performance study in terms of metrics like sensitivity, specificity, or AUC as one might see for a diagnostic AI. The "imaging accuracy and quality" mentioned is likely related to the visual representation and utility for a human user.
7. Type of Ground Truth Used:
- The document refers to a "non-significant risk observational study" where "Images were collected on upper arm arteriovenous fistulae." The study's primary objective was "data collection." It confirms "in vivo imaging adequacy."
- The nature of the ground truth is not explicitly stated beyond being related to "in vivo imaging adequacy" for "hemodialysis accesses." For specific "identification of the cannulation site," ground truth could potentially involve:
- Expert consensus (e.g., radiologists/sonographers outlining the vessel)
- Pathology (unlikely for this application)
- Outcomes data (e.g., successful cannulation rates after using the device, but this is usually a separate clinical utility study)
- Perhaps direct measurements or annotations from images performed by clinicians in the study.
- The specific method for establishing the 'true' cannulation site or vessel parameters is not detailed.
8. Sample Size for Training Set:
- The document describes a clinical evaluation/validation study, but it does not provide information on the sample size of any training set used for the development of the EchoGuide's algorithms (if it uses machine learning/AI models). This information is typically found in design validation documentation, not necessarily in the public 510(k) summary focused on substantial equivalence.
9. How Ground Truth for Training Set Was Established:
- Since information on a training set is not provided, there is no information on how ground truth for such a set would have been established.
Summary of Missing Information Critical for Full Response:
The provided document, being a 510(k) clearance letter and summary, primarily focuses on demonstrating substantial equivalence to predicate devices and adherence to general performance standards, rather than providing a detailed clinical study report for an AI-driven device with specific performance metrics against pre-defined acceptance criteria. Therefore, many of the specific details requested regarding AI performance studies, sample sizes, expert qualifications, and ground truth establishment are not present in this particular type of FDA document.
(195 days)
The Echo Intracranial Base Catheter is indicated for the introduction of interventional devices into the neurovasculature.
The Echo Intracranial Base Catheter is a single lumen, flexible, variable stiffness catheter with a 0.100 inch inner diameter, designed for use in facilitating the insertion and guidance of appropriately sized interventional devices into the neurovascular system. It has a radiopaque marker band on the distal end and a luer hub at the proximal end. The distal catheter shaft has a 14 cm lubricious coating to reduce friction during use. It is packaged with a dilator and two rotating hemostatic valves. The Echo Intracranial Base Catheter is compatible with introducer sheaths with an inner diameter of 9F or greater. The Echo Intracranial Base Catheter is supplied sterile, non-pyrogenic, and intended for single use only.
The provided text describes a medical device called the "Echo™ Intracranial Base Catheter", but it does not contain the acceptance criteria or a study proving that an AI-driven device meets acceptance criteria.
Instead, the document is a 510(k) premarket notification for a traditional medical device (a catheter) and discusses its substantial equivalence to a predicate device. It details bench testing, animal safety testing, biocompatibility, sterilization, and shelf life, which are standard for such device submissions. It does not mention any AI component or software, nor does it refer to acceptance criteria in the context of device performance metrics like sensitivity, specificity, or any other statistical measure typically used for AI/software-driven diagnostic or assistive tools.
Therefore, I cannot extract the requested information regarding AI acceptance criteria or studies from this document.
If you have a document describing an AI-driven device and its performance studies, please provide that text.
(232 days)
EchoGo Amyloidosis 1.0 is an automated machine learning-based decision support system, indicated as a screening tool for adult patients aged 65 years and over with heart failure undergoing cardiovascular assessment using echocardiography.
When utilised by an interpreting physician, this device provides information alerting the physician for referral to confirmatory investigations.
EchoGo Amyloidosis 1.0 is indicated in adult patients aged 65 years and over with heart failure. Patient management decisions should not be made solely on the results of the EchoGo Amyloidosis 1.0 analysis.
EchoGo Amyloidosis 1.0 takes a 2D echocardiogram of an apical four chamber (A4C) as its input and reports as an output a binary classification decision suggestive of the presence of Cardiac Amyloidosis (CA).
The binary classification decision is derived from an AI algorithm developed using a convolutional neural network that was pre-trained on a large dataset of cases and controls.
The A4C echocardiogram should be acquired without contrast and contain at least one full cardiac cycle. Independent training, tune and test datasets were used for training and performance assessment of the device.
EchoGo Amyloidosis 1.0 is fully automated without a graphical user interface.
The ultimate diagnostic decision remains the responsibility of the interpreting clinician using patient presentation, medical history, and the results of available diagnostic tests, one of which may be EchoGo Amyloidosis 1.0.
EchoGo Amyloidosis 1.0 is a prescription only device.
The provided text describes the acceptance criteria and a study proving that the EchoGo Amyloidosis 1.0 device meets these criteria.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as clear, quantitative thresholds in a "table" format within the provided text. Instead, the document describes the study that was conducted to demonstrate performance against generally accepted metrics for such devices (e.g., sensitivity, specificity, PPV, NPV, repeatability, reproducibility).
However, based on the results presented in the "10.2 Essential Performance" and "10.4 Precision" sections, we can infer the achieved performance metrics. The text states: "All measurements produced by EchoGo Amyloidosis 1.0 were deemed to be substantially equivalent to the predicate device and met pre-specified levels of performance." It does not, however, explicitly list those "pre-specified levels."
Here's a table summarizing the reported device performance:
| Metric | Reported Device Performance (95% CI) | Notes |
|---|---|---|
| Essential Performance | ||
| Sensitivity | 84.5% (80.3%, 88.5%) | Based on native disease proportion (36.7% prevalence) |
| Specificity | 89.7% (87.0%, 92.4%) | Based on native disease proportion (36.7% prevalence) |
| Positive Predictive Value (PPV) | 82.7% (78.8%, 86.5%) | At 36.7% prevalence |
| Negative Predictive Value (NPV) | 90.9% (88.8%, 93.2%) | At 36.7% prevalence |
| PPV (Inferred) | 15.6% (11.0%, 20.8%) | At 2.2% prevalence |
| NPV (Inferred) | 99.6% (99.5%, 99.7%) | At 2.2% prevalence |
| No-classifications Rate | 14.0% | Proportion of data for which the device returns "no classification" |
| Precision | ||
| Repeatability (Positive Agreement) | 100% | Single DICOM clip analyzed multiple times |
| Repeatability (Negative Agreement) | 100% | Single DICOM clip analyzed multiple times |
| Reproducibility (Positive Agreement) | 85.5% (82.4%, 88.2%) | Different DICOM clips from the same individual |
| Reproducibility (Negative Agreement) | 79.9% (76.5%, 83.2%) | Different DICOM clips from the same individual |
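The "inferred" PPV and NPV at 2.2% prevalence follow from applying Bayes' theorem to the observed sensitivity and specificity. A minimal sketch using the reported point estimates:

```python
def ppv(sens, spec, prevalence):
    # P(disease | positive result), by Bayes' theorem.
    return (sens * prevalence) / (sens * prevalence + (1 - spec) * (1 - prevalence))

def npv(sens, spec, prevalence):
    # P(no disease | negative result), by Bayes' theorem.
    return (spec * (1 - prevalence)) / (spec * (1 - prevalence) + (1 - sens) * prevalence)

sens, spec = 0.845, 0.897  # reported point estimates
print(round(100 * ppv(sens, spec, 0.022), 1))  # 15.6 — matches the inferred PPV
print(round(100 * npv(sens, spec, 0.022), 1))  # 99.6 — matches the inferred NPV
```

This reproduces the table's point estimates at screening prevalence; the confidence intervals would additionally require the uncertainty in sensitivity and specificity, which this sketch does not model.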
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 1,164 patients
- 749 controls
- 415 cases
- Data Provenance: Retrospective case-control study, collected from multiple sites spanning nine states in the USA. The data also included some "Non-USA" origin (as seen in the subgroup analysis table), but the overall testing data appears to be primarily US-based.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state the number of experts or their specific qualifications (e.g., radiologists with X years of experience) used to establish the ground truth for the test set. It mentions that clinical validation was conducted to "assess agreement with reference ground truth" but does not detail how this ground truth was derived or by whom.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1, none) used for the test set's ground truth establishment.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, the document does not describe an MRMC comparative effectiveness study where human readers improve with AI vs. without AI assistance. The study described is a standalone performance validation of the algorithm against a defined ground truth.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The results presented (sensitivity, specificity, PPV, NPV) are for the algorithm's performance without a human-in-the-loop. The device is described as "fully automated without a graphical user interface" and is a "decision support system" that "provides information alerting the physician for referral." The performance metrics provided are directly from the algorithm's output compared to ground truth.
7. The Type of Ground Truth Used
The document states: "The clinical validation study was used to demonstrate consistency of the device output as well as to assess agreement with reference ground truth." However, it does not specify the nature of this "reference ground truth" (e.g., expert consensus, pathology, outcomes data).
8. The Sample Size for the Training Set
The training data characteristics table shows the following sample sizes:
- Controls: 1,262 (sum of age categories: 118+197+337+388+222)
- Cases: 1,302 (sum of age categories: 122+206+356+389+229)
- Total Training Set Sample Size: 2,564 patients
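The totals above can be checked directly from the age-category counts quoted in the summary:

```python
# Age-category counts from the training data characteristics table.
controls_by_age = [118, 197, 337, 388, 222]
cases_by_age = [122, 206, 356, 389, 229]

print(sum(controls_by_age))                      # 1262
print(sum(cases_by_age))                         # 1302
print(sum(controls_by_age) + sum(cases_by_age))  # 2564
```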
9. How the Ground Truth for the Training Set Was Established
The document states: "The binary classification decision is derived from an AI algorithm developed using a convolutional neural network that was pre-trained on a large dataset of cases and controls." It mentions that "Algorithm training data was collected from collaborating centres." However, it does not explicitly describe how the ground truth labels (cases/controls) for the training set were established. It is implied that these were clinically confirmed diagnoses of cardiac amyloidosis (cases) and non-amyloidosis (controls), but the method (e.g., biopsy, clinical diagnosis based on multiple tests, expert review) is not detailed.
(142 days)
iCardio.ai EchoMeasure is software that is used to process previously acquired DICOM-compliant cardiac ultrasound images, and to make measurements on these images in order to provide automated estimation of several cardiac measurements. The data produced by this software is intended to be used to support qualified cardiologists, sonographers, or other licensed professional healthcare practitioners for clinical decision-making.
iCardio.ai EchoMeasure is indicated for use in adult patients.
iCardio.ai EchoMeasure is a software device used to process previously acquired DICOM-compliant transthoracic cardiac ultrasound images. The software provides automated view classification and a quality check of images, and then provides automated estimates of several cardiac anatomical measurements and quantities.
iCardio.ai EchoMeasure is a comprehensive software application that seamlessly integrates image pre-processing and quality check of standard cardiac ultrasound views and provides automated measurements of standard cardiac parameters and measurements.
iCardio.ai EchoMeasure is designed to sort through and determine the eligibility criteria for downstream processing, including image quality, and appropriate cardiac view. The following pre-processing steps are considered in making a determination about image eligibility for processing:
- Echocardiographic view classification
- Echocardiographic view overall image quality
- End-Diastolic and End-Systolic frame identification
iCardio.ai EchoMeasure automatically sorts through and recognizes these key parameters to then allow an image to pass for automated processing for measurement of several cardiac parameters, including:
- Left Ventricular Volume (A2C, A4C, and Biplane; Systole and Diastole)
- Left Ventricular Diameter (Systole and Diastole)
- Right Ventricular Diameter
- Posterior Wall Thickness
- Aortic Annulus Diameter
- Left Ventricular Outflow Tract Diameter
- Sinus of Valsalva Diameter
- Sinotubular Junction Diameter
- Left Atrium Dimension
- Interventricular Septal Thickness
Machine learning based view detection, quality grading, key frame selection, automated keypoint detection and segmentation form the basis of the software's automated analysis.
iCardio.ai EchoMeasure output is intended for consumption by 3rd party software and hardware vendors. Additionally iCardio.ai has a native browser interface for reviewing the report summary as well as a functionality to download the available report in PDF format. The iCardio.ai EchoMeasure browser interface allows the end user to view both 2D image and cine loops determined by the software and to review the automated measurements produced. It is the option of the reviewing clinician to accept, reject, edit, or ignore the output provided by iCardio.ai EchoMeasure.
A report, automatically generated from the calculated parameters, is returned to the interpreting clinician. This software device aims to aid diagnostic review and analysis of echocardiographic data, patient record management, and reporting. It also features tools for organizing and displaying quantitative data from cardiovascular images acquired from ultrasound scanners. It is exclusively for use by qualified clinicians.
Here's an analysis of the acceptance criteria and study detailed in the provided text:
Acceptance Criteria and Device Performance
1. Table of Acceptance Criteria & Reported Device Performance
The acceptance criteria for iCardio.ai EchoMeasure's performance were based on the Bi-variate Linear Regression Coefficient Slope (BLRSC). The device was designed to estimate the "worst-case" error, defined as the difference between the software output and the mean of three clinician-derived annotations. The acceptance criterion was that the estimated worst-case BLRSC (based on the 95% CI) for each endpoint must be above a certain predetermined threshold. The study's conclusion explicitly states that "In no instance did the worst-case BLRSC for a given measurement (calculated based on the 95% confidence interval) fall below the predetermined, minimum allowable BLRSC threshold."
| Measurement | Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Value [95% CI] BLRSC) |
|---|---|---|---|
| Aortic Annulus Diameter | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.952 [0.829, 1.082] |
| Left Ventricular Outflow Tract Diameter | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 1.112 [0.970, 1.255] |
| Sinus of Valsalva Diameter | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.932 [0.848, 1.015] |
| Sinotubular Junction Diameter | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.773 [0.676, 0.869] |
| Left Atrial Diameter | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.888 [0.830, 0.944] |
| Left Ventricular Diameter (Systole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.860 [0.776, 0.945] |
| Left Ventricular Diameter (Diastole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.791 [0.710, 0.869] |
| Right Ventricular Diameter (Diastole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.786 [0.715, 0.854] |
| Interventricular Septal Thickness | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.833 [0.731, 0.934] |
| Posterior Wall Thickness | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.785 [0.664, 0.904] |
| Left Ventricular Volume (A4C-Systole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 1.059 [0.977, 1.158] |
| Left Ventricular Volume (A4C-Diastole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.943 [0.869, 1.013] |
| Left Ventricular Volume (A2C-Systole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.936 [0.777, 1.048] |
| Left Ventricular Volume (A2C-Diastole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 1.005 [0.917, 1.096] |
| Biplane LV Volume (Systole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.906 [0.795, 0.993] |
| Biplane LV Volume (Diastole) | BLRSC | Worst-case BLRSC (lower bound of 95% CI) above a predetermined minimum allowable threshold. | 0.972 [0.893, 1.054] |
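The document does not describe how the regression slope and its confidence interval were computed. One common approach — an ordinary least-squares slope of software output regressed on the mean of the clinician annotations, with a bootstrap 95% CI whose lower bound serves as the "worst case" — is sketched below. All data here is synthetic, and the exact method used in the submission is not stated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired data: x is the mean of three clinician annotations,
# y is the corresponding software output (e.g. a diameter in mm).
x = rng.uniform(18.0, 30.0, size=200)
y = 0.95 * x + rng.normal(0.0, 1.0, size=200)

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / (xc * xc).sum())

# Bootstrap resampling of (x, y) pairs to obtain a 95% CI on the slope;
# the lower percentile plays the role of the "worst-case" slope.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(x), len(x))
    boot.append(ols_slope(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"slope={ols_slope(x, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Under this scheme, an acceptance check would compare `lo` against the predetermined minimum allowable threshold for that measurement.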
2. Sample Size for Test Set and Data Provenance
- Sample Size: 200 comprehensive echocardiography studies from 200 distinct patients. A single DICOM was selected for each relevant view (PLAX, A2C, or A4C).
- Data Provenance: Retrospective, sampled from two independent clinical sites from two different US states. This was done to assure a wide sample of imaging data and patient demographics. No data from these sites was used for the training or tuning of the algorithm.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
- Number of Experts: Three (3)
- Qualifications: Experienced US-based cardiac sonographers.
4. Adjudication Method for Test Set
The ground truth was established using the mean of three (3) clinician-derived annotations per case. This implies a consensus-based approach or averaging of independent expert measurements.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The provided text does not mention a multi-reader multi-case (MRMC) comparative effectiveness study to assess the effect of AI assistance on human reader performance. The study described is a standalone performance study.
6. Standalone Performance Study
Yes, a standalone performance study was conducted. The objective was to demonstrate successful device performance using prospectively-defined success criteria for each endpoint, specifically evaluating the "worst-case" error for linear and volumetric measurements against clinician-derived ground truth.
7. Type of Ground Truth Used
The ground truth used was expert consensus based on manual measurements and segmentations performed by experienced clinicians (the mean of three experienced US-based cardiac sonographers).
8. Sample Size for Training Set
The text does not specify the sample size for the training set. It only mentions that the sonographers used for the standalone study were independent of those used to annotate the training data, and that data from the two clinical sites used for the test set was not used for training or tuning.
9. How Ground Truth for Training Set was Established
The text does not explicitly detail how the ground truth for the training set was established, other than noting that different sonographers were involved compared to the test set ground truth establishment.
(154 days)
EchoSolv AS is a machine learning (ML) and artificial intelligence (AI) based decision support software indicated for use as an adjunct to echocardiography for assessment of severe aortic stenosis (AS).
When utilized by an interpreting physician, this device provides information to facilitate rendering an accurate diagnosis of AS. Patient management decisions should not be made solely on the results of the EchoSolv AS analysis.
EchoSolv AS includes both the algorithm-based AS phenotype analysis and the application of recognized AS clinical practice guidelines.
Limitations: EchoSolv AS is not intended for patients under the age of 18 years or those who have previously undergone aortic valve replacement surgery
EchoSolv AS is a standalone, cloud-based decision support software which is intended to be used by a board-certified cardiologist to aid in the diagnosis of Severe Aortic Stenosis. EchoSolv AS analyzes basic patient demographic data and measurements obtained from a transthoracic echo examination to provide a categorical assessment as to whether the data are suggestive of a high, medium or low probability of Severe AS. EchoSolv AS is intended for patients who are 18 years or older who have an echocardiogram performed as part of routine clinical care (i.e., for the evaluation of structural heart disease).
Patient demographic and echo measurement data is automatically processed through the artificial intelligence algorithm which provides an output regarding the probability of a Severe AS phenotype to aid in the clinical diagnosis of Severe AS during the review of the patient echo study and generation of the final study report, according to current clinical practice guidelines. The software provides an output on the following assessments:
- Severe AS Phenotype Probability
Whether the patient has a high, medium, or low probability of exhibiting a Severe AS phenotype, based on analysis by the EchoSolv AS proprietary AI algorithm, indicating that the predicted AVA is ≤1.0 cm². The AI probability score requires a minimum set of data inputs to provide a valid output, but is based on all available echocardiographic measurement data and does not rely on the traditional LVOT measurements used in the continuity equation.
- Severe AS Guideline Assessment
Whether the patient meets the definition for Severe AS based on direct evaluation of provided echocardiogram data measurements (AV Peak Velocity, AV Mean Gradient and AV Area) with current clinical practice guidelines (2020 ACC/AHA Guideline for the Management of Patients with Valvular Heart Disease).
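For context, the "traditional LVOT measurements" the AI score avoids feed the continuity equation, which estimates aortic valve area from the LVOT diameter and the two velocity-time integrals. A minimal sketch with illustrative values (not taken from the submission):

```python
import math

def continuity_ava(lvot_diam_cm: float, vti_lvot_cm: float, vti_av_cm: float) -> float:
    """Aortic valve area (cm^2) via the continuity equation:
    AVA = LVOT cross-sectional area * VTI_LVOT / VTI_AV."""
    lvot_area = math.pi * (lvot_diam_cm / 2.0) ** 2
    return lvot_area * vti_lvot_cm / vti_av_cm

# Hypothetical measurements: 2.0 cm LVOT diameter, VTIs of 20 cm and 80 cm.
ava = continuity_ava(2.0, 20.0, 80.0)
print(f"AVA = {ava:.2f} cm^2")                  # → AVA = 0.79 cm^2
print("Severe by AVA criterion:", ava <= 1.0)   # → True
```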
EchoSolv AS is intended to be used by board-certified cardiologists who review echocardiograms during the evaluation and diagnosis of structural heart disease, namely aortic stenosis. EchoSolv AS is intended to be used in conjunction with current clinical practices and workflows to improve the identification of Severe AS cases.
Here's an analysis of the acceptance criteria and study detailed in the provided document for the EchoSolv AS device:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" in a tabulated format. However, based on the performance data presented, the implicit acceptance criteria can be inferred from the reported performance and comparison to a predicate device. The performance metrics reported are AUROC, Sensitivity, Specificity, Diagnostic Likelihood Ratios (DLR), and improvement in reader AUROC and concordance in the MRMC study.
| Performance Metric | Implicit Acceptance Criterion (Based on context/predicate) | Reported Device Performance (EchoSolv AS) |
|---|---|---|
| Standalone Performance | ||
| AUROC (Overall) | Expected to be high, comparable to or better than predicate (Predicate: 0.927 AUROC) | 0.948 (95% CI: 0.943-0.952) |
| Sensitivity (at high probability) | High (No specific threshold given, but expected to detect a good proportion of true positive cases) | 0.801 (95% CI: 0.786-0.818) |
| Specificity (at high probability) | High (No specific threshold given, but expected to correctly identify true negative cases) | 0.923 (95% CI: 0.915-0.932) |
| DLR (Low Probability) | Low (Indicative of low probability of disease) | 0.067 (95% CI: 0.057-0.080) |
| DLR (Medium Probability) | Close to 1 (Weakly indicative) | 0.935 (95% CI: 0.829-1.05) |
| DLR (High Probability) | High (Strongly indicative of disease) | 10.3 (95% CI: 9.22-11.50) |
| Cochran-Armitage Trend Test (p-value) | Statistically significant trend (p < 0.05) | <0.0001 (Statistic: 41.362) |
| AUROC across subgroups (Age, Sex, Race, LVEF, BMI, Inputs) | Consistent high performance across demographics and input completeness | Consistently high, ranging from 0.914 to 0.970, demonstrating consistency. Lowest for LVEF <30% but still strong. |
| Clinical Performance (MRMC Study) | ||
| Mean AUROC (assisted vs. unassisted) | Improvement expected with AI assistance (Predicate showed 0.054 improvement) | Improvement of 0.018 (95% CI: 0.037-0.001; p=0.064) |
| Reader Concordance (assisted vs. unassisted) | Improvement expected with AI assistance | Improvement of 0.027 (unassisted: 0.641; assisted: 0.667) |
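The reported high-probability DLR is consistent, up to rounding of the inputs, with the usual category-specific likelihood-ratio formula applied to the sensitivity and specificity reported at the high-probability cut:

```python
# DLR(high) = P(output = high | severe AS) / P(output = high | no severe AS)
sens_high = 0.801   # P(high | severe AS), from the table above
spec_high = 0.923   # P(not high | no severe AS), from the table above
dlr_high = sens_high / (1.0 - spec_high)
print(f"DLR(high) ≈ {dlr_high:.1f}")  # → DLR(high) ≈ 10.4 (document reports 10.3)
```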
2. Sample Size Used for the Test Set and Data Provenance
- Standalone Performance Test Set:
- Sample Size: 6,268 studies.
- Data Provenance: The document states the dataset for model development was randomly split into training and test sets. The standalone performance testing was performed on an "independent retrospective cohort study" meaning the data was collected from past records. The country of origin for this specific retrospective cohort is not explicitly stated, but it's implied to be within a clinical setting that would allow for US board-certified cardiologists to review and verify the data.
- Clinical Performance (MRMC) Test Set:
- Sample Size: 200 retrospective transthoracic echocardiogram (TTE) studies (100 disease cases, 100 control studies).
- Data Provenance: Retrospective, performed at "one investigational site in the US."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Standalone Performance Test Set:
- Number of Experts: More than one, implied by "US board certified cardiologists" (plural).
- Qualifications: "US board certified cardiologists." No specific years of experience are mentioned.
- Clinical Performance (MRMC) Test Set:
- Number of Experts: Two.
- Qualifications: "board certified cardiologists." No specific years of experience are mentioned.
4. Adjudication Method for the Test Set
- Standalone Performance Test Set: The ground truth was established by "US board certified cardiologists, who reviewed and verified the echocardio data and hemodynamic profile... and were blinded to the device output." This implies a consensus or majority rule adjudication among the "cardiologists" if multiple were involved per case. There is no explicit "2+1" or "3+1" method mentioned, but rather a verification process.
- Clinical Performance (MRMC) Test Set: "The total test dataset was reviewed by two board certified cardiologists to confirm the presence and severity of Severe AS." This suggests a 2-reader agreement for the ground truth. If there was disagreement, an implied adjudication process (e.g., a third reader or consensus discussion) would be needed, but it is not specified.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- Yes, a MRMC comparative effectiveness study was done.
- Effect Size of Improvement:
- Mean AUROC Improvement: 0.018 (95% CI: 0.037-0.001; p=0.064).
- Reader Concordance Improvement: 0.027 (from 0.641 unassisted to 0.667 assisted).
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone performance testing was performed. The objective was to assess the "native system performance of the EchoSolv AS model in its ability to detect Severe AS."
7. The Type of Ground Truth Used
- Standalone Performance Test Set: Expert consensus based on "the assessment of the presence of severe aortic stenosis (defined as an AVA≤1 cm²) by US board certified cardiologists, who reviewed and verified the echocardio data and hemodynamic profile." This combines clinical assessment using a specific echocardiographic measurement (AVA) and expert review.
- Clinical Performance (MRMC) Test Set: Expert consensus. "The total test dataset was reviewed by two board certified cardiologists to confirm the presence and severity of Severe AS."
8. The Sample Size for the Training Set
- 442,276 individuals (from a total dataset of 631,824 individuals).
9. How the Ground Truth for the Training Set Was Established
The document states: "The EchoSolv AS AI Model was developed on a dataset consisting of 631,824 individuals with 1,077,145 transthoracic echocardiograms (TTE)." It also notes that the model was trained "to detect severe AS cases." While implied that the training set also used ground truth related to severe AS detection, the specific methodology for establishing ground truth for the training set is not explicitly detailed in the provided text. It is reasonable to infer it would be similar to the test set ground truth (expert clinical assessment based on echocardiographic data), but this is not directly stated.
(265 days)
EchoGo Heart Failure 2.0 is an automated machine learning-based decision support system, indicated as a diagnostic aid for patients undergoing routine functional cardiovascular assessment using echocardiography. When utilised by an interpreting clinician, this device provides information that may be useful in detecting heart failure with preserved ejection fraction (HFpEF).
EchoGo Heart Failure 2.0 is indicated in adult populations over 25 years of age. Patient management decisions should not be made solely on the results of the EchoGo Heart Failure 2.0 analysis.
EchoGo Heart Failure 2.0 takes as input an apical 4-chamber view of the heart that has been captured and assessed to have an ejection fraction ≥50%.
EchoGo Heart Failure 2.0 takes as input a 2D echocardiogram of an apical four chamber tomographic view and reports as output a binary classification suggestive of the presence, or absence of heart failure with preserved ejection fraction (HFpEF). EchoGo Heart Failure 2.0 also provides users with an EchoGo Score ranging from 0 to 100% to support the binary classification. The EchoGo Score informs the binary classification when referenced against the pre-determined decision threshold (50%).
To aid in the interpretation of the EchoGo Score, a comparative visual analysis is provided. A histogram format displays the reported EchoGo Score output against a population of patients with known disease status (Independent Testing Dataset). This allows the user to interpret the EchoGo Score relative to the decision threshold of 50%.
EchoGo Heart Failure 2.0 should receive an input echocardiogram acquired without contrast and contain at least one full cardiac cycle.
EchoGo Heart Failure 2.0 is fully automated and does not comprise a graphical user interface.
EchoGo Heart Failure 2.0 is intended to be used by an interpreting clinician as an aid to diagnosis for HFpEF. The ultimate diagnostic decision remains the responsibility of the interpreting clinician using patient presentation, medical history, and the results of available diagnostic tests, one of which may be EchoGo Heart Failure 2.0.
EchoGo Heart Failure 2.0 is a prescription only device.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Criteria | Acceptance Limit | Reported Device Performance |
|---|---|---|
| I. Device Performance (Sensitivity & Specificity) | Implicit within reporting of performance: The device must demonstrate sufficient sensitivity and specificity for detecting HFpEF as a diagnostic aid. The specific acceptance limits are not explicitly stated as numerical thresholds but are demonstrated by the reported performance being "substantively equivalent to the predicate device and met pre-specified levels of performance." | Sensitivity: 90.3% (95% CI: 88.5, 92.4%) when removing "no classification" studies. 84.9% (95% CI: 83.0, 87.5%) when including "no classification" studies. Specificity: 86.1% (95% CI: 83.4, 88.3%) when removing "no classification" studies. 78.6% (95% CI: 75.3, 81.1%) when including "no classification" studies. |
| II. Accuracy of EchoGo Score (AUROC & Goodness-of-Fit) | Implicit within reporting of performance: The EchoGo Score must be accurate and align with known and expected proportions of HFpEF. Statistical significance (p-value > 0.05) for the Hosmer-Lemeshow Test and a sufficiently high AUROC are expected. | Area Under the Receiver Operating Characteristic Curve (AUROC): 0.947 (95% CI: 0.934, 0.958) when removing "no classification" studies. 0.937 (95% CI: 0.924, 0.949) when considering all studies. Hosmer-Lemeshow Test for goodness-of-fit: p=0.304 (not significant, indicating acceptable fit). |
| III. Proportion of Non-Diagnostic Outputs | A priori acceptance limits: The proportion of "no classification" outputs must be within pre-specified limits (the exact numerical limit is not provided, but the text states it was "within a priori acceptance limits"). | 7.4% (116 out of 1,578 studies) were categorized as "No Classification." |
| IV. Precision (Repeatability and Reproducibility) | Implicit within reporting of performance: The device must demonstrate high repeatability and acceptable reproducibility in its classification output. | Repeatability: 100% in all measures. Reproducibility: 82.6% Positive Agreement and 82.4% Negative Agreement. |
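The two sensitivity figures in row I are mutually consistent: counting "no classification" outputs as misses scales sensitivity by the fraction of case studies that received a classification. That fraction can be recovered from the reported numbers (an inference; the per-arm no-classification split is not stated in the document):

```python
sens_classified = 0.903   # sensitivity among cases that received a classification
sens_all = 0.849          # sensitivity counting "no classification" as a miss

# sens_classified = TP / classified_cases and sens_all = TP / all_cases,
# so their ratio is the implied fraction of case studies classified.
classified_frac = sens_all / sens_classified
print(f"~{classified_frac:.1%} of case studies classified")  # → ~94.0%
```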
Study Details
-
Sample size used for the test set and the data provenance:
- Test Set Sample Size: 1,578 patients (785 controls and 793 cases).
- Data Provenance: Retrospective case:control study. The data was collected from multiple independent clinical sites spanning five states in the US.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document states that the ground truth was established by "ground truth classifications of cases (HFpEF) or controls," but it does not specify the number or qualifications of experts who established this ground truth for the test set.
-
Adjudication method for the test set:
- The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). It only refers to "ground truth classifications," implying these were already established.
-
If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human readers with and without AI assistance was not done. The study focuses on the standalone performance of the device. The device is intended as a "diagnostic aid" for use "by an interpreting clinician," but its performance evaluation presented here is not an MRMC study.
-
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance study was done. The reported sensitivity, specificity, AUROC, and precision values are for the device (algorithm) itself without human intervention in the classification output for the test set. The device provides a "binary classification suggestive of the presence, or absence of heart failure with preserved ejection fraction (HFpEF)" and an "EchoGo Score."
-
The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- The ground truth was based on "ground truth classifications of cases (HFpEF) or controls," and "known and expected proportions of HFpEF." While not explicitly stated as "expert consensus," this terminology strongly implies clinical diagnoses were used to establish the HFpEF status for each patient in the dataset. It does not mention pathology or outcomes data specifically for ground truth.
-
The sample size for the training set:
- The sample size for the training set is not explicitly stated in the provided text. It mentions that the "Subject device AI model was trained on more data and with additional preprocessing steps and data augmentations" compared to the predicate device, and the testing data cohort was a "22.9% increase in data beyond the testing data cohort utilized for the 510k submission of EchoGo Heart Failure 1.0." However, the exact size of the training set is not provided.
-
How the ground truth for the training set was established:
- The document does not explicitly describe how the ground truth for the training set was established. It only states that the AI model was "trained on more data" with "additional preprocessing steps and data augmentations." It is highly probable it was established similarly to the test set ground truth (i.e., using clinical diagnoses or expert classifications), given the nature of the diagnostic task.
(85 days)
The device is intended to be used in surgical, aesthetic applications in the medical specialties of general and plastic surgery and dermatology, including:
- Treatment of benign vascular lesions
- Hair removal
- Permanent* hair reduction

*Permanent hair reduction is defined as reduced hair growth without maintenance when measured at 6, 9 and 12 months.
The laser device Echo is a 160 W diode laser emitting at 808 nm laser wavelength. The device is a therapeutic and aesthetic medical laser system for professional use only.
For the release of laser radiation to the patient, the device uses, as delivery system, a fiber with a handpiece plugged on its end.
Echo is a transportable mobile electrical equipment with a graphical user interface (GUI) for user-device interaction.
Laser radiation is delivered through optical fibers connected to handpieces with a fixed spot dimension.
The device is equipped with an integrated skin cooler. A specific housing, called the Skin Cooler handpiece, provides skin cooling while also housing the laser handpiece.
Laser emission can be activated by the footswitch or by a finger-switch placed on Skin Cooler handpieces.
I am sorry, but the provided text does not contain information about acceptance criteria, device performance, sample size, ground truth establishment, or multi-reader multi-case studies. The document is an FDA 510(k) clearance letter and summary for a laser device, primarily focusing on its equivalence to predicate devices based on technical specifications and non-clinical testing for safety and effectiveness according to established standards. It does not include the detailed clinical study information you requested.