Search Results

Found 6 results

510(k) Data Aggregation

    K Number: K213275
    Manufacturer:
    Date Cleared: 2021-12-20 (81 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K150122

    Tags: AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · is PCCP Authorized · Third-party · Expedited review
    Intended Use

    EchoGo Core is intended to be used for quantification and reporting of results of cardiovascular function to support physician diagnosis. EchoGo Core is indicated for use in adult populations.

    Device Description

    EchoGo Core 2.0 is a software application manufactured by Ultromics to provide a report of left ventricular cardiac function, in the form of secondary capture DICOM files and/or as a structured DICOM report, to aid interpreting physicians with diagnostic decision-making process. EchoGo Core 2.0 applies to ultrasound images of the heart (echocardiograms).

    EchoGo Core 2.0 utilizes artificial intelligence (AI) for the operator-assisted automatic quantification of commonly measured echocardiographic metrics. Independent training, test and validation datasets were used for training and performance assessment of the device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria were formalized in terms of Root Mean Square (RMS) error against reference values generated using the comparator device, TomTec Arena TTA2. The text states that "the acceptance criteria were formalized such that EchoGo Core 2.0 would produce measures of left ventricular (LV) length, volume at end diastole (ED), and end systole (ES), ejection fraction (EF), stroke volume, cardiac output, global longitudinal strain (GLS) and segmental longitudinal strain (SLS) with an RMS error below a set, pre-determined threshold." The exact pre-determined thresholds are not listed numerically; the text adds that "performance against the comparator device is summarised as follows," implying that the figures below are the reported performance values that met the (unspecified) acceptance thresholds.

    Left Ventricular Metric | Root Mean Square Error (% RMS)
    Length | 3.06 - 4.59
    Volume at End Diastole and End Systole | 8.57 - 16.59
    Ejection Fraction | 6.69 - 8.50
    Stroke Volume | 10.57 - 13.68
    Global Longitudinal Strain | 3.36 - 4.79
    Systolic Segmental Longitudinal Strain | 5.51 - 9.98
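The % RMS error metric used throughout this summary can be sketched in a few lines. The normalization chosen here (by the mean of the comparator's reference values) is an assumption; the submission does not state how the percentage was derived, and `percent_rms_error` plus the sample readings below are invented for illustration.

```python
import numpy as np

def percent_rms_error(device: np.ndarray, reference: np.ndarray) -> float:
    """RMS error of device measurements against comparator reference values,
    expressed as a percentage of the mean reference value (assumed definition)."""
    rmse = np.sqrt(np.mean((device - reference) ** 2))
    return 100.0 * rmse / np.mean(reference)

# Hypothetical ejection-fraction readings (%) from the device and the
# comparator; the values are illustrative, not from the submission.
device_ef = np.array([55.0, 48.0, 62.0, 40.0])
reference_ef = np.array([53.0, 50.0, 60.0, 43.0])
print(round(percent_rms_error(device_ef, reference_ef), 2))  # -> 4.45
```

The acceptance check described in the text would then reduce to comparing this figure against the pre-determined (but unpublished) threshold for each metric.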

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 214 previously unseen studies.
    • Data Provenance:
      • Country of Origin: Not explicitly stated, but the submission is from Ultromics Limited in the United Kingdom.
      • Retrospective or Prospective: Retrospective. The study was described as a "formal retrospective, non-interventional validation study."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The ground truth for the test set was established using a comparator device, TomTec Arena TTA2 (K150122), not human experts providing a direct ground truth. The acceptance criteria were formalized in terms of Root Mean Square (RMS) error against reference values generated using the comparator device. There is no mention of experts establishing ground truth for the test set; instead, the comparator device served as the reference.

    4. Adjudication Method for the Test Set

    Not applicable, as the ground truth was established by a comparator device rather than human interpretation requiring adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, and the Effect Size

    No, an MRMC comparative effectiveness study where human readers improve with AI vs. without AI assistance was not conducted or reported. The study focused on the device's performance against a comparator device.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was Done

    The primary performance study described seems to align with a standalone assessment against a comparator device. The text states, "EchoGo Core 2.0 would produce measures... with an RMS error below a set, pre-determined threshold" and "Performance against the comparator device is summarised as follows." This suggests the algorithm's output was directly compared to the output of the TomTec Arena TTA2.

    However, it's crucial to note the device description also states: "EchoGo Core 2.0 requires an operator at key steps to confirm or relabel automatically labeled acquisition views (if required) and approve the left ventricle segmentations (contours) proposed by the AI." And "The operator will review the report produced and may be asked to approve cautions that are added to the report." This indicates that while the core performance metrics were evaluated in an algorithm-centric manner against a comparator, the device's intended use involves a "human-in-the-loop" for confirmation/approval steps. The reported performance metrics (RMS error) are likely for the algorithm's core measurements before potential human override, as the study compared them to an automated comparator device.

    7. The Type of Ground Truth Used

    The ground truth was established by comparison to a legally marketed predicate device (TomTec Arena TTA2, K150122). Values generated by the TomTec Arena TTA2 served as the reference values for calculating RMS error.

    8. The Sample Size for the Training Set

    The sample size for the training set is not specified in the provided text. The text only mentions that "Independent training, test and validation datasets were used for training and performance assessment of the device" and that "Test datasets were strictly segregated from algorithm training datasets."
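A "strictly segregated" training/test split of the kind quoted here is usually enforced at the patient (or study) level, so that no subject contributes to both partitions. A minimal sketch of one way to do that, assuming a simple seeded shuffle; `split_by_patient` and the fractions are hypothetical, as the submission gives no detail on how segregation was actually performed.

```python
import random

def split_by_patient(patient_ids, train_frac=0.7, val_frac=0.15, seed=0):
    """Partition patient IDs into train/validation/test so that no patient
    appears in more than one partition (illustrative split only)."""
    ids = sorted(set(patient_ids))
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_by_patient(range(100))
assert not set(train) & set(test)  # test data segregated from training data
```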

    9. How the Ground Truth for the Training Set was Established

    The method for establishing ground truth for the training set is not explicitly detailed in the provided text. It only states that "Independent training, test and validation datasets were used for training and performance assessment of the device."


    K Number: K200974
    Manufacturer:
    Date Cleared: 2020-06-03 (51 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K181264, K150122

    Intended Use

    QLAB Advanced Quantification Software is a software application package. It is designed to view and quantify image data acquired on Philips ultrasound systems.

    Device Description

    The Philips QLAB Advanced Quantification Software System (QLAB) is designed to view and quantify image data acquired on Philips ultrasound systems. QLAB is available as a stand-alone product that can function on a standard PC or a dedicated workstation, or on-board Philips ultrasound systems.

    The purpose of this Traditional 510(k) Pre-market Notification is to introduce the new 3D Auto MV cardiac quantification application to the Philips QLAB Advanced Quantification Software, which was most recently cleared under K191647. The latest QLAB software version (launching at version 15.0) will include the new Q-App 3D Auto MV, which integrates the segmentation engine of the cleared QLAB HeartModel Q-App (K181264) and the TomTec-Arena 4D MV Assessment application (K150122), thereby providing a dynamic Mitral Valve clinical quantification tool.

    AI/ML Overview

    The document describes the QLAB Advanced Quantification Software System and its new 3D Auto MV cardiac quantification application.

    Here's an analysis of the acceptance criteria and study information:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not explicitly state acceptance criteria in a quantitative table format (e.g., "accuracy must be > 90%"). Instead, it states that the device was tested to "meet the defined requirements and performance claims." The performance is demonstrated by the non-clinical verification and validation testing, and the 3D Auto MV Algorithm Training and Validation Study.

    The document provides a comparison table (Table 1 on page 6-7) that highlights the features and a technical comparison to predicate devices, but this table does not present quantitative performance against specific acceptance criteria for the new 3D Auto MV feature. It lists parameters that the new application will measure, such as:

    • Saddle Shaped Annulus Area (cm²)
    • Saddle Shaped Annulus Perimeter (cm)
    • Total Open Coaptation Area (cm²)
    • Anterior Closure Line Length (cm)
    • Posterior Closure Line Length (cm)

    However, it does not provide reported performance values for these parameters from the validation study against any predefined acceptance criteria. The statement is that "All other measurements are identical to the predicate 4D MV-Assessment application," implying a level of equivalence, but without specific data.

    2. Sample Size Used for the Test Set and Data Provenance:

    The document mentions that Non-clinical V&V testing also included the 3D Auto MV Algorithm Training and the subsequent Validation Study performed for the proposed 3D Auto MV clinical application. However, it does not specify the sample size used for this validation study (i.e., the test set). The data provenance (e.g., country of origin, retrospective or prospective) is also not specified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

    This information is not provided in the document.

    4. Adjudication Method for the Test Set:

    This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    The document does not indicate that a MRMC comparative effectiveness study was done. It focuses on the software's performance and substantial equivalence to predicate devices, not on how human readers' performance might improve with its assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

    The document describes the 3D Auto MV Q-App as a "semi-automatic tool" and states that the "User is able to edit, accept, or reject the initial landmark proposals of the mitral valve anatomical locations." This suggests that a purely standalone (algorithm-only) performance study, without any human-in-the-loop interaction, would not be fully representative of its intended use. The validation study presumably evaluates its performance within this semi-automatic workflow, but specific details are lacking.

    7. Type of Ground Truth Used:

    The document describes the 3D Auto MV application integrating the machine-learning derived segmentation engine of the QLAB HeartModel and the TOMTEC-Arena TTA2 4D MV-Assessment application. The ground truth for the training of the HeartModel (and subsequently the 3D Auto MV) would typically involve expert annotations of anatomical structures. However, the specific type of ground truth used for the validation study mentioned ("3D Auto MV Algorithm Training and the subsequent Validation Study") is not explicitly stated. Given the context of cardiac quantification, it would most likely be based on expert consensus or expert-derived measurements from the imaging data itself.

    8. Sample Size for the Training Set:

    The document mentions "3D Auto MV Algorithm Training" but does not specify the sample size used for the training set.

    9. How the Ground Truth for the Training Set Was Established:

    The document states that the 3D Auto MV Q-App "integrates the segmentation engine of the cleared QLAB HeartModel Q-App (K181264)". For HeartModel, it says: "The HeartModel Q-App provides a semi-automatic 3D anatomical border detection and identification of the heart chambers for the end-diastole (ED) and end-systole (ES) cardiac phases." And for its contour generation: "3D surface model is created semi-automatically without user interaction. User is required to edit, accept, or reject the contours before proceeding with the workflow."

    This implies that the training of the HeartModel's segmentation engine (and inherited by 3D Auto MV) was likely based on expert-derived or expert-validated anatomical annotations/contours, which would have been used to establish the "ground truth" for the machine learning algorithm. However, explicit details on how this ground truth was established for the training data (e.g., number of annotators, their qualifications, adjudication methods) are not provided for this specific submission (K200974). It simply references the cleared HeartModel Q-App (K181264).


    K Number: K191647
    Manufacturer:
    Date Cleared: 2019-12-20 (183 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K150122

    Intended Use

    QLAB Advanced Quantification Software is a software application package. It is designed to view and quantify image data acquired on Philips ultrasound systems.

    Device Description

    Philips QLAB Advanced Quantification software (QLAB) is designed to view and quantify image data acquired on Philips ultrasound systems. QLAB is available as a stand-alone product that can function on a standard PC or a dedicated workstation, or on-board Philips ultrasound systems.

    The subject QLAB 3D Auto RV application integrates the segmentation engine of the cleared QLAB HeartModel (K181264) and the TomTec-Arena 4D RV-function (cleared under K150122) thereby providing a dynamic Right Ventricle clinical functionality. The proposed 3D Auto RV application is based on the automatic segmentation technology of HeartModel applied to the Right Ventricle, and uses machine learning algorithms to identify the endocardial contours of the Right Ventricle.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the QLAB Advanced Quantification Software 13.0, specifically for its 3D Auto RV application:

    1. Table of Acceptance Criteria and Reported Device Performance

    Metric | Acceptance Criteria | Reported Device Performance (3D Auto RV vs. predicate 4D RV) | Reported Device Performance (3D Auto RV vs. CMR)
    RV End Diastolic Volume Error Rate | Below 15% (compared to predicate) | Below 15% | Less than 15% difference
    RV End Diastolic Volume (RMSE) | Not explicitly stated as an independent acceptance criterion, but part of validation | 8.3 ml RMSE | Not explicitly reported for this metric
    RV End Systolic Volume (RMSE) | Not explicitly stated as an independent acceptance criterion, but part of validation | 2.7 ml RMSE | Not explicitly reported for this metric
    RV Ejection Fraction (RMSE) | Not explicitly stated as an independent acceptance criterion, but part of validation | 2.7% RMSE | Not explicitly reported for this metric
    User Ability to Discern and Revise | Healthcare professional able to successfully determine when contours require revision and capable of revising | Users were able to discern which images needed manual editing on all cases | Not explicitly reported for this metric
    Accuracy and Reproducibility (External Study) | Not explicitly stated as a numerical acceptance criterion, but "accurate and highly reproducible" | Accurate and highly reproducible; no revision needed in 1/3 of patients, minor revisions in the rest | Less than 15% difference (for RV volume)
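The one numeric criterion the table does state, an error rate below 15% of the reference volume, reduces to a simple relative-error check. A sketch with invented volumes; `within_acceptance` is a hypothetical helper, not code from the submission.

```python
def within_acceptance(measured_ml: float, reference_ml: float,
                      limit_pct: float = 15.0) -> bool:
    """True if the measurement deviates from the reference (predicate or CMR)
    value by less than limit_pct percent. Values below are illustrative."""
    error_pct = abs(measured_ml - reference_ml) / reference_ml * 100.0
    return error_pct < limit_pct

print(within_acceptance(118.0, 110.0))  # ~7.3% error  -> True
print(within_acceptance(95.0, 130.0))   # ~26.9% error -> False
```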

    2. Sample Size and Data Provenance

    • Test Set Sample Size: Not explicitly stated for either the internal validation study or the external published study.
    • Data Provenance:
      • Internal Validation Study: "Test datasets were segregated from training data sets." No explicit country of origin is mentioned. It is implied to be retrospective as it uses "data sets."
      • External Published Study: Not specified, but it's an "external study published in the Journal of the American Society of Echocardiography."

    3. Number of Experts and Qualifications for Ground Truth (Test Set)

    • Internal Validation Study: Not specified. However, the comparison is primarily against a "predicate 4D RV" which would have its own established methodology. The "healthcare professional" is mentioned in the context of user evaluation.
    • External Published Study: Not specified. The ground truth method is cross-modality CMR, implying a reference standard rather than expert consensus on the test images themselves.

    4. Adjudication Method (Test Set)

    • Internal Validation Study: Not explicitly stated. The comparison is against the predicate device's measurements.
    • External Published Study: Not explicitly stated. Ground truth was established by cross-modality Cardiac Magnetic Resonance (CMR).

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    • The document does not explicitly describe a formal MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The text mentions that "the healthcare professional was able to successfully determine which contours required revision and was capable of revising," which suggests a human-in-the-loop scenario, but a comparative effectiveness study with effect size is not reported.

    6. Standalone (Algorithm Only) Performance

    • Yes, a standalone performance evaluation of the algorithm is implied. The internal validation study reports "RV end diastolic volume error rates below 15% for every data set tested compared to the predicate 4D RV," and RMSE values for volume and EF. The external study also reports the 3D Auto RV's performance against CMR. While user interaction for editing is a feature, the initial segmentation engine and its quantification are evaluated in a standalone manner before potential revision.

    7. Type of Ground Truth Used

    • Internal Validation Study: The primary comparison for quantitative metrics (volumes, EF) is against the "predicate 4D RV" (TomTec-Arena 4D RV-function, K150122). This suggests the predicate's measurements served as a reference.
    • External Published Study: Cross-modality Cardiac Magnetic Resonance (CMR) was considered the gold standard ("Ground truth in this study was considered to be the cross-modality CMR").

    8. Sample Size for the Training Set

    • Not explicitly stated for the machine learning algorithm. The document only mentions that "Test datasets were segregated from training data sets."

    9. How Ground Truth for the Training Set Was Established

    • Not explicitly detailed. The device description states the 3D Auto RV application "uses machine learning algorithms to identify the endocardial contours of the Right Ventricle." It also mentions "Algorithm Training procedure is same between the subject and the predicate HeartModel." For HeartModel (the segmentation engine's predecessor for LV), expert-defined contours on extensive datasets would typically be used for training, but this is not explicitly stated for the RV training.

    K Number: K190913
    Manufacturer:
    Date Cleared: 2019-06-18 (71 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K150122

    Intended Use

    QLAB Advanced Quantification Software is a software application package. It is designed to view and quantify image data acquired on Philips ultrasound systems.

    Device Description

    Philips QLAB Advanced Quantification software (QLAB) is designed to view and quantify image data acquired on Philips ultrasound systems. QLAB is available as a stand-alone product that can function on a standard PC or a dedicated workstation, or on-board Philips ultrasound systems. It can be used for the off-line review and quantification of ultrasound studies. QLAB software provides basic and advanced quantification capabilities across a family of PC- and cart-based platforms. QLAB software functions through Q-App modules, each of which provides specific capabilities. QLAB builds upon a simple and thoroughly modular design to provide smaller and more easily leveraged products.

    AI/ML Overview

    The provided document describes the FDA 510(k) clearance for Philips Healthcare's QLAB Advanced Quantification Software 13.0, primarily focusing on its substantial equivalence to previously cleared predicate devices. The modifications in QLAB 13.0 involve integrating existing TomTec-Arena applications (AutoSTRAIN LV, AutoSTRAIN LA, AutoSTRAIN RV) into the Philips QLAB platform.

    However, the document does not contain specific details about acceptance criteria or a dedicated study design that proves the device meets specific performance criteria. Instead, it relies on the concept of substantial equivalence to predicate devices that have already undergone prior clearance.

    Based on the information provided, here's what can be extracted and what is missing:

    Key Takeaways from the Document:

    • Device: QLAB Advanced Quantification Software 13.0
    • Purpose: To view and quantify image data acquired on Philips ultrasound systems.
    • Modifications: Integration of AutoStrain LV, LA, and RV modules from TomTec-Arena (previously cleared under K150122) with "workflow improvements."
    • Regulatory Pathway: 510(k) premarket notification, based on substantial equivalence.
    • Clinical Testing: "QLAB 13.0 does not introduce new indications for use, modes, or features relative to the predicate (K181264) that require clinical testing." This explicitly states that no new clinical study was performed for this specific 510(k) submission.
    • Performance Data: Relies on "Verification and software validation data" and "Design Control activities" (Requirements Review, Design Review, Risk Management, Software Verification and Validation) to support substantial equivalence.

    Therefore, it's not possible to provide the requested information regarding acceptance criteria and a study proving the device meets those criteria, as such a study (with the specified details) was explicitly stated as not required and not performed for this 510(k) submission.

    The document justifies its clearance based on the following:

    1. The new functionalities (AutoStrain LV, LA, RV) are derived from applications (TomTec-Arena AutoSTRAIN and 2D CPA) that were already cleared under K150122.
    2. The current modifications primarily focus on integrating these existing functionalities into the QLAB platform and making "workflow improvements."
    3. The intended use remains the same as the predicate device.
    4. The manufacturer performed non-clinical performance testing including software verification and validation, design control activities, and risk management to ensure the modified software performs safely and effectively relative to the predicate device and meets defined requirements.

    If a hypothetical scenario were to involve a new device or a significant change requiring a de novo clearance or a more involved 510(k) where clinical performance needed to be demonstrated, the requested information would be crucial. However, for this specific 510(k) for QLAB 13.0, the provided document indicates that the performance evaluation was based on demonstrating equivalence, not on new clinical performance studies with acceptance criteria for the new features.

    To answer your prompt directly, given the provided text, the answer to most of your questions is that this information is not present because a new comparative effectiveness study or standalone performance study with new ground truth establishment was explicitly deemed unnecessary due to the nature of the submission (integration of already cleared components and "workflow improvements").

    Here's a breakdown of the requested information, indicating what is not available from this document due to the nature of the 510(k) submission:


    1. A table of acceptance criteria and the reported device performance

    • Not available in the provided document. The submission relies on substantial equivalence to predicate devices, not on demonstrating new performance against defined acceptance criteria for the integrated features. The document states: "QLAB 13.0 does not introduce new indications for use, modes, or features relative to the predicate (K181264) that require clinical testing."

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Not available in the provided document. No specific clinical test set data is described for QLAB 13.0. The "Verification and software validation data" mentioned are non-clinical, likely internal testing using synthetic data, simulated data, or existing clinical data from the development of the predicate/reference devices, but details are not provided.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not available in the provided document. No new ground truth establishment process is described for QLAB 13.0 as no new clinical study was conducted.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not available in the provided document. No new clinical test set is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • Not available in the provided document, and explicitly stated as not required/done. The document explicitly states: "QLAB 13.0 does not introduce new indications for use, modes, or features relative to the predicate (K181264) that require clinical testing." Therefore, no MRMC study was performed as part of this 510(k) submission.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

    • Not available in the provided document. Similarly, no new standalone performance study for the integrated algorithms is described beyond the assertion that the underlying algorithms (from K150122) were previously cleared.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not available in the provided document. As no new clinical study requiring ground truth was conducted for QLAB 13.0, this information is not provided. The ground truth for the original cleared components (TomTec-Arena AutoSTRAIN and 2D CPA) would have been established at their time of clearance (K150122), but those details are not in this document.

    8. The sample size for the training set

    • Not available in the provided document. This document describes a 510(k) clearance for a software update integrating existing, cleared algorithms. It doesn't detail the training data for the original development of those algorithms.

    9. How the ground truth for the training set was established

    • Not available in the provided document. (See point 8).


    Why did this record match? Reference Devices: K150122

    Intended Use

    The intended use of the EPIQ 5, EPIQ 7, Affiniti 30, Affiniti 70 Diagnostic Ultrasound Systems is diagnostic ultrasound imaging and fluid flow analysis of the human body with the following Indications for Use: Abdominal, Cardiac Adult, Cardiac other (Fetal), Cardiac Pediatric, Cerebral Vascular, Cephalic (Adult), Cephalic (Neonatal), Fetal/Obstetric, Gynecological, Intraoperative (Vascular), Intraoperative (Cardiac), Musculoskeletal (Conventional), Musculoskeletal (Superficial), Other: Urology, Pediatric, Peripheral Vessel, Small Organ (Breast, Thyroid, Testicle), Transesophageal (Cardiac), Transrectal, Transvaginal.

    Device Description

    The proposed Philips EPIQ Diagnostic Ultrasound Systems and Philips Affiniti Diagnostic Ultrasound Systems are general purpose, software controlled, diagnostic ultrasound systems. Their function is to acquire ultrasound data and to display the data in various modes of operation. The devices consist of two parts: the system console and the transducers. The system console contains the user interface, a display, system electronics and optional peripherals (ECG, printers).

    The removable transducers are connected to the system using a standard technology, multi-pin connectors. Other than the introductions of the two new transducers, the device description, accessories and components are unchanged, reference table 1.

    AI/ML Overview

    Here's the breakdown of the acceptance criteria and supporting studies for the Philips EPIQ and Affiniti Diagnostic Ultrasound Systems based on the provided text:

    Important Note: The provided text is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed clinical study for the device's original clearance. Therefore, information regarding specific clinical performance metrics (like sensitivity, specificity, accuracy) from a de novo study is not present. Instead, the document emphasizes compliance with standards and equivalence to a previously cleared device.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes the device meeting safety and performance standards rather than specific clinical performance metrics like sensitivity/specificity for a novel algorithm.

    Acceptance Criterion (Safety/Performance Standard) | Reported Device Performance
    IEC 60601-2-37 Ed 2.1 (Acoustic Output Display Requirements) | Complies
    IEC 62359 Ed 2.0 (Thermal and Mechanical Indices) | Complies
    FDA ultrasound guidance document (Information for Manufacturers Seeking Marketing Clearance of Diagnostic Ultrasound Systems and Transducers, issued Sept 9, 2008) | Complies
    System Acoustic Output Limits: Ispta.3

    K Number: K173291
    Manufacturer:
    Date Cleared: 2018-01-12 (88 days)
    Product Code:
    Regulation Number: 892.1550
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K161359, K150122, K163077

    Intended Use

    Esaote's Model 6440 is intended to perform diagnostic general ultrasound studies including: Fetal, Abdominal, Intraoperative (Abdominal), Pediatric, Small organs, Neonatal, Neonatal Cephalic, Adult Cephalic, Transvaginal, Musculoskeletal (Conventional), Musculoskeletal (Superficial), Urological, Cardiovascular Pediatric, Transoesophageal (cardiac), Peripheral Vessel.

    The 6440 system provides imaging for guidance of biopsy and imaging to assist in the placement of needles and catheters in vascular or other anatomical structures as well as peripheral nerve blocks in Musculoskeletal applications.

    The Virtual Navigator software option for Esaote model 6440 is intended to support a radiological clinical ultrasound examination (first modality) and follow percutaneous procedures or surgical operations providing additional image information from a second imaging modality (CT, MR, US, and PET). The second modality provides additional security in assessing the morphology of the real time ultrasound image. Virtual Navigator can be used in the following applications: Abdominal, Gynecological, Musculoskeletal, Obstetrics, Pediatric, Urologic, Small Organs, Intraoperative (Abdominal), Intraoperative (Neurological), Peripheral Vascular and Transcranial for radiological examinations only.

    The second modality image is not intended to be used as a standalone diagnostic image since it represents information of a patient that could not be congruent with the current (actual) patient position and shall, therefore, always be seen as an additional source of information.

    The Virtual Navigator tracking system is contraindicated for patients, personnel and other people who use an electronic life support device (such as a cardiac pacemaker or defibrillator).

    Device Description

    Model 6440 is a mainframe ultrasound system used to perform diagnostic general ultrasound studies. The primary modes of operation are: B-Mode, Tissue Enhancement Imaging (TEI), M-Mode, Multi View (MView), Doppler (both PW and CW), Color Flow Mapping (CFM), Amplitude Doppler (AD), Tissue Velocity Mapping (TVM), 3D and 4D, Qualitative Elastosonography (ElaXto) and Quantitative Elastosonography (QElaXto).

    Model 6440 has the Virtual Navigator software option integrated, designed to support a radiological clinical ultrasound examination (first modality) and follow a percutaneous procedure providing additional image information from a second imaging modality (CT, MR, US and PET). The user is helped in assessing the patient anatomy by displaying the image generated by the 2nd modality.
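    Fusion navigation of this kind typically works by chaining rigid coordinate transforms: the tracking sensor reports the probe pose, and a registration step relates the tracker to the second-modality volume, so any point on the live ultrasound image can be mapped into CT/MR coordinates. The following is a minimal sketch of that transform chain, not Esaote's actual implementation; all transform values, names, and the identity rotations are hypothetical.

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous rigid transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical calibration/tracking transforms (identity rotations for illustration):
# probe -> tracker: reported by the position sensor mounted on the probe
T_tracker_probe = make_transform(np.eye(3), np.array([10.0, 0.0, 0.0]))
# tracker -> CT volume: established during registration (e.g. landmark pairing)
T_ct_tracker = make_transform(np.eye(3), np.array([0.0, 5.0, 0.0]))

# A point picked on the live ultrasound image, in probe coordinates (mm)
p_probe = np.array([0.0, 0.0, 30.0, 1.0])

# Chain the transforms to find the matching location in the CT volume
p_ct = T_ct_tracker @ T_tracker_probe @ p_probe
print(p_ct[:3])  # -> [10.  5. 30.]
```

    In a real system the rotations are not identity and the registration transform is refined from paired landmarks or surface matching, but the display logic reduces to exactly this kind of matrix chain evaluated per frame.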

    Model 6440 is equipped with an LCD color display where acquired images and advanced image features are shown. The Model 6440 control panel has a pull-out Qwerty alphanumeric keyboard for data entry. The touchscreen provides an emulation of the Qwerty alphanumeric keyboard for data entry, along with additional controls and mode-dependent keys integrated into the touchscreen.

    Model 6440 can drive Phased Array (PA), Convex Array (CA), Doppler and Volumetric probes.

    Model 6440 is equipped with an internal Hard Disk Drive and with a DVD-RW disk drive that can be used for image storage. Data can also be stored directly to external archiving media (Hard-Disk, PC, server) via a LAN/USB port.

    The marketing name for Model 6440 will be called MyLab9 eXP.

    AI/ML Overview

    This is a 510(k) premarket notification for the Esaote 6440 Ultrasound System. The document does not describe a study proving the device meets specific acceptance criteria in terms of diagnostic performance (e.g., sensitivity, specificity, accuracy) for an AI/CADe device. Instead, it demonstrates substantial equivalence to predicate devices through technical comparisons and compliance with relevant safety and performance standards for a general ultrasound system and its integrated software options.

    The document discusses performance data related to:

    • Biocompatibility Testing: For transducers, conducted according to FDA Blue Book Memorandum #G95-1 and ISO 10993-1. These tests included cytotoxicity, sensitization, and irritation.
    • Electrical Safety and Electromagnetic Compatibility (EMC): Compliance with IEC 60601-1, IEC 60601-1-2, and IEC 60601-2-37 standards.
    • Software Verification and Validation Testing: Documentation provided as per FDA guidance, and the software was classified as a "moderate" level of concern.
    • Mechanical and Acoustic Testing: Acoustic output testing according to NEMA Standards Publication UD 2-2004 Revision 3 (R2009) and UD 3-2004 Revision 2 (R2009).
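    The acoustic output testing behind the last bullet reduces to a standard calculation: water-tank measurements are derated by 0.3 dB/cm/MHz to estimate in-tissue exposure, and the derated values are compared against the FDA Track 3 limits from the 2008 guidance (Ispta.3 ≤ 720 mW/cm², MI ≤ 1.9). Here is a minimal sketch of that check; the measurement values (900 mW/cm² at 3.5 MHz, 4 cm focal depth, 1.2 MPa rarefactional pressure) are hypothetical.

```python
import math

# FDA Track 3 limits from the 2008 diagnostic ultrasound guidance
ISPTA3_LIMIT_MW_CM2 = 720.0   # derated spatial-peak temporal-average intensity
MI_LIMIT = 1.9                # mechanical index

def derated_intensity(i_water_mw_cm2: float, f_mhz: float, z_cm: float) -> float:
    """Apply the 0.3 dB/cm/MHz intensity derating to a water-measured value."""
    return i_water_mw_cm2 * 10 ** (-0.3 * f_mhz * z_cm / 10)

def mechanical_index(p_r3_mpa: float, f_mhz: float) -> float:
    """MI = derated peak rarefactional pressure (MPa) / sqrt(center frequency in MHz)."""
    return p_r3_mpa / math.sqrt(f_mhz)

# Hypothetical measurement: 900 mW/cm^2 in water at 3.5 MHz, focus at 4 cm depth
i3 = derated_intensity(900.0, 3.5, 4.0)
mi = mechanical_index(1.2, 3.5)
print(f"Ispta.3 = {i3:.1f} mW/cm^2 (limit {ISPTA3_LIMIT_MW_CM2})")
print(f"MI = {mi:.2f} (limit {MI_LIMIT})")
assert i3 <= ISPTA3_LIMIT_MW_CM2 and mi <= MI_LIMIT
```

    The actual NEMA UD 2/UD 3 procedures specify hydrophone measurement conditions and how the worst-case operating mode is found, but the pass/fail comparison is of this form.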

    Since this document primarily addresses a substantial equivalence determination for a general ultrasound system and its software options based on compliance with standards and technical comparisons, it does not contain the specific information typically associated with a study proving diagnostic performance against quantitative acceptance criteria for an AI/CADe device.

    Therefore, many of the requested categories (e.g., sample size for test set, data provenance, number of experts, adjudication method, MRMC study, standalone performance, type of ground truth, training set size and ground truth establishment) are not applicable in this context, as they pertain to clinical performance studies for AI/CADe devices, which are not detailed in this submission.

    However, based on the provided text, a general set of "acceptance criteria" related to functional performance and safety can be inferred, and the "reported device performance" is that the device passed these criteria through the various non-clinical tests.

    1. Table of Acceptance Criteria and the Reported Device Performance:

    Biocompatibility: Passed (cytotoxicity, sensitization, and irritation tests completed per the relevant standards for tissue-contacting transducers).
    Electrical Safety & EMC: Complies with IEC 60601-1, IEC 60601-1-2, and IEC 60601-2-37.
    Software V&V: Documentation provided as recommended by FDA guidance; software classified as a "moderate" level of concern. Performance deemed consistent with the intended use.
    Mechanical & Acoustic Output: Acoustic output testing conducted according to NEMA UD 2-2004 Revision 3 (R2009) and UD 3-2004 Revision 2 (R2009).
    Functional Equivalence: "Model 6440 employs the same fundamental technological characteristics as the predicate devices." Clinical uses, acoustic output display, transducers, measurements, analysis packages, and digital storage capabilities are equivalent or comparable. New features (Auto EF, QPack) are equivalent to features cleared on other predicate devices.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
    Not Applicable. The submission describes non-clinical testing (biocompatibility, electrical safety, software V&V, mechanical/acoustic) and comparisons to predicate devices, not a clinical study with a test set of patient data to assess diagnostic performance.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
    Not Applicable. See explanation above.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
    Not Applicable. See explanation above.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
    Not Applicable. This submission does not describe an AI-assisted diagnostic study or any MRMC study. The "Virtual Navigator" software option is described as providing "additional image information from a second imaging modality," not as an AI or CADe system intended to assist human readers in improving diagnostic tasks.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
    Not Applicable. The device is a diagnostic ultrasound system. While it contains software, no standalone diagnostic algorithm performance study is described.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
    Not Applicable. See explanation above. The "ground truth" for the non-clinical tests would be the established specifications and standards for each test (e.g., successful cell growth for cytotoxicity, compliance with electrical limits).

    8. The sample size for the training set:
    Not Applicable. There is no mention of a training set as this is not an AI/CADe device submission detailing such a development process.

    9. How the ground truth for the training set was established:
    Not Applicable. See explanation above.
