510(k) Data Aggregation

Search Results: found 6 results.

    K Number: K243065
    Device Name: Cardiac Guidance
    Date Cleared: 2025-01-15 (110 days)
    Product Code:
    Regulation Number: 892.2100
    Reference & Predicate Devices:
    Matched on Applicant Name (Manufacturer): Caption Health, Inc.

    Intended Use

    The Cardiac Guidance software is intended to assist medical professionals in the acquisition of cardiac ultrasound images. Cardiac Guidance software is an accessory to compatible general purpose diagnostic ultrasound systems.

    The Cardiac Guidance software is indicated for use in two-dimensional transthoracic echocardiography (2D-TTE) for adult patients, specifically in the acquisition of the following standard views: Parasternal Long-Axis (PLAX), Parasternal Short-Axis at the Aortic Valve (PSAX-AV), Parasternal Short-Axis at the Mitral Valve (PSAX-MV), Parasternal Short-Axis at the Papillary Muscle (PSAX-PM), Apical 4-Chamber (AP4), Apical 5-Chamber (AP5), Apical 2-Chamber (AP2), Apical 3-Chamber (AP3), Subcostal 4-Chamber (SubC4), and Subcostal Inferior Vena Cava (SC-IVC).

    Device Description

    The Cardiac Guidance software is a radiological computer-assisted acquisition guidance system that provides real-time guidance during echocardiography to assist the user in capturing anatomically correct images representing standard 2D echocardiographic diagnostic views and orientations. This AI-powered, software-only device emulates the expertise of skilled sonographers.

    Cardiac Guidance comprises several features that, combined, provide expert guidance to the user (a minimal sketch of the capture logic follows this list). These include:

    • Quality Meter: The real-time feedback from the Quality Meter advises the user on the expected diagnostic quality of the resulting clip, such that the user can make decisions to further optimize the quality, for example by following the prescriptive guidance feature below.
    • Prescriptive Guidance: The prescriptive guidance feature in Cardiac Guidance provides direction to the user to emulate how a sonographer would manipulate the transducer to acquire the optimal view.
    • Auto-Capture: The Cardiac Guidance Auto-Capture feature triggers an automatic capture of a clip when the quality is predicted to be diagnostic, emulating the way in which a sonographer knows when an image is of sufficient quality to be diagnostic and records it.
    • Save Best Clip: This feature continually assesses clip quality while the user is scanning and, in the event that the user is not able to obtain a clip sufficient for Auto-Capture, the software allows the user to retrospectively record the highest quality clip obtained so far, mimicking the choice a sonographer might make when recording an exam.
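
    The Auto-Capture and Save Best Clip behaviors above can be read as a single capture-decision loop. Below is a minimal sketch of that loop, assuming a per-clip quality score in [0, 1]; the threshold value, class name, and method names are hypothetical and not from the submission.

```python
class CaptureController:
    """Sketch of the capture decision loop: auto-capture a clip once its
    predicted quality is diagnostic; otherwise remember the best clip seen
    so far for a retrospective "Save Best Clip" action."""

    def __init__(self, diagnostic_threshold=0.8):  # hypothetical threshold
        self.diagnostic_threshold = diagnostic_threshold
        self.best_quality = float("-inf")
        self.best_clip = None

    def on_clip(self, clip, quality):
        # Auto-Capture: record immediately when quality is predicted diagnostic.
        if quality >= self.diagnostic_threshold:
            return "auto_capture", clip
        # Save Best Clip: track the highest-quality clip obtained so far.
        if quality > self.best_quality:
            self.best_quality, self.best_clip = quality, clip
        return "continue", None

    def save_best(self):
        # Retrospective save when Auto-Capture was never triggered.
        return self.best_clip
```
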
    AI/ML Overview

    The provided document is a 510(k) summary for Cardiac Guidance software, which is a radiological computer-assisted acquisition guidance system. It discusses an updated Predetermined Change Control Plan (PCCP) and addresses how future modifications will be validated. However, it does not contain a detailed performance study with specific acceptance criteria and results from such a study for the current submission.

    The document focuses on the plan for future modifications and ensuring substantial equivalence through predefined testing. While it mentions that "Safety and performance of the Cardiac Guidance software will be evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing outlined in the submission," and "The test methods specified in the PCCP establish substantial equivalence to the predicate device, and include sample size determination, analysis methods, and acceptance criteria," the specific details of a study proving the device meets acceptance criteria are not included in this document.

    Therefore, the following information cannot be fully extracted based solely on the provided text:

    • A table of acceptance criteria and reported device performance (for the current submission/PCCP update).
    • Sample size used for the test set and data provenance.
    • Number of experts and their qualifications for establishing ground truth for the test set.
    • Adjudication method for the test set.
    • Results of a multi-reader multi-case (MRMC) comparative effectiveness study, including effect size.
    • Details of a standalone (algorithm only) performance study.
    • The type of ground truth used.
    • Sample size for the training set.
    • How the ground truth for the training set was established.

    However, the document does contain information about performance testing and acceptance criteria for future modifications under the PCCP.

    Here's a summary of what can be extracted or inferred regarding performance and validation, specifically related to the plan for demonstrating that future modifications will meet acceptance criteria:


    1. A table of Acceptance Criteria and the Reported Device Performance:

    The document describes the types of testing and the intent to use acceptance criteria for future modifications. It does not provide a table of acceptance criteria and reported device performance for the current submission or previous clearances. It states:

    "The test methods specified in the PCCP establish substantial equivalence to the predicate device, and include sample size determination, analysis methods, and acceptance criteria."

    This indicates that acceptance criteria will be defined for future validation tests, but they are not listed here. The document focuses on the types of modifications and the high-level testing methods:

    | Modification Category | Testing Methods Summary |
    | --- | --- |
    | Retraining/optimization/modification of core algorithm(s) | Repeating verification tests and the system level validation test to ensure the pre-defined acceptance criteria are met. |
    | Real-time guidance for additional 2D TTE views | Repeating verification tests and two system level validation tests, including usability testing, to ensure the pre-defined acceptance criteria are met for the additional views. |
    | Optimization of the core algorithm(s) implementation (thresholds, averaging logic, transfer functions, frequency, refresh rate) | Repeating relevant verification test(s) and the system level validation test to ensure the pre-defined acceptance criteria are met. |
    | Addition of new types of prescriptive guidance (patient positioning, breathing guidance, combined probe movements, pressure, sliding/angling) and addition of existing guidance types to all views | Repeating relevant verification tests and two system level validation tests, including usability testing, to ensure the pre-defined acceptance criteria are met. |
    | Labeling compatibility with various screen sizes (including mobile) and UI/UX changes (e.g., audio, configurability of guidance) | Repeating relevant verification tests and the system level validation test, including usability testing, to ensure the pre-defined acceptance criteria are met. |

    2. Sample size used for the test set and the data provenance:

    The document states:

    "To ensure validation test datasets are representative of the intended use population, each will meet minimum demographic requirements."

    However, specific sample sizes and data provenance (e.g., country of origin, retrospective/prospective) for any performance study are not provided in this document. It only refers to "sample size determination" as being included in the test methods for the PCCP.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided in the document.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance:

    The document refers to a "Non-expert Validation" being added to the subject PCCP, which was "Not included" in the K201992 PCCP. It describes this as:

    "Adds standalone test protocol to enable validation of modified device performance by the intended user groups, ensuring equivalency to the original device based on predefined clinical endpoints."

    While this suggests a study involving users, it does not explicitly state it's an MRMC comparative effectiveness study comparing human readers with and without AI assistance, nor does it provide any effect size.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    The document's "Testing Methods" column frequently mentions "Repeating verification tests and the system level validation test to ensure the pre-defined acceptance criteria are met." This suggests that standalone algorithm performance testing (verification and system-level validation) is part of the plan for future modifications. However, specific details of such a study are not provided in this document.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    This information is not explicitly stated in the document. The "Non-expert Validation" mentions "predefined clinical endpoints," but the source of the ground truth for those endpoints is not detailed.

    8. The sample size for the training set:

    This information is not provided in the document.

    9. How the ground truth for the training set was established:

    This information is not provided in the document. The document mentions "Retraining/optimization/modification of core algorithm(s)" and that "The modification protocol incorporates impact assessment considerations and specifies requirements for data management, including data sources, collection, storage, and sequestration, as well as documentation and data segregation/re-use practices," implying a training set exists, but details on ground truth establishment are missing.


    K Number: DEN220063
    Date Cleared: 2023-02-24 (149 days)
    Product Code:
    Regulation Number: 892.2055
    Type: Direct
    Reference & Predicate Devices:
    Matched on Applicant Name (Manufacturer): Caption Health, Inc.

    Intended Use

    The Caption Interpretation Automated Ejection Fraction software is used to process previously acquired transthoracic cardiac ultrasound images, to store images, and to manipulate and make measurements on images using an ultrasound device, personal computer, or a compatible DICOM-compliant PACS system in order to provide automated estimation of left ventricular ejection fraction. This measurement can be used to assist the clinician in a cardiac evaluation.

    The Caption Interpretation Automated Ejection Fraction Software is indicated for use in adult patients.

    Device Description

    The Caption Interpretation Automated Ejection Fraction Software ("AutoEF") applies machine learning algorithms to process two-dimensional transthoracic echocardiography images for calculating left ventricular ejection fraction.

    The current implementation of the device adds a predetermined change control plan (PCCP) to the device cleared under K210747, which allows future modifications to the device.

    The version of Caption Interpretation AutoEF cleared under K210747 performs left ventricular ejection fraction estimation using apical four chamber (A4C), apical two chamber (A2C) and the parasternal long-axis (PLAX) cardiac ultrasound images.

    The software uses an algorithm that was derived through use of deep learning and locked prior to validation. The product operates as an add-in to a DICOM PACS system, ultrasound device, or personal computer. Caption Interpretation receives imaging data either directly from an ultrasound system or from a module in a PACS system.

    The device includes the following main components:

    1. Clip Annotation and Selection: The AutoEF software includes a function that processes video clips in a study to automatically classify clips that are PLAX, AP4, and AP2 views. This view selection is based on a convolutional network. It also includes a function, Image Quality Score (IQS), that allows selection of the best available PLAX, AP4, and AP2 clips within the study, or provides an indication to the user if there are no clips of sufficient quality to estimate ejection fraction, based on prespecified IQS thresholds (a clip-selection sketch follows this list).
    2. Ejection Fraction Estimator and Confidence Metric: The automated ejection fraction estimation is performed using a machine learning model trained on apical and parasternal long-axis views. The model is trained with a dataset from a large number of patients, representative of the intended patient population and a variety of contemporary cardiac ultrasound scanners. The EF calculation can be performed on a 3-view combination (PLAX, AP4, and AP2), 2-view combinations (AP4 and AP2, AP2 and PLAX, AP4 and PLAX), or single views (AP4, AP2). The confidence metric provides the expected error in left-ventricle ejection fraction estimation and is based on the IQS.
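
    A minimal sketch of the clip selection described in item 1 above, assuming each clip carries a predicted view label and an Image Quality Score (IQS); the threshold values and field names here are hypothetical, since the actual prespecified IQS thresholds are not disclosed.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    view: str   # predicted view label: "PLAX", "AP4", or "AP2"
    iqs: float  # Image Quality Score from the classifier

# Hypothetical per-view thresholds; the prespecified values are not public.
IQS_THRESHOLDS = {"PLAX": 0.6, "AP4": 0.6, "AP2": 0.6}

def select_best_clips(clips):
    """Return the highest-IQS clip per view that clears its threshold.
    An empty result means no clip is of sufficient quality to estimate EF,
    in which case the user is notified."""
    best = {}
    for clip in clips:
        threshold = IQS_THRESHOLDS.get(clip.view)
        if threshold is None or clip.iqs < threshold:
            continue
        if clip.view not in best or clip.iqs > best[clip.view].iqs:
            best[clip.view] = clip
    return best

study = [Clip("AP4", 0.9), Clip("AP4", 0.7), Clip("PLAX", 0.4)]
print(select_best_clips(study))  # keeps only the 0.9 AP4 clip
```
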
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance |
    | --- | --- |
    | 80% positive predictive value (PPV) and 80% sensitivity for correct mode, view, and minimum number of frames (Clip Annotator). | Observed PPV point estimates for the Clip Annotator were greater than 97% for identification of the imaging mode and the view; observed sensitivity point estimates were greater than 96% across views and imaging mode. (Meets criteria) |
    | 80% of clips meet expert criteria for suitability for EF estimation (Clip Annotator). | Not reported as an explicit percentage in the text; however, the Clip Annotator study met its pre-defined acceptance criteria, suggesting suitability was addressed via the reported PPV and sensitivity. |
    | Overall (all views) and all combined views: AutoEF within 9.2% RMSD of expert EF. | Overall (best available view) RMSD EF% [95% CI]: 7.21 [6.62, 7.74]. Combined views: AP4+AP2 7.27 [6.55, 7.92]; AP4+PLAX 7.50 [6.85, 8.09]; AP4+AP2+PLAX 7.24 [6.64, 7.80]; AP2+PLAX 8.04 [7.32, 8.70]. (Meets criteria) |
    | New views superior to 11.024% RMSD individually. | Individual views: AP4 only 7.76 [7.01, 8.45]; AP2 only 8.27 [7.44, 9.03]. (Meets criteria; PLAX as an individual view is not explicitly reported against this criterion.) |
    | For each standard view, the Confidence Metric must meet the equivalence-to-expert-EF criteria defined in the PCCP. | Testing verified successful performance of the Confidence Metric in estimating the error range of the EF estimates around the reference EF using equivalence criteria, with evidence that the difference between the estimated and reference EF is normally distributed. (Meets criteria) |
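
    For reference, the RMSD criterion above reduces to a root-mean-square of paired EF differences. A minimal sketch, with illustrative values rather than study data:

```python
import numpy as np

def rmsd(estimated_ef, reference_ef):
    """Root-mean-square deviation between device EF and expert EF, in EF points."""
    diff = np.asarray(estimated_ef, dtype=float) - np.asarray(reference_ef, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustrative values only (not the study data).
device_ef = [55.0, 48.2, 61.5, 39.8]
expert_ef = [53.0, 50.1, 60.0, 44.0]

value = rmsd(device_ef, expert_ef)
print(f"RMSD = {value:.2f}; meets the 9.2 criterion: {value <= 9.2}")
```
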

    Study Details

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: 186 patient studies.
    • Data Provenance: Retrospective, multicenter study. The studies were acquired from three sites across the US: Minneapolis Heart Institute, Duke University, and Northwestern University.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Not explicitly stated as a specific number of individual experts. The text refers to "a panel of expert readers" for the Clip Annotator study and "expert cardiologists" for establishing the EF reference standard.
    • Qualifications of Experts: "Expert cardiologists." No further specific details like years of experience are provided, but the title implies appropriate medical qualifications for interpreting echocardiograms.

    4. Adjudication method for the test set

    • Adjudication Method (Clip Annotator Study): "Results of the Clip Annotator were compared to evaluation by a panel of expert readers." This implies a consensus-based or direct comparison method, but the specific adjudication (e.g., 2+1, 3+1) is not detailed.
    • Adjudication Method (EF Calculation): The reference standard for ejection fraction was established by "expert cardiologists." This suggests expert consensus or established expert interpretation, but a formal adjudication process (e.g., how disagreements between experts were resolved if multiple experts reviewed the same case) is not explicitly described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Comparative Effectiveness Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was not reported in this document. The clinical validation focused on comparing the AutoEF device's performance directly against an expert-established reference standard, not on the improvement of human readers using the device.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Yes, the clinical validation study assessed the standalone performance of the Caption Health AutoEF. The "test compared the Caption Health AutoEF to the expert produced and reported biplane Modified Simpson's ejection fraction." This means the algorithm's output was directly compared to the ground truth without human intervention in the device's estimation process. The clinician's ability to edit the estimation is mentioned as a feature, but the presented performance results are for the automated estimation.

    7. The type of ground truth used

    • Ground Truth Type (Clip Annotator): Evaluation by a "panel of expert readers."
    • Ground Truth Type (Ejection Fraction): Reference standard for ejection fraction was established by 2D echo using the biplane Modified Simpson's method of disks, performed by "expert cardiologists."
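
    For context, the biplane Modified Simpson's method of disks used as the EF reference models the left ventricle as a stack of 20 elliptical disks whose orthogonal diameters come from the AP4 and AP2 views. A minimal sketch of that computation (illustrative, not the study's implementation):

```python
import math

def biplane_disk_volume(diam_ap4, diam_ap2, long_axis_length):
    """Biplane method of disks: V = (pi/4) * sum(a_i * b_i) * (L / 20),
    where a_i and b_i are matching disk diameters from the AP4 and AP2 views."""
    assert len(diam_ap4) == len(diam_ap2) == 20
    return math.pi / 4.0 * sum(a * b for a, b in zip(diam_ap4, diam_ap2)) * (long_axis_length / 20.0)

def ejection_fraction(end_diastolic_volume, end_systolic_volume):
    """EF (%) = (EDV - ESV) / EDV * 100."""
    return 100.0 * (end_diastolic_volume - end_systolic_volume) / end_diastolic_volume
```
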

    8. The sample size for the training set

    • Training Set Sample Size: The text states only that the model was trained on "a dataset from a large number of patients, representative of the intended patient population and variety of contemporary cardiac ultrasound scanners"; a specific numerical sample size for the training set is not provided.

    9. How the ground truth for the training set was established

    • Training Set Ground Truth Establishment: The document does not explicitly detail how the ground truth for the training set was established. It only mentions that the machine learning model was "trained on apical and parasternal long-axis views" and derived through "deep learning," and "locked prior to validation." Given the nature of EF calculation, it is highly probable that expert cardiologists also established the ground truth for the training data, likely using methods similar to the test set (e.g., biplane Modified Simpson's method).

    K Number: K210747
    Manufacturer:
    Date Cleared: 2022-01-19 (313 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Matched on Applicant Name (Manufacturer): Caption Health

    Intended Use

    The Caption Interpretation Automated Ejection Fraction software is used to process previously acquired transthoracic cardiac ultrasound images, to store images, and to manipulate and make measurements on images using an ultrasound device, personal computer, or a compatible DICOM-compliant PACS system in order to provide automated estimation of left ventricular ejection fraction. This measurement can be used to assist the clinician in a cardiac evaluation.

    The Caption Interpretation Automated Ejection Fraction Software is indicated for use in adult patients.

    Device Description

    The Caption Interpretation Automated Ejection Fraction Software ("AutoEF") applies machine learning algorithms to process echocardiography images in order to calculate left ventricular ejection fraction. The cleared Caption Interpretation AutoEF performs left ventricular ejection fraction measurements using apical four chamber or apical two chamber cardiac ultrasound views, or the parasternal long-axis cardiac ultrasound view in combination with an apical four chamber view. The software selects the image clips to be used, performs the AutoEF calculation, and forwards the results to the desired destination for clinician viewing. The output is the Ejection Fraction estimate stated as a percentage, along with an indication of confidence regarding that estimate.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Caption Interpretation Automated Ejection Fraction Software, based on the provided FDA 510(k) summary:

    1. Acceptance Criteria and Reported Device Performance

    | Acceptance Criterion (Primary Endpoint) | Predicate Device Performance (K200621) | Subject Device Performance (K210747) | Meets Acceptance? |
    | --- | --- | --- | --- |
    | Root-mean-square deviation (RMSD) of Ejection Fraction (EF) % below a set threshold, compared to reference ground-truth EF | 7.94 RMSD EF % (95% CI) | 7.21 RMSD EF % (95% CI) | Yes (demonstrated improvement) |
    | Outlier rate (EF error >15%) | 1.61% | 1.09% | Yes (comparable performance, slight improvement) |
    | Outlier rate (EF error >20%) | 0% | 0.55% | Yes (comparable performance) |

    Notes on Acceptance Criteria:

    • The specific "set threshold" for RMSD is not explicitly stated in the provided text. However, the summary indicates that the "primary endpoint for the subject device was met."
    • The summary highlights that the subject device demonstrated "slightly improved performance" in RMSD compared to the predicate, and "comparable performance" in outlier rates.
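
    The outlier rates in the table above are simply the fraction of studies whose absolute EF error exceeds each threshold. A minimal sketch, with illustrative arrays rather than study data:

```python
import numpy as np

def outlier_rates(estimated_ef, reference_ef, thresholds=(15, 20)):
    """Fraction of cases whose absolute EF error exceeds each threshold (EF points)."""
    error = np.abs(np.asarray(estimated_ef, dtype=float) - np.asarray(reference_ef, dtype=float))
    return {t: float(np.mean(error > t)) for t in thresholds}

print(outlier_rates([55, 30, 62], [53, 48, 60]))  # {15: 0.333..., 20: 0.0}
```
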

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Over 186 acquired studies.
    • Data Provenance:
      • Country of Origin: Not explicitly stated.
      • Retrospective or Prospective: Retrospective, non-interventional validation study.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • This information is not explicitly stated in the provided text. The text only mentions that the device's measurements were "compared to the biplane method ejection fraction" as the reference. It doesn't detail who performed these biplane measurements or how many experts were involved.

    4. Adjudication Method for the Test Set

    • The text describes the ground truth as the "biplane method ejection fraction." It does not describe an adjudication method for the test set, implying that the biplane measurements served as the direct reference without further expert consensus or adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    • No, an MRMC comparative effectiveness study was not done. The study described is a standalone performance validation of the algorithm against a declared ground truth (biplane method ejection fraction). There is no mention of human readers assisting or being compared to the AI.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone performance study was done. The described study directly compares the Caption Interpretation Automated Ejection Fraction Software's output to the "biplane method ejection fraction" without human intervention in the loop for the device's calculation. The device provides an "automated estimation of left ventricular ejection fraction."

    7. The Type of Ground Truth Used

    • Expert Consensus / Clinical Standard: The ground truth used was the biplane method ejection fraction. This is a widely accepted clinical method for calculating EF, typically performed by trained professionals (e.g., echocardiographers, cardiologists). While the text doesn't specify whether it was a "consensus" of multiple experts, it refers to an established clinical method.

    8. The Sample Size for the Training Set

    • The training set included an "additional 30% of training data from three ultrasound devices and two clinical sites" for retraining of algorithms, compared to the predicate device. The absolute sample size of the training set is not explicitly stated.

    9. How the Ground Truth for the Training Set Was Established

    • The text states that "Images and cases used for verification and validation testing were carefully separated from training datasets." While it doesn't explicitly detail how the ground truth for the training set was established, it's generally understood that for machine learning algorithms in medical imaging, the training data would also require some form of expert labeling or ground truth establishment (e.g., manual segmentation and measurement by cardiologists, confirmed by clinical standards like the biplane method). The summary does not provide specific details on this process for the training data.

    K Number: K201992
    Device Name: Caption Guidance
    Manufacturer:
    Date Cleared: 2020-09-18 (63 days)
    Product Code:
    Regulation Number: 892.2100
    Reference & Predicate Devices:
    Matched on Applicant Name (Manufacturer): Caption Health

    Intended Use

    The Caption Guidance software is intended to assist medical professionals in the acquisition of cardiac ultrasound images. The Caption Guidance software is an accessory to compatible general purpose diagnostic ultrasound systems.

    The Caption Guidance software is indicated for use in two-dimensional transthoracic echocardiography (2D-TTE) for adult patients, specifically in the acquisition of the following standard views: Parasternal Long-Axis (PLAX), Parasternal Short-Axis at the Aortic Valve (PSAX-AV), Parasternal Short-Axis at the Mitral Valve (PSAX-MV), Parasternal Short- Axis at the Papillary Muscle (PSAX-PM), Apical 4-Chamber (AP4), Apical 5-Chamber (AP5), Apical 2-Chamber (AP2), Apical 3-Chamber (AP3), Subcostal 4-Chamber (SubC4), and Subcostal Inferior Vena Cava (SC-IVC).

    Device Description

    The Caption Guidance software is a radiological computer-assisted acquisition guidance system that provides real-time user guidance during acquisition of echocardiography to assist the user in obtaining anatomically correct images that represent standard 2D echocardiographic diagnostic views and orientations. Caption Guidance is a software-only device that uses artificial intelligence to emulate the expertise of sonographers.

    Caption Guidance comprises several features that, combined, provide expert guidance to the user. These include:

    • Quality Meter: The real-time feedback from the Quality Meter advises the user on the expected diagnostic quality of the resulting clip, such that the user can make decisions to further optimize the quality, for example by following the prescriptive guidance feature below.
    • Prescriptive Guidance: The prescriptive guidance feature in Caption Guidance provides direction to the user to emulate how a sonographer would manipulate the transducer to acquire the optimal view.
    • Auto-Capture: The Caption Guidance Auto-Capture feature triggers an automatic capture of a clip when the quality is predicted to be diagnostic, emulating the way in which a sonographer knows when an image is of sufficient quality to be diagnostic and records it.
    • Save Best Clip: This feature continually assesses clip quality while the user is scanning and, in the event that the user is not able to obtain a clip sufficient for Auto-Capture, the software allows the user to retrospectively record the highest quality clip obtained so far, mimicking the choice a sonographer might make when recording an exam.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Caption Guidance software, based on the provided text.

    Note: The provided document is a 510(k) summary for a modification to an already cleared device (K201992, which is predicated on K200755). Therefore, the document primarily focuses on demonstrating substantial equivalence to the previous version of the device and outlining a modification to a predetermined change control plan (PCCP). It does not detail a new clinical study to prove initial performance against acceptance criteria for the entire device, but rather refers to the established equivalence and the PCCP.

    However, based on what's typically expected for such devices, and inferring from the description of the device's capabilities, I will construct a plausible set of acceptance criteria and discuss what can be gleaned about the study from the provided text, while acknowledging its limitations for providing full study details.


    Acceptance Criteria and Reported Device Performance

    Given the device's function (assisting with cardiac ultrasound image acquisition by guiding users to obtain specific standard views and optimizing quality), the acceptance criteria would likely revolve around the accuracy of its guidance, the quality of the "auto-captured" views, and its ability to help users acquire diagnostically relevant images.

    Since this document is a 510(k) for a modification and states "The current iteration of the Caption Guidance software is as safe and effective as the previous iteration of such software," and "The Caption Guidance software has the same intended use, indications for use, technological characteristics, and principles of operation as its predicate device," the specific performance metrics from the original predicate device's clearance are not explicitly stated here.

    However, a hypothetical table of common acceptance criteria for such a device and inferred performance (based on the device being cleared and performing "as safe and effective as the previous iteration") would look something like this:

    | Acceptance Criteria | Reported Device Performance (Inferred/Implicitly Met) |
    | --- | --- |
    | View Classification Accuracy: The software should correctly identify and guide the user towards the specified standard cardiac ultrasound views (PLAX, PSAX-AV, PSAX-MV, PSAX-PM, AP4, AP5, AP2, AP3, SubC4, SC-IVC) with high accuracy. | Implicitly met, as the device is cleared for this function and is stated to be "as safe and effective as the previous iteration," which performed this. The previous clearance would have established a threshold (e.g., >90% or 95% accuracy in guiding to the correct view). |
    | Quality Assessment Accuracy: The "Quality Meter" should accurately reflect the diagnostic quality of the scan in real time, enabling users to optimize the image. | Implicitly met. Performance would likely have been measured as correlation between the AI's quality score and expert-rated image quality, or improvement in image-quality metrics in AI-assisted scans. |
    | Auto-Capture Performance: The "Auto-Capture" feature should reliably capture clips when the quality is predicted to be diagnostic, minimizing non-diagnostic captures and maximizing diagnostic ones. | Implicitly met. Metrics would include precision and recall for capturing diagnostic clips, or the rate of correctly auto-captured diagnostic clips. |
    | Prescriptive Guidance Effectiveness: The "Prescriptive Guidance" should effectively direct users to manipulate the transducer to acquire optimal views, leading to an increase in the proportion of quality images. | Implicitly met. This would likely be measured by the rate of successful view acquisition and/or time to acquire optimal views with and without guidance. |
    | Clinical Equivalence/Non-Inferiority: Overall use of the Caption Guidance software should lead to the acquisition of cardiac ultrasound images that are non-inferior (or superior) in diagnostic quality and completeness compared to standard methods. | Implicitly met via substantial equivalence to the predicate. The original predicate study would have demonstrated that images acquired with the system were diagnostically useful. |
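
    As one illustration of how the view-classification criterion in the first row might be scored, per-view accuracy against expert labels can be computed as below; the labels are hypothetical, since the summary reports no study data:

```python
from collections import defaultdict

def per_view_accuracy(predicted, expert):
    """Accuracy of predicted view labels, grouped by the expert-assigned view."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, truth in zip(predicted, expert):
        total[truth] += 1
        correct[truth] += int(pred == truth)
    return {view: correct[view] / total[view] for view in total}

# Hypothetical labels only.
pred  = ["PLAX", "AP4", "AP4", "AP2"]
truth = ["PLAX", "AP4", "AP2", "AP2"]
print(per_view_accuracy(pred, truth))  # {'PLAX': 1.0, 'AP4': 1.0, 'AP2': 0.5}
```
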

    Study Details (Based on available information in the document)

    1. Sample size used for the test set and the data provenance:

    • Sample Size: Not explicitly stated in this 510(k) summary. Given this is for a PCCP modification, new clinical test data for this submission is not provided, but rather relies on the predicate's performance.
    • Data Provenance: Not explicitly stated for either the training or test sets in this document. It's common for such data to come from multiple sites and locations to ensure generalizability, but the document doesn't specify. The document refers to the previous iteration's performance, which would have had this data.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not explicitly stated in this 510(k) summary. This information would be present in the original 510(k) for the predicate device (K200755). Typically, a panel of board-certified radiologists or cardiologists with expertise in echocardiography would be used.

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not explicitly stated. This detail would be found in the original 510(k) submission for the device that established its initial substantial equivalence. Common methods include majority rule (e.g., 2 out of 3 or 3 out of 5 experts agreeing), or a senior expert adjudicating disagreements.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • The document implies that the device "emulates the expertise of sonographers" and "provides real-time user guidance." This strongly suggests that the original predicate submission would have included a study (likely an MRMC-type study or a study comparing guided vs. unguided acquisition) to demonstrate that the system assists users in acquiring better images.
    • Effect Size: Not provided in this summary. Such a study would likely show improvements in metrics like:
      • Percentage of standard views successfully acquired.
      • Time taken to acquire optimal views.
      • Image quality scores (e.g., higher proportion of "diagnostic quality" images).
      • Reduction in inter-user variability for image acquisition.
      • Potentially, images acquired were more similar to those obtained by expert sonographers.

    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, implicitly. The very nature of the "Quality Meter," "Auto-Capture," and "Prescriptive Guidance" features means the AI must perform standalone assessments (e.g., classifying views, assessing quality) to provide its guidance. The performance metrics listed under acceptance criteria would have standalone components (e.g., accuracy of the AI's view classification vs. ground truth).

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Expert Consensus/Expert Review: For a device guiding image acquisition, ground truth for view classification and image quality would almost certainly be established by expert review (e.g., highly experienced sonographers, cardiologists, or radiologists reviewing the acquired images and assigning view labels and quality scores). This is standard for image guidance systems.

    7. The sample size for the training set:

    • Not explicitly stated. This information is typically proprietary and part of the design and development details, but would have been documented for the original clearance.

    8. How the ground truth for the training set was established:

    • Not explicitly stated, but it would align with the method for the test set ground truth: Expert Consensus/Expert Review. The training data (images and associated metadata) would be meticulously labeled by qualified experts (e.g., specifying which view each image represents, and potentially assigning quality scores) to enable the AI to learn.

    K Number: K200621
    Manufacturer:
    Date Cleared: 2020-07-22 (135 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Matched on Applicant Name (Manufacturer): Caption Health

    Intended Use

    The Caption Interpretation Automated Ejection Fraction software is used to process previously acquired transthoracic cardiac ultrasound images, to store images, and to manipulate and make measurements on images using an ultrasound device, personal computer, or a compatible DICOM-compliant PACS system in order to provide automated estimation of left ventricular ejection fraction. This measurement can be used to assist the clinician in a cardiac evaluation.

    The Caption Interpretation Automated Ejection Fraction Software is indicated for use in adult patients.

    Device Description

    The Caption Interpretation Automated Ejection Fraction Software applies machine learning algorithms to process echocardiography images in order to calculate left ventricular ejection fraction. Caption Interpretation AutoEF performs left ventricular ejection fraction measurements using the apical four chamber, apical two chamber, or parasternal long-axis cardiac ultrasound views, or a combination of those views. The software selects the image clips to be used, performs the AutoEF calculation, and forwards the results to the desired destination for clinician viewing. The output of the program is the Ejection Fraction estimate stated as a percentage, along with an indication of confidence regarding that estimate.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Clip Annotator Study | PPV for identification of imaging mode and view > 97% | Observed PPV point estimates > 97% |
    | Clip Annotator Study | Sensitivity across views and imaging mode > 90% | Observed sensitivity point estimates > 90% |
    | Pivotal Clinical Study (Primary Endpoint) | RMSD between AutoEF-derived values (best available view combination) and reference method | Met predetermined acceptance criteria |
    | Pivotal Clinical Study (Secondary Endpoint) | RMSD for combinations of views (AP2, AP4, PLAX) | Met the same predetermined acceptance criteria |
    | Pivotal Clinical Study (Single-View AP4) | RMSD for AP4 view | Observed results less than the acceptance criterion |
    | Pivotal Clinical Study (Single-View AP4) | Performance compared to physicians in qualitative and quantitative visual assessment | Superior to physicians |
    | Pivotal Clinical Study (Single-View AP2) | RMSD for AP2 view | Did not meet the acceptance criterion, but performed superior on a quantitative visual assessment |
    | Pivotal Clinical Study (Single-View PLAX) | RMSD for PLAX view | Did not meet the acceptance criterion, but performed superior on a quantitative visual assessment |
    | Pivotal Clinical Study (AP2/PLAX RMSD vs. sonographers) | AP2-only and PLAX-only RMSD compared to sonographers' biplane tracing before cardiologist overread | Observed RMSD lower than that of sonographers' biplane tracing before cardiologist overread |
    | Confidence Metric Functionality | Successful estimation of the error range of EF estimates around the reference EF, with normally distributed difference (estimated vs. reference EF) | Verified successful performance |
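
    The Clip Annotator metrics above reduce to per-class confusion-count ratios. A minimal sketch with hypothetical counts (the summary does not report the underlying counts):

```python
def ppv_and_sensitivity(true_pos, false_pos, false_neg):
    """PPV = TP / (TP + FP); sensitivity = TP / (TP + FN), for one class
    (e.g., one imaging mode or one view such as AP4)."""
    ppv = true_pos / (true_pos + false_pos)
    sensitivity = true_pos / (true_pos + false_neg)
    return ppv, sensitivity

# Hypothetical counts only.
ppv, sens = ppv_and_sensitivity(true_pos=485, false_pos=10, false_neg=15)
print(f"PPV = {ppv:.1%}, sensitivity = {sens:.1%}")  # PPV = 98.0%, sensitivity = 97.0%
```
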

    Detailed Study Information:

    1. Sample sizes used for the test set and data provenance:

      • The document does not explicitly state the sample sizes (number of patients or echocardiograms) used for the test sets in either the Clip Annotator study or the pivotal clinical investigation.
      • Data Provenance: The document does not specify the country of origin of the data. It implies the data was "previously acquired transthoracic cardiac ultrasound images," but does not explicitly state if it was retrospective or prospective data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Clip Annotator Study: "a panel of expert readers" was used. The exact number of experts and their specific qualifications are not provided beyond "expert readers."
      • Pivotal Clinical Study: The "reference method" is mentioned, implying a ground truth was established, but the number and qualifications of experts involved in creating this reference (e.g., conventional EF calculation methods by cardiologists) are not detailed. It mentions "sonographers' biplane tracing before a cardiologist overread" in the context of comparison, suggesting cardiologist input in the reference.
    3. Adjudication method for the test set:

      • Clip Annotator Study: "Results of the Clip Annotator were compared to evaluation by a panel of expert readers." This suggests a direct comparison rather than a specific multi-reader adjudication method like 2+1 or 3+1. It's unclear if there was an adjudication for disagreements among the panel.
      • Pivotal Clinical Study: The document refers to a "reference method" for EF calculation. It doesn't specify an adjudication method for this reference, nor for any disagreements in the human interpretations used for comparison (e.g., sonographers' tracings).
    4. If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance:

      • The document implies a comparison between the AI's performance and human performance, stating the AP4 view was "superior to physicians in making a qualitative and quantitative visual assessment" and that AP2-only and PLAX-only RMSD was "lower than the RMSD of sonographers' biplane tracing before a cardiologist overread."
      • However, it does not explicitly describe a formal MRMC comparative effectiveness study where human readers' performance with AI assistance is directly compared to their performance without AI assistance. The comparisons made seem to be between the AI's standalone performance and human performance (or a component of human performance like tracing).
      • Therefore, an effect size of how much human readers improve with AI vs. without AI assistance cannot be extracted from this text.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, the core objective of the studies described is to evaluate the standalone performance of the "Caption Interpretation Automated Ejection Fraction Software" in calculating EF and selecting clips. The reported "AutoEF accuracy" and RMSD values are indicative of standalone algorithm performance.
    6. The type of ground truth used:

      • Clip Annotator Study: Expert consensus/evaluation by a panel of expert readers concerning the correct imaging mode and view.
      • Pivotal Clinical Study: "Conventional EF calculation methods" served as the "reference method." This likely implies expert-derived EF measurements, potentially involving cardiologist overread of sonographer tracings, and possibly other established clinical methods.
    7. The sample size for the training set:

      • The document states, "The algorithms for estimating ejection fraction have been further optimized through additional training" and "Images and cases used for verification testing were carefully separated from training algorithms."
      • However, the specific sample size for the training set is not provided in this document.
    8. How the ground truth for the training set was established:

      • The document states that the algorithms were "optimized through additional training" but does not detail how the ground truth for this training data was established. It only mentions that "Images and cases used for verification testing were carefully separated from training algorithms," which relates to data splitting, not ground truth establishment for training.

    K Number: K200755
    Device Name: Caption Guidance
    Manufacturer:
    Date Cleared: 2020-04-16 (24 days)
    Product Code:
    Regulation Number: 892.2100
    Reference & Predicate Devices: N/A
    Matched on Applicant Name (Manufacturer): Caption Health

    Intended Use

    The Caption Guidance software is intended to assist medical professionals in the acquisition of cardiac ultrasound images. The Caption Guidance software is an accessory to compatible general purpose diagnostic ultrasound systems.

    The Caption Guidance software is indicated for use in two-dimensional transthoracic echocardiography (2D-TTE) for adult patients, specifically in the acquisition of the following standard views: Parasternal Long-Axis (PLAX), Parasternal Short-Axis at the Aortic Valve (PSAX-AV), Parasternal Short-Axis at the Mitral Valve (PSAX-MV), Parasternal Short-Axis at the Papillary Muscle (PSAX-PM), Apical 4-Chamber (AP4), Apical 5-Chamber (AP5), Apical 2-Chamber (AP2), Apical 3-Chamber (AP3), Subcostal 4-Chamber (SubC4), and Subcostal Inferior Vena Cava (SC-IVC).

    Device Description

    The Caption Guidance software is a radiological computer-assisted acquisition guidance system that provides real-time user guidance during acquisition of echocardiography to assist the user in obtaining anatomically correct images that represent standard 2D echocardiographic diagnostic views and orientations. Caption Guidance is a software-only device that uses artificial intelligence to emulate the expertise of sonographers.

    Caption Guidance comprises several features that, combined, provide expert guidance to the user. These include:

    • Quality Meter: The real-time feedback from the Quality Meter advises the user on the expected diagnostic quality of the resulting clip, such that the user can make decisions to further optimize the quality, for example by following the prescriptive guidance feature below.
    • Prescriptive Guidance: The prescriptive guidance feature in Caption Guidance provides direction to the user to emulate how a sonographer would manipulate the transducer to acquire the optimal view.
    • Auto-Capture: The Caption Guidance Auto-Capture feature triggers an automatic capture of a clip when the quality is predicted to be diagnostic, emulating the way in which a sonographer knows when an image is of sufficient quality to be diagnostic and records it.
    • Save Best Clip: This feature continually assesses clip quality while the user is scanning and, in the event that the user is not able to obtain a clip sufficient for Auto-Capture, the software allows the user to retrospectively record the highest quality clip obtained so far, mimicking the choice a sonographer might make when recording an exam.
    AI/ML Overview

    The provided text describes the Caption Guidance software, an updated version of a previously cleared device. The submission focuses on demonstrating substantial equivalence to its predicate device rather than presenting a new clinical study with specific acceptance criteria and performance metrics for the updated features in a comparative effectiveness study.

    Therefore, the information for some of your requested points is not explicitly detailed in the provided text, as the submission relies on demonstrating that the modifications to the device do not raise new questions of safety or effectiveness and that the overall functionality remains substantially equivalent to the predicate.

    Here's a breakdown of the information available based on your request:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state a table of specific acceptance criteria with numerical targets for the reported device performance related to the updated features of Caption Guidance (e.g., a specific recall or precision for Auto-Capture). Instead, it describes general performance testing and verification activities to ensure the software performs as expected and is substantially equivalent to the predicate.

    The "Performance Data" section details that:

    • "Extensive algorithm development and software verification testing assessed the performance of the software."
    • "The Caption Guidance algorithm was tested for the performance of the modified Auto-Capture feature in recording clinically-acceptable images and clips."
    • "Furthermore, the subject device's algorithm was tested for the performance of providing Prescriptive Guidance (PG), using the following tasks:
        1. Frame-level PG prediction of the probe maneuver needed to acquire an image/frame of heart, for a specific view.
        2. Clip-level PG prediction of the probe maneuver needed to acquire a diagnostic quality clip for a specific view."

    The conclusion is that "Overall, the non-clinical performance testing results provide evidence in support of the functionality of Caption Guidance fundamental algorithms."
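
    The two PG tasks quoted above operate at different granularities. One plausible way to derive the clip-level prediction from frame-level predictions is majority voting; the summary does not state the actual aggregation method, so the sketch below is purely illustrative:

```python
from collections import Counter

def clip_level_prediction(frame_predictions):
    """Aggregate per-frame probe-maneuver predictions into one clip-level
    prediction by majority vote (hypothetical aggregation rule)."""
    if not frame_predictions:
        return None
    (maneuver, _count), = Counter(frame_predictions).most_common(1)
    return maneuver

print(clip_level_prediction(["rotate_cw", "rotate_cw", "tilt_up"]))  # rotate_cw
```
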

    For Human Factors Testing, the results conclude:

    • "Summative testing has been completed with 16 users without prior scanning experience and 9 users with prior experience)."
    • "The summative human factors testing concluded that there were no use errors associated with critical tasks likely to lead to patient injury."
    • "Additionally, although the testing was not comparative in nature, when viewed in context of the testing provided in the original De Novo, the enhanced product appears to provide optimization of usability."

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a separate "test set" sample size for evaluating clinical performance in the traditional sense for this 510(k) submission. The focus was on software verification and validation, and human factors testing.

    • Human Factors Testing:
      • Sample Size: 16 users without prior scanning experience and 9 users with prior experience (total 25 users).
      • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). This was likely prospective testing conducted with study participants.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    This information is not provided in the document. As the submission focuses on software verification and human factors for an updated version of a device, the establishment of ground truth by multiple experts for a clinical "test set" in the context of diagnostic accuracy is not detailed for this specific submission. The initial De Novo submission (DEN190040) would likely contain this information for the predicate device.

    4. Adjudication Method for the Test Set

    This information is not provided in the document.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    A formal multi-reader multi-case (MRMC) comparative effectiveness study was not done for this 510(k) submission. The document explicitly states for the human factors testing: "although the testing was not comparative in nature...". The comparison is mainly against the predicate device's established performance and the demonstration that the enhancements do not introduce new risks or diminish performance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    The document states: "Extensive algorithm development and software verification testing assessed the performance of the software." and "The Caption Guidance algorithm was tested for the performance of the modified Auto-Capture feature... Additionally, the subject device's algorithm was tested for the performance of providing Prescriptive Guidance (PG)..."

    This indicates that standalone algorithm performance testing was conducted for the Auto-Capture and Prescriptive Guidance features. However, specific metrics (e.g., accuracy, sensitivity, specificity) for this standalone performance are not provided. The human factors testing then evaluates the human-in-the-loop performance of the entire system.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    For the algorithm testing of Auto-Capture and Prescriptive Guidance, the ground truth would likely be based on:

    • Expert Consensus/Clinical Acceptability: For Auto-Capture, the "recording clinically-acceptable images and clips" implies expert assessment of image quality.
    • Expert Knowledge of Optimal Probe Maneuvers: For Prescriptive Guidance, the ground truth for "probe maneuver needed to acquire an image/frame of heart" or "diagnostic quality clip" would stem from expert sonographer knowledge and established echocardiography guidelines.

    The document does not explicitly state the methodology for establishing this ground truth (e.g., specific number of sonographers or cardiologists, their qualifications).

    8. The Sample Size for the Training Set

    The document does not provide a specific sample size for the training set used for the Caption Guidance algorithms. It mentions "Extensive algorithm development..." but does not detail the training data sets.

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly describe how the ground truth for the training set was established. However, given the nature of the device (guidance for cardiac ultrasound acquisition), it would typically involve:

    • Expert Sonographer/Cardiologist Annotation: Labeling of optimal probe positions, image quality assessments, and identification of standard views in echocardiogram clips.
    • Adherence to Clinical Guidelines: Ensuring annotations align with established protocols for acquiring diagnostic quality echocardiograms.
