The Cardiac Guidance software is intended to assist medical professionals in the acquisition of cardiac ultrasound images. Cardiac Guidance software is an accessory to compatible general purpose diagnostic ultrasound systems.
The Cardiac Guidance software is indicated for use in two-dimensional transthoracic echocardiography (2D-TTE) for adult patients, specifically in the acquisition of the following standard views: Parasternal Long-Axis (PLAX), Parasternal Short-Axis at the Aortic Valve (PSAX-AV), Parasternal Short-Axis at the Mitral Valve (PSAX-MV), Parasternal Short-Axis at the Papillary Muscle (PSAX-PM), Apical 4-Chamber (AP4), Apical 5-Chamber (AP5), Apical 2-Chamber (AP2), Apical 3-Chamber (AP3), Subcostal 4-Chamber (SubC4), and Subcostal Inferior Vena Cava (SC-IVC).
The Cardiac Guidance software is a radiological computer-assisted acquisition guidance system that provides real-time guidance during echocardiography to assist the user in capturing anatomically correct images representing standard 2D echocardiographic diagnostic views and orientations. This AI-powered, software-only device emulates the expertise of skilled sonographers.
Cardiac Guidance comprises several features that, combined, provide expert guidance to the user. These include:
- Quality Meter: The real-time feedback from the Quality Meter advises the user on the expected diagnostic quality of the resulting clip, such that the user can make decisions to further optimize the quality, for example by following the prescriptive guidance feature below.
- Prescriptive Guidance: The prescriptive guidance feature in Cardiac Guidance provides direction to the user to emulate how a sonographer would manipulate the transducer to acquire the optimal view.
- Auto-Capture: The Cardiac Guidance Auto-Capture feature triggers an automatic capture of a clip when the quality is predicted to be diagnostic, emulating the way in which a sonographer knows when an image is of sufficient quality to be diagnostic and records it.
- Save Best Clip: This feature continually assesses clip quality while the user is scanning and, in the event that the user is not able to obtain a clip sufficient for Auto-Capture, the software allows the user to retrospectively record the highest quality clip obtained so far, mimicking the choice a sonographer might make when recording an exam.
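The Auto-Capture and Save Best Clip behaviors described above amount to threshold-triggered capture plus a running best-so-far buffer. The following Python sketch is purely illustrative and not the vendor's implementation; the class names, the quality scores, and the `DIAGNOSTIC_THRESHOLD` value are all assumptions introduced for this example.

```python
from dataclasses import dataclass, field
from typing import List, Optional

DIAGNOSTIC_THRESHOLD = 0.8  # assumed cutoff for "predicted diagnostic"

@dataclass
class Clip:
    frames: List[int]   # stand-in for image data
    quality: float      # model-predicted quality score in [0, 1]

@dataclass
class CaptureSession:
    best_clip: Optional[Clip] = None
    auto_captured: List[Clip] = field(default_factory=list)

    def observe(self, clip: Clip) -> None:
        # Auto-Capture: record immediately when quality is predicted diagnostic.
        if clip.quality >= DIAGNOSTIC_THRESHOLD:
            self.auto_captured.append(clip)
        # Save Best Clip: always track the highest-quality clip seen so far.
        if self.best_clip is None or clip.quality > self.best_clip.quality:
            self.best_clip = clip

    def save_best(self) -> Optional[Clip]:
        # Retrospective save when no clip reached the Auto-Capture threshold.
        return self.best_clip

session = CaptureSession()
for q in (0.35, 0.62, 0.55, 0.71):
    session.observe(Clip(frames=[], quality=q))

print(len(session.auto_captured))             # 0: nothing reached the threshold
print(round(session.save_best().quality, 2))  # 0.71: best clip retained
```

In this sketch, Auto-Capture and Save Best Clip share one quality stream: the first fires eagerly on a threshold, while the second keeps a fallback so the user never leaves the exam empty-handed.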
The provided document is a 510(k) summary for Cardiac Guidance software, which is a radiological computer-assisted acquisition guidance system. It discusses an updated Predetermined Change Control Plan (PCCP) and addresses how future modifications will be validated. However, it does not contain a detailed performance study with specific acceptance criteria and results from such a study for the current submission.
The document focuses on the plan for future modifications and ensuring substantial equivalence through predefined testing. While it mentions that "Safety and performance of the Cardiac Guidance software will be evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing outlined in the submission," and "The test methods specified in the PCCP establish substantial equivalence to the predicate device, and include sample size determination, analysis methods, and acceptance criteria," the specific details of a study proving the device meets acceptance criteria are not included in this document.
Therefore, the following information cannot be fully extracted based solely on the provided text:
- A table of acceptance criteria and reported device performance (for the current submission/PCCP update).
- Sample size used for the test set and data provenance.
- Number of experts and their qualifications for establishing ground truth for the test set.
- Adjudication method for the test set.
- Results of a multi-reader multi-case (MRMC) comparative effectiveness study, including effect size.
- Details of a standalone (algorithm only) performance study.
- The type of ground truth used.
- Sample size for the training set.
- How the ground truth for the training set was established.
However, the document does contain information about performance testing and acceptance criteria for future modifications under the PCCP.
Here's a summary of what can be extracted or inferred regarding performance and validation, specifically related to the plan for demonstrating that future modifications will meet acceptance criteria:
1. A table of Acceptance Criteria and the Reported Device Performance:
The document describes the types of testing and the intent to use acceptance criteria for future modifications. It does not provide a table of acceptance criteria and reported device performance for the current submission or previous clearances. It states:
"The test methods specified in the PCCP establish substantial equivalence to the predicate device, and include sample size determination, analysis methods, and acceptance criteria."
This indicates that acceptance criteria will be defined for future validation tests, but they are not listed here. The document focuses on the types of modifications and the high-level testing methods:
| Modification Category | Testing Methods Summary |
| --- | --- |
| Retraining/optimization/modification of core algorithm(s) | Repeating verification tests and the system level validation test to ensure the pre-defined acceptance criteria are met. |
| Real-time guidance for additional 2D TTE views | Repeating verification tests and two system level validation tests, including usability testing, to ensure the pre-defined acceptance criteria are met for the additional views. |
| Optimization of the core algorithm(s) implementation (thresholds, averaging logic, transfer functions, frequency, refresh rate) | Repeating relevant verification test(s) and the system level validation test to ensure the pre-defined acceptance criteria are met. |
| Addition of new types of prescriptive guidance (patient positioning, breathing guidance, combined probe movements, pressure, sliding/angling) and addition of existing guidance types to all views | Repeating relevant verification tests and two system level validation tests, including usability testing, to ensure the pre-defined acceptance criteria are met. |
| Labeling compatibility with various screen sizes (including mobile) and UI/UX changes (e.g., audio, configurability of guidance) | Repeating relevant verification tests and the system level validation test, including usability testing, to ensure the pre-defined acceptance criteria are met. |
2. Sample size used for the test set and the data provenance:
The document states:
"To ensure validation test datasets are representative of the intended use population, each will meet minimum demographic requirements."
However, specific sample sizes and data provenance (e.g., country of origin, retrospective/prospective) for any performance study are not provided in this document. It only refers to "sample size determination" as being included in the test methods for the PCCP.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not provided in the document.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance:
The document refers to a "Non-expert Validation" being added to the subject PCCP, which was "Not included" in the K201992 PCCP. It describes this as:
"Adds standalone test protocol to enable validation of modified device performance by the intended user groups, ensuring equivalency to the original device based on predefined clinical endpoints."
While this suggests a study involving users, it does not explicitly state it's an MRMC comparative effectiveness study comparing human readers with and without AI assistance, nor does it provide any effect size.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
The document's "Testing Methods" column frequently mentions "Repeating verification tests and the system level validation test to ensure the pre-defined acceptance criteria are met." This suggests that standalone algorithm performance testing (verification and system-level validation) is part of the plan for future modifications. However, specific details of such a study are not provided in this document.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
This information is not explicitly stated in the document. The "Non-expert Validation" mentions "predefined clinical endpoints," but the source of the ground truth for those endpoints is not detailed.
8. The sample size for the training set:
This information is not provided in the document.
9. How the ground truth for the training set was established:
This information is not provided in the document. The document mentions "Retraining/optimization/modification of core algorithm(s)" and that "The modification protocol incorporates impact assessment considerations and specifies requirements for data management, including data sources, collection, storage, and sequestration, as well as documentation and data segregation/re-use practices," implying a training set exists, but details on ground truth establishment are missing.
892.2100 Radiological acquisition and/or optimization guidance system.
(a) Identification. A radiological acquisition and/or optimization guidance system is a device that is intended to aid in the acquisition and/or optimization of images and/or diagnostic signals. The device interfaces with the acquisition system, analyzes its output, and provides guidance and/or feedback to the operator for improving image and/or signal quality.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Design verification and validation must include:
(i) A detailed, technical device description, including a detailed description of the impact of any software and hardware on the device's functions, the associated capabilities and limitations of each part, and the associated inputs and outputs.
(ii) A detailed, technical report on the non-clinical performance testing of the subject device in the intended use environments, using relevant consensus standards when applicable.
(iii) A detailed report on the clinical performance testing, obtained from either clinical testing, accepted virtual/physical systems designed to capture clinical variability, comparison to a closely-related device with established clinical performance, or other sources that are justified appropriately. The choice of the method must be justified given the risk of the device and the general acceptance of the test methods. The report must include the following:
(A) A thorough description of the testing protocol(s).
(B) A thorough, quantitative evaluation of the diagnostic utility and quality of images/data acquired, or optimized, using the device.
(C) A thorough, quantitative evaluation of the performance in a representative user population and patient population, under anticipated conditions and environments of use.
(D) A thorough discussion on the generalizability of the clinical performance testing results.
(E) A thorough discussion on use-related risk analysis/human factors data.
(iv) A detailed protocol that describes, in the event of a future change, the level of change in the device technical specifications or indications for use at which the change or changes could significantly affect the safety or effectiveness of the device and the risks posed by these changes. The assessment metrics, acceptance criteria, and analytical methods used for the performance testing of changes that are within the scope of the protocol must be included.
(v) Documentation of an appropriate training program, including instructions on how to acquire and process quality images and video clips, and a report on usability testing demonstrating the effectiveness of that training program on user performance, including acquiring and processing quality images.
(2) The labeling required under § 801.109(c) of this chapter must include:
(i) A detailed description of the device, including information on all required and/or compatible parts.
(ii) A detailed description of the patient population for which the device is indicated for use.
(iii) A detailed description of the intended user population, and the recommended user training.
(iv) Detailed instructions for use, including the information provided in the training program used to meet the requirements of paragraph (b)(1)(v) of this section.
(v) A warning that the images and data acquired using the device are to be interpreted only by qualified medical professionals.
(vi) A detailed summary of the reports required under paragraphs (b)(1)(ii) and (iii) of this section.
(vii) A statement on upholding the As Low As Reasonably Achievable (ALARA) principle with a discussion on the associated device controls/options.