K Number
K213278
Device Name
Planmed Verity
Manufacturer
Date Cleared
2022-04-28

(209 days)

Product Code
Regulation Number
892.1750
Panel
RA
Reference & Predicate Devices
Intended Use

Planmed Verity is intended to be used for X-ray computed cone beam tomography imaging of anatomies within upper and lower extremities, head and neck.

Device Description

The Planmed Verity is a cone beam computed tomography (CBCT) x-ray system for generating 3D imaging scans of extremity, head, and neck anatomies. It utilizes an amorphous-silicon-based digital image receptor to capture digital images. The toroidal-shaped gantry houses a rotating x-ray source combined with a flat-panel image receptor. The scan rotation angle is less than a full circle, and 300 to 400 projection images are acquired during the scan. The receptor directly converts incoming x-ray photons to digital image data, and a reconstruction software algorithm uses the projection image data to generate a 3D image volume of the anatomy.
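The reconstruction algorithm itself is proprietary and not described in the submission, but the underlying principle — combining many angular projection images into a single reconstructed image — can be sketched with a toy 2-D unfiltered backprojection. This is a deliberately simplified, parallel-beam stand-in, not Planmed's cone-beam implementation; the `backproject` function and its geometry are illustrative assumptions only:

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Reconstruct a 2-D slice from projections by unfiltered backprojection.

    sinogram   -- (n_angles, n_detectors) array of projection values
    angles_deg -- projection angle (degrees) for each sinogram row
    size       -- side length of the square output image
    """
    recon = np.zeros((size, size))
    center = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs = xs - center
    ys = ys - center
    for proj, angle in zip(sinogram, angles_deg):
        theta = np.deg2rad(angle)
        # Detector coordinate that each pixel projects onto at this angle.
        t = xs * np.cos(theta) + ys * np.sin(theta)
        idx = np.clip(np.round(t + center).astype(int), 0, proj.size - 1)
        # Smear (backproject) the projection values across the image.
        recon += proj[idx]
    return recon / len(angles_deg)

# A point-like object: a single bright detector bin at every angle.
sino = np.zeros((180, 64))
sino[:, 31] = 1.0
img = backproject(sino, np.arange(180), size=64)
```

A production CBCT algorithm (e.g. FDK-style reconstruction) would additionally apply ramp filtering and cone-beam geometry weighting; the sketch only illustrates the smear-and-sum principle behind using several hundred projections.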

AI/ML Overview

The provided text is a 510(k) summary for the Planmed Verity CBCT system. It focuses on demonstrating substantial equivalence to a predicate device, primarily through performance data related to new software features.

Based on the provided document, here's an analysis of the acceptance criteria and the study proving the device meets them:

1. Table of Acceptance Criteria and Reported Device Performance

The document doesn't explicitly present a formal table of quantitative acceptance criteria with corresponding performance metrics like sensitivity, specificity, or AUC, as might be found for an AI diagnostic algorithm. Instead, the acceptance is demonstrated qualitatively and through the clinical evaluation of specific new software features.

The key reported performance is:

  • Overall image quality was acceptable for all cases and image types.
  • The clinical image quality of the eFoV feature outside the primary field of view is lower; the feature offers a visualization aid, not diagnostic value. (This is a self-acknowledged limitation rather than a performance metric per se, but it is crucial for labeling.)
  • The new software features have acceptable image quality.
| Acceptance Criteria (Implicit) | Reported Device Performance (Qualitative) |
|---|---|
| Acceptable overall image quality for all cases and image types (for eFoV, improved CALM, MAR, and stitching algorithms). | "Overall image quality was acceptable for all cases and image types." |
| Clinical utility of eFoV feature. | "The clinical image quality of the eFoV feature outside the primary field of view is of lower image quality and offers visualization aid, not diagnostic value." (Acknowledged in labeling.) |
| New software features have acceptable image quality. | "The clinical image evaluation study shows that... the new software features have acceptable image quality." |

2. Sample Size and Data Provenance

  • Test Set Sample Size: The document states "a number of sample scans and diagnostic images." The exact number is not specified.
  • Data Provenance: Not explicitly stated, but given the manufacturer (Planmed Oy, Finland) and the context of a 510(k) submission, it's highly likely the data originated from a clinical setting, potentially in Finland or Europe. There is no information on whether it was retrospective or prospective.

3. Number of Experts and Qualifications

  • Number of Experts: Two
  • Qualifications: "Two experienced radiologists"

4. Adjudication Method for the Test Set

  • The radiologists "studied independently" the images and "scored different essential image quality related items."
  • "The results have been summarized in a clinical study report."
  • There is no explicit mention of an adjudication method (e.g., 2+1, 3+1 consensus). The phrasing "studied independently" suggests individual scoring, followed by a summary, but not necessarily a formal consensus process for discrepancies.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • No, an MRMC comparative effectiveness study was not done in the sense of evaluating human readers' improvement with AI vs. without AI assistance.
  • The clinical evaluation focused on the image quality produced by the new software features themselves, as assessed by radiologists, not on how these features assisted radiologists in a diagnostic task. The AI here is integrated into the reconstruction and image enhancement, not necessarily a separate diagnostic aid.

6. Standalone (Algorithm Only) Performance

  • Implicitly, yes, to the extent that the radiologists were evaluating the image output of the algorithms directly. The "new software features eFoV, improved CALM motion blur reduction, MAR improved metal artefact removal and stitching algorithm performance have been clinically evaluated." This involved the output of the algorithms being presented to experts for assessment of "image quality related items." This is not a quantitative standalone diagnostic performance (e.g., classifying disease), but rather an assessment of the quality of the images produced by the algorithms.

7. Type of Ground Truth Used

  • Expert Consensus/Opinion on Image Quality: The "ground truth" here is the qualitative assessment of image quality by two experienced radiologists. This is not pathology, outcomes data, or consensus on disease presence/absence, but rather expert subjective evaluation of the technical and diagnostic quality of the images generated by the new algorithms.

8. Sample Size for the Training Set

  • Not specified. The document focuses on the evaluation of the software features, not on the training of any underlying machine learning models. It's possible that these "new software features" (eFoV, CALM, MAR, stitching) are based on traditional image processing algorithms rather than deep learning, in which case a "training set" in the common AI sense might not apply, or was part of internal development.

9. How the Ground Truth for the Training Set was Established

  • Since the training set size is not specified, and the nature of the "new software features" is not detailed (e.g., if they are AI/ML based), the method for establishing ground truth for a training set is not provided in this document.

§ 892.1750 Computed tomography x-ray system.

(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II.