K Number
K203518
Device Name
Quicktome
Date Cleared
2021-03-09 (98 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Quicktome is intended for display of medical images and other healthcare data. It includes functions for image review, image manipulation, basic measurements, planning, and 3D visualization (MPR reconstructions and 3D volume rendering). Modules are available for image processing and atlas-assisted visualization and segmentation, where an output can be generated for use by a system capable of reading DICOM image sets.

Quicktome is indicated for use in the processing of diffusion-weighted MRI sequences into 3D maps that represent white-matter tracts based on constrained spherical deconvolution methods.
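
For readers unfamiliar with the technique, below is a minimal sketch of constrained spherical deconvolution (CSD) tractography using the open-source DIPY library. It illustrates the general method named in the indications, not Quicktome's actual pipeline; the file names, thresholds, and tracking parameters are hypothetical.

```python
# Minimal CSD tractography sketch with DIPY (illustrative only; NOT
# Quicktome's implementation). File names and parameters are assumptions.
from dipy.core.gradients import gradient_table
from dipy.data import default_sphere
from dipy.direction import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
                                   auto_response_ssst)
from dipy.segment.mask import median_otsu
from dipy.tracking import utils
from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion
from dipy.tracking.streamline import Streamlines

# Load the diffusion-weighted series and its gradient scheme.
data, affine = load_nifti("dwi.nii.gz")
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")
gtab = gradient_table(bvals, bvecs)

# Skull strip: restrict processing to brain voxels (mask from the b0 volume).
data, mask = median_otsu(data, vol_idx=[0], numpass=4)

# Estimate the single-fiber response function and fit the CSD model.
response, _ = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
csd_model = ConstrainedSphericalDeconvModel(gtab, response)
peaks = peaks_from_model(csd_model, data, default_sphere, mask=mask,
                         relative_peak_threshold=0.5,
                         min_separation_angle=25)

# Seed within the brain mask and track streamlines deterministically.
seeds = utils.seeds_from_mask(mask, affine, density=1)
stopping = ThresholdStoppingCriterion(peaks.gfa, 0.1)
streamlines = Streamlines(
    LocalTracking(peaks, stopping, seeds, affine, step_size=0.5))
print(f"Reconstructed {len(streamlines)} streamlines")
```

In practice the streamlines would then be filtered into named white-matter bundles; the 510(k) summary describes atlas-assisted generation of such bundles but does not detail the algorithm.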

Typical users of Quicktome are medical professionals, including but not limited to surgeons and radiologists.

Device Description

Quicktome is a software-only, cloud-deployed image processing package that can be used to perform DICOM image viewing, image processing, and analysis.

Quicktome can retrieve DICOM images acquired with MRI, including diffusion-weighted imaging (DWI) sequences and T1, T2, and FLAIR images, from picture archiving and communication systems (PACS). Once retrieved, Quicktome removes protected health information (PHI) and links the dataset to an encryption key, which is later used to relink the data to the patient when the output is exported back to the hospital PACS. Processing is performed on the anonymized dataset in the cloud, and the processed output is served to clinicians for planning and visualization on their local machines.
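
As an illustration of the de-identify/re-link pattern described above, here is a simplified sketch using pydicom. It is an assumption of how such a workflow could look, not Quicktome's implementation; the PHI tag list is abbreviated, and the in-memory key store stands in for what the summary describes as an encrypted linkage.

```python
# Simplified de-identify/re-link sketch with pydicom (illustrative only).
import uuid
import pydicom

PHI_TAGS = ("PatientName", "PatientID", "PatientBirthDate", "PatientAddress")
key_store = {}  # hypothetical; production storage would be encrypted

def anonymize(path_in, path_out):
    """Strip PHI, remember it under a fresh key, and save the dataset."""
    ds = pydicom.dcmread(path_in)
    key = str(uuid.uuid4())
    key_store[key] = {t: str(getattr(ds, t, "")) for t in PHI_TAGS}
    for tag in PHI_TAGS:
        if hasattr(ds, tag):
            setattr(ds, tag, "")
    ds.PatientID = key  # carry the key so outputs can be re-linked later
    ds.save_as(path_out)
    return key

def relink(path_in, path_out, key):
    """Restore PHI onto a processed dataset before export to the PACS."""
    ds = pydicom.dcmread(path_in)
    for tag, value in key_store[key].items():
        setattr(ds, tag, value)
    ds.save_as(path_out)
```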

The software provides a workflow for a clinician to:

  • Select a patient case for planning and visualization,
  • Confirm image quality,
  • Explore anatomical regions, network templates, tractography bundles, and parcellations,
  • Create and edit regions of interest, and
  • Export objects of interest in DICOM format for use in systems that can view DICOM images (a minimal export sketch follows this list).
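
Purely as an illustration of the export step, the sketch below writes a derived ROI mask as a DICOM secondary-capture object with pydicom so that any DICOM viewer can load it. This is a hypothetical stand-in; a clinical product would more likely emit a DICOM SEG or RTSTRUCT object, and all identifiers here are placeholders.

```python
# Hypothetical export sketch: a derived ROI mask written as a DICOM
# secondary-capture object. All identifiers/values are placeholders.
import numpy as np
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import (ExplicitVRLittleEndian,
                         SecondaryCaptureImageStorage, generate_uid)

mask = np.zeros((256, 256), dtype=np.uint8)  # stand-in ROI mask
mask[96:160, 96:160] = 255

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = SecondaryCaptureImageStorage
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = FileDataset("roi_export.dcm", {}, file_meta=meta,
                 preamble=b"\x00" * 128)
ds.SOPClassUID = meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.StudyInstanceUID = generate_uid()
ds.SeriesInstanceUID = generate_uid()
ds.Modality = "OT"  # "other"; a real export would match the source study
ds.Rows, ds.Columns = mask.shape
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated, ds.BitsStored, ds.HighBit = 8, 8, 7
ds.PixelRepresentation = 0
ds.PixelData = mask.tobytes()
ds.is_little_endian, ds.is_implicit_VR = True, False  # match transfer syntax
ds.save_as("roi_export.dcm", write_like_original=False)
```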
AI/ML Overview

The provided document is a 510(k) summary for the Quicktome device. It outlines the regulatory clearance process and describes the device's intended use and performance validation. However, it does not contain a specific acceptance criteria table or detailed results of a study proving the device meets those criteria.

The document broadly mentions that performance and comparison validations were performed. It states:

  • "Performance and comparison validations were performed to show equivalence of generated tractography and atlas method."
  • "Evaluations included protocols to demonstrate performance and equivalence of tractography bundle and anatomical region generation (including acceptable co-registration of bundles and regions with underlying anatomical scans), and evaluation of the algorithm's performance in slice motion filtration and skull stripping."

Without specific acceptance criteria and detailed study results in the provided text, the requested table cannot be completed and points 1-9 can only be addressed to the extent the summary allows. The sections below follow the typical structure for such a study and indicate where information is not provided in the given text.

1. A table of acceptance criteria and the reported device performance

| Acceptance Criteria Category | Specific Acceptance Criterion (Not Provided in document) | Reported Device Performance (Not Provided in document) |
| --- | --- | --- |
| Tractography Bundle Generation | e.g., Accuracy of tract reconstruction | e.g., Quantitative metrics like Dice similarity, mean distance |
| Anatomical Region Generation | e.g., Accuracy of segmentation | e.g., Quantitative metrics like Dice similarity, boundary distance |
| Co-registration with Anatomical Scans | e.g., Alignment accuracy | e.g., Quantitative metrics like registration error |
| Slice Motion Filtration Performance | e.g., Effectiveness in reducing motion artifacts | No specific metrics provided |
| Skull Stripping Performance | e.g., Accuracy of skull removal | No specific metrics provided |
| Equivalence to Predicate Device | Specific metrics for equivalence | General statement of equivalence |
| Usability | e.g., User satisfaction, task completion rate | Summative usability evaluation performed |
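
Several of the example metrics in the table above (e.g., Dice similarity) are standard overlap measures. As background only, a minimal Dice computation for binary masks looks like this:

```python
# Background only: the standard Dice similarity coefficient for binary
# masks, one of the example overlap metrics named in the table above.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|); returns 1.0 for two empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```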

2. Sample size used for the test set and the data provenance

  • Test Set Sample Size: Not provided. The document states, "Performance and comparison evaluations were performed by representative users on a dataset not used for development composed of normal and abnormal brains." The specific number of cases or subjects in this dataset is not mentioned.
  • Data Provenance: The document does not explicitly state the country of origin. It indicates the dataset included "normal and abnormal brains" and was "not used for development." It does not specify if the data was retrospective or prospective.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of Experts: Not provided. The document states, "Performance and comparison evaluations were performed by representative users." It does not specify how many, if any, specific experts established ground truth, or if ground truth was established by automated means (e.g., via algorithm from the predicate).
  • Qualifications of Experts: Not provided. The document refers to "representative users" but does not detail their professional qualifications (e.g., radiologist, surgeon, years of experience, subspecialty).

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • Adjudication Method: Not provided. The document does not describe any specific adjudication process for establishing ground truth or resolving discrepancies in the test set evaluations.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • MRMC Study Conducted: The document mentions "Adjudication of Results for studies conducted per representative users" but does not explicitly state that it was a multi-reader, multi-case (MRMC) comparative effectiveness study designed to show human reader improvement with AI assistance.
  • Effect Size of Human Reader Improvement: Not provided. The document focuses on the device's standalone performance and comparison/equivalence to a predicate device, rather than the performance of human readers assisted by Quicktome versus unassisted.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Standalone Performance: Yes, implicitly. The performance and comparison validations, including evaluation of "tractography bundle and anatomical region generation," "co-registration," "slice motion filtration," and "skull stripping," indicate an assessment of the algorithm's output independently, even if "representative users" were involved in judging that output. The device itself is software for processing images.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • Type of Ground Truth: Not explicitly detailed. The document states "performance and equivalence of tractography bundle and anatomical region generation" were evaluated, which implies a reference or ground truth was used for comparison. Given the context, the ground truth for tractography and anatomical regions was likely derived from one of the following:
    • Expert Consensus/Manual Delineation: experts manually segmenting or defining tracts/regions.
    • Validated Reference Software/Algorithm: output from an established, highly accurate (perhaps manually curated) system, or from the predicate device, used as the reference for comparison.

The document implies equivalence to the predicate device was a key benchmark, suggesting its outputs served as the comparison reference.

8. The sample size for the training set

  • Training Set Sample Size: Not provided. The document mentions the test set was "not used for development," implying a separate training/development set existed, but its size is not specified.

9. How the ground truth for the training set was established

  • Training Set Ground Truth Establishment: Not provided. The document does not detail how ground truth was established for any data used during the development or training phase of the algorithm.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).