K Number
K191530
Device Name
StoneChecker
Date Cleared
2019-09-26

(108 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

StoneChecker is a standalone post-processing software application which assists trained professionals in evaluating DICOM computed tomography image studies of patients diagnosed with kidney stones. The software provides tools to enable the user to navigate images, select regions of interest, and generate information from those regions.

The generated information consists of regional statistical measurements of image texture and heterogeneity, including means, standard deviation, skewness, and kurtosis. The information also includes regional physical measurements of stone size, volume, and position.

StoneChecker does not make clinical decisions and the information provided by StoneChecker must not be used in isolation when making patient management decisions.

Device Description

StoneChecker (SC) is a standalone software application intended to load DICOM-formatted CT studies, let the trained user identify stone regions of interest, and provide computed information, consisting of physical measurements and statistical measurements of stone heterogeneity, from a single source, making it easier for the user to determine the best treatment option. SC is an optional tool used during treatment planning for a patient diagnosed with kidney stones.

StoneChecker provides the user tools to select and evaluate various physical characteristics of a kidney stone displayed on a non-contrast-enhanced Kidneys, Ureters, and Bladder (KUB) CT scan slice. The measured and calculated values are displayed on the PC screen, and the user has the option to generate a report. The calculated output includes stone volume, mean Hounsfield Unit (HU) density, skin-to-stone distance, and texture values (mean, mean of positive pixels, standard deviation, skewness, kurtosis, and entropy). These data can be used by the physician as an aid to decision making and are intended to be an adjunct to other clinical data such as medical history, physical examination, and urine analysis. Thus, additional analysis of all kidney stones is required. StoneChecker software is designed exclusively for use in assessing kidney stones.
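The first-order texture values listed above are standard regional statistics over the HU values inside an ROI. As a minimal sketch (the function name and histogram-based entropy definition are assumptions for illustration; the filing does not disclose StoneChecker's actual formulas):

```python
import numpy as np

def roi_texture_stats(hu, bins=64):
    """First-order statistics over the HU values inside a stone ROI.

    Illustrative only: assumes population standard deviation, Pearson
    (non-excess) kurtosis, and a 64-bin histogram entropy in bits.
    """
    hu = np.asarray(hu, dtype=float).ravel()
    mu = hu.mean()
    sigma = hu.std()                          # population standard deviation
    centered = hu - mu
    positive = hu[hu > 0]                     # for "mean of positive pixels"
    counts, _ = np.histogram(hu, bins=bins)   # histogram for entropy
    p = counts[counts > 0] / counts.sum()
    return {
        "mean": mu,
        "mean_positive_pixels": positive.mean() if positive.size else 0.0,
        "std": sigma,
        "skewness": (centered ** 3).mean() / sigma ** 3,
        "kurtosis": (centered ** 4).mean() / sigma ** 4,
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

A symmetric ROI, for instance, yields zero skewness, while a uniform ROI collapses the entropy toward zero.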

StoneChecker is designed to provide easy-to-acquire useful data for helping clinicians make the best decisions for their patients.

StoneChecker includes the following features:

  • Processes standard DICOM image sets,
  • Novel, proven statistical algorithms,
  • Time-saving kidney stone region-of-interest (ROI) and measurement tools,
  • Rapid calculation results, and
  • Saves results in standard Excel spreadsheets.
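Two of the physical outputs above follow directly from standard DICOM header attributes: stored CT pixel values convert to HU via RescaleSlope (0028,1053) and RescaleIntercept (0028,1052), and a segmented stone's volume is its voxel count times the voxel volume from PixelSpacing (0028,0030) and SliceThickness (0018,0050). A sketch under those assumptions (function names are hypothetical, not StoneChecker's API):

```python
import numpy as np

def stored_to_hu(stored, slope, intercept):
    """Convert CT stored pixel values to Hounsfield Units using the
    DICOM RescaleSlope / RescaleIntercept attributes."""
    return np.asarray(stored, dtype=float) * slope + intercept

def stone_volume_mm3(mask, pixel_spacing, slice_thickness):
    """Stone volume in mm^3 from a boolean voxel mask: voxel count times
    the voxel volume (row spacing x column spacing x slice thickness)."""
    voxel_mm3 = pixel_spacing[0] * pixel_spacing[1] * slice_thickness
    return float(np.count_nonzero(mask)) * voxel_mm3
```

For example, with slope 1 and intercept -1024 a stored value of 1024 maps to 0 HU (water).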
AI/ML Overview

The provided text describes the StoneChecker device, its indications for use, and a comparison to predicate devices, but it does not contain detailed information about specific acceptance criteria and a study proving the device meets those criteria with quantitative performance metrics.

The document states:

  • "All product specifications were verified and validated. Testing was performed according to internal company procedures. Software testing and validation were conducted according to written test protocols established before testing was conducted. Test results were reviewed by designated technical professionals before software proceeded to release. Test results support the conclusion that actual device performance satisfies the design intent."
  • "Bench testing (functional and integration) was conducted for StoneChecker during product development. Test results demonstrate StoneChecker output is computed accurately based on input."

However, it lacks the specific numerical acceptance criteria for measurements like stone volume, HU density, or texture values, and thus does not present a table of acceptance criteria and reported device performance as requested. It also doesn't detail a formal comparative study with AI vs. without AI assistance.

Therefore, I cannot populate the requested information in the desired format using only the provided text. The following points represent the information that can be extracted or inferred from the provided text, and explicitly state what is missing:


1. Table of Acceptance Criteria and Reported Device Performance:

Not available in the provided text. The document states that "Test results demonstrate StoneChecker output is computed accurately based on input" but does not provide specific acceptance criteria (e.g., minimum accuracy/error percentage for volume, HU density, etc.) nor the numerical performance results against such criteria.

2. Sample size used for the test set and the data provenance:

  • Test Set Sample Size: Not explicitly stated. The document mentions "usage validation at two clinical sites in Oxford, UK and Beijing, China" but does not provide the number of cases or patients used in this validation.
  • Data Provenance: The usage validation was conducted at "two clinical sites in Oxford, UK and Beijing, China," so the data would come from those locations. The nature of "usage validation" implies clinical data, but the document does not specify whether it was prospective or retrospective.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Number of Experts: Not explicitly stated. The text refers to "physicians use StoneChecker to analyze KUB CT scans" during the usage validation, but not how many physicians were involved in establishing ground truth.
  • Qualifications of Experts: The text refers to "trained professionals," "physicians," and "trained physicians, Radiologists" as intended users, but does not specify the qualifications (e.g., years of experience) of those involved in establishing ground truth for the validation.

4. Adjudication method for the test set:

Not available in the provided text. The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) for establishing ground truth during the usage validation.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

Not available in the provided text. The document states StoneChecker "assists trained professionals," but it does not report a formal MRMC comparative effectiveness study measuring the improvement of human readers with AI assistance versus without.

6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

Yes, implicitly. The bench testing described ("Test results demonstrate StoneChecker output is computed accurately based on input") would likely constitute standalone performance testing for the algorithms' accuracy in calculating measurements. However, no specific metrics from this testing are provided beyond a general statement of accuracy.

7. The type of ground truth used:

  • For the "bench testing (functional and integration)", the ground truth would likely be computational accuracy based on known inputs and expected outputs (i.e., verifying the algorithms correctly compute derived values from an ROI, such as volume or HU density).
  • For the "usage validation," the ground truth would be based on clinician assessment/consensus as they used the tool to "analyze KUB CT scans," but the specific method for establishing this ground truth is not detailed.
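To illustrate what "known inputs and expected outputs" means for this style of bench testing (the test below is hypothetical; the filing does not disclose its actual protocols), a synthetic ROI whose statistic is known analytically serves as the ground truth:

```python
import numpy as np

def bench_test_std():
    """Hypothetical functional bench test: the expected output for a known
    input is derived analytically and compared against the computed value."""
    roi = np.array([1.0, 3.0, 5.0, 7.0])   # known input
    # Analytic ground truth: mean 4; squared deviations 9, 1, 1, 9 -> variance 5
    expected_std = np.sqrt(5.0)
    assert abs(roi.std() - expected_std) < 1e-12
    return True
```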

8. The sample size for the training set:

Not available in the provided text. The document does not mention the training set size, as it focuses on validation and regulatory aspects. This suggests it might not be a deep learning model requiring a distinct training set in the conventional sense, or the information is simply omitted.

9. How the ground truth for the training set was established:

Not available in the provided text. As the training set size is not mentioned, neither is the method for establishing its ground truth.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).