The Glucommander System is a glycemic management tool intended to evaluate current as well as cumulative patient blood glucose values, coupled with patient information including age, weight, and height, and, based on the aggregate of these measurement parameters, whether one or many, recommend an IV dosage of insulin, glucose, or saline, or a subcutaneous basal and bolus insulin dose, to adjust and maintain the blood glucose level toward a configurable, physician-determined target range.
The Glucommander System is indicated for use in adult and pediatric (ages 2 - 17 years) patients.
The G+ System logic is not a substitute for, but rather an adjunct to, clinical reasoning. The measurements and calculations generated are intended to be used by qualified and trained medical personnel in evaluating patient conditions in conjunction with clinical history, symptoms, and other diagnostic measurements, as well as the medical professional's clinical judgment. No medical decision should be based solely on the recommended guidance provided by this software program.
The indications for use are identical to the predicate device.
The Glucommander System is a software algorithm device intended to evaluate the current as well as cumulative patient blood glucose values and, based on the aggregate of those measurements, whether one or many, recommend a dosage of insulin, glucose, or saline in order to direct the blood glucose level towards a predetermined target range. Once that target blood glucose range has been reached, the system's function is to recommend a titration of insulin, glucose, and saline for the purpose of maintaining the patient's blood glucose level in that target range. The system is programmed to provide intravenous dosing of glucose, saline, and insulin, as well as subcutaneous dosing of insulin for both pediatric (ages 2-17 years) and adult patients.
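To make the titration behavior described above more concrete, the sketch below shows a generic proportional IV insulin titration rule of the kind this class of device implements. It is a minimal illustration only: the multiplier, the 60 mg/dL offset, the target range, and all function names are assumptions chosen for this example and do not represent Glytec's proprietary Glucommander algorithm.

```python
# Illustrative sketch only: a generic proportional IV insulin titration rule.
# All constants and names are hypothetical; this is NOT the Glucommander algorithm.
from dataclasses import dataclass

@dataclass
class TitrationState:
    multiplier: float = 0.02   # hypothetical sensitivity factor (units/hr per mg/dL)
    target_low: float = 100.0  # configurable, physician-determined target range (mg/dL)
    target_high: float = 140.0

def recommend_iv_insulin_rate(bg_mg_dl: float, state: TitrationState) -> float:
    """Recommend an IV insulin rate (units/hr): proportional to how far BG sits
    above a reference offset; never negative."""
    offset = 60.0  # hypothetical reference below which no insulin is recommended
    return round(max(0.0, state.multiplier * (bg_mg_dl - offset)), 1)

def update_multiplier(bg_mg_dl: float, state: TitrationState) -> None:
    """Adjust the sensitivity factor so successive readings converge on target."""
    if bg_mg_dl > state.target_high:
        state.multiplier *= 1.25   # above range: titrate up (hypothetical step)
    elif bg_mg_dl < state.target_low:
        state.multiplier *= 0.75   # below range: titrate down (hypothetical step)
    # within range: hold the multiplier to maintain BG in the target range

# Example: a sequence of hourly BG readings trending toward target
state = TitrationState()
for bg in (240, 210, 175, 132):
    print(f"{bg} mg/dL -> {recommend_iv_insulin_rate(bg, state)} units/hr")
    update_multiplier(bg, state)
```

A real system of this kind would also weigh BG trend, prior doses, dextrose or saline co-administration, and transitions to subcutaneous dosing, and every recommendation would be reviewed by a clinician before administration.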
This document is a 510(k) summary for the Glytec Glucommander device. It details the device's description, indications for use, and a comparison to a predicate device to establish substantial equivalence.
Based on the provided text, the device "Glytec Glucommander" functions as a software algorithm that recommends insulin, glucose, or saline dosages based on patient blood glucose values and other patient information. The 510(k) submission focuses primarily on demonstrating substantial equivalence to a predicate device. It does not involve the type of study that would typically establish quantitative acceptance criteria for performance metrics (e.g., sensitivity, specificity, or dose accuracy against a ground truth) and then empirically demonstrate that the device meets those criteria through a structured test-set evaluation, as is common for AI/ML-based diagnostic devices.
Instead, the submission for Glytec Glucommander focuses on demonstrating that new functionalities (Continuous Tube Feeding Module, Ambulatory Care Module, Subcutaneous Module) "do not raise different questions of safety or effectiveness" compared to the predicate device. The performance data section explicitly states: "The 510k submission included software, cybersecurity, and human factors documentation to demonstrate the performance of the device is substantially equivalent to the predicate."
Therefore, I cannot provide a table of acceptance criteria and reported device performance in the way typically seen for standalone performance claims (e.g., accuracy against a medical ground truth). The acceptance criteria, in this context, relate more to regulatory and engineering standards (e.g., software validation, cybersecurity, human factors) and the demonstration of substantial equivalence rather than a medical performance metric.
Given the information provided, here's an analysis of what can be inferred or explicitly stated regarding the acceptance criteria and the "study" that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
As explained above, the document does not present quantitative performance acceptance criteria (e.g., a specific percentage of cases where the device must be accurate in its recommendations) compared to a medical ground truth. Instead, the "acceptance criteria" are implied by the regulatory standard of "substantial equivalence" and the types of documentation provided.
| Acceptance Criterion (Implied) | Reported Device Performance/Evidence |
| --- | --- |
| New functionality does not raise different safety/effectiveness questions. (Regulatory/Clinical) | "These features do not raise different questions of safety or effectiveness." Supported by non-clinical performance data comparing to the predicate. |
| Device is substantially equivalent to predicate device (K113853). (Regulatory) | "Information presented supports substantial equivalence of the Glucommander System to the predicate device." "The proposed enhancements have the same indications for use, are similar in design, have the same fundamental scientific technology, and are tested the same way as the predicate device." |
| Software functionality and integrity. (Engineering/Software) | "The 510k submission included software... documentation to demonstrate the performance of the device is substantially equivalent to the predicate." Implies compliance with software development lifecycle standards and verification/validation. |
| Cybersecurity. (Engineering/Security) | "...cybersecurity... documentation to demonstrate the performance of the device is substantially equivalent to the predicate." Implies measures to protect data and system integrity. |
| Human factors. (Usability/Safety) | "...human factors documentation to demonstrate the performance of the device is substantially equivalent to the predicate." Implies usability testing and design to minimize user error. |
| Compliance with general controls. (Regulatory) | Stated in the FDA letter: "You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting... good manufacturing practice... etc." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document explicitly states that "Non-clinical performance data was used to demonstrate that the device is substantially equivalent to the predicate device." This suggests that the "testing" involved software, cybersecurity, and human factors validation rather than a clinical study with a patient data test set. Therefore, there is no stated sample size for a patient-based test set, nor information on data provenance (country, retrospective/prospective). This submission relies on engineering and software validation and comparison to a predicate, not clinical performance data from a patient cohort.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Given that "Non-clinical performance data" was used and no clinical test set for algorithmic performance against a medical ground truth is described, there is no information provided regarding experts establishing ground truth for a clinical test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as no clinical test set with human adjudication is described.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without
No such study is mentioned or implied. The device is a "drug dose calculator" that recommends dosages, and is an "adjunct to clinical reasoning," not typically a diagnostic AI. The submission focuses on substantial equivalence through non-clinical data.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The "performance data" provided are non-clinical (software, cybersecurity, human factors). The device provides "recommendations," implying human-in-the-loop operation, but no standalone performance data for the algorithm's medical accuracy (e.g., how often its recommendations are correct/optimal compared to a gold standard) is detailed for a specific test set. The claim relies on the new modules being "similar in design" and "tested the same way as the predicate device," which itself would have had to meet performance criteria.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable, as no clinical test set requiring a medical ground truth (e.g., for diagnostic accuracy or treatment efficacy) is described in this 510(k) summary. The ground truth, implicitly, would be the established performance and safety of the predicate device, against which the new functionalities are compared through non-clinical means.
8. The sample size for the training set
The document does not describe the development or training of an AI/ML model in a manner that would typically involve a "training set" of patient data. The Glucommander is described as a "software algorithm device," which could imply rule-based logic or a different type of computational model not necessarily relying on a large data training set in the way modern deep learning AI does. Therefore, there is no information on a training set sample size.
9. How the ground truth for the training set was established
Not applicable, as no training set is described. If the algorithm is rule-based, its "ground truth" would be established by clinical guidelines and expert medical knowledge encoded into the software rather than data-driven training.
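As an illustration of what such guideline-encoded, rule-based logic can look like (and how it differs from data-driven training), the sketch below applies two widely taught subcutaneous dosing heuristics: the "500 rule" for carbohydrate coverage and the "1800 rule" for correction dosing with rapid-acting insulin. It is a hypothetical example only; the constants and function names are assumptions, and this is not the Glucommander subcutaneous module.

```python
# Illustrative sketch of clinical rules encoded as software, using the widely
# taught "500 rule" and "1800 rule" heuristics. NOT the Glucommander algorithm.

def carb_ratio(total_daily_dose: float) -> float:
    """500 rule: grams of carbohydrate covered by 1 unit of rapid-acting insulin."""
    return 500.0 / total_daily_dose

def correction_factor(total_daily_dose: float) -> float:
    """1800 rule: expected BG drop (mg/dL) per 1 unit of rapid-acting insulin."""
    return 1800.0 / total_daily_dose

def meal_bolus(carbs_g: float, bg_mg_dl: float, target_mg_dl: float,
               total_daily_dose: float) -> float:
    """Carbohydrate coverage plus a correction toward target; never negative."""
    carb_dose = carbs_g / carb_ratio(total_daily_dose)
    correction = max(0.0, (bg_mg_dl - target_mg_dl) / correction_factor(total_daily_dose))
    return round(carb_dose + correction, 1)

# Example: 60 g meal, BG 220 mg/dL, target 120 mg/dL, total daily dose 50 units
print(meal_bolus(60, 220, 120, 50))  # ~6.0 carb units + ~2.8 correction units = 8.8
```

The point of the sketch is the verification model it implies: for logic like this, "ground truth" is the published heuristic itself, so validation consists of confirming the software reproduces the encoded rules, not of training or testing against a labeled patient dataset.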
§ 868.1890 Predictive pulmonary-function value calculator.
(a) Identification. A predictive pulmonary-function value calculator is a device used to calculate normal pulmonary-function values based on empirical equations.
(b) Classification. Class II (performance standards).