Graffiti is a software solution that includes integrated team collaboration tools and a smart assistant for the retrieval and display of patient record information and data whenever needed by a Healthcare Provider (HCP). The smart assistant also includes analytical functions to enable user-defined notifications.
When the Parameter-Based Notification feature is enabled, the smart assistant is also intended to keep track of changes in patient information, data and status.
Graffiti may be used by members of the patient care team while physically on duty at the hospital or while on call, which may be remote.
Graffiti is not intended to replace the Electronic Medical Record system or any patient monitoring or central station devices.
Graffiti is a software-only solution that interfaces with a healthcare facility's information system to retrieve, manage and display patient information on a handheld mobile device. It integrates care team collaboration tools and a smart virtual assistant to retrieve patient data and information from the hospital information systems through a conversational voice or text interface.
The smart assistant also includes analytical functions to enable user-defined custom notifications, including time-based, event-based and parameter-based notifications. When the condition for a notification is met, Graffiti produces a notification on the user's smartphone, thereby helping the user improve their situational awareness.
Graffiti is intended to operate on customer-supplied smartphones.
The provided text does not contain information about specific acceptance criteria for the device "Graffiti" or a detailed study proving its performance against such criteria.
The document is a 510(k) premarket notification summary for the device "Graffiti", indicating that it has been deemed substantially equivalent to a predicate device (Philips IntelliVue Guardian Software).
Here's what I can extract from the provided text regarding the testing and evaluation:
1. A table of acceptance criteria and the reported device performance
This information is not explicitly provided in the document. The document states "Design verification and validation testing was performed to confirm that software and user requirements have been met." and "bench testing was conducted to confirm that the conversational interface can accurately translate user conversational requests into executable data requests or notification requests." However, it does not list specific acceptance criteria or quantitative performance metrics against them.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided. The document mentions "bench testing" and "usability testing" but does not detail the sample sizes or data provenance for these tests.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
A MRMC comparative effectiveness study is not mentioned as part of the submission. The document focuses on showing substantial equivalence based on the device's functionality and safety, not on comparative effectiveness with human readers.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document states "bench testing was conducted to confirm that the conversational interface can accurately translate user conversational requests into executable data requests or notification requests." This implies some level of standalone testing on the algorithm's ability to interpret queries. However, no specific performance metrics or detailed methodology for this "standalone" test are provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used for the "bench testing" or "usability testing." For the conversational interface, the "ground truth" would likely be the correct interpretation and execution of the user's request.
8. The sample size for the training set
This information is not provided. The document describes the device as a "software-only solution" with a "digital personal assistant 'Bot'" and includes "analytical functions to enable user-defined notifications." While such systems would typically involve a training phase, the details of the training set are not included in this 510(k) summary.
9. How the ground truth for the training set was established
This information is not provided.
In summary, the provided 510(k) submission primarily focuses on establishing substantial equivalence to a predicate device based on similar intended use and technology. It mentions various quality assurance measures and testing, but it does not delve into the detailed, quantitative performance studies against specific acceptance criteria that you are requesting.
§ 870.1425 Programmable diagnostic computer.
(a)
Identification. A programmable diagnostic computer is a device that can be programmed to compute various physiologic or blood flow parameters based on the output from one or more electrodes, transducers, or measuring devices; this device includes any associated commercially supplied programs.
(b)
Classification. Class II (performance standards).