K Number
K240037
Device Name
Revi™ System
Date Cleared
2024-05-02

(118 days)

Product Code
Regulation Number
876.5305
Panel
GU
Reference & Predicate Devices
N/A
Intended Use

The Revi System is indicated for the treatment of patients with symptoms of urgency incontinence alone or in combination with urinary urgency.

Device Description

The Revi™ System is an implanted tibial electrical urinary continence device that wirelessly receives power from a non-implanted external wearable unit to deliver electrical stimulation of the tibial nerve in proximity to the ankle. The device is intended for the treatment of urge urinary incontinence, alone or in combination with urinary urgency. The implantable component is placed in the vicinity of the tibial neurovascular bundle. The treatment effect is achieved by the implantable wireless neurostimulation component, which sends pulses to the tibial nerve when energized by power transmitted from the wearable unit. The electrical pulses stimulate the nerve along the leg, reaching the sacral plexus and entering the spinal cord. This stimulation can modulate nerve function, relieving symptoms.

AI/ML Overview

The provided text is a 510(k) premarket notification letter from the FDA regarding the Revi™ System, a medical device for treating urgency incontinence. It outlines the FDA's determination of substantial equivalence to a predicate device and includes a 510(k) summary with device specifications and a discussion of performance testing.

However, the document does not contain the detailed information required to answer your specific questions concerning acceptance criteria, device performance, sample size, ground truth establishment, expert qualifications, or MRMC studies. The "Discussion of the Performance Testing" section is very brief and only mentions non-clinical benchtop testing for verification, not clinical studies with human participants that would generate performance metrics against specific acceptance criteria.

Therefore, I cannot populate the requested table or answer most of your questions based solely on the provided text. The document refers to "verification testing to ensure identical specification to predicate," which implies that the acceptance criteria for this 510(k) may have been tied to demonstrating that the new device (v4.3 software) performs comparably to the predicate device (v4.1.2.5 software) in bench tests.

Here's what I can extract, recognizing the limitations:

1. A table of acceptance criteria and the reported device performance:

| Acceptance Criteria Category | Specific Acceptance Criteria (not explicitly stated; inferred from verification testing) | Reported Device Performance (Summary) |
|---|---|---|
| Stimulation Frequency Control | Compliance with range and accuracy specifications. | Verified to comply with specifications. |
| Charging Mode Protection | Protection operates in compliance with specifications. | Verified to comply with specifications. |
| System Performance | Compliance with technical specifications. | Verified to comply with technical specifications. |
| Software Verification | Requirements met, including cybersecurity. | Verified, including cybersecurity testing. |
| Human Factors/Usability | No new critical tasks introduced by v4.3 software compared to predicate. | No new critical tasks introduced. |
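
The verification approach summarized above amounts to checking measured device outputs against predefined specification limits. A minimal sketch of that kind of bench check, assuming purely illustrative parameter names and limits (none of these values appear in the 510(k) summary):

```python
# Hypothetical bench-verification check. All parameter names, spec limits,
# and measurements below are illustrative assumptions, not values from the
# Revi System 510(k) documentation.

SPEC_LIMITS = {
    "stimulation_frequency_hz": (14.0, 40.0),  # assumed spec range
    "pulse_width_us": (100.0, 1000.0),         # assumed spec range
}

def verify_measurements(measurements: dict) -> dict:
    """Return pass/fail per parameter: True if every measured value
    falls within its specification limits."""
    results = {}
    for param, values in measurements.items():
        lo, hi = SPEC_LIMITS[param]
        results[param] = all(lo <= v <= hi for v in values)
    return results

# Example bench data (hypothetical readings from repeated measurements)
bench_data = {
    "stimulation_frequency_hz": [19.8, 20.1, 20.0],
    "pulse_width_us": [210.0, 205.5, 198.7],
}
print(verify_measurements(bench_data))
```

Here the "ground truth" is the specification table itself, which mirrors how the summary describes verification against predicate specifications rather than against clinical outcomes.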

2. Sample size used for the test set and the data provenance:

  • The document only mentions "non-clinical benchtop testing." It does not specify a human "test set" and therefore no sample size, country of origin, or whether it was retrospective or prospective.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Not applicable as no human test set or ground truth establishment by experts for performance evaluation is described in the provided text. The testing mentioned is for device specifications.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

  • Not applicable as no human test set or adjudication process is described.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:

  • This device is an implanted neurostimulation system, not an AI-assisted diagnostic tool. Therefore, an MRMC study and effects on human readers are not relevant and not mentioned.

6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

  • The performance described is essentially standalone in the sense that it is benchtop testing of the device's technical specifications. No "human-in-the-loop performance" is described in the context of establishing the device's efficacy; only a human factors/usability analysis was performed to ensure no new critical tasks were introduced.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

  • The "ground truth" for the verification testing appears to be the defined technical specifications of the device and its predicate, rather than clinical outcomes or expert consensus on a disease state.

8. The sample size for the training set:

  • Not applicable. This document describes verification testing, not the development or training of an AI model.

9. How the ground truth for the training set was established:

  • Not applicable.

In summary, the provided document focuses on demonstrating substantial equivalence through benchtop verification testing for a software update to an implanted medical device, not on clinical performance studies involving a test set, ground truth, or an AI algorithm.
