Search Results

Found 10 results

510(k) Data Aggregation

    K Number
    K170423
    Manufacturer
    Date Cleared
    2017-10-25

    (254 days)

    Product Code
    Regulation Number
    880.6300
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    medical grade epoxy and sewn into surgical sponge

    • Classification Name: Surgical sponge scale (21 CFR 880.2740) | Patient Identification and Health Information (21 CFR 880.6300) | Surgical sponge scale (21 CFR 880.2740)

    Intended Use

    DentureID Microchip is intended to enable access to secure patient identification and device information when used with complete dentures, partial dentures and other removable oral appliances.

    Device Description

    DentureID Microchip enables access to secure patient identification and device information when used with removable oral appliances. The microchip is permanently embedded into a denture and is a digital link to owner contact information and information about the denture in the event that a repair or replacement is needed. All of the information is controlled on the secure DentureID.com website by the denture owner and dental professional. The DentureID Microchip can be read by any NFC-compatible Android smartphone: after downloading the DentureID.com App from Google Play, the user holds the phone against the microchip. The information on the DentureID.com website may be modified at any time by the patient or dental professional by entering a username and password. The information that is on the website appears on the smartphone when the DentureID Microchip is read. DentureID Microchips are classified as RFID ISO 14443, which uses NFC (near-field communication). DentureID Microchips are designed to be read by ISO 14443 NFC-compliant smartphones with the DentureID App installed. DentureID Microchips are encased in a medical-grade epoxy resin. The size is 1.5 mm x 6 mm in diameter. They are inserted into the buccal flange of a denture and completely covered with self-cure repair resin. The DentureID Microchip does not directly contact the patient.
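
    The read-and-lookup flow described above (a passive ISO 14443 tag returns only an identifier, and the phone app retrieves the owner-maintained record keyed by that identifier) can be illustrated with a minimal Python sketch. The UID format, record fields, and the in-memory DENTURE_RECORDS store are hypothetical stand-ins for the secure DentureID.com database, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class DentureRecord:
    """Owner-maintained information; field names are illustrative only."""
    owner_name: str
    owner_phone: str
    appliance_type: str       # e.g., complete denture, partial denture
    fabrication_notes: str    # details useful for repair or replacement

# Hypothetical stand-in for the secure, owner/dentist-maintained database
# hosted on DentureID.com; keys are tag identifiers.
DENTURE_RECORDS = {
    "04:A2:3B:7C:11:90:80": DentureRecord(
        owner_name="J. Doe",
        owner_phone="555-0100",
        appliance_type="complete maxillary denture",
        fabrication_notes="shade A2; relined 2016-08",
    ),
}

def read_tag_uid() -> str:
    """Stand-in for the ISO 14443 / NFC read performed by the phone app.

    A passive tag returns only its identifier; all descriptive data lives
    in the remote database keyed by that identifier.
    """
    return "04:A2:3B:7C:11:90:80"

def lookup(uid: str) -> DentureRecord | None:
    """Retrieve the owner-maintained record for a scanned tag, if one exists."""
    return DENTURE_RECORDS.get(uid)

record = lookup(read_tag_uid())
print(record if record else "No record registered for this microchip.")
```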

    AI/ML Overview

    This document describes the non-clinical performance testing of the DentureID Microchip.

    1. Table of Acceptance Criteria and Reported Device Performance

    Test Description | Standard | Acceptance Criteria (Implied by Result) | Reported Device Performance
    Biocompatibility
    Cytotoxicity | ISO 10993-5:2009 | Non-cytotoxic (score '0') | The test article scored '0' at 24, 48, and 72 ± 4 hours and is considered non-cytotoxic under the conditions of this test.
    Irritation | ISO 10993-10:2010 | Mean test and control scores of extract dermal observations less than 1.0 | The differences in the mean test and control scores of the extract dermal observations were less than 1.0, indicating that the requirements of the ISO Intracutaneous Reactivity Test have been met by the test article.
    Sensitization | ISO 10993-10:2010 | No sensitization response greater than '0' for test article extracts | None of the negative control animals challenged with the control vehicles were observed with a sensitization response greater than '0'. None of the animals challenged with the test article extracts were observed with a sensitization response greater than '0'. The normal saline extract of the test material had a sensitization response of '0' under valid test conditions. The sesame oil extract of the test material had a sensitization response of '0' under valid test conditions. Under the conditions of this protocol, the test article did not elicit a sensitization response.
    Chemical Characterization of Materials (Leachate) | Proprietary standards of Nelson Laboratories and ChemTech Ford Laboratories | Potential toxicity from leaching of harmful chemicals is not significant | The incremental substances found in the study are presented and compared to toxicity standards for the respective materials. As a result of the study, the potential toxicity from leaching of harmful chemicals from the DentureID RFID tag was judged not significant.
    Information Security Procedures | Not specified | Passed | Passed
    Software Validation | Not specified | Passed | Passed
    Migration Testing of Implanted Transponder | Not specified | Passed | Passed
    Performance Testing of Implanted Transponder | Not specified | Passed | Passed
    Electromagnetic Compatibility (EMC)
    Radiated Emissions | EN 55011:2009 | Pass | Pass
    Electrostatic Discharge Immunity | EN 61000-4-2:2008 | Pass | Pass
    Radiated Electromagnetic Field Immunity | EN 61000-4-3:2010 | Pass | Pass
    Magnetic Field Immunity | EN 61000-4-8:2009 | Pass | Pass
    Data Integrity (during EMC) | Not specified | No loss or corruption of data, latency, or throughput | No loss or corruption of the data, latency or throughput, which was coordinated with the electromagnetic compatibility (EMC) performance of the microchip, scanner and wireless data link.
    Magnetic Resonance Imaging (MRI) Compatibility
    Magnetic Field Interactions at 3-Tesla | ASTM F2052-15 | No additional risk or hazard to a patient in the 3-Tesla MRI environment or less with regard to torque (qualitative measurement of 0) | The qualitatively measured torque at 3-Tesla for the DentureID was 0 (no torque). As such, this device will not present an additional risk or hazard to a patient in the 3-Tesla MRI environment or less with regard to torque.
    MRI-Related Heating, 1.5-Tesla and 3-Tesla | ASTM F2182-11a | Maximum temperature rise matches background (e.g., 1.5°C at 1.5 T, 1.9°C at 3 T) | The 1.5-Tesla system demonstrated a maximum 1.5°C temperature rise and the 3-Tesla system demonstrated a 1.9°C rise. Both of these temperature rises matched the maximum background temperature rise. In conclusion, MRI at 1.5 T and 3 T does not induce significant heating of the DentureID Microchip.
    Artifacts at 3-Tesla | ASTM F2119-07 (Reapproved 2013) | Localized signal voids corresponding to device size and shape; maximum artifact size approximately 10 mm relative to device size | The artifacts that appeared on the MR images were shown as localized signal voids (i.e., signal loss) that corresponded to the size and shape of this device. The gradient echo pulse sequence produced larger artifacts than the T1-weighted, spin echo pulse sequence for the device. The maximum artifact size (i.e., as seen on the gradient echo pulse sequence) extends approximately 10 mm relative to the size and shape of this device.
    Effects of MRI at 1.5-Tesla and 3-Tesla on Function | Not specified (internal investigation protocol) | 100% pre- and post-exposure performance | DentureID Microchips performed 100% pre- and post-exposure.
    Simulated Wear (Denture Cleaning) | Not specified (internal protocol: 5-year simulated wear with cleaning medium and brush) | No impact on device performance; clear self-cure dental acrylic not affected | The clear self-cure dental acrylic placed over the DentureID Microchip was not affected by the scrubbing action. Therefore, cleaning dentures will not impact the performance of the DentureID Microchip.

    Study Information:

    The provided document describes non-clinical performance testing to demonstrate the substantial equivalence of the DentureID Microchip to its predicate devices. It does not detail a clinical study involving human readers or a training set in the context of typical AI device evaluation.

    Here's a breakdown of the available information:

    2. Sample Sizes and Data Provenance for Test Set:

    • Cytotoxicity, Irritation, Sensitization: These tests involve biological samples (e.g., cell cultures, animals). The document doesn't specify the exact number of samples/animals used but refers to the standards (ISO 10993-5, ISO 10993-10) which define such sample sizes. The data provenance is implied to be laboratory testing in facilities adhering to these standards, likely in the US given the FDA submission. These are retrospective tests conducted on device materials.
    • Chemical Characterization: Not specified, but involved proprietary standards of Nelson Laboratories and ChemTech Ford Laboratories. Retrospective laboratory testing.
    • EMC Testing: Not specified for the number of devices tested, but it's laboratory testing of the DentureID Microchips. Retrospective.
    • MRI Compatibility: Not specified for the number of devices tested, but it's laboratory testing of DentureID Microchips to ASTM standards. Retrospective.
    • Simulated Wear: Not specified for the number of DentureID Microchips installed in dentures, but it's a non-clinical performance test over a 5-year simulation. Retrospective.

    3. Number of Experts and their Qualifications for Ground Truth:

    • This information is not applicable in the context of this non-clinical performance testing. The "ground truth" for these tests is established by adhering to recognized international standards (ISO, ASTM, EN) and laboratory protocols, with results interpreted by qualified laboratory personnel (e.g., toxicologists, engineers) who execute these specific tests.

    4. Adjudication Method for the Test Set:

    • Not applicable. These are objective, quantitative and qualitative laboratory tests against defined scientific and engineering standards, not subjective interpretations requiring adjudication by experts.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No, an MRMC comparative effectiveness study was NOT done. This document focuses on the technical and safety performance of the device itself (hardware and its interaction with a smartphone app), not on the diagnostic or interpretative ability of human readers or AI in a clinical setting. Therefore, there is no effect size of how much human readers improve with AI vs. without AI assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance):

    • Yes, in essence, standalone performance testing was done for the core function of the microchip. The various tests (Biocompatibility, EMC, MRI Compatibility, Simulated Wear, Migration Testing, Performance Testing of Implanted Transponder) evaluate the physical and functional characteristics of the DentureID Microchip itself, independent of a human interpreting its output. The "algorithm" here is the RFID chip's ability to respond with its serial number and the smartphone app's ability to retrieve information from a database. "Performance Testing of Implanted Transponder" is a direct measure of this standalone function.

    7. Type of Ground Truth Used:

    • The ground truth for this non-clinical testing is based on:
      • Standardized test methods and protocols: Defined by international standards (ISO, ASTM, EN) and proprietary laboratory standards.
      • Objective measurements and observations: E.g., cytotoxicity scores, temperature rises, torque measurements, pass/fail criteria for EMC.
      • Chemical analysis: For leachate testing.

    8. Sample Size for the Training Set:

    • Not applicable. This device (DentureID Microchip) is a passive RFID transponder and its associated database/app. It does not employ machine learning or AI models that require a "training set" in the conventional sense for image analysis or diagnostic tasks. Its function is to communicate a serial number, which then links to pre-entered data on a website.

    9. How Ground Truth for Training Set Was Established:

    • Not applicable, as there is no training set for an AI/ML model for this device. The information stored on the DentureID.com website, which is retrieved by the microchip's serial number, is entered by a dental lab or dentist. This user-provided information serves as the "ground truth" for the data it's designed to return, but it's not a training set for an algorithm.

    K Number
    K121274
    Device Name
    PIXEL APP
    Date Cleared
    2012-06-27

    (61 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Cupertino, California 95014

    Re: K121274

    Trade/Device Name: Gauss Pixel App
    Regulation Number: 21 CFR 880.2740

    Intended Use

    The Gauss Pixel App is intended to be used to aid current clinical practices in recording the number of surgical sponges and for visibility for assessment of sponge images.

    Device Description

    Not Found

    AI/ML Overview

    The provided document does not contain the information needed to describe the acceptance criteria or the study demonstrating device performance. It consists primarily of a 510(k) clearance letter from the FDA, an "Indications for Use" form, and general regulatory information; it does not include specific details about acceptance criteria, study design, results, or data provenance.

    The information provided only states that the device is "substantially equivalent" to legally marketed predicate devices, which is the basis for 510(k) clearance. It does not include the detailed performance testing data that would be part of a full study report.


    K Number
    K120473
    Device Name
    PIXEL APP
    Date Cleared
    2012-04-09

    (53 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Medical Device

    Device Classification Name: Counter, Surgical Sponge
    Device Classification Number: 21 CFR 880.2740
    Cupertino, California 95014

    Re: K120473

    Trade/Device Name: Gauss Pixel App
    Regulation Number: 21 CFR 880.2740

    Intended Use

    The Gauss Pixel App is indicated for use to aid current practices in recording the number of surgical sponges and for visibility for assessment of sponge images.

    Device Description

    The Gauss Pixel App is a software program used on an iPad tablet to capture images of sponges, assisting surgical personnel in the management of surgical sponges after surgical use. The App allows surgical personnel to categorize sponges by sponge type and use, and provides an automated ongoing count of total sponge images and sponge images by tag. It also provides a visual record of images for further evaluation. This program is not intended to replace existing sponge counting practices, and sponges should be retained per the user's standard sponge management practice until the case is complete and sponge counting has been finalized.
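
    A minimal sketch of the tag-and-count bookkeeping this description implies is shown below. The class, method names, and sponge-type tags are illustrative assumptions, not taken from the actual Gauss Pixel App.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class SpongeImageLog:
    """Illustrative bookkeeping for captured sponge images, tagged by type."""
    images: dict[int, str] = field(default_factory=dict)  # image id -> sponge-type tag
    next_id: int = 0

    def record(self, sponge_type: str) -> int:
        """Capture an image and tag it with the user-selected sponge type."""
        image_id = self.next_id
        self.images[image_id] = sponge_type
        self.next_id += 1
        return image_id

    def retag(self, image_id: int, sponge_type: str) -> None:
        """Re-tag an image during visual review."""
        self.images[image_id] = sponge_type

    def delete(self, image_id: int) -> None:
        """Remove an image during visual review."""
        del self.images[image_id]

    def counts(self) -> tuple[int, Counter]:
        """Ongoing automated count: total images and a per-type breakdown."""
        return len(self.images), Counter(self.images.values())

log = SpongeImageLog()
log.record("laparotomy")
log.record("raytec")
log.record("raytec")
total, by_type = log.counts()
print(total, dict(by_type))   # 3 {'laparotomy': 1, 'raytec': 2}
```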

    AI/ML Overview

    Here's a summary of the acceptance criteria and study findings for the Gauss Pixel App, based on the provided document:

    1. Acceptance Criteria and Reported Device Performance

    Acceptance Criteria | Reported Device Performance (Gauss Pixel App)
    Record Images as Indicated by User | Confirmed: The application provided instructions for use and recorded images as indicated by the user.
    Accurately Tag Images as Indicated by User | Confirmed: The application accurately tagged images as indicated by the user.
    Accurately Provide Automated Counting (Total) | Confirmed: The application accurately provided automated counting in total.
    Accurately Provide Automated Counting (by Type) | Confirmed: The application accurately provided automated counting by type.
    Allow Visual Review and Management of Images | Confirmed: The application allowed visual review and management (re-tagging, deletion) of all images as appropriate.
    Function as Intended | Demonstrated: Results of performance testing through the software verification and validation process demonstrate that the Gauss Pixel App functions as intended.
    Substantial Equivalence to Predicate Device | Demonstrated: The Gauss Pixel App is as safe and effective as the predicate, has the same intended uses and indications, and utilizes a new technological method (software) which complements current clinical practices and does not raise new issues of safety or effectiveness. Software verification and validation demonstrate it functions as intended.

    2. Sample Size and Data Provenance

    The document does not explicitly state the sample size used for a test set in the conventional sense of a clinical trial or independent validation. The performance claims are based on "software verification and validation testing." This typically involves internal testing by the developer to ensure the software meets its specified functional and non-functional requirements.

    There is no information provided regarding the country of origin of data or whether it was retrospective or prospective. Given the nature of a 510(k) submission for a Class I device and the focus described (software verification and validation), it's highly probable that this involved internal testing rather than a large-scale clinical study with external patient data.

    3. Number of Experts and Qualifications

    The document does not specify the number of experts, their qualifications, or their role in establishing a "ground truth" for a test set. The validation described is primarily around the software's functional performance (e.g., counting, tagging, image display) rather than assessing clinical accuracy against expert opinion in a diagnostic context.

    4. Adjudication Method

    No adjudication method (e.g., 2+1, 3+1) is mentioned, as the described validation focuses on software functionality rather than interpreting findings against a 'ground truth' that would require expert consensus.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study is mentioned. The device is described as an adjunctive tool to aid current practices, not as a replacement or independent diagnostic tool. The focus is on verifying its own functionality (counting, image display) rather than its impact on human reader performance. Therefore, an effect size of human readers improving with/without AI assistance is not reported.

    6. Standalone Performance

    Yes, a standalone (algorithm only without human-in-the-loop performance) assessment was done for the core functionalities of the application. The document states: "Results of performance testing confirmed that the application provided instructions for use, recorded images as indicated by the user, accurately tagged images as indicated by the user, accurately provided automated counting both in total and by type and allowed visual review and management (re-tagging, deletion) of all images as appropriate." This describes the independent functional performance of the software.

    7. Type of Ground Truth Used

    For the described performance testing, the "ground truth" appears to be the expected functional behavior of the software as defined by its specifications and design. For example:

    • For "accurately provided automated counting," the ground truth would be the actual number of images taken and the actual type assigned by a user, against which the software's automated count is compared.
    • For "accurately tagged images," the ground truth is the tag the user intended to apply, against which the software's recorded tag is compared.

    It does not refer to medical ground truth like pathology, outcomes data, or expert consensus on a diagnostic interpretation.

    8. Sample Size for the Training Set

    The document does not specify a sample size for a training set. This is consistent with the nature of the device as a functional tool for managing sponges, not a machine learning model that requires a large training dataset to develop its core functionality (like image recognition for diagnostic purposes). The "training" described implicitly refers to the software development and testing cycles designed to ensure correct functionality.

    9. How Ground Truth for the Training Set Was Established

    Given the information provided, the concept of a "ground truth for the training set" as it pertains to typical AI/ML development (e.g., labeled data for model training) is not applicable here. The ground truth for the development and verification of the software's core functions (counting, tagging) would have been established by the software's design specifications and the expected, correct outputs for given inputs during testing. This is standard software engineering practice rather than a data-driven ground truth for a machine learning algorithm.


    K Number
    K100551
    Date Cleared
    2010-08-12

    (167 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    System Trade Name: Surgical Items Counting System
    Common: Surgical sponge counter, Class I, 21 CFR 880.2740
    Pardes Hanna 37052 Israel

    Re: K100551

    Trade/Device Name: ORLocate™ system
    Regulation Number: 21 CFR 880.2740

    Intended Use

    The ORLocate™ system is indicated for use in recording and counting the number of RFID-tagged surgical sponges, laparotomy sponges, towels and other tagged items used during surgical procedures in which counting is required. In addition, the product is indicated for providing a non-invasive means of detecting retained RFID-tagged surgical sponges, towels and other tagged items within a surgical site, as an adjunctive detection method to current surgical counting systems and methods.

    Device Description

    The Haldor ORLocate™ system is an RFID system that enables the enumeration of sponges and manual surgical instruments, utilizing passive tags to keep track of the items during surgery and to identify counting problems. In addition, the system provides a non-invasive means of locating retained surgical items within a surgical site. The submission consists of the ORLocate™ system, which includes a cart and antennas. Additionally, the submission includes accessories: associated single-use surgical sponges, gauzes, pads and surgical towels, each fitted with a uniquely coded RFID tag, and a uniquely coded RFID tag used for surgical instruments. The RF frequency the system uses is 13.56 MHz according to ISO 15693. The system also supplies a semi-automatic application to help in counting untagged items; the count information is first entered manually and the calculations are automatic.
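
    The pairing of automatic counting for RFID-tagged items with manual entry for untagged items can be pictured as a simple reconciliation step. The function, tag IDs, and item names below are invented for illustration and do not describe the ORLocate™ software itself.

```python
def reconcile(tags_in: set[str], tags_out: set[str],
              untagged_in: dict[str, int], untagged_out: dict[str, int]) -> dict:
    """Return items still unaccounted for at the end of a case.

    Tagged items are tracked automatically by unique RFID tag ID; untagged
    items are tracked by manually entered counts, with the arithmetic automated.
    """
    missing_tagged = tags_in - tags_out   # unique tags never scanned out
    missing_untagged = {
        item: untagged_in[item] - untagged_out.get(item, 0)
        for item in untagged_in
        if untagged_in[item] - untagged_out.get(item, 0) > 0
    }
    return {"missing_tagged": missing_tagged, "missing_untagged": missing_untagged}

result = reconcile(
    tags_in={"TAG-001", "TAG-002", "TAG-003"},
    tags_out={"TAG-001", "TAG-003"},
    untagged_in={"suture pack": 4},
    untagged_out={"suture pack": 4},
)
print(result)   # {'missing_tagged': {'TAG-002'}, 'missing_untagged': {}}
```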

    AI/ML Overview

    The ORLocate™ System is an RFID-based system for counting and detecting surgical items. The provided document, a 510(k) Summary, details the device's technical characteristics, intended use, and non-clinical performance data to demonstrate substantial equivalence to predicate devices.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" with numerical thresholds for performance. However, based on the non-clinical performance data section, the criteria can be inferred from the tests performed and the conclusion that the device functions as intended and is as safe and effective as predicate devices.

    Acceptance Criteria (Inferred) | Reported Device Performance
    Biocompatibility of tagged items | Demonstrated
    Permanent attachment of tags to sponges and instruments | Demonstrated
    Software functions as intended | Validated, properly counting sponges in body fluids
    Safety equivalent to predicate devices | Test results demonstrate this
    Effectiveness equivalent to predicate devices | Test results demonstrate this
    Proper functioning in body fluids | Software properly counted sponges in body fluids
    Counting accuracy | Testing performed, deemed "as safe and effective"
    System interference with OR devices | Testing performed
    ORLocate sponge X-ray detection | Testing performed
    ORLocate Tag pull test | Testing performed
    Electromagnetic compatibility (IEC 60601-1-2:2007) | Testing performed
    Electrical safety (IEC 60601-1:1988 + A1:1991 + A2:1995 and EN 60601-1:1990 + A1:1993 + A2:1995 + A3:1996) | Testing performed

    2. Sample Size Used for the Test Set and Data Provenance

    The document states "Non-clinical testing included demonstrating performance of system and tagged items in laboratory tests." However, it does not specify the sample sizes used for any of the tests (e.g., how many sponges were tested for counting accuracy, how many instruments for tag pull test).

    The data provenance is described as "laboratory tests," implying controlled settings rather than real-world clinical data. The document does not explicitly mention the country of origin of the data, but the 510(k) owner is based in Israel, suggesting the testing likely occurred there or in collaboration with international labs. The study is retrospective in the sense that the testing was performed and then reported for the 510(k) submission.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not provide information on the number of experts used or their qualifications for establishing ground truth for the test set. Given the nature of the non-clinical tests (e.g., biocompatibility, tag attachment, software counting accuracy), ground truth would likely be established through objective measurements and validated procedures rather than solely expert consensus.

    4. Adjudication Method for the Test Set

    The document does not describe any adjudication method for the test set. Given that the non-clinical tests are largely objective performance evaluations (e.g., measuring count accuracy, pull force, EMC compliance), an adjudication method in the context of expert review is unlikely to be relevant.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document focuses on non-clinical performance data to establish substantial equivalence, not on human reader performance with or without AI assistance.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done

    Yes, the non-clinical tests described are essentially standalone performance evaluations of the algorithm and device. The "counting accuracy test" and validation that the "software functioned as intended under simulated use, properly counting sponges in body fluids" demonstrate the algorithm's performance without direct human-in-the-loop assistance for the core counting mechanism. The system is designed to perform automatic counts and provide alerts if items are missing, which is a standalone algorithmic function.

    7. The Type of Ground Truth Used

    The ground truth used for the non-clinical tests would have been established through:

    • Objective measurement/validation: For counting accuracy, the actual number of sponges present would be the ground truth. For tag pull tests, the measured force would be compared against a standard.
    • Established standards: Compliance tests like IEC 60601-1-2:2007 and IEC 60601-1:1988 + A1:1991 + A2:1995 refer to external ground truths established by international standards bodies.
    • Simulated environment: For the software, simulated body fluids were used to test performance, implying a controlled and known environment against which the device's output was compared.

    8. The Sample Size for the Training Set

    The document does not mention a training set sample size. The system described is an RFID detection and counting system, which typically relies on pre-programmed logic for tag identification and counting, rather than a machine learning model that requires a "training set" in the conventional sense. If there are any adaptive or learning components, they are not detailed in this summary.

    9. How the Ground Truth for the Training Set Was Established

    Since a training set is not mentioned for machine learning purposes, the method for establishing its ground truth is not applicable/not provided. The system's operational parameters (e.g., RFID tag protocols, counting logic) would be established through engineering design and validation, not model training.


    K Number
    K073180
    Date Cleared
    2007-11-19

    (6 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Trade/Device Name: ClearCount Medical Solutions SmartSponge™ PLUS System
    Regulation Number: 21 CFR 880.2740

    Intended Use

    The ClearCount Medical Solutions SmartSponge™ PLUS System is indicated for use in counting and recording the number of RFID-tagged surgical sponges, laparatomy sponges and towels used during surgical procedures, as well as for providing a non-invasive means of locating retained RFID-tagged surgical sponges, towels, and other tagged items within a surgical site.

    Device Description

    The SmartSponge™ PLUS System includes surgical sponges, laparotomy pads and surgical towels, each of which contains a unique radio frequency identification (RFID) tag permanently attached to the gauze or fabric. The tags allow the sponges and towels to be individually recognized by an RFID reader.

    The SmartBucket is a specially designed cart containing a microcontroller unit with specialized software designed for mobile data collection. Integrated RFID technology allows capture of the information coded on the unique RFID tag on the sponges, pads and towels. The microcontroller unit counts the initial number of sponges introduced into a surgical case, and using the custom software program, reports the total sponges discarded at the end of the procedure, and compares that number to the original. By providing a count of the items entered into surgery, and a count of those discarded and removed permanently from the surgical field, personnel can be alerted to sponges that may still remain in the surgical field prior to closing the patient.

    A Detection Wand is an additional antenna that is tethered by a cable to the SmartBucket. It is powered and controlled by the SmartBucket. The antenna functions as an additional RFID antenna for the system, operating in an identical manner to the internal SmartBucket antennas. Using a keypad, the user may activate the Detection Wand antenna. When in Detection Wand mode, the system uses the Wand antenna to recognize RFID-tagged items that may be inside the surgical site.

    A Detection Mat is a disposable or reusable element with multiple RFID tags embedded inside, along with several passive printed circuit traces. Like the RFID-tagged sponges, the Detection Mat tags contain unique identifying numbers and are distinguishable by the system software. The Detection Mat is placed on the operating room table before the patient is brought into the room and is covered by the standard sheets or drapes used in surgery, thus not making contact with the patient. The RFID tags in the Mat provide feedback to the user that the Detection Wand is being held close enough to the patient to ensure proper reading. The tags in the Detection Mat also ensure that the Detection Wand scan has covered the appropriate areas of the patient. The passive circuit traces help to enhance the readability of the RFID tags in the Detection Mat.
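
    One way to picture the Detection Mat's feedback role is a coverage check over its uniquely identified tags. The zone layout and tag IDs below are assumptions made for illustration; the document states only that the Mat tags confirm the Wand was held close enough and that the appropriate areas were scanned.

```python
# Invented zone-to-tag mapping; the actual Mat layout is not described in the summary.
MAT_ZONE_TAGS = {
    "upper-left": "MAT-01",
    "upper-right": "MAT-02",
    "lower-left": "MAT-03",
    "lower-right": "MAT-04",
}

def scan_coverage(tags_seen_by_wand: set[str]) -> tuple[bool, list[str]]:
    """Return whether every Mat zone was read, plus any zones still unscanned."""
    missed = [zone for zone, tag in MAT_ZONE_TAGS.items()
              if tag not in tags_seen_by_wand]
    return not missed, missed

complete, missed = scan_coverage({"MAT-01", "MAT-02", "MAT-03", "SPONGE-117"})
print(complete, missed)   # False ['lower-right']
```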

    AI/ML Overview

    Here's an analysis of the provided text regarding the ClearCount Medical SmartSponge™ PLUS System, focusing on acceptance criteria and supporting study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Implied) | Reported Device Performance
    Counting Functionality: Accurately count and record the initial number of RFID-tagged surgical items and the number discarded post-procedure. | The SmartBucket (part of the system) "counts the initial number of sponges introduced into a surgical case, and using the custom software program, reports the total sponges discarded at the end of the procedure, and compares that number to the original." This function is implicitly stated to work as intended. "The customized software program uses the scanned information to count the number of items used at the beginning of a surgical procedure, and then again before surgical closure." No specific numerical accuracy rates are provided for the counting function in this document.
    Detection/Localization Functionality: Non-invasive means of locating retained RFID-tagged surgical items within a surgical site. | "Non-Clinical testing included simulated use in patient models that represented worst case biological situations... and in all cases the ClearCount SmartSponge™ PLUS System performed as intended." "The validated software functioned as intended under simulated use, properly locating all tags."
    RFID Tag Readability: Read tags through blood, bodily fluids, and tissue. | "The scanner can read the tag through blood and other bodily fluids and tissue." (This is a design claim rather than a measured performance metric from the study specifically.)
    Biocompatibility of Transponder Tags: Tags are safe for contact within the surgical environment. | "Biocompatibility of the transponder tag was illustrated and is comparable to the commercially available predicates."
    Electrical Safety Standards: Compliance with IEC 60601-1. | "The system has also been designed to meet the following electrical safety standards and electromagnetic compatibility standards: IEC 60601-1 Medical Electrical Equipment - Part 1: General Requirements for Safety"
    Electromagnetic Compatibility Standards: Compliance with IEC 60601-1-2. | "The system has also been designed to meet the following electrical safety standards and electromagnetic compatibility standards: IEC 60601-1-2 (Second Edition, 2001) Medical Electrical Equipment - Part 1: General Requirements for Safety: Electromagnetic Compatibility - Requirements and Tests"

    2. Sample Size Used for the Test Set and Data Provenance

    The document mentions "simulated use in patient models that represented worst case biological situations." However, no specific sample size (number of simulated cases, number of sponges, or number of tests) for the test set is provided.

    The data provenance is from non-clinical testing, using simulated patient models. The document does not specify the country of origin, but given the FDA submission, it's presumably conducted under U.S. regulatory standards or by a manufacturer seeking to market in the U.S. It explicitly states "Non-Clinical testing," confirming it's not a prospective or retrospective study involving actual patients.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This information is not provided in the document. The "ground truth" for the simulated use in patient models would likely have been established by the study designers or engineers setting up the scenarios of "worst case biological situations" and knowing the expected location and quantity of tagged items. There is no mention of independent experts establishing this ground truth.

    4. Adjudication Method for the Test Set

    This information is not provided. Without details on who assessed the device's performance in the "simulated use," an adjudication method cannot be determined.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No, an MRMC comparative effectiveness study was not done. The device's primary function is to automate sponge counting and aid in detection, not to assist human readers in interpreting medical images in the traditional sense of an MRMC study. The "Detection Wand" is a tool for locating tagged items, not for improving human interpretation of visual data.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a form of standalone performance was implicitly evaluated for the detection component. The statement "The validated software functioned as intended under simulated use, properly locating all tags" suggests the algorithm's ability to locate tags independently within the simulated environment. While a human operates the Detection Wand, the core detection by the RFID system and its interpretation by the software is a standalone algorithm function. The counting function is also purely algorithmic.

    7. The Type of Ground Truth Used

    The ground truth used was based on pre-defined scenarios within simulated patient models where the exact quantity and location of RFID-tagged items would be known to the experimenters. This is an engineered or experimental ground truth, not derived from expert consensus, pathology, or outcomes data from real patients.

    8. The Sample Size for the Training Set

    The document does not provide information on the sample size for a training set. The device appears to rely on established RFID technology and programmed logic, rather than a machine learning model that would typically require a training set in the conventional sense. If there was any "training" (e.g., for system calibration), its details are not mentioned.

    9. How the Ground Truth for the Training Set was Established

    Since no training set is mentioned or implied for a typical machine learning model, this information is not applicable / not provided. The functionality seems to be based on direct sensing and programming, not a learned model from data.


    K Number
    K071355
    Date Cleared
    2007-05-24

    (9 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Nonabsorbable gauze for internal use per 21 CFR 878.4450, GDY
    Surgical sponge counter, unclassified, 21 CFR 880.2740
    K071355

    Trade/Device Name: ClearCount Medical Solutions SmartSponge™ System
    Regulation Number: 21 CFR 880.2740

    Intended Use

    The ClearCount Medical Solutions SmartSponge™ System is indicated for use in counting and recording the number of RFID-tagged surgical sponges, laparatomy sponges and towels used during surgical procedures.

    Device Description

    The SmartSponge™ System includes surgical sponges, laparotomy pads and surgical towels, each unit of which contains a unique radio frequency identification (RFID) tag permanently attached to the gauze or fabric. The tags allow the sponges and towels to be individually recognized by an RFID reader.

    The SmartBucket is a specially designed cart containing a microcontroller unit with specialized software designed for mobile data collection. Integrated RFID technology allows capture of the information coded on the unique RFID tag on the sponges, pads and towels. The microcontroller unit counts the initial number of sponges introduced into a surgical case, and using the custom software program, reports the total sponges discarded at the end of the procedure, and compares that number to the original. By providing a count of the items entered into surgery, and a count of those discarded and removed permanently from the surgical field, personnel can be alerted to sponges that may still remain in the surgical field prior to closing the patient.
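
    A minimal sketch of the end-of-case comparison described above follows; the counts and alert wording are illustrative, not taken from the SmartBucket software.

```python
def end_of_case_check(count_in: int, count_discarded: int) -> str:
    """Compare sponges introduced against sponges discarded and alert on any gap."""
    missing = count_in - count_discarded
    if missing > 0:
        return f"ALERT: {missing} sponge(s) unaccounted for; search the field before closing."
    return "Counts reconciled: all sponges accounted for."

print(end_of_case_check(count_in=30, count_discarded=28))
# ALERT: 2 sponge(s) unaccounted for; search the field before closing.
```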

    AI/ML Overview

    The ClearCount Medical Solutions SmartSponge™ System is designed to count and record RFID-tagged surgical sponges, laparotomy sponges, and surgical towels during surgical procedures. The provided text outlines non-clinical performance data and results.

    Here's a breakdown of the requested information:

    1. Table of acceptance criteria and reported device performance:

    Acceptance Criteria | Reported Device Performance
    Permanence of the RFID tag on gauze pads | "Results showed that the tags are permanently attached."
    Biocompatibility of the RFID tag material | "material is comparable to commercially available predicates in terms of biocompatibility."
    Manufacturing validation: each item has one and only one unique tag | "manufacturing validation that one and only one unique tag was placed per item."
    Software validation of the SmartBucket scanning device: proper counting in simulated body fluids | "The validated software functioned as intended under simulated use, properly counting sponges in simulated body fluids."
    Overall system performance: safety and accuracy compared to manual counting | "Test results demonstrate the RFID tagged sponges are as safe as the predicate device, and the software installed on the microcontroller unit performs accurately, making its use more effective and more accurate than hand counting sponges."

    2. Sample size used for the test set and the data provenance:

    The document states "simulated finished product testing of the total system" and "properly counting sponges in simulated body fluids." However, it does not specify the sample size for this test set nor the data provenance (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    The document does not specify the number of experts used or their qualifications for establishing ground truth for the test set. The ground truth for counting accuracy was established through "simulated use, properly counting sponges in simulated body fluids," implying a controlled experimental setup rather than expert interpretation of medical images or conditions.

    4. Adjudication method for the test set:

    The document does not specify an adjudication method (such as 2+1 or 3+1) for the test set. Given the nature of the device (counting sponges), the ground truth for counting accuracy would likely be directly observable and verifiable rather than requiring expert adjudication.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and the effect size:

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described in the provided text. The document compares the device's accuracy and effectiveness to "hand counting sponges" but does not provide specific effect sizes for human readers with or without AI assistance in an MRMC study.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    Yes, a standalone performance study was done implicitly. The "software validation of the SmartBucket scanning device" and "simulated finished product testing of the total system" describe the device's ability to "properly count sponges" without human intervention for the counting process itself. The system is designed to provide a count that personnel then use to determine if sponges remain.

    7. The type of ground truth used:

    The ground truth used for the counting accuracy of the SmartSponge System was directly observable and verifiable counts in simulated use conditions, specifically "properly counting sponges in simulated body fluids." This is an objective measurement of the device's functionality rather than expert consensus, pathology, or outcomes data.
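
    A rough sketch of such a simulated-use check, comparing the device-reported count against the known number of tagged sponges placed, is shown below. The case names, counts, and the count_sponges stub are hypothetical; a real test would query the SmartBucket rather than a stub.

```python
def count_sponges(placed_tags: set[str]) -> int:
    """Stand-in for the device scan; a real test would query the SmartBucket."""
    return len(placed_tags)

def run_case(name: str, placed_tags: set[str]) -> bool:
    expected = len(placed_tags)              # ground truth: sponges actually placed
    reported = count_sponges(placed_tags)    # count reported by the system under test
    passed = reported == expected
    print(f"{name}: expected {expected}, reported {reported}, {'PASS' if passed else 'FAIL'}")
    return passed

cases = {
    "dry sponges": {f"TAG-{i:03d}" for i in range(10)},
    "sponges in simulated body fluid": {f"TAG-{i:03d}" for i in range(25)},
}
all_passed = all([run_case(name, tags) for name, tags in cases.items()])
print("Overall:", "PASS" if all_passed else "FAIL")
```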

    8. The sample size for the training set:

    The document does not specify the sample size for any training set. Given the context of a 2007 510(k) submission for a device using RFID for counting, it's possible that traditional "training sets" in the modern machine learning sense were not explicitly defined or reported in the same way. The software validation suggests a designed and tested algorithm rather than a continuously learning AI model.

    9. How the ground truth for the training set was established:

    The document does not describe how ground truth for a training set (if one was formally used) was established. The software validation likely involved testing against known, correct counts of sponges in controlled environments.


    K Number
    K061316
    Date Cleared
    2006-11-02

    (175 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Re: K061316

    Trade/Device Name: Medline Surgical Sponge Scanning System
    Regulation Number: 21 CFR 880.2740

    Intended Use

    The Medline Surgical Sponge Scanning System is intended to be used to augment current sponge counting practice by providing a means to electronically detect potentially retained sponges.

    Device Description

    Not Found

    AI/ML Overview

    The provided text is a 510(k) clearance letter from the FDA for the Medline Surgical Sponge Scanning System. It does not contain information about acceptance criteria, study design, or device performance metrics.

    Therefore, the requested acceptance criteria and performance details cannot be provided from this input. The document mainly states that the device is substantially equivalent to legally marketed predicate devices and is cleared for marketing.


    K Number
    K062642
    Date Cleared
    2006-11-02

    (57 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Detection System Regulation Number: 21 CFR 880.2740 Regulation Name: Surgical sponge scale Regulatory

    Intended Use

    The RF Surgical Systems Inc. Detection System is intended to provide a non-invasive means of locating retained surgical sponges, gauze and other tagged items within a surgical site. It is to be employed as an adjunctive detection method to current surgical sponge and gauze counting systems and methods

    Device Description

    The RF Surgical Systems Inc. Detection System consists of:

    1. The Power/Control Console contains the electronics that power and control the detector/scanner. The console also includes the user interface for system operation and communication of the system status, operation and alarms to the user.
    2. The Transponder/Tag is a single-use, electrically passive device that is designed to radiate a magnetic signature when stimulated by magnetic impulses from the detector/scanner. The tag does not store or communicate any information or unique code; it is mechanically attached to gauze sponges at the manufacturing site and processed as part of the item.
    3. The Detection Wand is a transceiver type antenna designed to stimulate the transponder tag assembly with magnetic impulses and then detect the resultant magnetic signature from the tag. The Power/Control Console provides the power and control to the detection scanner. The detection scanner is intended and designed to be a single-use disposable device and will be supplied in a sterile condition as it will enter the surgical field.
    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the RF Surgical Systems Inc. Detection System, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly present a table of quantitative acceptance criteria with corresponding performance metrics in the format typically seen for algorithm performance (e.g., sensitivity, specificity, AUC). Instead, it describes functional requirements and then states that the device "performed as intended."

    Based on the "Technological Characteristics" and "Non-Clinical Performance Data" sections, here's an interpretation of the implied acceptance criteria and the reported performance:

    Acceptance Criteria (Implied) | Reported Device Performance
    Non-invasive means of locating retained surgical sponges, gauze, and other tagged items within a surgical site. | "The RF Surgical Systems Inc. Detection System performed as intended."
    Detection of tagged objects within a minimum range of 16" from the detector aperture surface into a patient of average size. | Not explicitly quantified, but overall performance "as intended" implies this was met.
    Tags outside of the patient and more than 36" outside of the scan area must not result in false positives. | Not explicitly quantified, but overall performance "as intended" implies this was met.
    System capable of reading the tag signal through blood, bodily fluids, and the body wall. | Stated as a characteristic; implicit in "performed as intended."
    Biocompatibility of the transponder tag comparable to commercially available predicates. | "Biocompatibility of the transponder tag was illustrated and is comparable to the commercially available predicates."
    Validated software functions as intended, properly locating all tags. | "The validated software functioned as intended under simulated use, properly locating all tags."
    Meets electrical safety standards: IEC 60601-1. | "The console has also been designed to meet the following electrical safety standards..." (does not explicitly state that testing confirmed this, but implies compliance in design).
    Meets electromagnetic compatibility standards: IEC 60601-1-2 (Second Edition, 2001). | "The console has also been designed to meet the following ... electromagnetic compatibility standards..." (does not explicitly state that testing confirmed this, but implies compliance in design).

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size: The document does not provide a specific numerical sample size for the test set used in non-clinical testing. It mentions "simulated use in patient models" and "Animal studies using large swine species."
    • Data Provenance:
      • Simulated Use in Patient Models: These are likely laboratory or in-house simulations.
      • Animal Studies: "large swine species" were used. The country of origin for these animal studies is not specified, but it's implied to be part of the manufacturer's testing process for FDA submission.
    • Retrospective or Prospective: Both "simulated use" and "animal studies" would typically be considered prospective in nature, as they were conducted specifically to test the device.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not explicitly state that experts were used to establish ground truth in the traditional sense of consensus reading for image analysis. Rather, the ground truth for "locating all tags" in the simulated and animal studies would have been the known physical presence and location of the tagged sponges or items.

    4. Adjudication Method for the Test Set

    Given the nature of the device (detection of physical tags) and the described testing (simulations, animal studies where tags are physically present), an adjudication method like 2+1 or 3+1 for expert consensus is not applicable. The "ground truth" is directly observable.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. The document describes a standalone performance study of the device itself (detecting tags), not a comparative effectiveness study involving human readers with and without AI assistance (which is not an AI device in the typical sense, but rather a detection system). Therefore, no effect size of human reader improvement is reported.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone performance assessment was conducted. The "Non-Clinical Performance Data" section describes testing where the "RF Surgical Detection System performed as intended" and "the validated software functioned as intended under simulated use, properly locating all tags." This indicates an evaluation of the device's ability to detect tags independently.

    7. Type of Ground Truth Used

    The ground truth used was:

    • Known physical presence and location of tagged items: In simulated patient models and animal studies, the researchers would have known precisely where the tagged sponges were placed.
    • Biocompatibility data: For biocompatibility, the ground truth would be established through standard biological testing methods and comparison to known safe materials (predicates), likely through laboratory analysis rather than expert consensus on images.

    8. Sample Size for the Training Set

    The document does not provide information about a "training set" for the device. This device is described as an RF detection system using transponder tags and a detection wand, not an AI/machine learning system that typically requires a large training dataset. The "software" mentioned likely refers to control and processing logic rather than a trainable AI algorithm.

    9. How Ground Truth for the Training Set Was Established

    As no training set is indicated, this question is not applicable based on the provided text. The device's operation is based on physical principles of radio frequency detection, not on learning from data.


    K Number
    K060076
    Manufacturer
    Date Cleared
    2006-03-14

    (63 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Re: K060076

    Trade/Device Name: SurgiCount Medical Safety-Sponge System
    Regulation Number: 21 CFR 880.2740

    Intended Use

    The SurgiCount Medical Safety-Sponge System™ is indicated for use in counting and recording the number of thermally labeled surgical sponges, laparotomy sponges and towels used during surgical procedure.

    Device Description

    The Safety Sponges include surgical sponges, laparotomy pads and surgical towels, each unit of which contains a unique identification label permanently fused to the gauze or fabric. The labels allow the sponges and towels to be individually recognized by a commercially available sight laser imager.

    The Safety Sponge Counter is a commercially available mobile computer with specialized software designed for mobile data collection. Integrated imaging technology allows capture of the information coded in the unique identification label on the sponges, pads and towels. The computer counts the initial number of sponges opened, and using the custom software program, reports the total sponges used at the end of the procedure or on demand, and compares that number to the original. Individual sponges may be identified as entered into the surgical field but not discarded, so that the surgical field can be explored before surgically closing the patient.
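
    The per-label bookkeeping this description implies can be sketched as follows; the SpongeCounter class and label values are illustrative and are not the actual SurgiCount software.

```python
class SpongeCounter:
    """Illustrative per-label tracking; not the actual SurgiCount software."""

    def __init__(self) -> None:
        self.counted_in: set[str] = set()
        self.counted_out: set[str] = set()

    def scan_in(self, label: str) -> None:
        """Record a sponge as opened into the surgical field."""
        self.counted_in.add(label)

    def scan_out(self, label: str) -> None:
        """Record a sponge as used and removed from the field."""
        self.counted_out.add(label)

    def outstanding(self) -> set[str]:
        """Labels entered into the field but not yet counted out."""
        return self.counted_in - self.counted_out

counter = SpongeCounter()
for label in ("SP-0001", "SP-0002", "SP-0003"):
    counter.scan_in(label)
counter.scan_out("SP-0001")
counter.scan_out("SP-0003")
print(counter.outstanding())   # {'SP-0002'}: explore the field before closing
```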

    AI/ML Overview

    Here's an analysis of the provided text regarding the SurgiCount Medical Safety-Sponge System's acceptance criteria and study:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria (e.g., "accuracy must be >99%"). Instead, it focuses on qualitative substantial equivalence and functional performance.

    Acceptance Criterion (Implicit) | Reported Device Performance
    Label Permanence | Labels do not flake or peel.
    Material Biocompatibility | Material is comparable to commercially available predicates.
    Unique Labeling Accuracy | Manufacturing validation ensured one and only one unique label per pad.
    Software Functionality (Counting) | Software functioned as intended under simulated use, properly counting sponges in simulated body fluids.
    Software Functionality (Reporting) | Software provided hard copy reports equivalent to those used for blood bag inventory products.
    Clinical Accuracy (Clean Sponges) | Accurately counted surgical sponges into and out of the field on clean sponges.
    Clinical Accuracy (Soiled Sponges) | Accurately counted surgical sponges into and out of the field on soiled sponges (through blood and bodily fluids).
    Clinical Time Efficiency | Compared accuracy and time spent for counting sponges. (Implicitly, the system is expected to be more effective/efficient than hand counting.)
    User Assessment | Operating personnel's assessment of the software program and ease of use was favorable.
    Safety | No adverse events were reported; labels are as safe as the predicate device.
    Effectiveness | System provides improved accuracy, comparable to inventory control systems for blood bags.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not explicitly stated. The clinical study involved a "surgical setting" but the number of procedures, sponges, or specific data points is not quantified.
    • Data Provenance: Clinical data was prospective ("tested clinically under non-significant risk IRB approvals in a surgical setting"). The country of origin is not specified, but given the 510(k) submission to the FDA, it is highly likely to be the United States.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    This information is not provided in the document. The document mentions "operating personnel's assessment," but it doesn't specify if these personnel established ground truth for sponge counts or if their assessment was for user experience.

    4. Adjudication Method for the Test Set

    This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No multi-reader, multi-case (MRMC) study was explicitly mentioned. The study compared the device to traditional hand counting and assessed accuracy and time efficiency. It did not evaluate human readers with and without AI assistance in the traditional MRMC sense.
    • Effect Size: Not applicable, as an MRMC study of human readers with/without AI assistance was not performed. The text does state the system makes its use "more effective than hand counting sponges" and provides "improved accuracy," but no specific quantitative effect size is given for human performance enhancement.

    6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study

    • Yes, a standalone performance study was done. The core of the product is a "mobile computer with specialized software" that performs the counting. The "software validation of the hand-held scanning device" and the simulated finished-product testing of the total system, in which the software was shown to be "properly counting sponges in simulated body fluids," directly demonstrate algorithm-only performance. The clinical study also showed that the system "accurately counted surgical sponges into and out of the field, on both clean and soiled sponges," implying standalone algorithm accuracy.

    7. Type of Ground Truth Used

    Based on the description, the ground truth was likely established through manual, independent counting of sponges for comparison with the device's count. The description implies that the study aimed to verify the accuracy of the device's count against a known correct count (the actual number of sponges).
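
    If that is the case, evaluating the device against ground truth reduces to comparing the device-reported count with an independent manual count for each procedure. The sketch below is purely illustrative; the field names and numbers are hypothetical and do not come from the submission.

```python
# Hypothetical per-procedure records: device-reported counts vs. independent manual counts.
procedures = [
    {"case": "case-01", "device_count": 24, "manual_count": 24},
    {"case": "case-02", "device_count": 31, "manual_count": 31},
    {"case": "case-03", "device_count": 18, "manual_count": 19},  # illustrative discrepancy
]

def count_agreement(records):
    """Fraction of procedures in which the device count matches the manual ground-truth count."""
    matches = sum(1 for r in records if r["device_count"] == r["manual_count"])
    return matches / len(records)

if __name__ == "__main__":
    print(f"Agreement: {count_agreement(procedures):.1%}")
    for r in procedures:
        if r["device_count"] != r["manual_count"]:
            print(f"Discrepancy in {r['case']}: device={r['device_count']}, manual={r['manual_count']}")
```

    The document itself does not quantify results at this level of detail; it states only that the system accurately counted sponges into and out of the field.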

    8. Sample Size for the Training Set

    This information is not provided in the document. The document describes the system as using "commercially available" components and software "similar to the predicate software" for inventory control, suggesting it may not have required extensive de novo training data for a complex machine learning model in the modern sense. It's more likely a rule-based or OCR-like system.

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided in the document, and it is unclear whether a distinct "training set" with ground truth, as understood in modern AI development, was even used. Given the technology described (reading unique ID labels with a laser imager), the system is more likely a deterministic counting system than an AI model requiring a large, annotated training set. Its functionality relies on reliable label reading and matching, not "learning" from diverse input data to classify or detect.


    K Number
    K972302
    Date Cleared
    1997-07-25

    (36 days)

    Product Code
    Regulation Number
    880.2740
    Reference & Predicate Devices
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Illinois 60039-9693

    Re: K972302 Trade/Device Name: Pocket Count or SafeTCount Regulation Number: 21 CFR 880.2740

    Intended Use

    Containment and Counting of Sponges in the Operating Room or a hospital setting.

    Device Description

    Pocket Count: Transparent, compartmentalized, disposable bag that allows visual confirmation of the sponge count and a visual estimate of absorbed fluid. Pocket Count provides partial containment of contaminated sponges, thereby decreasing worker exposure to the contents.
    SafeTCount: Transparent, disposable hand-shaped covering used to collect sponges. When the covering is turned inside out, it contains the contaminated sponges, thereby decreasing worker exposure to the contents. The SafeTCount allows visual confirmation of the sponge count and a visual estimate of absorbed fluid.

    AI/ML Overview

    This document is a 510(k) premarket notification for the "Pocket Count or SafeTCount" surgical sponge counter. It primarily focuses on demonstrating substantial equivalence to pre-existing predicate devices. As such, it does not describe a typical study in which specific acceptance criteria are defined and then verified through device testing.

    Instead, the document details a comparison study to establish substantial equivalence based on design and intended use, rather than a performance study with quantitative acceptance criteria.

    Here's an analysis of the information provided:

    1. A table of acceptance criteria and the reported device performance

    The document does not present acceptance criteria in the traditional sense of measurable performance targets (e.g., accuracy, sensitivity, specificity) with associated reported device performance. Instead, the "performance" section of the substantial equivalence comparison table (page 3) states:

    Performance | Holds up to 5 lap sponges or 10 - 4x4's | Holds up to 5 lap sponges or 10 - 4x4's | Holds up to 5 lap sponges or 10 - 4x4's | No Claim Made

    This indicates the device's capacity, which is a design specification, not a performance metric that would typically be tested against acceptance criteria in a clinical study. The "acceptance criteria" here implicitly revolve around demonstrating that the new devices (Pocket Count and SafeTCount) are substantially equivalent in their intended use, design, material, and performance characteristics (like capacity) to legally marketed predicate devices.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Not applicable. This submission is for a physical device (surgical sponge counter) and does not involve a "test set" of data in the context of an algorithm or diagnostic device. The evaluation is based on comparing design features and stated capacities to predicate devices.
    • Data Provenance: Not applicable for a typical "test set" as described for an algorithm. The data provenance would refer to the characteristics of the predicate devices themselves (e.g., "legally marketed in interstate commerce prior to May 28, 1976").

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. "Ground truth" in this context would generally refer to a medical diagnosis or outcome used to evaluate an AI/diagnostic device. For a surgical sponge counter, the "truth" is whether it effectively contains and allows counting of sponges as designed, which is assessed through design comparison and manufacturer claims, not expert consensus on diagnostic data.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Not applicable. There is no test set in the sense of diagnostic data requiring adjudication. The FDA's review process determines substantial equivalence based on the submitted information comparing the device to predicates.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not applicable. This document is for a physical medical device (surgical sponge counter), not an AI or diagnostic tool that would involve human readers or cases.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not applicable. This is not an algorithm or software device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Not applicable. The "ground truth" for this type of device is its functional design and capacity to hold sponges, which is inherently understood through its physical properties and intended use. The comparison is against predicate devices that perform the same function.

    8. The sample size for the training set

    Not applicable. This is not an AI or algorithm-driven device that requires a training set.

    9. How the ground truth for the training set was established

    Not applicable. There is no training set for this device.

