
510(k) Data Aggregation

    K Number
    K161959
    Device Name
    ClearView cCAD
    Date Cleared
    2016-12-28

    (163 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    ClearView cCAD is a software application designed to assist skilled physicians in analyzing breast ultrasound images. ClearView cCAD automatically classifies shape and orientation characteristics of user-selected regions of interest (ROIs).

    The software allows the user to annotate, and automatically record and/or store selected views. The software also automatically generates reports from user inputs annotated during the image analysis process as well as the automatically generated characteristics. The output of this system will be a DICOM compatible (e.g. grayscale softcopy presentation state (GSPS)) and/or PDF report that can be sent along with the original image to standard film or paper printers or sent electronically to an intranet webserver or other DICOM compatible device.

    cCAD includes options to annotate and describe the image based on the ACR BI-RADS® Breast Imaging Atlas. In addition, the report form has been designed to support compliance with the ACR BI-RADS® Ultrasound Lexicon Classification Form.

    When interpreted by a skilled physician, this device provides information that may be useful in screening and diagnosis. Patient management decisions should not be made solely on the results of the cCAD analysis. The ultrasound images displayed on cCAD must not be used for primary diagnostic interpretation.

    Device Description

    ClearView cCAD is a software application designed to assist skilled physicians in analyzing breast ultrasound images. ClearView cCAD automatically classifies shape and orientation characteristics of user-selected regions of interest (ROIs). The device uses multivariate pattern recognition methods to perform characterization and classification of images.

    For breast ultrasound, these pattern recognition and classification methods are used by a radiologist to analyze such features as shape, orientation, and putative BI-RADS® category, which can then be used to describe the lesion in the ACR BI-RADS® breast ultrasound lexicon and to assign an ACR BI-RADS® categorization intended to support compliance with the ACR BI-RADS® ultrasound lexicon classification form. Similarly, this process can be used to assist in training, evaluation, and tracking of physician performance.

    The cCAD software can be run on any Windows 7 or higher or Windows Embedded platform that has network, Microsoft IIS, and Microsoft SQL support and is cleared for use in medical imaging. The software does not require any specialized hardware, but the time to process ROIs will vary depending on the hardware specifications. ClearView cCAD is based on core BI-RADS models and lesion characteristic extraction algorithms that can use novel statistical, texture, shape, orientation descriptors, and physician input to help with proper ACR BI-RADS® assessment.

    The ClearView cCAD processing software is a platform agnostic web service that queries and accepts DICOM compliant digital medical files from an ultrasound device, another DICOM source, or PACS server. To initiate analysis and processing, images are queried from a compatible location and loaded for display within the application. The user then selects an ROI to analyze by clicking and dragging a bounding box around the region requiring analysis. Once selected, the user then clicks the processing button which initiates the analysis and processing sequence. The results are displayed to the user on the monitor and can then be selected for automated reporting, storage, or modification. The output of this system will be a DICOM compatible overlay (e.g. grayscale softcopy presentation state (GSPS)) and/or PDF report that can be sent along with the original image to standard film or paper printers or sent electronically to an intranet webserver or other DICOM compatible devices distributed by various OEM vendors. All fields may be modified by the user at any time during the analysis and prior to archiving.
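
The query-select-process flow above can be sketched in a few lines. Everything below is an illustrative stand-in: the pixel array substitutes for DICOM pixel data, and the bounding box and orientation rule are invented for illustration (the cleared device applies proprietary pattern-recognition models to real DICOM input).

```python
import numpy as np

# Stand-in for DICOM pixel data loaded from an ultrasound image.
pixels = np.zeros((480, 640), dtype=np.uint8)

# User-drawn bounding box around the region of interest (hypothetical).
x0, y0, x1, y1 = 120, 80, 260, 200
roi = pixels[y0:y1, x0:x1]

# Toy geometric rule: "wider than tall" maps to a parallel orientation
# in BI-RADS terms. The real classifier is far more sophisticated.
orientation = "parallel" if roi.shape[1] > roi.shape[0] else "not parallel"
print(roi.shape, orientation)  # -> (120, 140) parallel
```

The result would then feed the automated report (GSPS overlay and/or PDF) described above.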

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the ClearView cCAD device, based on the provided text:

    ClearView cCAD Acceptance Criteria and Study Details

    1. Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Stated Goal) | Reported Device Performance
    Overall accuracy of the ClearView cCAD system in discerning BI-RADS® based shape and orientation parameters to fall within the 95% confidence interval of radiologist performance. | Achieved overall accuracy that fell within the 95% confidence interval of the radiologist performance, rendering them statistically equivalent.
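
A containment check of this kind can be made concrete with a standard proportion confidence interval. The counts (1100/1204) and device accuracy (0.92) below are hypothetical, and the summary states neither the raw tallies nor the interval method used; a Wald interval is assumed here purely for illustration.

```python
import math

def wald_ci(correct, total, z=1.96):
    """95% Wald confidence interval for an accuracy proportion."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p - half, p + half

# Hypothetical: radiologists correct on 1100 of 1204 shape calls.
lo, hi = wald_ci(1100, 1204)
device_accuracy = 0.92  # hypothetical standalone device accuracy
print(lo <= device_accuracy <= hi)  # -> True
```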

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • 1204 cases for shape analysis.
      • 1227 lesions for orientation analysis.
    • Data Provenance: Not explicitly stated (e.g., country of origin). The study involved skilled physicians evaluating a dataset, implying medical images, but whether these were retrospective or prospective, or from specific geographical regions, is not mentioned.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three MQSA certified skilled physicians.
    • Qualifications of Experts:
      • Each with over 20 years of experience.
      • Each read at least 3000 images per year.

    4. Adjudication Method for the Test Set

    • Adjudication Method: "Majority decision" was used to establish ground truth for shape and orientation. This implies that if at least two out of the three experts agreed on a characteristic, that was considered the ground truth.
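
A 2-of-3 majority rule is simple to state precisely. The function name and labels below are illustrative, not taken from the submission.

```python
from collections import Counter

def majority_label(labels):
    """Return the label chosen by at least 2 of 3 readers, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

print(majority_label(["oval", "oval", "irregular"]))   # -> oval
print(majority_label(["oval", "round", "irregular"]))  # -> None
```

Cases with no majority (all three readers disagree) would have no defined ground truth under this rule; the summary does not say how such cases, if any, were handled.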

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, a true MRMC comparative effectiveness study was not explicitly stated as having been performed to measure human reader improvement with AI assistance. The study compared the device's standalone performance to expert performance, showing statistical equivalence, but not how human readers' performance might change with the device.
    • Effect size of human reader improvement with AI vs. without AI assistance: Not measured or reported in this document.

    6. Standalone Performance Study

    • Was a standalone study done? Yes. The study focused on the ClearView cCAD system's "ability to discern BI-RADS® based shape and orientation parameters" independently and compared these results to the ground truth established by expert radiologists.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus (majority decision by three MQSA certified skilled physicians).

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated in the provided document. The document describes the "bench testing" for the device's performance but does not specify the size of the dataset used to train the underlying multivariate pattern recognition methods and algorithms.

    9. How Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: Not explicitly stated. While the document mentions that the device uses "multivariate pattern recognition methods to perform characterization and classification of images" and is "based on core BI-RADS models and lesion characteristic extraction algorithms," it does not describe how the ground truth for these training datasets was established.

    K Number
    K133373
    Device Name
    CLEARVIEW
    Date Cleared
    2014-12-03

    (397 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Predicate For
    Intended Use

    ClearView is a CT reconstruction software. The end user can choose to apply either ClearView or filtered back-projection (FBP) to the acquired raw data.

    Depending on the clinical task, patient size, anatomical location, and clinical practice, the use of ClearView can help to reduce radiation dose while maintaining pixel noise, low contrast detectability, and high contrast resolution. Phantom measurements showed that high contrast resolution and pixel noise are equivalent between full dose FBP images and reduced dose ClearView images. Additionally, ClearView can reduce body streak artifacts by using iterations between image space and raw data space.
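
"Iterations between image space and raw data space" is the general principle of iterative reconstruction: compute the mismatch against the raw data, then apply a correction in image space. The toy Landweber-style loop below illustrates that principle only; the forward projector, sizes, and step size are invented and are not ClearView's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))            # toy forward projector
x_true = rng.normal(size=20)             # "true" image
b = A @ x_true                           # noiseless raw data

x = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
for _ in range(5000):
    residual = b - A @ x                 # mismatch in raw-data space
    x += step * (A.T @ residual)         # correction in image space

err = np.linalg.norm(x - x_true)
print(err < 1e-3)  # -> True
```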

    A Model Observer evaluation showed that equivalent low contrast detectability can be achieved with less dose using ClearView at highest noise reduction level for thin (0.625 mm) reconstruction slices in MITA body and ACR head phantoms for low contrast objects with different contrasts.

    ClearView is not intended to be used in CCT and Pilot.

    Device Description

    ClearView reconstruction technology may enable reduction in pixel noise standard deviation and improvement in low contrast resolution. ClearView reconstruction algorithm may allow for reduced mAs in the acquisition of image, thereby it can reduce the dose required.

    In clinical practice, the use of ClearView reconstruction may reduce CT patient dose depending on the clinical task, patient size, and clinical practice. A consultation with a radiologist and physicist should be made to determine the appropriate dose to obtain diagnostic image quality for the particular clinical task.

    As a reconstruction option, ClearView can be selected before scanning or after scanning. There are 9 ClearView Levels, from 10% to 90%. Users can select the level of ClearView that is appropriate for the clinical task being performed. According to the comparison based on the requirements of 21 CFR 807.87, ClearView reconstruction software was stated to be substantially equivalent to the FBP of the NeuViz 64 Multi-Slice CT Scanner System.

    ClearView is a moderate concern device.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the ClearView device as presented in the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document provides performance claims for ClearView, primarily in comparison to Filtered Back Projection (FBP) and related to dose reduction. It doesn't explicitly state "acceptance criteria" in a numerical table form, but rather describes how the device performs against certain metrics.

    Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance
    Radiation Dose | Reduction in radiation dose while maintaining image quality | "can help to reduce radiation dose while maintaining pixel noise, low contrast detectability and high contrast resolution."
    Pixel Noise | Equivalence to full-dose FBP images at reduced dose | "Phantom measurements showed that... pixel noise are equivalent between full dose FBP images and reduced dose ClearView images."
    Low Contrast Detectability | Equivalence to FBP at reduced dose | "A Model Observer evaluation showed that equivalent low contrast detectability can be achieved with less dose using ClearView at highest noise reduction level for thin (0.625 mm) reconstruction slices in MITA body and ACR head phantoms for low contrast objects with different contrasts."
    High Contrast Resolution | Equivalence to full-dose FBP images at reduced dose | "Phantom measurements showed that high contrast resolution and pixel noise are equivalent between full dose FBP images and reduced dose ClearView images."
    Artifact Reduction | Reduction of body streak artifacts | "ClearView can reduce body streak artifacts by using iterations between image space and raw data space."

    2. Sample Size Used for the Test Set and Data Provenance

    The provided text does not contain information about the sample size used for a test set based on human patient data. The evaluations mentioned are based on:

    • Phantom measurements: Performed on "MITA body and ACR head phantoms." No specific sample size (number of phantom scans) is provided.
    • Data Provenance: The studies are described as "Phantom measurements" and "A Model Observer evaluation." This implies laboratory or simulated data, not retrospective or prospective human clinical data. The country of origin of the data is not specified beyond the manufacturer being in China.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    The document does not detail the use of human experts to establish ground truth for a test set. The evaluations primarily relied on phantom measurements and a model observer, which are objective, quantitative metrics. While "A consultation with a radiologist and physicist should be made to determine the appropriate dose to obtain diagnostic image quality for the particular clinical task," this refers to clinical practice guidance and not a component of the device's validation study itself as described.

    4. Adjudication Method for the Test Set

    Not applicable. The reported evaluations (phantom measurements, Model Observer) do not involve human adjudication for a test set's ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The evaluation focuses on standalone phantom performance and model observer results, not on how human readers' performance improves with or without the AI (ClearView) assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes. The described studies are standalone evaluations of the ClearView reconstruction software's performance, primarily using phantom measurements and a "Model Observer evaluation." These do not involve a human in the loop for the performance assessment. The device itself is a reconstruction software.

    7. Type of Ground Truth Used

    The ground truth used in the described studies is:

    • Physical Phantom Measurements: For pixel noise, high contrast resolution, and effectively for low contrast detectability (as measured by the Model Observer on phantoms). These are objective physical properties measured with specific phantoms and known targets.
    • Model Observer Results: Used for low contrast detectability, which is a quantitative measure from a computational model mimicking human perception, applied to phantom data with known contrast objects.

    8. Sample Size for the Training Set

    The document does not provide any information about the sample size used for any training set. ClearView is described as CT reconstruction software, but details on its development or any machine learning training data are absent.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide any information on a training set or how its ground truth might have been established.


    K Number
    K140139
    Device Name
    CLEARVIEWHD
    Date Cleared
    2014-05-28

    (126 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The ClearView Image Enhancement System is intended for use by a qualified technician or diagnostician to reduce speckle noise, enhance contrast, and transfer ultrasound images. The software provides a DICOM-compliant ClearViewHD-enhanced image along with the original ultrasound image for interpretation by the trained physician.

    Device Description

    The ClearViewHD image processing software reduces noise and enhances contrast of medical ultrasound images. The software is a Windows XP or higher, Windows Embedded, and DICOM-compatible platform that may be installed on a standalone PC, laptop, or tablet. The software does not require any specialized hardware, but the time to process an image will vary depending on the hardware specifications. ClearViewHD is based on a core noise reduction and contrast enhancement algorithm that uses novel statistical techniques to determine whether each pixel location is mostly noise or signal (tissue structure), attenuating the regions due to noise while preserving and accentuating the regions due to tissue structure. The statistical method is based on the a priori knowledge that the ultrasound signal is sparse, so compressive sampling theory can be used to reconstruct the signal with fewer samples than the Nyquist rate specifies.
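
The per-pixel "mostly noise vs. signal" decision can be illustrated with a toy rule: low-amplitude samples are treated as speckle and attenuated, while strong structure passes through. The threshold and attenuation values are invented for illustration; the actual ClearViewHD statistics are not disclosed in the summary.

```python
import numpy as np

def suppress_speckle(scan, threshold=0.2, attenuation=0.25):
    """Attenuate low-amplitude samples (speckle), keep strong structure."""
    out = scan.astype(float).copy()
    noise = np.abs(out) < threshold   # classified as "mostly noise"
    out[noise] *= attenuation         # attenuate the noise regions
    return out

scan = np.array([0.05, -0.1, 0.9, 0.02, -0.8])  # toy A-scan samples
print(suppress_speckle(scan).tolist())  # -> [0.0125, -0.025, 0.9, 0.005, -0.8]
```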

    The ClearViewHD image processing software is a DICOM node that accepts DICOM 3.0 digital medical files from an ultrasound device or another DICOM source. ClearViewHD processes the image and returns the original and/or enhanced image to another DICOM node such as a specific PC/workstation or the PACS system. The ClearViewHD software is designed to be compatible with any of the DICOM-compliant medical devices distributed by various OEM vendors.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study information for the ClearViewHD device, based on the provided text:

    Acceptance Criteria and Device Performance

    Metric | Acceptance Criteria (Implicit) | Reported Device Performance
    Speckle Noise Reduction (SNR) | Improvement in SNR | Average improvement in Signal-to-Noise Ratio (SNR) of 12 dB on 10,000 simulated A-Scans.
    Contrast Enhancement (CNR) | Improvement in CNR | Average improvement of 2 times the original Contrast-to-Noise Ratio (CNR).
    Visual Appearance | Less speckle noise, enhanced contrast | Visually confirmed to contain less speckle noise and enhanced contrast.

    Note: The document does not explicitly state numerical acceptance criteria thresholds. Instead, it implies that improvement in SNR and CNR, along with positive visual inspection, constitutes meeting the performance goals.
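
For scale, the reported gains reduce to simple decibel and ratio arithmetic: a 4x cut in noise standard deviation at constant signal is about 12 dB, and halving background noise doubles CNR. The numeric values below are hypothetical illustrations, not the study's measurements.

```python
import numpy as np

def snr_db(signal_mean, noise_std):
    """Amplitude SNR in decibels (20*log10 convention)."""
    return 20 * np.log10(signal_mean / noise_std)

def cnr(roi_mean, bg_mean, bg_std):
    """Contrast-to-noise ratio between a lesion ROI and background."""
    return abs(roi_mean - bg_mean) / bg_std

# Hypothetical: enhancement cuts noise std from 0.4 to 0.1 (a 4x cut).
gain = snr_db(1.0, 0.1) - snr_db(1.0, 0.4)
print(round(gain, 1))  # -> 12.0

# Halving background noise doubles CNR.
ratio = cnr(0.8, 0.4, 0.05) / cnr(0.8, 0.4, 0.1)
print(round(ratio, 6))  # -> 2.0
```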

    Study Information

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: 10,000 simulated A-Scans (for SNR improvement). The number of previously collected clinical images used for CNR and visual inspection is not specified.
    • Data Provenance: Bench testing on phantoms and previously collected clinical images. The country of origin is not specified, and it appears to be retrospective as it uses "previously collected clinical images."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    • No information is provided regarding the number of experts or their qualifications for establishing ground truth for the test set. The evaluation seems to rely on objective metrics (SNR, CNR) and general "visual inspection" by unnamed individuals.

    4. Adjudication Method for the Test Set:

    • Not specified. The document only mentions "visual inspection" alongside objective metric measurements.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No, an MRMC comparative effectiveness study was not reported. The study focuses on the standalone performance of the algorithm in enhancing images, not on human reader performance with or without AI assistance. The indication for use states the enhanced image assists in interpretation by a trained physician, but this is not scientifically measured in the provided summary.

    6. Standalone (Algorithm Only, Without Human-in-the-Loop) Performance Study:

    • Yes, a standalone performance evaluation was done. The bench testing on phantoms and previously collected clinical images directly assesses the algorithm's ability to reduce noise and enhance contrast, independent of human interaction.

    7. The Type of Ground Truth Used:

    • The ground truth for the quantitative metrics (SNR and CNR) appears to be derived from the simulated A-Scans and the original (unenhanced) clinical images, serving as a baseline for measuring improvement. For the visual inspection, the "ground truth" seems to be expert consensus on ideal image quality (less speckle, enhanced contrast).
    • It's not pathology or outcomes data.

    8. The Sample Size for the Training Set:

    • The document does not specify the sample size used for the training set. It only mentions the "core noise reduction and contrast enhancement algorithm" is based on "novel statistical techniques" and "a priori knowledge."

    9. How the Ground Truth for the Training Set was Established:

    • The document does not specify how the ground truth for the training set was established. It describes the algorithm as using "novel statistical techniques" and "a priori knowledge" of ultrasound signal sparsity and compressive sampling theory, suggesting a model-driven approach rather than human-annotated ground truth for training.

    K Number
    K131781
    Device Name
    CLEARVIEW TOTAL
    Date Cleared
    2014-05-28

    (345 days)

    Product Code
    Regulation Number
    884.4530
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The ClearView Total is intended for use in laparoscopic procedures where it is desirable to delineate the vaginal fornices and the surgeon intends to remove or access intraperitoneal tissue through the vagina by use of a colpotomy or culdotomy incision, such as laparoscopically assisted vaginal hysterectomies and total laparoscopic hysterectomies, while maintaining pneumoperitoneum by sealing the vagina while a colpotomy is performed.

    Device Description

    Clinical Innovations' ClearView Total is a single-use sterile device used for uterine manipulation. Uterine manipulation is essential for laparoscopies involving the female pelvic organs (uterus, tubes, ovaries) when a uterus is present. Uterine manipulators may be helpful when clinicians perform tubal ligations, diagnostic laparoscopies for evaluating pelvic pain and infertility, treatment of endometriosis, removal of pelvic scars (adhesions) involving the uterus, fallopian tubes and ovaries, treatment of ectopic pregnancy, removal of uterine fibroids, removal of ovarian cysts, removal of ovaries, tubal repair, laparoscopic hysterectomy, laparoscopic repair of pelvic bowel or bladder, sampling of pelvic lymph nodes, laparoscopic bladder suspension procedures for treatment of incontinence, and biopsy of pelvic masses.

    The ColpoCup accessory is a plastic cup which is mechanically screwed into the uterine manipulator. The ColpoCup is compatible with typical surgical devices, including harmonics and electrosurgical tools. Three different sizes of ColpoCups will be included with the device: 3.0 cm, 3.5 cm, and 4.0 cm. Each ColpoCup will be a high contrast color in order to provide the surgeon with clear visibility during laparoscopic dissection.

    At the base of the ColpoCup, past the tip pivot point, is a pre-attached Occluder, an inflatable balloon included to seal off the vagina and prevent pneumoperitoneum loss. The Occluder Balloon is connected to a separate inflation valve which is located proximally from the balloon and allows for inflation after placement.

    AI/ML Overview

    The provided text is a 510(k) Summary for a medical device called the "ClearView Total," a uterine manipulator. It describes the device, its intended use, and the studies conducted to demonstrate its substantial equivalence to predicate devices, but it does not outline specific acceptance criteria or report performance in the format of a clinical study assessing a device against predefined performance metrics.

    Instead, the document focuses on demonstrating that the ClearView Total is "substantially equivalent" to existing, legally marketed predicate devices through comparison of indications for use, technical characteristics, and various integrity and biocompatibility tests.

    Therefore, many of the requested sections (e.g., sample size for test set, number of experts, adjudication method, MRMC study, ground truth type, training set size) are not applicable or cannot be extracted from this document, as the study described is not a clinical effectiveness study with performance metrics in the way these questions imply for an AI/diagnostic device.

    However, I can extract information related to the device integrity and biocompatibility testing that served as the "study" for this submission.

    Here's a breakdown of the information available based on your request, with relevant sections marked as "Not Applicable" or "Not Provided" where the document does not contain the specific information:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present quantitative acceptance criteria or device performance in the typical format of a clinical study for diagnostic or AI devices. Instead, it states that all tests "met the specified requirements" or "met the appropriate acceptance criteria."

    Acceptance Criteria (Stated as met) | Reported Device Performance
    Accelerated Age Testing requirements | Met specified requirements
    Balloon Leak/Burst Testing requirements | Met specified requirements
    Cup Security and Cup Break requirements | Met specified requirements
    Cytotoxicity standards | Met appropriate acceptance criteria
    Intracutaneous Reactivity Irritation standards | Met appropriate acceptance criteria
    Sensitization standards | Met appropriate acceptance criteria

    2. Sample Size Used for the Test Set and Data Provenance

    This document describes engineering and biocompatibility tests, not a clinical test set with patient data.

    • Sample Size: Not specified (refers to device units tested for engineering and biocompatibility).
    • Data Provenance: Not applicable (these are laboratory/bench tests on device components/materials).

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Number of Experts: Not applicable.
    • Qualifications of Experts: Not applicable.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not applicable. (Testing results would likely be determined by laboratory technicians or engineers against predefined test specifications.)

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Done: No. This document describes a submission for a physical medical device, not an AI or diagnostic algorithm, so an MRMC study is not relevant here.
    • Effect Size of Human Readers with vs. without AI: Not applicable.

    6. Standalone (Algorithm Only) Performance Study

    • Standalone Study Done: No. This device is a physical surgical instrument, not an algorithm.

    7. Type of Ground Truth Used

    • Type of Ground Truth: For the engineering tests (Accelerated Age, Balloon Leak/Burst, Cup Security/Break), the "ground truth" would be the pre-defined engineering specifications and performance limits for the device's physical properties. For biocompatibility tests, the "ground truth" would be established by industry standards (e.g., ISO 10993 series) for material safety.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable. This is not an AI/machine learning device.

    9. How Ground Truth for the Training Set Was Established

    • How Ground Truth Was Established: Not applicable.

    Summary of the Study Description:

    The "study" described in the 510(k) summary involves device integrity testing and biocompatibility testing.

    • Device Integrity Testing: Included Accelerated Age Testing, Balloon Leak/Burst Testing, and Cup Security and Cup Break tests. The document states that "All device integrity tests for the ClearView Total met the specified requirements." These tests would assess the physical and mechanical performance of the device under various conditions to ensure its structural integrity and functionality.
    • Biocompatibility Testing: Included Cytotoxicity, Intracutaneous Reactivity Irritation, and Sensitization tests. The document states that "All testing met the appropriate acceptance criteria." These tests are conducted to ensure that the device materials are safe for contact with human tissue and do not elicit adverse biological reactions.

    The purpose of these studies was to support the claim that the ClearView Total is "substantially equivalent" to predicate devices, meaning it is as safe and effective as devices already on the market, without introducing new questions of safety or effectiveness.


    K Number
    K103610
    Date Cleared
    2011-01-06

    (28 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The Clearview® Exact II Influenza A & B Test is an in vitro immunochromatographic assay for the qualitative detection of influenza A and B nucleoprotein antigens in nasal swab specimens collected from symptomatic patients. It is intended to aid in the rapid differential diagnosis of influenza A and B viral infections. It is recommended that negative test results be confirmed by cell culture. Negative results do not preclude influenza virus infection and should not be used as the sole basis for treatment or other management decisions.

    Device Description

    The Clearview Exact II Influenza A & B Test is an immunochromatographic membrane assay that uses highly sensitive monoclonal antibodies to detect influenza type A and B nucleoprotein antigens in respiratory swab specimens. These antibodies and a control protein are immobilized onto a membrane support as three distinct lines and are combined with other reagents/pads to construct a Test Strip. Nasal swab samples are added to a Coated Reaction Tube to which an extraction reagent has been added. A Clearview Exact II Influenza A & B Test Strip is then placed in the Coated Reaction Tube holding the extracted liquid sample. Test results are interpreted at 10 minutes based on the presence or absence of pink-to-purple colored Sample Lines. The yellow Control Line turns blue in a valid test.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Clearview® Exact II Influenza A & B Test, based on the provided 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the reported performance figures, as the device was deemed "substantially equivalent" which indicates these performance metrics were acceptable to the FDA. The document doesn't explicitly state pre-defined acceptance thresholds, but rather presents the results of the clinical study for evaluation.

    | Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- | --- |
    | Influenza Type A | | |
    | Sensitivity | Adequate for intended use | 94% (95% CI: 83-98%) |
    | Specificity | Adequate for intended use | 96% (95% CI: 93-97%) |
    | Influenza Type B | | |
    | Sensitivity | Adequate for intended use | 77% (95% CI: 67-85%) |
    | Specificity | Adequate for intended use | 98% (95% CI: 96-99%) |
    | Invalid Results Rate | Low | Less than 2% |
    | Analytical Sensitivity (LOD) | Detects at specified concentrations | See detailed table below |
    | Analytical Reactivity | Reacts to specified strains | See detailed table below |
    | Analytical Specificity (Cross-Reactivity) | No cross-reactivity with specified microorganisms | All tested microorganisms were negative |
    | Interfering Substances | No interference with specified substances | No interference for most substances; whole blood interfered with positive samples |
    | Reproducibility (Type A) | | |
    | Moderate Positive Detection | High | 99.2% (119/120) |
    | Low Positive Detection | High | 94.2% (113/120) |
    | High Negative Detection | Low | 9.2% (11/120) |
    | Reproducibility (Type B) | | |
    | Moderate Positive Detection | High | 99.2% (116/120) |
    | Low Positive Detection | High | 94.2% (113/120) |
    | High Negative Detection | Low | 7.5% (9/120) |
    | Negative Samples | 100% negative | 100% negative |
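    The 95% confidence intervals quoted above are standard binomial intervals on an observed proportion. As an illustration only (the 510(k) summary does not state which interval method the sponsor used), a Wilson score interval for the Type A moderate-positive reproducibility result (119/120) can be sketched as:

    ```python
    import math

    def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score confidence interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # Type A moderate-positive reproducibility: 119 of 120 replicates detected
    lo, hi = wilson_ci(119, 120)
    print(f"119/120 = {119/120:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
    ```

    The Wilson interval is used here because it behaves well for proportions near 0 or 1, where the naive normal approximation would spill outside [0, 1].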

    Analytical Sensitivity (LOD) - Reported Device Performance:

    | Influenza Subtype | Concentration (TCID50/ml) | # Detected per Total Tests | % Detected |
    | --- | --- | --- | --- |
    | Influenza A/HongKong/8/68 | 2.37 x 10^4 | 64/66 | 97% |
    | Influenza A/PuertoRico/8/34 | 3.16 x 10^5 | 37/42 | 88% |
    | Influenza B/Malaysia/2506/2004 | 3.00 x 10^6 | 19/20 | 95% |
    | Influenza B/Lee/40 | 4.20 x 10^5 | 19/20 | 95% |
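    The "% Detected" column follows directly from the detection counts; a quick sketch confirming the arithmetic:

    ```python
    # LOD detection counts (detected, total) as reported in the 510(k) summary
    lod_results = {
        "A/HongKong/8/68":      (64, 66),
        "A/PuertoRico/8/34":    (37, 42),
        "B/Malaysia/2506/2004": (19, 20),
        "B/Lee/40":             (19, 20),
    }
    for strain, (detected, total) in lod_results.items():
        # Round to the nearest whole percent, matching the table's precision
        print(f"{strain}: {detected}/{total} = {round(100 * detected / total)}%")
    ```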

    Analytical Reactivity Testing - Reported Device Performance:

    | Influenza Strain | Concentration (TCID50/ml or EIU50/ml) |
    | --- | --- |
    | Flu A/Port Chalmers/1/73 (H3N2) | 5.6 x 10^5 |
    | Flu A/WS/33 (H1N1) | 5.0 x 10^4 |
    | Flu A/Aichi/2/68 (H3N2) | 3.0 x 10^4 |
    | Flu A/Malaya/302/54 (H1N1) | 6.0 x 10^5 |
    | Flu A/New Jersey/8/76 (H1N1) | 2.8 x 10^5 |
    | Flu A/Denver/1/57 (H1N1) | 8.9 x 10^3 |
    | Flu A/Victoria/3/75 (H3N2) | 1.8 x 10^4 |
    | Flu A/Solomon Islands/3/2006 (H1N1) | 1.5 x 10^5 |
    | Flu A/Brisbane/10/07 (H3N2) | 2.5 x 10^6 EIU50/ml |
    | Flu A/Puerto Rico/8/34 (H1N1) | 5.6 x 10^5 |
    | Flu A/Wisconsin/67/2005 (H3N2) | 1.3 x 10^5 |
    | Flu A/Hong Kong/8/68 (H3N2) | 7.9 x 10^3 |
    | Flu A/California/04/2009 (H1N1) | 1.4 x 10^5 |
    | Flu B/Florida/02/2006 | 1.4 x 10^4 |
    | Flu B/Florida/04/2006 | 7.1 x 10^4 |
    | Flu B/Florida/07/04 | 8.5 x 10^4 |
    | Flu B/Malaysia/2506/04 | 1.5 x 10^6 |
    | Flu B/Panama/45/90 | 1.7 x 10^4 |
    | Flu B/R75 | 5.0 x 10^5 |
    | Flu B/Russia/69 | 2.2 x 10^6 |
    | Flu B/Taiwan/2/62 | 1.0 x 10^5 |
    | Flu B/Mass/3/66 | 1.5 x 10^5 |
    | Flu B/Lee/40 | 1.8 x 10^5 |

    2. Sample size used for the test set and the data provenance

    • Sample Size: 478 prospective clinical specimens.
    • Data Provenance: Multi-center, prospective clinical study conducted at seven U.S. trial sites during the 2008-2009 respiratory season. Specimens were nasal swabs collected from symptomatic patients.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The ground truth for the clinical study was established using viral culture. This is a laboratory method and does not involve human experts in the typical "expert consensus" sense for image interpretation or diagnosis. Therefore, information about the number and qualifications of experts in this context is not applicable.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Not applicable, as the ground truth was "viral culture," which is an objective laboratory method rather than expert interpretation requiring adjudication. However, for the 19 samples where the Clearview test was negative for influenza B but viral culture was positive, an investigational RT-PCR assay was used as a follow-up ("Ten (10) of these samples were negative for influenza B by PCR"). This could be seen as a form of secondary verification for discrepant results.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    Not applicable. This device is an immunochromatographic rapid diagnostic test for direct antigen detection, not an AI-powered diagnostic imaging or interpretation tool that assists human readers.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the primary clinical performance study evaluated the device in a standalone manner. The results (sensitivity and specificity) represent the performance of the device itself (Clearview® Exact II Influenza A & B Test) compared to the viral culture gold standard, without human interpretation influence (other than reading the test strip, which is part of the device's intended use and not considered "human-in-the-loop AI assistance"). The reproducibility study also assessed the device's inherent performance characteristics.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth for the clinical study was viral culture.

    8. The sample size for the training set

    This information is not provided in the 510(k) summary. Given that this is an immunochromatographic assay using monoclonal antibodies, it's a traditional in vitro diagnostic, not a machine learning or AI-driven device that requires a "training set" in the computational sense. The "training" of such a device involves developing and optimizing the biochemical components and manufacturing processes, rather than training an algorithm on a dataset.

    9. How the ground truth for the training set was established

    Not applicable, as the device is not an AI/ML-based system requiring a training set with established ground truth in the traditional sense. The analytical studies (sensitivity, reactivity, specificity) demonstrate the performance of the developed assay against known viral strains and other microorganisms.


    K Number
    K092349
    Date Cleared
    2010-05-10

    (279 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The Clearview Exact II Influenza A & B Test is an in vitro immunochromatographic assay for the qualitative detection of influenza A and B nucleoprotein antigens in nasal swab specimens collected from symptomatic patients. It is intended to aid in the rapid differential diagnosis of influenza A and B viral infections. It is recommended that negative test results be confirmed by cell culture. Negative results do not preclude influenza virus infection and should not be used as the sole basis for treatment or other management decisions.

    Device Description

    The Clearview® Exact II Influenza A & B Test is an immunochromatographic membrane assay that uses highly sensitive monoclonal antibodies to detect influenza type A and B nucleoprotein antigens in respiratory swab specimens. These antibodies and a control protein are immobilized onto a membrane support as three distinct lines and are combined with other reagents/pads to construct a Test Strip. Nasal swab samples are added to a Coated Reaction Tube to which an extraction reagent has been added. A Clearview Exact II Influenza A & B Test Strip is then placed in the Coated Reaction Tube holding the extracted liquid sample. Test results are interpreted at 10 minutes based on the presence of pink-to-purple colored Sample Lines. The yellow Control Line turns blue in a valid test.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Clearview® Exact II Influenza A & B Test, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined "acceptance criteria" in numerical terms (e.g., "Sensitivity must be > 90%"). Instead, it presents the device's performance against a gold standard (viral culture) as the evidence for substantial equivalence. The predicate device's performance often serves as an implicit benchmark for acceptance.

    However, we can infer what constitutes acceptable performance from the presented results, as there's no indication that the results were unacceptable.

    | Criterion (Inferred from Performance Data) | Acceptance Criteria (Implicit/Benchmark) | Reported Device Performance |
    | --- | --- | --- |
    | Influenza Type A Detection | | |
    | Sensitivity (vs. Viral Culture) | Likely comparable to predicate device | 94% (95% CI: 83-98%) |
    | Specificity (vs. Viral Culture) | Likely comparable to predicate device | 94% (95% CI: 91-96%) |
    | Positive Predictive Value (PPV) | Likely comparable to predicate device | 63% (95% CI: 52-74%) |
    | Negative Predictive Value (NPV) | Likely comparable to predicate device | 99% (95% CI: 98-100%) |
    | Influenza Type B Detection | | |
    | Sensitivity (vs. Viral Culture) | Likely comparable to predicate device | 78% (95% CI: 68-86%) |
    | Specificity (vs. Viral Culture) | Likely comparable to predicate device | 97% (95% CI: 95-98%) |
    | Positive Predictive Value (PPV) | Likely comparable to predicate device | 84% (95% CI: 74-90%) |
    | Negative Predictive Value (NPV) | Likely comparable to predicate device | 95% (95% CI: 93-97%) |
    | Analytical Sensitivity (LOD 95%) | Likely comparable to predicate device | |
    | A/HongKong/8/68 | Not explicitly stated | 2.37 x 10^4 TCID50/ml (97% detected) |
    | A/PuertoRico/8/34 | Not explicitly stated | 3.16 x 10^5 TCID50/ml (88% detected) |
    | B/Malaysia/2506/2004 | Not explicitly stated | 3.00 x 10^6 TCID50/ml (95% detected) |
    | B/Lee/40 | Not explicitly stated | 4.20 x 10^5 TCID50/ml (95% detected) |
    | Reproducibility | Likely high detection rates for positive samples, very low for negative | |
    | Influenza A Moderate Positive | Not explicitly stated | 99.2% |
    | Influenza A Low Positive | Not explicitly stated | 94.2% |
    | Influenza A High Negative | Not explicitly stated | 9.2% |
    | Influenza B Moderate Positive | Not explicitly stated | 99.2% |
    | Influenza B Low Positive | Not explicitly stated | 96.7% |
    | Influenza B High Negative | Not explicitly stated | 7.5% |
    | Negative Samples (Overall) | Not explicitly stated | 100% (118/118) negative results |
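    Unlike sensitivity and specificity, PPV and NPV depend on disease prevalence in the study population, which the summary does not state. A sketch of the standard Bayes relationship, using an assumed 10% influenza A prevalence (an illustrative value, not from the document) together with the reported 94% sensitivity and 94% specificity, lands close to the reported 63% PPV and 99% NPV:

    ```python
    def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
        """PPV and NPV from sensitivity, specificity, and prevalence (Bayes' rule)."""
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    # prev=0.10 is an assumption for illustration; the study prevalence is not reported
    ppv, npv = predictive_values(sens=0.94, spec=0.94, prev=0.10)
    print(f"PPV ~ {ppv:.1%}, NPV ~ {npv:.1%}")
    ```

    This is why a test with identical sensitivity/specificity can show a modest PPV in a low-prevalence screening population while retaining a very high NPV.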

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 486 prospective specimens
    • Data Provenance:
      • Country of Origin: U.S. (multi-center, seven trial sites)
      • Retrospective or Prospective: Prospective study, conducted during the 2008-2009 respiratory season. Specimens were collected from symptomatic patients.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state the number or specific qualifications of experts involved in establishing the ground truth. It relies on viral culture as the ground truth. Viral culture is a laboratory method, not typically performed by "experts" in the sense of clinicians or radiologists, but by trained laboratory personnel.

    4. Adjudication Method for the Test Set

    The document does not mention an explicit adjudication method (e.g., 2+1, 3+1). The primary comparison is the Clearview® Exact II test result directly against the viral culture result. For discrepant results with Influenza B (19 samples positive by culture, negative by Clearview), an investigational RT-PCR assay was used as a secondary check, showing 10 of these were negative by PCR. This suggests a form of post-hoc investigation for specific discrepancies, rather than a pre-defined adjudication process, but not a consensus reading among multiple human readers.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This study is a standalone performance evaluation of a rapid diagnostic test against a gold standard (viral culture), not a study involving human readers or AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance study was done for the device. The Clearview® Exact II Influenza A & B Test is itself a rapid immunoassay, a "device-only" test. The "performance vs. viral culture" is the standalone performance of the diagnostic test without human interpretation of complex images or data beyond reading simple color lines.

    7. The Type of Ground Truth Used

    The ground truth used for the clinical study was Viral Culture. For the 19 discrepant Influenza B samples, an investigational RT-PCR assay was also used as a secondary reference.

    8. The Sample Size for the Training Set

    The document does not mention a separate "training set" for the clinical performance evaluation. The clinical study described is a prospective validation set. For a device like this, the "training" (development and optimization) would typically involve internal efforts during the assay development process, using laboratory-prepared samples or retrospective samples, but a dedicated "training set" for clinical evaluation is not described for this type of diagnostic device.

    9. How the Ground Truth for the Training Set Was Established

    As no specific "training set" for clinical performance is described, the method for establishing its ground truth is not provided. For analytical studies (e.g., analytical sensitivity, reactivity), the ground truth is typically precisely quantified viral cultures or preparations.


    K Number
    K091766
    Manufacturer
    Date Cleared
    2010-02-24

    (252 days)

    Product Code
    Regulation Number
    866.1640
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The Clearview® Exact PBP2a Test is a qualitative, in vitro immunochromatographic assay for the detection of penicillin-binding protein 2a (PBP2a) in isolates identified as Staphylococcus aureus, as an aid in detecting methicillin-resistant Staphylococcus aureus (MRSA). The Clearview® Exact PBP2a Test is not intended to diagnose MRSA nor to guide or monitor treatment for MRSA infections.

    Device Description

    The Clearview® Exact PBP2a Test is a rapid immunochromatographic membrane assay that uses highly sensitive monoclonal antibodies to detect the PBP2a protein directly from bacterial isolates. These antibodies and a control antibody are immobilized onto a nitrocellulose membrane as two distinct lines and combined with a sample pad, a blue conjugate pad, and an absorption pad to form a test strip.

    Isolates are sampled directly from the culture plate and eluted into an assay tube containing Reagent 1. Reagent 2 is then added and the dipstick is placed in the assay tube. Results are read visually at 5 minutes.

    AI/ML Overview

    The Clearview® Exact PBP2a Test is a rapid immunochromatographic assay for detecting penicillin-binding protein 2a (PBP2a) in Staphylococcus aureus isolates, aiding in the detection of methicillin-resistant Staphylococcus aureus (MRSA).

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria. However, it reports sensitivity and specificity performance values for the device compared to a reference method. We can infer that the reported performance values were considered acceptable for regulatory clearance.

    | Performance Metric | Tryptic Soy Agar with 5% sheep blood | Columbia Agar with 5% sheep blood | Mueller Hinton with 1 µg oxacillin induction |
    | --- | --- | --- | --- |
    | Sensitivity | 98.1% (95% CI: 95.2-99.3%) | 99.0% (95% CI: 96.6-99.7%) | 99.5% (95% CI: 97.4-99.9%) |
    | Specificity | 98.8% (95% CI: 96.5-99.6%) | 98.8% (95% CI: 96.5-99.6%) | 98.8% (95% CI: 96.5-99.6%) |
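    Sensitivity and specificity here are simple ratios over a 2x2 comparison against the cefoxitin disk diffusion reference. The summary does not give the underlying 2x2 counts, so the counts below are hypothetical, chosen only so the ratios match the reported Tryptic Soy Agar figures:

    ```python
    def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
        """Sensitivity and specificity from 2x2 counts vs. the reference method."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts (not from the 510(k) summary), totaling 457 samples,
    # picked to reproduce the reported 98.1% sensitivity / 98.8% specificity
    sens, spec = sens_spec(tp=206, fn=4, tn=244, fp=3)
    print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
    ```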

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size: A total of 457 S. aureus samples were evaluated in the clinical performance study.
    • Data Provenance: The study was a multicenter clinical study conducted in 2009 at three geographically-diverse laboratories. The analytical performance section also mentions bacterial strains obtained from the Network on Antimicrobial Resistance in Staphylococcus aureus (NARSA), American Type Culture Collection (ATCC), and a collection from the Department of Infectious Disease Epidemiology of the Imperial College in London, England. This indicates a mix of strains from reference collections and clinical isolates, and at least some data provenance from England in addition to the diverse US laboratories. The study appears to be retrospective in the sense that existing S. aureus samples were evaluated by the new device.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not mention the use of experts to establish ground truth for the test set.

    4. Adjudication Method for the Test Set

    The document does not mention an adjudication method for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study compares the device's performance to a standard method (cefoxitin disk diffusion), not to human readers' performance with and without AI assistance.

    6. Standalone Performance Study

    Yes, a standalone study was done. The clinical performance study directly evaluated the performance of the Clearview® Exact PBP2a Test against the reference method without human intervention in the interpretation of the device's results, as it is a visual read test that relies on the device's output. The reproducibility study also tested the device in a standalone manner.

    7. Type of Ground Truth Used

    The ground truth used for the clinical performance study was cefoxitin (30 ug) disk diffusion, interpreted according to CLSI (Clinical and Laboratory Standards Institute) standards. This is a recognized phenotypic method for determining methicillin resistance in S. aureus.

    8. Sample Size for the Training Set

    The document does not explicitly mention a dedicated "training set" or its size for the development of the Clearview® Exact PBP2a Test. The analytical performance section mentions that 162 MRSA strains and 112 MSSA strains were tested for analytical reactivity and specificity, which might represent samples used during later stages of development or validation, but it's not explicitly labeled as a training set.

    9. How Ground Truth for the Training Set Was Established

    Since a distinct training set is not explicitly defined, the method for establishing its ground truth is not detailed. However, for the strains used in analytical performance (162 MRSA and 112 MSSA), it is implied that their methicillin-resistant/sensitive status was known, likely established through standard microbiological identification and susceptibility testing methods (e.g., CLSI guidelines, reference lab testing) given their origin from reputable collections (NARSA, ATCC, Imperial College).


    K Number
    K091489
    Manufacturer
    Date Cleared
    2009-09-04

    (107 days)

    Product Code
    Regulation Number
    866.3740
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The Clearview Advanced™ Strep A test is a rapid chromatographic immunoassay for the qualitative detection of Strep A antigen from throat swab specimens as an aid in the diagnosis of Group A Streptococcal infection.

    Device Description

    The Clearview Advanced™ Strep A test is a qualitative, lateral flow immunoassay for the detection of Strep A carbohydrate antigen directly from a throat swab sample. To perform the test, Reagent 1 (R1) is added to the extraction tube, which is coated with a mixture of conjugate antibodies and a lytic enzyme extraction reagent. The lytic enzyme is mixed with colloidal gold conjugated to rabbit anti-Strep A and a second colloidal gold control conjugate antibody. The reagents are dried onto the bottom of an extraction tube forming a red spot. The extraction/conjugate pellet is resuspended with R1 and the throat swab is added to the extraction tube. The Strep A antigen is extracted from the sample and the swab is removed. The test strip is immediately placed in the extracted sample. If Group A Streptococcus is present in the sample, it will react with the anti-Strep A antibody conjugated to the gold particle. The complex will then be bound by the anti-Strep A capture antibody and a visible red test line will appear, indicating a positive result. To serve as an onboard procedural control, the blue line observed at the control site prior to running the assay will turn red, indicating that the test has been performed properly. If Strep A antigen is not present, or present at very low levels, only a red control line will appear. If the red control line does not appear, or remains blue, the test result is invalid.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Clearview Advanced™ Strep A Test, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
    | --- | --- | --- |
    | Clinical Performance | | |
    | Sensitivity | Not explicitly stated, but high sensitivity is crucial for diagnostic tests to minimize false negatives | 91.5% (95% CI: 85.0% to 95.3%) |
    | Specificity | Not explicitly stated, but high specificity is crucial to minimize false positives | 95.0% (95% CI: 90.7% to 97.3%) |
    | Analytical Sensitivity (Limit of Detection, LOD) | Concentration of Group A Streptococcus that produces positive results approximately 95% of the time | 1 x 10^4 organisms/test |
    | Analytical Specificity (Cross-Reactivity) | No false positives when tested against common commensal and pathogenic microorganisms | All 38 tested microorganisms were negative at 1 x 10^6 organisms/test |
    | Reproducibility | Consistent results across different sites, days, and operators, especially for moderate positive and LOD concentrations | Diluent (true negative): 0% (0/179); 1x10^5 (moderate positive): 99% (179/180); 1x10^4 (LOD/C95 concentration): 94% (170/180); 3.2x10^3 (near cut-off/C50 concentration): 49% (88/179) |

    Note on Acceptance Criteria: The document does not explicitly state numerical acceptance criteria for sensitivity and specificity. However, regulatory bodies implicitly expect high performance from diagnostic tests for infectious diseases. The provided confidence intervals indicate a robust performance profile. For analytical sensitivity and specificity, the acceptance criteria are described directly in the text (e.g., "produces positive... approximately 95% of the time" for LOD, and "all... were negative" for cross-reactivity).
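    The C50 and C95 labels denote the concentrations detected 50% and 95% of the time. As a rough illustration (not the sponsor's method, which is not described), the observed hit rates can be interpolated on the logit scale against log10 concentration to recover a C95 estimate near the stated 1 x 10^4 organisms/test:

    ```python
    import math

    def logit(p: float) -> float:
        return math.log(p / (1 - p))

    # (concentration in organisms/test, observed hit rate) from the reproducibility data
    points = [(3.2e3, 88 / 179), (1e4, 170 / 180), (1e5, 179 / 180)]

    def estimate_conc(target: float) -> float:
        """Linearly interpolate logit(hit rate) vs log10(concentration)."""
        t = logit(target)
        for (c0, p0), (c1, p1) in zip(points, points[1:]):
            l0, l1 = logit(p0), logit(p1)
            if l0 <= t <= l1:
                frac = (t - l0) / (l1 - l0)
                return 10 ** (math.log10(c0) + frac * (math.log10(c1) - math.log10(c0)))
        raise ValueError("target hit rate outside observed range")

    c95 = estimate_conc(0.95)
    print(f"estimated C95 ~ {c95:.2e} organisms/test")
    ```

    A formal LOD study would fit a probit or logistic model to many dilution points; this two-segment interpolation is only meant to show how the C50/C95 concentrations relate to the hit-rate data above.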

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Clinical Performance Test Set): A total of 297 throat swab specimens.
    • Data Provenance:
      • Country of Origin: United States.
      • Retrospective or Prospective: Prospective clinical study.
      • Study Design: Multi-center study conducted in 2008-2009 at five geographically diverse physician offices, clinics, and emergency departments.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • The document does not specify the number of experts used to establish the ground truth or their qualifications. The ground truth ("bacterial culture") is an objective laboratory method rather than an expert interpretation in this context.

    4. Adjudication Method for the Test Set

    • The document does not describe an adjudication method. The comparison is directly between the Clearview Advanced Strep A test results and the bacterial culture results.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, an MRMC comparative effectiveness study was not done. This study assesses the performance of a device (immunoassay) against a gold standard (bacterial culture), not how human readers improve with or without AI assistance.

    6. Standalone Performance Study

    • Yes, a standalone performance study was done. The entire clinical performance section describes the algorithm's (device's) performance without human intervention in interpreting the test result. The test is a qualitative, lateral flow immunoassay where the visual appearance of a line directly indicates a positive or negative result, and its accuracy is compared to the bacterial culture.

    7. Type of Ground Truth Used

    • Bacterial Culture. The clinical performance of the Clearview Advanced Strep A test was established by comparing its results to bacterial culture, which is considered the gold standard for diagnosing Group A Streptococcal infection.

    8. Sample Size for the Training Set

    • The document does not specify a separate training set or its sample size. The description of the clinical study refers to the "test set" or "evaluation set" for performance metrics. For traditional immunoassay devices like this, there isn't typically a distinct "training set" in the machine learning sense. The device's design and parameters are developed through analytical studies (e.g., LOD, cross-reactivity) rather than through training on a large dataset of patient samples.

    9. How the Ground Truth for the Training Set Was Established

    • Since there's no explicitly defined "training set" in the context of machine learning for this device, a ground truth establishment method for it is not applicable/not described. The robust design of the immunoassay, informed by analytical studies, serves as its "training" or development process.

    K Number
    K080740
    Date Cleared
    2008-12-31

    (289 days)

    Product Code
    Regulation Number
    874.4760
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    The ClearView™ Endoscope Cover System is indicated as a one-time-use sterile jacket placed over a medical Otolaryngology or Rhinological endoscope during examination procedures conducted by a trained medical professional. It is designed and sized to fit specific manufacturers' endoscope models.

    Device Description

    Flexible Endoscope Barrier Sheath.

    AI/ML Overview

    The provided text is a 510(k) premarket notification approval letter for the "Clearview Endoscope Cover" and does not contain detailed information about acceptance criteria or specific study results that prove the device meets such criteria. A 510(k) summary (which is a separate document typically submitted with the 510(k) application) would generally contain this type of performance data if it were required for substantial equivalence.

    Based on the provided text, the device is a Flexible Endoscope Barrier Sheath (ClearView™ Endoscope Cover System). The FDA's letter indicates that they have reviewed the premarket notification and determined the device is substantially equivalent to legally marketed predicate devices, meaning it does not require a full Premarket Approval (PMA). This type of approval often relies on demonstrating that the new device meets the same performance characteristics as an already approved predicate device, or that any differences do not raise new questions of safety and effectiveness.

    Therefore, I cannot provide the requested information from the given text alone. The document does not include:

    1. A table of acceptance criteria and reported device performance.
    2. Sample size for the test set or data provenance.
    3. Number or qualifications of experts used for ground truth.
    4. Adjudication method for the test set.
    5. Information about a multi-reader multi-case (MRMC) comparative effectiveness study or effect sizes.
    6. Information about a standalone (algorithm only) performance study.
    7. Type of ground truth used.
    8. Sample size for the training set.
    9. How ground truth for the training set was established.

    The letter explicitly states that the FDA "reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent...". This implies that the application likely contained information (e.g., in a 510(k) summary) that demonstrated the device's performance, but those details are not present in this approval letter itself.


    K Number
    K070297
    Date Cleared
    2007-02-05

    (5 days)

    Product Code
    Regulation Number
    872.4200
    Panel
    Dental
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The ClearView™ Handpiece Lubricator is used for the delivery of ClearView™ Handpiece Lubricant to air driven dental handpieces, turbines and air motors for the purpose of maintenance prior to sterilization.

    Device Description

    Lubricator: This information is detailed on the manufacturing drawings and photographs included in the body of this submission.

    Lubricant: ClearView™ Dental Handpiece Lubricant is a liquid lubricant specifically intended for use with the ClearView™ Dental Handpiece Lubricator for the purpose outlined above. Detailed information is contained in the body of this submission.

    AI/ML Overview

    The provided text describes a 510(k) summary for the ClearView™ Dental Handpiece Lubricator and Lubricant, which is a submission to the FDA seeking market clearance based on substantial equivalence to existing predicate devices. This type of submission focuses on comparing the new device's characteristics, safety, and effectiveness to legally marketed predicate devices, rather than presenting a study with specific acceptance criteria and detailed performance data like those found for novel or higher-risk medical devices requiring clinical trials.

    Therefore, based on the provided text, many of the requested details about acceptance criteria and studies (like sample sizes, expert qualifications, adjudication methods, MRMC studies, standalone performance, and ground truth establishment) are not applicable because such a detailed study was not presented or required for this type of 510(k) submission.

    Here's the breakdown of what can be extracted or inferred from the text, and what cannot:


    1. Table of Acceptance Criteria and Reported Device Performance

    As this is a 510(k) substantial equivalence submission for a Class I device, specific quantitative "acceptance criteria" and "reported device performance" in the way one might expect for a new, high-risk device with a clinical trial are not explicitly stated or provided in the document.

    Instead, the acceptance criteria for this type of submission are implicitly "substantial equivalence" to predicate devices in terms of intended use, technological characteristics, and safety and effectiveness. The reported "performance" is a qualitative comparison demonstrating this equivalence.

    | Acceptance Criteria (Implicit for 510(k) Substantial Equivalence) | Reported Device Performance (as stated in the document) |
    | --- | --- |
    | Intended Use Equivalence: The device serves the same purpose as the predicate devices. | The ClearView™ Handpiece Lubricator is used for the delivery of ClearView™ Handpiece Lubricant to air driven dental handpieces, turbines and air motors for the purpose of maintenance prior to sterilization, which aligns with the purpose of predicate devices. |
    | Technological Characteristics Equivalence: The device operates on similar principles and has similar features to the predicate devices. | Lubricator: Both the ClearView™ and the Assistina have a reservoir for lubricant, a cover to contain exhaust, a method of connecting handpieces, push button operation, and a hose and connection to a pressurized air supply. Both devices use pressurized air for power and have no electrical components. Lubricant: Both ClearView™ Dental Handpiece Lubricant and Phase Change Dental Lubricant are liquids suitable for lubrication of dental handpieces. |
    | Safety and Effectiveness Equivalence: The device is as safe and effective as the predicate devices and does not raise new questions of safety or effectiveness. | The differences between ClearView™ and the predicate devices do not raise questions regarding safety and effectiveness. The device employs the same technological characteristics to support its intended use. Both lubricant types are considered completely safe under normal usage and have no expected hazards. |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    Not Applicable. The document does not describe a "test set" in the context of performance testing with a specific sample size. The review focuses on comparing the device's design and intended function to predicate devices, not on a clinical or performance study with a distinct test dataset.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not Applicable. No ground truth establishment for a test set is described. The entire submission is reviewed by the FDA, with the "ground truth" essentially being the established safety and effectiveness of the predicate devices.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not Applicable. No test set or expert adjudication process is described for this type of submission.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement in human reader performance with AI assistance versus without?

    Not Applicable. This is a mechanical device (lubricator and lubricant), not an AI/software device. Therefore, no MRMC study or AI assistance evaluation was performed or is relevant.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Not Applicable. This is a mechanical device; there is no "algorithm only" performance to evaluate.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The "ground truth" in a 510(k) substantial equivalence submission for this type of device is the established safety and effectiveness of the legally marketed predicate devices. The applicant's submission aims to demonstrate that its new device is "substantially equivalent" to these already-approved devices, meaning it works comparably and is just as safe and effective.

    8. The sample size for the training set

    Not Applicable. No "training set" in the context of machine learning or complex algorithm development is mentioned or relevant for this mechanical device submission.

    9. How the ground truth for the training set was established

    Not Applicable. As no training set is described, the process for establishing its ground truth is also not applicable.
