510(k) Data Aggregation

    K Number
    DEN100016

    Date Cleared
    2012-04-27

    (707 days)

    Product Code
    Regulation Number
    876.2050
    Age Range
    All
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    The Prostate Mechanical Imager (PMI) is indicated for the production of an elasticity image of the prostate as an aid in documenting prostate abnormalities that were previously identified by digital rectal examination (DRE). The device utilizes a transrectal probe with pressure sensor arrays and a motion tracking system and provides real-time elasticity images of the prostate. This device is limited to use as a documentation tool and therefore is not to be used for cancer diagnosis or for any other diagnostic purpose. This device is only to be used to image and document an abnormality that was already identified by DRE. Clinical management decisions are not to be made on the basis of information from the PMI device, but rather on the basis of the DRE. If there is disagreement between the DRE and the recorded image produced by the device, patient management decisions are to be based on the DRE and other available clinical and diagnostic information (e.g., prostate-specific antigen (PSA) levels) in accordance with standard medical practice.

    Device Description

    The Prostate Mechanical Imager (PMI) is an electronic palpation device that is meant to mimic the digital rectal examination (DRE) by generating images of pressure patterns of the palpated prostate. The information provided by the device is characterized as similar in nature to the information obtained from a standard DRE (i.e., determination of regions of relative tissue hardness within the prostate), with the added utility that the results are visually displayed and can be electronically saved, transmitted, and/or printed out for documentation in the patient's medical record.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study information for the Prostate Mechanical Imager (PMI), based on the provided text:

    Prostate Mechanical Imager (PMI) Acceptance Criteria and Study Details

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state numerical "acceptance criteria" for the device's imaging performance; instead, it describes capabilities and outcomes. Based on the Special Controls and Risks to Health sections, the primary acceptance criteria relate to the device's ability to consistently produce an accurate, reproducible image while ensuring safety and proper use.

    The criteria below are grouped by acceptance criteria category; each specific criterion (from the Special Controls/Risks) is followed by the reported device performance.

    Imaging Performance, Accuracy & Reproducibility

    • Accuracy of the constructed image
      • Bench Testing: Initially designed and evaluated using soft tissue phantoms with hard inclusions, demonstrating detection of hard nodules. Comparison against manual palpation and pathology on 9 excised prostates showed PMI abnormalities correlated closely with palpated nodules and pathology findings. Performance testing on 24 prostate models (720 examinations across 5 operators/systems) demonstrated reliable visualization of abnormalities and production of nodule images in prostate models.
      • Clinical Study (Earlier Generation - 2006): Sufficient for image reconstruction in 84% (141/168) of study cases.
      • Clinical Study (Latest Generation - 2009): Capable of visualizing the prostate in 98% (55/56) of patients; agreement between DRE and PMI determinations regarding the presence of an abnormality in 89% of cases.
    • Reproducibility of the constructed image
      • Bench Testing: Demonstrated reproducibility over time, inter-system reproducibility, and inter-operator reproducibility for generating a real-time digital image.

    Safety and Usability (Indirectly related to imaging)

    • Electromagnetic Compatibility: Complies with IEC 60601-1-2 (edition 2.1, Amendment 1: 2004) and related immunity standards.
    • Electrical Safety: Complies with IEC 60601-1 (1998: Amendment 1 and Amendment 2).
    • Thermal Safety: Complies with IEC 60601-1 (1998: Amendment 1 and Amendment 2).
    • Mechanical Safety: Complies with IEC 60601-1 (1998: Amendment 1 and Amendment 2).
    • Biocompatibility: Probe sheath and lubricant are 510(k)-cleared; direct mucosal contact (< 5 minutes) is consistent with cleared uses.
    • Software Verification, Validation, Hazard Analysis: Comprehensive documentation provided, deemed sufficient for a minor level of concern; software performs as intended, and risks are mitigated.
    • Reprocessing Methods for Reusable Components: Labeling provided for single-use disposables and instructions for reusable components (covered by disposable system cover, specific reprocessing instructions). (Validation details not provided in this summary, but mentioned as a special control.)

    Labeling/Instruction Effectiveness (Mitigation for risk)

    • User error, misinterpretation, failure to image: Labeling provides explicit instructions on limitations (not diagnostic, DRE aid only), meaning of colors, initialization/calibration, redoing examinations, and conditions for not saving/printing (if images disagree with DRE findings). Clinical Study (Latest Generation - 2009): average ease-of-use score of 4.4 out of 5 from clinical investigators; no safety concerns or side effects.
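
    As a quick arithmetic check, the whole-number percentages above follow directly from the stated counts. A minimal Python sketch (the counts are from the summary; the one-decimal formatting is an assumption about rounding):

        # Recompute the reported success rates from the counts quoted in the summary.
        def pct(successes, total):
            return 100.0 * successes / total

        print(f"{pct(141, 168):.1f}%")  # 2006 study, image reconstruction: 83.9% (reported as 84%)
        print(f"{pct(55, 56):.1f}%")    # 2009 study, prostate visualization: 98.2% (reported as 98%)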

    2. Sample size used for the test set and the data provenance

    • Bench Testing:

      • Test Set Size: 24 prostate models were used, examined by 5 operators using 5 PMI systems, for a total of 720 examinations.
      • Data Provenance: Not explicitly stated, but the description implies a controlled laboratory environment with specially designed prostate phantoms, likely in the manufacturer's country (Artann Laboratories, Inc., USA). This was a prospective bench (simulation) study.
    • Clinical Study (Earlier Generation - 2006):

      • Test Set Size: 168 clinical cases.
      • Data Provenance: Not explicitly stated, but referenced as "clinical testing." Likely prospective clinical data collected in a medical setting.
    • Clinical Study (Latest Generation - 2009):

      • Test Set Size: 56 clinical cases with prostate abnormalities detected by DRE.
      • Data Provenance: Prospective clinical study, collected by five clinical investigators from five clinical sites (country not specified, but likely US given the FDA submission).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Bench Testing:

      • Ground Truth Establishment: The ground truth for the prostate models was inherent in their design: "Hard nodules were positioned in specific locations within the prostate models to simulate various prostate elasticity distributions." The "experts" would be the designers and manufacturers of these phantoms, ensuring the known "abnormalities."
      • Qualifications: Not specified, but assumed to be engineers and scientists involved in phantom design.
    • Clinical Study (Earlier Generation - 2006):

      • Ground Truth Establishment: "Following a standard DRE performed by an urologist."
      • Number of Experts/Qualifications: At least one urologist per site (not specified how many total, but implied multiple as it was "clinical testing"). Urologists are specialists in prostate examination.
    • Clinical Study (Latest Generation - 2009):

      • Ground Truth Establishment: "prostate abnormalities detected by DRE" (Digital Rectal Examination).
      • Number of Experts/Qualifications: Five clinical investigators (urologists or other qualified medical professionals performing DREs) from five clinical sites. Their specific qualifications (e.g., years of experience) are not detailed but the term "clinical investigators" implies medically qualified personnel.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    The document does not detail a specific adjudication method like "2+1" or "3+1" for establishing ground truth for either the bench or clinical test sets.

    • Bench Testing: Ground truth was by design within the phantoms.
    • Clinical Studies: The "ground truth" was established by the DRE findings of a single examiner (urologist/clinical investigator) per case. The "agreement between DRE and PMI determinations" implies a comparison with this single DRE finding, not an adjudication process among multiple DREs.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • A Multi-Reader Multi-Case (MRMC) comparative effectiveness study, in the sense of human readers improving with AI assistance vs. without, was not explicitly described for the clinical context.
    • The PMI is a documentation tool meant to mimic DRE, not an AI diagnostic aid that assists a human in interpretation. Its purpose is to visualize and document abnormalities already identified by DRE.
    • However, the bench testing did involve multiple "operators" (5 operators) using multiple "systems" (5 PMI systems) on multiple "cases" (24 prostate models over 720 examinations) to assess reproducibility. This has elements of an MRMC study for device performance, but not for human interpretive improvement with the device.
    • The clinical study (latest generation) involved "multiple users" (5 clinical investigators) on "56 clinical cases" to test imaging capability and confirm that modifications did not negatively impact performance and usability. This also has MRMC aspects, but again, the focus was on the device's ability to image and agree with DRE, not on improving human diagnostic accuracy with the PMI's assistance. The statement "agreement between DRE and PMI determinations regarding the presence of an abnormality in 89% of the studied cases" is a measure of concordance, not an effect size of human improvement.
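
    To make that distinction concrete, percent agreement (concordance) between paired DRE and PMI determinations can be computed as below. The per-case values here are hypothetical; only the idea of pairing one DRE finding with one PMI finding per case comes from the summary.

        # Hypothetical paired findings per case: True = abnormality reported present.
        # This illustrates a concordance (percent agreement) calculation, not an MRMC
        # effect size, which would instead compare reader accuracy with vs. without
        # device assistance against an independent reference standard.
        dre_findings = [True, True, False, True, False, True, True, False]
        pmi_findings = [True, True, False, False, False, True, True, True]

        agreements = sum(d == p for d, p in zip(dre_findings, pmi_findings))
        print(f"Percent agreement: {100 * agreements / len(dre_findings):.0f}%")  # 75% for this toy data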

    6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done

    • Yes, in essence, standalone performance was evaluated. The PMI device itself generates the elasticity image (algorithm only). The "performance testing - bench" section, describing the device's ability to visualize abnormalities and produce images of nodules in prostate models, is a standalone evaluation of the algorithm's output against a known physical ground truth (the phantom's embedded nodules); a toy sketch of such an evaluation appears after this list.
    • The clinical studies also evaluated the device's ability to visualize the prostate and correlate with DRE findings, essentially assessing its standalone image generation and its agreement with clinical findings. The device is explicitly not for diagnostic purposes or for influencing clinical decisions, underscoring its role as a tool producing an image rather than an interpretive aid.
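
    A toy illustration of that kind of standalone check, assuming (hypothetically) that each phantom's embedded nodule position is known and the device reports a detected nodule center; neither the data nor the matching tolerance comes from the document:

        # Hypothetical standalone check: is each detected nodule center within a
        # tolerance of the known (designed-in) inclusion position in the phantom?
        import math

        known_nodules = [(1.0, 2.0), (3.5, 1.2), (2.2, 4.1)]      # designed positions (cm), made up
        detected_nodules = [(1.1, 2.1), (3.4, 1.0), (2.9, 4.8)]   # device-reported positions (cm), made up
        tolerance_cm = 0.5                                        # assumed matching tolerance

        hits = sum(
            math.dist(truth, found) <= tolerance_cm
            for truth, found in zip(known_nodules, detected_nodules)
        )
        print(f"Detected within tolerance: {hits}/{len(known_nodules)}")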

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Bench Testing:

      • Known physical properties and design: For the phantoms, the ground truth was the "hard inclusions of different sizes prepositioned at different depths" and "Hard nodules were positioned in specific locations within the prostate models." This is essentially a known, controlled physical ground truth.
      • Pathology: For the excised prostate glands, ground truth was histological (pathology) findings from all posterior segments..., which were compared to the PMI-generated images.
    • Clinical Studies (both 2006 and 2009):

      • Expert Clinical Assessment (DRE): The ground truth was "prostate abnormalities detected by DRE," performed by urologists or clinical investigators.

    8. The sample size for the training set

    The document does not specify a sample size for a training set. This suggests that if machine learning was used, the training data details were not included in this summary, or the device relies more on physical modeling and signal processing rather than a data-driven machine learning model requiring an explicit training set as commonly understood in modern AI/ML. The description of its principle of operation focuses on sensing pressure differences and converting them into elasticity images using established physical principles and calibrated sensor data.
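
    As a loose illustration of that principle only (this is not the manufacturer's algorithm; the array size, values, and normalization are all assumptions), converting a pressure-sensor array response into a relative-hardness map could look like:

        import numpy as np

        # Pressure response (kPa) of a small sensor array while the probe compresses
        # the tissue; the harder region (center) pushes back more. Values are made up.
        response = np.array([
            [1.0, 1.1, 1.0, 1.2],
            [1.1, 2.8, 3.0, 1.1],
            [1.0, 2.9, 3.1, 1.2],
            [1.1, 1.0, 1.1, 1.0],
        ])

        # Normalize to [0, 1] so the map can be rendered as a color-coded elasticity image.
        hardness_map = (response - response.min()) / np.ptp(response)
        print(hardness_map.round(2))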

    9. How the ground truth for the training set was established

    Since no explicit training set sample size was provided, the method for establishing its ground truth is also not described. If "training" refers to the sensor calibration, it was established by placing the probe into a calibration chamber and applying known pressure sequences (0-30 kPa).
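
    If that calibration is, for instance, a linear fit of raw sensor readings to the known applied pressures (the 0-30 kPa range is from the summary; the linear model and raw values below are assumptions), it might be sketched as:

        import numpy as np

        # Known pressures applied in the calibration chamber (kPa) and the raw
        # sensor readings recorded at each step (hypothetical values).
        applied_kpa = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
        raw_counts = np.array([102, 356, 610, 859, 1115, 1361, 1618])

        # Least-squares linear fit: pressure ≈ gain * counts + offset.
        gain, offset = np.polyfit(raw_counts, applied_kpa, deg=1)

        def counts_to_kpa(counts):
            return gain * counts + offset

        print(counts_to_kpa(900))  # ≈ 16 kPa for a mid-range reading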
