Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K230136
    Date Cleared
    2023-04-24

    (96 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    A cabinet X-ray system used to provide digital X-ray images of surgical and core biopsy specimens from various anatomical regions in order to allow rapid verification that the correct tissue has been excised during the procedure.

    Doing the verification in the same room as the procedure or nearby improves workflow, thus reducing the overall operative time.

    Device Description

    The TrueView 200 Pro-US is a cabinet X-ray system intended to provide detailed radiographic imaging of small surgically excised or biopsy specimens and rapid verification that the correct tissue has been excised. The TrueView 200 Pro-US includes the following major components: a system monitor, a touch-screen control display, and an imaging cabinet.

    This all-in-one system includes shielding that is incorporated within the cabinet chamber system design, eliminating the need for separate shielding. The unit is mounted on casters for easy transportation.

    AI/ML Overview

    The provided document is a 510(k) premarket notification for the TrueView 200 Pro-US Specimen Radiography System. It primarily focuses on demonstrating substantial equivalence to a predicate device (TrueView 100 Pro) rather than describing a study proving the device meets specific acceptance criteria for AI performance.

    Therefore, many of the requested details, particularly those related to AI algorithm performance (e.g., sample sizes for test/training sets, expert ground truth, MRMC studies, specific acceptance criteria for AI metrics), are not present in this document. This device is a cabinet X-ray system, suggesting it's hardware for imaging, not an AI-powered diagnostic tool. The document doesn't mention any AI components or algorithms that would require such specific performance testing.

    Here's what can be extracted from the document, and where information is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" in the context of AI performance metrics (e.g., sensitivity, specificity, AUC). Instead, it focuses on demonstrating compliance with recognized standards and functional performance of the radiography system.

    | Criteria Type | Description (from document) | Performance/Compliance (from document) |
    |---|---|---|
    | Standards Compliance | ANSI UL 61010-1 3rd Ed; IEC 61010-2-091:2019; IEC 61010-2-101:2018; IEC 61326-1 Edition 3.0 2020-10; IEC 61326-2-6 Edition 3.0 2020-10; ISTA 3B-2017; 21 CFR 1020.40 | Complies with the applicable IEC 61010 standards (general electrical safety, including mechanical hazards, plus the particular standards for cabinet X-ray systems) and international EMC standards. Compliance demonstrated by Intertek (third-party test house). |
    | Functional Performance (Bench Testing) | Functional testing; usability testing | Design control verification and validation tests were successfully performed. Results support substantial equivalence. |
    | Imaging Performance Parameters | Limiting spatial resolution; output image; display monitor; time to preview; cycle time | 10 lp/mm; 14-bit image data; 2.3 MP high-luminescence diagnostic monitor; < 20 seconds; < 60 seconds (these match the predicate device, implying satisfactory performance). |
    | Safety Features | Door interlock; passcode key; fully shielded | Complies; supports substantial equivalence. |

    Missing: No specific quantitative acceptance criteria or reported performance for AI metrics (e.g., sensitivity, specificity, F1-score) are provided. This is consistent with the device being an X-ray system, not an AI diagnostic tool.
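
    To make the quoted imaging and timing figures concrete, here is a minimal Python sketch. The detector pixel pitch and the "measured" values below are hypothetical assumptions for illustration only, not data from the submission; only the limits (10 lp/mm, 14-bit data, < 20 s preview, < 60 s cycle) come from the table above.

```python
# Illustrative relationships behind the quoted imaging parameters.
# Measured values below are hypothetical placeholders, not submission data.

def nyquist_limited_resolution_lp_per_mm(pixel_pitch_mm: float) -> float:
    """Nyquist-limited spatial resolution for a given detector pixel pitch."""
    return 1.0 / (2.0 * pixel_pitch_mm)

def gray_levels(bit_depth: int) -> int:
    """Number of distinct gray levels for a given image bit depth."""
    return 2 ** bit_depth

# A 50 µm (0.05 mm) pixel pitch would be needed to support 10 lp/mm.
assert abs(nyquist_limited_resolution_lp_per_mm(0.05) - 10.0) < 1e-9

# 14-bit image data corresponds to 16,384 gray levels.
assert gray_levels(14) == 16_384

# Hypothetical bench measurements checked against the quoted limits.
measured = {"time_to_preview_s": 15.2, "cycle_time_s": 48.7}
limits = {"time_to_preview_s": 20.0, "cycle_time_s": 60.0}
assert all(measured[k] < limits[k] for k in limits)
```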


    2. Sample size used for the test set and the data provenance

    Not applicable/Not provided for AI performance. The document mentions "design control verification tests and validation tests" and "bench testing, including functional testing and usability testing," but does not specify sample sizes for these tests or their data provenance (country of origin, retrospective/prospective). Because this is a hardware performance evaluation, the "test set" refers to test conditions and specimens rather than patient data for AI validation.


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable/Not provided. As there's no mention of an AI component requiring diagnostic performance validation with expert ground truth, this information is not relevant to the document's scope.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable/Not provided. No adjudication method is mentioned, as there is no diagnostic AI component requiring human expert review for ground truth establishment.


    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it

    Not applicable. No MRMC study was conducted or reported, as this device is an imaging system and not described as having an AI assistance feature for human readers.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not applicable. There is no mention of a standalone algorithm performance study, as there is no specific AI algorithm described within the device's functionality.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable/Not provided. Ground truth, in the context of diagnostic performance, is not discussed as the device is a specimen radiography system, not a diagnostic AI. The "ground truth" for this device's performance would likely relate to image quality parameters (e.g., resolution, contrast) and system functionality, verified through established engineering and quality control methods.


    8. The sample size for the training set

    Not applicable/Not provided. No training set for an AI algorithm is mentioned.


    9. How the ground truth for the training set was established

    Not applicable/Not provided. No ground truth for a training set is mentioned.


    In summary: The provided FDA 510(k) clearance document for the TrueView 200 Pro-US Specimen Radiography System focuses on establishing substantial equivalence to a predicate device based on its intended use, technological characteristics, and compliance with general safety and performance standards for X-ray systems. It does not detail the validation of an AI component with specific diagnostic performance metrics, as the device itself is a hardware imaging system, rather than an AI-powered diagnostic tool.
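
    For the aggregation itself, the record's key fields can be modeled directly. A minimal sketch follows; the class and field names are illustrative rather than an FDA schema, and the derived receipt date assumes the 96-day figure is the interval from receipt to clearance.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Clearance510k:
    """Key fields aggregated for a single 510(k) record (illustrative schema)."""
    k_number: str
    date_cleared: date
    review_days: int
    regulation_number: str
    predicate_k_numbers: tuple[str, ...] = ()

    @property
    def date_received(self) -> date:
        # Derived on the assumption that review_days spans receipt to clearance.
        return self.date_cleared - timedelta(days=self.review_days)

record = Clearance510k(
    k_number="K230136",
    date_cleared=date(2023, 4, 24),
    review_days=96,
    regulation_number="892.1680",
)
print(record.date_received)  # 2023-01-18, under the assumption above
```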


    K Number
    K230140
    Date Cleared
    2023-04-24

    (96 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    A cabinet X-ray system used to provide digital X-ray images of surgical and core biopsy specimens from various anatomical regions in order to allow rapid verification that the correct tissue has been excised during the procedure. Doing the verification in the same room as the procedure or nearby improves workflow, thus reducing the overall operative time.

    Device Description

    The TrueView Core 100Pro-US Core Specimen Radiography System (CSRS) is a cabinet X-ray system intended to provide detailed radiographic imaging of small surgically excised or biopsy specimens and rapid verification that the correct tissue has been excised. The TrueView Core 100Pro-US includes the following major components: a touch-screen control display and an imaging cabinet. This all-in-one system includes shielding that is incorporated within the cabinet chamber system design, eliminating the need for separate shielding.

    AI/ML Overview

    The provided FDA submission for the TrueView Core 100Pro-US Core Specimen Radiography System is not an AI/ML device. It is a cabinet X-ray system used for digital X-ray imaging of surgical and core biopsy specimens. Therefore, the specific criteria requested for AI/ML devices regarding acceptance criteria, study details, human reader improvement, and ground truth establishment (for training/test sets) are not applicable to this submission.

    The document discusses performance data related to the device's electrical safety, mechanical hazards, and electromagnetic compatibility, as shown by compliance with IEC 61010 standards and various functional and usability tests.

    Here is a summary of the information that is applicable and found in the document:

    1. A table of acceptance criteria and the reported device performance:

    The document primarily focuses on compliance with standards rather than specific acceptance criteria in the context of diagnostic accuracy for AI/ML.

    | Acceptance Criteria (Standards Compliance) | Reported Device Performance |
    |---|---|
    | ANSI UL 61010-1 3rd Ed, May 12, 2012 | Complies |
    | IEC 61010-2-091:2019 | Complies |
    | IEC 61010-2-101:2018 | Complies |
    | IEC 61326-1 Ed 3.0 2020-10 | Complies |
    | IEC 61326-2-6 Ed 3.0 2020-10 | Complies |
    | ISTA 3B-2017 | Complies |
    | 21 CFR 1020.40 | Complies |
    | Functional testing | Successfully performed |
    | Usability testing | Successfully performed |
    | Time to Preview | < 20 seconds |
    | Cycle Time | < 60 seconds |
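
    Because the reported performance here reduces to a compliance checklist plus two timing limits, it can be restated programmatically. A minimal sketch follows; the dictionary simply mirrors the table above, and the flagging logic is illustrative rather than part of the submission.

```python
# Entries restate the table above; nothing here adds new test data.
standards_results = {
    "ANSI UL 61010-1 3rd Ed": "Complies",
    "IEC 61010-2-091:2019": "Complies",
    "IEC 61010-2-101:2018": "Complies",
    "IEC 61326-1 Ed 3.0 2020-10": "Complies",
    "IEC 61326-2-6 Ed 3.0 2020-10": "Complies",
    "ISTA 3B-2017": "Complies",
    "21 CFR 1020.40": "Complies",
    "Functional testing": "Successfully performed",
    "Usability testing": "Successfully performed",
}

# Flag anything that is neither "Complies" nor "Successfully performed".
failures = [name for name, result in standards_results.items()
            if result not in ("Complies", "Successfully performed")]
assert not failures, f"Non-complying items: {failures}"

# Timing criteria quoted for the device (upper limits, in seconds).
timing_limits_s = {"Time to Preview": 20.0, "Cycle Time": 60.0}
```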

    2. Sample size used for the test set and the data provenance: Not applicable, as this is not an AI/ML device with a diagnostic algorithm. Performance testing focused on hardware safety and functionality.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it: Not applicable.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not applicable.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable.

    8. The sample size for the training set: Not applicable.

    9. How the ground truth for the training set was established: Not applicable.

    In summary, the document describes a traditional medical device (an X-ray system) and details its compliance with relevant safety and performance standards through bench testing, rather than an AI/ML diagnostic system that would require a test set, ground truth, and expert evaluations.

