
510(k) Data Aggregation

    K Number: K211317
    Device Name: CRUXVIEW
    Manufacturer: CRUXELL Corp.
    Date Cleared: 2021-07-29 (90 days)
    Product Code:
    Regulation Number: 872.1800

    Intended Use

    The CRUXVIEW is software intended for using and managing dental x-ray images sent by the Cruxcan image plate scanner, storing the images, and allowing the user to process and examine the images in order to achieve improved diagnoses.

    Device Description

    The CRUXVIEW is dental x-ray image management software that provides features to acquire, transfer, edit, display, and store scanned dental images, specifically from the 510(k)-cleared device Cruxcan (CRX-1000) (K183637). It also provides a server/client model that allows users to upload and download images and patient information from any workstation in the network environment. CRUXVIEW consists of two parts: the CRUXVIEW Viewer and the CRUXVIEW Server. The CRX-1000 sends the scanned image to the CRUXVIEW Viewer, the Viewer sends the received image to the CRUXVIEW Server, and the Viewer can then search the images stored on the Server and display them to the user. The subject device offers various functions for users' needs, including length and angle measurement functions, and it supports DICOM file formats.
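
    The submission does not describe how the measurement functions are implemented. As a rough illustration only, a minimal sketch of what a length or angle measurement over a 2D dental image typically reduces to is shown below: geometry on pixel coordinates scaled by the image's pixel spacing. The helper names and the 0.025 mm pixel-spacing example are assumptions for illustration, not values taken from the CRUXVIEW documentation.

```python
import math

# Hypothetical helpers showing the kind of geometry a length/angle
# measurement tool computes over pixel coordinates; they are not taken
# from the CRUXVIEW software itself.

def length_mm(p1, p2, pixel_spacing_mm):
    """Euclidean distance between two pixel coordinates, scaled to mm."""
    dx = (p2[0] - p1[0]) * pixel_spacing_mm
    dy = (p2[1] - p1[1]) * pixel_spacing_mm
    return math.hypot(dx, dy)

def angle_deg(vertex, a, b):
    """Interior angle at `vertex` formed by points a and b, in degrees."""
    ang = math.degrees(
        math.atan2(b[1] - vertex[1], b[0] - vertex[0])
        - math.atan2(a[1] - vertex[1], a[0] - vertex[0])
    )
    ang = abs(ang) % 360
    return min(ang, 360 - ang)

# Example: two points 400 pixels apart on an image with 0.025 mm pixel spacing.
print(length_mm((100, 100), (100, 500), pixel_spacing_mm=0.025))  # 10.0 (mm)
print(angle_deg((0, 0), (1, 0), (0, 1)))                          # 90.0 (degrees)
```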

    AI/ML Overview

    The provided text describes the CRUXVIEW, a dental x-ray image management software, and notes its substantial equivalence to the predicate device, DIGORA FOR WINDOWS 2.0. However, the document does not contain the detailed study information required to fully answer your request regarding acceptance criteria and performance study details.

    Specifically, the document mentions "Performance test for accuracy of measurement function" and "SW Validation for Viewer & Server" but does not provide details on the acceptance criteria, study methodology, sample sizes, ground truth establishment, or expert involvement.

    Here's a breakdown of what can be extracted and what information is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    This information is not provided in the document. The document states that "The test results of the tests performed on the subject device supported that it is substantially equivalent to the predicate device," but it does not specify the acceptance criteria used for these tests or the quantitative performance metrics achieved.


    2. Sample Size Used for the Test Set and Data Provenance

    This information is not provided in the document.


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    This information is not provided in the document.


    4. Adjudication Method for the Test Set

    This information is not provided in the document.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its Effect Size

    This information is not provided in the document. The document mentions "SW Validation for Viewer & Server" and "Performance test for accuracy of measurement function," which suggests technical validation, but not a clinical comparative effectiveness study with human readers.


    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study was done

    The document mentions "Performance test for accuracy of measurement function" and "SW Validation for Viewer & Server." While these are standalone tests of the software's functions, they are not presented as a "standalone (algorithm only)" performance study in the sense of a diagnostic effectiveness study, but rather as functional verification. Details on the methodology and results are not provided.


    7. The Type of Ground Truth Used

    This information is not provided in the document.


    8. The Sample Size for the Training Set

    This information is not provided in the document.


    9. How the Ground Truth for the Training Set was Established

    This information is not provided in the document. Given that the CRUXVIEW is software for managing and processing images from a cleared device (Cruxcan image plate scanner) rather than an AI-driven diagnostic tool generating new findings, the concept of a "training set" for AI algorithms and associated ground truth establishment may not fully apply in the same way as it would for a machine learning-based diagnostic device. However, without details, this remains an assumption.


    K Number: K183637
    Manufacturer: Cruxell Corp.
    Date Cleared: 2019-02-12 (48 days)
    Product Code:
    Regulation Number: 872.1800

    Intended Use

    The CRUXCAN (CRX-1000) is indicated for the capture, digitization, and processing of intra-oral x-ray images stored in imaging plate recording media.

    Device Description

    The CRUXCAN (CRX-1000) is a dental computed radiography system intended to produce digital X-ray images for dental radiography purposes. It consists of a scanner, reusable imaging plates, and workstation software. The device scans an X-ray-exposed imaging plate and produces an X-ray image in digital form; the digital image is then transferred to a workstation for further processing and routing. The design features a built-in erase function and a color touch-screen LCD panel without physical push buttons for device operation. The device is intended to be operated in a radiological environment by qualified staff.

    The imaging plate is a polyester film made of densely applied particles of inorganic crystals called photostimulable phosphor. It is a flexible X-ray sensor for the CR system which can be used with a conventional medical X-ray imaging system and can be employed as a substitute for the screen/film system. The phosphor layer records the X-ray image: photostimulable phosphor is a luminescent material that stores X-ray energy and emits light proportional to the stored energy when stimulation energy, such as visible light, is irradiated onto it. The CRUXCAN TWAIN Driver and Viewer, which transmit and display images, are also substantially equivalent to the predicate device software in terms of functions and principle of operation. The user interface is slightly different, but the differences in design do not raise questions about device effectiveness.
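
    As a hedged illustration of the readout chain described above (the plate stores X-ray energy, stimulation releases light proportional to the stored energy, and the scanner digitizes the signal), the sketch below models that chain in a deliberately simplified way. The linear gain, noise level, and 16-bit quantization are assumptions chosen for illustration; they are not the CRUXCAN's actual calibration or noise characteristics.

```python
import numpy as np

# Deliberately simplified model of a CR readout chain: the imaging plate
# stores X-ray energy, stimulation releases light roughly proportional to
# that stored energy, and the scanner quantizes the signal to a fixed bit
# depth. The gain, noise, and normalization here are illustrative only.

def read_out_plate(stored_energy: np.ndarray, bit_depth: int = 16,
                   gain: float = 1.0, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    emitted = gain * stored_energy                            # proportional emission
    emitted = emitted + rng.normal(0.0, 1e-3, emitted.shape)  # readout noise
    max_code = 2 ** bit_depth - 1
    scaled = np.clip(emitted / emitted.max(), 0.0, 1.0)       # normalize to [0, 1]
    return (scaled * max_code).astype(np.uint16)              # digital X-ray image

# Fake stored-energy map standing in for an exposed imaging plate.
plate = np.random.default_rng(1).random((1000, 1500))
digital_image = read_out_plate(plate, bit_depth=16)
print(digital_image.dtype, int(digital_image.min()), int(digital_image.max()))
```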

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and the study proving the device meets them:

    The document (K183637) describes the CRUXCAN (CRX-1000), a dental computed radiography system. The acceptance criteria and performance study presented here are primarily focused on demonstrating substantial equivalence to predicate devices rather than proving a specific clinical performance threshold against a predefined ground truth or an improvement over human readers.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly derived from the comparative performance shown against the predicate devices. The "SE Note" column indicates whether the subject device is considered "Equivalent" or if there's "No issue on safety and effectiveness" despite differences. This means the subject device's performance, even if quantitatively different, is considered acceptable because it's either comparable or superior without raising new safety/effectiveness concerns.

    | Category | Acceptance Criteria (Implied by Predicate) | Reported Device Performance (CRUXCAN) | SE Note |
    |---|---|---|---|
    | Indications for Use | Capturing, digitization, and processing of intra-oral x-ray images | Capturing, digitization, and processing of intra-oral x-ray images | Equivalent |
    | Communication | DICOM (predicate) | TIFF / Raw format | No issue on safety and effectiveness |
    | Power Supply | 50-60 Hz, 100-240 V ~ | 50-60 Hz, 100-240 V ~ | Equivalent |
    | Operating System | Windows NT 4.0/2000/XP (Predicate 1); Windows 7 or 8 (Predicate 2) | Windows 10 or higher | Equivalent |
    | X-ray Absorber | Imaging plate | Imaging plate | Equivalent |
    | Cassette Size | Sizes 0, 1, 2, 3 (specific mm dimensions) | Sizes 0, 1, 2, 3 (specific mm dimensions) | No issue on safety and effectiveness |
    | Pixel Size | 40 um, 64 um (Predicate 1); 30 um, 60 um (Predicate 2) | 25 um, 50 um | No issue on safety and effectiveness (subject device has smaller pixels, considered better) |
    | Dynamic Range (Acquisition) | 14 bit (predicates) | 8 bit / 16 bit | Equivalent (subject device offers wider dynamic range with 16 bit) |
    | Resolution | 12.5 lp/mm @ 40 um (Predicate 1); 16.7 lp/mm @ 30 um (Predicate 2) | 14.0 lp/mm @ 25 um | No issue on safety and effectiveness (subject device's resolution is comparable or better) |
    | DQE at 10% efficiency | 2.4 lp/mm (predicates) | 2.8 lp/mm | No issue on safety and effectiveness (subject device has better DQE) |
    | MTF at 3 lp/mm | 32% (predicates) | 35% | No issue on safety and effectiveness (subject device has better MTF) |
    | Max. Resolution | 40 um (Predicate 1); 30 um (Predicate 2) | 25 um | No issue on safety and effectiveness (subject device has better maximum resolution) |
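
    To make the comparison logic behind the table concrete, the sketch below restates a few of the quantitative rows (using the Predicate 1 values) and checks whether the subject device's figure is comparable or better in the direction the SE notes treat as favorable. The data structure and threshold logic are illustrative assumptions; the actual substantial-equivalence determination is a regulatory judgment, not a simple numeric rule.

```python
# Illustrative only: a numeric restatement of a few rows of the comparison
# table, using the Predicate 1 values. "lower"/"higher" encode the direction
# the SE notes treat as favorable (e.g., smaller pixels, higher DQE/MTF).

SPECS = [
    # (category, predicate 1 value, subject value, better_when)
    ("Pixel size (um)",         40.0, 25.0, "lower"),
    ("Resolution (lp/mm)",      12.5, 14.0, "higher"),
    ("DQE at 10% eff. (lp/mm)",  2.4,  2.8, "higher"),
    ("MTF at 3 lp/mm (%)",      32.0, 35.0, "higher"),
]

def se_note(predicate: float, subject: float, better_when: str) -> str:
    """Hypothetical helper: flags whether the subject figure is at least as
    good as the predicate figure in the favorable direction."""
    meets = subject <= predicate if better_when == "lower" else subject >= predicate
    return "comparable or better than predicate" if meets else "difference needs justification"

for category, predicate, subject, direction in SPECS:
    print(f"{category}: predicate={predicate}, subject={subject} -> "
          f"{se_note(predicate, subject, direction)}")
```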

    2. Sample Size Used for the Test Set and Data Provenance

    The document states that "Performance studies were conducted on the device imaging plate according to the FDA guidance 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'." However, it does not specify the sample size used for performance testing (e.g., number of images, number of patients).

    Furthermore, the data provenance (e.g., country of origin, retrospective or prospective) for any images used in the performance evaluation is not mentioned in the provided text.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The provided document does not mention the use of experts to establish a ground truth for a test set in the context of clinical performance or diagnostic accuracy. The performance evaluation focuses on technical imaging characteristics (e.g., DQE, MTF, resolution).

    4. Adjudication Method for the Test Set

    Since there is no mention of a test set requiring expert interpretation or establishing clinical ground truth, there is no adjudication method described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was done. The document explicitly states: "Clinical testing was not necessary for the subject device, in order to demonstrate substantial equivalence." Therefore, there is no information on the effect size of human readers improving with AI vs. without AI assistance as this type of study was not conducted.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    The device is an imaging system (scanner, imaging plate, workstation software) that produces images, rather than an AI algorithm for image analysis. Therefore, the concept of a "standalone" algorithm-only performance as typically applied to AI CAD systems does not directly apply in the context presented. The performance tests ("Performance studies were conducted on the device imaging plate") describe the technical capabilities of the imaging system itself.

    7. Type of Ground Truth Used

    The ground truth for the performance tests appears to be technical measurements and comparisons of physical imaging characteristics (e.g., lp/mm for resolution, percentage for MTF, lp/mm for DQE) against established engineering standards and the specifications of predicate devices. This is not a clinical ground truth like pathology or outcomes data.
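
    As a quick, hedged illustration of how these physical figures relate to one another (this calculation is not taken from the submission), the resolvable spatial frequency of a sampled detector is bounded above by the Nyquist frequency of one line pair per two pixels; the reported 14.0 lp/mm at a 25 um pitch sits below that bound, as the check below shows.

```python
# Sanity check relating pixel pitch to spatial resolution in lp/mm.
# The Nyquist limit (one line pair per two pixels) is standard sampling
# theory, not a figure from the 510(k) summary; the 25 um pitch and the
# 14.0 lp/mm value come from the comparison table above.

pixel_pitch_mm = 0.025                         # 25 um pixel pitch
nyquist_lp_per_mm = 1 / (2 * pixel_pitch_mm)   # 20.0 lp/mm theoretical ceiling
reported_lp_per_mm = 14.0                      # resolution reported at 25 um

print(f"Nyquist limit at 25 um pitch: {nyquist_lp_per_mm:.1f} lp/mm")
print(f"Reported resolution: {reported_lp_per_mm} lp/mm "
      f"({reported_lp_per_mm / nyquist_lp_per_mm:.0%} of the Nyquist limit)")
```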

    8. Sample Size for the Training Set

    The document does not mention a training set as this device is a hardware/software system for image acquisition and processing, not a machine learning algorithm that requires a training set in the typical sense.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or applicable in the context of this traditional medical device evaluation for image acquisition, there is no information on how ground truth was established for a training set.
