510(k) Data Aggregation

    K Number: K241546
    Manufacturer: Fovia, Inc.
    Date Cleared: 2025-02-26 (271 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Device Name: aiCockpit AI Viewer

    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · is PCCP Authorized · Third-party · Expedited review
    Intended Use

    AI Viewer is software for clinical review intended for use in General Radiology and other healthcare imaging applications.

    AI Viewer is intended to be used with off-the-shelf computing devices. AI Viewer supports major desktop browsers such as Microsoft Edge, Chrome, and Safari.

    AI Viewer can display annotations and measurements as an overlay on images from DICOM objects and from AI software. The viewer can perform 3D Multi-Planar Reformatting (MPR), 3D Maximum Intensity Projection (MIP), and 3D Volume Rendering (VR). AI Viewer is intended to aid in reviewing findings through its ability to display clinical documents and reports side by side with the images.
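
    The MPR, MIP, and VR capabilities above are standard volume-rendering operations on a DICOM series rather than anything unique to this device. As a rough illustration of the simplest of them, the sketch below computes a maximum intensity projection from a stack of DICOM slices using pydicom and NumPy; the directory name and slice ordering are assumptions for illustration, and this is not code from AI Viewer.

```python
import glob

import numpy as np
import pydicom

# Load every slice from a (hypothetical) series directory.
slices = [pydicom.dcmread(path) for path in glob.glob("series/*.dcm")]

# Order slices along the patient z-axis so the stacked volume is consistent.
slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))

# Build a 3D volume of shape (num_slices, rows, cols).
volume = np.stack([ds.pixel_array.astype(np.int16) for ds in slices])

# A maximum intensity projection keeps the brightest voxel along each
# projection ray; collapsing axis 0 gives a single projected image.
mip = volume.max(axis=0)
print(mip.shape, mip.dtype)
```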

    Device Description

    AI Viewer is stand-alone software as a medical device (stand-alone SaMD) that allows clinicians to view DICOM image data in a web browser at client stations. It is intended to provide images and related information so that users can render and interact with AI findings and annotations from lung nodule detection AI algorithms, but it does not directly generate any potential findings or diagnoses.

    AI Viewer allows trained medical professionals to perform 2D, MPR, and 3D image manipulations using the Adjustment Tools, including window level, rotate, flip, pan, stack scroll, and magnify. AI Viewer also organizes all the images and presents them in a zero-footprint web user interface, allowing users to view images in their preferred layout.
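
    Among the Adjustment Tools listed above, window level is the one with a well-defined formula: raw pixel values are clipped to a window around a chosen center and rescaled for display. The following is a minimal sketch of that standard windowing operation, not AI Viewer's implementation; the example center and width are typical lung-window settings chosen for illustration.

```python
import numpy as np


def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw pixel values into 8-bit display values for a given window."""
    low = center - width / 2.0
    high = center + width / 2.0
    clipped = np.clip(pixels.astype(np.float32), low, high)
    return ((clipped - low) / (high - low) * 255.0).astype(np.uint8)


# Example: a typical lung window (center -600 HU, width 1500 HU) applied
# to a dummy CT slice standing in for real pixel data.
ct_slice = np.random.randint(-1024, 3000, size=(512, 512), dtype=np.int16)
display = apply_window(ct_slice, center=-600.0, width=1500.0)
print(display.dtype, int(display.min()), int(display.max()))
```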

    AI Viewer allows trained medical professionals to act on the AI findings, for example to accept, reject, or edit them. Editing tools include selecting descriptive features for findings as well as letting the physician edit quantifiable measurements first computed by the AI, such as lengths and volumes.
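
    The accept/reject/edit workflow described above implies that each AI finding carries its original measurements, any clinician edits, and a review status. The sketch below shows one plausible way to model such a finding; the LungNoduleFinding type, its field names, and the example values are assumptions for illustration, not taken from the 510(k) summary.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class ReviewStatus(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    REJECTED = "rejected"


@dataclass
class LungNoduleFinding:
    finding_id: str
    series_uid: str                            # DICOM Series Instance UID the finding refers to
    ai_length_mm: float                        # measurement originally computed by the AI
    ai_volume_mm3: float
    edited_length_mm: Optional[float] = None   # clinician override, if any
    edited_volume_mm3: Optional[float] = None
    descriptive_features: list[str] = field(default_factory=list)
    status: ReviewStatus = ReviewStatus.PENDING

    def accept(self) -> None:
        self.status = ReviewStatus.ACCEPTED

    def reject(self) -> None:
        self.status = ReviewStatus.REJECTED

    def edit_length(self, new_length_mm: float) -> None:
        """Record a clinician-edited length while keeping the original AI value."""
        self.edited_length_mm = new_length_mm


# Example review: correct a length measurement, then accept the finding.
finding = LungNoduleFinding("f-001", "1.2.3.4", ai_length_mm=8.2, ai_volume_mm3=310.0)
finding.edit_length(7.9)
finding.accept()
print(finding.status, finding.edited_length_mm)
```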

    AI Viewer supports major desktop browsers such as Microsoft Edge, Chrome, and Safari.

    AI/ML Overview

    The provided text is a 510(k) summary for the Fovia, Inc. aiCockpit AI Viewer. However, it does not contain the specific details required to fully describe the acceptance criteria and the study that proves the device meets them, as requested in your prompt.

    Specifically, the document states: "Software Validation and Verification Testing was conducted, and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff..." and then lists general testing steps like "Full regression tests..." and "Bug fix verification...". It also explicitly states "Not Applicable" for more detailed information when describing "Non-Clinical and/or Clinical Tests Summary & Conclusions 21 CFR 807.92(b)".

    This means the key information you're asking for, such as a table of acceptance criteria, reported device performance, sample sizes, ground truth establishment, and details of comparative effectiveness studies (MRMC or standalone), is not present in the provided text.

    Based on the available information, I can only provide a summary of what the document does state about testing:

    No specific acceptance criteria or detailed study results are provided in this document. The 510(k) summary indicates that software validation and verification testing was conducted according to FDA guidance, but it does not disclose the acceptance criteria, the reported performance, or the methodologies of these tests in detail.

    Here's what can be extracted, acknowledging the significant gaps in information for your request:

    1. A table of acceptance criteria and the reported device performance:

    • Information Not Provided: The document does not specify any quantitative or qualitative acceptance criteria for the aiCockpit AI Viewer's performance. It also does not report specific device performance metrics against any criteria.

    2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):

    • Information Not Provided: The document does not specify the sample size of any test set used, nor does it provide details about the data provenance (country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):

    • Information Not Provided: The document does not mention the use of experts to establish a ground truth for a test set, nor does it specify their number or qualifications.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Information Not Provided: No adjudication method is mentioned in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it:

    • Information Not Provided: The document does not describe any MRMC comparative effectiveness study. The AI Viewer is described as a tool to "render and interact with AI findings and annotations from Lung Nodule detection AI algorithms" and to "display annotations and measurements as an overlay on images from DICOM objects, and from AI software," rather than an AI algorithm itself that would augment human reading for a direct performance comparison.

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

    • Information Not Provided: The document describes the aiCockpit AI Viewer as a "stand-alone software as medical device (Stand-alone SaMD)" for clinicians to view DICOM data and interact with AI findings. It explicitly states that the device "does not directly generate any potential findings or diagnosis." Therefore, standalone algorithm performance is not relevant to this specific device's function as a viewer.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Information Not Provided: No ground truth information is provided in the document.

    8. The sample size for the training set:

    • Information Not Provided: The document does not discuss any training set, as the device is a viewer for AI findings, not an AI algorithm itself that requires training.

    9. How the ground truth for the training set was established:

    • Information Not Provided: Not applicable, as no training set or ground truth for a training set is mentioned for this device.

    What the document does say about testing:

    • Software Validation and Verification Testing Summary:
      • "The verifications of features were performed by members of the product." (No specific roles mentioned for these "members")
      • Test Steps Included:
        • Full regression tests executed on the build with all feature implementations to uncover bugs.
        • Bug fix verification for reported issues and regression tests on related features.
        • End-to-end full tests executed on the candidate build to verify all functionalities and fixes.
      • Guidance Followed: FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" and "Content of Premarket Submission for Management of Cybersecurity in Medical Devices."

    In conclusion, while the document confirms that software validation and verification testing took place to support the substantial equivalence claim, it does not provide the specific performance data, acceptance criteria, sample sizes, or details about ground truth and expert involvement that your request asks for. This type of detailed information is typically found in the full 510(k) submission and associated test reports, which are not part of this summary document.
