Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K110869
    Device Name: CARA
    Manufacturer:
    Date Cleared: 2011-07-14 (107 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Predicate For: N/A
    Why did this record match? Reference Devices: K082364

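    The parenthetical figure next to the clearance date appears to be the review time from submission to decision. Assuming it counts calendar days (an assumption, not something the record states), the implied submission date can be back-calculated with Python's standard datetime module:

        from datetime import date, timedelta

        cleared = date(2011, 7, 14)   # Date Cleared from the record above
        review_days = 107             # parenthetical review time

        # Implied submission date, under the calendar-day assumption.
        print(cleared - timedelta(days=review_days))  # 2011-03-29
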
    Intended Use

    CARA is a comprehensive software platform intended for importing, processing, and storing of color fundus images as well as visualization of original and enhanced images through computerized networks.

    Device Description

    CARA is a software platform that collects, enhances, stores, and manages color fundus images. Through the internet, CARA software collects and manages color fundus images from a range of approved computerized digital imaging devices. CARA enables a real-time review of retinal image data (both original and enhanced) from an internet-browser-based user interface to allow authorized users to access and view data saved in a centralized database. The system utilizes state-of-the-art encryption tools to ensure a secure networking environment.
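    The description amounts to a client-server picture archiving architecture: acquisition devices upload fundus images to a centralized database, and authorized users review original and enhanced images through a browser over an encrypted channel. Purely as an illustration of that access pattern, here is a minimal Python sketch using the third-party requests library; the endpoint, URL, and bearer-token scheme are hypothetical, not from the submission:

        import requests  # third-party HTTP client: pip install requests

        # Hypothetical endpoint and token -- not part of the 510(k) record.
        BASE_URL = "https://cara.example.com/api/v1"
        TOKEN = "replace-with-issued-token"

        def fetch_fundus_image(patient_id: str, image_id: str) -> bytes:
            """Retrieve one stored fundus image over TLS from the central store."""
            resp = requests.get(
                f"{BASE_URL}/patients/{patient_id}/images/{image_id}",
                headers={"Authorization": f"Bearer {TOKEN}"},
                timeout=30,
            )
            resp.raise_for_status()  # surface HTTP errors rather than failing silently
            return resp.content      # raw image bytes (original or enhanced)
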

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a device named CARA, a software platform for managing color fundus images.

    Here's an analysis based on the provided information, addressing your requested points:

    1. Table of Acceptance Criteria and Reported Device Performance

    The submission does not specify quantitative acceptance criteria or provide a table of device performance against such criteria. The document states "The results of performance and software validation and verification testing demonstrate that CARA performs as intended and meets the specifications. This supports the claim of substantial equivalence," but the specific metrics are not detailed.

    2. Sample Size Used for the Test Set and Data Provenance

    No specific test set or sample size for evaluating performance is mentioned. The submission states, "Since the CARA system currently is not a stand-alone tool, does not make any diagnostic claims and does not replace the existing retinal images or the treating physician, the sponsor believes that the software testing and validation presented in this 510(k) are sufficient and that there is no need for a clinical trial." This indicates that no human-read test set data was used to demonstrate performance. The country of origin for any internal software testing data is not specified, but the applicant's address is in Canada.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    Not applicable. No clinical test set requiring expert ground truth was used for this 510(k) submission.

    4. Adjudication Method for the Test Set

    Not applicable. No clinical test set requiring adjudication was used.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No MRMC comparative effectiveness study was done. The device makes no diagnostic claims and "does not replace the existing retinal images or the treating physician"; the sponsor therefore did not consider a study of human reader improvement with or without AI assistance necessary.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The CARA system is explicitly stated as "not a stand-alone tool" and "does not make any diagnostic claims." The document does not describe any standalone performance testing of an algorithm making diagnostic claims. The "software testing and validation" mentioned are likely related to functional performance, security, and compatibility as a picture archiving and communication system, not diagnostic accuracy.

    7. The Type of Ground Truth Used

    For the purposes of this 510(k), which focuses on the device as a Picture Archiving and Communications System, the concept of "ground truth" for diagnostic accuracy (e.g., pathology, outcomes data) is not applicable. The system's "performance" is based on its ability to import, process, store, and visualize fundus images as intended by its specifications.

    8. The Sample Size for the Training Set

    Not applicable. As CARA is described as a software platform for managing and enhancing images, not a diagnostic AI algorithm, there is no mention of a "training set" in the context of machine learning model development.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no mention of a training set for a machine learning model.

    K Number: K101861
    Date Cleared: 2010-12-22 (173 days)
    Product Code:
    Regulation Number: 886.1120
    Reference & Predicate Devices
    Predicate For:
    Why did this record match? Reference Devices: K082364, K963333, K980295

    Intended Use

    The TrueVision® 3D Visualization and Guidance System is an adjunct imaging tool that provides onscreen guidance with alignment, orientation, and sizing for eye surgery. The system is intended for use as a preoperative and postoperative image capture tool with visualization and guidance provided during anterior segment ophthalmic surgical procedures, including limbal relaxing incisions, capsulorhexis and toric intraocular lens (toric IOL) positioning. The system utilizes surgeon confirmation at each step for planning and positioning of guidance templates.

    Device Description

    The TrueVision® 3D Visualization and Guidance System is a stereoscopic high-definition digital video camera and workstation, which operates as an adjunct to the surgical microscope during cataract surgery and the slit lamp microscope during pre-operative and post-operative image capture. The visualization system displays real-time images during eye surgery on a flat-panel, high-definition digital 3D display device positioned for live video image viewing by the surgeon and surgical personnel in the operating room.

    The Cataract and Refractive Toolset system combines the TrueVision FDA-registered Class I Device (TrueVision® 3D Visualization System for Microsurgery) with proprietary TrueWare™ software (controlled via remote keyboard with included touchpad mouse) to provide enhanced visualization and surgical guidance assistance to the surgeon during specific procedures such as Limbal Relaxing Incision, Capsulorhexis, and toric IOL positioning.

    Using standard pre-operative clinical data, together with surgeon-driven, onscreen templates and guides, the Refractive Cataract Toolset provides graphical assistance to the surgeon as desired during the surgery. Guidance applications include incision templates to optimize the position of limbal relaxing incisions, sizing and positioning of capsulorhexis tears, and rotational alignment of toric intraocular lenses (toric IOL).
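    One way to see why rotational alignment guidance matters for toric IOLs: by the crossed-cylinder approximation from standard optics (a rule of thumb, not a figure from this submission), a toric lens of cylinder power C misaligned by an angle θ leaves a residual astigmatism of roughly 2·C·sin(θ), so a 30° misalignment forfeits the entire cylindrical correction. A quick check in Python:

        import math

        def residual_astigmatism(iol_cylinder_d: float, misalignment_deg: float) -> float:
            """Approximate residual cylinder (diopters) left by a misaligned toric IOL.

            Crossed-cylinder approximation: residual ~= 2 * C * sin(theta).
            """
            return 2.0 * iol_cylinder_d * math.sin(math.radians(misalignment_deg))

        for theta in (5, 10, 30):
            print(theta, round(residual_astigmatism(2.0, theta), 2))
        # 5 deg -> 0.35 D, 10 deg -> 0.69 D, 30 deg -> 2.0 D (full 2.0 D correction lost)
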

    AI/ML Overview

    The provided 510(k) summary for the TrueVision 3D Visualization and Guidance System (K101861) includes some details about performance testing but does not provide a table of acceptance criteria with specific numeric targets or detailed reported device performance against those targets. It also lacks granular information on some of the requested study parameters.

    Here's a breakdown of the available information and what is not explicitly stated:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states: "Performance verification and validation testing was completed to demonstrate that the device performance complies with specifications and requirements identified for the TrueVision® 3D Visualization and Guidance System. This was accomplished by software verification testing and a non-significant risk clinical study. All criteria for this testing were met and results demonstrate that the TrueVision® 3D Visualization and Guidance System meets all performance specifications and requirements."

    However, specific acceptance criteria (e.g., minimum accuracy percentages, error margins) and the quantitative results validating these criteria are not provided. The summary only confirms that "All criteria for this testing were met."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated. The document mentions a "non-significant risk clinical study" but does not provide details about its sample size.
    • Data Provenance: Not explicitly stated. It's unclear if the data was retrospective or prospective, or the country of origin. Given the device's intended use in surgical settings, it's likely prospective for the clinical study aspect, but this is an inference.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated.
    • Qualifications of Experts: Not explicitly stated.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Done: Not explicitly stated. The focus is on the device's performance in providing "onscreen guidance" and assisting the surgeon, not on comparing human reader performance with and without AI assistance.
    • Effect Size: Not applicable, as an MRMC study is not described.

    6. Standalone (Algorithm Only) Performance Study

    • Standalone Study Done: Yes, implicitly. The document mentions "software verification testing" to demonstrate compliance with specifications. This would typically involve testing the algorithm's performance independent of human interaction for its guidance functions. The "non-significant risk clinical study" would then likely evaluate the overall system (device + human interaction) in a real-world setting. However, specific performance metrics for the standalone algorithm are not presented.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The document states that the system "utilizes surgeon confirmation at each step for planning and positioning of guidance templates." This suggests that the ground truth for evaluating the guidance features (e.g., accuracy of incision templates, capsulorhexis sizing, IOL alignment recommendations) would be based on surgeon confirmation/expert judgment during the clinical study. It is not explicitly stated if pathology or outcomes data were used directly for ground truth establishment.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not mentioned, and likely not applicable. The device primarily functions as a "visualization and guidance system" with "surgeon-driven, onscreen templates and guides." There is no indication that it uses a machine learning algorithm requiring a separate "training set" in the sense of typical AI device development. The software likely implements pre-defined algorithms based on known surgical parameters and anatomical measurements rather than learning from a large dataset.

    9. How Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: Not applicable, as there's no mention of a training set in the context of machine learning. The guidance templates are described as "surgeon-driven" and utilizing "standard pre-operative clinical data," implying that the accuracy of the guidance is based on established surgical principles and measurements, confirmed by the surgeon.

    In summary, while the K101861 document states that performance testing, including software verification and a clinical study, was conducted and that all criteria were met, it lacks the detailed quantitative data and specific methodological descriptions required to fully populate the requested information.

    K Number: K093313
    Device Name: SYNERGY
    Date Cleared: 2009-12-02 (41 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K082364, K013694, K071299, K072971

    Intended Use

    Synergy is a comprehensive software platform intended for use in acquisition or importing, processing, measurement, analysis and storage of clinical images and videos of the eye as well as in management of patient data, diagnostic data, clinical information, reports from ophthalmic diagnostic instruments through either a direct connection with the instruments or through computerized networks.

    Device Description

    Synergy is a software platform that collects, processes, measures, analyzes, stores, and manages patient data and clinical information. Synergy is used together with a number of computerized digital imaging devices. In addition, Synergy software collects and manages patient demographics, image data, and clinical reports from a range of approved medical devices. Synergy enables a real-time review of diagnostic patient information at a PC workstation. In addition to the desktop application, Synergy also includes an internet-browser-based user interface to allow authorized users to access, view, create reports, and analyze patient and examination data saved in a centralized database. The system utilizes dual-level authentication and 128-bit encryption to ensure a secure networking environment.
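    The description cites "128-bit encryption" without naming the cipher or construction. Purely as an illustration of 128-bit symmetric encryption of a record, here is a minimal sketch using the third-party cryptography package, whose Fernet recipe is built on AES-128 in CBC mode with HMAC authentication; the payload is fabricated:

        from cryptography.fernet import Fernet  # pip install cryptography

        key = Fernet.generate_key()  # 128-bit AES key + 128-bit HMAC key, base64-encoded
        cipher = Fernet(key)

        record = b"patient-id=0001; exam=fundus; date=2009-12-02"  # made-up payload
        token = cipher.encrypt(record)          # authenticated ciphertext
        assert cipher.decrypt(token) == record  # round-trips to the original bytes
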

    AI/ML Overview

    This 510(k) summary provides information for the Topcon Medical Systems, Inc. Synergy, an ophthalmic image management system.

    Here's a breakdown of the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria | Reported Device Performance
    --------------------|----------------------------
    Not specified       | No performance data was required or provided. Software validation and verification demonstrate that the Synergy performs as intended and meets its specifications.

    2. Sample size used for the test set and the data provenance

    The document explicitly states: "No performance data was required or provided." Therefore, there is no test set sample size and no data provenance mentioned for a clinical performance study. The evaluation focused on software validation and verification.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    As no performance data was provided, there was no test set and therefore no experts used to establish ground truth for a clinical performance evaluation.

    4. Adjudication method for the test set

    Not applicable, as no performance study with a test set was conducted.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No MRMC comparative effectiveness study was done. The device description indicates it is a software platform for image management and analysis, not an AI-assisted diagnostic tool that would typically involve human readers.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    No standalone performance study was done in the context of clinical accuracy or diagnostic capability, as explicitly stated: "No performance data was required or provided." The "standalone" aspect described is the software itself performing its intended functions (acquisition, processing, measurement, analysis, storage, management) rather than a diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Since no performance data or clinical study was conducted, no ground truth was established or used in the context of diagnostic accuracy. The "ground truth" for the device's functionality would have been its own specifications, verified through software validation and verification.

    8. The sample size for the training set

    Not applicable, as this device is an image management system and not an AI/ML diagnostic algorithm that typically requires a training set for model development.

    9. How the ground truth for the training set was established

    Not applicable, as no training set was used for an AI/ML model.
