Search Results

Found 11 results

510(k) Data Aggregation

    K Number
    K242821
    Date Cleared
    2025-02-20

    (155 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI, Co., Ltd.

    Intended Use

    EFAI CHESTSUITE XR MALPOSITIONED ETT ASSESSMENT SYSTEM (EFAI ETTXR) is a radiological computer-aided triage and notification software indicated for use in the analysis of chest X-ray (CXR) images in adults. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive cases of vertically malpositioned endotracheal tube (ETT) in relation to the carina. Findings are flagged when the ETT distal tip is assessed as being more than 7 cm above the carina, less than 3 cm above the carina, or below the carina (i.e., in the right or left mainstem bronchus). The device assesses solely the vertical position of the ETT distal tip relative to the carina, does not factor in patient positioning, and cannot detect esophageal intubation. The device was tested with single-lumen endotracheal tubes and may trigger a false prioritization alert in the case of a properly positioned double-lumen ETT.

    EFAI ETTXR analyzes cases using algorithms to identify suspected malpositioned ETT findings. It makes case-level output available to a PACS/workstation for worklist prioritization or triage. EFAI ETTXR is not intended to direct attention to specific portions of an image or to anomalies of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making nor is it intended to rule out malpositioned ETT or otherwise preclude clinical assessment of chest radiographs.
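
    The flagging thresholds in the indication reduce to a simple decision rule. A minimal sketch, assuming a signed tip-to-carina distance as the only input (the sign convention and function name are illustrative, not from the filing):

```python
# Schematic restatement of the flagging rule in the indication. Sign convention is an
# assumption for illustration: a positive distance means the ETT tip is above the carina,
# a non-positive distance means it is at or below the carina (e.g., in a mainstem bronchus).

def flag_ett(tip_to_carina_cm: float) -> bool:
    """Return True if the case should be flagged as a suspected malpositioned ETT."""
    too_high = tip_to_carina_cm > 7.0           # more than 7 cm above the carina
    too_low = 0.0 < tip_to_carina_cm < 3.0      # less than 3 cm above the carina
    below = tip_to_carina_cm <= 0.0             # at or below the carina
    return too_high or too_low or below

print(flag_ett(5.0))    # False: within the 3-7 cm window
print(flag_ett(2.1))    # True: less than 3 cm above the carina
print(flag_ett(-1.5))   # True: below the carina
```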

    Device Description

    EFAI CHESTSUITE XR MALPOSITIONED ETT ASSESSMENT SYSTEM (EFAI ETTXR) is a radiological computer-assisted triage and notification software system. The software uses deep learning techniques to automatically analyze chest radiographs and alerts the PACS/RIS workstation once images with features suggestive of malpositioned ETT are identified.

    Through the use of EFAI ETTXR, a radiologist is able to review studies with features suggestive of malpositioned ETT earlier than in standard of care workflow.

    The device is intended to provide a passive notification through the PACS/workstation to the radiologists indicating the existence of a case that may potentially benefit from the prioritization. It does not mark, highlight, or direct users' attention to a specific location on the original chest radiographs. The device aims to aid in prioritization and triage of radiological medical images only.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the EFAI ETTXR device, based on the provided document:

    Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance | Comments |
    | --- | --- | --- |
    | Primary endpoint: Sensitivity >= 80% | 0.890 (95% CI: 0.846-0.923) | Meets acceptance criterion. |
    | Primary endpoint: Specificity >= 80% | 0.935 (95% CI: 0.909-0.954) | Meets acceptance criterion. |
    | Secondary endpoint: System processing time (less than pre-specified goal) | 2.49 minutes (95% CI: 2.43-2.56 minutes) on average | Meets acceptance criterion (significantly less than the goal, though the goal itself is not explicitly stated in minutes). |
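
    The summary reports point estimates with 95% confidence intervals but does not state the interval method or the underlying confusion-matrix counts. As an illustration, a minimal sketch of one common choice, the Wilson score interval, applied to hypothetical counts roughly consistent with the reported sensitivity:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

# Hypothetical counts: ~230 of 259 positive cases flagged (illustrative, not from the filing).
lo, hi = wilson_ci(230, 259)
print(f"sensitivity = {230 / 259:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
print("meets the >= 80% criterion:", 230 / 259 >= 0.80)
```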

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:
    * Sample Size: 940 studies (each patient included only one study).
    * Data Provenance: Retrospective, consecutively collected from multiple clinical sites across the United States. None of the studies were used in model development or analytical validation.

    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
    * Number of Experts: Three.
    * Qualifications: U.S. board-certified radiologists.

    3. Adjudication Method for the Test Set:
    * Method: Majority agreement among the three U.S. board-certified radiologists.
    * Resulting Ground Truth: 259 positive cases for malpositioned ETT, 681 negative cases (316 correctly positioned ETTs, 365 with no ETT).
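
    A minimal sketch of the majority-agreement adjudication described in item 3, using hypothetical reader labels (with three readers and more than two label categories, a three-way split is possible and is surfaced as an error):

```python
# Minimal sketch of majority-vote adjudication among three readers (hypothetical labels).
from collections import Counter

def majority_label(reads: list[str]) -> str:
    """Ground-truth label = the label chosen by at least two of the three readers."""
    label, count = Counter(reads).most_common(1)[0]
    if count < 2:
        raise ValueError("no majority among readers")
    return label

print(majority_label(["malpositioned", "malpositioned", "correctly positioned"]))  # malpositioned
```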

    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done:
    * No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not explicitly described in this document. The study described is a standalone performance validation of the AI model.

    5. If a Standalone (Algorithm Only) Performance Study Was Done:
    * Yes, a standalone performance validation study was done. The document states: "The observed results of the standalone performance validation study demonstrated that EFAI ETTXR by itself, in the absence of any interaction with a clinician, can provide case-level notifications with features suggestive of malpositioned ETT with satisfactory results."

    6. The Type of Ground Truth Used:
    * Expert Consensus: The ground truth was established by the majority agreement of three U.S. board-certified radiologists.

    7. The Sample Size for the Training Set:
    * The document does not specify the exact sample size for the training set. It mentions that "None of the studies [in the test set] was used as part of the EFAI ETTXR model development or analytical validation testing," implying a separate training set was used, but its size is not provided.

    8. How the Ground Truth for the Training Set Was Established:
    * The document does not explicitly state how the ground truth for the training set was established. It only implies the use of "deep learning techniques" and a "database of images" for the algorithm. It's common in AI development studies for the training set ground truth to also be established by expert review, but this is not detailed for EFAI ETTXR's training data.


    K Number
    K241923
    Date Cleared
    2024-12-06

    (158 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI, Co., Ltd.

    Intended Use

    EFAI NEUROSUITE CT MIDLINE SHIFT ASSESSMENT SYSTEM (EFAI MLSCT) is a software workflow tool designed to aid in prioritizing the clinical assessment of non-contrast head CT cases with features suggestive of midline shift (MLS) in individuals aged 18 years and above. EFAI MLSCT analyzes cases using deep learning algorithms to identify suspected MLS findings. It makes case-level output available to a PACS/workstation for worklist prioritization or triage.

    EFAI MLSCT is not intended to direct attention to specific portions of an image or to anomalies other than MLS. Its results are not intended to be used on a stand-alone basis for clinical decision-making nor is it intended to rule out MLS or otherwise preclude clinical assessment of CT studies.

    Device Description

    EFAI NEUROSUITE CT MIDLINE SHIFT ASSESSMENT SYSTEM (EFAI MLSCT) is a radiological computer-assisted triage and notification software system. The software uses deep learning techniques to automatically analyze non-contrast head CTs and alerts the PACS/RIS workstation once images with features suggestive of MLS are identified.

    Through the use of EFAI MLSCT, a radiologist is able to review studies with features suggestive of MLS earlier than in standard of care workflow.

    The device is intended to provide a passive notification through the PACS/workstation to the radiologists indicating the existence of a case that may potentially benefit from the prioritization. It does not mark, highlight, or direct users' attention to a specific location on the original non-contrast head CT. The device aims to aid in prioritization and triage of radiological medical images only.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for the EFAI Neurosuite CT Midline Shift Assessment System (MLS-CT-100), based on the provided text:


    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (Lower Bound of 95% CI) | Reported Device Performance (95% CI) |
    | --- | --- | --- |
    | Sensitivity | > 0.8 | 0.961 (0.903-0.985) |
    | Specificity | > 0.8 | 0.955 (0.916-0.973) |
    | AUROC | Not explicitly stated (but reported) | 0.983 (0.967-0.996) |
    | Processing Time | Significantly less than pre-specified goal | 62.04 seconds (60.65-63.44) |
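
    The sensitivity, specificity, and AUROC above are computed from per-case algorithm scores against the adjudicated ground truth. As an illustration, a minimal AUROC computation with scikit-learn on toy data (the study's per-case scores are not published):

```python
# Illustrative AUROC computation with scikit-learn on toy data (not the study data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(102), np.zeros(198)])        # 102 MLS-positive, 198 negative
scores = np.concatenate([rng.normal(0.8, 0.15, 102),          # hypothetical model scores
                         rng.normal(0.2, 0.15, 198)])
print(f"AUROC: {roc_auc_score(y_true, scores):.3f}")
```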

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 300 cases (102 positive for MLS, 198 negative for MLS). Each case included only one CT study.
    • Data Provenance: Retrospective, consecutively collected from multiple clinical sites across the United States (U.S.). The U.S. cases were solely collected for this study.

    3. Number and Qualifications of Experts for Ground Truth (Test Set)

    • Number of Experts: Three (3)
    • Qualifications: U.S. board-certified radiologists.

    4. Adjudication Method (Test Set)

    • Adjudication Method: Majority agreement between the three experts established the reference standard (ground truth).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done? No. The document describes a "standalone performance validation study" and mentions "Reader comparison analysis" for overall safety & effectiveness, but does not detail an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated for an effect size. The study described focuses on the standalone performance of the AI.

    6. Standalone Performance Study

    • Was it done? Yes. The document explicitly states: "The observed results of the standalone performance validation study demonstrated that EFAI MLSCT by itself, in the absence of any interaction with a clinician, can provide case-level notifications with features suggestive of MLS with satisfactory results."

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus (majority agreement of three U.S. board-certified radiologists).

    8. Sample Size for the Training Set

    • The document states that the "model development and validation utilized cases from Taiwan," but it does not specify the sample size for the training set. It only mentions that the U.S. validation cases were not used for model development or analytical validation testing.

    9. How the Ground Truth for the Training Set Was Established

    • The document indicates that the model was developed and validated using cases from Taiwan, but it does not describe how the ground truth for these training cases was established.

    K Number
    K234042
    Date Cleared
    2024-06-07

    (169 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI Co., Ltd.

    Intended Use

    EFAI BONESUITE XR BONE AGE PRO ASSESSMENT SYSTEM (EFAI BAPXR) is designed to view and quantify bone age from 2D Posterior Anterior (PA) view of left-hand radiographs using deep learning techniques to aid in the analysis of bone age assessment of patients between 2 to 16 years old for pediatric radiologists. The results should not be relied upon alone by pediatric radiologists to make diagnostic decisions. The images shall be with left hand and wrist fully visible within the field of view, and shall be without any major bone destruction, deformity, fracture, excessive motion, or other major artifacts.

    Device Description

    The device is a software designed to aid the quantification of bone age for patients between 2 to 16 years old. The software uses deep learning techniques to analyze posterior-anterior (PA) radiographs of the left-hand according to the Greulich-Pyle (GP) method.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided text:

    EFAI Bonesuite XR Bone Age Pro Assessment System (BAP-XR-100) Performance Study

    1. A table of acceptance criteria and the reported device performance

    The acceptance criteria for this device are based on the intercept and slope of a Deming regression analysis between the device's output (EFAI BAPXR) and the Ground Truth (GT). The criteria are that both the intercept and slope of the regression line must fall within the range of the highest acceptable bias. The text does not explicitly state the numerical "highest acceptable bias" range, but it states that the observed results met these general criteria.

    | Metric | Acceptance Criteria (General) | Reported Device Performance (EFAI BAPXR vs. GT) |
    | --- | --- | --- |
    | Deming regression intercept | Fall within the range of the highest acceptable bias | -0.07 (95% CI: [-0.13, -0.01]) |
    | Deming regression slope | Fall within the range of the highest acceptable bias | 1.00 (95% CI: [0.99, 1.00]) |
    | Percentage of cases with bone age difference within limit | Not explicitly stated | 88% |
    | Bland-Altman 95% limits of agreement (EFAI BAPXR vs. GT) | Not explicitly stated as a primary acceptance criterion, but reported as an indicator of high consistency | -0.517 to 0.743 |
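
    A minimal sketch of Deming regression with equal error variances (delta = 1), the analysis named above, where x is the ground-truth bone age and y is the device output; the toy data and noise level are illustrative, not the study data:

```python
# Minimal Deming regression (equal error variances, delta = 1) for comparing two
# measurement methods; x = ground-truth bone age, y = device output.
import numpy as np

def deming(x: np.ndarray, y: np.ndarray, delta: float = 1.0):
    xm, ym = x.mean(), y.mean()
    sxx = ((x - xm) ** 2).mean()
    syy = ((y - ym) ** 2).mean()
    sxy = ((x - xm) * (y - ym)).mean()
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
             + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ym - slope * xm
    return intercept, slope

# Toy data: device output tracks ground truth with a small offset and noise (illustrative only).
rng = np.random.default_rng(1)
gt = rng.uniform(2, 16, 600)
dev = gt - 0.07 + rng.normal(0, 0.3, 600)
print(deming(gt, dev))   # intercept near -0.07, slope near 1.0
```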

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Test Set (Clinical Study): 600 cases
    • Data Provenance: Retrospectively collected from 27 locations across multiple states and multiple clinical organizations in the United States.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Number of Experts: Four (4)
    • Qualifications of Experts: U.S. board-certified expert radiologists. Specific experience level (e.g., years) is not mentioned.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    The ground truth for the test set was established through a "Ground Truthing Workflow" involving multiple stages:

    • Bone Age Assessment: Individual assessments by the four expert radiologists.
    • Consensus Via Grading: Implies a process of evaluating and potentially assigning grades to assessments based on predetermined criteria (e.g., differences).
    • Majority Voting: Most likely used when assessments differed, to reach an initial consensus.
    • Final Adjudication: This step suggests a process where discrepancies or remaining disagreements after majority voting were resolved by a final decision-making body or method. The flowchart indicates a systematic process to ensure consistency and consensus, though the exact rules for "Final Adjudication" (e.g., if a lead adjudicator made a final decision or if all 4 radiologists had to agree) are not explicitly detailed beyond "consensus among all readers reviewing the radiographs."

    This detailed workflow suggests a robust, multi-reader consensus approach for ground truthing, rather than a simple 'none' or majority vote without further review.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No, a multi-reader multi-case (MRMC) comparative effectiveness study (human readers with AI vs. without AI assistance) was not explicitly described. The clinical study was a standalone performance study of the EFAI BAPXR device itself, comparing its output to ground truth established by expert radiologists, not measuring human reader improvement with AI assistance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, a standalone performance study was done. The description states: "EFAI conducted a standalone performance study with the proposed device EFAI BAPXR..." This study measured the performance of the EFAI BAPXR algorithm directly against the established ground truth.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used was expert consensus based on assessments by four U.S. board-certified expert radiologists, following a structured "Ground Truthing Workflow" that included individual assessments, consensus via grading, majority voting, and final adjudication, comparing their findings to the Greulich-Pyle Atlas.

    8. The sample size for the training set

    The training set comprised 23,578 cases.

    9. How the ground truth for the training set was established

    For the training set, the ground truth was established as the average of the bone age assessments independently done by three board-certified radiologists.


    K Number
    K240291
    Date Cleared
    2024-04-08

    (67 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI, Co., Ltd.

    Intended Use

    EFAI CARDIOSUITE CTA ACUTE AORTIC SYNDROME ASSESSMENT SYSTEM (EFAI AASCTA) is a radiological computer aided triage and notification software indicated for use in the analysis of chest-abdomen CTA in adults aged 22 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive cases of aortic dissection (AD) or aortic intramural hematoma (IMH) pathology.

    EFAI AASCTA uses an artificial intelligence algorithm to identify suspected findings. It makes case-level output available to a PACS/workstation for worklist prioritization or triage. EFAI AASCTA is not intended to direct attention to specific portions or anomalies of an image. Its results are not intended to be used on a stand-alone basis for clinical decisionmaking nor is it intended to rule out AAS or otherwise preclude clinical assessment of computed tomography cases.

    Device Description

    EFAI CARDIOSUITE CTA ACUTE AORTIC SYNDROME ASSESSMENT SYSTEM (EFAI AASCTA) is a radiological computer-assisted triage and notification software system. The software uses deep learning techniques to automatically analyze chest or chest-abdomen CTA and alerts the PACS/RIS workstation once images with features suggestive of AD or IMH are identified.

    Through the use of EFAI AASCTA, a radiologist is able to review studies with features suggestive of AD or IMH earlier than in standard of care workflow.

    The device is intended to provide a passive notification through the PACS/workstation to the radiologists indicating the existence of a case that may potentially benefit from the prioritization. It does not mark, highlight, or direct users' attention to a specific location on the original chest or chest-abdomen CTA. The device aims to aid in prioritization and triage of radiological medical images only.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the EFAI CARDIOSUITE CTA ACUTE AORTIC SYNDROME ASSESSMENT SYSTEM, based on the provided document:


    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    | Performance Metric | Acceptance Criteria (Lower Bound of 95% CI) | Reported Device Performance (95% CI) |
    | --- | --- | --- |
    | Sensitivity | > 0.8 | 0.929 (0.878-0.960) |
    | Specificity | > 0.8 | 0.915 (0.871-0.945) |
    | Processing Time | Not explicitly stated as an acceptance criterion | 37.86 seconds (35.22-40.50) |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 380 CTA studies (156 positive cases, 224 negative cases).
    • Data Provenance: Retrospective, multisite clinical validation study. The data was collected in the United States. None of the studies in the test set were used for model development or analytical validation. The study population included 51.58% females and 48.42% males, with a mean age of 62.90 years. CT scanner manufacturers included Philips, Toshiba, Siemens, GE, and others.
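
    As a rough consistency check (the filing does not report the confusion-matrix counts), the reported rates imply approximately the following counts:

```python
# Back-of-envelope check: approximate confusion-matrix counts implied by the reported
# rates; the 510(k) summary itself does not list TP/FN/TN/FP.
positives, negatives = 156, 224
tp = round(0.929 * positives)    # ~145 true positives  -> 11 false negatives
tn = round(0.915 * negatives)    # ~205 true negatives  -> 19 false positives
print(tp, positives - tp, tn, negatives - tn)               # 145 11 205 19
print(round(tp / positives, 3), round(tn / negatives, 3))   # 0.929 0.915
```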

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three.
    • Qualifications of Experts: U.S. board-certified radiologists.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Majority agreement between the three experts. (Described as "the reference standard (ground truth) was generated by the majority agreement between the three experts.")

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not reported. The study focused on the standalone performance of the AI algorithm.

    6. Standalone Performance Study

    • Yes, a standalone performance study was conducted. The results reported (sensitivity and specificity) are for the EFAI AASCTA by itself, "in the absence of any interaction with a clinician."

    7. Type of Ground Truth Used

    • Ground Truth Type: Expert consensus. Specifically, the "majority agreement between the three experts" (U.S. board-certified radiologists) determined the presence of AD or IMH for each case.

    8. Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. It only mentions that none of the 380 studies in the validation test set were used for model development (training) or analytical validation.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly state how the ground truth for the training set was established. It only discusses the ground truth establishment for the test set.

    K Number
    K231025
    Date Cleared
    2023-10-04

    (176 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI Co., Ltd.

    Intended Use

    EFAI ICHCT is a software workflow tool designed to aid in prioritizing the clinical assessment of adult non-contrast head CT cases with features suggestive of acute intracranial hemorrhage (ICH). EFAI ICHCT analyzes cases using deep learning algorithms to identify suspected ICH findings. It makes case-level output available to a PACS/workstation for worklist prioritization or triage.

    EFAI ICHCT is not intended to direct attention to specific portions of an image or to anomalies other than acute ICH. Its results are not intended to be used on a stand-alone basis for clinical decision-making nor is it intended to rule out hemorrhage or otherwise preclude clinical assessment of CT studies.

    Device Description

    EFAI NEUROSUITE CT ICH ASSESSMENT SYSTEM (EFAI ICHCT) is a radiological computer-assisted triage and notification software system. The software uses deep learning techniques to automatically analyze non-contrast head CTs and alerts the PACS/RIS workstation once images with features suggestive of acute ICH are identified.

    Through the use of EFAI ICHCT, a radiologist is able to review studies with features suggestive of acute ICH earlier than in standard of care workflow.

    The device is intended to provide a passive notification through the PACS/workstation to the radiologists indicating the existence of a case that may potentially benefit from the prioritization. It does not mark, highlight, or direct users' attention to a specific location on the original non-contrast head CT. The device aims to aid in prioritization and triage of radiological medical images only.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (Lower Bound of 95% CI) | Reported Device Performance (95% CI) | Met? |
    | --- | --- | --- | --- |
    | Sensitivity | > 0.8 | 0.947 (0.895-0.974) | Yes |
    | Specificity | > 0.8 | 0.949 (0.902-0.974) | Yes |
    | System Processing Time | Not explicitly stated (compared to predicate) | 34.96 seconds (33.89-36.03) | N/A |

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 288 CT studies (132 ICH positives and 156 ICH negatives).
    • Data Provenance: Retrospective, consecutively collected from 23 clinical sites in the United States (U.S.). None of these studies were used in model development or analytical validation.

    3. Number and Qualifications of Experts for Ground Truth

    • Number of Experts: Three (3)
    • Qualifications of Experts: U.S. board-certified neuroradiologists. (Specific years of experience are not mentioned, but board certification implies significant expertise).

    4. Adjudication Method for the Test Set

    • Method: Majority agreement between the three experts.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done? No. The provided text describes a standalone performance validation study. The closest mention of human interaction is that the device "can provide case-level notifications with features suggestive of ICH with satisfactory results" in the "absence of any interaction with a clinician."

    6. Standalone Performance (Algorithm Only)

    • Was it done? Yes. The study details "the standalone performance validation study demonstrated that EFAI ICHCT by itself, in the absence of any interaction with a clinician, can provide case-level notifications with features suggestive of ICH with satisfactory results."

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus (majority agreement of three U.S. board-certified neuroradiologists).

    8. Sample Size for the Training Set

    • Training Set Sample Size: 3,776 cases. (There was also a validation set of 1,038 cases and a separate test set of 551 cases from the initial collection for model development, distinct from the clinical validation test set).

    9. How Ground Truth for the Training Set Was Established

    • The text states, "During the process of model development, a total of 5,365 adult cases were retrospectively collected between 2010 and 2018 from Taiwan."
    • While it mentions these cases were "subsequently divided into training, validation, and testing datasets," the method for establishing ground truth specifically for the training set is not explicitly detailed in the provided text. It can be inferred that a similar process of expert review would have been used, but the number of experts or adjudication method for the training data is not specified.
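
    For orientation, a minimal sketch of a random case-level split that reproduces the quoted training/validation/testing sizes (3,776 / 1,038 / 551 of 5,365 cases); the actual split procedure and any stratification are not described in the document:

```python
# Sketch of a case-level random split into the sizes quoted above; case identifiers
# are hypothetical placeholders.
import random

cases = [f"case_{i:04d}" for i in range(5365)]
random.seed(42)
random.shuffle(cases)
train, val, test = cases[:3776], cases[3776:4814], cases[4814:]
print(len(train), len(val), len(test))    # 3776 1038 551
```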

    K Number
    K231928
    Date Cleared
    2023-09-25

    (87 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI Co., Ltd.

    Intended Use

    EFAI HCAPSeg is a software device intended to assist trained radiation oncology professionals, including, but not limited to, radiation oncologists, medical physicists, and dosimetrists, during their clinical workflows of radiation therapy treatment planning by providing initial contours of organs at risk on non-contrast CT images. EFAI HCAPSeg is intended to be used on adult patients only.

    The contours are generated by deep-learning algorithms and then transferred to radiation therapy treatment planning systems. EFAI HCAPSeg must be used in conjunction with a DICOM-compliant treatment planning system to review and edit results generated. EFAI HCAPSeg is not intended to be used for decision making or to detect lesions.

    EFAI HCAPSeg is an adjunct tool and is not intended to replace a clinician's judgment and manual contouring of the normal organs on CT. Clinicians must not use the software generated output alone without review as the primary interpretation.

    Device Description

    EFAI RTSuite CT HCAP-Segmentation System, herein referred to as EFAI HCAPSeg, is a standalone software that is designed to be used by trained radiation oncology professionals to automatically delineate organs-at-risk (OARs) on CT images. This auto-contouring of OARs is intended to facilitate radiation therapy workflows.

    The device receives CT images in DICOM format as input and automatically generates the contours of OARs, which are stored in DICOM format and in RTSTRUCT modality. The device does not offer a user interface and must be used in conjunction with a DICOM-compliant treatment planning system to review and edit results. Once data is routed to EFAI HCAPSeg, the data will be processed and no user interaction is required, nor provided.

    The deployment environment is recommended to be a local network with an existing hospital-grade IT system in place. EFAI HCAPSeg should be installed on a specialized server supporting deep learning processing. The following configurations are operated only by the manufacturer:

    • Local network setting of input and output destinations;
    • Presentation of labels and their colors;
    • Processed image management and output (RTSTRUCT) file management.
    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    | Acceptance Criteria Category | Specific Criteria | Reported Device Performance (EFAI HCAPSeg) | Statistical Result (p-value) |
    | --- | --- | --- | --- |
    | OARs present in both EFAI HCAPSeg and the comparison device | The mean Dice coefficient (DSC) of OARs for each body part (Head & Neck, Chest, Abdomen & Pelvis) should be non-inferior to that of the comparison device, with a pre-specified margin. | Overall mean DSC: 0.83 (vs. 0.75 for Head & Neck, 0.84 for Chest, 0.82 for Abdomen & Pelvis in the comparison device) | |
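
    For reference, a minimal sketch of the Dice similarity coefficient (DSC) used in the comparison above, computed on toy binary masks rather than study contours:

```python
# Minimal Dice similarity coefficient (DSC) between two binary contour masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto = np.zeros((64, 64), dtype=bool); auto[20:40, 20:40] = True      # auto-contoured OAR (toy)
manual = np.zeros((64, 64), dtype=bool); manual[22:42, 20:40] = True  # reference contour (toy)
print(f"DSC = {dice(auto, manual):.3f}")   # 0.900 for this toy overlap
```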

    K Number
    K232100
    Date Cleared
    2023-08-08

    (25 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI Co., Ltd.

    Intended Use

    EFAI PACS PRO is intended to be used as a Digital Imaging and Communications in Medicine (DICOM) and non-DICOM information and data management system. The EFAI PACS PRO displays, processes, stores, and transfers medical data from original equipment manufacturers (OEMs) that support the DICOM standard, with the exception of mammography. It provides the capability to store images and patient information from OEM equipment, and perform filtering, digital manipulation and quantitative measurements. The client software is designed to run on standard personal and business computers. The product is intended to be used by trained medical professionals, including but not limited to radiologists, oncologists, and physicians. It is intended to provide image and related information that is interpreted by a trained professional to render findings and/or diagnosis, but it does not directly generate any diagnosis or potential findings.

    Device Description

    The software is a stand-alone software as a medical device (stand-alone SaMD) that provides clinicians with on-demand access via web browsers at client stations to search for and view the medical data of desired patients stored in the software. It also provides visualization, annotation, and quantification functionalities that can be applied to the images in the web browser at client stations.
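
    As an illustration of the kind of DICOM handling such a system performs, a minimal sketch using the open-source pydicom library; the document does not state what EFAI PACS PRO uses internally, and the file name is a placeholder:

```python
# Minimal example of reading DICOM metadata and pixel data with pydicom.
import pydicom

ds = pydicom.dcmread("study.dcm")                 # placeholder file name
print(ds.PatientID, ds.Modality, ds.StudyDate)    # attributes typically used for search/filtering
pixels = ds.pixel_array                           # image data for display or quantification
print(pixels.shape)
```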

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the EFAI PACS PRO device. However, it does not contain specific acceptance criteria, detailed performance data, or information regarding a study design (like sample sizes, ground truth establishment, expert qualifications, or MRMC studies) that would typically be associated with proving a device meets acceptance criteria.

    The document primarily focuses on demonstrating substantial equivalence to a predicate device (EFAI PACS K211257) based on its intended use, technological characteristics, and conformance to general software and usability standards.

    Here's a breakdown of the requested information based on the provided text, highlighting what is present and what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    This information is not explicitly stated in the provided document in the form of a table with specific quantitative acceptance criteria or reported performance metrics (e.g., accuracy, sensitivity, specificity).

    The "Performance Data - Non-Clinical" section states: "Results confirm that the design inputs and performance specifications for the device are met. The EFAI PACS PRO passed the testing in accordance with internal requirements, national standards, and international standards shown below, supporting its safety and effectiveness, and its substantial equivalence to the predicate device."

    However, it does not detail:

    • What those "design inputs and performance specifications" are (i.e., the acceptance criteria).
    • The actual "results" or specific performance values achieved by the device against these criteria.

    2. Sample size used for the test set and the data provenance

    This information is not provided in the document. The document mentions "non-clinical tests" but does not detail the datasets used for these tests.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided. The document does not describe the establishment of a ground truth for a test set, nor does it mention any experts involved in such a process.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    This information is not provided. There is no description of an adjudication method, as no specific test set and ground truth establishment process are detailed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    This information is not provided. The document makes no mention of an MRMC study or any assessment of human reader performance improvement with AI assistance. The device is described as a PACS system for displaying, processing, storing, and transferring medical data, and does not directly generate diagnoses or findings, suggesting it may not involve an AI component that directly aids in human reader interpretation for diagnostic tasks in the way an AI-CAD device might.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This information is not provided. The device is described as a "stand-alone software as medical device" (SaMD) and offers functionalities like visualization, annotation, and quantification. However, it is a PACS system; it is not an algorithm designed to perform diagnostic tasks independently. Thus, a "standalone" performance evaluation in the context of an AI algorithm predicting an outcome is not applicable here. The "standalone" refers to the entire software system existing on its own, not an AI component's diagnostic performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    This information is not provided. There is no mention of ground truth in the document, as no specific diagnostic performance study is detailed.

    8. The sample size for the training set

    This information is not provided. Since no AI/ML algorithm requiring a training set for diagnostic purposes is described, details about a training set are absent.

    9. How the ground truth for the training set was established

    This information is not provided for the same reason as point 8.


    Summary of what the document does provide regarding "performance":

    The document indicates that adherence to the following standards supports its safety and effectiveness and substantial equivalence:

    • Software verification and validation per IEC 62304/FDA Guidance
    • Application of usability engineering to medical devices Part 1 per IEC 62366-1
    • Guidance on the application of usability engineering to medical devices per IEC 62366-2

    This implies that the "acceptance criteria" were met through compliance with these general software development, validation, and usability standards, rather than specific quantitative diagnostic performance metrics. The device is a "Medical Image Management And Processing System" (PACS), not a CAD (Computer-Aided Detection/Diagnosis) device, which would typically require extensive clinical performance studies with specific accuracy metrics.


    K Number
    K222076
    Date Cleared
    2022-09-08

    (56 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI Co., Ltd.

    Intended Use

    EFAI Chestsuite XR Pleural Effusion Assessment System is a software workflow tool designed to aid the clinical assessment of adult (18 years of age or older) Chest X-Ray cases with features suggestive of pleural effusion in the medical care environment. EFAI Chestsuite XR Pleural Effusion Assessment System analyzes cases using an artificial intelligence algorithm to identify suspected findings on chest x-ray images taken in the PA position. It makes case-level output available to a PACS/workstation for worklist prioritization or triage. EFAI Chestsuite XR Pleural Effusion Assessment System is not intended to direct attention to specific portions or anomalies of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making nor is it intended to rule out pleural effusion or otherwise preclude clinical assessment of X-Ray cases.

    Device Description

    EFAI ChestSuite XR Pleural Effusion Assessment System, is a radiological computer-assisted triage and notification software system. The software uses deep learning techniques to automatically analyze PA chest x-rays and sends notification messages to the picture archiving and communication system (PACS)/workstation to allow suspicious findings of pleural effusion to be identified.

    The device is intended to provide a passive notification through the PACS/workstation to the radiologists indicating the existence of a case that may potentially benefit from the prioritization. It does not mark, highlight, or direct users' attention to a specific location on the original chest X-ray. The device aims to aid in prioritization and triage of radiological medical images only.

    The deployment environment is recommended to be a local network with an existing hospital-grade IT system in place. EFAI Chestsuite XR Pleural Effusion Assessment System should be installed on a specialized server supporting deep learning processing. The following configuration is operated only by the manufacturer:

    • Local network setting of input and output destinations.

    EFAI Chestsuite XR Pleural Effusion Assessment System is a software-only device which operates in four stages: data transfer, data preprocessing, AI inference, and data post-processing. The workflow of the device begins with applying a number of filtering rules based on image characteristics and DICOM tags to ensure only eligible images are analyzed by the algorithm. The image preprocessing unit ensures that all input data are normalized (image size, contrast) so that they are ready for the algorithm to conduct the analysis. The AI inference generates an assessment, which is then post-processed into a JSON message and transferred to an API server. The software is packaged as a Docker container so that it can be installed and deployed on either a physical or a virtual machine.
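
    The case-level output message might look like the following hypothetical example; the actual field names and schema are not disclosed in the summary:

```python
# Hypothetical shape of the case-level JSON notification described above; field names
# are illustrative assumptions, not the vendor's schema.
import json

message = {
    "study_instance_uid": "1.2.840.xxxxx",   # placeholder identifier
    "finding": "pleural_effusion",
    "suspected_positive": True,               # case-level triage flag only
    "note": "worklist prioritization; no localization is provided",
}
print(json.dumps(message, indent=2))
```
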
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (Lower Bound) | Reported Device Performance (95% CI) |
    | --- | --- | --- |
    | AUC | > 0.95 | 0.9712 (0.9538-0.9885) |
    | Sensitivity | > 0.80 | 0.9510 (0.9195-0.9706) |
    | Specificity | > 0.80 | 0.9745 (0.9505-0.9870) |

    The reported device performance for all metrics (AUC, Sensitivity, Specificity) exceeded their respective acceptance criteria.

    2. Sample size used for the test set and the data provenance

    • Sample Size: 600 anonymized Chest X-ray images (286 positive for pleural effusion, 314 negative).
    • Data Provenance: Retrospective cohort collected from multiple institutions in the US and OUS (Outside US).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Three.
    • Qualifications of Experts: US board-certified radiologists. The specific number of years of experience is not mentioned.

    4. Adjudication method for the test set

    • Adjudication Method: Majority agreement was used as the reference standard (ground truth). This implies a 3-reader consensus where at least 2 out of 3 had to agree.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study with human-in-the-loop performance was not explicitly done or reported in this document. The study focused on the standalone performance of the AI algorithm. Therefore, no effect size of human readers improving with AI vs. without AI assistance is provided.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone Performance: Yes, a standalone performance test was performed to compare the pleural effusion classification performance and processing time of the EFAI Chestsuite XR Pleural Effusion Assessment System against the predicate device, HealthCXR.

    7. The type of ground truth used

    • Type of Ground Truth: Expert consensus (majority agreement of three US board-certified radiologists).

    8. The sample size for the training set

    • The document mentions an "internal validation test" with 1454 images collected retrospectively between 2006-2018 from Taiwan, where "Ground-truthing (classified into positive and negative of pleural effusion) was done by three board-certified radiologists." This sounds like an internal validation set rather than the training set. The true size of the training set is not explicitly stated in the provided text.

    9. How the ground truth for the training set was established

    • As the training set size is not explicitly stated, the method for establishing its ground truth is also not explicitly detailed. However, for the internal validation set mentioned (1454 images), the ground truth was established by three board-certified radiologists. It's highly probable that a similar method (expert review) was used for the training data as well, given the nature of the task.

    K Number
    K213731
    Date Cleared
    2022-05-31

    (186 days)

    Product Code
    Regulation Number
    892.1200
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI Co., Ltd.

    Intended Use

    EFAI CARDIOSUITE SPECT MYOCARDIAL PERFUSION AGILE WORKFLOWS is an image processing software that provides analysis on DICOM images acquired from GE Medical Systems Nuclear Quantitative Perfusion SPECT software to support appropriately trained healthcare professionals in the evaluation and assessment of myocardial perfusions.

    It provides the following functionality:

    • Segmentation of the Bull's Eye images from the original DICOM
    • Analysis of the Bull's Eye images to help assess perfusion
    • Custom settings to generate text reports

    The results of this processing may be used to aid in evaluating and assessing myocardial perfusions.

    The system is an adjunct tool for GE Medical Systems Nuclear Quantitative Perfusion SPECT software.

    Device Description

    The device allows users to interact with the software application via a web interface to upload, inspect, assess myocardial perfusion from Bull's Eye images. The user can change the quantitative settings to correct for numerical calculations and clinical adjustments.

    The device is designed to take images produced by GE Medical Systems Nuclear Quantitative Perfusion SPECT software and process the data to provide both numerical analysis of the Bull's Eye images to help assess for myocardial perfusions, and generate a report based on the users report settings and preference.

    The algorithm segments bull's eye images from SPECT images generated by GE's workstation and conducts quantitative analysis based on the color settings set by the user. The color scale is designed to follow GE's design convention, where red indicates a normal condition and blue indicates severely reduced perfusion. Each of the 17 segments produces a quantitative evaluation under rest and stress conditions based on the color scale. The clinician then designs and fills in the diagnostic terminology best suited to the numerical results of each segment and generates a template report documenting the patient's condition.
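
    A minimal sketch of per-segment quantification over the 17-segment bull's eye model; the numeric scale and the stress-versus-rest comparison below are assumptions for illustration, not the vendor's actual calibration:

```python
# Sketch of per-segment quantification over the AHA 17-segment bull's eye model.
# Score convention is an assumption: 0.0 = normal (red end), higher = reduced perfusion (blue end).
segments = list(range(1, 18))
rest = {s: 0.0 for s in segments}
stress = dict(rest)
stress[7], stress[13] = 3.0, 2.0                    # hypothetical stress defects

reversible = {s: stress[s] - rest[s] for s in segments}
worst = max(reversible, key=reversible.get)
print(f"largest stress-rest difference in segment {worst}: {reversible[worst]:.1f}")
```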

    AI/ML Overview

    Based on the provided text, the document is a 510(k) Premarket Notification from EverFortune.AI Co., Ltd. for their device, EFAI CARDIOSUITE SPECT Myocardial Perfusion Agile Workflows. The primary purpose of this document is to demonstrate "substantial equivalence" to a legally marketed predicate device (AutoQUANT® Plus) rather than providing detailed performance data from a clinical study for specific acceptance criteria.

    The document explicitly states: "EFAI SPECT Workflows did not require clinical study since substantial equivalence to the currently market and predicate device was demonstrated with the following attribute: Principle of Operation; Indications for Use; Fundamental scientific technology; Non-clinical performance testing: Safety and effectiveness."

    Therefore, much of the requested information regarding "acceptance criteria" based on a study proving the device meets the criteria, particularly clinical performance data, is not present in this 510(k) summary because a clinical study was not conducted or deemed necessary for this submission. The tests performed were non-clinical, focusing on software verification and validation, and usability engineering.

    Here's a breakdown of the information that can be extracted or inferred, and what is explicitly not available:


    1. A table of acceptance criteria and the reported device performance

    • Acceptance Criteria: No specific numerical acceptance criteria (e.g., minimum sensitivity, specificity, or image quality scores) from a performance study are provided in this document. The "criteria" for this 510(k) submission appear to be demonstrating substantial equivalence through non-clinical testing and comparison of technological characteristics with the predicate device.
    • Reported Device Performance:
      • Non-Clinical Tests: "Results confirm that the design inputs and performance specifications for the device are met." (General statement, no specific metrics provided).
      • Standards Met:
        • Software verification and validation per IEC 62304/FDA Guidance
        • Application of usability engineering to medical devices - Part 1 per IEC 62366-1
        • Guidance on the application of usability engineering to medical devices per IEC 62366-2

    Table (based on inferred "acceptance" for substantial equivalence and reported non-clinical performance):

    | Acceptance Criteria Category (Inferred) | Reported Device Performance |
    | --- | --- |
    | Equivalence in principle of operation | Found to be substantially equivalent to the predicate device |
    | Equivalence in indications for use | Found to be substantially equivalent to the predicate device |
    | Equivalence in fundamental scientific technology | Found to be substantially equivalent to the predicate device |
    | Non-clinical performance: software validation | Passed testing in accordance with IEC 62304 / FDA guidance |
    | Non-clinical performance: usability engineering | Passed testing in accordance with IEC 62366-1 and IEC 62366-2 |
    | Safety and effectiveness | Supported by non-clinical testing; no new questions of safety or effectiveness |

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Not specified. Since no clinical study was performed, there isn't a "test set" of patient data in the sense of a clinical trial. The non-clinical testing would have used various test cases and scenarios, but the number of these is not disclosed.
    • Data Provenance: Not specified for the non-clinical tests. For the intended use of the device, it processes DICOM images acquired from GE Medical Systems Nuclear Quantitative Perfusion SPECT software. The origin of the training data is not mentioned.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This information is not available as no clinical study with expert-established ground truth on a test set was conducted for this 510(k) submission.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • This information is not available as no clinical study with a test set requiring adjudication was conducted.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No MRMC study was done. The document explicitly states: "EFAI SPECT Workflows did not require clinical study". Therefore, no effect size of human reader improvement with AI assistance is provided.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • The document describes the device as "image processing software that provides analysis on DICOM images... to support appropriately trained healthcare professionals in the evaluation and assessment of myocardial perfusions." It's stated as an "adjunct tool." While software verification and validation were done, indicating standalone technical performance testing, no specific "algorithm only" performance metrics comparable to a clinical study (e.g., sensitivity/specificity for a clinical outcome) are reported. The focus was on software functionality and compliance with standards.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • This information is not available as no clinical study with established ground truth was conducted. For the non-clinical software tests, the "ground truth" would be determined by the software's specified design outputs and expected behavior, not clinical expert consensus or pathology.

    8. The sample size for the training set

    • The sample size for the training set is not specified in this document. The document focuses on demonstrating substantial equivalence, not detailing the development or training of the AI components.

    9. How the ground truth for the training set was established

    • How the ground truth for the training set was established is not specified in this document. Similar to point 8, the focus of this 510(k) summary is on equivalence and non-clinical validation, not on the specifics of algorithm development and training data curation.

    K Number
    K220264
    Date Cleared
    2022-04-28

    (87 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Ever Fortune.AI Co., Ltd.

    Intended Use

    EFAI HNSeg is a software device intended to assist trained radiation oncology professionals, including, but not limited to, radiation oncologists, medical physicists, and dosimetrists, during their clinical workflows of radiation therapy treatment planning by providing initial contours of organs at risk in the head and neck region on non-contrast CT images. EFAI HNSeg is intended to be used on adult patients only.

    The contours are generated by deep-learning algorithms and then transferred to radiation therapy treatment planning systems. EFAI HNSeg must be used in conjunction with a DICOM-compliant treatment planning system to review and edit results generated. EFAI HNSeg is not intended to be used for decision making or to detect lesions.

    EFAI HNSeg is an adjunct tool and is not intended to replace a clinician's judgment and manual contouring of the normal organs on CT. Clinicians must not use the software generated output alone without review as the primary interpretation.

    Device Description

    EFAI RTSuite CT HN-Segmentation System, herein referred to as EFAI HNSeg, is a standalone software that is designed to be used by trained radiation oncology professionals to automatically delineate head-and-neck organs-at-risk (OARs) on CT images. This auto-contouring of OARs is intended to facilitate radiation therapy workflows.

    The device receives CT images in DICOM format as input and automatically generates the contours of OARs, which are stored in DICOM format and in RTSTRUCT modality. The device does not offer a user interface and must be used in conjunction with a DICOM-compliant treatment planning system to review and edit results. Once data is routed to EFAI HNSeg, the data will be processed and no user interaction is required, nor provided.

    The deployment environment is recommended to be a local network with an existing hospital-grade IT system in place. EFAI HNSeg should be installed on a specialized server supporting deep learning processing. The following configurations are operated only by the manufacturer:

    • Local network setting of input and output destinations;
    • Presentation of labels and their colors;
    • Processed image management and output (RTSTRUCT) file management.
    AI/ML Overview

    Here is a summary of the acceptance criteria and study information for the EFAI RTSuite CT HN-Segmentation System based on the provided document:

    Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria (Quantitative Metrics) | Reported Device Performance (EFAI HNSeg) |
    | --- | --- |
    | Non-inferiority to the predicate device (AccuContour™) with a non-inferiority limit of 0.1 Dice coefficient | EFAI HNSeg was non-inferior to the predicate (AccuContour™) within the 0.1 Dice non-inferiority limit. |
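
    A minimal sketch of a paired non-inferiority check on per-structure Dice differences (device minus predicate) against the 0.1 margin quoted above; the study's actual statistical method is not stated, so the differences below are simulated:

```python
# Paired non-inferiority check on per-structure Dice differences against a 0.1 margin.
# The differences are simulated for illustration; they are not study data.
import numpy as np

rng = np.random.default_rng(2)
diff = rng.normal(0.01, 0.05, 200)                 # hypothetical per-structure Dice differences
mean, se = diff.mean(), diff.std(ddof=1) / np.sqrt(len(diff))
lower_95 = mean - 1.96 * se                        # normal-approximation lower confidence bound
print(f"mean diff = {mean:.3f}, 95% lower bound = {lower_95:.3f}")
print("non-inferior (bound > -0.1):", lower_95 > -0.1)
```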

    Study Information

    1. Sample size used for the test set and data provenance:

      • Test Set Size: Not explicitly stated in the provided text.
      • Data Provenance: Not explicitly stated in the provided text (e.g., country of origin, retrospective or prospective).
    2. Number of experts used to establish the ground truth for the test set and their qualifications: Not explicitly stated in the provided text for the test set.

    3. Adjudication method for the test set: Not explicitly stated in the provided text.

    4. Multi-Reader Multi-Case (MRMC) comparative effectiveness study: No, an MRMC study was not conducted. The study was a "non-inferiority standalone performance test" comparing the device's output to a predicate device. It did not involve comparing human readers with and without AI assistance to determine an effect size.

    5. Standalone performance study: Yes, a standalone performance test was done. The document states: "To establish the contour performance of EFAI HNSeg, a non-inferiority standalone performance test was performed." This study compared the device's automatically generated contours against those of a predicate device.

    6. Type of ground truth used: The ground truth for contour performance, though not explicitly detailed in its establishment, was used to compare against the device's output and the predicate device's output. Given the context of segmenting "organs at risk," it can be inferred that the ground truth would typically be expert-annotated contours. The comparison was specifically against the performance of a legally marketed predicate device (AccuContour™) which itself would have established its own performance against a form of ground truth or clinical standard.

    7. Sample size for the training set: Not explicitly stated in the provided text.

    8. How the ground truth for the training set was established: Not explicitly stated in the provided text. However, for deep learning models like EFAI HNSeg, training set ground truth for segmentation would typically be established through expert manual contouring of OARs on CT images by qualified professionals (e.g., radiation oncologists, medical physicists, dosimetrists).


    Page 1 of 2