Search Results

Found 5 results

510(k) Data Aggregation

    K Number
    K243558
    Device Name
    Canvas Dx
    Manufacturer
    Date Cleared
    2025-04-11

    (144 days)

    Product Code
    Regulation Number
    882.1491
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · is PCCP Authorized · Third-party · Expedited review
    Intended Use

    Canvas Dx is intended for use by healthcare providers as an aid in the diagnosis of Autism Spectrum Disorder (ASD) for patients ages 18 months through 72 months who are at risk for developmental delay based on concerns of a parent, caregiver, or healthcare provider.

    The device is not intended for use as a stand-alone diagnostic device but as an adjunct to the diagnostic process.

    Device Description

    Canvas Dx is a prescription diagnostic aid for healthcare professionals (HCP) considering the diagnosis of Autism Spectrum Disorder (ASD) in patients 18 months through 72 months of age at risk for developmental delay. The subject device is identical to the Cognoa ASD Diagnosis Aid which was authorized under DEN200069 and was renamed Canvas Dx shortly thereafter. Canvas Dx consists of Software as a Medical Device (SaMD) together with several medical device data system (MDDS) components. The SaMD components consist of the following:

    • Device inputs:
      • Device Input 1: The answers to the Caregiver Questionnaire
      • Device Input 2: Patient Video Analysis
      • Device Input 3: The answers to the Healthcare Provider Questionnaire
    • A machine learning (ML) algorithm ('Algorithm'), modeled after standard medical evaluation methodologies, that drives the device outputs.
    • Device outputs:
      • 'Positive for autism'
      • 'Negative for autism'
      • 'Indeterminate'

    The MDDS components that are compatible with the SaMD components include the following:

    • A caregiver-facing mobile application, which provides Device Input 1;
    • A video analyst system, which provides Device Input 2;
    • A healthcare provider portal, which provides Device Input 3;
    • Several supporting software and backend services and infrastructure, including privacy and security encryption and infrastructure in compliance with HIPAA and other best practices.

    The subject of this submission is the inclusion of a Predetermined Change Control Plan (PCCP) that allows updates to the Canvas Dx model and performance thresholds.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for Canvas Dx primarily describe the Predetermined Change Control Plan (PCCP) for the device, rather than a new standalone clinical study proving the device meets acceptance criteria. The information regarding device performance and clinical validation directly points to the predicate device, Cognoa ASD Diagnosis Aid (DEN200069), stating that Canvas Dx is identical to it.

    Therefore, the acceptance criteria and study details provided are those for the original Cognoa ASD Diagnosis Aid (DEN200069) clearance.

    Here's an analysis based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the reported performance metrics of the predicate device (Cognoa ASD Diagnosis Aid, DEN200069), which Canvas Dx claims to replicate.

    | Metric | Acceptance Criteria (Implied by Predicate Performance) | Reported Device Performance (from DEN200069) |
    |---|---|---|
    | Positive Predictive Value (PPV) | Not explicitly stated as a minimum; established by predicate | 80.77% (CI: 70.27%-88.82%) |
    | Negative Predictive Value (NPV) | Not explicitly stated as a minimum; established by predicate | 98.25% (CI: 90.61%-99.96%) |
    | Determinate Rate | Not explicitly stated as a minimum; established by predicate | 31.76% (135/425); the quoted CI of 63.58%-87.67% is inconsistent (see note below) |
    | Sensitivity | Not explicitly stated as a minimum; established by predicate | 98.44% (CI: 91.6%-99.96%) |
    | Specificity | Not explicitly stated as a minimum; established by predicate | 78.87% (CI: 67.56%-87.67%) |

    Note on Determinate Rate: The source pairs the Determinate Rate of "31.76%" with a CI of "63.58%-87.67%", which cannot be its confidence interval. The point estimate itself is consistent with the predicate data: DEN200069 reports a No Result rate of 68.24% (290/425), making the determinate rate its complement, 31.76% (135/425). The quoted interval appears to be a transcription error that splices the lower bound of the No Result rate CI (63.58%) with the upper bound of the specificity CI (87.67%).
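As a quick sanity check on that note, the determinate rate and an approximate 95% confidence interval can be recomputed from the DEN200069 counts (290 of 425 outputs were "No Result", hence 135 of 425 were determinate). The Wilson score interval below is a standard approximation, not necessarily the exact method used in the submission:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# DEN200069 reports 290/425 'No Result' outputs (68.24%),
# so 135/425 outputs were determinate.
determinate, total = 425 - 290, 425
rate = determinate / total  # 31.76%
lo, hi = wilson_ci(determinate, total)
print(f"determinate rate = {rate:.2%}, 95% CI ~ ({lo:.2%}, {hi:.2%})")
```

The interval comes out in the high-20s to mid-30s percent range, nowhere near the quoted 63.58%-87.67%, confirming that the quoted CI cannot belong to the 31.76% point estimate.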

    Study Proving Device Meets Acceptance Criteria

    The details provided refer to the original clinical validation study for the Cognoa ASD Diagnosis Aid (DEN200069), as no new clinical testing was performed for the Canvas Dx submission (K243558) itself, which focuses on a Predetermined Change Control Plan (PCCP).

    1. Sample size used for the test set and the data provenance:

      • Sample Size: Not explicitly stated as an exact number of patients in the provided text. The study was described as a "prospective, double-blinded, single-arm" study conducted at "14 sites".
      • Data Provenance: The document does not specify the country of origin of the data. It was a prospective study.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Number of Experts: "3 clinical specialists."
      • Qualifications: The specific qualifications (e.g., number of years of experience, specific sub-specialties beyond "clinical specialists") are not provided in the text.
    3. Adjudication method for the test set:

      • The document states "Based on review by 3 clinical specialists" for establishing ground truth. It does not specify the adjudication method (e.g., 2+1, 3+1, majority vote, consensus meeting) used by these three specialists.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No information about an MRMC study or the effect size of human reader improvement with AI assistance is provided. The device is described as an "aid in the diagnosis" and "adjunct to the diagnostic process," implying a human-in-the-loop, but the clinical study described is a direct comparison to a "clinical reference standard," not a human-AI team comparison.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, the performance metrics (PPV, NPV, Sensitivity, Specificity, Determinate Rate) presented are representative of the algorithm's standalone performance in comparison to the clinical reference standard. The device inputs are collected from caregivers and healthcare providers, but the algorithm itself generates the "Positive for autism," "Negative for autism," or "Indeterminate" output. The text explicitly states, "The device is not intended for use as a stand-alone diagnostic device but as an adjunct to the diagnostic process," which refers to its clinical use case, but the performance values provided relate to the algorithm's direct output on the test set.
    6. The type of ground truth used:

      • "Clinical reference standard." The exact components of this clinical reference standard (e.g., ADOS, ADI-R, expert clinical diagnosis, combination) are not specified but implied to be a robust, recognized method for ASD diagnosis.
    7. The sample size for the training set:

      • The sample size for the training set is not provided in the document.
    8. How the ground truth for the training set was established:

      • This information is not provided in the document.

    K Number
    K243891
    Date Cleared
    2025-03-26

    (98 days)

    Product Code
    Regulation Number
    882.1491
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · is PCCP Authorized · Third-party · Expedited review
    Intended Use

    The EarliPoint System device is indicated as a tool to aid qualified clinicians in the diagnosis and assessment of Autism Spectrum Disorder (ASD) in children ages 16 months through 30 months, who are at risk based on concerns identified by a parent, caregiver, or healthcare provider.

    Device Description

    The EarliPoint system uses an eye tracker to capture the patient's looking behavior while viewing a series of videos. The system then remotely analyzes the looking behavior data using software and outputs a diagnosis of the patient's ASD status and associated developmental delay indices.
    The EarliPoint System device consists of the following:
    • Eye-tracking module and a separate Operator Module that can control the Eye-tracking module remotely. The patient sits on a chair and the Eye-tracking module is adjusted by the operator such that the patient's eyes are within the specification of the eye tracking window.
    • Eye-tracking module captures the patient visual response to social information provided in the form of a series of age-appropriate videos.
    • Operator's module is used to initiate and monitor the session remotely
    • WebPortal securely stores all patient information, analyzes the eye tracking data, and outputs the results. Users can retrieve the results directly from the web-portal.
    • Artificial intelligence software analyzes the eye-tracking data and provides a diagnosis for ASD. In addition, it also outputs 3 developmental delay indices (called EarliPoint Severity Indices) that proxy the ADOS-2 and Mullen validated ASD instruments:
      • Social Disability Index correlates with and proxies the ADOS-2
      • Verbal Ability Index correlates with and proxies the age-equivalent Mullen Verbal Ability score
      • Non-verbal Ability Index correlates with and proxies the age-equivalent non-verbal Mullen Ability score

    AI/ML Overview

    The EarliPoint System device's acceptance criteria and the study proving its performance are detailed below. It's important to note that the provided text is an FDA 510(k) clearance letter and summary, which primarily focuses on demonstrating substantial equivalence to a predicate device rather than presenting a full clinical study report. Therefore, some information, particularly granular details about study design, expert qualifications, or the exact training process, might not be explicitly stated to the level one would find in a peer-reviewed publication.

    Acceptance Criteria and Reported Device Performance

    The core acceptance criteria for the EarliPoint System, as demonstrated in the pivotal study, revolve around its ability to accurately diagnose Autism Spectrum Disorder (ASD) in comparison to expert clinical diagnosis. The key metrics are Sensitivity and Specificity.

    Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (Implied/Expected for a Diagnostic Aid) | Reported Performance (Full Study Population) | Reported Performance (CertainDx Subpopulation) |
    |---|---|---|---|
    | Sensitivity | High (to correctly identify individuals with ASD) | 71% (157/221) | 78.0% (117/150) |
    | Specificity | High (to correctly identify individuals without ASD) | 80.7% (205/254) | 85.4% (158/185) |

    Note: The document does not explicitly state pre-defined acceptance thresholds for sensitivity and specificity. The reported performance suggests the levels that were considered acceptable for clearance.
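The reported fractions can be reproduced directly from the confusion-matrix counts given in the table; a minimal sketch:

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Full study population: 157 of 221 patients with ASD flagged positive,
# 205 of 254 patients without ASD flagged negative.
print(f"sensitivity = {sensitivity(157, 221 - 157):.1%}")  # ~71.0%
print(f"specificity = {specificity(205, 254 - 205):.1%}")  # ~80.7%

# CertainDx subpopulation: 117/150 and 158/185.
print(f"sensitivity (CertainDx) = {sensitivity(117, 150 - 117):.1%}")  # ~78.0%
print(f"specificity (CertainDx) = {specificity(158, 185 - 158):.1%}")  # ~85.4%
```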

    Study Details

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 475 evaluable patients, from an initial enrollment of 500; 25 patients were excluded for missing data on either the device or the control diagnosis.
    • Data Provenance:
      • Country of Origin: United States (six sites).
      • Retrospective or Prospective: Prospective.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated how many individual experts were used across the six sites, but the ground truth was established by "expert clinicians" as the "current best practice for diagnosis of ASD."
    • Qualifications of Experts: Not explicitly detailed, but they are referred to as "expert clinicians" in "specialized developmental disabilities centers," implying specialized training and experience in diagnosing ASD.

    4. Adjudication Method for the Test Set

    • The document does not explicitly describe an adjudication method for the expert clinical diagnosis (ground truth). It refers to it as the "current best practice," suggesting that the consensus or standard diagnostic process by qualified clinicians was deemed sufficient as the reference standard.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or performed in the context of human readers improving with AI vs. without AI assistance. The study described is a direct comparison of the EarliPoint system's diagnosis (algorithm only) against expert clinical diagnosis, not an AI-assisted human reader study.

    6. Standalone (Algorithm Only) Performance

    • Was a standalone performance study done? Yes, the presented sensitivity and specificity values are for the EarliPoint System's diagnosis alone (algorithm only), without human-in-the-loop assistance for interpretation of the device's output itself for the primary outcome. The system produces a diagnosis for ASD.

    7. Type of Ground Truth Used

    • Type of Ground Truth: "Expert clinician diagnosis (current best practice for diagnosis of ASD)." This is a form of expert consensus or clinical standard of care, rather than pathology or long-term outcomes data.

    8. Sample Size for the Training Set

    • The document does not provide the sample size for the training set. The clinical study details refer to a "pivotal study" used for evaluating safety and effectiveness, which serves as the test set for the device, rather than data used for initial model training.

    9. How Ground Truth for the Training Set was Established

    • The document does not provide details on how the ground truth for the training set was established. Since the pivotal study's data is described as the test set (evaluable N=475), any training data and its associated ground truth establishment would have occurred prior to this specific study and are not disclosed in this regulatory submission summary.

    K Number
    K230337
    Device Name
    EarliPoint
    Date Cleared
    2023-06-29

    (142 days)

    Product Code
    Regulation Number
    882.1491
    Reference & Predicate Devices
    Predicate For
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · is PCCP Authorized · Third-party · Expedited review
    Intended Use

    The EarliPoint System is indicated for use in specialized developmental disabilities centers as a tool to aid clinicians in the diagnosis and assessment of ASD patients ages 16 months through 30 months.

    Device Description

    The device is a more compact version of the predicate device but otherwise has similar functions and features. The system uses an eye tracker to capture the patient's looking behavior while viewing a series of videos. The system then remotely analyzes the looking behavior data using software and outputs a diagnosis of the patient's ASD status and assesses the symptoms associated with ASD.

    The system has two modules:

    EarliPoint System consists of the following:

    • Eye-tracking module and a separate Operator Module that can control the Eye-tracking module remotely. The patient sits on a chair and the Eye-tracking module is adjusted by the operator such that the patient's eyes are within the specification of the eye tracking window.
    • Eye-tracking module captures the patient visual response to social information provided in the form of a series of age-appropriate videos.
    • Operator's module is used to initiate and monitor the session remotely.
    • WebPortal securely stores all patient information, analyzes the eye tracking data, and outputs the results. Users can retrieve the results directly from the web-portal.
    • Artificial intelligence software analyzes the eye-tracking data and provides a diagnosis for ASD. In addition, it also outputs 3 indices (called EarliPoint Severity Indices) that proxy the ADOS-2 and Mullen validated ASD instruments:
      • Social Disability Index correlates with and proxies the ADOS-2
      • Verbal Ability Index correlates with and proxies the age-equivalent Mullen Verbal Ability score
      • Non-verbal Ability Index correlates with and proxies the age-equivalent non-verbal Mullen Ability score

    The eye-tracker used in the EarliPoint device has similar capability as the eye tracker used in the predicate device.

    AI/ML Overview

    The provided text describes a 510(k) submission for a modified EarliPoint System, which is an aid for diagnosing and assessing Autism Spectrum Disorder (ASD). The submission details performance testing and a pivotal clinical study.

    Here's an analysis of the acceptance criteria and the study proving the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria in this submission are derived from the clinical study's performance metrics against a reference standard (expert clinical diagnosis for ASD).

    | Performance Metric | Acceptance Criteria | Reported Performance (EarliPoint mITD) | Reported Performance (EarliPoint CertainDx) |
    |---|---|---|---|
    | Sensitivity | Not explicitly stated as a pre-defined criterion; observed performance | 71% (95% CI: 64.6%-76.9%) | 78.0% (95% CI: 70.5%-84.3%) |
    | Specificity | Not explicitly stated as a pre-defined criterion; observed performance | 80.7% (95% CI: 75.3%-85.4%) | 85.4% (95% CI: 79.5%-90.2%) |
    | Safety | No serious adverse events related to device use | No reported serious adverse events | No reported serious adverse events |
    | Correlation with ADOS-2 | Positive correlation with EarliPoint Social Disability Index | Correlates with and proxies ADOS-2 | Correlates with and proxies ADOS-2 |
    | Correlation with Mullen Verbal Ability | Positive correlation with EarliPoint Verbal Ability Index | Correlates with and proxies age-equivalent Mullen Verbal Ability score | Correlates with and proxies age-equivalent Mullen Verbal Ability score |
    | Correlation with Mullen Non-verbal Ability | Positive correlation with EarliPoint Non-verbal Ability Index | Correlates with and proxies age-equivalent non-verbal Mullen Ability score | Correlates with and proxies age-equivalent non-verbal Mullen Ability score |

    Note: The document does not explicitly state pre-defined numerical acceptance criteria for sensitivity and specificity. The reported performance is the outcome of the study, and its acceptance implies these values were deemed sufficient by the FDA for substantial equivalence.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 475 evaluable patients for primary and secondary endpoint analysis, from a total enrollment of 500; 25 patients had missing data for either the device or the control diagnosis.
    • Data Provenance:
      • Country of Origin: United States
      • Retrospective or Prospective: Prospective
      • Multi-center: Six sites

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified. The text mentions "expert clinicians" but does not quantify the number of individual experts or how many were involved per case.
    • Qualifications of Experts: Referred to as "expert clinicians" and their diagnosis is considered the "current best practice for diagnosis of ASD." No further specific qualifications like years of experience or board certifications are provided in the document.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The text mentions "expert clinician diagnosis" as the reference standard but does not detail how consensus was reached if multiple experts were involved or if there was a single expert per case.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted in the traditional sense of comparing human readers with AI assistance versus human readers without AI assistance.
    • Effect Size: Therefore, no effect size for human reader improvement with AI assistance is reported. The study's design was a within-subject comparison where all patients were evaluated by both the EarliPoint system and expert clinicians, comparing the device's diagnostic output against the expert diagnosis.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Standalone Performance: Yes, the study evaluated the "EarliPoint System diagnosis" relative to the expert clinical diagnosis. The device's output (ASD diagnosis and severity indices) is generated by "Artificial intelligence software [that] analyzes the eye-tracking data and provides a diagnosis for ASD." This indicates a standalone performance evaluation of the algorithm. The "EarliPoint CertainDx" analysis further refined this by focusing on cases where clinicians were certain of the diagnosis, still representing the device's standalone output being compared to this subset of expert diagnoses.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: "Expert clinician diagnosis (current best practice for diagnosis of ASD)." This is a form of expert consensus or clinical standard of care. It is further supported by correlation with validated ASD instruments (ADOS-2 and Mullen).

    8. The Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. The clinical study described (the pivotal study) is an evaluation study (test set) for the device's performance, not the dataset used to train the AI algorithm.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly state how the ground truth for the training set was established. While it details the ground truth for the test set (expert clinician diagnosis), it does not provide information on the training data.

    K Number
    K213882
    Date Cleared
    2022-06-08

    (177 days)

    Product Code
    Regulation Number
    882.1491
    Reference & Predicate Devices
    Predicate For
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · is PCCP Authorized · Third-party · Expedited review
    Intended Use

    The EarliPoint System is indicated for use in specialized developmental disabilities centers as a tool to aid clinicians in the diagnosis and assessment of Autism Spectrum Disorder (ASD) for patients ages 16 months through 30 months.

    Device Description

    The EarliPoint System is a medical device for diagnosis of Autism Spectrum Disorder (ASD) in children.

    EarliPoint System consists of the following:

    • EarliPoint WebPortal to enter the patient information and for access to the patient evaluation results,
    • EarliPoint Device with eye-tracking capability captures the patient visual response to social information provided in the form of a series of age-appropriate videos
    • Artificial intelligence software analyzes the eye-tracking data and provides a diagnosis for ASD.
    • Eye-tracking data also outputs 3 indices (called EarliPoint Severity Indices) that proxy the ADOS-2 and Mullen validated ASD instruments
      • Social Disability Index correlates and proxies ADOS-2
      • Verbal Ability Index correlates and proxies the age equivalent Mullen Verbal Ability score
      • Non-verbal Ability Index correlates and proxies the age equivalent non-verbal Mullen Ability score
    AI/ML Overview

    The EarliPoint System, a device for diagnosing Autism Spectrum Disorder (ASD), underwent a pivotal study to demonstrate its safety and effectiveness.

    Here's a breakdown of the acceptance criteria and study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of "acceptance criteria" but rather shows the device's performance (sensitivity and specificity) relative to a predicate device (CanvasDx) and a "gold standard" expert clinical diagnosis. The implication is that meeting or exceeding the predicate's performance, especially for certain populations, was a key aspect of acceptance.

    | Population | Sensitivity Mean (n/N) [95% CI] | Specificity Mean (n/N) [95% CI] |
    |---|---|---|
    | CanvasDx (mITD), N=425 | 51.6% (63/122) [42.8%-60.5%] | 18.5% (56/303) [14.3%-23.3%] |
    | EarliPoint (mITD), N=475 | 71% (157/221) [64.6%-76.9%] | 80.7% (205/254) [75.3%-85.4%] |
    | EarliPoint CertainDx, N=335 | 78.0% (117/150) [70.5%-84.3%] | 85.4% (158/185) [79.5%-90.2%] |
    | EarliPoint UncertainDx, N=140 | 56.3% (p=1.00) | 68.1% (p=0.69) |

    Note: The document states: "When compared to the predicate, CanvasDx, the sensitivity and specificity of the EarliPoint device are higher than the CanvasDx and hence the two devices are substantially equivalent." This implies that outperforming the predicate was a key acceptance criterion.

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 475 evaluable patients for primary and secondary endpoint analysis (from an initial enrollment of 500).
    • Data Provenance: The study was "multi-center" with data collected from "six sites in the United States." The study was "prospective."

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated, but the ground truth was established by "expert clinicians" at "six sites." The document refers to "expert clinician diagnosis" and "clinicians' reference diagnoses."
    • Qualifications of Experts: Described as "expert clinician diagnosis (current best practice for diagnosis of ASD)." Specific qualifications (e.g., years of experience, specialization) are not detailed.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe an adjudication method for conflicting expert opinions. It refers to a "gold standard of the expert clinician diagnosis." It does, however, categorize the ground truth into "Certain Dx Population" (clinicians' certainty rating > 80%) and "Uncertain Dx Population" (clinicians' certainty rating ≤ 80%), indicating some level of consideration for diagnostic confidence.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC comparative effectiveness study was explicitly mentioned in the provided text, comparing human readers with AI assistance vs. without AI assistance. The study focuses on the device's performance compared to human expert diagnosis as the ground truth. The device is intended as an "aid clinicians," suggesting a human-in-the-loop scenario, but a specific MRMC study design is not detailed.

    6. Standalone (Algorithm Only) Performance

    • The reported sensitivity and specificity values represent the standalone performance of the EarliPoint System's algorithm in diagnosing ASD, as it provides a "diagnosis for ASD" from analyzing eye-tracking data. The study "evaluated for ASD by both the EarliPoint system and by expert clinician diagnosis."

    7. Type of Ground Truth Used

    • The primary ground truth used was "expert clinician diagnosis (current best practice for diagnosis of ASD)."
    • Additionally, the study correlated the three EarliPoint Severity Indices (Social Disability, Verbal Ability, Non-verbal Ability) against corresponding "expert clinical instruments of ADOS-2 and Mullen." This indicates a reliance on established clinical diagnostic tools as part of the expert ground truth.

    8. Sample Size for the Training Set

    • The document does not provide the sample size used for the training set. It only describes the design and results of the pivotal clinical study (test set).

    9. How the Ground Truth for the Training Set was Established

    • The document does not provide information on how the ground truth for the training set was established. It focuses solely on the test set's ground truth.

    K Number
    DEN200069
    Manufacturer
    Date Cleared
    2021-06-02

    (211 days)

    Product Code
    Regulation Number
    882.1491
    Type
    Direct
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · is PCCP Authorized · Third-party · Expedited review
    Intended Use

    The Cognoa ASD Diagnosis Aid is intended for use by healthcare providers as an aid in the diagnosis of Autism Spectrum Disorder (ASD) for patients ages 18 months through 72 months who are at risk for developmental delay based on concerns of a parent, caregiver, or healthcare provider.

    The device is not intended for use as a stand-alone diagnostic device but as an adjunct to the diagnostic process.

    Device Description

    The Cognoa ASD Diagnosis Aid is a software as a medical device (SaMD) that utilizes a machine-learning algorithm that receives independent information from caregivers or parents, trained analysts, and healthcare professionals (HCPs) to aid in the diagnosis of ASD. It consists of multiple software applications and hardware platforms. Input data is acquired via a Mobile App, a Video Analyst Portal, and a HCP Portal.

    • Mobile App: User interface (UI) for the caregiver or parent to upload videos of the patient via Wi-Fi connection and answer questions about key developmental behaviors. Interfaces with the Application Programming Interface (API) server for transmission and management of patient data. Compatible with both iOS (versions 12 and 13) and Android platforms (versions 9 and 10).
    • Video Analyst Portal: UI for trained analysts to review uploaded patient videos remotely and answer questions about the patients' behaviors observed in the videos.
    • HCP Portal: UI for the HCP to answer questions about key developmental behaviors for the patient's age group, view device output, and access the interactive dashboard to view all patient results, patient videos, answers to questionnaires administered, and device performance data. Compatible with computer operating systems macOS (Catalina or Mojave) and Windows 10, and browsers Safari (versions 12 or 13) and Chrome (versions 84 or 85).

    Following analysis of the input data, the Cognoa ASD Diagnosis Aid machine-learning algorithm produces a single scalar value between 1 and 6, which is then compared to preset thresholds to determine the classification. If the value is greater than the upper threshold, the device output is 'Positive for ASD.' If the value is less than the lower threshold, the device output is 'Negative for ASD.' If the available information does not allow the algorithm to render a reliable result, the device output is 'No Result.'
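The two-threshold output logic described above can be sketched as follows. The actual preset threshold values are not disclosed in the summary, so the LOWER and UPPER values here are purely hypothetical placeholders:

```python
def classify(score: float, lower: float, upper: float) -> str:
    """Map the algorithm's scalar output (1-6) onto the three
    device outputs using the two preset thresholds."""
    if not 1 <= score <= 6:
        raise ValueError("score must lie between 1 and 6")
    if score > upper:
        return "Positive for ASD"
    if score < lower:
        return "Negative for ASD"
    # Scores between the thresholds do not support a reliable result.
    return "No Result"

# Hypothetical thresholds -- the real values are proprietary / undisclosed.
LOWER, UPPER = 2.5, 4.5
print(classify(5.2, LOWER, UPPER))  # Positive for ASD
print(classify(1.8, LOWER, UPPER))  # Negative for ASD
print(classify(3.0, LOWER, UPPER))  # No Result
```

Note that the band between the thresholds is what produces the 'No Result' rate discussed in the performance data: widening the band trades determinate output rate for higher confidence in the determinate calls.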
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the Cognoa ASD Diagnosis Aid meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    | Metric (Objective) | Acceptance Criteria (Target) | Reported Device Performance (Point Estimate) | 95% Confidence Interval |
    |---|---|---|---|
    | Positive Predictive Value (PPV) | Greater than 65% | 80.77% (63/78) | 70.27%, 88.82% |
    | Negative Predictive Value (NPV) | Greater than 85% | 98.25% (56/57) | 90.61%, 99.96% |
    | Sensitivity | Not an explicit acceptance criterion; evaluated as a secondary objective | 98.44% (63/64) | 91.6%, 99.96% |
    | Specificity | Not an explicit acceptance criterion; evaluated as a secondary objective | 78.87% (56/71) | 67.56%, 87.67% |
    | No Result Rate | No explicit threshold; assessed as a primary objective for an aid in diagnosis | 68.24% (290/425) | 63.58%, 72.64% |

    Conclusion on Acceptance: The device met both primary effectiveness acceptance criteria: PPV (80.77% > 65%) and NPV (98.25% > 85%).
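    The point estimates in the table can be reproduced directly from the reported counts. The sketch below recomputes them and adds a Wilson score interval as an approximation; the summary most likely reports exact (Clopper-Pearson) intervals, so the bounds will differ slightly from those tabulated.

```python
from math import sqrt


def proportion(successes: int, total: int) -> float:
    """Point estimate of a binomial proportion."""
    return successes / total


def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for k successes in n trials
    (an approximation to the exact interval likely used in the summary)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half


# Counts reported in the table above
ppv = proportion(63, 78)          # 80.77%
npv = proportion(56, 57)          # 98.25%
sens = proportion(63, 64)         # 98.44%
spec = proportion(56, 71)         # 78.87%
no_result = proportion(290, 425)  # 68.24%

print(f"PPV = {ppv:.2%}, 95% CI ~ {wilson_ci(63, 78)}")
```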

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size:
      • Test Set for Analysis: 425 subjects successfully completed both the device assessment and the specialist assessment (clinical reference standard).
      • Subjects with Device Output (Positive/Negative for ASD): Of the 425 completers, 135 subjects received a definitive "Positive for ASD" or "Negative for ASD" output from the device. This subset was used to calculate the performance metrics (PPV, NPV, sensitivity, specificity).
    • Data Provenance:
      • Country of Origin: United States.
      • Retrospective or Prospective: Prospective. The study was designed and conducted specifically to evaluate the device.

    3. Number of Experts and Their Qualifications for Ground Truth

    • Number of Experts: Up to three specialists were involved in establishing the clinical reference standard (ground truth) for each patient. This included a site-specific specialist and one or two central specialist clinicians.
    • Qualifications of Experts: The text states they were "specialists" and "specialist clinicians" using the DSM-5 criteria, implying they were qualified healthcare professionals with expertise in diagnosing ASD. While specific years of experience are not provided, their role in making a clinical diagnosis using established criteria suggests appropriate qualifications.

    4. Adjudication Method for the Test Set

    The adjudication method for establishing the clinical reference standard was a multi-expert consensus approach:

    • Initial Diagnosis: A site-specific specialist made an initial diagnosis using DSM-5 criteria.
    • First Review: A central off-site reviewing specialist clinician reviewed the case (standardized medical history, physical form, and a video of the diagnostic encounter).
    • Agreement: If the first central reviewer agreed with the site diagnosing clinician, the diagnosis was considered validated.
    • Disagreement/Second Review: If the first central reviewer disagreed, the case was referred to a second reviewing specialist clinician.
    • Resolution: "Majority rule was used to resolve discrepancies between the two central reviewers and the site diagnosing specialist who all evaluated the same subjects." This can be characterized as a 2+1 consensus model (2 central reviewers + 1 site specialist).
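    As a rough illustration of the 2+1 consensus rule described above (hypothetical helper names, not the study's actual workflow code), assuming the second central reviewer is consulted only on disagreement:

```python
from collections import Counter


def adjudicate(site_dx: str, central_reviews: list[str]) -> str:
    """Resolve a diagnosis via the 2+1 consensus model: the site
    diagnosis stands if the first central reviewer concurs; otherwise
    a second central reviewer is added and the majority of the three
    opinions decides."""
    first = central_reviews[0]
    if first == site_dx:
        return site_dx  # validated without a second review
    second = central_reviews[1]
    votes = Counter([site_dx, first, second])
    return votes.most_common(1)[0][0]


print(adjudicate("ASD", ["Not ASD", "Not ASD"]))  # Not ASD
```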

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, a traditional MRMC comparative effectiveness study was not explicitly described in terms of comparing human readers with AI vs. without AI assistance to measure an "effect size" of improvement.
    • Type of Study: The clinical validation study was a prospective, double-blinded, single-arm study evaluating the device's performance against a clinical reference standard. It focused on the standalone performance characteristics of the device as an aid, not directly on the improvement of human readers when assisted by the AI. The human factors study involved HCPs interacting with the device interface and interpreting its outputs, but it wasn't designed as an MRMC to quantify diagnostic improvement with AI.

    6. Standalone (Algorithm Only) Performance

    • Was a standalone performance study done? Yes, the core clinical validation study effectively evaluated a form of standalone performance of the algorithm's output (Positive/Negative/No Result) against a clinical reference standard. While the algorithm receives inputs from caregivers, trained analysts, and HCPs, the evaluation of PPV, NPV, sensitivity, and specificity is a measure of the algorithm's diagnostic classification performance on the test data. The output classification is solely based on the algorithm's processing of these inputs, without further human modification of the classification itself.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert Consensus (Clinical Reference Standard). This involved the determination of clinical diagnosis based on the majority assessment of up to three specialists using the DSM-5 criteria.

    8. Sample Size for the Training Set

    • The document does not explicitly state the sample size used for the training set.
    • However, it does mention an exclusion criterion for the clinical study: "Subjects whose medical records had been included in any internal Cognoa training or validation sets." This confirms that separate training and validation sets were used, adhering to good machine learning practices, but the specific size of the training set is not provided in this regulatory summary.

    9. How Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established.
    • Given the nature of the device and the rigorous establishment of ground truth for the test set (expert consensus using DSM-5), it is highly probable that a similar, robust method involving clinical experts and diagnostic criteria would have been used for the training set ground truth, but the details are not provided in this specific excerpt.