Search Results

Found 19 results

510(k) Data Aggregation

    K Number: K250221
    Device Name: StrokeSENS ASPECTS Software Application
    Device Class: II
    Product Code: POK
    Regulation Number: 892.2060
    Date Cleared: 2025-07-01 (158 days)

    Intended Use

    StrokeSENS ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data.

    The software automatically registers images and uses an atlas to segment and analyze ASPECTS regions. StrokeSENS ASPECTS extracts image data from individual voxels in the image to provide analysis and computer analytics and relates the analysis to the atlas-defined ASPECTS regions. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECTS (Alberta Stroke Program Early CT Score).

    StrokeSENS ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup with known MCA or ICA occlusion, for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. StrokeSENS ASPECTS provides information that may be useful in the characterization of ischemic brain tissue injury during image interpretation (within 12 hours from time last known well).

    StrokeSENS ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment by providing highlighted ASPECTS regions and an automated editable ASPECTS score for clinician review. StrokeSENS ASPECTS presents the original and annotated images for concurrent reads. StrokeSENS ASPECTS additionally provides a visualization of the voxels contributing to the automated ASPECTS score.

    Limitations:

    1. StrokeSENS ASPECTS is not intended for primary interpretation of CT images. It is used to assist physician evaluation.
    2. StrokeSENS ASPECTS has been validated in patients with known MCA or ICA occlusion prior to ASPECTS scoring.
    3. Use of StrokeSENS ASPECTS in clinical settings other than brain ischemia within 12 hours from time last known well, caused by known ICA or MCA occlusions, has not been tested.
    4. StrokeSENS ASPECTS has only been validated and is intended to be used in patient populations aged over 21.

    Contraindications:

    • StrokeSENS ASPECTS is contraindicated for use on brain scans displaying neurological pathologies other than acute ischemic stroke, such as tumors or abscesses, hemorrhagic transformation, and hematoma.

    Cautions:

    • Patient Motion: Excessive patient motion leading to artifacts that make the scan technically inadequate.
    Device Description

    StrokeSENS ASPECTS is a stand-alone software device that uses machine learning algorithms to automatically process NCCT (non-contrast computed tomography) brain image data to provide an output ASPECTS score based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines.

    The post-processing image results and ASPECTS score are identified based on regional imaging features and overlaid onto the brain scan images. StrokeSENS ASPECTS provides an automated ASPECTS score, based on the input CT data, for the physician. The score indicates which ASPECTS regions are affected, identified from regional imaging features derived from non-contrast computed tomography (NCCT) brain image data. The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on the clinician's judgement.

    StrokeSENS ASPECTS can connect with other DICOM-compliant devices, to transfer NCCT scans for software processing.

    Results and images can be sent to a PACS via DICOM transfer and can be viewed on a PACS workstation or via the StrokeSENS UI or other DICOM-compatible radiological viewer.

    StrokeSENS ASPECTS provides an automated workflow which will automatically process image data received by the system in accordance with pre-configured user DICOM routing preferences.

    The principal StrokeSENS ASPECTS workflow for NCCT includes the following key steps (an illustrative sketch follows the list):

    • Receive NCCT DICOM Image
    • Automated image analysis and processing to identify and visualize the voxels which have been included in the ASPECTS score (also referred to as a 'heat map' or 'VCTA': Voxels Contributing to ASPECTS Score).
    • Automated image analysis and processing to register the subject image to an atlas to segment and highlight ASPECTS regions and to display whether or not each region is qualified as contributing to the ASPECTS score.
    • Generation of auto-generated results for review and analysis by users.
    • Generation of verified/modified result summary for archiving, once the user verifies or modifies the results.
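    The human-in-the-loop shape of this workflow can be made concrete with a short sketch. The code below is illustrative only, not vendor code: names such as `AspectsResult`, `auto_generate_result`, and `clinician_review` are hypothetical, the imaging/AI steps are represented only by their per-region outputs, and the standard ASPECTS convention of deducting one point per affected region from a maximum of 10 is assumed.

```python
# Illustrative sketch (assumptions noted above); not the vendor implementation.
from dataclasses import dataclass, field
from typing import Dict, List

# The 10 atlas-defined ASPECTS regions of one hemisphere (standard ASPECTS convention).
ASPECTS_REGIONS = ["C", "L", "IC", "I", "M1", "M2", "M3", "M4", "M5", "M6"]


@dataclass
class AspectsResult:
    affected_regions: List[str] = field(default_factory=list)  # regions flagged as contributing
    verified: bool = False                                      # set after clinician review

    @property
    def score(self) -> int:
        # One point is deducted from 10 for each affected region.
        return 10 - len(self.affected_regions)


def auto_generate_result(region_flags: Dict[str, bool]) -> AspectsResult:
    """Automated stage: region flags stand in for the registration/segmentation/AI output."""
    affected = [r for r in ASPECTS_REGIONS if region_flags.get(r, False)]
    return AspectsResult(affected_regions=affected)


def clinician_review(result: AspectsResult, edits: Dict[str, bool]) -> AspectsResult:
    """Human-in-the-loop stage: the clinician confirms or modifies the regional calls."""
    for region, is_affected in edits.items():
        if is_affected and region not in result.affected_regions:
            result.affected_regions.append(region)
        elif not is_affected and region in result.affected_regions:
            result.affected_regions.remove(region)
    result.verified = True
    return result


if __name__ == "__main__":
    auto = auto_generate_result({"M1": True, "I": True})  # model flags two regions
    print(auto.score)                                      # 8
    reviewed = clinician_review(auto, {"I": False})        # clinician overrides one call
    print(reviewed.score)                                  # 9
```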

    Once the auto-generated ASPECTS score results are available, the physician is asked to confirm that the case in question is for an ICA or MCA occlusion and is able to modify/verify the ASPECTS regional score. The ASPECTS auto-generated results, including the ASPECTS score, indication of affected side, affected ASPECTS regions and voxel-wise analysis (shown as a heatmap of voxels 'contributing to ASPECTS score'), along with the user-verified/modified result summary can be sent to the Picture Archiving and Communications System (PACS).

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study that proves the device meets those criteria, based on the provided FDA 510(k) Clearance Letter.

    Acceptance Criteria and Device Performance

    The provided text details two primary performance studies: Standalone Performance and Clinical Validation (MRMC study), along with a Clinical Validation of Voxels Contributing to ASPECTS (VCTA). The acceptance criteria are implicitly derived from the reported performance benchmarks for these studies.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria (Implicit) | Reported Device Performance |
    |---|---|
    | **Standalone Performance** | |
    | AUC-ROC for region-level clustered ROC analysis | 90.9% (95% CI = [88.7%, 93.1%]) |
    | Accuracy | 90.6% [89.7%, 91.5%] |
    | Sensitivity | 70.6% [69.2%, 72.1%] |
    | Specificity | 93.9% [93.2%, 94.7%] |
    | **Clinical Validation (Reader Improvement - MRMC)** | |
    | Statistically significant improvement in reader AUC with AI assistance vs. without AI assistance | Statistically significant improvement of 5.7%, from 68.6% (unaided) to 74.3% (aided) (p-value |
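    For reference, the standalone metrics above are the standard region-level classification measures (these are general definitions, not values or formulas specific to this submission), with affected/unaffected ASPECTS regions counted against the expert ground truth:

    \[
    \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
    \text{Specificity} = \frac{TN}{TN + FP}, \qquad
    \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},
    \]

    and the AUC-ROC summarizes the sensitivity/specificity trade-off across all operating points.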

    K Number: K251071
    Device Name: Fetal EchoScan (v1.1)
    Manufacturer: BrightHeart
    Reference & Predicate Devices: K242342
    Regulation Number: 892.2060
    Date Cleared: 2025-05-02 (25 days)

    Intended Use

    Fetal EchoScan is a machine learning-based computer-assisted diagnosis (CADx) software device indicated as an adjunct to fetal heart ultrasound examination in pregnant women aged 18 or older undergoing second-trimester anatomic ultrasound exams.

    When utilized by an interpreting physician, Fetal EchoScan provides information regarding the presence of any of the following suspicious radiographic findings:

    • overriding artery
    • septal defect at the cardiac crux
    • abnormal relationship of the outflow tracts
    • enlarged cardiothoracic ratio
    • right ventricular to left ventricular size discrepancy
    • tricuspid valve to mitral valve annular size discrepancy
    • pulmonary valve to aortic valve annular size discrepancy
    • cardiac axis deviation

    Fetal EchoScan is to be used with cardiac fetal ultrasound video clips containing interpretable 4-chamber, left ventricular outflow tract, right ventricular outflow tract standard views.

    Fetal EchoScan is intended for use as a concurrent reading aid for interpreting physicians (OB-GYN, MFM). It does not replace the role of the physician or of other diagnostic testing in the standard of care. When utilized by an interpreting physician, this device provides information that may be useful in rendering an accurate diagnosis regarding the potential presence of morphological abnormalities that might be suggestive of fetal congenital heart defects that may be useful in determining the need for additional exams.

    Fetal EchoScan is not intended for use in multiple pregnancies, cases of heterotaxy and postnatal ultrasound exams.

    Device Description

    Fetal EchoScan is a cloud-based software-only device which uses neural networks to detect suspicious cardiac radiographic findings for further review by trained and qualified physicians. Fetal EchoScan is intended to be used as an adjunct to the interpretation of the second-trimester fetal anatomic ultrasound exam performed between 18 and 24 weeks of gestation, for pregnant women aged 18 or more.

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) Clearance Letter for Fetal EchoScan v1.1:

    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" but rather presents the performance metrics achieved by the device in both standalone and reader studies. The implication is that these performance levels were deemed acceptable for clearance.

    Table 1. Standalone Performance of Fetal EchoScan v1.1 for all suspicious radiographic findings Combined

    | Metric | Acceptance Criteria (Implied) | Reported Device Performance (Worst-Case Sensitivity, Best-Case Specificity) | Reported Device Performance (Best-Case Sensitivity, Worst-Case Specificity) |
    |---|---|---|---|
    | Sensitivity for any suspicious findings | High (not numerically specified) | 0.977 (95% CI, 0.954; 0.989) | 0.987 (95% CI, 0.967; 0.995) |
    | Specificity for any suspicious findings | High (not numerically specified) | 0.977 (95% CI, 0.961; 0.987) | 0.963 (95% CI, 0.944; 0.976) |
    | Conclusive Output Rate | High (not numerically specified) | 98.8% (95% CI, 97.8; 99.3) | 98.8% (95% CI, 97.8; 99.3) |

    Table 2. Reader Study Performance of Fetal EchoScan v1.1 for all suspicious radiographic findings Combined

    | Metric | Acceptance Criteria (Implied) | AI-Aided | Unaided | Improvement (AI-Aided vs. Unaided) | DBM-OR p-value |
    |---|---|---|---|---|---|
    | ROC AUC for any suspicious findings | Significantly higher with aid | 0.974 (95% CI 0.957-0.990) | 0.825 (95% CI 0.741-0.908) | +0.149 (14.9%) | 0.002 |
    | Mean Sensitivity for any suspicious findings | Improved with aid | 0.935 (95% CI 0.892-0.978) | 0.782 (95% CI 0.686-0.878) | +0.153 (15.3%) | Not explicitly stated |
    | Mean Specificity for any suspicious findings | Improved with aid | 0.970 (95% CI 0.949-0.991) | 0.759 (95% CI 0.630-0.887) | +0.211 (21.1%) | Not explicitly stated |

    Note: The numerical acceptance criteria for "high sensitivity" and "high specificity" are not explicitly defined in the provided document, but the reported performance values surpassed what was considered acceptable by the FDA for substantial equivalence.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size (Standalone Testing): 877 clinically acquired fetal ultrasound exams.
    • Test Set Sample Size (Reader Study): 200 exams.
    • Data Provenance:
      • Country of Origin: U.S.A. and France.
      • Retrospective or Prospective: The document doesn't explicitly state whether the data was retrospective or prospective, but it mentions "clinically acquired" exams, which often implies retrospective use of existing data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three (3) pediatric cardiologists.
    • Qualifications of Experts: Pediatric cardiologists. No further details on years of experience or board certification are provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Majority voting among the three pediatric cardiologists.
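    A 2-of-3 majority vote like this one maps to a few lines of code; the sketch below is purely illustrative (hypothetical function name, not taken from the submission).

```python
# Minimal sketch of majority-vote adjudication for one binary finding (illustrative only).
from typing import Sequence


def majority_vote(expert_calls: Sequence[bool]) -> bool:
    """Ground truth is 'finding present' when more than half of the experts say so."""
    return sum(expert_calls) > len(expert_calls) / 2


# Example: two of three pediatric cardiologists mark the finding as present.
print(majority_vote([True, True, False]))   # True
print(majority_vote([False, True, False]))  # False
```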

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes.
    • Effect Size of Human Readers' Improvement with AI vs. without AI assistance:
      • ROC AUC: Humans improved by +14.9% (from 0.825 unaided to 0.974 aided), with a p-value of 0.002.
      • Mean Sensitivity: Humans improved by +15.3% (from 0.782 unaided to 0.935 aided).
      • Mean Specificity: Humans improved by +21.1% (from 0.759 unaided to 0.970 aided).

    6. Standalone Performance Study

    • Was a standalone study done? Yes.
    • Performance Metrics: Refer to Table 1 above. The AI system had a conclusive output rate of 98.8%. Sensitivity ranged from 0.977 to 0.987, and Specificity ranged from 0.963 to 0.977 for the detection of any suspicious findings, depending on how inconclusive outputs were treated.
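    The worst-case/best-case ranges come from the two ways the small fraction of inconclusive outputs can be scored. The sketch below shows one plausible bounding scheme (the clearance summary does not detail the exact handling, and all counts are made up for illustration): scoring inconclusives as negative calls gives the pessimistic sensitivity and optimistic specificity, and scoring them as positive calls gives the reverse.

```python
# Illustrative only: bounding sensitivity/specificity under two treatments of inconclusive
# outputs. Counts are invented; the actual handling is not detailed in the summary.
def bounds(tp, fn, tn, fp, inconclusive_pos, inconclusive_neg):
    # Scenario A: inconclusives scored as "no finding"
    #   -> worst-case sensitivity, best-case specificity
    sens_a = tp / (tp + fn + inconclusive_pos)
    spec_a = (tn + inconclusive_neg) / (tn + fp + inconclusive_neg)
    # Scenario B: inconclusives scored as "finding present"
    #   -> best-case sensitivity, worst-case specificity
    sens_b = (tp + inconclusive_pos) / (tp + fn + inconclusive_pos)
    spec_b = tn / (tn + fp + inconclusive_neg)
    return (sens_a, spec_a), (sens_b, spec_b)


(worst_sens, best_spec), (best_sens, worst_spec) = bounds(
    tp=300, fn=5, tn=550, fp=15, inconclusive_pos=4, inconclusive_neg=3)
print(f"Scenario A: sensitivity {worst_sens:.3f}, specificity {best_spec:.3f}")
print(f"Scenario B: sensitivity {best_sens:.3f}, specificity {worst_spec:.3f}")
```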

    7. Type of Ground Truth Used

    • Ground Truth Type: Expert consensus. Specifically, it was derived from a "truthing process in which three pediatric cardiologists assessed the presence or absence of each of the eight findings, and majority voting was used." This constitutes expert consensus.

    8. Sample Size for the Training Set

    • The document states: "The ultrasound examinations used for training and validation are entirely distinct from the examinations used in standalone testing." However, the specific sample size for the training set is not provided in the clearance letter. It only mentions that the data used for standalone testing (877 exams) and the reader study (200 exams) were distinct from the training and validation data.

    9. How the Ground Truth for the Training Set Was Established

    • The document states: "The ultrasound examinations used for training and validation are entirely distinct from the examinations used in standalone testing." However, the methodology for establishing ground truth for the training set is not explicitly detailed in the provided text. It can be inferred that a similar expert review process would have been used, but no specific details are given.

    K Number: K243614
    Device Name: Sonio Suspect
    Regulation Number: 892.2060
    Date Cleared: 2025-02-21 (91 days)

    Intended Use

    Sonio Suspect is intended to assist interpreting physicians, during or after fetal ultrasound examinations, by automatically identifying and characterizing abnormal fetal ultrasound findings on detected views, using machine learning techniques.

    The device is intended for use as a concurrent reading aid on acquired images, during and/or after fetal ultrasound examinations.

    The device provides information on abnormal findings that may be useful in rendering potential diagnosis.

    Patient management decisions should not be made solely on the results of the Sonio Suspect analysis.

    Device Description

    Sonio Suspect is a Software as a Service (SaaS) solution that aims to help interpreting physicians (referred to below as healthcare professionals, or HCPs) identify abnormal fetal ultrasound findings during and/or after fetal ultrasound examinations.

    Sonio Suspect is a web application accessible from any device connected to the internet. It can be accessed on a tablet, computer, or any other device capable of providing access to a web application.

    Sonio Suspect can be used by HCPs as a concurrent reading aid on acquired images, to assist them during and/or after fetal ultrasound examinations of gestational age (GA): from 11 weeks to 41 weeks. A concurrent read by the users means a read in which the device output is available during and/or after the fetal ultrasound examination.

    Sonio Suspect is built so that the HCP can use it at any moment: the software can process any ultrasound image file uploaded by the HCP, at any time.

    Sonio Suspect can be connected through an API to external devices (such as an ultrasound machine) to receive images.

    The Sonio Suspect workflow goes through the following steps:

    As soon as an image is received, it is automatically detected and associated with a view (and can be manually re-associated by the HCP). Then the abnormal fetal ultrasound findings linked to that view are evaluated and displayed, individually, with one of the following statuses:

    • Suspected (abnormal findings identified on the image);
    • Not Suspected (abnormal findings not identified on the image);
    • Can't be analyzed (abnormal findings not evaluated because one or more required structures were not detected, or because the selected fetal position is "other or unknown" when a specific position is required to evaluate the finding).

    Each abnormal finding status can be manually overridden to Present or Not Present by the user.
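    The per-finding status logic just described, including the manual override, can be sketched as follows. This is illustrative only and not Sonio's implementation: the class and function names (`FindingStatus`, `FindingResult`, `evaluate_finding`) and the inputs are hypothetical.

```python
# Illustrative sketch of the per-finding status workflow (hypothetical names and inputs).
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class FindingStatus(Enum):
    SUSPECTED = "Suspected"
    NOT_SUSPECTED = "Not Suspected"
    CANNOT_BE_ANALYZED = "Can't be analyzed"


@dataclass
class FindingResult:
    name: str
    auto_status: FindingStatus
    override: Optional[bool] = None  # True = Present, False = Not Present, None = no override

    @property
    def displayed_status(self) -> str:
        if self.override is None:
            return self.auto_status.value
        return "Present" if self.override else "Not Present"


def evaluate_finding(name: str, structures_detected: bool, position_known: bool,
                     model_flags_abnormal: bool) -> FindingResult:
    """Evaluate one abnormal finding linked to the automatically detected view."""
    if not structures_detected or not position_known:
        return FindingResult(name, FindingStatus.CANNOT_BE_ANALYZED)
    status = FindingStatus.SUSPECTED if model_flags_abnormal else FindingStatus.NOT_SUSPECTED
    return FindingResult(name, status)


result = evaluate_finding("example abnormal finding",
                          structures_detected=True, position_known=True,
                          model_flags_abnormal=True)
result.override = False  # the HCP manually overrides the status to "Not Present"
print(result.displayed_status)  # Not Present
```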

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    | Description | Acceptance Criteria (Implicit from validation studies) | Reported Device Performance |
    |---|---|---|
    | Standalone Performance (Algorithm only) | Sensitivity: high sensitivity desired for detecting abnormal findings. Specificity: high specificity desired to minimize false positives. | Average Sensitivity: 93.2% (95% CI: 91.6%-94.6%); Average Specificity: 90.8% (95% CI: 89.5%-92.0%). (Individual abnormal finding performance detailed in Table 3.) |
    | Clinical Performance (Human reader with AI assistance vs. without) | Reader accuracy improvement: the performance of readers assisted by Sonio Suspect should be superior to their performance when unassisted. | AUC in "Unassisted" setting: 68.9%; AUC in "Assisted" setting: 90.0%; significant difference of 21.9%. (ROC curves (Figure 1) and AUC for individual findings (Table 4) confirm consistent improvement.) |

    Detailed Study Information:

    1. Sample size used for the test set and the data provenance:

      • Standalone Test Set: 8,745 fetal ultrasound images from 1,115 exams.
      • Clinical Test Set: 750 fetal ultrasound images (between 11 and 41 weeks) evaluated by each reader, from 287 distinct exams.
      • Data Provenance: The standalone test set included data from 75 sites, with 64 located in the United States. The clinical test set included data from 47 sites, with 37 located in the United States. This indicates a mix of US and OUS (Outside US) data, explicitly representing the intended use population. The study was retrospective.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document implies that ground truth for the clinical study was based on expert consensus, as it refers to a "fully-crossed multiple case (MRMC) retrospective reader study" where readers provide a "binary determination of the presence or absence of an abnormal finding." However, the exact number of experts explicitly establishing the ground truth for the test set (as opposed to participating as readers) or their specific qualifications for ground truth establishment are not explicitly stated in the provided text. The readers themselves were:
        • 13 readers: 5 MFM (Maternal-Fetal Medicine), 6 OB/GYN (Obstetrician-Gynecologists), and 2 Diagnostic radiologists.
        • Experience: 1-30+ years' experience.
    3. Adjudication method for the test set:

      • The document states that in the clinical study, "For each image, each reader was required to provide a binary determination of the presence or absence of an abnormal finding and to provide a score representing their confidence in their annotation." It also mentions "two independent reading sessions separated by a washout period." While this describes the reader process, it does not explicitly describe an adjudication method (like 2+1 or 3+1) used to establish a definitive ground truth from multiple expert opinions. It implies that the ground truth was pre-established for the images used in the reader study.
    4. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • Yes, an MRMC comparative effectiveness study was done.
      • Effect Size: The study demonstrated a significant improvement in reader accuracy. The Area Under the Curve (AUC) for readers:
        • Without AI assistance ("Unassisted"): 68.9%
        • With AI assistance ("Assisted"): 90.0%
        • This represents a significant difference (effect size) of 21.9% in AUC.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • Yes, a standalone performance testing was conducted.
      • The results are detailed in Table 3, showing an average sensitivity of 93.2% and specificity of 90.8% for abnormal finding detection.
    6. The type of ground truth used:

      • Implicitly, expert consensus or pre-established clinical diagnosis. For the standalone study, the robust sensitivity and specificity metrics suggest comparison against a definitive "ground truth" for the presence or absence of abnormal findings. For the clinical study, readers compared their findings against this ground truth. The document does not specify if pathology or outcomes data were directly used to define the ground truth for every case, but it's common for such studies to rely on a panel of experts or established clinical reports to define the ground truth for imaging-based diagnoses.
    7. The sample size for the training set:

      • The sample size for the training set is not explicitly stated. The document mentions that the global validation dataset for standalone testing "was independent of the data used during model development (training/internal validation) and the establishment of device operating points," implying a separate training set existed, but its size is not provided.
    8. How the ground truth for the training set was established:

      • This information is not explicitly provided. It can be inferred that similar methods to the test set (e.g., expert review and consensus) would have been used, but no specifics are given in the text.

    K Number: K243294
    Device Name: Brainomix 360 e-ASPECTS
    Manufacturer: Brainomix Limited
    Device Class: II
    Regulation Number: 892.2060
    Date Cleared: 2025-02-14 (119 days)

    Intended Use

    Brainomix 360 e-ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data.

    The software automatically registers images and uses an atlas to segment and analyze ASPECTS regions. Brainomix 360 e-ASPECTS extracts image data from individual voxels in the image to provide analysis and computer analytics and relates the analysis to the atlas defined ASPECTS regions. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECTS (Alberta Stroke Program Early CT) score.

    Brainomix 360 e-ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. Brainomix 360 e-ASPECTS provides information that may be useful in the characterization of ischemic brain tissue injury during image interpretation (within 24 hours from time last known well).

    Brainomix 360 e-ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment by providing highlighted ASPECTS regions and an automated editable ASPECTS score for clinician review. Brainomix 360 e-ASPECTS additionally provides a visualization of the voxels contributing to and excluded from the automated ASPECTS score, and a calculation of the voxel volume contributing to ASPECTS score.

    Limitations:

    1. Brainomix 360 e-ASPECTS is not intended for primary interpretation of CT images. It is used to assist physician evaluation.
    2. The Brainomix 360 e-ASPECTS score should only be used for ischemic stroke patients following the standard of care.
    3. Brainomix 360 e-ASPECTS has only been validated and is intended to be used in patient populations aged over 21 years.
    4. Brainomix 360 e-ASPECTS is not intended for mobile diagnostic use. Images viewed on a mobile platform are compressed preview images and not for diagnostic interpretation.
    5. Brainomix 360 e-ASPECTS has been validated and is intended to be used on Siemens Somatom Definition scanners.

    Contraindications/Exclusions/Cautions:

    • Patient motion: Excessive patient motion leading to artifacts that make the scan technically inadequate.
    • Hemorrhagic Transformation, Hematoma.

    Device Description

    Brainomix 360 e-ASPECTS (also referred to as e-ASPECTS in this submission) is a medical image visualization and processing software package compliant with the DICOM standard and running on an off-the-shelf physical or virtual server.

    Brainomix 360 e-ASPECTS allows for the visualization, analysis and post-processing of DICOM compliant Non-contrast CT (NCCT) images which, when interpreted by a trained physician or medical technician, may yield information useful in clinical decision making.

    Brainomix 360 e-ASPECTS is a stand-alone software device which uses machine learning algorithms to automatically process NCCT brain image data to provide an output ASPECTS score based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines.

    The post-processing image results and ASPECTS score are identified based on regional imaging features and overlaid onto the brain scan images. e-ASPECTS provides an automatic ASPECTS score, based on the input CT data, for the physician. The score indicates which ASPECTS regions are affected, identified from regional imaging features derived from NCCT brain image data. The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on the clinician's judgment.

    Brainomix 360 e-ASPECTS can connect with other DICOM-compliant devices, for example to transfer NCCT scans from a Picture Archiving and Communication System (PACS) to Brainomix 360 e-ASPECTS software for processing.

    Results and images can be sent to a PACS via DICOM transfer and can be viewed on a PACS workstation or via a web user interface on any machine contained and accessed within a hospital network and firewall and with a connection to the Brainomix 360 e-ASPECTS software (e.g. a LAN connection).

    Brainomix 360 e-ASPECTS notification capabilities enable clinicians to preview images through a mobile application or via e-mail.

    Brainomix 360 e-ASPECTS email notification capabilities enable clinicians to preview images via e-mail notification with result image attachments. Images that are previewed via e-mail are compressed, are for informational purposes only, and not intended for diagnostic use beyond notification.

    Brainomix 360 e-ASPECTS is not intended for mobile diagnostic use. Notified clinicians are responsible for viewing non-compressed images on a diagnostic viewer and engaging in appropriate patient evaluation and relevant discussion with a treating physician before making care-related decisions or requests.

    Brainomix 360 e-ASPECTS provides an automated workflow which will automatically process image data received by the system in accordance with pre-configured user DICOM routing preferences.

    Once received, image processing is automatically applied. Once any image processing has been completed, notifications are sent to pre-configured users to inform that the image processing results are ready. Users can then access and review the results and images via the web user interface case viewer or PACS viewer.

    The core of the e-ASPECTS algorithm (excluding image loading and result output formatting) can be summarised in the following three key steps of the processing pipeline (an illustrative sketch follows the list):

    • Pre-processing: brain extraction from the three dimensional (3D) non-enhanced contrast CT head dataset and its reorientation/normalization by 3D spatial registration to a standard template space.
    • Delineation of the 20 (10 for each cerebral hemisphere) pre-defined ASPECTS regions of interest on the normalized 3D image.
    • Image feature extraction and heatmap generation, which consists of computing numerical values characterizing brain tissue, applying a trained predictive model to those features, and generating a 3D heatmap from the model's output to highlight regions contributing towards the ASPECTS score.
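    The three steps above can be sketched end to end in a few lines. The code below is illustrative only, not Brainomix code: the registration and atlas segmentation are represented by a pre-computed region-label volume, and the trained predictive model is stood in for by a toy scoring function.

```python
# Illustrative pipeline sketch (assumptions noted above); not the vendor implementation.
import numpy as np

NUM_REGIONS = 20  # 10 ASPECTS regions per cerebral hemisphere


def extract_region_features(volume: np.ndarray, region_labels: np.ndarray) -> np.ndarray:
    """Step 3a: one numerical value (here, mean intensity) characterizing each region."""
    return np.array([volume[region_labels == r].mean() for r in range(1, NUM_REGIONS + 1)])


def predict_region_scores(features: np.ndarray) -> np.ndarray:
    """Step 3b: placeholder for the trained model mapping region features to scores."""
    return 1.0 / (1.0 + np.exp(features - features.mean()))  # toy logistic on de-meaned values


def build_heatmap(region_labels: np.ndarray, region_scores: np.ndarray) -> np.ndarray:
    """Step 3c: project per-region scores back into voxel space as a 3D heatmap."""
    heatmap = np.zeros(region_labels.shape, dtype=float)
    for r in range(1, NUM_REGIONS + 1):
        heatmap[region_labels == r] = region_scores[r - 1]
    return heatmap


# Toy inputs standing in for a registered, skull-stripped NCCT volume and its atlas labels.
rng = np.random.default_rng(0)
volume = rng.normal(loc=35.0, scale=3.0, size=(16, 16, 16))          # pseudo-intensities
region_labels = rng.integers(1, NUM_REGIONS + 1, size=(16, 16, 16))  # pseudo atlas labels

scores = predict_region_scores(extract_region_features(volume, region_labels))
heatmap = build_heatmap(region_labels, scores)
print(heatmap.shape, round(float(heatmap.max()), 3))
```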

    The Brainomix 360 e-ASPECTS module is made available to the user through the Brainomix 360 platform. The Brainomix 360 platform is a central control unit which coordinates the execution of image processing modules that support various analysis methods used in clinical practice today:

    • Brainomix 360 e-ASPECTS (K221564) (predicate device)
    • Brainomix 360 e-CTA (K192692)
    • Brainomix 360 e-CTP (K223555)
    • Brainomix 360 e-MRI (K231656)
    • Brainomix 360 Triage ICH (K231195)
    • Brainomix 360 Triage LVO (K231837)
    • Brainomix 360 Triage Stroke (K232496)
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided text:


    Brainomix 360 e-ASPECTS Device Performance Study

    The Brainomix 360 e-ASPECTS device underwent performance testing to demonstrate its accuracy and effectiveness. This included both standalone algorithm performance and a multi-reader multi-case (MRMC) study to assess the impact of AI assistance on human readers.

    1. Acceptance Criteria and Reported Device Performance

    Digital Phantom Validation (for "volume contributing to e-ASPECTS")

    | Metric Name | Acceptance Criteria | Reported Performance | Pass/Fail |
    |---|---|---|---|
    | Absolute Bias (upper 95% CI) | 0.86 | 0.993 | Pass |

    Standalone Performance Testing (for ASPECTS score accuracy)

    | Metric Name | Acceptance Criteria (Implied by positive results) | Reported Performance (Model only) | Outcome |
    |---|---|---|---|
    | AUC | High diagnostic accuracy | 83% (95% CI: 80-86%) | Good |
    | Sensitivity | Good detection of affected regions | 69% (56-75%) | Good |
    | Specificity | Good identification of unaffected regions | 97% (80-97%) | Good |

    Multi-Reader Multi-Case (MRMC) Study (Human + AI vs. Human only for ASPECTS score accuracy)

    | Metric Name | Acceptance Criteria (Implied by statistical significance) | Human only | Human + AI assistance | Effect Size (Improvement) | Statistical Significance |
    |---|---|---|---|---|---|
    | AUC | Improvement in AUC with AI assistance | 78% | 85% | 6.4% | p=.03 (statistically significant) |
    | Sensitivity | Improvement in Sensitivity with AI assistance | 61% | 72% | 11% | Not explicitly stated as statistically significant, but driving the AUC improvement |
    | Specificity | Improvement in Specificity with AI assistance | 96% | 98% | 2% | Not explicitly stated as statistically significant, but contributing to the AUC improvement |
    | Cohen's Kappa | Improvement with AI assistance | Not explicitly stated | Improved significantly | - | Significantly improved |
    | Weighted Kappa | Improvement with AI assistance | Not explicitly stated | Improved significantly | - | Significantly improved |
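    For context (these are the standard definitions, not values from this submission): Cohen's kappa corrects raw agreement for agreement expected by chance,

    \[
    \kappa = \frac{p_o - p_e}{1 - p_e},
    \]

    where \(p_o\) is the observed proportion of agreement and \(p_e\) the proportion expected by chance; the weighted variant additionally down-weights near-miss disagreements relative to large ones, which suits an ordinal scale such as ASPECTS.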

    2. Sample Sizes and Data Provenance

    • Digital Phantom Validation Test Set: n=110 synthetic datasets
    • Standalone Performance Test Set: n=137 non-contrast CT scans
      • Data Provenance: From 3 different USA institutions (Siemens, GE, Philips, and Toshiba scanners).
      • Retrospective/Prospective: The data appears to be retrospective based on the description of patient admission dates (March 2012 and August 2023) and clinical context.
    • MRMC Study Test Set: n=140 NCCT scans
      • Data Provenance: Cases collected from various clinical sites (specific countries not explicitly stated, but the mention of US neuroradiologists for ground truth suggests US data). Scanners included Siemens, GE, Philips, and Toshiba.
      • Retrospective/Prospective: The study used "retrospective data" (explicitly stated on page 12).
    • Training Set Sample Size: The document does not specify the sample size for the training set. It mentions the algorithm is based on "machine learning" and a "trained predictive model" but provides no details on the training data.

    3. Number of Experts and Qualifications for Ground Truth Establishment

    • Standalone Performance Test Set: Three board-certified US neuroradiologists. No information on years of experience is provided.
    • MRMC Study Test Set: Three board-certified US neuroradiologists for establishing the ground truth that human readers were compared against. No information on years of experience is provided.

    4. Adjudication Method for the Test Set(s) Ground Truth

    • Standalone Performance Test Set: "Consensus of three board-certified US neuroradiologists." This implies that the ground truth was established by agreement among the three experts. The specific method (e.g., 2-out-of-3, or discussion to reach full consensus) is not detailed, but "consensus" suggests agreement.
    • MRMC Study Test Set: "Consensus of three board-certified US neuroradiologists." Similar to the standalone study, ground truth was established by consensus.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done?: Yes, an MRMC study was conducted.
    • Effect Size: The study showed a 6.4% improvement in AUC for readers with e-ASPECTS support (85%) compared to without e-ASPECTS support (78%). This improvement was statistically significant (p=.03). There was also an improvement in sensitivity (from 61% to 72%) and a small improvement in specificity (from 96% to 98%). Cohen's Kappa and weighted Kappa also improved significantly.
    • Readers: 7 clinical readers (1 "expert" neuroradiologist and 6 "non-expert" radiologists or neurologists).

    6. Standalone Performance (Algorithm Only)

    • Was it done?: Yes, a standalone performance testing was conducted.
    • Performance Metrics: The algorithm achieved an AUC of 83% (95% CI: 80-86%), with a sensitivity of 69% (56-75%) and a specificity of 97% (80-97%) on a case-level as compared to expert consensus. Area under the curve (AUC) specifically refers to overall region-level performance.

    7. Type of Ground Truth Used

    • Digital Phantom Validation: Synthetic volumes/known phantom volumes.
    • Standalone Performance Testing: Expert consensus (of three board-certified US neuroradiologists).
    • MRMC Study: Expert consensus (of three board-certified US neuroradiologists).

    8. Sample Size for the Training Set

    The document does not provide a specific sample size for the training set. It only states that the device uses "machine learning algorithms" and a "trained predictive model."

    9. How Ground Truth for Training Set Was Established

    The document does not describe how the ground truth for the training set was established. It only refers to a "trained predictive model."


    K Number: K242130
    Device Name: Koios DS
    Regulation Number: 892.2060
    Date Cleared: 2024-11-15 (116 days)

    Intended Use

    Koios Decision Support (DS) is an artificial intelligence (AI)/machine learning (ML)-based computer-aided diagnosis (CADx) software device intended for use as an adjunct to diagnostic ultrasound examinations of lesions or nodules suspicious for breast or thyroid cancer.

    Koios DS allows the user to select or confirm regions of interest (ROIs) within an image representing a single lesion or nodule to be analyzed. The software then automatically characterizes the selected image data to generate an AI/ML-derived cancer risk assessment and selects applicable lexicon-based descriptors designed to improve overall diagnostic accuracy as well as reduce interpreting physician variability.

    Koios DS software may also be used as an image viewer of multi-modality digital images, including ultrasound and mammography. The software includes tools that allow users to adjust, measure and document images, and output into a structured report.

    Koios DS software is designed to assist trained interpreting physicians in analyzing the breast ultrasound images of adult (>= 22 years) female patients with soft tissue breast lesions and/or thyroid ultrasounds of all adult (>= 22 years) patients with thyroid nodules suspicious for cancer. When utilized by an interpreting physician who has completed the prescribed training, this device provides information that may be useful in recommending appropriate clinical management.

    Limitations:
    · Patient management decisions should not be made solely on the results of the Koios DS analysis.
    · Koios DS software is not to be used for the evaluation of normal tissue, on sites of post-surgical excision, or images with doppler, elastography, or other overlays present in them.
    · Koios DS software is not intended for use on portable handheld devices (e.g. smartphones or tablets) or as a primary diagnostic viewer of mammography images.
    · The software does not predict the presence of the thyroid nodule margin descriptor, extra-thyroidal extension. In the event that this condition is present, the user may select this category manually from the margin descriptor list.

    Device Description

    Koios Decision Support (DS) is a software application designed to assist trained interpreting physicians in analyzing breast and thyroid ultrasound images. The software device is a web application that is deployed to a Microsoft IIS web server and accessed by a user through a compatible client. Once logged in and granted access to the Koios DS application, the user examines selected breast or thyroid ultrasound DICOM images. The user selects Regions of Interest (ROIs) of orthogonal views of a breast lesion or thyroid nodule for processing by Koios DS. The ROI(s) are transmitted electronically to the Koios DS server for image processing and the results are returned to the user for review.
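    The client/server round trip described above can be pictured with a small sketch. This is illustrative only: the payload shape and names below are hypothetical, since the clearance summary does not describe the actual Koios DS interface.

```python
# Illustrative request-building sketch (hypothetical payload shape, not the Koios DS API).
import json
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class RegionOfInterest:
    sop_instance_uid: str   # DICOM image containing the selected lesion/nodule view
    x: int                  # ROI bounding box, in pixels
    y: int
    width: int
    height: int


def build_analysis_request(study_uid: str, analysis_type: str,
                           rois: List[RegionOfInterest]) -> str:
    """Package the user-selected orthogonal-view ROIs for server-side processing."""
    payload = {
        "study_uid": study_uid,
        "analysis_type": analysis_type,  # e.g. "breast" or "thyroid"
        "regions_of_interest": [asdict(r) for r in rois],
    }
    return json.dumps(payload)


# Two orthogonal views of the same lesion, as a user would select them in the viewer.
request_body = build_analysis_request(
    study_uid="1.2.840.example.1",
    analysis_type="breast",
    rois=[RegionOfInterest("1.2.840.example.1.1", 120, 96, 64, 48),
          RegionOfInterest("1.2.840.example.1.2", 110, 90, 70, 50)],
)
print(request_body)
# The server's response would carry the AI/ML-derived cancer risk assessment and the
# selected lexicon-based descriptors for the interpreting physician to review.
```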

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Device Name: Koios DS Version 3.6

    1. Table of Acceptance Criteria and Reported Device Performance (Combining Breast and Thyroid where applicable):

    Standalone performance (AI engines and supporting functions):

    **Breast Engine**
    • Malignancy Risk Classifier AUC: 0.945 [0.932, 0.959] (increased from 0.929)
    • Categorical Output Sensitivity: 0.976 [0.960, 0.992] (increased from 0.97)
    • Categorical Output Specificity: 0.632 [0.588, 0.676] (increased from 0.61)
    • Sensitivity to Region of Interest: 0.012 (decreased from 0.019)
    • Sensitivity to Transducer Frequency (high frequency, >=15 MHz): AUC = 0.948 [0.917, 0.978] (increased from 0.940)
    • Sensitivity to Transducer Frequency (low freq,
    • End-to-End Breast Engine Performance: AUC = 0.946; Sensitivity = 0.975; Specificity = 0.637

    **Thyroid Engine (ACR TI-RADS, with AI Adapter)**
    • AUC: 79.8% (significant increase over average physician AUC)
    • Sensitivity (biopsy recommendation): 0.644 [0.545, 0.744] (non-significant improvement over average physician)
    • Specificity (biopsy recommendation): 0.612 [0.566, 0.658] (significant improvement over average physician)
    • Sensitivity (follow-up recommendation): 0.879 [0.812, 0.946] (non-significant improvement)
    • Specificity (follow-up recommendation): 0.495 [0.446, 0.544] (significant improvement)
    • End-to-End Thyroid Engine Performance: AUC = 0.801; Sensitivity = 0.670; Specificity = 0.603

    **Smart Click (vs. physician-selected ROIs)**
    • Non-inferiority test, Sensitivity: difference = -0.009 [-0.036, 0.018] (non-inferior)
    • Non-inferiority test, Specificity: difference = -0.018 [-0.041, 0.005] (non-inferior)
    • Non-inferiority test, AUC: difference = -0.012 [-0.029, 0.006] (non-inferior)
    • Sub-optimal ROI test: difference = 0.026 [-0.009, 0.062] (non-inferior)
    • Detection DICE coefficient: 0.913 +/- 0.075 (demonstrating precise approximation to physician ROIs)
    • Non-inferiority test, descriptor agreement: non-inferiority demonstrated for all listed descriptors (Composition, Echogenicity, Shape, Margin, Echogenic Foci subcategories); examples: Composition 0.018 [0.001, 0.035], Echogenicity -0.005 [-0.022, 0.011]

    **Image Registration & Matching**
    • No Match Rate: 0.32%
    • Average time for study preprocessing: 2.39 +/- 0.48 seconds
    • Average time for image matching: 0.22 +/- 0.12 seconds

    **OCR**
    • Breast freetext identification (by field): Breast Side 0.983; Location Type 0.948; Clock Hour 0.926; Clock Minute 0.934; CMFN 0.944; Plane 0.976
    • Thyroid freetext identification (by field): Thyroid Side 0.965; Pole 0.976; Region 0.998; Plane 0.970
    • Measurement text identification (by field): Measurement Description 0.943; Measurement Value 0.948; Unit of Measurement 0.967

    Reader (MRMC) study results:

    **Breast**
    • Significant increase in agreement.
    • Intra-operator variability (class switching rate): USE Alone 13.6%; USE + DS 10.8% (p = 0.042), a statistically significant reduction.

    **Thyroid (CRRS-3 Study)**
    • Change in average AUC (USE + DS vs. USE Alone), all readers, all data: +0.083 [0.066, 0.099] (parametric) / +0.079 [0.062, 0.096] (non-parametric)
    • US readers, US data: +0.074 [0.051, 0.098] (parametric) / +0.073 [0.049, 0.096] (non-parametric), a statistically significant improvement in overall reader performance
    • Change in average Sensitivity/Specificity of FNA (with AI Adapter + size criteria): all readers, all data: +0.084 (sensitivity), +0.140 (specificity); US readers, US data: +0.058 (sensitivity), +0.130 (specificity)
    • Change in average Sensitivity/Specificity of Follow-up (with AI Adapter + size criteria): all readers, all data: +0.060 (sensitivity), +0.206 (specificity); US readers, US data: +0.053 (sensitivity), +0.180 (specificity)
    • Inter-reader variability (relative change in TI-RADS points association): 40.7% (all readers, all data); 37.4% (US readers, US data); 49.7% (EU readers, EU data)
    • Impact on interpretation time: -23.6% (all readers, all data); -22.7% (US readers, US data); -32.4% (EU readers, EU data)
    

    6. Standalone (Algorithm Only without Human-in-the-loop) Performance:

    • Yes, for both Breast and Thyroid AI Engines, Smart Click, Image Registration and Matching, and OCR.
      • Breast Engine: AUC = 0.945; Sensitivity = 0.976; Specificity = 0.632.
      • Thyroid Engine (ACR TI-RADS, biopsy recommendation): Sensitivity = 0.644; Specificity = 0.612.
      • Thyroid Smart Click: Demonstrated non-inferiority for Sensitivity, Specificity, AUC, and descriptor agreement compared to physician-selected calipers. Detection DICE = 0.913.
      • Image Registration and Matching: Very high DICE coefficients (Breast 0.995, Thyroid 0.996) and successful match rates (>99.5%).
      • OCR Engine: High accuracy rates for identification of various freetext and measurement fields (e.g., Breast Side 0.983, Measurement Value 0.948).
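    For reference, the DICE coefficient cited above is the standard spatial-overlap measure between two regions A and B (for example, an automatically detected ROI and a reference ROI); this is the general definition, not a value from this submission:

    \[
    \mathrm{DICE}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|},
    \]

    ranging from 0 (no overlap) to 1 (perfect overlap).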

    7. Type of Ground Truth Used:

    • Malignancy Risk Classification (Breast & Thyroid AI Engines):
      • Breast: Pathology or 1-year follow-up.
      • Thyroid: Pathology results only (for standalone). Clinical study also used cyto-/histological or excisional pathology.
    • Descriptor Predictions (Thyroid Standalone): Tested objectively against ground truth pathology and subjectively for agreement with readers' descriptor categorizations.
    • Smart Click, Image Registration, OCR: Ground truth was established by manual annotations, physician-drawn ROIs, or defined objective metrics (like DICE coefficient against a reference ROI).

    8. Sample Size for Training Set:

    • Not explicitly stated for either Breast or Thyroid engines. The text mentions drawing upon a "large database of known cases" for the underlying engines and that the test sets were "set aside from the system's training data." However, the exact number of cases/images in the training set is not provided.

    9. How Ground Truth for Training Set was Established:

    • Not explicitly detailed for either Breast or Thyroid engines. The text states the engines "draw upon knowledge learned from a large database of known cases, tying image features to their eventual diagnosis, to form a predictive model." This implies that the training data had associated definitive diagnoses (e.g., from pathology or follow-up), but the process of establishing this ground truth (e.g., expert review, adjudication) for the training data is not described.

    K Number: K242342
    Device Name: Fetal EchoScan
    Regulation Number: 892.2060
    Date Cleared: 2024-11-14 (99 days)

    Intended Use

    Fetal EchoScan is a machine learning-based computer-assisted diagnosis (CADx) software device indicated as an adjunct to fetal heart ultrasound examination in pregnant women aged 18 or older undergoing second-trimester anatomic ultrasound exams.

    When utilized by an interpreting physician, Fetal EchoScan provides information regarding the presence of any of the following suspicious radiographic findings:

    • overriding artery
    • septal defect at the cardiac crux
    • abnormal relationship of the outflow tracts
    • enlarged cardiothoracic ratio
    • right ventricular to left ventricular size discrepancy
    • tricuspid valve to mitral valve annular size discrepancy
    • pulmonary valve to aortic valve annular size discrepancy
    • cardiac axis deviation

    Fetal EchoScan is to be used with cardiac fetal ultrasound video clips containing interpretable 4-chamber, left ventricular outflow tract, right ventricular outflow tract standard views.

    Fetal EchoScan is intended for use as a concurrent reading aid for interpreting physicians (OB-GYN, MFM). It does not replace the role of the physician or of other diagnostic testing in the standard of care. When utilized by an interpreting physician, this device provides information that may be useful in rendering an accurate diagnosis regarding the potential presence of morphological abnormalities that might be suggestive of fetal congenital heart defects that may be useful in determining the need for additional exams.

    Fetal EchoScan is not intended for use in multiple pregnancies, cases of heterotaxy, and postnatal ultrasound exams.

    Device Description

    Fetal EchoScan is a cloud-based software-only device which uses neural networks to detect suspicious cardiac radiographic findings for further review by trained and qualified physicians. Fetal EchoScan is intended to be used as an adjunct to the interpretation of the second-trimester fetal anatomic ultrasound exam performed between 18 and 24 weeks of gestation, for pregnant women aged 18 or more.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Fetal EchoScan device, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" as a set of predefined thresholds. Instead, it presents the performance of the device in various metrics and then concludes that these results demonstrate substantial equivalence. For the purpose of this request, I will infer the implied acceptance criteria from the reported performance and the conclusion of substantial equivalence.

    Inferred Acceptance Criteria & Reported Device Performance

    | Metric / Finding | Inferred Acceptance Criteria (implicit from conclusion of substantial equivalence) | Fetal EchoScan Performance (Worst-Case Sensitivity / Best-Case Specificity) | Fetal EchoScan Performance (Best-Case Sensitivity / Worst-Case Specificity) | Aided Reader Performance (ROC AUC) |
    |---|---|---|---|---|
    | **Standalone Performance** | | | | |
    | Any suspicious findings | High Sensitivity & High Specificity | Sensitivity: 0.977 (0.954-0.989); Specificity: 0.977 (0.961-0.987) | Sensitivity: 0.987 (0.967-0.995); Specificity: 0.963 (0.944-0.976) | N/A |
    | Overriding artery | High Sensitivity & High Specificity | Sensitivity: 0.894 (0.820-0.940); Specificity: 0.989 (0.977-0.995) | Sensitivity: 0.942 (0.880-0.973); Specificity: 0.979 (0.963-0.988) | 0.953 (0.916-0.990) |
    | Cardiac crux septal defect | High Sensitivity & High Specificity | Sensitivity: 0.905 (0.823-0.951); Specificity: 0.995 (0.985-0.998) | Sensitivity: 0.917 (0.838-0.959); Specificity: 0.989 (0.977-0.995) | 0.971 (0.943-0.999) |
    | Abnormal OT relationship | High Sensitivity & High Specificity | Sensitivity: 0.869 (0.781-0.925); Specificity: 0.991 (0.979-0.996) | Sensitivity: 0.952 (0.884-0.981); Specificity: 0.989 (0.977-0.995) | 0.972 (0.953-0.992) |
    | Enlarged CTR | High Sensitivity & High Specificity | Sensitivity: 0.955 (0.876-0.985); Specificity: 1.000 (0.993-1.000) | Sensitivity: 0.955 (0.876-0.985); Specificity: 1.000 (0.993-1.000) | 0.960 (0.930-0.989) |
    | Cardiac axis deviation | High Sensitivity & High Specificity | Sensitivity: 0.945 (0.851-0.981); Specificity: 1.000 (0.993-1.000) | Sensitivity: 0.945 (0.851-0.981); Specificity: 1.000 (0.993-1.000) | 0.967 (0.932-1.000) |
    | PV/AV size discrepancy | High Sensitivity & High Specificity | Sensitivity: 0.954 (0.914-0.975); Specificity: 0.989 (0.977-0.995) | Sensitivity: 0.954 (0.914-0.975); Specificity: 0.989 (0.977-0.995) | 0.979 (0.962-0.997) |
    | RV/LV size discrepancy | High Sensitivity & High Specificity | Sensitivity: 0.950 (0.900-0.975); Specificity: 1.000 (0.993-1.000) | Sensitivity: 0.950 (0.900-0.975); Specificity: 1.000 (0.993-1.000) | 0.991 (0.983-0.999) |
    | TV/MV size discrepancy | High Sensitivity & High Specificity | Sensitivity: 0.943 (0.896-0.970); Specificity: 1.000 (0.993-1.000) | Sensitivity: 0.943 (0.896-0.970); Specificity: 1.000 (0.993-1.000) | 0.964 (0.938-0.990) |
    | **MRMC Study Performance** | | | | |
    | ROC AUC (any suspicious finding) | Significantly higher with aid than unaided | N/A | N/A | Aided: 0.974 (0.957-0.990); Unaided: 0.825 (0.741-0.908) |
    | Mean Sensitivity (any finding) | Increased with aid | N/A | N/A | Aided: 0.935 (0.892-0.978); Unaided: 0.782 (0.686-0.878) |
    | Mean Specificity (any finding) | Increased with aid | N/A | N/A | Aided: 0.970 (0.949-0.991); Unaided: 0.759 (0.630-0.887) |
    | Conclusive output rate | High | 98.8% (95% CL, 97.8-99.3) | N/A | N/A |

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size for Standalone Test Set: 877 clinically acquired fetal ultrasound exams.
    • Sample Size for MRMC Test Set: 200 exams.
    • Data Provenance: The data was collected from 11 centers in the U.S.A. and France. It was retrospectively collected as it refers to "clinically acquired fetal ultrasound exams".

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Three (3) pediatric cardiologists.
    • Qualifications of Experts: The document specifies "pediatric cardiologists" but does not provide details on their years of experience or other specific qualifications beyond their specialty.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Majority voting. This means that if at least two out of the three pediatric cardiologists agreed on the presence or absence of a finding, that was established as the ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes.
    • Effect size of human readers improvement with AI vs. without AI assistance:
      • ROC AUC for any suspicious finding: +14.9% increase (from 0.825 unaided to 0.974 aided, p=0.002).
      • Mean Sensitivity for any suspicious finding: +15.3% increase (from 0.782 unaided to 0.935 aided).
      • Mean Specificity for any suspicious finding: +21.1% increase (from 0.759 unaided to 0.970 aided).

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes.
    • The results are presented in Table 1, showing sensitivity and specificity for "Any suspicious findings" and each individual finding, calculated under two scenarios for inconclusive outputs.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus. Specifically, it was derived from a truthing process by three pediatric cardiologists using majority voting.

    8. Sample Size for the Training Set

    • The document states that "The ultrasound examinations used for training and validation are entirely distinct from the examinations used in standalone testing," but it does not explicitly provide the sample size for the training set.

    9. How the Ground Truth for the Training Set Was Established

    • The document states that the "ultrasound examinations used for training and validation are entirely distinct from the examinations used in standalone testing." However, similar to the training set sample size, it does not explicitly describe how the ground truth for the training set was established. It only details the ground truth establishment for the test sets (standalone and MRMC).

    K Number
    K241245
    Device Name
    EchoSolv AS
    Manufacturer
    Date Cleared
    2024-10-04

    (154 days)

    Product Code
    Regulation Number
    892.2060
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Texas 78705

    October 4, 2024

    Re: K241245

    Trade/Device Name: EchoSolv AS Regulation Number: 21 CFR 892.2060
    Computer-Assisted Diagnostic Software (CADx) for
    Lesions Suspicious for Cancer |
    | Regulation | 21 CFR 892.2060
    STANDALONE PERFORMANCE TESTING

    Standalone performance testing was performed in accordance with 21 CFR §892.2060
    CLINICAL PERFORMANCE TESTING 7.3

    Clinical performance testing was performed in accordance with 21 CFR $892.2060

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third party | Expedited review
    Intended Use

    EchoSolv AS is a machine learning (ML) and artificial intelligence (AI) based decision support software indicated for use as an adjunct to echocardiography for assessment of severe aortic stenosis (AS).

    When utilized by an interpreting physician, this device provides information to facilitate rendering an accurate diagnosis of AS. Patient management decisions should not be made solely on the results of the EchoSolv AS analysis.

    EchoSolv AS includes both the algorithm-based AS phenotype analysis and the application of recognized AS clinical practice guidelines.

    Limitations: EchoSolv AS is not intended for patients under the age of 18 years or those who have previously undergone aortic valve replacement surgery.

    Device Description

    EchoSolv AS is a standalone, cloud-based decision support software intended to be used by a board-certified cardiologist to aid in the diagnosis of Severe Aortic Stenosis. EchoSolv AS analyzes basic patient demographic data and measurements obtained from a transthoracic echo examination to provide a categorical assessment as to whether the data are suggestive of a high, medium, or low probability of Severe AS. EchoSolv AS is intended for patients who are 18 years or older and who have an echocardiogram performed as part of routine clinical care (i.e., for the evaluation of structural heart disease).

    Patient demographic and echo measurement data is automatically processed through the artificial intelligence algorithm which provides an output regarding the probability of a Severe AS phenotype to aid in the clinical diagnosis of Severe AS during the review of the patient echo study and generation of the final study report, according to current clinical practice guidelines. The software provides an output on the following assessments:

    1. Severe AS Phenotype Probability

    Whether the patient has a high, medium, or low probability of exhibiting a Severe AS phenotype, based on analysis by the EchoSolv AS proprietary AI algorithm indicating that the predicted AVA is ≤ 1.0 cm². The AI probability score requires a minimum set of data inputs to provide a valid output, but it is based on all available echocardiographic measurement data and does not rely on the traditional LVOT measurements used in the continuity equation (see the sketch after this list).

    2. Severe AS Guideline Assessment

    Whether the patient meets the definition for Severe AS based on direct evaluation of the provided echocardiogram measurements (AV Peak Velocity, AV Mean Gradient, and AV Area) against current clinical practice guidelines (the 2020 ACC/AHA Guideline for the Management of Patients with Valvular Heart Disease).
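
    As context for the two outputs described above, the sketch below expresses the continuity-equation AVA and a simplified 2020 ACC/AHA high-gradient severe AS check (peak aortic jet velocity >= 4 m/s or mean gradient >= 40 mmHg, with AVA <= 1.0 cm² as the typical valve-area criterion). This is illustrative only; the EchoSolv AS phenotype probability comes from a proprietary ML model rather than these formulas, and the full guideline includes low-flow/low-gradient variants not handled here.

        import math

        def continuity_ava(lvot_diameter_cm: float, lvot_vti_cm: float, av_vti_cm: float) -> float:
            """Continuity-equation aortic valve area: AVA = (LVOT area * LVOT VTI) / AV VTI."""
            lvot_area = math.pi * (lvot_diameter_cm / 2.0) ** 2
            return lvot_area * lvot_vti_cm / av_vti_cm

        def simplified_severe_as_check(peak_velocity_ms: float, mean_gradient_mmhg: float, ava_cm2: float) -> bool:
            """Simplified high-gradient severe AS rule; does not cover low-flow/low-gradient AS."""
            high_gradient = peak_velocity_ms >= 4.0 or mean_gradient_mmhg >= 40.0
            return high_gradient and ava_cm2 <= 1.0

        ava = continuity_ava(lvot_diameter_cm=2.0, lvot_vti_cm=18.0, av_vti_cm=60.0)
        print(f"AVA = {ava:.2f} cm2")                       # about 0.94 cm2 for these example inputs
        print(simplified_severe_as_check(4.3, 45.0, ava))   # True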

    EchoSolv AS is intended to be used by board-certified cardiologists who review echocardiograms during the evaluation and diagnosis of structural heart disease, namely aortic stenosis. EchoSolv AS is intended to be used in conjunction with current clinical practices and workflows to improve the identification of Severe AS cases.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study detailed in the provided document for the EchoSolv AS device:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" in a tabulated format. However, based on the performance data presented, the implicit acceptance criteria can be inferred from the reported performance and comparison to a predicate device. The performance metrics reported are AUROC, Sensitivity, Specificity, Diagnostic Likelihood Ratios (DLR), and improvement in reader AUROC and concordance in the MRMC study.

    Performance Metric | Implicit Acceptance Criterion (Based on context/predicate) | Reported Device Performance (EchoSolv AS)
    Standalone Performance
    AUROC (Overall) | Expected to be high, comparable to or better than predicate (Predicate: 0.927 AUROC) | 0.948 (95% CI: 0.943-0.952)
    Sensitivity (at high probability) | High (No specific threshold given, but expected to detect a good proportion of true positive cases) | 0.801 (95% CI: 0.786-0.818)
    Specificity (at high probability) | High (No specific threshold given, but expected to correctly identify true negative cases) | 0.923 (95% CI: 0.915-0.932)
    DLR (Low Probability) | Low (Indicative of low probability of disease) | 0.067 (95% CI: 0.057-0.080)
    DLR (Medium Probability) | Close to 1 (Weakly indicative) | 0.935 (95% CI: 0.829-1.05)
    DLR (High Probability) | High (Strongly indicative of disease) | 10.3 (95% CI: 9.22-11.50)
    Cochran-Armitage Trend Test (p-value) | Statistically significant trend (p
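
    A note on how the diagnostic likelihood ratios relate to the other figures in the table: for a three-category output, each category's DLR is P(category | severe AS) / P(category | no severe AS), and for the "high" category this is algebraically the same as sensitivity / (1 - specificity) at the high-probability threshold; with the reported values, 0.801 / (1 - 0.923) is roughly 10.4, consistent with the reported 10.3. A minimal sketch with illustrative counts (not study data):

        # Illustrative 3-category counts (device output category vs. disease status).
        counts = {
            "low":    {"severe_as": 40,  "not_severe": 600},
            "medium": {"severe_as": 80,  "not_severe": 80},
            "high":   {"severe_as": 480, "not_severe": 75},
        }

        total_pos = sum(c["severe_as"] for c in counts.values())
        total_neg = sum(c["not_severe"] for c in counts.values())

        for category, c in counts.items():
            # Stratum-specific (diagnostic) likelihood ratio for this output category.
            dlr = (c["severe_as"] / total_pos) / (c["not_severe"] / total_neg)
            print(f"DLR({category}) = {dlr:.2f}")

        # Treating 'high' as the positive test result, sensitivity / (1 - specificity) gives the same LR+.
        sens = counts["high"]["severe_as"] / total_pos
        spec = (counts["low"]["not_severe"] + counts["medium"]["not_severe"]) / total_neg
        print(f"LR+ at high threshold = {sens / (1 - spec):.2f}")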

    K Number
    K240697
    Date Cleared
    2024-09-09

    (179 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    | 21 CFR 892.2060

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third party | Expedited review
    Intended Use

    See-Mode Augmented Reporting Tool, Thyroid (SMART-T) is a stand-alone reporting software to assist trained medical professionals in analyzing thyroid ultrasound images of adult (>=22 years old) patients who have been referred for an ultrasound examination.

    Output of the device includes regions of interest (ROIs) placed on the thyroid ultrasound images assisting healthcare professionals to localize nodules in thyroid studies. The device also outputs ultrasonographic lexicon-based descriptors based on ACR TI-RADS. The software generates a report based on the image analysis results to be reviewed and approved by a qualified clinician after performing quality control.

    SMART-T may also be used as a structured reporting software for further ultrasound studies. The software includes tools for reading measurements and annotations from the images that can be used for generating a structured report.

    Patient management decisions should not be made solely on the basis of analysis by See-Mode Augmented Reporting Tool, Thyroid.

    Device Description

    See-Mode Augmented Reporting Tool, Thyroid (SMART-T) is a stand-alone, web-based image processing and reporting software for localization, characterization and reporting of thyroid ultrasound images.

    The software analyzes thyroid ultrasound images and uses machine learning algorithms to extract specific information. The algorithms can identify and localize suspicious soft tissue nodules and also generate lexicon-based descriptors, which are classified according to ACR TI-RADS (composition, echogenicity, shape, margin, and echogenic foci) with a calculated TI-RADS level according to the ACR TI-RADS chart.
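
    For reference, the ACR TI-RADS chart maps the points assigned across the five descriptor categories to a TR level (roughly: 0 points = TR1, 2 points = TR2, 3 points = TR3, 4-6 points = TR4, 7 or more points = TR5). Below is a minimal sketch of that final mapping, assuming per-descriptor points have already been assigned; it is not the SMART-T implementation.

        def tirads_level(points_by_descriptor: dict[str, int]) -> str:
            """Map summed ACR TI-RADS points to a TR level (TR1-TR5)."""
            total = sum(points_by_descriptor.values())
            if total <= 1:        # 0 points on the ACR chart; 1 point does not occur with standard assignments
                return "TR1"
            if total == 2:
                return "TR2"
            if total == 3:
                return "TR3"
            if total <= 6:
                return "TR4"
            return "TR5"

        # Illustrative nodule: solid (2), hypoechoic (2), wider-than-tall (0), smooth margin (0), no echogenic foci (0).
        example = {"composition": 2, "echogenicity": 2, "shape": 0, "margin": 0, "echogenic_foci": 0}
        print(tirads_level(example))   # TR4 (4 points)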

    SMART-T may also be used as a structured reporting software for further ultrasound studies. The software includes tools for reading measurements and annotations from the images that can be used for generating a structured report.

    The software then generates a report based on the image analysis results to be reviewed and approved by a qualified clinician after performing quality control. Any information within this report can be changed and modified by the clinician if needed during quality control and before finalizing the report.

    The software runs on a standard "off-the-shelf" computer and can be accessed within the client web browser to perform the reporting of ultrasound images. Input data and images for the software are acquired through DICOM-compliant ultrasound imaging devices.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the See-Mode Augmented Reporting Tool, Thyroid (SMART-T) device, based on the provided text:

    Acceptance Criteria and Device Performance

    Acceptance Criteria Category | Specific Metric | Acceptance Criteria (Explicitly Stated or Inferred) | Reported Device Performance (Aided) | Reported Device Performance (Unaided) | Standalone Performance (Algorithm Only)
    Nodule Localization | AULROC (IOU > 0.5) | Improvement over unaided performance | 0.758 (0.711, 0.803) | 0.736 (0.693, 0.780) | 0.703 (0.642, 0.762)
    Nodule Localization | AULROC (IOU > 0.6) | Improvement over unaided performance | 0.734 (0.682, 0.781) | 0.682 (0.632, 0.730) | N/A
    Nodule Localization | AULROC (IOU > 0.7) | Improvement over unaided performance | 0.686 (0.629, 0.740) | 0.548 (0.490, 0.610) | N/A
    Nodule Localization | AULROC (IOU > 0.8) | Improvement over unaided performance | 0.593 (0.529, 0.658) | 0.356 (0.293, 0.423) | N/A
    Nodule Localization | Localization Accuracy (Bounding box IOU > 0.5) | Superior to unaided performance | 95.6% (94.1, 97.0) | 93.6% (92.1, 95.0) | 95.1%
    TI-RADS Descriptors | Composition Accuracy | Significant improvement over unaided performance | 84.9% (82.2, 87.5) | 80.4% (77.3, 83.4) | 86.7%
    TI-RADS Descriptors | Echogenicity Accuracy | Significant improvement over unaided performance | 77.4% (74.4, 80.3) | 70.0% (67.0, 72.8) | 68.2%
    TI-RADS Descriptors | Shape Accuracy | Significant improvement over unaided performance | 90.8% (88.2, 93.1) | 86.4% (83.7, 88.8) | 93.4%
    TI-RADS Descriptors | Margin Accuracy | Significant improvement over unaided performance | 73.5% (70.2, 76.7) | 57.3% (53.3, 61.2) | 58.4%
    TI-RADS Descriptors | Echogenic Foci Accuracy | Significant improvement over unaided performance | 75.2% (71.9, 78.5) | 71.1% (67.1, 74.9) | 70.3%
    TI-RADS Level Agreement | Overall TI-RADS Level Agreement | Significant improvement over unaided performance | 60.0% (56.8, 63.3) | 51.1% (47.8, 54.5) | 63.8% (60.0, 67.7)
    TI-RADS Level Agreement | TI-RADS Level Agreement (TR-1) | Improvement over unaided performance | 59.0% (42.3, 74.9) | 52.9% (37.3, 68.3) | 61.9% (40.0, 82.6)
    TI-RADS Level Agreement | TI-RADS Level Agreement (TR-2) | Improvement over unaided performance | 38.1% (31.1, 45.6) | 31.2% (24.6, 38.1) | 41.1% (31.7, 50.4)
    TI-RADS Level Agreement | TI-RADS Level Agreement (TR-3) | Significant improvement over unaided performance | 68.9% (62.6, 74.9) | 58.8% (52.2, 65.4) | 71.7% (64.9, 78.3)
    TI-RADS Level Agreement | TI-RADS Level Agreement (TR-4) | Significant improvement over unaided performance | 61.4% (56.5, 66.3) | 52.1% (47.2, 57.0) | 65.5% (59.1, 71.6)
    TI-RADS Level Agreement | TI-RADS Level Agreement (TR-5) | Significant improvement over unaided performance | 71.3% (61.8, 80.5) | 62.0% (52.2, 71.5) | 77.0% (66.1, 87.3)

    Note: The acceptance criteria are largely inferred from the study's objective to demonstrate "superior performance," "significant improvement," and "consistent performance" compared to unaided reading, and "on-par" with aided use for standalone. Exact numerical thresholds for acceptance were not explicitly stated as distinct acceptance criteria.
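
    The localization metrics above are gated on bounding-box intersection-over-union (IOU) thresholds. A minimal sketch of the IOU computation for two axis-aligned boxes is shown below; it is illustrative only, since the submission does not describe its exact implementation.

        def iou(box_a, box_b) -> float:
            """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
            ax1, ay1, ax2, ay2 = box_a
            bx1, by1, bx2, by2 = box_b
            inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
            inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
            inter = inter_w * inter_h
            area_a = (ax2 - ax1) * (ay2 - ay1)
            area_b = (bx2 - bx1) * (by2 - by1)
            union = area_a + area_b - inter
            return inter / union if union > 0 else 0.0

        # A predicted nodule box counts as a correct localization when its IOU with the
        # ground-truth box exceeds the chosen threshold (0.5, 0.6, 0.7, or 0.8 in the table above).
        print(iou((10, 10, 60, 60), (30, 30, 80, 80)) > 0.5)   # False: IOU here is about 0.22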


    Study Details

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: 600 cases from 600 unique patients.
    • Data Provenance: Retrospective collection of thyroid ultrasound images. 74% of the data was acquired from the US. The cases in the MRMC study were sourced from institutions or sources not part of the model training or development datasets to ensure generalizability.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Two expert US-board certified radiologists and one adjudicator (also a US-board certified radiologist with the most years of experience).
    • Qualifications: US-board certified radiologists, with one having "the most years of experience" for adjudication.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Adjudication Method: 2+1 (Two expert radiologists' consensus, with an additional expert radiologist adjudicating disagreements). Specifically, the text states "consensus labels of two expert US-board certified radiologists and an adjudicator (also US-board certified radiologist with the most years of experience)."

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • MRMC Study Done: Yes.
    • Effect Size of Improvement (Aided vs. Unaided):
      • AULROC (IOU > 0.5): 0.022 (0.758 aided - 0.736 unaided)
      • AULROC (IOU > 0.6): 0.052 (0.734 aided - 0.682 unaided)
      • AULROC (IOU > 0.7): 0.138 (0.686 aided - 0.548 unaided)
      • AULROC (IOU > 0.8): 0.237 (0.593 aided - 0.356 unaided)
      • Localization Accuracy: 2.0% improvement (95.6% aided - 93.6% unaided)
      • TI-RADS Descriptors Accuracy Improvements:
        • Composition: 4.5% (84.9% vs 80.4%)
        • Echogenicity: 7.4% (77.4% vs 70.0%)
        • Shape: 4.4% (90.8% vs 86.4%)
        • Margin: 16.2% (73.5% vs 57.3%)
        • Echogenic Foci: 4.1% (75.2% vs 71.1%)
      • Overall TI-RADS Level Agreement: 8.9% (60.0% vs 51.1%)

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Standalone Study Done: Yes. The text explicitly states: "To evaluate the standalone performance of our device, where the output of the models are directly compared against ground truth labels."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Nodule Benign/Malignant Status: Sourced from reference standard of Fine Needle Aspiration (FNA) or 2-year follow-up for benign cases (outcomes data/pathology).
    • Localization, ACR TI-RADS Lexicon Descriptors, and TI-RADS Level Agreement: Expert consensus based on the labels of two expert US-board certified radiologists and an adjudicator.

    8. The sample size for the training set:

    • The document states that the cases in the MRMC study were sourced from institutions or sources not part of the model training or development datasets. However, the specific sample size for the training set is not provided in the given text.

    9. How the ground truth for the training set was established:

    • The document implies that the training data was distinct from the test set, but it does not explicitly describe how the ground truth for the training set was established. It only details the ground truth establishment for the test set used in the standalone and MRMC studies.

    K Number
    K234141
    Manufacturer
    Date Cleared
    2024-08-01

    (216 days)

    Product Code
    Regulation Number
    892.2060
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    District of Columbia 20004

    Re: K234141

    Trade/Device Name: AISAP Cardio V1.0 Regulation Number: 21 CFR 892.2060
    Computer-Assisted Diagnostic Software (CADx) for lesions suspicious for cancer

    Regulation: 21 CFR §892.2060

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third party | Expedited review
    Intended Use

    AISAP CARDIO V1.0 is a software platform that automatically processes and analyzes acquired cardiac POCUS images, producing a report with diagnostic assessment and measurements of several key cardiac structural and functional parameters, including: presence of valvular pathology (regurgitations of the mitral, tricuspid, aortic valves and aortic stenosis), and measurements of the Left Ventricle Ejection Fraction (LVEF), right and left ventricular dimensions, right ventricular fractional area change (RV FAC), atrial areas, ascending aorta diameter, and inferior vena cava (IVC) diameter.

    The device outputs are provided in a report that is intended to support qualified physicians in their analysis and interpretation of adult cardiac POCUS images, using FDA-cleared ultrasound devices. Physicians should be trained and privileged by their organization following education processes and should perform cardiac POCUS according to their specialty professional society clinical guidelines.

    AISAP CARDIO V1.0 has not been validated for the assessment of congenital heart disease, and/or intra-cardiac lesions (e.g., tumors, thrombi, vegetations), prosthetic valves, and in the presence of ventricular assist devices.

    AISAP CARDIO V1.0 is indicated for use in adult patients only.

    Device Description

    AISAP CARDIO V1.0 is a machine learning-based decision support software device, indicated as an adjunct to diagnostic Cardiac point of care ultrasound (C-POCUS) for adult patients undergoing assessment for cardiac disease. This device performs automated analysis of ultrasound images and generates valvular assessments and measurements of standard cardiac structural and functional parameters.

      1. Inform the user whether a suspected cardiac valvular regurgitation (mitral, tricuspid, or aortic) and/or aortic stenosis is either greater than mild severity or of none-to-mild severity.
      2. Inform the user of the 4-class American Society of Echocardiography (ASE) recommended category for valvular regurgitation (mitral, tricuspid, or aortic) and/or aortic stenosis. Each finding is categorized as none, mild, moderate, or severe.
      3. Measurements of the following standard cardiac structural or functional parameters:
      a. Left Ventricular Ejection Fraction (LVEF) (percent)
      b. Left ventricular end diastolic diameter (cm)
      c. Right ventricular fractional area change (RV FAC) (ratio)
      d. Inferior vena cava (IVC) maximal diameter (mm)
      e. Aortic root diameter (cm)
      f. Right atrium (RA) area (cm²)
      g. Left atrium (LA) area (cm²)

    AISAP CARDIO V1.0 assists the physician in assessing 4 major valvular findings in adults, along with providing information on several correlated cardiac ultrasound measurements frequently found to be abnormal in association with valvular heart disease. Used together and interpreted by the physician, the device provides information that may assist in rendering an accurate diagnosis of selected cardiac findings. AISAP CARDIO V1.0 is adjunctive to cardiac POCUS (C-POCUS) use by privileged physicians in use scenarios supported by clinical guidelines. Specifically, patient management decisions are not intended to be and should not be made solely on the results of the software analysis of the proposed device. When significant valve pathology is suspected comprehensive echocardiography should be considered in accordance with the relevant professional guidelines.

    AISAP CARDIO V1.0 uses machine learning NN (neural network) models trained to recognize patterns and make decisions. AISAP CARDIO V1.0 contains classification models which identify categories within data, regression models which predict numerical values, and instance segmentation models that detect and segment objects within images.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and supporting studies for the AISAP Cardio V1.0 device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Feature/Measurement | Acceptance Criteria | Reported Device Performance | Study Type
    Structural & Functional Measurements | Standalone Model Performance
    LVEF | RMSE [threshold and reported values truncated in source] | Standalone Model Performance
    Valvular pathology (per finding) | AUC > 0.80 | MR: 0.975; AS: 0.969; AR: 0.993; TR: 0.973 | Standalone Model Performance
    Clinical Reader Performance | Multi-Reader Study
    MR (Aided vs. Un-aided) | Lower bound of 95% CI for (AUC_aided - AUC_unaided) > 0 | AUC_aided (0.963) > AUC_unaided (0.870) | Clinical Reader Performance (MRMC)
    TR (Aided vs. Un-aided) | Lower bound of 95% CI for (AUC_aided - AUC_unaided) > 0 | AUC_aided (0.937) > AUC_unaided (0.851) | Clinical Reader Performance (MRMC)
    AR (Aided vs. Un-aided) | Lower bound of 95% CI for (AUC_aided - AUC_unaided) > 0 | AUC_aided (0.947) > AUC_unaided (0.868) | Clinical Reader Performance (MRMC)
    AS (Aided vs. Un-aided) | Lower bound of 95% CI for (AUC_aided - AUC_unaided) > 0 | AUC_aided (0.925) > AUC_unaided (0.897) | Clinical Reader Performance (MRMC)
    View Classification | Accuracy > 95% | PLAX: 100%; PSAX: 99.2%; A4C: Not reported; SC IVC: 98.8% | View Classification Validation Study

    2. Sample Size for Test Set and Data Provenance

    • Standalone Structural and Functional Measurements Study: 200 cases
    • Standalone Valvular Pathology Study: 329 cases
    • Clinical Reader Performance (MRMC) Study: 260 cases
    • View Classification Validation Study: 500 sampled loops per cardiac view (from the clinical study dataset)

    Data Provenance:
    The test data was collected prospectively at 4 clinical reader sites located in the United States (51% of cases) and Israel (49% of cases). Images were acquired with different US device vendors (Philips, GE, Wisonic, EchoNous) from both in-patient and out-patient settings. Both physicians and sonographers performed the POCUS exams.

    3. Number of Experts and Qualifications for Test Set Ground Truth

    • Structural and Functional Measurements Study: 3 US board-certified cardiologists with a minimum of 5 years of experience.
    • Valvular Pathology Study: Cardiologist interpretations (number not specified, but the context implies multiple experts as ground truth for other studies).
    • Clinical Reader Performance (MRMC) Study: 3 US Board Certified cardiologists (for the severity grade of valvular pathologies).
    • View Classification Validation Study: 2 certified experienced echo technicians, with over-read by a lead technician and a senior cardiologist.

    4. Adjudication Method for Test Set

    • Structural and Functional Measurements Study: Ground truth was established by the mean value determined by the 3 cardiologists' measurements following ASE guidelines.
    • Valvular Pathology Study: Not explicitly stated, but implies expert interpretation as the ground truth.
    • Clinical Reader Performance (MRMC) Study: "2+1" annotation strategy. The 260 cases were interpreted independently by two U.S. Board Certified ground truth cardiologists. Any discrepancies were interpreted by a third ground truth cardiologist. Any persistent disagreements were decided at a meeting of the three ground truth cardiologists.
    • View Classification Validation Study: View verification by 2 certified experienced echo technicians, with an over-read of 30% of cases by a lead technician and an additional 10% over-read by a senior cardiologist.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    Yes, a MRMC comparative effectiveness study was done. It was called the "Clinical Reader Performance" study.

    Effect Size (Improvement with AI vs. without AI assistance):

    The study demonstrated an improvement in AUC, Kappa, and Accuracy when readers were aided by the device.

    • AUC Improvement (AI-aided vs. Unaided):

      • MR: 0.963 vs. 0.870 (Improvement: 0.093)
      • TR: 0.937 vs. 0.851 (Improvement: 0.086)
      • AR: 0.947 vs. 0.868 (Improvement: 0.079)
      • AS: 0.925 vs. 0.897 (Improvement: 0.028)
        (The passing criterion required that the lower bound of the 95% CI for this difference lie entirely above zero, indicating a statistically significant positive effect.)
    • Kappa Improvement (AI-aided vs. Unaided):

      • MR: 0.881 vs. 0.756 (Improvement: 0.125)
      • TR: 0.881 vs. 0.765 (Improvement: 0.116)
      • AR: 0.913 vs. 0.815 (Improvement: 0.098)
      • AS: 0.850 vs. 0.792 (Improvement: 0.058)
    • Accuracy Improvement (AI-aided vs. Unaided):

      • MR: 73.6% vs. 61.6% (Improvement: 12.0%)
      • TR: 75.3% vs. 64.1% (Improvement: 11.2%)
      • AR: 80.6% vs. 71.7% (Improvement: 8.9%)
      • AS: 74.7% vs. 69.8% (Improvement: 4.9%)

    6. Standalone (Algorithm Only) Performance Study

    Yes, standalone performance studies were done for both:

    • "Standalone Model Performance for Structural and Functional Measurements"
    • "Standalone Model Performance for Valvular Pathology"

    7. Type of Ground Truth Used

    • Structural and Functional Measurements: Expert consensus (mean value of 3 cardiologists' measurements) following American Society of Echocardiography (ASE) guidelines.
    • Valvular Pathology: Cardiologist interpretations (implied expert consensus).
    • Clinical Reader Performance (MRMC): Expert consensus of 3 US Board Certified cardiologists, established via an adjudication process ("2+1" strategy and consensus meeting).
    • View Classification: Expert consensus of 2 certified experienced echo technicians, with over-read by a lead technician and a senior cardiologist.

    8. Sample Size for Training Set

    Over 140,000 individual exams were used for training the machine learning models, representing > 1 billion frames.

    9. How Ground Truth for Training Set Was Established

    The AISAP CARDIO V1.0 algorithms were trained at 2 academic institutions that perform cardiac ultrasound examinations and interpretations according to ASE guidelines. This implies that the ground truth for the training data was established by expert interpretation and measurements conforming to these professional guidelines at the academic institutions.


    K Number
    K233342
    Device Name
    CINA-ASPECTS
    Manufacturer
    Date Cleared
    2024-03-15

    (168 days)

    Product Code
    Regulation Number
    892.2060
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Columbia 20004

    March 15, 2024

    Re: K233342

    Trade/Device Name: CINA-ASPECTS Regulation Number: 21 CFR 892.2060
    computer-assisted diagnostic software for lesions suspicious of cancer |
    | Regulation No: | 21 CFR § 892.2060
    | lesions suspicious of cancer |
    | Regulation No: | 21 CFR § 892.2060

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third party | Expedited review
    Intended Use

    CINA-ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data.

    The Software automatically reorients images, segments and analyzes ASPECTS Regions of Interest (ROIs). CINA-ASPECTS extracts image data for the ROI(s) to provide analysis and computer analytics based on morphological characteristics. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECT (Alberta Stroke Program Early CT) Score.

    CINA-ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup with known MCA or ICA occlusion, for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. This device provides information that may be useful in the characterization of early ischemic (acute) brain tissue injury during image interpretation.

    CINA-ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment using the ASPECTS region definitions and highlighting ROIs and numerical scoring.

    Limitations:

    1. CINA-ASPECT is not intended for primary interpretation of CT images. It is used to assist physician evaluation.
    2. CINA-ASPECT has been validated in patients with known MCA or ICA unilateral occlusion prior to ASPECTS scoring.
    3. CINA-ASPECTS is not suitable for use on brain scans displaying neurological pathologies other than acute stroke, such as tumors or abscesses, traumatic brain injuries, hemorrhagic transformation and hematoma.
    4. Use of CINA-ASPECT in clinical settings other than brain ischemia within 12 hours from time last known well, caused by known ICA or MCA occlusions has not been tested.
    5. CINA-ASPECTS has only been validated and is intended to be used in patient populations aged over 21.
    6. CINA-ASPECTS has been validated and is intended to be used with images acquired with Canon Medical Systems Corporation, GE Healthcare, Philips Healthcare and Siemens Healthineers scanners.

    Contraindications/Exclusions/Cautions:

    • Patient motion: Excessive patient motion leading to artifacts that make the scan technically inadequate.
    • Important streak artifacts and noisy images: Presence of important streak artifact and significant noise within the NCCT images that make the scan technically inadequate.
    • Hemorrhagic Transformation, Hematoma.
    Device Description

    CINA-ASPECTS is a standalone computer-aided diagnosis (CADx) software that processes noncontrast head CT (NCCT).

    CINA-ASPECTS is a standalone executable program that may be run directly from the commandline or through integration, deployment and use with medical image communications devices. The software does not interface directly with any CT scanner or data collection equipment; instead, the software receives non-contrast head CT (NCCT) scans identified by medical image communications devices, processes them using algorithmic methods involving execution of multiple computational steps to provide an automatic ASPECT score based on the case input file for the physician.

    The score includes which ASPECT regions are identified based on regional imaging features derived from non-contrast computed tomography (NCCT) brain image data and overlaid onto brain scan images. The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on the clinician's judgment.
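
    As context for the scoring described above: ASPECTS divides the MCA territory into 10 standard regions and subtracts one point from 10 for each region showing early ischemic change, so 10 indicates a normal scan and 0 indicates involvement of all regions. Below is a minimal sketch of that tallying step, assuming per-region flags have already been produced upstream; the region names follow the standard ASPECTS template, and this is not the CINA-ASPECTS implementation.

        ASPECTS_REGIONS = [
            "caudate", "lentiform", "internal_capsule", "insular_ribbon",
            "M1", "M2", "M3", "M4", "M5", "M6",
        ]

        def aspects_score(early_ischemic_change: dict[str, bool]) -> int:
            """ASPECTS = 10 minus the number of regions flagged with early ischemic change."""
            affected = sum(1 for region in ASPECTS_REGIONS if early_ischemic_change.get(region, False))
            return 10 - affected

        # Illustrative per-region flags (e.g., after clinician review of the heat map): insula and M2 affected.
        flags = {region: False for region in ASPECTS_REGIONS}
        flags["insular_ribbon"] = True
        flags["M2"] = True
        print(aspects_score(flags))   # 8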

    Series are processed by running the CINA-ASPECTS Image Processing Applications on noncontrast head CT images (NCCT) to perform the:

    • Reorientation, tilt-correction of the input imaging data;
    • Delineation of predefined regions of interest on the corrected input data and computing numerical values characterizing underlying voxel values within those regions;
    • Visualizing the voxels which have contributed to the ASPECTS score (also referred to as a 'heat map'); and
    • Labeling of these delineated regions and providing a summary score reflecting the number of regions with early ischemic change as per ASPECTS guidelines.

    The CINA-ASPECTS User Interface Agent provides the ASPECTS information to the clinician to review and edit. It also requires the confirmation by a clinician that a Large Vessel Occlusion (LVO) is detected. This confirmation is used by the CINA-ASPECTS to limit the detection of areas of early ischemic changes to the infarcted brain hemisphere selected by the user. The final summary score together with the regions selected and underlying voxel values are then stored in DICOM format to be transferred by the medical image communications device for output to a Picture Archiving and Communication System (PACS) or workstation.

    The CINA-ASPECTS device is made of two components:

    • The CINA-ASPECTS image processing application which reads the input file and generates an automatic ASPECT score and the applications outputs
    • A CINA-ASPECTS UI Agent which provides the ASPECTS information to the clinician to review and edit for final archiving.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the CINA-ASPECTS device, based on the provided FDA 510(k) summary:


    CINA-ASPECTS Device Acceptance Criteria and Performance Study

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document details two main studies: a Standalone Performance Testing and a Clinical Multi-Reader Multi-Case (MRMC) Performance Study. The acceptance criteria aren't explicitly listed as a separate table with pass/fail metrics in the summary, but rather are demonstrated through the successful outcomes of these studies. The performance metrics reported are measures of the device's accuracy and utility.

    Note: The FDA 510(k) summary typically presents a high-level overview. Specific numerical acceptance thresholds (e.g., "sensitivity must be > X%") are often detailed in the full submission but are not fully elaborated here. Instead, the document states that the device "met all design requirements and specifications" and "achieved its primary endpoint," implying successful adherence to pre-defined acceptance criteria.

    Acceptance Criterion (Inferred from Study Goals) | Reported Device Performance (CINA-ASPECTS)
    Standalone Performance
    Accurate representation of key processing parameters under a range of clinical parameters. | Demonstrated accurate representation. "The Standalone Performance Testing demonstrated that the proposed device provides accurate representation of key processing parameters under a range of clinically relevant parameters." "The CINA-ASPECTS device performed properly and matched with the ground truth."
    Generalizability across patient demographics, clinical parameters, ASPECTS regions, and image acquisition parameters. | Achieved primary endpoint and established generalizability. "The Standalone Performance Testing study demonstrated that CINA-ASPECTS achieved its primary endpoint and established that CINA-ASPECTS performances generalize to a range of typical patient demographics, clinical parameters, ASPECTS regions, and image acquisition parameters encountered in multiple clinical sites and scanner makers and models."
    Safety and effectiveness. | "The performance testing of the CINA-ASPECTS device establishes that the subject device is safe and effective, meets its intended use statement and is compatible with clinical use."
    Clinical Performance (MRMC Study)
    Improve agreement between readers (with AI assist) and reference standard for ASPECTS scoring. | Readers agreed with "almost ½ a region (4.1%, [95% CI: 3.3%-4.9%]) more per scan with CINA-ASPECTS than without." "The clinical data demonstrates that CINA-ASPECTS shows a significant improvement in the agreement between the readers and a reference standard when using the CINA-ASPECTS software compared to routine clinical practice."
    Improve overall reader ROC AUC. | Overall readers' ROC AUC improved significantly from 0.75 (Unaided arm) to 0.79 (Aided arm).
    Reduce variation in performance between different readers. | The range in the ROC AUC between users was narrower when assisted by the software.
    Reduce mean time spent per case. | The mean time spent per case among all readers was significantly reduced when using CINA-ASPECTS.
    Substantial equivalence for improving reader accuracy compared to the predicate device. | "This study demonstrates substantial equivalence of the CINA-ASPECTS software for improving reader accuracy, compared to the predicate device. The results showed statistically significant improvement in the readers' accuracy when using the software compared to the conventional manual method used in routine clinical practice." "With CINA-ASPECTS the readers agreed, on average, with almost ½ a region (4.1%, [95% CI: 3.3%-4.9%]) more per scan than without CINA-ASPECTS. These findings are similar to the results reported for the predicate device."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 200 clinical anonymized NCCT cases.
    • Data Provenance: Retrospective, multinational, multi-vendor dataset from 5 clinical sites in two countries (US and France). Acquired by 4 different scanner makers (GE, Siemens, Canon, Philips) and 27 different scanner models.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document mentions that the MRMC study evaluated the performance of "8 clinical readers" and that the "clinical data demonstrates that CINA-ASPECTS shows a significant improvement in the agreement between the readers and a reference standard." However, it does not explicitly state the number or qualifications of experts used to establish the ground truth specifically for the standalone performance test.

    For the MRMC study readers, it states: "The panel of readers consisted of 4 expert neuroradiologists and 4 non-experts from different specialties (stroke neurologist, general radiologist, neurointensivist, vascular neurologist), representing the intended use population." While these readers contributed to the "aided" and "unaided" performance evaluation, they are not explicitly designated as the ground truth setters for the test set. The term "reference standard" is used, implying a separate, likely expert-derived, ground truth, but its specifics are not detailed here.

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method (e.g., 2+1, 3+1) used to establish the ground truth for the test set. It mentions agreement with a "reference standard" in the context of the MRMC study, but not how that reference standard was formed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes, a retrospective, multinational, multi-vendor, and blinded Clinical Multi-Reader Multi-Case (MRMC) Performance study was conducted.
    • Effect size of how much human readers improve with AI vs without AI assistance:
      • Agreement with reference standard: With CINA-ASPECTS, readers agreed, on average, with almost ½ a region (4.1%, [95% CI: 3.3%-4.9%]) more per scan than without CINA-ASPECTS.
      • Overall ROC AUC: Improved significantly from 0.75 (Unaided arm) to 0.79 (Aided arm).
      • Reduced variation: The range in the ROC AUC between users was narrower when assisted by the software.
      • Time spent: Mean time spent per case among all readers was significantly reduced when using CINA-ASPECTS.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)

    • Was a standalone study done? Yes. "Standalone performance testing was conducted to comply with special controls for this device type."

    7. Type of Ground Truth Used

    The document states that in the standalone performance testing, "The CINA-ASPECTS device performed properly and matched with the ground truth." For the MRMC study, it refers to improvement in "agreement between the readers and a reference standard."
    However, the specific methodology for establishing this "ground truth" or "reference standard" (e.g., expert consensus of several independent radiologists, pathology results, outcomes data) is not explicitly detailed in the provided text. It is implied to be expert-derived, given the context of radiological assessment.

    8. Sample Size for the Training Set

    The document states, "The validation dataset was separated from the one used for the algorithm training/testing and has never been used in any way in the development of the software device." However, the sample size for the training set is not provided in this summary.

    9. How the Ground Truth for the Training Set was Established

    The document describes how the validation dataset was separated from the training/testing data but does not specify how the ground truth for the training set was established.

