Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K240697
    Date Cleared: 2024-09-09 (179 days)
    Regulation Number: 892.2090
    Applicant Name (Manufacturer): See-Mode Technologies Pte. Ltd.

    Intended Use

    See-Mode Augmented Reporting Tool, Thyroid (SMART-T) is a stand-alone reporting software to assist trained medical professionals in analyzing thyroid ultrasound images of adult (>=22 years old) patients who have been referred for an ultrasound examination.

    Output of the device includes regions of interest (ROIs) placed on the thyroid ultrasound images assisting healthcare professionals to localize nodules in thyroid studies. The device also outputs ultrasonographic lexicon-based descriptors based on ACR TI-RADS. The software generates a report based on the image analysis results to be reviewed and approved by a qualified clinician after performing quality control.

    SMART-T may also be used as a structured reporting software for further ultrasound studies. The software includes tools for reading measurements and annotations from the images that can be used for generating a structured report.

    Patient management decisions should not be made solely on the basis of analysis by See-Mode Augmented Reporting Tool, Thyroid.

    Device Description

    See-Mode Augmented Reporting Tool, Thyroid (SMART-T) is a stand-alone, web-based image processing and reporting software for localization, characterization and reporting of thyroid ultrasound images.

    The software analyzes thyroid ultrasound images and uses machine learning algorithms to extract specific information. The algorithms can identify and localize suspicious soft tissue nodules and also generate lexicon-based descriptors, which are classified according to ACR TI-RADS (composition, echogenicity, shape, margin, and echogenic foci) with a calculated TI-RADS level according to the ACR TI-RADS chart.
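
    For context, an ACR TI-RADS level is obtained by summing points assigned to the five descriptors and mapping the total to TR1 through TR5. The sketch below implements the public ACR point chart as a minimal illustration; it is not See-Mode's proprietary algorithm, and the descriptor key names, the handling of 1-point totals, and the example values are assumptions of this sketch.

    ```python
    # Minimal sketch of ACR TI-RADS scoring from the five lexicon descriptors.
    # Points and level cut-offs follow the public ACR TI-RADS chart; this is an
    # illustration only, not the device's proprietary implementation.

    COMPOSITION = {"cystic": 0, "spongiform": 0, "mixed": 1, "solid": 2}
    ECHOGENICITY = {"anechoic": 0, "hyperechoic": 1, "isoechoic": 1,
                    "hypoechoic": 2, "very_hypoechoic": 3}
    SHAPE = {"wider_than_tall": 0, "taller_than_wide": 3}
    MARGIN = {"smooth": 0, "ill_defined": 0, "lobulated_or_irregular": 2,
              "extrathyroidal_extension": 3}
    ECHOGENIC_FOCI = {"none_or_comet_tail": 0, "macrocalcifications": 1,
                      "peripheral_calcifications": 2, "punctate_foci": 3}

    def tirads_level(composition, echogenicity, shape, margin, foci):
        """Sum descriptor points and map the total to a TR level (TR1-TR5)."""
        points = (COMPOSITION[composition] + ECHOGENICITY[echogenicity] +
                  SHAPE[shape] + MARGIN[margin] +
                  sum(ECHOGENIC_FOCI[f] for f in foci))  # foci points are additive
        if points == 0:
            return "TR1", points
        if points <= 2:          # 1-point totals are treated as TR2 in this sketch
            return "TR2", points
        if points == 3:
            return "TR3", points
        if points <= 6:
            return "TR4", points
        return "TR5", points

    # Example: a solid, hypoechoic, wider-than-tall nodule with smooth margins
    # and no echogenic foci scores 4 points -> TR4 (moderately suspicious).
    print(tirads_level("solid", "hypoechoic", "wider_than_tall", "smooth", []))
    ```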

    SMART-T may also be used as a structured reporting software for further ultrasound studies. The software includes tools for reading measurements and annotations from the images that can be used for generating a structured report.

    The software then generates a report based on the image analysis results to be reviewed and approved by a qualified clinician after performing quality control. Any information within this report can be changed and modified by the clinician if needed during quality control and before finalizing the report.
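
    Purely as an illustration of what one nodule entry in such an editable structured report might look like, the snippet below serializes a hypothetical finding; the submission does not describe the actual report schema, so every field name and value here is invented.

    ```python
    # Hypothetical structure for one nodule entry in a draft structured report.
    # The submission does not describe the actual report schema, so all field
    # names and values below are invented for illustration.
    import json

    nodule_finding = {
        "nodule_id": 1,
        "location": "right lobe, mid",
        "roi_bounding_box": {"x": 112, "y": 86, "width": 140, "height": 95},
        "tirads_descriptors": {
            "composition": "solid",
            "echogenicity": "hypoechoic",
            "shape": "wider_than_tall",
            "margin": "smooth",
            "echogenic_foci": [],
        },
        "tirads_level": "TR4",
        "status": "pending_clinician_review",  # editable during quality control
    }

    print(json.dumps(nodule_finding, indent=2))
    ```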

    The software runs on a standard "off-the-shelf" computer and can be accessed within the client web browser to perform the reporting of ultrasound images. Input data and images for the software are acquired through DICOM-compliant ultrasound imaging devices.
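
    As a rough sketch of the DICOM input path (not the device's actual ingestion code), the following snippet reads a DICOM-compliant ultrasound file with the pydicom library and extracts the pixel data an analysis pipeline would consume; the file name is a placeholder.

    ```python
    # Hypothetical example of reading a DICOM-compliant ultrasound image with
    # the pydicom library; "thyroid_us.dcm" is a placeholder file name.
    import pydicom

    ds = pydicom.dcmread("thyroid_us.dcm")

    # Basic metadata a reporting tool would typically check before analysis.
    print("Modality:", ds.Modality)                      # "US" for ultrasound
    print("Photometric:", ds.PhotometricInterpretation)  # e.g. MONOCHROME2 or RGB

    # The pixel data (single frame or cine loop) that image analysis operates on;
    # decoding requires numpy and an appropriate pixel-data handler.
    pixels = ds.pixel_array
    print("Image shape:", pixels.shape)
    ```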

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the See-Mode Augmented Reporting Tool, Thyroid (SMART-T) device, based on the provided text:

    Acceptance Criteria and Device Performance

    | Acceptance Criteria Category | Specific Metric | Acceptance Criteria (Explicitly Stated or Inferred) | Reported Device Performance (Aided) | Reported Device Performance (Unaided) | Standalone Performance (Algorithm Only) |
    |---|---|---|---|---|---|
    | Nodule Localization | AULROC (IOU > 0.5) | Improvement over unaided performance | 0.758 (0.711, 0.803) | 0.736 (0.693, 0.780) | 0.703 (0.642, 0.762) |
    | Nodule Localization | AULROC (IOU > 0.6) | Improvement over unaided performance | 0.734 (0.682, 0.781) | 0.682 (0.632, 0.730) | N/A |
    | Nodule Localization | AULROC (IOU > 0.7) | Improvement over unaided performance | 0.686 (0.629, 0.740) | 0.548 (0.490, 0.610) | N/A |
    | Nodule Localization | AULROC (IOU > 0.8) | Improvement over unaided performance | 0.593 (0.529, 0.658) | 0.356 (0.293, 0.423) | N/A |
    | Nodule Localization | Localization Accuracy (bounding box IOU > 0.5) | Superior to unaided performance | 95.6% (94.1, 97.0) | 93.6% (92.1, 95.0) | 95.1% |
    | TI-RADS Descriptors | Composition Accuracy | Significant improvement over unaided performance | 84.9% (82.2, 87.5) | 80.4% (77.3, 83.4) | 86.7% |
    | TI-RADS Descriptors | Echogenicity Accuracy | Significant improvement over unaided performance | 77.4% (74.4, 80.3) | 70.0% (67.0, 72.8) | 68.2% |
    | TI-RADS Descriptors | Shape Accuracy | Significant improvement over unaided performance | 90.8% (88.2, 93.1) | 86.4% (83.7, 88.8) | 93.4% |
    | TI-RADS Descriptors | Margin Accuracy | Significant improvement over unaided performance | 73.5% (70.2, 76.7) | 57.3% (53.3, 61.2) | 58.4% |
    | TI-RADS Descriptors | Echogenic Foci Accuracy | Significant improvement over unaided performance | 75.2% (71.9, 78.5) | 71.1% (67.1, 74.9) | 70.3% |
    | TI-RADS Level Agreement | Overall TI-RADS Level Agreement | Significant improvement over unaided performance | 60.0% (56.8, 63.3) | 51.1% (47.8, 54.5) | 63.8% (60.0, 67.7) |
    | TI-RADS Level Agreement | TI-RADS Level Agreement (TR-1) | Improvement over unaided performance | 59.0% (42.3, 74.9) | 52.9% (37.3, 68.3) | 61.9% (40.0, 82.6) |
    | TI-RADS Level Agreement | TI-RADS Level Agreement (TR-2) | Improvement over unaided performance | 38.1% (31.1, 45.6) | 31.2% (24.6, 38.1) | 41.1% (31.7, 50.4) |
    | TI-RADS Level Agreement | TI-RADS Level Agreement (TR-3) | Significant improvement over unaided performance | 68.9% (62.6, 74.9) | 58.8% (52.2, 65.4) | 71.7% (64.9, 78.3) |
    | TI-RADS Level Agreement | TI-RADS Level Agreement (TR-4) | Significant improvement over unaided performance | 61.4% (56.5, 66.3) | 52.1% (47.2, 57.0) | 65.5% (59.1, 71.6) |
    | TI-RADS Level Agreement | TI-RADS Level Agreement (TR-5) | Significant improvement over unaided performance | 71.3% (61.8, 80.5) | 62.0% (52.2, 71.5) | 77.0% (66.1, 87.3) |

    Note: The acceptance criteria are largely inferred from the study's objective of demonstrating "superior performance," "significant improvement," and "consistent performance" for aided reading compared to unaided reading, and "on par" performance for the standalone algorithm compared to aided use. Exact numerical thresholds were not explicitly stated as distinct acceptance criteria.
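
    The IOU thresholds in the table refer to intersection over union between a predicted bounding box and the reference box; a minimal sketch of that computation, with made-up box coordinates, is shown below.

    ```python
    # Minimal sketch of bounding-box intersection over union (IOU), the overlap
    # measure behind the "IOU > 0.5" localization criteria above.
    # Boxes are (x_min, y_min, x_max, y_max) in pixels; the values are made up.

    def iou(box_a, box_b):
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # Intersection rectangle (zero area if the boxes do not overlap).
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0

    predicted = (100, 80, 220, 180)   # hypothetical model output
    reference = (110, 90, 230, 190)   # hypothetical expert bounding box
    print(f"IOU = {iou(predicted, reference):.2f}")  # counted as a hit if > 0.5
    ```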


    Study Details

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: 600 cases from 600 unique patients.
    • Data Provenance: Retrospective collection of thyroid ultrasound images. 74% of the data was acquired from the US. The cases in the MRMC study were sourced from institutions or sources not part of the model training or development datasets to ensure generalizability.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Two expert US-board certified radiologists and one adjudicator (also a US-board certified radiologist with the most years of experience).
    • Qualifications: US-board certified radiologists, with one having "the most years of experience" for adjudication.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Adjudication Method: 2+1 (Two expert radiologists' consensus, with an additional expert radiologist adjudicating disagreements). Specifically, the text states "consensus labels of two expert US-board certified radiologists and an adjudicator (also US-board certified radiologist with the most years of experience)."
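
    As a generic illustration of how a 2+1 scheme resolves labels (not the study's actual workflow), the helper below keeps a label when the two primary readers agree and defers to the adjudicator otherwise; case IDs and labels are invented.

    ```python
    # Generic sketch of 2+1 ground-truth adjudication: accept a label when the
    # two primary readers agree, otherwise use the adjudicator's label.
    # Case IDs and labels are invented for illustration.

    def adjudicate(reader_1_label, reader_2_label, adjudicator_label):
        if reader_1_label == reader_2_label:
            return reader_1_label
        return adjudicator_label

    cases = [
        {"id": "case_001", "r1": "TR4", "r2": "TR4", "adj": "TR4"},
        {"id": "case_002", "r1": "TR3", "r2": "TR4", "adj": "TR4"},  # disagreement
    ]
    ground_truth = {c["id"]: adjudicate(c["r1"], c["r2"], c["adj"]) for c in cases}
    print(ground_truth)  # {'case_001': 'TR4', 'case_002': 'TR4'}
    ```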

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • MRMC Study Done: Yes.
    • Effect Size of Improvement (Aided vs. Unaided):
      • AULROC (IOU > 0.5): 0.022 (0.758 aided - 0.736 unaided)
      • AULROC (IOU > 0.6): 0.052 (0.734 aided - 0.682 unaided)
      • AULROC (IOU > 0.7): 0.138 (0.686 aided - 0.548 unaided)
      • AULROC (IOU > 0.8): 0.237 (0.593 aided - 0.356 unaided)
      • Localization Accuracy: 2.0 percentage points (95.6% aided vs. 93.6% unaided)
      • TI-RADS Descriptor Accuracy Improvements (percentage points):
        • Composition: 4.5 (84.9% vs. 80.4%)
        • Echogenicity: 7.4 (77.4% vs. 70.0%)
        • Shape: 4.4 (90.8% vs. 86.4%)
        • Margin: 16.2 (73.5% vs. 57.3%)
        • Echogenic Foci: 4.1 (75.2% vs. 71.1%)
      • Overall TI-RADS Level Agreement: 8.9 percentage points (60.0% vs. 51.1%)

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Standalone Study Done: Yes. The text explicitly states: "To evaluate the standalone performance of our device, where the output of the models are directly compared against ground truth labels."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Nodule Benign/Malignant Status: Established by a reference standard of fine needle aspiration (FNA), or by 2-year follow-up for benign cases (pathology/outcomes data).
    • Localization, ACR TI-RADS Lexicon Descriptors, and TI-RADS Level Agreement: Expert consensus based on the labels of two expert US-board certified radiologists and an adjudicator.

    8. The sample size for the training set:

    • The document states that the cases in the MRMC study were sourced from institutions or sources not part of the model training or development datasets. However, the specific sample size for the training set is not provided in the given text.

    9. How the ground truth for the training set was established:

    • The document implies that the training data was distinct from the test set, but it does not explicitly describe how the ground truth for the training set was established. It only details the ground truth establishment for the test set used in the standalone and MRMC studies.

    K Number: K201369
    Date Cleared: 2020-09-16 (117 days)
    Regulation Number: 892.2050
    Applicant Name (Manufacturer): See-Mode Technologies Pte. Ltd.

    Intended Use

    See-Mode AVA (Augmented Vascular Analysis) is a stand-alone image processing software for analysis, measurement, and reporting of DICOM-compliant vascular ultrasound images obtained from carotid and lower limb arteries. The analysis includes segmentation of vessel walls and measurement of the intima-media thickness (IMT) of the carotid artery in B-Mode images, finding velocities in Doppler images, and reading annotations on the images. The software generates a vascular ultrasound report based on the image analysis results to be reviewed and approved by a qualified clinician after performing quality control. The client software is designed to run on a standard desktop or laptop computer. See-Mode AVA is intended to be used by trained medical professionals, including but not limited to physicians and medical technicians. The software is not intended to be used as an independent source of medical advice, or to determine or recommend a course of action or treatment for patients.

    Device Description

    See-Mode AVA (Augmented Vascular Analysis) is a standalone software for analysis and reporting of vascular ultrasound images. There is no dedicated medical equipment required for operation of this software except for an ultrasound machine that is the source of image acquisition. The software runs on a standard off-the-shelf computer and is accessible within a web browser.

    See-Mode AVA takes as input DICOM-compliant vascular ultrasound images. The software uses proprietary algorithms for image analysis, including segmentation of vessel walls and measurement of the intima-media thickness (IMT) of the carotid artery in B-Mode images and finding peak systolic and end diastolic velocities (PSV and EDV) from Doppler images. The software generates a vascular ultrasound report based on the image analysis results to be reviewed and approved by a qualified clinician after performing quality control. Any information within this report must be fully reviewed and approved by a qualified clinician before the vascular ultrasound report is finalized.
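
    As a rough sketch of what reading PSV and EDV from a spectral Doppler trace involves (not See-Mode's proprietary method), the snippet below takes an already-extracted velocity envelope and reports the peak systolic velocity and the end-diastolic velocity; the waveform is synthetic and the image-tracing step is omitted.

    ```python
    # Synthetic sketch of extracting PSV and EDV from a Doppler velocity envelope.
    # The envelope is simulated; a real pipeline would first trace the spectral
    # waveform from the image, which is not shown here.
    import numpy as np

    fs = 100                               # envelope samples per second
    t = np.arange(0, 3.0, 1 / fs)          # three seconds ~ three cardiac cycles
    phase = (t * 1.0) % 1.0                # 60 bpm: one cycle per second
    # Simple pulsatile envelope: sharp systolic peaks over a diastolic baseline.
    envelope = 20 + 80 * np.exp(-((phase - 0.15) ** 2) / 0.002)  # velocity in cm/s

    psv = envelope.max()                   # peak systolic velocity
    # End-diastolic velocity: the envelope minimum over the last full cycle,
    # i.e. the velocity just before the next systolic upstroke.
    edv = envelope[-fs:].min()

    print(f"PSV ~ {psv:.1f} cm/s, EDV ~ {edv:.1f} cm/s")
    ```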

    See-Mode AVA is not intended to be used as an independent source of analysis and reporting of vascular ultrasound images. Any information provided by the software must be reviewed by a qualified clinician (including sonographers, radiologists, and cardiologists) and can be modified to correct any possible mistakes. The software provides multiple methods for performing quality control and modification of image analysis results. When the vascular ultrasound report is finalized by a qualified clinician, See-Mode AVA exports the report. This report can be used adjunctly with other medical data by a physician to help in the assessment of the cardiovascular health of the patient.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for the See-Mode AVA device, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly list "acceptance criteria" for all tasks in a table format. However, it does present performance metrics that serve as implied criteria for each function. I've extracted these into the table below, along with the device's reported performance.

    | Device Function | Implied Acceptance Criteria (Based on Reported Performance) | Reported Device Performance |
    |---|---|---|
    | Segmentation of B-mode Carotid Ultrasound Images & IMT Measurement | Strong correlation with expert measurements; outperform predicate device | IMT correlation coefficient: 0.89 (with average of 2 experts); outperforms predicate (reported correlation 0.6) |
    | Text Recognition (Reading Annotations) | High accuracy in reading various annotation types | Accuracy: 92% to 96% (depending on annotation type) |
    | Signal Processing (Reading PSV & EDV from Doppler Waveforms) | Strong correlation with clinician annotations | PSV correlation coefficient: 0.98; EDV correlation coefficient: 0.97 |
    | Waveform Type Classifier (Lower Limb Doppler Images) | Strong agreement with expert annotations | Overall accuracy: 93% |
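
    The correlation coefficients above are presumably Pearson correlations between the algorithm's measurements and the reference values (e.g., the two experts' averaged IMT measurements); a minimal sketch with made-up numbers follows.

    ```python
    # Minimal sketch of the correlation metric reported above: Pearson correlation
    # between algorithm measurements and a reference (here, the two experts'
    # averaged IMT values). All numbers are made up for illustration.
    import numpy as np

    algorithm_imt_mm = np.array([0.62, 0.71, 0.55, 0.90, 0.48, 0.77])
    expert_1_mm = np.array([0.60, 0.74, 0.58, 0.86, 0.50, 0.80])
    expert_2_mm = np.array([0.64, 0.70, 0.52, 0.92, 0.46, 0.74])
    reference_mm = (expert_1_mm + expert_2_mm) / 2   # average of the two experts

    r = np.corrcoef(algorithm_imt_mm, reference_mm)[0, 1]
    print(f"Pearson correlation: {r:.2f}")
    ```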

    2. Sample Size Used for the Test Set and Data Provenance

    • Segmentation of B-mode Carotid Ultrasound Images & IMT Measurement:
      • Sample Size: 205 longitudinal B-mode carotid images.
      • Data Provenance: Retrospective dataset from multiple centers. The document does not specify the country of origin.
    • Text Recognition (Reading Annotations):
      • Sample Size: Varied from 783 to 1432 images, depending on the type of annotation being read.
      • Data Provenance: Retrospective vascular ultrasound dataset. The document does not specify the country of origin.
    • Signal Processing (Reading PSV & EDV from Doppler Waveforms):
      • Sample Size: 1117 images.
      • Data Provenance: Images where clinicians annotated PSV and EDV values at the time of image acquisition. The document does not specify the country of origin or whether it's retrospective or prospective, though the nature of "annotations at the time of image acquisition" suggests a retrospective analysis of existing data.
    • Waveform Type Classifier (Lower Limb Doppler Images):
      • Sample Size: 150 images.
      • Data Provenance: A collection of images representing typical use cases in the clinical field. The document does not specify the country of origin or whether it's retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Segmentation of B-mode Carotid Ultrasound Images & IMT Measurement:
      • Number of Experts: 2 expert readers.
      • Qualifications: Not explicitly stated beyond "expert readers."
    • Text Recognition (Reading Annotations):
      • Number of Experts: Not explicitly stated; the ground truth appears to be based on existing annotations, likely made by clinicians.
      • Qualifications: Not explicitly stated.
    • Signal Processing (Reading PSV & EDV from Doppler Waveforms):
      • Number of Experts: Clinicians.
      • Qualifications: "Clinicians at the time of image acquisition." No further details on their specific roles or experience are provided.
    • Waveform Type Classifier (Lower Limb Doppler Images):
      • Number of Experts: Expert readers.
      • Qualifications: Not explicitly stated beyond "expert readers."

    4. Adjudication Method for the Test Set

    The document does not explicitly describe an adjudication method (such as 2+1, 3+1, or none) for any of the test sets.

    • For IMT measurement, it compares the algorithm to the "average of two experts," implying that their individual measurements were used, but not necessarily a consensus process or adjudication beyond averaging.
    • For other tasks, it refers to "expert annotations" or "clinician annotations" without detailing how disagreements, if any, were resolved.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No, an MRMC comparative effectiveness study that measures the improvement of human readers with AI assistance versus without AI assistance was not explicitly described.

    The studies primarily evaluated the standalone performance of the AVA device against ground truth established by experts/clinicians or against the performance of a predicate device. While it claims the device "outperforms the reported results of the predicate device" for IMT, this is a comparison of standalone algorithm performance, not human-in-the-loop effectiveness.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, standalone (algorithm only) performance evaluations were done for all the described device functions:

    • Segmentation of B-mode carotid ultrasound images and IMT measurement.
    • Text recognition algorithm for reading annotations.
    • Signal processing algorithm for analyzing doppler waveforms (PSV and EDV).
    • Waveform type classifier on lower limb doppler images.

    The results presented (correlation coefficients, accuracy) are indicative of the algorithm's direct performance.

    7. The Type of Ground Truth Used

    The following types of ground truth were used:

    • Expert Consensus/Annotations:
      • For Segmentation of B-mode Carotid Ultrasound Images & IMT Measurement, ground truth was established by "2 expert readers' measurements" (implied average).
      • For Waveform Type Classifier (Lower Limb Doppler Images), ground truth was "annotations (i.e., waveform type) by expert readers."
    • Clinician Annotations:
      • For Signal Processing (Reading PSV & EDV from Doppler Waveforms), ground truth was "annotations (i.e. PSV and EDV values) on the images annotated by clinicians at the time of image acquisition."
    • Existing Image Annotations:
      • For Text Recognition (Reading Annotations), the algorithm's performance was evaluated against "reading different types of annotations," implying these annotations were present as ground truth on the images.

    No pathology or outcomes data was mentioned as ground truth.

    8. The Sample Size for the Training Set

    The document does not provide any specific information or sample size for the training set used for the AI/ML algorithms in See-Mode AVA. It only mentions that the device "incorporates a logical update to use artificial intelligence for image analysis" and benefits from "established machine learning methods."

    9. How the Ground Truth for the Training Set Was Established

    Since no information about the training set's sample size or data is provided, the document does not describe how the ground truth for the training set was established.

