Search Results

Found 14 results

510(k) Data Aggregation

    K Number: K240417
    Date Cleared: 2024-11-08 (269 days)
    Regulation Number: 892.2090
    Why did this record match? Applicant Name (Manufacturer): iCAD, Inc.

    Intended Use

    ProFound Detection V4.0 is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.

    Device Description

    ProFound Detection V4.0 is a computer-assisted detection and diagnosis (CAD) software device that detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. The ProFound Detection V4.0 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the ProFound Detection V4.0 marks are displayed by a user, the marks will appear as overlays on the tomosynthesis images. Each detected finding will also be assigned a "score" that corresponds to the ProFound Detection V4.0 algorithm's confidence that the detected finding is a cancer (Certainty of Finding). Certainty of Finding scores are a percentage in the range of 0% to 100% to indicate CAD's confidence that the finding is malignant. ProFound Detection V4.0 also assigns a score to each case (Case Score) as a percentage in the range of 0% to 100% to indicate CAD's confidence that the case has malignant findings. The higher the Certainty of Finding or Case Score, the higher the confidence that the detected finding is a cancer or that the case has malignant findings.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    The core acceptance criterion is non-inferiority to the predicate device (ProFound AI V3.0) on key performance metrics.

    Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (Non-inferior to Predicate) | ProFound Detection V4.0 Performance (with priors) | ProFound Detection V4.0 Performance (without priors) | Predicate Performance (ProFound AI V3.0) |
    | Sensitivity | Not inferior to 0.8725 | 0.9004 (0.8633-0.9374) | 0.9004 (0.8633-0.9374) | 0.8725 (0.8312-0.9138) |
    | Specificity | Not inferior to 0.5278 | 0.6205 (0.5846-0.6565) | 0.5863 (0.5498-0.6228) | 0.5278 (0.4909-0.5648) |
    | AUC | Not inferior to 0.8230 | 0.8753 (0.8475-0.9032) | 0.8714 (0.8423-0.9007) | 0.8230 (0.7878-0.8570) |

    Summary of Performance vs. Criteria:
    The study demonstrated that ProFound Detection V4.0, particularly when using prior images, achieved superior performance across all three metrics (Sensitivity, Specificity, and AUC) compared to the predicate device, thus meeting the non-inferiority acceptance criteria and additionally showing superiority in specificity.
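    To make the comparison above concrete, the sketch below re-checks the reported point estimates against the "not inferior to" thresholds quoted in the table. This is illustration only: the submission's formal statistical test (non-inferiority margin, confidence-interval construction, paired design) is not described in this excerpt and is not reproduced here.

```python
# Minimal sketch: compare the reported ProFound Detection V4.0 point estimates
# against the "not inferior to" thresholds quoted in the 510(k) summary table.
# The formal non-inferiority analysis in the submission is not reproduced here.

METRICS = {
    # metric: (threshold from predicate ProFound AI V3.0, V4.0 with priors, V4.0 without priors)
    "sensitivity": (0.8725, 0.9004, 0.9004),
    "specificity": (0.5278, 0.6205, 0.5863),
    "auc":         (0.8230, 0.8753, 0.8714),
}

for name, (threshold, with_priors, without_priors) in METRICS.items():
    for label, value in (("with priors", with_priors), ("without priors", without_priors)):
        verdict = "meets" if value >= threshold else "below"
        print(f"{name:11s} ({label:14s}): {value:.4f} vs threshold {threshold:.4f} -> {verdict}")
```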

    Study Details

    2. Sample size used for the test set and the data provenance:

    • Sample Size: 952 cases
      • 251 biopsy-proven cancer cases (with 256 malignant lesions)
      • 701 non-cancer cases
    • Data Provenance:
      • Country of Origin: U.S. image acquisition sites
      • Retrospective or Prospective: Retrospectively collected
      • Independence: The data was collected from sites independent of those included in the training and development sets. iCAD ensured this independence by sequestering the data.
      • Manufacturer: 100% Hologic DBT system exam data.
      • Exam Dates: 2018 - 2022.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: The text states, "Each cancer case was a biopsy proven positive, truthed by an expert breast imaging radiologist". While it explicitly mentions "an expert breast imaging radiologist" in the singular for truthing, it does not specify the exact number of unique "expert breast imaging radiologists" involved in truthing the entire dataset or their specific years of experience.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • The text does not specify a formal adjudication method (like 2+1 or 3+1) for establishing ground truth from multiple readers. Ground truth was established based on clinical data including radiology report, follow-up biopsy, and pathology data, and then truthed by an expert radiologist.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:

    • No, an MRMC comparative effectiveness study was NOT done. The study described is a standalone performance assessment of the AI algorithm itself, comparing it to a predicate AI algorithm. It does not evaluate the performance of human readers, either with or without AI assistance.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone study was done. The text explicitly states: "A standalone study was conducted, which evaluated the performance of ProFound Detection version 4.0 without an interpreting physician." This study directly compared the algorithm's performance (V4.0) against the predicate (V3.0) on an independent test set.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • The ground truth was a combination of biopsy-proven pathology data and clinical data, including radiology reports and follow-up data. Specifically, "These reference standards were derived from clinical data including radiology report, follow-up biopsy and pathology data. Each cancer case was a biopsy proven positive, truthed by an expert breast imaging radiologist who outlined the location and extent of cancer lesions in the case."

    8. The sample size for the training set:

    • The sample size for the training set is not provided. The text only refers to the test set being "independent of those included in the training and development" and that iCAD "ensures the independence of this dataset by sequestering the data and keeping it separate from the test and development datasets."

    9. How the ground truth for the training set was established:

    • How the ground truth for the training set was established is not explicitly detailed. The text mentions that the test set's ground truth was established by "biopsy proven cancer cases" and "truthed by an expert breast imaging radiologist." While it implies a similar process would likely be used for training data, the specific method for the training set's ground truth establishment is not provided in the submitted document.

    K Number: K211506
    Date Cleared: 2021-07-12 (59 days)
    Regulation Number: 892.2050
    Why did this record match? Applicant Name (Manufacturer): iCAD, Inc.

    Intended Use

    PowerLook Density Assessment is a software application intended for use with synthesized 2D images from digital breast tomosynthesis exams. PowerLook Density Assessment provides an ACR BI-RADS Atlas 5th Edition breast density category to aid health care professionals in the assessment of breast tissue composition. PowerLook Density Assessment produces adjunctive information. It is not a diagnostic aid.

    Device Description

    PowerLook Density Assessment 4.0 is a software application intended for use with mammography exams containing synthetic 2D images generated from Digital Breast Tomosynthesis (DBT) data. The PowerLook Density Assessment software assesses breast tissue composition and provides a breast density category aligned with BI-RADS® 5th Edition density lexicon. The PowerLook Density Assessment 4.0 algorithm is designed to be used with cases containing up to four synthetic 2D views. When exams contain only DBT and synthetic 2D images generated from DBT, the 4.0 algorithm is used. The PowerLook Density Assessment software is designed to work in conjunction with iCAD's PowerLook DICOM server platform, which is a Class I exempt medical device. The PowerLook Density Assessment 4.0 utilizes data management capabilities of PowerLook for controlling input to and output from the PowerLook Density Assessment algorithm. Results of the PowerLook Density Assessment software application can be displayed on a mammography review workstation, mammography reporting application or radiology information system (RIS), or printed case report.

    AI/ML Overview

    Here's an analysis of the provided text to extract the acceptance criteria and study details for the PowerLook Density Assessment V4.0.

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA submission for PowerLook Density Assessment V4.0 (K211506) does not explicitly state numerical acceptance criteria in terms of metrics like accuracy, sensitivity, or specificity. Instead, it refers to a qualitative acceptance criterion: "The performance of the system on all three datasets were above the desired performance, demonstrating that PowerLook Density Assessment 4.0 accurately calculates the BI-RADS breast density category for Hologic C-View and GE V-Preview data."

    However, based on the nature of breast density assessment devices, the implicit acceptance criteria would typically involve a high level of agreement between the device's output and expert-determined ground truth, particularly concerning the assignment of ACR BI-RADS breast density categories. While specific percentages are not provided, the claim of "accurately calculates" suggests that substantial agreement was achieved.

    | Acceptance Criteria (Implicit from "accurately calculates BI-RADS breast density category") | Reported Device Performance |
    | High agreement with expert-assigned BI-RADS breast density categories | "The performance of the system on all three datasets were above the desired performance, demonstrating that PowerLook Density Assessment 4.0 accurately calculates the BI-RADS breast density category for Hologic C-View and GE V-Preview data." (Specific metrics like accuracy, sensitivity, or specificity are not provided in the document) |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document does not specify the exact number of cases (images/patients) used in the test set. It mentions "Hologic C-View, GE V-Preview V3 and GE V-Preview V4.1 cases were run through Density Assessment 4.0."
    • Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts used or their qualifications (e.g., years of experience as a radiologist) for establishing the ground truth for the test set. It implicitly refers to "BI-RADS® 5th Edition density lexicon," suggesting that expert radiologists were involved in the ground truth labeling process, as this lexicon is used by radiologists.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) used for the test set ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No multi-reader multi-case (MRMC) comparative effectiveness study is mentioned in the provided text, nor is an effect size for human readers improving with AI assistance. The device is intended to provide "adjunctive information" to aid healthcare professionals, rather than directly assisting in a diagnostic reading task where reader performance would be measured.

    6. Standalone Performance

    Yes, a standalone performance study was done. The document states: "The performance of the system on all three datasets were above the desired performance, demonstrating that PowerLook Density Assessment 4.0 accurately calculates the BI-RADS breast density category for Hologic C-View and GE V-Preview data." This indicates that the algorithm's performance was evaluated independently in calculating the breast density categories.

    7. Type of Ground Truth Used

    The ground truth used is based on the ACR BI-RADS Atlas 5th Edition breast density category. This strongly implies expert consensus or expert-assigned categories, as radiologists are trained to use this lexicon.

    8. Sample Size for the Training Set

    The document does not provide information on the sample size used for the training set. It primarily focuses on the validation of the device.

    9. How Ground Truth for the Training Set Was Established

    The document does not explicitly state how the ground truth for the training set was established. However, given the device's function and the ground truth used for validation (BI-RADS 5th Edition), it is highly probable that the training data's ground truth was also established by expert radiologists adhering to the same BI-RADS lexicon.


    K Number: K203822
    Date Cleared: 2021-03-12 (73 days)
    Regulation Number: 892.2090
    Why did this record match? Applicant Name (Manufacturer): iCAD Inc.

    Intended Use

    ProFound AI® V3.0 is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.

    Device Description

    The ProFound AI® V3.0 device detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. The ProFound AI V3.0 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the ProFound AI V3.0 marks are displayed by a user, the marks will appear as overlays on the tomosynthesis images. Each detected finding will also be assigned a "score" that corresponds to the ProFound AI V3.0 algorithm's confidence that the detected finding is a cancer (Certainty of Finding). Certainty of Finding scores are a percentage in the range of 0% to 100% to indicate CAD's confidence that the finding is malignant. ProFound AI V3.0 also assigns a score to each case (Case Score) as a percentage in the range of 0% to 100% to indicate CAD's confidence that the case has malignant findings. The higher the Certainty of Finding or Case Score, the higher the confidence that the detected finding is a cancer or that the case has malignant findings.

    AI/ML Overview

    The provided text describes specific acceptance criteria and the study conducted to demonstrate that ProFound AI® Software V3.0 meets these criteria.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that the "Indications for Use" remain unchanged from the Predicate UNMODIFIED Device ProFound AI V2.1, and that the "technological characteristics of Modified Device, ProFound AI V3.0 remain unchanged from Unmodified Device ProFound AI V2.1 as the predicate." The key improvement for V3.0 is "software improvements leading to improved specificity for GE and Hologic modalities."

    While specific numerical acceptance criteria (e.g., minimum sensitivity, minimum specificity) are not explicitly stated in a table format with target thresholds, the performance is assessed relative to the predicate device (ProFound AI V2.0/V2.1). The primary performance improvements demonstrated are in specificity.

    | Acceptance Criterion (Implicitly based on Predicate Equivalence) | Reported Device Performance (ProFound AI V3.0) |
    | Non-inferiority in case sensitivity vs. ProFound AI V2.0/V2.1 | Hologic DBT: The conclusion of non-inferiority of the standalone performance of ProFound AI V3.0 with a Hologic DBT screening population compared to the baseline performance of ProFound AI V2 with a Hologic DBT screening population in terms of case sensitivity, FP rate per 3D volume, and AUC. Claims established in the original Reader Study (K182373) apply to ProFound AI V3.0 with Hologic DBT. GE DBT: The conclusion of non-inferiority of the standalone performance of ProFound AI V3.0 with a GE DBT screening population compared to the baseline performance of ProFound AI V2 with a Hologic DBT screening population in terms of case sensitivity, FP rate per 3D volume, and AUC. Claims established in the original Reader Study (K182373) apply to ProFound AI V3.0 with GE DBT. |
    | Improved specificity for GE and Hologic modalities | Hologic DBT: A paired comparison demonstrated a significant increase in specificity from V2.0 to V3.0. GE DBT: A paired comparison demonstrated a significant increase in specificity from V2.0 to V3.0. |
    | Retention of original Indications for Use | Unchanged from ProFound AI V2.1. |
    | Non-raising of different questions of safety and effectiveness | "These changes do not raise different questions of safety and effectiveness." |
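    The specificity gains above come from a paired comparison of V2.0 and V3.0 run on the same image sets. The submission does not name the statistical test used; one common choice for a paired difference in specificity is McNemar's exact test on the discordant non-cancer cases, sketched below with made-up counts as an assumption for illustration only.

```python
# Minimal sketch: McNemar's exact test for a paired specificity comparison on
# non-cancer cases (V2.0 vs V3.0 scoring the same images). The test choice and
# the discordant-pair counts below are assumptions, not values from the filing.
from scipy.stats import binomtest

# Discordant pairs among non-cancer cases (hypothetical counts):
v3_correct_v2_wrong = 60   # V3.0 suppresses a false positive that V2.0 made
v2_correct_v3_wrong = 25   # V2.0 was correct where V3.0 produced a false positive

n_discordant = v3_correct_v2_wrong + v2_correct_v3_wrong
result = binomtest(v3_correct_v2_wrong, n_discordant, p=0.5, alternative="two-sided")
print(f"discordant pairs: {n_discordant}, exact McNemar p-value: {result.pvalue:.4g}")
```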

    2. Sample Size Used for the Test Set and Data Provenance

    The document refers to a "ProFound AI V2 Pivotal Reader Study Clinical Study Report (CSR) (K182373)". This reader study, performed for the predicate device, is the basis for the claims applicable to V3.0 regarding non-inferiority in sensitivity.

    For the Hologic DBT Non-clinical Validation Testing and GE DBT Non-clinical Validation Testing for V3.0 itself, the document states, "A paired comparison assessed the performance of ProFound AI V3.0 on [Hologic/GE] DBT images to the performance of ProFound AI V2.0 on the same set of [Hologic/GE] DBT images," implying that the test set for these specificity comparisons consisted of images from both Hologic and GE systems, from which both V2.0 and V3.0 analyses were derived.

    • Sample Size: The exact number of cases or images in the test set specifically for the V3.0 validation studies (Hologic and GE paired comparisons) is not explicitly stated in the provided text. The non-inferiority claims rely on the original K182373 study, but its sample size is also not detailed here.
    • Data Provenance: The document does not specify the country of origin for the data. Since the device is U.S. FDA cleared, it is plausible the data is from the US, but this is not confirmed. The studies are described as "Non-clinical Validation Testing" and "Supplemental Standalone Study," indicating they are retrospective studies.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    The provided text does not explicitly state the number of experts used to establish the ground truth or their specific qualifications for the test sets. It references "the original Reader Study described in 0074-6003. PowerLook® Tomo Detection V2 Pivotal Reader Study Clinical Study Report (CSR) (K182373)", which would have involved radiologists, but details are not provided here.

    4. Adjudication Method for the Test Set

    The adjudication method for establishing ground truth for the test sets is not explicitly stated in the provided text.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done?
      • The document states that the claims established in the original Reader Study (K182373) for the predicate device (V2) apply to ProFound AI V3.0. This original study was likely an MRMC study to support the human-in-the-loop performance of V2.
      • For V3.0 itself, the validation focuses on standalone performance comparisons between V2.0 and V3.0 to demonstrate non-inferiority in sensitivity and improvement in specificity. A new human reader study was not conducted specifically for V3.0 to re-evaluate human reader improvement.
    • Effect size of human readers improving with AI vs. without AI assistance: This information is not provided for V3.0, as the primary validation focused on the standalone performance of the algorithm and its non-inferiority/specificity improvement over its predecessor. The predicate device's MRMC study (K182373) would contain this information for V2.

    6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study

    • Yes, standalone studies were done. The document explicitly refers to "ProFound AI V3.0 Hologic supplemental Standalone Study" and "ProFound AI V3.0 GE Supplemental Standalone Study." These studies compared the performance of V3.0 with V2.0/V2.1 on the same image sets.
    • The performance metrics assessed in these standalone studies included:
      • Case sensitivity
      • FP rate per 3D volume (False Positives)
      • Area Under the localized Receiver Operating Characteristic (ROC) Curve (AUC)
      • Specificity (which was shown to have a significant increase)

    7. Type of Ground Truth Used

    The type of ground truth used is not explicitly detailed in the provided text. However, for breast cancer detection, ground truth for such studies typically involves:

    • Expert Consensus: Multiple radiologists reviewing cases and reaching agreement.
    • Pathology: Biopsy-proven presence or absence of malignancy.
    • Follow-up Outcomes Data: Clinical follow-up over time to confirm benign or malignant status.

    Given that the device detects "malignant soft-tissue densities and calcifications," it is highly likely that pathology (biopsy results) and/or expert radiologist consensus with follow-up were used to establish definitive ground truth regarding the presence and nature of cancers.

    8. Sample Size for the Training Set

    The document does not specify the sample size used for the training set of ProFound AI V3.0. It mentions that V3.0 uses "deep learning technology to process feature computations and uses pattern recognition to identify suspicious breast lesions," which implies a training phase, but details about the training data are absent.

    9. How the Ground Truth for the Training Set Was Established

    The document does not specify how the ground truth for the training set was established. Similar to the test set, it would likely involve expert annotations, pathology, and/or follow-up data.


    K Number: K191994
    Date Cleared: 2019-10-04 (70 days)
    Regulation Number: 892.2090
    Why did this record match? Applicant Name (Manufacturer): iCAD Inc.

    Intended Use

    ProFound™ AI V2.1 Software is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.

    Device Description

    ProFound AI V2.1 detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. ProFound AI V2.1 has the same performance with the DBT systems cleared for use with ProFound AI V2; furthermore, it provides support for additional DBT systems. The ProFound AI V2.1 Software allows a radiologist to quickly identify suspicious soft tissue densities (masses, architectural distortions and asymmetries) and calcifications by marking the detected areas in the tomosynthesis images. When the ProFound AI V2.1 marks are displayed, the marks will appear as overlays on the 3D tomosynthesis images. For 3D tomosynthesis cases and depending on the functionality offered by the viewing/reading application, the ProFound AI V2.1 marks may also serve as a navigation tool for users because each mark can be linked to the tomosynthesis slice where the detection was identified. Each detected region is also assigned a "score" that corresponds to the ProFound AI V2.1 algorithm's confidence that the detected region is malignant (certainty of finding). Each case is also assigned a case score that corresponds to the ProFound AI V2.1 algorithm's confidence that a case is malignant. The certainty of finding scores are represented as an integer in the range of 0 to 100 to indicate the CAD confidence that the detected region or case is malignant. The higher the certainty of finding or case score, the more likely the detected region or case is to be malignant.

    AI/ML Overview

    Here’s a summary of the acceptance criteria and the study details for the ProFound™ AI Software V2.1, based on the provided FDA 510(k) summary.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that "Case-Level Sensitivity, Lesion-Level Sensitivity, FP Rate in Non-Cancer Cases, and Specificity met design specifications" for both Siemens Standard and Empire Reconstruction datasets. However, the specific numerical acceptance criteria are not explicitly provided in the text. The document refers to "design specifications" and "the detailed results are in the User Manual," implying these numerical targets exist but are not included in the 510(k) summary provided.

    For the comparison studies, the acceptance criterion was "the difference between the control group [Hologic] and the test group [Siemens Standard/Empire] is within the margin of non-inferiority for Sensitivity and AUC, and FPPI." The reported performance was that "Each of the three measures produced differences that were within the margin of non-inferiority." Again, specific numerical margins for non-inferiority are not detailed.

    | Acceptance Criteria (Not explicitly stated numerically, but implied) | Reported Device Performance (Met criteria) |
    | Standalone Performance: | |
    | Case-Level Sensitivity meets design specifications | Met design specifications (for both Siemens Standard and Empire Reconstruction) |
    | Lesion-Level Sensitivity meets design specifications | Met design specifications (for both Siemens Standard and Empire Reconstruction) |
    | FP Rate in Non-Cancer Cases meets design specifications | Met design specifications (for both Siemens Standard and Empire Reconstruction) |
    | Specificity meets design specifications | Met design specifications (for both Siemens Standard and Empire Reconstruction) |
    | Non-Inferiority Comparison (vs. Hologic): | |
    | Difference in Sensitivity (Siemens vs. Hologic) within non-inferiority margin | Within the margin of non-inferiority (for both Siemens Standard and Empire Reconstruction) |
    | Difference in FPPI (Siemens vs. Hologic) within non-inferiority margin | Within the margin of non-inferiority (for both Siemens Standard and Empire Reconstruction) |
    | Difference in AUC (Siemens vs. Hologic) within non-inferiority margin | Within the margin of non-inferiority (for both Siemens Standard and Empire Reconstruction) |

    2. Sample Size Used for the Test Set and Data Provenance

    • Siemens Standard Reconstruction Dataset:
      • Sample Size: 694 cases (238 cancer, 456 non-cancer)
      • Provenance: Not explicitly stated (e.g., country of origin). The study is described as a "screening population dataset," implying it was collected for screening purposes, and the phrase "stratified bootstrap procedure was used to estimate performance over a screening patient population" suggests the results were estimated to be representative of a screening population (a sketch of such a stratified bootstrap appears after this list). Whether it is retrospective or prospective is not explicitly stated, but "dataset consisted of" typically implies retrospective collection for testing.
    • Siemens Empire Reconstruction Dataset:
      • Sample Size: 322 cases (140 cancer, 182 non-cancer)
      • Provenance: Not explicitly stated (e.g., country of origin). Similar to the Standard Reconstruction dataset, it is described as a "screening population dataset," implying it is collected for screening purposes. Whether it's retrospective or prospective is not explicitly stated, but "dataset consisted of" typically implies retrospective collection for testing.
    • Hologic (Control Group for Comparison): The document references "baseline performance of ProFound AI for DBT V2.0 with Hologic DBT images." While a control group is mentioned, the specific sample size for the Hologic dataset used in the comparison is not provided in this excerpt, only that the performance was used as a reference for non-inferiority.
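    A minimal sketch of the kind of stratified bootstrap mentioned above, assuming per-case binary CAD outcomes. Only the cancer/non-cancer case counts (238/456, from the Siemens Standard dataset) are taken from the text; the per-case outcomes, strata handling, and confidence-interval method are synthetic placeholders, since the submission's actual procedure is not described.

```python
# Minimal sketch: stratified bootstrap of sensitivity and specificity,
# resampling cancer and non-cancer cases separately (placeholder data).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-case outcomes: True = CAD flags the case (not real data).
cancer_flagged = rng.random(238) < 0.90   # hypothetical true-positive indicator per cancer case
normal_flagged = rng.random(456) < 0.40   # hypothetical false-positive indicator per non-cancer case

def stratified_bootstrap(n_iter: int = 2000):
    """Resample within each stratum separately, preserving the stratum sizes."""
    sens, spec = [], []
    for _ in range(n_iter):
        c = rng.choice(cancer_flagged, size=cancer_flagged.size, replace=True)
        n = rng.choice(normal_flagged, size=normal_flagged.size, replace=True)
        sens.append(c.mean())
        spec.append(1.0 - n.mean())
    return np.percentile(sens, [2.5, 97.5]), np.percentile(spec, [2.5, 97.5])

sens_ci, spec_ci = stratified_bootstrap()
print("sensitivity 95% CI:", sens_ci)
print("specificity 95% CI:", spec_ci)
```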

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    The document does not explicitly state the number of experts used or their qualifications for establishing ground truth for the test sets.

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method used for the test sets (e.g., 2+1, 3+1).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study (AI vs. without AI assistance) is not described in this document. The studies presented are standalone performance evaluations of the AI system and non-inferiority comparisons of the AI system's performance across different DBT acquisition systems. The "concurrently by interpreting physicians" in the indication for use suggests a human-in-the-loop interaction, but a specific MRMC study to quantify human improvement with AI is not detailed here.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, standalone (algorithm only without human-in-the-loop performance) studies were done.

    • The "ProFound AI for DBT V2.1 Siemens Standard Screening Population Dataset" study explicitly states: "Standalone testing was performed on tomosynthesis slices only."
    • Similarly, the "ProFound AI for DBT V2.1 Siemens Empire Screening Population Dataset" study states: "Standalone testing was performed on tomosynthesis slices only."
    • The comparison studies ("Standalone Hologic Comparison Test Results") also involve comparing "the standalone performance of ProFound AI for DBT V2.0 with Hologic DBT images to the performance of ProFound AI for DBT V2.1 with Siemens Standard/Empire Reconstruction DBT images."

    7. Type of Ground Truth Used

    The type of ground truth used is not explicitly stated in this excerpt. However, in the context of screening population datasets for cancer detection, ground truth is typically established by:

    • Pathology (biopsy results) for positive cases.
    • Long-term follow-up (e.g., 1-2 years of negative imaging) for negative cases.

    8. Sample Size for the Training Set

    The document does not specify the sample size used for the training set.

    9. How the Ground Truth for the Training Set Was Established

    The document does not specify how the ground truth for the training set was established. It only mentions that the "ProFound AI 2.1 algorithm uses deep learning technology to process feature computations and uses pattern recognition to identify suspicious breast lesions." This implies a training process based on labeled data, but details about the origin and establishment of those labels are not provided in this excerpt.


    K Number: K182373
    Date Cleared: 2018-12-06 (97 days)
    Regulation Number: 892.2090
    Reference & Predicate Devices: N/A
    Why did this record match? Applicant Name (Manufacturer): iCAD Inc.

    Intended Use

    PowerLook® Tomo Detection V2 Software is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.

    Device Description

    PLTD V2 detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. The PLTD V2 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the PLTD V2 marks are displayed by a user, the marks will appear as overlays on the tomosynthesis images. The PLTD V2 marks also serve as a navigation tool for users, because each mark is linked to the tomosynthesis plane where the detection was identified. Users can navigate to the plane associated with each mark by clicking on the detection mark. Each detected region will also be assigned a "score" that corresponds to the PLTD V2 algorithm's confidence that the detected region is a cancer (Certainty of Finding Score). Certainty of Finding scores are relative scores assigned to each detected region and a Case Score is assigned to each case regardless of the number of detected regions. Certainty of Finding and Case Scores are computed by the PLTD V2 algorithm and represent the algorithm's confidence that a specific finding or case is malignant. The scores are represented on a 0% to 100% scale. Higher scores represent a higher algorithm confidence that a finding or case is malignant. Lower scores represent a lower algorithm confidence that a finding or case is malignant.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text.

    1. Acceptance Criteria and Reported Device Performance

    The device is a Computer-Assisted Detection and Diagnosis (CAD) software for digital breast tomosynthesis (DBT) exams. The acceptance criteria are largely demonstrated through the multi-reader multi-case (MRMC) pivotal reader study and standalone performance evaluations.

    Table of Acceptance Criteria and Reported Device Performance:

    | Criteria Category | Metric | Acceptance Criteria (Implied / Stated) | Reported Device Performance (with CAD vs. without CAD) |
    | Pivotal Reader Study (Human-in-the-Loop) | | | |
    | Radiologist Performance | Case-level Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) | Non-inferiority to radiologist performance without CAD; implicit superiority is also a desirable outcome. | AUC with CAD: 0.852; AUC without CAD: 0.795; average difference: 0.057 (95% CI: 0.028, 0.087); p |
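    The endpoint above is a case-level ROC AUC computed from readers' probability-of-malignancy scores. The sketch below computes such an AUC with and without CAD for a single hypothetical reader using scikit-learn; the pivotal study's multi-reader (MRMC) analysis and its confidence-interval method are not reproduced here.

```python
# Minimal sketch: case-level AUC from probability-of-malignancy scores, with and
# without CAD, for one hypothetical reader (synthetic labels and scores).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
is_cancer = rng.integers(0, 2, size=200)                       # hypothetical case labels
scores_without_cad = is_cancer * 0.4 + rng.random(200) * 0.6   # hypothetical reader scores
scores_with_cad    = is_cancer * 0.5 + rng.random(200) * 0.5   # hypothetical reader scores

auc_without = roc_auc_score(is_cancer, scores_without_cad)
auc_with    = roc_auc_score(is_cancer, scores_with_cad)
print(f"AUC without CAD: {auc_without:.3f}  with CAD: {auc_with:.3f}  "
      f"difference: {auc_with - auc_without:+.3f}")
```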

    K Number: K180125
    Date Cleared: 2018-04-05 (79 days)
    Regulation Number: 892.2050
    Why did this record match? Applicant Name (Manufacturer): iCAD, Inc

    Intended Use

    PowerLook Density Assessment is a software application intended for use with synthesized 2D images from digital breast tomosynthesis exams. PowerLook Density Assessment provides an ACR BI-RADS Atlas 5th Edition breast density category to aid health care professionals in the assessment of breast tissue composition. PowerLook Density Assessment produces adjunctive information. It is not a diagnostic aid.

    Device Description

    The PowerLook Density Assessment Software analyzes digital breast tomosynthesis 2D synthetic images to calculate the dense tissue area of each breast. The measured dense tissue area is then used to provide a Category of 1-4 consistent with ACR BI-RADS 5th edition a-d. The top-level design sub-systems are as follows: Initialization, Breast Segmentation, Breast Thickness Correction, and Breast Density Computation. The assessment results in a final density map that, in conjunction with its pixel size (in square cm), is used to compute the area of the dense tissue (square cm). The area of the breast (square cm) is computed by counting the total number of pixels in the valid regions of the breast segmentation mask. The ratio of the dense area to the total breast area gives the percent breast density (PBD) for the given view. The dense areas, breast areas, percent breast densities, and dispersion for the CC and MLO views are averaged in order to report measurements for each breast. The average PBD and the average dispersion are then taken and mapped to a density category from 1 through 4 consistent with ACR BI-RADS 5th edition a-d for each breast, using a set of calibrated boundaries. The higher category of the two breasts is reported as the overall case score. The PowerLook Density Assessment is designed as a stand-alone executable operating within the larger software framework provided by PowerLook AMP. As such, the PowerLook Density Assessment software is purely focused on processing tomosynthesis 2D synthetic images and is not concerned with system issues such as managing DICOM image inputs or managing system outputs to a printer, PACS or Mammography Workstation. The PowerLook Density Assessment software is automatically invoked by PowerLook AMP. The results of PowerLook Density Assessment are designed to display on a mammography workstation, high resolution monitor, or in a printed case report. PowerLook Density Assessment is designed to process approximately 60-120 cases per hour.
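    A minimal sketch of the percent-breast-density bookkeeping described above, assuming binary dense-tissue and breast-segmentation masks are already available. The segmentation, thickness correction, and dispersion handling are not detailed in the document, and the category boundaries below are illustrative placeholders, not iCAD's calibrated values.

```python
# Minimal sketch of the PBD calculation and category mapping described above.
# Dispersion is omitted and the boundaries are hypothetical placeholders.
import numpy as np

def view_pbd(dense_mask: np.ndarray, breast_mask: np.ndarray, pixel_area_cm2: float) -> float:
    """PBD for one view: dense tissue area divided by total breast area, both in cm^2."""
    dense_area = dense_mask.sum() * pixel_area_cm2
    breast_area = breast_mask.sum() * pixel_area_cm2
    return dense_area / breast_area

def breast_category(pbd_cc: float, pbd_mlo: float,
                    boundaries=(0.10, 0.25, 0.50)) -> int:
    """Average the CC and MLO PBDs and map to a category 1-4 (illustrative boundaries)."""
    avg_pbd = (pbd_cc + pbd_mlo) / 2.0
    return int(np.searchsorted(boundaries, avg_pbd)) + 1   # categories 1..4

def case_category(left_breast: int, right_breast: int) -> int:
    """The higher of the two per-breast categories is reported as the overall case score."""
    return max(left_breast, right_breast)
```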

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the PowerLook Density Assessment Software, based on the provided FDA 510(k) document:

    Acceptance Criteria and Device Performance

    The document states that the PowerLook Density Assessment Software performed "substantially equivalent to the predicate device." While specific numerical acceptance criteria (e.g., minimum kappa score, percentage agreement) are not explicitly listed in the provided text as pass/fail thresholds, the performance was assessed based on:

    • Kappa score: A statistical measure of inter-rater agreement, commonly used for categorical data.
    • Percent correct in each BI-RADS category: Measures the accuracy of the software's classification into each of the four BI-RADS density categories (1, 2, 3, 4).
    • Combined A/B and C/D BI-RADS categories: Assesses performance when categories are grouped (e.g., non-dense vs. dense).

    The document states: "PowerLook Density Assessment performed substantially equivalent to the predicate device." This implies that the device's performance metrics were within an acceptable range compared to the already cleared predicate.

    Table of Acceptance Criteria and Reported Device Performance

    | Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
    | Kappa Score | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
    | Percent Correct (Each BI-RADS Category) | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
    | Combined A/B and C/D BI-RADS Categories | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
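    The kappa and percent-correct metrics in the table can be computed from paired category labels; the sketch below shows one way to do so with scikit-learn, using hypothetical labels, since the submission's actual data and agreement values are not reported in this excerpt.

```python
# Minimal sketch: agreement metrics for BI-RADS density categories (1-4),
# comparing hypothetical software outputs against radiologist labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

radiologist = np.array([1, 2, 2, 3, 4, 3, 2, 1, 4, 3])   # hypothetical ground-truth categories
software    = np.array([1, 2, 3, 3, 4, 3, 2, 2, 4, 3])   # hypothetical device outputs

kappa = cohen_kappa_score(radiologist, software)
percent_correct = (radiologist == software).mean()

# Collapsed agreement: non-dense (A/B -> categories 1-2) vs dense (C/D -> categories 3-4).
collapsed_agreement = ((radiologist >= 3) == (software >= 3)).mean()

print(f"kappa={kappa:.3f}  percent correct={percent_correct:.2%}  "
      f"A/B vs C/D agreement={collapsed_agreement:.2%}")
print(confusion_matrix(radiologist, software))
```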

    Study Details

    1. Sample size used for the test set and the data provenance:

      • Test Set Sample Size: Not explicitly stated in the provided text. The document mentions "a set of digital breast tomosynthesis synthesized 2D images."
      • Data Provenance: Not explicitly stated (e.g., country of origin, specific clinics). The study used "digital breast tomosynthesis synthesized 2D images from." It is retrospective, as it refers to images for which BI-RADS scores "were obtained from radiologists."
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Number of Experts: Not explicitly stated. The document mentions "BI-RADS scores were obtained from radiologists." It does not specify if this was a single radiologist or multiple.
      • Qualifications of Experts: The experts are identified as "radiologists." No specific experience level (e.g., "10 years of experience") is provided.
    3. Adjudication method for the test set:

      • The document states that BI-RADS scores "were obtained from radiologists," but it does not specify an adjudication method (such as 2+1, 3+1, or none) for determining a consensus ground truth if multiple radiologists were involved.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

      • No, an MRMC comparative effectiveness study was not explicitly mentioned or described. The study primarily focused on the standalone performance of the PowerLook Density Assessment Software against radiologist assessments (ground truth). It did not describe a scenario where human readers' performance with and without AI assistance was compared.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • Yes, a standalone study was performed. The described validation involved running the "PowerLook Density Assessment Software... followed by a comparison of the results between the predicate results, desired results, and observed performance..." This indicates the algorithm's performance was evaluated independently against the established ground truth.
    6. The type of ground truth used:

      • Expert Consensus (Radiologist BI-RADS Scores): The ground truth was established by "BI-RADS scores... obtained from radiologists." This implies the radiologists' interpretations served as the reference standard for breast density categorization.
    7. The sample size for the training set:

      • Not specified. The document does not provide any information regarding the training set's sample size or characteristics.
    8. How the ground truth for the training set was established:

      • Not specified. Since information about the training set size or its establishment is absent, the method for establishing ground truth for training data is also not provided.

    K Number: K153570
    Date Cleared: 2016-02-25 (73 days)
    Regulation Number: 892.5900
    Why did this record match? Applicant Name (Manufacturer): ICAD INC.

    Intended Use

    The Axxent® Electronic Brachytherapy System Model 110 XP 1200 is a high dose rate Brachytherapy device for use with Axxent Applicators to treat lesions, tumors and conditions in or on the body where radiation is indicated. Only Xoft Axxent Surface Applicators can be used with the Axxent Electronic Brachytherapy System Model 110 XP 1200.

    Device Description

    The Axxent Electronic Brachytherapy System consists of two primary components: the Axxent System Controller (Controller) and the Axxent HDR X-ray Source-2.2 (Catheter/Source). The System is designed to deliver doses of X-ray radiation to tissue in proximity to the applicator using a miniature X-ray tube powered by the Controller.

    The Axxent Electronic Brachytherapy System is a mobile, computer-controlled platform that is responsible for the overall operation of the System. The Controller is designed to work with the Source, which is a miniature X-ray tube located at the end of a flexible catheter. The Catheter/Source is inserted into a lumen of an appropriate Applicator; Applicators are cleared separately under their own 510(k)s. The Axxent Electronic Brachytherapy System Model 110 XP 1200 described in this 510(k) will only be used for surface applications using Xoft Axxent Surface Applicators.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the Axxent Electronic Brachytherapy System Model 110 XP 1200. This is a medical device for radiation therapy, not an AI/ML imaging device. Therefore, much of the requested information regarding AI-specific criteria (such as sample sizes for test and training sets, expert ground truth adjudication, MRMC studies, or standalone algorithm performance) is not applicable or available in this document.

    However, I can extract the acceptance criteria related to the device's performance and the nature of the study conducted to prove it meets those criteria.

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria for this device are focused on demonstrating that technological changes do not negatively impact the device's fundamental functional, scientific, and performance characteristics, particularly concerning radiation dosage. The device seeks substantial equivalence to its predicate device (Axxent Electronic Brachytherapy System, K122951).

    | Acceptance Criteria / Performance Aspect | Reported Device Performance (Model 110 XP 1200 vs. Predicate) |
    | Spatial Parameters (Azimuthal and Polar Variation) | Equivalence with the current device. |
    | Depth Dose | Equivalence with the current device. |
    | First and Second Half Value Layers | Agreement between the current x-ray source/catheter and the proposed source/catheter measurement. |
    | Consistency of Spatial Measurements, Depth Dose, and Source/Catheter Spectrum after Extended Use | Consistency demonstrated. |
    | Source/Catheter Output Linearity and Reproducibility | Output is linear as a function of time and reproducible. |
    | Proposed Source/Catheter Longevity | Functions for at least as long as the current source. |
    | Usability in Simulated Clinical Setting | Able to be used in the same manner as the current x-ray source/catheter in a simulated clinical setting. |
    | Clinical Dose Equivalence in Surface Applicator Indication | Clinical dose is identical when using either source/catheter design in the surface applicator indication. |

    Study Details (as per the document):

    1. Sample size used for the test set and the data provenance: Not applicable in the context of an AI/ML study. The testing was non-clinical performance data (laboratory testing of the device's physical properties), not based on a "test set" of patient data.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. The ground truth for this device's performance is based on physical measurements of radiation characteristics.

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. This is not an AI/ML imaging device.

    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done: Not applicable. This is not an AI/ML imaging device.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc): The ground truth for performance was established through physical measurements and validation testing of the device's characteristics (e.g., spatial parameters, depth dose, half-value layers, output linearity, longevity).

    7. The sample size for the training set: Not applicable. This is not an AI/ML device.

    8. How the ground truth for the training set was established: Not applicable.

    Overall Study Description:

    The study referenced is a non-clinical performance assessment conducted to support the substantial equivalence of the Axxent Electronic Brachytherapy System Model 110 XP 1200 to its predicate device. This involved a series of laboratory tests and validation activities focused on the physical and operational characteristics of the device, particularly the changes in the cooling system and anode target. The goal was to confirm that these changes did not alter the fundamental safety and effectiveness of the device, especially concerning radiation delivery. The conclusion from these non-clinical tests was that the clinical dose is identical regardless of whether the proposed or current source/catheter design is used in the surface applicator indication.


    K Number: K141343
    Date Cleared: 2014-09-05 (107 days)
    Regulation Number: 892.5900
    Why did this record match? Applicant Name (Manufacturer): ICAD, INC.

    Intended Use

    The Axxent Cervical Applicator is indicated for use with the Axxent Electronic Brachytherapy System to deliver high dose rate brachytherapy for intracavitary treatment of cancer of the uterus, cervix, endometrium and vagina.

    Device Description

    The Axxent Cervical Applicator is a component of the Axxent Electronic Brachytherapy System (cleared under K122951) which utilizes a proprietary miniaturized X-ray source and does not require radioactive isotopes. The applicator allows the Axxent HDR X-ray source to deliver high-dose rate, low energy radiation treatment to the uterus, cervix, endometrium and vagina. The Axxent HDR X-ray source mimics the penetration and dose characteristics of Iridium-192.

    The Axxent Cervical applicator is provided non-sterile and can be reused. The user must sterilize the device using steam sterilization before each use. An Applicator Clamp is a required accessory to stabilize the Cervical Applicator during radiation treatment.

    AI/ML Overview

    The provided document is a 510(k) premarket notification for the Axxent® Cervical Applicator, primarily focused on adding MRI compatibility and MR Conditional labeling. It is not an AI/ML device, and therefore, many of the requested criteria related to AI/ML device performance and studies are not applicable.

    However, I can extract information related to the acceptance criteria and the study performed for the MRI compatibility of the device, as that is the specific new claim being sought.

    Here's the information regarding the MRI compatibility study:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria (Test Standard) | Reported Device Performance |
    | Magnetically induced displacement force (ASTM F2052) | Results indicated the device meets the criteria for MR Conditional labeling. |
    | Magnetically induced torque (ASTM F2213 as a guide) | Results indicated the device meets the criteria for MR Conditional labeling. |
    | MR image artifact (ASTM F2119) | Results indicated the device meets the criteria for MR Conditional labeling. |
    | Radiofrequency induced heating (ASTM F2182) | Results indicated the device meets the criteria for MR Conditional labeling. |

    Note: The document states that the results of the testing "indicated that, in accordance with the guidance of relevant ASTM standards, Xoft's Cervical Applicator should be labeled MR Conditional." This implies that the device successfully met the established thresholds and requirements outlined in each ASTM standard for magnetic displacement, torque, artifact, and heating, thereby qualifying for MR Conditional labeling. Specific numerical thresholds or measured values are not provided in this summary.

    2. Sample size used for the test set and the data provenance

    • Sample size: "Test samples were provided in their final manufactured condition." The exact number of samples tested is not specified, but it implies a sufficient number of devices were used for each test to achieve reliable results as per the ASTM standards.
    • Data provenance: Not explicitly stated, but given it's a submission to the FDA by a US-based company, the testing was likely conducted in a controlled laboratory environment, presumably in the US. It is a prospective study as tests were conducted specifically for this submission.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This question is not applicable as this is a device compatibility test, not an AI/ML diagnostic or prognostic study requiring expert ground truth for interpretation of medical images or patient outcomes. The "ground truth" here is adherence to engineering standards for MRI safety.

    4. Adjudication method for the test set

    • Not applicable. This involves objective physical measurements and adherence to ASTM standards, not subjective interpretation requiring adjudication among experts.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • Not applicable. This is not an AI/ML device and no MRMC study was conducted.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Not applicable. This is not an AI/ML device.

    7. The type of ground truth used

    • The "ground truth" in this context is established by the ASTM consensus standards (ASTM F2052, ASTM F2213, ASTM F2119, ASTM F2182) for evaluating MRI compatibility. Performance is judged against the objective physical criteria and thresholds defined within these standards.

    8. The sample size for the training set

    • Not applicable. This is not an AI/ML device; there is no training set.

    9. How the ground truth for the training set was established

    • Not applicable. This is not an AI/ML device; there is no training set.

    K Number: K123442
    Date Cleared: 2013-02-27 (111 days)
    Regulation Number: 892.5700
    Reference & Predicate Devices: N/A
    Why did this record match? Applicant Name (Manufacturer): ICAD, INC.


    K Number: K122951
    Date Cleared: 2013-01-17 (114 days)
    Regulation Number: 892.5900
    Why did this record match? Applicant Name (Manufacturer): ICAD, INC.

    Intended Use

    The Axxent® Electronic Brachytherapy System is a high dose rate Brachytherapy device for use with Axxent Applicators to treat lesions, tumors and conditions in or on the body where radiation is indicated.

    Device Description

    The Axxent Electronic Brachytherapy System consists of three primary components: the Axxent System Controller (Controller); the Axxent HDR X-ray Source-2.2 (Source); and an Axxent-compatible applicator (Applicator). The System is designed to deliver doses of X-ray radiation to the tissue in proximity to the applicator using a disposable, miniature X-ray tube powered by the Controller.

    The Axxent Electronic Brachytherapy Controller is a mobile, computer-controlled platform that is responsible for the overall operation of the System. The Controller is designed to work with the Source, which is a disposable, miniature X-ray tube located at the end of a flexible cable. The Source is inserted into a lumen of the appropriate Applicator.

    AI/ML Overview

    Here's an analysis of the provided text regarding the Axxent Electronic Brachytherapy System, focusing on acceptance criteria and study details.

    Important Note: The provided text is a 510(k) summary and FDA clearance letter, which primarily focuses on demonstrating substantial equivalence to predicate devices and general safety/effectiveness. It does not contain the detailed information typically found in a clinical study report addressing specific performance metrics, sample sizes for ground truth, expert qualifications, or MRMC studies for AI devices. This device is a radiation therapy system, not an AI-powered diagnostic tool, so many of the requested categories (like MRMC, standalone AI performance, and AI-specific ground truth details) are not directly applicable or reported in this type of submission.


    Acceptance Criteria and Reported Device Performance

    Given that this is a radiation therapy system and the submission focuses on substantial equivalence, the "acceptance criteria" here refer more to the successful demonstration of the device meeting its performance, functional, and system requirements during verification and validation testing, rather than specific diagnostic accuracy metrics.

    | Acceptance Criteria Category | Reported Device Performance/Statement |
    | System Functionality & Performance | All performance, functional, and system requirements were met. |
    | Safety and Effectiveness | Device labeling contains instructions, cautions, and warnings for safe and effective use. Risk management via risk analysis. Potential hazards are controlled via software development, verification, and validation testing. |
    | Substantial Equivalence to Predicates | The technological characteristics are the same as the Axxent Electronic Brachytherapy System (K072683) and similar to the Varian VariSource 200 HDR Afterloader (K061582) in terms of design, materials, principles of operation, and product specifications. |

    Study Details (Based on Available Information)

    1. Sample size used for the test set and the data provenance:

      • Test Set Sample Size: Not explicitly stated in the provided documents. The validation testing likely involved a series of engineering and functional tests on the device itself, rather than a "test set" of patient data in the way an AI diagnostic device would.
      • Data Provenance: Not applicable in the context of patient data for a performance study. The "data" here refers to the outcomes of the engineering and software validation tests.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This question is not directly applicable to a hardware/software system validation in the way it would be for a diagnostic AI system requiring expert consensus on images. The "ground truth" for this device's performance would be established by engineering specifications and regulatory standards. Expert oversight would be from the development and testing engineers and regulatory affairs personnel within iCAD/Xoft.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not applicable. Adjudication methods like 2+1 or 3+1 are typically used in clinical studies for diagnostic accuracy to resolve discrepancies between readers, particularly for ground truth establishment or comparative reader performance. This submission describes performance and functional testing of a radiation delivery system.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:

      • No MRMC study was done or reported. This device is a therapeutic device (radiation delivery system), not an AI-assisted diagnostic tool. Therefore, human reader (e.g., radiologist) improvement with AI assistance is not a relevant metric for this submission.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Standalone performance (algorithm only) is partially relevant in that the device's control system and dose delivery algorithm operate in a "standalone" fashion to execute a treatment plan. The "Assessment of Non-Clinical Performance Data" indicates "Validation testing was performed according to a Software System Test Plan. All performance, functional and system requirements were met." This implies the software and system were tested independently to ensure they meet their specifications. However, this is not an "AI algorithm only" performance as conceptualized for diagnostic AI.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The "ground truth" for the device's function and performance is its adherence to predefined engineering specifications, design requirements, and regulatory standards. This is assessed through various verification and validation tests (e.g., electrical safety, radiation output accuracy, software functionality, mechanical integrity). For safety, a risk analysis identifies potential hazards.
    7. The sample size for the training set:

      • Not applicable. This device is not an AI/machine learning system that requires a "training set" of data for model development. Its software and control algorithms are developed through traditional engineering and programming methods.
    8. How the ground truth for the training set was established:

      • Not applicable, as there is no "training set" for this type of medical device as described in the provided document.

    Page 1 of 2