Search Results

Found 4 results

510(k) Data Aggregation

    K Number: K220590
    Device Name: aPROMISE X
    Date Cleared: 2022-04-29 (59 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Applicant Name (Manufacturer): EXINI Diagnostics AB

    Intended Use

    aPROMISE X is intended to be used by healthcare professionals and researchers for acceptance, transfer, storage, image display, manipulation, quantification and reporting of digital medical images. The system is intended to be used with images acquired using nuclear medicine (NM) imaging using PSMA PET/CT. The device provides general Picture Archiving and Communications System (PACS) tools as well as a clinical application for oncology including marking of regions of interest and quantitative analysis.

    Device Description

    aPROMISE (automated PROstate specific Membrane Antigen Imaging SEgmentation) X is an updated version of the previously cleared device aPROMISE v1.2.1 (K211655), with a web interface where users can upload body scans of PSMA PET/CT image data in the form of DICOM files, review patient studies, and share study assessments within a team. The software platform has two installation configurations: it can be deployed either to a cloud infrastructure or to a local server at a clinical facility. The platform can communicate via the HTTP protocol described in Part 18 of the DICOM standard, which enables direct transmission of DICOM data from a PACS to aPROMISE X; manual upload via the aPROMISE X user interface is also supported. The software complies with the Digital Imaging and Communications in Medicine (DICOM) 3 standard.
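    The DICOM Part 18 mechanism mentioned above (commonly known as DICOMweb) is what allows a PACS to push studies to the device over HTTP. As a rough sketch of what such a transfer looks like in general, here is a minimal STOW-RS upload in Python; the endpoint URL and bearer-token authentication are assumptions for illustration, not EXINI's documented API.

```python
import requests

# Hypothetical DICOMweb STOW-RS endpoint -- the real aPROMISE X URL,
# authentication scheme, and tenant path are not part of this summary.
STOW_URL = "https://apromise.example.com/dicomweb/studies"

def stow_rs_upload(dicom_path: str, token: str) -> None:
    """Send one DICOM Part 10 file via STOW-RS (DICOM PS3.18)."""
    with open(dicom_path, "rb") as f:
        part = f.read()

    boundary = "dicom-boundary"
    # STOW-RS requires a multipart/related body with application/dicom parts.
    body = (
        f"--{boundary}\r\n"
        "Content-Type: application/dicom\r\n\r\n"
    ).encode() + part + f"\r\n--{boundary}--\r\n".encode()

    resp = requests.post(
        STOW_URL,
        data=body,
        headers={
            "Content-Type": f'multipart/related; type="application/dicom"; boundary={boundary}',
            "Accept": "application/dicom+json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
        timeout=60,
    )
    resp.raise_for_status()
```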

    Multiple scans can be uploaded for each patient and the system provides a separate review for each study. The review page displays studies in a 4-panel view showing PET, CT, PET/CT fusion and maximum intensity projection (MIP) simultaneously and includes the option to display each view separately. The device is used to review entire patient studies, using image visualization and analysis tools for users to identify and mark regions of interest (ROIs). While reviewing image data, users can mark ROIs by selecting from pre-defined hotspots that are highlighted when hovering with the mouse pointer over the segmented region, or by manual drawing, i.e., selecting individual voxels in the image slices to include as hotspots. Selected or drawn hotspots are subject to automatic quantitative analysis. The user can review the results of this quantitative analysis and determine which hotspots should be reported as suspicious lesions.
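    To make the "automatic quantitative analysis" of hotspots concrete: PET uptake quantification is conventionally reported as standardized uptake values (SUV). The following numpy sketch computes body-weight SUV statistics and volume for a marked hotspot mask; it illustrates the standard formulas only and is not aPROMISE X's implementation.

```python
import numpy as np

def hotspot_metrics(pet_bqml: np.ndarray, mask: np.ndarray,
                    injected_dose_bq: float, weight_kg: float,
                    voxel_volume_ml: float) -> dict:
    """Standard body-weight SUV and volume for one marked hotspot.

    SUV = activity concentration [Bq/mL] / (injected dose [Bq] / body weight [g]),
    assuming ~1 g/mL tissue density. Decay correction of the dose to scan
    time is assumed to have been applied already.
    """
    suv = pet_bqml * (weight_kg * 1000.0) / injected_dose_bq
    roi = suv[mask]
    return {
        "SUVmax": float(roi.max()),
        "SUVmean": float(roi.mean()),
        "volume_ml": float(mask.sum() * voxel_volume_ml),
    }
```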

    To create a report the signing user is required to confirm quality control, and electronically sign the report preview. Signed reports are saved in the device and can be exported as a JPG or DICOM file.
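    For the DICOM export path mentioned above, a signed report image is typically wrapped as a DICOM Secondary Capture object. A generic pydicom sketch of that wrapping is shown below; it is not the device's export routine, and the minimal attribute set is illustrative.

```python
import datetime

import numpy as np
from pydicom.dataset import Dataset, FileDataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

SC_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture Image Storage

def report_to_dicom_sc(pixels: np.ndarray, out_path: str) -> None:
    """Wrap an 8-bit grayscale report image as a DICOM Secondary Capture."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = SC_SOP_CLASS
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = FileDataset(out_path, Dataset(), file_meta=meta, preamble=b"\x00" * 128)
    ds.SOPClassUID = SC_SOP_CLASS
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.Modality = "OT"  # "other" -- typical for secondary capture
    ds.ContentDate = datetime.date.today().strftime("%Y%m%d")
    ds.SeriesInstanceUID = generate_uid()
    ds.StudyInstanceUID = generate_uid()

    ds.Rows, ds.Columns = pixels.shape
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.BitsAllocated = 8
    ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = pixels.astype(np.uint8).tobytes()

    ds.is_little_endian = True   # match the explicit little-endian syntax
    ds.is_implicit_VR = False    # (needed for older pydicom versions)
    ds.save_as(out_path)
```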

    AI/ML Overview

    The provided text, a 510(k) summary for the aPROMISE X device, details its comparison to a predicate device and includes information about performance testing. However, it does not contain specific acceptance criteria, reported device performance metrics (e.g., sensitivity, specificity, AUC), or a detailed description of a clinical study that proves the device meets specific acceptance criteria related to its diagnostic performance.

    The document states that the Analytical Performance in Clinical Study demonstrated that automated quantification of tracer uptake with aPROMISE X is "more reproducible, and efficient than those obtained manually" and that the device "enables the reader to achieve a higher efficiency and quantitative consistency, while maintaining the diagnostic performance of the physicians." This implies some form of performance evaluation, but the specific metrics and methodology needed to answer the detailed questions below are not provided in this 510(k) summary.

    Therefore, I cannot populate the table or answer most of your questions with the information available in the provided text.

    Here is what I can infer or state based on the document:

    1. A table of acceptance criteria and the reported device performance:

    Acceptance Criteria | Reported Device Performance
    Specific diagnostic performance metrics (e.g., sensitivity, specificity, AUC for lesion detection) | Not provided in this document. The document states that the Analytical Performance in Clinical Study demonstrated "maintaining the diagnostic performance of the physicians," but no specific metrics or associated acceptance criteria are listed.
    Accuracy, linearity, and limit of detection of SUV and volume quantification (from Digital Phantom Validation Study) | "All SUV and volume quantification tests of aPROMISE X met their predetermined acceptance criteria." (Specific numerical criteria and results are not provided.)
    Equivalent performance of aPROMISE X vs. predicate for standard functions in marking and quantitative assessments of user-defined ROI | "This study demonstrated the equivalent performance of aPROMISE X as compared to the previous version, predicate aPROMISE v1.2.1 (K211655) for standard functions in marking and quantitative assessments of user defined region of interest in PSMA PET/CT." (Specific metrics are not provided.)
    Reproducibility and efficiency of automated quantification vs. manual quantification | "aPROMISE X enables the automated quantification of tracer uptake are more reproducible, and efficient than those obtained manually." (No numerical metrics provided.)
    Reader efficiency and quantitative consistency with aPROMISE X vs. without | "aPROMISE X enables the reader to achieve a higher efficiency and quantitative consistency..." (No numerical metrics provided.)

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not specified for the "Analytical Performance in Clinical Study."
    • Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified. The study compared performance to "clinicians" and "physicians" but did not detail their role in ground truth establishment or their qualifications.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • A study comparing performance was done ("compared the performance of aPROMISE X to that of clinicians"), and it suggested improvement in "efficiency and quantitative consistency" while "maintaining diagnostic performance." However, it's not explicitly stated as a formal MRMC study, and no specific effect sizes or quantitative improvements in diagnostic performance (e.g., AUC increase, sensitivity/specificity changes) with AI assistance versus without are provided.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • The phrasing "aPROMISE X enables the automated quantification of tracer uptake" suggests the algorithm performs quantification. However, the study description focuses on comparing "aPROMISE X" (which is a system used by "healthcare professionals") "to that of clinicians," implying it might be evaluating the system as a whole rather than a purely standalone algorithm's diagnostic capability. The section primarily highlights where the AI assists the user, such as in hotspot detection for user selection: "Pre-definition of Hotspot: Algorithm, using a machine-learning model to detect high local intensity regions of interest in the PET series."
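    As a toy illustration of what a pre-definition step like the one quoted above might involve, the sketch below finds connected components of high-intensity PET voxels. The cleared device uses a machine-learning model whose details are not in the summary; the threshold-based approach here is purely a conceptual stand-in.

```python
import numpy as np
from scipy import ndimage

def candidate_hotspots(suv: np.ndarray, threshold: float = 3.0,
                       min_voxels: int = 5) -> list[np.ndarray]:
    """Naive stand-in for hotspot pre-definition: connected components
    of voxels above an SUV threshold. The cleared device uses a
    machine-learning model; this is only a conceptual placeholder."""
    labeled, n = ndimage.label(suv > threshold)
    return [np.argwhere(labeled == i) for i in range(1, n + 1)
            if (labeled == i).sum() >= min_voxels]
```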

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Not specified. The study is described as comparing to "clinicians" and "physicians," which implies clinical judgment, but the gold standard (e.g., biopsy results, long-term follow-up) for determining the true presence or absence of a clinically significant lesion is not mentioned.

    8. The sample size for the training set:

    • Not specified. The document mentions a "machine-learning model" for hotspot detection, implying a training set was used, but its size is not disclosed.

    9. How the ground truth for the training set was established:

    • Not specified.

    K Number: K211655
    Device Name: aPROMISE
    Date Cleared: 2021-07-27 (60 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Applicant Name (Manufacturer): EXINI Diagnostics AB

    Intended Use

    aPROMISE is intended to be used by healthcare professionals and researchers for acceptance, transfer, storage, image display, manipulation, quantification and reporting of digital medical images. The system is intended to be used with images acquired using nuclear medicine (NM) imaging using PSMA PET/CT. The device provides general Picture Archiving and Communications System (PACS) tools as well as a clinical application for oncology including marking of regions of interest and quantitative analysis.

    Device Description

    aPROMISE (automated PROstate specific Membrane Antigen Imaging SEgmentation) consists of a cloud-based software platform with a web interface where users can upload body scans of PSMA PET/CT image data in the form of DICOM files, review patient studies and share study assessments within a team. The software complies with the Digital Imaging and Communications in Medicine (DICOM) 3 standard.

    Multiple scans can be uploaded for each patient and the system provides a separate review for each study. The review page displays studies in a 4-panel view showing PET, CT, PET/CT fusion and maximum intensity projection (MIP) simultaneously and includes the option to display each view separately. The device is used to review entire patient studies, using image visualization and analysis tools for users to identify and mark regions of interest (ROIs). While reviewing image data, users can mark ROIs by selecting from pre-defined hotspots that are highlighted when hovering with the mouse pointer over the segmented region, or by manual drawing, i.e., selecting individual voxels in the image slices to include as hotspots. Selected or drawn hotspots are subject to automatic quantitative analysis. The user can review the results of this quantitative analysis and determine which hotspots should be reported as suspicious lesions.
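    The maximum intensity projection (MIP) panel mentioned above has a simple definition: each projection pixel is the brightest voxel along one axis of the volume. A minimal numpy sketch (the (z, y, x) axis order is an assumption):

```python
import numpy as np

def coronal_mip(pet_volume: np.ndarray) -> np.ndarray:
    """Maximum intensity projection: brightest voxel along the
    anterior-posterior axis (axis order (z, y, x) assumed)."""
    return pet_volume.max(axis=1)
```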

    To create a report the signing user is required to confirm quality control, and electronically sign the report preview. Signed reports are saved in the device and can be exported as a JPG or DICOM file.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Important Note: The provided text is an FDA 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting a standalone clinical study report with detailed acceptance criteria and performance data in the typical scientific-paper format. Therefore, some information, especially very specific numeric acceptance criteria and detailed study results, is inferred or described broadly as "met predetermined acceptance criteria" because the exact values are not explicitly stated in this regulatory document.


    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA 510(k) summary does not present specific numeric acceptance criteria (e.g., minimum accuracy percentage, specific sensitivity/specificity thresholds) or detailed reported device performance in a precise tabular format. Instead, it describes functional and analytical performance goals and states that these were met.

    For the purpose of this request, I've constructed a table based on the descriptions provided in the "Bench Testing" section:

    Acceptance Criteria Category/Description | Reported Device Performance
    Digital Phantom Validation Study
    Accuracy, Linearity, and Limit of Detection of SUV (Standardized Uptake Value) and Volume quantification against known values of a digital reference object (NEMA phantom). | All SUV and volume quantification tests of aPROMISE met their predetermined acceptance criteria. (Specific numeric thresholds for accuracy, linearity, and LOD are not provided in the document.)
    Comparison to Predicate (KSWVWR - K160334)
    Equivalent performance for standard functions in marking and quantitative assessments of user-defined regions of interest in PSMA PET/CT. | Demonstrated equivalent performance when compared to the predicate KSWVWR (K160334). (Specific metrics of equivalence are not provided in the document.)
    Analytical Performance in Clinical Study (Reproducibility & Consistency)
    Enables automated quantification of tracer uptake in reference organs that is more reproducible and consistent than manual quantification by clinicians. | Demonstrated that aPROMISE enables automated quantification of tracer uptake in reference organs that is more reproducible and consistent than manual quantification. (Specific metrics for reproducibility and consistency are not provided in the document.)
    Analytical Performance in Clinical Study (Sensitivity for Pre-selection)
    High sensitivity in pre-selection of regions of interest determined to be suspicious for metastatic disease. | Demonstrated that aPROMISE has high sensitivity in pre-selection of regions of interest determined to be suspicious for metastatic disease. (The specific sensitivity metric is not provided. Note that the subject device shows "all detected high intensity regions (no preselection as of the predicate)," unlike the predicate's ANN pre-selection method, while still achieving high sensitivity in detecting these regions for user review.)
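    For context on the Digital Phantom Validation Study row: accuracy and linearity against a digital reference object are typically assessed by comparing measured SUV/volume values with the phantom's known values, e.g., via linear regression. The sketch below illustrates that pattern; the thresholds are placeholders, since the actual predetermined acceptance criteria are not disclosed in the summary.

```python
import numpy as np
from scipy import stats

def check_linearity(known: np.ndarray, measured: np.ndarray,
                    max_bias_pct: float = 5.0, min_r2: float = 0.99) -> bool:
    """Illustrative accuracy/linearity check against phantom ground truth.
    The thresholds here are placeholders, not EXINI's acceptance criteria."""
    slope, intercept, r, _, _ = stats.linregress(known, measured)
    bias_pct = 100.0 * np.abs(measured - known).max() / known.max()
    return bias_pct <= max_bias_pct and r ** 2 >= min_r2
```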

    2. Sample Size Used for the Test Set and Data Provenance

    The document refers to a "Digital Phantom Validation Study" and an "Analytical Performance in Clinical Study."

    • Sample Size for Test Set:
      • Digital Phantom Validation Study: "digital reference object (NEMA phantom)" - This implies a single, well-defined digital phantom, not a large sample size of patient data.
      • Analytical Performance in Clinical Study: The specific sample size (number of patients or scans) for this "clinical study" is not specified in the provided document.
    • Data Provenance:
      • Digital Phantom Validation Study: A digital NEMA phantom is a synthetic dataset. The document does not specify its origin.
      • Analytical Performance in Clinical Study: The document refers to it as a "Clinical Study" but does not specify the country of origin of the data, nor whether it was retrospective or prospective. Given the context of a 510(k) submission and the lack of detailed clinical trial information, it is likely that this was a retrospective analysis of existing clinical data, but this is not explicitly stated.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: The document states that the "Analytical Performance in Clinical Study" compared the performance of aPROMISE "to that of clinicians." However, the exact number of clinicians (experts) involved in establishing the ground truth or serving as comparators is not specified.
    • Qualifications of Experts: The document refers to them as "clinicians." No further specific qualifications (e.g., number of years of experience, subspecialty training like nuclear medicine physician or radiologist) are provided.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) used for establishing ground truth from the "clinicians." It simply states that the device's performance was compared to "that of clinicians" for manual quantification and for identifying suspicious regions, implying that the clinicians' outputs themselves served as the comparative reference for assessing the device's analytical performance.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? The information provided does not indicate that a formal MRMC comparative effectiveness study was performed in which human readers' performance with and without AI assistance was measured. The "Analytical Performance in Clinical Study" primarily assessed the device's standalone analytical capabilities (reproducibility/consistency of quantification and sensitivity of pre-selection) compared to manual clinician performance, not an AI-assisted human workflow versus an unassisted one.
    • Effect size of improvement: Since a formal MRMC study as described (human readers with and without AI) was not detailed, there is no information provided on the effect size of how much human readers improve with AI vs. without AI assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, aspects of standalone performance were analyzed.

    • The Digital Phantom Validation Study explicitly assessed the algorithm's accuracy, linearity, and limit of detection for SUV and volume quantification against known phantom values, which is a standalone assessment.
    • The Analytical Performance in Clinical Study also evaluated the device's automated quantification of tracer uptake and its pre-selection of regions of interest, which are functions performed by the algorithm before user interaction. It specifically states "aPROMISE enables the automated quantification..." and "aPROMISE has high sensitivity in pre-selection of regions of interest..." This indicates standalone algorithmic functions were evaluated.

    7. Type of Ground Truth Used

    • For Digital Phantom Validation: The ground truth was known values from a digital reference object (NEMA phantom). This is a synthetic, precisely defined ground truth.
    • For Analytical Performance in Clinical Study: The ground truth or reference standard for comparison appears to be the manual quantification and determination of suspicious regions by "clinicians." This implies "expert consensus" or "expert readings" served as the benchmark, though the exact process (e.g., single expert, multiple experts with consensus, adjudication) is not detailed. There is no mention of pathology or long-term outcomes data used as ground truth.

    8. Sample Size for the Training Set

    The document does not provide any information regarding the sample size of the training set used for developing the aPROMISE algorithms (e.g., for organ segmentation or hotspot detection). The 510(k) summary focuses on the validation of the finished device.


    9. How the Ground Truth for the Training Set Was Established

    Since the document does not specify the training set used, it does not describe how the ground truth for any potential training set was established.


    K Number: K191262
    Device Name: aBSI
    Date Cleared: 2019-08-05 (87 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Applicant Name (Manufacturer): EXINI Diagnostics AB

    Intended Use

    aBSI is intended to be used by trained healthcare professionals and researchers for acceptance, transfer, storage, image display, manipulation, quantification and reporting of digital medical images acquired using nuclear medicine (NM) imaging. The device provides general Picture Archiving and Communications System (PACS) tools as well as a clinical application for oncology including lesion marking and quantitative analysis.

    Device Description

    The aBSI is a software-only medical device that provides a fully quantitative assessment of a patient's skeletal disease on a bone scan, as the fraction of the total skeleton weight. The user of this product is typically a health-care professional using the software to view the patient images and analysis results.
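    For background, the Bone Scan Index has a published definition: each lesion's area is expressed as a fraction of the skeletal region containing it, weighted by that region's share of total skeletal mass from reference-anatomy tables, and the contributions are summed. A schematic Python version with hypothetical inputs:

```python
def bone_scan_index(lesions: list[dict], region_mass_fraction: dict) -> float:
    """Schematic BSI: tumour burden as a percentage of total skeletal mass.

    Each lesion contributes (lesion area / area of its skeletal region)
    scaled by that region's fraction of total skeletal mass (taken from
    reference-anatomy tables in the published method)."""
    bsi = 0.0
    for lesion in lesions:
        frac_of_region = lesion["area_px"] / lesion["region_area_px"]
        bsi += frac_of_region * region_mass_fraction[lesion["region"]]
    return 100.0 * bsi  # percent of total skeleton

# Hypothetical example: one hotspot covering 2% of the pelvis,
# with the pelvis taken as ~10% of skeletal mass -> BSI of 0.2.
print(bone_scan_index(
    [{"area_px": 40, "region_area_px": 2000, "region": "pelvis"}],
    {"pelvis": 0.10},
))
```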

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the aBSI device, based on the provided text:

    Acceptance Criteria and Device Performance

    The document doesn't explicitly define a table of "acceptance criteria" with specific performance thresholds. Instead, it refers to the device's performance through verification testing and clinical studies, demonstrating its ability to quantify the Bone Scan Index (BSI) and its association with clinical outcomes.

    However, based on the performance data presented, here's an inferred table of performance aspects and reported findings:

    Performance Aspect | Reported Device Performance
    BSI Calculation Linearity & Accuracy | Demonstrated during analytical validation studies. (Specific metrics not provided in the summary, but implied to be acceptable.)
    BSI Calculation Precision | Demonstrated during analytical validation studies. (Specific metrics not provided in the summary, but implied to be acceptable.)
    Reproducibility (different cameras) | Demonstrated during analytical validation studies. (Specific metrics not provided in the summary, but implied to be acceptable.)
    Reproducibility (multiple images) | Demonstrated during analytical validation studies. (Specific metrics not provided in the summary, but implied to be acceptable.)
    Reproducibility (repeated bone scans) | Demonstrated during patient study. (Specific metrics not provided in the summary, but implied to be acceptable.)
    Hotspot Detection Algorithm Improvement | Improved algorithm relative to the predicate. Clinical performance testing demonstrated "substantially equivalent performance" to the predicate, implying improvement while maintaining equivalence.
    Association with Overall Survival (Study I) | Automated BSI (median=1.07; range: 0-32.60) was significantly associated with OS (HR:1.20; 95%CI:1.14-1.26; P
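    The hazard ratio quoted in the last row (HR 1.20; 95% CI 1.14-1.26) is the kind of estimate a Cox proportional hazards model of overall survival produces. A generic sketch with the lifelines library follows; the column names and data are hypothetical:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: follow-up time in months, death indicator, automated BSI.
df = pd.DataFrame({
    "months": [12.0, 30.5, 8.2, 44.1, 19.7, 60.0],
    "death":  [1, 0, 1, 0, 1, 0],
    "bsi":    [4.3, 0.8, 9.1, 0.0, 2.6, 0.4],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()  # exp(coef) for "bsi" is the per-unit hazard ratio
```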

    K Number: K122205
    Device Name: EXINI
    Date Cleared: 2012-08-15 (21 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Applicant Name (Manufacturer): EXINI DIAGNOSTICS AB

    Intended Use

    EXINI is intended to be used by trained healthcare professionals and researchers for acceptance, transfer, storage, image display, manipulation, quantification and reporting of digital medical images. The system is intended to be used with images acquired using nuclear imaging (NM) and computed tomography (CT). The software provides general Picture Archiving and Communications System (PACS) tools and a clinical application for oncology including lesion marking and analysis.

    Device Description

    The EXINI software provides trained health-care professionals and researchers with a software tool set for acceptance, transfer, storage, image display, manipulation and quantification of digital medical images. The software is intended to be used with images acquired using nuclear imaging (NM) and computed tomography (CT) modalities. The software provides general Picture Archiving and Communications System (PACS) tools and a clinical application for markup and analysis of bone lesions in bone scans. The software complies with the Digital Imaging and Communications in Medicine (DICOM 3) standard. The software runs on a standard PC with Microsoft Windows operating system.

    AI/ML Overview

    The provided 510(k) summary for the EXINI device states that clinical performance testing was NOT conducted to support this submission. Therefore, the document does not contain information about a study demonstrating that the device meets acceptance criteria based on clinical performance.

    The non-clinical performance data section states that a "verification and validation (V&V) plan including definition of test methods and acceptance criteria was designed to ensure equivalent performance with the predicate device." However, the specific acceptance criteria and detailed performance metrics for the EXINI device's image processing and analysis capabilities are not explicitly provided in the excerpt. Instead, it concludes that the non-clinical performance data shows "equal performance and raises no new questions of safety and effectiveness in comparison to the predicate device."

    Without specific acceptance criteria and reported device performance from a clinical study, it's not possible to populate the table or answer most of the requested questions.

    Here's a breakdown of what can be extracted and what is explicitly stated as not available:

    1. A table of acceptance criteria and the reported device performance

    • Not available in the provided text for clinical performance. The document states "Clinical testing was not conducted to support this 510(k) submission."
    • For non-clinical performance, the document vaguely mentions "test methods and acceptance criteria was designed to ensure equivalent performance with the predicate device," but does not provide the specific criteria or reported performance metrics.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Not applicable/Not available. Since no clinical study was conducted, there is no test set for clinical performance.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not applicable/Not available. Since no clinical study was conducted, there's no ground truth established for a test set.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not applicable/Not available. Since no clinical study was conducted, there's no adjudication method for a test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No, an MRMC comparative effectiveness study was explicitly NOT done. The document states "Clinical testing was not conducted to support this 510(k) submission."

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • No, a standalone clinical performance study was explicitly NOT done. The document states "Clinical testing was not conducted to support this 510(k) submission." The device is described as "semi-automatic" and requires a "manual step (hotspot verification step) where the user reviews and edits the selection of hotspots." This indicates it's designed for human-in-the-loop operation, and no standalone clinical performance was assessed.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Not applicable/Not available. Since no clinical study was conducted, no type of ground truth was established for clinical performance.

    8. The sample size for the training set

    • Not available. The document does not provide information about a training set since it focuses on the predicate device equivalence based on technological characteristics and non-clinical V&V. AI/ML models typically require a training set, but this document does not detail its development beyond stating it uses "image processing techniques for segmentation of skeletal regions, normalization and hotspot contouring/segmentation."

    9. How the ground truth for the training set was established

    • Not available. The document does not provide information about a training set or how its ground truth might have been established.

    Summary of available information regarding compliance:

    The EXINI device passed its non-clinical performance testing. The "verification and validation (V&V) plan including definition of test methods and acceptance criteria was designed to ensure equivalent performance with the predicate device" (IBIS Explorer and Markup Software). The V&V test results indicated that EXINI "meets its intended use, user needs and software requirements."

    The conclusion is that the non-clinical performance data showed "equal performance and raises no new questions of safety and effectiveness in comparison to the predicate device." This means the device was cleared based on demonstrating substantial equivalence to a predicate device through non-clinical testing and shared technological characteristics, not through a clinical study demonstrating specific performance metrics against an established ground truth.
