Search Results

Found 4 results

510(k) Data Aggregation

    K Number
    K163250
    Date Cleared
    2017-05-11

    (174 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices:

    K160315, K150665, K023785, K111336

    Intended Use

    The Longitudinal Brain Imaging (LoBI) is a post-processing application to be used for viewing and evaluating neurological images provided by a magnetic resonance diagnostic device.

    The LoBI application is intended for viewing, manipulation and comparison of medical images from one and/or multiple time-points. The LoBI application enables visualization of information that would otherwise have to be compared visually in a disjointed way. The LoBI application provides analysis tools to help the user assess and document changes between diagnostic and follow-up examinations. The LoBI application is designed to support the workflow by helping the user to confirm the absence or presence of lesions, including evaluation, follow-up and documentation of any such lesions.

    The physician retains the ultimate responsibility for making the final diagnosis and treatment decision.

    Device Description

    Philips Medical Systems' Longitudinal Brain Imaging application (LoBI) is a post processing software application intended to assist in the evaluation of serial brain imaging based on MR data.

    The LoBI application allows the user to view images, perform lesion segmentation with segmentation-editing tools, obtain volumetric quantification of segmented volumes, and make quantitative comparisons between time points. The LoBI application provides automatic registration between studies from different time points for longitudinal comparison.

    The LoBI application provides a supportive tool for visualization of subtle differences in the brain of the same individual across time, which can be used by clinicians in the assessment of disease progression.

    The physician retains the ultimate responsibility for making the final diagnosis based on image visualization as well as any segmentation and measurement results obtained from the application.

    The LoBI application is intended to be used for the adult population only.

    Key Features
    LoBI application has the following key features:

      1. Longitudinal comparison between brain images in multiple studies
      2. Support for multi-slice MR sequences (2D and 3D), allowing the user to perform basic viewing operations such as scroll, pan, zoom, windowing and annotation
      3. Identification of pre-defined data types (pre-sets) and user-created hanging layouts
      4. Automatic registration between studies (same patient, different time-points)
      5. Single mode: allows reviewing each of the launched studies, showing multiple sequences of the same study, using the whole reading space
      6. Tissue segmentation and editing tools allowing volumetric measurement of different lesion types
      7. Lesion management tool allowing matching between lesions in different studies to facilitate the assessment of differences over time
      8. CoBI (Comparative Brain Imaging) feature: a supportive tool for visualization of subtle differences in lesions of the same individual across time for similar sequences. The CoBI feature provides a mathematical subtraction of scans yielding, after bias-field correction and intensity scaling, a color-coded image of the differences in intensity between two registered scans.
      9. Results displayed in tabular and graphical formats
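
    The summary describes the CoBI computation only at this level (registration, bias-field correction, intensity scaling, subtraction). As a rough illustration of the subtraction step alone, with a global intensity-offset correction standing in for the unspecified bias-field correction and scaling, a sketch might look like:

```python
import numpy as np

def cobi_difference(baseline, followup):
    """Signed intensity difference between two registered scans.

    Minimal stand-in for the described CoBI pipeline: a global
    intensity-offset correction (in place of the summary's bias-field
    correction and intensity scaling), then voxel-wise subtraction.
    The result can then be mapped to a color scale.
    """
    a = baseline.astype(np.float64)
    b = followup.astype(np.float64)
    b_corrected = b - (b.mean() - a.mean())  # match global intensity level
    return b_corrected - a  # positive where the follow-up scan is brighter

# Toy 4x4 "scans": the follow-up gains one bright voxel.
scan1 = np.arange(16, dtype=float).reshape(4, 4)
scan2 = scan1.copy()
scan2[1, 1] += 10.0  # simulated new lesion signal
diff = cobi_difference(scan1, scan2)
```

The changed voxel stands out strongly against a near-zero background, which is the behavior the color-coded difference image relies on.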
    AI/ML Overview

    Here's a summary of the acceptance criteria and study information for the Philips Longitudinal Brain Imaging (LoBI) application, based on the provided 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document focuses on demonstrating substantial equivalence to predicate devices and adherence to regulatory standards rather than explicit quantitative acceptance criteria or detailed device performance metrics in a table format. The primary "acceptance criteria" are implied by compliance with:

    • International and FDA-recognized consensus standards: ISO 14971, IEC 62304, IEC 62366-1, DICOM PS 3.1-3.18.
    • FDA guidance document: "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices."
    • Internal Philips verification and validation processes: Ensuring the device "meets the acceptance criteria and is adequate for its intended use and specifications."

    Since specific numerical performance criteria (e.g., accuracy, sensitivity, specificity for particular lesion types) and corresponding reported performance are not provided in this 510(k) summary, the table below reflects what is broadly stated.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Compliance with ISO 14971 (Risk Management) | Demonstrated |
    | Compliance with IEC 62304 (Software Life Cycle Processes) | Demonstrated |
    | Compliance with IEC 62366-1 (Usability Engineering) | Demonstrated |
    | Compliance with FDA Guidance for Software in Medical Devices | Demonstrated |
    | Compliance with DICOM PS 3.1-3.18 (DICOM Standard) | Demonstrated |
    | Fulfillment of intended functionality (CoBI feature, registration, segmentation, measurement, etc.) | Verified through a "Full functionality test" (covering detailed requirements per the Product Requirement Specification) and "Validation" (using real recorded clinical data cases to simulate actual use and ensure customer needs / intended functionality are fulfilled). Performance demonstrated to meet defined functionality requirements and performance claims. |
    | CoBI feature functions correctly and meets specifications | Proven through verification activities |
    | Meets customer needs and fulfills intended functionality (validated with real clinical data) | Proven through validation activities |

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: Not explicitly stated as a number of cases or images. The validation activities used "real recorded clinical data cases." The quantity of these cases is not specified.
    • Data Provenance: The data used for validation consisted of "real recorded clinical data cases." No specific country of origin is mentioned. It is indicated as retrospective, as they are "recorded" data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    • This information is not provided in the document. The general statement is that "The physician retains the ultimate responsibility for making the final diagnosis," suggesting human expert involvement in clinical practice, but not explicitly defining how ground truth for the test set was established or by whom.

    4. Adjudication Method for the Test Set:

    • This information is not provided in the document.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:

    • No MRMC comparative effectiveness study was done or reported. The document states explicitly: "The subject of this premarket submission, Longitudinal Brain Imaging (LoBI) application did not require clinical studies to support equivalence." The testing focused on verification and validation of the software's functionality and compliance with standards.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • The document describes the LoBI application as a "post-processing software application intended to assist in the evaluation of serial brain imaging" and emphasizes that "The physician retains the ultimate responsibility for making the final diagnosis."
    • While the software performs automated functions like registration, segmentation, and quantitative comparison, the validation process using "real recorded clinical data cases" seems to focus on the software's ability to provide accurate tools and information that a user would interpret.
    • The description of "Full functionality test" and "RMF testing" could involve standalone algorithmic performance evaluation against predefined specifications. However, an explicit "standalone" performance study as a separate regulatory study with defined metrics (e.g., algorithm-only sensitivity/specificity against ground truth) is not detailed in this summary. The focus is on the tool's supportive role for the user.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.):

    • The type of ground truth used for the validation data is not explicitly specified. It refers to "real recorded clinical data cases," implying that the medical imaging data came with existing clinical interpretations or diagnoses, which would have implicitly served as a form of reference or "ground truth" for evaluating the software's utility in "confirming the absence or presence of lesions, including evaluation, quantification, follow-up and documentation." However, the method of establishing this ground truth (e.g., expert consensus, pathology) is not detailed.

    8. The Sample Size for the Training Set:

    • The document does not provide information regarding a distinct training set sample size or how the LoBI application was developed using machine learning or AI. The product description focuses on its functionality as a post-processing application with features like automatic registration and tissue segmentation, which could be rule-based or machine learning-driven, but this is not specified, nor is training data mentioned.

    9. How the Ground Truth for the Training Set Was Established:

    • Since a training set is not mentioned, the method for establishing its ground truth is also not provided.

    K Number
    K162484
    Date Cleared
    2017-02-23

    (169 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices:

    K153444, K060937, K160315, K111336

    Intended Use

    The Lung Nodule Assessment and Comparison Option is intended for use as a diagnostic patient-imaging tool. It is intended for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include diameter, volume and volume over time. The system automatically performs the measurements, allowing lung nodules and measurements to be displayed.

    Device Description

    The Lung Nodule Assessment and Comparison Option application is intended for use as a diagnostic patient-imaging tool. It is intended for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies. The system automatically performs the measurements, allowing lung nodules and measurements to be displayed. The user interface and automated tools help to determine growth patterns and compose comparative reviews. The Lung Nodule Assessment and Comparison Option application requires the user to identify a nodule and to determine the type of nodule in order to use the appropriate characterization tool. Lung Nodule Assessment and Comparison Option may be utilized in both diagnostic and screening evaluations supporting Low Dose CT Lung Cancer Screening*.
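
    The summary says the option automatically computes volume over time and doubling time but gives no formulas; the textbook exponential-growth doubling-time formula, assumed here purely for illustration, is VDT = Δt · ln 2 / ln(V2/V1):

```python
import math

def volume_doubling_time(v1, v2, days_between):
    """Volume doubling time in days, assuming exponential growth.

    Standard formula VDT = dt * ln(2) / ln(v2 / v1).  The 510(k)
    summary states that doubling time is calculated automatically but
    does not give the formula, so this is an assumed textbook version.
    """
    if v1 <= 0 or v2 <= v1:
        raise ValueError("doubling time is defined only for growth")
    return days_between * math.log(2) / math.log(v2 / v1)

def percent_change(v1, v2):
    """Percent change in a measurement between two time points."""
    return (v2 - v1) / v1 * 100.0

# A nodule growing from 100 mm^3 to 200 mm^3 in 90 days doubles in 90 days.
vdt = volume_doubling_time(100.0, 200.0, 90.0)
pct = percent_change(100.0, 200.0)
```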

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Device Name: Lung Nodule Assessment and Comparison Option (LNA)

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary does not explicitly list quantified acceptance criteria with numerical targets. Instead, it indicates that the device was tested against its defined functional requirements and performance claims, and that it "meets the acceptance criteria and is adequate for its intended use and specifications." The "acceptance criteria" are implied by the verification and validation tests performed to ensure the device's design meets user needs and intended use, and that its technological characteristics claims are met.

    However, based on the description of the device's capabilities, we can infer some key performance areas that would have been subject to acceptance criteria:

    | Acceptance Criteria (Inferred from features and V&V activities) | Reported Device Performance |
    | --- | --- |
    | Accuracy of lung and lobe segmentation | Validation activities assure that the lung and lobe segmentation are adequate from an overall product perspective. |
    | Accuracy of nodule segmentation (single-click and manual editing) | Verified and validated as part of the overall design and functionality. |
    | Accuracy of nodule measurements (diameter, volume, mean HU) | Automatic software calculation of these measurements is a key feature, and the device was tested to meet its defined functionality requirements and performance claims. Manual editing with automatic recalculation is also validated. |
    | Functionality and accuracy of comparison and matching for temporal studies | Validation activities assure that the comparison, as well as the nodule matching and propagation functionality, are adequate from an overall product perspective. Automatic calculations of doubling time and percent/absolute changes in measurements were tested. |
    | Functionality of Lung-RADS™ reporting | Validation activities assure the prefill functionality for the Lung-RADS score is adequate. |
    | Accuracy and functionality of the risk calculator tool | The risk prediction functionality was validated. It is based on the model from the McWilliams et al. (2013) study, which showed excellent discrimination and calibration (AUC > 0.90). |
    | Usability of the software | A usability study was conducted according to standards. |
    | Compliance with relevant standards and guidance documents | Complies with ISO 14971, IEC 62304, IEC 62366-1, and FDA guidance for software in medical devices. |
    | Overall functionality and performance of the clinical workflow | Each test case was evaluated for the complete clinical workflow in a validation study using real recorded clinical data. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document does not specify a numerical sample size for the internal validation studies conducted by Philips for the LNA application. It states that the LNA application was validated "using real recorded clinical data cases in order to simulate the actual use of the software."
    • Data Provenance for Philips' Internal Tests: The text implicitly suggests the data was retrospective, as it refers to "real recorded clinical data cases." The country of origin for these internal test cases is not specified.
    • Data Provenance for the Risk Calculator (McWilliams et al. study):
      • Development Data Set: Participants from the Pan-Canadian Early Detection of Lung Cancer Study (PanCan).
      • Validation Data Set: Participants from chemoprevention trials at the British Columbia Cancer Agency (BCCA), sponsored by the U.S. National Cancer Institute.
      • This indicates the data was from Canada (PanCan, BCCA in British Columbia) and supported by the U.S. National Cancer Institute. Both were prospective population-based studies.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number of experts or their qualifications for establishing ground truth specifically for Philips' internal V&V test set. It mentions the LNA application was validated to address "user needs" and simulate "actual use of the software," which implies expert input, but no details are provided.

    For the Risk Calculator, the ground truth for malignancy in the McWilliams et al. study was established through tracking the final outcomes of all detected nodules. This likely involved pathology reports and clinical follow-up, adjudicated by clinical experts, but the exact number and qualifications of these experts are not detailed in this summary.

    4. Adjudication Method for the Test Set

    The document does not describe a specific adjudication method (e.g., 2+1, 3+1) for Philips' internal V&V test set. The validation process involved evaluating each test case for the complete clinical workflow and ensuring the design meets user needs, which might involve expert review, but the formal adjudication protocol is not elaborated upon in this summary.

    For the Risk Calculator's underlying study (McWilliams et al.), the "final outcomes of all nodules" suggests a definitive ground truth based on pathology or long-term clinical stability/progression, but the adjudication method for these biological outcomes is not specified within this document.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    The document does not report an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The studies described focus on the standalone performance and validation of the LNA application's features and the underlying model for the risk calculator.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, standalone performance was evaluated for various features of the LNA application:

    • The automatic segmentation capabilities (lungs, lobes, nodules) were validated to be "adequate."
    • The automatic measurement calculations (diameters, volume, mean HU) were tested to comply with "defined functionality requirements and performance claims."
    • The comparison and matching functionality and "Prefill functionality for the Lung RADS score and the risk prediction" were assured to be "adequate."
    • The Risk Calculator tool itself (based on McWilliams et al.) demonstrated standalone predictive performance with "excellent discrimination and calibration, with areas under the receiver-operating-characteristic curve of more than 0.90." This indicates strong standalone performance of the algorithm in predicting malignancy.
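
    For context on the quoted figure: the area under the ROC curve equals the Mann–Whitney probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small sketch using made-up risk scores (not PanCan/BCCA data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of positive/negative pairs in which the positive
    case scores higher (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Made-up malignancy risk scores, for illustration only.
malignant = [0.9, 0.8, 0.7, 0.65]
benign = [0.75, 0.4, 0.3, 0.2, 0.1]
value = auc(malignant, benign)  # 18 of 20 pairs correctly ordered
```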

    7. The Type of Ground Truth Used

    • For Philips' Internal V&V: The ground truth appears to be based on "real recorded clinical data cases," implying clinical diagnoses, measurements, and potentially pathology results where applicable, as evaluated against the software's specified functionality and user needs. The specific hierarchy or gold standard used for each feature's ground truth (e.g., expert consensus for segmentation, pathology for nodule type) is not explicitly detailed.
    • For the Risk Calculator (McWilliams et al. study): The ground truth for malignancy was established by tracking "the final outcomes of all nodules," which would primarily be pathology results for cancerous nodules and long-term clinical outcome data (stability or benign diagnosis) for non-cancerous ones.

    8. The Sample Size for the Training Set

    The document does not specify the sample size for the training set used for the LNA application's algorithms, including the segmentation, measurement, and comparison features.

    For the Risk Calculator's underlying model (McWilliams et al.):

    • The "development data set" (training set) included participants from the Pan-Canadian Early Detection of Lung Cancer Study (PanCan). The exact number of participants or nodules is not provided in this summary but the PanCan study is a large, population-based study.

    9. How the Ground Truth for the Training Set Was Established

    For the Risk Calculator's underlying model (McWilliams et al.):

    • The ground truth for the development data set (PanCan study) was established by tracking "the final outcomes of all nodules of any size that were detected on baseline low-dose CT scans." This indicates that the ground truth for malignancy was based on definitive pathological diagnosis or long-term clinical follow-up confirming benignity or stability.

    K Number
    K162025
    Date Cleared
    2016-10-18

    (88 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices:

    K060937, K130278, K032096, K111804, K111336

    Intended Use

    Philips IntelliSpace Portal Platform is a software medical device that allows multiple users to remotely access clinical applications from compatible computers on a network.

    The system allows networking, selection, processing and filming of multimodality DICOM images.

    This software is for use with off-the-shelf PC computer technology that meets defined minimum specifications.

    Philips IntelliSpace Portal Platform is intended to be used by trained professionals, including but not limited to physicians and medical technicians.

    This medical device is not to be used for mammography.

    The device is not intended for diagnosis of lossy compressed images.

    Device Description

    Philips IntelliSpace Portal Platform is a software medical device that allows multiple users to remotely access clinical applications from compatible computers on a network. The system allows networking, selection, processing and filming of multimodality DICOM images. This software is for use with off-the-shelf PC computer technology that meets defined minimum specifications.

    The IntelliSpace Portal Platform communicates with imaging systems of different modalities using the DICOM-3 standard.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the IntelliSpace Portal Platform (K162025):

    The submitted document is a 510(k) Premarket Notification for the Philips IntelliSpace Portal Platform. This submission aims to demonstrate substantial equivalence to a legally marketed predicate device (GE AW Server K081985).

    Important Note: The document focuses on demonstrating substantial equivalence for a Picture Archiving and Communications System (PACS) and related functionalities. Unlike AI/ML-driven diagnostic devices, the information provided here does not detail performance metrics like sensitivity, specificity, or AUC against a specific clinical condition using a test set of images with established ground truth from a clinical study. Instead, the acceptance criteria and "study" refer to engineering and functional verification and validation testing to ensure the software performs as intended and safely, consistent with a PACS system.

    Here's the breakdown based on your requested information:


    1. A table of acceptance criteria and the reported device performance

      The document does not provide a table with specific quantitative acceptance criteria or reported performance results in the classical sense (e.g., sensitivity, specificity, accuracy percentages) because it's for a PACS platform, not a diagnostic AI algorithm for a specific clinical task.

      Instead, the "acceptance criteria" for a PACS platform primarily relate to its functional performance, compliance with standards, and safety. The reported "performance" is a successful demonstration of these aspects.

      | Acceptance Criteria (Inferred from regulatory requirements and description) | Reported Device Performance (as stated in the submission) |
      | --- | --- |
      | Compliance with ISO 14971 (Risk Management) | Demonstrated compliance with ISO 14971. (p. 9) |
      | Compliance with IEC 62304 (Medical Device Software Lifecycle Processes) | Demonstrated compliance with IEC 62304. (p. 9) |
      | Compliance with NEMA PS 3.1-PS 3.20 (DICOM Standard) | Demonstrated compliance with NEMA PS 3.1-PS 3.20 (DICOM). (p. 9) |
      | Compliance with FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices | Demonstrated compliance with the relevant FDA guidance document. (p. 9) |
      | Meeting defined functionality requirements and performance claims (e.g., networking, selection, processing, filming of multimodality DICOM images, multi-user access, various viewing/manipulation tools as listed in the comparison tables) | Verification and validation tests performed to address intended use, technological characteristics, requirement specifications, and risk management results. Tests demonstrated the system meets all defined functionality requirements and performance claims. (p. 9) |
      | Safety and effectiveness equivalent to predicate device | Demonstrated substantial equivalence in terms of safety and effectiveness, confirming no new safety or effectiveness concerns. (p. 9, 10) |
    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

      This type of information is not provided in this document. Since the submission is for a PACS platform and not a diagnostic AI algorithm, there is no mention of a "test set" of clinical cases or patient data in the context of diagnostic performance evaluation. The "testing" refers to software verification and validation, which would involve testing functionalities rather than analyzing a dataset of medical images.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

      This information is not applicable/not provided. As explained above, there is no "test set" of clinical cases with ground truth established by medical experts for diagnostic performance.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

      This information is not applicable/not provided. There is no clinical "test set" requiring adjudication for diagnostic performance.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

      No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. This device is a PACS platform, not an AI-assisted diagnostic tool designed to improve human reader performance for a specific clinical task. The submission explicitly states: "The subject of this premarket submission, ISPP does not require clinical studies to support equivalence." (p. 9).

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

      No, a standalone performance study (in the context of an AI algorithm performing a diagnostic task) was not done. This device is a software platform for image management and processing, intended for use by trained professionals (humans-in-the-loop) for visualization and administrative functions.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

      This information is not applicable/not provided. There is no ground truth data in the context of diagnostic accuracy for this PACS platform submission. The "ground truth" for its functionality would be defined by its requirement specifications, and testing would verify if those specifications are met.

    8. The sample size for the training set

      This information is not applicable/not provided. This device is a PACS platform, not an AI/ML algorithm that requires a "training set" of data in the machine learning sense. The software development process involves design and implementation, followed by verification and validation, but not training on a dataset of images to learn a specific task.

    9. How the ground truth for the training set was established

      This information is not applicable/not provided. As there is no "training set," there is no ground truth establishment for it.


    K Number
    K150665
    Date Cleared
    2015-08-07

    (144 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Reference Devices:

    K060937, K111336

    Intended Use

    The Philips Spectral CT Applications support viewing and analysis of images at energies selected from the available spectrum in order to provide information about the chemical composition of the body materials and/or contrast agents. The Spectral CT Applications provide for the quantification and graphical display of attenuation, material density, and effective atomic number. This information may be used by a trained healthcare professional as a diagnostic tool for the visualization and analysis of anatomical and pathological structures.

    The Spectral enhanced Advanced Vessel Analysis (SAVA) application is intended to assist clinicians in viewing and evaluating CT images, for the inspection of contrast-enhanced vessels.

    The Spectral enhanced Comprehensive Cardiac Analysis (SCCA) application is intended to assist clinicians in viewing and evaluating cardiovascular CT images.

    The Spectral enhanced Tumor Tracking (sTT) application is intended to assist clinicians in viewing and evaluating CT images, for the inspection of tumors.

    Device Description

    The Spectral CT Applications package introduces a set of three SW clinical applications: spectral enhanced Comprehensive Cardiac Analysis (sCCA), spectral enhanced Advanced Vessel Analysis (sAVA), and spectral enhanced Tumor Tracking (sTT). Each application provides tools that assist trained personnel in visualization and analysis of anatomical and pathological structures.

    The sCCA application is targeted to assist the user in the analysis and diagnosis of cardiac cases, such as contrast-enhanced and ECG-triggered scans. The application input is a cardiac case that was acquired on the IQon CT scanner; the application takes the user through typical workflow steps that allow them to extract qualitative and quantitative information on the coronary tree and chambers. The output of this application is information on the physical (length, width, volume) and composition (effective atomic number, attenuation, HU) properties of the coronary vessels and findings along them.

    The sAVA application is targeted to assist the user in the analysis and diagnosis of CT angiography cases, such as contrast-enhanced and whole-body CT-angiography scans. The application input is a CT angiography case that was acquired on the IQon CT scanner; the application takes the user through typical workflow steps that allow them to extract qualitative and quantitative information on the vessel of interest. The output of this application is information on the physical (length, width, volume) and composition (effective atomic number, attenuation, HU) properties of the vessels and findings along them.

    The sTT application is targeted to assist the user in the analysis of tumors, using contrast-enhanced, soft-tissue-oriented, and whole-body scans. The application input is a tumor-suspected contrast-enhanced case that was acquired on the IQon CT scanner; the application takes the user through typical workflow steps that allow them to extract qualitative and quantitative information on the tumor of interest. The output of this application is information on the physical (length, width, volume) and composition (effective atomic number, attenuation, HU) properties of the tumor.
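
    The submission does not explain how material density or effective atomic number is derived. A common spectral-CT approach, assumed here purely for illustration, is two-material decomposition: per voxel, the measured attenuation at two energies is modeled as a linear combination of two basis materials, and the resulting 2x2 linear system is solved:

```python
def decompose_two_materials(mu_low, mu_high, basis_low, basis_high):
    """Per-voxel two-material decomposition for dual-energy CT.

    Solves  [mu_low ]   [a1_low  a2_low ] [c1]
            [mu_high] = [a1_high a2_high] [c2]
    for the basis-material contributions c1, c2 by Cramer's rule.
    basis_low  = (material1, material2) attenuation at the low energy,
    basis_high = (material1, material2) attenuation at the high energy.
    Illustrative sketch only; all coefficients below are made up.
    """
    a1l, a2l = basis_low
    a1h, a2h = basis_high
    det = a1l * a2h - a2l * a1h
    if det == 0:
        raise ValueError("basis materials are not spectrally independent")
    c1 = (mu_low * a2h - a2l * mu_high) / det
    c2 = (a1l * mu_high - mu_low * a1h) / det
    return c1, c2

# Made-up basis coefficients for a water-like and an iodine-like material.
basis_low = (0.20, 0.50)   # (water, iodine) attenuation at low energy
basis_high = (0.18, 0.25)  # (water, iodine) attenuation at high energy
# A voxel of 1.0 "water" + 0.5 "iodine" gives mu_low 0.45, mu_high 0.305.
c_water, c_iodine = decompose_two_materials(0.45, 0.305, basis_low, basis_high)
```

Quantities such as iodine density maps follow directly from the per-voxel contributions; effective atomic number is typically derived from the same decomposition.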

    AI/ML Overview

    The provided text describes the Philips Spectral CT Applications, a set of software clinical applications (sCCA, sAVA, sTT) designed to assist clinicians in viewing and evaluating CT images with spectral data. While the document mentions verification and validation, it does not provide explicit acceptance criteria in a quantitative table or detailed performance metrics against those criteria. It generally states that "SW requirements were met" and "intended uses and defined user needs were met."

    Therefore, I cannot populate a table of acceptance criteria and reported device performance with specific numerical values from the provided text. However, I can extract information related to the study that proves the device meets its intended uses and user needs, as described in the "Summary of Clinical Testing" section.

    Here's a breakdown of the available information:

    1. A table of acceptance criteria and the reported device performance:

    No explicit quantitative acceptance criteria or corresponding reported device performance metrics are provided in the document. The document states that "SW requirements were met" and "intended uses and defined user needs were met."

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: "clinical datasets" – specific number not provided.
    • Data Provenance: The datasets were "derived from Philips IQon Spectral CT system (K133674)." The country of origin is not specified, but the manufacturer is Philips Medical Systems Nederland B.V. and the regulatory contact is in the USA. The data appears to be retrospective, as it's referred to as "clinical datasets" used for validation, not newly acquired data specifically for this study.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: "Philips Internal certified radiologists" – specific number not provided.
    • Qualifications of Experts: "certified radiologists" – specific experience level (e.g., 10 years) not provided. They are referred to as representing "a typical user."

    4. Adjudication method for the test set:

    • Adjudication Method: Not explicitly stated. The document mentions that "The evaluators were questioned against each of the intended uses and provided score to describe their level of satisfaction." This suggests individual evaluation rather than a formal consensus or adjudication process among multiple readers for ground truth establishment.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • MRMC Study: No MRMC comparative effectiveness study, comparing human readers with and without AI assistance, is described. The validation focused on whether the applications meet intended uses and user needs, as evaluated by radiologists. The applications are described as "assistive tools" for clinicians.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Standalone Performance: Not explicitly evaluated or described. The validation confirms that the applications "allow visualization, manipulation and analysis of spectral data" and "assist clinicians in viewing and evaluating CT images." This implies human interaction as part of the intended use.

    7. The type of ground truth used:

    • Ground Truth Type: The ground truth for the validation appears to be based on the subjective evaluation and "satisfaction scores" provided by "Philips Internal certified radiologists" against the stated intended uses and user needs. It is based on expert consensus/evaluation of the utility of the application's outputs, rather than a separate, independent "ground truth" like pathology or outcomes data.

    8. The sample size for the training set:

    • Training Set Sample Size: Not specified. The document primarily discusses verification and validation using "datasets that were generated by the Philips IQon Spectral CT system" and "clinical datasets." It does not provide details on a distinct training set.

    9. How the ground truth for the training set was established:

    • Ground Truth for Training Set: Not specified, as details about a separate training set or how its ground truth was established are not provided in the document.