Found 4 results

510(k) Data Aggregation

    K Number: K173001
    Device Name: uWS-CT
    Date Cleared: 2018-11-07 (406 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K033326, K162025, K081985, K023785

    Intended Use

    uWS-CT is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The CT Oncology application is intended to support fast-tracking routine diagnostic oncology, staging, and follow-up, by providing a tool for the user to perform the segmentation of suspicious lesions in lung or liver. The CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.

    The CT Dental application is intended to provide the user a tool to reconstruct panoramic and paraxial views of jaw. The CT Lung Density application is intended to provide the user a number of density parameters and structure information for evaluating tomogram scans of the lung.

    The CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.

    The CT Vessel Analysis application is intended to provide a tool for viewing, manipulating CT vascular images.

    The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.

    Device Description

    uWS-CT is a comprehensive software solution designed to process, review, and analyze CT studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional as well as anatomical datasets, acquired at one or more time-points or comprising one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including the mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images and statistics as per standard practice.
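The region-of-interest statistics mentioned above (mean, maximum, and minimum over a user-defined region) amount to a masked reduction over the voxel array. A minimal sketch in NumPy, assuming the CT volume has already been loaded as a 3-D array of Hounsfield units; the function and variable names are illustrative, not part of uWS-CT:

```python
import numpy as np

def roi_statistics(volume: np.ndarray, mask: np.ndarray) -> dict:
    """Mean/max/min over a user-defined region of a CT volume.

    volume : 3-D array of voxel values (e.g. Hounsfield units)
    mask   : boolean array of the same shape; True marks the ROI
    """
    roi = volume[mask]  # 1-D array of the ROI voxels
    return {
        "mean": float(roi.mean()),
        "max": float(roi.max()),
        "min": float(roi.min()),
        "voxels": int(roi.size),
    }

# Tiny synthetic example: a 4x4x4 volume with a 2x2x2 "lesion" of value 100
vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 100.0
roi = vol > 50  # a user-defined region, here picked by threshold
stats = roi_statistics(vol, roi)
```

A real workstation would let the user draw the region interactively; the threshold above only stands in for that step.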

    AI/ML Overview

    The provided document is a 510(k) Premarket Notification from Shanghai United Imaging Healthcare Co., Ltd. for their device uWS-CT. This document outlines the device's indications for use, technological characteristics, and comparison to predicate devices, but it does not contain a detailed study demonstrating that the device meets specific acceptance criteria based on human-in-the-loop or standalone performance.

    Instead, the document primarily focuses on demonstrating substantial equivalence to predicate devices based on similar functionality and intended use, supported by software verification and validation testing, hazard analysis, and performance evaluations for various CT applications. It explicitly states that "No clinical study was required" and "No animal study was required" for this submission.

    Therefore, I cannot provide the detailed information requested in the prompt's format (acceptance criteria table, sample size, expert ground truth, MRMC study, etc.) because these types of studies were not conducted or reported in this 510(k) submission.

    The "Performance Data" section (Page 11) lists "Performance Evaluation Report For CT Lung Nodule," "Performance Evaluation Report For CT Oncology," etc., but these are internal reports that are not detailed in this public document. They likely refer to internal testing that verifies the software's functions perform as designed, rather than robust clinical performance studies against specific quantitative acceptance criteria with human readers or well-defined ground truth beyond internal validation.

    What is present in the document regarding "performance" is:

    • Software Verification and Validation: This typically involves testing that the software functions as designed, is free of bugs, and meets its specified requirements. The document mentions "hazard analysis," "software requirements specification (SRS)," "software architecture description," "software development environment description," "software verification and validation," and "cyber security documents."
    • Performance Evaluation Reports for specific applications: These are listed but not detailed (e.g., CT Lung Nodule, CT Oncology). It's implied these show the software functions correctly for those applications.

    In summary, based on the provided text, there is no information about:

    • A table of acceptance criteria with reported device performance in the context of clinical accuracy or diagnostic performance.
    • Sample sizes used for a test set in a clinical performance study.
    • Data provenance for a clinical test set.
    • Number of experts or their qualifications for establishing clinical ground truth.
    • Adjudication methods for a clinical test set.
    • Multi-Reader Multi-Case (MRMC) comparative effectiveness studies.
    • Standalone (algorithm-only) performance studies against clinical ground truth.
    • Type of clinical ground truth used (pathology, outcomes data, expert consensus from an external panel).
    • Sample size for a training set (as no AI/ML model requiring a training set is explicitly discussed in terms of its performance data; the device is described as "CT Image Post-Processing Software" with various applications.)
    • How ground truth for a training set was established.

    The closest the document comes to "acceptance criteria" and "performance" are discussions of functional equivalence to predicate devices and general software validation, stating that the proposed device performs in a "similar manner" and has a "safety and effectiveness profile that is similar to the predicate device."


    K Number: K171850
    Date Cleared: 2017-11-09 (141 days)
    Product Code:
    Regulation Number: 892.1750
    Reference & Predicate Devices
    Reference Devices: K160743, K153444, K012238, K023785, K02005, K162025

    Intended Use

    The Philips CT Big Bore is a Computed Tomography X-Ray System intended to produce images of the head and body by computer reconstruction of x-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, and accessories. These systems are indicated for head and whole-body X-ray Computed Tomography applications in oncology, vascular and cardiology, for patients of all ages.

    These scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.

    • Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
    Device Description

    The Philips CT Big Bore is currently available in two system configurations, the Oncology configuration and the Radiology (Base) configuration.

    The main components (detection system, the reconstruction algorithm, and the x-ray system) that are used in the Philips CT Big Bore have the same fundamental design characteristics and are based on comparable technologies as the predicate.

    The main system modules and functionalities are:

    1. Gantry. The Gantry consists of 4 main internal units:
      a. Stator: a fixed mechanical frame that carries HW and SW
      b. Rotor: a rotating circular stiff frame that is mounted in and supported by the stator
      c. X-Ray Tube (XRT) and Generator: fixed to the Rotor frame
      d. Data Measurement System (DMS): a detector array, fixed to the Rotor frame
    2. Patient Support (Couch): carries the patient in and out through the Gantry bore, synchronized with the scan
    3. Console: a two-part subsystem containing a Host computer and display, which is the primary user interface, and the Common Image Reconstruction System (CIRS), a dedicated, powerful image reconstruction computer

    In addition to the above components and the software operating them, each system includes workstation hardware and software for data acquisition, display, manipulation, storage and filming as well as post-processing into views other than the original axial images. Patient supports (positioning aids) are used to position the patient.
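The post-processing "into views other than the original axial images" mentioned above (multiplanar reformatting) is, in its simplest form, a re-indexing of the reconstructed volume along a different axis. A hedged NumPy sketch; the axis convention is an assumption, and a real implementation must also account for anisotropic voxel spacing:

```python
import numpy as np

def reformat_planes(volume: np.ndarray):
    """Extract the three orthogonal middle slices from an axial volume.

    Assumes the array is ordered (slice, row, column), i.e. axial slices
    stacked along axis 0. Voxel spacing is ignored for simplicity.
    """
    z, y, x = volume.shape
    axial    = volume[z // 2, :, :]   # original acquisition plane
    coronal  = volume[:, y // 2, :]   # re-indexed along the row axis
    sagittal = volume[:, :, x // 2]   # re-indexed along the column axis
    return axial, coronal, sagittal

# A deliberately non-cubic volume makes the resulting plane shapes obvious
vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)
ax, co, sa = reformat_planes(vol)
```

With isotropic voxels this is exact; with the thick slices typical of CT, the coronal and sagittal views would additionally need interpolation to correct the aspect ratio.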

    AI/ML Overview

    This document describes the Philips CT Big Bore, a Computed Tomography X-Ray System. The submission focuses on demonstrating substantial equivalence to a predicate device rather than a standalone clinical efficacy study with acceptance criteria in the typical sense of a diagnostic AI product. Therefore, much of the requested information regarding clinical studies and expert review for ground truth is not directly applicable in the same way.

    However, based on the provided text, we can infer and extract the following:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are framed in terms of achieving similar or improved performance compared to the predicate device and meeting established industry standards for CT systems. The reported device performance is primarily a comparison to the predicate device's specifications and measurements on phantoms.

    Metric | Acceptance Criteria (Implicit: Similar to/Better than Predicate & Standards) | Reported Device Performance (Philips CT Big Bore / Tested Values)

    Design/Fundamental Scientific Technology:

    Application | Head/Body (Identical to Predicate) | Head/Body
    Scan Regime | Continuous Rotation (Identical to Predicate) | Continuous Rotation
    No. of Slices | Up to 40 (Predicate) | 16/32 (with optional WARP/DAS for 32 slices)
    Scan Modes | Surview, Axial Scan, Helical Scan (Identical to Predicate) | Surview, Axial Scan, Helical Scan
    Minimum Scan Time | 0.42 sec for 360° rotation (Identical to Predicate) | 0.42 sec for 360° rotation
    Image (Spatial) Resolution | 15 lp/cm max. (Predicate) | 16 lp/cm (±2 lp/cm)
    Image Noise, Body, STD Res. | 10.7 at 16.25 mGy (Predicate) | 10.7
    Image Matrix | Up to 1024 x 1024 (Identical to Predicate) | Up to 1024 x 1024
    Display | 1024 x 1280 (Identical to Predicate) | 1024 x 1280
    Host Infrastructure | Windows XP (Predicate) | Windows 7 (essentially the same, Windows based)
    CIRS Infrastructure | PC/NT computer based on Intel processor & custom Multiprocessor Array (Predicate) | Windows Vista & custom Multiprocessor Array (identical, Windows based)
    Communication | Compliance with DICOM (Identical to Predicate) | Compliance with DICOM
    Dose Reporting and Management | No (Predicate) | Compliance with MITA XR25 and XR29
    Generator and Tube Power | 60 kW (Predicate) | 80 kW (software limited to 60 kW)
    mA Range | 30-500 mA (Predicate) | 20-665 mA (software limited to 500 mA)
    kV Settings | 80, 120, 140 (Predicate) | 80, 100, 120, 140
    Focal Spot | Dynamic Focal Spot (Identical to Predicate) | Dynamic Focal Spot in X axis
    Tube Type | MRC 800 (Predicate) | MRC Ice Tube (880) (identical tube technology)
    Detectors Type | 2.4 or 4 cm NanoPanel detector (Predicate) | 2.4 cm NanoPanel (revision, slightly better performance stated)
    Scan Field of View | Up to 600 mm (Identical to Predicate) | Up to 600 mm
    Detector Type | Single layer ceramic scintillator plus photodiode array (Identical to Predicate) | Single layer ceramic scintillator plus photodiode array
    Gantry Tilt | ±30° (Identical to Predicate) | ±30°
    Gantry Rotation Speed | 143 RPM (Identical to Predicate) | 143 RPM
    Bore Size | 850 mm (Identical to Predicate) | 850 mm
    Low dose CT lung cancer screening | Yes (Predicate) | Yes (configuration with Brilliance Big Bore cited in K153444)
    Communication between injector and scanner | SAS (Spiral Auto Start) (Predicate) | SAS and SyncRight
    DoseRight / Dose Management | Yes (K012238) (Predicate) | Yes, and iDose4
    Dose Modulation | D-DOM and Z-DOM (Predicate) | D-DOM (Angular DOM), Z-DOM, FDOM, 3D-DOM
    Cone Beam Reconstruction Algorithm - COBRA | Yes (Identical to Predicate) | Yes
    Axial 2D Reconstruction | Yes (Identical to Predicate) | Yes
    Lung Nodule Assessment | Yes (K023785) (Identical to Predicate) | Yes
    ECG Signal Handling | Yes (Identical to Predicate) | Yes
    Cardiac Reconstruction | Yes (Identical to Predicate) | Yes
    Bolus Tracking | Yes (K02005) (Identical to Predicate) | Yes
    Calcium Scoring | Yes (Identical to Predicate) | Yes
    Heartbeat Calcium Scoring (HBCS) | Yes (Identical to Predicate) | Yes
    Virtual Colonoscopy | Yes (Identical to Predicate) | Yes
    Pediatric Applications Support | Yes (Identical to Predicate) | Yes
    Remote Workstation Option | Yes, MxView, later renamed Extended Brilliance Workstation (Predicate) | Yes, IntelliSpace Portal (K162025)
    Volume Rendering | Yes (Identical to Predicate) | Yes
    Liver Perfusion | Yes (Identical to Predicate) | Yes
    Dental Planning | Yes (Identical to Predicate) | Yes
    Functional CT | Yes (Identical to Predicate) | Yes
    Stent Planning | Yes (Identical to Predicate) | Yes
    Retrospective Tagging | Yes (Identical to Predicate) | Yes
    Prospective Cardiac Gating | Yes (Identical to Predicate) | Yes

    CT Performance Metrics (Phantoms):

    MTF | Cut-off: High Mode 16 ± 2 lp/cm; Standard Mode 13 ± 2 lp/cm (Measured)
    CTDIvol (Head) | 10.61 mGy/100 mAs ± 25% at 120 kV (Measured)
    CTDIvol (Body) | 5.92 mGy/100 mAs ± 25% at 120 kV (Measured)
    CT number accuracy (Water) | 0 ± 4 HU (Measured)
    Noise | 0.27% ± 0.04% at 120 kV, 250 mAs, 12 mm slice thickness, UA filter (Measured)
    Slice Thickness (Nominal 0.75 mm) | 0.5 mm - 1.5 mm (Measured)
    Slice Thickness (Nominal 1.5 mm) | 1.0 mm - 2.0 mm (Measured)
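The phantom metrics above are each stated as a nominal value with a tolerance, either absolute (CT number accuracy for water of 0 ± 4 HU) or relative (CTDIvol within ±25%). A QC check against such a specification is just an interval test; a minimal sketch, where the specification values come from the table but the measured values and function names are illustrative:

```python
def within_tolerance(measured: float, nominal: float, tol: float) -> bool:
    """True if a measured value lies inside nominal ± tol (absolute)."""
    return abs(measured - nominal) <= tol

def within_percent(measured: float, nominal: float, pct: float) -> bool:
    """True if a measured value lies inside nominal ± pct percent."""
    return abs(measured - nominal) <= nominal * pct / 100.0

# Hypothetical measured values checked against the quoted specifications
water_ok = within_tolerance(measured=1.8, nominal=0.0, tol=4.0)    # 0 ± 4 HU
ctdi_ok  = within_percent(measured=11.5, nominal=10.61, pct=25.0)  # ±25%
```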

    2. Sample Size for Test Set and Data Provenance

    The document does not explicitly state a "test set" in the context of an AI/algorithm-driven diagnostic study. Instead, it refers to "bench testing included basic CT performance tests on phantoms" and "Sample clinical images were provided with this submission, which were reviewed and evaluated by radiologists."

    • Sample Size for Test Set: Not specified for clinical images. For bench testing, it refers to "phantoms."
    • Data Provenance: Not specified for the "sample clinical images." Given the context of a 510(k) for a hardware device, it's highly likely these were internal and possibly from a variety of sources. It's not stated whether they were retrospective or prospective.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: "radiologists" (plural, but exact number not specified).
    • Qualifications of Experts: Only "radiologists" are mentioned. No details on years of experience or subspecialty.

    4. Adjudication Method for Test Set

    • Adjudication Method: Not specified. The document states, "All images were evaluated to have good image quality," suggesting a qualitative assessment rather than a structured adjudication process for a specific diagnostic task.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No, a typical MRMC comparative effectiveness study was not performed as described. This submission is for a CT scanner itself, not an AI-assisted interpretation tool where human readers' performance with and without AI would be compared.
    • Effect Size of Human Readers with AI vs. without AI: Not applicable, as this was not an AI-assistance study.

    6. Standalone (Algorithm Only) Performance Study

    • Standalone Study: No, this was not a standalone algorithm performance study. The submission is for a complete CT imaging system. The performance metrics reported are for the overall system, not an isolated algorithm. The document mentions "optional software algorithm called WARP or DAS" for increasing slice count, and features like "iDose4" (an extension of DoseRight) and "FDOM, 3D-DOM" for dose modulation, but their standalone performance is not detailed in terms of a clinical study.

    7. Type of Ground Truth Used

    • Type of Ground Truth: For the "sample clinical images," the ground truth seems to be expert opinion / qualitative assessment by radiologists that the image quality was "good." For the technical performance parameters (MTF, CTDIvol, CT number accuracy, Noise, Slice Thickness), the ground truth was derived from physical phantom measurements against established technical specifications.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable. This document describes a CT scanner (hardware and embedded software), not a machine learning model that would have a separate "training set" in the conventional sense. The "training" for the system's development would be through engineering design, iterative testing, and adherence to established physical and software engineering principles.

    9. How Ground Truth for the Training Set Was Established

    • How Ground Truth for Training Set Was Established: Not applicable. (See point 8). The development of the CT system likely involved extensive engineering design, simulations, and validation against known physical principles and performance targets, which are fundamentally different from establishing ground truth for a machine learning training set.

    K Number: K163250
    Date Cleared: 2017-05-11 (174 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K160315, K150665, K023785, K111336

    Intended Use

    The Longitudinal Brain Imaging (LoBI) is a post-processing application to be used for viewing and evaluating neurological images provided by a magnetic resonance diagnostic device.

    The LoBI application is intended for viewing, manipulation and comparison of medical images and/or multiple time-points. The LoBI application enables visualization of information that would otherwise have to be visually compared disjointedly. The LoBI application provides analysis tools to help the user assess and document changes in diagnostic and follow-up examinations. The LoBI application is designed to support the workflow by helping the user to confirm the absence or presence of lesions, including evaluation, follow-up and documentation of any such lesions.

    The physician retains the ultimate responsibility for making the final diagnosis and treatment decision.

    Device Description

    Philips Medical Systems' Longitudinal Brain Imaging application (LoBI) is a post processing software application intended to assist in the evaluation of serial brain imaging based on MR data.

    The LoBI application allows the user to view images and perform segmentation of lesions, with a segmentation-editing tool, volumetric quantification of segmented volumes, and quantitative comparison between time points. The LoBI application provides automatic registration between studies from different time points for longitudinal comparison.

    The LoBI application provides a supportive tool for visualization of subtle differences in the brain of the same individual across time, which can be used by clinicians in the assessment of disease progression.

    The physician retains the ultimate responsibility for making the final diagnosis based on image visualization as well as any segmentation and measurement results obtained from the application.

    The LoBI application is intended to be used for the adult population only.

    Key Features
    LoBI application has the following key features:

      1. Longitudinal comparison between brain images in multiple studies
      2. Support for multi-slice MR sequences (2D and 3D), allowing the user to use basic viewing operations such as scroll, pan, zoom, windowing and annotation
      3. Identification of pre-defined data types (pre-sets) and user-created hanging layouts
      4. Automatic registration between studies (same patient, different time-points)
      5. Single mode: allows reviewing each of the launched studies, showing multiple sequences of the same study, using the whole reading space
      6. Tissue segmentation and editing tools allowing volumetric measurement of different lesion types
      7. Lesion management tool allowing matching between lesions in different studies to facilitate the assessment of differences over time
      8. CoBI feature (Comparative Brain Imaging): a supportive tool for visualization of subtle differences in lesions of the same individual across time for similar sequences. The CoBI feature provides a mathematical subtraction of scans yielding, after bias-field correction and intensity scaling, a color-coded image of the differences in intensity between two registered scans.
      9. Results are displayed in tabular and graphical formats.
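The CoBI subtraction described above (an intensity-scaled difference of two registered scans, rendered as a color map) can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the general technique, not Philips' implementation; in particular, proper bias-field correction is replaced here by a crude global mean-intensity scaling:

```python
import numpy as np

def cobi_style_difference(baseline: np.ndarray, followup: np.ndarray) -> np.ndarray:
    """Signed, intensity-scaled difference of two registered scans.

    Returns values in [-1, 1]: positive where the follow-up is brighter
    than baseline, negative where it is darker. A viewer would map this
    range onto a color scale.
    """
    # Global mean scaling stands in for bias-field correction here
    scaled = followup * (baseline.mean() / followup.mean())
    diff = scaled - baseline
    peak = np.abs(diff).max()
    return diff / peak if peak > 0 else diff

base = np.ones((8, 8))
follow = np.ones((8, 8))
follow[2:4, 2:4] = 2.0  # a region that brightened between the two scans
dmap = cobi_style_difference(base, follow)
```

The key prerequisite, as the feature description notes, is that the two scans are already registered; without that, the subtraction highlights misalignment rather than change.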
    AI/ML Overview

    Here's a summary of the acceptance criteria and study information for the Philips Longitudinal Brain Imaging (LoBI) application, based on the provided 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document focuses on demonstrating substantial equivalence to predicate devices and adherence to regulatory standards rather than explicit quantitative acceptance criteria or detailed device performance metrics in a table format. The primary "acceptance criteria" are implied by compliance with:

    • International and FDA-recognized consensus standards: ISO 14971, IEC 62304, IEC 62366-1, DICOM PS 3.1-3.18.
    • FDA guidance document: "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices."
    • Internal Philips verification and validation processes: Ensuring the device "meets the acceptance criteria and is adequate for its intended use and specifications."

    Since specific numerical performance criteria (e.g., accuracy, sensitivity, specificity for particular lesion types) and corresponding reported performance are not provided in this 510(k) summary, the table below reflects what is broadly stated.

    Acceptance Criteria (Implied) | Reported Device Performance

    Compliance with ISO 14971 (Risk Management) | Demonstrated
    Compliance with IEC 62304 (Software Life Cycle Processes) | Demonstrated
    Compliance with IEC 62366-1 (Usability Engineering) | Demonstrated
    Compliance with FDA Guidance for Software in Medical Devices | Demonstrated
    Compliance with DICOM PS 3.1-3.18 (DICOM Standard) | Demonstrated
    Fulfillment of intended functionality (CoBI feature, registration, segmentation, measurement, etc.) | Verified through a "Full functionality test" (covering detailed requirements per the Product Requirement Specification) and "Validation" (using real recorded clinical data cases to simulate actual use and ensure customer needs and intended functionality are fulfilled). Performance demonstrated to meet defined functionality requirements and performance claims.
    CoBI feature functions correctly and meets specifications | Proven through verification activities
    Meets customer needs and fulfills intended functionality (validated with real clinical data) | Proven through validation activities

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: Not explicitly stated as a number of cases or images. The validation activities used "real recorded clinical data cases." The quantity of these cases is not specified.
    • Data Provenance: The data used for validation consisted of "real recorded clinical data cases." No specific country of origin is mentioned. It is indicated as retrospective, as they are "recorded" data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    • This information is not provided in the document. The general statement is that "The physician retains the ultimate responsibility for making the final diagnosis," suggesting human expert involvement in clinical practice, but not explicitly defining how ground truth for the test set was established or by whom.

    4. Adjudication Method for the Test Set:

    • This information is not provided in the document.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI Vs Without AI Assistance:

    • No MRMC comparative effectiveness study was done or reported. The document states explicitly: "The subject of this premarket submission, Longitudinal Brain Imaging (LoBI) application, did not require clinical studies to support equivalence." The testing focused on verification and validation of the software's functionality and compliance with standards.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • The document describes the LoBI application as a "post-processing software application intended to assist in the evaluation of serial brain imaging" and emphasizes that "The physician retains the ultimate responsibility for making the final diagnosis."
    • While the software performs automated functions like registration, segmentation, and quantitative comparison, the validation process using "real recorded clinical data cases" seems to focus on the software's ability to provide accurate tools and information that a user would interpret.
    • The description of "Full functionality test" and "RMF testing" could involve standalone algorithmic performance evaluation against predefined specifications. However, an explicit "standalone" performance study as a separate regulatory study with defined metrics (e.g., algorithm-only sensitivity/specificity against ground truth) is not detailed in this summary. The focus is on the tool's supportive role for the user.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.):

    • The type of ground truth used for the validation data is not explicitly specified. It refers to "real recorded clinical data cases," implying that the medical imaging data came with existing clinical interpretations or diagnoses, which would have implicitly served as a form of reference or "ground truth" for evaluating the software's utility in "confirming the absence or presence of lesions, including evaluation, quantification, follow-up and documentation." However, the method of establishing this ground truth (e.g., expert consensus, pathology) is not detailed.

    8. The Sample Size for the Training Set:

    • The document does not provide information regarding a distinct training set sample size or how the LoBI application was developed using machine learning or AI. The product description focuses on its functionality as a post-processing application with features like automatic registration and tissue segmentation, which could be rule-based or machine learning-driven, but this is not specified, nor is training data mentioned.

    9. How the Ground Truth for the Training Set Was Established:

    • Since a training set is not mentioned, the method for establishing its ground truth is also not provided.

    K Number: K162955
    Date Cleared: 2016-12-19 (56 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K151353, K123920, K113620, K160315, K150665, K023785

    Intended Use

    The Multi-Modality Tumor Tracking (MMTT) application is a post-processing software application used to display, process, analyze, quantify and manipulate anatomical and functional images, for CT, MR, PET/CT and SPECT/CT images and/or multiple time-points. The MMTT application is intended for use on tumors which are known/confirmed to be pathologically diagnosed cancer. The results obtained may be used as a tool by clinicians in determining the diagnosis of patient disease conditions in various organs, tissues, and other anatomical structures.

    Device Description

    Philips Medical Systems' Multi-Modality Tumor Tracking (MMTT) application is post-processing software. It is a non-organ-specific, multi-modality application intended to function as an advanced visualization application. The MMTT application is intended for displaying, processing, analyzing, quantifying and manipulating anatomical and functional images from CT, MR, PET/CT and SPECT/CT scans.

    The Multi-Modality Tumor Tracking (MMTT) application allows the user to view images, perform segmentation and measurements, and obtain quantitative and characterizing information about oncology lesions, such as solid tumors and lymph nodes, for a single study or over the time course of several studies (multiple time-points). Based on the measurements, the MMTT application provides an automatic tool which may be used by clinicians in the diagnosis, management and surveillance of solid tumor and lymph node conditions in various organs, tissues, and other anatomical structures, based on different oncology response criteria.
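The "oncology response criteria" referred to above generally reduce to thresholds on the change in a summed lesion measurement across time-points. As an illustration of the idea (not a statement of what MMTT implements), the RECIST 1.1 target-lesion categories can be sketched as:

```python
def recist_response(baseline_sum_mm: float, nadir_sum_mm: float,
                    current_sum_mm: float) -> str:
    """Classify target-lesion response using RECIST 1.1-style thresholds.

    baseline_sum_mm : sum of lesion diameters at baseline
    nadir_sum_mm    : smallest sum recorded so far in the study
    current_sum_mm  : sum at the current time-point
    """
    if current_sum_mm == 0:
        return "CR"  # complete response: all target lesions gone
    # Progression: >=20% increase over nadir AND >=5 mm absolute increase
    if (current_sum_mm - nadir_sum_mm >= 0.20 * nadir_sum_mm
            and current_sum_mm - nadir_sum_mm >= 5.0):
        return "PD"
    # Partial response: >=30% decrease from baseline
    if baseline_sum_mm - current_sum_mm >= 0.30 * baseline_sum_mm:
        return "PR"
    return "SD"  # stable disease otherwise

shrinking = recist_response(100.0, 100.0, 65.0)  # 35% decrease -> "PR"
growing = recist_response(100.0, 50.0, 62.0)     # +12 mm over nadir -> "PD"
```

Real criteria also involve rules this sketch omits (non-target lesions, new lesions, confirmation scans), which is why such tools support the clinician rather than replace the read.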

    AI/ML Overview

    The provided text does not contain detailed information about a study that proves the device meets specific acceptance criteria, nor does it include a table of acceptance criteria and reported device performance.

    The submission is a 510(k) premarket notification for the "Multi-Modality Tumor Tracking (MMTT) application." For 510(k) submissions, the primary goal is to demonstrate substantial equivalence to a legally marketed predicate device, rather than proving a device meets specific, pre-defined performance acceptance criteria through a rigorous clinical or non-clinical study that would be typical for a PMA (Premarket Approval) application.

    Here's what can be extracted and inferred from the document regarding the device's validation:

    Key Information from the Document:

    • Study Type: No clinical studies were required or performed to support equivalence. The validation was based on non-clinical performance testing, specifically "Verification and Validation (V&V) activities."
    • Demonstration of Compliance: The V&V tests were intended to demonstrate compliance with international and FDA-recognized consensus standards and FDA guidance documents, and that the device "Meets the acceptance criteria and is adequate for its intended use and specifications."
    • Acceptance Criteria (Implied): While no quantitative table is provided, the acceptance criteria are implicitly tied to:
      • Compliance with standards: ISO 14971, IEC 62304, IEC 62366-1, DICOM PS 3.1-3.18.
      • Compliance with FDA guidance documents for software in medical devices.
      • Addressing intended use, technological characteristics claims, requirement specifications, and risk management results.
      • Functionality requirements and performance claims as described in the device description (e.g., longitudinal follow-up, multi-modality support, automated/manual registration, segmentation, measurement calculations, support for oncology response criteria, SUV calculations).
    • Performance (Implied): "Testing performed demonstrated the Multi-Modality Tumor Tracking (MMTT) meets all defined functionality requirements and performance claims." Specific quantitative performance metrics are not given.
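One of the quantitative outputs listed above, the SUV, has a standard definition: measured tissue activity concentration normalized by injected dose per body weight. A hedged sketch of the body-weight variant; the function name and units are illustrative, and MMTT's exact implementation is not described in the document:

```python
def suv_bw(activity_bq_per_ml: float, injected_dose_bq: float,
           body_weight_g: float) -> float:
    """Body-weight standardized uptake value.

    SUV_bw = tissue activity concentration / (injected dose / body weight).
    With activity in Bq/mL, dose in Bq, and weight in grams (approximating
    1 g of tissue as 1 mL), the result is dimensionless. The injected dose
    is assumed to be already decay-corrected to scan time.
    """
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Example: 5 kBq/mL in tissue, 200 MBq injected, 70 kg patient
suv = suv_bw(5_000.0, 200_000_000.0, 70_000.0)  # = 1.75
```

An SUV near 1 indicates uptake similar to a uniform distribution of the tracer through the body; markedly higher values flag regions of concentrated uptake.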

    Information NOT present in the document:

    The following information, which would typically be found in a detailed study report proving acceptance criteria, is not available in this 510(k) summary:

    1. A table of acceptance criteria and the reported device performance: This document states the device "Meets the acceptance criteria and is adequate for its intended use and specifications," but does not list these criteria or the test results.
    2. Sample sizes used for the test set and the data provenance: No details on the number of images, patients, or data characteristics used for non-clinical testing.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience): Since it was non-clinical testing, there's no mention of expert involvement in establishing ground truth for a test set.
    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable as no expert-adjudicated clinical test set is described.
    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: No MRMC study was performed as no clinical studies were undertaken.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: The V&V activities would have included testing the software's functionality, which could be considered standalone performance testing, but specific metrics are not provided. The device is a "post processing software application" used "by clinicians," implying a human-in-the-loop tool rather than a fully autonomous AI diagnostic device.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not detailed for the non-clinical V&V testing. For the intended use, the device is for "tumors which are known/confirmed to be pathologically diagnosed cancer," suggesting that the "ground truth" for the intended use context is pathological diagnosis. However, this is not the ground truth for the V&V testing itself.
    8. The sample size for the training set: Not applicable; this is a 510(k) for a software application, not specifically an AI/ML product where a training set size would be relevant for model development. The document does not describe any machine learning model training.
    9. How the ground truth for the training set was established: Not applicable for the same reason as above.

    In summary, this 510(k) submission relies on a demonstration of substantial equivalence to existing predicate devices and internal non-clinical verification and validation testing, rather than a clinical study with specific, quantifiable performance metrics against an established ground truth.
