Search Results
Found 3 results
510(k) Data Aggregation
(59 days)
AV Vascular is indicated to assist users in the visualization, assessment and quantification of vascular anatomy on CTA and/or MRA datasets, in order to assess patients with suspected or diagnosed vascular pathology and to assist with pre-procedural planning of endovascular interventions.
AV Vascular is a post-processing software application intended for visualization, assessment, and quantification of vessels in computed tomography angiography (CTA) and magnetic resonance angiography (MRA) data with a unified workflow for both modalities.
AV Vascular includes the following functions:
- Advanced visualization: the application provides all relevant views and interactions for CTA and MRA image review: 2D slices, MIP, MPR, curved MPR (cMPR), stretched MPR (sMPR), path-aligned views (cross-sectional and longitudinal MPRs), and 3D volume rendering (VR).
- Vessel segmentation: automatic bone removal and vessel segmentation for head/neck and body CTA data; automatic vessel centerline, lumen and outer wall extraction and labeling for the main branches of the vascular anatomy in head/neck and body CTA data; semi-automatic and manual creation of vessel centerlines and lumens for CTA and MRA data; interactive two-point vessel centerline extraction and single-point centerline extension.
- Vessel inspection: enables inspection of an entire vessel using the cMPR or sMPR views, as well as local inspection of a vessel using vessel-aligned views (cross-sectional and longitudinal MPRs) by selecting a position along a vessel of interest.
- Measurements: ability to create and save measurements of vessel and lumen inner and outer diameters and area, as well as vessel length and angle measurements.
- Measurements and tools that specifically support pre-procedural planning: manual and automatic ring marker placement for specific anatomical locations, length measurements of the longest and shortest curve along the aortic lumen contour, angle measurements of aortic branches in clock-position style, saving viewing angles in C-arm notation, and configurable templates.
- Saving and export: saving and export of batch series and customizable reports.
This summarization is based on the provided 510(k) clearance letter for Philips Medical Systems' AV Vascular device.
Acceptance Criteria and Device Performance for Aorto-iliac Outer Wall Segmentation
| Metrics | Acceptance Criteria | Reported Device Performance (Mean with 98.75% confidence intervals) |
|---|---|---|
| 3D Dice Similarity Coefficient (DSC) | > 0.9 | 0.96 (0.96, 0.97) |
| 2D Dice Similarity Coefficient (DSC) | > 0.9 | 0.96 (0.95, 0.96) |
| Mean Surface Distance (MSD) | < 1.0 mm | 0.57 mm (0.485, 0.68) |
| Hausdorff Distance (HD) | < 3.0 mm | 1.68 mm (1.23, 2.08) |
| ∆Dmin (difference in minimum diameter) | > 95% of cases with \|∆Dmin\| < 5 mm | 98.8% (98.3–99.2%) |
| ∆Dmax (difference in maximum diameter) | > 95% of cases with \|∆Dmax\| < 5 mm | 98.5% (97.9–98.9%) |
The reported device performance for all primary and secondary metrics meets the predefined acceptance criteria.
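These agreement metrics are standard in segmentation validation. As a generic sketch, assuming binary voxel masks and known voxel spacing (this is not Philips' implementation, whose details are not disclosed), they can be computed as follows:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def _surface(mask):
    """Boundary voxels: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def _surface_distances(a, b, spacing):
    """Distance (mm) from each surface voxel of `a` to the surface of `b`."""
    dist_to_b = ndimage.distance_transform_edt(~_surface(b), sampling=spacing)
    return dist_to_b[_surface(a)]

def msd_and_hd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance and Hausdorff distance."""
    d_ab = _surface_distances(a, b, spacing)
    d_ba = _surface_distances(b, a, spacing)
    return (d_ab.mean() + d_ba.mean()) / 2.0, max(d_ab.max(), d_ba.max())

def fraction_within(deltas_mm, tol_mm=5.0):
    """Share of cases whose diameter difference magnitude is below a
    tolerance, mirroring the |∆Dmin| / |∆Dmax| < 5 mm criteria."""
    d = np.abs(np.asarray(deltas_mm, dtype=float))
    return (d < tol_mm).mean()
```

For identical masks, `dice` returns 1.0 and both surface distances are 0; in practice, 2D DSC would be evaluated per cross-sectional slice and results pooled across cases.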
Study Details for Aorto-iliac Outer Wall Segmentation Validation
- Sample Size used for the Test Set and Data Provenance:
- Sample Size: 80 patients
- Data Provenance: Retrospectively collected from 7 clinical sites in the US, 3 European hospitals, and one hospital in Asia.
- Independence from Training Data: All performance testing datasets were acquired from clinical sites distinct from those which provided the algorithm training data. The algorithm developers had no access to the testing data, ensuring complete independence.
- Patient Characteristics: At least 80% of patients had thoracic and/or abdominal aortic diseases and/or iliac artery diseases (e.g., thoracic/abdominal aortic aneurysm, ectasia, dissection, and stenosis). At least 20% had been treated with stents.
- Demographics:
- Geographics: North America: 58 (72.5%), Europe: 3 (3.75%), Asia: 19 (23.75%)
- Sex: Male: 59 (73.75%), Female: 21 (26.25%)
- Age (years): 21-50: 2 (2.50%), 51-70: 31 (38.75%), >71: 45 (56.25%), Not available: 2 (2.5%)
- Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- Number of Experts: Three
- Qualifications: US board-certified radiologists.
- Adjudication Method for the Test Set:
- The three US board-certified radiologists independently performed manual contouring of the outer wall along the aorta and iliac arteries on cross-sectional planes for each CT angiographic image.
- After quality control, these three aortic and iliac arterial outer wall contours were averaged to serve as the reference standard contour. This can be considered a form of consensus/averaging after independent readings.
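A simplified stand-in for this averaging step, assuming the three readings are available as binary masks rather than the contour-space averaging the summary actually describes, is a voxel-wise mean thresholded at 0.5, which with three readers reduces to a 2-of-3 majority vote:

```python
import numpy as np

def consensus_mask(masks):
    """Voxel-wise average of several binary segmentations, thresholded
    at 0.5; with three readers this is a 2-of-3 majority vote. A rough
    approximation of contour averaging, not the documented method."""
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    return stack.mean(axis=0) >= 0.5
```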
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- The provided document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to measure human reader improvement with AI assistance. The study focused on the standalone performance of the AI algorithm compared to an expert-derived ground truth.
- Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance:
- Yes, the performance data provided specifically describes the standalone performance of the AI-based algorithm for aorto-iliac outer wall segmentation. The algorithm's output was compared directly against the reference standard without human intervention in the segmentation process.
- Type of Ground Truth Used:
- Expert Consensus/Averaging: The ground truth was established by averaging the independent manual contours produced by three US board-certified radiologists.
- Sample Size for the Training Set:
- The document states that the testing data were independent of the training data and that developers had no access to the testing data. However, the exact sample size for the training set is not specified in the provided text.
- How the Ground Truth for the Training Set Was Established:
- The document implies that training data were used, but it does not describe how the ground truth for the training set was established. It only ensures that the testing data did not come from the same clinical sites as the training data and that algorithm developers had no access to the testing data.
(174 days)
The Longitudinal Brain Imaging (LoBI) is a post-processing application to be used for viewing and evaluating neurological images provided by a magnetic resonance diagnostic device.
The LoBI application is intended for viewing, manipulation and comparison of medical images from one and/or multiple time-points. The LoBI application enables visualization of information that would otherwise have to be visually compared disjointedly. The LoBI application provides analysis tools to help the user assess and document changes in diagnostic and follow-up examinations. The LoBI application is designed to support the workflow by helping the user to confirm the absence or presence of lesions, including evaluation, follow-up and documentation of any such lesions.
The physician retains the ultimate responsibility for making the final diagnosis and treatment decision.
Philips Medical Systems' Longitudinal Brain Imaging application (LoBI) is a post processing software application intended to assist in the evaluation of serial brain imaging based on MR data.
The LoBI application allows the user to view images, perform segmentation of lesions with segmentation editing tools, and obtain volumetric quantification of segmented volumes and quantitative comparison between time points. The LoBI application provides automatic registration between studies from different time points for longitudinal comparison.
The LoBI application provides a supportive tool for visualization of subtle differences in the brain of the same individual across time, which can be used by clinicians in the assessment of disease progression.
The physician retains the ultimate responsibility for making the final diagnosis based on image visualization as well as any segmentation and measurement results obtained from the application.
The LoBI application is intended to be used for the adult population only.
Key Features
LoBI application has the following key features:
- Longitudinal comparison between brain images in multiple studies
- Support for multi-slice MR sequences (2D and 3D), allowing the user to use basic viewing operations such as scroll, pan, zoom, windowing and annotation
- Identification of pre-defined data types (pre-sets) and user-created hanging layouts
- Automatic registration between studies (same patient, different time-points)
- Single mode: allows reviewing each of the launched studies, showing multiple sequences of the same study, using the whole reading space
- Tissue segmentation and editing tools allowing volumetric measurement of different lesion types
- Lesion management tool allowing matching between lesions in different studies to facilitate the assessment of differences over time
- CoBI (Comparative Brain Imaging) feature: a supportive tool for visualization of subtle differences in lesions of the same individual across time for similar sequences. The CoBI feature provides a mathematical subtraction of scans, yielding, after bias-field correction and intensity scaling, a color-coded image of the differences in intensity between two registered scans.
- Results displayed in tabular and graphical formats
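As a heavily simplified sketch of the CoBI-style subtraction (real bias-field correction, e.g. N4, and the color-coding step are omitted, and the percentile-based rescaling is an assumption, not Philips' documented method), the core operation is intensity normalization followed by a signed voxel-wise difference of two already-registered scans:

```python
import numpy as np

def subtraction_map(baseline, followup):
    """Signed difference of two registered scans after a robust
    per-volume intensity rescaling to [0, 1]."""
    def normalize(vol):
        lo, hi = np.percentile(vol, [1, 99])
        return np.clip((vol - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    # Positive values: brighter at follow-up; negative: darker.
    return normalize(followup) - normalize(baseline)
```

A viewer would then map the signed values through a diverging colormap to produce the color-coded difference image.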
Here's a summary of the acceptance criteria and study information for the Philips Longitudinal Brain Imaging (LoBI) application, based on the provided 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance:
The document focuses on demonstrating substantial equivalence to predicate devices and adherence to regulatory standards rather than explicit quantitative acceptance criteria or detailed device performance metrics in a table format. The primary "acceptance criteria" are implied by compliance with:
- International and FDA-recognized consensus standards: ISO 14971, IEC 62304, IEC 62366-1, DICOM PS 3.1-3.18.
- FDA guidance document: "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices."
- Internal Philips verification and validation processes: Ensuring the device "meets the acceptance criteria and is adequate for its intended use and specifications."
Since specific numerical performance criteria (e.g., accuracy, sensitivity, specificity for particular lesion types) and corresponding reported performance are not provided in this 510(k) summary, the table below reflects what is broadly stated.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Compliance with ISO 14971 (Risk Management) | Demonstrated |
| Compliance with IEC 62304 (Software Life Cycle Processes) | Demonstrated |
| Compliance with IEC 62366-1 (Usability Engineering) | Demonstrated |
| Compliance with FDA Guidance for Software in Medical Devices | Demonstrated |
| Compliance with DICOM PS 3.1-3.18 (DICOM Standard) | Demonstrated |
| Fulfillment of intended functionality (CoBI feature, registration, segmentation, measurement, etc.) | Verified through "Full functionality test" (covering detailed requirements per Product Requirement Specification) and "Validation" (using real recorded clinical data cases to simulate actual use and ensure customer needs / intended functionality fulfillment). Performance demonstrated to meet defined functionality requirements and performance claims. |
| CoBI feature functions correctly and meets specifications | Proven through verification activities |
| Meets customer needs and fulfills intended functionality (validated with real clinical data) | Proven through validation activities |
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: Not explicitly stated as a number of cases or images. The validation activities used "real recorded clinical data cases." The quantity of these cases is not specified.
- Data Provenance: The data used for validation consisted of "real recorded clinical data cases." No specific country of origin is mentioned. The data are implied to be retrospective, as they are described as "recorded."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- This information is not provided in the document. The general statement is that "The physician retains the ultimate responsibility for making the final diagnosis," suggesting human expert involvement in clinical practice, but not explicitly defining how ground truth for the test set was established or by whom.
4. Adjudication Method for the Test Set:
- This information is not provided in the document.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance:
- No MRMC comparative effectiveness study was done or reported. The document states explicitly: "The subject of this premarket submission, Longitudinal Brain Imaging (LoBI) application, did not require clinical studies to support equivalence." The testing focused on verification and validation of the software's functionality and compliance with standards.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done:
- The document describes the LoBI application as a "post-processing software application intended to assist in the evaluation of serial brain imaging" and emphasizes that "The physician retains the ultimate responsibility for making the final diagnosis."
- While the software performs automated functions like registration, segmentation, and quantitative comparison, the validation process using "real recorded clinical data cases" seems to focus on the software's ability to provide accurate tools and information that a user would interpret.
- The description of "Full functionality test" and "RMF testing" could involve standalone algorithmic performance evaluation against predefined specifications. However, an explicit "standalone" performance study as a separate regulatory study with defined metrics (e.g., algorithm-only sensitivity/specificity against ground truth) is not detailed in this summary. The focus is on the tool's supportive role for the user.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.):
- The type of ground truth used for the validation data is not explicitly specified. The summary refers to "real recorded clinical data cases," implying that the imaging data came with existing clinical interpretations or diagnoses, which would implicitly have served as a reference for evaluating the software's utility in confirming the absence or presence of lesions, including evaluation, quantification, follow-up and documentation. However, the method of establishing this ground truth (e.g., expert consensus, pathology) is not detailed.
8. The Sample Size for the Training Set:
- The document does not provide information regarding a distinct training set or indicate whether the LoBI application was developed using machine learning or AI. The product description focuses on its functionality as a post-processing application with features like automatic registration and tissue segmentation, which could be rule-based or machine-learning-driven, but this is not specified, nor is any training data mentioned.
9. How the Ground Truth for the Training Set Was Established:
- Since a training set is not mentioned, the method for establishing its ground truth is also not provided.
(56 days)
The Multi-Modality Tumor Tracking (MMTT) application is a post-processing software application used to display, process, analyze, quantify and manipulate anatomical and functional images from CT, MR, PET/CT and SPECT/CT, for single and/or multiple time-points. The MMTT application is intended for use on tumors which are known/confirmed to be pathologically diagnosed cancer. The results obtained may be used as a tool by clinicians in determining the diagnosis of patient disease conditions in various organs, tissues, and other anatomical structures.
Philips Medical Systems' Multi-Modality Tumor Tracking (MMTT) application is post-processing software. It is a non-organ-specific, multi-modality application which is intended to function as an advanced visualization application. The MMTT application is intended for displaying, processing, analyzing, quantifying and manipulating anatomical and functional images from CT, MR, PET/CT and SPECT/CT scans.
The Multi-Modality Tumor Tracking (MMTT) application allows the user to view images, perform segmentation and measurements, and obtain quantitative and characterizing information on oncology lesions, such as solid tumors and lymph nodes, for a single study or over the time course of several studies (multiple time-points). Based on these measurements, the MMTT application provides an automatic tool that may be used by clinicians in the diagnosis, management and surveillance of solid tumor and lymph node conditions in various organs, tissues, and other anatomical structures, based on different oncology response criteria.
The provided text does not contain detailed information about a study that proves the device meets specific acceptance criteria, nor does it include a table of acceptance criteria and reported device performance.
The submission is a 510(k) premarket notification for the "Multi-Modality Tumor Tracking (MMTT) application." For 510(k) submissions, the primary goal is to demonstrate substantial equivalence to a legally marketed predicate device, rather than proving a device meets specific, pre-defined performance acceptance criteria through a rigorous clinical or non-clinical study that would be typical for a PMA (Premarket Approval) application.
Here's what can be extracted and inferred from the document regarding the device's validation:
Key Information from the Document:
- Study Type: No clinical studies were required or performed to support equivalence. The validation was based on non-clinical performance testing, specifically "Verification and Validation (V&V) activities."
- Demonstration of Compliance: The V&V tests were intended to demonstrate compliance with international and FDA-recognized consensus standards and FDA guidance documents, and that the device "Meets the acceptance criteria and is adequate for its intended use and specifications."
- Acceptance Criteria (Implied): While no quantitative table is provided, the acceptance criteria are implicitly tied to:
- Compliance with standards: ISO 14971, IEC 62304, IEC 62366-1, DICOM PS 3.1-3.18.
- Compliance with FDA guidance documents for software in medical devices.
- Addressing intended use, technological characteristics claims, requirement specifications, and risk management results.
- Functionality requirements and performance claims as described in the device description (e.g., longitudinal follow-up, multi-modality support, automated/manual registration, segmentation, measurement calculations, support for oncology response criteria, SUV calculations).
- Performance (Implied): "Testing performed demonstrated the Multi-Modality Tumor Tracking (MMTT) meets all defined functionality requirements and performance claims." Specific quantitative performance metrics are not given.
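Among the listed functions are SUV calculations, without the variant being specified. For illustration only, the common body-weight SUV, with the injected dose decay-corrected to scan time, can be computed as follows (the function name and defaults are illustrative, not taken from the document):

```python
import math

def suv_bw(activity_bq_per_ml, injected_dose_bq, weight_kg,
           minutes_since_injection=0.0, half_life_min=109.77):
    """Body-weight SUV. Default half-life is F-18 (~109.77 min);
    assumes a tissue density of ~1 g/mL so the result is dimensionless."""
    # Decay-correct the injected dose to the scan time.
    dose_at_scan = injected_dose_bq * math.exp(
        -math.log(2.0) * minutes_since_injection / half_life_min)
    return activity_bq_per_ml * (weight_kg * 1000.0) / dose_at_scan
```

For example, 5 kBq/mL in a 74 kg patient injected with 370 MBq gives an SUV of 1.0 at injection time, and 2.0 one F-18 half-life later.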
Information NOT present in the document:
The following information, which would typically be found in a detailed study report proving acceptance criteria, is not available in this 510(k) summary:
- A table of acceptance criteria and the reported device performance: This document states the device "Meets the acceptance criteria and is adequate for its intended use and specifications," but does not list these criteria or the test results.
- Sample sizes used for the test set and the data provenance: No details on the number of images, patients, or data characteristics used for non-clinical testing.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience): Since it was non-clinical testing, there's no mention of expert involvement in establishing ground truth for a test set.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable as no expert-adjudicated clinical test set is described.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance: No MRMC study was performed, as no clinical studies were undertaken.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: The V&V activities would have included testing the software's functionality, which could be considered standalone performance testing, but specific metrics are not provided. The device is a "post processing software application" used "by clinicians," implying a human-in-the-loop tool rather than a fully autonomous AI diagnostic device.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not detailed for the non-clinical V&V testing. For the intended use, the device is for "tumors which are known/confirmed to be pathologically diagnosed cancer," suggesting that the "ground truth" for the intended use context is pathological diagnosis. However, this is not the ground truth for the V&V testing itself.
- The sample size for the training set: Not applicable; this is a 510(k) for a software application, not specifically an AI/ML product where a training set size would be relevant for model development. The document does not describe any machine learning model training.
- How the ground truth for the training set was established: Not applicable for the same reason as above.
In summary, this 510(k) submission relies on a demonstration of substantial equivalence to existing predicate devices and internal non-clinical verification and validation testing, rather than a clinical study with specific, quantifiable performance metrics against an established ground truth.