Search Results
Found 4 results
510(k) Data Aggregation
(197 days)
FlightPlan for Liver is a post-processing software package that aids in the analysis of 3D X-ray images of the liver arterial tree. Its output is intended as an adjunct to help identify arteries leading to the vicinity of hypervascular lesions in the liver. This adjunct information may be used by physicians to aid their evaluation of hepatic arterial anatomy during embolization procedures.
FlightPlan for Liver is a post-processing software application for use with interventional fluoroscopy procedures, using 3D rotational angiography images as input. It operates on the AW VolumeShare 4 [K052995] and AW VolumeShare 5 [K110834] platforms. It is an extension to the Volume Viewer application [K041521], utilizing Volume Viewer's rich set of 3D processing features. FlightPlan for Liver delivers post-processing features that aid physicians in their analysis of 3D X-ray images of the liver arterial tree. Additionally, FlightPlan for Liver includes an algorithm to highlight the potential vessel(s) in the vicinity of a target.
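The submission does not describe GE's vessel-highlighting algorithm itself. As a purely illustrative sketch, assuming a binary vessel segmentation and a target lesion location in voxel coordinates are already available, highlighting "vessel(s) in the vicinity of a target" can be reduced to masking segmented voxels within a radius of the target (the function name `vessels_near_target` and its parameters are hypothetical):

```python
import numpy as np

def vessels_near_target(vessel_mask, target_voxel, radius_vox):
    """Return a mask of segmented vessel voxels that lie within a
    given radius (in voxels) of a target lesion location.

    vessel_mask: boolean 3-D array from a prior vessel segmentation.
    target_voxel: (z, y, x) voxel coordinates of the target.
    radius_vox: search radius, in voxels.
    """
    # Coordinate grids for every voxel, one grid per axis.
    axes = np.indices(vessel_mask.shape)
    # Squared Euclidean distance of every voxel to the target.
    dist2 = sum((ax - c) ** 2 for ax, c in zip(axes, target_voxel))
    # Keep only vessel voxels inside the search sphere.
    return vessel_mask & (dist2 <= radius_vox ** 2)
```

In practice the radius would be expressed in millimeters and converted using the voxel spacing from the image header; the sketch keeps everything in voxel units for brevity.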
Here's an analysis of the provided text regarding the acceptance criteria and study for the FlightPlan for Liver device:
Acceptance Criteria and Device Performance
There is no explicit table of acceptance criteria or reported device performance metrics (e.g., sensitivity, specificity, AUC) in the provided document. The submission focuses on demonstrating substantial equivalence to a predicate device and confirming that the software functions as required and fulfills user needs.
The "Performance testing" mentioned is described as "computing time of algorithm on several data," implying a speed or efficiency metric rather than a diagnostic performance metric. The document states that "Verification confirms that the Design Output meets the Design Input (Product Specifications) requirements" and that "Validation confirms that the product fulfills the user needs and the intended use under simulated use conditions," but specific, quantifiable acceptance criteria are not detailed.
The "Summary of Clinical Tests" states that the study "demonstrate[d] the safety and effectiveness of FlightPlan for Liver" and compared its output "to a reference reading established by two senior interventional oncologists." However, the exact metrics used for comparison and the "acceptance criteria" for those metrics are not provided. The key takeaway is that the clinical data was not intended to support a claim of improved clinical outcomes.
Study Details
Here's what can be extracted about the study that proves the device meets the (unspecified quantitative) acceptance criteria:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria Category | Specific Criteria (Implicit/General) | Reported Device Performance |
|---|---|---|
| Functional Verification | Application works as required; risk mitigations correctly implemented. | "Verification tests... performed to check whether the application works as required and whether the risk mitigations have been correctly implemented." |
| Performance Testing | Algorithm computing time (specific targets not provided). | "Performance testing consists of computing time of algorithm on several data." |
| Design Validation | Product fulfills user needs and intended use under simulated use conditions. | "Validation tests consist of typical use case scenario described by the sequence of operator actions. The Design Validation confirms that the product fulfills the user needs and the intended use under simulated use conditions." |
| Clinical Effectiveness | Output provides adjunct information to aid physicians in evaluating hepatic arterial anatomy; output compared to reference reading. | Output was compared to a reference reading established by two senior interventional oncologists. No specific quantitative performance metrics (e.g., accuracy, precision) are provided, nor are numerical results of this comparison. |
| Substantial Equivalence | Functionality, safety, and effectiveness are comparable to the predicate device. | "GE Healthcare considers the FlightPlan for Liver application to be as safe and as effective as its predicate device, and its performance is substantially equivalent to the predicate device." |
2. Sample size used for the test set and the data provenance
- Test Set Size: 44 subjects, representing a total of 66 tumors.
- Data Provenance: Retrospective study. The country of origin is not explicitly stated; given the submitter's address (Buc, France) and GE Healthcare's global presence, the data could be European or multinational, but this is speculative.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Two.
- Qualifications: "Senior interventional oncologists." Specific experience (e.g., years) is not provided.
4. Adjudication method for the test set
- The ground truth was established by a "reference reading established by two senior interventional oncologists." While it states the two established the reference, it doesn't specify if this was by consensus, independent reads with adjudication, or another method. The phrasing "a reference reading established by two" suggests a single, agreed-upon ground truth, likely consensus or 2-reader agreement if initial reads differed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No, a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was not done. The study specifically states that the clinical data "was not designed nor intended to support a claim of an improvement in clinical outcomes of such procedures, and no such claim is being made." The study focused on comparing the device's output to an expert reference, not on human performance improvement.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Yes, the clinical study appears to evaluate the algorithm's standalone performance compared to a "reference reading." The "output of FlightPlan for Liver was compared to a reference reading," indicating the algorithm's direct output was assessed. No mention is made of human interaction or interpretation of the algorithm's output as part of this comparison.
7. The type of ground truth used
- Type of Ground Truth: Expert consensus/reference reading. Specifically, "a reference reading established by two senior interventional oncologists."
8. The sample size for the training set
- The document does not mention the sample size for the training set. It only describes the clinical study as a "retrospective study" used for verification and validation, implying it was a test set. There's no information about the data used to train the "algorithm to highlight the potential vessel(s)."
9. How the ground truth for the training set was established
- This information is not provided since the document does not detail the training set or its ground truth establishment.
(32 days)
AW VolumeShare 5 is a review workstation, which allows easy selection, review, processing and filming of multi-modality DICOM images from a variety of diagnostic imaging systems. When interpreted by a trained physician, filmed or displayed images on the AW monitor may be used as a basis of diagnosis, except in the case of mammography images.
AngioViz is an application that produces, from a DSA series, parametric images representing maximum opacification, time to peak, and combinations of those, to enable the user to more easily visualize characteristics related to vascular flow.
The AngioViz application can be used to process DSA image data from any location in the human body for which DSA imaging is used.
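The text does not disclose AngioViz's implementation. Under the common definition of these parametric maps — the per-pixel maximum intensity over a subtracted DSA time series, and the time at which that maximum occurs, assuming higher values indicate greater opacification — a minimal sketch might look like this (`parametric_maps` and its arguments are hypothetical names):

```python
import numpy as np

def parametric_maps(dsa_series, frame_interval_s=0.5):
    """Compute parametric maps from a subtracted DSA series.

    dsa_series: float array of shape (T, H, W); higher values are
    assumed to mean greater contrast opacification (an assumption,
    not a statement about AngioViz itself).
    Returns (max_opacification, time_to_peak_s), each shaped (H, W).
    """
    # Maximum opacification: brightest value each pixel reaches.
    max_opacification = dsa_series.max(axis=0)
    # Time to peak: frame index of that maximum, scaled to seconds.
    time_to_peak_s = dsa_series.argmax(axis=0) * frame_interval_s
    return max_opacification, time_to_peak_s
```

"Combinations of those" could then be rendered, for example, by color-coding time to peak and modulating brightness by maximum opacification.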
The AW VolumeShare 5 is a stand-alone workstation with its own image database residing on its dedicated computer. The AW VolumeShare 5 workstation supports functions for image display, manipulation, and selective recording (either on film or on disk).
The AW VolumeShare 5 is intended to be used to create and review diagnostic evidence related to radiology procedures by trained and licensed physicians and/or qualified clinical/medical personnel. The device is not intended for diagnosis of mammography images.
AW VolumeShare 5 workstation, like its predicate Advantage Workstation 4.3, provides a platform for a variety of other GE software medical devices to operate, all of which are cleared by FDA in their own names.
AngioViz is an option offered on AW VolumeShare 5. It is integrated post-processing image analysis software dedicated to vascular imaging of body vessels.
The provided 510(k) premarket notification for the GE Healthcare AW VolumeShare 5 with AngioViz Option does not contain acceptance criteria or a study proving that the device meets specific performance criteria.
Here's a breakdown of the information that is present and absent:
1. Table of Acceptance Criteria and Reported Device Performance:
- Absent. The document explicitly states: "The subject of this premarket submission, AW VolumeShare 5 with AngioViz, did not require clinical studies to support substantial equivalence." This means there were no performance metrics defined or measured for this specific submission to demonstrate equivalence to a predicate device. The submission focuses on the technological equivalence and safety, not on specific performance claims measured against acceptance criteria.
2. Sample Size Used for the Test Set and Data Provenance:
- Absent. Since no clinical studies were performed, there is no test set or associated sample size discussed.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- Absent. As no clinical studies were performed, there was no test set requiring expert-established ground truth.
4. Adjudication Method:
- Absent. No clinical studies, no adjudication method.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Absent. The document explicitly states no clinical studies were required or performed. Therefore, no MRMC study was conducted or reported.
6. Standalone Performance Study:
- Absent. No standalone performance study was conducted or reported for the AngioViz option. The submission focuses on the AW VolumeShare 5 as a review workstation and the technological similarity of AngioViz to the predicate device's underlying technology.
7. Type of Ground Truth Used:
- Absent. Ground truth is not relevant in the context of this submission, which relies on technological equivalence rather than performance evaluation against a gold standard for specific diagnostic claims.
8. Sample Size for the Training Set:
- Absent. This submission does not describe any machine learning or AI models that would require a 'training set.' The AngioViz application is described as generating "parametric images representing maximum opacification, time to peak and combinations of those, to enable the user to more easily visualize characteristics related to vascular flow." This implies image processing and visualization techniques, not necessarily a trained AI model with a training set in the typical sense.
9. How the Ground Truth for the Training Set Was Established:
- Absent. As there is no training set mentioned, there is no discussion of how ground truth would be established for it.
In summary:
This 510(k) submission for the GE Healthcare AW VolumeShare 5 with AngioViz Option received FDA clearance based on substantial equivalence to a predicate device (Advantage Workstation 4.3 (K052995)). The justification for substantial equivalence primarily relies on:
- Technological Equivalence: Stating that "AW VolumeShare 5 with AngioViz option employs the same Technology as that of its predicate device."
- Compliance with Voluntary Standards: (as detailed in Sections 9, 11, and 16 of the submission, though these sections are not provided in the snippet).
- Quality Assurance Measures: Including Risk Analysis, Requirements Reviews, Design Reviews, Performance testing (Verification), Safety testing (Verification), and Final acceptance testing (Validation). However, the results of these tests and their specific acceptance criteria are not detailed in the provided pages.
No clinical studies were performed or deemed necessary to support this substantial equivalence determination. Therefore, the document does not contain the specific performance metrics, test sets, expert ground truth establishment, or comparative studies you requested.
(73 days)
AW Server is a medical software system that allows multiple users to remotely access AW applications from compatible computers on a network. The system allows networking, selection, processing and filming of multimodality DICOM images.
Both the client and server software are only for use with off the shelf hardware technology that meets defined minimum specifications.
The device is not intended for diagnosis of mammography images. The device is not intended for diagnosis of lossy compressed images. For other images, trained physicians may use the images as a basis for diagnosis upon ensuring that monitor quality, ambient light conditions and image compression ratios are consistent with clinical application.
AW Server is a software package delivered with off-the-shelf server-class hardware that allows easy selection, review, processing and filming of multi-modality DICOM images from a variety of PC client machines, using LAN or WAN networks. It also allows user-selectable lossless and lossy compression schemes to trade off speed against quality.
AW Server is intended to be used in a manner similar to the current GE Medical Systems AW workstation product. It will be used to create and review diagnostic evidence related to radiology procedures by trained physicians in General Purpose Radiology, Oncology, Cardiology and Neurology clinical areas.
AW Server, like Advantage Workstation 4.3, may be used with a variety of other GE software medical devices, which are cleared by FDA in their own names.
The provided text describes a 510(k) summary for the GE Medical Systems' AW Server. This device is a medical software system that allows remote access to AW applications, networking, selection, processing, and filming of multimodality DICOM images. However, the document does NOT contain information about acceptance criteria or a study proving performance against such criteria.
The 510(k) submission primarily focuses on demonstrating substantial equivalence to existing predicate devices (Advantage Workstation 4.3 and AquariusNET Server) based on functional features and intended use. It highlights that the AW Server does not introduce new potential safety risks and performs comparably to devices already on the market.
Therefore, I cannot fulfill your request for: a table of acceptance criteria and reported device performance, sample sizes, data provenance, number of experts, adjudication methods, MRMC study details, standalone performance, type of ground truth, training set sample size, or how training ground truth was established.
The document states:
- "AW Server does not result in any new potential safety risks and performs as well as devices currently on the market."
- "GE considers features of the AW Server to be equivalent to predicate devices listed in section 6."
- "GE has assessed and tested this device as a software moderate Level of Concern device."
These statements suggest that the "acceptance criteria" were primarily demonstrating equivalence in functionality and safety to legally marketed predicate devices, rather than meeting specific quantitative performance metrics from a clinical study. The testing mentioned refers to "Software Development, Validation and Verification Process to ensure performance to specifications, Federal Regulations and user requirements" and "Adherence to industry and international standards," which are general quality and regulatory compliance activities, not a clinical performance study with specific endpoints as you've requested.
(35 days)
Myrian is a multi-modality medical diagnostic device. It is aimed at reviewing and analysing anatomy and pathology. It also includes DICOM communication capabilities and media interchange features (printing, CD burning, storing). It runs on any standard PC, including laptops, that might be purchased independently by the end user. It provides the user with a set of tools meant to create and modify volumes of interest. This device is not indicated for mammography use. Lossy compressed mammography images and digitized film-screen images must not be used for primary image interpretation. Mammographic images may only be interpreted using an FDA-approved monitor that offers at least 5 megapixel resolution and meets other technical specifications approved by the FDA.
The Myrian® system is a software suite providing the following services:
- Import of DICOM images from any DICOM modality, workstation or PACS
- Visualization of DICOM images in thin MPR, thick MPR and full 3D volume rendering
- Creation of VOIs (Volumes Of Interest) with dedicated tools
- Calculation of volumes, surfaces, and average, minimum and maximum densities of VOIs
- Follow-up of patient examinations
- Generation of medical reports
- Export of DICOM images to any format, DICOM entity or media
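The document does not describe Myrian's internal implementation. As a minimal sketch of the VOI volume and density calculations listed above, assuming a density volume (e.g., in Hounsfield units) and a boolean VOI mask are available (`voi_statistics` and its parameters are hypothetical names):

```python
import numpy as np

def voi_statistics(volume_hu, mask, voxel_volume_mm3):
    """Summarize a volume of interest (VOI) on a density volume.

    volume_hu: 3-D array of voxel densities (e.g., Hounsfield units).
    mask: boolean array of the same shape selecting the VOI.
    voxel_volume_mm3: physical volume of one voxel, from the image
    header's pixel spacing and slice thickness.
    """
    vals = volume_hu[mask]  # densities of voxels inside the VOI
    return {
        "volume_mm3": float(mask.sum() * voxel_volume_mm3),
        "mean_density": float(vals.mean()),
        "min_density": float(vals.min()),
        "max_density": float(vals.max()),
    }
```

Surface area estimation is harder (it requires a mesh or boundary-voxel count from the mask) and is omitted from this sketch.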
Here's an analysis of the provided text regarding the Intrasense MYRIAN device:
Analysis of Intrasense MYRIAN Device Performance and Study
1. Table of Acceptance Criteria and Reported Device Performance
Based on the provided text, specific numerical acceptance criteria and corresponding reported device performance metrics are not explicitly stated. The submission focuses on demonstrating substantial equivalence to predicate devices and adherence to general software safety guidelines.
| Acceptance Criteria Category | Acceptance Criteria (As Stated or Inferred) | Reported Device Performance (As Stated or Inferred) |
|---|---|---|
| General Compliance | Requirements of FDA "Guidance of the Content of Pre Market Submissions for Software Contained in Medical Devices" | MYRIAN meets the required specifications. |
| Adverse Effects | No adverse effects detected. | No adverse effects have been detected. |
| Feature Functionality | All described functionalities (image import, visualization, VOI creation, calculation, follow-up, reporting, export) operate as intended. | User Site Testing and Benchmarking demonstrate MYRIAN meets required specifications. Implied successful operation of features. |
| Safety and Effectiveness | Substantially equivalent to predicate devices in terms of safety and effectiveness. | The technological characteristics, features, specifications, materials, mode of operation, and intended use of the MYRIAN device are equivalent to those of the predicate devices. Differences do not raise new issues of safety or effectiveness. |
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "User Site Testing, Benchmarking and clinical data analysis" for performance verification. However, no specific sample sizes for the test set or details about data provenance (e.g., country of origin, retrospective/prospective nature) are provided.
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not specify the number of experts used to establish ground truth or their qualifications. It states that "Typical users of Myrian® with its Modules are trained medical professionals, including but not limited to radiologists, technologists and clinicians," and that images, "When interpreted by a trained physician, filmed or displayed images on the Myrian® and its Modules may be used as a basis for diagnosis." This implies that medical professionals would be involved in evaluating the device, but no details on ground truth establishment are given.
4. Adjudication Method
The document does not mention any specific adjudication method (e.g., 2+1, 3+1) for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed. There is no mention of comparing human readers with and without AI assistance or any effect size of improvement. The device description and performance data focus on its standalone functionality and equivalence to predicate devices.
6. Standalone (Algorithm Only) Performance Study
The document implies that the device's performance was evaluated in various settings, stating "User Site Testing, Benchmarking and clinical data analysis demonstrate that MYRIAN meet the required specifications." This suggests that the algorithm and its features were tested for their intended functionality, which aligns with standalone performance evaluation. However, specific metrics of "algorithm-only" performance (like sensitivity, specificity, accuracy for a particular task) are not provided. The focus is on the software suite's general functionality for image processing, visualization, and measurement.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). While it mentions that images interpreted by a trained physician may be used for diagnosis, it doesn't describe how ground truth was established for the purpose of validating the device's performance.
8. Sample Size for the Training Set
The document does not specify the sample size used for any training set. Given the submission date (2007) and the description of the device (a software suite for general image processing and visualization), it's highly likely that this device does not utilize deep learning or other machine learning algorithms that require explicit "training sets" in the modern sense. It appears to be a rule-based or conventional image processing software.
9. How Ground Truth for the Training Set Was Established
As there's no mention of a training set, the document does not describe how ground truth for a training set was established.