Search Results
Found 7 results
510(k) Data Aggregation
(444 days)
Eigen
Artemis along with the Needle Guide Attachment is used for image-guided interventional and diagnostic procedures of the prostate gland. It provides 2D and 3D visualization of Ultrasound (US) images and the ability to fuse and register these images with those from other imaging modalities such as Ultrasound, Magnetic Resonance, Computed Tomography, etc. It also provides the ability to display a simulated image of a tracked insertion tool such as a biopsy needle, guidewire or probe on a computer monitor screen that shows images of the target organ and the projected future path of the interventional instrument taking into account patient movement. The software also provides a virtual grid on the live ultrasound for performing systematic sampling of the target organ. Other software features include patient data management, multi-planar reconstruction, segmentation, image measurements, 2D/3D image registration, reporting, and pathology management.
Artemis is intended for treatment planning and guidance for clinical, interventional and/or diagnostic procedures. The device is intended to be used in interventional and diagnostic procedures in a clinical setting. Example procedures include, but are not limited to image fusion for diagnostic clinical examinations and procedures, soft tissue ablations and placement of fiducial markers. Artemis is also intended to be used for patients in active surveillance to keep track of previous procedures information and outcomes.
Artemis Cryo Treatment Planning module is an add-on to the existing Artemis software that allows physicians to prepare cryo treatment plans based on positive pathology cores obtained during Artemis-guided biopsies and on registration results with other imaging modalities such as MRI and CT. The module allows accurate placement of cryo probes on targets, 3D tracking, and real-time feedback on the extent of cryo ice formation. The technology provided by Artemis generates ice models based on the specifications provided by the cryo device manufacturers and displays the models on the live ultrasound to provide guidance to the users during the procedure.
The module also allows outlining or segmenting other organs that surround the prostate. Organs include bladder and urethra.
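The summary does not state how the manufacturer-supplied specifications are turned into the ice models drawn on the live ultrasound. A minimal sketch, assuming each specification reduces to an ellipsoidal isotherm (semi-axes in millimetres) centered at the cryo probe tip, could rasterize such a model onto the reconstructed ultrasound grid as follows; the function name and parameters are illustrative and not taken from the Artemis software.

```python
import numpy as np

def iceball_mask(grid_shape, voxel_size_mm, tip_zyx_mm, radii_zyx_mm):
    """Rasterize an ellipsoidal ice-ball model onto a 3-D image grid.

    grid_shape    : (nz, ny, nx) voxels of the reconstructed US volume
    voxel_size_mm : (dz, dy, dx) voxel spacing in millimetres
    tip_zyx_mm    : cryo probe tip position in volume coordinates, mm
    radii_zyx_mm  : ellipsoid semi-axes taken from the manufacturer spec, mm
    """
    zz, yy, xx = np.indices(grid_shape).astype(float)
    zz *= voxel_size_mm[0]
    yy *= voxel_size_mm[1]
    xx *= voxel_size_mm[2]
    d2 = (((zz - tip_zyx_mm[0]) / radii_zyx_mm[0]) ** 2 +
          ((yy - tip_zyx_mm[1]) / radii_zyx_mm[1]) ** 2 +
          ((xx - tip_zyx_mm[2]) / radii_zyx_mm[2]) ** 2)
    return d2 <= 1.0  # boolean mask that can be overlaid on the live US slice
```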
Artemis is designed to display the 2-D live video received from commercially available ultrasound machines and use this 2-D video to reconstruct a 3-D ultrasound image. The system has been designed to work with the clinicians' existing ultrasound machine, probe, commercially available biopsy needle guide, needle gun combination, and cryoablation systems. Additional software features include patient data management, multi-planar reconstruction, segmentation, image measurement, reporting and 3-D image registration.
Artemis is comprised of a mechanical assembly that holds the ultrasound probe and tracks probe position. The mechanical tracker is connected to a PC-based workstation containing a video digitizing card and running the image processing software. Control of the ultrasound probe and ultrasound system is done manually by the physician, just as it would be in the absence of Artemis. However, by tracking the position and orientation of the ultrasound probe while capturing the video image, the workstation is able to reconstruct and display a 3-D image and 3-D rendered surface model of the prostate.
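The summary does not describe the reconstruction algorithm itself. A common freehand 3-D ultrasound approach is to map each pixel of a tracked 2-D frame into the volume through the probe pose and accumulate it into the nearest voxel; the sketch below assumes a 4x4 homogeneous pose per frame and uses illustrative names throughout, so it should be read as one possible formulation rather than the device's actual method.

```python
import numpy as np

def insert_frame(volume, counts, frame, pose, px_size_mm, vox_size_mm):
    """Scatter one tracked 2-D ultrasound frame into a 3-D volume.

    volume, counts : float accumulation arrays of shape (nz, ny, nx)
    frame          : 2-D grayscale image (rows, cols)
    pose           : 4x4 homogeneous transform, image plane (mm) -> volume (mm)
    px_size_mm     : (row, col) pixel spacing of the frame
    vox_size_mm    : (dz, dy, dx) voxel spacing of the volume
    """
    r, c = np.indices(frame.shape)
    pts = np.stack([c * px_size_mm[1],            # x within the image plane
                    r * px_size_mm[0],            # y within the image plane
                    np.zeros(frame.shape),        # the frame lies in its own z = 0 plane
                    np.ones(frame.shape)], axis=-1).reshape(-1, 4)
    xyz = (pts @ pose.T)[:, :3]                   # positions in volume space, mm
    idx = np.round(xyz[:, ::-1] / vox_size_mm).astype(int)   # (z, y, x) voxel indices
    ok = np.all((idx >= 0) & (idx < volume.shape), axis=1)
    z, y, x = idx[ok].T
    np.add.at(volume, (z, y, x), frame.reshape(-1)[ok].astype(float))
    np.add.at(counts, (z, y, x), 1.0)             # reconstructed image = volume / counts
```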
The reconstructed 3-D image can be further processed to perform various measurements including volume estimation, which can be examined for abnormalities by a physician. Patient information, notes, and images may be stored for future retrieval, and locations for biopsies may be selected by the physician. The system also allows previously acquired 3-D models to be recalled, aligned, or registered to the current 3-D model of the prostate, which is especially useful for patients under active surveillance.
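How a previously stored 3-D model is aligned to the current one is not detailed in the summary. If corresponding surface points on the two models are available, a least-squares rigid alignment (the Kabsch/Procrustes solution) is one standard way to do it; the sketch below illustrates that technique under this assumption and is not a description of the device's actual registration method.

```python
import numpy as np

def rigid_align(prior_pts, current_pts):
    """Least-squares rigid transform mapping prior-model points onto the
    corresponding points of the current model (Kabsch / Procrustes).

    prior_pts, current_pts : (N, 3) arrays of corresponding points (mm)
    Returns R (3x3) and t (3,) such that current ~= prior @ R.T + t.
    """
    p0 = prior_pts.mean(axis=0)
    c0 = current_pts.mean(axis=0)
    H = (prior_pts - p0).T @ (current_pts - c0)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c0 - R @ p0
    return R, t
```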
The physician may attach a commercially available biopsy needle guide compatible with the ultrasound probe and use the probe and needle to perform tissue biopsy and/or cryoablation. Whenever the ultrasound machine is turned on by the physician, the live 2-D ultrasound image is displayed on the screen of Artemis during the procedure. As the ultrasound probe with attached needle guide is maneuvered by the physician, the position and orientation of the probe with respect to the organ is tracked. Artemis is able to add, display and edit loaded plans for the procedure as well as provide the probe position and needle trajectory relative to the 3-D image and 3-D rendered surface model of the prostate.
In addition to standard transrectal needle guidance procedures, Artemis also supports transperineal needle guidance by mounting a Needle Guide Attachment (NGA). A commercially available needle guide compatible with the NGA is used. This NGA is used for both biopsy and cryo needles. The NGA provides additional data to track the needle direction angle. When using transperineal mode, the procedure planning, segmentation, registration and navigation are performed in the same way as in the standard transrectal procedure. The only difference lies in how the needle guide needs to be moved to target the different planned locations. For the transrectal procedure, the needle guide is always attached to the probe; therefore, moving the probe moves the needle guide. In transperineal needle guidance procedures the needle guide is not attached to the probe; therefore, the NGA needs to be moved to move the needle guide. Artemis highlights the target closest to the current needle guide position.
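The summary says only that the target closest to the current needle guide position is highlighted and does not define the distance measure. A plausible sketch, assuming "closest" means the smallest perpendicular distance from a planned target to the projected needle path, is shown below; all names are hypothetical.

```python
import numpy as np

def closest_target(targets_mm, guide_tip_mm, guide_dir):
    """Pick the planned target nearest to the projected needle path.

    targets_mm   : (N, 3) planned target positions, volume coordinates (mm)
    guide_tip_mm : (3,) current needle guide entry point (mm)
    guide_dir    : (3,) unit vector along the needle guide axis
    Returns the index of the target with the smallest perpendicular
    distance to the line through guide_tip_mm along guide_dir.
    """
    v = targets_mm - guide_tip_mm
    along = v @ guide_dir                        # signed distance along the axis
    perp = v - np.outer(along, guide_dir)        # component off the axis
    return int(np.argmin(np.linalg.norm(perp, axis=1)))
```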
Artemis offers the physician additional 3-D information for assessing prostate abnormalities, planning and implementing biopsy procedures. The additional image processing features are generated with minimal changes to previous Ultrasound probe based procedures, and the physician always has access to the live 2-D ultrasound image during prostate assessment or biopsy procedure. The device also provides automated reports with information and pictures from the procedure.
The provided text describes the acceptance criteria and the study proving the device meets these criteria for the Artemis medical imaging system.
Here's a breakdown of the requested information based on the text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of predetermined acceptance criteria with corresponding performance results. Instead, it broadly states that "Nonclinical and performance testing results are provided in the 510(k) and demonstrate that the predetermined acceptance criteria are met." It mentions that "Measurement validation using, phantoms, clinical CT, and MRI images were used to show that Artemis performs as well as or better than the predicate devices and furthermore shows that Artemis was safe and effective."
Below is a table summarizing the types of tests and the overall conclusion regarding acceptance, as the specific numerical criteria and results are not detailed in this public summary.
Acceptance Criteria Category | Reported Device Performance (General Statement) |
---|---|
Design Validation | Met; performed by designated individuals. |
Function Validation | Met; performed by designated individuals. |
Specification Validation | Met; performed by designated individuals. |
Input Functions Testing | Passed all in-house testing criteria. |
Output Functions Testing | Passed all in-house testing criteria. |
Actions in Each Operational Mode | Passed all in-house testing criteria. |
Safety and Effectiveness | Performs as well as or better than predicate devices; safe and effective. |
Compliance with Applicable Standards (Emissions, Immunity, Risk, Usability) | Complies with IEC/EN 60601-1-2, EN 55011, CISPR 11, IEC 61000 series, EN/ISO 14971, IEC 62366. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document does not specify the exact sample size (number of phantoms or clinical images) used for the measurement validation or other performance tests. It states "Measurement validation using, phantoms, clinical CT, and MRI images were used."
- Data Provenance: The provenance (e.g., country of origin, retrospective or prospective) of the clinical CT and MRI images used for measurement validation is not specified in the provided text. The testing appears to be non-clinical and performed at the manufacturer's facility ("at the manufacturer's facility and has passed all in-house testing criteria").
3. Number of Experts and Qualifications for Ground Truth
The document does not specify the number of experts used to establish ground truth or their qualifications. The testing described is "nonclinical and performance testing" and "measurement validation," suggesting a focus on technical accuracy rather than human interpretation studies.
4. Adjudication Method for the Test Set
The document does not mention any adjudication method for establishing ground truth, such as 2+1 or 3+1.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no mention of an MRMC comparative effectiveness study being performed, nor any effect size regarding how human readers might improve with AI vs. without AI assistance. The study described focuses on the device's technical performance and comparison to predicate devices, not on human-in-the-loop performance.
6. Standalone (Algorithm Only) Performance
The testing primarily focuses on the device's technical performance, including "measurement validation." This strongly implies that a standalone (algorithm only) performance evaluation was conducted to ensure the device's core functionalities, such as image reconstruction, segmentation, registration, and ice model generation, meet design specifications independently. The statement "Artemis has been assessed and tested at the manufacturer's facility and has passed all in-house testing criteria including validating design, function and specifications" supports this. Specific performance metrics (e.g., accuracy, precision) for these standalone functions are not provided, only a general statement of meeting acceptance criteria.
7. Type of Ground Truth Used
The type of ground truth used for performance validation included:
- Phantoms: For measurement validation.
- Clinical CT images: For measurement validation.
- Clinical MRI images: For measurement validation.
The basis for the "ground truth" on these phantoms and clinical images (e.g., known measurements for phantoms, expertly annotated features on clinical images) is implied but not explicitly detailed.
8. Sample Size for the Training Set
The document does not provide information regarding the sample size used for any potential training set. The descriptions of "nonclinical and performance testing" and "measurement validation" focus on evaluation (test set) rather than model training. It's possible that a training set was used for specific software features involving image processing or reconstruction, but this is not mentioned.
9. How the Ground Truth for the Training Set Was Established
As no information about a training set is provided, there is also no information on how its ground truth might have been established.
(349 days)
Eigen
ProFuse CAD is a post-processing software tool intended for viewing, reviewing and reporting imaging files such as Magnetic Resonance Imaging (MR), Computed Tomography (CT) and Positron Emission Tomography (PET) studies.
The software is able to process time series modality data acquired before, during, and after a contrast agent has been administered to the patient. The software allows a physician to evaluate tissue characteristics based on contrast enhancement visible on the time series. The time series data can also be processed to obtain subtraction image time-series with any reference time point.
ProFuse CAD also provides the capability to process a specialized MRI series called diffusion weighted imaging to evaluate tissue characteristics based on water diffusion. Some of the other post-processing features include multi-planar reformats and registration between images.
ProFuse CAD has a marking feature that allows the user to outline organs with minimal input and also annotate regions in the image in 3D. The software also allows for grading the annotated regions as defined by the user. After review, the software automatically generates a patient report that provides information such as organ volume, annotations, and grading, along with automatically generated screenshots of the annotations in different images.
Planned data from ProFuse CAD can be used for interventional procedures such as MR-TRUS fusion biopsies. The data can be displayed on medical device data systems.
ProFuse CAD is used as a tool to review and add to the results of interventional procedures, for example by adding pathology information when reviewing an MR-TRUS fusion biopsy.
When interpreted by a skilled physician, this device provides information that is intended to be used for screening, analysis, and interventional planning. Patient management decisions should not be made based solely on the results of ProFuse CAD.
ProFuse CAD is intended to be used as an image viewer of multi-modality digital images, including Ultrasound, CT, and PET.
ProFuse CAD is intended to be used in a variety of settings such as medical offices, clinics, hospitals, and home offices.
ProFuse CAD is standalone Computer Aided Detection (CAD) software that is used by radiologists to visualize, analyze, plan and interpret medical images using tools available in the software. Some of the tools in the software are:
- Visualization: The software allows simultaneous visualization of different images with the ability to view them in different orientations e.g. transverse, sagittal, coronal and 3D.
- Organ segmentation: The software allows the user to outline the organ of interest (e.g., prostate)
- Image annotation: The software allows the user to mark regions of interest (ROI) on images
- Time-series analysis: The software allows the user to view the time-curve plots for a single location or averaged over a region (ROI). The user can also calculate different parametric maps based on pharmacokinetic modeling. The parametric maps can be viewed as a separate series or be overlaid as color on another series. The software also allows users to compute subtraction images relative to a customizable, user-defined reference time point (see the sketch after this list).
- Diffusion series analysis: Diffusion weighted series is a special type of magnetic resonance imaging (MRI) sequence that measures diffusion of water molecules in the body. A set of different diffusion weighted images is obtained, and from this set the software allows the user to fit different mathematical models to extract different parametric maps. The parametric maps can be viewed as a separate series or be overlaid as color on another series.
- Report: The software allows the user to create reports following either the PIRADS (see definition) standard or a customizable standard.
- Export: The software allows the planned images to be exported for different procedures e.g. biopsy, therapy, etc. Information exported by ProFuse CAD could then be used in other software solutions like ProFuse Bx, ProFuse FP.
- Review: Review, add and modify data from the 3D visualization software (Artemis). The data includes acquired volume, organ segmentation, planned and recorded biopsy location and pathology information.
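The list above names subtraction imaging and diffusion model fitting but gives no formulas. Two minimal sketches of common formulations, not necessarily what ProFuse CAD implements, are shown here: subtraction of a user-chosen reference time point, and a mono-exponential apparent diffusion coefficient (ADC) fit, S(b) = S0 * exp(-b * ADC), solved by log-linear least squares. The function and parameter names are illustrative.

```python
import numpy as np

def subtraction_series(series, ref_index=0):
    """Subtract a user-chosen reference time point from every frame.

    series : (T, ...) array with one contrast-enhanced volume or slice per time point
    """
    return series - series[ref_index]

def adc_map(dwi, b_values):
    """Mono-exponential diffusion fit S(b) = S0 * exp(-b * ADC), log-linearized.

    dwi      : (B, ...) stack of diffusion-weighted images, one per b-value
    b_values : sequence of B b-values in s/mm^2
    Returns the ADC map in mm^2/s.
    """
    b = np.asarray(b_values, dtype=float)
    logs = np.log(np.clip(dwi, 1e-6, None)).reshape(len(b), -1)
    design = np.stack([np.ones_like(b), -b], axis=1)   # unknowns: [log S0, ADC]
    coef, *_ = np.linalg.lstsq(design, logs, rcond=None)
    return coef[1].reshape(dwi.shape[1:])
```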
The provided text describes the ProFuse CAD device and its regulatory submission (K173744) but does NOT contain specific, detailed information about acceptance criteria for particular performance metrics or the quantitative results of a study designed to prove the device meets those criteria.
The document discusses "Testing and Performance Data" and "Nonclinical Testing and Performance Information," stating that "all product and engineering specifications were verified and validated" and that "nonclinical and performance testing results are provided in the 510(k) and demonstrate that the predetermined acceptance criteria are met." However, it fails to specify what those acceptance criteria were or what the quantitative performance results were. It also mentions the use of "simulated and retrospective clinical data" for verification of diffusion and perfusion analysis tools but does not give the sample sizes or other study details requested.
Therefore, I cannot fully complete the requested table and information based on the provided text.
Here's what can be extracted and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
- Cannot create. The document states that predetermined acceptance criteria were met, but it does not specify what those criteria are (e.g., minimum accuracy, sensitivity, specificity, or volume estimation error) nor does it provide precise numerical results for the device's performance against any specific metric. It only vaguely mentions "measurement validation using phantoms, clinical images were used to show that ProFuse CAD performs as well as or better than the other primary and referenced predicate devices."
2. Sample size used for the test set and the data provenance
- Partial information: "Simulated and retrospective clinical data were used for verification of the diffusion and perfusion analysis tools."
- Missing: Specific sample sizes (number of cases/studies) for the test set. Country of origin for the retrospective clinical data is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Missing. The document does not specify how ground truth was established, nor does it mention the number or qualifications of experts involved in the test set evaluation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Missing. No information on adjudication methods for establishing ground truth or evaluating device performance.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Missing. The document describes ProFuse CAD as a "post-processing software tool" and "standalone Computer Aided Detection (CAD) software." It states it's intended to be "interpreted by a skilled physician" and that "Patient management decisions should not be made based solely on the results of ProFuse CAD." This implies human-in-the-loop usage. However, it does not describe an MRMC study comparing human performance with and without AI assistance, nor does it provide any effect size.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Implied but not detailed: The document states that "nonclinical and performance testing results are provided in the 510(k) and demonstrate that the predetermined acceptance criteria are met." It also mentions "measurement validation using phantoms, clinical images were used to show that ProFuse CAD performs as well as or better than the other primary and referenced predicate devices." This implies standalone performance was evaluated, especially for features like volume estimation accuracy with phantoms and diffusion/perfusion analysis. However, no specific standalone performance metrics or results are given.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Partial/Vague:
- For "volume estimation accuracy studies," "test phantoms incorporating simulated prostates" were used. For these, the ground truth would likely be the known volumes of the phantoms.
- For "diffusion and perfusion analysis tools," "simulated and retrospective clinical data" were used. The document does not specify how ground truth was established for these clinical cases (e.g., expert reads, pathology, follow-up).
- The "Intended Use" section mentions "adding lab pathology information when reviewing MR-TRUS fusion biopsy," which hints at pathology being a relevant ground truth for certain clinical applications, but it's not explicitly stated as the ground truth for the verification studies.
8. The sample size for the training set
- Missing. The document does not provide any information about the training set, indicating it was either not applicable (e.g., if it's a rule-based system or not a machine learning model where a distinct "training set" and "test set" are used in the typical ML sense, although "simulated and retrospective clinical data" were used for verification) or that this information was not deemed necessary for the 510(k) summary provided.
9. How the ground truth for the training set was established
- Missing. As the training set size is missing, so is this information.
In summary, the provided document is a high-level 510(k) clearance letter and a summary of the device's features and intended use. It asserts that performance testing was conducted and acceptance criteria were met but does not provide the detailed quantitative results or the comprehensive study design information that would be necessary to fill out the requested table and answer many of the questions.
(45 days)
EIGEN
Artemis along with the Needle Guide Attachment is used for image-guided interventional and diagnostic procedures of the prostate gland. It provides 2D and 3D visualization of Ultrasound (US) images and the ability to register these images with those from other imaging modalities such as Ultrasound, Magnetic Resonance, Computed Tomography, etc. It also provides the ability to display a simulated image of a tracked insertion tool such as a biopsy needle, guidewire or probe on a computer monitor screen that shows images of the target organ and the projected future path of the interventional instrument taking into account patient movement. The software also provides a virtual grid on the live ultrasound for performing systematic sampling of the target organ. Other software features include patient data management, multi-planar reconstruction, segmentation, image measurements, 2D/3D image registration, reporting, and pathology management.
Artemis is intended for treatment planning and guidance for clinical, interventional and/or diagnostic procedures. The device is intended to be used in interventional and diagnostic procedures in a clinical setting. Example procedures include, but are not limited to image fusion for diagnostic clinical examinations and procedures, soft tissue ablations and placement of fiducial markers. Artemis is also intended to be used for patients in active surveillance to keep track of previous procedures information and outcomes.
Artemis is designed to display the 2-D live video received from commercially available ultrasound machines and use this 2-D video to reconstruct a 3-D ultrasound image. The system has been designed to work with the clinicians' existing ultrasound machine, TRUS probe, commercially available needle guide, and needle gun combination. Additional software features include patient data management, multi-planar reconstruction, segmentation, image measurement, reporting and 3-D image registration.
Artemis is comprised of a mechanical assembly that holds the ultrasound probe and tracks probe position. The mechanical tracker is connected to a PC-based workstation containing a video digitizing card and running the image processing software. Control of the ultrasound probe and ultrasound system is done manually by the physician, just as it would be in the absence of Artemis. However, by tracking the position and orientation of the ultrasound probe while capturing the video image, the workstation is able to reconstruct and display a 3-D image and 3-D rendered surface model of the prostate.
The reconstructed 3-D image can be further processed to perform various measurements including volume estimation, and can be examined for abnormalities by the physician. Patient information, notes, and images may be stored for future retrieval. Locations for biopsies may be selected by the physician, displayed on the 3-D image and 3-D rendered surface model, and stored. Previously stored 3-D models may be recalled and a stored 3-D model may be aligned or registered to the current 3-D model of the prostate. This is especially useful for patients under active surveillance.
The physician may attach a commercially available biopsy needle guide compatible with the TRUS probe and use the probe and biopsy needle to perform tissue biopsy. Whenever the ultrasound machine is turned on by the physician, the live 2-D ultrasound image is displayed on the screen of Artemis during the biopsy. As the TRUS probe with attached needle guide is maneuvered by the physician, the position and orientation of the probe with respect to the organ is tracked. Artemis is able to add, display and edit loaded plans for biopsy as well as provide the probe position and needle trajectory relative to the 3-D image and 3-D rendered surface model of the prostate.
In addition to standard transrectal needle guidance procedures, Artemis also supports transperineal needle guidance by mounting a Needle Guide Attachment (NGA). A commercially available needle guide compatible with the NGA is used. The NGA provides additional data to track the needle direction angle. When using transperineal mode, the procedure planning, segmentation, registration and navigation are performed in the same way as in the standard transrectal procedure. The only difference lies in how the needle guide needs to be moved to target the different planned locations. For the transrectal procedure, the needle guide is always attached to the probe; therefore, moving the probe moves the needle guide. In transperineal needle guidance procedures the needle guide is not attached to the probe; therefore, the NGA needs to be moved to move the needle guide. Artemis highlights the target closest to the current needle guide position.
Artemis offers the physician additional 3-D information for assessing prostate abnormalities, planning and implementing biopsy procedures. The additional image processing features are generated with minimal changes to previous TRUS probe based procedures, and the physician always has access to the live 2-D ultrasound image during prostate assessment or biopsy procedure. The device also provides automated reports with information and pictures from the procedure.
The provided document is a 510(k) summary for the medical device "Artemis". This document focuses on demonstrating substantial equivalence to predicate devices and outlines nonclinical testing performed. However, it does not contain detailed acceptance criteria, specific reported device performance metrics (e.g., sensitivity, specificity, accuracy, dice score), or a study that directly proves the device meets specific performance acceptance criteria related to clinical efficacy or diagnostic accuracy.
The document states: "Nonclinical and performance testing results are provided in the 510(k) and demonstrate that the predetermined acceptance criteria are met. The Artemis has been designed to comply with the applicable standards." However, these detailed results and acceptance criteria are not elaborated within this specific K162474 summary. The provided text primarily focuses on regulatory compliance, safety, and a comparison of technological characteristics with predicate devices to establish substantial equivalence.
Based on the available text, here's what can be extracted and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
Criterion Type | Acceptance Criteria | Reported Device Performance |
---|---|---|
Safety & Effectiveness | Device labeling contains instructions for use and necessary cautions, warnings, and notes. Risk Management procedure identifies and controls potential hazards. | Passed all in-house testing criteria validating design, function, and specifications. Measurement validation using phantoms, clinical CT, and MRI images showed performance as well as or better than predicate devices, and demonstrated safety and effectiveness. |
Regulatory Compliance | Complies with applicable standards for emissions, immunity, risk, and usability. | Designed to comply with the following. Emissions: IEC/EN 60601-1-2:2007/AC:2010, EN 55011:2009+A1:2010, CISPR 11:2009+A1:2010, IEC 61000-3-2:2005+A1:2009+A2:2009, EN 61000-3-2:2006+A1:2009+A2:2009, IEC 61000-3-3:2008, EN 61000-3-3:2008. Immunity: IEC/EN 60601-1-2:2007/AC:2010, IEC 61000-4-2:2008, EN 61000-4-2:2009, IEC 61000-4-3:2006+A1:2008+A2:2010, EN 61000-4-3:2006+A1:2008+A2:2010, IEC 61000-4-4:2004+A1:2010, EN 61000-4-4:2004+A1:2010, IEC 61000-4-5:2005, EN 61000-4-5:2006, IEC 61000-4-6:2004/A2:2006, EN 61000-4-6:2009, IEC 61000-4-8:2009, EN 61000-4-8:2010, IEC 61000-4-11:2004, EN 61000-4-11:2004. Risk and Usability: EN/ISO 14971:2012, IEC 62366:2007, IEC 60601-1-6:2010. |
Note: The document explicitly states "Nonclinical and performance testing results are provided in the 510(k) and demonstrate that the predetermined acceptance criteria are met," but the specific numerical acceptance criteria for performance (e.g., accuracy of measurement, registration error tolerance, segmentation precision) and the corresponding quantitative results are not included in this summary. The "Reported Device Performance" column above summarizes the claim of meeting criteria rather than the data itself.
Regarding the study proving the device meets acceptance criteria:
The document mentions "Measurement validation using, phantoms, clinical CT, and MRI images were used to show that Artemis performs as well as or better than the other predicate devices and furthermore shows that Artemis was safe and effective." This indicates that nonclinical testing was performed.
However, the specific "study" details are limited in this summary. This document is a summary and refers to more detailed "Nonclinical and performance testing results...provided in the 510(k)." The information below is based only on what is explicitly stated in the provided text.
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: Not specified in the provided summary.
- Data Provenance: The document mentions "phantoms, clinical CT, and MRI images." It does not specify the country of origin of the clinical data. It is implied these were retrospective images used for validation, as it doesn't mention a prospective clinical trial for this 510(k).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This information is not provided in the summary.
4. Adjudication method for the test set
- This information is not provided in the summary.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- An MRMC comparative effectiveness study involving human readers with and without AI assistance is not mentioned in this summary. The device's primary function is image-guided interventional and diagnostic procedures (e.g., visualization, registration, navigation, segmentation, measurement for prostate biopsies/treatment), which are often tools for physicians rather than AI for interpretation of images. The comparison focuses on the device's technical performance against predicate devices, not on physician performance improvement.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- The document describes Artemis as providing "2D and 3D visualization," "ability to fuse and register images," "display a simulated image of a tracked insertion tool," "virtual grid," "patient data management, multi-planar reconstruction, segmentation, image measurements, 2D/3D image registration, reporting, and pathology management." These functionalities are tools to assist a human physician. The text "Control of the ultrasound probe and ultrasound system is done manually by the physician, just as it would be in the absence of Artemis" further indicates a human-in-the-loop system. While several features (segmentation, registration, measurement) are algorithmic, the context suggests these are components of a larger system used by a physician, rather than a standalone diagnostic AI algorithm. Therefore, a standalone performance evaluation in the typical sense of a diagnostic AI is not specifically detailed or claimed in this summary.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The summary states "Measurement validation using, phantoms, clinical CT, and MRI images." For phantoms, the ground truth would be the known physical dimensions or properties of the phantom. For clinical CT and MRI images, the method for establishing ground truth (e.g., pathology, surgical findings, expert measurement, or other imaging modalities) is not specified.
8. The sample size for the training set
- The document describes "nonclinical testing" and "measurement validation." It does not provide details on a "training set" for a machine learning algorithm, which suggests the device might not heavily rely on a machine learning model that requires a distinct training/test split in the way modern AI algorithms do for clinical inference. The discussion around "software source code for basic system functionality" points to more traditional image processing and navigation algorithms. Therefore, a "training set" in the context of deep learning is not applicable or mentioned in this summary.
9. How the ground truth for the training set was established
- As a training set is not explicitly mentioned or clearly applicable based on the summary, how its ground truth was established is not provided.
(74 days)
IGT LLC DBA EIGEN
Multi-Modality Image Fusion is a software application to be used by physicians in the clinic or hospital for 2-D and 3-D visualization, image registration, and fusion of MRI, CT and Ultrasound imaging modalities for mapping planning information across modalities. Additional software features include database management, data communication, surface rendering, segmentation, regions of interest (ROI) delineation, volumetric measurements, and data reporting.
Multi-Modality Image Fusion (MMIF) is software comprising two components, referred to as offline and online. The offline component pertains to the preparation of gland and suspected lesion boundaries on a DICOM image file days or hours prior to the biopsy procedure. The online component fuses the DICOM image files that were prepared with the offline component with a snapshot of the incoming TRUS image. Each of the two software components can work together or independently.
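The summary does not specify how the offline plan is carried into the online TRUS snapshot. A minimal sketch, assuming the online registration produces a 4x4 homogeneous transform from the MR/CT frame to the TRUS snapshot frame and that the plan is a set of boundary or target points, follows; the names are illustrative, not the MMIF API.

```python
import numpy as np

def map_plan_to_trus(plan_pts_mm, mr_to_trus):
    """Map offline-planned boundary or target points into the TRUS snapshot frame.

    plan_pts_mm : (N, 3) points outlined during offline preparation (MR/CT space, mm)
    mr_to_trus  : 4x4 homogeneous transform produced by the online registration step
    """
    homog = np.hstack([plan_pts_mm, np.ones((len(plan_pts_mm), 1))])
    return (homog @ mr_to_trus.T)[:, :3]
```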
The provided 510(k) summary for the Multi-Modality Image Fusion device does not contain a specific table of acceptance criteria or detailed results of a study proving the device meets those criteria. However, it does outline the types of tests performed and provides a general statement of success.
Based on the available information, here is a description of the acceptance criteria (inferred from the testing) and the study conducted:
1. Acceptance Criteria and Reported Device Performance
Since explicit acceptance criteria are not provided, they are inferred from the types of tests performed. The document states that "All product and engineering specifications were verified and validated" and that "All the above mentioned tests passed." This implies that the device successfully met the intended performance parameters for each test.
Acceptance Criteria (Inferred from Tests) | Reported Device Performance |
---|---|
Functional/Hardware Verification | Tests Passed |
- Hardware Verification | Successfully verified the MMIF hardware. |
- Online Software Application Verification | Successfully verified the online software application. |
- Offline Software Application Verification | Successfully verified the offline software application. |
Risk Mitigation Verification | Tests Passed |
- Risk Mitigation Verification | Successfully verified risk mitigation strategies. |
Accuracy (Benchmarking) | Tests Passed |
- Registration Accuracy for Clinical Data | Demonstrated accuracy in registration for clinical data. |
- Phantom Volume Measurements | Demonstrated accurate volume measurements using phantoms. |
- Phantom Registration Accuracy | Demonstrated accurate registration using phantoms. |
System Performance (Static & Dynamic) | Tests Passed |
- Overall System Performance | Confirmed static and dynamic performance of the complete system (both online and offline software on specified hardware) and compliance to specifications in a simulated real environment. |
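The bench tests above are reported only as pass/fail. On phantoms with known fiducial geometry, registration accuracy is commonly summarized as target registration error (TRE); the sketch below shows that metric purely as an illustration, not as the acceptance criterion the manufacturer actually applied.

```python
import numpy as np

def target_registration_error(registered_pts, reference_pts):
    """Per-fiducial and mean target registration error (TRE) in mm.

    registered_pts : (N, 3) fiducial positions after applying the registration
    reference_pts  : (N, 3) known ground-truth positions (e.g., phantom geometry)
    """
    err = np.linalg.norm(registered_pts - reference_pts, axis=1)
    return err, float(err.mean())
```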
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "clinical data collected from the hospital setting" for the "Registration Accuracy for Clinical Data (bench test)." However, the sample size for this test set (number of cases/patients) is not specified. The data provenance is generally stated as "clinical data collected from the hospital setting," indicating it is likely retrospective clinical data used for a bench test.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts used or their qualifications for establishing the ground truth of the "clinical data" used in testing. It only mentions that the "offline software would be typically executed by Radiologists."
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for establishing ground truth on the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not mentioned or described in the provided summary. The focus of the performance testing appears to be on the device's standalone accuracy and functionality.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
Yes, a standalone performance evaluation of the algorithm (referred to as "the new MMIF software") was performed. The "Testing and Performance Data" section describes "bench testing" activities conducted "to ensure the performance of the new MMIF software by verifying the accuracy of the specifications and simulating real customer data collected from the hospital setting." The registration accuracy and volume measurements were assessed for the software itself.
7. The Type of Ground Truth Used
The ground truth for the bench tests involved:
- Simulated prostates in test phantoms: Used for "Phantom Volume Measurements" and "Phantom Registration Accuracy." This represents a controlled and precisely known ground truth.
- "Clinical data collected from the hospital setting": Used for "Registration Accuracy for Clinical Data." The method for establishing ground truth for this clinical data (e.g., expert consensus, pathology, surgical outcomes) is not explicitly stated.
8. The Sample Size for the Training Set
The document does not specify a training set size. The summary focuses on device verification and validation, implying that the device was perhaps developed or configured based on various data, but the specific characteristics of a "training set" (in the machine learning sense) are not outlined.
9. How the Ground Truth for the Training Set Was Established
Since a training set is not explicitly mentioned, the method for establishing its ground truth is not provided. The device seems to be a software application for image processing and fusion, rather than a machine learning model that requires explicit training data in the context of this 510(k) summary. For software applications of this nature, ground truth is often implicitly established by the underlying physical principles of image registration algorithms and validated against physical models or existing clinical standards.
(14 days)
EIGEN LLC
The DSA 2000ex device is used in vascular imaging applications. During X-ray exposures, the DSA 2000ex is used to acquire video images from the video display chain provided by the X-ray manufacturer's system. The images are stored in the DSA 2000ex solid state memory, and written to the hard disk medium. Images are processed in real-time to provide increased image usability. The processing is primarily subtraction, but also includes window and level adjustments, as well as optional noise reduction, landscaping, image rotation and pixel shifting. The Eigen DSA 2000ex device is used in X-ray cardiology and radiology labs to enhance diagnostic capabilities of radiologists, and cardiologists, with minimal intervention required by users to perform basic capture, playback, and archiving functions. Additional functions include allowing measurements to be made for quantizing stenosis and guidance of catheters in the Roadmapping mode.
The DSA 2000ex device is used in vascular imaging applications. During X-ray exposures, the DSA 2000ex is used to acquire video images from the video display chain provided by the X-ray manufacturer's system. The images are stored in the DSA 2000ex solid state memory, and written to the hard disk medium. Images are processed in real-time to provide increased image usability. The processing is primarily subtraction, but also includes window and level adjustments, as well as optional noise reduction, landscaping, image rotation and pixel shifting. The Eigen DSA 2000ex device is used in X-ray cardiology and radiology labs to enhance diagnostic capabilities of radiologists, with minimal intervention required by users to perform basic capture, playback, and archiving functions. Additional functions include allowing measurements to be made for quantizing stenosis and guidance of catheters in the Roadmapping mode.
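The description lists mask subtraction plus window and level adjustment as the core real-time processing. A minimal sketch of that step for a single frame pair, with illustrative default window/level values rather than the device's actual settings, could look like this:

```python
import numpy as np

def dsa_frame(live, mask, window=512.0, level=0.0):
    """Digital subtraction with a simple window/level display mapping.

    live, mask : 2-D frames of equal shape; mask is the pre-contrast image
    window     : full display range of subtracted values
    level      : subtracted value mapped to mid-gray
    Returns an 8-bit image ready for display.
    """
    sub = live.astype(np.float32) - mask.astype(np.float32)
    lo, hi = level - window / 2.0, level + window / 2.0
    disp = np.clip((sub - lo) / (hi - lo), 0.0, 1.0)
    return (disp * 255.0).astype(np.uint8)
```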
The provided text describes the Eigen DSA 2000ex, a Digital Subtraction Angiography device, and its substantial equivalence to predicate devices. However, the document does not contain a detailed study with specific acceptance criteria, reported device performance metrics, or information about sample sizes, ground truth establishment, expert involvement, or MRMC studies that are typically associated with AI/ML device evaluations.
The relevant section, "Testing and Performance Data," states: "All product and engineering specifications were verified and validated. Test images as well as test phantoms incorporating simulated stenosis were developed and used to verify system performance through verification, validation and benchmarking." This is a very high-level statement and lacks the specificity required to answer the questions thoroughly.
Therefore, for aspects related to detailed performance studies and acceptance criteria as you've requested, the information is not available in the provided document.
Here's a breakdown of what can be extracted and what is not available:
1. Table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance |
---|---|
Not Available | Not Available (beyond general statement of "system performance through verification, validation and benchmarking") |
The document mentions "product and engineering specifications were verified and validated," and "Test images as well as test phantoms incorporating simulated stenosis were developed and used to verify system performance." However, specific quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds for stenosis detection) and the corresponding measured performance values are not provided.
2. Sample size used for the test set and the data provenance
- Sample Size (test set): Not Available. The document mentions "test images" and "test phantoms incorporating simulated stenosis" but does not specify the number of these.
- Data Provenance: Not Available. Given the nature of "test images" and "test phantoms," these are likely internally generated or simulated, not clinical patient data from a specific country or retrospective/prospective study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of experts: Not Available.
- Qualifications of experts: Not Available.
The document does not describe the establishment of ground truth by human experts for the test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not Available. No information on expert review or adjudication is provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No. This is not an AI/ML device in the modern sense; it's an image processing system. Therefore, an MRMC study comparing human readers with and without AI assistance is not described or relevant for this type of device according to the provided text. The device "enhances diagnostic capabilities of radiologists, and cardiologists" by improving image usability, but this is through image processing, not an AI-driven diagnostic aid.
- Effect Size: Not Applicable/Not Available.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Standalone Study: Not explicitly described. The device itself is an image processing system, which operates in a "standalone" fashion to process images. However, a formal "standalone performance study" with metrics like sensitivity/specificity for a diagnostic task, as would be expected for an AI algorithm, is not detailed. The system's "performance" is verified against engineering specifications and test phantoms, implying system-level functional performance rather than diagnostic accuracy.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: "Test phantoms incorporating simulated stenosis." This suggests an engineered, known truth set for evaluating the device's ability to process and visualize specific features. It's not clinical ground truth derived from pathology or patient outcomes.
8. The sample size for the training set
- Sample Size (training set): Not Applicable/Not Available. This device is an image processing system, not an AI/ML device that undergoes "training" in the contemporary sense. It's built based on established algorithms for image subtraction, noise reduction, etc., not trained on a dataset.
9. How the ground truth for the training set was established
- Ground Truth Establishment (training set): Not Applicable/Not Available. As it's not an AI/ML device that requires training, the concept of a training set ground truth does not apply. The algorithms are predefined based on image processing principles rather than learned from data.
(14 days)
EIGEN LLC
The 3-D Imaging Workstation is intended to be used by physicians in the clinic or hospital for 2-D and 3-D visualization of ultrasound images of the prostate gland. Additional software features include patient data management, multi-planar reconstruction, segmentation, image measurement and 3-D image registration.
The 3-D Imaging Workstation is designed to display the 2-D live video received from commercially available ultrasound machines and use this 2-D video to reconstruct a 3-D ultrasound image. The system has been designed to work with the clinicians' existing ultrasound machine, end fire TRUS probe, commercially available needle guide and needle gun combination. Additional software features include patient data management, multiplanar reconstruction, segmentation, image measurement and 3-D image registration.
The 3-D Imaging Workstation is comprised of a mechanical assembly that holds the ultrasound probe and tracks probe position while the physician performs a normal ultrasound imaging procedure of the subject prostate. The mechanical tracker is connected to a PC-based workstation containing a video digitizing card and running the image processing software. Control of the ultrasound probe and ultrasound system is done manually by the physician, just as it would be in the absence of the 3-D Imaging Workstation. However, by tracking the position and orientation of the ultrasound probe while capturing the video image, the workstation is able to reconstruct and display a 3-D image and 3-D rendered surface model of the prostate.
The reconstructed 3-D image can be further processed to perform various measurements including volume estimation, and can be examined for abnormalities by the physician. Patient information, notes, and images may be stored for future retrieval.
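The summary does not say how volume estimation is computed from the reconstructed image. If the prostate has been segmented as a binary mask, a straightforward estimate is voxel counting scaled by the voxel size; the sketch below illustrates that approach under this assumption.

```python
import numpy as np

def segmented_volume_ml(mask, voxel_size_mm):
    """Estimate organ volume from a binary 3-D segmentation mask.

    mask          : boolean array (nz, ny, nx), True inside the segmented organ
    voxel_size_mm : (dz, dy, dx) voxel spacing in millimetres
    Returns the volume in millilitres (1 mL = 1000 mm^3).
    """
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0
```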
Locations for biopsies may be selected by the physician, displayed on the 3-D image and 3-D rendered surface model, and stored. Previously stored 3-D models may be recalled and a stored 3-D model may be aligned or registered to the current 3-D model of the prostate.
Finally, the physician may attach a commercially available biopsy needle guide to the TRUS probe and use the probe and biopsy needle to perform tissue biopsy. Whenever the ultrasound machine is turned on by the physician, the live 2-D ultrasound image is displayed on the screen of 3-D Imaging Workstation during the biopsy. As the TRUS probe with attached needle guide is maneuvered by the physician, the position and orientation of the probe is tracked. The 3-D Imaging Workstation is able to add, display and edit plans for biopsy sites as well as an estimate of the probe position and needle trajectory relative to the 3-D image and 3-D rendered surface model of the prostate.
The 3-D Imaging Workstation offers the physician additional 3-D information for assessing prostate abnormalities, planning and implementing biopsy procedures. The additional image processing features are generated with minimal changes to previous TRUS probe based procedures, and the physician always has access to the live 2-D ultrasound image during prostate assessment or biopsy procedure.
Here's an analysis of the acceptance criteria and study information for the 3-D Imaging Workstation, based on the provided text:
Important Note: The provided 510(k) summary is very high-level and does not detail specific acceptance criteria or quantitative performance metrics typically found in a robust validation study report. Instead, it focuses on demonstrating "substantial equivalence" to predicate devices. Therefore, much of the requested information cannot be extracted directly from this document. The answers below reflect what can be found or inferred from the text.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Inferred from "Substantial Equivalence" claim): The device's performance characteristics (e.g., image measurement, multi-planar reformatting, segmentation, image registration, image storage/retrieval, patient information management) are sufficiently similar to the predicate devices (3-Dnet Suite K063107 and XELERIS 2 Processing and K051673) to achieve "substantial equivalence" for its intended use. While no specific quantitative thresholds are stated, the implication is that the performance meets industry standards and is clinically acceptable for its intended purpose.
Feature | Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|---|
3-D Ultrasound Reconstruction | Ability to reconstruct 3-D ultrasound images from 2-D video. | Reconstructs and displays 3-D images and surface models. |
Multiplanar Reconstruction | Comparable to predicate devices | Supports multiplanar reconstruction. |
Segmentation | Comparable to predicate devices | Supports segmentation. |
Image Measurement | Comparable to predicate devices (e.g., volume estimation). | Supports various measurements, including volume estimation. |
3-D Image Registration | Comparable to predicate devices; ability to align previously stored 3-D models to current ones. | Supports 3-D image registration. |
Patient Data Management | Comparable to predicate devices; ability to store and retrieve patient information, notes, and images. | Supports patient data management, storage, and retrieval. |
Biopsy Planning/Guidance | Ability to display and edit biopsy plans, estimate probe position, and needle trajectory relative to 3-D image. | Adds, displays, and edits biopsy plans, estimates probe position and trajectory. |
Clinical Workflow Integration | Minimal changes to existing TRUS probe-based procedures; access to live 2-D ultrasound during procedures. | Integrates without significant changes to workflow; provides live 2-D ultrasound display. |
Compatibility | Interoperability with existing ultrasound machines and TRUS probes. | Designed to work with clinicians' existing ultrasound machine, end fire TRUS probe, needle guide, and needle gun. |
Verification & Validation | All product and engineering specifications are verified and validated. | "All product and engineering specifications were verified and validated." |
2. Sample size used for the test set and the data provenance
- Sample Size: Not specified in the provided text. The testing involved "Test phantoms incorporating simulated prostates."
- Data Provenance: The device was tested using "Test phantoms incorporating simulated prostates." This implies the data was generated in a controlled, artificial environment rather than derived from human patient data. There is no mention of country of origin, retrospective, or prospective data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified. It's unclear if human experts were involved in establishing ground truth for the phantom studies, as phantoms often have known, measurable properties that can serve as ground truth directly.
4. Adjudication method for the test set
- Adjudication Method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not mentioned or described. This study focuses on validating the device's functionality and its "substantial equivalence" to predicate devices, not on comparing reader performance with and without the device. The device is a "3-D Imaging Workstation," not an AI-assisted diagnostic tool in the sense of directly altering human reader performance outcomes in an MRMC study.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Standalone Performance: The study described uses "Test phantoms" for "verification, validation... and benchmarking." This type of testing would primarily evaluate the algorithm's performance in reconstructing images, calculating volumes, and tracking, independent of real-time human interaction with live patient data for diagnostic decision-making. Therefore, a form of standalone performance evaluation on simulated data was conducted for the technical aspects of the software. However, it's not a standalone diagnostic performance reported with metrics like sensitivity/specificity for disease detection on clinical data.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: The ground truth for the test set was based on the known properties of "Test phantoms incorporating simulated prostates." This implies a known physical standard rather than expert consensus, pathology, or outcomes data from human subjects.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable. The document describes a 510(k) submission for a medical imaging workstation, not a machine learning or AI algorithm development process that typically involves a distinct training set. The device's functionality is based on established image processing algorithms, not a trainable model.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable. As noted above, the device does not appear to be an AI algorithm developed with a training set in the conventional sense. Its ground truth for validation was based on physical phantoms.
(30 days)
EIGEN
The Eigen DSA 2000 product is used in vascular imaging applications. During X-ray exposures, the DSA 2000 is used to acquire video images from the video display chain provided by the X-ray manufacturer's system. The images are stored in the DSA 2000 solid state memory, and written to the hard disk medium. Images are processed in real-time to provide increased image usability. The processing is primarily subtraction, but also includes window and level adjustments, as well as optional noise reduction, landscaping, and pixel shifting. The Eigen DSA 2000 device is used in X-ray cardiology and radiology labs to enhance diagnostic capabilities of radiologists and cardiologists, with minimal intervention required by users to perform basic capture, playback, and archiving functions.
The Eigen Digital Subtraction Angiography® (DSA) is a real-time video acquisition device that can be added to an existing standard line-rate X-ray system and provides creation of photos and real-time digital subtraction taken from a mask image. The DSA acquires and transmits data to a DICOM workstation or PAC system. The data will then be available for display. The DSA output conforms to the DICOM 3.0XA Standard for lossless images. The Eigen DSA is assembled on a Hewlett Packard (HP)/Intel platform and uses the Microsoft Windows XP® operating system.
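Pixel shifting in DSA usually refers to re-aligning the mask to the live frame to reduce misregistration artifacts before subtraction, although the summary does not describe the method used here. A brute-force sketch over small integer shifts, with illustrative names, is shown below.

```python
import numpy as np

def best_pixel_shift(live, mask, max_shift=4):
    """Integer pixel shift of the mask that minimizes subtraction energy.

    live, mask : 2-D frames of equal shape
    max_shift  : search range in pixels along each axis
    Returns (dy, dx), the best shift to apply to the mask before subtraction.
    """
    live_f = live.astype(np.float32)
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(mask, (dy, dx), axis=(0, 1)).astype(np.float32)
            err = float(np.mean((live_f - shifted) ** 2))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```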
The Eigen Digital Subtraction Angiography (DSA) 2000 is an image processing system used in vascular imaging applications. The information provided outlines its intended use and a general conclusion of its performance rather than specific quantitative acceptance criteria or a detailed study plan.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Functional equivalence to predicate device (Digital 8) with modifications | "Actual device performance as tested internally conforms to the system requirements." "The test results support the conclusion that the DSA 2000 is substantially equivalent to its predicate device, D8." |
Maintenance of intended use and fundamental scientific technology | "The modifications made to the DSA 2000 do not alter the intended use or the fundamental scientific technology of the device." |
Real-time video acquisition and image processing | The device "acquires video images from the video display chain"; "Images are processed in real-time to provide increased image usability." |
Specific processing functions (subtraction, window/level, noise reduction, landscaping, pixel shifting) | "The processing is primarily subtraction, but also includes window and level adjustments, as well as optional noise reduction, landscaping, and pixel shifting." |
Image storage and archiving | "Images are stored in the DSA 2000 solid state memory, and written to the hard disk medium." "Data Archiving" is listed as a function. |
DICOM 3.0 XA Standard conformance for lossless images | "The DSA output conforms to the DICOM 3.0XA Standard for lossless images." |
Enhance diagnostic capabilities for radiologists and cardiologists with minimal user intervention | "The Eigen DSA 2000 device is used in X-ray cardiology and radiology labs to enhance diagnostic capabilities of radiologists and cardiologists, with minimal intervention required by users to perform basic capture, playback, and archiving functions." |
2. Sample size used for the test set and the data provenance
The document states: "Testing was performed at the module and system level according to written test protocols established before the testing was conducted." However, it does not provide any specific sample size for the test set (e.g., number of images, patients, or cases). It also does not specify the data provenance (e.g., country of origin, retrospective or prospective). The testing appears to be internal (hence "tested internally").
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document mentions that "Test results were reviewed by designated technical professionals before release of the software." However, it does not specify the number of experts used or their qualifications. It also does not explicitly state that these "technical professionals" established ground truth for a test set in a clinical context; their role appears to be reviewing the internal technical test results.
4. Adjudication method for the test set
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for a clinical test set. The review mentioned seems to be technical verification of internal test results.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was mentioned in the provided text. The device is presented as a tool to enhance diagnostic capabilities, but no study comparing human readers with and without the device's assistance is described, nor is an effect size provided.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
The document describes the device as an "image processing system" that "enhances diagnostic capabilities of radiologists and cardiologists." This implies it is a human-in-the-loop system. No standalone algorithm performance study is indicated. The "Test Discussion" section refers to "module and system level" testing, which likely refers to the functional performance of the software and hardware components rather than a standalone clinical performance evaluation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not describe the type of ground truth used for any clinical evaluation as it focuses on functional equivalence and internal system requirements rather than a clinical performance study with established ground truth.
8. The sample size for the training set
The device described is an "image processing system" and not explicitly an AI/ML device that requires a training set in the contemporary sense. It performs fixed algorithms (e.g., subtraction, window/level, noise reduction). Therefore, no training set is mentioned or applicable in the context of this device's description.
9. How the ground truth for the training set was established
As there is no training set mentioned or applicable for this type of image processing system, the method for establishing its ground truth is not relevant and therefore not provided.