510(k) Data Aggregation
(479 days)
CerebralGo Plus is an image processing software package to be used by trained professionals, including, but not limited to, physicians and medical technicians. The software runs on standard "off-the-shelf" hardware and can be used for image viewing and processing. Data and images are acquired through DICOM-compliant imaging devices.
CerebralGo Plus provides viewing and processing capabilities for imaging datasets acquired with adult CTA (CT Angiography).
CerebralGo Plus is not intended for primary diagnostic use.
CerebralGo Plus is a medical image management and processing software package to be used by trained professionals, including, but not limited to physicians and medical technicians.
The software runs on standard "off-the-shelf" hardware and can be used to view and process images from DICOM-compliant CTA imaging which, when interpreted by a trained clinician, may yield information useful in clinical decision making.
The CerebralGo Plus system provides a wide range of basic image viewing, processing, and manipulation functions through multiple output formats. The software is used to visualize large vessels in head and neck CTA imaging.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria & Reported Device Performance
The document does not explicitly state formal acceptance criteria in a table format with pass/fail thresholds. Instead, it reports performance metrics (Dice Coefficient and 95% Hausdorff Distance) from a verification study. The implied "acceptance" is that these metrics demonstrate the device's efficacy for its intended use, particularly for segmentation tasks (implied by the Dice and Hausdorff metrics, which are common in image segmentation validation).
| Metric / Parameter | Unit of Measurement | Reported Device Performance |
|---|---|---|
| Dice Coefficient | Unitless (similarity measure) | 0.942 |
| 95% Hausdorff Distance | Millimeters (implied for medical imaging) | 3.692 |
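For reference, both reported metrics can be computed from a pair of binary segmentation masks. The sketch below is a minimal, brute-force illustration using invented toy masks (not the submission's data); real evaluations typically operate on 3D volumes with voxel spacing from the DICOM header.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity between two binary masks (1.0 = perfect overlap)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hd95(pred, gt, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance between the two masks'
    foreground points. Brute force; fine for small demo arrays only."""
    a = np.argwhere(pred) * spacing
    b = np.argwhere(gt) * spacing
    # Pairwise Euclidean distances between foreground points.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each pred point -> nearest gt point
    d_ba = d.min(axis=0)   # each gt point -> nearest pred point
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Toy example: two slightly shifted 5x5 squares.
pred = np.zeros((10, 10), dtype=bool); pred[2:7, 2:7] = True
gt   = np.zeros((10, 10), dtype=bool); gt[3:8, 3:8] = True
print(round(dice_coefficient(pred, gt), 3))  # 0.64
print(hd95(pred, gt))                        # 1.0
```

A Dice of 0.942 and an HD95 of 3.692 mm would thus indicate high volumetric overlap with most boundary disagreements within a few millimeters.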
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 141 images.
- Data Provenance: The images were collected from the US and are described as covering different genders, ages, ethnicities, equipment, and CT protocols. It is a retrospective dataset as it was collected for verification of a pre-existing algorithm.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: 3 radiologists.
- Qualifications of Experts: All 3 radiologists were from the US. Their specific experience level (e.g., years of experience, subspecialty) is not explicitly stated in the provided text.
4. Adjudication Method for the Test Set
- Adjudication Method: When the first two radiologists conflicted (presumably in their independent assessments), a third radiologist would arbitrate to generate the reference standard (ground truth). This can be described as a 2+1 adjudication model.
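One way to read this 2+1 model, assuming the arbitration is applied per voxel (an interpretation not stated in the summary), is sketched below with invented label arrays:

```python
import numpy as np

def adjudicate_2plus1(r1, r2, r3):
    """Per-voxel 2+1 adjudication: where the first two readers agree,
    their shared label is the reference; where they conflict, the third
    reader's label decides. Illustrative interpretation only."""
    agree = (r1 == r2)
    return np.where(agree, r1, r3)

# Toy 1D label arrays standing in for flattened segmentation masks.
r1 = np.array([1, 1, 0, 0])
r2 = np.array([1, 0, 0, 1])
r3 = np.array([0, 1, 1, 0])
print(adjudicate_2plus1(r1, r2, r3))  # [1 1 0 0]
```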
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Done? No, a human-in-the-loop MRMC study comparing human readers with AI assistance versus without AI assistance was not done. The study described is a standalone (algorithm-only) performance evaluation against a consensus ground truth.
- Effect Size of Human Improvement (if applicable): Not applicable, as no MRMC study was conducted.
6. Standalone (Algorithm Only) Performance Study
- Standalone Study Done? Yes. The performance data section explicitly states "Stand-alone software performance testing" and the results are presented for the algorithm itself (Dice Coefficient and Hausdorff Distance).
7. Type of Ground Truth Used
- Type of Ground Truth: The ground truth for the test set was established through expert consensus among 3 radiologists, using a 2+1 adjudication method.
8. Sample Size for the Training Set
- Training Set Sample Size: The exact sample size for the training set is not specified. The document only states that "Algorithm training of CerebralGo Plus has been conducted on images collected from China as training dataset."
9. How the Ground Truth for the Training Set Was Established
- Ground Truth Establishment for Training Set: The document does not provide details on how the ground truth for the training set (images collected from China) was established. It only mentions that these images were used for algorithm training.
(227 days)
PerfusionGo Plus is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians. The software runs on a standard "off-the-shelf" computer, and can be used to perform image processing, analysis, and communication of computed tomography (CT) perfusion scans of the brain. Data and images are acquired through DICOM-compliant imaging devices.
PerfusionGo Plus is a standalone software package that comprises several modules including Login Module, Image List Module, Image Processing Module and Management Configuration Module. PerfusionGo Plus allows a DICOM-compliant device to send files directly from the image modality, through a node on a local network, or from a PACS server. The device is designed to automatically receive, identify, extract, and analyze a CTP study of the head embedded in DICOM image data. The software outputs parametric maps related to tissue blood flow (perfusion) and tissue blood volume that are written back to the source DICOM. Following such analysis, the results of analysis can be exported manually. The software allows for repeated use and continuous processing of data and can be deployed on a supportive infrastructure that meets the minimum system requirements.
PerfusionGo Plus image analysis includes calculation of the following perfusion related parameters:
- Cerebral Blood Flow (CBF)
- Cerebral Blood Volume (CBV)
- Mean Transit Time (MTT)
- Time-to-peak (TTP)
- Residue function time-to-peak (Tmax)
- Time-density curve (TDC)
The primary users of PerfusionGo Plus are medical imaging professionals who analyze dynamic CT perfusion studies. The results of image analysis produced by PerfusionGo Plus should be viewed through appropriate diagnostic viewers when used in clinical decision making.
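To illustrate what the curve-based parameters above represent, the sketch below derives a time-to-peak (TTP) and a relative CBV from a synthetic time-density curve. The curve shapes, constants, and units are invented for illustration and do not come from the submission; clinical CBF/MTT/Tmax estimation additionally requires deconvolution with an arterial input function.

```python
import numpy as np

# Hypothetical gamma-variate-like bolus curves (arbitrary units).
t = np.arange(0, 40, 1.0)                  # seconds
tdc = (t / 8.0) ** 2 * np.exp(-t / 4.0)    # tissue time-density curve (TDC)
aif = (t / 4.0) ** 2 * np.exp(-t / 2.0)    # arterial input function

# Time-to-peak: the time at which the tissue curve reaches its maximum.
ttp = t[np.argmax(tdc)]

# Relative CBV: ratio of areas under the tissue and arterial curves
# (uniform time step, so a plain sum stands in for the integral).
cbv_rel = tdc.sum() / aif.sum()

print(ttp)  # 8.0
```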
The provided document is a 510(k) Premarket Notification from the U.S. FDA for the device "PerfusionGo Plus". It primarily focuses on demonstrating substantial equivalence to a predicate device (Viz CTP, K180161) rather than detailing a specific clinical performance study with acceptance criteria and ground truth for a novel AI device.
Therefore, the document does not contain the information required to fully answer all aspects of your request, particularly regarding specific acceptance criteria for a clinical study, human-in-the-loop performance, or the detailed methodology for establishing ground truth from expert consensus or pathology data for a large test set.
However, I can extract the information that is present and indicate what is missing:
Information Present in the Document:
- 1. A table of acceptance criteria and the reported device performance: Not explicitly provided as a formal table with acceptance criteria for a clinical study on device performance (e.g., accuracy against ground truth). The document states: "The results of performance testing showed that the PerfusionGo Plus achieved the pre-established performance goals for each perfusion parameters: CBF, CBV, MTT and Tmax." However, the specific pre-established performance goals (acceptance criteria) and the numerical results (reported device performance) are not detailed.
- 2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated. The document refers to "commercially available simulated datasets (digital phantom)".
- Data Provenance: "commercially available simulated datasets (digital phantom) generated by simulating tracer kinetic theory." This implies synthetic data rather than real patient data from a specific country or collected retrospectively/prospectively.
- 3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. The ground truth was established by "simulating tracer kinetic theory" for the digital phantoms, not by human experts.
- 4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable, as ground truth was not expert-based.
- 5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance: No, an MRMC study was not described. The document focuses on the standalone algorithm's performance against simulated ground truth. The device is described as an "image processing software package to be used by trained professionals," suggesting human-in-the-loop use, but a study evaluating the impact of AI assistance on human readers is not presented.
- 6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Yes, the described "performance testing" was of the standalone algorithm against simulated ground truth.
- 7. The type of ground truth used: "commercially available simulated datasets (digital phantom) generated by simulating tracer kinetic theory, and includes a wide range of clinically relevant values of perfusion parameters as ground truth." This is simulated/theoretical ground truth.
- 8. The sample size for the training set: Not mentioned. The document describes verification and validation testing, but does not specify a training set as would be typical for an AI model development. This might imply that the product is a rules-based image processing software rather than a machine learning/AI model that requires a training set. Given the context of "simulating tracer kinetic theory," it's highly likely to be a mathematical model implementation rather than a data-driven AI.
- 9. How the ground truth for the training set was established: Not applicable, as no training set is described.
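For context, digital perfusion phantoms of this kind are typically built on the indicator-dilution model, in which the tissue curve is the arterial input convolved with a residue function scaled by flow. The sketch below is a hedged illustration of that construction; all curve shapes and parameter values are assumptions, not taken from the submission.

```python
import numpy as np

# Indicator-dilution model: tissue(t) = CBF * (AIF convolved with R)(t),
# with residue function R(t) = exp(-t / MTT) and CBV = CBF * MTT
# (central volume principle). Values below are purely illustrative.
dt = 1.0
t = np.arange(0, 60, dt)
aif = (t / 4.0) ** 3 * np.exp(-t / 2.0)    # synthetic arterial input

cbf_true = 0.6                              # ground-truth flow (arb. units)
mtt_true = 4.0                              # ground-truth MTT, seconds
residue = np.exp(-t / mtt_true)

tissue = cbf_true * np.convolve(aif, residue)[: len(t)] * dt
cbv_true = cbf_true * mtt_true              # known exactly by construction

print(cbv_true)  # 2.4
```

Because each voxel's curve is generated from known parameters, the phantom's CBF, CBV, MTT, and Tmax values serve directly as the reference against which the algorithm's outputs are scored.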
Summary of Missing Information (based on the typical requirements for AI/ML device approval when clinical studies are performed):
- Specific numerical acceptance criteria for perfusion parameters (CBF, CBV, MTT, Tmax).
- Specific numerical results for the device's performance against these criteria.
- The exact sample size of the test set (number of simulated cases).
- Details on how the "commercially available simulated datasets" were verified or validated themselves to represent clinical reality.
- Any information regarding training data, which suggests this is not a traditional machine learning/AI device requiring such data, but rather an implementation of established tracer kinetic theory.
- Any details on human expert involvement (e.g., radiologists) in ground truth establishment or human-in-the-loop performance studies.
Conclusion based on the provided document:
The PerfusionGo Plus device underwent performance testing against "commercially available simulated datasets (digital phantom) generated by simulating tracer kinetic theory." The ground truth for this testing was the theoretical values derived from these simulations. The document states that the device "achieved the pre-established performance goals" for perfusion parameters (CBF, CBV, MTT, and Tmax), demonstrating accurate computation. However, the exact numerical acceptance criteria and the quantitative results are not provided in this 510(k) summary. No details on training data, human expert ground truth, or human-in-the-loop studies are included, indicating that this submission likely pertains to a deterministic image processing algorithm rather than a data-driven AI/ML solution.