510(k) Data Aggregation (169 days)
The Avenda Health AI Prostate Cancer Planning Software is an artificial intelligence (AI)-based decision support software, indicated as an adjunct to the review of magnetic resonance (MR) prostate images and biopsy findings in the prostate oncological workflow. The Avenda Health AI Prostate Cancer Planning Software is designed to support the prostate oncological workflow by helping the user with the segmentation of MR image features, including the prostate; in the evaluation, quantification, and documentation of lesions; and in pre-planning for diagnostic and interventional procedures such as biopsy and/or soft tissue ablation. The device is intended to be used by physicians trained in the oncological workflow in a clinical setting for planning and guidance for clinical, interventional, diagnostic, and/or treatment procedures of the prostate.
The Avenda Health AI Prostate Cancer Planning Software's lesion characterization functions are intended for use on patients with a pathology-confirmed Gleason Grade Group (GGG) ≥ 2 lesion and for whom corresponding biopsy coordinate information has been uploaded. These functions are indicated for the extent of known disease. Extent of known disease refers to the boundary of a pathology-confirmed lesion of GGG ≥ 2 for a particular patient. Specifically, using prostate MR images, biopsy, pathology, and clinical data, the device creates and displays a cancer map that assigns a probability to each voxel within the prostate, indicating the likelihood that it contains clinically significant prostate cancer (csPCa, defined as GGG ≥ 2). A user selects a threshold for the cancer map to create a boundary of the lesion. The lesion boundary is assigned an Encapsulation Confidence Score indicating the confidence that all csPCa is encapsulated within the boundary. The Encapsulation Confidence Score is drawn from a lookup table generated from a database of cases with known ground truth. When interpreted by a trained physician, this information may be useful in supporting lesion characterization and subsequent patient management.
The Avenda Health AI Prostate Cancer Planning Software may also be used as a medical image application for the viewing, manipulation, 3D visualization, and comparison of MR prostate images, which can be viewed in a number of output formats including volume rendering. It enables visualization of information that would otherwise have to be visually compared in a disjointed manner.
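The summary describes the cancer-map workflow only at a high level: a per-voxel csPCa probability, a user-selected threshold that turns the map into a lesion boundary, and an Encapsulation Confidence Score read from a lookup table built on ground-truth cases. The Python sketch below is a minimal illustration of that flow under stated assumptions; the function names, the threshold, and the `CONFIDENCE_LOOKUP` values are hypothetical and do not reflect Avenda Health's actual implementation or calibration.

```python
import numpy as np

# Hypothetical threshold -> confidence table, standing in for the device's
# lookup table generated from a database of cases with known ground truth.
CONFIDENCE_LOOKUP = {0.1: 0.99, 0.3: 0.95, 0.5: 0.85, 0.7: 0.70}

def lesion_boundary_from_cancer_map(cancer_map: np.ndarray,
                                    prostate_mask: np.ndarray,
                                    threshold: float) -> np.ndarray:
    """Binarize a voxel-wise csPCa probability map at a user-chosen threshold,
    restricted to the prostate, yielding a candidate lesion boundary mask."""
    return (cancer_map >= threshold) & prostate_mask

def encapsulation_confidence(threshold: float) -> float:
    """Return the tabulated confidence (nearest entry) that all csPCa is
    encapsulated within the boundary produced at this threshold."""
    nearest = min(CONFIDENCE_LOOKUP, key=lambda t: abs(t - threshold))
    return CONFIDENCE_LOOKUP[nearest]

# Toy example: a 64^3 probability volume and a crude spherical prostate mask.
rng = np.random.default_rng(0)
cancer_map = rng.random((64, 64, 64))
grid = np.indices((64, 64, 64)) - 32
prostate_mask = (grid ** 2).sum(axis=0) < 24 ** 2

boundary = lesion_boundary_from_cancer_map(cancer_map, prostate_mask, 0.5)
print(boundary.sum(), encapsulation_confidence(0.5))
```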
The Avenda Health AI Prostate Cancer Planning Software ("AI Prostate Cancer Planning Software" or "Software") is an artificial intelligence (AI)-based decision support software, indicated as an adjunct to the review of magnetic resonance (MR) prostate images and biopsy findings in the prostate oncological workflow. The Avenda Health AI Prostate Cancer Planning Software is designed to support the prostate oncological workflow by helping the user with the segmentation of MR image features, including the prostate; in the evaluation, quantification, and documentation of lesions; and in pre-planning for diagnostic and interventional procedures such as biopsy and/or soft tissue ablation. The device is intended to be used by physicians trained in the oncological workflow in a clinical setting for planning and guidance for clinical, interventional, diagnostic, and/or treatment procedures of the prostate. The software has three main features:
- Artificial Intelligence (AI) Powered Prostate MRI Segmentation Tool,
- AI Powered Lesion Contour Tool, and
- Simulated Interventional Tool Placement.
The user can choose which subset of features of the Software to employ based on the specific oncological workflow. Not all features are required to be used for every workflow. Once the user has completed planning and has reviewed and verified the information, it can be exported into a supported file format such that it can be imported into a compatible interventional system or biopsy system.
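The summary does not name the supported export formats or the compatible interventional and biopsy systems. Purely as an illustration of the export step, the sketch below writes a verified lesion boundary as a NIfTI label map with a JSON metadata sidecar using the nibabel library; the format choice, file layout, and field names are assumptions, not the device's documented interface.

```python
import json
import numpy as np
import nibabel as nib  # assumed available; not referenced in the 510(k) summary

def export_plan(lesion_mask: np.ndarray, affine: np.ndarray,
                metadata: dict, out_prefix: str) -> None:
    """Write a lesion boundary mask as a NIfTI label map plus a JSON sidecar
    with planning metadata (hypothetical layout)."""
    nib.save(nib.Nifti1Image(lesion_mask.astype(np.uint8), affine),
             f"{out_prefix}_lesion.nii.gz")
    with open(f"{out_prefix}_plan.json", "w") as fh:
        json.dump(metadata, fh, indent=2)

# Toy usage: a dummy cubic boundary mask and an identity affine.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[24:40, 24:40, 24:40] = True
export_plan(mask, np.eye(4),
            {"threshold": 0.5, "encapsulation_confidence": 0.85},
            "patient001")
```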
The provided text is a 510(k) summary for the Avenda Health AI Prostate Cancer Planning Software. While it describes the device, its intended use, and generally states that performance testing was conducted, it does not provide a detailed table of acceptance criteria and reported device performance metrics with specific values beyond high-level summaries of reader study results. Similarly, it does not explicitly detail the sample size for the training set or the exact method used to establish ground truth for the training set.
However, based on the provided text, I can extract and infer some information to address these questions as completely as possible.
Acceptance Criteria and Study to Prove Device Meets Criteria
The document states that the device was deemed "substantially equivalent" to a predicate device (Quantitative Insights, Inc. QuantX) based on "performance bench, usability, and reader performance testing." While specific numerical acceptance criteria for each test are not explicitly detailed in a table, the effectiveness of the device is primarily demonstrated through the Multi-Reader, Multi-Case (MRMC) study for human reader performance and standalone performance testing for the prostate segmentation and lesion contouring algorithms.
1. Table of Acceptance Criteria and Reported Device Performance
As a detailed table of specific acceptance criteria values is not present, I will construct a table based on the stated performance outcomes for the key functionalities assessed. The document describes the "superiority" of the device-assisted contours over standard of care, implying these performance improvements were the implicit "acceptance criteria" for demonstrating effectiveness.
| Feature Assessed | Acceptance Criteria (Implicitly Met) | Reported Device Performance (Mean) |
|---|---|---|
| Lesion Contouring (Reader Study, AI-Assisted vs. SOC) | Improved sensitivity in encapsulating csPCa compared to SOC. | 97.4% (AI-assisted) vs. 38.2% (SOC) |
| | Improved specificity compared to hemi-gland contours. | 72.1% (AI-assisted) |
| | Improved balanced accuracy compared to SOC and hemi-gland contours. | 84.7% (AI-assisted) vs. 67.2% (SOC) and 75.9% (hemi-gland) |
| | Improved "clinical quality" of contours. | 99% of cases (AI-assisted) vs. 60% (hemi-gland) |
| | Improved complete csPCa encapsulation rate. | 72.8% (AI-assisted) vs. 1.6% (SOC) |
| Prostate Segmentation (Standalone) | Accurately segment the prostate organ in T2-weighted MRI. | Achieved in a standalone test set of 137 patients (specific metric, e.g., Dice score, not provided). |
| Lesion Contouring (Standalone) | Accuracy in contouring GGG ≥ 2 lesions. | Validated in an independent whole mount pathology dataset of N = 50 patients (specific metric, e.g., Dice score, not provided). |
Note: The document presents the results as demonstrations of improvement and validation rather than explicitly defined "acceptance criteria" with thresholds that were individually met. The P-values (< 0.0001) indicate statistical significance for the observed improvements.
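For context on the reader-study metrics in the table above, the sketch below shows one standard way to compute voxel-wise sensitivity, specificity, balanced accuracy, and a complete-encapsulation flag from a binary contour and a binary pathology ground-truth mask. It is a generic illustration of these definitions, not the study's actual evaluation code, and the toy masks exist only to make the example runnable.

```python
import numpy as np

def contour_metrics(contour: np.ndarray, truth: np.ndarray) -> dict:
    """Voxel-wise sensitivity, specificity, and balanced accuracy of a binary
    lesion contour against a binary csPCa ground-truth mask."""
    contour, truth = contour.astype(bool), truth.astype(bool)
    tp = np.logical_and(contour, truth).sum()
    tn = np.logical_and(~contour, ~truth).sum()
    fp = np.logical_and(contour, ~truth).sum()
    fn = np.logical_and(~contour, truth).sum()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "balanced_accuracy": 0.5 * (sensitivity + specificity),
        "complete_encapsulation": bool(fn == 0),  # all csPCa voxels covered
    }

# Toy example: a contour that covers most, but not all, of the truth region.
truth = np.zeros((32, 32, 32), dtype=bool)
truth[10:20, 10:20, 10:20] = True
contour = np.zeros_like(truth)
contour[9:19, 9:21, 9:21] = True
print(contour_metrics(contour, truth))
```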
2. Sample Sizes Used for the Test Set and Data Provenance
- Human Reader Study Test Set:
  - Cases: Composed of cases within a prostate whole mount pathology database derived from GGG 2-3 patients. The exact number of cases is not explicitly stated beyond "Overall, the study dataset consisted of cases..."
  - Data Provenance: Implied to be from a general "prostate whole mount pathology database." The country of origin is not specified.
  - Retrospective/Prospective: The use of an existing "pathology database" implies the data was retrospective.
- Standalone Performance Testing Test Sets:
  - Prostate Segmentation Test Set: 137 patients.
  - Lesion Contouring Test Set: N = 50 patients (independent whole mount pathology dataset).
  - Data Provenance: Not specified for either dataset.
  - Retrospective/Prospective: The use of existing "standalone test datasets" and a "whole mount pathology dataset" implies the data was retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Human Reader Study Ground Truth:
  - Number of Experts: Not explicitly stated as directly establishing ground truth for the test set; instead, the ground truth was "whole mount pathology data."
  - Qualifications: Not applicable, as the ground truth was pathology.
- Standalone Performance Testing Ground Truth:
  - Prostate Segmentation & Lesion Contouring: Ground truths were "clinically valid ground truths" and "whole mount pathology data." The number and qualifications of experts involved in creating or confirming these ground truths are not specified in the provided text.
4. Adjudication Method for the Test Set
- Human Reader Study: The ground truth was "whole mount pathology data." This is a definitive ground truth and typically does not require adjudication in the same way as expert consensus based on image review. The readers in the MRMC study provided their contours, which were then compared against this pathology ground truth.
- Standalone Performance Testing: Ground truth established against "clinically valid ground truths" and "whole mount pathology data." Adjudication methods for establishing these ground truths are not specified.
5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study was done, and the Effect Size of Reader Improvement with vs. without AI Assistance
- Yes, an MRMC study was done.
- Effect Size / Improvement:
- Sensitivity (csPCa encapsulation): Mean 97.4% (with AI) vs 38.2% (SOC) – an improvement of 59.2 percentage points.
- Specificity (compared to hemi-gland contours): Mean 72.1% (with AI). (SOC specificity not given in direct comparison, but "superior specificity" is stated).
- Balanced Accuracy: Mean 84.7% (with AI) vs 67.2% (SOC) – an improvement of 17.5 percentage points. Also 84.7% (with AI) vs 75.9% (hemi-gland) – an improvement of 8.8 percentage points.
- Complete csPCa Encapsulation Rate: 72.8% (with AI) vs 1.6% (SOC) – an improvement of 71.2 percentage points.
- "Clinical Quality": Improved in 99% of cases with AI vs 60% with hemi-gland contours – an improvement of 39 percentage points.
The document consistently states p-values of < 0.0001, indicating a highly statistically significant improvement for all measured metrics when readers used the AI-assisted tool compared to SOC methods.
6. If a Standalone Performance Study (i.e., algorithm only, without human-in-the-loop) was done
- Yes, standalone performance testing was done for:
- Prostate Segmentation Algorithm: "accurately segment the prostate organ in T2-weighted MRI in a standalone test set of 137 patients."
- Lesion Contouring Algorithm: "validated for accuracy in contouring GGG >2 lesions in the intended use population within a representative, independent whole mount pathology dataset of N=50 patients."
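The summary does not report which overlap metric was used for these standalone tests. As a point of reference, the sketch below computes the Dice similarity coefficient, a common choice for segmentation and contouring accuracy; its use here is an assumption, not something stated in the 510(k) summary.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two partially overlapping cubes.
a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a); b[10:22, 10:22, 10:22] = True
print(round(dice_score(a, b), 3))  # ≈ 0.579
```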
7. The Type of Ground Truth Used
- Human Reader Study: "Whole mount pathology data" registered to pre-operative T2-weighted MRI. This is a definitive, pathology-based ground truth.
- Standalone Performance Testing:
- Prostate Segmentation: "Clinically valid ground truths." (Specifics not provided)
- Lesion Contouring: "Whole mount pathology data."
8. The Sample Size for the Training Set
The sample size for the training set used to develop the AI algorithms is not explicitly stated in the provided 510(k) summary.
9. How the Ground Truth for the Training Set Was Established
The method for establishing ground truth for the training set is not explicitly stated. The document mentions that the lesion characterization functions' "Encapsulation Confidence Score is from a lookup table generated by a database of cases with known ground-truth." This implies the training data for the lesion characterization component also relied on "known ground-truth," likely pathology, but the process of its establishment is not detailed for the training set itself.