ART-Plan's indicated target population is cancer patients for whom radiotherapy treatment has been prescribed. Within this population, the device is intended for any patient for whom relevant modality imaging data is available.
ART-Plan is not intended for patients less than 18 years of age.
The indicated users are trained medical professionals including, but not limited to, radiotherapists, radiation oncologists, medical physicists, dosimetrists and medical professionals involved in the radiation therapy process.
The indicated use environments include, but are not limited to, hospitals, clinics and any health facility involved in radiation therapy.
The ART-Plan application consists of three key modules: SmartFuse, Annotate and AdaptBox, allowing the user to display and visualise 3D multi-modal medical image data. The user may process, render, review, store, display and distribute DICOM 3.0 compliant datasets within the system and/or across computer networks.
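As a rough sketch of the kind of DICOM handling described above (not part of the submission), the following Python example reads a DICOM series into a 3D volume and displays one slice. The SimpleITK and matplotlib libraries and the directory path are assumptions chosen for illustration only.

```python
import SimpleITK as sitk
import matplotlib.pyplot as plt

# Hypothetical path to a DICOM 3.0 compliant series (e.g., a planning CT).
series_dir = "path/to/dicom_series"

# Read the whole series into a single 3D volume.
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames(series_dir))
volume = reader.Execute()

# Display the middle axial slice.
array = sitk.GetArrayFromImage(volume)          # shape: (slices, rows, cols)
plt.imshow(array[array.shape[0] // 2], cmap="gray")
plt.title("Middle axial slice")
plt.axis("off")
plt.show()
```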
Compared to the primary predicate (Ethos Treatment 2.1; Ethos Treatment Planning 1.1), the following additional feature has been added to ART-Plan v2.1.0:
- generation of synthetic CT from MR images. This does not represent an additional claim, as the technological characteristics are the same and it does not raise different questions of safety and effectiveness. This feature is also already covered by the reference device and by the previous version of the device, ART-Plan v1.10.1.
The ART-Plan technical functionalities claimed by TheraPanacea are the following:
- Proposing automatic solutions to the user, such as automatic delineation and automatic multimodal image fusion, to improve standardization of processes and performance and to reduce tedious, time-consuming user involvement.
- Offering the user a set of tools for semi-automatic delineation and semi-automatic registration, for manually modifying/editing automatically generated structures, adding/removing new/undesired structures, or imposing user-provided correspondence constraints on the fusion of multimodal images.
- Presenting to the user a set of visualization methods for the delineated structures and registration fusion maps.
- Saving the delineated structures / fusion results for use in the dosimetry process.
- Enabling rigid and deformable registration of patient image sets to combine information contained in the same or different modalities.
- Allowing the users to generate, visualize, evaluate and modify pseudo-CT from MRI and CBCT images.
- Allowing the users to generate, visualize and analyze dose on images of CT modality (only within the AdaptBox workflow).
- Presenting to the user metrics to determine whether or not replanning is needed.
The provided document describes the acceptance criteria and the study that proves the ART-Plan device meets these criteria across its various modules (Autosegmentation, SmartFuse, AdaptBox, Synthetic-CT generation, and Dose Engine).
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
Autosegmentation Tool
| Acceptance Criteria Type | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Quantitative (DSC) | a) DSC (mean) ≥ 0.8 (AAPM criterion), OR b) DSC (mean) ≥ 0.54 (inter-expert variability), OR c) DSC (mean) ≥ mean (DSC inter-expert) + 5% | Duodenum: DICE diff inter-expert = 1.32% (Passed) |
| | | Large bowel: DICE diff inter-expert = 1.19% (Passed) |
| | | Small bowel: DICE diff inter-expert = 2.44% (Passed) |
| Qualitative (A+B%) | A+B% ≥ 85% (A: acceptable without modification; B: acceptable with minor modifications/corrections; C: requires major modifications) | Right lacrimal gland: A+B = 100% (Passed) |
| | | Left lacrimal gland: A+B = 100% (Passed) |
| | | Cervical lymph nodes VIA: A+B = 97% (Passed) |
| | | Cervical lymph nodes VIB: A+B = 100% (Passed) |
| | | Pharyngeal constrictor muscle: A+B = 100% (Passed) |
| | | Anal canal: A+B = 98.68% (Passed) |
| | | Bladder: A+B = 93.42% (Passed) |
| | | Left femoral head: A+B = 100% (Passed) |
| | | Right femoral head: A+B = 100% (Passed) |
| | | Penile bulb: A+B = 96.05% (Passed) |
| | | Prostate: A+B = 92.10% (Passed) |
| | | Rectum: A+B = 100% (Passed) |
| | | Seminal vesicle: A+B = 94.59% (Passed) |
| | | Sigmoid: A+B = 98.68% (Passed) |
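For context on the two metric types in the table, a minimal Python sketch of how a Dice similarity coefficient (DSC) and an A+B% score can be computed and compared against the stated thresholds is given below. The masks, ratings and helper names are illustrative assumptions, not data or code from the submission.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return float(2.0 * intersection / denom) if denom else 1.0

def ab_percentage(ratings) -> float:
    """Percentage of cases rated A (acceptable as-is) or B (minor edits only)."""
    return 100.0 * sum(r in ("A", "B") for r in ratings) / len(ratings)

# Illustrative toy data, not from the 510(k) submission.
auto_mask = np.zeros((64, 64, 64), dtype=bool)
auto_mask[20:40, 20:40, 20:40] = True
expert_mask = np.zeros_like(auto_mask)
expert_mask[22:42, 20:40, 20:40] = True

dsc = dice_coefficient(auto_mask, expert_mask)
print(f"DSC = {dsc:.3f}; meets AAPM criterion (mean DSC >= 0.8): {dsc >= 0.8}")

ratings = ["A"] * 70 + ["B"] * 20 + ["C"] * 10   # hypothetical reviewer scores
score = ab_percentage(ratings)
print(f"A+B% = {score:.1f}%; meets 85% threshold: {score >= 85.0}")
```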
SmartFuse Module (Image Registration)
| Acceptance Criteria Type | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Quantitative (DSC) | a) DSC (mean) ≥ 0.81 (AAPM criterion), OR b) DSC (mean) ≥ 0.65 (benchmark device) | No specific DSC performance values are directly listed for SmartFuse, but the qualitative evaluations imply successful registration leading to acceptable contours. |
| Qualitative (A+B%) | Propagated Contours: A+B% ≥ 85% for deformable, A+B% ≥ 50% for rigid. Overall Registration Output: A+B% ≥ 85% for deformable, A+B% ≥ 50% for rigid. | tCBCT - sCT (Overall Registration Output): Rigid: A+B% = 95.56% (Passed); Deformable: A+B% = 97.78% (Passed) |
| | | tsynthetic-CT - sCT (Propagated Contours): Deformable: A+B% = 94.06% (Passed) |
| | | tCT - sSCT (Overall Registration Output): Rigid: A+B% = 70.37% (Passed) |
| Geometric | Target Registration Error (TRE); Jacobian determinant must be positive. | |
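The geometric criteria listed for SmartFuse (a positive Jacobian determinant, meaning the deformation does not fold space, and the Target Registration Error) can be illustrated with the following sketch. The displacement field and landmark coordinates are synthetic placeholders, not values from the submission.

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Jacobian determinant of the mapping x -> x + disp(x).

    disp has shape (3, Z, Y, X): one displacement vector per voxel.
    """
    # grads[i, j] = d(disp_i)/d(axis_j), each of shape (Z, Y, X)
    grads = np.stack([np.stack(np.gradient(disp[i], *spacing), axis=0)
                      for i in range(3)], axis=0)
    jac = grads + np.eye(3)[:, :, None, None, None]   # add the identity
    # Move the 3x3 matrix axes to the end and take the determinant per voxel.
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

def target_registration_error(landmarks_fixed, landmarks_warped):
    """Mean Euclidean distance (e.g., in mm) between corresponding landmarks."""
    return float(np.linalg.norm(landmarks_fixed - landmarks_warped, axis=1).mean())

# Synthetic, smooth displacement field (illustrative only).
z, y, x = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
disp = 0.5 * np.stack([np.sin(z / 8.0), np.cos(y / 8.0), np.sin(x / 8.0)])

det = jacobian_determinant(disp)
print("Jacobian determinant positive everywhere:", bool((det > 0).all()))

# Hypothetical landmark pairs in physical coordinates.
fixed_pts = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
warped_pts = fixed_pts + 0.8
print(f"TRE = {target_registration_error(fixed_pts, warped_pts):.2f} mm")
```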
Smart Segmentation Knowledge Based Contouring provides a combined atlas- and model-based approach for automated and manual segmentation of structures, including target volumes and organs at risk, to support the radiation therapy treatment planning process.
Smart Segmentation - Knowledge Based Contouring is a software-only product that provides a combined atlas- and model-based approach to automated segmentation of structures, together with tools for manual contouring or editing of structures. A library of already contoured expert cases is provided, which is searchable by anatomy, staging, or free text. Users also have the ability to add or modify expert cases to suit their clinical needs. Expert cases are registered to the target image and the selected structures are propagated. Smart Segmentation Knowledge Based Contouring supports inter- and intra-user consistency in contouring. The product also provides an anatomy atlas which gives examples of delineated organs for the whole upper body, as well as anatomy images and functional descriptions for selectable structures.
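To make the atlas-based workflow described above more concrete, here is a minimal sketch of registering an expert case to a target image and propagating its contours. It uses SimpleITK with a simple rigid registration, and all file names are hypothetical; it illustrates the general technique only and is not Varian's actual algorithm.

```python
import SimpleITK as sitk

# Hypothetical file names; any readable DICOM/NIfTI volumes would do.
atlas_img = sitk.ReadImage("expert_case_ct.nii.gz", sitk.sitkFloat32)
atlas_labels = sitk.ReadImage("expert_case_structures.nii.gz")   # contoured structures
target_img = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)

# Rigid registration of the expert (atlas) case onto the target image.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        target_img, atlas_img, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(target_img, atlas_img)

# Propagate the selected structures onto the target image grid.
# Nearest-neighbour interpolation keeps the label values intact.
propagated = sitk.Resample(atlas_labels, target_img, transform,
                           sitk.sitkNearestNeighbor, 0,
                           atlas_labels.GetPixelID())
sitk.WriteImage(propagated, "propagated_structures.nii.gz")
```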
The provided 510(k) summary for Varian's Smart Segmentation Knowledge Based Contouring (K133227) is primarily focused on demonstrating substantial equivalence to predicate devices (K112778 and K102011) due to changes in existing features and the addition of new ones (support for 4D-CT data and a new algorithm for mandible segmentation). The document does not contain a detailed study demonstrating specific acceptance criteria with reported performance metrics in the format requested.
The document states "Verification testing was performed to demonstrate that the performance and functionality of the new and existing features met the design input requirements" and "Validation testing was performed on a production equivalent device, under clinically representative conditions by qualified personnel." However, the specific acceptance criteria, performance results, and details of these tests (like sample sizes, ground truth establishment, expert qualifications, etc.) are not included in the provided text.
Therefore, for most of the requested information, a direct answer cannot be extracted from the given input.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. Table of acceptance criteria and the reported device performance
- Cannot be provided. The document states that "performance and functionality of the new and existing features met the design input requirements" and "Results from Verification and Validation testing demonstrate that the product met defined user needs and defined design input requirements." However, specific numerical acceptance criteria (e.g., Dice similarity coefficient > 0.8) and the corresponding reported device performance values are not detailed.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Cannot be provided. The document mentions "Validation testing was performed... under clinically representative conditions," but it does not specify the sample size of the test set, the country of origin of the data, or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Cannot be provided. The document refers to "expert cases" in the context of the device's functionality (a library of already contoured expert cases), but it does not detail the number or qualifications of experts used to establish ground truth for validation testing of the device itself.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Cannot be provided. The document does not describe any adjudication methods used for establishing ground truth or evaluating the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Cannot be provided. The document does not mention an MRMC comparative effectiveness study or the effect size of AI assistance on human readers. The device is described as "supporting inter and intra user consistency in contouring," but no study is detailed to quantify this improvement.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Implicitly yes, but no details are provided. The device is described as having "Automated Structure Delineation" and a "new algorithm for segmentation of the mandible." The "Verification testing" and "Validation testing" would logically evaluate the performance of these automated functions, implying a standalone evaluation. However, no specific performance metrics or study details for this standalone performance are given.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Implicitly expert contoured data, but no specific details for validation. The device itself uses a "library of already contoured expert cases." It is reasonable to infer that the ground truth for validation testing would also be based on expert contoured data, but the document does not explicitly state this for the validation set, nor does it specify if this was expert consensus, single expert, or another method.
8. The sample size for the training set
- Cannot be provided. The document mentions a "library of already contoured expert cases" which is central to a "knowledge based" system. This library would constitute the training data (or knowledge base). However, the sample size of this library or training set is not specified.
9. How the ground truth for the training set was established
- Implicitly by experts, but no specific details. The device uses a "library of already contoured expert cases." This implies the ground truth for these training cases was established by "experts." However, details on how these experts established this ground truth (e.g., number of experts, consensus process, qualifications) are not provided.