Search Results
Found 5 results
510(k) Data Aggregation
(266 days)
The primary function of ARTAssistant is to facilitate image processing, with image registration and synthetic CT (sCT) generation, in adaptive radiation therapy. This enables users to design ART plans based on the processed images.
ARTAssistant is standalone software positioned as an adaptive radiotherapy auxiliary system. It aims to provide a complete solution for implementing adaptive radiotherapy, helping hospitals deliver adaptive radiotherapy on ordinary image-guided accelerators. The system is mainly used to assist with the image processing of online adaptive radiotherapy, thereby helping users complete the design of the daily adaptive radiotherapy plan based on the processed images.
The product has three main image processing functions:
- Automatic registration: rigid and deformable registration;
- Image conversion: generation of synthetic CT from CBCT or MR;
- Image contouring: manual contouring of organs-at-risk in the head and neck, thorax, abdomen, and pelvis (both male and female) areas, assisted by contouring tools.
It also has the following general functions:
- Receive, add/edit/delete, transmit, input/export medical images and DICOM data;
- Patient management;
- Review of processed images.
Here's an analysis of the ARTAssistant device, focusing on its acceptance criteria and the study that proves it meets those criteria, based on the provided FDA 510(k) clearance letter:
There is no specific table of acceptance criteria or reported device performance for ARTAssistant directly included in the provided 510(k) summary. The summary primarily focuses on comparing ARTAssistant's technological characteristics to predicate and reference devices and describes the performance tests conducted rather than explicit pass/fail criteria or quantitative results against those criteria.
However, based on the performance test descriptions, we can infer the intent of the acceptance criteria and how the device performance was evaluated.
Inferred Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Inferred/Stated Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Automatic Rigid Registration | Non-inferiority in Normalized Mutual Information (NMI) and Hausdorff Distance (HD) compared to predicate device K221706. | "NMI and HD values of the proposed device was non-inferiority compares with that of the predicate device." |
| Automatic Deformable Registration | Non-inferiority in Normalized Mutual Information (NMI) and Hausdorff Distance (HD) compared to predicate device K221706. | "NMI and HD values of the proposed device was non-inferiority compares with that of the predicate device." |
| Image Conversion (sCT Generation) - Dosimetric Accuracy | Gamma Pass Rate within the acceptable range of AAPM TG-119 when comparing RTDose and sRTDose. | "Gamma Pass Rate of all test results is within the acceptable range of AAPM TG-119, which demonstrates the accuracy of the image conversion function." |
| Image Conversion (sCT Generation) - Anatomic/Geometric Accuracy | Segmentation results of ROIs on sCT compared to CBCT/MR demonstrate required geometric accuracy (evaluated by Dice similarity coefficient). | "The results indicate that the geometric accuracy of sCT images generated from both CBCT and MR meets the requirements." |
| Software Verification & Validation | Meet user needs and intended use, pass all software V&V tests. | "ARTAssistant passed all software verification and validation tests." |
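The registration metrics named in the table, Normalized Mutual Information and Hausdorff distance, are standard quantities. As an illustration only (not Manteia's implementation; the function names, the common NMI definition (H(A) + H(B)) / H(A, B), and the 32-bin histogram are assumptions), they can be sketched with NumPy:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.

    Ranges from 1.0 (statistically independent images) to 2.0 (identical
    intensity structure); higher values indicate better alignment.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()

    def entropy(p):
        p = p[p > 0]  # ignore empty bins
        return -np.sum(p * np.log2(p))

    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between point sets of shape (N, D) and (M, D).

    The largest distance from any point in one set to its nearest neighbour
    in the other; lower values mean closer contours or registered structures.
    """
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

A non-inferiority comparison of the kind described would compute these values for both devices on the same image pairs and test the difference against a pre-set margin.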
Study Details:
1. Sample Size Used for the Test Set and Data Provenance:
- Automatic Rigid & Deformable Registration Functions:
- Sample Size: Not explicitly stated; the summary refers only to "multi-modality image sets from different patients" without giving the number of sets or patients.
- Data Provenance: All fixed and moving images were generated in healthcare institutions in the U.S. Whether the data were retrospective or prospective is not specified, although such datasets are typically retrospective.
- Image Conversion Function:
- Sample Size: 247 testing image sets.
- Data Provenance: All test images were generated in the U.S. The data provenance is retrospective.
- Patient Demographics: 57% male, 43% female. Ages: 21-40 (13%), 41-60 (44.1%), 61-80 (36.8%), 81-100 (6.1%). Race: 78% White, 12% Black or African American, 10% Other.
- Cancer Types: Covers 6 cancer types (Intracranial tumor, nasopharyngeal carcinoma, esophagus cancer, lung cancer, liver cancer, cervical cancer) with specific distributions for both MR/CT and CBCT/CT test datasets.
- Scanner Models:
- CT: GE (28.3%), Philips (41.7%), Siemens (30%)
- MR: GE (21.6%), Philips (56.9%), Siemens (21.6%)
- CBCT: Varian (58.8%), Elekta (41.2%)
- Slice Thicknesses: Distributed as 1mm (19%), 2mm (22.8%), 2.5mm (17.4%), 3mm (17%), 5mm (23.8%).
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- The document does not explicitly state the number of experts or their qualifications used to establish ground truth for the test set.
- For the Image Conversion Dosimetric Accuracy, the AAPM TG-119 method is mentioned, which implies established phantom-based criteria or expert-derived dose distributions as a reference.
- For the Image Conversion Anatomic/Geometric Accuracy (Dice coefficient), the "segmentation results of each ROI on CBCT/MR" were compared, implying these "true" segmentations would likely have been established by qualified medical professionals, but this is not confirmed.
3. Adjudication Method for the Test Set:
- The document does not explicitly state an adjudication method (such as 2+1 or 3+1) for the test set. The evaluation methods described (NMI, HD, Gamma Pass Rate, Dice coefficient) are quantitative metrics compared against either a predicate device's output or established physical/dosimetric accuracy standards.
4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
- No, an MRMC comparative effectiveness study was not explicitly mentioned or performed.
- The performance tests focused on the algorithm's standalone performance in comparison to either a predicate device's algorithm or established accuracy standards, not on how human readers improve with AI assistance.
5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- Yes, a standalone performance evaluation was conducted. The described "Performance Test Report on Rigid Registration Function," "Performance Test Report on Deformable Registration Function," and "Performance Test Report on Image Conversion Function" all relate to the algorithm's direct output and quantitative measurements without human intervention being part of the primary performance evaluation.
6. The Type of Ground Truth Used:
- For Rigid and Deformable Registration: The ground truth for comparison was the performance metrics (NMI and HD) of the predicate device (AccuContour, K221706). This indicates a comparative ground truth rather than an absolute biological or pathological ground truth.
- For Image Conversion (Dosimetric Accuracy): The ground truth was based on the AAPM TG-119 method, implying a phantom-based or established dosimetric standard against which the sRTDose was compared to the RTDose derived from true CT.
- For Image Conversion (Anatomic/Geometric Accuracy): The ground truth was the segmentation results of ROIs on the original CBCT/MR images, against which the segmentations on the sCT images were compared using the Dice similarity coefficient. This suggests expert consensus or manually established contours on the original images as ground truth.
7. The Sample Size for the Training Set:
- For the deep learning model for image conversion: There were 560 training image sets.
- The document does not specify training set sizes for the rigid or deformable registration algorithms.
8. How the Ground Truth for the Training Set Was Established:
- For the deep learning model for image conversion: The document does not explicitly detail how the ground truth for the 560 training image sets was established. Given the nature of synthetic CT generation, the "ground truth" for training would typically involve pairs of input images (e.g., MR/CBCT) and corresponding reference CT images. This would likely be derived from clinical scans, potentially aligned and processed for model training, but the process of establishing the "correctness" of these pairs (e.g., precise anatomical alignment, image quality) is not elaborated upon.
- Data Provenance (Training Set): The training image set source is from China.
(231 days)
AccuCheck is quality assurance software for general offline planning, online adaptive planning, and various radiotherapy technologies such as photon and proton therapy. It is used for data transfer integrity checks, secondary dose calculation with a Monte Carlo algorithm, and treatment plan verification in radiotherapy. AccuCheck also provides independent dose verification based on accelerator delivery logs after radiotherapy plan execution.
AccuCheck is not a treatment planning system or a radiation delivery device. It is to be used only by trained radiation oncology personnel for quality assurance purposes.
AccuCheck, defined as a radiotherapy plan quality assurance system, aims to improve the clinical efficiency of offline and online quality control. AccuCheck supports a Monte Carlo dose calculation engine and is applicable to the quality assurance of general offline planning, online adaptive planning, and various radiotherapy technologies such as photon and proton therapy.
AccuCheck is to be used for the quality assurance of offline plans and online adaptive radiotherapy plans:
- The TPS Check module checks whether the parameters of the treatment plan are within the executable range of the machine.
- The Dose Check module uses an independent dose calculation engine to re-calculate the original plan before treatment and compares the result with the dose of the original plan.
- The Transfer Check module verifies whether errors occurred during transfer from the TPS to the accelerator.
- The Log Check module obtains the execution log of each accelerator run, calculates dose through an independent dose calculation engine, and compares it with the dose of the original plan.
- Treatment Summary supports physical accumulation of doses delivered over multiple executions of a single plan, reflecting the stability of accelerator operation; it can also reconstruct logs onto the fractional images to evaluate the patient's daily exposure dose.
AccuCheck provides a rich set of auxiliary analysis tools, including DVH graphs, Gamma analysis, target coverage, per-ROI Gamma pass rate, dose statistics, and clinical goals evaluation.
N/A
(210 days)
AccuCheck is a quality assurance software used for data transfer integrity check, secondary dose calculation with Monte Carlo algorithm, and treatment plan verification in radiotherapy. AccuCheck also provides independent dose verification based on LINAC delivery log after radiotherapy plan execution.
AccuCheck is not a treatment planning system or a radiation delivery device. It is to be used only by trained radiation oncology personnel for quality assurance purposes.
AccuCheck uses the TPS Check module to check the parameters of the radiotherapy plan and determine whether the plan is executable by the linear accelerator (LINAC).
AccuCheck also uses the Dose Check module to verify dose calculations for radiation treatment plans before radiotherapy by independently recalculating the radiation dose with a Monte Carlo algorithm; the radiation dose is initially calculated by a Treatment Planning System (TPS).
AccuCheck uses the Transfer Check module to verify the integrity of the treatment plan transmitted from the TPS to the LINAC and to check whether errors occurred during transmission.
AccuCheck performs dose delivery quality assurance by using measured data recorded in a LINAC's delivery log files to reconstruct the executed plan and calculate the delivered dose. This is achieved through two software modules of the Subject Device, Pre-treatment Check and Treatment Check, which differ only in usage scenario: Pre-treatment Check processes the logs from the first execution of the treatment plan on the LINAC, without a patient being treated, while Treatment Check processes the logs from the second and subsequent executions, with a patient being treated. AccuCheck is not used for log verification as such, but rather for dose calculation based on log data such as LINAC delivery logs. Reconstruction of the executed plan and calculation of the delivered dose from delivery logs is supported on both Varian and Elekta LINACs.
The product provides multiple tools to assist analysis, including dose-volume histogram, Gamma analysis, target coverage, per-ROI Gamma passing rate, dose statistics, and clinical goals evaluation.
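Gamma analysis, one of the tools listed, combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when its gamma value is at most 1. A minimal 1-D global-gamma sketch follows. This is not AccuCheck's algorithm; the 3%/3 mm defaults are a common clinical convention assumed here, and the function name is illustrative:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma analysis: for each reference point, gamma is the
    minimum combined dose/distance deviation over all evaluated points.

    dose_tol is a fraction of the reference maximum (3% here); dist_tol is
    the distance-to-agreement criterion in the units of `positions` (3 mm).
    Returns the fraction of reference points with gamma <= 1.
    """
    dd = dose_tol * ref_dose.max()  # absolute dose criterion
    # pairwise deviations: rows = reference points, cols = evaluated points
    dose_term = (ref_dose[:, None] - eval_dose[None, :]) / dd
    dist_term = (positions[:, None] - positions[None, :]) / dist_tol
    gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min(axis=1)
    return float((gamma <= 1.0).mean())
```

In practice the comparison runs over a full 3-D dose grid, which only changes the distance term to a 3-D nearest-neighbour search.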
AccuCheck: Acceptance Criteria and Performance Study
This document outlines the acceptance criteria for the AccuCheck device and details the study conducted to demonstrate the device's performance against these criteria.
1. Acceptance Criteria and Reported Device Performance
The provided 510(k) summary describes a secondary dose calculation verification test as a key performance evaluation. While explicit numerical acceptance criteria are not presented in a table format, the narrative indicates that "The results of all test cases passed the test criteria". Based on the checked items during the test, the implied acceptance criteria are the accurate and consistent representation of various dose analysis metrics by AccuCheck when compared to FDA-cleared Treatment Planning Systems (TPS).
Table 1: Implied Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Accurate Dose-Volume Histogram (DVH) representation | Passed for all test cases. Differences in DVH limits were checked. |
| Accurate Dose Index calculation | Passed for all test cases. Differences in dose indices were checked. |
| Accurate 3D Dose Distribution representation | Passed for all test cases. |
| Accurate Dose Profile representation | Passed for all test cases. |
| Accurate Gamma Distribution calculation | Passed for all test cases. |
| Correct Pass/Fail results for DVH limits | Passed for all test cases. |
| Accurate 3D Gamma Passing Rate calculation | Passed for all test cases. |
| Acceptable differences in dose indices compared to FDA-cleared TPS | Passed for all test cases. |
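Several of the checked items above (DVH limits, dose indices) rest on the cumulative dose-volume histogram: for each dose level, the percent of an ROI's volume receiving at least that dose. An illustrative sketch only, not AccuCheck's implementation; uniform voxel volumes, the function name, and the bin width are assumptions:

```python
import numpy as np

def cumulative_dvh(roi_dose, bin_width=0.5):
    """Cumulative DVH for the dose values inside one ROI.

    Returns dose levels and, for each level, the percent of ROI volume
    receiving at least that dose. Assumes every voxel has the same volume.
    """
    levels = np.arange(0.0, roi_dose.max() + bin_width, bin_width)
    volume_pct = np.array([(roi_dose >= d).mean() * 100.0 for d in levels])
    return levels, volume_pct
```

Checking "differences in DVH limits" then amounts to comparing the curve from a secondary calculation against the TPS curve at the clinically specified dose or volume points.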
2. Sample Size and Data Provenance
- Sample Size for Test Set: 20 patients.
- Case Distribution:
- 10 samples for Head and Neck cancers.
- 5 samples for Chest cancers.
- 5 samples for Abdomen cancers.
- Specific cancer types mentioned: Brain, Lung, Head and Neck, and GI cancers.
- Data Provenance: The document does not explicitly state the country of origin or if the data was retrospective or prospective. However, it mentions that the patients "have been treated with IMRT and VMAT techniques," suggesting the use of retrospective data from actual patient treatments. The joint testing devices included two FDA-cleared LINACs and two FDA-cleared TPS systems, implying that these treatment plans and potentially associated patient data originated from clinical settings where these devices are used.
3. Number and Qualifications of Experts for Ground Truth
The document does not explicitly state the number of experts used to establish the ground truth or their specific qualifications (e.g., radiologist with X years of experience). However, the ground truth is established by the "FDA cleared TPS" (Treatment Planning Systems), implying that the generated dose calculations from these cleared systems serve as the reference for accuracy. The expertise is inherently embedded in the design and validation of these cleared TPS and the physics teams operating them in clinical practice.
4. Adjudication Method
The document does not describe a specific adjudication method (e.g., 2+1, 3+1). The evaluation appears to be a direct comparison between the AccuCheck's calculations and those of the FDA-cleared TPS, with the "test criteria" seemingly pre-defined based on acceptable differences or concordance metrics.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was mentioned. The study described focuses on the device's standalone performance compared to established TPS. There is no information provided regarding human readers improving with AI vs. without AI assistance.
6. Standalone Performance Study
Yes, a standalone performance study was conducted. The "Secondary Dose Calculation verification test" directly assesses the AccuCheck device's ability to perform secondary dose calculations independently and accurately when compared against established FDA-cleared TPS. The device operates without human intervention in its calculation process for this specific test.
7. Type of Ground Truth Used
The ground truth used for the secondary dose calculation verification test is based on the dose calculations generated by FDA-cleared Treatment Planning Systems (TPS). This implies a reference to established and regulatory-approved dose calculation methods.
8. Sample Size for the Training Set
The document does not provide information about the sample size used for the training set. The descriptions focus solely on the verification and validation (test) phase.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for any training set was established, as it does not explicitly mention a training phase for the device development. The performance data section describes verification and validation using established TPS as the reference.
(209 days)
The MOZI Treatment Planning System (MOZI TPS) is used to plan radiotherapy treatments with malignant or benign diseases. MOZI TPS is used to plan external beam irradiation with photon beams.
The proposed device, MOZI Treatment Planning System (MOZI TPS), is standalone software used to plan radiotherapy treatments (RT) for patients with malignant or benign diseases. Its core functions include image processing, structure delineation, plan design, optimization, and evaluation. Other functions include user login, a graphical interface, and system and patient management. It provides a platform for completing the work of the whole RT plan.
The provided text describes the performance data for the MOZI TPS device, focusing on its automatic contouring (structure delineation) feature. Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided document:
1. A table of acceptance criteria and the reported device performance
The primary acceptance criterion mentioned for structure delineation (automatic contouring) is based on the Mean Dice Similarity Coefficient (DSC). The study aimed to demonstrate non-inferiority compared to a reference device (AccuContour™ - K191928). While explicit thresholds for "acceptable" Mean DSC values are not given as numerical acceptance criteria in the table below, the text states "The result demonstrated that they have equivalent performance," implying that the reported DSC values met the internal non-inferiority standard set by the manufacturer against the performance of the reference device.
The acceptance criterion for every OAR below is the same: a Mean DSC demonstrating "equivalent performance" (non-inferiority) to the reference device AccuContour™ (K191928).

| Body Part | OAR | Mean DSC | Mean Std. Dev. |
|---|---|---|---|
| Head&Neck | Brainstem | 0.88 | 0.03 |
| | BrachialPlexus_L | 0.61 | 0.05 |
| | BrachialPlexus_R | 0.64 | 0.05 |
| | Esophagus | 0.84 | 0.02 |
| | Eye-L | 0.93 | 0.02 |
| | Eye-R | 0.93 | 0.02 |
| | InnerEar-L | 0.78 | 0.06 |
| | InnerEar-R | 0.82 | 0.04 |
| | Larynx | 0.87 | 0.02 |
| | Lens-L | 0.77 | 0.07 |
| | Lens-R | 0.72 | 0.08 |
| | Mandible | 0.90 | 0.02 |
| | MiddleEar_L | 0.73 | 0.04 |
| | MiddleEar_R | 0.74 | 0.04 |
| | OpticNerve_L | 0.61 | 0.07 |
| | OpticNerve_R | 0.62 | 0.08 |
| | OralCavity | 0.90 | 0.03 |
| | OpticChiasm | 0.64 | 0.10 |
| | Parotid-L | 0.83 | 0.03 |
| | Parotid-R | 0.83 | 0.04 |
| | PharyngealConstrictors_U | 0.87 | 0.03 |
| | PharyngealConstrictors_M | 0.88 | 0.02 |
| | PharyngealConstrictors_L | 0.87 | 0.03 |
| | Pituitary | 0.74 | 0.14 |
| | SpinalCord | 0.85 | 0.04 |
| | Submandibular_L | 0.86 | 0.04 |
| | Submandibular_R | 0.87 | 0.03 |
| | TemporalLobe_L | 0.89 | 0.03 |
| | TemporalLobe_R | 0.89 | 0.03 |
| | Thyroid | 0.86 | 0.03 |
| | TMJ_L | 0.79 | 0.06 |
| | TMJ_R | 0.74 | 0.06 |
| | Trachea | 0.90 | 0.02 |
| Thorax | Esophagus | 0.80 | 0.05 |
| | Heart | 0.98 | 0.01 |
| | Lung_L | 0.99 | 0.00 |
| | Lung_R | 0.99 | 0.00 |
| | SpinalCord | 0.97 | 0.02 |
| | Trachea | 0.95 | 0.02 |
| Abdomen | Duodenum | 0.64 | 0.05 |
| | Kidney_L | 0.96 | 0.02 |
| | Kidney_R | 0.97 | 0.01 |
| | Liver | 0.95 | 0.02 |
| | Pancreas | 0.79 | 0.04 |
| | SpinalCord | 0.82 | 0.02 |
| | Stomach | 0.89 | 0.02 |
| Pelvic-Man | Bladder | 0.92 | 0.03 |
| | BowelBag | 0.89 | 0.04 |
| | FemurHead_L | 0.96 | 0.02 |
| | FemurHead_R | 0.95 | 0.02 |
| | Marrow | 0.90 | 0.02 |
| | Prostate | 0.85 | 0.04 |
| | Rectum | 0.88 | 0.03 |
| | SeminalVesicle | 0.72 | 0.07 |
| Pelvic-Female | Bladder | 0.88 | 0.02 |
| | BowelBag | 0.87 | 0.02 |
| | FemurHead_L | 0.96 | 0.02 |
| | FemurHead_R | 0.95 | 0.02 |
| | Marrow | 0.89 | 0.02 |
| | Rectum | 0.77 | 0.04 |
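The Dice Similarity Coefficient underlying the table can be sketched for two binary segmentation masks. This is an illustration of the standard DSC formula, not the MOZI TPS implementation; the function name and the convention that two empty masks agree perfectly are assumptions:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two boolean segmentation masks.

    1.0 means perfect overlap; 0.0 means no overlap at all.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention assumed here: two empty masks agree perfectly
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

A Mean DSC like those reported would average this value over all test cases for a given OAR, with the expert consensus contour as one mask and the automatic contour as the other.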
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: 187 image sets (CT structure models).
- Data Provenance: The testing image source is from the United States. The data is retrospective, as it consists of existing CT datasets.
- Patient demographics: 57% male, 43% female. Ages: 21-30 (0.3%), 31-50 (31%), 51-70 (51.3%), 71-100 (14.4%). Race: 78% White, 12% Black or African American, 10% Other.
- Anatomical regions: Head and Neck (20.3%), Esophageal and Lung (Thorax, 20.3%), Gastrointestinal (Abdomen, 20.3%), Prostate (Male Pelvis, 20.3%), Female Pelvis (18.7%).
- Scanner models: GE (28.3%), Philips (33.7%), Siemens (38%).
- Slice thicknesses: 1mm (5.3%), 2mm (28.3%), 2.5mm (2.7%), 3mm (23%), 5mm (40.6%).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of experts: Six
- Qualifications of experts: Clinically experienced radiation therapy physicists.
4. Adjudication method for the test set
- Adjudication method: Consensus. The ground truth was "generated manually using consensus RTOG guidelines as appropriate by six clinically experienced radiation therapy physicists." This implies that the experts agreed upon the ground truth for each case.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No, a multi-reader, multi-case comparative effectiveness study was not performed to assess human reader improvement with AI assistance. The study focused on the standalone performance of the AI algorithm (automatic contouring) and its comparison to a reference device.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Yes, a standalone performance evaluation of the automatic segmentation algorithm was performed. The reported Mean DSC values are for the MOZI TPS device's auto-segmentation function without direct human-in-the-loop interaction during the segmentation process. The comparison to the reference device AccuContour™ (K191928) was also a standalone comparison.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Expert consensus. The ground truth was "generated manually using consensus RTOG guidelines as appropriate by six clinically experienced radiation therapy physicists."
8. The sample size for the training set
- Training Set Sample Size: 560 image sets (CT structure models).
9. How the ground truth for the training set was established
- The document states that the training image set source is from China. It does not explicitly detail the method for establishing ground truth for the training set. However, given that the ground truth for the test set was established by "clinically experienced radiation therapy physicists" using "consensus RTOG guidelines," it is highly probable that a similar methodology involving expert delineation and review was used for the training data to ensure high-quality labels for the deep learning model. The statement that "They are independent of each other" (training and testing sets) implies distinct data collection and ground truth establishment processes, but the specific details for the training set are not provided.
(269 days)
It is used by radiation oncology department to register multi-modality images and segment (non-contrast) CT images, to generate needed information for treatment planning, treatment evaluation and treatment adaptation.
The proposed device, AccuContour, is a standalone software which is used by radiation oncology department to register multi-modality images and segment (non-contrast) CT images, to generate needed information for treatment planning, treatment evaluation and treatment adaptation.
The product has three image processing functions:
- (1) Deep learning contouring: automatic contouring of organs-at-risk in the head and neck, thorax, abdomen, and pelvis (both male and female) areas;
- (2) Automatic registration: rigid and deformable registration; and
- (3) Manual contouring.
It also has the following general functions:
- Receive, add/edit/delete, transmit, input/export, medical images and DICOM data;
- Patient management;
- Review of processed images;
- Extension tool;
- Plan evaluation and plan comparison;
- Dose analysis.
This document (K221706) is a 510(k) Premarket Notification for the AccuContour device by Manteia Technologies Co., Ltd. It declares substantial equivalence to a predicate device and several reference devices. The focus here is on the performance data related to the "Deep learning contouring" feature and the "Automatic registration" feature.
Based on the provided document, here's a detailed breakdown of the acceptance criteria and the study proving the device meets them:
I. Acceptance Criteria and Reported Device Performance
The document does not provide an explicit table of acceptance criteria and reported device performance for the deep learning contouring. Instead, it states that "Software verification and regression testing have been performed successfully to meet their previously determined acceptance criteria as stated in the test plans." This implies that internal acceptance criteria were met, but the specific criteria and detailed performance results (e.g., Dice scores or Hausdorff distances for contours) are not disclosed in this summary.
However, for the deformable registration, it provides a comparative statement:
| Feature | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Deformable Registration | Non-inferiority to reference device (K182624) based on Normalized Mutual Information (NMI) | The NMI value of the proposed device was non-inferior to that of the reference device. |
It's important to note:
- For Deep Learning Contouring: No specific performance metrics or acceptance criteria are listed in this 510(k) summary. The summary only broadly mentions that the software "can automatically contour organs-at-risk, in head and neck, thorax, abdomen and pelvis (for both male and female) areas." The success is implicitly covered by the "Software verification and validation testing" section.
- For Automatic Registration: The criterion is non-inferiority in NMI compared to a reference device. The specific NMI values are not provided, only the conclusion of non-inferiority.
II. Sample Size and Data Provenance
- Test Set (for Deformable Registration):
- Sample Size: Not explicitly stated as a number, but described as "multi-modality image sets from different patients."
- Data Provenance: "All fixed images and moving images are generated in healthcare institutions in U.S." The data therefore originate from the U.S.; whether collection was retrospective or prospective is not stated.
- Training Set (for Deep Learning Contouring):
- Sample Size: Not explicitly stated in the provided document.
- Data Provenance: Not explicitly stated in the provided document.
III. Number of Experts and Qualifications for Ground Truth
- For the Test Set (Deformable Registration): The document does not mention the use of experts or ground truth establishment for the deformable registration test beyond the use of NMI for "evaluation." NMI is an image similarity metric and does not typically require human expert adjudication of registration quality in the same way contouring might.
- For the Training Set (Deep Learning Contouring): The document does not specify the number of experts or their qualifications for establishing ground truth for the training set.
IV. Adjudication Method for the Test Set
- For Deformable Registration: Not applicable in the traditional sense, as NMI is an objective quantitative metric. There's no mention of human adjudication for registration quality here.
- For Deep Learning Contouring (Test Set): The document notes there was no clinical study included in this submission. This implies that if a test set for the deep learning contouring was used, its ground truth (and any adjudication process for it) is not described in this 510(k) summary. Given the absence of a clinical study, it's highly probable that ground truth for performance evaluation of deep learning contouring was established internally through expert consensus or other methods, but details are not provided.
V. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done?: No, an MRMC comparative effectiveness study was not reported. The document explicitly states: "No clinical study is included in this submission."
- Effect Size: Not applicable, as no such study was performed or reported.
VI. Standalone (Algorithm Only) Performance Study
- Was it done?: Yes, for the deformable registration feature. The NMI evaluation was "on two sets of images for both the proposed device and reference device (K182624), respectively." This is an algorithm-only (standalone) comparison.
- For Deep Learning Contouring: While the deep learning contouring is a standalone feature, the document does not provide details of its standalone performance evaluation (e.g., against expert ground truth). It only states that software verification and validation were performed to meet acceptance criteria.
VII. Type of Ground Truth Used
- Deformable Registration: The "ground truth" for the deformable registration evaluation was implicitly the images themselves, with NMI being used as a metric to compare the alignment achieved by the proposed device versus the reference device. It's an internal consistency/similarity metric rather than a "gold standard" truth established by external means like pathology or expert consensus.
- Deep Learning Contouring: Not explicitly stated in the provided document. Given that it's an AI-based contouring tool and no clinical study was performed, the ground truth for training and internal testing would typically be established by expert consensus (e.g., radiologist or radiation oncologist contours) or pathology, but the document does not specify.
VIII. Sample Size for the Training Set
- Not explicitly stated in the provided document for either the deep learning contouring or the automatic registration.
IX. How Ground Truth for the Training Set was Established
- Not explicitly stated in the provided document for either the deep learning contouring or the automatic registration. For deep learning, expert-annotated images are the typical method, but details are absent here.