The device, PVmed Contouring Software, is a software package. It allows the display, annotation, volume operation, volume rendering, and fusion of medical CT images as an aid in radiation therapy planning. It is limited to auto-contouring of organs at risk (OARs) for the head and neck, chest and abdomen, and abdomen and pelvis.
The product mainly has the following image processing functions:
- Contour drawing of organs at risk in the head and neck, chest and abdomen, and abdomen and pelvis.
It also has the following general functions:
- Receiving, adding/editing/deleting, transmitting, and importing/exporting medical images and DICOM data;
- Data set management, patient data management;
- Review of processed images;
- Image fusion;
- Opening and saving of files.
The provided text describes the PVmed Contouring Software and its substantial equivalence determination for FDA clearance (K210916). However, the document's detail regarding the acceptance criteria and the specific study that proves the device meets the acceptance criteria is somewhat limited, especially concerning quantitative metrics and the setup of the ground truth for human-only or human-in-the-loop performance improvements.
Based on the available information, the sections below structure the answer as requested, with limitations noted where information is absent.
Acceptance Criteria and Device Performance Study for PVmed Contouring Software
The PVmed Contouring Software underwent non-clinical testing to demonstrate its drawing accuracy for organs at risk (OARs) when compared to manual delineations by radiotherapy doctors.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the PVmed Contouring Software, specifically for drawing accuracy, were evaluated using the Dice Similarity Coefficient (DSC). While the specific numerical acceptance threshold (e.g., DSC > 0.8) is not explicitly stated, the reported performance indicates that the software met these requirements.
| Metric (Type of Criteria) | Acceptance Criteria (Threshold/Target) | Reported Device Performance |
|---|---|---|
| Drawing Accuracy | Dice Similarity Coefficient (DSC); numerical threshold not stated | "The results show that the software can meet the requirements of drawing accuracy." (Specific numerical DSC values are not provided in the document.) |
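For reference, the Dice Similarity Coefficient measures the voxel-wise overlap between an auto-contoured mask and an expert-drawn mask. The following NumPy sketch shows the standard DSC computation; it is illustrative and not the vendor's implementation.

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, expert_mask: np.ndarray) -> float:
    """Dice Similarity Coefficient: DSC = 2|A ∩ B| / (|A| + |B|).

    Both inputs are boolean (or 0/1) voxel masks of the same shape.
    Returns 1.0 for identical non-empty masks, 0.0 for disjoint masks.
    """
    a = auto_mask.astype(bool)
    b = expert_mask.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    if denom == 0:  # both masks empty: define DSC as 1.0 by convention
        return 1.0
    return 2.0 * intersection / denom

# Toy example: two 2x2 masks sharing one voxel.
auto = np.array([[1, 1], [0, 0]])
expert = np.array([[1, 0], [0, 0]])
print(dice_coefficient(auto, expert))  # 2*1 / (2+1) ≈ 0.667
```

A commonly cited (but submission-unspecified) acceptance convention is a per-organ DSC threshold such as 0.8; the document does not state the threshold actually used.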
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The test set included CT images of patients "who were admitted to 5 hospitals for radiotherapy in the past 3 years." The exact number of patients or images is not specified.
- Data Provenance: Retrospective, as the data consisted of CT images from patients admitted "in the past 3 years." The country of origin is not explicitly stated, but the submitter's address is in Guangzhou, Guangdong, China.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: The ground truth (control group) was established by "radiotherapy doctors." The exact number of doctors involved is not specified.
- Qualifications of Experts: The experts were "radiotherapy doctors with more than 5 years of practice."
4. Adjudication Method for the Test Set
The document states that the test group was delineated by the software, and the control group (ground truth) was delineated by radiotherapy doctors. This implies a comparison between the software's output and the expert's output. The method for resolving discrepancies among multiple experts (e.g., 2+1, 3+1) or if multiple experts were used for each case to establish a consensus ground truth is not specified. It appears to be a direct comparison to expert-drawn contours, implying consensus or a single expert for each contour, but this is not explicitly detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No, a formal MRMC comparative effectiveness study assessing how much human readers improve with AI assistance versus without it was not described or included in this submission. The study focused on the standalone performance of the auto-contouring feature, comparing it to manual contours.
6. Standalone (Algorithm Only) Performance Study
- Was a standalone study done? Yes, the "Summary of Non-Clinical Trial" describes a standalone performance evaluation of the auto-contouring software. The software's output ("test group was delineated by software") was directly compared to the manual delineations by radiotherapy doctors ("control group").
7. Type of Ground Truth Used
The ground truth used was expert delineation (whether it reflects a consensus of multiple experts or a single expert per contour is unclear). Specifically, it was the "drawing results manually drawn by radiotherapy doctors."
8. Sample Size for the Training Set
The document does not provide any information regarding the training set's sample size for the AI model. The "Summary of Non-Clinical Trial" only details the test set used for performance validation.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth was established for the training data used to develop the PVmed Contouring Software.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).