Smart Segmentation Knowledge Based Contouring provides a combined atlas- and model-based approach for automated and manual segmentation of structures, including target volumes and organs at risk, to support the radiation therapy treatment planning process.
Smart Segmentation - Knowledge Based Contouring is a software-only product that provides a combined atlas- and model-based approach to automated segmentation of structures, together with tools for manual contouring or editing of structures. A library of already contoured expert cases is provided, searchable by anatomy, staging, or free text. Users can also add or modify expert cases to suit their clinical needs. Expert cases are registered to the target image and the selected structures are propagated. Smart Segmentation Knowledge Based Contouring supports inter- and intra-user consistency in contouring. The product also provides an anatomy atlas that gives examples of delineated organs for the whole upper body, as well as anatomy images and a functional description for selectable structures.
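To make the registration-and-propagation idea concrete, the sketch below outlines a generic atlas-based contour propagation step in Python using SimpleITK. It is an illustrative outline under assumed inputs (an expert case image, its structure mask, and a target image); it is not Varian's implementation, and the function name propagate_structure is hypothetical.

```python
import SimpleITK as sitk

def propagate_structure(atlas_image, atlas_mask, target_image):
    """Deformably register an expert (atlas) image to the target image,
    then warp the expert's structure mask into the target frame."""
    registration = sitk.ImageRegistrationMethod()
    registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration.SetOptimizerAsLBFGSB()
    registration.SetInterpolator(sitk.sitkLinear)
    # B-spline (deformable) transform initialized over the target image domain.
    initial_tx = sitk.BSplineTransformInitializer(target_image, [8, 8, 8])
    registration.SetInitialTransform(initial_tx, inPlace=False)
    transform = registration.Execute(
        sitk.Cast(target_image, sitk.sitkFloat32),  # fixed image
        sitk.Cast(atlas_image, sitk.sitkFloat32))   # moving image
    # Nearest-neighbour interpolation keeps the propagated mask binary.
    return sitk.Resample(atlas_mask, target_image, transform,
                         sitk.sitkNearestNeighbor, 0, atlas_mask.GetPixelID())
```

In practice, a product of this kind would combine such registration output with model-based refinement and manual editing tools; the sketch only shows the propagation concept described above.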
The provided 510(k) summary for Varian's Smart Segmentation Knowledge Based Contouring (K133227) is primarily focused on demonstrating substantial equivalence to predicate devices (K112778 and K102011) following changes to existing features and the addition of new ones (support for 4D-CT data and a new algorithm for mandible segmentation). The document does not contain a detailed study demonstrating specific acceptance criteria with reported performance metrics in the format requested.
The document states "Verification testing was performed to demonstrate that the performance and functionality of the new and existing features met the design input requirements" and "Validation testing was performed on a production equivalent device, under clinically representative conditions by qualified personnel." However, the specific acceptance criteria, performance results, and details of these tests (such as sample sizes, ground truth establishment, and expert qualifications) are not included in the provided text.
Therefore, for most of the requested information, a direct answer cannot be extracted from the given input.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. Table of acceptance criteria and the reported device performance
- Cannot be provided. The document states that "performance and functionality of the new and existing features met the design input requirements" and "Results from Verification and Validation testing demonstrate that the product met defined user needs and defined design input requirements." However, specific numerical acceptance criteria (e.g., a Dice similarity coefficient > 0.8) and the corresponding reported device performance values are not detailed; the sketch below only illustrates what such a metric would look like.
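For context, here is a minimal sketch of the Dice similarity coefficient computed between an automated and a reference contour (binary masks as NumPy arrays). This is purely illustrative; the submission does not state that Dice, or any particular threshold, was the metric actually used.

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none."""
    auto = auto_mask.astype(bool)
    ref = reference_mask.astype(bool)
    denom = auto.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(auto, ref).sum() / denom

# A hypothetical criterion such as "Dice > 0.8 for the mandible" would then be
# checked as: dice_coefficient(auto_mandible, expert_mandible) > 0.8
```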
2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Cannot be provided. The document mentions "Validation testing was performed... under clinically representative conditions," but it does not specify the sample size of the test set, the country of origin of the data, or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Cannot be provided. The document refers to "expert cases" in the context of the device's functionality (a library of already contoured expert cases), but it does not detail the number or qualifications of experts used to establish ground truth for validation testing of the device itself.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Cannot be provided. The document does not describe any adjudication methods used for establishing ground truth or evaluating the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
- Cannot be provided. The document does not mention an MRMC comparative effectiveness study or the effect size of AI assistance on human readers. The device is described as "supporting inter and intra user consistency in contouring," but no study is detailed to quantify this improvement.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Implicitly yes, but no details are provided. The device is described as having "Automated Structure Delineation" and a "new algorithm for segmentation of the mandible." The "Verification testing" and "Validation testing" would logically evaluate the performance of these automated functions, implying a standalone evaluation. However, no specific performance metrics or study details for this standalone performance are given.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Implicitly expert-contoured data, but no specific details for validation are given. The device itself uses a "library of already contoured expert cases." It is reasonable to infer that the ground truth for validation testing would also be based on expert-contoured data, but the document does not explicitly state this for the validation set, nor does it specify whether this was expert consensus, a single expert, or another method (a sketch of a common consensus approach follows).
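For illustration only, a common way expert-consensus ground truth is built is a per-voxel majority vote across several expert-drawn masks (more elaborate estimators such as STAPLE are also widely used). The sketch below assumes binary NumPy masks; nothing in the submission indicates which, if any, consensus method was applied.

```python
import numpy as np

def majority_vote_consensus(expert_masks: list[np.ndarray]) -> np.ndarray:
    """Per-voxel majority vote: a voxel is foreground if more than half
    of the experts included it in their contour."""
    stacked = np.stack([m.astype(bool) for m in expert_masks], axis=0)
    votes = stacked.sum(axis=0)
    return votes > (len(expert_masks) / 2)
```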
8. The sample size for the training set
- Cannot be provided. The document mentions a "library of already contoured expert cases" which is central to a "knowledge based" system. This library would constitute the training data (or knowledge base). However, the sample size of this library or training set is not specified.
9. How the ground truth for the training set was established
- Implicitly by experts, but no specific details. The device uses a "library of already contoured expert cases." This implies the ground truth for these training cases was established by "experts." However, details on how these experts established this ground truth (e.g., number of experts, consensus process, qualifications) are not provided.
§ 892.5050 Medical charged-particle radiation therapy system.
(a) Identification. A medical charged-particle radiation therapy system is a device that produces by acceleration high energy charged particles (e.g., electrons and protons) intended for use in radiation therapy. This generic type of device may include signal analysis and display equipment, patient and equipment supports, treatment planning computer programs, component parts, and accessories.
(b) Classification. Class II. When intended for use as a quality control system, the film dosimetry system (film scanning system) included as an accessory to the device described in paragraph (a) of this section, is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.