Search Results

Found 559 results

510(k) Data Aggregation

    K Number
    K252174
    Device Name
    AquaCast Mask
    Date Cleared
    2025-09-03

    (54 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Product Code: IYE

    Intended Use
    Device Description
    AI/ML Overview

    K Number
    K243142
    Manufacturer
    Date Cleared
    2025-06-23

    (266 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Product Code: IYE

    Intended Use

    Cranial 4Pi is intended for patient immobilization in radiotherapy and radiosurgery procedures.

    Cranial 4Pi is indicated for any medical condition in which the use of radiotherapy or radiosurgery may be appropriate for cranial and head & neck treatments.

    Device Description

    Cranial 4Pi is an assembly of the following medical device/accessory groups:

    • CRANIAL 4PI OVERLAYS (CRANIAL 4PI CT OVERLAY, CRANIAL 4PI TREATMENT OVERLAY)
    • CRANIAL 4PI HEADRESTS (CRANIAL 4PI HEADREST STANDARD, CRANIAL 4PI HEADREST LOW-NECK, CRANIAL 4PI HEADREST PLATFORM)
    • CRANIAL 4PI HEADREST INLAYS (CRANIAL 4PI HEADREST INLAY STANDARD, CRANIAL 4PI HEADREST INLAY OPEN FACE, CRANIAL 4PI HEADREST INLAY H&N, CRANIAL 4PI HEAD SUPPORT STANDARD, CRANIAL 4PI HEAD SUPPORT WIDE)
    • CRANIAL 4PI MASKS (CRANIAL 4PI BASIC MASK, CRANIAL 4PI OPEN FACE MASK, CRANIAL 4PI EXTENDED MASK, CRANIAL 4PI STEREOTACTIC MASK, CRANIAL 4PI STEREOTACTIC MASK 3.2MM)
    • CRANIAL 4PI WEDGES AND SPACERS (CRANIAL 4PI WEDGE 5 DEG., CRANIAL 4PI WEDGE 10 DEG., CRANIAL 4PI SPACER 20MM, CRANIAL 4PI INDEXING PLATE)

    The Cranial 4Pi Overlays are medical devices used for fixation of the patient in a CT or linear accelerator environment.

    The Cranial 4Pi Headrests and the Cranial 4Pi Headrest Inlays are accessories to the Cranial 4Pi Overlays that allow indication-specific positioning of the patient's head and neck. The Cranial 4Pi Wedges and Spacers are accessories to the Cranial 4Pi Headrest Platform that adapt the inclination of the head support to the patient's neck.

    The Cranial 4Pi Masks are accessories to the Cranial 4Pi Overlays used for producing individual custom-made masks for patient immobilization to the Cranial 4Pi Overlay.

    AI/ML Overview

    The provided text is a 510(k) Clearance Letter and 510(k) Summary for a medical device called "Cranial 4Pi Immobilization." This document focuses on demonstrating substantial equivalence to a predicate device, as required for FDA 510(k) clearance.

    However, the provided text does not contain the detailed information typically found in a clinical study report or a pre-market approval (PMA) submission regarding acceptance criteria, study methodologies, or specific performance metrics with numerical results (like sensitivity, specificity, or AUC) that would be used to "prove the device meets acceptance criteria" for an AI/ML-driven device. The document primarily describes the device's components, indications for use, and a comparison to a predicate device to establish substantial equivalence.

    The "Performance Data" section primarily addresses biocompatibility, mechanical verification, dosimetry, compatibility with another system, and mask stability. It does not describe a study to prove AI model performance against clinical acceptance criteria. The "Usability Evaluation" section describes a formative usability study, which is different from a performance study demonstrating clinical effectiveness or accuracy.

    Therefore, many of the requested elements (especially those related to AI/ML model performance, ground truth establishment, expert adjudication, MRMC studies, or standalone algorithm performance) cannot be extracted from the provided text. The Cranial 4Pi Immobilization device appears to be a physical immobilization system, not an AI/ML diagnostic or prognostic tool.

    Given the nature of the document (510(k) for an immobilization device), the concept of "acceptance criteria for an AI model" and "study that proves the device meets the acceptance criteria" in the traditional sense of an AI/ML clinical study does not apply here.

    I will answer the questions based on the closest relevant information available in the provided text, and explicitly state where the information is not available or not applicable to the type of device described.


    Preamble: Nature of the Device and Submission

    The Cranial 4Pi Immobilization device is a physical medical device designed for patient immobilization during radiotherapy and radiosurgery. The 510(k) premarket notification for this device seeks to demonstrate substantial equivalence to an existing predicate device (K202050 - Cranial 4Pi Immobilization). This type of submission typically focuses on comparable intended use, technological characteristics, and safety/performance aspects relevant to the physical device's function (e.g., biocompatibility, mechanical stability, dosimetry interaction).

    The provided documentation does not describe an AI/ML-driven component that would require acceptance criteria related to AI model performance (e.g., accuracy, sensitivity, specificity, AUC) or a study to prove such performance. Therefore, many of the questions asking about AI-specific validation (like ground truth, expert adjudication, MRMC studies, training/test sets for AI) are not applicable to this type of device and submission.


    1. A table of acceptance criteria and the reported device performance

    Based on the provided document, specific numerical "acceptance criteria" and "reported device performance" in the context of an AI/ML model are not available and not applicable. The document focuses on demonstrating substantial equivalence of a physical immobilization device.

    However, the "Performance Data" section lists several tests and their outcomes, which serve as evidence that the device performs as intended for its physical function. These are not acceptance criteria for an AI model.

    | Test Category | Acceptance Criteria (explicitly stated or inferred) | Reported Device Performance (as stated) |
    | --- | --- | --- |
    | Biocompatibility | Risk mitigated by limited exposure and intact-skin contact for irritation/sensitization; low unbound residues for coating; cytotoxicity testing to be performed. | Cytotoxicity testing: the amount of non-reacted educts is considered low. Sensitization testing (ISO 10993-10): no sensitization reactions observed for saline or cottonseed-oil extracts; the test article did not elicit sensitization reactions (guinea pigs); positive controls validated sensitivity. Irritation testing (ISO 10993-23): no irritation observed (rabbits) compared to control, based on erythema and edema scores for saline and cottonseed-oil extracts; the test article met the requirements of the intracutaneous (intradermal) reactivity test; positive controls validated sensitivity. |
    | Mechanical Tests | Relevant for fulfillment of IEC 60601-1 requirements. | All mechanical tests relevant for fulfillment of IEC 60601-1 requirements were carried out successfully. |
    | Dosimetry Tests | Verify that dose attenuation is acceptable. | Tests to verify that dose attenuation is acceptable with the hardware components were carried out successfully. |
    | Compatibility Tests | Compatibility with ExacTrac Dynamic 2.0. | Compatibility with ExacTrac Dynamic 2.0 was tested successfully. |
    | Mask Stability | Cranial 4Pi SRS mask 3.2 mm (vs. 2 mm predicate) to have higher stability against head movement. | A technical validation test proving that the Cranial 4Pi SRS mask 3.2 mm ... having a 3.2 mm top mask sheet instead of 2 mm has higher stability against head movement was carried out successfully. |
    | Usability Evaluation | Evaluate the usability of the subject devices. | A formative usability evaluation was performed in three different clinics with seven participants (specific findings not detailed, but the study was performed). |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Not applicable/not stated in the context of an AI/ML test set. The usability evaluation involved "seven participants" in "three different clinics." For biocompatibility, animal studies were performed (guinea pigs for sensitization, rabbits for irritation; specific number of animals not stated but implied to be sufficient for ISO standards).
    • Data Provenance: Not applicable for an AI/ML test set. The usability evaluation involved "three different clinics" but the country of origin is not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not applicable. This device is a physical immobilization system, not an AI/ML diagnostic or prognostic tool that requires expert-established ground truth on medical images.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not applicable. This information is relevant to validating AI/ML diagnostic performance against ground truth, which is not described for this device.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • Not applicable. This is an AI/ML-specific study design. The device is a physical immobilization system, not an AI assistance tool for human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Not applicable. This is an AI/ML-specific validation. There is no AI algorithm component described for this physical device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Not applicable. No ground truth for diagnostic or prognostic purposes is established for this physical device. The "performance data" relies on standards compliance (e.g., ISO, IEC), physical measurements, and usability feedback.

    8. The sample size for the training set

    • Not applicable. There is no AI model described that would require a training set.

    9. How the ground truth for the training set was established

    • Not applicable. There is no AI model described.

    K Number
    K243301
    Device Name
    MapRT
    Manufacturer
    Date Cleared
    2025-05-19

    (213 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Product Code: IYE

    Intended Use

    MapRT is indicated for assisting with planning of radiation therapy by:

    • Assessing which combinations of gantry/couch angle and isocentre may result in a collision and which are available to potentially enhance the dose distribution; and
    • Predicting when a treatment plan might result in a collision between the treatment machine and the patient or support structures
    Device Description

    MapRT is used by radiotherapy professionals during the CT simulation and treatment planning stages of radiotherapy for collision avoidance and facilitating dose optimisation.

    MapRT uses two lateral wide-field cameras in simulation to deliver a full 3D model of patients and accessories. This model is then used to calculate a clearance map for every combination of couch (x-axis) and gantry (y-axis) angle. Radiotherapy treatment plans can then be imported automatically to check beams, arcs, and the transition clearance.
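
    A minimal sketch of the kind of clearance-map computation described above. The actual MapRT algorithm, camera model, and machine geometry are not disclosed; the point-cloud input, the 10° angular step, and modelling the gantry head as a single point on a circle are illustrative assumptions.

```python
import numpy as np

def clearance_map(points, gantry_radius=0.45, couch_angles=None, gantry_angles=None):
    """Minimum distance from the gantry head to any surface point, for each
    (couch, gantry) angle pair.

    points: (N, 3) array in metres, isocentre at the origin,
    y = couch longitudinal axis, z = vertical.
    """
    couch_angles = np.deg2rad(couch_angles if couch_angles is not None
                              else np.arange(0, 360, 10))
    gantry_angles = np.deg2rad(gantry_angles if gantry_angles is not None
                               else np.arange(0, 360, 10))
    cmap = np.empty((len(couch_angles), len(gantry_angles)))
    for i, c in enumerate(couch_angles):
        # Couch rotation spins the patient/accessory model about the vertical axis.
        rot = np.array([[np.cos(c), -np.sin(c), 0.0],
                        [np.sin(c),  np.cos(c), 0.0],
                        [0.0, 0.0, 1.0]])
        p = points @ rot.T
        for j, g in enumerate(gantry_angles):
            # Gantry head travels on a circle of radius gantry_radius in the x-z plane.
            head = np.array([gantry_radius * np.sin(g), 0.0,
                             gantry_radius * np.cos(g)])
            cmap[i, j] = np.linalg.norm(p - head, axis=1).min()
    return cmap
```

    A collision flag would then be any map cell whose clearance falls below a safety margin.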

    AI/ML Overview

    The provided document is a 510(k) clearance letter for a software device called MapRT, which assists in radiation therapy planning by predicting collisions. However, the document explicitly states: "As with the predicate device, no clinical investigations were performed for MapRT. Verification tests were performed to ensure that the module works as intended and pass/fail criteria were used to verify requirements. Validation testing was performed using summative evaluation techniques per 62366-1:2015/A1:2020. Verification and validation testing passed in all test cases."

    This means the submission did not include a study design or performance data in the typical sense of a clinical trial or a multi-reader multi-case (MRMC) study to prove the device meets acceptance criteria related to clinical performance metrics like sensitivity, specificity, accuracy, or reader improvement. Instead, the clearance relies on:

    1. Substantial Equivalence: The primary argument for clearance is that MapRT v1.2 is substantially equivalent to its predicate device (MapRT v1.0, K231185). The document highlights that the indications for use, functionality, technological characteristics, and intended users are the same as the predicate.
    2. Verification and Validation (V&V) Testing: The document mentions that "Verification and validation testing passed in all test cases," indicating that the software meets its design specifications and functions as intended, primarily in terms of software functionality and accuracy of collision prediction within its defined operational parameters.

    Given this information, it's not possible to fill out all aspects of your requested response, particularly those related to clinical studies, ground truth establishment, expert consensus, and MRMC studies, as they were explicitly not performed.

    Here's an attempt to answer based on the provided document, noting where information is not available:


    Device Acceptance Criteria and Study Performance for MapRT

    The FDA 510(k) clearance for MapRT v1.2 is primarily based on demonstrating substantial equivalence to a legally marketed predicate device (MapRT v1.0, K231185) and successful completion of software verification and validation activities. The submission explicitly states that "no clinical investigations were performed for MapRT." Therefore, the acceptance criteria and performance proof are framed in the context of software verification and validation, and functional accuracy rather than clinical efficacy studies.

    1. Acceptance Criteria and Reported Device Performance

    The core functional acceptance criterion is the accuracy of collision prediction.

    | Acceptance Criterion (Functional/Technical, as per document) | Reported Device Performance |
    | --- | --- |
    | Accuracy of gantry clearance calculation | Calculates gantry clearance with an accuracy of ±2 cm. |
    | Verification & validation (V&V) testing | "Verification and validation testing passed in all test cases." This implies meeting all internal design specifications and functional requirements, using summative evaluation techniques per IEC 62366-1:2015/A1:2020. The device "continues to meet the design specifications and performs as intended." |
    | Substantial equivalence | Demonstrated substantial equivalence to the predicate device (MapRT v1.0, K231185) in indications for use, intended users, contraindications, functionality, technology, input/output, and design (with minor, non-safety-impacting GUI differences). |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document does not specify a "test set" in the context of patient data or clinical cases. The performance data mentioned refer to software verification and validation tests, which would involve a set of test cases designed to cover various scenarios and functional requirements. The specific number or nature of these test cases is not detailed.
    • Data Provenance: Not applicable for a clinical test set, as no clinical investigations were performed. The V&V testing would likely involve simulated data, synthetic models, or potentially anonymized patient models used for testing collision detection scenarios. The provenance (country of origin, retrospective/prospective) of such test data is not provided.

    3. Number of Experts Used to Establish Ground Truth for Test Set and Their Qualifications

    Not applicable. Since no clinical investigations were performed, there was no clinical "ground truth" established by experts in the context of patient outcomes or image interpretation. The ground truth for functional testing of collision prediction would be derived from precise engineering specifications and physical measurements, likely validated internally by the manufacturer's engineering team.

    4. Adjudication Method for the Test Set

    Not applicable, as no clinical test set requiring expert adjudication was used.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No. The document explicitly states: "As with the predicate device, no clinical investigations were performed for MapRT." Therefore, no MRMC study was conducted to compare human reader performance with or without AI assistance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study Was Done

    Yes, in essence. The stated "accuracy of ± 2cm" for gantry clearance calculation and the passing of "all test cases" in verification and validation testing refer to the isolated performance of the MapRT algorithm in predicting collisions and calculating clearance. This implies an evaluation of the algorithm's functional accuracy independent of human interaction beyond inputting treatment plans. However, the details of how this accuracy was measured (e.g., against a gold standard derived from physical models or high-precision simulations) are not provided in this summary.
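
    Under the assumption that the ±2 cm figure was verified by comparing algorithm output against geometrically derived reference clearances, the pass/fail check could take roughly this form (the function name and data layout are hypothetical; the document does not describe the actual test harness):

```python
def clearance_within_tolerance(predicted_cm, reference_cm, tol_cm=2.0):
    """Compare predicted clearances against reference (ground-truth) values.

    Returns (passed, max_deviation): passed is True only if every prediction
    lies within tol_cm of its reference, mirroring the stated +/- 2 cm
    accuracy criterion.
    """
    deviations = [abs(p - r) for p, r in zip(predicted_cm, reference_cm)]
    max_dev = max(deviations)
    return max_dev <= tol_cm, max_dev
```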

    7. The Type of Ground Truth Used

    For the accuracy of gantry clearance calculation (± 2cm), the ground truth would typically be established through:

    • Precise engineering specifications and measurements of physical models of the treatment machine, patient, and support structures.
    • High-fidelity simulation data where collision events and clearances can be precisely calculated geometrically.

    It is not based on expert consensus, pathology, or outcomes data, as these are typically associated with clinical diagnostic or prognostic devices.

    8. Sample Size for the Training Set

    Not applicable. MapRT is a software device that simulates radiation treatment plans and predicts collisions based on geometric models and calculations. There is no indication that it is an AI/Machine Learning model that requires a "training set" of data in the conventional sense (e.g., for image classification or pattern recognition). Its "knowledge" of collision mechanics and geometries comes from programmed rules and pre-loaded models (e.g., LiDAR scans or 3D CAD models of equipment), not from learning from a dataset.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no "training set" for an AI/ML model for MapRT based on the provided information.



    K Number
    K250099
    Device Name
    Mobius3D (4.1)
    Date Cleared
    2025-05-16

    (122 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Product Code: IYE

    Intended Use

    Mobius3D software is used for quality assurance, treatment plan verification, and patient alignment and anatomy analysis in radiation therapy. It calculates radiation dose three dimensionally in a representation of a patient or a phantom. The calculation is based on read-in treatment plans that are initially calculated by a treatment planning system, and may additionally be based on external measurements of radiation fields from other sources such as linac delivery log data. Patient alignment and anatomy analysis is based on read-in treatment planning images (such as computed tomography) and read-in daily treatment images (such as registered cone beam computed tomography).

    Mobius3D is not a treatment planning system. It is to be used only by trained radiation oncology personnel as a quality assurance tool.

    Device Description

    Mobius3D is a software product used within a radiation therapy clinic for quality assurance and treatment plan verification. It is important to note that while Mobius3D operates in the field of radiation therapy, it is neither a radiation delivery device (e.g., a linear accelerator) nor a Treatment Planning System (TPS). Mobius3D cannot design or transmit instructions to a delivery device, nor does it control any other medical device. Mobius3D is an analysis tool meant solely for quality assurance (QA) purposes when used by trained medical professionals. Being a software-only QA tool, Mobius3D never comes into contact with patients.
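
    The clearance letter does not state how Mobius3D quantifies agreement between its independently calculated dose and the TPS dose. A common form of such a secondary-check comparison is a voxelwise dose-difference pass rate, sketched here with illustrative 3%-of-maximum and 10% low-dose-threshold parameters (not taken from the document):

```python
import numpy as np

def dose_pass_rate(tps_dose, check_dose, percent=3.0, threshold=0.1):
    """Fraction of voxels where an independently recomputed dose agrees with
    the TPS dose within `percent` % of the maximum dose.  Voxels below
    `threshold` * max dose are ignored, as is common in dose QA.
    Illustrative only -- the 510(k) summary does not state Mobius3D's metric.
    """
    tps = np.asarray(tps_dose, dtype=float)
    chk = np.asarray(check_dose, dtype=float)
    ref = tps.max()
    mask = tps > threshold * ref            # drop low-dose voxels
    agree = np.abs(chk[mask] - tps[mask]) <= percent / 100.0 * ref
    return agree.mean()
```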

    AI/ML Overview

    It appears there's a misunderstanding based on the provided document. The request asks for acceptance criteria and a study that proves the device meets those criteria, including specifics like sample sizes, expert qualifications, and ground truth establishment.

    However, the provided FDA 510(k) clearance letter for Mobius3D (4.1) does not contain the detailed performance study results that would prove the device meets specific acceptance criteria.

    The 510(k) summary (pages 5-7) primarily discusses:

    • Device Description and Intended Use: What Mobius3D is and what it's used for (QA, treatment plan verification, patient alignment).
    • Comparison to Predicate Device: How Mobius3D 4.1 differs from 4.0.
    • Summary of Performance Testing (Non-Clinical):
      • Mentions software verification and validation, including unit, integration, and end-to-end testing.
      • Highlights MLC Modelling Accuracy testing comparing different Mobius3D versions, measurements, and a Treatment Planning System (Eclipse TPS 16.1).
      • States that "studies and reviews have been performed to assess the accuracy of newly introduced features and modifications" for Rapid Arc Dynamic Support and MLC Tongue and Groove Modelling.
      • Notes conformance to cybersecurity and interoperability requirements.
      • Crucially, it explicitly states: "No animal studies or clinical tests have been included in this pre-market submission." This means there isn't a human-in-the-loop study or a study directly demonstrating clinical performance against ground truth in a clinical setting.
    • Use of Consensus Standards: A list of standards the device's design and evaluation conform to.
    • Determination of Substantial Equivalence: Varian's conclusion that the device is substantially equivalent to the predicate.

    Therefore, many of the specific details requested (Table of acceptance criteria, sample sizes for test sets, number/qualifications of experts for ground truth, adjudication methods, MRMC study, standalone performance, type of ground truth, training set sample size/ground truth establishment) are NOT present in this 510(k) clearance letter.

    The letter focuses on the regulatory submission process and the FDA's determination of substantial equivalence based on the provided non-clinical testing and comparison to a predicate device. It doesn't typically include the full, detailed study reports with precise performance metrics and ground truth methodologies. Such details would typically be found in the more extensive technical documentation submitted by the manufacturer to the FDA, but they are summarized at a high level in the public 510(k) summary.

    In summary, based only on the provided text, I cannot provide the detailed information requested regarding the specific acceptance criteria and the study that proves the device meets those criteria in the format you've requested. The document indicates non-clinical software testing and accuracy assessments were performed but does not provide the specific metrics or study design details for clinical performance proof.


    Why did this record match?
    Product Code: IYE

    Intended Use

    All devices apart from Handgrips HNS, set/2 (111730):

    Positioning and immobilization of the patient during radiotherapy. This includes positioning and immobilization of the patient during image acquisition to support treatment.

    Handgrips HNS, set/2 (111730):

    Positioning of the patient during radiotherapy and radio diagnostics, including MR where indicated.

    Device Description

    The DSPS-Prominent baseplate, MR is an MR Safe baseplate which supports the positioning and immobilization of the patient within DSPS-Prominent occipital and facial masks. The device features a cantilevered frame on the cranial side which facilitates the use of both facial and occipital head, neck and shoulder masks and head-only masks. The device is fixed to the couch using couch strips. A Coil Reference Tool accessory is available.

    The DSPS-Prominent baseplate supports the positioning and immobilization of the patient within DSPS-Prominent occipital and facial masks. The device features a cantilevered frame on the cranial side which facilitates the use of both facial and occipital head, neck and shoulder masks and head only masks. The device is fixed to the couch using couch strips. Optional accessory hand grips are available.

    The DSPS-Prominent cradles support the positioning and immobilization of the patient within DSPS-Prominent occipital and facial masks. The devices feature a cantilevered frame which facilitates the use of both facial and occipital head, neck and shoulder masks and head only masks. The device is required to be fixed to a baseplate.

    The DSPS-Prominent Masks are facial and occipital thermoplastic masks which are used together with a DSPS-Prominent baseplate or cradle to facilitate accurate positioning and immobilization of the head and neck or head, neck and shoulder region of the patient. The Masks which do not support the positioning of the shoulders are termed 'head only' masks. Shoulder Profile and Shim accessories are available.

    AI/ML Overview

    The provided document does not detail specific acceptance criteria or an associated study with quantitative results for the device's performance against said criteria. Instead, it focuses on demonstrating substantial equivalence to predicate devices through comparisons of design, materials, technological characteristics, and indications for use.

    The document states:

    • "Non-clinical performance testing was completed to ensure that the devices fulfilled the defined requirements."
    • "Clinical testing was performed to ensure that the use of the devices supports the achievement of submillimeter positional accuracy."
    • "In addition, attenuation measurements were taken, and water equivalence measurements were calculated, for the devices."
    • "The testing confirmed that the new devices are as safe and effective as the predicate devices."

    However, it lacks the specific numerical acceptance criteria for these tests (e.g., "positional accuracy must be within X mm") and the reported performance values from these tests. It also does not provide details on the study design for the "non-clinical performance testing" or "clinical testing," such as sample size, ground truth establishment, expert qualifications, or adjudication methods.

    Therefore, I cannot populate the requested table or answer the specific questions about sample size, data provenance, expert qualifications, adjudication methods, MRMC studies, standalone performance, or training set details. This information is not present in the provided text.


    K Number
    K241937
    Date Cleared
    2025-03-18

    (259 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Product Code: IYE

    Intended Use

    With radiotherapy equipment (medical linac, CT-sim), it is used for patient positioning before treatment and continuous monitoring of patients during treatment. It can also be used to track patients' breathing (including the three breathing modes DIBH, EEBH, and 4DCT) in order to synchronize image acquisition and radiation therapy with breathing.

    Device Description

    Klarity SGRT system offers a dedicated non-irradiating and non-invasive surface guided radiation therapy solution. SGRT system includes three main application modules: patient positioning, treatment (motion) monitoring and respiratory gating.

    With the patient positioning module, the SGRT system continuously senses the 3D patient body surface and compares it with the prerecorded reference surface by 3D image registration. The calculated 6-degree-of-freedom positioning deviations are visualized and sent to the positioning couch, to ensure consistent and efficient inter-session patient setup/positioning, both manual and automated.
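
    The registration method itself is not described in the document. One standard way to obtain a 6-degree-of-freedom deviation from corresponding surface points is a least-squares rigid fit (Kabsch algorithm), sketched here as an illustration of the kind of output such a module reports:

```python
import numpy as np

def rigid_6dof(reference, current):
    """Least-squares rigid registration (Kabsch) of two corresponding point
    sets.  Returns the translation (same units as input) and the rotation
    angles (degrees, ZYX convention) that map `current` onto `reference`.
    A sketch of a 6-DOF deviation computation; the actual registration used
    by the Klarity system is not described in the document.
    """
    ref_c, cur_c = reference.mean(axis=0), current.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (current - cur_c).T @ (reference - ref_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ref_c - R @ cur_c
    rx = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    ry = np.degrees(np.arcsin(-R[2, 0]))
    rz = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return t, (rx, ry, rz)
```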

    With the treatment (motion) monitoring module, the SGRT system monitors the patient's surface in real time throughout the treatment session. If the monitored body surface exceeds the predefined tolerance, the system automatically alerts the therapist and turns off the beam immediately.
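
    The beam-hold logic this describes reduces to a simple interlock; the 3 mm default and the function interface below are illustrative, not values from the document:

```python
def beam_permitted(surface_deviations_mm, tolerance_mm=3.0):
    """Return True only while every monitored surface deviation lies inside
    the predefined tolerance; False signals 'alert therapist and hold beam'.
    """
    return all(abs(d) <= tolerance_mm for d in surface_deviations_mm)
```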

    With the respiratory gating module, the system tracks the patient's respiratory movements in real time. During treatment delivery, this functionality is used to maximize the protection of organs at risk; in the CT room, it is used to adapt 4DCT imaging to the patient's breathing in order to minimize 4DCT imaging artifacts caused by respiratory motion.
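
    Amplitude-based gating of this kind can be sketched as a window applied to the breathing trace; the window limits below are illustrative, and the document does not describe the system's actual gating parameters:

```python
import numpy as np

def gating_beam_on(breathing_amplitude, window_low, window_high):
    """Boolean beam-on mask: True where the respiratory amplitude lies
    inside the gating window [window_low, window_high]."""
    a = np.asarray(breathing_amplitude, dtype=float)
    return (a >= window_low) & (a <= window_high)
```

    For a DIBH workflow, for example, the window would bracket the breath-hold amplitude so the beam is only on while the patient holds the target level.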

    The Klarity SGRT system mainly consists of advanced software, a PC workstation, one or three 3D cameras, and calibration tools. The SGRT system is offered in two models, ARSG-E1A and ARSG-E3A.

    AI/ML Overview

    The Klarity SGRT System (ARSG-E1A, ARSG-E3A) is a surface-guided radiation therapy solution. The following table outlines its acceptance criteria and reported performance based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Performance Metric | Acceptance Criteria (Subject Device) | Reported Performance (Subject Device) | Predicate Device (K113276) | Reference Device (K082582) |
    | --- | --- | --- | --- | --- |
    | Measurement accuracy | Better than 1 mm | Better than 1 mm | Better than 1 mm | Better than 1 mm |
    | Measurement reproducibility | Not greater than 0.5 mm | Not greater than 0.5 mm | 0.2 mm | 0.2 mm |
    | Respiratory gating accuracy | ≤ 1 mm | ≤ 1 mm | Within 1 mm for rigid body | Within 1 mm for rigid body |
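
    For context, the two table metrics could be computed from repeated measurements of known reference positions roughly as follows; the actual Klarity test protocol, phantoms, and statistics are not described in the document:

```python
import numpy as np

def accuracy_and_reproducibility(measured, reference):
    """For repeated measurements of known reference positions:
    accuracy        = largest absolute error vs. the reference positions;
    reproducibility = largest spread (max - min) across repeats.
    Illustrative of how the 1 mm / 0.5 mm figures might be verified.
    """
    m = np.asarray(measured, dtype=float)      # shape (repeats, positions)
    r = np.asarray(reference, dtype=float)     # shape (positions,)
    accuracy = np.abs(m - r).max()
    reproducibility = (m.max(axis=0) - m.min(axis=0)).max()
    return accuracy, reproducibility
```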

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the sample size used for the test set. It mentions "Non-clinical tests were conducted to verify that the Klarity SGRT System (ARSG-E1A, ARSG-E3A) meets all design specifications". The provenance of the data for these non-clinical tests is not specified in terms of country of origin or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not provide information on the number of experts used to establish ground truth or their qualifications. The studies mentioned are non-clinical performance and comparative tests, not expert-opinion based evaluations.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method for a test set, as no clinical study involving human judgment is detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    No MRMC comparative effectiveness study was done. The document explicitly states: "The clinical test is not applicable, there's no clinical data."

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, numerous standalone performance tests were conducted. The document states: "Non-clinical tests were conducted to verify that the Klarity SGRT System (ARSG-E1A, ARSG-E3A) meets all design specifications...". This includes specific performance metrics like measurement accuracy, reproducibility, and respiratory gating accuracy. These tests inherently evaluate the algorithm's performance in a standalone manner, separate from human intervention in a clinical setting.

    7. The Type of Ground Truth Used

    For the non-clinical performance tests, the "ground truth" would be established by precise physical measurements and reference systems used during these engineering and performance verification tests. This is not derived from expert consensus, pathology, or outcomes data, but rather from direct physical measurement.

    8. The Sample Size for the Training Set

    The document does not provide information about the sample size for a training set. As the device is referred to as "advanced software" and performs tasks like 3D image registration and real-time motion monitoring, it likely involves algorithms that would require training data, but details are not disclosed.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for any potential training set was established.


    K Number
    K242957
    Device Name
    Identify (4.0)
    Date Cleared
    2025-02-07

    (135 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    IYE

    Intended Use

    IDENTIFY is indicated for adult patients undergoing radiotherapy treatment simulation and/or delivery. IDENTIFY is indicated for positioning of patients, and for monitoring patient motion including respiratory patterns. It allows for data output to radiotherapy devices to synchronize image acquisition or treatment delivery with the acquired motion information.

    Device Description

    IDENTIFY is a system for motion monitoring during radiotherapy treatment simulation and delivery. It incorporates patient safety, quality, and workflow efficiency. Its high-precision SGRT cameras support proper patient positioning and enable monitoring of the patient's respiratory motion and detection of intra-fraction patient position changes during treatment.

    AI/ML Overview

    The provided text does not contain detailed information about specific acceptance criteria or a dedicated study proving the device meets these criteria with reported performance metrics. The document is primarily a 510(k) summary for the IDENTIFY (4.0) device, outlining its intended use, description, and non-clinical testing for substantial equivalence to a predicate device.

    Therefore, I cannot fulfill all parts of your request. I can, however, extract information about what non-clinical testing was conducted, which suggests certain underlying acceptance criteria related to safety, effectiveness, and performance against standards.

    Here's what can be inferred and stated based on the provided text:

    Acceptance Criteria and Study Information (Based on Inferred Information from Non-Clinical Testing Section):

    The document does not present a table of explicit acceptance criteria with reported device performance in the format requested. Instead, it describes general compliance with standards and internal testing to ensure safety, effectiveness, and performance.

    Inferred Acceptance Criteria from Non-Clinical Testing:

    While not explicitly listed as a table of "acceptance criteria," the non-clinical testing section implies the device needed to meet the following:

    • Conformance to Applicable Requirements Specifications: The device must meet its defined functional and performance specifications.
    • Hazard Safeguards Functioning Properly: Safety mechanisms must work as intended.
    • Software Compliance: Adherence to FDA's "Content of Premarket Submissions for Device Software Functions" guidance, specifically for a "major" level of concern.
    • Electrical Safety: Compliance with IEC 60601-1 standards.
    • Electromagnetic Compatibility (EMC): Compliance with IEC 60601-1-2 standard.
    • Quality Management System: Adherence to ISO 13485.
    • Risk Management: Adherence to ISO 14971.
    • No Unresolved Anomalies: No Discrepancy Reports (DRs) with "Safety Intolerable" or "Customer Intolerable" priority remaining.
    • Performance at least as well as Predicate Device: The device performs comparably in terms of safety and effectiveness to the IDENTIFY (K230576).

    Reported Device Performance:

    The document states: "Test results showed conformance to applicable requirements specifications and assured hazard safeguards functioned properly." And "The outcome was that the product conformed to the defined user needs and intended uses and that there were no DRs (discrepancy reports/Unresolved Anomalies) remaining which had a priority of Safety Intolerable or Customer Intolerable (applicable to the US)."

    Missing Information:

    The document does not provide the following details that would be typically found in a clinical study report or a more detailed performance evaluation:

    • A specific table of quantitative acceptance criteria and corresponding numerical performance results.
    • Details on a specific "study" with a test set, sample sizes for test or training sets, data provenance, expert qualifications, or ground truth establishment methods for a clinical or performance evaluation.
    • Information on MRMC studies or a human-in-the-loop effect size.
    • Information on standalone algorithm performance.

    Summary of Available Information from the Text:

    1. A table of acceptance criteria and the reported device performance:

      • Acceptance Criteria (Inferred): Conformance to requirements, proper functioning of hazard safeguards, compliance with specific software guidances (FDA Software Functions guidance, "major" level of concern), electrical safety (IEC 60601-1), EMC (IEC 60601-1-2), Quality Management (ISO 13485), Risk Management (ISO 14971), and absence of critical unresolved anomalies (Safety Intolerable or Customer Intolerable DRs). Performance at least as well as the predicate device.
      • Reported Device Performance: "Test results showed conformance to applicable requirements specifications and assured hazard safeguards functioned properly." "The outcome was that the product conformed to the defined user needs and intended uses and that there were no DRs (discrepancy reports/Unresolved Anomalies) remaining which had a priority of Safety Intolerable or Customer Intolerable (applicable to the US)."
    2. Sample sizes used for the test set and the data provenance:

      • Not provided. The document refers to "hardware and software verification and validation testing" but does not specify sample sizes for a test set of patient data or data provenance (e.g., country of origin, retrospective/prospective). This often implies bench testing and software verification without a dedicated clinical performance study.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not provided. This information would be relevant for a clinical performance study using expert labels, which isn't detailed here.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Not provided.
    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • Not provided. The document focuses on the device's own performance and substantial equivalence, not an MRMC study with human readers.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • The "Non-clinical Testing" section refers to "hardware and software verification and validation testing," implying performance of the device's functions, which would be "standalone" in nature. However, specific metrics or a dedicated "standalone study" in terms of clinical performance are not detailed. It's more about technical compliance.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Not specified as a formal ground truth for a clinical dataset. The "ground truth" for the non-clinical testing likely refers to engineering specifications, established safety standards, and validated software requirements.
    8. The sample size for the training set:

      • Not provided. Training set information is relevant for AI/ML devices, but IDENTIFY is described as a "system for motion monitoring," not explicitly an AI/ML diagnostic or predictive device in the traditional sense that would require a large training dataset with labeled ground truth of patient outcomes.
    9. How the ground truth for the training set was established:

      • Not applicable/Not provided. As above, a specific training set with associated ground truth is not detailed, as this appears to be a traditional medical device verification and validation rather than an AI/ML model for clinical decision support.

    K Number
    K240431
    Manufacturer
    Date Cleared
    2024-07-24

    (161 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    IYE

    Intended Use

    ExacTrac Dynamic is intended to position patients at an accurately defined point within the treatment beam of a medical accelerator for stereotactic radiosurgery or radiotherapy procedures, to monitor the patient position and to provide a beam hold signal in case of a deviation in order to treat lesions, tumors and conditions anywhere in the body when radiation treatment is indicated.

    Device Description

    ExacTrac Dynamic (ETD) is a patient positioning and monitoring device used in a radiotherapy environment as an add-on system to standard linear accelerators (linacs). It uses radiotherapy treatment plans and the associated computed tomography (CT) data to determine the patient's planned position and compares it via oblique X-ray images to the actual patient position. The calculated correction shift will then be transferred to the treatment machine to align the patient correctly at the machine's treatment position. During treatment, the patient is monitored with a thermal-surface camera and X-ray imaging to ensure that there is no misalignment due to patient movement. Positioning and monitoring are also possible in combination with implanted markers. By defining the marker positions, ExacTrac Dynamic can position the patient by using X-rays and thereafter monitor the position during treatment.

    Additionally, ExacTrac Dynamic features a breath-hold (BH) functionality to serve as a tool to assist respiratory motion management. This functionality includes special features and workflows to correctly position the patient at a BH level and thereafter monitor this position using surface tracking. Regardless of the treatment indication, a correlation between the patient's surface and internal anatomy must be evaluated with Image-Guided Radiation Therapy. The manually acquired X-ray images support a visual inspection of organs at risk (OARs). The aim of this technique is to treat the patient only during breath-hold phases where the treatment target is at a certain position, to reduce respiratory-induced tumor motion and to ensure a certain planned distance to OARs such as the heart. In addition to the X-ray based positioning technique, the system can also monitor the patient after external devices such as Cone-Beam CT (CBCT) have been used to position the patient.

    The ExacTrac Dynamic Surface (ETDS) is a camera-only platform without the X-ray system and is available as a configuration which enables surface-based patient monitoring. This system includes an identical thermal-surface camera, workstation, and interconnection hardware to the linac as the ETD system. The workflows supported by ETDS are surface based only and must be combined with an external IGRT device (e.g., CBCT).

    AI/ML Overview

    The FDA 510(k) summary for Brainlab AG's ExacTrac Dynamic (2.0) and ExacTrac Dynamic Surface provides information regarding its performance testing to demonstrate substantial equivalence to its predicate device, ExacTrac Dynamic 1.1 (K220338).

    Here is a breakdown of the requested information based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state a table of "acceptance criteria" side-by-side with "reported device performance" values for all aspects of the device. However, it does reference the AAPM Task Group 147 guidelines for the accuracy of surface-based monitoring and indicates that specific tests aimed to verify that accuracy specifications were not negatively affected. From the "Bench Tests" section, we can infer the objectives of the tests and how performance was evaluated.

    | Feature/Test | Acceptance Criteria (Inferred/Referenced) | Reported Device Performance |
    |---|---|---|
    | Rigid Body Surface Monitoring Accuracy Test | Feasibility of surface-based patient monitoring for radiotherapy and adherence to AAPM Task Group 147 guidelines for non-radiographic radiotherapy localization and positioning systems. | The accuracy of the surface-based monitoring functionality was "checked using this new camera revision and an in-house phantom" with the goal to "prove the feasibility". The document implies successful demonstration of feasibility and adherence. |
    | Workflow & Accuracy Test ExacTrac Dynamic | Accuracy specifications for patient positioning and monitoring at phantom treatment with ExacTrac Dynamic are not affected by relevant conditions, settings, and workflows. | The test was conducted to "verify that accuracy specifications... are not affected". The conclusion of substantial equivalence implies these specifications were met. |
    | Response Time Measurement | Implicit: measure the time between phantom movement and the "Beam-off" signal, and the "out of tolerance" signal appearance. No explicit numerical threshold is given in the provided text. | The test "measures the time" and "is tracked." The conclusion of substantial equivalence implies acceptable response times. |
    | Verification of the Radiation Isocenter Calibration in ETD 2.0 | Not inferior to the previous, well-established Radiation Isocenter Calibration in ETD 1.1 by more than a given threshold. | The test was intended to "demonstrate that the Radiation Isocenter Calibration in ETD 2.0 is not inferior" within the specified threshold. The conclusion of substantial equivalence implies this was demonstrated. |
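    A response-time test of this kind can be sketched as scanning a log of timestamped samples for the gap between the first out-of-tolerance reading and the first subsequent beam-off flag (an illustrative reconstruction; the document does not describe the actual log format or measurement setup):

```python
def beam_off_latency(samples, tolerance_mm):
    """samples: iterable of (timestamp_s, deviation_mm, beam_on) tuples
    from a phantom run. Returns the delay in seconds between the first
    out-of-tolerance sample and the first later beam-off sample, or
    None if the sequence never completes."""
    t_exceeded = None
    for t, deviation, beam_on in samples:
        if t_exceeded is None and deviation > tolerance_mm:
            t_exceeded = t
        if t_exceeded is not None and not beam_on:
            return t - t_exceeded
    return None
```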

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document mentions the use of an "in-house phantom" for the Rigid Body Surface Monitoring Accuracy Test and "phantom treatment" for the Workflow & Accuracy Test. It does not provide specific numerical sample sizes (e.g., number of phantom instances, number of trials).
    • Data Provenance: All testing appears to be non-clinical bench and phantom work conducted internally (in-house phantom); the retrospective/prospective distinction does not apply, and there is no mention of data from human subjects or specific countries of origin.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    This information is not provided in the document. The testing described focuses on device performance against physical standards (e.g., phantom, previous system performance) and referenced guidelines (AAPM Task Group 147), rather than expert-established ground truth on clinical images.

    4. Adjudication Method for the Test Set

    This information is not provided in the document. Given the nature of the bench and phantom tests, an adjudication method by experts is not described as it would be for clinical image interpretation studies.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, an MRMC comparative effectiveness study was not reported. The document states, "No clinical testing was required for the subject device." The testing described focuses on the device's physical performance, accuracy, and workflow.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop)

    The described tests, specifically the "Rigid Body Surface Monitoring Accuracy Test," "Workflow & Accuracy Test ExacTrac Dynamic," "Response Time Measurement," and "Verification of the Radiation Isocenter Calibration," appear to evaluate the device's inherent performance characteristics, often in a controlled phantom environment. This implies a focus on the standalone capabilities of the system, even though its ultimate use is in assisting human operators in a radiotherapy setting. The "Beam-off signal" in response to movement is an automated system response, indicating standalone algorithmic functioning.

    7. Type of Ground Truth Used

    The ground truth for the performance tests appears to be:

    • Physical Phantoms: An "in-house phantom" for surface monitoring accuracy and "phantom treatment" for workflow and accuracy.
    • Established Reference System Performance: Comparison to the "previous, well-established Radiation Isocenter Calibration in ETD 1.1."
    • Industry Guidelines: Reference to the "quality assurance guidelines for non-radiographic radiotherapy localization and positioning systems" defined by AAPM Task Group 147.
    • Expected System Behavior: Verification of expected responses (e.g., "Beam-off signal") to phantom movement.

    8. Sample Size for the Training Set

    This information is not provided in the document. The 510(k) summary focuses on verification and validation testing, not the development or training of specific algorithms that would require a "training set" in the context of machine learning.

    9. How the Ground Truth for the Training Set was Established

    As no training set is mentioned (see point 8), this information is not applicable/provided.


    K Number
    K232738
    Date Cleared
    2024-05-31

    (267 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    IYE

    Intended Use

    BeamDose is a software for the following purposes in radiotherapy:

    • absolute dose measurements as field class dosemeter (according to IEC 60731)
    • monitor calibration
    • positioning of detectors in PTW water phantoms

    The software enables the user of a BEAMSCAN, TANDEM, TANDEM XDR, UNIDOS E, UNIDOS webline, UNIDOS Tango, UNIDOS Romeo or MULTIDOS electrometer to operate the electrometer as a therapy dosemeter in accordance with IEC 60731.

    The software establishes the communication with the electrometer, provides calibration and correction factors for various detectors and displays the measurement results.

    Additionally, the software enables the positioning of a measuring detector in the desired measuring depth with a motorized PTW water phantom.

    The measured absolute dose values must not be used directly in radiation therapy. They have to be checked for plausibility by qualified personnel.

    The software must be used only by qualified personnel, usually the medical physicist responsible for the radiotherapy system or an authorized person.

    Device Description

    The software takes measurements with BEAMSCAN, TANDEM, TANDEM XDR, UNIDOS E, UNIDOS webline, UNIDOS Tango, UNIDOS Romeo, and MULTIDOS electrometers and calculates absolute dose values.

    The software controls the positioning of detectors in BEAMSCAN, MP3, MP2, and MP1 water phantoms.

    The software comprises the readout of the detector data from a data base (Detector Library) with calibration factors and other detector parameters.

    The software corrects measurement data according to temperature and atmospheric pressure and with user correction factor.
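    The summary does not give the correction formula, but the standard air-density correction applied to open (vented) ionization chamber readings in dosimetry practice is k_TP = ((273.15 + T) / (273.15 + T0)) · (P0 / P), with reference conditions commonly 20 °C and 101.325 kPa. A sketch under that assumption (not PTW's implementation):

```python
def k_tp(temp_c: float, pressure_kpa: float,
         ref_temp_c: float = 20.0, ref_pressure_kpa: float = 101.325) -> float:
    """Air-density correction factor for an open (vented) ionization
    chamber: scales the raw reading to reference temperature/pressure.
    Reference conditions here are a common convention, assumed for
    illustration rather than taken from this document."""
    return ((273.15 + temp_c) / (273.15 + ref_temp_c)) * \
           (ref_pressure_kpa / pressure_kpa)

# Example: warm, low-pressure room -> correction factor slightly above 1
raw_reading = 1.000                        # arbitrary units
corrected = raw_reading * k_tp(22.5, 100.8)
```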

    The software supports RS232 and TCP/IP interfaces to read out measurement data from the electrometers and to operate the water phantoms.

    AI/ML Overview

    The provided text describes the BeamDose software, its intended use, and its performance relative to a predicate device, DoseView 3D, in the context of a 510(k) premarket notification. The document states that the BeamDose software was tested to evaluate and verify that it meets the required performance specifications which are defined in the product standard IEC 60731:2011 (Medical electrical equipment - Dosimeters with ionization chambers as used in radiotherapy).

    Here's a breakdown of the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The acceptance criteria for the BeamDose software are primarily derived from the product standard IEC 60731:2011. The reported device performance is presented as fulfilling these criteria when used with specific electrometers (BEAMSCAN, TANDEM, TANDEM XDR).

    | Performance Metric (Acceptance Criteria per IEC 60731:2011) | BeamDose with BEAMSCAN | BeamDose with TANDEM/TANDEM XDR |
    |---|---|---|
    | Measuring specifications: | | |
    | Zero drift | ≤ ± 0.5 % | ≤ ± 1 % |
    | Non-linearity | ≤ ± 0.5 % | ≤ ± 0.5 % |
    | Effect of influence quantities: | | |
    | Range changing (response) | ≤ ± 1 % | ± (0.5 % + 1 digit) of display |
    | Stabilization time (response) | | |

    K Number
    K232923
    Date Cleared
    2024-04-30

    (224 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    IYE

    Intended Use

    Ethos Treatment Management is indicated for use in managing and monitoring radiation therapy treatment plans and sessions.

    Ethos Treatment Planning is indicated for use in generating and modifying radiation therapy treatment plans.

    Device Description

    Ethos Treatment Management is a software product designed to help radiation therapy medical professionals manage treatments for patients with malignant or benign diseases for whom radiation therapy is indicated. It allows the physician to create and communicate radiation treatment intent (RT intent) to the treatment planner, review and approve candidate plans, and monitor treatment progress. It is intended to be used with a treatment planning system to treat or alleviate disease in humans by streamlining the treatment management and monitoring processes.

    Ethos Treatment Planning is a standalone software device designed to generate and modify radiation therapy treatment plans and manage treatment sessions. The device supports the traditional and adapted treatments, in which the scheduled plan is adapted to the patient's anatomy at the time of treatment.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the AI segmentation models within the Ethos Treatment Management 3.0 and Ethos Treatment Planning 2.0 devices, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance (AI Model Validation for Contouring)

    | Validation Characteristic | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|---|
    | Contour Quality | Minor or no adjustments needed in at least 80% of test cases. | Consistently produced contours that needed minor or no adjustments in at least 80% of the test cases. |
    | Quantitative Metric (DICE coefficient) | Comparison benchmark against references published in the literature. | Used as a comparison benchmark, especially when introducing a model for a new organ. |
    | Model Type | Not explicitly stated as acceptance criteria; a characteristic of the model. | Convolutional neural networks with static weights; do not continuously learn. |
    | Image Resolution Handling | Not explicitly stated as acceptance criteria; a characteristic of the model's operation. | Operates on suitable image resolutions; patient images are resampled before inference, and label maps are sampled back onto the patient image grid. |
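    The DICE similarity coefficient used above as the quantitative benchmark is a standard overlap measure between two segmentation masks, 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (illustrative, not Varian's implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DICE similarity between two binary segmentation masks.
    Returns 1.0 for two empty masks by convention."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```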

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 1045 scans from various body sites.
      • Full body: (part of 179 total patients)
      • Head and Neck: (part of 1173 total patients)
      • Thorax: (part of 600 total patients)
      • Abdomen: (part of 527 total patients)
      • Bowel: (part of 507 total patients)
      • Pelvis: (part of 1192 total patients)
    • Data Provenance: The largest number and percentage proportionally of scans originated from patients in the United States. Other country origins are not specified but implied to be varied due to "various healthcare facilities worldwide" mentioned for expert evaluation. The data appears to be retrospective, collected from patients with existing treatment indications for various cancers.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not explicitly stated, but referred to as "Experts" (plural).
    • Qualifications of Experts: Radiation oncologists, dosimetrists, and physicists from various healthcare facilities worldwide, with "significant clinical experience in segmentation of CT imaging for the different disease sites covered by the AI models."

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated as a formal adjudication method (like 2+1 or 3+1). The text mentions that "Experts... evaluated the quality of the contours across test sets to assess the need and the type of contour adjustments." This suggests a consensus-based or individual expert assessment of the AI-generated contours against their clinical judgment, but not a specific multi-reader adjudication protocol for the initial ground truth creation for the test set. For the model validation of contour quality, experts assessed the AI output.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • A formal MRMC comparative effectiveness study comparing human readers with AI vs. without AI assistance was not explicitly described in the provided text. The validation process involved experts evaluating the AI-generated contours to assess "the time saved on contouring tasks," which hints at an indirect measure of assistive benefit, but not a direct MRMC study.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone performance assessment was done. The "Contouring performance undergoes rigorous evaluation through verification and validation processes." This included quantitative metrics like the DICE similarity coefficient. The validation also focused on the AI models producing contours that required "minor or no adjustments in at least 80% of the test cases," which is a metric of the AI's standalone output quality observed by experts.

    7. The Type of Ground Truth Used

    • Ground Truth Type: Expert consensus based on "human anatomy experts" following "RTOG and DAHANCA clinical guidelines." Pathology or outcomes data were not mentioned as ground truth for segmentation.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: 4769 scans from various body sites.
      • Full body: (part of 179 total patients)
      • Head and Neck: (part of 1173 total patients)
      • Thorax: (part of 600 total patients)
      • Abdomen: (part of 527 total patients)
      • Bowel: (part of 507 total patients)
      • Pelvis: (part of 1192 total patients)

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth Establishment for Training Set: "Ground truth annotations were established by human anatomy experts as part of the algorithm development following RTOG and DAHANCA clinical guidelines. A single set of contours was produced for each training image. These clinical experts have significant clinical experience in segmentation of CT imaging for the different disease sites covered by the AI models. To ensure accuracy, contour definitions available in contouring guidelines are established prior to contouring tasks."

    Page 1 of 56