510(k) Data Aggregation

Search Results

Found 5 results

    K Number: K171352
    Device Name: EZFluence
    Manufacturer:
    Date Cleared: 2017-12-01 (206 days)
    Product Code:
    Regulation Number: 892.5050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K141283

    Intended Use

    EZFluence is intended to assist radiation treatment planning professionals in generating optimal fluences for producing a homogeneous dose distribution in external beam radiation therapy treatment plans consisting of photon treatment fields.

    Device Description

    The EZFluence device (model RADEZ) is software intended to assist radiation treatment planning professionals in generating optimal fluences for producing a homogeneous dose distribution in external beam radiation therapy treatment plans consisting of photon treatment fields. Inputs are plan and patient data obtained from the Eclipse Treatment Planning System (also referred to as Eclipse TPS) of Varian Medical Systems. EZFluence runs as a dynamic link library (DLL) plugin to Varian Eclipse. It is designed to run on the Windows Operating System. EZFluence performs calculations on the plan obtained from Eclipse TPS (Version 13.5 (K141283) and Version 13.7 (K152393)), which is a software device used by trained medical professionals to design and simulate radiation therapy treatment plans for malignant or benign diseases.
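
    The summary does not describe EZFluence's algorithm beyond the intended use above, but the general idea of deriving per-bixel fluence weights that flatten a dose profile can be sketched briefly. The forward model, weighting scheme, and convergence parameters below are illustrative assumptions for a minimal sketch, not the vendor's actual method.

```python
import numpy as np

def homogenize_fluence(dose_profile, target_dose, iterations=20, relaxation=0.5):
    """Iteratively rescale per-bixel fluence weights so that a toy dose profile
    approaches a uniform target value (illustrative forward model only)."""
    fluence = np.ones_like(dose_profile, dtype=float)
    for _ in range(iterations):
        dose = fluence * dose_profile                      # toy forward model
        ratio = target_dose / np.clip(dose, 1e-6, None)
        fluence *= ratio ** relaxation                     # damped correction toward the target
    return fluence

# Example: an off-axis profile that falls off toward the field edges
profile = np.array([1.00, 0.97, 0.93, 0.88, 0.82])
weights = homogenize_fluence(profile, target_dose=1.0)
print(np.round(weights, 3))            # larger weights where the unmodified dose is low
print(np.round(weights * profile, 3))  # resulting toy dose, close to 1.0 everywhere
```

    In a real optimizer the forward model would be a full dose calculation performed by the TPS rather than a simple per-bixel scaling.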

    AI/ML Overview

    The document provided discusses the EZFluence device, a software intended to assist radiation treatment planning professionals. However, it does not contain specific details about acceptance criteria, reported device performance figures (like sensitivity, specificity, or accuracy), sample sizes for test or training sets, data provenance, the number or qualifications of experts, ground truth establishment, or any MRMC studies.

    The document primarily focuses on establishing substantial equivalence to a predicate device (Eclipse Treatment Planning System) based on similar indications for use and technological characteristics. It mentions that "Verification tests were performed to ensure that the software works as intended and pass/fail criteria were used to verify requirements," but it does not elaborate on these criteria or the results.

    Therefore, based solely on the provided text, I cannot generate a table of acceptance criteria and reported performance, nor can I answer many of the specific questions about the study design and results, as this information is not present in the excerpt.

    Here's what I can extract and what's missing:


    Inability to Fulfill Request Due to Lack of Information

    The provided document (K171352) is a 510(k) summary for the EZFluence device. While it describes the device's intended use and compares it to a predicate device, it does not contain the detailed performance data, acceptance criteria, sample sizes, expert qualifications, or ground truth methodology that would be required to answer the questions in the prompt.

    The document explicitly states under "5.7 Performance Data": "As with the Predicate Device, no clinical trials were performed for EZFluence. Verification tests were performed to ensure that the software works as intended and pass/fail criteria were used to verify requirements." However, it does not provide the specific "pass/fail criteria" or the results of these "verification tests."
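
    To make that statement concrete, a software verification test with a pass/fail criterion typically takes a form like the short sketch below. The requirement identifier, tolerance, and homogeneity criterion are hypothetical, chosen only to illustrate the pattern the summary describes, not actual EZFluence requirements.

```python
def fluence_is_homogeneous(dose_profile, tolerance=0.03):
    """Pass/fail criterion (illustrative): every point of the optimized dose
    profile must lie within +/- tolerance of the profile mean."""
    mean = sum(dose_profile) / len(dose_profile)
    return all(abs(d - mean) <= tolerance * mean for d in dose_profile)

# Verification-style check: the (hypothetical) requirement REQ-DOSE-001 is
# verified if the pass/fail criterion holds for the optimized profile.
optimized_profile = [1.00, 0.99, 1.01, 1.00, 0.98]
assert fluence_is_homogeneous(optimized_profile), "Requirement REQ-DOSE-001: FAIL"
print("Requirement REQ-DOSE-001: PASS")
```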

    Therefore, I cannot construct the requested table or provide answers to most of the specific questions.


    What can be inferred or directly stated from the document:

    • Device Name: EZFluence
    • Intended Use: To assist radiation treatment planning professionals in generating optimal fluences for producing a homogeneous dose distribution in external beam radiation therapy treatment plans consisting of photon treatment fields. (Section 5.5)
    • Regulatory Class: Class II (Section 5.2)
    • Predicate Device: Eclipse Treatment Planning System (K152393) (Section 5.3)
    • Study Type: Verification tests were performed; no clinical trials were performed. (Section 5.7)
    • Performance Metrics Reported: None explicitly stated (e.g., no sensitivity, specificity, accuracy, or any quantitative metric of "optimal fluence" or "homogeneous dose distribution" quality).

    Missing Information (Cannot be answered from the provided text):

    1. A table of acceptance criteria and the reported device performance: This information is not present. The document mentions "pass/fail criteria were used to verify requirements" but does not detail them or the results.
    2. Sample sizes used for the test set and the data provenance: Not mentioned.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not mentioned.
    4. Adjudication method for the test set: Not mentioned.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance: The document explicitly states "no clinical trials were performed for EZFluence," suggesting no such MRMC study was conducted. The device is also described as assisting professionals, not as an AI-driven diagnostic tool for human readers in the typical MRMC context.
    6. If a standalone (i.e., algorithm only, without human-in-the-loop performance) evaluation was done: Not explicitly detailed. The "verification tests" mentioned are likely standalone software tests, but no performance metrics are given.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not mentioned.
    8. The sample size for the training set: Not mentioned.
    9. How the ground truth for the training set was established: Not mentioned.

    Conclusion based on provided text: The document serves as a regulatory submission demonstrating substantial equivalence rather than a detailed scientific study report detailing performance metrics and validation methodologies.


    K Number: K171350
    Device Name: Collision Check
    Manufacturer:
    Date Cleared: 2017-11-29 (204 days)
    Product Code:
    Regulation Number: 892.5050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K131891, K141283, K152393

    Intended Use

    CollisionCheck is intended to assist radiation treatment planners in predicting when a treatment plan might result in a collision between the treatment machine and the patient or support structures.

    Device Description

    The CollisionCheck device (model RADCO) is software intended to assist users in identifying where collisions between the treatment machine and the patient or support structures may occur in a treatment plan. The treatment plans are obtained from the Eclipse Treatment Planning System (also referred to as Eclipse TPS) of Varian Medical Systems. CollisionCheck runs as a dynamic link library (DLL) plugin to Varian Eclipse. It is designed to run on the Windows Operating System. CollisionCheck performs calculations on the plan obtained from Eclipse TPS (Version 12 (K131891), Version 13.5 (K141283), and Version 13.7 (K152393)), which is software used by trained medical professionals to design and simulate radiation therapy treatments for malignant or benign diseases.
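
    The submission does not describe the collision model, although the analysis below notes the linac is modeled as a cylinder. A minimal 2-D sketch of that style of geometric clearance check follows; the dimensions, angle convention, and shapes are assumptions chosen for illustration, not the device's actual model.

```python
import numpy as np

def circle_rect_clearance(center, radius, rect_min, rect_max):
    """Minimum clearance between a circle (gantry-head cross-section) and an
    axis-aligned rectangle (patient/couch cross-section). Negative => collision."""
    cx, cy = center
    qx = min(max(cx, rect_min[0]), rect_max[0])  # closest rectangle point to the circle center
    qy = min(max(cy, rect_min[1]), rect_max[1])
    return float(np.hypot(cx - qx, cy - qy)) - radius

def flag_collision_angles(head_radius=40.0, head_distance=60.0,
                          rect_min=(-25.0, -20.0), rect_max=(25.0, 12.0)):
    """Sweep gantry angles and report those with negative clearance
    (toy model; units cm; all dimensions are illustrative)."""
    flagged = []
    for angle in range(0, 360, 5):
        theta = np.deg2rad(angle)
        # Head center travels on a circle of radius head_distance around the isocenter
        center = (head_distance * np.sin(theta), head_distance * np.cos(theta))
        if circle_rect_clearance(center, head_radius, rect_min, rect_max) < 0:
            flagged.append(angle)
    return flagged

print(flag_collision_angles())  # gantry angles (deg) flagged as potential collisions
```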

    AI/ML Overview

    The provided text describes the regulatory clearance of CollisionCheck (K171350) and compares it to a predicate device, Mobius3D (K153014). However, it does not contain specific details about acceptance criteria, the study design (e.g., sample size, data provenance, ground truth establishment, expert qualifications, or adjudication methods), or MRMC study results. The document states that "no clinical trials were performed for CollisionCheck" and mentions "Verification tests were performed to ensure that the software works as intended and pass/fail criteria were used to verify requirements." This implies that the performance demonstration was likely limited to software verification and validation, rather than a clinical performance study with human-in-the-loop or standalone AI performance metrics.

    Therefore, many of the requested details cannot be extracted from the provided text. I will provide what can be inferred or stated as absent based on the document.


    Acceptance Criteria and Device Performance

    The document does not explicitly list quantitative acceptance criteria with corresponding performance metrics like sensitivity, specificity, or F1-score for the CollisionCheck device. Instead, the performance demonstration focuses on software verification and validation to ensure the device works as intended and is as safe and effective as the predicate device.

    Table of Acceptance Criteria and Reported Device Performance (Inferred/Based on Document Context):

    Acceptance Criterion (Inferred from regulatory context and V&V) | Reported Device Performance (Inferred/Based on V&V Statement)
    Functionality: Accurately simulate the treatment plan and predict gantry collisions with the patient or support structures. | Verification tests confirmed the software works as intended, indicating successful simulation and collision prediction. (Pass)
    Safety: Device operation does not introduce new safety concerns compared to the predicate. | Hazard Analysis demonstrated the device is as safe as the Predicate Device. (Pass)
    Effectiveness: Device effectively assists radiation treatment planners in identifying potential collisions. | Verification tests confirmed the software works as intended, indicating effective assistance in collision identification. (Pass)
    Algorithm Accuracy (Collision Prediction): Implicitly, the algorithm should correctly identify collision events when they occur and not falsely identify them when they do not. | No specific accuracy metrics (e.g., sensitivity, specificity, precision/recall) reported. Performance is based on successful completion of verification tests.
    Comparison to Predicate: Substantially equivalent to Mobius3D's collision check feature regarding safety and effectiveness. | Minor technological differences do not raise new questions of safety and effectiveness. Deemed substantially equivalent. (Pass)

    Study Details:

    Given the statement "no clinical trials were performed for CollisionCheck," and the focus on "Verification tests," most of the questions regarding a typical AI performance study (like those involving test sets, ground truth experts, MRMC studies) cannot be answered with specific data from this document. The performance demonstration appears to have been solely based on internal software verification and validation activities.

    1. Sample sizes used for the test set and data provenance:

      • Test Set Sample Size: Not specified. The document only mentions "verification tests" and "pass/fail criteria."
      • Data Provenance: Not specified. It's likely synthetic or internal clinical data used for software testing, rather than a distinct, prospectively collected, or retrospectively curated clinical test set for performance evaluation in a regulatory sense.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not applicable/Not specified. Given that "no clinical trials were performed," it's highly improbable that a formal expert-adjudicated ground truth was established for a test set in the context of an AI performance study. Ground truth in this context would likely be defined by the physics-based simulation of collisions within the software's design.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not applicable/Not specified. No adjudication method is mentioned, consistent with the absence of a clinical performance study involving human readers.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, an MRMC comparative effectiveness study was not done. The document explicitly states, "no clinical trials were performed." Therefore, no effect size of human reader improvement with AI assistance is reported.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • While the "verification tests" would evaluate the algorithm's standalone functionality, the document does not provide specific performance metrics (e.g., sensitivity, specificity) for its standalone performance that would typically be seen in a standalone AI evaluation. The device assists a human user, so its "standalone" performance wouldn't be in isolation but rather its ability to correctly identify collisions as defined by its internal models.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The document implies a physics-based or computational ground truth. The device performs calculations and simulations. The "ground truth" for its verification and validation would be whether its simulation correctly identifies collisions based on defined geometric and physical parameters. It's not based on expert consensus, pathology, or outcomes data, as it's a planning assistance tool, not a diagnostic one.
    7. The sample size for the training set:

      • Not applicable/Not specified. The document describes CollisionCheck as software that performs calculations and simulations (modeling the linac as a cylinder, supporting applicators, etc.). It is not described as an AI or machine learning model that requires a "training set" in the conventional sense of supervised learning on a large dataset. Its functionality is likely rule-based or physics-informed, rather than learned from data.
    8. How the ground truth for the training set was established:

      • Not applicable/Not specified. Since it's not described as an ML model with a training set, the concept of establishing ground truth for a training set does not apply here. The "ground truth" for its development would be the accurate mathematical and physical modeling of collision scenarios.

    K Number: K162468
    Device Name: ClearCheck
    Manufacturer:
    Date Cleared: 2016-12-01 (90 days)
    Product Code:
    Regulation Number: 892.5050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K131891, K141283

    Intended Use

    ClearCheck is intended for quality assessment of radiotherapy treatment plans.

    Device Description

    The ClearCheck device (model RADCC) is software intended to present treatment plans obtained from the Eclipse Treatment Planning System (also referred to as Eclipse TPS) of Varian Medical Systems in a user-friendly way (numerical form of data) for user approval of the treatment plan. ClearCheck runs as a dynamic link library (dll) plugin to Varian Eclipse. It is designed to run on the Windows Operating System, and generated reports can be viewed in Internet Explorer. ClearCheck performs calculations on the plan obtained from Eclipse TPS (Version 12 (K131891) and Version 13.5 (K141283)), which is software used by trained medical professionals to design and simulate radiation therapy treatments for malignant or benign diseases. ClearCheck has two components: 1. a standalone Windows Operating System executable application used for administrative operations to set specified default settings and user settings, and 2. a plan evaluation application, a dynamic link library (dll) file that is a plugin to the Varian Medical Systems Eclipse TPS. The plugin is designed to evaluate the quality of an Eclipse treatment plan. Plan quality is based on user-specified Dose Constraints and Plan Check Parameters.
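
    The summary does not specify how Dose Constraints and Plan Check Parameters are evaluated. A minimal sketch of that kind of constraint check, assuming simple Dmax and Vx metrics over per-voxel dose arrays (the structures, metrics, and limits are illustrative), might look like this:

```python
import numpy as np

def evaluate_constraints(dose_by_structure, constraints):
    """Evaluate simple plan-quality constraints against per-voxel dose arrays.

    dose_by_structure : dict of structure name -> 1-D array of voxel doses (Gy)
    constraints       : list of dicts with keys 'structure', 'metric', 'limit';
                        metric "Dmax" checks the maximum voxel dose, and
                        metric ("Vx", x) checks the % volume receiving >= x Gy.
    """
    results = []
    for c in constraints:
        dose = dose_by_structure[c["structure"]]
        if c["metric"] == "Dmax":
            value = float(dose.max())
        else:                                   # ("Vx", threshold in Gy)
            _, threshold = c["metric"]
            value = 100.0 * float(np.mean(dose >= threshold))
        results.append((c, value, value <= c["limit"]))
    return results

# Illustrative dose arrays and limits, not clinical recommendations
doses = {
    "SpinalCord": np.random.default_rng(0).uniform(0, 44, 5000),
    "Lung":       np.random.default_rng(1).uniform(0, 60, 20000),
}
checks = [
    {"structure": "SpinalCord", "metric": "Dmax",       "limit": 45.0},
    {"structure": "Lung",       "metric": ("Vx", 20.0), "limit": 35.0},
]
for c, value, ok in evaluate_constraints(doses, checks):
    print(c["structure"], c["metric"], round(value, 1), "PASS" if ok else "FAIL")
```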

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the ClearCheck device, organized according to your request.

    Please note: The provided document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device. It explicitly states that "no clinical trials were performed for ClearCheck" (Section 5.7). Therefore, a substantial portion of your requested information (e.g., MRMC studies, specific performance metrics against ground truth, expert qualifications, adjudication methods, sample sizes for test/training sets with ground truth derivation methods) is not present in this type of regulatory submission. The verification tests mentioned are likely internal software validation rather than clinical performance studies.


    1. Table of acceptance criteria and the reported device performance

    The document does not provide a specific table of quantitative acceptance criteria for device performance based on clinical outcomes or accuracy metrics. Instead, "pass/fail criteria were used to verify requirements" during internal verification tests. These requirements are implicit in the comparison to the predicate device and the claim of substantial equivalence.

    Acceptance Criteria Category | Reported Device Performance / Assessment
    Intended Use | ClearCheck is intended for quality assessment of radiotherapy treatment plans, equivalent to the predicate device.
    Pure Software Device | Yes, equivalent to the predicate device.
    Intended Users | Medical physicists, medical dosimetrists, and radiation oncologists, equivalent to the predicate device.
    OTC/Rx | Prescription use (Rx), equivalent to the predicate device.
    Operating System | Runs on Windows 7, 8, 10, Server 2008, 2008 R2, 2012. Supports an additional OS (Windows 10) compared to the predicate, which does not raise new safety/effectiveness questions.
    CPU | 2.4+ GHz and multi-core processors (2+ cores, 4+ threads), equivalent to the predicate device.
    Hard Drive Space | Requires ~3.5 MB for software (vs. 20 MB for predicate); suggests 100 GB for patient data (vs. 900 GB for predicate). Difference acknowledged and deemed not to raise new safety/effectiveness questions because ClearCheck stores constraint templates, not large DICOM datasets like the predicate.
    Display Resolution & Color Depth | 1280 x 1024, 24- or 32-bit color depth (vs. 1920 x 1080 for predicate). Difference acknowledged and deemed not to raise new safety/effectiveness questions as it supports smaller monitors without affecting image quality.
    Software Functionality | Performs calculations on plans from Eclipse TPS based on user-specified Dose Constraints and Plan Check Parameters. Verification tests were performed to ensure the software works as intended and passed requirements.
    Safety and Effectiveness | Deemed as safe and effective as the predicate device through Verification and Validation testing and Hazard Analysis.

    2. Sample size used for the test set and the data provenance

    The document does not specify a "test set" in the context of clinical or performance data. It mentions that "Verification tests" were performed for the software. These tests would involve internally generated data or existing clinical plans to validate the software's functionality, but no details on sample size, data provenance, or specific test cases are provided for external review.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided. Since no clinical trials or external performance evaluations of this nature were conducted (as stated in Section 5.7), the concept of "ground truth" as derived by experts for a test set is not applicable to the submitted performance data.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided for the same reasons as above.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC comparative effectiveness study was done. The document explicitly states "no clinical trials were performed for ClearCheck." The device is a "quality assessment" tool for radiotherapy plans, not an AI for image interpretation or diagnosis that would typically involve human reader performance studies.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    A standalone performance assessment in the sense of the algorithm's internal calculations and functionality was performed as part of "Verification tests." However, specific quantitative metrics common for standalone AI algorithms (e.g., sensitivity, specificity, AUC against a clinical ground truth) are not provided in this regulatory summary. The device's "performance" is primarily assessed by its functional correctness and consistency with the predicate device's overall purpose of quality assessment.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    This is not explicitly stated. For the internal "Verification tests," the "ground truth" would likely be the expected outputs or calculated values based on established physics principles and treatment planning guidelines, which the software is designed to implement and report. This is not the same as clinical "ground truth" derived from patient outcomes or expert consensus on a diagnosis.

    8. The sample size for the training set

    This information is not applicable and not provided. ClearCheck is described as a software tool that performs calculations and presents data based on user-specified dose constraints and plan check parameters from an existing Eclipse TPS plan. It is not an AI/ML algorithm that learns from a "training set" of data to make predictions or classifications.

    9. How the ground truth for the training set was established

    This information is not applicable and not provided, as the device does not employ a machine learning model that requires a training set with associated ground truth.


    K Number: K151560
    Device Name: QuickPlan
    Manufacturer:
    Date Cleared: 2015-09-11 (93 days)
    Product Code:
    Regulation Number: 892.5050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K141283

    Intended Use

    The QuickPlan Treatment Planning System is indicated for use in planning radiotherapy treatments for patients with malignant or benign diseases. After image acquisition, QuickPlan supports the treatment planning process for external beam irradiation with photon, electron, and proton beams by predicting a plan.

    The QuickPlan software does not provide full plan generation; it does not include final dose calculation, final beam geometry, nor does it enable plan approval. QuickPlan is not connected to any radiation emitting equipment. The QuickPlan software is intended for use by trained medical professionals in clinical settings. The QuickPlan software is compatible with Treatment Planning Systems that use the DICOM-RT format.

    Device Description

    The new QUICKPLAN Software manufactured by Siris Medical is an independent software solution to plan radiotherapy treatments for patients with malignant or benign diseases. It is used to plan external beam irradiation with photon, electron, and proton beams. The new QUICKPLAN Software is intended for trained medical professionals to use in clinical settings. The new QUICKPLAN software application includes three modules:

    • QuickMatch - Matches a critical structure-set to the closest match within a database; it is a rapid file locator.
    • QuickPredict - Predicts the dose to critical structures based upon models extracted from historical data from previous patients (a toy sketch of this kind of prediction follows below).
    • QuickCompare - Provides dose estimates to critical structures with one or more alternative energy modalities, i.e., photon vs. proton.

    The new QuickPlan software does not provide full plan generation; it does not include final dose calculation, final beam geometry, nor does it enable plan approval.
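
    QuickPredict is described only as predicting critical-structure dose from models extracted from historical patient data. A toy nearest-neighbor sketch of that idea follows; the feature set, distance metric, and averaging scheme are assumptions for illustration, not Siris Medical's actual model.

```python
import numpy as np

def predict_dose(query_features, historical_features, historical_doses, k=3):
    """Predict a critical-structure dose metric for a new case from the k most
    geometrically similar historical cases (toy nearest-neighbor model;
    no feature scaling, for brevity)."""
    dists = np.linalg.norm(historical_features - query_features, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(historical_doses[nearest]))

# Illustrative features: [target volume (cc), OAR volume (cc), min target-OAR distance (cm)]
history_X = np.array([[120.0, 30.0, 0.5],
                      [200.0, 25.0, 1.2],
                      [150.0, 40.0, 0.8],
                      [ 90.0, 35.0, 2.0]])
history_y = np.array([18.0, 12.5, 15.0, 8.0])   # mean OAR dose (Gy) achieved in prior plans
print(predict_dose(np.array([140.0, 32.0, 0.7]), history_X, history_y))
```
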
    AI/ML Overview

    The provided text does not contain the acceptance criteria or the details of a study demonstrating the device meets such criteria.

    The document is a 510(k) premarket notification summary for the QuickPlan device, focusing on demonstrating substantial equivalence to a predicate device. It highlights the device's indications for use, technological features, and compliance with software standards, but does not present acceptance criteria or a study with performance metrics against those criteria.

    Here's a breakdown of what information is missing based on your request:

    1. A table of acceptance criteria and the reported device performance: This is entirely absent. The document mentions "performance bench testing" but does not provide any specific criteria or quantitative results.
    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective): No test set details are provided.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience): No ground truth establishment details are provided.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: No adjudication method is mentioned.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance: No MRMC study or human reader improvement data is presented.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: The document states "No clinical testing has been performed in support of this QUICKPLAN software 510(k) submission," implying no standalone clinical performance evaluation was done. The bench testing performed would likely be considered standalone in a technical sense, but no performance metrics are given.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not specified.
    8. The sample size for the training set: Not specified. The document mentions "models extracted from historical data looking through previous patient data" but does not provide the size of this historical dataset.
    9. How the ground truth for the training set was established: Not specified.

    In summary: The provided document is a regulatory submission focused on substantial equivalence based on design, intended use, technology, and compliance with general software and medical device standards, rather than a detailed performance study with quantifiable acceptance criteria. It explicitly states, "No clinical testing has been performed in support of this QUICKPLAN software 510(k) submission." Therefore, it does not contain the information required to answer your specific questions about acceptance criteria and a study proving those criteria are met.


    K Number: K151369
    Device Name: .decimal p.d
    Manufacturer:
    Date Cleared: 2015-08-07 (78 days)
    Product Code:
    Regulation Number: 892.5050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K141283

    Intended Use

    The p.d software is used by radiation therapy professionals to assist in the design, manufacturing, and quality assurance testing of various radiation therapy devices used for cancer patients. The p.d software performs three distinct, primary functions, each of which is described below.

    1. The p.d software takes a design of a compensating filter from a Treatment Planning System and converts the Treatment Planning System compensator filter files into a .decimal file format. This file can then be electronically submitted to .decimal through the software, so that we can manufacture the device.
    2. The p.d software can design a beam shaping and compensating filters based on Treatment Planning System and other user supplied data. The device designs for compensating filters will be transferred back into the Treatment Planning System for final dose verification before devices are ordered and used for patient treatment.
    3. The p.d software can perform quality assurance testing of the physical characteristics of treatment devices using data from various types of scanned images, including computed tomography images.
    Device Description

    The .decimal p.d device is a software application that will enable users of various radiation treatment planning systems (TPS) to design, measure, and order beam shaping and modulating devices used in the delivery of various types of radiotherapy, including photon, electron, and particle therapy. The input from the treatment planning systems to the p.d product is generally received in DICOM file format, but other vendor-specific or generic file formats are also utilized. p.d will also provide a simplified radiation dose calculator for the purpose of improving its ability to accurately create/modify patient-specific radiation beam modifying devices without the need for iteration with other treatment planning systems. However, all modulating devices will have final dose verification performed in a commissioned Treatment Planning System before devices are used for patient treatment. Additionally, the p.d software contains tools for analyzing scanned image data that aid users in performing quality assurance measurement and testing of radiotherapy devices.
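
    The "simplified radiation dose calculator" is not described further in the summary. One plausible minimal sketch, assuming straightforward exponential attenuation of an open-field dose map through the compensator thickness (the attenuation coefficient and array layout are illustrative), is shown below.

```python
import numpy as np

def attenuated_dose(open_field_dose, thickness_cm, mu_per_cm=0.5):
    """Toy 'simplified dose calculator': scale an open-field dose map by exponential
    attenuation through a compensating filter of varying thickness.

    open_field_dose : 2-D array of dose without the compensator (Gy)
    thickness_cm    : 2-D array of compensator thickness along each ray (cm)
    mu_per_cm       : assumed linear attenuation coefficient of the filter material
    """
    return np.asarray(open_field_dose) * np.exp(-mu_per_cm * np.asarray(thickness_cm))

open_dose = np.full((3, 3), 2.0)                 # uniform 2 Gy open field
thickness = np.array([[0.0, 1.0, 2.0]] * 3)      # compensator thicker toward one edge
print(np.round(attenuated_dose(open_dose, thickness), 3))
```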

    AI/ML Overview

    The provided text describes the p.d 5.1 software, a device used in radiation therapy. However, it does not contain the detailed information required to fully answer your request regarding acceptance criteria and a specific study proving the device meets them. This document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting a performance study with detailed acceptance criteria.

    While the document indicates some testing was done, it doesn't provide the specifics you're asking for. Here's what can be inferred and what's missing:


    1. A table of acceptance criteria and the reported device performance

    Missing Information: The document does not provide a table of acceptance criteria with specific quantitative thresholds or reported device performance metrics. The testing described is more qualitative and focused on comparing to predicate devices and general software validation.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    Missing Information:

    • Sample Size: Not specified.
    • Data Provenance: Not specified. The document mentions "hospital-based testing partners" but doesn't detail the origin or nature of the data used in validation.
    • Retrospective/Prospective: Not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Missing Information: The document does not specify the number or qualifications of experts used for establishing ground truth in the test set. It mentions "Clinically oriented validation tests were written and executed by .decimal personnel and hospital-based testing partners," but this doesn't detail specific expert involvement for ground truth adjudication.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Missing Information: The document does not describe any specific adjudication method for establishing ground truth for the test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    Missing Information:

    • MRMC Study: No MRMC comparative effectiveness study is mentioned.
    • Effect Size: Not applicable, as no MRMC study was described. The focus is on demonstrating substantial equivalence of the software's functionality to existing tools. This device is an aid to radiation therapy professionals, not an AI to improve human reader performance in a diagnostic context.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Information Available (Inferred): The testing seems to have been primarily standalone, focusing on the software's ability to perform its functions (design filters, convert files, perform QA measurements) and comparing its output to predicate devices. The document states "Clinical testing was not performed... since testing can be performed such that no human subjects are exposed to risk." This suggests the validation was primarily of the software's internal logic and output, rather than its performance in conjunction with a human user in a clinical setting with real patient outcomes.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Information Available: The ground truth for the validation tests was established by:

    • Comparing results to those of known predicate devices (p.d software version 5.0 and Eclipse TPS).
    • Performing quality assurance measurements on devices of known quality.

    This implies a form of "reference standard" or "known truth" derived from established systems and manufactured devices, rather than clinical pathology or patient outcomes.

    8. The sample size for the training set

    Not Applicable/Missing Information: The document describes software validation and verification, not the training of a machine learning model. Therefore, there is no "training set" in the context of AI/ML. If the "p.d software" incorporates algorithms that are based on machine learning, this information is not provided. The phrasing "using nearly identical algorithms and processes" to the predicate software suggests it's more of a deterministic software rather than a trained AI model.

    9. How the ground truth for the training set was established

    Not Applicable/Missing Information: As there's no mention of a training set for an AI/ML model, this question is not applicable.


    Summary of Device Performance (from the document):

    The document concludes with: "These tests show that the p.d software performed equivalently to the predicate device when appropriate and that the software is deemed safe and effective for clinical use."

    This is a general statement of performance, but it lacks the specific, quantifiable acceptance criteria and corresponding reported performance metrics that your request specifies. The 510(k) process primarily aims to demonstrate substantial equivalence, and the provided document reflects that focus.

