Search Results

Found 5 results

510(k) Data Aggregation

    K Number
    K222359
    Date Cleared
    2023-05-30

    (299 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    Quicktome, K203518, Brainlab iPlan Cranial, K113732

    Intended Use

    The Quicktome Software Suite is composed of a set of modules intended for display of medical images and other healthcare data. It includes functions for image review, image manipulation, basic measurements, planning, 3D visualization (MPR reconstructions and 3D volume rendering) and display of BOLD (blood oxygen level dependent) resting-state MRI scan studies.

    Modules are available for image processing, atlas-assisted visualization, resting state analysis and visualization, and target export creation, where an output can be generated for use by a system capable of reading DICOM image sets.

    Quicktome is indicated for use in the processing of diffusion-weighted MRI sequences into 3D maps that represent white-matter tracts based on constrained spherical deconvolution methods and for the use of said maps to select and create exports. Quicktome can generate motor, language, and vision resting state fMRI correlation maps using task-analogous seeds.

    Typical users of Quicktome are medical professionals, including but not limited to surgeons, clinicians, and radiologists.
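
    For context, constrained spherical deconvolution (CSD) estimates per-voxel fiber orientation distributions from diffusion-weighted MRI, which tractography then follows to reconstruct white-matter bundles. The sketch below fits a CSD model with the open-source DIPY library on one of its bundled sample datasets; it illustrates the general technique named in the indications, not Quicktome's actual pipeline.

```python
from dipy.core.gradients import gradient_table
from dipy.data import get_fnames, get_sphere
from dipy.direction import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
                                   auto_response_ssst)

# Publicly available HARDI dataset bundled with DIPY.
fdwi, fbval, fbvec = get_fnames('stanford_hardi')
data, affine = load_nifti(fdwi)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
gtab = gradient_table(bvals, bvecs)

# Estimate the single-fiber response function, then fit the CSD model.
response, _ = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
csd_model = ConstrainedSphericalDeconvModel(gtab, response)

# Fit a small slab to keep the demo fast; per-voxel orientation peaks
# are the input to a tractography algorithm.
slab = data[:, :, 33:37]
peaks = peaks_from_model(csd_model, slab, get_sphere('symmetric724'),
                         relative_peak_threshold=0.5,
                         min_separation_angle=25)
```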

    Device Description

    Quicktome is a software-only, cloud-deployed, image processing package which can be used to perform DICOM image viewing, image processing, and analysis.

    Quicktome can receive ("import") DICOM images from picture archiving and communication systems (PACS), acquired with MRI, including Diffusion Weighted Imaging (DWI) sequences, T1, T2, BOLD, and FLAIR images. Quicktome can also receive Resting State functional MRI (rs-fMRI) blood-oxygen-level-dependent (BOLD) datasets. Once received, Quicktome removes protected health information (PHI) and links the dataset to an encryption key, which is then used to relink the data back to the patient when the data is exported to hospital PACS or other DICOM device.
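
    The de-identify-then-relink flow described above is a common pattern for cloud processing of DICOM data. Below is a minimal sketch of the idea using the open-source pydicom library; the tag list, key store, and function names are illustrative assumptions (a production pipeline would cover the full DICOM de-identification profile and encrypt the stored mapping), and this is not Quicktome's actual code.

```python
import uuid
import pydicom

# Tags treated as PHI in this toy example; a real pipeline would follow
# the complete DICOM de-identification profile.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

def deidentify(path_in, path_out, key_store):
    """Strip PHI, stash it under a fresh key, and tag the file with the key."""
    ds = pydicom.dcmread(path_in)
    key = str(uuid.uuid4())                               # re-linking key
    key_store[key] = {t: str(ds.get(t, "")) for t in PHI_TAGS}
    for t in PHI_TAGS:
        if t in ds:
            setattr(ds, t, "")
    ds.PatientID = key                                    # carry the key instead of PHI
    ds.save_as(path_out)
    return key

def relink(path_in, path_out, key_store):
    """Restore PHI when exporting results back to the hospital PACS."""
    ds = pydicom.dcmread(path_in)
    for t, value in key_store[ds.PatientID].items():
        setattr(ds, t, value)
    ds.save_as(path_out)
```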

    The software provides a workflow for a clinician to:

    • Select an image for planning and visualization,
    • Validate image quality,
    • Explore the available anatomical regions, network templates, tractography bundles, and parcellations,
    • Select regions of interest,
    • Display resting state fMRI (BOLD) correlation maps using task-analogous seeds for Motor, Vision and Language networks, and
    • Export black-and-white and color DICOMs for use in systems that can view DICOM images.
    AI/ML Overview

    The provided text describes the Quicktome Software Suite (K222359), a medical image management and processing system, and its performance evaluation for FDA 510(k) clearance.

    Here's a breakdown of the acceptance criteria and study proving the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't provide a precise, quantified table of acceptance criteria with corresponding performance metrics in a single, clear format. However, it implicitly states the key performance evaluation for the BOLD processing pipeline, which is a significant new feature of this version of the Quicktome Software Suite.

    The primary acceptance criterion for the BOLD processing pipeline appears to be the comparability of resting-state fMRI correlation maps generated by Quicktome to task-based fMRI activation maps for a range of pre-specified seeds.

    Acceptance Criteria (Implied) / Reported Device Performance (as stated in the document):

    • Criterion: Resting-state fMRI correlation maps generated by Quicktome are analytically comparable to task-based fMRI activation maps.
      Performance: "Analytical evaluation demonstrated that activation in a task-based activation map is represented within the bounds of a correlation map generated with resting-state data when using a range of pre-specified seeds and thresholds, supporting substantial equivalence of the two maps."

    • Criterion: Clinicians rate the Quicktome-generated resting-state networks as comparable to task-based fMRI maps for clinical intended uses.
      Performance: "Clinicians rated the networks as comparable per the pre-specified acceptance criteria to task-based fMRI maps for the clinical intended uses of presurgical planning and post-surgical assessment."

    • Criterion: Software units and modules function as required.
      Performance: "Testing was conducted on software units and modules. System verification was performed to confirm implementation of functional requirements."

    • Criterion: Cloud infrastructure is suitable.
      Performance: "Cloud infrastructure verification was performed to ensure suitability of cloud components and services."

    • Criterion: Algorithm computations are sound.
      Performance: "Algorithm performance verification was conducted to ensure computations were sound."

    • Criterion: Usability and design are validated by representative users.
      Performance: "Summative usability evaluation and design validation were performed by representative users."

    • Criterion: BOLD processing pipeline protocols (motion and noise correction, skull stripping, co-registration, physiological noise correction, correlation computation) perform correctly.
      Performance: "Performance evaluations were conducted for the BOLD processing pipeline. Evaluations included protocols for motion and noise correction, skull stripping, co-registration of anatomical scans and BOLD series, physiological noise correction, and correlation matrix computation." (No explicit pass/fail rates or metrics are provided beyond the statement that evaluations were conducted for the specified protocols.)
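
    The seed-based correlation mapping referenced in the first criterion has a standard form: average the BOLD time series within a seed region, compute the Pearson correlation between that average and every voxel's time series, and threshold the result. The following NumPy sketch illustrates that generic computation on synthetic data; the array shapes, seed location, and threshold are illustrative assumptions, not details from the submission.

```python
import numpy as np

def seed_correlation_map(bold, seed_mask):
    """Pearson correlation of each voxel's time series with the mean seed series.

    bold: (x, y, z, t) array; seed_mask: boolean (x, y, z) array.
    """
    seed_ts = bold[seed_mask].mean(axis=0)            # mean time series within the seed
    v = bold.reshape(-1, bold.shape[-1])
    v = v - v.mean(axis=1, keepdims=True)             # demean voxel series
    s = seed_ts - seed_ts.mean()                      # demean seed series
    denom = np.linalg.norm(v, axis=1) * np.linalg.norm(s)
    safe = np.where(denom > 0, denom, 1.0)            # avoid division by zero
    r = np.where(denom > 0, (v @ s) / safe, 0.0)      # Pearson r per voxel
    return r.reshape(bold.shape[:-1])

bold = np.random.rand(16, 16, 12, 120)                # toy resting-state series
seed = np.zeros((16, 16, 12), dtype=bool)
seed[8, 8, 6] = True                                  # hypothetical task-analogous seed
rmap = seed_correlation_map(bold, seed)
network = rmap > 0.3                                  # a pre-specified threshold, as in the text
```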

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the sample size (number of cases/patients) used for the test set. It mentions "a range of pre-specified seeds and thresholds" for the analytical evaluation and "the networks" for the clinician evaluation, implying multiple cases, but no specific count.

    The data provenance (country of origin, retrospective/prospective) for the test set is not specified in the provided text.

    3. Number of Experts and Qualifications for Ground Truth

    The document states "expert clinician evaluation" and "Clinicians rated the networks," but it does not specify the number of experts used or their specific qualifications (e.g., "radiologist with 10 years of experience").

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set's ground truth or clinician evaluation.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The text mentions "expert clinician evaluation" where "Clinicians rated the networks as comparable...to task-based fMRI maps for the clinical intended uses of presurgical planning and post-surgical assessment." This suggests a human-in-the-loop component. However, it does not explicitly describe a traditional MRMC comparative effectiveness study designed to quantify how much human readers improve with AI vs. without AI assistance, nor does it provide an effect size for such improvement. The evaluation focuses on the comparability of the Quicktome-generated maps to established task-based fMRI maps, rather than improvement in human reader performance.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone performance evaluation was implicitly done. The "Analytical evaluation" compared the AI-generated resting-state correlation maps to task-based activation maps. This part of the evaluation assesses the algorithm's output directly without human intervention to rate "substantial equivalence."

    7. Type of Ground Truth Used

    The ground truth used for evaluating the BOLD processing pipeline was task-based fMRI activation maps. These are generally considered a well-established method for localizing brain function.

    • Analytical Ground Truth: Task-based fMRI activation maps for direct comparison of spatial patterns and activation.
    • Expert Consensus Ground Truth (for clinical relevance): The clinical intended uses for presurgical planning and post-surgical assessment, based on expert opinions validating the comparability of the Quicktome output to established methods.

    8. Sample Size for the Training Set

    The document does not specify the sample size used for the training set.

    9. How Ground Truth for the Training Set Was Established

    The document does not describe how the ground truth for the training set was established. It focuses on the validation of the device's performance post-development.


    K Number
    K212116
    Device Name
    VBrain-OAR
    Manufacturer
    Date Cleared
    2021-10-12

    (97 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K113732

    Intended Use

    VBrain-OAR is a software device intended to assist trained radiotherapy personnel including, but not limited to, radiologists, radiation oncologists, neurosurgeons, radiation therapists, and medical physicists, during their clinical workflows of brain tumor radiation therapy treatment planning, by providing initial object contours of organs at risk in the brain (i.e., the region of interest, ROI) on axial T1 contrast-enhanced brain MRI images. VBrain-OAR is intended to be used on adult patients only.

    VBrain-OAR uses an artificial intelligence algorithm (i.e., deep learning neural networks) to contour (segment) organs at risk (brain stem, eyes, optic nerves, optic chiasm) in the brain on MRI images for trained radiotherapy personnel's attention, which is meant for informational purposes only and not intended for replacing their current standard practice of manual contouring process. VBrain-OAR does not alter the original MRI image, nor does it intend to detect tumors for diagnosis. VBrain-OAR is intended only for contouring and generating contours of organs at risk in the brain; it is not intended to be used with images of other body parts.

    VBrain-OAR also contains an automatic image registration function for volumetric medical image data (e.g., MR, CT). It allows rigid image registration to adjust the spatial position and orientation of two images. Radiation therapy treatment personnel must finalize (confirm or modify) the contours generated by VBrain-OAR, as necessary, using an external platform available at the facility that supports DICOM-RT viewing/editing functions, such as image visualization software and a treatment planning system.

    Device Description

    VBrain-OAR is a software application system indicated for use in the contouring (segmentation) of brain MRI images for the organs at risk (OAR) in the brain during radiation treatment planning and in the registration of multi-modality images. The device consists of two algorithm modules, a contouring algorithm module and a registration algorithm module, plus a workflow management module. The modules can work independently yet can be integrated with each other.

    The contouring (segmentation) algorithm module consists of image preprocessing, deep learning neural networks, and postprocessing components, and is intended to contour organs at risk in the brain on the axial T1 contrast-enhanced MR images. It utilizes deep learning neural networks to generate contours for the organs at risk in the brain and export the results as DICOM-RT objects (using the RT Structure Set ROI Contour attribute, RTSTRUCT).
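
    The submission names the three stages — preprocessing, deep learning neural networks, postprocessing — but discloses neither the architecture nor the processing steps. Purely as an illustration of that three-stage shape, here is a minimal PyTorch sketch with a toy network standing in for the undisclosed model; the normalization, layer sizes, and five-class output (background plus the four named OARs) are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for the device's (undisclosed) segmentation network."""
    def __init__(self, n_classes=5):  # background + brain stem, eyes, optic nerves, chiasm
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_classes, kernel_size=1),
        )
    def forward(self, x):
        return self.net(x)

def preprocess(vol):
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)     # z-score intensity normalization
    return torch.from_numpy(vol[None, None].astype(np.float32))  # add batch/channel dims

def postprocess(logits):
    return logits.argmax(dim=1).squeeze(0).numpy()    # per-voxel label map

vol = np.random.rand(32, 64, 64)                      # placeholder for a T1-CE MR volume
model = TinySegNet()
with torch.no_grad():
    labels = postprocess(model(preprocess(vol)))      # preprocess -> network -> postprocess
```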

    The registration algorithm module registers volumetric medical image data (e.g., MR, CT). It allows rigid image registration to adjust the spatial position and orientation of two images.

    The workflow management module is configured to work on a PACS network. Upon the user's request, it pulls patient scans (or users can send the corresponding DICOM images) and triggers a predefined workflow in which the different algorithm modules are executed to generate the DICOM output. The DICOM output of a workflow is sent back to the PACS.
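
    Sending workflow output back to a PACS is conventionally done with a DICOM C-STORE operation. A minimal sketch with the open-source pynetdicom library follows; the host, port, AE titles, and file name are placeholders, and this is not the vendor's implementation.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import MRImageStorage

ds = dcmread("workflow_output.dcm")          # a DICOM object produced by the workflow
ae = AE(ae_title="WORKFLOW")
ae.add_requested_context(MRImageStorage)

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)          # C-STORE the result back to the PACS
    if status:
        print("C-STORE status: 0x{0:04x}".format(status.Status))
    assoc.release()
```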

    AI/ML Overview

    This document, a 510(k) premarket notification for Vysioneer Inc.'s VBrain-OAR, focuses on the device's technical characteristics and claims of substantial equivalence to predicate devices, but does not provide the specific acceptance criteria and detailed study results (including performance metrics like Dice Similarity Coefficient, Hausdorff Distance, or expert review scores) that would typically be required to fully describe how the device met these criteria.

    The document states that "performance testing was conducted to evaluate the contouring (segmentation) performance and registration performance of VBrain-OAR" and that "the auto-segmentation algorithm of the VBrain-OAR algorithm module provides clinically acceptable contours for organs at risk in the brain structures on an image of a patient." However, it does not explicitly define what constitutes "clinically acceptable" or provide the quantitative results from these tests.

    Therefore, many of the requested details cannot be extracted directly from the provided text. I will provide information based on what is available and indicate where details are missing.


    Acceptance Criteria and Device Performance Study (Based on Provided Text)

    The document generally states that the device's performance was evaluated, and it met "clinically acceptable" standards. However, the specific quantitative acceptance criteria (e.g., minimum Dice Similarity Coefficient, maximum Hausdorff Distance) are not detailed in the provided text. Similarly, the reported device performance (quantitative results) against these criteria is also not included in this summary.

    In the absence of specific acceptance criteria and performance results, the table below is illustrative of what would typically be included in such a section, but the "Acceptance Criteria" and "Reported Device Performance" columns cannot be filled with concrete numbers from the provided document.

    1. Table of Acceptance Criteria and Reported Device Performance

    Feature/Metric — Acceptance Criteria — Reported Device Performance:

    • Contouring (Segmentation) Performance
      Acceptance Criteria: Not explicitly stated in document (e.g., minimum Dice Similarity Coefficient, maximum Hausdorff Distance, expert review score)
      Reported Performance: Not explicitly stated in document (e.g., achieved Dice scores, Hausdorff distances, qualitative assessment)

    • Brain Stem Contouring
      Acceptance Criteria: Clinically acceptable*
      Reported Performance: Clinically acceptable*

    • Eyes Contouring
      Acceptance Criteria: Clinically acceptable*
      Reported Performance: Clinically acceptable*

    • Optic Nerves Contouring
      Acceptance Criteria: Clinically acceptable*
      Reported Performance: Clinically acceptable*

    • Optic Chiasm Contouring
      Acceptance Criteria: Clinically acceptable*
      Reported Performance: Clinically acceptable*

    • Registration Performance
      Acceptance Criteria: Not explicitly stated in document (e.g., registration accuracy in mm)
      Reported Performance: Not explicitly stated in document (e.g., achieved registration accuracy)

    • Rigid Image Registration
      Acceptance Criteria: Substantially equivalent to predicate device
      Reported Performance: Substantially equivalent to predicate device

    Note: The term "clinically acceptable" is used in the document but is not defined with quantitative metrics.
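
    The Dice Similarity Coefficient and Hausdorff Distance mentioned above are the conventional quantitative metrics for contour comparison and are straightforward to compute. A minimal NumPy/SciPy sketch on toy masks (not data from the submission):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice Similarity Coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets (N x 3 arrays)."""
    return max(directed_hausdorff(a_pts, b_pts)[0],
               directed_hausdorff(b_pts, a_pts)[0])

# Toy "AI contour" vs. "expert contour" volumes.
auto = np.zeros((64, 64, 32), dtype=bool); auto[20:40, 20:40, 10:20] = True
ref  = np.zeros((64, 64, 32), dtype=bool); ref[22:42, 20:40, 10:20] = True

print("Dice:", dice(auto, ref))
print("Hausdorff (voxels):", hausdorff(np.argwhere(auto), np.argwhere(ref)))
```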

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document states "VBrain-OAR was tested on datasets from multiple institutions" for standalone performance testing. However, the exact number of cases or scans in the test set is not specified.
    • Data Provenance: The document mentions "datasets from multiple institutions" and "data across patient sex, multiple imaging hardware and protocols" was used for testing. However, the country of origin of the data is not specified. The document also does not explicitly state whether the data was retrospective or prospective, though typical 510(k) submissions for AI devices often rely on retrospective datasets for performance testing.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The provided document does not specify the number of experts used to establish the ground truth for the test set, nor does it detail their qualifications (e.g., radiologist with X years of experience). It's implied that "trained radiotherapy personnel" are involved in the standard practice of manual contouring which the device aims to assist, but this does not directly describe the ground truth establishment process for the test data.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) used for establishing the ground truth for the test set.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    The provided summary does not indicate that an MRMC comparative effectiveness study was performed to assess how human readers improve with AI vs. without AI assistance. The device is described as an "assistance" tool, but an MRMC study demonstrating improvement in human performance is not mentioned. The focus is on the standalone performance of the AI algorithm.

    6. If a Standalone (Algorithm Only) Performance Study Was Done

    Yes, a standalone performance study was done. The document explicitly states under "5.7 Non-Clinical Test (Standalone Performance Data)":
    "Standalone performance testing was conducted to evaluate the contouring (segmentation) performance and registration performance of VBrain-OAR."

    7. The Type of Ground Truth Used

    The type of ground truth used is implied to be expert consensus or expert-derived manual contours. The device aims to "provide initial object contours... on axial T1 contrast-enhanced brain MRI images" and states it's "not intended for replacing their current standard practice of manual contouring process." This suggests that human expert manual contours would serve as the ground truth against which the AI's generated contours are compared. However, the exact methodology for establishing this ground truth (e.g., single expert, multi-expert consensus) for the test set is not detailed.

    8. The Sample Size for the Training Set

    The document does not specify the sample size used for the training set. It mentions the use of "deep learning neural networks" implying a training phase, but provides no details on the data used.

    9. How the Ground Truth for the Training Set Was Established

    The document does not specify how the ground truth for the training set was established. While it is implied that expert manual contours would be used given the device's function, the details of this process for the training data are not provided.


    K Number
    K203486
    Device Name
    Otoplan
    Manufacturer
    Date Cleared
    2021-08-20

    (266 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K113732

    Intended Use

    OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological procedures (e.g., cochlear implantation).

    Device Description

    OTOPLAN consolidates a DICOM viewer, ruler function, and calculator function into one software platform. The user can

    • import DICOM-conform medical images and view these images.
    • navigate through the images and segment ENT-relevant structures (semi-automatic), which can be highlighted in the 2D images and 3D view.
    • use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency.
    • create a virtual trajectory, which can be displayed in the 2D images and 3D view.
    • identify electrode array contacts of a cochlear implant to assess electrode insertion and position.
    • input audiogram-related data that were generated during audiological testing with a standard audiometer and visualize them in OTOPLAN (a sketch of such a plot follows the device description).

    OTOPLAN allows the visualization of third-party information, that is, a cochlear implant electrode array portfolio.

    The information provided by OTOPLAN is solely assistive and for the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually. Therefore, the user is required to have clinical experience and judgment.

    OTOPLAN is designed to run on a PC and requires the 64-bit Microsoft Windows 10 operating system. A PDF reader such as Adobe Acrobat is recommended to access the instructions for use.
      For computation and usability purposes, the software is designed to be executed on a computer with touch screen capabilities. The minimum hardware requirements are:
    • 12.3in wide screen
    • 8GB of RAM
    • 2 core CPU (such as a 5th generation i5 or i7)
    • dedicated GPU with OpenGL 4.0 capabilities
    • 250GB hard drive
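
    As a simple illustration of the audiogram visualization mentioned in the feature list above: a conventional pure-tone audiogram plots hearing level in dB HL against frequency on a logarithmic axis, with the y-axis inverted so better hearing sits at the top. The matplotlib sketch below uses made-up thresholds and is not OTOPLAN's rendering.

```python
import matplotlib.pyplot as plt

freqs_hz = [125, 250, 500, 1000, 2000, 4000, 8000]   # standard audiometric frequencies
thresholds_db_hl = [15, 20, 30, 45, 60, 70, 75]      # hypothetical example thresholds

fig, ax = plt.subplots()
ax.plot(freqs_hz, thresholds_db_hl, "o-")
ax.set_xscale("log")
ax.set_xticks(freqs_hz)
ax.set_xticklabels([str(f) for f in freqs_hz])
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("Hearing level (dB HL)")
ax.invert_yaxis()                                    # audiogram convention: better hearing at top
ax.set_title("Pure-tone audiogram (illustrative)")
plt.show()
```
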
    AI/ML Overview

    The provided text is a 510(k) summary for the OTOPLAN device. This document primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed clinical study report with specific acceptance criteria and performance metrics for an AI/algorithm component.

    Based on the provided text, OTOPLAN is described as a software interface for displaying, segmenting, and transferring medical image data for pre-operative planning and post-operative assessment. It does include functions like semi-automatic segmentation and calculations based on manual 2D measurements, but it largely appears to be a tool that assists human users and does not replace their judgment or perform fully autonomous diagnostics. Therefore, it's unlikely to have the kind of acceptance criteria typically seen for AI/ML diagnostic algorithms (e.g., sensitivity, specificity, AUC).

    The document states that "Clinical testing was not required to demonstrate the safety and effectiveness of OTOPLAN. This conclusion is based upon a comparison of intended use, technological characteristics, and nonclinical performance data (Software Verification and Validation Testing, Human Factors and Usability Validation, and Internal Test Standards)." This explicitly means there was no clinical study of the type that would prove the device meets acceptance criteria related to diagnostic performance.

    However, I can extract information related to the closest aspects of "acceptance criteria" and "study that proves the device meets the acceptance criteria" from the provided text, focusing on the software's functional performance and usability. Since this is not a diagnostic AI/ML device in the sense of making independent clinical decisions, the "acceptance criteria" will be related to its intended functions and safety.

    Here's a breakdown based on the information available:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a formal table of specific, quantifiable performance acceptance criteria (e.g., segmentation accuracy, measurement precision) with numerical results as one would expect for an AI diagnostic algorithm. Instead, the "performance" is demonstrated through various validation activities.

    Category — Acceptance Criteria (implied from testing focus) — Reported Device Performance:

    • Software Functionality
      Acceptance Criteria: Software functions as intended; outputs are accurate and reliable (e.g., correct calculation of cochlear length, correct display of information, accurate 2D measurements). Software is a "moderate" level of concern.
      Reported Performance: "All tests have been passed and demonstrate that no question on safety and effectiveness is raised by this technological difference." "The internal tests demonstrate that the subject device can fulfill the expected performance characteristics and no questions of safety or performance were raised." (Referencing comparison with known dimensions.)

    • Human Factors & Usability
      Acceptance Criteria: Device is safe and effective for intended users, uses, and use environments; users can successfully perform tasks and there are no critical usability errors. Conformance to FDA guidance and AAMI/ANSI/IEC 62366-1:2015.
      Reported Performance: "OTOPLAN has been found to be safe and effective for the intended users, uses and use environments."

    • Safety and Effectiveness
      Acceptance Criteria: No questions of safety or effectiveness are raised by technological differences or overall device operation.
      Reported Performance: "The subject device is equivalent to the predicate device with regard to intended use, safety and efficacy." "The subject device is substantially equivalent to the predicate device with regard to device performance."

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Software Verification and Validation Testing & Internal Test Standards:
      • The document mentions "tests with known dimensions which were loaded into OTOPLAN." No specific "sample size" of medical images or data is mentioned for these internal software tests, nor is the provenance of this "known dimension" data explicitly stated (e.g., synthetic, real anonymized clinical data). Given it's internal testing of software functionality rather than clinical performance, it's likely proprietary test cases.
    • Human Factors and Usability Validation:
      • Sample Size: "15 users from each user group." (User groups are not specified, but typically refer to the intended users like otologists and neurotologists).
      • Data Provenance: "to be carried out in the US". This implies prospective usability testing with human users.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Software Verification and Validation & Internal Test Standards: The concept of "ground truth" as established by experts for medical image interpretation is not directly applicable here for these functional tests. The ground truth refers to "known dimensions" or expected calculation results, which are determined by the software developers and internal quality processes rather than expert radiologists.
    • Human Factors and Usability Validation: No "ground truth" in the diagnostic sense is established by experts for this type of testing. The "ground truth" for usability testing relates to whether users can successfully complete tasks and if the device performs as expected according to the user.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Not applicable. "Adjudication" methods (like 2+1 or 3+1 consensus) are used to establish ground truth in clinical image interpretation studies, typically when there's ambiguity or disagreement among expert readers. Since no clinical study involving image interpretation by multiple readers in this manner was performed (as explicitly stated that clinical testing was not required), no such adjudication method was used.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No. The document explicitly states: "Clinical testing was not required to demonstrate the safety and effectiveness of OTOPLAN." Therefore, no MRMC comparative effectiveness study was conducted.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    • Standalone Performance: The documentation focuses on the software's functional correctness. It states that OTOPLAN "does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually." It emphasizes that "All tasks performed with OTOPLAN require user interaction" and "the user is required to have clinical experience and judgment."
      • The internal tests seem to evaluate the standalone computational aspects (e.g., "correct calculation according to the published formula and display of the information," "tests with known dimensions which were loaded into OTOPLAN and results compared to the known dimensions"). This validates the algorithm's performance for specific computational tasks but not its overall clinical diagnostic performance in a "standalone" fashion that replaces human judgment.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Software Verification and Validation & Internal Test Standards: "Known dimensions" and "published formulas" for calculations. This indicates a ground truth based on pre-defined, mathematically verifiable inputs and outputs.
    • No ground truth from expert consensus, pathology, or outcomes data was used for a clinical study, as no clinical study was performed.

    8. The sample size for the training set

    The document describes OTOPLAN as a software interface with functions like segmentation and measurement, often based on user interaction or published formulas. It does not describe a machine learning or deep learning model that requires a "training set" in the conventional sense. The "semi-automatic" segmentation is mentioned, but if it uses algorithms that learn from data, no information is provided about such a training set size. This device appears to be a software tool with algorithmic functions rather than a continuously learning AI model.

    9. How the ground truth for the training set was established

    Not applicable, as no "training set" for a machine learning model is described in the document.


    K Number
    K203518
    Device Name
    Quicktome
    Date Cleared
    2021-03-09

    (98 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K113732

    Intended Use

    Quicktome is intended for display of medical images and other healthcare data. It includes functions for image review, image manipulation, basic measurements, planning, and 3D visualization (MPR reconstructions and 3D volume rendering). Modules are available for image processing and atlas-assisted visualization and segmentation, where an output can be generated for use by a system capable of reading DICOM image sets.

    Quicktome is indicated for use in the processing of diffusion-weighted MRI sequences into 3D maps that represent white-matter tracts based on constrained spherical deconvolution methods.

    Typical users of Quicktome are medical professionals, including but not limited to surgeons and radiologists.

    Device Description

    Quicktome is a software-only, cloud-deployed, image processing package which can be used to perform DICOM image viewing, image processing, and analysis.

    Quicktome can retrieve DICOM images from picture archiving and communication systems (PACS), acquired with MRI, including Diffusion Weighted Imaging (DWI) sequences, T1, T2, and FLAIR images. Once retrieved, Quicktome removes protected health information (PHI) and links the dataset to an encryption key, which is then used to relink the data back to the patient when the data is exported to the hospital PACS. Processing is performed on the anonymized dataset in the cloud. Clinicians are served the processed output for planning and visualization on their local machine.

    The software provides a workflow for a clinician to:

    • Select a patient case for planning and visualization,
    • Confirm image quality,
    • Explore anatomical regions, network templates, tractography bundles, and parcellations,
    • Create and edit regions of interest, and
    • Export objects of interest in DICOM format for use in systems that can view DICOM images.
    AI/ML Overview

    The provided document is a 510(k) summary for the Quicktome device. It outlines the regulatory clearance process and describes the device's intended use and performance validation. However, it does not contain specific acceptance criteria tables nor detailed results of a study proving the device meets those criteria.

    The document broadly mentions performance and comparison validations were performed. It states:

    • "Performance and comparison validations were performed to show equivalence of generated tractography and atlas method."
    • "Evaluations included protocols to demonstrate performance and equivalence of tractography bundle and anatomical region generation (including acceptable co-registration of bundles and regions with underlying anatomical scans), and evaluation of the algorithm's performance in slice motion filtration and skull stripping."
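
    Skull stripping, one of the evaluated pipeline steps, is commonly performed by masking a b=0 volume with a median-filtered Otsu threshold. The sketch below uses DIPY's median_otsu on a bundled sample dataset purely to illustrate the technique named in the quote; it is not Quicktome's implementation.

```python
from dipy.data import get_fnames
from dipy.io.image import load_nifti
from dipy.segment.mask import median_otsu

# Publicly available sample dataset bundled with DIPY.
fdwi, _, _ = get_fnames('stanford_hardi')
data, affine = load_nifti(fdwi)

b0 = data[..., 0]                                  # first volume ~ b=0 image
brain, mask = median_otsu(b0, median_radius=2, numpass=1)
# `brain` is the skull-stripped volume; `mask` is the binary brain mask.
```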

    Without specific acceptance criteria and detailed study results in the provided text, I cannot fill out the requested table or fully describe the study at the level of detail requested in points 1-9.

    If the information were available, here's how I would structure the answer based on the typical requirements for such a study:


    Based on the provided 510(k) summary for Quicktome (K203518), the document states that performance and comparison validations were conducted. However, it does not explicitly detail the specific acceptance criteria, nor does it provide a table of reported device performance against those criteria. Therefore, the following sections will indicate where information is not provided in the given text.

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria Category — Specific Acceptance Criterion — Reported Device Performance (neither is provided in the document; the examples show what would typically appear):

    • Tractography Bundle Generation
      Criterion: e.g., accuracy of tract reconstruction
      Performance: e.g., quantitative metrics such as Dice similarity, mean distance

    • Anatomical Region Generation
      Criterion: e.g., accuracy of segmentation
      Performance: e.g., quantitative metrics such as Dice similarity, boundary distance

    • Co-registration with Anatomical Scans
      Criterion: e.g., alignment accuracy
      Performance: e.g., quantitative metrics such as registration error

    • Slice Motion Filtration Performance
      Criterion: e.g., effectiveness in reducing motion artifacts
      Performance: no specific metrics provided

    • Skull Stripping Performance
      Criterion: e.g., accuracy of skull removal
      Performance: no specific metrics provided

    • Equivalence to Predicate Device
      Criterion: specific metrics for equivalence
      Performance: general statement of equivalence

    • Usability
      Criterion: e.g., user satisfaction, task completion rate
      Performance: summative usability evaluation performed

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: Not provided. The document states, "Performance and comparison evaluations were performed by representative users on a dataset not used for development composed of normal and abnormal brains." The specific number of cases or subjects in this dataset is not mentioned.
    • Data Provenance: The document does not explicitly state the country of origin. It indicates the dataset included "normal and abnormal brains" and was "not used for development." It does not specify if the data was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Not provided. The document states, "Performance and comparison evaluations were performed by representative users." It does not specify how many, if any, specific experts established ground truth, or if ground truth was established by automated means (e.g., via algorithm from the predicate).
    • Qualifications of Experts: Not provided. The document refers to "representative users" but does not detail their professional qualifications (e.g., radiologist, surgeon, years of experience, subspecialty).

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Adjudication Method: Not provided. The document does not describe any specific adjudication process for establishing ground truth or resolving discrepancies in the test set evaluations.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study Conducted: The document mentions "Adjudication of Results for studies conducted per representative users" but does not explicitly state that it was a multi-reader, multi-case (MRMC) comparative effectiveness study designed to show human reader improvement with AI assistance.
    • Effect Size of Human Reader Improvement: Not provided. The document focuses on the device's standalone performance and comparison/equivalence to a predicate device, rather than the performance of human readers assisted by Quicktome versus unassisted.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    • Standalone Performance: Yes, implicitly. The performance and comparison validations, including evaluation of "tractography bundle and anatomical region generation," "co-registration," "slice motion filtration," and "skull stripping," indicate an assessment of the algorithm's output independently, even if "representative users" were involved in judging that output. The device itself is software for processing images.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Type of Ground Truth: Not explicitly detailed. The document states "performance and equivalence of tractography bundle and anatomical region generation" were evaluated. This implies a reference or ground truth was used for comparison. Given the context, it's highly likely that ground truth for tractography and anatomical regions would be derived either from:
      • Expert Consensus/Manual Delineation: Experts manually segmenting or defining tracts/regions.
      • Validated Reference Software/Algorithm: Using output from an established, highly accurate (perhaps manually curated) system or the predicate device as a "ground truth" for comparison.
      • The document implies equivalence to the predicate device was a key benchmark, suggesting its outputs played a role in the "ground truth" for comparison.

    8. The sample size for the training set

    • Training Set Sample Size: Not provided. The document mentions the test set was "not used for development," implying a separate training/development set existed, but its size is not specified.

    9. How the ground truth for the training set was established

    • Training Set Ground Truth Establishment: Not provided. The document does not detail how ground truth was established for any data used during the development or training phase of the algorithm.

    K Number
    K170816
    Manufacturer
    Date Cleared
    2017-09-26

    (190 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K113732

    Intended Use

    The Image Fusion Element is an application for the co-registration of image data within medical procedures by using rigid and deformable registration methods. It is intended to align anatomical structures between data sets.

    The Image Fusion Element can be used in clinical workflows that benefit from the co-registration of image data. For example, this applies to navigation systems or medical data information terminals for image processing or image guided surgery in general as well as for treatment planning software for radiosurgery and radiotherapy. The device itself does not have specific clinical indications.

    Device Description

    The Image Fusion Element is intended to co-register volumetric medical image data (e.g., MR, CT). It allows rigid image registration to adjust for differences in the spatial position and orientation of two data sets. It also offers deformable registration to compensate for image distortion or spatial deviation between image acquisitions.

    The Image Fusion Element makes it possible to show basic planning content (e.g., objects, points, trajectories) defined on one dataset on another dataset, and to display datasets of corresponding anatomic planes simultaneously. Further, it can process co-registered data to highlight differences between distinct scanning sequences or to assess the response to a treatment.
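
    Rigid co-registration of this kind is conventionally posed as optimizing an intensity similarity metric (e.g., mutual information) over a six-degree-of-freedom transform. The SimpleITK sketch below shows that generic formulation; the file names and optimizer settings are illustrative assumptions, and Brainlab's actual method is not disclosed in the document. A deformable step would swap in, e.g., a B-spline transform.

```python
import SimpleITK as sitk

# Placeholder file names for two volumetric scans to be co-registered.
fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

# Initialize a 6-DOF rigid (Euler) transform by aligning geometric centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)

# Resample the moving image into the fixed image's space for fused display.
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```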

    AI/ML Overview

    The provided document is a 510(k) premarket notification for Brainlab AG's "Image Fusion" device. It outlines the device's intended use, technological characteristics, and a summary of safety and effectiveness, primarily through demonstrating substantial equivalence to predicate devices.

    However, this document does not contain the detailed study information typically found in a clinical performance study report that would prove the device meets specific acceptance criteria using a test set, ground truth, and statistical analysis as requested in your prompt. The document outlines a verification and validation (V&V) process, but it does not specify quantitative acceptance criteria or report the results of a controlled study against a predefined ground truth in the manner you've described.

    Instead, the document focuses on:

    • Substantial Equivalence: Comparing the device's features and functionality to existing legally marketed predicate devices (Mirada XD, K101228, and iPlan, K113732). The "Changes to Predicate Device" section and the "Technological Characteristics" table highlight this comparison.
    • Verification and Validation Summary: Stating that verification and validation were performed according to internal plans and processes, including usability testing, to ensure design specifications are met and the device can be used safely. There are no specific performance metrics or statistical results presented.

    Therefore, I cannot provide the information requested in your numbered list for this specific device from the provided text, as the necessary details regarding acceptance criteria, study design, sample sizes, ground truth establishment, or expert involvement for a clinical performance study are not present.

    The document's statement: "Functionality and features considered as substantially equivalent have been verified and validated. The system Image Fusion with its set of functionalities is substantially equivalent to its predicate devices," serves as the core of its argument for market clearance rather than presenting a detailed clinical study demonstrating quantitative performance against specific acceptance criteria.

