Search Results

Found 5 results

510(k) Data Aggregation

    K Number
    K063107
    Device Name
    3DNET SUITE
    Manufacturer
    Date Cleared
    2006-10-27

    (16 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K963697, K992654

    Intended Use

    The 3Dnet Suite is intended to be used by physicians for the display of 2D/3D visualization of DICOM compliant medical image data, such as CT, MRI, and Ultrasound scans.

    The 3Dnet Suite provides several levels of functionality to the user:

    • basic analysis tools used on a daily basis, such as 2D review, orthogonal multiplanar reconstructions (MPR), oblique MPR, curved MPR, slab MPR (AvgIP, MIP, MinIP), measurements, annotations, reporting, distribution, etc.
    • tools for in-depth analysis, such as segmentation, endoscopic review, color VR slab, grayscale VR slab, 3D volume review, path definition and boundary detection, etc.
    • specialist tools and workflow enhancements for specific clinical applications, providing targeted workflows, custom UI, and targeted measurement and visualization; these include colon screening, which is intended for the screening of patients for colonic polyps, tumors and other lesions using tomographic colonography.
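The slab projection modes named above (AvgIP, MIP, MinIP) all collapse a stack of slices into a single 2D image and differ only in the reduction applied along the viewing axis. A minimal NumPy sketch of the general technique (the function name and synthetic volume are illustrative, not part of 3Dnet):

```python
import numpy as np

def slab_projection(volume, axis=0, mode="mip"):
    """Collapse a 3D volume along one axis into a 2D slab projection.

    mode: "mip"   -> maximum intensity projection (bright structures survive)
          "minip" -> minimum intensity projection (dark structures survive)
          "avgip" -> average intensity projection (smoothed overview)
    """
    if mode == "mip":
        return volume.max(axis=axis)
    if mode == "minip":
        return volume.min(axis=axis)
    if mode == "avgip":
        return volume.mean(axis=axis)
    raise ValueError(f"unknown mode: {mode}")

# Synthetic 3-slice volume with a single bright voxel.
vol = np.zeros((3, 4, 4))
vol[1, 2, 2] = 100.0

mip = slab_projection(vol, axis=0, mode="mip")    # keeps the bright voxel
avg = slab_projection(vol, axis=0, mode="avgip")  # dilutes it over the slab depth
```

Note how a small bright structure dominates the MIP but is averaged down in the AvgIP, which is why the choice of projection mode matters clinically.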
    Device Description

    The 3Dnet Suite is a software device for evaluating scanned images of selected human organs. The basic visualization module of 3Dnet Suite is Examiner.

    The Examiner allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from a variety of imaging devices.

    It provides multi-dimensional visualization of digital images to aid clinicians in their analysis of anatomy and pathology. The 3Dnet user interface follows typical clinical workflow patterns to process, review and analyze digital images including:

    • Retrieve image data over the network via DICOM
    • Select images for closer examination from a gallery of 2D and 3D views
    • Interactively manipulate an image in real time to visualize anatomy and pathology
    • Annotate, tag, measure and record selected views
    • Output selected views to DICOM and JPEG files or export views to another DICOM device.
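The orthogonal views in the workflow above come from re-slicing a single acquired volume along each axis. A hedged NumPy sketch of orthogonal MPR on a toy volume (a real pipeline would first assemble the volume from a retrieved DICOM series, e.g. with a library such as pydicom; that, and the function name, are assumptions for illustration, not something the summary specifies):

```python
import numpy as np

def orthogonal_mpr(volume, index, plane="axial"):
    """Extract one orthogonal multiplanar-reconstruction slice.

    The volume is indexed (slice, row, column); plane names follow the
    usual radiology convention for an axially acquired stack.
    """
    if plane == "axial":      # the original acquisition plane
        return volume[index, :, :]
    if plane == "coronal":    # fix one row across all slices
        return volume[:, index, :]
    if plane == "sagittal":   # fix one column across all slices
        return volume[:, :, index]
    raise ValueError(f"unknown plane: {plane}")

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)
axial = orthogonal_mpr(vol, 0, "axial")        # shape (3, 4)
coronal = orthogonal_mpr(vol, 1, "coronal")    # shape (2, 4)
sagittal = orthogonal_mpr(vol, 2, "sagittal")  # shape (2, 3)
```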
    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the 3Dnet Suite device:

    Acceptance Criteria and Device Performance for 3Dnet Suite

    This 510(k) summary focuses on demonstrating substantial equivalence to predicate devices for the 3Dnet Suite, a medical image processing software system. As such, the "acceptance criteria" are primarily established through a comparison of functionalities and a demonstration that the new device performs those functions equivalently and safely. Formal, quantitative performance metrics with specific thresholds (like accuracy, sensitivity, specificity percentages) are not explicitly stated as distinct acceptance criteria that the device must meet in a numerical sense. Instead, the acceptance criteria are implicitly met by:

    • Matching/exceeding functionalities of predicate devices.
    • Demonstrating accurate processing and visualization of medical images.
    • Ensuring the software operates reliably and consistently.
    • Adhering to software development standards.

    The study described is a non-clinical test performed for the determination of substantial equivalence.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Implied) vs. Reported Device Performance (from "Discussion of non-clinical tests"):

    • Functional Equivalence: Display 2D/3D visualization of DICOM compliant medical image data (CT, MRI, Ultrasound).
      Performance: The application provided interactive orthogonal and multiplanar reformatted 2D and 3D images from datasets to detect and evaluate known abnormalities or status of organs.
    • Accurate Measurement & Analysis: Provide measurement tools (volume, linear, angular) for analysis of observed structures.
      Performance: The volume, linear and angular measurement features provided in the software were used to evaluate and quantify any abnormality of organs or status of any internal organ structures. Accuracy correlated "perfectly" with pre-calculated values for phantom datasets.
    • Reliability & Usability: Operate reliably, be easy to use, and be capable of evaluating DICOM compliant scanned images.
      Performance: The product has shown itself to be reliable, easy to use and capable of evaluating DICOM compliant scanned images of any human organs.
    • Software Quality: Developed consistent with accepted standards for software development.
      Performance: The 3Dnet Suite has been developed in a manner consistent with accepted standards for software development, including both unit and system integration testing protocols.
    • DICOM Conformance: Validate DICOM functionality with other compliant applications and tools.
      Performance: The DICOM functionality with regard to the DICOM SOP classes stated in the DICOM conformance statement was validated with a number of other DICOM compliant applications and DICOM validation tools as part of the development and testing process.
    • Safety & Effectiveness: Pose no new questions of safety or effectiveness compared to predicate devices.
      Performance: "We conclude from these tests that 3Dnet Suite is substantially equivalent to the predicated devices in its ability to evaluate any human organs." and "3Dnet Suite does not raise any new questions of safety or effectiveness."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: "Scanned image datasets of various patient organs with known abnormalities or status" and "phantom datasets." The exact number of patient datasets or phantom datasets is not specified in the provided text.
    • Data Provenance:
      • Patient Data: "Patients Image Data in our installations in Europe." This indicates a European origin. The text does not explicitly state if this data was retrospective or prospective, but the phrasing "known abnormalities or status" suggests it was existing, retrospective data used for testing.
      • Phantom Data: Origin not specified, but likely internally generated or standard phantoms.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts used or their qualifications for establishing ground truth on the patient or phantom datasets. It refers to "known abnormalities or status" for patient data and "pre-calculated values" for phantom data, implying an established ground truth, but the method of establishment or the experts involved are not detailed.

    4. Adjudication Method for the Test Set

    The document does not specify any formal adjudication method (e.g., 2+1, 3+1). The testing appears to be an internal verification process against "known" or "pre-calculated" ground truth.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a MRMC comparative effectiveness study was not done. The submission focuses on demonstrating substantial equivalence through functional comparison and non-clinical testing, not a comparative study with human readers with and without AI assistance.

    6. If a Standalone Study (Algorithm Only, Without Human-in-the-Loop Performance) Was Done

    Yes, a standalone study was done. The "discussion of non-clinical tests" describes the device's performance against "known abnormalities or status" and "pre-calculated values" using phantom and patient datasets without mention of human interaction in the direct performance evaluation. The device is designed for physicians to use, but the testing described here evaluates the software's inherent ability to process, visualize, and measure.

    7. The Type of Ground Truth Used

    • For Patient Data: "Known abnormalities or status" (e.g., presence, location, and characteristics of polyps, tumors, or other lesions). This implies a clinical ground truth likely established by expert interpretation or, potentially, pathology/surgical findings, though not explicitly stated.
    • For Phantom Data: "Pre-calculated values." This is an objective, engineered ground truth based on the design specifications of the phantom.

    8. The Sample Size for the Training Set

    The document does not mention a training set or its sample size. This device is described as an "Image processing system" or "Medical Image Processing software system" which provides visualization and analysis tools. There is no indication that it employs machine learning or AI that would require a distinct training set in the modern sense. The "development" and "testing process" mentioned relate to software engineering principles rather than AI model training.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned for an AI/ML context, this question is not applicable. The software development process mentioned involves "unit and system integration testing protocols," which use "known abnormalities or status" (for patient data) and "pre-calculated values" (for phantom data) as verification and validation ground truth.


    K Number
    K031779
    Device Name
    CADSTREAM
    Manufacturer
    Date Cleared
    2003-08-06

    (57 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K002519, K961023, K984221, K992654, K020546

    Intended Use

    CADstream is a Computer Aided Detection (CAD) system intended for use in analyzing magnetic resonance imaging (MRI) studies. CADstream automatically registers serial patient image acquisitions to minimize the impact of patient motion, segments and labels tissue types based on enhancement characteristics (parametric image maps), and performs other user-defined post-processing functions (image subtractions, multiplanar reformats, maximum intensity projections).

    When interpreted by a skilled physician, this device provides information that may be useful in screening and diagnosis. CADstream can also be used to provide accurate and reproducible measurements of the longest diameters and volume of segmented tissues. Patient management decisions should not be made based solely on the results of CADstream analysis.

    Device Description

    The CADstream device relies on the assumption that pixels having similar MR signal intensities represent similar tissues. The CADstream software simultaneously analyzes the pixel signal intensities from multiple MR sequences and applies multivariate pattern recognition methods to perform tissue segmentation and classification.
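The summary does not disclose which multivariate pattern-recognition method CADstream uses. As a generic illustration of the underlying idea (grouping pixels whose intensity vectors across several MR sequences are similar), here is a minimal k-means clustering sketch; everything in it is an assumption for demonstration, not Confirma's algorithm:

```python
import numpy as np

def kmeans_segment(features, k, iters=20):
    """Cluster per-pixel feature vectors (one intensity per MR sequence)
    into k tissue classes with plain k-means; returns a label per pixel."""
    # Deterministic farthest-point initialization.
    centers = [features[0]]
    for _ in range(1, k):
        dists = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[dists.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then re-center each class.
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Two synthetic "sequences": one pixel population near (0, 0), one near (10, 10).
rng = np.random.default_rng(1)
tissue_a = rng.normal(0.0, 0.5, size=(50, 2))
tissue_b = rng.normal(10.0, 0.5, size=(50, 2))
labels = kmeans_segment(np.vstack([tissue_a, tissue_b]), k=2)
```

The two synthetic populations play the role of distinct tissues whose joint signal intensities separate them, matching the assumption stated in the device description.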

    The CADstream system consists of proprietary software developed by Confirma installed on an off-the-shelf personal computer and a monitor configured as a CADstream display station.

    AI/ML Overview

    The provided document is a 510(k) summary for the CADstream Version 2.0 MRI Image Processing Software. It does not contain detailed information about specific acceptance criteria or an explicit study proving performance against such criteria. The document focuses on the device's intended use, description, software development processes, and regulatory substantiation.

    Here's an analysis based on the information provided, highlighting what is present and what is missing:


    Description of Acceptance Criteria and Study to Prove Device Meets Them

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document mentions that "Performance testing of the features described in the user manual has been successfully completed utilizing clinical datasets" and "Software beta testing also has been completed, validating that the requirements for these features have been met." However, it does not explicitly list the acceptance criteria in terms of specific performance metrics (e.g., sensitivity, specificity, accuracy, precision of measurements) or the quantitative results of these tests.

    The document describes the device's functions:

    • Automatically registers serial patient image acquisitions to minimize motion impact.
    • Segments and labels tissue types based on enhancement characteristics (parametric image maps).
    • Performs user-defined post-processing functions (image subtractions, multiplanar reformats, maximum intensity projections).
    • Provides accurate and reproducible measurements of the longest diameters and volume of segmented tissues.

    Without explicit acceptance criteria and corresponding performance metrics, a table cannot be constructed.

    2. Sample Size Used for the Test Set and Data Provenance:

    The document states "Performance testing... has been successfully completed utilizing clinical datasets." However, it does not specify the sample size (number of cases or patients) used for this testing. It also does not provide information on the data provenance (e.g., country of origin, retrospective or prospective nature).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

    The document does not provide details about the number of experts, their qualifications, or how ground truth was established for the clinical datasets used in performance testing.

    4. Adjudication Method for the Test Set:

    The document does not describe any adjudication method (e.g., 2+1, 3+1 consensus) used for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance, nor does it specify any effect size or improvement. The "Intended Use Statement" notes that "When interpreted by a skilled physician, this device provides information that may be useful in screening and diagnosis" and "Patient management decisions should not be made based solely on the results of CADstream analysis," implying human oversight but not a formal comparative study of reader performance.

    6. Standalone (Algorithm Only) Performance Study:

    The document describes the device's features and states "CADstream automatically registers... segments and labels... and performs other user-defined post-processing functions... CADstream can also be used to provide accurate and reproducible measurements..." This implies standalone algorithmic capabilities. However, it does not present a dedicated standalone performance study with quantitative metrics (e.g., sensitivity, specificity, F1-score) in isolation from human interpretation. The primary use case described involves interpretation by a skilled physician.

    7. Type of Ground Truth Used:

    The document refers to "clinical datasets" but does not specify the type of ground truth used (e.g., expert consensus, pathology reports, follow-up outcomes data) for evaluating the device's performance.

    8. Sample Size for the Training Set:

    The document does not specify the sample size of the training set used for developing the multivariate pattern recognition methods.

    9. How the Ground Truth for the Training Set Was Established:

    The document states that "CADstream software simultaneously analyzes the pixel signal intensities from multiple MR sequences and applies multivariate pattern recognition methods to perform tissue segmentation and classification." However, it does not describe how the ground truth for training these methods was established.


    In summary, the provided document is a high-level regulatory submission that attests to developmental processes and general performance testing but lacks the specific quantitative details typically found in a clinical study report regarding acceptance criteria, sample sizes, expert involvement, and explicit performance metrics.


    K Number
    K022938
    Date Cleared
    2002-10-25

    (51 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K992654, K002519

    Intended Use

    Volume Interactions Pte Ltd's Image Processing System is a medical device for the display and visualization of 3D medical image data derived from CT and MRI scans. It is intended to be used by qualified and trained medical professionals, after proper installation.

    Volume Interactions Pte Ltd's Image Processing System is not intended to be used in direct contact with the patient nor is it intended to be connected to equipment that is used in direct contact with the patient.

    Device Description

    Volume Interactions Image Processing System reads DICOM 3.0 format medical image data sets (and other formats) and displays 3D image reconstructions of these data sets through various user-selectable, industry-standard rendering methods and algorithms. The clinical users can spatially manipulate, process to highlight structures and volumes of interest, and measure distances and volumes in the 3D image reconstructions. The processed data can be stored either as 3D image data in a proprietary format, or as 2D picture projections of the 3D image data in TIFF image format. The system runs on commercially available IBM PC compatible computers and hardware components with the Microsoft Windows NT and 2000 operating systems.
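The distance and volume measurements described reduce to voxel geometry once a structure is segmented: a volume is the voxel count times the per-voxel volume given by the scan spacing, and a distance scales index differences by that spacing. A hedged NumPy sketch of this arithmetic (not Volume Interactions' code; the function names are illustrative):

```python
import numpy as np

def segment_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres,
    given per-axis voxel spacing in mm (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def point_distance_mm(p, q, spacing_mm):
    """Euclidean distance between two voxel indices, in mm."""
    delta = (np.asarray(p) - np.asarray(q)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(delta))

mask = np.zeros((10, 10, 10), dtype=bool)
mask[2:6, 2:6, 2:6] = True  # 4 x 4 x 4 = 64 voxels
vol_ml = segment_volume_ml(mask, (1.0, 0.5, 0.5))                # 64 * 0.25 mm^3
dist = point_distance_mm((0, 0, 0), (3, 4, 0), (1.0, 1.0, 1.0))  # 5.0 mm
```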

    The system consists of three product modules namely, VizDexter™ 2.0, Dextroscope™ and DextroBeam™. The modules are described as follows:

    VizDexter™ 2.0 is software that processes tomographic (e.g. Computed Tomography, Magnetic Resonance Imaging) data and produces stereoscopic 3D renderings for surgery planning and visualization purposes. The software uses user-selectable, industry-standard rendering methods and algorithms.

    Dextroscope™ is an interactive console and display system that allows the user to interact with two hands with the 3D images generated by the VizDexter™ software. The Dextroscope™ user works seated, with both forearms positioned on armrests. Wearing stereoscopic glasses, the user looks into a mirror and perceives the virtual image within comfortable reach of both hands for precise hand-eye coordinated manipulation. The hardware uses various industry standard components.

    DextroBeam™ is an interactive console intended for group collaborative discussions with 3D images using a stereoscopic projection system. The DextroBeam™ system uses the base of the Dextroscope™ as the 3D interaction interface with the virtual objects. The monitor of the Dextroscope™ is replaced by a screen projection system, so instead of looking into the mirror of the Dextroscope™, the user looks at large stereoscopic screen projections while working with the virtual data in reach of his hands. This enables the discussion of 3D data sets with other specialists in stereoscopic 3D. The hardware uses various industry standard components.

    AI/ML Overview

    The provided text is a 510(k) summary for the Volume Interactions Image Processing System (VizDexter™ 2.0, Dextroscope™, and DextroBeam™). This document focuses on demonstrating substantial equivalence to predicate devices rather than proving performance against specific acceptance criteria through a study.

    Therefore, the document does not contain the acceptance criteria or a study that proves the device meets specific performance criteria in the way a clinical validation or standalone performance study would. It primarily compares the technological characteristics and intended use of the new device with existing, legally marketed devices.

    However, based on the information provided, here's what can be inferred or explicitly stated regarding the device's nature and the lack of specific performance study details:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The 510(k) summary does not present specific quantitative acceptance criteria or a direct study measuring device performance against such criteria. The "performance" demonstrated is substantial equivalence to predicate devices in functionality and safety. The tables compare features, not quantitative performance metrics.

    2. Sample Size Used for the Test Set and Data Provenance:

    No information is provided regarding a "test set" in the context of a performance study. The submission relies on a comparison of technological characteristics with predicate devices.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    Not applicable, as no performance study with a test set requiring ground truth establishment is described.

    4. Adjudication Method for the Test Set:

    Not applicable, as no performance study with a test set requiring adjudication is described.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

    No MRMC study is mentioned in the document. The comparison is between the technological features of the device and its predicates, not a comparative effectiveness study involving human readers with and without AI assistance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:

    No standalone performance study for the algorithm itself (VizDexter™ 2.0) is described in terms of specific performance metrics. The document focuses on the system as a whole, including the human interaction components (Dextroscope™, DextroBeam™). The functions described are image processing and visualization, which inherently involve human interpretation.

    7. The Type of Ground Truth Used:

    Not applicable, as no performance study is detailed that would require a ground truth for evaluation.

    8. The Sample Size for the Training Set:

    No information is provided regarding a "training set." The device is an "Image Processing System" that utilizes "industry standard rendering methods and algorithms," implying it's not a machine learning model that would require a dedicated training set in the modern sense.

    9. How the Ground Truth for the Training Set Was Established:

    Not applicable, as no training set is mentioned.


    In summary:

    This 510(k) submission is a "substantial equivalence" filing for an image processing and visualization system from 2002. It focuses on demonstrating that the device has similar technological characteristics and intended use as already marketed devices. It does not present clinical performance data, acceptance criteria, or studies of the kind typically expected for AI/ML-driven diagnostic devices today. The "proof" of meeting acceptance criteria is implicitly through the FDA's determination of substantial equivalence based on the comparison of features and the device's classification as a Picture Archiving and Communications System, where the clinician retains responsibility for interpretation.


    K Number
    K020546
    Device Name
    FUSION 7D
    Date Cleared
    2002-04-26

    (66 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K010336, K983256, K992654

    Intended Use

    Fusion7D registers pairs of anatomical and functional volumetric images (e.g. MRI-SPECT, MRI-PET, CT-SPECT, CT-PET), or pairs of anatomical volumetric images (e.g. MRI-MRI, CT-CT and MRI-CT) as a means to ease the comparison of image volume data by the clinician. The result of the registration operation aims to help the clinician obtain a better understanding of the joint information that would otherwise have to be compared visually. This is useful for a wide range of clinical and therapeutic applications. It is important to note that the clinician retains the ultimate responsibility for making the pertinent diagnosis based on their standard procedures including visual comparison of the separate unregistered images. Fusion7D is a complement to these standard procedures.

    Device Description

    Fusion7D is a software program running on a PC platform, which brings into alignment (registers) pairs of images from different imaging modalities. Fusion7D also includes functionality to read, display, and save the original volumetric data and the results of the registration operation by means of a graphic user interface that includes visualization, file browsing and control of input and output as described in the following text.
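Fusion7D's actual registration algorithm is not described in the summary. To make the idea concrete, here is a deliberately minimal sketch of rigid registration restricted to pure 2D translation, found by exhaustively maximizing normalized cross-correlation; a real multimodal registration (e.g. MRI-SPECT) would typically need rotation and an information-theoretic similarity measure such as mutual information instead:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=3):
    """Find the (dy, dx) shift of `moving` that best aligns it to `fixed`,
    scored by normalized cross-correlation over an exhaustive search."""
    f = fixed - fixed.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            s = shifted - shifted.mean()
            denom = np.sqrt((f * f).sum() * (s * s).sum())
            if denom == 0:
                continue
            score = (f * s).sum() / denom
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

fixed = np.zeros((16, 16))
fixed[5:9, 5:9] = 1.0                                    # a square "structure"
moving = np.roll(np.roll(fixed, 2, axis=0), -1, axis=1)  # displaced copy
dy, dx = register_translation(fixed, moving)             # recovers (-2, 1)
```

The recovered shift undoes the known displacement exactly because the test image wraps with no edge loss; clinical registration must additionally handle partial overlap and differing intensity scales between modalities.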

    AI/ML Overview

    The provided text describes Fusion7D, a software program for registering and fusing medical images. However, it does not include detailed acceptance criteria or a study that specifically proves the device meets such criteria in terms of quantitative performance metrics, sample sizes, expert involvement, or statistical analysis.

    The document is a 510(k) summary, which focuses on demonstrating substantial equivalence to predicate devices, rather than providing a detailed performance study with acceptance criteria.

    Therefore, the following information cannot be extracted from the provided text:

    • A table of acceptance criteria and the reported device performance: This information is not present. The document describes the device's capabilities and intended use but does not quantify performance against specific criteria.
    • Sample size used for the test set and the data provenance: No performance study details are given.
    • Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not mentioned.
    • Adjudication method for the test set: Not mentioned.
    • If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance: This type of study is not described. The device is a registration tool, not an AI diagnostic aid in the sense of improving human reader performance on a diagnostic task, although it aims to "ease the comparison of image volume data."
    • If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Not explicitly stated or quantified in terms of performance. The document implies automated registration capabilities but doesn't provide a standalone performance evaluation against a gold standard.
    • The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not mentioned.
    • The sample size for the training set: No training data or set is mentioned, as this is more a description of the final device functionality rather than its development.
    • How the ground truth for the training set was established: Not applicable, as no training set is described.

    Summary of what can be inferred about "acceptance criteria" and "study" implicitly from the document:

    The "acceptance criteria" for Fusion7D, as implied by the 510(k) process, primarily revolve around demonstrating substantial equivalence to legally marketed predicate devices in terms of intended use, technological characteristics, and safety/effectiveness. The "study" largely consists of the submission itself, detailing the device's functionality and comparing it to existing, approved devices.

    The document states:

    • "Fusion7D is a software program running on a PC platform, which brings into alignment (registers) pairs of images from different imaging modalities."
    • It supports "manual," "semi-automatic," and "automatic" registration, limited to "rigid body deformation."
    • It provides "standard visualization facilities" and allows "registration results to be displayed in a variety of ways."
    • Intended Use: "Fusion7D registers pairs of anatomical and functional volumetric images... as a means to ease the comparison of image data."

    The FDA's approval letter confirms that the device was found "substantially equivalent" based on its comparison to the predicate devices listed (K010336, K983256, K992654). This substantial equivalence is the de facto "acceptance criteria" for this 510(k) submission, and the "study" is the submission argument itself.


    K Number
    K013878
    Manufacturer
    Date Cleared
    2001-12-07

    (14 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K992654

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    The V-works is a software application for the display and 3D visualization of medical image files from scanning devices, such as CT, MRI or 3D Ultrasound. It is intended for use by radiologists, clinicians and referring physicians to acquire, process, render, review, store, print and distribute DICOM 3.0 compliant image studies, utilizing standard PC hardware.

    Device Description

    The V-works™ is a software application for the display and 3D visualization of medical image files from scanning devices, such as CT, MRI or 3D Ultrasound. It is intended for use by radiologists, clinicians and referring physicians to acquire, process, render, review, store, print and distribute DICOM 3.0 compliant image studies, utilizing standard PC hardware. All of the functions are supported on a standard personal computer platform for ease of cost and maintenance. The use of the Microsoft Windows Professional/NT 4.0 operating system makes the V-works™ software easy to use and capable of being integrated with other computer needs.

    AI/ML Overview

    The provided text is a 510(k) Summary for the CyberMed Inc., V-works™ device. It describes the device, its intended use, and its substantial equivalence to a predicate device (Voxar Limited Plug'n View 3D). However, it does not contain information about acceptance criteria, device performance studies, sample sizes, ground truth establishment, or expert qualifications in the way a clinical validation study typically reports it.

    The document focuses on establishing substantial equivalence based on technical characteristics and intended use, rather than presenting a performance study with specific acceptance criteria and results.

    Therefore, I cannot fulfill your request for:

    • A table of acceptance criteria and the reported device performance: This information is not present in the document.
    • Sample size used for the test set and the data provenance: This information is not present in the document.
    • Number of experts used to establish the ground truth for the test set and the qualifications of those experts: This information is not present in the document.
    • Adjudication method for the test set: This information is not present in the document.
    • If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size: This information is not present in the document.
    • If a standalone performance (i.e. algorithm only without human-in-the loop performance) was done: This information is not present in the document.
    • The type of ground truth used: This information is not present in the document.
    • The sample size for the training set: This information is not present in the document.
    • How the ground truth for the training set was established: This information is not present in the document.

    The document mainly emphasizes that the V-works™ software is "designed, developed, tested and validated according to written procedures" and that "hazard analysis on this product has been performed throughout the definition, design, coding and testing phases." It concludes that the "Level of Concern" is "Minor" and that potential hazards are similar to other PACS components and unlikely to result in patient death or injury. This reflects a software development and risk management approach, not a clinical performance study as typically seen for AI/CAD devices.

