Search Results

Found 4 results

510(k) Data Aggregation

    K Number
    K201092
    Device Name
    LSN
    Date Cleared
    2020-10-29

    (189 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Imaging Biometrics, LLC

    Intended Use

    LSN (Liver Surface Nodularity) is an image analysis software application intended to assist radiologists and other trained healthcare professionals in analyzing and reporting on the liver morphology depicted in computed tomography (CT) images for use in assessment of chronic liver disease. LSN is designed to assist the user in the evaluation and documentation of liver morphology, specifically liver surface nodularity, provided that the surface nodularity is adequately depicted on the CT images.

    LSN provides quantitative metrics related to liver fibrosis by automating segmentation of the liver surface within user-defined Regions of Interest (ROIs) and calculating distances and means related to the liver surface nodularity. LSN also offers reporting capabilities for documenting user-confirmed results, thereby facilitating communication with other trained healthcare professionals and assessment of changes over time.

    LSN is intended to provide image-related information that is interpreted by a trained professional, but it does not directly generate any diagnosis. The information provided by LSN should not be used in isolation when making patient management decisions.

    LSN is not intended for use with or for the diagnostic interpretation of mammography images.

    Device Description

    LSN (Liver Surface Nodularity) is a post-processing software application which assists trained professionals in evaluating DICOM computed tomography image studies of patients with chronic liver disease. The software provides tools to enable the user to make quantitative measurements related to liver surface nodularity as depicted on CT images.

    The generated information consists of an LSN Score (reported in tenths of a millimeter), a quantitative measure of the surface nodularity based on a set of user-defined ROIs sampling the liver surface. LSN calculates the distance between the detected liver edge and a smoothed polynomial line (spline) on a pixel-by-pixel basis inside ROIs and reports the mean of these distances on a per-slice basis as well as an overall LSN Score for the imaging series.

    LSN provides the user with information relevant to the progression of chronic liver disease. LSN does not make clinical decisions, and the information provided by LSN must not be used in isolation when making patient management decisions. The LSN Score may provide value by standardizing the terminology used to describe surface nodularity in reporting, thereby facilitating communication between radiologists and other clinicians involved in a patient's treatment planning. In addition, standardized reporting metrics may also be helpful in assessing changes for the same patient over time.

    LSN functions by displaying a DICOM CT abdominal series to the user; the user paints a broad region of interest (ROI) delineating the liver edge on a subset of image slices. Then, for the painted region on each slice, the edge is detected using multiple algorithms. For each detected edge, a spline is fit to the edge and the shortest distances from each edge pixel to the spline are calculated and averaged, resulting in a potential LSN value. The maximum LSN value calculated for an edge is reported as the LSN value for that slice. The LSN values for all slices on which ROIs have been painted are then averaged to determine the overall LSN Score.
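
    To make the description above concrete, here is a minimal sketch of the distance-to-spline computation in Python (NumPy/SciPy). It illustrates the summary statistics only; the edge-detection step, the spline smoothing factor, and the pixel-to-millimeter conversion are assumptions for the example, not details of the cleared implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def edge_nodularity(edge_x: np.ndarray, edge_y: np.ndarray, smoothing: float = 5.0) -> float:
    """Mean deviation (in pixels) of detected liver-edge pixels from a smoothed spline.

    edge_x must be strictly increasing (pixels ordered along the edge);
    the smoothing factor is an assumption for this sketch.
    """
    spline = UnivariateSpline(edge_x, edge_y, s=smoothing)  # smoothed reference line
    return float(np.abs(edge_y - spline(edge_x)).mean())    # one candidate LSN value

def overall_lsn_score(slices, pixel_spacing_mm: float) -> float:
    """Average the per-slice values to get the overall LSN Score, in millimeters.

    `slices` has one entry per painted slice; each entry is a list of
    (edge_x, edge_y) arrays, one per edge-detection algorithm. Per the
    description, the maximum candidate value on a slice is that slice's value.
    """
    per_slice = [max(edge_nodularity(x, y) for x, y in edges) for edges in slices]
    return float(np.mean(per_slice)) * pixel_spacing_mm
```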

    The core LSN algorithms are implemented in platform-independent code, and have been integrated into both a standalone PC research application and a Mac-based viewer plugin for clinical use. Both platforms produce an equivalent LSN Score; the clinical version streamlines the algorithm to require less re-work by the user. The clinical version also produces a report containing images, the scores for each slice, and the overall LSN Score. The report is produced in both PDF and DICOM formats and is ready for upload to PACS.
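
    For readers unfamiliar with the DICOM half of that report workflow, the following is a minimal sketch (pydicom 2.x) of wrapping a rendered report page as a DICOM Secondary Capture object suitable for PACS upload. All tag values and the 8-bit grayscale assumption are illustrative; the submission does not describe the actual export code.

```python
import numpy as np
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

SECONDARY_CAPTURE = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture Image Storage

def report_page_to_dicom(pixels: np.ndarray, path: str) -> None:
    """Wrap an 8-bit grayscale report page as a DICOM Secondary Capture object."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = SECONDARY_CAPTURE
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = meta
    ds.SOPClassUID = SECONDARY_CAPTURE
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.Modality = "OT"                     # "other"; typical for secondary captures
    ds.StudyInstanceUID = generate_uid()   # in practice, copied from the source CT study
    ds.SeriesInstanceUID = generate_uid()
    ds.Rows, ds.Columns = pixels.shape
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.BitsAllocated = ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = pixels.astype(np.uint8).tobytes()
    ds.save_as(path, write_like_original=False)  # writes a standard Part 10 file
```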

    AI/ML Overview

    The provided text describes the acceptance criteria and a study to prove the device meets these criteria for the LSN (Liver Surface Nodularity) software.

    Here's the breakdown of the information requested:


    1. A table of acceptance criteria and the reported device performance:

    The document describes the acceptance criteria in terms of the results of testing done. While it doesn't present a formal table of quantitative acceptance criteria and corresponding performance metrics, it states general criteria that were met.

    Acceptance Criteria (Inferred from "Testing Information and Performance" section) | Reported Device Performance
    --- | ---
    All product specifications verified. | "All product specifications were verified."
    Product meets user needs. | "the [ability of the] product to meet user needs was validated."
    Testing performed according to internal company procedures. | "Testing was performed according to internal company procedures."
    Software testing and validation conducted according to written test protocols. | "Software testing and validation were conducted according to written test protocols established before testing was conducted."
    Test results reviewed by designated personnel before software release. | "Test results were reviewed by designated technical professionals before software proceeded to release."
    Validation test results support design intent. | "Validation test results support the conclusion that actual device performance satisfies the design intent."
    Functional verification met design requirements. | "functional verification ... all met design requirements."
    Licensing met design requirements. | "licensing ... all met design requirements."
    Labeling met design requirements. | "labeling ... all met design requirements."
    Feature functionality met design requirements. | "feature functionality all met design requirements."
    Arithmetic accuracy verified and validated. | "Arithmetic ... accuracy was verified and validated by comparison to alternative calculation mechanisms."
    Report accuracy verified and validated. | "report accuracy was verified and validated by comparison to alternative calculation mechanisms."
    Clinical operation validated through usability testing. | "clinical operation was validated through usability testing."
    LSN output is repeatable for different CT imaging and reconstruction parameters. | "LSN output is repeatable for different CT imaging and reconstruction parameters."
    LSN output is reproducible across different CT scanner types and vendors. | "reproducible across different CT scanner types and vendors."
    Intra-observer measurement variability is low. | "the intra- and inter-observer measurement variability is low."
    Inter-observer measurement variability is low. | "the intra- and inter-observer measurement variability is low."
    Risk analysis completed and risk controls implemented to mitigate unacceptable hazards. | "The LSN risk analysis was completed and risk control [measures] were implemented to mitigate unacceptable hazards."
    Verification testing results supported claims of substantial equivalence. | "Verification testing results supported the claims of substantial equivalence."

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: The document does not explicitly state the sample size (number of cases or images) used for the testing/validation set.
    • Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It generally refers to "different CT imaging and reconstruction parameters" and "different CT scanner types and vendors."
    • Racial Backgrounds: "LSN has not been evaluated with images from patients of all ethnicities. It has been primarily evaluated with White and Black racial backgrounds. LSN has not been evaluated with images from pediatric patients."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    The document mentions "usability testing" and "user expertise" but does not specify a number of experts used to establish ground truth or their specific qualifications (e.g., "radiologist with 10 years of experience"). It only generally refers to "highly-trained healthcare professionals such as radiologists and medical imaging technologists."

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    The document does not describe any specific adjudication method (like 2+1 or 3+1) for establishing ground truth on the test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what the effect size was of how much human readers improve with AI vs. without AI assistance:

    The document does not describe an MRMC comparative effectiveness study directly measuring human reader improvement with AI assistance. The study focuses on the device's technical performance and consistency, stating that it "assists radiologists" and "does not directly generate any diagnosis."

    6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

    The document implies that the device's output (LSN score calculation, segmentation, etc.) was tested for repeatability, reproducibility, and variability, suggesting a standalone component for these technical measurements. However, the overall device function is described as "intended to assist radiologists," meaning it's not purely standalone in its intended clinical use. The "arithmetic and report accuracy" validation could be considered aspects of standalone performance proof.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    The document speaks of "verification" and "validation" against "design requirements" and "alternative calculation mechanisms" for arithmetic accuracy. For "clinical operation," it mentions "usability testing." While it implies the existence of a 'correct' or 'intended' output, it does not explicitly state the specific type of ground truth (e.g., expert consensus readings, histopathology confirmation) used for validating the LSN score itself or the segmentation accuracy.

    8. The sample size for the training set:

    The document does not mention the sample size for any training set. It primarily discusses "bench testing" and "validation" of the final product.

    9. How the ground truth for the training set was established:

    Since no training set is mentioned or implied, no information is provided on how its ground truth might have been established. The focus of this document is on the validation of the device for regulatory submission, not its development process.


    K Number
    K191530
    Device Name
    StoneChecker
    Date Cleared
    2019-09-26

    (108 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Imaging Biometrics, LLC

    Intended Use

    StoneChecker is a standalone post-processing software application which assists trained professionals in evaluating DICOM computed tomography image studies of patients diagnosed with kidney stones. The software provides tools to enable the user to navigate images, select regions of interest, and generate information from those regions.

    The generated information consists of regional statistical measurements of image texture and heterogeneity, including means, standard deviation, skewness, and kurtosis. The information also includes regional physical measurements of stone size, volume, and position.

    StoneChecker does not make clinical decisions and the information provided by StoneChecker must not be used in isolation when making patient management decisions.

    Device Description

    StoneChecker (SC) is a standalone software application intended to load DICOM-formatted CT studies, let the trained user identify stone regions of interest, and provide computed information consisting of physical measurements and statistical measurements of stone heterogeneity from a single source, making it easier for the user to determine the best treatment option. SC is an optional tool used during the treatment planning of a patient diagnosed with kidney stones.

    StoneChecker provides the user tools to select and evaluate various physical characteristics of a kidney stone displayed on a non-contrast enhanced Kidneys, Ureters, and Bladder (KUB) CT scan slice. The measured and calculated values are displayed on the PC screen and the user has an option to generate a report. The calculated output includes stone volume, mean Hounsfield Unit (HU) density, skin-to-stone distance, and texture values (mean, mean of positive pixels, standard deviation, skewness, kurtosis, and entropy). These data can be used by the physician as an aid to decision making and are intended to be an adjunct to other clinical data such as medical history, physical examination, and urine analysis. Thus, additional analysis of all kidney stones is required. StoneChecker software is designed exclusively for use in assessing kidney stones.
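
    As an illustration of the listed outputs, the sketch below computes the texture statistics for a stone ROI with NumPy/SciPy and saves one row of results to an Excel spreadsheet with pandas (matching the "standard Excel spreadsheets" feature noted below). The ROI extraction, histogram binning for entropy, and spreadsheet layout are assumptions, not details from the submission.

```python
import numpy as np
import pandas as pd
from scipy import stats

def stone_texture_stats(hu_roi: np.ndarray) -> dict:
    """Texture statistics for the Hounsfield Unit values inside a stone ROI."""
    px = hu_roi.ravel()
    hist, _ = np.histogram(px, bins=64, density=True)       # binning is an assumption
    hist = hist[hist > 0]
    return {
        "mean": float(px.mean()),
        "mean_positive_pixels": float(px[px > 0].mean()),   # MPP: mean of positive HU values
        "std": float(px.std(ddof=1)),
        "skewness": float(stats.skew(px)),
        "kurtosis": float(stats.kurtosis(px)),
        "entropy": float(stats.entropy(hist)),              # Shannon entropy of the HU histogram
    }

# Hypothetical usage: one row per stone, written to Excel (requires openpyxl).
results = pd.DataFrame([stone_texture_stats(np.random.normal(800, 150, (20, 20)))])
results.to_excel("stonechecker_results.xlsx", index=False)
```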

    StoneChecker is designed to provide easy-to-acquire useful data for helping clinicians make the best decisions for their patients.

    StoneChecker includes the following features:

    • Processes standard DICOM image sets,
    • Novel, proven statistical algorithms,
    • Time-saving kidney stone region-of-interest (ROI) and measurement tools,
    • Rapid calculation results, and
    • Saves results in standard Excel spreadsheets.
    AI/ML Overview

    The provided text describes the StoneChecker device, its indications for use, and a comparison to predicate devices, but it does not contain detailed information about specific acceptance criteria and a study proving the device meets those criteria with quantitative performance metrics.

    The document states:

    • "All product specifications were verified and validated. Testing was performed according to internal company procedures. Software testing and validation were conducted according to written test protocols established before testing was conducted. Test results were reviewed by designated technical professionals before software proceeded to release. Test results support the conclusion that actual device performance satisfies the design intent."
    • "Bench testing (functional and integration) was conducted for StoneChecker during product development. Test results demonstrate StoneChecker output is computed accurately based on input."

    However, it lacks the specific numerical acceptance criteria for measurements like stone volume, HU density, or texture values, and thus does not present a table of acceptance criteria and reported device performance as requested. It also doesn't detail a formal comparative study with AI vs. without AI assistance.

    Therefore, I cannot populate the requested information in the desired format using only the provided text. The following points represent the information that can be extracted or inferred from the provided text, and explicitly state what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance:

    Not available in the provided text. The document states that "Test results demonstrate StoneChecker output is computed accurately based on input" but does not provide specific acceptance criteria (e.g., minimum accuracy/error percentage for volume, HU density, etc.) nor the numerical performance results against such criteria.

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: Not explicitly stated. The document mentions "usage validation at two clinical sites in Oxford, UK and Beijing, China" but does not provide the number of cases or patients used in this validation.
    • Data Provenance: The usage validation was conducted at "two clinical sites in Oxford, UK and Beijing, China." The data used would therefore be from these locations. It is implied to be prospective or retrospective clinical data given the nature of "usage validation," but the document doesn't specify.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Not explicitly stated. The text refers to "physicians use StoneChecker to analyze KUB CT scans" during the usage validation, but not how many physicians were involved in establishing ground truth.
    • Qualifications of Experts: The text refers to "trained professionals," "physicians," and "trained physicians, Radiologists" as intended users, but does not specify the qualifications (e.g., years of experience) of those involved in establishing ground truth for the validation.

    4. Adjudication method for the test set:

    Not available in the provided text. The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) for establishing ground truth during the usage validation.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what the effect size was of how much human readers improve with AI vs. without AI assistance:

    Not available in the provided text. The document states StoneChecker "assists trained professionals," but it does not report a formal MRMC comparative effectiveness study measuring the improvement of human readers with AI assistance versus without.

    6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

    Yes, implicitly. The bench testing described ("Test results demonstrate StoneChecker output is computed accurately based on input") would likely constitute standalone performance testing for the algorithms' accuracy in calculating measurements. However, no specific metrics from this testing are provided beyond a general statement of accuracy.

    7. The type of ground truth used:

    • For the "bench testing (functional and integration)", the ground truth would likely be computational accuracy based on known inputs and expected outputs (i.e., verifying the algorithms correctly compute derived values from an ROI, such as volume or HU density).
    • For the "usage validation," the ground truth would be based on clinician assessment/consensus as they used the tool to "analyze KUB CT scans," but the specific method for establishing this ground truth is not detailed.
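
    A bench-style check of computational accuracy of the kind inferred above could look like the following: build a synthetic spherical "stone" with known geometry and density, compute volume and mean HU from the ROI, and compare against analytic values. The phantom, tolerances, and test framing are hypothetical.

```python
import numpy as np

def roi_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of an ROI as voxel count times voxel volume."""
    return float(mask.sum()) * float(np.prod(spacing_mm))

def test_sphere_volume_and_density():
    # Synthetic phantom: a 5 mm radius sphere of uniform 800 HU, 1 mm isotropic voxels.
    r_mm, hu, spacing = 5.0, 800.0, (1.0, 1.0, 1.0)
    zz, yy, xx = np.mgrid[-16:17, -16:17, -16:17].astype(float)
    mask = zz**2 + yy**2 + xx**2 <= r_mm**2
    volume = roi_volume_mm3(mask, spacing)
    expected = 4.0 / 3.0 * np.pi * r_mm**3           # analytic sphere volume
    assert abs(volume - expected) / expected < 0.05  # allow voxelization error
    image = np.where(mask, hu, -1000.0)              # air background
    assert abs(image[mask].mean() - hu) < 1e-6       # mean HU inside the ROI
```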

    8. The sample size for the training set:

    Not available in the provided text. The document does not mention the training set size, as it focuses on validation and regulatory aspects. This suggests it might not be a deep learning model requiring a distinct training set in the conventional sense, or the information is simply omitted.

    9. How the ground truth for the training set was established:

    Not available in the provided text. As the training set size is not mentioned, neither is the method for establishing its ground truth.


    K Number
    K123302
    Device Name
    IB CLINIC
    Date Cleared
    2013-01-11

    (80 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    IMAGING BIOMETRICS, LLC

    Intended Use

    IB Clinic v1.0 (Clinic) is a post-processing software toolkit designed to be integrated into existing medical image visualization applications running on standard computer hardware. Clinic accepts relevant DICOM image sets, such as dynamic perfusion and diffusion image sets. Clinic generates various perfusion- and diffusion-related parameters, standardized image sets, and image intensity differences. The results are saved to a DICOM image file and may be further visualized on an imaging workstation.

    Clinic is designed to aid trained physicians in advanced image assessment, treatment consideration, and monitoring of therapeutic response. The information provided by Clinic should not be used in isolation when making patient management decisions.

    Dynamic Perfusion Analysis - Generates parametric perfusion maps used for visualization of temporal variations in dynamic datasets, showing changes in image intensity over time. These maps may aid in the assessment of the extent and type of perfusion, blood volume and vascular permeability changes.

    Dynamic Diffusion Analysis - Generates apparent diffusion coefficient maps used for the visualization of apparent water movement in soft tissue throughout the body on both voxel-by-voxel and sub-voxel bases. These images may aid in the assessment of the extent of diffusion in tissue.
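
    The apparent diffusion coefficient map described here is conventionally computed from a mono-exponential decay model, S(b) = S0 · exp(-b · ADC), so ADC = ln(S0/Sb)/b. The sketch below is a generic illustration of that formula, not the device's code; the two-b-value acquisition is an assumption.

```python
import numpy as np

def adc_map(s0: np.ndarray, sb: np.ndarray, b: float) -> np.ndarray:
    """Voxel-wise ADC (mm^2/s) from a b=0 image `s0` and a diffusion-weighted
    image `sb` acquired with b-value `b` (s/mm^2), assuming mono-exponential decay."""
    eps = 1e-6                                        # guard against log(0) and division by zero
    ratio = np.clip(sb / np.maximum(s0, eps), eps, None)
    return -np.log(ratio) / b

# Hypothetical usage with b = 1000 s/mm^2:
# adc = adc_map(s0_volume, b1000_volume, b=1000.0)
```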

    Image Comparison - Generates co-registered image sets. Generates standardized image sets calibrated to an arbitrary scale to facilitate comparisons between independent image sets. Generates voxel-by-voxel maps of the image intensity differences between image sets acquired at different times. Facilitates selection and DICOM export of user-selected regions of interest (ROIs). These processes may enable easier identification of image intensity differences between images and easier selection and processing of ROIs.
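
    A minimal sketch of the standardization and difference-map steps, assuming the series are already co-registered and that standardization scales intensities by a reference-region mean to an arbitrary value (the scheme here is an assumption for illustration):

```python
import numpy as np

def standardize(volume: np.ndarray, reference_mask: np.ndarray, scale: float = 1000.0) -> np.ndarray:
    """Map image intensities to a common arbitrary scale via a reference region's mean."""
    return volume * (scale / volume[reference_mask].mean())

def difference_map(baseline: np.ndarray, follow_up: np.ndarray, reference_mask: np.ndarray) -> np.ndarray:
    """Voxel-by-voxel intensity difference between standardized, co-registered series."""
    return standardize(follow_up, reference_mask) - standardize(baseline, reference_mask)
```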

    Device Description

    Clinic is a platform-independent image processing library consisting of a set of code modules that run on standard computer hardware and compute a variety of numerical analyses, image parameter maps, and other image manipulations based on DICOM images captured via MR and CT modalities. These actions include:

    • Retrieval of MR and CT DICOM image studies from PACS and/or OS-based file storage.
    • Computation of parameter maps for:
      • DSC perfusion (based on MR and CT studies)
      • ADC diffusion (based on MR studies)
    • Image manipulations (of MR and CT studies):
      • Registration of images generated at different time points
      • Standardized scaling of image intensities
      • Comparison of registered and/or standardized images
      • Region of Interest (ROI) selection
      • Generation of ROI datasets in DICOM formats
    • Output of the above maps in DICOM format for export to PACS and/or OS file storage
    • Generation of reports summarizing the computations performed

    The IB Clinic code library can be used within standalone FDA cleared applications or can be "plugged in" and launched from within other FDA cleared applications such as Aycan's OsiriX PRO workstation. They are intended for distribution both in combination and in modular form, with functional subsets geared toward specific types of image analysis and marketed with corresponding names, including IB Neuro, IB Diffusion, and IB Delta Suite.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study information for the IB Clinic v1.0 device:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Based on the provided text, there are no explicit, quantitative acceptance criteria or numerical performance metrics for the IB Clinic v1.0 device. The submission focuses on demonstrating substantial equivalence to predicate devices, rather than meeting specific performance thresholds. The text describes the functionalities of the device but does not quantify their accuracy, precision, or efficiency.

    Acceptance Criteria (Not Explicitly Stated) | Reported Device Performance
    --- | ---
    Implicit criteria: |
    - Ability to retrieve MR and CT DICOM images | Yes, device performs this.
    - Ability to compute DSC perfusion maps | Yes, device performs this.
    - Ability to compute ADC diffusion maps | Yes, device performs this.
    - Ability to register images | Yes, device performs this.
    - Ability to standardize image intensities | Yes, device performs this.
    - Ability to compare images | Yes, device performs this.
    - Ability to select Regions of Interest (ROIs) | Yes, device performs this.
    - Ability to generate ROI datasets in DICOM | Yes, device performs this.
    - Ability to output maps in DICOM format | Yes, device performs this.
    - Ability to generate reports | Yes, device performs this.
    Substantial equivalence to predicate devices in intended use and performance characteristics. | Confirmed by FDA clearance.

    2. Sample Size for Test Set and Data Provenance:

    The document does not specify a sample size for a test set or the provenance of any data. The submission relies on non-clinical tests (quality assurance measures) and a comparison to predicate devices, rather than a clinical trial with a defined test set.

    3. Number of Experts and Qualifications for Ground Truth (Test Set):

    Not applicable. No clinical test set with expert-established ground truth is mentioned in the document.

    4. Adjudication Method for Test Set:

    Not applicable. As there is no described clinical test set, there is no mention of an adjudication method.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    No. The document explicitly states: "Discussion of Clinical Tests Performed: N/A". This indicates that no MRMC or any other clinical effectiveness study involving human readers or AI assistance was conducted or reported in this submission.

    6. Standalone Performance Study (Algorithm Only):

    No. The document states "N/A" for clinical tests. While the device is a "post-processing software toolkit" and "platform independent image processing library," the submission does not present any standalone performance metrics or studies directly demonstrating the algorithm's accuracy or efficacy on a dataset. The validation described is primarily related to software development processes and comparison to predicate devices.

    7. Type of Ground Truth Used:

    Not explicitly stated for any performance evaluation. The "ground truth" for the device's functionality appears to be derived from the inherent mathematical and algorithmic correctness of the image processing operations it performs, as verified through "Performance testing (verification)" and "Product software validation" (listed under non-clinical tests). There's no mention of a clinical ground truth like pathology or outcomes data to assess the accuracy of the generated perfusion/diffusion maps.

    8. Sample Size for Training Set:

    Not applicable. As this is a software toolkit for image processing, not a machine learning model that typically requires a training set, no training set size is mentioned. The device computes parameters based on established physical models (e.g., perfusion, diffusion) rather than learning from data.

    9. How Ground Truth for Training Set Was Established:

    Not applicable. See point 8.


    K Number
    K080762
    Date Cleared
    2008-05-15

    (58 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    IMAGING BIOMETRICS, LLC

    Intended Use

    IB Neuro™ software allows the post-processing and display of dynamically acquired MR datasets to evaluate image intensity variations over time. IB Neuro™ v1.0 plug-in accepts data from existing MRI systems, performs quality control checks and generates parametric perfusion maps such as Relative Cerebral Blood Volume (rCBV), Cerebral Blood Flow (CBF), Mean Transit Time (MTT) and Time to Peak (TTP) and sends the maps to a PACS for subsequent viewing. These images when interpreted by a trained physician may yield information useful in clinical applications. Our advanced technology is designed to be compliant with healthcare standards such as DICOM and is easily and rapidly integrated into existing medical image visualization applications.

    Device Description

    IB Neuro "" OsiriX Plugin is software designed to analvze dynamically acquired datasets. Using well-established algorithms, parametric perfusion maps can be generated such as Relative Cerebral Blood Volume (rCBV), Cerebral Blood Flow (CBF), Mean Transit Time (MTT) and Time to Peak (TTP). The strength of our software is its ability to extend the productivity of any existing viewer, CAD workstation or PACS via a platform-independent base library that allows for quick and seamless integration into existing server and workstation applications. It also includes other critical features such as:

    • Enables rapid creation of a complete array of critical perfusion parameter maps of rCBV, CBF, MTT, TTP
    • Automated correction of contrast agent leakage for rCBV maps
    • Automated brain mask generation
    • Ability to normalize parameters to normal appearing white matter (NAWM)
    • Automated report generation
    • View dynamic signal time course on a per-voxel basis
    • Interactive Arterial Input Function (AIF) selection
    • Automatic export of perfusion parameter maps to DICOM images within the same study
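
    As a toy illustration of the well-established computations behind these maps: the signal curve is converted to a relaxivity change, ΔR2*(t) = -ln(S(t)/S0)/TE; rCBV is proportional to the area under ΔR2*(t); TTP is the time of its peak; and by the central volume theorem MTT = CBV/CBF, where CBF is obtained in practice by deconvolving ΔR2*(t) with an arterial input function. The sketch below computes only rCBV and TTP; the baseline length and numerical guards are assumptions.

```python
import numpy as np

def dsc_maps(signal: np.ndarray, t: np.ndarray, te_s: float, n_baseline: int = 10):
    """Toy rCBV and TTP maps from a DSC time series `signal` of shape (T, Z, Y, X).

    t: acquisition times in seconds; te_s: echo time TE in seconds;
    n_baseline: number of pre-bolus time points used to estimate S0 (assumption).
    """
    s0 = signal[:n_baseline].mean(axis=0)                  # pre-contrast baseline S0
    eps = 1e-6
    ratio = np.clip(signal / np.maximum(s0, eps), eps, None)
    delta_r2s = -np.log(ratio) / te_s                      # ΔR2*(t) curve per voxel
    dt = np.diff(t).reshape(-1, 1, 1, 1)
    rcbv = (0.5 * (delta_r2s[1:] + delta_r2s[:-1]) * dt).sum(axis=0)  # trapezoidal AUC ∝ rCBV
    ttp = t[np.argmax(delta_r2s, axis=0)]                  # time to peak of ΔR2*(t)
    # CBF requires deconvolution of ΔR2*(t) with an arterial input function;
    # MTT then follows from the central volume theorem, MTT = CBV / CBF.
    return rcbv, ttp
```
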
    AI/ML Overview

    Acceptance Criteria and Device Performance Study for IB Neuro™ v1.0

    The provided document describes the 510(k) submission for IB Neuro™ v1.0. This submission primarily focuses on establishing substantial equivalence to predicate devices, rather than presenting a performance study with specific acceptance criteria and detailed performance metrics.

    1. Table of Acceptance Criteria and Reported Device Performance

    The submission does not provide specific, quantifiable acceptance criteria or a table of reported device performance in terms of diagnostic accuracy metrics (e.g., sensitivity, specificity, AUC). The primary performance claim is that the device "performs quality control checks and generates parametric perfusion maps" and for "image analysis and processing and generation of parametric maps to provide additional information beyond standard imaging."

    Since there are no explicit acceptance criteria or quantitative performance metrics reported in the provided text, the table below reflects the general claims and the basis for the 510(k) clearance:

    Acceptance Criteria (Implied) | Reported Device Performance
    --- | ---
    Functional Equivalence: Ability to accept data from existing MRI systems. | "IB Neuro™ v1.0 plug-in accepts data from existing MRI systems"
    Parametric Map Generation: Ability to generate rCBV, CBF, MTT, and TTP maps. | "generates parametric perfusion maps such as Relative Cerebral Blood Volume (rCBV), Cerebral Blood Flow (CBF), Mean Transit Time (MTT) and Time to Peak (TTP)"
    Quality Control: Performs quality control checks. | "performs quality control checks"
    Data Export: Sends maps to PACS for subsequent viewing. | "sends the maps to a PACS for subsequent viewing." and "Automatic export of perfusion parameter maps to DICOM images within the same study"
    Additional Features: Automated correction of contrast agent leakage, automated brain mask generation, normalization to NAWM, automated report generation, viewing of dynamic signal time course, interactive AIF selection. | Device description lists these features.
    Substantial Equivalence: Features and intended use are similar to predicate devices, and differences do not raise new safety/effectiveness questions. | "The intended use and performance characteristics for IB Neuro™ are substantially equivalent to the predicate devices" and "documentation supplied in this submission demonstrates that any difference in technological characteristics do not raise any new questions of safety or effectiveness."
    Software Validation: Compliance with FDA's software validation guidance. | "Performance testing included software validation, verification and testing per FDA's software validation guidance."

    2. Sample Size Used for the Test Set and Data Provenance

    The provided document states: "Discussion of Clinical Tests Performed: N/A". This indicates that no clinical tests, and therefore no specific test set with a defined sample size, data provenance, or ground truth, were performed for this 510(k) submission. The clearance was based on demonstrating substantial equivalence to predicate devices and non-clinical software validation.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    As no clinical tests were performed, there was no test set requiring expert-established ground truth.

    4. Adjudication Method for the Test Set

    As no clinical tests were performed, there was no test set requiring an adjudication method.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC study was conducted as indicated by "Discussion of Clinical Tests Performed: N/A". Therefore, there is no reported effect size of how much human readers improve with AI vs. without AI assistance.

    6. Standalone Performance Study (Algorithm only without human-in-the-loop performance)

    No standalone performance study of the algorithm's diagnostic accuracy metrics was conducted for this submission, as indicated by "Discussion of Clinical Tests Performed: N/A". The focus was on the software's functional capabilities and substantial equivalence.

    7. Type of Ground Truth Used

    No clinical ground truth (e.g., expert consensus, pathology, outcomes data) was used for this submission, as clinical tests were not performed.

    8. Sample Size for the Training Set

    The document does not provide information about a training set or its sample size. This is typical for submissions based on substantial equivalence and software validation, where the focus is on the software's ability to process and generate standard outputs rather than its performance against a diagnostic gold standard learned from data. The algorithms are described as "well-established."

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or implied, the method for establishing ground truth for a training set is not applicable here.

