510(k) Data Aggregation (137 days)
AI Metrics is a software solution intended to be used for viewing, manipulation, storage, annotation, analysis, and comparison of medical images from multiple imaging modalities and/or multiple time points. The application supports images and anatomical datasets, such as CT and MR. AI Metrics is a software-only medical device deployed via internet software download and installed by trained AI Metrics technicians.
AI Metrics enables visualization of information that would otherwise have to be visually compared disjointedly. AI Metrics provides analytical and workflow automation tools to help the user assess and document the extent of a disease and/or the response to therapy in accordance with user-selected standards and assess changes in imaging findings over multiple time points. AI Metrics supports the interpretation, evaluation, and follow-up documentation of findings within healthcare institutions, for example in Radiology, Oncology, and other Medical Imaging environments.
The product is intended to be used as a workflow automation tool by trained medical professionals. It is intended to provide images and related information that are interpreted by a trained professional; it does not directly generate any diagnosis or potential findings.
Note: The medical professional retains the ultimate responsibility for making the pertinent diagnosis based on their standard practices. AI Metrics is a complement to these standard procedures. AI Metrics is not to be used in mammography.
AI Metrics is a software-based Picture Archiving and Communication System (PACS) used with general-purpose computing hardware for the display and visualization of medical image data. It runs on either a native or a virtualized Linux platform. The application supports images and anatomical datasets, such as CT and MR.
AI Metrics is designed as a workflow automation application with analytical tools to help the user assess, categorize, and document the extent of a disease and/or tumor response to therapy in accordance with user-selected standards (e.g., RECIST 1.1) and assess changes in imaging findings over multiple time points.
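Purely as an illustration of the kind of criteria-driven calculation such a tool automates, the sketch below classifies target-lesion response from sums of lesion diameters using the published RECIST 1.1 thresholds. It is a generic example, not the AI Metrics implementation; the function name and its inputs are hypothetical, and non-target lesions and new lesions are ignored for brevity.

```python
def recist_1_1_response(baseline_sum_mm: float,
                        nadir_sum_mm: float,
                        current_sum_mm: float,
                        all_target_lesions_gone: bool) -> str:
    """Classify target-lesion response per the public RECIST 1.1 thresholds.

    baseline_sum_mm: sum of target-lesion diameters at baseline
    nadir_sum_mm:    smallest sum recorded at any prior time point
    current_sum_mm:  sum at the current time point

    Simplified sketch: non-target lesions and new lesions are not considered.
    """
    if all_target_lesions_gone:
        return "CR"  # complete response: all target lesions disappeared
    # Progressive disease: >=20% increase over the nadir AND >=5 mm absolute increase
    if (current_sum_mm - nadir_sum_mm) >= 5.0 and \
       nadir_sum_mm > 0 and (current_sum_mm - nadir_sum_mm) / nadir_sum_mm >= 0.20:
        return "PD"
    # Partial response: >=30% decrease from the baseline sum
    if baseline_sum_mm > 0 and (baseline_sum_mm - current_sum_mm) / baseline_sum_mm >= 0.30:
        return "PR"
    return "SD"  # stable disease otherwise

# Example: baseline 84 mm, nadir 55 mm, current 52 mm -> "PR" (38% decrease from baseline)
```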
AI Metrics supports the interpretation and evaluation of examinations and follow-up documentation of findings within healthcare institutions, for example in Radiology, Oncology, and other Medical Imaging environments.
AI Metrics functionality provides for communication, storage, processing, rendering, and display of DICOM-compliant image data derived from various sources, including anatomical datasets (e.g., CT, MRI); navigation through images; selection of regions of interest; generation of information from those regions; evaluation in accordance with user-selected standards; and generation of a structured report.
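The submission does not describe the device's DICOM handling beyond the summary above. As a hedged illustration of what reading DICOM-compliant image data involves at the file level, the following sketch loads a CT series with the open-source pydicom library and stacks it into a volume ordered by slice position; the directory layout and sorting convention are assumptions, not details from the submission.

```python
from pathlib import Path

import numpy as np
import pydicom


def load_ct_series(series_dir: str) -> np.ndarray:
    """Read every DICOM file in a directory and stack the slices into a 3-D volume.

    Slices are ordered by the z component of ImagePositionPatient, a common
    convention for axial CT; a production viewer would also group files by
    SeriesInstanceUID and apply RescaleSlope/RescaleIntercept.
    """
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    return np.stack([ds.pixel_array for ds in slices])
```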
The user controls these functions with a system of interactive menus and semi-automated or manual workflow automation tools, including:
- Manual annotation tools for users to select regions of interest (ROIs),
- Semi-automatic lesion segmentation suggestions for user-selected ROIs,
- Automatic measurement and display of the long and short axes of segmented lesions (a minimal sketch follows this list),
- Automatic tabulation and summation of measurements,
- Semi-automatic lesion labelling suggestions for anatomical location (organ, body region, and laterality),
- Automatic calculation of quantitative and qualitative metrics using annotation data in accordance with the selected criteria, and
- Automatic generation of a structured report that includes the annotation data and calculated quantitative and qualitative metrics presented in a graph, table, key images, and structured text report.
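To make the long/short-axis measurement step concrete, here is a minimal sketch of how both axes could be derived from a 2-D lesion segmentation mask using scikit-image region properties and the slice's in-plane pixel spacing. It is an illustrative approach only, not the algorithm used by AI Metrics, and the mask and spacing inputs are assumed.

```python
import numpy as np
from skimage.measure import label, regionprops


def lesion_axes_mm(mask: np.ndarray, pixel_spacing_mm: float) -> tuple[float, float]:
    """Return (long_axis_mm, short_axis_mm) for the largest connected lesion in a
    2-D binary mask, using the major/minor axes of the fitted ellipse."""
    regions = regionprops(label(mask.astype(np.uint8)))
    if not regions:
        raise ValueError("mask contains no lesion pixels")
    lesion = max(regions, key=lambda r: r.area)  # keep the largest connected component
    long_axis = lesion.major_axis_length * pixel_spacing_mm
    short_axis = lesion.minor_axis_length * pixel_spacing_mm
    return long_axis, short_axis
```

Summing the resulting long-axis diameters across target lesions is the kind of tabulation that would feed a criteria calculation such as the RECIST 1.1 sketch shown earlier.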
The provided document does not contain specific acceptance criteria or study details with quantitative performance metrics for the AI Metrics device. The document primarily focuses on demonstrating substantial equivalence to a predicate device (mint Lesion) based on intended use and technological characteristics, as required for a 510(k) submission.
However, based on the information provided, we can extract the following:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or device performance in a table format. Instead, it states that the device "passed all required tests" against established Software Design Specifications, and "demonstrated an overall acceptable performance." This implies that the acceptance criteria were met by successfully passing these tests, but the specific metrics are not disclosed in this document.
| Acceptance Criteria (Implied) | Reported Device Performance (Summary) |
|---|---|
| Met all requirements per FDA Guidance. | Passed all required tests against Software Design Specifications. |
| Risks identified and mitigated in compliance with ISO 14971. | Risk Management Report completed; all hazards determined to be acceptable. |
| Outputs of software design activity meet specified requirements. | Results found acceptable to support substantial equivalence claim. |
| Overall acceptable performance. | Demonstrated overall acceptable performance. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size used for any test set or the data provenance (e.g., country of origin, retrospective or prospective nature of data). It only mentions "extensive verification and validation testing."
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not provide information on the number of experts used to establish ground truth or their qualifications. The "Adjudication method" is also not specified.
4. Adjudication Method
The document does not specify any adjudication method.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not mention any MRMC comparative effectiveness study, nor does it provide an effect size of human reader improvement with AI assistance. The focus is on the device's standalone functionality as a workflow automation tool rather than reader performance improvement.
6. Standalone Performance Study (Algorithm Only)
The document primarily describes the AI Metrics device as a "workflow automation tool" that provides "analytical and workflow automation tools" and "does not directly generate any diagnosis or potential findings." It states that the information is "interpreted by a trained professional."
While the document mentions "semi-automatic lesion segmentation suggestions" and "automatic measurement," it does not present specific quantitative standalone algorithm performance metrics (e.g., sensitivity, specificity, accuracy for lesion detection or segmentation compared against a ground truth). It describes the testing against "Software Design Specifications" but does not detail the nature of these tests in terms of standalone algorithmic performance against clinical ground truth.
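For context only, the kind of standalone segmentation metric such a study would typically report can be computed as in the sketch below: a Dice overlap between a predicted lesion mask and an expert-drawn ground-truth mask. This is a generic illustration of a common evaluation metric, not something reported in or derived from the submission.

```python
import numpy as np


def dice_coefficient(pred_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement).

    Dice = 2 * |A intersect B| / (|A| + |B|); both inputs are treated as boolean arrays.
    """
    pred = pred_mask.astype(bool)
    truth = truth_mask.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```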
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used for testing. Given the features like "lesion segmentation suggestions" and "automatic measurement," one might infer that ground truth related to lesion boundaries and measurements would be relevant, but this is not confirmed or detailed. The overall ground truth for the system's performance is implied to be its adherence to "Software Design Specifications."
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size for the training set.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established, as it does not mention a training set or machine learning components in a way that would require this detail. While AI Metrics uses "semi-automatic lesion segmentation suggestions," the specifics of how these suggestions are developed and validated (e.g., using labelled training data) are not detailed in this submission summary.