Search Results
Found 5 results
510(k) Data Aggregation
(117 days)
BriefCase is a radiological computer-aided triage and notification software indicated for use in the analysis of head CT Angio (CTA) images. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive cases of Brain Aneurysm (BA) findings larger than 5 mm.
BriefCase uses an artificial intelligence algorithm to analyze images and flag suspect cases on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for suspect cases. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of BriefCase are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
BriefCase is a radiological computer-assisted triage and notification software device. The software system is based on an algorithm programmed component and consists of a standard off-the-shelf operating system, the Microsoft Windows server 2012 64bit, and additional applications, which include PostgreSQL, DICOM module and the BriefCase Image Processing Application. The device consists of the following three modules: (1) Aidoc Hospital Server (AHS); (2) Aidoc Cloud Server (ACS); and (3) Aidoc Worklist Application that is installed on the user's desktop and provides the user interface in which notifications from the BriefCase software are received and the worklist is presented.
DICOM images are received, saved, filtered and de-identified before processing. Filtration matches metadata fields against keywords. Series are processed chronologically by running the algorithms on each series to detect suspected cases. The software then flags suspect cases by sending notifications to the Worklist desktop application, thereby prompting triage and prioritization by the user. As the BriefCase software platform harbors several triage algorithms, the user may opt to filter out notifications by pathology; e.g., a chest radiologist may choose to filter out notifications on LVO cases, while a neuro-radiologist would opt to divert PE notifications. Where several medical centers are linked to a shared PACS, a user may read cases for one center but not another, and thus may opt to filter out notifications by center. Activating the filter does not affect the order in which notifications are presented in the Aidoc Worklist Application.
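The two filtering steps described above (metadata-keyword eligibility and per-user notification filters) can be sketched in Python. All field names, keywords, and the `Notification` shape here are illustrative assumptions, not Aidoc's actual configuration:

```python
from dataclasses import dataclass

# Hypothetical keywords that mark a series as eligible for a triage algorithm.
SERIES_KEYWORDS = {"HEAD", "CTA", "ANGIO"}

def series_is_eligible(metadata: dict) -> bool:
    """The 'filtration' step: match DICOM metadata fields against keywords."""
    text = " ".join(str(v).upper() for v in metadata.values())
    return any(kw in text for kw in SERIES_KEYWORDS)

@dataclass
class Notification:
    accession: str
    pathology: str   # e.g. "BA", "LVO", "PE"
    center: str

def visible_notifications(notifications, hidden_pathologies=frozenset(),
                          hidden_centers=frozenset()):
    """Apply a user's pathology/center filters. Filtering only hides items;
    it does not reorder the worklist, matching the behavior described above."""
    return [n for n in notifications
            if n.pathology not in hidden_pathologies
            and n.center not in hidden_centers]
```

For example, a neuro-radiologist covering only one center could hide PE notifications and notifications from other centers, while the remaining items keep their original order.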
The Worklist Application displays the pop-up text notifications of new studies with suspected findings when they come in. Notifications are in the form of a small pop-up containing patient name, accession number and the relevant pathology (e.g., BA). A list of all incoming cases with suspected findings is also displayed. Hovering over a notification or a case in the worklist pops up a compressed, small black and white, unmarked image that is captioned "not for diagnostic use" and is displayed as a preview function. This compressed preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.
Presenting the users with notification facilitates earlier triage by prompting them to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
| Acceptance Criteria (Performance Goal) | Reported Device Performance |
|---|---|
| Sensitivity > 80% | 88.5% (95% CI: 80.4%, 94.1%) |
| Specificity > 80% | 89.5% (95% CI: 84.0%, 93.7%) |
Conclusion: The device (BriefCase for Brain Aneurysm triage) met both primary acceptance criteria, exceeding the 80% performance goal for both sensitivity and specificity.
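The point estimates above are arithmetically consistent with the case counts reported under Study Details (96 positive, 172 negative): 85/96 ≈ 88.5% and 154/172 ≈ 89.5%. Those counts are inferred, not stated directly, and the submission does not say which confidence-interval method was used; the sketch below uses a Wilson score interval for illustration:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Counts inferred from the reported percentages (an assumption):
# 85/96 true positives -> 88.5% sensitivity; 154/172 true negatives -> 89.5% specificity.
sensitivity = 85 / 96
specificity = 154 / 172
sens_lo, sens_hi = wilson_ci(85, 96)
spec_lo, spec_hi = wilson_ci(154, 172)
```

Note that the Wilson intervals come out close to, but not identical to, the reported intervals; 510(k) summaries often use exact (Clopper-Pearson) intervals instead, so this is a sketch only.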
Study Details:
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: 268 cases (96 positive cases, 172 negative cases).
- Data Provenance: Retrospective, multinational study from five study sites, including 2 US-based study sites. The text does not specify the other countries of origin.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: 3 experts.
- Qualifications: Two US Board-certified radiologists, plus a third reader to resolve inconsistencies (the third reader's qualifications are not stated, but presumably also a radiologist).
4. Adjudication method for the test set:
- Adjudication Method: "Ground truthing was performed by two US Board-certified radiologists and a third one to resolve inconsistencies." This implies a 2+1 adjudication method, where two experts independently establish the ground truth, and a third expert resolves any disagreements between the first two.
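The inferred 2+1 scheme can be expressed as a small function (purely illustrative; the submission does not describe its exact process):

```python
def adjudicate_2plus1(reader_a: bool, reader_b: bool, tiebreaker) -> bool:
    """2+1 adjudication: two experts read independently; a third expert is
    consulted only on disagreement. `tiebreaker` is a callable so the third
    read is performed lazily, only when readers A and B disagree."""
    if reader_a == reader_b:
        return reader_a
    return tiebreaker()
```

In a real ground-truthing workflow the labels would of course be per-case findings rather than booleans, but the control flow is the same.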
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human readers' improvement with AI assistance was not explicitly detailed. The study focused on the standalone performance of the AI (sensitivity, specificity) and its impact on workflow efficiency (time to notification vs. time to exam open). The secondary endpoint compared time metrics, not reader diagnostic performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Standalone Performance: Yes, the primary endpoint evaluated the standalone performance of the BriefCase software in identifying head CTs containing Brain Aneurysm, reporting its sensitivity and specificity. The described study evaluates the algorithm's performance without a human in the diagnostic loop.
7. The type of ground truth used:
- Ground Truth Type: Expert consensus. The ground truth was established by two US Board-certified radiologists, with a third to resolve inconsistencies, based on reports on images with and without Brain Aneurysm findings.
8. The sample size for the training set:
- Training Set Sample Size: The text states, "No patient data were reused between the training and the pivotal datasets," and "The subject BriefCase for BA triage and the predicate device BriefCase for ICH triage (K203508 initially cleared under K180647) are identical in all aspects and differ only with respect to the training of the algorithm on BA and ICH findings, respectively." However, the specific sample size of the training set used for the BA algorithm is not provided; the text mentions only that the algorithm was trained on a "database of images."
9. How the ground truth for the training set was established:
- Training Set Ground Truth Establishment: Similar to the training set sample size, the text broadly states the device uses "Artificial intelligence Deep-learning algorithm with database of images" and that the algorithms were "trained on BA and ICH findings." However, the specific methodology for establishing the ground truth for the training set (e.g., number of experts, qualifications, adjudication method) is not explicitly detailed in the provided document.
(117 days)
See-Mode AVA (Augmented Vascular Analysis) is a stand-alone, image processing software for analysis, measurement, and reporting of DICOM-compliant vascular ultrasound images obtained from carotid and lower limb arteries. The analysis includes segmentation of vessels walls and measurement of the intima-media thickness (IMT) of the carotid artery in B-Mode images, finding velocities in Doppler images, and reading annotations on the images. The software generates a vascular ultrasound report based on the image analysis results to be reviewed and approved by a qualified clinician after performing quality control. The client software is designed to run on a standard desktop or laptop computer. See-Mode AVA is intended to be used by trained medical professionals, including but not limited to physicians and medical technicians. The software is not intended to be used as an independent source of medical advice, or to determine or recommend a course of action or treatment for patients.
See-Mode AVA (Augmented Vascular Analysis) is a standalone software for analysis and reporting of vascular ultrasound images. There is no dedicated medical equipment required for operation of this software except for an ultrasound machine that is the source of image acquisition. The software runs on a standard off-the-shelf computer and is accessible within a web browser.
See-Mode AVA takes as input DICOM-compliant vascular ultrasound images. The software uses proprietary algorithms for image analysis, including segmentation of vessel walls and measurement of the intima-media thickness (IMT) of the carotid artery in B-Mode images and finding peak systolic and end diastolic velocities (PSV and EDV) from Doppler images. The software generates a vascular ultrasound report based on the image analysis results to be reviewed and approved by a qualified clinician after performing quality control. Any information within this report must be fully reviewed and approved by a qualified clinician before the vascular ultrasound report is finalized.
See-Mode AVA is not intended to be used as an independent source of analysis and reporting of vascular ultrasound images. Any information provided by the software has to be reviewed by a qualified clinician (including sonographers, radiologists, and cardiologists) and can be modified to correct any possible mistakes. The software provides multiple methods for performing quality control and modification of image analysis results. When the vascular ultrasound report is finalized by a qualified clinician, See-Mode AVA exports the report. This report can be used adjunctly with other medical data by a physician to help in the assessment of the cardiovascular health of the patient.
Here's an analysis of the acceptance criteria and study details for the See-Mode AVA device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly list "acceptance criteria" for every task in a table format. However, it does present performance metrics that imply the criteria each function had to meet. These are extracted below, alongside the device's reported performance.
| Device Function | Implied Acceptance Criteria (Based on reported performance) | Reported Device Performance |
|---|---|---|
| Segmentation of B-mode Carotid Ultrasound Images & IMT Measurement | Strong correlation with expert measurements; outperform predicate device | IMT correlation coefficient: 0.89 (vs. average of 2 experts); outperforms predicate (reported correlation 0.6) |
| Text Recognition (Reading Annotations) | High accuracy in reading various annotation types | Accuracy: 92% to 96% (depending on annotation type) |
| Signal Processing (Reading PSV & EDV from Doppler Waveforms) | Strong correlation with clinician annotations | PSV correlation coefficient: 0.98; EDV correlation coefficient: 0.97 |
| Waveform Type Classifier (Lower Limb Doppler Images) | Strong agreement with expert annotations | Overall accuracy: 93% |
2. Sample Size Used for the Test Set and Data Provenance
- Segmentation of B-mode Carotid Ultrasound Images & IMT Measurement:
- Sample Size: 205 longitudinal B-mode carotid images.
- Data Provenance: Retrospective dataset from multiple centers. The document does not specify the country of origin.
- Text Recognition (Reading Annotations):
- Sample Size: Varied from 783 to 1432 images, depending on the type of annotation being read.
- Data Provenance: Retrospective vascular ultrasound dataset. The document does not specify the country of origin.
- Signal Processing (Reading PSV & EDV from Doppler Waveforms):
- Sample Size: 1117 images.
- Data Provenance: Images where clinicians annotated PSV and EDV values at the time of image acquisition. The document does not specify the country of origin or whether it's retrospective or prospective, though the nature of "annotations at the time of image acquisition" suggests a retrospective analysis of existing data.
- Waveform Type Classifier (Lower Limb Doppler Images):
- Sample Size: 150 images.
- Data Provenance: A collection of images representing typical use cases in the clinical field. The document does not specify the country of origin or whether it's retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Segmentation of B-mode Carotid Ultrasound Images & IMT Measurement:
- Number of Experts: 2 expert readers.
- Qualifications: Not explicitly stated beyond "expert readers."
- Text Recognition (Reading Annotations):
- Number of Experts: Not explicitly stated, implied to be based on existing annotations, likely from clinicians.
- Qualifications: Not explicitly stated.
- Signal Processing (Reading PSV & EDV from Doppler Waveforms):
- Number of Experts: Clinicians.
- Qualifications: "Clinicians at the time of image acquisition." No further details on their specific roles or experience are provided.
- Waveform Type Classifier (Lower Limb Doppler Images):
- Number of Experts: Expert readers.
- Qualifications: Not explicitly stated beyond "expert readers."
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (such as 2+1, 3+1, or none) for any of the test sets.
- For IMT measurement, it compares the algorithm to the "average of two experts," implying that their individual measurements were used, but not necessarily a consensus process or adjudication beyond averaging.
- For other tasks, it refers to "expert annotations" or "clinician annotations" without detailing how disagreements, if any, were resolved.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size How Much Human Readers Improve with AI vs. Without AI Assistance
No, an MRMC comparative effectiveness study that measures the improvement of human readers with AI assistance versus without AI assistance was not explicitly described.
The studies primarily evaluated the standalone performance of the AVA device against ground truth established by experts/clinicians or against the performance of a predicate device. While it claims the device "outperforms the reported results of the predicate device" for IMT, this is a comparison of standalone algorithm performance, not human-in-the-loop effectiveness.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, standalone (algorithm only) performance evaluations were done for all the described device functions:
- Segmentation of B-mode carotid ultrasound images and IMT measurement.
- Text recognition algorithm for reading annotations.
- Signal processing algorithm for analyzing doppler waveforms (PSV and EDV).
- Waveform type classifier on lower limb doppler images.
The results presented (correlation coefficients, accuracy) are indicative of the algorithm's direct performance.
7. The Type of Ground Truth Used
The following types of ground truth were used:
- Expert Consensus/Annotations:
- For Segmentation of B-mode Carotid Ultrasound Images & IMT Measurement, ground truth was established by "2 expert readers' measurements" (implied average).
- For Waveform Type Classifier (Lower Limb Doppler Images), ground truth was "annotations (i.e., waveform type) by expert readers."
- Clinician Annotations:
- For Signal Processing (Reading PSV & EDV from Doppler Waveforms), ground truth was "annotations (i.e. PSV and EDV values) on the images annotated by clinicians at the time of image acquisition."
- Existing Image Annotations:
- For Text Recognition (Reading Annotations), the algorithm's performance was evaluated against "reading different types of annotations," implying these annotations were present as ground truth on the images.
No pathology or outcomes data was mentioned as ground truth.
8. The Sample Size for the Training Set
The document does not provide any specific information or sample size for the training set used for the AI/ML algorithms in See-Mode AVA. It only mentions that the device "incorporates a logical update to use artificial intelligence for image analysis" and benefits from "established machine learning methods."
9. How the Ground Truth for the Training Set Was Established
Since no information about the training set's sample size or data is provided, the document does not describe how the ground truth for the training set was established.
(79 days)
Viz ICH is a notification-only, parallel workflow tool for use by hospital networks and trained clinicians to identify and communicate images of specific patients to a specialist, independent of care workflow.
Viz ICH uses an artificial intelligence algorithm to analyze images for findings suggestive of a prespecified clinical condition and to notify an appropriate medical specialist of these findings in parallel to standard of care image interpretation. Identification of suspected findings is not for diagnostic use beyond notification. Specifically, the device analyzes non-contrast CT images of the brain acquired in the acute setting, and sends notifications to a neurovascular or neurosurgical specialist that a suspected intracranial hemorrhage has been identified and recommends review of those images. Images can be previewed through a mobile application.
Images that are previewed through the mobile application may be compressed and are for informational purposes only and not intended for diagnostic use beyond notification. Notified clinicians are responsible for viewing non-compressed images on a diagnostic viewer and engaging in appropriate patient evaluation and relevant discussion with a treating physician before making care-related decisions or requests. Viz ICH is limited to analysis of imaging data and should not be used in lieu of full patient evaluation or relied upon to make or confirm diagnosis.
Viz ICH is contraindicated for analyzing non-contrast CT scans that are acquired on scanners from manufacturers other than General Electric (GE) or its subsidiaries (i.e. GE Healthcare). This contraindication applies to NCCT scans that conform to all applicable Patient Inclusion Criteria, are of adequate technical image quality, and would otherwise be expected to be analyzed by the device for a suspected ICH.
Viz ICH is a software-only, parallel workflow tool for use by hospital networks and trained clinicians to identify and communicate images of specific patients to an appropriate specialist, such as a neurovascular specialist or neurosurgeon, independent of the standard of care workflow. The system automatically receives and analyzes non-contrast CT (NCCT) studies of patients for image features that indicate the presence of an intracranial hemorrhage (ICH) using an artificial intelligence algorithm, and upon detection of a suspected ICH, sends a notification so as to alert a specialist clinician of the case.
Viz ICH consists of backend and mobile application component software. The Backend software includes a DICOM router and backend server. The DICOM router transmits NCCT images of the head acquired on a local healthcare network to the Backend Server. The Backend Server receives, stores, processes and serves received NCCT scans. The Backend Server also includes an artificial intelligence algorithm that analyzes the received NCCT images for image characteristics that indicate an intracranial haemorrhage (ICH) and, upon detection, sends a notification of the suspected finding to pre-determined specialists.
The Viz ICH Mobile Application software receives notifications generated by the Backend of suspected image findings and allows the notification recipient to view the analyzed NCCT images through a non-diagnostic viewer, as well as patient information that was embedded in the image metadata. Image viewing through the mobile application is for informational purposes only and is not intended for diagnostic use.
Here's a summary of the acceptance criteria and study details for Viz ICH, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Pre-specified Performance Goal) | Reported Performance (95% CI) |
|---|---|---|
| Sensitivity | ≥ 80% | 93% (87%-97%) |
| Specificity | ≥ 80% | 90% (84%-94%) |
| AUC | Not stated as an acceptance criterion; presented as demonstrating clinical utility | 0.96 |
| Time to Alert | Not stated as an acceptance criterion; comparative data provided | 0.49 ± 0.15 minutes (device) vs. 38.2 ± 84.3 minutes (standard of care) |
2. Sample size used for the test set and the data provenance
- Sample Size: 261 non-contrast Computed Tomography (NCCT) scans (studies). Approximately equal numbers of positive (47%) and negative (53%) cases were included.
- Data Provenance: Retrospective study. Data obtained from two clinical sites in the U.S.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not explicitly stated, but "trained neuro-radiologists" were used.
- Qualifications of Experts: "Trained neuro-radiologists". Specific years of experience are not mentioned.
4. Adjudication method for the test set
- The document implies a consensus-based ground truth ("ground truth, as established by trained neuro-radiologists"). However, the specific adjudication method (e.g., 2+1, 3+1) is not detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done
- No, a multi-reader multi-case (MRMC) comparative effectiveness study with human readers was not described. The study focused on the standalone performance of the AI algorithm and a comparison of notification times.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, a standalone performance study of the image analysis algorithm was conducted. The sensitivity and specificity reported are for the algorithm only.
7. The type of ground truth used
- Expert consensus, established by "trained neuro-radiologists," in the detection of intracranial hemorrhage (ICH).
8. The sample size for the training set
- The sample size for the training set is not provided in the document. The information focuses only on the test set.
9. How the ground truth for the training set was established
- The method for establishing ground truth for the training set is not described in the provided document.
(271 days)
K180647, BriefCase
DeepCT is a notification-only, parallel workflow tool for use by hospital networks and trained clinicians to identify and communicate images of specific patients to a specialist, independent of standard of care workflow. DeepCT uses an artificial intelligence algorithm to analyze images for findings suggestive of a pre-specified clinical condition and to notify an appropriate medical specialist of these findings in parallel to standard of care image interpretation. Identification of suspected findings is not for diagnostic use beyond notification. Specifically, the device analyzes non-contrast CT images of the brain acquired in the acute setting and sends notifications to a specialist that a suspected ICH (intracranial hemorrhage) has been identified and recommends review of those images. Notified clinicians are responsible for viewing non-contrast CT images of the brain on a diagnostic viewer and engaging in appropriate patient evaluation and relevant discussion with a treating specialist before making care-related decisions or requests. DeepCT is limited to analysis of imaging data and should not be used in-lieu of full patient evaluation or relied upon to make or confirm diagnosis.
This software analyzes head computed tomography images of patients suspected of having intracranial hemorrhage and/or hematoma (hereinafter referred to as "ICH"). When ICH is present, the software provides an "ICH present" notification and sends a text message to the user.
DeepCT (Ver. 4.1.4) is a software-only device that uses two components: (1) Image Forwarding Software and (2) Image Processing and Analysis Server.
(1) The Image Forwarding Software is configured by the hospital to be used on a computer and is responsible for transmitting a copy of DICOM files from the local through a secured channel to the Image Processing and Analysis Server.
When the Image Forwarding Software receives the interpretation result from the Image Processing and Analysis Server, it shows the result on the screen. If the result is suggestive of ICH, the Image Forwarding Software sends a notification to the specialist identifying the study of interest. Beyond this notification, no other diagnostic information is generated by the software or made available to the user.
(2) The Image Processing and Analysis Server is responsible for receiving, assembling, processing, analyzing and storing DICOM images. This component includes the algorithm responsible for identifying and quantifying image characteristics that are consistent with an ICH, and transmits the result back to the Image Forwarding Software.
Here's a breakdown of the acceptance criteria and the study details for the DeepCT device, based on the provided text:
1. Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Sensitivity ≥ 80% | 93.8% (95% CI: 88.3%-96.8%) |
| Specificity ≥ 80% | 92.3% (95% CI: 86.4%-95.7%) |
| Processing Time | 30.6 seconds (95% CI: 25.8-35.4 seconds), lower than the processing time reported for the predicate Aidoc BriefCase device (exact predicate time not provided) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 260 cases
- Data Provenance: Retrospective, multicenter, multinational.
- Countries: 5 clinical sites (2 US and 3 OUS - Outside US). Specific countries are not mentioned beyond "US" and "OUS".
- Distribution: 130 cases from US sites and 130 cases from OUS sites.
- Case Balance: Approximately an equal number of positive (images with ICH) and negative (images without ICH) cases.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts used to establish ground truth for the test set or their qualifications. It only states that the study evaluated "the software's performance in identifying non-contrast CT head images containing ICH findings," implying an established ground truth, but details are absent.
4. Adjudication Method for the Test Set
The document does not specify the adjudication method used for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No. The document describes a standalone study evaluating the algorithm's performance against a pre-established ground truth. It does not mention a comparative effectiveness study involving human readers with and without AI assistance.
6. Standalone Performance Study
Yes. The study described in the "Performance Testing" section is a standalone study of the algorithm's performance. It evaluates DeepCT's sensitivity, specificity, and processing time in identifying ICH without human-in-the-loop performance measurement.
7. Type of Ground Truth Used
The document implies a ground truth based on the presence or absence of "ICH findings" in the images, but it does not explicitly state the method used to establish this ground truth (e.g., expert consensus, pathology, outcomes data).
8. Sample Size for the Training Set
Radiology records were collected from 21,603 patients who underwent head CT scans between 2007 and 2017. This dataset was used for DeepCT development and deployment. It is implied this was the training set, or at least a significant portion of it.
9. How the Ground Truth for the Training Set Was Established
The document states: "The Tri-Service General Hospital Institutional Review Board, Kaohsiung Veterans General Hospital Institutional Review Board and National Taiwan University Hospital Research Ethics Committee all approved and consented the use of the retrospective image data for DeepCT development and deployment without relevant ethical concern."
While Institutional Review Board (IRB) approval is mentioned for the use of the data, the document does not explicitly describe how the ground truth labels (i.e., presence or absence of ICH) were established for this large training dataset. It only refers to "radiology records" and the "retrospective image data." It's highly probable that these labels were derived from radiologists' interpretations in the original radiology reports, but this is not definitively stated.
(77 days)
Accipiolx is a software workflow tool designed to aid in prioritizing the clinical assessment of adult non-contrast head CT cases with features suggestive of acute intracranial hemorrhage in the acute care environment. Accipiolx analyzes cases using an artificial intelligence algorithm to identify suspected findings. It makes case-level output available to a PACS/workstation for worklist prioritization or triage.
Accipiolx is not intended to direct attention to specific portions of an image or to anomalies other than acute intracranial hemorrhage. Its results are not intended to be used on a stand-alone basis for clinical decision-making nor is it intended to rule out hemorrhage or otherwise preclude clinical assessment of CT cases.
Accipiolx is a software device designed to be installed within healthcare facility radiology networks to identify and prioritize non-contrast head CT (NCCT) scans based on algorithmically-identified findings of acute intracranial hemorrhage (aICH). The device, developed using computer vision and deep learning technologies, facilitates prioritization of CT scans containing findings of aICH. There are two main components of the software device: (1) the Accipiolx Agent and (2) the MaxQ-AI Engine. The Agent serves as an active conduit which receives head CT studies from a PACS and transfers them to the Engine. After successful processing of a case via the MaxQ-AI Engine, the Accipiolx Agent receives the Engine results and returns them to the PACS or workstation for use in worklist prioritization.
Accipiolx works in parallel to and in conjunction with the standard of care workflow. After a CT scan has been performed, a copy of the study is automatically retrieved and processed by Accipiolx. The device performs identification and classification of objects consistent with aICH, and provides a case-level indicator which facilitates prioritization of cases with potential acute hemorrhagic findings for urgent review.
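Case-level worklist prioritization of this kind can be sketched as a stable re-sort of the PACS worklist: flagged cases move to the front while arrival order is otherwise preserved. The dict fields are illustrative assumptions, not the device's actual interface:

```python
def prioritize(worklist):
    """Move algorithm-flagged cases (suspected aICH) to the front.

    Each case is a dict with an 'accession' and a boolean 'flagged'.
    Python's sort is stable, so arrival order is preserved within the
    flagged group and within the unflagged group.
    """
    return sorted(worklist, key=lambda case: not case["flagged"])

cases = [
    {"accession": "A1", "flagged": False},
    {"accession": "A2", "flagged": True},
    {"accession": "A3", "flagged": False},
    {"accession": "A4", "flagged": True},
]
ordered = prioritize(cases)
```

A stable sort matters here: it promotes urgent cases without arbitrarily shuffling the rest of the reading queue.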
Here's an analysis of the acceptance criteria and study as described in the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Predefined Performance Goals) | Reported Device Performance |
|---|---|---|
| Sensitivity | Not explicitly stated (inferred to be below 92%) | 92% (95% CI: 87.29-95.68%) |
| Specificity | Not explicitly stated (inferred to be below 86%) | 86% (95% CI: 80.18-90.81%) |
Notes: The document states that the reported results "exceeded the predefined performance goals," implying the device had to meet thresholds below the achieved values. The exact numerical acceptance criteria are not stated, so the criteria above are inferred from the "exceeded" statement.
2. Sample size used for the test set and the data provenance
- Sample Size: 360 cases
- Data Provenance:
- Country of Origin: Not explicitly stated, but collected from "over 30 US sites." This suggests the data originated from the United States.
- Retrospective/Prospective: Retrospective study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: At least two expert neuroradiologist readers.
- Qualifications of Experts: Expert neuroradiologist readers. No specific experience in years is provided.
4. Adjudication method for the test set
- Adjudication Method: Concurrence of at least two expert neuroradiologist readers. This implies a 2-reader consensus model.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- A MRMC comparative effectiveness study involving human readers with vs. without AI assistance was not done. The study described focused on the standalone performance of the AI algorithm.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, a standalone study was done. The performance testing specifically evaluated the "device sensitivity and specificity...compared to ground truth." This describes the algorithm's performance in isolation.
7. The type of ground truth used
- Type of Ground Truth: Expert consensus (established by concurrence of at least two expert neuroradiologist readers).
8. The sample size for the training set
- Sample Size for Training Set: Not explicitly stated. The text mentions "Accipiolx was developed using a training CT cases collected from multiple institutions and CT manufacturers," but it does not provide a specific number for the training set size.
9. How the ground truth for the training set was established
- How Ground Truth for Training Set was Established: Not explicitly stated. The document mentions "optimization of object and feature identification, algorithmic training and selection/optimization of thresholds," which implies a process was followed, but the method of establishing training-set ground truth is not detailed.