510(k) Data Aggregation
(22 days)
XIDF-AWS801, Angio Workstation (Alphenix Workstation), V9.5
The Angio Workstation (XIDF-AWS801) is used in combination with an interventional angiography system (Alphenix series systems, Infinix-i series systems and INFX series systems) to provide 2D and 3D imaging of selective catheter angiography procedures for the whole body (includes heart, chest, abdomen, brain and extremity).
When XIDF-AWS801 is combined with Dose Tracking System (DTS), DTS is used with selective catheter angiography procedures for the heart, chest, abdomen, pelvis and brain.
The XIDF-AWS801, Angio Workstation (Alphenix Workstation), V9.5 is used for the input of images from a Diagnostic Imaging System and Workstation, image processing, and display. The processed images can be output to a Diagnostic Imaging System and Workstation.
Please note that the provided text is a 510(k) summary for a medical device (Angio Workstation) and primarily focuses on demonstrating substantial equivalence to a predicate device due to minor software changes. It does not contain a detailed clinical study demonstrating the performance of an AI algorithm against specific acceptance criteria in the way a new, high-risk AI device submission typically would.
The only AI-related change mentioned is the improvement to the "Dynamic Device Stabilizer (DDS) software with deep learning" to "improve the detection rate of the stent marker." The document states that "Testing was conducted to verify the fixed display performance was improved in V9.5 algorithm compared to V9.3 algorithm for Dynamic Device Stabilizer (DDS)..." However, it does not specify the acceptance criteria, sample size, or ground truth establishment for this test in the way requested for an AI model's performance. It implies that the test was a "bench test" that verified "fixed display performance," which is more aligned with system-level verification than with the diagnostic performance of an AI algorithm.
Therefore, I cannot fully answer your request based on the provided text, as it lacks the detailed AI study information you've asked for.
However, I can extract the limited information present and highlight what is missing.
Acceptance Criteria and Device Performance (Limited Information)
The document mentions that the DDS software with deep learning was changed to "improve the detection rate of the stent marker" and that "Testing was conducted to verify the fixed display performance was improved in V9.5 algorithm compared to V9.3 algorithm."
Missing Information:
- Specific quantitative acceptance criteria for "detection rate of the stent marker" (e.g., minimum sensitivity, precision, F1-score).
- The actual reported performance metric for the stent marker detection by the V9.5 algorithm.
- The definition of "fixed display performance" in quantitative terms and its relationship to the deep learning component's performance.
Given the limited information, a table of acceptance criteria and reported device performance cannot be fully constructed as requested. The document only broadly states "performance was improved" without quantitative metrics.
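For context, quantitative acceptance criteria for a detection task are normally expressed as pre-specified thresholds on metrics such as sensitivity, precision, or F1-score, which this summary does not report. The sketch below is purely illustrative and is not drawn from the submission; the threshold values and counts are hypothetical and serve only to show the kind of quantitative reporting that is missing.

```python
# Illustrative only: thresholds and counts are hypothetical, not values
# from the 510(k) submission.

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute standard detection metrics from true-positive, false-positive,
    and false-negative counts."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # a.k.a. recall / detection rate
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)
    return {"sensitivity": sensitivity, "precision": precision, "f1": f1}

# Hypothetical acceptance criteria of the kind the summary does not state.
ACCEPTANCE = {"sensitivity": 0.95, "precision": 0.90}

metrics = detection_metrics(tp=190, fp=12, fn=10)  # made-up counts
passed = all(metrics[k] >= threshold for k, threshold in ACCEPTANCE.items())
print(metrics, "PASS" if passed else "FAIL")
```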
Study Details (Based on Available Information)
As the document only mentions an improvement to an existing feature (DDS with deep learning for stent marker detection) and treats it as a "modification of a cleared device" under a Special 510(k), it does not describe a full standalone clinical validation study for a novel AI device. The information below extracts what can be inferred or is explicitly stated, and highlights significant gaps.
- A table of acceptance criteria and the reported device performance:

  | Feature/Metric Targeted by AI | Acceptance Criteria (Stated/Inferred) | Reported Device Performance (Stated/Inferred) |
  |---|---|---|
  | Stent Marker Detection Rate | "Improve the detection rate" | "performance was improved" (V9.5 vs V9.3) |
  | "Fixed Display Performance" | Improved | "was improved" (V9.5 vs V9.3) |

  Note: These are qualitative statements of improvement, not specific quantitative acceptance criteria or performance metrics as typically seen for AI device validation.
- Sample size used for the test set and the data provenance:
  - Sample Size: Not specified. The document states "Testing was conducted to verify the fixed display performance..." The nature of this "testing" is unclear in terms of data samples.
  - Data Provenance: Not specified (e.g., country of origin).
  - Retrospective or Prospective: Not specified, but "bench testing" usually implies retrospective analysis of existing data.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Number of Experts: Not specified.
  - Qualifications of Experts: Not specified.
  - Method of Ground Truth Establishment: Unclear. For "stent marker detection," ground truth would typically involve manual annotation by expert radiologists or cardiologists. This is not described.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  - Adjudication Method: Not specified.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - MRMC Study: Not mentioned or implied. The focus is on the performance of the software modification itself, not on human-in-the-loop performance.
  - Effect Size: Not applicable, as no MRMC study is mentioned.
- If a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done:
  - Standalone Performance: The description "To improve the detection rate of the stent marker" and "Testing was conducted to verify the fixed display performance was improved" suggests that some form of standalone evaluation of the algorithm's output was done. However, specific metrics (e.g., sensitivity, specificity, F1-score for detection) are not provided. The term "fixed display performance" might refer to the accuracy of the algorithm's output as displayed. (See the illustrative sketch after this item for what such an evaluation typically involves.)
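To illustrate what a standalone bench evaluation of stent-marker detection could involve (the document describes none of these details), predicted marker locations are typically matched to expert-annotated ground-truth locations within a distance tolerance before counting detections. The following is a minimal sketch under that assumption; the function name, tolerance, and coordinates are hypothetical.

```python
# Hypothetical sketch of standalone marker-detection scoring; nothing here
# is taken from the submission.
import math

def match_markers(predicted, ground_truth, tol_px=5.0):
    """Greedily match predicted (x, y) points to ground-truth points within
    tol_px pixels and return (tp, fp, fn) counts."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for px, py in predicted:
        for i, (gx, gy) in enumerate(unmatched_gt):
            if math.hypot(px - gx, py - gy) <= tol_px:
                unmatched_gt.pop(i)
                tp += 1
                break
    fp = len(predicted) - tp
    fn = len(unmatched_gt)
    return tp, fp, fn

# Made-up example frame: two annotated markers, two detections.
tp, fp, fn = match_markers(predicted=[(101, 52), (203, 79)],
                           ground_truth=[(100, 50), (205, 78)])
detection_rate = tp / (tp + fn)  # 1.0 in this toy example
print(tp, fp, fn, detection_rate)
```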
- The type of ground truth used:
  - Type of Ground Truth: Not specified. For "stent marker detection," it would typically be expert annotation of stent markers on imaging data.
- The sample size for the training set:
  - Training Set Sample Size: Not specified. The document refers to "deep learning" but provides no details on its training.
- How the ground truth for the training set was established:
  - Training Set Ground Truth Establishment: Not specified.
Summary of Gaps:
The document is a regulatory submission focused on demonstrating substantial equivalence for a minor software upgrade. It does not provide the level of detail typically found in a clinical study report for a new or significantly modified AI/ML-based medical device. Specifically, for the deep learning component, there is a lack of quantitative acceptance criteria, specific performance metrics, detailed information on test and training datasets (size, provenance), and the methodology for establishing ground truth and expert involvement. The "testing" mentioned appears to be more focused on system-level performance verification ("fixed display performance") rather than a rigorous diagnostic performance study of the AI algorithm.