510(k) Data Aggregation
(74 days)
MALVERN, PA 19355
Re: K251523
Trade/Device Name: Cios Spin
Regulation Number: 21 CFR 892.1650
Interventional Fluoroscopic X-Ray System
Classification Panel: Radiology
The Cios Spin is a mobile X-ray system designed to provide X-ray imaging of the anatomical structures of patients during clinical applications. Clinical applications may include but are not limited to interventional fluoroscopic, gastro-intestinal, endoscopic, urologic, pain management, orthopedic, neurologic, vascular, cardiac, critical care, and emergency room procedures. The patient population may include pediatric patients.
The Cios Spin (VA31A) mobile fluoroscopic C-arm X-ray System is designed for the surgical environment. The Cios Spin provides comprehensive image acquisition modes to support orthopedic and vascular procedures. The system consists of two major components:
a. The C-arm, with the X-ray source on one side and the flat panel detector on the opposite side. The C-arm can be angulated in both planes and can be lifted vertically, shifted to the side, and moved forward/backward by an operator.
b. The image display station, a movable trolley housing the image processing and storage system, image display, and documentation. Both units are connected to each other with a cable.
The following modifications were made to the predicate device, the Cios Spin Mobile X-ray System, cleared under Premarket Notification K210054 on February 5, 2021. Siemens Medical Solutions USA, Inc. submits this Traditional 510(k) to request clearance for the subject device, the Cios Spin (VA31A). The following modifications are incorporated into the predicate device to create the subject device, for which Siemens is seeking 510(k) clearance:
- Software updated from VA30 to VA31A to support the following software features:
  A. Updated Retina 3D for an optional enlarged 3D volume of 25 cm x 25 cm x 16 cm
  B. Introduction of NaviLink 3D Lite
  C. Universal Navigation Interface (UNI)
  D. Updated InstantLink with Extended NXS Interface
- Updated Collimator
- Updated FLC imaging system PC with new PC hardware
- Updated AppHost PC with High Performance Graphic Card
- New Eaton UPS 5P 850i G2 as successor of the UPS 5P 850i due to obsolescence
Based on the provided FDA 510(k) clearance letter for the Siemens Cios Spin (VA31A), here's an analysis of the acceptance criteria and the study proving the device meets them:
Important Note: The provided document is a 510(k) summary, which often summarizes testing without providing granular details on study design, sample sizes, and ground truth establishment to the same extent as a full clinical study report. Therefore, some information requested (e.g., specific number of experts for ground truth, adjudication methods) may not be explicitly stated in this summary. The focus of this 510(k) is primarily on demonstrating substantial equivalence to a predicate device, especially for software and hardware modifications, rather than a de novo effectiveness study.
Acceptance Criteria and Reported Device Performance
The 510(k) summary primarily focuses on demonstrating that the modifications to the Cios Spin (VA31A) do not introduce new safety or effectiveness concerns compared to its predicate device (Cios Spin VA30) and a reference device (CIARTIC Move VB10A) that incorporates some of the new features. The acceptance criteria are implicitly tied to meeting various industry standards and demonstrating functionality and safety through non-clinical performance testing.
Table 1: Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria (Implicit/Explicit from Text) | Reported Device Performance / Evidence |
---|---|---|
Software Functionality | Software specifications met acceptance criteria as stated in test plans. | "All test results met all acceptance criteria." |
Enlarged Volume Field of View (Retina 3D) | Functionality and performance of the new 25cm x 25cm x 16cm 3D volume. | Non-clinical "Enlarged Volume Field of View" testing was conducted. The feature was cleared in the CIARTIC Move (K233748), implying its performance was previously validated. |
NaviLink 3D Lite Functions | Functionality and performance of the new navigation interface. | Part of software updates VA31A; "All test results met all acceptance criteria." |
Universal Navigation Interface (UNI) | Functionality and performance of UNI. | Part of software updates VA31A; "All test results met all acceptance criteria." UNI was present in the reference device CIARTIC Move (K233748). |
InstantLink with Extended NXS Interface | Functionality and performance of updated interface. | Part of software updates VA31A; "All test results met all acceptance criteria." |
Electrical Safety | Compliance with IEC 60601-1, IEC 60601-2-43, IEC 60601-2-54. | "The system complies with the IEC 60601-1, IEC 60601-2-43, and IEC 60601-2-54 standards for safety." |
Electromagnetic Compatibility (EMC) | Compliance with IEC 60601-1-2. | "The system complies with... the IEC 60601-1-2 standard for EMC." |
Human Factors/Usability | Device is safe and effective for intended users, uses, and environments. Human factors addressed. | "The Human Factor Usability Validation showed that Human factors are addressed in the system test according to the operator's manual and in clinical use tests with customer reports and feedback forms." |
Risk Mitigation | Identified hazards are controlled; risk analysis completed. | "The Risk analysis was completed, and risk control was implemented to mitigate identified hazards." |
Overall Safety & Effectiveness | No new issues of safety or effectiveness introduced by modifications. | "Results of all conducted testing and clinical assessments were found acceptable and do not raise any new issues of safety or effectiveness." |
Compliance with Standards/Regulations | Adherence to various 21 CFR regulations and standards (e.g., ISO 14971, IEC 62304). | Extensive list of complied standards, including 21 CFR sections 1020.30, 1020.32, and specific IEC/ISO standards mentioned in Section 9. |
Study Details Proving Device Meets Acceptance Criteria
The study described is primarily a non-clinical performance testing and software verification and validation effort rather than a traditional clinical trial.
- Sample sizes used for the test set and data provenance:
- Test Set Sample Size: Not explicitly stated as a "sample size" in the context of patients or images for performance evaluation. The testing described is "Unit, Subsystem, and System Integration testing" and "software verification and regression testing." This type of testing uses a diverse set of test cases designed to cover functionality, performance, and safety requirements. For the "Enlarged Volume Field of View," it's a non-clinical test, likely using phantoms or simulated data.
- Data Provenance: Not applicable in terms of patient data provenance for the non-clinical and software testing described. This is bench testing and software validation. Customer reports and feedback forms are mentioned for human factors, but specific details on their origin (country, etc.) are not provided. The manufacturing site is Kemnath, Germany.
- Number of experts used to establish the ground truth for the test set and qualifications of those experts:
- Not explicitly stated. For non-clinical performance and software testing, "ground truth" is typically established by engineering specifications, known correct outputs for given inputs, and compliance with industry standards. If clinical use tests involved subjective evaluation, the number and qualifications of experts are not detailed, but they are implied to be "healthcare professionals" (operators are "adequately trained").
- Adjudication method for the test set:
- Not applicable/Not explicitly stated. For software and bench testing, adjudication usually refers to a process of resolving discrepancies in ratings or measurements. Given the nature of this submission (software/hardware modifications and non-clinical testing), formal clinical adjudication methods (like 2+1, 3+1 for image reviews) are not described as part of the primary evidence. Acceptance is based on test cases meeting predefined engineering requirements.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. An MRMC study was not conducted. This 510(k) is for a mobile X-ray system with software and hardware updates, not an AI-assisted diagnostic device where evaluating human reader performance with and without AI would be relevant. The "AI" mentioned (Retina 3D, NaviLink 3D) refers to advanced imaging/navigation features, not machine learning for diagnostic interpretation.
- If a standalone (i.e., algorithm-only, without human-in-the-loop performance) study was done:
- Yes, implicitly. The "non-clinical test 'Enlarged Volume Field of View' testing" and other "Unit, Subsystem, and System Integration testing" for functionality and performance are essentially standalone tests of the device's components and software without immediate human interpretation in a diagnostic loop. The acceptance criteria for these tests refer to technical performance endpoints, not diagnostic accuracy.
- The type of ground truth used:
- Engineering Specifications and Standard Compliance: For the performance and safety testing, the "ground truth" is adherence to predefined engineering requirements (e.g., image dimensions, system response times, electrical safety limits) and compliance with national and international industry standards (e.g., IEC 60601 series, ISO 14971, NEMA PS 3.1).
- For the Human Factors Usability Validation, "customer reports and feedback forms" serve as a form of "ground truth" regarding user experience and usability.
- The sample size for the training set:
- Not applicable. This submission describes modifications to an X-ray imaging system, not the development of a machine learning algorithm that requires a separate training set. The existing software (VA30) was updated to VA31A. The "training" for the software itself would have occurred during its initial development, not for this specific 510(k) submission.
- How the ground truth for the training set was established:
- Not applicable. As above, this information is not relevant to this specific 510(k) submission, as it focuses on modifications to an existing device rather than the development of a new AI/ML algorithm requiring a training set and its associated ground truth.
(259 days)
Trade/Device Name: Vascular Navigation PAD 2.0
Classification Name: Navigation Software Vascular PAD
Regulation Number: 21 CFR 892.1650
Interventional fluoroscopic x-ray system
Product Code: OWB; LLZ
Classification Panel: Radiology
Predicate Device: K222070 – EndoNaut
The software supports image guidance by overlaying vessel anatomy onto live fluoroscopic images in order to navigate guidewires, catheters, stents and other endovascular devices.
The device is indicated for use by physicians for patients undergoing endovascular PAD interventions of the lower limbs including iliac vessels.
The device is intended to be used in adults.
There is no other demographic, ethnic or cultural limitation for patients.
The information provided by the software or system is in no way intended to substitute for, in whole or in part, the physician's judgment and analysis of the patient's condition.
The Subject Device is a standalone medical device software supporting image guidance in endovascular procedures for peripheral artery disease (PAD) in the lower limbs, including the iliac vessels. Running on a suitable platform and connected to an angiographic system, the Subject Device receives and displays the images acquired with the angiographic system as a video stream. It provides the ability to save and process single images out of that video stream and can create a vessel tree consisting of angiographic images. The saved vessel tree can then be overlaid on the live video stream to continuously localize endovascular devices with respect to the vessel anatomy.
The medical device is intended for use with compatible hardware and software and must be connected to a compatible angiographic system via video connection.
Here's a breakdown of the acceptance criteria and study information for the Vascular Navigation PAD 2.0, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for Vascular Navigation PAD 2.0
1. Table of Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Video Latency (Added) | $\le$ 250 ms | $\le$ 250 ms (for Ziehm Vision RFD 3D, Siemens Cios Spin, and combined) |
Capture Process Timespan (initiation to animation start) | $\le$ 1s | Successfully passed |
Stitching Timespan (entering stitching to calculation result) | $\le$ 10s | Successfully passed |
Roadmap/Overlay Display Timespan (manual initiation / selection / realignment to updated display) | $\le$ 10s | Successfully passed |
System Stability (Stress and Load, Anti-Virus) | No crashes, responsive application (no significant waiting periods), no significant latencies of touch interaction/animations, normal interaction possible. | Successfully passed |
Level Selection and Overlay Alignment (True-Positive Rate for suggested alignments) | Not explicitly stated as a number, but implied to be high for acceptance. | 95.71 % |
Level Selection and Overlay Alignment (Average Registration Accuracy for proposed alignments) | Not explicitly stated (but the stated "2D deviation for roadmapping $\le$ 5 mm" likely applies here as an overall accuracy goal). | 1.49 ± 2.51 mm |
Level Selection Algorithm Failures | No failures | No failures during the test |
Modality Detection (Prediction Rate in determining image modality) | Not explicitly stated ("consequently, no images were misidentified" implies 100% accuracy) | 99.25 % |
Modality Detection (Accuracy for each possible modality) | Not explicitly stated (but 100% for acceptance) | 100 % |
Roadmapping Accuracy (Overall Accuracy) | $\le$ 5 mm | 1.57 ± 0.85 mm |
Stitching Algorithm (True-Positive Rate for suggested alignments) | $\ge$ 75 % | 95 % |
Stitching Algorithm (False-Positive Rate for incorrect proposal of stitching) | $\le$ 25 % | 6.4 % |
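The accuracy and detection-rate figures in the table above are standard verification statistics. As a minimal sketch of how such metrics could be computed (all data below is hypothetical for illustration, not from the submission), the registration error is the mean ± standard deviation of 2D deviations between proposed and gold-standard positions, and the stitching true-/false-positive rates are fractions over matchable and non-matchable image pairs:

```python
import numpy as np

# Hypothetical 2D registration errors (mm): Euclidean distance between
# algorithm-proposed and gold-standard landmark positions.
proposed = np.array([[1.0, 0.5], [2.0, 1.0], [0.0, 0.2], [3.0, 2.0]])
gold = np.array([[0.8, 0.4], [1.5, 0.6], [0.1, 0.0], [2.0, 1.0]])
errors = np.linalg.norm(proposed - gold, axis=1)
print(f"registration error: {errors.mean():.2f} +/- {errors.std():.2f} mm")

# Hypothetical stitching proposals vs. human-determined matchability.
# TPR = correct proposals among matchable pairs;
# FPR = incorrect proposals among non-matchable pairs.
proposal = np.array([1, 1, 0, 1, 0, 1, 1, 0])   # algorithm proposed a stitch
matchable = np.array([1, 1, 0, 1, 0, 1, 0, 0])  # gold-standard matchability
tpr = (proposal & matchable).sum() / matchable.sum()
fpr = (proposal & ~matchable.astype(bool)).sum() / (matchable == 0).sum()
print(f"TPR = {tpr:.0%}, FPR = {fpr:.0%}")
```

This also shows why the reported 95 % TPR and 6.4 % FPR are read against different denominators: each rate is conditioned on the gold-standard label, not on the total number of pairs.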
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated as a single number.
- For Latency Tests: Data from Siemens Cios Spin and Ziehm Vision RFD 3D.
- For Level Selection and Overlay Alignment: Images acquired with Siemens Cios Spin, Ziehm Vision RFD 3D, and GE OEC Elite CFD.
- For Modality Detection: Image data from Siemens Cios Spin, GE OEC Elite CFD, Philips Zenition, and Ziehm Vision RFD 3D.
- For Roadmapping Accuracy: Image data from Siemens Cios Spin.
- For Stitching Algorithm: Image data from Philips Azurion, Siemens Cios Spin, GE OEC Elite CFD, and Ziehm Vision RFD 3D.
- Data Provenance:
- Retrospective/Prospective: Not explicitly stated for all tests. However, the Level Selection and Overlay Alignment and Roadmapping Accuracy tests mention using "cadaveric image data" which implies a controlled, likely prospective, acquisition for testing purposes rather than retrospective clinical data. Other tests reference "independent image data" or data "acquired using" specific devices, suggesting a dedicated test set acquisition.
- Country of Origin: Not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: Not explicitly stated. The document mentions "manually achieved gold standard registrations" for Level Selection and Overlay Alignment and "manually comparing achieved gold standard (GS) stitches" for the Stitching Algorithm, implying human expert involvement in establishing ground truth, but specific details on the number or qualifications of these "manual" reviewers are absent. The phrase "if a human would consider the image pairs matchable" in the stitching section further supports human-determined ground truth.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly described. The ground truth seems to be established through "manually achieved gold standard" or "manual comparison," implying a single expert or a common understanding rather than a formal adjudication process between multiple conflicting expert opinions (e.g., 2+1 or 3+1).
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
- Was it done? No. The submission focuses on standalone technical performance measures and accuracy metrics of the algorithm rather than comparing human reader performance with and without AI assistance.
6. Standalone Performance Study
- Was it done? Yes. The entire "Performance Data" section details the algorithm's performance in various standalone tests, such as latency, stress/load, level selection and overlay alignment, modality detection, roadmapping accuracy, and stitching algorithm performance. The results are quantitative metrics of the device itself.
7. Type of Ground Truth Used
- Type of Ground Truth:
- Expert Consensus / Manual Gold Standard: For Level Selection and Overlay Alignment ("manually achieved gold standard registrations") and for the Stitching Algorithm ("manually comparing achieved gold standard (GS) stitches"). This implies human experts defined the correct alignment or stitch.
- Technical Metrics: For Latency, Capture Process, Stitching Timespan, Roadmap/Overlay Display Timespan, and System Stability, the ground truth is based on objective technical measurements against defined criteria.
- True Modality: For Modality Detection, the ground truth is simply the actual modality of the image (fluoroscopy vs. angiography) as known during test data creation or acquisition.
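Because the modality-detection ground truth is simply the known acquisition mode of each image, the reported "prediction rate" reduces to plain classification accuracy, with a per-class accuracy alongside it. A short sketch with invented labels (not the submission's data):

```python
# Hypothetical modality labels: known ground truth vs. classifier prediction.
truth = ["fluoro", "angio", "fluoro", "fluoro", "angio", "angio", "fluoro", "angio"]
pred  = ["fluoro", "angio", "fluoro", "angio",  "angio", "angio", "fluoro", "angio"]

# Overall prediction rate: fraction of images whose modality was identified correctly.
accuracy = sum(t == p for t, p in zip(truth, pred)) / len(truth)

# Per-modality accuracy (recall for each class).
per_class = {
    m: sum(t == p for t, p in zip(truth, pred) if t == m)
       / sum(t == m for t in truth)
    for m in set(truth)
}
print(accuracy, per_class)
```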
8. Sample Size for the Training Set
- Sample Size: Not provided. The submission focuses solely on the performance characteristics of the tested device and its algorithms, without detailing the training data or methods used to develop those algorithms.
9. How the Ground Truth for the Training Set Was Established
- How Established: Not provided. As with the training set size, the information about the training process and ground truth for training is outside the scope of the clearance letter's performance data section.
(131 days)
Re: K250660
Trade/Device Name: LUMINOS Q.namix T
LUMINOS Q.namix R
Regulation Number: 21 CFR 892.1650
fluoroscopic x-ray system
Classification Panel: Radiology
Classification Name: System, x-ray, fluoroscopic, image-intensified, Solid State X-ray imager (Same as predicate)
Regulation Number: 892.1650 (Same as predicate)
Classification Product Code: JAA, OWB (Same as predicate)
Indications for use:
LUMINOS Q.namix T and LUMINOS Q.namix R are devices intended to visualize anatomical structures by converting an X-ray pattern into a visible image. They are multifunctional, general R/F systems suitable for routine radiography and fluoroscopy examinations, including gastrointestinal and urogenital examinations and specialist areas such as arthrography, angiography, and pediatrics.
LUMINOS Q.namix T and LUMINOS Q.namix R are not intended to be used for mammography examinations.
The LUMINOS Q.namix T is an under-table fluoroscopy system and the LUMINOS Q.namix R is an over-table fluoroscopy system. Both are multifunctional, general R/F systems suitable for routine radiography and fluoroscopy examinations, including gastrointestinal and urogenital examinations and specialist areas such as arthrography, angiography, and pediatrics. They are designed as modular systems with components such as a main fluoro table including a fixed fluoroscopy detector and X-ray tube, a ceiling suspension with X-ray tube, a Bucky wall stand, an X-ray generator, monitors, and a Bucky tray in the table, as well as portable wireless and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
This FDA 510(k) clearance letter and summary discuss the LUMINOS Q.namix T and LUMINOS Q.namix R X-ray systems. The provided documentation does not include specific acceptance criteria (e.g., numerical thresholds for image quality, diagnostic accuracy, or performance metrics) in the same way an AI/ML device often would. Instead, it relies on demonstrating substantial equivalence to predicate devices and adherence to recognized standards.
The study presented focuses primarily on image quality evaluation for the new detectors (X.fluoro and X.wi-D24) for diagnostic acceptability, rather than establishing acceptance criteria for the entire system's overall performance.
Here's an attempt to extract and present the requested information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
As explicit quantitative acceptance criteria for the overall device performance are not stated in the provided 510(k) summary, this section will reflect the available qualitative performance assessment for the new detectors. The primary "acceptance criterion" implied for the overall device is substantial equivalence to predicate devices and acceptability for diagnostic use.
Feature/Metric | Acceptance Criteria (Implied/Direct) | Reported Device Performance (LUMINOS Q.namix T/R with new detectors) |
---|---|---|
Overall Device Equivalence | Substantially equivalent to predicate devices (Luminos Agile Max, Luminos dRF Max) in indications for use, design, material, functionality, technology, and energy source. | Systems are comparable and substantially equivalent to predicate devices. Test results show comparability. |
New Detector Image Quality (X.fluoro, X.wi-D24) | Acceptable for diagnostic use in radiography & fluoroscopy. | Evaluated images and fluorography studies from different body regions were qualified for proper diagnosis by a US board-certified radiologist and by expert evaluations. |
Compliance with Standards | Compliance with relevant medical electrical safety, performance, and software standards (e.g., IEC 60601 series, ISO 14971, IEC 62304, DICOM). | The LUMINOS Q.namix T/LUMINOS Q.namix R systems were tested and comply with the listed voluntary standards. |
Risk Management | Application of risk management process (per ISO 14971). | Risk Analysis was applied. |
Software Life Cycle | Application of software life cycle processes (per IEC 62304). | IEC 62304 (Medical device software - Software life cycle processes) was applied. |
Usability | Compliance with usability engineering standards (per IEC 60601-1-6, IEC 62366-1). | IEC 60601-1-6 and IEC 62366-1 were applied. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Description: "expert evaluations" for the new detectors X.fluoro and X.wi-D24.
- Sample Size: The exact number of images or fluorography studies evaluated is not specified. The document mentions "multiple images and fluorography studies from different body regions" for the US board-certified radiologist's evaluation.
- Data Provenance:
- Countries of Origin: Germany (University Hospital Augsburg, Klinikum rechts der Isar Munich, Herz-Jesu-Krankenhaus Münster/Hiltrup) and Belgium (ZAS Jan Palfijn Hospital of Merksem).
- Retrospective or Prospective: Not explicitly stated, but clinical image quality evaluations often involve prospective data collection or a mix with retrospective cases. Given they are evaluating "new detectors" and "clinical image quality evaluation", it implies real or simulated clinical scenarios.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts:
- Initial Evaluations: Multiple "expert evaluations" (implies more than one) were conducted across the listed hospitals. The exact number of individual experts is not specified.
- Specific Evaluation: One "US board-certified radiologist" performed a dedicated clinical image quality evaluation.
- Qualifications of Experts:
- For the general "expert evaluations": Not specified beyond being "experts."
- For the specific evaluation: "US board-certified radiologist." No mention of years of experience is provided.
4. Adjudication Method for the Test Set
The document does not specify any formal adjudication method (e.g., 2+1, 3+1 consensus voting) for establishing ground truth or evaluating the image quality. The evaluations appear to be individual or group assessments leading to a conclusion of "acceptability for diagnostic use."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? The document does not describe a formal MRMC comparative effectiveness study designed to quantify the improvement of human readers with AI vs. without AI assistance.
- Effect Size of Human Reader Improvement: Therefore, no effect size is reported.
- Note: While the device includes "AI-based Auto Cropping" and "AI based Automatic collimation," the study described is an evaluation of the detectors' image quality and the overall system's substantial equivalence, not the clinical impact of these specific AI features on human reader performance.
6. Standalone Performance Study (Algorithm Only)
- The document primarily describes an evaluation of the new detectors within the LUMINOS Q.namix T/R systems and the overall system's substantial equivalence.
- While the device includes "AI-based Auto Cropping" and "AI based Automatic collimation," the document does not report on a standalone performance study specifically for these AI algorithms in isolation from the human-in-the-loop system. The AI features are listed as technological characteristics that contribute to the device's overall updated design.
7. Type of Ground Truth Used
For the detector image quality evaluation, the ground truth was based on expert assessment ("qualified for proper diagnosis"). This falls under expert consensus or expert judgment regarding diagnostic acceptability.
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set for any AI components. The focus of this 510(k) summary is on substantiating equivalence and safety/effectiveness of the entire X-ray system, not on the development of individual AI algorithms within it.
9. How the Ground Truth for the Training Set Was Established
Since no information is provided about a training set, the method for establishing its ground truth is not mentioned in the document.
(54 days)
MALVERN, PA 19355
Re: K251520
Trade/Device Name: Cios Alpha; Cios Flow
Regulation Number: 21 CFR 892.1650
Interventional Fluoroscopic X-Ray System
Classification Panel: Radiology
The Cios Alpha is a mobile X-ray system designed to provide X-ray imaging of the anatomical structures of patients during clinical applications. Clinical applications may include, but are not limited to: interventional fluoroscopic, gastro-intestinal, endoscopic, urologic, pain management, orthopedic, neurologic, vascular, cardiac, critical care, and emergency room procedures. The patient population may include pediatric patients.
The Cios Flow is a mobile X-ray system designed to provide X-ray imaging of the anatomical structures of patients during clinical applications. Clinical applications may include, but are not limited to: interventional fluoroscopic, gastro-intestinal, endoscopic, urologic, pain management, orthopedic, neurologic, vascular, cardiac, critical care, and emergency room procedures. The patient population may include pediatric patients.
The Cios Alpha and Cios Flow (VA31A) mobile fluoroscopic C-arm X-ray systems are designed for the surgical environment. The Cios Alpha and Cios Flow provide comprehensive image acquisition modes to support orthopedic and vascular procedures. Each system consists of two major components:
a) The C-arm with X-ray source on one side and the flat panel detector on the opposite side. The C-arm can be angulated in both planes and lifted vertically, shifted to the side, and moved forward/backward by an operator.
b) The second unit is the image display station with a movable trolley for the image processing and storage system, image display, and documentation. Both units are connected with a cable.
The main unit is connected to the main power outlet, and the trolley is connected to a data network.
The following modifications were made to the predicate devices Cios Alpha and Cios Flow (VA30). Siemens Medical Solutions USA, Inc. submits this Bundled Traditional 510(k) to request clearance for the subject devices Cios Alpha and Cios Flow (VA31A) with these modifications.
This 510(k) submission for the subject devices "Cios Alpha" and "Cios Flow" with software version VA31A covers the following categories of modifications relative to the predicate devices:
- Software updated from VA30 to VA31A to support the following software features:
  A. Updated InstantLink with Extended NXS Interface
- Updated Collimator
- New optional flat detector Trixell Pixium 3131SOD with IGZO (Indium Gallium Zinc Oxide) technology
- Updated FLC imaging system with new PC hardware
- Updated High Performance Graphic Card on the AppHost PC
- Updated Eaton UPS 5P 850i G2 as successor of the UPS 5P 850i due to obsolescence
- The Cios Alpha is also known as "Cios Alpha.neo"; the Cios Flow is also known as "Cios Flow.neo"
The provided 510(k) clearance letter details modifications to an existing fluoroscopic X-ray system, Cios Alpha and Cios Flow, specifically focusing on software updates and hardware changes (e.g., a new flat detector).
However, the provided text does not contain explicit acceptance criteria tables for performance metrics (such as image quality, diagnostic accuracy, sensitivity, specificity, or AUC) or the results of a statistically powered, pre-specified study proving the device meets these criteria in a comparative effectiveness setting (e.g., MRMC study).
The document primarily focuses on bench testing, software validation, and compliance with recognized standards to demonstrate the substantial equivalence of the modified device to its predicate. It states that "All test results met all acceptance criteria" for software modifications and that a "Clinical Cadaver Report" was conducted to assess the non-inferiority of a new flat panel detector's subjective image quality. This suggests acceptance criteria were established internally for these tests, but they are not detailed in the provided document.
Therefore, many of the requested details about acceptance criteria, study design, and performance metrics for clinical effectiveness are not present in this 510(k) clearance letter summary. The document's purpose is to justify substantial equivalence based on safety, hardware/software changes, and compliance with standards, rather than proving enhanced clinical effectiveness through a comparative study.
Here's an attempt to answer based on the available information, noting what is not provided:
Acceptance Criteria and Device Performance
No explicit quantitative acceptance criteria table for clinical performance (e.g., diagnostic accuracy metrics like sensitivity, specificity, AUC) is provided in the document. The document discusses "acceptance criteria" in the context of:
- Software Validation: "The testing results show that all the software specifications have met the acceptance criteria." (Page 14)
- Non-clinical Testing: "All test results met all acceptance criteria." (Page 10)
- Clinical Cadaver Report (Subjective Image Quality): The IGZO detector was considered "non-inferior (equal or better) concerning the subjective image quality for four anatomical regions that have been investigated in the ortho-trauma setting." (Page 14) This implies a qualitative acceptance criterion of non-inferiority for subjective image quality, but no numerical thresholds are given.
Since no specific performance metrics with numerical acceptance criteria are provided for clinical use, a table demonstrating reported device performance against such criteria cannot be created from this text. The document refers broadly to testing results meeting "acceptance criteria" but does not define them publicly.
Study Details Proving Device Meets Acceptance Criteria
The primary "study" mentioned for clinical relevance is a Clinical Cadaver Report.
- Sample Size and Data Provenance:
- Test Set Sample Size: Not specified for the Clinical Cadaver Report.
- Data Provenance: The study was a "Clinical Cadaver Report." This implies an experimental, non-human, pre-clinical study. The country of origin is not specified but given the manufacturing site in Germany, it's possible the testing was conducted there or at Siemens facilities elsewhere. It is inherently prospective as it's a pre-market development activity.
- Number of Experts and Qualifications:
- Number of Experts: Not specified.
- Qualifications: Not specified.
- Adjudication Method:
- Adjudication Method: Not specified. Given it was a "subjective image quality" assessment, it would likely involve multiple readers, but the method (e.g., 2+1, 3+1) is not disclosed.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Was an MRMC study done? The document does not indicate that a formal MRMC comparative effectiveness study was done to show human readers improve with AI vs. without AI assistance. The "Clinical Cadaver Report" focused on the subjective image quality of the new detector, not human performance with AI. The device described primarily appears to be an imaging system, not an AI-assisted diagnostic tool that would typically undergo MRMC studies for improved human interpretation.
- Standalone Performance:
- Was a standalone (algorithm only without human-in-the-loop performance) done? Not explicitly stated in the context of clinical performance. The "software functional, verification, and System validation testing" (Page 11) and "software validation data" (Page 14) refer to the algorithm's internal performance against specifications, not its standalone diagnostic accuracy on clinical images.
- Type of Ground Truth Used:
- Ground Truth for Clinical Cadaver Report: In the context of "subjective image quality," the "ground truth" would be the consensus assessment of the evaluating experts regarding the quality of the images generated by the new IGZO detector compared to the a-Si detector. It is not pathology or outcomes data.
- Training Set (if applicable for AI/Software components):
- Sample Size for Training Set: The document does not mention an AI component that would require a distinct "training set" in the common understanding of machine learning. The "software" referred to is control software for the X-ray system, not a diagnostic AI algorithm.
- Ground Truth for Training Set:
- How ground truth was established for training set: Not applicable, as there's no indication of machine learning model training. The software modifications are described as updates to system control, interfaces, and hardware support.
In summary: The provided 510(k) clearance letter demonstrates that the modified Cios Alpha and Cios Flow systems meet regulatory requirements for substantial equivalence, primarily through non-clinical testing, compliance with safety standards, and software validation against internal acceptance criteria. A "Clinical Cadaver Report" assessed the subjective image quality of a new detector, finding it non-inferior. However, the document does not contain the specific details of clinical performance acceptance criteria, sample sizes for such studies, or a multi-reader comparative effectiveness study as would be seen for AI-enabled diagnostic tools.
Ask a specific question about this device
(201 days)
75013 FRANCE
Re: K243884
Trade/Device Name: TAVIPILOT
Regulation Number: 21 CFR 892.1650
CTO
Date of Preparation: July 7, 2025
Trade Name: TAVIPILOT
Regulation: 21 CFR 892.1650
5684 PC Best, The Netherlands
Trade Name: HeartNavigator Release 2.0
Regulation: 21 CFR 892.1650
Classification name: Image-intensified fluoroscopic x-ray system
Classification regulation: 21 CFR 892.1650
Classification name: Image-intensified fluoroscopic x-ray system
Classification regulation: 21 CFR 892.1650
TAVIPILOT is an intra-operative software which provides real-time fluoroscopy detection, tracking and marking of the Non-Coronary Cusp and the prosthetic valve, to allow optimal guidance for precise positioning of the prosthetic valve, according to the planning phase, for TAVI/TAVR (transcatheter aortic valve implantation/replacement) procedures. The guidance provided by TAVIPILOT is not intended to substitute the cardiac surgeon's or the interventional cardiologist's judgment and analysis of the patient's condition.
The device is only intended for adults (i.e., 21 years and older).
Contra-indications:
- Patients who have already undergone a TAVI/TAVR or SAVR (surgical aortic valve replacement)
- Patients diagnosed with aortic insufficiency
- Patients for whom the main access for the TAVI/TAVR catheter is not femoral
- Patients who have a non-tricuspid native valve
- Patients who have a permanent Pacemaker implant or temporary Pacemaker within 2 cm from the aortic root, other than the pacing guidewire
- Patients who have thoracic surgical implants
- Patients who are not adults
TAVIPILOT is an intra-operative software which provides real-time fluoroscopy detection, tracking and marking of the Non-coronary Cusp and the prosthetic valve, to allow optimal guidance for precise positioning of the prosthetic valve, according to the planning phase, for TAVI/TAVR (transcatheter aortic valve implantation/replacement) procedures.
The guidance provided by the TAVIPILOT is not intended to substitute the cardiac surgeon's or the interventional cardiologist's judgment and analysis of the patient's condition.
The TAVIPILOT software tool is intended to be used in combination with FDA cleared X-ray systems to assist cardiac surgeons and interventional cardiologists with the treatment of structural heart diseases using minimal invasive interventional techniques for which TAVI/TAVR is indicated.
In addition to conventional live fluoroscopy TAVIPILOT provides the user with tools to guide the procedure using a 2D projection of the aortic root-related landmarks and transcatheter aortic valve overlayed on the 2D X-ray image data from the FDA cleared X-ray systems.
During the Live phase, the subject device offers anatomical detection and tracking of the aortic root-related landmarks and the transcatheter aortic valve, which are overlaid in 2D on the 2D fluoroscopy X-ray image data using a trained AI/ML model.
TAVIPILOT does not change or influence the TAVI procedure.
The main operating principle of TAVIPILOT consists of the following SW workflow:
- Preparing for Live task
- Live Task
Here's a breakdown of the acceptance criteria and the study proving the device meets those criteria, based on the provided FDA 510(k) clearance letter for TAVIPILOT:
Acceptance Criteria and Device Performance Study for TAVIPILOT
The TAVIPILOT device, an intra-operative software utilizing AI/ML for real-time fluoroscopy detection, tracking, and marking of the Non-Coronary Cusp (NCC) and transcatheter aortic valve (TAV), was validated to demonstrate its performance and substantial equivalence to a predicate device.
1. Table of Acceptance Criteria and Reported Device Performance
Parameter | Acceptance Criteria | Reported Device Performance |
---|---|---|
NCC Detection, Tracking, and Marking Accuracy | NCC detected, tracked, and marked within ≤ 2 mm | NCC detected, tracked, and marked ≤ 2 mm in 100% of all patients tested with statistical significance. |
TAV Detection, Tracking, and Marking Accuracy | TAV detected, tracked, and marked within ≤ 1 mm | TAV detected, tracked, and marked ≤ 1 mm in 100% of all patients tested with statistical significance. |
Accuracy in Contrasted/Non-contrasted Images | Accuracy maintained in both contrasted and non-contrasted images. | Accuracy was obtained in both contrasted and non-contrasted images. |
Comparison to Predicate Device (NCC) | Equivalent or better detection, tracking, and marking of the NCC compared to the predicate device. | TAVIPILOT has equivalent or better detection, tracking, and marking of the NCC compared to the predicate device in all patients. |
Compatibility/Interoperability with C-arm X-ray Devices | Compatible and interoperable with FDA cleared GE, Philips, and Siemens C-arm X-ray devices meeting specified requirements. | TAVIPILOT was validated for compatibility and interoperability with FDA cleared C-arm X-ray devices (using data from GE, Philips, and Siemens devices) and confirmed compatible. |
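The two distance thresholds in the table can be made concrete with a short sketch. This is purely illustrative and not from the submission: the function names, coordinates, and pixel spacing are assumptions; it simply shows how a predicted landmark position could be compared against an expert annotation in millimeters.

```python
import math

# Acceptance thresholds from the table above (mm); the rest of this
# sketch is hypothetical illustration, not the TAVIPILOT test protocol.
THRESHOLDS_MM = {"NCC": 2.0, "TAV": 1.0}

def marking_error_mm(predicted, ground_truth, pixel_spacing_mm):
    """Euclidean distance between predicted and annotated landmark positions, in mm."""
    dx = (predicted[0] - ground_truth[0]) * pixel_spacing_mm
    dy = (predicted[1] - ground_truth[1]) * pixel_spacing_mm
    return math.hypot(dx, dy)

def meets_criterion(landmark, predicted, ground_truth, pixel_spacing_mm):
    """True if the marking error is within the landmark's acceptance threshold."""
    return marking_error_mm(predicted, ground_truth, pixel_spacing_mm) <= THRESHOLDS_MM[landmark]

# Example: on a detector with 0.2 mm pixels, a 3-pixel offset is a 0.6 mm error,
# within the 1 mm TAV threshold.
error = marking_error_mm((103, 240), (100, 240), 0.2)
```

The "100% of all patients tested" claim in the table would correspond to every frame-level (or patient-level) check of this kind passing its threshold.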
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: "a representative and statistically supported number of patients, representative of both EU and US TAVI populations, including gender and ethnicity considerations." (Specific number not provided, but stated to be statistically supported and representative).
- Data Provenance: The patients/data were "representative of both EU and US TAVI populations, including gender and ethnicity considerations." This implies retrospective or prospective acquisition from these regions. The document does not explicitly state if the data was retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated, but implies a collective of experts ("Ground truth was performed by board certified experts").
- Qualifications of Experts: "board certified experts with substantial experience with relevant clinical tasks, thus ensuring quality annotations."
4. Adjudication Method for the Test Set
The adjudication method is not explicitly mentioned. It only states that "Ground truth was performed by board certified experts." This could imply single-reader, multiple-reader consensus, or other methods, but no specific method like 2+1 or 3+1 is detailed.
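For context, the "2+1" scheme named above can be sketched generically. This is a hypothetical illustration of the scheme itself, not a description of what was done for TAVIPILOT (the submission does not disclose the method used):

```python
# Generic 2+1 adjudication: two primary readers annotate each case; a third
# adjudicator decides only when the primary readers disagree.
def adjudicate_2plus1(reader1, reader2, adjudicator):
    """Return the adjudicated label for one case under a 2+1 scheme."""
    if reader1 == reader2:
        return reader1      # primary readers agree: consensus label
    return adjudicator      # disagreement: the third reader's call decides

labels = [adjudicate_2plus1(r1, r2, adj)
          for r1, r2, adj in [("NCC", "NCC", "TAV"),   # agreement case
                              ("NCC", "TAV", "TAV")]]  # adjudicated case
```

A "3+1" variant works the same way with majority vote among three primary readers before escalation.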
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Done: No. The provided document details a standalone performance test of the algorithm's accuracy against ground truth, and a comparison to a predicate device but without human readers. The study focuses on the AI's direct detection, tracking, and marking performance rather than its impact on human reader performance.
- Effect Size of Human Readers Improvement: Not applicable, as no MRMC study with human-in-the-loop was reported.
6. Standalone (Algorithm Only) Performance Study
- Standalone Study Done: Yes. The "NCC and TAV detection, tracking and marking validation testing" and "Comparison to Predicate Device testing" sections describe the algorithm's performance directly against ground truth, independent of human readers. This represents a standalone (algorithm-only) performance evaluation.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert Consensus. "Ground truth was performed by board certified experts with substantial experience with relevant clinical tasks, thus ensuring quality annotations."
8. Sample Size for the Training Set
The document does not provide information regarding the sample size used for the training set. It only mentions the use of "trained AI/ML model" and refers to the validation test set.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. It only refers to the training of the AI/ML model. However, given that "Ground truth was performed by board certified experts" for the test set, it is plausible that a similar method (expert annotation) was used for the training data, but this is not confirmed in the text.
Ask a specific question about this device
(125 days)
Re: K250587
Trade/Device Name: Orthoscan TAU Mini C-Arm
Regulation Number: 21 CFR 892.1650
Usual Names: Fluoroscopic X-Ray System, Mobile Mini C-arm, Mini C-arm
Regulation: 21 CFR 892.1650
Primary Predicate: Orthoscan TAU Mini C-Arm
510(k) Number: K213113
Regulation: 21 CFR 892.1650
Secondary Predicate: Orthoscan VERSA TAU Mini C-Arm
510(k) Number: K243452
Regulation: 21 CFR 892.1650
The Orthoscan TAU Mini C-arm X-ray system is designed to provide physicians with general fluoroscopic visualization, using pulsed or continuous fluoroscopy, of a patient including but not limited to, diagnostic, surgical, and critical emergency care procedures for patients of all ages including pediatric populations when imaging limbs/extremities, shoulders, at locations including but not limited to, hospitals, ambulatory surgery, emergency, traumatology, orthopedic, critical care, or physician office environments.
The proposed modifications to Orthoscan TAU Mini C-Arm system models 1000-0015, 1000-0016, 1000-0017 retain the same function as the predicate Orthoscan TAU Mini C-arm (K213113) and the Orthoscan VERSA Mini C-arm (K243452): a mobile fluoroscopic mini C-arm system that provides fluoroscopic images of patients of all ages during diagnostic, treatment, and surgical procedures involving anatomical regions such as, but not limited to, the extremities, limbs, shoulders, knees, and hips. The system consists of a C-arm support attached to the image workstation.
The changes to the Orthoscan TAU Mini C-Arm X-ray system models 1000-0015, 1000-0016, 1000-0017 represent a modification of our presently legally marketed devices Orthoscan TAU Mini C-Arm (K213113) and Orthoscan VERSA Mini C-arm (K243452). The proposed modifications to the predicate encompass the implementation of a LINUX based operating system upgrade from Ubuntu version 16.04 to Ubuntu version 20.04, revisions to generator printed circuit board to improve power management efficiency, implementation of an alternate generator radiation shielding material to reduce environmental impact of lead, update to wireless footswitch communication protocol, an alternate detector for Orthoscan TAU Mini C-arm model 1000-0017 and the introduction of an optional 32in. display monitor.
The proposed device replicates the features and functions of the predicate devices without impacting image clarity or dose levels.
For both the predicate TAU (K213113) and proposed device, the following are unchanged; C-arm support of flat panel detector, generator and x-ray controls, mechanical connections, balancing, locking, rotations, work-station platform, main user interface controls, touch screen interface, selectable imaging, X-ray technique control, entry of patient information, wired footswitch operation, interface connection panel and DICOM fixed wire and wireless network interfaces.
The provided FDA 510(k) clearance letter and summary for the Orthoscan TAU Mini C-Arm details a modification to an existing device rather than a new device with novel performance claims. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are primarily focused on demonstrating substantial equivalence to existing predicate devices, particularly in terms of image quality and safety, rather than establishing absolute performance metrics for a completely new clinical claim.
Here's a breakdown of the requested information based on the provided document:
Acceptance Criteria and Reported Device Performance
The core acceptance criterion for this 510(k) submission is to demonstrate substantial equivalence to the predicate devices (Orthoscan TAU Mini C-Arm K213113 and Orthoscan VERSA Mini C-arm K243452) in terms of image quality, safety, and functionality, despite the implemented modifications.
Since this is a modification to an existing fluoroscopic X-ray system, the "performance" is assessed relative to the predicate, with the aim of ensuring no degradation, and ideally, slight improvement in certain aspects. The document doesn't provide a table of precise quantitative acceptance criteria for image quality metrics (e.g., spatial resolution in lp/mm, contrast-to-noise ratio) because the primary goal was comparative equivalence, not meeting predefined numerical thresholds for a new claim.
However, the reported device performance, relative to the predicate, is implicitly stated:
Acceptance Criterion (Implicit) | Reported Device Performance (Relative to Predicate) |
---|---|
Image Quality Equivalence/Improvement | "His conclusion was that the image quality at same or similar patient dose rates will result in equivalent or slight improvement in patient care (images) for the proposed modified TAU device over the predicate device." "Image quality acquired using the proposed alternate detector was of equal or slightly improved image quality..." |
Dose Rate Equivalence | "the image quality at same or similar patient dose rates..." "maintaining or improving image at same or similar dose..." |
Safety (Radiation, Mechanical, Electrical, Cybersecurity) | "The proposed modified Orthoscan TAU Mini C-arm's potential radiation, mechanical, and electrical hazards are identified and analyzed as part of risk management and controlled by meeting the applicable CDRH 21 CFR subchapter J performance requirements, Recognized Consensus Standards, designing and manufacturing under Ziehm-Orthoscan, Inc. Quality System, and system verification and validation testing ensure the device performs to the product specifications and its intended use. The adherence to these applicable regulations and certification to Recognized Consensus Standards that apply to this product provides the assurance of device safety and effectiveness." "...cybersecurity controls are improved..." Certified compliant with the IEC 60601-1 ED 3.2 series, including IEC 60601-2-54, as well as IEC 62304:2006 + A1:2015 Medical device software – Software life cycle processes. Met all applicable sections of 21 CFR Subchapter J performance standards. Software and cybersecurity testing performed to meet requirements from FDA guidances "Content of Premarket Submissions for Device Software Functions" (2023) and "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions" (2023). |
Functionality Equivalence | "The proposed device replicates the features and functions of the predicate devices without impacting image clarity or dose levels." "For both the predicate TAU (K213113) and proposed device, the following are unchanged; C-arm support of flat panel detector, generator and x-ray controls, mechanical connections, balancing, locking, rotations, work-station platform, main user interface controls, touch screen interface, selectable imaging, X-ray technique control, entry of patient information, wired footswitch operation, interface connection panel and DICOM fixed wire and wireless network interfaces." |
Study Details:
- Sample size used for the test set and the data provenance:
- Test Set Sample Size: The document does not specify a numerical "sample size" in terms of number of unique phantoms or individual images. It states "Numerous image comparison sets were taken" and "Images collected included phantom motion that was representative of typical clinical use". For the alternate detector evaluation, "Images collected included phantom motion... These images were reviewed by a Certified Radiologist who confirmed that the image quality acquired using the proposed alternate detector was of equal or slightly improved image quality...".
- Data Provenance: The data was generated through "Non-clinical image and dose lab testing" and "bench testing". This implies controlled laboratory conditions, not patient data. Country of origin for data generation is not explicitly stated but can be inferred as likely being in the US, given the US-based company and FDA submission. The study was inherently prospective in that new images were generated for the purpose of the comparison.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: "a Radiologist" (singular) performed an assessment of individual images.
- Qualifications of Experts: "Certified Radiologist". No further details on years of experience are provided, but "Certified" implies meeting professional board certification standards.
- Adjudication method for the test set:
- The document states "a Radiologist performed an assessment of individual images arranged in groups of image sets." There is no mention of an adjudication method involving multiple readers, as only a single radiologist was used for the image quality assessment.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, an MRMC study was not done. The document explicitly states: "Orthoscan TAU Mini C-arm system did not require live human clinical studies to support substantial equivalence...". The image quality assessment was performed by a single certified radiologist using phantom images. Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported, as AI assistance is not the subject of this 510(k) (it's a hardware/OS/component modification, not an AI diagnostic tool).
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- This question is not applicable in the context of this 510(k). The device is an imaging system (C-arm), not an AI algorithm that performs standalone diagnoses. Its performance is assessed in terms of image generation quality, which is then interpreted by a human user (physician).
- The type of ground truth used:
- The "ground truth" for the image quality comparison was established by expert assessment (a Certified Radiologist's qualitative judgment) of images generated from anthropomorphic (PMMA material) phantoms and anatomical simulation phantoms. This is considered a "phantom-based" ground truth, which is a common approach for demonstrating equivalence in imaging system modifications where clinical studies are not deemed necessary.
- The sample size for the training set:
- Not applicable. The document describes modifications to an existing fluoroscopic X-ray system, including an OS upgrade and hardware changes. There is no indication of a machine learning or AI component that would require a "training set" in the conventional sense of data used to train an algorithm. The development involved risk analysis, design reviews, component testing, integration testing, performance testing, safety testing, and product use testing of the system itself.
- How the ground truth for the training set was established:
- Not applicable, as there is no "training set" for an AI algorithm in this context.
Ask a specific question about this device
(159 days)
KOREA
Re: K250010
Trade/Device Name: Extron 3; Extron 5; Extron 7
Regulation Number: 21 CFR 892.1650
fluoroscopic x-ray system
- Classification Panel: Radiology
- Classification Regulation: 21 CFR 892.1650
fluoroscopic x-ray system - Classification Panel: Radiology
- Classification Regulation: 21 CFR 892.1650
| Classification Panel | Radiology | | | | Equivalent |
| Classification Regulation | 21 CFR 892.1650 | 21 CFR 892.1650 | 21 CFR 892.1650 | 21 CFR 892.1650 | Equivalent |
| Product Code | OWB, OXO,
The EXTRON Series is a mobile fluoroscopic X-ray system with high output capacity, high thermal capacity, and a high-resolution image processing system that provides X-ray images of the patient's anatomy during surgery or treatment. The device plays an important role in emergency injury treatment, orthopedic surgery, neurosurgery, bone surgery, etc. It can save important images as records, so they can easily be searched for and transmitted to the hospital's PACS system to help the medical staff in diagnosis.
Examples of a clinical application may include: Neurosurgery, Orthopedics, Anesthesiology, Urology, Gynecology, Internal Medicine
(※ This device is not intended for mammography applications.)
The EXTRON Series are mobile fluoroscopic X-ray systems with high output capacity, high thermal capacity, and high-resolution image processing systems that provide X-ray images of the patient's anatomical structures during surgery or treatment. The device plays an important role in emergency injury treatment, orthopedic surgery, neurosurgery, bone surgery, etc. It can save important images as records, so they can easily be searched for and transmitted to the hospital's PACS system to help the medical staff in diagnosis.
The EXTRON Series are composed of a C-arm main body and a monitor cart. The C-arm main body is composed of an X-ray tube, a flat panel detector, a collimator, a generator, a touch panel, foot switch, hand switch and an XConsoleOP program, while the monitor cart is composed of a monitor, a thermal transfer printer, a mouse, a keyboard and an XConsole program.
The device operates by exposing the patient to X-ray beams; the irradiated field is adjusted by the collimator.
X-rays penetrate the human body and are detected through a two-step conversion process:
X-ray photons are first converted into light. The light is then converted into electrical signals by the sensor. The electrical charges are read out as the sensor output, digitized, and captured in memory. The captured images are processed and displayed on the monitor. The displayed images can be saved or transmitted to an external device, such as a network printer.
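The readout-and-display chain described above can be sketched in simplified form. All numbers here (ADC bit depth, voltage range, window/level values) are assumptions for illustration, not DRTECH specifications:

```python
# Illustrative sketch of the final stages of the signal chain: an analog
# sensor voltage is quantized by an ADC into a digital pixel value, which is
# then window/level-mapped to an 8-bit display value. Bit depths and ranges
# are assumed, not taken from the EXTRON documentation.

def quantize(voltage, v_max=1.0, bits=14):
    """Map an analog sensor voltage in [0, v_max] onto an integer ADC code."""
    code = int(voltage / v_max * ((1 << bits) - 1))
    return max(0, min(code, (1 << bits) - 1))  # clamp to the valid code range

def window_level(code, center, width, out_max=255):
    """Linearly map a raw code through a display window to an 8-bit value."""
    lo, hi = center - width / 2, center + width / 2
    if code <= lo:
        return 0
    if code >= hi:
        return out_max
    return round((code - lo) / (hi - lo) * out_max)
```

Narrowing the window `width` increases displayed contrast over a smaller range of raw codes, which is the usual trade-off when presenting a high-bit-depth fluoroscopy image on an 8-bit display.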
Here's a breakdown of the acceptance criteria and the study information for the DRTECH Corporation EXTRON Series, based on the provided FDA 510(k) clearance letter.
It's important to note that this document is a 510(k) summary, which often emphasizes equivalence to a predicate device rather than presenting a novel clinical study with explicit acceptance criteria for a new device's performance. The "performance" here refers to demonstrating equivalence to the predicate, primarily through non-clinical testing and image quality assessment.
Acceptance Criteria and Device Performance for DRTECH Corporation EXTRON Series
Based on the provided 510(k) summary, the device's "acceptance criteria" are implied by its demonstration of substantial equivalence to predicate devices through compliance with established international and FDA-recognized consensus standards and a comparison of technological characteristics. The study primarily relies on non-clinical performance and a qualitative assessment of image quality.
1. Table of Acceptance Criteria and Reported Device Performance
Parameter / Acceptance Criteria Category | Specific Criteria (Implied) | Reported Device Performance (EXTRON Series) | Discussion / Justification of Equivalence |
---|---|---|---|
Indications for Use | Equivalent to Predicate Devices | Equivalent | The Indications for Use are consistent with the predicate devices, covering mobile fluoroscopic X-ray imaging during surgery/treatment in various applications (Neurosurgery, Orthopedics, Anesthesiology, Urology, Gynecology, Internal Medicine), excluding mammography. |
Target Population | Equivalent to Predicate Devices | Adults and Pediatrics (similar to predicates, except neonates for one predicate) | The target population (Adults and Pediatrics) is comparable to the predicate devices. Differences regarding neonates in one predicate are noted but not deemed to raise new safety/effectiveness concerns. |
Mobile Platform | Mobile | Yes | Equivalent |
X-ray Tube Type | Safe and effective as per IEC 60601-2-28 and IEC 60601-1 series | EXTRON 3: Stationary Anode; EXTRON 5/7: Rotating Anode | "Equivalent: X-ray tubes and systems verified according to the IEC 60601-2-28 and IEC 60601-1 series meet strict international safety and performance standards. Therefore, differences in X-ray tubes do not raise new concerns regarding safety and effectiveness." |
Radiographic Mode (kV Range) | 40-120kV | 40-120kV | Equivalent |
Radiographic Mode (mA Range) | Within acceptable limits compared to predicates | EXTRON 7: Up to 150mA; EXTRON 3: Up to 100mA | "Equivalent: Alteration in the mA does not give rise to any novel concerns regarding safety and effectiveness." |
Fluoroscopic Mode (kV Range) | 40-120kV | 40-120kV | Equivalent |
Fluoroscopic Mode (mA Range) | Within acceptable limits compared to predicates | EXTRON 3: Up to 30mA; EXTRON 5: Up to 40mA; EXTRON 7: Up to 60mA | "Equivalent: Alteration in the mA does not give rise to any novel concerns regarding safety and effectiveness." |
Dimension (Immersion Depth, Free Space, Orbital Movement) | Safe and functional, comparable to predicates | Immersion Depth: 73-74cm; Free Space: 80cm; Orbital Movement: 165° | "Equivalent: Alteration in the dimension does not give rise to any novel concerns regarding safety and effectiveness. Additionally, due to the greater scope of movement, the Subject device offers a higher degree of convenience compared to the Predicate device." |
Laser Guide | Present | Yes | Equivalent |
Foot Switch | Wired and/or Wireless | Wired Foot Switch, Wireless Foot Switch | Equivalent |
Detector Pixels | Within accepted ranges for fluoroscopy, no new safety concerns | 1500x1500 to 2048x2048 pixels | "Equivalent: Alteration in the detector pixels do not give rise to any novel concerns regarding safety and effectiveness." |
DQE (Detective Quantum Efficiency) | Clinically comparable image quality to predicates | 55% @1 lp/mm (vs. 70% @0 lp/mm, 62-63% @0.5 lp/mm) | "Equivalent: Similarly, while there is a difference in DQE values, the Subject Device demonstrated clinically comparable image quality to the Predicate and Reference Devices during clinical image comparison evaluations. Thus, no novel concerns regarding safety and effectiveness are introduced." This means the functional outcome (image quality) was met, despite a numerical difference. |
MTF (Modulation Transfer Function) | Image resolution equivalent to or better than predicates | 55% @1 lp/mm (vs. 59% @1 lp/mm) | "Equivalent: Although there is a difference in MTF values, actual clinical image comparison evaluations confirmed that the Subject Device provides image resolution that is equivalent to or better than that of the Predicate and Reference Devices. Therefore, this difference does not give rise to any novel concerns regarding safety and effectiveness." This indicates the functional outcome (resolution) was met. |
Compliance with Standards | Adherence to relevant FDA-recognized consensus standards | Compliant with ISO 14971, IEC 60601 series (1, 1-2, 1-3, 1-6, 2-28, 2-43, 2-54), IEC 62366-1, IEC 62304, ANSI UL 2900-1, IEC 81001-5-1. | Demonstrated substantial equivalence through non-clinical performance in compliance with these standards. |
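As background on the MTF figures quoted in the table, an MTF curve is conventionally derived from a measured line spread function (LSF) via the Fourier transform. The sketch below is a generic illustration of that relationship, not the manufacturer's measurement procedure (formal detector characterization follows IEC 62220-1-style protocols):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """Return (spatial frequencies in lp/mm, normalized MTF) from a sampled LSF."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                       # normalize LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))              # magnitude of the Fourier transform
    mtf = mtf / mtf[0]                          # MTF(0) = 1 by definition
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf

# Example: a synthetic Gaussian-blurred LSF sampled at an assumed 0.1 mm pitch.
x = np.arange(-32, 32)
lsf = np.exp(-0.5 * (x / 2.0) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pixel_pitch_mm=0.1)
```

A quoted figure such as "55% @1 lp/mm" is then just the value of this curve read off at 1 lp/mm; the Nyquist limit of the sampling (here 5 lp/mm for a 0.1 mm pitch) caps the frequencies at which the MTF is defined.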
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify a numerical sample size for a "test set" in the traditional sense of a clinical trial. Instead, it states: "Sample clinical images using anthropomorphic phantoms representative of the indicated anatomies and populations have been taken for both the proposed devices (EXTRON 3/5/7) and the predicate devices (Veradius Unity and OEC 9900 ELITE)."
- Data Provenance: The data primarily comes from non-clinical testing using anthropomorphic phantoms. There is no mention of human subject data (clinical images from patients). The provenance of the phantoms themselves (e.g., manufacturer) or the exact location where these phantom images were acquired is not stated, but the manufacturer is based in South Korea. The study is retrospective in the sense that the comparison is made against existing predicate devices, but the image acquisition for the subject device is new. It is explicitly stated that "Clinical studies were not performed."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: The document states: "These images have been reviewed and compared by qualified clinical experts." The exact number of experts is not specified.
- Qualifications of Experts: The experts are described as "qualified clinical experts." No specific qualifications (e.g., radiologist with X years of experience, board certification) are provided in this summary.
4. Adjudication Method for the Test Set
- Adjudication Method: The document states that the phantom images "have been reviewed and compared by qualified clinical experts." It does not specify a formal adjudication method (e.g., 2+1, 3+1 consensus). It appears to be a comparative review rather than a ground truth establishment process requiring formal adjudication for diagnostic accuracy.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of Human Reader Improvement With vs. Without AI Assistance
- No MRMC study was done. This device is an X-ray imaging system, not an AI software intended to assist human readers in diagnosis. The study focused on demonstrating the image quality equivalence of the X-ray system itself. Therefore, the question about human reader improvement with/without AI assistance is not applicable.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
- Not applicable. This device is an X-ray system, not an algorithm. The "standalone performance" implicitly refers to the performance of the X-ray system in producing images, which was assessed through non-clinical tests and qualitative image comparisons with predicate devices.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the image quality comparison, the "ground truth" for the phantom images is the inherent physical properties of the anthropomorphic phantoms themselves, as imaged by both the subject and predicate devices. The "qualified clinical experts" then assessed if the images produced by the subject device were "clinically comparable" to those from the predicate devices. There is no mention of pathology, outcomes data, or a formal expert consensus to establish a diagnostic ground truth in the traditional sense, as these were phantom images, not patient images.
8. The Sample Size for the Training Set
- Not applicable. This device is an X-ray imaging hardware system, not an AI/machine learning model that requires a training set.
9. How the Ground Truth for the Training Set Was Established
- Not applicable, as there is no training set for an AI/machine learning model.
(199 days)
CHINA
Re: K243411
Trade/Device Name: Diagnostic X-ray System
Regulation Number: 21 CFR 892.1650
Regulation name: Image-intensified fluoroscopic x-ray system
Regulation Number: 21 CFR 892.1650
III Predicate Device
510(k) Number: K222080
Product Code: OWB
Classification: 21 CFR 892.1650
The Diagnostic X-ray System is intended to be used and operated by: adequately trained, qualified and authorized health care professionals who have full understanding of the safety information and emergency procedures as well as the capabilities and functions of the device. The device is used for radiological guidance and visualization during diagnostic, interventional and surgical procedures on all patients, except neonates (birth to one month), within the limits of the device. The device is to be used in health care facilities both inside and outside the operating room, sterile as well as non-sterile environment in a variety of procedures.
The Diagnostic X-ray System is a mobile (within an imaging facility) general-purpose diagnostic fluoroscopic X-ray system that uses a C-arm and digital techniques for image capture, display and manipulation and is designed to be used in a variety of general-purpose applications requiring real-time fluoroscopic imaging capabilities.
The Diagnostic X-ray System consists of an X-ray source assembly (combined type), collimator, flat-panel detector, image processing workstation, C-arm and mobile rack, and Medical Image Workstation Software.
It appears that the provided FDA 510(k) clearance letter and associated summary pertain to a Diagnostic X-ray System (Trade Name: Diagnostic X-ray System, Model: PLX119C) manufactured by Nanjing Perlove Medical Equipment Co., Ltd.
Crucially, the document explicitly states: "No clinical study is included in this submission."
Therefore, I cannot provide details about acceptance criteria derived from a clinical study, as no such study was presented for this device's 510(k) clearance.
However, I can extract the information provided regarding the non-clinical performance and testing. It's important to understand that for a general-purpose diagnostic X-ray system, substantial equivalence is often demonstrated through comparisons of technical specifications and non-clinical performance to a legally marketed predicate device, rather than a full-scale clinical trial proving "improved" diagnostic accuracy in a specific clinical context.
Here's a breakdown of the available information based on your request, with a clear note about the absence of a clinical study for demonstrating performance against acceptance criteria in a clinical setting:
1. A table of acceptance criteria and the reported device performance
Since no clinical study was performed to establish clinical performance acceptance criteria and then demonstrate the device meets them, I can only present the reported non-clinical performance and compliance with relevant standards. The "acceptance criteria" here are implied by meeting recognized standards and demonstrating comparable technical characteristics to the predicate device.
Category | Acceptance Criteria / Standard Compliance | Reported Device Performance |
---|---|---|
Electrical Safety | Compliance with IEC 60601-1-2: 2014+AMD1:2020 / EN 60601-1-2: 2015+A1: 2021 | The system complies with IEC 60601-1-2: 2014+AMD1:2020 / EN 60601-1-2: 2015+A1: 2021. |
EMC Testing | Compliance with IEC 60601-1-2: 2014+AMD1:2020 / EN 60601-1-2: 2015+A1: 2021 | The system complies with IEC 60601-1-2: 2014+AMD1:2020 / EN 60601-1-2: 2015+A1: 2021. |
X-ray Equipment | Compliance with IEC 60601-2-54: 2009+AMD2:2018 / EN 60601-2-54: 2009+A2:2019 (For X-ray equipment for radiography and radioscopy) | The system complies with IEC 60601-2-54: 2009+AMD2:2018 / EN 60601-2-54: 2009+A2:2019. |
Radiation Protection | Compliance with IEC 60601-1-3:2008+A1:2013+A2:2021 / EN 60601-1-3:2008+A1:2013+A2:2021 (For radiation protection in diagnostic X-ray equipment) | The system complies with IEC 60601-1-3:2008+A1:2013+A2:2021 / EN 60601-1-3:2008+A1:2013+A2:2021. |
Interventional X-ray | Compliance with IEC 60601-2-43:2010+A1:2017+A2:2019 / EN 60601-2-43:2010+A1:2018+A2:2020 (For X-ray equipment for interventional procedures) | The system complies with IEC 60601-2-43:2010+A1:2017+A2:2019 / EN 60601-2-43:2010+A1:2018+A2:2020. |
Detector Performance | Bench testing of imaging metrics (MTF, DQE comparable to predicate); expected to meet industry standards for diagnostic image quality. | MTF @ 1.0 lp/mm: typical 59% (predicate: typical 64%); DQE @ 0 lp/mm: typical 77% (predicate: typical 77%). Note: while MTF is slightly lower than the predicate's, it is implied to be within acceptable diagnostic limits for the intended use, and is often compensated by other factors in real-world imaging. |
Software V&V | Compliance with FDA's "Content of Premarket Submissions for Software Contained in Medical Devices" guidance; basic level documentation sufficient as failure not expected to lead to death or serious injury. | Software verification and validation testing were conducted and documentation was provided as recommended. The embedded and workstation software for this device required basic level documentation, as a failure of software function(s) would not present a hazardous situation with a probable risk of death or serious injury. |
Cybersecurity | Compliance with FDA guidance "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions" | Cybersecurity documentation was provided according to FDA guidance. |
Usability | Compliance with IEC 60601-1-6 and IEC 62366-1 | Usability validation was performed and documentation was provided, complying with IEC 60601-1-6 and IEC 62366-1. |
Biocompatibility | Compliance with ISO 10993-1:2018 "Biological Evaluation of Medical Devices − Part 1: Evaluation and Testing Within a Risk Management Process," as recognized by FDA. | Biocompatibility evaluation for the proposed device was conducted in accordance with ISO 10993-1:2018. |
Clinical Image Quality | Implied acceptance criteria: acceptable imaging performance for diagnostic/interventional guidance, comparability to predicate. | Clinical images of multiple body parts were taken using radiography and fluoroscopy, with motion, to demonstrate acceptable imaging performance of the Diagnostic X-ray System. These images were reviewed and assessed by a qualified radiologist. (The criteria for "acceptable" are not defined, but the assessment implies the images met expectations for the intended use.) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
-
Clinical Images: "Clinical images of multiple body parts were taken using radiography and fluoroscopy, with motion".
- Sample Size: Not specified (described as "multiple body parts").
- Data Provenance: Not specified, but generally, for 510(k) submissions from foreign manufacturers, testing would occur at a site capable of generating such data, potentially in the country of origin (China, in this case, for the manufacturer). The images would be prospective as they were "taken" for the purpose of demonstrating performance.
-
Bench Testing (Detector Imaging Metrics):
- Sample Size: Not specified.
- Data Provenance: Not specified, but typically conducted in a controlled laboratory environment by the manufacturer or a contracted testing facility.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Clinical Images: "reviewed and assessed by a qualified radiologist."
- Number of Experts: One ("a qualified radiologist").
- Qualifications: "qualified radiologist" – further specific experience or board certification is not detailed in the provided text.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Clinical Images: The document states the images were "reviewed and assessed by a qualified radiologist." This implies a single-reader assessment, with no mention of an adjudication process (e.g., if multiple readers disagreed). Therefore, the adjudication method was none.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with vs. without AI assistance
- No MRMC study was done. The device is a diagnostic X-ray system, not an AI-powered diagnostic aide. The submission explicitly states: "No clinical study is included in this submission." Therefore, there is no information about human reader improvement with or without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Not applicable. This device is a diagnostic X-ray system and does not appear to contain an AI algorithm for standalone diagnostic performance. The primary focus is on the image acquisition, processing, and display capabilities for human interpretation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For the "clinical images," the "ground truth" or standard of reference for "acceptable imaging performance" was the assessment by a qualified radiologist. This is a form of expert assessment (by a single expert in this case, rather than a formal consensus). There is no mention of pathology or outcomes data being used as ground truth for performance evaluation of the imager itself.
8. The sample size for the training set
- Not applicable / Not specified. This device is hardware with associated software for image acquisition and processing. There is no mention of machine learning or deep learning algorithms requiring a distinct "training set" in the context of AI. The software verification and validation are standard procedures for medical device software, not AI training.
9. How the ground truth for the training set was established
- Not applicable. As there is no mention of a traditional "training set" for an AI algorithm, the concept of establishing ground truth for it does not apply here. The software development follows general software engineering principles and V&V, not AI model training.
(248 days)
Device Name: MasteRad MiniX Mobile Digital Imaging System (Mini-X)
Regulation Number: 21 CFR 892.1650
| Classification Name | Image-intensified fluoroscopic x-ray system |
| Regulation Number | 892.1650 |
Mini-X is intended for use by qualified/trained medical professionals who fully understand the safety information, emergency procedures, and the device's capabilities and function. The device provides fluoroscopic imaging and is used for guidance and visualization during diagnostic radiography and surgical procedures of the extremities. The device will be used in healthcare facilities inside and outside the hospital, using various methods for the extremities on all patients except neonates (birth to one month) within the limits of the device. Applications can be performed with the patient sitting, standing, or lying in a prone or supine position. The system is not intended for mammography applications. (Rx Only)
The Mini-X system, a unique mobile imaging system, can acquire, process, and display fluoroscopic images. Its portability allows for easy positioning within a room and movement from room to room within a facility, facilitating on-demand fluoroscopic examinations. The system's innovative design incorporates a low-powered mono-block generator and a dynamic flat-panel detector, enabling it to be powered through a single-phase 120VAC power outlet.
The Insight Enhanced™ DRF Digital Imaging System, a cutting-edge tool for healthcare professionals, offers full control over the imaging chain. It empowers the operator to view and enhance high-definition fluoroscopy images up to 30 fps, bringing out diagnostic details that are challenging or impossible to see using conventional imaging techniques. The system's versatility is demonstrated by its ability to store images locally for short-term storage, produce hardcopy images with a laser printer, or send images over a network for longer-term storage. Its primary components, including a dynamic flat panel detector, monitors, and an image processor PC, underscore its comprehensive and advanced capabilities.
It seems there's a misunderstanding of the provided text. The document is an FDA 510(k) Clearance Letter for a mobile digital imaging system. It describes the device, its intended use, and its substantial equivalence to predicate devices based on technological similarities and differences, and outlines general testing performed (bench testing, compliance with IEC standards).
Crucially, this document does not describe:
- An AI/ML device or its performance characteristics.
- Acceptance criteria related to AI/ML performance metrics (e.g., sensitivity, specificity, AUC).
- A study proving an AI device meets acceptance criteria.
The document discusses imaging quality and radiation dose comparisons, and compliance with general medical device standards. It briefly mentions "software" and "cybersecurity" in generic terms, but not in the context of an AI algorithm or its performance.
Therefore, I cannot fulfill the request to describe the acceptance criteria and a study proving an AI device meets those criteria based on the provided text, as the text describes a standard imaging system, not an AI-powered one.
To answer your request, I would need a 510(k) submission document or a scientific paper that specifically details the clinical validation of an AI/ML medical device, including the elements you listed.
Hypothetical Example (if the document were about an AI device):
If this document were to refer to an AI device for, say, detecting fractures on Mini-X images, the relevant sections would typically be under "Non-Clinical and/or Clinical Tests Summary & Conclusions" (Page 7) or a dedicated "Performance Benchmarks" section, which are absent here for AI.
To illustrate what you're asking for, here's an example of how I would answer if the provided text contained the necessary information about an AI device:
(This is a hypothetical response, as the provided text doesn't contain this information.)
Based on the provided (hypothetical) information regarding an AI-powered diagnostic device, here's a description of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance (Hypothetical)
Performance Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Sensitivity | ≥ 90% for detecting [condition A] | 92.5% |
Specificity | ≥ 80% for detecting [condition A] | 85.1% |
AUC | ≥ 0.90 for [condition A] detection | 0.93 |
False Positive Rate | ≤ 0.05 cases/image for [normal anatomy] | 0.03 cases/image |
Inference Time | ≤ 2 seconds per image for standard processing | 1.5 seconds/image |
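For concreteness, the sensitivity and specificity figures of the kind tabulated above reduce to simple ratios over a per-case confusion matrix. A minimal sketch with toy binary labels (no relation to any study data):

```python
def confusion_counts(y_true, y_pred):
    """Tally true/false positives and negatives for binary labels (1 = condition present)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def sensitivity(tp, fn):
    # True-positive rate: fraction of condition-present cases the device flags.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: fraction of condition-absent cases correctly passed.
    return tn / (tn + fp)

# Toy example (not study data): 5 cases with the condition, 5 without.
tp, tn, fp, fn = confusion_counts([1] * 5 + [0] * 5,
                                  [1, 1, 1, 1, 0, 0, 0, 0, 0, 1])
# sensitivity = 4/5 = 0.8, specificity = 4/5 = 0.8
```

In an actual submission, these ratios would be reported with confidence intervals over the full test set.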
2. Sample Size Used for the Test Set and Data Provenance (Hypothetical)
- Test Set Sample Size: 1500 unique imaging studies.
- Data Provenance: Retrospective and prospective data collected from multiple hospitals across the United States (70% retrospective, 30% prospective). The retrospective data covered a period of 5 years (2018-2023).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications (Hypothetical)
- Number of Experts: A panel of 3 independent radiologists.
- Qualifications: All radiologists were board-certified with a minimum of 10 years of experience in diagnostic radiography, specializing in musculoskeletal imaging. One radiologist had subspecialty fellowship training in advanced imaging.
4. Adjudication Method for the Test Set (Hypothetical)
- Adjudication Method: 2+1 adjudication method was employed.
- Initially, two radiologists independently reviewed each case.
- If their interpretations agreed, that consensus was taken as the preliminary ground truth.
- If their interpretations disagreed, a third, senior radiologist served as an adjudicator and made the final decision to establish the ground truth.
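The 2+1 rule described above amounts to a small per-case decision function; a sketch assuming each read is a categorical label (the "fracture"/"normal" labels are illustrative, not from the document):

```python
def adjudicate_2plus1(reader1, reader2, adjudicator):
    """2+1 adjudication: if the two primary readers agree, their shared
    reading becomes the ground truth; if they disagree, the senior
    adjudicator's reading decides."""
    return reader1 if reader1 == reader2 else adjudicator

# Hypothetical labels: agreement between the primary readers stands,
# regardless of what the adjudicator would have said.
truth_a = adjudicate_2plus1("fracture", "fracture", "normal")  # "fracture"
# On disagreement, the adjudicator's call is final.
truth_b = adjudicate_2plus1("fracture", "normal", "normal")    # "normal"
```

One design consequence worth noting: the adjudicator only ever sees (and only ever influences) the disagreement cases, so reader agreement rates directly bound the adjudicator's workload.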
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study (Hypothetical)
- MRMC Study Done: Yes, an MRMC study was conducted to evaluate the impact of AI assistance on human reader performance.
- Effect Size: The study demonstrated a significant improvement in reader performance. Human readers, when assisted by the AI device, showed an average 15% increase in sensitivity for detecting [condition A] and a 5% reduction in reading time per case, compared to reading without AI assistance, while maintaining specificity. The estimated Area Under the Free-Response Receiver Operating Characteristic (FROC) curve, a common metric in MRMC studies, improved from 0.78 (unaided) to 0.86 (AI-aided).
6. Standalone (Algorithm Only) Performance Study (Hypothetical)
- Standalone Study Done: Yes, a standalone performance evaluation was conducted on the full test set (1500 cases) against the established ground truth.
- Standalone Performance Metrics:
- Sensitivity: 92.5%
- Specificity: 85.1%
- F1-score: 0.88
- AUC: 0.93
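The F1-score listed above is the harmonic mean of precision and recall; a minimal sketch from confusion counts (numbers illustrative only):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2 * precision * recall / (precision + recall).

    Note F1 ignores true negatives, which is why it is often reported
    alongside specificity rather than in place of it.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts (not study data): precision = recall = 0.8, so F1 = 0.8.
example = f1_score(tp=8, fp=2, fn=2)
```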
7. Type of Ground Truth Used (Hypothetical)
- Type of Ground Truth: Expert consensus, established through the 2+1 adjudication process involving three qualified radiologists. In cases where available and relevant, this was supplemented or confirmed by pathology reports or follow-up outcomes data (e.g., surgical confirmation or clinical progression documented over 6 months).
8. Sample Size for the Training Set (Hypothetical)
- Training Set Sample Size: 50,000 imaging studies, collected from a diverse patient population.
9. How Ground Truth for the Training Set Was Established (Hypothetical)
- Ground Truth Establishment for Training Set: The ground truth for the training set was primarily established through a combination of:
- Radiologist Consensus: A larger team of 10 radiologists (separate from the test set readers) annotated the training data. Each image was reviewed by at least two radiologists, with disagreements resolved by an internal consensus committee.
- Clinical Records & Reports: For a subset of cases, ground truth was derived from detailed clinical reports, electronic health records, and existing radiology reports.
- Automated Labeling (with verification): For a large portion of the normal or clearly pathological cases, a pre-existing, highly accurate internal model was used for initial labeling, which was then systematically reviewed and corrected by human annotators to ensure high fidelity. All ambiguous or complex cases were subjected to full manual review by multiple radiologists.
(89 days)
24060
ITALY
Re: K250282
Trade/Device Name: Persona C HR
Regulation Number: 21 CFR 892.1650
510(k) number: K250282
Classification Name: Image-intensified fluoroscopic x-ray system (21 CFR, Part 892.1650
510(k) number: K182086
Classification Name: Image-intensified fluoroscopic x-ray system (21 CFR, Part 892.1650
PERSONA C HR is a mobile X-ray device used for radiological guidance and visualization during diagnostic, interventional and surgical procedures.
The PERSONA C HR device can be used on all patients, except pediatric patients, within the limits of the device.
Examples of clinical applications could be Orthopedic surgery, General surgery, Cardiac procedures, Thoracic surgery, Vascular procedures, Pain therapy and Urological procedures.
PERSONA C HR is a C-arm mobile unit with flat panel detector.
It allows imaging under the following modes:
- Low Dose Fluoroscopy,
- High Quality Fluoroscopy,
- High Quality + Fluoroscopy,
- Digital radiography (Snapshot),
- Fluoroscopy in Road Mapping mode (optional),
- Fluoroscopy in DSA mode (optional).
It is provided with a 30x30 cm Flat Panel detector and a 25 kW X-ray generator.
The unit is composed of: Monitor unit, Stand with c-arm, X-ray commands, Printer (optional).
The device acquires images employing X-rays emitted by an X-ray source which can produce up to 25kW.
The unit, which is powered by a single-phase electrical supply, generates ionizing radiation (X-rays) that passes through the patient and reaches a flat-panel detector, which produces the radiological images.
An x-ray collimator, placed on the monobloc, is responsible for limiting the emission beam of ionizing radiation.
The images collected by the detector are sent to a video processor, which, after processing the information received, allows radiological images to be displayed on the display monitor.
I am sorry, but the provided text does not contain the detailed information necessary to fully address all aspects of your request regarding acceptance criteria and a study proving device performance. The document is an FDA 510(k) clearance letter and summary, which confirms substantial equivalence to a predicate device but does not typically include the granular details of performance studies you are looking for, such as specific acceptance criteria thresholds, detailed test set data, expert adjudication methods, or MRMC study results with effect sizes.
Specifically, the document states: "Substantial equivalence was supported by engineering-based performance testing, including evaluation of image quality using test phantoms and cadaver images by an 'American Board of Radiology' radiologist." However, it does not provide the specifics of this evaluation in the way you've outlined in your request.
Therefore, I cannot generate the table of acceptance criteria with reported device performance or the detailed study breakdown that you've requested beyond what is implicitly stated about image quality evaluation.
If you have a different document that contains this information (e.g., a detailed clinical study report or a more extensive validation report), I would be happy to analyze it for you.