Search Results
Found 7 results
510(k) Data Aggregation
(104 days)
This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace a radiographic film/screen system in general-purpose diagnostic procedures. This device is not indicated for use in mammography, fluoroscopy, or angiography applications.
The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.
- This device is not intended for use in mammography
- This device is also used for carrying out exposures on children.
The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a Basic Documentation Level. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal/scattered radiation correction (Intelligent-Grid). Intelligent-Grid was cleared in K151465.
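For readers unfamiliar with the listed processing steps, here is a minimal, hypothetical sketch (in Python, not Konica Minolta's implementation) of two of them: gradation processing as a tone-curve lookup table, and dynamic range compression as attenuation of the large-area (low-frequency) component of the image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradation(image, gamma=0.6):
    """Gradation (tone-curve) step: map raw 16-bit values through a LUT."""
    lut = ((np.arange(65536) / 65535.0) ** gamma * 65535.0).astype(np.uint16)
    return lut[image]  # image is expected to be a uint16 array

def dynamic_range_compression(image, sigma=50.0, strength=0.5):
    """Attenuate large-area brightness swings so dense and lucent regions
    both fit the display range while local detail is preserved."""
    img = image.astype(np.float64)
    low = gaussian_filter(img, sigma)            # large-area brightness map
    out = img - strength * (low - low.mean())    # compress the global swings
    return np.clip(out, 0, 65535).astype(np.uint16)
```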
The FPDs used in the SKR 3000 can communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC compliant). WPA2-PSK (AES) encryption is adopted to secure the wireless connection.
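As background on the cited wireless security scheme (standard WPA2-PSK behavior, not anything device-specific): the 256-bit pre-shared key is derived from the passphrase and SSID with PBKDF2-HMAC-SHA1 at 4096 iterations, which can be reproduced with Python's standard library. The SSID and passphrase below are placeholders.

```python
import hashlib

ssid = b"AeroDR-AP"                  # placeholder network name
passphrase = b"example-passphrase"   # placeholder secret

# IEEE 802.11i: PSK = PBKDF2(HMAC-SHA1, passphrase, SSID, 4096 iterations, 256 bits)
psk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
print(psk.hex())
```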
The SKR 3000 is distributed under the commercial name AeroDR 3.
The purpose of the current premarket submission is to add pediatric use indications for the SKR 3000 imaging system.
The provided FDA 510(k) clearance letter and summary for the SKR 3000 device focus on adding a pediatric use indication. However, they do not contain the detailed performance data, acceptance criteria, or study specifics typically found in a clinical study report. The document states that "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'" and that "pediatric image evaluation using small-size phantoms was performed on the P-53." It also mentions that "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use."
Based on the information provided, it's not possible to fully detail the acceptance criteria and the study that proves the device meets them according to your requested format. The document implies that the "acceptance criteria" likely revolved around demonstrating "substantially equivalent image performance" to a predicate device (AeroDR System 2 with P-52) for pediatric use, primarily through phantom studies, rather than a clinical study with human patients and detailed diagnostic performance metrics.
Therefore, many of the requested fields cannot be filled directly from the provided text. I will provide the information that can be inferred or directly stated from the document and explicitly state when information is not available.
Disclaimer: The information below is based solely on the provided 510(k) clearance letter and summary. For a comprehensive understanding, one would typically need access to the full 510(k) submission, which includes the detailed performance data and study reports.
Acceptance Criteria and Device Performance Study for SKR 3000 (Pediatric Use Indication)
The primary objective of the study mentioned in the 510(k) summary was to demonstrate substantial equivalence for the SKR 3000 (specifically with detector P-53) for pediatric use, compared to a predicate device (AeroDR System 2 with P-52).
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of the submission (adding a pediatric indication based on substantial equivalence), the acceptance criteria are not explicitly quantifiable metrics like sensitivity/specificity for a specific condition. Instead, the focus was on demonstrating "substantially equivalent image performance" through phantom studies.
Acceptance Criteria (Inferred from Document) | Reported Device Performance (Inferred/Stated) |
---|---|
Image quality of SKR 3000 with P-53 for pediatric applications to be "substantially equivalent" to predicate device (AeroDR System 2 with P-52). | "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use." |
Compliance with "Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices" for pediatric image evaluation using small-size phantoms. | "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'. Pediatric image evaluation using small-size phantoms was performed on the P-53." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Test Set): Not specified. The document indicates "small-size phantoms" were used, implying a phantom study, not a human clinical trial. The number of phantom images or specific phantom configurations is not detailed.
- Data Provenance: Not specified. Given it's a phantom study, geographical origin is less relevant than for patient data. It's an internal study conducted to support the 510(k) submission. Retrospective or prospective status is not applicable as it's a phantom study.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified. Given this was a phantom study, ground truth would likely be based on physical measurements of the phantoms and expected image quality metrics, rather than expert interpretation of pathology or disease. If human evaluation was part of the "comparative image evaluation," the number and qualifications of evaluators are not provided.
- Qualifications: Not specified.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified. For a phantom study demonstrating "substantially equivalent image performance," adjudication methods like 2+1 or 3+1 (common in clinical reader studies) are generally not applicable. The comparison would likely involve quantitative metrics from the generated images.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- MRMC Study: No. The document states "comparative image evaluation" and "pediatric image evaluation using small-size phantoms." This strongly implies a technical performance assessment using phantoms, rather than a clinical MRMC study with human readers interpreting patient cases. Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported, as AI assistance in image interpretation (e.g., CAD) is not the focus of this submission; it's about the imaging system's ability to produce quality images for diagnosis.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
- Standalone Performance: Not applicable in the traditional sense of an AI algorithm's diagnostic performance. The device is an X-ray imaging system. The "performance" being evaluated is its ability to generate images, not to provide an automated diagnosis. The "Intelligent-Grid" feature mentioned is an image processing algorithm (scattered radiation correction), but its standalone diagnostic performance is not the subject of this specific submission; its prior clearance (K151465) is referenced.
7. The Type of Ground Truth Used
- Ground Truth Type: For the pediatric image evaluation, the ground truth was based on phantom characteristics and expected image quality metrics. This is inferred from the statement "pediatric image evaluation using small-size phantoms was performed."
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable. The SKR 3000 is an X-ray imaging system, not an AI model that requires a "training set" in the machine learning sense for its primary function of image acquisition. While image processing algorithms (like Intelligent-Grid) integrated into the system might have been developed using training data, the submission focuses on the imaging system's performance for pediatric use.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as no training set (in the context of an AI model's image interpretation learning) is explicitly mentioned or relevant for the scope of this 510(k) submission for an X-ray system.
(24 days)
The SKR 3000 is indicated for use in generating radiographic images of human anatomy. It is intended to replace a radiographic film/screen system in general-purpose diagnostic procedures.
The SKR 3000 is not indicated for use in mammography, fluoroscopy and angiography applications.
The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.
- This device is not intended for use in mammography
- This device is also used for carrying out exposures on children.
The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a Moderate Level of Concern. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal/scattered radiation correction (Intelligent-Grid). Intelligent-Grid was cleared in K151465. The CS-7 has been modified to support wireless serial radiography.
The SKR 3000 is distributed under the commercial name AeroDR 3.
This submission introduces wireless serial radiography into the SKR 3000 system. The wireless serial radiography function of the P-65 / P-75 used with the Phoenix was cleared under K221803. These detectors are wireless, and their serial radiography functions are not controlled by the x-ray generator; hence, no detector integration testing is necessary.
The provided text is a 510(k) Summary for the Konica Minolta SKR 3000 device, which is a digital radiography system. This document focuses on demonstrating substantial equivalence to a predicate device (K213908), rather than presenting a detailed study proving the device meets specific acceptance criteria with performance metrics, sample sizes, expert involvement, or statistical analysis.
The document states that the changes made to the SKR 3000 (specifically the addition of wireless serial radiography for P-65 and P-75 detectors) did not require clinical studies. Therefore, the information requested about a study demonstrating the device meets acceptance criteria regarding clinical performance is not available in this filing. The "Performance Data" section primarily addresses compliance with electrical safety and EMC standards.
However, based on the information provided, here's what can be extracted and inferred regarding "acceptance criteria" in the context of this 510(k) submission:
1. Table of Acceptance Criteria and Reported Device Performance
Given that no clinical study specific to this submission's modifications is presented, the "acceptance criteria" here relate to general regulatory and technical compliance rather than clinical performance metrics (e.g., sensitivity, specificity for a particular pathology). The "reported device performance" is essentially a statement of compliance.
Acceptance Criteria Category | Reported Device Performance |
---|---|
Safety and Effectiveness | "The technological differences raised no new issues of safety or effectiveness as compared to its predicate device (K213908)." |
Performance to Specifications | "Performance tests demonstrate that the SKR 3000 performs according to specifications and functions as intended." |
Compliance with Standards | "The SKR 3000 is designed to comply with the following standard; AAMI/ANSI ES 60601-1 (Ed.3.1) and IEC 60601-1-2 (Ed.4.0)." (General electrical safety and electromagnetic compatibility standards are met.) |
Risk Analysis | "The verification and validation including the items required by the risk analysis for the SKR 3000 were performed and the results demonstrated that the predetermined acceptance criteria were met. The results of risk management did not require clinical studies to demonstrate the substantial equivalency of the subject device modifications." |
Functional Equivalence (Wireless Radiography) | The submission implies that the newly added wireless serial radiography functions (P-65 / P-75) are functionally equivalent to the wired serial radiography functions of the predicate device, especially since "no detector integration testing is necessary" because "their serial radiography functions are not being controlled by the x-ray generator." |
Note: This table reflects the nature of a 510(k) submission focused on substantial equivalence rather than a clinical performance study.
Here's the breakdown for the other requested information, based on the limitations of the provided document:
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not provided. The document states that "The results of risk management did not require clinical studies to demonstrate the substantial equivalency of the subject device modifications." This indicates that no clinical "test set" with patient data was used for this specific submission. The performance assessment was based on non-clinical testing (e.g., engineering verification, validation testing to internal specifications and regulatory standards).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. As no clinical study or test set with patient data was conducted or analyzed, there were no experts establishing ground truth for performance metrics like diagnostic accuracy.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. No clinical test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- Not applicable. This submission is for a digital radiography system, not an AI-powered diagnostic aide. No MRMC study was performed or is relevant to this submission.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This device is an imaging system, not an algorithm for standalone diagnosis. The "performance tests" mentioned are related to the hardware and software functionality of the imaging system itself (e.g., image quality specifications, electrical safety, EMC), not an algorithm's diagnostic performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not applicable. For the purpose of this 510(k) filing for device modifications, the "ground truth" for performance was implicitly defined by the compliance with engineering specifications, safety standards (AAMI/ANSI ES 60601-1, IEC 60601-1-2), and the functional equivalence to the predicate device. No clinical ground truth (e.g., pathology, outcomes) was established for this submission.
8. The sample size for the training set
- Not applicable. This device is not an AI/ML algorithm that requires a "training set" in the sense of patient data for learning.
9. How the ground truth for the training set was established
- Not applicable. No training set for an AI/ML algorithm.
(26 days)
This is a digital mobile diagnostic x-ray system intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography.
This is a modified version of our previous predicate mobile PHOENIX. The predicate PHOENIX mobile is interfaced with Konica Minolta Digital X-ray panels and CS-7 or Ultra software image acquisition. PHOENIX mobile systems will be marketed in the USA by KONICA MINOLTA. Models with the CS-7 software will be marketed as AeroDR TX m01. Models with the Ultra software will be marketed as mKDR Xpress. The modification adds two new models of compatible Konica Minolta digital panels, the AeroDR P-65 and AeroDR P-75, cleared in K210619. These newly compatible models are capable of a mode called DDR, Dynamic Digital Radiography, wherein a series of radiographic exposures can be rapidly acquired at up to 15 frames per second (300 frames maximum).
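For orientation, the stated DDR maxima are mutually consistent: at the top frame rate, a full 300-frame acquisition lasts

$$
t_{\max} = \frac{300\ \text{frames}}{15\ \text{frames/s}} = 20\ \text{s}.
$$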
The provided text describes a 510(k) premarket notification for a mobile x-ray system. The document focuses on demonstrating substantial equivalence to a legally marketed predicate device rather than presenting a study to prove the device meets specific performance-based acceptance criteria for an AI/algorithm.
Therefore, many of the requested details, such as specific acceptance criteria for algorithm performance, sample sizes for test sets, expert ground truth establishment, MRMC studies, or standalone algorithm performance, are not applicable or not present in this type of submission.
The essence of this submission is that the entire mobile x-ray system, including its components (generator, panels, software), is deemed safe and effective because it is substantially equivalent to a previously cleared device, with only minor modifications (adding two new compatible digital panels and enabling a DDR function in the software, which is stated to be "unchanged firmware" and "moderate level of concern").
Here's an attempt to address your questions based on the provided text, while acknowledging that many of them pertain to AI/algorithm performance studies, which are not the focus of this 510(k):
1. A table of acceptance criteria and the reported device performance
The document does not specify performance-based acceptance criteria for an AI/algorithm. Instead, it demonstrates substantial equivalence to a predicate device by comparing technical specifications. The "acceptance criteria" in this context are implicitly met if the new device's specifications (kW rating, kV range, mA range, collimator, power source, panel interface, image area sizes, pixel sizes, resolutions, MTF, DQE) are equivalent to or improve upon the predicate, and it remains compliant with relevant international standards.
Characteristic | Predicate: K212291 PHOENIX | PHOENIX/AeroDR TX m01 and PHOENIX/mKDR Xpress. | Acceptance Criterion (Implicit) | Reported Performance |
---|---|---|---|---|
Indications for Use | Digital mobile diagnostic x-ray for adults/pediatrics, skull, spine, chest, abdomen, extremities. Not for mammography. | SAME | Must be identical to predicate. | SAME (Identical) |
Configuration | Mobile System with digital x-ray panel and image acquisition computer | SAME | Must be identical to predicate. | SAME (Identical) |
X-ray Generator(s) | kW: 20, 32, 40, 50 kW; kV: 40-150 kV (1 kV steps); mA: 10-650 mA | SAME | Must be identical to predicate. | SAME (Identical) |
Collimator | Ralco R108F | SAME | Must be identical to predicate. | SAME (Identical) |
Meets US Performance Standard | YES 21 CFR 1020.30 | SAME | Must meet this standard. | YES (Identical) |
Power Source | Universal, 100-240 V~, 1 phase, 1.2 kVA | SAME | Must be identical to predicate. | SAME (Identical) |
Software | Konica-Minolta CS-7 or Ultra | CS-7 and Ultra modified for DDR mode | Functions must be equivalent/improved; DDR enabled. | CS-7 and Ultra modified for DDR mode |
Panel Interface | Ethernet or Wi-Fi wireless | SAME | Must be identical to predicate. | SAME (Identical) |
Image Area Sizes (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared. |
Pixel Sizes (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared. |
Resolutions (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared. |
MTF (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Performance must be equivalent or better. | P-65 (Non-binning) 0.62, (2x2 binning) 0.58; P-75 (Non-binning) 0.62, (2x2 binning) 0.58 |
DQE (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Performance must be equivalent or better. | P-65 0.56 @ 1 lp/mm; P-75 0.56 @ 1 lp/mm |
Compliance Standards | N/A | IEC 60601-1, -1-2, -1-3, -2-54, -2-28, -1-6, IEC 62304 | Must meet relevant international safety standards. | Meets all listed IEC standards. |
Diagnostic Quality Images | N/A | Produced diagnostic quality images as good as predicate | Must produce images of equivalent diagnostic quality. | Verified |
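For context on the MTF and DQE rows above, these are the standard solid-state detector figures of merit (as defined, e.g., in IEC 62220-1), not submission-specific quantities:

$$
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)} \;=\; \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)},
$$

where $\mathrm{MTF}(f)$ is the modulation transfer function (signal transfer versus spatial frequency $f$), $\mathrm{NNPS}(f)$ is the normalized noise power spectrum, and $\bar{q}$ is the incident photon fluence per unit area.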
2. Sample size used for the test set and the data provenance
No specific test set or data provenance (country, retrospective/prospective) is mentioned for AI/algorithm performance. The "testing" involved "bench and non-clinical tests" to verify proper system operation and safety, and that the modified combination of components produced diagnostic quality images "as good as our predicate generator/panel combination." This implies physical testing of the device rather than a dataset for algorithm evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. There was no specific test set requiring expert-established ground truth for an AI/algorithm evaluation. The determination of "diagnostic quality images" likely involved internal assessment by qualified personnel within the manufacturer's testing process.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. No adjudication method is described as there was no formal expert-read test set for algorithm performance.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No. An MRMC study was not conducted as this submission is not about an AI-assisted diagnostic tool designed to improve human reader performance. It is for a mobile x-ray system.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
No. This submission is for a medical device (mobile x-ray system), not a standalone AI algorithm. The software components (CS-7 and Ultra) are part of the image acquisition process, and the only software "modification" mentioned is enabling the DDR function, which is a feature of the new panels, not an AI for diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable. The substantial equivalence argument relies on comparing technical specifications and demonstrating that the physical device produces images of "diagnostic quality" equivalent to the predicate, rather than an AI producing diagnostic outputs against a specific ground truth.
8. The sample size for the training set
Not applicable. This is not an AI/ML algorithm submission requiring a training set. The software components are for image acquisition and processing, not for AI model training.
9. How the ground truth for the training set was established
Not applicable, as no training set for an AI/ML algorithm was used or mentioned.
(48 days)
The SKR 3000 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen system in general-purpose diagnostic procedures.
The SKR 3000 is not indicated for use in mammography, fluoroscopy and angiography applications.
The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.
- This device is not intended for use in mammography
- This device is also used for carrying out exposures on children.
The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal process/scattered radiation correction (Intelligent-Grid). Intelligent-Grid was cleared in K151465.
This submission is to add new flat-panel x-ray detectors (FPDs), P-82 and P-85, to the SKR 3000. The P-82 and P-85 employ the same surface material infused with silver ions (antibacterial properties) as the predicate device. The only difference between the P-82 and P-85 is the number of Li-ion capacitors: the P-85 has two and the P-82 has one. The new P-82 and P-85 do not support serial radiography, which acquires multiple frames of radiographic images in sequence.
The FPDs used in the SKR 3000 can communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC compliant). WPA2-PSK (AES) encryption is adopted to secure the wireless connection.
The SKR 3000 is distributed under the commercial name AeroDR 3.
The provided text describes the Konica Minolta SKR 3000, a digital radiography system, and seeks 510(k) clearance by demonstrating substantial equivalence to a predicate device (K210619), which is also an SKR 3000 model. The submission focuses on adding new flat-panel x-ray detectors (FPDs), P-82 and P-85, to the existing system.
Here's an analysis of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document implicitly defines "acceptance criteria" by comparing the specifications and performance of the subject device (SKR 3000 with P-82/P-85 FPDs) against its predicate device (SKR 3000 with P-65 FPD). The acceptance criteria are essentially the performance levels of the predicate device, which the new FPDs must meet or exceed.
Feature / Performance Metric | Acceptance Criteria (Predicate P-65) | Reported Device Performance (Subject P-82/P-85) | Meets Criteria? |
---|---|---|---|
Indications for Use | Same as Subject | Generates radiographic images of human anatomy, replaces film/screen in general diagnostic procedures, not for mammography, fluoroscopy, angiography. | Yes |
Detection method | Indirect conversion method | Indirect conversion method | Yes |
Scintillator | CsI (Cesium Iodide) | CsI (Cesium Iodide) | Yes |
TFT sensor substrate | Glass-based TFT substrate | Film-based TFT substrate | N/A (difference accepted, no new safety/effectiveness issues) |
Image area size | P-65: 348.8×425.6mm (3,488×4,256 pixels) | P-82/P-85: 348.8×425.6mm (3,488×4,256 pixels) | Yes |
Pixel size | 100 µm / 200 µm / 400 µm | 100 µm / 200 µm | Yes (smaller range still includes acceptable sizes) |
A/D conversion | 16 bit (65,536 gradients) | 16 bit (65,536 gradients) | Yes |
Max. Resolution | P-65: 4.0 lp/mm | P-82/P-85: 4.0 lp/mm | Yes |
MTF (1.0 lp/mm) | (Non-binning) 0.62, (2x2 binning) 0.58 | (Non-binning) 0.62, (2x2 binning) 0.58 | Yes |
DQE (1.0 lp/mm) | 56% @ 1mR | 59% @ 1mR | Yes (exceeds) |
External dimensions | P-65: 384(W)×460(D)×15(H)mm | P-82/P-85: 384(W)×460(D)×15(H)mm | Yes |
IP Code (IEC 60529) | IPX6 | IP56 | N/A (minor difference, presumed acceptable) |
Battery Type | Lithium-ion capacitor | Lithium-ion capacitor | Yes |
Number of batteries | P-65: Two | P-82: One, P-85: Two | N/A (difference in configuration, performance evaluated) |
Battery duration in standby | P-65: Approx. 13.2 hours | P-82: Approx. 6.0 hours, P-85: Approx. 13.2 hours | Yes (P-85 meets, P-82 is different but acceptable for its configuration) |
Surface Material | Surface infused with Silver ions (antibacterial properties) | Surface infused with Silver ions (antibacterial properties) | Yes |
Communication I/F | Wired and Wireless | Wired and Wireless | Yes |
Operator console (Software) | CS-7, AeroDR3 interface for P-65 (CTDS) | CS-7, AeroDR3 interface for P-82 and P-85 (CTDS) | Yes |
Image Processing | Same complex image processing algorithms | Same complex image processing algorithms | Yes |
Serial radiography | Applicable | Not applicable | N/A (difference in feature, not an "acceptance criterion" in this context as new FPDs don't support it) |
Note: The acceptance criteria are largely implied by the claim of substantial equivalence. The document primarily focuses on demonstrating that new FPDs (P-82 and P-85) either match or improve upon the predicate's performance for critical imaging parameters. Differences in the TFT substrate material, pixel size options, number of batteries, and serial radiography capability are noted but explained as not raising new safety or effectiveness concerns.
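One quick sanity check on the battery rows above, under the rough assumption that standby duration scales with the number of Li-ion capacitors: halving the capacitor count (P-85 to P-82) predicts about $13.2\ \text{h}/2 = 6.6\ \text{h}$, in line with the reported approx. 6.0 hours for the P-82.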
2. Sample size used for the test set and the data provenance:
The document states: "The performance tests according to the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices' and the other verification and validation including the items required by the risk analysis for the SKR 3000 were performed and the results demonstrated that the predetermined acceptance criteria were met."
This indicates that specific performance tests were conducted. However, the document does not explicitly state the sample size used for the test sets (e.g., number of images, number of phantom studies, number of human subjects, if any) nor the data provenance (e.g., country of origin, retrospective or prospective nature of clinical data if used). Given the type of device (X-ray system component) and the nature of the submission (adding new FPDs to an existing cleared system), the "performance data" presented is primarily technical specifications and phantom-based measurements, not typically large-scale clinical trials with human subjects.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
The document does not mention the use of experts to establish ground truth. As this is a technical performance comparison of imaging hardware (FPDs), ground truth would likely be established through objective physical measurements and established technical standards (e.g., imaging phantoms, dosimeters) rather than expert human interpretation of medical images for diagnostic accuracy.
4. Adjudication method for the test set:
Since there is no mention of human experts or clinical image interpretation studies, there is no adjudication method described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
The document does not mention an MRMC study, nor does it refer to AI or AI-assisted improvements for human readers. This device is a digital radiography system (hardware), and the submission focuses on its technical performance compared to a predicate, not on AI algorithms or their impact on reader performance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
The device described is an X-ray imaging system, not an algorithm. Therefore, a standalone algorithm-only performance study is not applicable in this context. The performance evaluated is that of the hardware components (FPDs) within the system.
7. The type of ground truth used:
The ground truth for the performance parameters (e.g., Max. Resolution, MTF, DQE) would be established through objective physical measurements using standardized phantoms and test procedures as per industry standards (e.g., "Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices"). For other specifications like battery life or dimensions, ground truth is based on engineering measurements and design specifications.
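As an illustration of how such phantom-based metrics are obtained, below is a toy version of the edge method for MTF (a real measurement would follow the oversampled slanted-edge procedure of IEC 62220-1; the synthetic edge profile and 100 µm pixel pitch here are assumptions).

```python
import numpy as np

def mtf_from_edge_profile(esf, pixel_pitch_mm=0.1):
    """Estimate MTF from a 1-D edge spread function (ESF):
    differentiate to the line spread function (LSF), then take |FFT|."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf /= lsf.sum()                                      # so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles/mm
    return freqs, mtf

# synthetic blurred edge as a stand-in for a phantom measurement
x = np.linspace(-5, 5, 512)
esf = 1.0 / (1.0 + np.exp(-x / 0.3))
freqs, mtf = mtf_from_edge_profile(esf)
print(f"MTF at ~1 cycle/mm: {mtf[np.argmin(np.abs(freqs - 1.0))]:.2f}")
```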
8. The sample size for the training set:
The document does not refer to a training set. This is because the submission is for hardware components (FPDs) for an X-ray system, not for a machine learning or AI-based diagnostic algorithm that would require training data.
9. How the ground truth for the training set was established:
As there is no mention of a training set, there is no information on how its ground truth would be established.
(54 days)
This is a digital mobile diagnostic x-ray system intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography.
This is a new type of our previous predicate mobile PhoeniX. The predicate PhoeniX mobile is interfaced with Canon Digital X-ray panels and Canon control software CXDI-NE. The new PhoeniX mobile is interfaced with Konica Minolta Digital X-ray panels and CS-7 or Ultra software image acquisition. PhoeniX mobile systems will be marketed in the USA by KONICA MINOLTA. Models with the CS-7 software will be marketed as AeroDR Tran-X; models with the Ultra software will be marketed as mKDR II. The compatible digital receptor panels are the same for either model. The CS-7 software was cleared under K151465/K172793, while the Ultra software is new. The CS-7 is a DIRECT DIGITIZER used with an image diagnosis device, medical imaging device, and image storage device connected via the network. This device digitally processes patient images collected by the medical imaging device to provide image and patient information. By contrast, the Ultra-DR software is designed as an exam-based modality image acquisition tool. Ultra-DR software and its accompanying Universal Acquisition Interface (UAI) were developed to be acquisition-device independent. Basic features of the software include Modality Worklist Management (MWM) / Modality Worklist (MWL) support, DICOM Send, CD Burn, DICOM Print, and Exam Procedure Mapping. Ultra software is designed to increase patient throughput while minimizing data input errors. Its main components are the Worklist, Acquisition Interface, and Configuration Utility, which combine to create a stable, powerful, and customizable image capture system. The intuitive graphical user interface is designed to improve radiologic technologist accuracy and image quality. Worklist and Exam screens were developed to allow site-specific customizations to seamlessly integrate into existing practice workflows.
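As an illustration of the Modality Worklist support described above, here is a minimal, hypothetical sketch of an MWL C-FIND query using the open-source pynetdicom library; the RIS address, port, and AE title are invented, and nothing here is taken from the Ultra software itself.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind

ae = AE(ae_title="ULTRA_SCU")                      # hypothetical AE title
ae.add_requested_context(ModalityWorklistInformationFind)

# Build the worklist query: all patients with a DX procedure on a given date.
query = Dataset()
query.PatientName = "*"
sps = Dataset()
sps.Modality = "DX"                                # digital radiography
sps.ScheduledProcedureStepStartDate = "20240101"
query.ScheduledProcedureStepSequence = [sps]

assoc = ae.associate("ris.example.org", 104)       # placeholder RIS address
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, ModalityWorklistInformationFind):
        if status and status.Status in (0xFF00, 0xFF01):   # "pending" = a match
            print(identifier.PatientName)
    assoc.release()
```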
Here's an analysis of the acceptance criteria and study information for the PHOENIX Digital Mobile Diagnostic X-Ray System, based on the provided text.
Based on the provided document, the PHOENIX device is a digital mobile diagnostic x-ray system, and the submission is for a modification to an existing cleared device (K192011 PHOENIX). The "study" described is primarily non-clinical bench testing to demonstrate that the modified system, with new digital flat-panel detectors (AeroDR series) and new acquisition software (Ultra), is as safe and effective as the predicate device. No clinical study information is provided in this document.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" in a quantitative, measurable sense for the overall device performance. Instead, it focuses on demonstrating substantial equivalence to a predicate device. The comparison is primarily in the form of feature similarity and compliance with international standards for safety and electrical performance.
Characteristic | Predicate (K192011 PHOENIX) | PHOENIX (Proposed) | Comparison of Performance |
---|---|---|---|
Indications for Use | Intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography. | SAME (includes device description as requested by FDA) | Met: Indications for use are identical, signifying no change in intended clinical application. |
Configuration | Mobile System with digital x-ray panel and image acquisition computer | SAME | Met: Basic physical configuration remains unchanged. |
X-ray Generator(s) | kW rating: 20 kW, 32 kW, 40 kW and 50 kW. kV range: from 40 kV to 150 kV in 1 kV steps. mA range: from 10 mA to 630 mA / 640 mA / 650 mA. | SAME | Met: The X-ray generator specifications are identical, ensuring consistent radiation output characteristics. |
Collimator | Ralco R108F | Ralco R108F | Met: The collimator model is identical, ensuring consistent radiation field shaping. |
Meets US Performance Standard | YES 21 CFR 1020.30 | SAME | Met: Compliance with the US Performance Standard for diagnostic X-ray systems is maintained. |
Power Source | Universal power supply, from 100 V~ to 240 V~. 1 phase, 1.2 kVA | SAME | Met: Power supply specifications are identical. |
Software | Canon control software CXDI-NE | Konica-Minolta control software CS-7 (K151465 or K172793) OR Konica-Minolta control software Ultra. | Met (by validation): New software (Ultra) validated according to FDA Guidance. CS-7 was previously reviewed. This is a key change, and compliance is asserted through specific software validation. |
Panel Interface | Ethernet or Wi-Fi wireless | SAME | Met: Interface method is unchanged. |
Image Area Sizes (Detectors) | CANON CXDI-401C 16"x 17", CXDI-701C 14" x 17", CXDI-801C 11" x 14", CXDI-710C 14" x 17", CXDI-810C 14" x 11", CXDI-410C 17" x 17" | AeroDR P-51 14" x 17", AeroDR P-52 14" x 17", AeroDR P-61 14" x 17", AeroDR P-71 17" x 17", AeroDR P-81 10" x 12". (Similar range of sizes, all previously cleared) | Met (by equivalence): The new detectors offer a "similar range of sizes" and are all "previously cleared" by FDA. This implies their performance characteristics within those sizes are acceptable. |
Pixel Sizes (Detectors) | CANON CXDI (all 125 µm) | AeroDR P-51 175 µm, AeroDR P-52 175 µm, AeroDR P-61 100/200 µm, AeroDR P-71 100/200 µm. | Met (by equivalence): The new pixel sizes are different but are associated with previously cleared detectors, implying their diagnostic utility is acceptable. Specific performance comparison (e.g., to predicate's pixel size) isn't given for diagnostic equivalence, but rather for detector equivalence. |
Resolutions (Detectors) | CANON CXDI (various, e.g., CXDI-401C 3320 × 3408 pixels) | AeroDR P-51 1994 × 2430 pixels, AeroDR P-52 1994 × 2430 pixels, AeroDR P-61 3488 × 4256 pixels, AeroDR P-71 4248 × 4248 pixels, AeroDR P-81 2456 × 2968 pixels. | Met (by equivalence): Similar to pixel size, specific resolutions differ but are for previously cleared detectors. Diagnostic equivalence is asserted by the prior clearance of the detectors themselves. |
MTF (Detectors) | CANON CXDI (all 0.35 @ 2cy/mm) | AeroDR P-51 0.30 @ 2cy/mm, AeroDR P-52 0.30 @ 2cy/mm, AeroDR P-61 0.30 @ 2cy/mm, AeroDR P-71 0.30 @ 2cy/mm, AeroDR P-81 0.30 @ 2cy/mm. | Met (by equivalence): The new detectors have slightly lower MTF values at 2cy/mm, but these are for previously cleared detectors, implying acceptable image quality for diagnostic use. |
DQE (Detectors) | CANON CXDI (all 0.6 @ 0 lp/mm) | AeroDR P-51 0.62 @ 0 lp/mm, AeroDR P-52 0.62 @ 0 lp/mm, AeroDR P-61 0.56 @ 1 lp/mm, AeroDR P-71 0.56 @ 1 lp/mm, AeroDR P-81 0.56 @ 1 lp/mm. | Met (by equivalence): DQE values differ but are for previously cleared detectors, suggesting acceptable performance. Some are higher, some are slightly lower (e.g., P61/P71/P81 at 1 lp/mm vs. predicate at 0 lp/mm). The key is the "previously cleared" status. |
Compliance with Standards | N/A (implied by predicate clearance) | IEC 60601-1:2005+A1:2012, IEC 60601-1-2:2014, IEC 60601-1-3:2008+A1:2013, IEC 60601-2-54:2009+A1:2015, IEC 60601-2-28:2010, IEC 60601-1-6:2010 + A1:2013, IEC 62304:2006 + A1:2016. | Met: Device tested and found compliant with these international standards for safety and essential performance. |
Summary of the "Study" Proving Acceptance Criteria
The study described is a non-clinical, bench testing-based assessment for demonstrating substantial equivalence rather than a clinical study measuring diagnostic performance outcomes.
The core argument for substantial equivalence is based on:
- Identical Indications for Use.
- Identical platform (mobile system, generator, collimator, power source).
- Replacement of components (detectors and acquisition software) with components that are either:
- Previously FDA cleared (AeroDR detectors, CS-7 software).
- Validated according to FDA guidance (Ultra software).
- Compliance with recognized international standards for medical electrical equipment.
Here are the specific details requested:
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not applicable. No patient-level test set data is mentioned for testing diagnostic performance. The "test set" consisted of physical devices (systems covering all generator/panel combinations) for bench testing and software for validation.
- Data Provenance: Not applicable for a clinical test set. The testing was non-clinical bench testing. The detectors themselves (AeroDR) are stated to have been "previously cleared" by the FDA, implying their performance was established via other submissions, likely including data from various countries consistent with regulatory submissions.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Not applicable. There was no clinical test set requiring expert ground truth establishment for diagnostic accuracy.
4. Adjudication Method for the Test Set
- Not applicable. There was no clinical test set requiring adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not required to establish substantial equivalence because all digital x-ray receptor panels have had previous FDA clearance."
- Effect size of human readers improvement: Not applicable, as no such study was performed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, in spirit, for the software component. The new image acquisition software (Ultra) was validated according to the "FDA Guidance: Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." This validation assesses the software's functionality and performance as a standalone component within the system, ensuring it correctly manages workflow, acquires images, and processes them. However, this is software validation, not a standalone diagnostic performance study in the context of an AI algorithm producing diagnostic outputs.
7. The Type of Ground Truth Used
- For the overall device: Substantial equivalence to a legally marketed predicate device (K192011 PHOENIX), which itself would have demonstrated safety and effectiveness.
- For the components (detectors): Prior FDA clearance of the Konica-Minolta AeroDR panels served as the "ground truth" for their imaging characteristics (MTF, DQE, pixel size, etc.) being diagnostically acceptable.
- For the software (Ultra): Validation against specified functional and performance requirements outlined in the FDA software guidance, which serves as the ground truth for software quality and safety.
- For the PHOENIX system itself: Compliance with international safety and performance standards (IEC series) served as the ground truth for its electrical, mechanical, and radiation safety.
8. The Sample Size for the Training Set
- Not applicable. This device is not an AI/ML algorithm that requires a training set in the conventional sense of image analysis. It is an imaging acquisition device. The software validation is based on engineering principles and testing, not statistical training on a dataset.
9. How the Ground Truth for the Training Set Was Established
- Not applicable. As above, no training set for an AI/ML algorithm was used.
(176 days)
This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace a radiographic film/screen system in general-purpose diagnostic procedures.
This device is not indicated for use in mammography, fluoroscopy, and angiography applications.
The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.
The subject device SKR 3000 is not intended for use in mammography.
This device is also used for carrying out exposures on children.
The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal process/scattered radiation correction (Intelligent-Grid). Intelligent-Grid was cleared in K151465.
The proposed SKR 3000 is modified to consist of the new FPDs P-65 and P-75 in addition to the previously cleared P-61, P-71, and P-81, the Console CS-7, and other peripherals. The DR Detector uses the exposure signal or exposure from the X-ray device to generate X-ray digital image data for diagnosis, including serial exposure images, and sends it to the image processing controller.
The operator console software, Console CS-7, is a software program for installation on an OTC PC. Software module modifications have been made to support the new FPDs (P-65 and P-75) (Cassette Type Detection Software, CTDS) and 40-second serial radiography (SIC).
The FPDs used in the SKR 3000 can communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC compliant). WPA2-PSK (AES) encryption is adopted to secure the wireless connection.
The new DR panels, P-65 and P-75, employ a surface material containing an antibacterial agent on both the radiation and irradiation sides. In the serial radiography settings, the maximum acquisition time has been extended from 20 seconds to 40 seconds to allow observation of a variety of dynamic objects. Other control parameters of serial radiography are unchanged from the predicate device.
The SKR 3000 is distributed under the commercial name AeroDR 3.
This document describes the Konica Minolta SKR 3000, a digital radiography system, and its substantial equivalence to a predicate device. The information provided focuses on the device's design, specifications, and performance testing to demonstrate compliance with standards, but does not include a detailed study proving the device meets specific acceptance criteria related to diagnostic accuracy or clinical outcomes through a prospective trial involving human readers. The provided text primarily focuses on engineering and regulatory compliance, not clinical performance metrics in the context of AI assistance or human reader improvement.
However, based on the provided text, here's a breakdown of the acceptance criteria met through performance testing as described, and the absence of certain study types:
1. Table of Acceptance Criteria and Reported Device Performance
The document broadly states that "the performance tests according to the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices' and the other verification and validation including the items required by the risk analysis for the SKR3000 were performed and the results demonstrated that the predetermined acceptance criteria were met."
While specific numerical acceptance criteria and their corresponding reported device performance values are not explicitly detailed in the text, the comparison table implicitly highlights characteristics where performance is expected to be equivalent or improved. For instance, the Signal-to-Noise Ratio (SNR) and Detective Quantum Efficiency (DQE) are critical performance metrics for X-ray detectors, and based on the equivalence asserted, one can infer that these metrics met predefined acceptance thresholds.
Given the information in the "Comparison Table", the following can be inferred as performance aspects that were evaluated and met criteria for substantial equivalence:
Acceptance Criteria (Implied from comparison) | Reported Device Performance (Implied from comparison) |
---|---|
Image Quality Metrics: | |
MTF (1.0 cycle/mm) | Non-binning: 0.62 |
MTF (1.0 cycle/mm) | 2x2 binning: 0.58 |
DQE (1.0 cycle/mm) | 56% @ 1mR |
DQE (0 cycle/mm) | 65% @ 0.02mR |
Exposure Acquisition Time | Max. acquisition time: 40 seconds (for serial radiography) |
Battery Duration in Standby | P-65: Approx. 13.2 hours; P-75: Approx. 12.2 hours |
Antibacterial Properties | Surface infused with Silver ions (antibacterial properties) |
Environmental Protection (IPX) | IPX6 |
Regulatory Compliance | AAMI/ANSI ES 60601-1 (Ed.3.1), IEC 60601-1-2 (Ed.4.0), and ISO 10993-1 (2018) met. |
Software Functionality | New FPD support (CTDS) and 40 seconds serial radiography support (SIC) operating as intended. |
Absence of New Safety/Effectiveness Issues | Performance tests demonstrated no new issues compared to predicate device. |
2. Sample Size Used for the Test Set and Data Provenance
The provided document does not detail any clinical test set or data provenance in terms of patient images or specific study populations. The performance data mentioned refers to engineering and quality assurance tests, not clinical performance studies with patient data.
3. Number of Experts Used to Establish Ground Truth and Qualifications
This information is not applicable or disclosed in the provided text. The document refers to engineering performance tests and compliance with regulatory standards, not expert-adjudicated clinical ground truth.
4. Adjudication Method for the Test Set
This information is not applicable or disclosed as there is no mention of a human-reviewed test set or adjudication process for diagnostic performance.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study is not mentioned in the provided text. The document indicates that "the results of risk management did not require clinical studies to demonstrate the substantial equivalency of the proposed device," which suggests that comparative effectiveness with human readers or AI assistance was not a component of this 510(k) submission.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study Was Done
The device itself is a digital radiography system, which generates images. While there are software components (like Console CS-7 for image processing), the submission focuses on the hardware (FPDs) and overall system performance in generating X-ray images, not an AI algorithm's standalone diagnostic performance. Therefore, such a standalone diagnostic algorithm study is not mentioned. The "performance tests" refer to technical specifications and safety, not diagnostic accuracy.
7. The Type of Ground Truth Used
Based on the document, the "ground truth" for the acceptance criteria was primarily based on technical specifications, regulatory standards, and engineering performance requirements. These include metrics like MTF, DQE, mechanical dimensions, battery life, IPX ratings, and compliance with electrical safety and electromagnetic compatibility standards. No clinical ground truth (e.g., pathology, outcomes data, or expert consensus on disease presence) is mentioned as being used for performance evaluation in this submission.
8. The Sample Size for the Training Set
This is not applicable or disclosed. The document does not describe the development or training of an AI algorithm in the context of machine learning, so there is no mention of a training set of images.
9. How the Ground Truth for the Training Set Was Established
This is not applicable or disclosed as there is no mention of an AI training set.
(96 days)
This software is intended to generate digital radiographic images of the skull, spinal column, extremities, and other body parts in patients of all ages. Applications can be performed with the patient sitting or lying in the prone or supine position, and the software is intended for use in all routine radiography exams. The product is not intended for mammographic applications.
This software is not meant for mammography, fluoroscopy, or angiography.
The I-Q View is a software package to be used with FDA cleared solid-state imaging receptors. It functions as a diagnostic x-ray image acquisition platform and allows these images to be transferred to hard copy, softcopy, and archive devices via DICOM protocol. The flat panel detector is not part of this submission. In the I-Q View software, the Digital Radiography Operator Console (DROC) software allows the following functions:
- Add new patients to the system; enter information about the patient and physician that will be associated with the digital radiographic images.
- Edit existing patient information.
- Emergency registration and editing of Emergency settings.
- Pick from a selection of procedures, which defines the series of images to be acquired.
- Adjust technique settings before capturing the x-ray image.
- Preview the image, then accept or reject it, entering comments or rejection reasons; accepted images are sent to the selected output destinations.
- Save an incomplete procedure, for which the rest of the exposures will be made at a later time.
- Close a procedure when all images have been captured.
- Review history images; resend and reprint images.
- Re-examine a completed patient.
- Protect patient records from being deleted by the system.
- Delete an examined study, including all captured images.
- Edit user accounts.
- Check statistical information.
- Image QC.
- Image stitching.
- Provide electronic transfer of medical image data between medical devices (a minimal illustration follows this list).
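As a minimal illustration of the DICOM transfer in the last bullet, here is a hypothetical C-STORE ("DICOM Send") sketch using the open-source pynetdicom library; the PACS address, port, and file name are placeholders, and the filing does not describe I-Q View's actual implementation.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import DigitalXRayImageStorageForPresentation

ae = AE(ae_title="DROC_SCU")                       # hypothetical AE title
ae.add_requested_context(DigitalXRayImageStorageForPresentation)

ds = dcmread("accepted_image.dcm")                 # an operator-accepted image
assoc = ae.associate("pacs.example.org", 11112)    # placeholder PACS address
if assoc.is_established:
    status = assoc.send_c_store(ds)                # push the image to the PACS
    print("store status:", hex(status.Status) if status else "no response")
    assoc.release()
```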
The provided document is a 510(k) summary for the I-Q View software. It focuses on demonstrating substantial equivalence to a predicate device through bench testing and comparison of technical characteristics. It explicitly states that clinical testing was not required or performed.
Therefore, I cannot provide details on clinical acceptance criteria or a study proving the device meets them, as such a study was not conducted for this submission. The document relies on bench testing and comparison to a predicate device to establish substantial equivalence.
Here's a breakdown of what can be extracted from the provided text regarding acceptance criteria and the "study" (bench testing) that supports the device:
1. Table of Acceptance Criteria and Reported Device Performance
Since no clinical acceptance criteria or performance metrics are provided, this table will reflect the general statements made about the device performing to specifications.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Device functions as intended for image acquisition. | Demonstrated intended functions. |
Device performs to specification. | Performed to specification. |
Integration with compatible solid-state detectors performs within specification. | Verified integration performance within specification. |
Software is as safe and functionally effective as the predicate. | Bench testing confirmed as safe and functionally effective as predicate. |
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: Not applicable/not reported. The document describes bench testing, not a test set of patient data.
- Data Provenance: Not applicable. Bench testing generally involves internal testing environments rather than patient data from specific countries or retrospective/prospective studies.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable. As no clinical test set was used, no experts were needed to establish ground truth for patient data. Bench testing typically relies on engineering specifications and verification.
4. Adjudication method for the test set
- Not applicable. No clinical test set or human interpretation was involved.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical Testing: The bench testing is significant enough to demonstrate that the I-Q View software is as good as the predicate software. All features and functionality have been tested and all specifications have been met. Therefore, it is our conclusion that clinical testing is not required to show substantial equivalence." The device is software for image acquisition, not an AI-assisted diagnostic tool.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done
- Yes, in a sense. The "study" described is bench testing of the software's functionality and its integration with solid-state detectors. This is an evaluation of the algorithm/software itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For bench testing, the "ground truth" would be the engineering specifications and expected functional behavior of the software and its interaction with hardware components. It's about verifying that the software performs according to its design requirements.
8. The sample size for the training set
- Not applicable. The I-Q View is described as an image acquisition and processing software, not an AI/machine learning model that typically requires a training set of data.
9. How the ground truth for the training set was established
- Not applicable, as there is no mention of a training set or AI/machine learning component.
Summary of the "Study" (Bench Testing) for K203703:
The "study" conducted for the I-Q View software was bench testing. This involved:
- Verification and validation of the software.
- Demonstrating the intended functions and relative performance of the software.
- Integration testing to verify that compatible solid-state detectors performed within specification as intended when used with the I-Q View software.
The conclusion drawn from this bench testing was that the software performs to specification and is "as safe and as functionally effective as the predicate software." This was deemed sufficient to demonstrate substantial equivalence, and clinical testing was explicitly stated as not required.