Search Results
Found 5 results
510(k) Data Aggregation
(138 days)
The device is designed to perform general radiography x-ray examinations on all pediatric and all adult patients, in all patient treatment areas.
Treatment areas are defined as professional health care facility environments where operators with medical training are continually present during patients' examinations.
The ModelOne mobile X-ray system is a diagnostic mobile x-ray system utilizing digital radiography technology. The device consists of a self-contained x-ray generator, image receptor(s), an imaging display, and software for acquiring medical diagnostic images both inside and outside of a standard stationary x-ray room. The ModelOne system incorporates flat-panel detector(s) that can be used wirelessly for exams such as in-bed projections. The system is intended to be marketed with two flat-panel digital imager options, from Canon and Konica Minolta.
Based on the provided text, the device is an X-ray system, and the "study" described is a non-clinical performance evaluation rather than a traditional clinical study with human patients. The information provided is for regulatory clearance (510(k) summary) rather than a comprehensive research paper on AI performance.
Therefore, many of the typical acceptance criteria and study details for an AI/ML device (e.g., ground truth establishment for a test set, MRMC studies, standalone AI performance) are not applicable or not provided in this document. The device is a mobile X-ray system, not an AI-powered diagnostic tool. The focus is on the safety and performance of the hardware and integrated previously-cleared digital imagers, demonstrating substantial equivalence to a predicate device.
Here's an attempt to answer the questions based only on the provided text, noting where information is absent or not relevant for this type of device:
Acceptance Criteria and Device Performance (Non-AI X-ray System)
The document describes performance tests for a mobile X-ray system, NOT an AI/ML device. The acceptance criteria are implicit in the performance tests verifying the functionality and safety of the hardware. The "reported device performance" refers to the successful completion of these non-clinical tests.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Test/Evaluation | Reported Device Performance |
|---|---|---|
| Usability | Acceptance test on customer site | "Performance tests confirm that the device is as effective and performs as well as or better than the predicate device." (Implies meeting usability expectations) |
| Usability | Performance test at hospital by professional personnel | "Performance tests confirm that the device is as effective and performs as well as or better than the predicate device." (Implies meeting usability expectations) |
| Battery Performance | Beginning-of-life/end-of-life test | "Performance tests confirm that the device is as effective and performs as well as or better than the predicate device." (Implies battery life meets operational needs) |
| Mobility | Driving distance test (full to empty battery) | "The driving distance test was performed to verify maximum distance of driving from full to empty battery." (Implies meeting or exceeding the required driving distance for mobile operation) |
| Generator Performance | Comparison of exposure time with competitors | "The aim of generator performance test was to compare the time of exposure of !M1 and its competitors." (Implies competitive or equivalent exposure times, contributing to "performs as well as or better than the predicate device") |
| System Integration | Integration test with previously cleared flat-panel imagers | "Integration test was performed on the previously cleared flat-panel digital imagers in order to demonstrate that all components of the device function in a reproductive way according to the design specifications." (Confirms successful integration and functional operation of the complete system) |
| Software Risk | Software risk classification | "The software risk is classified as moderate level of concern device according to the Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." (Acceptance here is compliance with software risk guidance, a regulatory requirement, rather than a performance metric) |
| Safety | Overall safety assessment | "Technological differences do not raise questions of safety and the device is as safe as legally marketed DRX-Revolution by Carestream." (Overall safety acceptance based on non-clinical tests and comparison to the predicate) |
2. Sample Size for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated in terms of number of "cases" or "patients" as this is a device performance test, not a clinical study on diagnostic accuracy. The tests involve the device itself and its components.
- Data Provenance: The tests are "non-clinical testing" performed on the device hardware. Usability tests involved "professional personal" [sic] at a "hospital," but this evaluated the device's operation in a real-world setting, not its diagnostic output. The testing is retrospective only in the sense that completed test results were provided to the FDA.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Not Applicable / Not Provided. This document describes a mobile X-ray system, not an AI/ML diagnostic algorithm that requires expert-established ground truth for image interpretation. The "ground truth" here is the device's functional performance against its design specifications and compared to a predicate, not clinical diagnostic accuracy.
4. Adjudication Method for the Test Set
- Not Applicable / None. No adjudication method is mentioned as this is not a study assessing human or AI diagnostic performance based on image interpretation.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No. "No clinical testing was performed on the subject device." Therefore, no MRMC study was conducted to evaluate human readers with or without AI assistance.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Not Applicable / No. The device itself is an X-ray imaging system. It produces images, but the document does not describe a new AI algorithm for interpreting those images. The "software" mentioned is for acquiring and displaying images, and its risk is classified. The post-processing is defined by protocols from previously cleared Canon and Konica Minolta image software.
7. Type of Ground Truth Used
- Functional Performance Specifications and Predicate Comparison. The "ground truth" for this regulatory submission is that the device functions according to its design specifications (e.g., battery life, driving distance, exposure time) and performs "as well as or better than the predicate device" in non-clinical settings.
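The "ground truth" for a submission like this is essentially spec conformance: each measured value either meets its design limit (and matches or beats the predicate) or it does not. As a minimal sketch of that pass/fail logic, assuming hypothetical metric names and limits that are not taken from the submission:

```python
# Hypothetical spec-conformance check; metric names and limits are
# illustrative only, not values from the 510(k) submission.
def verify(specs, measured):
    """Return a pass/fail result for each metric against its spec limit."""
    results = {}
    for name, spec in specs.items():
        value = measured[name]
        if "min" in spec:
            results[name] = value >= spec["min"]  # lower bound (e.g. driving distance)
        else:
            results[name] = value <= spec["max"]  # upper bound (e.g. exposure time)
    return results

specs = {
    "driving_distance_m": {"min": 1000},  # full-to-empty battery run
    "exposure_time_ms": {"max": 20},      # generator performance
}
measured = {"driving_distance_m": 1200, "exposure_time_ms": 16}
print(verify(specs, measured))  # both metrics pass
```

Unlike a diagnostic-accuracy study, there is no reader or label here: the acceptance decision is a deterministic comparison of bench measurements against design inputs and predicate data.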
8. Sample Size for the Training Set
- Not Applicable. This is not an AI/ML algorithm that requires a training set of data.
9. How the Ground Truth for the Training Set Was Established
- Not Applicable. As above, no AI/ML training set is mentioned or implied.
(134 days)
The AeroDR SYSTEM 2 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures.
The AeroDR SYSTEM 2 is not indicated for use in mammography, fluoroscopy, tomography, and angiography applications.
The AeroDR SYSTEM 2 is a digital imaging system to be used with diagnostic x-ray systems. A new AeroDR Detector (flat-panel digital detector, hereafter P-51) and an AeroDR Generator Interface Unit2 have been added to the AeroDR SYSTEMS (predicate devices: K102349, K113248, K120477, K130936). They function together with components such as the Console CS-7 (operator console), AeroDR Interface Unit, AeroDR Interface Unit2, AeroDR Generator Interface Unit, AeroDR Access Point, AeroDR Battery Charger, and AeroDR Battery Charger2, and perform fundamentally the same as the AeroDR SYSTEMS in physical and performance characteristics such as device design, material safety, and physical properties. Images captured with the flat-panel digital detector in the AeroDR SYSTEM 2 can therefore be communicated to the operator console via a wired or wireless connection, depending on the user's choice. The AeroDR SYSTEM 2 was developed to meet users' compact-layout needs without changing the fundamental functions of the predicate devices.
The AeroDR SYSTEM 2 connects only to X-ray devices that are legally marketed in the United States of America and are compatible with the XGIF, UEC, or XIF Board, along with certain electrical requirements, specific signal controls for hardware and software, and the accessories described in the Operation Manual and Installation Manual, which also describe the compatibility test performed at the time of installation. In addition, for pediatric use, an X-ray control system for pediatrics is required.
The provided document, a 510(k) summary for the AeroDR SYSTEM 2, does not contain detailed information about acceptance criteria and a study proving the device meets those criteria in the format requested. The document focuses on demonstrating substantial equivalence to a predicate device, AeroDR SYSTEMS.
However, based on the information provided, here's what can be extracted and inferred:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly list specific quantitative acceptance criteria or a performance table. Instead, it states that the AeroDR SYSTEM 2 was evaluated for "equivalent evaluation outcome" to the predicate device. The performance characteristics mentioned are qualitative comparisons to the predicate device.
| Acceptance Criteria Category | Reported Device Performance (AeroDR SYSTEM 2) |
|---|---|
| Indications for Use | Identical to the predicate device. |
| Biocompatibility | Evaluated per EN ISO 10993-1; safety assured, the same as the predicate. |
| Electrical Safety | Conducted and assured as for the predicate devices (AAMI/ANSI ES60601-1:2005/(R)2012 and C1:2009/(R)2012 and A2:2010/(R)2012). |
| Electromagnetic Compatibility (EMC) | Conducted and assured as for the predicate devices (IEC 60601-1-2). |
| Technological Characteristics (Hardware/Software) | Verification and validation completed without problems. |
| Wireless Function | Evaluated with reference to FDA guidance. |
| Risk Management | Based on ISO 14971; completed without problems. |
| Performance Testing (Bench Testing) | Concluded; showed an evaluation outcome equivalent to the predicate. |
| Non-clinical Testing | Concluded; showed an evaluation outcome equivalent to the predicate. |
| Clinical Testing | Concluded; showed an evaluation outcome equivalent to the predicate. |
| Safety and Effectiveness | No safety, effectiveness, or performance issues or differences were found relative to the predicate devices. |
2. Sample Size Used for the Test Set and Data Provenance:
The document mentions "Non clinical and clinical testing" but does not specify the sample size for these tests or the data provenance (e.g., country of origin, retrospective/prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
This information is not provided in the document.
4. Adjudication Method for the Test Set:
This information is not provided in the document.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study:
The document does not mention an MRMC comparative effectiveness study or any effect size of human readers improving with AI vs. without AI assistance. The AeroDR SYSTEM 2 is a digital radiography system, not an AI-assisted diagnostic tool.
6. Standalone Performance:
The document implies standalone performance testing ("Bench Testing," "Non clinical and clinical testing") was conducted to demonstrate equivalence to the predicate device. However, it does not explicitly state "algorithm only without human-in-the-loop performance" as would be relevant for an AI device. As it's a hardware/software system for image generation, its standalone performance refers to its ability to capture and process images equivalently to the predicate.
7. Type of Ground Truth Used:
The document does not explicitly state the type of ground truth used for the "clinical testing." Given the context of a diagnostic imaging system, it would typically involve images reviewed against a clinical standard, but the specific nature (e.g., expert consensus, pathology, outcomes data) is not detailed.
8. Sample Size for the Training Set:
The document does not mention a training set or its size. This is consistent with a device seeking substantial equivalence to a predicate, where the focus is on verification and validation against established standards and predicate performance rather than training a novel algorithm from scratch.
9. How the Ground Truth for the Training Set Was Established:
Not applicable, as no training set is mentioned for an AI model.
Summary of what is present and absent regarding acceptance criteria and study details:
The document primarily acts as a 510(k) summary, aiming to prove substantial equivalence to existing predicate devices based on various safety, performance, and technical characteristics. It asserts that "equivalent evaluation outcome" was achieved in performance, non-clinical, and clinical testing, and that there were "no safety and effectiveness and performance issue or no differences were found" compared to the predicate. However, it lacks the detailed quantitative acceptance criteria, specific study designs, sample sizes, and expert qualification information that would be typically found for studies evaluating novel AI algorithms or clinical efficacy with precise endpoints.
(88 days)
The AeroDR Stitching System is used with Konica Minolta AeroDR SYSTEM which is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures. This device is used for examinations of long areas of anatomy such as the leg and spine. This device is not indicated for use in mammography, fluoroscopy, tomography and angiography applications.
The AeroDR Stitching System is used with the 510(k)-cleared Konica Minolta AeroDR SYSTEM (K102349), which is indicated for use in generating radiographic images of human anatomy. The AeroDR Stitching System is an accessory to a stationary X-ray system that extends the capability of the AeroDR SYSTEM to allow the capture of long-length images with an image area up to 1196 mm x 349 mm. It consists of the AeroDR Stitching Unit, AeroDR Stitching X-ray Auto Barrier Unit, and Power Supply Unit. The AeroDR Detector (K102349) loaded in the AeroDR Stitching Unit takes up to 3 images and transfers them to the Console CS-7 (K102349). Combining the transferred images in the CS-7 enables diagnosis using a single long-length image.
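The workflow described above — capture up to three overlapping sub-images on the detector, transfer them to the console, combine them into one long image — can be sketched as vertical stitching with a blended overlap band. This is an illustration only: the overlap width, linear blending, and array shapes below are assumptions, not Konica Minolta's actual stitching algorithm.

```python
import numpy as np

def stitch_vertical(images, overlap_px=50):
    """Stitch equally wide sub-images top-to-bottom, linearly blending
    each overlapping band (overlap_px is a hypothetical parameter)."""
    result = images[0].astype(float)
    for img in images[1:]:
        img = img.astype(float)
        # Linear ramp weights: fade out the upper image, fade in the lower one
        w = np.linspace(0.0, 1.0, overlap_px)[:, None]
        blended = result[-overlap_px:] * (1.0 - w) + img[:overlap_px] * w
        result = np.vstack([result[:-overlap_px], blended, img[overlap_px:]])
    return result

# Three simulated 300 x 349 sub-images, stitched with a 50 px overlap
subs = [np.full((300, 349), v) for v in (100.0, 120.0, 140.0)]
pano = stitch_vertical(subs, overlap_px=50)
print(pano.shape)  # (800, 349): 3*300 rows minus 2 overlaps of 50
```

A production system would also have to register the sub-images (correct for geometric offsets and exposure differences) before blending, which is precisely the kind of validation detail the 510(k) summary does not disclose.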
The provided 510(k) summary for the Konica Minolta AeroDR Stitching System (K120752) focuses primarily on substantial equivalence to a predicate device and adherence to general safety and EMC standards. It does not contain detailed performance study data, acceptance criteria, or ground truth establishment relevant to the device's image stitching capability.
Here's an analysis of the requested information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document unfortunately does not explicitly state specific acceptance criteria in a quantitative or qualitative manner for the imaging performance of the stitching system, nor does it report detailed device performance metrics against such criteria.
The "Performance-Testing" section states generically that: "Performance testing was conducted to verify the design output met the design input requirements and to validate AeroDR Stitching SYSTEM conformed to the defined user needs and intended uses upon the quality of the device software. Through validation results of sample images and non-clinical testing under simulated use conditions, safe, effectiveness and performances are confirmed the achievement of predefined acceptance criteria..."
However, these predefined acceptance criteria and the corresponding performance results are not detailed in the summary. The focus is on demonstrating substantial equivalence to a predicate device and compliance with safety and EMC standards.
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size of "sample images" used for performance testing (if any were used beyond engineering testing). It also does not mention the data provenance (e.g., country of origin, retrospective or prospective) for any clinical or non-clinical image data used in testing. The phrase "validation results of sample images and non-clinical testing under simulated use conditions" suggests that testing might have involved a limited set of images, possibly generated internally, rather than a broad clinical dataset.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The 510(k) summary does not provide any information regarding the use of experts to establish ground truth for a test set. This type of detail would typically be found in a clinical performance study report, which is not included here.
4. Adjudication Method for the Test Set
Since no information is provided about expert review or a "test set" in the context of clinical evaluation, there is no mention of an adjudication method (e.g., 2+1, 3+1, none).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed. The focus of the submission is on demonstrating substantial equivalence to a predicate device through technological characteristics and safety standards, not on comparative clinical performance with human readers. Therefore, no effect size of human readers improving with AI vs. without AI assistance is mentioned.
6. Standalone (Algorithm Only) Performance Study
While the device's stitching capability is an algorithm, the document does not present a standalone performance study in terms of specific metrics like stitching accuracy, artifact detection, or image quality assessments related only to the algorithm's output. The "Performance-Testing" section vaguely refers to "validation results of sample images and non-clinical testing under simulated use conditions" to confirm "safe, effectiveness and performances," but concrete results of the stitching algorithm's standalone performance are absent.
7. Type of Ground Truth Used
The document does not specify the type of ground truth used for any performance evaluation. Given the nature of a stitching system for long-length imaging, ground truth might ideally involve physical measurements on phantoms, or expert assessment of stitching lines and image continuity. However, this information is not provided.
8. Sample Size for the Training Set
The document does not provide any information about a training set size. As a 510(k) for a relatively early-stage digital radiography accessory (2012), and based on the summary, it's possible the device relied more on rule-based or deterministic algorithms for stitching rather than extensive machine learning requiring a large training set. Even if machine learning was used, the details are not presented.
9. How the Ground Truth for the Training Set Was Established
Since no information is provided about a training set, the document also does not explain how ground truth for a training set was established.
Summary of Missing Information:
The provided 510(k) summary is typical for a device primarily seeking substantial equivalence based on technological similarity and compliance with recognized standards. It lacks the detailed performance study information, including acceptance criteria, sample sizes, expert involvement, and ground truth methodologies, that would be expected for a more in-depth clinical performance evaluation or an AI-intensive device requiring specific validation of its intelligent features. The "Performance-Testing" section is very general and does not disclose the specific data or methods used to "confirm the achievement of predefined acceptance criteria."
(27 days)
IMIX PanoRad and PanoRad SL X-Ray Systems are indicated for use in generating radiographic images of human anatomy. They have Solid State X-ray Imaging systems intended to replace radiographic film/screen systems in all general-purpose diagnostic procedures. (Not for mammography.)
The modified device can produce digital x-ray images in various configurations.
The provided text is a 510(k) summary for the IMIX ADR Finland OY PanoRad and PanoRad SL Systems. This document aims to demonstrate substantial equivalence to previously cleared devices, rather than establishing acceptance criteria for a new, distinct device's performance.
Therefore, the document does not contain the information requested in your prompt regarding acceptance criteria, device performance studies, and ground truth establishment because it is focused on demonstrating that the revised device is substantially equivalent to existing, legally marketed devices.
Here's why each of your requested points is not present in the provided text:
- A table of acceptance criteria and the reported device performance: This document doesn't define new performance criteria or report performance against them. Instead, it compares the characteristics of the modified device to a predicate device to show they are essentially the same.
- Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective): No specific test sets or clinical studies for performance evaluation are described. The filing relies on the established safety and effectiveness of the existing predicate devices and the individual components.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience): Since no new performance studies are detailed, there's no mention of experts establishing ground truth.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable, as there's no new test set described.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: This is not an AI/CAD device. It's an X-ray imaging system, so MRMC studies, especially with AI assistance, are not relevant to this filing.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not applicable, as this is a hardware device (X-ray system), not an algorithm.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable, as there's no new performance study requiring ground truth.
- The sample size for the training set: Not applicable, as this is not an AI/Machine Learning device that requires a training set.
- How the ground truth for the training set was established: Not applicable, for the same reason as the previous point.
In summary, the provided document focuses on demonstrating substantial equivalence of a modified X-ray system to a predicate device by comparing technical specifications and intended use, rather than presenting a de novo performance study with specific acceptance criteria and ground truth validation.
(75 days)
The AeroDR SYSTEM with P-21 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures. The AeroDR SYSTEM with P-21 is not indicated for use in mammography, fluoroscopy, tomography and angiography applications.
The AeroDR SYSTEM, K102349, is a digital imaging system to be used with diagnostic x-ray systems. It consists of the AeroDR Detector (flat-panel digital detector), Console CS-7 (operator console), AeroDR Interface Unit, AeroDR Generator Interface Unit, AeroDR Access Point, and AeroDR Battery Charger. Images captured with the flat-panel digital detector can be communicated to the operator console via a wired or wireless connection, depending on the user's choice. The modification in the AeroDR SYSTEM with P-21 adds a different panel size: a 17 x 17 inch panel (P-21) is added alongside the existing 17 x 14 inch panel. The materials of the panel remain unchanged, and no changes were made other than the panel size.
I am sorry, but based on the provided text, there is no information about acceptance criteria or a study proving the device meets those criteria. The document describes a 510(k) submission for a medical device (AeroDR SYSTEM with P-21), but it primarily focuses on its substantial equivalence to a predicate device, its indications for use, and regulatory compliance.
Specifically, the "Performance Testing" section states: "Performance data from non-clinical testing of the AeroDR SYSTEM with P-21 is compared with data from the predicate device, AeroDR SYSTEM (P-11). This comparison showed that the AeroDR SYSTEM with P-21 performed as well as the predicate device."
This statement indicates a comparison was made, but it does not provide details on:
- Specific acceptance criteria.
- The results of the performance data in terms of specific metrics.
- The type of study conducted (e.g., sample size, data provenance, ground truth, expert involvement, MRMC, standalone performance).
Therefore, I cannot populate the table or answer the specific questions about the study from the given text.