Search Results
Found 3 results
510(k) Data Aggregation
(276 days)
OEC One ASD
The OEC One ASD mobile C-arm system is designed to provide fluoroscopic and digital spot images of adult and pediatric patient populations during diagnostic, interventional, and surgical procedures. Examples of a clinical application may include: orthopedic, gastrointestinal, endoscopic, urologic, vascular, critical care, and emergency procedures.
The OEC One ASD is a mobile C-arm X-ray system to provide fluoroscopic images of the patient during diagnostic, interventional, and surgical procedures such as orthopedic, gastrointestinal, endoscopic, urologic, neurologic, vascular, critical care, and emergency procedures. These images help the physician visualize the patient's anatomy and localize clinical regions of interest. The system consists of a mobile stand with an articulating arm attached to it to support an image display monitor (widescreen monitor) and a TechView tablet, and a "C" shaped apparatus that has a flat panel detector on the top of the C-arm and the X-ray Source assembly at the opposite end.
The OEC One ASD is capable of performing linear motions (vertical, horizontal) and rotational motions (orbital, lateral, wig-wag) that allow the user to position the X-ray image chain at various angles and distances with respect to the patient anatomy to be imaged. The C-arm is mechanically balanced, allowing for ease of movement, and capable of being "locked" in place using a manually activated lock.
The subject device is labelled as OEC One ASD.
The provided document is a 510(k) Summary of Safety and Effectiveness for the GE Hualun Medical Systems Co. Ltd. OEC One ASD, a mobile C-arm X-ray system. The document focuses on demonstrating substantial equivalence to a predicate device, OEC One (K182626), rather than presenting a study with specific acceptance criteria and detailed device performance results for a new AI/CAD feature.
The submission is for a modification of an existing device, primarily introducing an amorphous silicon (a-Si) flat panel detector as the image receptor and updating some hardware and software components. The changes are stated to enhance device performance and are discussed in terms of their impact on safety and effectiveness, concluding that no new hazards or concerns were raised.
Therefore, the information required for a detailed description of acceptance criteria and a study proving device performance, especially for AI/CAD features, is largely not present in this document. The document centers on demonstrating that the modified device maintains safety and effectiveness and is substantially equivalent to the predicate, rather than detailing a study against specific acceptance criteria for a novel functionality.
However, I can extract the available relevant information and highlight what is missing based on your request.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly define "acceptance criteria" in the context of a study demonstrating novel AI/CAD feature performance. Instead, it presents a comparison table of technical specifications between the proposed device (OEC One ASD) and the predicate device (OEC One K182626) to demonstrate substantial equivalence. The "Acceptance Criteria" here are implicitly derived from the predicate's performance and safety profiles.
| Feature / Performance Metric | Predicate Device (OEC One K182626) | Subject Device (OEC One ASD) | Discussion of Differences / Equivalence |
|---|---|---|---|
| Image Receptor | Image Intensifier | 21 cm Amorphous Silicon (a-Si) Flat Panel Detector | Substantially Equivalent. Change made to enhance device performance. |
| DQE | 65% | 70% (0 lp/mm) | Enhanced DQE, indicating improved image quality. |
| MTF | 45% | 46% (1.0 lp/mm) | Slightly enhanced MTF, indicating improved image quality. |
| Field of View | 9 inch, 6 inch, 4.5 inch | 21 cm, 15 cm, 11 cm | No new hazards or hazardous situations. Performance testing indicated effectiveness. |
| Image Matrix Size | 1000x1000 | 1520x1520 | Substantially Equivalent. Driven by the detector pixel matrix for higher resolution. |
| Image Shape | Circle | Squircle | Substantially Equivalent. Enhanced viewing area without typically unnecessary corner areas. |
| Anti-scatter Grid | Line rate: 60 L/cm; ratio: 10:1; focal distance: 100 cm | Line rate: 74 L/cm; ratio: 14:1; focal distance: 100 cm | Substantially Equivalent. Specification updated based on the new image receptor. |
| X-ray Generator (Fluoroscopy) | 0.1-4.0 mA | 0.1-8.0 mA | Substantially Equivalent. mA range change for optimized image quality (ABS). No new safety/effectiveness concerns. |
| X-ray Generator (Digital Spot) | 0.2-10.0 mA (100-120 V system) | 2-10.0 mA (100-120 V system) | Substantially Equivalent. mA range change for optimized image quality (increasing mA on thin anatomy). No new safety/effectiveness concerns. |
| Imaging Modes (Digital Spot) | Normal Dose, Low Dose | Normal Dose | Low Dose mode not provided for Digital Spot because high-mA exposure ensures quality; similar functionality available via Fluoroscopy. |
| Imaging Modes (Roadmap) | Normal Dose, Low Dose | Removed | Roadmap mode removed based on marketing input; similar functionality via the peak opacify function on cine. |
| Imaging Features (Zoom) | Zoom & Roam | Zoom (Live Zoom) & Roam | Improved with Live Zoom during fluoro/cine. |
| Imaging Features (Digital Pen) | N/A | Digital Pen | Added for planning/educational purposes. |
| Monitor Display (Resolution) | 1920x1080 | 3840x2160 | Substantially Equivalent. Updated to higher resolution due to IT advancement. |
| Monitor Display (Bit Depth) | 8-bit image display | 10-bit image display | Substantially Equivalent. Better display technology. |
| TechView Tablet | OS: Android 5.1 | OS: Android 11.0 | Substantially Equivalent. OS upgraded due to IT advancement. |
| C-Arm Physical Dimensions | Orbital rotation: 120° (90° underscan / 30° overscan) | Orbital rotation: 150° (95° underscan / 55° overscan) | Substantially Equivalent. Larger range for user convenience. |
| Image Storage | 100,000 images | 150,000 images | Substantially Equivalent. Driven by IT advancement (more storage). |
| Wireless Printing Module | N/A | Wireless printing module | Substantially Equivalent. Not for diagnostic use or device control. No new risks. |
| Video Distributor | DVI, BNC | DP, BNC | Substantially Equivalent. Driven by IT advancement. |
| Laser Aimer | Red laser, Class IIIa/3R, 650 nm, ≤ 5.0 mW | Green laser, Class 2, 510-530 nm, 1 mW | Substantially Equivalent. Updated to a green laser for convenience (tube side). Both meet laser product requirements. |
| Image Processing | ADRO (CPU-based) | ADRO (GPU-based) | Substantially Equivalent. GPU used for better calculation speed. All other listed image processing functions are the same. |
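For context only, the field-of-view and matrix figures in the table above imply an approximate detector sampling pitch. The short sketch below is an illustrative back-of-the-envelope calculation (assuming a square active area, which the summary does not state) and is not part of the submission.

```python
# Illustrative only: approximate sampling implied by the nominal figures above.
# Assumes a square 21 cm active area mapped onto a 1520 x 1520 matrix; the
# actual detector pixel pitch is not reported in the 510(k) summary.

def approx_pixel_pitch_mm(fov_cm: float, matrix_px: int) -> float:
    """Approximate pixel pitch (mm) for a square field of view."""
    return (fov_cm * 10.0) / matrix_px

def nyquist_lp_per_mm(pitch_mm: float) -> float:
    """Nyquist limit (line pairs per mm) for a given pixel pitch."""
    return 1.0 / (2.0 * pitch_mm)

pitch = approx_pixel_pitch_mm(21.0, 1520)
print(f"approx. pixel pitch:   {pitch:.3f} mm")                        # ~0.138 mm
print(f"approx. Nyquist limit: {nyquist_lp_per_mm(pitch):.1f} lp/mm")  # ~3.6 lp/mm
```

By the same rough estimate, the predicate's 1000x1000 matrix over its 9-inch (about 23 cm) field samples at roughly 0.23 mm (about 2.2 lp/mm), although an image intensifier's limiting resolution is not determined by the matrix alone.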
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document. The submission states, "comparative clinical images were evaluated to demonstrate substantial equivalence for the OEC One ASD compared to the cleared predicate," but no details on the sample size, data provenance (e.g., country of origin, retrospective/prospective nature), or specific evaluation methodology for these clinical images are given.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The document states that "comparative clinical images were evaluated," but it does not specify the number or qualifications of experts involved in this evaluation or the establishment of ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document. The method used to resolve discrepancies or establish a consensus for the evaluation of comparative clinical images is not described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study focusing on human reader improvement with AI assistance was not mentioned or described in this 510(k) submission. The document discusses device modifications and their impact on image quality and functionality, but not the comparative effectiveness of human readers utilizing AI.
6. If a standalone performance assessment (i.e., algorithm only, without human-in-the-loop performance) was done
This submission does not describe a standalone performance study for an AI algorithm. The device itself is an X-ray system, and while it has "Image Processing" features, these are not presented as standalone AI algorithms for diagnostic assistance but rather as integrated components affecting image generation and display characteristics. The update of ADRO from CPU-based to GPU-based processing is noted for speed, but its standalone performance as an AI algorithm is not evaluated or presented.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document mentions "comparative clinical images were evaluated," but it does not specify the type of ground truth against which these images were assessed. Since the primary focus is on demonstrating substantial equivalence of technical image characteristics rather than validating a diagnostic AI output, a traditional "ground truth" (like pathology or outcomes a specific AI would predict) is not explicitly detailed. The implicit ground truth would be the expected imaging performance and diagnostic utility comparable to the predicate device.
8. The sample size for the training set
This information is not provided in the document. The document describes modifications to an existing X-ray system, including software updates. It states, "Its software is based on the architecture, design and code base of the predicate device OEC One (K182626)," and underwent a standard software development lifecycle. There is no mention of a separate "training set" in the context of an AI/CAD algorithm as typically understood for deep learning models.
9. How the ground truth for the training set was established
Since no training set for an AI/CAD algorithm is mentioned (refer to point 8), the method for establishing its ground truth is not applicable/provided in this document.
(53 days)
OEC One
The OEC One™ mobile C-arm system is designed to provide fluoroscopic and digital spot images of adult and pediatric patient populations during diagnostic, interventional, and surgical procedures. Examples of a clinical application may include: orthopedic, gastrointestinal, endoscopic, urologic, neurologic, vascular, critical care, and emergency procedures.
The OEC One™ is a mobile C-arm x-ray system to provide fluoroscopic images of the patient during diagnostic, interventional, and surgical procedures such as orthopedic, gastrointestinal, endoscopic, urologic, vascular, neurologic, critical care, and emergency procedures. These images help the physician visualize the patient's anatomy and localize clinical regions of interest. The system consists of a mobile stand with an articulating arm attached to it to support an image display monitor (widescreen monitor) and a TechView tablet, and a "C" shaped apparatus that has an image intensifier on the top of the C-arm and the X-ray Source assembly at the opposite end.
The OEC One™ is capable of performing linear motions (vertical, horizontal) and rotational motions (orbital, lateral, wig-wag) that allow the user to position the X-ray image chain at various angles and distances with respect to the patient anatomy to be imaged. The C-arm is mechanically balanced, allowing for ease of movement, and capable of being "locked" in place using a manually activated lock.
The subject device is labelled as OEC One.
The provided text is a 510(k) Premarket Notification Submission for the OEC One with vascular option. This document primarily focuses on establishing substantial equivalence to a predicate device (OEC One, K172700) rather than presenting a detailed study with acceptance criteria for device performance in the context of an AI/algorithm-driven device.
The "device" in this context is an X-ray imaging system (OEC One™ mobile C-arm system), and the changes described are hardware and software modifications to enhance vascular imaging features. It is not an AI or algorithm-only device with specific performance metrics like sensitivity, specificity, or AUC.
Therefore, most of the requested information regarding acceptance criteria for AI performance, sample sizes for test/training sets, expert ground truth, adjudication methods, MRMC studies, or standalone algorithm performance is not applicable or cannot be extracted from this document.
However, I can extract information related to the device's technical specifications and the testing performed to demonstrate its safety and effectiveness.
Here is a summary of the information that can be extracted, addressing the closest relevant points:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of numerical acceptance criteria (e.g., sensitivity, specificity) for the device's imaging performance in relation to clinical outcomes. Instead, the acceptance criteria are generally implied by conformance to existing standards and successful completion of various engineering and verification tests. The "reported device performance" refers to the device meeting these design inputs and user needs.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Compliance with medical electrical equipment standards | Certified compliant with the IEC 60601-1 Ed. 3 series, including IEC 60601-2-54:2009 and IEC 60601-2-43:2010. |
| Compliance with radiation performance standards | All applicable 21 CFR Subchapter J performance standards were met. |
| Design inputs and user needs met | Verification and validation executed; results demonstrate the OEC One™ system met the design inputs and user needs. |
| Image quality and dose assessment for fluoroscopy | All image quality/performance testing identified for fluoroscopy in FDA's "Information for Industry: X-ray Imaging Devices - Laboratory Image Quality and Dose Assessment, Tests and Standards" was performed with acceptable results. This included testing using anthropomorphic phantoms. |
| Software documentation requirements for moderate level of concern | Substantial equivalence based on software documentation for a "Moderate" level of concern device. |
| Functional operation of new vascular features | The primary change was to implement vascular features (Subtraction, Roadmap, Peak Opacification, Cine Recording/Playback, Re-registration, Variable Landmarking, Mask Save/Recall, Reference Image Hold) to perform vascular procedures with the "easiest workflow and least intervention by the user" and to "further enhance the vascular workflows." (Bench testing demonstrated user requirements were met.) |
| Safety and effectiveness | The changes do not introduce any adverse effects nor raise new questions of safety and effectiveness. |
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: Not explicitly stated in terms of patient data. The testing involved "anthropomorphic phantoms" for image performance and various engineering/bench testing for functional validation. These are not "test sets" in the typical sense of a dataset for an AI algorithm.
- Data Provenance: Not applicable as it's not patient data for AI evaluation. The testing was conducted internally at GE Hualun Medical Systems Co., Ltd.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not applicable. Ground truth from experts is not mentioned for this type of device evaluation.
- Qualifications of Experts: Not applicable.
4. Adjudication method for the test set
- Adjudication Method: Not applicable. There was no expert adjudication process described for the testing performed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No. This document describes a C-arm X-ray system, not an AI-assisted diagnostic tool that would typically undergo such a study.
- Effect Size of Human Readers: Not applicable.
6. If a standalone performance assessment (i.e., algorithm only, without human-in-the-loop performance) was done
- Standalone Performance: Not applicable. The device is an imaging system; its "performance" is inherently tied to image acquisition and display, which are used by a human operator/physician. The "vascular features" are software enhancements to the imaging workflow, not a standalone AI algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: For image quality, the ground truth was based on physical phantom characteristics and established technical standards (e.g., image resolution, contrast, noise, dose measurements). For functional aspects, it was based on meeting design inputs and user requirements validated through engineering tests. No expert consensus, pathology, or outcomes data were used as "ground truth" for this device's substantial equivalence declaration.
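For illustration only, bench metrics of this kind (contrast, noise) typically reduce to simple region-of-interest statistics on phantom images. The sketch below is a generic, hypothetical contrast-to-noise-ratio (CNR) calculation on a synthetic frame; it is not a test described in the submission, and all names in it are made up for the example.

```python
# Hypothetical illustration of a phantom-based contrast-to-noise-ratio (CNR)
# measurement; the submission does not describe this specific calculation.
import numpy as np

def roi_mean_std(image: np.ndarray, y: int, x: int, size: int) -> tuple[float, float]:
    """Mean and standard deviation of a square ROI centered at (y, x)."""
    h = size // 2
    roi = image[y - h:y + h, x - h:x + h]
    return float(roi.mean()), float(roi.std())

def cnr(image: np.ndarray, target: tuple[int, int],
        background: tuple[int, int], size: int = 32) -> float:
    """CNR = |mean_target - mean_background| / std_background."""
    m_t, _ = roi_mean_std(image, *target, size)
    m_b, s_b = roi_mean_std(image, *background, size)
    return abs(m_t - m_b) / s_b

# Synthetic "phantom" frame: uniform noisy background plus a low-contrast insert.
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 5.0, size=(512, 512))
frame[200:264, 200:264] += 10.0  # low-contrast target region
print(f"CNR ≈ {cnr(frame, target=(232, 232), background=(400, 400)):.2f}")
```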
8. The sample size for the training set
- Training Set Sample Size: Not applicable. This document does not describe an AI model that requires a training set. The software updates are feature additions and modifications, not learned from a large dataset in the way a deep learning model would be.
9. How the ground truth for the training set was established
- Ground Truth Establishment: Not applicable, as there is no mention of an AI model with a training set.
(63 days)
OEC One
The OEC One™ mobile C-arm system is designed to provide fluoroscopic and digital spot/film images of adult and pediatric patient populations during diagnostic, interventional, and surgical procedures. Examples of a clinical application may include: orthopedic, gastrointestinal, endoscopic, urologic, neurologic, critical care, and emergency procedures.
The OEC One™ is a mobile C-arm x-ray system to provide fluoroscopic images of the patient during diagnostic, interventional, and surgical procedures such as orthopedic, gastrointestinal, endoscopic, urologic, neurologic, critical care, and emergency procedures. These images help the physician visualize the patient's anatomy and localize clinical regions of interest. The system consists of a mobile stand with an articulating arm attached to it to support an image display monitor (widescreen monitor) and a TechView tablet, and a "C" shaped apparatus that has an image intensifier on the top of the C-arm and the X-ray Source assembly at the opposite end.
Based on the provided text, the device in question is the OEC One™ mobile C-arm system, which is an image-intensified fluoroscopic X-ray system. The document is a 510(k) Premarket Notification Submission, indicating that the manufacturer is seeking to demonstrate substantial equivalence to a legally marketed predicate device rather than provide evidence of a novel device's safety and effectiveness.
Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are framed within the context of demonstrating substantial equivalence to the predicate device (K123603 OEC Brivo), rather than proving the device's de novo performance against specific clinical metrics as one might expect for a new AI/CADx device.
Here's an analysis of the provided information in response to your specific questions:
1. A table of acceptance criteria and the reported device performance
The document does not present a table of specific performance acceptance criteria (e.g., sensitivity, specificity, accuracy) for a diagnostic output, as this is an imaging device rather than a diagnostic AI/CADx algorithm. Instead, the acceptance criteria are linked to demonstrating that the modified device maintains the same safety and effectiveness as the predicate device, especially considering the changes made (integration of mainframe/workstation, new display, software updates).
The "acceptance criteria" for this 510(k) appear to be:
- Conformance to relevant safety and performance standards: IEC 60601-1 Ed. 3 series (including IEC 60601-2-54 and IEC 60601-2-43), and all applicable 21 CFR Subchapter J performance standards.
- Successful verification and validation: Demonstrating that the system met design input and user needs, including hazard mitigation.
- Maintenance of comparable image quality: Assessed through engineering bench testing using anthropomorphic phantoms.
- Compliance with software development requirements: For a "Moderate" level of concern device.
| Acceptance Criteria Category | Reported Device Performance/Evidence |
|---|---|
| Safety and Performance Standards | System tested by an NRTL and certified compliant with the IEC 60601-1 Ed. 3 series, including IEC 60601-2-54 and IEC 60601-2-43. All applicable 21 CFR Subchapter J performance standards are met. |
| Verification and Validation | Verification and validation, including hazard mitigation, has been executed with results demonstrating the OEC One™ system met design input and user needs. Developed under GE Healthcare's Quality Management System, including design controls, risk management, and software development life cycle processes. Quality assurance measures applied: Risk Analysis, Required Reviews, Design Reviews, Unit Testing (Sub-System Verification), Integration Testing (System Verification), Performance Testing (Verification), Safety Testing (Verification), Simulated Use Testing (Validation). |
| Image Quality/Performance (Non-Clinical) | Additional engineering bench testing on image performance using anthropomorphic phantoms was performed. All the image quality/performance testing identified for fluoroscopy in FDA's "Information for Industry: X-ray Imaging Devices - Laboratory Image Quality and Dose Assessment, Tests and Standards" was performed with acceptable results. |
| Software Compliance | Substantial equivalence was also based on software documentation for a "Moderate" level of concern device. |
| Clinical Equivalence (No Clinical Study) | "Because OEC One's modification based on the predicate device does not change the system's intended use and represent equivalent technological characteristics, clinical studies are not required to support substantial equivalence." This indicates the acceptance criterion for clinical performance was met by demonstrating the modifications did not impact the clinical function or safety relative to the predicate. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not applicable in the context of clinical images with human expert ground truth for an AI/CADx device. The testing described focuses on non-clinical engineering bench tests using anthropomorphic phantoms and system verification/validation against standards.
- Data Provenance: The document states "Additional engineering bench testing on image performance using anthropomorphic phantoms was also performed." This implies a prospective generation of test data using physical phantoms, rather than retrospective or prospective clinical patient data. The country of origin for this testing is not explicitly stated beyond the manufacturer's location (China).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. The testing described is primarily engineering and performance verification using phantoms and standards, not clinical image interpretation requiring expert radiologists to establish ground truth for a diagnostic task.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable, as there is no clinical image-based test set requiring human adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No MRMC study was done. The document explicitly states: "Clinical testing: Because OEC One’s modification based on the predicate device does not change the system’s intended use and represent equivalent technological characteristics, clinical studies are not required to support substantial equivalence." This is not a study of AI assistance.
6. If a standalone performance assessment (i.e., algorithm only, without human-in-the-loop performance) was done
- Not applicable. This is not an AI/CADx algorithm. The device itself is an X-ray imaging system. The software updates mentioned ("Adaptive Dynamic Range Optimization (ADRO)" and motion artifact reduction) relate to image processing within the device itself, not to a separate standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The "ground truth" for the device's performance is established through:
- Engineering benchmarks and physical phantom measurements: For image quality assessment against established standards (e.g., FDA's "Information for Industry: X-ray Imaging Devices - Laboratory Image Quality and Dose Assessment, Tests and Standards").
- Compliance with international safety and performance standards: IEC 60601 series, 21 CFR Subchapter J.
- Conformance to design specifications and user needs: Through verification and validation activities.
8. The sample size for the training set
- Not applicable. This is not a machine learning or AI device that requires a training set in the conventional sense. The "software updates" mentioned are more likely based on engineering principles and signal processing than machine learning training.
9. How the ground truth for the training set was established
- Not applicable, as there is no explicit "training set" for an AI algorithm. Software development and calibration would typically rely on engineering specifications, physical models, and potentially empirical adjustments based on performance testing.