Search Results
Found 3 results
510(k) Data Aggregation
(53 days)
The OEC One™ mobile C-arm system is designed to provide fluoroscopic and digital spot images of adult and pediatric patient populations during diagnostic, interventional, and surgical procedures. Examples of clinical applications include orthopedic, gastrointestinal, endoscopic, urologic, neurologic, vascular, critical care, and emergency procedures.
The OEC One™ is a mobile C-arm x-ray system to provide fluoroscopic images of the patient during diagnostic, interventional, and surgical procedures such as orthopedic, gastrointestinal, endoscopic, urologic, vascular, neurologic, critical care, and emergency procedures. These images help the physician visualize the patient's anatomy and localize clinical regions of interest. The system consists of a mobile stand with an articulating arm attached to it to support an image display monitor (widescreen monitor) and a TechView tablet, and a "C" shaped apparatus that has an image intensifier on the top of the C-arm and the X-ray Source assembly at the opposite end.
The OEC One™ is capable of performing linear motions (vertical, horizontal) and rotational motions (orbital, lateral, wig-wag) that allow the user to position the X-ray image chain at various angles and distances with respect to the patient anatomy to be imaged. The C-arm is mechanically balanced, allowing for ease of movement, and can be "locked" in place using a manually activated lock.
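The positioning degrees of freedom described above amount to rotations of the source-to-detector axis about the patient. A minimal geometric sketch of the idea (hypothetical angle assignments and plain NumPy, not OEC software) showing how an orbital rotation followed by a wig-wag sweep reorients the imaging axis:

```python
import numpy as np

def rot_x(angle_rad):
    """Rotation about the x-axis, standing in for orbital motion along the C."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_z(angle_rad):
    """Rotation about the vertical z-axis, standing in for the wig-wag sweep."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[  c,  -s, 0.0],
                     [  s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

# Imaging axis pointing from the X-ray source up toward the image intensifier
axis = np.array([0.0, 0.0, 1.0])

# Example pose: 30 degrees orbital followed by 10 degrees wig-wag
beam = rot_z(np.radians(10)) @ rot_x(np.radians(30)) @ axis
```

Because rotations preserve length, `beam` stays a unit vector for any angle combination; the mechanical balancing and locks the text describes simply hold one such pose in place.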
The subject device is labelled as OEC One.
The provided text is a 510(k) Premarket Notification Submission for the OEC One with vascular option. This document primarily focuses on establishing substantial equivalence to a predicate device (OEC One, K172700) rather than presenting a detailed study with acceptance criteria for device performance in the context of an AI/algorithm-driven device.
The "device" in this context is an X-ray imaging system (OEC One™ mobile C-arm system), and the changes described are hardware and software modifications to enhance vascular imaging features. It is not an AI or algorithm-only device with specific performance metrics like sensitivity, specificity, or AUC.
Therefore, most of the requested information regarding acceptance criteria for AI performance, sample sizes for test/training sets, expert ground truth, adjudication methods, MRMC studies, or standalone algorithm performance is not applicable or cannot be extracted from this document.
However, I can extract information related to the device's technical specifications and the testing performed to demonstrate its safety and effectiveness.
Here is a summary of the information that can be extracted, addressing the closest relevant points:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of numerical acceptance criteria (e.g., sensitivity, specificity) for the device's imaging performance in relation to clinical outcomes. Instead, the acceptance criteria are generally implied by conformance to existing standards and successful completion of various engineering and verification tests. The "reported device performance" refers to the device meeting these design inputs and user needs.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Compliance with medical electrical equipment standards | Certified compliant with IEC 60601-1 Ed. 3 series, including IEC60601-2-54:2009 and IEC 60601-2-43:2010. |
Compliance with radiation performance standards | All applicable 21 CFR Subchapter J performance standards were met. |
Design inputs and user needs met | Verification and validation executed; results demonstrate the OEC One™ system met the design inputs and user needs. |
Image quality and dose assessment for fluoroscopy | All image quality/performance testing identified for fluoroscopy in FDA's "Information for Industry: X-ray Imaging Devices- Laboratory Image Quality and Dose Assessment. Tests and Standards" was performed with acceptable results. This included testing using anthropomorphic phantoms. |
Software documentation requirements for moderate level of concern | Substantial equivalence based on software documentation for a "Moderate" level of concern device. |
Functional operation of new vascular features | The primary change was to implement vascular features (Subtraction, Roadmap, Peak Opacification, Cine Recording/Playback, Re-registration, Variable Landmarking, Mask Save/Recall, Reference Image Hold) to perform vascular procedures with "easiest workflow and least intervention by the user" and "further enhance the vascular workflows." (Bench testing demonstrated user requirements were met.) |
Safety and effectiveness | The changes do not introduce any adverse effects nor raise new questions of safety and effectiveness. |
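Of the vascular features listed in the table, Subtraction is the simplest to illustrate: digital subtraction angiography removes static anatomy by subtracting a pre-contrast mask frame from each contrast-filled frame, typically after a log transform so attenuation adds linearly. A minimal sketch of the general technique (generic NumPy with toy data, not OEC's implementation):

```python
import numpy as np

def dsa_subtract(mask, contrast, eps=1e-6):
    """Digital subtraction angiography: log-subtract a pre-contrast mask
    frame from a contrast-filled frame so only the vessels stand out."""
    # The log transform linearizes attenuation (Beer-Lambert), so static
    # anatomy cancels and the iodinated vessel signal remains.
    return np.log(contrast + eps) - np.log(mask + eps)

# Toy 4x4 frames: identical background, one "vessel" pixel darkened
mask = np.full((4, 4), 0.8)
contrast = mask.copy()
contrast[2, 1] = 0.4          # contrast agent attenuates the beam here

diff = dsa_subtract(mask, contrast)
```

In this toy case every background pixel cancels to exactly zero and only the single vessel pixel carries a (negative) signal; the Roadmap and Peak Opacification features listed above build further workflow conveniences on top of this same mask/contrast idea.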
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: Not explicitly stated in terms of patient data. The testing involved "anthropomorphic phantoms" for image performance and various engineering/bench testing for functional validation. These are not "test sets" in the typical sense of a dataset for an AI algorithm.
- Data Provenance: Not applicable as it's not patient data for AI evaluation. The testing was conducted internally at GE Hualun Medical Systems Co., Ltd.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not applicable. Ground truth from experts is not mentioned for this type of device evaluation.
- Qualifications of Experts: Not applicable.
4. Adjudication method for the test set
- Adjudication Method: Not applicable. There was no expert adjudication process described for the testing performed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI versus without AI assistance
- MRMC Study: No. This document describes a C-arm X-ray system, not an AI-assisted diagnostic tool that would typically undergo such a study.
- Effect Size of Human Readers: Not applicable.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- Standalone Performance: Not applicable. The device is an imaging system; its "performance" is inherently tied to image acquisition and display, which are used by a human operator/physician. The "vascular features" are software enhancements to the imaging workflow, not a standalone AI algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: For image quality, the ground truth was based on physical phantom characteristics and established technical standards (e.g., image resolution, contrast, noise, dose measurements). For functional aspects, it was based on meeting design inputs and user requirements validated through engineering tests. No expert consensus, pathology, or outcomes data were used as "ground truth" for this device's substantial equivalence declaration.
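As a concrete example of the kind of phantom-based metric such image quality testing reduces to, contrast-to-noise ratio (CNR) compares a target region against background noise. A hedged sketch using the generic textbook formula on synthetic data (not the submission's actual protocol or values):

```python
import numpy as np

def cnr(target_roi, background_roi):
    """Contrast-to-noise ratio: mean signal difference between a target
    region of interest and the background, divided by background noise."""
    return abs(target_roi.mean() - background_roi.mean()) / background_roi.std()

# Synthetic phantom regions: flat background vs. a higher-signal insert
rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(64, 64))
insert = rng.normal(120.0, 5.0, size=(16, 16))

value = cnr(insert, background)   # roughly 4 for these parameters
```

Metrics like this, together with resolution and dose measurements against known phantom properties, are what serve as "ground truth" for an imaging system's bench evaluation.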
8. The sample size for the training set
- Training Set Sample Size: Not applicable. This document does not describe an AI model that requires a training set. The software updates are feature additions and modifications, not learned from a large dataset in the way a deep learning model would be.
9. How the ground truth for the training set was established
- Ground Truth Establishment: Not applicable, as there is no mention of an AI model with a training set.
(63 days)
The OEC One™ mobile C-arm system is designed to provide fluoroscopic and digital spot/film images of adult and pediatric patient populations during diagnostic, interventional, and surgical procedures. Examples of clinical applications include orthopedic, gastrointestinal, endoscopic, urologic, neurologic, critical care, and emergency procedures.
The OEC One™ is a mobile C-arm x-ray system to provide fluoroscopic images of the patient during diagnostic, interventional, and surgical procedures such as orthopedic, gastrointestinal, endoscopic, urologic, neurologic, critical care, and emergency procedures. These images help the physician visualize the patient's anatomy and localize clinical regions of interest. The system consists of a mobile stand with an articulating arm attached to it to support an image display monitor (widescreen monitor) and a TechView tablet, and a "C" shaped apparatus that has an image intensifier on the top of the C-arm and the X-ray Source assembly at the opposite end.
Based on the provided text, the device in question is the OEC One™ mobile C-arm system, which is an image-intensified fluoroscopic X-ray system. The document is a 510(k) Premarket Notification Submission, indicating that the manufacturer is seeking to demonstrate substantial equivalence to a legally marketed predicate device rather than provide evidence of a novel device's safety and effectiveness.
Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are framed within the context of demonstrating substantial equivalence to the predicate device (K123603 OEC Brivo), rather than proving the device's de novo performance against specific clinical metrics as one might expect for a new AI/CADx device.
Here's an analysis of the provided information in response to your specific questions:
1. A table of acceptance criteria and the reported device performance
The document does not present a table of specific performance acceptance criteria (e.g., sensitivity, specificity, accuracy) for a diagnostic output, as this is an imaging device rather than a diagnostic AI/CADx algorithm. Instead, the acceptance criteria are linked to demonstrating that the modified device maintains the same safety and effectiveness as the predicate device, especially considering the changes made (integration of mainframe/workstation, new display, software updates).
The "acceptance criteria" for this 510(k) appear to be:
- Conformance to relevant safety and performance standards: IEC 60601-1 Ed. 3 series (including IEC60601-2-54 and IEC 60601-2-43), and all applicable 21 CFR Subchapter J performance standards.
- Successful verification and validation: Demonstrating that the system met design input and user needs, including hazard mitigation.
- Maintenance of comparable image quality: Assessed through engineering bench testing using anthropomorphic phantoms.
- Compliance with software development requirements: For a "Moderate" level of concern device.
Acceptance Criteria Category | Reported Device Performance/Evidence |
---|---|
Safety and Performance Standards | System tested by an NRTL and certified compliant with IEC 60601-1 Ed. 3 series, including IEC 60601-2-54 and IEC 60601-2-43. All applicable 21 CFR Subchapter J performance standards are met. |
Verification and Validation | Verification and validation, including hazard mitigation, were executed with results demonstrating the OEC One™ system met design input and user needs. The system was developed under GE Healthcare's Quality Management System, including design controls, risk management, and software development life cycle processes. Quality assurance measures applied: risk analysis, required reviews, design reviews, unit testing (sub-system verification), integration testing (system verification), performance testing (verification), safety testing (verification), and simulated use testing (validation). |
Image Quality/Performance (Non-Clinical) | Additional engineering bench testing on image performance using anthropomorphic phantoms was performed. All the image quality/performance testing identified for fluoroscopy in FDA's "Information for Industry: X-ray Imaging Devices - Laboratory Image Quality and Dose Assessment, Tests and Standards" was performed with acceptable results. |
Software Compliance | Substantial equivalence was also based on software documentation for a "Moderate" level of concern device. |
Clinical Equivalence (No Clinical Study) | "Because OEC One's modification based on the predicate device does not change the system's intended use and represent equivalent technological characteristics, clinical studies are not required to support substantial equivalence." The acceptance criterion for clinical performance was met by demonstrating that the modifications did not impact clinical function or safety relative to the predicate. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not applicable in the context of clinical images with human expert ground truth for an AI/CADx device. The testing described focuses on non-clinical engineering bench tests using anthropomorphic phantoms and system verification/validation against standards.
- Data Provenance: The document states "Additional engineering bench testing on image performance using anthropomorphic phantoms was also performed." This implies a prospective generation of test data using physical phantoms, rather than retrospective or prospective clinical patient data. The country of origin for this testing is not explicitly stated beyond the manufacturer's location (China).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. The testing described is primarily engineering and performance verification using phantoms and standards, not clinical image interpretation requiring expert radiologists to establish ground truth for a diagnostic task.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable, as there is no clinical image-based test set requiring human adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI versus without AI assistance
- No MRMC study was done. The document explicitly states: "Clinical testing: Because OEC One’s modification based on the predicate device does not change the system’s intended use and represent equivalent technological characteristics, clinical studies are not required to support substantial equivalence." This is not a study of AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- Not applicable. This is not an AI/CADx algorithm. The device itself is an X-ray imaging system. The software updates mentioned ("Adaptive Dynamic Range Optimization (ARDO) and motion artifact reduction") relate to image processing within the device itself, not a separate standalone diagnostic algorithm.
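The ARDO algorithm itself is not publicly documented, but dynamic-range optimization in fluoroscopy generally means remapping raw detector intensities so that dark and bright regions are simultaneously viewable. A generic illustration of one such remapping, a power-law (gamma) curve, which is an assumption for illustration and not GE's actual method:

```python
import numpy as np

def compress_dynamic_range(frame, gamma=0.5):
    """Normalize a raw frame to [0, 1] and apply a power-law (gamma)
    curve that lifts dark regions relative to bright ones. Illustrative
    only; the actual ARDO processing is proprietary."""
    lo, hi = frame.min(), frame.max()
    normalized = (frame - lo) / (hi - lo)
    return normalized ** gamma

# Toy frame with a 1600:1 intensity spread between darkest and brightest
raw = np.array([[0.0, 100.0],
                [400.0, 1600.0]])
display = compress_dynamic_range(raw)
```

With `gamma=0.5` the mid-tones are pulled up (e.g., a pixel at 1/4 of full scale maps to 1/2), which is the qualitative behavior such image-processing steps aim for inside the device.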
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The "ground truth" for the device's performance is established through:
- Engineering benchmarks and physical phantom measurements: For image quality assessment against established standards (e.g., FDA's "Information for Industry: X-ray Imaging Devices - Laboratory Image Quality and Dose Assessment, Tests and Standards").
- Compliance with international safety and performance standards: IEC 60601 series, 21 CFR Subchapter J.
- Conformance to design specifications and user needs: Through verification and validation activities.
8. The sample size for the training set
- Not applicable. This is not a machine learning or AI device that requires a training set in the conventional sense. The "software updates" mentioned are more likely based on engineering principles and signal processing than machine learning training.
9. How the ground truth for the training set was established
- Not applicable, as there is no explicit "training set" for an AI algorithm. Software development and calibration would typically rely on engineering specifications, physical models, and potentially empirical adjustments based on performance testing.
(153 days)
The OEC Elite MiniView (mobile mini C-Arm) is designed to provide physicians with real-time fluoroscopic visualization of patients of all ages. It is intended to aid physicians and surgeons during diagnostic or therapeutic treatment/surgical procedures of the limbs/extremities and shoulders including, but not limited to, orthopedics and emergency medicine.
The OEC Elite™ MiniView™ is a mobile fluoroscopic mini C-arm system that provides fluoroscopic images of patients of all ages during diagnostic, treatment, and surgical procedures of the shoulders, limbs, and extremities. The system consists of a C-arm attached to an image processing workstation. A CsI(Tl)-CMOS flat panel detector and the identical X-ray source monoblock are used for image acquisition.
The C-arm supports the high-voltage generator, X-ray tube, X-ray controls, collimator, and the FPD. The C-arm is capable of performing linear (vertical, horizontal, orbital) and rotational motions that allow the user to position the X-Ray imaging components at various angles and distances with respect to the patient extremity anatomy to be imaged. The C and support arm are mechanically balanced allowing for ease of movement and capable of being "locked" in place using an electronically controlled braking system. The workstation is a stable mobile platform that supports the C-arm, image display monitor(s), image processing equipment/software, recording devices, data input/output devices and power control systems.
The OEC Elite™ MiniView™ is a mobile fluoroscopic mini C-arm system. The provided document is a 510(k) Premarket Notification Submission, which focuses on demonstrating substantial equivalence to a predicate device, rather than defining and proving acceptance criteria in the typical sense of a clinical trial for a novel AI device. However, based on the information provided, we can extract details about the performance evaluation done to demonstrate this equivalence.
Here's an analysis based on the provided text, structured according to your request:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics like sensitivity, specificity, or AUC as one might find for an AI diagnostic algorithm. Instead, the evaluation focuses on demonstrating that the performance of the proposed device (OEC Elite™ MiniView™) is at least equivalent to the predicate device (OEC Mini 6800 Digital Mobile C-arm) and reference devices in terms of image quality and clinical capability.
The "acceptance criteria" here implicitly revolve around ensuring the safety and effectiveness of the updated device, which includes:
- Meeting design input and user needs.
- Compliance with regulatory standards (IEC 60601-1 Ed.3 series, IEC 60601-2-54, IEC 60601-2-43, and 21CFR Subchapter J performance standards).
- Image quality and clinical capability at least equivalent to the predicate device.
Performance Aspect | Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|---|
Overall Performance | System meets design input, user needs, and regulatory standards; image quality and clinical capability at least equivalent to predicate. | "The system has been NRTL tested and certified compliant... All applicable 21CFR Subchapter J performance standards are met. The OEC Elite™ MiniView™ system was developed under the GE Healthcare's design controls processes... and additional engineering bench testing was performed... to demonstrate system performance." |
Image Quality | Image quality metrics (e.g., resolution, noise reduction) are adequate for viewing extremities and at least equivalent to predicate/reference devices. | Pixel size: 100 microns (reference device: 75 microns); "larger for reducing image noise. The resolution is higher than the Image Intensifier on the predicate." Array size: 1.3k x 1.3k; "adequate for viewing extremities." Field size: full field 13 cm circle, limited field 10 cm circle; "appropriate for viewing extremities." |
Clinical Capability | Demonstrated ability to provide fluoroscopic visualization in diagnostic/therapeutic/surgical procedures of limbs/extremities/shoulders equivalently to the predicate. | Cadaver study results: "For all procedures, the study confirmed the clinical capability and overall quality of the images produced by the OEC Elite™ MiniView™ was at least equivalent to that of the Mini 6800 Digital Mobile C-Arm." |
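The detector figures reported in the table are internally consistent: a 1.3k-pixel axis at a 100-micron pitch spans exactly the stated 13 cm full field, as a quick arithmetic check shows:

```python
pixels = 1300      # one axis of the 1.3k x 1.3k detector array
pitch_um = 100     # 100-micron pixel size

detector_width_cm = pixels * pitch_um / 10_000   # microns -> centimeters
print(detector_width_cm)   # 13.0, matching the 13 cm full-field circle
```

The 10 cm limited field is then simply a collimated subset of the same detector.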
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document mentions a cadaver study involving two cadavers on which nineteen orthopedic procedures were performed across a variety of extremity anatomies.
- Data Provenance: The cadaver study was performed as part of the submission process, implying it was a prospective evaluation specifically for this device. The country of origin of the cadavers is not specified in the provided text.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Two independent physicians were used to evaluate the images.
- Qualifications of Experts: The document states they were "two independent physicians", and given the nature of the device (fluoroscopic imaging for orthopedic procedures), it's highly probable these were orthopedic surgeons or radiologists with expertise in musculoskeletal imaging and procedures. However, their specific specializations or years of experience are not explicitly stated in the provided text.
4. Adjudication Method for the Test Set
The document states: "The performance of the subject device to the predicate was also performed by two independent physicians." It further states that the "study confirmed the clinical capability and overall quality of the images produced by the OEC Elite™ MiniView™ was at least equivalent to that of the Mini 6800 Digital Mobile C-Arm." This implies a consensus or comparative evaluation by the two physicians. However, a specific adjudication method (e.g., 2+1, 3+1, etc.) is not explicitly detailed. It's presented as a direct comparison where both physicians apparently agreed on the equivalence.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- Not a typical MRMC study: The evaluation described is not a traditional MRMC comparative effectiveness study focused on quantifying human reader improvement with AI assistance. This device is an imaging system (hardware and software for image acquisition and processing), not an AI-powered diagnostic aide designed to improve human reader performance for a specific task.
- Focus on System Equivalence: The study aimed to demonstrate the system's overall clinical capability and image quality equivalence to a predicate device, as evaluated by human readers (the two physicians), rather than measuring the effect size of AI assistance on human readers.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Standalone "Algorithm" Performance: The device itself is an imaging system, not purely an algorithm. Its performance is inherent in the images it produces. Therefore, "standalone" in this context refers to the system's ability to produce diagnostically acceptable images.
- Bench Testing and Image Quality Tests: The document details extensive "engineering bench testing" and "image quality/performance testing" identified for fluoroscopy. These tests evaluate the system's technical image output without human interpretation as the primary endpoint. This can be considered the equivalent of "standalone" performance for an imaging device. Specifically mentioned are:
- Demonstration of system performance.
- Imaging performance evaluation using anthropomorphic phantoms (including a pediatric anthropomorphic phantom).
- All image quality/performance testing identified for fluoroscopy in FDA's "Information for Industry: X-ray Imaging Devices - Laboratory Image Quality and Dose Assessment, Tests and Standards" was performed.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
- For the Cadaver Study (Test Set): The ground truth for evaluating clinical capability and image quality seems to be based on expert consensus (or agreement) between the two independent physicians regarding the equivalence of the images produced by the OEC Elite™ MiniView™ compared to the predicate device for diagnostic and procedural guidance in the cadaveric setting. There is no mention of pathology or outcomes data for this specific evaluation, as it's a technical performance and clinical utility assessment on cadavers.
- For Bench Testing: The ground truth for bench testing and phantom studies would be defined by known physical properties of the phantoms and established engineering specifications and standards for image quality metrics.
8. The Sample Size for the Training Set
The document describes a medical imaging device (C-arm), not an AI algorithm that requires a separate training set. Therefore, the concept of a "training set sample size" as typically applied to machine learning models is not applicable here. The device's underlying technology and software architecture are based on existing, proven designs (predicate and reference devices), with modifications validated through engineering bench tests and the cadaver study.
9. How the Ground Truth for the Training Set Was Established
As noted in point 8, there is no explicit "training set" in the context of an AI algorithm described in this document. The device's development involved standard engineering practices, which could be considered an iterative design and testing process that refines the system's performance. The "ground truth" during this development would be based on engineering specifications, performance targets, and established imaging principles, rather than a labeled dataset for training an AI model.