510(k) Data Aggregation
(125 days)
OXO
The Orthoscan TAU Mini C-arm X-ray system is designed to provide physicians with general fluoroscopic visualization, using pulsed or continuous fluoroscopy, of a patient including but not limited to, diagnostic, surgical, and critical emergency care procedures for patients of all ages including pediatric populations when imaging limbs/extremities, shoulders, at locations including but not limited to, hospitals, ambulatory surgery, emergency, traumatology, orthopedic, critical care, or physician office environments.
The proposed modifications to Orthoscan TAU Mini C-Arm system models 1000-0015, 1000-0016, 1000-0017 retain identical function as the predicate Orthoscan TAU Mini C-arm (K213113) and the Orthoscan VERSA Mini C-arm (K243452) as a mobile fluoroscopic mini C-arm system that provides fluoroscopic images of patients of all ages during diagnostic, treatment and surgical procedures involving anatomical regions such as, but not limited to, the extremities, limbs, shoulders, knees, and hips. The system consists of a C-arm support attached to the image workstation.
The changes to the Orthoscan TAU Mini C-Arm X-ray system models 1000-0015, 1000-0016, 1000-0017 represent a modification of our presently legally marketed devices Orthoscan TAU Mini C-Arm (K213113) and Orthoscan VERSA Mini C-arm (K243452). The proposed modifications to the predicate encompass the implementation of a LINUX based operating system upgrade from Ubuntu version 16.04 to Ubuntu version 20.04, revisions to generator printed circuit board to improve power management efficiency, implementation of an alternate generator radiation shielding material to reduce environmental impact of lead, update to wireless footswitch communication protocol, an alternate detector for Orthoscan TAU Mini C-arm model 1000-0017 and the introduction of an optional 32in. display monitor.
The proposed device replicates the features and functions of the predicate devices without impacting image clarity or dose levels.
For both the predicate TAU (K213113) and proposed device, the following are unchanged: C-arm support of flat panel detector, generator and x-ray controls, mechanical connections, balancing, locking, rotations, work-station platform, main user interface controls, touch screen interface, selectable imaging, X-ray technique control, entry of patient information, wired footswitch operation, interface connection panel and DICOM fixed wire and wireless network interfaces.
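The DICOM network interfaces listed above are standard image store/forward services rather than anything unique to this device. Purely as a hedged illustration (the Orthoscan software itself is not published), the sketch below shows what a DICOM C-STORE push of an acquired image to a PACS node can look like using the open-source pynetdicom library; the AE titles, address, port, file name, and SOP class choice are assumptions made for the example.

```python
# Hypothetical illustration of a DICOM C-STORE push from an imaging workstation
# to a PACS node; not the vendor's actual implementation.
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import SecondaryCaptureImageStorage  # SOP class chosen for illustration only

ae = AE(ae_title="MINI_CARM")                         # placeholder calling AE title
ae.add_requested_context(SecondaryCaptureImageStorage)

ds = dcmread("fluoro_frame.dcm")                      # placeholder DICOM file
assoc = ae.associate("192.0.2.10", 104, ae_title="PACS")  # placeholder PACS address/port
if assoc.is_established:
    status = assoc.send_c_store(ds)                   # returns a DICOM status dataset
    print("C-STORE status: 0x{0:04X}".format(status.Status))
    assoc.release()
else:
    print("Association with PACS failed")
```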
The provided FDA 510(k) clearance letter and summary for the Orthoscan TAU Mini C-Arm details a modification to an existing device rather than a new device with novel performance claims. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are primarily focused on demonstrating substantial equivalence to existing predicate devices, particularly in terms of image quality and safety, rather than establishing absolute performance metrics for a completely new clinical claim.
Here's a breakdown of the requested information based on the provided document:
Acceptance Criteria and Reported Device Performance
The core acceptance criterion for this 510(k) submission is to demonstrate substantial equivalence to the predicate devices (Orthoscan TAU Mini C-Arm K213113 and Orthoscan VERSA Mini C-arm K243452) in terms of image quality, safety, and functionality, despite the implemented modifications.
Since this is a modification to an existing fluoroscopic X-ray system, the "performance" is assessed relative to the predicate, with the aim of ensuring no degradation, and ideally, slight improvement in certain aspects. The document doesn't provide a table of precise quantitative acceptance criteria for image quality metrics (e.g., spatial resolution in lp/mm, contrast-to-noise ratio) because the primary goal was comparative equivalence, not meeting predefined numerical thresholds for a new claim.
However, the reported device performance, relative to the predicate, is implicitly stated:
Acceptance Criterion (Implicit) | Reported Device Performance (Relative to Predicate) |
---|---|
Image Quality Equivalence/Improvement | "His conclusion was that the image quality at same or similar patient dose rates will result in equivalent or slight improvement in patient care (images) for the proposed modified TAU device over the predicate device." |
"Image quality acquired using the proposed alternate detector was of equal or slightly improved image quality..." | |
Dose Rate Equivalence | "the image quality at same or similar patient dose rates..." |
"maintaining or improving image at same or similar dose..." | |
Safety (Radiation, Mechanical, Electrical, Cybersecurity) | "The proposed modified Orthoscan TAU Mini C-arm's potential radiation, mechanical, and electrical hazards are identified and analyzed as part of risk management and controlled by meeting the applicable CDRH 21 CFR subchapter J performance requirements, Recognized Consensus Standards, designing and manufacturing under Ziehm-Orthoscan, Inc. Quality System, and system verification and validation testing ensure the device performs to the product specifications and its intended use. The adherence to these applicable regulations and certification to Recognized Consensus Standards that apply to this product provides the assurance of device safety and effectiveness." |
"...cybersecurity controls are improved..." | |
Certified compliant with 60601-1 ED 3.2 series, including IEC 60601-2-54, as well as IEC 62304:2006 + A1:2015 Medical device software – Software life cycle processes. Met all applicable sections of 21 CFR Subchapter J performance standards. Software and cybersecurity testing performed to meet requirements from FDA guidances "Content of Premarket Submissions for Device Software Functions" (2023) and "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions" (2023). | |
Functionality Equivalence | "The proposed device replicates the features and functions of the predicate devices without impacting image clarity or dose levels." |
"For both the predicate TAU (K213113) and proposed device, the following are unchanged; C-arm support of flat panel detector, generator and x-ray controls, mechanical connections, balancing, locking, rotations, work-station platform, main user interface controls, touch screen interface, selectable imaging, X-ray technique control, entry of patient information, wired footswitch operation, interface connection panel and DICOM fixed wire and wireless network interfaces." |
Study Details:
- Sample size used for the test set and the data provenance:
- Test Set Sample Size: The document does not specify a numerical "sample size" in terms of number of unique phantoms or individual images. It states "Numerous image comparison sets were taken" and "Images collected included phantom motion that was representative of typical clinical use". For the alternate detector evaluation, "Images collected included phantom motion... These images were reviewed by a Certified Radiologist who confirmed that the image quality acquired using the proposed alternate detector was of equal or slightly improved image quality...".
- Data Provenance: The data was generated through "Non-clinical image and dose lab testing" and "bench testing". This implies controlled laboratory conditions, not patient data. Country of origin for data generation is not explicitly stated but can be inferred as likely being in the US, given the US-based company and FDA submission. The study was inherently prospective in that new images were generated for the purpose of the comparison.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: "a Radiologist" (singular) performed an assessment of individual images.
- Qualifications of Experts: "Certified Radiologist". No further details on years of experience are provided, but "Certified" implies meeting professional board certification standards.
- Adjudication method for the test set:
- The document states "a Radiologist performed an assessment of individual images arranged in groups of image sets." There is no mention of an adjudication method involving multiple readers, as only a single radiologist was used for the image quality assessment.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, an MRMC study was not done. The document explicitly states: "Orthoscan TAU Mini C-arm system did not require live human clinical studies to support substantial equivalence...". The image quality assessment was performed by a single certified radiologist using phantom images. Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported, as AI assistance is not the subject of this 510(k) (it's a hardware/OS/component modification, not an AI diagnostic tool).
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- This question is not applicable in the context of this 510(k). The device is an imaging system (C-arm), not an AI algorithm that performs standalone diagnoses. Its performance is assessed in terms of image generation quality, which is then interpreted by a human user (physician).
- The type of ground truth used:
- The "ground truth" for the image quality comparison was established by expert assessment (a Certified Radiologist's qualitative judgment) of images generated from anthropomorphic (PMMA material) phantoms and anatomical simulation phantoms. This is considered a "phantom-based" ground truth, which is a common approach for demonstrating equivalence in imaging system modifications where clinical studies are not deemed necessary.
- The sample size for the training set:
- Not applicable. The document describes modifications to an existing fluoroscopic X-ray system, including an OS upgrade and hardware changes. There is no indication of a machine learning or AI component that would require a "training set" in the conventional sense of data used to train an algorithm. The development involved risk analysis, design reviews, component testing, integration testing, performance testing, safety testing, and product use testing of the system itself.
- How the ground truth for the training set was established:
- Not applicable, as there is no "training set" for an AI algorithm in this context.
(68 days)
OXO
The Orthoscan VERSA Mini C-arm X-ray system is designed to provide physicians with general fluoroscopic visualization, using pulsed or continuous fluoroscopy, of a patient including but not limited to, diagnostic, surgical, and critical emergency care procedures for patients of all ages including pediatric populations when imaging limbs/extremities, shoulders, at locations including but not limited to, hospitals, ambulatory surgery, emergency, traumatology, orthopedic, critical care, or physician office environments.
The proposed modifications to Ziehm-Orthoscan, Inc. VERSA Mini C-Arm series (which we will refer to internally and in this submittal as Orthoscan VERSA, for distinction from predicate Orthoscan TAU and Orthoscan Mobile DI) retain identical function as the predicate TAU Mini C-arm (K213113) and predicate Mobile DI (K113708) as a mobile fluoroscopic mini C-arm system that provides fluoroscopic images of patients of all ages during diagnostic, treatment and surgical procedures involving anatomical regions such as, but not limited to, the extremities, limbs, shoulders, and knees. The system consists of a C-arm support attached to the image workstation.
The changes to the Orthoscan VERSA Mini C-arm X-ray system represent a modification of our presently legally marketed device Orthoscan TAU Mini C-Arm K213113 and Orthoscan Mobile DI Mini C-arm (K113708). The proposed modifications to the predicate encompass the implementation of a LINUX based operating system upgrade from Ubuntu version 16.04 to Ubuntu version 20.04 and related software revisions, modification to the mechanical design to further facilitate desk top use, revisions to generator printed circuit board to improve power management efficiency, implementation of an alternate generator radiation shielding material to reduce environmental impact of lead and an update to wireless footswitch communication protocol.
The provided text describes the Orthoscan VERSA Mini C-Arm, a fluoroscopic X-ray system. The 510(k) summary outlines the device's characteristics, modifications, and the studies conducted to demonstrate substantial equivalence to predicate devices (Orthoscan TAU Mini C-Arm K213113 and Orthoscan Mobile DI Mini C-Arm K113708).
Acceptance Criteria and Device Performance
The document does not explicitly present a table of acceptance criteria with numerical performance targets and reported device performance based on objective metrics. Instead, the "acceptance criteria" are implied through the statement that the device was tested to be "certified compliant with 60601-1 ED 3.2 series, including IEC 60601-2-54" and "met all applicable sections of 21 CFR Subchapter J performance standards." These regulatory and consensus standards serve as the de facto acceptance criteria.
The "reported device performance" is qualitative and comparative, focusing on maintaining or improving image quality and safety compared to the predicate devices.
Implied Acceptance Criteria and Reported Performance (from the text):
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Compliance with IEC 60601-1 ED 3.2 series, including IEC 60601-2-54 (Medical Electrical Equipment - General requirements for basic safety and essential performance, and particular requirements for medical electrical equipment for X-ray equipment) | "The device was tested by certified test laboratory resulting in device being certified compliant with 60601-1 ED 3.2 series, including IEC 60601-2-54." |
Compliance with 21 CFR Subchapter J performance standards (Performance Standards for Diagnostic X-Ray Systems and Their Major Components) | "Further, the device met all applicable sections of 21 CFR Subchapter J performance standards." |
Image quality and dose levels (relative to predicate) | "The proposed device replicates the features and functions of the predicate devices without impacting image clarity or dose levels." |
"His conclusion was that the image quality at same or similar patient dose rates will result in equivalent or slight improvement in patient care (images) for the proposed modified VERSA device over the predicate device. Therefore, Ziehm-Orthoscan, Inc. believes the VERSA Mini C-arm image quality, safety and effectiveness to be substantially equivalent to that of the predicate device Orthoscan TAU (K213113) and Orthoscan Mobile DI (K113708)." | |
Usability and User Interface | "Usability testing concluded that there were no previously unknown use errors or hazardous situations and no unacceptable residual risks due to the changes in user interface from the predicate device Orthoscan TAU (K213113) and Orthoscan Mobile DI (K113708)." |
Safety and Effectiveness (overall substantial equivalence) | "Ziehm-Orthoscan, Inc. considers the proposed modified VERSA Mini C-arm to be as safe, as effective, and performs substantially equivalent to the predicate device Orthoscan TAU Mini C-arm (K213113) and Orthoscan Mobile DI Mini C-arm (K113708) in accordance with its labeling." |
"The proposed modified Ziehm-Orthoscan, Inc. VERSA Mini C-arm's potential radiation, mechanical, and electrical hazards are identified and analyzed as part of risk management and controlled by meeting the applicable CDRH 21CFR subchapter J performance requirements, Recognized Consensus Standards, designing and manufacturing under Ziehm-Orthoscan, Inc. Quality System, and system verification and validation testing ensure the device performs to the product specifications and its intended use. The adherence to these applicable regulations and certification to Recognized Consensus Standards that apply to this product provides the assurance of device safety and effectiveness." |
Study Details:
- Sample Size Used for the Test Set and Data Provenance:
- Sample Size: "Numerous image comparison sets were taken" using anthropomorphic (PMMA) phantoms and anatomical simulation phantoms. The exact number of images or comparison sets is not specified.
- Data Provenance: The data was generated through "Non-clinical image and dose lab testing" using phantoms. This implies the data was generated specifically for this study, likely in the US, and is prospective in nature as it involved creating new images with the proposed and predicate devices.
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Number of Experts: "A Radiologist" (singular) performed the assessment.
- Qualifications: The qualification mentioned is "Radiologist." No further details on experience or specialization are given.
- Adjudication Method for the Test Set:
- Method: "A Radiologist performed an assessment of individual images arranged in groups of image sets." It appears to be a single-reader assessment without an explicit multi-reader adjudication process (e.g., 2+1 or 3+1) mentioned.
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance:
- Not Applicable: This was not an MRMC study and did not involve AI assistance. The study was a direct image comparison between the modified device and predicate devices performed by a single radiologist.
- If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- Not Applicable: This device is an X-ray imaging system, not an AI algorithm. Its performance is assessed through its ability to produce images and meet regulatory standards, with human assessment of image quality.
- The Type of Ground Truth Used:
- Ground Truth: The ground truth for image quality comparison was established by the qualitative "assessment" of the single radiologist, who concluded that the image quality was "equivalent or slight improvement" over the predicate device. This is effectively an expert consensus (single expert) on image quality, derived from images of phantoms. It is not pathology or outcomes data.
- The Sample Size for the Training Set:
- Not Applicable: This is a hardware modification submission for a medical imaging device, not an AI/machine learning device. Therefore, there is no "training set" in the context of data science.
- How the Ground Truth for the Training Set Was Established:
- Not Applicable: As there is no training set mentioned, this question is not relevant to this submission.
In summary, the substantial equivalence demonstration for the Orthoscan VERSA Mini C-Arm primarily relied on non-clinical bench testing, compliance with international and federal standards, and a qualitative image quality comparison by a single radiologist using phantom images, rather than human clinical studies or complex AI validation methods. The acceptance criteria were broadly defined by compliance with specified regulatory and consensus standards, and the reported performance was a qualitative assessment of non-inferiority or slight improvement in image quality.
(276 days)
OXO
The OEC One ASD mobile C-arm system is designed to provide fluoroscopic and digital spot images of adult and pediatric patient populations during diagnostic, interventional, and surgical procedures. Examples of a clinical application may include: orthopedic, gastrointestinal, endoscopic, urologic, vascular, critical care, and emergency procedures.
The OEC One ASD is a mobile C-arm X-ray system to provide fluoroscopic images of the patient during diagnostic, interventional, and surgical procedures such as orthopedic, gastrointestinal, endoscopic, urologic, neurologic, vascular, critical care, and emergency procedures. These images help the physician visualize the patient's anatomy and localize clinical regions of interest. The system consists of a mobile stand with an articulating arm attached to it to support an image display monitor (widescreen monitor) and a TechView tablet, and a "C" shaped apparatus that has a flat panel detector on the top of the C-arm and the X-ray Source assembly at the opposite end.
The OEC One ASD is capable of performing linear motions (vertical, horizontal) and rotational motions (orbital, lateral, wig-wag) that allow the user to position the X-ray image chain at various angles and distances with respect to the patient anatomy to be imaged. The C-arm is mechanically balanced allowing for ease of movement and capable of being "locked" in place using a manually activated lock.
The subject device is labelled as OEC One ASD.
The provided document is a 510(k) Summary of Safety and Effectiveness for the GE Hualun Medical Systems Co. Ltd. OEC One ASD, a mobile C-arm X-ray system. The document focuses on demonstrating substantial equivalence to a predicate device, OEC One (K182626), rather than presenting a study with specific acceptance criteria and detailed device performance results for a new AI/CAD feature.
The submission is for a modification of an existing device, primarily introducing an amorphous silicon (a-Si) flat panel detector as the image receptor and updating some hardware and software components. The changes are stated to enhance device performance and are discussed in terms of their impact on safety and effectiveness, concluding that no new hazards or concerns were raised.
Therefore, the information required for a detailed description of acceptance criteria and a study proving device performance, especially for AI/CAD features, is largely not present in this document. The document centers on demonstrating that the modified device maintains safety and effectiveness and is substantially equivalent to the predicate, rather than detailing a study against specific acceptance criteria for a novel functionality.
However, I can extract the available relevant information and highlight what is missing based on your request.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly define "acceptance criteria" in the context of a study demonstrating novel AI/CAD feature performance. Instead, it presents a comparison table of technical specifications between the proposed device (OEC One ASD) and the predicate device (OEC One K182626) to demonstrate substantial equivalence. The "Acceptance Criteria" here are implicitly derived from the predicate's performance and safety profiles.
Feature / Performance Metric | Predicate Device (OEC One K182626) | Subject Device (OEC One ASD) | Discussion of Differences / Equivalence |
---|---|---|---|
Image Receptor | Image Intensifier | 21cm Amorphous Silicon (a-Si) Flat Panel Detector | Substantially Equivalent. Change to enhance device performance. |
DQE | 65% | 70% (0 lp/mm) | Enhanced DQE, indicating improved image quality. |
MTF | 45% | 46% (1.0 lp/mm) | Slightly enhanced MTF, indicating improved image quality (see the note on DQE and MTF after this table). |
Field of View | 9 inch, 6 inch, 4.5 inch | 21 cm, 15 cm, 11 cm | No new hazards or hazard situations. Performance testing indicated effectiveness. |
Image Matrix Size | 1000x1000 | 1520x1520 | Substantially Equivalent. Driven by detector pixel matrix for higher resolution. |
Image Shape | Circle | Squircle | Substantially Equivalent. Enhanced viewing area without typically unnecessary corner areas. |
Anti-scatter Grid | Line Rate: 60 L/cm, Ratio: 10:1, Focal Distance: 100 cm | Line Rate: 74 L/cm, Ratio: 14:1, Focal Distance: 100 cm | Substantially Equivalent. Specification updated based on new image receptor. |
X-ray Generator | Fluoroscopy: 0.1-4.0 mA | Fluoroscopy: 0.1-8.0 mA | Substantially Equivalent. mA range change for optimized image quality (ABS). No new safety/effectiveness concerns. |
Digital Spot: 0.2-10.0 mA (100-120V system) | Digital Spot: 2-10.0 mA (100-120V system) | Substantially Equivalent. mA range change for optimized image quality (increasing mA on thin anatomy). No new safety/effectiveness concerns. | |
Imaging Modes | Digital Spot: Normal Dose, Low Dose | Digital Spot: Normal Dose | Low Dose mode not provided for Digital Spot as high mA exposure ensures quality; similar functionality available via Fluoroscopy. |
Roadmap: Normal Dose, Low Dose | Removed | Roadmap mode removed based on marketing; similar functionality via peak opacify function on cine. | |
Imaging Features | Zoom & Roam | Zoom (Live Zoom) & Roam | Improved with Live Zoom during fluoro/cine. |
N/A | Digital Pen | Added for planning/educational purposes. | |
Monitor Display | Resolution: 1920x1080 | Resolution: 3840 x 2160 | Substantially Equivalent. Updated to higher resolution due to IT advancement. |
8bit image display | 10bit image display | Substantially Equivalent. Better display technology. | |
Tech View Tablet | OS: Android 5.1 | OS: Android 11.0 | Substantially Equivalent. OS upgraded due to IT advancement. |
C-Arm Physical Dimensions | Orbital Rotation: 120° (90° underscan /30° overscan) | Orbital Rotation: 150° (95° underscan /55° overscan) | Substantially Equivalent. Larger range for user convenience. |
Image Storage | 100,000 Images | 150,000 Images | Substantially Equivalent. Driven by IT advancement (more storage). |
Wireless Printing Module | N/A | Wireless Printing Module | Substantially Equivalent. Not for diagnostic use or device control. No new risks. |
Video Distributor | DVI, BNC | DP, BNC | Substantially Equivalent. Driven by IT advancement. |
Laser Aimer | Red Laser, Class IIIa/3R, 650 nm, ≤ 5.0 mW | Green Laser, Class 2, 510-530nm, 1mW | Substantially Equivalent. Updated for green laser and convenience (tube side). Both meet laser product requirements. |
Image Processing | ADRO (based on CPU) | ADRO (based on GPU) | Substantially Equivalent. GPU for better calculation speed. All other listed image processing functions are the same. |
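For context on the two detector metrics compared above, here is a brief note using the standard textbook relationships; these definitions are general background (DQE and MTF are typically measured per IEC 62220-1 for flat panel detectors) and are not taken from, or claimed by, the submission.

```latex
% Standard definitions of detector resolution and dose efficiency (background only)
\[
  \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}^{2}_{\text{out}}(f)}{\mathrm{SNR}^{2}_{\text{in}}(f)}
  \;=\; \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)}
\]
```

Here \(f\) is spatial frequency in lp/mm, \(\bar{q}\) is the incident photon fluence, and NNPS is the normalized noise power spectrum. A higher DQE at \(f = 0\) (70% vs. 65% in the table) indicates better output signal-to-noise for the same entrance dose, while MTF at 1.0 lp/mm describes how much subject contrast the detector preserves at that spatial frequency.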
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document. The submission states, "comparative clinical images were evaluated to demonstrate substantial equivalence for the OEC One ASD compared to the cleared predicate," but no details on the sample size, data provenance (e.g., country of origin, retrospective/prospective nature), or specific evaluation methodology for these clinical images are given.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The document states that "comparative clinical images were evaluated," but it does not specify the number or qualifications of experts involved in this evaluation or the establishment of ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document. The method used to resolve discrepancies or establish a consensus for the evaluation of comparative clinical images is not described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study focusing on human reader improvement with AI assistance was not mentioned or described in this 510(k) submission. The document discusses device modifications and their impact on image quality and functionality, but not the comparative effectiveness of human readers utilizing AI.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This submission does not describe a standalone performance study for an AI algorithm. The device itself is an X-ray system, and while it has "Image Processing" features, these are not presented as standalone AI algorithms for diagnostic assistance but rather as integrated components affecting image generation and display characteristics. The update to ADRO from CPU to GPU based processing is noted for speed, but its standalone performance as an AI algorithm is not evaluated or presented.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document mentions "comparative clinical images were evaluated," but it does not specify the type of ground truth against which these images were assessed. Since the primary focus is on demonstrating substantial equivalence of technical image characteristics rather than validating a diagnostic AI output, a traditional "ground truth" (such as pathology or outcomes data that a specific AI would predict) is not explicitly detailed. The implicit ground truth would be the expected imaging performance and diagnostic utility comparable to the predicate device.
8. The sample size for the training set
This information is not provided in the document. The document describes modifications to an existing X-ray system, including software updates. It states, "Its software is based on the architecture, design and code base of the predicate device OEC One (K182626)," and underwent a standard software development lifecycle. There is no mention of a separate "training set" in the context of an AI/CAD algorithm as typically understood for deep learning models.
9. How the ground truth for the training set was established
Since no training set for an AI/CAD algorithm is mentioned (refer to point 8), the method for establishing its ground truth is not applicable/provided in this document.
(134 days)
OXO
The OEC 3D mobile fluoroscopy system is designed to provide fluoroscopic and digital spot images of adult and pediatric populations during diagnostic, interventional, and surgical procedures. Examples of a clinical application may include: orthopedic, gastrointestinal, endoscopic, urologic, neurologic, vascular, cardiac, critical care, and emergency procedures.
The OEC 3D is a mobile fluoroscopic C-Arm imaging system used to assist trained surgeons and other qualified physicians. The system is used to provide fluoroscopic X-ray images and volumetric reconstructions during diagnostic, interventional, and surgical procedures. These images help the physician visualize the patient's anatomy and interventional tools. This visualization helps to localize clinical regions of interest and pathology. The images provide real-time visualization and records of pre-procedure anatomy, in vivo-clinical activity and post-procedure outcomes. The system is composed of two major components, a C-Arm and a tethered Workstation. The C-Arm is a stable mobile platform capable of performing linear motions (vertical, horizontal) and rotational motions (orbital, lateral) that allow the user to position the X-ray image chain at various angles and distances with respect to the patient anatomy to be imaged. The C-Arm is comprised of the high voltage generator, software, X-ray control, and a "C" shaped image gantry, which supports an X-ray tube and a Flat Panel Detector. Its functionality is controlled by software on the Workstation and on the OEC Touch, a digital flat panel controller mounted on the cross-arm. The workstation is a stable mobile platform with an articulating arm supporting a color image high resolution LCD display monitor. It also includes image processing equipment/software, recording devices, data input/output devices and power control systems. The Workstation is the primary user interface to the system and can be located at a convenient location in the room independent of where the C-Arm is located.
The provided text describes modifications to the OEC 3D mobile fluoroscopy system, specifically introducing a "3D Spine Centerline Tool with Manual Labeling of the Vertebrae," a "3D Screw Evaluation Tool," and "Augmented Fluoroscopy." The document indicates that these modifications do not require clinical data to establish safety or efficacy and that the device meets acceptance criteria through non-clinical performance testing.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance/Testing Description |
---|---|
3D Spine Centerline Tool: | Evaluated on cadaveric volume datasets of the spine representing different imaging conditions. The tool identifies vertebrae levels in a 3D volume with centroids and facilitates oblique viewing along the spine centerline defined by the centroids. It also gives the user the option to label vertebrae levels manually (an illustrative sketch of a centroid-based centerline follows this table). |
3D Screw Evaluation Tool: | Evaluated on cadaveric volume datasets of the spine representing different imaging conditions. |
Augmented Fluoroscopy Accuracy: | Performance testing was done to quantify the error between the projected 3D point of interest on live fluoroscopy (2D X-ray) and its actual position in the associated fluoroscopic image. This testing was conducted using a rigid phantom. |
General System Performance & Safety: | Successful completion of verification and validation testing as required by design control procedures. Compliant with IEC 60601-1 (including IEC 60601-1-2, 60601-1-3, 60601-2-43, and 60601-2-54), and all applicable 21CFR Subchapter J performance standards (1020.30 Diagnostic X-Ray Systems and their major components, 1020.32 Fluoroscopic equipment, 1040.10 Laser products). Developed under GE OEC Medical Systems Quality Management System, including risk analysis, required reviews, design reviews, unit level testing, integration testing, performance testing, safety testing, and simulated use testing. |
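The document describes the centerline feature only at a high level (vertebral centroids defining a curve used for oblique viewing); the actual algorithm is not disclosed. Purely as a hedged illustration of one common approach, the sketch below fits a smooth parametric curve through centroids and derives an oblique viewing direction from its tangent. The SciPy spline choice, function names, and centroid values are assumptions, not the vendor's method.

```python
# Illustrative sketch only: centroid-based spine centerline with oblique viewing
# directions. This is NOT the OEC 3D algorithm, which is not published.
import numpy as np
from scipy.interpolate import splprep, splev

def centerline_from_centroids(centroids, n_samples=200):
    """Fit a smooth 3D curve through vertebral centroids given as [(x, y, z), ...] in mm."""
    pts = np.asarray(centroids, dtype=float).T            # shape (3, N)
    k = min(3, pts.shape[1] - 1)                          # spline degree must be < number of points
    tck, _ = splprep(pts, s=0.0, k=k)                     # interpolating parametric spline
    u = np.linspace(0.0, 1.0, n_samples)
    curve = np.stack(splev(u, tck), axis=1)               # sampled centerline points, (n_samples, 3)
    tangent = np.stack(splev(u, tck, der=1), axis=1)      # first derivative along the curve
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    return curve, tangent                                 # tangent serves as the oblique plane normal

# Made-up lumbar centroids (e.g. user-labeled L1-L5 centers, in mm):
centroids = [(0, 0, 0), (2, 1, 35), (3, 3, 70), (2, 6, 105), (0, 8, 140)]
curve, tangent = centerline_from_centroids(centroids)
print(curve.shape, tangent.shape)                         # (200, 3) (200, 3)
```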
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The text states that "cadaveric volume datasets of the spine representing different imaging conditions" were used for the 3D Spine Centerline Tool and 3D Screw Evaluation Tool, and a "rigid phantom" was used for Augmented Fluoroscopy. Specific numbers for the cadaveric datasets or phantom instances are not provided.
- Data Provenance:
- Cadaveric datasets: Implies human cadavers. Country of origin is not specified.
- Rigid phantom: Artificial, not human data.
- Retrospective or Prospective: Not specified, but given the nature of cadaver and phantom studies, they are typically considered controlled experimental setups rather than retrospective or prospective clinical studies.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not provide information on the number of experts used or their qualifications for establishing ground truth for the test set.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not report a multi-reader multi-case (MRMC) comparative effectiveness study. The focus is on the performance of the device's new features with respect to established metrics (e.g., accuracy for Augmented Fluoroscopy) and demonstrating substantial equivalence to the predicate device through non-clinical testing. It also explicitly states, "The new performance claims did not require clinical data in order to establish safety or efficacy."
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
The testing described for the "3D Spine Centerline Tool with Labeling" and "3D Screw Evaluation Tool" on cadaveric data, and the "Augmented Fluoroscopy" error quantification with a rigid phantom, suggests standalone performance evaluation of these features. The documentation does not describe a human-in-the-loop study for these features, rather it focuses on the intrinsic performance of the algorithms.
7. Type of Ground Truth Used
- 3D Spine Centerline Tool and 3D Screw Evaluation Tool: The ground truth for these tools was likely established through precise measurements or expert annotations on the cadaveric volume datasets, though the exact method is not detailed.
- Augmented Fluoroscopy: The ground truth for accuracy was established by the known true positions within the rigid phantom, against which the projected 3D points were compared.
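To make the "projected 3D point vs. actual 2D position" comparison concrete, here is a minimal, hypothetical sketch of how such a reprojection error is commonly quantified under a pinhole projection model; the calibration matrices and point values are invented for illustration and do not come from the submission.

```python
# Hypothetical reprojection-error check for an augmented-fluoroscopy overlay.
# Projection model, matrices, and numbers are illustrative assumptions only.
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X (mm) to pixel coordinates with a pinhole model."""
    x_cam = R @ X + t                       # world (phantom) frame -> detector frame
    u = K @ x_cam
    return u[:2] / u[2]                     # perspective divide -> (u, v) in pixels

# Invented calibration for one C-arm pose (focal length in pixels, principal point):
K = np.array([[2000.0, 0.0, 512.0],
              [0.0, 2000.0, 512.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1000.0])            # source-to-detector style offset, mm

X_marker = np.array([5.0, -3.0, 50.0])      # known 3D point in the rigid phantom
detected_px = np.array([522.3, 506.1])      # where the marker was found in the live 2D image

error_px = np.linalg.norm(project(K, R, t, X_marker) - detected_px)
print(f"reprojection error: {error_px:.2f} px")
```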
8. Sample Size for the Training Set
The document does not specify a sample size for any training set. It primarily discusses validation and verification testing of modifications, which implies the features were developed and potentially trained using internal datasets not detailed in this submission summary.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for any training set (if applicable) was established.
(24 days)
OXO
The Orthoscan TAU Mini C-arm is designed to provide physicians with general fluoroscopic visualization, using pulsed or continuous fluoroscopy, of a patient including but not limited to, diagnostic, surgical, and critical emergency care procedures for patients of all ages including when imaging limbs/extremities, shoulders; at locations including but not limited to, hospitals, ambulatory surgery, emergency, traumatology, orthopedic, critical care, or physician office environments.
The proposed modifications to Orthoscan, Inc. TAU Mini C-Arm series (which we will refer to internally and in this submittal as Orthoscan TAU 2.0, for distinction from predicate Orthoscan TAU) retain identical function as the predicate TAU Mini C-arm (K183220) as a mobile fluoroscopic mini C-arm system that provides fluoroscopic images of patients of all ages during diagnostic, treatment and surgical procedures involving anatomical regions such as, but not limited to, the extremities, limbs, shoulders, knees, and hips. The system consists of a C-arm support attached to the image workstation.
The changes to the Orthoscan TAU series of Mini C-arm X-ray systems represent a modification of our presently legally marketed device Orthoscan TAU mini C-Arm K183220. The proposed modifications to the predicate encompass the implementation of an optional IGZO 15 cm x 15 cm Flat Panel Detector (FPD) in the 15x12cm and 15x15cm device detector sizes, a new LINUX based operating system and related software, image processing board revisions and a revised Power Manager Board for AC to DC conversion that will distribute 24Vdc via a medical grade DC power supply. The proposed device incorporates software architecture and other improvements that replicate the features and functions of the predicate device and improve image clarity without increasing dose levels.
This FDA 510(k) summary describes the modified Orthoscan TAU Mini C-arm (Orthoscan TAU 2.0) and its substantial equivalence to its predicate device (Orthoscan TAU Mini C-arm, K183220). The device is an image-intensified fluoroscopic x-ray system.
Here's an analysis of the provided information regarding acceptance criteria and the study:
1. Table of Acceptance Criteria and Reported Device Performance
The submission does not explicitly present a table of "acceptance criteria" against "reported device performance" in a quantitative manner for specific benchmarks. Instead, it focuses on demonstrating substantial equivalence to the predicate device by comparing technological characteristics and asserting overall safety and effectiveness.
The document highlights differences in the modified device (Orthoscan TAU 2.0) compared to the predicate (Orthoscan TAU):
- Optional IGZO Flat Panel Detector (FPD): The predicate used CMOS detectors. The modified device offers CMOS or optional IGZO for 15x12cm and 15x15cm sizes.
- Reported Performance: "Substantially Equivalent. The introduction of the optional IGZO technology was found to be equal in safety and effectiveness including image quality (Essential Performance). IGZO sensor technology demonstrates equal/better image quality to that of the predicate... and provides slightly improved image quality at equal dose values as the predicate."
- Linux-based Operating System: The predicate used Windows 8.1 Embedded.
- Reported Performance: "Substantially Equivalent operating system was shown to support nearly identical workflows to achieve the same basic functionality with new proposed device software application. During verification and validation activities this change did not raise any safety and/or effectiveness concerns. The difference does not affect the safety or efficacy of the device."
- Revised Power Manager Board: For AC to DC conversion.
- Reported Performance: "Substantially Equivalent. The AC to DC conversion will provide intrinsic value through risk reduction such as leakage, while standardizing distribution of 24Vdc."
- Software Architecture (OrthoTouch Application vs. OrthoMini Application):
- Reported Performance: "Software architecture design is Substantially Equivalent to that of the predicate device... The OrthoTouch Application provides the main user interface to Orthoscan fluoroscopic X-Ray products, identical to OrthoMini, application. OrthoTouch on LINUX operating system performs equal to OrthoMini."
- Graphical User Interface (GUI):
- Reported Performance: "Substantially Equivalent GUI application are nearly Identical in workflows to achieve the same basic functionality with new proposed device software application. During verification and validation activities this change did not raise any safety and/or effectiveness concerns. The difference does not affect the safety or efficacy of the device."
- Minor Differences in Detector Specifications (Pixel Spacing, Dynamic Range, DQE) for IGZO:
- Reported Performance: These differences "do not affect the safety or efficacy of the device."
The overall "acceptance criteria" seem to be the demonstration of substantial equivalence to the predicate device, ensuring at least the same level of safety and effectiveness, including image quality.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set: The "test set" for the image quality comparison consisted of:
- "Numerous Image comparison sets"
- Images from "Anthropomorphic (PMMA material) phantoms"
- Images from "anatomical simulation phantoms"
- Sample Size: The exact number of images or phantoms in the "numerous image comparison sets" is not specified.
- Data Provenance: The study was a retrospective lab test image comparison study conducted by Orthoscan, Inc. (the manufacturer). There is no mention of country of origin for the data; it was an internal company lab study.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: One expert.
- Qualifications: A "board-certified Radiologist." No details on years of experience or specialization are provided beyond this.
- Ground Truth Establishment: The radiologist performed an "assessment of individual images arranged in groups of image sets." Their conclusion served as the basis for the ground truth regarding image quality comparison between the modified and predicate devices.
4. Adjudication Method for the Test Set
- Adjudication Method: None mentioned or implied. Only one radiologist was involved in the assessment, so there was no multi-reader consensus or adjudication process.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done
- MRMC Study: No, an MRMC comparative effectiveness study was not done. The study involved a single radiologist's assessment of image sets from phantoms.
- Effect Size: Not applicable, as no MRMC study was performed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Standalone Study: The device is a fluoroscopic X-ray system, not an AI algorithm in the context of standalone performance studies typically seen for AI/ML devices. The "algorithm" here refers to image processing within the device. The performance assessment was based on visual evaluation of the output images by a human expert. Therefore, a standalone algorithm-only performance study in the way it's usually defined for AI software was not conducted or described. The performance tested was for the integrated device.
7. The Type of Ground Truth Used
- Type of Ground Truth: The ground truth for image quality was established by expert consensus (albeit by a single expert) and comparison of visual characteristics ("image quality") based on images of "anthropomorphic (PMMA material) phantoms and anatomical simulation phantoms." The expert's conclusion stated "the image quality at same or similar patient dose rates will result in a slight improvement in patient care (images) for the proposed modified TAU device over the Predicate device."
8. The Sample Size for the Training Set
- Training Set Sample Size: This submission is for a medical device (Mini C-arm X-ray system), not an AI/ML software. It describes modifications to an existing device, including a new operating system and detector options. There is no mention of a training set in the context of machine learning. The device itself is not presented as an AI-powered diagnostic tool requiring a separate training process for its core functionality.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as there is no mention of a training set for machine learning.
(112 days)
OXO
The OEC 3D mobile fluoroscopy system is designed to provide fluoroscopic and digital spot images of adult and pediatric populations during diagnostic, interventional, and surgical procedures. Examples of a clinical application may include: orthopedic, gastrointestinal, endoscopic, neurologic, vascular, cardiac, critical care, and emergency procedures.
The OEC 3D is a mobile fluoroscopic C-arm imaging system used to assist trained surgeons and other qualified physicians. The system is used to provide fluoroscopic X-ray images and volumetric reconstructions during diagnostic, interventional, and surgical procedures. These images help the physician visualize the patient's anatomy and interventional tools. This visualization helps to localize clinical regions of interest and pathology. The images provide real-time visualization and records of pre-procedure anatomy, in vivo-clinical activity and post-procedure outcomes.
The system is composed of two primary physical components. The first is referred to as the "C-Arm" because of its "C" shaped image gantry; the second is referred to as the "Workstation", and this is the primary user interface for the user to interact with the system. The C-arm has an interface tablet allowing a technician to interact with the system.
The C-arm is a stable mobile platform capable of performing linear motions (vertical, horizontal) and rotational motions (orbital, lateral) that allow the user to position the X-ray image chain at various angles and distances with respect to the patient anatomy to be imaged. The C-Arm is comprised of the high voltage generator, software, X-ray control, and a "C" shaped image gantry, which supports an X-ray tube and a Flat Panel Detector.
The workstation is a stable mobile platform with an articulating arm supporting a color image high resolution LCD display monitor. It also includes image processing equipment/software, recording devices, data input/output devices and power control systems.
On the C-Arm, the generator remains unchanged from the OEC Elite. This is also true for the 31 cm x 31 cm image receptor, consisting of a Thallium-doped Cesium Iodide [CsI(Tl)] solid state flat panel X-ray detector with Complementary Metal Oxide Semiconductor (CMOS) light imager. The X-ray tube housing and insert remains the same as on the predicate OEC Elite (K192819).
C-Arm functionality is managed by a digital flat tablet control panel mounted on the C-arm base. Motion is controlled by a joystick.
On the workstation, the main hardware includes a computer with integrated wireless capability and a dedicated computer for 3D reconstruction located within the storage bay. The OEC 3D employs the same software architecture and platform design that fully supports the flat panel detector as the OEC Elite and complies with IEC 60601-1. The OEC 3D includes the existing 2D imaging functionalities available on the OEC Elite including imaging and post processing applications.
The provided text does not contain specific acceptance criteria or a detailed study proving the device meets those criteria. Instead, it is a 510(k) premarket notification summary from the FDA, asserting substantial equivalence to predicate devices rather than demonstrating performance against explicit acceptance criteria with clinical data.
Here's an analysis of the information available in the document, and where details are explicitly not provided:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not provided in the document. The submission focuses on demonstrating substantial equivalence to predicate devices based on technological characteristics and non-clinical performance testing against general standards, rather than specific acceptance criteria for performance metrics.
2. Sample Size Used for the Test Set and Data Provenance
This information is not provided. The document states that "clinical data is not required to demonstrate substantial equivalence" and that the device was evaluated using "engineering bench testing" and "non-clinical performance testing." Therefore, there is no discrete "test set" of patient data in the clinical sense mentioned.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
This information is not provided. Since clinical data was not used for the performance evaluation for substantial equivalence, no expert ground truth establishment for a test set is described.
4. Adjudication Method for the Test Set
This information is not provided. As no clinical test set with human assessments is described, no adjudication method is relevant or provided.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
No, an MRMC comparative effectiveness study was not done. The document explicitly states: "The new performance claims did not require clinical data in order to establish safety or efficacy." Therefore, no effect size of human readers improving with AI vs. without AI assistance is reported.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
The document describes non-clinical performance testing and engineering bench testing, which would broadly cover standalone algorithm performance in a technical sense (e.g., image quality metrics, reconstruction accuracy). However, it does not explicitly detail a "standalone performance study" in the context of clinical metrics like sensitivity, specificity, or reader performance. The focus is on demonstrating that the new 3D functionality is "substantially equivalent" to that of reference devices.
7. The Type of Ground Truth Used
The document does not describe the use of specific ground truth (expert consensus, pathology, outcomes data) in the context of clinical performance evaluation for substantial equivalence to the same extent as a traditional clinical study. The "ground truth" for the non-clinical performance testing would be derived from engineering specifications, phantom measurements, and compliance with standards (e.g., IEC 60601-1, NEMA XR-27). The 3D algorithm is stated to be "identical" to one of the reference devices (INNOVA IGS 5), implying its performance characteristics are assumed to be similar to that previously cleared device.
8. The Sample Size for the Training Set
This information is not provided. The document does not describe any machine learning or AI algorithm development that would involve a training set of data. The 3D algorithm is stated to be "identical" to one of the reference devices, suggesting it's an existing, proven algorithm rather than a newly trained one requiring a specific training set.
9. How the Ground Truth for the Training Set Was Established
This information is not provided, as no training set is mentioned.
(38 days)
OXO
The OEC Elite mobile fluoroscopy system is designed to provide fluoroscopic and digital spot images of adult and pediatric patient populations during diagnostic, interventional, and surgical procedures. Examples of a clinical application may include: orthopedic, gastrointestinal, endoscopic, urologic, vascular, cardiac, critical care, and emergency procedures.
The OEC Elite is a Mobile Fluoroscopic C-arm Imaging system used to assist trained surgeons and other qualified physicians. The system is used to provide fluoroscopic X-Ray images during diagnostic, interventional, and surgical procedures. These images help the physician visualize the patient's anatomy and interventional tools. This visualization helps to localize clinical regions of interest and pathology. The images provide real-time visualization and records of pre-procedure anatomy, in vivo-clinical activity and post-procedure outcomes.
The C-arm is a stable mobile platform capable of performing linear motions (vertical, horizontal) and rotational motions (orbital, lateral, wig-wag) that allow the user to position the X-Ray image chain at various angles and distances with respect to the patient anatomy to be imaged. The C-arm is mechanically balanced allowing for ease of movement and capable of being "locked" in place using a manually activated lock. The C-Arm is comprised of the high voltage generator, software, X-ray control, and a "C" shaped image gantry, which supports an X-ray tube and a Flat Panel Detector or Image Intensifier, depending on the choice of detector configuration desired.
The workstation is a stable mobile platform with an articulating arm supporting a color image, high resolution, LCD display monitor. It also includes image processing equipment/software, recording devices, data input/output devices and power control systems.
GE is submitting this pre-market notification for proposed labeling changes (quantitative performance claims) related to a previously-released feature, Enhanced Noise Reduction.
The provided text describes a 510(k) premarket notification for the GE OEC Elite mobile fluoroscopy system with "Enhanced Noise Reduction" claims. However, it does not contain the specific acceptance criteria or an explicit study proving the device meets those criteria in the format requested. The document focuses on demonstrating substantial equivalence to a predicate device through non-clinical testing and engineering bench testing, rather than reporting on a clinical study against predefined performance metrics.
Therefore, I cannot populate the table and answer all questions directly from the provided input. However, I can extract the information related to the non-clinical testing and the claims being made for the Enhanced Noise Reduction feature.
Here's a breakdown of what can be extracted and what information is missing:
Information that can be extracted or inferred:
- Device Name: OEC Elite with Enhanced Noise Reduction
- Purpose of the Submission: Proposed labeling changes (quantitative performance claims) related to the Enhanced Noise Reduction feature, demonstrating substantial equivalence to the predicate device.
- Nature of Enhanced Noise Reduction: It's a user-selectable, augmented image processing pathway for Cardiac and Vascular acquisition profiles. It "reduces image noise in a manner characteristic of the reduction in noise resulting from an increase in photon flux" while maintaining "spatial and temporal resolution." It does not change the tube output (dose). (The underlying photon-statistics relationship is sketched briefly after this list.)
- Claim: "Claims for equivalence to a higher power system without an increase in radiation dose for both cardiac and vascular applications."
- Testing Conducted: Non-clinical testing, engineering bench testing, risk analysis, required reviews, design reviews, integration testing, performance testing, safety testing, simulated use testing. Specific mention of "image quality and dose performance using standard IQ metrics and QA phantoms" and "a wide variety of anthropomorphic phantoms."
- Conclusion: The scientific engineering bench testing methods "demonstrate substantial equivalence." Clinical data was not required.
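As background for the "increase in photon flux" phrasing quoted above (standard Poisson-statistics reasoning, not a claim or calculation from the submission): in quantum-limited X-ray imaging, the noise in a uniform region scales with the square root of the detected photon count.

```latex
% Quantum-limited (Poisson) noise relationship -- general background only
\[
  \sigma_N = \sqrt{\bar{N}}, \qquad
  \mathrm{SNR} = \frac{\bar{N}}{\sigma_N} = \sqrt{\bar{N}}
\]
```

Thus doubling the detected flux \(\bar{N}\) would raise the quantum-limited SNR by a factor of \(\sqrt{2}\) (about 41%). A processing pathway that reduces image noise by a comparable factor without changing tube output can therefore be described, as the labeling claim does, as behaving like a higher-power system without an increase in radiation dose.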
Missing Information (Crucial for the requested table and questions):
- Specific Acceptance Criteria: The document mentions "quantitative performance claims" but does not detail what these exact criteria are (e.g., specific SNR improvement percentages, resolution metrics, dose reduction targets).
- Reported Device Performance: Without explicit acceptance criteria, the "reported performance" cannot be formally assessed against them. The claim itself implies performance ("equivalence to a higher power system without an increase in radiation dose"), but specific metrics are absent.
- Sample Size (Test Set): Not specified for any performance testing. Phantoms are mentioned.
- Data Provenance (Test Set): Phantoms are artificial, so no country of origin or retrospective/prospective status.
- Number of Experts for Ground Truth (Test Set): Not applicable as no human interpretation of test set images is mentioned as part of performance evaluation.
- Qualifications of Experts for Ground Truth: Not applicable.
- Adjudication Method: Not applicable.
- MRMC Comparative Effectiveness Study: Explicitly stated that "clinical data is not required to demonstrate substantial equivalence." Therefore, no MRMC study with human readers comparing AI vs. without AI assistance was performed or reported.
- Standalone Performance: The testing described is for the algorithm (Enhanced Noise Reduction) as part of the device (OEC Elite C-arm) but without human-in-the-loop performance reported.
- Type of Ground Truth: For the "quantitative performance claims," the ground truth would typically be objective physical measurements of image quality parameters derived from phantoms.
- Sample Size (Training Set): Not mentioned.
- Ground Truth for Training Set: Not mentioned.
Based on the available information, here's what can be provided:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Stated/Inferred) | Reported Device Performance (Summary from text) |
---|---|
Quantitative performance claims for Enhanced Noise Reduction related to image quality and dose. | Achieved "equivalence to a higher power system without an increase in radiation dose for both cardiac and vascular applications." |
Reduces image noise while maintaining spatial and temporal resolution. | Described as reducing image noise "in a manner characteristic of the reduction in noise resulting from an increase in photon flux" while maintaining spatial and temporal resolution; substantiated by engineering bench testing with standard IQ metrics and QA phantoms. |
Compliance with IEC 60601-1 series, NEMA XR-27, 21CFR Subchapter J. | Tested and compliant with all applicable standards. |
No change in fundamental control mechanism, operating principle, energy type, or Intended Use. | Changes described do not alter these aspects. |
Substantial equivalence to predicate device (K172550, K171565). | Scientific engineering bench testing demonstrated substantial equivalence, with no new safety/efficacy questions, hazards, or unexpected results. |
2. Sample size used for the test set and the data provenance
- Sample size: Not specified. Testing involved "standard IQ metrics and QA phantoms" and "a wide variety of anthropomorphic phantoms."
- Data provenance: Not applicable in the traditional sense, as testing was performed using phantoms and engineering bench tests. This is non-clinical, in-house testing by the manufacturer.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable. The claims are based on objective, quantifiable physical measurements using phantoms, not on expert human interpretation of images for ground truth.
4. Adjudication method for the test set
- Not applicable, as ground truth was established through physical measurements rather than human consensus or adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No. The document explicitly states: "The new performance claims and the accumulated changes did not require clinical data in order to establish safety or efficacy." And "clinical data is not required to demonstrate substantial equivalence." Therefore, no MRMC study was performed or reported.
6. If a standalone evaluation (i.e., algorithm only, without human-in-the-loop performance) was done
- Yes, implicitly. The "Enhanced Noise Reduction" is an algorithm (an "augmented image processing pathway"). The "additional engineering bench testing was performed to substantiate the quantitative performance claims related to Enhanced Noise Reduction" and to demonstrate "overall imaging performance... using a wide variety of anthropomorphic phantoms." This describes testing the algorithm's effect on image quality metrics without human interpretation as part of the core evaluation for these particular claims.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the non-clinical performance claims, the ground truth was established through objective physical measurements using "standard IQ metrics and QA phantoms" and "anthropomorphic phantoms." This involves measuring parameters like signal-to-noise ratio, spatial resolution, and potentially dose, against expected or ideal values from the phantoms.
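As a concrete illustration of how such phantom-derived ground truth is typically quantified, the sketch below computes SNR and CNR from rectangular regions of interest. The ROI coordinates and the synthetic frame are hypothetical placeholders for illustration; they are not data or code from the submission.

```python
import numpy as np

def roi_stats(image: np.ndarray, rows: slice, cols: slice):
    """Mean and standard deviation of pixel values inside a rectangular ROI."""
    roi = image[rows, cols]
    return roi.mean(), roi.std(ddof=1)

def snr(image: np.ndarray, signal_roi, background_roi) -> float:
    """Signal-to-noise ratio: mean signal divided by background noise."""
    signal_mean, _ = roi_stats(image, *signal_roi)
    _, background_std = roi_stats(image, *background_roi)
    return signal_mean / background_std

def cnr(image: np.ndarray, signal_roi, background_roi) -> float:
    """Contrast-to-noise ratio between a contrast insert and a uniform background."""
    signal_mean, _ = roi_stats(image, *signal_roi)
    background_mean, background_std = roi_stats(image, *background_roi)
    return abs(signal_mean - background_mean) / background_std

# Hypothetical example: a synthetic phantom frame with a brighter contrast insert.
rng = np.random.default_rng(0)
frame = rng.normal(1000.0, 50.0, size=(512, 512))
frame[200:260, 200:260] += 300.0  # simulated contrast insert

signal_roi = (slice(200, 260), slice(200, 260))
background_roi = (slice(50, 110), slice(50, 110))
print(f"SNR = {snr(frame, signal_roi, background_roi):.1f}")
print(f"CNR = {cnr(frame, signal_roi, background_roi):.1f}")
```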
8. The sample size for the training set
- Not specified in the provided text.
9. How the ground truth for the training set was established
- Not specified in the provided text, as details on the training set or its ground truth are absent.
(263 days)
OXO
The Smart-C is a mini C-arm X-ray system designed to provide physicians with real time general fluoroscopic visualization of adult and pediatric patients. It is intended to aid physicians and surgeons during diagnostic procedures, therapeutic treatment, or surgical procedures of the limbs, extremities, or shoulders including but not limited to, orthopedics and emergency medicine. The Smart-C is intended to be used on a table or other hard flat surface. It may also be used with the optional support stand.
The Smart-C is an ultra-portable, battery-powered, mobile fluoroscopic mini C-arm system. The main component is a mini C-arm that consists of a CMOS flat panel detector aligned with an X-ray source monoblock to be used for image acquisition. The system can be hand-transported for imaging at the point of care. The primary operator workstation is a tablet computer that receives the images from the C-arm via wireless transfer protocol. The system includes a wireless footswitch to initiate image acquisition, making the entire system cord-free during operation. It comes with 2 battery packs, a table-top battery charger, and a tablet docking station. An optional Monitor Cart is provided as an accessory. The Smart-C monitor cart includes a 27" full-color touchscreen monitor, a keyboard for data entry, a printer for hard-copy of the x-ray images, and a battery charger for the Smart-C battery packs. The whole cart is battery-powered, to provide a completely cord-free user experience.
The provided text describes the 510(k) premarket notification for the Smart-C™ X-ray Imaging System and its comparison to a predicate device, the Orthoscan Mobile DI Mini C-arm. The document focuses on demonstrating substantial equivalence, rather than a traditional AI/ML performance study with specific acceptance criteria metrics like sensitivity, specificity, or AUC.
Therefore, the requested information regarding "acceptance criteria and the study that proves the device meets the acceptance criteria" in terms of explicit performance metrics, sample sizes for training/test sets, expert adjudication methods, MRMC studies, standalone algorithm performance, and ground truth establishment for a medical AI device cannot be fully extracted from this document. This is because the Smart-C is an imaging device, not an AI/ML algorithm that interprets images. The "performance" discussed relates to image quality and usability, compared to a predicate device, rather than diagnostic accuracy of an algorithm.
However, I can extract information related to the device's evaluation methods and the qualitative assessment of its performance against the predicate, which serves as its "acceptance criteria" for substantial equivalence.
Here's an attempt to answer the questions based on the available text, with caveats where the specific details are not provided:
Device: Smart-C™ X-ray Imaging System
1. A table of acceptance criteria and the reported device performance
The acceptance criteria are not explicitly stated as quantitative performance metrics (e.g., specific image resolution values to be met). Instead, the performance is evaluated against the predicate device and relevant standards to demonstrate substantial equivalence. The "acceptance" is qualitative: that the device's image quality and usability are "at least as good as" the predicate device and meet applicable safety and performance standards.
Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|
Image Quality: "Diagnostic ability" and "image quality" equivalent to a standard surgical monitor/predicate device. | A "Qualified Expert Evaluation of the diagnostic ability of the tablet display device was performed by 2 independent board-certified physicians. The conclusion of the expert evaluators is that the image quality of the tablet is diagnostic in all presented cases, and thus substantially equivalent to a standard surgical monitor for the intended use of the Smart-C device." Additionally, "A Qualified Expert Evaluation of the image quality of the Smart-C was performed by independent physicians utilizing images obtained from anthropomorphic phantoms. An additional Image Quality Performance test was completed using image quality phantoms for contrast and spatial resolution. Dynamic image resolution was assessed using rotation of a phantom with 2 lead dots." "Based on physician feedback, the clinical images obtained with the Smart-C were at least as good as the predicate device." |
Safety and Efficacy: Compliance with relevant standards and guidance documents. | The device "meets the same recognized performance and safety standards, and to conform to FDA guidance regarding solid-state x-ray imaging systems." It has been tested for compliance with "applicable IEC series of x-ray performance standards, including IEC60601-2-54" and "all applicable 21CFR Subchapter J performance standards." Risk analysis and design mitigations were successfully tested. |
Usability: Equivalent or improved workflow/ease of imaging, given differences like wireless technology and battery power. | The wireless image transfer "improves the workflow and ease of imaging." "The clinical utility of the Smart-C was demonstrated by performing a Clinical Imaging Evaluation. Cadaver subjects were chosen to represent the range of extremity imaging, including shoulders." Pediatric Imaging Usability Evaluation was performed for neonatal and infant patients. "There were no new concerns regarding patient positioning, including for neonatal and infant patients." "The Smart-C has been evaluated by numerous physicians and surgeons for image quality and usability on anthropomorphic phantoms, image quality phantoms, and cadaver subjects in clinical settings. They determined that it performs at least as well as the predicate device, and that it is efficacious for the intended uses." |
Technical Equivalence: Fundamental scientific technology and core components similar to predicate. | Both devices "use the same fundamental scientific technology of generating fluoroscopic x-ray images using an x-ray source monoblock and flat-panel x-ray imaging detector in a fixed C-arm configuration." Designs are "based on the same modern technologies using a compact monoblock x-ray generator and flat-panel x-ray detector, operating at similar power levels." |
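The contrast and spatial-resolution phantom tests cited in the table above are usually reported as a modulation (contrast) measurement across a line-pair bar pattern. The sketch below is a generic illustration of that measurement with made-up intensity profiles; it is not the protocol or data used in the Smart-C submission.

```python
import numpy as np

def modulation(profile: np.ndarray) -> float:
    """Michelson modulation of an intensity profile across a bar pattern:
    (max - min) / (max + min). Values near 1 mean the bars are well resolved;
    values near 0 mean the pattern has blurred into a uniform gray."""
    p_max, p_min = float(profile.max()), float(profile.min())
    return (p_max - p_min) / (p_max + p_min)

# Hypothetical pixel profiles sampled across two line-pair groups of a phantom.
coarse_group = np.array([900, 905, 120, 118, 902, 899, 121, 119], dtype=float)
fine_group = np.array([560, 540, 470, 455, 555, 545, 468, 460], dtype=float)

print(f"coarse group modulation: {modulation(coarse_group):.2f}")  # well resolved
print(f"fine group modulation:   {modulation(fine_group):.2f}")    # near the resolution limit
```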
2. Sample sizes used for the test set and the data provenance
- Test Set (Clinical Imaging Evaluation):
  - Sample Size:
    - "Cadaver subjects were chosen to represent the range of extremity imaging, including shoulders." (Specific number not provided).
    - A "Pediatric Imaging Usability Evaluation was performed" for "neonatal and infant patients." (Specific number of subjects not provided).
    - For the tablet display evaluation, "all presented cases" were evaluated (number of cases not specified).
    - "Anthropomorphic phantoms" and "image quality phantoms" were also used.
  - Data Provenance: Not explicitly stated (e.g., country of origin). The evaluation involved "clinical" settings using cadavers. The pediatric study was also an "Evaluation," implying a simulated or controlled setting, not necessarily retrospective or prospective patient data from a real clinic.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts:
  - Tablet Display Evaluation: "2 independent board-certified physicians."
  - Image Quality Evaluation (Smart-C): "independent physicians" (number not specified).
  - General Evaluation: "numerous physicians and surgeons" (number not specified).
- Qualifications of Experts: "board-certified physicians" for the display evaluation. For other evaluations, they are referred to simply as "physicians" or "physicians and surgeons."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- For the tablet display evaluation, it states, "The conclusion of the expert evaluators is that the image quality of the tablet is diagnostic in all presented cases..." This suggests that both experts agreed, or their consensus was sufficient. No formal adjudication method like "2+1" or "3+1" is described.
- For other aspects, "physician feedback" was used, implying a qualitative assessment rather than a structured adjudication process for ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No MRMC comparative effectiveness study was done with AI assistance. This device is an X-ray imaging system, not an AI-powered diagnostic algorithm. The evaluations were performed to establish image quality and usability of the device itself compared to a predicate device, not to evaluate human reader performance with or without AI assistance.
6. If a standalone evaluation (i.e., algorithm only, without human-in-the-loop performance) was done
- Not applicable. The Smart-C is a piece of hardware (fluoroscopic X-ray system) that produces images for human interpretation, not a standalone diagnostic algorithm. Its "performance" refers to the quality of the images it generates.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The "ground truth" for the evaluations was primarily expert judgment/consensus regarding image diagnostic quality and usability. This was based on:
- "Anthropomorphic phantoms" and "image quality phantoms" (for objective image quality measures like contrast and spatial resolution).
- "Cadaver subjects" (for clinical imaging evaluation and simulating patient positioning).
- "Neonatal and infant patients" (for pediatric usability evaluation, likely simulated or using models).
- Physicians' informal "feedback" and "conclusion" on images and usability.
8. The sample size for the training set
- Not applicable. This document describes the evaluation of a medical imaging device, not the development or training of an AI/ML algorithm. Therefore, there is no "training set" in the context of machine learning.
9. How the ground truth for the training set was established
- Not applicable. As a hardware medical imaging device, there is no AI/ML training set or associated ground truth establishment process in the context of this FDA submission.
(213 days)
OXO
The OrthoScan TAU Mini C-arm is designed to provide physicians with general fluoroscopic visualization, using pulsed or continuous fluoroscopy, of a patient including but not limited to, diagnostic, surgical, and critical emergency care procedures for patients of all ages including pediatric populations when imaging limbs/extremities, shoulders, at locations including but not limited to, hospitals, ambulatory surgery, emergency, traumatology, orthopedic, critical care, or physician office environments.
The OrthoScan TAU Mini C-Arm is a mobile fluoroscopic mini C-arm system that provides fluoroscopic images of patients of all ages during diagnostic, treatment and surgical procedures involving anatomical regions such as but not limited to that of extremities, limbs, shoulders, knees, and hips. The system consists of C-arm support attached to the image workstation. The proposed device provides the option of three CMOS flat panel detector sizes and an identical X-ray source HVPS monoblock generator assembly with continuous or pulsed operation for image acquisition. The C-arm supports the CMOS FPD, X-ray controls, collimator, and high voltage generator with fixed SID imaging. The C-arm and support arm, which are connected to the mobile workstation platform, are mechanically balanced, allowing the operator precise positioning and locking of the vertical, horizontal, orbital and rotational movements at various angles and distances when imaging the patient's anatomical structures. The main workstation platform that supports the C-arm assembly contains the power control system, image processing system, system software, monitor display control and main user interface controls. The combination of C-arm and workstation provides the clinician with a stable platform to obtain precise angles for localizing the patient's anatomical structures and visualization of pathology during live fluoroscopic imaging. The touch screen interface and keyboard provide the user with concise selectable imaging, X-ray technique control, and entry of patient demographics and related procedural information. The workstation supports an optional wired or wireless fluoroscopic footswitch, allowing optimal positioning for the clinician. The optional connector interface panel of the OrthoScan TAU Mini C-Arm provides convenient connection of peripheral devices such as thermal video printers, image storage devices (USB) and DICOM fixed wire and wireless network interfaces.
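The DICOM network interfaces mentioned above amount to a standard C-STORE push of completed images to an archive or viewer. Purely as a generic illustration of that protocol step (not Orthoscan's implementation; the host address, AE titles, and file name below are hypothetical), a minimal sender built on the open-source pydicom/pynetdicom libraries could look like this:

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import SecondaryCaptureImageStorage

# Hypothetical DICOM archive endpoint -- placeholders for illustration only.
PACS_HOST, PACS_PORT, PACS_AE_TITLE = "192.168.1.50", 104, "PACS"

ae = AE(ae_title="MINI_C_ARM")                      # calling AE title (hypothetical)
ae.add_requested_context(SecondaryCaptureImageStorage)

ds = dcmread("captured_frame.dcm")                  # a previously exported DICOM image

assoc = ae.associate(PACS_HOST, PACS_PORT, ae_title=PACS_AE_TITLE)
if assoc.is_established:
    status = assoc.send_c_store(ds)                 # push the image to the archive
    if status:
        print(f"C-STORE completed with status 0x{status.Status:04X}")
    assoc.release()
else:
    print("Association with the DICOM node was rejected or timed out")
```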
The provided text is a 510(k) Summary for the OrthoScan TAU Mini C-Arm, which is a premarket notification to the FDA to demonstrate that the new device is substantially equivalent to a legally marketed predicate device. This type of submission focuses on non-clinical testing to support the claim of substantial equivalence, rather than a full clinical study with specific acceptance criteria and performance metrics typically seen for novel devices or AI/software as a medical device (SaMD) where performance improvement is a key claim.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes for test/training sets, expert qualifications, and ground truth establishment, which are standard for AI/SaMD studies, are not explicitly provided in this document as it pertains to a traditional medical imaging device (C-arm) that primarily demonstrates substantial equivalence to existing technology.
However, I can extract the information that is present regarding device performance and the "study" conducted to support substantial equivalence.
Here's a breakdown of what can be inferred or directly stated from the document, and what is missing due to the nature of this 510(k) submission for an imaging device, not an AI algorithm:
Acceptance Criteria and Reported Device Performance
The document doesn't present a table of "acceptance criteria" in the traditional sense of specific numerical thresholds for diagnostic performance (e.g., sensitivity, specificity, AUC) that an AI algorithm would be tested against. Instead, it aims to demonstrate "substantial equivalence" to a predicate device. The performance is assessed through "image comparison" and "dose assessment" to show that the new device performs "as intended" and provides "similar image quality with new IDR filter" at "lower entrance dose level" compared to the predicate.
The table below summarizes the comparative technological characteristics which are used to argue substantial equivalence, and indirectly imply performance. The primary "performance" studied here is image quality and dose reduction, not diagnostic accuracy of an AI.
Table of Performance Comparison (Excerpted and Reinterpreted from the Provided Document)
Characteristic | Acceptance Criteria (Predicate) | Reported Device Performance (OrthoScan TAU Mini C-Arm) | Comparison to Predicate, Comments on Differences (Why It Is "Acceptable") |
---|---|---|---|
Image Quality / Detector | | | |
Detector Technology | Medical grade GadOx (T1)/CMOS solid-state X-ray detector | Medical grade CsI(Tl)/CMOS solid-state X-ray detector | All detectors of the TAU Mini C-Arm are of similar design technology and scientific principle to that of the predicate (K133174); they share the advantages of SSXI image receptors. |
Detector Resolution | 1.5 k x 1.5 k | TAU 2020 = 2.0 k x 2.2 k; TAU 1515 = 1.5 k x 1.5 k; TAU 1512 = 2.0 k x 1.5 k | Substantially equivalent. The proposed device adds the ability of a larger FOV for the physician. These changes do not raise new safety or effectiveness concerns. |
Field of View (Full) | 5.5" x 5.5" | TAU 2020 = 8" x 8"; TAU 1515 = 5.5" x 5.5"; TAU 1512 = 5.5" x 4.3" | Substantially equivalent. The proposed device adds the ability of a larger FOV for the physician. These changes do not raise new safety or effectiveness concerns. |
Field of View (Collimated Mag) | 4.3" x 4.3" | TAU 2020 = 4" x 4"; TAU 1515 = 4.3" x 4.3"; TAU 1512 = 4.3" x 3.3" | Substantially equivalent. The proposed device adds the ability of a larger field of view for the physician. These changes do not raise new safety or effectiveness concerns. |
Detector Size | 15.0 x 15.0 cm | TAU 2020 = 20 x 20 cm; TAU 1515 = 15 x 15 cm; TAU 1512 = 15 x 12 cm | Substantially equivalent. The proposed device adds the ability of a larger field of view for the physician. The difference does not affect the safety or efficacy of the device. |
DQE | 70% (implied for predicate, not explicitly stated as a value) | TAU 2020 = 70%; TAU 1515 = 70%; TAU 1512 = 70% | Identical; no effect on the safety or efficacy of the device. |
Grayscale Resolution | 16 bit (65,536 shades of gray) | 16 bit (65,536 shades of gray) | Identical. |
Dose Reduction | | | |
Pediatric Dose Reduction IDR | No | Yes | IDR provides a unique set of features and functions for pediatric dose reduction, with special features for the pediatric population. Dose assessment and image comparison of the pediatric dose reduction confirmed similar image quality with the new IDR filter. The difference does not affect the safety or efficacy of the device. Substantially equivalent. |
Adult Dose Reduction IDR | No | Yes | IDR provides a unique set of features and functions for the adult population. Dose assessment and image comparison of the dose reduction confirmed similar image quality with the new IDR filter. The difference does not affect the safety or efficacy of the device. Substantially equivalent. |
Beam Pre-filter | 2.5 mm Al | 0.1 mm Cu | Although not identical, the new device's X-ray beam pre-filter adds Cu filtration, which helps reduce skin entrance dose. |
Collimator | Fixed aperture @ fixed SID (Normal, Mag) | TAU 1512/1515: fixed aperture @ fixed SID (Normal, Mag); TAU 2020: stepless collimator with fixed SID (4 leaf, 2 axis) | Although not identical, both collimate the X-ray beam for the same intended use and comply with the applicable regulations. Substantially equivalent. |
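For context on the 70% DQE figure above: detective quantum efficiency is conventionally defined as the ratio of squared output to squared input signal-to-noise ratio. This is the standard textbook/IEC definition, not a formula quoted from the submission:

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
% An ideal detector has DQE = 1 at all spatial frequencies f. A DQE of 0.70
% means the detector extracts image information about as efficiently as an
% ideal detector would from 70% of the same incident X-ray quanta.
```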
Study Details (as inferable from the 510(k) Summary)
- Sample Size Used for the Test Set and Data Provenance:
  - Test Set Images: 330 individual images arranged in 20 groups of image sets.
  - Data Provenance: The study involved images taken from "anthropomorphic (PMMA material) phantoms and anatomical simulation phantoms." This means the data is synthetic/phantom-based, not from human patients. The country of origin is not specified, but given the FDA submission, it is presumed to be a controlled laboratory setting. The study is inherently non-clinical (not retrospective or prospective on human subjects).
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
  - Number of Experts: 1 Radiologist.
  - Qualifications: "a board-certified Radiologist." No specific years of experience or sub-specialty are explicitly mentioned beyond board certification.
- Adjudication Method for the Test Set:
  - There is no mention of an adjudication method (like 2+1 or 3+1). The evaluation was "conducted by a board-certified Radiologist." This implies a single-reader assessment for comparison.
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
  - No. The document explicitly states: "OrthoScan TAU mobile fluoroscopic mini C-arm system did not require live human clinical studies to support substantial equivalence... Therefore, OrthoScan conducted a lab test image comparison study employing the use of anthropomorphic phantoms..." The study was a "lab test image comparison study" and involved a "Radiologist performed an assessment of 330 individual images." This is not an MRMC study.
  - Effect Size of Human Reader Improvement with AI vs. without AI Assistance: Not applicable, as this is a device clearance, not an AI algorithm. The device aims to provide better image quality at lower dose, which indirectly can improve human interpretation, but this was not quantified in an MRMC study.
- If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Evaluation Was Done:
  - Not applicable, as this is a medical imaging device (C-arm), not an AI algorithm. The "performance" being evaluated is the direct image output of the device itself and its dose characteristics, not a diagnostic output from an automated algorithm.
- The Type of Ground Truth Used:
  - Phantom-based comparison with expert assessment. The "ground truth" for image quality and dose reduction in this context is established by the comparative assessment of images generated using standardized phantoms and evaluated by a qualified radiologist, in conjunction with laboratory performance data (e.g., on dose). There is no "pathology" or "outcomes data" ground truth, as this is a technical assessment of an imaging device.
- The Sample Size for the Training Set:
  - Not applicable. This is a medical device, not an AI/machine learning algorithm, so there is no "training set" in the computational sense. The device's design and software are developed through engineering and quality processes, not through autonomous learning from a dataset.
- How the Ground Truth for the Training Set Was Established:
  - Not applicable, for the reasons stated above.
(53 days)
OXO
The OEC One™ mobile C-arm system is designed to provide fluoroscopic and digital spot images of adult and pediatric patient populations during diagnostic, interventional, and surgical procedures. Examples of a clinical application may include: orthopedic, gastrointestinal, endoscopic, urologic, neurologic, vascular, critical care, and emergency procedures.
The OEC One™ is a mobile C-arm x-ray system to provide fluoroscopic images of the patient during diagnostic, interventional, and surgical procedures such as orthopedic, gastrointestinal, endoscopic, urologic, vascular, neurologic, critical care, and emergency procedures. These images help the physician visualize the patient's anatomy and localize clinical regions of interest. The system consists of a mobile stand with an articulating arm attached to it to support an image display monitor (widescreen monitor) and a TechView tablet, and a "C" shaped apparatus that has an image intensifier on the top of the C-arm and the X-ray Source assembly at the opposite end.
The OEC One™ is capable of performing linear motions (vertical, horizontal) and rotational motions (orbital, lateral, wig-wag) that allow the user to position the X-ray image chain at various angles and distances with respect to the patient anatomy to be imaged. The C-arm is mechanically balanced, allowing for ease of movement, and is capable of being "locked" in place using a manually activated lock.
The subject device is labelled as OEC One.
The provided text is a 510(k) Premarket Notification Submission for the OEC One with vascular option. This document primarily focuses on establishing substantial equivalence to a predicate device (OEC One, K172700) rather than presenting a detailed study with acceptance criteria for device performance in the context of an AI/algorithm-driven device.
The "device" in this context is an X-ray imaging system (OEC One™ mobile C-arm system), and the changes described are hardware and software modifications to enhance vascular imaging features. It is not an AI or algorithm-only device with specific performance metrics like sensitivity, specificity, or AUC.
Therefore, most of the requested information regarding acceptance criteria for AI performance, sample sizes for test/training sets, expert ground truth, adjudication methods, MRMC studies, or standalone algorithm performance is not applicable or cannot be extracted from this document.
However, I can extract information related to the device's technical specifications and the testing performed to demonstrate its safety and effectiveness.
Here is a summary of the information that can be extracted, addressing the closest relevant points:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of numerical acceptance criteria (e.g., sensitivity, specificity) for the device's imaging performance in relation to clinical outcomes. Instead, the acceptance criteria are generally implied by conformance to existing standards and successful completion of various engineering and verification tests. The "reported device performance" refers to the device meeting these design inputs and user needs.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Compliance with medical electrical equipment standards | Certified compliant with IEC 60601-1 Ed. 3 series, including IEC60601-2-54:2009 and IEC 60601-2-43:2010. |
Compliance with radiation performance standards | All applicable 21 CFR Subchapter J performance standards were met. |
Design inputs and user needs met | Verification and validation executed; results demonstrate the OEC One™ system met the design inputs and user needs. |
Image quality and dose assessment for fluoroscopy | All image quality/performance testing identified for fluoroscopy in FDA's "Information for Industry: X-ray Imaging Devices- Laboratory Image Quality and Dose Assessment. Tests and Standards" was performed with acceptable results. This included testing using anthropomorphic phantoms. |
Software documentation requirements for moderate level of concern | Substantial equivalence based on software documentation for a "Moderate" level of concern device. |
Functional operation of new vascular features | The primary change was to implement vascular features (Subtraction, Roadmap, Peak Opacification, Cine Recording/Playback, Re-registration, Variable Landmarking, Mask Save/Recall, Reference Image Hold) to perform vascular procedures with "easiest workflow and least intervention by the user" and "further enhance the vascular workflows." (Bench testing demonstrated user requirements were met.) |
Safety and effectiveness | The changes do not introduce any adverse effects nor raise new questions of safety and effectiveness. |
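For orientation, the Subtraction and Roadmap features listed above follow the general principle of digital subtraction angiography: a pre-contrast mask frame is subtracted, typically in the log domain, from each live frame so that static anatomy cancels and only the opacified vessels remain. The sketch below is a generic illustration of that principle with synthetic arrays; it is not GE's implementation.

```python
import numpy as np

def dsa_subtract(live: np.ndarray, mask: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """Log-subtract a pre-contrast mask from a live frame.

    Working in the log domain makes the subtraction proportional to the added
    attenuation (the contrast agent), largely cancelling the static anatomy
    that is present in both frames."""
    return np.log(live + eps) - np.log(mask + eps)

# Synthetic example: static background plus a "vessel" visible only in the live frame.
rng = np.random.default_rng(1)
background = rng.normal(2000.0, 40.0, size=(256, 256))
mask_frame = background.copy()
live_frame = background.copy()
live_frame[:, 120:126] *= 0.6   # simulated contrast-filled vessel (darker column)

subtracted = dsa_subtract(live_frame, mask_frame)
print("background residual ~", float(np.abs(subtracted[:, :100]).mean()))   # ~0: anatomy cancels
print("vessel signal       ~", float(np.abs(subtracted[:, 120:126]).mean()))  # ~0.5: vessel remains
```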
2. Sample sizes used for the test set and the data provenance
- Test Set Sample Size: Not explicitly stated in terms of patient data. The testing involved "anthropomorphic phantoms" for image performance and various engineering/bench testing for functional validation. These are not "test sets" in the typical sense of a dataset for an AI algorithm.
- Data Provenance: Not applicable as it's not patient data for AI evaluation. The testing was conducted internally at GE Hualun Medical Systems Co., Ltd.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not applicable. Ground truth from experts is not mentioned for this type of device evaluation.
- Qualifications of Experts: Not applicable.
4. Adjudication method for the test set
- Adjudication Method: Not applicable. There was no expert adjudication process described for the testing performed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No. This document describes a C-arm X-ray system, not an AI-assisted diagnostic tool that would typically undergo such a study.
- Effect Size of Human Readers: Not applicable.
6. If a standalone evaluation (i.e., algorithm only, without human-in-the-loop performance) was done
- Standalone Performance: Not applicable. The device is an imaging system; its "performance" is inherently tied to image acquisition and display, which are used by a human operator/physician. The "vascular features" are software enhancements to the imaging workflow, not a standalone AI algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: For image quality, the ground truth was based on physical phantom characteristics and established technical standards (e.g., image resolution, contrast, noise, dose measurements). For functional aspects, it was based on meeting design inputs and user requirements validated through engineering tests. No expert consensus, pathology, or outcomes data were used as "ground truth" for this device's substantial equivalence declaration.
8. The sample size for the training set
- Training Set Sample Size: Not applicable. This document does not describe an AI model that requires a training set. The software updates are feature additions and modifications, not learned from a large dataset in the way a deep learning model would be.
9. How the ground truth for the training set was established
- Ground Truth Establishment: Not applicable, as there is no mention of an AI model with a training set.