Search Results
Found 4 results
510(k) Data Aggregation
(283 days)
The ULTRA 1040 Portable X-ray Unit is a portable X-ray device, intended for use by a qualified/trained physician or technician for the purpose of acquiring X-ray images of the patient's extremities.
This device is not intended for mammography.
This Portable X-ray Unit (Model: ULTRA 1040) consists of the following major components: an X-ray main unit, an X-ray exposure hand switch, a battery charger, and other components. The X-ray main unit emits the X-rays required for X-ray exams; the hand switch controls X-ray output; and the battery charger charges the built-in battery in the X-ray main unit. The device can be used with an X-ray detector, a computer for receiving and detecting signal results, and image processing software. The major components of the X-ray main unit include: handle, enclosure, control panel, system control board, high-voltage tank, collimator (beam limiter), lithium-ion battery, and system control software running on the system control board.
The system control software provides real-time interaction with, and control of, the circuit modules inside the portable X-ray unit. It responds to user operations on the control panel: the user can adjust the kV and mAs parameters, which the software displays, or directly load APR (anatomically programmed radiography) presets. The software loads the X-ray output control data into the high-voltage generation control circuit on the system control board and drives the high-voltage tank to generate the high voltage that excites the X-ray tube to emit X-rays. It also switches the collimator indicator, monitors the device's working status and battery level, and controls the status indicators.
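The control flow described above — operator-adjusted kV/mAs, APR preset loading, and exposure gated by the hand switch — can be sketched as a minimal state machine. Everything below (class name, parameter ranges, preset values) is a hypothetical illustration; the actual system control software is embedded firmware whose internals are not described in the 510(k).

```python
# Hypothetical sketch of the described control flow. All names and numeric
# values (kV/mAs ranges, preset values) are invented for illustration only.

APR_PRESETS = {"hand": (50, 2.0), "foot": (55, 2.5)}  # anatomy -> (kV, mAs)

class PortableXrayController:
    def __init__(self, kv_range=(40, 100), mas_range=(0.4, 100.0)):
        self.kv_range, self.mas_range = kv_range, mas_range
        self.kv, self.mas = kv_range[0], mas_range[0]
        self.hand_switch_pressed = False

    def set_parameters(self, kv, mas):
        # Clamp operator input to the generator's supported range.
        self.kv = min(max(kv, self.kv_range[0]), self.kv_range[1])
        self.mas = min(max(mas, self.mas_range[0]), self.mas_range[1])

    def load_apr(self, anatomy):
        # APR: anatomically programmed radiography presets.
        self.set_parameters(*APR_PRESETS[anatomy])

    def expose(self):
        # Exposure is gated on the hand switch, mirroring the description
        # that the hand switch controls X-ray output.
        if not self.hand_switch_pressed:
            return None
        return {"kv": self.kv, "mas": self.mas}
```

A usage sequence would load a preset, press the hand switch, and trigger the exposure with the loaded parameters.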
The provided FDA 510(k) clearance letter for the ECORAY Ultra 1040 is a regulatory document and does not contain the detailed clinical study results, particularly regarding acceptance criteria for AI-related performance, MRMC studies, or the specifics of training and test set ground truth establishment. The document focuses on showing substantial equivalence to a predicate device through non-clinical testing (electrical safety, EMC, software, bench performance) and a general statement about a "task-based image quality study" for clinical adequacy.
Therefore, I cannot provide a table of acceptance criteria and reported device performance related to AI, nor can I fully answer questions about AI-specific study design, ground truth, or MRMC studies, as this information is not present in the provided text.
The document does mention:
- A "comprehensive, task-based image quality study" to assess clinical adequacy (Section 8).
- Radiologic technologists acquired images, and radiologists clinically evaluated image quality (Section 8). This implies human evaluation, but not necessarily a comparative effectiveness study with AI.
- Software testing in accordance with IEC 62304:2006/A1:2015 (Section 7.3). This standard governs software life cycle processes for medical devices, but doesn't specify AI performance metrics.
Based only on the provided text, here's what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
Cannot be provided for AI-related performance. The document states:
- "The Ultra 1040 Portable X-ray Unit met bench testing acceptance criteria as defined in the test protocols." (Section 7.2)
- "All test results were satisfying with the standards." (Section 7.1 regarding electrical, mechanical, environmental safety, and EMC).
- For the clinical study: "radiologist clinically evaluated the image quality" to assess "clinical adequacy of the device's imaging performance." (Section 8)
However, the specific quantitative acceptance criteria and their corresponding reported values for image quality performance or any AI-assisted diagnostic criteria are not detailed in this document. The document primarily focuses on regulatory compliance and substantial equivalence to a predicate, not detailed clinical performance metrics for an AI component.
2. Sample size used for the test set and the data provenance
- Test Set Size: Not specified for any clinical study.
- Data Provenance: Not specified (e.g., country of origin). The study "collected radiographic images for relevant anatomical indications stated in the Indications for Use." (Section 8). There is no mention of retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not specified.
- Qualifications of Experts: "radiologist" (Section 8). Specific experience (e.g., "10 years of experience") is not mentioned.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified. It only states "radiologist clinically evaluated the image quality."
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: The document does not describe an MRMC comparative effectiveness study involving AI assistance. The clinical study mentioned is for "assessing the clinical adequacy of the device's imaging performance" (Section 8), implying the performance of the X-ray unit itself, not an AI component integrated into a diagnostic workflow with human readers.
- Effect Size of Human Improvement with AI: Not applicable, as no MRMC study with AI assistance is described.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance: The document describes the "ULTRA 1040 Portable X-ray Unit" and its "system control software" which manages device parameters and operations (Section 4). It is a mobile X-ray system, not an AI algorithm for image analysis. The "bench test for the Ultra 1040 Portable X-ray Unit assessed radiation performance, collimator accuracy, battery performance, imaging quality, and safety" (Section 7.2). This is testing of the X-ray hardware and its basic software functions, not an AI algorithm assessing images.
It seems this device is an X-ray imaging machine, and the "software" mentioned (Section 4) refers to the control software for the X-ray unit itself, not an AI for image interpretation or diagnosis. Therefore, a standalone performance study for an AI algorithm is not relevant based on the information provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth: For the "task-based image quality study," the ground truth for image quality appears to be established by clinical evaluation by "radiologist" (Section 8). This is a form of expert consensus or expert reading, but specifically for image quality, not for diagnostic findings like disease presence/absence.
8. The sample size for the training set
- Training Set Size: Not applicable. This document does not describe the development or training of an AI algorithm for image analysis. The "software" mentioned is operational control software for the X-ray machine.
9. How the ground truth for the training set was established
- Ground Truth Establishment for Training Set: Not applicable, as no AI algorithm training is described.
(168 days)
INNOVISION-EXII is a stationary X-ray system intended for obtaining radiographic images of various anatomical parts of the human body, in both pediatric and adult patients, in a clinical environment. INNOVISION-EXII is not intended for mammography, angiography, interventional, or fluoroscopy use.
INNOVISION-EXII receives X-ray signals from X-ray irradiation and digitizes them into X-ray images, converting the digital images to DICOM format using the Elui imaging software. INNOVISION-EXII is a general radiography X-ray system, not for mammography or fluoroscopy. The system must be operated by a user who is trained and licensed to handle a general radiography X-ray system and who meets the regulatory requirements for a Radiologic Technologist. Target areas for examination include the head, spine, chest, and abdomen for diagnostic screening of orthopedic, respiratory, or vertebral-disc conditions. The system can image patients in sitting, standing, or lying postures. It can be used for patients of all ages, but should be used with care for pregnant women and infants. The INNOVISION-EXII system has no part that directly touches the patient's body.
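DICOM, the format mentioned above, is a tag-based binary encoding: a DICOM Part 10 file opens with a 128-byte preamble and the magic bytes `DICM`, followed by data elements identified by (group, element) tag pairs. A minimal sketch of that low-level layout, using only the standard library (real applications would use a library such as pydicom rather than hand-encoding bytes):

```python
import io
import struct

def write_dicom_preamble(buf):
    # A DICOM Part 10 file begins with a 128-byte preamble (conventionally
    # zeros) followed by the 4-byte magic "DICM".
    buf.write(b"\x00" * 128)
    buf.write(b"DICM")

def encode_element_short(group, elem, vr, value):
    # Explicit VR Little Endian encoding for VRs that use a 2-byte length
    # field (e.g. CS, SH): tag (group, element), 2-char VR, length, value.
    if len(value) % 2:
        value += b" "  # DICOM values are padded to even length
    return struct.pack("<HH", group, elem) + vr + struct.pack("<H", len(value)) + value

# Example: tag (0008,0060) "Modality" with value "DX" (digital radiography).
modality = encode_element_short(0x0008, 0x0060, b"CS", b"DX")
```

This sketch covers only the file header and one element shape; a complete file also requires the File Meta Information group and a full data set, which is exactly what imaging software like Elui produces.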
The provided text describes a 510(k) summary for the INNOVISION-EXII stationary X-ray system, asserting its substantial equivalence to a predicate device (GXR-Series Diagnostic X-Ray System). However, the document does not contain information about acceptance criteria or a detailed study proving the device meets specific acceptance criteria related to its performance metrics for diagnostic imaging or AI assistance.
The "Clinical testing" section on page 9 merely states: "Clinical image evaluation of INNOVISION-EXII has been performed. The evaluation results demonstrated that INNOVISION-EXII generated images are adequate and suitable for expressing contour and outlines. The image quality including contrast and density are appropriate and acceptable for diagnostic exams." This is a very general statement and does not provide specific acceptance criteria or detailed study results.
Similarly, there are no details regarding AI performance (standalone or human-in-the-loop), sample sizes, ground truth establishment, or expert qualifications for such studies. The document focuses on establishing substantial equivalence based on intended use, technological characteristics, and compliance with various safety and performance standards (electrical safety, EMC, software validation, risk analysis).
Therefore, based solely on the provided text, the requested information about acceptance criteria and a study proving the device meets these criteria cannot be extracted or inferred. The document is a 510(k) summary focused on demonstrating substantial equivalence, not a detailed clinical performance study report.
Here is a breakdown of why each requested point cannot be addressed from the given text:
- A table of acceptance criteria and the reported device performance: Not present. The "clinical testing" section is too vague.
- Sample size used for the test set and the data provenance: Not present. No specific test set for clinical performance is detailed.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not present. No ground truth establishment process is described beyond a general "clinical image evaluation."
- Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not present.
- If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not present. The document does not mention any AI component or MRMC study.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not present. No mention of an algorithm or standalone performance.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not detailed. Only a general "clinical image evaluation" is mentioned.
- The sample size for the training set: Not present. The document describes a medical imaging device, not a machine learning model requiring a training set.
- How the ground truth for the training set was established: Not applicable, as there's no mention of a training set or machine learning components.
In summary, the provided FDA 510(k) summary largely focuses on engineering and regulatory compliance (electrical safety, EMC, software validation, comparison of technical specifications to a predicate device) to establish substantial equivalence, rather than detailed clinical performance metrics derived from a study with specific acceptance criteria and ground truth for diagnostic accuracy.
(90 days)
The GM85 Digital Mobile X-ray imaging System is intended for use in generating radiographic images of human anatomy by a qualified/trained doctor or technician. This device is not intended for mammographic applications.
The GM85 Digital Mobile X-ray Imaging System captures images by transmitting X-rays through a patient's body. The X-rays passing through the patient reach the detector and are converted into electrical signals. These signals are amplified and converted to digital data by the S-Station, the Operation Software (OS) of the Samsung Digital Diagnostic X-ray System, and saved as DICOM files, the standard for medical imaging. The captured images are tuned by an Image Post-processing Engine (IPE) installed exclusively in S-Station and sent to the Picture Archiving & Communication System (PACS) server for reading.
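The amplify-and-digitize chain just described can be sketched numerically. This is an illustrative model only: the gain, offset, and 16-bit depth are assumed values, and the window/level function is a generic stand-in for Samsung's proprietary IPE, whose algorithms are not disclosed.

```python
import numpy as np

def acquire_image(raw_signal, gain=4.0, offset=100.0, bits=16):
    """Amplify an analog detector signal and quantize it to digital codes.

    The gain, offset, and bit depth here are illustrative assumptions,
    not GM85 specifications.
    """
    amplified = raw_signal * gain + offset
    max_code = 2 ** bits - 1
    return np.clip(np.round(amplified), 0, max_code).astype(np.uint16)

def window_level(image, center, width):
    # Generic window/level mapping to 8-bit display values -- a stand-in
    # for the proprietary Image Post-processing Engine (IPE).
    lo, hi = center - width / 2, center + width / 2
    scaled = (image.astype(float) - lo) / (hi - lo)
    return (np.clip(scaled, 0, 1) * 255).astype(np.uint8)
```

In a real system the digitized frame would then be wrapped in DICOM metadata and pushed to PACS, as the description above states.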
The GM85 Digital Mobile X-ray Imaging System was previously cleared under K181626. Through this premarket notification, additional configurations are added to the previously cleared GM85: an optional collapsible column type with a manual collimator, a tube, four detectors, and exposure switches (two wired and one wireless) are optionally added, and the software, including the Image Post-processing Engine (IPE), is changed to support the new hardware and apply new software features.
The provided document is a 510(k) Premarket Notification for the Samsung GM85 Digital Mobile X-ray Imaging System. It describes the updated device and compares it to a legally marketed predicate device (K181626, also GM85). The document primarily focuses on demonstrating substantial equivalence rather than presenting detailed acceptance criteria and a comprehensive study for de novo device performance.
However, based on the information provided, here's an attempt to extract and synthesize what is available regarding acceptance criteria and the supporting study, focusing on the changes made to the device:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with numerical targets and corresponding device performance for the overall system in a typical format found in clinical studies. Instead, it describes differences and then asserts that these differences do not "contribute any adverse impact to the device's safety and effectiveness" or "do not impact safety and/or effectiveness."
The closest to acceptance criteria are the statements made about the new detectors and software features, aiming for equivalent or improved performance compared to the predicate device.
Aspect of Performance/Characteristic | Acceptance Criteria (Implicit) | Reported Device Performance (Implicit) |
---|---|---|
Detectors: Image Characteristics (MTF, DQE) | Equivalent to existing (predicate) detectors, without adverse impact on safety and diagnostic effectiveness. | New detectors have "equivalent image characteristics as the existing ones." MTF is "slightly higher." DQE is "similar" or "a little lower" but "do not contribute any adverse impact to the device's safety and diagnostic effectiveness." |
Detectors: Spatial Resolution (Pixel Pitch, High Contrast Limiting Resolution, Number of pixels) | No adverse impact on safety and/or effectiveness. | New detectors have "same or lower pixel pitch," "same or higher pixel number," and "same or higher resolution" compared to predicate. These "do not impact safety and/or effectiveness." |
Detectors: Mechanical/Environmental (Dust/Water-resistance, Max. load capacity) | No adverse impact on safety and effectiveness. | New detectors have "same or better dust/water-resistance" and "same or higher max load capacity" than predicate. These changes "do not contribute any adverse impact to the device's safety and effectiveness." |
SimGrid (Image Processing Software) | No adverse impact on the device's safety and effectiveness. Improved functionality. | Updated SimGrid provides a parameter for controlling strength, which "does not contribute any adverse impact to the device's safety and effectiveness." |
IPE (Image Post-processing Engine) | No impact to the device's safety and effectiveness. | Upgraded IPE (Clinical Parameter Control) allows simultaneous comparison of editing image with current image, and this "does not contribute any impact to the device's safety and effectiveness." |
Overall System (Clinical Equivalence) | Equivalent to the predicate device. | Phantom image evaluations were found to be "equivalent to the predicate devices." "There is no significant difference in the average score of image quality evaluation between the proposed device and the predicate device." |
2. Sample Size for Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated as a numerical sample size for "cases" in the traditional sense of a clinical study. The document mentions "Anthropomorphic phantom images were provided." The number of phantom images or specific phantom types isn't detailed. For software elements, "Software System Test Case for verification and validation" was performed, but no numerical count is given.
- Data Provenance: The study used "Anthropomorphic phantom images" and non-clinical data (MTF, DQE measurements, Software System Test Cases). This indicates testing in a controlled environment (laboratory/phantom studies). The country of origin for the data is not specified, but the manufacturer is based in the Republic of South Korea. The data is non-clinical/phantom-based, so it's not strictly "retrospective or prospective" in the human clinical trial sense.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: "A professional radiologist" (singular)
- Qualifications of Experts: "professional radiologist" (no specific years of experience or subspecialty mentioned beyond being a radiologist).
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable or not specified. A single "professional radiologist" evaluated the phantom images, implying no consensus or multi-reader adjudication process as there was only one reviewer mentioned.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly stated or described. The evaluation involved a single radiologist reviewing phantom images.
- Effect Size: Not applicable, as no MRMC study was conducted.
6. Standalone (Algorithm only without human-in-the-loop performance) Study
- Standalone Study: Yes, performance data such as MTF and DQE measurements (IEC 62220-1) are inherent standalone technical performance metrics of the detectors. Software System Test Cases for verification and validation would also be considered standalone testing. The "anthropomorphic phantom images" evaluation by a single radiologist could be considered a form of standalone performance evaluation for the system's image quality output, as it assesses the device's generated images directly.
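The MTF measurements referenced here (IEC 62220-1) are derived from an edge image: the edge spread function (ESF) is differentiated to obtain the line spread function (LSF), and the magnitude of the LSF's Fourier transform, normalized at zero frequency, is the presampled MTF. A simplified 1-D sketch, omitting the slanted-edge oversampling and windowing steps of the full standard:

```python
import numpy as np

def mtf_from_esf(esf):
    """Compute a simplified 1-D MTF from an edge spread function.

    Real IEC 62220-1 analysis uses a slanted edge to build an oversampled
    ESF and applies windowing; this sketch shows only the core transform.
    """
    lsf = np.diff(esf)              # ESF derivative -> LSF
    mtf = np.abs(np.fft.rfft(lsf))  # |FFT| of the LSF
    return mtf / mtf[0]             # normalize at zero frequency

# An ideal step edge has a delta-function LSF, so its MTF is flat at 1.0;
# a blurred edge yields an MTF that falls off with spatial frequency.
ideal_edge = np.concatenate([np.zeros(32), np.ones(32)])
```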
7. Type of Ground Truth Used
- Type of Ground Truth: The ground truth for the phantom image evaluation was implicitly the expected characteristics and quality of images of anthropomorphic phantoms, as assessed by a professional radiologist for equivalence to predicate devices. For technical metrics like MTF and DQE, the ground truth is against the measurement standards (IEC 62220-1). There was no pathology or outcomes data.
8. Sample Size for the Training Set
- Sample Size for Training Set: The document does not provide information regarding the sample size of a training set for any machine learning components. While the device includes an "Image Post-processing Engine (IPE)" and features like "SimGrid," if these involve learned algorithms, their training data is not discussed. This submission is for an X-ray system, and the focus is on hardware and general image processing functionality rather than an AI/CADe device where training data would typically be detailed.
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth for Training Set was Established: Not applicable or not provided, as training set details are absent.
(90 days)
The GR10X Digital X-ray Imaging System is intended for use in generating radiographic images of human anatomy. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. This system is not intended for mammographic applications.
This device consists of X-ray control, a high-voltage generator, X-ray tube support, irradiation equipment, detectors, a patient photography stand, a digital imaging device, viewer software, etc. The detectors and the viewer software are cleared under the following 510(k)s:
- VIVIX-S VW (K200418). Model names: FXRD-2530VW, FXRD-2530VW PLUS, FXRD-3643VW, FXRD-3643VW PLUS, FXRD-4343VW, FXRD-4343VW PLUS, with imaging areas of 25cm x 30cm, 36cm x 43cm, and 43cm x 43cm, respectively.
A high-frequency inverter-type high-voltage device generates X-rays from combinations of tube voltage, tube current, irradiation time, etc., so that images can be taken at various angles for diagnosis of the patient's skeletal, respiratory, and urinary systems. The digital imaging system obtains the images taken by the X-ray unit: as radiation from the X-ray unit passes through the human body, the detector intercepts the X-ray photons, and its scintillator emits visible-spectrum photons that illuminate an array of amorphous-silicon (a-Si) photodetectors, creating electrical signals. The electrical signals are then converted to digital values, and the software, VXvue, acquires and processes the data from the detector. The resulting digital images are displayed on monitors.
The provided text describes a 510(k) premarket notification for the GR10X Digital X-ray Imaging System (Models GR10X-40K, GR10X-50K). The submission aims to demonstrate substantial equivalence to a predicate device (Ysio, K081722).
Based on the document, here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state acceptance criteria in a quantitative format for the clinical study. Instead, the "acceptance criteria" for the clinical study is an overarching statement that the device "provides images of equivalent diagnostic capability to the predicate devices."
The reported device performance from the clinical study is:
- "the study confirmed that the GR10X Digital X-ray Imaging System provides images of equivalent diagnostic capability to the predicate devices, the Yiso and its results demonstrate substantial equivalence."
For non-clinical performance (technical specifications), here's a table comparing the subject device's detectors (VIVIX-S VW Series, VIVIX-S 1717V) to the predicate's detector (Trixell Pixium 4343RCE):
Technical Specifications | Predicate Device (Trixell Pixium 4343RCE) | Subject Device (VIVIX-S VW Series, VIVIX-S 1717V) Reported Performance | Acceptance Criteria (Implicit) | Comparison Result |
---|---|---|---|---|
Dimensions | 423.3mm x 425.4mm | VIVIX-S 4343VW: 460mm x 460mm<br>VIVIX-S 3643VW: 384mm x 460mm<br>VIVIX-S 2530VW: 287mm x 350mm<br>VIVIX-S 1717V: 460mm x 460mm | Substantially Equivalent | Substantially Equivalent |
Resolution | 2860 x 2874 pixels | VIVIX-S 4343VW: 3072 x 3072 pixels<br>VIVIX-S 3643VW: 2560 x 3072 pixels<br>VIVIX-S 2530VW: 2048 x 2560 pixels<br>VIVIX-S 1717V: 3072 x 3072 pixels | Substantially Equivalent | Substantially Equivalent |
Pixel size | 148 µm | VIVIX-S 4343VW: 140µm<br>VIVIX-S 3643VW: 140µm<br>VIVIX-S 2530VW: 124µm<br>VIVIX-S 1717V: 140µm | Substantially Equivalent | Substantially Equivalent |
Semiconductor Material | Amorphous silicon, a-Si | Amorphous silicon, a-Si | Substantially Equivalent | Substantially Equivalent |
Scintillator | Cesium iodide (CsI) | Cesium iodide (CsI), Gadolinium Oxide (Gadox) | Substantially Equivalent | Substantially Equivalent |
Acquisition Depth | 16 bit | 16 bit | Substantially Equivalent | Substantially Equivalent |
DQE @ 0.05 lp/mm (2 µGy) | 67% | FXRD-4343VAW: 47%<br>FXRD-4343VAW PLUS: 62%<br>FXRD-3643VAW: 45.5%<br>FXRD-3643VAW PLUS: 61%<br>FXRD-2530VAW: 49%<br>FXRD-2530VAW PLUS: 61%<br>FXRD-1717NAW: 50%<br>FXRD-1717NBW: 27% | Substantially Equivalent | Substantially Equivalent |
MTF @ 1 lp/mm | 62% | FXRD-4343VAW: 76%<br>FXRD-4343VAW PLUS: 60%<br>FXRD-3643VAW: 74%<br>FXRD-3643VAW PLUS: 61%<br>FXRD-2530VAW: 76%<br>FXRD-2530VAW PLUS: 60%<br>FXRD-1717NAW: 66%<br>FXRD-1717NBW: (value cut off) | Substantially Equivalent | Substantially Equivalent |
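For context, the DQE figures tabulated above are defined by IEC 62220-1 in terms of the presampled MTF, the normalized noise power spectrum (NNPS), and the incident photon fluence $q$ (photons per unit area):

$$\mathrm{DQE}(u) = \frac{\mathrm{MTF}^2(u)}{q \cdot \mathrm{NNPS}(u)}$$

so a higher DQE at a given spatial frequency $u$ means the detector converts more of the incident dose into usable signal-to-noise ratio. This is the standard textbook form; the exact measurement geometry and beam qualities (e.g., RQA5) are specified in the standard itself, not in this 510(k) summary.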
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document mentions a "single-blinded concurrence study" as a "clinical test" but does not provide any specific quantitative details regarding the sample size of the test set, or the provenance (country of origin, retrospective/prospective nature) of the data used in this clinical study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not specify the number of experts used or their qualifications for establishing ground truth in the clinical study. It only refers to a "single-blinded concurrence study."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document states "a single-blinded concurrence study was conducted." It does not provide details on the specific adjudication method used (e.g., 2+1, 3+1, or if it was based on individual expert assessment without formal adjudication). The term "concurrence" implies agreement among readings, but the method for achieving or resolving discrepancies is not described.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document describes a "single-blinded concurrence study" comparing the GR10X system to predicate devices. This study aims to confirm "equivalent diagnostic capability." It is not explicitly described as a Multi-Reader Multi-Case (MRMC) comparative effectiveness study involving AI assistance for human readers. The device is an X-ray imaging system, not an AI-powered diagnostic aid for human readers. Therefore, no effect size of human readers improving with AI vs without AI assistance is reported or relevant based on the information provided.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The subject device is an X-ray imaging system, not an AI algorithm, so the concept of "standalone (algorithm-only, without human-in-the-loop) performance" does not apply. The evaluation focuses on the image quality and diagnostic capability of the imaging system itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document mentions a "single-blinded concurrence study" to confirm "equivalent diagnostic capability." While this implies expert interpretation, it does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). The phrase "concurrence study" suggests that the comparison was based on human expert interpretations of images from both the subject and predicate devices.
8. The sample size for the training set
The document describes a clinical study for product clearance and does not mention any "training set." This type of submission (510k for a general X-ray system) typically involves demonstrating performance against a predicate device, and not machine learning model training. Therefore, there is no information on a training set sample size.
9. How the ground truth for the training set was established
As there is no mention of a "training set" in the context of a machine learning model, this question is not applicable based on the provided text.