Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K241125
    Device Name: VIVIX-S 1751S
    Date Cleared: 2024-11-15 (206 days)
    Regulation Number: 892.1680
    Reference Devices: K190611

    Intended Use

    The VIVIX-S 1751S series is used for general-purpose diagnostic procedures and is intended to replace radiographic film/screen systems. The VIVIX-S 1751S series is not intended for mammography applications.

    Device Description

    The VIVIX-S 1751S, available in models FXRD-1751SA and FXRD-1751SB, features a 17 x 51 inch imaging area. The device intercepts x-ray photons and uses a scintillator to emit visible-spectrum photons. The FXRD-1751SA model uses a CsI:Tl (thallium-doped cesium iodide) scintillator, while the FXRD-1751SB model uses a Gadox (gadolinium oxysulfide) scintillator. These photons illuminate an array of amorphous silicon (a-Si) photodiodes, creating electrical signals that are then converted to digital values, which are processed by software to produce digital images displayed on monitors. The VIVIX-S 1751S must be integrated with an operating PC and an X-ray generator, and it can communicate with the generator via cable. It is designed for capturing and transferring digital x-ray images for radiographic diagnosis. Note that the X-ray generator and imaging software are not included with the VIVIX-S 1751S.
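
    The chain described above (scintillator converts x-rays to visible light, an a-Si photodiode array converts the light to charge, and readout electronics digitize the result for software processing) is a standard indirect-conversion design. The sketch below is purely illustrative and is not taken from the submission; the gain and efficiency values are assumptions chosen only to show the data flow.

```python
import numpy as np

def indirect_conversion_chain(absorbed_xray_photons: np.ndarray,
                              scintillator_gain: float = 25.0,
                              coupling_efficiency: float = 0.6,
                              bit_depth: int = 16) -> np.ndarray:
    """Toy model of the detection chain described above (all values assumed)."""
    # Scintillator (CsI:Tl or Gadox) converts absorbed x-ray photons to visible light.
    light_photons = absorbed_xray_photons * scintillator_gain
    # a-Si photodiodes convert the light into an electrical signal (arbitrary units).
    signal = light_photons * coupling_efficiency
    # Readout electronics digitize the signal to the detector's bit depth.
    max_code = 2 ** bit_depth - 1
    digital = np.clip(signal / signal.max() * max_code, 0, max_code)
    return digital.astype(np.uint16)

# Example: a flat 100 x 100 exposure field with Poisson-distributed photon counts.
flat_field = np.random.poisson(lam=1000, size=(100, 100)).astype(float)
image = indirect_conversion_chain(flat_field)
print(image.dtype, image.min(), image.max())
```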

    AI/ML Overview

    The document describes a 510(k) submission for the VIVIX-S 1751S digital X-ray detector. The acceptance criteria and the study proving the device meets these criteria are primarily demonstrated through a comparison to a predicate device (K190611) and performance testing based on established standards.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by demonstrating substantial equivalence to the predicate device in terms of technological characteristics and performance metrics, as well as compliance with recognized standards. The "performance" column shows the subject device's reported values relative to the predicate device.

    | Parameter | Acceptance Criteria (Predicate Device K190611, FXRD-1751SB) | Reported Device Performance (Subject Device K241125, FXRD-1751SA) | Equivalence |
    |---|---|---|---|
    | Technological Characteristics | Same as predicate device | Same as predicate device | Substantially equivalent |
    | Intended Use | VIVIX-S 1751S series is used for general-purpose diagnostic procedures and to replace radiographic film/screen systems. Not for mammography. | VIVIX-S 1751S series is used for general-purpose diagnostic procedures and to replace radiographic film/screen systems. Not for mammography. | Equivalent |
    | Operating Principle | Same as predicate device | Same as predicate device | Equivalent |
    | Design Features | Same as predicate device | Same as predicate device | Equivalent |
    | Communication Method | Same as predicate device | Same as predicate device | Equivalent |
    | Resolution | Same as predicate device | Same as predicate device | Equivalent |
    | Scintillator Type | Gadox | CsI (FXRD-1751SA), Gadox (FXRD-1751SB) | Different scintillator for the FXRD-1751SA model, but performance shown to be comparable; FXRD-1751SB is identical |
    | Performance (Optical / Imaging) | | | |
    | MTF (0.5 lp/mm) | ≥ 81 | ≥ 83 | Similar |
    | MTF (1 lp/mm) | ≥ 56 | ≥ 63 | Similar |
    | MTF (2 lp/mm) | ≥ 22 | ≥ 30 | Similar |
    | MTF (3 lp/mm) | ≥ 9 | ≥ 14 | Similar |
    | DQE (0.5 lp/mm) | ≥ 29 | ≥ 38 | Similar |
    | DQE (1 lp/mm) | ≥ 22 | ≥ 33 | Similar |
    | DQE (2 lp/mm) | ≥ 11 | ≥ 23 | Similar |
    | DQE (3 lp/mm) | ≥ 4 | ≥ 14 | Similar |
    | Compliance with Standards | Compliance with 21 CFR 1020.30, 21 CFR 1020.31, IEC 60601-1, CAN/CSA-C22.2 No. 60601-1, ANSI/AAMI ES60601-1, IEC 60601-1-2 | Complies with all listed standards | Compliant |
    | Diagnostic Capability | Equivalent to predicate device | Demonstrated equivalent diagnostic capability | Equivalent |
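
    Since the acceptance logic above amounts to requiring that the subject device's MTF and DQE are at least comparable to the predicate at each spatial frequency, the check can be expressed compactly. The snippet below simply re-uses the values from the table; it is an illustration, not part of the submission.

```python
# MTF/DQE lower bounds taken from the comparison table above (lp/mm -> value).
predicate = {"MTF": {0.5: 81, 1: 56, 2: 22, 3: 9},
             "DQE": {0.5: 29, 1: 22, 2: 11, 3: 4}}
subject = {"MTF": {0.5: 83, 1: 63, 2: 30, 3: 14},
           "DQE": {0.5: 38, 1: 33, 2: 23, 3: 14}}

for metric in ("MTF", "DQE"):
    for freq, pred_val in predicate[metric].items():
        subj_val = subject[metric][freq]
        verdict = "meets or exceeds predicate" if subj_val >= pred_val else "below predicate"
        print(f"{metric} at {freq} lp/mm: predicate >= {pred_val}, subject >= {subj_val} ({verdict})")
```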

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document states, "A Qualified Expert Evaluation study according to CDRH's Guidance for the Submission of 510(k)'s for Solid State X-ray Imaging Devices was conducted..." However, it does not specify the sample size of cases/images used in this clinical evaluation study.
    The data provenance (country of origin, retrospective/prospective) is not explicitly stated in the provided text.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document mentions an "Expert Evaluation study" but does not specify the number of experts or their exact qualifications (e.g., "radiologist with 10 years of experience"). It only indicates that it was a "Qualified Expert Evaluation study."

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe the adjudication method used for the expert evaluation study.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    A multi-reader multi-case (MRMC) study was conducted as part of the "Qualified Expert Evaluation study." The primary goal of this study was to confirm that the subject device (VIVIX-S 1751S - FXRD-1751SA) provides images of equivalent diagnostic capability to the predicate device (VIVIX-S 1751S - FXRD-1751SB).

    The document does not mention the involvement of AI in this study, nor does it quantify any improvement of human readers with AI assistance. The study described is a comparison of two different X-ray detector technologies, not an AI-assisted reading study.

    6. If a standalone performance assessment (i.e., algorithm only, without human-in-the-loop) was done

    This device is an X-ray detector, not an AI algorithm. Therefore, the concept of "standalone (algorithm only)" performance without a human-in-the-loop is not applicable in the same way it would be for an AI diagnostic software. The performance metrics reported (MTF, DQE) are physical image quality parameters of the detector itself, which could be considered "standalone" in this context as they characterize the device's inherent imaging capability.
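
    For context, MTF and DQE are defined by physical measurement rather than by clinical reading. A commonly used frequency-domain form (following the general approach of IEC 62220-1; exact normalizations vary) is

    \[
    \mathrm{DQE}(f) = \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)} = \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)},
    \]

    where MTF(f) is the presampled modulation transfer function, NNPS(f) the normalized noise power spectrum, and q̄ the incident photon fluence. Both quantities are measured directly on the detector output, which is why they can be reported without a reader study.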

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth for the "clinical test" (Expert Evaluation study) was established by "Qualified Expert Evaluation" to assess "equivalent diagnostic capability" of the images. This suggests a form of expert consensus or individual expert readings to determine the diagnostic quality of the images produced by the subject device compared to the predicate. It does not mention pathology or outcomes data as the primary ground truth.

    8. The sample size for the training set

    The document does not mention a training set in the context of this device because it is a hardware device (X-ray detector), not an AI algorithm that requires a training phase.

    9. How the ground truth for the training set was established

    Since there is no mention of a training set for an AI algorithm, the question of how its ground truth was established is not applicable to this submission.

    K Number: K220536
    Date Cleared: 2022-04-28 (63 days)
    Regulation Number: 892.1680
    Reference Devices: K190611

    Intended Use

    The Venu1748V flat panel detector is provided as an imaging component to the system manufacturer. It is mainly used in long-bone, spine, and other inspection fields. After acquisition, static image data is output to the processing equipment.

    This device is suitable for providing radiographic imaging of adults via a DR system. The remaining notes depend on the final DR system.

    It is not intended for mammography, dental applications, neonatal applications, or fluoroscopy.

    Device Description

    The digital flat panel detector is a cassette-size wired X-ray flat panel detector based on amorphous silicon thin-film transistor technology. It was developed to provide X-ray images and contains an active matrix of 3064 × 8696 pixels with a 139 μm pixel pitch. The detector's scintillator is CsI (cesium iodide). The biggest feature of the Venu1748V is that it supports imaging of large-scale objects, including long-bone and full-spine examinations.
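
    As a quick consistency check (illustrative only, not from the submission), the effective imaging area follows directly from the active matrix and pixel pitch quoted above, and it matches the figure given in the comparison table later in this record:

```python
pixels_h, pixels_v = 3064, 8696   # active matrix from the device description
pitch_mm = 0.139                  # 139 um pixel pitch

area_h_mm = pixels_h * pitch_mm   # ~425.9 mm
area_v_mm = pixels_v * pitch_mm   # ~1208.7 mm
print(f"Effective imaging area ~ {area_h_mm:.1f} mm x {area_v_mm:.1f} mm")
```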

    AI/ML Overview

    This document is a 510(k) Pre-Market Notification from iRay Technology Taicang Ltd. to the FDA for their Digital Flat Panel Detector, model Venu1748V. It outlines the device's technical characteristics and compares them to a predicate device.

    Analysis of Acceptance Criteria and Study Details:

    The provided document does not describe a clinical study or a multi-reader, multi-case (MRMC) study to prove the device meets acceptance criteria. Instead, it relies on non-clinical studies (bench testing) to demonstrate substantial equivalence to a predicate device. This is a common approach for imaging components like flat panel detectors where the primary concern is the technical performance of the imaging hardware itself, rather than diagnostic accuracy involving human interpretation when coupled with a full DR system.

    Therefore, many of the requested points regarding clinical study design, expert involvement, and human reader performance are not applicable to this submission.

    Here's a breakdown based on the provided text, addressing the points where information is available:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" in a separate table with pass/fail thresholds. Instead, it presents a comparison of key technical specifications between the proposed device (Venu1748V) and the predicate device (VIVIX-S 1751S). The implication is that meeting or exceeding the performance of the predicate device for these parameters constitutes "acceptance."

    | Item | Predicate Device: VIVIX-S 1751S | Proposed Device: Digital flat panel detector Venu1748V | Implied Acceptance (Relative to Predicate) | Reported Device Performance (Venu1748V) |
    |---|---|---|---|---|
    | Model Name | VIVIX-S 1751S | Venu1748V | N/A (identification) | Venu1748V |
    | 510(k) Number | K190611 | To be assigned | N/A (identification) | To be assigned |
    | Classification Name | Stationary X-Ray System | Same | Same | Same |
    | Product Code | MQB | Same | Same | Same |
    | Regulation Number | 21 CFR 892.1680 | Same | Same | Same |
    | Panel | Radiology | Same | Same | Same |
    | Classification | II | Same | Same | II |
    | X-Ray Absorber (Scintillator) | Gd2O2S:Tb (gadolinium oxysulfide) | CsI | Different but acceptable | CsI |
    | Installation Type | Portable | Same | Same | Portable |
    | Detector Structure | Amorphous silicon TFT | Same | Same | Amorphous silicon TFT |
    | Dimensions | 1357.0 mm × 532.0 mm × 30.0 mm | 1271.4 mm × 586.6 mm × 20.8 mm | Comparable/improved size | 1271.4 mm × 586.6 mm × 20.8 mm |
    | Max. Image Matrix Size | 3072 × 9216 pixels | 3064 × 8696 pixels | Comparable | 3064 × 8696 pixels |
    | Pixel Pitch | 140 μm | 139 μm | Comparable/improved resolution | 139 μm |
    | Max. Effective Imaging Area (H × V) | 430.08 mm × 1290.24 mm | 425.8 mm × 1208.7 mm | Comparable | 425.8 mm × 1208.7 mm |
    | Spatial Resolution | 3.5 lp/mm | 3.4 lp/mm | Comparable | 3.4 lp/mm |
    | Greyscales | 16 bit | Same | Same | 16 bit |
    | Modulation Transfer Function (MTF) | 40% at 1.0 lp/mm | 56% at 1.0 lp/mm | Superior | 56% at 1.0 lp/mm |
    | Detective Quantum Efficiency (DQE) | 20% at 1.0 lp/mm | 24% at 1.0 lp/mm | Superior | 24% at 1.0 lp/mm |
    | Power Consumption | Max. 72 W | Max. 50 W | Superior (lower) | Max. 50 W |
    | Communications | Wired LAN | Same | Same | Wired LAN |
    | Cooling | Air cooling | Same | Same | Air cooling |
    | Protection against Matter/Water | IPX0 | Same | Same | IPX0 |
    | Operating Temperature | 10 to 35 °C | 5 to 35 °C | Comparable/improved range | 5 to 35 °C |
    | Operating Humidity | 30 to 85% (non-condensing) | 10 to 90% (non-condensing) | Comparable/improved range | 10 to 90% (non-condensing) |
    | Operating Atmospheric Pressure | 70 to 106 kPa | Same | Same | 70 to 106 kPa |
    | Operating Altitude | Max. 3000 meters | Same | Same | Max. 3000 meters |
    | Storage and Transportation Temperature | -15 to 55 °C | -20 to 55 °C | Comparable/improved range | -20 to 55 °C |
    | Storage and Transportation Humidity | 10 to 90% (non-condensing) | 5 to 95% (non-condensing) | Comparable/improved range | 5 to 95% (non-condensing) |
    | Storage and Transportation Atmospheric Pressure | 50 to 106 kPa | 70 to 106 kPa | Comparable | 70 to 106 kPa |
    | Storage and Transportation Altitude | Max. 3000 meters | Same | Same | Max. 3000 meters |
    | Software | VXvue | iDetector | Different but acceptable | iDetector |
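
    The spatial-resolution figures in the table are consistent with the Nyquist limit set by each detector's pixel pitch, f_N = 1 / (2 × pitch). The sketch below only restates values from the table; it is not part of the submission.

```python
# Nyquist-limited spatial frequency from pixel pitch: f_N = 1 / (2 * pitch)
detectors = {"VIVIX-S 1751S (predicate)": 0.140,  # pixel pitch in mm
             "Venu1748V (proposed)": 0.139}

for name, pitch_mm in detectors.items():
    f_nyquist = 1.0 / (2.0 * pitch_mm)
    print(f"{name}: pitch {pitch_mm * 1000:.0f} um -> Nyquist limit ~ {f_nyquist:.2f} lp/mm")
# The reported spatial resolutions (3.5 and 3.4 lp/mm) sit just below these limits.
```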

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: Not applicable. The "test set" here refers to the non-clinical bench testing of the detector's physical performance characteristics. These tests are typically performed on a limited number of manufactured units (e.g., a few samples per batch) to ensure they meet specifications. The document does not specify the exact number of units tested for each parameter.
    • Data Provenance: The company is iRay Technology Taicang Ltd., located in Taicang, Jiangsu, CHINA. The testing was performed internally or by a contracted lab. The data is retrospective in the sense that it was collected as part of the device's development and verification, prior to submission.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    Not applicable. For non-clinical tests of a flat panel detector, "ground truth" is established by physical measurement standards and calibrated equipment, not by human expert consensus or clinical diagnosis. For example, MTF is measured using a phantom and analytical methods, not by radiologists.
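
    To make the "physical measurement" point concrete: the presampled MTF is typically estimated from an image of an edge or bar phantom, not from reader interpretation. The sketch below is a heavily simplified, assumption-laden version of an edge-based MTF estimate (real IEC 62220-1 measurements use a slanted edge with sub-pixel oversampling); it is illustrative only.

```python
import numpy as np

def mtf_from_edge_profile(edge_profile: np.ndarray, pixel_pitch_mm: float):
    """Simplified MTF estimate from a 1-D edge-spread function (ESF)."""
    # The line-spread function (LSF) is the derivative of the ESF.
    lsf = np.gradient(edge_profile.astype(float))
    # Window to reduce truncation artifacts, then Fourier transform.
    lsf = lsf * np.hanning(lsf.size)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                        # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm, i.e. lp/mm
    return freqs, mtf

# Synthetic, slightly blurred edge as a stand-in for a real phantom acquisition.
x = np.arange(256)
esf = 1.0 / (1.0 + np.exp(-(x - 128) / 2.0))
freqs, mtf = mtf_from_edge_profile(esf, pixel_pitch_mm=0.139)
print(f"Estimated MTF at ~1 lp/mm: {np.interp(1.0, freqs, mtf):.2f}")
```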

    4. Adjudication Method for the Test Set:

    Not applicable. Since no human experts are establishing ground truth for diagnostic decisions, there's no need for adjudication.

    5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. without AI Assistance:

    No. The document explicitly states: "Clinical data is not needed to characterize performance and establish substantial equivalence. The non-clinical test data characterizes all performance aspects of the device based on well-established scientific and engineering principles." This device is a hardware component (a flat panel detector), not an AI algorithm assisting human readers.

    6. If a Standalone Performance Assessment (i.e., Algorithm Only, Without Human-in-the-Loop) Was Done:

    No. This is a hardware component. There is no "algorithm only" performance in the sense of an AI diagnostic tool. The detector captures raw image data.

    7. The Type of Ground Truth Used:

    The "ground truth" for the non-clinical tests consists of:

    • Physical standards/measurements: For parameters like dimensions, pixel pitch, greyscales, power consumption, temperature/humidity ranges.
    • Engineering metrics: For performance characteristics like MTF, DQE, spatial resolution, signal-to-noise ratio, uniformity, defect, minimum triggering dose rate, and low contrast resolution. These are established through standardized testing procedures using phantoms and calibrated instruments.
    • Compliance with standards: Electrical safety (IEC/ES 60601-1, IEC 60601-2-54) and EMC testing (IEC 60601-1-2) ensure the device meets predefined safety and electromagnetic compatibility benchmarks.
    • Software verification: The "iDetector" software's hazards, requirements specification, and design specification were verified against the intended design specification.

    8. The Sample Size for the Training Set:

    Not applicable. This is a hardware device. "Training set" typically refers to data used to train AI/machine learning models.

    9. How the Ground Truth for the Training Set Was Established:

    Not applicable, as there is no training set mentioned for an AI/ML model for this hardware device.

    K Number: K202441
    Date Cleared: 2021-04-02 (219 days)
    Regulation Number: 892.1680

    Intended Use

    The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.

    Device Description

    Eclipse software runs inside the ImageView product application software (also called console software). The Eclipse image processing software II with Smart Noise Cancellation is similar to the predicate Eclipse image processing software (K180809). Eclipse with Smart Noise Cancellation is an optional feature that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Direct Radiography (DR)). The modified software is considered an extension of the predicate software; it is not standalone and is to be used only with the predicate device, which supports the Carestream DRX family of detectors, including all CR and DR detectors. The primary difference between the predicate and the subject device is the addition of a Smart Noise Cancellation module. The Smart Noise Cancellation module consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions. Eclipse with Smart Noise Cancellation (the modified device) applies enhanced noise reduction prior to executing the Eclipse II image processing software.
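
    The description above (a CNN trained on clinical images with added simulated noise) corresponds to a standard supervised denoising setup. The submission does not disclose the network architecture or training details, so the sketch below is a generic residual denoising CNN with every layer choice and hyperparameter assumed purely for illustration; it is not the vendor's Smart Noise Cancellation model.

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Generic residual denoising CNN (illustrative, not the vendor's model)."""
    def __init__(self, channels: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Predict the noise component and subtract it (residual learning).
        return noisy - self.body(noisy)

def add_simulated_noise(clean: torch.Tensor, sigma: float = 0.05) -> torch.Tensor:
    """Stand-in for 'added simulated noise to represent reduced signal-to-noise acquisitions'."""
    return clean + sigma * torch.randn_like(clean)

# One illustrative training step on placeholder data (real training would use clinical images).
model = DenoisingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean_batch = torch.rand(4, 1, 128, 128)        # placeholder for clinical image patches
noisy_batch = add_simulated_noise(clean_batch)  # simulated low-SNR versions

optimizer.zero_grad()
loss = loss_fn(model(noisy_batch), clean_batch)  # the clean images act as the training target
loss.backward()
optimizer.step()
print(f"one training step, MSE loss = {loss.item():.4f}")
```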

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Based on the provided text, the device Eclipse II with Smart Noise Cancellation is considered substantially equivalent to its predicate Eclipse II (K180809) due to modifications primarily centered around an enhanced noise reduction feature. The acceptance criteria and the study that proves the device meets these criteria are inferred from the demonstrated equivalence to the predicate device and the evaluation of the new Smart Noise Cancellation module.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly tied to the performance of the predicate device and the new feature's ability to maintain or improve upon key image quality attributes without introducing new safety or effectiveness concerns.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|
    | Diagnostic quality preservation/improvement: the investigational software (Eclipse II with Smart Noise Cancellation) must deliver diagnostic quality images equivalent to or exceeding the predicate software (Eclipse II). | Clinical evaluation: "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." |
    | No substantial residual image artifacts: the noise reduction should not introduce significant new artifacts. | Analysis of difference images: "The report focused on the analysis of the residual image artifacts. In conclusion, the images showed no substantial residual edge information within regions of interest." |
    | Preservation/improvement of detectability: the detectability of lesions should not be negatively impacted and ideally improved. | Ideal observer evaluation: "The evaluation demonstrated that detectability is preserved or improved with the investigational software for all supported detector types and exposure levels tested." |
    | No new questions of safety and effectiveness: the modifications should not raise new safety or effectiveness concerns. | Risk assessment: "Risks were assessed in accordance to ISO 14971 and evaluated and reduced as far as possible with risk mitigations and mitigation evidence." Overall conclusion: "The differences within the software do not raise new or different questions of safety and effectiveness." |
    | Same intended use: the device must maintain the same intended use as the predicate. | Indications for Use: "The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications." (Stated as "same" for both predicate and modified device in the comparison chart.) |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated. The text mentions "a range of exams, detector types and exposure levels" for the clinical evaluation, and "clinical images with added simulated noise" for the CNN training.
    • Data Provenance: Not explicitly stated. The text mentions "clinical images," implying real-world patient data, but does not specify the country of origin or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated. The text mentions a "clinical evaluation was performed by board certified radiologists." It does not specify the number involved.
    • Qualifications of Experts: "Board certified radiologists." No specific years of experience are provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The text mentions images were evaluated using a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for overall diagnostic capability. It does not describe a method for resolving discrepancies among multiple readers, such as 2+1 or 3+1.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • MRMC Comparative Effectiveness Study: Yes, a clinical evaluation was performed by board-certified radiologists comparing the investigational software to the predicate software. While it doesn't explicitly use the term "MRMC," the description of a clinical evaluation by multiple radiologists comparing two versions of software suggests this type of study was conducted.
    • Effect Size of Human Readers Improvement with AI vs. without AI Assistance: The text states, "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." This indicates an improvement in diagnostic image quality with the new software (which incorporates AI - the CNN noise reduction), suggesting that human readers benefit from this enhancement. However, a specific effect size (e.g., AUC improvement, percentage increase in accuracy) is not provided in the summary.

    6. If a Standalone Performance Assessment (i.e., Algorithm Only, Without Human-in-the-Loop) Was Done

    • Standalone Performance: Partially. The "Ideal Observer Evaluation" seems to be a more objective, algorithm-centric assessment of detectability, stating that "detectability is preserved or improved with the investigational software." Also, the "Analysis of the Difference Images" checked for artifacts without human interpretation as the primary outcome. However, the overall "diagnostic quality" assessment was clinical, involving human readers.
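
    For reference, an ideal observer evaluation is usually summarized as a detectability index rather than a reader score. The submission does not state which observer model was used; one common prewhitening ideal-observer form for a known signal in stationary noise is

    \[
    {d'}^{2} = \int \frac{\lvert \Delta S(f) \rvert^{2}}{\mathrm{NPS}(f)}\, df ,
    \]

    where ΔS(f) is the Fourier transform of the expected signal difference and NPS(f) is the noise power spectrum. "Detectability preserved or improved" then means d' with the investigational software is at least as large as with the predicate processing.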

    7. The Type of Ground Truth Used

    • Type of Ground Truth: The text implies a human expert consensus/evaluation as the primary ground truth for diagnostic quality. The "5-point visual difference scale" and "4-point RadLex scale" evaluated by "board certified radiologists" serve as the basis for assessing diagnostic image quality. For the "Ideal Observer Evaluation," the ground truth likely involved simulated lesions.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated. The text mentions that "clinical images with added simulated noise" were used to train the Convolutional Neural Network (CNN).

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: The ground truth for training the Smart Noise Cancellation module (a Convolutional Network) was established using "clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This suggests that the model was trained to learn the relationship between noisy images (simulated low SNR) and presumably clean or less noisy versions of those clinical images to perform noise reduction. The text doesn't specify how the "clean" versions were obtained or verified, but it implies a supervised learning approach where the desired noise-free output served as the ground truth.
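
    As a concrete but purely hypothetical illustration of how such training pairs can be formed, reduced-dose acquisitions are often simulated by injecting Poisson-distributed quantum noise into higher-quality images, which then serve as their own targets:

```python
import numpy as np

def simulate_low_snr(clean_image: np.ndarray, dose_fraction: float = 0.25,
                     photons_at_full_dose: float = 10000.0) -> np.ndarray:
    """Hypothetical reduced-dose simulation: scale to expected photon counts,
    resample with Poisson statistics, and scale back to the original range."""
    scale = photons_at_full_dose * dose_fraction
    expected_counts = np.clip(clean_image, 0.0, 1.0) * scale
    noisy_counts = np.random.poisson(expected_counts)
    return noisy_counts / scale

# The (noisy, clean) pair would form one supervised training example for the denoising CNN.
clean = np.random.rand(64, 64)   # placeholder for a normalized clinical image
noisy = simulate_low_snr(clean)
print(f"clean mean {clean.mean():.3f}, simulated low-SNR mean {noisy.mean():.3f}")
```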
