Search Results

Found 5 results

510(k) Data Aggregation

    K Number: K241114
    Manufacturer:
    Date Cleared: 2024-07-23 (92 days)
    Product Code:
    Regulation Number: 892.2050
    Intended Use

    EzDent-i is dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM-compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

    EzDent-i is intended for use as software to acquire, view, save 2D image files, and load DICOM project files from panorama, cephalometric, and intra-oral imaging equipment.

    Device Description

    EzDent-i v3.5 is a device that provides various features to acquire, transfer, edit, display, store, and perform digital processing of medical images. EzDent-i is patient and image management software specifically for digital dental radiography. It also provides a server/client model so that users can upload and download clinical diagnostic images and patient information from any workstation in the network environment.

    EzDent-i supports general image formats such as JPG and BMP for 2D image viewing as well as the DICOM format. For 3D image management, it provides uploading and downloading support for dental CT images in DICOM format. It interfaces with the company's 3D imaging software, Ez3D-i (K131616, K150761, K161246, K163539, K173863, K190791, K200178, K211791, K222069, K231757), but EzDent-i itself does not view, transfer, or process 3D radiographs.

    EzDent-i supports the acquisition of dental images by interfacing with the OpenCV library to import intra-oral camera images. It also supports the acquisition of CT, panoramic, cephalometric, intra-oral sensor, and intra-oral scanner images by interfacing with X-ray capture software.
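    For readers unfamiliar with this kind of integration, the sketch below shows, in generic terms, how an application might grab a frame from an intra-oral camera through OpenCV's Python bindings and read a DICOM-compliant radiograph. It is purely illustrative and is not EzDent-i's actual acquisition code; the camera index, file paths, and the use of pydicom are assumptions for the example.

```python
# Illustrative sketch only: a generic OpenCV frame grab plus a DICOM read.
# This is NOT EzDent-i's actual acquisition pipeline; the camera index,
# file paths, and use of pydicom are assumptions for the example.
import cv2          # OpenCV Python bindings
import pydicom      # DICOM reader (assumed available in the environment)

CAMERA_INDEX = 0                      # hypothetical index of an intra-oral USB camera
FRAME_PATH = "intraoral_frame.bmp"    # BMP is one of the 2D formats mentioned above


def grab_intraoral_frame(index: int = CAMERA_INDEX) -> None:
    """Capture a single frame from a camera device and save it as a BMP file."""
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("No frame received from the camera")
    cv2.imwrite(FRAME_PATH, frame)


def read_dicom_pixels(path: str):
    """Load a DICOM-compliant radiograph and return its pixel array (requires numpy)."""
    ds = pydicom.dcmread(path)
    return ds.pixel_array


if __name__ == "__main__":
    grab_intraoral_frame()
```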

    AI/ML Overview

    The provided text is a 510(k) summary for the EzDent-i/E2/Prora View/Smart M Viewer (v3.5) device. It asserts substantial equivalence to a predicate device (EzDent-i/E2/Prora View/Smart M Viewer v3.4). However, the document does not contain any specific acceptance criteria or details of a study proving the device meets acceptance criteria.

    The "Performance Data" section (Section 10) only states: "SW verification/validation and the measurement accuracy test were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria. Also we have addressed the recommendations in the most recent cybersecurity guidance, "Cybersecurity in Medical Devices Quality System Considerations and Content of Premarket Submissions"."

    This statement confirms that tests were conducted and passed, but does not provide the actual acceptance criteria, the reported device performance, sample sizes, details on ground truth establishment, expert qualifications, adjudication methods, or whether MRMC/standalone studies were performed.

    Therefore, based solely on the provided text, I cannot provide the requested tables and details. The document confirms that testing occurred and passed, but the specifics required to answer your questions are not present in this 510(k) summary.


    K Number: K223820
    Manufacturer:
    Date Cleared: 2023-02-17 (58 days)
    Product Code:
    Regulation Number: 892.2050
    Intended Use

    EzDent-i is dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM-compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

    EzDent-i is intended for use as software to acquire, view, save 2D image files, and load DICOM project files from panorama, cephalometric, and intra-oral imaging equipment.

    Device Description

    EzDent-i v3.4 is a device that provides various features to acquire, transfer, edit, display, store, and perform digital processing of medical images. EzDent-i is patient and image management software specifically for digital dental radiography. It also provides a server/client model so that users can upload and download clinical diagnostic images and patient information from any workstation in the network environment.

    EzDent-i supports general image formats such as JPG and BMP for 2D image viewing as well as the DICOM format. For 3D image management, it provides uploading and downloading support for dental CT images in DICOM format. It interfaces with the company's 3D imaging software, Ez3D-i (K131616, K150761, K161246, K163539, K173863, K190791, K200178, K211791, K222069), but EzDent-i itself does not view, transfer, or process 3D radiographs. None of the changes to the predicate software are related to the 3D functions.

    EzDent-i supports the acquisition of dental images by interfacing with the OpenCV library to import intra-oral camera images. It also supports the acquisition of CT, panoramic, cephalometric, intra-oral sensor, and intra-oral scanner images by interfacing with X-ray capture software.

    The software level of concern is Moderate.

    AI/ML Overview

    The provided document is a 510(k) summary for the EzDent-i / E2 / Prora View / Smart M Viewer software. It describes the device, its intended use, and argues for its substantial equivalence to a predicate device.

    However, the document does not contain the detailed information necessary to answer all parts of your request, specifically regarding acceptance criteria, reported device performance, sample sizes, expert qualifications, adjudication methods, MRMC studies, standalone performance, or training set details.

    This 510(k) submission primarily focuses on demonstrating that the updated software version (v3.4) is substantially equivalent to a previous cleared version (v3.3) by highlighting that the changes are for "user convenience and do not affect the device safety or effectiveness". It therefore does not provide a comprehensive study report with quantified performance metrics against specific acceptance criteria for diagnostic accuracy, which would typically be found in direct performance studies for devices with diagnostic claims.

    Here's what can be extracted and what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria | Reported Device Performance
    Not specified | Not specified
    (The document states "The device passed all of the tests based on pre-determined Pass/Fail criteria," but does not elaborate on what these criteria or the test results were in terms of specific performance metrics.)

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Not specified.
    • Data Provenance (e.g., country of origin of the data, retrospective or prospective): Not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.
      • The Indications for Use state the device is "meant to be used by trained medical professionals such as radiologist and dentist." This implies the target users, but not the experts for ground truth.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Adjudication Method: Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No, an MRMC comparative effectiveness study is not mentioned. The device is described as "dental imaging software that is intended to provide diagnostic tools" and its results are "dependent on the interpretation of trained and licensed radiologists, clinicians and referring physicians as an adjunctive to standard radiology practices for diagnosis." This suggests it is a viewing and processing tool, not a diagnostic AI that would typically undergo an MRMC study to show improvement in human reader performance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone Performance Study: No, a standalone performance study is not described. Given the device's function as an image management and processing system used "as an adjunctive to standard radiology practices for diagnosis," it is not presented as an AI algorithm providing standalone diagnoses.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: Not specified, as specific performance tests against diagnostic ground truth are not detailed. The performance data section broadly mentions "SW verification/validation and the measurement accuracy test were conducted," implying functional and technical testing rather than a clinical performance study with established ground truth for diagnostic accuracy.

    8. The sample size for the training set

    • Sample Size for Training Set: Not applicable/Not specified. The document describes software with various image processing and management features, rather than a machine learning or AI algorithm that would require a distinct "training set."

    9. How the ground truth for the training set was established

    • Ground Truth for Training Set: Not applicable/Not specified, as no training set for a machine learning model is mentioned.

    Summary of what the document implies about performance testing:

    The "Performance Data" section states: "SW verification/validation and the measurement accuracy test were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria."

    This indicates that Ewoosoft performed internal validation testing to ensure the software met its specified functional and technical requirements (e.g., correct image display, accurate measurements for linear distance and angle, proper functioning of new features like "Image Share" and "IO sensor image Preview"). However, these are not detailed clinical performance metrics for diagnostic efficacy or accuracy that would typically be associated with AI-driven diagnostic devices. Since the device is presented as substantially equivalent to a predicate device for managing and processing images, the focus of the 510(k) is on the safety and effectiveness of the software updates rather than demonstrating novel diagnostic performance.
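    As an illustration of what a linear-distance and angle measurement check typically exercises, the sketch below converts pixel coordinates to physical units using an assumed pixel spacing. It is a generic, hypothetical example, not Ewoosoft's validation code; the spacing value and the sample points are placeholders.

```python
# Hypothetical sketch of the linear-distance and angle measurements that a
# "measurement accuracy test" typically exercises; the pixel spacing and the
# sample points are placeholders, not values from the submission.
import math


def linear_distance_mm(p1, p2, spacing_mm=(0.1, 0.1)):
    """Distance between two (row, col) pixel coordinates, scaled by per-axis pixel spacing in mm."""
    dr = (p2[0] - p1[0]) * spacing_mm[0]
    dc = (p2[1] - p1[1]) * spacing_mm[1]
    return math.hypot(dr, dc)


def angle_deg(vertex, a, b):
    """Angle at `vertex` formed by points a and b, in degrees."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))


# 100 pixels apart along one axis at 0.1 mm/pixel -> 10.0 mm; perpendicular arms -> 90 degrees
print(linear_distance_mm((0, 0), (0, 100)))    # 10.0
print(angle_deg((0, 0), (0, 100), (100, 0)))   # 90.0
```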


    K Number: K222145
    Manufacturer:
    Date Cleared: 2022-08-12 (23 days)
    Product Code:
    Regulation Number: 892.2050
    Intended Use

    The EzDent-i / E2 / ProraView / Smart M Viewer is dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM-compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

    The EzDent-i / E2 / ProraView / Smart M Viewer is intended for use as software to acquire, view, save 2D image files, and load DICOM project files from panorama, cephalometric, and intra-oral imaging equipment.

    Device Description

    The EzDent-i / E2 / Prora View / Smart M Viewer v.3.3 is a device that provides various features to acquire, transfer, edit, display, store, and perform digital processing of medical images. The subject device is patient and image management software specifically for digital dental radiography. It also provides a server/client model so that users can upload and download clinical diagnostic images and patient information from any workstation in the network environment.

    It also supports general image formats such as JPG and BMP for 2D image viewing as well as the DICOM format. For 3D image management, it provides uploading and downloading support for dental CT images in DICOM format. It interfaces with the company's 3D imaging software, Ez3D-i (K131616, K150761, K161246, K163539, K173863, K190791, K200178, K211791), but does not view, transfer, or process 3D radiographs.

    The subject device supports the acquisition of dental images by interfacing with the OpenCV library to import intra-oral camera images. It also supports the acquisition of CT, panoramic, cephalometric, intra-oral sensor, and intra-oral scanner images by interfacing with X-ray capture software.

    AI/ML Overview

    This document describes a 510(k) premarket notification for the EzDent-i / E2 / ProraView/ Smart M Viewer, a dental imaging software. The submission aims to demonstrate substantial equivalence to a previously cleared version of the same software (EzDent-i / E2 / Prora View / Smart M Viewer v.3.2, K211795).

    Based on the provided information, the following can be extracted:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria or detailed device performance metrics in a pass/fail format. Instead, it relies on a statement that "The device passed all of the tests based on pre-determined Pass/Fail criteria." The study's focus was on demonstrating that modifications made to the software "do not raise the questions of safety or effectiveness" and that the newer version is substantially equivalent to the predicate device.

    The reported device performance, in terms of meeting criteria, is implicitly described as:

    Acceptance Criteria Category | Reported Device Performance
    Software Functionality | The device passed all SW verification/validation tests.
    Measurement Accuracy | The device passed all measurement accuracy tests.
    Safety & Effectiveness | Modifications (PC system requirements, video tutorial, setting/viewer/report tab upgrades) do not raise questions of safety or effectiveness.
    Substantial Equivalence | Demonstrated substantial equivalence to the predicate device in technical characteristics, general function, application, and indications for use.

    2. Sample Size Used for the Test Set and Data Provenance

    The document states that "SW verification/validation and the measurement accuracy test were conducted." However, it does not provide any details regarding:

    • The specific sample size (e.g., number of images, number of cases) used for the test set.
    • The data provenance (e.g., country of origin, retrospective or prospective nature of the data).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    This information is not provided in the document. The document mentions that the software's diagnostic tools are "meant to be used by trained medical professionals such as radiologist and dentist" and that "Results produced by the software's diagnostic, treatment planning and simulation tools are dependent on the interpretation of trained and licensed radiologists, clinicians and referring physicians as an adjunctive to standard radiology practices for diagnosis." However, it does not specify how ground truth was established for the performance testing.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1, none) used for establishing ground truth for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    An MRMC comparative effectiveness study was not performed or reported in this submission. The study focuses on demonstrating substantial equivalence of the modified software version to its predicate through software verification/validation and measurement accuracy testing, rather than evaluating human reader performance with or without AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    A standalone performance study focused on the algorithm's performance without human-in-the-loop interaction was performed implicitly through the "SW verification/validation and the measurement accuracy test." These tests are typically designed to assess the software's intrinsic functionalities and accuracy parameters independently. The submission indicates that these tests were conducted and passed. However, specific metrics (e.g., sensitivity, specificity, AUC) for diagnostic capabilities are not provided, as the device is characterized as a "medical image management and processing system" with "diagnostic tools," meaning it is an adjunctive tool for image interpretation by clinicians.

    7. Type of Ground Truth Used

    The type of ground truth used for the "SW verification/validation and the measurement accuracy test" is not explicitly stated. Given the nature of a "medical image management and processing system" and its function of enabling trained professionals to "view and interpret" images and provide "diagnostic tools," the ground truth for measurement accuracy tests would likely involve precisely measured objects or features within reference images. For software functionality, it would involve confirming that features operate as intended based on predefined specifications. Pathology or outcomes data are not mentioned as being used for ground truth.

    8. Sample Size for the Training Set

    The document does not provide any information about a training set or its sample size. This submission describes an updated version of existing software, and the evaluation focuses on comparing it to a predicate device rather than on the development and training of a new AI algorithm.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned, information on how its ground truth was established is not applicable/not provided in this document.


    K Number: K211795
    Manufacturer:
    Date Cleared: 2021-10-04 (116 days)
    Product Code:
    Regulation Number: 892.2050
    Intended Use

    EzDent-i is dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM-compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

    EzDent-i is intended for use as software to acquire, view, save 2D image files, and load DICOM project files from panorama, cephalometric, and intra-oral imaging equipment.

    Device Description

    EzDent-i is a device that provides various features to acquire, transfer, edit, display, store, and perform digital processing of medical images. EzDent-i is patient and image management software specifically for digital dental radiography. It also provides a server/client model so that users can upload and download clinical diagnostic images and patient information from any workstation in the network environment.

    EzDent-i supports general image formats such as JPG and BMP for 2D image viewing as well as the DICOM format. For 3D image management, it provides uploading and downloading support for dental CT images in DICOM format. It interfaces with the company's 3D imaging software, Ez3D-i (K131616, K150761, K161246, K163539, K173863, K190791, K200178), but EzDent-i itself does not view, transfer, or process 3D radiographs.

    EzDent-i supports the acquisition of dental images by interfacing with the OpenCV library to import intra-oral camera images. It also supports the acquisition of CT, panoramic, cephalometric, intra-oral sensor, and intra-oral scanner images by interfacing with X-ray capture software.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a dental imaging software named EzDent-i. The primary purpose of this submission is to demonstrate substantial equivalence to a previously cleared predicate device, not to showcase the performance of an AI algorithm based on comparative studies. Therefore, much of the requested information regarding acceptance criteria and performance studies (especially relating to AI, human readers, and ground truth establishment) is not detailed in this document because it is not typically required for a software device demonstrating substantial equivalence by adding convenience features.

    However, based on the context of the document, we can infer some details and explicitly state what is not provided.

    Here's an attempt to answer your questions based on the provided text, indicating where information is not present:


    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state quantitative acceptance criteria or detailed performance metrics. Instead, it relies on demonstrating that the modified device's performance aligns with its intended use and is comparable to the predicate device.

    The "Performance Data" section states: "SW verification/validation and the measurement accuracy test were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria." This indicates that internal testing was performed, and the device met its internal "Pass/Fail criteria," but these specific criteria and results are not provided in this summary.

    Given that this is a 510(k) for a medical image management and processing system (not an AI-driven diagnostic aid that independently identifies pathologies), the "performance" here refers to its ability to correctly acquire, view, save, load, and manipulate dental images, similar to its predicate.

    Acceptance Criterion (Inferred) | Reported Device Performance
    Functionality & Reliability (e.g., image acquisition, viewing, saving, loading, editing, display functions) | "The device passed all of the tests based on pre-determined Pass/Fail criteria." The device "provides various features to acquire, transfer, edit, display, store, and perform digital processing of medical images," "supports general image formats such as JPG and BMP for 2D image viewing as well as DICOM format," and "supports the acquisition of dental images by interfacing with OpenCV library."
    Measurement Accuracy (e.g., linear distance, angle) | "The device passed all of the tests based on pre-determined Pass/Fail criteria." (Specific accuracy metrics not provided.)
    Substantial Equivalence (to predicate device) | "The subject device is substantially equivalent in the areas of technical characteristics, general function, application, and indications for use." "The new device does not introduce a fundamentally new scientific technology." "The device has been validated through system level test."
    Safety and Effectiveness (no new safety/effectiveness questions) | "The modifications are PC system requirement information change, adding logout option, and upgrades to Setting tab, Viewer tab, and Report tab. These differences are not significant since they are additional features for user convenience and do not raise the questions of safety or effectiveness."

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not specify any sample size for a test set in terms of patient images. The "performance data" section refers to "SW verification/validation and the measurement accuracy test," implying software-level testing rather than clinical study data on a patient image test set. No information is available regarding data provenance (country of origin, retrospective/prospective).


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable and not provided. The device (EzDent-i) is described as "dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM compliant dental radiology images and are meant to be used by trained medical professionals such as radiologist and dentist." It is a viewer and manager, not an AI diagnostic tool that produces a finding requiring expert ground truth for performance evaluation in the described context. The performance verification likely focuses on technical accuracy of image display, manipulation, and data handling, not diagnostic accuracy against a ground truth.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable and not provided. As per point 3, there's no indication of a diagnostic test set requiring adjudication.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No such study was mentioned or required for this 510(k) submission. This device is a general image management and processing system, not an AI-assisted diagnostic tool that would typically warrant an MRMC study to show human reader improvement with AI assistance.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This device is not an algorithm performing a standalone diagnostic task. It is software that provides "diagnostic tools" for viewing and interpreting images by human professionals. Therefore, a standalone algorithm performance evaluation would not be applicable or described here.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Not applicable and not provided. As this is not an AI diagnostic device, the concept of ground truth for diagnostic accuracy (e.g., concerning a disease or finding) is not relevant to the described performance evaluations. The "performance" relates to the software's ability to achieve its technical specifications.


    8. The sample size for the training set

    Not applicable and not provided. This document describes a traditional software upgrade and substantial equivalence claim, not a machine learning or AI device that would have a training set.


    9. How the ground truth for the training set was established

    Not applicable and not provided. As per point 8, there is no mention of a training set for an AI model in this submission.


    K Number: K210329
    Manufacturer:
    Date Cleared: 2021-02-18 (14 days)
    Product Code:
    Regulation Number: 892.1750
    Reference & Predicate Devices
    Reference Devices: EzDent-i (K202116), Ez3D-i (K200178)

    Intended Use

    Green X 18 (Model: PHT-75CHS) is intended to produce panoramic, cephalometric, or 3D digital X-ray images. It provides diagnostic details of the dento-maxillofacial, ENT, sinus, and TMJ regions for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.

    Device Description

    Green X 18 (Model: PHT-75CHS) is an advanced 4-in-1 digital X-ray imaging system that incorporates PANO, CEPH (optional), CBCT, and MODEL Scan imaging capabilities into a single system. Green X 18 (Model: PHT-75CHS), a digital radiographic imaging system, acquires and processes multi-FOV diagnostic images for dentists. Designed explicitly for dental radiography, Green X 18 (Model: PHT-75CHS) is a complete digital X-ray system equipped with imaging viewers, an X-ray generator, and a dedicated SSXI detector. The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head and neck for oral surgery, implant, and orthodontic treatment. The materials, safety characteristics, X-ray source, indications for use, and image reconstruction/MAR (Metal Artifact Reduction) algorithm of the subject device are the same as those of the predicate device (PHT-75CHS, K201627). The difference from the predicate device is that it is equipped with a new CBCT/PANO detector to provide users with a larger CBCT FOV.

    AI/ML Overview

    The medical device in question is the Green X 18 (Model: PHT-75CHS), a digital X-ray imaging system for panoramic, cephalometric, and 3D dental imaging. The study described focuses on demonstrating substantial equivalence to a predicate device, the Green X (Model: PHT-75CHS, K201627), particularly concerning a new detector, Xmaru1524CF Master Plus OP.

    Based on the provided text, the acceptance criteria and study information can be summarized as follows:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document primarily focuses on demonstrating equivalence to the predicate device rather than setting specific numeric acceptance criteria for unique features or diagnostic accuracy. Instead, the acceptance is based on the new detector performing "equivalently or better" than the predicate.

    Acceptance Criterion (Implicit) | Reported Device Performance (Subject Device vs. Predicate Device)
    Technological Characteristics | The fundamental technological characteristics of the subject and predicate device are identical, with similar imaging modes (PANO, CEPH (optional), CBCT, and 3D MODEL Scan). The materials, safety characteristics, X-ray source, indications for use, and image reconstruction/MAR (Metal Artifact Reduction) algorithm are the same as the predicate device. The difference is the new CBCT/PANO detector for a larger CBCT FOV.
    Pixel Resolution (New Detector vs. Predicate Detector) | New detector (Xmaru1524CF Master Plus OP): 5 lp/mm (2x2 binning), 2.5 lp/mm (4x4 binning) for CT and PANO. Predicate detector (Xmaru1314CF): 5 lp/mm (2x2 binning), 2.5 lp/mm (4x4 binning). Test patterns of the new sensor images show the test subjects without aliasing throughout the same spatial frequency range as the predicate device.
    DQE Performance (New Detector vs. Predicate Detector) | New detector (Xmaru1524CF Master Plus OP): similar or better overall DQE; at a low spatial frequency (~0.5 lp/mm), DQE of 41% (4x4 binning). Predicate detector (Xmaru1314CF): DQE of 36% (4x4 binning) at ~0.5 lp/mm.
    MTF and NPS Performance (New Detector vs. Predicate Detector) | The new sensor also exhibits similar performance in terms of MTF and NPS.
    Image Quality (Contrast, Noise, CNR, MTF in CT mode) | The subject device performed equivalently or better than the predicate device in general image quality, measured with the FDK (back projection) and CS (iterative) reconstruction algorithms.
    Dosimetric Performance (DAP) | PANO mode: DAP was the same under identical FDD, exposure area, X-ray exposure time, tube voltage, and tube current. CEPH mode: DAP was the same under identical FDD, detector specifications, and X-ray exposure conditions (exposure time, tube voltage, tube current). CBCT mode: DAP measurements compared at different FOV sizes (12x9/8x8/8x5/5x5 cm) were equivalent under identical FDD and exposure conditions.
    General Clinical Image Quality (PANO/CBCT mode) | The Clinical Consideration and Image Quality Evaluation Report further demonstrated that the general image quality of the subject device is equivalent or better than the predicate device.
    Compliance with Standards (Non-Clinical) | The acceptance test was performed according to the requirements of 21 CFR 1020.30, 1020.33, and IEC 61223-3-5, and the device passed these tests. A non-clinical consideration report per the FDA guidance "Guidance for the Submission of 510(k)'s for Solid State X-ray Imaging Devices" was provided. Bench testing per the FDA guidance "Format for Traditional and Abbreviated 510(k)s, Performance Testing – Bench" was performed. Acceptance tests and image evaluation reports per IEC 61223-3-4 and IEC 61223-3-5 were also performed. All test results were satisfactory.
    Software Verification and Validation | Software verification and validation were conducted and documented as recommended by the FDA guidance "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." The software was considered a "moderate" level of concern. The viewing programs EzDent-i (K202116) and Ez3D-i (K200178) were previously cleared.
    Safety, EMC, and Performance Standards (Electrical, Mechanical, Environmental) | Electrical, mechanical, and environmental safety and performance testing was performed per IEC 60601-1:2005+AMD1:2012 (Edition 3.1), IEC 60601-1-3:2008+AMD1:2013 (Edition 2.1), and IEC 60601-2-63:2012+AMD1:2017 (Edition 1.1). EMC testing was conducted in accordance with IEC 60601-1-2:2014 (Edition 4). The manufacturing facility conforms with the relevant EPRC standards (21 CFR 1020.30, .31, and .33). The device conforms to the NEMA PS 3.1-3.18 Digital Imaging and Communications in Medicine (DICOM) Set. All test results were satisfactory.
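    For context on the resolution and DQE rows above, two standard detector-physics relations are shown below. The pixel pitch values are an inference under the assumption that the quoted lp/mm figures are Nyquist limits of the effective (binned) pixel pitch; neither the pitch values nor these formulas are stated in the submission.

```latex
% Nyquist limit set by the effective (binned) pixel pitch \Delta
% (assumption: the quoted lp/mm limits are Nyquist-limited, consistent with the table):
f_{\mathrm{Nyquist}} = \frac{1}{2\Delta},
\qquad \Delta_{2\times2} = 0.1~\mathrm{mm} \;\Rightarrow\; f_{\mathrm{Nyquist}} = 5~\mathrm{lp/mm},
\qquad \Delta_{4\times4} = 0.2~\mathrm{mm} \;\Rightarrow\; f_{\mathrm{Nyquist}} = 2.5~\mathrm{lp/mm}

% Detective quantum efficiency at spatial frequency f, as measured in detector bench testing:
\mathrm{DQE}(f) = \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
```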

    2. Sample Size for the Test Set and Data Provenance:

    • Sample Size: Not explicitly stated as a number of patient cases or images. The performance testing was conducted in a laboratory setting using test protocols and phantoms, not a clinical test set of patient images.
    • Data Provenance: The testing was "non-clinical" and "in a laboratory." It compared the performance of the new detector and the subject device against the predicate device. This implies retrospective comparison against previously established performance data for the predicate, and possibly prospective bench testing on the new device itself. The data is likely from the manufacturer's internal testing facilities (presumably in Korea, given the manufacturer's address).

    3. Number of Experts and Qualifications for Ground Truth of Test Set:

    • There is no mention of human experts being used to establish ground truth for a test set of clinical images. The provided information describes non-clinical performance testing using quantitative metrics (DQE, MTF, NPS, Contrast, Noise, CNR) and comparison to the predicate device, as well as a "Clinical consideration and Image Quality Evaluation Report" which "demonstrated that the general image quality of the subject device is equivalent or better than the predicate device in PANO/CBCT mode." However, details on how this "Clinical consideration" was performed (e.g., blinded review by experts, number of experts, their qualifications, or what "ground truth" they used) are not provided in this summary.

    4. Adjudication Method for the Test Set:

    • Not applicable as the testing described is primarily non-clinical and does not involve human readers or a clinical test set requiring adjudication in the context of diagnostic accuracy.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned. The study focuses on equivalence through non-clinical performance metrics and comparison to a predicate device, not on how human readers' performance improves with or without the device.

    6. Standalone Performance:

    • Yes, standalone performance testing of the device and detector was performed. The document outlines bench testing of the Xmaru1524CF Master Plus OP detector and the Green X 18 system itself, measuring metrics such as pixel size, DQE, MTF, NPS, contrast, noise, CNR, and DAP. These measurements represent the algorithm-only/device-only performance in a controlled environment; a rough sketch of how one such metric (CNR) is typically computed follows below.
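    The CNR figure referenced above is a simple region-of-interest statistic. The snippet below is a generic, illustrative calculation, not the manufacturer's test protocol; the CNR definition, the synthetic image, and the ROI coordinates are assumptions made for the example.

```python
# Generic contrast-to-noise ratio (CNR) calculation from two regions of interest.
# The definition (|mean difference| / background standard deviation), the synthetic
# image, and the ROI coordinates are illustrative assumptions, not the protocol
# actually used in the submission's bench testing.
import numpy as np


def cnr(image: np.ndarray, object_roi, background_roi) -> float:
    """CNR = |mean(object ROI) - mean(background ROI)| / std(background ROI)."""
    obj = image[object_roi]
    bkg = image[background_roi]
    return abs(obj.mean() - bkg.mean()) / bkg.std()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 5.0, size=(256, 256))   # noisy uniform background
    img[100:140, 100:140] += 25.0                   # synthetic low-contrast insert
    obj_roi = (slice(100, 140), slice(100, 140))
    bkg_roi = (slice(10, 50), slice(10, 50))
    print(cnr(img, obj_roi, bkg_roi))               # roughly 25 / 5 = 5
```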

    7. Type of Ground Truth Used:

    • For the non-clinical performance testing, the "ground truth" was established by physical standards and quantitative measurements in a laboratory setting, comparing the device's performance against industry standards (e.g., 21 CFR Part 1020.30, 1020.33, IEC 61223-3-5, IEC 61223-3-4) and the performance of the predicate device.
    • For the "Clinical consideration and Image Quality Evaluation Report," the method for establishing ground truth is not detailed, but it would presumably involve expert review of images obtained from the device.

    8. Sample Size for the Training Set:

    • No training set information is provided, as the submission describes a medical imaging device (X-ray system), not an AI algorithm that requires a training set of images. The "image reconstruction/MAR(Metal Artifact Reduction) algorithm" is mentioned as being the same as the predicate device, implying it was developed prior and is not a new algorithm requiring a new training study for this submission.

    9. How the Ground Truth for the Training Set Was Established:

    • Not applicable, as this is an imaging device and not an AI algorithm requiring a training set with established ground truth.