Search Results

Found 9 results

510(k) Data Aggregation

    K Number
    K232325
    Device Name
    RAYSCAN a-Expert
    Manufacturer
    Date Cleared
    2024-04-18

    (259 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    RAYSCAN a-Expert

    Intended Use

    The RAYSCAN α-P, SC, OCL, OCS panoramic X-ray imaging system with Cephalostat is an extra-oral source X-ray system, intended for dental radiographic examination of the teeth, jaw, and oral structures, to include panoramic examinations and implantology and for TMJ studies and cephalometry. Images are obtained using the standard narrow beam technique.

    Device Description

    RAYSCAN α-Expert (RAYSCAN α-P, SC, OCL, OCS) provides panoramic imaging for scanning the teeth, jaw, and oral structures. By rotating the C-arm, which houses a high-voltage generator, an all-in-one X-ray tube, and a detector on each end, panoramic images of oral and maxillofacial structures are obtained by recombining data scanned from different angles. Functionalities include panoramic image scanning for obtaining images of the whole dentition, and a cephalometric scanning option for obtaining cephalic images.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the "RAYSCAN α-Expert" dental X-ray system. The submission affirms its substantial equivalence to a predicate device, K142058. While it outlines several tests conducted to support this claim, it does not provide explicit acceptance criteria in a table format, nor does it detail a specific study with quantitative performance metrics for a direct comparison against such criteria.

    Here's a breakdown of the information that can be extracted, and where there are gaps regarding the requested specifics:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a table of acceptance criteria with corresponding device performance metrics. Instead, it states that "All test results were satisfactory" for performance (imaging performance) testing conducted according to IEC 61223-3-4. It also mentions that "a licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." This indicates a subjective assessment of image quality rather than quantitative performance against defined acceptance criteria.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Test Set Sample Size: The document mentions that "images were gathered from all detectors of RAYSCAN α-Expert using protocols with random patient age, gender, and size" and that "Clinical imaging samples were collected from new detectors on the proposed device at the two offices where the predicate device was installed for the clinical test images." However, it does not specify the exact number of images or patients in the clinical test set.
    • Data Provenance: The images were collected "at the two offices where the predicate device was installed for the clinical test images." The manufacturer is Ray Co., Ltd. located in South Korea. It's implied these are prospective clinical images gathered for the purpose of the submission.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of Experts: "The clinical performance of RAYSCAN α-Expert were clinically tested and approved by two licensed practitioners/clinicians."
    • Qualifications of Experts: They are described as "licensed practitioners/clinicians." No specific details such as years of experience, specialization (e.g., radiologist, dentist), or board certification are provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document states, "A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." It implies individual review, but does not specify any formal adjudication method (e.g., whether the two practitioners independently reviewed images and consensus was reached, or if there was a third adjudicator in case of disagreement).
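For context, the "2+1" shorthand above denotes a scheme in which two readers review each case independently and a third reader adjudicates only when they disagree. The document describes no such formal scheme; the following is purely an illustrative sketch of the 2+1 logic:

```python
def adjudicate_2_plus_1(reader1: bool, reader2: bool, adjudicator: bool) -> bool:
    """Final call under a 2+1 scheme: agreement stands; a third reader breaks ties."""
    if reader1 == reader2:
        return reader1
    return adjudicator

print(adjudicate_2_plus_1(True, True, False))   # readers agree -> adjudicator unused
print(adjudicate_2_plus_1(True, False, True))   # disagreement -> adjudicator decides
```

A "3+1" scheme extends the same idea to three primary readers with majority vote plus an adjudicator for residual ambiguity.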

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No MRMC comparative effectiveness study is mentioned. This device is an X-ray imaging system, not an AI-assisted diagnostic tool for humans, so this type of study would not be applicable. The comparison is between the new device's image quality and the image quality of the predicate device.
    • Effect Size: Not applicable.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This refers to an X-ray imaging device, not an algorithm. Therefore, "standalone (algorithm only)" performance is not applicable. The device's primary function is image acquisition.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth for the clinical image quality assessment appears to be expert opinion/consensus (from two licensed practitioners) regarding whether the images were "of acceptable quality for the intended use." There's no mention of pathology or outcomes data for establishing ground truth.

    8. The sample size for the training set

    The document mentions software validation, but this X-ray system is not described as an AI/ML device that requires a distinct "training set" in the context of machine learning model development. This question is not directly applicable to the type of device described.

    9. How the ground truth for the training set was established

    As the device is not described as involving an AI/ML model with a training set, this question is not directly applicable. The software mentioned is for saving patient and image data, inquiries, and image generation, and was validated according to FDA guidance for software in medical devices, not specific AI/ML training.

    Summary of what is present and what is missing:

    • Acceptance Criteria/Performance Table: Not provided in the requested format. General statement of "satisfactory" test results and "acceptable quality."
    • Test Set Sample Size & Provenance: Sample size not quantified. Provenance is South Korea, likely prospective.
    • Number & Qualification of Experts: Two licensed practitioners/clinicians. No further qualification details.
    • Adjudication Method: Not specified.
    • MRMC Study: Not applicable.
    • Standalone Performance: Not applicable.
    • Type of Ground Truth: Expert opinion on image quality.
    • Training Set Sample Size: Not applicable (not an AI/ML device in this context).
    • Training Set Ground Truth: Not applicable.

    K Number
    K232287
    Manufacturer
    Date Cleared
    2023-08-31

    (30 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    RAYSCAN a-Expert3D

    Intended Use

    RAYSCAN a-Expert3D, panoramic x-ray imaging system with cephalostat, is an extra-oral source x-ray system, which is intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations and implantology and for TMJ studies and cephalometry, and it has the capability, using the CBCT technique, to generate dentomaxillofacial 3D images.

    Device Description

    RAYSCAN α-3D, SM3D, M3DS and M3DL are 3D computed tomography systems for scanning hard tissues such as bone and teeth. By rotating the C-arm, which houses a high-voltage generator, an X-ray tube, and a detector on each end, CBCT images of dentomaxillofacial structures are obtained by recombining data scanned from the same level at different angles. Functionalities include a panoramic image option and a cephalometric option.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the RAYSCAN a-Expert3D, a dental X-ray imaging system. The document focuses on demonstrating substantial equivalence to a predicate device, rather than proving that the device meets specific acceptance criteria through a comprehensive clinical study.

    Therefore, the requested information regarding detailed acceptance criteria, sample sizes, expert qualifications, and specific study designs (MRMC, standalone performance) is largely not present in the provided text. The document primarily highlights non-clinical bench testing and the provision of clinical image samples for review by licensed practitioners to further support substantial equivalence.

    Based on the available information, here's what can be extracted and what is missing:


    Overview of Device Performance and Study Information

    The submission for the RAYSCAN a-Expert3D is a 510(k) for substantial equivalence to a predicate device (RAYSCAN α-Expert3D K190812 and RCT800 K230753). The performance assessment primarily relies on demonstrating that the modified device (with updated X-ray voltage/current and detector types) maintains similar safety and effectiveness compared to the predicate, as supported by non-clinical and limited clinical data.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present explicit quantitative acceptance criteria for device performance, such as specific accuracy, sensitivity, or specificity thresholds. Instead, it states that "All test results were satisfactory" for bench testing. The primary "acceptance criterion" implied throughout the 510(k) process is demonstrating substantial equivalence to the predicate device.

    | Criterion / Aspect | Acceptance Standard (Implied) | Reported Device Performance |
    |---|---|---|
    | Imaging Performance | Satisfy designated tolerances for imaging properties (per the FDA Guidance for 510(k)'s for Solid State X-ray Imaging Devices and standards IEC 61223-3-4, IEC 61223-3-7). Demonstrate clinical image quality similar to the predicate device. | "Performance (Imaging performance) testing was conducted according to standard of IEC 61223-3-4 and IEC 61223-3-7. All test results were satisfactory." "Clinical imaging samples were collected... A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." "Because the subject device uses the same detector as the predicate device, there are no significant differences between the two devices as a result of non-clinical testing." |
    | Safety (Electrical, Mechanical, Environmental) | Compliance with relevant international standards: IEC 60601-1, IEC 60601-1-3, IEC 60601-1-6, IEC 60601-2-63, IEC 60601-1-2 (EMC). | "Electrical, mechanical and environmental safety testing according to standard of IEC 60601-1: 2005/AMD1:2012 (3.1 Edition), IEC 60601-1-3: 2008/AMD1:2013 (Second Edition), IEC 60601-1-6:2010 (Third Edition) and IEC 60601-2-63: 2012/AMD1:2017 (First Edition) were performed. EMC testing was conducted in accordance with the standard IEC 60601-1-2: 2014 (Edition 4.0)." (Successful compliance is implied, as part of a substantial-equivalence submission.) |
    | Software Validation | Validation according to the FDA "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" and "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices". Software level of concern deemed "moderate"; differences do not affect safety/effectiveness. | "The software... has been validated according to the FDA "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" and "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices" to assure substantial equivalence." (Successful validation is implied.) |
    | Patient Dosage | Patient dosage satisfies the designated tolerance. | "Bench testing is used to assess whether the parameters required to describe functionalities related to imaging properties of the dental X-ray device and patient dosage satisfy the designated tolerance." (A satisfactory result is implied.) |

    2. Sample size(s) used for the test set and the data provenance

    • Test Set Sample Size: The document states that "Clinical imaging samples were collected from new detectors on the proposed device at the two offices where the predicate device was installed for the clinical test images." It also mentions "images were gathered from all detectors installed with RAYSCAN a-Expert3D using protocols with random patient age, gender, and size." However, no specific numerical sample size (e.g., number of patients or images) for the clinical test set is provided.
    • Data Provenance:
      • Country of Origin: Not explicitly stated for the clinical data. The manufacturer is in South Korea.
      • Retrospective or Prospective: Not explicitly stated. The phrasing "Clinical imaging samples were collected from new detectors on the proposed device" could suggest prospective collection for the purpose of this submission, but it's not definitive.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: "two licensed practitioners/clinicians."
    • Qualifications of Experts: "A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." Specific specialties (e.g., radiologist, dentist with specific experience) or years of experience are not provided beyond "licensed practitioner."

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • "A licensed practitioner reviewed the sample clinical images." This suggests an individual review, potentially without formal adjudication unless the "two licensed practitioners" independently reviewed and concurred, which is not detailed. No specific adjudication method (e.g., consensus, majority vote) is mentioned. The primary assessment seems to be a qualitative review for "acceptable quality for the intended use."

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • No, an MRMC comparative effectiveness study was not done. The document describes a "substantial equivalence" submission for an imaging device, not an AI-assisted diagnostic tool. The purpose was to show the new device produces images comparable to the predicate for diagnostic use. No AI component is mentioned, and therefore no assessment of human reader improvement with AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Not applicable. This device is an X-ray imaging system, not a diagnostic algorithm. Its performance is related to image acquisition parameters and image quality, not an output from an algorithm in the typical sense of standalone AI.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • The "ground truth" for the clinical images appears to be the qualitative assessment of "acceptable quality for the intended use" by licensed practitioners. It is not based on pathology, outcomes data, or a formal expert consensus process as would be typically seen for a diagnostic performance study. The images serve to "show that the complete system works as intended."

    8. The sample size for the training set

    • Not applicable. This is an X-ray imaging system, not a machine learning model, so there is no "training set." The software validation refers to standard software development practices, not AI model training.

    9. How the ground truth for the training set was established

    • Not applicable, as there is no training set for an AI model.

    In summary, the provided document focuses on demonstrating that the updated RAYSCAN a-Expert3D device is substantially equivalent to previously cleared predicate devices, primarily through non-clinical bench testing and a limited qualitative review of clinical images by licensed practitioners. It does not contain the detailed, quantitative clinical study data (such as MRMC, standalone algorithm performance, or specific metrics with acceptance thresholds) typically associated with AI/CADe device submissions.


    K Number
    K190812
    Manufacturer
    Date Cleared
    2019-04-24

    (26 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    RAYSCAN a-Expert3D

    Intended Use

    RAYSCAN a-Expert 3D, panoramic x-ray imaging system with cephalostat, is an extraoral source x-ray system, which is intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations and implantology and for TMJ studies and cephalometry, and it has the capability, using the CBCT technique, to generate dentomaxillofacial 3D images.

    Device Description

    RAYSCAN a-Expert 3D, panoramic x-ray imaging system with cephalostat, is an extraoral source x-ray system... The device uses cone shaped x-ray beam projected on to a flat panel detector, and the examined volume image is reconstructed to be viewed in 3D viewing stations.

    AI/ML Overview

    The provided text is an FDA 510(k) clearance letter for the RAYSCAN a-Expert3D device. It outlines the device's classification, regulatory requirements, and indications for use. However, it does not contain any information about the acceptance criteria or the study that proves the device meets those criteria.

    The document states that the device is a Computed Tomography X-ray System (21 CFR 892.1750) for dental radiographic examination. The indications for use describe its capabilities for panoramic examinations, implantology, TMJ studies, cephalometry, and the generation of dentomaxillofacial 3D images using CBCT technique.

    Without further documentation (e.g., the 510(k) summary or a detailed clinical/performance study report), it is impossible to provide the requested information regarding acceptance criteria and the study proving the device's performance. The provided text is solely the FDA's clearance notice, not the submission itself or its supporting data.


    K Number
    K142058
    Device Name
    RAYSCAN A-EXPERT
    Manufacturer
    Date Cleared
    2015-04-22

    (267 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    RAYSCAN A-EXPERT

    Intended Use

    The RAYSCAN α-Expert Dental X-Ray System is an extraoral source dental panoramic and optional cephalometric X-ray system intended to produce X-rays for dental radiographic examination and diagnosis of diseases of the teeth, jaw, and oral structures.

    Device Description

    The RAYSCAN α-Expert Dental X-Ray System is an extraoral source dental panoramic and optional cephalometric X-ray system. The machine consists of an X-ray generator and arms that transfer X-ray signals to a 2D sensor. The arms are controlled for rotational and linear movement to synchronize the sensor with the X-ray generator and acquire the image of interest. The unit is adjustable to the patient's height, and a PC system reconstructs the image. Panorama mode is used to diagnose structures in a panoramic view; cephalometric mode supports orthodontic treatment. The two functions can be combined in one system, or a panorama-only system can be supplied, depending on need. Conversion of the transmitted X-rays to digital signals takes place entirely in the detector, which converts X-rays to light depending on the structure materials. Detectors are classified as indirect, in which the light is converted to digital signals by a photodiode, or direct, in which the signal is converted directly to digital form. This unit uses both direct and indirect methods depending on the internal structure materials.

    AI/ML Overview

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature | Acceptance Criteria (Implicit) | Reported Device Performance (Proposed Device Configuration: RAYSCAN α-OCL & α-OCS) | Predicate Device-1 Performance (RAYSCAN α-Expert [K122918]) | Predicate Device-2 Performance (RAYSCAN α-Expert [K131693]) |
    |---|---|---|---|---|
    | Detector Type (Ceph, One-shot, Large) | Indirect type (GADOX Scintillator) | GADOX (Indirect type) | N/A | CsI (Indirect type) |
    | Total Pixel Area (Ceph, One-shot, Large) | 427(W)x356(H) mm | 427(W)x356(H) mm | Same as Predicate #1 | 43.2 x 36.0 cm |
    | Total Pixels (Ceph, One-shot, Large) | 3072x2560 | 3072x2560 | N/A | 2880 x 2400 |
    | Pixel Size (Ceph, One-shot, Large) | 139 um | 139 um | N/A | 150 um |
    | Limiting Resolution (Ceph, One-shot, Large) | 3.6 lp/mm | 3.6 lp/mm | N/A | 3.3 lp/mm |
    | MTF (Ceph, One-shot, Large) | 54% at 1 lp/mm | 54% at 1 lp/mm | N/A | 45% at 1 lp/mm |
    | DQE (Ceph, One-shot, Large) | 0.2 at 1 lp/mm | 0.2 at 1 lp/mm | N/A | 0.41 at 1 lp/mm |
    | Detector Type (Ceph, One-shot, Standard) | Indirect type (GADOX Scintillator) | GADOX (Indirect type) | N/A | N/A |
    | Total Pixel Area (Ceph, One-shot, Standard) | 302(W)x249(H) mm | 302(W)x249(H) mm | N/A | N/A |
    | Total Pixels (Ceph, One-shot, Standard) | 2176x1792 | 2176x1792 | N/A | N/A |
    | Pixel Size (Ceph, One-shot, Standard) | 139 um | 139 um | N/A | N/A |
    | Limiting Resolution (Ceph, One-shot, Standard) | 3.6 lp/mm | 3.6 lp/mm | N/A | N/A |
    | MTF (Ceph, One-shot, Standard) | 54% at 1 lp/mm | 54% at 1 lp/mm | N/A | N/A |
    | DQE (Ceph, One-shot, Standard) | 0.2 at 1 lp/mm | 0.2 at 1 lp/mm | N/A | N/A |

    Note: The document primarily focuses on demonstrating substantial equivalence to predicate devices, rather than explicit numerical acceptance criteria. The "Acceptance Criteria (Implicit)" column is derived from the performance of the predicate devices or the device's own reported values that are compared against. The primary claim of meeting criteria is that the "diagnostic image quality of the new detector is equal or better than those of the predicate device and there is no significant difference in efficiency and safety."
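    As a side note, the limiting-resolution figures reported here are consistent with the detector Nyquist limit, which follows directly from pixel pitch. A minimal check (the function name is ad hoc):

```python
# Nyquist-limited spatial resolution of a digital detector:
# f_Nyquist = 1 / (2 * pixel_pitch).

def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Return the Nyquist frequency in line pairs per mm for a given pixel pitch."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

print(round(nyquist_lp_per_mm(139), 1))  # 3.6 lp/mm -- matches the proposed detector
print(round(nyquist_lp_per_mm(150), 1))  # 3.3 lp/mm -- matches predicate K131693
```

    This suggests the "limiting resolution" rows are simply the sampling limit of each detector rather than a measured system MTF cutoff.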

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Test Set): Not explicitly stated as a number of patients or images. The document mentions "clinical imaging samples are collected from the new 2 one shot detector on propose device at the 2 offices where the predicate device is installed."
    • Data Provenance: The images were gathered "on any protocols with random patient age, gender, and size." The location where images were collected is "the 2 offices where the predicate device is installed," which implies a clinical setting. The country of origin is not explicitly stated, but the manufacturer is based in South Korea. The study appears to be prospective in the sense that images were collected from the new devices in a clinical setting to demonstrate performance.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Two licensed practitioners/clinicians.
    • Qualifications of Experts: The document states they were "licensed practitioners/clinicians." No further specific qualifications (e.g., years of experience, specialization like radiologist) are provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: The document states that the "two licensed practitioners/clinicians observed and verified that dental X ray system from RAYSCAN α" and, as licensed practitioners, their "diagnosis of the images, it might be proved that the clinical diagnosis and structures are acceptable in the region of interests." This suggests an expert consensus/agreement, but no formal adjudication method (like 2+1 or 3+1) is detailed. It implies a qualitative assessment of acceptability rather than a quantitative ground truth measurement.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, a formal MRMC comparative effectiveness study demonstrating human reader improvement with AI assistance (vs. without AI) was not explicitly conducted or described. The study focused on demonstrating substantial equivalence of the device's technical specifications and image quality.

    6. Standalone (Algorithm Only) Performance Study

    • No, a standalone (algorithm only without human-in-the-loop performance) study was not described. The evaluation involved "licensed practitioners/clinicians" observing and verifying the images.

    7. Type of Ground Truth Used

    • Ground Truth: The "ground truth" for the clinical evaluation was based on expert consensus/clinical diagnosis. The practitioners' "diagnosis of the images" was used to conclude that "the clinical diagnosis and structures are acceptable in the region of interests." There is no mention of pathology or outcomes data being used as ground truth.

    8. Sample Size for the Training Set

    • The document describes a 510(k) submission for an X-ray imaging device; it does not explicitly refer to an "AI algorithm" with a training set. The device is an image acquisition system. Therefore, the concept of a training set for an AI model is not applicable in this context. The focus is on the performance of the imaging hardware.

    9. How the Ground Truth for the Training Set Was Established

    • As mentioned above, the device is an X-ray imaging system, not an AI algorithm requiring a training set. Therefore, this question is not applicable to the provided document. The ground truth mentioned in the document pertains to the clinical assessment of the acquired images.

    K Number
    K142247
    Manufacturer
    Date Cleared
    2015-04-17

    (262 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    RAYSCAN A-EXPERT 3D

    Intended Use

    RAYSCAN a-Expert 3D, panoramic x-ray imaging system with cephalostat, is an extraoral source x-ray system, which is intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations and implantology and for TMJ studies and cephalometry, and it has the capability, using the CBCT technique, to generate dentomaxillofacial 3D images.

    The device uses cone shaped x-ray beam projected on to a flat panel detector, and the examined volume image is reconstructed to be viewed in 3D viewing stations.

    2D Image is obtained using the standard narrow beam technique.

    Device Description

    RAYSCAN α-3D, SM3D, M3DS and M3DL are 3D computed tomography systems for scanning hard tissues such as bone and teeth.

    By rotating the C-arm, which houses a high-voltage generator, an all-in-one X-ray tube, and a detector on each end, sectional images are obtained by recombining data from the same level scanned from different angles. A panoramic image scanning function for imaging the whole dentition and a cephalometric scanning option for obtaining cephalic images are included.

    Detector Options:

    Specific models according to the detector type; CT, Pano and Ceph mounted in the RAYSCAN α- Expert 3D system are classified as shown below.

    RAYSCAN α-3D: CT (model C10900D) + PANO (model C10500D)
    RAYSCAN α-SM3D: CT (model C10900D) + PANO (model C10500D) + Scan Ceph (model XID-C24DS)
    RAYSCAN α-M3DL: CT (model C10900D) + PANO (model C10500D) + One-shot Ceph (model PaxScan 4336X)
    RAYSCAN α-M3DS: CT (model C10900D) + PANO (model C10500D) + One-shot Ceph (model PaxScan 2530C)

    AI/ML Overview

    The provided text describes the RAYSCAN a-Expert 3D, a dental X-ray system. Here's a breakdown of the acceptance criteria and study information:

    Acceptance Criteria and Device Performance

    The document does not explicitly state "acceptance criteria" for performance metrics in a pass/fail format. Instead, it compares the proposed device's detector specifications (mainly for the new one-shot cephalometric models, PaxScan 4336X and PaxScan 2530C) against those of the predicate devices. The implicit acceptance criterion is that the new detectors should have comparable or better imaging performance metrics (MTF, DQE, limiting resolution, pixel size) to the predicate devices, and that the overall system performs as intended.

    Here's a table summarizing the relevant performance specifications for the new one-shot Ceph detectors and their closest predicate counterparts:

    | Parameter | Acceptance Criteria (Predicate Device SDX-4336CP) | Reported Device Performance (Proposed Device PaxScan 4336X) | Reported Device Performance (Proposed Device PaxScan 2530C) |
    |---|---|---|---|
    | Ceph (One-shot, Large Size) Detector | | | |
    | Manufacturer | Samsung Mobile Display | Varian | N/A (different size for this comparison) |
    | Model | SDX-4336CP | PaxScan 4336X | N/A |
    | Scintillator Material | CsI (Indirect type) | GADOX (Indirect type) | N/A (GADOX) |
    | Total pixel area | 43.2 x 36.0 cm | 427(W)x356(H) mm (42.7 x 35.6 cm) | N/A (smaller size, 30.2 x 24.9 cm for 2530C) |
    | Total pixels | 2880 x 2400 | 3072x2560 | N/A (2176x1792 for 2530C) |
    | Pixel size | 150 um | 139 um | N/A (139 um for 2530C) |
    | Limiting resolution | 3.3 lp/mm | 3.6 lp/mm | N/A (3.6 lp/mm for 2530C) |
    | MTF (at 1 lp/mm) | 45% | 54% | N/A (54% for 2530C) |
    | DQE (at 1 lp/mm) | 0.41 | 0.2 | N/A (0.2 for 2530C) |

    Summary of Performance:
    The proposed detectors (PaxScan 4336X and 2530C) show a higher limiting resolution (3.6 lp/mm vs 3.3 lp/mm) and a smaller pixel size (139 um vs 150 um) compared to the predicate's one-shot Ceph detector (SDX-4336CP). The MTF (Modulation Transfer Function) is also higher (54% vs 45%). However, the DQE (Detective Quantum Efficiency) is lower (0.2 vs 0.41), indicating less efficient X-ray photon utilization for image quality. The overall conclusion states that "the diagnostic image quality of the new detector is equal or better than those of the predicate device and there is no significant difference in efficiency and safety." This implies the lower DQE was deemed acceptable in the context of other improved metrics and overall system performance.
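    The trade-off described above can be quantified from the reported figures alone (a simple sketch; the metric names and the helper are ad hoc, and only the values stated in the table are used):

```python
# Relative change in each reported imaging metric, proposed detector
# (PaxScan 4336X) vs. predicate (SDX-4336CP), using the table's figures.
specs = {
    # metric: (predicate, proposed)
    "pixel_size_um":      (150.0, 139.0),
    "limiting_res_lp_mm": (3.3, 3.6),
    "mtf_at_1lp_pct":     (45.0, 54.0),
    "dqe_at_1lp":         (0.41, 0.2),
}

def pct_change(pred: float, prop: float) -> float:
    """Percentage change from predicate to proposed value."""
    return 100.0 * (prop - pred) / pred

for metric, (pred, prop) in specs.items():
    print(f"{metric}: {pred} -> {prop} ({pct_change(pred, prop):+.0f}%)")
```

    The output makes the asymmetry plain: resolution and MTF improve by roughly 9% and 20%, while DQE is roughly halved, which is the trade-off the submission deemed acceptable overall.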


    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • Test Set Sample Size: The document mentions "clinical imaging samples are collected from the new 2 one shot detector on propose device at the 2 offices." It also states "clinical test images were gathered from the new 2 one shot ceph detector installed with RAYSCAN α-M3DL and M3DS on any protocols with random patient age, gender, and size." However, a specific number of cases or images for the clinical test set is not provided.
      • Data Provenance: Not explicitly stated, but implies real-world clinical data as the images were gathered from "2 offices" where the proposed devices were installed. It's likely prospective for these specific tests as it references "random patient age, gender, and size," suggesting real-time image acquisition for the test. However, the overarching context of a 510(k) submission might involve retrospective review of other data.
    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • Number of Experts: "two licensed practitioners/clinicians"
      • Qualifications of Experts: "licensed practitioners/clinicians" - no further details provided (e.g., specialty, years of experience).
    3. Adjudication Method for the Test Set:

      • The document states, "As licensed practitioners or clinician diagnoses of the images, it might be proved that the clinical diagnosis and structures are acceptable in the region of interests." This suggests that the two practitioners independently reviewed the images and determined their diagnostic acceptability, likely reaching a consensus or individual affirmation of acceptability, rather than a formal adjudication process like 2+1 or 3+1 for conflicting interpretations. The method is not detailed beyond "observed and verified."
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No, a direct MRMC comparative effectiveness study is not explicitly mentioned. The study focuses on comparing technical specifications and verifying diagnostic acceptability of the proposed device's images. There's no mention of human readers' performance with and without AI assistance, as this is an imaging device, not an AI diagnostic tool.
    5. Standalone (Algorithm Only) Performance Study:

      • Yes, a standalone performance study was done for the detector components. Bench testing (IEC 61223-3-4, IEC 61223-3-5, FDA Guidance "Guidance for the submissions of 510(k)'s for Solid State Xray Imaging Devices") was conducted to assess imaging performance metrics like MTF and DQE. This is a technical performance evaluation of the hardware, not an AI algorithm. The device itself is an X-ray imaging system, which inherently has "standalone" image generation capability without human interpretation during the image creation phase.
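The bench metrics MTF and DQE are linked through the standard detector-physics relation used in IEC 62220-1-style testing: DQE(f) = MTF(f)² / (q · NNPS(f)), where q is the incident photon fluence and NNPS the normalized noise power spectrum. This explains how a detector can post a higher MTF yet a lower DQE, as seen here. A schematic sketch; the fluence and noise values are invented for illustration and chosen only to reproduce the table's DQE figures, they are not from the submission:

```python
def dqe(mtf: float, nnps: float, fluence_q: float) -> float:
    """Detective quantum efficiency at one spatial frequency.

    DQE(f) = MTF(f)**2 / (q * NNPS(f)): higher MTF raises DQE, while a
    proportionally higher normalized noise power spectrum lowers it.
    """
    return mtf ** 2 / (fluence_q * nnps)

# Illustrative numbers only: a sharper detector (higher MTF) can still
# have a lower DQE if its noise power is disproportionately higher.
print(round(dqe(mtf=0.45, nnps=4.9e-6, fluence_q=1.0e5), 2))  # ~0.41 (predicate)
print(round(dqe(mtf=0.54, nnps=1.5e-5, fluence_q=1.0e5), 2))  # ~0.19 (proposed)
```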
    6. Type of Ground Truth Used:

      • Clinical Diagnoses/Acceptability: For the clinical image evaluation, the ground truth was established by "licensed practitioners or clinician diagnoses of the images" determining if "the clinical diagnosis and structures are acceptable in the region of interests." This implies a form of expert consensus or clinical expert opinion on the diagnostic utility of the images. It's not explicitly stated as pathology or outcomes data.
    7. Sample Size for the Training Set:

      • Not Applicable / Not Provided. This device is an X-ray imaging system, not an AI algorithm that typically requires a large training set of labeled data for machine learning. The "software of RAYSCAN α-Expert3D has been validated" but no training set for the imaging capabilities of the X-ray system itself is mentioned, as its function is image acquisition, not autonomous interpretation.
    8. How the Ground Truth for the Training Set Was Established:

      • Not Applicable / Not Provided. As above, there is no mention of a "training set" in the context of an AI algorithm or diagnostic model here. The system's function is image acquisition based on physics and engineering principles, not learning from labeled data.

    K Number
    K131695
    Manufacturer
    Date Cleared
    2013-11-01

    (144 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    RAYSCAN A-EXPERT 3D

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    RAYSCAN a-Expert 3D, panoramic x-ray imaging system with cephalostat, is an extraoral source x-ray system, which is intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations and implantology and for TMJ studies and cephalometry, and it has the capability, using the CBVT technique, to generate dentomaxillofacial 3D images. The device uses cone shaped x-ray beam projected on to a flat panel detector, and the examined volume image is reconstructed to be viewed in 3D viewing stations. 2D Image is obtained using the standard narrow beam technique.

    Device Description

    RAYSCAN α-Expert 3D is a 3D computed tomography system for scanning hard tissues such as bones and teeth. By rotating the C-arm, which carries a high-voltage generator, an all-in-one X-ray tube, and a detector on each end, a CBCT image of the whole dentomaxillofacial region is attained by recombining data from the same level scanned from different angles. A panoramic image scanning function for attaining images of the entire or segmental teeth and a cephalometric scanning option (One-shot type and Scan type) for attaining cephalic images are included. Two different types of CEPH detectors are available: Base: RAYSCAN α-3D: CT+PANO; Option: RAYSCAN α-Multi 3D: CT+PANO+One-shot CEPH; Option: RAYSCAN α-SM3D: CT+PANO+Scan CEPH. SMARTDent software for processing and archiving is optional.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study proving the device meets them:

    The document (K131695) describes the RAYSCAN α-Expert 3D, a dental panoramic/tomography and cephalometric x-ray system. This is a Special 510(k) submission, indicating a modification to a previously cleared device (K122981).

    Emphasis: It's crucial to understand that this document is a 510(k) submission, which primarily focuses on demonstrating substantial equivalence to a predicate device. It does not typically involve detailed performance studies with acceptance criteria in the way one might expect for a novel device or a significantly modified one that requires extensive clinical validation. Instead, the "study" proving acceptance is largely based on demonstrating that the modified device's performance is equivalent to the predicate, particularly for the new feature (Scan type CEPH sensor).


    Acceptance Criteria and Study Details:

    1. Table of Acceptance Criteria and Reported Device Performance

    Given that this is a Special 510(k) for substantial equivalence to a predicate device, the "acceptance criteria" are essentially for demonstrating that the modified device's specifications and performance meet or are equivalent to the predicate device, especially for the new feature.

    | Parameter | Acceptance Criteria (Predicate: RAYSCAN α-Expert 3D, K122981) | Reported Device Performance (Modified: RAYSCAN α-Expert 3D) | Device Meets Criteria? |
    | --- | --- | --- | --- |
    | Common Name | Dental panoramic/tomography and cephalometric x-ray system | Dental panoramic/tomography and cephalometric x-ray system | Yes |
    | Indications for Use | Dental radiographic examination of teeth, jaw, oral structures; panoramic examinations, implantology, TMJ studies, cephalometry; dento-maxillo-facial 3D images using the CBVT technique; cone-shaped x-ray beam projected onto a flat panel detector for 3D reconstruction; 2D images via standard narrow beam | Same as predicate | Yes |
    | 3D Technology | CBCT (cone beam computed tomography) | CBCT (cone beam computed tomography) | Yes |
    | Performance Specification | Panoramic; Cephalometric (optional: One-shot type) | Panoramic; Cephalometric (optional: One-shot type, Scan type) | Yes (with added Scan type) |
    | Functional Option | Base α-3D: CT+PANO; Option α-Multi 3D: CT+PANO+One-shot CEPH | Base α-3D: CT+PANO; Options α-Multi 3D: CT+PANO+One-shot CEPH, α-SM3D: CT+PANO+Scan CEPH | Yes (with added Scan type option) |
    | Detector Type (CT) | Flat panel X-ray sensor | Flat panel X-ray sensor | Yes |
    | Detector Type (Pano) | Flat panel X-ray sensor | Flat panel X-ray sensor | Yes |
    | Detector Type (Ceph, One-shot) | Flat panel X-ray sensor | Flat panel X-ray sensor (One-shot type) | Yes |
    | Detector Type (Ceph, Scan) | Not applicable (predicate did not have this) | CdTe direct flat panel sensor (Scan type) | N/A (new feature, evaluated for equivalence) |
    | Focal Size | 0.5 mm | 0.5 mm | Yes |
    | Field of View (CT) | 90x90 mm | 90x90 mm | Yes |
    | X-ray Voltage | 60~90 kVp | 60~90 kVp | Yes |
    | X-ray Current | 4~17 mA | 4~17 mA | Yes |
    | Total Filtration | 2.6 mm Al equivalent | 2.6 mm Al equivalent | Yes |
    | Magnification (CT) | 1.39 | 1.39 | Yes |
    | Magnification (Pano) | 1.31 | 1.31 | Yes |
    | Magnification (Ceph, One-shot) | 1.13 | 1.13 | Yes |
    | Magnification (Ceph, Scan) | Not applicable | 1.11 | N/A (new feature, evaluated for equivalence) |
    | Scan Time (CT) | 14 sec | 14 sec | Yes |
    | Scan Time (Pano) | 14 sec | Below 14 sec | Yes |
    | Scan Time (Ceph, One-shot) | 0.3~3.0 sec | 0.3~3.0 sec | Yes |
    | Scan Time (Ceph, Scan) | Not applicable | Below 18 sec | N/A (new feature, evaluated for equivalence) |
    | Safety and EMC (Applicable Standards) | IEC 60601-1, -1-1, -1-3, -2-7, -2-28, -2-32, -2-44, -1-2 | IEC 60601-1, -1-1, -1-3, -2-7, -2-28, -2-32, -2-44, -1-2 | Yes |
    | Product Certificate | CE0120 (MDD 93/42/EEC) | CE0120 (MDD 93/42/EEC) | Yes |
    | Non-clinical & Clinical Considerations (added Scan CEPH sensor) | N/A | Report provided for equivalence | Yes |

    Summary of "Acceptance": The device meets "acceptance criteria" by demonstrating that all parameters common with the predicate device are identical or within acceptable bounds, and the new "Scan type CEPH sensor" feature is evaluated and found to be substantially equivalent in terms of safety and effectiveness.
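The magnification figures in the table (1.39 CT, 1.31 pano, 1.13 and 1.11 ceph) follow from simple projection geometry: M = SID / SOD, the source-to-image distance over the source-to-object distance. A minimal sketch; the distances below are illustrative assumptions chosen to reproduce the pano value, as the actual geometry is not given in the submission:

```python
def magnification(sid_mm: float, sod_mm: float) -> float:
    """Geometric magnification of a point-source projection: M = SID / SOD."""
    return sid_mm / sod_mm

# Illustrative distances only (not from the 510(k)):
print(round(magnification(sid_mm=655.0, sod_mm=500.0), 2))  # 1.31, the pano value
```

This is why ceph magnifications are lower than pano or CT: the imaged anatomy sits proportionally closer to the detector relative to the source.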

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state a "test set" sample size or its provenance in terms of patient data. For a Special 510(k), the focus is often on engineering verification and validation (V&V) and comparing the new feature to the existing one.

    • Test Set: Not explicitly defined in terms of patient images for a statistical study. The "test set" for demonstrating equivalence appears to be qualitative comparisons of imaging performance, particularly for the new "Scan type CEPH sensor."
    • Data Provenance: Not specified. However, given that the manufacturer is based in South Korea, it's plausible any internal testing or "expert review of image comparisons" would involve data generated internally, possibly from phantoms or a limited set of patient images, likely retrospective if not specifically collected for this submission.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document mentions an "outcome of experts review of image comparisons."

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. It's reasonable to infer they would be qualified to review dental radiographic images, such as radiologists or oral and maxillofacial radiologists, but specific experience levels are not provided.

    4. Adjudication Method for the Test Set

    The document states "outcome of experts review of image comparisons," implying a qualitative assessment.

    • Adjudication Method: Not explicitly stated (e.g., 2+1, 3+1). It was likely a consensus approach or individual expert assessment contributing to an overall finding of equivalence. There's no detail on how disagreements would be resolved.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: No, an MRMC comparative effectiveness study was not conducted. This device is an imaging system, not an AI-assisted diagnostic tool. The submission is about physical device performance and substantial equivalence, not the improvement of human readers with AI assistance.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Not applicable in the context of an AI algorithm. This device is an X-ray imaging system. The "performance" assessment focuses on the image quality produced by the system and its technical specifications compared to the predicate device.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: For the "expert review of image comparisons," the "ground truth" would be the subjective assessment of image quality, diagnostic utility, and comparability between images from the modified device (especially the new CEPH scan type) and the predicate device. This is a form of expert consensus/assessment, rather than pathology or outcomes data. For the technical specifications, the ground truth is established through engineering measurements and adherence to international standards.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: Not applicable. This device is an X-ray imaging machine, not a machine learning or AI algorithm that requires a "training set" in the conventional sense.

    9. How the Ground Truth for the Training Set was Established

    • Ground Truth for Training Set: Not applicable, as there is no training set for this type of device.

    K Number
    K131693
    Device Name
    RAYSCAN A-EXPERT
    Manufacturer
    Date Cleared
    2013-11-01

    (144 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    RAYSCAN A-EXPERT

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    The RAYSCAN α- Expert Dental X-Ray System is an extraoral source dental panoramic and optional cephalometric X-ray system intended to produce X-rays for dental radiographic examination and diagnosis of diseases of the teeth, jaw, and oral structures.

    Device Description

    The modified RAYSCAN α-Expert is designed for panoramic scanning of the teeth, jaw, and oral cavity, creating and controlling the X-ray beam. As a dental digital panoramic X-ray system with the X-ray source located outside the oral cavity, it includes an optional cephalometric scanning function for acquiring images of the head. The modifications are as follows. Hardware: an added optional CEPH sensor (Scan type); in addition to the one-shot type CEPH sensor of the original device (K122918), the modified device offers a Scan type CEPH sensor. Updated software, including an additional protocol for the new Scan type CEPH sensor (Lateral wide) and additional PANO protocols (Segmentation (Individual Tooth), Bitewing, and Orthogonal). RAYSCAN α-Expert offers digital imaging with or without the optional one-shot type or Scan type cephalometric attachment. Detector options: Base: RAYSCAN α-P: PANO; Option: RAYSCAN α-OC: PANO+One-shot CEPH; Option: RAYSCAN α-SC: PANO+Scan CEPH. The system includes optional "SMARTDent" software for processing and archiving.

    AI/ML Overview

    The provided text describes a 510(k) submission for the RAYSCAN α-Expert, primarily focusing on modifications to an existing device (K122918). The modifications include adding a "Scan type CEPH sensor" and new PANO protocols. The submission argues for substantial equivalence to the predicate device.

    Here's an analysis of the acceptance criteria and study information provided:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state "acceptance criteria" in a quantitative performance metric sense (e.g., sensitivity, specificity, accuracy thresholds). Instead, the performance specification comparison table largely establishes that the modified device either has the same specifications as the predicate device or additional specifications for the new "Scan type" cephalometric function. The implicit acceptance criterion is that the new components perform either equivalently to the existing components or to a satisfactory level for their intended function.

    | Parameter | Acceptance Criteria (Implicit from Predicate & New Features) | Reported Device Performance (Modified K131693) |
    | --- | --- | --- |
    | Common Name | Same as predicate: dental panoramic and cephalometric X-ray system | Dental panoramic and cephalometric X-ray system |
    | Indications for Use | Same as predicate: for dental radiographic examination and diagnosis of diseases of the teeth, jaw, and oral structures | Extraoral source dental panoramic and optional cephalometric X-ray system intended to produce X-rays for dental radiographic examination and diagnosis of diseases of the teeth, jaw, and oral structures |
    | Performance Specification | Panoramic; One-shot type cephalometric (optional); additional Scan type cephalometric (optional) | Panoramic; Cephalometric (optional): One-shot type, Scan type |
    | Detector Options | α-P: PANO; α-OC: PANO+One-shot CEPH (option); additional α-SC: PANO+Scan CEPH (option) | α-P: PANO; α-OC: PANO+One-shot CEPH (option); α-SC: PANO+Scan CEPH (option) |
    | Detector Type | Pano: flat panel X-ray sensor; Ceph (optional): flat panel X-ray sensor (One-shot type), additional CdTe direct flat panel sensor (Scan type) | Pano: flat panel X-ray sensor; Ceph (optional): flat panel X-ray sensor (One-shot type), CdTe direct flat panel sensor (Scan type) |
    | Focal Size | Same as predicate: 0.5 mm | 0.5 mm |
    | X-ray Voltage | Same as predicate: 60~90 kVp | 60~90 kVp |
    | X-ray Current | Same as predicate: 4~17 mA | 4~17 mA |
    | Total Filtration | Same as predicate: 2.6 mm Al equivalent | 2.6 mm Al equivalent |
    | Magnification | Pano: 1.31; Ceph (One-shot): 1.13; additional Ceph (Scan): 1.11 | Pano: 1.31; Ceph (One-shot): 1.13; Ceph (Scan): 1.11 |
    | Scan Time | Pano: 14 sec; Ceph (One-shot): 0.3~3.0 sec; additional Ceph (Scan): below 18 sec | Pano: 14 sec; Ceph (One-shot): 0.3~3.0 sec; Ceph (Scan): below 18 sec |
    | Applicable Standards | Compliance with IEC 60601-1, -1-1, -1-3, -2-7, -2-28, -2-32, -1-2 | All listed IEC standards met |
    | Software Functionality | Modified device performs as intended with no adverse impact from hardware/software changes | "Software verification testing and validation testing was performed to confirm that the modified device performed as intended and that changes made to the hardware and software had no adverse impact on the functionality of the system. All tests met requirements demonstrating that the modified device performed as expected." |
    | Non-clinical/Clinical Safety | Equivalence to predicate regarding safety and effectiveness, especially for the new Scan type CEPH sensor | Safety and effectiveness reports for the added Scan type cephalometric sensor provided separately. "Non-clinical & Clinical considerations according to FDA Guidance 'Guidance for the submissions of 510(k)'s for Solid State X-ray Imaging Devices' were performed. All test results were satisfactory." An "expert review of image comparisons for both devices" concluded substantial equivalence. |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not provide details on the sample size for any clinical or non-clinical test sets, nor does it specify the data provenance (e.g., country of origin, retrospective or prospective nature of data). It mentions "safety & effectiveness reports for the added Scan type cephalometric sensor is provided separately" and "report regarding non-clinical & clinical consideration for the added Scan CEPH sensor is provided separately," suggesting such details would be found within those separate documents.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document mentions an "expert review of image comparisons for both devices" as part of the basis for concluding substantial equivalence. However, it does not specify:

    • The number of experts.
    • The qualifications of those experts (e.g., specific medical specialty, years of experience, board certifications).
    • How "ground truth" was established, only that images were compared.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe any adjudication method (e.g., 2+1, 3+1) used for establishing ground truth or resolving discrepancies among experts. It only mentions an "expert review of image comparisons."

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    There is no indication that a multi-reader multi-case (MRMC) comparative effectiveness study was done, nor is there any mention of comparing human readers with AI assistance versus without AI assistance. The device in question is an X-ray system, not primarily an AI-driven interpretation tool. The software updates are for new protocols and sensor types, not for diagnostic assistance features requiring such a study.

    6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done

    This submission is for an X-ray imaging system, not an algorithm meant for standalone diagnostic performance. Therefore, a standalone algorithm-only performance study is not applicable and not mentioned. The "performance" refers to the imaging system's ability to produce images according to specifications.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document only states that an "expert review of image comparisons" was conducted. It does not explicitly define how ground truth was established beyond this expert review (e.g., whether it involved expert consensus, comparison to a gold standard like pathology, or outcomes data). For an imaging device, "ground truth" often refers to the image quality parameters and features being correctly discernible and meeting technical specifications.

    8. The sample size for the training set

    The document does not provide any information regarding a training set sample size. This is consistent with the nature of the submission, which focuses on device modifications and substantial equivalence to a predicate, rather than a novel AI algorithm requiring separate training and test sets. Software verification and validation are mentioned, but not in the context of machine learning training.

    9. How the ground truth for the training set was established

    Since no training set is mentioned in the context of machine learning or AI, there is no information on how its ground truth would have been established. The "ground truth" in this context pertains to the functional and safety performance of the hardware and software components based on engineering tests and expert image review.


    K Number
    K122918
    Device Name
    RAYSCAN A-EXPERT
    Manufacturer
    Date Cleared
    2013-03-15

    (172 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    RAYSCAN A-EXPERT

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    The RAYSCAN α- Expert Dental X-Ray System is an extraoral source dental panoramic and optional cephalometric X-ray system intended to produce X-rays for dental radiographic examination and diagnosis of diseases of the teeth, jaw, and oral structures.

    Device Description

    RAYSCAN α-Expert is designed for panoramic scanning of the teeth, jaw, and oral cavity, creating and controlling the X-ray beam. As a dental digital panoramic X-ray system with the X-ray source located outside the oral cavity, it includes an optional cephalometric scanning function for acquiring images of the head. RAYSCAN α-Expert offers digital imaging with or without the optional cephalometric attachment. The system includes optional "SMARTDent" software for processing and archiving.

    AI/ML Overview

    The provided document K122918 describes the RAYSCAN α-Expert Dental X-Ray System. However, it does not explicitly detail acceptance criteria or a specific study proving the device meets those criteria in the way typically found for an AI/CADe device. Instead, it focuses on demonstrating substantial equivalence to a predicate device (Orthophos XGPlus DS/Ceph K033073) and compliance with electrical, mechanical, environmental, and EMC safety and performance standards.

    The closest information regarding performance acceptance criteria and study is within the "Safety and Effectiveness Information" section, which states: "Non-clinical & Clinical considerations according to FDA Guidance for the submissions of 510(k)'s for Solid State X-ray Imaging Devices" were performed. All test results were satisfactory." This is a general statement and does not provide specific performance metrics, sample sizes, or ground truth establishment relevant to an AI/CADe system.

    Therefore, many of the requested details for an AI/CADe device cannot be extracted from this document: the RAYSCAN α-Expert is an X-ray imaging system, not an AI-powered diagnostic tool (e.g., one that detects specific pathologies). The focus is on the safety and effectiveness of the imaging system itself in producing X-rays for examination and diagnosis by a human, not on automated analysis or improved human reading through AI assistance.

    Based on the provided text, here's what can be answered:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide specific quantitative acceptance criteria or reported device performance metrics in the way one would expect for an AI/CADe device (e.g., sensitivity, specificity, AUC for a diagnostic task). The performance assessment is qualitative, focusing on compliance with safety and effectiveness standards, and substantial equivalence to a predicate device in terms of technical specifications.

    | Acceptance Criteria Category | Specific Criteria (Implicit from Document) | Reported Device Performance (Implicit from Document) |
    | --- | --- | --- |
    | Safety | Compliance with IEC 60601-1, IEC 60601-1-1, IEC 60601-1-3, IEC 60601-2-7, IEC 60601-2-28, IEC 60601-2-32, and IEC 60601-1-2 (EMC) | "All test results were satisfactory." |
    | Effectiveness | Ability to produce X-rays for dental radiographic examination and diagnosis; substantial equivalence to the predicate device (Orthophos XGPlus DS/Ceph, K033073) in intended use, indications, construction materials, principle of operation, features, and technical data | "the RAYSCAN α-Expert system is safe and effective to perform its intended use as well as substantially equivalent to the Predicate device." |
    | Image Quality | Not directly specified with metrics; implied through substantial equivalence to the predicate's image modality (digital only) and detector types | Not specified with metrics |

    2. Sample size used for the test set and the data provenance

    Not applicable/Not provided. The document primarily discusses non-clinical and clinical considerations for the imaging device itself, not a test set for an AI algorithm's diagnostic performance. There is no mention of a "test set" in the context of diagnostic accuracy.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable/Not provided. This concerns an AI/CADe system's diagnostic accuracy, which is not the focus of this document for an X-ray imaging system.

    4. Adjudication method for the test set

    Not applicable/Not provided.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    Not applicable/Not provided. The device is an X-ray system, not an AI assistance tool for human readers.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Not applicable/Not provided. The device is not an AI algorithm.

    7. The type of ground truth used

    Not applicable/Not provided.

    8. The sample size for the training set

    Not applicable/Not provided. The device is an X-ray system, not an AI algorithm that undergoes training.

    9. How the ground truth for the training set was established

    Not applicable/Not provided.


    K Number
    K122981
    Manufacturer
    Date Cleared
    2013-03-12

    (169 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    RAYSCAN A-EXPERT 3D

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    RAYSCAN α - Expert 3D, panoramic x-ray imaging system with cephalostat, is an extraoral source x-ray system, which is intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations and implantology and for TMJ studies and cephalometry, and it has the capability, using the CBVT technique, to generate dentomaxillofacial 3D images. The device uses cone shaped x-ray beam projected on to a flat panel detector, and the examined volume image is reconstructed to be viewed in 3D viewing stations. 2D Image is obtained using the standard narrow beam technique.

    Device Description

    RAYSCAN α-Expert 3D is a 3D computed tomography system for scanning hard tissues such as bones and teeth. By rotating the C-arm, which carries a high-voltage generator, an all-in-one X-ray tube, and a detector on each end, a CBCT image of the whole dentomaxillofacial region is attained by recombining data from the same level scanned from different angles. A panoramic image scanning function for attaining images of the entire teeth and a cephalometric scanning option for attaining cephalic images are included. The system includes optional "SMARTDent" software for processing and archiving.

    AI/ML Overview

    The provided text is a 510(k) submission for the RAYSCAN α-Expert 3D device. It describes the device, its intended use, and a comparison to a predicate device to establish substantial equivalence. However, this document does not contain information about acceptance criteria or a study proving the device meets specific performance criteria in terms of diagnostic accuracy or clinical effectiveness involving human subjects or AI performance metrics.

    The document focuses on demonstrating substantial equivalence to a predicate device (Rotagraph EVO 3D, K111152) primarily through a comparison of technical specifications and intended use.

    Here's a breakdown of the information that is present and what is missing based on your request:

    1. Table of Acceptance Criteria and Reported Device Performance:

    • Not provided. The document includes a table comparing technical specifications of the RAYSCAN α-Expert 3D with the predicate device. This table lists parameters like detector pixel size, scan time, X-ray voltage, etc., but it does not present specific acceptance criteria (e.g., minimum spatial resolution, contrast-to-noise ratio) nor does it provide a direct "reported device performance" against such criteria. The comparison simply shows the values for both devices.
    | Parameter | RAYSCAN α-Expert 3D (New Device) | Rotagraph EVO 3D (Predicate Device) |
    |---|---|---|
    | Focal spot size | 0.5 mm | 0.5 mm |
    | Field of view (CT) | 90 x 90 mm | 85 x 85 mm |
    | X-ray voltage | 60~90 kVp | 60~86 kVp |
    | X-ray current | 4~17 mA | 6~12 mA |
    | Total filtration | 2.6 mm Al equivalent | 2.5 mm Al equivalent |
    | CT detector pixel size | 100 μm | 127 μm |
    | Pano detector pixel size | 100 μm | 127 μm |
    | Ceph detector pixel size | 150 μm | 48 μm |
    | CT magnification | 1.39 | 1.25 (open/close mouth TMJ) |
    | Pano magnification | 1.31 | 1.28 |
    | Ceph magnification | 1.13 | 1.10 |
    | CT scan time | 14 sec | Max 20 sec |
    | Pano scan time | 14 sec | Max 13.8 sec |
    | Ceph scan time | 0.3~3.0 sec | 15 sec |
    | CT grey level | 14-bit | 14-bit |
    | Pano grey level | 14-bit | 14-bit |
    | Ceph grey level | 14-bit | 12-bit |
    | Rotation angle | 360° | 200° |
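
The table's pixel-size and magnification figures can be related using the standard projection-geometry approximation: effective pixel size at the object plane ≈ detector pixel pitch divided by geometric magnification. The short sketch below applies this relation to the CT and panoramic values above; the relation is a general radiography rule of thumb, not a figure quoted from the submission.

```python
# Effective pixel size at the object plane = detector pixel pitch / magnification.
# Values taken from the device comparison table; the relation itself is a
# standard approximation, not a claim made in the 510(k) summary.
specs = {
    "RAYSCAN a-Expert 3D": {"ct_pixel_um": 100, "ct_mag": 1.39,
                            "pano_pixel_um": 100, "pano_mag": 1.31},
    "Rotagraph EVO 3D":    {"ct_pixel_um": 127, "ct_mag": 1.25,
                            "pano_pixel_um": 127, "pano_mag": 1.28},
}

for name, s in specs.items():
    ct_eff = s["ct_pixel_um"] / s["ct_mag"]
    pano_eff = s["pano_pixel_um"] / s["pano_mag"]
    print(f"{name}: CT ~{ct_eff:.0f} um, Pano ~{pano_eff:.0f} um at the object plane")
```

On these numbers the new device's finer detector pitch and larger magnification both push its effective object-plane sampling below that of the predicate, which is the kind of technical-parameter comparison the submission relies on in place of clinical performance data.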

    2. Sample Size Used for the Test Set and Data Provenance:

    • Not provided. The document states that non-clinical and clinical considerations according to the FDA guidance for 510(k) submissions for solid state X-ray imaging devices were addressed, but no details about participants, data provenance (country of origin, retrospective or prospective collection), or a specific test set are given. The submission focuses on technical and safety equivalence, not clinical performance data from a study population.

    3. Number of Experts Used to Establish Ground Truth and Their Qualifications:

    • Not applicable/Not provided. Since there's no clinical performance study involving a test set described, there's no mention of experts establishing ground truth for such a set.

    4. Adjudication Method:

    • Not applicable/Not provided. As no clinical performance study is detailed, no adjudication method is described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No. The document does not describe an MRMC study or any comparison of human readers with vs. without AI assistance. The device as described is an imaging system, not an AI-powered diagnostic tool in the sense of providing automated interpretations.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

    • Not applicable/Not provided. This is an imaging device, and its approval is based on its ability to produce images comparable to a predicate device, not on the performance of a standalone algorithm for diagnosis.

    7. Type of Ground Truth Used:

    • Not applicable/Not provided. Without a described performance study, there's no mention of ground truth (e.g., pathology, expert consensus). The "ground truth" for this 510(k) submission is effectively the established safety and effectiveness of the predicate device, to which the new device is being compared for "substantial equivalence."

    8. Sample Size for the Training Set:

    • Not applicable/Not provided. The device is an X-ray imaging system, not an AI model that requires a training set in the typical sense.

    9. How the Ground Truth for the Training Set was Established:

    • Not applicable/Not provided. As above, this is an imaging device, not an AI model requiring a training set and corresponding ground truth.

    Summary of the Study Mentioned:

    The document states: "Electrical, mechanical, environmental safety and performance testing according to standards IEC 60601-1, IEC 60601-1-1, IEC 60601-1-3, IEC 60601-2-7, IEC 60601-2-28, IEC 60601-2-32 and IEC 60601-2-44 was performed, and EMC testing was conducted in accordance with the standard IEC 60601-1-2. Non-clinical & clinical considerations according to the FDA Guidance for the Submission of 510(k)'s for Solid State X-ray Imaging Devices were performed. All test results were satisfactory."

    This indicates that the device underwent bench testing and compliance testing with recognized international safety and performance standards for medical electrical equipment and specific X-ray equipment. These tests are designed to ensure the device functions as intended from an engineering and safety perspective, and meets regulatory requirements for radiation safety and electrical safety. The "Non-clinical & Clinical considerations" refer to review against FDA guidance, which might involve analysis of image quality metrics typically reviewed for X-ray devices (e.g., spatial resolution, contrast, noise, dose efficiency), but specific details of such studies (sample size, methodology, acceptance criteria) are not included in this summary.

    Conclusion from the document:

    The manufacturer concludes that, based on a comparison of intended use, indications, construction, construction materials, principle of operation, features, and technical data, the RAYSCAN α-Expert 3D system is safe and effective for its intended use and substantially equivalent to the predicate device.

    Therefore, the "study" supporting the device's (implicit) safety and effectiveness claims is essentially the technical comparison and independent testing against recognized standards used to establish substantial equivalence to the predicate device, rather than a clinical performance study with specific diagnostic accuracy metrics.
