Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K142247
    Manufacturer:
    Date Cleared: 2015-04-17 (262 days)
    Product Code:
    Regulation Number: 892.1750
    Reference & Predicate Devices:
    Device Name: RAYSCAN A-EXPERT 3D
    Intended Use

    RAYSCAN a-Expert 3D, panoramic x-ray imaging system with cephalostat, is an extraoral source x-ray system, which is intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations and implantology and for TMJ studies and cephalometry, and it has the capability, using the CBCT technique, to generate dentomaxillofacial 3D images.

    The device uses cone shaped x-ray beam projected on to a flat panel detector, and the examined volume image is reconstructed to be viewed in 3D viewing stations.

    2D Image is obtained using the standard narrow beam technique.

    Device Description

    RAYSCAN α-3D, SM3D, M3DS, and M3DL are 3D computed tomography systems for scanning hard tissues such as bone and teeth.

    By rotating the C-arm, which carries a high-voltage-generator all-in-one x-ray tube on one end and a detector on the other, a tomographic (slice) image is attained by recombining data from the same level scanned from different angles. A panoramic scanning function for imaging the whole dentition and a cephalometric scanning option for acquiring cephalic images are included.

    Detector Options:

    Specific models are classified according to the detector types (CT, Pano, and Ceph) mounted in the RAYSCAN α-Expert 3D system, as shown below.

    RAYSCAN α-3D: CT (model C10900D) + PANO (model C10500D)
    RAYSCAN α-SM3D: CT (model C10900D) + PANO (model C10500D) + Scan Ceph (model XID-C24DS)
    RAYSCAN α-M3DL: CT (model C10900D) + PANO (model C10500D) + One-shot Ceph (model PaxScan 4336X)
    RAYSCAN α-M3DS: CT (model C10900D) + PANO (model C10500D) + One-shot Ceph (model PaxScan 2530C)
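The variant list above can be restructured as data. The following Python mapping is an illustrative sketch (the dictionary and function names are not from the submission) that makes it easy to check which detector modules each variant carries:

```python
# Illustrative restructuring of the detector-option list above (not part of the submission).
CONFIGURATIONS = {
    "RAYSCAN α-3D":   {"CT": "C10900D", "PANO": "C10500D"},
    "RAYSCAN α-SM3D": {"CT": "C10900D", "PANO": "C10500D", "Scan Ceph": "XID-C24DS"},
    "RAYSCAN α-M3DL": {"CT": "C10900D", "PANO": "C10500D", "One-shot Ceph": "PaxScan 4336X"},
    "RAYSCAN α-M3DS": {"CT": "C10900D", "PANO": "C10500D", "One-shot Ceph": "PaxScan 2530C"},
}

def modules(variant: str) -> set[str]:
    """Return the set of detector modules fitted to a given variant."""
    return set(CONFIGURATIONS[variant])

# Every variant shares the CT and PANO detectors; only the ceph option differs.
assert all({"CT", "PANO"} <= modules(v) for v in CONFIGURATIONS)
```

This makes the structure of the model line explicit: a common CT + PANO base, differentiated only by the cephalometric option.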

    AI/ML Overview

    The provided text describes the RAYSCAN a-Expert 3D, a dental X-ray system. Here's a breakdown of the acceptance criteria and study information:

    Acceptance Criteria and Device Performance

    The document does not explicitly state "acceptance criteria" for performance metrics in a pass/fail format. Instead, it compares the proposed device's detector specifications (mainly for the new one-shot cephalometric models, PaxScan 4336X and PaxScan 2530C) against those of the predicate devices. The implicit acceptance criterion is that the new detectors should have comparable or better imaging performance metrics (MTF, DQE, limiting resolution, pixel size) to the predicate devices, and that the overall system performs as intended.

    Here's a table summarizing the relevant performance specifications for the new one-shot Ceph detectors and their closest predicate counterparts:

Ceph (one-shot, large size) detector:

| Parameter | Acceptance Criteria (Predicate SDX-4336CP) | Reported Performance (Proposed PaxScan 4336X) | Reported Performance (Proposed PaxScan 2530C) |
| --- | --- | --- | --- |
| Manufacturer | Samsung Mobile Display | Varian | N/A (different size for this comparison) |
| Model | SDX-4336CP | PaxScan 4336X | N/A |
| Scintillator material | CsI (indirect type) | GADOX (indirect type) | N/A (GADOX) |
| Total pixel area | 43.2 x 36.0 cm | 427(W) x 356(H) mm (42.7 x 35.6 cm) | N/A (smaller size, 30.2 x 24.9 cm for 2530C) |
| Total pixels | 2880 x 2400 | 3072 x 2560 | N/A (2176 x 1792 for 2530C) |
| Pixel size | 150 µm | 139 µm | N/A (139 µm for 2530C) |
| Limiting resolution | 3.3 lp/mm | 3.6 lp/mm | N/A (3.6 lp/mm for 2530C) |
| MTF (at 1 lp/mm) | 45% | 54% | N/A (54% for 2530C) |
| DQE (at 1 lp/mm) | 0.41 | 0.2 | N/A (0.2 for 2530C) |

    Summary of Performance:
    The proposed detectors (PaxScan 4336X and 2530C) show a higher limiting resolution (3.6 lp/mm vs 3.3 lp/mm) and a smaller pixel size (139 um vs 150 um) compared to the predicate's one-shot Ceph detector (SDX-4336CP). The MTF (Modulation Transfer Function) is also higher (54% vs 45%). However, the DQE (Detective Quantum Efficiency) is lower (0.2 vs 0.41), indicating less efficient X-ray photon utilization for image quality. The overall conclusion states that "the diagnostic image quality of the new detector is equal or better than those of the predicate device and there is no significant difference in efficiency and safety." This implies the lower DQE was deemed acceptable in the context of other improved metrics and overall system performance.
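The reported limiting resolutions are consistent with the stated pixel pitches: for a sampled flat-panel detector, the Nyquist limit is 1 / (2 x pixel pitch). A short Python check of this relationship, using the pitch values reported above (an illustrative calculation, not from the submission):

```python
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Nyquist-limited spatial resolution (lp/mm) for a detector with the given pixel pitch (µm)."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# Predicate SDX-4336CP: 150 µm pitch -> ~3.3 lp/mm, matching the reported value.
assert round(nyquist_lp_per_mm(150), 1) == 3.3
# Proposed PaxScan detectors: 139 µm pitch -> ~3.6 lp/mm, matching the reported value.
assert round(nyquist_lp_per_mm(139), 1) == 3.6
```

In other words, the "higher limiting resolution" claim follows directly from the smaller pixel pitch; the DQE difference, by contrast, reflects scintillator and readout efficiency and is not derivable from geometry alone.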


    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • Test Set Sample Size: The document mentions "clinical imaging samples are collected from the new 2 one shot detector on propose device at the 2 offices." It also states "clinical test images were gathered from the new 2 one shot ceph detector installed with RAYSCAN α-M3DL and M3DS on any protocols with random patient age, gender, and size." However, a specific number of cases or images for the clinical test set is not provided.
      • Data Provenance: Not explicitly stated, but implies real-world clinical data as the images were gathered from "2 offices" where the proposed devices were installed. It's likely prospective for these specific tests as it references "random patient age, gender, and size," suggesting real-time image acquisition for the test. However, the overarching context of a 510(k) submission might involve retrospective review of other data.
    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • Number of Experts: "two licensed practitioners/clinicians"
      • Qualifications of Experts: "licensed practitioners/clinicians" - no further details provided (e.g., specialty, years of experience).
    3. Adjudication Method for the Test Set:

      • The document states, "As licensed practitioners or clinician diagnoses of the images, it might be proved that the clinical diagnosis and structures are acceptable in the region of interests." This suggests that the two practitioners independently reviewed the images and determined their diagnostic acceptability, likely reaching a consensus or individual affirmation of acceptability, rather than a formal adjudication process like 2+1 or 3+1 for conflicting interpretations. The method is not detailed beyond "observed and verified."
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No, a direct MRMC comparative effectiveness study is not explicitly mentioned. The study focuses on comparing technical specifications and verifying diagnostic acceptability of the proposed device's images. There's no mention of human readers' performance with and without AI assistance, as this is an imaging device, not an AI diagnostic tool.
    5. Standalone (Algorithm Only) Performance Study:

      • Yes, a standalone performance study was done for the detector components. Bench testing (IEC 61223-3-4, IEC 61223-3-5, FDA Guidance "Guidance for the submissions of 510(k)'s for Solid State Xray Imaging Devices") was conducted to assess imaging performance metrics like MTF and DQE. This is a technical performance evaluation of the hardware, not an AI algorithm. The device itself is an X-ray imaging system, which inherently has "standalone" image generation capability without human interpretation during the image creation phase.
    6. Type of Ground Truth Used:

      • Clinical Diagnoses/Acceptability: For the clinical image evaluation, the ground truth was established by "licensed practitioners or clinician diagnoses of the images" determining if "the clinical diagnosis and structures are acceptable in the region of interests." This implies a form of expert consensus or clinical expert opinion on the diagnostic utility of the images. It's not explicitly stated as pathology or outcomes data.
    7. Sample Size for the Training Set:

      • Not Applicable / Not Provided. This device is an X-ray imaging system, not an AI algorithm that typically requires a large training set of labeled data for machine learning. The "software of RAYSCAN α-Expert3D has been validated" but no training set for the imaging capabilities of the X-ray system itself is mentioned, as its function is image acquisition, not autonomous interpretation.
    8. How the Ground Truth for the Training Set Was Established:

      • Not Applicable / Not Provided. As above, there is no mention of a "training set" in the context of an AI algorithm or diagnostic model here. The system's function is image acquisition based on physics and engineering principles, not learning from labeled data.

    K Number: K131695
    Manufacturer:
    Date Cleared: 2013-11-01 (144 days)
    Product Code:
    Regulation Number: 892.1750
    Reference & Predicate Devices: N/A
    Device Name: RAYSCAN A-EXPERT 3D
    Intended Use

    RAYSCAN a-Expert 3D, panoramic x-ray imaging system with cephalostat, is an extraoral source x-ray system, which is intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations and implantology and for TMJ studies and cephalometry, and it has the capability, using the CBVT technique, to generate dentomaxillofacial 3D images. The device uses cone shaped x-ray beam projected on to a flat panel detector, and the examined volume image is reconstructed to be viewed in 3D viewing stations. 2D Image is obtained using the standard narrow beam technique.

    Device Description

    RAYSCAN α-Expert 3D is a 3D computed tomography system for scanning hard tissues such as bones and teeth. By rotating the C-arm, which carries a high-voltage-generator all-in-one x-ray tube on one end and a detector on the other, a CBCT image of the whole dentomaxillofacial region is attained by recombining data from the same level scanned from different angles. A panoramic scanning function for imaging the entire or segmental dentition and a cephalometric scanning option (one-shot type and scan type) for acquiring cephalic images are included. Two different types of CEPH detector configurations can be chosen:

    Base: RAYSCAN α-3D: CT + PANO
    Option: RAYSCAN α-Multi 3D: CT + PANO + One-shot CEPH
    Option: RAYSCAN α-SM3D: CT + PANO + Scan CEPH

    SMARTDent software for processing and archiving is optional.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study proving the device meets them:

    The document (K131695) describes the RAYSCAN α-Expert 3D, a dental panoramic/tomography and cephalometric x-ray system. This is a Special 510(k) submission, indicating a modification to a previously cleared device (K122981).

    Emphasis: It's crucial to understand that this document is a 510(k) submission, which primarily focuses on demonstrating substantial equivalence to a predicate device. It does not typically involve detailed performance studies with acceptance criteria in the way one might expect for a novel device or a significantly modified one that requires extensive clinical validation. Instead, the "study" proving acceptance is largely based on demonstrating that the modified device's performance is equivalent to the predicate, particularly for the new feature (Scan type CEPH sensor).


    Acceptance Criteria and Study Details:

    1. Table of Acceptance Criteria and Reported Device Performance

    Given that this is a Special 510(k) for substantial equivalence to a predicate device, the "acceptance criteria" are essentially for demonstrating that the modified device's specifications and performance meet or are equivalent to the predicate device, especially for the new feature.

| Parameter | Acceptance Criteria (Predicate: RAYSCAN α-Expert 3D [K122981]) | Reported Device Performance (Modified: RAYSCAN α-Expert 3D) | Device Meets Criteria? |
| --- | --- | --- | --- |
| Common Name | Dental panoramic/tomography and cephalometric x-ray system | Dental panoramic/tomography and cephalometric x-ray system | Yes |
| Indications for Use | Intended for dental radiographic examination of teeth, jaw, oral structures; panoramic examinations, implantology, TMJ studies, cephalometry; capability for dentomaxillofacial 3D images using CBVT technique; uses cone-shaped x-ray beam projection onto flat panel detector for 3D reconstruction; 2D images via standard narrow beam | Same | Yes |
| 3D Technology | CBCT (Cone Beam Computed Tomography) | CBCT (Cone Beam Computed Tomography) | Yes |
| Performance Specification | Panoramic, Cephalometric (optional: one-shot type) | Panoramic, Cephalometric (optional: one-shot type, scan type) | Yes (with added scan type) |
| Functional Option | Base: α-3D: CT+PANO; Option: α-Multi 3D: CT+PANO+One-shot CEPH | Base: α-3D: CT+PANO; Option: α-Multi 3D: CT+PANO+One-shot CEPH; Option: α-SM3D: CT+PANO+Scan CEPH | Yes (with added scan type option) |
| Detector Type (CT) | Flat panel X-ray sensor | Flat panel X-ray sensor | Yes |
| Detector Type (Pano) | Flat panel X-ray sensor | Flat panel X-ray sensor | Yes |
| Detector Type (Ceph, One-shot) | Flat panel X-ray sensor | Flat panel X-ray sensor (one-shot type) | Yes |
| Detector Type (Ceph, Scan type) | Not applicable (predicate did not have this) | CdTe direct flat panel sensor (scan type) | N/A (new feature, evaluated for equivalence) |
| Focal Size | 0.5 mm | 0.5 mm | Yes |
| Field of View (CT) | 90 x 90 mm | 90 x 90 mm | Yes |
| X-ray Voltage | 60~90 kVp | 60~90 kVp | Yes |
| X-ray Current | 4~17 mA | 4~17 mA | Yes |
| Total Filtration | 2.6 mm Al equivalent | 2.6 mm Al equivalent | Yes |
| Magnification (CT) | 1.39 | 1.39 | Yes |
| Magnification (Pano) | 1.31 | 1.31 | Yes |
| Magnification (Ceph One-shot) | 1.13 | 1.13 | Yes |
| Magnification (Ceph Scan type) | Not applicable | 1.11 | N/A (new feature, evaluated for equivalence) |
| Scan Time (CT) | 14 sec | 14 sec | Yes |
| Scan Time (Pano) | 14 sec | below 14 sec | Yes |
| Scan Time (Ceph One-shot) | 0.3~3.0 sec | 0.3~3.0 sec | Yes |
| Scan Time (Ceph Scan type) | Not applicable | below 18 sec | N/A (new feature, evaluated for equivalence) |
| Safety and EMC (Applicable Standards) | IEC 60601-1, -1-1, -1-3, -2-7, -2-28, -2-32, -2-44, -1-2 | IEC 60601-1, -1-1, -1-3, -2-7, -2-28, -2-32, -2-44, -1-2 | Yes |
| Certificate Product | CE0120 (MDD 93/42/EEC) | CE0120 (MDD 93/42/EEC) | Yes |
| Non-clinical & Clinical Considerations (for added Scan CEPH sensor) | N/A | Report provided for equivalence | Yes |

    Summary of "Acceptance": The device meets "acceptance criteria" by demonstrating that all parameters common with the predicate device are identical or within acceptable bounds, and the new "Scan type CEPH sensor" feature is evaluated and found to be substantially equivalent in terms of safety and effectiveness.
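The substantial-equivalence logic described above amounts to a field-by-field comparison of predicate and modified specifications, with parameters absent from the predicate flagged for separate evaluation. A minimal sketch of that comparison, using a trimmed, illustrative spec set (the function name and dictionaries are assumptions, not the FDA's or the submitter's actual tooling):

```python
def compare_specs(predicate: dict, modified: dict):
    """Split the modified device's parameters into identical, changed, and new (vs. the predicate)."""
    identical = {k for k in predicate if modified.get(k) == predicate[k]}
    changed = {k for k in predicate if k in modified and modified[k] != predicate[k]}
    new = set(modified) - set(predicate)
    return identical, changed, new

# Illustrative subset of the comparison table above.
predicate = {"Focal size": "0.5mm", "FOV (CT)": "90x90mm", "Scan time (CT)": "14sec"}
modified = {"Focal size": "0.5mm", "FOV (CT)": "90x90mm", "Scan time (CT)": "14sec",
            "Ceph (Scan type)": "CdTe direct flat panel sensor"}

identical, changed, new = compare_specs(predicate, modified)
assert changed == set()             # all shared parameters match the predicate
assert new == {"Ceph (Scan type)"}  # the new feature is flagged for separate evaluation
```

Parameters in `identical` support equivalence directly; anything in `new` (here, the scan-type CEPH sensor) is exactly what the submission's additional equivalence report has to cover.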

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state a "test set" sample size or its provenance in terms of patient data. For a Special 510(k), the focus is often on engineering verification and validation (V&V) and comparing the new feature to the existing one.

    • Test Set: Not explicitly defined in terms of patient images for a statistical study. The "test set" for demonstrating equivalence appears to be qualitative comparisons of imaging performance, particularly for the new "Scan type CEPH sensor."
    • Data Provenance: Not specified. However, given that the manufacturer is based in South Korea, it's plausible any internal testing or "expert review of image comparisons" would involve data generated internally, possibly from phantoms or a limited set of patient images, likely retrospective if not specifically collected for this submission.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document mentions an "outcome of experts review of image comparisons."

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. It's reasonable to infer they would be qualified to review dental radiographic images, such as radiologists or oral and maxillofacial radiologists, but specific experience levels are not provided.

    4. Adjudication Method for the Test Set

    The document states "outcome of experts review of image comparisons," implying a qualitative assessment.

    • Adjudication Method: Not explicitly stated (e.g., 2+1, 3+1). It was likely a consensus approach or individual expert assessment contributing to an overall finding of equivalence. There's no detail on how disagreements would be resolved.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: No, an MRMC comparative effectiveness study was not conducted. This device is an imaging system, not an AI-assisted diagnostic tool. The submission is about physical device performance and substantial equivalence, not the improvement of human readers with AI assistance.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Not applicable in the context of an AI algorithm. This device is an X-ray imaging system. The "performance" assessment focuses on the image quality produced by the system and its technical specifications compared to the predicate device.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: For the "expert review of image comparisons," the "ground truth" would be the subjective assessment of image quality, diagnostic utility, and comparability between images from the modified device (especially the new CEPH scan type) and the predicate device. This is a form of expert consensus/assessment, rather than pathology or outcomes data. For the technical specifications, the ground truth is established through engineering measurements and adherence to international standards.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: Not applicable. This device is an X-ray imaging machine, not a machine learning or AI algorithm that requires a "training set" in the conventional sense.

    9. How the Ground Truth for the Training Set was Established

    • Ground Truth for Training Set: Not applicable, as there is no training set for this type of device.

    K Number: K122981
    Manufacturer:
    Date Cleared: 2013-03-12 (169 days)
    Product Code:
    Regulation Number: 892.1750
    Reference & Predicate Devices:
    Device Name: RAYSCAN A-EXPERT 3D
    Intended Use

    RAYSCAN α - Expert 3D, panoramic x-ray imaging system with cephalostat, is an extraoral source x-ray system, which is intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations and implantology and for TMJ studies and cephalometry, and it has the capability, using the CBVT technique, to generate dentomaxillofacial 3D images. The device uses cone shaped x-ray beam projected on to a flat panel detector, and the examined volume image is reconstructed to be viewed in 3D viewing stations. 2D Image is obtained using the standard narrow beam technique.

    Device Description

    RAYSCAN α-Expert 3D is a 3D computed tomography system for scanning hard tissues such as bones and teeth. By rotating the C-arm, which includes the high-voltage-generator all-in-one x-ray tube on one end and a detector on the other, a CBCT image of the whole dentomaxillofacial region is attained by recombining data from the same level scanned from different angles. A panoramic scanning function for imaging the entire dentition and a cephalometric scanning option for acquiring cephalic images are included. The system optionally includes "SMARTDent" software for processing and archiving.

    AI/ML Overview

    The provided text is a 510(k) submission for the RAYSCAN α-Expert 3D device. It describes the device, its intended use, and a comparison to a predicate device to establish substantial equivalence. However, this document does not contain information about acceptance criteria or a study proving the device meets specific performance criteria in terms of diagnostic accuracy or clinical effectiveness involving human subjects or AI performance metrics.

    The document focuses on demonstrating substantial equivalence to a predicate device (Rotagraph EVO 3D, K111152) primarily through a comparison of technical specifications and intended use.

    Here's a breakdown of the information that is present and what is missing based on your request:

    1. Table of Acceptance Criteria and Reported Device Performance:

    • Not provided. The document includes a table comparing technical specifications of the RAYSCAN α-Expert 3D with the predicate device. This table lists parameters like detector pixel size, scan time, X-ray voltage, etc., but it does not present specific acceptance criteria (e.g., minimum spatial resolution, contrast-to-noise ratio) nor does it provide a direct "reported device performance" against such criteria. The comparison simply shows the values for both devices.
| Parameter | RAYSCAN α-Expert 3D (New Device) | Rotagraph EVO 3D (Predicate Device) |
| --- | --- | --- |
| Focal size | 0.5 mm | 0.5 mm |
| Field of View (CT) | 90 x 90 mm | 85 x 85 mm |
| X-ray Voltage | 60~90 kVp | 60~86 kVp |
| X-ray Current | 4~17 mA | 6~12 mA |
| Total Filtration | 2.6 mm Al equivalent | 2.5 mm Al equivalent |
| CT Detector Pixel size | 100 μm | 127 μm |
| Pano Detector Pixel size | 100 μm | 127 μm |
| Ceph Detector Pixel size | 150 μm | 48 μm |
| CT Magnification | 1.39 | 1.25 (open/close mouth TMJ) |
| Pano Magnification | 1.31 | 1.28 |
| Ceph Magnification | 1.13 | 1.10 |
| CT Scan time | 14 sec | max 20 sec |
| Pano Scan time | 14 sec | max 13.8 sec |
| Ceph Scan time | 0.3~3.0 sec | 15 sec |
| CT Grey level | 14 bit | 14 bit |
| Pano Grey level | 14 bit | 14 bit |
| Ceph Grey level | 14 bit | 12 bit |
| Rotation angle | 360° | 200° |
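One spec difference worth quantifying is the ceph detector grey level: 14-bit versus 12-bit corresponds to 2^14 = 16384 versus 2^12 = 4096 distinguishable grey values, i.e. four times finer intensity quantization on the new device. A quick illustrative check:

```python
def grey_levels(bits: int) -> int:
    """Number of distinguishable grey values for a given detector bit depth."""
    return 2 ** bits

# New device ceph detector (14-bit) vs. predicate (12-bit).
assert grey_levels(14) == 16384
assert grey_levels(12) == 4096
assert grey_levels(14) // grey_levels(12) == 4
```

Finer quantization does not by itself guarantee better diagnostic image quality (noise and DQE also matter), but it sets the ceiling on recoverable contrast resolution.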

    2. Sample Size Used for the Test Set and Data Provenance:

    • Not provided. The document mentions "Non-clinical & Clinical considerations according to FDA Guidance for the submissions of 510(k)'s for Solid State X-ray Imaging Devices" were performed, but no details about participants, data provenance (country of origin, retrospective/prospective), or a specific test set for evaluation are given. The submission focuses on technical and safety equivalence, not clinical performance data from a specific study population.

    3. Number of Experts Used to Establish Ground Truth and Their Qualifications:

    • Not applicable/Not provided. Since there's no clinical performance study involving a test set described, there's no mention of experts establishing ground truth for such a set.

    4. Adjudication Method:

    • Not applicable/Not provided. As no clinical performance study is detailed, no adjudication method is described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No. The document does not describe an MRMC study or any comparison of human readers with vs. without AI assistance. The device as described is an imaging system, not an AI-powered diagnostic tool in the sense of providing automated interpretations.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

    • Not applicable/Not provided. This is an imaging device, and its approval is based on its ability to produce images comparable to a predicate device, not on the performance of a standalone algorithm for diagnosis.

    7. Type of Ground Truth Used:

    • Not applicable/Not provided. Without a described performance study, there's no mention of ground truth (e.g., pathology, expert consensus). The "ground truth" for this 510(k) submission is effectively the established safety and effectiveness of the predicate device, to which the new device is being compared for "substantial equivalence."

    8. Sample Size for the Training Set:

    • Not applicable/Not provided. The device is an X-ray imaging system, not an AI model that requires a training set in the typical sense.

    9. How the Ground Truth for the Training Set was Established:

    • Not applicable/Not provided. As above, this is an imaging device, not an AI model requiring a training set and corresponding ground truth.

    Summary of the Study Mentioned:

    The document states: "Electrical, mechanical, environmental safety and performance testing according to standards IEC 60601-1, IEC 60601-1-1, IEC 60601-1-3, IEC 60601-2-7, IEC 60601-2-28, IEC 60601-2-32 and IEC 60601-2-44 was performed, and EMC testing was conducted in accordance with the standard IEC 60601-1-2. Non-clinical & Clinical considerations according to FDA Guidance for the submissions of 510(k)'s for Solid State X-ray Imaging Devices" were performed. All test results were satisfactory."

    This indicates that the device underwent bench testing and compliance testing with recognized international safety and performance standards for medical electrical equipment and specific X-ray equipment. These tests are designed to ensure the device functions as intended from an engineering and safety perspective, and meets regulatory requirements for radiation safety and electrical safety. The "Non-clinical & Clinical considerations" refer to review against FDA guidance, which might involve analysis of image quality metrics typically reviewed for X-ray devices (e.g., spatial resolution, contrast, noise, dose efficiency), but specific details of such studies (sample size, methodology, acceptance criteria) are not included in this summary.

    Conclusion from the document:

    The conclusion drawn by the manufacturer is that "Based on a comparison of intended use, indications, constructions, construction materials, principal of Operation, features and technical data, the RAYSCAN α-Expert 3D system are safe and effective to perform its intended use as well as substantially equivalent to the predicate device."

    Therefore, the "study" that proves the device meets (implicitly, safety and effectiveness) criteria is essentially the technical comparison and independent testing against recognized standards to establish substantial equivalence to the predicate device, rather than a clinical performance study with specific diagnostic accuracy metrics.
