Search Results
Found 2 results
510(k) Data Aggregation
(259 days)
The RAYSCAN α-P, SC, OCL, OCS panoramic X-ray imaging system with Cephalostat is an extra-oral source X-ray system, intended for dental radiographic examination of the teeth, jaw, and oral structures, including panoramic examinations, implantology, TMJ studies, and cephalometry. Images are obtained using the standard narrow-beam technique.
RAYSCAN α-Expert (RAYSCAN α-P, SC, OCL, OCS) provides panoramic imaging for scanning the teeth, jaw, and oral structures. By rotating the C-arm, which houses a high-voltage generator, an all-in-one X-ray tube, and a detector, one at each end, panoramic images of oral and maxillofacial structures are obtained by recombining data scanned from different angles. Functionalities include panoramic image scanning for obtaining images of the whole dentition, and a cephalometric scanning option for obtaining cephalic images.
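The panoramic recombination described above can be illustrated with a toy shift-and-add sketch. This is not the manufacturer's algorithm; the frame shapes, the lateral shift per frame, and the function name are all hypothetical, chosen only to show the general idea of building a wide panoramic strip from narrow-beam frames acquired at successive C-arm positions.

```python
import numpy as np

def panoramic_shift_and_add(frames, shift_per_frame):
    """Toy shift-and-add: laterally offset each narrow-beam frame and
    accumulate into one panoramic strip, averaging where frames overlap.

    frames          -- array of shape (n_frames, height, width)
    shift_per_frame -- lateral offset (in pixels) between successive frames
    """
    n, h, w = frames.shape
    pano_w = w + int(round(shift_per_frame * (n - 1)))
    pano = np.zeros((h, pano_w))
    weight = np.zeros((h, pano_w))
    for i, frame in enumerate(frames):
        x0 = int(round(i * shift_per_frame))
        pano[:, x0:x0 + w] += frame       # accumulate this frame's contribution
        weight[:, x0:x0 + w] += 1.0       # count overlapping contributions
    # Average overlapping regions; columns never covered stay zero.
    return pano / np.maximum(weight, 1.0)
```

A uniform set of frames should reconstruct to a uniform strip, since overlapping contributions are averaged rather than simply summed.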
The provided text describes a 510(k) premarket notification for the "RAYSCAN α-Expert" dental X-ray system. The submission affirms its substantial equivalence to a predicate device, K142058. While it outlines several tests conducted to support this claim, it does not provide explicit acceptance criteria in a table format, nor does it detail a specific study with quantitative performance metrics for a direct comparison against such criteria.
Here's a breakdown of the information that can be extracted, and where there are gaps regarding the requested specifics:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria with corresponding device performance metrics. Instead, it states that "All test results were satisfactory" for performance (imaging performance) testing conducted according to IEC 61223-3-4. It also mentions that "a licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." This indicates a subjective assessment of image quality rather than quantitative performance against defined acceptance criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: The document mentions that "images were gathered from all detectors of RAYSCAN α-Expert using protocols with random patient age, gender, and size" and that "Clinical imaging samples were collected from new detectors on the proposed device at the two offices where the predicate device was installed for the clinical test images." However, it does not specify the exact number of images or patients in the clinical test set.
- Data Provenance: The images were collected "at the two offices where the predicate device was installed for the clinical test images." The manufacturer is Ray Co., Ltd. located in South Korea. It's implied these are prospective clinical images gathered for the purpose of the submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: "The clinical performance of RAYSCAN α-Expert were clinically tested and approved by two licensed practitioners/clinicians."
- Qualifications of Experts: They are described as "licensed practitioners/clinicians." No specific details such as years of experience, specialization (e.g., radiologist, dentist), or board certification are provided.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document states, "A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." It implies individual review, but does not specify any formal adjudication method (e.g., whether the two practitioners independently reviewed images and consensus was reached, or if there was a third adjudicator in case of disagreement).
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done. If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No MRMC comparative effectiveness study is mentioned. This device is an X-ray imaging system, not an AI-assisted diagnostic tool for humans, so this type of study would not be applicable. The comparison is between the new device's image quality and the image quality of the predicate device.
- Effect Size: Not applicable.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This refers to an X-ray imaging device, not an algorithm. Therefore, "standalone (algorithm only)" performance is not applicable. The device's primary function is image acquisition.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the clinical image quality assessment appears to be expert opinion/consensus (from two licensed practitioners) regarding whether the images were "of acceptable quality for the intended use." There's no mention of pathology or outcomes data for establishing ground truth.
8. The sample size for the training set
The document mentions software validation, but this X-ray system is not described as an AI/ML device that requires a distinct "training set" in the context of machine learning model development. This question is not directly applicable to the type of device described.
9. How the ground truth for the training set was established
As the device is not described as involving an AI/ML model with a training set, this question is not directly applicable. The software mentioned is for saving patient and image data, inquiries, and image generation, and was validated according to FDA guidance for software in medical devices, not specific AI/ML training.
Summary of what is present and what is missing:
- Acceptance Criteria/Performance Table: Not provided in the requested format. General statement of "satisfactory" test results and "acceptable quality."
- Test Set Sample Size & Provenance: Sample size not quantified. Provenance is South Korea, likely prospective.
- Number & Qualification of Experts: Two licensed practitioners/clinicians. No further qualification details.
- Adjudication Method: Not specified.
- MRMC Study: Not applicable.
- Standalone Performance: Not applicable.
- Type of Ground Truth: Expert opinion on image quality.
- Training Set Sample Size: Not applicable (not an AI/ML device in this context).
- Training Set Ground Truth: Not applicable.
(30 days)
RAYSCAN α-Expert3D, a panoramic X-ray imaging system with cephalostat, is an extra-oral source X-ray system intended for dental radiographic examination of the teeth, jaw, and oral structures, specifically for panoramic examinations, implantology, TMJ studies, and cephalometry, and it has the capability, using the CBCT technique, to generate dentomaxillofacial 3D images.
RAYSCAN α-3D, SM3D, M3DS, and M3DL are 3D computed tomography systems for scanning hard tissues such as bone and teeth. By rotating the C-arm, which houses a high-voltage generator, an X-ray tube, and a detector, one at each end, CBCT images of dentomaxillofacial structures are obtained by recombining data scanned from the same level at different angles. Functionalities include a panoramic image option and a cephalometric option.
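The CBCT principle described above, recombining projections acquired at many angles into a cross-sectional image, can be illustrated with a toy unfiltered back-projection over a single 2D slice. This is a sketch of the general tomographic technique under simplified parallel-beam assumptions, not the device's actual reconstruction algorithm; the function name, grid size, and angle set are hypothetical.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Toy unfiltered back-projection of parallel-beam projections
    onto a size-by-size slice.

    sinogram   -- array of shape (n_angles, n_detectors)
    angles_deg -- view angle (degrees) for each projection row
    """
    n_angles, n_det = sinogram.shape
    # Pixel-centre coordinates, centred on the rotation axis.
    coords = np.arange(size) - (size - 1) / 2.0
    X, Y = np.meshgrid(coords, coords)
    recon = np.zeros((size, size))
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector coordinate that each pixel projects onto at this angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + (n_det - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]                 # smear this view back across the slice
    return recon / n_angles
```

A point object on the rotation axis projects onto the central detector element at every angle, so its reconstruction peaks at the centre of the slice; a filtered variant would sharpen this peak, but the unfiltered form is enough to show how views from different angles are recombined.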
The provided text describes a 510(k) premarket notification for the RAYSCAN α-Expert3D, a dental X-ray imaging system. The document focuses on demonstrating substantial equivalence to a predicate device, rather than proving that the device meets specific acceptance criteria through a comprehensive clinical study.
Therefore, the requested information regarding detailed acceptance criteria, sample sizes, expert qualifications, and specific study designs (MRMC, standalone performance) is largely not present in the provided text. The document primarily highlights non-clinical bench testing and the provision of clinical image samples for review by licensed practitioners to further support substantial equivalence.
Based on the available information, here's what can be extracted and what is missing:
Overview of Device Performance and Study Information
The submission for the RAYSCAN α-Expert3D is a 510(k) for substantial equivalence to predicate devices (RAYSCAN α-Expert3D, K190812, and RCT800, K230753). The performance assessment primarily relies on demonstrating that the modified device (with updated X-ray voltage/current and detector types) maintains similar safety and effectiveness compared to the predicates, as supported by non-clinical and limited clinical data.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not present explicit quantitative acceptance criteria for device performance, such as specific accuracy, sensitivity, or specificity thresholds. Instead, it states that "All test results were satisfactory" for bench testing. The primary "acceptance criterion" implied throughout the 510(k) process is demonstrating substantial equivalence to the predicate device.
Criterion / Aspect | Acceptance Standard (Implied) | Reported Device Performance
---|---|---
Imaging Performance | Satisfy designated tolerances for imaging properties (per FDA Guidance for 510(k)s for Solid State X-ray Imaging Devices and standards IEC 61223-3-4, IEC 61223-3-7). Demonstrate clinical image quality similar to the predicate device. | "Performance (Imaging performance) testing was conducted according to standard of IEC 61223-3-4 and IEC 61223-3-7. All test results were satisfactory." "Clinical imaging samples were collected... A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." "Because the subject device uses the same detector as the predicate device, there are no significant differences between the two devices as a result of non-clinical testing."
Safety (Electrical, Mechanical, Environmental) | Compliance with relevant international standards: IEC 60601-1, IEC 60601-1-3, IEC 60601-1-6, IEC 60601-2-63, IEC 60601-1-2 (EMC). | "Electrical, mechanical and environmental safety testing according to standard of IEC 60601-1: 2005/AMD1:2012 (3.1 Edition), IEC 60601-1-3: 2008/AMD1:2013 (Second Edition), IEC 60601-1-6:2010 (Third Edition) and IEC 60601-2-63: 2012/AMD1:2017 (First Edition) were performed. EMC testing was conducted in accordance with the standard IEC 60601-1-2: 2014 (Edition 4.0)." Successful compliance is implied, as part of an SE submission.
Software Validation | Validation according to FDA "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" and "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices". Software level of concern deemed "moderate"; differences do not affect safety/effectiveness. | "The software... has been validated according to the FDA 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices' and 'Content of Premarket Submissions for Management of Cybersecurity in Medical Devices' to assure substantial equivalence." Successful validation is implied.
Patient Dosage | Patient dosage satisfies the designated tolerance. | "Bench testing is used to assess whether the parameters required to describe functionalities related to imaging properties of the dental X-ray device and patient dosage satisfy the designated tolerance." A satisfactory result is implied.
2. Sample size(s) used for the test set and the data provenance
- Test Set Sample Size: The document states that "Clinical imaging samples were collected from new detectors on the proposed device at the two offices where the predicate device was installed for the clinical test images." It also mentions "images were gathered from all detectors installed with RAYSCAN α-Expert3D using protocols with random patient age, gender, and size." However, no specific numerical sample size (e.g., number of patients or images) for the clinical test set is provided.
- Data Provenance:
- Country of Origin: Not explicitly stated for the clinical data. The manufacturer is in South Korea.
- Retrospective or Prospective: Not explicitly stated. The phrasing "Clinical imaging samples were collected from new detectors on the proposed device" could suggest prospective collection for the purpose of this submission, but it's not definitive.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: "two licensed practitioners/clinicians."
- Qualifications of Experts: "A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." Specific specialties (e.g., radiologist, dentist with specific experience) or years of experience are not provided beyond "licensed practitioner."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- "A licensed practitioner reviewed the sample clinical images." This suggests an individual review, potentially without formal adjudication unless the "two licensed practitioners" independently reviewed and concurred, which is not detailed. No specific adjudication method (e.g., consensus, majority vote) is mentioned. The primary assessment seems to be a qualitative review for "acceptable quality for the intended use."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done. If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study was not done. The document describes a "substantial equivalence" submission for an imaging device, not an AI-assisted diagnostic tool. The purpose was to show the new device produces images comparable to the predicate for diagnostic use. No AI component is mentioned, and therefore no assessment of human reader improvement with AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This device is an X-ray imaging system, not a diagnostic algorithm. Its performance is related to image acquisition parameters and image quality, not an output from an algorithm in the typical sense of standalone AI.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The "ground truth" for the clinical images appears to be the qualitative assessment of "acceptable quality for the intended use" by licensed practitioners. It is not based on pathology, outcomes data, or a formal expert consensus process as would be typically seen for a diagnostic performance study. The images serve to "show that the complete system works as intended."
8. The sample size for the training set
- Not applicable. This is an X-ray imaging system, not a machine learning model, so there is no "training set." The software validation refers to standard software development practices, not AI model training.
9. How the ground truth for the training set was established
- Not applicable, as there is no training set for an AI model.
In summary, the provided document focuses on demonstrating that the updated RAYSCAN α-Expert3D device is substantially equivalent to previously cleared predicate devices, primarily through non-clinical bench testing and a limited qualitative review of clinical images by licensed practitioners. It does not contain the detailed, quantitative clinical study data (such as MRMC, standalone algorithm performance, or specific metrics with acceptance thresholds) typically associated with AI/CADe device submissions.