Search Results
Found 2 results
510(k) Data Aggregation
(25 days)
RCT800 is a CBCT and panoramic x-ray imaging system with a cephalometric option. It is intended for radiographic examination of the dento-maxillofacial region, sinus, TMJ, and airway to support diagnosis in adult and pediatric patients, and a model scan is included as an option. Cephalometric imaging also includes the wrist, to obtain carpus images for growth and maturity assessment in orthodontic treatment. The device is to be operated and used by dentists or other legally qualified health care professionals.
The RCT800 is a 3D computed tomography scanner for scanning hard tissues such as bones and teeth. By rotating the C-arm, which carries an all-in-one x-ray tube on one end and a detector on the other, CBCT images of the dento-maxillofacial area are obtained by recombining data from the same level scanned from different angles. Additionally, the system includes a panoramic scanning function for imaging the whole dentition, a cephalometric scanning option for obtaining a cephalic image, and a Model Scan option for obtaining a CBCT image of a dental model.
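The reconstruction principle described above (recombining projections of the same slice acquired from many angles) is the basis of filtered back projection. As a rough illustration only, and not the manufacturer's algorithm, the following minimal sketch simulates it in a simplified 2D parallel-beam geometry using scikit-image's `radon`/`iradon`; real CBCT systems use cone-beam geometry and 3D reconstruction (e.g., FDK).

```python
# Conceptual sketch of filtered back projection (FBP) in a simplified 2D
# parallel-beam geometry. Real CBCT devices such as the one described here
# use cone-beam geometry and 3D reconstruction (e.g., FDK); this only
# illustrates the general principle of recombining multi-angle projections.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                   # stand-in for one scanned slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(phantom, theta=angles)           # simulate projections over the rotation
reconstruction = iradon(sinogram, theta=angles)   # filtered back projection (ramp filter)

rms_error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```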
The provided text describes the 510(k) summary for the Ray Co., Ltd.'s RCT800 device. While it mentions various tests and compliance with standards, it does not explicitly provide a table of acceptance criteria and reported device performance in the typical format of a clinical study report with specific metrics and thresholds. Instead, it states that "All test results were satisfactory" for performance (imaging performance) testing conducted according to IEC 61223-3-4 and IEC 61223-3-7.
It also mentions "Clinical images were provided" and that "A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use," but it doesn't detail a formal clinical study with specific acceptance criteria beyond subjective expert opinion.
Therefore, much of the requested information regarding the study that proves the device meets the acceptance criteria in a quantifiable manner (e.g., statistical significance, specific performance numbers) is not present in the provided document. The document primarily focuses on demonstrating substantial equivalence to a predicate device through technical similarities, bench testing, and compliance with general safety and performance standards.
However, based on the provided text, I can infer and extract some information:
Acceptance Criteria and Device Performance (Inferred from compliance statements)
Since the document states that "All test results were satisfactory" for imaging performance tests and that a licensed practitioner found clinical images to be of "acceptable quality for the intended use," the implied acceptance criteria were met. However, the specific quantitative criteria are not listed.
Table 1: Implied Acceptance Criteria and Reported Device Performance
Criterion Category | Implied Acceptance Criterion | Reported Device Performance |
---|---|---|
Imaging Performance | Parameters required to describe functionalities related to imaging properties satisfy designated tolerance (as per IEC 61223-3-4 and IEC 61223-3-7). | "All test results were satisfactory." (No specific quantitative results provided) |
Clinical Image Quality | Sample clinical images are of acceptable quality for the intended use by a licensed practitioner. | "A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." (Qualitative assessment) |
Electrical, Mechanical & Environmental Safety | Conformity to IEC 60601-1:2005/AMD1:2012 (3.1 Edition), IEC 60601-1-3:2008/AMD1:2013 (Second Edition), IEC 60601-1-6:2010 (Third Edition), and IEC 60601-2-63:2012 (First Edition). | "Electrical, mechanical and environmental safety testing... were performed." (Implied satisfactory outcome as part of substantial equivalence) |
EMC | Conformity to IEC 60601-1-2:2014 (Edition 4.0). | "EMC testing was conducted in accordance with the standard IEC 60601-1-2:2014 (Edition 4.0)." (Implied satisfactory outcome as part of substantial equivalence) |
Software Validation | Compliance with FDA "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" and "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices" for moderate level of concern. | "That has been validated according to the FDA 'Guidance...'" and "...assure substantial equivalence." "Based on our risk analysis of software, the difference does not affect its safety and effectiveness." (Implied successful validation and no safety/effectiveness issues due to software) |
Study Details (Based on the provided text)
- Sample sizes used for the test set and the data provenance:
- Test Set (Clinical Images): The document states "Clinical imaging samples were collected from new detectors on the proposed device at the two offices where the predicate device was installed for the clinical test images. These images were gathered from all detectors installed with RCT800 using protocols with random patient age, gender, and size."
- Specific Number: Not specified. It indicates "samples" and "images gathered from all detectors," implying a collection across multiple patient demographics, but no numerical count is provided.
- Provenance: Clinical images were obtained from the "two offices where the predicate device was installed." The country of origin is not explicitly stated, but the applicant (Ray Co., Ltd.) is based in South Korea, so it is plausible these were collected in South Korea or in countries where the predicate device was installed.
- Retrospective/Prospective: Not explicitly stated, but the mention of images collected from "new detectors" across "random patient age, gender, and size" suggests some form of prospective or concurrent collection for the evaluation.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: "Two licensed practitioners/clinicians" clinically tested and approved the features, and "A licensed practitioner reviewed the sample clinical images." It is unclear whether the reviewing practitioner was one of the same two clinicians or a separate individual.
- Qualifications: "Licensed practitioners/clinicians." No further details on their years of experience, subspecialty (e.g., specific type of radiologist/dentist), or formal board certifications are provided.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document states "A licensed practitioner reviewed the sample clinical images and deemed them to be of acceptable quality for the intended use." This suggests a qualitative assessment; no formal adjudication method (such as 2+1 consensus) is described. It implies that the opinion of a single expert, or the un-adjudicated opinions of the two practitioners, served as the "truth."
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC study was mentioned. The document focuses on demonstrating that the device itself produces images of acceptable quality for human interpretation, not on the improvement of human readers with AI assistance. This device is a source of images (an X-ray system), not an AI algorithm for image analysis.
- If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:
- The device is a medical imaging system (CT/panoramic/cephalometric X-ray system), not an AI algorithm for image analysis. Therefore, the concept of "standalone performance" for an algorithm does not apply in the context of this device. The performance refers to the image acquisition capabilities of the system itself.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For clinical image quality, the ground truth was expert opinion/assessment by "licensed practitioners/clinicians" who deemed the images "of acceptable quality for the intended use."
- For bench testing, the ground truth was based on compliance with standards (e.g., IEC 61223-3-4, IEC 61223-3-7) and "designated tolerance" parameters.
- The sample size for the training set:
- This device is an x-ray imaging system. It acquires raw projection data and reconstructs it into clinically useful images using reconstruction algorithms such as FBP (Filtered Back Projection), as mentioned for the predicate device. The document does not describe the use of machine learning that would require a "training set" in the conventional AI sense. If any internal algorithms are adaptive or "learn," details are not provided. The phrase "training set" typically applies to AI/ML devices, which this device is not identified as in the provided text.
- How the ground truth for the training set was established:
- Not applicable, as no training set (for AI/ML) is mentioned or implied for this device.
(203 days)
The X-ray unit system is a diagnostic imaging system which consists of multiple image acquisition modes: panoramic, cephalometric, and CBCT (Cone Beam Computed Tomography). The X-ray unit system is used for dental radiographic examination and diagnosis of the teeth, jaw, oral structures, and skull. The device is to be operated and used by dentists and other legally qualified professionals.
The proposed device, PAPAYA 3D Premium Plus, is a computed tomography x-ray system which consists of multiple image acquisition modes: panoramic, cephalometric, and computed tomography. The only difference between the two models is the optional cephalometric detector. It is designed for dental radiography of oral and craniofacial structures such as the teeth, jaws, and oral structures. The device with the cephalometric detector is named PAPAYA 3D Premium Plus and the device without the cephalometric detector is named PAPAYA 3D Premium.
The proposed devices are composed of flat-panel x-ray detectors based on CMOS and TFT detector types, divided into CT, panoramic, and cephalometric radiography, plus an x-ray tube. The CMOS and TFT detectors are used to capture scanned images for obtaining diagnostic information for craniofacial surgery or other treatments. The system also provides 3D views of anatomic structures by acquiring 360° rotational image sequences of the oral and craniofacial area.
The provided text describes a 510(k) premarket notification for a dental X-ray system, PAPAYA 3D Premium & PAPAYA 3D Premium Plus. The document focuses on demonstrating substantial equivalence to a predicate device (PAPAYA 3D Plus, K150354) rather than presenting a detailed clinical study with specific acceptance criteria and performance metrics for an AI algorithm.
Therefore, many of the requested details about acceptance criteria for an AI device, sample sizes, expert qualifications, and specific study designs (MRMC, standalone performance) are not present in the provided text. The device in question is a medical imaging hardware system, not an AI software.
However, I can extract information related to the performance validation of the newly added image receptors, which is the closest thing to "device meets acceptance criteria" in this context.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance (as much as can be inferred for the imaging components):
Acceptance Criteria (Inferred) | Reported Device Performance (for newly added detectors) |
---|---|
Clinical Considerations: Images are "diagnosable" and meet indications for use | "well enough to diagnosable and meet its indications for use" |
Imaging Performance (for newly added CBCT image receptor FXDD-0909GA): | Tested for: |
- Gantry positioning accuracy | - Gantry positioning accuracy |
- In-plane uniformity | - In-plane uniformity |
- Spatial resolution section thickness | - Spatial Resolution section thickness |
- Noise | - Noise |
- Contrast to Noise Ratio | - Contrast to Noise Ratio |
- Geometric Distortion | - Geometric Distortion |
- Metal Artifacts | - Metal Artifacts |
Imaging Performance (for newly added Cephalometric image receptor FXDD-1012CA): | Tested for: |
- Line pair resolution | - Line pair resolution |
Note: The document states these performance metrics were "tested," implying they met predefined acceptance criteria, but the specific numerical values or thresholds for "acceptance" are not provided.
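Since the table only names the phantom metrics without giving thresholds, the following minimal sketch shows one common way the contrast-to-noise ratio (CNR) is computed from two phantom regions of interest; the exact formulation and pass/fail tolerances used in the submission (and in the IEC standards it cites) may differ, and the ROI coordinates here are hypothetical.

```python
# One common definition of contrast-to-noise ratio (CNR) measured from two
# phantom regions of interest (ROIs): a target insert and a uniform
# background. The exact formulation and tolerances used in the submission
# are not given, so this is only an illustrative sketch.
import numpy as np

def cnr(image: np.ndarray, signal_roi: tuple, background_roi: tuple) -> float:
    """CNR = |mean(signal ROI) - mean(background ROI)| / std(background ROI)."""
    signal = image[signal_roi]
    background = image[background_roi]
    return abs(signal.mean() - background.mean()) / background.std()

# Hypothetical usage on a synthetic phantom slice:
rng = np.random.default_rng(0)
slice_image = rng.normal(100.0, 5.0, size=(256, 256))   # uniform background + noise
slice_image[100:140, 100:140] += 30.0                    # higher-attenuation insert
signal_roi = (slice(100, 140), slice(100, 140))
background_roi = (slice(10, 50), slice(10, 50))
print(f"CNR: {cnr(slice_image, signal_roi, background_roi):.2f}")
```

A similar phantom-based approach underlies the line-pair resolution test listed for the cephalometric receptor; the achievable resolution is ultimately bounded by the detector's Nyquist limit of roughly 1/(2 × pixel pitch) line pairs per millimeter.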
2. Sample Size and Data Provenance:
- Test Set Sample Size: Not explicitly stated for either clinical image evaluation or phantom testing. The document only mentions "clinical images" for evaluation.
- Data Provenance: The document does not specify the country of origin of the clinical images. It implies a retrospective review of existing clinical image sets.
3. Number of Experts and Qualifications:
- Number of Experts: "the clinical images were evaluated by the US board-certified oral surgeon." (Singular - implies one or an unspecified small number of US board-certified oral surgeons).
- Qualifications: "US board-certified oral surgeon." No specific years of experience are mentioned.
4. Adjudication Method:
- Adjudication Method: "Throughout the evaluation by oral surgeon..." This wording suggests a single expert's opinion, so there's no mention of a formal adjudication method (like 2+1 or 3+1).
5. MRMC Comparative Effectiveness Study:
- MRMC Study Done? No. This document describes a new imaging hardware device and its added detectors. There is no mention of an AI component requiring a comparison of human reader performance with and without AI assistance.
6. Standalone Performance (Algorithm Only):
- Standalone Performance Done? N/A. This is a hardware device. The closest related component is the "Theia" image processing software, which is described as having "only UI" differences from the predicate's software and being "developed for marketing purpose only." Its validation focused on standards compliance (EN 62304, NEMA PS 3.1-3.20 DICOM, FDA Guidance) rather than a standalone clinical performance study as one might expect for an AI algorithm.
7. Type of Ground Truth Used:
- For Clinical Image Evaluation: Expert opinion (the assessment of the US board-certified oral surgeon) on whether images were "diagnosable" and met the indications for use.
- For Imaging Performance Tests: Phantom data (e.g., gantry positioning accuracy, spatial resolution, CNR, etc.).
8. Sample Size for Training Set:
- Training Set Sample Size: Not applicable. This is a hardware device, not an AI model that undergoes "training."
9. How Ground Truth for Training Set Was Established:
- Ground Truth Establishment for Training Set: Not applicable, as there's no AI training set described.