510(k) Data Aggregation
(128 days)
The OP300 dental panoramic, cephalometric and cone beam computed tomography x-ray device is intended for dental radiographic examination of teeth, jaw and TMJ areas by producing conventional 2D x-ray images as well as x-ray projection images of an examined volume for the reconstruction of a 3D view. The device is operated and used by qualified healthcare professionals.
The Orthopantomograph OP300 is an extraoral source, software-controlled dental x-ray device which produces conventional digital 2D panoramic, cephalometric and TMJ x-ray images as well as digital x-ray projection images taken during cone beam rotations around a patient's head. The projection images are reconstructed to be viewed in 3D by a 3D viewing software.
The provided text describes a 510(k) premarket notification for a modified dental X-ray device, the OP300. The submission aims to demonstrate substantial equivalence to a predicate device (also an OP300, K122018) rather than presenting a novel device that requires extensive clinical studies. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are focused on engineering and bench testing, demonstrating that the modifications do not negatively impact safety or effectiveness.
Here's an analysis based on the provided text, recognizing that this is a 510(k) submission for a modification, not a de novo device requiring broad clinical trials:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by compliance with recognized consensus standards and by demonstration, through bench testing, of image quality and performance equivalent to the predicate device; an illustrative sketch of one such objective bench metric follows the table.
| Acceptance Criteria Category | Specific Criteria / Standard | Reported Device Performance (Modified OP300) |
|---|---|---|
| Image Quality Equivalence | No significant differences in image quality compared to the predicate OP300 (K122018) | Concluded that there are no significant differences in image quality. |
| Sensor Performance Equivalence | Equivalent sensor performance to the predicate OP300 (K122018) | Concluded that there are no significant differences in sensor performance. |
| Compliance with Consensus Standards | IEC 60601-1:1988 (Medical electrical equipment - Part 1: General requirements for safety) | Compliant |
| | IEC 60601-1-2:2001 (Medical electrical equipment - Part 1-2: General requirements for safety - Collateral standard: Electromagnetic compatibility - Requirements and tests) | Compliant |
| | IEC 60601-1-3:1994 (Medical electrical equipment - Part 1-3: General requirements for safety - Collateral standard: General requirements for radiation protection in diagnostic X-ray equipment) | Compliant |
| | IEC 60601-1-4:1996 (Medical electrical equipment - Part 1-4: General requirements for safety - Collateral standard: Programmable electrical medical systems) | Compliant |
| | IEC 60601-2-7:1998 (Medical electrical equipment - Part 2-7: Particular requirements for the safety of high-voltage generators of diagnostic X-ray generators) | Compliant |
| | IEC 60601-2-28:1993 (Medical electrical equipment - Particular requirements for the safety of X-ray source assemblies and X-ray generators for medical diagnosis) | Compliant |
| | IEC 60601-2-32:1994 (Medical electrical equipment - Part 2-32: Particular requirements for the safety of associated equipment of X-ray equipment) | Compliant |
| Anthropomorphic Phantom Evaluation | Produce images without severe defects in 3D imaging mode. | Demonstrated capability of producing images without severe defects. |
| Software Validation | Successful validation of GUI software to incorporate new features (FOVs, low-dose mode). | Successfully verified and validated. |
| Safety and Effectiveness | Ensure the safety and effectiveness of the device (overall). | Successfully verified and validated. |
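The submission does not say which objective metrics were used in the in-house comparison. Purely as an illustration of the kind of bench measurement that typically supports such an equivalence claim (not the manufacturer's actual protocol), a minimal contrast-to-noise ratio (CNR) comparison on matched phantom regions of interest might look like the following; the function names and the synthetic images are assumptions for the sketch:

```python
import numpy as np

def roi_stats(image: np.ndarray, rows: slice, cols: slice) -> tuple[float, float]:
    """Mean and standard deviation of pixel values inside a rectangular ROI."""
    roi = image[rows, cols]
    return float(roi.mean()), float(roi.std())

def contrast_to_noise(image: np.ndarray,
                      signal_roi: tuple[slice, slice],
                      background_roi: tuple[slice, slice]) -> float:
    """CNR between a feature ROI and a uniform background ROI."""
    mu_s, _ = roi_stats(image, *signal_roi)
    mu_b, sigma_b = roi_stats(image, *background_roi)
    return abs(mu_s - mu_b) / sigma_b

# Toy example: synthetic images standing in for real phantom acquisitions.
rng = np.random.default_rng(0)
predicate = rng.normal(100.0, 5.0, (256, 256))
modified = rng.normal(100.0, 5.0, (256, 256))
predicate[100:150, 100:150] += 20.0   # simulated high-contrast insert
modified[100:150, 100:150] += 20.0

sig = (slice(100, 150), slice(100, 150))
bkg = (slice(0, 50), slice(0, 50))
print("CNR predicate:", contrast_to_noise(predicate, sig, bkg))
print("CNR modified: ", contrast_to_noise(modified, sig, bkg))
```

In a real bench comparison the two arrays would be acquisitions of the same physical phantom on the predicate and modified systems, evaluated across several inserts and exposure settings.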
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- For image quality and sensor performance, the testing compared the modified OP300 against the predicate OP300 (K122018). The specific number of images or runs is not stated; the evaluation is described only as in-house performance (bench) testing.
- For the anthropomorphic phantom evaluation, it involved "images of an anthropomorphic phantom." The number of images is not specified.
- Clinical images of patients were explicitly not used to support substantial equivalence.
- Data Provenance: The testing was in-house bench testing conducted by the manufacturer (PaloDEx Group Oy) in Finland. This indicates internal, controlled testing rather than independent third-party validation. The data is retrospective only in the sense that a new version is compared against the performance characteristics of the existing (predicate) version.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
No external experts or clinicians were explicitly stated as establishing ground truth for the bench test set. The evaluation of "image quality" and "sensor performance" likely relied on internal engineering and quality control staff, comparing objective metrics and potentially subjective assessments by qualified personnel. The statement "it was concluded that there is no significant differences in image quality" implies an internal assessment.
4. Adjudication Method for the Test Set
No formal adjudication method (like 2+1 or 3+1 by multiple experts) is mentioned, as clinical data was not used. The determination of "no significant differences" in image quality and sensor performance appears to be a conclusion drawn from the in-house bench testing results.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size
No MRMC comparative effectiveness study was conducted. The submission explicitly states: "Sample clinical images of patients were not used to support substantial equivalence of the OP300 device." Accordingly, the question of how much human readers improve with versus without AI assistance does not apply: no AI assistance feature is discussed in the submission, and no human reader study was performed.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
The device is an imaging system, not an algorithm being validated in isolation. The "standalone" performance relates to its ability to produce images compliant with standards and equivalent to the predicate, which was assessed through bench testing. The reconstruction software (FBP or ART) operates in a "standalone" fashion to generate the 3D view from 2D projections, but its performance was evaluated in terms of image quality metrics from the phantom, not through a separate algorithm-only study.
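For context on the reconstruction step itself: filtered back projection is a standard textbook algorithm. The sketch below is a parallel-beam illustration only; the OP300 acquires cone-beam projections, so a production reconstructor would use a cone-beam variant (e.g., FDK-type weighting and filtering), and nothing here reflects the vendor's implementation. The idea is to ramp-filter each projection and smear it back across the image grid at its acquisition angle:

```python
import numpy as np
from scipy.ndimage import rotate

def ramp_filter(sinogram: np.ndarray) -> np.ndarray:
    """Apply the ramp (Ram-Lak) filter to each projection row in frequency space."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def fbp_reconstruct(sinogram: np.ndarray, angles_deg: np.ndarray) -> np.ndarray:
    """Parallel-beam FBP of a (num_angles, num_detectors) sinogram."""
    filtered = ramp_filter(sinogram)
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, angles_deg):
        # Smear the filtered 1D projection back across the 2D grid at its angle.
        recon += rotate(np.tile(proj, (n, 1)), theta, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))
```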
7. The Type of Ground Truth Used
The "ground truth" for this 510(k) submission primarily relies on:
- Engineering benchmarks and specifications: Adherence to the technical parameters and performance characteristics established for the predicate device.
- Consensus Standards: Compliance with recognized international standards (IEC 60601 series).
- Anthropomorphic phantom images: The "truth" for these images is the known anatomical/radiological features within the phantom, and the assessment looked for "severe defects" rather than diagnosing a specific condition.
8. The Sample Size for the Training Set
This submission is for a device modification (hardware and GUI changes), not a new algorithm that requires a separate training set. The device uses established image reconstruction techniques (FBP, ART) which do not involve deep learning or AI requiring a "training set" in the modern sense. Therefore, there is no mention of a training set sample size.
9. How the Ground Truth for the Training Set Was Established
As there is no mention of a training set, the establishment of ground truth for a training set is not applicable to this submission.
(153 days)
The OP300 dental panoramic, cephalometric and cone beam computed tomography x-ray device is intended for dental radiographic examination of teeth, jaw and TMJ areas by producing conventional 2D x-ray images as well as x-ray projection images of an examined volume for the reconstruction of a 3D view. The device is operated and used by qualified healthcare professionals.
The Orthopantomograph OP300 is an extraoral source, software-controlled dental x-ray device which produces conventional digital 2D panoramic, cephalometric and TMJ x-ray images as well as digital x-ray projection images taken during cone beam rotations around a patient's head. The projection images are reconstructed to be viewed in 3D by a 3D viewing software.
The provided text does not contain detailed information about specific acceptance criteria (performance metrics with thresholds) for the OP300 device, nor does it detail a comprehensive study designed to prove the device meets such criteria. Instead, it focuses on demonstrating substantial equivalence to a predicate device.
Here's an analysis based on the available text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria in terms of performance metrics with specific thresholds (e.g., accuracy > 90%, sensitivity > 85%). Instead, it establishes "substantial equivalence" based on similar technological characteristics. The table provided focuses on comparing the design and function of the modified OP300 to its predicate (OP300 [K093683]).
| Concept | OP300 (Modified) Performance | Acceptance Criteria (Implicit: Substantial Equivalence to Predicate) |
|---|---|---|
| Indications for Use | Dental radiographic examination of teeth, jaw, and TMJ areas; produces conventional 2D x-ray images and x-ray projection images for 3D reconstruction. | Same as predicate |
| Imaging modes | Panoramic, Cephalometric, TMJ, 3D | Same as predicate |
| X-ray source | 3D: 90kV; Pan: 57-90 kV; Ceph: 60-90 kV; kV accuracy: +/-5kV; mA range: 3.2-16 mA; 3D power: pulsed | Close match to predicate (mA range for predicate is 2-16 mA, modified is 3.2-16 mA, which is within the predicate's range) |
| Focal spot | 0.5mm | Same as predicate |
| Image detector(s) | CMOS Flat Panel + CMOS for pan/ceph imaging | Same as predicate |
| 3D imaging technique | Reconstruction from 2D images | Same as predicate |
| 3D's Field Of View | 61 x 41 mm, 61 x 78 mm | Same as predicate |
| 3D's total viewing angle | 200 degrees | Same as predicate |
| Pixel size | CMOS flat panel for 3D: 200 µm; CMOS for panoramic imaging: 100 µm | Same as predicate |
| Voxel size | 80-350 µm | Same as predicate |
| Reconstruction Software | Filtered Back Projection (FBP) or Algebraic Reconstruction Technique (ART) | Predicate used Algebraic Reconstruction Technique (ART). The modified device includes FBP as an additional option, which is noted in the non-clinical data as widely used. |
| 3D's effective exposure time | 2.3 - 12.5 sec | Predicate: 2 - 20 sec (modified device falls within this range) |
| 3D Reconstruction Time | 1-3 min | Same as predicate |
| Patient's Position | Standing and wheelchair | Same as predicate |
| System footprint | H 161-241 cm x D 139.0 cm x W 97-193 cm | Same as predicate |
| Weight | Pan/3D 205 kg; Ceph 250 kg | Same as predicate |
The "acceptance criteria" here are largely based on the new device having "Same" or "similar" (within a specified range) technological and performance characteristics as the predicate device, which is a common approach for 510(k) submissions.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample size: The document mentions a "Preference study as an image quality comparison" and states that "Same patient data was used for the reconstructions." It does not specify the number of patients or images used in this preference study (a sketch of how such paired preference data is typically analyzed follows this list).
- Data provenance: Not explicitly stated. The manufacturer is based in Finland. The study is referred to as a "preference study" implying image review, but the source of the patient data (country, retrospective/prospective) is not detailed.
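Since no reader counts or statistical methods are reported, the following is only a hypothetical illustration of how paired "which image do you prefer?" data from such a study is commonly summarized — an exact two-sided sign test against a 50/50 chance split; the tallies in the example are invented:

```python
from math import comb

def sign_test_p_value(prefer_new: int, prefer_old: int) -> float:
    """Exact two-sided sign test: is the split of paired preferences
    consistent with no systematic difference (p = 0.5)?"""
    n = prefer_new + prefer_old          # ties are excluded beforehand
    k = min(prefer_new, prefer_old)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical tallies -- the submission reports no such numbers.
print(sign_test_p_value(prefer_new=14, prefer_old=10))
```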
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of experts: The preference study involved "internal reviewers and external dental professionals." The exact number of these professionals is not specified.
- Qualifications of experts: The document only refers to them as "external dental professionals" and "internal reviewers." No specific qualifications (e.g., years of experience, specialization) are provided.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe the adjudication method for the preference study. It simply states that images were "evaluated by internal reviewers and external dental professionals."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study, particularly one measuring the improvement of human readers with AI assistance, was not conducted. This device is a pure imaging system, not an AI diagnostic tool.
- The "preference study" compares images from the new OP300 with its predicate. It's a reader study, but not an MRMC for AI assistance. It aims to show equivalent image quality between the devices, not human performance improvement with AI.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- This question is less relevant to the OP300, as it is an imaging device, not an AI algorithm intended for standalone diagnostic performance.
- The "standalone" performance in this context would refer to the image quality produced by the system itself. The preference study was designed to compare the image quality of the modified OP300 to the predicate, effectively assessing its standalone imaging performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the preference study, the "ground truth" was subjective expert preference/evaluation of image quality. It was not based on objective clinical ground truth like pathology or outcomes data. This is appropriate for an image quality comparison study rather than a diagnostic accuracy study.
8. The sample size for the training set
This device does not involve a "training set" in the context of machine learning. It's an X-ray imaging device. Its algorithms (Filtered Back Projection or Algebraic Reconstruction Technique) are established methods, not 'trained' in the same way an AI model is.
9. How the ground truth for the training set was established
Not applicable, as no machine learning training set is mentioned or implied for this device.