System, X-Ray, Extra oral Source, Digital, PaX-i Plus, PaX-i Insight
The PCH-30CS is intended to produce panoramic or cephalometric digital X-ray images. It provides diagnostic detail of the dento-maxillofacial region, sinuses, and TMJ for adult and pediatric patients. The system can also acquire carpal images for orthodontic treatment. The device is to be operated by physicians, dentists, and X-ray technicians.
PaX-i Plus / PaX-i Insight (Model: PCH-30CS) is an advanced 3-in-1 digital X-ray imaging system that incorporates PANO, CEPH (Optional) and 3D PHOTO (Optional) imaging capabilities into a single system and acquires 2D diagnostic image data in conventional panoramic and cephalometric modes. The PaX-i Plus / PaX-i Insight dental systems are not intended for CBCT imaging.
The provided text describes the PaX-i Plus / PaX-i Insight (Model: PCH-30CS) digital X-ray system and its substantial equivalence to a predicate device. However, it does not contain the requested level of detail on acceptance criteria or on a study that "proves" the device meets them, especially in the context of AI assistance or human-in-the-loop performance measurement.
The document focuses on demonstrating substantial equivalence to an existing predicate device rather than outright proving performance against specific acceptance criteria in a clinical study as might be done for a novel AI-powered diagnostic device. It primarily relies on bench testing of components, comparison of technical characteristics, and a qualitative assessment of general image quality.
Here's an analysis based on the available information, with caveats where data is missing:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of "acceptance criteria" for diagnostic accuracy or clinical utility that an AI-driven device would typically have (e.g., sensitivity, specificity, AUC thresholds). Instead, the performance evaluation focuses on direct comparisons of technical imaging metrics and safety standards against a predicate device.
| Acceptance Criteria (Implied / Technical) | Reported Device Performance (Subject Device vs. Predicate) |
|---|---|
| **Image Quality (Detectors)** | |
| MTF (Modulation Transfer Function) | Similar or better |
| DQE (Detective Quantum Efficiency) | Similar or better |
| NPS (Normalized Noise Power Spectrum) | Similar or better |
| Signal-to-Noise Ratio (SNR) | Superior at all spatial frequencies |
| Pixel Resolution | Similar (Xmaru1501CF-PLUS) or higher, with better SNR (Xmaru1404CF-PLUS) |
| **Dosimetric Performance (DAP)** | |
| Panoramic mode | Similar to predicate device under similar X-ray exposure conditions |
| Cephalometric mode (Fast) | Equivalent to predicate device |
| **Safety & Standards Compliance** | |
| IEC 60601-1 (electrical, mechanical, environmental safety) | Conforms |
| IEC 60601-1-3 (performance) | Conforms |
| IEC 60601-2-63 (performance, dental extra-oral X-ray) | Conforms |
| IEC 60601-1-2 (EMC) | Conforms |
| 21 CFR 1020.30, 1020.31 (EPRC standards) | Conforms |
| NEMA PS 3.1–3.18 (DICOM set) | Conforms |
| FDA guidance, "Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices" | Complies (non-clinical consideration report provided) |
| IEC 61223-3-4 (acceptance test and image evaluation) | Satisfactory |
| Software Verification & Validation (Moderate Level of Concern) | Satisfactory testing results; proper functioning |
| Cybersecurity Risk Analysis | Performed |
| General Image Quality (Clinical) | Equivalent or better |
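To make the SNR comparison concrete, the following is a minimal sketch of how a single-region SNR comparison between two detectors might be computed. The flat-field images here are synthetic (`subject_roi`, `predicate_roi`, and the noise levels are hypothetical assumptions); this is illustrative only and is not the manufacturer's bench protocol.

```python
import numpy as np

def roi_snr(image: np.ndarray) -> float:
    """Signal-to-noise ratio of a uniform-exposure (flat-field) ROI:
    mean pixel value divided by the standard deviation of the noise."""
    return float(image.mean() / image.std())

rng = np.random.default_rng(0)

# Hypothetical flat-field captures: same mean signal, different noise levels.
subject_roi = rng.normal(loc=1000.0, scale=8.0, size=(256, 256))     # subject detector
predicate_roi = rng.normal(loc=1000.0, scale=12.0, size=(256, 256))  # predicate detector

print(f"subject SNR:   {roi_snr(subject_roi):.1f}")
print(f"predicate SNR: {roi_snr(predicate_roi):.1f}")
# A higher ROI SNR for the subject detector would support a
# "similar or better" image-quality claim on this one metric.
```

In practice, detector bench testing reports frequency-resolved metrics (MTF, NPS, DQE) rather than a single ROI statistic, but the comparison logic is the same: subject-device curves are overlaid on predicate-device curves and judged similar or better.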
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "Clinical images were provided" and "PANO / CEPH images from the subject and predicate devices are evaluated in the Clinical consideration and image quality evaluation report." However:
- Sample Size: The exact sample size of clinical images used for evaluation is not specified.
- Data Provenance: The country of origin is not specified, and it's not explicitly stated if the data was retrospective or prospective. The phrasing "Clinical images were provided" suggests pre-existing images, implying retrospective data, but this is not confirmed.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document states: "PANO / CEPH images from the subject and predicate devices are evaluated in the Clinical consideration and image quality evaluation report." However:
- Number of Experts: The number of experts involved in this evaluation is not specified.
- Qualifications of Experts: The qualifications of these experts (e.g., radiologist with X years of experience) are not specified.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or evaluating clinical images. It simply states that images were "evaluated."
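For context on what such a scheme looks like, a "2+1" adjudication has two primary readers label each case independently, with a third reader deciding only when they disagree. A minimal sketch, with hypothetical reader labels (none of this appears in the document):

```python
from typing import Optional

def adjudicate_2plus1(reader_a: str, reader_b: str,
                      adjudicator: Optional[str] = None) -> str:
    """'2+1' adjudication: if the two primary readers agree, their label
    becomes the ground truth; otherwise a third reader decides."""
    if reader_a == reader_b:
        return reader_a
    if adjudicator is None:
        raise ValueError("disagreement requires an adjudicator label")
    return adjudicator

# Hypothetical case labels ("finding" / "normal"):
print(adjudicate_2plus1("finding", "finding"))            # readers agree
print(adjudicate_2plus1("finding", "normal", "normal"))   # adjudicator decides
```

A "3+1" scheme works analogously with three primary readers and majority vote before escalation.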
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC study appears to have been conducted to measure the effect size of human readers improving with AI vs. without AI assistance. The device in question is a digital X-ray imaging system, not an AI-assisted diagnostic tool. The document focuses on the equivalence of image acquisition hardware and software, not on a decision support or AI diagnostic component that would interact with human readers in such a study.
- The "Insight PAN" mode is described as providing "multilayer panorama images with depth information" which "demonstrate useful diagnostic information," but this is a feature of the imaging system itself, not necessarily an AI-driven interpretation aid.
6. Standalone (Algorithm Only) Performance
- No standalone (algorithm-only) performance testing was presented in the context of diagnostic accuracy. As noted above, the device is an imaging system, not primarily an AI algorithm for diagnosis. Performance evaluation focused on bench testing of hardware components (detectors, X-ray source) and overall image quality metrics, not on an algorithm's diagnostic output.
7. Type of Ground Truth Used
The document refers to "Clinical consideration and image quality evaluation report" and that "general image quality... is equivalent or better." This suggests that the "ground truth" for the clinical image evaluation was likely based on expert consensus or qualitative assessment of image quality for diagnostic detail, rather than pathology, outcomes data, or a single definitive reference standard.
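If the image evaluation did rely on expert consensus, the simplest form is a majority vote over per-image quality labels, escalating ties to adjudication. A hypothetical sketch (the document does not describe any such procedure, and the labels below are invented):

```python
from collections import Counter

def consensus_label(labels: list[str]) -> str:
    """Majority-vote consensus over expert labels; ties must be escalated."""
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        raise ValueError("no majority; escalate to adjudication")
    return counts[0][0]

# Three hypothetical experts rating one image's diagnostic quality:
print(consensus_label(["diagnostic", "diagnostic", "non-diagnostic"]))  # -> diagnostic
```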
8. Sample Size for the Training Set
- This information is not applicable or not provided. The document describes a traditional X-ray imaging system, not a machine learning model that would require a distinct "training set." The performance evaluation is based on comparison to a predicate device and technical measurements.
9. How the Ground Truth for the Training Set Was Established
- This information is not applicable or not provided for the same reasons as #8.
In summary: The provided 510(k) summary focuses on demonstrating substantial equivalence for an X-ray imaging system by comparing technical specifications, bench test results of components (like detectors), compliance with safety standards, and general image quality to a predicate device. It does not provide the kind of detailed clinical study data (e.g., specific acceptance criteria for diagnostic performance, detailed clinical test set demographics, expert qualifications, or MRMC studies) that would be expected for an AI-powered diagnostic device or a system whose primary claim is improved diagnostic accuracy for specific conditions.