Search Results
Found 2 results
510(k) Data Aggregation
(49 days)
CS 9600
The CS 9600 is an extraoral system intended to produce two-dimensional digital X-ray images of the dento-maxillofacial, ENT (Ear, Nose and Throat), cervical spine and wrist regions at the direction of healthcare professionals as diagnostic support for pediatric and adult patients.
The CS 9600 can be upgraded to produce cephalometric digital X-ray images. This includes imaging the hand and wrist to obtain a carpus image for growth and maturity assessment.
The CS 9600 is an extraoral system intended to produce two-dimensional and three-dimensional digital X-ray images of the dento-maxillofacial, ENT (Ear, Nose and Throat), cervical spine and wrist regions at the direction of healthcare professionals as diagnostic support for pediatric and adult patients.
The CS 9600 can be upgraded to produce cephalometric digital X-ray images. This includes imaging the hand and wrist to obtain a carpus image for growth and maturity assessment.
The CS 9600 is a cone-beam computed tomography (CBCT) x-ray system. This means the CS 9600 rotates around the patient, capturing data using a cone-shaped x-ray beam. These data are used to reconstruct a two-dimensional (2D) or three-dimensional (3D) image of the following regions of the patient's anatomy: dental (teeth); oral and maxillofacial region (mouth, jaw and neck); ears, nose and throat region (ENT); cervical spine or wrist region.
Additional features such as low dose mode, scout image and metal artifact reduction are also provided by the CS 9600.
The CS 9600 can also be upgraded with a cephalometric modality. The cephalometric modality of the proposed device CS 9600 is the same as the one available in the reference device K151087. The cephalometric mode works with a narrow-beam linear scanning process called a "slot technique": the patient's head is scanned line by line with a flat, fan-shaped x-ray beam.
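The rotate-and-reconstruct process described above can be illustrated with a minimal sketch. This is not the CS 9600's algorithm (the submission does not disclose it); it uses a simplified 2-D parallel-beam geometry and unfiltered back-projection purely to show how projections acquired at many rotation angles are combined into a cross-sectional image.

```python
# Minimal, unfiltered back-projection sketch in a simplified 2-D
# parallel-beam geometry. Real CBCT systems such as the CS 9600 use a
# cone-shaped beam and FDK-style filtered reconstruction; nothing here
# is taken from the actual product.
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Simulate one 1-D projection per angle by summing along rotated rays."""
    return np.stack([
        rotate(image, -a, reshape=False, order=1).sum(axis=0)
        for a in angles_deg
    ])

def back_project(sinogram, angles_deg, size):
    """Smear each projection back across the image plane and average."""
    recon = np.zeros((size, size))
    for profile, a in zip(sinogram, angles_deg):
        smear = np.tile(profile, (size, 1))   # spread the 1-D profile over the plane
        recon += rotate(smear, a, reshape=False, order=1)
    return recon / len(angles_deg)

if __name__ == "__main__":
    size = 128
    phantom = np.zeros((size, size))
    phantom[40:90, 50:80] = 1.0               # crude stand-in for anatomy
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = forward_project(phantom, angles)
    recon = back_project(sino, angles, size)
    print("reconstructed intensity range:", recon.min(), recon.max())
```

The unfiltered result is blurry by design; filtered back-projection (or FDK for true cone-beam data) adds a ramp filter before the smearing step to recover sharp edges.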
The provided text describes the CS 9600 device, an extraoral system for producing 2D and 3D digital X-ray images, and its substantial equivalence to predicate devices. It specifically details the addition of an optional cephalometric modality.
Here's an analysis of the acceptance criteria and the study that proves the device meets those criteria, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't present a formal table of acceptance criteria with specific quantitative thresholds. Instead, it describes performance through comparison to predicate devices and general statements about clinical effectiveness. The core "acceptance criteria" appear to be met by demonstrating substantial equivalence to these predicate devices.
Feature / Modality | Predicate Device (K181136) Performance | Reference Device (K151087) Performance (for Ceph) | CS 9600 Reported Performance | Acceptance Standard |
---|---|---|---|---|
Panoramic Modality | Present, same specifications as CS 9600 | N/A | Present, same specifications as predicate K181136 | Substantial Equivalence to K181136 |
3D Modality | Present, same specifications as CS 9600 | N/A | Present, same specifications as predicate K181136 | Substantial Equivalence to K181136 |
Cephalometric Modality (Optional) | Not present in primary predicate | Present, with detailed specifications | Present, identical to reference device K151087 | Substantial Equivalence to K151087 |
Image Quality (Cephalometric) | N/A | Not explicitly stated but implied acceptable | "acceptable clinical effectiveness" and "clinically usable diagnostic quality" | Qualified expert review |
EMC & Electrical Safety | Implicitly met by predicate | Implicitly met by reference | Meets IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, IEC 60601-2-63 | Compliance with specified IEC/AAMI standards |
Software Validation | Implicitly met by predicate | Implicitly met by reference | Validated according to FDA Guidance for Software and Cybersecurity | Compliance with specified FDA Guidances |
DICOM Conformance | Implicitly met by predicate | Implicitly met by reference | Meets NEMA PS 3.1-3.20 | Compliance with NEMA DICOM Set |
Pediatric Information | Implicitly met by predicate | Implicitly met by reference | Provides design features and instructions for pediatrics | Compliance with FDA Guidance on Pediatric Information |
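As a rough illustration of what the DICOM Conformance row above covers in practice, the sketch below uses the pydicom library (an assumption; no tooling is named in the submission) to spot-check a few basic attributes of an exported image object. A real conformance claim against NEMA PS 3.1-3.20 rests on the vendor's DICOM Conformance Statement, not on a script like this.

```python
# Hypothetical spot-check of a DICOM export, using pydicom (not part of
# the 510(k) submission).
from pydicom import dcmread

def describe_export(path):
    ds = dcmread(path)                    # read the DICOM data set
    return {
        "SOPClassUID": str(ds.SOPClassUID),
        "Modality": ds.get("Modality", "unknown"),
        "Rows": ds.get("Rows"),
        "Columns": ds.get("Columns"),
        "BitsStored": ds.get("BitsStored"),
    }

if __name__ == "__main__":
    # "example_export.dcm" is a placeholder filename, not a real CS 9600 file.
    print(describe_export("example_export.dcm"))
```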
2. Sample Size Used for the Test Set and Data Provenance:
The document does not explicitly state the sample size for the test set. It mentions "clinical images representative of the range of the different cephalometric radiological exams were taken." This implies a set of images was used, but the exact number is not provided. The data provenance (country of origin, retrospective/prospective) is also not stated.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
The document states that "The cephalometric images were reviewed by a qualified expert." It indicates a single expert was used. The specific qualifications of this expert are not detailed beyond "qualified expert."
4. Adjudication Method for the Test Set:
No adjudication method is described. The review was conducted by a single "qualified expert."
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
No MRMC comparative effectiveness study was mentioned. The study focused on technical comparisons and single-expert clinical review, not on comparing human reader performance with and without AI assistance.
6. If a Standalone Study (algorithm only, without human-in-the-loop performance) was done:
The device described is an X-ray imaging system, not an AI algorithm. Therefore, a standalone performance study of an algorithm independent of human interaction is not applicable in this context. The "performance testing" was for the imaging system itself.
7. The Type of Ground Truth Used:
For the cephalometric images, the ground truth was the opinion of a single "qualified expert," who "evaluated [the images] to be of acceptable clinical effectiveness for the proposed indications for use" and deemed them "to be of a clinically usable diagnostic quality."
8. The Sample Size for the Training Set:
The document describes performance testing for an imaging device, not an AI algorithm. As such, there is no mention of a "training set" in the context of machine learning. The device's performance is established based on its physical characteristics, image quality, and regulatory compliance, rather than by training on a dataset.
9. How the Ground Truth for the Training Set Was Established:
As mentioned above, the concept of a "training set" for an AI algorithm is not applicable to the information provided for this medical imaging device.
(24 days)
CS 9600
The CS 9600 is an extraoral system intended to produce two-dimensional digital X-ray images of the dento-maxillofacial, ENT (Ear, Nose and Throat), cervical spine and wrist regions at the direction of healthcare professionals as diagnostic support for pediatric and adult patients.
The CS 9600 is an extraoral system intended to produce two-dimensional digital X-ray images of the dento-maxillofacial, ENT (Ear, Nose and Throat), cervical spine and wrist regions at the direction of healthcare professionals as diagnostic support for pediatric and adult patients.
The CS 9600 is a cone-beam computed tomography (CBCT) x-ray system. This means the CS 9600 rotates around the patient, capturing data using a cone-shaped x-ray beam. These data are used to reconstruct a two-dimensional (2D) or three-dimensional (3D) image of the following regions of the patient's anatomy: dental (teeth); oral and maxillofacial region (mouth, jaw and neck); ears, nose and throat region (ENT); cervical spine or wrist region.
Additional features such as low dose mode, scout image and metal artifact reduction are also provided by the CS 9600.
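The metal artifact reduction feature is only named, not described, in the submission. As a hedged illustration of one classic approach (sinogram interpolation, sometimes called LI-MAR), the sketch below replaces detector readings that pass through metal with values interpolated from their neighbours before reconstruction. It is a generic textbook technique shown for context only, not the CS 9600's actual algorithm.

```python
# Illustrative sinogram-interpolation metal artifact reduction (LI-MAR style).
# This is a generic technique, not the CS 9600's proprietary method.
import numpy as np

def interpolate_metal_trace(sinogram, metal_trace):
    """Replace sinogram samples flagged as metal with linear interpolation
    along each detector row (one row per projection angle)."""
    corrected = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_trace[i]                         # boolean mask of metal bins
        if bad.any() and not bad.all():
            corrected[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return corrected

if __name__ == "__main__":
    # Toy sinogram: smooth background plus a bright band standing in for metal.
    sino = np.ones((180, 256))
    trace = np.zeros_like(sino, dtype=bool)
    trace[:, 120:130] = True
    sino[trace] = 50.0                               # corrupted (metal) samples
    fixed = interpolate_metal_trace(sino, trace)
    print("max after correction:", fixed.max())      # back to ~1.0
```

The corrected sinogram would then be reconstructed as usual; more sophisticated variants (e.g., normalized MAR) weight the interpolation by a prior image.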
The provided text describes the 510(k) summary for the CS 9600 device. However, it does not include specific acceptance criteria with numerical targets, nor does it detail a study that rigorously proves the device meets such criteria in terms of clinical performance or diagnostic accuracy. Instead, it focuses on demonstrating substantial equivalence to a predicate device (Planmeca ProMax 3D Max, K160506) through technical comparisons and general performance testing.
Here's a breakdown of the information requested, based on the provided text, and highlighting what is not present:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state numerical acceptance criteria for clinical performance (e.g., sensitivity, specificity, accuracy for specific diagnostic tasks). Instead, it states that "The images were reviewed by a qualified expert and were evaluated to be of acceptable clinical effectiveness for the proposed indications for use. The CS 9600 set of images were deemed to be of a clinically usable diagnostic quality." This is a qualitative statement of performance rather than a quantitative comparison against defined criteria.
A comparison of technical specifications between the CS 9600 and its predicate device is provided, which implicitly serves as a form of "performance" comparison to demonstrate similarity; a worked check of the limiting-resolution figures follows the table.
Feature | CS 9600 Reported Performance | Predicate Device (Planmeca ProMax 3D Max) Performance |
---|---|---|
General Specifications | ||
X-ray tube voltage | 60-90 kV (60-120 kV optional) | 60-96 kV (60-120 kV optional) |
X-ray tube current | 2-15 mA | 1-14 mA |
Tube focal spot | 0.3 or 0.7 mm | 0.5 or 0.6 mm |
Patient sizes | 4 (child, small adult, medium adult, large adult) | 5 (child, small adult, medium adult, large adult, extra large adult) |
Sensor technology | CMOS | Amorphous silicon |
Sensor active area (mm) | 120 x 140 | 193 x 242 |
Pixel size (µm) | 100 x 100 | 127 x 127 |
Sensor resolution | 1200 x 1400 pixels | 1536 x 1920 pixels |
Gray scale | 16,384 levels (14 bits) | 32,768 levels (15 bits) |
Limiting resolution | 5 lp/mm | 3.94 lp/mm |
MTF, X-ray (%) at 1 lp/mm | 60 | ≥48 |
DQE, X-ray (%) at 0 lp/mm | 60 | 70 |
Unit dimensions (mm) | 1284 (L) x 1669 (D) x 2526 (H) | 1280 (L) x 1430 (D) x 2390 (H) |
Two-dimensional modality: Panoramic | ||
Magnification | 1.28 | 1.2 |
Exposure time | 2-14 seconds | 2.7-16 seconds |
Dose Estimation (Full Panoramic) | Child: 58.5 mGy·cm²; Adult Small: 87.8 mGy·cm²; Adult Medium: 122 mGy·cm²; Adult Large: 139 mGy·cm² | Child: 55 mGy·cm²; Adult Small: 92 mGy·cm²; Adult Medium: 111 mGy·cm²; Adult Large: 136 mGy·cm² |
Three-dimensional modality: 3D | ||
Magnification | 1.4 | 1.4 |
Voxel size (µm) | 75, 150, 300 and 400 | 75, 100, 150, 200, 400 and 600 |
Field of View (cm) | Various, e.g., 4x4 to 16x17* | Various, e.g., 5x5 to 23x26 (with stitching) |
Exposure time | 3-20 seconds | 2.8-18 seconds |
Dose Estimation (FoV 5x5 cm) | Child: 211 mGy·cm²; Adult Small: 220 mGy·cm²; Adult Medium: 440 mGy·cm²; Adult Large: 550 mGy·cm² | Child: 288 mGy·cm²; Adult Small: 472 mGy·cm²; Adult Medium: 598 mGy·cm²; Adult Large: 758 mGy·cm² |
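One internal consistency check on the table above: the quoted limiting resolutions appear to be the detector Nyquist frequencies implied by the pixel pitches (an inference; the submission does not state how the figures were derived):

$$ f_{\text{Nyquist}} = \frac{1}{2\,\Delta x}, \qquad \frac{1}{2 \times 0.100\ \text{mm}} = 5.0\ \text{lp/mm}, \qquad \frac{1}{2 \times 0.127\ \text{mm}} \approx 3.94\ \text{lp/mm}, $$

which matches the 5 lp/mm (CS 9600, 100 µm pixels) and 3.94 lp/mm (predicate, 127 µm pixels) rows exactly.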
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document mentions "The performance testing for imaging applications was carried out taking clinical images representative of the range of the different radiological exams available." It does not specify the sample size of these clinical images, their provenance (country of origin), or whether they were collected retrospectively or prospectively.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document states: "The images were reviewed by a qualified expert." It refers to a singular "expert" and provides no details about their number, specific qualifications (e.g., years of experience, subspecialty), or how ground truth was established beyond a general review.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any specific adjudication method for establishing ground truth from multiple experts. It only mentions review by "a qualified expert."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is mentioned. This device is an imaging system (CBCT scanner), not an AI-assisted diagnostic tool, so such a study would not be applicable in this context. The testing described is focused on the inherent imaging quality and clinical usability of the system itself, not its impact on human reader performance.
6. If a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done
This question is not applicable as the CS 9600 is a CBCT imaging system, not an AI algorithm. Its "performance" refers to the quality of the images it produces for human interpretation, not an automated diagnostic output.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth was established by "a qualified expert" who "evaluated [images] to be of acceptable clinical effectiveness for the proposed indications for use." This implies an expert opinion/review rather than a definitive histological (pathology) or patient outcomes-based ground truth.
8. The sample size for the training set
This device is an imaging system, not a machine learning algorithm. Therefore, there is no "training set" in the context of AI model development described in the document. The performance testing involves clinical images, but these are for testing the device's output, not for training an algorithm within the device.
9. How the ground truth for the training set was established
As there is no training set mentioned for an AI model, this question is not applicable.