510(k) Data Aggregation
EzOrtho is software indicated for use by dentists who provide orthodontic treatment, for image analysis, simulation, profilograms, VTO/STO, and patient consultation. Results produced by the software's diagnostic, treatment planning, and simulation tools are dependent on the interpretation of trained and licensed practitioners or dentists.
EzOrtho is a 2D orthodontic analysis program developed by Ewoosoft. EzOrtho manages patient information and images for orthodontic analysis. The software also assists in orthodontic treatment by providing accurate image analysis, profilograms, superimpositions, and VTO (visualized treatment objective) and STO (surgical treatment objective) analyses. The analyzed results are saved in chart format so that you can easily store and track the treatment records of each patient.
EzOrtho is designed to provide a simple and straightforward user interface.
- Managing Patients and Registering Images: EzOrtho offers powerful features for making schedules and managing patient appointments. In addition, EzOrtho enables you to import images from EzDenti (K190087, K172364, K163533, K161117, K150747), Explorer, or a scanner and to easily calibrate image size or arrange multiple film/photo images.
- Analyzing and Tracing Images: The Landmark Voice Guide and the improved Landmark Input Interface support more accurate and easier tracing.
- Establishing Treatment Plans: The Morphing feature enables you to predict how an established treatment plan may affect a patient's face. In addition, the Compare feature enables you to establish a treatment plan by comparing photos taken before and after treatment.
- Assisting with Patient Consultation: EzOrtho provides features that facilitate understanding and communication between doctors and patients during consultation. For example, the Superimposition feature visually displays the changes resulting from treatment, and the Gallery feature plays a slide show of multiple patient images.
The EzOrtho device's acceptance criteria and the study proving it meets them are described below:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document focuses on establishing substantial equivalence to a predicate device (OrthoVision v2.1) rather than defining granular acceptance criteria for specific performance metrics of the new features. However, for the newly added "automatic cephalometric tracing feature," a validation test was performed to evaluate its accuracy.
Acceptance Criteria (Implied for Automatic Cephalometric Tracing):
Feature | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Automatic Cephalometric Tracing | The automatic cephalometric tracing feature should demonstrate acceptable accuracy. | The validation test concluded that the auto feature demonstrates accuracy. (Specific performance metrics like mean absolute error or agreement rates are not provided in this summary.) |
Overall Software Performance | The device passed all tests based on pre-determined Pass/Fail criteria for software verification/validation and measurement accuracy. | The device passed all of the tests. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not specify the exact sample size (number of images or cases) used for the "validation test to evaluate the accuracy of this auto feature."
- Data Provenance: Not specified in the document.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not specified in the document.
4. Adjudication Method for the Test Set
Not specified in the document.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not explicitly mentioned or described for the EzOrtho device. The focus of the performance data section is on the device's own accuracy and software validation.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance evaluation of the "automatic cephalometric tracing feature" was done. The document states: "We performed the validation test to evaluate the accuracy of this auto feature." This implies an assessment of the algorithm's performance on its own. It also states that "The users can still adjust the points manually when necessary," indicating that while standalone performance was evaluated, the device is ultimately intended for human-in-the-loop use.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
For the "automatic cephalometric tracing feature," the type of ground truth used is not explicitly stated. However, given the context of cephalometric analysis, it is highly probable that the ground truth would have been established by:
- Expert Tracings/Measurements: manual tracings and measurements performed by one or more qualified orthodontic experts. A sketch of how such a comparison is commonly scored appears below.
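The summary does not report how tracing accuracy was quantified. A common way to score automatic cephalometric landmarking against expert tracings is the mean radial error, often paired with a successful detection rate at a clinical threshold such as 2 mm. The minimal Python sketch below is illustrative only; the landmark names, coordinates, and the 2 mm threshold are assumptions, not details from the submission.

```python
# Illustrative only: a common scoring scheme for automatic cephalometric
# landmarking, NOT the validation protocol described in the 510(k) summary.
# Landmark names, coordinates, and the 2 mm threshold are assumed for this sketch.
import math

def radial_error_mm(auto_pt, expert_pt):
    """Euclidean distance (mm) between an auto-detected and an expert landmark."""
    return math.dist(auto_pt, expert_pt)

def score_tracing(auto, expert, threshold_mm=2.0):
    """Mean radial error and successful detection rate over shared landmarks."""
    errors = {name: radial_error_mm(auto[name], expert[name]) for name in expert}
    mre = sum(errors.values()) / len(errors)
    sdr = sum(e <= threshold_mm for e in errors.values()) / len(errors)
    return mre, sdr

# Hypothetical landmark coordinates in millimetres (image already calibrated).
expert = {"Sella": (45.2, 60.1), "Nasion": (92.7, 58.4), "A-point": (98.3, 96.0)}
auto   = {"Sella": (45.9, 60.6), "Nasion": (93.1, 59.2), "A-point": (97.2, 97.5)}

mre, sdr = score_tracing(auto, expert)
print(f"Mean radial error: {mre:.2f} mm, SDR @ 2 mm: {sdr:.0%}")
```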
8. The Sample Size for the Training Set
Not specified in the document. The document details a 510(k) submission for a software device, and while it mentions an "auto feature," it does not provide details about model training or the size of any training sets.
9. How the Ground Truth for the Training Set Was Established
Not specified in the document. As no information on a training set is provided, the method for establishing its ground truth is also not mentioned.
Digital Dental Intra Oral Sensor is intended to collect dental x-ray photons and convert them into electronic impulses that may be stored, viewed and manipulated for diagnostic use by dentists.
Digital Dental Intra Oral Sensor is a device which acquires digital intra-oral images. Direct digital systems acquire images with a sensor that is connected to a computer to produce an image almost instantaneously following exposure. The primary advantage of direct sensor systems is the speed with which images are acquired. For patient comfort, the ergonomic design is based on human intraoral anatomy.
- Excellent image quality based on advanced CMOS technology
- More comfortable sensor ergonomic shape for the human oral structure
- Lower dose exposure (compared to film)
- Enhanced durability
- Easy-to-use USB interface
The provided document is a 510(k) summary for the Rayence Co., Ltd. Digital Dental Intra Oral Sensor. This type of submission focuses on demonstrating substantial equivalence to a predicate device rather than providing detailed clinical study results designed to prove device performance against specific acceptance criteria.
Therefore, the document does not contain the information requested for acceptance criteria and a study proving the device meets those criteria in the typical sense of a clinical trial. Instead, it relies on demonstrating similar performance to a legally marketed predicate device through non-clinical testing.
Here's what can be extracted and what is missing based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
The document compares performance characteristics such as DQE, MTF, pixel pitch, and sensor dimensions against those of a predicate device. It does not explicitly state "acceptance criteria" but rather demonstrates "equivalence" to the predicate; a brief arithmetic sketch showing how such comparisons can be checked follows the table.
Characteristic | Acceptance Criteria (Implicit: Equivalent to Predicate) | Reported Device Performance (Proposed Device) | Predicate Device Performance |
---|---|---|---|
Sensor Dimension (mm) (±10%) | Equivalent to Predicate or justified differences | Size 1.0: 36.8 x 25.4; Size 1.5: 39.5 x 29.2; Size 2.0: 42.9 x 31.3 | Size 1.0: 37.6 x 25.4; Size 1.5: 39.5 x 29.2 |
Sensor Thickness (mm) | Equivalent to Predicate | 4.8 | 4.8 |
Active Area (mm) | Equivalent to Predicate or justified differences | Size 1.0: 30.01 x 20.01; Size 1.5: 33.00 x 23.98; Size 2.0: 35.99 x 25.99 | Size 1.0: 30.01 x 20.00; Size 1.5: 33.00 x 23.98 |
Pixel Pitch (µm) (Full Resolution) | Equivalent to Predicate | 14.8 | 14.8 |
Pixel Pitch (µm) (Binning mode) | Equivalent to Predicate | 29.6 | 29.6 |
DQE (6 lp/mm) (Full Resolution) | Equivalent to Predicate | 0.38 | 0.38 |
DQE (6 lp/mm) (Binning mode) | Equivalent to Predicate | 0.34 | 0.34 |
MTF (3 lp/mm) (Full Resolution) | Equivalent to Predicate | 0.642 | 0.642 |
MTF (3 lp/mm) (Binning mode) | Equivalent to Predicate | 0.630 | 0.630 |
Electrical, Mechanical, Environmental Safety | Compliance with IEC 60601-1: 2005 + CORR.1(2006) + CORR(2007) | Performed and Compliant | (Not explicitly stated for predicate in summary, but assumed compliant) |
EMC Testing | Compliance with IEC 60601-1-2:2007 | Performed and Compliant | (Not explicitly stated for predicate in summary, but assumed compliant) |
Dynamic Range | Same as predicate device | Same dynamic range | (Not explicitly stated with a value, but equivalence claimed) |
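The ±10% dimensional tolerance and the pixel-pitch figures in the table lend themselves to a quick arithmetic check. The Python sketch below uses only values reported above; the helper names and the pass/fail logic are illustrative assumptions, not the method used in the submission. It also shows why 2x2 binning doubles the effective pixel pitch (14.8 µm to 29.6 µm) and halves the Nyquist frequency.

```python
# Illustrative arithmetic check using values from the comparison table above.
# Helper names and pass/fail logic are assumptions, not the submission's method.

def within_tolerance(proposed_mm, predicate_mm, tol=0.10):
    """True if the proposed dimension is within ±tol of the predicate dimension."""
    return abs(proposed_mm - predicate_mm) / predicate_mm <= tol

# Size 1.0 sensor width: proposed 36.8 mm vs. predicate 37.6 mm
print(within_tolerance(36.8, 37.6))        # True (difference ~2.1%, well under 10%)

def nyquist_lp_per_mm(pixel_pitch_um):
    """Nyquist frequency in line pairs/mm for a given pixel pitch in micrometres."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

print(f"{nyquist_lp_per_mm(14.8):.1f} lp/mm")  # full resolution: ~33.8 lp/mm
print(f"{nyquist_lp_per_mm(29.6):.1f} lp/mm")  # 2x2 binning:     ~16.9 lp/mm
```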
2. Sample size used for the test set and the data provenance:
- The document explicitly states: "Clinical images were not necessary to establish substantial equivalence based on the modifications to the device. The laboratory performance data shows that the subject device operates similar to the predicate device." Therefore, there was no clinical test set in the traditional sense with human patient data.
- The comparison was done with laboratory performance data. The provenance of this laboratory data (e.g., specific lab, country) is not detailed. It's an internal test report.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable, as no clinical test set with human data and expert ground truth was used.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable, as no clinical test set with human data and associated adjudication was used.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC study was done, nor is this device described as an AI-assisted device. The device is a digital intra-oral sensor for image acquisition.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The device itself is a standalone hardware component (sensor). The "performance" being evaluated is its technical characteristics (DQE, MTF, etc.) demonstrating equivalence to a predicate, not an algorithmic diagnostic output. The document states that "The DQE, MTF and linear response to X-ray exposure test demonstrated that the subject sensor performed equivalently compared to the predicate device with the same dynamic range." This is a standalone device performance evaluation, but not in the context of an AI algorithm's diagnostic accuracy.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable, as the evaluation was based on non-clinical, laboratory performance measurements (e.g., DQE, MTF values, physical dimensions) compared to a predicate device's specifications. The ground truth for these measurements would be the reference standards and protocols used in the laboratory setting to measure these physical properties.
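For context, DQE and MTF are defined by laboratory measurement standards rather than by clinical ground truth. One common formulation, in the spirit of IEC 62220-1 (the summary does not state which measurement protocol was actually used), relates DQE at spatial frequency u to the presampled MTF, the measured noise power spectrum, and the incident photon fluence:

```latex
% One common formulation of detective quantum efficiency (after IEC 62220-1);
% the exact measurement protocol used in this submission is not stated.
\[
  \mathrm{DQE}(u) \;=\; \frac{\mathrm{MTF}^{2}(u)}{\bar{q}\,\mathrm{NNPS}(u)},
  \qquad
  \mathrm{NNPS}(u) \;=\; \frac{\mathrm{NPS}(u)}{\bar{d}^{\,2}}
\]
% \bar{q}: incident photon fluence (photons per mm^2)
% \mathrm{NPS}(u): noise power spectrum of the output image
% \bar{d}: mean linearized detector output signal
% \mathrm{MTF}(u): presampled modulation transfer function
```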
8. The sample size for the training set:
- Not applicable. This is a 510(k) submission for a hardware device (digital x-ray sensor) demonstrating substantial equivalence to a predicate through non-clinical bench testing. It does not involve machine learning or AI models with training sets.
9. How the ground truth for the training set was established:
- Not applicable, as there is no training set for an AI model mentioned.
EzSensor Soft, EzSensor Soft i, EzSensor Bio i Digital Dental Intra Oral Sensors are intended to collect dental x-ray photons and convert them into electronic impulses that may be stored, viewed and manipulated for diagnostic use by dentists.
EzSensor Soft, EzSensor Soft i, EzSensor Bio i are digital dental intraoral sensors which acquire digital intra oral images. Direct digital systems acquire images with a bendable sensor that is connected to a computer to produce an image almost instantaneously following exposure. The primary advantage of direct sensor systems is the image acquisition speed. For patient comfort, the ergonomic design is based on human intraoral anatomy.
The provided text describes a Special 510(k) submission for device modifications to the EzSensor Soft, EzSensor Soft i, EzSensor Bio, and EzSensor Bio i digital dental intraoral sensors. This filing primarily focuses on demonstrating substantial equivalence to a predicate device through performance testing and does not include a comparative effectiveness study with human readers (MRMC) or a standalone (algorithm-only) performance study. The device itself is a digital dental intraoral sensor, not an AI algorithm.
Here's a breakdown of the requested information based on the provided document:
1. A table of acceptance criteria and the reported device performance
The document doesn't explicitly state "acceptance criteria" in a pass/fail quantifiable manner for the overall device. Instead, it demonstrates performance equivalence to a predicate device against specific technical characteristics.
Characteristic | Acceptance Criteria (Implied: Equivalence to Predicate) | Reported Device Performance (Proposed Device) | Predicate Device Performance (K143753) |
---|---|---|---|
Indications for Use | Substantially equivalent to predicate. | EzSensor Soft, EzSensor Soft i, EzSensor Bio and EzSensor Bio i Digital Dental Intra Oral Sensors are intended to collect dental x-ray photons and convert them into electronic impulses that may be stored, viewed and manipulated for diagnostic use by dentists. | EzSensor Soft [Alternative name : EzSensor Bio] Digital Dental Intra Oral Sensor is intended to collect dental x-ray photons and convert them into electronic impulses that may be stored, viewed and manipulated for diagnostic use by dentists. |
Sensor Dimension (mm) (±10%) | Slight variation within acceptable tolerance. | Size "1.0": 37.8 x 26.6; Size "1.5": 40.8 x 30.6; Size "2.0": 44.0 x 32.5 | Size "1.0": 37.5 x 26.5; Size "2.0": 43.5 x 32.5 |
Sensor Thickness (mm) | Equivalent to predicate. | 5 | 5 |
Active Area (mm) | Slight variation within acceptable tolerance. | Size "1.0": 20.01 x 30.01; Size "1.5": 23.98 x 33.00; Size "2.0": 25.99 x 35.99 | Size "1.0": 20 x 30; Size "2.0": 25.99 x 35.99 |
USB Module | Equivalent to predicate. | Integrated USB 2.0 module | Integrated USB 2.0 module |
Pixel Pitch (µm) - Full Resolution | Equivalent to predicate. | 14.8 | 14.8 |
Pixel Pitch (µm) - Binning mode | Equivalent to predicate. | 29.6 | 29.6 |
DQE (6 lp/mm, 84.64 µGy) - Full Resolution | Equivalent to predicate. | 0.070 | 0.070 |
DQE (6 lp/mm, 84.64 µGy) - Binning mode | Equivalent to predicate. | 0.070 | 0.070 |
MTF (6 lp/mm, 84.64 µGy) - Full Resolution | Equivalent to predicate. | 0.154 | 0.154 |
MTF (6 lp/mm, 84.64 µGy) - Binning mode | Equivalent to predicate. | 0.133 | 0.133 |
Typical dose range (µGy) | N/A (Information provided for proposed device only) | Incisor & Canine: 300 ~ 500 / Molar: 400 ~ 600 | Not specified for predicate. |
Viewer Software | Equivalence in function and indications for use. | Easydent or EzDent-i (K150747) (Note: EzDent-i 2.0 has additional features, but maintains similar indications and functionalities as EzDent-i 1.0 (K131594) from predicate) | Easydent or EzDent-i (K131594) |
Safety and Effectiveness | No additional safety risk identified, substantially equivalent to predicate. | Performance test results indicate the subject detector performed equally to the predicate. No additional safety risk identified in bench test. | N/A (Predicate performance is the benchmark) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document describes "Performance Testing" and "bench test: Non-clinical report" according to FDA Guidance "Guidance for the Submissions of 510(k)'s for Solid State X-ray Imaging Devices." However, it does not specify a sample size for the test set or the data provenance (country of origin, retrospective/prospective). The testing appears to be laboratory-based ("bench test").
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided. The assessment relies on technical specifications (DQE, MTF, linear response to X-ray exposure) and safety testing, not on a ground truth established by medical experts for diagnostic accuracy in a clinical context. The device is a sensor, not a diagnostic algorithm that interprets images.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable and not provided. As mentioned above, the evaluation is based on technical specifications and safety testing, not on clinical image interpretation requiring adjudication.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No MRMC comparative effectiveness study was done. This is a 510(k) for a digital dental intraoral sensor, not an AI-powered diagnostic tool. The document states a "Summary of Performance Testing" based on technical specifications compared to a predicate device.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
No standalone algorithm performance study was done. The device itself is a sensor that collects data, which is then viewed and manipulated by dentists using viewer software. It is not an algorithm making standalone diagnostic assessments.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The concept of "ground truth" as typically applied to diagnostic algorithms (expert consensus, pathology, etc.) is not directly relevant or discussed in this submission. The device's performance is assessed through technical metrics (DQE, MTF, linear response to X-ray exposure) and compliance with electrical, mechanical, and environmental safety standards (IEC 60601-1, IEC 60601-1-2), comparing these metrics against those of a predicate device to establish substantial equivalence.
8. The sample size for the training set
Not applicable. This submission is for hardware (an X-ray sensor) and associated viewer software, not a machine learning or AI algorithm that would require a training set.
9. How the ground truth for the training set was established
Not applicable. As there is no training set for an AI algorithm, there is no ground truth established for one.