510(k) Data Aggregation
(152 days)
DURR DENTAL SE
VistaSoft 4.0 and VisionX 4.0 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 4.0 / VistaSoft 4.0 runs on user-provided, PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition.
The software must only be used by authorized healthcare professionals in dental areas for the following tasks:
- Filter optimization of the display of 2D and 3D images for improved diagnosis
- Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitised 2D and 3D images and videos
- Forwarding of images and additional data to external software (third-party software)
The software is not intended for mammography use.
VisionX 4.0 / VistaSoft 4.0 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 4.0 / VistaSoft 4.0 runs on user-provided, PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition. Additional information: The software is intended for the viewing and diagnosis of image data in relation to dental issues. Its proper use is documented in the operating instructions of the corresponding image-generating systems. Image-generating systems that can be used with the software include optical video cameras, digital X-ray cameras, phosphor storage plate scanners, extraoral X-ray devices, intraoral scanners and TWAIN-compatible image sources.
The software must only be used by authorized healthcare professionals in dental areas for the following tasks:
- Filter optimization of the display of 2D and 3D images for improved diagnosis
- Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitised 2D and 3D images and videos
- Forwarding of images and additional data to external software (third-party software)
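The first task listed, filter optimization of the display, generally corresponds to display transfer functions such as window/level (contrast/brightness) adjustment. The sketch below is a minimal hypothetical illustration of that idea, not the vendor's actual algorithm; the function name and parameters are made up. It maps raw 16-bit grey values to 8-bit display values:

```python
def window_level(pixels, window, level, out_max=255):
    """Map raw 16-bit grey values to display values with a window/level
    transfer function (hypothetical sketch, not the vendor's algorithm)."""
    lo = level - window / 2          # lower bound of the displayed window
    hi = level + window / 2          # upper bound of the displayed window
    out = []
    for p in pixels:
        if p <= lo:
            out.append(0)            # below the window: black
        elif p >= hi:
            out.append(out_max)      # above the window: white
        else:
            out.append(round((p - lo) / (hi - lo) * out_max))
    return out

# Emphasize mid-range grey values in a 16-bit radiograph
print(window_level([0, 20000, 32768, 45000, 65535], window=30000, level=32768))
# → [0, 19, 128, 231, 255]
```

Narrowing `window` increases displayed contrast within the chosen grey-value range, which is the usual point of such display filters.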
The provided document is a 510(k) clearance letter for VistaSoft 4.0 and VisionX 4.0. It does not contain any information regarding acceptance criteria or a study proving the device meets those criteria, specifically concerning AI performance or clinical efficacy.
The document primarily focuses on regulatory compliance, outlining:
- The device's classification and regulation.
- Its intended use and indications for use.
- Comparison with a predicate device (VisionX 3.0), highlighting new features (image filter operations, annotations, cloud interface, cybersecurity enhancements).
- Compliance with FDA recognized consensus standards and guidance documents for software development and cybersecurity (e.g., ISO 14971, IEC 62304, IEC 82304-1, IEC 81001-5-1, IEC 62366-1).
- Statement that "Software verification and validation were conducted."
However, there is no specific information presented that describes:
- Quantitative acceptance criteria for device performance (e.g., sensitivity, specificity, accuracy).
- Details of a clinical or analytical study to demonstrate meeting these criteria.
- Sample sizes for test sets or training sets.
- Data provenance.
- Number or qualifications of experts for ground truth establishment.
- Adjudication methods.
- MRMC study results or effect sizes.
- Standalone algorithm performance.
- Type of ground truth used (e.g., pathology, expert consensus).
- How ground truth was established for training data.
The FDA 510(k) clearance process for this type of device (Medical image management and processing system, Product Code QIH) often relies on demonstrating substantial equivalence to a predicate device based on similar technological characteristics and performance, rather than requiring extensive clinical studies with specific performance metrics like those for AI-driven diagnostic aids. The "new features" listed (filter optimization, acquisition/storage/etc., forwarding data) appear to be enhancements to image management and display, not necessarily new diagnostic algorithms that would typically necessitate rigorous performance studies with specific acceptance criteria.
Therefore, based solely on the provided text, I cannot complete the requested tables and details regarding acceptance criteria and study results, as this information is not present in the document.
If such information were available, it would typically be found in a separate section of the 510(k) submission, often within the "Non-Clinical and/or Clinical Tests Summary & Conclusions" section in more detail than what is provided here, or in a specific performance study report referenced by the submission. The current document only states that "Software verification and validation were conducted" and lists the standards used for software development and cybersecurity, but not the outcomes of performance testing against specific acceptance criteria.
(28 days)
Durr Dental SE
ProVecta 3D Prime and ProVecta 3D Prime Ceph are computed tomography x-ray units intended to generate 3D, panoramic and cephalometric (ProVecta 3D Prime Ceph model) X-ray images in dental radiography for adult and pediatric patients. They provide diagnostic details of the maxillofacial areas for dental treatment. The devices are operated and used by physicians, dentists, and x-ray technicians. Not intended for mammography use.
This device is a cone beam CT x-ray device for the acquisition of dental images. Similar to computed tomography or magnetic resonance tomography, sectional images can be generated with CBCT. With CBCT, an X-ray tube and an imaging sensor opposite it rotate around a seated or standing patient. The X-ray tube rotates through 180°-540° and emits a conical X-ray beam. The X-rays pass through the region under investigation and are measured for image generation by a detector as an attenuated grey-scale X-ray image. A large series of two-dimensional images is acquired during the revolution of the X-ray tube. Using a mathematical calculation on the rotating image series via a reconstruction computer, a grey-value coordinate image is generated in the three spatial dimensions. This three-dimensional coordinate model corresponds to a volume graphic made up of individual voxels. This volume can be used to generate sectional images (tomograms) in all spatial dimensions as well as 3D views. The system complies with the US Radiation Safety Performance Standard. The ProVecta 3D Prime model does not have the CEPH function.
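The voxel model described above can be illustrated in a few lines: a reconstructed volume is a 3D array of grey values, and tomograms in any spatial plane are simply array slices. A hypothetical sketch, with random data standing in for a real reconstruction:

```python
import numpy as np

# Hypothetical sketch: the reconstructed CBCT volume is a 3D array of voxel
# grey values; tomograms in any spatial plane are simply array slices.
# Random data stands in for a real reconstruction here.
rng = np.random.default_rng(0)
volume = rng.integers(0, 4096, size=(64, 64, 64))  # 12-bit grey values

axial    = volume[32, :, :]   # sectional image perpendicular to axis 0
coronal  = volume[:, 32, :]   # sectional image perpendicular to axis 1
sagittal = volume[:, :, 32]   # sectional image perpendicular to axis 2

print(axial.shape, coronal.shape, sagittal.shape)  # three 64x64 tomograms
```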
This premarket notification was submitted because of changes to the biological evaluation of medical devices documentation according to EN ISO 10993-1:2020. The revision of the documentation is the inclusion of the Comfort Bite Foam for the bite block, which has direct patient contact. The relevant documents regarding biological safety were included and evaluated in this biological evaluation. Furthermore, the biological evaluation has been updated to the latest version of the standard. The name, application and biocompatibility-relevant materials of the product have not changed since the last version. In addition to the current bite block (REF: 2210200100), two new, more comfortable variants were developed:
- standard bite block: an optimized version of the existing bite block
- comfort bite block: an extension of the existing bite block
The image management software was recently updated in K213326.
This document describes a 510(k) premarket notification for the "ProVecta 3D Prime and ProVecta 3D Prime Ceph" devices. The submission primarily addresses a change in the bite-block material and an updated biological evaluation, not a new or significantly altered imaging algorithm. Therefore, the information requested for acceptance criteria and a study proving device performance (especially related to AI or standalone algorithm performance) is not fully present in the provided text.
Based on the provided text, here's a breakdown of the available information:
1. A table of acceptance criteria and the reported device performance
The document focuses on demonstrating substantial equivalence to a predicate device (ProVecta 3D Prime Ceph, K193139) because of a change in bite-block material and an updated biological evaluation. There are no explicit "acceptance criteria" for imaging performance described in the text, nor are there reported device performance metrics in terms of clinical accuracy or diagnostic efficacy for the imaging capability itself.
The only "performance testing" mentioned relates to the new bite foam:
Acceptance Criteria (for New Bite Foam) | Reported Device Performance (for New Bite Foam) |
---|---|
Not cytotoxic (based on ISO 10993 standards) | Result: Not cytotoxic. |
2. Sample size used for the test set and the data provenance
For the new bite foam testing, specific sample sizes for cytotoxicity testing are not provided in the text. There's no information on a "test set" for imaging performance, as the submission does not involve an evaluation of a new imaging algorithm.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. The submission is not about clinical diagnostic performance or AI algorithm evaluation requiring expert-established ground truth.
4. Adjudication method for the test set
Not applicable. The submission is not about clinical diagnostic performance or AI algorithm evaluation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC study was done. This submission is for an X-ray unit, not an AI-assisted diagnostic device. The imaging software (VisionX 3.0) is referenced as an updated component, but the submission itself is not about the performance of the software.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
No standalone algorithm performance study was done. This submission is for an X-ray unit, not a standalone algorithm.
7. The type of ground truth used
For the biological evaluation, the ground truth was established by laboratory testing for cytotoxicity based on EN ISO 10993-1, -5, -12 standards. No clinical ground truth (pathology, outcomes data, or expert consensus) for diagnostic accuracy is mentioned as this is not the focus of this 510(k).
8. The sample size for the training set
Not applicable. There is no mention of a training set as this submission is not about an AI algorithm.
9. How the ground truth for the training set was established
Not applicable. There is no mention of a training set.
(25 days)
Durr Dental SE
The ScanX Swift 2.0 is intended to be used for scanning and processing digital images exposed on Phosphor Storage Plates (PSPs) in dental applications.
The ScanX Swift View 2.0 is intended to be used for scanning and processing digital images exposed on Phosphor Storage Plates (PSPs) in dental applications.
The ScanX Swift 2.0 and ScanX Swift View 2.0 are dental devices that scan photostimulable phosphor storage plates that have been exposed in place of dental X-ray film and allow the resulting images to be displayed on a personal computer monitor and stored for later recovery. They will be used by licensed clinicians and authorized technicians for this purpose. The device is an intraoral plate scanner designed to read out all cleared plates of sizes 0, 1, 2, 3, and 4. The phosphor plates are made of rigid photostimulable material. Intraoral phosphor plate X-ray (also known as phosphor storage plate or PSP X-ray) eliminates the need for traditional film processing in dental radiography. Phosphor storage plates can convert existing film-based imaging systems to a digital format that can be integrated into a computer or network system. The intraoral plates are put into the mouth of the patient, exposed to X-rays, and then read out with the device. The read-out process is carried out with a 639 nm laser. The laser beam is moved across the plate by an oscillating MEMS mirror. The laser beam stimulates the top coating of the plates, which consists of X-ray-sensitive material. Depending on the exposed dose, the coating emits different levels of light. This emitted light is captured by an optical sensor (photomultiplier tube, PMT) and converted into an electrical output signal, which is digitized to form the data for the digital X-ray image. The data is transmitted via an Ethernet link to a computer. Before the plate is discharged, the remaining data is erased by an LED PCB. The user chooses the required plate size and prepares the device by inserting the appropriate plate insert into the device. The user then exposes the plate and places it directly into the insert by pushing it out of the light protection envelope. The user closes the light protection cover and starts the read-out process.
After the read-out process, the image is transmitted to the connected PC, where it can be viewed; the imaging plate (IP) is then erased and ready for the next acquisition. The main difference between the two models is that the ScanX Swift View 2.0 has a larger display with touch capability that can show a preview of the scanned image. The device firmware is based on the predicate firmware and is of a moderate level of concern.
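The read-out chain described above (laser stimulation, PMT light capture, digitization) ends in an analog-to-digital conversion. Below is a hedged sketch of that last step only; the full-scale voltage and linear transfer curve are illustrative assumptions, not the device's actual specifications:

```python
def pmt_to_pixel(voltage, v_full_scale=1.0, bit_depth=16):
    """Quantize a PMT output voltage to a digital grey value, as an
    idealized linear ADC would. Parameters are illustrative assumptions,
    not the device's actual specifications."""
    levels = 2 ** bit_depth - 1
    v = min(max(voltage, 0.0), v_full_scale)  # clamp to the ADC input range
    return round(v / v_full_scale * levels)

# Brighter plate emission -> higher PMT voltage -> higher grey value
print([pmt_to_pixel(v) for v in (0.0, 0.25, 0.5, 1.0, 1.2)])
# → [0, 16384, 32768, 65535, 65535]
```

The 16-bit depth matches the "Image bit depth" row in the comparison table; everything else here is a stand-in.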
The provided text is a 510(k) summary for a medical device (ScanX Swift 2.0, ScanX Swift View 2.0), which focuses on demonstrating substantial equivalence to a predicate device. It does not contain information about acceptance criteria for a specific clinical endpoint or a study proving a device meets such criteria.
Instead, it discusses the technological characteristics, safety, and performance of the device in comparison to a predicate device based on non-clinical testing and engineering principles. The document explicitly states:
- "Summary of clinical performance testing: Not required to establish substantial equivalence."
Therefore, I cannot provide a table of acceptance criteria and reported device performance from a clinical study, nor details about sample sizes, ground truth establishment, or multi-reader multi-case studies, as this information is not present in the provided text.
However, I can extract information related to non-clinical performance testing and technical characteristics, which are used to establish substantial equivalence.
Here's an analysis based on the available information:
Key Takeaways from the Document:
- Device Type: Phosphor Storage Plate (PSP) scanner for dental X-ray images.
- Purpose: Scan exposed PSPs, process digital images, and display/store them.
- Approval Basis: Substantial equivalence to a predicate device (ScanX Edge K202633).
- No Clinical Study: Clinical performance testing was explicitly stated as "Not required to establish substantial equivalence." This means the FDA cleared the device based on non-clinical data and comparison to a legally marketed predicate.
Information Related to Device Performance and Equivalence (Non-Clinical):
The document compares the subject devices (ScanX Swift 2.0, ScanX Swift View 2.0) to the predicate device (ScanX Edge) based on various technical specifications and non-clinical performance metrics.
1. Table of "Acceptance Criteria" (Technical Specification Comparison) and Reported Device Performance (as listed for the subject devices):
Since no acceptance criteria are explicitly stated as pass/fail for a clinical endpoint, I will present the comparative technical specifications as the basis for demonstrating equivalence and "performance" in this context. The "acceptance criteria" here are effectively the predicate device's performance, and the subject device's performance is compared against it for substantial equivalence.
Characteristic | Predicate Device (ScanX Edge) "Acceptance Criteria" (for equivalence) | Subject Devices (ScanX Swift 2.0, ScanX Swift View 2.0) Reported Performance | Comparison / Impact Analysis |
---|---|---|---|
Max. theoretical resolution | Approx. 40 Lp/mm | Approx. 40 Lp/mm | SAME |
MTF (at 3 LP/mm) | More than 40% | Horizontal 59%, Vertical 49% (in 12.5µm pixel size mode) | Similar/better. (Subject device performance is higher than the predicate's stated 'more than 40%') |
DQE (at 3 LP/mm) | More than 3.4% | Horizontal 8.5%, Vertical 10.5% (in 12.5µm pixel size mode with 99µGy) | Similar/better. (Subject device performance is significantly higher than the predicate's stated 'more than 3.4%') |
Image bit depth | 16 bits | 16 bits | Identical |
Operating Principle | Laser / Photomultiplier Tube (PMT) Components: Photomultiplier 2" Diode, Laser 639nm/10mW Fiber coupled laser diode | Laser / Photomultiplier Tube (PMT) Components: Photomultiplier 2" Diode, Laser 639nm/10mW Fiber coupled laser diode | Identical. Note: While the components are identical, a new "Flying-Spot configuration (PCS technology)" is used, which was cleared in a predecessor device (K170733), suggesting equivalence in efficacy despite a change in the exact scanning mechanism. |
Supported Plate Sizes | Size 0 (22x35mm), 1 (24x40mm), 2 (31x41mm) | Size 0, 1, 2, 3 (27x54mm), 4 (57x76mm) | Similar. The predicate device uses smaller phosphor plates: Size 0, 1 and 2. The subject devices support these and add Size 3 and 4, which were available on a previous model (K170733), implying this expansion is also a previously cleared technology. This is presented as an enhancement rather than a deviation that would impact safety/effectiveness negatively. |
Data Transfer | Ethernet link | Ethernet link (all models); WLAN interface or removable storage (XPS07.2A1 only) | Similar. The View model offers additional flexibility. Risks associated with WLAN would be addressed by standards compliance (e.g., IEC 60601-1-2 and "Radio Frequency Wireless Technology in Medical Devices" guidance). |
Image Generation | Image assembled by imaging software (e.g., VisionX) | Image assembled within the image plate scanner (using same algorithm as K192743) | New. The raw image data is the same, and the algorithm used is the same as already cleared for the VisionX imaging software (K192743), suggesting this change does not impact safety or effectiveness. |
2. Sample size used for the test set and the data provenance:
- Not explicitly stated for performance testing. The document refers to non-clinical performance testing (MTF, DQE, noise power spectrum) in accordance with IEC 62220-1:2003, which would involve imaging phantoms or test objects.
- Data Provenance: Not specified, but given the manufacturer is German (DURR DENTAL SE), the non-clinical testing likely occurred in a controlled lab environment, presumably in Germany or where their R&D facilities are located. These tests are inherently "prospective" in the sense that they are conducted to characterize the specific device.
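As background on the metrics named above: under IEC 62220-1-style testing, the MTF is the normalized magnitude of the Fourier transform of a measured line-spread function (LSF). The sketch below uses a synthetic Gaussian LSF purely for illustration; none of the numbers are the device's measured data:

```python
import numpy as np

# Sketch only: MTF = normalized magnitude of the Fourier transform of the
# line-spread function (LSF). The Gaussian LSF below is synthetic.
pixel_mm = 0.0125                        # 12.5 µm pixel pitch, as in the table
x = (np.arange(256) - 128) * pixel_mm    # sample positions in mm
lsf = np.exp(-0.5 * (x / 0.05) ** 2)     # synthetic LSF, sigma = 50 µm

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                  # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # spatial frequency in lp/mm

i = int(np.argmin(np.abs(freqs - 3.0)))        # nearest bin to 3 lp/mm
print(f"MTF near 3 lp/mm: {mtf[i]:.2f}")
```

A real measurement would derive the LSF from a slanted-edge or slit image under controlled exposure, per the standard.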
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts for clinical studies:
- Not applicable. The submission states no clinical performance testing was required. For non-clinical tests like MTF/DQE, ground truth is established by the design of the test phantom and the known physical properties being measured.
4. Adjudication method for the test set (for clinical studies):
- Not applicable. No clinical studies were performed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No. The document explicitly states: "Summary of clinical performance testing: Not required to establish substantial equivalence." Therefore, no MRMC study was performed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The device itself is a standalone imaging acquisition and processing system. The performance metrics (MTF, DQE) are inherently standalone measurements of the device's image quality output.
- The ScanX Swift View 2.0 model has a "Stand-Alone-Mode" where it can operate without a connection to a computer, generate image data, and store it on a USB stick. This is a functional feature, not a separate performance study.
7. The type of ground truth used:
- For the non-clinical image quality performance metrics (MTF, DQE, noise power spectrum), the ground truth is established by physical standards and phantoms as defined by the IEC 62220-1:2003 standard. These involve known patterns and controlled radiation exposures to objectively measure the imaging system's capabilities.
8. The sample size for the training set:
- Not applicable. As this is a 510(k) for a hardware device (PSP scanner) and not an AI/ML algorithm requiring a training set in the typical sense, there is no mention of a "training set" for image processing algorithms. The image processing algorithms used within the device are stated to be "the same as cleared in K192743 for the imaging software VisionX," implying they are established and validated algorithms, not newly trained ones for this specific device.
9. How the ground truth for the training set was established:
- Not applicable, as there is no training set discussed for this device submission.
(28 days)
Durr Dental SE
The software is intended for the viewing and diagnosis of image data in relation to dental issues. Its proper use is documented in the operating instructions of the corresponding image-generating systems. Image-generating systems that can be used with the software include optical video cameras, image plate scanners, extraoral X-ray devices, intraoral scanners and TWAIN-compatible image sources.
The software must only be used by authorized healthcare professionals in dental areas for the following tasks:
- Filter optimisation of the display of 2D and 3D images for improved diagnosis
- Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitised 2D and 3D images and videos
- Forwarding of images and additional data to external software (third-party software)
The software is not intended for mammography use.
VisionX 3.0 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 3.0 software runs on user-provided, PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition.
The VisionX 3.0 device includes new AI-powered functions such as automatic nerve canal tracing, automatic image rotation, and improved panoramic curve detection. The 510(k) summary provided does not contain a specific study demonstrating the device meets acceptance criteria for these AI functions. Instead, it states that "Full functional software cross check testing was performed." and that "The verification testing demonstrates that the device continues to meet its performance specifications and the results of the testing did not raise new issues of safety or effectiveness." This implies that the internal performance specifications were met, but details on these specifications and the testing methodology are not provided in the summary.
Here's the information that can be extracted from the provided text, along with details that are explicitly stated as not available or not applicable based on the given document:
1. Table of Acceptance Criteria and Reported Device Performance
Feature/Function | Acceptance Criteria (Implied/General) | Reported Device Performance (as per 510(k) summary) |
---|---|---|
General software functionality and effectiveness | No specific quantitative acceptance criteria are provided in the document. Implied: Device meets its performance specifications. | "Full functional software cross check testing was performed." "The verification testing demonstrates that the device continues to meet its performance specifications." |
AI Functions (Automatic nerve canal calculation, Automatic image rotation, In-line automatic image plate quality checks, Improved panoramic curve detection) | No specific quantitative acceptance criteria (e.g., accuracy, sensitivity, specificity) for these AI functions are provided in the document. Implied: Functions operate as intended. | The new functions were added and underwent "Full functional software cross check testing." The modifications did not raise new issues of safety or effectiveness. |
Cybersecurity | Compliance with FDA guidance for "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices." | "Cybersecurity was addressed according to the FDA guidance document." |
DICOM compliance | Compliance with DICOM standards. | "VisionX 3.0 is DICOM compliant." |
Software life cycle requirements | Compliance with IEC 62304 standard. | "VisionX 3.0 was developed in compliance with the harmonized standard of IEC 62304." |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not specified in the provided document.
- Data Provenance: Not specified in the provided document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not specified or implied in the provided document.
- Qualifications of Experts: Not specified or implied in the provided document.
4. Adjudication method for the test set
- Adjudication Method: Not specified or implied in the provided document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No, an MRMC comparative effectiveness study is not mentioned in the provided document.
- Effect Size: Not applicable, as no MRMC study was mentioned.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- The document implies that the AI features (e.g., automatic nerve canal tracing) operate in a standalone capacity within the software, as they are listed as "new functions." However, no specific standalone performance metrics (e.g., accuracy, sensitivity, specificity of the algorithm alone) are provided in the summary. The "full functional software cross check testing" suggests validation of the software's operation, but without specific performance data for the AI components in isolation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not specified in the provided document.
8. The sample size for the training set
- Sample Size for Training Set: Not specified in the provided document.
9. How the ground truth for the training set was established
- Ground Truth Establishment for Training Set: Not specified in the provided document.
(154 days)
Durr Dental SE
The intraoral sensor is intended to convert x-ray photons into electronic impulses that may be stored, viewed and manipulated for diagnostic use by dentists.
The subject device SensorX is an intraoral x-ray sensor for dental applications. It detects the x-rays, performs the image acquisition, digitizes the image, and makes it available to the PC. The x-ray sensor is connected to the computer via the sensor cable and, if required, the USB extension. The x-ray sensor is equipped with protective cover sheaths (previously 510(k) cleared) and placed in the mouth of the patient. For patient comfort, the ergonomic design is based on human intraoral anatomy. SensorX enables high resolution with a minimum radiation dose. It is connected to a computer to produce an image almost instantaneously following exposure. The primary advantage of direct sensor systems such as SensorX is the speed with which images are acquired. SensorX is activated via the imaging software VisionX (K192743) or DBSWIN (K203287).
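Raw frames from a direct digital sensor are typically calibrated before display; the comparison table below lists "Basic Image Correction (Gain/offset/pixel Calibration)" among the software features. The following is a minimal hypothetical sketch of a standard flat-field (gain/offset) correction; the frame values are made up, and this is not the vendor's actual calibration routine:

```python
import numpy as np

# Hypothetical sketch of gain/offset (flat-field) correction, the standard
# raw-frame calibration for direct digital sensors. Frame values are made up.
def flat_field(raw, dark, flat):
    """corrected = (raw - dark) / (flat - dark), rescaled to the mean gain."""
    gain = flat - dark
    safe_gain = np.where(gain == 0, 1.0, gain)   # avoid division by zero
    return (raw - dark) / safe_gain * float(np.mean(gain))

dark = np.full((4, 4), 100.0)    # offset frame, acquired with no exposure
flat = np.full((4, 4), 1100.0)   # frame under uniform X-ray exposure
raw  = np.full((4, 4), 600.0)    # frame to be corrected
print(flat_field(raw, dark, flat)[0, 0])  # 500.0: halfway up the flat signal
```

Dividing by the per-pixel gain evens out sensitivity variations across the sensor; a real pipeline would also interpolate over mapped defective pixels.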
The provided document is a 510(k) Premarket Notification from DÜRR DENTAL SE for their device, SensorX. It primarily focuses on demonstrating substantial equivalence to a predicate device (DEXIS Titanium / KaVo IXS HD) rather than providing detailed acceptance criteria and a study proving the device meets those criteria in a traditional sense (e.g., a clinical trial with statistical endpoints).
Therefore, some of the requested information, particularly regarding specific performance metrics against pre-defined acceptance criteria, multi-reader multi-case studies, and detailed ground truth methodologies for a test set, is not explicitly present in this document. The document primarily relies on non-clinical data (i.e., technical specifications and compliance with standards) and a general statement about clinical images.
However, I can extract the available information as requested:
1. Table of Acceptance Criteria and Reported Device Performance
As specific, quantifiable acceptance criteria with corresponding performance results akin to a clinical trial are not presented in this 510(k) summary, I will infer the "acceptance criteria" from the technological characteristics compared to the predicate device, as substantial equivalence is the goal. The reported "device performance" will be the SensorX's specifications.
Characteristic (Inferred Acceptance Criteria based on Predicate) | SensorX Reported Device Performance | Comments |
---|---|---|
Device Name | SensorX | New device name. |
Type of X-ray Detection Technology | CMOS | Matches predicate. |
Pixel Size (μm) | 19 | Very close to predicate (19.5 μm). |
Dynamic Range | 4,096:1 | Matches predicate. |
X-ray Resolution | 20+ visible lp/mm | Matches predicate ("20+ visible lp/mm"). |
Scintillator Technology | Cesium Iodide (CsI) Scintillator | Matches predicate. |
Software Features | USB 2.0 Communication, Noise Filtering, Binning, Basic Image Correction (Gain/offset/pixel Calibration), Monitoring Sensor Health/State, Image Transmission | Matches predicate. |
PC Interface | USB Type A Plug | Matches predicate. |
Input Electrical Power | 5.0 V / 0.5 W via USB | Matches predicate. |
Communication Standard | USB 2.0 | Matches predicate. |
Motion Sensing Compatibility | Yes | Matches predicate. |
Safety and EMC Standards Compliance | IEC 60601-1, IEC 60601-1-2, IEC 60601-1-6, IEC 60601-2-65, IEC 62304, ISO 14971, EN ISO 10993-5 | Demonstrated compliance with the same or similar standards as would be expected for a device of this type and the predicate. |
Purity of Signal/Image Quality | "excellent resolution and contrast" (from provided dental images) | Qualitative statement, no specific metric or acceptance criteria provided. |
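The pixel-size and resolution rows above can be cross-checked with simple arithmetic: the maximum spatial frequency a detector can sample is bounded by the Nyquist limit of its pixel pitch.

```python
# Sanity check (illustrative arithmetic only): the sampling limit implied by
# the 19 µm pixel pitch is the Nyquist frequency, 1 / (2 * pitch).
pixel_pitch_mm = 0.019
nyquist_lp_mm = 1 / (2 * pixel_pitch_mm)
print(round(nyquist_lp_mm, 1))  # ≈ 26.3 lp/mm, consistent with "20+ visible lp/mm"
```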
2. Sample Size Used for the Test Set and the Data Provenance
The document does not explicitly mention a "test set" in the context of an algorithm's performance with a specified sample size. Instead, it refers to "actual dental images were provided which showed excellent resolution and contrast" as part of the non-clinical data. The provenance of these images (e.g., country of origin, retrospective or prospective) is not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This information is not provided in the document. The document states a "clinical evaluation was performed" and "actual dental images were provided," but it does not detail how ground truth was established for these images, nor the number or qualifications of any experts involved.
4. Adjudication Method for the Test Set
This information is not provided in the document.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention any MRMC comparative effectiveness study. The SensorX is an intraoral x-ray sensor, not an AI-assisted diagnostic tool.
6. If a Standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The SensorX device is a hardware component (an intraoral x-ray sensor) that captures images, not a standalone algorithm. Its performance is evaluated based on its technical specifications and image quality, not as an AI algorithm.
7. The Type of Ground Truth Used
The document states that "actual dental images were provided which showed excellent resolution and contrast." This implies an expert assessment of image quality, but the specific type of ground truth (e.g., expert consensus on specific pathologies, pathology reports, or patient outcomes data) is not detailed. Given the device's function as an imaging sensor, the ground truth would likely relate to image quality parameters such as resolution, contrast, and diagnostic interpretability, as assessed by dental experts.
8. The Sample Size for the Training Set
The document does not describe the use of a "training set" in the context of an AI algorithm, as the SensorX is a hardware device.
9. How the Ground Truth for the Training Set was Established
This is not applicable, as the document does not describe a training set for an AI algorithm.
(113 days)
Durr Dental SE
ProVecta 3D Prime Ceph is a computed tomography x-ray unit intended to generate 3D, panoramic and cephalometric X-ray images in dental radiography for adult and pediatric patients. It provides diagnostic details of the maxillofacial areas for a dental treatment. The device is operated and used by physicians, dentists, and x-ray technicians.
Not intended for mammography use.
This device is a cone beam CT x-ray device for the acquisition of dental images. Similar to computed tomography or magnetic resonance tomography, sectional images can be generated with CBCT. With CBCT, an X-ray tube and an imaging sensor opposite it rotate around a seated or standing patient. The X-ray tube rotates through 180°-540° and emits a conical X-ray beam. The X-rays pass through the region under investigation and are measured for image generation by a detector as an attenuated grey scale X-ray image. Here, a large series of two-dimensional individual images is acquired during the revolution of the X-ray tube. Using a mathematical calculation on the rotating image series via a reconstruction computer, a grey value coordinate image is generated in the three spatial dimensions. This three-dimensional coordinate model corresponds to a volume graphic that is made up of individual voxels. This volume can be used to generate sectional images (tomograms) in all spatial dimensions as well as 3D views. The system complies with the US Radiation Safety Performance Standard. This device is similar to our reference device, K181432, but we have now added cephalometric capability, making it entirely equivalent to our predicate device for indications. An option would allow the customer to purchase this new device without the CEPH function, if desired.
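The reconstruction step described above (accumulating many 2D attenuation images into a voxel volume) can be illustrated with a deliberately simplified sketch. Real CBCT systems use cone-beam geometry and a filtered reconstruction (e.g., the FDK algorithm); the toy 2D example below, with only two viewing angles and no filtering, is an assumption-laden illustration of the back-projection principle, not the device's actual algorithm.

```python
import numpy as np

def project(image: np.ndarray, axis: int) -> np.ndarray:
    """Line integrals of the image along one axis (one simplified 'X-ray view')."""
    return image.sum(axis=axis)

def back_project(projections: dict, shape: tuple) -> np.ndarray:
    """Smear each 1D projection back across the grid and accumulate."""
    recon = np.zeros(shape)
    for axis, p in projections.items():
        if axis == 0:
            # Projection collapsed rows, so smear it back along the rows.
            recon += np.broadcast_to(p, shape)
        else:
            # Projection collapsed columns, so smear it back along the columns.
            recon += np.broadcast_to(p[:, None], shape)
    return recon

# Toy phantom: a small dense object ("tooth") in an empty field.
phantom = np.zeros((8, 8))
phantom[3:5, 3:5] = 1.0

views = {0: project(phantom, 0), 1: project(phantom, 1)}
recon = back_project(views, phantom.shape)

# The accumulated back-projection is brightest where the object is.
assert recon[3, 3] == recon.max()
```

Even this unfiltered two-view version shows why the accumulated grey values peak at the object's true location; the device's reconstruction computer does the same accumulation over hundreds of filtered cone-beam views.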
Here's a breakdown of the acceptance criteria and study information for the ProVecta 3D Prime Ceph based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided FDA 510(k) summary does not include specific acceptance criteria in numerical or quantifiable terms (e.g., minimum sensitivity/specificity, specific image quality scores). Instead, it relies on demonstrating compliance with recognized standards and comparing its performance to a predicate device.
The "reported device performance" primarily comes from conformity to these standards and the implicit performance derived from its technological characteristics being similar to or the same as the predicate.
Acceptance Criteria (Implied by Standards & Equivalence) | Reported Device Performance |
---|---|
Safety: | Complies with IEC 60601-1 (General requirements for basic safety and essential performance), IEC 60601-1-2 (Electromagnetic Compatibility), IEC 60601-1-3 (Radiation Protection in Diagnostic X-Ray Equipment), IEC 60825-1 (Safety of laser products). |
Essential Performance: | Verified through compliance with IEC 60601-1, IEC 60601-2-63 (Particular requirements for the basic safety and essential performance of dental extra-oral X-ray equipment). |
Image Quality (Dental Radiography): | Acceptance testing was performed for both panoramic and cephalometric modes according to DIN 6868-151 (Image quality assurance in diagnostic X-ray departments - Acceptance testing of dental radiographic equipment) and DIN 6868-161 (Image Quality Assurance In Diagnostic X-Ray Departments - Acceptance Testing Of Dental Radiographic Equipment For Digital Cone-Beam Computed Tomography). Line pair and contrast were evaluated using a phantom designed for this purpose. The device's technological characteristics (kV, mA, focal spot, detector) are similar to the predicate. |
Usability: | Complies with IEC 60601-1-6 (Usability) and IEC 62366 (Application of usability engineering to medical devices). |
Software Life-cycle Processes: | Complies with IEC 62304 (Medical Device Software Life-cycle processes). Firmware evaluated according to the FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices; risk management documented. |
Biocompatibility: | Chin Holder (material PBT) complies with EN ISO 10993-5 (Cytotoxicity). Other accessories were previously cleared. |
Substantial Equivalence: | The device is deemed substantially equivalent to the predicate (K152106) and reference device (K181432) regarding technology, performance, and indications for use. Key performance differences with the reference device (K181432, which lacked CEPH) are resolved by the addition of cephalometric capability, making it entirely equivalent to the predicate. |
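DIN 6868-151/-161 acceptance testing, referenced in the table above, evaluates line-pair resolution and contrast on a dedicated phantom. The summary gives no numeric thresholds, so the following sketch only illustrates the general shape of such a check: measure a low-contrast detail ROI against its background and compare a contrast figure to a pass threshold. The pixel values, ROI sizes, and the 0.05 threshold are invented for illustration and are not taken from the standard or the submission.

```python
import numpy as np

def michelson_contrast(detail: np.ndarray, background: np.ndarray) -> float:
    """Michelson contrast between a detail ROI and the surrounding background."""
    d, b = detail.mean(), background.mean()
    return abs(d - b) / (d + b)

rng = np.random.default_rng(0)
# Synthetic phantom ROIs: background grey level ~100, low-contrast insert ~120,
# both with mild detector noise (all values illustrative).
background = rng.normal(100.0, 2.0, size=(32, 32))
detail = rng.normal(120.0, 2.0, size=(8, 8))

c = michelson_contrast(detail, background)
assert c > 0.05  # illustrative pass threshold, not from DIN 6868
```

A real acceptance test would additionally score line-pair resolution and repeat the measurement under the standard's prescribed exposure conditions.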
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a sample size for a "test set" in the context of patient data or clinical images. The testing described focuses on non-clinical performance and engineering standards (e.g., electrical safety, image quality with phantoms).
- Test Set Sample Size: Not applicable/not provided for patient data.
- Data Provenance: Not applicable, as no patient data test set is described. The non-clinical testing appears to have been conducted by the manufacturer, presumably in Germany (country of origin for DÜRR DENTAL SE).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This information is not provided as the evaluation relies on non-clinical phantom-based testing and compliance with recognized standards, rather than expert-derived ground truth from clinical images.
4. Adjudication Method for the Test Set
This information is not provided as the evaluation relies on non-clinical phantom-based testing and compliance with recognized standards.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done, and thus, no effect size of human reader improvement with AI assistance is reported. This device is an X-ray imaging system, not an AI-powered diagnostic aid for interpretation.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study Was Done
No, a standalone algorithm performance study was not done. The device is a medical imaging hardware system.
7. The Type of Ground Truth Used
The "ground truth" for this device's evaluation is primarily established through:
- Phantoms: For image quality assessment (line pair, contrast).
- Engineering Standards: Electrical safety, radiation protection, EMC, usability, software life-cycle, and biocompatibility standards provide the "ground truth" for compliance.
- Predicate Device Performance: The primary "ground truth" for substantial equivalence is demonstrating that the ProVecta 3D Prime Ceph performs as safely and effectively as the legally marketed predicate device (Vatech Co. Ltd. PaX-i3D Smart, K152106).
8. The Sample Size for the Training Set
This information is not applicable/not provided. The device is an X-ray imaging system, not an AI/machine learning device that requires a training set of data.
9. How the Ground Truth for the Training Set Was Established
This information is not applicable/not provided for the same reason as point 8.
(31 days)
Durr Dental SE
The software is intended for the viewing and diagnosis of image data in relation to dental issues. Its proper use is documented in the operating instructions of the corresponding image-generating systems. Image-generating systems that can be used with the software include optical video cameras, image plate scanners, extraoral X-ray devices, intraoral scanners and TWAIN compatible image sources. The software must only be used by authorized healthcare professionals in dental areas for the following tasks:
- Filter optimization of the display of 2D and 3D images for improved diagnosis
- Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitized 2D and 3D images and videos
- Forwarding of images and additional data to external software (third-party software)
The software is not intended for mammography use.
VisionX 2.4 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 2.4 software runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition. This software was cleared in K181432 as part of the X-ray system ProVecta 3D Prime. With this submission, VisionX is established as standalone software. Additionally, new hardware was integrated: support for the ScanX Touch / Duo Touch (K191623).
This document is a 510(k) premarket notification for the VisionX 2.4 imaging software. It primarily focuses on demonstrating substantial equivalence to a predicate device (DBSWIN and VistaEasy Imaging Software, K190629), rather than presenting a detailed study with specific acceptance criteria and performance metrics for an AI/algorithm-driven diagnostic aid.
Here's an analysis based on the provided text, highlighting what is available and what is not:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly define acceptance criteria as a table with numerical thresholds for performance metrics (e.g., accuracy, sensitivity, specificity) for algorithm performance. Instead, it relies on the concept of "substantial equivalence" to a predicate device that has established safety and effectiveness.
The "device performance" reported is at a high level, stating:
- "Software testing, effectiveness, and functionality were successfully conducted and verified between VisionX 2.4 and image capture devices."
- "Full functional software cross check testing was performed."
- "The verification testing demonstrates that the device continues to meet its performance specifications and the results of the testing did not raise new issues of safety or effectiveness."
2. Sample Size Used for the Test Set and Data Provenance:
This information is not provided in the document. The submission is for an imaging software that manages and displays images, and while it mentions "supporting diagnosis," it does not seem to include a specific AI/algorithm for automated diagnosis where a test set with performance metrics would typically be required.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:
This information is not provided. Given the nature of the submission (imaging software for viewing and management, rather than a novel AI diagnostic algorithm), such detailed ground truth establishment is not typically a requirement for this type of 510(k). The document mentions a "Clinical Evaluation" which included "detailed review of literature data, data from practical tests in dental practices, and safety data," to conclude suitability for dental use, but this is distinct from establishing ground truth for an AI algorithm's performance.
4. Adjudication Method for the Test Set:
This information is not provided.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not reported. The submission focuses on the functionality and safety of the imaging software itself and its equivalence to other legally marketed imaging software, not on an AI's impact on human reader performance.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Given that VisionX 2.4 is described as "imaging software" for "viewing and diagnosis of image data" and includes "Filter optimization of the display of 2D and 3D images for improved diagnosis" and "supporting diagnosis," it functions as a tool for clinicians. The text explicitly states that "The software must only be used by authorized healthcare professionals in dental areas for the following tasks." This indicates a human-in-the-loop scenario. The document does not describe a standalone algorithm performance study in the way typically expected for an AI diagnostic tool.
7. The Type of Ground Truth Used:
This information is not explicitly stated as traditionally understood for AI performance. The nearest concept mentioned is that the "Clinical Evaluation" concluded that the software is suitable for dental use, based on "literature data, data from practical tests in dental practices, and safety data." This points more towards usability and safety in a clinical context rather than a specific ground truth for an automated diagnostic task.
8. The Sample Size for the Training Set:
This information is not provided. As this is an imaging and management software, not a deep learning AI model requiring a distinct training set for diagnostic capabilities, such data is not expected or presented.
9. How the Ground Truth for the Training Set was Established:
This information is not provided, as it's not a submission for a deep learning AI model with a training set requiring ground truth establishment in the typical sense.
Summary of what is present and relevant to the request:
The submission focuses on the functionality and software development process of VisionX 2.4, an imaging management and display software for dental use, seeking substantial equivalence to a predicate device (K190629). It highlights:
- Compliance with IEC 62304 and FDA guidance for software in medical devices.
- Successful software testing for effectiveness and functionality.
- DICOM compliance.
- Risk analysis, design reviews, and full functional cross-check testing.
- A "Clinical Evaluation" assessing suitability for dental use based on literature and practical tests.
The document does not provide specific quantitative acceptance criteria or detailed studies on the performance of a novel AI/algorithm in terms of diagnostic accuracy, sensitivity, or specificity against established ground truth, or its impact on human reader performance. This is consistent with the device being primarily an image management and viewing system with features that "support diagnosis" through display optimization, rather than a standalone AI diagnostic tool.
(20 days)
Durr Dental SE
DBSWIN and VistaEasy imaging software are intended for use by qualified dental professionals for windows based diagnostics. The software is a diagnostic aide for licensed radiologists, dentists and clinicians, who perform the actual diagnosis based on their training, qualification, and clinical experience. DBSWIN and VistaEasy are clinical software applications that receive images and data from various imaging sources (i.e., radiography devices and digital video capture devices) that are manufactured and distributed by DÜRR Dental and Air Techniques. It is intended to acquire, display, edit (i.e., resize, adjust contrast, etc.) and distribute images using standard PC hardware. In addition, DBSWIN enables the acquisition of still images from 3rd party TWAIN compliant imaging devices (e.g., generic image devices such as scanners) and the storage and printing of clinical exam data, while VistaEasy distributes the acquired images to 3rd party TWAIN compliant PACS systems for storage and printing.
DBSWIN and VistaEasy software are not intended for mammography use.
DBSWIN and VistaEasy imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. DBSWIN and VistaEasy software runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition. VistaEasy is included as part of DBSWIN. It provides additional interfaces for third-party software. VistaEasy can also be used by itself, as a reduced-feature version of DBSWIN.
The provided document is a 510(k) summary for DÜRR DENTAL SE's DBSWIN and VISTAEASY Imaging Software (K190629). This submission focuses on establishing substantial equivalence to a predicate device (K161444) rather than presenting a study to prove performance against specific acceptance criteria for a novel device. Therefore, much of the requested information about device performance and study details is not explicitly available in this document.
However, based on the information provided, here's an attempt to answer the questions:
1. A table of acceptance criteria and the reported device performance
The document does not specify quantitative acceptance criteria for device performance. Instead, it asserts substantial equivalence to a predicate device by comparing technological characteristics and functionalities. The performance is implied to be equivalent to the predicate.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Identical Indications for Use | Confirmed "SAME, unchanged" |
Identical functionality (Patient Management, Image Management, Display, Enhance, etc.) | Confirmed "YES" for all listed functionalities compared to predicate. |
Supported Devices | Similar to predicate, with additional integrated devices (ScanX Swift View, VistaScan Nano, ScanX Classic View, CamX Triton HD Proxi, CamX Triton HD Spectra). |
Compatible Computer Operating Systems | Updated list of supported Microsoft Windows and Server OS, removing old and adding new versions. |
Minimum CPU Requirements | Similar to predicate (≥ Intel Pentium IV compatible, 1.4 GHz). |
Minimum RAM Requirements | Similar to predicate (≥ 1GB, 2GB recommended). |
Hard Disk Requirements | Similar to predicate (Workstation (without database) ≥50 GB; memory requirements of database depend on image count). |
DICOM Compliance | Confirmed "DBSWIN is DICOM compliant." |
Compliance with medical device software life cycle requirements (IEC 62304) | Confirmed "DBSWIN/VistaEasy was developed in compliance with the harmonized standard of IEC 62304." |
No new issues of safety or effectiveness | Verification testing demonstrated the device continues to meet performance specifications and no new issues were raised. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify a separate "test set" or its sample size. The performance claims are based on "Bench testing, effectiveness, and functionality" and "Full functional software cross check testing." No information on data provenance (country of origin, retrospective/prospective) is provided, as no clinical study with patient data is mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. The document does not describe a study involving expert-established ground truth for a test set, as it emphasizes technological equivalence and software verification/validation rather than clinical performance evaluation against a gold standard.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. No expert-adjudicated test set is described.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance
Not applicable. This document is for imaging software that aids in acquiring, displaying, editing, and distributing images, rather than an AI diagnostic tool. No MRMC study is mentioned.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Not applicable. The device is referred to as "clinical software applications" and a "diagnostic aide for licensed radiologists, dentists and clinicians," indicating a human-in-the-loop context. It processes and presents images, it does not perform standalone diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not applicable. As no clinical study validating diagnostic accuracy is presented, there is no mention of a ground truth type. The focus is on software functionality and technical specifications.
8. The sample size for the training set
Not applicable. This device is not described as an AI/machine learning model where a training set size would be relevant. It's a general imaging software.
9. How the ground truth for the training set was established
Not applicable, for the same reason as above.
(62 days)
Durr Dental SE
ProVecta 3D Prime:
ProVecta 3D Prime is a computed tomography x-ray unit intended to generate 3D and panoramic X-ray images in dental radiography for adult and pediatric patients. It provides diagnostic details of the maxillofacial areas for a dental treatment. The device is operated and used by physicians, dentists, and x-ray technicians. Not intended for mammography use.
VistaSoft:
The VistaSoft software functions for recording, displaying, analyzing, diagnosing, managing and sending digital or digitized video and X-ray images in dental practices and specialist dental clinics. Not intended for mammography use.
This device is a cone beam CT x-ray device for the acquisition of dental images. Similar to computed tomography or magnetic resonance tomography, sectional images can be generated with CBCT. With CBCT, an X-ray tube and an imaging sensor opposite it rotate around a seated or standing patient. The X-ray tube rotates through 180°-540° and emits a conical X-ray beam. The X-rays pass through the region under investigation and are measured for image generation by a detector as an attenuated grey scale X-ray image. Here, a large series of two-dimensional individual images is acquired during the revolution of the X-ray tube. Using a mathematical calculation on the rotating image series via a reconstruction computer, a grey value coordinate image is generated in the three spatial dimensions. This three-dimensional coordinate model corresponds to a volume graphic that is made up of individual voxels. This volume can be used to generate sectional images (tomograms) in all spatial dimensions as well as 3D views. The system complies with the US Radiation Safety Performance Standard.
The provided text is a 510(k) Premarket Notification summary for the ProVecta 3D Prime with VistaSoft device, a dental computed tomography x-ray system. The document focuses on demonstrating substantial equivalence to a predicate device, rather than proving the device meets specific acceptance criteria through a clinical study. Therefore, much of the requested information regarding acceptance criteria and performance study details is not explicitly available in this document.
However, based on the information provided, here's what can be inferred and explicitly stated:
Overall Conclusion from the Document:
The device's performance is not evaluated against formal "acceptance criteria" in the sense of a new feature or algorithm being validated. Instead, the focus is on demonstrating that the device's technical characteristics and image quality are equivalent or non-inferior to the predicate device, thereby not raising new questions of safety or effectiveness. The statement "The overall impression is sufficient for dental diagnostics. Good contrast and good resolution. Teeth, osseous structures, sinus maxillaries are clearly shown" acts as a qualitative assessment of diagnostic quality, though not tied to specific acceptance criteria.
1. A table of acceptance criteria and the reported device performance
Since this is a 510(k) submission primarily focused on demonstrating substantial equivalence, formal quantifiable "acceptance criteria" for a new AI algorithm's performance are not explicitly defined or measured in a distinct table in the provided text. The performance assessment is comparative to the predicate device's specifications for imaging parameters.
Acceptance Criteria (Implied/Comparative) | Reported Device Performance (Comparative to Predicate) |
---|---|
Image Quality for Dental Diagnostics | "The overall impression is sufficient for dental diagnostics. Good contrast and good resolution. Teeth, osseous structures, sinus maxillaries are clearly shown." |
Technical Specifications (Equivalence to Predicate) | |
Tube Voltage | 50-99 kV (Same as predicate) |
Tube Current | 4-16 mA (Same as predicate) |
Focal Spot Size | 0.5 mm (Same as predicate) |
Exposure Time | Max. 16.4s (Predicate: Max. 18s) |
Slice Width | 0.1 mm min. (Same as predicate) |
Total Filtration | 2.8 mm Al (Same as predicate) |
Image Receptor (Xmaru1404CF) | MTF @ 2.5 lp/mm >8% (Same as predicate) |
Noise, RMS of Dark Current | ADU |
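The detector row above quotes "MTF @ 2.5 lp/mm > 8%". As background, a figure like this is conventionally obtained by Fourier-transforming the detector's measured line spread function (LSF) and normalizing at zero frequency. The sketch below uses a synthetic Gaussian LSF and an assumed 0.1 mm pixel pitch purely to show the computation; none of these numbers come from the Xmaru1404CF data sheet.

```python
import numpy as np

# Synthetic line spread function: Gaussian with sigma = 0.08 mm,
# sampled at an assumed 0.1 mm detector pixel pitch.
pixel_pitch_mm = 0.1
x = (np.arange(256) - 128) * pixel_pitch_mm
lsf = np.exp(-x**2 / (2 * 0.08**2))

# MTF = |FFT(LSF)|, normalized so that MTF(0) = 1.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # cycles/mm = lp/mm

# Read off the MTF at the quoted spatial frequency of 2.5 lp/mm.
mtf_at_2p5 = float(np.interp(2.5, freqs, mtf))
assert mtf_at_2p5 > 0.08  # the quoted spec threshold; this synthetic LSF clears it
```

In a real acceptance measurement the LSF would be derived from an edge or slit image of the detector rather than assumed, and the result compared against the manufacturer's published threshold.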