Search Results

Found 6 results

510(k) Data Aggregation

    K Number
    K213562
    Manufacturer
    Date Cleared
    2022-03-25

    (136 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K123519

    Intended Use

    DTX Studio Clinic is a software program for the acquisition, management, transfer and analysis of dental and craniomaxillofacial image information, and can be used to provide design input for dental restorative solutions. It displays and enhances digital images from various sources to support the diagnostic process and treatment planning. It stores and provides these images within the system or across computer systems at different locations.

    Device Description

    DTX Studio Clinic is a software interface for dental/medical practitioners used to analyze 2D and 3D imaging data, in a timely fashion, for the treatment of dental, craniomaxillofacial and related conditions. DTX Studio Clinic displays and processes imaging data from different devices (e.g. intraoral X-rays, (CB)CT scanners, intraoral scanners, intraoral and extraoral cameras).

    AI/ML Overview

    This document is a 510(k) Premarket Notification for the DTX Studio Clinic 3.0. It primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed technical study report with specific acceptance criteria and performance metrics for novel functionalities.

    Therefore, the requested information regarding detailed acceptance criteria, specific performance data (e.g., accuracy metrics), sample sizes for test sets, data provenance, expert qualifications, and ground truth establishment for the automatic annotation of mandibular canals is not explicitly detailed in the provided text.

    The document states that "Automatic annotation of the mandibular canals" is a new feature in DTX Studio Clinic 3.0, and it is compared to the reference device InVivoDental (K123519) which has "Creation and visualization of the nerve manually or by using the Automatic Nerve feature." However, it does not provide the specific study details for validating this new feature within DTX Studio Clinic 3.0. It only broadly states that "Software verification and validation testing was conducted on the subject device."

    Based on the provided text, I cannot fulfill most of the requested information directly because it is not present. The document's purpose is to establish substantial equivalence based on the overall device function and safety, not to detail the rigorous validation of a specific AI/ML component with numerical acceptance criteria.

    However, I can extract the available information and highlight what is missing.


    Acceptance Criteria and Study for DTX Studio Clinic 3.0's Automatic Mandibular Canal Annotation (Information extracted from the document):

    Given the provided text, the specific, quantitative acceptance criteria and detailed study proving the device meets these criteria for the automatic annotation of the mandibular canal are not explicitly described. The document focuses on a broader claim of substantial equivalence and general software validation.

    1. Table of Acceptance Criteria and Reported Device Performance:

    Feature/Metric: Automatic annotation of mandibular canals
    Acceptance Criteria: Not explicitly stated in quantitative terms. Implied acceptance is that the functionality is "similar as in the reference device InVivoDental (K123519)" and the user can "manually indicate or adjust the mandibular canal."
    Reported Device Performance: No specific performance metrics (e.g., accuracy, precision, recall, Dice coefficient) are provided. The text states: "The software automatically segments the mandibular canal based on the identification of the mandibular foramen and the mental foramen. This functionality is similar as in the reference device InVivoDental (K123519). The user can also manually indicate or adjust the mandibular canal."
    Source/Methodology (if available in text): Comparison to reference device and user adjustability. Software verification and validation testing was conducted, but details are not provided.
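
    To make the metrics named above concrete: overlap scores such as the Dice coefficient, precision, and recall are the usual way an automatic canal segmentation would be scored against an expert annotation. The following is a minimal illustrative sketch, not from the submission; it assumes the automatic and expert canal segmentations are available as binary NumPy voxel masks.

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Dice, precision, and recall for binary segmentation masks.

    pred, truth: boolean arrays of the same shape (e.g. voxel masks of a
    mandibular canal), where True marks canal voxels.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return {"dice": float(dice), "precision": float(precision), "recall": float(recall)}

# Two small synthetic masks standing in for automatic and expert segmentations.
auto = np.zeros((4, 4, 4), dtype=bool); auto[1:3, 1:3, 1:3] = True
expert = np.zeros((4, 4, 4), dtype=bool); expert[1:3, 1:3, :3] = True
print(overlap_metrics(auto, expert))
```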

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not specified for the automatic mandibular canal annotation feature. The document states "Software verification and validation testing was conducted on the subject device," but provides no numbers.
    • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Adjudication Method: Not specified.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improved with AI assistance versus without:

    • MRMC Study: Not mentioned or detailed. The document primarily makes a substantial equivalence claim based on the device's overall functionality and features, not a comparative effectiveness study involving human readers.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    • Standalone Performance: Not explicitly detailed. The document describes the automatic segmentation functionality and mentions that the user can manually adjust, implying a human-in-the-loop scenario. No standalone performance metrics are provided.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Type of Ground Truth: Not specified for the automatic mandibular canal annotation. Given the context of a dental/maxillofacial imaging device, it would likely involve expert annotations on CBCT scans, but this is not confirmed in the text.

    8. The sample size for the training set:

    • Training Set Sample Size: Not specified. This document is a 510(k) submission, which focuses on validation, not the development or training process.

    9. How the ground truth for the training set was established:

    • Ground Truth Establishment for Training Set: Not specified.

    Summary of what can be inferred/not inferred from the document regarding the mandibular canal annotation:

    • New Feature: Automatic annotation of mandibular canals is a new feature in DTX Studio Clinic 3.0 that was not present in the primary predicate (DTX Studio Clinic 2.0).
    • Comparison to Reference Device: This new feature's "functionality is similar as in the reference device InVivoDental (K123519)", which has "Creation and visualization of the nerve manually or by using the Automatic Nerve feature."
    • Human Oversight: The user has the ability to "manually indicate or adjust the mandibular canal," suggesting that the automatic annotation is an aid to the diagnostic process, not a definitive, unreviewable output. This is typical for AI/ML features in medical imaging devices that are intended to support, not replace, clinical judgment.
    • Validation Claim: The submission states that "Software verification and validation testing was conducted on the subject device and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices'." This implies that the validation was performed in accordance with regulatory guidelines, but the specific details of that validation for this particular feature are not disclosed in this public summary.

    K Number
    K163122
    Manufacturer
    Date Cleared
    2017-01-31

    (84 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K152078, K130724, K110300, K112251, K123519

    Intended Use

    NobelClinician® (DTX Studio Implant) is a software interface for the transfer and visualization of 2D and 3D image information from equipment such as a CT scanner for the purposes of supporting the diagnostic process, treatment planning and follow-up in the dental and cranio-maxillofacial regions.

    NobelClinician® (DTX Studio Implant) can be used to support guided implant surgery and to provide design input for and review of dental restorative solutions. The results can be exported to be manufactured.

    Device Description

    NobelClinician® is a software interface used to support the image-based diagnostic process and treatment planning of dental, cranio-maxillofacial, and related treatments. The product will also be marketed as DTX Studio implant.

    The software offers a visualization technique for (CB)CT images of the patient for the diagnostic and treatment planning process. In addition, 2D image data such as photographic images and X-ray images or surface scans of the intra-oral situation may be visualized to bring diagnostic image data together. Prosthetic information can be added and visualized to support prosthetic implant planning. The surgical plan, including the implant positions and the prosthetic information, can be exported for the design of dental restorations in NobelDesign® (DTX Studio design).

    Surgical planning may be previewed using the software and the related surgical template may be ordered.

    AI/ML Overview

    This document is a 510(k) premarket notification summary for the NobelClinician® (DTX Studio Implant) software. The information provided heavily focuses on regulatory comparisons to a predicate device rather than detailed study protocols and results for meeting specific acceptance criteria in a typical clinical performance study.

    Based on the provided text, the device is a "Picture archiving and communications system" (PACS) software, classified under 21 CFR 892.2050. The performance data presented are primarily non-clinical.

    Here's an attempt to answer your questions based only on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly present a table of acceptance criteria and reported device performance in the manner typically found in a clinical study report. Instead, it states that "Software verification and validation per EN IEC 62304:2006" was performed. This implies that the acceptance criteria would be related to the successful completion of these verification and validation activities, demonstrating that the software meets its specified requirements and is fit for its intended use, as defined by the standard.

    Since specific performance metrics (e.g., accuracy, precision) for diagnostic or treatment planning tasks are not quantified or presented in this regulatory summary, a table showing such criteria and performance cannot be constructed from this document.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not provide details on the sample size used for a test set or the provenance of any data beyond indicating "non-clinical studies". This suggests that a traditional clinical test set with patient data for evaluating performance metrics was not the focus of the "performance data" section in this 510(k) summary. The listed activity "Software verification and validation per EN IEC 62304:2006" typically involves testing against synthetic data, simulated scenarios, or existing clinical datasets used for functional and performance testing of software, rather than a prospective clinical study with a defined "test set" in the sense of patient cases.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    This information is not provided in the document. Given that the performance data mentioned refers to "Software verification and validation per EN IEC 62304:2006," it is unlikely that a formal ground truth establishment by a panel of clinical experts for a test set (as in a diagnostic accuracy study) was conducted or reported in this summary.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    This information is not provided in the document.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improved with AI assistance versus without

    The document does not mention any multi-reader multi-case (MRMC) comparative effectiveness study. The "performance data" section is limited to "Non-Clinical Studies - Software verification and validation per EN IEC 62304:2006".

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    The document does not explicitly state whether a standalone algorithm-only performance study was conducted. The device is described as "a software interface for the transfer and visualization of 2D and 3D image information from equipment... for the purposes of supporting the diagnostic process, treatment planning and follow-up...". This indicates a human-in-the-loop context for its intended use. The verification and validation activities would assess the software's functional correctness for these tasks.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Since the document only refers to "Software verification and validation per EN IEC 62304:2006" as performance data, the specific type of "ground truth" used for these non-clinical tests is not detailed. For software testing, ground truth could involve:

    • Pre-defined expected outputs for given inputs.
    • Comparisons to known good results from previous versions or manually calculated values.
    • Adherence to specifications and design requirements.

    Formal clinical ground truth (like pathology or outcomes data) is not mentioned. A minimal illustration of the first approach is sketched below.
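
    Purely as an illustration of "pre-defined expected outputs for given inputs," a verification-style check of a hypothetical distance-measurement function might look like the following sketch; the function, test cases, and tolerance are all assumptions, not details from the submission.

```python
import math

# Hypothetical function under test: 3D distance between two landmark points (mm).
def measure_distance(p, q):
    return math.dist(p, q)

# Pre-defined inputs and expected outputs act as the "ground truth" for
# verification-style software testing; values and tolerance are illustrative only.
CASES = [
    (((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)), 5.0),
    (((1.0, 1.0, 1.0), (1.0, 1.0, 1.0)), 0.0),
]
TOLERANCE_MM = 0.01

def run_verification():
    for (p, q), expected in CASES:
        result = measure_distance(p, q)
        assert abs(result - expected) <= TOLERANCE_MM, (
            f"distance({p}, {q}) = {result}, expected {expected} ± {TOLERANCE_MM}"
        )
    print(f"All {len(CASES)} verification cases passed.")

if __name__ == "__main__":
    run_verification()
```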

    8. The sample size for the training set

    This document does not indicate that the device involves AI/machine learning requiring a "training set" in the conventional sense. The "performance data" refers to software verification and validation, which focuses on the deterministic functionality of the software according to its specifications, not statistical learning from data.

    9. How the ground truth for the training set was established

    As there is no mention of an AI/machine learning component requiring a training set, this information is not applicable and not provided in the document.

    In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence to a predicate device based primarily on non-clinical software verification and validation activities. It does not contain detailed information about a clinical performance study with specific acceptance criteria, test sets, expert ground truth establishment, or comparative effectiveness studies of the type you've inquired about.


    K Number
    K161270
    Device Name
    CephSimulation
    Manufacturer
    Date Cleared
    2017-01-04

    (244 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K123519

    Intended Use

    CephSimulation is a software application intended for storing and visualization of patient images and assisting in case diagnosis and surgical treatment planning. Results of the software are to be interpreted by trained and licensed dental and medical practitioners. It is intended for use by dentists, medical surgeons, and other qualified individuals using a standard PC. This device is not indicated for mammography use.

    Device Description

    CephSimulation is an interactive imaging software device. It is used for the visualization of patient image files from scanning devices such as CT scanners, and for assisting in case diagnosis and review, treatment planning and simulation for orthodontic and craniofacial applications. Doctors, dental clinicians, medical surgeons and other qualified individuals can render, review and process the images, perform measurement, analysis and surgery simulation. The software runs on standard PC hardware and visualizes imaging data on a standard computer screen. CephSimulation is designed as a plug-in component for InVivoDental software. It is seamlessly integrated into InVivoDental for extended capabilities. The key functionality includes image visualization, cephalometric tracing and measurements and 3D surgery simulation.

    AI/ML Overview

    The CephSimulation device is a software application intended for storing and visualizing patient images, assisting in case diagnosis, and surgical treatment planning.

    1. A table of acceptance criteria and the reported device performance:

    The document does not explicitly present a table of acceptance criteria with specific quantitative thresholds. Instead, it describes a general approach to performance validation. However, based on the narrative, the implicit acceptance criterion is that the software is "as effective as its predicate in its ability to perform essential functions."

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Software is stable and operates as designed. | Confirmed by performance testing, usability testing, and final acceptance testing. |
    | Software has been evaluated for hazards and risk is reduced to acceptable levels. | Confirmed by risk analysis and traceability analysis. |
    | CephSimulation is as effective as its predicate in its ability to perform essential functions. | Confirmed by bench testing against predicate software, evaluated by an expert radiologist. |

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: The document does not specify the exact sample size (number of cases or images) used for bench testing. It only mentions "evaluation of major function outputs from CephSimulation and predicate software."
    • Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: "an expert" (singular) was used.
    • Qualifications of Experts: The expert was "in the field of radiology." No further details on years of experience or sub-specialization are provided.

    4. Adjudication method for the test set:

    The document does not describe a formal adjudication method (like 2+1 or 3+1). It states that the bench testing result "was evaluated by an expert in the field of radiology," implying a single-expert assessment rather than a consensus-based adjudication.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improved with AI assistance versus without:

    No, a multi-reader multi-case (MRMC) comparative effectiveness study, evaluating human reader improvement with vs. without AI assistance, was not conducted or reported in the provided document. The testing focused on the standalone performance of the CephSimulation software compared to a predicate.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    Yes, a standalone performance assessment was done. The "bench testing of the software with predicate software" compared the outputs of CephSimulation directly against those of the predicate software, essentially evaluating the algorithm's performance without the user interface being the primary focus of comparison. The expert reviewer evaluated "major function outputs from CephSimulation and predicate software."

    7. The type of ground truth used:

    The ground truth for the bench testing was the "major function outputs" of the predicate software. This implies that the predicate's performance or outputs were considered the reference truth for comparison, rather than an independent expert consensus, pathology, or outcomes data.
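
    To illustrate what comparing "major function outputs" against a predicate can look like in practice, here is a hypothetical sketch, not the submission's protocol; the measurement names, values, and tolerance are assumptions. Outputs from the subject and predicate software for the same cases are compared pairwise, and differences beyond a tolerance are flagged for the expert reviewer.

```python
# Hypothetical bench comparison: subject-device outputs vs. predicate outputs
# for the same cases. Measurement names, values, and tolerance are illustrative.
subject_outputs = {
    "case_01": {"SNA_deg": 82.1, "SNB_deg": 79.8, "overjet_mm": 3.4},
    "case_02": {"SNA_deg": 84.6, "SNB_deg": 81.0, "overjet_mm": 2.1},
}
predicate_outputs = {
    "case_01": {"SNA_deg": 82.0, "SNB_deg": 79.9, "overjet_mm": 3.5},
    "case_02": {"SNA_deg": 84.5, "SNB_deg": 81.2, "overjet_mm": 2.0},
}
TOLERANCE = 0.5  # maximum allowed absolute difference per measurement

def compare(subject, predicate, tol):
    """Flag any measurement whose subject/predicate difference exceeds tol."""
    discrepancies = []
    for case, measures in subject.items():
        for name, value in measures.items():
            diff = abs(value - predicate[case][name])
            if diff > tol:
                discrepancies.append((case, name, diff))
    return discrepancies

issues = compare(subject_outputs, predicate_outputs, TOLERANCE)
print("within tolerance" if not issues else f"review needed: {issues}")
```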

    8. The sample size for the training set:

    The document does not provide any information regarding the sample size used for the training set, as it describes validation testing (benchmarking) rather than development or training details.

    9. How the ground truth for the training set was established:

    Since no information on a training set is provided, there is no mention of how its ground truth was established.


    K Number
    K160666
    Device Name
    Dentiq3D
    Date Cleared
    2016-10-05

    (210 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K123519

    Intended Use

    Dentiq3D is dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

    Dentiq3D is intended for use as software to load, view and save DICOM images from CT, panorama, cephalometric and intraoral imaging equipment and to provide 3D visualization, 2D analysis, and various MPR (Multi-Planar Reconstruction) functions.

    Device Description

    Dentiq3D is a dental image software platform for the 3D visualization and analysis of volume data. Dentiq3D is optimized for analyzing volume data from CT scans and enables users to examine volume data through 3D visualization, 2D analysis, and various MPR functions to manipulate CT images. The functions include canal tracing, implant simulation, volume segmentation and airway measurement.

    The following are the major functions of Dentiq3D.

    • Visualizes CT volume data
    • Supports various types of data
    • Measures 3D object
    • Analyzes and filters volume data
    • Publishes various forms of report
    • 3D visualization by using GPU
    • Loads and saves project files
    • Restores (Undo) or repeats (Redo) tasks based on operation history
    • Supports a user-friendly interface
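
    The MPR functions mentioned in the description above amount to re-slicing a 3D CT volume along orthogonal planes. Below is a minimal sketch of that idea, assuming the volume has already been loaded from DICOM into a NumPy array indexed as (axial, coronal, sagittal); this is illustrative only, not Dentiq3D's implementation.

```python
import numpy as np

def mpr_slices(volume: np.ndarray, i: int, j: int, k: int):
    """Return the three orthogonal MPR slices through voxel (i, j, k).

    volume is indexed as (axial, coronal, sagittal), i.e. (z, y, x).
    """
    axial = volume[i, :, :]     # top-down slice
    coronal = volume[:, j, :]   # front-back slice
    sagittal = volume[:, :, k]  # left-right slice
    return axial, coronal, sagittal

# Synthetic stand-in for a CBCT volume (real data would come from DICOM files).
volume = np.random.default_rng(0).integers(-1000, 3000, size=(128, 128, 128), dtype=np.int16)
a, c, s = mpr_slices(volume, 64, 64, 64)
print(a.shape, c.shape, s.shape)  # (128, 128) each
```
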
    AI/ML Overview

    The provided text describes a 510(k) submission for the Dentiq3D dental imaging software. However, it does not contain specific acceptance criteria or a study detailing the device's performance against such criteria.

    The document states:
    "Verification, validation and testing activities were conducted to establish the performance, functionality and reliability characteristics of the subject device. The device passed all of the tests based on pre-determined Pass/Fail criteria." (Page 5, Section 12. Performance Data)

    This indicates that internal testing was performed, but the details of these tests, including acceptance criteria, reported performance, sample sizes, ground truth establishment, or expert involvement, are not provided in this 510(k) summary.

    Therefore, I cannot fulfill the request to provide the following information from the given text:

    • A table of acceptance criteria and the reported device performance
    • Sample size used for the test set and the data provenance
    • Number of experts used to establish the ground truth for the test set and the qualifications of those experts
    • Adjudication method for the test set
    • If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improved with AI assistance versus without
    • Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
    • The type of ground truth used
    • The sample size for the training set
    • How the ground truth for the training set was established

    The document focuses on establishing substantial equivalence to predicate devices (Ez3D Plus and InVivo Dental) based on intended use, technical characteristics, and functionalities, rather than presenting detailed performance study results.


    K Number
    K161246
    Device Name
    Ez3D-i / E3
    Date Cleared
    2016-05-31

    (28 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K150761, K123519

    Intended Use

    Ez3D-i is dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

    Ez3D-i is intended for use as software to load, view and save DICOM images from CT, panorama, cephalometric and intraoral imaging equipment and to provide 3D visualization, 2D analysis, and various MPR (Multi-Planar Reconstruction) functions.

    Device Description

    Ez3D-i is 3D viewing software for dental CT images in DICOM format with a host of useful functions including MPR, 2-dimensional analysis and 3-dimensional image reformation. It provides advanced simulation functions such as Implant Simulation, Drawing Canal, and Implant Environ Bone Density for the benefit of effective doctor-patient communication and precise treatment planning.

    Ez3D-i is a useful tool for diagnosis and analysis by processing a 3D image with a simple and convenient user interface. Ez3D-i's main functions are:

    • Image adaptation through various rendering methods such as Teeth/Bone/Soft tissue/MIP
    • Versatile 3D image viewing via MPR Rotating, Curve mode
    • "Sculpt" for deleting unnecessary parts to view only the region of interest
    • Implant Simulation for efficient treatment planning and effective patient consultation
    • Canal Draw to trace the alveolar canal and its geometrical orientation relative to teeth
    • "Bone Density" test to measure bone density around the site of an implant(s) (a minimal intensity-based sketch follows this list)
    • Various utilities such as Measurement, Annotation, Gallery, and Report
    • 3D Volume function to transform the image into a 3D Panorama; the Tab has been optimized for Implant Simulation
    • Provides the Axial View of the TMJ, the Condyle/Fossa images in 3D and the Section images, and supports functions to separate the Condyle/Fossa and display the bone density
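
    As a rough, hypothetical illustration of what an intensity-based bone density check involves (this is not Ez3D-i's actual algorithm; the spherical ROI, radius, and synthetic values are assumptions), one can average voxel values in a region around a planned implant site:

```python
import numpy as np

def roi_density(volume: np.ndarray, center: tuple, radius_vox: int):
    """Mean and standard deviation of voxel values in a spherical ROI.

    volume: 3D array of CT voxel values (e.g. Hounsfield-like units).
    center: (z, y, x) voxel coordinates of the planned implant site.
    radius_vox: ROI radius in voxels.
    """
    zz, yy, xx = np.ogrid[: volume.shape[0], : volume.shape[1], : volume.shape[2]]
    cz, cy, cx = center
    mask = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_vox ** 2
    values = volume[mask]
    return float(values.mean()), float(values.std())

# Synthetic volume standing in for CBCT data around an implant site.
vol = np.random.default_rng(1).normal(600, 150, size=(64, 64, 64))
mean_hu, std_hu = roi_density(vol, center=(32, 32, 32), radius_vox=5)
print(f"ROI density: {mean_hu:.0f} ± {std_hu:.0f}")
```
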
    AI/ML Overview

    The provided text is a 510(k) summary for the Ez3D-i / E3 dental imaging software. It describes the device, its intended use, and its substantial equivalence to predicate devices. However, it does not contain the detailed performance data, acceptance criteria, or study specifics requested in your prompt.

    The document states:

    • "Verification, validation and testing activities were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria." (Page 5)

    This statement indicates that performance testing was performed and passed, but it does not provide any specific acceptance criteria, reported device performance metrics, sample sizes, data provenance, ground truth establishment, or details about comparative effectiveness studies.

    Therefore, I cannot fulfill your request for the specific acceptance criteria and study details based purely on the provided text. The document is a regulatory submission summary, not a detailed technical report of the validation study.


    K Number
    K140713
    Device Name
    PLANMECA ROMEXIS
    Manufacturer
    Date Cleared
    2014-06-16

    (87 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K123519, K061035

    Intended Use

    Planmeca Romexis is a medical imaging software, and is intended for use in dental and medical care as a tool for displaying and visualizing dental and medical 2D and 3D image files from imaging devices, such as projection radiography and CBCT. It is intended to retrieve, process, render, diagnose, review, store, print, and distribute images.

    It is also a preoperative software application used for the simulation and evaluation of dental implants. The software includes monitoring features for Planmeca devices for maintenance purposes. It is designed to work as a stand-alone or as an accessory to Planmeca imaging and Planmeca dental unit products in a standard PC. The software is for use by authorized healthcare professionals.

    Planmeca Romexis software is:

    • Not intended for capturing optical impressions for dental restorations.
    • Not intended for optically scanning stone models and impressions for dental restorations.
    • Not intended for optically scanning intraoral preparations for use in designing implants and/or abutments.
    • Not intended for optically scanning intra-orally for use in orthodontics.
    • Not intended for mammography use.
    Device Description

    Planmeca Romexis is a modular imaging software for dental and medical use. It is divided into modules to provide user access to different workflow steps involving different diagnostic views of images. A patient management screen with search capabilities lets users find patients and identify the correct patient file before starting work with a patient. After creating or selecting a patient, new images can be acquired using select Planmeca X-ray units.

    Planmeca Romexis is capable of processing and displaying 2D images in different formats and 3D CBCT images in DICOM format. 3D CBCT images can be viewed in near real-time multi-planar reconstruction (MPR) views. 2D and 3D image browsers are provided to allow user access to relevant images. Typical image enhancement filters and tools are available to assist the user in making a diagnosis, but the original exposure is always kept in the database for reference.

    AI/ML Overview

    The submitter, Planmeca Oy, describes the Planmeca Romexis, a modular imaging software for dental and medical use.

    Here's the breakdown of acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Stability and Operating as Designed | "Testing confirmed that Planmeca Romexis is stable and operating as designed." |
    | Hazard Evaluation and Risk Reduction | "Testing also confirmed that Planmeca Romexis has been evaluated for hazards and that the risk has been reduced to acceptable levels." |
    | Equivalence in Essential Functions (Bench Testing) | "This confirms that both software applications are equally effective in performing the essential functions and provide substantially equivalent clinical data." (Comparison to predicate InVivoDental) |

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a sample size for a test set in the traditional sense of a clinical study with patient data. The "non-clinical test results" section describes bench testing and quality assurance measures.

    • Test Set Nature: The testing performed was a "bench-testing to compare with predicate software" and involved assessing the software's stability, functionality, and hazard reduction. It did not involve a test set of patient images for diagnostic performance evaluation or a clinical trial.
    • Data Provenance: Not applicable as it was bench testing of software, not analysis of patient data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    Not applicable. The testing described is non-clinical bench testing and quality assurance, not a study involving expert readers establishing ground truth for diagnostic image interpretation.

    4. Adjudication Method for the Test Set

    Not applicable, as no human reader studies with specific adjudication methods are described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study. The evaluation focused on bench testing against a predicate device.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    The evaluation was of the software itself in its intended functionality, which includes processing and displaying images for human interpretation. The "bench-testing to compare with predicate software" was a standalone evaluation of the software's rendering capabilities and effectiveness in performing essential functions compared to another software. It was not measuring an AI algorithm's diagnostic performance in isolation.

    7. Type of Ground Truth Used

    The "ground truth" for the non-clinical testing was based on:

    • Design Specifications/Requirements: Software was tested against its defined requirements and design.
    • Predicate Device Performance: Equivalence was established by comparing images rendered by Planmeca Romexis with those rendered by the predicate software (InVivoDental). The implicit ground truth here is the established and accepted performance of the predicate device.
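
    As a hypothetical illustration of how two programs' rendered outputs could be compared quantitatively (the submission does not describe its comparison method; the metrics and synthetic images below are assumptions), a pixel-level check might report mean absolute difference and PSNR alongside the expert's visual review:

```python
import numpy as np

def compare_renders(img_a: np.ndarray, img_b: np.ndarray, max_value: float = 255.0):
    """Pixel-level comparison of two rendered images of the same case.

    Returns the mean absolute difference and the peak signal-to-noise ratio
    (PSNR, in dB); a higher PSNR means the two renders are more similar.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    mad = float(np.abs(a - b).mean())
    mse = float(((a - b) ** 2).mean())
    psnr = float("inf") if mse == 0 else 20 * np.log10(max_value) - 10 * np.log10(mse)
    return mad, psnr

# Synthetic stand-ins for a subject-device render and a predicate render.
subject_render = np.random.default_rng(2).integers(0, 256, size=(256, 256), dtype=np.uint8)
predicate_render = np.clip(subject_render.astype(np.int16) + 1, 0, 255).astype(np.uint8)

mad, psnr = compare_renders(subject_render, predicate_render)
print(f"mean abs diff = {mad:.2f}, PSNR = {psnr:.1f} dB")
```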

    8. Sample Size for the Training Set

    Not applicable. The Planmeca Romexis is described as an "imaging software" that processes and displays images, not an AI/ML algorithm that is trained on a dataset. The document does not mention any machine learning components requiring a training set.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set for a machine learning model is mentioned or implied.

