K Number: K191363
Manufacturer:
Date Cleared: 2019-10-02 (133 days)
Product Code:
Regulation Number: 872.4120
Panel: DE
Reference & Predicate Devices:
Intended Use

The Neocis Planning Software Application (NPSA) for 3rd Party PCs is intended to perform the planning (pre-operative) phase of dental implantation surgery. The NPSA provides pre-operative planning for dental implantation procedures. The output of the NPSA is to be used with the Neocis Guidance System.

Device Description

The Neocis Planning Software Application (NPSA) for 3rd Party PCs is intended to facilitate dental implant procedure planning on any properly equipped PC so that procedures may be planned in advance of surgery. The NPSA is designed to upload CT scan images in DICOM file format, reconstruct and optimize 3D images of the patient anatomy, and plan the surgical procedure by defining the implant placement location.
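The reconstruction step described above (stacking DICOM slice images into a 3D volume of the patient anatomy) can be sketched as follows. This is a minimal illustration, not the NPSA's actual implementation: the pixel data is simulated with NumPy arrays rather than read from DICOM files, and the rescale values are hypothetical defaults standing in for the DICOM RescaleSlope/RescaleIntercept tags.

```python
import numpy as np

def reconstruct_volume(slices, rescale_slope=1.0, rescale_intercept=-1024.0):
    """Stack 2D CT slices (already ordered by z-position) into a 3D volume.

    Raw detector values are mapped to Hounsfield units (HU) using the
    DICOM RescaleSlope / RescaleIntercept tags; the defaults here are
    hypothetical stand-ins.
    """
    volume = np.stack(slices, axis=0).astype(np.float32)
    return volume * rescale_slope + rescale_intercept

# Simulated stand-in for DICOM pixel arrays (a real pipeline would read
# them with a DICOM library and sort slices by ImagePositionPatient).
slices = [np.full((4, 4), 1024 + z, dtype=np.int16) for z in range(3)]
vol = reconstruct_volume(slices)
print(vol.shape)     # (3, 4, 4)
print(vol[0, 0, 0])  # 0.0 (raw 1024 maps to 0 HU)
```

A real planner would then render this volume (e.g., via isosurface extraction or volume ray casting) and let the user place virtual implants within it.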

AI/ML Overview

Here's an analysis of the provided text regarding the acceptance criteria and the study that proves the device meets them:

Disclaimer: The provided document is a 510(k) summary, which is a regulatory submission to the FDA. It outlines the device's characteristics and demonstrates substantial equivalence to a predicate device. It is not a detailed clinical study report, thus some specific details typically found in such reports (like detailed effect sizes for MRMC studies, specific ground truth methods for clinical trials, or extensive statistical power justifications) may not be present or fully elaborated.


Description of Acceptance Criteria and the Study

The Neocis Planning Software Application (NPSA) for 3rd Party PCs is intended for the pre-operative planning phase of dental implantation surgery. The output of the NPSA is to be used with the Neocis Guidance System.

The provided document describes the verification and validation (V&V) testing performed for the NPSA software. The acceptance criteria are implicitly defined by the successful completion of these V&V activities, demonstrating that the software functions as intended and meets safety and performance requirements.

The device's performance is demonstrated through a series of software and system verification and validation tests, guided by FDA and IEC standards. The submission asserts that the combined software and system testing and analysis of results provide assurance that the device performs as intended and is substantially equivalent to its predicate.


1. Table of Acceptance Criteria and the Reported Device Performance

The document does not explicitly list quantitative acceptance criteria with corresponding performance metrics in a single table. Instead, it describes general categories of verification and validation (V&V) testing. The "reported device performance" is implicitly that the software successfully passed all these V&V tests, demonstrating it performs as intended.

Acceptance Criteria Category | Description (Implicit Performance)
Simulated Use | The device performs correctly during typical use cases.
Boundary Condition | The device correctly handles all potential boundary parameters within the application software.
Registration | The registration process functions correctly.
Case File Contents | Features associated with saving/loading cases function correctly during simulated use.
Error Case Injection | All error messages and pop-ups are correctly simulated and handled.
CT Scan Verification | The device correctly verifies the resolution and validity of CT scans.
File Transfer | The usability of files before and after transfer is verified.
Dental Implant Libraries | The quality and speed of implant rendering from the dental implant libraries are verified.
Generation and Visualization of 3D Reconstruction | All features of CT scan image reconstruction are functioning and accurate.
Installation, Stability, and Removal from 3rd Party PCs | The software can be successfully installed, runs stably, and can be removed from specified 3rd party PCs.
Risk Analysis (uFMEA, dFMEA, sFMEA, cFMEA) | Risks associated with use, usability, performance, design, software functionality, software interaction, and cybersecurity are identified, analyzed, and mitigated, ensuring the device is safe. (Implicitly, the device meets accepted safety thresholds for these risks.)

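The "CT Scan Verification" category above (checking the resolution and validity of an incoming scan) can be illustrated with a small sketch. The actual acceptance thresholds used by the NPSA are not disclosed in the 510(k) summary, so the limits and the function below are hypothetical:

```python
import numpy as np

# Hypothetical acceptance limits; the NPSA's real thresholds are not
# disclosed in the 510(k) summary.
MAX_PIXEL_SPACING_MM = 0.5   # coarsest acceptable in-plane resolution
MAX_SLICE_GAP_MM = 1.0       # largest acceptable gap between slices

def verify_ct_scan(pixel_spacing_mm, z_positions_mm):
    """Return (ok, reason) for a simple resolution/validity check."""
    if pixel_spacing_mm > MAX_PIXEL_SPACING_MM:
        return False, "in-plane resolution too coarse"
    gaps = np.diff(sorted(z_positions_mm))
    if len(gaps) == 0:
        return False, "fewer than two slices"
    if gaps.max() > MAX_SLICE_GAP_MM:
        return False, "missing or unevenly spaced slices"
    return True, "ok"

print(verify_ct_scan(0.3, [0.0, 0.5, 1.0, 1.5]))  # (True, 'ok')
print(verify_ct_scan(0.3, [0.0, 0.5, 2.0]))       # rejected: slice gap of 1.5 mm
```

Checks of this kind would typically run at import time, before a clinician is allowed to plan on the reconstructed volume.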
2. Sample Size Used for the Test Set and the Data Provenance

The document does not specify a "test set" in the context of clinical data (e.g., patient cases). The testing described focuses on software verification and validation, which often uses synthetic data, simulated scenarios, and internal testing environments rather than a specific clinical test set of patient images with established ground truth.

  • Sample Size for Test Set: Not specified in terms of patient cases or real-world data. The testing mentioned appears to be software-focused (e.g., boundary conditions, error case injection, file transfers).
  • Data Provenance: Not explicitly stated. The CT scan verification suggests the use of CT scan data, but the source (e.g., country of origin, retrospective/prospective) is not provided. Since the context is software V&V, it's likely a mix of internally generated data, simulated data, and potentially a limited set of representative real CT scan data.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

This information is not provided in the document. As the V&V largely focuses on software functionality, the "ground truth" would be defined by engineering specifications and expected software behavior rather than expert clinical consensus on actual patient cases.


4. Adjudication Method for the Test Set

Not applicable/not specified. The V&V described does not involve a multi-reader, multi-case adjudication process for clinical ground truth.


5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

No, an MRMC comparative effectiveness study is not mentioned or described in the provided text. The submission focuses on the stand-alone planning software and its functional verification against its own specifications and safety requirements, demonstrating substantial equivalence to a predicate device. It explicitly states: "This submission only includes the planning software. Testing for the software-NGS interactions has been omitted." Furthermore, "This submission is focused on the planning software as a standalone device."


6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

Yes, the core of the evaluation described is the "standalone" performance of the NPSA software. The document states: "This submission is focused on the planning software as a standalone device." The V&V described in Table 2 directly assesses the software's functional performance (e.g., CT scan verification, 3D reconstruction, implant library rendering) without a human in the loop for the primary assessment of these features. It is a software-only performance evaluation, though the software's intended use is by a human user for planning.


7. The Type of Ground Truth Used

The ground truth for the software verification and validation is based on:

  • Engineering Specifications: The expected behavior and output of the software as defined during its development.
  • Compliance with Standards: Adherence to standards like IEC 62304 for software lifecycle processes, ISO 14971 for risk management, and FDA guidance documents.
  • Predicate Device Performance: Implicitly, the NPSA's functional performance must be equivalent to or better than the planning features of the predicate device (Neocis Guidance System K182776).

It does not appear to involve clinical ground truth such as expert consensus on patient pathology or outcomes data.


8. The Sample Size for the Training Set

The document does not mention a "training set" in the context of machine learning. The NPSA's functionality, as described, doesn't suggest it's primarily an AI/ML algorithm that requires a training set for learning. It's a planning software that reconstructs 3D images and allows for virtual implant placement based on predefined instructions and algorithms.


9. How the Ground Truth for the Training Set Was Established

Not applicable, as no training set for machine learning is described.

§ 872.4120 Bone cutting instrument and accessories.

(a) Identification. A bone cutting instrument and accessories is a metal device intended for use in reconstructive oral surgery to drill or cut into the upper or lower jaw and may be used to prepare bone to insert a wire, pin, or screw. The device includes the manual bone drill and wire driver, powered bone drill, rotary bone cutting handpiece, and AC-powered bone saw.

(b) Classification. Class II.