510(k) Data Aggregation (88 days)
Invisalign Palatal Expander System
The Invisalign Palatal Expander System is indicated for the orthodontic treatment of malocclusion. The system is used for rapid expansion and subsequent holding of a skeletally and/or dentally narrow maxilla (upper jaw, dental arch, teeth, and palate) with primary, mixed, or permanent dentition during orthopedic treatment in children or adolescents. In adults, it is to be used in conjunction with surgery or other interventions when necessary.
The subject device, the Invisalign Palatal Expander (IPE) System (hereafter the IPE System), is a modification of multiple legally marketed predicate devices: the Invisalign System (Class II; Product Code: NXC; K220287) as the primary predicate, and the Dentaurum Expansion Screws/Hyrax Expander – Hyrax® and Hyrax neo® (Class I; Product Code: DYJ; K935154) as the secondary predicate.
The IPE System consists of the Invisalign Palatal Expanders, Invisalign Palatal Holders, the Attachment Template and the proprietary 3D Shape generation software.
Invisalign Palatal Expanders are a staged series of removable orthodontic devices designed to expand the patient's skeletal and/or dental narrow maxilla (upper jaw, dental arch, teeth and palate) in small increments to an optimal position determined by the doctor. Once the desired clinical outcome of the expansion phase has been achieved, patients progress to the holding phase.
The Invisalign Palatal Holder is a copy of the last stage of the expansion phase designed to hold the maxilla post active expansion, to allow the maxilla to stabilize before the patient progresses to the next phase of treatment (retention, phase 2 or other treatment).
The proprietary, internal-facing Align 3D software enables Align's computer-aided design (CAD) designers to generate the shape and quantity/stages of the device. Using this software, CAD designers create digital files for the incremental stages of the doctor-prescribed expansion amount, and determine the design and quantity of holders per the prescribed holding duration. Each device in the series is fabricated via additive manufacturing (3D printing).
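The staging logic described above (split a prescribed expansion into incremental stages, size the holding phase by wear duration) can be sketched roughly as follows. This is a hypothetical illustration, not Align's actual algorithm; the function name, the per-stage increment, and the wear period per holder are all assumed values.

```python
import math

def plan_expander_series(prescribed_expansion_mm: float,
                         holding_duration_days: int,
                         increment_per_stage_mm: float = 0.25,
                         wear_days_per_holder: int = 7) -> dict:
    """Hypothetical staging sketch: divide the doctor-prescribed expansion
    into equal increments, and size the holding phase by device wear time.
    All parameter names and default values are illustrative assumptions."""
    if prescribed_expansion_mm <= 0 or holding_duration_days < 0:
        raise ValueError("expansion must be positive, holding non-negative")
    num_expanders = math.ceil(prescribed_expansion_mm / increment_per_stage_mm)
    num_holders = math.ceil(holding_duration_days / wear_days_per_holder)
    return {"expanders": num_expanders, "holders": num_holders}

# e.g. 6 mm of prescribed expansion with a 90-day holding phase
print(plan_expander_series(6.0, 90))  # {'expanders': 24, 'holders': 13}
```

The point of the sketch is only that the software's output (device count and staging) is a deterministic function of the clinician's prescription, which is relevant to the later discussion of why machine-learning study questions do not apply.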
The Attachment Template (also a component of the primary predicate device, the Invisalign System) enables correct placement and bonding of attachments made of dental composite (material provided by doctor) to the tooth surface for IPE engagement and retention.
The provided text is a 510(k) Premarket Notification for the Invisalign Palatal Expander System. It outlines the device's indications for use, comparison to predicate devices, and performance data. However, it does not contain the specific details required to fully address all parts of your request regarding acceptance criteria and a study proving the device meets those criteria for an AI/software component, particularly around metrics like sensitivity, specificity, or reader performance with and without AI.
The document mentions "proprietary 3D Shape generation software" and "includes software algorithms that are used to determine the shape and calculate the quantity of devices required." It also states "successful software verification and validation (V&V) testing at the unit, integration, and system level was performed to qualify the orthodontic software component of the subject device." This suggests that the software is a design and manufacturing tool, generating device shapes based on clinician prescriptions and patient scans, rather than an AI performing diagnostic or assistive tasks that would typically require the comparative performance studies you're asking about (e.g., MRMC studies, standalone performance with ground truth labels).
Therefore, based on the provided text, I can only provide information related to the device's overall performance and safety testing, not specific AI acceptance criteria and studies as if it were a diagnostic AI.
Here's what can be extracted and inferred from the document:
1. A table of acceptance criteria and the reported device performance:
The document doesn't provide a table of quantitative acceptance criteria (e.g., specific thresholds for force, retention, etc.) for the various tests. Instead, it states that the test results "met the acceptance criteria and performed as intended" or were "found to be adequately designed."
| Test Category | Reported Device Performance |
|---|---|
| Durability | All IPE devices maintained engagement on attachments, without deformation, cracks, chips, or breaks. |
| Stiffness/Force System | The device is structurally stiff and can deliver the required force in the lateral direction on the posterior teeth. Test results met acceptance criteria and performed as intended. (Note: the secondary predicate device, Hyrax, was used to set acceptance criteria for the stiffness/force bench testing, suggesting a comparative or normative standard was applied.) |
| Packaging | The packaging protects the system from all relevant shipping and handling scenarios; product integrity was maintained during shipping and handling. |
| Retention | The device can retain engagement with attachments on posterior teeth while under compression during an active expansion and holding period. Test results met acceptance criteria and performed as intended. |
| Insertion and Removal Force | Forces required to insert and remove the device are significantly less than the bond force of the attachment on the tooth. |
| Human Factors & Usability | The device is adequately designed for its intended users, uses, and use environments. Additional device modifications to the user interface were not needed and would not further reduce risk. (Based on available data, the subject device and primary predicate device perform as intended.) |
| Biocompatibility | The IPE System does not pose any significant biological risks and is considered safe for its intended use in the intra-oral cavity, per ISO 10993-1 and ISO 7405. |
| Software Testing | Successful software verification and validation (V&V) testing at the unit, integration, and system level was performed to qualify the orthodontic software component. (This indicates the software met its functional requirements and quality standards, but not necessarily clinical performance metrics as might be seen for a diagnostic AI.) |
| Clinical Performance | An early feasibility study concluded that desired active expansion of the upper jaw width was observed in all subjects; no unanticipated or serious adverse events were reported. Real-world data from commercially available product was also analyzed. (This is a qualitative summary of observational findings rather than a formal, quantitative performance study with pre-defined success metrics like sensitivity/specificity.) |
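The insertion/removal criterion in the table is comparative: both forces must sit well below the attachment's bond force. A minimal sketch of such a bench acceptance check follows; the function name, the 0.5 safety factor, and the example force values are all hypothetical stand-ins for the summary's unquantified "significantly less than" wording.

```python
def passes_insertion_removal(insertion_force_n: float,
                             removal_force_n: float,
                             attachment_bond_force_n: float,
                             safety_factor: float = 0.5) -> bool:
    """Hypothetical acceptance check: pass only if both the insertion and
    removal forces stay below a fraction (safety_factor) of the measured
    attachment bond force. The 0.5 factor is an assumed margin, not a
    value stated in the 510(k) summary."""
    limit = safety_factor * attachment_bond_force_n
    return insertion_force_n < limit and removal_force_n < limit

# Example: 2.0 N insertion and 2.5 N removal against a 10.0 N bond force
print(passes_insertion_removal(2.0, 2.5, 10.0))  # True: both < 5.0 N
```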
Regarding the study that proves the device meets the acceptance criteria (specifically concerning the software component, as that's where AI concepts typically apply):
The document describes the software as "proprietary 3D Shape generation software" that "enables Align's computer-aided design (CAD) designers to generate the shape and quantity/stages of the device." It clarifies that this is an "internal Align facing only shape generation software which includes software algorithms that are used to determine the shape and calculate the quantity of devices required per the doctor prescribed expansion distance and holding duration." In other words, this is software that designs the physical device based on inputs, not an AI that interprets medical images, flags abnormalities, or assists in diagnosis, which would typically involve the detailed study characteristics you've requested.
Therefore, many of your questions related to multi-reader studies, expert ground truth adjudication for a test set, effect sizes of human improvement with AI, and specific ground truth types (pathology, outcomes data) are not applicable to the description of the software provided in this 510(k) summary, as it's not a diagnostic or AI-assisted diagnostic tool in the typical sense.
Here's an attempt to answer the relevant questions based on the provided text, noting where information is not applicable or not provided:
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- Test Set Sample Size: For the physical device performance tests (durability, stiffness, retention, etc.), specific sample sizes are not provided, only that testing was conducted and met acceptance criteria. For the "Early Feasibility Study," the number of subjects is not specified, only that "desired active expansion of the upper jaw width was observed in all the subjects." For the "real-world data," the sample size is not provided.
- Data Provenance: The document does not specify the country of origin. The "Early Feasibility Study" is mentioned, implying a prospective collection for that specific study. The "real-world data" is described as from "commercially available product," suggesting it is retrospective post-market data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided because the software's function is for device design generation, not for generating 'ground truth' for medical interpretations or diagnoses that would require expert labeling and adjudication. The "doctor-prescribed expansion amount" is the input to the software, implying the clinician dictates the "ground truth" for the device design.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable for the software as described. The software's output is a device design based on physician input, not a diagnostic finding requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC study was done or reported for this device's software. The software's role is not described as an AI assisting human readers in diagnostic tasks.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:
- The software's function is described as enabling "CAD designers to generate the shape" based on "doctor-prescribed expansion amount." It's an internal design tool, not a standalone diagnostic algorithm whose performance would typically be evaluated without human input. Software verification and validation (V&V) was performed, which assesses if the software functions as designed from a technical standpoint.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the software, the "ground truth" for its operation is the "digital scan and doctor/dental practitioner's prescription." The software takes these inputs to generate a 3D model. Clinical performance was evaluated by observing "desired active expansion... in all the subjects" and "no unanticipated or serious adverse events reported," which relates to patient outcomes after using the physical device.
8. The sample size for the training set:
- Not applicable or provided. The document describes software that generates device shapes based on given inputs, rather than a machine learning model that would require a dedicated training set.
9. How the ground truth for the training set was established:
- Not applicable. The software's function as described does not involve a "training set" in the machine learning sense, where ground truth labels would be established. It is a deterministic design software.