510(k) Data Aggregation (301 days)
Park Dental Research Aligners are indicated for the treatment of tooth malocclusion in patients with permanent dentition (i.e., all second molars). The Park Dental Research Aligners position teeth by way of continuous gentle force.
The Park Dental Research Aligners device is fabricated from clear, thin, thermoformed polyurethane plastic in a sequential series that progressively repositions the teeth. Corrective force to straighten the teeth is delivered via minor positional changes in each subsequent aligner.
This document, K180648, is a 510(k) premarket notification for the "Park Dental Research Aligners." It appears to be a submission demonstrating substantial equivalence to a predicate device, rather than a submission for a novel artificial intelligence/machine learning (AI/ML) device that would typically involve extensive performance testing against acceptance criteria for AI algorithms.
Therefore, the provided text does not contain the information requested in points 1-9: it describes a traditional medical device (clear aligners) and seeks clearance based on its similarity to an existing device, not on AI/ML performance. The "Non-Clinical Performance Testing" section primarily discusses material biocompatibility and validation of the fabrication software, which does not constitute an AI/ML algorithm performance study. As a result, the requested information cannot be extracted from the provided text.
To illustrate what a response would look like if the document contained AI/ML device performance data, here's a hypothetical structure and explanation of what each point would cover:
Hypothetical Acceptance Criteria and Study Proof (If this were an AI/ML device submission):
This section is not based on the provided document K180648; it is a hypothetical example of what the requested information would look like for an AI/ML device submission.
1. Table of Acceptance Criteria and Reported Device Performance:
| Performance Metric | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Primary Endpoints | | |
| Sensitivity (for Condition A detection) | ≥ 90% | XX% |
| Specificity (for Condition A detection) | ≥ 85% | YY% |
| Secondary Endpoints (if applicable) | | |
| AUC (for classification) | ≥ 0.92 | ZZ |
| Agreement with expert readers (kappa) | ≥ 0.80 | WW |
| Processing time per case | < 30 seconds | PP seconds |
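To make the table concrete, here is a minimal Python sketch of how computed metrics might be checked against such acceptance thresholds. It is not drawn from K180648; the function names, labels, and thresholds are all hypothetical and simply mirror the placeholder primary endpoints above.

```python
import numpy as np

# Hypothetical acceptance thresholds mirroring the primary endpoints above.
CRITERIA = {"sensitivity": 0.90, "specificity": 0.85}

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    """Sensitivity (TPR) and specificity (TNR) from binary labels (0/1)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def meets_criteria(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Flag whether each computed metric meets its acceptance criterion."""
    sens, spec = sensitivity_specificity(y_true, y_pred)
    return {"sensitivity": sens >= CRITERIA["sensitivity"],
            "specificity": spec >= CRITERIA["specificity"]}
```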
2. Sample Size for Test Set and Data Provenance:
- Sample Size: [Number] cases (e.g., 500 cases)
- Data Provenance: [e.g., Retrospective, multicenter study from hospitals across the United States, Europe, and Asia.]
3. Number of Experts and Qualifications for Ground Truth:
- Number of Experts: [e.g., 3 independent expert readers]
- Qualifications: [e.g., Board-certified Radiologists with an average of 10-15 years of experience in musculoskeletal imaging, specializing in dental radiography. One expert also held a Ph.D. in dental imaging, and all had prior experience in image annotation and consensus building.]
4. Adjudication Method for Test Set:
- Adjudication Method (one of, for example):
  - [2+1 Adjudication: Initial assessment by two independent experts. In cases of disagreement, a third senior expert (adjudicator) reviewed the case and made the final determination; this rule is sketched in code below.]
  - [3-Expert Consensus: All three experts independently reviewed cases, and ground truth was established by unanimous agreement. If disagreement persisted, the case was excluded or a dedicated consensus meeting was held to reach full agreement.]
  - [None: Ground truth was established by a single, highly experienced expert, or by pathology/histology reports as the definitive source.]
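The 2+1 rule in particular is simple to express in code. A minimal sketch, assuming integer class labels per case (all names here are hypothetical):

```python
from typing import Optional

def adjudicate_2plus1(reader_a: int, reader_b: int,
                      adjudicator: Optional[int] = None) -> int:
    """2+1 adjudication: agreement between the two primary readers is final;
    on disagreement, the senior adjudicator's label decides."""
    if reader_a == reader_b:
        return reader_a
    if adjudicator is None:
        raise ValueError("Readers disagree; an adjudicator label is required")
    return adjudicator
```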
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Was an MRMC study done? [Yes/No]
- If Yes, Effect Size (improvement of human readers with vs. without AI assistance):
- [e.g., The average sensitivity of human readers improved from 75% to 88% (an absolute improvement of 13 percentage points) when assisted by the AI device, as measured by a JAFROC analysis. The AUC for human readers unassisted was 0.85, improving to 0.93 with AI assistance.]
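As a hypothetical illustration (not a full MRMC/JAFROC analysis, and not from K180648), the headline effect size is often the mean per-reader change in a metric such as sensitivity. The reader values below are invented:

```python
import numpy as np

def mean_sensitivity_improvement(unaided: np.ndarray, aided: np.ndarray) -> float:
    """Mean per-reader change in sensitivity (aided minus unaided);
    each array holds one sensitivity value per reader."""
    return float(np.mean(aided - unaided))

# Invented per-reader sensitivities for three readers:
unaided = np.array([0.72, 0.75, 0.78])
aided = np.array([0.86, 0.88, 0.90])
print(mean_sensitivity_improvement(unaided, aided))  # ~0.13
```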
6. Standalone (Algorithm Only) Performance:
- Was a standalone performance study done? [Yes/No]
- If Yes, describe: [e.g., The algorithm's standalone performance was evaluated against the established ground truth, achieving a sensitivity of XX% and specificity of YY% for detecting the target condition. This was a prerequisite for evaluating human-in-the-loop performance.]
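Standalone metrics are commonly reported with confidence intervals; one widely used nonparametric approach (shown here as an assumption, not something stated in the source) is the percentile bootstrap. A minimal sketch, assuming binary ground-truth labels and binarized predictions as NumPy arrays:

```python
import numpy as np

def bootstrap_sensitivity_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for sensitivity.

    Resamples only the positive cases, since sensitivity depends
    solely on how the positives are classified.
    """
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y_true == 1)
    stats = [np.mean(y_pred[rng.choice(pos, size=pos.size, replace=True)] == 1)
             for _ in range(n_boot)]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```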
7. Type of Ground Truth Used:
- Type of Ground Truth: [e.g., Expert consensus (radiologists' interpretations), confirmed by pathology/histology reports where available.] OR [e.g., Clinical outcomes data (e.g., confirmed disease diagnosis from follow-up tests over 6 months).]
8. Sample Size for Training Set:
- Training Set Sample Size: [Number] cases (e.g., 10,000 cases)
9. How Ground Truth for Training Set was Established:
- Ground Truth Establishment for Training Set: [e.g., Initially labeled by a team of trained clinical annotators under the supervision of a lead radiologist. A subset of these annotations (e.g., 10%) were then independently reviewed by a senior radiologist for quality control and refinement. Active learning techniques were employed, where the model's uncertain predictions were sent for expert review to iteratively improve the training data quality.]
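The active-learning step described above typically routes the model's least-confident predictions to expert review. A hypothetical sketch of uncertainty-based selection, assuming calibrated probabilities from a binary classifier (the `model` call in the usage comment is assumed, not a real API):

```python
import numpy as np

def select_for_expert_review(probs: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k cases whose predicted probability is closest to 0.5,
    i.e. where a binary classifier is least certain."""
    return np.argsort(np.abs(probs - 0.5))[:k]

# Hypothetical usage: route the 100 most uncertain training cases
# to expert re-labeling, then retrain.
# probs = model.predict_proba(X_train)[:, 1]   # 'model' is assumed, not real
# review_idx = select_for_expert_review(probs, k=100)
```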