Search Results
Found 3 results
510(k) Data Aggregation
(159 days)
The MedCAD® AccuPlan® System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the MedCAD® AccuPlan® System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, surgical guides, and dental splints for use in maxillofacial surgery. The surgical guides and dental splints are intended to be used for the maxillofacial bone in maxillofacial surgery. The MedCAD® AccuPlan® System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options.
The MedCAD® AccuPlan® System is a collection of software and associated additive manufacturing equipment intended to provide a variety of outputs to support orthognathic or reconstructive surgery. The system uses electronic medical images of the patient's anatomy, or stone castings made from the patient's anatomy, together with input from the physician, to manipulate original patient images for planning and executing surgery. The patient-specific outputs from the system include anatomical models, dental splints, surgical guides, and patient-specific case reports.
Following the MedCAD® Quality System and specific Work Instructions, trained employees utilize Commercial Off-The-Shelf (COTS) software to manipulate 3-D medical scan images which can include Computed Tomography (CT), Cone Beam CT (CBCT), and/or 3-D scan images from patient physical models (stone models of the patient's teeth) to create patient-specific physical and digital outputs. The process requires clinical input and review from the physician during planning and prior to delivery of the final outputs. While the process and dataflow vary somewhat based on the requirements of a given patient and physician, the following description outlines the functions of key sub-components of the system, and how they interact to produce the defined system outputs. It should be noted that the system is operated only by trained MedCAD employees, and the physician does not directly input information. The physician provides input for model manipulation and interactive feedback through viewing of digital models of system outputs that are modified by the engineer during the planning session.
The MedCAD® AccuPlan® System comprises four individual software components for design, along with various manufacturing equipment, integrated to provide a range of anatomical models (physical and digital), dental splints, surgical guides, and patient-specific planning reports for reconstructive surgery in the maxillofacial region.
The MedCAD® AccuPlan® System requires an input 3-D image file from medical imaging systems (e.g., CT) and/or an implant file. This input is then used, with support from the prescribing physician, to provide the following potential outputs to support reconstructive surgery. Each system output is designed with physician input and reviewed by the physician prior to finalization. All outputs are used only with direct physician involvement to reduce the criticality of the outputs.
System outputs include:
- Anatomical Models
- Surgical Guides
- Dental Splints
- Patient-Specific Case Reports
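The submission does not disclose AccuPlan's internal algorithms, but the first step in any CT-to-model pipeline of this kind is segmenting bone from the scan volume. As a generic, illustrative sketch only (the function and the Hounsfield threshold are invented here, not taken from the submission):

```python
import numpy as np

def segment_bone(ct_volume_hu, threshold_hu=300):
    """Return a binary mask of voxels at or above a bone-density threshold.

    ct_volume_hu: 3-D numpy array of Hounsfield Units (HU).
    threshold_hu: illustrative cutoff; real segmentation tools use
    adaptive methods and manual cleanup, not a single global threshold.
    """
    return ct_volume_hu >= threshold_hu

# Synthetic 3-D volume: soft tissue (~40 HU) with an embedded "bone" block (~1000 HU)
volume = np.full((32, 32, 32), 40.0)
volume[8:24, 8:24, 8:24] = 1000.0

mask = segment_bone(volume)
print(int(mask.sum()))  # voxels classified as bone: 16**3 = 4096
```

The resulting mask would then be turned into a surface mesh (e.g., by marching cubes) before design and additive manufacturing.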
The purpose of this submission was to add titanium cutting / drilling guides to the family of available patient-specific outputs. Cutting and drilling instruments can only be used with titanium cutting / drilling guides. Polymer guides are to be used for marking and positioning of anatomy only.
The MedCAD® AccuPlan® System is cleared by the FDA as a software and image segmentation system for maxillofacial surgery. The primary purpose of this specific submission (K223024) was to add titanium cutting/drilling guides to the family of available patient-specific outputs, which were not part of the previous K192282 clearance.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA clearance relies on performance testing to demonstrate substantial equivalence, particularly concerning the new titanium cutting/drilling guides. The document highlights two key performance tests:
Test | Acceptance Criteria | Reported Device Performance |
---|---|---|
Wear Debris Testing | The wear debris generated by the subject device must be less than what is reported in the literature to be safe. | PASS: The wear debris generated by the subject device is less than that reported in the literature to be safe. |
Fit and Form Validation | All physical samples must meet predetermined alignment and fit acceptance criteria when optically scanned and fitted to a representative anatomical model. | PASS: All samples met the predetermined acceptance criteria (alignment with 3D model, and fit on anatomical model). |
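The document gives no numeric tolerance for the fit-and-form criterion, but the comparison it describes, an optical scan of the physical sample checked against the 3-D design model, amounts to a surface-deviation measurement. A minimal sketch, assuming a hypothetical 0.5 mm acceptance limit and using a brute-force nearest-neighbour search:

```python
import numpy as np

def max_surface_deviation(scanned_pts, design_pts):
    """Maximum distance from each scanned point to its nearest design point.

    Brute-force nearest-neighbour search; production metrology software
    would use KD-trees and full mesh-to-mesh distances, plus a rigid
    registration step before measuring.
    """
    diffs = scanned_pts[:, None, :] - design_pts[None, :, :]  # (N, M, 3)
    dists = np.linalg.norm(diffs, axis=2)                     # (N, M)
    return dists.min(axis=1).max()

# Toy example: a unit-square patch "scanned" with a 0.05 mm error on one point
design = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
scanned = design.copy()
scanned[3, 2] = 0.05  # 0.05 mm out-of-plane error

dev = max_surface_deviation(scanned, design)
print(dev <= 0.5)  # True for the hypothetical 0.5 mm acceptance limit
```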
The document also mentions:
- Sterilization Validation: In accordance with ISO 17665 and FDA guidance, to a Sterility Assurance Level (SAL) of 1x10^-6. All test method acceptance criteria were met.
- Biocompatibility Validation: In accordance with ISO 10993-1 and FDA guidance. Results adequately address biocompatibility for the output devices and their intended use.
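The SAL of 1x10^-6 cited for the sterilization validation follows from the overkill method's log-linear inactivation model. The D-value and exposure time below are invented for illustration; the submission reports only that the acceptance criteria were met:

```python
import math

def surviving_population(n0, d_value_min, exposure_min):
    """Log-linear inactivation model: log10(N) = log10(N0) - t/D."""
    return 10 ** (math.log10(n0) - exposure_min / d_value_min)

n0 = 1e6          # biological indicator spore population
d_value = 1.0     # hypothetical D-value at 121 °C, in minutes
exposure = 12.0   # minutes of exposure -> 12 log reductions

print(surviving_population(n0, d_value, exposure))  # 1e-06
```

Starting from a 10^6 spore population, a 12-log reduction leaves an expected surviving population of 10^-6, i.e., the claimed SAL.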
2. Sample Size Used for the Test Set and Data Provenance
- Wear Debris Testing: "Cutting / drilling instruments were used on a worst-case titanium surgical guide." The sample size is not explicitly stated but implies at least one worst-case guide was tested.
- Fit and Form Validation: "Subject devices from historical cases were manufactured." The sample size is not explicitly stated beyond "All samples met the predetermined acceptance criteria." The provenance is implied to be retrospective as it uses "historical cases." The country of origin of the data is not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
The document does not mention the use of experts to establish a "ground truth" in the traditional sense for these performance tests. The ground truth for the device's physical outputs (surgical guides) appears to be derived from engineered specifications and objective measurements (optical scanning, fit to master models). The system relies on trained MedCAD employees and physician input for planning, but this is part of the operational workflow rather than a ground truth establishment process for performance testing.
4. Adjudication Method for the Test Set
Not applicable. The performance testing described (wear debris, fit and form) does not involve human readers or a need for adjudication in the context of diagnostic agreement. It's a technical validation of physical properties.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, What was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
No MRMC comparative effectiveness study was mentioned or performed. This device is described as a software system for planning and manufacturing physical outputs (guides, splints, models), which are then used in surgery, rather than an AI-based diagnostic tool that directly assists human readers in interpreting medical images for diagnosis. The system is operated by "trained MedCAD employees" and involves "clinical input and review from the physician during planning," but it's not described as an AI assistance tool for human interpretation of images.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
The performance tests described (wear debris and fit/form) evaluate the physical characteristics and dimensional accuracy of the manufactured outputs, which are direct results of the system's (algorithm's) processing and manufacturing. In that sense, aspects of "standalone" performance of the physical output are assessed. However, the system is explicitly stated as requiring "clinical input and review from the physician" for planning, meaning it's generally a human-in-the-loop system in its intended use, rather than a fully autonomous diagnostic algorithm.
7. The Type of Ground Truth Used
The ground truth for the performance tests appears to be:
- Engineered Specifications/Design Accuracy: For Fit and Form Validation, the manufactured devices are compared to the "3D model" (the digital design generated by the system based on patient imaging). This implies the 3D model itself serves as the ground truth for ideal form and alignment.
- Literature-based Safety Thresholds: For Wear Debris Testing, the acceptance criterion is quantitative: less than "that reported in the literature to be safe." This indicates a ground truth derived from existing scientific literature on safe levels of wear debris from similar materials/applications.
8. The Sample Size for the Training Set
The document does not provide information about a "training set" or "training data" for the MedCAD® AccuPlan® System. This suggests that the system's functionality is not based on a machine learning model that requires a training phase with labeled data in the way many AI/ML medical devices do. It appears to be a rule-based or engineering-based software for image processing, segmentation, and design for manufacturing.
9. How the Ground Truth for the Training Set was Established
Since no training set is mentioned (refer to point 8), this question is not applicable.
(195 days)
IPS CaseDesigner is indicated for use as a software and image segmentation system for the transfer of imaging information from a scanner such as a CT scanner. It is also indicated to support the diagnostic and treatment planning process of craniomaxillofacial procedures. IPS CaseDesigner facilitates the service offering of individualized surgical aids.
IPS CaseDesigner has specific functionalities to visualize the diagnostic information, e.g. from CT-imaging, to perform specific measurements in the image data and to plan surgical actions in order to support the diagnostic and treatment planning process. Based on the diagnostic and planning data, the IPS design service can offer individualized surgical aids.
The IPS CaseDesigner 2.0 is a software and image segmentation system for the transfer of imaging information from a CT scanner, indicated to support the diagnostic and treatment planning process of craniomaxillofacial procedures and facilitate the service offering of individualized surgical aids.
Here's an analysis of its acceptance criteria and the supporting study:
1. Table of Acceptance Criteria and Reported Device Performance
The provided documentation does not explicitly list quantitative acceptance criteria in a dedicated table format. The "Differences" section, however, outlines new functionalities and improvements compared to the predicate device, implying that the performance of these new features needed to be validated. The "Performance Data" section states that the device was verified and validated and that "requirements for the features have been met."
Based on the text, the key performance aspects that were likely evaluated for the new features are:
Feature/Functionality | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Additional Segmental Maxillary Osteotomies | Accurate simulation and planning of segmental maxillary osteotomies (Split, Y-cut, H-cut). | Allows planning of segmental maxillary osteotomies, enabling different types of virtual cuts and bone fragment movement. |
Intraoral Surface Scan Data Import | Ability to accurately import and utilize intraoral surface scan data (dental casts) for occlusal information. | Can use intraoral surface scans for detailed occlusal information. |
Osteosynthesis Plates Selection/Ordering | Correct display and selection of osteosynthesis plates, and accurate generation of a list for ordering. | Possible to export a list of specific plates selected from an available list. |
3D Cephalometry | Accurate setting of 3D landmarks and planes, correct calculation of measurements, and automatic updates based on planning. | Supports 3D Cephalometry with setting landmarks, measurements, and automatic updates. Virtual lateral and frontal cephalograms are calculated. |
New Algorithm for Virtual Occlusion | Accuracy and efficiency comparable to the manual workflow of placing and digitizing dental casts. | Concluded to be as efficient and accurate as the manual workflow; provides the same level of accuracy and reliability. |
Splint Visualization | Accurate generation and visualization of the surgical splint directly within the 3D workspace. | Allows generation and visualization of surgical splint directly in the 3D workspace. |
3D Photo Mapping (Soft Tissue Simulation) | Improved visualization of soft tissue simulation with the ability to add "real-world" textures. | Improved visualization of soft tissue simulation. |
Operating System Compatibility | Compatibility and validated performance on specified operating systems (Windows 7, 10; Mac OS Catalina, Mojave, High Sierra). | Tested and validated on all specified systems. |
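The submission does not describe how the virtual-occlusion algorithm works internally. A standard building block for this kind of task, finding the best-fit rigid transform that aligns a digitized dental cast with its scanned counterpart, is the Kabsch algorithm, sketched here purely as background:

```python
import numpy as np

def kabsch_align(source, target):
    """Best-fit rigid rotation R and translation t mapping source points onto
    target (Kabsch algorithm), so that target ~= source @ R.T + t."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Digitized "cast" points and the same points after a known rigid motion
rng = np.random.default_rng(0)
cast = rng.normal(size=(20, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = cast @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = kabsch_align(cast, moved)
residual = np.abs(moved - (cast @ R.T + t)).max()
print(residual < 1e-9)  # the known motion is recovered to numerical precision
```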
2. Sample Size for the Test Set and Data Provenance
The document does not specify the sample size used for the test set for any of the performance evaluations. It also does not mention the data provenance (e.g., country of origin, retrospective or prospective nature) for any specific testing data.
3. Number of Experts and Qualifications for Ground Truth
The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. For the "New algorithm for virtual occlusion and occlusion alignment," it states "This algorithm was validated and it was concluded that it is as efficient and as accurate as the manual workflow..." but does not detail who performed this validation or how the "manual workflow" ground truth was established by experts.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method used for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study. There is no mention of human readers improving with AI assistance or without. The device functions as a planning tool for clinicians, and the focus of the validation is on the accuracy and functionality of the software's new features.
6. Standalone Performance
Yes, the study primarily describes the standalone performance of the algorithm and software features. The validation of the "New algorithm for virtual occlusion and occlusion alignment" specifically states that it was concluded to be "as efficient and as accurate as the manual workflow," which implies a standalone assessment of the algorithm's output against a defined standard. The other listed features also focus on the core functionality of the software itself.
7. Type of Ground Truth Used
The type of ground truth used is not explicitly stated in detail for each feature. However, based on the description, it can be inferred that:
- Expert Consensus/Manual Workflow: For the "New algorithm for virtual occlusion and occlusion alignment," the ground truth likely involved a manual workflow of placing dental casts in occlusion and digitizing them, implicitly representing an expert-derived or standard clinical practice ground truth. The algorithm's output was compared against this.
- Engineering/Design Specifications: For functionalities like "Additional segmental maxillary osteotomies," "Osteosynthesis plates," "3D Cephalometry," and "Splint visualization," the ground truth for validation likely involved verifying the software's output against pre-defined engineering specifications, anatomical accuracy, and expected physiological movements/measurements as determined by medical and software experts during the design and development phases.
8. Sample Size for the Training Set
The document does not specify the sample size for the training set. As this device is a planning and visualization tool, rather than a deep learning model requiring extensive training data, explicit mention of a "training set" might not be applicable in the same way as for diagnostic AI algorithms. However, if any machine learning components were used (e.g., for occlusion alignment), this information is not provided.
9. How the Ground Truth for the Training Set was Established
Since the document does not specify a training set or its sample size, it does not describe how ground truth for a training set was established.
(139 days)
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used in an additive manufacturing portion of the system that produces physical outputs including anatomical models, guides, and case reports for use in thoracic (excluding spine) and reconstructive surgeries. The IPS Planning System is also intended as a pre-operative software tool for simulating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing equipment intended to provide a variety of outputs to support thoracic (excluding spine) and reconstructive surgeries. The system uses electronic medical images of the patient's anatomy (CT data), with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient-specific physical and/or digital output devices, which include anatomical models, guides, and case reports.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a medical device for surgical planning. The provided text contains information about its acceptance criteria and the studies performed to demonstrate its performance.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes specific performance tests related to the materials and software used in the KLS Martin Individual Patient Solutions (IPS) Planning System. However, it does not explicitly provide a table of acceptance criteria with numerical targets and corresponding reported device performance values for the device's primary function of surgical planning accuracy or effectiveness. Instead, it relies on demonstrating that materials withstand sterilization, meet biocompatibility standards, and that software verification and validation were completed.
Here's a summary of the performance claims based on the provided text:
Acceptance Criteria Category | Reported Device Performance (Summary from text) |
---|---|
Material Performance (Polyamide) | Polyamide (PA) devices demonstrated the ability to withstand multiple sterilization cycles and maintain ≥85% of initial tensile strength, leveraging data from K181241. Testing provides evidence of shelf life. |
Material Performance (Titanium) | Additively manufactured titanium devices demonstrated substantial equivalence to titanium devices manufactured using traditional (subtractive) methods, leveraging testing from K163579. These devices are identical in formulation, manufacturing processes, and post-processing. |
Biocompatibility | All testing (cytotoxicity, sensitization, irritation, and chemical/material characterization) was within pre-defined acceptance criteria, in accordance with ISO 10993-1. Adequately addresses biocompatibility for the output devices and their intended use. |
Sterilization | Steam sterilization validations were performed for each output device for the dynamic-air-removal cycle in accordance with ISO 17665-1:2006, to a sterility assurance level (SAL) of 10^-6 using the biological indicator (BI) overkill method. All test method acceptance criteria were met. |
Pyrogenicity | LAL endotoxin testing was conducted according to AAMI ANSI ST72. Results demonstrate endotoxin levels below the USP allowed limit for medical devices and meet pyrogen limit specifications. |
Software Verification & Validation | Performed on each individual software application (Materialise Mimics, Geomagic® Freeform Plus™) used in planning and design. Quality and on-site user acceptance testing provided objective evidence that all software requirements and specifications were correctly and completely implemented and are traceable to system requirements. Testing showed conformity with pre-defined specifications and acceptance criteria. Software documentation ensures mitigation of potential risks and that the software performs as intended based on user requirements and specifications. |
Guide Specifications (Cutting/Marking Guide) | Thickness: min 1.0 mm, max 20 mm. Width: min 7 mm, max 300 mm. Length: min 7 mm, max 300 mm. Degree of curvature (in-plane and out-of-plane): N/A. Screw hole spacing: ≥4.5 mm, no max. No. of holes: N/A. |
Screw Specifications (Temporary) | Diameter: 2.3 mm - 3.2 mm. Length: 7 mm - 17 mm. Style: maxDrive (Drill-Free, non-locking, locking). |
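The guide specifications above lend themselves to a simple bounds check. The numeric limits are taken from the table; the checking function itself is illustrative and not part of the IPS Planning System:

```python
# Dimensional limits for a cutting/marking guide, in mm, from the table above.
GUIDE_LIMITS = {
    "thickness": (1.0, 20.0),
    "width": (7.0, 300.0),
    "length": (7.0, 300.0),
    "screw_hole_spacing": (4.5, float("inf")),  # >=4.5 mm, no maximum
}

def check_guide(dims):
    """Return a list of (parameter, value, lo, hi) tuples for out-of-spec dims."""
    violations = []
    for name, value in dims.items():
        lo, hi = GUIDE_LIMITS[name]
        if not (lo <= value <= hi):
            violations.append((name, value, lo, hi))
    return violations

ok = check_guide({"thickness": 2.5, "width": 40, "length": 60, "screw_hole_spacing": 5.0})
bad = check_guide({"thickness": 0.5, "width": 40, "length": 60, "screw_hole_spacing": 5.0})
print(ok)         # [] -- within the tabulated limits
print(bad[0][0])  # 'thickness'
```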
2. Sample size used for the test set and the data provenance
The document specifies "simulated use of guides intended for use in the thoracic region was validated by means of virtual planning sessions with the end-user." However, it does not provide any specific sample size for a test set (e.g., number of cases or patients) or details about the provenance of data (e.g., retrospective or prospective, country of origin). The studies appear to be primarily focused on material and software validation, not a clinical test set on patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts used to establish ground truth for a test set, nor their qualifications. The "virtual planning sessions with the end-user" implies input from clinical professionals, but no details are provided.
4. Adjudication method for the test set
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for a test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs without AI assistance
The document states that "Clinical testing was not necessary for the determination of substantial equivalence." Therefore, an MRMC comparative effectiveness study involving AI assistance for human readers was not performed or reported for this submission. The device is a planning system for producing physical outputs, not an AI-assisted diagnostic tool for human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The device is described as "a software system and image segmentation system for the transfer of imaging information... The system processes the medical images and produces a variety of patient specific physical and/or digital output devices." It also involves "input from the physician" and "trained employees/engineers who utilize the software applications to manipulate data and work with the physician to create the virtual planning session." This description indicates a human-in-the-loop system, not a standalone algorithm. Performance testing primarily focuses on the software's ability to implement requirements and specifications and material properties, rather than an independent algorithmic assessment.
Software verification and validation were performed on "each individual software application used in the planning and design," demonstrating conformity with specifications. This can be considered the standalone performance evaluation for the software components, ensuring they function as intended without human error, but it's within the context of supporting a human-driven planning process.
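The traceability of requirements to tests cited above can be illustrated with a minimal coverage check; the requirement IDs and test names here are invented:

```python
# Minimal sketch of a requirements-traceability check, the kind of objective
# evidence a software V&V effort produces. All identifiers are hypothetical.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

test_coverage = {
    "test_segmentation_export": {"REQ-001"},
    "test_guide_dimensions": {"REQ-002", "REQ-003"},
}

covered = set().union(*test_coverage.values())
uncovered = requirements - covered
print(sorted(uncovered))  # [] -- every requirement traces to at least one test
```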
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the materials and sterilization parts of the study, the "ground truth" or reference is established by international standards (ISO 10993-1, ISO 17665-1:2006) and national standards (AAMI ANSI ST72, USP allowed limit for medical devices).
For the "virtual planning sessions with the end-user," the ground truth is implicitly the expert judgment/agreement of the end-user (physician) on the simulated surgical treatment options and guide designs. No further specifics are given.
8. The sample size for the training set
The document does not mention any training set or its sample size. This is expected as the device is not described as a machine learning or AI device that requires a training set for model development in the typical sense. The software components are commercially off-the-shelf (COTS) applications (Materialise Mimics, Geomagic® Freeform Plus™) which would have their own internal validation and verification from their developers.
9. How the ground truth for the training set was established
Since no training set is mentioned for the device itself, the establishment of ground truth for a training set is not applicable in this document.