Search Results
Found 23 results
510(k) Data Aggregation
(88 days)
DZJ
Materialise Personalized Guides for Craniomaxillofacial Surgery are intended to guide the marking of bone and/or guide surgical instruments in facial surgery.
CMF Titanium Guides are used during bone repositioning/reconstruction surgical operations for orthognathic and reconstruction (including bone harvesting) indications.
CMF Titanium Guides are intended for children, adolescents and adults.
CMF Titanium Guides are intended for single use only.
CMF Titanium Guides are to be used by a physician trained in the performance of maxillofacial surgery.
Materialise Personalized Models for Craniomaxillofacial Surgery are intended for visualization of the patient's anatomy, preparation of surgical interventions and fitting or adjustment of implants or other medical devices such as osteosynthesis plates or distractors, in mandibular and maxillofacial surgical procedures.
CMF Plastic Models are intended for infants, children, adolescents and adults.
CMF Plastic Models are intended for single use only.
CMF Plastic Models are to be used by a physician trained in the performance of maxillofacial surgery.
Materialise Personalized Guides and Models for Craniomaxillofacial Surgery combine the use of 3D preoperative planning software with patient-matched guides and models to improve and simplify the performance of surgical interventions by transferring the pre-operative plan to surgery. Materialise Personalized Guides and Models for Craniomaxillofacial Surgery are used in the facial skeleton or in maxillofacial surgeries.
The surgical planning is based on medical images of the patient that are segmented in order to create a 3D representation of the patient's anatomy. The surgical treatment of the patient is simulated based on instructions provided by the surgeon and the patient-matched devices are tailored to the treatment and the patient's needs. The patient-matched devices are manufactured from commercially pure Titanium, polyamide, or clear acrylic by means of additive manufacturing technologies. The patient-matched devices are provided non-sterile.
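The segmentation step described above is, in general terms, a thresholding-and-refinement operation on CT intensities. The sketch below is illustrative only (it is not Materialise's actual, proprietary pipeline, and the Hounsfield-unit cutoff is an assumed value):

```python
import numpy as np

# Illustrative bone segmentation by Hounsfield-unit thresholding.
# BONE_THRESHOLD_HU is an assumption for this sketch; real pipelines
# tune the cutoff per scan and refine the mask afterwards.
BONE_THRESHOLD_HU = 300

def segment_bone(ct_volume_hu: np.ndarray) -> np.ndarray:
    """Return a boolean mask of voxels whose intensity suggests bone."""
    return ct_volume_hu >= BONE_THRESHOLD_HU

# Synthetic volume: a 3-voxel-thick "bone" slab in soft-tissue background.
volume = np.full((32, 32, 32), 40.0)   # soft tissue ~40 HU
volume[10:13, :, :] = 1200.0           # cortical bone ~1200 HU
mask = segment_bone(volume)
print(mask.sum())  # 3 * 32 * 32 = 3072 voxels flagged as bone
```

The resulting mask would then feed a surface-reconstruction step to produce the 3D representation used for planning.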
Materialise Personalized Guides and Models for Craniomaxillofacial Surgery include CMF Titanium Guides and CMF Plastic Models.
The provided text is a 510(k) summary for the device "Materialise Personalized Guides and Models for Craniomaxillofacial Surgery." It details the device's indications for use, description, comparison to predicate and reference devices, and non-clinical performance data. However, it does not include information about AI/algorithm performance, acceptance criteria for such an algorithm, or a clinical study for proving the device meets those criteria. The document lists "non-clinical testing" and states that "no guide specific mechanical testing is performed but this is covered by mechanical analysis of CMF Titanium Plates."
Therefore, I cannot extract the information required for an AI device acceptance criteria and study from this document. The document primarily focuses on the substantial equivalence of the physical, patient-matched guides and models to existing devices, covering aspects like material compatibility, mechanical properties, biocompatibility, and sterilization.
To answer your request, the ideal information would be present in a document describing an AI/ML medical device, which would typically contain details regarding the algorithm's performance metrics (acceptance criteria), the dataset used for testing, ground truth establishment, and potential MRMC studies. This document does not describe such a device.
(132 days)
DZJ
TECHFIT Digitally Integrated Surgical Reconstruction Platform (DISRP) System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT-based system. The input data file is processed by the DISRP System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including surgical guides and splints for use in maxillofacial surgery. The DISRP System is also intended as a preoperative software tool for simulating / evaluating surgical treatment options.
The DISRP system is compatible with the TECHFIT Patient- Specific Maxillofacial System and the TECHFIT Diagnostic Models and should be used in conjunction with expert clinical judgment.
The TECHFIT DISRP SYSTEM is composed of the Anatomic Specificx Guiding System and the Digitally Integrated Surgical Reconstruction Platform (DISRP).
- The Digitally Integrated Surgical Reconstruction Platform (DISRP) is a web-based collaboration software for digital surgery case flow management and Orthognathic surgery planning that reflects the production process and allows for the interaction of multiple users: surgeons, sales representatives, and the TECHFIT case planning staff (case planning assistant, case planners, and operations director), using multiple devices. It allows collaboration in the planning process. A detailed description of the software can be found later in this section.
- The Anatomic Specificx Guiding System is a Patient-Specific, single-use device designed to assist surgeons in transferring the pre-surgical plan to the operating room. This system includes surgical guides intended for Orthognathic and Reconstructive surgeries in adults. The surgical guides have drilling holes and slots to make osteotomies and ensure the correct positioning of bones and implants.
The Anatomic Specificx Guiding System is divided into two categories: Anatomic Specificx Orthognathic Guides and Anatomic Specificx Reconstruction Guides.
Anatomic Specificx Orthognathic Guides are classified into Titanium and Resin Surgical Guides.
Anatomic Specificx Orthognathic Titanium Guides are manufactured from Commercially Pure (CP) Titanium grade 4; they include mandibular and maxillofacial surgical guides (e.g., Le Fort, Genioplasty, etc.). Palatal Splints are also part of the Anatomic Specificx Orthognathic Titanium Guides; these are optional guides used in orthognathic surgery to guide and support the correct teeth positioning and validate the patient's final occlusion.
Anatomic Specificx Orthognathic Resin Guides are manufactured through rapid prototyping using the Form 3B and Form 4B printers and Biomed Clear Resin from Formlabs. These guides include Le Fort and genioplasty surgical guides. During surgery, resin surgical guides must be used with slot, drill, and screw metal sleeves. Slot sleeves are made from commercially pure grade 4 titanium, while drill and screw sleeves are made from alloyed titanium (Ti6Al4V). All sleeves are manufactured by machining. The resin guides also include splints (intermediate, final, and palatal), which are optional guides used in orthognathic surgery to guide and support the correct teeth positioning and validate the patient's final occlusion.
Anatomic Specificx Reconstruction Guides
The Anatomic Specificx Reconstruction Guides are intended for mandibular and maxillofacial surgical procedures in adults, using patient grafts/flaps for reconstruction. These guides are made from Commercially Pure (CP) grade 4 titanium, produced through machining and finished with an anodizing process. They are intended for use in the anatomical sites of the maxilla, mandible and fibula. The reconstruction guides provide transfer of the pre-surgical plan to the operating room, facilitating placement and fixation of the patient's bone grafts/flaps at the surgical site.
The TECHFIT DISRP® System is cleared based on non-clinical testing. The device is a software system and image segmentation system intended for use in maxillofacial surgery for transferring imaging information from a medical scanner (such as a CT-based system) to create digital models, or to produce physical outputs such as surgical guides and splints. It also serves as a preoperative software tool for simulating and evaluating surgical treatment options.
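The "physical outputs" step implies exporting the planned geometry as a triangulated surface; ASCII STL is the common interchange format for rapid prototyping. As a hedged illustration (the summary does not state DISRP's actual export format, and `write_ascii_stl` is a hypothetical helper), a minimal writer looks like:

```python
# Minimal sketch of exporting a triangulated surface as ASCII STL.
# `write_ascii_stl` is a hypothetical helper, not part of the DISRP
# System; a real export would contain the full segmented anatomy.
def write_ascii_stl(path, triangles):
    """triangles: iterable of (normal, v1, v2, v3), each a 3-tuple of floats."""
    with open(path, "w") as f:
        f.write("solid model\n")
        for n, v1, v2, v3 in triangles:
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
            f.write("    outer loop\n")
            for v in (v1, v2, v3):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write("endsolid model\n")

# One right triangle in the z=0 plane, normal pointing up (+z).
tri = ((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
write_ascii_stl("demo.stl", [tri])
with open("demo.stl") as f:
    stl_text = f.read()
print(stl_text.splitlines()[0])  # solid model
```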
Here's an overview of the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Test/Assessment | Acceptance Criteria | Reported Device Performance |
---|---|---|---|
Sterilization | Validating sterilization methods | Sterilization to a Sterility Assurance Level (SAL) of 10^-6 using recommended steam sterilization instructions. | The results of the steam sterilization validation show that Anatomic Specificx Reconstruction Guides and Palatal Splints were sterilized to a SAL of 10^-6 using the recommended steam sterilization instructions. |
Dimensional Accuracy | Scanning of Anatomic Specificx Reconstruction Guides | Scanning comparison between the physical guides and the existing digital files must be successful and meet all acceptance criteria. | The scanning comparison between the Anatomic Specificx Reconstruction Guides and the existing files was successful, meeting all acceptance criteria. |
Compatibility | Compatibility testing between components: Anatomic Specificx Reconstruction Guides, Patient-specific Maxillofacial System, and TECHFIT Diagnostic Models. For palatal splints: compatibility with Anatomic Specificx Orthognathic Titanium Guides, TECHFIT Diagnostic Models, and Patient-specific Maxillofacial System. | All components must be compatible with each other as specified in their intended use. | Anatomic Specificx Reconstruction Guides were compatible with Patient-specific Maxillofacial System and TECHFIT Diagnostic Models. Palatal Splints were compatible with Anatomic Specificx Orthognathic Titanium Guides, TECHFIT Diagnostic Models and Patient-specific Maxillofacial System. |
Mechanical Performance | Static and dynamic four-point bending mechanical tests on TECHFIT plates compared to a mandibular plate from KLS Martin. | TECHFIT plates must offer comparable or significantly greater resistance. | TECHFIT plates offer significantly greater resistance. |
Software Verification & Validation (V&V) | Verification of the software items. | DISRP meets the Software Design Specification (SDS) and functions as intended in the intended use environment. Rigorous testing for deployment, reliability, and security. | DISRP meets the SDS and functions as intended in the intended use environment. DISRP undergoes rigorous testing and is reviewed at each deployment to confirm it works as expected. V&V includes reliability and security processes. |
Biocompatibility | Cytotoxicity test (ISO 10993-5) | No cytotoxic effect. | No cytotoxic effect. |
 | Sensitization test (ISO 10993-10) | No sensitizing properties. | No sensitizing properties. |
 | Irritation/intracutaneous reactivity test (ISO 10993-10 and ISO 10993-23) | No irritant properties. | No irritant properties. |
 | Acute systemic toxicity (ISO 10993-11) | No evidence of systemic toxicity. | No evidence of systemic toxicity. |
 | Material-Mediated Pyrogenicity (ISO 10993-11) | No pyrogenic properties. | No pyrogenic properties. |
 | Genotoxicity (ISO 10993-3) | No genotoxic potential. | No genotoxic potential. |
 | Chemical Characterization and risk assessment (ISO 10993-18 and ISO 10993-17) | Non-toxic. | Non-toxic. |
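The SAL of 10^-6 in the sterilization row follows from the standard first-order inactivation model: each D-value of steam exposure reduces the microbial population tenfold, so log10(survivors) = log10(N0) − t/D. A small sketch with assumed, illustrative numbers (the 510(k) summary reports neither bioburden nor D-values):

```python
import math

def surviving_log10(initial_bioburden: float, exposure_min: float,
                    d_value_min: float) -> float:
    """log10 of expected survivors after steam exposure.

    First-order inactivation: each D-value of exposure gives a tenfold
    reduction. All numbers here are illustrative assumptions.
    """
    return math.log10(initial_bioburden) - exposure_min / d_value_min

# Assumed example: 1e6 CFU bioburden, D121 = 1.5 min.
# Reaching SAL 10^-6 requires a 12-log reduction -> 18 min at 121 degC.
t = 1.5 * (math.log10(1e6) - (-6))
print(t)                              # 18.0 (minutes)
print(surviving_log10(1e6, t, 1.5))   # -6.0, i.e. SAL 10^-6
```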
2. Sample Size Used for the Test Set and Data Provenance
The provided document does not specify the exact sample sizes for the test sets in the dimensional validation, compatibility testing, mechanical testing, or software verification and validation. It only states that these tests were conducted.
The provenance of the data (e.g., country of origin, retrospective or prospective) is not explicitly mentioned for any of the non-clinical tests.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. For instance, in the dimensional validation, it does not state how many individuals assessed the comparisons or their qualifications. For software V&V, it does not specify who conducted the testing or their expertise.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication methods (e.g., 2+1, 3+1) for establishing ground truth or evaluating test results. The conclusions appear to be based on direct measurements and adherence to test standards.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
No, the document does not mention a Multi Reader Multi Case (MRMC) comparative effectiveness study. The studies described are non-clinical hardware and software performance tests, not clinical studies involving human readers or comparative effectiveness with and without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
The software verification and validation activities (e.g., "DISRP meets the SDS and testing as intended in the intended use environment") represent standalone algorithm performance testing to some extent. The DISRP system, as a software for planning and segmentation, operates on input data files and produces output data files, which is a core standalone function. However, the overall system still anticipates "expert clinical judgment" as stated in the Indications for Use, meaning it's intended to be used with a human in the loop for clinical application. The V&V focuses on the software's functional correctness.
7. The Type of Ground Truth Used
The type of ground truth used for the non-clinical tests can be inferred as follows:
- Sterilization: Ground truth is established by adherence to recognized international standards (AAMI/ISO 17665-1, ANSI/AAMI/ISO 14937) and demonstrating the specified SAL.
- Dimensional validation: Ground truth would be the original digital design files against which the scanned physical devices are compared. This is a direct comparison to a digital reference.
- Compatibility testing: Ground truth is defined by the functional requirement for components to work together seamlessly.
- Mechanical testing: Ground truth is established by the specified mechanical properties to be achieved or exceeded, often through direct measurement and comparison with known material properties or predicate device performance.
- Software verification and validation: Ground truth is the Software Design Specification (SDS) and the intended functional requirements of the software.
- Biocompatibility testing: Ground truth is established by the pass/fail criteria defined in the referenced ISO 10993 series standards, indicating the absence of adverse biological reactions.
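For the four-point bending comparison mentioned in the mechanical-testing item, the maximum flexural stress in a rectangular coupon follows the standard formula sigma = 3F(L - Li) / (2 b d^2). A sketch with assumed dimensions (none are given in the summary, so every number below is illustrative):

```python
def four_point_flexural_stress(F: float, L: float, Li: float,
                               b: float, d: float) -> float:
    """Max flexural stress (MPa) for a rectangular beam in four-point bending.

    sigma = 3 * F * (L - Li) / (2 * b * d**2), with F the total load in N,
    L the support span, Li the load span, b the width, and d the thickness,
    all in mm (N/mm^2 == MPa). Values used below are assumptions for
    illustration, not from the TECHFIT test report.
    """
    return 3.0 * F * (L - Li) / (2.0 * b * d ** 2)

# Assumed plate-like coupon: 200 N total load, 40 mm support span,
# 20 mm load span, 10 mm wide, 2 mm thick.
sigma = four_point_flexural_stress(F=200.0, L=40.0, Li=20.0, b=10.0, d=2.0)
print(sigma)  # 150.0 MPa
```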
8. The Sample Size for the Training Set
The document does not provide any information about a training set for an AI/ML algorithm. The device is described as a "software system and image segmentation system," but there's no explicit mention of machine learning or deep learning components requiring a dedicated training set. The development described is more akin to traditional software and CAD/CAM processes.
9. How the Ground Truth for the Training Set was Established
Since no training set is mentioned or implied for an AI/ML component, this information is not applicable and not provided in the document.
(130 days)
DZJ
The tmCMF Solution is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT-based system. The input data file is processed by the tmCMF Solution, and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, surgical guides, and dental splints for use in maxillofacial and mandibular surgery. The surgical guides and dental splints are intended to guide the marking of the maxillofacial and mandibular bone in mandibular surgery. The tmCMF Solution is also intended as a preoperative software tool for simulating/evaluating surgical treatment options.
The TechMah CMF (tmCMF) Solution is a family of personalized product solutions for trauma and reconstruction procedures in the mandible and midface. The solution comprises Surgeon Review Tool (SRT) software, and maxillofacial and mandibular surgical instruments (surgical guides, anatomical models, and dental splints). The surgical instruments are patient-specific devices and are designed utilizing CT and dental scan patient image data.
Surgical guides are patient-specific devices or templates that are based on preoperative software planning and are designed to fit a specific patient. These guides are used to assist a surgeon in transferring the pre-operative plan to the surgery by guiding the marking of bone for plate fixation screws and the position of the osteotomy marking slots. Surgical guides are available for treatment (maxilla and mandible) and harvesting (fibula, iliac crest, and scapula). Guides can be used in conjunction with anatomical models to verify anatomical positioning and fit.
Anatomical models are patient-specific models that are based on pre-operative anatomy and surgical planning specific to a patient. These models are used to assist a surgeon in transferring the pre-operative plan to the surgery by representing preoperative, intra-operative, and post-operative anatomical models as guidance. Anatomical models are available for treatment (maxilla and mandible), and harvesting (fibula, iliac crest, and scapula) anatomy. Anatomical models can be used to check guide fit.
Dental splints are patient-specific devices or templates that are based on preoperative software surgical planning and are designed to fit a specific patient. Dental splints are available as single splint desired occlusions or combined splint-in-splint option. These templates are used to assist a surgeon in transferring the pre-operative plan to the surgery by guiding the dental alignment of bone and teeth.
The Surgeon Review Tool (SRT) software is used by surgeons for the review and approval of surgical plans and surgical instrument designs. The SRT software is accessed through a web interface from a surgeon's device.
The tmCMF Solution's acceptance criteria and the study proving it meets these criteria are detailed below.
1. A table of acceptance criteria and the reported device performance:
Acceptance Criteria Category | Reported Device Performance |
---|---|
Cleaning | Testing was performed to validate the end-user cleaning protocol of the subject device per the FDA Guidance Document "Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling" and AAMI TIR 30. (Passed) |
Sterilization | Testing was performed to validate the end-user sterilization protocol of the subject device per ISO 17665-1, ISO 17665-2, and ANSI/AAMI ST79. (Passed) |
Biocompatibility | Biocompatibility assessment per the FDA Guidance "Use of International Standard ISO 10993-1, "Biological evaluation of medical devices - Part 1: Evaluation and testing within a risk management process" and biocompatibility testing per ISO 10993-1:2018 for its contact classification was conducted. (Ensured biocompatibility) |
Software Verification and Validation | Software Verification and Validation Testing was conducted in accordance with the requirements of IEC 62304:2006/A1:2015, and relevant FDA guidance documents. (Passed) |
Usability | Usability was validated in accordance with IEC 62366-1:2020. (Validated) |
Benchtop Performance - Hardware Performance Verification | Testing demonstrated that the surgical guide and dental splint meets the predetermined acceptance criteria. (Passed) |
Benchtop Performance - Cadaveric Benchtop Performance Testing | Comparison of pre-operative surgical plan to post-operative CT measurements. Results passed the predetermined acceptance criteria, indicating substantial equivalence. (Passed) |
Benchtop Performance - Hardware Integrity Test Verification | Testing demonstrated that the instruments meet the predetermined acceptance criteria. (Passed) |
Benchtop Performance - Hardware Verification Inspection and Analysis | Testing demonstrated that instruments' implementation is compliant with the acceptance criteria. (Compliant) |
Surgical Case Verification | Testing demonstrated that the surgical case reports are compliant with acceptance criteria. (Compliant) |
System Validation | Testing demonstrated that the system has met user needs and is compliant with acceptance criteria. (Compliant) |
2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document mentions "Cadaveric Benchtop Performance Testing" as part of the benchtop performance evaluation. However, it does not specify the sample size (number of cadavers or cases) used for this test set, nor does it explicitly state the country of origin or whether the data was retrospective or prospective. It only states that a comparison of pre-operative surgical plan to post-operative CT measurements was performed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide information on the number of experts used to establish the ground truth for the test set or their specific qualifications. It mentions that the "Surgeon Review Tool (SRT) software is used by surgeons for the review and approval of surgical plans and surgical instrument designs," implying surgeon involvement in the process but not specifically for ground truth establishment for a test set in the regulatory submission context.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any specific adjudication method (such as 2+1 or 3+1) used for establishing ground truth or evaluating the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any effect size related to human reader improvement with or without AI assistance. The tmCMF Solution is described as a software system for image segmentation, surgical planning, and the production of physical outputs, with the Surgeon Review Tool (SRT) facilitating surgeon review and approval, but not explicitly as an AI assistance tool for human readers in a diagnostic setting that would typically involve a MRMC study.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
While the document describes the tmCMF Solution as a "software system and image segmentation system," it also highlights the "Surgeon Review Tool (SRT) software is used by surgeons for the review and approval of surgical plans and surgical instrument designs" and "Physician Interaction with Planning and Physician Model / Guide Approval." This indicates that the device is intended for human-in-the-loop performance, with the surgeon involved in the review and approval process. A standalone algorithm-only performance is not explicitly described as a primary evaluation method.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "Cadaveric Benchtop Performance Testing," the ground truth was established by "Comparison of pre-operative surgical plan to post-operative CT measurements." This suggests that the ground truth for accuracy was based on direct measurement from CT scans after the surgical plan was implemented on cadavers, rather than expert consensus, pathology, or long-term outcomes data.
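A plan-versus-post-operative-CT accuracy check of this kind reduces to comparing corresponding landmark coordinates between the surgical plan and the post-op scan. A minimal sketch with invented coordinates and an assumed 2 mm acceptance limit (the actual criteria and measurements are not disclosed in the summary):

```python
import numpy as np

# Hypothetical landmarks: planned positions from the surgical plan vs.
# the same landmarks measured on the post-operative CT (all in mm).
planned = np.array([[10.0, 5.0, 2.0],
                    [22.0, 7.5, 1.0],
                    [31.0, 4.0, 3.5]])
achieved = np.array([[10.4, 5.3, 2.0],
                     [21.7, 7.5, 1.2],
                     [31.5, 4.4, 3.5]])

# Per-landmark Euclidean deviation, then a pass/fail check against an
# assumed 2 mm limit (illustrative; not the submission's criterion).
deviation = np.linalg.norm(achieved - planned, axis=1)
print(deviation.round(2))      # per-landmark error in mm
print(deviation.max() <= 2.0)  # True
```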
8. The sample size for the training set
The document does not provide any information regarding the sample size used for the training set of the tmCMF Solution's "software system and image segmentation system."
9. How the ground truth for the training set was established
The document does not disclose how the ground truth for the training set was established.
(142 days)
DZJ
TECHFIT Digitally Integrated Surgical Reconstruction Platform (DISRP) System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the DISRP System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including surgical guides and splints for use in maxillofacial surgery. The DISRP System is also intended as a preoperative software tool for simulating / evaluating surgical treatment options.
The DISRP system is compatible with the TECHFIT Patient- Specific Maxillofacial System and the TECHFIT Diagnostic Models and should be used in conjunction with expert clinical judgment.
The TECHFIT DISRP SYSTEM is composed of Orthognathic Surgical Guides and the Digitally Integrated Surgical Reconstruction Platform (DISRP).
Digitally Integrated Surgical Reconstruction Platform (DISRP)
The Digitally Integrated Surgical Reconstruction Platform (DISRP) is a web-based collaboration software for digital surgery case flow management and Orthognathic surgery planning that reflects the production process and allows for the interaction of multiple users: surgeons, sales representatives, and the TECHFIT case planning staff (case planning assistant, case planners, and operations director), using multiple devices. It allows easy collaboration in the planning process. Being web-based allows immediate and convenient sharing without the installation or maintenance of the application at the user's end.
Orthognathic Surgical Guides
Orthognathic Surgical Guides are Patient-Specific, single-use devices that are designed to assist the surgeon in transferring the pre-surgical plan to the operating room. Surgical Guides are intended for Orthognathic surgeries in adults; they have drilling holes and slots for making drill holes and osteotomies, and they guide the correct positioning of bones and implants.
Orthognathic Surgical Guides are divided into two types: Resin Orthognathic Surgical Guides and Machined Orthognathic Surgical Guides.
- Resin Orthognathic Surgical Guides
Resin Orthognathic Surgical Guides include surgical guides and splints.
Surgical Guides consist of the Le Fort and Genioplasty surgical guides, which are composed of a body that is manufactured by TECHFIT Digital Surgery and produced by rapid prototyping with the Form 3B printer and Biomed Clear Resin from Formlabs, Somerville, United States.
In surgery, Surgical Guides must be used with Metal Sleeves. There are three types of metal sleeves: slot metal sleeve, drill metal sleeve and screw metal sleeve. The slot, drill and screw metal sleeves are manually fitted by the healthcare professional during surgery into the slots, drilling holes and fixation holes of the surgical guide. The metal sleeves are produced from commercially pure titanium grade 4 through machining and are manufactured equivalent to the TECHFIT Patient-Specific Maxillofacial System (K203282) and AFFINITY Proximal Tibia System (K220199). In addition, the Metal Sleeves are single-use and patient-specific accessories.
The Splints are optional guides used in orthognathic surgery to guide the correct teeth positioning and to validate the patient's final occlusion. The Splints are manufactured by TECHFIT Digital Surgery and produced by rapid prototyping using the Form 3B printer and Biomed Clear Resin from Formlabs, Somerville, United States.
- Machined Orthognathic Surgical Guides
Machined Orthognathic Surgical Guides consist of the Le Fort and Genioplasty surgical guides that are manufactured from the same material (commercially pure titanium grade 4) and in a manufacturing process equivalent to the TECHFIT Patient-Specific Maxillofacial System (K203282) and AFFINITY Proximal Tibia System (K220199) manufactured through machining.
Machined Orthognathic Surgical Guides and Metal Sleeves are manufactured with the same material and same manufacturing process.
The provided text describes the TECHFIT DISRP® System, including its indications for use, comparison to predicate devices, and performance data from non-clinical testing. However, it does not contain details about a study evaluating the performance of an AI/algorithm component against specific acceptance criteria related to its accuracy or diagnostic capability, nor does it mention any human-in-the-loop studies (MRMC) or standalone algorithm performance studies.
The "Performance Data" section primarily addresses:
- Safety testing: Validating sterilization methods.
- Performance testing: Dimensional validation (comparing physical guides to digital designs), mechanical testing (withstanding forces), and compatibility testing with other systems.
- Software testing: Verification and validation of the DISRP software itself as per IEC 62304.
- Biocompatibility testing: For the materials used in the surgical guides and metal sleeves.
Therefore, based solely on the provided text, I cannot fill out the requested table and details regarding acceptance criteria for an AI/algorithm and a study proving its performance in the context of diagnostic accuracy or reader improvement. The device described appears to be a software system for planning and producing physical surgical guides and splints, not an AI for image interpretation or diagnosis.
If the "DISRP System" inherently contains an AI component for, say, auto-segmentation or surgical plan optimization that would require diagnostic-like performance evaluation, that information is not present in the provided excerpt.
Assuming the request refers to the acceptance criteria for the overall system's performance as described, rather than an AI's diagnostic accuracy:
Here's how I can address the request based on the available information, while highlighting what's not present:
Response based on the provided document:
The provided document describes the TECHFIT DISRP® System, which is a software system and image segmentation system used for surgical planning and the production of physical surgical guides and splints. It is not an AI diagnostic device in the traditional sense that generates clinical interpretations or provides diagnostic classifications. Therefore, the acceptance criteria and study details commonly associated with AI performance validation (like sensitivity, specificity, MRMC studies, ground truth establishment for diagnostic tasks) are not applicable to the data presented in this FDA 510(k) summary.
The performance data presented focuses on the physical and software validation of the device, ensuring it meets its intended use of accurately transferring imaging information to produce dimensionally correct and mechanically sound surgical guides and splints, and that the software functions as intended.
Here's a breakdown based on the provided text, addressing the points where information is available or noting its absence:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (for the system's function/physical output) | Reported Device Performance (from "Performance Data") |
---|---|
Dimensional Validation: Orthognathic surgical guides maintain proportions and dimensions of original digital design after manufacturing and sterilization. | Conclusion: "DISRP and Orthognathic surgical guides maintain the proportions and dimensions of the original digital design after manufacturing and sterilization." (Test Method: "Scanning of orthognathic surgical guides and then comparing them versus the original files, before and after sterilization.") |
Mechanical Testing: Orthognathic surgical guides can withstand maximum bite force, drilling force, and general surgical handling force. | Conclusion: "Orthognathic Surgical Guides withstand bite force and the force applied by the surgeon during handling." (Test Method: "Bending and compressive strength, after sterilization. Evaluating if the orthognathic surgical guides can withstand the maximum bite force, force required for drilling and handle force required for a general surgery.") |
Software Validation: Software meets IEC 62304 standards for verification and validation. | Conclusion: "DISRP software was verified and validated as per IEC 62304." (Test Method: "Software verification and validation activities") |
Sterilization Efficacy: Achieves a Sterility Assurance Level (SAL) of 10^-6 for steam sterilization. | Conclusion: "The results of the steam sterilization validation show that TECHFIT Orthognathic Surgical Guides sterilized to a SAL of 10^-6 using the recommended steam sterilization instructions." (Test Method: AAMI/ISO 17665-1:2006/(R)2013 and ANSI/AAMI/ISO 14937:2009/(R)2013) |
Biocompatibility (Resin Guides): No cytotoxic, sensitizing, irritant, acute systemic toxicity, pyrogenic, or genotoxic properties. Non-toxic chemical characterization. | Conclusion: All biocompatibility tests (Cytotoxicity, Sensitization, Irritation, Acute systemic toxicity, Pyrogenicity, Chemical Characterization) were concluded with "No..." or "Non-toxic" findings in compliance with respective ISO standards (ISO 10993-5, 10993-10, 10993-11, 10993-18, 10993-17). |
Biocompatibility (Machined Guides/Sleeves): No cytotoxic, sensitizing, irritant, acute systemic toxicity, pyrogenic, or genotoxic properties. Non-toxic chemical characterization. | Conclusion: All biocompatibility tests (Cytotoxicity, Sensitization, Irritation, Acute systemic toxicity, Pyrogenicity, Genotoxicity, Chemical Characterization) were concluded with "No..." or "Non-toxic" findings in compliance with respective ISO standards (ISO 10993-5, 10993-10, 10993-11, 10993-3, 10993-18, 10993-17). |
Compatibility: Compatible with the Maxillofacial System and Metal Sleeves. | Conclusion: "Orthognathic Surgical Guides are compatible with the Maxillofacial System and Metal Sleeves." (Test Method: "Verifying compatibility between the Maxillofacial System and orthognathic surgical guides") |
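The dimensional validation in the table above amounts to a surface-deviation measurement: the manufactured guide is scanned and the scan is compared point-by-point against the original design file. A minimal sketch of such a comparison, assuming both surfaces are available as 3D point sets (the point data and the 0.5 mm tolerance are illustrative, not values stated in the 510(k) summary):

```python
import numpy as np

def surface_deviation(design_pts, scan_pts):
    """Nearest-neighbour distance from each scanned point to the design point set.

    Brute-force pairwise distances; a real scan comparison would use a KD-tree
    or a mesh-based distance, but the reported metric is the same.
    """
    diffs = scan_pts[:, None, :] - design_pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=2).min(axis=1)
    return dists.max(), dists.mean()

rng = np.random.default_rng(0)
design = rng.random((200, 3))  # stand-in for design-file surface points (mm)
scan = design + 0.01           # stand-in for a post-sterilization scan, offset 0.01 mm per axis
d_max, d_mean = surface_deviation(design, scan)

# Illustrative pass/fail check; the actual tolerance is not stated in the summary.
assert d_max <= 0.5
```

The before/after-sterilization comparison described in the test method would simply run this twice, once per scan, against the same design file.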
Details Regarding Studies (as pertains to this type of device):
- Sample sizes used for the test set and the data provenance:
  - The document mentions "Scanning of orthognathic surgical guides" and "Bending and compressive strength" tests but does not specify the sample size (number of guides or tests performed) for these non-clinical performance validations.
  - The data provenance is not explicitly stated in terms of country of origin for test materials or whether tests were retrospective or prospective, beyond general statements about testing performed for FDA submission. These are typical engineering validation tests, not clinical trials.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Not applicable for this type of device validation. The "ground truth" here is based on engineering specifications, material properties, and adherence to digital designs, not expert clinical interpretation of images.
- Adjudication method for the test set:
  - Not applicable. This is not a human-reader study requiring adjudication for diagnostic accuracy.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
  - No. An MRMC study was not described. This device is for surgical planning and guide production, not for AI-assisted diagnostic interpretation by human readers.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
  - The "Software testing" indicates that the "DISRP software was verified and validated as per IEC 62304," a standard for medical device software lifecycle processes. This constitutes a standalone validation of the software's functionality and safety, but not a performance evaluation in terms of diagnostic accuracy or interpretation, as it is not a diagnostic AI.
- The type of ground truth used:
  - The ground truth for the dimensional validation is the original digital design files of the surgical guides.
  - For mechanical testing, the ground truth is the engineering specifications for load-bearing capacity (e.g., maximum bite force, drilling force).
  - For software validation, the ground truth is the specified software requirements and the IEC 62304 standard.
  - For biocompatibility, the ground truth is the ISO 10993 series of standards for biological evaluation of medical devices.
- The sample size for the training set:
  - Not applicable. This document does not describe a machine learning algorithm that requires a training set. The "DISRP System" is described as a "software system and image segmentation system for the transfer of imaging information," implying rule-based or conventional image processing for segmentation rather than a data-driven AI model that learns from large training datasets.
- How the ground truth for the training set was established:
  - Not applicable, as no training set for an AI model is described.
(159 days)
DZJ
The MedCAD® AccuPlan® System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the MedCAD® AccuPlan® System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, surgical guides, and dental splints for use in maxillofacial surgery. The surgical guides and dental splints are intended to be used for the maxillofacial bone in maxillofacial surgery. The MedCAD® AccuPlan® System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options.
The MedCAD® AccuPlan® System is a collection of software and associated additive manufacturing equipment intended to provide a variety of outputs to support orthognathic or reconstructive surgery. The system uses electronic medical images of the patient's anatomy or stone castings made from the patient anatomy with input from the physician, to manipulate original patient images for planning and executing surgery. The patient specific outputs from the system includes anatomical models, dental splints, surgical guides, and patient-specific case reports.
Following the MedCAD® Quality System and specific Work Instructions, trained employees utilize Commercial Off-The-Shelf (COTS) software to manipulate 3-D medical scan images which can include Computed Tomography (CT), Cone Beam CT (CBCT), and/or 3-D scan images from patient physical models (stone models of the patient's teeth) to create patient-specific physical and digital outputs. The process requires clinical input and review from the physician during planning and prior to delivery of the final outputs. While the process and dataflow vary somewhat based on the requirements of a given patient and physician, the following description outlines the functions of key sub-components of the system, and how they interact to produce the defined system outputs. It should be noted that the system is operated only by trained MedCAD employees, and the physician does not directly input information. The physician provides input for model manipulation and interactive feedback through viewing of digital models of system outputs that are modified by the engineer during the planning session.
The MedCAD® AccuPlan® System is made up of 4 individual pieces of software for the design and various manufacturing equipment integrated to provide a range of anatomical models (physical and digital), dental splints, surgical guides, and patient-specific planning reports for reconstructive surgery in the maxillofacial region.
The MedCAD® AccuPlan® System requires an input 3-D image file from medical imaging systems (i.e. - CT) and/or implant file. This input is then used, with support from the prescribing physician to provide the following potential outputs to support reconstructive surgery. Each system output is designed with physician input and reviewed by the physician prior to finalization. All outputs are used only with direct physician involvement to reduce the criticality of the outputs.
System outputs include:
- Anatomical Models
- Surgical Guides
- Dental Splints
- Patient-Specific Case Reports
The purpose of this submission was to add titanium cutting / drilling guides to the family of available patient specific outputs. Cutting and drilling instruments can only be used with titanium cutting / drilling guides. Polymer guides are to be used for marking and positioning of anatomy only.
The MedCAD® AccuPlan® System is cleared by the FDA as a software and image segmentation system for maxillofacial surgery. The primary purpose of this specific submission (K223024) was to add titanium cutting/drilling guides to the family of available patient-specific outputs, which were not part of the previous K192282 clearance.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA clearance relies on performance testing to demonstrate substantial equivalence, particularly concerning the new titanium cutting/drilling guides. The document highlights two key performance tests:
Test | Acceptance Criteria | Reported Device Performance |
---|---|---|
Wear Debris Testing | The wear debris generated by the subject device must be less than what is reported in the literature to be safe. | PASS: The wear debris generated by the subject device is less than that reported in the literature to be safe. |
Fit and Form Validation | All physical samples must meet predetermined alignment and fit acceptance criteria when optically scanned and fitted to a representative anatomical model. | PASS: All samples met the predetermined acceptance criteria (alignment with 3D model, and fit on anatomical model). |
The document also mentions:
- Sterilization Validation: In accordance with ISO 17665 and FDA guidance, to a Sterility Assurance Level (SAL) of 1x10^-6. All test method acceptance criteria were met.
- Biocompatibility Validation: In accordance with ISO 10993-1 and FDA guidance. Results adequately address biocompatibility for the output devices and their intended use.
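The SAL of 1x10^-6 cited for the sterilization validation reflects the standard "overkill" approach: assume a worst-case population of 10^6 resistant spores and demonstrate a 12-log reduction. A back-of-the-envelope sketch of the resulting exposure-time arithmetic (the D-value below is illustrative; actual cycle parameters are not given in the summary):

```python
def overkill_exposure_minutes(d_value_min, initial_log=6.0, target_sal_log=-6.0):
    """Exposure time for a given spore log reduction at constant temperature.

    d_value_min is the biological indicator's D-value (minutes per 1-log kill)
    at the exposure temperature. Going from 10^6 organisms down to an SAL of
    10^-6 requires a 12-log reduction, the classic overkill target.
    """
    log_reduction = initial_log - target_sal_log  # 6 - (-6) = 12 logs
    return log_reduction * d_value_min

# Illustrative: an indicator with D121 = 1.5 min needs 18 min for 12 logs.
exposure = overkill_exposure_minutes(1.5)
```

Validations to ISO 17665-1 demonstrate this reduction empirically with biological indicators rather than by calculation alone; the arithmetic only sets the target.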
2. Sample Size Used for the Test Set and Data Provenance
- Wear Debris Testing: "Cutting / drilling instruments were used on a worst-case titanium surgical guide." The sample size is not explicitly stated, though the description implies at least one worst-case guide was tested.
- Fit and Form Validation: "Subject devices from historical cases were manufactured." The sample size is not explicitly stated beyond "All samples met the predetermined acceptance criteria." The provenance is implied to be retrospective as it uses "historical cases." The country of origin of the data is not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
The document does not mention the use of experts to establish a "ground truth" in the traditional sense for these performance tests. The ground truth for the device's physical outputs (surgical guides) appears to be derived from engineered specifications and objective measurements (optical scanning, fit to master models). The system relies on trained MedCAD employees and physician input for planning, but this is part of the operational workflow rather than a ground truth establishment process for performance testing.
4. Adjudication Method for the Test Set
Not applicable. The performance testing described (wear debris, fit and form) does not involve human readers or a need for adjudication in the context of diagnostic agreement. It's a technical validation of physical properties.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, What Was the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
No MRMC comparative effectiveness study was mentioned or performed. This device is described as a software system for planning and manufacturing physical outputs (guides, splints, models), which are then used in surgery, rather than an AI-based diagnostic tool that directly assists human readers in interpreting medical images for diagnosis. The system is operated by "trained MedCAD employees" and involves "clinical input and review from the physician during planning," but it's not described as an AI assistance tool for human interpretation of images.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
The performance tests described (wear debris and fit/form) evaluate the physical characteristics and dimensional accuracy of the manufactured outputs, which are direct results of the system's (algorithm's) processing and manufacturing. In that sense, aspects of "standalone" performance of the physical output are assessed. However, the system is explicitly stated as requiring "clinical input and review from the physician" for planning, meaning it's generally a human-in-the-loop system in its intended use, rather than a fully autonomous diagnostic algorithm.
7. The Type of Ground Truth Used
The ground truth for the performance tests appears to be:
- Engineered Specifications/Design Accuracy: For Fit and Form Validation, the manufactured devices are compared to the "3D model" (the digital design generated by the system based on patient imaging). This implies the 3D model itself serves as the ground truth for ideal form and alignment.
- Literature-based Safety Thresholds: For Wear Debris Testing, the acceptance criterion is quantitative: less than "that reported in the literature to be safe." This indicates a ground truth derived from existing scientific literature on safe levels of wear debris from similar materials/applications.
8. The Sample Size for the Training Set
The document does not provide information about a "training set" or "training data" for the MedCAD® AccuPlan® System. This suggests that the system's functionality is not based on a machine learning model that requires a training phase with labeled data in the way many AI/ML medical devices do. It appears to be a rule-based or engineering-based software for image processing, segmentation, and design for manufacturing.
9. How the Ground Truth for the Training Set was Established
Since no training set is mentioned (refer to point 8), this question is not applicable.
(234 days)
DZJ
EmbedMed is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the system, and the result is an output data file. This file may then be provided as digital models or used as an input to an additive manufacturing portion of the system. The additive manufacturing portion of the system produces physical outputs including anatomical models and surgical guides for use in maxillofacial surgeries. EmbedMed is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.
EmbedMed utilizes Commercial Off-The-Shelf (COTS) software to manipulate 3D medical images to create digital and additive manufactured, patient-specific physical anatomical models and surgical guides for use in surgical procedures. Imaging data files are obtained from the surgeons for treatment planning and various patient-specific products that are manufactured with biocompatible photopolymer resins using additive manufacturing (stereolithography).
The provided text describes the 3D LifePrints UK Ltd. EmbedMed device (K220366), an image segmentation software and additive manufacturing system for creating patient-specific anatomical models and surgical guides for maxillofacial surgeries.
Here's an analysis of the acceptance criteria and the study conducted:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA 510(k) summary does not explicitly present a table of quantitative acceptance criteria with corresponding performance metrics for the EmbedMed device in terms of clinical accuracy (e.g., sensitivity, specificity, or deviation). Instead, the performance data presented is focused on demonstrating the physical and functional aspects of the manufactured outputs and their compliance with general medical device standards.
However, based on the provided text, we can infer some "acceptance criteria" through the verification and validation testing performed. These are more general compliance points rather than precise numerical performance targets for the AI component's diagnostic accuracy.
Acceptance Criteria Category | Stated Verification/Validation/Performance |
---|---|
Biocompatibility | EmbedMed meets the requirements of ISO 10993-1:2018, ISO 14971:2019, and FDA Guidance Document Use of International Standard ISO 10993-1:2016 for short term (≤ 24 hours) contact with tissue and bone. Tested endpoints: cytotoxicity, sensitization, acute systemic toxicity, material-mediated pyrogenicity. |
Sterilization Validation (End User) | Sterilization process validated to a sterility assurance level (SAL) of 10^-6 using the over-kill method according to ISO 17665-1:2006. Drying time validation also conducted. |
Functional/System Performance (Software & Manufacturing) | Installation, Operational, and Performance Qualification (IO/PQ) confirmed. |
Dimensional Accuracy (Physical Outputs) | Verified that physical outputs (anatomical models, surgical guides) meet dimensional accuracy requirements across the range of possible patient-specific devices. |
Feature Accuracy (Physical Outputs) | Verified that physical outputs meet feature accuracy requirements across the range of possible patient-specific devices. |
Simulated Use Testing | Performed to confirm EmbedMed physical outputs meet requirements. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state a specific "test set" for evaluating the performance of the image segmentation system in terms of diagnostic accuracy or clinical utility. The testing described focuses on the physical outputs and system performance rather than the AI's ability to accurately segment anatomical structures on a test dataset.
- Sample Size for Test Set: Not explicitly stated for the image segmentation component. The "Verification and Validation Testing" indicates testing was performed "across the range of possible patient-specific devices," implying a variety of cases were used for dimensional and feature accuracy, but not a specific count or dataset description.
- Data Provenance: Not specified. The input imaging information is stated to come from "a medical scanner such as a CT based system." There is no mention of country of origin or whether data was retrospective or prospective for any internal testing of the image segmentation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. The study described does not involve establishing ground truth for image segmentation using expert consensus for a test set, nor does it quantify the performance of the image segmentation algorithm in terms of accuracy against such a ground truth. The "ground truth" for the physical outputs (dimensional and feature accuracy) would likely be based on the digital models generated by the system and engineering specifications, not expert clinical interpretation.
4. Adjudication Method for the Test Set
This information is not provided. As no specific test set for image segmentation performance against expert ground truth is described, an adjudication method is not mentioned.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, What Was the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document explicitly states under "5.8.4. Clinical Studies": "Clinical testing was not necessary for the demonstration of substantial equivalence." The focus was on the substantial equivalence of the system and the physical outputs, not on comparing reader performance with and without AI assistance.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
A standalone performance evaluation of the image segmentation algorithm in terms of its accuracy (e.g., Dice score, Hausdorff distance, etc.) against a clinical ground truth is not explicitly described or provided. The system is described as a "software system and image segmentation system," but its performance metrics as an algorithm by itself are not detailed. The digital output is "reviewed and approved by the prescribing clinician prior to delivery of the final outputs," indicating a human-in-the-loop workflow.
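For reference, the Dice score mentioned above is the standard overlap metric such a standalone segmentation evaluation would report. A minimal sketch of the computation (the masks below are synthetic examples; no such evaluation appears in the submission):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity of two binary masks: 2*|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True                 # 16 "voxels" of reference segmentation
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:4] = True                  # 8 voxels, all inside the reference
score = dice_coefficient(pred, truth)  # 2*8 / (8 + 16)
```

A score of 1.0 indicates perfect overlap; an algorithm-only evaluation would report this against expert-annotated reference masks, which is exactly what the submission does not describe.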
7. The Type of Ground Truth Used
For the "Feature Accuracy" and "Dimensional Accuracy" validation, the ground truth would likely be the digital design files generated by the EmbedMed software itself, against which the physical 3D-printed outputs are compared. For the biocompatibility and sterilization validation, the ground truth is established by international standards (ISO) and FDA guidance documents.
There is no mention of ground truth derived from expert consensus, pathology, or outcomes data for the performance of the image segmentation component of the software.
8. The Sample Size for the Training Set
The document does not provide any information about a training set size for the image segmentation software. Given that the software is described as utilizing "Commercial Off-The-Shelf (COTS) software to manipulate 3D medical images," it is possible that the underlying segmentation algorithms were developed and trained externally or are based on traditional image processing techniques rather than a large, custom-trained deep learning model.
9. How the Ground Truth for the Training Set Was Established
Since information regarding a specific training set is not provided, how its ground truth was established is also not mentioned.
(157 days)
DZJ
The OMF ASP System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the OMF ASP System, and the result is an output data file that may then be provided as digital models or used as input to an additive manufacturing portion of the system that produces physical outputs including anatomical models, templates, and surgical guides for use in maxillofacial surgery. The OMF ASP System is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.
The Oromaxillofacial Advanced Surgical Planning (OMF ASP) System utilizes Commercial Off-the-Shelf (COTS) software to manipulate 3D medical images (CT-based systems) with surgeon input, and to produce digital and physical patient specific outputs including surgical plans, anatomic models, templates, and surgical guides for planning and performing maxillofacial surgeries.
The system utilizes medical imaging, such as CT-based imaging data of the patient's anatomy, to create surgical plans with input from the physician to inform surgical decisions and guide the surgical procedure. The system produces a variety of patient-specific outputs for the maxillofacial region, including anatomic models (physical and digital), physical surgical templates and/or guides, and patient-specific case reports. The system utilizes additive manufacturing to create patient-specific guides and anatomical models.
The provided text describes the OMF ASP system and its substantial equivalence to a predicate device but does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and the study proving it.
Specifically, the document states:
- "All acceptance criteria for design validation were met."
- "All acceptance criteria for performance testing were met."
- "All acceptance criteria for software verification testing were met."
- "All acceptance criteria for the cleaning validation were met."
- "All acceptance criteria for the steam sterilization validation were met."
- "All acceptance criteria for biocompatibility were met and the testing adequately addresses biocompatibility for the output devices and their intended use."
However, it does not explicitly state what those specific acceptance criteria were (e.g., quantifiable metrics like accuracy, sensitivity, specificity, or specific error margins for measurements). It also does not provide the reported device performance in measurable terms against those criteria.
Therefore, I can only provide a partial answer based on the available information.
1. A table of acceptance criteria and the reported device performance
Based on the provided text, specific quantifiable acceptance criteria and reported device performance (e.g., specific accuracy percentages, dimensions, etc.) are not detailed. The document only broadly states that "All acceptance criteria... were met" for various validation tests.
Acceptance Criteria Category | Nature of Acceptance Criteria (as implied) | Reported Device Performance (as implied) |
---|---|---|
Design Validation | Device designs conform to user needs and intended use for maxillofacial surgeries, identical to predicate in indications, design envelope, worst-case configuration, and post-processing conditions. | All acceptance criteria were met. |
Performance Testing | Manufacturing process assessment; operator repeatability within the digital workflow; digital and physical outputs verified against design specifications. | All acceptance criteria were met. |
Software Verification | Compliance with FDA guidance for software in medical devices (Moderate Level of Concern). | All acceptance criteria were met. |
Cleaning Validation | Bioburden, protein, and hemoglobin levels within acceptable limits post-cleaning (in accordance with AAMI TIR 30). | All acceptance criteria were met. |
Sterilization Validation | Sterility assurance level (SAL) of 10^-6 achieved for dynamic-air-removal cycle (in accordance with ISO 17665-1). | All acceptance criteria were met. |
Biocompatibility Testing | Compliance with ISO 10993-1, -5, -10, -11 requirements for biological safety. | All acceptance criteria were met. |
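Steam sterilization validations of the kind cited above (ISO 17665-1, SAL 10^-6, dynamic-air-removal cycle) conventionally track accumulated lethality F0: the exposure expressed as equivalent minutes at a 121.1 degC reference with z = 10 degC. A generic sketch of that calculation, assuming a temperature profile sampled at fixed intervals (the profile below is illustrative; the summary gives no cycle data):

```python
def f0_minutes(temps_c, dt_min=1.0, t_ref=121.1, z=10.0):
    """Accumulated lethality F0 for a temperature profile sampled every dt_min minutes.

    Each sample contributes dt_min * 10^((T - t_ref)/z) equivalent minutes at
    the 121.1 degC reference temperature (z = 10 degC for moist heat).
    """
    return sum(dt_min * 10 ** ((t - t_ref) / z) for t in temps_c)

# Illustrative: 15 one-minute samples at exactly 121.1 degC give F0 = 15.
lethality = f0_minutes([121.1] * 15)
```

Samples above the reference temperature contribute more than their clock time (e.g., one minute at 131.1 degC counts as 10 equivalent minutes), which is why short high-temperature cycles can deliver the same lethality as longer cooler ones.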
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document mentions "Cases used for testing were representative of the reconstruction procedures within the subject device's intended use" for performance testing, but does not specify the sample size used for any test sets, nor the data provenance (country of origin, retrospective/prospective nature).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide any information regarding the number or qualifications of experts used to establish ground truth for any test sets.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not provide any information regarding an adjudication method for test sets.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not conducted or reported. The document states, "No clinical data were provided in order to demonstrate substantial equivalence." The device is described as a "software system and image segmentation system for the transfer of imaging information" and a "pre-operative software tool for simulating/evaluating surgical treatment options," indicating it's primarily a planning and manufacturing aid, not an AI diagnostic tool intended to assist human readers in interpretation that would typically necessitate an MRMC study.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The document describes the OMF ASP System as utilizing "Commercial Off-the-Shelf (COTS) software to manipulate 3D medical images (CT-based systems) with surgeon input," and notes that its primary function is to "produce digital and physical patient specific outputs including surgical plans, anatomic models, templates, and surgical guides." It also mentions "operator repeatability within the digital workflow" during performance testing. This suggests a human-in-the-loop process for surgical planning and model generation, rather than a standalone algorithm performance evaluation. However, the exact nature of the "performance testing" to verify digital and physical outputs might imply some level of standalone assessment against design specifications, but this is not explicitly detailed as an "algorithm only" performance study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used for any of the validation processes. It refers to "design specifications" against which outputs were verified, but the origin of these specifications (e.g., derived from expert consensus, anatomical measurements, etc.) is not detailed.
8. The sample size for the training set
The document does not mention or provide any information about a training set or its sample size. This is consistent with the description of the system using "Commercial Off-the-Shelf (COTS) software" rather than a custom-developed AI algorithm that would typically require a specific training phase.
9. How the ground truth for the training set was established
Since no training set is mentioned (see point 8), this information is not applicable and not provided.
(179 days)
DZJ
The OsteoPlan™ System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the OsteoPlan™ System and the result is an output data file that may then be provided as digital models or used as input to the additive manufacturing portion of the system that produces physical outputs including anatomic models and splints for use in maxillofacial surgery. The OsteoPlan System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options.
OsteoMed uses computer aided modeling to assist the physician with planning complex maxillofacial surgeries. Specifically, the OsteoPlan™ System provides patient-specific anatomical models, splints, and patient-specific surgical plans and digital files of the surgical plan to assist physicians with maxillofacial surgeries. Outputs of the OsteoPlan™ System are designed with physician input and reviewed by the physician prior to finalization and distribution. All outputs are manufactured by OsteoMed using additive manufacturing (SLS and SLA), only with direct physician involvement to reduce the criticality of the outputs.
The system uses electronic medical images of the patient anatomy (CT and CBCT) with input from the physician to create the plan and splints for executing surgery. Off-the-shelf (OTS) software is used for surgical planning.
The outputs of the system include Orthognathic Occlusal Splints, Case Reports, and Anatomic models. The splints are offered in commonly used forms, in both intermediate and final positioning, and some are available with ligature holes.
Case reports are digital and physical documents created to lay out the surgical plan, dictated by the surgeon, and show outputs of the OsteoPlan™ system that will be used to translate the plan during surgery.
Anatomic models are tools provided to physicians for complex anatomy visualization or to preplan surgery with an accurate physical representation of patient anatomy. Anatomic models may include maxilla, mandible, or skull models.
This Premarket Notification (510(k)) summary for the OsteoPlan System does not include specific details on acceptance criteria and device performance in the format requested. The document focuses on establishing substantial equivalence to predicate devices through a comparison of technological characteristics and a summary of non-clinical testing.
Here's what can be extracted and what information is not present based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not explicitly provided in the given text in a quantifiable table format for the OsteoPlan System's primary functions (e.g., accuracy of segmentation, precision of surgical planning). The document generally states that "All testing passed" or "all acceptance criteria being met" for various validations, but it does not detail what those criteria were or the specific performance metrics achieved.
2. Sample Size Used for the Test Set and Data Provenance
This information is not explicitly provided for the software's core functionality (image segmentation, surgical planning).
- For the "Cadaver Study," a "simulated use" study was conducted, indicating a test set was used, but its size and specific provenance (e.g., number of cadavers, country of origin) are not mentioned. The study is prospective in nature as it verified the functionality of the design.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided. The document states that "Outputs of the OsteoPlan™ System are designed with physician input and reviewed by the physician prior to finalization and distribution," implying expert involvement in the design and review process, but it doesn't specify how many experts or their qualifications for establishing ground truth in a formal validation test set.
4. Adjudication Method for the Test Set
This information is not provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
This information is not mentioned. There is no indication of a comparative effectiveness study comparing human readers with and without AI assistance.
6. Standalone Performance
The document mentions "Software Validation and documentation for software of moderate level of concern was provided per the FDA Guidance Document 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices' and 'Off-The-Shelf Software Use in Medical Devices.' All software verification/validation passed." This indicates that standalone testing of the software was performed, but the specific standalone performance metrics (e.g., accuracy, precision) are not detailed. It states "The OsteoPlan™ System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner... The input data file is processed by the OsteoPlan™ System and the result is an output data file," implying standalone operation.
7. Type of Ground Truth Used
For the software's core functions (segmentation, planning), the specific type of ground truth (e.g., expert consensus on anatomies, pathology reports) used for validation is not explicitly stated. Given the pre-operative planning nature, it would likely involve expert consensus or established anatomical landmarks.
8. Sample Size for the Training Set
This information is not provided. The document does not mention a training set, which would be expected for an AI/ML model. However, the OsteoPlan System is described as using "Off-the-shelf (OTS) software" for surgical planning, suggesting it may be an adaptation or integration of existing tools rather than a completely novel AI model requiring extensive de novo training data for its core algorithms.
9. How the Ground Truth for the Training Set Was Established
This information is not provided, as no training set is mentioned.
Summary of available information regarding acceptance criteria and performance:
The document broadly states that various non-clinical tests were conducted and "all acceptance criteria being met" or "All testing passed." These tests cover:
- Equipment/Process Qualification (IQ/OQ/PQ)
- Software Validation (for moderate level of concern)
- Cleaning Validations
- Steam Sterilization Validation
- Biocompatibility testing (cytotoxicity, sensitization, irritation, acute toxicity, pyrogenicity, subchronic toxicity, implantation) for worst-case splint and anatomical model
- Packaging Validation
- Shelf Life (functional testing)
- Cadaver Study (simulated use to verify design functionality)
Crucially, the document does not provide the specific quantifiable acceptance criteria or the numerical results of performance tests for the software's primary functions of image segmentation or surgical planning accuracy. Instead, it relies on a general statement of "all testing passed" and "performance equivalence was shown through the verification comparison to the predicate device" to establish substantial equivalence.
(403 days)
DZJ
CenterMed Patient Matched Assisted Surgical Planning (ASP) System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the ASP system and the result is an output data file. This file may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, surgical splints and surgical planning case reports for use in maxillofacial surgery. CenterMed Patient Matched ASP System is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.
CenterMed Patient Matched Assisted Surgical Planning (ASP) System is a combination of software design and additive manufacturing for customized virtual pre-surgical treatment planning in maxillofacial reconstruction and orthognathic surgeries. The system processes patients' imaging data files obtained from the surgeons for treatment planning and outputs various patient-specific products (both physical and digital), including surgical guides, anatomical models, surgical splints, and surgical planning case reports. The physical products (surgical guides, anatomical models and surgical splints) are manufactured with biocompatible polyamide (PA-12) using additive manufacturing (Selective Laser Sintering).
The CenterMed Patient Matched Assisted Surgical Planning (ASP) System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner (e.g., CT) to produce digital models or physical outputs like anatomical models, surgical guides, surgical splints, and surgical planning case reports for maxillofacial surgery. It also serves as a pre-operative software tool for simulating/evaluating surgical treatment options.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document details non-clinical performance data (mechanical, biocompatibility, sterilization, software validation) as supportive evidence for substantial equivalence, rather than a direct clinical performance study with predefined acceptance criteria for sensitivity, specificity, etc. The acceptance criteria for these tests are largely based on meeting established ISO/ASTM standards or demonstrating compliance with pre-defined requirements.
| Test Performed | Test Description/Guidelines | Acceptance Criteria | Reported Device Performance | Safety and Efficacy Confirmed |
|---|---|---|---|---|
| Mechanical Testing | ISO 178:2019, ISO 20795-2:2013 | Maintain 85% of initial bending strength (for sterilized and aged test specimens). | Met the pre-defined acceptance criteria. | Yes |
| Mechanical Testing | ISO 20753:2018 | Smaller test specimens for tensile testing designed according to standard. | Test specimens designed according to standard. | Yes |
| Mechanical Testing | ASTM D638 | Larger test specimens for tensile testing designed according to standard. | Test specimens designed according to standard. | Yes |
| Mechanical Testing | ISO 527-2:2012 | Maintain 85% of initial tensile strength (for sterilized and aged test specimens). | Met the pre-defined acceptance criteria. | Yes |
| Cytotoxicity (Biocompatibility) | ISO 10993-5, GB/T 16886.5-2017 | No evidence of the test specimen causing cell lysis or toxicity. | Showed no evidence of cell lysis or toxicity. | Yes |
| Sensitization (Biocompatibility) | ISO 10993-10, GB/T 16886.10-2017 | No evidence of causing delayed dermal contact sensitization. | Showed no evidence of causing delayed dermal contact sensitization. | Yes |
| Intracutaneous Reactivity (Biocompatibility) | ISO 10993-10, GB/T 16886.10-2017 | No evidence of intra-cutaneous reactivity. | Showed no evidence of intra-cutaneous reactivity. | Yes |
| Acute Systemic Toxicity (Biocompatibility) | ISO 10993-11, GB/T 16886.11-2011 | No mortality or evidence of systemic toxicity. | Showed no mortality or evidence of systemic toxicity. | Yes |
| Pyrogenicity (Biocompatibility) | USP , ISO 10993-11 | Met requirements for the absence of pyrogens. | Met the requirements for the absence of pyrogens. | Yes |
| Sterilization Validation | ANSI/AAMI/ISO 17665-1 | Assurance of sterility of 10⁻⁶ SAL (sterility assurance level) for surgical guides, surgical splints and anatomical models. | Demonstrated assurance of sterility of 10⁻⁶ SAL. | Yes |
| Software Validation | Pre-defined requirement specifications | All software requirements and specifications implemented correctly and completely, traceable to system requirements; conformity with pre-defined specifications and acceptance criteria; mitigation of potential risks. | All COTS software applications are FDA cleared. Quality and on-site user acceptance testing confirmed correct and complete implementation of requirements, traceability, conformity with specifications, and risk mitigation. | Yes |
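The only quantitative acceptance criterion in the table, maintaining 85% of initial strength after sterilization and aging, reduces to a simple retained-strength ratio check. A minimal sketch with entirely hypothetical specimen values (none are reported in the submission):

```python
def retained_fraction(initial_mpa: float, aged_mpa: float) -> float:
    """Fraction of the initial strength retained after sterilization and aging."""
    return aged_mpa / initial_mpa


def meets_criterion(initial_mpa: float, aged_mpa: float, threshold: float = 0.85) -> bool:
    """Pass/fail against a 'maintain 85% of initial strength' style criterion."""
    return retained_fraction(initial_mpa, aged_mpa) >= threshold


# Hypothetical PA-12 specimen strengths (MPa), before vs. after sterilization + aging
print(meets_criterion(80.0, 70.0))  # 70/80 = 0.875 retained -> True
print(meets_criterion(80.0, 65.0))  # 65/80 = 0.8125 retained -> False
```

The same check applies to both the bending-strength (ISO 178) and tensile-strength (ISO 527-2) rows; only the measured quantity differs.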
2. Sample Size Used for the Test Set and Data Provenance
The document does not describe a clinical study with a "test set" in the traditional sense of evaluating algorithm performance against ground truth on patient data. Instead, it details non-clinical tests for the physical and software components.
- Mechanical Testing: Test specimens were used for bending and tensile testing. The specific number of specimens is not provided, but they were sterilized and aged to evaluate material properties.
- Biocompatibility Testing: Test specimens (presumably of the device materials) were used for cytotoxicity, sensitization, intracutaneous reactivity, acute systemic toxicity, and pyrogenicity tests. The specific number of samples for each test is not detailed.
- Sterilization Validation: Not specified, but sufficient samples would be used to demonstrate a sterility assurance level of 10⁻⁶.
- Software Validation: This involved "Quality and on-site user acceptance testing" based on pre-defined requirement specifications. It does not refer to a data set for algorithm evaluation, but rather to the internal validation of the software development process and functionality.
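For context on the 10⁻⁶ SAL figure: under the standard semi-log inactivation model used in steam-sterilization validation, the SAL translates into a required number of spore-log reductions, and hence a minimum exposure time for a given D-value. A minimal sketch with hypothetical cycle parameters (the submission reports no such numbers):

```python
import math


def required_log_reductions(bioburden_cfu: float, sal: float = 1e-6) -> float:
    """Spore-log reductions needed to drive the initial bioburden down to the SAL."""
    return math.log10(bioburden_cfu) - math.log10(sal)


def required_exposure_minutes(d_value_min: float, bioburden_cfu: float,
                              sal: float = 1e-6) -> float:
    """Exposure time = D-value x required log reductions (semi-log kinetics)."""
    return d_value_min * required_log_reductions(bioburden_cfu, sal)


# Hypothetical cycle: D121 = 1.5 min, initial bioburden of 10^6 CFU
print(required_log_reductions(1e6))         # ~12 log reductions
print(required_exposure_minutes(1.5, 1e6))  # ~18 minutes of exposure
```

Actual validations per ANSI/AAMI/ISO 17665-1 involve biological indicators and worst-case load configurations; this arithmetic only illustrates what the 10⁻⁶ target means.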
The data provenance is from internal testing performed by the manufacturer or their designated testing facilities, rather than clinical patient data. The country of origin of the data is not specified directly, but the company is based in Walnut Creek, California, USA, and the tests refer to international standards (ISO, ASTM, USP). The studies were conducted specifically for the purpose of this 510(k) submission (prospective in the context of regulatory filing).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable. This device is not an AI/ML diagnostic or prognostic tool that requires expert-established ground truth on a test set of patient cases for performance evaluation. The "ground truth" for the non-clinical tests refers to the established scientific standards and criteria outlined by ISO, ASTM, and USP.
4. Adjudication Method for the Test Set
Not applicable, as there was no clinical test set requiring expert adjudication.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done
No. The document explicitly states: "Clinical testing was not necessary for the determination of substantial equivalence, or safety and effectiveness of the CenterMed ASP System." Therefore, an MRMC comparative effectiveness study was not performed.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done
The device is described as a "software system and image segmentation system" and a "pre-operative software tool." While the software components are validated ("Software validation" section), this validation focuses on the correct implementation of specifications and mitigation of risks. It is not framed as a standalone performance study in the sense of an algorithm making decisions or predictions independently. The workflow involves trained engineers operating the software and physicians evaluating the outputs for surgical planning. The "standalone" performance being assessed here is the functional integrity and safety of the software and physical outputs according to design specifications, rather than a diagnostic performance metric.
7. The Type of Ground Truth Used
For the non-clinical studies described:
- Mechanical Testing: Ground truth is derived from the established physical properties and limits defined by the ISO and ASTM standards.
- Biocompatibility Testing: Ground truth is the biological safety criteria outlined in the ISO 10993 series and USP .
- Sterilization Validation: Ground truth is the defined sterility assurance level (SAL) of 10⁻⁶ as per ANSI/AAMI/ISO 17665-1.
- Software Validation: Ground truth is the "pre-defined requirement specifications" for the COTS software components, ensuring they function as intended.
No pathology, expert consensus on patient images, or outcomes data were used as ground truth, as no clinical study directly evaluating algorithm performance on patient cases was conducted.
8. The Sample Size for the Training Set
Not applicable. The document does not describe the development of a machine learning algorithm that requires a "training set" of data. The software components mentioned are "Commercially off-the-shelf (COTS) software applications for image segmentation and processing." These are pre-existing software tools, not a newly developed AI algorithm that would undergo a training phase by CenterMed.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there was no training set for a newly developed AI algorithm. The COTS software validation focused on their correct implementation and functionality within the CenterMed ASP System.
(570 days)
DZJ
coDiagnostiX is an implant planning and surgery planning software tool intended for use by dental professionals who have appropriate knowledge in the field of application. The software reads imaging information output from medical scanners such as CBCT or CT scanners.
It is indicated for pre-operative simulation and evaluation of patient anatomy, dental implant placement, surgical instrument positioning, and surgical treatment options, in edentulous, partial edentulous or dentition situations, which may require a surgical guide. It is further indicated for the user to design such guides for, alone or in combination, the guiding of a surgical path along a trajectory or a profile, or to help evaluate a surgical preparation or step.
coDiagnostiX software allows for surgical guide export to a validated manufacturing center or to the point of care. Manufacturing at the point of care requires a validated process using CAM equipment (additive manufacturing system, including software and associated tooling) and compatible material (biocompatible and sterilizable). A surgical guide may need to be used with accessories.
The main uses and capabilities of the coDiagnostiX software are unchanged from the primary predicate version.
As in the primary predicate version, it is a software for dental surgical treatment planning. It is designed for the evaluation and analysis of 3-dimensional datasets and the precise image-guided and reproducible preoperative planning of dental surgeries.
The first main steps in its workflow include the patient image data being received from CBCT (Cone Beam Computed Tomography) or CT. The data in DICOM format is then read with the coDiagnostiX DICOM transfer module according to the standard, converted into 3-dimensional datasets and stored in a database.
The pre-operative planning is performed by the computation of several views (such as a virtual Orthopantomogram or a 3-dimensional reconstruction of the image dataset), by the analysis of the image data, and the placement of surgical items (i.e. sleeves, implants) upon the given views. The pre-operative planning is then followed as decided by the design of a corresponding surgical guide that reflects the assigned placement of the surgical items.
Additional functions are available to the user for refinement of the preoperative planning, such as:
- Active measurement tools, length and angle, for the assessment of surgical treatment options;
- Nerve module to assist in distinguishing the nervus mandibularis canal;
- 3D sectional views through the jaw for fine adjustment of surgical treatment options;
- Segmentation module for coloring several areas inside the slice dataset (e.g., jawbone, native teeth, or types of tissue such as bone or skin) and creating a 3D reconstruction for the dataset;
- Parallelizing function for the adjustment of adjacent images; and
- Bone densitometry assessment, with a density statistic in areas of interest.
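The segmentation module described above is, at its simplest, voxel classification on the CT volume. The sketch below shows plain Hounsfield-unit thresholding in NumPy; the threshold value is an illustrative assumption, and this is not coDiagnostiX's actual algorithm:

```python
import numpy as np


def segment_by_threshold(volume_hu: np.ndarray, threshold_hu: float = 300.0) -> np.ndarray:
    """Boolean mask labeling voxels at or above a Hounsfield-unit threshold as bone."""
    return volume_hu >= threshold_hu


# Toy 1x1x3 "volume": soft tissue (~40 HU), cortical bone (~1200 HU), air (~-1000 HU)
vol = np.array([[[40.0, 1200.0, -1000.0]]])
mask = segment_by_threshold(vol)
print(mask[0, 0].tolist())  # [False, True, False]
```

A mask like this is the kind of intermediate result that the 3D-reconstruction step would then convert into a surface model for visualization.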
All working steps are automatically saved to the patient file, which may contain multiple surgical treatment plan proposals, allowing the user to choose the ideal surgical treatment plan. The output file of the surgical guide and/or the guided surgery is then generated from the final surgical treatment plan.
coDiagnostiX software allows for surgical guide export to a validated manufacturing center or to the point of care. Manufacturing at the point of care requires a validated process using CAM equipment (additive manufacturing system, including software and associated tooling) and compatible material (biocompatible and sterilizable). A surgical guide may need to be used with accessories.
The provided document is a 510(k) Premarket Notification from the FDA for the coDiagnostiX dental implant planning and surgery planning software. It details the device's indications for use, comparison to predicate/reference devices, and non-clinical performance data used to demonstrate substantial equivalence.
However, the document does not contain specific acceptance criteria for a device's performance (e.g., accuracy metrics or thresholds), nor does it describe a comparative study that proves the device meets such criteria with detailed quantitative results. The section on "Non-Clinical Performance Data" broadly discusses verification and validation, but lacks the granular data requested.
Therefore, many of the requested points cannot be answered from the provided text.
Here's an attempt to answer based on the available information, noting where information is absent:
Device: coDiagnostiX (K193301)
1. Table of Acceptance Criteria and Reported Device Performance
Information Not Provided in Document: The document does not specify quantitative acceptance criteria or provide a table of reported device performance metrics against such criteria. It states that "The acceptance criteria are met" for sterilization validation and "Expected results are met" for process performance qualifications, but does not provide details on what those criteria or results actually were.
2. Sample Size and Data Provenance for Test Set
Information Not Provided in Document: The document mentions "software verification and validation" and "Biocompatibility testing" but does not specify sample sizes for any test sets (e.g., number of patient scans, number of manufactured guides). There is also no explicit mention of data provenance (e.g., country of origin, retrospective/prospective collection).
3. Number and Qualifications of Experts for Ground Truth Establishment
Information Not Provided in Document: The document states "software verification and validation is conducted to assure requirements and specifications as well as risk mitigations (design inputs) are correctly and completely implemented and traceable to design outputs." However, it does not specify if experts were involved in establishing ground truth for a test set, their number, or their qualifications. The device is intended for "dental professionals who have appropriate knowledge in the field of application."
4. Adjudication Method for Test Set
Information Not Provided in Document: No information regarding an adjudication method (e.g., 2+1, 3+1, none) for a test set is provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Information Not Provided in Document: The document focuses on demonstrating substantial equivalence primarily through technological characteristics and non-clinical data, rather than through an MRMC comparative effectiveness study involving human readers. There is no mention of such a study or an effect size for human reader improvement with AI assistance.
6. Standalone (Algorithm Only) Performance Study
Information Not Provided in Document: While the document refers to "software verification and validation" to demonstrate that "the software performs as intended" and "the base accuracy is identical as compared to the predicate device," it does not provide details of a standalone (algorithm only) performance study with specific metrics, such as sensitivity, specificity, or accuracy values. It implies the software's capabilities (CAD type, image sources, output files) are unchanged from the predicate, and therefore its base accuracy is "identical."
7. Type of Ground Truth Used
Information Not Provided in Document: The document does not specify the type of ground truth used for any testing. It mentions "evaluation and analysis of 3-dimensional datasets" from CBCT or CT scans, but not how ground truth (e.g., for specific anatomical features, surgical path accuracy, etc.) was established for validation purposes.
8. Sample Size for Training Set
Information Not Provided in Document: The document does not provide any information about a training set or its sample size. This is a 510(k) submission primarily comparing the device to a predicate, not detailing the development or training of an AI algorithm from scratch. The changes described are primarily feature expansions to an existing software.
9. How Ground Truth for Training Set Was Established
Information Not Provided in Document: As no information on a training set is provided, there is no information on how its ground truth was established.
Summary of Document's Approach to Meeting Requirements:
The document primarily relies on demonstrating substantial equivalence to an existing predicate device (K130724 coDiagnostiX Implant Planning Software) and reference devices rather than presenting a de novo performance study with specific acceptance criteria and detailed quantitative results.
The key arguments for substantial equivalence are:
- The device has the "same intended use" as the primary predicate device.
- It has "similar technological characteristics" (software, interface, inputs, outputs are either identical or considered similar with addressed impacts).
- Changes are described as "feature expansions" and "minor updates" to an existing, cleared software.
- Non-clinical data, including software verification and validation, biocompatibility testing, sterilization validation, and manufacturing process qualifications, are stated to have met acceptance criteria and demonstrated that the device "performs as intended" and is "safe and effective."
The non-clinical data section broadly implies that the software's core accuracy functions are maintained from the predicate device, since its fundamental CAD capabilities, image sources, and output file functions are unchanged. The new features ("planning of a surgical path along a trajectory," "planning of a surgical path along a profile," and "planning to help evaluate surgical preparation or step") are leveraged from the predicate's general implant planning and surgical planning indications. The document asserts that these changes "do not change the intended use or the applicable fundamental technology, and do not raise any new questions of safety or effectiveness."