510(k) Data Aggregation
(435 days)
CenterMed, Inc.
CenterMed Patient Matched Assisted Surgical Planning (ASP) System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the ASP system and the result is an output data file. This file may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models of the Fibula and Ilium, surgical guides for harvesting bone grafts from the Fibula or Ilium, and surgical planning case reports for use in Maxillofacial reconstructive surgeries. CenterMed Patient Matched ASP System is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.
CenterMed Patient Matched Assisted Surgical Planning (ASP) System is a combination of software design and additive manufacturing for customized virtual pre-surgical treatment planning in Maxillofacial reconstructive surgeries. The system processes patients' imaging data files obtained from the surgeons for treatment planning and outputs various patient-specific products (both physical and digital), including surgical guides for harvesting bone grafts from the Fibula or Ilium, anatomical models of the Fibula and Ilium, and surgical planning case reports for use in Maxillofacial reconstructive surgeries. The physical products (surgical guides, anatomical models) are manufactured with biocompatible polyamide (PA-12) using additive manufacturing (Selective Laser Sintering).
The provided text is a 510(k) Summary for the CenterMed Patient Matched Assisted Surgical Planning (ASP) System. This document focuses on demonstrating substantial equivalence to a predicate device (KLS Martin Individual Patient Solutions (IPS) Planning System).
Crucially, this type of submission (510(k)) does not typically involve clinical performance studies to establish "acceptance criteria" through a traditional clinical trial or MRMC study in the way medical AI/imaging devices often do. Instead, the focus is on a comparison to a legally marketed predicate device, demonstrating that any differences do not raise new questions of safety or effectiveness.
Therefore, many of the requested categories for a study that "proves the device meets the acceptance criteria" in terms of clinical performance (like sensitivity, specificity, reader performance, etc.) are not applicable to this 510(k) summary. The "acceptance criteria" here are primarily around demonstrating engineering performance, biocompatibility, sterilization, and software validation to ensure the device performs as intended and is as safe and effective as the predicate.
Here's a breakdown based on the provided document and the nature of a 510(k) submission for this type of device:
1. A table of acceptance criteria and the reported device performance
The provided document details non-clinical performance data and their conclusions. These serve as the "acceptance criteria" for demonstrating the device's functional and safety characteristics.
Test Category | Acceptance Criteria / Guidelines | Reported Device Performance / Conclusion | Safety and Efficacy Confirmed |
---|---|---|---|
Mechanical Testing | ISO 178:2019 (bending strength), ISO 527-2:2012 (tensile strength); smaller tensile specimens designed per ISO 20753:2018, larger tensile specimens per ASTM D638 - Pre-defined criterion: maintain 85% of initial bending/tensile strength. | The sterilized and aged test specimens met the pre-defined acceptance criterion, maintaining 85% of initial bending and tensile strength. | Yes |
Dimensional Testing | Pre-defined dimensional tolerance limits | The results showed that the dimensional changes were within the predefined acceptance criteria. | Yes |
Wear Debris Testing | Pre-defined average amount of material loss | The results showed that the average material loss was within the predefined acceptance criteria. | Yes |
Cytotoxicity | ISO 10993-5, GB/T 16886.5-2017 - No evidence of cell lysis or toxicity. | The results showed no evidence of the test specimen causing cell lysis or toxicity. | Yes |
Sensitization | ISO 10993-10, GB/T 16886.10-2017 - No evidence of delayed dermal contact sensitization. | The test specimen extracts showed no evidence of causing delayed dermal contact sensitization. | Yes |
Intracutaneous Reactivity | ISO 10993-10, GB/T 16886.10-2017 - No evidence of intra-cutaneous reactivity. | The results showed no evidence of intra-cutaneous reactivity. | Yes |
Acute Systemic Toxicity | ISO 10993-11, GB/T 16886.11-2011 - No mortality or evidence of systemic toxicity. | The results showed no mortality or evidence of systemic toxicity. | Yes |
Pyrogenicity | USP, ISO 10993-11 - Meet requirements for absence of pyrogens. | The results met the requirements for the absence of pyrogens. | Yes |
Sterilization Validation | ANSI/AAMI/ISO 17665-1 - Assurance of sterility of 10^-6^ SAL. | The results demonstrated a sterility assurance level (SAL) of 10^-6^ for surgical guides and anatomical models individually packaged in a single-pouched or wrapped sterilization configuration. | Yes |
Software Validation | Pre-defined requirement specifications - Conformity with specifications and acceptance criteria. | All the COTS software applications for image segmentation and manipulation are FDA cleared. Quality and on-site user acceptance testing provide objective evidence that all software requirements and specifications were implemented correctly and completely and are traceable to the system requirements. Testing required as a result of risk analysis (level of concern) and impact assessments showed conformity with pre-defined specifications and acceptance criteria. Software documentation demonstrates all appropriate steps have been taken to ensure mitigation of any potential risks and the system performs as intended based on the user requirements and specifications. | Yes |
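The 85%-retention criterion in the mechanical-testing row above is a simple pass/fail comparison of post-sterilization/aged strength against initial strength. A minimal sketch of that check, with entirely illustrative specimen values (the submission does not report the actual measurements or specimen counts):

```python
def retains_strength(initial_mpa: float, aged_mpa: float, threshold: float = 0.85) -> bool:
    """True if a sterilized/aged specimen keeps at least `threshold`
    (85% per the pre-defined criterion) of its initial strength."""
    return aged_mpa >= threshold * initial_mpa

# Hypothetical (initial, aged) strength pairs in MPa -- values are
# illustrative only; the 510(k) summary does not disclose raw data.
specimens = [(78.0, 71.0), (80.5, 70.2), (79.3, 69.8)]
all_pass = all(retains_strength(initial, aged) for initial, aged in specimens)
```

The same comparison applies to both the bending-strength (ISO 178) and tensile-strength (ISO 527-2) results.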
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- No clinical test set was used for this 510(k) submission. The submission relies on non-clinical (mechanical, biocompatibility, sterilization) and software validation testing.
- The device's input data (patient CT scans) would be acquired prospectively in real-world use, but for the purpose of this submission no patient data were used to "test" the device's diagnostic performance. The software processes existing DICOM data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not applicable. No clinical ground truth was established as no clinical performance study was conducted. The "ground truth" for the device's outputs (models, guides, plans) is established through the surgeon's approval of the digital models and plans, as described in the workflow.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable. No clinical test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No MRMC study was performed. The device is an "Assisted Surgical Planning System" that produces physical and digital outputs for Maxillofacial reconstructive surgeries, not a diagnostic AI that assists human readers in interpreting images. It is used by "well-trained engineers" and "evaluated by physicians" for surgical planning. The submission explicitly states "Clinical testing was not necessary for the determination of substantial equivalence, or safety and effectiveness."
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done
- Not applicable for a typical AI performance metric. The device is described as an "Assisted Surgical Planning System" where "well-trained engineers" use "COTS software systems" for image transfer, manipulation, and simulation, with outputs "evaluated by physicians." This implies a human-in-the-loop system for the actual planning process, not a standalone diagnostic algorithm. The software validation detailed in the report is for the underlying software's functionality and adherence to specifications, not its stand-alone diagnostic accuracy.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the non-clinical tests (mechanical, biocompatibility, sterilization), the "ground truth" is adherence to established international and industry standards (ISO, ASTM, USP) and pre-defined internal acceptance criteria.
- For the software, the "ground truth" is adherence to "pre-defined requirement specifications" and "traceable to the system requirements," validated through "Quality and on-site user acceptance testing."
- There is no clinical ground truth in terms of patient outcomes or expert pathological diagnosis as this was not a clinical performance study for image interpretation.
8. The sample size for the training set
- Not applicable. The document does not describe a machine learning model that requires a "training set" in the context of deep learning. The software uses "Commercially off-the-shelf (COTS) software systems" for image processing and simulation, which would have been developed and validated by their respective vendors, not developed from scratch by CenterMed using a large training dataset for a novel AI algorithm.
9. How the ground truth for the training set was established
- Not applicable. See point 8.
(403 days)
CenterMed, Inc.
CenterMed Patient Matched Assisted Surgical Planning (ASP) System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the ASP system and the result is an output data file. This file may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, surgical splints and surgical planning case reports for use in maxillofacial surgery. CenterMed Patient Matched ASP System is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.
CenterMed Patient Matched Assisted Surgical Planning (ASP) System is a combination of software design and additive manufacturing for customized virtual pre-surgical treatment planning in maxillofacial reconstruction and orthognathic surgeries. The system processes patients' imaging data files obtained from the surgeons for treatment planning and outputs various patient-specific products (both physical and digital), including surgical guides, anatomical models, surgical splints, and surgical planning case reports. The physical products (surgical guides, anatomical models and surgical splints) are manufactured with biocompatible polyamide (PA-12) using additive manufacturing (Selective Laser Sintering).
The CenterMed Patient Matched Assisted Surgical Planning (ASP) System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner (e.g., CT) to produce digital models or physical outputs like anatomical models, surgical guides, surgical splints, and surgical planning case reports for maxillofacial surgery. It also serves as a pre-operative software tool for simulating/evaluating surgical treatment options.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document details non-clinical performance data (mechanical, biocompatibility, sterilization, software validation) as supportive evidence for substantial equivalence, rather than a direct clinical performance study with predefined acceptance criteria for sensitivity, specificity, etc. The acceptance criteria for these tests are largely based on meeting established ISO/ASTM standards or demonstrating compliance with pre-defined requirements.
Test Performed | Test Description/Guidelines | Acceptance Criteria | Reported Device Performance | Safety and Efficacy Confirmed |
---|---|---|---|---|
Mechanical Testing | ISO 178:2019, ISO 20795-2:2013 | Maintain 85% of initial bending strength (for sterilized and aged test specimens). | Met the pre-defined acceptance criteria. | Yes |
| ISO 20753:2018 | Smaller test specimens for tensile testing. | Test specimens designed according to the standard. | Yes |
| ASTM D638 | Larger test specimens for tensile testing. | Test specimens designed according to the standard. | Yes |
| ISO 527-2:2012 | Maintain 85% of initial tensile strength (for sterilized and aged test specimens). | Met the pre-defined criteria. | Yes |
Biocompatibility Testing | | | | |
Cytotoxicity | ISO 10993-5, GB/T 16886.5-2017 | No evidence of the test specimen causing cell lysis or toxicity. | Showed no evidence of cell lysis or toxicity. | Yes |
Sensitization | ISO 10993-10, GB/T 16886.10-2017 | No evidence of causing delayed dermal contact sensitization. | Showed no evidence of causing delayed dermal contact sensitization. | Yes |
Intracutaneous Reactivity | ISO 10993-10, GB/T 16886.10-2017 | No evidence of intra-cutaneous reactivity. | Showed no evidence of intra-cutaneous reactivity. | Yes |
Acute Systemic Toxicity | ISO 10993-11, GB/T 16886.11-2011 | No mortality or evidence of systemic toxicity. | Showed no mortality or evidence of systemic toxicity. | Yes |
Pyrogenicity | USP, ISO 10993-11 | Met requirements for the absence of pyrogens. | Met the requirements for the absence of pyrogens. | Yes |
Sterilization Validation | ANSI/AAMI/ISO 17665-1 | Assurance of sterility of 10^-6^ SAL (sterility assurance level) for surgical guides, surgical splints and anatomical models. | Demonstrated assurance of sterility of 10^-6^ SAL. | Yes |
Software Validation | Pre-defined requirement specifications | All software requirements and specifications implemented correctly and completely, traceable to system requirements; conformity with pre-defined specifications and acceptance criteria; mitigation of potential risks. | All COTS software applications are FDA cleared. Quality and on-site user acceptance testing confirmed correct and complete implementation of requirements, traceability, conformity with specifications, and risk mitigation. | Yes |
2. Sample Size Used for the Test Set and Data Provenance
The document does not describe a clinical study with a "test set" in the traditional sense of evaluating algorithm performance against ground truth on patient data. Instead, it details non-clinical tests for the physical and software components.
- Mechanical Testing: Test specimens were used for bending and tensile testing. The specific number of specimens is not provided, but they were sterilized and aged to evaluate material properties.
- Biocompatibility Testing: Test specimens (presumably of the device materials) were used for cytotoxicity, sensitization, intracutaneous reactivity, acute systemic toxicity, and pyrogenicity tests. The specific number of samples for each test is not detailed.
- Sterilization Validation: Not specified, but sufficient samples would be used to demonstrate a sterility assurance level of 10^-6^.
- Software Validation: This involved "Quality and on-site user acceptance testing" based on pre-defined requirement specifications. It does not refer to a data set for algorithm evaluation, but rather to the internal validation of the software development process and functionality.
The data provenance is from internal testing performed by the manufacturer or their designated testing facilities, rather than clinical patient data. The country of origin of the data is not specified directly, but the company is based in Walnut Creek, California, USA, and the tests refer to international standards (ISO, ASTM, USP). The studies were conducted specifically for the purpose of this 510(k) submission (prospective in the context of regulatory filing).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable. This device is not an AI/ML diagnostic or prognostic tool that requires expert-established ground truth on a test set of patient cases for performance evaluation. The "ground truth" for the non-clinical tests refers to the established scientific standards and criteria outlined by ISO, ASTM, and USP.
4. Adjudication Method for the Test Set
Not applicable, as there was no clinical test set requiring expert adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No. The document explicitly states: "Clinical testing was not necessary for the determination of substantial equivalence, or safety and effectiveness of the CenterMed ASP System." Therefore, an MRMC comparative effectiveness study was not performed.
6. If a Standalone Study (i.e., Algorithm-Only Performance Without a Human in the Loop) was Done
The device is described as a "software system and image segmentation system" and a "pre-operative software tool." While the software components are validated ("Software validation" section), this validation focuses on the correct implementation of specifications and mitigation of risks. It is not framed as a standalone performance study in the sense of an algorithm making decisions or predictions independently. The workflow involves trained engineers operating the software and physicians evaluating the outputs for surgical planning. The "standalone" performance being assessed here is the functional integrity and safety of the software and physical outputs according to design specifications, rather than a diagnostic performance metric.
7. The Type of Ground Truth Used
For the non-clinical studies described:
- Mechanical Testing: Ground truth is derived from the established physical properties and limits defined by the ISO and ASTM standards.
- Biocompatibility Testing: Ground truth is the biological safety criteria outlined in the ISO 10993 series and USP.
- Sterilization Validation: Ground truth is the defined sterility assurance level (SAL) of 10^-6^ as per ANSI/AAMI/ISO 17665-1.
- Software Validation: Ground truth is the "pre-defined requirement specifications" for the COTS software components, ensuring they function as intended.
No pathology, expert consensus on patient images, or outcomes data were used as ground truth, as no clinical study directly evaluating algorithm performance on patient cases was conducted.
8. The Sample Size for the Training Set
Not applicable. The document does not describe the development of a machine learning algorithm that requires a "training set" of data. The software components mentioned are "Commercially off-the-shelf (COTS) software applications for image segmentation and processing." These are pre-existing software tools, not a newly developed AI algorithm that would undergo a training phase by CenterMed.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there was no training set for a newly developed AI algorithm. The COTS software validation focused on their correct implementation and functionality within the CenterMed ASP System.