Search Results
Found 5 results
510(k) Data Aggregation
(132 days)
KLS Martin Individual Patient Solutions (IPS) Planning System
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a computerized tomography (CT) medical scan. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, guides, and case reports for use in the marking and cutting of cranial bone in cranial surgery. The IPS Planning System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options. Information provided by the software and device output is not intended to eliminate, replace, or substitute, in whole or in part, the healthcare provider's judgment and analysis of the patient's condition.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing (rapid prototyping) equipment intended to provide a variety of outputs to support reconstructive cranial surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, and case reports for use in the marking and cutting of cranial bone in cranial surgery.
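For readers unfamiliar with what an image segmentation system of this kind does technically, below is a minimal, generic sketch of CT bone segmentation by Hounsfield-unit thresholding followed by surface extraction. It is illustrative only and is not the IPS Planning System's actual workflow (which relies on commercial applications such as Materialise Mimics and Geomagic Freeform); the volume, threshold, and voxel spacing are assumed values.

```python
import numpy as np
from skimage import measure  # scikit-image; assumed available

# Hypothetical CT volume in Hounsfield units (HU). In practice this would be
# loaded from DICOM slices (e.g., with pydicom) and rescaled to HU.
ct_volume = np.random.normal(loc=0.0, scale=200.0, size=(64, 64, 64))
voxel_spacing = (1.0, 0.5, 0.5)  # slice thickness and in-plane pixel size, mm

# Segment bone with a simple HU threshold (~300 HU; assumed value).
bone_mask = (ct_volume > 300).astype(np.float32)

# Extract a triangulated surface from the binary mask (marching cubes).
verts, faces, normals, _ = measure.marching_cubes(
    bone_mask, level=0.5, spacing=voxel_spacing
)

# The vertex/face arrays could then be exported (e.g., as an STL mesh) for
# additive manufacturing of an anatomical model, or viewed as a digital model.
print(f"Surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```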
The provided text is a 510(k) summary for the KLS Martin Individual Patient Solutions (IPS) Planning System. It details the device, its intended use, and comparisons to predicate and reference devices. However, it does not describe specific acceptance criteria and a study dedicated to proving the device meets those criteria in the typical format of a diagnostic AI/ML device submission.
Instead, the document primarily focuses on demonstrating substantial equivalence to a predicate device (K182889) and leveraging existing data from that predicate, as well as from two reference devices (K182789 and K190229). The "performance data" sections describe traditional medical device testing (tensile, biocompatibility, sterilization, software V&V) along with simulated design validation testing and human factors and usability testing, rather than a clinical study evaluating the accuracy of an AI/ML algorithm's output against a ground truth.
Specifically, there is no mention of:
- Acceptance criteria for an AI/ML model's performance (e.g., sensitivity, specificity, AUC).
- A test set with sample size, data provenance, or ground truth establishment details for AI/ML performance evaluation.
- Expert adjudication methods, MRMC studies, or standalone algorithm performance.
The "Simulated Design Validation Testing" and "Human Factors and Usability Testing" are the closest sections to a performance study for the IPS Planning System, but they are not framed as an AI/ML performance study as requested in the prompt.
Given this, I will extract and synthesize the information available regarding the described testing and attempt to structure it to address your questions, while explicitly noting where the requested information is not present in the provided document.
Acceptance Criteria and Device Performance (as inferred from the document)
The document primarily states that the device passes "all acceptance criteria" for various tests, but the specific numerical acceptance criteria (e.g., minimum tensile strength, maximum endotoxin levels) and reported performance values are generally not explicitly quantified in a table format. The closest to "performance" is the statement that "additively manufactured titanium devices are equivalent or better than titanium devices manufactured using traditional (subtractive) methods."
Since the document doesn't provide a table of acceptance criteria and reported numerical performance for an AI/ML model's accuracy, I will present the acceptance criteria and performance as described for the tests performed:
Test Category | Acceptance Criteria (as described) | Reported Device Performance (as described) |
---|---|---|
Tensile & Bending Testing | Polyamide guides can withstand multiple sterilization cycles without degradation and can maintain 85% of initial tensile strength. Titanium devices must be equivalent or better than those manufactured using traditional methods. | Polyamide guides meet criteria. Additively manufactured titanium devices are equivalent or better than traditionally manufactured ones. |
Biocompatibility Testing | All biocompatibility endpoints (cytotoxicity, sensitization, irritation, chemical/material characterization, acute systemic, material-mediated pyrogenicity, indirect hemolysis) must be within pre-defined acceptance criteria. | All conducted tests were within pre-defined acceptance criteria, adequately addressing biocompatibility. |
Sterilization Testing | Sterility Assurance Level (SAL) of 10^-6 for dynamic-air-removal cycle. All test method acceptance criteria must be met. | All test method acceptance criteria were met. |
Pyrogenicity Testing | Endotoxin levels must be below the USP allowed limit for medical devices that have contact with cerebrospinal fluid ( |
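As background for the sterilization row above, the overkill method's 10^-6 SAL claim rests on the standard survivor-curve relationship; the biological-indicator population and D-value below are illustrative assumptions, not figures from the submission.

$$N(t) = N_0 \cdot 10^{-t/D}, \qquad t_{\mathrm{SAL}} \ge D\left(\log_{10} N_0 + 6\right)$$

With an assumed BI population of $N_0 = 10^6$ spores and an assumed $D_{121}$-value of 1.5 minutes, this gives $t_{\mathrm{SAL}} \ge 1.5 \times (6 + 6) = 18$ minutes of exposure, i.e., the classic 12-log-reduction overkill demonstration.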
(139 days)
KLS Martin Individual Patient Solutions (IPS) Planning System
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used in an additive manufacturing portion of the system that produces physical outputs including anatomical models, guides, and case reports for use in thoracic (excluding spine) and reconstructive surgeries. The IPS Planning System is also intended as a pre-operative software tool for simulating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing equipment intended to provide a variety of outputs to support thoracic (excluding spine) and reconstructive surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, and case reports.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a medical device for surgical planning. The provided text contains information about its acceptance criteria and the studies performed to demonstrate its performance.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes specific performance tests related to the materials and software used in the KLS Martin Individual Patient Solutions (IPS) Planning System. However, it does not explicitly provide a table of acceptance criteria with numerical targets and corresponding reported device performance values for the device's primary function of surgical planning accuracy or effectiveness. Instead, it relies on demonstrating that materials withstand sterilization, meet biocompatibility standards, and that software verification and validation were completed.
Here's a summary of the performance claims based on the provided text:
Acceptance Criteria Category | Reported Device Performance (Summary from text) |
---|---|
Material Performance | Polyamide (PA) Devices: Demonstrated ability to withstand multiple sterilization cycles and maintain ≥85% of initial tensile strength, leveraging data from K181241. Testing provides evidence of shelf life. |
 | Titanium Devices (additively manufactured): Demonstrated substantial equivalence to titanium devices manufactured using traditional (subtractive) methods, leveraging testing from K163579. These devices are identical in formulation, manufacturing processes, and post-processing. |
Biocompatibility | All testing (cytotoxicity, sensitization, irritation, and chemical/material characterization) was within pre-defined acceptance criteria, in accordance with ISO 10993-1. Adequately addresses biocompatibility for output devices and intended use. |
Sterilization | Steam sterilization validations performed for each output device for the dynamic-air-removal cycle in accordance with ISO 17665-1:2006 to a sterility assurance level (SAL) of 10^-6 using the biological indicator (BI) overkill method. All test method acceptance criteria were met. |
Pyrogenicity | LAL endotoxin testing conducted according to AAMI ANSI ST72. Results demonstrate endotoxin levels below USP allowed limit for medical devices and meet pyrogen limit specifications. |
Software Verification & Validation | Performed on each individual software application (Materialise Mimics, Geomagic® Freeform Plus™) used in planning and design. Quality and on-site user acceptance testing provided objective evidence that all software requirements and specifications were correctly and completely implemented and traceable to system requirements. Testing showed conformity with pre-defined specifications and acceptance criteria. Software documentation ensures mitigation of potential risks and performs as intended based on user requirements and specifications. |
Guide Specifications | Thickness (Cutting/Marking Guide): Min: 1.0 mm, Max: 20 mm. |
 | Width (Cutting/Marking Guide): Min: 7 mm, Max: 300 mm. |
 | Length (Cutting/Marking Guide): Min: 7 mm, Max: 300 mm. |
 | Degree of curvature (in-plane): N/A |
 | Degree of curvature (out-of-plane): N/A |
 | Screw hole spacing (Cutting/Marking Guide): Min: ≥4.5 mm, Max: No Max. |
 | No. of holes (Cutting/Marking Guide): N/A |
Screw Specifications | Diameter (Temporary): 2.3 mm - 3.2 mm. |
 | Length (Temporary): 7 mm - 17 mm. |
 | Style: maxDrive (Drill-Free, non-locking, locking). |
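As a minimal sketch of how the dimensional limits tabulated above could be encoded as automated design checks (the checking function and the example guide values are hypothetical; the submission does not describe such a script):

```python
# Hypothetical pre-release check of a cutting/marking guide design against the
# dimensional limits listed in the table above (all values in mm).
GUIDE_LIMITS = {
    "thickness": (1.0, 20.0),
    "width": (7.0, 300.0),
    "length": (7.0, 300.0),
    "screw_hole_spacing": (4.5, float("inf")),  # >= 4.5 mm, no stated maximum
}

def check_guide(dimensions: dict) -> list:
    """Return a list of out-of-specification findings (empty if all pass)."""
    findings = []
    for name, (lo, hi) in GUIDE_LIMITS.items():
        value = dimensions[name]
        if not lo <= value <= hi:
            findings.append(f"{name} = {value} mm is outside [{lo}, {hi}] mm")
    return findings

# Example guide with assumed dimensions.
example_guide = {"thickness": 2.5, "width": 45.0, "length": 120.0,
                 "screw_hole_spacing": 6.0}
print(check_guide(example_guide) or "All dimensions within specification")
```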
2. Sample size used for the test set and the data provenance
The document specifies "simulated use of guides intended for use in the thoracic region was validated by means of virtual planning sessions with the end-user." However, it does not provide any specific sample size for a test set (e.g., number of cases or patients) or details about the provenance of data (e.g., retrospective or prospective, country of origin). The studies appear to be primarily focused on material and software validation, not a clinical test set on patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts used to establish ground truth for a test set, nor their qualifications. The "virtual planning sessions with the end-user" implies input from clinical professionals, but no details are provided.
4. Adjudication method for the test set
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for a test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
The document states that "Clinical testing was not necessary for the determination of substantial equivalence." Therefore, an MRMC comparative effectiveness study involving AI assistance for human readers was not performed or reported for this submission. The device is a planning system for producing physical outputs, not an AI-assisted diagnostic tool for human readers.
6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done
The device is described as "a software system and image segmentation system for the transfer of imaging information... The system processes the medical images and produces a variety of patient specific physical and/or digital output devices." It also involves "input from the physician" and "trained employees/engineers who utilize the software applications to manipulate data and work with the physician to create the virtual planning session." This description indicates a human-in-the-loop system, not a standalone algorithm. Performance testing primarily focuses on the software's ability to implement requirements and specifications and material properties, rather than an independent algorithmic assessment.
Software verification and validation were performed on "each individual software application used in the planning and design," demonstrating conformity with specifications. This can be considered the standalone performance evaluation for the software components, ensuring they function as intended without human error, but it's within the context of supporting a human-driven planning process.
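To make concrete what "traceable to system requirements" typically involves, here is a minimal, hypothetical sketch of a requirements-to-test traceability check; the requirement and test-case identifiers are invented, and nothing of the sort is described in the submission.

```python
# Hypothetical traceability check: every software requirement must be covered
# by at least one verification test case. IDs and the mapping are invented.
requirements = {"SRS-001", "SRS-002", "SRS-003"}
test_coverage = {
    "TC-01": {"SRS-001"},
    "TC-02": {"SRS-002", "SRS-003"},
}

covered = set().union(*test_coverage.values())
untraced = requirements - covered
print("Full traceability achieved" if not untraced
      else f"Untraced requirements: {sorted(untraced)}")
```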
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the materials and sterilization parts of the study, the "ground truth" or reference is established by international standards (ISO 10993-1, ISO 17665-1:2006) and national standards (AAMI ANSI ST72, USP allowed limit for medical devices).
For the "virtual planning sessions with the end-user," the ground truth is implicitly the expert judgment/agreement of the end-user (physician) on the simulated surgical treatment options and guide designs. No further specifics are given.
8. The sample size for the training set
The document does not mention any training set or its sample size. This is expected as the device is not described as a machine learning or AI device that requires a training set for model development in the typical sense. The software components are commercially off-the-shelf (COTS) applications (Materialise Mimics, Geomagic® Freeform Plus™) which would have their own internal validation and verification from their developers.
9. How the ground truth for the training set was established
Since no training set is mentioned for the device itself, the establishment of ground truth for a training set is not applicable in this document.
(284 days)
KLS Martin Individual Patient Solutions (IPS) Planning System
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a computerized tomography (CT) medical scan. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, guides and case reports for use in the marking of cranial bone in cranial surgery. The IPS Planning System is also intended as a pre-operative software tool for simulating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing (rapid prototyping) equipment intended to provide a variety of outputs to support reconstructive cranial surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, and case reports for use in the marking of cranial bone in cranial surgery.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a software system and image segmentation system used for transferring imaging information from a CT scan. The system processes input data to produce output data files, which can be digital models or physical outputs like anatomical models, guides, and case reports for cranial surgery. It is also a pre-operative software tool for simulating surgical treatment options.
Here's an analysis of the acceptance criteria and supporting studies based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria Category | Reported Device Performance |
---|---|
Tensile & Bending Testing | Polyamide guides withstand multiple sterilization cycles without degradation and maintain 85% of initial tensile strength after 6 months. Additively manufactured titanium devices are equivalent to or better than traditionally manufactured titanium devices. |
Biocompatibility Testing | Polyamide devices meet pre-defined acceptance criteria (cytotoxicity, sensitization, irritation, chemical/material characterization, acute systemic toxicity, material-mediated pyrogenicity, indirect hemolysis). Titanium devices (including acute systemic toxicity, material-mediated pyrogenicity, indirect hemolysis) meet pre-defined acceptance criteria. |
Sterilization Testing | All output devices (polyamide, epoxy/resin/acrylic, titanium) achieve a sterility assurance level (SAL) of $10^{-6}$ using the biological indicator (BI) overkill method for steam sterilization. |
Pyrogenicity Testing | Devices contain endotoxin levels below the USP allowed limit for medical devices in contact with cerebrospinal fluid ( |
(161 days)
KLS Martin Individual Patient Solutions (IPS) Planning System
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, guides, splints, and case reports for use in maxillofacial surgery. The IPS Planning System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing (rapid prototyping) equipment intended to provide a variety of outputs to support reconstructive and orthognathic surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, splints and case reports.
The provided text describes the KLS Martin Individual Patient Solutions (IPS) Planning System and its regulatory clearance (K182789) by the FDA. However, it does not contain information about a study proving the device meets specific acceptance criteria in the context of an AI/algorithm's performance.
Instead, the document is a 510(k) summary demonstrating substantial equivalence to a predicate device (K181241), primarily to expand the patient population to include pediatric subgroups. The core of the device is a planning system involving commercial off-the-shelf (COTS) software and additive manufacturing for physical outputs, with human-in-the-loop interaction from trained employees/engineers and physicians.
Therefore, most of the requested information regarding AI acceptance criteria, performance metrics, sample sizes, ground truth establishment, expert adjudication, and MRMC studies is not present in the provided document. The device, as described, is not an AI algorithm in the sense that it performs automated diagnostic or treatment recommendations independently based on image analysis with defined metrics. It's a system to assist human planning processes.
Here's an analysis based on the information available in the document, and a clear indication of what is not available.
Acceptance Criteria and Device Performance (as far as applicable from the document)
The document focuses on demonstrating substantial equivalence, not on acceptance criteria for a freestanding AI algorithm's performance. The "performance" described is primarily related to material properties, biocompatibility, and sterilization, and the functioning of the software as a tool for human planning.
Acceptance Criteria (Inferred from Substantial Equivalence and Safety/Performance) | Reported Device Performance (from document) |
---|---|
Material Degradation (Polyamide Guides) | Subject polyamide guides can withstand multiple sterilization cycles without degradation and can maintain 85% of their initial tensile strength. Demonstrates a shelf life of 6 months. (p.10) |
Titanium Device Equivalency | Additively manufactured titanium devices are equivalent to, or better than, titanium devices manufactured using traditional (subtractive) methods, leveraging data from reference device K163579. (p.10) |
Biocompatibility | Biocompatibility endpoints (cytotoxicity, sensitization, irritation, chemical/material characterization) for both polyamide and titanium manufactured devices met pre-defined acceptance criteria (leveraged from predicate K181241 and reference K163579). (p.10) |
Sterility Assurance Level (SAL) | Achieved an SAL of 10^-6 for each output device using the biological indicator (BI) overkill method for steam sterilization. Validations for polyamide and titanium leveraged from predicate K181241 and reference K163579. (p.10) |
Pyrogenicity (Endotoxin Levels) | Endotoxin levels were below the USP allowed limit for medical devices and met pyrogen limit specifications, leveraging data from reference device K163579 for titanium. (p.10) |
Software Functionality and Validation | Quality and on-site user acceptance testing provided objective evidence that all software requirements and specifications were implemented correctly and completely and are traceable to system requirements. Testing from risk analysis showed conformity with pre-defined specifications and acceptance criteria. Software documentation demonstrates mitigation of potential risks and performance as intended. (p.11, p.14) |
Safety and Effectiveness in Pediatric Subpopulations | A risk assessment based on FDA guidance and supporting peer-reviewed clinical literature was performed. The conclusion is that expanding the patient population to neonates, infants, and children does not identify new issues of safety or effectiveness and the device is substantially equivalent to the predicate. (p.5, p.11, p.14) Note: This is a risk assessment and literature review, not a new clinical performance study. |
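For the polyamide tensile criterion in the table above, the acceptance condition reduces to a simple retention ratio; the baseline strength used below is an assumed illustrative figure, not one reported in the submission.

$$\frac{\sigma_{\text{post-sterilization}}}{\sigma_{\text{initial}}} \ge 0.85$$

So, for an assumed initial tensile strength of 48 MPa, the guides would need to retain at least $0.85 \times 48 = 40.8$ MPa after the worst-case number of sterilization cycles.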
Information NOT Available in the Document (as it pertains to an AI/Algorithm performance study):
- Sample size used for the test set and data provenance: This information is not provided. The study performed was primarily non-clinical (material testing, biocompatibility, sterilization) and a risk assessment for pediatric use, not a clinical trial evaluating algorithm performance on a test set of patient data.
- Number of experts used to establish the ground truth for the test set and their qualifications: Not applicable, as no dedicated test set for evaluating AI/algorithm performance against a ground truth is described. The system relies on physician input and interaction with trained employees/engineers for planning, not on an autonomous algorithmic output that requires expert adjudication for ground truth.
- Adjudication method for the test set: Not applicable for the same reasons as above.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not necessary for the determination of substantial equivalence." (p.11, p.14) The device is a planning system with a human in the loop, not an AI intended to improve human readers' performance directly.
- If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done: No, a standalone algorithm performance study was not described. The device is presented as a "software system and image segmentation system" where "trained employees/engineers... manipulate data and work with the physician to create the virtual planning session." (p.4, p.13) The core function described is human-assisted planning and production of physical models, not an autonomous algorithmic output.
- The type of ground truth used: Ground truth in the context of an AI algorithm's diagnostic or predictive performance is not relevant here, as no such AI is described. The "ground truth" for the device's function would relate to the accuracy of the generated physical models and plans relative to the patient's anatomy and surgical intent, which is managed through human interaction and validation within the system's intended use.
- The sample size for the training set: Not applicable. The document refers to "commercially off-the-shelf (COTS) software applications" (p.4, p.13), which implies existing, validated software tools are being used, not a newly developed AI model requiring a separate training set.
- How the ground truth for the training set was established: Not applicable, as no new training set for an AI model is described.
(126 days)
KLS Martin Individual Patient Solutions (IPS) Planning System
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, guides, splints, and case reports for use in maxillofacial surgery. The IPS Planning System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing (rapid prototyping) equipment intended to provide a variety of outputs to support reconstructive and orthognathic surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, splints and case reports.
Here's an analysis of the acceptance criteria and study information based on the provided text, focusing on what is present and what is not:
The document (K181241 510(k) Summary) describes the KLS Martin Individual Patient Solutions (IPS) Planning System, which is a software system for image segmentation and pre-operative planning, and also provides physical outputs like anatomical models, guides, and splints for maxillofacial surgery. The submission focuses on demonstrating substantial equivalence to a predicate device (VSP System K120956) and uses several reference devices.
Acceptance Criteria and Study Information:
Based on the provided text, the device itself is a planning system that ultimately produces physical outputs. The "performance" being evaluated relates to the characteristics of these physical outputs (tensile strength, biocompatibility, sterility, pyrogenicity) and the software's functionality. There isn't a direct "device performance" metric in the traditional sense of an AI diagnostic device's sensitivity, specificity, or accuracy against a clinical outcome.
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of the device (planning system with physical outputs, not a diagnostic AI), the performance metrics are primarily related to safety and manufacturing quality.
Acceptance Criteria Category | Specific Acceptance Criteria (as implied) | Reported Device Performance (as summarized) |
---|---|---|
Tensile & Bending | Polyamide guides maintain 85% of initial tensile strength after multiple sterilization cycles and demonstrate a 6-month shelf life. Titanium devices are equivalent to or better than devices manufactured using traditional methods. | Polyamide guides met the criteria, demonstrating resistance to degradation after sterilization and supporting a 6-month shelf life. Titanium test results were leveraged from a reference device (K163579) and confirmed equivalence or superiority to traditional manufacturing. |
Biocompatibility | Meet pre-defined acceptance criteria for cytotoxicity, sensitization, irritation, and chemical/material characterization (according to ISO 10993-1). | All conducted tests (cytotoxicity, sensitization, irritation, chemical/material characterization) for subject devices (polyamide and titanium) were within pre-defined acceptance criteria. Titanium results also leveraged from K163579. |
Sterilization | Achieve a Sterility Assurance Level (SAL) of 10^-6 for each output device using the BI overkill method for steam sterilization (according to ISO 17665-1:2006 for dynamic-air-removal cycle). | All test method acceptance criteria were met, achieving the specified SAL of 10^-6. Validations for titanium were leveraged from K163579. |
Pyrogenicity | Meet pyrogen limit specifications, with endotoxin levels below USP allowed limit for medical devices (according to AAMI ANSI ST72 for LAL endotoxin testing). | The devices contain endotoxin levels below the USP allowed limit for medical devices and meet pyrogen limit specifications. Testing for titanium was leveraged from K163579. |
Software Verification & Validation (V&V) | All software requirements and specifications are implemented correctly and completely, traceable to system requirements. Conformity with pre-defined specifications and acceptance criteria based on risk analysis and impact assessments. Mitigation of potential risks and performance as intended based on user requirements. | Quality and on-site user acceptance testing provided objective evidence of correct and complete implementation of software requirements, traceability, and conformity with specifications and acceptance criteria. Software documentation demonstrated risk mitigation and intended performance. (Note: Specific quantitative metrics for software performance are not provided in this summary). |
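As context for the pyrogenicity row above, LAL results are typically converted to an endotoxin burden per device and compared to the USP limit; the measured concentration, extract volume, and batch size below are illustrative assumptions.

$$\text{EU per device} = \frac{C_{\text{LAL}} \times V_{\text{extract}}}{n_{\text{devices}}}$$

For example, an assumed measured concentration of 0.5 EU/mL in a 40 mL extract pooled from 10 devices gives $0.5 \times 40 / 10 = 2$ EU per device, which would then be compared against the applicable USP limit (commonly 20 EU/device for devices that do not contact cerebrospinal fluid).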
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: The document does not specify a "test set" in the context of a dataset for evaluating AI performance (e.g., medical images for segmentation accuracy). Instead, "testing" refers to non-clinical performance evaluations of the physical outputs and software.
- For Tensile & Bending, Biocompatibility, Sterilization, and Pyrogenicity, the sample sizes are not explicitly mentioned, but the tests were performed on "the subject polyamide guides" and "titanium" components. The provenance is internal testing performed by the manufacturer, or results leveraged from previous KLS Martin device submissions.
- For Software V&V, no specific numerical "test set" of software inputs is given, but testing was performed on "each individual software application."
- Data Provenance: The data provenance for non-clinical testing is internal to the manufacturer or relied upon previous regulatory submissions for similar materials/processes. It is not patient or country-specific data as would be for clinical studies. The data used by the IPS Planning System itself (CT data) would be patient data, but the evaluation here is of the system's outputs, not its interpretation of patient data in a diagnostic manner. The document states the system "transfers imaging information from a medical scanner such as a CT based system."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This question is not applicable in the context of this 510(k) submission. The "ground truth" for the non-clinical tests (tensile strength, biocompatibility, etc.) is established by standard scientific and engineering methodologies, not by expert medical review of images.
- For the "Software Verification and Validation," the "ground truth" for software functionality is defined by the established software requirements and specifications, validated by internal quality and user acceptance testing, not by external experts in the medical domain. The document mentions "input from the physician" for manipulation of original patient images, suggesting physicians set the clinical goals for the plan, but the validation of the system's performance is not described as involving experts establishing a "ground truth" concerning image interpretation.
4. Adjudication method for the test set
- Not applicable. There is no adjudication method described as would be used for clinical interpretation or diagnostic performance evaluation by multiple experts. The non-clinical tests follow established standards and protocols.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done.
- This device is a planning system for maxillofacial surgery, not a diagnostic AI that assists human readers in image interpretation or diagnosis. It aids in surgical planning and creates physical outputs. The submission explicitly states: "Clinical testing was not necessary for the determination of substantial equivalence."
6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done
- Given the device's function, it is inherently a "human-in-the-loop" system. The description states: "The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery." And, "The physician provides input for model manipulation and interactive feedback through viewing of digital models...that are modified by the trained employee/engineer during the planning session."
- Therefore, performance of the algorithm without human intervention is not the intended use or focus of this submission. The "software verification and validation" (Section 11) is the closest thing to an "algorithm-only" evaluation, but it's about software functionality, not standalone image interpretation performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the non-clinical performance tests of the physical outputs (Tensile & Bending, Biocompatibility, Sterilization, Pyrogenicity), the "ground truth" is defined by established engineering and scientific standards (e.g., ISO 10993-1, ISO 17665-1:2006, AAMI ANSI ST72, and internal specifications).
- For Software Verification & Validation, the "ground truth" is adherence to predefined software requirements and specifications (functional and non-functional, related to image transfer, manipulation, and output file generation). It is not based on medical "ground truth" like pathology or clinical outcomes.
8. The sample size for the training set
- This question is not applicable. The KLS Martin IPS Planning System is described as using "validated commercially off-the-shelf (COTS) software applications" for image manipulation. There is no mention of a "training set" in the context of machine learning or AI model development within this summary. It appears to be a rule-based or conventional algorithmic system rather than a deep learning/machine learning model that would require a distinct training set.
9. How the ground truth for the training set was established
- This question is not applicable, as no training set for machine learning was mentioned or identified in the document (see point 8).