Search Results
Found 10 results
510(k) Data Aggregation
(205 days)
SurgiCase Viewer is intended to be used as a software interface to assist in visualization of treatment options.
SurgiCase Viewer provides functionality to allow visualization of 3D data and to perform measurements on these 3D data, which should allow a clinician to evaluate and communicate about treatment options.
SurgiCase Viewer is intended for use by people active in the medical sector. When used to review and validate treatment options, SurgiCase Viewer is intended to be used in conjunction with other diagnostic tools and expert clinical judgment.
The SurgiCase Viewer can be used by a medical device/service manufacturer/provider or hospital department to visualize 3D data during the manufacturing process of the product/service to the end-user who is ordering the device/service. This allows the end-user to evaluate and provide feedback on proposals or intermediate steps in the manufacturing of the device or service.
The SurgiCase Viewer is to be integrated with an online Medical Device Data System which is used to process the medical device or service and which is responsible for case management, user management, authorization, authentication, etc.
The data visualized in the SurgiCase Viewer is controlled by the medical device manufacturer using the SurgiCase Viewer in its process. The device manufacturer will create the 3D data to be visualized to the end-user and export it to one of the dedicated formats supported by the SurgiCase Viewer. Each of these formats describes the 3D data in STL format with additional meta-data on the 3D models. The SurgiCase Viewer does not alter the 3D data it imports and its functioning is independent of the specific medical indication/situation or product/service it is used for. It is the responsibility of the medical device company using the SurgiCase Viewer to comply with the applicable medical device regulations.
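For readers unfamiliar with the format mentioned above: binary STL simply stores a header, a triangle count, and a flat list of triangles, while the metadata SurgiCase layers on top is proprietary and not described in this summary. The sketch below is a minimal, generic binary-STL reader in Python (the file name is a hypothetical placeholder), included only to make the "3D data in STL format" statement concrete; it is not SurgiCase code.

```python
import struct

def read_binary_stl(path):
    """Minimal binary STL reader: 80-byte header, uint32 triangle count,
    then 50 bytes per triangle (normal, three vertices, attribute byte count)."""
    triangles = []
    with open(path, "rb") as f:
        header = f.read(80)                        # free-form header text
        (count,) = struct.unpack("<I", f.read(4))  # number of triangles
        for _ in range(count):
            rec = struct.unpack("<12fH", f.read(50))
            normal, vertices = rec[0:3], (rec[3:6], rec[6:9], rec[9:12])
            triangles.append((normal, vertices))
    return header, triangles

# Hypothetical usage; "proposal.stl" is a placeholder file name.
# header, tris = read_binary_stl("proposal.stl")
# print(f"{len(tris)} triangles loaded")
```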
The provided text describes the 510(k) submission for the "SurgiCase Viewer" device (K213684). However, it does not contain the specific details required to fully address all parts of your request related to acceptance criteria, test set specifics, expert ground truth establishment, MRMC studies, or training set details. This document primarily focuses on demonstrating substantial equivalence to a predicate device.
The study presented here is a non-clinical performance evaluation comparing the new SurgiCase Viewer with its predicate (K170419) and a secondary reference device (K183105).
Here's a breakdown of what can be extracted and what is missing, based on your questions:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with numerical performance metrics. Instead, it states that the device was validated to determine substantial equivalence based on:
- Intended Use: "Both the subject device as well as the predicate device have the same intended use; They are both intended to be used as a software interface to assist in visualization and communication of treatment options."
- Device Functionality: The new device was compared to the predicate in terms of features like 3D view navigation, visualization options, measuring, and annotations. For new functionalities (medical image visualization, VR visualization), it states "The abovementioned technological differences do not impact the safety and effectiveness of the subject device for the proposed intended use as is demonstrated by the verification and validation plan."
- Medical Images Functionality (compared to Mimics Medical K183105): "Both functionality produce the same results in: Contrast adjustments, Interactive image reslicing, 3D contour overlay on images."
- Measurement functionality: "Measurement functionality on images was compared with already existing functionality on the 3D models and shown to provide correct results both on images and 3D."
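As an illustration of the kind of consistency the quoted statement describes, the sketch below measures the same two landmarks once as voxel indices scaled by the image spacing and once as 3D model coordinates in millimetres, and checks that the two distances agree. The coordinates, spacing, and tolerance are hypothetical; this is not the submission's test code.

```python
import numpy as np

spacing = np.array([0.5, 0.5, 1.0])   # hypothetical voxel size in mm (x, y, z)

# The same two landmarks, once as voxel indices on the image stack...
p_img, q_img = np.array([120, 80, 30]), np.array([160, 110, 42])
# ...and once as millimetre coordinates on the 3D model.
p_mod, q_mod = p_img * spacing, q_img * spacing

dist_on_image = np.linalg.norm((q_img - p_img) * spacing)  # scale indices to mm first
dist_on_model = np.linalg.norm(q_mod - p_mod)

# "Correct results both on images and 3D": the two measurements must agree.
assert abs(dist_on_image - dist_on_model) < 0.01
print(f"{dist_on_image:.2f} mm on the images, {dist_on_model:.2f} mm on the 3D model")
```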
2. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated. The document refers to "verification and validation" and "performance testing" but does not provide details on the number of cases or images used in these tests.
- Data Provenance: Not explicitly stated (e.g., country of origin). It refers to "medical images functionality" and "3D models" but doesn't specify if these were from retrospective patient data, simulated data, etc. The study is described as "non-clinical testing."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Experts: Not explicitly stated. The validation involved "end-users," but their specific number, roles, or qualifications are not provided.
- Ground Truth Establishment: Not explicitly detailed. The comparison against the predicate and reference device functionalities implies that their established performance served as a form of "ground truth" for the new device's functions.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not explicitly stated. There is no mention of a formal reader adjudication process.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC study described. This submission focuses on the device's substantial equivalence in functionality and safety, not on human reader performance improvement with AI assistance. The device's stated indication is "to assist in visualization of treatment options," implying a tool for clinicians, but not an AI-driven diagnostic aid that would typically undergo MRMC studies.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The context suggests a standalone functional assessment of the software's capabilities (e.g., whether it correctly performs contrast adjustments, measurement calculations, etc.) in comparison to the predicate and reference device. It's not an AI algorithm with a distinct "performance" metric like sensitivity/specificity, but rather a functional software application.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- For the functional comparison: The "ground truth" seems to be the established, correct functioning of the predicate and reference devices for equivalent features, and the defined requirements for new features. For instance, if the Mimics Medical device correctly performs "contrast adjustments," the SurgiCase Viewer needs to produce the "same results." For measurements, it needs to provide "correct results." This isn't a traditional clinical ground truth like pathology for a diagnostic AI.
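In practice, a ground truth of this kind reduces to an equivalence check: run the same case through the subject implementation and the reference, and require the outputs to agree within a tolerance. The sketch below shows that generic pattern; the function names, cases, and tolerance are hypothetical, and the submission's actual verification protocol is not described in the document.

```python
import numpy as np

def subject_distance(p, q):
    """Stand-in for the subject device's point-to-point measurement (mm)."""
    return float(np.linalg.norm(np.asarray(q, float) - np.asarray(p, float)))

def reference_distance(p, q):
    """Stand-in for the predicate/reference device's measurement of the same landmarks."""
    return float(sum((b - a) ** 2 for a, b in zip(p, q)) ** 0.5)

def check_same_results(cases, tol_mm=0.01):
    """'Same results' criterion: subject and reference agree within tolerance on every case."""
    for p, q in cases:
        assert abs(subject_distance(p, q) - reference_distance(p, q)) <= tol_mm

check_same_results([((0, 0, 0), (3, 4, 0)),      # both should report 5.0 mm
                    ((10, 2, 1), (13, 6, 1))])
print("subject and reference measurements agree within tolerance")
```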
8. The sample size for the training set:
- Not applicable / Not mentioned. This device description does not indicate the use of machine learning or AI models that require a "training set" in the conventional sense. It's described as a software interface for visualization and measurements.
9. How the ground truth for the training set was established:
- Not applicable. (See point 8).
In summary, the provided document demonstrates that the SurgiCase Viewer is substantially equivalent to existing cleared devices based on a functional and software validation process. It assures that new functionalities do not negatively impact safety or effectiveness and that shared functionalities perform comparably. However, it does not detail the type of rigorous clinical performance study (e.g., with patient data, expert readers, and quantitative statistical metrics) that would be common for AI/ML-driven diagnostic devices.
(87 days)
SurgiCase Viewer is intended to be used as a software interface to assist in visualization and communication of treatment options.
SurgiCase Viewer provides functionality to visualize 3D data and to perform measurements on these 3D data, which should allow a clinician to evaluate and communicate about treatment options.
SurgiCase Viewer is intended for use by people active in the medical sector. When used to review and validate treatment options, SurgiCase Viewer is intended to be used in conjunction with other diagnostic tools and expert clinical judgment.
The SurgiCase Viewer can be used by a medical device/service manufacturer/provider or hospital department to visualize 3D data during the manufacturing process of the product/service to the end-user who is ordering the device/service. This allows the end-user to evaluate and provide feedback on proposals or intermediate steps in the manufacturing of the device or service.
The SurgiCase Viewer is to be integrated with an online Medical Device Data System which is used to process the medical device or service and which is responsible for case management, authorization, authentication, etc.
The data visualized in the SurgiCase Viewer is controlled by the medical device manufacturer using the SurgiCase Viewer in its process. The device manufacturer will create the 3D data to be visualized to the end-user and export it to one of the dedicated formats supported by the SurgiCase Viewer. Each of these formats describes the 3D data in STL format with additional meta-data on the 3D models. The SurgiCase Viewer does not alter the 3D data it imports and its functioning is independent of the specific medical indication or product/service it is used for. It is the responsibility of the medical device company using the SurgiCase Viewer to comply with the applicable medical device regulations.
The Materialise SurgiCase Viewer is a software interface intended for the visualization and communication of treatment options. The provided document is a 510(k) premarket notification summary, which focuses on demonstrating substantial equivalence to predicate devices rather than providing detailed study results on specific acceptance criteria and performance metrics of the device itself.
Based on the provided text, detailed acceptance criteria and the study proving the device meets them, in the typical format of clinical or standalone performance studies, are not extensively described. The document primarily highlights its non-clinical testing for substantial equivalence.
Here's an attempt to extract and synthesize the requested information, noting where specific details are not available in the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics like sensitivity, specificity, accuracy, or effect sizes, which are typically seen in clinical performance studies of AI/imaging devices. Instead, the "Performance Data" section refers to "Non-clinical tests" conducted to validate the application for its intended use and determine substantial equivalence.
| Acceptance Criterion (Inferred from "Non-clinical tests") | Reported Device Performance (Inferred/Summarized) |
|---|---|
| Functionality and performance of the SurgiCase Viewer are substantially equivalent to predicate devices (K113599 and K132290). | Non-clinical testing indicated that the subject device is as safe, as effective, and performs as well as the predicates. |
| Ability to visualize 3D data. | Device provides functionality to visualize 3D data. |
| Ability to perform measurements on 3D data. | Device provides functionality to perform measurements on 3D data. |
| Integration with an online Medical Device Data System. | Intended to be integrated with an online Medical Device Data System for case management, authorization, authentication, etc. |
| Does not alter the 3D data it imports. | The SurgiCase Viewer does not alter the 3D data it imports. |
| Supports dedicated 3D data formats (e.g., STL with additional meta-data). | Device imports 3D data in STL format with additional meta-data on the 3D models. |
| Functioning independent of specific medical indication or product/service. | Its functioning is independent of the specific medical indication or product/service it is used for. |
2. Sample Size for the Test Set and Data Provenance
The document states "Non-clinical tests" were performed. However, it does not specify the sample size used for any test set (e.g., number of cases, number of 3D models). It also does not mention the data provenance (e.g., country of origin, retrospective or prospective nature) as it refers to non-clinical testing, which typically involves technical verification and validation rather than studies on patient data.
3. Number of Experts and Qualifications for Ground Truth
The document does not mention the use of experts to establish ground truth for a test set. This is consistent with its focus on non-clinical testing and substantial equivalence rather than a clinical performance evaluation against expert consensus.
4. Adjudication Method for the Test Set
As no expert ground truth or clinical test set is described, there is no mention of an adjudication method (e.g., 2+1, 3+1, none).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not describe an MRMC comparative effectiveness study comparing human readers with and without AI assistance. Therefore, no effect size for human improvement is provided.
6. Standalone (Algorithm Only) Performance Study
The document does not present a standalone performance study in terms of typical clinical metrics (e.g., sensitivity, specificity) for the algorithm itself. The "non-clinical tests" relate to the device's functional performance and its equivalence to predicates.
7. Type of Ground Truth Used
The document does not specify a "ground truth" type in the context of expert consensus, pathology, or outcomes data. The validation described is focused on functional and performance equivalence during "non-clinical tests," implying a technical or engineering validation against specified requirements or predicate device behavior.
8. Sample Size for the Training Set
The document does not mention a training set sample size. This aligns with the description of "SurgiCase Viewer" as a software interface for visualization and measurements, suggesting it might not be a machine learning or AI algorithm that requires a traditional training set in the same way. It's more of a tool that processes and displays pre-existing 3D data.
9. How Ground Truth for the Training Set Was Established
As no training set is mentioned or implied in the context of machine learning, the document does not describe how ground truth for a training set was established.
(69 days)
The SurgiCase Orthopaedics system is intended to be used as a surgical instrument to assist in pre-operative planning and/ or in guiding the marking of bone and/or guide surgical instruments in non-acute, non-joint replacing osteotomies
· For adult patients; in upper extremity orthopedic surgical procedures and orthopedic surgical procedures around the knee.
· For pediatric patients 7 years of age and older; in orthopedic surgical procedures involving the radius and ulna.
SurgiCase Guides are intended for single use only.
The SurgiCase Orthopaedics system is intended to be used as a surgical instrument to transfer a pre-surgical plan to surgeries involving osteotomies in upper extremity orthopedic surgical procedures and orthopedic surgical procedures around the knee.
For pediatric patients 7 years of age and older, it is intended to be used in osteotomies involving the radius and ulna.
The SurgiCase Orthopaedics system is composed of two components: SurgiCase Connect (software) and SurgiCase Guides (hardware).
SurgiCase Connect is a medical device used by Materialise and a surgeon for pre-surgical simulation of surgical treatment options. This includes transferring, visualizing, measuring, and editing medical data.
The SurgiCase Guides are patient specific templates that are designed and manufactured based on a pre-surgical software plan for a specific patient. In surgery these guides are used to assist a surgeon in guiding the marking of bone and/or guiding surgical instruments to cut and drill according to the pre-surgical plan.
All guides are individually designed and manufactured for each patient using a design and manufacturing process with strict procedures and work instructions. Part of this process is a scientific Stability Model which measures the sensitivity of a guide to movement during surgery. The use of this Stability Model provides a way to find the most stable position of the base plate on the individual patient's anatomy for accurate guiding of surgical instruments. The Stability Model is anatomy independent, thus it can be applied to any bony structure in upper and lower extremity surgical procedures.
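The summary does not explain how the Stability Model is computed, so it cannot be reconstructed from this document. Purely to illustrate the general idea of ranking candidate base-plate positions by their resistance to movement, the sketch below scores each candidate contact footprint by its narrower in-plane spread (a wide, non-collinear footprint resists rocking better than a narrow one). Every name and number is hypothetical; this is not Materialise's Stability Model.

```python
import numpy as np

def stability_score(contact_points):
    """Crude proxy: the second-smallest eigenvalue of the contact-point covariance,
    i.e. the narrower in-plane spread of the footprint. Near-collinear contacts
    score close to zero, meaning the guide could rock about that axis."""
    cov = np.cov(np.asarray(contact_points, dtype=float), rowvar=False)
    return float(np.linalg.eigvalsh(cov)[1])  # eigenvalues are returned in ascending order

def most_stable(candidates):
    """Pick the candidate placement whose contact footprint scores highest."""
    return max(candidates, key=lambda c: stability_score(c["contacts"]))

# Hypothetical candidate placements (contact points in mm on the bone surface).
candidates = [
    {"name": "proximal", "contacts": [(0, 0, 0), (20, 1, 0), (40, 0, 1)]},   # nearly collinear
    {"name": "distal",   "contacts": [(0, 0, 0), (20, 15, 2), (5, 25, 1)]},  # well spread out
]
print(most_stable(candidates)["name"])  # → "distal" with these hypothetical footprints
```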
The provided text is a 510(k) summary for the Materialise N.V. Surgicase Orthopaedics system (K163156). It describes a Class II medical device intended for surgical planning and guiding instruments, particularly for osteotomies. The submission expands the intended use to include pediatric patients aged 7 years and older, specifically for procedures involving the radius and ulna.
Based on the provided text, the acceptance criteria and study proving the device meets those criteria can be summarized as follows:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly present a table of quantitative acceptance criteria with corresponding performance metrics. Instead, it focuses on demonstrating substantial equivalence to the predicate device (K132290), particularly for the new pediatric patient population. The primary "acceptance" is tied to the guide's fit despite bone growth in pediatric patients.
| Acceptance Criterion (Implicit) | Reported Device Performance (Summary) |
|---|---|
| Guide maintains good fit despite pediatric bone growth. | "A fit test is performed in which guides were placed on the grown, 3D-printed, pediatric bone models and evaluated." This test determined the maximal allowed growth. Based on this, a "useful life" period of 3 weeks was established for all indicated pediatric patients, meaning the device's performance is not expected to be affected by bone growth within this timeframe. Multiple safety factors were incorporated into useful life calculations due to extrapolations from limited literature. |
| Functional elements (drill sleeves, cutting slots, fixation holes) perform identically to predicate device on pediatric patients. | Stated as: "The functional elements on the guides, i.e. drill sleeves, cutting slots, fixation holes, remain identical to the predicate device when used on a pediatric patient." No specific performance data for these elements for pediatric patients are provided beyond this qualitative statement, implying that their performance is considered equivalent due to identical design and function. |
| Device is as safe and effective as the predicate device. | "All non clinical testing and the retrospective analysis of clinical cases indicate that the subject device is as safe, as effective, and performs as well as the predicate device." This is a general conclusion based on the aggregate of the reported tests. |
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set (for "Fit Tests"): The text mentions "grown, 3D-printed, pediatric bone models." It does not specify the numerical sample size for these models. The provenance is implied to be from "pediatric clinical cases" that were used to simulate growth.
- Retrospective Analysis: "Retrospective analysis of US and OUS pediatric clinical cases" was conducted. The specific sample size for this analysis is not provided. The data provenance is stated as "US and OUS" (Outside US), indicating a mix of international data, and it was "retrospective."
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
The document does not mention the use of experts to establish ground truth for the "fit tests" on 3D-printed models. The evaluation of guide fit on these models appears to be an objective measurement against defined criteria for "maximal allowed growth."
For the retrospective clinical cases, there's no mention of experts establishing ground truth or their qualifications. The analysis "helped to further support the safety and short-term efficacy," suggesting a review of clinical outcomes rather than establishing a gold standard for specific measurements performed by the device itself.
4. Adjudication Method for the Test Set:
No adjudication method (e.g., 2+1, 3+1) is described for the "fit tests" or the retrospective analysis, as these are not studies involving subjective interpretations requiring consensus.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done:
No MRMC comparative effectiveness study involving human readers with and without AI assistance is mentioned. The device described (SurgiCase Orthopaedics system) is a surgical planning and guiding system, not an AI diagnostic tool primarily evaluated for human reader improvement.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:
The "fit tests" on 3D-printed models and the determination of "useful life" can be considered an assessment of the device's performance (specifically the guide's physical fit) in a standalone manner, independent of a human surgeon's real-time interaction during surgery, but based on the system's design and manufacturing outputs. The "Stability Model" is also described as a scientific model for determining stable placement, which would be a standalone algorithmic component.
7. The Type of Ground Truth Used:
- For "Fit Tests": The ground truth for the fit tests appears to be defined by a "maximal allowed growth" criterion, which was determined through simulating growth on 3D-printed bone models derived from pediatric clinical cases. This is a synthetic or simulated ground truth based on anatomical measurements.
- For Retrospective Analysis: The "safety and short-term efficacy" in the retrospective analysis implies clinical outcomes (e.g., successful procedure, absence of complications related to the device) as the ground truth, rather than a specific measurement.
8. The Sample Size for the Training Set:
The document does not mention a "training set" in the context of an AI/machine learning model. The device components described are software for planning and hardware (guides). The "Stability Model" is described as a "scientific Stability Model," not explicitly as a machine learning model that would require a training set. The "design and manufacturing process with strict procedures and work instructions" implies a more traditional engineering approach rather than an AI-driven one.
9. How the Ground Truth for the Training Set Was Established:
Since no distinct "training set" for an AI model is described, there's no information on how its ground truth was established. The "useful life" period calculation was based on a "literature study" covering bone growth and the results of the "fit tests" on 3D-printed models.
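To make the structure of such a useful-life calculation concrete: it amounts to dividing the maximal allowed growth found in the fit tests by a literature-derived growth rate, with a safety factor applied. The numbers below are hypothetical placeholders, chosen only so the arithmetic lands on the 3-week figure the document reports; the submission's actual values are not disclosed.

```python
# Hypothetical useful-life calculation; none of these values come from the 510(k).
max_allowed_growth_mm = 0.6      # growth at which a guide would no longer fit (fit tests)
growth_rate_mm_per_week = 0.1    # worst-case bone growth rate (literature study)
safety_factor = 2.0              # margin for extrapolating from limited literature

useful_life_weeks = max_allowed_growth_mm / (growth_rate_mm_per_week * safety_factor)
print(f"useful life ≈ {useful_life_weeks:.0f} weeks")  # 3 weeks with these placeholder inputs
```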
(261 days)
The SurgiCase Orthopaedics system is intended to be used as a surgical instrument to assist in pre-operative planning and/or in guiding the marking of bone and/or guide surgical instruments in non-acute, non-joint replacing osteotomies for upper extremity orthopedic surgical procedures and osteotomies around the knee.
The system is to be used for adult patients.
SurgiCase Guides are intended for single use only.
The SurgiCase Orthopaedics system is intended to be used as a surgical instrument to transfer a pre-surgical plan to the surgery with osteotomies in upper extremity orthopedic procedures and osteotomies around the knee.
The SurgiCase Orthopaedics system is composed of two components: SurgiCase Connect (software) and SurgiCase Guides (hardware).
SurgiCase Connect is a medical device used by Materialise and a surgeon for pre-surgical simulation of surgical treatment options. This includes transferring, visualizing, measuring, annotating and editing medical data.
The SurgiCase Guides are patient specific templates that are designed and manufactured based on a pre-surgical software plan for a specific patient. In surgery these guides are used to assist a surgeon in guiding the marking of bone and/or guiding surgical instruments to cut and drill according to the pre-surgical plan.
All guides are individually designed and manufactured for each patient using a design and manufacturing process with strict procedures and work instructions to guarantee guides that consistently perform in a safe and effective way. Part of this process is a scientific Stability Model which measures the sensitivity of a guide to movement during surgery. The use of this Stability Model ensures a stable position on the patient's anatomy for accurate guiding of surgical instruments. The Stability Model is anatomy independent, thus it can be applied to any bony structure in upper extremity surgical procedures and osteotomies around the knee.
Here's a breakdown of the acceptance criteria and the study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Test/Study | Reported Device Performance and Acceptance |
| :--------------------------- | :------------------- | :------------------------------------------------ |
| Accuracy | Bone Model Tests | "All results were within the preset acceptance criteria." (Specific numerical criteria not provided) |
| Accuracy | Cadaveric Tests | "All results were within the preset acceptance criteria." (Specific numerical criteria not provided) |
| Safety & Effectiveness | Biocompatibility Test | "Testing verified that the accuracy and performance of the device is adequate to perform as intended." |
| Safety & Effectiveness | Sterilization Dimensional Stability Test | "Testing verified that the accuracy and performance of the device is adequate to perform as intended." |
| Safety & Effectiveness | Cleaning Validation Test | "Testing verified that the accuracy and performance of the device is adequate to perform as intended." |
| Safety & Effectiveness | Packaging and Shipment Test | "Testing verified that the accuracy and performance of the device is adequate to perform as intended." |
| Stability/Fit | Scientific Stability Model | "Ensures the most stable position on the patient's anatomy for accurate guiding of surgical instruments." |
| Software Functionality | Internal and External User Testing & Observations | "Results from this verification and validation testing demonstrate the device's safety and effectiveness is substantially equivalent to the predicate device." |
| Clinical Efficacy | Retrospective Analysis of Clinical Cases (Europe) | "Confirms the subject device's safety and effectiveness is substantially equivalent to the predicate device for use as intended based on surgeon evaluation of expected outcome." |
2. Sample Size Used for the Test Set and Data Provenance
- Bone Model Tests: "On a series of femoral and tibial models" (Specific number of models not provided).
- Cadaveric Tests: "On a series of cadaveric specimens" (Specific number of specimens not provided).
- Retrospective Clinical Cases: "Retrospective analysis of clinical cases performed in Europe" (Specific number of cases not provided).
3. Number of Experts Used to Establish Ground Truth and Qualifications
- For the bone model and cadaveric tests, the "pre-operative planned versus achieved corrected models/specimens were compared." This implies a comparison to a pre-defined plan or standard, rather than a consensus among external experts. The qualifications of who established the "pre-operative plan" and conducted the comparison are not explicitly stated, but it would typically be engineers or qualified personnel involved in the device development.
- For the retrospective clinical cases, "surgeon evaluation of expected outcome" was used. The number and specific qualifications of these surgeons are not provided, other than them being clinical users in Europe.
4. Adjudication Method for the Test Set
- The document implies a comparison of achieved results against "preset acceptance criteria" for the non-clinical tests. This suggests a direct measurement against a standard, rather than a multi-expert adjudication process.
- For the retrospective clinical cases, the ground truth was based on "surgeon evaluation of expected outcome," which might implicitly involve some level of individual surgeon judgment rather than a formal adjudication panel. No specific adjudication method like 2+1 or 3+1 is mentioned.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study is mentioned that assesses how much human readers improve with AI vs without AI assistance. The device is a surgical planning and guiding system, not an AI for image interpretation that would typically involve human readers.
6. Standalone (Algorithm Only) Performance Study
- A standalone performance assessment was implicitly conducted for the SurgiCase Connect software component through "internal and external user testing and observations" and verification against specifications.
- The SurgiCase Guides were validated through "bone model tests" and "cadaveric tests," which involved applying the guides according to a pre-operative plan and comparing the achieved corrected models/specimens to the planned ones. This represents a standalone performance assessment of the guide's accuracy in transferring the surgical plan.
7. Type of Ground Truth Used
- Non-clinical tests (Bone models, Cadavers): "Pre-operative planned" outcomes served as the ground truth against which the achieved corrected models/specimens were compared. This is an engineered or defined ground truth.
- Retrospective Clinical Study: "Surgeon evaluation of expected outcome" served as the ground truth. This is a form of expert clinical judgment/outcome data.
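The planned-versus-achieved comparison described in the first bullet above typically reduces to measuring the deviation between the planned geometry and the realized one and checking it against a preset tolerance. The sketch below does this for an osteotomy-plane orientation; the normals and the tolerance are hypothetical, since the submission's numerical acceptance criteria are not stated.

```python
import numpy as np

def plane_angle_deg(n_planned, n_achieved):
    """Angle in degrees between the planned and achieved osteotomy-plane normals."""
    a, b = np.asarray(n_planned, float), np.asarray(n_achieved, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical planned vs. achieved normals and a hypothetical tolerance.
deviation = plane_angle_deg((0.0, 0.0, 1.0), (0.02, 0.01, 1.0))
tolerance_deg = 2.0
print(f"deviation {deviation:.2f}° — "
      f"{'within' if deviation <= tolerance_deg else 'outside'} the preset tolerance")
```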
8. Sample Size for the Training Set
- The document does not provide explicit information on the sample size used for a "training set." The device is primarily a software system for planning and physical guides for execution, rather than a machine learning algorithm that typically undergoes a separate supervised training phase with a dedicated training set. The descriptions focus on verification and validation against specifications and clinical use.
9. How Ground Truth for the Training Set Was Established
- As no explicit training set is mentioned in the context of a machine learning algorithm, the method for establishing its ground truth is not applicable or described in this document. The "training" for such a system would involve software development, engineering, and iterative testing/refinement against design specifications and user feedback, rather than a formal ground truth for a machine learning model.
(337 days)
The SurgiCase Orthopaedics system is intended to be used as a surgical instrument to assist in pre-operative planning and/or in guiding the marking of bone and/or guide surgical instruments in non-acute, non-joint replacing osteotomies for upper extremity orthopedic surgical procedures.
The system is to be used for adult patients.
SurgiCase Guides are intended for single use only.
The SurgiCase Orthopaedics system is composed of two components: SurgiCase Connect (software) and SurgiCase Guides (hardware).
The SurgiCase Orthopaedics system is intended to be used as a surgical instrument to transfer a pre-surgical plan to the lower and upper extremity during orthopaedic surgical procedures.
SurgiCase Connect is a medical device used by Materialise and a surgeon for pre-surgical simulation and evaluation of surgical treatment options. This includes transferring, visualizing, measuring, annotating and editing medical data.
The SurgiCase Guides are patient specific templates that are based on a pre-surgical software plan and are designed to fit a specific patient. All guides are individually designed and manufactured for each patient using a design and manufacturing process with strict procedures and work instructions to guarantee guides that consistently perform in a safe and effective way. In surgery these guides are used to assist a surgeon in guiding the marking of bone and/or guiding surgical instruments according to the pre-surgical plan.
Here's an analysis of the provided text regarding the acceptance criteria and study for the SurgiCase Orthopaedics system:
The provided 510(k) summary (K112389) for the SurgiCase Orthopaedics system does not explicitly detail specific acceptance criteria with quantifiable metrics for device performance (e.g., "accuracy of +/- 1mm") nor does it describe a formal clinical or standalone comparative study with human readers or a detailed statistical analysis of performance against such criteria.
Instead, the submission focuses on demonstrating substantial equivalence to predicate devices through a comparison of intended use, materials, and performance characteristics, and relies on non-clinical testing for validation. This is a common approach for certain types of medical devices, especially when establishing equivalence to existing technology.
However, based on the limited information provided, we can infer some aspects and present what is available:
1. Table of Acceptance Criteria and Reported Device Performance
As mentioned, specific quantifiable acceptance criteria are not explicitly stated in this document. The "performance" described is more qualitative and relates to successful completion of non-clinical tests to demonstrate safety and effectiveness, and accuracy adequate to perform as intended.
| Acceptance Criteria (Inferred/General) | Reported Device Performance |
|---|---|
| Substantial equivalence to predicate devices | Device comparison showed substantial equivalence. |
| Software validation for intended use | SurgiCase Connect software validated. |
| Accuracy of guides for surgical planning/guidance | Testing verified accuracy and performance of guides is adequate. |
| Biocompatibility of SurgiCase Guides | Biocompatibility tests performed and met requirements. |
| Sterilization dimensional stability of Guides | Sterilization dimensional stability tests performed and met requirements. |
| Debris test results for Guides | Debris tests performed and met requirements. |
| Packaging and shipment integrity for Guides | Packaging and shipment tests performed and met requirements. |
| Cleaning validation for Guides | Cleaning validation tests performed and met requirements. |
2. Sample Size for the Test Set and Data Provenance
The document states: "SurgiCase Guides were validated through non-clinical studies using bone models and cadaver specimens."
- Sample Size for Test Set: Not specified. The number of bone models and cadaver specimens used is not provided.
- Data Provenance: The validation was non-clinical, using bone models and cadaver specimens. The country of origin for these specimens is not mentioned, nor whether the data was retrospective or prospective (though for non-clinical lab testing, "prospective" would be the more fitting description of how the tests were conducted).
3. Number of Experts Used to Establish Ground Truth and Qualifications
This information is not provided in the document. Since the validation was entirely non-clinical using bone models and cadaver specimens, the concept of "experts establishing ground truth" in the diagnostic imaging sense (e.g., radiologists reviewing images) does not directly apply here. Instead, ground truth would be physical measurements and objective assessments against known parameters of the models/specimens, likely performed by engineers, technicians, and potentially surgeons involved in the study design.
4. Adjudication Method for the Test Set
This information is not applicable/provided. Adjudication methods like 2+1 or 3+1 are typically used in clinical studies involving human readers or expert review of data where there might be disagreements. Since the validation was non-clinical with bone models and cadavers, and no human reader interpretation of images is described as part of the primary validation for the stated performance, an adjudication method is not relevant.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document does not mention any studies involving human readers, either with or without AI assistance, or comparisons between them. The focus is on the device's standalone performance in non-clinical settings.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, implicitly. The non-clinical validation tests using bone models and cadaver specimens assess the performance of the SurgiCase Guides (hardware) and the SurgiCase Connect software (algorithm for planning/design) in isolation from a live surgical scenario involving a human surgeon's real-time interaction. The software is validated for its intended use, and the guides are tested for accuracy. This can be considered a form of standalone performance assessment as it evaluates the device's ability to "perform as intended" without human intervention in the measurement of its accuracy.
7. The Type of Ground Truth Used
The ground truth for the non-clinical validation was likely based on:
- Physical measurements: Precise measurements taken on the bone models and cadaver specimens to assess the accuracy of the guides and the outcomes of the simulated procedures. This would involve comparing the guided cuts/markings to the pre-surgical plan.
- Known parameters of the models: For engineered bone models, the "ground truth" of anatomical features and target osteotomy locations would be precisely known.
- Biocompatibility standards: For biocompatibility, the ground truth would be established regulatory standards and test results.
- Sterilization efficacy: For sterility, established protocols and detection limits.
It is not pathology, expert consensus in a diagnostic sense, or outcomes data from real patients.
8. The Sample Size for the Training Set
This information is not provided in the document. The filing describes the product as an "Image processing system and software for simulating/evaluating implant placement and surgical treatment options." While image processing software often involves machine learning that requires training data, the document does not specify any ML/AI components in detail or reference a training set. The descriptions point more towards conventional computational geometry and visualization software. If a machine learning component were present, its training data size and provenance would be crucial.
9. How the Ground Truth for the Training Set Was Established
Since information about a training set or specific machine learning components is not provided, how its ground truth was established is also not described.
(144 days)
The SurgiCase system is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging scanner. It is also intended as pre-operative software for simulating / evaluating implant placement and surgical treatment options.
SurgiCase Connect for iPad is a component of the SurgiCase system and intended to be used as a software interface to assist in pre-operative planning by simulation / evaluation of surgical treatment options.
The Materialise SurgiCase system is a software medical device to transfer and to segment imaging information from a medical scanner such as a CT or MRI scanner. It allows for presurgical simulation and evaluation of implant placement and surgical treatment options.
SurgiCase Connect is a medical device for pre-surgical simulation and evaluation of surgical treatment options. This includes transferring, visualizing and editing medical data.
Based on a pre-surgical software plan the patient specific templates - SurgiCase Guides can be manufactured to fit a specific patient. SurgiCase Guides are not a part of this premarket notification submission.
The provided text is a 510(k) summary from Materialise N.V. regarding their SurgiCase system, specifically focusing on the new component, SurgiCase Connect for iPad. The core of the submission is to demonstrate substantial equivalence to a predicate device (SurgiCase K073449).
The document explicitly states:
- Clinical testing: Not applicable.
This means that a clinical study with acceptance criteria and reported device performance, as typically understood in a clinical trial context, was not performed for this submission. The device is being cleared based on its substantial equivalence to a predicate device through non-clinical testing.
Therefore, I cannot provide the detailed information requested regarding acceptance criteria and the study proving the device meets them, because such a study (clinical or performance study with defined acceptance criteria and results) is not described in this 510(k) summary.
The submission focuses entirely on demonstrating substantial equivalence through non-clinical testing, primarily asserting that the new component (SurgiCase Connect for iPad) has equivalent intended use, performance characteristics, design, and function to the predicate SurgiCase system.
(298 days)
SurgiCase Guides are intended to be used as surgical tools to transfer a pre-operative plan to the surgery. The devices are intended to guide the marking of bone and/or guide surgical instruments during craniofacial osteotomies.
SurgiCase Guides are intended for single use only.
The SurgiCase Guides are patient specific devices or templates that are based on a pre-operative software planning and are designed to fit a specific patient. These templates are used to assist a surgeon in transferring this pre-operative plan to the surgery by guiding the marking of bone and/or guiding surgical instruments. Guides are individually designed and manufactured for each patient using a design and manufacturing process with strict procedures and work instructions to guarantee templates that consistently perform in a safe and effective way.
The SurgiCase Guides are based on a software planning generated using the previously cleared SurgiCase software (K073449).
The provided text describes non-clinical tests but does not include a table of acceptance criteria or specific performance metrics with numerical values. Therefore, I cannot generate 'A table of acceptance criteria and the reported device performance'.
Here's the information extracted from the document regarding the study and acceptance criteria:
1. A table of acceptance criteria and the reported device performance
No quantitative acceptance criteria or reported performance metrics are provided in the document. The text broadly states: "The guides meet the predefined acceptance criteria." and "Testing verified that the accuracy and performance of the system is adequate to perform as intended."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified. The document mentions "bone models and cadaveric specimens" but does not give the number used.
- Data Provenance: Not specified. The study involved "bone models and cadaveric specimens," implying a laboratory or simulated environment rather than human patient data, making it prospective in nature for device validation. Country of origin is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. The ground truth for the "quantitative validation" would be based on measurements against the pre-operative plan, not expert consensus as it's a device accuracy study. Therefore, no experts were explicitly used to establish ground truth in this context.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. This study focused on the accuracy of the device in transferring a pre-operative plan, not on diagnostic interpretations requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. The study focused on the device's accuracy in guiding surgical procedures, not on human reader performance with or without AI assistance. The device in question is a surgical guide, not an AI-driven diagnostic tool for interpretation by human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance assessment was conducted for the device. The "Quantitative validation using bone models and cadaveric specimens to validate the accuracy the guides obtain in transferring a surgical planning to the actual surgery during craniofacial osteotomies" evaluated the device's inherent accuracy. This is a standalone assessment of the physical guide's performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth implicitly used for the "quantitative validation" would be the precise measurements and positions defined in the pre-operative software plan against which the actual surgical outcome (guided by the device) was compared. This is a form of direct measurement against a defined target. For "qualitative validation," the ground truth was the expected "fit and stability" as assessed by observers, but the specific criteria for this are not detailed.
8. The sample size for the training set
Not applicable. This device is a physical surgical guide developed through a design and manufacturing process, not a machine learning algorithm that requires a training set. The "SurgiCase software" which generates the planning data for the guides was "previously reviewed under K073449" and is not part of this 510(k) submission. Therefore, no training set for the SurgiCase Guides themselves is relevant here.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for the SurgiCase Guides.
(105 days)
SurgiCase is software for pre-operative simulation of orthognathic surgical treatment options, based on imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging (MRI) scanner.
This submission is a Traditional 510(k) for the Orthognathic wizard of SurgiCase software application.
SurgiCase is software for pre-operative simulation of orthognathic surgical treatment options, based on imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging (MRI) scanner.
Based on the software planning several options are available to transfer the result of the planning to surgery. Examples:
- The software planning can be used to select appropriate implants or implant sizes for use during surgery.
- Based on the planning, patient-specific surgical guides and implants can be designed.
- Patient-specific surgical splints can be generated to transfer the planned dental occlusion to surgery.
The SurgiCase software platform is the basis of all clinical Materialise software designed for surgery planning. The platform allows basic functionality such as visualizing 3D objects, visualizing medical image data, generating 3D objects from medical image data and measuring.
On top of this platform, modules, also called wizards, can be added that each offer additional functionality such as planning a specific surgical routine. This platform is the main general wizard, while additional modules (wizards) are mainly based on the functionality of this general wizard; they assist the surgeon to plan specific surgery types step-by-step by providing each a different user interface, giving the surgeon the opportunity to fine tune parameters specific for that type of surgery. Current premarket notification is only for the Orthognathic wizard of the SurgiCase software. The rest of software wizards have been cleared under K073449 submission for the SurgiCase software.
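The basic platform functionality listed above (generating 3D objects from medical image data and measuring on them) can be illustrated with a minimal thresholding example. This is a generic sketch on a synthetic volume, not SurgiCase code; the spacing, intensities, and threshold are hypothetical stand-ins for real CT data.

```python
import numpy as np

# Synthetic "CT" volume: a bright sphere (bone-like intensities) in a dark background.
shape = (64, 64, 64)
spacing_mm = np.array([0.5, 0.5, 1.0])          # hypothetical voxel size along x, y, z
zz, yy, xx = np.indices(shape)
center = np.array(shape) / 2.0
volume = np.where((zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
                  <= 15 ** 2, 1200, 40).astype(np.int16)

# "Generating 3D objects from medical image data": simple threshold segmentation.
threshold = 300                                  # hypothetical bone-like threshold
mask = volume >= threshold

# "Measuring": the segmented object's volume from the voxel count and spacing.
segmented_mm3 = mask.sum() * float(np.prod(spacing_mm))
print(f"segmented object ≈ {segmented_mm3:.0f} mm³")
```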
Here's an analysis of the provided text regarding the acceptance criteria and study for the SurgiCase Orthognathic software wizard, based on the information available in the 510(k) summary:
Summary of Device Acceptance Criteria and Performance Data (Based on this 510(k) Pre-submission Documentation):
Based on the provided 510(k) summary, the device's acceptance criteria primarily revolve around its equivalence to its predicate device (SurgiCase, K073449) and successful completion of non-clinical software verification and validation. There are no explicit, quantifiable acceptance criteria or reported performance metrics in the provided text other than the successful completion of these non-clinical tests.
1. Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Equivalence to Predicate Device (SurgiCase K073449) in: | |
| * Intended Use | Achieved (stated in "Summary of technological characteristics" and implied by FDA clearance) |
| * Materials | Achieved (stated in "Summary of technological characteristics") |
| * Performance Characteristics | Achieved (stated in "Summary of technological characteristics") |
| Software Verification and Validation Testing: | |
| * Completion of Verification and Validation Reports | "Will be completed by the end of August 2011. Verification and validation reports will be on file at Materialise from that point on and can be sent on request." (Indicates an intent to meet, and subsequently FDA clearance implies it was met) |
Missing Information/Caveats: The document explicitly states that "Software verification and validation testing will be completed by the end of August 2011." While the FDA's clearance letter implies these were successfully completed and reviewed, the detailed reports themselves are not part of this public summary. Therefore, specific quantifiable acceptance criteria (e.g., accuracy, precision, processing time) and their corresponding performance values from these internal tests are not provided in this document. The "reported device performance" in the table above is inferred from the FDA's clearance.
2. Sample Size Used for the Test Set and the Data Provenance:
- Sample Size for Test Set: Not specified.
- Data Provenance: Not specified. The document states the software uses "imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging (MRI) scanner," but does not mention the origin (country, specific hospitals) or nature (retrospective/prospective) of any specific data used for testing or validation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication Method for the Test Set:
- Adjudication Method: Not specified.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done and, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- MRMC Study: No. The document explicitly states "Clinical testing: Not applicable." This indicates that no studies comparing human readers with and without AI assistance were conducted or submitted as part of this 510(k).
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Standalone Performance: The 510(k) summary only mentions "Software verification and validation testing." While these tests likely assessed the algorithm's performance in isolation (standalone), the specific details of these tests, including the metrics and results, are not provided in this document. The document primarily focuses on the regulatory submission process and substantial equivalence, not detailed technical performance studies.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Type of Ground Truth: Not specified. Given the absence of detailed clinical or performance studies, the specific type of ground truth used for any internal software testing is not available in this summary.
8. The sample size for the training set:
- Sample Size for Training Set: Not specified. This type of detail, if applicable to the software's development (e.g., for machine learning components, which are not explicitly mentioned but could be part of "image processing"), is not included in this 510(k) summary.
9. How the ground truth for the training set was established:
- Ground Truth for Training Set: Not specified.
(144 days)
SurgiCase Guides are intended to be used as surgical tools to transfer a pre-operative plan to the surgery. The devices are intended to guide the marking of bone and/or guide surgical instruments in mandibular and maxillofacial surgical procedures.
SurgiCase Guides are intended for single use only.
The SurgiCase Guides are patient specific devices or templates that are based on a preoperative software planning and are designed to fit a specific patient. These templates are used to assist a surgeon in transferring this pre-operative plan to the surgery by guiding the marking of bone and/or guiding surgical instruments. A standardized design and manufacturing process with detailed procedures and work instructions allows manufacturing patient-specific templates that consistently perform in a safe and effective way during surgery.
The SurgiCase Guides are based on a software planning generated using the previously cleared SurgiCase software (K073449).
SurgiCase is software for pre-operative simulation and evaluation of implant placement and surgical treatment options, based on imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging (MRI) scanner. The SurgiCase software was previously reviewed under K073449 and is not submitted for review in this 510k submission. References to the software are included to give a complete overview on the guide design process.
The provided text does not contain detailed information about specific acceptance criteria with quantifiable metrics, a formal study demonstrating the device meets these criteria, or granular details about test sets, ground truth establishment, or multi-reader multi-case studies typically associated with AI/ML device evaluations.
The document describes a 510(k) submission for "SurgiCase Guides," which are patient-specific surgical templates based on pre-operative software planning. The performance data section broadly mentions "non-clinical tests such as quantitative validation using bone models and cadaveric specimens" were performed and "verified that the accuracy and performance of the system is adequate to perform as intended." However, it does not provide the specific acceptance criteria or the results of these tests in a quantifiable manner.
Here's an analysis of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Quantitative) | Reported Device Performance |
|---|---|
| Not specified in document | "accuracy and performance of the system is adequate to perform as intended." (No quantitative metrics provided) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample size: Not specified. The document mentions "bone models and cadaveric specimens" but does not give the number of models or specimens used.
- Data provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable/Not specified. The term "ground truth" as typically used in AI/ML evaluation referring to expert-annotated data is not mentioned. The device is a surgical guide system, and its validation involves physical accuracy on models/cadavers, not interpretation of images by experts.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable/Not specified. Adjudication methods are typically relevant for cases where multiple experts provide annotations that need to be reconciled to establish a ground truth for a test set in AI/ML contexts. This is not described for a physical device validation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study is not mentioned. This type of study is usually conducted for AI-powered diagnostic or assistive tools where human readers are involved in interpreting data with and without AI. The SurgiCase Guides are physical surgical tools, not AI for human interpretation.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- The document implies that the "quantitative validation using bone models and cadaveric specimens" focused on the physical accuracy of the guides themselves, which could be considered a form of standalone performance of the device. However, it's not an "algorithm only" performance because the device itself is a physical object designed to guide surgical instruments. The software used to design the guides (SurgiCase software K073449) was previously cleared and is not the subject of this 510(k) submission.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The "ground truth" in this context would likely refer to the actual surgical plan (as transferred to the models/cadavers) or ideal anatomical positioning, against which the guided surgical actions were compared for accuracy. The document mentions "quantitative validation," suggesting objective measurements of accuracy were taken, but the specific nature of this "ground truth" (e.g., precise measurements of osteotomy planes or screw placement from a gold standard) is not detailed.
8. The sample size for the training set
- Not applicable/Not specified. Surgical guides are precisely manufactured to a patient's anatomy based on imaging and a pre-operative plan. There is no "training set" in the sense of machine learning model development mentioned for the SurgiCase Guides themselves. The underlying SurgiCase software (K073449) generates the plan, but its "training" details are not part of this submission.
9. How the ground truth for the training set was established
- Not applicable/Not specified. As there is no "training set" described for the SurgiCase Guides, the establishment of ground truth for such a set is not discussed.
(128 days)
The Materialise SurgiCase System is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging scanner. It is also intended as pre-operative software for simulating / evaluating implant placement and surgical treatment options.
The Materialise SurgiCase System is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner or a Magnetic Resonance Imaging scanner. It is also intended as pre-operative software for simulating / evaluating implant placement and surgical treatment options.
The provided text does not contain information about acceptance criteria, device performance results, sample sizes, data provenance, expert ground truth establishment, adjudication methods, MRMC studies, or standalone algorithm performance.
The document is a 510(k) summary for the Materialise SurgiCase System, focusing on demonstrating substantial equivalence to a predicate device (SimPlant). It describes the device's intended use and classification but does not include details of a study that proves the device meets specific acceptance criteria based on performance data. The FDA letter confirms the clearance based on substantial equivalence, not on a detailed performance study presented in this summary.