510(k) Data Aggregation
(205 days)
SurgiCase Viewer
SurgiCase Viewer is intended to be used as a software interface to assist in visualization of treatment options.
SurgiCase Viewer provides functionality to allow visualization of 3D data and to perform measurements on these 3D data, which should allow a clinician to evaluate and communicate about treatment options.
SurgiCase Viewer is intended for use by people active in the medical sector. When used to review and validate treatment options, SurgiCase Viewer is intended to be used in conjunction with other diagnostic tools and expert clinical judgment.
The SurgiCase Viewer can be used by a medical device/service manufacturer/provider or hospital department to visualize 3D data, during the manufacturing process of the product/service, to the end-user who is ordering the device/service. This allows the end-user to evaluate and provide feedback on proposals or intermediate steps in the manufacturing of the device or service.
The SurgiCase Viewer is to be integrated with an online Medical Device Data System which is used to process the medical device or service and which is responsible for case management, user management, authorization, authentication, etc.
The data visualized in the SurgiCase Viewer is controlled by the medical device manufacturer using the SurgiCase Viewer in its process. The device manufacturer creates the 3D data to be visualized to the end-user and exports it to one of the dedicated formats supported by the SurgiCase Viewer. Each of these formats describes the 3D data in STL format with additional meta-data on the 3D models. The SurgiCase Viewer does not alter the 3D data it imports, and its functioning is independent of the specific medical indication/situation or product/service it is used for. It is the responsibility of the medical device company using the SurgiCase Viewer to comply with the applicable medical device regulations.
The provided text describes the 510(k) submission for the "SurgiCase Viewer" device (K213684). However, it does not contain the specific details required to fully address all parts of your request related to acceptance criteria, test set specifics, expert ground truth establishment, MRMC studies, or training set details. This document primarily focuses on demonstrating substantial equivalence to a predicate device.
The study presented here is a non-clinical performance evaluation comparing the new SurgiCase Viewer with its predicate (K170419) and a secondary reference device (K183105).
Here's a breakdown of what can be extracted and what is missing, based on your questions:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with numerical performance metrics. Instead, it states that the device was validated to determine substantial equivalence based on:
- Intended Use: "Both the subject device as well as the predicate device have the same intended use; they are both intended to be used as a software interface to assist in visualization and communication of treatment options."
- Device Functionality: The new device was compared to the predicate in terms of features like 3D view navigation, visualization options, measuring, and annotations. For new functionalities (medical image visualization, VR visualization), it states "The abovementioned technological differences do not impact the safety and effectiveness of the subject device for the proposed intended use as is demonstrated by the verification and validation plan."
- Medical Images Functionality (compared to Mimics Medical K183105): "Both functionality produce the same results in: Contrast adjustments, Interactive image reslicing, 3D contour overlay on images."
- Measurement functionality: "Measurement functionality on images was compared with already existing functionality on the 3D models and shown to provide correct results both on images and 3D."
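The measurement cross-check described in the last bullet (the same distance, computed on images and on 3D models, should agree) can be sketched as follows. This is a hypothetical illustration, not the manufacturer's test code; `distance_3d` and `distance_on_image` are made-up names, and the image case assumes a DICOM-style (row, column) pixel spacing in millimetres.

```python
import math

def distance_3d(p, q):
    """Euclidean distance between two 3D points in model space (mm)."""
    return math.dist(p, q)

def distance_on_image(px_a, px_b, pixel_spacing):
    """Distance between two pixel coordinates, scaled to mm by the
    image's (row, column) pixel spacing, as an image viewer would."""
    dr = (px_a[0] - px_b[0]) * pixel_spacing[0]
    dc = (px_a[1] - px_b[1]) * pixel_spacing[1]
    return math.hypot(dr, dc)
```

A functional validation of the kind the document alludes to would then assert that both routes return the same value for corresponding landmark pairs.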
2. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated. The document refers to "verification and validation" and "performance testing" but does not provide details on the number of cases or images used in these tests.
- Data Provenance: Not explicitly stated (e.g., country of origin). It refers to "medical images functionality" and "3D models" but doesn't specify if these were from retrospective patient data, simulated data, etc. The study is described as "non-clinical testing."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Experts: Not explicitly stated. The validation involved "end-users," but their specific number, roles, or qualifications are not provided.
- Ground Truth Establishment: Not explicitly detailed. The comparison against the predicate and reference device functionalities implies that their established performance served as a form of "ground truth" for the new device's functions.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not explicitly stated. There is no mention of a formal reader adjudication process.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC study described. This submission focuses on the device's substantial equivalence in functionality and safety, not on human reader performance improvement with AI assistance. The device's stated indication is "to assist in visualization of treatment options," implying a tool for clinicians, but not an AI-driven diagnostic aid that would typically undergo MRMC studies.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- The context suggests a standalone functional assessment of the software's capabilities (e.g., whether it correctly performs contrast adjustments, measurement calculations, etc.) in comparison to the predicate and reference device. It's not an AI algorithm with a distinct "performance" metric like sensitivity/specificity, but rather a functional software application.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- For the functional comparison: The "ground truth" seems to be the established, correct functioning of the predicate and reference devices for equivalent features, and the defined requirements for new features. For instance, if the Mimics Medical device correctly performs "contrast adjustments," the SurgiCase Viewer needs to produce the "same results." For measurements, it needs to provide "correct results." This isn't a traditional clinical ground truth like pathology for a diagnostic AI.
8. The sample size for the training set:
- Not applicable / Not mentioned. This device description does not indicate the use of machine learning or AI models that require a "training set" in the conventional sense. It's described as a software interface for visualization and measurements.
9. How the ground truth for the training set was established:
- Not applicable. (See point 8).
In summary, the provided document demonstrates that the SurgiCase Viewer is substantially equivalent to existing cleared devices based on a functional and software validation process. It assures that new functionalities do not negatively impact safety or effectiveness and that shared functionalities perform comparably. However, it does not detail the type of rigorous clinical performance study (e.g., with patient data, expert readers, and quantitative statistical metrics) that would be common for AI/ML-driven diagnostic devices.
(87 days)
SurgiCase Viewer
SurgiCase Viewer is intended to be used as a software interface to assist in visualization and communication of treatment options.
SurgiCase Viewer provides functionality to visualize 3D data and to perform measurements on these 3D data, which should allow a clinician to evaluate and communicate about treatment options.
SurgiCase Viewer is intended for use by people active in the medical sector. When used to review and validate treatment options, SurgiCase Viewer is intended to be used in conjunction with other diagnostic tools and expert clinical judgment.
The SurgiCase Viewer can be used by a medical device/service manufacturer/provider or hospital department to visualize 3D data, during the manufacturing process of the product/service, to the end-user who is ordering the device/service. This allows the end-user to evaluate and provide feedback on proposals or intermediate steps in the manufacturing of the device or service.
The SurgiCase Viewer is to be integrated with an online Medical Device Data System which is used to process the medical device or service and which is responsible for case management, authorization, authentication, etc.
The data visualized in the SurgiCase Viewer is controlled by the medical device manufacturer using the SurgiCase Viewer in its process. The device manufacturer creates the 3D data to be visualized to the end-user and exports it to one of the dedicated formats supported by the SurgiCase Viewer. Each of these formats describes the 3D data in STL format with additional meta-data on the 3D models. The SurgiCase Viewer does not alter the 3D data it imports, and its functioning is independent of the specific medical indication or product/service it is used for. It is the responsibility of the medical device company using the SurgiCase Viewer to comply with the applicable medical device regulations.
The Materialise SurgiCase Viewer is a software interface intended for the visualization and communication of treatment options. The provided document is a 510(k) premarket notification summary, which focuses on demonstrating substantial equivalence to predicate devices rather than providing detailed study results on specific acceptance criteria and performance metrics of the device itself.
Based on the provided text, detailed acceptance criteria and the study proving the device meets them, in the typical format of clinical or standalone performance studies, are not extensively described. The document primarily highlights its non-clinical testing for substantial equivalence.
Here's an attempt to extract and synthesize the requested information, noting where specific details are not available in the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics like sensitivity, specificity, accuracy, or effect sizes, which are typically seen in clinical performance studies of AI/imaging devices. Instead, the "Performance Data" section refers to "Non-clinical tests" conducted to validate the application for its intended use and determine substantial equivalence.
| Acceptance Criterion (inferred from "Non-clinical tests") | Reported Device Performance (inferred/summarized) |
| --- | --- |
| Functionality and performance of the SurgiCase Viewer are substantially equivalent to predicate devices (K113599 and K132290). | Non-clinical testing indicated that the subject device is as safe, as effective, and performs as well as the predicates. |
| Ability to visualize 3D data. | Device provides functionality to visualize 3D data. |
| Ability to perform measurements on 3D data. | Device provides functionality to perform measurements on 3D data. |
| Integration with an online Medical Device Data System. | Intended to be integrated with an online Medical Device Data System for case management, authorization, authentication, etc. |
| Does not alter the 3D data it imports. | The SurgiCase Viewer does not alter the 3D data it imports. |
| Supports dedicated 3D data formats (e.g., STL with additional meta-data). | Device imports 3D data in STL format with additional meta-data on the 3D models. |
| Functioning independent of specific medical indication or product/service. | Its functioning is independent of the specific medical indication or product/service it is used for. |
2. Sample Size for the Test Set and Data Provenance
The document states "Non-clinical tests" were performed. However, it does not specify the sample size used for any test set (e.g., number of cases, number of 3D models). It also does not mention the data provenance (e.g., country of origin, retrospective or prospective nature) as it refers to non-clinical testing, which typically involves technical verification and validation rather than studies on patient data.
3. Number of Experts and Qualifications for Ground Truth
The document does not mention the use of experts to establish ground truth for a test set. This is consistent with its focus on non-clinical testing and substantial equivalence rather than a clinical performance evaluation against expert consensus.
4. Adjudication Method for the Test Set
As no expert ground truth or clinical test set is described, there is no mention of an adjudication method (e.g., 2+1, 3+1, none).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not describe an MRMC comparative effectiveness study comparing human readers with and without AI assistance. Therefore, no effect size for human improvement is provided.
6. Standalone (Algorithm Only) Performance Study
The document does not present a standalone performance study in terms of typical clinical metrics (e.g., sensitivity, specificity) for the algorithm itself. The "non-clinical tests" relate to the device's functional performance and its equivalence to predicates.
7. Type of Ground Truth Used
The document does not specify a "ground truth" type in the context of expert consensus, pathology, or outcomes data. The validation described is focused on functional and performance equivalence during "non-clinical tests," implying a technical or engineering validation against specified requirements or predicate device behavior.
8. Sample Size for the Training Set
The document does not mention a training set sample size. This aligns with the description of "SurgiCase Viewer" as a software interface for visualization and measurements, suggesting it might not be a machine learning or AI algorithm that requires a traditional training set in the same way. It's more of a tool that processes and displays pre-existing 3D data.
9. How Ground Truth for the Training Set Was Established
As no training set is mentioned or implied in the context of machine learning, the document does not describe how ground truth for a training set was established.