Search Results
Found 3 results
510(k) Data Aggregation
(192 days)
Abys® Medical Cysware® 4H is intended for use as a software interface and image segmentation system for the transfer of medical imaging information to an output file. Abys® Medical Cysware® 4H is also intended as pre-operative software for surgical planning assistance. Abys® Medical Cysware® 4H is intended to be used by a clinician with appropriate clinical judgement.
Abys® Medical Cysart® 4H is a medical display intended for 3D image visualization and image interaction. The stereoscopic 3D images are generated from 3D volumetric data acquired from a CT scan source. The device is intended to provide visual information to be used by a clinician with appropriate clinical judgement for analysis of surgical options, and the intraoperative display of the images. Abys® Medical Cysart® 4H is intended to be used as an adjunct to the interpretation of images performed using diagnostic imaging systems and is not intended for primary diagnosis. Abys® Medical Cysart® 4H is intended to be used as a reference display for consultation to assist the clinician with appropriate clinical judgement who is responsible for making all final patient management decisions.
Abys® Medical Cysware® 4H web platform is a web-based medical device designed and intended for use prior to surgery to gather in one place the information needed by the surgeon to create a surgical plan. As a result, a planning assistance file is created that contains medical imaging, 3D models, documents, and notes. The Abys® Medical Cysware® 4H web platform is used to export the planning assistance file to the Abys® Medical Cysart® 4H mixed reality application, a separate medical software application.
The Abys® Medical Cysart® 4H mixed reality application is a medical device designed and intended for use in the office and in the operating room to display and manipulate all documents in the planning assistance file generated from the Abys® Medical Cysware® 4H web platform.
Here's an analysis of the acceptance criteria and study information for Abys Medical's Cysware 4H and Cysart 4H devices, based on the provided text:
Acceptance Criteria and Device Performance Study
The FDA 510(k) summary provides details on the performance testing conducted for the Cysware 4H and Cysart 4H devices. The testing was non-clinical.
1. Table of Acceptance Criteria and Reported Device Performance
For Cysware 4H:
Acceptance Criteria | Reported Device Performance |
---|---|
Global time needed to open a planning assistance file is below 40 seconds (excluding credentials entry). | Global time needed to open a planning assistance file is below 40 seconds. (Note: The text clarifies that "Global time with credentials entering is user dependent and may reach 1-2 minutes, as showed by summative tests.") |
Features are usable when fifteen users are simultaneously connected to Cysware 4H. | Features are usable when fifteen users are simultaneously connected to Cysware 4H. |
Features are usable when three users are simultaneously connected to the same folder. | Features are usable when three users are simultaneously connected to the same folder. |
Accuracy of measures (distances and angles) meets specified thresholds. | Accuracy of measures showed an error lower than 1.6 mm for distances and 2.9° for the angles. |
Accuracy of Cysware 4H segmentation algorithm and Mesh generation for Cysart 4H export allows segmenting DICOM from CT scan sources with an error lower than 1.25mm. | Accuracy of Cysware 4H segmentation algorithm and Mesh generation for Cysart 4H export allows segmenting DICOM from CT scan sources with an error lower than 1.25mm. |
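The segmentation-accuracy criterion above (error below 1.25 mm) is the kind of threshold that is typically verified by comparing an exported surface against a reference segmentation. A minimal sketch of such a check, assuming point clouds sampled from both surfaces; the function name, toy coordinates, and pass/fail handling are illustrative, not taken from the submission:

```python
import numpy as np

def max_surface_error(predicted_pts, reference_pts):
    """Worst-case distance (mm) from each predicted surface point
    to its nearest reference surface point (a directed Hausdorff distance)."""
    # Pairwise distance matrix of shape (n_predicted, n_reference)
    diffs = predicted_pts[:, None, :] - reference_pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1).max()

# Hypothetical check against the 1.25 mm acceptance criterion
reference = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
predicted = reference + np.array([[0.5, 0.0, 0.0], [0.0, 0.4, 0.0], [0.0, 0.0, 0.3]])
error_mm = max_surface_error(predicted, reference)
print(error_mm < 1.25)  # True for this toy offset
```

A real verification would sample dense point sets from the generated mesh and the reference segmentation, and would usually report both directed distances or the symmetric Hausdorff distance.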
For Cysart 4H:
Acceptance Criteria | Reported Device Performance |
---|---|
Images displayed have a refresh rate always higher than 30 frames per second. | The images displayed have a refresh rate always higher than 30 frames per second, ensuring the smooth movement of the 3D objects. |
Autonomy of the HoloLens 2 allows for the entirety of a surgery (specifically 1h30 without video stream sharing and 45 minutes with video stream sharing). | Autonomy of the HoloLens 2 when the application is open allows for the entirety of a surgery. More specifically 1h30 without sharing the video stream and 45 minutes while sharing the video stream to a workstation connected to the same network. |
The Cysart 4H device reproduces 3D objects at a scale of 1:1. | The Cysart 4H device reproduces the 3D objects at a scale of 1:1 and thus ensures that the 3D medical images displayed are representative of the medical images acquired from the CT scan. |
Global time to connect to a Cysart 4H session is no longer than 3 minutes. | The global time to connect to a Cysart 4H session is no longer than 3 minutes. |
Quality of display is sufficient for intended use and no degradation occurs when adding objects/documents. | The quality of display is sufficient for the intended use and no degradation of display occurs when adding objects or documents to an opened session. |
Voice commands can be used in the operating room as long as ambient noise does not exceed 60 dB. | The voice commands can be used in the operating room as long as the ambient noise does not exceed 60 dB. |
Performance of the Microsoft HoloLens 2 display used with Cysart 4H is adequate (verified for luminance, distortion, contrast, motion-to-photon latency). | The performance of the Microsoft® HoloLens 2 display used with Cysart® 4H is adequate and has been demonstrated by verifying: luminance, distortion, contrast, and motion-to-photon latency. |
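The refresh-rate criterion (always above 30 frames per second) amounts to requiring that no interval between consecutive frame presentations exceeds 1/30 s. A hedged sketch of how such a check could be run over a frame-timestamp log; the timestamps below are synthetic, not data from the submission:

```python
def min_refresh_rate(frame_timestamps_s):
    """Lowest instantaneous frame rate (fps) observed in a run,
    derived from consecutive frame-presentation timestamps (seconds)."""
    intervals = [b - a for a, b in zip(frame_timestamps_s, frame_timestamps_s[1:])]
    return 1.0 / max(intervals)

# Synthetic 60 fps log with a single slow 25 ms frame injected mid-run
timestamps = [i / 60 for i in range(100)]
timestamps = timestamps[:50] + [t + 0.025 - 1 / 60 for t in timestamps[50:]]

rate = min_refresh_rate(timestamps)
print(rate > 30.0)  # True: the worst frame interval is 25 ms, i.e. 40 fps
```

An "always higher than 30 fps" claim is a worst-case bound, so the minimum over the whole log is the relevant statistic, not the average frame rate.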
2. Sample Size Used for the Test Set and Data Provenance
The provided document does not explicitly state the sample size used for the non-clinical performance test set. It mentions "fifteen users" and "three users" for the simultaneous-connection tests for Cysware 4H, but not for the accuracy-of-measurement or segmentation tests, where image data would be the primary "sample."
The data provenance is not explicitly mentioned (e.g., country of origin of data, retrospective or prospective). However, the general context is about software testing and validation against technical specifications rather than a clinical study on patient data from specific sources. The segmentation and mesh generation accuracy for Cysware 4H specifically mentions using "DICOM from CT scan source," but the origin of these CT scans is not provided.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is not explicitly provided in the non-clinical performance data section. The testing described focuses on technical specifications and usability, rather than expert-derived ground truth on clinical diagnostic images. For measures like accuracy of segmentation, there would have been a "ground truth" for comparison, but the method of establishing it and the experts involved are not detailed.
4. Adjudication Method for the Test Set
An adjudication method (e.g., 2+1, 3+1) is not mentioned as the study described is non-clinical performance testing rather than a clinical study requiring adjudication of findings (like a diagnostic accuracy study).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document states, "Clinical testing was not required to demonstrate substantial equivalence." Therefore, no effect size of how much human readers improve with AI vs. without AI assistance is provided.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
The performance tests for Cysware 4H's "Accuracy of Cysware 4H segmentation algorithm and Mesh generation" can be considered a standalone performance assessment of the algorithm's capability. The reported error of "lower than 1.25mm" against a ground truth (though not fully described) indicates a standalone evaluation.
7. The Type of Ground Truth Used
For the accuracy of measures (distance, angle) and segmentation accuracy of Cysware 4H, the ground truth would typically be reference measurements or segmentation derived from the input CT scan data. While the method of establishing this ground truth (e.g., expert consensus, manual annotation by a highly qualified individual, comparison to a gold standard software) is not explicitly detailed, it would inherently be a technical ground truth rather than pathological or outcomes data, as these are non-clinical hardware/software performance tests.
For Cysart 4H, the ground truth for parameters like refresh rate, autonomy, scale, connection time, display quality, and voice command efficacy is based on technical specifications and measurable operational performance criteria rather than clinical ground truth from patient data.
8. The Sample Size for the Training Set
The document does not provide information on the sample size used for the training set for any algorithms within Cysware 4H or Cysart 4H. It is stated that the software was developed, verified, and validated, implying standard software development and QA practices, but details on machine learning model training data are absent.
9. How the Ground Truth for the Training Set was Established
As no information on a training set or specific machine learning models requiring labeled training data is provided, how the ground truth for such a training set was established is not detailed.
(21 days)
NuVasive NuvaLine is a medical device software application intended to assist healthcare professionals in capturing, viewing, measuring, and storing and distributing spinal alignment assessment images at various time points in patient care. Online synchronization of the database allows healthcare professionals and service providers to conveniently perform and review spinal alignment assessments of images by featuring measurement tools on various platforms. Clinical judgment and experience are required to properly use the software.
NuVasive NuvaLine is a medical device software application used to calculate the spinal pelvic, lumbar, thoracic, and cervical parameters for pre-operative and post-operative assessment of spinal x-ray images. These measured parameters provide a quantifiable way to assess a patient's spinal deformity and correction correlated to health related quality of life (HRQOL) scores.
The purpose of this premarket notification is to gain clearance of the previously cleared NuvaLine app to communicate with cloud server for online synchronization of database to transfer and store assessment data to allow for use of the NuvaLine app on different platforms (e.g.: mobile, web interface, desktop) by healthcare professionals and service providers.
The provided text does not contain explicit acceptance criteria and corresponding performance data in a dedicated table format. However, it does mention performance characteristics in the comparison table and describes the testing performed. I will extract the relevant information and present it in the requested format, inferring acceptance criteria where it implies a match to the predicate device's performance.
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria (Implied) | Reported Device Performance (NuVasive NuvaLine®) |
---|---|
Spinal alignment assessments of images (Matching predicate functionality) | Spinal alignment assessments of images |
Various spinal assessment algorithms (Matching predicate functionality) | Various spinal assessment algorithms |
User Interface: PC or mobile device or web interface (Matching reference devices) | PC or mobile device or web interface |
Obtaining an image: Transferred from PACS (Matching reference device functionality) | Transferred from PACS (DICOM images from PACS converted to jpeg for use in NuvaLine) |
Online synchronization of database (Matching reference device functionality) | Yes |
PACS connectivity (Matching reference device functionality) | Yes |
DICOM compatibility (Matching reference device functionality) | Yes (DICOM images from PACS converted to jpeg for use in NuvaLine) |
Supported Platforms: Mobile application on iOS 10.0+; Web client on Windows 10, 3GHz processor, 18GB RAM, modern browser, 1920x1200 display resolution (Matching predicate/reference devices and added web client support) | Mobile application supported on devices running iOS version 10.0 or later; web client supported with the following minimum system specifications: Windows 10, 3GHz processor, 18GB RAM, a modern browser supporting HTML5.2 and JavaScript ES7 or better, and 1920x1200 display resolution. |
Measurement accuracy: Angles within ± 3°, offsets within ± 1 cm (Improved from predicate's ± 2 cm) | NuvaLine measures angles within ± 3° and offsets within ± 1 cm accuracy. |
Cloud Connectivity Validation | NuvaLine Cloud Connectivity Validation performed and met |
Web Client Cloud Connectivity Validation | NuvaLine Web Client Cloud Connectivity Validation performed and met |
Cloud Connectivity Measurement Library Verification | NuvaLine Cloud Connectivity Measurement Library Verification performed and met |
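The ± 3° angle criterion above would typically be checked by comparing software-reported angles against reference measurements made on the same landmarks. As an illustration only (the landmark coordinates and reference value below are hypothetical, and the submission does not describe NuvaLine's internal algorithm), a Cobb-style angle between two endplate lines reduces to a simple vector-angle computation:

```python
import math

def line_angle_deg(p1, p2):
    """Angle of the line through p1 and p2 relative to horizontal, in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def cobb_angle_deg(upper_endplate, lower_endplate):
    """Cobb-style angle: absolute difference between two endplate line angles."""
    return abs(line_angle_deg(*upper_endplate) - line_angle_deg(*lower_endplate))

# Hypothetical landmark coordinates (image pixels)
upper = ((100.0, 200.0), (160.0, 190.0))   # superior endplate of upper end vertebra
lower = ((105.0, 320.0), (165.0, 335.0))   # inferior endplate of lower end vertebra

measured = cobb_angle_deg(upper, lower)
reference = 23.5                            # hypothetical expert reference measurement
print(abs(measured - reference) <= 3.0)     # the ± 3° acceptance check; True here
```

The offset criterion (± 1 cm) would follow the same pattern, with pixel distances converted to centimeters via the image's calibration factor.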
2. Sample size used for the test set and the data provenance
The document mentions "Nonclinical testing was performed..." and lists types of validation tests. However, it does not specify the sample size for the test set used in these validations (e.g., number of images, number of measurements). It also does not specify the data provenance (e.g., country of origin, retrospective or prospective nature of the data).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide information regarding the number of experts, their qualifications, or their involvement in establishing ground truth for any test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify any adjudication method for a test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention or describe a multi-reader multi-case (MRMC) comparative effectiveness study. It focuses on the device's standalone performance and its equivalence to predicate devices, not on human reader improvement with AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the document implies that a standalone performance evaluation was conducted. The "Measurement accuracy" specification confirms the device's ability to measure angles and offsets with specific accuracy limits (angles within ± 3° and offsets within ± 1 cm). This indicates an evaluation of the algorithm's performance independent of human-in-the-loop assistance for measurement, as it's a characteristic directly attributed to NuvaLine.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document does not explicitly state the type of ground truth used for the measurement accuracy evaluation or other validation tests. Given the nature of a "Picture archiving and communications system" and "spinal alignment assessments," it is highly probable that the ground truth for measurement accuracy would have been established through highly precise manual measurements by qualified experts on a reference standard or through established anatomical landmarks on images, but this is not explicitly stated.
8. The sample size for the training set
The document does not provide information regarding the sample size of a training set. This is consistent with the subject device being described as a "medical device software application" that provides measurement tools, rather than a machine learning or AI algorithm that requires a distinct training phase.
9. How the ground truth for the training set was established
Since no training set is mentioned or implied for a machine learning or AI component, the document does not provide information on how ground truth for a training set was established.
(27 days)
The UNiD Spine Analyzer is intended for assisting healthcare professionals in viewing and measuring images as well as planning orthopedic surgeries. The device allows surgeons and service providers to perform generic as well as spine related measurements on images, and to plan surgical procedures. The device also includes tools for measuring anatomical components for placement of surgical implants. Clinical judgment and experience are required to properly use the software.
The purpose of this submission is to update the UNiD Spine Analyzer with the addition of a new software feature: "Data base of implants". This component will allow a user to draw implants (cages, screws and rods) taken from a range of MEDICREA INTERNATIONAL implants, previously cleared in K08009, K083810, K163595, in addition to the design of custom-made implants specific to a unique patient. A catalog of these implants is provided in this submission.
The provided text is a 510(k) summary for the UNiD Spine Analyzer. It states that the submission is to add a new software feature, "Data base of implants," to an already cleared device (UNiD Spine Analyzer, K170172). Therefore, the acceptance criteria and performance data described in this document relate to the new feature and its integration, rather than a full study of the entire device's performance from scratch.
However, the 510(k) summary does not contain specific acceptance criteria tables or detailed performance study results (like sensitivity, specificity, AUC, or other quantitative measures typically found in standalone AI/ML device studies). It primarily focuses on demonstrating substantial equivalence by comparing features and outlining the type of testing performed.
Based on the information provided, here's what can be extracted and what is NOT available:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated as quantitative metrics (e.g., "accuracy > X%"). The document implies acceptance based on successful "verification and validation activities" for the new software feature. For a medical device, this typically means:
- The software correctly performs the functions it's designed for (e.g., implants are drawn accurately, catalog is accessible).
- The new feature doesn't introduce new safety or effectiveness issues.
- The software meets industry standards for medical device software development (e.g., IEC 62304).
- Reported Device Performance: No quantitative performance metrics (like accuracy, precision, etc.) are provided for the new "Database of implants" feature. The document only states that "Performance data for the modified UNiD Spine Analyzer consisted of verification and validation activities." and "The addition of the database of implants creates additional tools which were also tested, and documentation was provided."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not specified. The document only mentions "verification and validation activities" for the software feature itself, not a clinical data set for performance evaluation.
- Data Provenance: Not specified. Since this is about adding a database of implants and related drawing tools, it's less about analyzing patient image data for diagnosis and more about the software's functional correctness.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of experts & Qualifications: Not applicable/not specified. The "ground truth" for this specific submission likely relates to the accuracy of implant representation and placement tools, which would be verified against design specifications, engineering standards, and potentially input from orthopedic surgeons during development, rather than a clinical ground truth established by diagnosing cases.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable/not specified.
7. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No, an MRMC study was not described. The device, UNiD Spine Analyzer, assists healthcare professionals in viewing, measuring, and planning orthopedic surgeries. The specific update in this submission is the addition of an implant database. This generally falls under medical image management/measurement software (PACS-like functionality) rather than an AI/ML diagnostic or prognostic tool that would typically undergo MRMC studies. The software is explicitly stated to require "Human Intervention for interpretation and manipulation of images."
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Study: Not explicitly described in terms of clinical performance. The "verification and validation activities" confirm the software's functionality, but these are not presented as a standalone clinical performance study typically seen for AI algorithms making diagnostic interpretations. The device is a tool for human use, not an autonomous diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth: Not specified in the provided text. For a feature involving an implant database and drawing tools, ground truth would likely be based on design specifications, physical accuracy of implant models, and functional correctness according to orthopedic surgical planning principles, rather than clinical outcomes or pathology.
8. The sample size for the training set
- Training Set Sample Size: Not applicable. This document does not describe the development of a machine learning algorithm that learns from a training set of data. It describes the addition of a database and associated software tools.
9. How the ground truth for the training set was established
- Training Set Ground Truth Establishment: Not applicable.
Summary of what the document focuses on regarding device acceptance:
The document leverages the concept of "substantial equivalence" to a previously cleared version of the same device (K170172). The acceptance criteria for the new feature (database of implants) are implicit in the statement that "The addition of this new component (i.e., data base of cleared implants) to the UNiD Spine Analyzer does not raise new issues of safety or effectiveness compared to the previously cleared version of the UNiD Spine Analyzer." This implies that the testing (verification and validation) confirmed:
- The implant database functions as intended.
- The drawing tools work correctly.
- The new feature does not adversely affect the safety or performance of the existing cleared functionalities of the UNiD Spine Analyzer.
- The software development followed appropriate guidelines for medical device software ("Guidance for Industry and FDA Staff, 'Guidance for the Content of Premarket Submissions for Software Contained on Medical Devices'").
Essentially, for this 510(k) (which is an update to an existing device), the "proof" for acceptance is the demonstration that the change does not negatively impact safety or effectiveness, and the new feature itself is functionally sound, rather than a de novo clinical performance study against specific acceptance criteria.