510(k) Data Aggregation
(133 days)
The Arthrex VIP Web Portal is intended for use as a software interface of imaging information from a medical scanner such as a CT scanner. It is also intended as software for displaying/editing implant placement and surgical treatment options that were generated in the OrthoVis Desktop Software by trained Arthrex technicians. The Arthrex VIP Web Portal is intended for use with the Arthrex Glenoid Intelligent Reusable Instrument System (Arthrex Glenoid IRIS) and with the Arthrex OrthoVis Preoperative Plan. It is indicated for use with the following glenoid implant lines: Arthrex Univers II and Arthrex Univers Apex, Keeled or Pegged Glenoid components, the Vault Lock Glenoid Component, as well as the Univers Revers and Modular Glenoid System (MGS) Baseplate components.
The ArthrexVIP Web Portal is composed of software intended for use to facilitate upload of medical images, preoperative planning, and plan approval of placement and orientation of total shoulder joint replacement components. Each surgeon user's uploaded images are associated with specific cases and with that surgeon's profile. Uploaded images can be downloaded from the portal by Arthrex technicians and used to create preoperative plans (see 510(k) K151568) in the OrthoVis Desktop Software. The surgeon user is then able to log in to the ArthrexVIP Web Portal to review the preoperative plan and either approve or modify the location and/or orientation of the joint replacement component. The approved plan is then downloaded by Arthrex technicians for production (see 510(k) K151500 and K151568) as part of the Arthrex Glenoid IRIS device.
The ArthrexVIP Web Portal is a software device intended for managing imaging information and displaying/editing implant placement and surgical treatment options for shoulder joint replacement.
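The upload → plan → review → approve workflow described above can be sketched as a small state machine. Everything in this sketch is illustrative: the class, state names, and methods are hypothetical stand-ins and are not Arthrex's actual API.

```python
from enum import Enum, auto

class PlanStatus(Enum):
    """Hypothetical case states inferred from the workflow described above."""
    IMAGES_UPLOADED = auto()   # surgeon uploads CT images to the portal
    PLAN_CREATED = auto()      # technician builds a plan in OrthoVis Desktop
    UNDER_REVIEW = auto()      # surgeon reviews the plan in the web portal
    APPROVED = auto()          # plan approved; ready for download/production

class SurgicalCase:
    """Minimal sketch of a portal case; names are illustrative only."""
    def __init__(self, surgeon_id: str):
        self.surgeon_id = surgeon_id
        self.status = PlanStatus.IMAGES_UPLOADED

    def attach_plan(self):
        # A plan can only be attached once images have been uploaded.
        assert self.status is PlanStatus.IMAGES_UPLOADED
        self.status = PlanStatus.PLAN_CREATED

    def open_review(self):
        assert self.status is PlanStatus.PLAN_CREATED
        self.status = PlanStatus.UNDER_REVIEW

    def approve(self):
        # The surgeon may also modify placement before approving;
        # modification would keep the case UNDER_REVIEW.
        assert self.status is PlanStatus.UNDER_REVIEW
        self.status = PlanStatus.APPROVED
```

Modeling the case lifecycle as explicit states is one way the kind of software verification described later in this summary could check that no step (e.g., production download before approval) happens out of order.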
Here's an analysis based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly state specific quantitative acceptance criteria or corresponding reported device performance metrics for the ArthrexVIP Web Portal in a typical medical device performance study context (e.g., sensitivity, specificity, accuracy against a gold standard).
Instead, the "acceptance criteria" are implied by the non-clinical testing performed to demonstrate substantial equivalence to a predicate device. The performance is assessed through various software testing methodologies rather than clinical outcomes or diagnostic accuracy.
Implied Acceptance Criteria (based on testing performed):
| Acceptance Criteria Category | Description (as implied by testing) | Reported Device Performance |
|---|---|---|
| Software Functionality | The software must function as intended for uploading, transferring, displaying, and editing imaging information and surgical plans, maintaining data integrity, and allowing for plan approval/modification. Changes from the predicate must be correctly integrated. | Successfully passed Software Verification and Validation. |
| Software Quality | The software code must be robust, free of critical defects, and correctly integrated. | Successfully passed Regression Testing, Unit Testing, Code Reviews and Checks, and Integration Testing. |
| Dimensional Accuracy | The display/editing of implant placement and surgical treatment options, including new implant models and features, must be dimensionally accurate and consistent with the intended design. | Successfully passed Dimensional Verification. |
| Intended Use Fulfillment | The device fulfills its intended use as a software interface, for transfer of imaging data, and for displaying/editing plans generated by trained Arthrex technicians, for the specified glenoid implant lines, including the added Modular Glenoid System baseplate. | Demonstrated through software testing that the device functions as described for its intended use and expanded implant lines. |
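The software-quality activities named in the table (unit testing, data-integrity preservation, correct handling of plan edits) can be illustrated with a minimal, hypothetical unit-test sketch. The function `apply_orientation_edit`, the field names, and the ±45° bound are invented for illustration and do not come from the submission.

```python
import unittest

def apply_orientation_edit(plan: dict, delta_deg: float) -> dict:
    """Illustrative plan-edit function: adjusts an implant orientation
    angle while enforcing a plausibility bound (values are hypothetical)."""
    new_angle = plan["version_deg"] + delta_deg
    if not -45.0 <= new_angle <= 45.0:
        raise ValueError("orientation out of allowed range")
    edited = dict(plan)          # copy rather than mutate: data integrity
    edited["version_deg"] = new_angle
    return edited

class TestPlanEditing(unittest.TestCase):
    def test_edit_applies_delta(self):
        plan = {"implant": "Univers Apex", "version_deg": 5.0}
        self.assertEqual(apply_orientation_edit(plan, 3.0)["version_deg"], 8.0)

    def test_original_plan_unchanged(self):
        plan = {"implant": "Univers Apex", "version_deg": 5.0}
        apply_orientation_edit(plan, 3.0)
        self.assertEqual(plan["version_deg"], 5.0)  # input left intact

    def test_out_of_range_rejected(self):
        plan = {"implant": "Univers Apex", "version_deg": 44.0}
        with self.assertRaises(ValueError):
            apply_orientation_edit(plan, 10.0)

if __name__ == "__main__":
    unittest.main()
```

In V&V of this kind, each test's expected outcome is defined up front by the requirements specification, which is what the summary later identifies as the effective "ground truth" for software testing.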
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a sample size for a test set in the traditional sense of a clinical or imaging dataset. The testing performed is primarily software verification and validation, regression testing, unit testing, code reviews, integration testing, and dimensional verification.
- Test Set: Not applicable in the context of clinical images/data. The "test set" would refer to the various software modules, functions, and simulated data used during the V&V processes.
- Data Provenance: Not applicable. No patient data provenance is mentioned as this is not a device that directly analyzes patient data for diagnostic or treatment recommendations. The "data" being tested are software inputs, outputs, and internal states.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
This information is not provided and is generally not applicable for this type of software-only device where "ground truth" relates to functional correctness and dimensional accuracy rather than clinical interpretation of medical images. For dimensional verification, engineers or subject matter experts in CAD/imaging might define the "ground truth" for measurements, but the document does not elaborate.
4. Adjudication Method for the Test Set
This information is not provided and is generally not applicable. Software testing typically involves predefined test cases and expected outcomes. Discrepancies would be logged as bugs and resolved, rather than adjudicated by multiple experts in a consensus-based manner.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was done. The document explicitly states: "Clinical testing was not necessary to determine substantial equivalence to the predicate." This device is a software portal for image transfer and display/editing of pre-generated plans, not an AI diagnostic tool or an imaging modality that requires human reader performance evaluation.
6. Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance)
Not applicable in the context of typical AI/CADe/CADx devices. This device is a portal and editing tool; it does not have an "algorithm" in the sense of making independent diagnostic or treatment recommendations that would require standalone performance metrics such as sensitivity or specificity. Its standalone performance is assessed through its software verification and validation processes (i.e., whether it performs its specific functions correctly, reliably, and accurately as a piece of software).
7. Type of Ground Truth Used
For the non-clinical testing:
- Software Verification and Validation, Regression Testing, Unit Testing, Code Reviews, Integration Testing: The ground truth is the software requirements specification and design documentation. The software must perform according to these documented specifications.
- Dimensional Verification: The ground truth would be based on engineering specifications, CAD models, or established measurement standards for the implant models and visualization tools.
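As a sketch of what a dimensional-verification check against a CAD nominal might look like, a measured on-screen dimension can be compared to the engineering specification within a tolerance. The 0.1 mm tolerance and the example values here are assumptions for illustration, not figures from the submission.

```python
def within_tolerance(measured_mm: float, nominal_mm: float, tol_mm: float = 0.1) -> bool:
    """Pass/fail check of a rendered or measured dimension against its
    CAD nominal. The 0.1 mm default tolerance is an illustrative value."""
    return abs(measured_mm - nominal_mm) <= tol_mm

# e.g., comparing a displayed peg diameter against the CAD model value
assert within_tolerance(6.49, 6.50)        # within tolerance: passes
assert not within_tolerance(6.70, 6.50)    # out of tolerance: fails
```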
8. Sample Size for the Training Set
Not applicable. The ArthrexVIP Web Portal is a software tool for data transfer, display, and editing of pre-generated plans. It does not utilize machine learning or AI that requires a "training set" of data.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no training set.
(89 days)
The Arthrex VIP Web Portal is intended for use as a software interface of imaging information from a medical scanner such as a CT scanner. It is also intended as software for displaying/editing implant placement and surgical treatment options that were generated in the OrthoVis Desktop Software by trained COS technicians. The Arthrex VIP Web Portal is intended for use with the Arthrex Glenoid Intelligent Reusable Instrument System (Arthrex Glenoid IRIS) and with the Arthrex OrthoVis Preoperative Plan. It is indicated for use with the following glenoid implant lines: Arthrex Univers Apex, Arthrex Univers II, and Arthrex Univers Revers.
The ArthrexVIP Web Portal is composed of software intended for use to facilitate upload of medical images, preoperative planning, and plan approval of placement and orientation of total shoulder joint replacement components. Each surgeon user's uploaded images are associated with specific cases and with that surgeon's profile. Uploaded images can be downloaded from the portal by COS technicians and used to create preoperative plans (see 510(k) K151568) in the OrthoVis Desktop Software. The surgeon user is then able to log in to the ArthrexVIP Web Portal to review the plan and either approve or modify the location and/or orientation of the joint replacement component. The approved plan is then downloaded by COS technicians for production (see 510(k) K151500 and K151568) as part of the Arthrex Glenoid IRIS device.
The provided document is a 510(k) summary for the ArthrexVIP Web Portal, which is a software device intended for use in preoperative planning for shoulder joint replacement. This document primarily focuses on demonstrating substantial equivalence to a predicate device and does not contain detailed information about a study proving the device meets specific acceptance criteria in the format requested.
Here's an attempt to answer the questions based on the limited information available in the document, along with an explanation of why certain information cannot be provided:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria or specific reported device performance metrics tied to such criteria. The submission focuses on demonstrating "substantial equivalence" to a predicate device through non-clinical testing, rather than establishing performance against defined criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify a test set sample size or data provenance for any clinical performance evaluation. The non-clinical testing performed includes "Software verification and validation," "Regression Testing," "Unit Testing," "Code reviews and checks," and "Integration Testing." These are software development and quality assurance activities, not studies involving human subjects or medical image data in a clinical context.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
No information is provided regarding experts, ground truth establishment, or clinical test sets.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
No information is provided regarding adjudication methods, as no clinical test set is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC study was mentioned or performed. The device is a "software interface" for displaying/editing implant placement and surgical treatment options, not an AI-assisted diagnostic tool that would typically involve improving human reader performance.
6. If a standalone study (i.e. algorithm only without human-in-the-loop performance) was done
No information is provided about a standalone algorithm performance study. The device is described as a "software interface" and a tool for displaying/editing options generated by "trained COS technicians" and reviewed by "surgeon users," indicating a human-in-the-loop design.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not describe the establishment of a "ground truth" in a clinical sense for performance evaluation. The testing performed ("Software verification and validation," etc.) would involve internal quality metrics and ensuring the software functions as designed, rather than comparison to a clinical ground truth.
8. The sample size for the training set
No training set is mentioned. This device is not described as an AI/ML algorithm that learns from data in a training set. It is a software interface and planning tool.
9. How the ground truth for the training set was established
Not applicable, as no training set for an AI/ML algorithm is described.
Summary of what can be extracted from the document regarding acceptance criteria and studies:
The document states:
- Non-Clinical Testing: "The following testing was performed to demonstrate substantial equivalency of the ArthrexVIP Web Portal to the OrthoVis Web Portal: Software verification and validation, Regression Testing, Unit Testing, Code reviews and checks, Integration Testing, Dimensional Validation (performed on predicate device and code has not changed for the subject device)."
- Clinical Testing: "Clinical testing was not necessary to determine substantial equivalence to the predicate."
This 510(k) submission relies on non-clinical software validation and verification activities to establish substantial equivalence to a predicate device, rather than explicit clinical performance criteria with associated studies involving patient data or experts. Therefore, most of the requested information regarding acceptance criteria, sample sizes, ground truth, and human reader performance is not present in this document.
(120 days)
The OrthoVis Web Portal is intended for use as a software interface and for the transfer of imaging information from a medical scanner such as a CT scanner. It is also intended as pre-operative software for simulating implant placement and surgical treatment options. The OrthoVis Web Portal is intended for use with the Glenoid Intelligent Reusable Instrument system and with the OrthoVis Preoperative Plan. It is also intended for use with the Arthrex and DePuy shoulder replacement implant systems listed below.
- Arthrex Univers Apex
- Arthrex Univers II
- Arthrex Univers Revers
- DePuy Global AP
- DePuy Global StepTech
- DePuy Delta Xtend Reverse
The Glenoid Intelligent Reusable Instrument System ("Glenoid IRIS") is a patient specific manual instrument system intended for use to facilitate preoperative planning and intraoperative placement of the central glenoid guide pin used in the preparation of the glenoid in total shoulder systems that utilize a central guide pin for preparing the glenoid to receive the glenoid implant.
The OrthoVis Web Portal is composed of software intended for use to facilitate upload of medical images, preoperative planning, and plan approval of placement and orientation of Arthrex and DePuy total shoulder replacement systems. Each surgeon user's uploaded images are grouped into cases and associated with that user's profile. Uploaded images can be downloaded from the portal by COS technicians and used to create preoperative plans (see 510(k) K133367 and K151568). The user is then able to log in to the OrthoVis Web Portal to review the preoperative plan and either approve or modify the location and/or orientation of the shoulder replacement component; the approved plan is then available for download by COS technicians for further preoperative planning production (see 510(k) K123122 and K142072) and for download by the user/surgeon.
The provided document does not contain explicit acceptance criteria or a study that proves the device meets specific performance criteria. The document is a 510(k) premarket notification letter from the FDA to Custom Orthopaedics Solutions, Incorporated for their OrthoVis Web Portal.
The letter focuses on the FDA's determination of substantial equivalence to legally marketed predicate devices, rather than a clinical performance study with predefined acceptance criteria for the device itself. The device, OrthoVis Web Portal, is described as a software interface for transferring imaging information and pre-operative simulation, essentially a communication tool between surgeons and technicians.
However, the "NON-CLINICAL TESTING" section (page 6) describes the testing performed to demonstrate substantial equivalence, which can be interpreted as fulfilling the requirements for market clearance for this type of device.
Here's a breakdown based on the information provided, recognizing that it's not a typical clinical performance study:
1. A table of acceptance criteria and the reported device performance
The document does not present a table of acceptance criteria with corresponding device performance for a typical clinical study (e.g., sensitivity, specificity, accuracy). Instead, the "performance" demonstrated relates to software functionality and safety in comparison to existing methods.
| Acceptance Criteria (Inferred from non-clinical testing) | Reported Device Performance (Summary of testing results) |
|---|---|
| Software Verification and Validation: Software functions correctly, securely, and reliably as intended. Ensures data integrity and proper operation. | Comprehensive coverage of application requirements demonstrated through automated unit tests, automated integration testing, and manual testing. Constraints and protections verified for correct results and error handling. |
| Dimensional Validation: Accuracy of reading and display of CT scans and derived measurements is maintained. | The same code responsible for reading and display of CT scans is used as in the already cleared OrthoVis Shoulder software (K123122, K133367), implying maintained accuracy. |
| Simulated Use Comparison: The OrthoVis Web Portal process is comparable or superior to the existing manual process (CD mailer/Go-To-Meeting/email) for surgeon interaction, planning, and approval. | Surgeons with experience in the old process were surveyed to compare their experience with the Web Portal process. While specific results are not detailed, the implication is that the comparison was favorable for substantial equivalence. |
| Security and Data Handling: Secure authentication, confidential communication, encrypted storage, and appropriate notifications. | Secure portal login with hashed password, HTTPS/SSH network communication, SFTP for local office uploads/downloads, encrypted database on server, email notification on status change (no PHI). |
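One common way to implement the "hashed password" storage mentioned in the security row is salted PBKDF2. The submission does not name the scheme the portal actually uses, so the following is a generic sketch of the technique, not a description of the product.

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2-HMAC-SHA256 digest for storage.
    A fresh random salt per user defeats precomputed (rainbow-table) attacks."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    *, iterations: int = 600_000) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Storing only `(salt, digest)` means the server never retains the plaintext password, which is the property the "hashed password" entry in the table is describing.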
2. Sample size used for the test set and the data provenance
- Software Verification and Validation:
- Test Set Sample Size: Not explicitly stated as a number of "cases" or "patients." This testing involved numerous automated unit tests and integration tests, as well as manual tests covering various software functionalities and scenarios.
- Data Provenance: Not applicable in the traditional sense of patient data. The "data" would be test cases and simulated inputs designed to exercise the software's capabilities and constraints.
- Dimensional Validation:
- Test Set Sample Size: Not specified. This validation likely involved testing the software's rendering and measurement capabilities rather than a patient dataset.
- Data Provenance: Not applicable.
- Simulated Use Comparison:
- Test Set Sample Size: "Surgeons with experience" were asked to participate in a survey. The exact number of surgeons is not specified.
- Data Provenance: Prospective, as surgeons were asked to compare their experiences with the old and new processes. The origin of the data is implicit as part of the manufacturer's testing in the US.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- For the software verification and dimensional validation, "ground truth" would be the expected correct software behavior and measurement accuracy, established by software engineers and potentially medical imaging experts involved in the development and previous clearances of OrthoVis. Specific numbers and qualifications are not provided.
- For the simulated use comparison, the "experts" were the participating surgeons. Their qualifications are stated as "Surgeons with experience in using the OrthoVis Preoperative Plan via the CD Mailer/Go-To-Meeting/Email process." No specific number is given.
4. Adjudication method for the test set
- Not applicable in the context of a clinical performance study with human reviewers.
- Software testing would involve test engineers and potentially QA personnel verifying test outcomes against expected results.
- For the surgeon survey, presumably, the survey responses were collected and analyzed directly, not adjudicated among experts to establish a "ground truth."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not done.
- The OrthoVis Web Portal is described as a software interface for communication and pre-operative simulation, not an AI-assisted diagnostic or interpretative tool. Therefore, the concept of improving human readers with AI assistance does not apply directly to this device's intended use according to the provided text. The "Simulated Use Comparison" was about process efficiency and user experience, not diagnostic accuracy.
6. If a standalone study (i.e. algorithm only without human-in-the-loop performance) was done
- The device is a human-in-the-loop system by design, facilitating communication and planning between surgeons and technicians. The software itself is designed to display, transfer, and allow modification of plans by human users.
- The "standalone" performance would be related to its software functionality (e.g., ability to upload, store, display data correctly), which was addressed by the software verification and validation. However, this is not a clinical "standalone" performance in the sense of making a medical decision without human input.
7. The type of ground truth used
- Software Verification and Validation: Expected software behavior, functional requirements, and logical outputs.
- Dimensional Validation: Known measurement standards and calculations, validated against established (previously cleared) software code.
- Simulated Use Comparison: User experience and feedback from surgeons, evaluated against the existing manual processes.
8. The sample size for the training set
- Not applicable. The OrthoVis Web Portal is a communication and simulation tool, not a machine learning or AI algorithm that requires a training set of data.
9. How the ground truth for the training set was established
- Not applicable, as there is no training set for this device.