510(k) Data Aggregation (67 days)
ArthroPlan™ is indicated for use by suitably licensed and qualified healthcare professionals requiring access to medical images to be used in conjunction with templates for prosthetic and fixation devices, for the purposes of choosing the size and geometry of the prosthetic/fixation device when planning a potential hip arthroplasty surgical procedure. Templating is done without alteration of the original image, using scaling and measurement tools in a digital environment, in conjunction with manufacturers' templates available via the ArthroPlan library of digital templates for prosthetic and fixation devices.
ArthroPlan™ is software designed and developed for preoperative planning, also known as digital templating, for orthopedic operations. It includes tools for performing common measurements and drawings in combination with orthopedic implant manufacturers' electronic templates (provided in the ArthroPlan Template Library, which is part of the software). The measurement and scaling tools enable the user to perform preoperative planning for orthopedic procedures. The software allows the user to capture the radiographic image, import it into the software, accurately scale the degree of magnification of the image, and overlay and manipulate (size, angle, rotate, invert, etc.) the desired electronic template(s) on the image, facilitating the selection of the appropriate size of prosthetic/fixation device.
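The submission does not describe how the scaling step is implemented, but radiographic calibration is commonly done with a marker of known physical size (for example, a calibration ball placed at the level of the joint): the marker's size in pixels yields a mm-per-pixel factor used to convert on-screen measurements. The snippet below is a minimal, hypothetical sketch of that idea; the `calibrate` and `measure_mm` names are illustrative and are not part of ArthroPlan.

```python
from dataclasses import dataclass

@dataclass
class Calibration:
    """Scale factor derived from a marker of known physical size."""
    mm_per_pixel: float

def calibrate(marker_diameter_mm: float, marker_diameter_px: float) -> Calibration:
    """Compute mm-per-pixel from a calibration marker (e.g. a 25 mm ball)."""
    if marker_diameter_px <= 0:
        raise ValueError("marker must span a positive number of pixels")
    return Calibration(mm_per_pixel=marker_diameter_mm / marker_diameter_px)

def measure_mm(cal: Calibration, length_px: float) -> float:
    """Convert an on-image pixel distance to millimetres."""
    return length_px * cal.mm_per_pixel

# Example: a 25 mm calibration ball spans 138 px, so a 412 px femoral
# head measurement corresponds to roughly 74.6 mm.
cal = calibrate(25.0, 138.0)
print(f"{measure_mm(cal, 412.0):.1f} mm")
```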
The provided text describes the ArthroPlan Digital Templating Software and its 510(k) submission. However, it does not detail specific acceptance criteria with quantifiable metrics for device performance (e.g., accuracy, precision) or a study report that provides these performance numbers against those criteria.
Instead, the document focuses on:
- Substantial Equivalence: Asserting that the ArthroPlan is as safe and effective as predicate devices with similar intended uses, technological characteristics, and principles of operation.
- Functional Testing: Stating that performance testing showed the software "functioned as intended" and "meets its specifications" for functions like image collection, scaling, templating, and reporting.
- Compliance with Standards: Listing various FDA guidances and ISO/IEC/NEMA standards that the testing complied with.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, and detailed study parameters are not present in the provided text.
Here's an attempt to answer based on the available information, noting where information is absent:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantifiable acceptance criteria (e.g., "Accuracy of scaling must be within X%") or report specific performance numbers for the device against such criteria. Instead, it broadly states that the device "functioned as intended" and "meets its specifications."
The "Performance Data" section lists functions that were tested and confirmed:
| Acceptance Criteria (inferred from tested functions) | Reported Device Performance (as stated in document) |
|---|---|
| Patient and procedure selection operates correctly. | Confirmed as operating according to specified requirements. |
| Image collection (capture) and scaling operate correctly and accurately. | Confirmed as operating according to specified requirements. User validation confirmed accurate scaling. |
| Procedure planning functions as intended. | Confirmed as operating according to specified requirements. |
| Templating (overlay, manipulation, combining, storing templates) functions as intended. | Confirmed as operating according to specified requirements. User validation confirmed templating functionality and manipulation (size, angle, rotate, invert, etc.); see the sketch after this table. |
| Committing and saving operating session data functions correctly. | Confirmed as operating according to specified requirements. Saving of images using APL or BMP format is recommended to prevent data loss. |
| Compilation and printing of associated reports functions correctly. | Confirmed as operating according to specified requirements. |
| Software complies with relevant standards and guidances. | Testing was conducted in compliance with FDA guidance (General Principles of Software Validation; Premarket Submissions for Software), IEC 62304, IEC 62366, ISO/IEC 10918-1, NEMA PS 3.1-3.20, and ANSI/AAMI HE75. In all instances, ArthroPlan functioned as intended, and results were as expected. |
| Device is as safe and effective as predicate devices. | Performance data demonstrate that the ArthroPlan software is as safe and effective as the cited predicates (Orthoview™ K063327, Agfa Orthopedic Software K071972). |
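The templating row above refers to resizing, rotating, and inverting (mirroring) a template overlaid on the radiograph. As a purely illustrative sketch of that kind of 2-D manipulation, not drawn from the ArthroPlan implementation, a template outline can be represented as points and placed with composed scale, rotation, mirror, and translation transforms:

```python
import numpy as np

def transform_template(points_px: np.ndarray,
                       scale: float = 1.0,
                       angle_deg: float = 0.0,
                       invert: bool = False,
                       offset_px: tuple[float, float] = (0.0, 0.0)) -> np.ndarray:
    """Apply mirroring, rotation, scaling, and translation (in that order)
    to an (N, 2) array of template outline points in image coordinates."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    flip = np.diag([-1.0 if invert else 1.0, 1.0])  # mirror about the y-axis
    transformed = points_px @ (scale * rot @ flip).T
    return transformed + np.asarray(offset_px)

# Example: place a rectangular outline at 1.2x scale, rotated 15 degrees,
# mirrored for the contralateral side, centred near pixel (640, 480).
outline = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 120.0], [0.0, 120.0]])
placed = transform_template(outline, scale=1.2, angle_deg=15.0,
                            invert=True, offset_px=(640.0, 480.0))
print(placed.round(1))
```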
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document mentions "Performance testing including a non-clinical user validation was conducted on the ArthroPlan software." However, it does not specify the sample size for this test set (e.g., number of images, number of cases). It also does not mention the data provenance (country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document refers to "a non-clinical user validation" but does not specify the number or qualifications of "users" or "experts" involved in establishing ground truth or validating the software's performance. The "Indications for Use" specifies "suitably licensed and qualified healthcare professionals," but this refers to the intended end-users, not necessarily the validation experts.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method used for establishing ground truth or evaluating the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The focus is on demonstrating substantial equivalence to predicate devices, not on quantifying human reader improvement with AI assistance. The device is referred to as "software for preoperative planning," and it clearly states "Human Intervention for Interpretation of Images: Requires physician to use and interpret data. Decision on implant selection is up to the physician." This implies it's a tool for assistance, not a standalone diagnostic AI, but no study on its assistive impact is detailed.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done
The document describes the "ArthroPlan™ Digital Templating Software" as a "standalone Microsoft Windows compatible software." The performance testing described (e.g., image collection, scaling, templating) implies testing of the algorithm's functionality in a standalone manner. However, the document also explicitly states, "Requires physician to use and interpret data. Decision on implant selection is up to the physician," indicating it's intended to be used with human-in-the-loop, even if the software itself can operate independently for its defined functions. The testing largely covers the accuracy of its tools (scaling, overlay) rather than diagnostic performance metrics often associated with "standalone" AI.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used. Given the nature of the software (templating and measurement tools), it is highly likely that the ground truth would involve:
- Known physical measurements/standards: For validating the scaling accuracy.
- Manual measurements by experts: For verifying calculations performed by the software.
- Comparison to expert-performed templating: To validate the templating accuracy and functionality against clinically accepted norms, possibly involving comparison to predicate device outputs or expert-derived plans.
However, this is inferred, not explicitly stated.
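If scaling accuracy were validated against known physical measurements, as inferred above, a simple acceptance check could compare software-reported lengths on a phantom or calibration object with the true lengths and apply a tolerance. The sketch below is hypothetical; the 1% tolerance and the sample readings are assumptions for illustration, not figures from the submission.

```python
def scaling_error_pct(measured_mm: float, true_mm: float) -> float:
    """Percent error of a software-reported length against a known length."""
    return abs(measured_mm - true_mm) / true_mm * 100.0

def passes_tolerance(measured_mm: float, true_mm: float,
                     tolerance_pct: float = 1.0) -> bool:
    """True if the measurement falls within the (hypothetical) tolerance."""
    return scaling_error_pct(measured_mm, true_mm) <= tolerance_pct

# Hypothetical phantom readings: (software measurement mm, true length mm)
cases = [(99.2, 100.0), (50.4, 50.0), (149.1, 150.0)]
for measured, true in cases:
    err = scaling_error_pct(measured, true)
    print(f"true={true} mm measured={measured} mm "
          f"error={err:.2f}% pass={passes_tolerance(measured, true)}")
```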
8. The sample size for the training set
The document does not mention a training set or any deep learning components that would typically require a training set. The software is described as using "scaling and measurement tools" and "manufacturers' templates," suggesting rule-based or algorithmic functionality rather than a machine learning model that needs a training set.
9. How the ground truth for the training set was established
Since no training set is mentioned or implied for a machine learning model, this question is not applicable based on the provided text.