K Number
K242599
Manufacturer
Carlsmed
Date Cleared
2024-12-20

(112 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The aprevo® Digital Planning software is intended to be used by trained, medically knowledgeable design personnel to generate surgical plans for skeletally mature patients. The device processes a 3D spine model in conjunction with inputs from healthcare professionals to produce 3D models of spinal corrections and anatomical measurements.

Device Description

The aprevo® Digital Planning is rules-based, semi-automated surgical planning software that, based on surgeon input, measures the 3D spine model and generates a spinal correction plan. The outputs consist of a surgical plan and 3D spinal correction assets, which are then reviewed by the surgeon. The device is operated by trained, medically knowledgeable Carlsmed design personnel.
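To illustrate the kind of anatomical measurement such planning software derives from a 3D spine model, the sketch below computes a segmental angle between two endplate direction vectors. The vectors, values, and function name are illustrative assumptions for this note, not details taken from the submission.

```python
import numpy as np

def angle_between_deg(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    # Clip guards against floating-point drift outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))))

# Hypothetical endplate direction vectors extracted from a 3D spine model
superior_endplate = np.array([1.0, 0.0, 0.2])
inferior_endplate = np.array([1.0, 0.0, -0.5])
segmental_angle = angle_between_deg(superior_endplate, inferior_endplate)
```

In practice such landmark vectors would come from segmented vertebral surfaces; here they are hard-coded only to show the geometry.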

AI/ML Overview

The provided document is a 510(k) summary for the aprevo® Digital Planning software. While it describes the software and its intended use, it does not contain specific details on acceptance criteria or a comprehensive study report with quantitative performance metrics that would typically be found in a detailed validation study.

The relevant section for performance criteria and testing is under "Non-clinical Testing" (page 6). It states: "The surgical plans generated by aprevo® Digital Planning were shown to be substantially equivalent, with 90% confidence and reliability, to plans manually created by medically knowledgeable operators." This is a summary statement of a study result, but it lacks the detailed breakdown requested.
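The submission does not describe how "90% confidence and reliability" was demonstrated. One common interpretation of that phrase in device validation is a zero-failure success-run test, where the minimum number of test cases n satisfies reliability^n ≤ 1 − confidence. The sketch below assumes that interpretation; it is not confirmed by the 510(k).

```python
import math

def zero_failure_sample_size(confidence, reliability):
    """Minimum n with zero observed failures needed to claim `reliability`
    at `confidence` (success-run theorem: reliability**n <= 1 - confidence)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

n = zero_failure_sample_size(0.90, 0.90)  # 22 plans, if zero failures are allowed
```

Under this reading, 22 software-generated plans each matching the manual plan within tolerance would support the 90/90 claim; the actual sample size used is not stated in the document.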

Therefore, many of the requested fields cannot be filled directly from the provided text. I will fill in what can be inferred or explicitly stated and highlight where information is missing.


Acceptance Criteria and Study for aprevo® Digital Planning Software

1. Table of Acceptance Criteria and Reported Device Performance

| Acceptance Criteria (Quantitative / Qualitative) | Reported Device Performance |
| --- | --- |
| Surgical plans generated by aprevo® Digital Planning are substantially equivalent to plans manually created by medically knowledgeable operators | Substantially equivalent with 90% confidence and reliability to manually created plans |
| Software performs according to specifications | Verified |
| Software performs according to user requirements | Verified |

Missing Information:

  • Specific quantitative metrics for "substantial equivalence" (e.g., specific measurement tolerances for anatomical landmarks, angular measurements, or correctional values). The document states "90% confidence and reliability," which is a statistical measure of the comparison, not the quantitative metric itself.
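To make concrete what a quantitative equivalence metric could look like, the sketch below checks hypothetical per-case angular measurements from software-generated plans against manually created plans using an assumed ±2° tolerance. None of these numbers or tolerances appear in the 510(k).

```python
import numpy as np

# Hypothetical per-case angular measurements (degrees): software vs. manual plan
software = np.array([42.1, 38.7, 55.0, 47.3])
manual = np.array([41.5, 39.9, 54.2, 46.0])
tolerance_deg = 2.0  # assumed clinical tolerance; not stated in the submission

within_tol = np.abs(software - manual) <= tolerance_deg
pass_rate = float(within_tol.mean())  # fraction of cases within tolerance
```

A real protocol would define the tolerance per measurement type (e.g., lordosis, Cobb angle, disc height) and a required pass rate; the document discloses neither.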

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size for Test Set: Not explicitly stated in the provided text.
  • Data Provenance: Not explicitly stated (e.g., country of origin). The study involved comparing software-generated plans to "plans manually created by medically knowledgeable operators," implying retrospective data processed by operators.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

  • Number of Experts: Not explicitly stated.
  • Qualifications of Experts: Described as "medically knowledgeable operators" and "Internal Carlsmed personnel only" (page 6). Specific qualifications like "radiologist with 10 years of experience" are not provided.

4. Adjudication Method for the Test Set

  • Adjudication Method: Not explicitly stated. The comparison is between the automated software plans and "manually created plans," but how discrepancies, if any, were resolved for the ground truth is not detailed.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • MRMC Study Done? No MRMC study in the standard sense (human readers with versus without AI assistance) is described. The study compares software-generated plans to manually created plans; it is a performance validation of the algorithm's output against human-generated output, not a study of how human readers improve with AI assistance.
  • Effect Size of Human Reader Improvement: Not applicable given the type of study described.

6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

  • Standalone Study Done? Effectively, yes. The "Non-clinical Testing" section validates the software-generated surgical plans against manually created plans. Although the software takes "inputs from healthcare professionals" and is operated by trained personnel, the evaluation focuses on the algorithm's output ("surgical plans generated by aprevo® Digital Planning") compared against a human-derived reference. The device is described as "rules-based, semi-automated."

7. Type of Ground Truth Used

  • Ground Truth Type: "Plans manually created by medically knowledgeable operators" (page 6). This is a form of expert consensus or expert-derived ground truth.

8. Sample Size for the Training Set

  • Sample Size for Training Set: Not explicitly stated in the provided text. The document describes the software as "rules-based, semi-automated" and uses algorithms for landmark determination and surgical plan generation. This implies a rule-based system or potentially a machine learning model, but training set details are not provided.

9. How the Ground Truth for the Training Set Was Established

  • Ground Truth Establishment for Training Set: Not explicitly stated. Given the description of the device as "rules-based, semi-automated," it suggests that the "rules" and "algorithms" were likely developed based on medical knowledge and potentially expert-verified data, but the specific process for establishing ground truth for any potential training data is not detailed.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).