Search Results
Found 3 results
510(k) Data Aggregation
(78 days)
3D-Side SA
Customize for ankle arthroplasty is intended to be used as a software interface to assist in:
- Visualization, modification, validation of the planning of total ankle arthroplasty
- Communication of treatment options
- Segmentation of CT-scan data
- 3D CAD models generation
- Use X-ray scan information to enhance the planning of ankle arthroplasty
- Managing timeline and cases
Customize is not intended to be used for:
- Spine surgeries
- Implant and instrument design
Experience in software usage and clinical assessment is necessary for proper use of the software. It is to be used for adult patients only and should not be used for diagnostic purposes.
Customize is intended to be used during the preparation of ankle arthroplasties. It visualizes surgical treatment options that were previously created based on 3D CAD files generated from multi-slice DICOM data from a CT scanner. It consists of a single software user interface where the physician can review the CAD files in 3D, review CT and X-ray images in 2D, and modify the position and orientation of the different 3D objects. Customize includes an implant library with 3D digital representations of various implant models so that the right implant positioning and sizing can be achieved based on the physician's input. After approval by the physician, the treatment plan is saved on the server and can be used as a reference during surgery. It is also possible to export the treatment plan for further processing such as designing and manufacturing of patient-specific devices (the design of the latter is done using external software).
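The summary does not describe how the 3D CAD models are produced from the CT data. A minimal sketch of the standard approach, extracting a surface mesh from a binary segmentation with marching cubes, is shown below; scikit-image, the function name, and the synthetic data are all assumptions for illustration, not 3D-Side's actual pipeline:

```python
# Illustrative sketch only: the summary does not disclose 3D-Side's pipeline.
# Assumes a binary label volume (e.g., a talus mask) and its voxel spacing.
import numpy as np
from skimage import measure

def label_volume_to_mesh(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle surface mesh from a binary segmentation mask."""
    # Marching cubes at the 0.5 iso-level separates foreground from background.
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing
    )
    return verts, faces, normals

# Synthetic example: a voxelized sphere standing in for a segmented bone.
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) < 20 ** 2
verts, faces, _ = label_volume_to_mesh(mask, spacing=(0.5, 0.5, 0.5))
print(f"{len(verts)} vertices, {len(faces)} triangles")
```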
Here's a breakdown of the acceptance criteria and the study details for the "Customize" device based on the provided FDA 510(k) summary:
Acceptance Criteria and Device Performance
The provided document describes performance testing for the "Customize" device, focusing on segmentation validation, repeatability and reproducibility (R&R) of segmentation, and 3D model generation accuracy. While explicit "acceptance criteria" in terms of specific numeric thresholds are not directly stated in a table format, the narrative implies that the device successfully met the requirements of these tests to demonstrate substantial equivalence.
Implied Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria Category | Reported Device Performance |
|---|---|
| Image Segmentation | Successful segmentation of ankle anatomies using an AI-based algorithm. The AI produces a "temporary label" as an initialization for manual editing, implying accuracy sufficient for efficient human supervision. The study "demonstrat[ed] Customize has been validated for its intended use" for image segmentation. |
| Repeatability & Reproducibility (R&R) Study | Successful demonstration of repeatability and reproducibility in the segmentation of ankle anatomies, indicating consistency of the device's output under similar conditions (a Dice-based sketch of how such consistency is typically quantified follows this table). The study "demonstrat[ed] Customize has been validated for its intended use" for R&R. |
| Accuracy of 3D Model Generation (Tibia/Talus) | Accurate generation of 3D models for the tibia and talus based on CT-scan data. The study "demonstrat[ed] Customize has been validated for its intended use" for 3D model generation accuracy. No specific measurement units (e.g., mm, %) are provided for accuracy, but the overall conclusion of substantial equivalence implies these outcomes were within acceptable limits for a medical image management and processing system. |
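The summary names no metric for the R&R study. In segmentation studies, consistency between repeated runs is most commonly quantified with the Dice coefficient; the following is a hedged sketch of that calculation (NumPy only; the masks are random stand-ins, not device output):

```python
# Hedged sketch: pairwise Dice across repeated segmentations of one case.
# Both the metric and the data are assumptions; the summary reports neither.
import numpy as np
from itertools import combinations

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

# Three repeated segmentations of the same case: stand-ins that differ
# from a base mask in roughly 2% of voxels.
rng = np.random.default_rng(0)
base = rng.random((32, 32, 32)) > 0.5
repeats = [base ^ (rng.random(base.shape) > 0.98) for _ in range(3)]

# Mean pairwise Dice summarizes repeatability for this case.
scores = [dice(a, b) for a, b in combinations(repeats, 2)]
print(f"mean pairwise Dice: {np.mean(scores):.3f}")
```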
Study Details
2. Sample size used for the test set and the data provenance:
- Segmentation validation and R&R study: The document states the AI-based ankle algorithm was trained on a dataset of 126 medical images (CT scans). It does not explicitly state the size of a separate test set for validation; for studies of this type, a held-out portion of the dataset or a new, unseen dataset is typically used, but no distinct test-set size can be identified beyond the training dataset described.
- Data Provenance: Not explicitly stated whether the data was retrospective or prospective, nor the country of origin. The mix of patients (30% eligible for ankle arthroplasty, 70% cadaveric cases) suggests a blend of clinical and research data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The general statement about "human supervision by a 3D-Side engineer" applies to the device's operational use (editing AI output), but not necessarily to the establishment of ground truth for the validation studies.
4. Adjudication method for the test set:
- This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study is not described. The document mentions that the AI produces an "initialization" that "requires manual editions" and human supervision, suggesting it functions as an assistive tool rather than a standalone diagnostic or decision-making system. The focus is on the software's performance metrics (segmentation, R&R, 3D model generation), not on comparing human performance with and without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The description implies that standalone performance was assessed to some extent for the "initialization" step. The AI algorithm generates a "temporary label" which then undergoes "human supervision." The "performance testing included 1) segmentation validation" of the software, suggesting its initial output (before human intervention) was evaluated against a ground truth. However, the device's intended use is described as AI followed by human supervision, so the final, clinical performance relies on the human-in-the-loop.
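If the initialization was indeed scored against reference segmentations, the accuracy of the derived 3D models would typically be reported as an overlap or surface-distance metric in millimeters. The summary gives neither metric nor units; a hedged sketch of a mean-surface-distance computation (SciPy is an assumption) is:

```python
# Hedged sketch: mean surface distance (mm) between an algorithm's mask and
# a reference mask, one plausible way such an accuracy study could be scored.
import numpy as np
from scipy import ndimage

def mean_surface_distance(pred: np.ndarray, ref: np.ndarray, spacing) -> float:
    """Average distance from the predicted surface to the reference surface."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    # Surface voxels of the prediction: foreground minus its erosion.
    pred_surface = pred & ~ndimage.binary_erosion(pred)
    # Distance (mm) from every voxel to the nearest reference foreground voxel
    # (zero inside the reference, so this measures outward deviation).
    dist_to_ref = ndimage.distance_transform_edt(~ref, sampling=spacing)
    return float(dist_to_ref[pred_surface].mean())

# Synthetic check: a prediction dilated one voxel beyond the reference.
ref = np.zeros((48, 48, 48), bool)
ref[12:36, 12:36, 12:36] = True
pred = ndimage.binary_dilation(ref)
print(f"{mean_surface_distance(pred, ref, spacing=(0.5, 0.5, 0.5)):.2f} mm")
```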
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The document implies that the "corresponding segmentation" data was used as ground truth for training the AI. For the performance studies, it is logical that similar "ground truth" segmentations (likely created by experts, though their number and qualifications are not specified) would be used for comparison, especially for segmentation accuracy and 3D model generation. Pathology or outcomes data are not mentioned.
8. The sample size for the training set:
- 126 medical images (CT scans).
9. How the ground truth for the training set was established:
- The ground truth for the training set was established through "corresponding segmentation." This typically means that experts manually segmented the anatomical structures (bones in this case) on the CT scans, and these expert-generated segmentations were used as the gold standard to train the AI model. The details of who performed these segmentations (e.g., number of experts, their qualifications) are not provided.
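A hypothetical sketch of what such supervised training looks like, with an expert mask serving as the voxel-wise label and a soft-Dice loss driving the update (PyTorch, the tiny model, and the loss are all assumptions; nothing here reflects 3D-Side's actual setup):

```python
# Hypothetical sketch: one training step of a segmentation model against an
# expert-drawn mask. Model, loss, and data are stand-ins for illustration.
import torch
import torch.nn as nn

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps=1e-6):
    """1 - Dice between a sigmoid probability map and a binary mask."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Tiny 3D stand-in model: one conv layer producing a foreground probability.
model = nn.Sequential(nn.Conv3d(1, 1, kernel_size=3, padding=1), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

ct = torch.randn(1, 1, 32, 32, 32)                           # CT patch (stand-in)
expert_mask = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()   # expert label

opt.zero_grad()
loss = soft_dice_loss(model(ct), expert_mask)
loss.backward()
opt.step()
print(f"one training step, loss={loss.item():.3f}")
```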
(103 days)
3D-Side SA
Customize for shoulder arthroplasty is intended to be used as a software interface to assist in:
- Visualization, modification, validation of the planning of shoulder arthroplasty
- Communication of treatment options
- Segmentation of CT-scan data
- 3D CAD models generation
- Managing timeline and cases
Customize is not intended to be used for:
- Spine surgeries
- Implant and instrument design
Experience in software usage and clinical assessment is necessary for proper use of the software. It is to be used for adult patients only and should not be used for diagnostic purposes.
Customize is intended to be used during the preparation of shoulder arthroplasties. It visualizes surgical treatment options that were previously created based on 3D CAD files generated from multi-slice DICOM data from a CT scanner. It consists of a single software user interface where the physician can review the CAD files in 3D and modify the position and orientation of the different 3D objects. Customize includes an implant library with 3D digital representations of various implant models so that the right implant positioning and sizing can be achieved based on the physician's input. After approval by the physician, the treatment plan is saved on the server and can be used as a reference during surgery.
Customize is prescription use only.
This 510(k) summary for the "Customize" device describes its intended use for shoulder arthroplasty planning. However, it does not provide specific acceptance criteria or a dedicated study report that details how the device meets those criteria with numerical performance metrics.
The document states that "Non-clinical performance data was included in the 510(k)-submission demonstrating Customize has been validated for its intended use and substantial equivalence to the predicate device." It also mentions "Software verification and validation was performed" and "performance testing included 1) segmentation validation of the Customize software, 2) Repeatability and Reproducibility study on the segmentation of shoulder anatomies and 3) accuracy study on 3D model generation for humerus and scapula."
However, the summary itself does not contain the detailed results, acceptance criteria, or study methodologies that would allow for a comprehensive answer to your request. It only broadly states that these tests were performed and that the device was deemed "substantially equivalent."
Therefore, I cannot populate the table or provide the detailed information requested regarding acceptance criteria and study particulars based solely on the provided text. The requested data points (sample sizes, ground truth details, expert qualifications, etc.) are not present in this summary.
(133 days)
3D-Side SA
3D-Cut is intended to be used as a surgical instrument to assist in preoperative planning and/or in guiding the marking of bone and/or in guiding surgical instruments in non-acute, non-joint replacing resection of bone tumors, for the femur, tibia and pelvis including the sacrum.
3D-Cut is a patient-matched additively manufactured single-use surgical instrument (PSI). Based on preoperative planning, the instruments are intended to assist physicians in guiding the marking of bone and guiding surgical instruments in bone tumor resection surgery, excluding joint replacement surgeries.
The 3D-Cut instruments are designed starting from patient medical images from computed tomography (CT) and magnetic resonance imaging (MRI). The clinician delineates the tumor on the MRI. The MRI and the delineated tumor are merged onto the CT, which is used to extract the 3D CAD model of the bone. A draft treatment plan is submitted for evaluation to the treating clinician. Upon the surgeon's approval, a PSI is designed and again submitted to the clinician. After validation, the PSI is produced using additive manufacturing.
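The summary does not say how the MRI and the delineated tumor are merged onto the CT. One standard approach is rigid registration driven by mutual information; below is a hedged sketch using SimpleITK (an assumed toolkit, not the vendor's named tooling):

```python
# Hedged sketch: rigid MRI-to-CT registration with mutual information, one
# standard way to do the "merge" the summary describes. SimpleITK is an
# assumption; 3D-Side's actual method is not disclosed.
import SimpleITK as sitk

def register_mri_to_ct(ct: sitk.Image, mri: sitk.Image) -> sitk.Transform:
    """Estimate a rigid transform mapping the MRI into the CT frame."""
    initial = sitk.CenteredTransformInitializer(
        ct, mri, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY,
    )
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                      numberOfIterations=100,
                                      convergenceMinimumValue=1e-6)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(sitk.Cast(ct, sitk.sitkFloat32),
                       sitk.Cast(mri, sitk.sitkFloat32))

# The same transform would then carry the tumor delineation onto the CT grid:
#   tumor_on_ct = sitk.Resample(tumor_mask, ct, transform,
#                               sitk.sitkNearestNeighbor, 0)
```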
The provided text describes the 3D-Cut device and its 510(k) submission, focusing on regulatory aspects, indications for use, and a high-level summary of performance data. However, this document does not contain the detailed information necessary to fully answer the specific questions regarding acceptance criteria, sample sizes, expert qualifications, ground truth establishment, or clinical study specifics like MRMC study results or effect sizes.
The text states: "Several tests have been conducted to demonstrate the output of the manufacturing process conforms to the device specifications. A combination of bench, cadaveric and clinical (OUS published case series) testing was executed to demonstrate the subject device is substantially equivalent to the predicate device and performs in accordance with its intended use." It also mentions "Software verification and validation were performed, and documentation was included in this submission in accordance with FDA Guidance 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices'".
This indicates that internal performance specifications and software verification/validation were performed, but the document does not elaborate on the specific acceptance criteria for these tests, nor does it provide details of a clinical study that would assess algorithm performance in the way suggested by the questions (e.g., number of experts, adjudication methods, MRMC studies, standalone performance data).
Therefore, I can only provide an answer that reflects the absence of the requested detailed information in the provided document.
Here's an assessment based on the provided document, highlighting the missing information:
1. A table of acceptance criteria and the reported device performance
The document mentions that "Several tests have been conducted to demonstrate the output of the manufacturing process conforms to the device specifications." and "Software verification and validation were performed". However, the specific acceptance criteria and the quantitative reported device performance for these tests are NOT provided in this document.
2. Sample sizes used for the test set and the data provenance
The document refers to "bench, cadaveric and clinical (OUS published case series) testing."
- Bench and Cadaveric Testing: No sample sizes are specified.
- Clinical Testing ("OUS published case series"): No sample size for the "case series" is provided, nor are details about the data provenance (e.g., specific country of origin, retrospective or prospective nature of these case series).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document states that the "clinician delineates the tumor on the MRI" and a "draft treatment plan is submitted for evaluation to the treating clinician. Upon surgeon's approval, a PSI is designed and again submitted to the clinician. After validation, the PSI is produced." This implies clinical input for planning, but it does not specify the number of experts, their qualifications, or how a 'ground truth' for evaluating the device's performance (e.g., guiding surgery accuracy) was established for a test set. The clinical "validation" mentioned likely refers to the surgeon's approval of the design for a specific patient, not a generalized ground truth for a test set.
4. Adjudication method for the test set
No information on adjudication methods for a test set is provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
There is no mention of an MRMC comparative effectiveness study, nor any data on how human readers (or surgeons in this context) improve with or without AI (device) assistance. The device is a physical surgical instrument resulting from preoperative planning, not explicitly an AI diagnostic tool for image interpretation.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The device is a "patient-matched additively manufactured single use surgical instrument (PSI)" based on "preoperative planning." The planning process involves a "clinician delineat[ing] the tumor on the MRI" and approving the treatment plan and PSI design. This indicates a human-in-the-loop process. No standalone algorithm performance data without human input is mentioned or applicable given the nature of the device.
7. The type of ground truth used
For the design and approval process, the "treating clinician's approval" and "validation" serve as the ground truth for shaping the PSI. For the performance of the manufactured device, "dimensional stability," "mechanical testing," and "simulated use testing on cadaveric specimen" would rely on physical measurements and surgical outcomes on cadavers. The "OUS published case series" would likely rely on clinical outcomes. However, a specific, generalized "ground truth" definition for a test set that would validate the device's accuracy in a structured study format (e.g., expert consensus on image interpretation, pathology, or long-term patient outcomes for a large cohort) is not described.
8. The sample size for the training set
The document describes a custom manufacturing process where the device is "designed starting from patient medical images." This implies a patient-specific design, not a general algorithm that is "trained" on a large dataset in the typical sense of machine learning. While there might be internal design rules or algorithms, the concept of a "training set" as understood in a machine learning context for diagnostic AI is not explicitly described or applicable in the provided information about this custom-manufactured surgical instrument.
9. How the ground truth for the training set was established
As there is no described "training set" in the context of an AI algorithm, this question is not applicable based on the provided text. The "ground truth" for the device's design is the clinician's approval of the proposed plan and PSI.