Search Results
Found 4 results
510(k) Data Aggregation
(181 days)
3D-Side
Archer® PSI System is indicated as an orthopedic instrument to assist the physician in the intraoperative positioning of total shoulder replacement components and in guiding the drilling and cutting of the bone.
Archer® PSI System must only be used in conjunction with Archer™ CSR Total Shoulder (K152825, K173812, K181287, K182500, K191811), Catalyst EA Convertible Stemmed Shoulder (K222317) and Archer™ R1 Reverse (K202611, K211991, K213349, K223655, K232583) components in the context of primary total shoulder replacement, and following a delto-pectoral approach only. Archer® PSI System is manufactured from a pre-operative plan validated by the surgeon in the 'Archer™ 3D Targeting' platform (K213779). Archer® PSI System is indicated for the patient population fulfilling the Archer™ CSR Total Shoulder, Catalyst EA Convertible Stemmed Shoulder and Archer™ R1 Reverse indications, for whom CT images are available with identifiable anatomical landmarks for placement and compliant with the imaging protocol provided by Archer 3D Targeting.
The device is intended for single use only.
The device is intended for adult patients.
The device must be used by a physician trained in the performance of surgery.
The "Archer PSI System" device is a patient-matched, additively manufactured, single-use surgical instrument (PSI). Archer PSI System is an instrument set containing a glenoid guide and its bone model and/or a humeral guide and its bone model. This patient-specific medical device is designed to fit the patient's anatomy and to transfer a patient-specific pre-operative plan to the operating room. It is intended for surgical interventions in orthopaedic procedures for total shoulder arthroplasty.
The Archer PSI System instruments are designed from a draft treatment plan available via the 'Archer™ 3D Targeting' platform. Based on computed tomography (CT) of the shoulder anatomy, 3D CAD models of the bones, together with the positioning and sizing of the glenoid and humeral components, are submitted to the surgeon for evaluation. Upon the surgeon's approval, the guides and bone models are designed from the validated plan and manufactured using additive manufacturing.
The provided FDA 510(k) summary for the "Archer PSI System" does not contain the detailed acceptance criteria or the specific study that directly proves the device meets those criteria in a quantitative manner as typically expected for medical device performance studies involving sensitivity, specificity, accuracy, etc. However, it does outline the types of non-clinical and cadaveric testing performed to demonstrate substantial equivalence to a predicate device.
Here's an attempt to structure the information based on the request, extracting what is available and noting what is not:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative "acceptance criteria" (e.g., "accuracy must be > 95%") nor does it provide "reported device performance" in terms of explicit metrics like sensitivity, specificity, or error rates. Instead, the "performance" is described in terms of demonstrating "substantial equivalence" through various engineering and cadaveric tests.
A more accurate representation, based on the provided text, would be:
Acceptance Criteria Category | Description (from document) | Reported Device Performance (from document) |
---|---|---|
Mechanical Integrity | Demonstrate mechanical integrity post-processing. | Testing was conducted. |
Debris Generation | Assess debris generation. | Testing was conducted. |
Intra-Designer Variability | Assess variability within a single designer's output. | Testing was conducted. |
Inter-Designer Variability | Assess variability between different designers' outputs. | Testing was conducted. |
Biocompatibility | Ensure material biocompatibility. | Assessment conducted. |
Cleaning & Sterilization | Validate cleaning and sterilization processes. | Validations conducted. |
Manufacturing Cleaning | Validate manufacturing cleaning processes. | Validation conducted. |
Packaging & Shelf-life | Validate packaging integrity and shelf-life. | Validation conducted. |
Functional Equivalence | Demonstrate functional equivalence to manual techniques for positioning and guiding drill/cut. | Cadaveric testing executed to demonstrate substantial equivalence between the two techniques (manual and PSI, for both anatomic and reverse techniques). |
Pre-operative Planning | Manufactured from a pre-operative planning validated by the surgeon in the 'Archer™ 3D Targeting' platform (K213779). | The device design is based on surgeon-validated plans within the Archer™ 3D Targeting platform. This implies an acceptance of the planning accuracy by the surgeon. |
2. Sample Size for the Test Set and Data Provenance
- Sample Size for Cadaveric Testing: The document states that "Cadaveric testing was executed" but does not specify the sample size (number of cadavers or procedures) used for this testing.
- Data Provenance: The cadaveric testing is implied to be prospective in nature, as it was "executed to demonstrate the substantial equivalence." There is no information provided regarding the country of origin of the cadaveric data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: The document states that "a pre-operative planning validated by the surgeon" is part of the process. For the cadaveric testing, it does not explicitly state the number of experts (e.g., surgeons) involved in establishing the "ground truth" or assessing the "substantial equivalence."
- Qualifications of Experts: The document mentions that the device is to be used by a "physician trained in the performance of surgery." For the validation of the pre-operative plan, the expert is identified as "the surgeon." While this indicates a medical professional, specific qualifications (e.g., years of experience, subspecialty) are not provided.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the cadaveric testing or any other performance evaluation. The "validation by the surgeon" for the pre-operative plan suggests a form of single-expert consensus at the planning stage, but not for the overall performance assessment.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No.
- Effect Size: As no MRMC study was performed, no effect size of human readers improving with AI vs. without AI assistance is reported. The Archer PSI System is a patient-specific instrument, not an AI diagnostic or assistive tool in the MRMC sense. The comparison was between manual surgical techniques and PSI-assisted techniques in cadavers.
6. Standalone (Algorithm Only) Performance Study
- Was a standalone study done? Not explicitly in terms of an "algorithm only" performance study. The device itself is a physical, patient-specific instrument derived from a digital plan. The document describes "Cadaveric testing" which evaluates the combined PSI system (planning software output + physical guide) in a simulated surgical environment, not just the planning algorithm in isolation from its physical manifestation or use.
- The "Intra- and Inter-Designer Variability testing" and "Mechanical Integrity" tests are standalone evaluations of aspects of the device's design and manufacturing, but not of the surgical guidance algorithm's performance on its own.
7. Type of Ground Truth Used
- For Pre-operative Planning: The ground truth for the design of the PSI System is based on a "pre-operative planning validated by the surgeon" using CT images and anatomical landmarks. This can be considered a form of expert consensus/validation on the desired surgical outcome/instrument design.
- For Cadaveric Testing: The "ground truth" for the cadaveric study would be the actual anatomical targets and the achieved drill/cut placements, compared to the planned placements and traditional manual techniques. While experts (surgeons) would perform and assess these, the ultimate "truth" is the physical reality within the cadaver. The document implies comparison to "manual techniques" as a reference.
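The document does not report how planned-versus-achieved placements were measured, but in cadaveric accuracy studies of this kind the deviation is typically quantified as an angular error between the planned and achieved drill axes plus a translational offset of the entry point. A minimal illustrative sketch follows; the function name and all numeric values are hypothetical, not taken from the 510(k) summary:

```python
import numpy as np

def placement_deviation(planned_entry, planned_axis, achieved_entry, achieved_axis):
    """Quantify how far an achieved drill trajectory deviates from the plan.

    Returns the angular deviation (degrees) between the two drill axes and
    the translational offset (same units as the inputs) between entry points.
    """
    p_axis = np.asarray(planned_axis, dtype=float)
    a_axis = np.asarray(achieved_axis, dtype=float)
    # Normalize, and clamp to guard against floating-point overshoot in arccos.
    cos_angle = np.dot(p_axis, a_axis) / (np.linalg.norm(p_axis) * np.linalg.norm(a_axis))
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    offset = np.linalg.norm(np.asarray(achieved_entry, float) - np.asarray(planned_entry, float))
    return angle_deg, offset

# Hypothetical numbers: achieved axis tilted 5 degrees from plan, entry 1.5 mm off.
angle, offset = placement_deviation(
    planned_entry=[0.0, 0.0, 0.0],
    planned_axis=[0.0, 0.0, 1.0],
    achieved_entry=[1.5, 0.0, 0.0],
    achieved_axis=[np.sin(np.radians(5)), 0.0, np.cos(np.radians(5))],
)
print(round(angle, 1), round(offset, 1))  # 5.0 1.5
```

Reporting such per-case deviations (rather than a pass/fail statement) is what would normally populate the "reported device performance" column missing from this summary.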
8. Sample Size for the Training Set
The document does not provide any information regarding a "training set sample size." The Archer PSI System is described as being "designed from a draft treatment plan" and "manufactured from a pre-operative planning validated by the surgeon." This suggests a patient-specific design process rather than a machine learning model trained on a large dataset. The underlying "Archer™ 3D Targeting" platform (K213779) (a separate cleared device) would be the system performing the planning, and its own 510(k) might contain training data details if it uses AI/ML. However, for the Archer PSI System itself, no training set information is present.
9. How Ground Truth for the Training Set Was Established
As no training set is mentioned for the Archer PSI System in this document, no information is provided on how its ground truth would have been established.
(78 days)
3D-Side SA
Customize for ankle arthroplasty is intended to be used as a software interface to assist in:
- Visualization, modification, validation of the planning of total ankle arthroplasty
- Communication of treatment options
- Segmentation of CT-scan data
- 3D CAD models generation
- Use X-ray scan information to enhance the planning of ankle arthroplasty
- Managing timeline and cases
Customize is not intended to be used for
- Spine surgeries
- Implant and instrument design
Experience in usage and clinical assessment are necessary for a proper use of the software. It is to be used for adult patients only and should not be used for diagnostic purposes.
Customize is intended to be used during the preparation of ankle arthroplasties. It visualizes surgical treatment options that were previously created based on 3D CAD files generated from multi-slice DICOM data from a CT scanner. It consists of a single software user interface where the physician can review the CAD files in 3D, review CT and X-ray images in 2D, and modify the position and orientation of the different 3D objects. Customize includes an implant library with 3D digital representations of various implant models so that the right implant positioning and sizing can be achieved based on the physician's input. After approval by the physician, the treatment plan is saved on the server and can be used as a reference during surgery. It is also possible to export the treatment plan for further processing, such as designing and manufacturing patient-specific devices (the design of the latter is done using external software).
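The summary describes a pipeline from CT DICOM data to segmented bone to 3D CAD models but gives no algorithmic detail. As background, the classic first step of CT bone segmentation is a Hounsfield-unit (HU) threshold; the sketch below is purely illustrative of that generic step and is not the Customize algorithm (which the summary says is AI-based with manual editing). The threshold value and function name are assumptions:

```python
import numpy as np

# Illustrative only: cortical bone typically sits well above ~300 HU on CT,
# so a simple threshold yields a first-pass bone mask that downstream steps
# (AI refinement, manual editing, mesh extraction) would then improve.
BONE_HU_THRESHOLD = 300

def bone_mask(ct_volume_hu):
    """Return a boolean mask of voxels at or above the bone threshold."""
    return np.asarray(ct_volume_hu) >= BONE_HU_THRESHOLD

# Synthetic 3-voxel "volume": soft tissue (40 HU), trabecular (400), cortical (1200).
volume = np.array([[[40, 400, 1200]]])
print(bone_mask(volume).sum())  # 2
```

A surface-meshing step (e.g., marching cubes) over such a mask is what would produce the 3D CAD models the document refers to.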
Here's a breakdown of the acceptance criteria and the study details for the "Customize" device based on the provided FDA 510(k) summary:
Acceptance Criteria and Device Performance
The provided document describes performance testing for the "Customize" device, focusing on segmentation validation, repeatability and reproducibility (R&R) of segmentation, and 3D model generation accuracy. While explicit "acceptance criteria" in terms of specific numeric thresholds are not directly stated in a table format, the narrative implies that the device successfully met the requirements of these tests to demonstrate substantial equivalence.
Implied Acceptance Criteria and Reported Device Performance:
Acceptance Criteria Category | Reported Device Performance |
---|---|
Image Segmentation | Successful segmentation of ankle anatomies using an AI-based algorithm. The AI produces a "temporary label" as an initialization for manual editing, implying accuracy sufficient for efficient human supervision. The study "demonstrat[ed] Customize has been validated for its intended use" for image segmentation. |
Repeatability & Reproducibility (R&R) Study | Successful demonstration of repeatability and reproducibility in the segmentation of ankle anatomies. This indicates consistency in the device's output under similar conditions. The study "demonstrat[ed] Customize has been validated for its intended use" for R&R. |
Accuracy of 3D Model Generation (Tibia/Talus) | Accurate generation of 3D models for the tibia and talus based on CT-scan data. The study "demonstrat[ed] Customize has been validated for its intended use" for 3D model generation accuracy. No specific measurement units (e.g., mm, %) are provided for accuracy, but the overall conclusion of substantial equivalence implies these outcomes were within acceptable limits for a medical image management and processing system. |
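The summary names segmentation validation and an R&R study but reports no metric. In segmentation studies, overlap with an expert reference is conventionally measured with the Dice similarity coefficient, and repeatability can be summarized as pairwise Dice between repeated segmentations of the same scan. A minimal sketch of both, with made-up masks (nothing here comes from the 510(k)):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Accuracy: predicted vs. expert ground-truth mask (toy 1-D example).
gt   = np.array([0, 1, 1, 1, 0], bool)
pred = np.array([0, 1, 1, 0, 0], bool)
print(dice(pred, gt))  # 0.8

# Repeatability: worst pairwise Dice among repeated segmentations of one scan.
runs = [gt, pred, gt]
pairwise = [dice(runs[i], runs[j]) for i in range(len(runs)) for j in range(i + 1, len(runs))]
print(round(min(pairwise), 2))  # 0.8
```

Reporting Dice (or a surface-distance metric) against stated thresholds is exactly the kind of quantitative acceptance criterion this summary lacks.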
Study Details
2. Sample size used for the test set and the data provenance:
- Segmentation validation and R&R study: The document states the AI-based ankle algorithm was trained on a dataset of 126 medical images (CT scans), but it does not explicitly state the size of a separate "test set" for validation. For these types of studies, a held-out portion of the overall dataset or a new, unseen dataset is typically used for testing; without further information, a distinct test-set size cannot be identified beyond the training dataset described.
- Data Provenance: Not explicitly stated whether the data was retrospective or prospective, nor the country of origin. The mix of patients (30% eligible for ankle arthroplasty, 70% cadaveric cases) suggests a blend of clinical and research data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The general statement about "human supervision by a 3D-Side engineer" applies to the device's operational use (editing AI output), but not necessarily to the establishment of ground truth for the validation studies.
4. Adjudication method for the test set:
- This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study is not described. The document mentions that the AI produces an "initialization" that "requires manual editions" and human supervision, suggesting it functions as an assistive tool rather than a standalone diagnostic or decision-making system. The focus is on the software's performance metrics (segmentation, R&R, 3D model generation), not on comparing human performance with and without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The description implies that standalone performance was assessed to some extent for the "initialization" step. The AI algorithm generates a "temporary label" which then undergoes "human supervision." The "performance testing included 1) segmentation validation" of the software, suggesting its initial output (before human intervention) was evaluated against a ground truth. However, the device's intended use is described as AI followed by human supervision, so the final, clinical performance relies on the human-in-the-loop.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The document implies that the "corresponding segmentation" data was used as ground truth for training the AI. For the performance studies, it is logical that similar "ground truth" segmentations (likely created by experts, though their number and qualifications are not specified) would be used for comparison, especially for segmentation accuracy and 3D model generation. Pathology or outcomes data are not mentioned.
8. The sample size for the training set:
- 126 medical images (CT scans).
9. How the ground truth for the training set was established:
- The ground truth for the training set was established through "corresponding segmentation." This typically means that experts manually segmented the anatomical structures (bones in this case) on the CT scans, and these expert-generated segmentations were used as the gold standard to train the AI model. The details of who performed these segmentations (e.g., number of experts, their qualifications) are not provided.
(103 days)
3D-Side SA
Customize for shoulder arthroplasty is intended to be used as a software interface to assist in:
- Visualization, modification, validation of the planning of shoulder arthroplasty
- Communication of treatment options
- Segmentation of CT-scan data
- 3D CAD models generation
- Managing timeline and cases
Customize is not intended to be used for
- Spine surgeries
- Implant and instrument design
Experience in usage and clinical assessment are necessary for a proper use of the software. It is to be used for adult patients only and should not be used for diagnostic purposes.
Customize is intended to be used during the preparation of shoulder arthroplasties. It visualizes surgical treatment options that were previously created based on 3D CAD files generated from multi-slice DICOM data from a CT scanner. It consists of a single software user interface where the physician can review the CAD files in 3D and modify the position and orientation of the different 3D objects. Customize includes an implant library with 3D digital representations of various implant models so that the right implant positioning and sizing can be achieved based on the physician's input. After approval by the physician, the treatment plan is saved on the server and can be used as a reference during surgery.
Customize is prescription use only.
This 510(k) summary for the "Customize" device describes its intended use for shoulder arthroplasty planning. However, it does not provide specific acceptance criteria or a dedicated study report that details how the device meets those criteria with numerical performance metrics.
The document states that "Non-clinical performance data was included in the 510(k)-submission demonstrating Customize has been validated for its intended use and substantial equivalence to the predicate device." It also mentions "Software verification and validation was performed" and "performance testing included 1) segmentation validation of the Customize software, 2) Repeatability and Reproducibility study on the segmentation of shoulder anatomies and 3) accuracy study on 3D model generation for humerus and scapula."
However, the summary itself does not contain the detailed results, acceptance criteria, or study methodologies that would allow for a comprehensive answer to your request. It only broadly states that these tests were performed and that the device was deemed "substantially equivalent."
Therefore, I cannot populate the table or provide the detailed information requested regarding acceptance criteria and study particulars based solely on the provided text. The requested data points (sample sizes, ground truth details, expert qualifications, etc.) are not present in this summary.
(133 days)
3D-Side SA
3D-Cut is intended to be used as a surgical instrument to assist in preoperative planning and/or in guiding the marking of bone and/or in guiding surgical instruments in non-acute, non-joint replacing resection of bone tumors, for femur, tibia and pelvis including sacrum.
3D-Cut is a patient-matched, additively manufactured, single-use surgical instrument (PSI). Based on a preoperative planning, the instruments are intended to assist physicians in guiding the marking of bone and guiding surgical instruments in bone tumor resection surgery, excluding joint replacement surgeries.
The 3D-Cut instruments are designed starting from patient medical images acquired with computed tomography (CT) and magnetic resonance imaging (MRI) devices. The clinician delineates the tumor on the MRI; the MRI and the delineated tumor are then merged onto the CT, which is used to extract the 3D CAD model of the bone. A draft treatment plan is submitted to the treating clinician for evaluation. Upon the surgeon's approval, a PSI is designed and again submitted to the clinician. After validation, the PSI is produced using additive manufacturing.
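The summary says the MRI (with the delineated tumor) is "merged onto" the CT but does not describe the method. Such merging is conventionally a rigid MRI-to-CT registration, after which the tumor contour is mapped into CT space by applying the resulting rotation and translation. The sketch below illustrates only that final mapping step, with a hypothetical registration result; none of these values or names come from the document:

```python
import numpy as np

def map_points(points_mri, rotation, translation):
    """Map tumor contour points from MRI space into CT space with a rigid
    transform (the output of an MRI-to-CT registration, assumed given here)."""
    return np.asarray(points_mri) @ np.asarray(rotation).T + np.asarray(translation)

# Hypothetical registration result: 90-degree rotation about z plus a 10 mm shift in x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, 0.0, 0.0])

tumor_contour_mri = np.array([[1.0, 0.0, 0.0],
                              [0.0, 1.0, 0.0]])
print(map_points(tumor_contour_mri, R, t))
```

Estimating `R` and `t` (e.g., by intensity- or landmark-based registration) is the part the 510(k) summary leaves undescribed.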
The provided text describes the 3D-Cut device and its 510(k) submission, focusing on regulatory aspects, indications for use, and a high-level summary of performance data. However, this document does not contain the detailed information necessary to fully answer the specific questions regarding acceptance criteria, sample sizes, expert qualifications, ground truth establishment, or clinical study specifics like MRMC study results or effect sizes.
The text states: "Several tests have been conducted to demonstrate the output of the manufacturing process conforms to the device specifications. A combination of bench, cadaveric and clinical (OUS published case series) testing was executed to demonstrate the subject device is substantially equivalent to the predicate device and performs in accordance with its intended use." It also mentions "Software verification and validation were performed, and documentation was included in this submission in accordance with FDA Guidance 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices'".
This indicates that internal performance specifications and software verification/validation were performed, but the document does not elaborate on the specific acceptance criteria for these tests, nor does it provide details of a clinical study that would assess algorithm performance in the way suggested by the questions (e.g., number of experts, adjudication methods, MRMC studies, standalone performance data).
Therefore, I can only provide an answer that reflects the absence of the requested detailed information in the provided document.
Here's an assessment based on the provided document, highlighting the missing information:
1. A table of acceptance criteria and the reported device performance
The document mentions that "Several tests have been conducted to demonstrate the output of the manufacturing process conforms to the device specifications." and "Software verification and validation were performed". However, the specific acceptance criteria and the quantitative reported device performance for these tests are NOT provided in this document.
2. Sample sizes used for the test set and the data provenance
The document refers to "bench, cadaveric and clinical (OUS published case series) testing."
- Bench and Cadaveric Testing: No sample sizes are specified.
- Clinical Testing ("OUS published case series"): No sample size for the "case series" is provided, nor are details about the data provenance (e.g., specific country of origin, retrospective or prospective nature of these case series).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document states that the "clinician delineates the tumor on the MRI" and a "draft treatment plan is submitted for evaluation to the treating clinician. Upon surgeon's approval, a PSI is designed and again submitted to the clinician. After validation, the PSI is produced." This implies clinical input for planning, but it does not specify the number of experts, their qualifications, or how a 'ground truth' for evaluating the device's performance (e.g., guiding surgery accuracy) was established for a test set. The clinical "validation" mentioned likely refers to the surgeon's approval of the design for a specific patient, not a generalized ground truth for a test set.
4. Adjudication method for the test set
No information on adjudication methods for a test set is provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
There is no mention of an MRMC comparative effectiveness study, nor any data on how human readers (or surgeons in this context) improve with or without AI (device) assistance. The device is a physical surgical instrument resulting from preoperative planning, not explicitly an AI diagnostic tool for image interpretation.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The device is a "patient-matched additively manufactured single use surgical instrument (PSI)" based on "preoperative planning." The planning process involves a "clinician delineat[ing] the tumor on the MRI" and approving the treatment plan and PSI design. This indicates a human-in-the-loop process. No standalone algorithm performance data without human input is mentioned or applicable given the nature of the device.
7. The type of ground truth used
For the design and approval process, the "treating clinician's approval" and "validation" serves as the ground truth for shaping the PSI. For the performance of the manufactured device, "dimensional stability," "mechanical testing," and "simulated use testing on cadaveric specimen" would rely on physical measurements and surgical outcomes on cadavers. The "OUS published case series" would likely rely on clinical outcomes. However, a specific, generalized "ground truth" definition for a test set that would validate the device's accuracy in a structured study format (e.g., expert consensus on image interpretation, pathology, or long-term patient outcomes for a large cohort) is not described.
8. The sample size for the training set
The document describes a custom manufacturing process where the device is "designed starting from patient medical images." This implies a patient-specific design, not a general algorithm that is "trained" on a large dataset in the typical sense of machine learning. While there might be internal design rules or algorithms, the concept of a "training set" as understood in a machine learning context for diagnostic AI is not explicitly described or applicable in the provided information about this custom-manufactured surgical instrument.
9. How the ground truth for the training set was established
As there is no described "training set" in the context of an AI algorithm, this question is not applicable based on the provided text. The "ground truth" for the device's design is the clinician's approval of the proposed plan and PSI.