Search Results
Found 10 results
510(k) Data Aggregation
(25 days)
3D Systems, Inc.
The Salto Talaris Ankle PSI System is intended to be used as patient specific surgical planning and instrumentation to assist in the positioning of total ankle replacement components intraoperatively, and in guiding bone cutting. The Salto Talaris Ankle PSI System is intended for use with Smith + Nephew's Salto Talaris Total Ankle System and its cleared indications for use, provided that anatomic landmarks necessary for alignment and positioning of the implant are identifiable on patient imaging CT scans.
3D Systems' Salto Talaris Ankle PSI System consists of patient specific outputs including surgical guides, anatomical models, and case reports. The Salto Talaris Ankle PSI System guides are made from biocompatible nylon and surgical grade stainless steel and are designed to fit the contours of the patient's distal tibial and proximal talar anatomy. The surgical guides, in combination with the Smith+Nephew Salto Talaris Total Ankle System instruments, facilitate the positioning of Salto Talaris Total Ankle Prostheses.
The provided document describes the FDA clearance (K243173) for the Salto Talaris Ankle PSI System. However, it does not contain a detailed study with acceptance criteria and reported device performance in the format requested.
The document states:
- "Non-clinical performance testing on the Salto Talaris Ankle PSI System included cadaveric comparison testing in order to compare Implant Alignment Accuracy and Guide Usability between the subject device and reference device."
- "Accuracy and functionality were shown to be similar to that of the standard instrumentation used for the Salto Talaris Total Ankle System."
This indicates that a comparison study was performed, but the specific acceptance criteria, reported performance values, sample size, ground truth establishment, or expert details are not provided in this 510(k) summary. The summary concludes that the device is substantially equivalent and "performs as well as the predicate device" based on this testing, but the numerical data from the study is not included.
Therefore, I cannot populate the table or answer the specific questions about the study design with the information available in the provided text. The document focuses on regulatory clearance based on substantial equivalence, rather than a detailed presentation of performance study results against predefined acceptance criteria.
(25 days)
3D Systems, Inc.
The Cadence Ankle PSI System is intended to be used as patient specific surgical planning and instrumentation to assist in the positioning of total ankle replacement components intraoperatively, and in guiding bone cutting. The Cadence Ankle PSI System is intended for use with Smith + Nephew's Cadence Total Ankle System and its cleared indications for use, provided that anatomic landmarks necessary for alignment and positioning of the implant are identifiable on patient imaging CT scans.
3D Systems' Cadence Ankle PSI System consists of patient specific outputs including surgical guides, anatomical models, and case reports. The Cadence Ankle PSI System guides are made from biocompatible nylon and surgical grade stainless steel and are designed to fit the contours of the patient's distal tibial and proximal talar anatomy. The surgical guides, in combination with the Smith+Nephew Cadence Total Ankle System instruments, facilitate the positioning of Cadence Total Ankle Prostheses.
The provided text does not contain detailed acceptance criteria, or a specific study demonstrating that the device meets them, in a form that would allow the requested table and points to be populated precisely. The document is a 510(k) summary for a medical device (the Cadence Ankle PSI System), which focuses on demonstrating substantial equivalence to a predicate device rather than presenting the kind of extensive performance study results typically reported for AI/ML-based diagnostic devices.
However, I can extract the available information regarding non-clinical performance testing:
Non-clinical Performance Testing:
The relevant section states: "Non-clinical performance testing on the Cadence Ankle PSI System included cadaveric comparison testing in order to compare Implant Alignment Accuracy and Guide Usability between the subject device and reference device. Accuracy and functionality were shown to be similar to that of the standard instrumentation used for the Cadence Total Ankle System."
This indicates that the study focused on Implant Alignment Accuracy and Guide Usability. The acceptance criteria are implied to be "similar to that of the standard instrumentation used for the Cadence Total Ankle System," which served as the reference device.
Based on the provided text, the following information is not available:
- A formal table of acceptance criteria with reported device performance metrics (e.g., specific thresholds for accuracy or usability scores).
- Detailed sample size for the test set (only "cadaveric comparison testing" is mentioned).
- Data provenance (e.g., country of origin, retrospective/prospective).
- Number of experts, their qualifications, or adjudication methods for ground truth.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done.
- If a standalone (algorithm only) performance was measured (this device is a surgical guide system, not an AI diagnostic algorithm in the typical sense).
- The specific type of ground truth used beyond "comparison to the standard instrumentation."
- Sample size for the training set.
- How the ground truth for the training set was established.
In summary, the document states performance testing focused on implant alignment accuracy and guide usability through cadaveric comparison, indicating similarity to standard instrumentation as the performance benchmark. However, granular details about specific acceptance criteria metrics, study design, and ground truth establishment are not provided.
(127 days)
3D Systems, Inc.
The Salto Talaris PSI System is intended to be used as patient specific surgical planning and instrumentation to assist in the positioning of total ankle replacement components intraoperatively, and in guiding bone cutting. The Salto Talaris PSI System is intended for use with Smith + Nephew's Salto Talaris Total Ankle System and its cleared indications for use.
3D Systems' Salto Talaris Ankle PSI System consists of patient specific outputs including surgical guides, anatomical models, and case reports. The Salto Talaris Ankle PSI System guides are made from biocompatible nylon and surgical grade stainless steel and are designed to fit the contours of the patient's distal tibial and proximal talar anatomy. The surgical guides, in combination with the Smith+Nephew Salto Talaris Total Ankle System instruments, facilitate the positioning of Salto Talaris Total Ankle Prostheses.
The provided text is a 510(k) summary for the Salto Talaris Ankle PSI System. It describes the device, its intended use, and a summary of non-clinical testing. However, it does not contain the detailed information necessary to fully answer all aspects of your request, particularly regarding specific acceptance criteria, expert qualifications, adjudication methods, details of ground truth establishment, or specific sample sizes for training and testing sets in the context of an AI-based system.
The Salto Talaris Ankle PSI System is described as patient-specific surgical planning and instrumentation to assist in positioning total ankle replacement components and guiding bone cutting. This device primarily involves physical surgical guides and anatomical models derived from CT data, rather than a standalone AI algorithm for diagnosis or image analysis. Therefore, some of the requested information, such as an MRMC comparative effectiveness study with AI assistance or standalone algorithm performance, is not directly applicable to this type of device as typically described for AI/ML-based diagnostic software.
Here's a breakdown of the available information based on your request:
1. Table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance |
---|---|
Mechanical Integrity (post-processing) | Met all acceptance criteria. |
Debris Generation | Met all acceptance criteria. |
Inter-Designer Variability analysis | Met all acceptance criteria. |
Implant Alignment Accuracy (cadaveric comparison) | Accuracy shown to be similar to standard instrumentation for the Salto Talaris Total Ankle System. |
Guide Usability (cadaveric comparison) | Functionality shown to be similar to standard instrumentation for the Salto Talaris Total Ankle System. |
The document states that the Salto Talaris Ankle PSI System met all acceptance criteria for mechanical integrity, debris generation, and inter-designer variability analysis. For implant alignment accuracy and guide usability, cadaveric comparison testing showed similarity to standard instrumentation. The specific quantitative acceptance criteria values (e.g., specific thresholds for mechanical integrity or debris generation, or a numerical range for alignment accuracy) are not provided in this summary.
2. Sample size used for the test set and the data provenance
The document mentions "Non-clinical cadaveric comparison testing." However, it does not specify the sample size (number of cadavers or cases) used for this test set, nor does it provide information about the data provenance (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide this information. Since the device involves surgical guides and bone cutting, "ground truth" in this context would likely relate to anatomical measurements or surgical outcomes, potentially assessed by orthopedic surgeons. However, no details on experts or their qualifications are given.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The document does not specify any adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance
An MRMC comparative effectiveness study of the type typically performed for AI/ML diagnostic software (where human readers evaluate cases with and without AI assistance) was not mentioned and is unlikely to be applicable based on the device description. This device provides physical guides for surgery, not AI-based image analysis for diagnosis. The non-clinical cadaveric testing compared the device to standard instrumentation, not to human readers using or not using AI.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
The device is described as "patient specific surgical planning and instrumentation to assist in the positioning... and in guiding bone cutting." This inherently implies a "human-in-the-loop" scenario (a surgeon using the guides). The "non-clinical cadaveric comparison testing" assessed the device in use; this is a standalone evaluation of the instrumentation itself, but not of an algorithm operating without human interaction for diagnosis or interpretation. The document does not describe a standalone algorithm performance test of the kind applied to AI/ML diagnostic software.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the cadaveric comparison testing, the "ground truth" implicitly relates to implant alignment accuracy and guide usability, compared against the results achieved using standard (non-PSI) instrumentation. The summary doesn't explicitly state how this ground truth was definitively established (e.g., by highly accurate post-operative CT measurements validated by multiple experts), but it refers to comparison rather than an absolute ground truth method.
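Although no numbers are reported, implant alignment accuracy in cadaveric PSI studies is often quantified as the angular deviation between the planned and the achieved cut plane, measured for example on post-operative CT. The sketch below illustrates that kind of calculation only; the function name, the use of plane normal vectors, and the example values are assumptions for illustration, not data from this 510(k).

```python
import numpy as np

def angular_deviation_deg(planned_normal, achieved_normal):
    """Angle in degrees between a planned and an achieved cut-plane normal.

    Hypothetical helper: the 510(k) summary does not state how alignment
    accuracy was actually measured or scored.
    """
    a = np.asarray(planned_normal, dtype=float)
    b = np.asarray(achieved_normal, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    # abs() treats flipped normals as the same plane; clip guards arccos.
    cos_theta = np.clip(abs(np.dot(a, b)), 0.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Illustrative values only: planned tibial resection plane vs. achieved cut.
planned = [0.0, 0.0, 1.0]
achieved = [0.02, -0.01, 0.999]
print(f"angular deviation: {angular_deviation_deg(planned, achieved):.2f} deg")
```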
8. The sample size for the training set
The document describes the device as being "designed with CT-based methods to produce patient-specific instrumentation." This suggests a design process based on anatomical data, but there is no mention of a "training set" in the context of an AI/ML algorithm. The device is a custom-manufactured surgical guide, not a learned AI model.
9. How the ground truth for the training set was established
Since there's no mention of a "training set" for an AI/ML algorithm, this question is not applicable in the context of the provided document. The "design" information would likely come from anatomical studies, engineering specifications, and clinical experience with total ankle arthroplasty, rather than a formal "ground truth" establishment for an AI training set.
(112 days)
3D Systems, Inc.
The Cadence Ankle PSI System is intended to be used as patient specific surgical planning and instrumentation to assist in the positioning of total ankle replacement components intraoperatively, and in guiding bone cutting. The Cadence Ankle PSI System is intended for use with Smith + Nephew's Cadence Total Ankle System and its cleared indications for use.
3D Systems' Cadence Ankle PSI System consists of patient specific outputs including surgical guides, anatomical models, and case reports. The Cadence Ankle PSI System guides are made from biocompatible nylon and surgical grade stainless steel and are designed to fit the contours of the patient's distal tibial and proximal talar anatomy. The surgical guides, in combination with the Smith+Nephew Cadence Total Ankle System instruments, facilitate the positioning of Cadence Total Ankle Prostheses.
The manufacturer, 3D Systems, Inc., has introduced the Cadence Ankle PSI System, a device intended for patient-specific surgical planning and instrumentation to assist in total ankle replacement component positioning and bone cutting. This device is designed for use with Smith + Nephew's Cadence Total Ankle System.
The information provided by the FDA 510(k) summary for K241326 primarily focuses on non-clinical performance testing to demonstrate substantial equivalence to the predicate device, not on clinical performance with human readers or standalone AI performance. Therefore, many of the requested details regarding clinical study design (e.g., MRMC studies, human reader improvement, expert consensus for ground truth) are not applicable based on the provided document.
Here's an analysis of the acceptance criteria and the study conducted, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The 510(k) summary mentions "all acceptance criteria for all performance tests" were met, but it does not explicitly list the quantitative acceptance criteria. It broadly states the types of tests conducted and their qualitative outcomes.
Acceptance Criteria Category | Reported Device Performance |
---|---|
Non-clinical Performance Testing: Mechanical Integrity (post-processing) | Met all acceptance criteria. |
Non-clinical Performance Testing: Debris Generation | Met all acceptance criteria. |
Non-clinical Performance Testing: Intra- and Inter-Designer Variability analysis | Met all acceptance criteria. |
Non-clinical Cadaveric Comparison Testing: Implant Alignment Accuracy (vs. reference device) | Shown to be similar to that of the standard instrumentation currently used for the Cadence Total Ankle System. |
Non-clinical Cadaveric Comparison Testing: Guide Usability (vs. reference device) | Shown to be similar to that of the standard instrumentation currently used for the Cadence Total Ankle System. |
2. Sample Size Used for the Test Set and Data Provenance
The document indicates "Non-clinical cadaveric comparison testing", which implies a test set was used. However, the exact sample size (number of cadavers or anatomic specimens) is not specified.
The data provenance is cadaveric testing, which is a form of pre-clinical, laboratory-based testing, not human patient data (retrospective or prospective). The country of origin for the data is not specified, though 3D Systems, Inc. is based in the USA.
3. Number of Experts Used to Establish Ground Truth and Qualifications
This information is not applicable as the studies described are non-clinical, cadaveric, and mechanical/design verification tests. There is no mention of experts establishing a "ground truth" in the context of diagnostic or clinical interpretation. Performance was assessed mechanically or by comparison to standard instrumentation.
4. Adjudication Method for the Test Set
This is not applicable as the studies are non-clinical and do not involve human diagnostic interpretation requiring adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not done. The document describes non-clinical and cadaveric testing, not studies involving human readers or clinical cases.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
The device is a "patient specific surgical planning and instrumentation" system. While it likely involves algorithms for design and manufacturing, the studies described are for the physical outputs (surgical guides, anatomical models) and their performance (mechanical integrity, debris, accuracy in cadavers), rather than a standalone AI algorithm's diagnostic performance. Therefore, a standalone AI performance study in the typical sense of evaluating an algorithm’s output without human-in-the-loop was not explicitly described for this submission. The focus is on the device's accuracy in physical guidance.
7. The Type of Ground Truth Used
For the non-clinical tests (mechanical integrity, debris generation, intra-/inter-designer variability), the "ground truth" would be established by engineering specifications, quality control standards, and measurement protocols.
For the cadaveric comparison testing, the "ground truth" for "Implant Alignment Accuracy" and "Guide Usability" would be based on direct measurements against design specifications and comparison to the performance of established standard instrumentation (the reference device, K151459 Cadence Total Ankle Replacement System). This is a technical, rather than a clinical, ground truth.
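The intra-/inter-designer variability analysis listed above is not described further. One plausible way to run such a check (an assumption for illustration, not 3D Systems' documented protocol) is to have several designers generate a guide from the same CT data and compare a key output dimension across them:

```python
import numpy as np

# Hypothetical inter-designer variability check: the same patient CT is
# processed by several designers, and a key output parameter of the resulting
# guide (here, a resection-slot angle in degrees) is compared across them.
slot_angle_deg = {"designer_A": 89.6, "designer_B": 90.1, "designer_C": 89.9}  # example values
allowed_spread_deg = 1.0  # assumed acceptance limit, not taken from the 510(k)

angles = np.array(list(slot_angle_deg.values()))
spread = angles.max() - angles.min()
verdict = "PASS" if spread <= allowed_spread_deg else "FAIL"
print(f"angles: {angles}, spread: {spread:.2f} deg -> {verdict} (limit {allowed_spread_deg} deg)")
```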
8. The Sample Size for the Training Set
The document does not describe the use of a "training set" in the context of machine learning or AI algorithm development. The device is a patient-specific instrument system, and its design is CADD-based. If there's an underlying AI component that uses a training set, this information is not provided in the summary.
9. How the Ground Truth for the Training Set Was Established
As no training set is described for an AI algorithm, this information is not applicable based on the provided document. If there are proprietary algorithms involved in the patient-specific design, the "training" data (if any) and its ground truth would be part of the manufacturer's internal development process, not typically disclosed in this level of detail in a 510(k) summary focused on the final product's performance.
(287 days)
3D Systems, Inc.
The VSP PEEK Cranial Implant is indicated for use to fill a bony void or defect area in the cranial skeleton of patients 21 years of age and older.
The VSP PEEK Cranial Implants are non-load bearing, single use devices with the following characteristics:
- The implants are designed individually for each patient to correct defects in cranial bone.
- The implants are 3D printed using material extrusion technology.
- The implants are manufactured using Polyetheretherketone (PEEK) implantable grade filament (i.e., Evonik VESTAKEEP® i4-3DF).
- The VSP PEEK Cranial Implants are designed using CT imaging data and surgeon input.
- The implant size ranges from 35 mm x 35 mm to 150 mm x 150 mm, and its thickness is constant, ranging from 3 mm to 6 mm with a nominal thickness of 3 mm.
- If minor intra-operative adjustments are required, the implants can be modified by the surgeon with standard surgical burrs.
- The implants are fixed to the native bone using commercially available cranial fixation screws and/or systems.
- The implants are provided non-sterile for sterilization prior to implantation.
The provided text describes the 510(k) premarket notification for the "VSP PEEK Cranial Implant," which is a Class II device intended to fill bony voids or defects in the cranial skeleton. The information pertains to a medical device's regulatory clearance rather than an AI/ML-driven device, so many of the requested criteria related to AI acceptance, ground truth, and human reader studies are not applicable.
However, I can extract the relevant information regarding the device's acceptance criteria and the studies performed to demonstrate its performance.
Here's a breakdown based on the provided document:
1. Acceptance Criteria and Reported Device Performance (Table)
There are two main categories of acceptance criteria and performance reporting mentioned: Biocompatibility and Performance Bench Testing.
Criteria Category | Acceptance Criteria | Reported Device Performance/Results |
---|---|---|
Biocompatibility Testing | Compliance with ISO-10993 ("Biological Evaluation of Medical Devices Part 1: Evaluation and Testing") and specific standards for each test. | All tests found to be within acceptance criteria described in the standards. |
Chemical Characterization & Toxicological Evaluation (ISO 10993-18 & 17) | Acceptable Margin of Safety for all reported extractable substances. | Acceptable Margin of Safety for all reported extractable substances. |
Cytotoxicity (ISO 10993-5) | Dehydrogenase activity of cell cultures treated with the test sample, compared to control, within acceptable levels. | Dehydrogenase activity of treated cultures was compared to control (implied: acceptance criteria met). |
Sensitization (ISO 10993-10) | Non-sensitizing. | Non-sensitizing. |
Irritation (ISO 10993-23) | Non-irritating. | Non-irritating. |
Pyrogenicity (ISO 10993-11) | Non-pyrogenic. | Non-pyrogenic. |
Acute Systemic Toxicity (ISO 10993-11) | Non-toxic. | Non-toxic. |
Implantation effects (ISO 10993-6) | No unexpected results. | No unexpected results (28 & 90 day follow-up in rat calvarial bone). |
Genotoxicity (ISO/TR 10993-33) | Non-mutagenic for Reverse Mutation Assay and In vitro Mammalian Cell Gene Mutation Assay. | Non-Mutagenic for both assays. |
Performance Bench Testing | Acceptable mechanical performance following sterilization or when used with common osteosynthesis fixation systems. | All samples passed the acceptance criteria concerning static compression and dynamic impact testing. |
Mechanical Performance Testing | All samples pass acceptance criteria (specific criteria not detailed but implied from results statement). | All samples passed the acceptance criteria. |
Anatomical Fit Testing | Acceptable size accuracy and usability to surgeons. | The size accuracy and usability of the VSP PEEK Cranial Implant was acceptable to surgeons. |
Verification and Validation Testing | Compliance with technical and biomechanical specifications. | All samples compliant with specifications. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify exact sample sizes (e.g., number of implants or test specimens) for each test. It uses phrases like "All samples" and "Several implantations."
- Data Provenance: The data is generated through laboratory bench testing and pre-clinical animal studies (rat model for implantation effects). The data origin is not specified by country, but it relates to the manufacturing and testing done by "3D Systems, Inc." located in Littleton, CO, USA. The studies are prospective in the sense that they were conducted for the purpose of this 510(k) submission.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Experts
- Experts: For the Anatomical Fit Testing, "surgeons" were involved in performing simulated surgical procedures and reviewing the results.
- Qualifications: The general qualification is "surgeons." No specific number or detailed qualifications (e.g., years of experience, subspecialty) are provided in this summary.
- Ground Truth: For the "anatomical fit," the ground truth was established by the subjective assessment of "surgeons" performing simulated implantations on anatomical models and completing a questionnaire.
4. Adjudication Method for the Test Set
- For the anatomical fit, the document states "reviewed by each surgeon." This suggests individual assessments, but it doesn't specify a formal adjudication method (e.g., consensus, majority vote) if multiple surgeons were involved. It can be inferred that simple agreement among the involved surgeons was sufficient for the "acceptable" conclusion.
- For other analytical and mechanical tests, adjudication would be based on predefined quantitative thresholds or qualitative observations against standards.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was not done. This type of study is primarily relevant for diagnostic devices that involve interpretation by human readers (e.g., radiologists interpreting images). The VSP PEEK Cranial Implant is a structural implant, and its performance evaluation does not involve differential human interpretation of medical images.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Not applicable directly. This question primarily applies to AI/ML software devices. The "VSP PEEK Cranial Implant" is a physical device. While its design involves "CT imaging data and surgeon input," implying a design process that might be aided by software, the performance evaluation described here is for the physical implant itself, not a separate AI algorithm for image analysis or diagnosis. The "standalone" performance here refers to the device's intrinsic mechanical and biological properties.
7. The Type of Ground Truth Used
- Bench Testing Standards & Expert Consensus/Subjective Assessment:
- For biocompatibility, the ground truth is established by international standards (ISO 10993 series) and the measured physical/chemical properties or biological responses compared against the acceptance criteria defined by these standards.
- For mechanical performance, the ground truth is established by literature research (as "there is no industry accepted standard governing mechanical testing for non-load bearing implantable prosthetic plates") and internal technical specifications, and the device's ability to withstand forces.
- For anatomical fit, the "ground truth" is based on the subjective assessment and "acceptance" by the surgeons performing the simulated implantations.
8. The Sample Size for the Training Set
- Not applicable. This device is a physical implant, not an AI/ML algorithm that requires a training set. The design of each implant is "individually for each patient" using their specific CT imaging data and surgeon input, which is a custom manufacturing process rather than a machine learning training process.
9. How the Ground Truth for the Training Set Was Established
- Not applicable. As there is no AI/ML training set, there is no ground truth to establish for such a set.
(30 days)
3D Systems, Inc.
The Vantage PSI System is intended to be used as patient specific surgical planning and instrumentation to assist in the positioning of total ankle replacement components intra-operatively, and in guiding bone cutting. The Vantage PSI System is intended for use with Exactech's Vantage Total Ankle System and its cleared indications for use.
3D Systems' Vantage PSI System consists of patient-specific guides created to fit the contours of the patient's distal tibial and proximal talar anatomy. The guides and models are designed and manufactured from patient imaging data (CT) and are made from biocompatible nylon. The surgical guides, in combination with Exactech Vantage Total Ankle reusable instruments, facilitate the positioning of Vantage Total Ankle Implants. 3D Systems' Vantage PSI System produces a variety of patient specific outputs including surgical guides, anatomic models, and case reports.
Here's an analysis of the acceptance criteria and supporting study for the Vantage PSI System, based on the provided FDA 510(k) summary:
This device is a patient-specific surgical guide system, not an AI device. Therefore, many of the typical AI/ML-specific questions regarding ground truth, expert consensus, and multi-reader studies are not applicable in the way they would be for an AI diagnostic or prognostic tool. The "performance" here relates to its accuracy in guiding surgical procedures compared to existing instrumentation.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria | Reported Device Performance |
---|---|---|
Functional Equivalence | Ability to assist in positioning total ankle replacement components intra-operatively. | Shown to be substantially equivalent to Exactech instrumentation. |
Functional Equivalence | Ability to guide bone cutting. | Shown to be substantially equivalent to Exactech instrumentation. |
Accuracy | Accuracy in guiding surgical procedures. | Demonstrated sufficient accuracy. |
Manufacturing Material | Guides and models made from biocompatible nylon. | Manufactured using SLS technology with DuraForm® ProX PA (polyamide) material. (This meets the biocompatibility implied by use in surgery). |
Device Compatibility | Intended for use with Exactech's Vantage Total Ankle System. | Explicitly stated for use with Exactech's Vantage Total Ankle System. |
Design Revisions (Impact) | Additional fixation options, corner drill features, optional decoupled talus guide do not introduce technological differences. | The changes "are not technologically different from the predicate Vantage PSI System." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated as a number of "patients" or "cases." The testing was described as "Cadaveric comparison testing." This implies a limited number of cadavers were used.
- Data Provenance: Cadaveric. Specific country of origin is not mentioned. It is a prospective study in the sense that the test was conducted specifically for this submission.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- This is not applicable in the traditional sense for this device. The "ground truth" for a surgical guide system is its ability to accurately direct cuts and component placement. The comparison was to an existing instrumentation system (Exactech instrumentation), which serves as the benchmark. The "experts" would be the surgeons or engineers performing the cadaveric tests and assessing the outcomes. Their qualifications are not specified in this summary.
4. Adjudication Method for the Test Set
- Not explicitly stated. Given it's a comparison of mechanical accuracy and functionality on cadavers, it's likely measurements were taken and compared directly, rather than requiring a complex expert adjudication process for image interpretation.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
- This is not applicable as the Vantage PSI System is a physical surgical guide system, not an AI/ML diagnostic or prognostic tool for human readers.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- This is not applicable as the Vantage PSI System is a physical surgical guide intended for human use in surgery. Its "performance" is inherently human-in-the-loop.
7. The Type of Ground Truth Used
- The "ground truth" was established by comparing the subject device's guidance and resulting anatomical modifications (e.g., bone cuts, component placement) to those produced by Exactech's established instrumentation system. This comparison was done on cadavers, implying direct measurement or visual assessment of the outcome of the guided procedure.
8. The Sample Size for the Training Set
- This is not applicable. The Vantage PSI System is a custom-manufactured guide based on a patient's individual CT scan. There isn't a "training set" in the machine learning sense for the device itself. The design principles and manufacturing process might be informed by historical data or engineering principles, but not a "training set" for an algorithm.
9. How the Ground Truth for the Training Set Was Established
- This is not applicable for the same reasons as #8.
(255 days)
3D Systems, Inc.
The D2P software is intended for use as a software interface and image segmentation system for the transfer of DICOM imaging information from a medical scanner to an output file. It is also intended as pre-operative software for surgical planning. For this purpose, the output file may be used to produce a physical replica. The physical replica is intended for adjunctive use along with other diagnostic tools and expert clinical judgement for diagnosis, patient management, and/or treatment selection of cardiovascular, craniofacial, genitourinary, neurological, and/or musculoskeletal applications.
The D2P software is a stand-alone modular software package that provides advanced visualization of DICOM imaging data. This modular package includes, but is not limited to the following functions:
- DICOM viewer and analysis
- Automated segmentation
- Editing and pre-printing
- Seamless integration with 3D Systems printers
- Seamless integration with 3D Systems software packages
- Seamless integration with Virtual Reality visualization for non-diagnostic use.
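The function list above describes a DICOM-in, printable-surface-out workflow. As a rough sketch of such a pipeline (not 3D Systems' actual implementation), the example below loads a CT series, applies a simple threshold segmentation, and extracts a surface mesh that could be exported for printing; the directory path, threshold value, and choice of libraries (pydicom, scikit-image) are assumptions.

```python
import glob
import numpy as np
import pydicom
from skimage import measure

def load_ct_volume(dicom_dir):
    """Read a directory of CT slices into a 3D intensity volume (illustrative)."""
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order slices by z position
    return np.stack([s.pixel_array for s in slices], axis=0)

def segment_and_mesh(volume, threshold=300):
    """Simple threshold segmentation (bone-like intensities), then a
    marching-cubes surface suitable for export to a mesh file."""
    mask = (volume > threshold).astype(np.uint8)
    verts, faces, normals, values = measure.marching_cubes(mask, level=0.5)
    return verts, faces

if __name__ == "__main__":
    vol = load_ct_volume("ct_series")          # hypothetical input directory
    verts, faces = segment_and_mesh(vol)
    print(f"surface mesh: {len(verts)} vertices, {len(faces)} faces")
```

A real clinical pipeline would also apply the DICOM rescale slope/intercept, resample to isotropic voxels, and clean and smooth the mesh before export; none of that detail is given in the 510(k) summary.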
The provided text does not contain detailed information regarding acceptance criteria, specific study designs, or performance metrics in a structured format that directly addresses all the requested points. The document summarizes the device, its intended use, and its equivalence to a predicate device for FDA 510(k) clearance.
However, based on the limited information available, here's what can be extracted and inferred:
1. A table of acceptance criteria and the reported device performance:
The document states: "All performance testing... showed conformity to pre-established specifications and acceptance criteria." and "A measurement accuracy and calculation 3D study, usability study, and decimation study were performed and confirmed to be within specification." It also mentions "Validation of printing of physical replicas was performed and demonstrated that anatomic models... can be printed accurately when using any of the compatible 3D printers and materials."
Without specific numerical thresholds or target values, a detailed table cannot be created. However, the categories of acceptance criteria and the qualitative reported performance are:
Acceptance Criteria Category | Reported Device Performance |
---|---|
Measurement Accuracy & Calculation 3D | Confirmed to be within specification |
Usability | Confirmed to be within specification |
Decimation | Confirmed to be within specification |
Accuracy of Physical Replica Printing | Anatomic models can be printed accurately on compatible 3D printers and materials for specified applications. |
2. Sample size used for the test set and the data provenance:
This information is not provided in the text. There is no mention of sample size for any test set or the origin (country, retrospective/prospective) of the data used for validation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not provided in the text. The document does not detail how ground truth was established for any validation studies.
4. Adjudication method for the test set:
This information is not provided in the text.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
The document describes the D2P software as an "image segmentation system," "pre-operative software for surgical planning," and a tool for "transfer of DICOM imaging information." It also mentions the "Incorporation of a deep learning neural network used to create the prediction of the segmentation."
However, there is no mention of an MRMC comparative effectiveness study involving human readers with and without AI assistance, nor any effect size related to human reader improvement. The focus appears to be on the performance of the software itself and the accuracy of physical replicas.
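The document also does not say how the deep-learning segmentation output was scored. A common choice for that kind of comparison (an assumption here, not something stated in the 510(k)) is an overlap metric such as the Dice similarity coefficient between a predicted mask and a reference mask:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.

    Illustrative only: the summary does not state which metric, if any,
    was used to evaluate the segmentation predictions.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Tiny synthetic example.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True   # 4 voxels
ref = np.zeros((4, 4), dtype=bool); ref[1:3, 1:4] = True     # 6 voxels
print(f"Dice = {dice_coefficient(pred, ref):.3f}")            # 2*4/(4+6) = 0.800
```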
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Yes, the testing described appears to be primarily standalone performance testing of the D2P software and its ability to produce accurate segmented models and physical replicas. The statement "All performance testing... showed conformity to pre-established specifications and acceptance criteria" without mention of human interaction suggests standalone evaluation.
7. The type of ground truth used:
This information is not explicitly stated in the text. While it mentions "measurement accuracy," "usability," and "accuracy of physical replicas," it does not specify the method used to establish the gold standard or ground truth for these measurements (e.g., expert consensus, pathology, outcomes data, etc.). It can be inferred that for "measurement accuracy" and "accuracy of physical replicas," there would be established objective standards or measurements used as ground truth.
8. The sample size for the training set:
This information is not provided in the text. The document mentions the "Incorporation of a deep learning neural network," which implies a training set was used, but its size is not disclosed.
9. How the ground truth for the training set was established:
This information is not provided in the text. While a deep learning network was used, the method for establishing the ground truth for its training data is not discussed.
(224 days)
3D Systems, Inc.
The VSP® Orthopedics System is intended to be used as a surgical instrument to assist in preoperative planning and/or in guiding the marking of bone and/or in guiding surgical instruments in non-acute, non-joint replacing osteotomies for adult patients in the distal femur, tibia, and non-sacrum pelvis.
The VSP® Orthopedics System is intended to assist a surgeon with pre-operative planning and transfer of the pre-operative plan to the surgery in orthopedic procedures. The system contains several physical and digital outputs including patient-specific anatomical models, and guides (physical outputs); and patient-specific surgical plans and digital files (digital or documentation outputs).
Outputs of the VSP® Orthopedics System are designed with physician input and reviewed by the physician prior to finalization and distribution.
The VSP® Orthopedics System also contains Stainless Steel Drill Inserts (VSP® Orthopedics System Accessories) which are intended to be used by the physician to guide drilling activities during the surgical procedure. The inserts fit into a standard hole in the cutting / drill guides and can be used across all VSP® Orthopedics System guides and templates.
The VSP® Orthopedics System is intended to assist a surgeon with pre-operative planning and transfer of the pre-operative plan to the surgery in orthopedic procedures.
Here's a breakdown of the acceptance criteria and study information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a specific table of numerical acceptance criteria or reported device performance metrics like sensitivity, specificity, or accuracy for the overall system. Instead, it states that:
"All design, process, and other verification and validation testing, which were conducted as a result of risk analyses and design impact assessments, showed conformity to pre-established specifications and acceptance criteria. The acceptance criteria was established in support of device performance, and testing demonstrated substantial equivalence of the system to the predicate device."
This indicates that the acceptance criteria were qualitative and focused on demonstrated compliance with pre-established specifications and substantial equivalence to the predicate device.
The "Performance Testing" section lists the types of tests performed:
Test Type | Description |
---|---|
Equipment Qualification | IQ/OQ/PQ |
Process Qualification | PQ |
Mechanical Testing | Detailed results not provided in the summary |
Software Validation | Detailed results not provided in the summary |
Cleaning Validation | Detailed results not provided in the summary |
Sterilization Validation | Detailed results not provided in the summary |
Biocompatibility Validation | Detailed results not provided in the summary |
Packaging Validation | Detailed results not provided in the summary |
Shelf Life Validation | Detailed results not provided in the summary |
Drop Test Validation | Detailed results not provided in the summary |
Debris Validation | Detailed results not provided in the summary |
Cadaver Study | Performed to prove the device performs according to its intended use |
2. Sample size used for the test set and the data provenance
The document specifies "Bench and cadaveric testing was conducted" and includes "Cadaver Study" as a test type. However, it does not provide any details on the sample size (number of cadavers or specific test scenarios) used for the cadaver study or any other test set.
The data provenance is not explicitly mentioned (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document states: "Outputs of the VSP® Orthopedics System are designed with physician input and reviewed by the physician prior to finalization and distribution." While this indicates physician involvement in the design and review process, it does not specify the number of experts used to establish ground truth for a test set or their specific qualifications (e.g., "radiologist with 10 years of experience").
4. Adjudication method for the test set
The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for a test set.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any effect size for human readers improving with AI assistance. The system is described as a "surgical instrument to assist in pre-operative planning and/or in guiding..." which implies human-in-the-loop, but no comparative study to measure assistance effectiveness is detailed in this summary.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
The document does not explicitly state if a standalone (algorithm only) performance study was conducted. The description of the system's outputs ("patient-specific anatomical models, and guides (physical outputs); and patient-specific surgical plans and digital files (digital or documentation outputs)") suggests that the system generates these outputs, which are then reviewed by a physician.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document implies that the ground truth for the system's performance, particularly in the cadaver study, would be based on direct observation of surgical guidance and accuracy against the pre-operative plan. Since the system assists in planning and guiding, the "ground truth" would likely be the surgical outcome or accuracy of instrument placement as verified in the cadaveric setting. However, the exact nature of this "ground truth" and how it was established is not detailed. It mentions "physician input" and "reviewed by the physician," suggesting expert judgment forms a basis, but not explicitly as a ground truth for a test set.
8. The sample size for the training set
The document does not provide any information regarding a training set or its sample size. This summary focuses on verification and validation testing, not the development or training of an AI component, if one exists within the system's planning features.
9. How the ground truth for the training set was established
Since no information on a training set is provided, there is also no information on how its ground truth was established.
(188 days)
3D Systems, Inc.
The D2P software is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner to an output file. It is also intended as pre-operative software for surgical planning.
3D printed models generated from the output file are meant for visual, non-diagnostic use.
The D2P software is a stand-alone modular software package that allows easy to use and quick digital 3D model preparation for printing or use by third party applications. The software is aimed at usage by medical staff, technicians, nurses, researchers or lab technicians that wish to create patient specific digital anatomical models for variety of uses such as training, education, and pre-operative surgical planning. The patient specific digital anatomical models may be further used as an input to a 3D printer to create physical models for visual, non-diagnostic use. This modular package includes, but is not limited to the following functions:
- DICOM viewer and analysis
- Automated segmentation
- Editing and pre-printing
- Seamless integration with 3D Systems printers
- Seamless integration with 3D Systems software packages
The provided documentation, K161841 for the D2P software, does not contain detailed information regarding the specific acceptance criteria and the comprehensive study proof requested in the prompt. The document primarily focuses on the regulatory submission process, demonstrating substantial equivalence to a predicate device (Mimics, Materialise N.V., K073468).
The "Performance Data" section mentions several studies (Software Verification and Validation, Phantom Study, Usability Study - System Measurements, Usability Study – Segmentation, Segmentation Study) and states that "all measurements fell within the set acceptance criteria" or "showed similarity in all models." However, it does not explicitly list the acceptance criteria or provide the raw performance metrics to prove they were met.
Therefore, I cannot fully complete the requested table and answer all questions based solely on the provided text. I will, however, extract all available information related to performance and study design.
Here's a breakdown of what can be extracted and what information is missing:
Information NOT available in the provided text:
- Explicit Acceptance Criteria Values: The exact numerical values for the acceptance criteria for any of the studies (e.g., specific error margins for measurements, quantitative metrics for segmentation similarity).
- Reported Device Performance Values: The specific numerical performance metrics achieved by the D2P software in any of the studies (e.g., actual measurement deviations, Dice coefficients for segmentation).
- Sample Size for the Test Set: While studies are mentioned, the number of cases or subjects in the test sets for the Phantom, Usability, or Segmentation studies is not specified.
- Data Provenance (Country of Origin, Retrospective/Prospective): This information is not provided for any of the studies.
- Number of Experts and Qualifications for Ground Truth: No details are given about how many experts were involved in establishing ground truth (if applicable) or their qualifications.
- Adjudication Method: Not mentioned.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: The document doesn't describe an MRMC study comparing human readers with and without AI assistance, nor does it provide an effect size if one were done. The studies mentioned focus on the device's technical performance and user variability.
- Standalone (Algorithm-only) Performance: While the D2P software is a "stand-alone modular software package," the details of the performance studies don't explicitly differentiate between algorithm-only performance and human-in-the-loop performance. The Usability Studies do involve users, suggesting human interaction.
- Type of Ground Truth Used (Pathology, Outcomes Data, etc.): For the Phantom Study, the ground truth is the "physical phantom model." For segmentation and usability studies, it appears to be based on comparisons between the subject device, predicate device, and/or inter/intra-user variability, but the ultimate "ground truth" (e.g., expert consensus on clinical cases, pathological confirmation) is not specified.
- Sample Size for the Training Set: No information is provided about the training set or how the algorithms within D2P were trained.
- Ground Truth Establishment for Training Set: No information is provided about how ground truth for a training set (if one existed) was established.
Information available or inferable from the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Performance Metric/Study | Acceptance Criteria (Stated as met) | Reported Device Performance (Stated as met) |
---|---|---|
Phantom Study | Not explicitly quantified (e.g., "all measurements fell within the set acceptance criteria") | Not explicitly quantified (e.g., "all measurements fell within the set acceptance criteria") |
Usability Study – System Measurements (Inter/Intra-user variability) | Not explicitly quantified (e.g., "all measurements fell within the set acceptance criteria") | Not explicitly quantified (e.g., "all measurements fell within the set acceptance criteria") |
Usability Study – Segmentation | Not explicitly quantified (e.g., "showed similarity in all models") | Not explicitly quantified (e.g., "showed similarity in all models") |
Segmentation Study | Not explicitly quantified (e.g., "showed similarity in all models") | Not explicitly quantified (e.g., "showed similarity in all models") |
2. Sample size used for the test set and the data provenance:
- Sample Size for Test Set: Not specified for any of the studies (Phantom, Usability, Segmentation).
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The phantom study used a physical phantom model. For patient data in segmentation/usability studies, the provenance is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No evidence of an MRMC comparative effectiveness study of human readers with vs. without AI assistance is detailed in this document. The Usability Studies assessed inter/intra-user variability of measurements and segmentation similarity, indicating human interaction with the device, but not a comparative study demonstrating improvement in reader performance due to the AI.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The D2P software is described as a "stand-alone modular software package." The "Software Verification and Validation Testing" and "Segmentation Study" imply assessment of the software's inherent capabilities. However, the presence of "Usability Studies" involving human users suggests that human-in-the-loop performance was also part of the evaluation, but it's not explicitly segmented as "algorithm only" vs. "human-in-the-loop with AI assistance." The document doesn't provide distinct results for an "algorithm only" performance metric.
7. The type of ground truth used:
- Phantom Study: The ground truth was the "physical phantom model." Comparisons were made between segmentations created by the subject and predicate device from a CT scan of this physical phantom.
- Usability Study – System Measurements: Ground truth appears to be based on comparing inter and intra-user variability in measurements taken within the subject device. The reference for what constitutes "ground truth" for these measurements (e.g., true anatomical measures) is not explicitly stated beyond comparing user consistency.
- Usability Study – Segmentation / Segmentation Study: Ground truth for these studies is implied by "comparison showed similarity in all models" or comparison between subject and predicate devices. This suggests a relative ground truth (e.g., consistency across methods/users) rather than an absolute ground truth like pathology.
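For the Phantom Study in particular, the natural reading is that measurements taken in the software were compared against the known dimensions of the physical phantom. A minimal sketch of that kind of check follows; the feature names, nominal dimensions, and tolerance are illustrative assumptions, since the actual acceptance criteria are not disclosed.

```python
# Hypothetical phantom-measurement check: software-reported dimensions are
# compared against the known (nominal) dimensions of a physical phantom.
nominal_mm = {"sphere_diameter": 30.0, "bar_length": 100.0, "wall_thickness": 2.5}
measured_mm = {"sphere_diameter": 30.2, "bar_length": 99.7, "wall_thickness": 2.6}  # example values
tolerance_mm = 0.5  # assumed acceptance limit, not taken from the submission

for feature, nominal in nominal_mm.items():
    error = measured_mm[feature] - nominal
    verdict = "PASS" if abs(error) <= tolerance_mm else "FAIL"
    print(f"{feature}: nominal {nominal:.1f} mm, measured {measured_mm[feature]:.1f} mm, "
          f"error {error:+.1f} mm -> {verdict}")
```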
8. The sample size for the training set:
- Not specified. The document does not describe the specific training of machine learning algorithms, only the software's intended use and performance validation.
9. How the ground truth for the training set was established:
- Not specified, as information about a training set is not provided.
(498 days)
3D Systems, Inc.
The 3D Systems, Inc. VSP Cranial System is intended for use as a collection of software to provide image segmentation and transfer of imaging information from a CT based medical scanner. The imaging information is processed by the VSP Cranial System, and the result is an output data file that may then be provided as digital models or used as input in the production of physical outputs including anatomical models, templates, and surgical guides for use in the marking of cranial bone in cranial surgery.
The 3D Systems VSP® Cranial System is a collection of Commercial Off-The-Shelf (COTS) software, third party medical device software, and custom software intended to provide a variety of outputs to support cranial reconstructive surgery. The system uses CT based imaging data of the patient's anatomy with input from the physician, to manipulate original patient images for planning and executing surgery. The system produces a variety of patient specific outputs including, anatomical models (physical and digital), surgical templates / guides, and patient specific case reports.
Here's a breakdown of the acceptance criteria and the study information for the VSP Cranial System, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
Test | Acceptance Criteria | Reported Device Performance |
---|---|---|
Leveraged from Predicate Device | ||
Biocompatibility testing | (Implied: Meets established biocompatibility standards for medical devices) | Not explicitly stated for this device, but leveraged from predicate. |
Cleaning and sterilization testing | (Implied: Demonstrates effective cleaning and sterilization methods) | Not explicitly stated for this device, but leveraged from predicate. |
Software verification and validation | (Implied: Software functions as intended, meets specified requirements, and is safe and effective) | Not explicitly stated for this device, but leveraged from predicate. |
Performance testing (Process validation, Simulated Use, Mechanical testing) | (Implied: All aspects of the manufacturing process, simulated use in planning, and mechanical strength/durability meet pre-established specifications) | Not explicitly stated for this device, but leveraged from predicate. |
Specific to VSP Cranial System | ||
Packaging Validation | Packaging and labels tested according to ASTM D4577, ASTM D642 (Method A), ASTM D4728, ASTM D3580, ASTM D5276, ASTM D6179, ASTM D880, ASTM D6179, ASTM D6653, and National Motor Freight Classification Rule 180. (Implied: Successfully withstands shipping and handling without compromise to product integrity or labeling) | All packaging and labeling met the required acceptance criteria. |
Sterilization Compatibility | VSP® Cranial System outputs subjected to a single sterilization cycle and visually/dimensionally inspected to ensure compatibility with the validated sterilization method. (Implied: Maintains visual and dimensional integrity after sterilization.) | All acceptance criteria were met. |
Dimensional Analysis | Sizes and shapes of VSP® Cranial System templates and guides (selected to challenge the system) were dimensionally inspected to verify conformance to the product requirements. (Implied: Dimensions are within specified tolerances.) | All acceptance criteria were met. |
Bioburden | Bioburden testing conducted on VSP® Cranial System templates, guides, anatomical models, and metal accessories per ISO 11737-1, USP and USP . (Implied: Bioburden levels are within acceptable limits for sterilization.) | All acceptance criteria were met. |
Pyrogenicity testing | Pyrogenicity testing conducted on VSP® Cranial System templates, guides, anatomical models, and metal accessories per AAMI ST72, USP , and USP . (Implied: Endotoxin levels are below a specified threshold.) | All samples met the acceptance criteria of ≤ 2.15 EU/device. |
2. Sample Size Used for the Test Set and Data Provenance:
The document details performance testing that was largely leveraged from the predicate device (VSP® System, K133907) because the planning/design process, materials, manufacturing process, cleaning methods, and sterilization methods are identical.
For the tests specific to the VSP Cranial System:
- Sample Size: Not explicitly stated but "sizes and shapes of VSP® Cranial System templates and guides were selected to challenge the system" for Dimensional Analysis. For bioburden and pyrogenicity, "VSP® Cranial System templates, guides, anatomical models, and metal accessories" were tested.
- Data Provenance: This appears to be prospective testing conducted by 3D Systems for the specific VSP Cranial System device. The country of origin of the data is not specified beyond being generated by the applicant, 3D Systems, Inc., which is based in Littleton, Colorado, USA.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
This information is not provided in the document. The testing described focuses on engineering, material, and sterilization validation rather than clinical performance or diagnostic accuracy that would require expert-established ground truth for a test set.
4. Adjudication Method for the Test Set:
This information is not provided in the document. As mentioned above, the tests are primarily engineering and material validations, which don't typically involve an adjudication method in the way a clinical study for diagnostic accuracy would.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
No, an MRMC comparative effectiveness study was not done. The study described is not a clinical study involving human readers or comparative effectiveness of AI assistance. It's a technical performance and validation study to demonstrate substantial equivalence to a predicate device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
The device is described as a "collection of software" that provides image segmentation and processing, resulting in an "output data file that may then be provided as digital models or used as input in the production of physical outputs." The system is operated by "trained 3D Systems employees," and the "physician does not directly input information" but provides "clinical input and review."
While there is "software verification and validation testing" (leveraged from the predicate), the document does not explicitly describe a standalone performance study for the algorithm's output in terms of accuracy against a ground truth independent of the human-in-the-loop (3D Systems employees and physician review) process. The nature of this device (planning tools for customized physical outputs) means the "human-in-the-loop" is integral to its intended use and output generation.
7. The Type of Ground Truth Used:
For the specific tests performed:
- Packaging Validation: Ground truth is defined by the technical specifications of the ASTM and National Motor Freight Classification standards.
- Sterilization Compatibility: Ground truth is visual and dimensional integrity after a validated sterilization cycle.
- Dimensional Analysis: Ground truth is the "product requirements" and specified dimensional tolerances.
- Bioburden: Ground truth is defined by the acceptable bioburden limits specified in ISO 11737-1, USP , and USP .
- Pyrogenicity: Ground truth is the acceptance criteria of ≤ 2.15 EU/device as per AAMI ST72, USP , and USP .
For the overall system, the "ground truth" for its functionality is implicitly linked to the predicate device's established performance and safety/effectiveness, as the core technologies and processes are leveraged and deemed "substantially equivalent."
8. The Sample Size for the Training Set:
This information is not applicable/not provided. The VSP Cranial System is not described as a machine learning or AI algorithm that requires a "training set" in the conventional sense for developing predictive models. It's a software system for image segmentation and design of physical outputs based on CT data.
9. How the Ground Truth for the Training Set was Established:
This information is not applicable/not provided for the same reasons as point 8.