The D2P software is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner to an output file. It is also intended as pre-operative software for surgical planning.
3D printed models generated from the output file are meant for visual, non-diagnostic use.
The D2P software is a stand-alone modular software package that allows easy, quick preparation of digital 3D models for printing or for use by third-party applications. The software is intended for use by medical staff, technicians, nurses, researchers, or lab technicians who wish to create patient-specific digital anatomical models for a variety of uses such as training, education, and pre-operative surgical planning. The patient-specific digital anatomical models may be further used as input to a 3D printer to create physical models for visual, non-diagnostic use. This modular package includes, but is not limited to, the following functions:
- DICOM viewer and analysis
- Automated segmentation
- Editing and pre-printing
- Seamless integration with 3D Systems printers
- Seamless integration with 3D Systems software packages
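For illustration only, here is a minimal sketch of what threshold-based "automated segmentation" of CT data can look like. The threshold value and the function below are hypothetical and are not drawn from the D2P documentation, which does not describe its algorithms:

```python
# Illustrative only: a Hounsfield-unit threshold pass over a tiny CT-like
# volume, producing a binary mask of candidate bone voxels. D2P's actual
# segmentation algorithms are not described in the 510(k) summary.
BONE_THRESHOLD_HU = 300  # commonly used lower bound for bone in CT

def threshold_segment(volume_hu, threshold=BONE_THRESHOLD_HU):
    """Return a same-shaped nested list: 1 where HU >= threshold, else 0."""
    return [[[1 if voxel >= threshold else 0 for voxel in row]
             for row in slice_]
            for slice_ in volume_hu]

# A 2-slice, 2x3 toy volume in Hounsfield units (air ~ -1000, soft tissue ~ 40):
volume = [
    [[-1000, 40, 350], [20, 900, -500]],
    [[310, 30, -1000], [400, 50, 10]],
]
mask = threshold_segment(volume)
print(mask)  # → [[[0, 0, 1], [0, 1, 0]], [[1, 0, 0], [1, 0, 0]]]
```

In a real pipeline, a binary mask like this would then be converted to a surface mesh (e.g., an STL file) before printing; that step is omitted here.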
The provided documentation, K161841 for the D2P software, does not contain detailed information regarding the specific acceptance criteria or the comprehensive study evidence requested in the prompt. The document focuses primarily on the regulatory submission process, demonstrating substantial equivalence to a predicate device (Mimics, Materialise N.V., K073468).
The "Performance Data" section mentions several studies (Software Verification and Validation, Phantom Study, Usability Study - System Measurements, Usability Study – Segmentation, Segmentation Study) and states that "all measurements fell within the set acceptance criteria" or "showed similarity in all models." However, it does not explicitly list the acceptance criteria or provide the raw performance metrics to prove they were met.
Therefore, I cannot fully complete the requested table and answer all questions based solely on the provided text. I will, however, extract all available information related to performance and study design.
Here's a breakdown of what can be extracted and what information is missing:
Information NOT available in the provided text:
- Explicit Acceptance Criteria Values: The exact numerical values for the acceptance criteria for any of the studies (e.g., specific error margins for measurements, quantitative metrics for segmentation similarity).
- Reported Device Performance Values: The specific numerical performance metrics achieved by the D2P software in any of the studies (e.g., actual measurement deviations, Dice coefficients for segmentation).
- Sample Size for the Test Set: While studies are mentioned, the number of cases or subjects in the test sets for the Phantom, Usability, or Segmentation studies is not specified.
- Data Provenance (Country of Origin, Retrospective/Prospective): This information is not provided for any of the studies.
- Number of Experts and Qualifications for Ground Truth: No details are given about how many experts were involved in establishing ground truth (if applicable) or their qualifications.
- Adjudication Method: Not mentioned.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: The document doesn't describe an MRMC study comparing human readers with and without AI assistance, nor does it provide an effect size if one were done. The studies mentioned focus on the device's technical performance and user variability.
- Standalone (Algorithm-only) Performance: While the D2P software is a "stand-alone modular software package," the details of the performance studies don't explicitly differentiate between algorithm-only performance and human-in-the-loop performance. The Usability Studies do involve users, suggesting human interaction.
- Type of Ground Truth Used (Pathology, Outcomes Data, etc.): For the Phantom Study, the ground truth is the "physical phantom model." For segmentation and usability studies, it appears to be based on comparisons between the subject device, predicate device, and/or inter/intra-user variability, but the ultimate "ground truth" (e.g., expert consensus on clinical cases, pathological confirmation) is not specified.
- Sample Size for the Training Set: No information is provided about the training set or how the algorithms within D2P were trained.
- Ground Truth Establishment for Training Set: No information is provided about how ground truth for a training set (if one existed) was established.
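To make the missing metric concrete: a Dice coefficient (the segmentation-similarity measure cited above as an example) is typically computed from two binary masks. The sketch below is illustrative only, with hypothetical toy masks; no such data appears in K161841:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat sequences of 0/1)."""
    a = [bool(v) for v in mask_a]
    b = [bool(v) for v in mask_b]
    if len(a) != len(b):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Two hypothetical 8-voxel masks agreeing on 3 of 4 foreground voxels each:
reference = [1, 1, 1, 1, 0, 0, 0, 0]
candidate = [1, 1, 1, 0, 1, 0, 0, 0]
print(dice_coefficient(reference, candidate))  # → 0.75
```

A submission reporting this kind of metric would state the acceptance threshold (e.g., a minimum Dice value) alongside the achieved values, which is exactly what the K161841 text omits.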
Information available or inferable from the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric/Study | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Phantom Study | Not stated numerically; referenced only as "set acceptance criteria" | Qualitative only: "all measurements fell within the set acceptance criteria" |
| Usability Study – System Measurements (inter/intra-user variability) | Not stated numerically; referenced only as "set acceptance criteria" | Qualitative only: "all measurements fell within the set acceptance criteria" |
| Usability Study – Segmentation | Not stated numerically | Qualitative only: "showed similarity in all models" |
| Segmentation Study | Not stated numerically | Qualitative only: "showed similarity in all models" |
2. Sample size used for the test set and the data provenance:
- Sample Size for Test Set: Not specified for any of the studies (Phantom, Usability, Segmentation).
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The phantom study used a physical phantom model. For patient data in segmentation/usability studies, the provenance is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No evidence of an MRMC comparative effectiveness study of human readers with vs. without AI assistance is detailed in this document. The Usability Studies assessed inter/intra-user variability of measurements and segmentation similarity, indicating human interaction with the device, but not a comparative study demonstrating improvement in reader performance due to the AI.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The D2P software is described as a "stand-alone modular software package." The "Software Verification and Validation Testing" and "Segmentation Study" imply assessment of the software's inherent capabilities. However, the presence of "Usability Studies" involving human users suggests that human-in-the-loop performance was also part of the evaluation, but it's not explicitly segmented as "algorithm only" vs. "human-in-the-loop with AI assistance." The document doesn't provide distinct results for an "algorithm only" performance metric.
7. The type of ground truth used:
- Phantom Study: The ground truth was the "physical phantom model." Comparisons were made between segmentations created by the subject and predicate device from a CT scan of this physical phantom.
- Usability Study – System Measurements: Ground truth appears to be based on comparing inter and intra-user variability in measurements taken within the subject device. The reference for what constitutes "ground truth" for these measurements (e.g., true anatomical measures) is not explicitly stated beyond comparing user consistency.
- Usability Study – Segmentation / Segmentation Study: Ground truth for these studies is implied by "comparison showed similarity in all models" or comparison between subject and predicate devices. This suggests a relative ground truth (e.g., consistency across methods/users) rather than an absolute ground truth like pathology.
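For context on what an inter/intra-user variability analysis typically quantifies, the sketch below computes a coefficient of variation over repeated measurements. All numbers are hypothetical; the actual acceptance criteria and metrics used for D2P are not reported:

```python
import statistics

def coefficient_of_variation(measurements):
    """Relative variability (sample stdev / mean) of repeated measurements, as a fraction."""
    mean = statistics.mean(measurements)
    if mean == 0:
        raise ValueError("mean of zero: coefficient of variation is undefined")
    return statistics.stdev(measurements) / mean

# Hypothetical repeated measurements (mm) of a single phantom feature:
# intra-user: one reader measuring three times; inter-user: three readers, one each.
intra_user = [25.1, 25.3, 25.0]
inter_user = [25.1, 24.7, 25.6]
print(round(coefficient_of_variation(intra_user), 4))
print(round(coefficient_of_variation(inter_user), 4))
```

A study of this kind would pre-specify a maximum allowable coefficient of variation (or absolute deviation) as its acceptance criterion; K161841 states only that all measurements fell within unspecified criteria.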
8. The sample size for the training set:
- Not specified. The document does not describe the specific training of machine learning algorithms, only the software's intended use and performance validation.
9. How the ground truth for the training set was established:
- Not specified, as information about a training set is not provided.
[Image: U.S. Department of Health & Human Services seal]
Food and Drug Administration
10903 New Hampshire Avenue
Document Control Center - WO66-G609
Silver Spring, MD 20993-0002
January 9, 2017
3D Systems, Inc.
c/o Ms. Kim Torluemke
VP Quality & Regulatory, Healthcare
5381 South Alkire Circle
Littleton, CO 80127
Re: K161841
Trade/Device Name: D2P
Regulation Number: 21 CFR 892.2050
Regulation Name: Picture archiving and communications system
Regulatory Class: II
Product Code: LLZ
Dated: December 23, 2016
Received: December 27, 2016
Dear Ms. Torluemke:
We have reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act. The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.
If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.
Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting (reporting of medical device-related adverse events) (21 CFR 803); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820); and if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act); 21 CFR 1000-1050.
If you desire specific advice for your device on our labeling regulation (21 CFR Part 801), please contact the Division of Industry and Consumer Education at its toll-free number (800) 638-2041 or (301) 796-7100 or at its Internet address http://www.fda.gov/MedicalDevices/ResourcesforYou/Industry/default.htm. Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR Part 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to http://www.fda.gov/MedicalDevices/Safety/ReportaProblem/default.htm for the CDRH's Office of Surveillance and Biometrics/Division of Postmarket Surveillance.

You may obtain other general information on your responsibilities under the Act from the Division of Industry and Consumer Education at its toll-free number (800) 638-2041 or (301) 796-7100 or at its Internet address http://www.fda.gov/MedicalDevices/ResourcesforYou/Industry/default.htm.
Sincerely yours,
Michael D'Hara
For
Robert Ochs, Ph.D. Director Division of Radiological Health Office of In Vitro Diagnostics and Radiological Health Center for Devices and Radiological Health
Enclosure
Indications for Use
510(k) Number (if known) K161841
Device Name D2P
Indications for Use (Describe)
The D2P software is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner to an output file. It is also intended as pre-operative software for surgical planning.
3D printed models generated from the output file are meant for visual, non-diagnostic use.
| Type of Use (Select one or both, as applicable) | |
|---|---|
| Prescription Use (Part 21 CFR 801 Subpart D) | Over-The-Counter Use (21 CFR 801 Subpart C) |
CONTINUE ON A SEPARATE PAGE IF NEEDED.
This section applies only to requirements of the Paperwork Reduction Act of 1995.
DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.
The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:
Department of Health and Human Services Food and Drug Administration Office of Chief Information Officer Paperwork Reduction Act (PRA) Staff PRAStaff@fda.hhs.gov
"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."
[Image: 3D Systems logo]
510(K) SUMMARY
1. INTRODUCTION
This document contains the 510(k) summary for the D2P software. The content of this summary is based on the requirements of 21 CFR 807.92.
2. SUBMITTER
| Name: | 3D Systems, Inc. (Simbionix) |
|---|---|
| Address: | Beit Golan, Corner of Golan and Hanegev St., Airport City, 70151, Israel; Phone: +972-3-9114444; Fax: +972-3-9114455 |
| Official Contact: | Kim Torluemke, Vice President, Quality and Regulatory, Healthcare |
| Date Prepared: | January 5, 2017 |
3. DEVICE

| | |
|---|---|
| Trade Name: | D2P |
| Common Name: | Image processing system and preoperative software for simulating/evaluating surgical treatment options. |
| Classification Name: | System, Image Processing, Radiological |
| Classification: | Class II, 21 CFR 892.2050 |
| Product Code: | LLZ |
4. PREDICATE DEVICE
The D2P software is claimed to be substantially equivalent to the following legally marketed predicate device:
- Mimics, Materialise N.V. (K073468)
5. DEVICE DESCRIPTION
The D2P software is a stand-alone modular software package that allows easy, quick preparation of digital 3D models for printing or for use by third-party applications. The software is intended for use by medical staff, technicians, nurses, researchers, or lab technicians who wish to create patient-specific digital anatomical models for a variety of uses such as training, education, and pre-operative surgical planning. The patient-specific digital anatomical models may be further used as input to a 3D printer to create physical models for visual, non-diagnostic use. This modular package includes, but is not limited to, the following functions:
- DICOM viewer and analysis
- Automated segmentation
- Editing and pre-printing
- Seamless integration with 3D Systems printers
- Seamless integration with 3D Systems software packages
6. INDICATIONS FOR USE
The D2P software is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner to an output file. It is also intended as pre-operative software for surgical planning.
3D printed models generated from the output file are meant for visual, non-diagnostic use.
The Indications for Use statement for the D2P software is nearly identical to that of the predicate device. The subtle differences do not alter the intended clinical use of the device, nor do they affect the safety and effectiveness of the device relative to the predicate. Both the subject and predicate devices have the same intended use for visualization, analysis, and segmentation of medical images and rendering of 3D objects.
7. COMPARISON OF TECHNOLOGICAL CHARACTERISTICS WITH THE PREDICATE DEVICE
The D2P software employs similar fundamental technologies as the identified predicate devices, including:
- Viewing of medical imaging data in the axial, coronal, and sagittal views
- Ability to process, review, and analyze medical imaging data
- Image transfer and manipulation via software used for the creation of a 3D object
The following technological differences exist between the subject and predicate devices:
- The inputs to the subject device are equivalent to a subset of the inputs of the predicate device
- The outputs of the subject device are equivalent to a subset of the outputs of the predicate device
8. PERFORMANCE DATA
The following performance data were provided in support of the substantial equivalence determination:
Software Verification and Validation Testing
Software verification and validation testing were conducted and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." The software for this device was considered a "moderate" level of concern, since a failure or latent flaw in the software could directly result in minor injury to the patient. Software verification and validation included:
- Verification of each independent software subsystem against defined requirements
- Verification of interfaces between software subsystems against defined interface requirements
- Validation of fully integrated system including all subsystems against overall system requirements.
Phantom Study
The purpose of the study was to evaluate, measure, and compare the correlation between a physical phantom model and segmentations created by the subject and predicate devices from a CT scan of the phantom model. Comparison between the physical phantom and both software systems revealed that all measurements fell within the set acceptance criteria.
Usability Study - System Measurements
The purpose of the study was to evaluate, measure, and compare the inter- and intra-user variability between measurements taken by multiple users in the subject device. Comparison of the inter- and intra-user measurements showed that all measurements fell within the set acceptance criteria.
Usability Study – Segmentation
The purpose of the study was to visually and quantitatively compare segmentation models created by representative users. The comparison showed similarity in all models.
Segmentation Study
The purpose of the study was to visually and quantitatively compare segmentation models created by both the subject and predicate devices. The comparison showed similarity in all models.
Summary
All performance testing, conducted as a result of risk analyses and design impact assessments, showed conformity to pre-established specifications and acceptance criteria. The acceptance criteria were established to demonstrate device performance and substantial equivalence of the software to the predicate device.
9. CONCLUSIONS
Based on a comparison of the intended use and technological characteristics, the D2P software is substantially equivalent to the identified predicate device. Minor differences in technological characteristics did not raise new or different questions of safety and effectiveness. Additionally, the validation data supports that the system performs in accordance with its intended use and is substantially equivalent to the predicate device.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).