
510(k) Data Aggregation

    K Number
    K183489
    Device Name
    D2P
    Manufacturer
    Date Cleared
    2019-08-29 (255 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match? Device Name: D2P
    Intended Use

    The D2P software is intended for use as a software interface and image segmentation system for the transfer of DICOM imaging information from a medical scanner to an output file. It is also intended as pre-operative software for surgical planning. For this purpose, the output file may be used to produce a physical replica. The physical replica is intended for adjunctive use along with other diagnostic tools and expert clinical judgement for diagnosis, patient management, and/or treatment selection of cardiovascular, craniofacial, genitourinary, neurological, and/or musculoskeletal applications.
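
    The intended use describes a pipeline from DICOM imaging data, through segmentation, to an output file that can drive a 3D printer. The submission does not describe how this is implemented; the sketch below is a minimal, hypothetical version of such a pipeline, assuming pydicom, scikit-image, and trimesh (none of which are named in the document), with a fixed Hounsfield-unit threshold standing in for D2P's automated segmentation.

```python
# Minimal DICOM-to-printable-mesh sketch (illustrative only; not D2P's
# actual implementation). Assumes pydicom, numpy, scikit-image, trimesh.
from pathlib import Path

import numpy as np
import pydicom
import trimesh
from skimage import measure


def load_ct_volume(dicom_dir: str) -> np.ndarray:
    """Stack a directory of single-frame CT slices into a 3D volume in HU."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Convert stored pixel values to Hounsfield units via the DICOM rescale tags.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)


def segment_and_export(volume: np.ndarray, out_path: str,
                       hu_threshold: float = 300.0) -> None:
    """Threshold (a stand-in for automated segmentation) and export an STL mesh."""
    mask = volume >= hu_threshold  # e.g. bone at roughly 300 HU
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.float32), level=0.5)
    trimesh.Trimesh(vertices=verts, faces=faces).export(out_path)


if __name__ == "__main__":
    vol = load_ct_volume("ct_series/")  # hypothetical input directory
    segment_and_export(vol, "model.stl")
```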

    Device Description

    The D2P software is a stand-alone modular software package that provides advanced visualization of DICOM imaging data. This modular package includes, but is not limited to, the following functions:

    • DICOM viewer and analysis
    • Automated segmentation
    • Editing and pre-printing
    • Seamless integration with 3D Systems printers
    • Seamless integration with 3D Systems software packages
    • Seamless integration with Virtual Reality visualization for non-diagnostic use.
    AI/ML Overview

    The provided text does not contain detailed information regarding acceptance criteria, specific study designs, or performance metrics in a structured format that directly addresses all the requested points. The document summarizes the device, its intended use, and its equivalence to a predicate device for FDA 510(k) clearance.

    However, based on the limited information available, here's what can be extracted and inferred:

    1. A table of acceptance criteria and the reported device performance:

    The document states: "All performance testing... showed conformity to pre-established specifications and acceptance criteria." and "A measurement accuracy and calculation 3D study, usability study, and decimation study were performed and confirmed to be within specification." It also mentions "Validation of printing of physical replicas was performed and demonstrated that anatomic models... can be printed accurately when using any of the compatible 3D printers and materials."

    Without specific numerical thresholds or target values, a detailed table cannot be created. However, the categories of acceptance criteria and the qualitative reported performance are:

    Acceptance Criteria Category | Reported Device Performance
    Measurement Accuracy & Calculation 3D | Confirmed to be within specification
    Usability | Confirmed to be within specification
    Decimation | Confirmed to be within specification
    Accuracy of Physical Replica Printing | Anatomic models can be printed accurately on compatible 3D printers and materials for the specified applications
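
    The summary does not say what the decimation study measured or what tolerance it used. As a rough illustration of the idea (checking that mesh simplification keeps dimensional measurements within specification), here is a hedged sketch assuming open3d; the file name, target triangle count, and 0.5 mm tolerance are all invented:

```python
# Sketch of a decimation-accuracy check (illustrative; the actual study
# design, tooling, and tolerance are not disclosed in the 510(k) summary).
import numpy as np
import open3d as o3d

TOLERANCE_MM = 0.5  # hypothetical acceptance criterion

mesh = o3d.io.read_triangle_mesh("model.stl")  # hypothetical segmented model
decimated = mesh.simplify_quadric_decimation(
    target_number_of_triangles=len(mesh.triangles) // 10
)

# Compare a simple dimensional measurement before and after decimation,
# here the axis-aligned bounding-box extents in millimetres.
before = np.asarray(mesh.get_axis_aligned_bounding_box().get_extent())
after = np.asarray(decimated.get_axis_aligned_bounding_box().get_extent())
error = np.abs(before - after)

print(f"extent error (mm): {error}")
assert np.all(error <= TOLERANCE_MM), "decimated model out of specification"
```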

    2. Sample size used for the test set and the data provenance:

    This information is not provided in the text. There is no mention of sample size for any test set or the origin (country, retrospective/prospective) of the data used for validation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided in the text. The document does not detail how ground truth was established for any validation studies.

    4. Adjudication method for the test set:

    This information is not provided in the text.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    The document describes the D2P software as an "image segmentation system," "pre-operative software for surgical planning," and a tool for "transfer of DICOM imaging information." It also mentions the "Incorporation of a deep learning neural network used to create the prediction of the segmentation."

    However, there is no mention of an MRMC comparative effectiveness study involving human readers with and without AI assistance, nor any effect size related to human reader improvement. The focus appears to be on the performance of the software itself and the accuracy of physical replicas.
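
    For context on what standalone inference by such a network looks like: the submission says only that a deep learning neural network "creates the prediction of the segmentation" and describes no architecture. The sketch below is a generic, hypothetical example assuming PyTorch, with a toy two-layer network standing in for the undisclosed model:

```python
# Generic standalone-inference sketch (illustrative; D2P's actual network
# architecture and weights are not described in the submission).
import torch
import torch.nn as nn

# Toy stand-in for the undisclosed segmentation network: maps a 1-channel
# CT volume to 2-class (background/anatomy) voxel logits.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 2, kernel_size=3, padding=1),
)
model.eval()

volume = torch.randn(1, 1, 64, 64, 64)  # placeholder normalized CT volume

with torch.no_grad():
    logits = model(volume)       # shape: (1, 2, 64, 64, 64)
    mask = logits.argmax(dim=1)  # binary segmentation prediction

print(f"segmented voxels: {int(mask.sum())}")
```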

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

    Yes, the testing described appears to be primarily standalone performance testing of the D2P software and its ability to produce accurate segmented models and physical replicas. The statement "All performance testing... showed conformity to pre-established specifications and acceptance criteria" without mention of human interaction suggests standalone evaluation.

    7. The type of ground truth used:

    This information is not explicitly stated in the text. While it mentions "measurement accuracy," "usability," and "accuracy of physical replicas," it does not specify the method used to establish the gold standard or ground truth for these measurements (e.g., expert consensus, pathology, outcomes data, etc.). It can be inferred that for "measurement accuracy" and "accuracy of physical replicas," there would be established objective standards or measurements used as ground truth.

    8. The sample size for the training set:

    This information is not provided in the text. The document mentions the "Incorporation of a deep learning neural network," which implies a training set was used, but its size is not disclosed.

    9. How the ground truth for the training set was established:

    This information is not provided in the text. While a deep learning network was used, the method for establishing the ground truth for its training data is not discussed.


    K Number
    K161841
    Device Name
    D2P
    Manufacturer
    Date Cleared
    2017-01-09 (188 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match? Device Name: D2P
    Intended Use

    The D2P software is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner to an output file. It is also intended as pre-operative software for surgical planning.

    3D printed models generated from the output file are meant for visual, non-diagnostic use.

    Device Description

    The D2P software is a stand-alone modular software package that allows easy and quick preparation of digital 3D models for printing or for use by third-party applications. The software is aimed at medical staff, technicians, nurses, researchers, or lab technicians who wish to create patient-specific digital anatomical models for a variety of uses such as training, education, and pre-operative surgical planning. The patient-specific digital anatomical models may be further used as input to a 3D printer to create physical models for visual, non-diagnostic use. This modular package includes, but is not limited to, the following functions:

    • DICOM viewer and analysis
    • Automated segmentation
    • Editing and pre-printing
    • Seamless integration with 3D Systems printers
    • Seamless integration with 3D Systems software packages
    AI/ML Overview

    The provided documentation, K161841 for the D2P software, does not contain detailed information regarding the specific acceptance criteria or the comprehensive study details requested. The document primarily focuses on the regulatory submission process, demonstrating substantial equivalence to a predicate device (Mimics, Materialise N.V., K073468).

    The "Performance Data" section mentions several studies (Software Verification and Validation, Phantom Study, Usability Study - System Measurements, Usability Study – Segmentation, Segmentation Study) and states that "all measurements fell within the set acceptance criteria" or "showed similarity in all models." However, it does not explicitly list the acceptance criteria or provide the raw performance metrics to prove they were met.

    Therefore, I cannot fully complete the requested table and answer all questions based solely on the provided text. I will, however, extract all available information related to performance and study design.

    Here's a breakdown of what can be extracted and what information is missing:

    Information NOT available in the provided text:

    • Explicit Acceptance Criteria Values: The exact numerical values for the acceptance criteria for any of the studies (e.g., specific error margins for measurements, quantitative metrics for segmentation similarity).
    • Reported Device Performance Values: The specific numerical performance metrics achieved by the D2P software in any of the studies (e.g., actual measurement deviations, Dice coefficients for segmentation).
    • Sample Size for the Test Set: While studies are mentioned, the number of cases or subjects in the test sets for the Phantom, Usability, or Segmentation studies is not specified.
    • Data Provenance (Country of Origin, Retrospective/Prospective): This information is not provided for any of the studies.
    • Number of Experts and Qualifications for Ground Truth: No details are given about how many experts were involved in establishing ground truth (if applicable) or their qualifications.
    • Adjudication Method: Not mentioned.
    • Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: The document doesn't describe an MRMC study comparing human readers with and without AI assistance, nor does it provide an effect size if one were done. The studies mentioned focus on the device's technical performance and user variability.
    • Standalone (Algorithm-only) Performance: While the D2P software is a "stand-alone modular software package," the details of the performance studies don't explicitly differentiate between algorithm-only performance and human-in-the-loop performance. The Usability Studies do involve users, suggesting human interaction.
    • Type of Ground Truth Used (Pathology, Outcomes Data, etc.): For the Phantom Study, the ground truth is the "physical phantom model." For segmentation and usability studies, it appears to be based on comparisons between the subject device, predicate device, and/or inter/intra-user variability, but the ultimate "ground truth" (e.g., expert consensus on clinical cases, pathological confirmation) is not specified.
    • Sample Size for the Training Set: No information is provided about the training set or how the algorithms within D2P were trained.
    • Ground Truth Establishment for Training Set: No information is provided about how ground truth for a training set (if one existed) was established.

    Information available or inferable from the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Performance Metric/Study | Acceptance Criteria (stated as met) | Reported Device Performance (stated as met)
    Phantom Study | Not explicitly quantified | "All measurements fell within the set acceptance criteria"
    Usability Study – System Measurements (inter/intra-user variability) | Not explicitly quantified | "All measurements fell within the set acceptance criteria"
    Usability Study – Segmentation | Not explicitly quantified | "Showed similarity in all models"
    Segmentation Study | Not explicitly quantified | "Showed similarity in all models"

    2. Sample size used for the test set and the data provenance:

    • Sample Size for Test Set: Not specified for any of the studies (Phantom, Usability, Segmentation).
    • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The phantom study used a physical phantom model. For patient data in segmentation/usability studies, the provenance is not mentioned.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No evidence of an MRMC comparative effectiveness study of human readers with vs. without AI assistance is detailed in this document. The Usability Studies assessed inter/intra-user variability of measurements and segmentation similarity, indicating human interaction with the device, but not a comparative study demonstrating improvement in reader performance due to the AI.
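
    Although the document does not disclose the statistics behind the inter/intra-user variability assessment, a common minimal approach is to decompose repeated measurements into within-user and between-user spread. A sketch with invented data:

```python
# Sketch of an inter/intra-user variability check (illustrative; the study's
# actual statistics and acceptance criteria are not disclosed).
import numpy as np

# Hypothetical data: 3 users each measure the same anatomic distance 4 times (mm).
measurements = np.array([
    [41.9, 42.1, 42.0, 42.2],
    [42.3, 42.4, 42.2, 42.3],
    [41.8, 42.0, 41.9, 42.1],
])

intra_user_sd = measurements.std(axis=1, ddof=1).mean()  # within-user spread
inter_user_sd = measurements.mean(axis=1).std(ddof=1)    # between-user spread

print(f"intra-user SD: {intra_user_sd:.3f} mm")
print(f"inter-user SD: {inter_user_sd:.3f} mm")
```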

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

    • The D2P software is described as a "stand-alone modular software package." The "Software Verification and Validation Testing" and "Segmentation Study" imply assessment of the software's inherent capabilities. However, the presence of "Usability Studies" involving human users suggests that human-in-the-loop performance was also part of the evaluation, but it's not explicitly segmented as "algorithm only" vs. "human-in-the-loop with AI assistance." The document doesn't provide distinct results for an "algorithm only" performance metric.

    7. The type of ground truth used:

    • Phantom Study: The ground truth was the "physical phantom model." Comparisons were made between segmentations created by the subject and predicate device from a CT scan of this physical phantom.
    • Usability Study – System Measurements: Ground truth appears to be based on comparing inter- and intra-user variability in measurements taken within the subject device. The reference for what constitutes "ground truth" for these measurements (e.g., true anatomical measures) is not explicitly stated beyond comparing user consistency.
    • Usability Study – Segmentation / Segmentation Study: Ground truth for these studies is implied by "comparison showed similarity in all models" or comparison between subject and predicate devices. This suggests a relative ground truth (e.g., consistency across methods/users) rather than an absolute ground truth like pathology.
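
    The "similarity" comparisons are not tied to a named metric in the document. The Dice coefficient (noted above as one metric the document does not report) is a standard choice for comparing binary segmentations; a minimal sketch with hypothetical masks:

```python
# Dice similarity coefficient between two binary segmentation masks
# (one standard similarity metric; the submission does not name the one used).
import numpy as np


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(a, b).sum() / denom)


# Hypothetical masks from subject and predicate device on the same scan:
subject = np.zeros((64, 64, 64), dtype=bool)
predicate = np.zeros((64, 64, 64), dtype=bool)
subject[20:40, 20:40, 20:40] = True
predicate[22:40, 20:40, 20:40] = True

print(f"Dice: {dice(subject, predicate):.3f}")
```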

    8. The sample size for the training set:

    • Not specified. The document does not describe the specific training of machine learning algorithms, only the software's intended use and performance validation.

    9. How the ground truth for the training set was established:

    • Not specified, as information about a training set is not provided.