Found 5 results

510(k) Data Aggregation

    K Number: K222470
    Device Name: 3Dicom MD
    Date Cleared: 2022-10-25 (70 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    3Dicom MD software is intended for use as a diagnostic and analysis tool for diagnostic images for hospitals, imaging centers, radiologists, reading practices, and any user who requires and is granted access to patient image, demographic, and report information.

    3Dicom MD displays and manages diagnostic quality DICOM images.

    3Dicom MD is not intended for diagnostic use with mammography images. Usage for mammography is for reference and referral only.

    3Dicom MD is not intended for diagnostic use on mobile devices.

    Contraindications: 3Dicom MD is not intended for the acquisition of mammographic image data and is meant to be used by qualified medical personnel.

    Device Description

    3Dicom MD is a software application developed to focus on core image visualization functions such as 2D multi-planar reconstruction, 3D volumetric rendering, measurements, and markups. 3Dicom MD also supports real-time remote collaboration, sharing the 2D & 3D visualization of the processed patient scan and allowing simultaneous interactive communication modes between multiple users online through textual chat, voice, visual aids, and screen-sharing.

    Designed to be used by radiologists and clinicians who are familiar with 2D scan images, 3Dicom MD provides both 2D and 3D image visualization tools for CT, MRI, and PET scans from different makes and models of image acquisition hardware.
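The 2D multi-planar reconstruction the description mentions can be illustrated with a minimal NumPy sketch; the volume shape, axis ordering, and index names below are illustrative assumptions, not details from the submission:

```python
import numpy as np

# Hypothetical CT volume indexed as (slice, row, column); values are arbitrary HU-like integers.
volume = np.random.default_rng(0).integers(-1000, 3000, size=(120, 256, 256), dtype=np.int16)

def mpr_planes(vol, z, y, x):
    """Return the axial, coronal, and sagittal planes through voxel (z, y, x)."""
    axial = vol[z, :, :]     # fix the slice index: the native in-plane view
    coronal = vol[:, y, :]   # fix the row index across all slices
    sagittal = vol[:, :, x]  # fix the column index across all slices
    return axial, coronal, sagittal

axial, coronal, sagittal = mpr_planes(volume, 60, 128, 128)
print(axial.shape, coronal.shape, sagittal.shape)  # (256, 256) (120, 256) (120, 256)
```

Real viewers additionally resample anisotropic voxels and support oblique planes; this sketch shows only the orthogonal reslicing idea.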

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Measurement Accuracy) | Reported Device Performance
    Length (>10 mm)                            | 99.3%
    Length (1-10 mm)                           | 98.8%
    Area                                       | 99.52%
    Angle                                      | 99.46%

    Note: The document states that the tested accuracy for the lowest clinical range (1-10mm) was found to be slightly inferior (98.8%) due to the resolution of the input scan and screen.
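The accuracy figures above are consistent with comparing a measured value against the Digital Reference Object's known value; one plausible formula is sketched below (the document does not state the exact protocol, and the example numbers are hypothetical):

```python
def measurement_accuracy(measured, known):
    """Percent accuracy: 100% minus the relative error against the known value."""
    return 100.0 * (1.0 - abs(measured - known) / known)

# Hypothetical case: a 12.0 mm reference object measured as 11.92 mm.
acc = measurement_accuracy(11.92, 12.0)
print(round(acc, 2))  # 99.33
```

Under this formula the same absolute error costs more accuracy on small objects, which matches the note that the 1-10 mm range scored lower.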

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 81 Digital Reference Objects (test cases).
    • Data Provenance: The Digital Reference Objects were "created...representative of the clinical range typically encountered in radiology practice." The text does not specify a country of origin or whether they were retrospective or prospective data in the clinical sense, as they are synthetically created for testing.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not explicitly state the number of experts used to establish the ground truth for the test set or their specific qualifications. It mentions that "usability testing involving trained healthcare professionals" was performed, but this is distinct from establishing ground truth for the measurement accuracy tests. For the measurement accuracy tests, the ground truth was "known values" from the "Digital Reference Objects."

    4. Adjudication Method for the Test Set

    Not applicable. The ground truth for measurement accuracy was established using "known values" from Digital Reference Objects, not through expert adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or presented in the provided text. The study described focuses on the device's standalone measurement accuracy.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone performance study was done for the measurement accuracy. The reported percentages (e.g., 99.3% accuracy for length > 10mm) represent the device's performance in measuring known values from Digital Reference Objects. There is no indication of human-in-the-loop performance in these specific metrics.

    7. Type of Ground Truth Used

    The ground truth used for the measurement accuracy tests was known values from Digital Reference Objects. These objects were created to represent the clinical range.

    8. Sample Size for the Training Set

    The document does not provide information about the sample size for a training set. The descriptions focus on verification and validation activities for the device's performance, not on a machine learning model's training.

    9. How the Ground Truth for the Training Set Was Established

    As no information about a training set for a machine learning model is provided, there is no description of how its ground truth was established.


    K Number: K173041
    Manufacturer:
    Date Cleared: 2018-12-20 (448 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    PC/MAC version:
    The RealGUIDE software is intended for the following uses:

    1. Support to the diagnosis for trained professionals. The input DICOM files acquired by a CT/CBCT scanner are not modified in any way but are shown to the doctor through classical imaging and volume rendering techniques. It is a stand-alone product. No patient information is modified; all the parameters used for the image processing are read from the DICOM file itself. Neither automatic diagnosis is made, nor automatic disease detection performed. This software is not connected to any medical instrumentation and does not control any medical or energy-supplying device. The user imports DICOM data coming from any CT/CBCT imaging device, and the software enables the user to view the patient exam in different multi-planar 2D images and easily reconstruct the 3D volume for an immediate visualization of bone structures and surrounding tissues.
    2. Virtual oral and maxillofacial surgery planning. Doctors can plan virtual implants and surgeries on 2D/3D reconstructions and export the projects in open or proprietary format for further processing. The user can choose different implant models (for example dental implants models) from a library provided by the Manufacturers and simulate the positioning in the Patient reconstructed volume (this operation is called "virtual plan")
    3. Dental/maxillofacial surgical guides and prosthetic modelling. The virtual plan is used to design a surgical guide that is used by the doctor to drive the surgery drills according to the planned implant direction and depth. This surgical guide can be manufactured by any 3D printer working from STL files. The user can also design the patient prosthesis (typically a denture) with the surface and volume free-form tools implemented in the software; the result is exported in STL format for 3D printing or CAD/CAM technologies.

    Mobile version:
    The RealGUIDE software APP is intended for the following uses:

    1. Projects visualization and editing. The input PROJECT files, pre-processed with the RealGUIDE desktop version, are used by trained professionals to evaluate the implants projects, edit them with other colleagues through the cloud, as well as for a more effective Patient treatment communication.
      The RealGUIDE APP version is NOT INTENDED for managing a 3D diagnosis starting from DICOM images, due to mobile device screen resolution limitations. For this reason, the APP does not read DICOM files directly but only pre-processed project files, exported through the cloud by the RealGUIDE desktop version.
    Device Description

    RealGUIDE Graphic Station is a fully-featured 3D imaging application in medicine. Its unique open architecture and modular framework make customization options trivial. RealGUIDE Graphic Station is meant to be a multiplatform application, running on PC, MAC and mobile devices (not provided by 3DIEMME). The RealGUIDE software is capable of displaying oral/maxillofacial radiology. The user is then able to navigate through different views, segmented analysis (cross sections), and 3D perspective. In addition, the user is able to simulate various objects within the radiograph for the purpose of treatment planning.
    Once treatment planning and visual simulation is complete, users can generate reports and simulated images for the purpose of evaluation and diagnosis, as well as perform a surgical guide and prosthesis modelling, to be exported in STL format for the manufacturing with any RP or CAD/CAM machine.
    The output format of the software is an STL file, mainly focused on dental, maxillofacial and orthognathic surgery. A list of the possible devices that can be modelled with the software is reported below:

    • Surgical guides for dental implants and surgical screws planning
    • Bone cutting and bone reduction guides for maxillofacial surgery
    • Bone graft models for mandible/maxilla regenerative procedures
    • Dental and maxillofacial prosthesis
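Since every device above is exported as an STL file, a minimal sketch of the ASCII STL format may be useful context; the geometry here is a single illustrative triangle (production exports typically use binary STL with many facets):

```python
def ascii_stl(name, triangles):
    """Serialize triangles [(normal, (v1, v2, v3)), ...] as an ASCII STL string."""
    lines = [f"solid {name}"]
    for normal, verts in triangles:
        lines.append("  facet normal %g %g %g" % normal)
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex %g %g %g" % v)
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One triangle in the z=0 plane with a +z facet normal.
tri = ((0.0, 0.0, 1.0), ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
print(ascii_stl("guide", [tri]).splitlines()[0])  # solid guide
```

The format carries only a triangle soup (no units or patient data), which is why any RP or CAD/CAM machine can consume it.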
    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study information based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not explicitly present a table of acceptance criteria with specific performance metrics (e.g., accuracy, sensitivity, specificity, or error rates). Instead, it makes general claims about the device being tested and functioning as intended, and being substantially equivalent to predicate devices.

    Acceptance Criteria (Implied) | Reported Device Performance
    Functional Intent | "Testing demonstrates the implementation functions as intended..."
    Safety and Effectiveness | "...differences between the Device and the predicates do not raise additional concerns with the Device's safety and effectiveness."
    Clinical Effectiveness | "The results of these studies show the effectiveness of the RealGUIDE software to improve the patient's surgical planning and the whole diagnostic approach."
    Data Correspondence | "...verify that the data shown by RealGUIDE were correspondent to the patient's anatomical features."
    Substantial Equivalence | "Based on a comparison of intended use, indications, principle of operations, features, technical/clinical data, and the test results, the RealGUIDE software is found to be substantially equivalent in safety and effectiveness to the predicate and reference devices listed."

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: The document states that "Significant clinical studies have been performed by different medical professionals on many CT/CBCT images." It does not provide an exact number or range for the "many" CT/CBCT images.
    • Data Provenance: The document does not explicitly state the country of origin of the data. It mentions "The input DICOM files acquired by a CT/CBCT scanner" for the PC/MAC version. The retrospective or prospective nature of the data is not specified directly, though using "input DICOM files acquired" could imply retrospective use of existing data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    • Number of Experts: The document states that "Significant clinical studies have been performed by different medical professionals". It does not specify the exact number of medical professionals involved.
    • Qualifications of Experts: The document refers to them as "different medical professionals" and "leading doctors" who have published relevant scientific literature. It does not provide specific qualifications such as years of experience or subspecialty (e.g., "radiologist with 10 years of experience").

    4. Adjudication Method for the Test Set:

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or evaluating the test set.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:

    The document does not indicate that an MRMC comparative effectiveness study was performed to measure how much human readers improve with AI assistance. The described "clinical studies" focused on the software's effectiveness in improving surgical planning and diagnostic approach, and verifying data correspondence, but not in a comparative "with AI vs. without AI" reader study setup. The RealGUIDE software as described primarily functions as an imaging and planning tool, not an AI-driven diagnostic aid that assists human readers in real-time interpretation. It explicitly states: "Neither automatic diagnosis is made, nor automatic disease detection is performed."

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    The "PC/MAC version" is described as a "stand-alone product" in the context of its diagnostic support function and not being connected to medical instrumentation. However, this refers to the software itself not requiring external hardware to function, rather than an "algorithm-only" performance evaluation. The overall use of the device is for "trained professionals" for "Support to the diagnosis" and "Virtual oral and maxillofacial surgery planning," implying a human-in-the-loop workflow.

    7. The Type of Ground Truth Used:

    The document implies that the ground truth for "verifying that the data shown by RealGUIDE were correspondent to the patient's anatomical features" was established through clinical assessment and comparison to actual patient anatomy/features (presumably from the original CT/CBCT images and potentially outcomes). It also states that "Results of these studies are provided in a separate supplement" and "Relevant scientific literature has been published... on the effectiveness of medical imaging technology," suggesting comparison to established medical understanding and potentially expert consensus on anatomical representations.

    8. The Sample Size for the Training Set:

    The document does not provide any information regarding the sample size used for the training set for the software. This is common for 510(k) summaries where the focus is on performance validation rather than details of the development process.

    9. How the Ground Truth for the Training Set Was Established:

    Since no information is provided about a training set, there is no information on how its ground truth was established.


    K Number: K113442
    Device Name: 3DI
    Manufacturer:
    Date Cleared: 2012-02-16 (87 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    3Di is intended for use as an interactive tool for assisting professional Radiologists, Cardiologists and specialists to reach their own diagnosis, by providing tools of communication, clinics networking, WEB Serving, image viewing, image manipulation, 2D/3D image visualization, image processing, reporting and archiving. The 3Di indications for use are processing of Cardiac CT studies, including CT Calcium scoring, CT Cardiac angiography, coronaries analysis, cardiac functional assessment and of CT colonoscopy. The 3Di indications for use have been modified to include viewing of Mammography images.

    Device Description

    3Di is a PACS device which enables users to access medical images over a network and to utilize 3Di's image visualization tools to review the images. It provides the following functions: Web server, patient browser, PACS capabilities, multi-modality viewing, CT Cardiac and Colonoscopy clinical applications. The 3Di indications for use have been modified to include viewing of Mammography images.

    AI/ML Overview

    The provided text describes the 3Di device, a PACS workstation that now includes the viewing of Mammography images. The submission focuses on demonstrating substantial equivalence for this added functionality.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria | Reported Device Performance
    Quality of Mammography imaging is unsubstantially different from, or equivalent to, the DICOM source mammographic data (for accurate viewing) | The quality of Mammography imaging was validated by comparing the device imaging output to the DICOM source mammographic data. The comparison results demonstrate that the 3Di output and the DICOM source mammographic data are substantially equivalent in terms of image quality.
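A comparison of device output against DICOM source pixels, as described above, could be sketched as a pixel-fidelity check; the metric choice is an assumption, since the document does not state how image-quality equivalence was measured:

```python
import numpy as np

def max_abs_diff(rendered, source):
    """Worst-case per-pixel deviation of the rendered image from the source pixels."""
    return int(np.max(np.abs(rendered.astype(np.int32) - source.astype(np.int32))))

# Hypothetical 4x4 source pixel array; a lossless viewer should reproduce it exactly.
source = np.arange(16, dtype=np.uint16).reshape(4, 4)
rendered = source.copy()
print(max_abs_diff(rendered, source))  # 0
```

A zero worst-case deviation corresponds to bit-exact reproduction; a real validation might instead tolerate small differences from window/level or interpolation and report a summary statistic.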

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the sample size used for the test set or the data provenance (e.g., country of origin, retrospective/prospective). It only mentions "comparison results" without detailing the number of mammography images or studies included in this comparison.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number of experts used or their qualifications for establishing ground truth for the test set. It only mentions that the device is "intended for use as an interactive tool for assisting professional Radiologists, Cardiologists and specialists to reach their own diagnosis".

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for the test set. The validation was based on a direct comparison of image quality.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No multi-reader multi-case (MRMC) comparative effectiveness study is mentioned in the provided text. The submission focuses on the equivalency of image quality for mammography viewing, not on the impact of the AI on human reader performance or diagnostic accuracy.

    6. Standalone Performance Study

    No standalone (algorithm only without human-in-the-loop performance) study is explicitly detailed. The validation described is a comparison of image output quality of the device against the DICOM source, rather than an algorithmic diagnostic performance study.

    7. Type of Ground Truth Used

    The "ground truth" for the validation seems to be the DICOM source mammographic data itself. The device's output was compared directly against this source data to ensure visual fidelity and quality. This indicates a technical ground truth related to image rendering, rather than a clinical ground truth like pathology or expert consensus on disease presence.

    8. Sample Size for the Training Set

    The document does not provide any information about a training set since the validation focuses on image quality comparison, not on a machine learning model that would require a training set.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set or related ground truth establishment is mentioned in the context of this 510(k) submission.


    K Number: K112530
    Manufacturer:
    Date Cleared: 2011-09-30 (30 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    3Di is a PACS workstation software package for handling multimodality (CT, XA, MR, PET, SPECT & Ultrasound) images that use the DICOM protocol. It includes volume rendering, multi-planar reconstruction (MPR), and viewing of the inner and outer surfaces of organs as well as within their walls.

    3Di is intended for use as an interactive tool for assisting professional Radiologists, Cardiologists and specialists to reach their own diagnosis, by providing tools of communication, clinics networking, WEB Serving, image viewing, image manipulation, 2D/3D image visualization, image processing, reporting and archiving. This product is not intended for use with or for diagnostic interpretation of Mammography images.

    The 3Di indications for use are processing of Cardiac CT studies, including CT Calcium scoring, CT Cardiac angiography, coronaries analysis, cardiac functional assessment and of CT colonoscopy.

    Device Description

    3Di is a PACS device which enables users to access medical images over a network and to utilize 3Di's image visualization tools to review the images. It provides the following functions: Web server, patient browser, PACS capabilities, multi-modality viewing, CT Cardiac and Colonoscopy clinical applications. The 3Di has been modified to include the 3Di CScore -option, which is intended to process CT cardiac studies for CT Calcium scoring.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document describes a validation study for the 3Di CScore feature by comparing its output to a predicate device. It doesn't explicitly state numerical "acceptance criteria" in the format of a predefined threshold. Instead, it states the goal of the validation and the outcome of the comparison.

    Criterion Type | Acceptance Criteria (Implicit) | Reported Device Performance
    Calcium Scoring Output | The 3Di CScore output should be "very similar" to the output of the Philips Brilliance workstation (the predicate device). | "The comparison results demonstrate that the 3Di and the Brilliance are very similar in terms of calcium scoring."
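CT calcium scoring of the kind being compared here is conventionally done with the Agatston method; the per-lesion sketch below follows the published threshold and weighting scheme, but it is illustrative only and not the 3Di CScore implementation (the document does not describe the algorithm):

```python
def agatston_lesion_score(area_mm2, peak_hu):
    """Per-lesion Agatston score: lesion area (mm^2) weighted by peak attenuation (HU)."""
    if peak_hu < 130:                    # below the conventional calcium threshold
        return 0.0
    weight = min(4, peak_hu // 100)      # 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4
    return area_mm2 * weight

# Hypothetical lesion: 5.5 mm^2 with a 320 HU peak attenuation.
print(agatston_lesion_score(5.5, 320))  # 16.5
```

A study total is the sum of lesion scores over all slices, so two workstations can be compared by their per-patient totals, which is consistent with the predicate-comparison approach described here.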

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not provide the sample size used for the test set or the data provenance (e.g., country of origin, retrospective or prospective nature of the data).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts used or their qualifications for establishing ground truth. The study is described as a comparison against a predicate device's output, rather than against expert consensus on a dataset.

    4. Adjudication Method for the Test Set

    The document does not describe any adjudication method. The study is a direct comparison of algorithmic outputs.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, an MRMC comparative effectiveness study was not conducted or described. The study focuses on the device's algorithmic output compared to a predicate device's algorithmic output, not on human reader performance with or without AI assistance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was Done

    Yes, a standalone performance assessment was done in the sense that the 3Di CScore's calculated calcium scores were compared directly to the Philips Brilliance workstation's calculated calcium scores. This is a comparison of two algorithmic outputs.

    7. The Type of Ground Truth Used

    The "ground truth" for the test set appears to be the output of a legally marketed predicate device (Philips Brilliance workstation) for calcium scoring. This is a comparative validation, where the predicate device serves as the reference standard.

    8. The Sample Size for the Training Set

    The document does not provide the sample size for the training set.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established. Given the nature of the validation (comparison to a predicate device), it's possible that the 3Di CScore's development involved internal reference standards or existing clinical data, but the details are not outlined here.


    K Number: K093703
    Device Name: 3DI
    Manufacturer:
    Date Cleared: 2010-01-19 (49 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    3Di is a PACS workstation software package for handling multimodality (CT, XA, MR, PET, SPECT & Ultrasound) images that use the DICOM protocol. It includes volume rendering, multi-planar reconstruction (MPR), and viewing of the inner and outer surfaces of organs as well as within their walls.

    3Di is intended for use as an interactive tool for assisting professional Radiologists, Cardiologists and specialists to reach their own diagnosis, by providing tools of communication, clinics networking, WEB Serving, image viewing, image manipulation, 2D/3D image visualization, image processing, reporting and archiving. This product is not intended for use with or for diagnostic interpretation of Mammography images.

    The 3Di indications for use are processing of Cardiac CT angiography studies, including coronaries analysis, cardiac functional assessment and CT colonoscopy.

    Device Description

    3Di is a PACS device which enables users to access medical images over a network and to utilize 3Di's image visualization tools to review the images. It provides the following functions: Web server, patient browser, PACS capabilities, multi-modality viewing, CT Cardiac and Colonoscopy clinical applications.

    AI/ML Overview

    Here's an analysis of the provided 510(k) summary regarding the 3Di device, addressing your specific questions.

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary does not explicitly state acceptance criteria in a quantitative or pass/fail threshold manner. Instead, it describes a comparative study against a predicate device.

    Acceptance Criteria (Implicit) | Reported Device Performance
    General functionality of image reformatting (various modalities) | "results of the two devices are very similar"
    Reliability of orientation annotations displayed | "results of the two devices are very similar"
    Correctness of measurements | "results of the two devices are very similar"
    Image quality | "results of the two devices are very similar"
    Cardiac analysis graphs and results | "results of the two devices are very similar"
    Colon analysis results | "results of the two devices are very similar"
    Overall safety and effectiveness (compared to predicate) | "substantial equivalent in terms of safety and effectiveness to the predicate devices."

    2. Sample Size Used for the Test Set and Data Provenance

    The 510(k) summary does not specify the sample size used for the comparative performance study. It also does not mention the data provenance (e.g., country of origin, retrospective or prospective nature of the data).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    The 510(k) summary does not provide information on the number of experts or their qualifications. The study described is a comparison of the 3Di device against a predicate device (Philips Brilliance) rather than establishing ground truth against expert consensus.

    4. Adjudication Method for the Test Set

    The 510(k) summary does not describe an adjudication method. The study appears to be a direct comparison of the 3Di's output with the predicate device's output across various functions.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The provided text does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. The comparison is between two devices, not human readers with and without AI assistance. Therefore, there is no effect size reported for human readers improving with AI.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone performance assessment was effectively done. The summary states: "Its performance has been validated by comparison to the performance of the Philips Brilliance predicate device." This implies an evaluation of the algorithm's output directly against the predicate device's output, without human intervention in the loop during this specific performance validation.

    7. Type of Ground Truth Used

    The "ground truth" for this study was the performance and output of the legally marketed predicate device (Philips Brilliance). The comparison aimed to demonstrate "substantial equivalence" to this established device, rather than to a clinical ground truth like pathology or patient outcomes.

    8. Sample Size for the Training Set

    The 510(k) summary does not provide information on the sample size used for the training set. Given the submission date (2010) and the description of the device as a PACS workstation with visualization tools, it's possible that traditional "training sets" in the modern machine learning sense might not have been explicitly documented or emphasized in the same way as they would be for deep learning-based AI devices today. The device focuses on visualization and manipulation tools, which might rely more on established graphics algorithms than on data-driven machine learning models.

    9. How the Ground Truth for the Training Set Was Established

    The 510(k) summary does not provide information on how ground truth was established for any training set. If internal validation or verification was performed during development, this information is not detailed in the provided text.

