
510(k) Data Aggregation

    K Number: K211656
    Device Name: 3D Echo v1.1
    Date Cleared: 2021-06-25 (28 days)
    Regulation Number: 892.2050

    Intended Use

    JointVue's 3D Echo v1.1 is a software application for the display and 3D visualization of ultrasound volume data derived from the Terason uSmart3200T ultrasound system. It is designed to allow the user to observe images and perform analyses of musculoskeletal structures using the ultrasound volume data acquired with the Terason ultrasound scanner. Typical users of this system are trained medical professionals including physicians, nurses, and technicians.

    Device Description

    JointVue's 3D Echo is a software application that uses the raw ultrasound signals generated from an imaging ultrasound machine to visualize musculoskeletal structures in three dimensions. The 3D Echo v1.1 includes the following device modifications from 3D Echo v1.0:

      1. the software is updated for interoperability with the Terason 3200T+ Ultrasound system
      2. the ultrasound hardware is the Terason 3200T+ Ultrasound with a Terason 14L3 linear transducer instead of the SonixOne tablet-based portable ultrasound system
      3. the NDI 3D Guidance driveBAY™ tracking unit is replaced by the 3D Guidance trakSTAR™ tracking unit (same system but with an internal power supply)
      4. a different GCX system cart designed for the Terason 3200T+ Ultrasound
      5. a custom transducer/sensor holder now attaches a model 800 EM sensor to the exterior of the ultrasound transducer
      6. designed for use with a medically approved probe cover
    AI/ML Overview

    The JointVue LLC 3D Echo v1.1 is a software application for the display and 3D visualization of ultrasound volume data for musculoskeletal structures. The provided documentation primarily focuses on demonstrating substantial equivalence to a predicate device (3D Echo v1.0, K172513) rather than a de novo clinical study with pre-defined clinical acceptance criteria. Therefore, the "acceptance criteria" discussed below are derived from the benchtop performance testing outlined in the submission for demonstrating equivalence.

    Here's an analysis based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a submission demonstrating substantial equivalence to a predicate device, the "acceptance criteria" are implied by the comparison to the predicate's performance and general accuracy requirements for 3D model generation from ultrasound data. The study primarily evaluated the accuracy of 3D models generated by the 3D Echo v1.1 system against manufacturer-provided CAD models and compared its performance to the predicate device.

    Acceptance Criterion (Implied) and Reported Device Performance:

    • Surface Error (RMS): Models generated by the 3D Echo v1.1 system "contained statistically less surface error" than those of the predicate device. "Every model generated by the system met accuracy requirements of the system."
    • Angular Error (Degrees): The 3D Echo v1.1 system's "measured angular rotation errors" were statistically less than those of the predicate device. "Every model generated by the system met accuracy requirements of the system."
    • Equivalence to Predicate: The 3D Echo v1.1 system's performance for both surface and angular error was statistically better than the predicate device, thereby demonstrating equivalence and meeting accuracy requirements.
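The two metrics above can be sketched in code. This is an illustrative computation only, not JointVue's validation method: the function names, the brute-force nearest-neighbor matching, and the rotation-matrix representation are all assumptions made for the example.

```python
import numpy as np

def rms_surface_error(model_pts, ref_pts):
    """RMS of nearest-neighbor distances from each generated-model vertex
    to a reference point cloud (e.g. sampled from the CAD model).
    Brute-force pairwise distances, kept simple for clarity."""
    d = np.linalg.norm(model_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)                      # closest reference point per vertex
    return float(np.sqrt(np.mean(nearest ** 2)))

def angular_error_deg(R_est, R_ref):
    """Angle (degrees) of the residual rotation between an estimated and a
    reference orientation, both given as 3x3 rotation matrices."""
    R = R_est @ R_ref.T
    # For a rotation matrix, trace(R) = 1 + 2*cos(theta)
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

In practice a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the quadratic distance matrix for meshes of realistic size.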

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: A "sample (equivalent in size to that of the predicate validation test) of 3D models of the femur and tibia bone models" was obtained and analyzed. The exact number of models is not specified but is stated to be equivalent to the predicate's validation test.
    • Data Provenance: The data was generated through non-clinical benchtop testing using physical phantom models simulating the knee (femur and tibia bone models). This is a prospective generation of data using specific test articles. The country of origin for the data generation is not explicitly stated, but the company is US-based (Knoxville, TN).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    Not applicable. The ground truth for the benchtop testing was established by manufacturer-provided CAD models of the femur and tibia, not human experts.

    4. Adjudication Method for the Test Set

    Not applicable. The ground truth was based on CAD models, which are objective and do not require human adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study was not performed. The submission explicitly states "Human Clinical Performance Testing was not required to demonstrate the safety and effectiveness of the device." The study focused on benchtop performance of the device itself (algorithm only) compared to a predicate device, not on human reader performance with or without the AI.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, a standalone performance study was done. The benchtop performance testing evaluated the 3D Echo v1.1 system's ability to generate accurate 3D models from ultrasound data acquired from physical phantom models. This is an evaluation of the system's performance (including the software algorithm) independent of human interpretation or assistance during the analysis phase.

    7. The Type of Ground Truth Used

    The ground truth used for the benchtop performance testing was the manufacturer-provided CAD models of the femur and tibia. This represents an objective, digitally precise "ideal" representation of the anatomical structures.

    8. The Sample Size for the Training Set

    The document does not provide information regarding the training set sample size. The focus of the submission is on the validation of the modified device after its development, demonstrating its substantial equivalence to a previously cleared device. The "core algorithm for 3D bone reconstruction" was not changed from the predicate device, implying that the training for this core algorithm would have occurred prior to or during the development of the predicate device (K172513), and is not detailed in this specific submission for K211656.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established, as it does not discuss the training phase of the algorithm. It only mentions that the "core algorithm for 3D bone reconstruction" was unchanged from the predicate device, implying that any training would relate to the predicate and is not part of this 510(k) submission.


    K Number: K172513
    Device Name: 3D Echo
    Date Cleared: 2017-09-14 (24 days)
    Regulation Number: 892.2050

    Intended Use

    JointVue's 3D Echo is a software application for the display and 3D visualization of ultrasound volume data derived from the Sonix Ultrasound Scanner. It is designed to allow the user to observe images and perform analyses of musculoskeletal structures using the ultrasound volume data acquired with the Sonix Ultrasound Scanner. Typical users of this system are trained medical professionals including physicians, nurses, and technicians.

    Device Description

    JointVue's 3D Echo is a software application that uses the raw ultrasound signals generated from an imaging ultrasound machine to visualize musculoskeletal structures in three dimensions. The 3D Echo software is pre-loaded on one of the following two ultrasound systems: 1) SonixOne, a tablet-based system; or 2) SonixTouch Q+ with linear transducer (BK Ultrasound model L14-5/38 GPS) and driveBAY™ tracking unit (Ascension Technology Corporation). There are also two electromagnetic (EM) sensors (Ascension 6DOF sensors, model 800, part #600786) included with the JointVue 3D Echo software to identify the relative location of the ultrasound probe. Finally, there is a foot switch (steute model MKF-2-MED GP25) included as an input device. The major software functions of the JointVue 3D Echo system include the following: 1) the ability to display axial, sagittal, coronal and oblique 2D images; 2) the ability to display the 3D surface of musculoskeletal structures; 3) the ability to display axial images with 3D visualization; and 4) the ability to provide contouring and US image visualization. The device is intended to be used in a clinical or hospital environment. JointVue's 3D Echo ultrasound system utilizes raw ultrasound signals to detect tissue interfaces and visualize joint anatomy in three dimensions. The system provides clinicians with three-dimensional models of the joint anatomy.
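The axial/sagittal/coronal display described above amounts to multiplanar slicing of a reconstructed volume. A minimal sketch, assuming the volume is a plain 3D array indexed (axial, coronal, sagittal); a real ultrasound volume would instead carry orientation and spacing metadata, and the function name is made up for the example:

```python
import numpy as np

def orthogonal_slices(volume, index):
    """Extract the three orthogonal 2D slices through a 3D volume at the
    given (axial, coronal, sagittal) index triple."""
    ax, cor, sag = index
    return {
        "axial":    volume[ax, :, :],   # fix the first axis
        "coronal":  volume[:, cor, :],  # fix the second axis
        "sagittal": volume[:, :, sag],  # fix the third axis
    }
```

Oblique slices would additionally require resampling the volume along an arbitrary plane (e.g. with `scipy.ndimage.map_coordinates`), which is omitted here.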

    AI/ML Overview

    The provided document is a 510(k) premarket notification for the 3D Echo device by JointVue, LLC. It outlines the FDA's determination of substantial equivalence to a predicate device, but it does not contain the detailed acceptance criteria or a specific study proving the device meets those criteria in the format requested.

    The document states that "Performance Testing" demonstrated equivalent precision and accuracy based on benchtop testing using a phantom. However, it does not provide the specific acceptance criteria, the detailed results of this testing, or the methodology for evaluating accuracy and precision.

    Here's an attempt to answer your questions based only on the information provided, highlighting what is missing:


    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria and Reported Device Performance:

    • Precision and Accuracy (implied from the predicate equivalence statement): Equivalent precision and accuracy to the predicate device, "5D Viewer" (K161955), based upon benchtop testing using a phantom.
    • Software Verification and Validation: Demonstrated safety and efficacy. Potential hazards were classified as a moderate level of concern (LOC). Specific performance metrics are not provided.
    • Mechanical and Acoustic Performance: Demonstrated safety and effectiveness with the same accuracy and precision as the predicate device (based on benchtop testing using a phantom). Specific performance metrics are not provided.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Not specified. The document only mentions "benchtop testing using a phantom."
    • Data Provenance: Not specified. The testing was described as "benchtop," implying laboratory testing, not human data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable. The ground truth for benchtop phantom testing would likely be based on the known physical properties or measurements of the phantom itself, not expert interpretation.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. This typically refers to medical image interpretation by multiple experts, which was not the nature of the described performance testing.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    No. The document explicitly states: "Clinical testing was not required to demonstrate the safety and effectiveness of the 3D Echo software." Therefore, no MRMC study was performed.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, in a sense. The described "benchtop testing using a phantom" and "Software verification and validation testing" would fall under standalone performance assessment. However, the exact metrics and how "accuracy and precision" were quantified are not detailed. The device itself is described as a "software application for the display and 3D visualization," implying its performance as an algorithm is what was assessed.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the benchtop testing using a phantom, the ground truth would likely be the known physical dimensions, geometries, or simulated conditions of the phantom. No human-derived ground truth (like expert consensus, pathology, or outcomes) was used.
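When the ground truth is a phantom with known dimensions, "accuracy" and "precision" are conventionally separated into bias and repeatability. The submission does not state how these were quantified, so the following split (and the function name) is an assumption for illustration only:

```python
import numpy as np

def accuracy_precision(measurements, true_value):
    """Summarize repeated measurements of a known phantom dimension.

    accuracy  = bias, i.e. mean measured value minus the known true value
    precision = repeatability, i.e. sample standard deviation (ddof=1)
    """
    m = np.asarray(measurements, dtype=float)
    accuracy = float(np.mean(m) - true_value)
    precision = float(np.std(m, ddof=1))
    return accuracy, precision
```

For example, three measurements of a 10.0 mm feature that read 10.1, 9.9, and 10.0 mm would show essentially zero bias but a repeatability of about 0.1 mm.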


    8. The sample size for the training set

    Not specified. The document does not describe the development or training of any machine learning model, nor does it specify a training set size.


    9. How the ground truth for the training set was established

    Not specified, as no training set or machine learning development is detailed in the provided text.


    Summary of what's missing:

    The document focuses on the regulatory clearance process and establishing substantial equivalence rather than providing a detailed technical report of performance studies. Key details like specific numerical acceptance criteria, the methodology of phantom testing, and quantitative results of accuracy and precision are absent from this regulatory summary.


    K Number: K132544
    Device Name: TOMTEC-ARENA 1.0, TOMTEC-ARENA 3D ECHO, CARDIO-ARENA
    Date Cleared: 2013-11-25 (104 days)
    Regulation Number: 892.2050

    Intended Use

    Indications for use of TomTec-Arena software are diagnostic review, quantification and reporting of cardiovascular, fetal and abdominal structures and function of patients with suspected disease.

    Device Description

    TomTec-Arena is a clinical software package for reviewing, quantifying and reporting digital medical data. TomTec-Arena runs on high performance PC platforms based on Microsoft Windows operating system standards. The software is compatible with different TomTec Image-Arena™ platforms, their derivatives or third party platforms. Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes. TomTec-Arena consists of the following optional clinical application packages: Image-Com, 4D LV-Analysis/Function, 4D RV-Function, 4D Cardio-View, 4D MV-Assessment, Echo-Com, 2D Cardiac-Performance Analysis, 2D Cardiac-Performance Analysis MR, 4D Sono-Scan.

    AI/ML Overview

    The provided document does not contain detailed acceptance criteria or a study proving the device meets specific performance criteria. Instead, it is a 510(k) summary for a software package, TomTec-Arena 1.0, and focuses on demonstrating substantial equivalence to predicate devices.

    Here's a breakdown of what is and is not available in the provided text, in response to your requested information:

    1. A table of acceptance criteria and the reported device performance

    • Not available. The document states that "Testing was performed according to internal company procedures. Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted." However, it does not provide the specific acceptance criteria or the quantitative reported device performance against those criteria. It only provides a high-level summary of tests passed.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Not available. The document explicitly states: "Substantial equivalence determination of this subject device was not based on clinical data or studies." Therefore, there is no test set in the sense of a clinical performance study with patient data. The "tests" mentioned are software validation and verification.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Not applicable. As indicated above, no clinical test set with patient data was used for substantial equivalence determination. Ground truth establishment by experts for clinical performance is not mentioned.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Not applicable. No clinical test set.

    5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    • Not applicable. No MRMC study was conducted or mentioned. The device is a software package for review, quantification, and reporting, and its substantial equivalence was not based on clinical performance data demonstrating impact on human readers.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Not explicitly detailed as a standalone performance study in the context of clinical accuracy. The document confirms that "measurement verification is completed without deviations" as part of non-clinical performance testing. This suggests that the algorithm's measurements were verified, but the specifics of this verification (e.g., what measurements, against what standard, sample size, etc.) are not provided. It's a software verification, not a clinical standalone performance study.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not applicable for clinical ground truth. For the non-clinical "measurement verification," the ground truth would likely be a known or calculated value for the data being measured, but the specific type of ground truth against which software measurements were verified is not described.

    8. The sample size for the training set

    • Not applicable. The document describes "TomTec-Arena" as a clinical software package for reviewing, quantifying, and reporting existing digital medical data. It is not an AI/ML device that requires a training set in the typical sense for learning patterns. Its functionality is based on established algorithms for image analysis and quantification.

    9. How the ground truth for the training set was established

    • Not applicable. No training set for an AI/ML model is mentioned.

    Summary of available information regarding performance:

    The document states that:

    • "Testing was performed according to internal company procedures."
    • "Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted."
    • "Test results were reviewed by designated technical professionals before software proceeded to release."
    • "All requirements have been verified by tests or other appropriate methods."
    • "The incorporated OTS Software is considered validated either by particular tests or implied by the absence of OTS SW related abnormalities during all other V&V activities."
    • The summary conclusions indicate:
      • "all automated tests were reviewed and passed"
      • "feature complete test completed without deviations"
      • "functional tests are completed"
      • "measurement verification is completed without deviations"
      • "multilanguage tests are completed without deviations"
    • "Substantial equivalence determination of this subject device was not based on clinical data or studies."
    • A "clinical evaluation following the literature route based on the assessment of benefits, associated with the use of the device, was performed." This literature review supported the conclusion that the device is "as safe and effective, and performs as well as or better than the predicate devices."

    In essence, TomTec-Arena 1.0 was cleared based on non-clinical software verification and validation, comparison to predicate devices, and a literature review, rather than a prospective clinical performance study with explicit acceptance criteria for diagnostic accuracy.

