JointVue's 3D Echo v1.1 is a software application for the display and 3D visualization of ultrasound volume data derived from the Terason uSmart3200T ultrasound system. It is designed to allow the user to observe images and perform analyses of musculoskeletal structures using the ultrasound volume data acquired with the Terason ultrasound scanner. Typical users of this system are trained medical professionals including physicians, nurses, and technicians.
JointVue's 3D Echo is a software application that uses the raw ultrasound signals generated from an imaging ultrasound machine to visualize musculoskeletal structures in three dimensions. The 3D Echo v1.1 includes the following device modifications from 3D Echo v1.0:
- software is updated for interoperability with the Terason 3200T+ Ultrasound system
- the ultrasound hardware is the Terason 3200T+ Ultrasound with a Terason 14L3 linear transducer instead of the SonixOne tablet-based portable ultrasound system
- the NDI 3D Guidance driveBAY™ tracking unit is replaced by the 3D Guidance trakSTAR™ tracking unit (same system but with an internal power supply)
- a different GCX system cart, designed for the Terason 3200T+ Ultrasound
- a custom transducer/sensor holder now attaches an 800 model EM sensor to the exterior of the ultrasound transducer
- designed for use with a medically approved probe cover
The JointVue LLC 3D Echo v1.1 is a software application for the display and 3D visualization of ultrasound volume data for musculoskeletal structures. The provided documentation primarily focuses on demonstrating substantial equivalence to a predicate device (3D Echo v1.0, K172513) rather than a de novo clinical study with pre-defined clinical acceptance criteria. Therefore, the "acceptance criteria" discussed below are derived from the benchtop performance testing outlined in the submission for demonstrating equivalence.
Here's an analysis based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Since this is a submission demonstrating substantial equivalence to a predicate device, the "acceptance criteria" are implied by the comparison to the predicate's performance and general accuracy requirements for 3D model generation from ultrasound data. The study primarily evaluated the accuracy of 3D models generated by the 3D Echo v1.1 system against manufacturer-provided CAD models and compared its performance to the predicate device.
| Acceptance Criterion (Implied) | Reported Device Performance |
|---|---|
| Surface Error (RMS) | Models generated by the 3D Echo v1.1 system "contained statistically less surface error" than those of the predicate device. "Every model generated by the system met accuracy requirements of the system." |
| Angular Error (Degrees) | The "measured angular rotation errors" of models generated by the 3D Echo v1.1 system were statistically less than those of the predicate device. "Every model generated by the system met accuracy requirements of the system." |
| Equivalence to Predicate | The 3D Echo v1.1 system's performance for both surface and angular error was statistically better than that of the predicate device, thereby demonstrating equivalence and meeting accuracy requirements. |
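The submission does not define how the surface error metric is computed. A common approach, sketched here purely as an assumption (the function name and point-cloud representation are illustrative, not from the submission), is the RMS of nearest-point distances between the reconstructed model's vertices and the reference CAD model, after registration into a common coordinate frame:

```python
import numpy as np

def rms_surface_error(model_pts, cad_pts):
    """RMS of each model vertex's distance to its nearest CAD vertex.

    model_pts, cad_pts: (N, 3) and (M, 3) arrays of xyz coordinates,
    assumed to be already registered in a common coordinate frame.
    """
    # Pairwise distances, shape (N, M); fine for benchtop-sized point sets.
    d = np.linalg.norm(model_pts[:, None, :] - cad_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)  # distance from each model point to the CAD reference
    return float(np.sqrt(np.mean(nearest ** 2)))

# Toy check: a model offset 1 mm along z from its reference has 1 mm RMS error.
cad = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
model = cad + np.array([0.0, 0.0, 1.0])
print(rms_surface_error(model, cad))  # → 1.0
```

A production implementation would typically compute point-to-surface (triangle) distances against the CAD mesh rather than point-to-vertex distances, but the RMS aggregation is the same.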
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: A "sample (equivalent in size to that of the predicate validation test) of 3D models of the femur and tibia bone models" was obtained and analyzed. The exact number of models is not specified but is stated to be equivalent to the predicate's validation test.
- Data Provenance: The data was generated through non-clinical benchtop testing using physical phantom models simulating the knee (femur and tibia bone models). This is a prospective generation of data using specific test articles. The country of origin for the data generation is not explicitly stated, but the company is US-based (Knoxville, TN).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
Not applicable. The ground truth for the benchtop testing was established by manufacturer-provided CAD models of the femur and tibia, not human experts.
4. Adjudication Method for the Test Set
Not applicable. The ground truth was based on CAD models, which are objective and do not require human adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not performed. The submission explicitly states "Human Clinical Performance Testing was not required to demonstrate the safety and effectiveness of the device." The study focused on benchtop performance of the device itself (algorithm only) compared to a predicate device, not on human reader performance with or without the AI.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, a standalone performance study was done. The benchtop performance testing evaluated the 3D Echo v1.1 system's ability to generate accurate 3D models from ultrasound data acquired from physical phantom models. This is an evaluation of the system's performance (including the software algorithm) independent of human interpretation or assistance during the analysis phase.
7. The Type of Ground Truth Used
The ground truth used for the benchtop performance testing was the manufacturer-provided CAD models of the femur and tibia. This represents an objective, digitally precise "ideal" representation of the anatomical structures.
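The submission likewise does not define the "measured angular rotation errors" quoted above. One standard definition, shown here only as an assumed sketch, is the geodesic angle between an estimated and a reference rotation matrix (e.g., the orientation of the reconstructed bone model versus the CAD ground truth):

```python
import numpy as np

def angular_error_deg(R_est, R_ref):
    """Geodesic angle (in degrees) between two 3x3 rotation matrices.

    Uses theta = arccos((trace(R_est @ R_ref.T) - 1) / 2).
    """
    R = R_est @ R_ref.T
    cos_theta = (np.trace(R) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against rounding drift
    return float(np.degrees(np.arccos(cos_theta)))

# Toy check: a 10-degree rotation about z, compared against identity.
t = np.radians(10.0)
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0,        0.0,       1.0]])
print(angular_error_deg(Rz, np.eye(3)))  # → 10.0
```

This metric is symmetric in its arguments and zero only when the two orientations coincide, which makes it a natural candidate for the per-model angular accuracy check described in the benchtop testing.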
8. The Sample Size for the Training Set
The document does not provide information regarding the training set sample size. The focus of the submission is on the validation of the modified device after its development, demonstrating its substantial equivalence to a previously cleared device. The "core algorithm for 3D bone reconstruction" was not changed from the predicate device, implying that the training for this core algorithm would have occurred prior to or during the development of the predicate device (K172513), and is not detailed in this specific submission for K211656.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established, as it does not discuss the training phase of the algorithm. It only mentions that the "core algorithm for 3D bone reconstruction" was unchanged from the predicate device, implying that any training would relate to the predicate and is not part of this 510(k) submission.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).