Search Results
Found 3 results
(28 days)
JointVue LLC
JointVue's 3D Echo v1.1 is a software application for the display and 3D visualization of ultrasound volume data derived from the Terason uSmart3200T ultrasound system. It is designed to allow the user to observe images and perform analyses of musculoskeletal structures using the ultrasound volume data acquired with the Terason ultrasound scanner. Typical users of this system are trained medical professionals including physicians, nurses, and technicians.
JointVue's 3D Echo is a software application that uses the raw ultrasound signals generated from an imaging ultrasound machine to visualize musculoskeletal structures in three dimensions. The 3D Echo v1.1 includes the following device modifications from 3D Echo v1.0:
- software is updated for interoperability with the Terason 3200T+ Ultrasound system
- the ultrasound hardware is the Terason 3200T+ Ultrasound with a Terason 14L3 linear transducer instead of the SonixOne tablet-based portable ultrasound system
- the NDI 3D Guidance driveBAY™ tracking unit is replaced by the 3D Guidance trakSTAR™ tracking unit (the same system but with an internal power supply)
- a different GCX system cart designed for the Terason 3200T+ Ultrasound
- a custom transducer/sensor holder now attaches a model 800 EM sensor to the exterior of the ultrasound transducer
- designed for use with a medically approved probe cover
The JointVue LLC 3D Echo v1.1 is a software application for the display and 3D visualization of ultrasound volume data for musculoskeletal structures. The provided documentation primarily focuses on demonstrating substantial equivalence to a predicate device (3D Echo v1.0, K172513) rather than a de novo clinical study with pre-defined clinical acceptance criteria. Therefore, the "acceptance criteria" discussed below are derived from the benchtop performance testing outlined in the submission for demonstrating equivalence.
Here's an analysis based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Since this is a submission demonstrating substantial equivalence to a predicate device, the "acceptance criteria" are implied by the comparison to the predicate's performance and general accuracy requirements for 3D model generation from ultrasound data. The study primarily evaluated the accuracy of 3D models generated by the 3D Echo v1.1 system against manufacturer-provided CAD models and compared its performance to the predicate device.
| Acceptance Criterion (Implied) | Reported Device Performance |
|---|---|
| Surface Error (RMS) | The 3D Echo v1.1 system generated models that "contained statistically less surface error" than the predicate device. "Every model generated by the system met accuracy requirements of the system." |
| Angular Error (Degrees) | The 3D Echo v1.1 system generated models whose "measured angular rotation errors" were statistically less than those of the predicate device. "Every model generated by the system met accuracy requirements of the system." |
| Equivalence to Predicate | The 3D Echo v1.1 system's performance for both surface and angular error was statistically better than the predicate device's, thereby demonstrating equivalence and meeting accuracy requirements. |
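The summary quotes these findings without defining the metric. For illustration only: RMS surface error in model-to-CAD validation is commonly computed as the root-mean-square of point-to-surface distances. The sketch below assumes both models are sampled as point clouds and uses a nearest-neighbor approximation via scipy; this is a generic formulation, not JointVue's disclosed method.

```python
# Sketch: RMS surface error between a reconstructed bone model and its CAD
# reference, both sampled as point clouds. The nearest-neighbor formulation
# and all data below are illustrative assumptions; the submission does not
# describe the actual metric implementation.
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_error(reconstructed: np.ndarray, cad_reference: np.ndarray) -> float:
    """RMS of nearest-neighbor distances from each reconstructed point
    (N, 3) to the CAD reference point cloud (M, 3)."""
    tree = cKDTree(cad_reference)
    distances, _ = tree.query(reconstructed)  # closest CAD point per model point
    return float(np.sqrt(np.mean(distances ** 2)))

# Synthetic check: points on a unit sphere vs. a slightly perturbed copy.
rng = np.random.default_rng(0)
cad = rng.normal(size=(5000, 3))
cad /= np.linalg.norm(cad, axis=1, keepdims=True)
model = cad + rng.normal(scale=0.001, size=cad.shape)
print(f"RMS surface error: {rms_surface_error(model, cad):.4f}")
```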
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: A "sample (equivalent in size to that of the predicate validation test) of 3D models of the femur and tibia bone models" was obtained and analyzed. The exact number of models is not specified but is stated to be equivalent to the predicate's validation test.
- Data Provenance: The data was generated through non-clinical benchtop testing using physical phantom models simulating the knee (femur and tibia bone models). This is a prospective generation of data using specific test articles. The country of origin for the data generation is not explicitly stated, but the company is US-based (Knoxville, TN).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
Not applicable. The ground truth for the benchtop testing was established by manufacturer-provided CAD models of the femur and tibia, not human experts.
4. Adjudication Method for the Test Set
Not applicable. The ground truth was based on CAD models, which are objective and do not require human adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not performed. The submission explicitly states "Human Clinical Performance Testing was not required to demonstrate the safety and effectiveness of the device." The study focused on benchtop performance of the device itself (algorithm only) compared to a predicate device, not on human reader performance with or without the AI.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, a standalone performance study was done. The benchtop performance testing evaluated the 3D Echo v1.1 system's ability to generate accurate 3D models from ultrasound data acquired from physical phantom models. This is an evaluation of the system's performance (including the software algorithm) independent of human interpretation or assistance during the analysis phase.
7. The Type of Ground Truth Used
The ground truth used for the benchtop performance testing was the manufacturer-provided CAD models of the femur and tibia. This represents an objective, digitally precise "ideal" representation of the anatomical structures.
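The angular error metric from section 1 is likewise undefined in the summary. A common formulation, assumed here purely for illustration, is the magnitude of the relative rotation between the registered model pose and the CAD reference pose:

```python
# Sketch: angular error as the magnitude of the relative rotation between a
# registered model pose and the CAD reference pose (both 3x3 rotation
# matrices). This formulation is an assumption; the 510(k) summary does not
# specify the angular metric used.
import numpy as np
from scipy.spatial.transform import Rotation

def angular_error_deg(r_model: np.ndarray, r_reference: np.ndarray) -> float:
    """Angle (degrees) of the rotation taking r_reference to r_model."""
    relative = Rotation.from_matrix(r_model @ r_reference.T)
    return float(np.degrees(relative.magnitude()))

# Example: reference pose vs. one perturbed by 2 degrees about the z axis.
ref = np.eye(3)
model = Rotation.from_euler("z", 2.0, degrees=True).as_matrix()
print(f"Angular error: {angular_error_deg(model, ref):.2f} deg")  # ~2.00
```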
8. The Sample Size for the Training Set
The document does not provide information regarding the training set sample size. The focus of the submission is on the validation of the modified device after its development, demonstrating its substantial equivalence to a previously cleared device. The "core algorithm for 3D bone reconstruction" was not changed from the predicate device, implying that the training for this core algorithm would have occurred prior to or during the development of the predicate device (K172513), and is not detailed in this specific submission for K211656.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established, as it does not discuss the training phase of the algorithm. It only mentions that the "core algorithm for 3D bone reconstruction" was unchanged from the predicate device, implying that any training would relate to the predicate and is not part of this 510(k) submission.
(23 days)
JointVue LLC
JointVue's jFit Surgical Planner is a software device intended to assist medical professionals in preoperative planning of orthopedic surgery. The device allows for overlaying templates of prostheses on 3D bone models generated from radiological images. The software includes tools for performing measurements on the images and for positioning the prosthetic template. Clinical judgment and experience are required to properly use the software.
JointVue's jFit Surgical Planner is an orthopedic preoperative planning software. It allows for overlaying templates of prostheses on patient 3D bone models generated from radiological images using JointVue's 3D Echo software (K172513), for surgical preplanning of joint replacement surgery. jFit Surgical Planner is intended to run on a PC and requires the Microsoft Windows™ operating system, version 7, 8, or 10 (32-bit/64-bit). A PDF reader such as Adobe Acrobat or Foxit is recommended in order to access the instructions for use. jFit Surgical Planner requires the following minimum computer hardware:

| Component | Minimum Requirement |
|---|---|
| Processor | Intel Core i5-5300 @ 2.3 GHz or higher |
| Memory | 8 GB RAM or more |
| Graphics | Intel HD Graphics 5500 or higher |
| Resolution | 1920×1080 minimum |
| HD Space | 1 GB or more |

The major functions and features of jFit Surgical Planner include: 1. Patient Case Loading; 2. Loading X-Ray Images; 3. Editing and Verifying Femur Landmarks; 4. Editing and Verifying Tibia Landmarks; 5. Conducting Femur Planning; 6. Conducting Tibia Planning; 7. Validating Surgical Planning; 8. Generation of a Surgical Planning Summary Report. jFit Surgical Planner is for installation on a secure computer workstation to protect patient data; the typical environment of use is an office environment. jFit Surgical Planner utilizes 3D bone models for preoperative planning of joint replacement surgery. Surgical preplanning starts by importing patient 3D bone models along with anterior-posterior and lateral X-Ray images, if available. jFit Surgical Planner calculates relevant surgical landmarks, which can then be verified and edited. jFit Surgical Planner will suggest an initial implant selection and placement based on anatomical landmarks selected and verified by the clinician; the clinician is responsible for adjusting the implant selection and placement parameters to validate the patient-specific surgical plan. Upon completion of surgical planning, a summary report is generated that must be signed by a physician to approve the surgical plan.
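The sizing logic itself is not disclosed in the submission. Purely as a hypothetical illustration of how a landmark-driven initial size suggestion could work (the size chart, dimension names, and selection rule below are invented, not JointVue's method):

```python
# Hypothetical sketch of landmark-driven implant size suggestion. The size
# chart values, the AP-dimension rule, and all names are invented for
# illustration; the submission does not disclose jFit's sizing algorithm.
from dataclasses import dataclass

@dataclass
class FemoralImplant:
    size: int
    ap_mm: float  # anterior-posterior dimension of the implant

# Invented size chart (not a real implant system).
SIZE_CHART = [FemoralImplant(s, ap) for s, ap in
              [(1, 52.0), (2, 55.5), (3, 59.0), (4, 62.5), (5, 66.0)]]

def suggest_size(measured_ap_mm: float) -> FemoralImplant:
    """Pick the implant whose AP dimension is closest to the femoral AP
    distance measured from the verified landmarks."""
    return min(SIZE_CHART, key=lambda imp: abs(imp.ap_mm - measured_ap_mm))

print(suggest_size(60.2))  # -> FemoralImplant(size=3, ap_mm=59.0)
```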
Here's a summary of the acceptance criteria and study information for the jFit Surgical Planner, based on the provided text:
Acceptance Criteria and Device Performance
The provided document describes the equivalence of the subject device (jFit Surgical Planner) to the predicate device (TraumaCAD 2.0) based on their ability to identify the same size implants.
| Acceptance Criteria | Reported Device Performance (jFit Surgical Planner) |
|---|---|
| Identify same size implants as predicate | 100% agreement with predicate |
| Precision and accuracy equivalent to predicate | Equivalence demonstrated by benchtop testing |
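For context, the reported 100% agreement reduces to a per-case comparison of the implant sizes selected by each device across the 39 cases. A minimal sketch of that tally, with the data layout assumed:

```python
# Sketch: percent agreement in selected implant size between the subject and
# predicate devices over N blinded cases. The data layout is assumed.
def percent_agreement(subject_sizes: list[int], predicate_sizes: list[int]) -> float:
    assert len(subject_sizes) == len(predicate_sizes)
    matches = sum(s == p for s, p in zip(subject_sizes, predicate_sizes))
    return 100.0 * matches / len(subject_sizes)

subject = [3, 4, 2] * 13      # 39 illustrative cases
predicate = list(subject)     # identical selections -> 100% agreement
print(f"{percent_agreement(subject, predicate):.0f}% agreement")
```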
Study Details
2. Sample Size and Data Provenance for Test Set
- Sample Size: 39 simulated cases.
- Data Provenance: The cases were "simulated," meaning they were constructed test scenarios rather than data from real patients. The country of origin is not specified. Because the cases were pre-defined bench scenarios rather than patient data, the retrospective/prospective distinction does not strictly apply.
3. Number of Experts and Qualifications for Ground Truth
The document does not explicitly state the number of experts or their specific qualifications (e.g., radiologist with 10 years of experience) used to establish the ground truth for the test set in the benchtop study. However, the study involved "two independent users" who performed the implant sizing tasks for both devices. While these "users" are implicitly qualified to perform surgical planning, their specific expert credentials for establishing ground truth are not detailed.
4. Adjudication Method for Test Set
The adjudication method for the test set is implied to be a direct comparison of the output from two independent users operating both the subject and predicate devices. No formal "adjudication method" in the sense of resolving discrepancies among multiple experts (like 2+1 or 3+1) is mentioned, as the focus was on the agreement between the devices' outputs by individual users. The case IDs were blinded for both operators, suggesting an attempt to reduce bias.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study was explicitly mentioned or conducted as described in the provided text. The study focused on side-by-side performance of the two devices by two independent users, rather than a comparative effectiveness study involving human readers with and without AI assistance to measure improvement effect size.
6. Standalone (Algorithm Only) Performance
The benchtop testing approximates standalone performance: it evaluated the device's output (implant sizing) on simulated cases, rather than reader performance. Strictly speaking, the "two independent users" operated the software to generate these results, so the study was not entirely free of human interaction. The device itself is a "software device."
7. Type of Ground Truth Used
The ground truth for the benchtop testing was established by the "predicate device" (TraumaCAD 2.0) as the reference, which the subject device was compared against. The study aimed to demonstrate that the subject device identified "the same size implants" as the predicate. This implies the ground truth was essentially the output of a previously cleared device, rather than pathology, expert consensus on imaging, or outcomes data.
8. Sample Size for the Training Set
The document does not specify a separate "training set" or its sample size. The description focuses on validation testing and equivalence to a predicate device. If the device uses machine learning, the training data used to develop the underlying models is not disclosed in this regulatory submission.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned, the method for establishing its ground truth is also not provided.
(24 days)
JointVue LLC
JointVue's 3D Echo is a software application for the display and 3D visualization of ultrasound volume data derived from the Sonix Ultrasound Scanner. It is designed to allow the user to observe images and perform analyses of musculoskeletal structures using the ultrasound volume data acquired with the Sonix Ultrasound Scanner. Typical users of this system are trained medical professionals including physicians, nurses, and technicians.
JointVue's 3D Echo is a software application that uses the raw ultrasound signals generated from an imaging ultrasound machine to visualize musculoskeletal structures in three dimensions. The 3D Echo software is pre-loaded on one of the following two ultrasound systems: 1) SonixOne, a tablet-based system; or 2) SonixTouch Q+ with linear transducer (BK Ultrasound model L14-5/38 GPS) and driveBAY™ tracking unit (Ascension Technology Corporation). There are also two electromagnetic (EM) sensors (Ascension 6DOF sensors, model 800, part #600786) included with the JointVue 3D Echo software to identify the relative location of the ultrasound probe. Finally, there is a foot switch (steute model MKF-2-MED GP25) included as an input device. The major software functions of the JointVue 3D Echo system include the following: 1) the ability to display axial, sagittal, coronal and oblique 2D images; 2) the ability to display the 3D surface of musculoskeletal structures; 3) the ability to display axial images with 3D visualization; and 4) the ability to provide contouring and US image visualization. The device is intended to be used in a clinical or hospital environment. JointVue's 3D Echo ultrasound system utilizes raw ultrasound signals to detect tissue interfaces and visualize joint anatomy in three dimensions. The system provides clinicians with three-dimensional models of the joint anatomy.
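The summary does not disclose the reconstruction pipeline, but in freehand 3D ultrasound generally, each 2D image pixel is mapped into the tracker's world frame by chaining a fixed image-to-sensor calibration transform with the per-frame EM sensor pose. A minimal sketch of that chain, with placeholder calibration and pose values (not values from the 3D Echo system):

```python
# Sketch: mapping a 2D ultrasound pixel into 3D world coordinates with
# homogeneous transforms, as in generic freehand 3D ultrasound. The
# calibration matrix and pixel scales are placeholders.
import numpy as np

def pixel_to_world(u: float, v: float, sx_mm: float, sy_mm: float,
                   T_image_to_sensor: np.ndarray,
                   T_sensor_to_world: np.ndarray) -> np.ndarray:
    """Pixel (u, v) -> 3D point (mm) in the EM tracker's world frame."""
    p_image = np.array([u * sx_mm, v * sy_mm, 0.0, 1.0])  # image plane, z = 0
    p_world = T_sensor_to_world @ T_image_to_sensor @ p_image
    return p_world[:3]

# Placeholders: identity calibration; sensor pose translated 100 mm in x.
T_cal = np.eye(4)
T_pose = np.eye(4)
T_pose[0, 3] = 100.0
print(pixel_to_world(200, 150, 0.1, 0.1, T_cal, T_pose))  # [120. 15. 0.]
```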
The provided document is a 510(k) premarket notification for the 3D Echo device by JointVue, LLC. It outlines the FDA's determination of substantial equivalence to a predicate device, but it does not contain the detailed acceptance criteria or a specific study proving the device meets those criteria in the format requested.
The document states that "Performance Testing" demonstrated equivalent precision and accuracy based on benchtop testing using a phantom. However, it does not provide the specific acceptance criteria, the detailed results of this testing, or the methodology for evaluating accuracy and precision.
Here's an attempt to answer your questions based only on the information provided, highlighting what is missing:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Precision and Accuracy (implied from predicate equivalence statement) | Equivalent precision and accuracy to the predicate device, "5D Viewer" (K161955), based upon benchtop testing using a phantom. |
| Software Verification and Validation | Demonstrated safety and efficacy. Potential hazards classified as a moderate level of concern (LOC). Specific performance metrics are not provided. |
| Mechanical and Acoustic Performance | Demonstrated safety and effectiveness with the same accuracy and precision as the predicate device (based on benchtop testing using a phantom). Specific performance metrics are not provided. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified. The document only mentions "benchtop testing using a phantom."
- Data Provenance: Not specified. The testing was described as "benchtop," implying laboratory testing, not human data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. The ground truth for benchtop phantom testing would likely be based on the known physical properties or measurements of the phantom itself, not expert interpretation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. This typically refers to medical image interpretation by multiple experts, which was not the nature of the described performance testing.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. The document explicitly states: "Clinical testing was not required to demonstrate the safety and effectiveness of the 3D Echo software." Therefore, no MRMC study was performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, in a sense. The described "benchtop testing using a phantom" and "Software verification and validation testing" would fall under standalone performance assessment. However, the exact metrics and how "accuracy and precision" were quantified are not detailed. The device itself is described as a "software application for the display and 3D visualization," implying its performance as an algorithm is what was assessed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the benchtop testing using a phantom, the ground truth would likely be the known physical dimensions, geometries, or simulated conditions of the phantom. No human-derived ground truth (like expert consensus, pathology, or outcomes) was used.
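Although no numbers are reported, accuracy and precision against a phantom have natural definitions: the bias of repeated measurements relative to the known dimension, and the spread of those measurements. A minimal sketch with invented values:

```python
# Sketch: accuracy (bias vs. a known phantom dimension) and precision
# (standard deviation across repeats). The measurement values are invented.
import numpy as np

known_length_mm = 50.0
measurements = np.array([49.8, 50.1, 50.0, 49.9, 50.2])  # repeated measurements

accuracy_bias = measurements.mean() - known_length_mm  # systematic error
precision_sd = measurements.std(ddof=1)                # repeatability

print(f"bias = {accuracy_bias:+.2f} mm, SD = {precision_sd:.2f} mm")
```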
8. The sample size for the training set
Not specified. The document does not describe the development or training of any machine learning model, nor does it specify a training set size.
9. How the ground truth for the training set was established
Not specified, as no training set or machine learning development is detailed in the provided text.
Summary of what's missing:
The document focuses on the regulatory clearance process and establishing substantial equivalence rather than providing a detailed technical report of performance studies. Key details like specific numerical acceptance criteria, the methodology of phantom testing, and quantitative results of accuracy and precision are absent from this regulatory summary.