Search Results
Found 18 results
510(k) Data Aggregation
(30 days)
LXJ
A software system used with the Microsoft Kinect v2 intended to support the physical rehabilitation of adults in the clinic and at home. The system includes rehabilitation exercises for the upper extremity, trunk, and lower extremity with audiovisual feedback & graphic movement representations for patients as well as videos of the patients performing the exercises which can be remotely accessed by the clinician. Patient assessment, exercise guidance and approval by a medical professional is required prior to use.
Not Found
This FDA 510(k) clearance letter for the Vera device does not contain the detailed information necessary to fully answer your request regarding acceptance criteria and the specifics of the study that proves the device meets those criteria. The letter primarily confirms the device's substantial equivalence to predicate devices and outlines regulatory obligations.
However, based on the provided text, I can infer some information and highlight what is explicitly missing.
Information that CAN be extracted or inferred:
- Device Name: Vera
- Intended Use: A software system used with the Microsoft Kinect v2 intended to support the physical rehabilitation of adults (upper extremity, trunk, and lower extremity) in the clinic and at home, with audiovisual feedback, graphic movement representations for patients, and remote access to patient exercise videos for clinicians. Patient assessment, exercise guidance, and approval by a medical professional are required prior to use.
- Regulatory Status: Cleared as a Class II medical device (Product Code LXJ), Regulation Number 21 CFR 890.5360 (Measuring Exercise Equipment).
- Use Type: Prescription Use.
Information that is NOT present in the provided text and is crucial for your request:
- A table of acceptance criteria and the reported device performance: This detailed information is typically found in the 510(k) summary or the full submission, not in this clearance letter.
- Sample sizes used for the test set and data provenance: Not mentioned.
- Number of experts used to establish the ground truth for the test set and their qualifications: Not mentioned.
- Adjudication method for the test set: Not mentioned.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, or its effect size: Not mentioned, and generally less common for this type of device (measuring exercise equipment).
- If a standalone performance (algorithm only without human-in-the-loop performance) was done: Not explicitly mentioned, though the device has a standalone component (the software system itself). The performance metrics for this standalone component are what's missing.
- The type of ground truth used: Not mentioned. For a rehabilitation device, this could involve motion capture systems, clinical assessments, or expert observation.
- The sample size for the training set: Not mentioned.
- How the ground truth for the training set was established: Not mentioned.
To obtain the specific information you are looking for (acceptance criteria, study details, performance metrics, ground truth methods, sample sizes), you would need to access the full 510(k) submission for K192474, which may be available through the FDA's website or by filing a Freedom of Information Act (FOIA) request. The public 510(k) summary, if available, would be the most likely place to find a high-level overview of the performance data.
(224 days)
LXJ
Not Found
Not Found
The provided text is an FDA 510(k) clearance letter for the device "Vera" by Reflexion Health, Inc. It states that the device is substantially equivalent to legally marketed predicate devices.
However, the document does not contain any information regarding acceptance criteria, device performance data, study design parameters (sample size, data provenance, expert qualifications, adjudication methods), or details about training sets or ground truth establishment.
Therefore, I cannot extract the requested information from the provided text. The document is essentially a regulatory clearance notice, not a clinical study report or a technical performance specification document.
(338 days)
LXJ
DARI Health is a computer and video system used to quantify and graphically display human movement patterns and techniques for uses such as assessment and training of limb or body motion in pre/post rehabilitation evaluation, physical therapy, and the like.
DARI Health is a "markerless," three-dimensional human motion capture and analytical software system that uses off-the-shelf video cameras, off-the-shelf computer hardware, off-theshelf motion analysis software, and proprietary DARI Health Software (consisting of DARI Connect, DARI Capture plug-in, DARI Insight Engine, and DARI Rerort Engine) or DARA quantify, and document full-body human kinematics and kinetics during patient movement. Motions are recorded in real-time, using a digital "skeleton" to identify joint center and segment data for the cervical spine, upper extremity, trunk, and lower extremity of the paint. The captured video can be viewed after recording has ended or at some later point. During movement, the DARI Health interfaces with the capture software (off-the-shelf, Captury Live) tracks all segments and joint centers as they move in all planes of more the with resulting output is sent to The DARI Insight Engine processing software-either housed it ally on the computer used for capture or in a security-compliant, cloud-based server-where thered drant can be used to calculate all kinetic data produced during the revor d more one more one Such data is then displayed in different report formats-from large, raw datasets to simple, grahital reports that track key biomechanical metrics. Healthcare providers can access patient biomechanical data remotely through a secure webportal or locally on the computer database application to monitor patient progress and view prior or current motion analyses. There is no direct contact with the patient by the DARI Health system, and no energy is delivered to the subject at any point during the usage of the system.
The DARI Health device is a "markerless," three-dimensional human motion capture and analytical software system that uses off-the-shelf video cameras, computer hardware, motion analysis software, and proprietary DARI Health Software to quantify and document full-body human kinematics and kinetics during patient movement.
Here's an analysis of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
Based on the provided text, the acceptance criteria are not explicitly stated in a defined "acceptance criteria" section with specific numerical thresholds. Instead, the document discusses the device's performance in relation to predicate devices and industry standards. Therefore, I will derive the implicit acceptance criteria from the performance claims made in the "Summary of Performance Data" section and present the reported device performance.
Acceptance Criterion (Implicitly Derived) | Reported Device Performance |
---|---|
Kinematic Accuracy: No statistically significant differences in clinically relevant joint angles compared to a predicate marker-based system (Vicon/Peak Motus). | Met: When compared to a predicate marker-based Vicon (Peak Motus) system, the DARI Health system showed no statistically significant differences in the clinically relevant joint angles being measured for investigation (p = 0.33). |
Kinetic Accuracy: No statistically significant differences in peak force, mean force, or raw force compared to industry-standard force plate output. | Met: The DARI Health system was tested against standard force plate output, and its force calculation methodology demonstrated no statistically significant differences in peak force, mean force, or raw force as compared to simultaneous measurements derived from a force plate (p > 0.05). Additionally, comparison of force-time curves showed a strong correlation between modalities (r = 0.995, r^2 = 0.989, SEM = 11.1 N). |
Repeatability (Segment Length & Joint Center Calculation): Ability to identify joint centers and segment lengths within a specified tolerance, exceeding predicate and industry standards. | Met: Repeatability of segment length and joint center calculation over successive scans of 9,120 bone segments demonstrated the ability to identify the joint centers and segment lengths within 0.811mm with 99% confidence, exhibiting only 1.02% total skeletal change between sessions with 0.002 mm of variance. This exceeds predicate repeatability associated with marker analysis and inter-test repeatability of goniometers claimed by predicate JRS v2.0. |
Software Functionality: Software functions as intended, and all results are as expected. | Met: Software validation and testing were completed for the device and functioned as intended, and all results were those expected. |
Safety: Compliance with relevant safety standards (IEC 60601-1 and IEC 60601-1-2). | Met: The DARI Health complies with IEC 60601-1 and IEC 60601-1-2. The system requires no contact with the patient and poses no electrical, chemical, mechanical, thermal, or radiation safety concerns. |
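The force-plate comparison in the table above reports a Pearson correlation, r^2, an SEM in newtons, and a t-test p-value. As a rough illustration of how such curve-agreement statistics are computed, here is a minimal Python sketch on synthetic data; the function name, the SEM formulation (standard deviation of paired differences divided by sqrt(2)), and the signals are all invented for this example, and the 510(k) summary does not describe the actual analysis used.

```python
import numpy as np
from scipy import stats

def agreement_metrics(reference, candidate):
    """Compare a candidate force-time curve against a reference curve.

    Returns Pearson r, r^2, a standard error of measurement (SEM), and
    the paired t-test p-value: the kinds of statistics cited in the
    performance summary above.
    """
    reference = np.asarray(reference, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    r, _ = stats.pearsonr(reference, candidate)
    diff = candidate - reference
    # SEM here is one common formulation (SD of paired differences / sqrt(2));
    # the 510(k) summary does not state which formulation was used.
    sem = np.std(diff, ddof=1) / np.sqrt(2)
    _, p_value = stats.ttest_rel(reference, candidate)
    return {"r": r, "r2": r ** 2, "sem": sem, "p": p_value}

# Synthetic illustration: a force-plate trace and a noisy re-estimate of it.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
force_plate = 800.0 * np.sin(np.pi * t) ** 2            # newtons
markerless = force_plate + rng.normal(0.0, 10.0, t.size)
m = agreement_metrics(force_plate, markerless)
```

With zero-mean noise, the correlation stays near 1 and the paired test detects no systematic offset, mirroring the pattern of results reported for the device.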
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Kinematic Accuracy Study: Not explicitly stated as a number of subjects or sessions. It refers to "clinically relevant joint angles being measured for investigation" in a comparison to a predicate system.
- Sample Size for Kinetic Accuracy Study: Not explicitly stated as a number of subjects or sessions. It refers to a comparison to "simultaneous measurements derived from a force plate."
- Sample Size for Repeatability Study: "Successive scans of 9,120 bone segments" were analyzed. This implies a significant amount of data, likely from multiple subjects and/or multiple scan sessions.
- Data Provenance: The studies are described as "independent, peer-reviewed studies" and "DARI Health internal studies." The country of origin of the data is not specified, but the submission is to the U.S. FDA. The retrospective or prospective nature of these studies is not explicitly mentioned.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not mention the use of experts to establish ground truth for the test set. Instead, the ground truth was established by:
- Predicate marker-based Vicon (Peak Motus) system: For kinematic accuracy.
- Industry-standard force plate output: For kinetic accuracy.
- Successive scans and internal measurements: For repeatability.
Therefore, there were no human experts establishing the ground truth as described in the prompt's question for this type of objective measurement device.
4. Adjudication Method for the Test Set
Not applicable. As the ground truth was established by objective measurement systems (Vicon, force plate, internal measurements), there was no human adjudication process involved for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not conducted. The studies described are focused on the technical accuracy and repeatability of the device's measurements compared to established objective methods, not on how human readers (clinicians) improve with AI assistance. The DARI Health system is a measurement and analysis tool, not one that directly assists a human reader in making a diagnostic interpretation in the same way an AI for medical imaging might.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
Yes, the performance studies described are essentially standalone (algorithm only) performance assessments. The DARI Health system's ability to accurately capture, quantify, and analyze human movement (kinematics and kinetics) and its repeatability are evaluated as a standalone system against objective measurement standards. It does not involve human-in-the-loop performance in the context of these validation studies.
7. The Type of Ground Truth Used
- Kinematic Accuracy: Reference data from a predicate marker-based Vicon (Peak Motus) system. This is an instrument-based ground truth.
- Kinetic Accuracy: Reference data from industry-standard force plate output. This is also an instrument-based ground truth.
- Repeatability: Internal measurements of successive scans of bone segments, indicating the device's consistency over time.
8. The Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set. The descriptions focus on performance validation rather than the details of algorithm development or training.
9. How the Ground Truth for the Training Set was Established
The document does not provide any information on how the ground truth for the training set (if any specific training set was used for machine learning models, which is implied by a "markerless" system) was established. The "Summary of Performance Data" focuses solely on the validation/test datasets against established objective measures.
(433 days)
LXJ
The Qualisys Clinical System is a camera and computer system used to quantify and graphically display human movement patterns for adults and children. It is intended to be used for movement analysis in the fields of gait analysis, rehabilitation, sports medicine and ergonomics.
The Qualisys Clinical System (QCS) is a camera and computer system used to quantify and graphically display human movement patterns for adults and children. It is intended to be used for movement analysis in the fields of gait analysis, rehabilitation, sports medicine and ergonomics.
In order to achieve this, the QCS utilizes data captured from image sensors (motion capture cameras) to triangulate the 3D position of one or several reflective markers attached to the patient. This is done by two or more cameras calibrated to provide overlapping fields of view from multiple angles. The QCS produces data with 3 degrees of freedom for each marker, i.e. positional information. Rotational information (e.g. of a limb) may be retrieved from the relative orientation of three or more markers.
The tracking cameras may record images and identify the position of the markers with a high spatial and temporal resolution to generate high performance motion capture of the markers.
The markers are usually attached directly to the skin. The markers are specially designed to reflect the IR-light flashed from the tracking cameras.
In addition, subsystems may be added to record data in synchronization with the tracking cameras to facilitate the analysis, or for pure documentation purposes. The additional subsystems may be any one of, or a combination of: 1) Ordinary video, 2) EMG data and 3) Force plate data.
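Triangulating a marker's 3D position from two calibrated cameras with overlapping fields of view, as described above, is commonly done with the linear (direct linear transform, DLT) method: each view contributes two linear constraints on the homogeneous 3D point, and the least-squares solution is the right singular vector for the smallest singular value. The sketch below is a generic illustration of that technique, not Qualisys's implementation; the camera matrices and marker position are invented.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker from two calibrated views.

    P1, P2 are 3x4 camera projection matrices; uv1, uv2 are the pixel
    coordinates of the same reflective marker in each image.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],   # u1 * (row 3) - (row 1)
        uv1[1] * P1[2] - P1[1],   # v1 * (row 3) - (row 2)
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # Least-squares null vector of A = homogeneous 3D point.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy pinhole cameras with a 0.5 m baseline (invented parameters).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [0.0]])])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

marker = np.array([0.2, 0.1, 3.0])   # true 3D position (metres)
recovered = triangulate(P1, P2, project(P1, marker), project(P2, marker))
```

With noise-free projections the recovered point matches the true marker position exactly; in practice, more than two cameras and noisy detections make the least-squares formulation essential.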
Here's an analysis of the acceptance criteria and study information based on the provided document:
Acceptance Criteria and Device Performance
The document does not explicitly present a table of acceptance criteria with specific performance metrics (e.g., accuracy, precision) that the Qualisys Clinical System (QCS) must meet for regulatory purposes. Instead, the acceptance criteria are implicitly tied to demonstrating substantial equivalence to a predicate device (SMART-D [K131660]) through technical and functional comparisons, along with performance data from non-clinical tests.
The reported device performance is described generally through the successful completion of software and hardware tests, and demonstration of metrological accuracy and reliability.
Table 1: Implicit Acceptance Criteria and Reported Device Performance
Acceptance Criterion (Implicit) | Reported Device Performance and Discussion |
---|---|
Pertaining to Substantial Equivalence: Technical Characteristics | |
Identical CFR Section, Product Code, Regulation Number, and Classification Name to Predicate | Met: All are identical to the predicate (Table 5-1, items 1-4). |
Equivalent Intended Use to Predicate | Met: The QCS and predicate perform the same tasks for motion analysis (Table 5-1, item 5). |
Equivalent System Components to Predicate | Met: Both consist of Workstation + Software, Tracking Cameras (Table 5-1, item 6). |
Software Tools enable similar functions (data acquisition, 3D marker tracking, 3D kinematics, reporting) | Met with discussion: QCS software modules (QTM, Qualisys Report, Clinical Gait PAF Module) combined with third-party software (Visual3D) perform similar functions to the predicate's software (SMARTcapture, SMARTtracker, SMARTanalyzer, SMARTclinic) (Table 5-1, item 7; Table 5-3). The differences in specific software names and whether analysis protocols are customizable are discussed as not impacting safety or effectiveness. |
Equivalent Contraindications, Target Population, Light Emission, Modulation/External Sync, and Lens | Met: All are identical to the predicate (Table 5-1, items 8, 9, 12, 15, 16). |
Frame rate appropriate for clinical gait analysis | Met with discussion: QCS operates at 100 fps. This is discussed as being appropriate and in line with scientific literature recommendations for clinical gait analysis, even though the predicate operates at 200 fps (Table 5-1, item 10; Table 5-3). |
Acquisition Frequencies acceptable | Met with discussion: QCS records analog data faster (500-2000 Hz) than the predicate (250 Hz, up to 500 Hz). This improves time-resolution, not reducing safety or effectiveness (Table 5-1, item 11; Table 5-3). |
Light wavelength comparable to predicate | Met with discussion: QCS (850 nm) and predicate (880 nm) have very similar IR wavelengths, behaving the same in terms of attenuation and scatter. Optimized for QCS cameras. No influence on intended use or safety/effectiveness (Table 5-1, item 13; Table 5-3). |
Output Angle (FOV) sufficient for marker detection | Met with discussion: QCS has a wider FOV (61-42 degrees) than the predicate (40 degrees), which improves marker detectability, not reducing safety or effectiveness (Table 5-1, item 14; Table 5-3). |
Number of Cameras supported | Met with discussion: QCS supports up to 100 cameras, more than the predicate's 16. This does not influence intended use or reduce safety/effectiveness (Table 5-1, item 17; Table 5-3). |
Camera Resolution sufficient for precision | Met with discussion: QCS has higher resolutions (1216x800 - 4096x3072) than the predicate (800x600), improving precision of marker detection. Not influencing intended use or safety/effectiveness (Table 5-1, item 18; Table 5-3). |
Reporting capability equivalent | Met: Both provide reporting functionalities (Table 5-1, item 21). |
Physical characteristics (Weight, Dimension, Max input power) not impacting safety or effectiveness | Met with discussion: Differences in weight, dimension, and max input power are discussed as not influencing intended use or reducing safety/effectiveness (Table 5-1, items 22, 23, 25; Table 5-3). |
Equivalent Power Supply and Electrical Safety/EMC standards compliance | Met: Equivalent power supply and compliance with relevant electrical safety and EMC standards (Table 5-1, items 26, 27, 28). |
Pertaining to Substantial Equivalence: Functional Characteristics | |
Real-time visualization of acquired data | Met: Both support real-time visualization (Table 5-2, item 1; Table 5-1, item 24). |
3D marker trajectory and kinematics reconstruction | Met: Both perform 3D marker trajectory evaluation and mapping to anatomical models using validated protocols (Table 5-2, item 2). |
Management of acquired data (analysis protocols) | Met with discussion: QCS uses predefined analysis protocols, while the predicate allows user customization. This is discussed as not influencing intended use or reducing safety or effectiveness, as the QCS protocol is a crucial, verified part of the software (Table 5-2, item 3; Table 5-4). |
Report drafting and data visualization | Met: Both offer visualization tools and reporting (Table 5-2, item 4). |
Performance Data | |
Software meets specified requirements | Reported: "All software tests passed successfully, demonstrating that the QCS meets all specified requirements." (Page 10) |
Hardware complies with EMC and electrical safety standards | Reported: Hardware tested successfully for EMC (IEC-60601-1-2) and electrical safety (ANSI AAMI ES60601-1) (Page 10). |
Metrological accuracy and reliability of 3D position of reflective markers | Reported: Verified and validated. Calibration devices traceable to a national meter reference (Page 10). |
Clinical relevance of gait assessments supported | Reported: A Clinical Evaluation report has been provided (Page 10). |
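The metrological-accuracy row above notes that calibration devices are traceable to a national meter reference. One common verification of this kind compares the reconstructed length of a calibration wand (two markers a known distance apart) against its nominal value. The sketch below is a generic illustration with invented coordinates, not Qualisys's actual procedure.

```python
import numpy as np

def wand_length_error(tip_a, tip_b, nominal_mm):
    """Error (mm) between a reconstructed calibration-wand length and its
    nominal, traceable value."""
    measured = np.linalg.norm(np.asarray(tip_a) - np.asarray(tip_b))
    return measured - nominal_mm

# Reconstructed 3D marker positions (mm) for the two wand tips; values
# are invented for illustration.
tip_a = np.array([102.1, -40.3, 1500.2])
tip_b = np.array([701.8, -38.9, 1499.0])
err = wand_length_error(tip_a, tip_b, nominal_mm=600.0)
```

A validation protocol would repeat this across the capture volume and require the error to stay within a stated tolerance.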
Study Information
The document describes non-clinical tests rather than a formal clinical study with human subjects for novel device performance claims. The primary "study" is a comparative analysis demonstrating substantial equivalence to a predicate device.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not explicitly stated as a number of patients/cases. The "test set" primarily refers to software and hardware testing.
- Data Provenance: Not specified for any dataset related to the performance data. It is focused on demonstrating substantial equivalence through technical and functional comparisons and non-clinical testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not applicable. The ground truth for the non-clinical tests and metrological accuracy would be established by engineering standards, calibrated instruments, and scientific literature rather than expert human interpretation of medical data.
- Qualifications of Experts: N/A.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable. This concept typically applies to studies where human readers are interpreting a test set and their discrepancies need to be resolved. The document focuses on technical verification and validation against specified requirements and predicate device characteristics.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
- MRMC Study: No, an MRMC comparative effectiveness study was not done. The device is a motion capture system, not an AI-assisted diagnostic tool that aids human readers in interpretation.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Yes, the described "Non-clinical tests performed on the QCS include software and hardware tests" and "Metrological accuracy and reliability of the raw output of the system (three-dimensional position of reflective markers) has been verified and validated" indicate standalone performance evaluations of the system's components and overall output. The system quantifies and displays movement patterns.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth:
- For metrological accuracy, the ground truth is established by physical standards (e.g., "dimensions of the calibration devices supplied with the system are traceable to a national meter reference").
- For software and hardware functioning, the ground truth is against predefined technical specifications and requirements.
- For the relevance of gait assessments, it refers to a "Clinical Evaluation report," implying a review against established clinical understanding and literature on gait analysis.
8. The sample size for the training set
- Training Set Sample Size: Not applicable. The document describes a medical device for motion capture, not a machine learning or artificial intelligence system that requires a "training set" in the conventional sense.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable, as there is no mention of a training set for a machine learning model.
(142 days)
LXJ
MindMotion™ GO is a medical device software used in combination with the Microsoft Kinect v2 and Leap Motion controller that supports the physical rehabilitation of adults in the clinic and at home. The software includes rehabilitation exercises for the upper extremity, trunk, and lower extremity; audio-visual feedback and graphic movement representations for patients; and patient performance metrics for the medical professional. Patient exercise guidance, and approval by the medical professional is required prior to use.
Not Found
This document is an FDA 510(k) clearance letter for the MindMotion™ GO device. It states the device's indications for use but does not contain information about acceptance criteria, device performance, study details (sample sizes, provenance, ground truth, experts, adjudication), or comparative effectiveness studies. Therefore, I cannot fulfill your request for that specific information based on the provided text.
The closest information provided is the "Indications for Use":
Device Name: MindMotion™ GO
Indications for Use (Describe): MindMotion™ GO is a medical device software used in combination with the Microsoft Kinect v2 and Leap Motion controller that supports the physical rehabilitation of adults in the clinic and at home. The software includes rehabilitation exercises for the upper extremity, trunk, and lower extremity; audio-visual feedback and graphic movement representations for patients; and patient performance metrics for the medical professional. Patient exercise guidance, and approval by the medical professional is required prior to use.
(231 days)
LXJ
The UINCARE software system used with the Microsoft Kinect is intended to be used to support the physical rehabilitation of adults in the clinic or at home. The system includes rehabilitation exercises for the lower and upper extremities with audio-visual feedback & graphic movement representations for patients as well as remotely accessible patient performance metrics for the medical professional. Patient assessment, exercise guidance and approval by the medical professional is required prior to use.
The UINCARE HOME software system used with the RGB-D Camera is intended to be used to support the physical rehabilitation of adults in the clinic or at home. The system includes rehabilitation exercises for the lower and upper extremities with audio-visual feedback & graphic movement representations for patients as well as remotely accessible patient performance metrics for the medical professional. Patient assessment, exercise guidance and approval by the medical professional is required prior to use.
The UINCARE HOME software utilizes three off-the-shelf accessories (a motion-sensing camera, a monitor, and a computer) to provide physical therapists and physicians with a software system to prescribe customized physical therapy plans for patients.
UINCARE HOME software provides medical professionals the tools to record and track movements in three-dimensional space in order to identify motion and count repetitions during patient participation in a prescribed exercise. Data derived from motion tracking, as well as the patient performance metrics, can be reviewed by medical professionals to assess exercise movements and adherence to the prescribed exercise. The UINCARE HOME is a software tool that helps extend the medical professional's expertise.
The UINCARE HOME software has three separate applications.
- (1) UINCARE Client: Patient Application. This application prompts and monitors patients in the performance of a therapy prescribed by their Clinician, monitors and reports exercise data to the Clinician for analysis, and permits a Patient to communicate with that Clinician.
- (2) UINCARE Manager: Clinician Application. This application allows a Clinician to define and update a patient's personal data, a patient's therapy prescription, to monitor a patient's performance of that therapy, and permits a Clinician to communicate with a patient.
- (3) UINCARE Server: Cloud-based Server. It provides communication between the UINCARE Client and UINCARE Manager over the internet, and it delivers and stores the patient information, the exercise program information, and the results of exercises performed by the patient.
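The patient application described above identifies motion and counts repetitions from tracked movement. As a hedged sketch of one way repetition counting can work, the following Python applies a threshold-with-hysteresis state machine to a joint-angle time series; the function, thresholds, and synthetic signal are illustrative assumptions, not UINCARE's actual algorithm.

```python
import numpy as np

def count_repetitions(angle, threshold, hysteresis=10.0):
    """Count exercise repetitions in a joint-angle time series (degrees).

    A repetition is counted each time the angle rises above `threshold`
    and later falls back below `threshold - hysteresis`; the hysteresis
    band keeps sensor jitter near the threshold from double-counting.
    """
    reps = 0
    above = False
    for a in angle:
        if not above and a > threshold:
            above = True
        elif above and a < threshold - hysteresis:
            above = False
            reps += 1
    return reps

# Synthetic elbow-flexion trace: four 0.5 Hz repetitions (0-90 degrees)
# over 8 seconds, with mild measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 8.0, 800)
angle = 45.0 + 45.0 * np.sin(2.0 * np.pi * 0.5 * t)
angle += rng.normal(0.0, 1.5, t.size)
n = count_repetitions(angle, threshold=70.0)
```

Real systems would additionally gate on exercise form (e.g., checking other joints stay within prescribed ranges) before crediting a repetition.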
The provided text is a 510(k) Summary for the UINCARE HOME device. It describes the device, its intended use, and its substantial equivalence to predicate devices, but it does not contain the detailed information requested regarding acceptance criteria and a specific study proving the device meets those criteria.
Here's a breakdown of why the requested information cannot be fully provided from the given text:
- No specific acceptance criteria are listed with numerical targets. The document mentions "Performance tests were conducted to verify that the proposed device met all design specifications. The test results demonstrated that the proposed device complies with the allowable deviation." However, these "design specifications" and "allowable deviation" are not quantified or detailed.
- No specific study is described with performance results against acceptance criteria. The "Performance Test Range of Motion Assessment Test" is mentioned, but its methodology, results, and specific metrics are absent.
Therefore, many of the requested fields cannot be filled directly from this document. However, I can extract what is available.
Here's the information that can be gleaned about the performance testing for the UINCARE HOME:
-
A table of acceptance criteria and the reported device performance:
This information is not explicitly provided in the document. The document states that "Performance tests were conducted to verify that the proposed device met all design specifications. The test results demonstrated that the proposed device complies with the allowable deviation." However, the specific design specifications, acceptance criteria (e.g., in terms of accuracy, precision, or other performance metrics), and the reported numerical performance data are not included. -
Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
This information is not provided. The document mentions "Performance Test Range of Motion Assessment Test" but does not specify the sample size, type of data (e.g., patient data, simulated data), or its provenance. -
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
This information is not provided. The nature of the "Range of Motion Assessment Test" suggests a comparison to a known standard or expert assessment, but no details on ground truth establishment or experts are given. -
Adjudication method (e.g. 2+1, 3+1, none) for the test set:
This information is not provided. -
If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
No MRMC study was conducted. The device is a software system for physical rehabilitation, not an AI for image interpretation or diagnosis. It assists patients with exercises and provides performance metrics to medical professionals. The document states: "No clinical studies were considered necessary and performed." Therefore, there is no discussion of human readers improving with or without AI assistance. -
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
The "Performance Test Range of Motion Assessment Test" would likely be a standalone assessment of the software's ability to track motion and assess range of motion. However, no specific performance metrics or results are detailed. -
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
The type of ground truth for the "Range of Motion Assessment Test" is not specified. It can be inferred that it involved comparing the device's measurements to a known or established standard for range of motion, but the exact method (e.g., expert measurement, gold-standard sensor) is not described.
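The document does not say how the device derives range-of-motion values, but with Kinect-style skeletal tracking such a measurement typically reduces to the angle at a joint formed by three tracked 3D joint positions (e.g., shoulder-elbow-wrist for elbow flexion), which can then be compared against a goniometer or gold-standard sensor. The sketch below is purely illustrative of that geometry (NumPy, positions assumed to be in meters); it is not Vera's actual algorithm.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for elbow flexion/extension."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# Fully extended arm along one axis -> approximately 180 degrees
print(joint_angle([0, 0, 0], [0.3, 0, 0], [0.6, 0, 0]))   # ~180.0
# Right-angle elbow flexion -> approximately 90 degrees
print(joint_angle([0, 0, 0], [0.3, 0, 0], [0.3, 0.25, 0]))  # ~90.0
```

A ground-truth comparison would then amount to checking that angles computed this way fall within an allowable deviation of reference measurements, which is the kind of criterion the quoted "allowable deviation" language suggests without specifying.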
The sample size for the training set:
This information is not provided. The document mentions "Software Validation" and that the software "was designed and developed according to a software development process and was verified and validated," but it does not detail any training sets, as the device is described as a software system using an off-the-shelf camera, not necessarily a machine learning system that requires a distinct training set in the typical sense.
How the ground truth for the training set was established:
This information is not provided, as details about a training set are absent.
(206 days)
LXJ
MindMotionPRO is a 3D imaging and display system that interactively displays and tracks upper-extremity rehabilitation exercises for adult patients using a combination of 3D virtual environments and avatars for visual feedback. These rehabilitation exercises are intended to be conducted in a clinical environment and supervised by a medical professional trained in the proper utilization of the MindMotionPRO.
Based on the provided text, there is no information about acceptance criteria or a study that proves the device meets any such criteria. The document is a 510(k) clearance letter for the MindMotionPRO device, which primarily concerns the FDA's determination of substantial equivalence to a predicate device.
The information you asked for, such as:
- A table of acceptance criteria and reported device performance
- Sample size and data provenance for a test set
- Number and qualifications of experts for ground truth
- Adjudication method
- MRMC comparative effectiveness study results
- Standalone performance
- Type of ground truth used
- Sample size and ground truth establishment for the training set
is not present in this document. This letter focuses on regulatory clearance and does not detail the technical validation studies of the device.
(102 days)
LXJ
A software system used with the Microsoft Kinect intended to support repetitive task practice for rehabilitation of adults under supervision of a medical professional in a clinical or home setting. The system includes simulated activities of daily living (ADLs) for the upper extremity with audio-visual feedback & graphic movement representations for patients as well as patient performance metrics for the medical professional. Patient assessment, exercise guidance, and approval by the medical professional is required prior to use.
The VOTA software system comprises a patient-facing VOTA application and a provider-facing Provider Dashboard. The VOTA patient-facing application supports repetitive task practice exercises for the upper extremity that are consistent with the Standard of Care for physical rehabilitation of adults. The software runs on a personal computer under the Windows 8.1 operating system (or later) and uses a Microsoft Xbox One Kinect Sensor (hereafter referred to as the Kinect Sensor) to track patient arm movements. These arm movements are translated into equivalent movements of a graphical avatar that represents the patient in a virtual environment. The patient is thus able to practice activities of daily living (ADLs) that involve meaningful tasks and evoke functional movements with graduated levels of difficulty. The activities are organized into a virtual "Road to Recovery" that traverses a series of four islands, each organized around a central theme. There is no physical contact between the patient and the device during exercises, and thus no energy is directed to the patient. Assessment by a medical professional, and selection of exercises and settings, is required prior to use.
The provider-facing VOTA Provider Dashboard application enables the medical professional to view patient performance metrics and participation history using data produced by the VOTA patient-facing application. The application runs on the same personal computer and operating system as the patient-facing application.
All hardware associated with VOTA consists of commercial off-the-shelf consumer hardware items. The VOTA system ships with the following:
- Microsoft Xbox One Kinect Sensor and Kinect power supply;
- Microsoft Xbox Kinect Adapter for Xbox One;
- Kinect TV Mount for Xbox One;
- Personal computer (preloaded with VOTA software) and computer power supply;
- Wireless keyboard;
- HDMI cable;
- Getting Started Guide; and
- Third-party Labeling Package
The provided text describes the 510(k) premarket notification for the Virtual Occupational Therapy Application (VOTA). However, it contains neither a specific table of acceptance criteria nor a detailed study proving that the device meets specific acceptance criteria in the way typically seen for a new AI/ML device submission with quantifiable performance metrics (e.g., sensitivity, specificity, accuracy).
The document focuses on demonstrating substantial equivalence to a predicate device (Jintronix Rehabilitation System (JRS)) by comparing intended use, technological characteristics, and safety characteristics, rather than establishing quantifiable performance acceptance criteria for VOTA itself. The clinical testing described is primarily to show effectiveness for rehabilitation, not to meet pre-defined, quantitative performance metrics for a diagnostic or assistive AI system.
Therefore, I will extract and synthesize the information available in the document regarding the device's performance, the type of testing conducted, and the evidence provided to support its safety and effectiveness relative to its intended use and predicate device. I will then explain why some requested information (like specific quantitative acceptance criteria and AI-specific study details) is not present in this type of submission.
Here's the closest representation of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not provide a formal table of quantitative acceptance criteria with corresponding performance metrics like sensitivity, specificity, or accuracy, as would be typical for an AI/ML diagnostic or predictive device. Instead, the "acceptance criteria" are implied through the demonstration of substantial equivalence to a predicate device and clinical usability/effectiveness for its intended rehabilitative purpose.
The "performance" is primarily assessed in terms of clinical effectiveness for rehabilitation and safety.
| Implied "Acceptance Criteria" Category | Description / Reported Performance |
|---|---|
| Functional Gain / Clinical Effectiveness | Acceptance Implied by: Demonstration of clinically significant improvement in upper extremity (UE) motor performance. Reported Performance: Stroke patients (n=15) using VOTA for ~1 hour, 3 times/week, over 8 weeks (24 total sessions) achieved an average Fugl-Meyer UE (FMUE) improvement of 6 points, measured pre- and post-intervention using the FMUE, a widely recognized and clinically relevant measure. |
| Safety | Acceptance Implied by: Absence of adverse events, compliance with safety standards, and no unique safety concerns compared to the predicate. Reported Performance: No adverse incidents or injuries were reported over the entire period of actual VOTA use by stroke patients in the clinical testing, spanning 240 total sessions of approximately 1 hour each. The device also complies with consumer electrical safety standards (e.g., UL) and the laser Class 1 standard (IEC 60825-1:2007) for the Kinect sensor. The risk analysis (ISO 14971) indicated a "Moderate Level of Concern" due to a small, non-zero risk of minor injury from overexertion if incorrectly used, which is mitigated by medical professional supervision as stipulated in the Indications for Use. |
| Usability | Acceptance Implied by: Assessment using a widely accepted instrument and systematic comparison to Standard of Care by licensed therapists. Reported Performance: Clinical testing included "assessment of usability using a widely-accepted instrument" and "systematic comparison of VOTA to Standard of Care by licensed therapists." (Specific scores or detailed results are not provided in this summary.) |
| Accuracy of Tracking | Acceptance Implied by: Sufficiency of Kinect-based tracking for the intended application and established literature. Reported Performance: Clinical testing "demonstrated that VOTA's Kinect-based upper extremity tracking produces valid results for the intended application." The Kinect-based tracking solution was found to be "sufficient, both to permit patients to successfully perform virtual ADL exercises and to support derivation of speed-based motor performance metrics." References were provided for existing literature demonstrating the accuracy of Kinect-based upper extremity tracking. |
| Functional Equivalence | Acceptance Implied by: Demonstration that core functionality aligns with the predicate and supports the Indications for Use. Reported Performance: Bench testing validated "the core functionality of the software system" and established "substantial equivalency to the Predicate." Traceability was provided between the Indications for Use, system-level requirements, test plans, and documented test results showing that success criteria are met. |
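The tracking-accuracy claim above refers to "derivation of speed-based motor performance metrics" from Kinect joint tracking. The submission does not describe the computation, but as an illustration only, a mean-speed metric over a tracked joint's per-frame 3D positions could be sketched as follows (the `mean_speed` name is an assumption; positions in meters, fixed frame rate assumed):

```python
import numpy as np

def mean_speed(positions, fps=30.0):
    """Mean speed (m/s) of a tracked joint, given per-frame 3D
    positions sampled at a fixed frame rate (default: Kinect's 30 fps)."""
    p = np.asarray(positions, float)
    # Per-frame displacement magnitudes between consecutive samples (m).
    step = np.linalg.norm(np.diff(p, axis=0), axis=1)
    # Total path length divided by elapsed time (n_steps / fps seconds).
    return float(step.sum() * fps / len(step))

# A hand moving 0.01 m per frame at 30 fps -> about 0.3 m/s
track = [[0.01 * i, 0.0, 0.0] for i in range(31)]
print(mean_speed(track))  # ~0.3
```

A real metric would likely also segment the recording by exercise and filter sensor noise before integrating; none of those details are given in the document.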
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: 15 stroke survivors with upper extremity impairment participated in the clinical testing.
- Data Provenance: The clinical testing was conducted by the University of Virginia (UVa) Department of Physical Medicine and Rehabilitation and the UVa HealthSouth Rehabilitation Hospital under the approval and governance of the UVa Institutional Review Board for Human Subject Research (IRB-HSR). This indicates prospective data collection from a specific clinical setting in the USA.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not explicitly stated as a distinct "ground truth" expert panel in the document. The clinical study involved:
- Licensed occupational therapists who supervised the sessions.
- Experienced therapists who assessed safety (over 200 hours of actual patient contact time using the VOTA system).
- Qualifications of Experts: Licensed occupational therapists; experienced therapists (implied clinical background). The Fugl-Meyer UE assessment (FMUE) is a gold-standard, clinician-administered test, meaning the scores collected by the trained therapists serve as the "ground truth" for motor performance.
4. Adjudication method for the test set
- The document does not describe a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth for the test set. For the FMUE assessment, it is a standardized clinical measure typically administered by a single trained therapist for each assessment. Inter-rater reliability (if multiple therapists assessed the same patient) or a consensus process is not mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study, as typically understood for evaluating AI assistance for human readers/clinicians, was not performed. This device is a direct patient-facing rehabilitation tool with a clinician supervising, not a diagnostic AI system assisting human interpretation of images or other data. The study was a clinical trial evaluating the therapeutic effect of the device on patients.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- This device is not a standalone diagnostic algorithm. It is a patient-facing application that requires human-in-the-loop (medical professional supervision) as stated in its Indications for Use: "under supervision of a medical professional" and "Patient assessment, exercise guidance, and approval by the medical professional is required prior to use."
- The "standalone" performance closest to what might be considered is the accuracy of the Kinect-based tracking (which is an algorithm within the system). The document states this tracking "produces valid results for the intended application" and was "sufficient" for performing exercises and deriving metrics, citing clinical testing and existing literature. This implies an internal validation of the tracking component, but not as a separately defined "standalone" study in the context of an AI-only performance claim.
7. The type of ground truth used
- The primary "ground truth" for evaluating the device's effectiveness was clinical outcomes data – specifically, pre- and post-intervention scores from the Fugl-Meyer UE (FMUE) assessment. This is a clinician-administered, standardized functional outcome measure.
- For safety, the "ground truth" was observation of adverse events/injuries by supervising therapists.
- For tracking accuracy, the ground truth was implied by the ability of patients to successfully perform virtual activities and the feasibility of deriving motor performance metrics, supported by existing literature on Kinect accuracy.
8. The sample size for the training set
- The document does not specify a sample size for a training set for the VOTA software. This type of submission (for a device like VOTA based on existing technology like Kinect and established rehabilitation principles) is focused on demonstrating substantial equivalence and clinical effectiveness, not on detailing the dataset used to train a novel AI/ML algorithm from scratch. While VOTA is software, it's not described as a deep learning or AI model requiring a large training dataset in the typical sense of current AI medical devices. It utilizes an off-the-shelf sensor (Kinect) whose core tracking algorithms were developed by Microsoft.
9. How the ground truth for the training set was established
- Since a "training set" for a novel AI/ML algorithm is not described, the method for establishing its ground truth is also not applicable/not provided in this document. The "ground truth" relevant to VOTA's performance is established in its clinical test set, as described in point 7.
(205 days)
LXJ
A software system used with the Microsoft Kinect intended to support the physical rehabilitation of adults in the clinic/at home. The system includes rehabilitation exercises for the lower and upper extremities with audio-visual feedback & graphic movement representations for patients as well as remotely accessible patient performance metrics for the medical professional. Patient assessment, exercise guidance and approval by the medical professional is required prior to use.
The YuGO device, developed by Biogaming Ltd. is a virtual reality platform used with the Microsoft Kinect motion sensing technology, which provides exercise programs for the body's lower and upper extremities. It allows physical therapists to define professional, supervised, and personalized exercise programs via video games or a virtual trainer.
The system was designed to serve a number of purposes:
- Allowing the treating clinician direct access to the training program performed at the patient's home, by determining the allowed movements, extent, and difficulty levels of the training protocol.
- Encouraging physical therapy home training through the use of camera sensors, visualizing and promoting body movements using a game displayed on screen.
- Collecting movement- and performance-related data from the home user, analyzing it, and instantly preparing a report on a designated website, allowing treating clinicians (the patient's Physical Therapist or a Physician) access to a "performance report" in order to monitor, adjust, and otherwise modify the training protocol to suit the needs of the patient in the rehabilitation process.
The system includes three (3) main functions: the Clinician Dashboard, the Patient Interface, and the Reporting System. Using the Microsoft Kinect for Windows to track motion, the YuGo Patient Interface records performance metrics and provides them to qualified medical professionals via the Clinician Dashboard and the Reporting System. Medical professionals can monitor patients' performance and assign or modify rehabilitation exercises in the Patient Interface through the Clinician Dashboard, allowing patients to perform their prescribed rehabilitation program even from the comfort of their home.
The provided text describes the YuGo system, a software device intended to support physical rehabilitation. However, the text does not include detailed acceptance criteria or a study describing the device's performance against those criteria.
The section "Performance Testing" on page 5-6 briefly mentions that "Software validation testing was conducted to verify that the device performs according to its specifications as described in the Software Requirements Specifications (SRS). The Software Test Description (STD) for the YuGo system presents the methodology for the validation and describes the test cases along with their acceptance criteria and the detailed test procedure. The STD also includes the test log (including individual grade of Pass/Fail)."
This statement indicates that acceptance criteria and performance data exist in the SRS and STD documents, but these documents are not provided in the given text. Therefore, I cannot extract the specific information requested in your prompt.
To answer your questions, I would need access to the Software Requirements Specifications (SRS) and the Software Test Description (STD) for the YuGo system.
(249 days)
LXJ
A software system used with the Microsoft Kinect v2 intended to support the physical rehabilitation of adults in the clinic and at home. The system includes rehabilitation exercises for the lower extremity with audio-visual feedback & graphic movement representations for patients as well as remotely accessible patient performance video. Patient assessment, exercise guidance and approval by a medical professional is required prior to use.
The Vera™ software utilizes three off-the-shelf accessories (a motion-sensing camera, a monitor, and a computer) to provide physical therapists and physicians with a software system for prescribing customized physical therapy plans for patients. Vera™ software provides medical professionals the tools to record video and track movements in three-dimensional space in order to identify motion and count repetitions during patient participation in a prescribed exercise. Data derived from motion tracking, as well as the recorded video, can be reviewed by medical professionals to assess exercise movements and adherence to the prescribed exercise; Vera™ is a software tool that helps extend the medical professional's expertise. The Vera™ software has three separate applications: the Patient Application, the Clinician Application, and the Server Application.
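The submission does not describe how Vera "identifies motion and counts repetitions." One common approach for camera-tracked exercise is hysteresis thresholding over a joint-angle time series: a repetition is counted only after the angle rises past an upper threshold and then returns below a lower one, so jitter around a single threshold is not double-counted. The sketch below is a hypothetical illustration (the `count_reps` name and the 30°/120° thresholds are assumptions, not taken from the submission):

```python
def count_reps(angles, low=30.0, high=120.0):
    """Count exercise repetitions from a joint-angle time series (degrees)
    using two thresholds (hysteresis): one rep = rising past `high`,
    then returning below `low`."""
    reps, raised = 0, False
    for a in angles:
        if not raised and a >= high:
            raised = True       # top of the movement reached
        elif raised and a <= low:
            raised = False      # returned to rest: one full rep
            reps += 1
    return reps

# Three simulated arm raises, each rising above 120 and returning below 30
series = [10, 50, 125, 130, 60, 20] * 3
print(count_reps(series))  # 3
```

The two-threshold design is why a partial movement that peaks at, say, 100° would correctly not be counted as a repetition under these assumed settings.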
Here's an analysis of the acceptance criteria and study detailed in the provided document for the Vera device:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance (Vera) |
|---|---|
| Correct Description of Exercises | 97.0% (223 out of 230 exercises) |
| Exercises recognized as typical for rehabilitation | 96.7% of therapists agreed |
| Therapists would recommend exercises to appropriate patients | 96.7% of therapists would recommend |
| Safety based on clinician review | Clinicians identified "universal risks" inherent in rehabilitation, not specific to Vera. No risks inherent to the device itself were explicitly identified, only those related to patient assessment and auxiliary equipment. |
| Accuracy of Vera in data processing | Validated by clinical study results "predominantly 100% exceeding the 95% acceptance criterion for all Vera activity movements." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 230 observations (10 Physical Therapists (PTs) x 23 Exercises/PT).
- Data Provenance: The study involved "healthy volunteers performing rehabilitation movements." The location of these volunteers or the study is not explicitly stated in the provided text. It is a prospective study as clinicians reviewed videos of performances.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: 10 Physical Therapists (PTs).
- Qualifications of Experts: The document explicitly states "Clinicians reviewed a series of video recordings." In a later section, it mentions "10 PTs," confirming the experts were Physical Therapists. No further details regarding their years of experience or specific sub-specialties are provided.
4. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method (such as 2+1 or 3+1). It describes that "Clinicians reviewed a series of video recordings... to verify that the movement seen in the video is recognized as physical therapy exercise and determine if they would recommend the movement to their patients." It appears each therapist made individual assessments, and the results were aggregated to report percentages of agreement. There is no mention of a process to resolve disagreements among the 10 therapists if they occurred.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance?
No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human readers' improvement with or without AI assistance was not performed. The study described focuses on clinician recognition and acceptance of exercises delivered by the Vera system, not on improving human reader performance.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done?
The study described evaluates the "clinical relevance of exercises delivered by the Vera System" and whether movements "executed by an individual interacting with the system" are recognized as exercises by clinicians. This is not a standalone algorithm performance study. While the Vera system tracks movements and provides data, the core of this particular study is about clinician perception of the exercises facilitated by the system, rather than the raw algorithmic accuracy in isolation. The document states a "hazard analysis of the Vera™ software also indicates low risk to the end user" and that "Vera™ has been validated for accuracy / performance effectiveness both clinically and through software testing with appropriate change management and design controls," which may allude to some standalone testing, but the details of such are not provided for the clinical study section.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for this specific study was expert consensus / expert opinion from the 10 Physical Therapists. They evaluated whether the movements performed by healthy volunteers via the Vera system were indeed recognized as physical rehabilitation exercises and if they would recommend them.
8. The sample size for the training set
The document does not provide any information regarding a training set size. This study appears to be a clinical validation of the exercises, not an evaluation of a machine learning model's training data. The Vera device itself is a "software system utilizing optical recording for rehabilitation exercises" and includes "motion sensing camera" technology (Kinect v2). While such a system internally would have trained algorithms for skeletal tracking and movement analysis, the details of that training are not part of the provided clinical performance data summary.
9. How the ground truth for the training set was established
As no training set is discussed in the context of this document's clinical performance section, the method for establishing its ground truth is not provided.