Mobile Systems Design Lab
Professor Sujit Dey
Cloud-based Mobile Health Monitoring and Guidance System
Physical therapy is crucial for rehabilitation following many types of surgery and injury, but it is often severely hampered by lack of access to therapists and poor adherence to home therapy regimens. Similarly, wellness training and ergonomics training can be crucial components of preventative medicine, but often go unused due to a lack of access to proper expertise and guidance. This project aims to develop a computer-vision-based mobile system that helps people perform physical therapy, fitness training, and ergonomics exercises accurately, while letting medical caregivers track patients' progress and compliance. Our proposed real-time monitoring and guidance system integrates expertise from seemingly disparate disciplines - computer vision, computer gaming, wireless networking, high-dimensional machine learning, and human factors - into an integrated solution that holds great promise to transform physical therapy, fitness training, and ergonomic training through a quantitative process that can be done at home or at the workplace. Fundamental advances in the core disciplines toward this integrated solution include new hand and body pose estimation and tracking algorithms that are robust to interactions between hands, rapid motion, and occlusions, as well as machine learning and avatar rendering algorithms for sensor fusion and expert-trained guidance logic, for both cloud-based and local usage. The aim is to provide avatar-based training and real-time visual guidance on mobile devices and virtual reality (VR) platforms such as the Oculus and Samsung Gear VR, enabling end users to improve the accuracy, effectiveness, and safety of therapy, fitness, and ergonomics applications.
Figure 1: Architecture of mobile training and guidance system, and data flow when system is implemented either locally on mobile device or on cloud servers (shaded).
Figure 1 shows the architecture and data flow of our proposed interactive training and guidance system, which enables demonstration rendering of exercise activities using avatars, real-time tracking of the user's performance using sensors, and real-time guidance. The system has two proposed modes of operation: (i) local mode, in which the system resides on the local device, such as a tablet or Gear VR, and expert session updates must be downloaded before a new live session; and (ii) cloud mode, in which tasks are performed in the cloud and the rendered video is streamed to the local device. Sensor data are collected through a laptop or a Raspberry Pi and transmitted over WiFi to the local device, which in cloud mode compresses and transmits the data to the cloud servers. Scalability and usability advantages are making the cloud the preferred platform for many applications; cloud rendering will enable interactive training and guidance from any device, but may pose challenges in response time and network cost, so we will examine both modes. The following demonstration video introduces a prototype of our system.
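The split between the two modes of operation can be summarized as follows. This is only an illustrative sketch of the data flow; the function names and payload formats are assumptions, not the system's actual API.

```python
import json
import zlib


def handle_frame(mode, sensor_frame):
    """Route one frame of sensor data through local or cloud mode.

    The processing steps are placeholders for the real pipeline:
    in local mode, pose estimation, guidance logic, and avatar rendering
    all run on the device; in cloud mode, the device only compresses and
    uploads the sensor data, and the rendered avatar video is streamed back.
    """
    if mode == "local":
        # Everything stays on the device (tablet / Gear VR); expert
        # session updates must already have been downloaded.
        return {"rendered_on": "device", "frame": sensor_frame}
    elif mode == "cloud":
        # Compress sensor data before transmitting it to the cloud servers.
        payload = zlib.compress(json.dumps(sensor_frame).encode())
        return {"uploaded_bytes": len(payload)}
    raise ValueError(f"unknown mode: {mode}")
```

For example, `handle_frame("cloud", {"joints": [0.1, 0.2, 0.3]})` would report only the size of the compressed upload, since rendering happens server-side in that mode.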
Motion Data Alignment and Real-Time Guidance
In the proposed system, two kinds of delay can make it difficult to correctly measure the accuracy of the user's movement relative to the avatar instructor's movement: human reaction delay (the time the user takes to follow the avatar's instructions and motion) and mobile network delay (which may delay when the cloud-rendered avatar video reaches the user's device). In particular, these delays can misalign the motion sequences of the avatar instructor and the user, making it difficult to judge whether the user is following the avatar instructor correctly. Our first work therefore focuses on aligning the motion data and providing real-time guidance to the user based on the evaluation result.
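A standard technique for aligning two sequences that differ by such timing offsets is dynamic time warping (DTW). The source does not specify which alignment algorithm the project uses, so the following is only a minimal sketch on 1-D joint-angle sequences, assuming the instructor and user trajectories have already been extracted.

```python
import numpy as np


def dtw_distance(ref, user):
    """Align two motion sequences (e.g. joint-angle trajectories over time)
    with dynamic time warping, which tolerates timing offsets such as the
    user's reaction delay to the avatar instructor."""
    n, m = len(ref), len(user)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - user[j - 1])  # per-frame pose difference
            cost[i, j] = d + min(
                cost[i - 1, j],      # instructor frame has no match yet
                cost[i, j - 1],      # user frame has no match yet
                cost[i - 1, j - 1],  # frames matched
            )
    return cost[n, m]
```

For instance, if the user repeats the first pose for one extra frame before catching up (`dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3])`), DTW warps the time axis and reports zero residual error, whereas a frame-by-frame comparison would penalize the lag.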
back to top