Mobile Systems Design Lab
Professor Sujit Dey
Towards Enabling Personalized Video Content and Services
There are two trends that have become obvious over the last two years. One is the growth in consumption of Internet video, and the other is the significant increase in smartphone ownership, which has driven the growth and use of the mobile Internet and mobile video. While the above trends have had a significant impact on industry and consumers alike, we believe there is a third trend, though still in its infancy, which will emerge and provide new opportunities and challenges in the coming years: personalized and interactive video content and services. Examples of such services that we already see today are personalized video recommendations and personalized advertisements. We believe more innovative services will follow, making use of location information and personal preferences to provide unique and relevant experiences in the future.
The broad goal of this project is to develop "enabling technologies" that will allow personalization of content based on user preferences. At the heart of our approach is automatically classifying and categorizing Internet video, so that we can perform user preference and intent analysis based on the videos viewed by users. To make the solution scalable and applicable to the billions of Internet video views by millions of Internet users, we propose to develop completely automated video classification techniques that make use of the contextual information accompanying most Internet videos, as opposed to computationally expensive video classification techniques based primarily on audio or video processing.
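As a minimal sketch of this idea, the contextual text accompanying a video (title, description, tags) can be classified without any audio or video processing. The example below uses scikit-learn with a TF-IDF keyword representation and a Naive Bayes classifier; the category names and sample metadata are hypothetical, not from the project.

```python
# Sketch: classify a video from its contextual text alone (no A/V processing).
# Assumes scikit-learn is installed; categories and texts are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Contextual text (title, description, tags) for labeled training videos.
train_texts = [
    "top 10 goals of the season highlights football",
    "learn guitar chords beginner lesson acoustic",
    "funny cats compilation kittens playing",
]
train_labels = ["Sports", "Music", "Pets"]

# TF-IDF turns each video's surrounding text into a keyword-weight vector;
# Naive Bayes then maps that vector to the most likely category.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["football match highlights this season"])[0])  # -> Sports
```

This keeps per-video classification cheap (a sparse vector product) compared with processing the video or audio stream itself.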
Figure 1: Text-based Video Webpage Categorization
Figure 1 gives an overview of the classification approach, which consists of an offline step, executed once, and an online step, executed for each video watched by each user. In the offline step, a set of representative keywords (the vocabulary) is identified, and classification models are learned that map these keywords, drawn from the accompanying information sources, to candidate video categories, to be used later by the online step. Subsequently, for every video a user watches, the online step classifies the video using the models developed offline and updates an estimate of the user's preferences.
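The offline/online split above can be sketched as follows. This is a hedged illustration, not the project's actual implementation: the keyword vocabulary, the substring-match scoring inside `classify`, and the sample video texts are all assumed for demonstration.

```python
# Sketch of the two-step pipeline: an offline keyword vocabulary, and an
# online loop that classifies each watched video and updates preferences.
from collections import Counter

def classify(contextual_text, vocabulary_model):
    """Stand-in for the trained model from the offline step:
    score each category by how many of its keywords appear."""
    text = contextual_text.lower()
    scores = Counter()
    for category, keywords in vocabulary_model.items():
        scores[category] = sum(word in text for word in keywords)
    return scores.most_common(1)[0][0]

# Offline step (executed once): representative keywords per category.
vocabulary_model = {
    "Sports": {"match", "goal", "league"},
    "Music": {"song", "album", "concert"},
}

# Online step (executed per video watched): classify, then update a running
# estimate of the user's category preferences.
user_preferences = Counter()
for video_text in ["champions league goal highlights", "new album concert tour"]:
    user_preferences[classify(video_text, vocabulary_model)] += 1

print(dict(user_preferences))  # -> {'Sports': 1, 'Music': 1}
```

The expensive work (building the vocabulary and models) happens once offline; the per-video online cost is a few keyword lookups and a counter increment, which is what makes the approach scale to many users and views.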
Existing work focuses on model development given training data for a standard set of categories (such as Music, Sports, Pets, Automobiles, etc.). From the point of view of the application that utilizes automated video categorization, a general set of categories may not be the most useful one. For example, movie recommendation requires movie genres (horror, comedy, action, romantic, etc.), while an advertisement recommendation service for a retailer such as Sears might need categories like Home Appliances, Electronics, Computers, Baby Items, and Clothing. Hence, we address the broader problem of obtaining good-quality training data (videos) for any application-defined categories (Figure 2), thus enabling the creation of trained models for any set of categories.
Figure 2: Training Video Creation for Application-Specific Categories
Given an arbitrary set of application-specific categories, how can we automatically obtain labeled videos that can train models capable of classifying online videos into the same set of categories?
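One plausible starting point, sketched below under stated assumptions, is to query a video search service with each application-defined category name (plus application-supplied expansion terms) and take the results as weakly labeled training videos. Here `search_videos` is a stub standing in for any real search API, and the categories and expansion terms are illustrative only.

```python
# Hypothetical sketch: weakly labeled training-set collection for
# application-defined categories. `search_videos` is a stub; a real system
# would call a video search service and likely filter results for quality.
def search_videos(query, limit=2):
    """Stub: in practice this would return (video_id, contextual_text)
    pairs from a video search service."""
    return [(f"{query}-{i}", f"sample contextual text about {query}")
            for i in range(limit)]

def collect_training_set(categories, expansions):
    """Label each search result with the category whose query produced it."""
    labeled = []
    for category in categories:
        # Combine the category name with expansion terms to improve recall
        # (e.g. "Home Appliances" + "refrigerator washer").
        query = f"{category} {expansions.get(category, '')}".strip()
        for video_id, text in search_videos(query):
            labeled.append((video_id, text, category))
    return labeled

training = collect_training_set(
    ["Home Appliances", "Electronics"],
    {"Home Appliances": "refrigerator washer"},
)
print(len(training))  # -> 4
```

The labels obtained this way are noisy (a search result may not truly belong to its query's category), so a real pipeline would add a filtering or quality-assessment stage before model training.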
Below are the publications based on the above work:
Watch Related Videos
1. CWC Research Review - Towards Enabling Personalized and Interactive Video Services (Prof. Sujit Dey)