Mobile Systems Design Lab
Professor Sujit Dey
User Controlled Personalized Content
Recent advances in communication and networking, web services, semiconductor devices, and signal processing have led to a proliferation of outlets for delivering rich multimedia content, including the Internet, mobile broadband networks, and Cable/Satellite/IPTV. This has encouraged the creation and sharing of a growing amount of user-generated media content through the web (e.g., YouTube), providing an enriched, participatory multimedia experience in keeping with current trends like social networking. At the same time, there are also efforts to leverage new delivery systems like web video for professionally produced content (e.g., Hulu, or TV channels' websites), which have elicited a huge response from users due to their convenience. We observe a trend of growing demand for more user control, accompanied by a continuing appetite for professional-quality content.
Compared to earlier video broadcasting systems, there has already
been an increase in user control in the multimedia presentation
systems of today; common features like pausing, forward/backward
scrolling, etc. are available not only for videos on the web but also in
Cable and Satellite media through Digital Video Recorder functionality.
To look at the possibilities of going beyond such rudimentary
levels of interactivity, we need to reconsider the video production and
presentation system in use today, which produces videos containing a
single linear sequence of scenes or segments. In this method of
presentation, multiple threads of events that may be happening in
parallel in real life (e.g., in live shows, sports, reality television)
have to be shown in a single sequence to the viewer, often requiring the
director to compromise on quality or the amount of content.
There have been a few extensions made to media presentation and
delivery systems to accommodate parallel events, e.g., PiP
(picture-in-picture) or multiple camera angles. However, these have
only very specific applications and offer too little granularity.
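One way to picture going beyond a single fixed sequence is a player that merges user-selected parallel streams into one viewable timeline ordered by capture time. The sketch below is purely illustrative: the `Segment` structure, the `interleave` function, and the example stream names are assumptions made for this example, not part of any system described here.

```python
# Illustrative sketch (hypothetical data model): interleaving
# user-selected parallel event streams into one linear playback sequence.
from dataclasses import dataclass

@dataclass
class Segment:
    stream: str      # which parallel event this segment was captured from
    start: float     # capture timestamp, in seconds
    duration: float  # segment length, in seconds

def interleave(streams, selected):
    """Merge segments from the user-selected streams in capture order,
    yielding a single linear sequence a conventional player could render."""
    pool = [seg for name, segs in streams.items()
            if name in selected for seg in segs]
    return sorted(pool, key=lambda s: s.start)

# Three parallel event streams from, say, a live show.
streams = {
    "stage":     [Segment("stage", 0.0, 10.0), Segment("stage", 20.0, 10.0)],
    "crowd":     [Segment("crowd", 5.0, 5.0)],
    "backstage": [Segment("backstage", 12.0, 8.0)],
}

# The user chooses to follow only the stage and the crowd.
playlist = interleave(streams, selected={"stage", "crowd"})
```

A real system would of course need policies for overlapping segments (e.g., PiP compositing versus cutting), but even this toy version shows how user choice replaces the director's single fixed edit.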
In our research, we propose to develop new techniques for capturing, delivering, and presenting concurrent content that will allow users to experience multimedia content in a much more realistic and participatory manner. We will also develop techniques to automatically identify concurrency in already published content. We believe the media viewing experience can be significantly enriched by allowing the user to control, for example, which subset among five parallel events to focus on, how views of parallel events are interleaved, or which parts of the original footage to watch based on prior knowledge of their content/emphasis/events. We will look at the various algorithmic and systems-level challenges that need to be resolved at the content production, delivery, and presentation stages in order to enable these capabilities. Some of these challenges include: