Perceptual Optimization of Large Scale Wireless Video Networks

The next generation of wireless networks will become the dominant means of video content delivery, leveraging rapidly expanding cellular and local-area network infrastructure. Unfortunately, the application-agnostic paradigm of current data networks is not well suited to meet the projected growth in video traffic volume, nor can it exploit the unique characteristics of real-time and stored video to make more efficient use of existing capacity. In particular, today's wireless networks cannot exploit the spatio-temporally bursty nature of video, make rate-reliability-delay tradeoffs, or adapt to the ultimate receiver of video information: the human eye-brain system. We believe that video networks at every time scale and layer should operate and adapt with respect to perceptual distortions in the video stream.

Our position is that increasing video network capacity against dramatically accelerating demand will require delivering fewer, more perceptually relevant bits per stream, communicating those streams more efficiently throughout the network, and creating a more capable, perception- and video-content-aware network infrastructure. Our research is organized along vectors that capture each of these themes: VQA (perceptual video quality assessment), STIM (spatio-temporal interference management), and LV (learning for video network adaptation). A fourth vector, SYN (synthesis), comprises our experimental research thrust.

Vector VQA concerns perceptual video quality assessment and its application to wireless video networks. As part of this project we are developing perceptual metrics for 2-D and 3-D video quality assessment. We are using these metrics to devise rate-distortion models and as inputs to learning-based adaptation models, which can subsequently be used in video-aware network adaptation.
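To illustrate the kind of fidelity computation that underlies quality assessment, here is a minimal frame-level PSNR sketch. PSNR is a classical baseline, not one of the perceptual metrics being developed in this project, and the function name and toy frames below are ours:

```python
import math

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio between two equal-size frames,
    given as flat lists of pixel intensities. Higher is better;
    identical frames give infinity."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

# Toy 2x2 "frames": a reference and a slightly distorted copy.
ref = [100, 120, 130, 140]
dist = [101, 119, 130, 142]
print(f"PSNR: {psnr(ref, dist):.2f} dB")
```

Perceptual metrics go beyond such pixel-wise error by modeling how the eye-brain system weights spatial and temporal distortions, which is exactly the gap the VQA vector targets.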

Vector STIM is about exploiting the spatio-temporal characteristics of video traffic and the wireless environment to improve spectral utilization. We are developing interference management and opportunistic video transport strategies for both real-time and stored video streaming, including techniques for interference management in two-tier networks and cross-layer, application-level protocols for stored video transmission. Interference management in ad-hoc heterogeneous wireless network deployments is recognized as one of the keys to achieving high spectral efficiency. Better interference management spans a range of issues, from how traffic is associated with network infrastructure to the 'shaping' of transmissions so as to mitigate the impact of interference on video quality.
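A toy calculation makes the two-tier interference setting concrete: a link's achievable rate is governed by its signal-to-interference-plus-noise ratio (SINR), so cross-tier interference directly erodes video throughput. The sketch below is ours, with illustrative power values, and uses the Shannon bound only as a rough rate proxy:

```python
import math

def sinr_db(signal_mw, interferers_mw, noise_mw=1e-9):
    """SINR in dB for one link, given received powers in milliwatts."""
    return 10.0 * math.log10(signal_mw / (sum(interferers_mw) + noise_mw))

def shannon_rate_mbps(sinr_db_val, bandwidth_mhz=10.0):
    """Shannon capacity bound for the link, in Mbit/s."""
    return bandwidth_mhz * math.log2(1.0 + 10 ** (sinr_db_val / 10.0))

# Two-tier toy example: a femtocell user hears its own femto base
# station plus interference from two macrocell sectors.
own = 1e-6            # received power from the serving femtocell (mW)
macro = [2e-7, 5e-8]  # cross-tier interference from the macro tier (mW)
s = sinr_db(own, macro)
print(f"SINR {s:.1f} dB, rate bound {shannon_rate_mbps(s):.1f} Mbit/s")
```

Interference management strategies such as traffic association and transmission shaping aim to shrink the interference term in this ratio, and thereby raise the rate available to each video stream.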

Vector LV is about adapting transmission parameters, including the source rate and the modulation and coding parameters, to the current channel state as well as to the "user state," which can mean the quality the user is experiencing (as reported by the user, or by quality agents in the network) as well as the user's behavior and predicted behavior. We are developing a data-driven learning framework based on user behavior (e.g., mobility patterns and predicted demand) and on perceptual video quality distortion statistics collected by smart video quality agents distributed throughout the network. We are developing perceptually based wireless link adaptation procedures that make rate-distortion tradeoffs dynamically, and as the project progresses we will use learning agents to integrate these with our interference management approaches.
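One simple way to frame learning-based link adaptation is as a multi-armed bandit: each candidate modulation-and-coding rate is an arm, and the learner converges on the rate with the best expected goodput. The sketch below is our own illustration using epsilon-greedy selection with made-up rates and success probabilities; it is not the project's adaptation algorithm:

```python
import random

# Hypothetical candidate rates (Mbit/s) and the probability each
# succeeds on the current channel (unknown to the learner).
RATES = [6.0, 12.0, 24.0, 48.0]
P_SUCCESS = [0.99, 0.95, 0.80, 0.30]

def run_epsilon_greedy(steps=5000, eps=0.1, seed=1):
    """Epsilon-greedy link adaptation: estimate each rate's average
    goodput (rate x success) online, exploiting the current best
    while still exploring. Returns the index of the learned best rate."""
    rng = random.Random(seed)
    counts = [0] * len(RATES)
    means = [0.0] * len(RATES)
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(RATES))          # explore
        else:
            arm = max(range(len(RATES)), key=lambda i: means[i])  # exploit
        reward = RATES[arm] if rng.random() < P_SUCCESS[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
    return max(range(len(RATES)), key=lambda i: means[i])

best = run_epsilon_greedy()
print(f"learned best rate: {RATES[best]} Mbit/s")
```

In the project setting, the reward would be driven by perceptual quality statistics from the network's quality agents rather than raw goodput, tying the LV vector back to VQA.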

Vector SYN is about experimental validation of the proposed research concepts and capacity claims. It involves several synergistic, cross-cutting experimental activities intended to incorporate elements of VQA, STIM, and LV. Specific tasks include developing a reference database for making quality measurements, demonstrating machine-learning-based scalable video source and channel rate adaptation algorithms, and assessing the composite capacity gains of all our techniques through composite simulations.



Our project is part of the VAWN (Video Aware Wireless Networks) project, a large effort by Intel, Cisco, and Verizon to tackle the wireless traffic jam. Support for the effort at UT comes in the form of gifts from Intel and Cisco. Other universities involved in the VAWN project include:

  • Cornell University will focus on video coding for heterogeneous networks, predictive video streaming and quality-aware routing.
  • Moscow State University will explore 2-D/3-D video-quality restoration.
  • University of California, San Diego will research 2-D/3-D video coding and error concealment, fast mobile adaptations and network resource management.
  • University of Southern California will look at novel device-to-device video network architectures.


Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.
