
Author:

Du, Lina | Yang, Shuo | Zhuo, Li | Zhang, Hui | Zhang, Jing | Li, Jiafeng

Indexed by:

EI Scopus

Abstract:

For mobile streaming media service providers, it is necessary to accurately predict the quality of experience (QoE) in order to formulate appropriate resource allocation and service quality optimization strategies. In this paper, a QoE evaluation model is proposed that considers various influencing factors (IFs), including perceptual video quality, video content characteristics, stalling, quality switching and video genre attribute. First, a no-reference video multimethod assessment fusion (VMAF) model is constructed to measure the perceptual quality of the video with a deep bilinear convolutional neural network (DBCNN). Then, the deep spatio-temporal features of the video are extracted with a TSM-ResNet50 network, which incorporates the temporal shift module (TSM) into ResNet50, obtaining a feature representation of the video content characteristics while balancing computational efficiency and expressive ability. Second, the video genre attribute, which reflects the user's preference for different types of videos, is considered as an IF when constructing the QoE model. The statistical parameters of the other IFs, including the video genre attribute, stalling and quality switching, are combined with the VMAF score and the deep spatio-temporal features to form an overall IF description parameter vector for the QoE evaluation model. Finally, the mapping relationship between the IF parameter vector and the mean opinion score (MOS) is established by designing a deep neural network. The proposed QoE evaluation model is validated on two public video datasets, WaterlooSQoE-III and LIVE-NFLX-II. The experimental results show that the proposed model achieves state-of-the-art QoE prediction performance.
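The abstract describes a feature-fusion pipeline; the following is a minimal PyTorch sketch of the final fusion-and-regression stage only, assuming illustrative feature dimensions (a scalar no-reference VMAF score, 2048-dimensional pooled TSM-ResNet50 features, 16 IF statistics) and layer sizes that are not taken from the paper. It is not the authors' implementation.

# Hedged sketch of the fusion stage outlined in the abstract: the no-reference
# VMAF prediction, the TSM-ResNet50 spatio-temporal features and the statistical
# parameters of the other IFs (genre, stalling, quality switching) are
# concatenated into one IF description vector and regressed to a MOS by a small
# deep neural network. All dimensions and layer sizes below are assumptions.
import torch
import torch.nn as nn

class QoEFusionHead(nn.Module):
    def __init__(self, vmaf_dim=1, st_feat_dim=2048, if_stat_dim=16, hidden=128):
        super().__init__()
        in_dim = vmaf_dim + st_feat_dim + if_stat_dim
        # Simple MLP mapping the concatenated IF description vector to a MOS.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden // 2),
            nn.ReLU(),
            nn.Linear(hidden // 2, 1),  # predicted mean opinion score
        )

    def forward(self, vmaf_score, st_features, if_stats):
        # vmaf_score:  (B, 1)    no-reference VMAF from the DBCNN-based predictor
        # st_features: (B, 2048) pooled spatio-temporal features from TSM-ResNet50
        # if_stats:    (B, 16)   statistics of stalling, quality switching, genre
        x = torch.cat([vmaf_score, st_features, if_stats], dim=1)
        return self.mlp(x)

# Usage with random tensors standing in for real extracted features.
if __name__ == "__main__":
    model = QoEFusionHead()
    mos = model(torch.rand(4, 1), torch.rand(4, 2048), torch.rand(4, 16))
    print(mos.shape)  # torch.Size([4, 1])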

Keyword:

Deep spatio-temporal features; No-reference VMAF metric; TSM-ResNet50; QoE; DBCNN

Author Community:

  • [ 1 ] [Du, Lina]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 2 ] [Yang, Shuo]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 3 ] [Zhuo, Li]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 4 ] [Zhang, Hui]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 5 ] [Zhang, Jing]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 6 ] [Li, Jiafeng]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 7 ] [Du, Lina]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 8 ] [Yang, Shuo]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 9 ] [Zhuo, Li]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 10 ] [Zhang, Hui]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 11 ] [Zhang, Jing]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 12 ] [Li, Jiafeng]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China

Reprint Author's Address:


Related Keywords:

Source :

SENSING AND IMAGING

ISSN: 1557-2064

Year: 2022

Issue: 1

Volume: 23

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 2

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:


Affiliated Colleges:
