
Author:

Jiang, Haihua | Hu, Bin | Liu, Zhenyu | Yan, Lihua | Wang, Tianyang | Liu, Fei | Kang, Huanyu | Li, Xiaoyu

Indexed by:

SSCI; EI; Scopus; SCIE

Abstract:

Depression is one of the most common mental disorders. Early intervention is very important for reducing the burden of the disease, but current methods of diagnosis remain limited. Previously, acoustic features of speech have been identified as possible cues for depression, but there has been little research to link depression with speech types and emotions. This study investigated acoustic correlates of depression in a sample of 170 subjects (85 depressed patients and 85 healthy controls). We examined the discriminative power of three different types of speech (interview, picture description, and reading) and three speech emotions (positive, neutral, and negative) using different classifiers, with male and female subjects modeled separately. We observed that picture description speech rendered significantly better (p < 0.05) classification results than other speech types for males, and interview speech performed significantly better (p < 0.05) than other speech types for females. Based on speech types and emotions, a new computational methodology for detecting depression (STEDD) was developed and tested. This new approach showed a high accuracy level of 80.30% for males and 75.96% for females, with a desirable sensitivity/specificity ratio of 75.00%/85.29% for males and 77.36%/74.51% for females. These results are encouraging for detecting depression, and provide guidance for future research. (C) 2017 Elsevier B.V. All rights reserved.
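The abstract reports the STEDD approach's performance as accuracy together with a sensitivity/specificity pair per gender. As a minimal sketch (not the paper's actual pipeline), the snippet below shows how those three metrics are derived from binary predictions, with 1 marking a depressed subject and 0 a healthy control; the toy labels are hypothetical, not the study's data.

```python
# Sketch: computing accuracy, sensitivity, and specificity for a binary
# depression classifier (1 = depressed patient, 0 = healthy control).

def evaluate(y_true, y_pred):
    """Return (accuracy, sensitivity, specificity) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)  # depressed subjects correctly detected
    specificity = tn / (tn + fp)  # healthy controls correctly cleared
    return accuracy, sensitivity, specificity

# Toy example (hypothetical labels for illustration only):
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
acc, sens, spec = evaluate(y_true, y_pred)
```

A sensitivity/specificity pair like the reported 75.00%/85.29% thus summarizes the two error types separately, which matters for a screening task where missing a depressed patient and flagging a healthy control carry different costs.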

Keyword:

Depression; Speech types; Acoustic features; Speech emotions; Classifiers

Author Community:

  • [ 1 ] [Jiang, Haihua]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 2 ] [Hu, Bin]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 3 ] [Liu, Zhenyu]Lanzhou Univ, Ubiquitous Awareness & Intelligent Solut Lab, Lanzhou 730000, Peoples R China
  • [ 4 ] [Yan, Lihua]Lanzhou Univ, Ubiquitous Awareness & Intelligent Solut Lab, Lanzhou 730000, Peoples R China
  • [ 5 ] [Wang, Tianyang]Lanzhou Univ, Ubiquitous Awareness & Intelligent Solut Lab, Lanzhou 730000, Peoples R China
  • [ 6 ] [Liu, Fei]Lanzhou Univ, Ubiquitous Awareness & Intelligent Solut Lab, Lanzhou 730000, Peoples R China
  • [ 7 ] [Kang, Huanyu]Lanzhou Univ, Ubiquitous Awareness & Intelligent Solut Lab, Lanzhou 730000, Peoples R China
  • [ 8 ] [Li, Xiaoyu]Lanzhou Univ, Ubiquitous Awareness & Intelligent Solut Lab, Lanzhou 730000, Peoples R China

Reprint Author's Address:

  • [Hu, Bin]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China


Source :

SPEECH COMMUNICATION

ISSN: 0167-6393

Year: 2017

Volume: 90

Page: 39-46

Impact Factor: 3.200 (JCR@2022)

ESI Discipline: COMPUTER SCIENCE;

ESI HC Threshold:175

CAS Journal Grade:4

Cited Count:

WoS CC Cited Count: 63

SCOPUS Cited Count: 86

ESI Highly Cited Papers on the List: 0

