2020 International Wireless Communications and Mobile Computing (IWCMC), pp. 2100-2105, Jun. 2020
ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2473-2477, May 2020
2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), pp. 1-5, Oct. 2019
2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), pp. 1-6, Oct. 2019
Li, Xiaoxiang, Ma, Kai, Wang, Jian, and Shen, Yuan
2019 IEEE/CIC International Conference on Communications in China (ICCC), pp. 112-116, Aug. 2019
2019 IEEE/CIC International Conference on Communications in China (ICCC), pp. 59-63, Aug. 2019
Statistics - Machine Learning; Computer Science - Machine Learning
Abstract
The maximum entropy principle advocates evaluating events' probabilities using the distribution that maximizes entropy among those satisfying certain expectation constraints. This principle can be generalized to arbitrary decision problems, where it corresponds to minimax approaches. This paper establishes a framework for supervised classification based on the generalized maximum entropy principle that leads to minimax risk classifiers (MRCs). We develop learning techniques that determine MRCs for general entropy functions and provide performance guarantees by means of convex optimization. In addition, we describe the relationship of the presented techniques to existing classification methods, and quantify the performance of MRCs in comparison with the proposed bounds and with conventional methods.
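To illustrate the classical principle the abstract builds on, here is a minimal sketch of computing a maximum-entropy distribution on a finite support subject to a single mean constraint. This is not the paper's MRC method; it only shows the underlying idea. The well-known solution has the exponential-family form p_i proportional to exp(lam * x_i), and the multiplier lam is found here by bisection on the induced mean (function names and the bracket [-50, 50] are illustrative choices).

```python
import math

def maxent_dist(support, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy distribution on a finite support with E[X] = target_mean.

    The maximizer has the Gibbs form p_i ~ exp(lam * x_i); since the
    induced mean is monotone increasing in lam, we solve for lam by
    bisection over [lo, hi].
    """
    def mean_for(lam):
        weights = [math.exp(lam * x) for x in support]
        z = sum(weights)
        return sum(x * w for x, w in zip(support, weights)) / z

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid

    lam = 0.5 * (lo + hi)
    weights = [math.exp(lam * x) for x in support]
    z = sum(weights)
    return [w / z for w in weights]

# With a symmetric support and the mean fixed at its midpoint, the
# constraint is inactive and the maximizer is the uniform distribution.
p = maxent_dist([0, 1, 2, 3], target_mean=1.5)
```

With no constraint binding, maximum entropy recovers the uniform distribution; tightening the mean constraint tilts the distribution exponentially toward larger (or smaller) support points, which is the behavior the generalized, decision-theoretic version of the principle inherits.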
Explainable AI, in the context of autonomous systems like self-driving cars, has drawn broad interest from researchers. Recent studies have found that providing explanations for an autonomous vehicle's actions has many benefits (e.g., increased trust and acceptance), but they place little emphasis on when an explanation is needed and on how the content of the explanation changes with context. In this work, we investigate in which scenarios people need explanations and how the criticality of an explanation shifts with the situation and the driver type. Through a user experiment, we ask participants to evaluate how necessary an explanation is and measure the impact on their trust in self-driving cars in different contexts. We also present a self-driving explanation dataset with first-person explanations and an associated measure of necessity for 1103 video clips, augmenting the Berkeley DeepDrive Attention dataset. Additionally, we propose a learning-based model that predicts, in real time, how necessary an explanation is for a given situation, using camera data as input. Our research reveals that driver type and context dictate whether or not an explanation is necessary and what is helpful for improved interaction and understanding. Comment: 9.5 pages, 7 figures, submitted to UIST 2020
2019 IEEE International Conference on Communications Workshops (ICC Workshops), pp. 1-6, May 2019
Xiong, Yifeng, Wu, Nan, Shen, Yuan, and Win, Moe Z.
2019 IEEE International Conference on Communications Workshops (ICC Workshops), pp. 1-6, May 2019
Gu, Kai, Wang, Yunlong, Wang, Jian, and Shen, Yuan
ICC 2019 - 2019 IEEE International Conference on Communications (ICC), pp. 1-6, May 2019