Understanding Distributional Ambiguity via Non-Robust Chance Constraint

2020 ACM International Conference on AI in Finance https://arxiv.org/abs/1906.01981

8 Pages · Posted: 14 Jun 2019 · Last revised: 28 Dec 2020

Shumin Ma

City University of Hong Kong (CityU) - School of Data Science

Cheuk Hang Leung

City University of Hong Kong (CityU) - School of Data Science

Qi Wu

City University of Hong Kong, School of Data Science

Wei Liu

Tencent AI Lab

Nanbo Peng

JD Digits

Date Written: September 20, 2020

Abstract

This paper provides a non-robust interpretation of the distributionally robust optimization (DRO) problem by relating the distributional uncertainties to the chance probabilities. Our analysis allows a decision-maker to interpret the size of the ambiguity set, which often lacks business meaning, through the chance parameters constraining the objective function. We first show that, for general ϕ-divergences, a DRO problem is asymptotically equivalent to a class of mean-deviation problems. These mean-deviation problems are not subject to uncertain distributions, and the ambiguity radius in the original DRO problem now plays the role of controlling the risk preference of the decision-maker. We then demonstrate that a DRO problem can be cast as a chance-constrained optimization (CCO) problem when a boundedness constraint is added to the decision variables. Without the boundedness constraint, the CCO problem is shown to perform uniformly better than the DRO problem, irrespective of the radius of the ambiguity set, the choice of the divergence measure, or the tail heaviness of the center distribution. Thanks to our high-order expansion result, a notable feature of our analysis is that it applies to divergence measures that accommodate heavy-tailed distributions well, such as Student's t-distribution and the lognormal distribution, in addition to the widely used Kullback-Leibler (KL) divergence, which requires the distribution of the objective function to be exponentially bounded. Using the portfolio selection problem as an example, our comprehensive tests on multivariate heavy-tailed datasets, both synthetic and real-world, show that this business-interpretation approach is indeed useful and insightful.
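The mean-deviation reformulation described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact formulation: it generates synthetic heavy-tailed (Student-t) asset returns and grid-searches long-only portfolio weights that maximize a mean-minus-deviation objective, where the penalty `rho` plays the role the ambiguity radius plays in the DRO problem (a risk-preference knob). All data, dimensions, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic heavy-tailed returns for 3 assets: Student-t noise (df=4)
# around small positive drifts -- purely illustrative data.
T, n = 500, 3
returns = rng.standard_t(df=4, size=(T, n)) * 0.01 + np.array([0.0005, 0.0008, 0.0012])

def mean_deviation_objective(w, returns, rho):
    """Non-robust surrogate objective: portfolio mean minus rho times
    its standard deviation. rho stands in for the DRO ambiguity radius
    as a risk-preference parameter."""
    port = returns @ w
    return port.mean() - rho * port.std()

# Crude grid search over long-only weights on the probability simplex.
grid = np.linspace(0.0, 1.0, 21)
best_val, best_w = -np.inf, None
for w1 in grid:
    for w2 in grid:
        if w1 + w2 <= 1.0:
            w = np.array([w1, w2, 1.0 - w1 - w2])
            val = mean_deviation_objective(w, returns, rho=0.1)
            if val > best_val:
                best_val, best_w = val, w

print("best objective:", best_val)
print("weights:", best_w)
```

Increasing `rho` (a larger ambiguity radius in the DRO reading) shifts the optimal weights toward lower-deviation portfolios, which is the risk-preference interpretation the paper formalizes.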

Keywords: distributionally robust optimization, chance constraint, Kullback-Leibler (KL) divergence, Cressie-Read divergence, portfolio optimization, heavy-tailed distributions, financial risk management

Suggested Citation

Ma, Shumin and Leung, Cheuk Hang and Wu, Qi and Liu, Wei and Peng, Nanbo, Understanding Distributional Ambiguity via Non-Robust Chance Constraint (September 20, 2020). 2020 ACM International Conference on AI in Finance https://arxiv.org/abs/1906.01981, Available at SSRN: https://ssrn.com/abstract=3398047 or http://dx.doi.org/10.2139/ssrn.3398047

Shumin Ma

City University of Hong Kong (CityU) - School of Data Science

Kowloon
Hong Kong

Cheuk Hang Leung

City University of Hong Kong (CityU) - School of Data Science

Kowloon
Hong Kong

Qi Wu (Contact Author)

City University of Hong Kong, School of Data Science

83 Tat Chee Avenue
Kowloon
Hong Kong

Wei Liu

Tencent AI Lab

Nanbo Peng

JD Digits

Paper statistics

Downloads: 162 · Abstract Views: 1,278 · Rank: 333,835