

##### Financial Math

This section is drawn from the work of the Department of Financial Mathematics, which consists of six faculty members. Their research interests include actuarial science, dependence modeling, Lévy processes and optimal stopping problems in finance and insurance, and graphical models and machine learning. Selected results are described below.

**1. Optimal Reinsurance and Equity-linked Products (YANG Jingping, WU Lan)**

The optimal reinsurance strategy under a given risk attitude of the insurer and a given reinsurance premium principle is a long-standing topic. In [Cui-Yang-Wu, Insurance: Mathematics and Economics (2013)], Jingping Yang, Lan Wu and their collaborators discussed the optimal reinsurance problem with the insurer's risk measured by a distortion risk measure and the reinsurance premium calculated by a general principle that includes the expected premium principle and Wang's premium principle as special cases. Explicit solutions for the optimal reinsurance strategy are obtained under the assumption that both the ceded loss and the retained loss are increasing in the initial loss. They presented a new method for analyzing the optimization problem; based on this method, the optimal reinsurance treaty can be explained as a balance between the insurer's risk measure and the reinsurance premium principle.
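The balance described above can be made concrete with a small numerical sketch. The example below is illustrative only, not the paper's setting: it takes an Exp(1) loss, a square-root distortion function, an expected-value premium with loading θ = 0.2 (all assumptions), and searches over stop-loss retentions for the treaty minimizing distorted retained risk plus reinsurance premium.

```python
import numpy as np

# Distortion risk measure rho_g(X) = ∫ g(S_X(x)) dx for a nonnegative loss X.
def distortion_risk(survival, dx, g):
    return float(np.sum(g(survival)) * dx)

dx = 0.001
x = np.arange(0.0, 20.0, dx)
S = np.exp(-x)                    # survival function of an Exp(1) loss (toy choice)

g = lambda s: np.sqrt(s)          # a concave distortion function (assumption)
theta = 0.2                       # loading of the expected-value premium principle

# Stop-loss treaty with retention d: retained loss min(X, d), ceded loss (X - d)+.
best_d, best_total = None, np.inf
for d in np.linspace(0.0, 5.0, 51):
    S_retained = np.where(x < d, S, 0.0)          # survival function of min(X, d)
    retained_risk = distortion_risk(S_retained, dx, g)
    ceded_mean = float(np.sum(np.where(x >= d, S, 0.0)) * dx)   # E[(X - d)+]
    total = retained_risk + (1.0 + theta) * ceded_mean
    if total < best_total:
        best_d, best_total = d, total

print(best_d, best_total)
```

For this toy specification the optimum is interior (analytically d = 2 ln 1.2 ≈ 0.36): full reinsurance pays too much premium, no reinsurance retains too much distorted risk, exactly the balance the paper's method makes explicit.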

In [Zhou-Wu, Insurance: Mathematics and Economics (2015)], Lan Wu and her collaborator investigated equity-linked investment products with a threshold expense strategy, under which an insurance company collects expenses continuously from the policyholder's account only when the account value is lower than a pre-specified level. The logarithmic value of the policyholder's account, before deducting any fees, is described by a jump diffusion process that is independent of the time-to-death random variable. The distribution of the time-to-death random variable is approximated by a combination of exponential distributions, which are dense in the space of density functions on [0, ∞). They characterized the Laplace transform of the distribution of a general refracted jump diffusion process through some integro-differential equations. In addition, the distribution of a refracted double exponential jump diffusion process at an independent exponential random variable is derived, from which closed-form formulas to evaluate the total expenses and the fair fee rates are obtained. Finally, they illustrated their results with numerical examples.
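The paper's closed-form valuation goes through Laplace transforms of refracted processes; a crude Monte Carlo version of the same threshold expense mechanism can nevertheless illustrate the quantity being priced. All parameters below are illustrative assumptions, the jump distribution is simplified to Gaussian log-jumps, and the time-to-death uses a single exponential term of the combination described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch of a threshold expense strategy: fees accrue at rate c
# only while the account value is below the level b. Parameters are illustrative.
mu, sigma = 0.03, 0.2            # drift and volatility of the log-account
lam, eta = 0.5, 0.1              # jump intensity; log-jumps ~ Normal(0, eta^2) (simplification)
c, b, r = 0.02, 100.0, 0.02      # fee rate, threshold level, discount rate
death_rate = 0.05                # one-term exponential time-to-death

dt, horizon, n_paths = 1 / 52, 30.0, 5000
T = rng.exponential(1.0 / death_rate, n_paths)     # time-to-death per path
logA = np.full(n_paths, np.log(100.0))             # log-account before fee deduction
pv_fees = np.zeros(n_paths)
for k in range(int(horizon / dt)):
    t = k * dt
    below = (t < T) & (np.exp(logA) < b)           # alive and below the threshold
    pv_fees[below] += c * np.exp(logA[below]) * np.exp(-r * t) * dt
    logA += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    jumps = rng.random(n_paths) < lam * dt         # compound-Poisson jump arrivals
    logA[jumps] += eta * rng.standard_normal(int(jumps.sum()))

print(pv_fees.mean())            # expected present value of collected fees
```

Equating this expected fee value to the cost of the embedded guarantee is what determines the fair fee rate; the paper's transform formulas replace this simulation with exact expressions.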

**2. Correlation Structure and Worst Value-at-Risk (YANG Jingping)**

Copula functions are widely used in insurance and finance for modeling inter-dependency between risks. In [Yang-Chen-Wang-Wang, Astin Bulletin (2015)], Jingping Yang and his collaborators introduced a new class of multivariate copulas, the composite Bernstein copula, generated from a composition of two copulas. This new class of copula functions is able to capture tail dependence, and it has a reproduction property for the three important dependence structures: comonotonicity, countermonotonicity and independence. They introduced an estimation procedure based on the empirical composite Bernstein copula which incorporates both prior information and data into the estimation. Simulation studies and an empirical study on financial data illustrate the advantages of the empirical composite Bernstein copula estimation method, especially in capturing tail dependence.
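The basic building block of the estimator is Bernstein smoothing of the empirical copula. The sketch below shows only that smoothing step on simulated correlated data (the full composite construction composes two copulas, which is not reproduced here); the grid degree m = 10 and the Gaussian toy data are assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Empirical Bernstein copula: smooth the empirical copula C_n, evaluated on the
# grid {0, 1/m, ..., 1}^2, with Bernstein polynomial (binomial) weights.
n, m = 500, 10
z = rng.standard_normal((n, 2))
x = np.column_stack([z[:, 0], 0.7 * z[:, 0] + np.sqrt(1 - 0.7**2) * z[:, 1]])  # toy data

u = (np.argsort(np.argsort(x[:, 0])) + 1) / n          # pseudo-observations (ranks / n)
v = (np.argsort(np.argsort(x[:, 1])) + 1) / n
grid = np.linspace(0.0, 1.0, m + 1)
C_n = np.array([[np.mean((u <= a) & (v <= b)) for b in grid] for a in grid])

def bernstein_weights(m, a):
    # P(Binomial(m, a) = i) for i = 0..m
    return np.array([math.comb(m, i) * a**i * (1 - a)**(m - i) for i in range(m + 1)])

def bernstein_copula(a, b):
    return float(bernstein_weights(m, a) @ C_n @ bernstein_weights(m, b))

print(bernstein_copula(0.5, 0.5))
```

Because the weights are binomial probabilities, the smoothed function inherits the grid values at the corners (e.g. it equals 1 at (1, 1)) while interpolating smoothly in between, which is what lets the composite construction blend prior information with the data.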

For fitting a parametric copula to multivariate data, a popular approach is the so-called pseudo maximum likelihood estimation. Although interval estimation can be obtained by estimating the asymptotic covariance of the pseudo maximum likelihood estimator, in [Wang-Peng-Yang, Scandinavian Actuarial Journal (2013)] Jingping Yang and his collaborators proposed a jackknife empirical likelihood method that constructs confidence regions for the parameters without estimating any additional quantities such as the asymptotic covariance. A simulation study shows the advantages of the new method in the case of strong dependence or when more than one parameter is involved.
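The pseudo maximum likelihood step that the jackknife empirical likelihood method builds on can be sketched directly: replace the unknown margins by rescaled ranks, then maximize the copula log-density. The example below uses a Clayton copula with θ = 2 and a simple grid search; the simulated margins and all numbers are assumptions, and the jackknife empirical likelihood confidence region itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pseudo-MLE for a Clayton copula via rank-based pseudo-observations.
theta_true, n = 2.0, 1000
u = rng.uniform(size=n)
w = rng.uniform(size=n)
# standard conditional-inversion sampler for the Clayton copula
v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)
x = np.column_stack([np.exp(u), np.log(v + 1)])    # arbitrary monotone margins

pu = (np.argsort(np.argsort(x[:, 0])) + 1) / (n + 1)   # pseudo-observations: ranks/(n+1)
pv = (np.argsort(np.argsort(x[:, 1])) + 1) / (n + 1)

def neg_loglik(theta):
    # Clayton density: (1+θ)(uv)^{-(1+θ)} (u^{-θ} + v^{-θ} - 1)^{-(2+1/θ)}
    s = pu ** (-theta) + pv ** (-theta) - 1
    ll = (np.log(1 + theta) - (1 + theta) * (np.log(pu) + np.log(pv))
          - (2 + 1 / theta) * np.log(s))
    return -np.sum(ll)

thetas = np.linspace(0.2, 6.0, 117)                # grid search, step 0.05
theta_hat = float(thetas[int(np.argmin([neg_loglik(t) for t in thetas]))])
print(theta_hat)
```

Interval estimation around such a point estimate is exactly where the jackknife empirical likelihood method applies: it profiles a likelihood ratio over leave-one-out pseudo-values instead of estimating the asymptotic covariance.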

In quantitative risk management, it is important and challenging to find sharp bounds for the distribution of the sum of dependent risks with given marginal distributions but an unspecified dependence structure. These bounds are directly related to the problem of obtaining the worst Value-at-Risk of the total risk. Using the idea of complete mixability, in [Wang-Peng-Yang, Finance and Stochastics (2013)] Jingping Yang and his collaborators provided a new lower bound for arbitrary given marginal distributions and gave a necessary and sufficient condition for the sharpness of this new bound. For the sum of dependent risks with an identical marginal distribution that has either a monotone density or a tail-monotone density, explicit values of the worst Value-at-Risk and bounds on the distribution of the total risk are obtained. Examples are given to illustrate the new results.
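A related numerical device, the rearrangement algorithm of Embrechts, Puccetti and Rüschendorf, approximates the same worst Value-at-Risk and is useful for checking analytic bounds of this kind. It is a companion method, not the paper's approach. The sketch below runs it for three identical Pareto(2) margins at level α = 0.95 (all choices illustrative) and compares with the comonotonic VaR, which any worst-case bound must exceed.

```python
import numpy as np

# Rearrangement algorithm for the worst VaR of X1+...+Xd at level alpha:
# discretize the upper-tail quantiles, then make each column oppositely
# ordered to the sum of the others until the minimal row sum stabilizes.
alpha, d, N = 0.95, 3, 2000
p = alpha + (np.arange(N) + 0.5) * (1 - alpha) / N      # upper-tail probability grid
q = (1 - p) ** (-1 / 2.0) - 1                           # Pareto(2) quantile F^{-1}(p)
X = np.column_stack([q.copy() for _ in range(d)])       # start comonotonic

for _ in range(50):
    for j in range(d):
        others = X.sum(axis=1) - X[:, j]
        # counter-monotone rearrangement: largest value of column j goes to
        # the row where the sum of the other columns is smallest
        X[:, j] = np.sort(X[:, j])[np.argsort(np.argsort(-others))]

worst_var = float(X.sum(axis=1).min())
comonotonic_var = d * ((1 - alpha) ** (-1 / 2.0) - 1)   # VaR under comonotonicity
print(worst_var, comonotonic_var)
```

The gap between the two numbers is the price of dependence uncertainty; the paper's complete-mixability results identify when a bound of this type is attained exactly.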

**3. Occupation Times of Lévy Processes and Optimal Stopping Problems in Finance and Insurance (WU Lan, CHENG Xue)**

In [Wu-Zhou-Yu, J. Theoret. Probab. Online (2016)], Lan Wu and her collaborators study the occupation times of an arbitrary Lévy process X that is not a compound Poisson process. They use a novel approach to derive formulas for the Laplace transform of the joint distribution of X and its occupation times. The formulas are compact and, more importantly, their forms clearly identify the essential quantities for the calculation of the occupation times of X. The results are believed to be important not only for the study of stochastic processes but also for financial applications.
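For intuition about what an occupation time is, the simplest Lévy-process case can be checked by simulation: for standard Brownian motion (which is not a compound Poisson process), the fraction of [0, 1] spent below zero follows Lévy's arcsine law, P(A ≤ x) = (2/π) arcsin √x. The discretization below is a toy check, not the paper's transform method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Occupation time of (-inf, 0) for Brownian motion on [0, 1], by Monte Carlo.
n_paths, n_steps = 10_000, 500
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(1.0 / n_steps)
W = np.cumsum(dW, axis=1)
frac_below = np.mean(W < 0, axis=1)            # occupation fraction per path

x = 0.25
empirical = float(np.mean(frac_below <= x))
arcsine = (2 / np.pi) * np.arcsin(np.sqrt(x))  # = 1/3 for x = 0.25
print(empirical, arcsine)
```

The arcsine density piles mass near 0 and 1, so a typical path spends most of its time on one side of zero; the paper's Laplace-transform formulas generalize exactly this kind of law to arbitrary Lévy processes.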

In [Cheng-Riedel,Mathematics & Financial Economics(2013)], Xue Cheng and her collaborator develop a theory of optimal stopping problems under ambiguity in continuous time. Using results from (backward) stochastic calculus, they characterize the value function as the smallest (nonlinear) supermartingale dominating the payoff process. For Markovian models, they derive an adjusted Hamilton-Jacobi-Bellman equation involving a nonlinear drift term that stems from the agent's ambiguity aversion. They show how to use these general results for search problems and American options.
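The backward-induction structure of the value function can be illustrated in a discrete toy model. The sketch below prices an American put on a binomial tree under drift ambiguity ("kappa-ignorance"): at every step the continuation value is the worst case over an interval of up-probabilities, a discrete stand-in for the paper's nonlinear-supermartingale characterization. The tree, the ambiguity radius and all market parameters are assumptions.

```python
import numpy as np

# American put under drift ambiguity on a CRR binomial tree: the continuation
# value is the minimum over up-probabilities in [p - kappa, p + kappa] (the
# minimum over the interval is attained at an endpoint since the value is
# linear in the probability).
def ambiguous_put(kappa, S0=100.0, K=100.0, r=0.02, sigma=0.3, T=1.0, n=200):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)              # risk-neutral up-probability
    p_lo, p_hi = max(p - kappa, 0.0), min(p + kappa, 1.0)
    disc = np.exp(-r * dt)
    V = np.maximum(K - S0 * u ** (n - 2 * np.arange(n + 1)), 0.0)   # terminal payoff
    for k in range(n - 1, -1, -1):
        c_lo = disc * (p_lo * V[:-1] + (1 - p_lo) * V[1:])
        c_hi = disc * (p_hi * V[:-1] + (1 - p_hi) * V[1:])
        cont = np.minimum(c_lo, c_hi)               # worst-case continuation value
        S_k = S0 * u ** (k - 2 * np.arange(k + 1))
        V = np.maximum(K - S_k, cont)               # stop now vs. continue
    return float(V[0])

print(ambiguous_put(0.0), ambiguous_put(0.05))
```

With kappa = 0 the recursion reduces to the classical American put; a positive kappa adds the nonlinear (ambiguity-averse) drift adjustment and lowers the value, mirroring the adjusted Hamilton-Jacobi-Bellman equation in continuous time.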

In [Cheng-Di Giacinto-Wang, Quantitative Finance (2017)], Xue Cheng and her collaborators extended the classical price impact model of Almgren and Chriss to incorporate the uncertainty of order fills. The extended model can be recast as an alternative to uncertain-impact models and stochastic liquidity models. Optimal strategies are determined by maximizing the expected final profit and loss (P&L) and various P&L-risk tradeoffs, including utility maximization. Closed-form expressions for the optimal strategies are obtained in linear cases. The results suggest adaptive versions of volume-weighted average price (VWAP), percentage-of-volume and Almgren-Chriss strategies; VWAP and classical Almgren-Chriss strategies are recovered as limiting cases, with a different characteristic time scale of liquidation for the latter.
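For reference, the classical Almgren-Chriss trajectory that the paper extends has a well-known closed form: minimizing E[cost] + λ·Var[cost] with linear temporary impact gives holdings x(t) = X sinh(κ(T−t))/sinh(κT) with κ = √(λσ²/η). The parameter values below are illustrative.

```python
import numpy as np

# Classical Almgren-Chriss liquidation trajectory (the model extended in the
# paper, without fill uncertainty): sell X shares over [0, T].
def ac_holdings(t, X=1e6, T=1.0, lam=2e-6, sigma=0.95, eta=2.5e-6):
    kappa = np.sqrt(lam * sigma**2 / eta)   # urgency parameter
    return X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

t = np.linspace(0.0, 1.0, 11)
print(ac_holdings(t))                       # front-loaded, convex decreasing schedule
```

As the risk aversion λ (hence κ) tends to zero, the schedule flattens to the linear (VWAP-like) trajectory, which is exactly the limiting-case recovery mentioned above; fill uncertainty in the extended model makes the optimal schedule adaptive rather than deterministic.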

**4. Graphical Models and Machine Learning (HE Yangbo)**

When learning a directed acyclic graph (DAG) model from observational data, one generally cannot identify the underlying DAG, but can potentially obtain a Markov equivalence class. The size (the number of DAGs) of a Markov equivalence class is crucial for inferring causal effects or for learning the exact causal DAG via further interventions. Given a set of Markov equivalence classes, the distribution of their sizes is a key consideration in developing learning methods. However, counting the size of an equivalence class with many vertices is usually computationally infeasible, and the existing literature reports the size distributions only for equivalence classes with ten or fewer vertices. In [He-Jia-Yu, J. Mach. Learn. Res. (2015)], Yangbo He and his collaborators developed a method to compute the size of a Markov equivalence class. They first show that there are five types of Markov equivalence classes whose sizes can be expressed as five functions of the number of vertices. Then they introduce the new concept of a rooted sub-class. The graph representations of the rooted sub-classes of a Markov equivalence class are used to partition this class recursively until the sizes of all rooted sub-classes can be computed via the five functions. The proposed size counting is efficient for Markov equivalence classes of sparse DAGs with hundreds of vertices. Finally, they explore the size and edge distributions of Markov equivalence classes and find experimentally that, in general, (1) most Markov equivalence classes are half completed and their average sizes are small, and (2) the sizes of sparse classes grow approximately exponentially with the number of vertices.
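The quantity being counted can be seen concretely on a tiny example. Two DAGs are Markov equivalent exactly when they share the same skeleton and the same v-structures (colliders a → c ← b with a, b non-adjacent), so for small graphs the class size can be found by brute force over edge orientations. This enumeration is only a didactic sketch; it is the exponential blow-up that the paper's rooted-sub-class recursion avoids.

```python
from collections import defaultdict
from itertools import product

def is_acyclic(edges, nodes):
    # Kahn-style topological check
    indeg = {v: 0 for v in nodes}
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b); indeg[b] += 1
    stack = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while stack:
        v = stack.pop(); seen += 1
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return seen == len(nodes)

def v_structures(edges, nodes):
    adjacent = {frozenset(e) for e in edges}
    out = set()
    for c in nodes:
        parents = [a for a, b in edges if b == c]
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                a, b = parents[i], parents[j]
                if frozenset((a, b)) not in adjacent:    # non-adjacent parents: collider
                    out.add((min(a, b), max(a, b), c))
    return out

def equivalence_class_size(dag_edges, nodes):
    skeleton = [tuple(sorted(e)) for e in {frozenset(e) for e in dag_edges}]
    target = v_structures(dag_edges, nodes)
    count = 0
    for flips in product([False, True], repeat=len(skeleton)):
        cand = [(b, a) if f else (a, b) for (a, b), f in zip(skeleton, flips)]
        if is_acyclic(cand, nodes) and v_structures(cand, nodes) == target:
            count += 1
    return count

# chain a -> b -> c: class {a->b->c, a<-b->c, a<-b<-c}, size 3
print(equivalence_class_size([("a", "b"), ("b", "c")], ["a", "b", "c"]))
```

A collider a → c ← b, by contrast, is alone in its class (size 1), since reorienting either edge would destroy the v-structure.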

Graphical models are popular statistical tools for representing dependent or causal complex systems. Statistically equivalent causal or directed graphical models are said to belong to a Markov equivalence class. It is of great interest to describe and understand the space of such classes. However, with previously known algorithms, sampling over such classes was feasible only for graphs with fewer than approximately 20 vertices. In [He-Jia-Yu, Ann. Statist. (2013)], Yangbo He and his collaborators design reversible irreducible Markov chains on the space of Markov equivalence classes by proposing a perfect set of operators that determine the transitions of the Markov chain. The stationary distribution of a proposed Markov chain has a closed form and can be computed easily. Specifically, they construct a concrete perfect set of operators on sparse Markov equivalence classes by introducing appropriate conditions on each possible operator. Algorithms and their accelerated versions are provided to efficiently generate Markov chains and to explore properties of Markov equivalence classes of sparse directed acyclic graphs (DAGs) with thousands of vertices. They find experimentally that in most Markov equivalence classes of sparse DAGs, (1) most edges are directed, (2) most undirected subgraphs are small and (3) the number of these undirected subgraphs grows approximately linearly with the number of vertices.