Emerald Insight

The Journal of Risk Finance

Jump liquidity risk and its impact on CVaR

Harry Zheng and Yukun Shen

To cite this document: Harry Zheng and Yukun Shen (2008), "Jump liquidity risk and its impact on CVaR", The Journal of Risk Finance, Vol. 9 Iss 5, pp. 477-492. Permanent link: http://dx.doi.org/10.1108/15265940810916139


Jump liquidity risk and its impact on CVaR

Harry Zheng and Yukun Shen

Department of Mathematics, Imperial College, London, UK


Abstract

Purpose - The aim is to study jump liquidity risk and its impact on risk measures: value at risk (VaR) and conditional VaR (CVaR).

Design/methodology/approach - The liquidity discount factor is modelled with mean reversion jump diffusion processes and the liquidity risk is integrated into the framework of VaR and CVaR.

Findings - The standard VaR, CVaR, and the liquidity adjusted VaR can seriously underestimate the potential loss over a short holding period for rare jump liquidity events. A better risk measure is the liquidity adjusted CVaR, which gives a more realistic loss estimation in the presence of liquidity risk. An efficient Monte Carlo method is also suggested to find approximate VaR and CVaR of all percentiles with one set of samples from the loss distribution, which applies to portfolios of securities as well as single securities.

Originality/value - The paper offers plausible stochastic processes to model liquidity risk.

Keywords Monte Carlo methods, Risk analysis, Liquidity

Paper type Research paper

1. Introduction

The drying up of liquidity was the prevailing trigger of some high-profile failures in the recent past (e.g., the fall of Long-Term Capital Management). These highly leveraged hedge funds had great difficulty in raising cash to meet margin calls by unwinding positions in markets where liquidity had almost disappeared. Dunbar (1998) explains that:

Portfolios are usually marked to market at the middle of the bid-offer spread, and many hedge funds used models that incorporate this assumption. In late August, there was only one realistic value for the portfolio: the bid price. Amid such massive sell-offs, only the first seller obtains a reasonable price for its security; the rest lose a fortune by having to pay a liquidity premium if they want a sale. [...] Models should be revised to include bid-offer behaviour.

The liquidity risk can be conceptually divided into an exogenous component and an endogenous component: the former depends on the general market condition, while the latter relates to the specific position of a trader. Lawrence and Robinson (1996) assert that ignoring the liquidity risk altogether can lead to an underestimate of the market risk of up to 30 percent. Despite the wide recognition of the importance of the liquidity risk, there is no universal agreement on the definition of liquidity. In the academic literature, liquidity is usually defined in terms of the bid-ask spread and/or the transaction cost, whereas in the practitioner literature illiquidity is often viewed as the inability to buy and sell securities (at any price). Black (1971) identifies four major properties of liquidity: the immediacy of the transaction, the tightness of the

The authors thank the referee for several comments which have helped to improve the first version of the paper.


© Emerald Group Publishing Limited 1526-5943


spread, the resiliency of the market, and the depth of the market. The concept of liquidity can be summarized as the ability of traders to execute large trades rapidly at a price close to the current market price. The liquidity risk refers to the loss stemming from the costs of liquidating a position.

To manage the liquidity risk, a good risk measure is needed to account for the impact of the liquidity shock on tradable securities and portfolios. Value at risk (VaR) is the most popular market risk measure used in practice; it estimates the potential loss of a financial instrument at a certain level of probability and for a given period of time. However, VaR ignores the liquidity component and can seriously underestimate the potential loss if the loss distribution is fat-tailed. This is because VaR takes the value of the least loss among all possible losses if an event of a given probability does occur. To overcome the underestimation of the potential loss, VaR is often adjusted in an ad hoc fashion, either by lengthening the holding period or by magnifying the VaR calculated with the desired holding period. A different risk measure that addresses the shortcoming of VaR is the conditional VaR (CVaR). Unlike VaR, CVaR predicts the potential loss using the average loss among all possible losses, which provides a more realistic loss estimation if an unexpected "bad" event occurs in a fat-tailed environment. CVaR is also a coherent risk measure whereas VaR is not (Artzner et al., 1999).

It is often difficult to compute directly the VaR and CVaR from their definitions as VaR requires to solve a nonlinear equation and CVaR to integrate over the tail distribution, especially when the closed-form expression of the loss distribution is unknown or is too complicated. Rockafellar and Uryasev (2002) suggest a viable method for the computation of VaR and CVaR by formulating a convex minimization problem in which the minimum value is the CVaR and the left end point of the minimum solution set gives the VaR. The resulting optimization problem can be easily solved in two steps by first generating samples of the loss distribution and then solving a large linear programming (LP) problem which gives the approximate VaR and CVaR. The same formulation can also be used to find the minimum CVaR portfolio.

We focus in this paper on the loss of the realized value (bid-price) of a tradable security. We define the bid-price as the product of the mid-price and the liquidity discount factor, both of which follow stochastic processes. To highlight the key point and simplify the discussion, we assume that the mid-price follows a geometric Brownian motion (GBM) process, that the liquidity discount factor follows a mean-reversion jump diffusion process, and that the two processes are independent of each other.

There are two main contributions in this paper. The first contribution is that it provides an explicit solution to the LP problem of Rockafellar and Uryasev (2002) with the following advantages:

(1) approximate VaR and CVaR can be computed by simply generating samples of the loss distribution and no optimization is needed;

(2) VaR and CVaR of any percentile can be computed with a given set of samples of the loss distribution;

(3) it works for both a single security and a portfolio of securities as long as the joint distribution of security losses is known; and

(4) it opens up other optimization methods (e.g., the augmented Lagrangian method) for finding the minimum CVaR portfolio, in addition to the nonsmooth optimization method or the large-scale LP method.

The second contribution is that it defines liquidity-adjusted VaR (LVaR) and liquidity-adjusted CVaR (LCVaR) for the market risk of tradable securities when there exists jump liquidity risk. It shows that the conventional VaR and CVaR for the mid-price of a security can seriously underestimate the potential loss, especially over a short period such as one day, and can result in substantial loss if a "bad" rare event occurs. This partially explains the difficulty those hedge funds had in meeting margin calls by unwinding positions when liquidity disappeared. The implication for risk management is that financial institutions should reserve sufficient liquid assets in their portfolios, much larger than what the conventional VaR and CVaR for the mid-price would have suggested, to withstand the potential large loss when a jump liquidity event strikes.

The paper is organized as follows: Section 2 reviews the convex optimization problem of Rockafellar and Uryasev (2002) for VaR and CVaR, and solves the resulting LP problem by the dual method and the Kuhn-Tucker conditions. It also discusses the way of finding the minimum CVaR portfolio. Section 3 models the liquidity discount factor with the mean reversion OU and CIR jump diffusion processes, which seem to characterize well the general phenomenon of the liquidity risk: unpredictable sudden liquidity dry-up and gradual recovery afterwards. Section 4 compares LVaRs and LCVaRs of different models and parameters to see their effects on risk measures. Section 5 concludes, and the appendix contains the proofs of theorems.

2. Computation of VaR and CVaR

Consider a real-valued random variable L on a probability space (Ω, F, P) that represents the loss of an investment over a fixed time horizon. Let α ∈ (0, 1) be fixed. Then the VaR of L at level α is defined to be the smallest number x such that the probability that the loss does not exceed x is not less than α, i.e.:

VaR_α = min{x ∈ R : P(L ≤ x) ≥ α}.   (1)

The CVaR at level α is defined to be the average loss given that the loss is at least VaR_α, i.e.:

CVaR_α = mean of the α-tail distribution of L,   (2)

where the α-tail distribution F_α(x) is defined by:

F_α(x) = 0 for x < VaR_α,
F_α(x) = (P(L ≤ x) − α)/(1 − α) for x ≥ VaR_α.

Let S_t be the discounted asset price at time t, following a GBM process:

dS_t = σS_t dW_t,   (3)

where σ is a constant asset volatility and W_t a standard Brownian motion.

Remark. In general, the asset price S̃_t is assumed to follow a GBM process with a drift μ:

dS̃_t = μS̃_t dt + σS̃_t dW_t.

The discounted asset price S_t := e^{−rt} S̃_t satisfies the SDE:

dS_t = (μ − r)S_t dt + σS_t dW_t,

where r is the risk-free interest rate. We can then apply the Girsanov theorem to change the probability measure P to an equivalent probability measure P⁰ such that:

dS_t = σS_t dW_t⁰,

where W_t⁰ is a standard Brownian motion under P⁰ (Karatzas and Shreve, 1988). We therefore assume, without loss of generality, that the discounted asset price S_t has zero drift. The loss of the discounted asset price at time T is given by:

L = S_0 − S_T = S_0(1 − e^{−(1/2)σ²T + σW_T}).

(L < 0 represents a gain.) The distribution of L is given by:

P(L ≤ x) = Φ( (−ln(1 − x/S_0) − (1/2)σ²T) / (σ√T) )

if x < S_0, and P(L ≤ x) = 1 if x ≥ S_0, where Φ(x) is the standard normal cumulative distribution function. A simple calculation using (1) and (2) shows that:

VaR_α = S_0(1 − e^{−(1/2)σ²T − σ√T Φ^{−1}(α)}),

CVaR_α = S_0 − (S_0/(1 − α)) Φ(−Φ^{−1}(α) − σ√T).
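As a sanity check, the closed-form expressions above can be compared against a brute-force Monte Carlo estimate. The following numpy/scipy sketch is our own illustration (the function name is ours; the parameters S_0 = 100, σ = 0.2, T = 0.04, α = 0.99 are taken from the numerical tests in Section 4):

```python
import numpy as np
from scipy.stats import norm

def gbm_var_cvar(S0, sigma, T, alpha):
    """Closed-form VaR/CVaR of the loss L = S0 - S_T for the driftless
    GBM S_T = S0 * exp(-sigma^2*T/2 + sigma*W_T)."""
    q = norm.ppf(alpha)                        # Phi^{-1}(alpha)
    var = S0 * (1.0 - np.exp(-0.5 * sigma**2 * T - sigma * np.sqrt(T) * q))
    cvar = S0 - S0 / (1.0 - alpha) * norm.cdf(-q - sigma * np.sqrt(T))
    return var, cvar

# Monte Carlo cross-check with the Section 4 parameters
S0, sigma, T, alpha = 100.0, 0.2, 0.04, 0.99
rng = np.random.default_rng(0)
W = np.sqrt(T) * rng.standard_normal(1_000_000)
L = S0 * (1.0 - np.exp(-0.5 * sigma**2 * T + sigma * W))
var_mc = np.quantile(L, alpha)                 # empirical VaR
cvar_mc = L[L >= var_mc].mean()                # empirical tail average

var, cvar = gbm_var_cvar(S0, sigma, T, alpha)
print(var, cvar, var_mc, cvar_mc)
```

With these parameters the closed form gives roughly 8.96 and 10.2, consistent with the "standard" VaR and CVaR reported in Section 4.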

In general, there are many factors which make the direct computation of VaR and CVaR with (1) and (2) difficult: for example, the closed-form expression of the loss distribution is unknown, or equation (1) is highly nonlinear. Rockafellar and Uryasev (2002) suggest a new way of computing VaR and CVaR by solving the following convex optimization problem:

min_{x∈R} F_α(x) := x + (1/(1 − α)) E[(L − x)⁺]   (4)

where x⁺ = max(x, 0). The VaR and CVaR are the optimal solution and optimal value of problem (4), given by:

VaR_α = left endpoint of argmin F_α(x),
CVaR_α = min F_α(x) = F_α(VaR_α).

Rockafellar and Uryasev (2002) suggest to solve (4) by first generating M samples L_i of the random variable L to approximate (4) by:

min_{x∈R} x + (1/((1 − α)M)) Σ_{i=1}^{M} (L_i − x)⁺

and then solving an equivalent LP problem:

min_{x,z} x + (1/((1 − α)M)) Σ_{i=1}^{M} z_i  s.t.  x + z_i ≥ L_i and z_i ≥ 0, i = 1, ..., M.   (5)

The optimal solution and the optimal value to problem (5) are the approximate VaR and CVaR, as we have replaced the expectation by the sample average. These approximate VaR and CVaR tend to the exact VaR and CVaR as M → ∞. We investigate the computation of these approximate VaR and CVaR and their applications in liquidity risk analysis. From now on, the VaR and CVaR in the paper refer to these approximate VaR and CVaR, computed from (5).

We can solve problem (5) explicitly due to its special structure, and consequently we can get VaRa and CVaRa explicitly by simply sorting the samples.

Theorem 1. Let the M samples of the loss random variable L be arranged in decreasing order L_1 ≥ ••• ≥ L_M. Let α ∈ (0, 1) be a given percentile and N be the unique integer satisfying (1 − α)M − 1 < N ≤ (1 − α)M. Then the approximate VaR and CVaR are given by:

VaR_α = L_{N+1} and CVaR_α = γ(L_1 + ••• + L_N) + (1 − Nγ)L_{N+1}

where γ = (1/(1 − α))(1/M). Furthermore, as M → ∞ the approximate VaR and CVaR tend to the exact VaR and CVaR defined in (1) and (2).

Proof. See Appendix.

Theorem 1 finds the approximate VaR and CVaR of all percentiles once a set of samples is generated and sorted, as the only difference for different α is the choice of N. When α is close to 1 the number of samples should be sufficiently large to ensure a stable result. For example, if 100 samples are generated, then CVaR is the average of the first ten sorted samples and VaR is the eleventh sorted sample when α = 0.9, whereas CVaR is the first sorted sample and VaR is the second sorted sample when α = 0.99, which is bound to be unstable.
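Theorem 1 turns VaR/CVaR estimation into a generate-and-sort recipe. A minimal numpy sketch follows; the function name and the small floating-point nudge used when (1 − α)M is an integer are our own:

```python
import numpy as np

def var_cvar_from_samples(losses, alpha):
    """Theorem 1: approximate VaR and CVaR from sorted loss samples."""
    L = np.sort(np.asarray(losses, dtype=float))[::-1]   # L1 >= ... >= LM
    M = len(L)
    # unique integer N with (1 - alpha)*M - 1 < N <= (1 - alpha)*M
    N = int(np.floor((1.0 - alpha) * M + 1e-12))
    g = 1.0 / ((1.0 - alpha) * M)                        # gamma
    var = L[N]                                           # L_{N+1}, 1-based
    cvar = g * L[:N].sum() + (1.0 - N * g) * L[N]
    return var, cvar

# the worked example from the text: 100 samples, alpha = 0.9
var, cvar = var_cvar_from_samples(np.arange(1, 101), 0.9)
print(var, cvar)   # eleventh largest sample, and the mean of the ten largest
```

For the text's example of 100 samples and α = 0.9 this returns the eleventh largest sample as VaR and the average of the ten largest as CVaR.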

Theorem 1 can be used to find the approximate VaR and CVaR of a portfolio of securities, not only a single security. Suppose there are n securities in a portfolio with L_i representing the loss of security i. The loss of the portfolio is given by:

L(w) = Σ_{i=1}^{n} w_i L_i

where the w_i are the weights of the securities in the portfolio. For fixed w, we can find the VaR and CVaR of the loss L(w) by first generating M samples of the joint distribution of (L_1, ..., L_n), say (L_{1k}, ..., L_{nk}) for k = 1, ..., M, then sorting L_k(w) = Σ_{i=1}^{n} w_i L_{ik} into a decreasing sequence, say L_1(w) ≥ ••• ≥ L_M(w), and finally applying Theorem 1 to get VaR_α(w) and CVaR_α(w), i.e.:

VaR_α(w) = Σ_{i=1}^{n} w_i L_{i,N+1} and CVaR_α(w) = Σ_{i=1}^{n} w_i c_i(w),   (6)

where c_i(w) = γ(L_{i1} + ••• + L_{iN}) + (1 − Nγ)L_{i,N+1}, with the sample index relabelled so that L_1(w) ≥ ••• ≥ L_M(w).

As a byproduct, we can also get the approximate marginal VaR and CVaR of L(w) with respect to the weights w under the mild condition that the sorted sequence L_k(w) is strictly decreasing (as is typical). Then the perturbed loss sequence L_k(w + ε) keeps the same order as that of L_k(w) if the perturbation ε = (ε_1, ..., ε_n) has a sufficiently small magnitude. We therefore have c_i(w + ε) = c_i(w) for all i and:

VaR_α(w + ε) = Σ_{i=1}^{n} (w_i + ε_i)L_{i,N+1} and CVaR_α(w + ε) = Σ_{i=1}^{n} (w_i + ε_i)c_i(w).

This and (6) imply that the approximate gradients of VaR and CVaR are given by:

∇_w VaR_α(w) = (L_{1,N+1}, ..., L_{n,N+1})ᵀ, ∇_w CVaR_α(w) = (c_1(w), ..., c_n(w))ᵀ.
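The portfolio formulas (6) and the gradient expressions above amount to sorting one vector and reading off component sums. A numpy sketch of this (function name and sanity-check data are ours):

```python
import numpy as np

def portfolio_var_cvar_grad(sample_losses, w, alpha):
    """Approximate portfolio VaR/CVaR of (6) plus the gradient formulas.
    sample_losses is an (M, n) array whose kth row is (L_{1k}, ..., L_{nk})."""
    Lw = sample_losses @ w                     # portfolio loss of each sample
    order = np.argsort(-Lw)                    # indices sorting L_k(w) downwards
    M = len(Lw)
    N = int(np.floor((1.0 - alpha) * M + 1e-12))
    g = 1.0 / ((1.0 - alpha) * M)
    top, nxt = order[:N], order[N]
    # c_i(w) = g*(L_{i1} + ... + L_{iN}) + (1 - N*g)*L_{i,N+1}, sorted order
    c = g * sample_losses[top].sum(axis=0) + (1.0 - N * g) * sample_losses[nxt]
    var = Lw[nxt]                              # = sum_i w_i L_{i,N+1}
    cvar = w @ c
    return var, cvar, sample_losses[nxt], c    # last two: grad VaR, grad CVaR

# one-security sanity check: reduces to Theorem 1
samples = np.arange(1.0, 101.0).reshape(-1, 1)
var, cvar, g_var, g_cvar = portfolio_var_cvar_grad(samples, np.array([1.0]), 0.9)
print(var, cvar, g_var, g_cvar)
```

With a single security and weight 1 the result coincides with the Theorem 1 values, and the gradients are just the scalar VaR and CVaR, as (6) predicts.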

To find the minimum CVaR portfolio, Rockafellar and Uryasev (2002) suggest to solve the following convex optimization problem:

min_{x∈R, w} x + (1/((1 − α)M)) Σ_{k=1}^{M} ( Σ_{i=1}^{n} w_i L_{ik} − x )⁺ = min_w CVaR_α(w)

subject to the constraints w_1 + ••• + w_n = 1 and w_1, ..., w_n ≥ 0, and possibly some other linear constraints. The optimal value is the minimum CVaR of the portfolio, the optimal solution w* is the optimal weight of the securities, and x* is the optimal VaR of the portfolio. Since the objective function is not differentiable, Rockafellar and Uryasev (2002) suggest to solve it either with the nonsmooth optimization method (n + 1 variables) or with the LP method (M + n + 1 variables). Since we know how to compute CVaR_α(w) and ∇_w CVaR_α(w) explicitly for every w, we may also use other optimization methods, such as the LANCELOT method of multipliers (Conn et al., 1992), to find the optimal weight and the minimum CVaR portfolio.
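For concreteness, the (M + n + 1)-variable LP can be assembled directly. The sketch below uses scipy's linprog and is our own illustration (variable ordering x, w, z and the toy data are ours, not the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(sample_losses, alpha):
    """Minimum-CVaR portfolio via the Rockafellar-Uryasev LP.
    Variables ordered (x, w_1..w_n, z_1..z_M):
      minimize  x + g * sum_k z_k,            g = 1/((1-alpha)*M)
      s.t.      z_k >= sum_i w_i L_{ik} - x,  z_k >= 0,
                sum_i w_i = 1,  w_i >= 0."""
    M, n = sample_losses.shape
    g = 1.0 / ((1.0 - alpha) * M)
    c = np.concatenate(([1.0], np.zeros(n), np.full(M, g)))
    # inequality rows: -x + sum_i w_i L_{ik} - z_k <= 0
    A_ub = np.hstack([-np.ones((M, 1)), sample_losses, -np.eye(M)])
    b_ub = np.zeros(M)
    A_eq = np.concatenate(([0.0], np.ones(n), np.zeros(M)))[None, :]
    b_eq = [1.0]
    bounds = [(None, None)] + [(0, None)] * (n + M)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[1:1 + n], res.x[0], res.fun   # weights, VaR, minimum CVaR

# toy check: asset 2's loss is asset 1's plus a constant, so all weight
# should go to asset 1 (these sample data are illustrative only)
rng = np.random.default_rng(1)
base = rng.standard_normal(200)
losses = np.column_stack([base, base + 1.0])
w, var_p, cvar_p = min_cvar_portfolio(losses, 0.9)
print(w, var_p, cvar_p)
```

The LP grows with the sample size M, which is the authors' motivation for the explicit formulas of Theorem 1 and gradient-based alternatives.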

3. Jump liquidity risk processes

Let (Ω, F, P) be a probability space and F_t the filtration satisfying the usual conditions. Let S_t be the discounted mid-price of an asset following the GBM process (3). In Zheng (2006), it is suggested that the yield spread of corporate bonds is influenced by both credit risk and liquidity risk, the latter modelled with Poisson jump events. This motivates us to assume that the liquidity discount factor X_t of the asset price at time t follows a jump-diffusion process:

dX_t = μ(t, X_t)dt + σ(t, X_t)dB_t + X_{t−}dQ_t   (7)

where B_t is a standard Brownian motion, Q_t = Σ_{i=1}^{N_t} Y_i is a marked point process, N_t is a Poisson process with intensity λ, and the Y_i are independent and identically distributed random variables. Assume that W_t, B_t, N_t and the Y_i are independent of each other, and adapted to the filtration F_t. Assume also that μ and σ satisfy the conditions (e.g., Lipschitz continuity and linear growth) that guarantee the existence of a unique strong solution to (7).

The discounted bid-price at time T is given by S_T X_T and the discounted loss of liquidating the asset at time T is given by:

L = S_0 X_0 − S_T X_T.

The LVaR and the LCVaR of L at level α are defined by (1) and (2), respectively. Since S_T = S_0 e^{−(1/2)σ²T + σW_T}, we have by conditioning on W_T that:

P(L ≤ y) = ∫_R P( X_T ≥ (X_0 − y/S_0) e^{(1/2)σ²T − σ√T z} ) dΦ(z).

Let τ_i be the ith jump time. Conditional on j jumps over the interval [0, T], i.e. N_T = j, the joint density function of τ_1, ..., τ_j is given by:

f(u_1, ..., u_j | N_T = j) = j! T^{−j} 1_{{0<u_1<•••<u_j<T}}.

Let X_t^{s,x} denote the strong solution to the SDE:

dX_t = μ(t, X_t)dt + σ(t, X_t)dB_t   (8)

with the initial condition X_s = x, and let F_t^{s,x}(y) = P(X_t^{s,x} ≤ y) be the corresponding distribution function. If there is no jump in the interval [0, T] then the conditional distribution of X_T is simply given by P(X_T ≤ y | N_T = 0) = F_T^{0,X_0}(y). We have by conditioning on the number of jumps N_T that:

P(X_T ≤ y) = Σ_{j=0}^{∞} P(N_T = j) P(X_T ≤ y | N_T = j),

by conditioning on the jump times τ_1, ..., τ_{N_T} that:

P(X_T ≤ y | N_T = j) = ∫ ••• ∫ f(u_1, ..., u_j | N_T = j) P(X_T ≤ y | N_T = j, τ_i = u_i, i = 1, ..., j) du_j ••• du_1,   (9)

and by conditioning on the jump sizes Y_1, ..., Y_{N_T} that:

P(X_T ≤ y | N_T = j, τ_i = u_i, i = 1, ..., j)
= E_{{Y_i}_{i=1}^{j}} [ ∫_R ••• ∫_R F_T^{u_j,(1+Y_j)x_j}(y) dF_{u_j}^{u_{j−1},(1+Y_{j−1})x_{j−1}}(x_j) ••• dF_{u_1}^{0,X_0}(x_1) ].

OU jump diffusion process. Assume that X_t follows a mean-reverting Ornstein-Uhlenbeck jump diffusion process (7) with μ(t, x) = k(θ − x) and σ(t, x) = σ̃, where k, θ, σ̃ are constants. It is known that the solution to (8) is given by:

X_t^{s,x} = x e^{−k(t−s)} + θ(1 − e^{−k(t−s)}) + σ̃ e^{−kt} ∫_s^t e^{ku} dB_u   (10)

and the distribution function of X_t^{s,x} is given by:

F_t^{s,x}(y) = Φ( (y − x e^{−k(t−s)} − θ(1 − e^{−k(t−s)})) / √(σ̃²(1 − e^{−2k(t−s)})/(2k)) ).   (11)

We can express X_T explicitly as follows.

Theorem 2. Let the number of jumps in the interval (0, T) be N_T and the jump times be 0 < τ_1 < ••• < τ_{N_T} < T, with τ_0 = 0 and τ_{N_T+1} = T. Then X_T of the OU jump diffusion process is given by:

X_T = m_{N_T} + σ̃ e^{−kT} Σ_{n=1}^{N_T+1} U_{n,N_T} ∫_{τ_{n−1}}^{τ_n} e^{kt} dB_t   (12)

where U_{n,j} = Π_{i=n}^{j} (1 + Y_i) for n ≤ j, U_{n,j} = 1 if n > j by convention, and m_j = U_{1,j} X_0 e^{−kT} + θ e^{−kT} Σ_{n=1}^{j+1} U_{n,j}(e^{kτ_n} − e^{kτ_{n−1}}). The conditional probability in (9) is equal to:

P(X_T ≤ y | τ_i = u_i, i = 1, ..., N_T) = E[ Φ( (y − m_{N_T}) / s̄ ) ]   (13)

where the expectation is taken over the jump sizes {Y_i}_{i=1}^{N_T} and s̄² = (σ̃²/(2k)) e^{−2kT} Σ_{n=1}^{N_T+1} U_{n,N_T}² (e^{2kτ_n} − e^{2kτ_{n−1}}).

Proof. See Appendix.

CIR jump diffusion process. Assume that X_t follows a mean-reverting Cox-Ingersoll-Ross jump diffusion process (7) with μ(t, x) = k(θ − x) and σ(t, x) = σ̃√x. It is known that there is no closed-form solution to (8) (Cox et al., 1985) and that the distribution function of X_t^{s,x} is a noncentral χ² distribution:

F_t^{s,x}(y) = χ²( 4ky / (σ̃²(1 − e^{−k(t−s)})); 4kθ/σ̃², 4kx / (σ̃²(e^{k(t−s)} − 1)) )   (14)

where χ²(x; ν, λ) is the distribution function of a noncentral χ² random variable with ν degrees of freedom and noncentrality parameter λ.
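Equation (14) makes exact transition sampling straightforward, since a scaled X_t is a noncentral χ² draw. A scipy sketch follows; the function name, the size argument, and the default parameters (taken from the numerical tests of Section 4) are ours:

```python
import numpy as np
from scipy.stats import ncx2

def cir_step(rng, x, s, t, k=1.0, theta=0.98, sig=0.02, size=None):
    """Exact draw of X_t given X_s = x for the CIR diffusion, via the
    noncentral chi-square form of (14): c*X_t ~ ncx2(df, nc) with
    c = 4k/(sig^2 (1 - e^{-k(t-s)})), df = 4k*theta/sig^2,
    nc = 4k*x/(sig^2 (e^{k(t-s)} - 1))."""
    dt = t - s
    c = 4.0 * k / (sig**2 * (1.0 - np.exp(-k * dt)))
    df = 4.0 * k * theta / sig**2
    nc = 4.0 * k * x / (sig**2 * (np.exp(k * dt) - 1.0))
    return ncx2.rvs(df, nc, size=size, random_state=rng) / c

rng = np.random.default_rng(7)
xs = cir_step(rng, 1.0, 0.0, 0.04, size=20000)
print(np.mean(xs))   # close to x*e^{-k dt} + theta*(1 - e^{-k dt})
```

A quick check on the draws is the mean-reversion identity E[X_t | X_s = x] = x e^{−kΔ} + θ(1 − e^{−kΔ}), which the sample mean should reproduce.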

Remark. The compensated GBM jump-diffusion process is given by (7) with μ(t, x) = −βλx and σ(t, x) = σ̃x, where β = E(Y_i). Although it is simple and has a closed-form solution, it is not suitable for modelling the liquidity discount factor, as X_t increases (or decreases) at rate −βλ (ignoring the diffusion effect) between jump times, which is not in line with the empirical observation that the bid-ask spread is stable and relatively flat in a normal market. To remove the obvious trend, one has to set the drift to zero, but in doing so X_t is no longer a martingale and is unlikely to move back to its original level after jumps, which is again at odds with the empirical observation that the liquidity discount factor tends to recover to the normal market level after a market crash. Owing to these reasons we do not use the GBM jump diffusion process to model the liquidity discount factor process X_t.

Figure 1 displays sample paths of GBM, OU, and CIR jump diffusion processes. It is obvious that sample paths for the GBM jump diffusion process either have a clear trend between jumps if there is a compensator in the SDE or have no mean reversion after jumps if there is no compensator. Sample paths for OU and CIR jump diffusion processes are similar and both display the mean reversion property as expected.

Figure 1. Sample paths of GBM (with and without compensator), OU, and CIR jump diffusion processes. Data: X_0 = 1, k = 2, θ = 0.98, σ̃ = 0.1, λ = 2, Y_i ~ U[−0.5, −0.2]
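Figure 1's qualitative behaviour (sudden downward jumps followed by gradual mean reversion) is easy to reproduce. Below is a crude Euler-scheme sketch of the OU jump diffusion with the figure's parameters; the paper itself simulates the processes exactly, so this discretized version, with our own function name, is only for visual intuition:

```python
import numpy as np

def ou_jump_path(x0=1.0, k=2.0, theta=0.98, sig=0.1, lam=2.0,
                 T=5.0, n_steps=5000, seed=0):
    """Euler discretization of dX = k(theta - X)dt + sig dB + X_{t-} dQ,
    with jump sizes Y ~ U[-0.5, -0.2] (the Figure 1 parameters)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        jump = 0.0
        if rng.random() < lam * dt:                 # Poisson arrival in [t, t+dt)
            jump = x[i] * rng.uniform(-0.5, -0.2)   # X_{t-} * Y
        x[i + 1] = (x[i] + k * (theta - x[i]) * dt
                    + sig * np.sqrt(dt) * rng.standard_normal() + jump)
    return x

path = ou_jump_path()
print(path.min(), path.max())
```

Plotting `path` (e.g. with matplotlib) shows the liquidity discount factor hovering near θ, dropping sharply at jump times, and climbing back at rate k.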

Remark. In between jumps the OU process X_t is driven by Brownian motion and there is a positive probability that X_t can be greater than 1 or less than 0; this is due to the nature of the Brownian motion, a well-known phenomenon for Gaussian interest rate models (Hull, 2000). To keep the liquidity discount factor process X_t within the range of 0 and 1, we may use a reflected stochastic process. For example, the reflected OU process is modelled by:

dX_t = k(θ − X_t)dt + σ̃dW_t + dL_t − dU_t,

where both L and U are continuous nondecreasing processes with L_0 = U_0 = 0 which increase only on the sets {t ∈ R_+ : X_t = 0} and {t ∈ R_+ : X_t = 1}, respectively. The reflected process X_t is guaranteed to stay between 0 and 1. The other possibility is to define the liquidity discount factor process as an exponential process X_t = X_0 exp(−Y_t), where Y_t is a basic affine process:

dY_t = k(ȳ − Y_t)dt + σ̃√(Y_t) dB_t + J dN_t,

where N_t is a Poisson process with intensity λ and the jump size J is an exponential random variable with mean γ. All parameters k, ȳ, σ̃, λ, γ are constants (Duffie and Singleton, 2003). Since Y_t ≥ 0 a.s., the liquidity discount factor process X_t takes values in the range of 0 and 1. If there is a jump of size j at time t then the liquidity discount factor jumps downwards from X_{t−} to X_t = e^{−j}X_{t−}.

4. Numerical tests

To find numerical values of LVaR and LCVaR one may apply the results of Rockafellar and Uryasev (2002) to solve the convex optimization problem (4) with the Monte Carlo method. In fact, if S_T^i and X_T^i, i = 1, ..., M, are samples of the random variables S_T and X_T, set L_i = S_0X_0 − S_T^i X_T^i, sort the L_i in decreasing order, and apply Theorem 1 to find LVaR_α and LCVaR_α.

Since S_T follows a GBM process, it is easy to generate S_T^i. To generate X_T^i, we need to know the distribution of X_T, which is known for the mean reversion OU and CIR processes. In each simulation run, we first generate the jump times τ_i and jump sizes Y_i, i = 1, ..., N_T, in the interval [0, T]. If X_t follows an OU jump diffusion process, then we generate a further N_T + 1 independent standard normal random variables Z_n, n = 1, ..., N_T + 1, and compute X_T by (12) with the Itô integral:

∫_{τ_{n−1}}^{τ_n} e^{kt} dB_t = √((e^{2kτ_n} − e^{2kτ_{n−1}})/(2k)) Z_n.

If X_t follows a CIR jump diffusion process, then we generate recursively a further N_T + 1 noncentral χ² variables X_{τ_{i+1}} from the noncentral χ² distribution function (14) with s = τ_i, x = X_{τ_i}(1 + Y_i) and t = τ_{i+1}, for i = 0, ..., N_T, and finally set X_T = X_{τ_{N_T+1}}. (Here, we denote X_{τ_0}(1 + Y_0) = X_0, τ_0 = 0 and τ_{N_T+1} = T.) Noncentral χ² random variables can be generated with the algorithm discussed in Glasserman (2003).
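The OU recipe above (a Poisson number of jumps, uniform jump times, and the closed-form variance of each Itô integral) can be sketched as follows; the function name is ours, and the default parameters are those of the numerical tests:

```python
import numpy as np

def ou_xt_exact(rng, x0=1.0, k=1.0, theta=0.98, sig=0.02, lam=0.2,
                T=0.04, ylow=-0.5, yhigh=-0.2):
    """One exact draw of X_T for the OU jump diffusion, using Theorem 2 and
    int_{t_{n-1}}^{t_n} e^{kt} dB_t = sqrt((e^{2k t_n}-e^{2k t_{n-1}})/(2k)) Z_n."""
    j = rng.poisson(lam * T)                                  # N_T
    taus = np.concatenate(([0.0], np.sort(rng.uniform(0.0, T, j)), [T]))
    Y = rng.uniform(ylow, yhigh, j)                           # jump sizes
    U = np.ones(j + 1)                                        # U[n-1] = U_{n,j}
    for i in range(j - 1, -1, -1):
        U[i] = (1.0 + Y[i]) * U[i + 1]                        # prod_{i=n}^{j}(1+Y_i)
    ekt = np.exp(k * taus)
    m = (U[0] * x0 * np.exp(-k * T)
         + theta * np.exp(-k * T) * np.sum(U * (ekt[1:] - ekt[:-1])))
    e2kt = np.exp(2.0 * k * taus)
    stds = np.sqrt((e2kt[1:] - e2kt[:-1]) / (2.0 * k))        # Ito-integral stds
    Z = rng.standard_normal(j + 1)
    return m + sig * np.exp(-k * T) * np.sum(U * stds * Z)

rng = np.random.default_rng(42)
xs = np.array([ou_xt_exact(rng) for _ in range(10000)])
print(xs.mean())
```

Feeding M such draws, together with GBM samples of S_T, into the generate-and-sort procedure of Theorem 1 yields LVaR/LCVaR estimates of the kind reported in Section 4.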

Table I lists the values of LVaR and LCVaR with the OU and CIR jump diffusion processes. The parameters used represent a market in which the liquidity premium is small and stable (mean reversion level close to 1 and volatility close to 0), the liquidity dry-up event is rare (once every five years on average), the potential liquidity loss is severe (20-50 percent of asset value), and the holding period is two weeks. The number of simulations is M = 100,000.

Table I clearly shows the following outcomes:

• The OU and CIR jump diffusion processes produce very similar values for LVaR and LCVaR, which implies that one can essentially use either of these two models to compute liquidity adjusted risk measures.

• LCVaR is much larger than LVaR at the 0.99 percentile, which implies that LVaR can still seriously underestimate the potential loss even after the jump liquidity risk is included; LCVaR is a more realistic potential loss indicator.

• LVaR and LCVaR are close at 0.999 percentile, which implies these two risk measures produce similar results at the extreme tail part of the loss distribution.

Table II shows that the jump intensity λ greatly affects the values of LVaR and LCVaR. When there are no jumps (λ = 0), LVaR and LCVaR are close to VaR and CVaR; the difference is mainly due to the CIR mean reversion diffusion process for the liquidity

Table I. LVaR and LCVaR for OU and CIR jump diffusion processes

            α = 0.99 (CIR / OU)    α = 0.999 (CIR / OU)
VaR         8.96                   11.70
LVaR        11.05 / 11.02          45.63 / 45.45
CVaR        10.18                  12.66
LCVaR       29.90 / 29.66          48.30 / 48.13

Notes: Data: S_0 = 100, σ = 0.2, X_0 = 1, k = 1, θ = 0.98, σ̃ = 0.02, λ = 0.2, T = 0.04, Y_i ~ U[−0.5, −0.2]

discount factor. When λ increases, both LVaR and LCVaR increase at different speeds. For example, at the 0.99 percentile, when λ = 0.2, LVaR is increased by 23 percent over the standard VaR, while LCVaR is increased by 194 percent over the standard CVaR, and the ratio of LCVaR to LVaR is about 2.7. This implies that the traditional VaR and CVaR are inappropriate risk measures in the presence of jump liquidity risk, and that one should take a cautious view of the loss suggested by LVaR, as it may seriously underestimate the potential average loss for rare jump liquidity events.

Table III shows that as the holding period T increases both LVaR and LCVaR increase, and LVaR gives a good indication of the average loss. When the holding period T is very short (e.g., one day) LVaR, VaR, and CVaR all suggest a small loss. However, LCVaR points to a much larger loss. At the 0.999 percentile, the ratio of LCVaR to LVaR is 6.0, which implies that if one manages the risk with the liquid assets suggested by VaR/CVaR/LVaR, one is possibly unable to withstand the potential severe loss. This sheds some light on the cause of the fall of LTCM, which had great difficulty in raising sufficient cash in a short spell of time to meet margin calls by liquidating assets in a market where liquidity had essentially disappeared.

We have also tested cases with different mean-reversion rates k, mean-reversion levels θ, and volatilities σ̃. We find that LVaR and LCVaR are not very sensitive to changes of these parameters. This is because over a short period (two weeks) the change caused by the diffusion part of the liquidity discount factor process is small, but if there is a jump liquidity event, then there is little time to recover and the loss is likely to be substantial. On the other hand, LVaR and LCVaR are sensitive to the magnitude of the jump size.

5. Conclusion

We have suggested in this paper some plausible stochastic processes to model liquidity risk and discussed their impact on VaR and CVaR. We have shown that VaR, CVaR,

Table II. LVaR and LCVaR for the CIR jump diffusion model with varying λ

            α = 0.99                   α = 0.999
λ           0       0.2     1          0       0.2     1
LVaR        9.07    11.03   42.09      11.82   45.62   …
LCVaR       10.29   29.89   46.83      12.80   48.29   56.86

Notes: Data: S_0 = 100, σ = 0.2, X_0 = 1, k = 1, θ = 0.98, σ̃ = 0.02, T = 0.04, Y_i ~ U[−0.5, −0.2]

Table III. LVaR and LCVaR for the CIR jump diffusion model with varying T

            α = 0.99                   α = 0.999
T           0.0028  0.04    1          0.0028  0.04    1
VaR         2.44    8.96    38.45      3.22    11.70   47.17
LVaR        2.48    11.03   51.55      3.48    45.62   63.66
CVaR        2.79    10.18   42.37      3.50    12.66   49.94
LCVaR       4.59    29.89   57.00      20.91   48.29   67.38

Notes: Data: S_0 = 100, σ = 0.2, X_0 = 1, k = 1, θ = 0.98, σ̃ = 0.02, λ = 0.2, Y_i ~ U[−0.5, −0.2]

and LVaR can seriously underestimate the potential loss over a short holding period for rare jump liquidity events. This has a significant implication for short-term risk management, i.e. one should keep a much larger liquid asset reserve than the suggested VaR value to withstand the potential severe loss if a rare "bad" event does happen. LTCM's fall is a recent example illustrating such a need. A better risk measure is the LCVaR, which gives a more realistic loss estimation in the presence of the liquidity risk.

We have also suggested a simple and fast Monte Carlo method to compute approximate VaR and CVaR without having to solve nonlinear equations and to integrate tail expectations. The only work involved is to generate and sort samples of the loss distribution, which is sufficient to find VaR and CVaR of all percentiles, and their marginal values for a portfolio of securities.

Many questions remain to be answered, especially in the areas of calibration, empirical studies, and correlation modelling. For example, how should the jump liquidity risk process be calibrated to market data? How good are these models in explaining market liquidity-crunch and crash behaviour? What is the correlation of the liquidity risk with other market risks such as credit risk? We will focus on these questions in our future work.

References

Artzner, P., Delbaen, F., Eber, J.M. and Heath, D. (1999), "Coherent measures of risk", Mathematical Finance, Vol. 9, pp. 203-28.

Black, F. (1971), "Towards a fully automated exchange: Part 1", Financial Analysts Journal, Vol. 27, pp. 29-34.

Conn, A.R., Gould, N.I.M. and Toint, P.L. (1992), LANCELOT: A FORTRAN Package for Large-scale Nonlinear Optimization, No. 17 in Springer Series in Computational Mathematics, Springer, New York, NY.

Cox, J.C., Ingersoll, J.E. Jr. and Ross, S.A. (1985), "A theory of the term structure of interest rates", Econometrica, Vol. 53, pp. 385-408.

Duffie, D. and Singleton, K.J. (2003), Credit Risk: Pricing, Measurement, and Management, Princeton University Press, Princeton, NJ.

Dunbar, N. (1998), "Meriwether's meltdown", Risk, October, pp. 32-6.

Glasserman, P. (2003), Monte Carlo Methods in Financial Engineering, Springer, New York, NY.

Hull, J. (2000), Options, Futures, and Other Derivatives, Prentice-Hall, London.

Karatzas, I. and Shreve, S.E. (1988), Brownian Motion and Stochastic Calculus, Springer, New York, NY.

Lawrence, C. and Robinson, G. (1996), "Liquidity, dynamic hedging and VAR", Risk Management for Financial Institutions, pp. 63-72.

Rockafellar, R.T. and Uryasev, S. (2002), "Conditional value-at-risk for general loss distributions", Journal of Banking & Finance, Vol. 26, pp. 1443-71.

Zheng, H. (2006), "Interaction of credit and liquidity risks: modelling and valuation", Journal of Banking & Finance, Vol. 30, pp. 391-407.

Further reading

Nocedal, J. and Wright, S.J. (1999), Numerical Optimization, Springer, New York, NY.

Appendix: Proofs

Proof of Theorem 1. The dual problem for (5) is:

$$\max\; L_1 y_1 + \cdots + L_M y_M \quad \text{s.t.}\quad y_1 + \cdots + y_M = 1,\quad 0 \le y_i \le g,\; i = 1, \ldots, M. \tag{A1}$$

The choice of the integer $N$ ensures that $Ng \le 1$ and $(N+1)g > 1$. Since $L_1 \ge \cdots \ge L_M$ we know the optimal solution to (A1) is:

$$y_1^* = y_2^* = \cdots = y_N^* = g, \quad y_{N+1}^* = 1 - Ng, \quad y_{N+2}^* = y_{N+3}^* = \cdots = y_M^* = 0. \tag{A2}$$

The Lagrangian function for (A1) is given by:

$$\mathcal{L} = -\sum_{i=1}^{M} L_i y_i + x\left(\sum_{i=1}^{M} y_i - 1\right) + \sum_{i=1}^{M} z_i (y_i - g) - \sum_{i=1}^{M} m_i y_i,$$

where $x$, $z_i$, $m_i$, $i = 1, \ldots, M$, are Lagrange multipliers. The optimal solution to the dual problem (A1) is characterized by the following Kuhn-Tucker conditions:

$$-L_i + x + z_i - m_i = 0, \tag{A3}$$

$$z_i (y_i - g) = 0, \tag{A4}$$

$$m_i y_i = 0, \tag{A5}$$

$$z_i \ge 0, \tag{A6}$$

$$m_i \ge 0, \tag{A7}$$

for $i = 1, \ldots, M$. Since the optimal solution to the dual problem is (A2), we can find the optimal Lagrange multipliers $x$, $z_i$ and $m_i$, $i = 1, \ldots, M$, from the Kuhn-Tucker conditions as follows:

(a) For $i = 1, \ldots, N$: $y_i^* = g > 0$, so (A5) gives $m_i = 0$ and (A3) reduces to $-L_i + x + z_i = 0$, i.e. $z_i = L_i - x$.

(b) For $i = N+2, \ldots, M$: $y_i^* = 0 < g$, so (A4) gives $z_i = 0$ and (A3) reduces to $m_i = x - L_i$.

(c) For $i = N+1$: $y_{N+1}^* = 1 - Ng < g$, so (A4) gives $z_{N+1} = 0$.

If $Ng < 1$, then $y_{N+1}^* > 0$, so (A5) gives $m_{N+1} = 0$ and (A3) reduces to $-L_{N+1} + x + z_{N+1} = 0$; with $z_{N+1} = 0$ from (c) this yields $x = L_{N+1}$, and $z_i^* = L_i - L_{N+1}$, $i = 1, \ldots, N$, from (a). The optimal solution to the primal problem (5) is unique: $x^* = L_{N+1}$, $z_i^* = L_i - L_{N+1}$, $i = 1, \ldots, N$, and $z_i^* = 0$, $i = N+1, \ldots, M$. If $1 - Ng = 0$, then:

$$y_{N+1}^* = 0 \;\overset{\text{(A3)}}{\Longrightarrow}\; x + z_{N+1} = L_{N+1} + m_{N+1} \;\overset{\text{(A7)}}{\Longrightarrow}\; x + z_{N+1} \ge L_{N+1} \;\overset{\text{(c)}}{\Longrightarrow}\; x \ge L_{N+1}.$$

Since (a) and (A6) imply $z_i^* = L_i - x \ge 0$, $i = 1, \ldots, N$, $x$ must also satisfy $x \le L_N$, as $\{L_i\}$ is a non-increasing sequence. The optimal solution to the primal problem (5) is not unique: $x^* \in [L_{N+1}, L_N]$, $z_i^* = L_i - x^*$, $i = 1, \ldots, N$, and $z_i^* = 0$, $i = N+1, \ldots, M$.

Rockafellar and Uryasev (2002) show that VaR is equal to the left endpoint of the optimal solution set, which implies $\mathrm{VaR}_\alpha = L_{N+1}$ whether the primal problem has a unique solution or not, and $\mathrm{CVaR}_\alpha = (1 - Ng)L_{N+1} + g(L_1 + \cdots + L_N)$ is the corresponding optimal value. □

Proof of Theorem 2. We first use induction to prove (12). When $N_T = 0$, i.e. there is no jump in the interval $[0, T]$, (12) is the same as (10). Now assume that (12) is correct for $N_T = j - 1$ ($j \ge 1$); we need only show that (12) is correct for $N_T = j$ too. Since there is no jump between the $j$th jump time $\tau_j$ and the terminal time $T$, the solution $X_T$ is given by:

$$X_T = X_{\tau_j} e^{-k(T - \tau_j)} + \theta\left(1 - e^{-k(T - \tau_j)}\right) + \delta e^{-kT} \int_{\tau_j}^{T} e^{ks}\, dB_s. \tag{A8}$$

On the other hand, since $\tau_j$ is a jump time and there are only $j - 1$ jumps in the interval $[0, \tau_j)$, the induction assumption implies:

$$\begin{aligned}
X_{\tau_j} &= (1 + Y_j) X_{\tau_j^-} \\
&= (1 + Y_j)\left[\prod_{n=1}^{j-1}(1 + Y_n)\, X_0 e^{-k\tau_j} + \theta e^{-k\tau_j} \sum_{n=1}^{j} \prod_{m=n}^{j-1}(1 + Y_m)\left(e^{k\tau_n} - e^{k\tau_{n-1}}\right) + \delta e^{-k\tau_j} \sum_{n=1}^{j} \prod_{m=n}^{j-1}(1 + Y_m) \int_{\tau_{n-1}}^{\tau_n} e^{ks}\, dB_s\right] \\
&= \prod_{n=1}^{j}(1 + Y_n)\, X_0 e^{-k\tau_j} + \theta e^{-k\tau_j} \sum_{n=1}^{j} \prod_{m=n}^{j}(1 + Y_m)\left(e^{k\tau_n} - e^{k\tau_{n-1}}\right) + \delta e^{-k\tau_j} \sum_{n=1}^{j} \prod_{m=n}^{j}(1 + Y_m) \int_{\tau_{n-1}}^{\tau_n} e^{ks}\, dB_s,
\end{aligned}$$

where $\tau_0 = 0$ and an empty product equals 1.

Substituting $X_{\tau_j}$ into (A8), we see that (12) holds true for $N_T = j$.

Given the jump sizes $Y_1, \ldots, Y_j$, $X_T$ defined in (12) is a normal variable with mean $m_j$ and variance $d_j^2$; here we have used the independent-increment property of Brownian motion and the Itô isometry. The conditional probability (9) is therefore given by (13).
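As an illustrative numerical check of this conditional normality, the sketch below simulates a mean-reverting diffusion $dX_t = k(\theta - X_t)\,dt + \delta\, dB_t$ with multiplicative jumps $X_{\tau_n} = (1 + Y_n) X_{\tau_n^-}$ at fixed jump times, using exact Ornstein-Uhlenbeck transitions between jumps, and compares the sample mean and variance of $X_T$ with the closed-form conditional mean and variance implied by (12). All parameter values and function names are our own illustrative choices, not values from the paper.

```python
import math
import random

# Illustrative parameters (hypothetical choices, not from the paper).
k, theta, delta, X0, T = 0.5, 1.0, 0.2, 0.8, 1.0
jump_times = [0.3, 0.7]        # fixed jump times 0 < u_1 < u_2 < T
jump_sizes = [0.10, -0.05]     # relative jump sizes Y_1, Y_2

def closed_form_mean_var():
    """Conditional mean and variance of X_T given the jumps, in the
    spirit of (12): u_0 = 0, u_{j+1} = T, and the product over
    (1 + Y_m) for m = n, ..., j (empty product = 1)."""
    u = [0.0] + jump_times + [T]
    j = len(jump_times)
    U1j = 1.0
    for y in jump_sizes:
        U1j *= 1.0 + y
    mean = U1j * X0 * math.exp(-k * T)
    var = 0.0
    for n in range(1, j + 2):                  # segments (u_{n-1}, u_n]
        U = 1.0
        for y in jump_sizes[n - 1:]:           # prod_{m=n}^{j} (1 + Y_m)
            U *= 1.0 + y
        mean += theta * math.exp(-k * T) * U * (math.exp(k * u[n]) - math.exp(k * u[n - 1]))
        var += delta ** 2 * math.exp(-2 * k * T) * U ** 2 \
               * (math.exp(2 * k * u[n]) - math.exp(2 * k * u[n - 1])) / (2 * k)
    return mean, var

def simulate_XT(rng):
    """One exact draw of X_T: Ornstein-Uhlenbeck transitions between the
    fixed jump times, with a multiplicative jump (1 + Y_n) at each u_n
    (the final segment up to T carries no jump, Y = 0)."""
    x, t = X0, 0.0
    for u_n, y_n in zip(jump_times + [T], jump_sizes + [0.0]):
        dt = u_n - t
        m = x * math.exp(-k * dt) + theta * (1.0 - math.exp(-k * dt))
        s = math.sqrt(delta ** 2 * (1.0 - math.exp(-2 * k * dt)) / (2 * k))
        x = (1.0 + y_n) * (m + s * rng.gauss(0.0, 1.0))
        t = u_n
    return x

rng = random.Random(0)
samples = [simulate_XT(rng) for _ in range(20000)]
mc_mean = sum(samples) / len(samples)
mc_var = sum((x - mc_mean) ** 2 for x in samples) / (len(samples) - 1)
cf_mean, cf_var = closed_form_mean_var()
```

With these parameters the simulated mean and variance agree with the closed form to within Monte Carlo error, consistent with $X_T$ being conditionally normal given the jumps.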

We can also prove (13) by substituting (11) directly into (9). First, note that if $Z_1 \sim N(m_1, \sigma_1^2)$ and $Z_2 \sim N(0, \sigma_2^2)$ are two independent normal variables, then $cZ_1 + Z_2$ ($c$ a constant) is a normal variable with mean $cm_1$ and variance $c^2\sigma_1^2 + \sigma_2^2$, and:

$$P(cZ_1 + Z_2 \le y) = \Phi\left(\frac{y - cm_1}{\sqrt{c^2\sigma_1^2 + \sigma_2^2}}\right).$$

On the other hand, by conditioning on $Z_1$ we have:

$$P(cZ_1 + Z_2 \le y) = \int_{-\infty}^{\infty} \Phi\left(\frac{y - cz}{\sigma_2}\right) d\Phi\left(\frac{z - m_1}{\sigma_1}\right).$$

Therefore, the following relation holds:

$$\int_{-\infty}^{\infty} \Phi\left(\frac{y - cz}{\sigma_2}\right) d\Phi\left(\frac{z - m_1}{\sigma_1}\right) = \Phi\left(\frac{y - cm_1}{\sqrt{c^2\sigma_1^2 + \sigma_2^2}}\right). \tag{A9}$$

With the expression (11) for $F^{s,t,x}(y)$ and the relation (A9) we get:

$$\int_{-\infty}^{\infty} F^{u_j, T, (1+Y_j)x_j}(y)\, dF^{u_{j-1}, u_j, (1+Y_{j-1})x_{j-1}}(x_j) = \Phi\left(\frac{y - e^{-k(T-u_{j-1})} U_{j,j}(1+Y_{j-1})x_{j-1} - \theta e^{-kT} \sum_{n=j}^{j+1} U_{n,j}\left(e^{ku_n} - e^{ku_{n-1}}\right)}{\sqrt{\delta^2 e^{-2kT} \sum_{n=j}^{j+1} U_{n,j}^2\left(e^{2ku_n} - e^{2ku_{n-1}}\right)/(2k)}}\right),$$

where $U_{n,j} = \prod_{m=n}^{j}(1+Y_m)$, with the conventions $u_{j+1} = T$ and $U_{j+1,j} = 1$. Repeating the same argument, also noting $u_0 = 0$, $x_0 = X_0$, and $Y_0 = 0$, we get:

$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} F^{u_j, T, (1+Y_j)x_j}(y)\, dF^{u_{j-1}, u_j, (1+Y_{j-1})x_{j-1}}(x_j) \cdots dF^{0, u_1, X_0}(x_1) = \Phi\left(\frac{y - m_j}{d_j}\right),$$

which is exactly (13). □
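The mixing relation (A9) is also easy to verify numerically. The sketch below estimates the left-hand integral by Monte Carlo over $Z_1$ and compares it with the closed-form right-hand side; all parameter values are arbitrary illustrative choices of ours.

```python
import math
import random

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Arbitrary illustrative parameters for the check.
c, m1, s1, s2, y = 0.7, 0.3, 1.2, 0.8, 1.0

# Left side of (A9): E[Phi((y - c*Z1)/s2)] with Z1 ~ N(m1, s1^2),
# estimated by Monte Carlo in place of the integral against dPhi((z - m1)/s1).
rng = random.Random(42)
n = 200000
lhs = sum(Phi((y - c * rng.gauss(m1, s1)) / s2) for _ in range(n)) / n

# Right side: the cdf of c*Z1 + Z2 ~ N(c*m1, c^2*s1^2 + s2^2) at y.
rhs = Phi((y - c * m1) / math.sqrt(c * c * s1 * s1 + s2 * s2))
```

The two sides agree to within Monte Carlo error, which is the identity that collapses the iterated integral above into the single normal cdf (13).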


Corresponding author

Harry Zheng can be contacted at: h.zheng@imperial.ac.uk

