Workshop on Computational and Algorithmic Finance (WCAF) Session 1

Time and Date: 10:35 - 12:15 on 6th June 2016

Room: Boardroom East

Chair: A. Itkin and J. Toivanen

136 Reduced Order Models for Pricing American Options under Stochastic Volatility and Jump-Diffusion Models [abstract]
Abstract: American options can be priced by solving linear complementarity problems (LCPs) with parabolic partial(-integro) differential operators under stochastic volatility and jump-diffusion models such as the Heston, Merton, and Bates models. These operators are discretized using finite difference methods, leading to a so-called full order model (FOM). Here reduced order models (ROMs) are derived employing proper orthogonal decomposition (POD) and non-negative matrix factorization (NNMF) in order to make pricing much faster within a given range of model parameter variation. The numerical experiments demonstrate orders-of-magnitude faster pricing with ROMs.
Maciej Balajewicz, Jari Toivanen
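
The reduction step referred to above can be illustrated with a minimal POD sketch in Python: snapshots of a full-order linear system are compressed with the SVD and a Galerkin-projected system is solved instead. The operator, the parametric family of right-hand sides, and all sizes below are illustrative placeholders, not the Heston/Bates discretizations of the paper; the NNMF variant is not shown.

# Minimal proper orthogonal decomposition (POD) sketch:
# build a reduced basis from full-order snapshots via the SVD,
# then solve a small Galerkin-projected system instead of the full one.
import numpy as np

rng = np.random.default_rng(0)
n, n_snap, k = 400, 30, 8                       # FOM size, number of snapshots, ROM size

# Stand-in "full order model": a tridiagonal operator and a smooth parametric
# family of right-hand sides playing the role of the model-parameter range.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_grid = np.linspace(0.0, 1.0, n)
params = np.linspace(1.0, 5.0, n_snap)
B = np.exp(-np.outer(x_grid, params))           # right-hand sides for training
snapshots = np.linalg.solve(A, B)               # FOM solutions used for training

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :k]

# Galerkin projection: a k x k system replaces the n x n one for a new case.
b = np.exp(-x_grid * 2.7)                       # parameter value inside the training range
x_rom = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)
x_fom = np.linalg.solve(A, b)
print("relative ROM error:", np.linalg.norm(x_rom - x_fom) / np.linalg.norm(x_fom))
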
237 Implicit Predictor-Corrector Method for Pricing American Options under Regime Switching with Jumps [abstract]
Abstract: We develop and analyze a second-order implicit predictor-corrector scheme based on the exponential time differencing (ETD) method for pricing American put options under a multi-state regime-switching economy with jump-diffusion models. Our approach formulates the American option pricing problem as a set of coupled partial integro-differential equations (PIDEs), which we solve using a primitive tridiagonal linear system, while treating the complexity of the dense jump probability generator and the nonlinear regime-switching terms explicitly in time. We define both the differential and integral terms of the PIDE on the same domain and discretize the spatial derivatives using a non-uniform mesh. The American option constraint is enforced using a scaled penalty method approach to establish a conservative bound for the penalty parameter. We also provide a detailed treatment of the consistency, stability, and convergence of the proposed method, and analytically study the impact of the jump intensity, penalty and non-uniformity parameters on convergence and solution accuracy. The dynamic properties of the non-uniform mesh and the ETD approach are utilized to calibrate suitable values for the penalty and non-uniform grid parameters. The superiority of the proposed scheme over recently published methods is demonstrated through numerical examples discussing the efficiency, accuracy and reliability of the proposed approach.
Abdul Khaliq, Mohammad Rasras and Mohammad Yousuf
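
As a much simpler illustration of the penalty idea alone (not the ETD regime-switching scheme of the abstract), the sketch below prices an American put under plain Black-Scholes with an implicit Euler step and a Forsyth-Vetzal-style penalty iteration. The grid, the penalty parameter rho, and the tolerance are arbitrary assumptions.

# Penalty-method sketch for an American put under Black-Scholes.
import numpy as np

K, r, sigma, T = 100.0, 0.05, 0.3, 1.0
N, M, rho, tol = 201, 100, 1e6, 1e-8            # space nodes, time steps, penalty, tolerance

S = np.linspace(0.0, 4 * K, N)
h, dt = S[1] - S[0], T / M
payoff = np.maximum(K - S, 0.0)

# Dense Black-Scholes operator (interior rows): L V = 0.5*sigma^2*S^2*V'' + r*S*V' - r*V.
L = np.zeros((N, N))
for i in range(1, N - 1):
    a = 0.5 * sigma**2 * S[i]**2 / h**2
    b = r * S[i] / (2 * h)
    L[i, i - 1] = a - b
    L[i, i] = -2 * a - r
    L[i, i + 1] = a + b

V = payoff.copy()
I = np.eye(N)
for m in range(M):
    V_old = V.copy()
    # Penalty iteration: activate the penalty where the iterate violates the
    # early-exercise constraint V >= payoff, then re-solve until it settles.
    for _ in range(50):
        P = np.where(V < payoff, rho, 0.0)
        Amat = I - dt * L + dt * np.diag(P)
        rhs = V_old + dt * P * payoff
        Amat[0, :], Amat[0, 0], rhs[0] = 0.0, 1.0, K        # V(0, t) = K (exercise)
        Amat[-1, :], Amat[-1, -1], rhs[-1] = 0.0, 1.0, 0.0  # V(S_max, t) = 0
        V_new = np.linalg.solve(Amat, rhs)
        done = np.max(np.abs(V_new - V)) < tol * max(1.0, np.max(np.abs(V_new)))
        V = V_new
        if done:
            break

print("American put value at S = K:", np.interp(K, S, V))
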
235 Model Impact on Prices of American Options [abstract]
Abstract: Different dividend assumptions consistent with the prices of European options can lead to very different prices for American options. In this paper we study the impact of continuous versus discrete and cash versus proportional dividend assumptions on the prices of European and American options and discuss the consequences for calibration and pricing of exotic instruments.
Alexey Polishchuk
137 Fixing Risk Neutral Risk Measures [abstract]
Abstract: As per regulations and common risk management practice, the credit risk of a portfolio is managed via its potential future exposures (PFEs), expected exposures (EEs), and related measures: the expected positive exposure (EPE), effective expected exposure (EEE) and effective expected positive exposure (EEPE). Notably, firms use these exposures to set economic and regulatory capital levels, so their values have a significant impact on the capital that firms need to hold to manage their risks. Due to the growth of CVA computations, and their similarity to exposure computations, firms find it expedient to compute these exposures under the risk neutral measure. Here we show that exposures computed under the risk neutral measure are essentially arbitrary: they depend on the choice of numeraire and can be manipulated by choosing a different numeraire. The numeraire can even be chosen in such a way as to pass backtests. Even when restricting attention to commonly used numeraires, exposures can vary by a factor of two or more. As such, it is critical that these calculations be done under the real world measure, not the risk neutral measure. To help rectify the situation, we show how to exploit measure changes to efficiently compute real world exposures in a risk neutral framework, even when there is no change of measure from the risk neutral measure to the real world measure. We also develop a canonical risk neutral measure that can be used as an alternative approach to risk calculations.
Harvey Stein
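
For reference, the exposure measures named in the abstract are typically computed from simulated mark-to-market paths as in the sketch below. The simulated paths are placeholders, and the averaging of EE/EEE over the horizon follows the usual regulatory convention on a uniform date grid.

# Sketch: exposure measures (EE, PFE, EEE, EPE, EEPE) from simulated paths.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 20_000, 50, 1.0
t = np.linspace(0.0, T, n_steps + 1)

# Placeholder mark-to-market paths V[p, k]; in practice these come from the
# firm's simulation and pricing engine.
incr = np.sqrt(T / n_steps) * rng.standard_normal((n_paths, n_steps))
V = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)], axis=1)

E = np.maximum(V, 0.0)                          # exposure per path and date
EE = E.mean(axis=0)                             # expected exposure profile
PFE_95 = np.quantile(E, 0.95, axis=0)           # potential future exposure at 95%
EEE = np.maximum.accumulate(EE)                 # effective expected exposure (non-decreasing)
EPE = EE.mean()                                 # time average over the horizon (uniform grid)
EEPE = EEE.mean()                               # effective expected positive exposure
print(f"EPE = {EPE:.4f}, EEPE = {EEPE:.4f}, peak PFE95 = {PFE_95.max():.4f}")
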
336 Efficient CVA Computation by Risk Factor Decomposition [abstract]
Abstract: According to Basel III, financial institutions have to charge a Credit Valuation Adjustment (CVA) to account for a possible counterparty default. Calculating this measure is one of the big challenges in risk management. In earlier studies, future distributions of derivative values have been simulated by a combination of finite difference methods for the option valuation and Monte Carlo methods for the state space sampling of the underlying, from which the portfolio exposure and its quantiles can be estimated. By solving a forward Kolmogorov PDE for the future distribution of the underlying instead of using Monte Carlo simulation, we hope to achieve efficiency gains and better accuracy, especially in the tails of future exposures. Together with the backward Kolmogorov equation, the expected exposure and its quantiles can then be obtained directly, without the need for an extra Monte Carlo simulation. We study the applicability of PCA- and ANOVA-based dimension reduction in the context of a portfolio of risk factors. Typically, for these portfolios, a huge number of derivatives are traded on a relatively small number of risk factors. By solving a PDE for one risk factor, it is possible to value all derivatives traded on this single factor over time. However, if we want to solve a PDE for multiple risk factors, we have to deal with the curse of dimensionality. The correlation between these risk factors is often high, and therefore PCA and ANOVA are promising techniques for dimension reduction that can enable us to compute the exposure profiles of higher-dimensional portfolios. We compute lower-dimensional approximations in which only one factor is taken stochastic and all other factors follow a deterministic term structure. Next, we correct this low-dimensional approximation by two-dimensional approximations. We also look into the effect of taking higher (three-dimensional) corrections. In our results, our method is able to compute exposures (EE, EPE and ENE) and quantiles for a real portfolio driven by 10 different risk factors. This portfolio consists of cross-currency swaps, interest rate swaps and FX call and put options. The risk factors are stochastic FX rates, stochastic volatility, and stochastic domestic and foreign interest rates. The method is accurate and fast when compared to a full-scale Monte Carlo implementation.
Kees de Graaf, Drona Kandhai and Christoph Reisinger
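
As background for the quantity being computed, here is a minimal sketch of the standard unilateral CVA approximation from an expected-exposure profile, assuming a flat hazard rate, a flat discount curve, and an illustrative EE profile (none of which come from the paper).

# Sketch: unilateral CVA from an expected exposure (EE) profile,
# CVA ~= (1 - R) * sum_i D(t_i) * EE(t_i) * (S(t_{i-1}) - S(t_i)).
import numpy as np

R, hazard, r = 0.4, 0.02, 0.01                  # recovery, flat hazard rate, risk-free rate
t = np.linspace(0.0, 5.0, 21)                   # exposure dates (years)
EE = 1e6 * np.sqrt(t) * np.exp(-0.3 * t)        # placeholder EE profile

S = np.exp(-hazard * t)                         # survival probabilities
D = np.exp(-r * t)                              # discount factors
default_prob = S[:-1] - S[1:]                   # default probability per interval
CVA = (1.0 - R) * np.sum(D[1:] * EE[1:] * default_prob)
print(f"CVA ~= {CVA:,.0f}")
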

Workshop on Computational and Algorithmic Finance (WCAF) Session 2

Time and Date: 14:30 - 16:10 on 6th June 2016

Room: Boardroom East

Chair: A. Itkin and J. Toivanen

135 LSV models with stochastic interest rates and correlated jumps [abstract]
Abstract: Pricing and hedging exotic options using local stochastic volatility models has drawn serious attention over the last decade and has nowadays become almost a standard approach to this problem. In this paper we show how this framework can be extended by adding stochastic interest rates and correlated jumps in all three components to the model. We also propose a new fully implicit modification of the popular Hundsdorfer-Verwer and Modified Craig-Sneyd finite-difference schemes which provides second-order approximation in space and time, is unconditionally stable and preserves positivity of the solution, while still having linear complexity in the number of grid nodes.
Andrey Itkin
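
For orientation, the sketch below shows one step of the standard Hundsdorfer-Verwer splitting for a linear semi-discrete system u' = (A0 + A1 + A2)u, with A0 treated explicitly and A1, A2 implicitly. It is the textbook scheme, not the fully implicit, positivity-preserving modification proposed in the abstract; the matrices and the splitting parameter are illustrative placeholders.

# One Hundsdorfer-Verwer (HV) time step for u' = (A0 + A1 + A2) u.
# A0: explicit part (e.g. mixed derivatives / jumps); A1, A2: implicit directions.
# Random placeholder matrices stand in for finite-difference operators.
import numpy as np

rng = np.random.default_rng(2)
n, dt = 50, 0.01
theta = 0.5 + np.sqrt(3.0) / 6.0                # splitting parameter (illustrative choice)
A0, A1, A2 = (0.1 * rng.standard_normal((n, n)) for _ in range(3))
A = A0 + A1 + A2
I = np.eye(n)
u = rng.standard_normal(n)                      # solution at the previous time level

# Predictor: explicit Euler with the full operator.
Y0 = u + dt * (A @ u)
# Implicit corrections, one direction at a time.
Y1 = np.linalg.solve(I - theta * dt * A1, Y0 - theta * dt * (A1 @ u))
Y2 = np.linalg.solve(I - theta * dt * A2, Y1 - theta * dt * (A2 @ u))
# Second (HV) stage restoring second-order accuracy.
Yt0 = Y0 + 0.5 * dt * (A @ Y2 - A @ u)
Yt1 = np.linalg.solve(I - theta * dt * A1, Yt0 - theta * dt * (A1 @ Y2))
Yt2 = np.linalg.solve(I - theta * dt * A2, Yt1 - theta * dt * (A2 @ Y2))
u_new = Yt2
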
147 Forward option pricing using Gaussian RBFs [abstract]
Abstract: We will present a method to numerically price options by solving the Fokker-Planck equation for the conditional probability density p(s,t|s_0,t_0). This enables the pricing of several contracts with pay-offs ϕ(s,K,T) (with strike price K and time of maturity T) by integrating p(s,T|s_0,t_0) multiplied by ϕ(s,K,T) and discounting to today's price. From a numerical perspective, the initial condition for the Fokker-Planck equation is particularly challenging since it is a Dirac delta function. In [1] a closed-form expansion for the conditional probability density was introduced that is valid for small time-steps. We use this for the computation of p(s,t_0+∆t|s_0,t_0) in the first time-step. For the remaining time-steps we discretize the Fokker-Planck equation using BDF-2 in time and Radial Basis Function (RBF) approximation in space with Gaussian RBFs. Finally, the computation of the option prices from the obtained p(s,T|s_0,t_0) can be done analytically for many pay-off functions ϕ(s,K,T), due to the Gaussian RBFs. We will demonstrate the good qualities of our proposed method for European call options and barrier options. [1] Y. Aït-Sahalia, Maximum-likelihood estimation of discretely-sampled diffusions: A closed-form approximation approach, Econometrica, 70: 223–262, 2002.
Jamal Amani Rad, Josef Höök, Elisabeth Larsson and Lina von Sydow
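
The final, analytic pricing step described above can be illustrated as follows: once the density is written as a sum of Gaussian RBFs, a call price reduces to a weighted sum of Bachelier-type closed-form integrals. The density below is a toy two-term Gaussian combination rather than a Fokker-Planck solution, and eps, K, r, T are assumptions.

# Sketch: analytic call pricing against a Gaussian-RBF representation of a density.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

K, r, T, eps = 1.0, 0.02, 1.0, 4.0
centers = np.array([0.9, 1.3])
weights = np.array([0.8, 0.5])                  # RBF weights w_k

def density(s):                                  # p(s) ~ sum_k w_k exp(-eps^2 (s - c_k)^2)
    return np.sum(weights * np.exp(-eps**2 * (s - centers)**2))

# Closed form: int max(s - K, 0) exp(-(s - c)^2 / (2 sig^2)) ds
#            = sqrt(2 pi) sig [ (c - K) Phi(d) + sig phi(d) ],  d = (c - K) / sig.
sig = 1.0 / (eps * np.sqrt(2.0))
d = (centers - K) / sig
basis_int = np.sqrt(2 * np.pi) * sig * ((centers - K) * norm.cdf(d) + sig * norm.pdf(d))
price_analytic = np.exp(-r * T) * np.dot(weights, basis_int)

# Cross-check by numerical quadrature.
price_quad = np.exp(-r * T) * quad(lambda s: max(s - K, 0.0) * density(s), K, 10.0)[0]
print(f"analytic: {price_analytic:.8f}   quadrature: {price_quad:.8f}")
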
512 Tail dependence of the Gaussian copula revisited [abstract]
Abstract: Tail dependence refers to the clustering of extreme events. In the context of financial risk management, the clustering of high-severity risks has a devastating effect on the well-being of firms and is thus of pivotal importance in risk analysis. When it comes to quantifying the extent of tail dependence, it is generally agreed that measures of tail dependence must be independent of the marginal distributions of the risks and thus solely copula-dependent. Indeed, all classical measures of tail dependence are such, but they investigate the amount of tail dependence along the main diagonal of copulas, which often has little in common with the concentration of extremes in the copulas' domain of definition. In this paper we argue that the classical measures of tail dependence may underestimate the level of tail dependence in copulas. For the Gaussian copula, however, we prove that the classical measures are maximal. As, in spite of the numerous criticisms, the Gaussian copula remains ubiquitous in a great variety of practical applications, our findings should be welcome news for risk professionals.
Ed Furman, Alexey Kuznetsov, Jianxi Su and Ricardas Zitikis
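
For concreteness, the classical diagonal tail-dependence quantity discussed above can be evaluated for a Gaussian copula at finite levels as in the sketch below (it tends to 0 as u approaches 1 whenever rho < 1); rho and the levels are illustrative assumptions.

# Sketch: diagonal upper tail dependence of the Gaussian copula,
# lambda_U(u) = P(U > u | V > u) = (1 - 2u + C(u, u)) / (1 - u).
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.7
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

for u in [0.90, 0.99, 0.999, 0.9999]:
    z = norm.ppf(u)
    C_uu = mvn.cdf([z, z])                      # Gaussian copula C(u, u)
    lam = (1.0 - 2.0 * u + C_uu) / (1.0 - u)    # conditional tail probability
    print(f"u = {u:7.4f}   lambda_U(u) = {lam:.4f}")
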
94 Radial Basis Function generated Finite Differences for Pricing Basket Options [abstract]
Abstract: A radial basis function generated finite difference (RBF-FD) method is considered for solving multidimensional PDEs arising in the pricing of financial contracts, mainly basket options. Being mesh-free while yielding a sparse differentiation matrix, this method aims to combine the best properties of both finite difference (FD) and radial basis function (RBF) methods. Moreover, the RBF-FD method is expected to be advantageous for high-dimensional problems compared to Monte Carlo (MC) methods, which converge slowly; global RBF methods, which produce dense matrices; and FD methods, which require regular grids. The method was successfully tested on the standard Black-Scholes-Merton equation for pricing European and American options with discrete or continuous dividends in 1D. It was then developed further to price European call basket and spread options in 2D on adapted domains, and some groundwork has been done on 3D problems as well. The method features a non-uniform node placement in space, as well as a variable spatial stencil size, in order to improve accuracy in regions of known low regularity. The performance of the method and the error profiles have been studied with respect to the discretization in space, the size and form of the stencils, and the RBF shape parameter. The results highlight RBF-FD as a competitive, sparse method capable of achieving high accuracy with a small number of nodes in space.
Slobodan Milovanovic and Lina von Sydow
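
The core RBF-FD construction mentioned above, computing differentiation weights on a small scattered stencil from a local RBF system, can be sketched in 1D as follows. The stencil, shape parameter and test function are assumptions, and production codes usually add polynomial augmentation, which is omitted here.

# Sketch: RBF-FD weights for d^2/dx^2 on a scattered 1D stencil with Gaussian RBFs.
import numpy as np

eps = 3.0
x = np.array([-0.20, -0.05, 0.00, 0.07, 0.22])  # scattered stencil, centered at 0
xc = 0.0

# Local RBF system: A_jk = phi(|x_j - x_k|); right-hand side = (d2/dx2) phi(|x - x_k|) at xc,
# with phi(r) = exp(-(eps r)^2), so phi_k''(x) = exp(-eps^2 (x - x_k)^2) (4 eps^4 (x - x_k)^2 - 2 eps^2).
A = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
rhs = np.exp(-(eps * (xc - x))**2) * (4 * eps**4 * (xc - x)**2 - 2 * eps**2)
w = np.linalg.solve(A, rhs)                      # RBF-FD differentiation weights

# Applying the weights to nodal values approximates the second derivative at xc.
f = np.cos(2 * x)
print("RBF-FD approximation of (cos 2x)'' at 0:", w @ f, "(exact: -4)")
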
138 A Unifying Framework for Default Modeling [abstract]
Abstract: Credit risk models largely bifurcate into two classes: the structural models and the reduced-form models. Attempts have been made to reconcile the two approaches by restricting information through adjustments of filtrations, but they are technically complicated. Here we propose a reconciliation inspired by actuarial science's approach to survival analysis. Extending the work of Chen, we model the hazard rate curve itself as a stochastic process. This puts default models in a form resembling the HJM model for interest rates, yielding a unifying framework for default modeling. All credit models can be put in this form, and default-dependent derivatives can be priced directly in this framework. Predictability of default has a simple interpretation in this framework. The framework enables us to disentangle predictability and the distribution of the default time from calibration decisions, such as whether to use market prices or balance sheet information. It also gives us a simple way to define new default models.
Harvey Stein, Nick Costanzino and Albert Cohen
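
To fix ideas, with the hazard-rate curve as the modeling primitive, survival and default probabilities follow directly, as in the sketch below for a piecewise-constant curve. The pillar dates and hazard values are illustrative, and the stochastic, HJM-style dynamics proposed in the abstract are not modeled here.

# Sketch: survival probability S(T) = exp(-int_0^T h(t) dt) for a piecewise-constant hazard curve.
import numpy as np

knots = np.array([0.0, 1.0, 3.0, 5.0, 10.0])    # curve pillars (years)
h = np.array([0.010, 0.015, 0.022, 0.030])      # hazard rate on each interval

def survival(T):
    """Integrate the piecewise-constant hazard up to T and exponentiate."""
    dt = np.clip(np.minimum(knots[1:], T) - knots[:-1], 0.0, None)
    return np.exp(-np.sum(h * dt))

for T in [1.0, 2.0, 5.0, 7.0]:
    print(f"T = {T:4.1f}  S(T) = {survival(T):.4f}  P(default <= T) = {1 - survival(T):.4f}")
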

Workshop on Computational and Algorithmic Finance (WCAF) Session 3

Time and Date: 16:40 - 18:20 on 6th June 2016

Room: Boardroom East

Chair: A. Itkin and J. Toivanen

77 Global Optimization of nonconvex VaR measure using homotopy methods [abstract]
Abstract: Value at Risk (VaR) is defined as the maximum loss of a portfolio over a future time horizon at a high confidence level (typical values are 95% or 99%). In our work we devise novel techniques to minimize the non-convex Value-at-Risk function. VaR has the following properties: 1. VaR is a non-coherent measure of risk; in particular, it is not sub-additive. 2. VaR is also non-convex (it admits multiple local minima). These properties make the search for a global minimum of VaR a very difficult problem, in fact an NP-hard one. CVaR is a coherent and convex measure of risk, and we use homotopy methods to project CVaR-optimal solutions to the VaR optimum. The results show that the computed VaR is within 1% of the global minimum when the latter can be found, and the approach is as efficient as solving a convex conditional-VaR minimization problem.
Arun Verma
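
For reference, the two risk measures contrasted above can be estimated from loss scenarios as in the sketch below; CVaR (the average loss beyond VaR) is the convex quantity whose minimizers serve as the starting point of the homotopy, while VaR itself is a plain quantile. The loss distribution is an illustrative placeholder.

# Sketch: empirical VaR and CVaR from loss scenarios.
import numpy as np

rng = np.random.default_rng(3)
losses = rng.standard_t(df=4, size=100_000)     # heavy-tailed toy portfolio losses
alpha = 0.99

VaR = np.quantile(losses, alpha)                # loss level exceeded with probability 1 - alpha
CVaR = losses[losses >= VaR].mean()             # expected loss given exceedance
print(f"VaR_{alpha:.0%} = {VaR:.3f}   CVaR_{alpha:.0%} = {CVaR:.3f}")
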
502 Optimal Pairs Trading with Time-Varying Volatility [abstract]
Abstract: We propose a pairs trading model that incorporates a time-varying volatility of the Constant Elasticity of Variance type. Our approach is based on stochastic control techniques; given a fixed time horizon and a portfolio of two cointegrated assets, we define the trading strategies as the portfolio weights maximizing the expected power utility from terminal wealth. We compute the optimal pairs strategies using a finite difference method. We then show some empirical tests on both low-frequency and high-frequency data of stocks that are dual-listed in Shanghai and Hong Kong.
Thomas Lee
239 Computational Approach to an Optimal Hedging Problem [abstract]
Abstract: Consider a hedging strategy g(s) that uses short-term futures contracts to hedge a long-term exposure. Here the underlying commodity $S_t$ follows the stochastic differential equation $dS_t = \mu\,dt + \sigma\,dW_t$. It is known that full hedging is not a good choice in terms of risk. We establish a numerical approach for searching for a strategy g(s) that reduces the running risk of the hedge. The approach also leads to the numerical solution of the optimal strategy for such a hedging problem.
Chaoqun Ma, Zhijian Wu and Xinwei Zhao
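
A toy sketch of the ingredients named above, simulating the stated SDE and measuring the running risk (standard deviation over time) of a hedged cash flow for a constant hedge ratio g. The paper's long-term-exposure versus short-term-futures setup and the optimal g(s) are not reproduced, and all parameter values are assumptions.

# Toy sketch: simulate dS_t = mu dt + sigma dW_t and track the running risk
# of a partially hedged position with constant hedge ratio g.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, T, n_steps, n_paths, g = 0.5, 2.0, 1.0, 250, 20_000, 0.7

dt = T / n_steps
dS = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
# The unhedged exposure accumulates dS; a futures hedge with ratio g offsets g * dS.
hedged_pnl = np.cumsum((1.0 - g) * dS, axis=1)
running_risk = hedged_pnl.std(axis=0)           # std. dev. of hedged P&L through time
print("running risk at T/2 and T:", running_risk[n_steps // 2 - 1], running_risk[-1])
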
382 Novel Heuristic Algorithm for Large-scale Complex Optimization [abstract]
Abstract: Research in finance and many other areas often encounters large-scale complex optimization problems for which solutions are hard to find. Classic heuristic algorithms often have limitations stemming from the behaviors they try to mimic, leading to drawbacks such as poor memory efficiency, getting trapped in local optima, and unstable performance. This work imitates market competition behavior (MCB) and develops a novel heuristic algorithm accordingly, which combines search efficiency, memory efficiency, conflict avoidance, and recombination, mutation and elimination mechanisms. In the search space, the MCB algorithm updates solution points according to inertia and gravity rules, avoids falling into local optima by introducing new enterprises while ruling out old enterprises at each iteration, and recombines velocity vectors to speed up the search. The algorithm is capable of solving large-scale complex optimization models with high input dimension, including overlapping generations models, and can easily be applied to other complex financial models. As a sample case, the MCB algorithm is applied to a hybrid investment optimization model on R&D, riskless and risky assets over a continuous time period.
Honghao Qiu, Yehong Liu

Workshop on Computational and Algorithmic Finance (WCAF) Session 4

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: Boardroom East

Chair: A. Itkin and J. Toivanen

158 Optimum Liquidation Problem Associated with the Poisson Cluster Process [abstract]
Abstract: In this research, we develop a trading strategy for the discrete-time optimal liquidation problem of large-order trading under different market microstructures in an illiquid market. In this framework, the flow of orders can be viewed as a point process with stochastic intensity. We model the price impact as a linear function of a self-exciting dynamic process. We formulate the liquidation problem as a discrete-time Markov Decision Process, where the state process is a Piecewise Deterministic Markov Process (PDMP). The numerical results indicate that the optimal trading strategy depends on the characteristics of the market microstructure. When no orders above a certain value arrive, the optimal solution takes offers in the lower levels of the limit order book in order to avoid unfilled orders and final inventory costs.
Amirhossein Sadoghi and Jan Vecer
429 Expected Utility or Prospect Theory: which better fits agent-based modeling of markets? [abstract]
Abstract: Agent-based simulations may be a way to model the behavior of human society in decisions under risk. However, it is well known in economics that Expected Utility Theory (EUT) is flawed as a descriptive model. In fact, there are some models based on Prospect Theory (PT) that try to provide a better description. If people behave according to PT in financial environments, it is arguable that PT-based agents may be a better choice for such environments. We investigate this idea in a specific risky environment, the financial market. We propose an architecture for PT-based agents. Due to some limitations of original PT, we use an extension of PT called Smooth Prospect Theory (SPT). We simulate artificial markets with PT and traditional (TRA) agents using historical data of many different assets over a period of twenty years. The results showed that SPT-based agents provided behavior closer to real market data than TRA agents in a statistically significant way, which supports the idea that PT-based agents may be a better choice for risky environments.
Paulo A. L. Castro, Anderson R. B. Teodoro and Luciano de Castro
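
For readers unfamiliar with PT, the building blocks typically used for PT-based agents are an S-shaped value function and an inverse-S probability weighting function, sketched below with the commonly cited Tversky-Kahneman parameter estimates. The SPT smoothing and the agent architecture of the paper are not shown.

# Sketch: prospect theory value and probability weighting functions.
import numpy as np

alpha, beta, lam, gamma = 0.88, 0.88, 2.25, 0.61

def value(x):
    """Gains are concave, losses convex and amplified by the loss-aversion factor lam."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x)**alpha, -lam * np.abs(x)**beta)

def weight(p):
    """Inverse-S probability weighting: small probabilities are overweighted."""
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

print(value([100.0, -100.0]))     # a loss of 100 hurts about 2.25x as much as a gain of 100
print(weight([0.01, 0.5, 0.99]))  # 0.01 is weighted up, 0.5 is weighted down
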
487 Market Trend Visual Bag of Words Informative Patterns in Limit Order Books [abstract]
Abstract: This paper presents a graphical representation that fully depicts the price-time-volume dynamics in a Limit Order Book (LOB). Based on this pattern representation, a clustering technique is applied to predict market trends. The clustering technique is tested on information from the USD/COP market. Competitive trend prediction results were found, and a benchmark for future extensions was established.
Javier Sandoval, German Hernandez, Jaime Nino, Andrea Cruz
494 Modeling High Frequency Data Using Hawkes Processes with Power-Law Kernels [abstract]
Abstract: The empirical properties exhibited by high-frequency financial data, such as time-varying intensities and self-exciting features, make it challenging to model appropriately the dynamics associated with, for instance, order arrivals. To capture the microscopic structures pertaining to limit order books, this paper focuses on modeling high-frequency financial data using Hawkes processes. Specifically, the model with power-law kernels is compared with its counterpart with exponential kernels in terms of goodness of fit to the empirical data, based on a number of proposed quantities for statistical tests. Based on one trading day of data for one representative stock, it is shown that Hawkes processes with power-law kernels are able to reproduce the intensity of jumps in the price processes more accurately, which suggests that they could serve as a realistic model for high-frequency data at the level of market microstructure.
Changyong Zhang
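
The model ingredient compared above, a Hawkes conditional intensity with a power-law kernel, can be evaluated as in the sketch below; the event times and kernel parameters are illustrative assumptions.

# Sketch: Hawkes conditional intensity with a power-law kernel,
# lambda(t) = mu + sum_{t_i < t} K / (t - t_i + c)^(1 + gamma).
import numpy as np

mu, K, c, gamma = 0.2, 0.1, 0.01, 0.5
events = np.array([0.5, 0.55, 0.6, 2.0, 2.02, 5.0])   # past event times (e.g. order arrivals)

def intensity(t, events):
    dt = t - events[events < t]
    return mu + np.sum(K / (dt + c)**(1.0 + gamma))

for t in np.linspace(0.0, 6.0, 13):
    print(f"t = {t:4.1f}   lambda(t) = {intensity(t, events):7.3f}")
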