GPUs Take Monte Carlo Options Pricing to Real-Time
In finance, an option or derivative is a contract that gives the holder the right, but not the obligation, to buy or sell an underlying asset or instrument under certain conditions. Financial firms use options to reduce the risks associated with investing. While options can add balance to a portfolio and limit exposure to potential threats, they are complex securities that must be approached with due diligence. Accurately calculating risk and pricing is a critical part of this strategy, and GPU co-processors have increasingly been enlisted for this computationally intensive task.
There are a number of computational algorithms common to the financial world. In finance pricing, Monte Carlo simulation is the dominant technique for American-style options, which may be exercised at any time before the expiration date. The Monte Carlo method provides the decision maker with a range of possible outcomes and the probabilities they will occur for any choice of action.
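To make the idea concrete, here is a minimal sketch of Monte Carlo option pricing. It prices a simpler European-style call (exercisable only at expiration) by simulating many terminal prices under geometric Brownian motion and averaging the discounted payoffs; all parameter values are illustrative, and this is not Nvidia's benchmark code.

```python
import math
import random

def mc_european_call(s0, strike, rate, vol, maturity, n_paths, seed=42):
    """Price a European call by averaging discounted payoffs over
    simulated terminal prices under geometric Brownian motion."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol * vol) * maturity
    diffusion = vol * math.sqrt(maturity)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                      # one random outcome
        s_t = s0 * math.exp(drift + diffusion * z)   # terminal stock price
        total += max(s_t - strike, 0.0)              # call payoff
    return math.exp(-rate * maturity) * total / n_paths

# Illustrative: at-the-money call, 1-year maturity
price = mc_european_call(s0=100.0, strike=100.0, rate=0.05, vol=0.2,
                         maturity=1.0, n_paths=100_000)
```

Each simulated path is independent, which is exactly why the method maps so well onto the thousands of parallel threads a GPU provides. American-style options add a complication, the early-exercise decision, which is what the Longstaff-Schwartz algorithm discussed below addresses.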
Julien Demouth, who works on the developer technology team at Nvidia, describes an implementation of American option pricing using Monte Carlo simulation with a GPU-optimized implementation of the Longstaff-Schwartz algorithm. This setup, developed in collaboration with partners IBM and the Securities Technology Analysis Center (STAC), was used to calculate a risk management benchmark in real-time on a single workstation with Tesla GPUs.
Nvidia worked with IBM and STAC to implement the STAC-A2 benchmark on two Nvidia Tesla K20X GPUs. The system calculated the risk and pricing of this particular complex option in less than a second, showing that risk management benchmarks, like STAC-A2, can be run in real-time, enabling them to be used ahead of actual trades.
STAC-A2 is a suite of benchmarks based on options Greeks, standard financial tools that measure the sensitivity of an option's price to changes in underlying variables, such as interest rates.
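In practice, a Greek can be estimated by bumping one input and re-pricing. As a hedged illustration (not part of STAC-A2), the sketch below computes two Greeks by central finite differences against the Black-Scholes closed form: delta (sensitivity to the underlying price) and rho (sensitivity to the interest rate, the example the text mentions). All parameters are illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Black-Scholes price of a European call."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def delta_fd(s, k, r, sigma, t, h=1e-4):
    # central difference w.r.t. spot: sensitivity to the underlying price
    return (bs_call(s + h, k, r, sigma, t) - bs_call(s - h, k, r, sigma, t)) / (2 * h)

def rho_fd(s, k, r, sigma, t, h=1e-6):
    # central difference w.r.t. the risk-free rate
    return (bs_call(s, k, r + h, sigma, t) - bs_call(s, k, r - h, sigma, t)) / (2 * h)

delta = delta_fd(100.0, 100.0, 0.05, 0.2, 1.0)
```

With Monte Carlo pricers, each bump means a full re-simulation, so a suite of Greeks multiplies the workload severalfold, which is part of why these benchmarks are so computationally demanding.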
A key component of the Monte Carlo simulation for pricing American-style options is the Longstaff-Schwartz algorithm, a calculation that determines at which point the option should be exercised. Nvidia's STAC-A2 implementation initially used a hybrid method that relied on the CPU to perform the linear regression, but since then, Nvidia developers have come up with a new implementation that runs entirely on the GPU. To maximize the performance of the linear regression, Nvidia developers figured out a way to reduce the amount of data transferred between the GPU memory and the system main memory attached to the compute cores.
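To show what the algorithm actually does, here is a compact, CPU-only sketch of least-squares Monte Carlo in the spirit of Longstaff-Schwartz for an American put; it is not Nvidia's implementation, and the quadratic regression basis and all parameter values are illustrative. At each exercise date, working backward from maturity, it regresses discounted continuation values on the spot price over in-the-money paths and exercises where the immediate payoff beats the fitted continuation value.

```python
import math
import random

def solve3(a, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in range(2, -1, -1):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def lsm_american_put(s0, strike, rate, vol, maturity, n_paths, n_steps, seed=7):
    """Least-squares Monte Carlo (Longstaff-Schwartz style) for an American put."""
    rng = random.Random(seed)
    dt = maturity / n_steps
    disc = math.exp(-rate * dt)
    drift = (rate - 0.5 * vol * vol) * dt
    diff = vol * math.sqrt(dt)

    # simulate paths: paths[t][i] = spot of path i at step t
    paths = [[s0] * n_paths]
    for _ in range(n_steps):
        prev = paths[-1]
        paths.append([s * math.exp(drift + diff * rng.gauss(0, 1)) for s in prev])

    # cashflows at maturity
    cash = [max(strike - s, 0.0) for s in paths[-1]]

    # backward induction over earlier exercise dates
    for t in range(n_steps - 1, 0, -1):
        cash = [c * disc for c in cash]  # discount one step back
        itm = [i for i in range(n_paths) if strike - paths[t][i] > 0.0]
        if len(itm) < 3:
            continue
        # least-squares fit of continuation value on basis [1, s, s^2]
        # via the 3x3 normal equations
        a = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for i in itm:
            x, y = paths[t][i], cash[i]
            basis = (1.0, x, x * x)
            for r_ in range(3):
                b[r_] += basis[r_] * y
                for c_ in range(3):
                    a[r_][c_] += basis[r_] * basis[c_]
        coef = solve3(a, b)
        for i in itm:
            s = paths[t][i]
            cont = coef[0] + coef[1] * s + coef[2] * s * s
            if strike - s > cont:           # early exercise beats continuation
                cash[i] = strike - s
    return disc * sum(cash) / n_paths

price = lsm_american_put(s0=100.0, strike=100.0, rate=0.05, vol=0.2,
                         maturity=1.0, n_paths=5000, n_steps=50)
```

The regression at each time step is the serial bottleneck the article alludes to: it gathers data from all paths, which is why an implementation that keeps both the path data and the regression on the GPU avoids costly transfers over the PCIe bus.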
On a Tesla K40 GPU coprocessor, announced last November by Nvidia, the GPU-optimized algorithm prices an American option on 32,000 paths and 100 time steps in under 3ms. Total time, including path generation, is less than 5.5ms.
Nvidia also explains the programming techniques used to obtain a very efficient code for the Andersen Quadratic Exponential (QE) path discretization, used in quantifying the random nature of the stock price and its volatility. Although Andersen-QE presents a challenge for efficient parallel implementations, Nvidia improved performance by finely tuning each branch of the code and moving as much computation as possible outside of the branches.
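The branching the article refers to is intrinsic to the scheme: a single QE step of the variance process picks one of two sampling rules depending on a moment ratio, so neighboring GPU threads can diverge. A hedged, CPU-side sketch of one variance update under the Heston model (parameter names and values are illustrative, and this is not Nvidia's tuned kernel):

```python
import math
import random

def qe_variance_step(v, kappa, theta, sigma, dt, rng, psi_crit=1.5):
    """One QE-style update of the Heston variance process: match the first
    two conditional moments, then branch on psi = s2 / m^2."""
    e = math.exp(-kappa * dt)
    m = theta + (v - theta) * e                       # conditional mean
    s2 = (v * sigma**2 * e * (1.0 - e) / kappa        # conditional variance
          + theta * sigma**2 * (1.0 - e)**2 / (2.0 * kappa))
    psi = s2 / (m * m)
    if psi <= psi_crit:
        # quadratic branch: v' = a * (b + Z)^2 with Z standard normal
        b2 = 2.0 / psi - 1.0 + math.sqrt(2.0 / psi) * math.sqrt(2.0 / psi - 1.0)
        a = m / (1.0 + b2)
        z = rng.gauss(0.0, 1.0)
        return a * (math.sqrt(b2) + z) ** 2
    # exponential branch: point mass at zero plus an exponential tail
    p = (psi - 1.0) / (psi + 1.0)
    beta = (1.0 - p) / m
    u = rng.random()
    return 0.0 if u <= p else math.log((1.0 - p) / (1.0 - u)) / beta

rng = random.Random(0)
# Average many one-step samples; the sample mean should track m.
samples = [qe_variance_step(0.04, kappa=1.5, theta=0.04, sigma=0.3,
                            dt=0.01, rng=rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
```

On a GPU, the if/else above is exactly where threads in a warp can take different paths, which is why hoisting shared work (such as computing `e`, `m`, and `s2`) out of the branches pays off.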
According to results published by STAC, STAC-A2 benchmarks running on Nvidia Tesla K20X GPUs showed nearly an order of magnitude speedup compared to traditional X86 CPUs. For a test machine, STAC used an IBM System x iDataPlex server with two eight-core Intel Xeon E5-2660 processors running at 2.20 GHz and two Nvidia K20Xm GPUs. The software stack was coded by Nvidia using the CUDA 5.5 toolkit. The system delivered over six times the average speed of the fastest publicly benchmarked system without GPUs. Furthermore, adding one or two GPUs to a system offers speedups just north of 5X and 9X, respectively, compared to the same system without GPUs.
"Our STAC-A2 benchmark code and this implementation of the Longstaff-Schwartz algorithm both illustrate how Nvidia GPUs can make option pricing and risk analysis much faster," notes Demouth, who will be giving a presentation about this work at the GPU Technology Conference in San Jose, California on March 26.