
Forecasting Call Option Prices

A Quantitative Study in Financial Economics

Written by R. Lundmark

Paper category

Master Thesis

Subject

Economics

Year

2020

Abstract

Implied volatility and the Newton-Raphson method

One of the most effective algorithms for estimating implied volatility from observed market prices through the Black-Scholes-Merton model is the Newton-Raphson method, which is the method used in this study. The approach takes the call option price formula of the Black-Scholes-Merton model and differentiates equation (21) with respect to sigma. Newton-Raphson is a root-finding algorithm that works well when its assumptions are satisfied: the function g must have a derivative that is simple to compute. When the derivative can be evaluated analytically, the method is effective and efficient; when g varies greatly, however, the risk of an inaccurate derivative approximation is high, so the method cannot be applied universally. The idea behind Newton-Raphson is to use the analytical derivative to linearly extrapolate where the root should lie.

4.2 Recursive prediction method with re-estimation

Before turning to the recursive prediction method, the testing methodology must be introduced. The data set is divided into two subsets: training data and test data. The training set is used to optimize the model fit, weights, process lags, and deviations. It is also used to compare different models and decide which one to choose, by training candidate models and evaluating their prediction performance in terms of accuracy and precision, analyzing the in-sample errors. The test set, covering the out-of-sample period, is used for an unbiased evaluation of the final model fit: the test data are predicted using the fitted model. To use this method correctly, the final model should be applied to the test data set only once. Forecast performance judged out of sample is more reliable than evidence based on in-sample performance, which is more susceptible to data mining and outliers.
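The Newton-Raphson inversion described above can be sketched as follows. This is a minimal illustration rather than the thesis's own code: the parameter values and the starting guess sigma0 = 0.2 are assumptions, and the derivative used in the update is the Black-Scholes vega (the derivative of the call price with respect to sigma).

```python
import math


def bs_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)


def bs_vega(S, K, T, r, sigma):
    """Vega: derivative of the call price with respect to sigma."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return S * math.sqrt(T) * math.exp(-0.5 * d1 ** 2) / math.sqrt(2.0 * math.pi)


def implied_vol(price, S, K, T, r, sigma0=0.2, tol=1e-8, max_iter=100):
    """Newton-Raphson: sigma <- sigma - (C(sigma) - price) / vega(sigma)."""
    sigma = sigma0
    for _ in range(max_iter):
        diff = bs_call(S, K, T, r, sigma) - price
        if abs(diff) < tol:
            return sigma
        sigma -= diff / bs_vega(S, K, T, r, sigma)  # linear extrapolation step
    return sigma
```

As a sanity check, pricing a call at a known sigma and inverting the price should recover that same volatility.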
The recursive prediction method is a strategy that uses machine learning algorithms to predict future values based on previous values. It is an out-of-sample forecasting method in which the in-sample data set has an expanding window. Let ŷ_t denote the one-day-ahead forecast of the call option price or stock price, and let y_{t-1} denote the known closing price observed in the market the previous day. The unknown closing price of the next day is then predicted using the same model, but re-estimated: the algorithm is set to use the realized value y_t instead of the forecast ŷ_t. This realized observation y_t is then added to the sample data set. In the forecasting process for the following day, the forecast is stored, the sample data set is refitted with that day's true value, and the one-step-ahead forecast is repeated.

4.3 Optimizing the Fitting of AR-GARCH and ARMA-GARCH Models

The purpose of this sub-chapter is to explain how the models are optimized and tested on the training data set, and how the final models are arrived at. It describes how the models are calibrated for the recursive prediction method before being applied to the test data set. According to Brooks (2002), stocks in an efficient market should not be autocorrelated; in other words, the autocorrelations at different lags should not show any significant relationship. To identify the properties of the mean, the autocorrelation function (ACF) measures the degree of correlation between a time series and its past values; in other words, it describes the relationship between the present value of a time series and its past values. Tsay (2012) describes that in an AR process the autocorrelation function decays exponentially. The ACF plot is used to see the correlation between observations and up to which lag it persists.
By applying the autocorrelation function to the training data set, we may observe sizeable autocorrelation at large lags, but the autocorrelation at later lags may only be due to the propagation of the autocorrelation at the first lag. Instead of measuring the correlation between the present value and a lag as the ACF does, the partial autocorrelation function (PACF) measures the correlation between the variable and its lag after removing the influence of the intermediate lags. Therefore, if there is any remaining information in the residuals that can be modeled by the next lag, the partial autocorrelation reveals the best order (p) of the AR process. The MA process is a process in which the present value of a series is defined as a linear combination of past errors, where the errors are assumed to be independent and identically distributed random variables. Tsay (2012) recommends using the autocorrelation function plot to find the order (q) of the MA process by observing the first significant lag that exceeds the upper bound of the confidence interval. An MA process does not include seasonal or trend components, so the autocorrelation function will only capture the correlation due to the residual component. To identify an MA process, the autocorrelation should be strong at the closest lags and then decrease rapidly. Beyond the partial autocorrelation function, another way to confirm the best model selection is to perform a stepwise search over the AR and MA lags on the training data set of each predictor variable: auto.arima, a function in R, searches over the possible models within the order constraints provided, minimizing the AIC (Akaike Information Criterion) and SBC (Schwarz's Bayesian Information Criterion).