LSTM Financial Analysis

Published by alice seaborn on Monday, 14 June, 2021

This is the final project for ECE 465 (Machine Learning for Engineers), taken during the Spring of 2019 at Bradley University. The goal of this research was to identify potential advantages offered by Long Short-Term Memory (LSTM) networks in developing forecasting models for time-series data. The argument behind LSTM approaches is simple: typical ANNs adjust their coefficients based only on the data immediately within their scope of exposure, without reference to the preceding data points, which can blind them to broader trends. LSTM networks, by contrast, "remember" the preceding data and act on historical context. For a classification problem, such as predicting the risk of heart disease on a case-by-case basis, memory is an undesirable attribute, since the preceding patient record is independent of the record under consideration. Memory becomes useful when the data entries are strongly dependent, as in a time-series dataset.

The problem in question for this paper was the prediction of US stock market indices. A goal as vain as it was absurd, but this high and haughty hubris was precisely the point. Feeling particularly ironic that spring, I wanted to simultaneously discuss the intricacies and advantages of LSTM models while debunking the notion that machine learning is some magical "solve-anything" approach that whimsical scientific wizards and hocus-pocus programming professionals brew in the black cauldron of their aggressively dark-themed IDEs and then sell to investors for all the riches and Bitcoin of Scotland. Don't be fooled: in terms of this project's ability to predict macro-economic trends, it is a gorgeously documented failure.

Table of Contents

  1. Abstract
  2. Data Preparation
  3. Convolutional Approach
  4. LSTM Approach
  5. Conclusion

Abstract

All financial prediction systems essentially use time-series data to inform their opinion of the future trajectory of an individual security or of a market as a whole. Neural networks, as exciting as they are, typically lack the ability to associate trends across the dimension of time. Consequently, networks with short-term memory have been designed to study the underlying trends of time-series data. These are recurrent neural networks (RNNs), which process temporal sequences in order and learn time-dependent trends. One such architecture, the Long Short-Term Memory (LSTM) network, has produced exciting results due in part to its capacity to generalize the trends it observes. This project compares LSTM approaches to time-series analysis against non-LSTM ANNs in terms of their respective abilities to capture macro-economic trends.

Data Preparation

The focus of the neural network forecast model is the S&P500 composite index, using S&P500 metrics from the last 20 years. In particular, the network studies the monthly price-to-earnings ratio (PE), the monthly dividend yield, and the ratio between the earnings yield and the 10-year treasury note yield (EYTY ratio). The neural network tries to relate these metrics to the monthly adjusted closing price of the index. All data is freshly downloaded from Quandl and processed in a separate Python program before being exported to a CSV file, which the neural network models then load for study. This pipeline design allows the data to be studied and updated independently of the models.
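The original preparation script is not reproduced here, but a minimal sketch of the pipeline might look like the following. The Quandl dataset codes, column names, and the EYTY construction are assumptions for illustration, not the exact sources used in the project.

```python
# Hypothetical data-preparation sketch: pull monthly S&P 500 metrics from
# Quandl, align them on a monthly index, and export one CSV for the models.
# Dataset codes below are assumptions, not the codes used in the original work.
import quandl
import pandas as pd

quandl.ApiConfig.api_key = "YOUR_API_KEY"  # placeholder

pe = quandl.get("MULTPL/SP500_PE_RATIO_MONTH")    # monthly P/E ratio (assumed code)
div = quandl.get("MULTPL/SP500_DIV_YIELD_MONTH")  # monthly dividend yield (assumed code)
notes = quandl.get("FRED/DGS10")                  # 10-year treasury yield (assumed code)

# Align everything to month starts and combine into a single frame.
frame = pd.concat(
    {"pe": pe["Value"], "div_yield": div["Value"], "treasury_10y": notes["Value"]},
    axis=1,
).resample("MS").first()

# Earnings yield is the inverse of P/E; the EYTY ratio compares it to the note yield.
frame["eyty"] = (1.0 / frame["pe"]) / (frame["treasury_10y"] / 100.0)

frame.to_csv("sp500_features.csv")
```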

There were several issues related to gathering the S&P500 data. For instance, the data can only be downloaded on a monthly basis, which means that recessions, for example, are represented by fewer data points than would be ideal for a neural network study. Additionally, the dates of the data features did not align correctly and many dates were missing entirely. As many as 33% of the dataseries instances were missing at least one value, as shown below.

missing-value

To account for this, the dataseries was imputed using the average of the instances surrounding the missing date. For example, if yesterday's instance were missing from the series, it would be imputed as the average of the instance from two days ago and the instance from today. Using this method, the general trends of the S&P500 were preserved and the pricing data reflects that of the actual S&P500. However, the finer details of the S&P500's fluctuations were lost in the process.
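A minimal sketch of that neighbor-average imputation follows; for a single interior gap, linear interpolation in pandas reproduces exactly the average of the surrounding instances. The file names are carried over from the sketch above.

```python
# Sketch of the neighbor-average imputation described above: a missing value is
# replaced with the mean of the nearest valid observations on either side.
import pandas as pd

frame = pd.read_csv("sp500_features.csv", index_col=0, parse_dates=True)

# Linear interpolation between the surrounding observations; for a single
# missing row this equals the average of the previous and following rows.
imputed = frame.interpolate(method="linear", limit_direction="both")
imputed.to_csv("sp500_features_imputed.csv")
```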

Convolutional Approach

The convolutional neural network approach evaluated the S&P500 data by performing convolution over the dataset, thereby limiting its dependence on time. This is an interesting approach to time-series analysis because it de-emphasizes time and focuses on the S&P500 metrics themselves. It was selected for its ability to generalize and to act as a baseline measurement for LSTM network performance.
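The original network definition is not reproduced here. One possible reading of this approach is a one-dimensional convolutional regression over short windows of the three monthly features; the sketch below assumes Keras, and the layer sizes and window length are illustrative rather than the project's actual values.

```python
# Hypothetical Conv1D regression model: a window of monthly features in,
# a single monthly return out. All hyperparameters are assumptions.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, Dense, Flatten

WINDOW = 12      # months of features per sample (assumed)
N_FEATURES = 3   # PE ratio, dividend yield, EYTY ratio

model = Sequential([
    Conv1D(32, kernel_size=3, activation="relu", input_shape=(WINDOW, N_FEATURES)),
    Conv1D(16, kernel_size=3, activation="relu"),
    Flatten(),
    Dense(16, activation="relu"),
    Dense(1),                      # predicted monthly return
])

model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, epochs=1000, verbose=0)  # 1000-epoch budget, as in the text
```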

Despite relying less on the temporal dimension of the dataseries, time is still an important factor, especially if the CNN is to serve as a basis for comparison with other networks. For this reason, the train-test split was not an entirely stochastic process of selecting individual data points. Rather, the split selected a point within the period of study as a line of demarcation between the training and testing data. This line of demarcation was chosen pseudo-randomly so that the model could explore different ranges of data between successive executions of the training routine. The train-test split is plotted below, followed by a sketch of the splitting logic.

cnn-split
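A minimal sketch of the demarcation split described above: instead of sampling points at random, pick a pseudo-random cut index and keep everything before it for training and everything after it for testing. The 60-80% range for the cut is an assumption.

```python
# Pseudo-random demarcation split over a chronologically ordered DataFrame.
import numpy as np

def demarcation_split(frame, low=0.6, high=0.8, seed=None):
    """Split the data at a pseudo-random cut point within [low, high] of its length."""
    rng = np.random.default_rng(seed)
    cut = int(len(frame) * rng.uniform(low, high))
    return frame.iloc[:cut], frame.iloc[cut:]

# Example usage (assuming `frame` holds the imputed dataseries):
# train, test = demarcation_split(frame)
```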

When training the model, the CNN exhibited losses indicative of learning behavior. When training neural networks, it is important that the accuracy of the model's predictions increases with the number of epochs as the loss decreases. Under ideal scenarios, each of these trends follows a roughly exponential curve. As illustrated below, the network's loss decays roughly exponentially throughout the standardized 1000 epochs of its training routine.

cnn-loss

The network training results were mixed. Although the CNN was able to predict the trends of the 2008 financial crisis (middle of the figure below), it clearly fails to detect the many smaller returns of normal market activity in the preceding and following years. This could indicate that the nuances of fluctuating earnings and dividends during normal business cycles are too subtle for the network to learn. On the other hand, the issue could lie in how the training data were developed. Perhaps the violent fluctuations in the features during abnormal market activity were related to investor panic, whereas during normal markets investors and other market actors focused on variables that were not considered in this analysis.

cnn-return

Using the starting S&P500 closing price, the prices of the composite index can be simulated from the predicted monthly returns generated by the CNN model; a sketch of this conversion follows the figure below. The results of the model are shown in the figure below in the form of S&P500 prices rather than returns. This visualization is more intuitive and illustrates an interesting result: past the line of demarcation, the model departs from the actual closing prices but continues to predict their trends. For instance, economic downturns and recoveries can still be seen in the testing data despite the estimated closing prices sitting below the actual ones. In other words, the CNN is correctly guessing the trends, but not the values. Although this discussion has focused on pricing, the price level of the index is itself somewhat arbitrary; it is a frame the study imposes on the data. The trends of the predictions are the highlight of the analysis, and they are not absent from this model's predictions.

cnn-price
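A minimal sketch of the return-to-price conversion, assuming simple monthly returns compounded forward from the first actual closing price; the variable names are illustrative.

```python
# Compound predicted monthly returns into a simulated price path.
import numpy as np

def simulate_prices(start_price, monthly_returns):
    """Compound a starting price through a sequence of simple monthly returns."""
    return start_price * np.cumprod(1.0 + np.asarray(monthly_returns))

# Example usage:
# simulated = simulate_prices(actual_close.iloc[0], predicted_returns)
```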

LSTM Approach

Whereas the CNN approach endeavored to reduce the effect of time on the time series, the LSTM approach works in the opposite direction. Long Short-Term Memory networks, as described previously, recall previous instances when developing their next prediction. Consequently, time not only remains relevant, its relevance is amplified. For this reason, LSTM networks are generally recommended for time-series analysis. As with the CNN, the training and testing data were determined by splitting the dataseries along a line of demarcation, as shown in the figure below; a sketch of the network itself follows the figure.

lstm-split
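As with the CNN, the original model definition is not shown; a hypothetical Keras counterpart, in which the same windows of monthly features feed a recurrent layer whose cell state carries context forward in time, might look like the following. Layer sizes and the window length are again assumptions.

```python
# Hypothetical LSTM regression model over the same windowed monthly features.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 12      # months of features per sample (assumed)
N_FEATURES = 3   # PE ratio, dividend yield, EYTY ratio

model = Sequential([
    LSTM(32, input_shape=(WINDOW, N_FEATURES)),
    Dense(16, activation="relu"),
    Dense(1),                      # predicted monthly return
])

model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, epochs=1000, verbose=0)  # same 1000-epoch budget as the CNN
```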

The training losses of the LSTM network are similar to those of the CNN in that they decay almost exponentially from an initial high, as shown below. Although the losses occasionally sputter, their overall trend is clearly downward. In hindsight, however, the model does not appear to have reached the end of its loss decay and could use more training. The epochs were nevertheless stopped at 1000 so that the model could serve as an effective comparison to the other networks under identical testing conditions. Additionally, this study labored to avoid overtraining the networks to respond to false trends, such as the imputed averages described earlier.

lstm-loss

The monthly return predictions of the LSTM network are even more muted than those of the CNN, for understandable reasons. Whereas the CNN convolved the data and took time less seriously, the LSTM network has an apparent smoothing effect: it takes time even more seriously than it should, diminishing the network's ability to respond sharply to changes in its input. Consequently, the violent return swings exhibited by the CNN are absent from the LSTM.

lstm-return

When the returns are converted into simulated S&P500 prices, the effects of the smoothing become more apparent. The LSTM does not respond sharply to the decline in US equities in 2008 and lags behind the actual closing price, whereas the CNN was irrationally exuberant in its expectations. Just like the CNN, the LSTM is not accurate in terms of pricing but is able to pick up some trends in S&P500 behavior.

lstm-price

Conclusion

Financial forecasting through ANNs is relatively new and untested. A proper balance between excitement and skepticism is needed to proceed further in this field. Although the CNN and LSTM networks strayed farther from real closing prices than is acceptable, their capacity to capture market trends is apparent and remarkable.

Unfortunately, traditional measurements of accuracy do not map cleanly onto this problem. For example, an R-squared measurement would punish the networks for failing to adhere to the actual pricing values but would disregard their ability to adhere to actual pricing trends; a sketch of that contrast follows below. The trends of the market might in fact be more valuable to investors and speculators, allowing them to construct volatility spreads based on the expected movements of the markets. It is imperative, given the publicity that neural networks and machine learning receive, to stress that these methods are new and unreliable. These networks show promise, but further study is needed before money can be risked on their foresight, or lack thereof.
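To illustrate the point about metrics, the sketch below contrasts R-squared computed on price levels, which penalizes any level offset, with a simple directional hit rate, which rewards getting the trend right. Neither metric is from the original project; this is purely illustrative.

```python
# Two toy evaluation metrics: level accuracy (R-squared) vs. trend accuracy.
import numpy as np

def r_squared(actual, predicted):
    """Coefficient of determination on the raw price levels."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def directional_accuracy(actual, predicted):
    """Fraction of periods where predicted and actual price changes share a sign."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(predicted)))
```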