Abstract

Throughout history, the speed and the media through which knowledge and information are shared have evolved beyond recognition. At the end of the seventeenth century, in Europe, the ideas that would shape the Age of Enlightenment were slowly taking form in coffeehouses, literary salons, and printed books. L'âge des lumières set out to shed light on the darkness of human ignorance, and it is no coincidence that it developed after Gutenberg invented the movable-type printing press. Today, at the end of the first quarter of the twenty-first century, "The Age of AI has begun," in the words of Bill Gates. Again, it is no coincidence that artificial intelligence is experiencing its golden age after Tim Berners-Lee invented the World Wide Web. Books enlighten human brains; similarly, massive datasets are now fed to machine learning models and artificial brains, better known as neural networks. Human brains, however, are capable of creating knowledge; whether artificial brains can do the same remains uncertain.

In this thesis, I explore how data proliferation and artificial intelligence affect financial markets through the lens of a statistician. In the first chapter, I leverage extreme value theory to study time-varying idiosyncratic tail risk using option-implied information. In other words, does the information contained in the implied volatility surface explain extreme losses?

The second chapter moves away from the distribution tails and considers the entire distribution of returns. Stock and options traders systematically read the geometric shape of the implied volatility surface to infer market expectations and risk attitudes and to make trading decisions. We therefore leverage convolutional neural networks, which capture spatial patterns and relationships between pixels in images, to translate this information into a return forecast.

Finally, it has recently been shown that a particular kernel, the Neural Tangent Kernel, represents an infinitely wide neural network in the lazy training regime. This discovery has revived interest in kernel methods and, in turn, in random features. In the final chapter, we propose an algorithm, Fast Annihilating Batch Regression (FABR), that can, in theory, solve a regression with an unbounded number of random features.
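
To make the first chapter's starting point concrete, the sketch below fits a generalized Pareto distribution to the exceedances of synthetic losses over a high threshold, the classical peaks-over-threshold step of extreme value theory. The synthetic data, the 95% threshold, and all parameter choices are illustrative assumptions; the thesis's time-varying, option-implied specification is not reproduced here.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    returns = 0.01 * rng.standard_t(df=4, size=5000)   # synthetic heavy-tailed daily returns

    # Peaks over threshold: model losses beyond a high quantile with a
    # generalized Pareto distribution (GPD).
    losses = -returns
    u = np.quantile(losses, 0.95)            # tail threshold
    exceedances = losses[losses > u] - u

    # Fit the GPD to the exceedances, with the location fixed at zero.
    xi, _, beta = genpareto.fit(exceedances, floc=0)

    # Probability of a loss worse than some level x beyond the threshold.
    x = 0.05
    p_tail = (losses > u).mean() * genpareto.sf(x - u, xi, loc=0, scale=beta)
    print(f"shape={xi:.3f}, scale={beta:.4f}, P(loss > {x:.0%}) = {p_tail:.4%}")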
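
For the second chapter, a minimal PyTorch sketch of the idea: a convolutional network that maps an implied-volatility grid (maturities by moneyness levels) to a scalar return forecast. The architecture, the 16x16 grid size, and the class name IVSurfaceCNN are illustrative assumptions, not the network used in the thesis.

    import torch
    import torch.nn as nn

    class IVSurfaceCNN(nn.Module):
        """Map an implied-volatility grid (maturity x moneyness) to a return forecast."""

        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),   # local smile / term-structure patterns
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),                     # pool to a 16-dimensional summary
            )
            self.head = nn.Linear(16, 1)                     # scalar return forecast

        def forward(self, surface: torch.Tensor) -> torch.Tensor:
            # surface: (batch, 1, n_maturities, n_moneyness) grid of implied vols
            return self.head(self.features(surface).flatten(1))

    model = IVSurfaceCNN()
    surfaces = torch.rand(32, 1, 16, 16)    # a batch of synthetic 16x16 IV surfaces
    forecasts = model(surfaces)             # shape: (32, 1)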
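
FABR itself is the final chapter's contribution and is not reproduced here. As background on the random-features machinery it builds on, the sketch below runs a ridge regression on random Fourier features (Rahimi and Recht, 2007), which approximate a Gaussian kernel increasingly well as the number of features grows; all data and parameters are synthetic illustrations.

    import numpy as np

    def random_fourier_features(X, n_features=2000, gamma=1.0, seed=0):
        """Random cosine features approximating the Gaussian kernel
        k(x, y) = exp(-gamma * ||x - y||^2)  (Rahimi & Recht, 2007)."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
        b = rng.uniform(0, 2 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

    # Ridge regression in the random-feature space:
    # solve (Z'Z + lam * I) w = Z'y; more features -> a better kernel approximation.
    Z = random_fourier_features(X)
    lam = 1e-3
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
    r2 = 1 - np.mean((y - Z @ w) ** 2) / np.var(y)
    print(f"in-sample R^2: {r2:.3f}")

In the same spirit, the Neural Tangent Kernel result says that an infinitely wide network trained in the lazy regime behaves like a kernel regression, which is what makes scaling regressions to very large numbers of random features attractive.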
