Quantum Computing and Quantum Machine Learning: A Quant Finance Perspective

Recently, we had the pleasure of hosting Dr. Alexei Kondratyev at our quarterly Scientific Advisory Board Meeting. Dr. Kondratyev is Managing Director at Standard Chartered Bank in London, where he leads the data analytics group, and he was awarded Risk Magazine’s prestigious Quant of the Year award in 2019. The award was based on his progress in developing financial applications using recent advances in machine learning, optimization and quantum computing. His work included using artificial neural networks to model interest-rate curve dynamics, developing evolutionary algorithms for optimizing margin valuation adjustment and applying quantum computing to enhance portfolio optimization. The following paragraphs provide a brief summary of his presentation and highlight some of the exciting trends in quantitative finance.

Quantum Computing
Computation can be defined as a transformation of one memory state into another. In classical computing, information is stored in the form of binary digits (bits), and computations are executed using logic gates; an appropriate combination of universal NAND gates can implement any function of two bits. For a single input bit, however, only two gates exist: identity and negation. In quantum computing, by contrast, a quantum bit (qubit) can exist in a superposition of its basis states. This allows us to define infinitely many logic gates operating on a single qubit. As a result, quantum computers can perform operations on many values simultaneously, allowing them to solve certain problems significantly faster. However, high development costs and the limited availability of quantum hardware still pose a massive obstacle to widespread adoption.
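To make the contrast concrete, here is a minimal sketch in plain NumPy (not a quantum SDK): a qubit state is a unit-norm complex 2-vector, and a single-qubit gate is a 2x2 unitary matrix. The rotation gate below is parameterized by a continuous angle, which hints at why the single-qubit gate set is infinite rather than limited to identity and negation.

```python
import numpy as np

# A qubit state |psi> = a|0> + b|1> is a unit-norm complex 2-vector.
ket0 = np.array([1.0, 0.0], dtype=complex)

def ry(theta):
    """Single-qubit rotation about the Y axis: a 2x2 unitary matrix.
    Any angle theta gives a valid gate, so single-qubit gates form a
    continuously (infinitely) parameterized family."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

psi = ry(np.pi / 3) @ ket0      # rotate |0> into a superposition
probs = np.abs(psi) ** 2        # Born rule: measurement probabilities
print(psi, probs, probs.sum())  # amplitudes, P(0), P(1); probabilities sum to 1
```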

Quantum annealing
One particularly interesting application of quantum computing is quantum annealing, a metaheuristic for finding the global minimum of an optimization problem. If we can encode the optimization problem into a physical system via a Hamiltonian (the function describing the system's energy), then the solution to the problem corresponds to the lowest-energy state of that system. In a recent paper, Venturelli and Kondratyev (2018) report that quantum annealing solves classic Markowitz asset allocation problems up to three orders of magnitude faster than classical benchmark models, such as genetic algorithms.
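The sketch below illustrates the encoding idea on a hypothetical toy instance: a Markowitz-style asset selection problem written as a quadratic binary objective (the discrete analogue of the Hamiltonian), minimized here with classical simulated annealing as a stand-in for quantum hardware. The instance data (mu, Sigma, the risk weight q, the target portfolio size K) are made up for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: choose assets (x_i in {0, 1}) trading expected return
# against risk, with a penalty enforcing a portfolio of K assets:
#   minimize  q * x' Sigma x - mu' x + lam * (sum(x) - K)^2
n, K, q, lam = 8, 3, 0.5, 1.0
mu = rng.uniform(0.01, 0.10, n)      # hypothetical expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n                  # random positive-definite covariance

def energy(x):
    return q * x @ Sigma @ x - mu @ x + lam * (x.sum() - K) ** 2

# Classical simulated annealing stands in here for the quantum annealer.
x = rng.integers(0, 2, n)
T = 1.0
for step in range(5000):
    i = rng.integers(n)
    x_new = x.copy()
    x_new[i] ^= 1                    # flip one asset in or out of the portfolio
    dE = energy(x_new) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = x_new                    # accept downhill moves, sometimes uphill
    T *= 0.999                       # cool the system toward its ground state

print("selected assets:", np.flatnonzero(x), "energy:", energy(x))
```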

Quantum neural networks
Quantum neural networks (QNN) are typically trained by supervised learning and consist of a sequence of parameter-dependent unitary transformations that act on input quantum states. The structure of the output layer is task-dependent: for binary classification, it consists of the measurement of a single Pauli operator, whose outcome predicts the binary label of the input state. Farhi and Neven (2018) present an interesting proof of concept showing that QNNs can solve classification problems such as distinguishing between handwritten digits.
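The following toy sketch shows the same encode, parameterized unitary, Pauli-readout pipeline on a single qubit. It is far smaller than the circuits studied by Farhi and Neven, and the one-angle dataset is invented for illustration, but it demonstrates how a continuous gate parameter can be trained against a Pauli-Z readout.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis (real 2x2 unitary)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])  # Pauli-Z readout operator

def predict(x, theta):
    """Encode feature x as a rotation of |0>, apply the trainable unitary,
    and return the expectation of Pauli-Z in [-1, 1]; its sign is the label."""
    psi = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

xs = np.array([0.2, 0.5, 2.3, 2.8])    # made-up scalar features
ys = np.array([1.0, 1.0, -1.0, -1.0])  # binary labels encoded as +/-1

theta, lr, eps = 0.0, 0.2, 1e-4
loss = lambda t: np.mean((np.array([predict(x, t) for x in xs]) - ys) ** 2)
for _ in range(200):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)  # finite difference
    theta -= lr * grad

print("learned theta:", theta)
print("predicted labels:", [int(np.sign(predict(x, theta))) for x in xs])
```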

Learning sampling distributions
Quantum circuit Born machines are generative models structurally similar to neural networks, composed of sequentially connected single-qubit rotation and multi-qubit entanglement operators. Classical probability distributions are discretized and represented as quantum pure states. For the quantum sampling task, Liu and Wang (2018) propose a gradient-based learning algorithm to train a Born circuit, from which one can efficiently draw samples via projective measurements on the qubits. There are numerous potential applications in finance, ranging from the pricing of exotic derivatives to simulation-based nonlinear asset allocation.
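The core sampling mechanism can be illustrated without any circuit training. In the sketch below (plain NumPy, with an invented discretized Gaussian as the target), a distribution p is encoded into the amplitudes sqrt(p) of a pure state; simulated projective measurement then returns basis state i with probability |amplitude_i|^2 = p_i. A Born machine learns circuit parameters so that the prepared state has exactly this property.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: a discretized Gaussian over 2^3 = 8 basis states (3 qubits).
grid = np.arange(8)
p = np.exp(-0.5 * ((grid - 3.5) / 1.5) ** 2)
p /= p.sum()

amplitudes = np.sqrt(p)  # pure state whose Born probabilities equal p

# Simulate projective measurements: each shot collapses the state to
# basis state i with probability |amplitudes[i]|^2.
shots = rng.choice(grid, size=10_000, p=np.abs(amplitudes) ** 2)

empirical = np.bincount(shots, minlength=8) / len(shots)
print(np.round(p, 3))          # target distribution
print(np.round(empirical, 3))  # distribution recovered from measurements
```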

Restricted Boltzmann machines
Restricted Boltzmann machines (RBM) are generative stochastic neural networks, consisting of connected visible and hidden layers, that are commonly used to learn probability distributions over a set of inputs. Kondratyev and Schwarz (2019) propose using Bernoulli RBMs for synthetic scenario generation, replicating the empirical probability distribution of the input market data. Their non-parametric model allows for the generation of realistic, unique univariate and multivariate scenarios, as well as control over the degree of autocorrelation through hyperparameter tuning. Moreover, by injecting conditioning variables, one can make the RBM detect market regimes during training and generate scenarios conditional on the market state.
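For reference, here is a textbook Bernoulli RBM trained with one-step contrastive divergence (CD-1). This is a generic sketch on made-up binary data, not the exact setup of Kondratyev and Schwarz (2019); in their application, market returns would first have to be discretized into binary visible units.

```python
import numpy as np

rng = np.random.default_rng(2)

n_visible, n_hidden, lr = 16, 8, 0.05
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible bias
b_h = np.zeros(n_hidden)   # hidden bias

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0):
    """One CD-1 step on a batch of binary visible vectors v0."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)               # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)             # reconstruction P(v = 1 | h0)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    n = len(v0)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n   # approximate likelihood gradient
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

# Train on made-up binary data, then draw one synthetic scenario via Gibbs sampling.
data = (rng.random((500, n_visible)) < 0.3).astype(float)
for epoch in range(50):
    cd1_update(data)

v = (rng.random(n_visible) < 0.5).astype(float)
for _ in range(100):  # alternate hidden/visible updates along a Gibbs chain
    h = (rng.random(n_hidden) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(n_visible) < sigmoid(h @ W.T + b_v)).astype(float)
print("synthetic scenario:", v.astype(int))
```

The Gibbs chain at the end is the sampling step that, in the quantum extension discussed next, is the natural target for replacement by quantum annealing.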

Adachi and Henderson (2015) proposed a quantum extension to the classical RBM training approach, reporting significant improvements in both accuracy and computational time. The costly Gibbs sampling step in the generative learning phase was replaced with quantum annealing, while the weights were fine-tuned using the standard backpropagation approach.

