UCL Computer Science

Seminars

The group seminars are held on Wednesdays, 4-5pm.

FCA seminar series 

Please contact the organiser, Paolo Barucca, to receive email announcements of forthcoming seminars.

Reinforcement learning for optimal execution: time-varying liquidity and multiple-player games

Fabrizio Lillo, Università di Bologna

16:00-17:00, Friday 29 November 2024, G12 in Torrington Place (1-19) 

Abstract: Optimal execution of large orders is a classical problem in mathematical finance and of great importance for the industry. Most models make simplifying assumptions in order to achieve analytical tractability. Taking an alternative approach, I will present some recent advances obtained with Reinforcement Learning (RL) for cases where liquidity is time-varying and where multiple executions take place at the same time. Specifically, I will first show that when market impact is time-varying, RL-based techniques are able to find solutions which are superior to approximated analytical solutions. Then, I will consider the case of multiple RL agents simultaneously performing an optimal execution, and I will show that the strategies learned by the agents deviate significantly from the Nash equilibrium of the corresponding market impact game, suggesting that the learned strategies exhibit tacit collusion.

 

The Laplacian Renormalization Group (LRG) unveils the structural organization of heterogeneous networks

Andrea Gabrielli, Università degli Studi ROMA TRE 

16:00-17:00, Thursday 28 November 2024, G12 in Torrington Place (1-19) 


 

The social dynamics of group interactions

Iacopo Iacopini, Northeastern University in London

16:00-17:00, Wednesday 23 October 2024, Room G01, UCL Gower Street Building, 66-72 Gower Street 

Abstract: Complex networks have emerged as the primary framework for modeling the dynamics of interacting systems. However, networks inherently describe pairwise interactions, while real-world systems often involve interactions among groups of three or more units. In this talk, I will explore social systems as a natural testing ground for higher-order network approaches. I will briefly demonstrate how incorporating higher-order mechanisms can lead to the emergence of novel phenomena, presenting recent results on the influence of structural features and seeding strategies on emergent dynamics. Finally, I will delve into the microscopic dynamics of empirical higher-order structures, examining the mechanisms governing their temporal dynamics at both the individual and group levels. This will involve characterizing how individuals navigate groups and how groups form and dissolve. I will conclude by proposing a dynamical hypergraph model that closely reproduces empirical observations.

Video

 

Improving Opinion Measurement using Neural-Embedding based on Social Connectivity Data 

Shi Zhou, University College London

16:00-17:00, Wednesday 22 May 2024, Room G01, UCL Gower Street Building, 66-72 Gower Street 

Abstract: There is a considerable body of research on modelling opinion dynamics in society and online social networks. Such efforts are criticised for a lack of real data and measurements, while conventional opinion measurements based on polling or text analysis are unreliable and error-prone. Recent work reported opinion measurement using neural embeddings based solely on social connectivity data. Two points in the embedding space are manually chosen to represent examples of opposing opinion, and the cosine distance between a user’s embedding and the projection vector defined by the two points is a measure of the user’s opinion. This method, however, is sensitive to the choice of the two points. Here we propose to define the projection vector by the centres of two clusters automatically determined by K-means clustering in the embedding space. Using the ‘following’ relationship data of US congressional members on X (formerly Twitter) and the voting records of the politicians as a proxy for ground-truth opinion, we experimentally demonstrate that our proposal shows better consistency with the evaluation data, avoids the dilemma of point selection, and has a lower computational complexity.
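
As a rough illustration of the clustering-based projection described above, the sketch below assumes an array of user embeddings already learned from the 'following' network (for example with node2vec); all names, and the choice of scikit-learn's K-means, are illustrative rather than the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

def opinion_scores(embeddings):
    """Score users along the axis joining two automatically determined cluster centres."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
    c0, c1 = km.cluster_centers_                 # centres of the two opinion clusters
    axis = (c1 - c0) / np.linalg.norm(c1 - c0)   # projection vector, no hand-picked points
    centred = embeddings - (c0 + c1) / 2.0       # positions relative to the midpoint
    norms = np.linalg.norm(centred, axis=1) + 1e-12
    return centred @ axis / norms                # cosine of the angle to the axis

A positive score places a user on one side of the opinion axis and a negative score on the other; the sign convention depends only on how K-means labels the two clusters.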

Multiscalar social segregation

Elsa Arcaute, University College London

16:00-17:00, Wednesday 13 March 2024, G01, UCL Gower Street Building, 66-72 Gower Street 

Abstract: The talk introduces an analytical framework for examining socio-spatial segregation across various spatial scales. This framework considers regional connectivity and population distribution, using an information theoretic approach to measure changes in socio-spatial segregation patterns across scales. It identifies scales where both high segregation and low connectivity occur, offering a topological and spatial perspective on segregation. Illustrated through a case study in Ecuador, the method is demonstrated to identify disconnected and segregated regions at different scales, providing valuable insights for planning and policy interventions.

Modelling and Understanding Cooperation in Societies of Artificial Agents

Mirco Musolesi, University College London

16:00-17:00, Wednesday 13 March 2024, Room 405, UCL Gower Street Building, 66-72 Gower Street 

Abstract: The analysis and modelling of the evolution of cooperation in competitive environments are of interest for economics, game theory, biology, psychology, and computer science just to name a few. Mathematical and computational models have been developed in order to extract insights on the underlying mechanisms. More recently, there has been an increasing interest in the study of societies based on artificial agents that can learn their strategies as they interact. The applications of this work are many: from the analysis of economic markets and financial strategies to the study of social and political institutions, from the design of self-organising agent systems, including robotic ones, to the understanding of the emergence of cooperation in human and animal societies (and, possibly, in the future, in mixed environments composed of humans and artificial agents). In this talk I will give an overview of our work in modelling societies of artificial learning agents. I will discuss the design of reinforcement learning architectures composed of autonomous agents that do not rely on centralised coordination. I will introduce examples of applications of learning algorithms to social dilemmas and cooperative games. Finally, I will discuss open challenges and research questions in this fascinating emerging field.

Canonical Portfolios: Optimal Signal and Asset Combinations

Nick Firoozye, University College London

16:00-17:00, Wednesday 14 February 2024, Room 405, UCL Gower Street Building, 66-72 Gower Street 

Abstract: We present a novel framework for analyzing the optimal asset and signal combination problem, directly optimising expected utility. Our approach builds upon the dynamic portfolio selection problem introduced by Brandt and Santa-Clara (2006) and consists of two stages. First, we reformulate their original investment problem into a tractable one that allows us to derive a closed-form expression for the optimal portfolio policy that is scalable to large cross-sectional financial applications. Second, we recast the problem of selecting a portfolio of correlated assets and signals into selecting a set of uncorrelated managed portfolios through the lens of Canonical Correlation Analysis of Hotelling (1936). The new investment environment of uncorrelated managed portfolios offers unique economic insights into the joint correlation structure of our optimal portfolio policy. We also operationalize our theoretical framework to bridge the gap between theory and practice, showcasing the improved performance of our proposed method over natural competing benchmarks. The framework is shown empirically to enhance the performance of naively combined strategies by a significant multiple, and to outperform many other portfolio trading frameworks.

Privacy is Fungibility

Alexander Lynham, University College London

16:00-17:00, Wednesday 31 January 2024, Room 405, UCL Gower Street Building, 66-72 Gower Street 

Abstract: It is generally assumed that distributed ledgers, such as blockchains, are immutable. The reality is that this is often not the case. Many permissionless networks that offer fast finality, via an economic finality mechanism like proof-of-stake, are not adequately decentralized, either in stake distribution or in number of validator nodes, to achieve immutability in practice. An organisation with an operational role, the developers of key software, or a particular set of validators can be pressured to stop, and then alter, the state of a ledger, or unilaterally co-ordinate to do so, for example via a software upgrade, misuse of a disaster recovery process, or a hard fork. From the perspective of individuals and businesses working with digital assets, the primary risk is loss of funds. Many loss events resulting from what could be described as political factors have already occurred, and are simply underreported. In short, without privacy guarantees, all user funds held by these systems are theoretically at risk, whether to political or economic pressure from other users and stakers, or to regulators and governments. Building on the discussion above, I will outline the design and intended operation of the Comet system, as well as the USO asset, its novel cryptographic primitive. Then, I will suggest a few potential use-cases for this system in real-world applications.

 

Fluctuations and heterogeneity in processes on networks

Timothy Rogers, Professor of Mathematics, University of Bath

16:00-17:00, Wednesday 15 November 2023, South Wing, Institute of Advanced Studies, Room 20 

Abstract: Understanding the relationship between complexity and stability in large dynamical systems, such as ecosystems, remains a key open question in complexity theory, one that has inspired a rich body of work developed over more than fifty years. The vast majority of this theory addresses asymptotic linear stability around equilibrium points, but the idea of ‘stability’ in fact has other uses in the empirical ecological literature. The important notion of ‘temporal stability’ describes the character of fluctuations in population dynamics, driven by intrinsic or extrinsic noise. Here we apply tools from random matrix theory to the problem of temporal stability, deriving analytical predictions for the fluctuation spectra of complex ecological networks. We show that different network structures leave distinct signatures in the spectrum of fluctuations, and demonstrate the application of our theory to the analysis of ecological time-series data.

Video [NO AUDIO]

Separation between Bond Pricing and Bond Intrinsic Value

Yacoov Mutnikas, Dept of Computer Science, UCL

16:00-17:00, Wednesday 18 October 2023, Room 405, 4th Floor, UCL GS Building, 66-72 Gower Street 

Abstract: The efficient market theory assumes that market participants act rationally, seeking to maximize risk-adjusted returns without hindrances. However, in real-world fixed income markets, these assumptions often do not hold. Economic and non-economic participants have different objectives, leading to market frictions and inefficiencies. These inefficiencies are exacerbated by factors like information asymmetry, liquidity challenges, macro-economic conditions, and regulatory constraints. As a result, fixed income markets are diverse and prone to instrument mispricing and demand-supply imbalances. Understanding these complexities can present opportunities for investors but requires research and a grasp of intrinsic bond values. Given today's economic volatility and the uncertainty in interest rates, credit cycles and market sentiment, recognizing these opportunities and early signs of credit deterioration is crucial.

What we do in the shadows: Using temporal network motifs to analyse NFT wash trading and dark markets

Richard Clegg, Dept of Elec Eng and Computer Science, Queen Mary University of London

16:00-17:00, Wednesday 24 May 2023,  Hybrid Seminar, UCL

Abstract: This talk presents new results on cryptocurrency networks. We look at three datasets: a set of bitcoin transactions centred on the AlphaBay dark market, a set of bitcoin transactions centred on the Hydra dark market, and a set of transactions in other currencies involved in NFT sales. We use a new tool, Raphtory, that was designed from the ground up to be a fast and flexible way to analyse any temporal network. In this case we use the tool to look at several different phenomena. In the NFT market we consider "wash trading", an illegal practice involving "pumping" an asset by making "fake" trades to create the appearance of demand. In all data sets we use temporal motifs (small subgraphs that respect time order) to investigate patterns of trade within these cryptocurrency-based markets. We demonstrate that motif analysis can reveal a lot about patterns of trade, including revealing actors, times and periods of analysis that might otherwise appear irrelevant to more traditional network analysis.

Video

A bi-directional approach to comparing the modular structure of networks

Neave O’Clery, University College London

16:00-17:00, Wednesday 10 May 2023,  Hybrid Seminar, UCL

Abstract: Here we propose a new method to compare the modular structure of a pair of node-aligned networks. The majority of current methods, such as normalized mutual information, compare two node partitions derived from a community detection algorithm yet ignore the respective underlying network topologies. Addressing this gap, our method deploys a community detection quality function to assess the fit of each node partition with respect to the other network’s connectivity structure. The advantages of our methodology are three-fold. First, it is adaptable to a wide class of community detection algorithms that seek to optimize an objective function. Second, it takes into account the network structure, specifically the strength of the connections within and between communities, and can thus capture differences between networks with similar partitions but where one of them might have a more defined or robust community structure. Third, it can also identify cases in which dissimilar optimal partitions hide the fact that the underlying community structure of both networks is relatively similar. We apply our method to compare the multi-scale modular structure of inter-industry labour market mobility across a set of European countries.
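
As a sketch of the bi-directional idea, the snippet below scores each network's detected partition against the other network's topology, using Newman modularity as the quality function and networkx's greedy optimiser; these are illustrative stand-ins for the authors' choices of algorithm and quality function.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def cross_modularity(G_a, G_b):
    """Fit of A's partition on B's topology, and of B's partition on A's topology."""
    part_a = greedy_modularity_communities(G_a)
    part_b = greedy_modularity_communities(G_b)
    return modularity(G_b, part_a), modularity(G_a, part_b)

# Two node-aligned graphs (the node sets must coincide)
G_a = nx.barabasi_albert_graph(50, 3, seed=1)
G_b = nx.barabasi_albert_graph(50, 3, seed=2)
print(cross_modularity(G_a, G_b))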

Video

The transmission of Keynesian supply shocks

Ambrogio Cesa-Bianchi, Bank of England

16:00-17:00, Wednesday 22 March 2023,  Hybrid Seminar, UCL

Abstract: Sectoral supply shocks can trigger shortages in aggregate demand when strong sectoral complementarities are at play. US data on sectoral output and prices offer support to this notion of “Keynesian supply shocks” and their underlying transmission mechanism. Demand shocks derived from standard identification schemes using aggregate data can originate from sectoral supply shocks that spillover to other sectors via a Keynesian supply mechanism. This finding is a regular feature of the data and is independent of the effects of the 2020 pandemic. In a New Keynesian model with input-output network calibrated to 3-digit US data, sectoral productivity shocks generate the same pattern for output growth and inflation as observed in the data. The degree of sectoral interconnection, both upstream and downstream, and price stickiness are key determinants of the strength of the mechanism. Sectoral shocks may account for a larger fraction of business cycle fluctuations than previously thought.

Video

Amplifying green growth and fossil fuel divestment in networked bank lending

Jamie Rickman and Max Falkenberg, University College London

16:00-17:00, Wednesday 8 March 2023,  Hybrid Seminar, UCL

Abstract: Effective climate action is critically dependent on a rapid and sustained energy transition from fossil fuels to green energy. The banking sector is a key player in this, funding new energy projects totalling several hundred billion dollars each year. Using Bloomberg New Energy Finance data, we first identify key banks in the sector and show how energy investments have undergone a significant transition between 2010 and 2021, principally characterised by an increase in green investment, but with little evidence of a system-wide reduction in fossil fuel spend. Then, by developing a network model for the reassignment of capital, we show how the substitution effect, the phenomenon whereby the capital divested from one bank is replaced by new capital from a competing bank, prevents effective, system-wide divestment. We show that unless multiple major banks divest from the fossil fuel sector in parallel, the divestment of individual banks has little to no actual effect on the total value of fossil fuel projects which are funded in a given year. However, if banks are subject to regulations which restrict their fossil fuel investments according to the bank’s own assets – for instance the “one-for-one” capital requirements rule recently proposed by the European Parliament but ultimately rejected – then the individual divestment of banks can have a non-zero impact on the sector, with a phase transition in divestment efficiency as the number of divesting banks increases. Our results highlight the need for collective action, stressing the importance of regulatory oversight to ensure that fossil fuel divestment at the banking level has the desired effect at the project level.

Video

Benign Autoencoders

Andrea Xu Teng, École Polytechnique Fédérale de Lausanne

16:00-17:00, Wednesday 1 February 2023,  In-person Seminar, Engineering Front Executive Suite 104 in Engineering Front Building, UCL

Abstract: The success of modern machine learning algorithms depends crucially on efficient data representation and compression through dimensionality reduction. This practice seemingly contradicts the conventional intuition suggesting that data processing always leads to information loss. We prove that this intuition is wrong. For any non-convex problem, there exists an optimal, benign auto-encoder (BAE) extracting a lower-dimensional data representation that is strictly beneficial: Compressing model inputs improves model performance. We prove that BAE projects data onto a manifold whose dimension is the compressibility dimension of the learning model. We develop and implement an efficient algorithm for computing BAE and show that BAE improves model performance in every dataset we consider. Furthermore, by compressing "malignant" data dimensions, BAE makes learning more stable and robust.

Video

Identifying common volatility shocks in the global carbon transition

Susana Campos Martins, Nuffield College, University of Oxford

16:00-17:00, Wednesday 7 December 2022,  In-person Seminar, G13, 1-19 Torrington Place 

Abstract: We propose a novel approach to measure the global effects of climate change news on financial markets. For that purpose, we first study the global common volatility of the oil and gas industry, and then project it on climate-related shocks. We show that rising concerns about the energy transition make oil and gas share prices move at the global scale, controlling for shocks to the oil price, US and world stock markets. Despite the clear exposure of oil and gas companies to carbon transition risk, not all geoclimatic shocks are alike. The sign and magnitude of the impact differ across topics and themes of climate-related concerns. Regarding sentiment, climate change news tends to create turmoil only when the news is negative. Furthermore, the adverse effect is amplified by oil price movements but weakened by stock market shocks. Finally, our findings point out that climate news materialises when it reaches the global scale, supporting the relevance of modelling geoclimatic volatility.

Video

Systemic risk under ESG rating inflation 

Davide Stocco, Politecnico di Milano
14:00-15:00, Tuesday 13 September 2022, G01, UCL Gower Street Building, 66-72 Gower Street

Abstract: ESG ratings have become a complementary non-financial information tool for investors that seeks to quantify the sustainability profile of listed firms. However, ESG ratings diverge across data providers and do not reflect the actual sustainable performance of the listed firms, causing a non-responsive financial market or even financial distress. In the seminar we will review the main controversies related to quantifying the sustainability profile of firms and how they affect the financial market. We will then examine a research proposal concerning the detection and quantification of the potential systemic risk that ESG disagreement and inflated ESG ratings can produce. This analysis will extend the literature by connecting the complex relationship between ESG ratings and stock prices, making use of complex network theory and multivariate conditional probability theory.

Video   

Filtering the covariance matrix of nonstationary systems with time-independent eigenvalues

Christian Bongiorno, École CentraleSupélec de Paris

16:00-17:00, Wednesday 26 October 2022, Zoom Seminar 

Abstract: We propose a data-driven way to reduce the noise of covariance matrices of nonstationary systems. In the case of stationary systems, asymptotic approaches have been proved to converge to the optimal solutions. Such methods produce eigenvalues that are highly dependent on the inputs, as common sense would suggest. Our approach instead uses a set of eigenvalues totally independent of the inputs, which encode the long-term average of the influence of the future on present eigenvalues. Such an influence can be the predominant factor in nonstationary systems. Using real and synthetic data, we show that our data-driven method outperforms optimal methods designed for stationary systems for the filtering of both the covariance matrix and its inverse, as illustrated by financial portfolio variance minimization, which makes our method generically relevant to many problems of multivariate inference.
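
A minimal sketch of the general idea, not the authors' estimator: keep the eigenvectors estimated from the current window but replace the eigenvalues with a long-run average computed over past windows, so that the spectrum no longer depends on the current inputs. The averaging scheme and variable names are assumptions.

import numpy as np

def filtered_covariance(current_window, past_windows):
    """current_window: (T, N) array of returns; past_windows: list of earlier (T, N) blocks."""
    C = np.cov(current_window, rowvar=False)
    _, vecs = np.linalg.eigh(C)                   # eigenvectors from the current data
    # Input-independent spectrum: eigenvalues averaged over historical windows, rank by rank
    avg_eigs = np.mean(
        [np.sort(np.linalg.eigvalsh(np.cov(w, rowvar=False))) for w in past_windows],
        axis=0,
    )
    return vecs @ np.diag(avg_eigs) @ vecs.T

Because np.linalg.eigh returns eigenvalues in ascending order, the i-th averaged eigenvalue is paired with the i-th current eigenvector by rank.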

Video   

Emergent Bartering Behaviour in Multi-Agent Reinforcement Learning

Joel Z Leibo, DeepMind

16:00-17:00, Wednesday 16 November 2022,  In-person Seminar 

Abstract: Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently. This paper contributes such an environment based on ideas inspired by elementary Microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer. We show that the emergent production, consumption, and pricing behaviors respond to environmental conditions in the directions predicted by supply and demand shifts in Microeconomics. We also demonstrate settings where the agents' emergent prices for goods vary over space, reflecting the local abundance of goods. After the price disparities emerge, some agents then discover a niche of transporting goods between regions with different prevailing prices, a profitable strategy because they can buy goods where they are cheap and sell them where they are expensive. Finally, in a series of ablation experiments, we investigate how choices in the environmental rewards, bartering actions, agent architecture, and ability to consume tradable goods can either aid or inhibit the emergence of this economic behavior. This work is part of the environment development branch of a research program that aims to build human-like artificial general intelligence through multi-agent interactions in simulated societies. By exploring which environment features are needed for the basic phenomena of elementary microeconomics to emerge automatically from learning, we arrive at an environment that differs from those studied in prior multi-agent reinforcement learning work along several dimensions. For example, the model incorporates heterogeneous tastes and physical abilities, and agents negotiate with one another as a grounded form of communication.

Video

Spectral theory for networks and its application to the economy

Izaak Neri, King's College London

16:00-17:00, Wednesday 23 November 2022,  In-person Seminar 

Abstract: Spectral theory plays an important role in, among others, neuroscience, ecology, and economics. However, the lion’s share of the research literature focuses on densely connected graphs, where the indegree and outdegree of a node are proportional to system size. Here, we review some of the main results in a spectral theory for sparse and random networks, for which the indegrees and outdegrees of nodes are independent of system size. As we discuss, this theory provides simple analytical results for the spectral properties of infinitely large, random directed graphs, while for sparse graphs with nonoriented edges it provides the spectral properties in the infinite-size limit through the numerical solution of a set of equations that can be solved with a Monte Carlo algorithm. Finally, we analyse how this theory provides analytical predictions for numerical results reported before in the literature on the stability of a random economy.

Video

 

Past 

2021-2022

What do data on millions of U.S. workers reveal about lifecycle earnings dynamics?
Fatih Guvenen, University of Minnesota
16:00-17:00, Wednesday 8 June 2022, Zoom

Abstract: We study individual male earnings dynamics over the life cycle using panel data on millions of U.S. workers. Using nonparametric methods, we first show that the distribution of earnings changes exhibits substantial deviations from lognormality, such as negative skewness and very high kurtosis. Further, the extent of these nonnormalities varies significantly with age and earnings level, peaking around age 50 and between the 70th and 90th percentiles of the earnings distribution. Second, we estimate nonparametric impulse response functions and find important asymmetries: Positive changes for high-income individuals are quite transitory, whereas negative ones are very persistent; the opposite is true for low-income individuals. Third, we turn to long-run outcomes and find substantial heterogeneity in the cumulative growth rates of earnings and the total number of years individuals spend nonemployed between ages 25 and 55. Finally, by targeting these rich sets of moments, we estimate stochastic processes for earnings that range from the simple to the complex. Our preferred specification features normal mixture innovations to both persistent and transitory components and includes state-dependent long-term nonemployment shocks with a realization probability that varies with age and earnings.

Video

Labor and supply chain networks
Anna Nagurney, University of Massachusetts
16:00-17:00, Wednesday 1 June 2022, Zoom

Abstract: The COVID-19 pandemic has dramatically illustrated the importance of labor in supply chain networks in numerous economic sectors, from agriculture to healthcare. In this talk, I will discuss our research on the inclusion of labor in supply chains, in both optimization and game theory frameworks, to elucidate the impacts of disruptions of labor, in terms of availability as well as productivity, on product flows, prices, and the profits of firms. I will also highlight what can be done to ameliorate negative impacts and will discuss the power of setting appropriate wages on supply chain links from production and transportation to storage and the ultimate distribution to points of demand. The use of international migrants to alleviate shortages will also be noted, as well as the impacts of the war in Ukraine on global supply chains. I will conclude with some of our experiences in influencing policy in the pandemic.

Video

A deep dive into crypto-markets: we cannot compare oranges with apples
Emilio Barucci, Politecnico di Milano
16:00-17:00, Thursday 19 May 2022, Room 1.02, Malet Place Engineering Building

Abstract: We analyze markets for cryptoassets (cryptocurrencies and stablecoins), investigating market impact and efficiency through the lens of the market order flow. We provide evidence that markets where cryptoassets are exchanged against one another play a central role in price formation (aggregating preference/technology shocks and heterogeneous opinions) and are more efficient than markets where cryptocurrencies are exchanged against the US dollar.

Video

Self-regularization in deep neural networks: evidence from random matrix theory
Charles Martin, Calculation Consulting
16:00-17:00, Wednesday 18 May 2022, Zoom

Abstract: We present a semiempirical theory of deep learning which explains the remarkable generalization performance of state-of-the-art (SOTA) deep neural networks (DNNs). We derive the WeightWatcher Alpha-Hat metric, which is capable of predicting trends in the test accuracies of pretrained SOTA models, without needing access to the test or even training data. Our theory is based on the recent heavy-tailed self-regularization (HT-SR) theory and uses advanced methods from random matrix theory (RMT). We show how to estimate the generalization accuracy of a (pre-)trained DNN by computing the empirical spectral density (ESD) of the layer weight matrices, fitting the ESD to a power-law (PL) formula with exponent alpha, and then plugging this empirical alpha into our theory. Finally, we discuss the relationship of our approach to known experimental data of actual spiking neurons.
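
As an illustration of the spectral diagnostic described above, the sketch below computes the empirical spectral density of one layer's weight matrix and estimates a power-law tail exponent with a simple Hill estimator; this is a hedged stand-in for the WeightWatcher fitting procedure, not a reproduction of it.

import numpy as np

def layer_alpha(W, tail_fraction=0.2):
    """Tail exponent of the eigenvalue spectrum of W^T W (the layer's ESD)."""
    eigs = np.linalg.eigvalsh(W.T @ W)            # squared singular values of W
    eigs = np.sort(eigs[eigs > 1e-12])
    k = max(2, int(tail_fraction * len(eigs)))    # number of tail eigenvalues used
    tail = eigs[-k:]
    # Hill estimator: alpha = 1 + k / sum(log(lambda_i / lambda_min))
    return 1.0 + k / np.sum(np.log(tail / tail[0]))

rng = np.random.default_rng(0)
print(layer_alpha(rng.normal(size=(256, 512))))   # random weights have no heavy tail, so alpha is large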

Video

Systemic risk in markets with multiple central counterparties
Luitgard Veraart, London School of Economics and Political Science
16:00-17:00, Wednesday 04 May 2022, Zoom

Abstract: We provide a framework for modelling risk and quantifying payment shortfalls in cleared markets with multiple central counterparties (CCPs). Building on the stylised fact that clearing membership is shared among CCPs, we show that stress in this shared membership can transmit across markets through multiple CCPs. We provide stylised examples to lay out how such stress transmission can take place, as well as empirical evidence based on publicly available data to illustrate that the mechanisms we study could be relevant in practice. Furthermore, we show how stress mitigation mechanisms such as variation margin gains haircutting by one CCP can have spillover effects on other CCPs. Finally, we discuss how the framework can be used to enhance CCP stress-testing. The current “Cover 2” standard requires CCPs to be able to withstand the default of their two largest clearing members. We show that who these two clearing members are can be significantly affected by higher-order effects arising from interconnectedness through shared membership. This is joint work with Iñaki Aldasoro (Bank for International Settlements).

Variational methods for conditional volatility forecasting
Zexuan Yin, University College London
16:00-17:00, Wednesday 20 April 2022, Zoom

Abstract: Forecasting conditional volatility to account for heteroscedastic behaviour in financial time series is an important task in domains such as risk management, asset pricing, and portfolio management. Traditional models from econometrics, such as GARCH (generalised autoregressive conditional heteroscedasticity) and stochastic volatility models, each have their own limitations. Whilst GARCH models are more commonly deployed in industry, they suffer from the curse of dimensionality due to their computational complexity. For stochastic volatility models, there is a lack of open-source software, and inference is difficult due to the absence of a closed-form solution. Recently, however, deep learning models have been deployed to successfully predict time series from various domains. To this end, we would like to present two models: neural GARCH and the variational heteroscedastic volatility model (VHVM). Neural GARCH is a neural network adaptation of traditional GARCH models and is used in low-dimensional settings, whilst VHVM is a neural architecture designed to work in higher-dimensional settings.

Video

Looking for evidence: can deep learning models display herding behaviour?
Rishabh Kumar and Marvin Tewarrie, Bank of England
16:00-17:00, Wednesday 6 April 2022, Zoom

Abstract: Deep learning, a sub-field of Artificial Intelligence, represents a fundamental discontinuity from prior analytical techniques due to its complex structure but outstanding predictive power. This development is not without its warnings. Current academic discourse highlights that deep learning frameworks often use similar datasets and methodologies, as well as focusing on narrow functions to optimize, and are likely to generate pro-cyclical systemic financial risks through herding and the formation of monocultures. Paradoxically, unlike traditional methods, deep learning models tend to be non-deterministic, which may lead to a greater diversity of outcomes. Due to the complexity and recent rapid deployment of these models, it is paramount for regulators to understand them. In this paper, we simulate various deep learning models on credit default data and stock return data to assess model herding. We simulate different flavours of various models on the same dataset to ascertain whether these models display similarity among their features and predictions. We compare the results to the traditional analytical technique of logistic regression. We also compare the workings of these models during non-turbulent (pre-2007 crisis) and turbulent times (2007-2008) to understand their behaviour.

The comments made by the speaker are personal views and so cannot be taken to represent those of the Bank of England or any of its committees or to state Bank of England policy.

MicroVelocity: rethinking the velocity of money for digital currencies
Carlo Campajola, Universität Zürich
16:00-17:00, Thursday 7 April 2022, Zoom

Abstract: We propose a novel framework to analyse the velocity of money in terms of the contribution (MicroVelocity) of each individual agent, and to uncover the distributional determinants of aggregate velocity. Leveraging on complete publicly available transactions data stored in blockchains from four cryptocurrencies, we empirically find that MicroVelocity i) is very heterogeneously distributed and ii) strongly correlates with agents' wealth. We further document the emergence of high-velocity intermediaries, thereby challenging the idea that these systems are fully decentralised. Further, our framework and results provide policy insights for the development and analysis of digital currencies.
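
A toy sketch of the per-agent decomposition idea, not the paper's estimator: approximate each agent's contribution as its outgoing transaction volume divided by its average holdings over the observation window. The column names and this simplified definition are assumptions for illustration.

import pandas as pd

def micro_velocity(tx, balances):
    """tx: DataFrame with columns [sender, amount]; balances: columns [agent, avg_balance]."""
    outflow = tx.groupby("sender")["amount"].sum()          # total spending per agent
    avg_bal = balances.set_index("agent")["avg_balance"]    # average holdings per agent
    return (outflow / avg_bal).dropna().sort_values(ascending=False)

tx = pd.DataFrame({"sender": ["a", "a", "b", "c"], "amount": [5.0, 3.0, 2.0, 10.0]})
balances = pd.DataFrame({"agent": ["a", "b", "c"], "avg_balance": [4.0, 8.0, 20.0]})
print(micro_velocity(tx, balances))   # heterogeneous per-agent velocities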

Video

Online feature engineering for high-frequency trading limit order books
Adamantios Ntakaris
13:15-14:00, Wednesday 23 March 2022, Zoom

Abstract: The increasing complexity of financial trading in recent years has revealed the need for methods that can capture its underlying dynamics. An efficient way to organise this chaotic system is through limit order book ordering mechanisms that operate under price and time filters (event-based or, equivalently, online learning forecasts). The limit order book can be analysed using linear and nonlinear models. Novel methods need to be developed to identify limit order book characteristics that give traders and market makers an information edge in their trading. A good proxy for traders and market makers is the prediction of mid-price movement. Potential sources of mid-price fluctuations include: (1) technical indicators, (2) quantitative features, (3) econometrics, and (4) features that explore hidden information that hand-crafted features cannot capture but fully automated processes can. Extracting informative features is only part of the problem, since suitable feature selection methods also need to be applied to turn that information edge into optimal trading. Another problem in the quest for robust and fully automated forecasting processes is the model selection task as part of a multi-objective strategy. This presentation sets the framework for tackling the aforementioned challenges in the space of high-frequency trading and limit order books (with examples based on US and Nordic stocks) in an event-based (rather than time-sampled) manner.

Disclaimer: The experimental protocols, the features (hand-crafted and fully automated), the machine learning and deep learning topologies that appeared during this presentation at UCL (23rd of March, 2022) are for educational purposes only, and they are not related to any projects/consultancy with clients and companies that I have collaborated with.

Video

Analysis of Bitcoin crash in 2017-18 using high-frequency data and tools from information dynamics
Vaiva Vasiliauskaite
13:15-14:00, Wednesday 9 March 2022, Zoom

Abstract: Cryptocurrencies are a novel financial instrument whose uniqueness lies in a distributed ledger technology that serves as a public database of executed transactions. They are also characterised by high price fluctuations, price bubbles, and sudden price crashes. Cryptocurrencies can be traded (sold and bought) at many independently operating venues (exchange markets); however, the price of a cryptocurrency eventually synchronises as traders take advantage of mismatches between prices, as observed for a single asset across several markets, or exploit several assets within one market. Therefore, the universal price of a cryptocurrency is also determined in a distributed fashion, depending on trading decisions within each individual market, trades that occur between markets, and “exogenous information” in the form of social media mentions and news, as well as “inter-market information”. The price mismatches that appear at high frequency allow detection of influences amongst the system’s constituents. To deduce the importance of different types of information sources (exogenous beyond the system of markets, exogenous across markets, endogenous within each individual market) and their changes over time, we analyse entropy components of the high-frequency data from the largest Bitcoin exchange markets. By contrasting two states of the system (pre-crash and post-crash), we find that the system as a whole shifted from a high-coupling regime to a low-coupling regime. We compare these empirical findings with several econometric models and argue that some results could relate to intra-market and inter-market regime shifts, and changes in the direction of information flow between different market observables.

 Video

Graph clustering applications to equity markets
Mihai Cucuringu
13:15-14:00, Wednesday 23 February 2022, Zoom

Abstract: We consider the problem of clustering in two important families of networks: signed and directed, both relatively less well explored compared to their unsigned and undirected counterparts. Both problems share an important common feature: they can be solved by exploiting the spectrum of certain graph Laplacian matrices. In signed networks, the edge weight between two nodes may take either positive or negative values, encoding a measure of similarity or dissimilarity. We demonstrate the benefits of this approach on networks arising from stochastic block models and financial multivariate time series data. We also discuss a spectral clustering algorithm for directed graphs, based on a complex-valued representation of the adjacency matrix, which is able to capture the underlying cluster structures, for which the information encoded in the direction of the edges is crucial. We evaluate the proposed algorithm in terms of a cut flow imbalance-based objective function, which, for a given pair of clusters, captures the propensity of the edges to flow in a given direction. The motivation for this problem stems from multivariate time series data, where it has been observed that certain groups of variables partially lead the evolution of the system, while other variables follow this evolution with a time delay, resulting in a lead-lag structure amongst the time series variables. We showcase that our method is able to detect statistically significant lead-lag clusters in the US equity market. We study the nature of these clusters in the context of the empirical finance literature on lead-lag relations, and demonstrate how they can be leveraged for the construction of predictive financial signals.
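
The snippet below sketches the complex-valued (Hermitian) representation of a directed adjacency matrix mentioned above; the normalisation, the use of the k leading eigenvectors and the K-means step are simplifying assumptions rather than the exact algorithm presented in the talk.

import numpy as np
from sklearn.cluster import KMeans

def directed_spectral_clusters(A, k):
    """A: (n, n) directed adjacency matrix; returns a cluster label for each node."""
    H = 1j * (A - A.T)                        # Hermitian encoding of edge directions
    vals, vecs = np.linalg.eigh(H)            # real eigenvalues, complex eigenvectors
    top = np.argsort(-np.abs(vals))[:k]       # eigenvectors with largest |eigenvalue|
    X = np.hstack([vecs[:, top].real, vecs[:, top].imag])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

# Toy lead-lag structure: nodes 0-2 point to nodes 3-5
A = np.zeros((6, 6))
A[:3, 3:] = 1.0
print(directed_spectral_clusters(A, 2))       # recovers the leading and lagging groups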

Video

A network view of cryptocurrencies: the Bitcoin Lightning Network case study
Tiziano Squartini
13:15-14:00, Wednesday 9 February 2022, Zoom

Abstract: Cryptocurrencies are distributed systems that allow exchanges of native tokens among participants. The public availability of their complete historical bookkeeping opens up an unprecedented possibility, i.e. that of analysing the static and the dynamical properties of their network representations throughout their entire history. In this talk, some of the most recent results concerning the structural properties of the Bitcoin Lightning Network (BLN) will be reviewed: the picture that emerges is that of a system whose size enlarges while becoming increasingly sparse and whose mesoscopic structural organization becomes increasingly compatible with a (statistically-significant) core-periphery structure. Such a peculiar topology is matched by a very uneven distribution of bitcoins, a result suggesting that the BLN is undergoing a "centralisation" process at different levels.

Video

Opinion formation and consensus dynamics on (temporal) hypergraphs
Leonie Neuhäuser
13:15-14:00, Wednesday 26 January 2022, Zoom

Abstract: In this talk, we will derive and analyse models for consensus dynamics on hypergraphs. In the case of static hypergraphs, unless there are nonlinear node interaction functions, it is always possible to rewrite the system in terms of a new network of effective pairwise node interactions, regardless of the initially underlying multi-way interaction structure. We thus focus on dynamics based on a certain class of non-linear interaction functions, which can model different sociological phenomena such as peer pressure and stubbornness. Unlike for linear consensus dynamics on networks, we show how our nonlinear model dynamics can cause shifts away from the average system state. We will then investigate consensus dynamics on temporal hypergraphs that encode network systems with time-dependent, multiway interactions. We compare these consensus processes with dynamics evolving on projections that remove the temporal and/or the multiway interactions of the higher-order network representation. We find differences in convergence speed for linear dynamics and, in addition to that, an effect on the final consensus value for nonlinear dynamics. In particular, we observe a first-mover advantage in the consensus formation process: If there is a local majority opinion in the hyperedges that are active early on, then the majority in these first-mover groups has a higher influence on the final consensus value—a behavior that is not observable in this form in projections of the temporal hypergraph.
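
A toy sketch of the kind of nonlinear multi-way consensus update discussed above: within each hyperedge, a node moves towards the mean state of the other members, with a weight that grows with their disagreement as a crude stand-in for peer pressure. The scaling function, parameters and names are illustrative assumptions, not the model of the talk.

import numpy as np

def hyperedge_consensus_step(x, hyperedges, eps=0.1, lam=1.0):
    """x: array of node states; hyperedges: list of node-index tuples."""
    dx = np.zeros_like(x)
    for edge in hyperedges:
        for i in edge:
            others = [j for j in edge if j != i]
            m = np.mean(x[others])                # state of the rest of the group
            weight = np.exp(lam * abs(m - x[i]))  # nonlinear reinforcement of group pressure
            dx[i] += weight * (m - x[i])
    return x + eps * dx

x = np.array([0.0, 0.2, 0.9, 1.0])
for _ in range(200):
    x = hyperedge_consensus_step(x, [(0, 1, 2), (1, 2, 3)])
print(x)   # states converge to a common value, which the nonlinearity can shift away from the plain average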

Video

Determinants and consequences of poor decisions in health insurance 
Lan Zou, University of St. Gallen
13:00-14:00, Wednesday 12 January 2022, Zoom

Abstract: This paper aims at understanding decision patterns and welfare effects of poor decisions in health insurance conditional on a large set of sociodemographic characteristics. While giving consumers choice has the potential to improve welfare in principle, the prevalence of empirically observed choices that deviate from utility-maximizing behavior in health insurance markets questions the validity of such arguments. We exploit the highly regulated nature of contracts with only six distinct deductible levels and standardized covered services across contracts and providers in the Swiss mandatory health insurance market to identify optimal and non-optimal coverage levels at the individual level based on a range of standard and behavioral decision models. Using population representative survey and register data from 16,380 individuals, collected by the Swiss Federal government, we show that consumers lose up to USD 1,200 (USD 420 on average) annually due to non-optimal deductible choice. We identify high levels of heterogeneity, indicating that it is particularly the low-income share of the population demanding non-optimally high coverage and facing high financial losses, and that the saliency of health issues (e.g., via the presence of long-term chronic diseases) increases the probability of choosing optimal coverage levels. Our results highlight the heterogeneous adverse welfare effects of choice in complex settings conditional on a choice menu including non-optimal options in general and have implications for policy in the Swiss mandatory health insurance scheme.
Video

An investigation of the volatility adjustment
Daniele Marazzina, Politecnico di Milano
13:00-14:00, Wednesday 1 December 2021, Zoom

Abstract: The Volatility Adjustment aims at capturing the non-fundamental (or credit quality) components of the spread of bonds held by insurance companies. We use market data to reconstruct the Volatility Adjustment of different countries on a monthly basis. Looking at the effect of the Volatility Adjustment on the Solvency Capital Requirement, we observe some evidence of over-shooting, with the mechanism rewarding insurance companies with long-term liabilities. We observe that the Volatility Adjustment is affected not by the credit quality of bonds or exaggerations of financial markets, but by turbulence in financial markets. We also show that the new mechanism performs differently with respect to the mechanism currently in force, providing higher values and lower variances.
Video

Entropic algorithms and wide flat minima in neural networks
Carlo Lucibello, Bocconi University
13:00-14:00, Wednesday 3 November 2021, Zoom

Abstract: The properties of flat minima in the training loss landscape of neural networks have been debated for some time. Increasing evidence suggests they possess better generalization capabilities with respect to sharp ones. First, we'll discuss simple neural network models. Using analytical tools from spin glass theory of disordered systems, we are able to probe the geometry of the landscape and highlight the presence of flat minima that generalize well and are attractive for learning dynamics. Next, we extend the analysis to the deep learning scenario by extensive numerical validations. Using two algorithms, Entropy-SGD and Replicated-SGD, that explicitly include in the optimization objective a non-local flatness measure known as local entropy, we consistently improve the generalization error for common architectures (e.g. ResNet, EfficientNet). Finally, we will discuss the extension of message passing techniques (belief propagation) to deep networks as an alternative paradigm to SGD training.
Video

Momentum gender gap
Sofiya Malamud, EPFL 
13:00-14:00, Wednesday 6 October 2021, Zoom

Abstract: We study the response of traders to momentum in the market and find that gender seems to play a role in their reaction. Specifically, we find that female investors realise capital gains at a higher rate than capital losses, a behavioral tendency known as the disposition effect, to which male investors are less prone. We use a large client data set of a major Swiss retail broker to conduct these experiments.
Video

 

 

2020-2021

Fundamental valuation of companies using new data and quant methods
Michael Recce, Neuberger Berman
12:00-13:00, Wednesday 16 June 2021, Zoom

Abstract: Investing has historically been an early adopter of new algorithms and technology. The utility of a new idea is straightforwardly measured in money out over money in. Machine learning and large-scale computing on unstructured, novel data provide a new set of methods that were developed first by internet companies. These methods primarily provide an informational advantage in an inefficient market where securities are selected based on their underlying intrinsic value. In this talk I describe the underlying methods, provide examples of their successful use, and present a roadmap for the impact these methods will have on markets and investing.
Video

Dynamics of cascades on burstiness-controlled temporal networks
Samuel Unicomb, University of Limerick
12:00-13:00, Wednesday 12 May 2021, Zoom

Abstract: Burstiness, the tendency of interaction events to be heterogeneously distributed in time, is critical to information diffusion in physical and social systems. However, an analytical framework capturing the effect of burstiness on generic dynamics is lacking. Here we develop a master equation formalism to study cascades on temporal networks with burstiness modelled by renewal processes. Supported by numerical and data-driven simulations, we describe the interplay between heterogeneous temporal interactions and models of threshold-driven and epidemic spreading. We find that increasing interevent time variance can both accelerate and decelerate spreading for threshold models, but can only decelerate epidemic spreading. When accounting for the skewness of different interevent time distributions, spreading times collapse onto a universal curve. Our framework uncovers a deep yet subtle connection between generic diffusion mechanisms and underlying temporal network structures that impacts a broad class of networked phenomena, from spin interactions to epidemic contagion and language dynamics.
Video

Regulating unintended consequences: Algorithmic trading and the limits of securities regulation
Carsten Gerner-Beuerle, University College London
12:00-13:00, Wednesday 14 April 2021, Zoom

Abstract: Since the infamous flash crash of 2010, instances of unexplained high volatility in financial markets, often driven by algorithmic and high-frequency trading, have received increased attention by policy makers and commentators. A number of regulatory initiatives in the EU and US deal specifically with the perceived risks that algorithmic and high-frequency trading pose to market quality. However, their efficacy is disputed, with some claiming that they are unlikely to prevent the future misuse of HFT practices, while others caution that the additional regulatory burden may have unintended and counterproductive consequences for market efficiency. This paper examines whether existing regulatory techniques, notably disclosure, internal testing and monitoring systems, and the regulation of structural features of the trade process, such as order execution times and circuit breakers, are adequate to address the risk of extreme market turbulence. It draws on market microstructure theory in arguing that regulation in the EU and the US continues to be wedded to an old regulatory paradigm centred around the role of information, without taking sufficient account of the mechanics of automated trading in modern financial markets.
Video

The COVID-19 auction premium
Gerardo Ferrara, Bank of England
12:00-13:00, Wednesday 10 March 2021, Zoom

Abstract: We uncover an additional channel by which a pandemic is costly for taxpayers, namely the surge of the bond auction premium. By applying a novel econometric strategy to high frequency data of the secondary Italian bond market, we show that the premium spiked anomalously during the “perfect storm” of 12 March 2020, a day which featured a large Treasury auction, the peak of COVID-19 infections in Italy and a controversial press conference following the announcement of the ECB Governing Council monetary policy decisions. We quantify the Treasury issuance cost at 136 bps of the auction size. Our results indicate that subsequent monetary policy measures, implemented since 18 March 2020, effectively reduced volatility, and consequently the size of the premium, during the second wave of the pandemic.

Evaluating structural edge importance in temporal networks
Isobel Seabrook, Financial Conduct Authority and University College London 
12:00-13:00, Wednesday 24 February 2021, Zoom

Abstract: To monitor risk in temporal financial networks, we need to understand how individual behaviours affect the global evolution of networks. Here we define a structural importance metric, which we denote as l_e, for the edges of a network. The metric is based on perturbing the adjacency matrix and observing the resultant change in its largest eigenvalues. We then propose a model of network evolution where this metric controls the probabilities of subsequent edge changes. We show using synthetic data how the parameters of the model are related to the capability of predicting whether an edge will change from its value of l_e. We then estimate the model parameters associated with five real financial and social networks, and we study their predictability. These methods have application in financial regulation whereby it is important to understand how individual changes to financial networks will impact their global behaviour. It also provides fundamental insights into spectral predictability in networks, and it demonstrates how spectral perturbations can be a useful tool in understanding the interplay between micro and macro features of networks.
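
A minimal sketch, under the simplifying assumptions of an undirected network and a first-order approximation, of an eigenvalue-based edge importance in the spirit of l_e: the sensitivity of the largest adjacency eigenvalue to a perturbation of edge (i, j). The exact definition and normalisation used in the paper differ.

import numpy as np

def edge_importance(A):
    """A: symmetric adjacency matrix; returns first-order sensitivities of lambda_max."""
    vals, vecs = np.linalg.eigh(A)
    u = vecs[:, -1]                      # eigenvector of the largest eigenvalue
    sens = 2.0 * np.outer(u, u)          # d(lambda_max)/dA_ij for a symmetric perturbation
    np.fill_diagonal(sens, 0.0)          # ignore self-loops
    return np.abs(sens)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(edge_importance(A).round(3))       # edges between high-centrality nodes score highest
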
Video

Synthetic leverage, risk-taking, and monetary policy
Daniel Fricke, Deutsche Bundesbank
12:00-13:00, Wednesday 3 February 2021, Zoom

Abstract: A growing literature documents that easy monetary policy facilitates investor risk-taking. In this paper, I propose a new measure of synthetic leverage and provide evidence that German equity funds have been increasing their risk-taking through synthetic leverage from 2015 onwards. In fact, changes in synthetic leverage are closely aligned with the stance of monetary policy. Returns of synthetically leveraged funds (those that make use of risk-taking strategies) tend to be negative on a risk-adjusted basis and these funds underperform other funds significantly. Lastly, while synthetically leveraged funds do not differ in terms of their flow-performance sensitivity, they display larger flow externalities, in particular during volatile market conditions.

Tâtonnement, approach to equilibrium and excess volatility in firm networks
Jose Moran, Institute for New Economic Thinking, Oxford
12:00-13:00, Wednesday 20 January 2021, Zoom

Abstract: We study the conditions under which input-output networks can dynamically attain competitive equilibrium, where markets clear and profits are zero. We endow a classical firm network model with simple dynamical rules that reduce supply/demand imbalances and excess profits. We show that the time needed to reach equilibrium diverges as the system approaches an instability point beyond which the Hawkins-Simon condition is violated and competitive equilibrium is no longer realisable, reminiscent of May's stability condition. We argue that such slow dynamics is a source of excess volatility, through accumulation and amplification of exogenous shocks. Factoring in essential physical constraints, such as causality or inventory management, we propose a dynamically consistent model that displays a rich variety of phenomena. Competitive equilibrium can only be reached after some time and within some region of parameter space, outside of which one observes periodic and chaotic phases, reminiscent of real business cycles. This suggests an alternative explanation of the excess volatility that is of a purely endogenous nature.

Joint work with Théo Dessertaine, Michael Benzaquen and J. P. Bouchaud, arXiv:2012.05202
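
The generic mechanism the model builds on can be sketched in a few lines: a tâtonnement update that moves each price in the direction of its excess demand until markets clear. The linear excess-demand function and step size below are illustrative assumptions, not the firm-network dynamics of the paper.

import numpy as np

def tatonnement(excess_demand, p0, alpha=0.05, steps=2000):
    """Iterate p <- p + alpha * z(p); returns the final price vector."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = np.maximum(p + alpha * excess_demand(p), 1e-9)   # keep prices positive
    return p

# Two goods with linear excess demand z(p) = d - B p
B = np.array([[1.0, 0.3],
              [0.2, 1.0]])
d = np.array([1.0, 1.5])
print(tatonnement(lambda p: d - B @ p, p0=[0.1, 0.1]))   # approaches B^{-1} d, where markets clear
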
Video
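The Hawkins-Simon condition mentioned in the abstract is equivalent, for a non-negative input-output matrix, to its spectral radius being below one; a minimal check is sketched below with a random, purely illustrative technical-coefficient matrix (not the talk's firm network model).

    # Minimal check of the Hawkins-Simon condition for a Leontief economy x = A x + d:
    # a non-negative equilibrium exists for every demand d >= 0 iff the spectral
    # radius of the non-negative matrix A is below 1. A here is random and illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = rng.uniform(0, 1, (n, n)) / n            # non-negative technical coefficients

    rho = max(abs(np.linalg.eigvals(A)))
    print(f"spectral radius = {rho:.3f} ->",
          "equilibrium realisable" if rho < 1 else "Hawkins-Simon condition violated")

    if rho < 1:
        d = rng.uniform(0, 1, n)                 # final demand
        x = np.linalg.solve(np.eye(n) - A, d)    # equilibrium outputs
        print("minimum output:", x.min())        # strictly positive when the condition holds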

Causal Campbell-Goodhart law and reinforcement learning
Henry Ashton, University College London
12:00-13:00, Wednesday 25 November 2020, Zoom

Abstract: Campbell-Goodhart's law relates to the causal inference error whereby decision-making agents aim to influence variables which are correlated with their goal objective but do not reliably cause it. This is a well-known error in economics and political science but not widely labelled in artificial intelligence research. Through a simple example, we show how off-the-shelf deep reinforcement learning (RL) algorithms are not necessarily immune to this cognitive error. The off-policy learning method is tricked, whilst the on-policy method is not. The practical implication is that naive application of RL to complex real-life problems can result in the same types of policy errors that humans make. Great care should be taken to understand the causal model that underpins a solution derived from reinforcement learning.
Video

Learning (not) to trade: Lindy's law in retail traders
Jiahua Xu, UCL Centre for Blockchain Technology
12:00-13:00, Wednesday 11 November 2020, Zoom

Abstract: We develop a rational model of trading behavior in which the agents gradually learn about their ability to trade, and exit after poor trading performance. We demonstrate that it is optimal for experienced traders to "procrastinate" and postpone exit even after bad results. We embed this "optimal procrastination" in a model of population dynamics with entry and endogenous exit, and generate predictions about the dynamics of various cross-sectional characteristics. We test these population-level predictions using a large client data set of a major Swiss retail broker. Consistent with the model, we find that endogenous exit decisions produce non-trivial and non-monotonic population-wide linkages between performance, exits, and trading experience.
Video

Universal rankings in complex input-output organisations: From socio-economic to ecological systems
Silvia Bartolucci, University College London
12:00-13:00, Wednesday 28 October 2020, Zoom 

Abstract: The input-output balance equation is used to define rankings of constituents in the most diverse complex organizations: the very same tool that helps classify how species of an ecosystem or sectors of an economy interact with each other is useful to determine which sites of the World Wide Web, or which nodes in a social network, are the most influential. The basic principle is that constituents of a complex organization can produce outputs whose "volume" should precisely match the sum of external demand plus the inputs absorbed by other constituents to function. The solution typically requires a case-by-case inversion of large matrices, which provides little to no insight into the structural features responsible for the hierarchical organization of resources. Here we show that, under very general conditions, the solution of the input-output balance equation for open systems can be described by a universal master curve, which is characterized analytically in terms of simple "mass defect" parameters, for instance the fraction of resources wasted by each species of an ecosystem into the external environment. Our result follows from a stochastic formulation of the interaction matrix between constituents: using the replica method from the physics of disordered systems, the average (or typical) value of the rankings of a generic hierarchy can be computed, whose leading order is shown to be largely independent of the precise details of the system under scrutiny. We test our predictions on systems as diverse as the WWW PageRank, trophic levels of generative models of ecosystems, input-output tables of large economies, and centrality measures of Facebook pages.
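A toy version of the balance equation discussed above is solved below: the output x of each constituent must cover external demand d plus what the other constituents absorb as inputs, x = W x + d, and the solution itself provides the ranking. The matrix W and demand d are random, purely illustrative choices, not the systems studied in the talk.

    # Toy input-output balance x = W x + d and the resulting ranking of constituents.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    W = rng.uniform(0, 1, (n, n))
    W *= 0.8 / max(abs(np.linalg.eigvals(W)))     # keep the spectral radius below 1 so a solution exists
    d = rng.uniform(0, 1, n)                      # external demand ("mass defect" towards the environment)

    x = np.linalg.solve(np.eye(n) - W, d)         # balance solution
    ranking = np.argsort(-x)                      # constituents ordered by their "volume"
    print("ranking:", ranking)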

Acceleration of descent-based optimisation algorithms via Caratheodory's theorem
Francesco Cosentino, Alan Turing Institute and University of Oxford
12:00-13:00, Wednesday 14 October 2020, Zoom

Abstract:  Given a discrete probability measure supported on N atoms and a set of n real-valued functions, there exists a probability measure that is supported on a subset of n+1 of the original N atoms and has the same mean when integrated against each of the n functions. We give a simple geometric characterization of barycenters via negative cones and derive a randomized algorithm that computes this new measure by “greedy geometric sampling”. We then propose a new technique to accelerate algorithms based on gradient descent using Caratheodory’s theorem. As a core contribution, we then present an application of the acceleration technique to block coordinate descent methods. Experimental comparisons on least squares regression with LASSO regularisation terms show better performance than the ADAM and SAG algorithms.
Video
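The measure-reduction statement in the abstract can be made concrete with a linear programme: a vertex (basic feasible solution) of the feasibility problem below has at most n+1 non-zero weights. This is only the textbook constructive argument, not the paper's "greedy geometric sampling" algorithm, and the atoms and test functions are arbitrary choices.

    # A discrete measure on N atoms is replaced by one supported on at most n+1 atoms
    # that matches the means of n test functions, via a basic solution of an LP.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    N, n = 1000, 5
    atoms = rng.normal(size=N)
    w = np.full(N, 1.0 / N)                              # original uniform weights

    F = np.vstack([atoms**k for k in range(1, n + 1)])   # n test functions: x, x^2, ..., x^n
    A_eq = np.vstack([F, np.ones(N)])                    # match the n means and total mass 1
    b_eq = np.append(F @ w, 1.0)

    res = linprog(c=np.zeros(N), A_eq=A_eq, b_eq=b_eq, method="highs-ds")  # simplex -> vertex solution
    support = np.flatnonzero(res.x > 1e-12)
    print("atoms in the reduced measure:", support.size, "(at most n+1 =", n + 1, ")")
    print("max moment error:", np.max(np.abs(F @ res.x - F @ w)))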

2019-2020

Meta-graph: Few shot link prediction via meta learning
Joey Bose
12:00-12:40, Wednesday 24 June 2020, Zoom

Abstract: We consider the task of few-shot link prediction on graphs. The goal is to learn from a distribution over graphs so that a model is able to quickly infer missing edges in a new graph after a small amount of training. We show that current link prediction methods are generally ill-equipped to handle this task. They cannot effectively transfer learned knowledge from one graph to another and are unable to learn effectively from sparse samples of edges. To address this challenge, we introduce a new gradient-based meta-learning framework, Meta-Graph. Our framework leverages higher-order gradients along with a learned graph signature function that conditionally generates a graph neural network initialization. Using a novel set of few-shot link prediction benchmarks, we show that Meta-Graph can learn to quickly adapt to a new graph using only a small sample of true edges, enabling not only fast adaptation but also improved results at convergence.
Video

A maximum entropy approach to time series analysis
Riccardo Marcaccioli, University College London
12:00-12:40, Wednesday 17 June 2020, Zoom

Abstract: Natural and social multivariate systems are commonly studied through sets of simultaneous and time-spaced measurements of the observables that drive their dynamics, i.e., through sets of time series. Typically, this is done via hypothesis testing: the statistical properties of the empirical time series are tested against those expected under a suitable null hypothesis. This is a very challenging task in complex interacting systems, where statistical stability is often poor due to lack of stationarity and ergodicity. Here, we describe an unsupervised, data-driven framework to perform hypothesis testing in such situations. This consists of a statistical mechanical approach—analogous to the configuration model for networked systems—for ensembles of time series designed to preserve, on average, some of the statistical properties observed on an empirical set of time series. We showcase its possible applications with a case study on financial portfolio selection.
Video
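A toy version of the ensemble-based hypothesis test described above is sketched below. The talk's framework builds maximum-entropy ensembles of time series preserving chosen statistics on average; the simplest such null model, used here as a stand-in, preserves the marginal distribution exactly via random permutations and tests whether the observed lag-1 autocorrelation is compatible with it. Data and parameters are synthetic.

    # Permutation-ensemble test of lag-1 autocorrelation against an i.i.d. null.
    import numpy as np

    rng = np.random.default_rng(3)
    T = 500
    x = np.empty(T)
    x[0] = 0.0
    for t in range(1, T):                        # AR(1) data, so the i.i.d. null should be rejected
        x[t] = 0.4 * x[t - 1] + rng.normal()

    def lag1(y):
        y = y - y.mean()
        return (y[:-1] @ y[1:]) / (y @ y)

    obs = lag1(x)
    null = np.array([lag1(rng.permutation(x)) for _ in range(2000)])
    p_value = np.mean(null >= obs)
    print(f"observed lag-1 autocorrelation = {obs:.3f}, p-value under the permutation null = {p_value:.4f}")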

Societal biases reinforcement through machine learning: A credit scoring perspective
Bertrand Hassani
12:00-12:40, Wednesday 10 June 2020, Zoom

Abstract: Do machine learning and AI ensure that social biases thrive? This paper aims to analyse this issue. Indeed, as algorithms are informed by data, if the data are corrupted from a social-bias perspective, a good machine learning algorithm will learn the patterns present in the data provided and reverberate them in its predictions, whether for classification or regression. In other words, the way society behaves, whether positively or negatively, will necessarily be reflected by the models. In this paper, we analyse how social biases are transmitted from the data into banks' loan approvals by predicting either the gender or the ethnicity of the customers using the exact same information provided by customers through their applications.
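The kind of probe described above can be sketched as follows: if a protected attribute can be predicted from the very features used to score applicants, loan decisions can reproduce the bias encoded in the data. The data, feature names and correlation structure below are entirely synthetic and hypothetical.

    # Bias probe: how well can a protected attribute be predicted from application features?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n = 5000
    protected = rng.integers(0, 2, n)                  # synthetic gender/ethnicity flag
    income = rng.normal(30 + 10 * protected, 8, n)     # feature correlated with the protected attribute
    postcode_risk = rng.normal(0.5 * protected, 1, n)  # another correlated application feature
    X = np.column_stack([income, postcode_risk])

    X_tr, X_te, y_tr, y_te = train_test_split(X, protected, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"AUC for predicting the protected attribute from application data: {auc:.2f}")
    # An AUC well above 0.5 means the protected attribute leaks into the features,
    # so a credit-scoring model trained on them can reproduce the societal bias.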

Pricing of futures with a CARMA(p,q) model driven by a time-changed Brownian motion
Andrea Perchiazzo
12:00-12:40, Wednesday 20 May 2020, Zoom

Abstract: In this paper we start from the empirical findings on the behaviour of futures prices in commodity markets and propose a continuous-time model that allows the price function to be represented in a semi-analytical form. In particular, we study the term structure of futures prices under the assumption that the underlying asset price follows an exponential CARMA(p,q) model where the driving noise is a time-changed Brownian motion. The obtained formula is strictly connected to the cumulant generating function of the subordinator process. The main advantages of the proposed model are the possibility of working directly with market data without requiring a regular grid, and its ability to capture complex time-dependent structures through different shapes of the autocovariance function.
Video
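The driving noise class in the abstract can be simulated directly: a Brownian motion with drift evaluated at a random clock, here a gamma subordinator, which yields a variance-gamma process. This sketch covers only the noise, not the CARMA(p,q) dynamics or the pricing formula; all parameter values are illustrative.

    # Simulation of a time-changed Brownian motion (gamma subordinator -> variance gamma).
    import numpy as np

    rng = np.random.default_rng(5)
    T, n = 1.0, 1000
    dt = T / n
    theta, sigma, nu = 0.1, 0.25, 0.3                     # drift, volatility, subordinator variance rate

    dG = rng.gamma(shape=dt / nu, scale=nu, size=n)       # increments of the gamma time change
    dX = theta * dG + sigma * np.sqrt(dG) * rng.normal(size=n)
    X = np.concatenate([[0.0], np.cumsum(dX)])            # path of the time-changed Brownian motion

    print("terminal value of the path:", X[-1])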

Supply and demand shocks in the COVID-19 pandemic: An industry and occupation perspective  
Rita Maria del Rio Chanona
12:00-12:40, Wednesday 22 April 2020, Zoom

Abstract: We provide quantitative predictions of first-order supply and demand shocks for the US economy associated with the COVID-19 pandemic at the level of individual occupations and industries. To analyse the supply shock, we classify industries as essential or non-essential and construct a Remote Labour Index, which measures the ability of different occupations to work from home. Demand shocks are based on a study of the likely effect of a severe influenza epidemic developed by the US Congressional Budget Office. Compared to the pre-COVID period, these shocks would threaten around 20 per cent of the US economy’s GDP, jeopardize 23 per cent of jobs, and reduce total wage income by 16 per cent. At the industry level, sectors such as transport are likely to be output-constrained by demand shocks, while sectors relating to manufacturing, mining, and services are more likely to be constrained by supply shocks. Entertainment, restaurants, and tourism face large supply and demand shocks. At the occupation level, we show that high-wage occupations are relatively immune from adverse supply- and demand-side shocks, while low-wage occupations are much more vulnerable. We should emphasize that our results capture only first-order shocks; we expect them to be substantially amplified by feedback effects in the production network.
Video

Regime detection in financial time series and further results in portfolio construction  
Pier Francesco Procacci, University College London
12:00-12:40, Wednesday 8 April 2020, Zoom

Abstract: The seminar discusses a novel approach to define, analyse and forecast market states. Two experiments are presented, together with observations on portfolio construction and patterns in the likelihood which arise from the analysis. Defining market states is an essential tool in dealing with the non-stationarity of time series, but the most widely used models are often computationally expensive and infeasible as dimensionality increases. In our approach, market states are identified by a reference sparse precision matrix and a vector of expectation values. Each multivariate observation is associated with a given market state according to the minimization of a penalized distance measure. The procedure is made computationally very efficient and can be used with a large number of assets. We demonstrate that this procedure is successful at clustering different states of the markets in an unsupervised manner.
Video
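A rough sketch of the state-assignment idea above: each state is summarised by a mean vector and a sparse precision matrix, and every multivariate observation is assigned to the state with the best penalised Gaussian score. The synthetic data, the number of states and the sparsity level below are illustrative assumptions, not the paper's calibration or distance measure.

    # Assign an observation to one of two "market states" via sparse precision matrices.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(6)
    calm = rng.normal(0.0, 1.0, (300, 5))
    stressed = rng.normal(-0.5, 2.0, (300, 5))    # synthetic returns for two regimes

    states = []
    for sample in (calm, stressed):
        gl = GraphicalLasso(alpha=0.1).fit(sample)
        states.append((gl.location_, gl.precision_))

    def score(x, mu, P):
        d = x - mu
        return -0.5 * d @ P @ d + 0.5 * np.linalg.slogdet(P)[1]   # Gaussian log-likelihood up to a constant

    x_new = rng.normal(-0.5, 2.0, 5)              # a new observation drawn from the stressed regime
    assigned = int(np.argmax([score(x_new, mu, P) for mu, P in states]))
    print("assigned to state", assigned)          # expected: 1 (the stressed state)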

Information flow simulations in the investigation of economic complex systems 
Riccardo Righi 
12:00-13:00, Wednesday 26 February 2020, Room 4.05

Abstract: The seminar discusses the use of simulations of information flows for the investigation of complex systems related to the digital economy. Two case studies are presented. The first concerns the investigation of an equity crowdfunding platform based in the UK. To categorize the nodes of this network, a cluster analysis on investors identifies three profiles: Small Investors, Serial Investors and Highly Involved Investors. Similarly, three clusters emerge from companies' characteristics. The structural properties of the platform are then investigated through the detection of groups of agents that are likely to generate internal social capital due to dense connections. Our results show that Small Investors, through investments in different types of companies, significantly interconnect distinct communities, thus contributing to generate a cohesive network structure. Hence they structurally support information exchange throughout the platform. The second case study concerns the investigation of the techno-economic complex system defined by worldwide economic institutions (e.g. firms, governmental institutions and research institutes) actively contributing to the field of Artificial Intelligence in the period 2009-2018. We discuss the theoretical basis supporting the construction of a multi-layer network representing the Artificial Intelligence agent-artifact space.
Video

Efficiency of payment networks with a central counterparty 
Haotian Gao  
12:00-13:00, Wednesday 12 February 2020, Room 4.05

Abstract: Central counterparty clearing houses (CCPs) are financial institutions established to facilitate the clearing and settlement of trades across various markets. After the financial crisis, CCPs have become a key piece of the new regulatory framework, as several classes of derivative contracts must now be cleared through them. The problem of how CCPs affect counterparty risk was previously studied by Duffie & Zhu (2011). A key benefit of having trades cleared through CCPs is that the payment obligations arising from those trades can be smaller, because they are netted. We show that, if the system is efficient in the absence of a CCP, introducing a CCP always reduces its efficiency. Conversely, when the system with no CCP is inefficient, we show that in the presence of a CCP its efficiency is a non-monotonic function of the percentage of transactions cleared through the CCP, which we denote in the following by α. In particular, we show that there exists an optimal value of α at which efficiency achieves a maximum.
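The netting benefit mentioned above can be illustrated with a toy calculation: with bilateral netting each pair settles its bilateral net, whereas a CCP nets multilaterally so each bank settles only one overall net position. The exposures below are random, and the non-monotonic efficiency result of the talk comes from a richer model, not from this simple comparison.

    # Total payment obligations under bilateral vs multilateral (CCP) netting.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 10
    X = rng.exponential(1.0, (n, n))                     # gross obligations: X[i, j] is what i owes j
    np.fill_diagonal(X, 0.0)

    net = X - X.T                                        # pairwise net positions
    bilateral_total = np.abs(np.triu(net)).sum()         # one payment per pair after bilateral netting
    ccp_total = 0.5 * np.abs(net.sum(axis=1)).sum()      # payments into (= out of) the CCP after multilateral netting

    print(f"bilateral netting: {bilateral_total:.2f}, CCP multilateral netting: {ccp_total:.2f}")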

A rough perspective on modern market generators
Blanka Horvath, King's College London
12:00-13:00, Wednesday 29 January 2020, Room 4.05

Abstract: In this talk we investigate how Deep Pricing and Hedging brings a new impetus into the modelling of financial markets. We take a short walk through historical market models and proceed to modern generative models for financial time series. We then investigate some of the challenges of achieving good results with the latter, and highlight some applications and pitfalls. We also discuss different approaches to pricing and hedging considerations in a DNN framework and the connection to Market Generators.
Video

Networks and the arrow of time 
Tim Evans 
12:00-13:00, Wednesday 4 December 2019, Room 4.05

Abstract: There are many data sets where objects come with a natural order: academic papers have publication dates, predator-prey relationships order species in ecosystems, computer packages have their dependencies, and space-time events are ordered by causality. If these objects are nodes in a network then we have a Directed Acyclic Graph. We must take account of the constraint placed on such networks by this order, the "arrow of time", so many standard network techniques are inappropriate for such networks. In my talk I will highlight some of the well-known mathematical features of Directed Acyclic Graphs and will show how I have been using them to look at these networks in new ways. I will use examples from both simple models, such as the Price model (the directed Barabasi-Albert model), and real data sets such as document citation data and food webs.
Video

From brain to markets: A fractal journey 
Federico Turkheimer 
12:00-13:00, Wednesday 20 November 2019, Room 4.05

Abstract: Introduced almost 60 years ago, fractals started off as mathematical curiosities with great appeal, given that fractal-like structures turned out to be ubiquitous in nature. Nowadays they have turned into unique tools that aid the construction of quantitative models of complex systems. This talk is meant as a hopefully pleasurable walk from the original intuition of Mandelbrot and others, through a quantitative multiscale model of brain function and behaviour, to quantitative models for trading equities. A qualitative view of fractals and self-similar expansions will be used to illustrate how we can advance our understanding of brain and mind and make solid predictions on how changes in the neurobiology of elementary units can affect human health and activity. Such insight extends to other human activities such as art (illustrated through a short review of psychosis and the art of Vincent Van Gogh) and, importantly for this context, to financial markets, where old mathematical tools can be turned into sensitive predictors of market conditions.
Video

Sector-neutral portfolios: Long memory motifs persistence in market structure dynamics 
Jeremy Turiel, University College London
12:00-13:00, Wednesday 30 October 2019, Room 4.05

Abstract: We study soft persistence (existence in subsequent temporal layers of motifs from the initial layer) of motif structures in Triangulated Maximally Filtered Graphs (TMFG) generated from time-varying Kendall correlation matrices computed from stock-price log-returns over rolling windows with exponential smoothing. We observe long-memory processes in these structures in the form of power-law decays in the number of persistent motifs. The decays then transition to a plateau regime with a power-law decay with a smaller exponent. We demonstrate that identifying persistent motifs allows for forecasting and applications to portfolio diversification. Balanced portfolios are often constructed from the analysis of historic correlations; however, not all past correlations are persistently reflected in the future. Sector neutrality has also been a central theme in portfolio diversification and systemic risk. We present an unsupervised technique to identify persistently correlated sets of stocks. These are empirically found to identify sectors driven by strong fundamentals. Applications of these findings are tested in two distinct ways on four different markets, resulting in significant reductions in portfolio volatility. A persistence-based measure for portfolio allocation is proposed and shown to outperform volatility weighting when tested out of sample.

Identifying the hidden multiplex architecture of complex systems 
Lucas Lacasa 
12:00-13:00, Wednesday 16 October 2019, Room 4.05

Abstract: Many complex systems are characterized by interactions taking place at different levels. However, only in a few cases can such multi-layered architecture be empirically observed, as one usually only has experimental access to such structure from an aggregated projection. A fundamental question is thus to determine whether the hidden underlying architecture of complex systems is better modelled as a single interaction layer or results from the aggregation and interplay of multiple layers. Here we show that, by only using local information provided by a random walker navigating the aggregated network, it is possible to decide in a robust way if the underlying structure is a multiplex and, in the latter case, to determine the most probable number of layers. The proposed methodology detects and estimates the optimal architecture capable of reproducing observable non-Markovian dynamics taking place on networks, with applications ranging from human or animal mobility to electronic transport or molecular motors. Furthermore, the mathematical theory extends above and beyond detection of physical layers in networked complex systems, as it provides a general solution for the optimal decomposition of complex dynamics into a Markov-switching combination of simple (diffusive) dynamics.
Video

Rough landscapes: From machine learning to glasses and back 
Chiara Cammarota 
12:00-13:00, Wednesday 25 September 2019, Room 4.05

Abstract: The evolution of many complex systems in physics, biology or computer science can often be thought of as an attempt to optimize a cost function. Such a function generally depends in a highly non-linear way on the huge number of variables parametrizing the system, so that its profile defines a high-dimensional landscape, which can be either smooth and convex, or rugged. In this talk I will focus on rough cost/loss functions within the realm of inference and machine learning. I will first discuss the cost landscape of a widely used inference model called the spiked tensor model, here also generalised, and its implications for the performance of inference algorithms. Secondly, I will report on evidence of glass-like dynamics, including aging, during the training of deep neural networks, and use it to discuss the importance of over-parametrisation, widely used in the field.
Video

2018-2019

Network methods for policy evaluation 
Omar Guerrero 
12:00-13:00, Wednesday 19 June 2019, Room 4.05

Abstract: Over the last 50 years, an increasing number of countries have used guidelines provided by international organisations in order to shape their development strategies. Today, the best example of these guidelines is the Sustainable Development Goals (SDGs). The SDGs consist of 17 general goals that are monitored through 232 development indicators. Before the SDGs, development indicators were designed to measure different policy issues in isolation from each other. Today, this has changed with the official acknowledgment that "development challenges are complex and interlinked" (SDG official website). Accounting for interdependencies between development goals has become a central discussion among researchers and practitioners in development studies, for example, to evaluate SDGs; to align environmental policies; to coordinate anti-poverty policies; and to better understand the synergies and tradeoffs between development goals.
Website

Analysis of overfitting in the regularized Cox model 
Mansoor Sheikh 
12:00-13:00, Wednesday 1 May 2019, Room 4.05

Abstract: The Cox proportional hazards model is ubiquitous in the analysis of time-to-event data. However, when the data dimension p is comparable to the sample size N, maximum likelihood estimates for its regression parameters are known to be biased or break down entirely due to overfitting. This prompted the introduction of the so-called regularized Cox model. In this paper we use the replica method from statistical physics to investigate the relationship between the true and inferred regression parameters in regularized multivariate Cox regression with L2 regularization, in the regime where both p and N are large but with p/N ~ O(1). We thereby generalize a recent study from maximum likelihood to maximum a posteriori inference. We also establish a relationship between the optimal regularization parameter and p/N, allowing for straightforward overfitting corrections in time-to-event analysis.
Video
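For readers who want to experiment with the model class studied in the paper, a ridge-penalised Cox regression can be fitted as sketched below. The lifelines library and its example dataset are used here purely as convenient stand-ins; the paper's contribution is the replica analysis of overfitting, not the fitting routine itself.

    # Fitting an L2-regularised Cox proportional hazards model with lifelines.
    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    df = load_rossi()                               # classic recidivism time-to-event data
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.0)  # pure L2 (ridge) penalty on the coefficients
    cph.fit(df, duration_col="week", event_col="arrest")
    cph.print_summary()
    # In the overfitting regime studied in the paper (p comparable to N), maximum likelihood
    # coefficients are systematically biased; the replica result relates the optimal
    # regularization strength to the ratio p/N.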

Contagion accounting 
Anne-Caroline Hüser
12:00-13:00, Wednesday 3 April 2019, Room 4.05

Abstract: We provide a simple and tractable accounting-based stress-testing framework to assess loss dynamics in the banking system. Contagion can occur through direct and indirect interbank exposures, indirect exposures due to overlapping portfolios, and price dynamics via fire sales in a context of leverage targeting. We apply the framework to three granular proprietary ECB datasets, including an interbank network of 26 large euro area banks as well as their overlapping portfolios of loans and securities.

Conditional generative adversarial networks for trading strategies
Adriano Koshiyama, University College London
12:00-13:00, Wednesday 13 March 2019, Room 4.05

Abstract: Systematic trading strategies are algorithmic procedures that allocate assets aiming to optimize a certain performance criterion. To obtain an edge in a highly competitive environment, the analyst needs to properly fine-tune their strategy, or discover how to combine weak signals in novel alpha-creating manners. Both aspects, namely fine tuning and combination, have been extensively researched using several methods, but emerging techniques such as generative adversarial networks can have an impact on them. Therefore, our work proposes the use of conditional generative adversarial networks (CGANs) for trading strategy calibration and aggregation. To this purpose, we provide a full methodology on: (i) the training and selection of a CGAN for time series data; (ii) how each sample is used for strategy calibration; and (iii) how all generated samples can be used for ensemble modelling. To provide evidence that our approach is well grounded, we have designed an experiment with multiple trading strategies, encompassing 579 assets. We compared a CGAN with an ensemble scheme and model validation methods, both suited for time series. Our results suggest that a CGAN is a suitable alternative for strategy calibration and combination, providing outperformance when the traditional techniques fail to generate any alpha.
Video

Max-hedge/max-grace
Stephen Pasteris, University College London
12:00-13:00, Wednesday 20 February 2019, Room 4.05

Abstract: We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set. Each action is associated with an energy value, a reward and a cost. The sum of the energies of the selected actions cannot exceed a given energy budget. The goal is to maximise the cumulative profit, where the profit obtained on a single trial is defined as the difference between the maximum reward among the selected actions and the sum of their costs. Action energy values and the budget are known and fixed. All rewards and costs associated with each action change over time and are revealed at each trial only after the learner's selection of actions. Our framework encompasses several online learning problems where the environment changes over time; the solution trades off minimising the costs against maximising the maximum reward of the selected subset of actions, while being constrained to an action energy budget. The algorithm that we propose is an efficient and very scalable unifying approach which is capable of solving our general problem. Hence, our method solves several online learning problems which fall into this general framework.
Video

Statistical challenges of loopy networks
Fabián Aguirre López
12:00-13:00, Wednesday 30 January 2019, Room 4.05

Abstract: We present an analytical approach for describing spectrally constrained maximum entropy ensembles of finitely connected regular loopy graphs, valid in the regime of weak loop-loop interactions. We derive an expression for the leading two orders of the expected eigenvalue spectrum, through the use of infinitely many replica indices taking imaginary values. We apply the method to models in which the spectral constraint reduces to a soft constraint on the number of triangles, which exhibit ‘shattering’ transitions to phases with extensively many disconnected cliques, to models with controlled numbers of triangles and squares, and to models where the spectral constraint reduces to a count of the number of adjacency matrix eigenvalues in a given interval. Our predictions are supported by MCMC simulations based on edge swaps with nontrivial acceptance probabilities.
Video
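The MCMC mentioned at the end of the abstract can be sketched as degree-preserving double edge swaps accepted with a Metropolis rule that softly biases the triangle count, i.e. sampling P(A) proportional to exp(mu x #triangles) over simple graphs with fixed degrees. The graph size, degree and value of mu below are illustrative, and this brute-force triangle recount is far less efficient than what a real study would use.

    # Degree-preserving edge-swap MCMC with a soft constraint on the number of triangles.
    import math
    import random
    import networkx as nx

    def triangles(G):
        return sum(nx.triangles(G).values()) // 3

    def mcmc_step(G, mu):
        (a, b), (c, d) = random.sample(list(G.edges()), 2)
        if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
            return                                         # swap would create self-loops or multi-edges
        t_old = triangles(G)
        G.remove_edges_from([(a, b), (c, d)])
        G.add_edges_from([(a, d), (c, b)])
        if random.random() >= min(1.0, math.exp(mu * (triangles(G) - t_old))):
            G.remove_edges_from([(a, d), (c, b)])          # reject: undo the swap
            G.add_edges_from([(a, b), (c, d)])

    random.seed(0)
    G = nx.random_regular_graph(4, 60, seed=0)
    for _ in range(5000):
        mcmc_step(G, mu=0.5)                               # mu > 0 favours triangle-rich graphs
    print("triangles after sampling:", triangles(G))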

Market impact and optimal execution with transient models 
Fabrizio Lillo
11:00-12:00, Thursday 17 January 2019, Room 4.05

Abstract: Modeling market impact is critically important for developing trading strategies and for transaction cost analysis. In this talk, I first review some old and new empirical regularities on market impact, in particular focusing on the effect on price of multiple contemporaneous large executions. I then present the problem of optimal execution and dynamical arbitrage in the context of transient impact models. Specifically, I discuss the role of the different benchmarks (Implementation Shortfall, Volume Weighted Average Price and Target Close) on the optimal execution and the extension of the model to the multi-asset setting.
Video

Superstars in two-sided markets: exclusives or not? 
Elias Carroni
12:00-13:00, Wednesday 16 January 2019, Room 4.05

Abstract: Competition in many markets is shaped by the presence of Superstars, i.e., very strong players who can decide to offer their product through exclusive contracts. In this paper, we present a tractable model of two-sided platform competition. Platforms act as intermediaries between consumers and content providers. Relative to other content providers, a Superstar is more important for consumers and has market power. When platform competition is intense, consumers are very responsive to the presence of the Superstar. This makes exclusivity more lucrative. By contrast, when competition is less intense, consumers tend to stick with their preferred platform, so the Superstar offers a non-exclusive contract reaching the largest possible audience. This mechanism is self-reinforcing, as content providers endogenously follow consumer decisions, and it is robust to more general set-ups and extensions. Contrary to common wisdom, in most cases the contract choice of the Superstar is aligned with the first-best outcome in the industry.
Video

Fourier-transform based pricing of barrier options with stochastic volatility
Jiaqi Liang, University College London
14:00-15:00, Wednesday 14 November 2018, Room 4.05

Abstract: We present a pricing method for discretely monitored barrier options with stochastic volatility models, extending previous work with fluctuation identities in Fourier-z space for Lévy processes. The option price can be found by calculating a set of nested integrals which express an iterative relation between the discounted prices at successive monitoring dates. Computing the variance integral by quadrature and applying the z-transform to the discounted option prices monitored at discrete times, we obtain a Wiener-Hopf integral equation. Due to its convolution structure, this equation can be solved in Fourier space using the Wiener-Hopf technique. The joint conditional characteristic function of the Heston model with respect to the log-price and variance is known. The price of a barrier option can then be found using an algorithm similar to the one described for Lévy processes. It is not yet clear whether this method can be extended to continuous monitoring, as it has been for Lévy processes, but we expect that, after testing it on the Heston model as an example, it can be applied to any Lévy-driven local-stochastic volatility model.
Video
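The iterative relation between discounted prices at successive monitoring dates can be illustrated with a naive quadrature recursion for a discretely monitored down-and-out call under plain Black-Scholes dynamics. The talk replaces this brute-force convolution with z-transform and Wiener-Hopf machinery and works with the Heston model; the grid and parameters below are illustrative.

    # Backward recursion between monitoring dates for a discretely monitored down-and-out call.
    import numpy as np

    S0, K, B, r, sigma, T, n_mon = 100.0, 100.0, 80.0, 0.02, 0.25, 1.0, 12
    dt = T / n_mon
    mu_dt = (r - 0.5 * sigma**2) * dt
    x = np.linspace(np.log(B) - 1.0, np.log(S0) + 1.5, 2001)     # log-price grid
    dx = x[1] - x[0]

    def density(y):                                              # Gaussian density of one-period log-returns
        return np.exp(-(y - mu_dt) ** 2 / (2 * sigma**2 * dt)) / np.sqrt(2 * np.pi * sigma**2 * dt)

    P = density(x[None, :] - x[:, None]) * dx                    # quadrature transition weights between grid points
    V = np.maximum(np.exp(x) - K, 0.0) * (x > np.log(B))         # payoff with barrier check at maturity
    for _ in range(n_mon - 1):
        V = np.exp(-r * dt) * (P @ V) * (x > np.log(B))          # step back one monitoring date, knock out below B
    V = np.exp(-r * dt) * (P @ V)                                # final step back to inception (no barrier check)
    price = np.interp(np.log(S0), x, V)
    print(f"down-and-out call price ~ {price:.3f}")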

Resilience of trading networks
Laura Silvestri
14:00-15:00, Wednesday 31 October 2018, Room 4.05

Abstract: We study the network structure and resilience of the sterling investment-grade and high-yield corporate bond markets. We use proprietary, transaction-level data to show that the trading networks of sterling investment-grade and high-yield corporate bonds exhibit a core-periphery structure where a large number of non-dealers trade with a small number of dealers. The market is highly concentrated, with the top three dealers accounting for around 20%, and the top three non-dealers for around 10-20%, of trading volume on average. Consistent with dealer behaviour in the primary market, we find that trading activity is particularly concentrated for newly issued bonds: the top three dealers account for 45% of trading volume in the secondary market for newly issued bonds. Whilst the network structure has been broadly stable and the market broadly resilient around bond downgrades over our 2012-2017 sample period, the reliance on a small number of participants makes the trading network somewhat fragile to the withdrawal of a few key dealers from the market.
Video

Integral transform methods and spectral filters for the pricing of exotic options 
Guido Germano, University College London
Friday 14 December 2018, Room 4.05

Abstract: We present numerical methods to calculate fluctuation identities for exponential Lévy processes with discrete and continuous monitoring. This includes the Spitzer identities which give the distribution of the maximum or the minimum of a random path, the joint distribution at maturity with the extrema staying below or above a barrier, and the more difficult case of the two-barriers exit problem. These identities are given in the Fourier-z or Fourier-Laplace domain and require numerical inverse z and Laplace transforms as well as, for the required Wiener-Hopf factorisations, numerical Hilbert transforms based on a sinc function expansion and thus ultimately on the fast Fourier transform. In most cases we achieve exponential convergence with respect to the number of grid points, in some cases improving the rate of convergence with spectral filters to mitigate the Gibbs phenomenon for discrete Fourier transforms. As motivating applications we price barrier, lookback, quantile and Bermudan options. 
Paper

Social closure and the evolution of cooperation via indirect reciprocity
Simone Righi, University College London
Friday 16 November 2018, Room 4.05

Abstract: Direct and indirect reciprocity are good candidates to explain the fundamental problem of the evolution of cooperation. We explore the conditions under which different types of reciprocity gain dominance, and their performance in sustaining cooperation, in the Prisoner's Dilemma played on simple networks. We confirm that direct reciprocity gains dominance over indirect reciprocity strategies also in larger populations, as long as it has no memory constraints. In the absence of direct reciprocity, or when its memory is flawed, different forms of indirect reciprocity strategies are able to dominate and to support cooperation. We show that indirect reciprocity relying on the social capital inherent in closed triads is the best competitor among them, outperforming indirect reciprocity that uses information from any source. Results hold in a wide range of conditions with different evolutionary update rules, extent of evolutionary pressure, initial conditions, population size, and density.
Paper

Reciprocity and success in academic careers
Giacomo Livan, University College London
Thursday 11 October 2018, Room 4.05

Abstract: The growing importance of citation-based bibliometric indicators in shaping the prospects of academic careers incentivizes scientists to boost the number of citations they receive. Whereas the exploitation of self-citations has been extensively documented, the impact of reciprocated citations has not yet been studied. In this talk I will discuss reciprocity in a citation network of academic authors, and compare it with the average reciprocity computed in a variety of null network model ensembles. I will show that obtaining citations through reciprocity correlates negatively with a successful career in the long term. Nevertheless, at the aggregate level there is evidence of a steady increase in reciprocity over the years, largely fuelled by the exchange of citations between coauthors. These results characterize the structure of author networks in a time of increasing emphasis on citation-based indicators, and I will discuss their implications for a fairer assessment of academic impact.
Paper
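A minimal version of the comparison described in the abstract is sketched below: measure the reciprocity of a directed network and compare it with that of a degree-preserving null model. The graph here is synthetic with some reciprocated links injected by hand; the talk uses an author-level citation network and several null ensembles.

    # Reciprocity of a directed network versus a degree-preserving configuration-model null.
    import networkx as nx

    G = nx.gnp_random_graph(500, 0.02, seed=0, directed=True)
    G.add_edges_from([(v, u) for u, v in list(G.edges())[:2000]])   # inject some reciprocated "citations"
    empirical = nx.reciprocity(G)

    null = nx.directed_configuration_model(
        [d for _, d in G.in_degree()], [d for _, d in G.out_degree()], seed=0)
    null = nx.DiGraph(null)                                         # collapse multi-edges for a fair comparison
    null.remove_edges_from(nx.selfloop_edges(null))
    print(f"empirical reciprocity: {empirical:.3f}, null-model reciprocity: {nx.reciprocity(null):.3f}")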

Filtering information with networks: Understanding market structure and predicting market changes
Tomaso Aste, University College London
Thursday 13 September 2018, Room 4.05

Abstract: We are witnessing interesting times, rich in information readily available for us all. Using, understanding and filtering such information has become a major activity across science, industry and society at large. Networks are excellent tools to represent and model complex systems such as the human brain or the financial market. Sparse networks constructed from observational data of complex systems can be used to filter information by extracting the core interaction structure in a simplified but representative way. I will show how information filtering networks built from similarity measures, both linear and non-linear, can be used to process information while it is generated, reducing complexity and dimensionality while keeping the integrity of the dataset. I will describe how predictive probabilistic models can be associated with such networks. I will show how reliable, predictive and useful these models are in describing financial market structure and predicting regime changes.
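A minimal information filtering network of the kind discussed above can be built as a minimum spanning tree over a correlation-based distance between assets; this is the simplest member of the family (the talk also covers richer filters such as planar and TMFG graphs), and the returns below are synthetic.

    # Correlation-based minimum spanning tree as a simple information filtering network.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(8)
    n_assets, n_obs = 20, 500
    common = rng.normal(size=(n_obs, 1))
    returns = 0.5 * common + rng.normal(size=(n_obs, n_assets))   # one-factor synthetic returns

    corr = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(2.0 * (1.0 - corr))            # standard correlation-to-distance mapping

    G = nx.Graph()
    for i in range(n_assets):
        for j in range(i + 1, n_assets):
            G.add_edge(i, j, weight=dist[i, j])
    mst = nx.minimum_spanning_tree(G)             # the filtered network keeps n_assets - 1 links
    print("edges kept by the filter:", mst.number_of_edges())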