
UCL Computer Science


Seminars

Group seminars are usually held on Wednesdays, 12:00-13:00, in Room 4.05, 66-72 Gower Street, London WC1E 6EA. Currently they are held on Zoom.

Please contact the organiser Paolo Barucca to receive email announcements of forthcoming seminars and the Zoom invitations.

Forthcoming

 

Past

2021-2022

Momentum gender gap
Sofiya Malamud
13:00-14:00, Wednesday 6 October 2021, Zoom

Abstract: We study the response of traders to momentum in the market and find that gender appears to play a role in their reaction. Specifically, we find that female investors realize capital gains at a higher rate than capital losses, a behavioral regularity known as the disposition effect, to which male investors are less prone. We use a large client data set of a major Swiss retail broker to conduct this analysis.

2020-2021

Fundamental Valuation of Companies Using New Data and Quant Methods
Michael Recce, Neuberger Berman
12:00-13:00, Wednesday 16 June 2021, Zoom

Abstract: Investing has historically been an early adopter of new algorithms and technology: the utility of a new idea is straightforwardly measured as money out over money in. Machine learning and large-scale computing on unstructured, novel data provide a new set of methods that were first developed by internet companies. These methods primarily provide an informational advantage in an inefficient market where securities are selected based on their underlying intrinsic value. In this talk I describe the underlying methods, provide examples of their successful use, and present a roadmap for the impact these methods will have on markets and investing.

Video

Dynamics of cascades on burstiness-controlled temporal networks
Samuel Unicomb, University of Limerick
12:00-13:00, Wednesday 12 May 2021, Zoom

Abstract: Burstiness, the tendency of interaction events to be heterogeneously distributed in time, is critical to information diffusion in physical and social systems. However, an analytical framework capturing the effect of burstiness on generic dynamics is lacking. Here we develop a master equation formalism to study cascades on temporal networks with burstiness modelled by renewal processes. Supported by numerical and data-driven simulations, we describe the interplay between heterogeneous temporal interactions and models of threshold-driven and epidemic spreading. We find that increasing interevent time variance can both accelerate and decelerate spreading for threshold models, but can only decelerate epidemic spreading. When accounting for the skewness of different interevent time distributions, spreading times collapse onto a universal curve. Our framework uncovers a deep yet subtle connection between generic diffusion mechanisms and underlying temporal network structures that impacts a broad class of networked phenomena, from spin interactions to epidemic contagion and language dynamics.
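
To give a flavour of the quantity being varied, here is a minimal sketch (not the talk's master-equation formalism) of the burstiness coefficient B = (σ − μ)/(σ + μ) of interevent times, which is near 0 for a Poisson-like renewal process and approaches 1 for heavy-tailed ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def burstiness(iet):
    """B = (sigma - mu) / (sigma + mu): ~0 for Poisson-like interevent
    times, -> 1 for highly bursty (heavy-tailed) trains, -1 if perfectly
    regular."""
    mu, sigma = iet.mean(), iet.std()
    return (sigma - mu) / (sigma + mu)

# Two renewal processes with the same mean interevent time (1.0) but very
# different variance.
exponential = rng.exponential(scale=1.0, size=100_000)
lognormal = rng.lognormal(mean=-2.0, sigma=2.0, size=100_000)  # mean e^{-2+2} = 1

print(f"exponential interevent times: B = {burstiness(exponential):+.3f}")
print(f"lognormal interevent times:   B = {burstiness(lognormal):+.3f}")
```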

Video

Regulating unintended consequences: Algorithmic trading and the limits of securities regulation
Carsten Gerner-Beuerle, University College London
12:00-13:00, Wednesday 14 April 2021, Zoom

Abstract: Since the infamous flash crash of 2010, instances of unexplained high volatility in financial markets, often driven by algorithmic and high-frequency trading, have received increased attention from policy makers and commentators. A number of regulatory initiatives in the EU and US deal specifically with the perceived risks that algorithmic and high-frequency trading pose to market quality. However, their efficacy is disputed, with some claiming that they are unlikely to prevent the future misuse of HFT practices, while others caution that the additional regulatory burden may have unintended and counterproductive consequences for market efficiency. This paper examines whether existing regulatory techniques, notably disclosure, internal testing and monitoring systems, and the regulation of structural features of the trade process, such as order execution times and circuit breakers, are adequate to address the risk of extreme market turbulence. It draws on market microstructure theory in arguing that regulation in the EU and the US continues to be wedded to an old regulatory paradigm centred around the role of information, without taking sufficient account of the mechanics of automated trading in modern financial markets.

Video

The COVID-19 auction premium
Gerardo Ferrara, Bank of England
12:00-13:00, Wednesday 10 March 2021, Zoom

Abstract: We uncover an additional channel by which a pandemic is costly for taxpayers, namely the surge of the bond auction premium. By applying a novel econometric strategy to high-frequency data from the secondary Italian bond market, we show that the premium spiked anomalously during the “perfect storm” of 12 March 2020, a day which featured a large Treasury auction, the peak of COVID-19 infections in Italy and a controversial press conference following the announcement of the ECB Governing Council monetary policy decisions. We quantify the Treasury issuance cost at 136 bps of the auction size. Our results indicate that subsequent monetary policy measures, implemented since 18 March 2020, effectively reduced volatility, and consequently the size of the premium, during the second wave of the pandemic.

Evaluating structural edge importance in temporal networks
Isobel Seabrook, Financial Conduct Authority and University College London 
12:00-13:00, Wednesday 24 February 2021, Zoom

Abstract: To monitor risk in temporal financial networks, we need to understand how individual behaviours affect the global evolution of networks. Here we define a structural importance metric, which we denote as l_e, for the edges of a network. The metric is based on perturbing the adjacency matrix and observing the resultant change in its largest eigenvalues. We then propose a model of network evolution where this metric controls the probabilities of subsequent edge changes. We show using synthetic data how the parameters of the model are related to the capability of predicting whether an edge will change from its value of l_e. We then estimate the model parameters associated with five real financial and social networks, and we study their predictability. These methods have applications in financial regulation, where it is important to understand how individual changes to financial networks will impact their global behaviour. They also provide fundamental insights into spectral predictability in networks, and demonstrate how spectral perturbations can be a useful tool in understanding the interplay between micro and macro features of networks.
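
A minimal finite-difference sketch of the spectral-perturbation idea (the paper defines l_e analytically, so the details may differ): perturb one edge of the adjacency matrix and record the change in its largest eigenvalue.

```python
import numpy as np

def edge_importance(A, i, j):
    """Finite-difference sketch of structural edge importance: the change
    in the largest eigenvalue of the adjacency matrix when edge (i, j) is
    removed (undirected case)."""
    top = lambda M: np.linalg.eigvalsh(M)[-1]   # largest eigenvalue
    A_removed = A.copy()
    A_removed[i, j] = A_removed[j, i] = 0.0
    return top(A) - top(A_removed)

# Toy undirected network: a triangle (nodes 0-2) with a pendant node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

for edge in [(0, 1), (2, 3)]:
    print(edge, round(edge_importance(A, *edge), 3))
```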

Video

Synthetic leverage, risk-taking, and monetary policy
Daniel Fricke, Deutsche Bundesbank
12:00-13:00, Wednesday 3 February 2021, Zoom

Abstract: A growing literature documents that easy monetary policy facilitates investor risk-taking. In this paper, I propose a new measure of synthetic leverage and provide evidence that German equity funds have been increasing their risk-taking through synthetic leverage from 2015 onwards. In fact, changes in synthetic leverage are closely aligned with the stance of monetary policy. Returns of synthetically leveraged funds (those that make use of risk-taking strategies) tend to be negative on a risk-adjusted basis and these funds underperform other funds significantly. Lastly, while synthetically leveraged funds do not differ in terms of their flow-performance sensitivity, they display larger flow externalities, in particular during volatile market conditions.

Tâtonnement, approach to equilibrium and excess volatility in firm networks
Jose Moran, Institute for New Economic Thinking, Oxford
12:00-13:00, Wednesday 20 January 2021, Zoom

Abstract: We study the conditions under which input-output networks can dynamically attain competitive equilibrium, where markets clear and profits are zero. We endow a classical firm network model with simple dynamical rules that reduce supply/demand imbalances and excess profits. We show that the time needed to reach equilibrium diverges as the system approaches an instability point beyond which the Hawkins-Simon condition is violated and competitive equilibrium is no longer realisable, reminiscent of May's stability condition. We argue that such slow dynamics are a source of excess volatility, through the accumulation and amplification of exogenous shocks. Factoring in essential physical constraints, such as causality or inventory management, we propose a dynamically consistent model that displays a rich variety of phenomena. Competitive equilibrium can only be reached after some time and within some region of parameter space, outside of which one observes periodic and chaotic phases, reminiscent of real business cycles. This suggests an alternative explanation of the excess volatility that is of purely endogenous nature.
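
The instability point referred to is where the Hawkins-Simon condition fails. A quick numerical check of the condition (a standard textbook criterion, not code from the paper):

```python
import numpy as np

def hawkins_simon(A):
    """Check the Hawkins-Simon condition for an input-output matrix A:
    all leading principal minors of (I - A) must be positive, which
    guarantees a non-negative equilibrium output vector."""
    M = np.eye(len(A)) - A
    return all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, len(A) + 1))

# Technical-coefficient matrices: one viable, one past the instability point.
A_stable = np.array([[0.2, 0.3],
                     [0.4, 0.1]])
A_unstable = np.array([[0.9, 0.8],
                       [0.8, 0.9]])

print(hawkins_simon(A_stable))    # True: competitive equilibrium realisable
print(hawkins_simon(A_unstable))  # False: equilibrium no longer realisable
```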

Joint work with Théo Dessertaine, Michael Benzaquen and J. P. Bouchaud, arXiv:2012.05202
Video

Causal Campbell-Goodhart law and reinforcement learning
Henry Ashton, University College London
12:00-13:00, Wednesday 25 November 2020, Zoom

Abstract: Campbell-Goodhart's law relates to the causal inference error whereby decision-making agents aim to influence variables which are correlated with their goal objective but do not reliably cause it. This is a well-known error in economics and political science but is not widely labelled in artificial intelligence research. Through a simple example, we show how off-the-shelf deep reinforcement learning (RL) algorithms are not necessarily immune to this cognitive error. The off-policy learning method is tricked, whilst the on-policy method is not. The practical implication is that naive application of RL to complex real-life problems can result in the same types of policy errors that humans make. Great care should be taken around understanding the causal model that underpins a solution derived from reinforcement learning.
Video

Learning (not) to trade: Lindy's law in retail traders
Jiahua Xu, UCL Centre for Blockchain Technology
12:00-13:00, Wednesday 11 November 2020, Zoom

Abstract: We develop a rational model of trading behavior in which the agents gradually learn about their ability to trade, and exit after poor trading performance. We demonstrate that it is optimal for experienced traders to "procrastinate" and postpone exit even after bad results. We embed this "optimal procrastination" in a model of population dynamics with entry and endogenous exit, and generate predictions about the dynamics of various cross-sectional characteristics. We test these population-level predictions using a large client data set of a major Swiss retail broker. Consistent with the model, we find that endogenous exit decisions produce non-trivial and non-monotonic population-wide linkages between performance, exits, and trading experience.
Video

Universal rankings in complex input-output organisations: From socio-economic to ecological systems
Silvia Bartolucci, University College London
12:00-13:00, Wednesday 28 October 2020, Zoom 

Abstract: The input-output balance equation is used to define rankings of constituents in the most diverse complex organizations: the very same tool that helps classify how species of an ecosystem or sectors of an economy interact with each other is useful to determine which sites of the world wide web - or which nodes in a social network - are the most influential. The basic principle is that constituents of a complex organization produce outputs whose "volume" should precisely match the sum of external demand plus the inputs absorbed by other constituents to function. The solution typically requires a case-by-case inversion of large matrices, which provides little to no insight into the structural features responsible for the hierarchical organization of resources. Here we show that - under very general conditions - the solution of the input-output balance equation for open systems can be described by a universal master curve, which is characterized analytically in terms of simple "mass defect" parameters - for instance, the fraction of resources wasted by each species of an ecosystem into the external environment. Our result follows from a stochastic formulation of the interaction matrix between constituents: using the replica method from the physics of disordered systems, we compute the average (or typical) value of the rankings of a generic hierarchy, whose leading order is shown to be largely independent of the precise details of the system under scrutiny. We test our predictions on systems as diverse as the WWW PageRank, trophic levels of generative models of ecosystems, input-output tables of large economies, and centrality measures of Facebook pages.
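
For concreteness, the case-by-case matrix inversion the abstract refers to amounts to solving the linear balance equation; a small sketch with a hypothetical 3-constituent system (numbers and notation invented for illustration):

```python
import numpy as np

# Balance equation: output x_i must cover external demand d_i plus the
# inputs absorbed by other constituents, x = d + W @ x, where W[i, j] is
# the amount of i's output needed per unit of j's output.
W = np.array([[0.0, 0.3, 0.2],
              [0.1, 0.0, 0.4],
              [0.2, 0.2, 0.0]])
d = np.array([1.0, 0.5, 2.0])

x = np.linalg.solve(np.eye(3) - W, d)
print(x)   # output "volumes" whose ordering defines the ranking
```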

Acceleration of descent-based optimisation algorithms via Caratheodory's theorem
Francesco Cosentino, Alan Turing Institute and University of Oxford
12:00-13:00, Wednesday 14 October 2020, Zoom

Abstract: Given a discrete probability measure supported on N atoms and a set of n real-valued functions, there exists a probability measure that is supported on a subset of n+1 of the original N atoms and has the same mean when integrated against each of the n functions. We give a simple geometric characterization of barycenters via negative cones and derive a randomized algorithm that computes this new measure by “greedy geometric sampling”. We then propose a new technique to accelerate algorithms based on gradient descent using Caratheodory’s theorem. As a core contribution, we present an application of the acceleration technique to block coordinate descent methods. Experimental comparisons on least squares regression with LASSO regularisation terms show better performance than the ADAM and SAG algorithms.
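
A minimal numerical sketch of the underlying recombination step (an SVD-based variant, not necessarily the paper's "greedy geometric sampling" algorithm): repeatedly move weight along a null direction of the moment matrix until the measure is supported on at most n+1 atoms.

```python
import numpy as np

def reduce_measure(F, w, tol=1e-12):
    """Caratheodory reduction: given n test functions (rows of F, shape
    n x N) and non-negative weights w on N atoms, return weights supported
    on at most n + 1 atoms with the same integrals F @ w and total mass."""
    w = w.astype(float).copy()
    n = F.shape[0]
    while True:
        support = np.flatnonzero(w > tol)
        if len(support) <= n + 1:
            return w
        # Moment matrix on the support, plus a row of ones to preserve mass;
        # it has more columns than rows, so its kernel is non-trivial.
        M = np.vstack([F[:, support], np.ones(len(support))])
        v = np.linalg.svd(M)[2][-1]       # kernel direction: M @ v ~ 0
        if not np.any(v > tol):
            v = -v                        # ensure some positive entries
        step = np.min(w[support][v > tol] / v[v > tol])
        w[support] -= step * v            # first weight hits exactly zero
        w[w < tol] = 0.0

rng = np.random.default_rng(1)
x = rng.normal(size=10)
F = np.vstack([x, x**2])                  # preserve mean and second moment
w = np.full(10, 0.1)
w_red = reduce_measure(F, w)
print(np.count_nonzero(w_red))            # <= 3 atoms remain
print(F @ w, F @ w_red)                   # the moments match
```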
Video

2019-2020

Meta-graph: Few shot link prediction via meta learning
Joey Bose
12:00-12:40, Wednesday 24 June 2020, Zoom

Abstract: We consider the task of few shot link prediction on graphs. The goal is to learn from a distribution over graphs so that a model is able to quickly infer missing edges in a new graph after a small amount of training. We show that current link prediction methods are generally ill-equipped to handle this task. They cannot effectively transfer learned knowledge from one graph to another and are unable to effectively learn from sparse samples of edges. To address this challenge, we introduce a new gradient-based meta learning framework, meta-graph. Our framework leverages higher-order gradients along with a learned graph signature function that conditionally generates a graph neural network initialization. Using a novel set of few shot link prediction benchmarks, we show that meta-graph can learn to quickly adapt to a new graph using only a small sample of true edges, enabling not only fast adaptation but also improved results at convergence.
Video

A maximum entropy approach to time series analysis
Riccardo Marcaccioli, University College London
12:00-12:40, Wednesday 17 June 2020, Zoom

Abstract: Natural and social multivariate systems are commonly studied through sets of simultaneous and time-spaced measurements of the observables that drive their dynamics, i.e., through sets of time series. Typically, this is done via hypothesis testing: the statistical properties of the empirical time series are tested against those expected under a suitable null hypothesis. This is a very challenging task in complex interacting systems, where statistical stability is often poor due to lack of stationarity and ergodicity. Here, we describe an unsupervised, data-driven framework to perform hypothesis testing in such situations. This consists of a statistical mechanical approach—analogous to the configuration model for networked systems—for ensembles of time series designed to preserve, on average, some of the statistical properties observed on an empirical set of time series. We showcase its possible applications with a case study on financial portfolio selection.
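
In the same spirit, a toy hypothesis test against a constrained null ensemble (plain permutation surrogates here, far simpler than the paper's maximum-entropy ensembles): the null preserves the marginal distribution of the series exactly while destroying its temporal structure.

```python
import numpy as np

rng = np.random.default_rng(2)

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] * x[1:]).mean() / x.var()

# Empirical series with genuine persistence (an AR(1) process).
T = 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()

# Null ensemble: random permutations of the same observations.
stat = lag1_autocorr(x)
null = np.array([lag1_autocorr(rng.permutation(x)) for _ in range(2000)])
p_value = (np.abs(null) >= abs(stat)).mean()
print(f"lag-1 autocorr = {stat:.3f}, permutation p-value = {p_value:.4f}")
```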
Video

Societal biases reinforcement through machine learning: A credit scoring perspective
Bertrand Hassani
12:00-12:40, Wednesday 10 June 2020, Zoom

Abstract: Do machine learning and AI ensure that social biases thrive? This paper aims to analyse this issue. Indeed, as algorithms are informed by data, if the data are corrupted from a social-bias perspective, a good machine learning algorithm will learn the patterns in the data provided and reverberate them in its predictions, whether for classification or regression. In other words, the way society behaves, whether positively or negatively, will necessarily be reflected by the models. In this paper, we analyse how social biases are transmitted from the data into banks' loan approvals by predicting either the gender or the ethnicity of the customers using the exact same information provided by customers through their applications.
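
A sketch of that diagnostic on synthetic data (feature names and effect sizes are invented): if a protected attribute is predictable from the application features alone, a credit-scoring model trained on those features can absorb and reproduce the bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 5000
gender = rng.integers(0, 2, n)                            # protected attribute
income = rng.normal(30_000 + 5_000 * gender, 8_000, n)    # biased proxy feature
tenure = rng.normal(5, 2, n)                              # neutral feature
X = np.column_stack([income, tenure])

# Try to recover the protected attribute from the application features only.
clf = make_pipeline(StandardScaler(), LogisticRegression())
auc = cross_val_score(clf, X, gender, cv=5, scoring="roc_auc").mean()
print(f"gender predictable from loan features: AUC = {auc:.2f}")   # >> 0.5
```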

Pricing of futures with a CARMA(p,q) model driven by a time-changed Brownian motion
Andrea Perchiazzo
12:00-12:40, Wednesday 20 May 2020, Zoom

Abstract: In this paper we start from the empirical findings on the behaviour of futures prices in commodity markets and propose a continuous-time model that allows us to represent the price function in semi-analytical form. In particular, we study the term structure of futures prices under the assumption that the underlying asset price follows an exponential CARMA(p,q) model where the driving noise is a time-changed Brownian motion. The obtained formula is strictly connected to the cumulant generating function of the subordinator process. The main advantages of the proposed model are the possibility to work directly with market data without requiring a regular grid, and its ability to capture complex time-dependent structures through different shapes of the autocovariance function.
Video

Supply and demand shocks in the COVID-19 pandemic: An industry and occupation perspective  
Rita Maria del Rio Chanona
12:00-12:40, Wednesday 22 April 2020, Zoom

Abstract: We provide quantitative predictions of first-order supply and demand shocks for the US economy associated with the COVID-19 pandemic at the level of individual occupations and industries. To analyse the supply shock, we classify industries as essential or non-essential and construct a Remote Labour Index, which measures the ability of different occupations to work from home. Demand shocks are based on a study of the likely effect of a severe influenza epidemic developed by the US Congressional Budget Office. Compared to the pre-COVID period, these shocks would threaten around 20 per cent of the US economy’s GDP, jeopardize 23 per cent of jobs, and reduce total wage income by 16 per cent. At the industry level, sectors such as transport are likely to be output-constrained by demand shocks, while sectors relating to manufacturing, mining, and services are more likely to be constrained by supply shocks. Entertainment, restaurants, and tourism face large supply and demand shocks. At the occupation level, we show that high-wage occupations are relatively immune from adverse supply- and demand-side shocks, while low-wage occupations are much more vulnerable. We should emphasize that our results are only first-order shocks—we expect them to be substantially amplified by feedback effects in the production network.
Video

Regime detection in financial time series and further results in portfolio construction  
Pier Francesco Procacci, University College London
12:00-12:40, Wednesday 8 April 2020, Zoom

Abstract: The seminar discusses a novel approach to define, analyse and forecast market states. Two experiments are presented, together with observations on portfolio construction and patterns in the likelihood which arise from the analysis. Defining market states is an essential tool in dealing with the non-stationarity of time series, but the most widely used models are often computationally expensive and unfeasible as dimensionality increases. In our approach, market states are identified by a reference sparse precision matrix and a vector of expectation values. Each multivariate observation is associated with a given market state according to the minimization of a penalized distance measure. The procedure is computationally very efficient and can be used with a large number of assets. We demonstrate that this procedure successfully clusters different states of the markets in an unsupervised manner.
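
A stripped-down sketch of the assignment step (the paper's penalised distance and estimation procedure are richer than this): score each observation against every state's mean and sparse precision matrix, and pick the minimiser.

```python
import numpy as np

def assign_state(y, states):
    """Assign observation y to the state minimising a Mahalanobis distance
    penalised by the state's log-determinant (a Gaussian negative
    log-likelihood up to constants); sketch of the assignment step only."""
    def score(mu, J):                 # J is the state's (sparse) precision
        r = y - mu
        return r @ J @ r - np.linalg.slogdet(J)[1]
    return min(range(len(states)), key=lambda k: score(*states[k]))

# Two invented states for a 2-asset market: calm vs stressed.
calm = (np.zeros(2), np.linalg.inv(np.array([[0.01, 0.002],
                                             [0.002, 0.01]])))
stressed = (np.zeros(2), np.linalg.inv(np.array([[0.09, 0.06],
                                                 [0.06, 0.09]])))

print(assign_state(np.array([0.002, 0.001]), [calm, stressed]))  # 0 (calm)
print(assign_state(np.array([0.25, 0.22]), [calm, stressed]))    # 1 (stressed)
```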
Video

Information flow simulations in the investigation of economic complex systems 
Riccardo Righi 
12:00-13:00, Wednesday 26 February 2020, Room 4.05

Abstract: The seminar discusses the use of simulations of information flows for the investigation of complex systems related to the digital economy. Two case studies are presented. The first concerns the investigation of an equity crowdfunding platform based in the UK. To categorize the nodes of this network, a cluster analysis on investors identifies three profiles: Small Investors, Serial Investors and Highly Involved Investors. Similarly, three clusters emerge from companies' characteristics. The structural properties of the platform are then investigated through the detection of groups of agents that are likely to generate internal social capital due to dense connections. Our results show that Small Investors, through investments in different types of companies, significantly interconnect distinct communities, thus contributing to generate a cohesive network structure. Hence they structurally support information exchange throughout the platform. The second case study concerns the techno-economic complex system defined by worldwide economic institutions (e.g. firms, governmental institutions and research institutes) actively contributing to the field of Artificial Intelligence in the period 2009-2018. We discuss the theoretical basis supporting the construction of a multi-layer network representing the Artificial Intelligence agent-artifact space.
Video

Efficiency of payment networks with a central counterparty 
Haotian Gao  
12:00-13:00, Wednesday 12 February 2020, Room 4.05

Abstract: Central Counterparty Clearing Houses (CCPs) are financial institutions established to facilitate the clearing and settlement of trades across various markets. After the financial crisis, CCPs have become a key piece of the new regulatory framework, as several classes of derivative contracts must now be cleared through them. The problem of how CCPs affect counterparty risk was previously studied by Duffie & Zhu (2011). A key benefit of having trades cleared through CCPs is that the payment obligations arising from those trades can be smaller, because they are netted. We show that, if the system is efficient in the absence of a CCP, introducing a CCP always reduces its efficiency. Conversely, when the system with no CCP is inefficient, we show that in the presence of a CCP its efficiency is a non-monotonic function of the percentage of transactions cleared through the CCP, which we denote in the following by α. In particular, we show that there exists an optimal value of α where efficiency achieves a maximum.
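
A toy illustration of why netting through a CCP shrinks payment obligations (numbers invented; the talk's efficiency analysis for intermediate values of α is beyond this sketch):

```python
# obligations[(i, j)] = amount i owes j in the bilateral (no-CCP) market.
obligations = {("A", "B"): 10, ("B", "C"): 10, ("C", "A"): 8}

gross = sum(obligations.values())
print("gross payments without CCP:", gross)              # 28

# With full clearing (alpha = 1) each trader only settles its net position
# against the CCP, so total payment obligations shrink.
net = {}
for (payer, payee), amount in obligations.items():
    net[payer] = net.get(payer, 0) + amount
    net[payee] = net.get(payee, 0) - amount

ccp_payments = sum(max(position, 0) for position in net.values())
print("payments with CCP (alpha = 1):", ccp_payments)    # 2
```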

A rough perspective on modern market generators
Blanka Horvath, King's College London
12:00-13:00, Wednesday 29 January 2020, Room 4.05

Abstract: In this talk we investigate how Deep Pricing and Hedging brings a new impetus into the modelling of financial markets. We take a short walk through historical market models and proceed to modern generative models for financial time series. We then investigate some of the challenges in achieving good results with the latter, and highlight some applications and pitfalls. We also discuss different approaches to pricing and hedging considerations in the DNN framework and the connection to Market Generators.
Video

Networks and the arrow of time 
Tim Evans 
12:00-13:00, Wednesday 4 December 2019, Room 4.05

Abstract: There are many data sets where objects come with a natural order: academic papers have publication dates, ecosystems have predator-prey relationships, computer packages have their dependencies, space-time events are ordered by causality. If these objects are nodes in a network then we have a Directed Acyclic Graph. We must take account of the constraint placed on such networks by this order, the “arrow of time”, so many standard network techniques are inappropriate for such networks. In my talk I will highlight some of the well-known mathematical features of Directed Acyclic Graphs and will show how I have been using them to look at these networks in new ways. I will use examples from both simple models, such as the Price model (the directed Barabasi-Albert model), and real data sets such as document citation data and food webs.
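
For instance, with networkx the order constraint is easy to exploit (a generic illustration, not the talk's analysis):

```python
import networkx as nx

# Citation-style toy DAG: edges point from newer papers to the older
# papers they cite, so the graph is constrained by the arrow of time.
G = nx.DiGraph([("D", "B"), ("D", "C"), ("B", "A"), ("C", "A")])

print(nx.is_directed_acyclic_graph(G))   # True
print(list(nx.topological_sort(G)))      # a time-respecting ordering
print(nx.dag_longest_path(G))            # e.g. ['D', 'B', 'A']
```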
Video

From brain to markets: A fractal journey 
Federico Turkheimer 
12:00-13:00, Wednesday 20 November 2019, Room 4.05

Abstract: Introduced almost 60 years ago, fractals started off as mathematical curiosities with great appeal, given that fractal-like structures turned out to be ubiquitous in nature. Nowadays they have turned into unique tools that aid the construction of quantitative models of complex systems. This talk is meant as a hopefully pleasurable walk from the original intuition of Mandelbrot and others, through a quantitative multiscale model of brain function and behaviour, to quantitative models for trading equities. A qualitative view of fractals and self-similar expansions will be used to illustrate how we can advance our understanding of brain and mind and make solid predictions on how changes in the neurobiology of elementary units can affect human health and activity. Such insight extends to human activities such as art (through a short review of psychosis and the art of Vincent Van Gogh) and, importantly for this context, financial markets, turning old mathematical tools into sensitive predictors of market conditions.
Video

Sector-neutral portfolios: Long memory motifs persistence in market structure dynamics 
Jeremy Turiel, University College London
12:00-13:00, Wednesday 30 October 2019, Room 4.05

Abstract: We study soft persistence (the existence, in subsequent temporal layers, of motifs from the initial layer) of motif structures in Triangulated Maximally Filtered Graphs (TMFG) generated from time-varying Kendall correlation matrices computed from stock price log-returns over rolling windows with exponential smoothing. We observe long-memory processes in these structures in the form of power-law decays in the number of persistent motifs. The decays then transition to a plateau regime with a power-law decay of smaller exponent. We demonstrate that identifying persistent motifs allows for forecasting and applications to portfolio diversification. Balanced portfolios are often constructed from the analysis of historic correlations; however, not all past correlations are persistently reflected in the future. Sector neutrality has also been a central theme in portfolio diversification and systemic risk. We present an unsupervised technique to identify persistently correlated sets of stocks. These are empirically found to identify sectors driven by strong fundamentals. Applications of these findings are tested in two distinct ways on four different markets, resulting in significant reductions in portfolio volatility. A persistence-based measure for portfolio allocation is proposed and shown to outperform volatility weighting when tested out of sample.

Identifying the hidden multiplex architecture of complex systems 
Lucas Lacasa 
12:00-13:00, Wednesday 16 October 2019, Room 4.05

Abstract: The interactions of many complex systems take place at different levels. However, only in a few cases can such a multi-layered architecture be empirically observed, as one usually only has experimental access to such structure from an aggregated projection. A fundamental question is thus to determine whether the hidden underlying architecture of complex systems is better modelled as a single interaction layer or results from the aggregation and interplay of multiple layers. Here we show that, by only using local information provided by a random walker navigating the aggregated network, it is possible to decide in a robust way if the underlying structure is a multiplex and, in the latter case, to determine the most probable number of layers. The proposed methodology detects and estimates the optimal architecture capable of reproducing observable non-Markovian dynamics taking place on networks, with applications ranging from human or animal mobility to electronic transport or molecular motors. Furthermore, the mathematical theory extends above and beyond the detection of physical layers in networked complex systems, as it provides a general solution for the optimal decomposition of complex dynamics into a Markov-switching combination of simple (diffusive) dynamics.
Video

Rough landscapes: From machine learning to glasses and back 
Chiara Cammarota 
12:00-13:00, Wednesday 25 September 2019, Room 4.05

Abstract: The evolution of many complex systems in physics, biology or computer science can often be thought of as an attempt to optimize a cost function. Such a function generally depends in a highly non-linear way on the huge number of variables parametrizing the system, so that its profile defines a high-dimensional landscape, which can be either smooth and convex, or rugged. In this talk I will focus on rough cost/loss functions within the realm of inference and machine learning. I will first discuss the cost landscape of a widely used inference model called the spiked tensor model, here also generalised, and its implications for the performance of inference algorithms. Secondly, I will report on evidence of glass-like dynamics, including aging, during the training of deep neural networks, and use it to discuss the importance of over-parametrisation, widely used in the field.
Video

2018-2019

Network methods for policy evaluation 
Omar Guerrero 
12:00-13:00, Wednesday 19 June 2019, Room 4.05

Abstract: Over the last 50 years, an increasing number of countries have used guidelines provided by international organisations to shape their development strategies. Today, the best example of these guidelines is the Sustainable Development Goals (SDGs). The SDGs consist of 17 general goals that are monitored through 232 development indicators. Before the SDGs, development indicators were designed to measure different policy issues in isolation from each other. Today, this has changed with the official acknowledgment that “development challenges are complex and interlinked” (SDG official website). Accounting for interdependencies between development goals has become a central discussion among researchers and practitioners in development studies, for example, to evaluate SDGs; to align environmental policies; to coordinate anti-poverty policies; and to better understand the synergies and tradeoffs between development goals.
Website

Analysis of overfitting in the regularized Cox model 
Mansoor Sheikh 
12:00-13:00, Wednesday 1 May 2019, Room 4.05

Abstract: The Cox proportional hazards model is ubiquitous in the analysis of time-to-event data. However, when the data dimension p is comparable to the sample size N, maximum likelihood estimates for its regression parameters are known to be biased or break down entirely due to overfitting. This prompted the introduction of the so-called regularized Cox model. In this paper we use the replica method from statistical physics to investigate the relationship between the true and inferred regression parameters in regularized multivariate Cox regression with L2 regularization, in the regime where both p and N are large but with p/N ~ O(1). We thereby generalize a recent study from maximum likelihood to maximum a posteriori inference. We also establish a relationship between the optimal regularization parameter and p/N, allowing for straightforward overfitting corrections in time-to-event analysis.
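
A minimal setup of the estimator being analysed, in the overfitting-prone regime p/N = O(1). This assumes the lifelines package; the talk's contribution is the analytic replica treatment, not the fitting itself.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic time-to-event data with p comparable to N.
rng = np.random.default_rng(4)
N, p = 100, 40
X = rng.normal(size=(N, p))
beta = np.zeros(p); beta[:5] = 0.5            # a few true associations
T = rng.exponential(scale=np.exp(-X @ beta))  # event times
E = rng.random(N) < 0.8                       # ~20% censoring

df = pd.DataFrame(X, columns=[f"x{i}" for i in range(p)])
df["T"], df["E"] = T, E

# L2 (ridge) regularization: lifelines' penalizer with l1_ratio=0 gives
# the regularized Cox model whose bias the talk characterises.
cph = CoxPHFitter(penalizer=1.0, l1_ratio=0.0)
cph.fit(df, duration_col="T", event_col="E")
print(cph.params_.head())
```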
Video

Contagion accounting 
Anne-Caroline Hüser
12:00-13:00, Wednesday 3 April 2019, Room 4.05

Abstract: We provide a simple and tractable accounting-based stress-testing framework to assess loss dynamics in the banking system. Contagion can occur through direct and indirect interbank exposures, indirect exposures due to overlapping portfolios, and price dynamics via fire sales in a context of leverage targeting. We apply the framework to three granular proprietary ECB datasets, including an interbank network of 26 large euro area banks as well as their overlapping portfolios of loans and securities.

Conditional generative adversarial networks for trading strategies
Adriano Koshiyama, University College London
12:00-13:00, Wednesday 13 March 2019, Room 4.05

Abstract: Systematic trading strategies are algorithmic procedures that allocate assets aiming to optimize a certain performance criterion. To obtain an edge in a highly competitive environment, the analyst needs to properly fine-tune their strategy, or discover how to combine weak signals in novel alpha-creating ways. Both aspects, namely fine-tuning and combination, have been extensively researched using several methods, but emerging techniques such as Generative Adversarial Networks can have an impact on both. Therefore, our work proposes the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation. To this purpose, we provide a full methodology on: (i) the training and selection of a cGAN for time series data; (ii) how each sample is used for strategy calibration; and (iii) how all generated samples can be used for ensemble modelling. To provide evidence that our approach is well grounded, we have designed an experiment with multiple trading strategies, encompassing 579 assets. We compared cGAN with an ensemble scheme and model validation methods, both suited for time series. Our results suggest that cGANs are a suitable alternative for strategy calibration and combination, providing outperformance when the traditional techniques fail to generate any alpha.
Video

Max-hedge/max-grace
Stephen Pasteris, University College London
12:00-13:00, Wednesday 20 February 2019, Room 4.05

Abstract: We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set. Each action is associated with an energy value, a reward and a cost. The sum of the energies of the actions selected cannot exceed a given energy budget. The goal is to maximise the cumulative profit, where the profit obtained on a single trial is defined as the difference between the maximum reward among the selected actions and the sum of their costs. Action energy values and the budget are known and fixed. All rewards and costs associated with each action change over time and are revealed at each trial only after the learner's selection of actions. Our framework encompasses several online learning problems where the environment changes over time; and the solution trades-off between minimising the costs and maximising the maximum reward of the selected subset of actions, while being constrained to an action energy budget. The algorithm that we propose is an efficient and very scalable unifying approach which is capable of solving our general problem. Hence, our method solves several online learning problems which fall into this general framework.
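
To make the objective concrete, here is one trial solved by brute force on an invented action set, with the trial's rewards and costs taken as known in hindsight (the talk's algorithm is an efficient online method, not this enumeration):

```python
from itertools import combinations

actions = {  # name: (energy, reward, cost) -- invented values
    "a": (2, 5.0, 1.0),
    "b": (3, 8.0, 4.0),
    "c": (1, 3.0, 0.5),
}
budget = 4  # energy budget

def profit(subset):
    """Profit of a trial: max reward among selected actions minus the
    sum of their costs."""
    return max(actions[a][1] for a in subset) - sum(actions[a][2] for a in subset)

feasible = [s for r in range(1, len(actions) + 1)
            for s in combinations(actions, r)
            if sum(actions[a][0] for a in s) <= budget]
best = max(feasible, key=profit)
print(best, profit(best))
```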
Video

Statistical challenges of loopy networks
Fabián Aguirre López
12:00-13:00, Wednesday 30 January 2019, Room 4.05

Abstract: We present an analytical approach for describing spectrally constrained maximum entropy ensembles of finitely connected regular loopy graphs, valid in the regime of weak loop-loop interactions. We derive an expression for the leading two orders of the expected eigenvalue spectrum, through the use of infinitely many replica indices taking imaginary values. We apply the method to models in which the spectral constraint reduces to a soft constraint on the number of triangles, which exhibit ‘shattering’ transitions to phases with extensively many disconnected cliques, to models with controlled numbers of triangles and squares, and to models where the spectral constraint reduces to a count of the number of adjacency matrix eigenvalues in a given interval. Our predictions are supported by MCMC simulations based on edge swaps with nontrivial acceptance probabilities.
Video

Market impact and optimal execution with transient models 
Fabrizio Lillo
11:00-12:00, Thursday 17 January 2019, Room 4.05

Abstract: Modeling market impact is critically important for developing trading strategies and for transaction cost analysis. In this talk, I first review some old and new empirical regularities on market impact, in particular focusing on the effect on price of multiple contemporaneous large executions. I then present the problem of optimal execution and dynamical arbitrage in the context of transient impact models. Specifically, I discuss the role of different benchmarks (Implementation Shortfall, Volume Weighted Average Price and Target Close) in the optimal execution and the extension of the model to the multi-asset setting.
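
Transient impact models typically take the propagator form below (standard notation from the market-impact literature, not necessarily the talk's):

```latex
% Transient (propagator) market-impact model: the price is a superposition
% of the decaying impact of past signed order flow.
p_t = p_{-\infty} + \sum_{s<t} G(t-s)\,\epsilon_s\, f(v_s) + \eta_t ,
\qquad G(\tau) \sim \tau^{-\beta},
```

where ε_s and v_s are the sign and volume of the trade at time s, f is a concave impact function, G is the decaying propagator kernel, and η_t is noise.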
Video

Superstars in two-sided markets: exclusives or not? 
Elias Carroni
12:00-13:00, Wednesday 16 January 2019, Room 4.05

Abstract: Competition in many markets is shaped by the presence of Superstars, i.e., very strong players who can decide to offer their product through exclusive contracts. In this paper, we present a tractable model of two-sided platform competition. Platforms act as intermediaries between consumers and content providers. Relative to other content providers, a Superstar is more important for consumers and has market power. When platform competition is intense, consumers are very responsive to the presence of the Superstar. This makes exclusivity more lucrative. Differently, when competition is less intense, consumers tend to stick with their preferred plat- form. So, the Superstar offers a non-exclusive contract reaching the largest possible audience. This mechanism is self-reinforcing as content providers endogenously follow consumer decisions and it is robust to more general set-ups and extensions. Contrary to the common wisdom, in most cases the contract choice of the Super- star is aligned with the first-best outcome in the industry.
Video

Fourier-transform based pricing of barrier options with stochastic volatility
Jiaqi Liang, University College London
14:00-15:00, Wednesday 14 November 2018, Room 4.05

Abstract: We present a pricing method for discretely monitored barrier options with stochastic volatility models, extending previous work with fluctuation identities in Fourier-z space for Lévy processes. The option price can be found by calculating a set of nested integrals which express an iterative relation between the discounted prices at successive monitoring dates. Computing the variance integral by quadrature and applying the z-transform to discounted option prices monitored at discrete times, a Wiener-Hopf integral equation is obtained. Due to its convolution structure, this equation can be solved in Fourier space using the Wiener-Hopf technique. The joint conditional characteristic function of the Heston model with respect to the log-price and variance is known. Then the price of a barrier option can be found using an algorithm similar to the one described for Lévy processes. It is not yet clear whether this method can be extended to continuous monitoring as it has been for Lévy processes, but we expect that, after testing it on the Heston model as an example, it can be applied to any Lévy-driven local-stochastic volatility model.
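
The iterative relation mentioned is, schematically, a discounted conditional expectation applied between monitoring dates with the barrier enforced at each date (a generic form with assumed notation, not the talk's exact equations):

```latex
% Backward recursion between monitoring dates t_n for a down-and-out
% barrier option with barrier l and terminal payoff \phi.
v_n(x) = \mathbf{1}_{\{x>l\}}\, e^{-r\Delta t}\,
         \mathbb{E}\!\left[\, v_{n+1}\!\left(X_{t_{n+1}}\right) \,\middle|\, X_{t_n}=x \,\right],
\qquad v_N(x) = \mathbf{1}_{\{x>l\}}\,\phi(x).
```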
Video

Resilience of trading networks
Laura Silvestri
14:00-15:00, Wednesday 31 October 2018, Room 4.05

Abstract: We study the network structure and resilience of the sterling investment-grade and high-yield corporate bond markets. We use proprietary, transaction-level data to show that the trading networks of sterling investment-grade and high-yield corporate bonds exhibit a core-periphery structure where a large number of non-dealers trade with a small number of dealers. The market is highly concentrated, with the top three dealers accounting for around 20%, and the top three non-dealers for around 10-20%, of trading volume on average. Consistent with dealer behaviour in the primary market, we find that trading activity is particularly concentrated for newly issued bonds: the top three dealers account for 45% of trading volume in the secondary market for newly issued bonds. Whilst the network structure has been broadly stable and the market broadly resilient around bond downgrades over our 2012-2017 sample period, the reliance on a small number of participants makes the trading network somewhat fragile to the withdrawal of a few key dealers from the market.
Video

Integral transform methods and spectral filters for the pricing of exotic options 
Guido Germano, University College London
Friday 14 December 2018, Room 4.05

Abstract: We present numerical methods to calculate fluctuation identities for exponential Lévy processes with discrete and continuous monitoring. This includes the Spitzer identities which give the distribution of the maximum or the minimum of a random path, the joint distribution at maturity with the extrema staying below or above a barrier, and the more difficult case of the two-barriers exit problem. These identities are given in the Fourier-z or Fourier-Laplace domain and require numerical inverse z and Laplace transforms as well as, for the required Wiener-Hopf factorisations, numerical Hilbert transforms based on a sinc function expansion and thus ultimately on the fast Fourier transform. In most cases we achieve exponential convergence with respect to the number of grid points, in some cases improving the rate of convergence with spectral filters to mitigate the Gibbs phenomenon for discrete Fourier transforms. As motivating applications we price barrier, lookback, quantile and Bermudan options. 
Paper

Social closure and the evolution of cooperation via indirect reciprocity
Simone Righi, University College London
Friday 16 November 2018, Room 4.05

Abstract: Direct and indirect reciprocity are good candidates to explain the fundamental problem of the evolution of cooperation. We explore the conditions under which different types of reciprocity gain dominance and their performance in sustaining cooperation in the Prisoner's Dilemma (PD) played on simple networks. We confirm that direct reciprocity gains dominance over indirect reciprocity strategies also in larger populations, as long as it has no memory constraints. In the absence of direct reciprocity, or when its memory is flawed, different forms of indirect reciprocity strategies are able to dominate and to support cooperation. We show that indirect reciprocity relying on the social capital inherent in closed triads is the best competitor among them, outperforming indirect reciprocity that uses information from any source. Results hold in a wide range of conditions with different evolutionary update rules, extents of evolutionary pressure, initial conditions, population sizes, and densities.
Paper

Reciprocity and success in academic careers
Giacomo Livan, University College London
Thursday 11 October 2018, Room 4.05

Abstract: The growing importance of citation-based bibliometric indicators in shaping the prospects of academic careers incentivizes scientists to boost the numbers of citations they receive. Whereas the exploitation of self-citations has been extensively documented, the impact of reciprocated citations has not yet been studied. In this talk I will discuss reciprocity in a citation network of academic authors, and compare it with the average reciprocity computed in a variety of null network model ensembles. I will show that obtaining citations through reciprocity correlates negatively with a successful career in the long term. Nevertheless, at the aggregate level there is evidence of a steady increase in reciprocity over the years, largely fuelled by the exchange of citations between coauthors. These results characterize the structure of author networks in a time of increasing emphasis on citation-based indicators, and I will discuss their implications towards a fairer assessment of academic impact. 
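
A small illustration of the measurement being compared (networkx built-ins; the degree-preserving configuration model here is a simple stand-in for the paper's null-network ensembles):

```python
import networkx as nx

# Empirical reciprocity of a directed citation-style graph, compared with
# a degree-preserving null model.
G = nx.gnp_random_graph(200, 0.05, directed=True, seed=5)
print("observed reciprocity:", nx.overall_reciprocity(G))

din = [d for _, d in G.in_degree()]
dout = [d for _, d in G.out_degree()]
null = nx.directed_configuration_model(din, dout, seed=5)
null = nx.DiGraph(null)              # collapse multi-edges for comparison
null.remove_edges_from(nx.selfloop_edges(null))
print("null-model reciprocity:", nx.overall_reciprocity(null))
```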
Paper

Filtering information with networks: Understanding market structure and predicting market changes
Tomaso Aste, University College London
Thursday 13 September 2018, Room 4.05

Abstract: We are witnessing interesting times, rich in information readily available for us all. Using, understanding and filtering such information has become a major activity across science, industry and society at large. Networks are excellent tools to represent and model complex systems such as the human brain or the financial market. Sparse networks constructed from observational data of complex systems can be used to filter information by extracting the core interaction structure in a simplified but representative way. I will show how information filtering networks built from similarity measures, both linear and non-linear, can be used to process information as it is generated, reducing complexity and dimensionality while keeping the integrity of the dataset. I will describe how predictive probabilistic models can be associated with such networks. I will show how reliable, predictive and useful these models are in describing financial market structure and predicting regime changes.