Extreme Events: Robust Portfolio Construction in the Presence of Fat Tails
Ebook: 696 pages (8 hours)


About this ebook

Taking due account of extreme events when constructing portfolios of assets or liabilities is a key discipline for market professionals. Extreme events are a fact of life in how markets operate.

In Extreme Events: Robust Portfolio Construction in the Presence of Fat Tails, leading expert Malcolm Kemp shows readers how to analyse market data to uncover fat-tailed behaviour, how to incorporate expert judgement in the handling of such information, and how to refine portfolio construction methodologies to make portfolios less vulnerable to extreme events or to benefit more from them.

This is the only text that combines a comprehensive treatment of modern risk budgeting and portfolio construction techniques with the specific refinements needed for them to handle extreme events. It explains in a logical sequence what constitutes fat-tailed behaviour and why it arises, how we can analyse such behaviour at aggregate, sector or instrument level, and how we can then take advantage of this analysis.

Along the way, it provides a rigorous, comprehensive and clear development of traditional portfolio construction methodologies applicable when fat tails are absent. It then explains how to refine these methodologies to accommodate real-world behaviour.

Throughout, the book highlights the importance of expert opinion, showing that even the most data-centric portfolio construction approaches ultimately depend on practitioner assumptions about how the world might behave.

The book includes:

  • Key concepts and methods involved in analysing extreme events
  • A comprehensive treatment of mean-variance investing, Bayesian methods, market consistent approaches, risk budgeting, and their application to manager and instrument selection
  • A systematic development of the refinements needed to traditional portfolio construction methodologies to cater for fat-tailed behaviour
  • Latest developments in stress testing and back testing methodologies
  • A strong focus on the practical implementation challenges that can arise at each step in the process and on how to overcome these challenges

“Understanding how to model and analyse the risk of extreme events is a crucial part of the risk management process. This book provides a set of techniques that allow practitioners to do this comprehensively.”
Paul Sweeting, Professor of Actuarial Science, University of Kent

“How can the likeliness of crises affect the construction of portfolios? This question is highly topical in times where we still have to digest the last financial collapse. Malcolm Kemp gives the answer. His book is highly recommended to experts as well as to students in the financial field.”
Christoph Krischanitz, President Actuarial Association of Austria, Chairman WG “Market Consistency” of Groupe Consultatif

Language: English
Publisher: Wiley
Release date: Oct 4, 2011
ISBN: 9781119962878


    Book preview

    Extreme Events - Malcolm Kemp

    Chapter 1

    Introduction

    1.1 Extreme Events

    This book is about how best to construct investment portfolios if a priori it is reasonable to assume that markets might exhibit fat-tailed behaviour. It is designed to appeal to a wide variety of practitioners, students, researchers and general readers who may be interested in extreme events or portfolio construction theory either in isolation or in combination. It achieves this aim by

    (a) Exploring extreme events, why they might arise in a financial context and how we might best analyse them.

    (b) Separately exploring all the main topics in portfolio construction theory applicable even in the absence of fat tails. Any more general approach capable of handling extreme events effectively must also cope, as a special case, with the situation where fat-tailed behaviour is too slight to be discernible.

    (c) Blending points (a) and (b) together to identify portfolio construction methodologies better able to cater for possible fat-tailed behaviour in the underlying assets or liabilities.

    Given its wide intended audience, the book covers these topics both from a more qualitative perspective (particularly in the earlier and later chapters) and from a more quantitative (i.e., mathematical) perspective (particularly in the middle chapters). Where possible, this book has been segmented so that valuable insights can be gained without necessarily having to read the whole text. Conversely, in the author’s opinion, valuable insights arise throughout the book, including the parts that are more mathematical in nature. More general readers are therefore encouraged not to skip over these parts completely, although they do not need to worry too much about following all the details.

    By fat-tailed behaviour we mean that the distribution of future returns is expected to involve more extreme events than might be expected to occur were returns to follow the (multivariate) (log-) Normal distributions often assumed to apply to markets in basic portfolio construction texts.¹
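
    To quantify how rare extreme moves 'should' be under the Normal assumption, the following short sketch (not from the book) tabulates lower-tail probabilities for a standardised Normal variable. Fat-tailed markets breach such thresholds noticeably more often than these figures imply.

```python
import math

def normal_tail(k):
    """P(Z < -k) for a standard Normal variable Z, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

for k in (2, 3, 5):
    p = normal_tail(k)
    print(f"P(Z < -{k}) = {p:.2e}, i.e., roughly 1 observation in {1 / p:,.0f}")
```

    Under Normality a 5 standard deviation shortfall should occur only about once in 3.5 million observations; daily market history offers rather more of them than that.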

    Most practitioners believe that most markets are ‘fat-tailed’ given this terminology. There is a wide body of empirical academic literature that supports this stance, based on analysis of past market behaviour. There is also a growing body of academic theory, including some involving behavioural finance, explaining why fat-tailed behaviour seems so widespread. So, we might also characterise this book as exploring how best to construct investment portfolios in the real world.

    Of course, practitioners and academics alike are not themselves immune from behavioural biases. It is one thing to agree to pay lip service to the notion that market behaviour can be fat-tailed, but quite another to take this into account in how portfolios are actually constructed. Following the dot.com boom and bust in the late 1990s and early 2000s, markets settled into a period of unusually low volatility. Strategies that benefited from stable economic conditions, e.g., ones that followed so-called ‘carry’ trades or strategies relying on continuing ready access to liquidity, proved successful, for a time. The 2007–09 credit crisis, however, painfully reminded the complacent that markets and economies more generally can and do experience extreme events.

    1.2 The Portfolio Construction Problem

    We do not live in a world in which we have perfect foresight. Instead, portfolio construction always involves striking a balance between risk and reward, i.e., the risk that the views implicit in our portfolio construction will prove erroneous versus the rewards that will accrue if our views prove correct. Everyone involved in the management of portfolios, whether of assets or of liabilities, faces a portfolio construction problem. How do we best balance risk and return? Indeed, what do we mean by ‘best’?

    Given the lack of perfect foresight that all mortals face, it is not reasonable to expect a book like this to set out guaranteed ways of profiting from investment conditions come what may. Instead, it seeks to achieve the lesser but more realistic goal of exploring the following:

    (a) core elements of portfolio construction;

    (b) mathematical tools that can be used to assist with the portfolio construction problem, and their strengths and weaknesses;

    (c) ways of refining these tools to cater better for fat-tailed market behaviour;

    (d) mindsets best able to cope well with extreme events, and the pitfalls that can occur if we do not adopt these mindsets.

    1.3 Coping with Really Extreme Events

    Lack of perfect foresight is not just limited to a lack of knowledge about exactly what the future holds. Typically in an investment context we also do not know how uncertain the future will be. Using statistical terminology, we do not even know the precise form of the probability distribution characterising how the future might evolve.

    The differentiation between ‘risk’ and ‘uncertainty’ is a topic that several popular writers have sought to explore in recent times, e.g., Taleb (2004, 2007). In this context ‘risk’ is usually taken to mean some measurable assessment of the spread of possible future outcomes, with ‘uncertainty’ then taken to mean lack of knowledge, even (or particularly) concerning the size of this spread.

    In this book, we take up this baton particularly in Chapters 8 and 9. Holding such an insight in mind is, I think, an important contributor to successful portfolio construction. In particular, it reminds us that really extreme events seem to have a nasty habit of occurring more often than we might like. Put statistically, if there is a 1 in 10¹⁰ (1 in 10 billion) chance of an event occurring given some model we have adopted, and there is a 1 in 10⁶ (1 in a million) chance that our model is fundamentally wrong, then any really extreme events are far more likely to be due to our model being wrong than representing random (if unlikely) draws from our original model.²
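
    This arithmetic can be reproduced with a short Bayes-style check (a sketch using the round numbers above, together with the 'standard deviation of 10 rather than 1' wrong-model example worked through in footnote 2):

```python
import math

def norm_cdf(x):
    """Standard Normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

p_model_wrong = 1e-6                       # 1 in a million: model fundamentally wrong
p_event_if_right = 1e-10                   # 1 in 10 billion under the adopted model
p_event_if_wrong = norm_cdf(-6.4 / 10.0)   # footnote 2's example: sd of 10, not 1 (~26%)

# Posterior odds that an observed event this extreme reflects a wrong model
# rather than an unlucky draw from the original model.
odds = (p_model_wrong * p_event_if_wrong) / ((1 - p_model_wrong) * p_event_if_right)
print(p_event_if_wrong, odds)
```

    The 'model wrong' explanation dominates the 'fluke draw' explanation by orders of magnitude.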

    Yet such insights can also be overplayed. The portfolio construction problem does not go away merely because the future is uncertain. Given a portfolio of assets, someone, ultimately, needs to choose how to invest these assets. Although it is arguably very sensible for them to bear in mind intrinsic limitations on what might be knowable about the future, they also need some framework for choosing between different ways of structuring the portfolio.

    This framework might be qualitatively formulated, perhaps as someone’s ‘gut feel’. Alternatively, it might be quantitatively structured, based on a more mathematical analysis of the problem at hand. It is not really the purpose of this book to argue between these two approaches. Indeed, we shall see later that the outcome of essentially any qualitative judgemental process can be reformulated as if it were coming from a mathematical model (and arguably vice versa).

    Perhaps the answer is to hold onto wealth lightly. All of us are mortal. The more religious among us, myself included, might warm to this philosophy. But again, such an answer primarily characterises a mindset to adopt, rather than providing specific analytical tools that we can apply to the problem at hand.

    1.4 Risk Budgeting

    Some practitioners point to the merits of risk budgeting. This involves identifying the total risk that we are prepared to run, identifying its decomposition between different parts of the investment process and altering this decomposition to maximise expected value-added for a given level of risk. It is a concept that has wide applicability and is difficult to fault. What business does not plan its evolution via forecasts, budgets and the like? Indeed, put like this risk budgeting can be seen to be common sense.
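
    To make the decomposition step concrete, the sketch below (illustrative two-asset figures, not from the book) splits portfolio volatility into per-asset risk contributions that sum back to the total, which is what allows a total risk budget to be carved up between positions.

```python
import math

# Hypothetical two-asset portfolio: weights and a covariance matrix of returns.
w = [0.6, 0.4]
cov = [[0.04, 0.01],
       [0.01, 0.09]]

sigma_w = [sum(cov[i][j] * w[j] for j in range(2)) for i in range(2)]  # Σw
vol_p = math.sqrt(sum(w[i] * sigma_w[i] for i in range(2)))            # sqrt(w'Σw)

# Each asset's contribution to portfolio volatility; the contributions sum to vol_p.
contrib = [w[i] * sigma_w[i] / vol_p for i in range(2)]
print(vol_p, contrib)
```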

    Again, though, we have here principally a language that we can use to describe how to apply investment principles. Risk budgeting principally inhabits the ‘mindset’ sphere rather than constituting an explicit practical toolset directly applicable to the problem at hand. This should not surprise us. Although sensible businesses clearly do use budgeting techniques to good effect, budgeting per se does not guarantee success. So it is with risk budgeting.³

    However, language is the medium through which we exchange ideas and so cannot be ignored. Throughout this book, we aim to explain emerging ideas using terms that can be traced back to risk budgeting concepts. This helps clarify the main aspects of the methodology under discussion. It also helps us understand what assumptions need to be made for the relevant methodology to be valid.

    1.5 Elements Designed to Maximise Benefit to Readers

    As explained in Section 1.1, this book aims to appeal to a wide variety of audiences. To do this, I have, as with my earlier book on Market Consistency, sought a suitable balance between mathematical depth and readability, to avoid some readers being overly daunted by unduly complicated mathematics. The book focuses on core principles and on illuminating them where appropriate with suitably pitched mathematics. Readers wanting a more detailed articulation of the underlying mathematics are directed towards the portfolio construction pages of the www.nematrian.com website, referred to throughout this book as Kemp (2010).

    To maximise the benefit that both practitioners and students can gain from this book, I include two sections at the end of each main chapter that provide:

    (a) Comments specifically focusing on the practitioner perspective. To navigate successfully around markets typically requires an enquiring yet somewhat sceptical mindset, questioning whether the perceived benefits put forward for some particular technique really are as strong as some might argue. So, these sections either focus on the ways that practitioners might be able to apply insights set out earlier in the relevant chapter in their day-to-day work, or highlight some of the practical strengths and weaknesses of techniques that might be missed in a purely theoretical discussion of their attributes.

    (b) A discussion of some of the more important implementation challenges that practitioners may face when trying to apply the techniques introduced in that chapter. Where the same challenge arises more than once, I generally discuss the topic at the first available opportunity, unless consideration of the challenge naturally fits better in a later chapter.

    The book also includes an Appendix containing some exercises for use by students and lecturers. Each main chapter of the book has associated exercises that further illustrate the topics discussed in that chapter. The exercises are reproduced with kind permission from Nematrian Limited. Hints and model solutions are available on the www.nematrian.com website, as are any analytical tools needed to solve the exercises.

    Throughout the book, I draw out principles (i.e., guidance, mainly for practitioners) that have relatively universal application. Within the text these principles are indented and shown in bold, and are referenced by P1, P2, etc.

    1.6 Book Structure

    The main title of this book is Extreme Events. It therefore seems appropriate to focus first, in Chapters 2 and 3, on fat tails and extreme events. We explore some of the ways in which fat-tailed behaviour can be analysed and the existence or otherwise of extreme events confirmed or rejected. We differentiate between analysis of fat tails in single return series in Chapter 2 and analysis of fat tails in joint (i.e., multiple) return series in Chapter 3. The shift from ‘one’ to ‘more than one’ significantly extends the nature of the problem.

    Before moving on to portfolio construction per se, we consider in Chapter 4 some ways in which we can identify what seems to be driving market behaviour. Without some underlying model of market behaviour, it is essentially impossible to assess the merits of different possible approaches to portfolio construction (or risk modelling). We consider tools such as principal components analysis and independent components analysis, and we highlight their links with other statistical and econometric tools such as multivariate regression.
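
    As a flavour of the first of these tools, the sketch below (hypothetical covariance figures) extracts the leading principal component of three return series by power iteration; its eigenvector can be read as the dominant common driver of the series, and its eigenvalue as the variance that driver explains.

```python
import math

# Hypothetical covariance matrix for three return series.
cov = [[0.040, 0.018, 0.012],
       [0.018, 0.090, 0.024],
       [0.012, 0.024, 0.060]]

# Power iteration: repeatedly apply the matrix and renormalise, converging on the
# eigenvector with the largest eigenvalue, i.e., the first principal component.
v = [1.0, 1.0, 1.0]
for _ in range(500):
    w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# Rayleigh quotient gives the corresponding eigenvalue (variance explained).
lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(3)) for i in range(3))
print(lam, v)
```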

    In Chapters 5–7 we turn our attention to the portfolio construction problem.

    Chapter 5 summarises the basic elements of portfolio construction, both from a quantitative and from a qualitative (i.e., ‘fundamental’) perspective, if fat tails are not present. At a suitably high level, both perspectives can be viewed as equivalent, apart perhaps from the mindset involved. In Chapter 5 we also explore some of the basic mathematical tools that commentators have developed to analyse the portfolio construction problem from a quantitative perspective. The focus here (and in Chapter 6) is on mean-variance portfolio optimisation (more specifically, mean-variance optimisation assuming time stationarity). We consider its application both in a single-period and in a multi-period world.
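
    The classic unconstrained single-period result underpinning these chapters is that mean-variance weights are proportional to Σ⁻¹μ. A minimal sketch with made-up two-asset inputs:

```python
# Illustrative inputs (assumed, not from the book): expected returns and covariances.
mu = [0.05, 0.08]
cov = [[0.040, 0.012],
       [0.012, 0.090]]

# Invert the 2x2 covariance matrix directly.
det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
inv = [[ cov[1][1] / det, -cov[0][1] / det],
       [-cov[1][0] / det,  cov[0][0] / det]]

raw = [sum(inv[i][j] * mu[j] for j in range(2)) for i in range(2)]  # Σ⁻¹μ
w = [x / sum(raw) for x in raw]  # rescaled so the weights sum to 1 (fully invested)
print(w)
```

    Even in this toy setting, small changes to mu or cov can move the weights materially, a sensitivity taken up in earnest in Chapter 6.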

    In Chapter 6 we highlight the sensitivity of the results of portfolio construction analyses to the input assumptions, and the tendency of portfolio optimisers to maximise ‘model error’ rather than ‘risk-return trade-off’. We explore ways of making the results more robust to errors affecting these input assumptions. The academic literature typically assumes that input assumptions are estimated in part from past data. We might argue that asset allocation is a forward-looking discipline, and that the assumptions we input into portfolio construction algorithms should properly reflect our views about what might happen in the future (rather than about what has happened in the past). However, some reference to past data nearly always arises in such analyses. We pay particular attention to Bayesian approaches in which we have some prior (‘intrinsic’) views about the answers or input parameters that might be ‘reasonable’ for the problem and we give partial weight to these alongside partial weight to external (often past) data. The best-known example of this is probably the Black-Litterman approach. Some Bayesian approaches can also be viewed as justifying heuristic⁴ techniques that can be applied to the portfolio construction problem. This again highlights the high-level equivalence that exists between quantitative and qualitative approaches to portfolio construction.
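
    The common thread of these Bayesian approaches can be caricatured in one line: blend a prior ('intrinsic') view with a data-based estimate in proportion to their precisions. A sketch with invented figures, conveying the flavour of, rather than the full, Black-Litterman machinery:

```python
# Assumed figures: an 'intrinsic' prior view of an asset's expected return,
# a noisier estimate from past data, and the uncertainty attached to each.
prior_mean, prior_var = 0.06, 0.02 ** 2
sample_mean, sample_var = 0.10, 0.05 ** 2

# Precision-weighted average: the posterior sits between prior and data,
# pulled towards whichever input we are more confident about.
w_prior = (1 / prior_var) / (1 / prior_var + 1 / sample_var)
posterior_mean = w_prior * prior_mean + (1 - w_prior) * sample_mean
print(w_prior, posterior_mean)
```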

    In Chapter 6 we also introduce ‘market consistent’ portfolio construction, in which we derive input assumptions not from past data but from market implied data. Such an approach is ‘robust’ in the sense that the input assumptions are in theory not subject to the same sorts of estimation errors as ones derived from historical behaviour. We also explore tools that practitioners less keen on Bayesian approaches have developed to tackle estimation error, particularly resampled portfolio optimisation. We show that they are less divorced from Bayesian approaches than might appear at first sight.

    In Chapter 7 we identify how to incorporate fat tails into portfolio construction theory. We start by exploring what happens when we relax the assumption of time stationarity, by introducing the concept of regime shifting. This involves assuming that the world is, at any point in time, in one of several possible states, characterised by different distributions of returns on the different assets and liabilities under consideration. The mixing of distributions introduced in such a model naturally leads to fat-tailed behaviour. We then extend these ideas to encompass more general ways of incorporating fat-tailed behaviour. We focus mainly but not exclusively on situations where the regime is characterised not by a single Normal distribution but by a distributional mixture of Normal distributions (because this type of model is sufficiently general that it can approximate other ways in which fat tails might arise). We also explore approaches that involve continuously varying parameterisations of the different regimes and focus on behaviour in continuous rather than discrete time.
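
    The fat-tail mechanism described here is easy to demonstrate numerically: although each regime on its own is Normal, mixing regimes produces positive excess kurtosis. A small simulation sketch with illustrative regime parameters:

```python
import random
import statistics

random.seed(0)

# Two-regime sketch: 'quiet' 90% of the time, 'turbulent' (triple the volatility) 10%.
draws = [random.gauss(0.0, 0.03 if random.random() < 0.9 else 0.09)
         for _ in range(100_000)]

m = statistics.fmean(draws)
s = statistics.pstdev(draws)

# Sample excess kurtosis: a single Normal scores 0; the mixture scores well above it.
excess_kurtosis = statistics.fmean([((x - m) / s) ** 4 for x in draws]) - 3.0
print(excess_kurtosis)
```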

    Chapters 2 to 7 are largely concerned with a probability-theoretic view of portfolio construction. In them, we identify, somehow or other, a distributional form to which we believe future outcomes will adhere (or more than one in the case of regime shifting). At least in principle, this involves specifying a likelihood of occurrence for any given future scenario. However, the belief that we can in all cases actually identify such likelihoods arguably involves an overly rosy view about our ability to understand the future. More to the point, regulators and other bodies who specify how financial services entities should operate may want to ensure that entities do not put too many of their eggs into what may prove an uncertain basket.

    In recent years this has led to an increased focus on stress testing, which we explore in Chapter 8. Stress testing, in this context, generally involves placing less emphasis on likelihood and more emphasis on magnitude (if large and adverse) and on what might make the scenario adverse. We can view ‘reverse stress testing’ and ‘testing to destruction’ as being at one extreme of this trend. In the latter, we hypothesise a scenario adverse enough to wipe out the business model of the firm in question (or the equivalent if we are considering a portfolio) irrespective of how likely or unlikely it might be to come to pass. We then focus on what might cause such a scenario to arise and whether there are any strategies that we can adopt that might mitigate these risks.

    Chapter 9 extends the ideas implicit in Chapter 8 to consider ‘really extreme’ events. It is more heuristic and ‘mindset-orientated’ than the rest of the book. This is inevitable. Such events will almost certainly be so rare that there will be little if any directly relevant data on them. Market implied portfolio construction techniques potentially take on added importance here. Merely because events are rare does not mean that others are not exposed to them too. The views of others, distilled into the market prices of such risks, may be subject to behavioural biases, but may still help us in our search to handle extreme events better.

    Finally, in Chapter 10 we collate and summarise in one place all the principles highlighted elsewhere in the book.

    ¹ By ‘multivariate’ we mean that the returns on different return series have a joint distribution, the characterisation of which includes not only how individual return series in isolation might behave, but also how they behave when considered in tandem, see Chapter 3. By ‘(log-) Normal’ we mean that the natural logarithm of 1 + r is Normally distributed, where the return, r, is expressed in fractional form, see Section 2.3.1.

    ² More precisely, in this situation we need the probability of occurrence of the event that we are considering to be much higher (on average) than 1 in 10,000 in the 1 in a million circumstances in which our underlying model proves to be fundamentally wrong. This, however, is typically what is implied by use of the term ‘fundamentally wrong’. For example, suppose that our model presupposes that possible outcomes are Normally distributed with zero mean and standard deviation of 1. Then the likelihood of an outcome worse than c. –6.4 is 1 in 10 billion. However, suppose that there is actually a one in a million chance that our model is ‘fundamentally wrong’ and that the standard deviation is not 1 but 10. Roughly 26% of outcomes when the standard deviation is 10 will involve an outcome worse than c. –6.4. So, in this instance an event this extreme is roughly 2,600 times as likely to be a result of our original model being ‘fundamentally wrong’ as it is to be a fluke draw from the original model.

    ³ Likewise, no portfolio construction technique is able to guarantee success.

    ⁴ In this context, a ‘heuristic’ technique is one that is akin to a rule of thumb that is not principally proposed on the basis of some formal mathematical justification but more because the approach is relatively convenient to implement.

    Chapter 2

    Fat Tails – In Single (i.e., Univariate) Return Series

    2.1 Introduction

    The 2007–09 credit crisis is a profound reminder that ‘extreme’ events, and particularly ‘black swans’ (i.e., those rare events that, until they occur, may have been thought essentially impossible), occur more frequently than we might expect, were they to be coming from the Normal distributions so loved by classical financial theory.

    In this chapter we first explore what we mean by an ‘extreme event’ and hence by a ‘fat-tailed’ distribution. We then explore the extent to which some financial series appear to exhibit fat-tailed behaviour. In later chapters we will reuse the methodologies that we develop in this chapter for analysing such behaviours, particularly when applied to the task of practical portfolio construction.
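
    One of the simplest diagnostics of this sort counts how often a series breaches a k standard deviation threshold and compares the count with the Normal benchmark. In the sketch below the ‘data’ is a simulated fat-tailed mixture standing in for a real return series:

```python
import math
import random
import statistics

def tail_counts(rets, k=3.0):
    """Observed vs Normal-expected number of |standardised observation| > k."""
    m, s = statistics.fmean(rets), statistics.pstdev(rets)
    observed = sum(1 for r in rets if abs(r - m) > k * s)
    expected = len(rets) * math.erfc(k / math.sqrt(2.0))  # 2 * P(Z > k)
    return observed, expected

random.seed(1)
# Simulated stand-in for a daily return series, deliberately fat-tailed.
series = [random.gauss(0.0, 0.01 if random.random() < 0.95 else 0.04)
          for _ in range(20_000)]
obs, exp = tail_counts(series)
print(obs, exp)  # many more 3-sigma breaches than Normality would predict
```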

    We focus in this chapter on univariate data series, e.g., the return series applicable to a single asset such as a bond, equity or currency or a single composite asset, such as an equity market or sector index. In Chapter 3 we focus on multivariate data, i.e., the combination of such series when viewed in tandem. The portfolio construction problem ultimately involves selecting between different assets. Therefore, in general it can only be tackled effectively when a full multivariate view is adopted.

    2.2 A Fat Tail Relative to What?

    If everyone agrees that extreme events occur rather more frequently than we might like, then why don’t we take more cognisance of the possibility of such events?

    This rather profound question is linked in part to behavioural biases that humans all too easily adopt. An event can only be classified as ‘extreme’ by comparison with other events, with which we might reasonably expect it to be comparable. A five-hour train journey might be deemed to have taken an ‘extremely long’ time if journeys between the same two stations usually take only ten minutes and have never before taken more than 20 minutes. Conversely, this journey might be deemed to be ‘extremely short’ relative to a transcontinental train journey that usually takes days to complete. Indeed, a journey might be deemed ‘extremely’ short and still take roughly its ‘expected’ time. A commentator might, for example, note that the expected time taken to get from an airport to a holiday resort is ‘extremely short’, because the resort is just round the corner from the airport, if most ‘equivalent’ resorts involve much longer transfer times.

    Principle P1: Events are only ‘extreme’ when measured against something else. Our innate behavioural biases about what constitute suitable comparators strongly influence our views about how ‘extreme’ an event actually is.

    In the context of finance and economics we usually (although not always) make such comparisons against corresponding past observations. For example, we might view a particular day’s movement in a market index as extreme relative to its movements on previous days or a recent quarterly economic decline (or growth) as extreme versus equivalent movements in previous quarters.

    However, we generally do not view ‘the past’ as one single monolithic dataset that implicitly weights every observation equally. Instead, most of us, deep down, believe (or want to believe) in ‘progress’. We typically place greater weight on more recent past observations. We usually think that the recent past provides ‘more relevant’ comparative information in relation to current circumstances. Even if we do not consciously adopt this stance our innate behavioural biases and learning reflexes often result in us doing so anyway. ‘Recent’ here needs to be sufficiently recent in the context of a typical human lifetime, or maybe merely a human career span, for the generality of informed commentators to incorporate the relevant past data observation into the set of observations that they use (implicitly or otherwise) to work out what spread of outcomes is ‘to be expected’ for a given event.

    But what happens if the nature of the world changes through time? The ‘recent’ past may no longer then be an appropriate anchor to use to form an a priori guess as to what spread of outcomes might be ‘expected’ at the present juncture. An event may be ‘exceptional’ relative to one (relatively recent) past time span but be less exceptional in the context of a longer past time span.

    Figure 2.1 illustrates some issues that such comparisons can raise. This chart shows the behaviour of the spread (i.e., difference) between two different types of money market (annualised) interest rates. Both rates relate to 1 month interest rates: one relates to unsecured interbank lending (Euribor) and the other relates to secured lending in which the borrower posts collateral in the form of Euro denominated government debt (Eurepo). The difference between the two can be viewed as a measure of the potential likelihood of the borrowing bank defaulting within a 1 month horizon.

    Figure 2.1 Spread between 1 month Eurepo and Euribor interest rates

    Source: Nematrian. © Nematrian. Reproduced by permission of Nematrian

    Until July 2007, the relationship between the two interest rates appeared to be very stable (and the spread very small). In preceding months, market participants believed that banks active in the interbank market were generally very sound and unlikely to default in the near future. A spread between these two lending rates of even, say, 0.1% pa would have seemed very large, based on the (then) recent history. However, with the benefit of hindsight we can see that much larger spreads still were to become commonplace. In the latter part of the period covered by this chart, a spread of 0.1% pa would have seemed very small! As noted in the Preface, the breakdown of previously ruling relationships such as these in the money markets in late July and early August 2007 marked the start of the credit crisis. What appeared to be ‘normal’ before the credit crisis no longer appeared ‘normal’ (or at least ‘typical’) once it had started.

    Economists and quantitative investors and researchers have a particular name for comparisons of the same underlying object through time: they are called longitudinal comparisons. Such comparisons may be contrasted with cross-sectional comparisons, in which we, say, compare the returns on different securities over the same time period. A particular security’s return on a particular day might, for example, be deemed extremely large relative to the spread of returns that other ‘equivalent’ securities achieved on the same day.

    With a cross-sectional comparison, the need for some sort of ‘equivalence’ between the securities being considered is self evident. Otherwise the comparison may be viewed as spurious. For example, we do not ‘expect’ the spread of returns exhibited by different equities to be anything like as narrow as the spread of returns on less volatile asset categories, such as cash, because we do not view equities and cash as ‘equivalent’ in this context. We also come with less of a preconception that extreme returns might be very rare. Given a large enough universe, we naturally expect to see some outliers. For example, with equities, we ‘expect’ exceptional events, such as takeovers and bankruptcies, to occur from time to time. We might ‘expect’ events like these, which generate extreme outliers, only to affect any given individual security rarely. However, given a large enough sample of such securities exposed to the same sorts of outliers, such events are common enough to remind us that very large positive or negative returns do affect individual securities from time to time, and might therefore show up as outliers in cross-sectional comparisons.

    The need for a corresponding underlying ‘equivalence’ between observations also arises with longitudinal comparisons, but it is easier to forget that this is the case. In particular we need this period’s observation to be ‘comparable’ with the corresponding past observations against which it is to be compared.

    Where the data is immediately perceived not to be comparable then we readily adjust for this and discount ‘spurious’ conclusions that we might otherwise draw from such comparisons. For example, we do not normally place much emphasis on a grown man’s height being large relative to his height as a baby. Instead we view such a progression as being part of the natural order.¹ But what if the lack of comparability is less obvious?

    Our understanding of what constitutes an extreme event (in relation to comparisons through time) is heavily coloured by an implicit assumption of time stationarity (or by a rejection of such an assumption). By this we mean the assumption that the distribution from which the observation in question is being drawn does not change through time. Implicitly, time stationarity (or lack of it) has two parts, namely:

    (a) A part relating to the underlying nature of the world (including how it is viewed by others). Usually this is what we are trying to understand better.

    (b) A part relating to the way in which we may be observing the world. Ideally, we want this not to cloud our understanding of point (a). However, to assume that it is not present at all may overstate our own ability to avoid human behavioural biases.

    Differentiating between these two elements is not always straightforward, and is further compounded by market prices being set by the interaction of investor opinions as well as by more fundamental economic drivers.

    Consider, for example, a company that claims to have radically changed its business model and to have moved into an entirely new industry. We might then notice a clear lack of time stationarity in the observed behaviour of its share price. How relevant should we expect its past behaviour or balance sheet characteristics to be to its changing fortunes going forwards in its new industry? Perhaps there would be some relevance if its management team, its corporate culture and its behavioural stances have not altered much, but data more generally applicable to its new industry/business model might be viewed as more relevant. Conversely, sceptics might question whether the company really has moved into a new industry. It might, wittingly or unwittingly, merely be presenting itself as having done so.

    Similar shifting sands also affect the overall market. Every so often, commentators seem to focus on new paradigms, in which it is claimed that the market as a whole has shifted in a new direction. Only some of these turn out to be truly new paradigms with the benefit of hindsight.

    We will find echoes of all these issues in how we should best cater for extreme events in practical portfolio construction. The difference between effects arising from the underlying nature of the world and arising from how we observe the world can be particularly important in times of change. When we observe new information that seems to invalidate our earlier expectations, we will need to form a judgement about whether the new ‘information’ really reflects a change in the underlying nature of the world, or whether it merely reflects inadequacies in the way in which we have previously been observing the world. Wise investors have always appreciated that their understanding of how the world operates will be incomplete and that they need to learn from experience. Even wiser investors will appreciate that their learning may never be complete, a topic that we return to in Chapter 9.

    Principle P2: The world in which we live changes through time. Our perception of it also changes, but not necessarily at exactly the same time.

    Mathematically, we might formalise the definition of an ‘extreme event’ as one where the probability of occurrence of an event this extreme is sufficiently small given our ‘model’, i.e., a probability distribution P(X) characterising how we think the world works: for downside extreme events, P(X ≤ x) < α for some given threshold x and some sufficiently small positive α, and likewise P(X ≥ x) < α for upside extreme events. ‘Fat-tailed’ behaviour cannot typically be identified just from one single event. Rather, it is to do with the apparent probabilities of occurrence of X (which will in general involve an amalgam of intuition and observational data) being such as to involve a higher frequency of extreme events than would arise with a Normal distribution with the same mean and standard deviation (if they exist) as the distribution generating the observations.
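    As an illustrative check of this definition (a sketch, not taken from the text: the Student-t distribution with 4 degrees of freedom is merely a convenient stand-in for a fat-tailed return distribution), we can compare the empirical frequency of returns beyond ±3 standard deviations with the frequency that a Normal distribution with the same mean and standard deviation would imply:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical fat-tailed return series: Student-t with 4 degrees of
# freedom, rescaled so that its variance is 1.
df = 4
returns = rng.standard_t(df, size=100_000) / np.sqrt(df / (df - 2))

# Empirical frequency of 'extreme' events beyond 3 standard deviations...
threshold = 3.0
empirical = np.mean(np.abs(returns) > threshold)

# ...versus the frequency implied by a Normal distribution with the same
# mean and standard deviation.
normal = 2 * stats.norm.sf(threshold)

print(f"empirical: {empirical:.4%}, Normal: {normal:.4%}")
```

    A ratio of the two frequencies materially above one is precisely what, under the definition above, characterises fat-tailed behaviour.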

    2.3 Empirical Examples of Fat-Tailed Behaviour in Return Series

    2.3.1 Introduction

    In this section we explore some of the methodologies that can be used to tell whether return series appear to exhibit fat tails. We consider various ways of visualising the shape of the distributional form² and we explore some of the stylised ‘facts’ that are generally held to apply to investment returns in practice.

    Throughout this section we assume that the world exhibits time stationarity (see Section 2.2). Given this assumption, a distribution is ‘fat-tailed’ if extreme outcomes seem to occur more frequently than would be expected were returns to be coming from a (log-) Normal distribution. The rationale for focusing on log-Normal rather than Normal distributions is that returns, i.e., r(t), compound through time and thus log returns, i.e., log(1 + r(t)), add through time.
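    The additivity of log returns can be verified directly (a minimal sketch; the three period returns are invented for illustration):

```python
import numpy as np

# Hypothetical returns for three successive periods.
r = np.array([0.05, -0.02, 0.03])

# Returns compound multiplicatively through time...
compound = np.prod(1 + r) - 1

# ...whereas log returns, log(1 + r(t)), simply add.
log_returns = np.log(1 + r)
assert np.isclose(np.log(1 + compound), log_returns.sum())

print(f"compound return over the three periods: {compound:.4%}")
```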

    2.3.2 Visualising fat tails

    Perhaps the most common way of visualising a continuous distributional form is to plot its probability density function (pdf). Such a chart can be thought of as the continuous limit of a traditional histogram chart. It indicates the probability of a given outcome being within a certain small range, scaled by the size of that range. For a discrete distributional form the equivalent is its probability mass function, which indicates the likelihood of occurrence of each possible outcome, again typically with the outcomes arranged along the horizontal (i.e., x) axis and the probabilities of these occurrences along the vertical (i.e., y) axis.

    A chart directly plotting the pdf immediately provides a visual indication of the relative likelihood of a given outcome falling into one of two different equally-sized possible (small) ranges. The ratio between these probabilities is the ratio of the heights of the line within such a chart.

    Directly plotting the entire pdf is not the ideal visualisation approach from our perspective. The ‘scale’ of such a chart (here meaning what might be the largest viewable y-value) is dominated by differences in the likelihood of the most likely outcomes. It is difficult to distinguish the likelihoods of occurrence of unlikely events.

    We can see this visualisation problem by viewing in the same chart the pdf of a (log) return series that is Normally distributed³ and the equivalent pdf of a (log) return series with the same mean and standard deviation but which is (two-sided) fat-tailed and thus has more outliers (at both ends of the distribution); see Figure 2.2.⁴ The main visual difference between the two charts is in the centre of the distribution (with the example fat-tailed distribution appearing more peaked there⁵). It is possible to tell that the fat-tailed distribution also has greater mass in the tails (e.g. beyond, say, ±3 in Figure 2.2), but this feature is not as obvious, because visually it is less marked. It only becomes more obvious if we zoom in on the tail, e.g., as in Figure 2.3.

    Figure 2.2 Illustrative probability density function plot

    Source: Nematrian. © Nematrian. Reproduced by permission of Nematrian

    Figure 2.3 Illustrative probability density function plot as per Figure 2.2, but zooming in on just the part of the lower tail of the distribution between x = −6 and x = −2

    Source: Nematrian. © Nematrian. Reproduced by permission of Nematrian
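    The visual point made by Figures 2.2 and 2.3 (a more peaked centre, and tail densities that are both tiny yet very different) can be confirmed numerically. The sketch below again uses a Student-t distribution with 4 degrees of freedom, rescaled to unit variance, as a hypothetical fat-tailed distribution; it is not the distribution actually plotted in the figures:

```python
from math import sqrt

from scipy import stats

df = 4
scale = 1 / sqrt(df / (df - 2))  # rescale t(4) to unit variance

# At the centre the fat-tailed density is noticeably more peaked...
centre_t = stats.t.pdf(0.0, df, scale=scale)
centre_n = stats.norm.pdf(0.0)

# ...while at -4 (i.e., well into the lower tail) it is many times
# heavier, even though both densities are small in absolute terms,
# which is why the difference is hard to see without zooming in.
tail_t = stats.t.pdf(-4.0, df, scale=scale)
tail_n = stats.norm.pdf(-4.0)

print(f"centre ratio (fat-tailed / Normal): {centre_t / centre_n:.2f}")
print(f"tail ratio   (fat-tailed / Normal): {tail_t / tail_n:.1f}")
```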

    A mathematically equivalent⁶ way of describing a (continuous) probability distribution is to plot its cumulative distribution function (cdf), i.e., the likelihood of the outcome not exceeding a certain value, as shown in Figure 2.4. This approach presents a similar visualisation challenge. Its ‘scale’ is dominated by the height of the cdf at its right hand end, i.e., by unity. The differential behaviour in the tail is again not immediately striking to the eye. It has the further disadvantage that many people have an intuitive feel for the bell-shaped curve applicable to a Normal pdf, but have less of an intuitive feel for the shape of the corresponding cdf, potentially making it harder for them to spot ‘significant’ deviation from Normality.

    Figure 2.4 Illustrative cumulative probability distribution plot

    Source: Nematrian. © Nematrian. Reproduced by permission of Nematrian

    A more helpful visualisation approach when analysing fat tails is a quantile–quantile plot (‘QQ-plot’) as shown in Figure 2.5. This illustrates the return outcome (i.e., ‘quantile’) associated with a given (cumulative) probability level, plotted against the corresponding return outcome applicable to a (log-) Normal distribution with the same mean and standard deviation as the original distribution. In it, a (log-) Normally distributed return series would be characterised by a straight line, while (two-sided) fat-tailed behaviour shows up as a curve that is below this straight line at the bottom left hand end of the curve and above it at the top right hand end of the curve. In practice, its ‘scale’ characteristics are driven by the extent to which distributions have different quantile behaviours in the ‘tails’ of the distribution. This is in contrast with plots of pdfs which, as we can see from Figure 2.2, largely focus on differences in the centre of the distribution.

    Figure 2.5 Illustrative quantile–quantile plot

    Source: Nematrian. © Nematrian. Reproduced by permission of Nematrian

    Of the three graphical representations described above, the one in Figure 2.5 (the QQ-plot) is the easiest one in which to see visually the extent of any fat-tailed behaviour in the extremities. It is the visualisation approach that we concentrate on in this section.
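    The coordinates of such a QQ-plot can be computed directly. In this sketch the simulated Student-t return series is an invented stand-in for real data; the empirical quantiles (y-coordinates) are set against those of a Normal distribution with the same mean and standard deviation (x-coordinates):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df = 4
returns = rng.standard_t(df, size=100_000) / np.sqrt(df / (df - 2))

# y-coordinates: empirical quantiles of the observed returns.
probs = np.linspace(0.001, 0.999, 99)
empirical_q = np.quantile(returns, probs)

# x-coordinates: quantiles of a Normal distribution with the same
# mean and standard deviation.
normal_q = stats.norm.ppf(probs, loc=returns.mean(), scale=returns.std())

# Two-sided fat tails: the curve sits below the straight line at the
# bottom left hand end and above it at the top right hand end.
print(f"lower tail: empirical {empirical_q[0]:.2f} vs Normal {normal_q[0]:.2f}")
print(f"upper tail: empirical {empirical_q[-1]:.2f} vs Normal {normal_q[-1]:.2f}")
```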

    QQ-plots such as these have a natural interpretation in the context of Value-at-Risk (VaR). This is a forward-looking risk measure commonly used in many parts of the financial community. VaR is an enticingly simple concept and therefore relatively easy to explain to laypeople. It involves a specification of a confidence level, say 95%, 99% or 99.5%, and a time period, say 1 day, 10 days, 1 month or 1 year. If we talk about a fund having a 10-day 99% confidence VaR of, say, X then we mean that there is⁷ only a 1% chance of losing more than X over the next 10 days, if the same positions are held for this 10 day time frame.⁸ The VaR at any given confidence level can be read off such a quantile–quantile chart by using as the x-coordinate the relevant VaR level that would have applied to a (log-) Normally distributed variable. Incidentally, the other forward-looking risk measure one often comes across in the financial community, namely (ex-ante) tracking error,⁹ can also be inferred from such plots, because it drives the scale used to plot the x-axis.
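    A historical-simulation estimate of VaR can be read straight from the empirical quantiles of a return series. The sketch below uses invented daily returns, and the square-root-of-time scaling to a 10-day horizon is a common rule of thumb (not from the text) that assumes independent returns and is itself questionable for fat-tailed series:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical series of 1,000 daily portfolio returns.
daily = rng.standard_t(4, size=1_000) * 0.01

# 1-day 99% VaR: the loss exceeded on only 1% of days, i.e., the
# 1st percentile of the return distribution expressed as a loss.
var_99 = -np.quantile(daily, 0.01)
print(f"1-day 99% VaR: {var_99:.2%} of portfolio value")

# Crude scaling to a 10-day horizon (assumes i.i.d. returns).
var_99_10d = var_99 * np.sqrt(10)
print(f"10-day 99% VaR (sqrt-time scaling): {var_99_10d:.2%}")
```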

    Are quantile–quantile plots the best possible visualisation approach to adopt for this purpose? This question is not an easy one to answer. Different types of chart implicitly assign different levels of importance, i.e., weights, to different ways in which distributional forms might differ. Our eyes will implicitly deem large displacements within a given chart type of greater ‘importance’ than small or difficult to spot displacements. Our eyes typically process such data as if distance along a chart scale is the correct way to distinguish things of note. Thus the ‘scale’ characteristics of the chart methodology directly feed through to the importance that we will implicitly give to different aspects of the distributional form.

    QQ-plots give much greater visual weight to outliers than probability density or cumulative distribution plots. However, this could result in them giving too much weight to what ultimately are still unlikely occurrences. From a return-seeking perspective, the mean drift of the return series may be the feature that is of most importance. Focusing too much on tail behaviour may distract from more important matters, especially because not all fat tails are bad in this context, only downside fat tails. Upside fat tails typically correspond to particularly favourable outcomes!

    Conversely, even QQ-plots might not give sufficient weight to downside risk in all circumstances. As noted above, quantiles are closely associated with Value-at-Risk. However, this type of risk measure can be criticised because it effectively ascribes the same downside whatever happens if a given cut-off threshold is met, even though the worse the loss the greater is the downside for the party who ultimately bears this loss. Kemp (2009) points out that this makes Value-at-Risk an intrinsically shareholder focused risk measure for financial firms. Shareholders are largely indifferent to how spectacularly a financial firm might default. Given the limited liability structure adopted by almost all modern financial firms, once a firm has defaulted its shareholders have largely lost whatever economic interest they might previously have had in the firm. Conversely, regulators, customers and governments may have a greater interest in ‘beyond default’ tail risk, if they are the parties that bear the losses beyond this cut-off. For such parties, risk measures such as tail VaR (TVaR) – also called conditional VaR (CVaR) or Expected Shortfall – i.e., the expected loss conditional on the loss being worse than a certain level, could potentially be more appropriate risk measures to use.¹⁰ There are also more technical reasons why TVaR might be preferable to VaR.¹¹
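    The distinction between VaR and TVaR is easy to make concrete (a sketch on simulated fat-tailed returns; neither the distribution nor its parameters come from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.standard_t(4, size=100_000) * 0.01
losses = -returns

alpha = 0.99
# VaR: the loss threshold breached with probability 1 - alpha...
var_level = np.quantile(losses, alpha)

# ...TVaR (CVaR / Expected Shortfall): the average loss conditional on
# the loss being at least as bad as that threshold.
tvar = losses[losses >= var_level].mean()

print(f"99% VaR:  {var_level:.2%}")
print(f"99% TVaR: {tvar:.2%}")
```

    TVaR is always at least as large as VaR at the same confidence level, since it averages over the losses beyond the VaR threshold rather than ignoring how bad they are.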

    Principle P3: The ways in which we visualise data will influence the importance that we place on different characteristics associated with this data. To analyse extreme events, it helps to use methodologies such as quantile–quantile plots that highlight such occurrences. However, we should be aware that they can at times encourage us to focus too much on fat-tailed behaviour, and at other times to focus too little on it.

    The practical impact of such subtleties can depend heavily on the behaviour of the distribution in the far tail. This takes us into so-called ‘extreme value theory’ (see Section 2.9). To visualise such downsides, we could use a visualisation approach similar to QQ-plots, but with the y-axis showing (downside) tail-VaRs rather than quantiles (i.e., VaRs).

    Figure 2.6 shows such an analysis for the same example fat-tailed distribution we analysed in Figures 2.2–2.5. In the downside tail, this chart appears qualitatively (i.e., asymptotically) similar to the shape of the corresponding quantile–quantile plot shown in Figure 2.5. This is true for any probability distribution where the pdf falls off steeply enough in the tail. For these sorts of probability distributions, most observations further into the tail than a given quantile point are not ‘much’ further into the tail.

    Figure 2.6 Illustrative TVAR versus quantile plot

    Source: Nematrian. © Nematrian. Reproduced by permission of Nematrian

    However, this similarity is not the case for all probability distributions. In Section 2.8 we will discuss stable distributions. Some of these are so heavy-tailed that their TVaRs are infinite! More practically, we might be analysing possible movements in the goodwill element in a firm value calculation. These may decline catastrophically in the event of a firm defaulting. The behaviour of the TVaR might then be more akin to one of the other two lines shown in Figure 2.6, which assume that there is a small probability of a sudden decline in value (but no further decline in worse situations), with smaller declines and rises following a Normal distribution.¹²

    2.3.3 Behaviour of individual bonds and bond indices

    Certain types of instrument, including many types of bonds, naturally exhibit fat-tailed characteristics. High quality bonds typically rarely default. When they do, however, losses on them can be very substantial. A typical market assumption is that the average recovery, i.e., the average payment as a proportion of par value on a bond that has defaulted, is roughly 40%. This does not mean that the average market value decline shortly before default is circa 60%; an impaired bond often trades at well below par value. Even so, market declines as and when a bond does actually default can be substantial, particularly if the default was not widely expected prior to it actually happening.
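    A toy simulation illustrates why even a rarely defaulting bond is intrinsically fat-tailed. All figures here (the default probability, the 40% recovery applied crudely to par, and the coupon) are illustrative assumptions, and the model deliberately ignores the pre-default price erosion noted above:

```python
import numpy as np

rng = np.random.default_rng(3)

p_default = 0.005   # 0.5% annual default probability (illustrative)
recovery = 0.40     # recovery as a proportion of par on default
coupon = 0.04       # return earned in a no-default year (illustrative)

# Simulate one year's return on many independent such bonds: a small
# coupon gain almost always, a crude -(1 - recovery) loss on default.
n = 1_000_000
defaults = rng.random(n) < p_default
returns = np.where(defaults, -(1.0 - recovery), coupon)

print(f"mean return:  {returns.mean():.3%}")
print(f"worst return: {returns.min():.0%}")
```

    The mean return sits only modestly below the coupon, yet the worst outcome is a 60% loss: a heavily skewed distribution with a fat left tail.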

    We might initially expect well diversified bond indices to exhibit less fat-tailed behaviour than individual bonds (particularly individual bonds where there is a material likelihood of default). This is indeed normally the case, at least for corporate bond indices. However, in practice there are other factors that come into play at a bond index level.¹³

    Many common bond indices relate to the bonds issued by a single government. Inclusion of many different bonds within them does not therefore diversify away issuer risk (to the extent that there is deemed to be any such risk). The behaviour of these types of bonds is more normally viewed as driven principally by yield curve dynamics. Factors influencing these dynamics include supply and demand for the relevant government’s bonds and the perceived trade-off between consumption and saving through
