Low Volatility Investing: The Behavioural Underpinning
Low volatility investing, through smart beta vehicles, is becoming increasingly popular. Last month, we looked at how the low volatility strategy has performed over time. We found that the return premium between low and high volatility stocks is mainly caused by the poor performance of high volatility stocks. This month, we will expand the analysis by measuring risk adjusted performance of the low volatility strategy and discuss one of the underlying behavioural drivers of the low volatility premium.
Risk Adjusted Performance
A common measure of risk adjusted performance is the Sharpe ratio, which is calculated by subtracting a risk free rate from the asset's return and then dividing by risk (the standard deviation of returns). We assume a risk free rate of 4.9% to calculate the Sharpe ratios below. Although this seems a high rate by the standards of the last decade, it is the average yield of 3 month Treasury Bills over the sample period. The chart below shows the risk adjusted performance of US stocks, ranked in deciles based on their volatility.
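The calculation described above can be sketched in a few lines of Python. The return series here is purely illustrative, not actual decile data; the 4.9% risk-free rate is the one quoted in the text.

```python
from statistics import mean, stdev

def sharpe_ratio(annual_returns, risk_free_rate=0.049):
    """Average excess return over the risk-free rate, divided by the
    standard deviation of returns."""
    return (mean(annual_returns) - risk_free_rate) / stdev(annual_returns)

# Illustrative return series only, not actual decile data
example_returns = [0.08, 0.05, 0.10, 0.07, 0.06]
ratio = sharpe_ratio(example_returns)
```

A higher assumed risk-free rate mechanically lowers every decile's Sharpe ratio, which is why the choice of the 4.9% historical average matters for the comparison.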
Figure 1: Data taken from the Kenneth R. French Data Library
Although the lowest volatility (decile 1) stocks have the lowest risk, they still do not provide the highest risk adjusted performance, as their absolute performance is too low. The highest risk adjusted performance is actually provided by decile 3 stocks, which offer an attractive mix of high return and low risk. The worst risk adjusted performance comes from decile 10 stocks. Not only do these stocks have low absolute returns, but the standard deviation of their returns is also very high.
It appears that high volatility stocks have both low absolute returns and high levels of risk. This is at odds with traditional investment theory, which stresses a positive trade-off between investment returns and risk.
Behavioural finance does have an explanation for the overpricing of high volatility stocks, in the form of the probability weighting function. The graph below shows the function, with the dotted green line representing the weight a rational investor would assign to various probabilities: the subjective probability assigned to events is equal to the objective probability. The solid blue line represents the weight that investors actually assign to probabilities. We see that for low probability events (p<0.4), investors typically assign a higher probability to these events occurring than is warranted by the objective probability, as the solid blue line sits above the dotted green line.
Figure 2: Taken from Burns et al. (2010), Overweighting of Small Probabilities, Wiley Encyclopedia of Operations Research and Management Science
The probability weighting function can explain why people take part in lotteries, despite the expected payoff being extremely poor. The objective probability of winning the jackpot is small, but people assign a higher subjective probability of winning than is warranted.
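This intuition can be illustrated with the parametric weighting function of Tversky and Kahneman (1992), a common functional form for curves like the one shown above. The gamma parameter below is their published estimate and is an assumption on our part, not a figure taken from this article or from Burns et al.

```python
def weight(p, gamma=0.61):
    """Subjective decision weight assigned to objective probability p,
    using the Tversky-Kahneman (1992) one-parameter form."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Small probabilities receive more weight than they objectively deserve,
# while moderate-to-large probabilities receive less
small = weight(0.01)   # noticeably above the objective 0.01
large = weight(0.50)   # below the objective 0.50
```

Run on a 1% chance, the function returns a decision weight of roughly 5%, which is exactly the lottery-style overweighting the text describes.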
The same principle can be applied to high volatility stocks. Often these stocks are very high risk businesses, operating with new and innovative business models. The failure rate of these businesses is high, but a small number of them will make a lot of money. The probability weighting function implies that investors will tend to overpay for such stocks, in the same way that they are attracted to lottery style payoffs, as they believe the probability of success is greater than is warranted.
High volatility stocks are a poor investment on average, both in terms of absolute returns and risk adjusted returns. Low volatility smart beta products do avoid these stocks, but the highest risk adjusted returns are generally found in the middle of the volatility spectrum.
There could be many reasons for the overpricing of high volatility stocks, but behavioural explanations are a strong contender. The overweighting of small probability events, such as the chance of earning high returns on glamorous high-risk businesses, ensures that high volatility stocks are overpriced and offer poor subsequent returns for investors.
Chris Riley, RSMR September 2016
A Closer Look at the Low Volatility Factor
The low volatility factor has become one of the most popular smart beta strategies, following the financial crisis of 2008. Investors believe that low volatility stocks can potentially provide market beating levels of return, at a lower level of risk than the market as a whole.
Given the popularity of the low volatility strategy, we will look at the historical returns in more depth by examining returns across volatility deciles. The results are surprising and show that the relationship between volatility and returns is anything but linear. After that, we will examine the performance of the low volatility factor post-2008, to see if its popularity has changed the return profile.
Returns of Volatility Deciles
The graph below shows the performance of US stock deciles sorted by historical return volatility. Volatility decile 1 represents the lowest volatility stocks that would commonly be purchased in a long-only, low volatility factor strategy. Decile 10 represents the highest volatility stocks that would be avoided in a long-only strategy, or shorted in a long-short strategy. The blue line represents the annual performance of the respective deciles and the orange line represents the average performance of the 10 deciles in total. If there were a linear relationship between volatility and return, we would expect the blue line to slope steadily downwards from decile 1 to decile 10.
Figure 1: Data taken from the Kenneth R. French Data Library
Contrary to popular belief, we do not see a straight line relationship between volatility and return. Although the highest volatility shares (decile 10) have a lower return than the lowest volatility shares (decile 1), the returns of both of these deciles are below the average for stocks as a whole. The highest return actually comes from stocks of relatively high volatility (decile 8). In general, the relationship between volatility and return looks more like an n-shaped non-linear relationship than a straight line. Returns are highest for stocks with average to above average levels of volatility and drop off for stocks with very low or very high volatility.
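The decile construction behind charts like this can be sketched as follows. This is an illustrative outline under simple assumptions, not the exact methodology of the French data library: each stock is represented by a series of periodic returns, stocks are ranked by the standard deviation of those returns, split into 10 equal buckets, and the average return of each bucket is reported.

```python
from statistics import mean, stdev

def volatility_deciles(stock_returns):
    """stock_returns: dict mapping ticker -> list of periodic returns.
    Returns the average return of each of 10 volatility-sorted buckets,
    from lowest volatility (decile 1) to highest (decile 10)."""
    ranked = sorted(stock_returns, key=lambda t: stdev(stock_returns[t]))
    n = len(ranked)
    buckets = [ranked[i * n // 10:(i + 1) * n // 10] for i in range(10)]
    return [mean(mean(stock_returns[t]) for t in b) for b in buckets]
```

With at least 10 stocks supplied, the first element of the result corresponds to the low volatility decile discussed above and the last to the high volatility decile.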
Performance Post 2008
The chart below shows the performance of volatility deciles for the shorter period of 2009-2015. This is to account for the large inflows into low volatility strategies following the financial crisis of 2008, and to see if these flows have changed the return pattern. The average line for this period is higher than in the first chart, with average returns running at around 15%.
Figure 2: Data taken from the Kenneth R. French Data Library
The relationship between volatility and returns post 2008 has not been in line with what was anticipated. The return on low volatility stocks (decile 1) has been the lowest of all the deciles. These are the stocks that would typically be purchased by a low volatility factor product. The return on the highest volatility stocks (decile 10) has also been below average. In general, there appears to be more of a straight linear relationship between volatility and returns post 2008.
The long-term relationship between volatility and returns is more complex than is presumed. There seems to be little premium for holding low volatility stocks and there is not a straight line relationship between volatility and returns. There has been a high return penalty for holding high volatility stocks in the pre-2009 period. This suggests the low volatility factor would be exploited most efficiently using a long-short strategy, which allows the manager to short the very highest volatility stocks.
Looking at more recent returns, we see that the penalty for holding high volatility stocks has been reduced since 2008. Although the reason cannot be proven, many will suspect that investors' aversion to holding high volatility stocks since 2008 has depressed their prices and improved their subsequent returns.
It is also worth bearing in mind that the risk adjusted performance of low volatility stocks will be very much stronger than raw performance. This is because the lowest volatility stocks have the lowest volatility by definition. If the investor is more focussed on risk adjusted performance than on returns alone, then a low volatility strategy may still be appropriate for the lower risk that it provides.
Chris Riley, RSMR August 2016
Smart Beta: Historical Factor Performance
One issue with the smart beta concept is the sheer number of products available in the market place. For some clients a multi-factor product will be a good option, providing exposure to multiple factors in a diversified portfolio. Other clients will be interested in gaining exposure to specific factors, such as low volatility or high quality. For these clients, it would be useful to provide some insight into the historical returns of each factor. In this month’s article, we will examine the short-term and long-term performance of 6 common factors used in smart beta products.
Performance of Factor Tilts
In the chart below we show the short-term and long-term performance of the factor tilts, with performance data stretching back 50 years and 10 years from the end of 2015. The returns are simulated by investing in the top decile of stocks in the US market sorted by the appropriate factor tilt. Factor definitions were taken from the Kenneth R. French website and are commonly used proxies to form style portfolios. For example, the size factor was defined as low market capitalisation and the value factor as high book value to market value.
Figure 1: Data taken from the Kenneth R. French Data Library
The best performing factors over the 50 year period have been the traditional factor tilts: size, value and momentum. The other factors, which have only been discovered in more recent times, do not have as strong a track record over the long term. This may explain why they were not initially considered to be factors.
Over the last 10 years, factor returns are down relative to 50 year returns across all strategies. This is most likely due to the factors becoming more widely recognised by investors and exploited over the sample period. In this regard, the launch of smart beta products to gain exposure to these factors is likely to erode factor returns further going forward. The top performing factor was value, with the quality factor a close second. The worst performing style tilt was high dividend yield.
The low volatility factor has become popular in recent years, as risk aversion increased following the Global Financial Crisis of 2008, but its long-term and short-term performance record is not great relative to the other factors. A more in-depth look at the pattern of returns suggests that the highest performing stocks are in the middle of the volatility range, with both high and low volatility stocks underperforming stocks of average volatility.
One thing to bear in mind is that factors in smart beta products may be defined by different metrics to the ones used here. For example, we have defined the value tilt as high book value to market capitalisation. Some smart beta products may define value as another related metric, such as the price to earnings ratio. This may change the result somewhat, although different measures of value should be correlated with each other.
Quality and low volatility have been popular style tilts since the Global Financial Crisis of 2008, but these have not been the best performing factors in the long-run.
It is noticeable that over the last 10 years, factor returns have been lower than the past across all strategies. This should be borne in mind when looking at historical back testing of factor performance. As further capital enters the space, it is likely to drive down factor returns even further.
Another noticeable feature is that factor returns have come broadly into line with each other over the last 10 years. Factor timing and selection is likely to be a difficult process that is not necessarily rewarded with higher returns. This strengthens the argument for smart beta products that provide diversified factor exposure.
Chris Riley, RSMR July 2016
Smart Beta ETFs
Smart beta ETFs have become more popular in recent years and advisers may be wondering if these products are a good investment. There are many products now available with the smart beta label, each with differing characteristics.
One common feature of smart beta ETFs is that they typically offer exposure to a single factor. Common examples of factors would be value (targeting cheap stocks), size (targeting small stocks) or low volatility (targeting safe stocks). More complex ETFs, which provide exposure to multiple factors, are also available and potentially add additional value over single factor products.
Single Versus Multi Factor Performance
The table below looks at the performance of 3 common style tilts available through single factor ETFs: value, size and quality. Value is defined as the top decile of stocks ranked by the book-equity to market-value ratio. Cheap stocks have a low market value relative to accounting book equity.
The size tilt is measured as the lowest decile of stocks ranked by market capitalisation. The smallest stocks are expected to have the highest return potential.
|Annual Returns and Volatility (1964-2015)
|Source: Kenneth R. French Data Library
Quality is defined as the top decile of stocks ranked by operating profitability. The stocks with the highest operating profit relative to book value are selected. Highly profitable companies are deemed to be good investments.
The combined column represents the performance that is possible by merging the value, size and quality strategies into one. In order to maintain a similar number of stocks across all strategies, stocks are selected that are within the top half of the universe for the size and quality factors and top quartile for the value factor.
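The combined screen described above can be sketched as a simple filter. The field names and the percentile-rank representation below are illustrative assumptions on our part; the actual construction would rank the live universe on each factor first.

```python
def combined_screen(universe):
    """universe: dict mapping ticker -> dict of factor percentile ranks
    (0 = worst in universe, 1 = best). Keeps stocks in the top half by
    size and quality and the top quartile by value, as described above."""
    return [t for t, s in universe.items()
            if s["size"] >= 0.5 and s["quality"] >= 0.5 and s["value"] >= 0.75]
```

Requiring a stock to clear three hurdles at once is what keeps the combined portfolio to a similar number of holdings as the single-factor top-decile portfolios.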
Looking at performance on a single factor basis, we see that value has been the best performer over the 51 year period. The average annual return would have been 23.1% per annum and the standard deviation would have been in line with the market at 22.6%. The worst performing factor was quality, although its standard deviation of returns was also the lowest. The Sharpe ratio has been calculated as the return minus a risk free rate of 5%, divided by the standard deviation. The Sharpe ratio for the value factor has been the highest of the single factor strategies, with the quality factor again the lowest.
The combined factor performance has been the highest of all, with an annual return of 24.8% and standard deviation of returns has been in line with the single factor strategies. The Sharpe ratio of the combined strategy is higher than that of the three individual strategies, meaning that risk adjusted performance has been the best.
Clients may have a preference for a particular style tilt. Two of the more popular tilts of recent years have been low volatility and quality. Clients perceive that companies favoured by these strategies are likely to perform well in a downturn, which has increased the popularity of these styles.
Multi-Factor smart beta strategies have the potential to improve risk adjusted returns for clients versus single factor strategies, as the results above demonstrate. These products aim to invest in a subset of companies that have positive characteristics across multiple factors. The downside of multi-factor products is increased complexity, as they involve more sophisticated portfolio construction. Fees may be higher to reflect the increase in complexity.
The highs and lows of passive investing
Investors have certainly suffered a testing start to the year but, says Chris Riley, investment research manager at RSMR, recent market movements are not particularly out of line with historical experience
The first month of 2016 was a testing time for investors, with large falls in equity markets following the Christmas break. After the financial crisis of 2008, many investors are well aware we are now due another recession and there are concerns 2016 could be the year.
With equity markets off around 7% so far on an annual basis and around 20% from their 2015 highs, it is worth considering how typical this type of movement is. Is such a move a normal occurrence or could it be indicative of something much larger and more concerning to come?
|"There remains the possibility of a big correction in the style of 2008/09 but the market move thus far is simply not of that magnitude."
The chart below shows the distribution of overlapping annual returns of the S&P 500 index from 1873 until 2015, measured every month. The large timeframe of more than 100 years ensures this is a very representative sample of likely annual returns. 'Frequency' (on the Y axis) reflects the number of times the annual return has occurred – for example, an annual return of approximately 8% has occurred 172 times and hence is one of the most frequent outcomes shown on the chart.
As it happens, the annual return of -7% experienced up to the end of January 2016 is not a particularly unusual event. In around one year in every five (20% of years), the annual return is -10% or lower. A similar proportion of years (20%) sees positive annual returns of 25% or higher.
More extreme years occur only around one year in every 10, representing a 10% chance. A negative year of -20% or worse, or a positive year of around 35% or over, would be an example of such a year. As you can see from the chart, this type of year is quite a rare occurrence for investors, but certainly not a once in a lifetime event.
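The overlapping annual return series underlying a chart like this can be computed as follows. The sketch assumes a list of monthly index levels; the usage in the test is synthetic, not actual S&P 500 data.

```python
def overlapping_annual_returns(monthly_levels):
    """12-month return ending at each month, from a list of index levels."""
    return [monthly_levels[i] / monthly_levels[i - 12] - 1
            for i in range(12, len(monthly_levels))]

def tail_frequency(returns, threshold=-0.10):
    """Share of observations at or below the threshold, e.g. -10%."""
    return sum(r <= threshold for r in returns) / len(returns)
```

Applied to the full 1873-2015 history, `tail_frequency` at -10% is what produces the "one year in every five" figure quoted above.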
Properties of annual returns
|Summary of returns
|Source: Robert Shiller Website
The table to the right shows the statistical properties of annual S&P 500 returns over the same 1873-2015 period. The mean or average annual return over the period has been 6%, with a slightly higher median of 6.6%. The median represents the central year if all the years were arranged in order. The standard deviation of these returns has been nearly 19%.
The minimum return of -66% was achieved in June of 1932, during the Great Depression. The maximum annual return of 124% was achieved a year later, in July 1933, representing something of a recovery. These movements dwarf the low experienced in March 2009 of -42%. The other notable corrections in recent times were the -37% experienced in October of 1974 and the -29% in September of 2001. It is worth considering that the annual decline we have witnessed so far in the markets, of -7%, has been nothing compared to these moves.
The market movements experienced in recent times are not particularly out of line with historical experience and remain within normal trading limits that we would expect over the course of a year. There remains the possibility of a big correction in the style of 2008/09 but the market move thus far is simply not of that magnitude.
For the passive investor, these highs and lows represent the normal moves of the market and one can take comfort from the fact that, even after devastating falls of the past, the equity market has always subsequently rallied to new highs. The key here is to not panic and sell during bad times and instead to remain invested so as to benefit from subsequent recoveries.
Are Gold ETFs Ready to Shine Again?
After a stellar rise in the price of gold from 2000 until 2012 that saw gold almost hit the $2000 per ounce level, the last 3 years have been a disappointing time for gold investors. Last year saw a price plunge with gold threatening to drop below $1000 per ounce. Flows out of gold ETFs have been large as investors head for the exit.
The market turbulence this year has seen a swing back to gold and it has been one of the best performing assets. Given the upturn in the gold price, investors are once again asking if now is the right time to invest in gold. Have price falls over the last 3 years made gold attractive again as a value opportunity?
Current Moves in Context
The chart below shows the movement in gold from 2000 to 2016. There are two distinct phases in the chart. The first phase was a huge run up in the gold price between 2000 and 2012, which saw gold rise from $250 per ounce to over $1800 per ounce. The top in 2012 corresponded with concerns about the creditworthiness of the US government, leading to a downgrade in the credit rating of the United States.
Since 2012, the price of gold plunged close to the $1000 per ounce level, as investors became more confident in the financial outlook. Equity markets appreciated and the recession of 2008 began to fall from memory. All of this changed however, at the beginning of this year. The chance of a recession has increased in the minds of investors and gold has moved back up to over $1200 per ounce.
At this point investors may be wondering if the fall from 2012 until the end of 2015 was simply a short-term correction. If this is the case then gold looks set to resume its longer-term trend upwards and take out the 2012 high, on the way to $2000 per ounce and above. But an alternative scenario is that 2012 was actually the top and the move in 2016 is simply a small correction, as part of a larger downward trend from here.
There is a clear pattern of gold rising steadily, and for prolonged periods, during the economic turmoil of recent years and of the 1970s, whilst declining during times of perceived economic calm, such as the 1980s and 90s, as fear gives way to greed.
From a short-term perspective, gold is generally a good asset to hold at the tail end of the business cycle. Fear and volatility are currently on the increase and historically this has been a good environment for gold.
Many would point out that the underlying economic problems, which caused the crisis in 2008, have not been resolved and hence there is little reason for a change in the upward trend that started in 2000. Under this scenario, the fall of 2012 to end-2015 was merely a blip. A counter argument would be that gold was massively overvalued in 2012, which has become the turning point for a longer downward move. In this scenario, the recent move up is merely a short-term correction, as part of a longer term bear market for gold.
Tracking the US election
Is there really a link between election years and market outperformance in the US? Chris Riley, investment research manager at RSMR, assesses the evidence
Each January, the thoughts of advisers naturally turn to the prospects for the year ahead. Moving into 2016, they will be keenly aware equity markets have not suffered a serious correction since 2008 which, given their cyclical nature, only increases the chances of one coming in the future.
Clearly this has implications for how advisers may wish to tilt client portfolios and the balance between active and passive investments – and one big factor to take into account is that 2016 is a presidential election year in the US. There is a widespread theory that election-year performance tends to be better than average but does this actually hold up in practice?
|"It is interesting to note that the last two major downturns – 2000 and 2008 – have both been in election years."
Reasons for outperformance during election years
One underlying theory is that the US Federal Reserve is reluctant to raise interest rates during an election year for fear of influencing the economy and affecting the outcome of the election. This tends to boost the performance of equities during the election year – however, once the election is over, rates have to be raised more than would otherwise have been the case. This policy has the effect of delaying pain until after the election, leading to higher returns during election years, but lower returns following the election.
That said, we should also consider the possibility that performance during election years may simply be a statistical fluke – in other words, patterns of performance around election years may simply be a coincidence and actually have nothing to do with the election. Under this scenario, we can conclude little from past performance during election years.
|Data retrieved from FRED, Federal Reserve Bank of St. Louis
In the graph to the right, we show the average performance of the Wilshire 5000, an index of the total US equity market, in US election years versus non-election years over the last 40 years – enough to cover the last 10 elections. Elections have occurred at regular intervals, every four years, and are marked on the graph, starting in 1976 and ending in 2012.
Despite the preconception that downturns are avoided during election years, it is interesting to note that the last two major downturns – 2000 and 2008 – have both been in election years. In each case, the incumbent President was replaced during these years, in the midst of the recession.
Over the sample period, the average return in election years was 9.2%, lower than the 13% average return outside of election years. If we exclude the 2008 correction, then performance in election years was slightly higher than outside election years. In summary, the data over the last 40 years does not support the idea that election-year equity performance is higher than performance outside of election years.
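The election-year split can be sketched as below, using the fact that US presidential election years (1976, 1980, and so on) are divisible by four. The input format is an assumption for illustration: a mapping from calendar year to that year's index return.

```python
from statistics import mean

def election_split(annual_returns):
    """annual_returns: dict mapping year -> annual return.
    Returns (average in election years, average in other years)."""
    elec = [r for y, r in annual_returns.items() if y % 4 == 0]
    rest = [r for y, r in annual_returns.items() if y % 4 != 0]
    return mean(elec), mean(rest)
```

Dropping 2008 from the election-year list before averaging is the one-line change that produces the "slightly higher" result mentioned above.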
Passive investments represent a cheap way to gain exposure to the market – although they offer no downside protection in the event of a market fall, as was the case in the last two recessions that both occurred during election years. They do, however, continue to provide exposure to the upside in a low-cost manner.
A regular recession that occurs every six to 10 years has been a feature of the modern Western economy since the early 1970s. On this basis, we are due a recession between now and 2018. Although there is a view that it is less likely to take place during an election year, this is not supported by the recent experience over the last four decades.
Market timing using the ‘CAPE’ ratio
Chris Riley, investment research manager at RSMR, examines how useful the cyclically-adjusted price earnings ratio can be when it comes to predicting future market returns
In ‘Through the macro cycle’, below, we considered the power of the yield curve as a predictive measure of the business cycle. One thing to bear in mind, however, is that financial markets are not always closely linked to the performance of the underlying economy. While this seems particularly true of emerging economies, the stock markets of developed economies can also become overvalued and subsequently underperform – even when the underlying economy is healthy.
This raises the question of what constitutes a suitable measure of under or overvaluation, given that the markets may not always respond to underlying economic trends. An obvious response is to look at the current valuation of the market, to assess if it is under or overvalued. We will analyse one measure of current valuation – popularised by Professor Robert Shiller – called the cyclically-adjusted price earnings ratio or ‘CAPE’.
|"Over time, market valuation does have a tendency to exert itself."
Also known as the Shiller P/E, the CAPE ratio aims to smooth out short-term fluctuations by using an average of the last 10 years of earnings, allowing a full business cycle to be taken into account. The current price of the market is then divided by the level of average earnings to produce the valuation metric. A high CAPE implies current pricing is high relative to average earnings, and vice versa for a low CAPE. We would expect a high CAPE to be associated with low future returns, as prices need to come down in order to bring valuations back to a more sensible level.
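As a rough sketch, the CAPE calculation reduces to a few lines. A production version would also adjust both price and earnings for inflation, as Shiller does; that step is omitted here for brevity.

```python
from statistics import mean

def cape(price, annual_earnings):
    """Cyclically-adjusted P/E: current price divided by the mean of the
    trailing ten years of annual (ideally inflation-adjusted) earnings."""
    if len(annual_earnings) < 10:
        raise ValueError("CAPE needs a full ten years of earnings")
    return price / mean(annual_earnings[-10:])
```

For example, a market priced at 100 with average ten-year earnings of 5 per share trades on a CAPE of 20x.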
Looking at the graph to the right, peaks in the blue line reflect times when the CAPE ratio was high – big peaks came in 1929 (31x), 1966 (24x) and the year 2000 (44x). At these points, the market is deemed to be expensive, according to the CAPE ratio. The red line then represents subsequent five-year returns – for example, in the year 2000 when the CAPE ratio was 44x, the five-year subsequent return was -22%.
We would expect to see an inverse relationship between the CAPE and five-year returns, to the extent that peaks in the blue line should be associated with dips in the red line and vice versa. From visual inspection of the graph alone, we can see there is some relationship between CAPE and future market returns.
The correlations in the table below suggest there is a meaningful relationship between CAPE and longer-term returns – although the link with the shorter-term return is weaker. Over time, market valuation does have a tendency to exert itself but, in the short run, expensive markets can sometimes become even more expensive before they eventually correct.
|Correlations between CAPE and S&P500 Returns (1881-2011)
Some advisers may feel there is sufficient evidence for them to use the CAPE ratio to reshape client portfolios based on current valuations. Others may feel the precision afforded by the CAPE ratio is not sufficient to justify deviating client portfolios away from fixed asset weights.
My own view is that more sophisticated models that also include measures of quality and momentum may increase the predictability of markets and I would be hesitant to rely too much on a single measure, such as CAPE, when predicting future returns.