
RSMR on passive investing - 2016

Chris Riley

Low Volatility Investing: The Behavioural Underpinning

Low volatility investing, through smart beta vehicles, is becoming increasingly popular. Last month, we looked at how the low volatility strategy has performed over time. We found that the return premium between low and high volatility stocks is mainly caused by the poor performance of high volatility stocks. This month, we will expand the analysis by measuring risk adjusted performance of the low volatility strategy and discuss one of the underlying behavioural drivers of the low volatility premium.

Risk Adjusted Performance

A common measure of risk adjusted performance is the Sharpe Ratio, which is calculated by subtracting a risk free rate from the asset performance and then dividing by risk (standard deviation of returns). We assume a risk free rate of 4.9% to calculate the Sharpe ratios below. Although this seems like a high rate by the standards of the last decade, this is the average yield of 3 month Treasury Bills over the sample period. The chart below shows the risk adjusted performance of US stocks, ranked in deciles based on their volatility.

Sharpe Ratio Volatility Deciles (1964-2015)
Figure 1: Data taken from the Ken R French Data Library


Although the lowest volatility (decile 1) stocks have the lowest risk, they still do not provide the highest risk adjusted performance, as their absolute performance is too low. The highest risk adjusted performance is actually provided by decile 3 stocks, which provide a nice mix of high return and low risk. The worst risk adjusted performance is provided by decile 10 stocks. Not only do these stocks have low absolute returns, but the standard deviation of returns is also very high.
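The calculation itself is simple enough to sketch in a few lines of Python (the return and volatility figures below are illustrative, not taken from the chart):

```python
def sharpe_ratio(annual_return, annual_vol, risk_free=0.049):
    """Sharpe ratio: excess return over the risk-free rate, per unit of risk."""
    return (annual_return - risk_free) / annual_vol

# Hypothetical decile: 12% annual return with 15% standard deviation
ratio = sharpe_ratio(0.12, 0.15)
print(f"{ratio:.2f}")
```

The same function shows why decile 10 scores so poorly: a low return minus the risk-free rate, divided by a large standard deviation, produces a small ratio.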

Behavioural Underpinning

It appears that high volatility stocks have both low absolute returns and high levels of risk. This is at odds with traditional investment theory, which stresses a positive trade-off between investment returns and risk.

Behavioural Finance does have an explanation for the overpricing of high volatility stocks in the form of the probability weighting function. The graph below shows the function with the dotted green line representing the weight a rational investor would assign to various probabilities. The subjective probability they assign to events is equal to the objective probability. The solid blue line represents the weight that investors actually assign to probabilities. We see that for low probability events (p<0.4), investors typically assign a higher probability to these events occurring than is warranted by the objective probability, as the solid blue line is above the dotted green line.

Factor % Returns Per Annum
Figure 2: Taken from Burns et al. (2010), Overweighting of Small Probabilities. Wiley Encyclopedia of Operations Research and Management Science


The probability weighting function can explain why people take part in lotteries, despite the expected payoff being extremely poor. The objective probability of winning the jackpot is small, but people assign a higher subjective probability of winning than is warranted.
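The solid curve in Figure 2 is close in shape to the one-parameter weighting function proposed by Tversky and Kahneman; a sketch in Python, with an illustrative value for the curvature parameter gamma:

```python
def prob_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function.

    Small objective probabilities receive more decision weight than they
    warrant; large ones receive less. gamma=1 recovers the rational line.
    """
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A 1% objective chance of winning receives several times its fair weight
print(prob_weight(0.01))
```

With gamma equal to 1 the function collapses onto the dotted green line; values below 1 bend it into the inverse-S shape shown in the figure.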

The same principle can be applied to high volatility stocks. Often these stocks are very high risk businesses, operating with new and innovative business models. The failure rate of these businesses is high, but a small number of them will make a lot of money. The probability weighting function implies that investors will tend to overpay for such stocks, in the same way that they are attracted to lottery style payoffs, as they believe the probability of success is greater than is warranted.


High volatility stocks are a poor investment on average, both in terms of absolute returns and risk adjusted returns. Low volatility smart beta products do avoid these stocks, but the highest risk adjusted returns are generally found in the middle of the volatility spectrum.

There could be many reasons for the overpricing of high volatility stocks, but behavioural explanations are a strong contender. The overweighting of small probability events, such as the chance of earning high returns on glamorous high-risk businesses, ensures that high volatility stocks are overpriced and offer poor subsequent returns for investors.

Chris Riley, RSMR September 2016

Chris Riley

A Closer Look at the Low Volatility Factor

The low volatility factor has become one of the most popular smart beta strategies, following the financial crisis of 2008. Investors believe that low volatility stocks can potentially provide market beating levels of return, at a lower level of risk than the market as a whole.

Given the popularity of the low volatility strategy, we will look at the historical returns in more depth by examining returns across volatility deciles. The results are surprising and show that the relationship between volatility and returns is anything but linear. After that, we will examine the performance of the low volatility factor post-2008, to see if its popularity has changed the return profile.

Returns of Volatility Deciles

The graph below shows the performance of US stock deciles sorted by historical return volatility. Volatility Decile 1 represents the lowest volatility stocks that would commonly be purchased in a long-only, low volatility factor strategy. Decile 10 represents the highest volatility stocks that would be avoided in a long-only strategy, or shorted in a long-short strategy. The blue line represents the annual performance of the respective deciles and the orange line represents the average performance of the 10 deciles in total. If there were a straight-line relationship between volatility and return, we would expect the blue line to slope steadily downwards from decile 1 to decile 10.

Factor % Returns Per Annum
Figure 1: Data taken from the Ken R French Data Library


Contrary to popular belief, we do not see a straight line relationship between volatility and return. Although the highest volatility shares (decile 10) have a lower return than the lowest volatility shares (decile 1), the returns of both of these deciles are below the average for stocks as a whole. The highest return actually comes from stocks of relatively high volatility (decile 8). In general, we can say that the relationship between volatility and return looks more likely to be an n-shaped non-linear relationship than a straight line. Returns are highest for stocks with average to above average levels of volatility and drop for stocks with very low volatility or very high volatility.
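The decile construction used throughout this analysis can be sketched in Python (random numbers stand in for real stock returns; the universe size and return parameters are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical universe: 60 months of returns for 500 stocks
returns = rng.normal(0.01, 0.05, size=(60, 500))

vol = returns.std(axis=0)            # historical volatility of each stock
ranks = vol.argsort().argsort()      # 0 = least volatile stock
decile = ranks * 10 // len(vol) + 1  # bucket 1 (low vol) to 10 (high vol)

# Equal-weighted mean monthly return of each decile portfolio
decile_returns = {d: returns[:, decile == d].mean() for d in range(1, 11)}
```

Each decile here holds an equal number of stocks, so the blue line in the chart is simply the annualised version of these per-decile averages.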

Performance Post 2008

The chart below shows the performance of volatility deciles for the shorter period of 2009-2015. This is to account for the large inflows into low volatility strategies following the financial crisis of 2008, and to see if these flows have changed the return pattern. The average line for the period is higher than for the first chart, with average returns running at around 15%.

Factor % Returns Per Annum
Figure 2: Data taken from the Ken R French Data Library


The relationship between volatility and returns post 2008 has not been in line with that anticipated. The return on low volatility stocks (decile 1) has been the lowest of all the deciles. These are the stocks that would typically be purchased by a low volatility factor product. The return for the highest volatility stocks (decile 10) has also been below average. In general, there appears to be more of a straight linear relationship between volatility and returns post 2008.


The long-term relationship between volatility and returns is more complex than is presumed. There seems to be little premium for holding low volatility stocks and there is not a straight line relationship between volatility and returns. There has been a high return penalty for holding high volatility stocks in the pre-2009 period. This suggests the low volatility factor would be exploited most efficiently using a long-short strategy, which allows the manager to short the very highest volatility stocks.

Looking at more recent returns, we see that the penalty for holding high volatility stocks has been reduced since 2008. Although the reason for this cannot be proven, many will suspect that investors' reduced appetite for high volatility stocks since 2008 has left them more cheaply priced and so improved their subsequent returns.

It is also worth bearing in mind that the risk adjusted performance of low volatility stocks will be very much stronger than raw performance. This is because the lowest volatility stocks have the lowest volatility by definition. If the investor is more focussed on risk adjusted performance than on returns alone, then a low volatility strategy may still be appropriate for the lower risk that it provides.

Chris Riley, RSMR August 2016

Chris Riley

Smart Beta: Historical Factor Performance

One issue with the smart beta concept is the sheer number of products available in the marketplace. For some clients a multi-factor product will be a good option, providing exposure to multiple factors in a diversified portfolio. Other clients will be interested in gaining exposure to specific factors, such as low volatility or high quality. For these clients, it would be useful to provide some insight into the historical returns of each factor. In this month’s article, we will examine the short-term and long-term performance of 6 common factors used in smart beta products.

Performance of Factor Tilts

In the chart below we show the short-term and long-term performance of the factor tilts, with performance data stretching back 50 years and 10 years from the end of 2015. The returns are simulated by investing in the top decile of stocks in the US market sorted by the appropriate factor tilt. Factor definitions were taken from the Ken R French website and are commonly used proxies to form style portfolios. For example, the size factor was defined as low market capitalisation and the value factor defined as high book value to market value.


Factor % Returns Per Annum
Figure 1: Data taken from the Ken R French Data Library


The best performing factors over the 50 year period have been the traditional factor tilts: size, value and momentum. The other factors, which have only been discovered in more recent times, do not have as strong a track record over the long term. This may explain why they were not initially considered to be factors.

Over the last 10 years, factor returns are down relative to 50 year returns across all strategies. This is most likely due to the factors becoming more recognised by investors and exploited over the sample period. In this regard, the launch of smart beta products to gain exposure to the factors is likely to erode factor returns further going forward. The top performing factor was value, with the quality factor a close second. The worst performing style tilt was high dividend yield.

The low volatility factor has become popular in recent years, as risk aversion increased following the Global Financial Crisis of 2008, but its long-term and short-term performance record is not great relative to the other factors. A more in-depth look at the pattern of returns suggests that the highest performing stocks are in the middle of the volatility range, with both high and low volatility stocks underperforming stocks of average volatility.

One thing to bear in mind is that factors in smart beta products may be defined by different metrics to the ones used here. For example, we have defined the value tilt as high book value to market capitalisation. Some smart beta products may define value as another related metric, such as the price to earnings ratio. This may change the result somewhat, although different measures of value should be correlated with each other.


Quality and low volatility have been popular style tilts since the Global Financial Crisis of 2008, but these have not been the best performing factors in the long-run.

It is noticeable that over the last 10 years, factor returns have been lower than in the past across all strategies. This should be borne in mind when looking at historical back testing of factor performance. As further capital enters the space, it is likely to drive down factor returns even further.

Another noticeable feature is that factor returns have come broadly into line with each other over the last 10 years. Factor timing and selection is likely to be a difficult process that is not necessarily rewarded with higher returns. This strengthens the argument for smart beta products that provide diversified factor exposure.

Chris Riley, RSMR July 2016

Chris Riley

Smart Beta ETFs

Smart beta ETFs have become more popular in recent years and advisers may be wondering if these products are a good investment. There are many products now available with the smart beta label, each with differing characteristics.

One common feature of smart beta ETFs is that they typically offer exposure to a single factor. Common examples of factors would be value (targeting cheap stocks), size (targeting small stocks) or low volatility (targeting safe stocks). More complex ETFs, which provide exposure to multiple factors, are also available and potentially add additional value over single factor products.

Single Versus Multi Factor Performance

The table below looks at the performance of 3 common style tilts available through single factor ETFs: value, size and quality. Value is defined as the top decile of stocks ranked by the book-equity to market-value ratio. Cheap stocks have a low market value relative to accounting book equity.

The size tilt is measured as the lowest decile of stocks ranked by their market capitalisation. The smallest stocks are expected to have the highest return potential.

Annual Returns and Volatility (1964-2015)
          Combined  Value  Size  Quality
Mean        24.8     23.1   17.6   16.0
SD          22.7     22.6   23.1   21.6
Sharpe      0.87     0.80   0.55   0.51
Source: Ken R French Data Library

Quality is defined as the top decile of stocks ranked by operating profitability. The stocks with the highest operating profit relative to book value are selected. Highly profitable companies are deemed to be good investments.

The combined column represents the performance that is possible by merging the value, size and quality strategies into one. In order to maintain a similar number of stocks across all strategies, stocks are selected that are within the top half of the universe for the size and quality factors and top quartile for the value factor.
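A sketch of that combined screen in Python, using made-up percentile ranks rather than real factor data (the thresholds follow the text: top quartile on value, top half on size and quality):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # hypothetical stock universe

# Percentile ranks on each factor, 0 to 1, with 1 the most attractive
value_rank = rng.random(n)    # high book-to-market
size_rank = rng.random(n)     # small market capitalisation
quality_rank = rng.random(n)  # high operating profitability

# Combined screen: top quartile on value, top half on size and quality
selected = (value_rank >= 0.75) & (size_rank >= 0.5) & (quality_rank >= 0.5)
print(f"{selected.sum()} of {n} stocks pass all three screens")
```

With independent ranks roughly 1000 x 0.25 x 0.5 x 0.5, or about 63 stocks, pass the screen, keeping portfolio breadth comparable to a single-factor top decile.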

Looking at performance on a single factor basis, we see that value has been the best performer over the 51 year period. The average annual return would have been 23.1% per annum and the standard deviation would have been in line with the market at 22.6%. The worst performing factor was quality, although its standard deviation of returns was also the lowest. The Sharpe ratio has been calculated as the return minus a risk free rate of 5%, divided by the standard deviation. The Sharpe ratio for the value factor has been the highest of the single factor strategies, with the quality factor again the lowest.

The combined factor performance has been the highest of all, with an annual return of 24.8%, while the standard deviation of returns has been in line with the single factor strategies. The Sharpe ratio of the combined strategy is higher than that of the three individual strategies, meaning that risk adjusted performance has been the best.


Clients may have a preference for a particular style tilt. Two of the more popular tilts of recent years have been low volatility and quality. Clients perceive that companies favoured by these strategies are likely to perform well in a downturn, which has increased the popularity of these styles.

Multi-Factor smart beta strategies have the potential to improve risk adjusted returns for clients versus single factor strategies, as the results above demonstrate. These products aim to invest in a subset of companies that have positive characteristics across multiple factors. The downside of multi-factor products is increased complexity, as they involve more sophisticated portfolio construction. Fees may be higher to reflect the increase in complexity.

Chris Riley

The highs and lows of passive investing

Investors have certainly suffered a testing start to the year but, says Chris Riley, investment research manager at RSMR, recent market movements are not particularly out of line with historical experience

The first month of 2016 was a testing time for investors, with large falls in equity markets following the Christmas break. After the financial crisis of 2008, many investors are well aware that another recession is due at some point, and there are concerns 2016 could be the year.

With equity markets off around 7% so far on an annual basis and around 20% from their 2015 highs, it is worth considering how typical this type of movement is. Is such a move a normal occurrence or could it be indicative of something much larger and more concerning to come?

"There remains the possibility of a big correction in the style of 2008/09 but the market move thus far is simply not of that magnitude."

The chart below shows the distribution of overlapping annual returns of the S&P500 index from 1873 until 2015, measured every month. The large timeframe of more than 100 years ensures this is a very representative sample of likely annual returns. ‘Frequency’ (on the Y axis) reflects the number of years the annual return has occurred – for example, an annual return of approximately 8% has occurred 172 times and hence is one of the most frequent outcomes shown on the chart.   

Source: Robert Shiller Website

As it happens, the annual return of -7% experienced up to the end of January 2016 is not a particularly unusual event. In around one year in every five (20% of years), the annual return is -10% or lower. A similar proportion of years (20%) sees positive annual returns of 25% or higher. 

More extreme years only occur around one year in every 10, which would represent a 10% chance of occurring. A negative year of -20% or worse, or a positive year of around 35% or over, would be an example of such a year. As you can see from the chart, this type of year is quite a rare occurrence for investors, but certainly not a once in a lifetime event.

Properties of annual returns

Summary of returns
Mean 6.0%   Minimum -65.6%
Median 6.6%   Maximum 124.2%
Standard deviation 18.7%      
Source: Robert Shiller Website

The table to the right shows the statistical properties of annual S&P 500 returns over the same 1873-2015 period. The mean or average annual return over the period has been 6%, with a slightly higher median of 6.6%. The median represents the central year, if all the years were arranged in order. The standard deviation of these returns has been nearly 19%.
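Deriving those summary statistics from a price series is a short exercise in Python (the series below is randomly generated, standing in for the monthly S&P 500 levels):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical monthly index levels covering roughly 1873-2015
prices = 100 * np.cumprod(1 + rng.normal(0.005, 0.04, size=1716))

# Overlapping annual returns, measured every month (12-month lookback)
annual = prices[12:] / prices[:-12] - 1

print(f"mean {annual.mean():.1%}, median {np.median(annual):.1%}, "
      f"sd {annual.std():.1%}")
```

Because each month's annual return shares 11 months with its neighbours, the observations overlap, exactly as in the distribution shown in the chart.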

The minimum return of -66% was achieved in June of 1932 during the Great Depression. The maximum annual return of 124% was achieved a year later, in July 1933, representing something of a recovery. These movements dwarf the low experienced in March 2009 of -42%. The other notable corrections in recent times were -37% experienced in October of 1974 and -29% in September of 2001. It is worth considering that the annual decline we have witnessed so far in the markets, of -7%, has been nothing compared to these moves.


The market movements experienced in recent times are not particularly out of line with historical experience and remain within normal trading limits that we would expect over the course of a year.  There remains the possibility of a big correction in the style of 2008/09 but the market move thus far is simply not of that magnitude.

For the passive investor, these highs and lows represent the normal moves of the market and one can take comfort from the fact that, even after devastating falls of the past, the equity market has always subsequently rallied to new highs. The key here is to not panic and sell during bad times and instead to remain invested so as to benefit from subsequent recoveries.

Chris Riley

Are Gold ETFs Ready to Shine Again?

After a stellar rise in the price of gold from 2000 until 2012 that saw gold almost hit the $2000 per ounce level, the last 3 years have been a disappointing time for gold investors. Last year saw a price plunge with gold threatening to drop below $1000 per ounce. Flows out of gold ETFs have been large as investors head for the exit.

The market turbulence this year has seen a swing back to gold and it has been one of the best performing assets. Given the upturn in the gold price, investors are once again asking if now is the right time to invest in gold. Have price falls over the last 3 years made gold attractive again as a value opportunity?

Current Moves in Context

The chart below shows the movement in gold from 2000 to 2016. There are two distinct phases in the chart. The first phase was a huge run up in the gold price between 2000 and 2012, which saw gold rise from $250 per ounce up to over $1800 per ounce. The top in 2012 corresponded with concerns about the credit worthiness of the US government, leading to a downgrade in the credit rating of the United States.

Source: Ice Benchmark Administration

After 2012, the price of gold plunged close to the $1000 per ounce level, as investors became more confident in the financial outlook. Equity markets appreciated and the recession of 2008 began to fall from memory. All of this changed however, at the beginning of this year. The chance of a recession has increased in the minds of investors and gold has moved back up to over $1200 per ounce.

At this point investors may be wondering if the fall from 2012 until the end of 2015 was simply a short-term correction. If this is the case then gold looks set to resume its longer-term trend upwards and take out the 2012 high, on the way to $2000 per ounce and above. But an alternative scenario is that 2012 was actually the top and the move in 2016 is simply a small correction, as part of a larger downward trend from here.

Source: Ice Benchmark Administration

There is a clear pattern of gold rising steadily, and for prolonged periods, during the economic turmoil of recent years and of the 1970s, whilst declining during times of perceived economic calm such as the 1980s and 90s, as fear gives way to greed.


From a short-term perspective, gold is generally a good asset to hold at the tail end of the business cycle. Fear and volatility are currently on the increase and historically this has been a good environment for gold.

Many would point out that the underlying economic problems, which caused the crisis in 2008, have not been resolved and hence there is little reason for a change in the upward trend that started in 2000. Under this scenario, the fall of 2012 to end-2015 was merely a blip. A counter argument would be that gold was massively overvalued in 2012, which has become the turning point for a longer downward move. In this scenario, the recent move up is merely a short-term correction, as part of a longer term bear market for gold.

Chris Riley

Tracking the US election

Is there really a link between election years and market outperformance in the US? Chris Riley, investment research manager at RSMR, assesses the evidence

Each January, the thoughts of advisers naturally turn to the prospects for the year ahead. Moving into 2016, they will be keenly aware equity markets have not suffered a serious correction since 2008 which, given their cyclical nature, only increases the chances of one coming in the future.

Clearly this has implications for how advisers may wish to tilt client portfolios and the balance between active and passive investments – and one big factor to take into account is that 2016 is a presidential election year in the US. There is a widespread theory that election-year performance tends to be better than average but does this actually hold up in practice?

"It is interesting to note that the last two major downturns – 2000 and 2008 – have both been in election years."

Reasons for outperformance during election years

One underlying theory is that the US Federal Reserve is reluctant to raise interest rates during an election year for fear of influencing the economy and affecting the outcome of the election. This tends to boost the performance of equities during the election year – however, once the election is over, rates have to be raised more than would otherwise have been the case. This policy has the effect of delaying pain until after the election, leading to higher returns during election years, but lower returns following the election.

That said, we should also consider the possibility that performance during election years may simply be a statistical fluke – in other words, patterns of performance around election years may simply be a coincidence and actually have nothing to do with the election. Under this scenario, we can conclude little from past performance during election years.

Data retrieved from FRED, Federal Reserve Bank of St. Louis

The evidence

In the graph to the right, we show the average performance of the Wilshire 5000, an index of the total US equity market, in US election years versus non election years, over the last 40 years – enough to cover the last 10 elections. Elections have occurred at regular intervals, every four years, and are marked on the graph, starting at 1976 and ending in 2012.

Despite the preconception that downturns are avoided during election years, it is interesting to note that the last two major downturns – 2000 and 2008 – have both been in election years. In each case, the incumbent President was replaced during these years, in the midst of the recession.

The average return in election years was 9.2%, which was lower than the average return outside of election years of 13%, over the sample period. If we exclude the 2008 correction, then performance in election years was slightly higher than outside election years. In summary then, the data over the last 40 years does not support the idea that election-year equity performance is higher than that outside of election years. 
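The comparison behind these averages amounts to grouping annual returns by election year, as in this Python sketch (the return figures are randomly generated placeholders, not the Wilshire data):

```python
import random

random.seed(0)
# Hypothetical annual returns for 1976-2015
returns = {year: random.gauss(0.11, 0.17) for year in range(1976, 2016)}

election_years = set(range(1976, 2016, 4))  # every fourth year: 1976, ..., 2012

in_election = [r for y, r in returns.items() if y in election_years]
outside = [r for y, r in returns.items() if y not in election_years]

avg_in = sum(in_election) / len(in_election)
avg_out = sum(outside) / len(outside)
print(f"election years {avg_in:.1%}, other years {avg_out:.1%}")
```

The 40-year window yields only 10 election-year observations, which is one reason a single outlier such as 2008 can swing the comparison so much.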

Passive investments represent a cheap way to gain exposure to the market – although they offer no downside protection in the event of a market fall, as was the case in the last two recessions that both occurred during election years. They do, however, continue to provide exposure to the upside in a low-cost manner.

A regular recession that occurs every six to 10 years has been a feature of the modern Western economy since the early 1970s. On this basis, we are due a recession between now and 2018. Although there is a view that it is less likely to take place during an election year, this is not supported by the recent experience over the last four decades.

Market timing using the ‘CAPE’ ratio

Chris Riley, investment research manager at RSMR, examines how useful the cyclically-adjusted price earnings ratio can be when it comes to predicting future market returns

In ‘Through the macro cycle’, below, we considered the power of the yield curve as a predictive measure of the business cycle. One thing to bear in mind, however, is that financial markets are not always closely linked to the performance of the underlying economy. While this seems particularly true of emerging economies, the stock markets of developed economies can also become overvalued and subsequently underperform – even when the underlying economy is healthy.

This raises the question of what constitutes a suitable measure of under or overvaluation, given that the markets may not always respond to underlying economic trends. An obvious response is to look at the current valuation of the market, to assess if it is under or overvalued. We will analyse one measure of current valuation – popularised by Professor Robert Shiller – called the cyclically-adjusted price earnings ratio or ‘CAPE’.

"Over time, market valuation does have a tendency to exert itself."

Also known as the Shiller P/E, the CAPE ratio aims to smooth out short-term fluctuations by using an average of the last 10 years of earnings, allowing a full business cycle to be taken into account. The current price of the market is then rebased by the level of average earnings to produce the valuation metric. A high CAPE implies current pricing is high relative to average earnings and vice versa for a low CAPE. We would expect a high CAPE to be associated with low future returns, as prices need to come down in order to bring valuations back to a more sensible level.
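As a sketch of the calculation in Python (the index level and earnings figures are made up; Shiller's published series also adjusts prices and earnings for inflation):

```python
def cape(price, earnings_history):
    """Cyclically-adjusted P/E: price over the average of 10 years of earnings."""
    last_ten = earnings_history[-10:]
    return price / (sum(last_ten) / len(last_ten))

# Hypothetical index at 1400 with 10 years of per-share earnings
earnings = [40, 45, 50, 30, 20, 35, 55, 60, 65, 70]  # average: 47
print(f"CAPE: {cape(1400, earnings):.1f}x")
```

Note how the weak years (20 and 30) are averaged in rather than ignored, which is precisely what stops a one-off earnings collapse from making the market look artificially expensive.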

CAPE ratio and S&P500 five-year forward returns (January 1881 to January 2011)
Source: Robert Shiller Website

Looking at the graph to the right, peaks in the blue line reflect times when the CAPE ratio was high – big peaks came in 1929 (31x), 1966 (24x) and the year 2000 (44x). At these points, the market is deemed to be expensive, according to the CAPE ratio. The red line then represents subsequent five-year returns – for example, in the year 2000 when the CAPE ratio was 44x, the five-year subsequent return was -22%.

We would expect to see an inverse relationship between the CAPE and five-year returns, to the extent that peaks in the blue line should be associated with dips in the red line and vice versa. From visual inspection of the graph alone, we can see there is some relationship between CAPE and future market returns.

The correlations in the table below suggest there is a meaningful relationship between CAPE and longer-term returns – although the link with the shorter-term return is weaker. Over time, market valuation does have a tendency to exert itself but, in the short run, expensive markets can sometimes become even more expensive before they eventually correct.

Correlations between CAPE and S&P500 Returns (1881-2011)
        One-year  Three-year  Five-year
CAPE      -0.19      -0.30      -0.34
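Reproducing a table like this mainly involves aligning each CAPE reading with the return over the following years, as in this Python sketch (random data stands in for the Shiller series):

```python
import numpy as np

rng = np.random.default_rng(3)
years = 131  # roughly 1881-2011
prices = 100 * np.cumprod(1 + rng.normal(0.06, 0.18, size=years))
cape = rng.uniform(10, 30, size=years)  # stand-in valuation readings

horizon = 5  # five-year forward returns
forward = prices[horizon:] / prices[:-horizon] - 1

# Pair each CAPE reading with the market return over the following five years
corr = np.corrcoef(cape[:-horizon], forward)[0, 1]
```

On the real data the correlation comes out negative, as in the table; on random data it is just noise around zero.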

Some advisers may feel there is sufficient evidence for them to use the CAPE ratio to reshape client portfolios, based on current valuations. Others may feel the amount of precision afforded by the CAPE ratio is not sufficient to justify deviating client portfolios away from fixed-asset weights.

My own view is that more sophisticated models that also include measures of quality and momentum may increase the predictability of markets and I would be hesitant to rely too much on a single measure, such as CAPE, when predicting future returns.

RSMR on passive investing - 2015

The power of simplicity

Chris Riley, investment research manager at RSMR, illustrates how strong outcomes can be produced from a small number of decisions, expressed through low-cost passive vehicles

One thing that should be clear to investors is that high complexity does not always equal high returns. Complexity is often associated with negative features such as high turnover and high fees, which can damage long-term returns.

High portfolio turnover increases transactions costs and, in addition, investors can be lured by short-term market movements into making moves that damage their long-term financial health, such as panic-selling during downturns or becoming over-confident during rallies.

"High-cost, high-turnover strategies that bring more complexity do not necessarily add any additional value and may simply fulfil an emotional need for investors to be ‘doing something’."

In this article we want to show just how powerful a simple strategy of buy-and-hold, or making only a very small number of choices, can be and how it can compound to powerful long-term returns. Active management is also not strictly necessary to generate good long-term returns, which can in fact be achieved using low-cost passive vehicles.

S&P500/Gold Strategy

The following investment strategy can be implemented using two passive vehicles only – an S&P500 index fund and a gold exchange-traded fund (ETF). The S&P500 and gold are two of the most liquid markets in the world, with a range of low-cost mutual funds, or ETFs in the case of gold, that can be used to track the performance of these two markets reliably and at low cost. 

To avoid turnover, we will follow a simple strategy of trading at most once a decade. To begin with, the portfolio will invest 100% in gold during the 1970s and we will then switch to the S&P500 for the 1980s and 1990s, before moving back to gold for the 2000s. We then make one final switch to the S&P500 from 2010 onwards. In total, this simple investment strategy would have required only four decisions over the last 45 years and would have been invested in just two low-cost passive investment vehicles. 

Value of $1 Invested in Gold or S&P500 (total return)
Source: Robert Shiller Website

The chart above shows how the value of an initial $1 (64p) investment would have increased over time using the switching strategy described above. In addition, it shows how a buy-and-hold strategy of investing in gold alone or the S&P500 alone would have done. 

A $1 investment in gold alone would have returned $34.06, gross of fees over the period, which equates to an 8.4% return per annum. This is a pretty impressive result for a strategy that involves no turnover and no active management. The total return from the S&P500 alone would have been even higher, with a 10% per annum return giving a terminal amount of $89.12. 

Returns increase dramatically, however, once we consider the switching strategy. Making only four switches over the period would have doubled the annual return of the S&P500 alone to 20% per annum, giving a terminal value of $3,163.08.
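The terminal values above follow directly from compound growth: an annual return r sustained over n years turns $1 into (1+r)^n. A minimal Python sketch (the function names are our own, and because the article's annualised rates are rounded, recomputing from them will not exactly reproduce its dollar figures, which come from actual decade-by-decade returns):

```python
def terminal_value(annual_return, years, initial=1.0):
    """Compound an initial investment at a constant annual return."""
    return initial * (1 + annual_return) ** years

def cagr(terminal, years, initial=1.0):
    """Back out the compound annual growth rate from a terminal value."""
    return (terminal / initial) ** (1.0 / years) - 1.0

# A constant 10% per annum over 45 years turns $1 into roughly $73;
# doubling the rate to 20% compounds to several thousand dollars,
# which is why the four-switch strategy's terminal value is so large.
print(round(terminal_value(0.10, 45), 2))
print(round(terminal_value(0.20, 45), 2))
```

The asymmetry between the two outcomes is the point of the article: a modest improvement in the annual rate, compounded over decades, produces a dramatic difference in the terminal amount.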

While such a scenario obviously requires a degree of hindsight, remember that we have limited ourselves to only two asset classes and only four switches over a 45-year period. The returns from the switching strategy emphasise that strong outcomes can be produced from a small number of decisions, expressed through low-cost passive vehicles. 

While high returns are also possible through active, high-cost strategies, the evidence we have presented here suggests that simple, low-cost strategies using passive vehicles can also deliver high returns. High-cost, high-turnover strategies that bring more complexity do not necessarily add any additional value and may simply fulfil an emotional need for investors to be ‘doing something’. 

Competition is increasing in the passive space and we continue to see both reductions in fees and the broadening of product ranges from passive providers. Used in an appropriate manner, these vehicles have the potential to deliver excellent results for clients – especially if the tendency towards complexity and overtrading can be curbed.

Through the macro cycle

Chris Riley, investment research manager at RSMR, considers the parts of the macroeconomic cycle when investors and their advisers may be more inclined to move into passive funds

Passive investing tends to be more popular with investors during the bull phase of the equity market cycle. At this time, it is relatively easy to make money and investors are drawn to the lowest-cost means by which to gain exposure to the market, which is usually through passive vehicles.

It is argued that active managers can really add value during the bear market phase as they can position the portfolio in less cyclical stocks and thereby provide protection through active management. There is not a great deal of evidence that active managers can actually do this successfully, although advisers could position client money into active products specifically designed to provide client protection, such as the Troy Trojan Fund.

"In terms of market timing, one indicator has predicted every US recession since the 1970s – the shape of the yield curve."

Predicting the current position in the market cycle is almost impossible as a number of macroeconomic factors can drive the cycle. The graph below begins in 1973, when President Nixon broke the dollar's peg to gold.

Ever since this time, the US has experienced a recession every six to 10 years (as represented by the shaded grey bars), with a double-dip recession occurring in the early 1980s. Given the last recession occurred in 2008, this would suggest we are due another recession in the 2014-18 period. We are now a year into this period and, as every year passes, the chance of a recession increases.

US interest rates: Jan 1973- May 2015

In terms of market timing, there is one indicator that has predicted every one of these recessions – the shape of the yield curve. When the short-term rate (the blue line) has touched, or risen above, the level of the long-term rate (the green or red lines), a recession has occurred shortly afterwards. A flat yield curve, where the short-term rate is at or above the long-term rate, often reflects tight monetary policy.

Looking at current conditions, the short-term rate is still significantly below the long-term one. Monetary conditions remain exceptionally loose and we are only at the beginning of the rate-rising cycle, which may peak with rates around 3% given the low level of long-term rates. Based on the current yield curve, we might conclude that a recession is unlikely in the near future.
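The inversion signal described above reduces to a single comparison between the two rates. A minimal sketch, with hypothetical rate values rather than figures read from the chart:

```python
def curve_inverted(short_rate, long_rate):
    """Flag the flat/inverted yield curve condition: short-term rate
    at or above the long-term rate, historically a recession warning."""
    return short_rate >= long_rate

# Hypothetical snapshots (rates in percent):
# loose policy, short rate well below long rate -> no signal
assert not curve_inverted(0.25, 2.2)
# tight policy, short rate pushed above long rate -> warning
assert curve_inverted(5.0, 4.8)
```

The value of such a crude rule lies less in the code than in the discipline: it gives advisers a single observable condition to monitor before rotating from passive beta into capital-protection strategies.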

For advisers using passive management as a low-cost way to extract market beta, this still appears to be a valid approach for the time being. We are most likely in the latter part of the cycle, but high returns are still possible through a passive vehicle and an adviser may take the view these returns should be accessed at the lowest cost.

As the yield curve flattens, advisers may wish to move into active managers who can provide capital protection. This is more likely to be provided by specialist active managers who specifically target capital protection, as there is little evidence that active managers outperform during market corrections in general.

Stick with market cap weighting

In the first of a new series on passive investing, Chris Riley, investment research manager at RSMR, explains why market cap weighting is still the appropriate choice when investors and their advisers are looking for pure passive exposure

Alternative weighting measures for indexing are becoming more popular and, in addition, we have seen the proliferation of so-called ‘smart-beta products’ in recent years. While we are not against these trends, there are some important reasons why market cap weighting is still the appropriate choice for investors looking for pure passive exposure.

Transaction costs

"This is not to deny the existence of anomalies that can potentially be exploited by investors – it is just that such strategies are not really passive at all."

The market capitalisation of a share represents the current market value of the company, as calculated by the share price multiplied by the number of shares. Tracking the index by market capitalisation is unique in that no trading is required to take account of share price appreciation.

As market capitalisation increases through share price growth, it automatically receives a higher weight in the portfolio. As such, the passive manager only needs to take action to reflect index changes and corporate actions that affect the number of shares in issue, which are a relatively infrequent occurrence.
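This self-rebalancing property can be seen with a toy example. In the hypothetical three-stock index below, a tracker that simply holds shares in proportion to shares in issue stays at the index weights after any price move, with no trading at all:

```python
def cap_weights(prices, shares_outstanding):
    """Market-cap weights: price x shares in issue, normalised to sum to 1."""
    caps = [p * s for p, s in zip(prices, shares_outstanding)]
    total = sum(caps)
    return [c / total for c in caps]

# Hypothetical three-stock index
shares = [100, 200, 50]
w_before = cap_weights([10.0, 5.0, 20.0], shares)

# Prices move; the tracker's holdings (share counts) are untouched,
# yet its weights automatically equal the new index weights,
# so no rebalancing trades are needed.
w_after = cap_weights([12.0, 4.0, 22.0], shares)
```

Any alternative weighting scheme (equal weight, fundamental weight, volatility weight) breaks this property: after the same price move, the portfolio's drifted weights no longer match the target weights, and trades are required to restore them.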

Alternative weighting schemes imply a high level of turnover in order to implement the strategy successfully. Although the manager may choose to limit their trading, this would call into question whether the portfolio is actually being run in the manner implied by the strategy, as alternative forms of indexing necessarily demand higher levels of turnover than market cap weighting in order to produce the required factor ‘tilts’.

Neutrality of position

What is known as the ‘capital market line’ (CML) implies the most efficient portfolio is the market cap-weighted portfolio of all asset classes, mixed with varying amounts of cash (or gearing), depending on the risk profile of the investor. In practice, however, it is rare to find investors who invest in the manner implied by the theory.

Higher-risk investors tend to skew the market portfolio, with a higher weight in equity securities, rather than lever up a lower-risk portfolio. The CML would suggest that the Sharpe ratio from such an approach would be lower, as the underlying portfolio is less diversified than a levered version of the market portfolio.

While it is true that various anomalies have been discovered in the markets, there are competing theories as to why these anomalies exist. Risk-based explanations would suggest an alternative weighting scheme for the market portfolio to reflect the extra risk factors, but behaviour-based explanations suggest the anomalies are simply mispricing.

Under this view, harvesting the anomalies requires an active strategy that deviates from market cap weighting. In summary, therefore, the only neutral position an investor can take is the market cap weighting, and any other weighting scheme involves an active view on the part of the manager. 

The above is not to deny the existence of anomalies that can potentially be exploited by investors – it is just that such strategies are not really passive at all. They require regular rebalancing on the part of the investor to effectively capture exposure to the factors, and an active view relative to the current weights implied by market capitalisation. If the investor requires a pure passive approach, then market capitalisation is still the way to go.

Leading fund research and ratings group RSMR has responded to the issue of costs and tracking errors in passive funds by creating a Guide to passive investing as part of a new service focusing on passive funds. The guide, together with RSMR’s ratings and additional information to assist with fund selection, is free to advisers who have registered for RSMR’s Rated Fund Service.
