Those magic formula loose ends...
Too good to be true? The loose end that bothers me most about last week’s magic formula testing is the formula’s incredible success in the first three years of the test. It’s visible to the naked eye: the 12 portfolios formed between 31 March 2000 and 31 December 2002 beat the FTSE by an average of about 23%.

It wasn’t just the margin by which the formula beat the market in those years that surprised me; it was the fact that it beat the market at all, considering that between 2000 and 2002 the market was crashing. In his book, The Little Book That Still Beats The Market, Joel Greenblatt says:

> ...it turns out that much of the outperformance of our portfolios comes during the up months. On average during this 22-year period, the magic formula portfolios "captured" 95% of the S&P 500's performance during down months and 140% of its performance during up months.

In other words, in his experience most of the formula’s outperformance came in rising markets. In mine it did particularly well in one bear market.

I emailed Steve Lewis, the architect of Sharelockholmes, to ask whether he could think of any reason why the data might be less trustworthy in the earlier period. He replied with three possibilities, all stemming from the fact that the data in Sharelockholmes comes straight from company results and is not adjusted for:
- Changes to the accounting of pension deficits, which would affect the value used in the ROA calculation.
- The introduction of IFRS in 2005, which replaced the old GAAP accounting standards.
- Companies that delisted before 2003. Steve set up Sharelockholmes in 2003, using data going back to 1999 for companies still active in 2003. So Sharelockholmes doesn’t contain records for companies that delisted between 2000 and 2003, although it does thereafter.
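Greenblatt’s up- and down-month “capture” figures quoted earlier are straightforward to compute from monthly return series. A minimal sketch, using made-up monthly returns (not real S&P 500 or portfolio data) chosen so the ratios come out at his 140% and 95%:

```python
def capture_ratios(portfolio, index):
    """Average portfolio return as a fraction of the index's average,
    computed separately over the index's up months and down months."""
    up_p = [p for p, m in zip(portfolio, index) if m > 0]
    up_m = [m for m in index if m > 0]
    down_p = [p for p, m in zip(portfolio, index) if m < 0]
    down_m = [m for m in index if m < 0]
    up = (sum(up_p) / len(up_p)) / (sum(up_m) / len(up_m))
    down = (sum(down_p) / len(down_p)) / (sum(down_m) / len(down_m))
    return up, down

# Illustrative monthly returns, invented for this example
portfolio = [0.028, -0.019, 0.042, -0.038, 0.014]
index     = [0.020, -0.020, 0.030, -0.040, 0.010]
up, down = capture_ratios(portfolio, index)
print(f"up capture {up:.0%}, down capture {down:.0%}")
# → up capture 140%, down capture 95%
```

Note that a down-month capture below 100% means the portfolio loses slightly *less* than the market when it falls; the formula’s real edge, on Greenblatt’s numbers, is the 140% it captures on the way up.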
Don’t be too critical of Sharelockholmes: it’s the only affordable database I know that has the data I need, albeit unadjusted. Instinct tells me that none of these factors should radically influence the results. Pension accounting wouldn’t affect the ROC calculation, yet it exhibits a very similar pattern to the ROA version of the magic formula, and delistings in the 2003-2010 data didn’t affect the average performance of the magic formula strategies. But instinct is basically worthless, and reconstructing the database to include companies that delisted is beyond me.

Maybe we should ignore the data from 2000-2003. This would knock out most of the ‘dirty data’ at the expense of reducing the total number of portfolios from 40 to 28, reducing the time period of the test from eleven years to eight, and perhaps excluding a period of genuinely outstanding performance for the magic formula. In the most recent eight years of the test, the marginally superior ROA version of the magic formula beat the FTSE 100 by about 8% a year, as opposed to 13% for the whole period of the test; the ROC version beat it by 7% as opposed to 12%.

Which is closer to the truth? I honestly don’t know, but either way, the magic formula has worked in the UK. That’s good news.
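For readers who haven’t seen the mechanics: the ranking at the heart of both variants combines a cheapness rank (earnings yield) with a quality rank (return on capital, or return on assets in the ROA version), and buys the companies with the best combined rank. A sketch, with company names and figures invented for illustration:

```python
# Magic formula ranking sketch: rank companies by earnings yield and by
# return on capital, then sum the two ranks (lower total = better).
# All names and figures are invented, not real companies.
companies = {
    "Alpha plc": {"earnings_yield": 0.12, "roc": 0.30},
    "Beta plc":  {"earnings_yield": 0.09, "roc": 0.40},
    "Gamma plc": {"earnings_yield": 0.15, "roc": 0.10},
    "Delta plc": {"earnings_yield": 0.13, "roc": 0.35},
}

def magic_formula_rank(companies):
    by_ey  = sorted(companies, key=lambda c: -companies[c]["earnings_yield"])
    by_roc = sorted(companies, key=lambda c: -companies[c]["roc"])
    total = {c: by_ey.index(c) + by_roc.index(c) for c in companies}
    return sorted(companies, key=lambda c: total[c])

print(magic_formula_rank(companies))
# → ['Delta plc', 'Beta plc', 'Gamma plc', 'Alpha plc']
```

The combined rank rewards companies that are decent on both measures: here the hypothetical Delta, second on each rank, comes out top, ahead of the specialists that lead one rank but trail the other.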