Slippage is an important issue for market-timing investors. It is a costly consequence of almost all transaction-heavy trading plans – certainly more of a concern than commission costs. Briefly defined, slippage is the difference between the price at which you intend to buy or sell an instrument and the price at which the order is actually filled. There are multiple reasons for this divergence, including the time that passes between order placement and execution and the relative liquidity of the instrument. Closing the gap between the price you expect and the price you actually get should be an important part of your trading practice.
But what about the slippage that occurs between a market signal and the trade order? This is an especially important consideration when following a system published by a market-timing advisory service. Can the rates of return achieved by a trading system be closely simulated? How does the model perform in real trading? Readers here will ask how the model Stock Trends Portfolio trading systems rate against actual or achievable trading results. The signals generated by the Stock Trends trading systems are issued after the close of trading on Friday, but the model portfolios register the Friday closing price as the transaction price. Can subscribers to the service attain the same returns registered by the model portfolios by trading post-signal in the following sessions?
First of all, a clear statement can be made about exactly duplicating any portfolio: it is highly improbable that the transactions of a trading strategy can be matched. Even high-frequency trading systems generating split-second orders cannot be reproduced exactly in the marketplace, because every fraction of a second represents a new market for an instrument. Of course, the differences in results may tend toward insignificance when the time between signal and order execution is small, but replicating an order at the same price is still highly improbable over numerous trades.
However, we would like to see that the results generated in a particular model are reasonably simulated in actual trading. What is acceptable in terms of variance between the model results and actual results will depend upon the overall profitability of the system. In other words, will a trading system provide returns high enough to make the differences in actual results acceptable? If a system gives you 5% returns, you won't be too happy with a 2% difference in actual results; if the system gives you 20% overall returns, 2% slippage might be acceptable.
Periodically, I am asked how the Stock Trends model portfolio performance holds up in a post-trigger market: what would the real trading results be for subscribers who want to mimic the trading activity directed by the published strategies? The answer to that question depends on the trade efficiency of the investor and on the type of stock traded. Poor trade order practice and illiquid, volatile stocks make a bad combination.
Certainly, placing ‘market orders’ on trades in this category of stock will very often produce less than optimal results. As a point of reference I will direct readers to the Stock Trends Handbook chapter on executing trades, but there are many other sources that can educate investors on how to properly make a trade. It is important that every investor understand that regardless of the source of a trade signal – an advisory service, your own analysis, or your taxi driver – the responsibility for executing the trade as optimally as possible is yours. A signal to buy or sell is not a signal to go to the market unprepared. It is essential to be tactical with every trade order.
For now, let’s assume that trade order best practice is being used. What can we learn about the differences in trading results possible for investors who go to market after the Stock Trends Portfolio trade signals are issued? I will restrict this analysis to the weekly data series I maintain, and will only seek to approximate possible results on the assumption that the trade is executed in the following week. As a result, any differences highlighted in this analysis might in fact be smaller if the trade data were for the following trading day instead of the following trading week. Nevertheless, I believe this analysis should give us a good idea of what kind of replication of trading prices is likely, and whether there is a significant difference in results from the model portfolios published here.
How can we truly approximate the actual trades made? Even if I presented trade tickets for each trade, it would not be an accurate representation of the population’s (every subscriber who transacted on the signals) trade record. It is necessary to approximate with a central measure: to estimate the price toward which a transaction would have tended. If we take the midpoint of the stock’s price range in the following period, we can estimate a central point, although without actual intra-period data to show the distribution of prices it is an imprecise estimate. For instance, a stock may have traded mostly above the range midpoint. This analysis cannot tell us how individual stocks traded on a daily or intra-day basis.
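The midpoint proxy described above can be sketched in a few lines. This is a minimal illustration, assuming weekly high/low data are available for the week after the signal; the function names are my own and not part of the Stock Trends service.

```python
def midpoint_fill(week_high: float, week_low: float) -> float:
    """Estimate a central fill price for the week after a signal
    as the midpoint of that week's trading range."""
    return (week_high + week_low) / 2


def post_trigger_slippage(signal_price: float, week_high: float, week_low: float) -> float:
    """Percentage difference between the estimated post-trigger fill
    and the published (signal) price. Positive means a higher fill."""
    fill = midpoint_fill(week_high, week_low)
    return (fill - signal_price) / signal_price * 100


# Illustrative numbers: a signal at a Friday close of 20.00, with the
# stock trading between 19.50 and 21.50 the following week, implies a
# midpoint fill of 20.50 - a 2.5% difference from the published price.
```

As the article notes, the midpoint is only a central tendency; any individual fill could land anywhere in the week's range.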
If we take a sample (data, csv) of over 5,200 transactions directed by the six active Stock Trends model portfolios currently published, and extract trading data for the week following the trade signals (both buy and sell), we get the following statistics for post-trigger trades made at the midpoint of the weekly range:
| Statistic | Value |
| --- | --- |
| Mean of the difference between the post-trigger price change (%) and the published price change (%) | -0.18 |
| Median of the difference between the post-trigger price change (%) and the published price change (%) | 0.19 |
| Median absolute deviation of the price difference from the published trigger price (%) | 3.84 |
This tells us that approximately 50% of the transaction prices obtained in the post-trigger period (the following week) at the midpoint of the price range had overall trade results within a range from 3.65% lower to 4.03% higher than the published price changes. Roughly restated, if an investor bought or sold the stocks posted in the Stock Trends portfolio transaction reports in the following week, and obtained a price near the midpoint of the weekly range, the overall results of the trades would differ only marginally from the posted results.
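The summary statistics above are straightforward to reproduce. Here is a hedged sketch using only the Python standard library; the input is assumed to be a list of per-trade differences (post-trigger % change minus published % change), and the helper name is illustrative.

```python
import statistics


def slippage_band(diffs):
    """Summarize per-trade differences between the post-trigger price
    change (%) and the published price change (%), including the
    median +/- MAD band that covers roughly half of the trades."""
    med = statistics.median(diffs)
    # Median absolute deviation: median of |x - median(x)|.
    mad = statistics.median([abs(d - med) for d in diffs])
    return {
        "mean": statistics.fmean(diffs),
        "median": med,
        "mad": mad,
        "band": (med - mad, med + mad),  # ~50% of trades fall in here
    }


# With the published figures (median 0.19, MAD 3.84) the band works
# out to roughly (-3.65, 4.03), which is the interval interpreted above.
```

By construction of the MAD, about half of the absolute deviations are at or below it, which is why the median ± MAD band captures roughly 50% of the trades.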
Of course, there will be varying experiences on individual trades, and some traders will obtain better or worse prices than the midpoint of the range. A mythical trader who somehow managed to enter each position at the lowest price and exited at the highest price post-trigger would have experienced a 47% improvement in overall returns. Conversely, a mythical trader who somehow managed to enter each trade at the highest price and exited at the lowest price post-trigger would have experienced a 44% drop in overall returns.
These two highly improbable scenarios only serve to expose the range of experiences that are possible given the broad parameter of this analysis – that the trade takes place within the trading week following a buy/sell signal. The ranges expressed here would likely be tighter if the analysis were done solely on trades the day following the buy/sell trigger. One hopes that the typical trader would tend toward the midpoint (although it would also be a mythical trader who makes all trades at the midpoint of a range), and that the results experienced over an extended period would be in line with those published in the model portfolio reports.
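The two mythical traders described above define per-trade bounds that can be sketched directly from the post-trigger weekly ranges. This is an illustrative calculation under the same assumptions as the rest of the analysis; the function name and example prices are my own.

```python
def fill_bounds(entry_low: float, entry_high: float,
                exit_low: float, exit_high: float) -> tuple[float, float]:
    """Best- and worst-case returns (%) for a single round trip, given
    the post-trigger weekly ranges for the entry and exit weeks.

    Best case: buy the entry week's low, sell the exit week's high.
    Worst case: buy the entry week's high, sell the exit week's low.
    """
    best = (exit_high - entry_low) / entry_low * 100
    worst = (exit_low - entry_high) / entry_high * 100
    return best, worst


# Illustrative numbers: entry week range 10.00-11.00, exit week range
# 12.00-13.00. Best case: (13 - 10) / 10 = +30%; worst case:
# (12 - 11) / 11 = +9.1%. Any real fill lands between the two.
```

Aggregated over every trade in the sample, these per-trade extremes are what produce the 47% improvement and 44% drop quoted above.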
This exercise serves two purposes. First, it reaffirms that post-trigger trading can reasonably simulate the model performance. More importantly, it reminds investors trading their own accounts to always engage the market tactically, use limit orders regularly, and make every effort to secure the best price possible on every trade.