Saturday 31 October 2015

How long before a batting average means something?

Since I last posted, England have battled through two thirds of a test series against Pakistan, acquitting themselves much better than I, at least, imagined they would, but still coming out behind. The struggles of England's middle order look set to lead to a test comeback for James Taylor.

In recent years, England's selectors have been praised for giving players a decent run in the side when called up- allowing them more than one or two chances to show what they can do. I assume, and hope, that the same treatment will be extended to Taylor and that, barring injury, he'll also play in the South Africa tour.

These ruminations lead me on to today's question: if we judge a batsman by their batting average, how many matches will it actually take before that average fairly reflects their ability?

I think most of us understand that quoting someone's batting average after two games isn't going to provide terribly strong evidence either way about how good they'll be in the long term. But how long should we wait before we can suppose that their average gives a strong clue as to their underlying run scoring prowess? In my experience, the conventional wisdom might place this number somewhere around 10 matches or a little more, depending on who you talk to.

To try and answer this question, I've attempted something a little different to my previous posts. Instead of using data from past test matches, I wrote a computer simulation of the run scoring output of two (fictional) batsmen of known ability and looked at the distribution of their averages as a function of the number of innings played. The reason for doing this is that it allows me to make a controlled 'experiment' in which I know how good the players in my simulation 'should' be and can see the degree to which statistical fluctuations obscure that in a finite sample of innings.

In my previous post, I argued that a player's vulnerability to getting out is only weakly dependent on how many runs they already have- being slightly elevated right at the very beginning of their innings (and maybe also a little elevated immediately after reaching 100).

I simulated the output of two players:

Player A had a 12% chance of getting out before reaching 5 and an 8% chance of getting out before scoring the next five runs thereafter. To put these numbers in context, this is very good- in the long run Player A could expect to average around 55.

Player B had a 16% chance of getting out before reaching 5 and a 12% chance of getting out before scoring the next five runs thereafter. This is rather more mediocre- in the long run Player B could expect to average around 35.
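A minimal sketch of this kind of simulation, in Python, might look like the following (the function name is mine, purely for illustration; I'm assuming that every innings ends in a dismissal and that runs from the block in which the batsman is dismissed don't count, so scores come in multiples of 5):

import random

def simulate_innings(p_first, p_later, step=5):
    # One innings: the batsman is dismissed with probability p_first
    # somewhere in the first 5-run block, and with probability p_later
    # in each subsequent block. Only completed blocks count.
    runs = 0
    p_out = p_first
    while random.random() > p_out:  # survived this block
        runs += step
        p_out = p_later
    return runs

# Sanity check for Player A: mean score over many innings
scores = [simulate_innings(0.12, 0.08) for _ in range(200_000)]
print(sum(scores) / len(scores))  # roughly 55

With no not-outs the long-run average is just the mean score, which under these assumptions is 5 × (chance of surviving the first block) / (chance of dismissal in each later block): 5 × 0.88/0.08 = 55 for Player A and 5 × 0.84/0.12 = 35 for Player B, matching the figures above.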

The two graphs below illustrate the probability distribution of batting averages for each player as a function of the number of innings they were given in the simulation. The green points represent their median average after that number of innings and the red and blue points are the 10th and 90th percentiles respectively. The region between the blue and red points reflects their likely range of batting averages after a given number of innings.

What's striking is that even after 50 innings the distributions are still quite broad- particularly for the better player (Player A). After 50 innings Player A has a 10% chance of averaging more than 66, making him look like a potential legend, and also a 10% chance of averaging lower than 45, making him look much more run of the mill.

Player B meanwhile has a 10% chance of averaging higher than 42 or lower than 28- the difference between fairly good and pretty poor.
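Those percentile curves can be reproduced by simulating lots of careers of a given length and taking percentiles across them- something like this sketch, which reuses the illustrative simulate_innings function above:

import numpy as np

def averages_after(n_innings, p_first, p_later, n_careers=20_000):
    # Simulate n_careers careers of n_innings each and return the
    # final batting average of each (no not-outs, so the average is
    # simply total runs divided by innings).
    return np.array([
        sum(simulate_innings(p_first, p_later) for _ in range(n_innings)) / n_innings
        for _ in range(n_careers)
    ])

for n in (10, 20, 50):
    avgs = averages_after(n, 0.12, 0.08)  # Player A's parameters
    p10, p50, p90 = np.percentile(avgs, [10, 50, 90])
    print(f"{n} innings: 10th {p10:.0f}, median {p50:.0f}, 90th {p90:.0f}")

Plotting the 10th percentile, median and 90th percentile against the number of innings gives the red, green and blue points in the graphs.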

These averages are converging to a fair reflection of the players' abilities but they are doing so rather slowly- a hint that even after a fairly decent number of tests we need to base our judgements of players on more than their bare batting average.

Imagine you were a selector who had brought these two imaginary players into your imaginary team and, after a fixed number of tests, had to choose between them (perhaps you have a star player about to come back from injury and have to drop someone to fit him in). Would their averages be likely to guide you to the right decision?

The graph below shows the probability that the very good player A has a better average than the pretty mediocre player B after a given number of innings.


After 10 innings there's around an 80% chance that the averages will correctly reflect that player A is better than player B. That sounds kind of okay, until one remembers that selection decisions are often- necessarily- based on fewer innings than that, and that these two players are really not evenly matched at all- in the long run one would average a full 20 runs higher than the other.
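That figure is easy to estimate by the same means- simulate both careers side by side many times and count how often Player A comes out ahead. Again this is just a sketch, reusing the illustrative simulate_innings function above:

def prob_A_ahead(n_innings, n_trials=20_000):
    # Estimate the chance that Player A's average beats Player B's
    # after n_innings each, by straight Monte Carlo.
    wins = 0
    for _ in range(n_trials):
        avg_a = sum(simulate_innings(0.12, 0.08) for _ in range(n_innings)) / n_innings
        avg_b = sum(simulate_innings(0.16, 0.12) for _ in range(n_innings)) / n_innings
        if avg_a > avg_b:
            wins += 1
    return wins / n_trials

print(prob_A_ahead(10))  # roughly 0.8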

Of course, in reality selectors have a lot more information available to them than just batting averages. Anyone can look up a player's average, but selectors must exercise their judgement on a player's technique, temperament and suchlike using what they've seen in both matches and training. They have to do so because they don't have the luxury of letting a player play 20 test matches before making a decision about whether they're good enough- which is probably the minimum they would need to justify a decision based on batting average alone. Looking at Gary Ballance's batting average of 47.76 after 27 innings, it's hard to avoid the conclusion that he's been hard done by not to be in the team right now. And maybe he has been- but one can't be sure of that from his average alone.

It may well be the case that one could find a better way of estimating a batsman's ability from their stats after a small number of tests, which would converge on something fair a bit faster than simple batting average. On the other hand, fans like me should perhaps give selectors a break sometimes- they have rather complicated decisions to make, with rather limited and noisy information.
