Statistics are as much a part of cricket as leather and willow. As one of the pillars upon which the game is built, stats are – with the possible exception of baseball – woven into cricket’s fabric more profoundly than that of any other sport.
However, while batting and bowling performance are scrutinised by a wealth of numerical measures, cricket lacks effective yardsticks for fielding ability (or lack thereof). Beyond simple tallies of catches and run outs, there are still no widely used barometers for performance in the field.
As technology improves and the game’s capacity to capture advanced metrics expands, there’s no reason why cricket statistics should continue to overlook one of the game’s core disciplines.
Taking inspiration from Andy Zaltzman’s recent All-Time Fielding XI published in The Cricket Monthly and borrowing from established practice in other sports, here are some suggestions for metrics that could help to assign empirical value to the art of fielding.
Let’s start with one that should be easy enough to measure: Catching%.
If a fielder is presented with ten chances and drops three, their Catching% is 70%. It’s that simple. Exactly what constitutes a ‘chance’ would need formal definition (and possibly even grading), and it would be important not to penalise players for spilling chances that only materialised through the fielder’s athleticism (i.e. many other players wouldn’t have even got to the ball).
Perhaps that’s all a separate – and potentially tedious – conversation for another day.
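Leaving the grading question aside, the basic calculation really is trivial. Here's a minimal sketch in Python that treats every chance as equal:

```python
# A minimal sketch of Catching%, treating every chance as equal
# (the grading question is deliberately ignored here).
def catching_pct(catches_taken: int, chances: int) -> float:
    """Percentage of chances converted into catches."""
    if chances == 0:
        raise ValueError("no chances to measure")
    return 100 * catches_taken / chances

# Ten chances, three drops:
print(catching_pct(7, 10))  # → 70.0
```

The hard part, as noted, is everything upstream of those two numbers: deciding what counts as a chance in the first place.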
RSI (Runs Saved per Innings)
Catch and run out totals can be useful points of reference, but the ability to quantify how many runs a fielder saves their team – while much more complicated – would revolutionise the way we understand fielding performance.
Runs Saved per Innings (let’s call it RSI – not that kind – for short) would measure how many runs individual fielding actions saved a team, giving each player a cumulative value at the end of an innings. For example, a regulation stop on the boundary might save a team one run (i.e. the batsmen ran three but the ball was prevented from going for four), so the player’s RSI for that action is +1.
As with many statistics, there are plenty of grey areas that need resolving. Firstly, should regulation stops really be rewarded, or should we save positive values for actions that were not ‘expected’? Should a wicketkeeper be credited with saving four runs with a regulation take because the ball would have travelled to the boundary if he wasn’t there? Gut instinct would say no, but where should the line be drawn?
One possible way of getting around those problems would be to assign a probability to each fielding action. If a player has a 95% chance of making a regulation stop but makes only 90 of 100, they are underachieving. The same logic underpins the Expected Goals metrics now commonplace in football.
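That probability-weighted idea is easy enough to prototype. The sketch below is purely hypothetical: each action carries an assumed success probability, and the fielder is scored on actual successes minus 'expected' successes, Expected-Goals style. The probabilities are invented for illustration.

```python
# Hypothetical sketch of probability-weighted fielding value:
# score = actual successes minus the sum of success probabilities
# (the 'expected' successes). All probabilities here are invented.
def successes_above_expected(actions):
    """actions: iterable of (succeeded: bool, success_probability: float)."""
    actual = sum(1 for succeeded, _ in actions if succeeded)
    expected = sum(p for _, p in actions)
    return actual - expected

# 90 regulation stops made out of 100 chances, each rated 95%:
actions = [(True, 0.95)] * 90 + [(False, 0.95)] * 10
print(successes_above_expected(actions))  # ≈ -5, i.e. underachieving
```

A score around zero would mean the fielder is performing exactly to expectation; the sign and size of the deviation do the interesting work.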
Different fielding positions also require different skills and so may demand adjustable weightings to account for that. A slip fielder rarely saves their team runs, but their catching ability is essential. Conversely, someone in the covers may not take many catches but they are more likely to regularly save runs by diving to stop drives.
RCI (Runs Cost per Innings)
If we’re going to measure runs saved, then it makes sense to also track how many runs an individual fielder costs their team over the course of an innings.
Superficially, this seems easier to quantify than RSI. If a player drops a batsman on 10 and he goes on to make 25, the fielder has cost his team 15 runs. If a player misfields on the boundary, allowing the ball to go for four when the batsmen would otherwise have run three, one run is added to his RCI score. If a player is responsible for overthrows, those runs would also be factored into the overall RCI value.
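On that surface reading, RCI is just a running total. The sketch below implements exactly those rules; the event labels are mine, invented for illustration rather than any agreed taxonomy.

```python
# Naive RCI tally: a drop costs the runs the batsman goes on to add,
# a misfield costs the extra runs conceded, and overthrows count at
# face value. Event labels are illustrative only.
def rci(events):
    """events: list of (kind, runs_cost) tuples for one fielder."""
    return sum(runs for kind, runs in events
               if kind in ("drop", "misfield", "overthrow"))

# Drop on 10, batsman makes 25 (+15); three turned into four (+1);
# two runs of overthrows (+2):
print(rci([("drop", 15), ("misfield", 1), ("overthrow", 2)]))  # → 18
```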
RCI is simple enough on the surface, but there are plenty of situations that could complicate matters. For example, if a fielder catches the ball but is unable to stop their momentum before the boundary rope and so throws the ball back into the field of play, they have technically dropped the batsman but may also have made the correct decision and saved their team several runs.
Further to that point, if a fielder has dropped a batsman, is adding the remainder of the batsman’s runs to that fielder’s RCI really a proportionate response to the error? Brian Lara was dropped on 18 during his innings of 501 for Warwickshire in 1994, but does that mean it would be appropriate to suggest the 483 runs he added were the fault of the guilty fielder?
Another, potentially fairer, way of thinking about that particular situation could be to say that the drop costs the fielding team the batsman’s average above the total he already has (e.g. a batter’s average when he’s on 0 might be 30, but once he reaches 20 not out he may average 60, so a drop on 20 might have a cost of 40 runs).
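That 'average above the current score' idea can be made concrete with a lookup table of conditional averages. The sketch below uses the hypothetical numbers from the example above; real values would have to come from ball-by-ball data.

```python
# Drop cost as the batter's expected final score given their current
# score, minus the runs already on the board. The conditional averages
# below are the hypothetical ones from the example in the text.
def drop_cost(score_when_dropped, expected_final_score):
    return expected_final_score[score_when_dropped] - score_when_dropped

# A batter who averages 30 from nought but 60 once set on 20:
expected_final_score = {0: 30, 20: 60}
print(drop_cost(0, expected_final_score))   # → 30
print(drop_cost(20, expected_final_score))  # → 40
```

Under this scheme the fielder who dropped Lara on 18 would be charged something close to Lara's expected remaining runs at that point, rather than the full 483 he actually added.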
Fielding Efficiency
Using both the RSI and RCI measures, Fielding Efficiency is a +/- statistic that provides a simple overview of a player’s fielding performance.
If a player’s RSI was 16 and their RCI was 5, their Fielding Efficiency would be +11. This could be averaged out over a series or career to determine the impact a player has on their team’s fielding performance in the long term.
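Tying the two measures together is simple arithmetic. The sketch below computes the +/- for single innings and averages it over a series; the per-innings figures are made up for illustration.

```python
# Fielding Efficiency = RSI - RCI, averaged over innings.
# The per-innings figures below are invented for illustration.
def fielding_efficiency(rsi, rci):
    return rsi - rci

series = [(16, 5), (8, 12), (10, 3)]  # (RSI, RCI) per innings
per_innings = [fielding_efficiency(r, c) for r, c in series]
print(per_innings[0])                       # → 11, as in the example
print(sum(per_innings) / len(per_innings))  # series average
```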
Self-evidently, all the measures I’ve set out here are crude and theoretically leaky, but they demonstrate that there is scope for cricket to do a much better job of measuring fielding performance.
The creation of appropriate new metrics takes a great deal of time and energy (and can take even longer to be accepted by fans and members of the media), but that’s not to say it can’t be done. Cricket tends to attract excellent mathematical thinkers and it is surely only a matter of time before the game starts to institute new statistics to measure the most frequently overlooked of its three disciplines.