Any analysis of player performance must account for the various factors that can affect it, such as:

- Age (here, I correct to age 25)
- Home ballpark
- Strength of league (here, I correct to the NL strength of a player's rookie season)
- Usage (relief vs. starting pitchers)
- Teammates (players do not face their own team)

Using stats adjusted for the above factors yields a tighter correlation between a player's stats across multiple seasons. The following figures show the agreement between raw (blue) and adjusted (red) stats. You will see that, even with the adjustments, the similarity between statistics from one year to another decreases as the number of years apart increases.

The green dots, which are fairly flat, indicate that even the adjusted stats are not perfect predictors of player performance. There is an inherent 10% per year inaccuracy in these predictions. This is handled by reducing the weight given to a player's stats by 10% for each season. So, if a player had 100 plate appearances in each of the past two seasons, you're best off treating the most recent season as 90 appearances and the season before it as 81 appearances when predicting for the upcoming season.
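The 10%-per-season discount above can be sketched as a simple geometric weighting. This is a minimal illustration of the arithmetic, not the author's actual code; the function name and list layout are my own.

```python
def discounted_pa(pa_by_season, decay=0.10):
    """Discount plate appearances by `decay` per season of age.

    pa_by_season: PA counts, most recent season first.
    Returns the effective PA to assign each season when
    predicting the upcoming year.
    """
    return [pa * (1.0 - decay) ** (i + 1) for i, pa in enumerate(pa_by_season)]

# 100 PA in each of the past two seasons -> treat as 90 and 81:
weights = discounted_pa([100, 100])
print([round(w, 1) for w in weights])  # [90.0, 81.0]
```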

To evaluate if a player's performance has significantly changed from one year to another, one needs to boil down all of his statistics into a single number. To give the most accurate result, this number should translate directly into the odds of a team winning a game.

The metric I'll use is "wins created", in which each outcome is scored based on the average effect it has on a team winning. For example, the average single in a baseball game increases the team's odds of winning a game by 7%. So, I'll count a single as 0.07.
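The scoring works like a linear-weights sum. Here is a minimal sketch; only the single's value (0.07 wins) comes from the text above, and the other event weights are hypothetical placeholders, not the values actually used.

```python
# Hypothetical win values per event; only "single" (0.07) is from the text.
WIN_VALUE = {"single": 0.07, "double": 0.10, "triple": 0.13,
             "home_run": 0.17, "walk": 0.05, "out": -0.03}

def wins_created(events):
    """Sum the average win impact of each outcome in a player's line."""
    return sum(WIN_VALUE[ev] * n for ev, n in events.items())

season = {"single": 100, "double": 25, "home_run": 20, "walk": 60, "out": 400}
print(round(wins_created(season), 2))  # 3.9
```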

Another issue is that individual player statistics tend to be very random. A player who gets a bunch of triples one season may have just gotten lucky. So, in predicting future performance, one has to expect that some amount of a player's deviation from average performance was due to luck that won't be repeated the following year. As an example, hitters retain about 40% of their single-hitting rate from one year to the next (so, a player who hit .300 one year in a .260 league is most likely to hit around .276 the following). Pitchers, meanwhile, retain only about 20% of their single-hitting rate (a pitcher who allowed a .300 average one year would be expected to give up around .268).
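The regression-to-the-mean adjustment above amounts to shrinking an observed rate toward the league average by a retention factor. A minimal sketch, using the retention figures quoted in the text (~40% for hitters' single rates, ~20% for pitchers):

```python
def regress_to_league(observed, league_avg, retention):
    """Shrink an observed rate toward the league average.

    retention: fraction of the deviation from league average
    expected to persist into the following season.
    """
    return league_avg + retention * (observed - league_avg)

# A .300 hitter in a .260 league -> expect about .276 next year:
print(round(regress_to_league(0.300, 0.260, 0.40), 3))  # 0.276
# A pitcher who allowed .300 -> expect about .268:
print(round(regress_to_league(0.300, 0.260, 0.20), 3))  # 0.268
```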

With the above method laid out, what follows are player performance plots (through the 2007 season) for four players mentioned in the Mitchell Report as having used steroids. The seasons in question are plotted in red.

Overall, the players singled out in the Mitchell Report did not appear to gain much from having used steroids. There were 32 hitters listed, who played 63 seasons in question (minimum 300 PA), and their performance improved by 3.4% (with an uncertainty in the mean of 1.2%). The 16 pitchers played a total of 35 seasons, and had performance improvements of 3.3% (with an uncertainty of 1.5%). So, the result is statistically significant at a 2-3 sigma level, but amounts to something like 10 points of batting average. The one exception is the BALCO-tied players, who averaged a 10% increase in production.
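The "2-3 sigma" claim is just the mean improvement divided by its standard error. Plugging in the numbers quoted above:

```python
def significance_sigma(mean_improvement, uncertainty):
    """Express an improvement in units of its standard error."""
    return mean_improvement / uncertainty

# Hitters: 3.4% +/- 1.2%; pitchers: 3.3% +/- 1.5%
print(round(significance_sigma(3.4, 1.2), 1))  # 2.8
print(round(significance_sigma(3.3, 1.5), 1))  # 2.2
```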

Several caveats apply to this conclusion:

- The Mitchell Report identified years in which players reportedly purchased steroids, not the entire time of use. So, some of the "baseline" years could have also been steroid-use years.
- Fairly significant performance swings can be masked by the large statistical uncertainties, even with a full season of data.
- Several players identified began steroid use while injured, so any performance gains from drug use might be masked by performance degradation from the injury.

Another avenue is to go the reverse route -- search the statistical record for players with particularly unusual career trajectories. Specifically, I'm looking for two things:

- A three-year period with performance significantly better than the career average and the previous three years.
- A large deviation compared with an average player's career trajectory.
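The first criterion can be sketched as a sliding-window scan over a player's per-season numbers. This is an illustrative toy, not the actual screen used: the threshold, window handling, and data shape are all my own assumptions.

```python
def hot_stretch(rates, window=3, threshold=0.10):
    """Flag windows whose mean beats both the career mean and the
    preceding `window` seasons by more than `threshold`.

    rates: per-season performance numbers, in chronological order.
    Returns the starting indices of the flagged windows.
    """
    career = sum(rates) / len(rates)
    flags = []
    for i in range(window, len(rates) - window + 1):
        prev = sum(rates[i - window:i]) / window
        cur = sum(rates[i:i + window]) / window
        if cur - career > threshold and cur - prev > threshold:
            flags.append(i)  # first season of the hot window
    return flags

# A flat career with a late three-year spike:
print(hot_stretch([1.0, 1.0, 1.0, 1.0, 1.0, 1.4, 1.4, 1.4]))  # [4, 5]
```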

Looking over data from 1975 through 2007, the following players meet the above criteria.

- 1975-1977 Rod Carew
- 1986-1989 Mike Scott
- 1993-1998 Greg Maddux
- 1996-2000 Ken Caminiti
- 1999-2003 Pedro Martinez
- 2000-2002 Jason Giambi
- 2000-2002 Sammy Sosa
- 2001-2003 Bret Boone
- 2001-2004 Barry Bonds
- 2002-2004 Jason Schmidt

So, from 1975 through 1995, there were 2-3 instances. From 1995 through 2005, there were eight. This was clearly an unusual decade. That said, MLB-wide, there is no clear indication that any significant number of players had unusual variations from their career averages over this period. The next two plots show average player deviations from their career averages over the whole period (larger number = more deviant), and show no obvious jump starting in 1995.

- The player seasons implicated for steroid use in the Mitchell Report were better than those players' career baseline at about a two-sigma level (97% confidence that this is a real result). The improvement was about 3.4%, which would be about 10 points of batting average.
- From 1993 through 2004, there were an unusually high number of players having significant deviations from their career baselines. Most notable were the hitters. However, looking at the entire league, most players had fairly typical deviations from their career baselines over this period.

**Note**: if you use any of the facts, equations, or mathematical principles introduced here, you must give me credit.

Copyright ©2008 Andrew Dolphin