Thursday, May 04, 2006
On CPI-Adjusted Comparisons
by Anonymous
In Tom's last post, he writes: "Needless to say, the Post's reporter should have mentioned, in referring to past stamp prices, that consumer price index (CPI) inflation makes the 18-cent stamp of 1981 slightly more expensive in 2006 dollars (about 39.6 cents) than today's 39-cent stamp."
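The arithmetic behind such a comparison is simple: scale the old nominal price by the ratio of the CPI index then and now. A minimal sketch (the index values here are approximate CPI-U figures, and the exact result depends on which months' indexes one picks):

```python
def to_current_dollars(nominal, cpi_then, cpi_now):
    """Convert a past nominal price into current dollars via the CPI ratio."""
    return nominal * (cpi_now / cpi_then)

stamp_1981 = 0.18                   # first-class stamp price, 1981
cpi_1981, cpi_2006 = 90.9, 200.0    # approximate CPI-U index values

real_price = to_current_dollars(stamp_1981, cpi_1981, cpi_2006)
# With these index values, roughly $0.396 -- about the 39.6 cents Tom cites.
```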
I agree that the reporter might usefully have given some context to the proposed first-class stamp price. However, whenever I see statements like Tom's, I wonder: why 1981? Why not 1984, or 1975, or some other reference year? (This post is intended as a more general statement about the use of CPI-adjusted comparisons; Tom just kindly provided an example.)
It could be that the reference year is arbitrarily chosen. In this case, 1981 is 25 years ago, and 25 is a nice neat number: Americans like quarters. Presumably, the next time there is a price increase, a new 25-year comparison will be made, and eventually (given enough "draws"), it will all even out in the wash.
However, I think it's more common for producers-of-knowledge and reporters to choose a reference year that is a historical peak ("when gas prices were at their highest") or trough. In that case, CPI comparisons can make the current cost seem more or less expensive, depending on whether a peak or a trough is selected. (Edward Tufte, anyone?)
When the goal is not just to put the number into context but to justify a price increase (or, less commonly, a decrease), the logic of the reference-year strategy is even weaker. Sure, the price may be the same, in real terms, as it was in 1975, but how do we know that the good or service wasn't horribly overpriced in 1975, too?
The solution would be for producers-of-knowledge to give, and journalists to report, a historical average instead of a particular reference year. This is commonly done in financial reporting on mutual fund returns, for example. Granted, in the mutual fund case it takes SEC regulation to impose some degree of uniformity on how funds calculate historical average returns. In many other applications, though -- e.g., the cost of a first-class stamp -- the raw data are effectively in the public domain.
Ideally, we'd also get a report of the standard deviation. Given the "average" journalist's statistical training, to say nothing of Joe/Jane Reader's, this doesn't seem likely to happen soon. But I live in hope...
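The average-plus-spread report I'm asking for is a few lines of computation once the raw series is in hand. A sketch, using a hypothetical series of stamp prices and CPI index values (illustrative numbers only, not actual USPS or BLS data):

```python
import statistics

# Hypothetical (year, nominal stamp price, CPI index) series --
# illustrative figures, not actual USPS or BLS data.
history = [
    (1975, 0.10,  53.8),
    (1981, 0.18,  90.9),
    (1991, 0.29, 136.2),
    (2001, 0.34, 177.1),
    (2006, 0.39, 200.0),
]
cpi_now = 200.0

# Deflate each nominal price into current (2006) dollars.
real_prices = [price * cpi_now / cpi for _, price, cpi in history]

avg = statistics.mean(real_prices)
sd = statistics.stdev(real_prices)
print(f"historical average (2006 dollars): {avg:.3f} +/- {sd:.3f}")
```

Reporting the whole-series average and its dispersion, rather than one cherry-picked reference year, is exactly the point: no single peak or trough year can dominate the comparison.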
(Two costs of allowing guest bloggers are that they overstay their welcome and they don't confine themselves to the comments section. I'm going to try to go 2 for 2.)
Comments:
Through whatever convergence of space-time, I've heard like 25 references to Tufte in the last month. I have both of his books on "visual information." Am I the only one in the whole of academia who thinks they are not, in truth, very good?
Probably.
The Visual Display of Quantitative Information isn't the best so much as "the only"--but it's the standard until someone produces something better.
Tufte is rapidly becoming one of those scholars that everyone cites but nobody reads.
I think it's widely agreed that he missed the boat with his critique of PowerPoint. Certainly academics have done a good job of ignoring it.
Ken and Kim haven't even begun to outstay their welcome. Preparing the written testimony and documentation in my area of work historically has been the easy part.
A while back, PGL at Angry Bear did essentially what Kim suggested -- took a long-range view of 'real' postage rates w/o an obvious selection-of-comparison-years problem. (Which certainly can be a way of lying with statistics.)
The technical issue for long-range comparisons of "real" postal rates is that the sources of Postal Service cost inflation don't much resemble the sources of consumer price inflation.
I wrote a quick, tongue-in-cheek post on this same topic.
The New Inflation Protected Security: The 42¢ Forever Stamp
Agree with you that all economic comparisons seem to suffer from a form of selection bias, in that authors will pick dates, either on purpose or inadvertently, to make their points.