Monday, November 06, 2006
The Impact of Social Forces
by Anonymous
A new ranking of sociology departments is about to come out in the American Sociological Association's newsletter, and it's creating a bit of a buzz in some sociology blogs (see here and here).
Jeremy identifies one source of skepticism about the rankings, namely that the measure of productivity is based on three journals: the two undisputed flagship journals in the field, and a third journal, Social Forces, for which there is far less consensus about its impact, prestige, average quality, and so forth (see post and comments here). As Jeremy points out, because SF publishes more articles than the other two journals, it ends up having a greater impact on departmental productivity scores.
The end result is that departments whose faculty publish a lot in Social Forces and relatively little in AJS/ASR will land higher in the "objective" rankings of productivity than they will in more conventional departmental ranks based on "fuzzier" reputational measures. The fuzzier, reputational rankings, however, presumably give more implicit brownie points to departments whose faculty are publishing in the "big two."
Out of curiosity, I replicated the Hausmann et al. ranking exercise using only ASR/AJS articles from the last three years. Well, three years minus one issue, in the case of ASR: I couldn't find the table of contents for November's issue, 71(5), online yet. As in the Hausmann paper, I gave each ASR or AJS article a weight of 1, distributed equally across the institution or institutions where its author(s) are currently employed. A "department's" productivity is simply the sum of the weights.
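For concreteness, here's a minimal sketch of that scoring rule in Python. The articles and affiliations are made up, and I'm assuming that co-authors at the same institution don't earn it more than one share of a given article; the point is just the mechanics of splitting and summing the weights.

```python
# Minimal sketch of the scoring rule: each article is worth 1 point, split
# equally across the distinct institutions where its authors currently work;
# a "department's" productivity is the sum of its shares.
# The articles and affiliations below are hypothetical.
from collections import defaultdict

articles = [
    ["Stanford"],                          # sole author: Stanford gets 1.0
    ["Harvard", "Michigan"],               # two institutions: 0.5 each
    ["Stanford", "Stanford", "Columbia"],  # two distinct institutions: 0.5 each
]

scores = defaultdict(float)
for author_institutions in articles:
    distinct = set(author_institutions)
    share = 1.0 / len(distinct)
    for institution in distinct:
        scores[institution] += share

for institution, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{institution}: {score:.1f}")   # Stanford 1.5, the others 0.5 each
```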
(Now might be a good time to point out that I agree completely with many of the other critiques levelled at using journal articles to rank departments. It disadvantages "book" departments, obviously, but it also disadvantages departments whose faculty publish path-breaking, discipline-changing articles -- wherever they appear -- every once in a while rather than "incremental progress" papers at more frequent intervals. My intent in producing the table below is simply to see how much Social Forces affects the productivity scores; it should not be seen as an endorsement of the general validity of the enterprise.)
Without further ado, here (in Excel) are the full results. The top 11 "departments" (but see below) are as follows, with their productivity scores in parentheses:
- 1. Stanford (13.0)
- 2. Harvard (8.2)
- 3. Columbia (7.8)
- 4. Michigan (7.0)
- 5. Northwestern (6.2)
- 6. Princeton (6.0)
- 6. UW (6.0)
- 6. UNC (6.0)
- 9. Berkeley (5.8)
- 9. NYU (5.8)
- 9. Wisc (5.8)
Some observations:
1. Notre Dame, ranked 5th in the "big three" version, falls to a tie for 31st in the ASR/AJS version.
2. UNC drops from 2nd in the big-three version to a tie for 6th in the big-two version. This suggests that UNC does get a boost from Social Forces, but the faculty and students are also doing an extraordinarily good job of publishing in ASR/AJS.
3. There is far less concentration of articles by author or department than I, at least, would have guessed. The 228 articles in the data were produced by nearly 175 different authors employed by 136 institutions. (Granted, the data include a special AJS issue on agent-based modelling, in which over half of the papers were written by scholars who aren't employed by sociology departments and/or who work outside the US. Still, that's just one issue.) The average productivity score for a department is 1.8, and the median is 1.
4. It's misleading to attribute the productivity scores to departments, because they're really measures of a university's productivity in sociology. This may seem like a minor correction, but some departments benefit enormously from having strong social sciences elsewhere on campus, particularly in business schools. Between 33 and 50% of article productivity in the Chicago, Columbia, Stanford, Penn, and Northwestern "sociology departments" (and, of course, 100% of MIT's 17th-ranked productivity) comes from faculty and students who aren't paid by the school's sociology department.
For some purposes, the distinction doesn't matter. If you're a prospective graduate student using the rankings to get an idea of the intellectual resources in sociology at a given school, it may not make that much difference whether the resources are located in the soc department or somewhere else on campus. (Then again, it may -- the permeability of departmental boundaries can vary over time and space.) But if you think that a high "departmental" ranking is unequivocal evidence of a particular university's commitment to the sociology department, well, think again.
At the very least, we should acknowledge that the Hausmann method is not comparing apples to apples. The dispersion of sociologists across departments within an institution also makes me skeptical that there's any terribly accurate way to adjust for faculty size, as the Iowa team did in the late-1990s version of this exercise.
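Conceptually, the adjustment just means crediting each author's share to the unit that actually employs them and then ranking on the sociology-department portion alone. Here's a rough sketch with hypothetical records; the unit labels are stand-ins for however one codes affiliations, not anything taken from the actual data.

```python
# Rough sketch of the adjustment: credit each author's share to the unit that
# employs them (soc department vs. elsewhere on campus), then compare the
# university-wide total with the sociology-department-only total.
# All records below are hypothetical.
from collections import defaultdict

articles = [
    # (university, unit) for each author of each article
    [("Stanford", "soc"), ("Stanford", "b-school")],
    [("MIT", "b-school")],                            # no soc-department credit
    [("Harvard", "soc"), ("Michigan", "soc")],
]

by_unit = defaultdict(float)
for authors in articles:
    units = {(univ, unit) for univ, unit in authors}
    share = 1.0 / len(units)
    for key in units:
        by_unit[key] += share

university_wide = defaultdict(float)   # credit rolls up to the whole university
soc_only = defaultdict(float)          # credit kept only for the soc department
for (univ, unit), points in by_unit.items():
    university_wide[univ] += points
    if unit == "soc":
        soc_only[univ] += points

print(dict(university_wide))  # Stanford 1.0, MIT 1.0, Harvard 0.5, Michigan 0.5
print(dict(soc_only))         # Stanford 0.5, Harvard 0.5, Michigan 0.5; MIT drops out
```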
If you take the non-soc department faculty out of the productivity score for a department, here's what happens to the rankings (top 10 only):
- 1. Harvard (7.4)
- 2. Stanford (7.3)
- 3. Princeton (6.0)
- 4. Columbia (5.7)
- 4. Stanford b-school & ed school (5.7)
- 4. OSU (5.7)
- 7. UW (5.5)
- 7. Minn (5.5)
- 9. UNC (5.4)
- 10. Berkeley (5.3)
- 10. Wisc (5.3)
The basic lesson, though, is that seemingly small measurement choices can have a large impact. It's just a coincidence that, with the last constellation of measurement choices, my alma mater gets two sociology "departments" in the top five. Really.
Comments:
Thanks for these alternative rankings, Kim.
The way I read the Hausmann et al paper, they only looked at faculty who are in the Sociology department and not elsewhere on campus (your second analysis). But maybe I misunderstood that.
It's important to note that ASR 71(5) is key to Notre Dame's ranking because it features a sole-authored piece by one of their new hires, Omar Lizardo. While I'm sure this just highlights the arbitrariness of the data set's cut-off, it also makes a significant difference in where that department stands in the ASR/AJS ranks.
interesting stuff, kim. of course, i favor your second set of scores for completely sensible and non-self-interested reasons.
i love the big two and the big three, but i'd probably argue for casting a wider (and weighted) net. if weighted by impact factor, an ajs would count for three times as much as a jmf, but at least a jmf would count for something.
i'm guessing that the whole debate only makes sense in elite and near-elite institutions. i know there are many in the discipline who pay little attention to asr and ajs. in addition to books from a wide range of university presses, a lot of good and interesting stuff appears in jhsb, g&s and jmf and on and on, not to mention ratm, rhcp, and qotsa ...
Yes, of course with fewer articles counted, there's more fluctuation with the publication of one article. Small n, and all that.
A couple of people have pointed out that ASR 71(5) contains an article by Lizardo, one of Notre Dame's new hires, and that my (purely pragmatic) exclusion of 71(5) might explain why Notre Dame did so well in the big-three version. Assuming (a) none of the faculty in departments in positions 23-31 also published any fraction of an article in ASR 71(5), and (b) no departments below ND in the big-two rankings gained more than one publication point from the same issue, including ASR 71(5) would push Notre Dame to 23rd in the ASR/AJS-only rankings.
For whatever it's worth, it appears that there is also a small difference between the two papers in terms of assignment of institutional affiliation. Part of the shift in the two scores could also lie in this. For example, it appears that in the Hausmann et al paper several senior hires who transitioned this summer are included in their previous institution, while in this version they fall to their new institution. i could be wrong on that, but it seems like it could be *part* of the difference. (Am i correct in reading that you coded institutional affiliation on your own for this, and didn't just adopt Hausmann et al's IDs?)
Not that this would substantially change the differences, but if we are talking about 1-2 articles moving schools a bunch, i think that's at least part of the story. No? i do agree with Jeremy, though, that this would warrant a submission to (at least) Footnotes as well.