Sotomayor and "Measurement Error"
Monday, August 10, 2009 at 3:57PM
Sean Wilson in Law & Ideology, Quantitative Ideology Models, Quantitative Methods, Segal & Spaeth, e-mail discussion

(sent to Law Courts re: the problems with arguments that "attitudinal scholars" make about their quantitative models)

... just a few points on measurement error. (I had thought these ideas had finally left the discipline).

First, the only segment of the docket being talked about here is what these people call "civil liberties cases," which is roughly half of the Court's workload in the data set. When you look at this from 1948-2004, the breakdown of this HALF of the set is: criminal cases (40%), civil rights (30%), First Amendment (16%), Due Process (8%), Privacy (2%), and Attorneys (2%). So, for simplicity, let's call this the "criminal-plus rights claimants."

When Jeff speaks of "measurement error," what he means is that the media-impression scores for any given justice are disappointing to him when set against the overall tendency of a justice's craft to have favored criminal-plus rights claimants. Apparently, what he would like is for his media-impression workshops to be able to really nail the rate at which criminal-plus rights claimants win their cases.

There are several obvious problems here:

1. The stuff in the media editorials (that Jeff codes) is not confined to or centered around issues in criminal law and civil rights (70% of the docket concern). And when it is, it is usually just a discrete, hot-button thing. Hence, the content of the one measure has nothing to do with the issues justices actually end up considering.

2. Also, the coding philosophy used here is confused. It indulges the idea of political values existing as exemplar issues in American political psychology, stuffed into one-dimensional space. (Guns, butter, taxes, abortion, big-case controversies like firefighters, speeches about presidential power, affirmative action positions, etc.). Anything mentioned along these lines gets you "coded." I think Jeff even codes based upon whether the journalist uses the word "liberal" in the editorial. Let's call this the "stereotype picture."

The problem here is that when justices decide cases before them that involve criminal-plus rights claimants, the issues in the cases very rarely involve "stereotype politics." Many times, the issues are a real snooze and make only a technical point. Or it's only a little extension here or a little take-away there. And so, you have this disjuncture between the philosophy of "liberal" being conjured on the one hand (the stereotype) and the thing you want to call "liberal" on the other, but in good faith can't. (At least not without playing games with language.)

3. What is curious about all of this is that the majority of justices for whom we have data do not have any real affinity for criminal-plus rights claimants one way or the other. Assuming most legal issues are tough, one would expect 40-60 to be the basic range. Of course, it wouldn't be during periods of innovation, where new rights paradigms emerge and then recede into an equilibrium. But even though we have this dynamic history in the data set, the majority of justices are really not that directional.

And it is this that causes the failure in Jeff's model, not "measurement error." Indeed, the only errors truly present in these models are specification errors (see points 1 and 2 above), errors with ecological inference (which I'll get to in a moment), and language games.

Really, if you think about it, Jeff's measures are lucky. He's got more measurement luck in the model than error. He's lucky that he has those 8 or so justices with high propensity to decide issues favoring criminal-plus rights claimants -- and those crazy scores of perfect liberalism and conservatism. Without those 100% or 0% scores coming out of those media-prejudice workshops that he recreates -- scores that attach themselves to justices with 80-20 propensities for criminal-plus cases -- there would be no model here at all.

In fact, just think about it. Use half the docket. Don't use all the justices. Get lucky on the rights revolution thing. Tell everyone you do better on the first 3 years of service (another cut). Then just cry measurement error for all the rest.

I think it's worth noting that Segal-Cover scores are statistically insignificant and otherwise extremely paltry for the entire docket (every decision for which researchers have data). They are also statistically insignificant for discrete years of voting. I think one was in the early 1990s (I wrote a paper mentioning it). Also, if you take away those justices who are around the 80-20 mark and who are no longer on the Court -- in essence, replicating today's Court -- you don't have anything to speak of.
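The arithmetic behind this point is easy to demonstrate with a toy simulation. The numbers below are my own illustrative inventions, not the actual data set: a cluster of moderate justices whose scores and voting rates are unrelated, plus a few justices with perfect 0/100 scores attached to 80-20 voting propensities. The extreme cases alone carry the correlation; drop them and it collapses to noise.

```python
# Toy sketch (hypothetical numbers): a few extreme observations can
# manufacture a strong score-to-votes correlation that disappears
# once those observations are removed.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical moderate justices: middling media scores, 40-60% liberal votes,
# deliberately constructed so score and voting rate are unrelated.
scores = [0.45, 0.50, 0.55, 0.40, 0.60, 0.48, 0.52, 0.58]
votes  = [0.49, 0.55, 0.44, 0.52, 0.50, 0.41, 0.57, 0.46]

# Add four justices with "perfect" 0/1 scores and 80-20 voting propensities.
scores_full = scores + [1.00, 1.00, 0.00, 0.00]
votes_full  = votes  + [0.80, 0.78, 0.20, 0.22]

print(f"all justices:     r = {pearson(scores_full, votes_full):+.2f}")
print(f"extremes removed: r = {pearson(scores, votes):+.2f}")
```

With the four extreme justices included, the correlation is strong; without them, it is near zero -- which is the sense in which the model lives or dies on a handful of 0/100 scores rather than on any general measurement of judicial ideology.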

It isn't measurement error; it is that the whole idea is faulty.
Regards and thanks.

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University

Article originally appeared on Ludwig.