Monday
Aug 10, 2009

Sotomayor and "Measurement Error"

(sent to Law Courts re: the problems with arguments that "attitudinal scholars" make about their quantitative models)


... just a few points on measurement error. (I had thought these ideas had finally left the discipline).

First, the only segment of the docket being talked about here is what these people call "civil liberties cases," which is roughly half of the Court's workload in the data set. When you look at this from 1948-2004, the breakdown of this HALF of the set is: criminal cases (40%), civil rights (30%), First Amendment (16%), Due Process (8%), Privacy (2%), and Attorneys (2%). So, for simplicity, let's call this the "criminal-plus rights claimants."
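For concreteness, the arithmetic above can be sketched as follows (the percentages are the ones quoted for the 1948-2004 data set; they sum to about 100% of the half because of rounding):

```python
# Breakdown of the "civil liberties" half of the docket, 1948-2004,
# using the percentages quoted above.
half_docket = {
    "criminal": 40,
    "civil rights": 30,
    "First Amendment": 16,
    "Due Process": 8,
    "Privacy": 2,
    "Attorneys": 2,
}

# Each category's share of the FULL docket: the civil-liberties set is
# roughly half of the Court's workload, so halve each percentage.
full_docket_share = {k: v / 2 for k, v in half_docket.items()}

print(sum(half_docket.values()))      # 98, i.e. ~100% of the half after rounding
print(full_docket_share["criminal"])  # 20.0 -- criminal cases are a fifth of the whole docket
```

Note that criminal plus civil rights cases alone make up 70% of this half, which is the "70%" figure referred to below.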

When Jeff speaks of "measurement error," what he means is that the media-impression scores for any given justice are disappointing to him when looking at the overall tendency of a justice's craft to have favored criminal-plus rights claimants. Apparently, what he would like is for his media-impression workshops to be able to really nail the rate at which criminal-plus rights claimants win their cases.

There are several obvious problems here:

1. The material in the media editorials (that Jeff codes) is not confined to, or centered on, the issues in criminal law and civil rights that make up 70% of that docket. And when it is, it is usually just a discrete, hot-button thing. Hence, the content of the one measure has nothing to do with the issues justices actually end up considering.

2. The coding philosophy used here is also confused. It indulges the idea of political values existing as exemplar issues in American political psychology, stuffed into one-dimensional space (guns, butter, taxes, abortion, big-case controversies like the firefighters case, speeches about presidential power, affirmative-action positions, etc.). Anything mentioned along these lines gets you "coded." I think Jeff even codes based upon whether the journalist uses the word "liberal" in the editorial. Let's call this the "stereotype picture."

The problem here is that when justices decide cases before them that involve criminal-plus rights claimants, the issues in the cases very rarely involve "stereotype politics." Many times, the issues are a real snooze and make only a technical point. Or it's only a little extension here or a little takeaway there. And so you have this disjuncture between the philosophy of "liberal" being conjured on the one hand (the stereotype) and the thing you want to call "liberal" on the other, but in good faith can't. (At least not without playing games with language.)

3. What is curious about all of this is that the majority of justices for whom we have data do not have any real affinity for criminal-plus rights claimants one way or the other. Assuming most legal issues are tough, one would expect 40-60 to be the basic range. Of course, it wouldn't be during periods of innovation, where new rights paradigms emerge and then recede into an equilibrium. But even though we have this dynamic history in the data set, the majority of justices are really not that directional.

And it is this that causes the failure in Jeff's model, not "measurement error." Indeed, the only errors truly present in these models are specification errors (see points 1 and 2 above), errors with ecological inference (which I'll get to in a moment), and language games.

Really, if you think about it, Jeff's measures are lucky. He's got more measurement luck in the model than error. He's lucky that he has those 8 or so justices with a high propensity to decide issues favoring criminal-plus rights claimants -- and those crazy scores of perfect liberalism and conservatism. Without those 100% or 0% scores coming out of those media prejudice workshops that he recreates -- scores that attach themselves to justices with 80-20 propensities for criminal-plus cases -- there would be no model here at all.
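The "measurement luck" point can be illustrated with a toy simulation. All of the numbers below are invented for illustration, not taken from the actual data set: the idea is simply that a few extreme "anchor" justices (perfect 0/100 scores paired with 80-20 voting propensities) can manufacture a strong overall correlation even when the remaining justices, taken alone, show essentially none.

```python
# Toy illustration: a handful of extreme anchor points can drive a
# correlation that the moderate justices do not themselves exhibit.
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Ten hypothetical "non-directional" justices: scores and liberal-vote
# percentages both hover around 50, with no systematic relationship.
mod_scores = [40, 45, 50, 55, 60, 48, 52, 44, 56, 50]
mod_votes  = [52, 47, 55, 44, 50, 58, 43, 51, 49, 54]

# Five hypothetical "anchor" justices: perfect 0/100 scores paired
# with 80-20 style voting propensities.
anchor_scores = [0, 0, 100, 100, 100]
anchor_votes  = [22, 18, 78, 82, 80]

r_moderates = pearson(mod_scores, mod_votes)
r_all = pearson(mod_scores + anchor_scores, mod_votes + anchor_votes)

print(round(r_moderates, 2))  # weak: about -0.30
print(round(r_all, 2))        # strong: about 0.95
```

Drop the five anchors and the "model" evaporates; keep them and the overall fit looks impressive, regardless of what the moderates are doing.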

In fact, just think about it. Use half the docket. Don't use all the justices. Get lucky on the rights revolution thing. Tell everyone you do better on the first 3 years of service (another cut). Then just cry measurement error for all the rest.

I think it's worth noting that Segal-Cover scores are statistically insignificant and otherwise extremely paltry for the entire docket (every decision for which researchers have data). They are also statistically insignificant for discrete years of voting. I think one was in the early 1990s (I wrote a paper mentioning it). Also, if you take away those justices who are around the 80-20 mark and who are no longer on the Court -- in essence, replicating today's Court -- you don't have anything to speak of.

It isn't measurement error; it is that the whole idea is faulty.
Regards and thanks.

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University
Redesigned Website: http://seanwilson.org/
SSRN papers: http://ssrn.com/author=596860
Twitter: http://twitter.com/seanwilsonorg
Facebook: http://www.facebook.com/seanwilsonorg
New Discussion Group: http://seanwilson.org/wittgenstein.discussion.html

Monday
Aug 10, 2009

Sotomayor and Liberalism in the Strange World of Political Science

(sent to LawCourts in reply to an inquiry about how "liberal" Sotomayor is expected to be based upon newspaper-confirmation scores. I am a critic of how this whole enterprise works)


Mark:

Whatever may be anyone's private views, the prediction announced here is a function of a model Jeff uses. The logic of the model doesn't work the way you suggest. It doesn't offer predictions in the sense of prognostication. The output is solely a function of the input and the logic of the mathematical specification.

None of the variables account for the points you raise. The dependent variable coded by graduate students is rather "blind." If the decision favors a criminal defendant in a criminal case, for example, it gets thrown in the "liberal bin." If not, it gets the other one. The model doesn't consider what the substantive issue was, whether it shifted over time, whether it was mundane, whether it was big or small, whether the republican party actually supported it, whether anyone even cared, whether it instrumentally created "conservative doctrine" while disposing in favor of the defendant, and so forth. In short, there is no assessment of qualitative factors.
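A minimal sketch of the kind of coding rule being described (the field names and the function here are hypothetical, purely to show how "blind" the binary bin is to everything qualitative):

```python
# Hypothetical sketch of the binary dependent-variable coding described
# above: the only input that matters is which party the disposition
# favored. The substantive issue, its importance, and whether the opinion
# built "conservative doctrine" on the way to the result are all invisible.
def code_vote(case):
    """Return the coded bin for a decision (illustrative, not the real codebook)."""
    return "liberal" if case["favored_claimant"] else "conservative"

# Two very different decisions receive identical codes:
narrow_technical_win = {
    "favored_claimant": True,
    "issue": "mundane procedural point",
}
doctrine_shifting_win = {
    "favored_claimant": True,
    "issue": "creates conservative doctrine while ruling for the defendant",
}

print(code_vote(narrow_technical_win))   # liberal
print(code_vote(doctrine_shifting_win))  # liberal -- same bin, no nuance
```

Both decisions land in the "liberal bin" even though, substantively, they may point in opposite directions.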

Furthermore, because about half of the docket is excluded from the prediction -- as are various justices from before the ascendancy of Earl Warren -- the prediction is apparently limited to civil liberties cases, and assumes (as all of these sorts of models do) that the past controls the future. (The past meaning only Warren Court forward).

A couple of important points to keep in mind. What drives these models statistically is the presence of justices who had rather extreme tendencies to have decided for or against civil liberties claimants during the Warren and anti-Warren periods of the Court. And more specifically, to have decided criminal cases, which comprise the bulk of the cases that are said to be "civil liberties." (You really could call it the criminal cases and remaining civil liberties docket if you wanted to). Hence, what drives the model are justices like Rehnquist who decided in favor of criminal-plus claimants about 20% of the time (roughly) and the big-time Warren justices, some of whom hit the 80% mark.

Today's justices are more around the 33-65 range -- excluding, I think, Thomas, who is the only one still in the 20s the last time I checked (a few years ago). [I quit doing this work for obvious reasons]. I think Scalia is around 29 or something. (Even he may have made it to 30, I don't know). Jeff's model indeed assumes that the old guard is still there when the prediction is made, because all that the model sees are a bunch of numbers in Stata.

Even so, you will note that the model only produces a 62. Why? Because most justices for whom we have data are not that directional when it comes to deciding for or against the claimants. The non-directional justices clog the model.

What is interesting about this is that if Sotomayor does decide whatever civil liberties cases she does -- even if they are not as heavy in criminal cases or anything like the ones from the 60s and 70s -- it makes no real difference. If she comes out a wild 78 or 80 (like the good old days), you can say "the newspaper scores were right about her." But if she winds up at 60, you can say "the model was right." And if she is anything near this side of 50, the industry continues. Next time, the model simply shaves the prediction for someone like her to a 60 or 59 or something (shaving for the mistake). So long as Rehnquist and the Warren people are in the Stata machine, and so long as the docket is shaved, it can't lose. (Plus, take away the old justices).

So in conclusion, there is in fact nothing to the prediction that considers anything you raised.

Regards and thanks.

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University

Monday
Aug 10, 2009

Sotomayor and Liberalism in the World of Ideology Scholars

(sent to LawCourts)


someone wrote: "Yes, according to my editorial scores, Sotomayor is the most liberal justice confirmed since Marshall, but remember, there have only been two Democrats confirmed since then, Ginsburg and Breyer."


... let's be clear. According to the "editorial scores," she is thought to have only the score value generated by that procedure. That's all the measure says. Scientifically, the scores do not measure her "ideology" (whatever that means).

And with respect to the announced prediction of what political science calls her "votes," we probably should note a few disclaimers:

1. If you include the whole docket and all of the justices for which there is data, the predictive relationship is extremely paltry.

2. Even if you cherry-pick the docket and the justices, whatever results you get are fundamentally driven by the few justices with extreme propensity for direction under the "liberal index" -- most of whom are no longer there. And even this predicts that Sotomayor will be closer to neutrality (50%) than her alleged reputation (78). And of course, you don't need any newspaper scores to guess that Sotomayor will be in the 60s -- the safe money already has her around 65. (Flipping coins puts her at 50. The PRE on honest logit models was never impressive with these scores).

3. One of the biggest problems these models have is their misleading conclusions. The dependent variable (the so-called "liberal index") is quite peculiar because it doesn't have any empirical or substantive relationship to true "liberal voting." It's just called that by people pretending to do the "science." After all, the great majority of the coding doesn't concern the exemplar issues that make up the belief spectrum in the political system. And it doesn't concern issues that appear in campaigns or the culture war and so forth. In fact, one has to have a great deal of ideology oneself in order to see or call this measure "liberal voting." You have to think the way a creation scientist does when studying the world. In fact, one might think of political scientists who try to catch "ideology" this way as being sort of "ideology-creation scientists."
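On the PRE point raised in (2) above: proportional reduction in error compares the model's classification errors against a null model that simply guesses the modal category every time. A hedged illustration with invented numbers (not the actual published figures):

```python
# Proportional reduction in error (PRE) for a binary "liberal vote" model.
# All numbers are invented for illustration only.
total_votes = 100
liberal_votes = 62       # the modal category

# Null model: always guess the modal category ("liberal").
null_errors = total_votes - liberal_votes   # misses every non-liberal vote: 38

# Suppose the logit model misclassifies 30 votes.
model_errors = 30

# PRE: the fraction of the null model's errors that the model eliminates.
pre = (null_errors - model_errors) / null_errors
print(round(pre, 2))     # 0.21 -- only a modest improvement over guessing
```

The point is that when the baseline already sits well above 50% (because most votes fall in one bin), even a "significant" model may shave off only a small fraction of the errors that naive guessing makes.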

When scientists study the external world, they develop rigid designators for the things in the world they have "pinned down." I have always found it extremely curious that in ideology-centered, quantitative political science, no one attempts to talk precisely and honestly about the empirical things they actually observe. If they did, they would find that Sotomayor has a 78 on what appears to be some sort of exemplar-conceived political-issue barometer produced by a small media/journalist workgroup acting within a discrete period in American politics. And that this thing, when combined with other such prejudice workshops, has some sort of relationship to what we might call a rather badly conceived yes-no claimant-priority arrangement. (And only when dockets and justices are constructed.)

But that's not the way it comes out. It always comes out as "the ideology going in was measured by the scientists," and "the scientists confirmed that ideology was coming out on the other side." I mean, it reminds me of one who would say, "first they were baptized, and then they went to heaven."

Regards and thanks.

Monday
Jul 20, 2009

On Why "Judicial Activism" Is Empty Nonsense

(sent to lawcourts)

... here's the problem. The term "activism" is a language game whose uses share only one thing in common: facile grammar, deployed largely for rhetorical purposes by people not wanting to say very meaningful things about justice casuistry. You cannot repair these problems by counting things and throwing them in Stata. Neither good art critics nor scientists would begin any work from the starting point of "who's the activist?"

Another thing that is not understood here is that the term "activism" frequently says something normative about "law," but does so silently. The term can specifically imply that a person is licentious or aggressive with justification. However, if you ever ask the person what proper legal justification is, or what makes for "non-activist" legal justification, it amounts to something not defensible in philosophy of law. The trouble with quantitative political scientists and the lawyers who deploy this vocabulary, therefore, is that they are taking poor philosophic positions. And when they do this, they quite often don't realize it, while wrongly believing that philosophy is either art or opinion. The political scientists would be even worse off if they put forth these views while believing their work was "science." You cannot make science out of a poor grammar.

It would be very helpful if people using these terms could do one small favor. When calling someone an activist, an attitude-driven judge, a policy judge, an outcome-focused judge -- would you please give examples of (a) what judging would look like if this weren't so; and (b) why the counterfactual makes for better judging/law. If we could have these criteria made explicit, we would soon find no reason to accept the framework, let alone count anything for a good Stata piece.

Regards. 

Friday
Jul 17, 2009

On Statistical Analysis of "Judicial Activism"

(sent to lawcourts in response to a post about a problem with using certain "measures" for "judicial activism")
 
... which only gets to the tip of the iceberg as to why anyone would use the measures over having an appropriate sense of biography and the decisions themselves. These things are fundamentally contextual. It would be like one who never watched a Jets game last year talking about Favre's quarterback rating. Newsflash: if you watched the season, you don't need the stats. And always, those who rely upon the stats without witnessing the context are deficient in what they claim to know. But the same is not true in reverse. You can always see the fallacy in stats about something you have yourself lived. And if someone knows the context and produces stats in a way that is supportive of it, all you have then is a piece of mathematical art.


I wonder when it will begin to set in that terms like "ideology," "activism," "conservatism," etc., only provide a moral critique of a person's casuistry. When is it going to set in that this is a moral grammar? I wonder when it will set in that these terms do the same general sort of thing in language as saying "the decision is virtuous, honorable, has integrity," and so forth. Which justice had more honor? (I don't know, check the stats.)

These topics can only be properly beheld in an intellectual field that accepts ethics and philosophy as craft, and that relies upon biography for its information. Only if you recreate the psychology of the decision maker can you say anything about the "politics" -- which, after all, is nothing other than the drama of the person and his generation in history.

Somewhere -- maybe about 20 years down the road I'd say -- it will finally set in among lawyers and political scientists that this whole area is nothing but a form of art appreciation. It isn't science. It isn't positivism. And the answers are not in STATA.  It's an industry a lot like those NFL analysts. Who's going to win the game? "Well, this one is ranked 2nd in such-and-such, but this one is 80% in 3rd down conversion in the red zone after playing on Mondays." (Give me Dandy Don singing at any time of the week over that). 

I wonder why it hasn't set in yet that quantitative analysis is fundamentally an empirical technique developed for things humans can't see (and therefore need some guessing method). You know, does drug X cause side effects? What do the people think of an issue after the Court decides it? For this sort of thing, you need stats, which function as a kind of journalism. But you really can't use it for things like "which one was more liberal" or "who was more active." This would be like asking who was more of a risk taker -- Favre or Bradshaw? And so someone produces some stats. But if you REALLY wanted to know, you'd just look at the games. But even then, you would only be left with an answer in the nature of art appreciation, because of the changed circumstances and contextual complexity. You are ALWAYS only ever going to be left with art appreciation.

(Sigh)