
Entries in Segal & Spaeth (18)

Monday, Aug 10, 2009

Aggregates and Votes in Quantitative Ideology Models

(sent to Law Courts)


Hi Howard.

Good to hear from you again -- it has been a long, long time. As I think you know, I am quite aware that Jeff uses an ecological model. About two years ago, I spent a great deal of time comparing ecological models with logit models, so I know what is being said. One of the things I found out from those comparisons is that it is a mistake to think that an ecological model is "something different." It isn't; it just requires much more cautious interpretation. To really interpret it, one needs to "look under the hood," so to speak (before aggregation). In fact, I would disagree with you to the extent you suggest that Jeff's model gets some sort of free pass from conducting logit diagnostics underneath those percentages. Any responsible researcher would do so. Indeed, that was one of the central flaws that stung the quantitative ideology research program in the first place -- it committed major methodological sins in the substantive interpretation of its ecological offering. And so I would never be of the school of thought that says one who analyzes aggregates is doing something unconnected with what those figures are summarizing "underneath it all."

I asked Jeff for the latest version of his ecological model for two reasons. First, if he is truly using the aggregated data from the entire data set as the dependent variable, the distribution of votes will be more leptokurtic. And as I believe was your central point, an ecological model only analyzes the variance of an index. Before we analyze index variance, we would want to know to what degree values within the index cluster around the mean, because the more leptokurtic the index, the less substantively impressed we should be by a high accounting of its variance. If you look at the logit model, this becomes instantly clear: the more the values are non-directional, the more the model goes in the tank. So if Jeff is now trying to offer a high correlation in the variance of an index that isn't varying as much as before to begin with, someone really needs to catch that -- at least for Paul's sake. (And others'.)

Now, let me make this much easier. In my 2006 piece, I did something that I thought was very interesting. I took Jeff's world "as is." I took his ecological model on its face and dissected it. What I did was break down what the r-squared in that regression was really reporting, by converting the explained versus residual sums of squares into the equivalent number of votes accounted for by each portion. When I did this, I found that only 12.5% of the total votes cast were "explained" by an ecological regression of a civil liberties INDEX. So the headline would be: the model correlates with 60% of index variance and, in doing so, explains only 12.5% of the votes accounted for by the summaries constituting the index. (This is a good illustration of how analyzing votes is supplementary to analyzing aggregates, not something of a different kind.)
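To see how this vote-accounting works in principle, here is a toy sketch. The numbers are invented, and the bookkeeping is only one plausible reading of the conversion described above, not a reproduction of the 2006 calculation. The point it illustrates: even a regression that fits the index perfectly only "accounts for" the votes that make the index deviate from its mean.

```python
# Hypothetical illustration (synthetic numbers, not the 2006 data) of how
# a regression on an aggregated index can "explain" far fewer votes than
# its r-squared suggests. Three justices, 100 votes each, with liberal-
# outcome rates of 20%, 50%, and 80%; assume the ecological fit
# reproduces the index exactly (r-squared = 1.0).

n = [100, 100, 100]       # votes cast by each justice
y = [0.20, 0.50, 0.80]    # observed liberal fraction (the index)
yhat = list(y)            # a perfect fit, for simplicity

# Grand mean of the index, weighted by votes: 0.50 here.
ybar = sum(ni * yi for ni, yi in zip(n, y)) / sum(n)

# Votes that make the index deviate from its mean -- the only votes an
# index regression can be said to "account for" under this bookkeeping.
explained_votes = sum(ni * abs(fi - ybar) for ni, fi in zip(n, yhat))
share = explained_votes / sum(n)

print(f"index r-squared: 1.0, votes accounted for: {share:.0%}")  # 20%
```

So a perfect fit to the index corresponds here to only 20% of the underlying votes; with an r-squared of .6, the vote share shrinks further still.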

One more point. Howard, may I ask something of you? Why do you continue to say that Jeff has an "attitudinal model" here? You didn't say this in the Law and Society piece from long ago. I think we need to be clear. Jeff only has a model with variables that gather something from the external world; what he names it is not germane. What he has is something in the nature of small-group media-perception scores constructed using a political stereotype. He then regresses that, in an ecological model, against the summary rates at which justices' approaches to legal issues end up favoring particular claimants. Who those claimants are is determined by what we might call the Harold Spaeth "client list," which is another construction. I mean, there is no one on the planet who thinks that every single issue the Court decides in bankruptcy cases, tax cases, economic cases, etc., is "liberal or conservative" because one side had to win. And so you have a forced stereotype score being regressed against an assigned client-winning profile. This is NOT a model that measures attitudes. I don't think it can even accurately claim to measure journalist attitudes, for crying out loud.

So why is it that political scientists talk this way? No other science talks this way. Real science is supposed to accurately describe what is measured in the external world. All you have here is a contrived media perception score regressed against a constructed claimant-winning profile. It is not an "attitude model." And it surely isn't "the justice ideology and the votes."

When are political scientists doing this work going to actually adopt basic principles of science, such as rigorously explaining the phenomena under study in the external world?

Howard, as always, regards and thanks. (Please do write me again in the future like you used to in the past).

Monday, Aug 10, 2009

Sotomayor's Predicted Liberalism Using Newspaper Scores

(sent to Law-courts)


Jeff Segal wrote in response to Paul Finkelman's mail, "The predictive value is this: for the justices appointed since Warren, the editorial scores correlate at about .8 with the percentage of times the justices vote liberally."
--------------

First, for any given justice, flipping coins predicts a score of 50. So the question becomes how well the media-impression workshops that Jeff recreates improve upon this efficacy. This is called Proportional Reduction in Error (PRE). The PRE on the logit models does show improvement upon blind guessing at 50, but several things must be noted:

1. No one guesses in the blind. Whether these scores are worth their labor is a function of what other perception workshops would tell us. I bet that polling empirical scholars would be better than constructing something from editorials. No one who watches the scores would expect anything more than a 60-ish number anyway, especially when you consider what that number really is.

2. The scores only improve blind guessing (at 50) by about 24%. But if you take away the extremely directional justices -- the ones no longer on the Court -- the number is 9%. (Subtracted: Rehnquist, Brennan, Marshall, Fortas, Douglas and Goldberg.)

3. If you consider the whole docket, of course, all bets are off. You have a statistically insignificant logit model from 1948-2004 (about 60,000 so-called "votes"). The PRE is terrible anyway.

4. A couple of additional things:

People need to ask themselves to what extent the model really indulges metaphysics. Think about it. As a scientist, you know that the media-perception scores are only a form of prognostication. That's what Jeff has done: he has turned their content into a prediction of either a justice's state of mind or his or her work's consequences for criminal-plus claimants.

But if journalists really knew this, the story would be one of clairvoyance or perhaps conspiracy (like insider trading). There is nothing in those editorials different from what, say, informed list members might believe about these things. If Jimmy the Greek predicted numbers well for six weeks in a row, would you say that science was the cause, or metaphysics (or corruption)? I think luck would be the real cause. My point is that there is nothing special about journalists' feelings in this respect. Many of us could do better than a coin flip. There is no need to make either metaphysics or science out of this.

One last point. If Jeff's measures have any significance at all, it is probably similar to the correlation young children show in picking presidential elections. That's what it reminds me of. But there, what we say is that this is "carrier evidence" -- that it shows image perception at some base level of psychology. Here the mistake is not to ask the same question: why is it that a small media-perception work group constructed during the confirmation ritual has any relationship whatsoever to a yes-no tally of claimants winning in criminal-plus cases? The answer really only lies in this:

1. The 6 to 8 extreme justices that drive the results
2. It's an easy game. Pick from 35 to 45 for Republicans; 55 to 65 for Democrats -- and you'll do fine.

And, if you can find some sort of naturally-occurring process that generates numbers like this -- like media perceptions of a president's pick -- now you have something really neat. It makes the whole thing look automated.

Regards and thanks.

(P.S. -- Paul, see my paper if you want a technical overview of Jeff's model. It is on SSRN, below my signature)

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University
Redesigned Website: http://seanwilson.org
SSRN papers: http://ssrn.com/author=596860
Twitter: http://twitter.com/seanwilsonorg
Facebook: http://www.facebook.com/seanwilsonorg
New Discussion Group: http://seanwilson.org/wittgenstein.discussion.html

Monday, Aug 10, 2009

Sotomayor and "Measurement Error"

(sent to Law Courts re: the problems with arguments that "attitudinal scholars" make about their quantitative models)


... just a few points on measurement error. (I had thought these ideas had finally left the discipline).

First, the only segment of the docket being talked about here is what these people call "civil liberties cases," which is roughly half of the Court's workload in the data set. From 1948-2004, the breakdown of this HALF of the set is: criminal cases (40%), civil rights (30%), First Amendment (16%), Due Process (8%), Privacy (2%), and Attorneys (2%). So, for simplicity, let's call this the "criminal-plus rights claimants."

When Jeff speaks of "measurement error," what he means is that the media-impression scores for any given justice are disappointing to him when looking at the overall tendency of a justice's craft to have favored criminal-plus rights claimants. Apparently, what he would like is for his media-impression workshops to be able to really nail the rate at which criminal-plus rights claimants win their cases.

There are several obvious problems here:

1. The stuff in the media editorials (that Jeff codes) is not confined to or centered around issues in criminal law and civil rights (70% of the docket segment at issue). And when it is, it is usually just a discrete, hot-button thing. Hence, the content of the one measure has nothing to do with the issues justices actually end up considering.

2. Also, the coding philosophy used here is confused. It indulges the idea of political values existing as exemplar issues in American political psychology, stuffed into one-dimensional space (guns, butter, taxes, abortion, big-case controversies like the firefighters case, speeches about presidential power, affirmative-action positions, etc.). Anything mentioned along these lines gets you "coded." I think Jeff even codes based upon whether the journalist uses the word "liberal" in the editorial. Let's call this the "stereotype picture."

The problem here is that when justices decide cases involving criminal-plus rights claimants, the issues very rarely involve "stereotype politics." Many times, the issues are a real snooze and make only a technical point. Or it's only a little extension here or a little take-away there. And so you have this disjuncture between the philosophy of "liberal" being conjured on the one hand (the stereotype) and the thing you want to call "liberal" on the other, but in good faith can't. (At least not without playing games with language.)

3. What is curious about all of this is that the majority of justices for whom we have data do not have any real affinity for criminal-plus rights claimants one way or the other. Assuming most legal issues are tough, one would expect 40-60 to be the basic range. Of course, it wouldn't be during periods of innovation, where new rights paradigms emerge and then recede into an equilibrium. But even though we have this dynamic history in the data set, the majority of justices are really not that directional.

And it is this that causes the failure in Jeff's model, not "measurement error." Indeed, the only errors truly present in these models are specification errors (see points 1 and 2 above), errors with ecological inference (which I'll get to in a moment), and language games.

Really, if you think about it, Jeff's measures are lucky. He's got more measurement luck in the model than error. He's lucky that he has those 8 or so justices with high propensities to decide issues favoring criminal-plus rights claimants -- and those crazy scores of perfect liberalism and conservatism. Without those 100% or 0% scores coming out of those media-prejudice workshops that he recreates -- scores that attach themselves to justices with 80-20 propensities in criminal-plus cases -- there would be no model here at all.

In fact, just think about it. Use half the docket. Don't use all the justices. Get lucky on the rights revolution thing. Tell everyone you do better on the first 3 years of service (another cut). Then just cry measurement error for all the rest.

I think it's worth noting that Segal-Cover scores are statistically insignificant and otherwise extremely paltry for the entire docket (every decision for which researchers have data). They are also statistically insignificant for discrete years of voting; I think one was in the early 1990s (I wrote a paper mentioning it). Also, if you take away those justices who are around the 80-20 mark and who are no longer on the Court -- in essence, replicating today's Court -- you don't have anything to speak of.

It isn't measurement error; it is that the whole idea is faulty.

Regards and thanks.


Monday, Aug 10, 2009

Sotomayor and Liberalism in the Strange World of Political Science

(sent to LawCourts in reply to an inquiry about how "liberal" Sotomayor is expected to be based upon newspaper-confirmation scores. I am a critic of how this whole enterprise works)


Mark:

Whatever anyone's private views may be, the prediction announced here is a function of a model Jeff uses. The logic of the model doesn't work the way you suggest. It doesn't offer predictions in the sense of prognostication. The output is solely a function of the input and the logic of the mathematical specification.

None of the variables account for the points you raise. The dependent variable, coded by graduate students, is rather "blind." If the decision favors a criminal defendant in a criminal case, for example, it gets thrown in the "liberal bin." If not, it gets the other one. The model doesn't consider what the substantive issue was, whether it shifted over time, whether it was mundane, whether it was big or small, whether the Republican Party actually supported it, whether anyone even cared, whether it instrumentally created "conservative doctrine" while disposing in favor of the defendant, and so forth. In short, there is no assessment of qualitative factors.

Furthermore, because about half of the docket is excluded from the prediction -- as are various justices from before the ascendancy of Earl Warren -- the prediction is apparently limited to civil liberties cases, and assumes (as all of these sorts of models do) that the past controls the future. (The past meaning only Warren Court forward).

A couple of important points to keep in mind. What drives these models statistically is the presence of justices who had rather extreme tendencies to have decided for or against civil liberties claimants during the Warren and anti-Warren periods of the Court. And more specifically, to have decided criminal cases, which comprise the bulk of the cases that are said to be "civil liberties." (You really could call it the criminal cases and remaining civil liberties docket if you wanted to). Hence, what drives the model are justices like Rehnquist who decided in favor of criminal-plus claimants about 20% of the time (roughly) and the big-time Warren justices, some of whom hit the 80% mark.

Today's justices are more in the 33-65 range -- excluding, I think, Thomas, who was the only one still in the 20s the last time I checked (a few years ago). [I quit doing this work for obvious reasons.] I think Scalia is around 29 or something. (Even he may have made it to 30; I don't know.) Jeff's model indeed assumes that the old guard is still there when the prediction is made, because all the model sees is a bunch of numbers in Stata.

Even so, you will note that the model only produces a 62. Why? Because most justices for whom we have data are not that directional when it comes to deciding for or against the claimants. The non-directional justices clog the model.

What is interesting about this is that if Sotomayor does decide whatever civil liberties cases she does -- even if they are not as heavy in criminal cases or anything like the ones from the 60s and 70s -- it makes no real difference. If she comes out a wild 78 or 80 (like the good old days), you can say "the newspaper scores were right about her." But if she winds up at 60, you can say "the model was right." And if she lands anywhere near this side of 50, the industry continues. Next time, the model simply shaves the prediction for someone like her to a 60 or 59 or something (shaving for the mistake). So long as Rehnquist and the Warren people are in the Stata machine, and so long as the docket is shaved, it can't lose. (Plus, take away the old justices.)

So in conclusion, there is in fact nothing to the prediction that considers anything you raised.

Regards and thanks.


Monday, Aug 10, 2009

Sotomayor and Liberalism in the World of Ideology Scholars

(sent to LawCourts)


someone wrote: "Yes, according to my editorial scores, Sotomayor is the most liberal justice confirmed since Marshall, but remember, there have only been two Democrats confirmed since then, Ginsburg and Breyer."


... let's be clear. According to the "editorial scores," she is thought to have only the score value generated by that procedure. That's all the measure says. Scientifically, the scores do not measure her "ideology" (whatever that means).

And with respect to the announced prediction of what political science calls her "votes," we probably should note a few disclaimers:

1. If you include the whole docket and all of the justices for which there is data, the predictive relationship is extremely paltry.

2. Even if you cherry-pick the docket and the justices, whatever results you get are fundamentally driven by the few justices with extreme propensity for direction under the "liberal index" -- most of whom are no longer there. And even this predicts that Sotomayor will be closer to neutrality (50%) than her alleged reputation (78). And of course, you don't need any newspaper scores to guess that Sotomayor will be in the 60s -- the safe money already has her around 65. (Flipping coins puts her at 50. The PRE on honest logit models was never impressive with these scores).

3. One of the biggest problems these models have is their misleading conclusions. The dependent variable (the so-called "liberal index") is quite peculiar because it doesn't have any empirical or substantive relationship to true "liberal voting." It's just called that by people pretending to do the "science." After all, the great majority of the coding doesn't concern the exemplar issues that make up the belief spectrum in the political system, and it doesn't concern issues that appear in campaigns or the culture war and so forth. In fact, one has to have a great deal of ideology oneself in order to see or call this measure "liberal voting." You have to think the way a creation scientist does when studying the world. In fact, one might think of political scientists who try to catch "ideology" this way as being sort of "ideology-creation scientists."
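Point 2's suggestion -- that a crude party-label prior already lands near the reported correlation -- can be checked with a quick sketch. The justices and percentages below are invented for illustration; they are not the actual Segal-Cover data:

```python
import math

# Synthetic sketch of the "easy game": predict 40 for Republican
# appointees and 60 for Democratic ones, then correlate that with a set
# of hypothetical liberal-voting percentages spanning the usual range.

justices = [          # (appointing party, hypothetical % liberal votes)
    ("R", 22), ("R", 35), ("R", 42), ("R", 45),
    ("D", 55), ("D", 62), ("D", 68), ("D", 80),
]

pred = [40 if party == "R" else 60 for party, _ in justices]
actual = [pct for _, pct in justices]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(pred, actual)
print(f"party-label heuristic correlates at r = {r:.2f}")  # r = 0.86
```

With these invented propensities, a two-value party prior already correlates in the .8s -- no editorial content required.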

When scientists study the external world, they develop rigid designators for the things in the world they have "pinned down." I have always found it extremely curious that in ideology-centered, quantitative political science, no one attempts to talk precisely and honestly about the empirical things actually observed. If they did, they would find that Sotomayor has a 78 on what appears to be some sort of exemplar-conceived political-issue barometer produced by a small media/journalist workgroup acting within a discrete time in American politics. And that this thing, when combined with other such prejudice workshops, has some sort of relationship to what we might call a rather badly conceived yes-no claimant-priority arrangement. (And only when dockets and justices are constructed.)

But that's not the way it comes out. It always comes out as "the ideology going in was measured by the scientists," and "the scientists confirmed that ideology was coming out on the other side." It reminds me of someone who would say, "first they were baptized, and then they went to heaven."

Regards and thanks.