Google Scholar reports that this article has been cited in at least 290 books and articles since its publication in 1999.

 

Linking Party to Judicial Ideology in American Courts:

A Meta-Analysis



Daniel R. Pinello


The Justice System Journal

Volume 20, Number 3

1999

Pages 219-54


Abstract

            One hundred forty books, articles, dissertations, and conference papers are identified in the legal and political-science literatures between 1959 and 1998 reporting empirical research pertinent to a link between judges' political-party affiliation and judicial ideology in the United States. Meta-analyzing 84 of those studies to synthesize findings, this paper confirms the conventional wisdom among students of judicial behavior that party is a dependable yardstick for ideology: Democratic judges are more liberal on the bench than Republican ones. Two methodological characteristics moderate study results: statistical technique and researchers' use of only nonunanimous appellate decisions. The weighted-mean product-moment effect size of party, regardless of court or subject matter, and corrected for moderator effects, is +.615, explaining 38% of the variance in judicial ideology. Further, party is a stronger attitudinal force in federal courts, accounting for almost half of the variance, than in state tribunals.

 

We assumed that, in general, Democratic [federal-appellate-court] appointees are more liberal and Republicans more conservative in policy orientation.

—Cross and Tiller (1998, 2168)

in the Yale Law Journal

 

Although previous studies have adopted a number of strategies for estimating judges’ political or ideological attitudes, the most common approach — particularly at the state level — has been to use the party affiliation of a judge or of the governor who appointed the judge.

—Flemming, Holian, and Mezey (1998, 40)

in the American Politics Quarterly

 

Here, we made the standard assumption that Republican judges tend to reach more conservative decisions than do Democratic judges.

—Gerber and Park (1997, 395)

in the American Political Science Review

 

            Ideology is a vitally important concept to the empirical study of political behavior (e.g., Campbell, Converse, Miller, and Stokes 1960; Converse 1964; Barton and Parsons 1977; Entman 1983; Luttbeg and Gant 1985; Knight and Erikson 1997; Levine, Carmines, and Huckfeldt 1997). This is no less true for judicial behavior (e.g., Songer 1982; Brenner and Spaeth 1988; Sheehan, Mishler, and Songer 1992).


            Ideology typically is measured along conservative-liberal continua in American politics. The attitudinal model, a dominant paradigm for scholarship on the U.S. Supreme Court, manifests judicial behaviorists' belief in the centrality of ideology, holding that


the Supreme Court decides disputes in light of the facts of the case vis-à-vis the ideological attitudes and values of the justices. Simply put, Rehnquist votes the way he does because he is extremely conservative; Marshall voted the way he did because he is extremely liberal. Segal and Spaeth (1993, 65).

 

            Measuring justices' attitudes, though, has been perplexing because of the circularity inherent in predicting future votes by means of past ones. To circumvent this dilemma, Segal and Cover (1989) developed a mechanism for gauging Supreme Court justices' attitudes through content analysis of preconfirmation elite-newspaper editorials. This ideology-measuring technique, expanded in Segal, Epstein, Cameron, and Spaeth (1995), can be applied to other courts. Emmert and Traut (1994), for example, emulated Segal and Cover to probe the California Supreme Court. Yet, although highly innovative, the Segal-Cover approach is relatively recent and not easily replicated for comparative studies of numerous state or lower federal courts. Editorial commentary is not comprehensively available for lower-court personnel; and even if it were, the method is labor intensive.


            Instead, public-law scholars traditionally have used judges' political-party affiliations as proxies for judicial ideology (Lloyd 1995). Empirical examinations of how party identification influences judges' decisionmaking date from the groundbreaking works of Schubert (1959) and Nagel (1961). As the quotations at the beginning of the paper indicate, conventional wisdom today among students of judicial behavior sees party as a dependable yardstick for ideology: Republican judges are conservatives; Democrats, liberals.


            Nonetheless, is there truly an empirically verified connection between judges' party identification and their behavior on the bench? How extensively and reliably have researchers found that Republican judges indeed vote for conservative outcomes and Democrats for liberal ones? Although scholars have compiled partial lists of the myriad investigations (e.g., Tarr 1994, 286-88), no comprehensive compilation exists to consolidate almost 40 years of inquiry into party affiliation's impact on judicial action.


            Meta-analysis affords the opportunity to fuse fragmented knowledge:


In 1904 the British mathematician Karl Pearson invented a statistical method for combining divergent findings. . . . Pearson’s simple but creative idea was to compute the correlation [between two variables] within each [empirical study] . . . and then average the correlations of all the [studies]; the result, balancing out the chance factors and idiosyncrasies of the individual studies, would be a datum more trustworthy than any of the individual statistics that went into computing it (Hunt, 1997: 8).
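Pearson's procedure reduces to simple arithmetic. A minimal sketch in Python, using invented correlations purely for illustration (none of these numbers come from the studies discussed in this article):

```python
# Pearson's 1904 idea: compute the correlation between the two
# variables within each study, then average across studies so that
# chance factors and idiosyncrasies tend to cancel out.
def mean_r(correlations):
    """Unweighted average of per-study correlation coefficients."""
    return sum(correlations) / len(correlations)

# Three hypothetical studies of the same relationship:
print(round(mean_r([0.30, 0.15, 0.45]), 3))  # 0.3
```

Modern meta-analysis refines this basic averaging, for example by weighting each study's correlation by its sample size, but the underlying logic is unchanged.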

 

            Yet, despite the promise of meta-analysis, social scientists embrace it sparingly. As Doucouliagos (1995: 58) notes: “Meta-analysis, a set of techniques for distilling a single estimate from a number of studies, is widely used in psychology and sociology and is beginning to be applied in management studies. Unfortunately, with a few notable exceptions . . . , it has not been adopted in economic analysis.” The same is true for law and political science. Those disciplines lack meta-analytic traditions [1].


[Footnote 1: For rare examples of meta-analytic research in political science, see Hale (1998), Lau, Sigelman, Heldman, and Babbitt (1999), and Wolf (1997). Standard references on meta-analysis include Cooper and Hedges (1994), Hunter and Schmidt (1990), Rosenthal (1991), and Wolf (1986). For non-technical discussions of the value of meta-analysis, see Hunt (1997) and Hunter and Schmidt (1996).]


            The legal and political-science literatures in fact report well over one hundred empirical findings on the relationship between political party and judicial ideology in American courts. Yet, no sweeping, systematic analysis has taken stock of that wealth of information. Filling the void, this article provides a compendium of empirical undertakings connecting party ID with judicial ideology, evaluates the findings meta-analytically to synthesize the observations, and determines whether methodological characteristics moderate study results.


Research Design

Identifying Studies

            To inventory relevant research, I perused the American Journal of Political Science, the American Political Science Review, the American Politics Quarterly, the Journal of Politics, the Law and Society Review, Polity, the Social Science Quarterly, and the Western Political Quarterly (renamed Political Research Quarterly in 1993) for articles since 1977 associating political party with judicial ideology in American courts. I also searched the databases Criminal Justice Abstracts, Dissertation Abstracts, Lexis-Nexis, PAIS-International, and Westlaw [2]. The information found through these electronic and print resources in turn led to other literature citations. In addition, I sent queries to (1) LAWCOURTS-L (the Internet discussion list sponsored by the American Political Science Association’s Law and Courts Section), with over 370 subscribers, and (2) PSRT-L (the APSA's research and teaching list), with more than 1,500 subscribers.


[Footnote 2: The search instructions were (“POLITICAL PARTY” “POLITICAL PARTIES”) W/P (JUD! JUSTICE) or variations thereof depending on the conventions of each database.]


            As a result, I identified 140 books, articles, dissertations, and conference papers in the legal and political-science literatures between 1959 and 1998 reporting empirical research pertinent to a link between party and modern judicial ideology in the United States. The list surely is not exhaustive. Some explorations (especially conference papers never published) probably escaped my net. Nevertheless, after an extensive search, I am confident the vast majority of applicable published scholarship is included.


            Publication bias is a perennial theme in meta-analysis (Begg 1994; Hunt 1997, 118-121; Rosenthal 1979). Yet it is not an obstacle here. Of the 84 studies contained in the analysis, ten are unpublished [3]. Their weighted-mean effect size (discussed infra) is +.305 (with 95% confidence interval from +.204 to +.407). By comparison, the mean effect size for the 74 published investigations is +.273 (confidence interval from +.198 to +.349). In other words, projects languishing in the proverbial “file drawer” estimate a stronger relationship than the publicly disclosed research, contrary to publication-bias theory.


[Footnote 3: Bowen (1965); Crews-Meyer and Anderson (1994); Howard (1998); Jensen and Kuersten (1997); Pinello (1998); Segall (1998); Smith and Tiller (1996); Songer (1995); Vigilante (1998); and Wahlbeck (1997a).]


Inclusion Criteria

            Every study included in the meta-analysis meets seven criteria:

 

1.         The dependent variable is a constitutional, legal, or political topic susceptible of empirical analysis using generally agreed standards for categorization along conservative and liberal norms. Overwhelmingly, dependent variables deal with subjects falling into three major categories: (1) civil rights and liberties; (2) criminal justice; and (3) economic regulation and labor relations. Scholars have used consistent definitions of liberal and conservative judicial action. In criminal-justice cases, votes favoring the defendant are liberal; those for the prosecution, conservative. In government regulation of the economy, choosing the regulator is liberal; the regulated, conservative. Preferring workers in labor-relations cases is liberal; employers, conservative. In civil rights and liberties, votes for the claimed right are liberal; against the right, conservative [4]. For dichotomous variables, the liberal position is most often coded 1 and the conservative, 0.


[Footnote 4: A more comprehensive enumeration is in Rowland and Carp (1996).]

 

Three studies are excluded from the meta-analysis for failure to meet this criterion [5]. Two address judicial support for enhancing presidential power, with Ducat and Dudley (1989) claiming it liberal and Yates and Whitford (1998) believing it conservative. Thus, no scholarly consensus exists by which to categorize the analyses.


[Footnote 5: Ducat and Dudley (1989); Swinford and Waltenburg (1996); and Yates and Whitford (1998).]

 

2.         The independent variable measures judicial ideology by means of judges’ political-party affiliations, with Republican-Party-identified judges most often coded 0, and Democratic-Party-identified judges, 1 [6]. Party affiliation usually means the judges’ personal party identifications are used, although investigations of the federal bench often look at the party of appointing presidents to find cohort effects (e.g., Gottschall 1983; Tomasi and Velona 1987).


[Footnote 6: Infrequently, the independent-variable scale is 0 = Republican, 1 = Independent, and 2 = Democrat (e.g., Tate and Handberg 1991). If the dependent variable is coded 1 for conservative and 0 for liberal, then the party variable is 1 for Republican and 0 for Democrat (e.g., Cohen 1992; and Sisk, Heise, and Morriss 1998).]

 

Feiock (1989) and Songer and Haire (1992) use presidential cohorts in logistic regression analyses and not a singular party variable; thus no single effect size r (discussed below) can be calculated. Also, Humphries and Songer (1997) and Langer (1997) use hybrid ideological measures (encompassing party) and are omitted because of this criterion.

 

3.         Modern (i.e., post-World-War-II) American judicial behavior is the principal interest. Purely historical canvasses such as Schmidhauser's (1963) investigation of 19th Century justices are not included. In addition, works straddling eras to the degree that contemporary court action is overwhelmed by pre-World-War-II judicial votes are eliminated [7].


[Footnote 7: Lanier (1997a, 1997b); Sprague (1968); and Ulmer (1986).]

 

4.         Statistical findings must be susceptible of conversion into the effect size r. In 20 studies, r could not be obtained from the reported findings because either (1) standard errors, t statistics, or degrees of freedom were not reported for regression coefficients [8] or (2) the kind of statistical technique employed (e.g., discriminant function analysis in Aliotta 1988; path analysis in Baum 1980; and difference of medians in Goldman 1966) does not permit conversion of findings into an effect size.


[Footnote 8: Ashenfelter, Eisenberg, and Schwab (1995); Haynie and Tate (1990); Kramer (1997); Tate (1990); and Tate and Handberg (1991).]
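For regression-based studies that do report a coefficient's t statistic and the degrees of freedom, the standard meta-analytic conversion (see, e.g., Rosenthal 1991) recovers r directly. A sketch, where the particular t and df values are hypothetical:

```python
import math

# Standard conversion of a regression coefficient's t statistic
# into the effect size r: r = sqrt(t^2 / (t^2 + df)), carrying
# the sign of t. Without t (or a standard error from which to
# derive it) and the degrees of freedom, no r can be computed.
def t_to_r(t, df):
    return math.copysign(math.sqrt(t * t / (t * t + df)), t)

# Hypothetical coefficient: t = 2.5 on 100 degrees of freedom.
print(round(t_to_r(2.5, 100), 3))  # 0.243
```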

 

5.         The units of empirical analysis are court cases or judicial votes. This criterion permits cumulation and comparison of uniform effect sizes (Light and Pillemer 1984, 170-71). Seven studies are excluded because their analytic units are either judges or states [9].


[Footnote 9: Canon and Baum (1981); Howard (1971); Sprague (1968); Vines (1964, 1969); Wenzel, Bowler, and Lanoue (1997); and Wold (1974).]

 

6.         The number of judicial votes supporting statistical results is reported. The need to adjust the weights of individual effect sizes to achieve mean estimates (discussed below) requires this criterion.

 

Five appellate examinations report only the number of court cases. In that circumstance, I estimate the votes by multiplying the reported number of cases by a factor of either three (for U.S. Court of Appeals cases) or five (for state supreme court cases) [10].


[Footnote 10: Some federal appellate decisions are en banc, and some state supreme courts have more than five members. Because studies don’t report what proportion of federal appellate decisions are en banc, nor necessarily what specific state supreme courts are embraced, using the multiplicative factors of three and five, respectively, gives the minimum number of votes. Since the mean estimates of adjusted effect sizes reported here are based on more than 200,000 judicial votes, this conservative technique for converting court cases to votes does not significantly discount impact in the meta-analysis.]
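The case-to-vote conversion described in the footnote is a single multiplication per study. A sketch, where the 155-case figure is a hypothetical input:

```python
# Minimum-vote estimates for studies reporting only case counts:
# three judges sit on a typical U.S. Court of Appeals panel and
# at least five on a state supreme court, so multiplying by panel
# size yields a conservative lower bound on the number of votes.
PANEL_SIZE = {"us_court_of_appeals": 3, "state_supreme_court": 5}

def estimate_votes(n_cases, court):
    return n_cases * PANEL_SIZE[court]

# A hypothetical study reporting 155 appellate cases:
print(estimate_votes(155, "us_court_of_appeals"))  # 465
```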


            Four studies are omitted because of this criterion [11].


[Footnote 11: Benesh (1997); Epstein, Walker, and Dixon (1989); Lanier (1997a); and Tate and Handberg (1991).]

 

7.         A work cannot substantially duplicate the data set and statistical technique of another in the meta-analysis. Some scholars use the same or very similar data in different enterprises. Including each separately in the meta-analysis would inappropriately multiply the influence of effect sizes from identical data. Rather, when numerous projects authored by the same scholar(s) or co-researchers are based on identical or very similar data, only the one with the largest data set is directly embraced in the meta-analysis.

 

The clearest example is Rowland and Carp (1996), the culmination of eleven earlier explorations of policymaking in the federal district courts [12]. Because the predecessor projects uniformly involve frequency analyses, Rowland and Carp’s (1996) r (a zero-order correlation) necessarily embraces the earlier findings. Stuart S. Nagel’s studies (1961, 1962a, 1962b, 1969, 1974, 1982) are treated similarly [13].


[Footnote 12: Carp and Rowland (1983); Carp and Stidham (1990); Rowland and Carp (1980, 1983a, 1983b); Rowland, Carp, and Stidham (1984); Rowland, Songer, and Carp (1988); Rowland and Todd (1991); Stidham and Carp (1982, 1987); and Stidham, Carp, and Rowland (1983).]


[Footnote 13: An exception to the non-duplication rule is the inclusion of both Gryski and Main (1986) and Gryski, Main, and Dixon (1986). The former examines only nonunanimous decisions while the latter contains unanimous cases — an important difference for the investigation of moderator variables.]

 

The state-supreme-court death-penalty articles by Paul Brace and Melinda Gann Hall, however, offer a contrast [14]. They use probit regression to produce rs that are partial-correlation coefficients determined by sundry independent variables in regression equations. Accordingly, I calculate individual rs and average the effect sizes, weighted by the number of judicial votes per study (Doucouliagos 1995, 63; Hunter and Schmidt 1990, Chapter 10). The number of judicial votes in the meta-analysis for this mean r is the largest (Hall and Brace 1996) among the six articles.


[Footnote 14: Brace and Hall (1993, 1995, 1997); and Hall and Brace (1992, 1994, 1996).]
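The vote-weighted averaging applied to these overlapping articles can be sketched as follows; the rs and vote counts here are invented placeholders, not the actual Brace-Hall figures:

```python
# Weighted mean effect size: each study's r counts in proportion
# to the number of judicial votes behind it (cf. Hunter & Schmidt
# 1990, ch. 10).
def weighted_mean_r(rs, votes):
    return sum(r * n for r, n in zip(rs, votes)) / sum(votes)

# Two hypothetical studies sharing a data set:
print(round(weighted_mean_r([0.8, 0.7], [3000, 1000]), 3))  # 0.775
```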

 

Inevitably, different scholars use overlapping data sets. Rather than slavishly apply a non-duplication criterion (and end up with a very limited meta-analysis), I include all explorations by discrete authors. Such heterogeneity of method in fact improves the quality of the synthesis (Doucouliagos 1995, 63; Glass 1978; Glass 1983, 401, 404).


            Altogether, 84 studies are embodied in the meta-analysis, listed in Table 1 and arranged by court.



Table 1

Studies Included in the Meta-Analysis


United States Supreme Court


Study | Number of Judicial Votes | Years Covered | Subject Matter | Moderator #1: Statistical Method | Moderator #2: Only Nonunanimous Cases? | Unadjusted Effect Size r
Johnston (1976) | 12,955 | 1954-1974 | Multiple Types | Zero-Order Correlation | Yes | .285
Nagel (1961, 1962a, 1962b, 1969, 1974, 1982) — See State Supreme Courts
Tate (1981) | 19,689 | 1946-1978 | Multiple Types | OLS Regression | Yes | .869



United States Courts of Appeals


Study | Number of Judicial Votes | Years Covered | Subject Matter | Moderator #1: Statistical Method | Moderator #2: Only Nonunanimous Cases? | Unadjusted Effect Size r
Carp, Songer, Rowland, Stidham, and Richey-Tracy (1993)* | 10,514 | 1980-1992 | Multiple Types | Zero-Order Correlation | No | .088
Bowen (1965) — See State Supreme Courts
Crews-Meyer and Anderson (1994) — See State Supreme Courts
Cross and Tiller (1998) | 465** | 1991-1995 | Economic Regulation | Zero-Order Correlation | No | .200
George (1998) | 252 | 1962-1996 | Multiple Types | Logistic Regression | No | .066
Goldman (1975) | 6,345** | 1965-1971 | Multiple Types | Partial Correlation | Yes | .274
Gottschall (1983)-A | 1,198 | 1979-1981 | Civil Rights & Liberties | Zero-Order Correlation | No | .172
Gottschall (1983)-B | 1,084 | 1979-1981 | Criminal Justice | Zero-Order Correlation | Yes | .261
Gottschall (1986) | 6,192 | 1983-1984 | Multiple Types | Zero-Order Correlation | No | .134
Hensley and Baugh (1987) | 406 | 1982 | Criminal Justice | Zero-Order Correlation | Yes | .322
Jensen and Kuersten (1997) | 10,366 | 1977-1988 | Multiple Types | Zero-Order Correlation | No | .239
Johnson (1987) | 311 | 1950-1980 | Multiple Types | Zero-Order Correlation | No | .000
Kovacic (1991) | 401 | 1977-1990 | Economic Regulation | Zero-Order Correlation | No | .125
Revesz (1997) | 1,678 | 1970-1994 | Economic Regulation | Zero-Order Correlation | No | .189
Schultz and Petterson (1992) | 214 | 1965-1991 | Civil Rights & Liberties | Zero-Order Correlation | No | .174
Segall (1998) | 789 | ? | Criminal Justice | Logistic Regression | No | .942
Smith and Tiller (1996) | 678 | 1981-1992 | Economic Regulation | Zero-Order Correlation | No | .079
Songer (1987) | 14,531 | 1950-1977 | Economic Regulation | Zero-Order Correlation | No | .049
Songer and Davis (1990) | 2,914 | 1986 | Multiple Types | Zero-Order Correlation | No | .052
Songer and Sheehan (1990) | 224 | 1952-1974 | Civil Rights & Liberties | Zero-Order Correlation | No | -.006
Songer and Sheehan (1992) | 10,074** | 1986 | Multiple Types | Logistic Regression | No | .826
Stidham, Carp, and Songer (1996) | 2,326 | 1992-1996 | Multiple Types | Zero-Order Correlation | No | .098
Stidham, Carp, Songer, and Surratt (1992) | 2,180 | 1977-1986 | Multiple Types | Zero-Order Correlation | No | .099
Tomasi and Velona (1987) | 2,622 | 1985-1986 | Multiple Types | Zero-Order Correlation | Yes | .296
Wahlbeck (1997a) | 106 | 1970-1991 | Economic Regulation | Logistic Regression | No | .600
Willison (1986) | 1,380 | 1981-1984 | Economic Regulation | Partial Correlation | No | .457

    *Includes Rowland, Songer, and Carp (1988).

  **Estimated from the number of judicial decisions and judicial panel size.



United States District Courts


Study | Number of Judicial Votes | Years Covered | Subject Matter | Moderator #1: Statistical Method | Unadjusted Effect Size r
Cohen (1992) | 548 | 1955-1981 | Criminal Justice | OLS Regression | .255
Dolbeare (1969) | 288 | 1960-1967 | Multiple Types | Zero-Order Correlation | -.047
Eisenberg and Johnson (1991) | 176 | 1976-1988 | Civil Rights & Liberties | Logistic Regression | .173
Giles and Walker (1975) | 151 | 1970 | Civil Rights & Liberties | Zero-Order Correlation | -.086
Hansen, Johnson, and Unah (1995) | 498 | 1980-1990 | Economic Regulation | Probit Regression | -.077
Howard (1998) | 355 | 1980-1988 | Economic Regulation | GLS Regression | .152
Johnson (1987) — See United States Courts of Appeals
Kritzer (1978) | 505 | 1965-1972 | Criminal Justice | OLS Regression | .177
Lloyd (1995) | 196 | 1964-1983 | Civil Rights & Liberties | Logistic Regression | .772
Quinn (1996) | 108 | 1975-1994 | Criminal Justice | Zero-Order Correlation | -.054
Rowland and Carp (1996)*** | 44,105 | 1933-1987 | Multiple Types | Zero-Order Correlation | .085
Schultz and Petterson (1992) — See United States Courts of Appeals
Sisk, Heise, and Morriss (1998) | 291 | 1988 | Criminal Justice | Logistic Regression | .197
Stidham, Carp, and Songer (1996) | 9,081 | 1992-1996 | Multiple Types | Zero-Order Correlation | .145
Stidham, Carp, Songer, and Surratt (1992) | 763 | 1977-1986 | Multiple Types | Zero-Order Correlation | .099
Vigilante (1998) | 205 | 1987-1997 | Civil Rights & Liberties | Logistic Regression | .845
Walker (1972) | 1,177 | 1963-1968 | Civil Rights & Liberties | Zero-Order Correlation | -.028
Yarnold (1997) | 328 | 1970-1990 | Civil Rights & Liberties | Probit Regression | .320

  ***Includes Carp and Rowland 1983; Carp and Stidham 1990; Rowland and Carp 1980, 1983a, 1983b; Rowland, Carp, and Stidham 1984; Rowland, Songer, and Carp 1988; Rowland and Todd 1991; Stidham and Carp 1982, 1987; and Stidham, Carp, and Rowland 1983.



State Supreme Courts


Study | Number of Judicial Votes | Years Covered | Subject Matter | Moderator #1: Statistical Method | Moderator #2: Only Nonunanimous Cases? | Unadjusted Effect Size r
Adamany (1969) | 744 | 1957-1965 | Economic Regulation | Zero-Order Correlation | No | .081
Beatty (1970b) | 1,179 | 1965-1969 | Multiple Types | Zero-Order Correlation | Yes | .433
Beiser and Silberman (1971) | 5,416 | 1934-1967 | Economic Regulation | Zero-Order Correlation | No | .063
Bowen (1965) | 3,364 | 1960 | Criminal Justice | Partial Correlation | No | .230
Brace and Hall (1993, 1995, 1997); Hall and Brace (1992, 1994, 1996) | 4,699 | 1980-1988 | Criminal Justice | Probit Regression | No | .790****
Crews-Meyer and Anderson (1994) | 631 | 1971-1993 | Civil Rights & Liberties | Logistic Regression | No | .326
Dubois (1980) | 8,150 | 1960-1974 | Multiple Types | Zero-Order Correlation | Yes | .248
Fair (1967) | 1,108 | 1960-1961 | Multiple Types | Zero-Order Correlation | Yes | .345
Flemming, Holian, and Mezey (1998) | 931 | 1965-1994 | Civil Rights & Liberties | Probit Regression | No | .539
Gryski and Main (1986) | 145 | 1971-1981 | Civil Rights & Liberties | Logistic Regression | Yes | .244
Gryski, Main, and Dixon (1986) | 630** | 1971-1981 | Civil Rights & Liberties | Logistic Regression | No | .582
Kilwein and Brisbin (1997) | 5,200** | 1970-1993 | Civil Rights & Liberties | Logistic Regression | No | .536
Nagel (1961, 1962a, 1962b, 1969, 1974, 1982) | 287 | 1955 | Criminal Justice | Zero-Order Correlation | Yes | .176
Pinello (1995) | 398 | 1960-1990 | Multiple Types | Zero-Order Correlation | No | .045
Pinello (1998) | 84 | 1982-1992 | Civil Rights & Liberties | Logistic Regression | No | -.163
Schubert (1959) | 480 | 1954-1957 | Economic Regulation | Zero-Order Correlation | No | .236
Songer (1995)-A | 755 | 1970-1990 | Civil Rights & Liberties | Logistic Regression | No | .110
Songer (1995)-B | 3,383 | 1970-1990 | Criminal Justice | Logistic Regression | No | .513
Songer (1995)-C | 1,192 | 1970-1990 | Economic Regulation | Logistic Regression | No | .327
Stecher (1977) | 11,351 | 1945-1971 | Multiple Types | Zero-Order Correlation | No | .123
Swanson and Melone (1995) | 369 | 1991-1993 | Multiple Types | Zero-Order Correlation | Yes | .383
Tarr (1977) | 594 | 1947-1973 | Civil Rights & Liberties | Zero-Order Correlation | No | .080
Ulmer (1962) | 716 | 1954-1960 | Economic Regulation | Zero-Order Correlation | No | .275

    **Estimated from the number of judicial decisions and judicial panel size.

****Weighted average of r from the six studies by Brace and Hall.



State Intermediate Appellate Courts


Study | Number of Judicial Votes | Years Covered | Subject Matter | Moderator #1: Statistical Method | Moderator #2: Only Nonunanimous Cases? | Unadjusted Effect Size r
Dubois (1988) | 659 | 1971-1983 | Multiple Types | Zero-Order Correlation | No | .123
Pinello (1998) | 121 | 1982-1992 | Civil Rights & Liberties | Logistic Regression | No | -.030



State Trial Courts


Study | Number of Judicial Votes | Years Covered | Subject Matter | Moderator #1: Statistical Method | Unadjusted Effect Size r
Dolbeare (1967) | 340 | 1948-1963 | Multiple Types | Zero-Order Correlation | .038
Gibson (1978) | 1,194 | 1968-1970 | Criminal Justice | Zero-Order Correlation | .020




            Notably, merely two studies emerge for the United States Supreme Court, reflecting (1) the omission of 13 investigations pursuant to the inclusion criteria [15], (2) the unavailability of another four [16], and, equally important, (3) Supreme-Court researchers’ abandonment of party as an ideological measure in favor of the Segal and Cover (1989) scores. As Epstein and Mershon (1996, 265) comment: “[I]t would hardly be an exaggeration to write that almost every recent study of [U.S. Supreme] Court decision making has — in one way or another — invoked [the Segal and Cover (1989)] scores” [17].


[Footnote 15: Aliotta (1988); Epstein, Walker, and Dixon (1989); Haynie and Tate (1990); Howard (1971); Lanier (1997a, 1997b); Schmidhauser (1963); Sprague (1968); Swinford and Waltenburg (1996); Tate (1990); Tate and Handberg (1991); Ulmer (1986); and Yates and Whitford (1998).]


[Footnote 16: Leavitt (1972); Lewis (1972); Lindstrom (1968); and Padgett (1980).]


[Footnote 17: Research using the Segal-Cover measures includes Kearney and Sheehan (1992); Mishler and Sheehan (1993, 1996); Nelson (1997); Segal (1997); Segal, Cameron, and Cover (1992); Sheehan, Mishler, and Songer (1992); and Wahlbeck (1997b).]


            Thirty-nine studies are excluded from the meta-analysis for failure to meet one or more of the selection criteria. In addition, 17 unpublished dissertations and conference papers are unavailable to me, and Dissertation Abstracts does not supply sufficient statistical information about them to satisfy the inclusion criteria. Table 2 inventories the 56 discarded inquiries and summarizes the reasons for omission.


Table 2

Studies Excluded from the Meta-Analysis


United States Supreme Court

 

Study | Reason(s) for Exclusion
Aliotta (1988) | Statistical data presented cannot be converted into an effect size
Epstein, Walker, and Dixon (1989) | Number of court cases or judicial votes in study is not reported
Haynie and Tate (1990) | Statistical data presented cannot be converted into an effect size
Howard (1971) | Judicial biographies; units of analysis are judges, not court cases or judicial votes
Lanier (1997a) | Post-World-War-II judicial behavior is not the primary focus; number of court cases or judicial votes in study is not reported
Lanier (1997b) | Post-World-War-II judicial behavior is not the primary focus
Leavitt (1972) | Unpublished conference paper is unavailable to author
Lewis (1972) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Lindstrom (1968) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Padgett (1980) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Schmidhauser (1963) | Nineteenth Century judicial behavior is the focus
Sprague (1968) | Post-World-War-II judicial behavior is not the primary focus; units of analysis are judges, not court cases or judicial votes
Swinford and Waltenburg (1996) | Ideological position of dependent variable is not clearly conservative versus liberal
Tate (1990) | Statistical data presented cannot be converted into an effect size; substantially duplicates data in Tate (1981)
Tate and Handberg (1991) | Number of judicial votes relevant to findings presented in study is not reported; statistical data presented cannot be converted into an effect size
Ulmer (1986) | Post-World-War-II judicial behavior is not the primary focus
Yates and Whitford (1998) | Ideological position of dependent variable is not clearly conservative versus liberal



United States Courts of Appeals


Study | Reason(s) for Exclusion
Benesh (1997) | Specific number of court cases or judicial votes is not reported
Giles, Hettinger, and Peppers (1997) | Statistical data presented cannot be converted into an effect size
Goldman (1965) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Goldman (1966) | Statistical data presented cannot be converted into an effect size
Goldman (1979) | Number of court cases or judicial votes in study is not reported; substantially duplicates data in Goldman (1975)
Howard (1981) | Statistical data presented cannot be converted into an effect size
Humphries and Songer (1997) | Independent variable for judicial ideology is hybrid measure (including party)
Mosely (1972) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Note (1989) | Statistical data presented cannot be converted into an effect size
Smith and Tiller (1997) | Statistical data presented cannot be converted into an effect size; substantially duplicates data in Smith and Tiller (1996)
Solimine (1988) | Statistical data presented cannot be converted into an effect size
Songer and Haire (1992) | Statistical data presented cannot be converted into a single effect size
Tauber (1998) | Statistical data presented cannot be converted into an effect size
Unah (1997) | Statistical data presented cannot be converted into an effect size
Van Winkle (1997) | Statistical data presented cannot be converted into an effect size



United States District Courts


Study | Reason(s) for Exclusion
Ashenfelter, Eisenberg, and Schwab (1995) | Statistical data presented cannot be converted into an effect size
Baum (1980) | Statistical data presented cannot be converted into an effect size
Ducat and Dudley (1989) | Ideological position of dependent variable is not clearly conservative versus liberal
Feiock (1989) | Statistical data presented cannot be converted into a single effect size
Heike (1978) | Unpublished conference paper is unavailable to author
Moulds (1992) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Vines (1964) | Units of analysis are judges, not court cases or judicial votes



State Supreme Courts

Study | Reason(s) for Exclusion
Beatty (1970a) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts; substantially duplicates data in Beatty (1970b)
Bradley and Ulmer (1980) | Statistical data presented cannot be converted into an effect size
Canon and Baum (1981) | Units of analysis are states, not court cases or judicial votes
Feeley (1969) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Feeley (1971) | Statistical data presented cannot be converted into an effect size
Herndon (1963) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Kramer (1997) | Statistical data presented cannot be converted into an effect size
Langer (1997) | Independent variable for judicial ideology is hybrid measure (including party)
Vines (1969) | Judicial interviews; units of analysis are judges, not court cases or judicial votes
Vogt (1985) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts
Wenzel, Bowler, and Lanoue (1997) | Units of analysis are states, not court cases or judicial votes
Wold (1974) | Judicial interviews; units of analysis are judges, not court cases or judicial votes



State Trial Courts

Study | Reason(s) for Exclusion
Girasa (1988) | Unpublished dissertation is unavailable to author; only insufficient statistical data are available in Dissertation Abstracts



Court Unknown

Study | Reason(s) for Exclusion
Dudley (1989) | Unpublished conference paper is unavailable to author
Lunt and Champagne (1979) | Unpublished manuscript is unavailable to author
Prachera (1977) | Unpublished conference paper is unavailable to author
Schwab and Eisenberg (1988) | Unpublished manuscript is unavailable to author



            Scholars’ verdicts from 36 excluded studies in my possession (or otherwise adequately described in Dissertation Abstracts), and not duplicating works already in the meta-analysis, indicate their omission does not distort the synthesis. Sixteen conclusions support the hypothesis that party affiliation does affect judicial policymaking [18]. Fourteen decide party is not important to judicial action [19]. Six produce ambiguous findings [20]. Further, among the four deleted projects for which some effect size can be computed, two are strong (.461 for Benesh 1997; and .885 for Epstein, Walker, and Dixon 1989), one is moderate (.303 for Ulmer 1986), and the last is weak (.024 for economic-regulation decisions and .192 for civil-rights-and-liberties cases in Lanier 1997a). Thus, because these impressionistic characterizations do not favor one side or the other of the issue but rather appear to be random, the works’ exclusion does not introduce identifiable bias into the meta-analysis.


[Footnote 18: Aliotta (1988); Benesh (1997); Epstein, Walker, and Dixon (1989); Bradley and Ulmer (1980); Feeley (1969); Feiock (1989); Giles, Hettinger, and Peppers (1997); Goldman (1965); Lindstrom (1968); Note (1989); Songer and Haire (1992); Tate and Handberg (1991); Tauber (1998); Van Winkle (1997); Vogt (1985); and Yates and Whitford (1998).]


[Footnote 19: Ashenfelter, Eisenberg, and Schwab (1995); Baum (1980); Canon and Baum (1981); Ducat and Dudley (1989); Howard (1971, 1981); Kramer (1997); Lewis (1972); Solimine (1988); Ulmer (1986); Vines (1964, 1969); Wenzel, Bowler, and Lanoue (1997); and Wold (1974).]


[Footnote 20: Goldman (1966); Herndon (1963); Lanier (1997a, 1997b); Sprague (1968); Unah (1997).]


Determining Effect Sizes

            Meta-analytic effect sizes measure statistical findings with a common metric. The Pearson product-moment correlation coefficient (r) is the yardstick for synthesizing scholarship reporting on the relationship between two variables, both measured on interval or ratio scales (Cohen 1988, 75; Cooper 1989, 104; Wolf 1986, 34).


            Table 1 reports the unadjusted effect sizes in the meta-analysis. Three researchers directly report partial-correlation coefficients [21]. More frequently (38 surveys), researchers report the percentages of judicial votes cast for liberal or conservative positions by specific judges whose party affiliations are known; these percentages convert into proportions of votes in each category, and cross-tabulation then reveals r. This group includes canvasses whose multivariate statistics are not themselves convertible to effect sizes (e.g., Songer and Davis 1990).


[Footnote 21: Bowen (1965); Goldman (1975); and Willison (1986).]


            For 22 studies using regression techniques susceptible of conversion and reporting complete data, the effect size is determined by the formula


r = √(t² / (t² + df))


where t is the t-test value reported for the coefficient or estimate of the party variable in the equation and df is the degrees of freedom in the analysis (Cooper 1989, 104; Wolf 1986, 35).
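As a quick illustration (mine, not the article's), the conversion can be written in a few lines of Python:

```python
import math

def t_to_r(t: float, df: float) -> float:
    """Convert a regression t-statistic and its degrees of freedom
    into a product-moment effect size: r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t * t / (t * t + df))

# Example: a party coefficient with t = 3.0 and 97 degrees of freedom
# yields r = sqrt(9 / 106), roughly .291.
```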


Findings

            From the effect sizes in Table 1, I calculate a mean correlation, weighted by the number of judicial votes per study [22], and then estimate the variance in observed correlations not due to sampling error [23]. These data are displayed in the first row of Table 3, together with 95% credibility and confidence intervals following Whitener (1990).


[Footnote 22: The use of the word “study” here and in the balance of the article is not precise because multiple unadjusted effect sizes in Table 1 come from a single source. Songer (1995), for example, reports different logistic regression equations for civil-rights-and-liberties, criminal-justice, and economic-regulation cases, without consolidating them. Also, instead of combining Gottschall’s (1983) zero-order correlations for civil-rights-and-liberties and criminal-justice decisions, I include them separately because the latter is based only on nonunanimous cases while the former contains unanimous holdings — an important distinction for moderator-variable investigation. I disaggregate the district-court and appellate-court data found in Stidham, Carp, and Songer (1996) and Stidham, Carp, Songer, and Surratt (1992). Thus, the textual reference ought to be to “unadjusted effect size” rather than “study.” Unfortunately, “unadjusted effect size” might lead to confusion with “weighted mean effect size,” and so I choose the inexact “study” instead.]


[Footnote 23: This is a “bare bones” meta-analysis, not correcting for other artifacts such as measurement error or range restriction (Hunter and Schmidt 1990, Chapter 3).]
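For readers unfamiliar with the bare-bones procedure, the following Python sketch (my illustration of the Hunter-Schmidt formulas, not code from the study) computes a sample-size-weighted mean r, subtracts expected sampling-error variance from the observed variance, and forms the credibility and confidence intervals in the manner of Whitener (1990):

```python
import math

def bare_bones_meta(rs, ns):
    """Bare-bones Hunter-Schmidt synthesis of correlations `rs`
    with sample sizes `ns` (here, judicial votes per study)."""
    k, total_n = len(rs), sum(ns)
    # Sample-size-weighted mean correlation
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
    # Weighted observed variance of the correlations
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    # Expected variance due to sampling error alone
    var_err = k * (1 - r_bar ** 2) ** 2 / total_n
    # Residual SD estimates true variability across studies
    sd_rho = math.sqrt(max(var_obs - var_err, 0.0))
    se_mean = math.sqrt(var_obs / k)  # standard error of the mean r
    return {
        "mean_r": r_bar,
        "credibility_95": (r_bar - 1.96 * sd_rho, r_bar + 1.96 * sd_rho),
        "confidence_95": (r_bar - 1.96 * se_mean, r_bar + 1.96 * se_mean),
    }
```

A credibility interval that contains zero is the signal, used throughout Table 3, that moderator variables may be splitting the population into subgroups.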


Table 3

Meta-Analytic Findings*



Row # | Population or Subpopulation of Studies | Number of Unadjusted Effect Sizes | Total Number of Judicial Votes | Weighted Mean Effect Size | Standard Deviation | 95% Credibility Interval | Standard Error | 95% Confidence Interval
1 | All Studies | 66 | 222,789 | +.277 | .275 | -.262 to +.816 | .034 | +.210 to +.343
2 | Zero-Order-Correlation Effect Sizes in All Studies | 42 | 171,006 | +.145 | .092 | -.035 to +.325 | .014 | +.117 to +.173
3 | Regression-Derived Effect Sizes in All Studies | 24 | 51,783 | +.711 | .229 | +.263 to +1.159 | .047 | +.619 to +.802
4 | Using All Court Cases for Zero-Order-Correlation Effect Sizes in All Studies | 32 | 136,501 | +.111 | .066 | -.018 to +.239 | .012 | +.087 to +.134
5 | Using Only Nonunanimous Cases for Zero-Order-Correlation Effect Sizes in All Studies | 10 | 34,505 | +.282 | .035 | +.214 to +.350 | .012 | +.258 to +.306
6 | Using All Court Cases for Regression-Derived Effect Sizes in All Studies | 22 | 31,949 | +.615 | .243 | +.139 to +1.092 | .052 | +.514 to +.717
7 | Civil-Rights-and-Liberties Studies | 27 | 60,861 | +.352 | .294 | -.225 to +.928 | .057 | +.240 to +.463
8 | Zero-Order-Correlation Effect Sizes in Civil-Rights-and-Liberties Studies | 15 | 39,101 | +.161 | .063 | +.037 to +.284 | .017 | +.127 to +.194
9 | Regression-Derived Effect Sizes in Civil-Rights-and-Liberties Studies | 12 | 21,760 | +.695 | .228 | +.247 to +1.142 | .066 | +.565 to +.824
10 | Using All Court Cases for Regression-Derived Effect Sizes in Civil-Rights-and-Liberties Studies | 10 | 9,257 | +.473 | .175 | +.130 to +.816 | .056 | +.363 to +.583
11 | Criminal-Justice Studies | 24 | 53,416 | +.226 | .237 | -.238 to +.690 | .049 | +.131 to +.321
12 | Zero-Order-Correlation Effect Sizes in Criminal-Justice Studies | 18 | 43,201 | +.129 | .098 | -.063 to +.321 | .024 | +.083 to +.175
13 | Regression-Derived Effect Sizes in Criminal-Justice Studies | 6 | 10,215 | +.634 | .216 | +.211 to +1.057 | .088 | +.461 to +.807
14 | Using All Court Cases for Zero-Order-Correlation Effect Sizes in Criminal-Justice Studies | 12 | 36,983 | +.098 | .056 | -.012 to +.208 | .017 | +.065 to +.131
15 | Using Only Nonunanimous Cases for Zero-Order-Correlation Effect Sizes in Criminal-Justice Studies | 6 | 6,218 | +.315 | .088 | +.142 to +.487 | .038 | +.241 to +.389
16 | Using All Court Cases for Regression-Derived Effect Sizes in Criminal-Justice Studies | 6 | 10,215 | +.634 | .216 | +.211 to +1.057 | .088 | +.461 to +.807
17 | Economic-Regulation Studies | 28 | 79,825 | +.194 | .236 | -.268 to +.655 | .045 | +.106 to +.281
18 | Zero-Order-Correlation Effect Sizes in Economic-Regulation Studies | 23 | 70,343 | +.122 | .092 | -.058 to +.301 | .020 | +.084 to +.160
19 | Regression-Derived Effect Sizes in Economic-Regulation Studies | 5 | 9,482 | +.726 | .289 | +.160 to +1.292 | .129 | +.473 to +.979
20 | Using All Court Cases for Zero-Order-Correlation Effect Sizes in Economic-Regulation Studies | 18 | 60,672 | +.097 | .068 | -.036 to +.230 | .017 | +.065 to +.129
21 | Using Only Nonunanimous Cases for Zero-Order-Correlation Effect Sizes in Economic-Regulation Studies | 5 | 9,671 | +.278 | .063 | +.155 to +.402 | .030 | +.220 to +.336
22 | Using All Court Cases for Regression-Derived Effect Sizes in Economic-Regulation Studies | 4 | 2,151 | +.218 | .182 | -.138 to +.574 | .093 | +.036 to +.400
23 | Federal-Court Studies | 41 | 171,847 | +.268 | .288 | -.296 to +.832 | .045 | +.180 to +.356
24 | Zero-Order-Correlation Effect Sizes in Federal-Court Studies | 27 | 137,835 | +.138 | .089 | -.036 to +.312 | .017 | +.104 to +.172
25 | Regression-Derived Effect Sizes in Federal-Court Studies | 14 | 34,012 | +.794 | .203 | +.397 to +1.191 | .054 | +.688 to +.901
26 | Using All Court Cases for Zero-Order-Correlation Effect Sizes in Federal-Court Studies | 22 | 114,423 | +.108 | .066 | -.021 to +.237 | .014 | +.080 to +.136
27 | Using Only Nonunanimous Cases for Zero-Order-Correlation Effect Sizes in Federal-Court Studies | 5 | 23,412 | +.283 | .000 | +.282 to +.283 | .006 | +.271 to +.295
28 | Using All Court Cases for Regression-Derived Effect Sizes in Federal-Court Studies | 13 | 14,323 | +.692 | .281 | +.140 to +1.243 | .078 | +.539 to +.845
29 | State-Court Studies | 26 | 54,120 | +.295 | .222 | -.141 to +.731 | .044 | +.209 to +.381
30 | Zero-Order-Correlation Effect Sizes in State-Court Studies | 16 | 36,349 | +.170 | .095 | -.016 to +.356 | .024 | +.122 to +.217
31 | Regression-Derived Effect Sizes in State-Court Studies | 10 | 17,771 | +.551 | .187 | +.185 to +.917 | .059 | +.435 to +.667
32 | Using All Court Cases for Zero-Order-Correlation Effect Sizes in State-Court Studies | 11 | 25,256 | +.121 | .060 | +.005 to +.238 | .019 | +.084 to +.159
33 | Using Only Nonunanimous Cases for Zero-Order-Correlation Effect Sizes in State-Court Studies | 5 | 11,093 | +.280 | .063 | +.157 to +.403 | .029 | +.223 to +.337
34 | Using All Court Cases for Regression-Derived Effect Sizes in State-Court Studies | 9 | 17,626 | +.553 | .185 | +.190 to +.917 | .062 | +.432 to +.675
35 | US-Supreme-Court Studies | 2 | 32,644 | +.637 | .286 | +.077 to +1.197 | .202 | +.241 to +1.033
36 | US-Court-of-Appeals Studies | 23 | 79,903 | +.242 | .252 | -.252 to +.735 | .053 | +.139 to +.345
37 | Zero-Order-Correlation Effect Sizes in US-Court-of-Appeals Studies | 19 | 68,682 | +.148 | .095 | -.038 to +.335 | .022 | +.105 to +.192
38 | Regression-Derived Effect Sizes in US-Court-of-Appeals Studies | 4 | 11,221 | +.815 | .119 | +.581 to +1.049 | .060 | +.698 to +.932
39 | Using All Court Cases for Zero-Order-Correlation Effect Sizes in US-Court-of-Appeals Studies | 15 | 58,225 | +.124 | .084 | -.040 to +.288 | .022 | +.081 to +.168
40 | Using Only Nonunanimous Cases for Zero-Order-Correlation Effect Sizes in US-Court-of-Appeals Studies | 4 | 10,457 | +.280 | .000 | +.279 to +.281 | .009 | +.262 to +.298
41 | Using All Court Cases for Regression-Derived Effect Sizes in US-Court-of-Appeals Studies | 4 | 11,221 | +.815 | .119 | +.581 to +1.049 | .060 | +.698 to +.932
42 | US-District-Court Studies | 16 | 58,775 | +.099 | .072 | -.041 to +.239 | .018 | +.063 to +.135
43 | Zero-Order-Correlation Effect Sizes in US-District-Court Studies | 7 | 55,673 | +.091 | .030 | +.032 to +.150 | .012 | +.067 to +.115
44 | Regression-Derived Effect Sizes in US-District-Court Studies | 9 | 3,102 | +.246 | .241 | -.228 to +.719 | .082 | +.085 to +.407
45 | State-Supreme-Court Studies | 23 | 51,806 | +.306 | .221 | -.126 to +.739 | .046 | +.216 to +.397
46 | Zero-Order-Correlation Effect Sizes in State-Supreme-Court Studies | 13 | 34,156 | +.177 | .093 | -.005 to +.359 | .026 | +.126 to +.229
47 | Regression-Derived Effect Sizes in State-Supreme-Court Studies | 10 | 17,650 | +.555 | .179 | +.205 to +.906 | .057 | +.444 to +.667
48 | Using All Court Cases for Zero-Order-Correlation Effect Sizes in State-Supreme-Court Studies | 8 | 23,063 | +.128 | .058 | +.014 to +.241 | .022 | +.086 to +.170
49 | Using Only Nonunanimous Cases for Zero-Order-Correlation Effect Sizes in State-Supreme-Court Studies | 5 | 11,093 | +.280 | .063 | +.157 to +.403 | .029 | +.223 to +.337
50 | Using All Court Cases for Regression-Derived Effect Sizes in State-Supreme-Court Studies | 9 | 17,505 | +.558 | .178 | +.210 to +.906 | .059 | +.442 to +.674

*The data in this table are obtained following the procedures in Hunter and Schmidt (1990, Chapter 3) and Whitener (1990) and are corrected for sampling error.



            Thus, across all courts and subject matters, and without regard to moderators (discussed below), the weighted mean correlation between political party and judicial ideology is +.277. Cohen (1988, 79-81) categorizes this as a medium effect size, attributing about 8% of the variance of the dependent variable (ideology) to the independent variable (party).
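The variance-explained figures quoted here and throughout the remainder of the article are simply the squared weighted mean correlations; a quick check in Python (my illustration) reproduces the percentages cited in the text:

```python
# Share of variance in judicial ideology attributable to party is
# the squared correlation, r^2 (Cohen 1988).
for r in (0.277, 0.615, 0.692):
    print(f"r = +{r:.3f}  ->  r^2 = {r * r:.1%}")
# +.277 yields roughly 8%; +.615 roughly 38%; +.692 roughly 48%.
```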


            Because credibility intervals test for the influence of moderator variables and address whether a population should be broken into subpopulations (Whitener 1990), the inclusion of zero in the interval in Row 1 of Table 3 suggests the need to search for moderators. Two prospective ones emerge.


Moderator Variables

            Method of Statistical Analysis. Twenty-four of the 66 effect sizes in Table 1 come from multivariate regression; 39 are zero-order correlations (the remaining three are partial correlations). The unadjusted rs reveal that the highest effect sizes arise from regression-based studies. Indeed, in light of conventional wisdom that party reliably predicts judicial ideology, one would expect higher regression-derived effect sizes because multivariate analysis produces the best independent-variable estimates. Accordingly, regression may provide effect sizes systematically larger than zero-order correlations in the meta-analysis, and statistical technique may be a moderator variable.


            Use of Only Nonunanimous Court Cases. Judicial-behavior scholars exploring multimember courts frequently look solely at nonunanimous opinions because, they argue, the lack of unanimity indicates the legal questions involved in such cases are relatively difficult ones to decide, with respectable arguments on each side, often involving controversial public issues (e.g., Nagel 1973; Tomasi and Velona 1987). Further, nonunanimous opinions facilitate certain analytical techniques, such as Guttman scales (Tate 1983).


            Nonetheless, statistically significant findings regarding party impact may be artifacts of methodology, resting on models that overlook the vast majority of appellate output [24]. The propriety of analyzing only nonunanimous cases has been contested since the 1960s; Goldman (1969) and Grossman (1969) laid the theoretical underpinnings of the debate. The approach, for instance, is open to criticism as nonrepresentative because nonunanimous opinions usually constitute just a fraction — typically 30% or less — of high-court activity. Sprague (1968), for example, finds that 71% of Supreme Court decisions between 1889 and 1959 were unanimous; Goldman (1966) counts at least 84% unanimous opinions in the U.S. Courts of Appeals; and Dubois (1988, 953) reports 87% in California intermediate appellate courts. Studies ignoring unanimous decisions, then, support models addressing merely small portions of all judicial activity.


[Footnote 24: Johnston (1976, 79-80) cautions methodology may affect findings in personal-attribute research:

[W]hen E. Lindstrom [(1968)], using a different methodology but the same data base as Sprague [(1968)], found that there were positive correlations between Supreme Court justices' backgrounds and their voting behavior, where Sprague had been unable to find any such correlations, it became evident that methodology might be the cause of some of the conflicting results.]

 

            More important, recent empirical research provides troubling evidence about the theory behind the nonunanimous methodology. Brenner and Arrington (1990) test the "two-situation" model advocated by C. Herman Pritchett and Sheldon Goldman, which explains unanimous opinions as circumstances when the law is clear and judges have but one choice, and note:


If the two-situation model is valid, then there is no reason to believe that during any extended period of the Court's history, either the liberal or the conservative outcome was dominant in unanimous cases, for there is no reason to believe that the law in these cases would predominantly favor the liberal or the conservative outcome (Brenner and Arrington 1990, 209).

 

Nonetheless, the authors find liberal outcomes dominate unanimous votes in 33 terms of the Vinson, Warren, and Burger Courts and conclude:


[This finding] is incompatible with the two-situation model. The dominance of the liberals in the unanimous votes throughout this extended period is inconsistent with the idea that in some cases, the law is so clear that the justices have no choice. One would have to assume that for some reason, the law consistently favored the liberals in all kinds of cases and in all kinds of times during these decades (Brenner and Arrington 1990, 217).


            Similarly, Dean (1988), probing all civil-rights-and-liberties and economics decisions by the Supreme Court between 1969 and 1985, finds unanimous decisions distinctly more liberal than nonunanimous. She concludes that ideology clearly affects unanimous civil-rights-and-liberties decisions (although not unanimous economics cases). Likewise, from a random sample of U.S. Court of Appeals decisions between 1965 and 1972, Songer (1982) estimates the proportion of unanimous cases containing “real choice situations” (i.e., permitting judges to decide appeals consistently with their ideology) is between 21 and 33½ percent.


            Pinello (1995, 25) also undermines the theory that unanimous decisions represent instances where courts have a single choice. There, in a carefully structured matched-pair study, two state supreme courts vote without dissent — but reach opposite results — on identical legal issues in fourteen of 29 instances (i.e., 48% of the time). Hence, the convention of using only nonunanimous decisions may mislead scholars, excluding important data from analysis.


            Besides, nonunanimity is no reliable indicator of consequentiality of judicial policy. Brown v. Board of Education (1954), Gideon v. Wainwright (1963), and United States v. Nixon (1974) — to name but three landmark Supreme Court decisions — were decided unanimously. Yet, a methodology operationally requiring nonunanimous opinions to work necessarily misses such momentous, judicially created policies.


            Goldman (1966, 1975) provides an example of how the nonunanimous approach may affect findings. The former work canvasses both nonunanimous opinions and unanimous reversals of district-court judgments; the latter, exclusively nonunanimous decisions. The nonunanimous-based 1975 undertaking finds clear demarcation by party across all legal topics investigated, while the partially-unanimous-based 1966 survey discovers variance in economic issues alone, not criminal or civil-rights appeals.


            Moreover, empirical inquiry based exclusively on nonunanimous cases at best supports partial models. As Sprague (1968, 7) remarks: "Strictly speaking, the results hold only for cases . . . in which the Court divided." Thus, before these models can be utilized, another is required to predict when appellate courts fracture.


            In sum, scholars’ use of only nonunanimous court cases may be a moderator in the meta-analysis.



            Two statistical methods are available to confirm moderator variables as such. The first disaggregates studies into subpopulations suggested by the prospective moderators and then looks to the credibility intervals of those subgroups to test for the inclusion of zero (Whitener 1990). Table 3 reveals those subpopulations and credibility intervals.


            Every entry in Table 3 regarding statistical method demonstrates substantially higher findings for regression-derived mean effect sizes than for zero-order correlations, as expected [25]. Indeed, the lone instance of overlap between confidence intervals for simple correlations and regression data occurs for federal district courts (Rows 43 and 44), while every other court and all subject matters demonstrate substantial margins between confidence intervals delineated by statistical method. Further, zero is not embraced in any regression-derived credibility interval (except, again, for federal trial courts), suggesting continued searches for moderators of these subgroups are unnecessary (Whitener 1990) [26]. Clearly, then, statistical method is a moderator variable in the meta-analysis and is referred to hereafter as Moderator #1 [27].


[Footnote 25: Compare Rows 2 and 3, 8 and 9, 12 and 13, 18 and 19, 24 and 25, 30 and 31, 37 and 38, 43 and 44, and 46 and 47.]


[Footnote 26: District-court exceptionalism here probably is a statistical artifact of Rowland and Carp’s (1996) dominance. See infra.]


[Footnote 27: Despite Bowen’s (1965) pioneering work, multiple regression didn’t appear in the judicial-behavior literature until the mid-1970s. Giles and Walker (1975), Goldman (1975), Johnston (1976), Gibson (1978) and Kritzer (1978) ushered in the technique even though earlier statistical procedures, like cumbersome scalogram analysis, continued well into the ‘70s (e.g., Flango and Ducat 1977). This history helped determine when the journal perusal in my research design began.]


            Similar findings appear for the use of only nonunanimous appellate opinions. Every entry in Table 3 specified by this moderator demonstrates considerably higher mean effect sizes for nonunanimous-case enterprises than for those relying on all court decisions [28]. No overlap appears between confidence intervals for all-case studies and those based only on nonunanimous decisions, and zero is not included in any nonunanimous-case-derived credibility interval. Researchers’ use of only nonunanimous court cases, then, also is a moderator variable and is referred to hereafter as Moderator #2.


[Footnote 28: Compare Rows 4 and 5, 14 and 15, 20 and 21, 26 and 27, 32 and 33, 39 and 40, and 48 and 49.]


            The second method to confirm moderator variables is meta-regression analysis (Steel and Ovalle 1984; Stanley and Jarrell 1989; Phillips and Goss 1995). Table 4 displays the outcome from an ordinary-least-squares regression using the effect sizes in Table 1, weighted by the number of judicial votes per study, as the dependent variable and the two prospective moderators as independent variables [29].


[Footnote 29: Moderator #1 is coded 1 if an effect size derives from regression; 0, if a zero-order correlation. The three projects (Bowen 1965; Goldman 1975; and Willison 1986) reporting partial correlations are eliminated from the analysis. Moderator #2 is coded 1 if an examination is based only on nonunanimous cases; 0, if based upon all decisions.]
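A meta-regression of this form can be sketched in a few lines of Python; the sketch below solves the ordinary-least-squares normal equations directly, and the data are invented placeholders rather than the study's actual weighted effect sizes:

```python
# Illustrative meta-regression: weighted effect size regressed on two
# dummy moderators (statistical method, nonunanimous-case design).
# Data are fabricated placeholders for demonstration only.

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    A = [row + [v] for row, v in zip(XtX, Xty)]  # augmented matrix
    for c in range(p):
        pivot = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[pivot] = A[pivot], A[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for j in range(c, p + 1):
                A[r][j] -= f * A[c][j]
    b = [0.0] * p
    for r in reversed(range(p)):
        b[r] = (A[r][p] - sum(A[r][j] * b[j] for j in range(r + 1, p))) / A[r][r]
    return b

# Design rows: [intercept, Moderator #1 dummy, Moderator #2 dummy]
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 1, 0], [1, 0, 1]]
y = [0.001, 0.007, 0.010, 0.016, 0.007, 0.010]  # weighted effect sizes
b = ols(X, y)  # [constant, coefficient for #1, coefficient for #2]
```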


Table 4


OLS Regression


Dependent Variable = Weighted Effect Size


Independent Variable | Beta | Standard Error | t-Statistic
Moderator #1 | .00598 | .003 | 2.036*
Moderator #2 | .00921 | .004 | 2.451*
Constant | .00050 | .002 | 0.253

Standard Error: .01111
n: 63
Adjusted R²: .096

* p < .05




            Both Moderator #1 and Moderator #2 are significant at the .05 level. The coefficients appear minuscule (.00598 for #1 and .00921 for #2) because they reflect the scale of the weighted effect sizes, which range from -.00018 to +.08082, with a mean of .00438 and a standard deviation of .0117.


            The equation in Table 4 explains only about 10% of variance, suggesting other moderators exist. The same conclusion results from the inclusion of zero in the credibility intervals in Rows 4, 14, 20, 26, and 39 of Table 3. Although statistical method and the exclusive use of nonunanimous cases apparently are the sole moderators for state courts (i.e., zero is not contained in the credibility intervals in Rows 31, 32, 33, 34, 47, 48, 49, and 50), that is not the circumstance for federal courts.


Discussion

            Because only positively signed values appear in all 95% confidence intervals in Table 3, the most cautious conclusion from the meta-analysis about the relationship between judges’ political-party affiliation and their ideology is that there is a relationship: Democratic judges indeed are more liberal than Republican ones. That’s the easy part.


            Saying how much more liberal is the challenge. Referring just to the weighted-mean effect size (+.277) of all studies in the meta-analysis (Row 1 of Table 3), for example, focuses on a gross measure and ignores analytic nuance. Rather, the answer to the question of “How much more liberal?” has to be “It depends” — on the court, on the subject matter, and equally important, on moderators.


            Comparing party’s impact on ideology across subject matters demonstrates the necessity — and limitations — of accounting for moderator effects. The mean r for all 27 civil-rights-and-liberties investigations is appreciably higher (+.352, Row 7) than for either the 24 criminal-justice (+.226, Row 11) or the 28 economic-regulation (+.194, Row 17) explorations, with very similar standard errors. However, the difference vanishes when Moderator #1 is taken into account in regression-derived subgroups. There, twelve civil-rights-and-liberties surveys result in a mean effect size of +.695 (Row 9); six criminal-justice studies, +.634 (Row 13); and five on economic regulation, +.726 (Row 19). Further, as standard errors increase with fewer rs, confidence intervals become larger, reinforcing an impression of little variation across subject matter. Finally, when Moderator #2’s bias is removed, party has the greatest strength in six criminal-justice studies (+.634, Row 16), substantially less for ten on civil rights and liberties (+.473, Row 10), and even less in four economic-regulation instances (+.218, Row 22). Indeed, the data in Row 22 are based on the smallest number of judicial votes of any subgroup in Table 3, and the dramatic difference in mean effect sizes between Rows 19 and 22 arises from the omission of only one study (Tate 1981). Thus, although focusing attention on appropriate subgroups, correcting for both moderator variables substantially reduces the number of unadjusted effect sizes for calculating weighted means. A tradeoff arises between conserving unadjusted effect sizes in subpopulations and revising weighted-mean-effect-size estimates for moderator distortion.


            Regression-derived effect sizes yield the highest estimates of the relationship between party and ideology — +.711 for all 24 regression-based rs (Row 3 of Table 3). Yet, if broad range of court action is important, then Moderator #2's influence should be eliminated. Row 6 reveals a weighted-mean effect size of +.615 in that circumstance. Although the estimate drops by almost one-tenth of a point in going from Row 3 to Row 6, Cohen (1988) still categorizes the latter effect size as large, ascribing about 38% of the variance of judicial ideology to party [30].


[Footnote 30: Only two studies separate Row 3 from Row 6. One is Tate (1981), representing almost 20,000 judicial votes and one of only two US-Supreme-Court effect sizes in the meta-analysis. Thus, no Supreme Court inquiry remains in the Row 6 subgroup (because the effect size of Johnston 1976 is a zero-order correlation).]


            The problem with pursuing this interpretive strategy across subpopulations, again, is the small number of appellate-court studies both using regression and relying on all judicial votes. Indeed, only one group of appellate courts — State Supreme Courts (Row 50) — has an adequate supply. By comparison, U.S. Courts of Appeals (Row 41) have only four, one of which (Songer and Sheehan 1992) contributes 90% of the votes and dominates the subgroup.


            Party is a stronger attitudinal force in federal courts than in state tribunals. The federal-court regression-derived mean effect size is +.794 (Row 25), while that for state judges is +.551 (Row 31), with virtually identical standard errors. Adjusting for Moderator #2 reduces the difference to +.692 (Row 28) and +.553 (Row 34), although still almost half (48%) of the variance in federal-court ideology is attributable to party while less than one-third (31%) is in state courts [31].


[Footnote 31: Eight of the nine unadjusted effect sizes in Row 34, representing 70% of the total judicial votes, come from studies explicitly controlling for region or state ideology. Thus, the state-court weighted-mean effect size is not significantly skewed by conservative Southern Democrats' confusing partisan ideological clarity. Likewise, more than 80% of votes in Row 28 stem from federal-court probes with controls for region.]


            Despite party’s overall importance to federal judges, though, sundry statistical dilemmas afflict the meta-analytic findings regarding individual federal-court levels. As noted, for example, only two unadjusted effect sizes support the US-Supreme-Court data (Row 35). The resultant standard error and confidence interval are large, and little reliance can be placed on the mean-effect-size estimate there. Further, both studies use only nonunanimous cases.


            The absence of party-based projects surveying both nonunanimous and unanimous Supreme Court decisions is not an insurmountable hurdle for civil liberties research because the Segal-Cover scores are at hand. However, investigations incorporating the impact of justices’ ideology on non-civil-liberties topics can’t rely on those measures (Epstein and Mershon, 1996).


            Similarly, few options exist for lower federal courts, where Segal-Cover scores don’t apply at all [32]. As mentioned, most U.S. Courts of Appeals studies (19 of 23) provide zero-order-correlation effect sizes. The weighted mean for the 15 of them relying on all court cases (+.124, Row 39) varies dramatically from that for the four regression-based rs (+.815, Row 41), dominated by Songer and Sheehan (1992). Thus, placing confidence in either end of the range is problematic.


[Footnote 32: See Giles, Hettinger, and Peppers (1998) for one prospect.]


            Likewise, Rowland and Carp’s study (1996) overwhelms the 16 federal-trial-court undertakings, accounting for 75% of the judicial votes in the subpopulation. Indeed, the book is so commanding that its zero-order correlation (+.085) overpowers the field. Nine regression-based studies, with combined votes representing only 7% of Rowland and Carp’s (1996) total, barely raise the overall district-court mean effect size (to +.099, Row 42). Interestingly, however, those nine studies’ separate mean is merely +.246 (Row 44), a far cry from the intermediate-appellate courts’ +.815.
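The mechanics behind this dominance are easy to see in a votes-weighted mean. The sketch below is illustrative only: the helper function is not from the study, and the smaller studies’ figures are hypothetical, with one large study (modeled loosely on Rowland and Carp’s r of +.085) swamping several higher-r, low-vote studies.

```python
def weighted_mean_r(effect_sizes, votes):
    """Mean product-moment effect size, weighted by each study's judicial votes."""
    return sum(r * n for r, n in zip(effect_sizes, votes)) / sum(votes)

# Hypothetical subpopulation: one study with r = +.085 and 30,000 votes
# alongside three small regression-based studies with r around +.25.
rs = [0.085, 0.25, 0.24, 0.26]
ns = [30000, 500, 600, 450]

print(round(weighted_mean_r(rs, ns), 3))  # the large study dominates: 0.093
```

The small studies’ unweighted mean is near +.25, yet the pooled estimate sits barely above the large study’s +.085 — the same pattern the district-court subpopulation exhibits.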


            Accordingly, given the current literature, the most circumspect estimate of party’s impact on judicial ideology in federal courts at every level is the overall federal-court weighted-mean effect size (+.692, Row 28), corrected for moderator effects and derived from 13 studies involving more than 14,000 judicial votes.


            Several explanations for the difference between the U.S. Courts of Appeals and District Courts are plausible. The first is senatorial courtesy, which constrains trial-court nominations: presidents have greater political flexibility to place candidates with more extreme ideological positions on intermediate-appellate courts than on trial courts. A complementary interpretation is that presidents, when making nominations to higher courts, pay more attention to appointees’ policy views (Baum 1997, 63).


            In this regard, a third moderator for federal courts may be era. Party may cleave federal judges appointed in recent decades more sharply than those appointed in the years immediately after World War II. Because contemporary American presidents generally have nominated federal judges more reflective of presidential ideology (Goldman 1997), the federal bench may be more balkanized by party than ever before. The problem in testing this hypothesis, though, is the collinearity between era and scholars’ adoption of regression techniques, which have been in widespread use for only about two decades, the same period in which the federal bench became more politicized, if the hypothesis is true [33]. Disaggregating the variables’ effects may prove difficult and, in any event, is beyond the scope of the present analysis.


[Footnote 33: See Footnote 27.]


            State Supreme Courts have a weighted-mean-effect-size estimate, corrected for moderator effects, of +.558 (Row 50), explaining 31% of judicial-ideology variance. This is the only individual-court-level estimate based on sufficient studies in the meta-analysis to warrant substantial confidence.


            In contrast, state intermediate-appellate and trial courts have just two unadjusted effect sizes each, reflecting scholars’ relative lack of empirical interest in lower state tribunals. Moreover, three of the four rs are zero-order correlations, while the fourth is based on very few votes. Accordingly, no meaningful weighted-mean estimates of party’s impact in lower state courts are available.


Conclusion

            Cumulating and synthesizing empirical findings on the link between judges’ political-party affiliation and their performance on the bench confirms the conventional wisdom that party is a dependable measure of ideology in modern American courts: Democratic judges indeed are more liberal on the bench than their Republican counterparts. Research employing regression analysis provides the best estimates of the relationship, while scholars’ use of only nonunanimous appellate opinions overestimates party’s effect on the broad range of judicial action. Corrected for moderator-variable influence, weighted-mean-product-moment effect sizes range from +.553 for state courts to +.692 for federal tribunals, explaining 31% and 48% of ideological variance, respectively.


            The yearning to refine [34], or replace [35], political party as an ideology measure in judicial-behavior scholarship is patent. These meta-analytic findings can help fashion improved metrics for future research.


[Footnote 34: Brace, Langer, and Hall (1998); Humphries and Songer (1997); and Langer (1997).]


[Footnote 35: Giles, Hettinger, and Peppers (1998); Segal and Cover (1989); Songer and Sarver (1997); and Songer, Segal, and Cameron (1994).]



References


Adamany, David W. 1969. "The Party Variable in Judges' Voting: Conceptual Notes and a Case Study." American Political Science Review 63 (March):57.


Aliotta, Jilda M. 1988. “Combining Judges’ Attributes and Case Characteristics: An Alternative Approach to Explaining Supreme Court Decisionmaking.” Judicature 71 (February-March):277.


Ashenfelter, Orley, Theodore Eisenberg, and Stewart J. Schwab. 1995. “Politics and the Judiciary: The Influence of Judicial Background on Case Outcomes.” Journal of Legal Studies 24 (June):257.


Barton, Allen H., and R. Wayne Parsons. 1977. "Measuring Belief System Structure." Public Opinion Quarterly 41 (Summer):159.


Baum, Lawrence. 1980. “Responses of Federal District Judges to Court of Appeals Policies: An Exploration.” Western Political Quarterly 33 (June):217.


Baum, Lawrence. 1997. The Puzzle of Judicial Behavior. Ann Arbor, MI: University of Michigan Press.


Beatty, Jerry K. 1970a. An Institutional and Behavioral Analysis of the Iowa Supreme Court — 1965-1969. Unpublished Ph.D. dissertation, University of Iowa.


Beatty, Jerry K. 1970b. "Decision-Making on the Iowa Supreme Court — 1965-1969." Drake Law Review 19 (May):342.


Begg, Colin B. 1994. “Publication Bias.” In Harris Cooper and Larry V. Hedges (eds.), The Handbook of Research Synthesis. New York: Russell Sage Foundation.


Beiser, Edward N., and Jonathan J. Silberman. 1971. "The Political Party Variable: Workmen's Compensation Cases in the New York Court of Appeals." Polity 3 (Summer):521.


Benesh, Sara C. 1997. “Confession Cases in American Courts: Perspectives on the Hierarchy of Justice.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Bowen, Don Ramsey. 1965. The Explanation of Judicial Voting Behavior from Sociological Characteristics of Judges. Unpublished Ph.D. dissertation, Yale University.


Brace, Paul, and Melinda Gann Hall. 1993. "Integrated Models of Judicial Dissent." Journal of Politics 55 (November):914.


Brace, Paul, and Melinda Gann Hall. 1995. "Studying Courts Comparatively: The View from the American States." Political Research Quarterly 48 (March):5.


Brace, Paul R., and Melinda Gann Hall. 1997. "The Interplay of Preferences, Case Facts, Context, and Rules in the Politics of Judicial Choice." Journal of Politics 59 (November):1206.


Brace, Paul, Laura Langer, and Melinda Gann Hall. 1998. “Measuring the Preferences of State Supreme Court Judges.” Presented at the Annual Meeting of the Midwest Political Science Association, Chicago.


Bradley, Robert, and S. Sidney Ulmer. 1980. “An Examination of Voting Behavior in the Supreme Court of Illinois: 1971-1975.” Southern Illinois University Law Journal 1980 (No. 3):245.


Brenner, Saul, and Theodore S. Arrington. 1990. "Unanimous Decision Making on the U.S. Supreme Court: Case Stimuli and Judicial Attitudes." In Harold J. Spaeth and Saul Brenner (eds.), Studies in U.S. Supreme Court Behavior. New York: Garland.


Brenner, Saul, and Harold J. Spaeth. 1988. “Ideological Position as a Variable in the Authoring of Dissenting Opinions on the Warren and Burger Courts.” American Politics Quarterly 16 (July):317.


Brown v. Board of Education. 1954. 347 U.S. 483.


Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. 1960. The American Voter. New York: John Wiley and Sons.


Canon, Bradley C., and Lawrence Baum. 1981. "Patterns of Tort Law Innovations." American Political Science Review 75 (December):975.


Carp, Robert A., and C.K. Rowland. 1983. Policymaking and Politics in the Federal District Courts. Knoxville, TN: University of Tennessee Press.


Carp, Robert A., Donald Songer, C. K. Rowland, Ronald Stidham, and Lisa Richey-Tracy. 1993. “The Voting Behavior of Judges Appointed by President Bush.” Judicature 76 (April-May):298.


Carp, Robert A., and Ronald Stidham. 1990. Judicial Process in America. Washington, DC: Congressional Quarterly Press.


Cohen, Jacob. 1988. Statistical Power Analysis for the Behavioral Sciences. Second Edition. Hillsdale, NJ: Lawrence Erlbaum Associates.


Cohen, Mark A. 1992. “The Motives of Judges: Empirical Evidence from Antitrust Sentencing.” International Review of Law and Economics 12 (March):13.


Converse, Philip E. 1964. "The Nature of Belief Systems in Mass Publics." In David E. Apter (ed.), Ideology and Discontent. New York: Free Press.


Cooper, Harris M. 1989. Integrating Research: A Guide for Literature Reviews. Second Edition. Newbury Park, CA: Sage Publications.


Cooper, Harris, and Larry V. Hedges (eds.). 1994. The Handbook of Research Synthesis. New York: Russell Sage Foundation.


Crews-Meyer, Kelley A., and Jenny R. Anderson. 1994. “A Cross-Court Model of Judicial Decision Making: Gender Discrimination Cases in State Supreme Courts and the United States Courts of Appeals.” Presented at the Annual Meeting of the Midwest Political Science Association, Chicago.


Cross, Frank B., and Emerson H. Tiller. 1998. “Judicial Partisanship and Obedience to Legal Doctrine: Whistleblowing on the Federal Courts of Appeals.” Yale Law Journal 107 (May):2155.


Dean, Karen E. 1988. An Examination of Unanimous Decision Making on the Burger Court. Unpublished Ph.D. dissertation, Kent State University.


Dolbeare, Kenneth M. 1967. Trial Courts in Urban Politics: State Court Policy Impact and Functions in a Local Political System. New York: John Wiley & Sons.


Dolbeare, Kenneth M. 1969. "The Federal District Courts and Urban Public Policy: An Exploratory Study (1960-1967)." In Joel B. Grossman and Joseph Tanenhaus (eds.), Frontiers of Judicial Research. New York: John Wiley and Sons. 


Doucouliagos, Chris. 1995. “Worker Participation and Productivity in Labor-Managed and Participatory Capitalist Firms: A Meta-Analysis.” Industrial and Labor Relations Review 49 (October):58.


Dubois, Philip L. 1980. From Ballot to Bench: Judicial Elections and the Quest for Accountability. Austin, TX: University of Texas Press.


Dubois, Philip L. 1988. "The Illusion of Judicial Consensus Revisited: Partisan Conflict on an Intermediate State Court of Appeals," American Journal of Political Science 32 (November):946.


Ducat, Craig R., and Robert L. Dudley. 1989. “Federal District Judges and Presidential Power During the Postwar Era.” Journal of Politics 51 (February):98.


Dudley, Robert L. 1989. "Lower-Court Decision-Making in Pornography Cases: Do We Know It When We See It?" Presented at the Annual Meeting of the Midwest Political Science Association, Chicago.


Eisenberg, Theodore, and Sheri Lynn Johnson. 1991. “The Effects of Intent: Do We Know How Legal Standards Work?” Cornell Law Review 76 (September):1151.


Emmert, Craig F., and Carol Ann Traut. 1994. "The California Supreme Court and the Death Penalty." American Politics Quarterly 22 (January):41.


Entman, Robert M. 1983. "The Impact of Ideology on Legislative Behavior and Public Policy." Journal of Politics 45 (February):163.


Epstein, Lee, and Carol Mershon. 1996. “Measuring Political Preferences.” American Journal of Political Science 40 (February):261.


Epstein, Lee, Thomas G. Walker, and William J. Dixon. 1989. “The Supreme Court and Criminal Justice Disputes: A Neo-Institutional Perspective.” American Journal of Political Science 33 (November):825.


Fair, Daryl R. 1967. “An Experimental Application of Scalogram Analysis to State Supreme Court Decisions.” Wisconsin Law Review 1967 (Spring):449.


Feeley, Malcolm M. 1969. A Comparative Analysis of State Supreme Court Behavior. Unpublished Ph.D. dissertation, University of Minnesota.


Feeley, Malcolm M. 1971. "Another Look at the 'Party Variable' in Judicial Decision-Making: An Analysis of the Michigan Supreme Court." Polity 4 (Autumn):91.


Feiock, Richard C. 1989. “Support for Business in the Federal District Courts: The Impact of State Political Environment.” American Politics Quarterly 17 (January):96.


Flango, Victor E., and Craig R. Ducat. 1977. “Toward an Integration of Public Law and Judicial Behavior.” Journal of Politics 39 (February):41.


Flemming, Gregory N., David B. Holian, and Susan Gluck Mezey. 1998. “An Integrated Model of Privacy Decision Making in State Supreme Courts.” American Politics Quarterly 26 (January):35.


George, Tracey E. 1998. “Developing a Positive Theory of Decisionmaking on U.S. Courts of Appeals.” Ohio State Law Journal 58:1635.


Gerber, Scott D., and Keeok Park. 1997. "The Quixotic Search for Consensus on the U.S. Supreme Court: A Cross-Judicial Empirical Analysis of the Rehnquist Court Justices." American Political Science Review 91 (June):390.


Gibson, James L. 1978. "Race as a Determinant of Criminal Sentences." Law and Society Review 12 (Spring):455.


Gideon v. Wainwright. 1963. 372 U.S. 335.


Giles, Micheal W., Virginia A. Hettinger, and Todd C. Peppers. 1997. “Majority-Minority Panels and Policy Making on the United States Courts of Appeals.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Giles, Micheal W., Virginia A. Hettinger, and Todd C. Peppers. 1998. “Alternative Measures of Preferences for Judges of the Courts of Appeals.” Presented at the Annual Meeting of the Midwest Political Science Association, Chicago.


Giles, Micheal W., and Thomas G. Walker. 1975. "Judicial Policy-Making and Southern School Segregation." Journal of Politics 37 (November):917.


Girasa, Rosario J. 1988. The Civil Court of Bronx County: An Inquiry into Judicial Decision-Making. Unpublished Ph.D. dissertation, Fordham University.


Glass, Gene V. 1978. “In Defense of Generalization.” The Behavioral and Brain Sciences 3:394.


Glass, Gene V. 1983. “Synthesizing Empirical Research: Meta-Analysis.” In S. A. Ward and L. J. Reed (eds.), Knowledge Structure and Use. Philadelphia: Temple University Press.


Goldman, Sheldon. 1965. Politics, Judges, and the Administration of Justice: The Backgrounds, Recruitment, and Decisional Tendencies of the Judges on the United States Courts of Appeals, 1961-64. Unpublished Ph.D. dissertation, Harvard University.


Goldman, Sheldon. 1966. "Voting Behavior on the U.S. Courts of Appeals." American Political Science Review 60 (June):374.


Goldman, Sheldon. 1969. "Backgrounds, Attitudes and the Voting Behavior of Judges: A Comment on Joel Grossman's Social Backgrounds and Judicial Decisions." Journal of Politics 31 (February):214.


Goldman, Sheldon. 1975. "Voting Behavior on the U.S. Courts of Appeals Revisited." American Political Science Review 69 (June):491.


Goldman, Sheldon. 1979. “The Effect of Past Judicial Behavior on Subsequent Decision-Making.” Jurimetrics Journal 19:208.


Goldman, Sheldon. 1997. Picking Federal Judges: Lower Court Selection From Roosevelt Through Reagan. New Haven, CT: Yale University Press.


Gottschall, Jon. 1983. "Carter's Judicial Appointments: The Influence of Affirmative Action and Merit Selection on Voting on the U.S. Courts of Appeals." Judicature 67 (October):164.


Gottschall, Jon. 1986. "Reagan's Appointments to the U.S. Courts of Appeals: The Continuation of a Judicial Revolution." Judicature 70 (June-July):48.


Grossman, Joel B. 1969. "Further Thoughts on Consensus and Conversion: A Reply to Professor Goldman." Journal of Politics 31 (February):223.


Gryski, Gerard S., and Eleanor C. Main. 1986. “Social Backgrounds as Predictors of Votes on State Courts of Last Resort: The Case of Sex Discrimination.” Western Political Quarterly 39 (September):528.


Gryski, Gerard S., Eleanor C. Main, and William J. Dixon. 1986. "Models of State High Court Decision Making in Sex Discrimination Cases." Journal of Politics 48 (February):143.


Hale, Scott L. 1998. “Attack Messages and their Effects on Judgments of Political Candidates: A Random-Effects Meta-Analytic Review.” Presented at the Annual Meeting of the Midwest Political Science Association, Chicago.


Hall, Melinda Gann, and Paul Brace. 1992. "Toward an Integrated Model of Judicial Voting Behavior." American Politics Quarterly 20 (April):147.


Hall, Melinda Gann, and Paul Brace. 1994. "The Vicissitudes of Death by Decree: Forces Influencing Capital Punishment Decision Making in State Supreme Courts." Social Science Quarterly 75 (March):136.


Hall, Melinda Gann, and Paul Brace. 1996. "Justices' Responses to Case Facts: An Interactive Model." American Politics Quarterly 24 (April):237.


Hansen, Wendy L., Renée J. Johnson, and Isaac Unah. 1995. "Specialized Courts, Bureaucratic Agencies, and the Politics of U.S. Trade Policy." American Journal of Political Science 39 (August):529.


Haynie, Stacia L., and C. Neal Tate. 1990. “Institutional Liberalism in the United States Supreme Court, 1916-1988: An Explanation of Economics and Civil Rights and Liberties Outcomes.” Presented at the Annual Meeting of the American Political Science Association, San Francisco.


Heike, Susan. 1978. "Federal District Judges and School Desegregation: Social Background Factors Reconsidered." Presented at the Annual Meeting of the Southern Political Science Association, Atlanta, GA.


Hensley, Thomas, and Joyce Baugh. 1987. “Impact of the 1978 Omnibus Judgeships Act.” In Stuart S. Nagel (ed.), Research in Law and Policy Studies. Greenwich, CT: JAI Press.


Herndon, James F. 1963. Relationships Between Partisanship and the Decisions of the State Supreme Courts. Unpublished Ph.D. dissertation, University of Michigan.


Howard, Jr., J. Woodford. 1971. "Judicial Biography and the Behavioral Persuasion." American Political Science Review 65 (September):704.


Howard, Jr., J. Woodford. 1981. Courts of Appeals in the Federal Judicial System. Princeton, NJ: Princeton University Press.


Howard, Robert M. 1998. “Judicial Decision-Making, IRS Policy and Taxpayer Litigation: Interaction, Interdependence and Democratic Control.” Manuscript, State University of New York, Stony Brook.


Humphries, Martha Anne, and Donald R. Songer. 1997. “A Strategic Actor Model of the United States Courts of Appeals Decisions in Administrative Agency Litigation.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Hunt, Morton. 1997. How Science Takes Stock: The Story of Meta-Analysis. New York: Russell Sage Foundation.


Hunter, John E., and Frank L. Schmidt. 1990. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Newbury Park, CA: Sage Publications.


Hunter, John E., and Frank L. Schmidt. 1996. “Cumulative Research Knowledge and Social Policy Formulation: The Critical Role of Meta-Analysis.” Psychology, Public Policy and Law 2 (June):324.


Jensen, Jennifer M., and Ashlyn Kuersten. 1997. “Do Women Judges Vote Differently on the U.S. Courts of Appeals?” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Johnson, Charles. 1987. "Law, Politics and Judicial Decision Making." Law and Society Review 21 (No. 2):325.


Johnston, Richard E. 1976. "Supreme Court Voting Behavior: A Comparison of the Warren and Burger Courts." In Robert L. Peabody (ed.), Cases in American Politics. New York: Praeger.


Kearney, Richard C., and Reginald S. Sheehan. 1992. “Supreme Court Decision Making: The Impact of Court Composition on State and Local Government Litigation.” Journal of Politics 54 (November):1008.


Kilwein, John C., and Richard A. Brisbin, Jr. 1997. "Policy Convergence in a Federal Judicial System: The Application of Intensified Scrutiny Doctrines by State Supreme Courts." American Journal of Political Science 41 (January):122.


Knight, Kathleen, and Robert S. Erikson. 1997. "Ideology in the 90s." In Barbara Norrander and Clyde Wilcox (eds.), Public Opinion in the 90s. Washington, DC: Congressional Quarterly Press.


Kovacic, William E. 1991. "The Reagan Judiciary and Environmental Policy." Environmental Affairs 18:669.


Kramer, Paul A. 1997. “Assessing the Utility of Integrated Judicial Decision Making Models: The Case of State Supreme Courts.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Kritzer, Herbert. 1978. "Political Correlates of the Behavior of Federal District Judges: A 'Best Case' Analysis." Journal of Politics 40 (February):25.


Langer, Laura. 1997. “State Supreme Court and Countermajoritarian Behavior.” Presented at the Conference on the Scientific Study of Judicial Politics, Atlanta, GA.


Lanier, Drew N. 1997a. “Of Time and Judicial Behavior: Longitudinal Analyses of United States Supreme Court Decision-Making, 1888-1989.” Presented at the Annual Meeting of the Southern Political Science Association, Norfolk, VA.


Lanier, Drew N. 1997b. Of Time and Judicial Behavior: Time Series Analyses of United States Supreme Court Agenda-Setting and Decision-Making, 1888-1989. Unpublished Ph.D. dissertation, University of North Texas.


Lau, Richard R., Lee Sigelman, Caroline Heldman, and Paul Babbitt. 1999. “The Effects of Negative Political Advertisements: A Meta-Analytic Assessment.” American Political Science Review 93 (December):851.


Leavitt, Donald. 1972. "Political Party and Class Influences on the Attitudes of Justices of the Supreme Court in the Twentieth Century." Presented at the Annual Meeting of the Midwest Political Science Association, Chicago.


Levine, Jeffrey, Edward G. Carmines, and Robert Huckfeldt. 1997. "The Rise of Ideology in the Post-New Deal Party System, 1972-1992." American Politics Quarterly 25 (January):19.


Lewis, Peter W. 1972. U.S. Supreme Court Decisions on Criminal Cases with Opinions (1953-1971): An Empirical Analysis of the Warren and Burger Courts. Unpublished Ph.D. dissertation, Florida State University.


Light, Richard J., and David B. Pillemer. 1984. Summing Up: The Science of Reviewing Research. Cambridge, MA: Harvard University Press.


Lindstrom, Eugene Emil. 1968. Attributes Affecting the Voting Behavior of Supreme Court Justices: 1889-1959. Unpublished Ph.D. dissertation, Stanford University.


Lloyd, Randall D. 1995. "Separating Partisanship from Party in Judicial Research: Reapportionment in the U.S. District Courts." American Political Science Review 89 (June):413.


Lunt, C., and Anthony Champagne. 1979. "Judicial Backgrounds and Decisions in Abortion Cases." Unpublished manuscript.


Luttbeg, Norman R., and Michael M. Gant. 1985. "The Failure of Liberal/Conservative Ideology as a Cognitive Structure." Public Opinion Quarterly 49 (Spring):80.


Mishler, William, and Reginald S. Sheehan. 1993. “The Supreme Court as a Countermajoritarian Institution? The Impact of Public Opinion on Supreme Court Decisions.” American Political Science Review 87 (March):87.


Mishler, William, and Reginald S. Sheehan. 1996. “Public Opinion, the Attitudinal Model, and Supreme Court Decision Making: A Micro-Analytic Perspective.” Journal of Politics 58 (February):169.


Mosely, William H. 1972. Personal Attitudes and Judicial Role in Judicial Decision-Making: A Study of the United States Courts of Appeals. Unpublished Ph.D. dissertation, University of Hawaii.


Moulds, Elizabeth F. 1992. The United States Sentencing Guidelines: Public Policy Implementation in the Federal Courts. Unpublished D.P.A. dissertation, University of Southern California.


Nagel, Stuart S. 1961. "Political Party Affiliation and Judges' Decisions." American Political Science Review 55 (December):843.


Nagel, Stuart S. 1962a. "Judicial Backgrounds and Criminal Cases." Journal of Criminal Law, Criminology and Police Science 53 (September):333.


Nagel, Stuart S. 1962b. "Testing Relations Between Judicial Characteristics and Judicial Decision-Making." Western Political Quarterly 15 (September):425.


Nagel, Stuart S. 1969. The Legal Process from a Behavioral Perspective. Homewood, IL: The Dorsey Press.


Nagel, Stuart S. 1973. Comparing Elected and Appointed Judicial Systems. Beverly Hills, CA: Sage Publications.


Nagel, Stuart S. 1974. "Multiple Correlation of Judicial Backgrounds and Decisions." Florida State University Law Review 2 (Spring):258.


Nagel, Stuart S. 1982. “Discretion in the Criminal Justice System: Analyzing, Channeling, Reducing, and Controlling It.” Emory Law Journal 31 (Summer):603.


Nelson, Blake. 1997. “Judicial Role and Ideology: An Integrated Legal and Attitudinal Model of U.S. Supreme Court Decision Making.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Note. 1989. “The Politics of En Banc Review.” Harvard Law Review 102 (February):864.


Padgett, George E. 1980. A Quantitative Analysis of United States Supreme Court Decision-Making Relative to First Amendment Issues of Free Speech and Free Press. Unpublished Ph.D. dissertation, Ohio University.


Phillips, Joseph M., and Ernest P. Goss. 1995. “The Effect of State and Local Taxes on Economic Development: A Meta-Analysis.” Southern Economic Journal 62:320.


Pinello, Daniel R. 1995. The Impact of Judicial-Selection Method on State-Supreme-Court Policy: Innovation, Reaction, and Atrophy. Westport, CT: Greenwood Press.


Pinello, Daniel R. 1998. "Explaining Success and Failure of Lesbian-and-Gay-Rights Claims in State Appellate Courts." Unpublished manuscript. [A revised account of this study is available in Daniel R. Pinello, Gay Rights and American Law (Cambridge University Press 2003).]


Prachera, John S. 1977. "Background Characteristics and Judicial Voting Behavior: An Examination." Presented at the Annual Meeting of the Western Political Science Association, Phoenix.


Quinn, John R. 1996. “‘Attitudinal’ Decision Making in the Federal Courts: A Study of Constitutional Self-Representation Claims.” San Diego Law Review 33 (May-June):701.


Revesz, Richard L. 1997. “Environmental Regulation, Ideology, and the D.C. Circuit.” Virginia Law Review 83:1717.


Rosenthal, Robert. 1979. “The ‘File Drawer Problem’ and Tolerance for Null Results.” Psychological Bulletin 86:638.


Rosenthal, Robert. 1991. Meta-Analytic Procedures for Social Research. Revised Edition. Newbury Park, CA: Sage Publications.


Rowland, C. K., and Robert Carp. 1980. "A Longitudinal Study of Party Effects on Federal District Court Policy Propensities." American Journal of Political Science 24 (May):291.


Rowland, C. K., and Robert Carp. 1983a. "Presidential Effects on Federal District Court Policy Decisions: Economic Liberalism, 1960-77." Social Science Quarterly 64 (June):386.


Rowland, C. K., and Robert Carp. 1983b. "The Relative Effects of Maturation, Time Period, and Appointing President on District Judges' Policy Choices." Political Behavior 5 (No. 1):109.


Rowland, C.K., and Robert A. Carp. 1996. Politics and Judgment in Federal District Courts. Lawrence, KS: University Press of Kansas.


Rowland, C. K., Robert A. Carp, and Ronald A. Stidham. 1984. "Judges' Policy Choices and the Value Basis of Judicial Appointments: A Comparison of Support for Criminal Defendants Among Nixon, Johnson, and Kennedy Appointees to the Federal District Courts." Journal of Politics 46 (August):886.


Rowland, C. K., Donald Songer, and Robert A. Carp. 1988. "Presidential Effects on Criminal Justice Policy in the Lower Federal Courts: The Reagan Judges." Law and Society Review 22 (No. 1):191.


Rowland, C.K., and Bridget J. Todd. 1991. "Where You Stand Depends on Who Sits." Journal of Politics 53 (February):175.


Schmidhauser, John. 1963. Constitutional Law in the Political Process. Chicago: Rand McNally.


Schubert, Glendon. 1959. Quantitative Analysis of Judicial Behavior. Glencoe, IL: The Free Press.


Schultz, Vicki, and Stephen Petterson. 1992. “Race, Gender, Work, and Choice: An Empirical Study of the Lack of Interest Defense in Title VII Cases Challenging Job Segregation.” University of Chicago Law Review 59 (Summer):1073.


Schwab, Stewart J., and Theodore Eisenberg. 1988. “The Influence of Judges’ Political Party in Civil Rights and Prisoner Cases.” Unpublished paper, dated June 28, Georgetown Law Journal.


Segal, Jeffrey A. 1997. “Separation-of-Powers Games in the Positive Theory of Congress and Courts.” American Political Science Review 91 (March):28.


Segal, Jeffrey A., Charles M. Cameron, and Albert D. Cover. 1992. “A Spatial Model of Roll Call Voting: Senators, Constituents, Presidents, and Interest Groups in Supreme Court Confirmations.” American Journal of Political Science 36:96.


Segal, Jeffrey A., and Albert D. Cover. 1989. "Ideological Values and Votes of U.S. Supreme Court Justices." American Political Science Review 83 (June):557.


Segal, Jeffrey A., Lee Epstein, Charles M. Cameron, and Harold J. Spaeth. 1995. "Ideological Values and the Votes of U.S. Supreme Court Justices Revisited." Journal of Politics 57 (August):812.


Segal, Jeffrey A., and Harold J. Spaeth. 1993. The Supreme Court and the Attitudinal Model. New York: Cambridge University Press.


Segall, Michael. 1998. “Death by Discrimination: A Study of Judicial Voting.” Manuscript, Emory University.


Sheehan, Reginald S., William Mishler, and Donald R. Songer. 1992. "Ideology, Status, and Differential Success of Direct Parties Before the Supreme Court." American Political Science Review 86 (June):464.


Sisk, Gregory C., Michael Heise, and Andrew P. Morriss. 1998. “Charting the Influences on the Judicial Mind: An Empirical Study of Judicial Reasoning.” New York University Law Review 73 (December):1377.


Smith, Joseph L., and Emerson H. Tiller. 1996. "The Strategy of Judging: An Empirical Assessment." Presented at the Annual Meeting of the Midwest Political Science Association, Chicago.


Smith, Joseph L., and Emerson H. Tiller. 1997. "The Strategy of Judging: Evidence From Administrative Law.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Solimine, Michael E. 1988. “Ideology and En Banc Review.” North Carolina Law Review 67 (November):29.


Songer, Donald R. 1982. "Consensual and Nonconsensual Decisions in Unanimous Opinions in the U.S. Courts of Appeals." American Journal of Political Science 26 (May):225.


Songer, Donald R. 1987. “The Impact of the Supreme Court on Trends in Economic Policy Making in the United States Courts of Appeals.” Journal of Politics 49 (August):830.


Songer, Donald R. 1995. “Integrated Models of State Supreme Court Decision Making.” Presented at the Annual Meeting of the American Political Science Association, Chicago.


Songer, Donald R., and Sue Davis. 1990. "The Impact of Party and Region on Voting Decisions in the United States Courts of Appeals, 1955-1986." Western Political Quarterly 43 (June):317.


Songer, Donald R., and Susan Haire. 1992. "Integrating Alternative Approaches to the Study of Judicial Voting: Obscenity Cases in the U.S. Courts of Appeals." American Journal of Political Science 36 (November):963.


Songer, Donald R., and Tammy A. Sarver. 1997. “Implementation of Precedent Through the Honor System: Tort Diversity Cases in the United States Courts of Appeals, 1960-1988.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Songer, Donald R., Jeffrey A. Segal, and Charles M. Cameron. 1994. "The Hierarchy of Justice: Testing a Principal-Agent Model of Supreme Court-Circuit Court Interactions." American Journal of Political Science 38 (August):673.


Songer, Donald R., and Reginald S. Sheehan. 1990. "Supreme Court Impact on Compliance and Outcomes." Western Political Quarterly 43 (June):297.


Songer, Donald R., and Reginald S. Sheehan. 1992. “Who Wins on Appeal? Upperdogs and Underdogs in the United States Courts of Appeals.” American Journal of Political Science 36 (February):235.


Sprague, John D. 1968. Voting Patterns of the United States Supreme Court: Cases in Federalism, 1889-1959. Indianapolis: Bobbs-Merrill.


Stanley, T. D., and Stephen B. Jarrell. 1989. “Meta-Regression Analysis: A Quantitative Method of Literature Surveys.” Journal of Economic Surveys 3 (June):161.


Stecher, Jamie B. W. 1977. “Democratic and Republican Justice: Judicial Decision-Making on Five State Supreme Courts.” Columbia Journal of Law and Social Problems 13 (No. 2):137.


Steel, Robert P., and Nestor K. Ovalle, 2d. 1984. “A Review and Meta-Analysis of Research on the Relationship Between Behavioral Intentions and Employee Turnover.” Journal of Applied Psychology 69 (No. 4):673.


Stidham, Ronald, and Robert A. Carp. 1982. “Trial Court Response to Supreme Court Policy Changes.” Law and Policy Quarterly 4 (April):215.


Stidham, Ronald, and Robert A. Carp. 1987. “Judges, Presidents, and Policy Choices.” Social Science Quarterly 68 (June):395.


Stidham, Ronald, Robert A. Carp, and C. K. Rowland. 1983. “Women’s Rights Before the Federal District Courts, 1971-1977.” American Politics Quarterly 11 (April):205.


Stidham, Ronald, Robert A. Carp, and Donald R. Songer. 1996. “The Voting Behavior of President Clinton’s Judicial Appointees.” Judicature 80 (July-August):16.


Stidham, Ronald, Robert A. Carp, Donald R. Songer, and Donean Surratt. 1992. “The Impact of Major Structural Reform on Judicial Decisionmaking: A Case Study of the U.S. Fifth Circuit.” Western Political Quarterly 45 (March):143.


Swanson, Rick A., and Albert P. Melone. 1995. “The Partisan Factor and Judicial Behavior in the Illinois Supreme Court.” Southern Illinois University Law Journal 19 (Winter):303.


Swinford, Bill, and Eric N. Waltenburg. 1996. “An Action-Reaction Model of Supreme Court Litigation: The Case of the States.” Presented at the Annual Meeting of the American Political Science Association, San Francisco.


Tarr, G. Alan. 1977. Judicial Impact and State Supreme Courts. Lexington, MA: Lexington Books.


Tarr, G. Alan. 1994. Judicial Process and Judicial Policymaking. St. Paul, MN: West.


Tate, C. Neal. 1981. “Personal Attribute Models of the Voting Behavior of U.S. Supreme Court Justices’ Liberalism in Civil Liberties and Economic Decisions, 1946-1978.” American Political Science Review 75 (June):355.


Tate, C. Neal. 1983. “The Methodology of Judicial Behavior Research.” Political Behavior 5 (No. 1):51.


Tate, C. Neal. 1990. “Personal Attributes as Explanations of Supreme Court Justices’ Decision Making.” In Henry R. Glick (ed.), Courts in American Politics. New York: McGraw-Hill.


Tate, C. Neal, and Roger Handberg. 1991. “Time Binding and Theory Building in Personal Attribute Models of Supreme Court Voting Behavior, 1916-88.” American Journal of Political Science 35 (May):460.


Tauber, Steven C. 1998. “On Behalf of the Condemned? The Impact of the NAACP Legal Defense Fund on Capital Punishment Decision Making in the U.S. Courts of Appeals.” Political Research Quarterly 51 (March):191.


Tomasi, Timothy B., and Jess A. Velona. 1987. “All the President’s Men? A Study of Ronald Reagan’s Appointments to the U.S. Courts of Appeals.” Columbia Law Review 87 (May):766.


Ulmer, S. Sidney. 1962. “The Political Party Variable in the Michigan Supreme Court.” Journal of Public Law 11 (No. 2):352.


Ulmer, S. Sidney. 1986. “Are Social Background Models Time-Bound?” American Political Science Review 80 (September):957.


Unah, Isaac. 1997. “Specialized Courts of Appeals’ Review of Bureaucratic Actions and the Politics of Protectionism.” Political Research Quarterly 50 (December):851.


United States v. Nixon. 1974. 418 U.S. 683.


Van Winkle, Steven R. 1997. “Dissent as a Signal: Evidence from the U.S. Courts of Appeals.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Vigilante, Katherine O’Hara. 1998. “Supreme Court Signals: Voting Rights and Racial Classifications in the U.S. District Courts.” Manuscript, Emory University.


Vines, Kenneth N. 1964. “Federal District Judges and Race Relations Cases in the South.” Journal of Politics 26 (May):337.


Vines, Kenneth N. 1969. “The Judicial Role in the American States: An Exploration.” In Joel B. Grossman and Joseph Tanenhaus (eds.), Frontiers of Judicial Research. New York: John Wiley and Sons.


Vogt, Gayle H. 1985. Education Decision Making: The Influence of the Political and Social Environment on the Justices of the California Supreme Court, 1954-1982. Unpublished Ph.D. dissertation, Claremont Graduate School.


Wahlbeck, Paul J. 1997a. “The Development of a Legal Rule: The Federal Common Law of Public Nuisance.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.


Wahlbeck, Paul J. 1997b. “The Life of the Law: Judicial Politics and Legal Change.” Journal of Politics 59 (August):778.


Walker, Thomas G. 1972. “A Note Concerning Partisan Influences on Trial-Judge Decision Making.” Law and Society Review 6 (May):645.


Wenzel, James P., Shaun Bowler, and David J. Lanoue. 1997. “Legislating From the State Bench: A Comparative Analysis of Judicial Activism.” American Politics Quarterly 25 (July):363.


Whitener, Ellen M. 1990. “Confusion of Confidence Intervals and Credibility Intervals in Meta-Analysis.” Journal of Applied Psychology 75 (No. 3):315.


Willison, David H. 1986. “Judicial Review of Administrative Decisions: Agency Cases Before the Court of Appeals for the District of Columbia, 1981-1984.” American Politics Quarterly 14 (October):317.


Wold, John T. 1974. “Political Orientations, Social Backgrounds, and Role Perceptions of State Supreme Court Judges.” Western Political Quarterly 27 (June):239.


Wolf, Fredric M. 1986. Meta-Analysis: Quantitative Methods for Research Synthesis. Beverly Hills, CA: Sage Publications.


Wolf, Patrick J. 1997. “Why Must We Reinvent the Federal Government? Putting Historical Developmental Claims to the Test.” Journal of Public Administration Research and Theory 7 (July):353.


Yarnold, Barbara M. 1997. “Factors Related to Outcomes in Religious Freedoms Cases, Federal District Courts: 1970-1990.” Justice System Journal 19 (No. 2):181.


Yates, Jeff, and Andrew Whitford. 1998. “Presidential Power and the United States Supreme Court.” Political Research Quarterly 51 (June):539.