Quantitative Methods Resources

Building Objective Psychometric Instruments

Data Diagnosis, Outliers and Examining Parametric Test Assumptions

Data Transformations, Difference Scores, Proportion-of-Control Scores & Control-Residualized Scores

Distribution-Free Tests (Monte Carlo, Permutation, Etc.)

Effect Sizes

Individual Differences: Concepts

Internet-Based Calculators

Intra-class Correlations

Mediation and Moderation Analyses


-         Bayesian stats

-         Cluster analysis

-         Coefficient of variation (COV)

-         P-rep

-         Orthogonal polynomial coefficients

-         Controlling familywise error

-         Flavors of correlations

-         Dichotomizing

-         Ethics

-         Fitting distributions

-         Missing data

-         Ordinal regression

-         MS Excel tricks

-         Philosophy of science

-         Probing GLM/HLM Interactions

-         Random sampling of stimuli

-         Repeated-Measures GLM as a Between-Subjects ANOVA/Regression

-         Trimmed estimators of central tendency

-         What statistical reviewers like/dislike

Nonparametric Statistics

Picking Which Test to Use

Psychometrics for Affective Neuroscientists

Single Subject ("Case Study") Analyses  

Structural Equation Modeling (SEM)

Teaching (and Learning) Univariate Parametric Statistics

Testing for Significant Differences Between Cronbach's Alphas

Testing Pairwise Differences in Within-Subjects (Repeated-Measures) Designs

The Visual Display of Quantitative Information

Unequal Cell Frequencies ("n's") (a.k.a. "Unbalanced Designs")

Using SPSS

Why I Don't Use MS Excel for Statistics

Within-Subjects (Repeated-Measures) Error Bars


Still Can't Find It? Jump to Unsorted Files 

Return to Homepage or Laboratory for Affective Neuroscience or Waisman Laboratory for Brain Imaging and Behavior

Picking Which Statistical Test to Use


Note: ...recognize that the selection of a statistical procedure may to some extent be a matter of judgment and that other statisticians may select
alternative procedures.
-- ASA Ethical Guidelines for Statistical Practice, published by the American Statistical Association, 1989.


see also Hand, 1994


Return to top

Internet-Based Calculators 



    SISA Bonferroni Correction (see also the section on Controlling Familywise Error)

    Betty Jung's collection


Return to top


Mediation and Moderation Analyses 

Web-based Expertise

    Andrew Hayes, OSU

    David Kenny, UConn

    David MacKinnon, ASU

    Kris Preacher, University of Kansas

    Paul Jose's Mediation & Moderation Help Centre


Published Papers

    Baron & Kenny (1986) {the seminal explication} 

    Coan & Allen (2004) {discusses applications of moderator/mediator analyses to neurophysiological data}

    Frazier, Tix, & Barron (2004) {good, non-technical introduction} 

    Holmbeck (1997) {good, non-technical introduction}

    Holmbeck (2002) {good, non-technical introduction} 

    Irwin & McClelland (2001) {moderation analyses}

    Judd, Kenny & McClelland (2001) {repeated-measures|within-subjects mediation and moderation analyses; see also Judd et al., 1996} 

    MacKinnon, Lockwood, Hoffman, West & Sheets (2002) {quantitative comparison of techniques for testing mediation}

    MacKinnon, Lockwood & Williams (2004) {fleshes out some of the issues raised in MacKinnon et al., 2002} 

    McClelland & Judd, 1993

    Muller, Judd & Yzerbyt (in press) {mediated moderation and moderated mediation} 

    Shrout & Bolger (2002) {discuss bootstrapping techniques for testing mediation; cf. MacKinnon et al., 2002, 2004}

    Spencer, Zanna & Fong (in press) {criticism of the overapplication of mediation analyses}


Karl Wuensch's Summary

    Download mediationmodels.doc


On-line Sobel Test Calculator


        (see also Preacher & Hayes, 2004 and the accompanying syntax (Preacher-BRMIC-2004.zip) and data (Figure2data.sav)

        (an updated bootstrapping approach: syntax; website & local mirror)
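For quick checks away from the browser, the first-order Sobel z is short enough to compute directly. A minimal sketch (the path coefficients and standard errors below are made-up illustrative values, not from any of the cited papers):

```python
import math

def sobel_z(a, se_a, b, se_b):
    # First-order Sobel test of the indirect effect a*b:
    # z = a*b / sqrt(b^2 * se_a^2 + a^2 * se_b^2)
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical X->M path (a) and M->Y path (b) with their SEs
z = sobel_z(a=0.5, se_a=0.1, b=0.4, se_b=0.1)
print(round(z, 2))  # → 3.12
```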


Paul Jose's Excel-Based Moderation and Mediation Plot Generators and Sobel Calculator


    Download local mirror of ModGraph.xls

    Download local mirror of MedGraph.xls


Jeremy Dawson's Excel-Based Moderation (2-way and 3-way interaction) Plot Generators


    Download instructions

    Download local mirror of 2-waybinary.xls

    Download local mirror of 2-waystandardised.xls

    Download local mirror of 2-wayunstandardised.xls

    Download local mirror of 3-waystandardised.xls

    Download local mirror of 3-wayunstandardised.xls


Jason Newsom's SPSS Macros


    Download instructions

    Download local mirror of simple1.sps

    Download local mirror of simple2.sps


Dirk Enzmann's Tools


        Local mirror for archival purposes

        Keywords: CI and p's for 2 independent betas; centering variables for interactions in GLM; macro for writing out a COV matrix; macro for creating dummy variables; Excel template for plotting interactions of a regression equation with an interaction term; executable for computing the reliability of a difference score; macro for computing a biserial correlation; executable for calculating the p, 95%-CI, and Fisher's Z for r; macro for computing tetrachoric correlations; program for computing tetrachoric correlations


Excel-Based Calculator for the Clogg and Freedman-Schatzkin Tests

    Download Calculator


Return to top


Conceptual Resources for Individual Differences Analyses


    Underwood, B.J. (1975). Individual differences as a crucible in theory construction. American Psychologist, 30, 128-134.


    Kosslyn, S.M., Cacioppo, J.T., Davidson, R.J., Hugdahl, K., Lovallo, W.R., Spiegel, D., & Rose, R. (2002). Bridging psychology and biology: The analysis of individuals in groups. American Psychologist, 57, 341-351.


...and an application, via ANCOVA, to improving sensitivity


Return to top


Psychometrics for Affective Neuroscientists

General Resources

    Nunnally & Bernstein's Psychometric Theory, 3rd ed.


    Tomarken, A.J. (1995). A psychometric perspective on psychophysiological measures. Psychological Assessment, 7, 387-395.


    Tomarken, A. J., Davidson, R. J., Wheeler, R. E., & Kinney, L. (1992). Psychometric properties of resting anterior EEG asymmetry: Temporal stability and internal consistency. Psychophysiology, 29, 576-592. {application to resting EEG data}


Web-based Expertise

    G. David Garson's Reliability Page

    John Uebersax's ICC Page

    Robert Yaffee's ICC Page


Internet-based Calculator

    ICC Calculator


Construct Validity

    Some lecture notes


Correcting Measures of Association for Attenuation Caused by Imperfect Reliability

    Charles, 2005, Psychological Methods

    deShon, 1998, Psychological Methods {application to SEM}

    Schmidt & Hunter, 1996

    Schmidt & Hunter, 1999, Intelligence (and Borsboom & Mellenbergh's 2002 Comment)


G Theory

    Di Nocera et al., 2001, Psychophysiology


Internal Consistency

    Schmidt, F.L., Le, H. & Ilies, R. (2003) {discuss metrics "beyond" Cronbach's alpha for more fully characterizing different sources of variance}

    Chong Ho Yu's Notes


Intra-class Correlations

    Muller & Buttner (1994) (cf. Vargha's comment) {contains a decision-making tree for picking the appropriate ICC}

    McGraw & Wong, 1996, Psychological Methods [erratum] {drawing inferences about ICCs}

    Shrout & Fleiss, 1979, Psychological Bulletin {the seminal introduction}
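As a concrete illustration of the simplest of the Shrout & Fleiss (1979) coefficients, ICC(1,1) can be computed directly from the one-way ANOVA mean squares. A minimal numpy sketch (the ratings matrix is a made-up example):

```python
import numpy as np

def icc_1_1(ratings):
    # Shrout & Fleiss (1979) ICC(1,1): one-way random effects, single rater.
    # ratings: n_targets x k_raters
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    ms_between = k * ((row_means - ratings.mean())**2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None])**2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Two raters in perfect agreement yield an ICC of 1
print(icc_1_1(np.array([[1., 1.], [2., 2.], [3., 3.]])))  # → 1.0
```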


Item Response Theory

    Reise et al., 2005 {general introduction}


Measurement Error

    Schmidt & Hunter, 1996 {a wonderful general introduction for non-experts}



    Deanna Barch's notes   

    Hopkins, 2005

    Shrout, 1998, SiM {general introduction}

    Michael Smithson's notes

    Thompson & Vacha-Haase, 2000, EPM


Reliability of Difference (Change, Growth, Gain) Scores

    Dirk Enzmann's code for computing the reliability of a difference score (cf. Zimmerman & Williams, 1982): reldiff.exe
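The classical formula behind such calculators can also be sketched in a few lines (cf. Zimmerman & Williams, 1982); the reliabilities and SDs below are made-up values for illustration:

```python
def diff_score_reliability(r11, r22, r12, sd1, sd2):
    # Classical reliability of the difference score X1 - X2:
    # (sd1^2*r11 + sd2^2*r22 - 2*sd1*sd2*r12) / (sd1^2 + sd2^2 - 2*sd1*sd2*r12)
    num = sd1**2 * r11 + sd2**2 * r22 - 2 * sd1 * sd2 * r12
    den = sd1**2 + sd2**2 - 2 * sd1 * sd2 * r12
    return num / den

# With equal variances this reduces to (mean reliability - r12) / (1 - r12):
# reliable components (.8) minus a substantial intercorrelation (.5)
print(round(diff_score_reliability(.8, .8, .5, 1, 1), 3))  # → 0.6
```

Note how the difference score is less reliable (.6) than either component (.8) whenever the two measures are positively correlated.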


Test-Retest Stability

NB: Simple Pearson correlations are appropriate when the main question is the stability of individual differences irrespective of mean differences across assessments, as when habituation effects are likely


    Allen, J.J.B., Urry, H.L., Hitt, S.K., & Coan, J.A. (2004) {application to resting EEG data}



Return to top 


Within-Subjects (Repeated Measures) Error Bars & Confidence Intervals

    Belia et al., 2005

    Blouin & Riopelle, 2005

    Cumming, G. & Finch, S. (2005) {Drawing correct inferences from error bars}

    Fidler et al., 2001

    Masson, M.E.J. & Loftus, G.R. (2003). Canadian Journal of Experimental Psychology {The definitive source}


    Christian Schunn's perspective {Drawing correct inferences from error bars} and an Excel-based calculator
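The subject-mean normalization used by several of these approaches (remove each subject's mean, restore the grand mean, then take ordinary per-condition SEs) is easy to sketch; this is a bare-bones version with no correction for the number of conditions:

```python
import numpy as np

def within_subject_se(data):
    # data: n_subjects x k_conditions.
    # Remove each subject's mean, add back the grand mean, then take
    # ordinary per-condition standard errors of the normalized scores.
    norm = data - data.mean(axis=1, keepdims=True) + data.mean()
    return norm.std(axis=0, ddof=1) / np.sqrt(data.shape[0])

# Subjects differ wildly in baseline but show identical condition effects,
# so the within-subject SEs are zero even though between-subject SEs are huge
print(within_subject_se(np.array([[1., 2.], [11., 12.], [21., 22.]])))
```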


Return to top



Data Diagnosis / Examining Parametric GLM Assumptions


    A Short Summary of Tips


    Kruskal on "Wild Observations" {underscores both the importance of keeping careful notes at the time the data were collected and the potential utility of non-parametric methods}...see also Anscombe and Guttman, 1960 (and Kruskal et al.'s commentary) and Beckman and Cook, 1983


    Remember, Influence = Leverage x Discrepancy

        Leverage = distance of a case from the centroid of the swarm; related to Mahalanobis D: D = (N-1)(L - 1/N) or L = D/(N-1) + 1/N

        Discrepancy = degree to which a case lies off the GLM or HLM fit-line
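The leverage-Mahalanobis relation above is easy to verify numerically (here D is the squared Mahalanobis distance of a case from the centroid of the predictors). A minimal sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
X = rng.normal(size=(N, 2))              # two hypothetical predictors

# Leverage = diagonal of the hat matrix (design matrix includes the intercept)
Xd = np.column_stack([np.ones(N), X])
H = Xd @ np.linalg.solve(Xd.T @ Xd, Xd.T)
L = np.diag(H)

# Squared Mahalanobis distance of each case from the centroid
Xc = X - X.mean(axis=0)
D2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(np.cov(X, rowvar=False)), Xc)

# The identity from the text: D = (N-1)(L - 1/N)
assert np.allclose(D2, (N - 1) * (L - 1/N))
```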


   Rules of Thumb for Cases with Undue Influence

Cook's D > 1

absolute values of DFBETA > 2/(sqrt(N))

Mahalanobis D with p < .001, evaluated on the Chi-Square Distribution with df equal to the number of variables

Absolute value of the standardized residual >3.3 (Tabachnick & Fidell, 2001)

Visually inspect the scatter plot formed by the residuals and the leverage for outliers.
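A minimal numpy sketch of one of these rules of thumb: Cook's D computed from scratch for a small simulated regression with one planted outlier (the variable names and data are mine, for illustration only):

```python
import numpy as np

def cooks_d(X, y):
    # Cook's D for each case of an OLS fit; X must include the intercept column
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)
    h = np.diag(H)                        # leverage
    resid = y - H @ y
    s2 = resid @ resid / (n - p)          # residual variance
    return resid**2 * h / (p * s2 * (1 - h)**2)

rng = np.random.default_rng(1)
n = 40
x = rng.normal(size=n)
y = 2 * x + rng.normal(scale=0.5, size=n)
y[0] += 8                                 # plant a gross outlier
D = cooks_d(np.column_stack([np.ones(n), x]), y)
print(D.argmax())  # → 0: the planted outlier has by far the largest D
```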

   Examining the Assumptions of Normality, Linearity, Zero Mean, and Homoscedasticity of the Residuals

Create a scatter plot of the residuals (y-axis) against the predicted values of the DV (x-axis) (see also this page)

Examine residuals for each assumption, as well as possible outliers (see above)

Homoscedasticity Rule of Thumb: heteroscedasticity is a concern when the SD of the residuals around the predicted values is more than 3x greater at the widest spread than at the narrowest (Fox, 1991 [which also provides some formal tests, pp. 64-66])


NB: There are no distributional assumptions about the IVs, other than those concerning their relationship with the DV. However, a prediction equation often is enhanced if the IVs are normally distributed, primarily because linearity between the IV and DV is enhanced (Tabachnick & Fidell, 2001, p. 119). It is, however, assumed that continuous IVs are not afflicted with outliers.
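By construction, OLS residuals have exactly zero mean and zero correlation with the fitted values, so any visible trend or fan shape in the residual-versus-predicted plot signals a violated assumption rather than an artifact of the fitting. A minimal sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 1 + 0.5 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
resid = y - fitted

# Zero mean and zero correlation with fitted values hold by construction;
# a scatter plot of resid (y-axis) against fitted (x-axis) is what reveals
# non-linearity or heteroscedasticity
assert abs(resid.mean()) < 1e-10
assert abs(np.corrcoef(fitted, resid)[0, 1]) < 1e-8
```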


    Note: Combined with knowledge of sample size, these resources can help determine whether the use of nonparametric tests (see below) is warranted (cf. Riniolo & Porges, 2000)



    The Cohen's textbook and the revision

    Fox's Regression Diagnostics

    Stevens's textbook [good description of transformations, residual plots, rules of thumb for leverage/Cook's D, rules of thumb for detection of non-normality, rules of thumb for variance-cell frequency interactions, rules of thumb for diagnosing and dealing with multicollinearity]

    Tabachnick & Fidell's textbook (see esp. Chs. 4, 5 and 9.3) [describes different diagnostics and treatments of outliers/leverage/influence; describes how you might describe this for publication]

    Weisberg's textbook (Chapters 1, 7-9)



    Download a MS Powerpoint presentation describing different kinds of residuals, leverage, etc.


Web-based Resources

    Alex Yu's page


Published Reports

    Bryk & Raudenbush, Psychological Bulletin, 1985 {heterogeneity of variances}

    Chatfield, JRSSA, 1985 {exploratory data analysis; cf. Tukey's books}

    Conover et al., Technometrics, 1981 {comparison of homogeneity of variance tests}

    Grubbs, Technometrics, 1969 {outliers test}

    Lix et al., RER, 1996 {quantitative review of alternatives to conventional F test}

    Mallows, TAS, 1979 {exploratory data analysis}

    Zhang, Luo & Nichols, HBM, 2006 {application of EDA/NP to neuroimaging data}


Levene's Test for Dependent Student's t Test

    See here


Multivariate and Bivariate Normality (see also Assessing the Presence of Groups in Bivariate Relations)


        D'Agostino et al., 1990

        DeCarlo, 1997

        Koziol, 1986

        Landry & Lepage, 1992

        Mecklin & Mundfrom, 2003

        Mecklin & Mundfrom, 2004


        Gnanadesikan's book

    SPSS Macros

        Lawrence DeCarlo's macro for univariate and multivariate skew and kurtosis

        Lawrence DeCarlo's macro for Mardia's multivariate skew and kurtosis

Return to top


Data Transformations, Difference Scores, Proportion of Control Scores & Control-Residualized Scores


Data Transformations

    Discussed extensively in Cohen et al. (suggest the possibility of using ARC to estimate an optimal λ)


    See Stevens's Graphical Rules of Thumb


    Logarithmic transformations {underscores that natural logs are useful because the SD of the logged data is approximately equal to the coefficient of variation (SD/mean) of the raw data ... for constructing CIs on logged data, see Zhou & Gao's report; for computing meta-analytic estimates of the SD of a logged measure, see Quan and Zhang's report}
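The SD-of-logs ≈ CV relation noted above is easy to check by simulation; a minimal sketch with made-up lognormal data:

```python
import numpy as np

rng = np.random.default_rng(3)
raw = rng.lognormal(mean=2.0, sigma=0.1, size=100_000)   # skewed, positive data

cv = raw.std() / raw.mean()        # coefficient of variation of the raw data
sd_log = np.log(raw).std()         # SD after a natural-log transform

print(round(cv, 2), round(sd_log, 2))  # → 0.1 0.1
```

The approximation is excellent for mildly variable data and degrades as the CV grows.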


Z-Scores, Difference Scores, and Proportion of Control Scores

    Blumenthal et al., 2004

    Sutton et al., 1997


Control-Residualized Scores

    Gross, Sutton & Ketelaar, 1998


Controlling for "Initial" Values (i.e., Uncorrelating Treatment-Post from Control-Pre Values)

    Benjamin, 1967

    Jamieson, 1999


Misc. Transforms and Comparisons of Transforms with an Emphasis on RT Distributions (and Dealing with Speed-Accuracy Tradeoffs)

    Bush et al., 1993

    Gasser et al., 1982 {transformations applied to EEG data}

    Ratcliff, 1993  {cf. speed-accuracy tradeoffs}

    Salthouse & Hedden, 2002 {describes several more sophisticated means of dealing with speed-accuracy tradeoffs}

    Ulrich & Miller, 1994 {cf. speed-accuracy tradeoffs}


Rank Transformations

    Rank transformations (RTs) represent a potentially powerful, extremely practical means of satisfying the prerequisites for GLM analyses. However, a number of issues and concerns have cropped up since the landmark publication of Conover and Iman's 1981 review paper advocating the utility of RTs, and rank transformations should not be blindly applied. In particular, concerns have arisen in the context of repeated-measures designs (e.g., rank transformation can reduce or alter the correlation between repeated measures, leading to a loss of power) and factorial designs (i.e., as a non-linear transformation, RT can fundamentally alter tests of factorial interactions). Consequently, several important modified RT approaches (e.g., the aligned-rank transform) have more recently been proposed. Regardless of the transform chosen, descriptive statistics for the raw and transformed data should be carefully compared to one another and to the assumptions of the applicable inferential test(s). Transformation does not guarantee improvement (e.g., RT will not invariably suppress outliers).
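The factorial-design caveat is easy to demonstrate: because ranking is monotone but non-linear, data with exactly zero raw interaction can show a non-zero interaction after rank transformation. A minimal sketch with hand-picked cell values:

```python
import numpy as np

# Four cells of a 2x2 factorial, 3 observations each; raw cell means are
# 0, 1, 10, 11, so the raw interaction contrast is exactly zero.
g11 = np.array([-1.2, 0.0, 1.2])
g12 = np.array([0.4, 1.0, 1.6])
g21 = np.array([7.0, 10.0, 13.0])
g22 = np.array([10.4, 11.0, 11.6])

data = np.concatenate([g11, g12, g21, g22])
ranks = data.argsort().argsort() + 1.0   # simple rank transform (no ties here)

def interaction(v):
    m = v.reshape(4, 3).mean(axis=1)     # cell means in order g11, g12, g21, g22
    return (m[3] - m[2]) - (m[1] - m[0])

# Ranking created an interaction (-2/3) where the raw data had none (0)
print(interaction(data), interaction(ranks))
```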


       Conover and Iman's 1981 review {rank transformations as a bridge between parametric and nonparametric tests}

       Sawilowsky, RER, 1990 {review of NP techniques, including rank-transforms, for testing interactions}



       Akritas, JASA, 1990 {asymptotic rank-transform applied to two-way ANOVA}

       Akritas, JASA, 1991 {critical investigation of the rank-transform applied to repeated-measures}

       Akritas et al., JASA, 1997 {rank-transform applied to unbalanced factorial ANOVA}

       Beasley, JEBS, 2000 {NP tests for interactions in mixed-model/split-plot factorial designs}

       Beasley & Zumbo, CSDA, 2003 {application of modified ranks to mixed-model/split-plot designs}

       Brunner & Dette, JASA, 1992 {modified rank-transform for mixed-model/split-plot factorial designs}

       Conover & Iman, Biometrics, 1982 {rank-transform applied to ANCOVA}

       Gao & Song, BMCI, 2005 {nice overview of rank-transforms and aligned-rank transforms for factorial designs}

       Harwell & Serlin, Psychological Bulletin, 1988 {power of various NP (including rank-transform, "MPF model") approaches to ANCOVA}

       Headrick & Rotou, CSDA, 2001 {application of rank-transform to multiple regression}

       Hora & Conover, JASA, 1984 {rank-transform applied to two-way ANOVA}

       Iman & Conover, Technometrics, 1979 {rank-transform applied to regression} and corrigendum

       Kepner & Wackerly, JASA, 1996 {rank-transform applied to repeated-measures ANOVA}

       Lei, Holt & Beasley, JMASM, 2004 {aligned-rank transform applied to interaction tests in mixed-model designs tested using modified ANOVA/MANOVA}

       Payton et al, JEE, 2006 {comparison of aligned-ranks, rank-transform, and power-family transforms for testing interactions}

       Sawilowsky, Blair & Higgins, JES, 1989 {critical study of the power of the rank-transformed ANOVA}

       Thompson & Ammann, JASA, 1990 {rank-transform and aligned-ranks applied to repeated-measures}

       Thompson, JASA, 1991 {ranks applied to repeated-measures designs}

       Toothaker & De Newman, JEBS, 1994 {power of various NP (including rank-transform and aligned-ranks) approaches to ANOVA}



        Higgins' text (2004) on aligned rank-transforms   


    Web-Based Resources

       Will Hopkins site I and II



            Q: I have ordinal variables and thus used Spearman's rho. How do I use these ordinal correlations in SPSS for partial correlation, regression, and other procedures?

            A: You got the output by selecting Statistics, Correlate, then checking Spearman's rho as the correlation type. This invoked the NONPAR CORR procedure, but the dialog boxes (as of ver. 7.5) did not provide for matrix output. Re-run the Spearman's correlations from the syntax window, which is invoked with File, New, Syntax. Enter syntax such as the following, then run it:

            CORR VARIABLES= horse engine cylinder

            The correlation matrix will now be in the SPSS Data Editor, where you change the ROWTYPE_ variable values to CORR instead of RHO. Optionally, you may want to select File, Save As at this point to save your matrix. Then select Statistics, Correlate, Partial Correlation (or another procedure) and SPSS will use the Spearman's matrix as input. Alternatively, in the syntax window use MATRIX=IN(*) in PARTIAL CORR or another procedure which accepts a correlation matrix as input. (http://www2.chass.ncsu.edu/garson/pa765/correl.htm#ordinal2)


Reliability and Criterion Validity of Difference Scores

    Zimmerman et al., 1982

    Zimmerman et al., 1982

    Zimmerman et al., 1982

    Zimmerman et al., 1993

    Zimmerman et al., 1998



    Cook and Weisberg's Arc software and book {permits computation of Box-Cox lambda and Yeo-Johnson transforms} {for more info on a quick graphical method for estimating B-C lambda, see this report; note that the Yeo-Johnson family is needed when values less than or equal to 0 are encountered, because the Box-Cox transform requires strictly positive data}


Return to top


Building Objective Psychometric Instruments

    Clark, L.A. & Watson, D. (1995) {Extremely thorough, readable introduction} 

    Floyd, F.J. & Widaman, K.F. (1995)

    Gorsuch, R.L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

    Guadagnoli & Velicer, 1988

    Smith, G.T. & McCarthy, D.M. (1995) 

    Smith, G.T., Fischer, S. & Fister, S.M. (2003) {Describe modified key criterion approach}


Return to top


Using SPSS

Web-Based Expertise

    Raynald's Page and Links and Book

    Dirk Enzmann's page of tools (local mirror for archival purposes)

        Keywords: CI and p's for 2 independent betas; centering variables for interactions in GLM; macro for writing out a COV matrix; macro for creating dummy variables; Excel template for plotting interactions of a regression equation with an interaction term; executable for computing the reliability of a difference score; macro for computing a biserial correlation; executable for calculating the p, 95%-CI, and Fisher's Z for r; macro for computing tetrachoric correlations; program for computing tetrachoric correlations







    USC Macros


Scripts (see also Raynald's Page)

    Create as many .sav files as there are .txt files in a directory and then (optionally) merge them: CreateAll.sbs

    {an alternate technique is to use the DOS command: copy *.txt merged.txt}{or the UNIX command: cat FILES > OUTPUT where FILES is a list with(out) wildcards and OUTPUT is the target file name}


Example of line continuation:

strCmd = "GET DATA /TYPE = TXT /FILE = '" & strPath & strFname & "' /DELCASE = LINE /DELIMITERS = '\t' /ARRANGEMENT = DELIMITED " _

& "/FIRSTCASE = 1 /IMPORTCASE = All /VARIABLES = rsf1 A25 rsf2 A25 rsf3 A25 rsf4 A25 rsf5 A25 rsf6 A25 rsf7 A25 rsf8 A25 rsf9 A25 rsf10 A25 rsf11 A25 rsf12 " _

& "A25 rsf13 A25 rsf14 A25 rsf15 A25 rsf16 A25 rsf17 A25 rsf18 A25 rsf19 A25 rsf20 A25 rsf21 A25 rsf22 A25 rsf23 A25 rsf24 A25 rsf25 A25 rsf26 A25 rsf27 A25 rsf28 " _

& "A25 rsf29 A25 rsf30 A25 rsf31 A25 rsf32 A25 rsf33 A25 rsf34 A25 rsf35 A25 rsf36 A25 rsf37 A25 rsf38 A25 rsf39 A25 rsf40 A25 rsf41 A25 rsf42 A25 rsf43 A25 rsf44 " _

& "A25 rsf45 A25 rsf46 A25 rsf47 A25 rsf48 A25 rsf49 A25 rsf50 A25 rsf51 A25 rsf52 A25 rsf53 A25 rsf54 A25 rsf55 A25 rsf56 A25 rsf57 A25 rsf58 A25 rsf59 A25 rsf60 " _

& "A25 rsf61 A25 rsf62 A25 rsf63 A25 rsf64 A25 rsf65 A25 rsf66 A25 rsf67 A25 rsf68 A25 rsf69 A25 rsf70 A25 rsf71 A25 rsf72 A25 rsf73 A25 rsf74 A25 rsf75 A25 rsf76 " _

& "A25 rsf77 A25 rsf78 A25 rsf79 A25 rsf80 A25 rsf81 A25 rsf82 A25 rsf83 A25 rsf84 A25 rsf85 A25 rsf86 A25 rsf87 A25 rsf88 A25 rsf89 A25 rsf90 A25 rsf91 A25 rsf92 " _

& "A25 rsf93 A25 rsf94 A25 rsf95 A25 rsf96 A25 rsf97 A25 rsf98 A25 rsf99 A25 rsf100 A25 rsf101 A25 rsf102 A25 rsf103 A25 rsf104 A25 rsf105 A25 rsf106 A25 rsf107 " _

& "A25 rsf108 A25 rsf109 A25 rsf110 A25 rsf111 A25 rsf112 A25 rsf113 A25 rsf114 A25 rsf115 A25 rsf116 A25 rsf117 A25 rsf118 A25 rsf119 A25 rsf120 A25 rsf121 A25 " _

& "rsf122 A25 rsf123 A25 rsf124 A25 rsf125 A25 rsf126 A25 rsf127 A25 rsf128 A25 rsf129 A25 ." & vbCr

strCmd = strCmd & "SAVE OUTFILE='" & strPath & Mid(strFname,1,InStr(strFname,".")-1) & ".sav'." & vbCr

strCmd = strCmd & "Execute."


Using MATLAB to Painlessly Create Dummy Variables

    Go there


Summary of Correspondence Concerning "Large" Files in SPSS

    Download it



Written by Alexis-Michel Mugabushaka of the University of Kassel. Takes as input a list of syntax files or a file containing a list of syntax files. The program runs each syntax file one by one and creates a separate output file for each. Note that you must include a GET FILE command in each .sps file, e.g.,

GET FILE='C:\Documents and Settings\SHACKMAN\Desktop\test.sav'.


Return to top


Hints for Testing Pairwise (Planned or Post Hoc) Differences in Within-Subjects/Mixed Models

<<under construction>>

    Download a description of modifying SPSS Syntax


    Read David Howell's textbook or visit his web site


Tips from SPSS Inc.

    Download readme.txt

    Download Syntax: rmpostl.sps, rmpostb.sps, and rmpostd.sps


Return to top


Testing for Significant Differences Between Correlations

Web-Based Expertise

    James Steiger, Vanderbilt


Independent Correlations

    Vassar Stats Calculator
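The standard approach for independent samples is the Fisher r-to-z test, which is short enough to sketch directly (the correlations and sample sizes below are made-up illustrative values):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    # Fisher r-to-z: z_r = atanh(r); the SE of the difference between two
    # independent transformed rs is sqrt(1/(n1-3) + 1/(n2-3))
    return (math.atanh(r1) - math.atanh(r2)) / math.sqrt(1/(n1 - 3) + 1/(n2 - 3))

# Hypothetical: r = .50 (n = 103) vs. r = .30 (n = 103)
print(round(fisher_z_test(0.5, 103, 0.3, 103), 2))  # → 1.7
```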



A Correlation and the Hypothesized Value of that Correlation (r vs. rho Hypothesis Test)

    Vassar Stats Calculator


Dependent Correlations



    Download SPSS Syntax


    Download Excel-Based Calculator


    See also, Meng, X.-L., Rosenthal, R., & Rubin, D. B. (1992) (and Andrew Hayes SPSS syntax)


Two Pairs of Dependent Correlations

    Raghunathan, T. E., Rosenthal, R., & Rubin, D. B. (1996)


    Steiger, J.H. (1980)


Nonparametric Correlations

    You could first convert tau or rho coefficients to approximations of r (as described in this report) before proceeding as usual.


    You could perform the usual computations on ranks, as described in this report.
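For the first option, the classic conversions (which assume underlying bivariate normality) are r ≈ sin(πτ/2) for Kendall's tau and r ≈ 2·sin(πρ/6) for Spearman's rho; a minimal sketch:

```python
import math

def r_from_tau(tau):
    # Greiner's relation: under bivariate normality, r = sin(pi * tau / 2)
    return math.sin(math.pi * tau / 2)

def r_from_rho(rho_s):
    # Pearson's relation for Spearman's rho under bivariate normality
    return 2 * math.sin(math.pi * rho_s / 6)

print(round(r_from_tau(0.5), 3))  # → 0.707
print(round(r_from_rho(0.5), 3))  # → 0.518
```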


Return to top


Distribution-Free Tests (Monte Carlo, Permutation, Etc.)

    <<under construction>>

but see Tom Nichols' website, the LORETA website, the NPStat website, and David Howell's Resampling website


Return to top


The Consequences of Unequal Cell Frequencies (a.k.a. "Unbalanced Designs")

    see a mirror of David Howell's page


Return to top


Testing for Significant Differences Between 2 Cronbach's Alphas

Published Reports

    Alsawalmeh & Feldt, 1999

    Alsawalmeh & Feldt, 2000

    Feldt, L. S. (1980).

    Feldt, L. S., & Ankenmann, R. D. (1998).

    Feldt, L. S., & Ankenmann, R. D. (1999).

    Feldt & Charter, 2003

    Feldt LS, Woodruff DJ, Salih FA. (1987).

    Hakstian, A.R. & Barchard, K.A. (2000).



    Andrew Hayes' SPSS syntax

Excel-Based Calculator for Testing the Difference Between Dependent Cronbach's Alphas

    Click here
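For reference, Cronbach's alpha itself is a one-liner from the item variances and the total-score variance; a minimal sketch (the demo scores are made up):

```python
import numpy as np

def cronbach_alpha(items):
    # items: n_respondents x k_items
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Three perfectly parallel items give alpha = 1
scores = np.tile(np.array([[1.], [2.], [3.], [4.]]), (1, 3))
print(round(cronbach_alpha(scores), 6))  # → 1.0
```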


Return to top


Introductory Material for Non-parametric Statistics (see also Distribution-Free Tests)

    Picking a nonparametric test (see also the section on Rank-Transforms)

        Picking an appropriate non-parametric test   


    Web-based References

        Angela Hebel's Powerpoint presentation



        Siegel and Castellan's text


        Higgins text


    Published Reports

        A report by Nijsse on appropriately testing Kendall's tau and Spearman's rho.

        A report showing how to convert from tau and rho to approximate values of Pearson's r. 

        Blair & Higgins, Psychological Bulletin, 1985 {power of the paired-samples Student's t vs. NP alternatives}

        Higgins, 2004 {comparison of the power and efficiency of common parametric and NP mean-difference tests}       

       Sawilowsky, RER, 1990 {review of NP techniques, including rank-transforms, for testing interactions}


Return to top


Why I Don't Use MS Excel for Stats

    Read this and this.


Return to top


The Visual Display of Quantitative Information

    Edward Tufte's web site (see also, Clay Helberg's Essay)


    Anscombe, 1973, The American Statistician

    Cleveland et al., 1982, Science {zooming out from a scatterplot causes observers to judge the correlation as larger}

    Cleveland, 1984, The American Statistician

    Cleveland, 1984b, The American Statistician

    Fienberg, 1979, The American Statistician

    Leong & Carlile, 1998, JNM {displaying spherical information}


    PeltierTech's Excel Tricks: How to Make a Broken Y-Axis in MS Excel


Return to top


Intra-Class Correlations

    <<see Psychometrics for Affective Neuroscientists>>


Return to top


Single Subject ("Case Study") Analyses

    Allen, 2002, Psychophysiology

    Crawford & Garthwaite, 2005

    Schretlen et al., 2003, JINS


Return to top


Effect Sizes

Web Resources

    Lee Becker's notes I and II






Published Papers 

   Abelson, 1985 {demonstrates that effects that are small in R^2 terms can make a big cumulative difference}

   Dunlap et al. 2004 {Application to Multiple Regression}

   Kraemer & Kupfer, 2005

   Levine & Hullett, 2002 {eta-squared vs. partial-eta-squared}

   Olejnik & Algina, 2003

   Pierce, Block & Aguinis, 2004 {eta-squared vs. partial-eta-squared}

   Rosenthal & Rubin, 2003

   Rosnow and Rosenthal, 1996

   Cohen et al., 1999 {"POMP"}



Return to top      


Structural Equation Modeling (SEM)

Web-based Expertise

    Ed Rigdon's collection of links and FAQs

    Graham et al., 2003, Structural Equation Modeling


Return to top


Teaching (and Learning) Univariate Parametric Statistics

Web-based Expertise (see this page also)

    Betty Jung's links

    British Medical Journal Statistics Notes

    Gerard E. Dallal's Little Handbook of Statistical Practice

    G. David Garson's StatsNotes

    Clay Helberg's Essay

    Paul Johnson's Software

    Don Macnaughton's collection of links and papers

    B. Weaver's collection of links

    An Introduction to ROC Analyses

    About.Com's Tips I and II and III and IV

    UC-Davis PostDoc Tips

    David Saville's flowchart showing relations between MSE, SD, SED, and LSD

    Frank Schmidt on NHST   

    Kris Preacher's Practical Stats Notes


Radiology Primers


    Graphing Data

    Interrater Reliability

    Nonparametric Statistics

    Receiver Operator Characteristics (ROC)

    Risk and Odds Ratios

    Sample Size Calculations I and II

    Testing Proportions



    Art Glenberg's XXXX

    David Howell's XXXX, 5th ed.

    Gary McClelland's Seeing Statistics


Statistics Packages




Return to top



Bayesian Statistics

    Goodman, 2005, CT: Bayes I

    Louis, 2005, CT: Bayes II

    Berry, 2005, CT: Bayes III


Cluster Analysis

    Van der Kloot et al., 2005


Coefficient of Variation Inferences

    Tian, 2005, SiM


Computing the Average Probability of Replication (now required for submission to Psychological Science)


        Killeen, 2005, Psychological Science

        Cumming, 2005, Psychological Science

        Doros & Geier, 2005, Psychological Science

        MacDonald, 2005, Psychological Science


    Excel Calculators

        Killeen's calculator

        Cumming's calculator


see also Cohen, 1994
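One widely cited formulation of Killeen's p_rep (the probability that an exact replication yields an effect of the same sign) is Φ(z_obs/√2); the sketch below assumes that formulation, so consult Killeen (2005) and the commentaries above before relying on it:

```python
import math

def p_rep(z_obs):
    # Phi(z / sqrt(2)), where Phi is the standard normal CDF:
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))), so Phi(z/sqrt(2)) = 0.5*(1 + erf(z/2))
    return 0.5 * (1 + math.erf(z_obs / 2))

# A two-tailed p of .05 (z = 1.96) maps to a p_rep of roughly .92
print(round(p_rep(1.96), 2))
```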


Contrast Coefficients of Orthogonal Polynomials: 3 to 75 groups


    Fisher & Yates, 1963, Table XXIII (available at the Digital Fisher Archive)

Note on the organization of Table XXIII

Coefficients are arranged vertically from left (linear) to right (quadratic, etc.)

For n = 9 and above, only the right-hand half of each set of coefficients is provided (including the mid-point for odd numbers of groups).
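When the table isn't handy, contrast codes proportional to Fisher & Yates's can be generated by orthogonalizing the powers of the group index. A minimal sketch (signs of whole columns may come out flipped, which does not affect the contrasts):

```python
import numpy as np

def poly_contrasts(k):
    # Orthonormal polynomial contrast codes for k equally spaced groups,
    # built by QR (Gram-Schmidt) orthogonalization of x^0, x^1, ..., x^(k-1)
    x = np.arange(1, k + 1, dtype=float)
    X = np.vander(x, k, increasing=True)
    Q, _ = np.linalg.qr(X)
    return Q[:, 1:]                      # drop the constant column

C = poly_contrasts(5)
# Rescaled, the linear column is the familiar -2, -1, 0, 1, 2
print(np.round(C[:, 0] / C[4, 0] * 2).astype(int))
```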


Contrast Codes and Dummy Codes Scripts

    Raynald's SPSS Code: I and II

    An alternative is to run a GLM with the variable (e.g., ID) you wish to code as k-1 codes...and have SPSS print the contrast codes to the output file...then use those in, e.g., regression analyses


Controlling Familywise Error in Massively Univariate (Neuroimaging) Datasets

   Garcia, 2004 {critique of Bonferroni}

   Miller et al., 2001 {FDR}

   Nichols & Hayasaka

   Nichols & Holmes


Note: p(FW Error) = 1 - (1-alpha)^c where alpha = PW alpha, and c = number of independent tests

Note: The number of positive tests expected by chance is less than or equal to c * alpha
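Both notes in one short sketch, together with the Bonferroni and Šidák per-test alphas they motivate:

```python
alpha, c = 0.05, 10                     # nominal per-test alpha, number of tests

# Running c independent tests each at alpha inflates the familywise error rate:
p_fw = 1 - (1 - alpha)**c
print(round(p_fw, 3))                   # → 0.401

# ...and the expected number of false positives is at most c * alpha
print(round(c * alpha, 3))              # → 0.5

# Per-test alphas that hold the familywise rate at .05:
bonferroni = alpha / c                  # 0.005
sidak = 1 - (1 - alpha)**(1 / c)        # ≈ 0.00512 (slightly less conservative)
```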


Controlling Familywise Error in More Normal Datasets (see also the SISA calculator), Such as High-Density EEG

   Garcia {describes Bonferroni, provides SISA reference, and describes various FDR methods (see also here)}

   Ventura et al.'s FDR Approach: article, matlab code, and readme

Correlations, Flavors of

   Keith Calkins' site

   Dennis Roberts' take

   Wikipedia's take {see especially this image of different r coefficients}

   Correlation simulator and downloadable

   A related take


Dichotomizing (e.g., Mean/Median Splits) and the Extreme Groups Approach

    Cohen, 1983, Applied Psychological Measurement

    Irwin & McClelland, J. Marketing Research

    MacCallum et al., 2002, Psychological Methods

    Preacher et al., 2005, Psychological Methods
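The core cost is easy to simulate: median-splitting a normally distributed predictor attenuates its correlation with the outcome by a factor of about .80 (i.e., √(2/π); cf. Cohen, 1983). A sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
x = rng.normal(size=n)
y = 0.6 * x + np.sqrt(1 - 0.6**2) * rng.normal(size=n)   # population r = .6

x_median_split = (x > np.median(x)).astype(float)        # dichotomize x
r_full = np.corrcoef(x, y)[0, 1]
r_split = np.corrcoef(x_median_split, y)[0, 1]
print(round(r_split / r_full, 2))  # → 0.8: roughly sqrt(2/pi) of the r survives
```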

Ethics: American Statistical Association Guidelines for Statisticians



Fitting Distributions

    Cousineau et al., 2004   


Missing Data

    Schafer & Graham, 2002, Psychological Methods


Ordinal Regression (e.g., Log Regression)

    Ananth et al., 1997, IJE


MS Excel Tricks

    Excel Oddities

    PeltierTech's Excel Tricks: How to Make a Broken Y-Axis in MS Excel


Philosophy of Science: Causes and Effects

    Holland, 1986, JASA


Probing GLM and HLM Interactions

    Kris Preacher's Introduction and Tools (see also Aiken & West, 1991)

    Provalis Research's Italassi Interaction and Moderator|Mediator Viewer (Freeware) 


Random Sampling of Stimuli

    Wickens & Keppel, 1983


Running a Repeated-Measures GLM as a Between-Subjects ANOVA or a Linear Regression

    An excel-based overview   


    Tabachnick & Fiddell's Computer-Assisted Research Design and Analysis

          Repeated-Measures ANOVA computed as a regression

          Mixed-Model ANOVA computed as a regression

          Repeated-Measures ANCOVA with Changing Covariates
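The equivalence can be sketched in a few lines: regressing the stacked scores on subject and condition dummy codes and comparing full vs. reduced models recovers exactly the repeated-measures F for condition. A minimal numpy sketch (no sphericity corrections; the demo data are made up):

```python
import numpy as np

def rm_anova_as_regression(data):
    # data: n_subjects x k_conditions; returns (F, df1, df2) for condition
    n, k = data.shape
    y = data.ravel()
    subj = np.repeat(np.arange(n), k)
    cond = np.tile(np.arange(k), n)

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta
        return e @ e

    ones = np.ones((n * k, 1))
    S = (subj[:, None] == np.arange(1, n)).astype(float)  # n-1 subject dummies
    C = (cond[:, None] == np.arange(1, k)).astype(float)  # k-1 condition dummies

    df1, df2 = k - 1, (n - 1) * (k - 1)
    F = ((rss(np.hstack([ones, S])) - rss(np.hstack([ones, S, C]))) / df1) \
        / (rss(np.hstack([ones, S, C])) / df2)
    return F, df1, df2

# Hypothetical data: 4 subjects x 3 conditions
scores = np.array([[1., 2., 3.], [2., 3., 5.], [3., 5., 6.], [4., 4., 7.]])
F, df1, df2 = rm_anova_as_regression(scores)
print(round(F, 2), df1, df2)  # → 25.36 2 6
```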


Trimmed Estimators of Central Tendency

    Leonowicz et al., 2005


What Statistical Reviewers Like and Dislike

    Martin Bland's essay



Return to top


Return to Shackman's Homepage or Laboratory for Affective Neuroscience or Waisman Laboratory for Brain Imaging and Behavior