This webpage contains excerpts from the SPM usergroup's burster. SPM users around the globe run into the same problems as you do; here are their questions, with answers from the experts! You can search the SPM burster archives (at the official SPM website) for a particular keyword, or peruse this webpage, which has the same items, only organized topically. You will find that this webpage adheres to the Socratic method, but you don't have to do all that walking to and fro. There are several general categories:
Click on an item in the outline below to go to a particular topic, or just dig in and start reading! (Items without hyperlinks in the outline are still pending...) Note that most of the topics (but not the individual discussions) are repeated in the several major categories, so if you want to find out about e.g. Contrasts, you can look in the "Menu Items" for how to enter a contrast, in "Analysis Items" for what contrast may be appropriate to a particular study, and in "Concepts" for a more detailed look at how contrasts work under the hood.
Many of the answers/responses have been edited slightly for brevity's sake. If you feel there was too much editing, you can easily search the archive and obtain the full text, since the responder and date are included after most of the entries.
Return to LfAN SPM resources page.
II. Model Items (Creating a design matrix)
fMRI design with >16 sessions: picking scans
> We are attempting an fmri analysis, using a design matrix with 16
> sessions. When we run the estimation model, instead of asking us which
> scans are required for each individual session, it asks which scans are
> required for all sessions. Choosing all scans at the same time does not
> work. When we use 12 sessions or lower, we are asked which scans are
> required for each session (1 through 12), and the analysis runs
> correctly. Does anyone know why we can do this with 12 sessions and not
> 16?
This is my fault. The idea was that some designs could be viewed as a
series of short sessions (e.g. burstmode or sparse sampling). In this
context it would be easier to select all sessions at once.
The limit is 16 sessions. To change this, say to 32, change line 236
in spm_fmri_spm_ui.m from
if nsess < 16
to
if nsess < 32
% get filenames
%
nsess = length(xX.iB);
nscan = zeros(1,nsess);
for i = 1:nsess
nscan(i) = length(find(xX.X(:,xX.iB(i))));
end
P = [];
****if nsess < 16 ****
for i = 1:nsess
str = sprintf('select scans for session %0.0f',i);
if isempty(BCH)
q = spm_get(Inf,'.img',str);
else
q = sf_bch_get_q(i);
end %
P = strvcat(P,q);
end
else
[Who/when ???]
Y is the fitted response.
y is the adjusted data (adjusted for confounds and bandpass filtering).
Raw data has to be read from the Y.mad file.
For the plot option "fitted and adjusted responses", Y and y
refer to the whole timeseries.
For the plot option "event- or epoch-related responses", Y and
y refer to peristimulus time (effectively "averaging across
trials").
The PSTH option is just a way of binning the y values into separate
peristimulus time bins, allowing calculation of mean and standard errors
within each time bin. There is no easy way of collapsing y across
trial types (e.g. plotting a contrast of event-related data), other than
collapsing across trial types in the original model (i.e. having one
column only).
> Ran into a problem when I tried to plot fitted and adjusted responses
> against time a couple of times. Despite high significance, SPM99
> indicated that no raw data had been saved at that voxel and gave me the
> option of moving to the closest point which had data; when I chose this
> option, it went to another cluster entirely. In one case, this was for
> the local maximum of the most significant cluster (better than 0.000
> uncorrected for the voxel and for the cluster). The entire area was
> grayed in on the glass brain. The default for the statistics was set
> at 0.001, so it seemed like there should not have been a problem. Am I
> doing something wrong or is this a bug? For which voxels is SPM99
> supposed to be saving raw data?
This apparent paradox is due to the fact that the p value for a
particular T-contrast may be more significant than the default
F-contrast used to decide whether to save data in Y.mad. All I can
suggest is that you reduce your default threshold further. Note that
you can still plot fitted responses and standard error for every voxel
(but not the actual residuals themselves unless the data are saved in
Y.mad). To do this simply decline the option to 'jump'.
[Karl Friston 3 Jul 2000]
Plot 'contrast of parameter estimates'
Q. I have some PET data I'm trying to interpret, but I'm not sure about what is plotted in the 'contrast of parameter estimates' when they are plotted for each condition for an effect of interest in SPM99.
A. This barplot shows the mean-corrected parameter estimates of all
effects of interest. The red lines are the standard errors of the
parameter estimates.
Q. This plot seems to represent the 'size' of the effect of interest at a given maximum; my question is how a negative value in this plot should be interpreted. Is it a 'deactivation'?
A. Because of the mean correction, the barplot shows the deviations of the parameter of interest estimates from their mean. Therefore a negative value does not necessarily mean that the parameter estimate is negative; it is just lower than the mean of the parameter of interest estimates. Note that the (non-mean-corrected) parameter estimates of a given voxel are stored in the workspace variable 'beta' when you plot them. By typing beta in your matlab window, you can display them.
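A tiny numeric illustration of this mean correction (invented numbers, not SPM output):

```python
import numpy as np

beta = np.array([3.0, 5.0, 1.0])   # hypothetical parameter estimates
bars = beta - beta.mean()          # what the bar plot displays
# bars is [0., 2., -2.]: the third bar is negative even though the
# underlying estimate beta[2] = 1.0 is positive.
```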
Q. Or, asked another way, what does the 0 effect size mean in these plots?
A. It means that this parameter estimate is equal to the mean of all
parameter of interest estimates. As a special case of only one parameter
of interest, it would mean that this parameter is zero.
I guess that the typical use of this plot is to easily assess the
relative sizes of the parameter estimates for a given voxel. You could
also use this plot to extract the vector of parameter estimates (and
other variables like the standard errors of the parameter estimates, the
fitted and the adjusted data stored in 'SE', 'Y' and 'y') from SPM99.
[Stefan Kiebel, 20 Jun 2000]
Q. Can anyone tell me what exactly is being plotted when I choose "contrast of parameter estimates" for my plot.
A. This plot shows one or more linear combinations of the parameter of interest estimates, where the linear combinations are contrasts. In the case when you specify 'effects of interest', there is one (mean-corrected) contrast for each parameter, such that each grey bar shows the relative height of each estimated parameter of interest. In the case that you specify one of your own contrasts, the single bar shows the estimated parameters of interest weighted by the contrast. In both cases, the red line denotes the standard error SE of the weighted parameter estimates. The range of the red line is [-SE SE]. If you'd like to read some informative matlab code, possibly the most exact description can be found in spm_graph.m, lines 231-251.
Q. How does this relate to the fitted response?
A. Let the general linear model used in the analysis be
Y = X * \beta + \epsilon
where Y are the functional observations, X is a design matrix, \beta the parameter vector and \epsilon the error of the model. Let b be the estimated parameters. The design matrix X can be subdivided into
X = [X_1 X_2], where X_1 denotes the covariates of interest and X_2 the covariates of no interest. Equally, b = [b_1 b_2]. Let c be the contrast(s) you choose for the plot, where c is a matrix with one contrast per column. Then c' * b is the height of the grey bar(s) plotted. Note that c is zero for all estimated parameters of no interest b_2. The fitted (and corrected for confounds) data is then given by X_1 * b_1.
To make it complete, the adjusted data (also corrected for confounds) is given by
X_1 * b_1 + R, where R are the residuals R = Y - X*b.
In other words, the relationship between the contrast of parameter estimates and the fitted response is given by the parameter estimates. In one case you weight the parameters of interest by a contrast vector, and in the other case you project the estimated parameters (of interest) back into the time domain.
[Stefan Kiebel, 21 Jun 2000]
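These relationships can be checked numerically. A small NumPy sketch (not SPM code; the design, data and contrast are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X1 = np.sin(np.linspace(0, 3, n))[:, None]   # covariate of interest
X2 = np.ones((n, 1))                         # confound (constant term)
X = np.hstack([X1, X2])

Y = X @ np.array([2.0, 5.0]) + 0.1 * rng.standard_normal(n)

b = np.linalg.lstsq(X, Y, rcond=None)[0]     # estimated parameters b = [b_1 b_2]
R = Y - X @ b                                # residuals
fitted = X1 @ b[:1]                          # X_1 * b_1, corrected for confounds
adjusted = fitted + R                        # X_1 * b_1 + R

c = np.array([1.0, 0.0])                     # contrast over the parameters
bar_height = c @ b                           # height of the plotted grey bar
```

Note that `adjusted` equals the data minus the fitted confounds, which is exactly the "corrected for confounds" interpretation above.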
> I want to plot parametric responses (time x condition effects) using the
> same scaling (e.g. from 1 to 2 with 0.5 steps) on the z-axes for
> different subjects. In the interactive window's "attrib" (plot controls)
> I can change only the x-axes (XLim, peristimulus time) and the y-axes
> (YLim, time) but not the z-axes (responses at XYZ). Has someone a
> modified matlab script (I think spm_results_ui.m and spm_graph.m) to do
> this?
One could of course modify spm_results_ui.m, but I think the shortcut
for you is to change the ZLim (or any other property of the plot)
directly from within matlab. To make the figure the current axes, click
on your plot and then type:
set(gca, 'ZLim', [1 2]);
and
set(gca, 'ZTick', [1:0.5:2]);
[Stefan Kiebel 17 July 2000]
> Is there a way to use the matlab window to obtain the values used by SPM
> to generate plots (contrast of parameter estimates)? I am interested in
> obtaining the plot values and the standard deviation.
Yes, during each plot in SPM several values are stored in workspace
variables. When you plot the parameter estimates or a contrast of these,
SPM writes the variables beta (vector of parameter estimates) and SE
(standard error) to the workspace.
If you look at spm_graph.m lines 241-251, you can e.g. see how SPM99
generates the bar plot based on beta and SE.
[Stefan Kiebel 27 July 2000]
To customize the normalization:
Open spm_sn3d.m and insert the following line at the beginning of the
main routine (well below where the %'d comment lines end, and after the
definitions of global values, e.g. at line 296):
sptl_CO=0;
This will direct you, when you're running normalization, to choose all the
options currently available in SPM normalization. Read the descriptions in
the comment lines.
[Jae S. Lee 21 Jun 2000]
> we have constructed a design matrix with 60 sessions. When we explore the
> design we are able to view only 51 sessions.
> Is it possible to check the other 9 sessions? Which is the matlab routine
> where the max number of session are specified?
Actually, it is only partially an SPM issue. Your 60 sessions are still
there; the limitation is due to the inability of Matlab 5.3.1 to display
menus with more than 51 entries on your screen. To see the other
sessions as well, you could type the following in matlab after starting
spm and cd'ing to your analysis directory:
load SPM_fMRIDesMtx.mat
spm_fMRI_design_show(xX,Sess,60,1)
This would show you trial 1 of session 60. Change the last two arguments
to see the other sessions and trials.
[Stefan Kiebel 05 Jul 2000]
> I also have a programming question. When attempting to plot "contrasts of
> parameter estimates" I am not able to view or choose from all contrasts.
> I have a data set with about 50 contrasts and I am only able to choose
> from those that fit on the screen. If I type the contrast number, SPM
> only allows me to enter 19. Is there any way to plot the data for
> contrasts that do not fit in the window?
Yes, there is a way around... It involves some typing:
1. Change line 213 in spm_graph.m from
Ic = spm_input('Which contrast?','!+1','m',{xCon.name});
to
Ic = spm_input('Which contrast?','+1','m',{xCon.name});
2. Before plotting type in the matlab window
global CMDLINE
CMDLINE = 1
The first action makes sure that you can get into command line mode and
the second actually activates the command line mode.
[Stefan Kiebel, 14 July 2000]
> Is it possible to instruct spm99 to search all voxels within a given
> mask image rather than all above a fixed or a %mean threshold?
Yes, with SPM99 it's possible to use several masking options.
To recap, there are 3 sorts of masks used in SPM99:
1. an analysis threshold
2. implicit masking
3. explicit masking
1: One can set this threshold for each image to -Inf to switch off the
thresholding.
2: If the image format allows this, NaN at a voxel position masks this
voxel from the statistics; otherwise the mask value is zero (and the
user can choose whether implicit masking should be used at all).
3: Use mask image file(s), where NaN (when the image format allows this)
or a non-positive value masks a voxel.
On top of this, SPM automatically removes any voxels with constant
values over time.
So what you want is an analysis where one only applies an explicit
mask.
In SPM99 for PET, you can do this by going for the Full Monty and
choosing -Inf for the implicit mask and no 0-thresholding. Specify one
or more mask images. (You could also define a new model structure,
controlling the way SPM for PET asks questions.)
With fMRI data/models, SPM99 is fully capable of doing explicit masking,
but the user interface for fMRI doesn't ask for it. One way to do this
type of masking anyway is to specify your model, choose 'estimate later'
and modify (in matlab) the resulting SPMcfg.mat file (see spm_spm.m
lines 27-39 and 688-713). Load the SPMcfg.mat file, set the xM.TH
values all to -Inf, and set xM.I to 0 (in case you have an image format
not allowing NaN). Set xM.VM to a vector of structures, where each
structure element is the output of spm_vol. For instance:
xM.VM = spm_vol('Maskimage');
Finally, save by
save SPMcfg xM -append
> If so,
> does the program define a voxel to be used as one which has nonzero
> value in the named mask image?
Not nonzero, but any positive value that is not NaN. Note that you can
specify more than one mask image, in which case the resulting mask is
the intersection of all mask images.
[Stefan Kiebel 27 Jun 2000]
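The intersection rule can be sketched as follows (illustrative NumPy, not the SPM implementation; `combined_mask` is a hypothetical helper):

```python
import numpy as np

def combined_mask(masks):
    """A voxel survives only if it is positive and not NaN in every
    mask image supplied, i.e. the intersection of all masks."""
    keep = np.ones(masks[0].shape, dtype=bool)
    for m in masks:
        keep &= (m > 0) & ~np.isnan(m)
    return keep

m1 = np.array([1.0, 0.5, np.nan, 2.0])
m2 = np.array([3.0, 0.0, 1.0, 1.0])
keep = combined_mask([m1, m2])   # only the first and last voxels survive
```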
> Do I have to mask this contrast by another contrast (e.g. main effect)
> and how can I specify the masking contrast?
You do not have to, but if you wanted to: use a 2nd-level model with
(Ae-Ce) in one column and (Be-Ce) in another (plus the constant term).
Then mask [1 -1 0] with [1 1 1]. The latter is the main effect of
Factor 1.
[Karl Friston 18 July 2000]
[also see "Model Items: 3factor design"]
For those of you wanting to specify explicit masking at the SPM (PET/SPECT)
model setup stage, here's a recipe to do it without having to resort to the
"Full Monty" design: Start SPM99 and paste the following into the MatLab
command window:

%Choose design class
D = spm_spm_ui(char(spm_input('Select design class...','+1','m',...
    {'Basic stats','Standard PET designs','SPM96 PET designs'},...
    {'DesDefs_Stats','DesDefs_PET','DesDefs_PET96'},2)));
%Choose design from previously specified class
D = D(spm_input('Select design type...','+1','m',{D.DesName}'))
%Turn on explicit masking option
D.M_.X = Inf
%Pass this design definition to SPM (PET/SPECT)
spm_spm_ui('cfg',D)

[Andrew Holmes 20 July 2000]
> It appears masking is a binary operation -- does this mean the mask
> specified must be in a bitmapped {0,1} format, or just that it is treated
> that way?
The latter. The mask can have any numbers. If the mask image format
(e.g. 'float') supports NaN, NaN is the masking value, otherwise it is
0.
[Stefan Kiebel 21 July 2000]
> With respect to estimating a model. I would like to potentially do an
> apriori mask of my collected brain. I could go in and just change all
> of my img files and mask explicitly each one (i.e. zero out the
> noninteresting portions), however, any hints on where in the
> estimation code I would insert a masking to zero out the portions of
> the brain that I am not interested in estimating. That is, if we could
> we would have acquired a smaller region of volume during the scanning,
> but I can affect this by just masking my data before estimation.
Absolutely. If you want to assess the number of voxels above a given
threshold, you can count these in the t-images. With respect to your
question about masking to effectively constrain the analysis to a
ROI, you could look at
http://www.mailbase.ac.uk/lists/spm/200006/0196.html
http://www.mailbase.ac.uk/lists/spm/200007/0205.html
which might provide a solution for how to implement your explicit masking
easily (without changing each image, but just constraining the analysis
to a subset of voxels).
If you do an explicit masking, a script for counting voxels above
threshold in a ROI wouldn't be necessary, because then you could use the
cluster sizes as computed by SPM. You could also try to use a mask image
to apply the SVC.
Mask part of the brain
> for the analysis of SPECT perfusion data, I would like to "crop" my images
> prior to statistical analysis
> - that is, remove non-brain counts [scalp, sinuses, muscles]
>
> from reading about "Mask object" in spm_sn3d, I gather spm will not do this
> during this step. True?
>
> if not, is there a function available to do so?
Yes, there is a function to do exactly what you want. During the
statistical analysis setup, you can specify an explicit masking. To get
to this and related masking options, you have to choose Full Monty as
your design option. Then you can specify a mask image, which could be in
your case e.g. a normalized cropped image, where NaN (or 0) would mean
to exclude this voxel from the analysis. You will find more detailed
documentation about this type of masking in the SPM help for PET models.
[Stefan Kiebel 19 Jul 2000]
Conjunctions are specified by holding down the 'control' key during
contrast selection.
Get pixel coordinates for all voxels within an activated cluster
One easy way would be to position the cursor on the cluster you're
interested in (after displaying the results using the 'results' button),
and paste the following lines from spm_list.m at the matlab prompt:
[xyzmm,i] = spm_XYZreg('NearestXYZ',...
spm_results_ui('GetCoords'),SPM.XYZmm);
spm_results_ui('SetCoords',SPM.XYZmm(:,i));
A = spm_clusters(SPM.XYZ);
j = find(A == A(i));
XYZ = SPM.XYZ(:,j);
XYZmm = SPM.XYZmm(:,j);
The last two variables, XYZ and XYZmm, would contain the pixel and the
mm coordinates of all voxels in the current cluster. (Check the cursor to
see where it is after pasting the above; it may jump a bit, moving to the
nearest suprathreshold voxel.)
[Kalina Christoff 25 Jun 2000]
You could also use spm_regions in 'results' (VOI)
>> help spm_regions
VOI time-series extraction of adjusted data (local eigenimage analysis)
FORMAT [Y xY] = spm_regions(SPM,VOL,xX,xCon,xSDM,hReg);
SPM    - structure containing SPM, distribution & filtering details
VOL    - structure containing details of volume analysed
xX     - Design Matrix structure
xSDM   - structure containing contents of SPM.mat file
xCon   - Contrast definitions structure (see spm_FcUtil.m for details)
hReg   - Handle of results section XYZ registry (see spm_results_ui.m)
Y      - first eigenvariate of VOI
xY     - structure with:
xY.name   - name of VOI
xY.y      - voxel-wise data (filtered and adjusted)
xY.u      - first eigenvariate
xY.v      - first eigenimage
xY.s      - eigenimages
*** xY.XYZmm - Coordinates of voxels used within VOI ***
xY.xyz    - centre of VOI (mm)
xY.radius - radius of VOI (mm)
xY.dstr   - description of filtering & adjustment applied
Y and xY are also saved in VOI_*.mat in the SPM working directory
[Karl Friston 26 Jun 2000]
See mean global estimates for individual raw scans.
>> load SPMcfg.mat
>> plot(xGX.rg)
Change the threshold for global normalization.
If you want to try different thresholds, then you need to modify line 57 of spm_global.c, and then recompile. The modification would involve something like changing from:
s1/=(8.0*m);
to:
s1/=(4.0*m);
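For context, spm_global takes the mean of the whole image, divides it by 8 to obtain a threshold, and returns the mean of the voxels above that threshold, so lowering the divisor to 4 makes the threshold stricter. A toy sketch of that logic (Python for illustration, not the actual C code):

```python
import numpy as np

def global_mean(img, divisor=8.0):
    """Mean of voxels above (overall mean / divisor), mimicking the
    thresholding step in spm_global.c."""
    thr = img.mean() / divisor
    return img[img > thr].mean()

# Half dim voxels (10) and half bright voxels (90): overall mean is 50
img = np.concatenate([np.full(500, 10.0), np.full(500, 90.0)])
g8 = global_mean(img, 8.0)   # threshold 6.25: every voxel contributes
g4 = global_mean(img, 4.0)   # threshold 12.5: only the bright voxels
```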
> 1) This may be a really idiotic question, but how does one view the
> uncorrected t-statistic images? I'm assuming that viewing the t-statistic
> images for a given contrast using the default values: "corrected height
> threshold = no", "threshold {T or p value} = 0.001", and "extent threshold
> {voxels} = 0" still applies a correction that is based on the
> smoothness estimates and consequently the number of resels.
This displays the raw uncorrected t statistics that are more significant
than p<0.001. There is no correction for the number of resels when you
don't specify a corrected height threshold.
Another way of displaying the statistic images is to use <Display> or
<Check reg>.
[John Ashburner 21 Jun 2000]
> I'm performing a manual rotation and I don't know how to save the
> rotated image.
Use the Display button. Your image will come up in the graphics window.
Use the grey boxes to the left of and below the image to alter the
orientation; then, when you are happy with the result, press the
'Reorient images' button in the same window. spm_get will launch. Select
the images you want to be rotated (the image you have been working on
+/- any others), and the changes to the orientation will be written out
in a *.mat file.
[Alex Leff 19 July 2000]
A right click in the background of an SPM results table brings up a context
menu including options to "Print Text Table" and "Extract Table Data
Structure". The first prints the table as plain text in the Matlab command
window, the second returns the table data structure to the base matlab
workspace (as 'ans'). See the help for spm_list.m for further details (also
available from the table context menu as "help").
> I'd like to create, for each individual subject, a subtraction image that
> reflects %change in normalized rCBF. Thus, instead of t-values, the pixel
> values of this image would be numbers reflecting change above or below
> average whole brain. In my particular case, I have two baselines and two
> activations, so I'd like to create the percent change subtraction image of:
> (i1+i3)/2 - (i2+i4)/2.
>
> Is there a way to easily accomplish this in SPM? As far as I can tell,
> proportional scaling only comes as part of a process that produces a
> statistical parametric map image, and I don't see anything in the image
> calculator that would enable me to perform this step separately (i.e., take
> an image, normalize each pixel by whole brain average, and then do the
> subtractions).
In Matlab, you can obtain the "globals" for each image by:
V = spm_vol(spm_get(4,'*.img'));
gl1 = spm_global(V(1))
gl2 = spm_global(V(2))
gl3 = spm_global(V(3))
gl4 = spm_global(V(4))
Then these can be plugged into the ImCalc expression by:
(i1/gl1+i3/gl3)/2 - (i2/gl2+i4/gl4)/2
I think you actually need to enter the values of the globals rather than
the variable names.
[John Ashburner 11 Aug 2000]
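The arithmetic that ImCalc expression performs can also be written out directly, e.g. for checking a result; a sketch assuming the four images are already loaded as arrays (all names invented):

```python
import numpy as np

def pct_change_image(i1, i2, i3, i4, gl1, gl2, gl3, gl4):
    """Scale each image by its global mean, then subtract the mean of
    the two activation images from the mean of the two baselines:
    (i1/gl1 + i3/gl3)/2 - (i2/gl2 + i4/gl4)/2."""
    return (i1 / gl1 + i3 / gl3) / 2 - (i2 / gl2 + i4 / gl4) / 2

# toy single-voxel "images": both halves scale to the same value here
out = pct_change_image(np.array([10.0]), np.array([24.0]),
                       np.array([30.0]), np.array([8.0]),
                       10.0, 12.0, 15.0, 8.0)
```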
If you have problems with SPM halting, and perhaps with your Matlab
session also quitting, type the following before entering Matlab:
unlimit stacksize
N.B. this only works on a Unix machine.
1 subject compared to controls
> 1. How can SPM best be used to compare a single subject to a group of
> controls in order to establish the pattern of regional abnormalities? I
> have tried using the two sample t-test, two groups, one scan per subject
> model, with success, but was wondering if anyone had ideas about other
> approaches using the software.
This is probably the best approach, but it may be worth also modelling
confounding effects such as age or possibly nonlinear age effects
(by also including age^2 and age^3).
Depending how many controls you have, you may also wish to try a
nonparametric analysis using SNPM.
[John Ashburner 13 July 2000]
> number of conditions or trials : 1
> (is this correct? Should I enter "2"?)
Yes. With one condition alternating with rest it is appropriate to model the
rest implicitly by specifying just the active condition onsets. To use 2
conditions would not be wrong, but is redundant.
> Results button
> I set default values for mask, threshold and so on.
> I set t-contrast "1 -1" or "-1 1"; is that correct?
> I want a z-score, which is (mean(rest)-mean(activation))/SE,
> but the different options give different z-scores.
This is what you are doing wrong, I think. You specified one condition, so
you have two columns in the resulting design matrix. One represents the boxcar
(activation vs rest); the other is a constant term modeling the mean
activity over all conditions. Your t-contrasts are comparing these two
regressors, which will give weird results.
What you should do is use contrasts [1] or [-1] to see areas where
activation > rest, or rest > activation, respectively. If you had used two
regressors to model activation and rest separately, then the corresponding
contrasts would be [1 -1] and [-1 1].
[Geraint Rees 25 July 2000]
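The equivalence between the two parameterizations is easy to verify numerically; a sketch with invented data (NumPy, not SPM code):

```python
import numpy as np

rng = np.random.default_rng(3)
box = np.tile(np.r_[np.ones(5), np.zeros(5)], 4)   # boxcar: active(1)/rest(0)
y = 10.0 + 2.0 * box + 0.1 * rng.standard_normal(box.size)

# Parameterization 1: boxcar + constant; contrast [1] on the boxcar
X1 = np.c_[box, np.ones_like(box)]
b1 = np.linalg.lstsq(X1, y, rcond=None)[0]
act_minus_rest_1 = b1[0]                 # activation - rest

# Parameterization 2: separate activation and rest regressors; [1 -1]
X2 = np.c_[box, 1.0 - box]
b2 = np.linalg.lstsq(X2, y, rcond=None)[0]
act_minus_rest_2 = b2[0] - b2[1]         # same quantity, same value
```

Both designs span the same space, so the two contrasts estimate the identical effect.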
1 group, 2 conditions, 1 covariate
PET/SPECT models: Multisubject, conditions and covariates
> I'm trying to do simple correlations with SPM99... will someone please
> help me, this should be very simple.
>
> I have 2 PET scans per subject, one at baseline and one on drug. I
> have 2 clinical rating scores, one at baseline and one after drug.
> I want to look at increases in GMR after drug correlated with
> increases in the clinical rating. I also want to look at negative
> correlations. What model should I use and how do I define the
> contrasts??
PET/SPECT models: Multisubject, conditions and covariates. For each
subject, enter the two scans as baseline and then drug. One covariate,
values are the clinical rating scores in the order you selected the scans,
i.e. baseline score for subject 1, drug score for subject 1, baseline score
for subject 2, drug score for subject 2, &c. No interactions for the
covariate. No covariate centering. No nuisance variables. I'd use
proportional scaling global normalisation, if any. (You could use
"straight" AnCova (with grand mean scaling by subject), but SPM99 only
offers you AnCova by subject, which here would leave you with more
parameters than images, and a completely unestimable model.)
Your model (at the voxel level) is:
[1] Y_iq = A_q + C * s_iq + B_i + error
...where:
Y_iq is the baseline (q=1) / drug (q=2) scan on subject i (i=1,...,n)
A_q is the baseline / drug effect
s_iq is the clinical rating score
C is the slope parameter for the clinical rating score
B_i is the subject effect
...so the design matrix has:
2 columns indicating baseline / drug
1 column of the covariate
n columns indicating the subject
You will have n-1 degrees of freedom.
Taking model [1] and subtracting for q=2 from q=1, you get the equivalent model:
[2] (Y_i2 - Y_i1) = D + C*(s_i2 - s_i1) + error
...where D = (A_2 - A_1), the difference in the baseline & drug main
effects. (Note that this only works when there are only two conditions and one scan per condition per subject!) I.e. a simple regression of the difference in voxel value baseline to drug on the difference in clinical scores, exactly what you want.
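The equivalence of models [1] and [2] can be confirmed with a quick simulation (NumPy sketch with invented data; nothing SPM-specific):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12                                  # subjects, 2 scans each
s = rng.normal(size=(n, 2))             # clinical scores: [baseline, drug]
B = rng.normal(size=n)                  # subject effects
C_true, D_true = 1.5, 0.7               # covariate slope, drug main effect
Y = np.empty((n, 2))
Y[:, 0] = B + C_true * s[:, 0] + 0.05 * rng.standard_normal(n)
Y[:, 1] = B + D_true + C_true * s[:, 1] + 0.05 * rng.standard_normal(n)

# Model [1]: drug indicator + covariate + subject dummies
drug = np.tile([0.0, 1.0], n)
subj = np.kron(np.eye(n), np.ones((2, 1)))
X_full = np.c_[drug, s.ravel(), subj]
b_full = np.linalg.lstsq(X_full, Y.ravel(), rcond=None)[0]

# Model [2]: regress scan differences on score differences
dY, ds = Y[:, 1] - Y[:, 0], s[:, 1] - s[:, 0]
b_diff = np.linalg.lstsq(np.c_[np.ones(n), ds], dY, rcond=None)[0]
# b_full[0] (drug effect D) equals b_diff[0], and
# b_full[1] (covariate slope C) equals b_diff[1]
```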

Entering [0 0 1] (or [0 0 -1]) as an F-contrast will test the null
hypothesis that there is no covariate effect (after accounting for common
effects across subjects), against the alternative that there is an effect
(either positive *or* negative). I.e., the SPM{F} will pick out areas where
the difference baseline to drug is correlated with the difference in
clinical scores.
[0 0 +1] and [0 0 -1] as t-contrasts will test against one-sided
alternatives, being a positive & negative correlation (respectively) of
baseline to drug scan differences with difference in clinical scores. Since
you're interested in both, you should interpret each at a halved
significance level (double the p-values). This will give you the same
inference as the SPM{F} (which is the square of the SPM{t}'s), but with the
advantage of separating +ve & -ve correlations in the glass brain for you.

Incidentally, the variance term here incorporates both within- and between-
subject variability, and inference extends to the (hypothetical) population
from which you (randomly!) sampled your subjects.
[Andrew Holmes, when ???]
> Given 2 conditions, 1 scan/condition, 1 covariate obtained at each scan,
> mean-centered covariate with proportional global scaling. A condition &
> covariate design with a contrast 0 0 1 is equivalent to correlation
> between the change in covariate and the change in the scans.
Indeed, or more precisely the partial correlation between the covariate
and scan-by-scan changes, having accounted for the condition-specific
activations.
[Karl Friston 28 Jun 2000]
> I have a SPECT study with 34 patients and 2 conditions per patient and 1
> covariate.
> I want to find the regions where there is positive corelation between the
> rise in blood flow from the first scan to the second scan with the
> covariate.
> I centered the covariate around 0 and used a new covariate of +a/2, -a/2
> as recommended by Andrew Holmes.
>
> Could anyone please explain to me what would the difference be in this
> case, if I use the "Multi subject covariate only" design or a
> "Multi subject condition and covariate design" and use a [0 0 1] contrast.
If you use 'Multi subject condition and covariate design', the model
expresses your assumption that each series of observations in a voxel
(over subjects) can be explained by subject effects, condition effects
(which are the same for all subjects) and by your covariate.
If you choose 'Multi subject covariate only', you express your belief
that there is no need to model a condition effect, but that your
covariate alone times the estimated slope is a good explanation for your
observations.
So the difference between the two models is that in the first you model
some additive condition effect commonly observed over all subjects.
[Stefan Kiebel 25 July 2000]
1 group, 2 conditions, 1 covariate, 2 nuisances
> I will first start with what we have: Within an fmri study,
> One group
> Five subjects
> Two conditions
> Auditory Monitoring versus its own baseline
> Working Memory versus its own baseline
> Two nuisance variables
> anxiety score (one score per subject)
> Depressive mood score (one score per subject)
>
> One covariate of interest
> error score on the working memory task
> This is what we did
> Design Description
> Design: Full Monty
> Global calculation: mean voxel value (within per-image fullmean/8 mask)
> Grand mean scaling: (implicit in PropSca global normalization)
> Global normalization: proportional scaling to 50
> Parameters: 2 condition, +1 covariate, +5 block, +2 nuisance
> 10 total, having 7 degrees of freedom
> leaving 3 degrees of freedom from 10 images
>
> Is this a valid way of looking at this? We are concerned with the
> large degrees of freedom that we are using up. Also how would we
> accurately interpret such a model? Does the statistical map only
> represent activations that are associated with the covariate of
> interest after controlling for anxiety and depression scores?
Firstly, I assume this is a second-level analysis where you have taken
'monitoring' and 'memory' contrasts from the first level. If this is
the case you should analyse each contrast separately. Secondly, do not
model the subject effect: at the second level this is a subject by
contrast interaction and is the error variance used for inference.
Thirdly, a significant effect due to any of the covariates represents a
condition x covariate interaction (i.e. how that covariate affects the
activation).
I would use a covariates-only single-subject design in PET models (for
each of the two contrasts from the first level). A second-level
contrast testing for the effect of the constant term will tell you
about average activation effects. The remaining covariate-specific
contrasts will indicate whether or not there is an interaction.
[Karl Friston 17 July 2000]
1 group, 2 conditions, 3 levels/condition
> we're attempting to conduct a parametric analysis.
> We have two conditions, A (experimental condition) and B (control
> condition); in the experimental condition the parameter assumes 3
> different values.
> 1) Which is the difference between choosing a polynomial or a linear
> relationship in the model?
A linear model is simply a polynomial model with 0th and 1st order
terms. Any curvilinear relationship between evoked responses and
the parameter of interest would require 2nd or higher order terms
to be modeled.
> 2) In the results section how can we specify the contrast for the A-B
> difference? and for the parameter effect on the experimental condition?
Simply test for the [polynomial] coefficients one by one. The 0th
order term (e.g. [1 0 0]) models the mean difference between A and B
averaged over the three levels. The 1st order coefficient (e.g. [0 1
0]) reflects the linear dependency on the parameter and the 2nd (e.g.
[0 0 1]) or higher reflect the nonlinear components. The 0th order
term is the conventional boxcar or condition-specific effect modeled
in simple, non-parametric analyses. Note that because you only have 3
levels in condition A, a 2nd order model is the highest you would
consider (a parabola can pass through any three points).
[Karl Friston 29 Jun 2000]
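To make the polynomial expansion concrete, here is a minimal sketch in Python/NumPy (not SPM code; the level values and responses are invented). It builds 0th-, 1st- and 2nd-order regressors for a three-level parameter; with only three levels the quadratic term is the highest order that is identifiable.

```python
import numpy as np

# Hypothetical values of the parameter at the three levels of condition A
levels = np.array([1.0, 2.0, 3.0])

# Polynomial expansion: 0th order (constant), 1st (linear), 2nd (quadratic)
X = np.vstack([levels**0, levels**1, levels**2]).T

# Orthonormalising the columns keeps the coefficients separately interpretable
Q, _ = np.linalg.qr(X)

# Fit hypothetical mean responses at the three levels; contrasts such as
# [1 0 0], [0 1 0] and [0 0 1] then pick out the mean, linear and
# quadratic coefficients one by one
y = np.array([2.0, 4.0, 8.0])
beta = np.linalg.lstsq(Q, y, rcond=None)[0]
```

With three parameters and three levels the fit is exact, which is the sense in which a 2nd-order model is the highest one would consider here.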
> I chose the multi-subject: conditions and covariates design, for
> PET scans. The four scans/conditions (A,B,C,D) were entered in
> time order. I want to compare only two conditions (e.g. B and C) in
> the analysis. Do I have to set the other conditions to 0 while
> defining contrasts (i.e. 0, 1, -1, 0), or do I have to make another
> spm.mat file in which only the two conditions I want to compare are
> taken and define contrasts as 1, -1? Is there a difference between the
> two ways?
The first solution is the more appropriate one. One should generally try
to specify one design matrix modelling all observations and then use the
one estimated parameter set to compute all the contrasts.
The difference between the two solutions is that in the first case you
make the assumption that it is valid to use all scans for estimating the
variance under the null hypothesis, even if a contrast vector element is
zero for the associated basis function/condition. This is a valid
assumption for a fixed effects PET analysis. As a result, you have more
degrees of freedom in your statistical test at each voxel such that the
analysis is more sensitive to the underlying signal.
[Stefan Kiebel, when ???]
> Further to my question to you earlier this week which was:
> Q. My paradigm is a block design with 4 different active blocks, each
> followed by its respective null block, i.e. I have 4 different null
> blocks. How do I go about specifying the design matrix for a second
> level analysis taking these different null blocks into account?
> e.g.
> if my 4 active blocks are: A1 A2 A3 A4
> and my 4 null blocks are: N1 N2 N3 N4
>
> If I specify trials 1-8 in the following order:
> A1 N1 A2 N2 A3 N3 A4 N4
>
> how do I contrast [A1-N1] - [A2-N2]? or vice versa?
>
> Your answer was:
> A. You would simply specify 8 conditions (A1 - N4) and use the appropriate
> contrasts.
> Unfortunately, 'use the appropriate contrasts' is the bit we don't know how
> to do now that we have so many different nulls.
>
> I have specified the conditions 1-8: A1 N1 A2 N2 A3 N3 A4 N4
> for simple contrast A1-N1 I've used: 1 -1 0 0 0 0 0 0
> & for contrast A2-N2 I've used: 0 0 1 -1 0 0 0 0
> How do I specify a 2nd level contrast looking at the activity in A1 minus
> its null N1 versus the activity in A2 minus its null N2,
> i.e. [A1-N1] - [A2-N2]?
>
> If I use: A1 N1 A2 N2 A3 N3 A4 N4
> 1 -1 -1 1 0 0 0 0
> then surely this is just adding the activity in A1 and A2 and taking
> away the activity in N1 and N2, which is not what we want to do.
In fact this is [A1-N1] - [A2-N2] and is exactly what you want. I
think the confusion may be about the role of the 2nd-level analysis.
To perform a second level analysis simply take the above contrast [1
-1 -1 1 0 0 0 0] and create a con???.img for each subject. You
then enter these images into a one-sample t-test under 'Basic Designs'
to get the second-level SPM. To do this you have to model all your
subjects at the first level and specify your contrasts so that the
effect is tested in a subject-specific fashion:
i.e. subject 1 [1 -1 -1 1 0 0 0 0 0 0 0 0 0 0 0 0 ...
subject 2 [0 0 0 0 0 0 0 0 1 -1 -1 1 0 0 0 0 ...
...
[Karl Friston 19 July 2000]
>how do I contrast [A1-N1] - [A2-N2]? or vice versa? i.e. perform a 2nd order
>contrast.
The first issue is exactly what question you are asking. [A1-N1] vs
[A2-N2] looks like an interaction, and I think that this is what you
are after. You can think of it as comparing the 'simple main effect'
Ax-Nx in two contexts, x=1 and x=2. Put another way, the
interaction is the 'A-specific activity' in context 1 compared with
the 'A-specific activity' in context 2, each being compared with its
own baseline. Let me know if this is not what you need.
>Would the appropriate contrast be A1-N1-A2+A1?
No, it would be A1-N1-A2+N2 (I suspect that this is what you meant
and that the A1 on the end is just a typo). Thus with your
covariates ordered as specified above, your contrast will be (1 -1 -1
1 0 0 0 0). As Karl pointed out (but for a different contrast), you
now need to perform this contrast on each of your subjects, within
the 'fixed effects' design matrix:
Subject 1: 1 -1 -1 1 0 0 0 0 0 0 0 0 0 0 0 0 ...
Subject 2: 0 0 0 0 0 0 0 0 1 -1 -1 1 0 0 0 0 ...
etc.
Each contrast image generated (i.e. one for each subject) gets
entered into a one-sample t-test in the 'second level' analysis. The
question which you now ask of every voxel is whether its value
departs significantly from zero (which is its expected value under
the null hypothesis).
Incidentally it may be worth just mentioning an alternative (less
good) approach, which I suspect that you might have been considering.
(You can ignore this bit if you like.) You could specify the simple
main effect contrasts A1-N1 and A2-N2, and test for the difference
between them. Thus your contrasts would be
Subject 1 (A1-N1): 1 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...
Subject 1 (A2-N2): 0 0 1 -1 0 0 0 0 0 0 0 0 0 0 0 0 ...
Subject 2 (A1-N1): 0 0 0 0 0 0 0 0 1 -1 0 0 0 0 0 0 ...
Subject 2 (A2-N2): 0 0 0 0 0 0 0 0 0 0 1 -1 0 0 0 0 ...
In this case, the second level analysis would test whether the A1-N1
contrasts, as a population, are significantly greater than the A2-N2
contrasts. The reason why this is less good than the first approach
outlined above is that it is equivalent to an unpaired t-test (in
which you just compare an 'A1-N1' population with an 'A2-N2'
population) whereas your data are obviously paired (i.e. each A1-N1
estimate goes with the A2-N2 estimate for the same subject).
However, if you can do a paired t-test, then as I understand it the
result should be exactly the same as the first analysis - I've never
tried this so I don't know if it is possible within SPM99.
[Richard Perry 20 July 2000]
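The claimed equivalence is easy to check numerically. A sketch in Python/SciPy (the per-subject contrast values are invented): a paired t-test on the two sets of contrasts gives exactly the same statistic as a one-sample t-test on the per-subject differences, which is what entering the interaction con images into a one-sample t-test computes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-subject contrast estimates for 8 subjects
a1n1 = rng.normal(1.0, 0.5, 8)   # A1-N1 contrast values
a2n2 = rng.normal(0.6, 0.5, 8)   # A2-N2 contrast values

# Paired t-test on the two sets of contrasts...
t_paired, p_paired = stats.ttest_rel(a1n1, a2n2)

# ...is identical to a one-sample t-test on the per-subject
# interaction (A1-N1)-(A2-N2)
t_one, p_one = stats.ttest_1samp(a1n1 - a2n2, 0.0)
```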
> We have conducted a Working Memory study in H2(15)O PET with the following
> design:
>
> 8 subject
> 4 Conditions:
> A: WM 1 (high load)
> B: WM 1 (low load)
> C: WM 2 (high load)
> D: WM 2 (low load)
> 3 Scans/per condition/subject
>
> Thus we have a total of 96 scans. To look for the main effect of
> Working memory and for the domain specific effect we have chosen the
> Multisubject: cond x Subj interaction & covariates design. We find
> nice WM main effects and also interesting domain specific effects.
>
> Experimental question: We are interested if there is a correlation
> between performance (as measured by RT) and WMspecific activation.
>
> What kind of design do we have to choose? As we work mainly with
> fMRIstudies we first thought of a second level analysis. That is:
> Formulate subject-specific contrasts, e.g. WM 1 high load minus WM 1
> low load, on the first level, then feed the eight resulting con-images
> into basic models (simple regression) and choose the median of the RT
> for the three WM 1 high load scans as covariate in this model. Then
> our analysis should yield regions in which there is a correlation of
> one WM domain with performance.
>
> Question 1: Is this analysis correct?
Yes it is, but it may not be the most sensitive analysis because you are
proceeding to a second-level analysis whereas you have scan-specific
performance measures.
> Question 2: There are several options at the first level in which it is
> possible to specify scan-specific covariates. Is it sensible to
> choose one of these models and feed in scan-specific RTs in order to
> answer our experimental question? If so, what is the appropriate model?
> We have tried several, but if we enter scans and covariates we use up all
> our degrees of freedom, e.g. if we use Multi-subj: covariates only. So
> something must be wrong.
I would simplify your model and omit subject x condition interactions.
You could then enter the condition x performance interaction as a
covariate of interest. This is simply the mean-corrected performance
data multiplied by 1 for high load and -1 for low load. You could do
this in a condition-specific fashion for WM 1 and WM 2 using two
covariates, but ensure the behavioural data are centered within
condition before constructing the interaction (here use 0 for the
'other' condition).
[Karl Friston, 20 Jun 2000]
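As an illustration of the covariate construction described above (a sketch only; the load codes and RT values are invented), the condition x performance interaction can be formed like this:

```python
import numpy as np

# Hypothetical per-scan data: load (+1 high, -1 low) and reaction times
load = np.array([+1, +1, +1, -1, -1, -1])          # 3 high-, 3 low-load scans
rt   = np.array([620., 650., 640., 510., 500., 530.])

# Centre the performance data within condition before forming the interaction
rt_c = rt.copy()
for lv in (+1, -1):
    m = load == lv
    rt_c[m] -= rt_c[m].mean()

# Condition x performance interaction covariate:
# mean-corrected RT multiplied by +1 (high load) or -1 (low load)
interaction = rt_c * load
```

Centring within condition ensures the interaction covariate is orthogonal to the main effect of load, so it picks up only the load-dependent performance effect.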
> I ran SPM99b. We have tested 11 normal young subjects with an H2(15)O PET
> study in five different conditions: 1 at rest and 4 with auditory stimulation.
> The first design matrix we used to check the differences between stimulation
> vs. no stimulation is:
> multisubject conditions & covariates
> conditions 0 1 2 1 2 (rest, cond 1, cond 2 cond 3 and cond 4)
> Ancova by subject
> scale subjects grand means to 50
> analysis threshold 0.8
> mean voxel value (within per image fullmean/8 mask)
> Is the right approach?
> Should it be -4 1 1 1 1?
Yes, the design matrix specification part sounds fine. You should take
the contrast [-4 1 1 1 1] to compare the mean effect of all stimulation
conditions vs. rest.
> The design matrix we used to look for different activity in one of the four
> conditions was.:
> multisubject conditions & covariates
> conditions 0 1 1 1 3 (rest, cond 1, cond 2, cond 3 and cond 4, which is the relevant one)
> Ancova by subject
> scale subjects grand means to 50
> analysis threshold 0.8
> mean voxel value (within per image fullmean/8 mask)
> what is the difference using 0 -1 -1 -1 3, assuming we
> expected to find greater
> activity at condition 4?
The difference between [0 1 1 1 -3] and [0 -1 -1 -1 3] is that the first
tests at each voxel whether the mean effect of conditions 1 to 3 is
larger than the effect of condition 4, and the second tests whether the
effect of condition 4 is larger than the mean effect of conditions 1 to
3.
> I was also interested in measuring rCBF in auditory cortex at the different
> stimulation conditions. Unfortunately the differences between conditions are so
> subtle that using a T test there is no significance with the actual number
> of subjects (11).
If you want to see the effect of each condition vs. rest, then you could
try the contrasts
[-1 1 0 0 0]
[-1 0 1 0 0]
[-1 0 0 1 0]
[-1 0 0 0 1]
If you have a specific hypothesis that e.g. condition 4 should activate
more than condition 1, you could try [0 -1 0 0 1].
[Stefan Kiebel 11 July 2000]
> I have a few questions concerning a multisubjects fMRI experiment.
> For each subject we acquired 4 separate scansessions according to the
> following experimental design:
> session 1,2: Ce Cr Ce Cr Ce Cr Ce Cr Ae Ar Ae Ar Ae Ar Ae Ar
> session 3,4: Ce Cr Ce Cr Ce Cr Ce Cr Be Br Be Br Be Br Be Br
> where:
> e = encoding
> r = retrieval
> C = baseline
> A = condition1
> B = condition2
>
> Up to now, I have been able to analyze contrasts between conditions
> belonging to sessions of the same type (e.g. Ae-Ce; Ar-Cr for sessions 1
> and 2), both using a fixed-effect and a 2nd-level random-effect model.
>
> In addition, however, I would like to compare conditions belonging to
> sessions of a different type - though I know it would have been much
> better to include all conditions in one session. E.g. compare Ae-Be. I
> have tried to do this on a random-effect basis, by computing the simple
> main effects for each subject (Ae-Ce and Be-Ce) and then entering the
> corresponding con*.images into a paired t-test (Ae-Ce vs Be-Ce). My
> questions here are: what am I actually looking at with this approach? At
> interaction effects [of the type (Ae-Ce) - (Be-Ce)]?
Yes indeed. You could construe your design as a 3 factor design:
Factor 1: C vs. Active (A or B) (2 levels)
Factor 2: e vs. r (2 levels)
Factor 3: Condition 1 vs. Condition 2 (2 levels)
Your effect is a 2 way interaction Factor 1 x Factor 3, under e.
> Do I have to mask this contrast by another contrast (e.g. main effect)
> and how can I specify the masking contrast?
You do not have to, but if you wanted to, use a 2nd-level model with
(Ae-Ce) in one column and (Be-Ce) in another (plus the constant term).
Then mask [1 -1 0] with [1 1 1]. The latter is the main effect of
Factor 1.
> Is there any way to look at direct comparisons (Ae-Be)?
Yes; just do a two-sample t-test on contrasts testing for Ae and Be
separately. These will be the same as the beta???.img.
[Karl Friston 18 July 2000]
> We are analyzing PET data of subjects from 2 groups who underwent the
> same stimulation. Group1 contains 7 subjects, Group2 contains 12
> subjects. We can do a single subject analysis and a RFX group
> evaluation of group 2. For the group evaluation of group 1 the number of
> 7 subjects might not be enough. (?) We want to do a comparison of the 2
> groups. Which model do we have to use for the first and second level
> analysis?
For the 1st-level analysis use a 'multi-group conditions and covariates'
design; for the 2nd-level analysis compare the two sets of subject-specific
contrasts with a two-sample t-test under 'basic models'. The latter will
have reasonable power because you are using all 19 subjects.
[Karl Friston 10 Jul 2000]
> Now have properly normalized 2 groups with 5 datasets each. Paradigm:
> RARARAR (A: Activation, R: Rest). Group I: controls, group II
> patients. Now I want to subtract group I from group II to see
> whether there is any difference between the two groups concerning
> condition A. How shall I proceed?
If you want to assess the differences in activations this constitutes
an inference about the group x condition interaction. Having modeled
your two conditions in a multi-group PET design you will have four
columns (A - group I, B - group I, A - group II and B - group II). The
contrast you require is [1 -1 -1 1] for bigger activations in the
controls and [-1 1 1 -1] for the converse.
[Karl Friston 19 July 2000]
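A quick numerical check of this interaction contrast (the cell means are invented, purely for illustration):

```python
import numpy as np

# Hypothetical parameter estimates for the four columns of the
# multi-group design: A-group I, B-group I, A-group II, B-group II
beta = np.array([82.0, 78.0, 80.5, 79.0])

# Group x condition interaction: (A_I - B_I) - (A_II - B_II)
contrast = np.array([1, -1, -1, 1])
effect = contrast @ beta

# The same quantity as a difference of activations
activation_I  = beta[0] - beta[1]
activation_II = beta[2] - beta[3]
```

A positive value means bigger activations in group I (controls); the reversed contrast [-1 1 1 -1] tests the converse.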
2 groups, 2 conditions, 1 covariate
> Given 2 conditions, 1 scan/condition, 1 covariate obtained at each scan,
> mean-centered covariate with proportional global scaling. A condition &
> covariate design with a contrast 0 0 1 is equivalent to correlation
> between the change in covariate and the change in the scans.
> Can this approach be generalized to a multi-group design? If there are 2
> groups and 2 conditions, with 1 scan for each subject under each
> condition, and a single covariate collected during each scan, then would
> one specify 1 or 2 covariates.
I would specify two, each with a group-centered covariate. This would
allow you to look for differences in the partial correlation with the
contrast [0 0 0 0 1 -1], i.e. it models group x covariate interactions.
> If only one covariate is specified, would one test for a covariate effect
> in group 1 alone with the contrast 1 0 0 0 1 (Group 1, Group 2, Condition
> 1, Condition 2, Covariate), and the group x covariate interaction with the
> contrast 1 -1 0 0 1?
No. The interaction is not modeled with only one covariate. This contrast
is simply the main effect of group plus the main effect of the covariate.
> Alternatively would one use 2 covariates ? Each covariate consisting of
> the mean centered values for a single group with 0 padding for the
> subjects in the other group, essentially what you get when you specify a
> group x covariate interaction. The effect of the covariate for group 1
> alone would be tested with the contrast 0 0 0 0 1 0 (Group 1, Group 2,
> Condition 1, Condition 2, Covariate 1, Covariate 2) and the interaction
> across groups tested with the contrast 0 0 0 0 1 -1?
Exactly.
[Karl Friston 28 Jun 2000]
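The two-covariate construction agreed on above can be sketched as follows (Python/NumPy; the group labels and covariate values are invented):

```python
import numpy as np

# Hypothetical covariate values for 4 subjects in group 1 and 4 in group 2
group = np.array([1, 1, 1, 1, 2, 2, 2, 2])
cov   = np.array([3.0, 5.0, 4.0, 8.0, 1.0, 2.0, 6.0, 3.0])

# Two covariate columns: group-centred values for one group, zero padding
# for the subjects in the other group
cov1 = np.where(group == 1, cov - cov[group == 1].mean(), 0.0)
cov2 = np.where(group == 2, cov - cov[group == 2].mean(), 0.0)

# With columns (Group 1, Group 2, Cond 1, Cond 2, Cov 1, Cov 2):
# [0 0 0 0 1 0] tests the covariate effect in group 1 alone, and
# [0 0 0 0 1 -1] tests the group x covariate interaction.
```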
> I have two groups of subjects who, on prescreening, exhibited
> differential mood responses (one group positive scores, the other
> negative) to a drug. So with 2 groups (positive and negative
> responders), 2 conditions (drug and placebo), 1 scan/subject under each
> condition, and 1 covariate (mood score) collected/scan, how would I go
> about determining whether rCMRglu is affected by:
>
> 1) prescreening mood scores both within and across groups
These are the simple and main effects of mood (or group) and would be
best addressed with a second-level analysis using the subject-effect
parameter estimate images (i.e. averaging over conditions for each
subject).
> 2) post drug scan mood scores both within and across groups
> and also whether:
This is a main effect of 'post scan mood score' within the post drug
level of the condition effect. This is best analysed using a
condition-centered covariate ('post scan mood score') and testing for a
significant regression with 'drug'.
> 3) mean centering of the covariate is useful/necessary in these cases
Yes. For (1) no covariate is required.
> 4) mean should be computed within group or across all subjects
For (2), within condition. There are other centerings you could use to
look at different interactions, e.g. group-centered would allow you to
see if there was any interaction between prescreening and post scan
mood.
[Karl Friston 28 Jun 2000]
3 groups (2 patient, 1 control), 2 conditions (ABABAB)
> I have a data set involving 3 groups of subjects (2 different
> patient groups and 1 control group) doing a simple ABABAB style
> paradigm. I have made a con* image representing the difference
> between A and B for each subject. I can do a within group analysis
> by feeding the con images for one group into a 1-sample t-test and I
> can compare 2 groups using a 2-sample t-test. What I would like to
> do is compare all 3 groups. 3 pairwise comparisons would seem to be
> the best I can do in "basic fMRI models". I assume it is more
> elegant to model all 3 groups in a single analysis. But PET multi-
> group models seem to need one image PER CONDITION rather than one
> image per subject representing the difference between conditions. Is
> there a way round this without generating spm98-style "adjmean"
> images for each condition?
Try the "One-way Anova" option in the "Basic models". The default
F-contrast produced is the usual analysis-of-variance F-statistic for any
difference (in response, since you're looking at contrast images) between
the three groups.
You can conduct follow-up comparisons (comparing pairs of groups) using
simple contrasts (like [+1 -1 0] for example). However, you should adjust
the significance level at which you examine these follow-up comparisons to
take into account the number of comparisons: the simplest method is the
Bonferroni method, in which you multiply your p-values by the number of
planned follow-up comparisons. Note that although there are three ways to
compare pairs of three groups, this corresponds to six contrasts for
SPM{t}'s in SPM, since SPM's t-contrasts only effect one-sided tests.
Note that Anova assumes that the intra- and inter-subject variability
expressed in your contrast images is constant across the three groups. If
this assumption is met, you'll get slightly more powerful paired
comparisons from the Anova model than from simple pairwise comparisons,
because of the increased degrees of freedom available for variance
estimation. In essence, you're comparing two groups with a variance
estimator pooled across all three, even the one your contrast appears not
to be looking at.
Comparison of the Anova follow-up paired comparisons with the "2-sample
t-test" results will give some indication of whether the homoscedasticity
assumption is met.
> NB: the number of subjects is not the same in each group, if this is
> relevant.
That's OK, but note that for the two level contrast image approach to
random effects analyses to be valid, the experimental design for each
subject (regardless of group) should be such that the design matrix is the
same.
[Who / when ???]
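At a single voxel the logic of the omnibus F-test plus Bonferroni-corrected follow-ups might look like this (a Python/SciPy sketch with simulated contrast values; note that, unlike the Anova follow-ups described above, ttest_ind here does not pool variance across the third group):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical contrast values at one voxel; unequal group sizes are fine
g1 = rng.normal(1.2, 0.6, 10)   # patient group 1
g2 = rng.normal(0.8, 0.6, 12)   # patient group 2
g3 = rng.normal(0.2, 0.6, 11)   # controls

# Omnibus F-test for any difference between the three groups
F, p_omnibus = stats.f_oneway(g1, g2, g3)

# Follow-up pairwise comparisons, Bonferroni-corrected for 3 planned tests
pairs = [(g1, g2), (g1, g3), (g2, g3)]
p_bonf = [min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs))
          for a, b in pairs]
```

Comparing these two-group t-tests with the pooled-variance follow-ups from the Anova gives the homoscedasticity check mentioned above.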
Variable epoch lengths
> I am having a few problems setting up a model for a study with variable
> epoch lengths.
>
> I have a resting breathing condition (12 x 30 second periods) and a
> voluntary hyperventilation condition (12 x 30 second periods)
>
> In the first analysis I set up a model in which I specified two
> conditions (rest and voluntary), however I have since realised that I
> maybe should have treated the study as having only one condition - vol,
> and treated the resting periods as baseline.
It should not make any difference with boxcar regressors and one basis
function per condition.
> In the second model I have specified 1 condition (treating the rest as
> baseline), epoch - fixed response boxcar, convolved with hrf and
> temporal derivative.
> The problem arises when I specify the epoch lengths (some are 5 scans
> in length instead of 6 because of variability in breathing). After I
> enter vector of onsets, SPM asks 'variable duration'. At this point I
> enter yes, then I am prompted by SPM to enter 'duration (scans)'. If I
> enter the variable epoch lengths at this point, I am also asked at a
> later point in the model setup to enter 'Epoch length (scans) for
> trials'. However, SPM will only accept scalars and not a string of
> vectors (which specify the variable epoch lengths). I don't understand
> why it asks me twice to enter epoch length and will only accept scalars
> on the second prompt (when I set up the first analysis it accepted a
> string of vectors). Also, should I be specifying each condition in the
> design matrix or only the vol condition?
Variable-length epochs are dealt with using the event-related options.
Select event-related, not epoch-related, when choosing your basis set.
The simpler alternative would be to have two 'vol' conditions (one of 5
scans and one of 6 scans) and simply take the average using contrasts
later on.
[Karl Friston 17 July 2000]
HRF width
> I'm trying to model event-related activity that is due to two stimuli
> presented sequentially with an SOA of 1.5 seconds. In looking at the
> raw data, the hemodynamic response to such an event often has a wider
> peak than does the typical HRF for a single event. I'm wondering
> whether modelling the data using a dispersion derivative (i.e., hrf +
> time + dispersion derivatives) will enable SPM to fit a canonical hrf
> to the data that has the appropriate width. In other words, does the
> dispersion derivative allow the width of the canonical HRF to be
> adjusted, analogous to the way that the temporal derivative allows the
> onset of the canonical response to be adjusted?
Absolutely.
[Karl Friston 21 July 2000]
Realignment (a.k.a. Motion Correction)
> I realigned images [creating mean image only], and tried to run spm_sn3d,
> only to be told "not enough overlap". When I viewed the data in Display or
> Check Reg, I found that the origin was somewhere far northeast of the
> vertex. I then tried to set the coordinates using Hdr Edit, only to fail. I
> think the message below says why:
>
> At 06:25 AM 03/24/2000, you wrote:
> >Once an image has a .mat file, then SPM99 ignores the origin and
> >voxel size information in the headers. The best way of changing the
> >origin is via the Display button. It also allows you to reorient
>
> so, this means EITHER: a] set the origin in all images PRIOR to realign,
> using Hdr Edit [which can be applied to many images at once],
> OR b] once realigned, only Display can be used [one image at a time].
> True?
Display can be used to reorient many images at the same time. It is simply
a matter of selecting all the images that are in register with the one
currently being displayed after you click the "Reorient Images..." button.
The reorientation applies a relative transformation to the images, so if
the images have been realigned or coregistered, or have different voxel sizes
or whatever, they will still be in register with each other after reorienting.

> To use Hdr Edit, I would open an image in Display, and set my crosshairs to
> the vicinity of the AC. Then, I would enter those voxel coordinates [in mm]
> into Hdr_edit, and apply those values to the relevant images. Yes?
If you were to do it this way, then it should work (providing the images
have no .mat files).
[John Ashburner 21 July 2000]
> Does one always need to reslice while doing movement correction, or is
> coregistration alone enough? I'm asking because it appears that one can
> reslice later at the spatial normalization stage anyway.
Reslicing once only at the normalisation stage will work fine. Note that
the smoothness of the resultant data will be slightly less with one
reslice than with two.
[Geraint Rees 10 Jul 2000]
> Dear SPManagers: I am realigning multiple SPECT scans within
> subjects, and am not clear what this option offers me. I have looked
> through help files relating to realign, and am not enlightened.
Weighting of the reference image allows the realignment parameters
to be estimated from only a specific region of the reference image. For
example, there may be artifacts in the images that you do not wish
to influence the estimation of the realignment parameters. By giving
these regions zero weight in the realignment, they have no influence
on the estimated parameters. The weighting procedure uses an image
which can contain zeros and ones, or more properly, it can be thought of
as containing the reciprocals of the standard deviation at each voxel
(unlike weighting in the spatial normalisation, where the weight is
the reciprocal of the variance - I must fix this).
The function minimised in the realignment is something like:
\sum_i (wt_i * (g_i - s * f_i))^2
whereas for the spatial normalisation it is more like:
\sum_i wt_i * (g_i - s * f_i)^2
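The two objective functions can be written out as a small Python sketch (illustrative numbers only, not SPM code). It also makes the var-vs.-StdDev point concrete: squaring inside versus outside only differs by using wt versus wt^2, so the distinction vanishes when the weights are only zeros and ones.

```python
import numpy as np

def realign_cost(wt, g, f, s=1.0):
    # Realignment: weight inside the square, so wt acts like 1/std
    return np.sum((wt * (g - s * f)) ** 2)

def normalise_cost(wt, g, f, s=1.0):
    # Spatial normalisation: weight outside the square, so wt acts like 1/var
    return np.sum(wt * (g - s * f) ** 2)

wt = np.array([1.0, 0.5, 0.0])   # a zero weight removes a voxel entirely
g  = np.array([2.0, 3.0, 9.0])   # reference image values (invented)
f  = np.array([1.5, 2.0, 1.0])   # source image values (invented)
```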

> I do understand that the default for PET and SPECT is to make a mean
> image from a "first-pass" realignment, and then realign all images to
> that. Is this other option datatype-specific?
It does this whenever you use the PET or SPECT modality. It would also
do this for fMRI, but I figured it would slow things down too much if
it did two passes. Also, PET and SPECT images are noisier, so realigning
to a mean image improves the results more. Ideally, the procedure would
be repeated a few times, but again, this would be too slow.
> 1. is there a number which corresponds to hardcoding "Create mean image
> only" for spm_realign?
I'm afraid the sptl_CrtWht variable only accepts the two values.
>
> 2. what is the range of possible values for regularisation? what numbers
> correspond to "medium" and "heavy"?
Any positive value you like. Medium and heavy are given by 0.01 and 0.1
respectively. A value of zero does not regularise the nonlinear part
of the spatial normalisation.
[John Ashburner 19 July 2000]
> I have applied slice timing to my event-related series. For motion correction
> (spatial realignment) I would like to use non-SPM software. This software can
> read the aV*.img images but not the aV*.mat files.
>
> My question is: have the aV* images actually been written with the slice timing
> correction embedded? That is, if I use them outside SPM without their .mat
> files, will slice timing still be taken into account? If not, what should I do?
I think you can quite happily delete these .mat files as they probably
don't contain any additional information. Whenever any new images are
derived from existing ones, then the positional information is preserved
in the new set of images. This positional information is derived from the
.mat files if they exist, but otherwise from the voxelsize and origin
fields of the .hdr files. If the origin contains [0 0 0], then it defaults
to mean the centre of the image, which, depending on whether you have odd or
even image dimensions, is either an integer value or halfway between two voxels.
Because the origin field can only store integers, half voxel translations
can not be represented in the .hdr files, so this information is
written to .mat files.
[John Ashburner 25 July 2000]
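The voxel-size/origin fallback described above can be sketched as a small function (Python/NumPy; an approximation of the convention for illustration, not actual SPM code). It also shows why an even image dimension produces a half-voxel origin that the integer .hdr fields cannot store:

```python
import numpy as np

def hdr_affine(voxel_size, origin, dim):
    """Approximate voxel-to-world mapping implied by a .hdr alone (sketch).

    If the stored origin is [0 0 0] it defaults to the image centre,
    which for an even dimension falls halfway between two voxels.
    """
    vs = np.asarray(voxel_size, dtype=float)
    org = np.asarray(origin, dtype=float)
    if np.all(org == 0):
        org = (np.asarray(dim, dtype=float) + 1.0) / 2.0  # centre, 1-based
    M = np.eye(4)
    M[:3, :3] = np.diag(vs)       # scale voxel indices to mm
    M[:3, 3] = -vs * org          # put the origin at world coordinate zero
    return M

# 64x64x40 volume, 2 mm voxels, origin left at [0 0 0] in the header:
M = hdr_affine([2.0, 2.0, 2.0], [0, 0, 0], [64, 64, 40])
```

Here the default origin is (32.5, 32.5, 20.5) - not an integer - which is exactly the kind of half-voxel information that has to go into a .mat file.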
> I want to coregister a structural (anat.img) and 60
> functional (func.img) images. Shall I: 1. coregister
> anat.img with the <coregister> button and 2. coregister
> func.img with the <coregister> button, or first do 2. and
> then 1.? During the second coregistration I am asked for
> target and object images. What is best to choose for
> this?
Realign the functional images, possibly creating a mean at
the same time. Then coregister the anat.img to the mean
(or possibly one of the individual images from the series).
To do this, you would have target=mean_func.img, object=anat.img
and other=none.
[John Ashburner, 13 July 2000]
> In "realign", what does "adjust sampling errors" mean, and when must I
> use this option?
It removes a tiny amount of interpolation error from the data. In reality
it should not do much to the data, but it can increase your t statistics
slightly. It does slow things down a lot though. Full documentation of
what it does can be obtained via the <Help> facility in SPM99.
[John Ashburner 2 Aug 2000]
When you coregister a pair of images, and want to carry an additional image
along, then you can specify the additional image as other. In the example
below, there were two ways of doing the coregistration:
1) Target: Image 1
Object: Image 2
Other: Image 3
Changes the .mat file of image 2 so that it matches image 1, and applies
the same change to image 3.
2) Target: Image 2
Object: Image 1
Other: none
Updates the mat file of image 1 so that it matches image 2. Image
2 is already in register with image 1.
Because the voxels of image 2 are aligned with those of image 3, then all
you need to do is ensure that image 3 has the same .mat file as image 2.
In this case, you can do this by simply copying the .mat file of image 2
to that of image 3. Use the <Check Reg> button to make sure everything
is OK.
[John Ashburner 3 Aug 2000]
If your objective is ultimately to superimpose your functional activations on to
your high resolution structural image, then the route I would take would be:
1) Without spatial normalisation:
Coregister the structural image to the mean functional.
Do the stats on the realigned functional images.
Simply display the activations on the structural image (not the resliced
version).
This works because the .mat file that is written for the structural
image encodes the relative positioning of the structural image relative
to the mean functional image. You can check that this has worked with
the <Check Reg> button, and selecting the unresliced high res
structural and the mean (or any of the individual functional images).
2) With spatial normalisation:
Coregister the structural image to the mean functional.
Estimate spatial normalisation parameters either from the mean functional
or the structural image.
Apply the spatial normalisation parameters to the functional images
to get spatially
normalised functional images with whatever resolution you like.
Hit the <Defaults> button, select spatial normalisation, then the option for
changing the defaults on how the images are written, and specify
something like 1x1x1mm resolution.
Write the spatially normalised structural image using the new default
voxel sizes
(selecting the original structural image, rather than the resliced one).
The .mat files created by the coregistration (or realignment for that
matter) are incorporated into the affine part of the spatial
normalisation procedure. Processing the structural image in this way
means that it is not resampled down to the resolution of the functional
images at any stage.
[John Ashburner 4 Jul 2000]
> 1. in the case of PET/SPECT, the "reference image" is the mean image
> calculated on the first pass
For PET/SPECT realignment, the reference image is the first of the series
during the first pass, and the mean of the realigned series for the
second pass.
> 2. the default setting is NOT to weight this image
This is true.
> 3. one cannot choose another image, since this is data-derived, reflecting
> variance (StdDev, really) at each voxel
When weighting the realignment, the user normally specifies their own
image. The weighting is not actually derived from the residual variance,
although I did think about incorporating this in the realingment model.

 in my case, I am aligning 2 or 3 images per subject, so I conclude this
 option would not help me, as the voxel variances would not be a useful
 index of anything. Do you agree?
I don't think you could obtain a useful variance image from 2 or 3 images,
although I guess that some optimal variance smoothing could be used in
principle.

 >on the estimated parameters. The weighting procedure uses an image
 >which can contain zeros and ones, or more properly, it can be thought of
 >as containing the reciprocals of the standard deviation at each voxel
 >(unlike weighting in the spatial normalisation where the weight is
 >the reciprocal of the variance - I must fix this).

 when you say "fix this", are you referring to Realign or Spatial? fix to
 which, var or SD? why?
I was referring to making the weighting of the spatial normalisation
consistent with the weighting for the realignment. Currently one uses
images that would be proportional to 1/variance whereas the other uses
weighting images proportional to 1/sqrt(variance). However, normally
the weights are either zero or one, in which case the distinction does
not make any difference.
[John Ashburner 21 July 2000]
 is it a requirement that all data from multiple subjects have the same voxel
 sizes when running spm_sn3d?
They don't all need to have the same voxel sizes.
[John Ashburner 26 July 2000]
> Is it possible to retrieve SPM nonlinear-transformation information
> computed after spatial normalization with the T1 template in order to
> apply it to another volume in the same coordinate system ?
>
The record of all the spatial transforms (linear and nonlinear) is
kept in the *sn3d.mat file produced by the normalization process
carried out on your object image. This can be applied to any image that
starts off in the same space as the original object image (normalize >
write normalized only > subjects (1, presumably) > select the *sn3d.mat
file from the spm_get window > select image to be transformed).
If you just want the nonlinear transforms only, that is possible using
commands in the matlab window, but you would have to ask someone else
who knows a bit more about this.
[Alex Leff 27 Jun 2000]
 with 128x128/30 slices and voxel size 1.95x1.95x4.5, is
 it worth doing sinc normalisation or is bilinear quite
 enough?
Sinc interpolation is generally recommended for interpolation
when doing movement correction. If you reslice the images at
the realignment step, then I don't think you gain much by
using sinc interpolation at the normalisation stage. However,
if you just estimate the movement parameters at the realignment
stage, then these movements are incorporated into the spatial
transformations at the normalisation stage, so it is probably
better to use sinc interpolation.
[John Ashburner 17 July 2000]
If it's functional data I would have thought bilinear is 'quite enough'.
[Karl Friston 17 July 2000]
 Similarly, I noticed that the defaults non linear basis function parameters
 are 7x8x7. It was set at 4x5x4 in SPM 96. I've read in the spm archives that
 7x8x7 is suitable for T1 MRI images but not for PET images. Shall I go back
 to the 4x5x4 value ?
More basis functions generally works better than having fewer when the images
can be easily matched to the template. It is sometimes better to use fewer
basis functions if the brain images contain lesions, or if the image contrast
differs slightly from that of the templates. Alternatively, the amount of
regularisation can be varied in order to modify the amount of allowable warping.
The main reason for the extra basis functions used in SPM99, is that spatial
normalisation in SPM99 tends to be much more stable than the version in SPM96.
[John Ashburner, 12 July 2000]
 Can you tell me please the meaning of X, Y, Z, when I use the mutual
 information registration method?
 I have used a PET cardiac images and I want to compare the angles that
 I've imposed with the results of SPM.
The matrix multiplication displayed after running mutual information
coregistration, shows a mapping from voxels in the stationary image,
to those in the image that was rotated and translated. This mapping
is derived from a series of rotations and translations (and zooms where
voxel sizes differ between the images). The X, Y and Z refer to voxel
coordinates in the stationary image, whereas X1, Y1 and Z1 refer to
coordinates in the other one. The matrix can be decomposed into a
series of translations, rotations, zooms and shears using the spm_imatrix
function. For example, if the display says:
X1 = 0.9998*X - 0.0175*Y + 0*Z - 10.0000
Y1 = 0.0174*X + 0.9997*Y + 0.0175*Z + 0
Z1 = -0.0003*X - 0.0174*Y + 0.9998*Z + 10.0000
Then typing:
M = [0.9998 -0.0175 0 -10.0000
0.0174 0.9997 0.0175 0
-0.0003 -0.0174 0.9998 10.0000
0 0 0 1.0000];
spm_imatrix(M)
should produce:
ans =
Columns 1 through 7
-10.0000 0 10.0000 0.0175 0 -0.0175 1.0000
Columns 8 through 12
1.0000 1.0000 0.0000 0 0
which describes translations in the x and z directions of -10 and 10 voxels
and rotations about the x and z axes (pitch and yaw) of 1 and -1 degrees
(0.0175 and -0.0175 radians). The parameters are probably better explained
if you type:
help spm_matrix
[John Ashburner 25 July 2000]
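John's worked example can be checked by hand. The sketch below (plain Python, not SPM code; `decompose_rigid` is a hypothetical helper) decomposes the same 4x4 matrix, with the signs as reconstructed above, assuming SPM's R = Rx*Ry*Rz Euler convention and no zooms or shears; spm_imatrix handles the general case.

```python
import math

def decompose_rigid(M):
    """Pull translations and rotation angles out of a 4x4 rigid-body
    transform, assuming no zooms or shears and an R = Rx*Ry*Rz
    rotation convention (a toy version of what spm_imatrix does)."""
    tx, ty, tz = M[0][3], M[1][3], M[2][3]   # last column = translations
    # With R = Rx*Ry*Rz:  R[0][2] = sin(ry),
    # R[0][0] = cos(ry)cos(rz), R[0][1] = cos(ry)sin(rz),
    # R[1][2] = sin(rx)cos(ry), R[2][2] = cos(rx)cos(ry).
    ry = math.asin(max(-1.0, min(1.0, M[0][2])))
    rz = math.atan2(M[0][1], M[0][0])
    rx = math.atan2(M[1][2], M[2][2])
    return tx, ty, tz, rx, ry, rz

# The matrix from the coregistration example above:
M = [[ 0.9998, -0.0175, 0.0,    -10.0],
     [ 0.0174,  0.9997, 0.0175,   0.0],
     [-0.0003, -0.0174, 0.9998,  10.0],
     [ 0.0,     0.0,    0.0,       1.0]]

tx, ty, tz, rx, ry, rz = decompose_rigid(M)
# Roughly: translations of -10 and +10 voxels in x and z, and
# pitch/yaw of about +0.0175 and -0.0175 radians (about +/-1 degree).
```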
if I do normalisation on anatomical and functional
images, shall I coregister both modalities first or not?
If you want to use the anatomical image to overlay the functional
activations, then one way would be to coregister the T1-weighted
anatomical to the T2*-weighted EPI; normalise the EPI to the EPI
template; then normalise the T1 image using the same parameters.
If you are normalising the T1-weighted and T2*-weighted images separately,
to separate templates, then prior coregistration is irrelevant as the
normalisation parameters will be determined independently. But this
strategy might not be best, unless you have a specific reason.
[Geraint Rees 15 July 2000]
 I am trying to normalize the contrast images of individual subjects
 to run a random effect analysis. I read some of the messages on the
 list about this and the biggest concern it seems to be that with the
 normalization procedure one could lose or get strange results on the
 outer rim of brain, because of the interpolation of NaN. I tried both
 bilinear and nearest neighbour interpolation and, just eyeballing,
 couldn't notice any strange border patterns coming up in both cases.
 Would you think anyway that with nearest neighbour interpolation
 this border effect is negligible (how does NN interp. treat NaN?)?
The edge effects involve possibly losing some voxels. For trilinear
interpolation, if any of the 8 closest voxels are NaN, then the
interpolated voxel is set to NaN. For nearest neighbour interpolation,
if the nearest voxel is NaN, then the voxel is output as NaN.
[John Ashburner 2 Aug 2000]
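The NaN edge-loss behaviour John describes is easy to see in one dimension. A toy sketch (hypothetical helper functions, not SPM's actual resampling code):

```python
import math

def interp_linear_1d(data, x):
    """Linear interpolation at position x: if either of the two
    neighbouring voxels is NaN, the result is NaN (the 1-D analogue
    of the trilinear edge loss described above)."""
    i = int(math.floor(x))
    f = x - i
    a, b = data[i], data[i + 1]
    if math.isnan(a) or math.isnan(b):
        return float('nan')
    return (1 - f) * a + f * b

def interp_nearest_1d(data, x):
    """Nearest-neighbour interpolation: NaN only if the single
    nearest voxel is itself NaN."""
    return data[int(round(x))]

row = [float('nan'), 10.0, 20.0, 30.0]   # NaN just outside the "brain"
# Sampling at 0.9 sits between the NaN and 10.0: linear interpolation
# loses the voxel, nearest-neighbour keeps it.
print(interp_linear_1d(row, 0.9))    # nan
print(interp_nearest_1d(row, 0.9))   # 10.0
```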
> Was wondering if there are any new methods for converting
> "Talairach" coordinates obtained using spm96 spatial normalization with MNI
> single brain template to the canonical Talairach's atlas coordinates. I used
> the transform from this template to the template from spm95 (PET) that
> Andreas posted but it is not perfect since the template is not matched to
> Talairach. Since the Talairach Daemon contains a scanned version of the
> atlas has anybody warped this to the MNI template?
The page:
http://www.mrc-cbu.cam.ac.uk/Imaging/mnispace.html
has a routine on it which does a reasonable job of MNI to Talairach atlas
conversion.
[Matthew Brett 16 Feb 2000]
> I notice that the documentation describes the template images supplied with
> SPM as, approximate to the space described in the atlas of Talairach and
> Tournoux (1988). I was just wondering how 'approximate' they are and
> whether or not the coordinates provided by SPM should be reported as being
> 'Talairach'. For example, we are interested in the fusiform gyrus and it
> seems to me that the coordinates provided by SPM are about 2-4mm inferior
> compared to the Talairach atlas.
Matthew Brett has posted a helpful discussion of MNI/Talairach differences
at http://www.mrc-cbu.cam.ac.uk/Imaging/mnispace.html that I think will
answer your queries. The bottom line in terms of reporting is to reference
activation loci to the surface anatomy of that individual subject if in
doubt.
[Geraint Rees 15 Aug 2000]
> Wouldn't it be a good idea to move the smoothing procedure
> from the "spatial preprocessing" to a step just before the result
> section. The idea is, that since smoothing (convolution with a
> kernel) and beta estimation (projection) are both linear operations
> they commute, and thus the order in which they are applied does not
> matter. So why not smooth a few beta images and the ResMS.img instead
> of hundreds of raw images.
>
> I forgot to say that switching the order of smoothing and estimation is
> a problem in first level analyses because the RPV.img is calculated in
> the "estimate" section and not in the "results" section. So how about
> moving the two steps: smoothing and resel per voxel estimation, to the
> results section?
You are absolutely right about the commutative nature of the estimation
and convolution operators
y*K = X*B*K + e*K ....smooth before
B*K = pinv(X)*y*K
y = X*B + e
B = pinv(X)*y
B*K = pinv(X)*y*K ....smooth after
However nonlinearities enter with error variance estimation. One would
have to smooth the residual images and then compute the sum of
squares. This part is not commutative and the number of residual
images equals the number of raw images.
i.e. diag(K'*e'*e*K) ~= K'*diag(e'*e)
Taking the sum of squared smoothed residuals is not the same as
smoothing the sum of squared residuals.
The reason one can smooth after a 1st level analysis is because the sum
of squared residuals do not enter into second level.
[Karl Friston]
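Karl's algebra can be checked numerically. A pure-Python toy (not SPM code; made-up numbers) in which smoothing across voxels with a matrix K commutes with beta estimation but not with forming the residual sum of squares:

```python
def matmul(A, B):
    # Plain nested-list matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Design with orthogonal columns so pinv(X) is simply X'/4.
X = [[1, 1], [1, -1], [1, 1], [1, -1]]
pinvX = [[v / 4.0 for v in row] for row in transpose(X)]

# Toy data: 4 scans x 3 voxels.
Y = [[1.0, 2.0, 0.0],
     [0.5, 1.0, 3.0],
     [2.0, 0.0, 1.0],
     [1.5, 2.5, 0.5]]

# "Smoothing" matrix mixing neighbouring voxels.
K = [[0.5, 0.25, 0.0],
     [0.5, 0.5,  0.5],
     [0.0, 0.25, 0.5]]

B_then_K = matmul(matmul(pinvX, Y), K)   # estimate betas, then smooth
K_then_B = matmul(pinvX, matmul(Y, K))   # smooth data, then estimate
# These two agree exactly: the linear operators commute.

# Residuals e = Y - X*B, then the two orders for the sum of squares:
E = [[y - xb for y, xb in zip(ry, rxb)]
     for ry, rxb in zip(Y, matmul(X, matmul(pinvX, Y)))]
EK = matmul(E, K)
ss_smooth_then_square = [sum(e[v] ** 2 for e in EK) for v in range(3)]
ss_square_then_smooth = [r[0] for r in matmul(
    transpose(K), [[sum(e[v] ** 2 for e in E)] for v in range(3)])]
# diag(K'*E'*E*K) != K'*diag(E'*E): these two lists differ.
```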
One caveat which you may know if you've followed the list, is that the
con*.img and beta*.img images have the values outside the brain set
to NaN. Thus any smoothing operation at this stage will lose the outer layer
of voxels. Methods of dealing with that have been discussed previously by
John and Russell Poldrack I believe.
[Darren Gitelman]
> I am conducting a research on the response of PTSD and non PTSD patient
> to the repetition of their traumatic event. We have 2 group one of
> PTSD patients and one of non PTSD patients. For each patient we have 3
> baselines and 4 repetition of the traumatic story.
>
> We want to check the difference in the reaction to the traumatic script
> between the PTSD and control group.
>
> We use three type of covariates
>
> constant response - 0 0 0 1 1 1 1
> rise during the repetition - 0 0 0 1 2 3 4
> decrease during the repetition - 0 0 0 3 2 1 0
>
> These covariates are not orthogonal. If I want to check in what region
> of the brain the response correspond to each of the covariate how
> should I define my contrasts?
The repetition-dependent increases and decreases are modelling the same
effect and you only need to specify one regressor:
main effect of traumatic script - -1 -1 -1 1 1 1 1
script x [linear] time interaction - 0 0 0 -3 -1 1 3
Note that these regressors are orthogonal (and orthogonal to the
constant term) and can be tested with contrasts [1 0], [-1 0], [0 1]
and [0 -1]. The last two give you increases and decreases
respectively.
[Karl Friston 13 Mar 2000]
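A quick check of the regressor pair above (values as reconstructed here, assuming the script regressor is -1 for the three baselines and +1 for the four repetitions, and the interaction is the mean-centred linear trend):

```python
# Script main effect and script x time interaction, as in Karl's reply.
main_effect = [-1, -1, -1, 1, 1, 1, 1]
time_interaction = [0, 0, 0, -3, -1, 1, 3]

dot = sum(a * b for a, b in zip(main_effect, time_interaction))
print(dot)                    # 0: the two regressors are orthogonal
print(sum(time_interaction))  # 0: the interaction is orthogonal to the constant
```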
> Alternatively would one use 2 covariates ? Each covariate consisting of
> the mean centered values for a single group with 0 padding for the
> subjects in the other group, essentially what you get when you specify a
> group x covariate interaction. The effect of the covariate for group 1
> alone would be tested with the contrast 0 0 0 0 1 0 (Group 1, Group 2,
> Condition 1, Condition 2, Covariate 1, Covariate 2) and the interaction
> across groups tested with the contrast 0 0 0 0 1 -1 ?
>
> Would the contrast 0 0 0 0 1 1 in the multigroup model be a
> test of the effect of the covariate collapsed across all subjects
> disregarding group membership ?
Yes indeed - it would be the main effect of the covariate.
[Karl Friston 13 Mar 2000]
> Our data is as follows: 11 subjects and 8 scans per subject, which
> makes 88 scans. At each scan, each subject gave a subjective rating
> (0-10).
> We are trying to do a correlation analysis between subjective ratings
> and actual voxel values. The problem is that our data is not
> independent i.e. does not follow the assumption for correlation
> (regression analysis).
> Is it possible to analyze this kind of data using correlation
> analysis? Anyway, we tried the following model:
> Multisubject : Covariates only, Covariate interaction with
> subject. Is this correct?
Yes - this models the main effect of the covariate and the covariate x
subject interactions (i.e. computes a regression coefficient for each
subject). You can test for the significance of the average regression
(or indeed differences among the subjects) with the appropriate
contrast.
[Karl Friston 11 Jul 2000]
> I have a few more questions concerning the multisubject correlation
> analysis.
> Does "Covariates only: interaction with subject" model take into account
> the dependence within subject?
I am not sure what 'dependence within subject' means. This design fits
a different regression slope for each subject and assumes the error
within subject is independently and identically distributed.
[Karl Friston 12 Jul 2000]
> Given 2 conditions, 1 scan/condition, 1 covariate obtained at each scan,
> meancentered covariate with proportional global scaling. A condition &
> covariate design with a contrast 0 0 1 is equivalent to correlation
> between the change in covariate and the change in the scans.
Indeed, or more precisely, the partial correlation between the covariate
and scan-by-scan changes having accounted for the condition-specific
activations.
> Can this approach be generalized to a multigroup design ? If there are 2
> groups and 2 conditions, with 1 scan for each subject under each
> condition, and a single covariate collected during each scan, then would
> one specify 1 or 2 covariates.
I would specify two, each with a group-centered covariate. This would
allow you to look for differences in the partial correlation with the
contrast [0 0 0 0 1 -1], i.e. models group x covariate interactions.
> If only one covariate is specified, would one test for a covariate effect
> in group 1 alone with the contrast 1 0 0 0 1 (Group 1, Group 2, Condition
> 1, Condition 2, Covariate), and the group x covariate interaction with the
> contrast 1 -1 0 0 1 ?
No. The interaction is not modelled with only one covariate. This contrast
is simply the main effect of group plus the main effect of the covariate.
> Alternatively would one use 2 covariates ? Each covariate consisting of
> the mean centered values for a single group with 0 padding for the
> subjects in the other group, essentially what you get when you specify a
> group x covariate interaction. The effect of the covariate for group 1
> alone would be tested with the contrast 0 0 0 0 1 0 (Group 1, Group 2,
> Condition 1, Condition 2, Covariate 1, Covariate 2) and the interaction
> across groups tested with the contrast 0 0 0 0 1 -1 ?
Exactly.
[Karl Friston 28 June 2000]
> I have a very similar question to that posted by Steven Grant on June
> 9th. I have two groups of subjects who, on prescreening, exhibited
> differential mood responses (one group positive scores, the other
> negative) to a drug. So with 2 groups (positive and negative
> responders), 2 conditions (drug and placebo), 1 scan/subject under each
> condition, and 1 covariate (mood score) collected/scan, how would I go
> about determining whether rCMglu is affected by:
>
> 1) prescreening mood scores both within and across groups
These are the simple and main effects of mood (or group) and would be
best addressed with a second-level analysis using the subject-effect
parameter estimate images (i.e. averaging over conditions for each
subject).
> 2) post drug scan mood scores both within and across groups
> and also whether:
This is a main effect of 'post scan mood score' within the post drug
level of the condition effect. This is best analysed using a
condition-centered covariate ('post scan mood score') and testing for a
significant regression with 'drug'.
> 3) mean centering of the covariate is useful/necessary in these cases
Yes. For (1) there is no covariate required.
> 4) mean should be computed within group or across all subjects
For (2) within condition. There are other centerings you could use to
look at different interactions. e.g. group-centered would allow you to
see if there was any interaction between prescreening and post scan
mood.
[Karl Friston 28 June 2000]
> When one performs a multisession fMRI analysis with SPM, in either its
> SPM96 or SPM99 incarnation, my guess is that SPM removes intersession
> effects, perhaps behind the scenes. If this is so, if each session were
> a different subject, and if one tried to use the subjects' age (rounded
> to the nearest year) as a confound in the analysis, would this result in
> a problem of linear dependence in the columns of the design matrix?
>
Yes and no. It would result in a linear dependence in the design matrix
(there will be a linear combination of "session-regressors" that equals
your "age-regressor"). It will not be a problem as long as all you want to
do is to use it as a confound (since SPM uses the generalised inverse), but
then again it would not do you any good either since the space of the
confounds is identical in both cases. You cannot use it as a covariate of
interest since it is exactly in your confound space.
In short, the confounds already modelled remove any age effects, and more.
[Jesper Andersson]
> Dear Jesper: May be I am misunderstanding something here, but I am still
> slightly puzzled at the apparent impossibility to enter age or IQ as
> covariates in an SPM fMRI analysis.
> I assume it would be possible to trick the analysis into doing it by
> treating all data from one group as a single session and entering the age
> covariate for the data from each respective subject. But then
> between-session variance would probably smother any task-related effects.
You are right on the first count, one can enter data as if they were from a
single session and "trick" SPM into looking at age effects.
I very strongly suspect you are right on the second count as well,
intersession variance originating from other sources of variance would
dominate over any "true" age effects. The question is related to one by Kris
Boksman a few days ago.
>
> Would all this mean that I can only look for age or IQ effects post hoc,
> e.g. looking at Z scores or mean signal change within ROIs etc.?
I don't really think it is meaningful in any way to look at main effect of age
using T2* weighted data. You could look at main effect of age using a
morphological technique (e.g. using T1 weighted and Voxel based morphometry) or
you could use a quantitative technique for measuring perfusion (i.e. PET or
perfusion MRI).
With T2* data you may look at task-by-age interactions, i.e. how age affects
the response to a given stimulus. To do this you would generate the appropriate
contrast for each subject (say you are interested in how the difference between
conditions 1 and 3 changes with age, then you would enter the contrast [1 0 -1
...] for each subject). In "Basic models" pick "simple regression (correlation)"
and enter the resulting con*.img images as input images and enter the age of
each subject as your covariate. This will constitute a random effects model and
will allow you to make proper population inferences.
From your suggestion above (i.e. to look at z-scores post hoc) I suspect it may
really have been a condition-by-age interaction you were after in the first
place. Note though that by using z-scores (rather than the linear combination
of parameter estimates offered by the con*.img images) you would be assessing
reliability of activation across subjects rather than magnitude. That is
slightly different and more akin to a meta analysis.
[Jesper Andersson, 12 July 2000]
> I'm analysing some PET data, and am interested in looking at regions where
> activity correlates with RT, and was wondering what was the best way to
> proceed.
> We have 6 subjects, 12 scans per subject
> When I choose the multisubject covariates only design, there appear to be
> two ways to proceed. One is to not have covariate * subject interactions,
> to mean centre the RTs, and then to make the following contrasts: 1 to look
> at regions where activity correlates with increasing RT and -1 the converse.
> The second is to select covariate by subject interactions, and then to have
> the following contrasts: 1 1 1 1 1 1 and -1 -1 -1 -1 -1 -1.
> Could someone help explain what the differences are between these two
> analyses?
If you don't model the subject by covariate interaction, you assume that
there is a common slope of the covariate over all subjects. This saves
you some degrees of freedom, but you are assuming that the slope of the
covariate is the same for each subject (if there is some component in
the observations which can be explained by your covariate).
If you do model the subject by covariate interaction, you don't make this
assumption of the same slope for each subject, but allow for fitting a
different slope for each subject. Note that this model can also be used
to generate subject-specific contrast images, which you can use as input
to a second level analysis.
[Stefan Kiebel 3 Aug 2000]
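Stefan's two designs can be sketched as follows (plain Python with hypothetical helper names, not SPM code; 3 subjects x 4 scans, covariates mean-centred, values made up):

```python
# Toy covariate values (e.g. reaction times), one row per subject.
cov = [[0.1, 0.3, 0.2, 0.4],
       [1.0, 0.8, 0.9, 0.7],
       [0.5, 0.5, 0.6, 0.4]]
n_subj = len(cov)

def common_slope_design(cov):
    """Subject block effects plus ONE shared, mean-centred covariate
    column: a single common slope across all subjects."""
    flat = [c for row in cov for c in row]
    mean = sum(flat) / len(flat)
    rows = []
    for s in range(n_subj):
        for c in cov[s]:
            subj_cols = [1 if s == j else 0 for j in range(n_subj)]
            rows.append(subj_cols + [c - mean])
    return rows

def per_subject_slope_design(cov):
    """Covariate x subject interaction: one covariate column per
    subject (mean-centred within subject, zero-padded elsewhere),
    so each subject gets its own fitted slope."""
    rows = []
    for s in range(n_subj):
        mean = sum(cov[s]) / len(cov[s])
        for c in cov[s]:
            subj_cols = [1 if s == j else 0 for j in range(n_subj)]
            cov_cols = [c - mean if s == j else 0 for j in range(n_subj)]
            rows.append(subj_cols + cov_cols)
    return rows

X_common = common_slope_design(cov)        # 12 rows x 4 columns
X_interact = per_subject_slope_design(cov) # 12 rows x 6 columns
```

In the interaction design, the average slope is tested with a contrast spanning all three covariate columns (e.g. [0 0 0 1 1 1]) and differences between subjects' slopes with contrasts such as [0 0 0 1 -1 0].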
Contrasts: F-contrasts explained
> Thank for you help. I have done what you suggested, and now, I'm stuck at the
> contrast stage.
>
> > and use F-contrasts to test for mean effect and differences in the
> > epoch-related responses.
>
>
> I have four conditions and the design matrix now has 8 columns, 2 for each
> condition. I'm not used to looking at F maps. Usually, if I was comparing
> t-maps, I would enter a contrast 1 -1 0 0 if I wanted to compare condition A
> with B. Do I now enter:
> 1 1 -1 -1 0 0 0 0
> or do I look at the individual activations
> 1 1 0 0 0 0 0 0 and
> 0 0 1 1 0 0 0 0
> and then use masking to look at commonalities/differences?
First use the F-contrasts computed by default in the results section
(contrast manager, under F-contrasts). These will give you the
condition-specific F-contrasts. For example the responses due to condition 1
should look like:
1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
To look for the differences between condition 1 and 2 use something like:
1 0 -1 0 0 0 0 0
0 1 0 -1 0 0 0 0
If you have used 'mean and decay' then you can also try the
conventional T-contrasts looking for differential responses in terms of
mean or decay. For the 'mean' it is simply:
1 0 -1 0 0 0 0 0
An F-contrast can be thought of as a collection of T-contrasts that you
wish to test jointly.
[Karl Friston 28 Jun 2000]
>I'm still struggling a little with understanding how contrasts need to be
>specified in spm99... The timecourse does not always look like I expect it to!
>I have the following blocked fmri design:
>control task control task control task
>If I set it up as one condition and use the contrast [1] then I see areas
>that are activated with respect to the control. The contrast [-1] reveals
>areas that are deactivated with respect to control.
>However, I'm interested in the task vs. control (activation) x time
>interaction. So I specified the design as 3 conditions. I thought that I
>could mask [1 1 1] (task activated wrt control) with [1 2 3] (increasing
>trend over time) or [3 2 1] (decreasing trend over time). However, the [1
>1 1] contrast appears to show regions that are both activated and
>deactivated for task wrt control. Can anyone explain this?
If I understand correctly, you are primarily interested in
time-by-condition interactions, where you have a control and a task
condition that alternate over the time series.
I would set up the design matrix in the following way.
Your design matrix should have four orthogonal covariates:
i) Control condition (control condition not modulated over time)
ii) Task condition (task condition not modulated over time)
iii) Control condition x time (control modulated by time)
iv) Task condition x time (task modulated by time)
(iii) and (iv) are simply (i) and (ii) multiplied by a trend (linear or
nonlinear; see below). SPM99 implements this in the following way. If I
remember correctly, after you have entered the details of your variables,
it will ask about parametric specifications. The choices offered are
"none", "time" or "other". Your choice should be "time". The next choice
will relate to the nature of the expansion ("linear", "polynomial" or
"exponential"). I would choose "linear"as a first pass. You will then be
asked which trial types to apply the expansion to. Select both your trial
types.
If you choose to modulate both your conditions, there will be four
covariates of interest in your design matrix (first four columns). Your
contrasts will be:
1. Task > control (not over time) -1 1 0 0
2. Control > task (not over time) 1 -1 0 0
3. Task > control (interaction with time) 0 0 -1 1
4. Control > task (interaction with time) 0 0 1 -1
This provides a flexible design matrix in which you may wish to compare
(iv) either with (iii) (control and task both change over time).
Alternatively, if there is good reason to believe that the control
condition itself remains unchanging, the time-by-condition interaction may
be found by 1 0 -1 0 and -1 0 1 0 (compare (i) with (iii)).
[Narender Ramnani 14 Aug 2000]
>>>However, I'm interested in the task vs. control (activation) x time
>>>interaction. So I specified the design as 3 conditions.
>>
>actually what I specified was block one (first presentation of
>control, task) as condition 1, block two (second presentation of the
>same control, task) as condition 2, and block three (third
>presentation) as condition 3.
Thanks. Sorry not to have worked this out first time around. Now I
think I see the idea behind your contrasts. The contrast 1 2 3
was supposed to pick out voxels which show an increase in
task-specific activity over time. In fact, this contrast asks a
completely different question (not one that you are interested in),
i.e. is the sum of the parameter estimate for the first column, plus
twice the parameter estimate for the second column, plus three times
the parameter estimate for the third column, significantly different
from zero. A voxel which shows, for example, exactly the same
task-related activity in all three blocks (i.e. no condition by time
interaction) will show up in this contrast.
The contrast 1 0 -1, however, is beginning to get towards what you
are after. This will pick out voxels which show significantly more
task-related activity in block 1 than in block 3, suggesting a
time-dependent decrease. Similarly, contrast -1 0 1 will show voxels
which show significantly more task-related activity in condition 3
than condition 1, suggesting a time-dependent increase.
>However, if I specify it as one condition, use the parametric
>modulation with time, and then use the contrast
>0 1 0, will this tell me the task x time interaction across all of
>the scans? (ie rather than blockwise as I tried to specify it
>above)? does this tell me the interaction between areas activated
>(task wrt control) that increase with time?
Yes, it would tell you the task x time interaction across all of the
scans. This new model is more tightly constrained, in that you have
to specify the shape of the parametric modulation with time. I would
guess that you are happy with a linear model, but SPM also offers you
an exponential model and one other model too (I can't quite remember
what).
If a voxel shows a net increase in task-related activity over time,
then in the new model it should show up with the contrast 0 1 0. The
voxels which will be best fitted by the model are those in which the
increase over time is linear (e.g. one in which the task-related
activity is 1 in the first block, 2 in the second block and 3 in the
third block). However, even a voxel in which the relationship with
time is more complicated will be fitted to some extent, provided
there is an increase overall during the experiment (i.e. there is a
linear component to the relationship).
>and 0 -1 0 would tell me areas of activation that decrease with
>time? these are the two contrasts I'm interested in.
You have to be careful with use of language here. 0 -1 0 does not
tell you 'areas of activation that decrease with time', in the sense
that it would also include 'areas of deactivation which increase with
time'. The contrast shows the voxels whose task-related BOLD becomes
more negative during the experiment. The question of whether, over
the whole experiment, there is a net positive taskrelated signal is
an orthogonal question.
If by 'areas of activation that decrease with time' you mean voxels
which satisfy the following 2 criteria ...
1. the signal during all of the task epochs is greater than the
signal during all of the control epochs, taking the whole experiment
together (to give you 'areas of activation...'); and
2. the task-related signal becomes more negative during the course of
the experiment (to give you '...that decrease with time')
...then you should probably mask the contrast 0 -1 0 with the contrast 1 0 0.
Similarly, if you wanted to find 'areas of activation that increase
with time' then you might mask the contrast 0 1 0 with the contrast
1 0 0.
[Richard Perry 14 Aug 2000]
>Dear Narender, Richard
>aha yes!! I was using F contrasts rather than t. Suddenly these
>contrast maps look much more like what I expected them to... How do
>the F and t contrasts differ? (I can also look this one up on the
>archives..)
There's a whole lot of stuff about F contrasts and how to use them (only
some of which I understand). But briefly, in a situation like yours
the F contrast is the t contrast squared. Hence it doesn't matter
whether you use +1 or -1, the result is the same.
In general, F contrasts test whether the covariates that you specify
contribute significantly to modelling the variance, regardless of the
sign of the parameter estimate. Your contrast 0 1 (or 0 -1) was
pulling out all of the voxels within which the second covariate
explained a significant amount of the variance in the data.
[Richard Perry 15 Aug 2000]
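Richard's point that a one-row F-contrast is just the squared t-contrast can be verified on toy numbers (plain Python, made-up data, not SPM code):

```python
# Fit y = a + b*x by least squares and compare the t statistic for b
# with the F statistic from the extra sum of squares.
y = [1.2, 1.9, 3.1, 3.9, 5.2]
x = [1.0, 2.0, 3.0, 4.0, 5.0]
n = len(y)

mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
a = my - b * mx

res = [yi - (a + b * xi) for xi, yi in zip(x, y)]
df = n - 2
s2 = sum(r * r for r in res) / df     # error variance estimate
t = b / (s2 / sxx) ** 0.5             # t statistic for contrast [0 1]

# F via extra sum of squares: full model vs. intercept-only model.
rss_full = sum(r * r for r in res)
rss_reduced = sum((yi - my) ** 2 for yi in y)
F = (rss_reduced - rss_full) / (rss_full / df)

print(abs(F - t * t) < 1e-6)   # True: F equals t**2 for a 1-row contrast
```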
> What would be the appropriate contrast for three conditions A B C that
> tests for A > B & C ?
The contrast [2 -1 -1] tests if the activity in A is larger than the
mean activity in B and C. The conjunction between contrasts [1 -1 0]
and [1 0 -1] tests if the activity in A is larger than the activity in
B, AND is larger than the activity in C. I suspect the latter makes a
little more sense.
[Jesper Andersson 10 Jul 2000]
> I have a simple contrast question. I have a paradigm with 3 conditions
> (a, b, ab). If I set a contrast of -1 -1 1 does this show me the areas
> where ab>a+b? If not what does this contrast represent?
The contrast [-1 -1 2] would show you where ab>a+b, i.e. where ab has a
larger effect than the averaged effect due to a and b.
[Stefan Kiebel, 14 July 2000]
I'm not sure if you are working with PET or fMRI. Certainly with PET
paradigms and most fMRI paradigms, contrasts across conditions within a
group need to total zero; i.e. they need to be balanced. It appears from
what you have written that you have three conditions a, b, and ab. If so you
need to specify contrasts across these conditions that add up to zero. So to
test for areas that are more active in b than a, you would need to enter:
-1 1 0. For areas more active in ab than a AND b: -1 -1 2; which I think
answers the latter part of your query.
[Alexander Leff, 16 July 2000]
> I have a simple question (I hope). I have completed a multisubject (6)
> multicondition (3) fmri study. I have completed individual analysis
> and am working with a fixed effect group analysis because my degrees of
> freedom are too small. I am interested in the best method to determine
> the area(s) that are activated in the study population across all three
> conditions. In other words, what are the common areas activated across
> the three conditions in this study. If anyone has suggestions I would
> be grateful.
A conjunction analysis would be appropriate but would necessitate each
active condition being referred to its own control, to ensure three
orthogonal contrasts. Conjunctions are specified by holding down the
'control' key during contrast selection.
[Karl]
> I have a few questions concerning conjunction analyses. I have a study
> with 9 subjects. I would like to do a conjunction
> analysis but have several questions.
>
> 1. Should corrected or uncorrected p values be used for the analysis?
> The mailbase continually refers to uncorrected p values.
>
In SPM99 a conjunction SPM comprises the minimum t values of the
component SPMs. These minimum t values have their own distributional
approximation which allows one to compute both corrected and
uncorrected p values, just like ordinary SPMs. The criteria for using
corrected or uncorrected inference is exactly the same as for any other
SPM.
> 2. In relation to question 1, should a height threshold be used. I have
> worked with SPM and realize that extent thresholds cannot be used with
> conjunction analyses.
>
The distributional approximations for the spatial extent of a
conjunction SPM are not known (at present) and therefore inference based
on spatial extent is precluded. Consequently height is currently used
to specify thresholding. This can be corrected or uncorrected and both
pertain to the final significance of the conjunction SPM (Pconj), not the
components (in SPM99b the uncorrected height threshold referred to the
components).
> 3. What p value is most appropriate? A recent mail from Karl stated
> that between .5 and .05 is most appropriate. At what level (i.e. simple
> threshold or height threshold) is this entered? Also, could you explain
> why such a p value is appropriate?
Thresholds are entered in the results section after specifying which
contrasts are to enter into the conjunction (by holding down the
control key). The recommendations above probably referred to uncorrected
p values for the component SPMs (Pcomp) (in SPM99b). In SPM99 a
corrected p value of 0.05 or an uncorrected p value of 0.001 would be
sensible. These might correspond to 0.5 or even more from the
perspective of the component SPMs. Note that for uncorrected p values
Pconj = Pcomp^n, where n = number of [orthogonalized] contrasts.
[Karl Friston]
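Karl's relation Pconj = Pcomp^n can be made concrete in a couple of lines (a sketch; the numbers are arbitrary):

```python
# Uncorrected p values: P_conj = P_comp ** n, where n is the number of
# [orthogonalised] contrasts entering the conjunction.
p_comp = 0.05                    # threshold applied to each component SPM
n = 2
p_conj = p_comp ** n             # 0.0025: joint significance is far stricter

# Conversely, a fixed conjunction p of 0.001 lets each component be
# thresholded quite liberally:
p_comp_equiv = 0.001 ** (1 / n)  # roughly 0.0316 per component
```

This is why component thresholds that look lax (0.5 or more, as mentioned above) can still yield a stringent joint inference.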
> We're piloting a motor learning paradigm, in which subjects are scanned
> while performing a motor task A and a motor task B, then are trained
> for a week on A alone, then are scanned while doing both tasks again.
> What we are looking for ideally is 1) strength of signal - those voxels
> which show a different activation for A than B in the second scan, but
> not in the first scan; and 2) extent of activation - a larger or smaller
> area for A than B only in the second scan, but not in the first scan.
> We have three subjects, and will only run more if we think the results
> look promising, since this is a pilot study.
>
> Given the preliminary nature of the study, is it a fair assessment to
> compare cluster sizes for each subject individually for A before and
> after training, and for B the same way? Or is it better to put all
> three subjects in a single design matrix and look for conjunctions? Or
> is some other approach even better?
The conventional approach to this problem would be to create an SPM of
the effect of interest, namely the condition x session interaction.
This obtains from putting all your data into a single, session-separable
model and testing for (A1 - B1) > (A2 - B2) or (A1 - B1) < (A2 - B2)
interactions with the appropriate contrast. This approach controls for
non-specific time effects and should be practice-specific.
The question about differential areal activation is implicitly answered
in the interaction SPM (i.e. if the area contracts the penumbra will
show a negative interaction and if it expands it will show a positive
one).
A more sensitive analysis obtains if you use a conjunction of the
interaction and main effects: i.e. test for a conjunction of the two
hypotheses: this motor-responsive area (Hypothesis 1 = main effect of
A vs B) shows learning-dependent adaptation (Hypothesis 2 =
interaction).
[Karl Friston 17 Apr 2000]
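As a sketch of the interaction contrast (assuming the design columns are ordered [A1, B1, A2, B2]; the beta values are invented):

```python
import numpy as np

# Sketch of the condition x session interaction contrast. The column
# order [A1, B1, A2, B2] and the betas are assumptions for illustration.
betas = np.array([2.0, 1.0, 3.5, 1.2])   # toy parameter estimates

c = np.array([1, -1, -1, 1])             # (A1 - B1) - (A2 - B2)
effect = c @ betas                       # (2.0-1.0) - (3.5-1.2) = -1.3
# A negative value here means the A-vs-B difference is LARGER in the
# post-training session, i.e. a practice-related interaction.
```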
>I have a question regarding how to determine the p-value related to
>spatial extension (voxel number) in a conjunction analysis using
>spm99. I noticed that spm99 does not provide any p-value for spatial
>extension for conjunction analysis, in contrast to simple contrasts.
>My understanding is that the statistical assumptions underlying
>computation of those p-values are no longer valid in conjunctions of
>contrasts. Is that right? If so, how can we decide significant spatial
>overlap between several contrasts? Any insights, help or references
>addressing this question would be greatly appreciated. Thank you.
The preliminary answer is that spatial extent values are not provided
because they have not been worked out for conjunction analyses. It
also sounds like what you're trying to work out is how to do a
conjunction on the differences between groups, i.e. how does one
decide whether activations seen on a conjunction analysis are
significantly different between groups.
As far as I know this can't be done in SPM. Conjunctions basically
tell one about main effects. For the interactions one has to do
either fixed-effects or random-effects-type analyses. This of course
leads to difficulties sometimes interpreting results because the
interactions are not being looked at in the same way as the
conjunction. Perhaps something is possible with masking, but I can't
see that this would let you reject the null hypothesis. Hopefully
there will be a more expert addendum.
[Darren Gittelman]
> I have two groups of
> subjects (actually they are the same subjects, but this is two distinct
> fmri sessions) and I wanted to know which regions are involved in the
> same contrast in both groups. A conjunction analysis seems appropriate
> (the two contrasts are orthogonal). My problem is that as a result I
> got clusters including e.g. 1, 2, 10 or 50 voxels. Which extent
> threshold should I use to select regions with significant joint activations?
> (in the example, 1 and 2 voxels appear too small, 50 voxels
> significant, and 10 voxels in between). My first answer would be to use
> the same threshold as in a single contrast (e.g. 15 voxels, i.e. p<.05),
> because the conjunction has already taken into account the joint test
> (at least on height threshold). However, it seems to be inappropriate in
> some instances: consider for example that each single contrast activates
> a cluster including 20 voxels and the resulting conjunction analysis
> provides a cluster of 12 voxels within each single-contrast cluster.
> Using the same threshold (15 voxels) would reject this conjoint
> activation, but 12 voxels out of 20 voxels which conjointly activated
> seems to be quite significant. I was thinking that in this instance we
> indeed use the implicit assumption that we are testing for joint
> activations within given clusters and a standard masking analysis might
> be more appropriate. But the same spatial extent threshold problem
> appears to occur again, if I am right. What do you think?
The corrected p values are sensitive to the spatial extent threshold
for single contrasts. In other words the p<0.05 corrected height
threshold is lower if you specify an extent threshold of 8 voxels than
if you use 0 voxels. For conjunctions you are forced to use 0 voxels
because, as Darren points out, the theory does not exist for > 0.
Therefore even a cluster with 1 voxel in a conjunction SPM is
significant. Remember the conjunction SPM finds the overlap among a
series of component regions. This overlap can be small but very
significant.
[Karl Friston]
> How can I integrate conjunction analysis in a multi subject random
> effect model? Single subject conjunction of two contrasts is easy to
> perform, but what do I enter at the second level or how do I perform
> the conjunction analysis over a group of subjects?
You do not (generally). A conjunction analysis at the first level,
over subjects, addresses the same thing as a single contrast at the
second level (using the subject-specific contrast images). In some
instances you may want to do a conjunction of two contrasts pertaining
to different effects at the second level. This requires the two or
more [orthogonal] contrasts to be entered into the second-level model
with a simple conjunction (e.g. [1 0] and [0 1]). However, you are
making strong assumptions about the sphericity of the error terms at
the second level by doing this. These assumptions might easily be
violated (for example subjects who activate in contrast 1 may be more
likely to activate in contrast 2). If you are obliged to enter more
than one contrast per subject into a second-level analysis then you
should qualify your inferences along these lines.
[Karl Friston 4 Aug 2000]
> Following up on the issue of conjunctions, in looking at a paper by
> Keith Worsley and yourself (A test for a conjunction), it suggested [to
> me] the possibility that a conjunction could be taken over a region of
> interest as well as at the voxel level. Would this be possible, that
> is designating a region of interest, and performing a conjunction
> analysis which would test the null hypothesis that all subjects did not
> have an activation in a certain area? Then the localization would then
> pertain to the area and not to a particular voxel.
>
> Unfortunately the maths of the paper escape me, so if this is possible
> I'm not sure how to implement this over an area as opposed to a voxel.
> Any help appreciated.
By using the corrected p value one can test the null hypothesis that
one or more subjects did not activate within the search volume (to
which the correction applies). By using a small volume correction
within a conjunction SPM one can restrict the inference to a VOI.
I am not sure that this is what you had in mind but it represents a
useful combination of SVC and conjunctions.
[Karl Friston 5 Apr 2000]
Conjunctions: Differences between SPM96, SPM99
> After huge digs in the SPM archives I still didn't find the answers to
> some of my questions... Could you please re-explain to me:
>
> 1. What are the theoretical and practical differences of conjunction
> analyses between SPM96 and 99?
Conjunctions in SPM96 were based on getting a significant main effect
(averaged over all the contrasts that entered into the conjunction) in
the absence of any interactions among the contrasts. This was
suboptimal because it relied on accepting the null hypothesis of 'no
interactions'.
Conjunctions in SPM99 are tested by ensuring all the contrasts are
jointly significant. The conjunction SPM would look similar to the SPM
that obtains from exclusively masking all the contrasts with themselves
to reveal the common areas or intersections. The t values are the
smallest among all the component SPM{T} values and the conjunction SPM
is a 'minimum T-field'. Gaussian field theory is now applied to this
SPM{minimum T} to give corrected p values. We could not do this in
SPM96 because the minimum T-field theory had not been devised.
[Karl Friston 22 Mar 2000]
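A minimal sketch of the minimum T-field idea (toy t-maps over four voxels; an illustration, not SPM code):

```python
import numpy as np

# The SPM99 conjunction SPM is the voxel-wise MINIMUM of the component
# SPM{T} maps (a 'minimum T-field'). Toy t-maps over 4 voxels:
t1 = np.array([4.2, 1.1, 3.8, 0.2])
t2 = np.array([3.5, 3.9, 0.5, 4.1])

conj = np.minimum(t1, t2)        # [3.5, 1.1, 0.5, 0.2]

# A voxel survives only if it is jointly significant in every component,
# i.e. its smallest component t exceeds the (illustrative) threshold.
threshold = 3.0
survives = conj > threshold      # only the first voxel
```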
>> I am uncertain about the difference and interpretation of a conjunction
>> analysis compared to masking of contrasts. For example, is inclusive
>> masking of one contrast by another the same as a conjunction between the two
>> contrasts ? When is it best to use conjunctions and when is it best to use
>> masking ?
Jose' Ma. Maisog wrote:
>I'd be interested in seeing some sort of answer to this question, too.
>My understanding from reading Cathy Price's paper (Price CJ & Friston
>KJ, Neuroimage 5:261-270 (1996)) is that conjunction analysis is not the
>same as masking one contrast by another, and that a conjunction analysis
>between two contrasts is done as follows:
>
> (1) Include as a regressor in the GLM the
> interaction effect between the two contrasts.
>
> (2) Threshold the Z map for the interaction
> effect, generating a mask.
>
> (3) Use this mask to mask the Z maps for the two
> contrasts.
>
>Is this right? Or, have there been new developments (e.g., the paper by
>Worsley & Friston) which render the above obsolete? Has conjunction
>analysis changed between SPM96 and SPM99?
>
>Thanks,
>
>Joe.
Yes, conjunctions have changed between SPM96 and SPM99.
In SPM96 and SPM97, conjunctions involved calculating the main effect of
two contrasts (A1-B1) + (A2-B2) and then subtracting areas that showed an
interaction. In this context, conjunctions provided a Z score and the
associated probability for the main effect where there were no
interactions. However, there were a number of interpretation problems.
First, the probability generated relates to the overall main effect rather
than the likelihood that two events occur independently. Second, the
conjunction relied on the sensitivity of finding an interaction. In other
words the voxels identified were those where there was no significant
interaction. The objection here is that you cannot prove a null effect.
Indeed, when we plot out some of the areas identified by conjunction
analysis in SPM96 and SPM97 we find that the effect sometimes only comes
from one contrast because the difference between contrasts doesn't reach
significance.
The conjunction analysis in SPM99 overcomes these problems by using
multiple masking and reporting the probability that relates to the
co-occurrence of two or more effects in the same voxel. Probability
decreases as the number of contrasts in the conjunction increases. The
masking option is still included because it allows you to (i) specify a
mask that is not included in the conjunction; (ii) specify an
independent threshold for the mask; and (iii) mask with contrasts which
are not orthogonal to those in the conjunction. For example, the conjunction
(A1-B1) + (A2-B2) could be masked with A1 - Rest and A2 - Rest.
Including these masks might be useful if it is necessary to differentiate
increases in A from decreases in B.
[Cathy Price 21 July 2000]
Conjunctions: Orthogonal Contrasts
>I want to run an analysis that I have done by you just to make sure that
>my logic is sound.
>
>I have an fmri study with multiple stimulus conditions in each series. I
>am interested in two questions. The first is to locate the areas of
>activation that overlap across stimulus conditions. I have performed
>conjunction analyses at the individual and group level (fixed effects).
>My conjunction included the positive contrasts for both stimulus
>conditions.
>
>My second question is: Do areas of activation from one stimulus overlap
>with inactivations from the other stimulus. To do this I performed
>conjunctions analyses (again at the individual and group level) and
>selected the positive contrast from one condition and the negative
>contrast from the other condition. In other words I did a conjunction
>of the contrasts 1 0 and 0 -1.
>
>Are these analyses valid? They seem to follow what has been addressed on
>the mailbase but I am quite surprised by my results and want to verify
>that I have not made some stupid misinterpretation.
>
Your approach was quite reasonable, except that you have made the
assumption that the contrasts 1 0 and 0 -1 are orthogonal, and I
guess that this is probably not the case. (After all, when you have
stimulus A, presumably you can't also have stimulus B at the same
time?) If they are not orthogonal, then the conjunction which you
have tried is not really interpretable.
To take an over-simplistic example, imagine a situation in which
there were only two conditions, A and B, and these were both modelled
with box cars, and in fact the covariate for B was equal to that for
A multiplied by -1 and with +1 added (i.e. when A had ones, B had
zeros and when A had zeros, B had ones). In this over-specified
model, the same variance in voxels which were actually 'activated' by
condition A but not B could either be modelled using covariate A
(with a positive parameter estimate) or with covariate B (with a
negative parameter estimate). Many of these voxels would end up
being modelled by some combination of the two, and these would show
up in both contrasts, 1 0 and 0 -1, and would therefore also show
up in the conjunction of these two, in spite of the fact that in this
example there is no response at all to B.
Really to answer your question you need to have more conditions.
Ideally you should have condition A and its own baseline condition,
and condition B with its own baseline condition (you can't use the
same baseline for two conditions which you want to use for a
conjunction analysis). With these four covariates you could do the
conjunction of 1 -1 0 0 and 0 0 -1 1, and get a meaningful answer. I
guess that what you have ended up with, in your conjunction of 1 0
and 0 -1, are many more voxels than you expected. These are not
necessarily voxels in which 'areas of activation from one stimulus
overlap with inactivations from the other stimulus'; it may just be
telling you that your covariates are significantly collinear.
[Richard Perry 8 Aug 2000]
>Richard, thanks for your reply. I suspected that there may be a problem with
>non-orthogonal contrasts but I did not fully understand. What is SPM doing
>when it "orthogonalizes" the contrasts in a conjunction? Does this
>have any meaning,
>and why does it not account for the non-orthogonal nature of the contrasts?
>
Sorry, I was forgetting that SPM99 tries to take account of this
problem. I must admit that I haven't used SPM99 for this purpose, so
I don't quite know how it works. I still don't think that it helps
you, for the following reason.
As I understand it, when you have two covariates, then the variance
modelled by these can be partitioned into three components:
1. variance which can only be explained by covariate A
2. variance which can be explained by either covariate,
3. variance which can only be explained by covariate B.
The way in which you specify your orthogonalization order (SPM99
prompts you for this after you have chosen your contrasts) will
influence the parameter estimate for one or other covariate, and I
have to confess that I cannot remember which way round it works, as I
find it a bit confusing. I think that the first contrast which you
specify is left unchanged (i.e. the same contrast is applied to the
same parameter estimates), but the second contrast is modified to
compensate for the fact that your parameter estimate is for
components 2 and 3 rather than just component 3. Thus your parameter
estimates stay the same, but you will see that the second (I think!)
contrast now looks slightly different, and includes nonzero values
even for some of the covariates which only appeared in the first
contrast when they were originally specified.
However, regardless of the implementation, I think that the idea is
that you are ascribing the variance which can be modelled by either
covariate (component 2 above) to one or other. You don't actually
know which one it comes from, and there is no way to find out. You
could still be misled in your situation. Thus, you might set things
up so that the common variance (component 2) is explained by
covariate B, when in reality it is entirely attributable to covariate
A. The remaining variance which can only be explained by covariate A
(component 1) is appropriately modelled by this covariate. Once
again you have a situation where variance which actually comes from
one condition appears to be attributable to a combination of both,
and so you have voxels showing up spuriously in your conjunction.
But I may be wrong about this. It may be that SPM99 discounts the
common variance (component 2), so that the conjunction would now ask
whether the data from a voxel includes both 'component 1' and
'component 3' variance. If there is considerable collinearity
between the contrasts, so that much of the variance is 'component 2',
then this test would obviously be rather insensitive, but I think
that the results might be meaningful even in your case. However, if
this is what SPM99 does, then I wouldn't have thought that it would
need to ask you for an orthogonalization order.
I hope that someone else will be able to give a more expert reply,
and tell you which of these SPM99 actually does. Some of the real
experts are away at the moment, though. If this question is
important, though, I would seriously consider doing another
experiment in which each condition has its own baseline, as described
before!
[Richard Perry 8 Aug 2000]
Richard is essentially correct. Conjunctions in SPM99 use minimum
t-field theory, which requires that the t-maps are independent,
which they approximate when the contrast spaces are orthogonal
(and there are many dfs). Note that it is possible that even
though contrast weights are orthogonal, e.g. [1 0] and [0 1], the
subspaces defined by the contrasts are not (if there is correlation
between the covariates, for example).
Thus SPM99 may ask you for an orthogonalisation order, such as
[1 2]. This will orthogonalise the contrast space for contrast 2
with respect to that for contrast 1. This will modify the second
contrast to include the shared variance (that Richard talks about).
(Rather confusingly, for the special case of two covariates, this
is equivalent to a [0 1] contrast after orthogonalising the first
covariate with respect to the second.)
[Rik Henson 16 Aug 2000]
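One way to see this orthogonalisation is via the design-space directions the contrasts pick out. The sketch below is my own illustration (the design matrix, contrasts, and the Gram-Schmidt-via-least-squares route are assumptions, not SPM's actual code): it orthogonalises the second contrast's fitted direction with respect to the first and maps it back to contrast weights.

```python
import numpy as np

# Two correlated covariates: contrast weights [1 0] and [0 1] are
# orthogonal, but the subspaces they pick out in the design are not.
X = np.array([[1.0, 0.8],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.3]])

c1 = np.array([1.0, 0.0])
c2 = np.array([0.0, 1.0])

x1, x2 = X @ c1, X @ c2
x2_orth = x2 - x1 * (x1 @ x2) / (x1 @ x1)     # remove the shared variance

# Map the orthogonalised direction back to contrast weights.
c2_orth, *_ = np.linalg.lstsq(X, x2_orth, rcond=None)
# c2_orth now has a non-zero weight on covariate 1, mirroring the
# modified second contrast Richard describes seeing on screen.
```

With this toy design the modified second contrast comes out as roughly [-0.9, 1], and its fitted direction is exactly orthogonal to that of contrast 1.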
> 1) Is there a preferred method for doing the first-level analysis:
> should I calculate the con*.img files for the second-level analysis on
> an individual basis, or should I use a multi-subject fixed-effects
> model and then calculate the single-subject con*.img files by
> presenting the other sessions (subjects) as null events (e.g. 2
> conditions,
subject 1: 1 -1 0 0 0 0.....;
subject 2: 0 0 1 -1 0 0........;
subject 3: 0 0 0 0 1 -1 0 0......etc).
>
> In the latter case, the individual con*.img files are not independent,
> I would think.
In fact they are. Even when using AnCova to remove the confounding
effects of global activity, SPM is set up (if you use the defaults and
model subject x condition interactions) to be subject-separable. This
means that the contrast images are the same as if you had analyzed each
subject separately. As such, doing a multi-subject first-level analysis
is much more convenient (for selecting subject-specific contrasts, as
you specify, for the second level).
> 2) Is it possible to do a RFX analysis with an unbalanced design, i.e.
> a randompresentation eventrelated design with a different model for
> each individual subject?
Yes, if the designs are sufficiently similar (i.e. roughly the same
number of events etc.) the second-level analysis of an unbalanced
design can be considered a mixed-effects analysis (with assumptions
about sphericity). There has been some discussion about this in the
archives.
[Karl Friston 24 Mar 2000]
> I have a few more questions concerning the multisubject correlation
> analysis.
> Does "Covariates only: interaction with subject" model take into account
> the dependence within subject? If it does, how does it do that?
> This is important because we have 8 scans per subject.
>
> We specified the following contrasts (11 subjects) to test
> 1) positive correlation: 1 1 1 1 1 1 1 1 1 1 1
> 2) negative correlation: -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.
> Are these t-contrasts correct for testing the average positive and
> negative correlation?
>
> I wonder whether these results are fixed effects analyses, that cannot
> be generalized into population level.
>
You are correct that this is a fixed effects analysis, and cannot strictly
speaking be generalised into a population level inference.
The way to perform a random effects analysis on this study is to generate a
contrast parameter estimate map for each subject (by simply looking at
contrasts [1 0 ... 0], [0 1 0 ... 0] ... [0 ... 0 1] in the results
section). You then perform a one-sample t-test (i.e. compare the average of
your parameter estimates with zero) on these (con_00*.img) maps. You do the
same thing with the [-1 0 ... 0], [0 -1 0 ... 0] ... [0 ... 0 -1] contrasts
to check the negative correlations.
The RFX model will tell you if all subjects "activate" in the same location
and with roughly the same magnitude. If you want to answer the slightly
less stringent question "do they all activate in the same location?" you
can use a conjunction across subjects instead. It's really easy. Once you
have entered all the individual contrasts above you simply select them all
(positive and negative separately) in the results section using the control
button.
[Jesper Andersson 12 July 2000]
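The second-level one-sample t-test amounts to a few lines at each voxel (a sketch with invented con values for 11 subjects; not SPM code):

```python
import numpy as np

# One con_00*.img value per subject at a single voxel (made-up numbers).
con = np.array([0.8, 1.2, 0.5, 1.0, 0.9, 1.4, 0.7, 1.1, 0.6, 1.3, 0.9])

n = con.size
se = con.std(ddof=1) / np.sqrt(n)   # between-subject error only
t = con.mean() / se                 # compare against t with df = n - 1
# A significantly positive t means the average subject effect differs
# from zero: the random effects (population-level) inference.
```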
> when we do a 'multi-group: conditions and covariates' analysis, we can
> not produce the subject specific contrast images, because the columns
> of the design matrix contain the different conditions for all
> subjects. How can we obtain the contrast images to enter into the
> second level analysis ? Do we have to use the results from the single
> subject analysis ?
You could proceed by treating each subject as a separate group. If you
are using covariates ensure you select 'covariate x subject interactions'.
[Karl Friston 12 July 2000]
...or just use "Multi-subject: cond x subj interaction & covariates", and
put all the subjects in together. This will ask you fewer questions, but
effect the same result as Karl's suggestion.
Although you have two groups, you only want the individual subject level
contrasts, which you then will assess at the second level. So, the group
membership isn't important in the first level of the analysis since the
model fits each subject separately.
As Karl notes, all effects must be fitted as interactions with the subject
effect to ensure subject separability.
[Andrew Holmes 1 Aug 2000]
Random Effects: 2 groups, 2 conditions
> suppose the following study:
> Group A: 12 patients
> Group B: 12 controls.
> Each groups performs Task I and II.
>
> We are interested in task-specific effects within groups and group
> differences between tasks.
>
> To analyse within-group effects I perform two fixed-effects analyses,
> one for each group separately with 12 subjects, compute subject-specific
> con-images and feed them into a second-level RFX analysis. So far so
> clear.
>
> But what if I want to know something about the group differences for
> Task A? I see two possibilities: either I take the con-images of each
> group-specific fixed-effects analysis, OR I do a third fixed-effects
> analysis with all 24 subjects in one group. The main difference is that
> global normalisation in the first case is group-specific, in the second
> case not. But isn't the second case, i.e. to make a new fixed-effects
> analysis, the correct way because I want to compare both groups?
To do the between-groups analysis, use the two-sample t-test option under
basic models and then provide the subject-specific con*.img files for each
of your two groups. This will test for differences in that effect between
the groups (in either direction depending upon the contrast you specify).
> Is it better
> (more correct) to take the con* images from one fixed-effects analysis (24
> subjects, both groups) or from two separate fixed-effects analyses (12 from
> one group, 12 from the other)?
You should run a separate fixed-effects analysis on each subject and then
enter the con* images from those analyses into the two-sample model (one for
each subject).
[Russ Poldrack]
As far as I understand, the con images should be the same no matter how many
subjects are in the design matrix. The difference between the two setups lies
within the residual mean square (ResMS), and thus the resels per voxel (RPV)
image and the T maps. Because one should only put contrast images into the
second-level analysis, it makes no difference which setup you use. If you
also want to make a fixed-effects analysis, you will need the design matrix
including all subjects, and in that case I would just use the big one. If you
do not care about first-level analysis and have many subjects I would go for
one design matrix per subject (parameter estimation works faster with smaller
design matrices).
> I have two groups of fMRI subjects (10 young subjects and 10 old
> subjects). Both groups performed an A-B block design where B was a
> control task. All data was spatially normalized. I have computed the
> effects of task A in each individual subject, and for the youngsters as
> a group, and for the oldsters as a group and for the total group.
>
> How do I determine regions that were significantly more active in the
> young group than in the old group (and vice versa).
I would recommend the random effects approach to test the interaction
between groups:
1. Compute individual (first-level) contrast images of A-B for each
subject.
2. Second-level analysis: hit the basic models button and
select a two-sample t-test.
You don't need global normalization here since you did it (hopefully) at the
first level.
Enter the contrast images of A vs B in the young group for one sample and
then do the same for the older group.
3. Then set up contrasts with the Results button.
Your contrasts should be:
1 -1 Young > Old
-1 1 Old > Young
[Sterling C. Johnson 30 Jun 2000]
I just saw that Sterling already answered, but I didn't want to throw
away my answer...
I would put all subjects of both groups into one model. Each subject
gets a separate session, in which you model condition A. You can then
choose 2 contrasts to test young vs old. Given that you entered first
all young subjects, then all old subjects, the contrast for testing
where your young subjects activate more than your old subjects would be:
1 1 1 1 1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
and old more than young:
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 1 1 1 1 1 1 1 1 1
You could also proceed to a 2nd-level analysis. To do this you create
one contrast image for each subject, where the appropriate contrast for
young subject #7 would be:
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
You then enter the 20 images into a 2-sample t-test in 'Basic Models'.
[Stefan Kiebel 30 Jun 2000]
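The contrast vectors above can be generated rather than typed by hand (a sketch; the young-then-old column ordering is as Stefan assumes):

```python
# 10 young subjects followed by 10 old subjects, one column per subject.
n_young, n_old = 10, 10

young_gt_old = [1] * n_young + [-1] * n_old    # young > old
old_gt_young = [-1] * n_young + [1] * n_old    # old > young

# Subject-specific contrast for young subject #7 (7th of 20 columns),
# used to create that subject's con image for the 2nd-level test:
subj7 = [0] * (n_young + n_old)
subj7[6] = 1
```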
> We are analyzing PET data of subjects from 2 groups who underwent the
> same stimulation. Group1 contains 7 subjects, Group2 contains 12
> subjects. We can do a single subject analysis and a RFX group
> evaluation of group 2. For the group evaluation of group 1 the number of
> 7 subjects might not be enough(?). We want to do a comparison of the 2
> groups. Which model do we have to use for the first and second level
> analysis?
For the 1st-level analysis use a 'multi-group: conditions and covariates'
design; for the 2nd-level analysis compare the two sets of subject-specific
contrasts with a two-sample t-test under 'basic models'. The latter will
have reasonable power because you are using all 19 subjects.
[Karl Friston 10 July 2000]
Random Effects: 3 cond A, B, Rest
> Let me ask one simple question about fMRI random effects analysis (SPM99).
> I have 3 conditions: A, B, and Rest. I compared A and B for each subject,
> specifying the t-contrast (1, -1), and obtained con_***.imgs. Then I tried to
> perform a group analysis using random effects model. I selected "one sample
> t test" from Basic Stats menu and selected the con_***.imgs. So far so good.
Yup - that's all fine.
> Then the program requested to specify contrasts again. My question is how to
> specify this second level contrast and what it means. I assume that one
> sample t-test refers to a significance test against the null hypothesis: the
> population mean=0, so I guess that the second level contrast should be "1".
> Is this correct? And if so, why should we specify it? (because I cannot
> think of any other numbers. As far as a one sample t test is concerned, it
> must always be 1, mustn't it? What does it mean when it is, say, -1?)
You're right: for the second level analysis you specify, your H0 is that the
population mean is 0 (i.e. no consistent mean effects). Doing an F at the
second level would test for ANY mean effects significantly different from
0, without any constraints on the direction of these effects.
However, using your one-sample t, you can test for the direction of the
difference if you have prior predictions that some areas will show increases
while others may show decreases (I am sidestepping the issue of how one
wishes to interpret 'deactivations' or mean decreases in fMRI). So a contrast
of (1) means 'test for a significant mean (+ve) effect' and a contrast of
(-1) means 'test for a significant mean (-ve) effect'.
[Dave McGonigle]
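The point about the sign of the second-level contrast can be illustrated with a toy one-sample t-test in Python (fabricated per-subject contrast values; testing -con is equivalent to using a contrast of (-1), which simply flips the sign of the t statistic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
con = rng.normal(0.3, 1.0, size=12)   # toy per-subject contrast estimates

t_pos, p_pos = stats.ttest_1samp(con, 0.0)   # contrast (1): test for a mean (+ve) effect
t_neg, _ = stats.ttest_1samp(-con, 0.0)      # contrast (-1): test for a mean (-ve) effect
print(t_pos, t_neg)                          # t_neg is just -t_pos
```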
Random Effects: 1 group, 2 cond., 1 covariate, 2 nuisances
> I will first start with what we have: Within an fmri study,
> One group
> Five subjects
> Two conditions
> Auditory Monitoring versus its own baseline
> Working Memory versus its own baseline
> Two nuisance variables
> anxiety score (one score per subject)
> Depressive mood score (one score per subject)
>
> One covariate of interest
> error score on the working memory task
> This is what we did
> Design Description
> Design: Full Monty
> Global calculation: mean voxel value (within per image fullmean/8 mask)
> Grand mean scaling: (implicit in PropSca global normalization)
> Global normalization: proportional scaling to 50
> Parameters: 2 conditions, +1 covariate, +5 block, +2 nuisance
> 10 total, having 7 degrees of freedom
> leaving 3 degrees of freedom from 10 images
>
> Is this a valid way of looking at this? We are concerned with the
> large degrees of freedom that we are using up. Also how would we
> accurately interpret such a model? Does the statistical map only
> represent activations that are associated with the covariate of
> interest after controlling for anxiety and depression scores?
Firstly I assume this is a second level analysis where you have taken
'monitoring' and 'memory' contrasts from the first level. If this is
the case you should analyse each contrast separately. Secondly do not
model the subject effect: at the second level this is a subject by
contrast interaction and is the error variance used for inference.
Thirdly a significant effect due to any of the covariates represents a
condition x covariate interaction (i.e. how that covariate affects the
activation).
I would use a 'covariates only' single-subject design in PET models (for
each of the two contrasts from the first level). A second-level
contrast testing for the effect of the constant term will tell you
about average activation effects. The remaining covariate-specific
contrasts will indicate whether or not there is an interaction.
[Karl Friston 17 July 2000]
Random Effects: 4 cond, 4 matched rest
> Further to my question to you earlier this week which was:
> Q. My paradigm is a block design with 4 different active blocks each
> followed by its respective null block, i.e. I have 4 different null
> blocks. How do I go about specifying the design matrix for a second
> level analysis taking these different null blocks into account?
> e.g.
> if my 4 active blocks are: A1 A2 A3 A4
> and my 4 null blocks are: N1 N2 N3 N4
>
> If I specify trials 1-8 in the following order: A1 N1 A2 N2 A3 N3 A4 N4
>
> how do I contrast [A1-N1] - [A2-N2]? or vice versa?
>
> Your answer was:
> A. You would simply specify 8 conditions (A1 - N4) and use the appropriate
> contrasts.
>
> Unfortunately, 'use the appropriate contrasts' is the bit we don't know how
> to do now that we have so many different nulls.
>
> I have specified the conditions 1-8: A1 N1 A2 N2 A3 N3 A4 N4
> for simple contrast A1-N1 I've used: 1 -1 0 0 0 0 0 0
>
> & for contrast A2-N2 I've used: 0 0 1 -1 0 0 0 0
> How do I specify a 2nd level contrast looking at the activity in A1 minus
> its null N1 versus the activity in A2 minus its null N2,
> i.e. [A1-N1] - [A2-N2]?
>
> If I use: A1 N1 A2 N2 A3 N3 A4 N4
> 1 -1 -1 1 0 0 0 0
> then surely this is just adding the activity in A1 and A2 and taking
> away the activity in N1 and N2 which is not what we want to do.
In fact this is [A1-N1] - [A2-N2] and is exactly what you want. I
think the confusion may be about the role of the 2nd-level analysis.
To perform a second level analysis simply take the above contrast [1
-1 -1 1 0 0 0 0] and create a con???.img for each subject. You
then enter these images into a one-sample t-test under 'Basic Designs'
to get the second-level SPM. To do this you have to model all your
subjects at the first level and specify your contrasts so that the
effect is tested in a subject-specific fashion:
i.e. subject 1 [1 -1 -1 1 0 0 0 0 0 0 0 0 0 0 0 0 ...
     subject 2 [0 0 0 0 0 0 0 0 1 -1 -1 1 0 0 0 0 ...
...
[K Friston 19 July 2000]
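A sketch of how such subject-specific contrast rows could be generated programmatically (a Python illustration, not SPM code; the 12 subjects and 8 conditions per subject are hypothetical numbers):

```python
import numpy as np

# [A1-N1] - [A2-N2] within one subject's 8 conditions: A1 N1 A2 N2 A3 N3 A4 N4
base = np.array([1, -1, -1, 1, 0, 0, 0, 0], float)
n_sub, n_cond = 12, 8     # hypothetical: 12 subjects, 8 conditions each

def contrast_for(subj):
    """Place the within-subject contrast in that subject's block of columns
    (subj is 1-indexed), zeros everywhere else."""
    c = np.zeros(n_sub * n_cond)
    c[(subj - 1) * n_cond : subj * n_cond] = base
    return c

print(contrast_for(2)[:16])   # subject 2's row, first two subjects' columns
```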
Random Effects: variable event-related
> I am currently doing an event-related fMRI study using a memory task. I
> would like to do an RFX analysis on hits - baseline. However, the
> problem is that the individual subjects have considerably different hit
> scores, so that different numbers of events are taken into the first
> level analysis, giving different degrees of freedom (ranging from 90 -
> 130).
> I understand that this can be problematic in that RFX analysis assumes
> equal degrees of freedom. Is there any way to correct for this?
In fact the secondlevel analysis, used in SPM, assumes that the design
matrices are identical for each subject. To do your analysis properly
you would have to fit a hierarchical model with different variance
estimators for each subject. However, remember it is the variance of
the contrast that is the critical variance here and the d.f. for each
contrast will be the same (assuming the same scan number for each
subject). The only thing you have to assume is that the variable
number of 'hits' in each subject does not induce substantial
differences in the variance of the contrast estimators. I think that
as long as you qualify the results, and make it clear what you have
assumed, you should be OK.
We are currently working on this issue, but it is too early to quantify
how robust the two-stage approach to RFX analyses is.
[Karl Friston 11 Jul 2000]
HRF (Hemodynamic Response Function)
> We are trying to specify a subject-specific HRF as a basis function at the
> individual level of analysis. The 'subject-specific HRF' is an
> independently collected and averaged timecourse of signal intensity in a
> cluster in the motor cortex during a separate (motor) task.
>
> It appears that this can easily be added as an option in spm_get_bf.m, by
> adding a 'user specified hrf' option for the Cov variable around Line 91,
> and inserting a condition to import a user-specified vector for bf,
> instead of SPM's default hrf around Line 150, which is now:
> [bf p] = spm_hrf(dt);
>
> However, I was wondering, are there any properties that a Gamma
> basis function has (and that a user-specified vector may not have)
> that are used at some later stage of the statistical modelling, and the
> benefit of which would be lost in the case of a user-specified vector?
>
I can't think of any properties that a gamma bf has that would influence the
statistics of your analysis above and beyond being a better or worse fit to your
actual data. A single gamma bf just happens to be a shape that looks 'hrf-ey' and
has the added advantage of being described completely by a single parameter. The
more mathematically gifted may wish to correct me if I've oversimplified things
here, or just got them plain wrong!
As far as I am aware, the main problem with using any single-parameter basis
function to describe a complex waveform such as the hrf is one common to all
attempts to fit a model to data in linear regression: the model may not be well
specified. You can't fit a square peg into a round hole, and so it is often the
case that the choice of bf is not appropriate. There are options in SPM that take
this into account, and allow the modelling of neurovascular responses by a basis
set of more than one function (either the Fourier or 3 gamma bfs options). These
will fit any example of the 'family' of responses that can be described by a
linear combination of your bfs. The disadvantage is that it becomes harder to
relate these more complex fits back to the underlying neural activity that we
assume generates the hrf, and so unambiguously talk about differences between
evoked responses that we wish to describe by a difference in parameter estimates.
A recent study by Geoff Aguirre and colleagues showed a great deal of shape
difference between the hrfs of different subjects in the region of the central
sulcus to a transient motor response (The variability of human, BOLD hemodynamic
responses; Neuroimage 1998 Nov;8(4):360-9), but less variability in hrf shape
within subjects when studied over a number of different runs/sessions. This group
now regularly defines subject-specific hrfs which are then used in subsequent
analyses - the approach is described in 'Using event-related fMRI to assess
delay-period activity during performance of spatial and nonspatial working memory
tasks. Brain Res Brain Res Protoc 2000 Feb;5(1):57-66'. This seems to suggest
that your strategy of independently defining subject-specific hrfs is a sound
one, assuming you wish to use these to fit responses in the same region, as hrfs
may show spatial variability even in the same brain. An SPM answer just wouldn't
be an SPM answer without a few caveats, would it?
[Dave McGonigle]
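If you do splice a user-specified vector into spm_get_bf.m, it needs to live on SPM's micro-time grid (TR/16 bins). A rough Python sketch of preparing an empirical timecourse for that purpose; the numbers are invented, and the peak-normalisation is a convention only, since the overall scale is absorbed by the parameter estimates:

```python
import numpy as np

TR = 2.0
# toy averaged motor-cortex timecourse sampled once per TR (fabricated values)
empirical = np.array([0.0, 0.4, 1.0, 0.8, 0.45, 0.2, 0.05, -0.05, 0.0])

# resample onto the micro-time grid (TR/16 bins) by linear interpolation
dt = TR / 16
t_coarse = np.arange(len(empirical)) * TR
t_fine = np.arange(0, t_coarse[-1] + dt, dt)
bf = np.interp(t_fine, t_coarse, empirical)
bf /= np.abs(bf).max()        # peak-normalise; scale lands in the betas anyway
print(bf.shape)
```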
> In the diagram of my efMRI model basis set (hrf with time and dispersion
> derivatives), it looks as though the dispersion derivative is the 2nd
> derivative of the hrf reflected about zero. Is that the case? If not what
> is it?
It is the derivative of the HRF with respect to dispersion (see
spm_hrf for how dispersion is parameterized). This is actually very
similar to the second derivative.
[K Friston 12 July 2000]
> I am learning to use SPM 99 to generate experiments. It seems that the
> onset times for the various stimuli are contained in the matrix
> Sess{1}.ons{n} for the nth event type.
>
> I've noticed with several different designs that these onset times are
> 0.125 seconds later than I would expect: for example, the beginning of
> the first epoch for a block design - Sess{1}.ons{1}(1) - is 24.125
> sec when I specify 12 scans (at 2 sec TR) as the time to first trial.
> This also occurs for eventrelated designs.
This is a reflection of the fact that onset times are specified in
time-bins of TR/16 = 0.125s for you. One starts acquiring the first
scan at t = 0 seconds (scans) and finishes at t = TR seconds (1 scan).
To ensure indices do not start at 0, 1 time-bin is added to every
onset. 125ms does not matter one way or the other given the time
constants of the hemodynamic response.
[Karl Friston 17 July 2000]
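The arithmetic above can be written out explicitly (Python, using the TR = 2 s and 12-scan example from the question):

```python
TR = 2.0
n_bins = 16                 # SPM's fMRI_T default
dt = TR / n_bins            # one time-bin = 0.125 s for TR = 2 s

scans_to_first_trial = 12
# one extra time-bin is added so indices never start at 0
onset = scans_to_first_trial * TR + dt
print(onset)                # 24.125
```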
> We just stumbled over a question that might have been discussed
> earlier. Our fMRI paradigm is slightly asynchronous to the image
> acquisition. From the performance of execution of the paradigm we
> derive a covariate. As during the images at the beginning and end of
> the block the execution starts or ends, we do not know what to choose
> as a covariate for these images (for all the other images, we use
> either the average of the performance data acquired during the image or
> zero (during rest)). In this context we also would like to know
> whether the covariate data is convolved with the hemodynamic delay
> function, and if so, how?
If the covariate is presumed to have a neuronal correlate then it
should be convolved with the HRF (because this is how that effect will
be expressed in the data). One way to model this is to treat your
block design as an event-related design, where each block is a train of
trials or events. Using the parametric option simply modulate each
trial with the performance measure for that trial. This approach
eschews any problem with asynchrony between acquisition and trials and
automatically ensures appropriate convolution with the HRF in
'microtime' (i.e. time-bins of TR/16).
[Karl Friston 21 July 2000]
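A minimal Python sketch of this 'train of modulated events' idea: build a stick function in micro-time, scale each stick by that trial's performance, convolve with an HRF, and down-sample to one value per TR. The single-gamma HRF and all the timings here are made-up stand-ins, not SPM's exact spm_hrf:

```python
import numpy as np
from scipy.stats import gamma

TR, n_bins = 2.0, 16
dt = TR / n_bins                          # micro-time bin (TR/16)
t = np.arange(0, 32, dt)
hrf = gamma.pdf(t, a=6)                   # toy single-gamma HRF (stand-in)
hrf /= hrf.sum()

n_micro = int(120 / dt)                   # a 120 s run on the micro-time grid
stick = np.zeros(n_micro)
onsets = np.array([10, 30, 50, 70, 90])   # trial onsets in seconds (made up)
perf = np.array([0.9, 0.6, 0.8, 0.4, 0.7])  # per-trial performance (made up)
stick[(onsets / dt).astype(int)] = perf   # modulate each trial by performance

reg_micro = np.convolve(stick, hrf)[:n_micro]  # convolve in micro-time ...
regressor = reg_micro[::n_bins]                # ... then sample once per TR
print(regressor.shape)
```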
> I'm trying to model event-related activity that is due to two stimuli
> presented sequentially with an SOA of 1.5 seconds. In looking at the
> raw data, the hemodynamic response to such an event often has a wider
> peak than does the typical HRF for a single event. I'm wondering
> whether modelling the data using a dispersion derivative (i.e., hrf +
> time + dispersion derivatives) will enable SPM to fit a canonical hrf
> to the data that has the appropriate width. In other words, does the
> dispersion derivative allow the width of the canonical HRF to be
> adjusted analogous to the way that the temporal derivative allows the
> onset of the canonical response to be adjusted?
Absolutely.
[Karl Friston 21 July 2000]
You might consider using the 'parametric modulation' option to explore
time-dependent effects by creating boxcar regressors that are modulated by
an exponential (or other) function of time. This is slightly more
complicated to set up, but has the potential merit of greater flexibility in
characterising the nature of any time-dependent effect.
[Geraint Rees 26 July 2000]
> Can anyone tell me the best design matrix that I can specify for the SPM
> analysis? Also, what are the effects of selecting "SOAvariable",
> "Convolve with HRF" and "Add temporal derivatives" for fmri model setup?
> TR: 3000ms
> TE: 60ms
> ON-OFF experiment: 1-2-1-2-1-2-1-2 ... start with off state
> Multiphase: 62
>
One possibility is to make a design matrix with one condition. Your SOA is
fixed (the time between the start of each ON block) and these onsets
represent EPOCHS that are 12 scans long. Do convolve with the HRF, don't add
temporal derivatives (at least at first), select the default high pass
filter and use hrf for low pass filtering with no modelling of the
autocorrelations. Use a contrast of [1] to visualise ON>OFF. That should do
the trick I hope!
To answer your questions:
SOA-variable is used when either the start of the ON blocks, or the timing
of individual events in an event-related design, is irregular.
'Convolve with hrf' convolves the regressors with a synthetic haemodynamic
response function. This is usually a good idea unless you have made up your
own userspecified regressors that are already preconvolved.
'Add temporal derivatives' makes a second column in the design matrix for
each regressor that approximates the temporal derivative of that regressor.
The idea is that some (unknown) combination of these two columns,
appropriately weighted, can model a (small) temporal shift in the fit of the
original regressor. This can be used to improve the overall fit of the model
if precise timings are uncertain, or to formally test for differences in the
onset of the BOLD response for different conditions.
[Geraint Rees 12 Aug 2000]
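The 'weighted combination absorbs a small shift' idea behind the temporal derivative can be checked numerically. The sketch below (Python; a stand-in gamma response, not SPM's canonical hrf) fits the pair [response, temporal derivative] to a version of the response shifted by 0.5 s and shows the residual is small:

```python
import numpy as np
from scipy.stats import gamma

dt = 0.125
t = np.arange(0, 32, dt)
hrf = gamma.pdf(t, a=6)                  # stand-in canonical response
dhrf = np.gradient(hrf, dt)              # the temporal-derivative column

shift = 0.5                              # a 0.5 s onset error
shifted = gamma.pdf(t + shift, a=6)      # the response that actually occurred

# least-squares fit of [hrf, dhrf] to the shifted response (first-order Taylor)
X = np.column_stack([hrf, dhrf])
beta, *_ = np.linalg.lstsq(X, shifted, rcond=None)
resid = shifted - X @ beta
print(beta, np.abs(resid).max())
```

The derivative column soaks up most of the misfit that the unshifted response alone would leave behind.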
> I would like to extract time courses for the whole brain, however I
> quickly run into memory problems when trying to extract regions with a
> radius larger than 40mm. Is there a way of getting the time courses more
> directly? i.e. is there an object somewhere that contains all the time
> courses (preferably high pass filtered)?
One possibility is to use the Y.mad file to extract the raw time course of
voxels (it will only have information about those voxels that survived
the upper F-threshold specified in your defaults file, so you might want
to raise this threshold to 1 in order to write all voxels' timecourses).
Then in matlab you could do something like
>> load SPM
and then to extract a voxel with voxel coordinates [39, 19, 1] and put
the timecourse in variable Y, type:
>> idx = find(XYZ(1,:)==39 & XYZ(2,:)==19 & XYZ(3,:)==1);
>> Y = spm_extract('Y.mad',idx);
To extract a voxel defined in terms of mm coordinates, you'd have to first
convert the XYZ variable above into mm coordinates:
>> XYZmm = M(1:3,:)*[XYZ; ones(1,size(XYZ,2))];
and then do the same as above, but this time using the XYZmm variable and
the voxel location in mm.
I think you can also apply the filtering you've specified at the design
specification stage (both high and low if they've been specified) by:
>> Y = spm_filter('apply',xX.K,Y);
[Kalina Christoff]
>I am learning to use SPM, and I have a question about slice timing
>correction that I haven't been able to find an answer for. To wit, does
>SPM 99 properly correct the slice timing for coronal slices? The program
>seems to expect axial slices, and if it works for slices on other axes, I
>don't know how it determines the axis to use.
>
The slice timing program doesn't care about orientation per se, just that
you have a time series of images. In terms of specifying the reference
slice you have to be very careful about this if the acquisition is not
ascending or descending as the interleaved order is not intuitive (you
might just specify the order yourself using the option provided).
SPM in general expects axial slices for lining up the activations on the
glass brain templates but for running the statistics the orientation
doesn't matter as long as the origin is specified correctly. If you've
figured out which slice is 1 for your origin in the coronal direction then
use that numbering scheme to pick the proper reference slice.
[Darren R. Gitelman 15 July 2000]
>Thank you for your reply, Darren. I'm afraid I still don't understand
>exactly what is going on, though. Here's my situation: I have 140 3D image
>files from a scan series, one every 2 seconds, that I want to work
>on; each file contains 20 64x64 coronal slices. (I made these 3D files
>because I thought that's what SPM needed to work with.) Is it that I need
>to provide SPM with a set of 20 coronal slice files instead, and repeat
>the timing correction for each of the 140 scans?
I had this issue for a study myself. If you have reconstructed your
coronal slice set as a volume in the correct SPM orientation then YES,
you do need to be careful, as SPM will be expecting the data to be acquired
in the z-direction (i.e. S-I).
My solution:
use AIR to rotate the volumes with yz
then do slice time correction, then rotate back with zy option.
Of course you will need to double check what becomes the effective
top of the volume for specifying to spm the slice acquisition order.
The slice timing is done on your session (140 scans).
[Robert C. Welsh, 15 July 2000]
> Christian Buechel's post
>
> http://www.mailbase.ac.uk/lists/spm/199907/0154.html
>
> gave a formula for correcting an onset time, to account for SPM allowing a
> userspecified choice of which time point to sample regressors (fMRI_T0).
> Is this formula still applicable, or does SPM99 now do this automatically
> with no user input?
That formula is still valid, and nothing has changed in SPM99.
We thought about linking the reference slice chosen in slice-timing
correction to the value of fMRI_T0, but decided against it because
some people do not use the slice-timing stage during preprocessing
(eg with long TRs, when the interpolation error may be large, or for
blocked designs, when model/slice timing differences are often
negligible).
> Minor related question: is there any reason that fMRI_T (the number of
> sample points per TR for e.g. the hrf) should be the same as the number of
> slices? E.g., if I have 24 slices and slice-time correct to the middle
> slice, I think it should be OK to use other values of fMRI_T, as long as
> fMRI_T0/fMRI_T = 0.5. Is that right?
You are right. The value of fMRI_T could match the number of
slices, but doesn't have to  it is the ratio you describe
that is important. Basically, increasing the value of fMRI_T
may give you more temporal precision, but the advantage is
likely to be negligible (unless you have a long TR, eg >4s)
and the computation of covariates will take longer.
[Rik Henson 11 Aug 2000]
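One way to do the bookkeeping when changing fMRI_T while keeping the fMRI_T0/fMRI_T ratio fixed (a Python sketch of the arithmetic only, not SPM code):

```python
def reference_bin(fmri_t, frac=0.5):
    """Pick fMRI_T0 so regressors are sampled at a fixed fraction of the TR.
    frac=0.5 corresponds to slice-time correcting to the middle slice."""
    return max(1, round(frac * fmri_t))

for fmri_t in (16, 24, 32):
    print(fmri_t, reference_bin(fmri_t))   # 16->8, 24->12, 32->16
```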
> I'd be much obliged if someone could confirm that my understanding
> from the spm_archives is correct.
> To slice time correct a volume of 21 slices, acquired in an
> interleaved manner from bottom to top as:
>
> "1 3 5 7 9 11 13 15 17 19 21 2 4 6 8 10 12 14 16 18 20"
>
> (Where the numbers are the Analyze format spatial positions (i.e. 1=bottom))
> Is the above number phrase exactly what I would need to enter
> into the GUI under the user specified option?
Assuming that is the particular interleaving of your scanner, yes.
[Rik Henson 17 Aug 2000]
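For an ascending interleaved acquisition like the one quoted, the order can be generated rather than typed by hand (a Python sketch):

```python
def interleaved_order(n_slices):
    """Bottom-to-top interleaved acquisition: odd-numbered slice
    positions first, then the even-numbered ones."""
    odds = list(range(1, n_slices + 1, 2))
    evens = list(range(2, n_slices + 1, 2))
    return odds + evens

print(interleaved_order(21))
# [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
```

Check the generated order against your scanner's documentation; interleaving conventions differ between manufacturers.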
> I have applied slice timing to my event-related series. For motion correction
> (spatial realignment) I would like to use a non-SPM software. This software can
> read the aV*.img images but not aV*.mat files.
>
> My question is: have the aV* images actually been written with the slice timing
> correction embedded? That is, if I use them outside SPM without their .mat
> files, will slice timing still be taken into account? If not, what should I do?
I think you can quite happily delete these .mat files as they probably
don't contain any additional information. Whenever any new images are
derived from existing ones, then the positional information is preserved
in the new set of images. This positional information is derived from the
.mat files if they exist, but otherwise from the voxelsize and origin
fields of the .hdr files. If the origin contains [0 0 0], then it defaults
to mean the centre of the image, which, depending on whether you have odd or
even image dimensions, is either an integer value or halfway between two voxels.
Because the origin field can only store integers, half voxel translations
can not be represented in the .hdr files, so this information is
written to .mat files.
[John Ashburner 25 July 2000]
> I'd like to know how to handle EPI scans w/ inter-slice gaps when analyzing
> using SPM99 (typical realign->coreg->norm->smooth'ing). For example, let's say
> I have 14 slices of EPI scans w/ slice thickness of 7mm and gap of 2mm. Do
> I specifically set up the header file or just handle it as if it's 9mm thick?
> Any comments are appreciated.
Just enter the third voxel dimension as 9mm.
[John Ashburner 20 July 2000]
> 1. Uncorrected p value.
>
> I am having some difficulty determining what is acceptable in terms of
> an uncorrected p value. For example let's say I set the voxel threshold to
> an uncorrected level of p<0.01 and get a list of regions, most of which I
> would predict. Typically the areas I am interested in are in the
> range of p<0.009 to p<0.0001 (uncorrected). When I set this to
> p<0.001 I obviously lose some of the areas. Is using a threshold of
> p<0.01 (uncorrected) acceptable or should I use p<0.001?
I would report the results at two levels. First go through the
anticipated regions using a SVC (at p <0.05). Second report anything
you had not predicted (but is interesting) 'descriptively' at p <0.05
uncorrected. The second reports can be used by you or others to guide
anatomically specific hypotheses in future experiments. Say at the
beginning of the results that you will be reporting the results at
these two levels.
> My next step is to focus on the areas of interest, basal ganglia (e.g
> globus pallidus) or frontal cortex and use the SVC option. When I run
> the SVC do I use the voxel or cluster level (corrected) p value ?
Use a voxel-level SVC p value.
> 2. SVC
>
> In one of your previous emails to me you suggested using a SVC with a
> radius of 16 mm for anticipated areas and a correction for the whole
> volume for non anticipated areas.
>
> In my case the anticipated areas are cortical (eg parietal or frontal
> gyri) and subcortical (basal ganglia) and as such will have different
> shapes and volumes. Is it appropriate therefore to use a fixed value
> (16 mm) for all these regions or should I use different radii for each
> of the different regions ? I have been using the sphere option and
> specifying a 16 mm radius for the subcortical regions but am unclear
> what to use for the cortical regions. In one of your previous papers,
> J. Neuroscience Dec 1999 p 1087, you based your corrections on the
> volume of interest and cited Filipek et al 1994. and Worsley et al
> 1996. I have the Worsley paper and know there is a table that lists the
> cc values of different areas.
I think you have the latitude to change the radius depending on the
structure involved. You could use any published data as a guide. The
caudate, for example, would have a small radius (say 6mm), whereas
parietal cortex should be greater unless you specify which part (e.g.
IPS).
[Who/when ???]
> 5) In the last mail, you said that for flipped vs non-flipped analyses
> I had to consider flipped images as coming from different subjects. As
> I work in PET I should thus do a multi-study analysis? Can't I just say
> the flipped images come from the same subject but are from a different
> condition?
No, because you want to remove the main effect of hemisphere. Treat each
flipped image as coming from another subject. This gives a better model
that accommodates both the subject effect and hemisphere effect as
confounds. Hemisphere x condition effects can be tested using the
appropriate t-contrast.
http://www.mailbase.ac.uk/lists/spm/200005/0142.html
[K Friston ?]
> Does anyone have a C program which can be implemented within
> MATLAB or otherwise, which can read in an image in Analyze format,
> then multiplies each and every voxel by a user defined scale factor
> to produce a new 'scaled' image?
The easiest way is to change the scalefactors in the .hdr files,
which can be done something like:
V = spm_vol('imagename.img');
V.pinfo(1:2,:) = V.pinfo(1:2,:)*scalefactor;
spm_create_image(V);
Alternatively, the ImCalc button will do this. Select the image you
want to scale, then enter an output filename. The operation you want
to perform is then something like i1*scalefactor.
[John Ashburner]
The easiest way of converting to a different datatype [e.g. from 64-bit to
16-bit] is probably something like:
VI = spm_vol(spm_get(1,'*.img'));
VO = VI;
VO.fname = '16_bit_version.img';
VO.dim(4) = spm_type('int16');
dat = zeros(VO.dim(1:3));
for i=1:VI.dim(3),
    dat(:,:,i) = spm_slice_vol(VI,spm_matrix([0 0 i]),VI.dim(1:2),0);
end;
spm_write_vol(VO,dat);
clear dat
[John Ashburner 25 July 2000]
> I need to write a script to pull out the number of voxels activated in a
> given ROI for a group of subjects. I have done the estimation and
> results etc to build contrast images etc. I presume the con images just
> hold the z score (???) for a given location. It looks like the spmT
> images are just coordinates?
The con images are contrast-weighted estimated parameter images. The
t images are the computed t-values for a given contrast.
> So one, can someone tell me the correct files to use to do the
> analysis I would like to? And second, a more general question, is there
> a description somewhere of what information each file holds etc that is
> produced at Results time. I can piece together information from here
> and there, but it would be nice to have a concise description of all
> the files from one locale.
Yes, the documentation seems to be a bit sparse here. Type 'help
spm_getSPM' for some information about the output images.
> in the img files, spm_read_vols returns a 3d volume and another
> matrix which appears to be a set of coordinates. What is the purpose
> of the xyz matrix (just curious and trying to understand the innards
> of spm)
XYZ denotes the coordinates of (original) image intensities as stored in
Y.mad. There is more information on this in spm_spm.m (l. 148 - 169),
which should clarify things.
[Stefan Kiebel 24 July 2000]
Realignment is not a panacea for all movement artifacts within the scanner.
Some of the artifacts it does not correct are:
Interpolation error from the resampling algorithm used to
transform the images can be one of the main sources of motion
related artifacts. When the image series is resampled, it is
important to use a very accurate interpolation method such as
sinc or Fourier interpolation.
When MR images are reconstructed, the final images are usually
the modulus of the initially complex data, resulting in any
voxels that should be negative being rendered positive. This has
implications when the images are resampled, because it leads to
errors at the edge of the brain that can not be corrected however
good the interpolation method is. Possible ways to circumvent
this problem are to work with complex data, or possibly to apply
a low pass filter to the complex data before taking the modulus.
The sensitivity (slice selection) profile of each slice also
plays a role in introducing artifacts.
fMRI images are spatially distorted, and the amount of distortion
depends partly upon the position of the subject's head within the
magnetic field. Relatively large subject movements result in the
brain images changing shape, and these shape changes can not be
corrected by a rigid body transformation.
Each fMRI volume of a series is currently acquired a plane at a
time over a period of a few seconds. Subject movement between
acquiring the first and last plane of any volume leads to another
reason why the images may not strictly obey the rules of rigid
body motion.
After a slice is magnetised, the excited tissue takes time to
recover to its original state, and the amount of recovery that
has taken place will influence the intensity of the tissue in the
image. Out of plane movement will result in a slightly different
part of the brain being excited during each repeat. This means
that the spin excitation will vary in a way that is related to
head motion, and so leads to more movement related artifacts.
Ghost artifacts in the images do not obey the same rigid body
rules as the head, so a rigid rotation to align the head will not
mean that the ghosts are aligned.
The accuracy of the estimated registration parameters is normally
in the region of tens of micrometres. This is dependent upon many
factors, including the effects just mentioned. Even the signal
changes elicited by the experiment can have a slight effect on
the estimated parameters.
[John Ashburner 4 Aug 2000]
> 1) Is this correct: the more basis functions are used, the better the
> result?
In theory, the best results can be obtained with the most basis functions,
although the amount of regularisation may need to be tweaked to achieve this.

> 2) Why is the default ( 7 8 7 ), not for example ( 7 7 7 )? In other
> words, why not the same number for each axis?
The dimensions of the SPM template images are 91x109x91. With the 7th lowest
frequency basis function, there are 3 whole cycles ((n-1)/2) over the 91 voxels,
meaning that a period covers 30.333 voxels. The 8th basis function has 3.5
cycles over the 109 voxels, so a whole period lasts for 31.142 voxels.
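The period arithmetic generalises: basis function k has (k-1)/2 whole cycles over the dimension, so its period is dim / ((k-1)/2). A small Python check against the two numbers above:

```python
def dct_period(dim, k):
    """Spatial period (in voxels) of the k-th lowest-frequency DCT basis
    function: basis k has (k - 1) / 2 whole cycles over dim voxels."""
    cycles = (k - 1) / 2
    return dim / cycles

print(round(dct_period(91, 7), 3))    # 30.333
print(round(dct_period(109, 8), 3))   # 31.143
```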

> 3) How does the choice of basis functions ( x y z ) affect the spatial
> normalization of each axis?
This is a very difficult one to explain without pictures. Basically, the
choice of basis functions determines the types of deformations that can be
modelled. Displacements in all three directions are modelled by the same
number of parameters. The dimensions (e.g., [7 8 7]) reflect how many
low frequency coefficients of a 3D DCT are used to model displacements
in each of the directions. It is difficult to explain, but I have included
a few lines of Matlab that may illustrate the point. The effect of the
regularisation is not modelled though. To change the number of basis
functions in the different directions, you would modify the first line,
before copying and pasting into Matlab.
d = [8 3];                        % number of basis functions in each direction
[X1,X2] = ndgrid(1:64,1:64);      % regular grid of co-ordinates
B1 = spm_dctmtx(64,d(2));         % DCT basis for the second direction
B2 = spm_dctmtx(64,d(1));         % DCT basis for the first direction
Y1 = X1 + B2*randn(d(1),d(2))*20*B1';   % randomly deformed co-ordinates
Y2 = X2 + B2*randn(d(1),d(2))*20*B1';
plot(Y1,Y2,'k',Y1',Y2','k');      % plot the deformed grid
axis image xy off
[John Ashburner 27 Mar 2000]
> I have a couple of basic questions regarding the code contained in the
> spm_spm.m file for the estimation of map smoothness from the residuals.
The smoothness estimation is described in:
SJ Kiebel, JB Poline, KJ Friston, AP Holmes, KJ Worsley. Robust
Smoothness Estimation in Statistical Parametric Maps Using Standardized
Residuals from the General Linear Model. NeuroImage 1999;10:756-766.
Note that the general principle used for the smoothness estimation, as
described in this paper, is employed in the current version of SPM
(SPM99); however, there are differences in the algorithm/implementation
due to new developments made by Keith. These developments allow for
non-stationary smoothness and involve taking the expectation of the
determinant of the [co]variances of the first spatial partial
derivatives of the residuals, as opposed to taking the determinant of
the expected [co]variance. For stationary fields these are the same.
For non-stationary fields the new estimator is valid.
>
> 1) Is the FWHM calculated for the t-field components e_i, the t-field
> t_null, or the Gaussianized T-field Z_null? I suspect what is desired is
> the second (t_null), but could not identify code that converted from the
> components e_i to t_null.
In SPM99, the FWHM is calculated on the residual fields. This allows
one to estimate the smoothness in terms of FWHM of the resulting
t-field under the null hypothesis. The smoothness of the residual
fields is an estimate of the smoothness of the underlying component
fields. These component fields are not just those of a t field; they
could also be for an F field or any other statistic. They are a
generic representation of the data (not any statistical process derived
from the data).
> 2) This question may be the answer to my first. What is the purpose of this
> line of code at position 1142?:
>
> %adjust FWHM such that prod(1/FWHM) = (unbiased) RESEL estimator
> %
> FWHM = FWHM*((RESEL/prod(FWHM(1:N))).^(1/N));
The important thing here is that instead of taking the determinant of
the sum of the squared partial derivatives, one takes the sum of the
determinants of all squared partial derivatives (see above). Effectively,
this enables one to estimate the RESEL count in non-isotropic and
non-stationary data. We no longer deal with FWHM or smoothness per se
but consider the number of RESELs the search volume comprises. The
computation of corrected p-values, for any statistic, only requires this
[smoothness-independent] volume metric. The FWHM characterization above,
saved by spm_spm, is the equivalent stationary [non-isotropic] FWHM that
corresponds to the number of RESELs in the search volume (it is not
actually used by later routines). This stationary FWHM equivalent is
given by the line above.
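A quick numerical sketch of what that rescaling line does (Python with made-up numbers, purely illustrative): each per-axis FWHM is multiplied by one common factor, so the product of the adjusted FWHMs matches the resel-based estimate while the anisotropy ratios between axes are untouched.

```python
import numpy as np

# Sketch of the spm_spm rescaling line quoted above. The per-axis FWHM
# estimates are scaled by a single common factor so that their product
# matches the resel-based estimate RESEL; the ratios between axes (the
# anisotropy) are preserved. All values here are invented for illustration.
N = 3
FWHM = np.array([8.0, 9.5, 11.0])    # per-axis FWHM in voxels (illustrative)
RESEL = 700.0                         # resel-based product estimate (illustrative)

FWHM_adj = FWHM * (RESEL / np.prod(FWHM)) ** (1.0 / N)

print(np.prod(FWHM_adj))             # equals RESEL
print(FWHM_adj / FWHM)               # identical scale factor on every axis
```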
> Perhaps this is the conversion that I'm looking for?
No, not really. As stated in the paper (p 759, 2nd column), the
conversion you are looking for no longer exists in SPM99. The
smoothness of the Gaussianized T-field was last estimated in SPM96, but
was abandoned in SPM97 for the more robust estimators based on the
residuals.
>
> 3) I couldn't find any evidence of the effective degrees of freedom
> correction that is necessary to compensate for working with the estimated
> component fields [i.e., the correction factor (v-2)/(v-1)]. Is this
> actually in the code someplace and I'm missing it?
You are right, this factor is now redundant because we are taking the
expectation of the determinant (as opposed to the determinant of the
expectation).
[Stefan Kiebel 21 Mar 2000]
> I need a reference for the adjustment that is performed to correct for
> non-independence of voxels when determining the significance level of
> activations in the SPM(t) maps for fMRI.
>
> I'm not certain which paper would be the correct one to reference.
> Guidance would be most sincerely appreciated!
This is simply Gaussian Field Theory:
Friston KJ, Holmes AP, Worsley KJ, Poline JB, Frith CD, Frackowiak RSJ.
Statistical Parametric Maps in functional imaging: A general linear
approach. Human Brain Mapping 1995;2:189-210.
Worsley KJ, Marrett S, Neelin P, Vandal AC, Friston KJ, Evans AC. A unified
statistical approach for determining significant signals in images of
cerebral activation. Human Brain Mapping 1996;4:58-73.
[Karl Friston 11 Jul 2000]
> Why and when should I use second level analyses with PET? If, for
> instance, I want to compare flipped with non-flipped images in one
> group of subjects, I think I can do that without second level analyses.
> If I want to compare the difference between flipped and non-flipped in
> one group compared to another group of subjects, it seems to me I can
> still do this without second level analyses, but I think in one of the
> mails of the archive I read that you should do that with second level
> analyses. Why?
The same applies to PET and fMRI. First-level analyses use
within-subject variance and provide for inferences that generalize to
the subjects studied. Second-level analyses use
between-subject/session differences, whose estimator is the correct mix
of within- and between-subject error to give a mixed-effects (i.e.
random) analysis. This allows you to generalise to the population from
which the subjects came.
In PET (but not fMRI), the similarity of between- and within-subject
variances, and of the number of scans per subject to the number of
subjects, means that the difference between first- and second-level
analyses is much less severe. Traditionally PET studies are
analysed at the first level. Second-level analyses are usually
employed when you want to make an inference about group differences
given some within-subject replications. In your example you would
collapse repeated measures of flipped vs. unflipped into one contrast
per subject and then compare the contrasts at the second level. This
would be less sensitive but would allow you to generalise to the
populations from which the subjects came.
[Karl Friston 22 Mar 2000]
We have problems with a simple activation task for two groups: patients
and healthy controls. We are using a block design with alternating rest
and activation conditions (TR=3s, 100 measurements, 10 measurements per
epoch). Everything was preprocessed and put in one big statistical
matrix. The individual contrasts show that the healthy controls have
activation in brain regions where the patients don't activate. I get
corresponding results when I set up contrasts like 0 0 0 0 1 1 1 1, where
all patients are set to one and the controls are set to zero, and vice
versa. Also, contrasts like 1 1 1 1 -1 -1 -1 -1 seem to give me good
results for the interaction. Now, is this method valid?
This depends on the inference you want to make from the comparisons. You
are describing a fixed effects model, so the statistical inference is
restricted to the specific group of patients and specific group of
controls you are studying. Usually in comparing patients and controls you
would like to generalise your inferences to the population of patients and
controls. This can be implemented in SPM by the 'second level' analysis
you describe, effecting a random effects model where the error variance is
solely the inter-subject (i.e. intra-population) variance.
When I do a second level analysis with the individual contrast images
(what information do these images contain, anyway?)
The contrast images represent spatially distributed images of the
weighted sum of the parameter estimates for that particular contrast. In
essence, and for your particular case, it's like a difference image for
(activation - rest). You need one contrast image for each patient and each
control. By doing that you are collapsing over intra-subject variability
(to only one image per contrast per subject), and the image-to-image
residual variability is now between-subject variance alone.
 With a two-sample
t-test or with one-sample t-tests for each group I get really strange
results. Suddenly the patients are activating more than the controls in
brain regions where not even one patient activated individually. Things
change again dramatically (and get even stranger) when I use
proportional scaling in the first level analysis. Now I would like to
know what exactly the second-level analysis tells me and whether it
is valid just to work with the first level analysis.
There are many possible reasons for the differences, which are basically
telling you that the error maps for the fixed and random
effects models are different (as might be expected). Usually
proportional scaling would be used in the first level of analysis,
because you want the contrast images entering into the second level of
analysis to be on the same scale. You don't tell us how many
subjects are in each group, but I infer from the fixed effects
contrasts that you have four subjects in each group. In this case, you
will not have very many degrees of freedom for the second level of
analysis and will therefore lack power. In general, the recommendation
would be to use 10-12 subjects per group for a second level analysis, so
adding subjects will help considerably.
As to which method is 'valid', that depends on the nature of the
statistical inference you wish to make. In general for comparisons
between patient populations and control populations a random effects
model will be more appropriate, as one would like to generalise the
result beyond the specific individuals studied to the population. You
might be interested in the excellent summary of the rfx discussions
prepared by Darren Gitelman, which is at
http://www.brain.nwu.edu/fmri/spm/ranfx.html
[Geraint Rees 9 Feb 2000]
> We have a 2-group PET study (9 healthy (G1) vs 8 patients (G2)) with 6
> conditions (N,H,A,B,C,S) and 2 replications per condition.
> N and H are control conditions for the S activation condition.
> A is the control condition for the B and C activation conditions.
> We wanted to compare the two groups for the S-N, S-H and C-A
> contrasts.
> We first defined individual contrasts and we then performed second
> level analyses in order to compare the two groups, i.e., the S-N(G1) vs
> S-N(G2) interaction; the S-H(G1) vs S-H(G2) interaction; the C-A(G1) vs
> C-A(G2) interaction. These analyses have very low degrees of freedom
> (15). But, since each individual contrast involved in these analyses
> was obtained from 4 scans (2 conditions, 2 replications), the total
> number of scans involved in the analysis is 4 x (9 + 8) = 68.
> Is there a means to correct df with the real number of scans?
No, not really. There is a critical distinction between a fixed effects
(FFX) and random effects (RFX) analysis of your data with respect to the
degrees of freedom and the inference. You described the RFX analysis,
leaving you with degrees of freedom which are a function of the number
of subjects and the design matrix employed at the 2nd level analysis.
In terms of inference, the difference between a FFX and RFX analysis is
that with a RFX analysis you generalise your inferences to the
population of the subjects/patients. With a FFX analysis, you make
inferences only about your measured data. However, the more general
inference facilitated by a RFX analysis has its price in the lower
degrees of freedom available (given that you have more than 1
scan/subject).
There is a helpful website about RFX analyses put together by Darren
Gitelman, where he compiled some references and many of Andrew's answers
to the SPM mailbase into a knowledge base about RFX analyses:
http://www.brain.nwu.edu/fmri/spm/ranfx.html
Concerning the degrees of freedom:
In a FFX analysis, you analyze the data only at the 1st level. The
estimated error variance at a voxel is a function of the model and the
actual fit to the data, i.e. you look at the variance over scans. Here
one usually has high degrees of freedom for a group study, because
the degrees of freedom are 'number of scans' - 'rank of design matrix'.
In a RFX analysis, one wants to look at the variance over subjects. You
do this in SPM99 by fitting a model to the weighted parameter estimate
images of the 1st level analysis (contrast images). The error variance
of this 2nd level model is then over subjects, not over scans, because
you have got one image per subject. The degrees of freedom for the
estimate of this error variance are here 'number of subjects' - 'rank of
2nd level design matrix', i.e. the degrees of freedom are lower than in
the 1st level analysis, in your case 15 = 17-2.
[Stefan Kiebel 30 Jun 2000]
(PET)
> I have a few more questions concerning the multi-subject correlation
> analysis.
>
> Does "Covariates only: interaction with subject" model take into account
> the dependence within subject?
> If it does, how does it do that?
> This is important because we have 8 scans per subject.
>
> We specified the following contrasts (11 subjects) to test
> 1) positive correlation: 1 1 1 1 1 1 1 1 1 1 1
> 2) negative correlation: -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.
> Are these t-contrasts correct for testing the average positive and
> negative correlation?
>
> I wonder whether these results are fixed effects analyses, that cannot
> be generalized into population level.
>
You are correct that this is a fixed effects analysis, and cannot strictly
speaking be generalised into a population level inference.
The way to perform a random effects analysis on this study is to generate a
contrast parameter estimate map for each subject (by simply looking at the
contrasts [1 0 ... 0], [0 1 0 ... 0] ... [0 ... 0 1] in the results
section). You then perform a one-sample t-test (i.e. compare the average of
your parameter estimates with zero) on these (con_00*.img) maps. You do the
same thing with the [-1 0 ... 0], [0 -1 0 ... 0] ... [0 ... 0 -1] contrasts
to check the negative correlations.
The RFX model will tell you if all subjects "activate" in the same location
and with roughly the same magnitude. If you want to answer the slightly
less stringent question "do they all activate in the same location?" you
can use a conjunction across subjects instead. It's really easy. Once you
have entered all the individual contrasts above, you simply select them all
(positive and negative separately) in the results section using the control
button.
[Jesper Andersson 12 July 2000]
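The two-stage random-effects recipe described above (one contrast value per subject from the con_00*.img maps, then a one-sample t-test of their mean against zero) can be sketched numerically. This Python fragment uses made-up data and is purely illustrative, not SPM code; in practice the test is carried out at every voxel.

```python
import numpy as np

# Minimal sketch of the two-stage random-effects procedure: one contrast
# estimate per subject (in SPM, the voxel values of the con_00*.img
# images), then a one-sample t-test of their mean against zero. The
# error term is between-subject variance only, with n - 1 df.
rng = np.random.default_rng(0)
n_subjects = 11
con = rng.normal(loc=0.8, scale=1.0, size=n_subjects)  # per-subject contrast estimates

mean = con.mean()
se = con.std(ddof=1) / np.sqrt(n_subjects)   # between-subject error only
t = mean / se                                # compare against t with n_subjects - 1 df
print(f"t({n_subjects - 1}) = {t:.3f}")
```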
> >In fact the second-level analysis, used in SPM, assumes that the design
> >matrices are identical for each subject.
>
> I've seen this point mentioned before but I think I may have missed its
> importance. Does this mean that if you randomize the order of
> presentation across subjects then you cannot use an RFX analysis, because
> each subject has a different design matrix? I think this is rarely done
> in fMRI studies but it is normal practice for PET. I'm wondering whether
> this means that the assumptions underlying an RFX analysis are violated
> when analysing (typical) PET data.
The critical thing, about using the simple 2-stage analysis to implement
an exact RFX analysis in SPM, is that the contribution from the error
variance (Ce) at the 1st level is the same for each subject. This is
pinv(X)*Ce*pinv(X)'
where X is the 1st-level design matrix. Because
pinv(X)*Ce*pinv(X)' = pinv(X(i,:))*Ce*pinv(X(i,:))'
where i is any permutation of indices, randomizing the order of
conditions over subjects will have no effect. Indeed, randomizing the
onset times of different trial types in fMRI will have no effect
(ignoring minor interactions with serial correlations). The only
situation where one should be careful is when the number of trials, in
X, varies substantially from subject to subject.
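The permutation identity above can be checked numerically. A Python sketch (illustrative only): with i.i.d. first-level error, i.e. Ce proportional to the identity, reordering the rows of X leaves pinv(X)*Ce*pinv(X)' unchanged.

```python
import numpy as np

# Numerical check: with i.i.d. scan-to-scan error (Ce = sigma^2 * I),
# permuting the rows of the design matrix X (reordering conditions or
# scans) does not change pinv(X)*Ce*pinv(X)', the first-level error
# contribution carried up to the second level.
rng = np.random.default_rng(1)
n_scans, n_params = 20, 3
X = rng.normal(size=(n_scans, n_params))
Ce = 2.5 * np.eye(n_scans)                   # i.i.d. error covariance

perm = rng.permutation(n_scans)              # an arbitrary reordering
Xp = X[perm, :]

V1 = np.linalg.pinv(X) @ Ce @ np.linalg.pinv(X).T
V2 = np.linalg.pinv(Xp) @ Ce @ np.linalg.pinv(Xp).T
print(np.allclose(V1, V2))                   # True
```

This also shows where the caveat about serial correlations comes from: if Ce is not invariant under the permutation (e.g. an AR structure over scans), the identity no longer holds exactly.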
> Would it make a difference if one analysed a group of subjects (with
> different stimulus presentation orderings) using the 'conditions x subj'
> option before generating the subject-specific contrasts? In that
> case, each subject would be part of the same design matrix, although the
> contrasts would come from independent (and different) subsets.
Yes. For PET one must always model the effects in a subject-separable
fashion (i.e. 'conditions x subj'). This is enforced in the fMRI setup
because each session is specified separately.
[Karl Friston 12 July 2000]
> Reading the SPM99 documentation, I have understood that the statistics
> corresponding to "random effects" are done using a two-stage approach,
> i.e. calculating one contrast image for each subject (as if only one
> determination had been performed on each subject, so that the residual
> df is number_of_subjects - 1) and then running a second level analysis
> (I did not find out how this analysis is performed. Is it by comparing
> the mean t value to 0?). Is this correct?
Yes, the t (and F) tests are against the null hypothesis of zero mean,
using the one- (or two-)sample t-test option in SPM99.
> In books concerning variance analysis, the random effects (mixed) model
> is generally handled by calculating an F value as the ratio (main-effect
> variance) / (interaction variance). Then, the interaction df is
> (number_of_subjects - 1)*(number_of_replications_per_subject - 1). The
> contrasts of interest are then calculated in the same way as in SPM, but
> the interaction variance is taken as the residual variance.
>
> 1) Did I correctly understand the random effects analysis in SPM?
Yes. In the special case of two conditions (two levels of your main
effect) and one replication per subject (or data averaged over balanced
replications), the "conventional" F-test you describe and the
F-contrast [1] on an SPM one-sample t-test are equivalent. Because the
con*.imgs already contain the effect parameters for each subject, the
residual error in the SPM model is identical to the subject x effect
interaction, the denominator of the conventional repeated-measures
ANOVA.
When there are more than two levels of your factor and you want an
omnibus F-test (rather than a specific planned comparison, i.e.
t-contrast), you must use a PET design. However, the resulting analysis
uses a pooled error term (even for factorial designs) and there is
currently no correction for sphericity violations (so your p-values may
be invalid). This is why we advise keeping second-level models to
one/two-sample t-tests on specific t-contrast images.
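The equivalence claimed above (for two conditions and one replication per subject) can be verified with a small simulation. This Python sketch uses invented data, purely for illustration: the repeated-measures ANOVA F, with the subject x effect interaction as error, equals the square of the one-sample t on the per-subject differences.

```python
import numpy as np

# Two conditions, one replication per subject: the repeated-measures
# ANOVA F (interaction as error term) equals t^2 from the one-sample
# t-test on per-subject differences (the con* images in SPM).
rng = np.random.default_rng(2)
n = 10                                       # subjects
y = rng.normal(size=(n, 2))                  # conditions A and B
y[:, 0] += 0.5                               # a small A > B effect

# One-sample t on per-subject differences
d = y[:, 0] - y[:, 1]
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))

# Repeated-measures ANOVA, subject x condition interaction as error
grand = y.mean()
subj = y.mean(axis=1, keepdims=True)
cond = y.mean(axis=0, keepdims=True)
ss_cond = n * ((cond - grand) ** 2).sum()    # df = 1
resid = y - subj - cond + grand
ss_err = (resid ** 2).sum()                  # df = n - 1
F = ss_cond / (ss_err / (n - 1))

print(np.isclose(F, t ** 2))                 # True
```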
> 2) As the number of values for the contrast is always low (the number
> of subjects), is it better to use a non-parametric test to compare the
> mean t value to 0?
It may be, particularly with ~10 or fewer subjects; see:
http://www.mailbase.ac.uk/lists/spm/2000-07/0053.html
(though if you use permutation tests, as in SnPM, you are not really
treating subjects as a random effect - but then again, how often are
subject samples for imaging experiments true random samples from the
population?)
> 3) Is the first order risk (false positives) in the two-stage approach in
> SPM the same as (or lower or greater than) in the classical approach
> (one-stage analysis, F determination and contrasts deduced using the
> interaction variance in place of the between-replicates variance)?
>
> 4) The same question for second order risk (power).
They are the same in both cases (if I have understood you correctly),
for the reasons given above.
[Rik Henson 17 Aug 2000]
The contrast 1, 0 tests whether the parameter estimate for the first
covariate is significantly different from zero. The contrast 1, -1
tests whether the parameter estimate for the first covariate is
significantly different from the parameter estimate for the second
covariate. So if a voxel is picked up by the 1, -1 contrast but not
the 1, 0 contrast, then this presumably means that the parameter
estimate for the second covariate is negative (unless it's something
funny to do with the error used for each of these comparisons, but I
don't think so). Obviously a glance at the parameter estimates
themselves will establish whether this is the case, and indeed at
least some of these same voxels may show up in the contrast 0, -1
(although others may be subthreshold in this comparison).
I am not sure that contrasts such as 1, 0 are really 'typical' in
event-related experiments. Many groups design their experiments to
look for differential effects between different types of event, and I
must admit that personally I would always feel much more comfortable
with that approach. Otherwise you are imaging everything that is
time-locked to your events, much of which may not be relevant to the
cognitive component that you are interested in.
Also, if you only have two types of events, then there might be a
significant degree of collinearity between your covariates. In the
worst case, for example, where events A and B alternate with a fixed
SOA of, say, about 12 seconds, the second covariate might look a bit
like the first one multiplied by -1 (just because event B occurs
during the bits of the time series when event A isn't occurring). If
so, you could easily be misled looking at main effects contrasts like
1, 0 or 0, 1. Whether a voxel shows up in the first or second of
these contrasts may be determined largely by noise.
[Richard Perry 2 Aug 2000]
> In the SPM course notes chapter, "Statistical Models and Experimental
> Design," it is written that "a *contrast* is an estimable function with the
> additional property c^T betahat = c'^T Yhat = c'^T Y ... Thus a contrast is
> an estimable function whose c' vector is a linear combination of the
> columns of X."
>
> Note that being a contrast is a property of c, not c'. I claim that any
> estimable function c (shorthand for the estimable function c^T beta) must
> be a contrast.
I think what was meant here is that an estimable function is defined by the
property above (which is not really a definition; it is just a corollary of
the usual definition that c^T beta is estimable if there exists c' such that
E(c'^T Y) = c^T beta for any beta. I guess it could be used as a
definition...).
I do agree that the sentence in that text is misleading ....
> Why: c estimable means that there is a c' such that
> c^T = c'^T X.
yes, and your "means" is an "if and only if" ...
> Pick such a c'. Now define c'' to be the orthogonal
> projection of c' onto the range of X. Since c' - c'' is orthogonal to the
> range of X, (c' - c'')^T X = 0. Hence c^T = c''^T X.
yes, more simply put, c' need not be unique, but its orthogonal projection
onto the range of X is unique. Another way of seeing c' is to think of it
as a constraint on the model X ... (so it must lie in its space!)
> Moreover, c''^T
> (Yhat - Y) = 0, since c'' lies in the range of X, and Yhat - Y is
> orthogonal to it. Thus, c^T betahat = c''^T X betahat = c''^T Yhat =
> c''^T Y.
>
> Is this correct?
absolutely, and it is used in the spm routines.
>
> Furthermore, a natural extension of this is that c is estimable (and hence a
> contrast) if and only if it is orthogonal to the null space of X.
yes, or more simply that the contrast lies in the range of X^T, and that's
how we check estimability ... (have a look at spm_SpUtil, spm_FcUtil,
and spm_sp if you want more details on how this is handled).
In fact, because as you point out c'' is in the range of X, we use its
coordinates in an orthonormal basis of X to save memory and computation
time, and work in the parameter dimension rather than the temporal
dimension.
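The estimability check discussed here can be sketched in a few lines of Python (illustrative only; the `is_estimable` helper is written for this example and is not one of the SPM routines named above): c'beta is estimable iff c lies in the range of X', which we test by projecting c onto the row space of X.

```python
import numpy as np

# c'beta is estimable iff c lies in range(X'), i.e. iff c is orthogonal
# to the null space of X. We test this by projecting c onto the row
# space of X and comparing. The design below is rank deficient (two
# group indicators plus a redundant constant column), so single-cell
# effects are NOT estimable but the group difference is.
def is_estimable(c, X, tol=1e-10):
    """True if the weight vector c lies in the range of X'."""
    proj = X.T @ np.linalg.pinv(X.T) @ c     # projection onto range(X')
    return np.allclose(proj, c, atol=tol)

# Two groups of 4 scans; third column (constant) = sum of the first two
X = np.array([[1, 0, 1]] * 4 + [[0, 1, 1]] * 4, dtype=float)

print(is_estimable(np.array([1.0, -1.0, 0.0]), X))   # True: group difference
print(is_estimable(np.array([1.0, 0.0, 0.0]), X))    # False: confounded with the constant
```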
>
> A more specific question about contrasts: The same section also states
> that "For most designs, contrasts have weights that sum to zero over the
> levels of each factor." If the most liberal definition of contrast is
> used, which I claim above is equivalent to c being orthogonal to the null
> space of X, then this would imply that the row sums of the design matrix
> vanish.
not quite sure what you mean here; what was meant is that in simple
factorial designs, valid contrasts usually have weights that sum to zero.
> (I
> assume that the constraint that c_0 vanish is to insist that the contrast
> not "see" the constant term; it's not made necessary by the definition
> itself.)
yes. But in any case, the spm interface would not allow an invalid contrast.
[JeanBaptiste Poline 13 Feb 2000]
> I still have some basic questions about the contrasts in SPM99. I'm
> looking at a quite simple paradigm with the conditions [A B R] and would
> like to test for voxels where the activation for A is significantly
> higher than for B.
> If I look at the results for the contrast [1 -1] and plot the
> event/epoch-related response for these voxels, all plots show me
> activation for A and deactivation (a negative response function) for B. I
> just wonder what happened to the voxels where there is activation for
> both conditions but more for A....
> Is this a problem with my data or do I misunderstand something about
> SPM?
I don't think there is necessarily a problem with your data. It might
simply be a question of power. Obviously the largest differences in
your contrast [1 -1] will be found in the cases where there is an
activation in A and a deactivation in B. If your power isn't very high
(i.e. if you have a relatively small no. of events) it may be that you
are unable to detect the more subtle differences resulting from an
activation in A and a smaller activation in B. Try to lower the
threshold (you may have to lower it in the SPM defaults to ensure data
are saved for plotting) and have a look at some voxels with smaller
z-scores and see what you find. You should also be aware that an
"activation" or a "deactivation" is always relative to some baseline,
which may be more or less well defined. If you are using rapid stimulus
presentation (short SOA) without null events, it will be less well
defined, and it will be very difficult to discriminate between activations
and deactivations. In that case the interpretation of a positive
finding in the contrast [1 -1] can be a larger activation in A than in B,
or less deactivation in A than in B, or anything in between. If R in
your design denotes null events you are in a better position, and the
question of activations or deactivations should not be determined from
plots of event-related responses of event types A and B, but rather be
based on the [1 0 -1] and [0 1 -1] contrasts.
[Jesper Andersson 10 Jul 2000]
> What is the statistical value of the contrast of parameter estimates?
A contrast is just a specific weighting of the parameter vector. This is
used to specify a null hypothesis at each voxel (e.g. there is no
activation in condition 1 compared to the rest condition). At each
voxel, a t-value is then computed by dividing the scalar product of the
contrast and the parameter vector by the estimate of its standard error.
After this, a p-value (corrected for multiple comparisons) is computed
for each t-score. You can then assess the significance of this p-value
(e.g. p < 0.05).
[Stefan Kiebel 11 July 2000]
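The computation just described (the scalar product of contrast and parameter estimates, divided by its estimated standard error) can be sketched with made-up data at a single voxel. This is illustrative Python, not SPM code:

```python
import numpy as np

# Fit the GLM at one voxel, weight the parameter estimates with a
# contrast, and divide by the estimated standard error of that weighted
# sum to obtain a t-value. The design and data below are invented.
rng = np.random.default_rng(4)
n = 60
box = np.tile([1.0] * 10 + [0.0] * 10, 3)          # crude on/off regressor
X = np.column_stack([box, np.ones(n)])             # activation + constant
y = 2.0 * box + rng.normal(size=n)                 # simulated voxel time series

beta = np.linalg.pinv(X) @ y                       # parameter estimates
resid = y - X @ beta
sigma2 = resid @ resid / (n - np.linalg.matrix_rank(X))

c = np.array([1.0, 0.0])                           # "activation vs rest"
se = np.sqrt(sigma2 * c @ np.linalg.pinv(X.T @ X) @ c)
t = (c @ beta) / se
print(f"t = {t:.2f}")
```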
> If the selected contrasts for a conjunction analysis are not orthogonal,
> SPM asks for the orthogonalization order of the contrasts.
> The results seem to depend on which contrast is first in the
> orthogonalization order. The first contrast remains the same, but those
> that are non-orthogonal with respect to the first contrast change.
One assumption made for a conjunction analysis in SPM99 is that the
contrasts used are orthogonal, in such a way that the spaces spanned by the
contrasts are orthogonal to each other; i.e. if X is your design matrix
and c1 and c2 are two contrasts, then orthogonalizing c2 with respect to c1
means that c2 is changed such that
c1'*pinv(X) is orthogonal to c2'*pinv(X)
where pinv(X) = (X'*X)^(-1)*X'
>
> How should one interpret the results of an orthogonalized conjunction
> analysis?
I don't know of a generally applicable interpretation of an
orthogonalized conjunction analysis, but why not just say that you
modified the contrasts such that the spaces spanned by the contrasts are
orthogonal to each other. This removes common subspaces spanned by more
than one contrast from all but one contrast, which would otherwise make
your conjunction analysis invalid. Maybe others might want to comment
here.
[Stefan Kiebel 20 July 2000]
> I would like an opinion on the following issue: we have submitted a
> paper on a PET experiment, where we collected data from 11 volunteers
> and performed different comparisons between 3 experimental conditions
> and a baseline with SPM96. During the experiment, we collected online
> reaction times and accuracy scores. In the paper we report that the
> behavioral performances (both reaction times and accuracy) of the 11
> subjects were significantly different from each other (p=0.0001).
> However, we didn't include the behavioral scores in any way in the SPM
> analysis (one reason was that we had no hypothesis about any possible
> physiological impact of the behavioral scores). We have now been asked
> by a referee to include the behavioral data for each single subject
> in the SPM analysis as a confound. Are you aware of any publications
> where a similar approach was adopted? And, more generally, do you
> think such an approach would be correct?
This is my opinion:
This is a difficult question to answer in a general way. There are an
enormous number of instances where behavioural or psychophysical data
are entered into the design matrix as explanatory variables. The
objective here is generally to find the neurophysiological correlates
of the data in question. On the other hand, it would be ridiculous to
include a behavioural response variable as a confound if its variance
was caused by an experimental factor already in the design matrix (for
example, including visual analogue scores of pain as a confound in a
pain study would be silly).
The question you have to address is whether the RT and accuracy data
contain information that is independent of the effect you are
interested in and, if so, would the analysis be better if you used the
RT and accuracy data as surrogate markers for this confounding effect.
You then have to think carefully about the orthogonalization scheme you
would adopt in the case of collinearity.
If the RT and accuracy data represent measures of the process you are
interested in, then you could include the RT and/or accuracy data as
regressors of interest and report these effects.
In general, reviewers who specify that a particular statistical model
should be adopted before the report will be considered for publication
are, in my opinion, in danger of overstepping their brief. On the
other hand, reviewers are, generally, only trying to help you present
your ideas in a valid and clear fashion. It may be that the reviewer
thinks you are misattributing activations to one experimental cause
when there is another, more parsimonious explanation that is reflected
in the RT and accuracy data.
[Karl Friston 21 July 2000]
"eigenvariates" and "eigenimages" refer to a factorisation of functional
imaging data of the following form. Say you have your data stored in an nxm
matrix Y where n is the number of voxels in each image volume, and m the
number of scans in the time series. We may now factor Y in the following
form
Y = U*S*V'
where U is an orthogonal nxm matrix, S is a diagonal mxm matrix and V an
orthogonal mxm matrix. This is a bit like trying to factorise a number into
the product of three other numbers, i.e. there are a lot of ways of doing
it. To make the factorisation unique the first column of U multiplied with
the first number in S multiplied with the first column of V has to be the
combination which explains the maximum possible amount of the variance in
Y. The second combination has to be that which explains the most variance
in Y after the first one has been removed etc. This is called singular
value decomposition.
In neuroimaging the columns of U are typically/often denoted eigenimages,
and the columns of V may be called "eigenvariates", "eigentimecourses" or
something like that.
Its usefulness in neuroimaging comes from the similarity between the
factorisation above and the multivariate version of the general linear
model
Y = P*X' + E
where Y is still the data, X is the good old design matrix, the columns of
P contains the parametric images (one for each column of X) and E is the
error matrix.
For well behaved data (e.g. PET) the SVD factorisation can sometimes be
useful in that the "automatic generation" of the "design matrix" V can give
new insights to the experiment. It can also be used as a data reduction
method or as a preconditioning of data prior to e.g. ICA.
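The factorisation described above can be sketched numerically (this is an editorial illustration with NumPy on random toy data; the dimensions are arbitrary, not a real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 20                      # toy sizes: n voxels, m scans
Y = rng.standard_normal((n, m))     # stand-in for an n x m data matrix

# Y = U*S*V'  (economy-size singular value decomposition)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
# columns of U:    "eigenimages"
# columns of Vt.T: "eigenvariates" / "eigentimecourses"
# s: singular values, ordered so each successive component explains
#    the most remaining variance

assert np.allclose(Y, U @ np.diag(s) @ Vt)      # exact reconstruction
var_explained = s**2 / np.sum(s**2)
assert np.all(np.diff(var_explained) <= 0)      # decreasing order
```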
As a general reference I quite like the explanation given in "Numerical
Recipes in C" by Press et al. For Neuroimaging purposes you might refer to
Friston et al 1993 in JCBFM, or you could look for papers by S. Strother.
[Jesper, 20 Jun 2000]
> I have a question regarding the power of SPM results. Specifically I have
> FDG PET studies obtained in 17 adult controls and 7 children older than 6
> years of age. I performed a routine SPM analysis at the 0.05 level and no
> significant differences can be detected. The reviewer however argues that
> this is a null finding of group differences between small groups and thus
> I cannot conclude that subtle differences do not exist.
> How can I determine what power my results have?
I assume that you have the t-values from the SPM analysis comparing
responses at particular areas of interest between the adults and children.
Although the t-values themselves are not good measures of effect size
(because they are a function of sample size), for between group designs
there is a simple transformation that converts t to the effect size
statistic d, assuming that there is no treatment x subject interaction.
The general formula is d = [1/sqrt(p*q)] * t / sqrt(N), where p and q are
the proportion of cases in the two groups and N is the total sample size.
It seems to me that you would be able to use this effect size estimate in
the standard formulas for power (e.g., Cohen, 1977) to determine the power
for the specific effects in your study, to estimate sample size required for
future studies, and to reply to the reviewer.
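The conversion can be written out directly (a minimal editorial sketch; the t value of 2.0 below is hypothetical, while the 17-vs-7 group sizes come from the question):

```python
import math

def cohens_d_from_t(t, n1, n2):
    """Convert a between-groups t value to the effect size d:
    d = [1/sqrt(p*q)] * t / sqrt(N), with p, q the group proportions
    and N the total sample size."""
    N = n1 + n2
    p, q = n1 / N, n2 / N
    return t / (math.sqrt(p * q) * math.sqrt(N))

# e.g. the 17 adults vs. 7 children design, with a hypothetical t of 2.0:
d = cohens_d_from_t(2.0, 17, 7)
# equivalent textbook form: d = t * sqrt(1/n1 + 1/n2)
assert abs(d - 2.0 * math.sqrt(1/17 + 1/7)) < 1e-9
```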
Note that the situation is not as straightforward if one is using a within
subject design or a mixed design. One would generally want to test
specifically for a treatment x subject interaction as well as consider the
test-retest reliability of the dependent variable when computing the effect
size estimate. In SPM conjunction analysis seems to have been used to
address similar issues.
[Frank Funderburk 26 Jun 2000]
> I have a quick question: performing spm analysis of PET data comparing
> two groups (17 and 7 subjects) and not detecting any significant
> differences, what is the power for my null findings? In other words,
> how confident can I be that I did not miss effects larger than say
> 10% difference?
In theory you should be able to do this using the standard error of the
contrast of parameter estimates at a representative voxel. After
plotting the contrast of interest the variable SE in working memory
represents the standard error of that contrast. The probability of
detecting an activation of A% at a specificity defined by a T
threshold u (given the standard deviation of the parameter estimate is
SE) is (I think):
1 - spm_Ncdf(u*SE,A,SE^2)
This is the power or sensitivity.
For example if your corrected threshold is 4.2, SE = 2.6 and the grand mean
of your data was 100, then the power to detect a 10% activation would be:
1 - spm_Ncdf(4.2*SE,10,SE^2) = 0.3617
i.e. 36% power. I am not sure how rigorous this is (I am not very
expert in this) but you could certainly check this with your local
statistician.
[Karl Friston 26 Jun 2000]
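The calculation above can be reproduced with the Python standard library (an editorial sketch; note that spm_Ncdf takes a variance as its third argument, while NormalDist takes a standard deviation):

```python
from statistics import NormalDist

def power(u, SE, A):
    """Power = 1 - Ncdf(u*SE; mean A, variance SE^2): the probability
    that a true activation of size A exceeds the T threshold u, given
    the contrast's standard error SE."""
    return 1.0 - NormalDist(mu=A, sigma=SE).cdf(u * SE)

# the worked example from the message: u = 4.2, SE = 2.6, and A = 10
# (a 10% activation against a grand mean of 100)
p = power(4.2, 2.6, 10.0)
assert abs(p - 0.3617) < 5e-4   # ~36% power
```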
> I have a question regarding the power of SPM results. Specifically I have
> FDG PET studies obtained in 17 adult controls and 7 children older than 6
> years of age. I performed a routine SPM analysis at the 0.05 level and no
> significant differences can be detected. The reviewer however argues that
> this is a null finding of group differences between small groups and thus
> I cannot conclude that subtle differences do not exist.
> How can I determine what power my results have?
For voxelwise power estimates, one can apply the noncentral t or F
distributions - see for example Van Horn et al, NeuroImage 7, 97-107. I once
wrote some software to do voxelwise power calculations on SPM96 analyses that
may or may not be useful to you:
ftp://ftp.mrc-cbu.cam.ac.uk/pub/imaging/Power
However, I'm not sure whether the voxelwise approach is answering the correct
question. For example, an obvious question might be how much power you have to
detect a change in a given brain region, and this question is more complex to
answer within SPM, as even for a small region you are likely to have more than
a single measurement's worth of data. You could of course reduce the problem
by using regions of interest.
[Matthew Brett 26 Jun 2000]
I know of at least three articles that deal with power in spatially
extended statistical processes, in addition to articles from the FIL that
address power in previous versions of SPM. Note that as some important aspects
of SPM have changed in the latest version (such as relaxation of the
constraint that smoothness is equal at all voxels) the calculations in these
latter papers may not hold precisely.
Note that all these articles are available online at
http://www.idealibrary.com
free of charge, if your institution has a license.
Mapping Voxel-Based Statistical Power on Parametric Images
John Darrell Van Horn, Timothy M. Ellmore, Giuseppe Esposito, and Karen Faith
Berman
NEUROIMAGE 7, 97–107 (1998)
ARTICLE NO. NI970317
Factors That Influence Effect Size in 15O PET Studies:
A Meta-analytic Review
Sherri Gold, Stephan Arndt, Debra Johnson, Daniel S. O'Leary, and Nancy
C. Andreasen
NEUROIMAGE 5, 280–291 (1997)
ARTICLE NO. NI970268
Estimation of the Probabilities of 3D Clusters in Functional Brain Images
Anders Ledberg, Sebastian Åkerman, and Per E. Roland
NEUROIMAGE 8, 113–128 (1998)
ARTICLE NO. NI980336
not to mention SPM roots:
Detecting Activations in PET and fMRI: Levels of Inference and Power
K. J. Friston, A. Holmes, J-B. Poline, C. J. Price, and C. D. Frith
NEUROIMAGE 4, 223–235 (1996)
ARTICLE NO. 0074
and
COMMENTS AND CONTROVERSIES: How Many Subjects Constitute a Study?
Karl J. Friston, Andrew P. Holmes, and Keith J. Worsley
NeuroImage 10, 1–5 (1999) Article ID nimg.1999.0439
[Christopher Gottschalk 26 Jun 2000]
> Dr Friston: you specify below data with a grand mean of 100 - does this
> apply to an analysis in which grand mean scaling was set to 100 even
> though the raw data do not [usually] have this mean?
Yes. The grand mean is simply set to 100 arbitrary units.
> Equally true if ANCOVA or PS were used?
It does not really matter. You simply have to specify your activation
size in the appropriate units. These are usually adimensional and
scaled to the grand mean specified.
> and a further clarification: it appears what you propose applies to a
> given voxel or region, not over the SPM as a whole - true?
Absolutely.
> The formula for the power you specified is, I assume, for the type II
> error (the beta)? In other words this is the false rejection rate (the
> probability to call a voxel as not significantly different between
> groups when they really are different)?
Strictly speaking the power is the probability of correctly rejecting
the null hypothesis. This is 1 - p(type II error) conditional on the
alternate hypothesis being true.
The standard error should be the contrast of parameter estimate
divided by the t value if this helps.
[Karl Friston 27 Jun 2000]
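Friston's closing remark gives a direct way to obtain SE at a voxel; as a trivial sketch (the contrast estimate and t value below are hypothetical numbers, purely for illustration):

```python
# Per the note above: SE = contrast of parameter estimates / t value.
# c_est and t_val are made-up example numbers.
c_est, t_val = 8.5, 3.27
SE = c_est / t_val
assert abs(SE - 2.599) < 1e-2   # SE usable in the power formula above
```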
Excerpts from SPM-help: 8 Aug 2000, SPM and Power, by Kris Boksman@julian.uwo.
> Can anyone recommend a source for information on how to derive power
> estimates for fMRI analyses as implemented in SPM?
See these recent SPM list answers:
http://www.mailbase.ac.uk/lists/spm/200006/0193.html
http://www.mailbase.ac.uk/lists/spm/200006/0191.html
(Search http://www.mailbase.ac.uk/lists/spm/search.html for "power"
for more.)
[Thomas Nichols 8 Aug 2000]
> I am working on a paper and I need reference literature which explain
> two functions I used.
> I was using the 'mean & decay function' for my experiment to
> describe within-epoch adaptation over time.
Here you use 2 basis functions, which should capture an underlying
within-epoch adaptation. I would look at it as a specific case of the
general linear model as implemented in SPM99. This could be covered by
the following two references: The first is about the general linear
model approach, providing the framework for the core of SPM.
***
Friston KJ Holmes AP Worsley KJ Poline JB Frith CD and Frackowiak RSJ
Statistical Parametric Maps in functional imaging: A general linear
approach.
Human Brain Mapping 1995;2:189-210
***
The second is about basis functions for epochs, where your mean &
exponential decay function would just be another set of basis functions.
***
Friston KJ Frith CD Turner R and Frackowiak RSJ Characterizing evoked
hemodynamics with fMRI. NeuroImage 1995;2:157-165
***
> Also I used 'parametric linear modulation over time' to describe
> a general adaptation for ongoing stimulus tasks within a session.
The parametric linear modulation over time is implemented by a
convolution of the chosen basis function set with a series of 'stick
functions', where the height of each stick is modulated over time.
Essentially, this approach is largely equivalent to the one described in
the following paper, where the authors specify basis functions, which
are modulated versions (up to 2nd order) of a stimulus function.
***
Buchel C Wise RJS Mummery CJ Poline JB Friston KJ Nonlinear regression
in parametric activation studies. NeuroImage 1996;4:60-66
[Stefan Kiebel 27 Jun 2000]
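The stick-function construction described above can be sketched in a few lines (an editorial toy example: the onsets, TR, and crude gamma-shaped response are all made up for illustration, not SPM's actual basis set):

```python
import numpy as np

n_scans, TR = 120, 2.0
onsets = np.arange(10, 110, 10)          # hypothetical onsets, in scans

sticks_mean = np.zeros(n_scans)
sticks_mod = np.zeros(n_scans)
sticks_mean[onsets] = 1.0                # unmodulated stick train
# stick heights modulated linearly over time (zero-mean ramp):
sticks_mod[onsets] = np.linspace(-1, 1, len(onsets))

t = np.arange(0, 30, TR)
hrf = (t / 6.0) ** 2 * np.exp(-t / 3.0)  # crude stand-in for an HRF
hrf /= hrf.sum()

# convolve each stick train with the basis function, as described above
X = np.column_stack([
    np.convolve(sticks_mean, hrf)[:n_scans],
    np.convolve(sticks_mod, hrf)[:n_scans],
])
# X: a main-effect column plus its linear time-modulation column
assert X.shape == (n_scans, 2)
```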
> I need to explain where in the analyses of the fMRI images the
> intrinsic correction for temporal autocorrelation occurs. I've not been
> able to figure this out on my own. (sorry if this question is a little
> daft)
In SPM, the temporal autocorrelation is taken into account at the
parameter estimation stage and in the statistical inference.
The design matrix and the data are both convolved with a filter kernel,
which is usually a band-pass filter, i.e. it is effectively a combination
of a user-specified low-pass and high-pass filter. This changes the
autocorrelation structure of the data such that the actual
autocorrelation structure is given by convolution of the (unknown)
intrinsic autocorrelation with the band-pass filter kernel. One goal of
this filtering is to impose an autocorrelation structure on the data,
which is not too different from the assumed autocorrelation
(see below).
At the level of statistical inference: to compute a t-value at each
voxel, one has to estimate the intrinsic autocorrelation of the data.
Currently, in SPM99, you can do that by assuming that the intrinsic
autocorrelation before the convolution with the band-pass filter kernel
is an identity matrix, or by estimating the autocorrelation with an
AR(1) model. Anything that follows at this stage, e.g. the computation
of the effective degrees of freedom, is based on these estimates of the
intrinsic and actual autocorrelation structures.
Some part of all this is described in
KJ Worsley and KJ Friston, 1995. Analysis of fMRI Time-Series Revisited
- Again.
NeuroImage, 2:173-181
As far as I know, a paper by Karl Friston et al. about temporal
filtering is in press (NeuroImage).
[Stefan Kiebel 11 Jul 2000]
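The effect of such filtering on serial correlations can be illustrated on synthetic data (an editorial sketch: the AR(1) coefficient and Gaussian kernel width below are arbitrary choices, not SPM's defaults, and the same kernel would be applied to the design matrix columns as well):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# AR(1) noise as a stand-in for the intrinsic autocorrelation
rho = 0.3
e = np.empty(n)
e[0] = rng.standard_normal()
for i in range(1, n):
    e[i] = rho * e[i - 1] + rng.standard_normal()

# a Gaussian low-pass kernel K, applied to data and design alike
k = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
k /= k.sum()
Ke = np.convolve(e, k, mode="same")   # filtered noise: K*e

def lag1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

# filtering imposes a known, stronger serial correlation, swamping
# the (unknown) intrinsic one
assert lag1(Ke) > lag1(e)
```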
A. fMRI Susceptibility Artifacts
Dear SPM users,
I received a number of highly informative responses to my question
about temporal lobe susceptibility effects. Since I assume that they
may be of general interest, I list the collected responses below.
Thanks again.
[Peter Indefrey 10 Jul 2000]
Peter Indefrey wrote
Q1) I'm currently considering the pros and cons of PET vs. fMRI for a
language paradigm with the inferior temporal lobes as the principal
region of interest. While my preference is PET, considering the known
susceptibility problems of fMRI in this region, I'd nonetheless like to
learn what can be done in fMRI to minimize (the effect of) magnetic field
inhomogeneities. Any suggestions?
1)
From: Jose' Ma. Maisog <joem@sensor.com>
Hi Peter, check out this abstract from the recent Human Brain Mapping
conference in San Antonio. They used a postprocessing statistical
correction to minimize the effect of susceptibility artifact. Perhaps
one of the authors can offer more advice.
Devlin J, Russell R, Davis M, Price C, Wilson J, Matthews PM, Tyler L,
"Susceptibility and Semantics: Comparing PET and fMRI on a Language
Task," NeuroImage Volume 11, Number 5, May 2000, Part 2 of 2 Parts,
S257.
Q2) Peter Indefrey wrote:
>maybe I should be more specific on this: it was - apart from my own
>experience with PET and fMRI on similar paradigms - just this talk that
>confirmed my opinion that PET would be the adequate thing to do, since the
>statistical correction did not seem to fully compensate for the
>susceptibility artifacts. So my question was rather: does everybody agree
>on the conclusions of this paper or are there procedures, postprocessing
>or other, such as shimming, choice of slices and angles, that have proven
>effective in minimizing the problem to such an extent that fMRI is at
>least as good as PET in scanning ventral temporal regions.
2)From: Alejandro Terrazas <alex@nsma.arizona.edu>
Peter
I have some experience with field corrections. I am writing up a paper
comparing field-map corrected to non-corrected data. Clearly the
raw data is improved, but it is still not clear whether there is an
improvement of the activation maps. For one thing, people use smoothing
to "improve" their images, and field-map corrections in spiral are like
unsmoothing.
Things are different for EPI sequences where you get geometric
distortions. There are correction methods for this as well.
PET vs. fMRI is a tough question. It depends on the temporal dynamics
of what you wish to see. PET is probably more reliable for deeper
structures.
3)
From: Matt Davis <matt.davis@mrc-cbu.cam.ac.uk>
Hi Peter,
If you read the full paper that has just been published in NeuroImage:
Joseph T. Devlin, Richard P. Russell, Matt H. Davis, Cathy J. Price, James
Wilson, Helen E. Moss, Paul M. Matthews, and Lorraine K. Tyler (2000)
Susceptibility-Induced Loss of Signal: Comparing PET and fMRI on a Semantic
Task. NeuroImage 11(6): 589-600
you'll see references to three acquisition methods that have been proposed
to alleviate susceptibility-induced problems in the anterior and inferior
portions of the temporal lobe. These are:
a) tailored RF pulses:
Chen, N., and Wyrwicz, A. M. 1999. Removal of intravoxel dephasing
in gradient-echo images using a field-map based RF refocusing
technique. Magn. Reson. Med. 42:807-812.
b) Z shimming:
Constable, R. T. 1995. Functional MRI using gradient echo EPI in
the presence of large static field inhomogeneities. J. Magn. Reson.
Imag. 5:746-752.
Yang, Q. X., Dardzinski, B. J., Li, S. Z., Eslinger, P. J., and Smith,
M. B. 1997. Multi-gradient echo with susceptibility inhomogeneity
compensation (MGESIC): Demonstration of fMRI in the olfactory
cortex at 3.0 T. Magn. Reson. Med. 37:331-335.
Yang, Q. X., Williams, G. D., Demeure, R. J., Mosher, T. J., and
Smith, M. B. 1998. Removal of local field gradient artifacts in
T2*-weighted images at high fields by gradient-echo slice excitation
profile imaging. Magn. Reson. Med. 39:402-409.
Constable, R. T., and Spencer, D. D. 1999. Composite image formation
in z-shimmed functional MRI. Magn. Reson. Med. 42:110-117.
c) Spiral scanning:
Crelier, G. R., Hoge, R. D., Munger, P., and Pike, G. B. 1999. Perfusion-
based functional magnetic resonance imaging with single-shot RARE and
GRASE acquisitions. Magn. Reson. Med. 41:132-136.
As yet we have no practical experience of any of these methods. There
is a Constable paper in NeuroImage 12(1) showing that z-shimming allows
detection of activations in the hippocampus, but to my knowledge there has
been no equivalent demonstration for activations in the temporal pole. If
the inferior (especially anterior) temporal lobe is a region of interest, I
would agree with you that PET may be a superior imaging modality for your
purposes.
If you hear of any other suggestions other than the ones referred to here I
would be interested to hear of them.
4)
From: Russ Poldrack <poldrack@nmr.mgh.harvard.edu>
I didn't follow this whole thread, so I'm not sure if anyone has suggested these
steps:
1 - reduce TE to increase the level of BOLD contrast in the regions where the
dropout occurs. The tradeoff is that you reduce the BOLD contrast in other
regions; at 1.5T you might try something like 25 ms as a compromise.
2 - use smaller voxels (something like cubic 3 mm is probably best); this
definitely reduces the dropout.
3 - different orientations may result in different levels of artifact; you
should do test runs with different orientations and see which works best for
your particular area of interest.
I've not read the paper mentioned below, but I am skeptical that these artifacts
can be overcome solely using postprocessing stats.
5)
From: L.K. Tyler <lktyler@csl.psychol.cam.ac.uk>
In our studies on semantic processing comparing activations in PET and fMRI
(3T) using a similar paradigm and materials, we definitely find more
robust inferior temporal lobe activation in PET than in fMRI. However,
there are procedures you can use to maximise the signal in the temporal
lobes in fMRI, although it's not yet clear how much this will improve
things. For the time being, PET seems to have the advantage if you want to
activate the ventral temporal regions.
6)
From: Joe Devlin <jdevlin@csl.psychol.cam.ac.uk>
From discussions with our physicists, it sounds like there are things
one could do to reduce the macroscopic susceptibility artifacts -
particularly if you are specifically interested in one region. If the
lateral surface of the ventral temporal region is the area you are most
interested in, you can often use manual shimming to reduce the field
gradients in that region. The problem is that this typically distorts
the signal from other areas but if you are doing an ROI analysis, this
may not be important. Obviously, using a small volume statistical
correction for this region will further increase your sensitivity.
It may be worthwhile piloting your experiment with one or two subjects
in fMRI and looking at their data before deciding further.