Overview of Mixed Linear Models

Mixed linear models incorporate both “fixed effects” and “random effects” (that is, “mixed effects”). The independent variables in a linear regression may be thought of as fixed effects. To solve for the random effects in a mixed model, something should be known about the variances and covariances of these random effects.

The Mixed Model Equation

Suppose there are n measurements of a phenotype which is influenced by f fixed effects and t instances of one random effect. The mixed linear model may be written as

y = X\beta + Zu + \epsilon,

where y is an n \times 1 vector of observed phenotypes, X is an n \times f matrix of fixed effects, and \beta is an f \times 1 vector representing the coefficients of the fixed effects. Z is an n \times t matrix relating the instances of the random effect to the phenotypes, u is a t \times 1 vector of random effects, and \epsilon is an n \times 1 vector of residual errors. We assume

Var(u) = \sigma^2_g K

and

Var(\epsilon) = \sigma^2_e I,

so that

Var(y) = \sigma^2_g ZKZ' + \sigma^2_e I.

Examples of “fixed effects” may include the mean, one or more genotypic markers, and other additional covariates that may be analyzed.

Examples of a “random effect” are:

  1. Polygenic effects from each of t subgroupings, where the n measurements have been grouped into t subgroupings such as inbred strains. Z is then an incidence matrix relating subgroupings/strains to measurements, and K should be a matrix showing the pairwise genetic relationship among the t strains.
  2. Polygenic effects from each of the n samples, where there is just one measurement per sample. Z is then just the identity matrix I, and K should be a pairwise genetic relationship or kinship matrix among the n samples.

Note

At this time, neither of the SVS Mixed Linear Model Analysis tools supports organizing measurements into subgroupings such as inbred strains.

The parameters \sigma^2_g and \sigma^2_e are called the “variance components”, and are assumed to be unknown. To solve the mixed model equation, the variance components must first be estimated. Once this is done, a generalized least squares (GLS) procedure may be used to estimate \beta.

Finding the Variance Components

The SVS Mixed Linear Model Analysis tools use an approach called EMMA (Efficient Mixed-Model Association) [Kang2008] to directly estimate the variance components \sigma^2_g and \sigma^2_e, reducing the problem to a maximization search in just one dimension.

Either the full likelihood or the restricted likelihood may be maximized. The restricted likelihood is defined as the full likelihood with the fixed effects integrated out. As stated in [Kang2008], “The restricted likelihood avoids a downward bias of maximum-likelihood estimates of variance components by taking into account the loss in degrees of freedom associated with fixed effects.”

Note

The SVS Mixed Linear Model Analysis tools always maximize the restricted likelihood rather than the full likelihood except when the Bayes Information Criterion (for the MLMM feature) is computed.

Suppose \sigma = \sigma_g, \delta = \frac{\sigma^2_e}{\sigma^2_g}, and H = \frac{V}{\sigma^2} = ZKZ' + \delta I, which is a function of \delta. Under the null hypothesis, the full log-likelihood function can be formulated as

l_F(y; \beta, \sigma, \delta) = \frac{1}{2}\big[-n\log(2\pi\sigma^2) - \log|H| - \frac{1}{\sigma^2}(y - X\beta)'H^{-1}(y - X\beta)\big]

and the restricted log-likelihood function can be formulated as

l_R(y;\sigma,\delta) &= l_F(y; \hat{\beta}, \sigma, \delta) \\
                     &\quad + \frac{1}{2}\big[f\log(2\pi\sigma^2) + \log|X'X| - \log|X'H^{-1}X|\big].

The full-likelihood function is maximized when \beta is \hat{\beta} = (X'H^{-1}X)^{-1}X'H^{-1}y, and the optimal variance component is \sigma^2_F = R/n for full likelihood and \sigma^2_R = R/(n-f) for restricted likelihood, where R=(y - X\hat{\beta})'H^{-1}(y - X\hat{\beta}) is a function of \delta as well.

Using spectral decomposition, it is possible to find \xi_i and \lambda_s such that

H = ZKZ' + \delta I = U_F diag(\xi_1 + \delta, ..., \xi_n + \delta)U_F'

and

SHS &= S(ZKZ' + \delta I)S \\
    &= [U_R W_R]diag(\lambda_1 + \delta, ..., \lambda_{n-f} + \delta, 0, ..., 0)[U_R W_R]' \\
    &= U_R diag(\lambda_1 + \delta, ..., \lambda_{n-f} + \delta)U_R',

where S = I - X(X'X)^{-1}X', U_F is n \times n, and U_R is an n \times (n-f) eigenvector matrix corresponding to the nonzero eigenvalues. W_R is an n \times f matrix corresponding to the zero eigenvalues. U_F and U_R are independent of \delta.

Let [\eta_1, \eta_2, ..., \eta_{n-f}]' = U_R'y. Then, finding the maximum-likelihood (ML) estimate is equivalent to optimizing

f_F(\delta) &= l_F(y; \hat{\beta}, \hat{\sigma}, \delta) \\
            &= \frac{1}{2}\bigg[n \log\frac{n}{2\pi} - n - n \log(\sum_{s=1}^{n-f}\frac{\eta^2_s}{\lambda_s + \delta}) - \sum_{i=1}^n \log(\xi_i + \delta)\bigg]

with respect to \delta, and finding the restricted maximum-likelihood (REML) estimate is equivalent to optimizing

f_R(\delta) &= l_R(y; \hat{\sigma}, \delta) \\
            &= \frac{1}{2}\bigg[(n-f) \log\frac{n-f}{2\pi} - (n-f) - (n-f) \log(\sum_{s=1}^{n-f}\frac{\eta^2_s}{\lambda_s + \delta}) - \sum_{s=1}^{n-f} \log(\lambda_s + \delta)\bigg]

with respect to \delta. (See the Appendix of [Kang2008] for the mathematical details, except that PS = P, not “PS = S” as that Appendix states.) These functions are continuous for \delta > 0 if and only if all the eigenvalues \lambda_s are nonnegative (and, for f_F(\delta), the eigenvalues \xi_i are nonnegative). Otherwise, if the kinship matrix is not positive semidefinite, the likelihood will be ill-defined for a certain range of \delta.

The derivatives of these functions, which may be used to find the local maxima for the functions themselves, are

f_F'(\delta) = \frac{n}{2}\frac{\sum_s\eta^2_s/(\lambda_s + \delta)^2}{\sum_s\eta^2_s/(\lambda_s + \delta)} - \frac{1}{2} \sum_i \frac{1}{\xi_i + \delta}

and

f_R'(\delta) = \frac{(n-f)}{2} \frac{\sum_s\eta^2_s/(\lambda_s + \delta)^2}{\sum_s\eta^2_s/(\lambda_s + \delta)} - \frac{1}{2} \sum_s \frac{1}{\lambda_s + \delta}.

The ML or REML may be searched for by subdividing the values of \delta into 100 intervals, evenly in log space from \delta = 10^{-5} to \delta = 10^5, and applying a method such as the Newton-Raphson algorithm or the secant method on f_F'(\delta) or f_R'(\delta) to all the intervals where the sign of the derivative function changes, then taking the optimal \delta among all the stationary points and endpoints.

Note

The secant method is used by the SVS Mixed Linear Model Analysis tools.

Notice that evaluating f_F'(\delta) or f_R'(\delta) does not require a large number of matrix multiplications or inverses at each iteration, as other methods typically do; instead, the EMMA technique computes the spectral decomposition only once. Thus, using the grid search indicated above, the likelihood may be optimized globally with high confidence using much less computation.
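
As an illustration of the search just described, here is a minimal Python/NumPy sketch of the REML case, assuming Z = I (so that SHS = S(K + \delta I)S). The function and variable names are illustrative only, and Brent's method from SciPy stands in for the secant method that SVS actually uses:

    import numpy as np
    from scipy.optimize import brentq

    def emma_reml_delta(y, X, K):
        """Search for the delta = sigma_e^2 / sigma_g^2 that maximizes f_R(delta)."""
        n, f = X.shape
        S = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # S = I - X(X'X)^{-1}X'
        w, V = np.linalg.eigh(S @ K @ S)                    # spectral decomposition of SKS
        idx = np.argsort(w)[::-1][:n - f]                   # keep the n - f nonzero eigenvalues
        lam, U_R = w[idx], V[:, idx]
        eta2 = (U_R.T @ y) ** 2

        def f_R(d):        # restricted log-likelihood as a function of delta
            return 0.5 * ((n - f) * (np.log((n - f) / (2 * np.pi)) - 1
                                     - np.log(np.sum(eta2 / (lam + d))))
                          - np.sum(np.log(lam + d)))

        def df_R(d):       # its derivative f_R'(delta)
            return 0.5 * ((n - f) * np.sum(eta2 / (lam + d) ** 2) / np.sum(eta2 / (lam + d))
                          - np.sum(1.0 / (lam + d)))

        grid = np.logspace(-5, 5, 101)                      # 100 intervals, even in log space
        candidates = [grid[0], grid[-1]]
        for a, b in zip(grid[:-1], grid[1:]):
            if np.sign(df_R(a)) != np.sign(df_R(b)):        # derivative changes sign in (a, b)
                candidates.append(brentq(df_R, a, b))
        return max(candidates, key=f_R)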

Estimating the Variance of Heritability

The formula for the estimate of the variance of heritability is derived as follows using a Taylor series expansion:

Var\left(\frac{X}{Y}\right) \approx \left(\frac{E(X)}{E(Y)}\right)^2
                              \left[\frac{Var(X)}{(E(X))^2} - \frac{2 Cov(X,Y)}{E(X)E(Y)}
                              + \frac{Var(Y)}{(E(Y))^2}\right]

Now, using the fact that the pseudo-heritability (or narrow-sense heritability) is:

h^2 = \frac{\hat{\sigma}_g^2}{\hat{\sigma}_g^2 + \hat{\sigma}_e^2}
    = \frac{\hat{\sigma}_g^2}{\hat{\sigma}_T^2}

We can obtain the formula for the estimate of the variance of heritability:

Var(h^2) &= Var\left(\frac{\hat{\sigma}_g^2}{\hat{\sigma}_g^2 + \hat{\sigma}_e^2}\right) \\
         &= Var\left(\frac{\hat{\sigma}_g^2}{\hat{\sigma}_T^2}\right) \\
         &= \left(\frac{\hat{\sigma}_g^2}{\hat{\sigma}_T^2}\right)^2
            \left[\frac{Var(\hat{\sigma}_g^2)}{(\hat{\sigma}_g^2)^2}
            - 2\left(\frac{Cov(\hat{\sigma}_g^2, \hat{\sigma}_T^2)}{\hat{\sigma}_g^2\hat{\sigma}_T^2}\right)
            + \frac{Var(\hat{\sigma}_T^2)}{(\hat{\sigma}_T^2)^2}\right]

Note that

Var(\hat{\sigma}_T^2) &= Var(\hat{\sigma}_g^2 + \hat{\sigma}_e^2) \\
                      &= Var(\hat{\sigma}_g^2) + 2Cov(\hat{\sigma}_g^2,\hat{\sigma}_e^2)
                         + Var(\hat{\sigma}_e^2) \\
Cov(\hat{\sigma}_g^2,\hat{\sigma}_T^2) &= Cov(\hat{\sigma}_g^2,\hat{\sigma}_g^2 + \hat{\sigma}_e^2) \\
                                       &= Cov(\hat{\sigma}_g^2,\hat{\sigma}_g^2)
                                          + Cov(\hat{\sigma}_g^2,\hat{\sigma}_e^2) \\
                                       &= Var(\hat{\sigma}_g^2) + Cov(\hat{\sigma}_g^2,\hat{\sigma}_e^2)

So:

Var(h^2) = (h^2)^2\left[\frac{Var(\hat{\sigma}_g^2)}{(\hat{\sigma}_g^2)^2}
          - \frac{2\left(Var(\hat{\sigma}_g^2) + Cov(\hat{\sigma}_g^2,\hat{\sigma}_e^2)\right)}
            {\hat{\sigma}_g^2(\hat{\sigma}_g^2 + \hat{\sigma}_e^2)}
          + \frac{Var(\hat{\sigma}_g^2) + 2Cov(\hat{\sigma}_g^2,\hat{\sigma}_e^2) + Var(\hat{\sigma}_e^2)}
            {(\hat{\sigma}_g^2 + \hat{\sigma}_e^2)^2}\right]

The formulas and methods for calculating the variance components can be found in Finding the Variance Components. Cov(\hat{\sigma}_g^2,\hat{\sigma}_e^2) is the covariance between the estimated random effect components u and \epsilon; \hat{u}=G\hat{\gamma} and \hat{\epsilon} = y - \hat{u} so Cov(\hat{\sigma}_g^2,\hat{\sigma}_e^2) = Cov(\hat{u}, \hat{\epsilon}).
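
As a small worked illustration of this formula, the following Python sketch computes Var(h^2) from variance-component estimates and their (co)variances. The argument names are hypothetical; the values would come from the variance-component estimation described above:

    def heritability_variance(var_g, var_e, var_var_g, var_var_e, cov_ge):
        """Delta-method estimate of Var(h^2); all arguments are scalar estimates."""
        var_T = var_g + var_e                               # total variance sigma_T^2
        h2 = var_g / var_T                                  # pseudo-heritability
        var_var_T = var_var_g + 2 * cov_ge + var_var_e      # Var(sigma_T^2)
        cov_g_T = var_var_g + cov_ge                        # Cov(sigma_g^2, sigma_T^2)
        return h2 ** 2 * (var_var_g / var_g ** 2
                          - 2 * cov_g_T / (var_g * var_T)
                          + var_var_T / var_T ** 2)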

Solving the Mixed Model Equation

The generalized least squares (GLS) solution to

y = X\beta + Zu + \epsilon

may now be obtained. Note that the variance V of Zu + \epsilon is

V = \sigma^2 (ZKZ' + \delta I).

If we can find a matrix B such that

BB' = H = \frac{V}{\sigma^2} = ZKZ' + \delta I,

we can substitute y^* = B^{-1}y, X^* = B^{-1}X, and \epsilon^* = B^{-1}(Zu + \epsilon) to get

y^* = X^*\beta + \epsilon^*.

(The Cholesky decomposition of H is one way to obtain such a matrix B.) This equation can be solved for \beta through ordinary least squares (OLS), because we have

Var(\epsilon^*) = Var(B^{-1}(Zu + \epsilon)) = B^{-1}V(B^{-1})' =
\sigma^2 B^{-1}H(B^{-1})' = \sigma^2 B^{-1}BB'(B^{-1})' = \sigma^2
I.

The value of the residual sum of squares (RSS) from solving the transformed equation y^* = X^*\beta + \epsilon^* is the Mahalanobis RSS for the original equation y = X\beta + Zu + \epsilon.

Taking advantage of the eigendecomposition of H performed in the EMMA algorithm, the computation of a valid B^{-1} can be simplified to

B^{-1} = diag(1/\sqrt{\xi_1 + \delta}, ..., 1/\sqrt{\xi_n + \delta}) U_F' .
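
The following Python/NumPy sketch illustrates this GLS solution for the common case Z = I, assuming \delta has already been estimated by EMMA; the function name is illustrative and this is not the SVS implementation:

    import numpy as np

    def gls_via_emma(y, X, K, delta):
        """GLS solution of y = X beta + u + eps, given K (with Z = I) and delta."""
        xi, U_F = np.linalg.eigh(K)                         # K = U_F diag(xi) U_F'
        B_inv = np.diag(1.0 / np.sqrt(xi + delta)) @ U_F.T  # a valid B^{-1}
        y_star, X_star = B_inv @ y, B_inv @ X
        beta_hat, resid, rank, _ = np.linalg.lstsq(X_star, y_star, rcond=None)
        mrss = resid[0] if resid.size else np.sum((y_star - X_star @ beta_hat) ** 2)
        return beta_hat, mrss                               # mrss is the Mahalanobis RSS

A Cholesky factor of H would serve equally well as B, as noted above; the eigendecomposition form is used here because EMMA has already computed it.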

Using the Mixed Model for Association Studies

The Exact Model

Association studies are typically carried out by testing the hypothesis H_0:\beta_k = 0 for each of m loci, one at a time, on the basis of the model

y_i = \sum_f \beta_f X_{if} + \beta_k M_{ik} + \eta_{i\bar{k}},

where M_{ik} is the minor allele count of marker k for individual i, \beta_k is the (fixed) effect size of marker k, and \sum_f \beta_f X_{if} represents the other fixed effects, such as the mean of the y_i and any fixed covariates. The error term \eta_{i\bar{k}} is

\eta_{i\bar{k}} = \sum_{s \ne k} \beta_s M_{is} + \epsilon_i .

If we assume the n individuals are unrelated and there is no dependence across the genotypes, the \eta_{i\bar{k}} values will be independently and identically distributed (i.i.d.), and thus simple linear regressions will make appropriate inferences for each of the coefficients \beta_k.

However, the variance of the first term of \eta_{i\bar{k}} actually comes closer to being proportional to a matrix of the relatedness or kinship between samples. Thus, if we write

u_{i\bar{k}} = \sum_{s \ne k} \beta_s M_{is},

we see that the equation for y_i reduces to the mixed-model equation

y_i = \sum_f \beta_f X_{if} + \beta_k M_{ik} + u_{i\bar{k}} + \epsilon_i .

Note that strictly speaking, to use this equation, we should base not only the kinship, but also the variance components, upon all markers except for marker k.

The EMMAX Approximations and Technique

Even using the EMMA technique, finding the kinship matrix, estimating the variance components, and solving for \beta_k separately for every marker k would be a daunting task. However, we may make two approximations:

  1. Let

    \eta_i = \sum_{s=1}^m \beta_s M_{is} + \epsilon_i

    approximate \eta_{i\bar{k}}, and let

    u_i = \sum_s \beta_s M_{is},

    approximate u_{i\bar{k}}. Then, we have

    y_i &= \sum_f \beta_f X_{if} + \beta_k M_{ik} + \eta_i \\
    &= \sum_f \beta_f X_{if} + \beta_k M_{ik} + u_i + \epsilon_i .

    To solve this, we need to compute the kinship matrix just once, using all markers. That kinship matrix may then be used to solve this equation for every marker k.

  2. Find the variance components once – specifically, for the system of equations

    y_i = \sum_f \beta_f X_{if} + u_i + \epsilon_i,

    using the kinship matrix K which is computed just once for all markers k. Then, use these variance components \sigma^2_g (for the variance of the u_i which is \sigma^2_g K) and \sigma^2_e (for the variance of the \epsilon_i which is \sigma^2_e
I) to apply the GLS method for solving

    y_i = \sum_f \beta_f X_{if} + \beta_k M_{ik} + u_i + \epsilon_i

    for \beta_k for every marker k.

This technique, which is called EMMAX (EMMA eXpedited) and was published in [Kang2010], allows mixed models to be used for genome-wide association testing within a very reasonable amount of computing time.
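
A minimal Python/NumPy sketch of an EMMAX-style scan under these approximations is shown below. It assumes \delta has already been estimated once under the null model and that K is the (normalized) kinship matrix; it is an illustration rather than the SVS implementation:

    import numpy as np
    from scipy import stats

    def emmax_scan(y, X_f, geno, K, delta):
        """One EMMAX pass: GLS regression of y on X_f plus each marker in turn."""
        n = len(y)
        xi, U_F = np.linalg.eigh(K)
        B_inv = np.diag(1.0 / np.sqrt(xi + delta)) @ U_F.T  # BB' = K + delta I
        y_t, Xf_t, G_t = B_inv @ y, B_inv @ X_f, B_inv @ geno
        pvals = np.empty(geno.shape[1])
        for k in range(geno.shape[1]):
            Xk = np.column_stack([Xf_t, G_t[:, k]])
            beta, _, _, _ = np.linalg.lstsq(Xk, y_t, rcond=None)
            resid = y_t - Xk @ beta
            dof = n - Xk.shape[1]
            var_k = resid @ resid / dof * np.linalg.inv(Xk.T @ Xk)[-1, -1]
            t_stat = beta[-1] / np.sqrt(var_k)
            pvals[k] = 2 * stats.t.sf(abs(t_stat), dof)
        return pvals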

Normalizing the Kinship Matrix

Before using the kinship matrix K, however, the actual EMMAX technique scales it (and thus effectively scales u) by an amount that makes the expectation of the estimated population variance of the (scaled) u_i equal to \sigma^2_g, just as the expectation of the estimated population variance of the \epsilon_i is \sigma^2_e.

This is done by defining a scaling factor w as

w = \frac{Tr(CKC)}{n - 1},

and dividing K by it to get

K_N = \frac{K}{w}.

Here, C = I - 1_n1_n'/n, where 1_n is a length-n vector of ones. C is called a “Gower’s centering matrix”: applying it to a vector v to get Cv subtracts the mean 1_n'v/n of the components of v from each component of v:

Cv = [I - 1_n1_n'/n]v = v - 1_n[1_n'v/n] = v - (1_n'v/n)1_n

The reasoning for using this scaling factor is as follows:

Suppose we have a vector v of elements v_i. Estimate the population variance q of these elements over all the n samples i. (This estimate is sometimes called the “sample variance”.) The (unbiased) estimate would be

q &= \frac{\sum_i^n (v_i - \bar{v})^2}{n - 1} \\
  &= \frac{\sum_i^n v_i^2 - n \bar{v}^2}{n - 1},

where \bar{v} is the average of the components v_i of v.

However, another way to write this is

q = \frac{(v - \bar{v}1_n)'(v - \bar{v}1_n)}{n - 1},

since

v'v &= \sum_i^n v_i^2,\\
(\bar{v}1_n)'v &= v'(\bar{v}1_n) = \bar{v}\sum_i^n v_i = \frac{\sum_i^n v_i}n \sum_i^n v_i = n \bar{v}^2,\\
(\bar{v}1_n)'(\bar{v}1_n) &= n \bar{v}^2,

and

q &= \frac{v'v - (\bar{v}1_n)'v - v'(\bar{v}1_n) + (\bar{v}1_n)'(\bar{v}1_n)}{n - 1} \\
  &= \frac{\sum_i^n v_i^2 - 2n \bar{v}^2 + n \bar{v}^2}{n - 1} \\
  &= \frac{\sum_i^n v_i^2 - n \bar{v}^2}{n - 1}.

Two other ways to write this are

q = \frac{Tr((v - \bar{v}1_n)'(v - \bar{v}1_n))}{n - 1},

since (v - \bar{v}1_n)'(v - \bar{v}1_n) is a scalar, and

q = \frac{Tr((v - \bar{v}1_n)(v - \bar{v}1_n)')}{n - 1},

since Tr(AB) = Tr(BA) for any two matrices A and B.

We note that

Cv = v - (1_n'v/n)1_n = v - \bar{v}1_n ;

therefore, the estimated population variance q may be written as

q &= \frac{Tr((v - \bar{v}1_n)(v - \bar{v}1_n)')}{n - 1} \\
  &= \frac{Tr(Cv(Cv)')}{n - 1} \\
  &= \frac{Tr(Cvv'C)}{n - 1}.

Looking at v as a random variable of n dimensions and q as a scalar random variable, let us write the expectation of the estimated population variance q as:

E(q) &= E(\frac{Tr(Cvv'C)}{n - 1}) \\
     &= \frac{Tr(CE(vv')C)}{n - 1}.

But K = E(vv') defines a relationship matrix among the elements of possible v. (Note that the possible instances of v themselves might not be “centered”; that is, the components of v may or may not have zero averages.) We now write the expected estimated population variance E(q) as

E(q) = \frac{Tr(CKC)}{n - 1}.

We wish to “normalize E(q) to one” – that is, set E(q) to one by normalizing K appropriately. We do that by defining K_N = K / w, where

w = \frac{Tr(CKC)}{n - 1},

and noting that

E(q_N) = \frac{Tr(CK_NC)}{n - 1} = \frac{\frac{Tr(CKC)}{n - 1}}{w} = 1.

Note that this means that the u_N found in the mixed-model equation that uses the normalized K_N will relate to the u in the original equation as u_N = u/\sqrt{w}.
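
In code, this normalization is a one-liner aside from building Gower's centering matrix; the following Python/NumPy sketch is illustrative:

    import numpy as np

    def normalize_kinship(K):
        """Divide K by w = Tr(CKC)/(n - 1), where C is Gower's centering matrix."""
        n = K.shape[0]
        C = np.eye(n) - np.ones((n, n)) / n
        w = np.trace(C @ K @ C) / (n - 1)
        return K / w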

Further Optimization When Covariates Are Present

The following technique is mentioned in passing in [Segura2012] and is used both in [Vilhjalmsson2012] and in the mixed-model tools of SVS.

If we have a mixed linear model with fixed-effect covariates X_f, one particular “more interesting” fixed-effect covariate X_k, and a random-effect covariate u_N for which the normalized relationship matrix is K_N,

y = X_f \beta_{kf} + X_k \beta_k + u_N + \epsilon,

and we have this model for many k and we don’t need to find the covariate coefficients \beta_{kf} for any of these models, and we have a matrix B such that BB' = K_N + \delta I (see Solving the Mixed Model Equation), we can perform the following optimization:

  1. Solve the ordinary-least-squares (OLS) “null hypothesis” or reduced-model problem

    B^{-1}y = B^{-1}X_f \beta_{h0} + \epsilon_{h0}

    to find \hat{\beta_{h0}} as an estimate for \beta_{h0}.

    Designate the (Mahalanobis) RSS obtained from solving this equation as

    mrss_{h0} = (B^{-1}y - B^{-1}X_f \hat{\beta}_{h0})'(B^{-1}y - B^{-1}X_f \hat{\beta}_{h0}).

  2. Perform the QR algorithm on B^{-1}X_f to get

    QR = B^{-1}X_f

    where Q and R are the “thin” (also called “reduced” or “economic”) versions of the QR factors.

  3. Define

    M = (I - QQ'),

    giving us

    MB^{-1} = (I - QQ')B^{-1} .

  4. Transform the original equation by pre-multiplying it by MB^{-1} to get

    MB^{-1}y = MB^{-1}X_f\beta_{kf} + MB^{-1}X_k\beta_k + MB^{-1}u_N + MB^{-1}\epsilon

    But

    MB^{-1}X_f = B^{-1}X_f - QQ'B^{-1}X_f = QR - QQ'QR = (Q - QQ'Q)R = (Q - Q)R = 0,

    because the columns of Q are “orthogonal” and of “unit length” and so Q'Q = I and QQ'Q = Q. Thus, we have

    MB^{-1}y = MB^{-1}X_k\beta_k + MB^{-1}u_N + MB^{-1}\epsilon.

    MB^{-1}y may be re-written as

    MB^{-1}y &= (I - QQ')B^{-1}y = B^{-1}y - QQ'B^{-1}y = B^{-1}y - QR\hat{\beta_{h0}} \\
   &= B^{-1}y - B^{-1}X_f\hat{\beta_{h0}},

    because \hat{\beta_{h0}} = R^{-1}Q'B^{-1}y and R\hat{\beta_{h0}} = RR^{-1}Q'B^{-1}y = Q'B^{-1}y.

    Thus,

    MB^{-1}y = B^{-1}y - B^{-1}X_f\hat{\beta_{h0}} = MB^{-1}X_k\beta_k + MB^{-1}u_N + MB^{-1}\epsilon.

    This is equivalent to the ordinary-least-squares (OLS) problem

    MB^{-1}y = B^{-1}y - B^{-1}X_f\hat{\beta_{h0}} = MB^{-1}X_k\beta_k + \epsilon_{MB},

    where the variance of \epsilon_{MB} is proportional to I. This is because if we pre-multiply the original problem simply by B^{-1}, we get

    B^{-1}y = B^{-1}X_f \beta_{kf} + B^{-1}X_k \beta_k + B^{-1}(u_N + \epsilon),

    which may be solved as an OLS (Solving the Mixed Model Equation), and because

    Var(MB^{-1}y) = Var(B^{-1}y) = Var(B^{-1}(u_N + \epsilon))

    which is proportional to I.

The ordinary-least-squares (OLS) problem

MB^{-1}y = B^{-1}y - B^{-1}X_f\hat{\beta_{h0}} = MB^{-1}X_k\beta_k + \epsilon_{MB}

may now be solved for all k.

Note

For optimization, SVS pre-computes the matrix product MB^{-1} and uses this product as one matrix to help perform all of the regressions involving X_k.

Note

The matrix M is the “annihilator matrix” for the null hypothesis problem

B^{-1}y = B^{-1}X_f \beta_{h0} + \epsilon_{h0} .

Designate the Mahalanobis Residual Sum of Squares (Mahalanobis RSS) for marker k as

mrss_k = (B^{-1}y - B^{-1}X_f \hat{\beta_{kf}} - B^{-1}X_k\hat{\beta_k})'(B^{-1}y - B^{-1}X_f \hat{\beta_{kf}} - B^{-1}X_k\hat{\beta_k}),

which is optimized to

mrss_k = (MB^{-1}y - MB^{-1}X_k\hat{\beta_k})'(MB^{-1}y - MB^{-1}X_k\hat{\beta_k}).

This is the Residual Sum of Squares (RSS) value for the regression as transformed by pre-multiplying it by B^{-1}.
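
The following Python/NumPy sketch ties the pieces of this optimization together: the null-model fit, the precomputed MB^{-1}, and the per-marker regression with its mrss_k. The names are illustrative, and B_inv is assumed to satisfy BB' = K_N + \delta I:

    import numpy as np

    def scan_with_annihilator(y, X_f, geno, B_inv):
        """Per-marker OLS after transforming by the precomputed product M B^{-1}."""
        y_t, Xf_t = B_inv @ y, B_inv @ X_f
        Q, R = np.linalg.qr(Xf_t)                       # "thin" QR of B^{-1} X_f
        beta_h0 = np.linalg.solve(R, Q.T @ y_t)         # null-model (reduced-model) estimate
        y_res = y_t - Xf_t @ beta_h0                    # equals M B^{-1} y
        mrss_h0 = y_res @ y_res
        MB_inv = B_inv - Q @ (Q.T @ B_inv)              # precomputed M B^{-1} = (I - QQ')B^{-1}
        results = []
        for m in geno.T:                                # one genotype column per marker k
            x = MB_inv @ m                              # M B^{-1} X_k
            beta_k = (x @ y_res) / (x @ x)
            r = y_res - x * beta_k
            results.append((beta_k, mrss_h0, r @ r))    # (beta_k, mrss_h0, mrss_k)
        return results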

Note

If we have a (completely fixed-effect) linear model with covariates X_f and one particular “more interesting” covariate X_k,

y = X_f \beta_{kf} + X_k \beta_k + \epsilon,

and we have this model for many k and we don’t need to find the covariate coefficients \beta_{kf} for any of these models, we can perform the same kind of optimization.

  1. Solve the ordinary-least-squares (OLS) “null hypothesis” or reduced-model problem

    y = X_f \beta_{h0} + \epsilon_{h0}

    to find \hat{\beta_{h0}} as an estimate for \beta_{h0}.

  2. Perform the QR algorithm on X_f to get

    QR = X_f

    where Q and R are the “thin” (also called “reduced” or “economic”) versions of the QR factors.

  3. Define

    M = (I - QQ')

  4. Transform the original equation by pre-multiplying it by M to get

    My = MX_f\beta_{kf} + MX_k\beta_k + M\epsilon

    But

    MX_f = X_f - QQ'X_f = QR - QQ'QR = (Q - QQ'Q)R = (Q - Q)R = 0,

    because the columns of Q are “orthogonal” and of “unit length” and so Q'Q = I and QQ'Q = Q. Thus, we have

    My = MX_k\beta_k + M\epsilon.

    My may be re-written as

    My &= (I - QQ')y = y - QQ'y = y - QR\hat{\beta_{h0}} \\
   &= y - X_f\hat{\beta_{h0}},

    because \hat{\beta_{h0}} = R^{-1}Q'y and R\hat{\beta_{h0}} = RR^{-1}Q'y = Q'y.

    Thus,

    My = y - X_f\hat{\beta_{h0}} = MX_k\beta_k + M\epsilon.

    This is equivalent to the ordinary-least-squares (OLS) problem

    My = y - X_f\hat{\beta_{h0}} = MX_k\beta_k + \epsilon_M,

    where the variance of \epsilon_M is proportional to I. This is because

    Var(My) = Var(y) = Var(\epsilon)

    which we assume to be proportional to I.

    Note that the matrix M is the “annihilator matrix” for the null hypothesis problem

    y = X_f \beta_{h0} + \epsilon_{h0} .

Optimization when Gene-Environment Interaction Terms Are Included

If we have the full mixed linear model

y = X_c \beta_{kc} + X_i \beta_{ki} + X_k \beta_{kp} + X_{ip} \beta_{ip} + u_{full} + \epsilon_{full}

and we have

y = X_c \beta_{krc} + X_i \beta_{kri} + X_k \beta_{rkp} + u_{reduced} + \epsilon_{reduced}

as the corresponding reduced mixed linear model, where X_c are fixed covariates, X_i are fixed terms that will later be used to create gene-environment interaction terms, X_k is the current “more interesting” covariate or predictor variable, and X_{ip} are interaction terms created by multiplying the X_i element-by-element with X_k, and we wish to determine all of the full-model betas, then we must compute the entire linear full-model regression

B^{-1}y = B^{-1}X_c \beta_{kc} + B^{-1}X_i \beta_{ki} + B^{-1}X_k \beta_{kp} + B^{-1}X_{ip} \beta_{ip} + B^{-1}(u_{full} + \epsilon_{full}),

where the term B^{-1}(u_{full} + \epsilon_{full}) is assumed to be an error term whose variance is proportional to the identity matrix, in order to obtain these beta terms. We may still, however, optimize computing the reduced-model (Mahalanobis) RSS using the technique shown above in Further Optimization When Covariates Are Present, where Step 1 consists of solving the “further-reduced” model

B^{-1}y = B^{-1}X_c \beta_{krc} + B^{-1}X_i \beta_{kri} + \epsilon_{h0}.

Note

For the similar linear-model problem with full model

y = X_c \beta_{kc} + X_i \beta_{ki} + X_k \beta_{kp} + X_{ip} \beta_{ip} + \epsilon_{full}

and reduced model

y = X_c \beta_{krc} + X_i \beta_{kri} + X_k \beta_{rkp} + \epsilon_{reduced},

where it is desired to determine all of the full-model betas, we must compute the entire full-model regression itself. However, we may still optimize computing the reduced-model RSS using the technique shown above in the linear-model note to Further Optimization When Covariates Are Present, where Step 1 consists of solving the “further-reduced” model

y = X_c \beta_{krc} + X_i \beta_{kri} + \epsilon_{h0}.

The Multi-Locus Mixed Model (MLMM)

For complex traits controlled by several large-effect loci, a single-locus test may not be appropriate, especially in the presence of population structure.

Therefore, [Segura2012] has proposed a simple stepwise mixed-model regression with forward inclusion and backward elimination of genotypic markers as fixed effect covariates. This method, called the Multi-Locus Mixed Model (MLMM), proceeds as follows:

  1. Begin with an initial model that includes, as its fixed effects, only the intercept and any additional covariates you may have specified.
  2. Using this model, perform an EMMAX scan through all markers (that you have not specified as additional covariates).
  3. From the markers scanned above, select the most significant marker and add it to the model as a fixed effect, creating a new model.
  4. Repeat (2) and (3) (forward inclusion) until either the pseudo-heritability \big(\hat{\sigma^2_g} / Var(y)\big) estimate is close to zero or a pre-specified maximum number of forward steps is reached.
  5. For each selected marker in the current model, temporarily remove it from the fixed effects and perform an EMMAX scan over only that marker.
  6. Eliminate, from the current model, the marker that came out as least significant using the above test. A new smaller model is created.
  7. Repeat (5) and (6) (backward elimination) until only one selected marker is left.

The variance components are re-estimated between each forward and backward step, while the same kinship matrix is used throughout the calculations.
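
The following Python sketch shows the overall control flow of this stepwise procedure. The callables scan and fit_h2 are hypothetical stand-ins for the EMMAX scan and the pseudo-heritability estimate described above; they are not SVS functions:

    def mlmm(scan, fit_h2, max_steps):
        """Forward inclusion then backward elimination over marker covariates."""
        selected, models = [], []
        for _ in range(max_steps):                          # forward inclusion
            if fit_h2(selected) <= 1e-4:                    # pseudo-heritability near zero
                break
            pvals = scan(selected)                          # EMMAX scan with current covariates
            best = min(pvals, key=pvals.get)                # most significant marker
            selected = selected + [best]
            models.append(selected)
        while len(selected) > 1:                            # backward elimination
            pvals = {m: scan([s for s in selected if s != m])[m] for m in selected}
            worst = max(pvals, key=pvals.get)               # least significant marker
            selected = [s for s in selected if s != worst]
            models.append(selected)
        return models

Here scan(selected) is assumed to return a dictionary mapping each remaining marker to its p-value, and fit_h2(selected) the pseudo-heritability under the current model.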

Model Criteria

The result of this stepwise regression is a series of models. Several model criteria have been explored by the authors of [Segura2012] for how appropriate any of the models are:

  • Bayes Information Criterion (BIC). This is calculated as BIC = -2l_F + p\log(n), where l_F is the full-model log-likelihood, p is the number of model parameters (one for the intercept, one for \delta, one for each marker covariate used in the particular MLMM model, and finally one for each additional covariate used in all of the models), and n is the sample size/number of individuals.

    Given any two estimated models, the model with the lower value of BIC is the preferred choice.

    However, the authors of [Segura2012] believe this criterion is “too tolerant in the context of GWAS”.

  • Extended Bayes Information Criterion (Extended BIC). This is the BIC penalized by the model space dimension. Its formula is

    EBIC = BIC + 2 \log\biggl(\binom{n}{p-q}\biggr)
     = BIC + 2 \biggl(\sum_{i = n-(p-q)+1}^n \log(i)  -  \sum_{i = 1}^{p-q} \log(i) \biggr),

    where q is the initial number of model parameters (one for the intercept, one for \delta, and one for each additional covariate used in all of the models), and \binom{n}{p-q} is the total number of models which can be formed using p-q marker covariates under the assumption that these will only be selected from the best n markers.

  • Modified Bayes Information Criterion (Modified BIC). This adds a different penalty based not only on the model space dimension, but also on how many overall markers there are to test. Its formula is

    MBIC = BIC + 2 p \log\biggl(\frac{m}{2.2} - 1\biggr),

    where m is the total number of markers being tested in the current step.

  • Bonferroni Criterion. Only defined for models derived from forward selection, this selects the model with the most covariate marker loci for which the best p-value obtained from the preceding EMMAX scan was below the Bonferroni threshold.

  • Multiple Bonferroni Criterion. This selects the model with the most covariate marker loci all of which have individual p-values below the Bonferroni threshold. Here, “individual p-value” is as explained in the note of Outputs from the Multi-Locus Mixed Linear Model (MLMM) Method. The threshold used is 1/(20 m), where m is the total number of markers being tested in the current step.

  • Multiple Posterior Probability of Association. This selects the model with the most covariate marker loci all of which have posterior probabilities of association above a PPA threshold. The threshold used is 0.5. Posterior probabilities of association are based on Bayesian priors pr of 1/m for every marker (and for every step), where m is the total number of markers being tested in the current step, and are computed as follows (a short sketch follows this list):

    • Find the Bayes factor for marker k as

      bf = \exp\left(\frac{n\log(mrss_{h0}/mrss_k) - \log(n)}{2}\right) ,

      where mrss_{h0} and mrss_k are the values of the Mahalanobis RSS for the base model and for testing with marker k, respectively.

    • Determine the posterior odds and posterior probability as

      po = bf \frac{pr}{1 - pr}

      and

      pp = \frac{po}{1 + po} .
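
As referenced above, here is a minimal Python sketch of the posterior-probability computation for a single marker k, using the Bayes factor, posterior odds, and posterior probability formulas just given:

    import numpy as np

    def posterior_probability_of_association(n, m, mrss_h0, mrss_k):
        """Bayes factor, posterior odds and posterior probability for marker k."""
        bf = np.exp((n * np.log(mrss_h0 / mrss_k) - np.log(n)) / 2)
        pr = 1.0 / m                                    # prior for every marker
        po = bf * pr / (1 - pr)                         # posterior odds
        return po / (1 + po)                            # posterior probability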

Genomic Best Linear Unbiased Predictors (GBLUP)

Problem Statement

Suppose we have the mixed model equation

y = X_f \beta_f + u + \epsilon

over n samples, with fixed effects specified by \beta_f that include the intercept and any additional covariates you may have specified. Also suppose that the random effects u are additive genetic merits or genomic breeding values associated with these samples, and that these may be formulated from m autosomal markers as

u = M\alpha,

where M is an n \times m matrix for which M_{ik} is 2p_k, (p_k - q_k), or -2q_k, depending upon whether the genotype for the i-th sample at the k-th locus is homozygous for the minor allele, heterozygous, or homozygous for the major allele, respectively, and \alpha is a vector for which \alpha_k is the allele substitution effect (ASE) for marker k. Here, p_k and q_k are the major and minor allele frequencies for marker k, respectively. (For inclusion of non-autosomal markers, see Correcting for Gender.)

We further assume that \operatorname{E}(\alpha) = 0 (which makes \operatorname{E}(u) = 0), and that \operatorname{Var}(\alpha) = \operatorname{I}\sigma^2_M, where \sigma^2_M is an (unknown) constant which is the component of variance associated with the ASE.

Our object is to estimate both the genomic breeding value u for every sample and the ASE for every marker.

The GBLUP Genomic Relationship Matrix

Under the above assumptions, we have

\operatorname{Var}(u) = \operatorname{Var}(M\alpha) = M\operatorname{Var}(\alpha)M' = MM'\sigma^2_M.

The sum of would-be variances over all the markers if each had been at Hardy-Weinberg equilibrium is

\phi = 2 \sum_{k=1}^m p_k q_k.

We can use this to define a normalized variance matrix

G = \frac{MM'}{\phi}

to get

\operatorname{Var}(u) = MM'\sigma^2_M = \phi \sigma^2_M \frac{MM'}{\phi} = \phi \sigma^2_M G = \sigma^2_G G,

where we let \sigma^2_G = \phi \sigma^2_M.

We can see that the matrix G, which we shall call the GBLUP Genomic Relationship Matrix, may be used as a kinship matrix for solving this mixed-model equation, and that \sigma^2_G may be thought of as the variance component \sigma^2_g for u.

Note

  1. Because this method uses a kinship matrix based on genotypes rather than on actual ancestry, the results are referred to as “Genomic Best Linear Unbiased Predictors” rather than just “Best Linear Unbiased Predictors”.
  2. Unlike the other SVS mixed-model analysis tools, SVS GBLUP does not normalize its kinship matrix.
  3. Because of how it is constructed, the GBLUP Genomic Relationship Matrix is a “centered matrix” and is (thus) singular. However, it is still a positive semidefinite matrix and will work well as a kinship matrix.

Finding the Genomic Best Linear Unbiased Predictors and ASE

Using the EMMA technique (Finding the Variance Components), we can find \hat{\beta}_f, \delta, and H = G + \delta I. The second of Henderson’s mixed-model equations, as modified to accommodate singular G, is

G X_f \hat{\beta}_f + (G + \delta I) \hat{u} = G y .

This may be rewritten as

(G + \delta I) \hat{u} = H \hat{u} = G y - G X_f \hat{\beta}_f .

This gives us

\hat{u} = H^{-1} G (y - X_f \hat{\beta}_f) .

Noting the following equalities,

G^2 + \delta G = G (G + \delta I) = (G + \delta I) G = G H = H G

H^{-1} G H = G

H^{-1} G = G H^{-1} ,

we may write

\hat{u} = GH^{-1}(y - X_f \hat{\beta}_f)

as a solution for the genomic BLUP. If we now define

\hat{\alpha} = M'H^{-1}(y - X_f\hat{\beta_f}) / \phi,

we find that

M\hat{\alpha} = \frac{MM'}{\phi}H^{-1}(y - X_f\hat{\beta_f}) = GH^{-1}(y - X_f\hat{\beta_f}) = \hat{u},

which makes \hat{\alpha} a solution for the ASE.

In SVS, this is computationally streamlined by finding

\hat{\gamma} = H^{-1}(y - X_f\hat{\beta}_f),

then computing \hat{u} = G\hat{\gamma} and \hat{\alpha} = M'\hat{\gamma}/\phi.
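
A minimal Python/NumPy sketch of this streamlined GBLUP computation follows. It builds M and G from raw minor-allele counts, assumes \delta comes from the EMMA step, and re-estimates \hat{\beta}_f by GLS for completeness; the names are illustrative, not the SVS code:

    import numpy as np

    def gblup(y, X_f, geno, delta):
        """Streamlined GBLUP: breeding values u-hat and ASE alpha-hat."""
        q = geno.mean(axis=0) / 2                       # minor allele frequencies q_k
        p = 1 - q                                       # major allele frequencies p_k
        M = geno - 2 * q                                # entries 2p_k, p_k - q_k, or -2q_k
        phi = 2 * np.sum(p * q)
        G = M @ M.T / phi                               # GBLUP genomic relationship matrix
        H_inv = np.linalg.inv(G + delta * np.eye(len(y)))
        beta_f = np.linalg.solve(X_f.T @ H_inv @ X_f, X_f.T @ H_inv @ y)   # GLS beta-hat
        gamma = H_inv @ (y - X_f @ beta_f)              # gamma-hat
        u_hat = G @ gamma                               # genomic breeding values
        alpha_hat = M.T @ gamma / phi                   # allele substitution effects
        return u_hat, alpha_hat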

Correcting for Gender

To correct for gender, we take the following steps:

  • For markers within the X chromosome, we use the following entries for matrix M:

    • For females, we use p_k, (p_k - q_k)/2, or -q_k for M_{ik}, depending upon whether the genotype for the i-th sample at the k-th locus is homozygous for the minor allele, heterozygous, or homozygous for the major allele, respectively.
    • For males, we use p_k or -q_k for M_{ik}, depending upon whether the genotype for the i-th sample at the k-th locus contains the minor (X-chromosome) allele or the major (X-chromosome) allele, respectively.

    The other entries of M are left the same.

    Note

    These frequencies p_k and q_k are computed individually for males and for females.

  • To compute \phi, we continue to use 2p_kq_k as the expected-variance term for most markers. For X-chromosome markers, however, we use

    w_m (p_k q_k)  +  w_f \frac{p_k q_k}{2},

    where w_m and w_f are the fraction of the samples that are male and female, respectively.

    Note

    These frequencies p_k and q_k are computed individually by gender.

  • We still compute

    G = \frac{MM'}{\phi},

    \hat{\gamma} = H^{-1}(y - X_f\hat{\beta}_f),

    and

    \hat{u} = G\hat{\gamma}

    as before.

  • The ASE is computed separately for females and males, although the ASE will only be different between the genders for the X-chromosome markers.

    Without loss of generality, we may consider the matrix M to be partitioned into

    \begin{bmatrix}
  M_{mX} & M_m \\
  M_{fX} & M_f
\end{bmatrix},

    where M_{mX} and M_{fX} are the male and female entries for the X chromosome and M_m and M_f are the male and female entries for the remaining chromosomes, and \hat{\gamma} to be partitioned into

    \begin{bmatrix}
  \hat{\gamma}_m \\
  \hat{\gamma}_f
\end{bmatrix},

    where \hat{\gamma}_m and \hat{\gamma}_f are the values of \hat{\gamma} corresponding to male and female samples, respectively. We then compute

    \hat{\alpha}_{Xm} &= M'_{mX} \hat{\gamma}_m / \phi \\
\hat{\alpha}_{Xf} &= M'_{fX} \hat{\gamma}_f / \phi \\
\hat{\alpha}_{nonX} &= \begin{bmatrix} M'_m & M'_f \end{bmatrix} \begin{bmatrix} \hat{\gamma}_m / \phi \\ \hat{\gamma}_f / \phi \end{bmatrix}.

    The final two results are then

    \hat{\alpha}_m &= \begin{bmatrix} \hat{\alpha}_{Xm} \\ \hat{\alpha}_{nonX} \end{bmatrix} \\
\hat{\alpha}_f &= \begin{bmatrix} \hat{\alpha}_{Xf} \\ \hat{\alpha}_{nonX} \end{bmatrix}.

Normalizing the ASE

To normalize the allele substitution effects, each ASE is divided by the SNP Standard Deviation, which is the square root of the component of variance \sigma^2_M associated with the ASE. \sigma^2_M is reconstructed by dividing the additive genetic variance by the sum of would-be variances over all the markers if each had been at Hardy-Weinberg equilibrium:

\sigma_M^2 = \frac{\sigma_G^2}{\phi}

The normalized ASE is then:

\hat{\alpha}_{norm} = \frac{1}{\sqrt{\sigma_M^2}}\hat{\alpha}

Genomic Prediction

Sometimes, it is desired to predict the random effects (genomic merit/genomic breeding values) for samples for which there is genotypic data but no phenotype data, based on other samples for which phenotype data (as well as genotypic and covariate data) does exist. (If covariate data exists for the samples with missing phenotypes, those phenotypes can then also be predicted, using the random-effect predictions.)

Call the n_t samples for which there are phenotype values the “training set”, and the n_v others the “validation set”. Assume all samples have genotypic data, imputed or otherwise, for all markers. Also assume all samples in the training set have valid covariate data, if there are covariates being used.

To predict the random effects and the missing phenotypes, we do the following:

  • Without loss of generality, imagine the samples of the training set all come first, before any samples of the validation set. Define Z = [I | 0], where the width and height of I is n_t, the width of the zero matrix is n_v, and the height of the zero matrix is n_t. Also partition u, X_f, and y according to training vs. validation, as

    u = \begin{bmatrix}
     u_t \\
     u_v
    \end{bmatrix},

X_f = \begin{bmatrix}
       X_{ft} \\
       X_{fv}
      \end{bmatrix},

    and

    y = \begin{bmatrix}
     y_t \\
     y_v
    \end{bmatrix} .

  • Either compute the genomic relationship matrix or import a pre-computed genomic relationship matrix based on all samples (both training and validation sets).

    Note

    If correcting for gender has been selected, the modifications for computing M and G remain the same as noted above in Correcting for Gender.

  • Use the EMMA technique (Finding the Variance Components) on the mixed model for the training set

    y_t = X_{ft} \beta_f + Z u + \epsilon_t

    (where Var(u) = \sigma^2_G G and Var(Zu) = \sigma^2_G ZGZ') to determine the proper values for \hat{\beta}_f, \delta, and the inverse of H_t, where H_t = ZGZ' + \delta I.

  • Noting that the form of the second of Henderson’s mixed-model equations, as modified to accommodate singular G, is, for Z \ne I,

    G Z' X_{ft} \hat{\beta}_f + (GZ'Z + \delta I) \hat{u} = GZ' y_t ,

    we obtain

    (GZ'Z + \delta I) \hat{u} = GZ' y_t - GZ' X_{ft} \hat{\beta}_f ,

    or

    \hat{u} = (GZ'Z + \delta I)^{-1} GZ'(y_t - X_{ft}\hat{\beta}_f) .

    Noting the following equalities,

    GZ'ZGZ' + \delta GZ' = (GZ'Z + \delta I)GZ' = GZ'(ZGZ' + \delta I) = GZ'H_t

    GZ' = (GZ'Z + \delta I)^{-1} GZ'H_t

    GZ'H_t^{-1} = (GZ'Z + \delta I)^{-1} GZ',

    we may write

    \hat{u} = GZ'H_t^{-1} (y_t - X_{ft}\hat{\beta}_f)

    as a solution for the genomic BLUP. If we now define

    \hat{\alpha} = M'Z'H_t^{-1}(y_t - X_{ft}\hat{\beta}_f) / \phi,

    we find that

    M\hat{\alpha} = \frac{MM'}{\phi}Z'H_t^{-1}(y_t - X_{ft}\hat{\beta}_f) = GZ'H_t^{-1}(y_t - X_{ft}\hat{\beta}_f) = \hat{u},

    which makes \hat{\alpha} a solution for the ASE. The computational streamlining becomes computing \hat{\gamma}_t as

    \hat{\gamma}_t = H_t^{-1}(y_t - X_{ft}\hat{\beta}_f),

    computing \hat{\gamma} as

    \hat{\gamma} = Z'\hat{\gamma}_t,

    and finally computing \hat{u} = G\hat{\gamma} and the ASE \hat{\alpha} = M'\hat{\gamma}/\phi as before.

    Note

    If correcting for gender has been selected, the modifications for partitioning M and \hat{\gamma} (both of which involve all samples) and computing the ASE (\hat{\alpha}) remain the same as noted above in Correcting for Gender.

  • Finally, noting that the full mixed-model problem is

    y = X_f \beta_f + u + \epsilon

    or

    \begin{bmatrix}
     y_t \\
     y_v
    \end{bmatrix} = \begin{bmatrix}
       X_{ft} \\
       X_{fv}
      \end{bmatrix} \beta_f + \begin{bmatrix}
     u_t \\
     u_v
    \end{bmatrix} + \epsilon,

    we predict the validation phenotypes

    \hat{y}_v = X_{fv} \hat{\beta}_f + \hat{u}_v

    from (the intercept and) any validation covariates X_{fv} and the predicted values \hat{u}_v. If there are missing validation covariates, the corresponding validation phenotypes are not predicted.
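
A minimal Python/NumPy sketch of these prediction steps follows. It assumes the first n_t samples are the training set, that G covers all samples, and that \delta comes from the EMMA step on the training model (\hat{\beta}_f is re-estimated here by GLS); the names are illustrative:

    import numpy as np

    def predict_validation(y_t, Xf_t, Xf_v, G, delta):
        """Predict breeding values for all samples and phenotypes for the validation set."""
        n_t, n = len(y_t), G.shape[0]
        Z = np.hstack([np.eye(n_t), np.zeros((n_t, n - n_t))])   # training samples come first
        H_t_inv = np.linalg.inv(Z @ G @ Z.T + delta * np.eye(n_t))
        beta_f = np.linalg.solve(Xf_t.T @ H_t_inv @ Xf_t, Xf_t.T @ H_t_inv @ y_t)
        gamma_t = H_t_inv @ (y_t - Xf_t @ beta_f)
        u_hat = G @ Z.T @ gamma_t                                # u-hat for all samples
        y_v_hat = Xf_v @ beta_f + u_hat[n_t:]                    # predicted validation phenotypes
        return y_v_hat, u_hat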

Bayes C and C-pi

Problem Statement

Suppose that we have the following mixed model equation to describe the relationship between the phenotypes of our samples, their genotypes, and fixed and random effects.

y = X_f \beta_f + u + \epsilon

over n samples, with fixed effects X_f whose coefficients \beta_f include the intercept and any additional covariates you may have specified. The random effects u are additive genetic merits or genomic breeding values associated with each sample.

We can define u in terms of the genotypes of each sample over m autosomal markers and the allele substitution effects.

u = M \alpha

where M is an n \times m matrix containing the genotypes of each sample. M_{ik} is 0, 1, or 2, depending upon whether the genotype for the i-th sample at the k-th locus is homozygous for the major allele, heterozygous, or homozygous for the minor allele, respectively. \alpha is a vector for which \alpha_k is the allele substitution effect (ASE) for marker k.

Estimating the Model Parameters

The two Bayesian methods for fitting this mixed model that are implemented in SVS are Bayes C and Bayes C\pi [Habier2011]. The only difference between the two methods is the assumption made about \pi.

\pi is the prior probability that a SNP has no effect on the phenotype. In the Bayes C\pi method, the value of \pi is treated as unknown and is estimated; in Bayes C, it is treated as known, with a value of 0.9 [Neves2012].

The Bayesian approach combines prior probabilities (beliefs about the parameters before the data are analyzed) with conditional probabilities (the probability density of the data given the parameters) to construct full conditional posterior distributions. These posterior distributions are then sampled from to estimate the model parameters.

SVS uses a single-site Gibbs sampler.

Prior and Posterior Distributions

There are six parameters that are sampled at each iteration of the Gibbs sampler.

The parameters are:

  • \pi: the prior probability that a SNP has no effect.
  • \alpha_k: the allele substitution effect (ASE) at locus k.
  • \sigma_M^2: the component of variance associated with the ASE.
  • \sigma_e^2: the component of variance associated with the error term.
  • \beta_f: the coefficient of the f-th fixed effect.
  • \delta: whether a marker is included in the current iteration.

The prior distributions for the parameters are as follows [Fernando2009a], [Sorensen2002]:

Parameter Prior
\pi \sim \mathcal{U}(0, 1)
\alpha_k \sim \mathcal{N}(0, \sigma_M^2)
\sigma_M^2 \sim v_M S_M^2 \chi_{v_M}^{-2}
\sigma_e^2 \sim v_e S_e^2 \chi_{v_e}^{-2}
\beta_f \propto constant
\delta \sim \mathcal{B}(numIter, 1 - \pi)

where numIter is the number of iterations for the Gibbs sampler, v_M = 4 and v_e = 2 [Fernando2009b], and S_M^2 is defined as [Habier2011]:

S_M^2 = \frac {\overset {\sim} {\sigma_M^2} (v_M - 2)} {v_M}

where \overset {\sim} {\sigma_M^2} is the initial value of \sigma_M^2 and is defined as [Habier2011]:

\overset {\sim} {\sigma_M^2} = \frac {\overset {\sim} {\sigma_s^2}} {m (1 - \pi) (\phi / m)}

where \phi is:

\phi = 2 \sum_{k = 1}^{m} p_k q_k

or the sum of would-be variances over all the markers if each had been at Hardy-Weinberg equilibrium. \sigma_s^2 is the additive-genetic variance explained by the SNPs [Habier2011]; it can be set to 0.05 [Fernando2009b].

The prior of \beta_f is a constant and is not a proper prior; however, the posterior is proper. The initial value for the first \beta_f is the mean of the phenotype values, and the rest are set to 0.

The posterior distributions for these parameters are [Fernando2009a], [Sorensen2002]:

Parameter Posterior
\pi \sim \text{Beta}(m - m_{iter} + 1, m_{iter} + 1)
\alpha_k \sim \mathcal{N}(\frac {M'_k(y - X\beta - M_k\prime \alpha_k\prime)} {M'_k M_k + \frac {\sigma_e^2} {\sigma_M^2}}, \frac {\sigma_e^2} {M'_k M_k + \frac {\sigma_e^2} {\sigma_M^2}})
\sigma_M^2 \sim \overset{\sim}{v_M} \overset{\sim}{S_M^2} \chi_{\overset{\sim}{v_M}}^{-2}
\sigma_e^2 \sim ((y - X\beta - M\alpha)'(y - X\beta -M\alpha) + v_e S_e^2) \chi_{v_e + n}^{-2}
\beta_f \sim \mathcal{N}((X'_f X_f)^{-1} X'_f (y - X_f\prime \beta_f\prime - M\alpha), (\frac {X'_f X_f} {\sigma_e^2})^{-1})
\delta \frac {f(r_k | \delta_k, \theta_{k\_}) Pr(\delta_k | \pi)} {f(r_k | \delta_k = 0, \theta_{k\_})\pi + f(r_k | \delta_k = 1, \theta_{k\_})(1 - \pi)}

where m_{iter} is the number of markers included in this iteration (see Deciding when to include a marker), and S_e^2 is the scale factor for the posterior distribution of \sigma_e^2 and is set to 1. \overset {\sim} {v_M} is [Habier2011]:

\overset {\sim} {v_M} = v_M + m_{iter}

and \overset {\sim} {S_M^2} is defined as [Habier2011]:

\overset {\sim} {S_M^2} = \frac {\alpha' \alpha + v_M S_M^2} {\overset {\sim} {v_M}}

where S_M^2 is defined as above and is updated each iteration with the new value of \pi.

Note

A prime in the subscript means “all but this.” For example, when sampling \beta_f, we remove all the fixed effects and their coefficients except for the current one.

The posterior of \delta can be better explained by [Fernando2009a]:

Pr(\delta_k | y, \beta, \alpha_{\_j}, \delta_{\_j}, \sigma_M^2, \sigma_e^2, \pi) = Pr(\delta_k | r_k, \theta_{k\_})

Pr(\delta_k | r_k, \theta_{k\_}) &= \frac {f(\delta_k, r_k | \theta_{k\_})} {f(r_k | \theta_{k\_})} \\
&= \frac {f(r_k | \delta_k, \theta_{k\_}) Pr(\delta_k | \pi)} {f(r_k | \delta_k = 0, \theta_{k\_})\pi + f(r_k | \delta_k = 1, \theta_{k\_})(1 - \pi)}

Deciding when to include a marker

Because of the difficulty in sampling directly from the \delta distribution, we use the log-likelihoods to find the probability that \delta is 1 and then sample from a uniform distribution, \mathcal{U}(0,1), to determine whether a marker will be included in the current iteration. If we do not include a marker, then \alpha_k is set to 0.

The log-likelihoods are defined as [Fernando2009b]:

log \mathcal{L}(\delta_k | 0) = - \frac {1} {2} (log(M'_k M_k \sigma_e^2) + \frac {(M'_k (y - X\beta - M_k\prime \alpha_k\prime))^2} {M'_k M_k \sigma_e^2}) + log(\pi)

log \mathcal{L}(\delta_k | 1)  = - \frac {1} {2} (log((M'_k M_k)^2 \sigma_M^2 + M'_k M_k \sigma_e^2) + \frac {(M'_k (y - X\beta - M_k\prime \alpha_k\prime))^2} {(M'_k M_k)^2 \sigma_M^2 + M'_k M_k \sigma_e^2}) + log(1 - \pi)

We can then define the probability that \delta is 1 as [Fernando2009b]:

P(\delta_k = 1) = \frac {1} {1 + e^{log \mathcal{L} (\delta_k | 0) - log \mathcal{L} (\delta_k | 1)}}
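
The following Python sketch shows how a single \delta_k could be sampled from these quantities during one Gibbs iteration. Here r is assumed to be the current residual y - X\beta - M_{k'}\alpha_{k'} (all effects except marker k), and the names are illustrative:

    import numpy as np

    def sample_delta_k(M_k, r, pi, sigma_M2, sigma_e2, rng):
        """Decide whether marker k is included in the current Gibbs iteration."""
        MkMk = M_k @ M_k
        rhs = (M_k @ r) ** 2                            # (M_k'(y - X beta - M_k' alpha_k'))^2
        logL0 = -0.5 * (np.log(MkMk * sigma_e2) + rhs / (MkMk * sigma_e2)) + np.log(pi)
        v1 = MkMk ** 2 * sigma_M2 + MkMk * sigma_e2
        logL1 = -0.5 * (np.log(v1) + rhs / v1) + np.log(1 - pi)
        p1 = 1.0 / (1.0 + np.exp(logL0 - logL1))        # P(delta_k = 1)
        return rng.uniform() < p1                       # include with probability p1

Here rng could be, for example, numpy.random.default_rng().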

Finding the ASE and Genomic Estimated Breeding Values

To find our estimates for \alpha and \beta, we take the average over all iterations:

\hat {\alpha_k} = \frac {\sum_{t = 1}^{numIter} \alpha_{kt}} {numIter}

\hat {\beta_f} = \frac {\sum_{t = 1}^{numIter} \beta_{ft}} {numIter}

for all k and f.

The ASE values will then be the new \hat \alpha values.

To find the genomic estimated breeding values (GEBV) we use the ASE values:

\hat u = M \hat \alpha

\hat \sigma_M^2, \hat \sigma_e^2, and \hat \pi are found in the same way as \hat \alpha and \hat \beta, taking the average value over all iterations.

Gender Correction

To correct for gender, we take the following steps:

  • For markers within the X chromosome, we use the following entries for matrix M:

    • For females, we encode them the same as non-X-chromosome markers.
    • For males, we use 0 or 1 for M_{ik}, depending upon whether the genotype for the i -th sample at the k -th locus contains the major (X-chromosome) allele or the minor (X-chromosome) allele, respectively.

    The other entries of M are left the same.

  • To compute \phi, we continue to use 2p_k q_k as the expected-variance term for most markers. For X-chromosome markers, however, we use

    w_m (p_k q_k)  +  w_f \frac{p_k q_k}{2},

    where w_m and w_f are the fraction of the samples that are male and female, respectively.

  • We still compute

    G = \frac{MM'}{\phi}

    as before.

  • During the \alpha_k sampling phase of the Gibbs sampler, the y and M matrices are split into male and female sections and separate samples are taken.

  • The non-X-Chromosome ASE values are added (from the males and females) to get the final ASE values.

Normalizing the ASE

To normalize the ASE values we do:

\hat \alpha_{norm} = \frac {1} {\sqrt{\sigma_M^2}} \hat \alpha

Standardizing Phenotype Values

Phenotype values will be standardized to prevent \sigma_e^2 from becoming too large and disrupting the results.

Values will be set to their z-score:

z = \frac {x - \mu} {\sigma}

The by-marker result spreadsheet will contain the original phenotypes and the standardized phenotypes. If phenotype prediction is chosen, there will also be a column of predicted phenotypes and a column of predicted phenotypes transformed back to the original scale.

Genomic Prediction

To predict the phenotypes of samples for which there is genotypic data but no phenotypic data (known as the validation set), we can run the Gibbs sampler with just the data from the samples having known phenotypic and genotypic data (the training set).

After running the Gibbs sampler with just the training set data we can estimate the GEBVs for all samples with:

\hat u = M \hat \alpha

where \hat \alpha is estimated with just the training samples.

And we can predict the phenotypes for all samples (training and validation) with:

\hat y = X \hat \beta + M \hat \alpha

However, if a sample has missing covariate values, its phenotype cannot be predicted and the sample will be dropped from the result spreadsheet.