Lecture 17 | 18 Oct 2016 | Graphs and PageRank Algorithm

We discussed the following topics in the lecture held on 18th Oct 2016:

  • Graph Structures
  • Shortest path using graphs
  • PageRank
    • Algorithm
    • Damping Factor
    • Spider Traps
    • Random Restart
  • Power Iteration
  • Spectral Decomposition
  • Degree, Adjacency and Laplacian Matrix

The detailed lecture notes can be found at the link below:


Lecture 8 | 30 Aug 2016 | Linear Regression and Singular Value Decomposition

Simple Linear Regression

Linear regression attempts to model the relationship between n input variables and one output variable by fitting a linear equation to the observed data. An important assumption of a linear regression model is that there is an underlying linear model, with certain parameters, that generates the observed data. Another important assumption is that this model also has an error component.

The linear model for a single variable regression is given by:

y =  \theta_0  +  \theta_1  x + \epsilon

Here x and y are the input and output variables respectively, \theta_0 and \theta_1 are the parameters of the linear model, and \epsilon denotes the error component associated with the model.

A brief note on \epsilon

Every model has an inherent error associated with it. This error component is denoted by \epsilon . We assume that \epsilon is normally distributed with zero mean. This type of error is called irreducible error.
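As a small illustration, here is an R sketch that simulates data from such a model using arbitrary, made-up parameter values; it is only meant to show the zero-mean normal error assumption in action:

set.seed(1)                          # for reproducibility
theta0 <- 2; theta1 <- 0.5           # hypothetical "true" parameters, not from the lecture
x <- runif(100, min = 0, max = 10)   # 100 input values
eps <- rnorm(100, mean = 0, sd = 1)  # irreducible error: zero-mean normal
y <- theta0 + theta1 * x + eps       # data generated by the assumed linear model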

A fit for the linear model

We assume that the data that we have is an unbiased sample from the population. Once we find the right fit for this sample data, it is more likely to be a good fit for the population as well. We have certain tests to figure out how good a fit our model is. We’ll see this later in this lecture.

We try to model the relationship between the dependent variable y and one or more explanatory variables denoted by X. We do this by estimating the values of the parameters \theta_0 and \theta_1 .

These estimated parameters are denoted by \hat{\theta_0} and \hat{\theta_1} , and the estimated value of y is given by:

\hat{y} = \hat{\theta_0}  +  \hat{\theta_1} x

An estimated value \hat{y}_i may not be exactly equal to the actual value y_i . The difference between the observed and estimated values is called the residual, denoted by:

e_i = y_i - \hat{y}_i

Methods of estimation of the parameters

The first method to estimate the parameters is to use the normal equation to solve for \theta , as discussed in an earlier class. The normal equation is given by:

\theta = (X^{T}X)^{-1}X^{T}y
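A minimal sketch of the normal equation in R, assuming a data frame named Advertising with columns TV (input) and sales (output); these names are borrowed from the advertising example used later in this lecture:

X <- cbind(1, Advertising$TV)            # design matrix with an intercept column
y <- Advertising$sales
theta <- solve(t(X) %*% X, t(X) %*% y)   # theta = (X^T X)^{-1} X^T y
theta                                    # theta[1] is theta_0, theta[2] is theta_1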

Another method is least squares. Here we minimize the residual sum of squares (RSS), given by:

RSS = \sum_{i=1}^{n} e_i^{2}

= \sum_{i=1}^{n} (y_i-\hat{y}_i) ^{2}

= \sum_{i=1}^{n} (y_i- \hat{\theta_0}  -  \hat{\theta_1} x_i ) ^{2}

Here y_i and x_i denote the i^{th} observation.

RSS can be interpreted as the sum of squares of the differences between each observed and estimated value. Our aim is to find the values of \theta_0 , \theta_1 that minimize RSS. We can achieve this by taking the partial derivatives of RSS w.r.t \theta_0 and \theta_1 , setting them to zero and solving for \theta_0 and \theta_1 .
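Setting these partial derivatives to zero yields the familiar closed-form estimates for simple linear regression:

\hat{\theta_1} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} , \qquad \hat{\theta_0} = \bar{y} - \hat{\theta_1}\bar{x}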

RSS is the sum of the squares of all the residuals, hence the name Residual Sum of Squares (RSS).

Minimizing RSS as a whole tends to reduce the individual residuals together, and when all the residuals shrink, RSS shrinks as well. This tells us that whatever value \hat{y}_i we estimate, it will not be far from y_i .

Assessing the Estimated Parameters

Going back to one of our assumptions, our sample is unbiased. Think of drawing a sample as drawing balls from a huge bag of numbered balls: the probability of drawing balls numbered 1 to 50 in a row is very low, i.e. the probability of getting a biased sample is very low, while the probability of getting a well-dispersed sample is very high. In other words, an unbiased sample is representative of the whole population. If we were given another independent sample from the same population, the fitted line would look more or less like the one described by the estimated parameters.

In R, linear regression can be performed using the lm function. This is shown below for the advertising data set with TV as the input variable.
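A hedged sketch of this call, assuming the advertising data has been loaded into a data frame named Advertising with columns TV and sales:

fit <- lm(sales ~ TV, data = Advertising)   # simple linear regression of sales on TV
summary(fit)                                # coefficients, standard errors, t-values, p-values, R-squared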

The ‘lm’ function in R returns details about the fit of our model. It provides the values of \theta as coefficients, along with their standard errors, t-values and the associated probabilities (p-values). It also reports the RSS and R^2 values. We will look at all these terms one by one.


Let us start with the plot of the residuals for each observation.



The dream goal is to make all the residuals zero, but that never happens. We can see that the residuals are symmetric about the line y = 0, i.e. the x-axis, with a central tendency towards it. But can we infer that the residuals are normally distributed? A histogram gives us the answer.
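As a short sketch (reusing the fit object from the lm() call above), these two plots can be produced with:

plot(resid(fit), ylab = "residual")   # residual for each observation
abline(h = 0)                         # the y = 0 reference line
hist(resid(fit), breaks = 20)         # are the residuals roughly normal around zero?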



We can see that the residuals are more or less normally distributed around zero. This means the y_i are normally distributed about the line \hat{y} = \hat{\theta_0}  +  \hat{\theta_1} x .

If the residuals are not normally distributed, we can conclude either that \epsilon is not normally distributed, that the sample is bad (i.e. biased), or that the linear model is not the right fit for this sample.

Standard Error

All the parameters that we are calculating here are only estimates: \hat{\theta_1} is an estimate of the slope, not the actual slope. Hence all the parameters come with the properties associated with estimates. One such property is the Standard Error, the deviation that we can expect in the estimate. That is, if we draw several random samples from the population and compute \hat{\theta_1} for each, the expectation of all the \hat{\theta_1} should be very close to \theta_1 .

E(\hat{\theta_1})  = \theta_1

The Standard Error of \hat{\theta_1} is given by:

SE(\hat{\theta_1})^2 =  \frac{\sigma _e^2}{\sum_{i=1}^{n} (x_i - \bar{x})^2}

This can be interpreted as the variance of the residuals relative to the spread of x: the SE grows with the variance of the residuals and shrinks as the variance of x increases. If the error terms e_i are huge, the scope for fitting lines with different slopes increases too. This is captured in the Standard Error.

Here the variance of \epsilon is approximated as variance of e.

\sigma_\epsilon \approx \sigma _e
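A rough sketch of this computation, again assuming the fit object and Advertising data frame from above:

n <- nrow(Advertising)
sigma_e <- sqrt(sum(resid(fit)^2) / (n - 2))   # residual standard error, approximating sigma_epsilon
se_theta1 <- sigma_e / sqrt(sum((Advertising$TV - mean(Advertising$TV))^2))
se_theta1                                      # compare with the Std. Error column of summary(fit)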

Total Sum of Squares (TSS)

Hypothetically, assume that there is no relationship between x and y. In this case the x values do not affect the y values in any way. In other words, there is no linear dependency between x and y.

For such a scenario, what would be the best estimate of y? The best estimate would be the mean of y, i.e. \bar{y} . TSS captures the variation in y w.r.t. \bar{y} . It is given by:

TSS = \sum (y_i - \bar{y})^2

If there is a linear dependency between x and y, TSS is the worst-case scenario for RSS: a fitted linear model can do no worse than always predicting \bar{y} .

Residual Sum of Squares (RSS)

RSS is a measure of the deviation of the predicted values (obtained from the estimated model) from the actual data. In other words, RSS shows the residual error in the predictions of the estimated model. Hence, a small value of RSS indicates a good fit of the model to the observed data.

RSS = \sum_{i=1}^{n} (y_i-\hat{y}_i) ^{2}

The main aim of linear regression is to fit a linear model that improves the value of R-squared as much as possible. The correctness of the linear fit can be justified using hypothesis testing.

Hypothesis Testing for Linear Regression

t-statistic

The t-statistic is the ratio of the deviation of the estimated parameter from its hypothesized value to its standard error.

t =  \frac{ \hat{\theta}  - \theta } { SE \left( \hat{ \theta } \right) }

If the scatter plot of data looks linear, we can fit a linear model to it. Hypothesis testing tells us objectively how good our fit is.

The null hypothesis in this case is that there is no linear relationship between the dependent variable (y) and the independent variable (x). Mathematically, this is equivalent to saying that \theta_1  = 0 . The alternative hypothesis states that there is a linear relationship between x and y. The goal of hypothesis testing is to check whether the data give us enough evidence to reject the null hypothesis.

In this case the t-statistic is written as:

t=  \frac{\hat{\theta_1} - 0 }{ SE \left( \hat{\theta_1} \right) }
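In R, this t-value and its p-value are part of the coefficient table returned by summary; a hedged sketch, reusing the fit object from the lm() call above:

coef(summary(fit))    # Estimate, Std. Error, t value, Pr(>|t|) for each parameter
# the t value for TV is simply Estimate / Std. Error:
coef(summary(fit))["TV", "Estimate"] / coef(summary(fit))["TV", "Std. Error"]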

Suppose there is no relationship between the variables x and y. Then the value of \theta_1 would be zero, and \hat{\theta_1} would differ from it only by some standard error. This error arises because a sample drawn from the population can vary slightly and hence show slight dependence between the variables, but in general the dependence would be very weak unless the data is very skewed.

If there is no relationship between the variables, then hypothesis testing gives us an acceptable range in which \hat{\theta_1} should lie. Precisely, in the above case the distribution of \hat{\theta_1} is normal around zero, and there is a 95% chance that the value of \hat{\theta_1} lies in the interval \left(  -2 \ std.\ error, +2 \ std.\ error \right) . But if there were a relationship between the variables, then \hat{\theta_1} would follow some other normal distribution around a different (non-zero) mean, and the probability of getting \hat{\theta_1} near that new mean would be high. Given a sample, one value of \hat{\theta_1} is known.


Now the main problem is to know whether this \hat{\theta_1} came from the right-hand distribution or the left-hand distribution (refer to the figure above). The chance that this \hat{\theta_1} came from the right-hand distribution cannot be computed, because there is no information about that distribution. But given the value of \hat{\theta_1} , it can be said with 95% confidence that \theta_1 lies in the interval \left(  \hat{\theta_1} -2 \ std.\ error, \hat{\theta_1} +2 \ std.\ error \right) . Thus the main focus is to check whether the hypothesized value \theta_1 = 0 lies in this confidence interval of \hat{\theta_1} or not. In simple words, the focus is to obtain the probability of obtaining such a sample from the population given that there is no linear relationship. The t-statistic is computed to determine this.

For the advertisement example, the t-statistic for TV comes out to be 17.67, which has a probability of less than 10^{-16}, i.e. given that there is no linear dependence between TV advertisement and sales, it is almost impossible to obtain \hat{\theta_1} = 0.047 . Hence there exists a relationship between the two variables and the null hypothesis can be rejected.



Similarly, Radio advertisement and sales also show linear dependence, which can be clearly seen from the obtained t-statistic value and probability.



Now, from the graph below it is clear that there is no strong linear relationship between sales and newspaper advertisement, which is also reflected in the much lower t-statistic of 3.3. Thus the null hypothesis cannot be rejected here, and there is a good chance that there is no linear relation between newspaper advertisement and sales.




R-Squared

R-Squared, also known as the coefficient of determination, indicates the proportion of the total variance that is accounted for by the fitted model. In other words, it is a measure of how well the model fits the actual data. In mathematical terms it is defined as:

R^{2} = \frac{(TSS - RSS)}{TSS}
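A short sketch of this computation in R, reusing the fit object and the (assumed) Advertising data frame from above:

rss <- sum(resid(fit)^2)                                     # residual sum of squares
tss <- sum((Advertising$sales - mean(Advertising$sales))^2)  # total sum of squares
(tss - rss) / tss                                            # should match summary(fit)$r.squared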

The R-squared values for the advertising examples are as follows:


The ideal goal of fitting a model is to account for 100% of the variance, but in reality that is not possible; the model accounts for as much variance as it can. It is clear from the above table that TV advertisements alone account for 61% of the variance, Radio for 33.2% and newspaper for only 5.2%. Another important observation is that the sum of the R-squared values of TV and Radio is different from the R-squared value obtained when TV and Radio are fitted together. The same is the case when TV, Radio and newspaper are all taken together. This is explained by the fact that the variables TV, Radio and newspaper have non-zero correlation between them. In other words, the variables are not orthogonal to each other. So, in order to account for the maximum variance of the response variable, we need to select linear combinations of attributes that are orthogonal to each other, and that can be done using singular value decomposition (SVD), discussed in the next section.

Singular Value Decomposition

Any dataset can be viewed as a matrix, in which each column represents an attribute. Our goal with SVD is to find independent linear combinations of attributes that explain the variance of the response variable. Applying a matrix to a vector scales and rotates that vector. Scaling, in the geometric sense, is a measure of dispersion, and the maximum variance lies along the direction of maximum expansion. So given a matrix A, our aim is to find the direction and scaling coefficient of maximum expansion. This maximum scaling value is called the first singular value of the matrix.

\sigma_1 = \max_{v} \frac{\|Av\|_2}{\|v\|_2} \qquad \text{(first singular value)}

We restrict v to the unit sphere, so \|v\|_2 = 1 . Thus,

v_1 = \underset{\|v\|_2 = 1}{\arg\max} \ \|Av\|_2

v_1 is the first singular vector of matrix A. (Note: for a symmetric positive semi-definite matrix, the first singular value is the largest eigenvalue and the first singular vector is the corresponding eigenvector.)

Let us define u_1 such that, A v_1 = \sigma_1 u_1

Now we need to find the next direction in which the variance is largest and which is independent of v_1, i.e. orthogonal to v_1. Otherwise there would be some correlation, which does not provide an independent piece of information in terms of variance. So consider a unit vector v_2 that is perpendicular to v_1. Thus,

v_2 = \underset{\begin{subarray}{c} \|v_2\|_2 = 1 \\ v_2 \perp v_1 \end{subarray}}{\arg\max} \ \|A v_2\|_2

Accordingly for v_i it can be written as,

v_i = \underset{\begin{subarray}{c} \|v_i\|_2 = 1 \\ v_i \perp span \{ v_1, v_2, ...,v_{i-1} \} \end{subarray}}{\arg\max} \ \|A v_i\|_2

Note that all the singular vectors lie in the domain of A and are orthonormal. Each additional v_i increases the dimension spanned by 1, and the overall dimension can be at most n. The rank is the number of such vectors needed to exhaust the action of the matrix. So if r is the rank of matrix A, then

Av_{r+1} = Av_{r+2} = ... = Av_{n} = 0

Thus v_{r+1}, v_{r+2},...,v_{n} lie in null space of A and v_{1}, v_{2},...,v_{r} lie in row space of A. \{v_{1}, v_{2},...,v_{r} \} forms the orthonormal basis of row space of A while \{v_{r+1}, v_{r+2},...,v_{n}\} forms the orthonormal basis of null space of A.

Let, u_1, u_2, ..., u_r be such that

A v_1 = \sigma_1 u_1, A v_2 = \sigma_2 u_2, …, A v_r = \sigma_r u_r


Source: Gilbert Strang Paper

u_1, u_2, ... ,u_r lie in column space of A. Also, \{u_{1}, u_{2},...,u_{r} \} forms the orthonormal basis of column space of A. The above equations can be written as,

\begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix}_{m \times n} \begin{pmatrix} \vdots & \vdots & & \vdots \\ v_{1} & v_{2} & \cdots & v_{r} \\ \vdots & \vdots & & \vdots \end{pmatrix}_{n \times r} = \begin{pmatrix} \vdots & \vdots & & \vdots \\ u_{1} & u_{2} & \cdots & u_{r} \\ \vdots & \vdots & & \vdots \end{pmatrix}_{m \times r} \begin{pmatrix} \sigma_1 & & & \\ & \sigma_2 & & \\ & & \ddots & \\ & & & \sigma_r \end{pmatrix}_{r \times r}

As V is orthonormal, V^T = V^{-1} .

Extending \{ v_1, ..., v_r \} and \{ u_1, ..., u_r \} to full orthonormal bases (the extra v's span the null space of A) and multiplying both sides on the right by V^T , we get:

A_{m \times n} = U_{m \times m} \Sigma_{m \times n} {V^T}_{n \times n}

Singular Value Decomposition is thus a method of breaking down any matrix with m rows and n columns into a product of three matrices.

A = U\Sigma V^T
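In R, this decomposition is available through the built-in svd function; a minimal sketch on an arbitrary random matrix:

A <- matrix(rnorm(20), nrow = 5, ncol = 4)   # any numeric data matrix
s <- svd(A)
s$d                                          # singular values (the diagonal of Sigma)
s$u %*% diag(s$d) %*% t(s$v)                 # reconstructs A up to rounding error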



Illustration in R

Using the ‘princomp’ function in R:


‘princomp’ performs PCA for the data. It outputs the variance explained by the principal components.

Using ‘screeplot’ to plot the variances explained by the different principal components:
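A hedged sketch of both steps, assuming the advertising data frame from above with attribute columns named TV, radio and newspaper:

pc <- princomp(Advertising[, c("TV", "radio", "newspaper")])   # PCA on the attribute columns
summary(pc)    # proportion of variance explained by each principal component
screeplot(pc)  # plot of the variances of the principal components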