Computer Lab - Semester 3

Solution of Linear system of equations

Gauss elimination method

It is a method to solve simultaneous linear equations. It is performed in two stages:

  • Elimination

  • Back substitution

Before we develop the algorithm for solving a set of equations by the Gauss elimination method, let us try to understand the steps with the help of an example.

Suppose the set of equations is represented in matrix form by \(AX=B\). In the elimination stage, the matrix equation \(AX=B\) is transformed into the form \(UX=C\), where \(U\) is an upper triangular matrix and \(C\) is the right-hand side modified by the same row operations. Let us try to understand it by taking an example. Consider the following set of equations.

\begin{align} &4x_1 - 2x_2 +x_3 = 11 \label{eq1a}\tag{1a}\\ &-2x_1 + 4x_2 -2x_3 = -16 \label{eq1b}\tag{1b} \\ &x_1 -2x_2 + 4x_3 = 17\label{eq1c}\tag{1c} \end{align}

Suppose we have a set of equations, out of which we choose a particular equation, say \((j)\), called the pivot equation. In the elimination stage, we multiply the pivot equation \((j)\) by a number \(\lambda\) and subtract it from another equation, say \((i)\). Symbolically, it is represented as \begin{equation} Eq.(i)\leftarrow Eq.(i) - \lambda \times Eq.(j) \end{equation}

Let us choose Eq. \eqref{eq1a} as our pivot equation. To eliminate \(x_1\) from Eqs. \eqref{eq1b} and \eqref{eq1c}, the multipliers are the ratios of the coefficient to be eliminated to the pivot coefficient: \(\lambda = -2/4 = -0.5\) and \(\lambda = 1/4 = 0.25\) respectively. We perform the following two operations: \begin{equation} \begin{aligned} &Eq.(\ref{eq1b})\leftarrow Eq.(\ref{eq1b}) - (-0.5) \times Eq.(\ref{eq1a}) \\ &Eq.(\ref{eq1c})\leftarrow Eq.(\ref{eq1c}) - (0.25) \times Eq.(\ref{eq1a}) \end{aligned} \end{equation}

After this transformation, the set of equations become

\begin{align} &4x_1 - 2x_2 +x_3 = 11 \label{eq2a}\tag{2a}\\ &3x_2 -1.5x_3 = -10.5 \label{eq2b}\tag{2b} \\ &-1.5x_2 + 3.75x_3 = 14.25\label{eq2c}\tag{2c} \end{align}

In this way, we have completed the first pass of the elimination phase. Next, we choose Eq. \eqref{eq2b} as our pivot equation and eliminate \(x_2\) from Eq. \eqref{eq2c}:

\begin{equation} Eq.(\ref{eq2c})\leftarrow Eq.(\ref{eq2c}) - (-0.5) \times Eq.(\ref{eq2b}) \end{equation}

After this operation, we finally obtain the following set of equations -

\begin{align} &4x_1 - 2x_2 +x_3 = 11 \label{eq3a}\tag{3a}\\ &3x_2 -1.5x_3 = -10.5 \label{eq3b}\tag{3b} \\ &3x_3 = 9\label{eq3c}\tag{3c} \end{align}

The elimination phase is now complete and the matrix \(A\) has been transformed into the upper triangular matrix \(U\).

In the back substitution phase, the unknowns can be computed as follows -

\begin{align} &x_3=9/3=3 \tag{4a}\\ &x_2=(-10.5 + 1.5x_3)/3= (-10.5 + 1.5(3))/3 = -2 \tag{4b} \\ &x_1 = (11 + 2x_2 - x_3)/4 = (11 + 2(-2) - 3)/4 = 1 \tag{4c} \end{align}
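As a quick sanity check (a short NumPy sketch; the notebook cells below build the same matrices), we can substitute the computed values back into the original system and verify that the residual vanishes:

```python
import numpy as np

# Coefficient matrix and right-hand side of Eqs. (1a)-(1c)
A = np.array([[4, -2, 1], [-2, 4, -2], [1, -2, 4]], dtype=float)
b = np.array([11, -16, 17], dtype=float)

# Solution found above by back substitution
x = np.array([1, -2, 3], dtype=float)

# The residual A@x - b is zero when the solution is exact
print(A @ x - b)   # [0. 0. 0.]
```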

Algorithm of the Gauss Elimination Method

Elimination phase

Let the \(i\)th row be a typical row below the pivot row (the \(k\)th row) that is to be transformed, i.e. we have to eliminate the element \(a_{ik}\) from this row. This is achieved by multiplying the pivot row by \(\lambda = a_{ik}/a_{kk}\) and subtracting it from the \(i\)th row. The corresponding changes in the \(i\)th row are

\begin{align} &a_{ij} \leftarrow a_{ij} - \lambda a_{kj},\hspace{0.5cm}j=k, k+1, \cdots n\\ &b_i \leftarrow b_i - \lambda b_k \label{4}\tag{4} \end{align}

To transform the entire coefficient matrix into upper triangular form, \(k\) and \(i\) in Eq. \eqref{4} have the ranges -

\begin{align} & k = 1, 2, \cdots n \hspace{2cm} \mbox{(choose the pivot row)}\\ & i = k+1, k+2, \cdots n \hspace{1cm} \mbox{(choose the rows to be transformed)} \end{align}

The code in Python can be scripted as follows -

for k in range(n):                      # choose the pivot row
    for i in range(k+1, n):             # rows below the pivot row
        if a[i, k] != 0:                # skip if the element is already zero
            lam = a[i, k]/a[k, k]
            a[i, :] = a[i, :] - lam*a[k, :]
            b[i] = b[i] - lam*b[k]

Back substitution

After the elimination phase, the augmented coefficient matrix has the form

\begin{equation} \left(\begin{array}{ccccc|c} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & b_1 \\ 0 & a_{22} & a_{23} & \cdots & a_{2n} & b_2 \\ 0 & 0 & a_{33} & \cdots & a_{3n} & b_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & a_{nn} & b_n \\ \end{array}\right) \end{equation}

The last equation \(a_{nn}x_n=b_n\) is solved first, yielding

\begin{equation} x_n=\frac{b_n}{a_{nn}} \end{equation}

Consider now the stage of back substitution where \(x_n\), \(x_{n-1}\), \(\cdots\), \(x_{k+1}\) have already been evaluated and we are about to determine \(x_k\) from the \(k\)th equation

\begin{equation} a_{kk}x_k + a_{k, k+1}x_{k+1} + \cdots + a_{kn}x_n = b_k \end{equation}

The solution is

\begin{equation} x_k = \left( b_k - \sum_{j=k+1}^{n}a_{kj}x_{j}\right) \frac{1}{a_{kk}}, \hspace{0.5cm} k = n-1, n-2, \cdots, 1 \end{equation}

This can be implemented in Python with the following lines of code -

for k in range(n-1, -1, -1):
    x[k] = (b[k] - np.dot(a[k, k+1:n], x[k+1:n]))/a[k,k]

We now solve the set of equations with which we started at the beginning.

[10]:
import numpy as np

# Matrix construction based on given system of equations

A = np.array([[4, -2, 1], [-2, 4, -2], [1, -2, 4]], dtype=float)
b = np.array([11, -16, 17], dtype=float)
X = np.zeros_like(b)   # will hold the solution vector
n = len(b)
print(A, b)
[[ 4. -2.  1.]
 [-2.  4. -2.]
 [ 1. -2.  4.]] [ 11. -16.  17.]
[11]:
# Gauss Elimination Phase - converting to Upper Triangular matrix

for k in range(n):
    for i in range(k+1, n):
        if A[i, k] != 0:
            lam = A[i, k]/A[k, k]
            A[i, :] = A[i, :] - lam*A[k, :]
            b[i] = b[i] - lam*b[k]
print(A, b)
[[ 4.  -2.   1. ]
 [ 0.   3.  -1.5]
 [ 0.   0.   3. ]] [ 11.  -10.5   9. ]
[12]:
# Back substitution phase

for k in range(n-1, -1, -1):
    X[k] = (b[k] - np.dot(A[k, k+1:n], X[k+1:n]))/A[k,k]

# Printing solutions
print('The solution matrix (X) - \n'+str(X))
The solution matrix (X) -
[ 1. -2.  3.]
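The result can also be cross-checked with NumPy's built-in direct solver; note that `A` and `b` were overwritten in place during elimination, so the original system is rebuilt here:

```python
import numpy as np

# Rebuild the original system (A and b were overwritten in place)
A0 = np.array([[4, -2, 1], [-2, 4, -2], [1, -2, 4]], dtype=float)
b0 = np.array([11, -16, 17], dtype=float)

# np.linalg.solve uses an LU decomposition with partial pivoting
print(np.linalg.solve(A0, b0))
```

which should reproduce the solution \((1, -2, 3)\) obtained above.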

Gauss-Seidel Method (Iterative method)

An \(n\times n\) linear system of equations can also be solved by iterative methods. The most fundamental of these is the Jacobi method; the Gauss-Seidel method is a slightly modified version of it.

For simplicity, consider a \(3\times3\) system

\begin{align} a_{11}x_1 + a_{12}x_2 + a_{13} x_3 = b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23} x_3 = b_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33} x_3 = b_3 \end{align}

When the diagonal elements of the system are non-zero, we can rewrite the above equations as

\begin{align} x_1 = \frac{1}{a_{11}}(b_1 - a_{12}x_2 - a_{13}x_3) \\ x_2 = \frac{1}{a_{22}}(b_2 - a_{21}x_1 - a_{23}x_3) \\ x_3 = \frac{1}{a_{33}}(b_3 - a_{31}x_1 - a_{32}x_2) \end{align}

Let \(x^{(0)}=(x_1^{(0)}, x_2^{(0)}, x_3^{(0)})\) be an initial guess for the true solution \(x\); we then define the iteration sequence:

\begin{align} x_1^{(m+1)} = \frac{1}{a_{11}}(b_1 - a_{12}x_2^{(m)} - a_{13}x_3^{(m)}) \\ x_2^{(m+1)} = \frac{1}{a_{22}}(b_2 - a_{21}x_1^{(m)} - a_{23}x_3^{(m)}) \\ x_3^{(m+1)} = \frac{1}{a_{33}}(b_3 - a_{31}x_1^{(m)} - a_{32}x_2^{(m)}) \end{align}

for \(m=0, 1, 2, \cdots\). This is called the Jacobi iteration method.
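The Jacobi iteration above can be sketched in a few lines of NumPy (this sketch is not part of the original notebook; as a test case it uses the diagonally dominant system solved in the Gauss-Seidel cell below):

```python
import numpy as np

def jacobi(a, b, x0, tol=1e-9, max_iter=100):
    # Jacobi iteration: every component is updated using only
    # values from the previous sweep
    x = x0.astype(float)
    D = np.diag(a)                 # diagonal entries a_ii
    R = a - np.diagflat(D)         # off-diagonal part of the matrix
    for m in range(max_iter):
        x_new = (b - R @ x) / D    # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

a = np.array([[3.0, -0.1, 0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(np.round(jacobi(a, b, np.zeros(3)), 5))
```

Because the matrix is diagonally dominant, the iteration is guaranteed to converge, though typically in more sweeps than Gauss-Seidel.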

A modified version of the Jacobi method is the Gauss-Seidel method, in which each newly computed component is used immediately within the same sweep:

\begin{align} &x_1^{(m+1)} = \frac{1}{a_{11}}(b_1 - a_{12}x_2^{(m)} - a_{13}x_3^{(m)}) \\ &x_2^{(m+1)} = \frac{1}{a_{22}}(b_2 - a_{21}x_1^{(m+1)} - a_{23}x_3^{(m)}) \\ &x_3^{(m+1)} = \frac{1}{a_{33}}(b_3 - a_{31}x_1^{(m+1)} - a_{32}x_2^{(m+1)}) \end{align}

[13]:
import numpy as np

a = np.array([ [3, -0.1, 0.2], [0.1, 7, -0.3], [0.3, -0.2, 10] ])
b = np.array([7.85, -19.3, 71.4])

tolerance = 1e-9
max_iter = 100

def GaussSeidel(a, b, x, tol, max_iter):
    # Sweep repeatedly over the unknowns until the change between
    # successive sweeps falls below tol (or max_iter is reached)
    for m in range(max_iter):
        xOld = x.copy()
        for i in range(len(b)):
            # x[:i] already holds the values updated in this sweep
            x[i] = (b[i] - np.sum(a[i, :i]*x[:i]) - np.sum(a[i, i+1:]*x[i+1:]))/a[i, i]
        if np.linalg.norm(x - xOld) < tol:
            break
    return x

x = GaussSeidel(a, b, np.zeros(3), tolerance, max_iter)
np.round(x, 5)
[13]:
array([ 2.06526, -2.48543,  7.02833])
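As with the direct method, the iterative result can be cross-checked against NumPy's direct solver:

```python
import numpy as np

a = np.array([[3, -0.1, 0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]])
b = np.array([7.85, -19.3, 71.4])

# Direct solution for comparison with the Gauss-Seidel iterate
print(np.round(np.linalg.solve(a, b), 5))   # [ 2.06526 -2.48543  7.02833]
```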
[ ]: