Analytically, we can usually choose any point in an interval where a change of sign takes place. However, this is subject to certain conditions that vary from method to method.
The bisection method requires a continuous function f and an interval [a, b] such that f(a) and f(b) have opposite signs. In this case a and b are said to bracket a root: f must have at least one root in the interval (a, b).
At each step the method divides the interval in two by computing the midpoint X1 = (a+b) / 2 of the interval and the value of the function f(X1) at that point.
Unless X1 is itself a root (which is very unlikely, but possible) there are now two possibilities:
either f(a) and f(X1) have opposite signs and bracket a root, or f(X1) and f(b) have opposite signs and bracket a root.
The method selects the subinterval that is a bracket as a new interval to be used in the next step.
In this way the interval that contains a zero of f is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small.
Explicitly, if f(a) and f(X1) have opposite signs, then the method sets X1 as the new value for b; and if f(b) and f(X1) have opposite signs, then the method sets X1 as the new a.
(If f(X1) = 0, then X1 may be taken as the solution and the process stops.)
In both cases the new f(a) and f(b) have opposite signs, so the method is applicable to this smaller interval.
Example: x^3 - 9x + 1 = 0. Here f(0) = 1 and f(1) = -7 have opposite signs, so the bisection method applies on [0, 1].
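The steps above can be sketched in Python as follows (the tolerance and iteration cap are illustrative choices, not part of the method itself):

```python
def f(x):
    # The example equation: f(x) = x^3 - 9x + 1, with a sign change on (0, 1).
    return x**3 - 9*x + 1

def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Halve the bracketing interval [a, b] until it is smaller than tol."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol and max_iter > 0:
        x1 = (a + b) / 2          # midpoint of the current bracket
        if f(x1) == 0:
            return x1             # x1 happens to be an exact root
        if f(a) * f(x1) < 0:
            b = x1                # f(a) and f(x1) have opposite signs: root in (a, x1)
        else:
            a = x1                # otherwise the root is in (x1, b)
        max_iter -= 1
    return (a + b) / 2

root = bisect(f, 0.0, 1.0)        # converges to roughly 0.1113
```

Each pass through the loop keeps whichever half-interval still brackets the root, so the width of the bracket is cut by 50% per step, exactly as described above.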
Regula Falsi Method
Like the bisection method, the false position method starts with two points a0 and b0 such that f(a0) and f(b0) are of opposite signs, which implies by the intermediate value theorem that the function f has a root in the interval [a0, b0], assuming continuity of the function f.
We start this procedure also by locating two points x0 and x1 where the function has opposite signs.
We now connect the two points f(x0) and f(x1) by a straight line and find where it cuts the x-axis.
Let it cut the x-axis at x2; we then evaluate f(x2).
If f(x2) and f(x0) are of opposite signs, then we replace x1 by x2 and draw a straight line connecting f(x2) to f(x0) to find the new intersection point.
If f(x2) and f(x0) are of the same sign, then x0 is replaced by x2 and we proceed as before.
In both cases the new interval of search is smaller than the initial interval.
We can now write down the equation for the successive approximations to the root:
x3 = x2 - f(x2)(x2 - x1) / (f(x2) - f(x1))
x4 = x3 - f(x3)(x3 - x2) / (f(x3) - f(x2))
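The procedure can be sketched as follows, reusing the earlier example f(x) = x^3 - 9x + 1 on [0, 1] (the tolerance and iteration cap are illustrative):

```python
def f(x):
    return x**3 - 9*x + 1             # same example equation as before

def regula_falsi(f, x0, x1, tol=1e-10, max_iter=100):
    """Repeatedly replace one endpoint with the chord's x-intercept."""
    if f(x0) * f(x1) > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    x2 = x0
    for _ in range(max_iter):
        # x-intercept of the straight line through (x0, f(x0)) and (x1, f(x1))
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(f(x2)) < tol:
            break
        if f(x2) * f(x0) < 0:
            x1 = x2                   # root lies between x0 and x2
        else:
            x0 = x2                   # root lies between x2 and x1
    return x2

root = regula_falsi(f, 0.0, 1.0)
```

The update line is exactly the successive-approximation formula above; the only difference from the bisection method is that the new point is the chord's intercept rather than the midpoint.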
Fixed Point Iteration Method
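No worked text accompanies this heading; as a hedged sketch, fixed point iteration rewrites f(x) = 0 as x = g(x) and iterates xn+1 = g(xn), stopping when successive values agree. The rearrangement g(x) = (x^3 + 1)/9 of the earlier example equation is my own illustrative choice (its derivative x^2/3 stays well below 1 near the root, so the iteration contracts):

```python
def g(x):
    # Rearrangement of x^3 - 9x + 1 = 0 into x = (x^3 + 1) / 9 (illustrative choice).
    return (x**3 + 1) / 9

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(g, 0.0)
```

Note that a different rearrangement of the same equation may fail to converge; the method requires |g'(x)| < 1 near the root.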
Newton-Raphson Method
One starts with an initial guess which is reasonably close to the true root; then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function’s root than the original guess, and the method can be iterated.
The next approximation, xn+1, is where the tangent line intersects the x-axis, that is, where y = 0. The tangent at xn is y = f(xn) + f'(xn)(x - xn); setting y = 0 and rearranging, we find
xn+1 = xn - f(xn) / f'(xn)
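A minimal sketch of the iteration, again using f(x) = x^3 - 9x + 1 (the starting guess and tolerance are illustrative):

```python
def f(x):
    return x**3 - 9*x + 1

def fprime(x):
    return 3*x**2 - 9                 # derivative of f, used for the tangent line

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Follow the tangent line at each guess down to its x-intercept."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)        # x-intercept of the tangent at (x, f(x))
    return x

root = newton(f, fprime, 0.0)
```

Unlike bisection and regula falsi, this method needs the derivative and a good starting guess, but in return it typically converges in far fewer iterations.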
In linear algebra, the identity matrix or unit matrix of size n is the n×n square matrix with ones on the main diagonal and zeros elsewhere.
There are three types of elementary matrices, which correspond to three types of row operations (respectively, column operations):
A row within the matrix can be switched with another row.
Each element in a row can be multiplied by a non-zero constant.
A row can be replaced by the sum of that row and a multiple of another row.
The elementary matrix for any row operation is obtained by executing the operation on an identity matrix.
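The three operation types can be sketched as follows; each elementary matrix is built by performing the corresponding row operation on the identity matrix (the 3x3 sizes and the matrix A are illustrative):

```python
def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

# Type 1: swap rows 0 and 1 -- perform the swap on the identity matrix.
E_swap = identity(3)
E_swap[0], E_swap[1] = E_swap[1], E_swap[0]

# Type 2: multiply row 1 by the non-zero constant 5.
E_scale = identity(3)
E_scale[1][1] = 5.0

# Type 3: replace row 2 by (row 2) + 2 * (row 0).
E_add = identity(3)
E_add[2][0] = 2.0

A = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]
```

Left-multiplying A by one of these matrices applies the corresponding row operation to A: for example, matmul(E_swap, A) returns the rows of A with rows 0 and 1 exchanged.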
Row-echelon form – A matrix is in row-echelon form if
(i) all zero rows (if any) are at the bottom of the matrix, and
(ii) whenever two successive rows are non-zero, the second row starts with more zeros than the first (moving from left to right).
For example, the matrix
0 1 0 0
0 0 1 0
0 0 0 0
0 0 0 0
is in row-echelon form.
Reduced row-echelon form – A matrix is in reduced row-echelon form if
1. it is in row-echelon form,
2. the leading (leftmost non-zero) entry in each row is 1, and
3. all other elements of the column in which a leading 1 occurs are zero.
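The three conditions translate directly into a checking routine; the sketch below (a pure-Python illustration, with a matrix stored as a list of rows) tests each condition in turn:

```python
def is_rref(M):
    """Check the three reduced row-echelon conditions on a list-of-rows matrix."""
    last_lead = -1
    seen_zero_row = False
    for i, row in enumerate(M):
        nonzero = [j for j, v in enumerate(row) if v != 0]
        if not nonzero:
            seen_zero_row = True      # zero rows must all sit at the bottom
            continue
        if seen_zero_row:
            return False              # a non-zero row appears below a zero row
        lead = nonzero[0]
        if lead <= last_lead:
            return False              # each row must start with strictly more zeros
        if row[lead] != 1:
            return False              # condition 2: the leading entry must be 1
        if any(M[k][lead] != 0 for k in range(len(M)) if k != i):
            return False              # condition 3: rest of the leading column is zero
        last_lead = lead
    return True
```

For instance, the 4x4 example matrix shown earlier satisfies all three conditions, while a matrix whose leading entry is 2 rather than 1 fails condition 2.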
Solving System of linear equations
We will use the inverse of the coefficient matrix. But first we must check that this inverse exists! The conditions for the existence of the inverse of the coefficient matrix are the same as those for using Cramer’s rule, that is
1. The system must have the same number of equations as variables, that is, the coefficient matrix of the system must be square.
2. The determinant of the coefficient matrix must be non-zero. The reason, of course, is that the inverse of a matrix exists precisely when its determinant is non-zero.
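Under these two conditions the solution is x = A^(-1) b. A minimal pure-Python sketch for a 2x2 system (the particular system and its values are illustrative):

```python
def inverse_2x2(A):
    """Invert a 2x2 matrix via the adjugate formula; fails if det = 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("coefficient matrix is singular; inverse does not exist")
    return [[d / det, -b / det],
            [-c / det, a / det]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Solve  2x + y = 5,  x - y = 1   written as  A [x, y] = b.
A = [[2.0, 1.0],
     [1.0, -1.0]]
b = [5.0, 1.0]
x = matvec(inverse_2x2(A), b)         # x = A^(-1) b, approximately [2, 1]
```

The determinant test in inverse_2x2 is exactly condition 2 above: the inverse exists, and the method applies, precisely when the determinant is non-zero.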