Lec-2: Errors in Numerical Computing
Approximations and errors are an integral part of human life; they are everywhere and
unavoidable. In numerical methods, too, we cannot ignore the existence of errors.
Errors come in a variety of forms and sizes; some are avoidable, some are not. For
example, data conversion and round-off errors cannot be avoided, but human errors can be
eliminated. Although certain errors cannot be eliminated completely, we must at least
know their bounds in order to make use of our final solution. It is therefore essential to
know how errors arise, how they grow during the numerical process, and how they affect
the accuracy of a solution.
Exact Numbers
2, 1/3, 100, etc. are exact numbers because there is no approximation or uncertainty
associated with them. π, √2, etc. are also exact numbers when written in this form.
Approximate Numbers
An approximate number is a number that is used as an approximation to an exact
number and differs only slightly from the exact number for which it stands. For example,
an approximate value of π is 3.14 or, if we desire a better approximation, 3.14159265.
But we can never write down the exact value of π.
Rounding a Number
Rounding approximates a number by a nearby number at a given degree of
accuracy. For example, 3.14159265... rounded to the nearest thousandth is 3.142: the
third digit after the decimal point is the thousandths place, and 3.14159265... is closer
to 3.142 than to 3.141.
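As a quick illustration (my own example, not from the text), Python's built-in round does this kind of rounding to a given number of decimal places:

```python
# Round pi to the nearest thousandth (3 digits after the decimal point).
print(round(3.14159265, 3))   # prints 3.142
```

For values that fall exactly halfway between the two candidates, round follows the banker's rule described next.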
Banker’s Rounding Rule
In Banker's rounding (also known as "Gaussian rounding" or round-half-to-even), a value
lying exactly halfway between two candidates is rounded so that the last retained digit is
even. Banker's rounding is the default method in Delphi, VB.NET and VB6, and it follows
the specification of IEEE Standard 754. The rule is as follows:
To round off a number to n significant digits, check the (n+1)th significant digit and
discard all the digits to the right of the nth significant digit:
a) if it is less than 5, leave the nth significant digit unaltered;
b) if it is greater than 5, add 1 to the nth significant digit;
c) if it is exactly 5, leave the nth significant digit unaltered if it is even,
but increase it by 1 if it is odd.
See the following links:
http://www.cs.umass.edu/~weems/CmpSci535/535lecture6.html
http://www.rit.edu/~meseec/eecc250-winter99/IEEE-754references.html
http://www.gotdotnet.com/Community/MessageBoard/Thread.aspx?id=260335
Example: The following numbers are rounded to 4 significant digits.
1.6583 → 1.658
30.0567 → 30.06
0.859458 → 0.8594
3.14159 → 3.142
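The rule above can be coded directly. The sketch below is my own illustration (the function name round_sig and the use of Python's decimal module are not from the text); it reproduces the four results listed in the example by inspecting only the (n+1)th significant digit, exactly as the rule states:

```python
from decimal import Decimal, ROUND_DOWN

def round_sig(value: str, n: int) -> Decimal:
    """Round `value` to n significant digits using the rule stated above:
    look only at the (n+1)th significant digit; on an exact 5, leave an
    even nth digit unaltered and add 1 to an odd one."""
    d = Decimal(value)
    shift = n - 1 - d.adjusted()          # puts the nth significant digit in the units place
    shifted = d.scaleb(shift)             # e.g. 0.859458 -> 8594.58 for n = 4
    kept = shifted.to_integral_value(rounding=ROUND_DOWN)  # first n significant digits
    next_digit = int(shifted.scaleb(1) % 10)               # the (n+1)th significant digit
    if next_digit > 5 or (next_digit == 5 and int(kept) % 2 == 1):
        kept += 1
    return kept.scaleb(-shift)

for x in ["1.6583", "30.0567", "0.859458", "3.14159"]:
    print(x, "->", round_sig(x, 4))
# 1.6583 -> 1.658, 30.0567 -> 30.06, 0.859458 -> 0.8594, 3.14159 -> 3.142
```

Note that the decimal module's own ROUND_HALF_EVEN mode looks at all of the discarded digits, so it would give 0.8595 for the third entry; the sketch deliberately follows the simpler statement of the rule given above, which checks the (n+1)th digit only.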
Sources of Errors
A number of different types of errors arise during the process of numerical computing,
and all of them contribute to the total error in the final result. A taxonomy of errors
encountered in a numerical process is shown in Fig-4.1 [page-61, Balagurusamy]; every
stage of the numerical computing cycle contributes to the total error.
1. Inherent Errors: Inherent errors (also known as input errors) are those that are
present in the data supplied to the model. They have two components, namely data errors
and conversion errors.
a) Data Errors: Data errors (also known as empirical errors) arise when the data for a
problem are obtained by some experimental means and are therefore of limited
accuracy and precision. This may be due to limitations in instrumentation and
reading, and may therefore be unavoidable.
b) Conversion Errors: Conversion errors (also known as representation errors) arise
from the inability of the computer to store the data exactly. We know that the
floating point representation retains only a specified number of digits. The digits that
are not retained constitute the roundoff error.
Example 4.3: Page-64, Balagurusamy.
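A familiar instance of a conversion error (my own illustration, not Example 4.3 itself): the decimal value 0.1 has no exact binary floating point representation, so an error is introduced the moment it is stored:

```python
from decimal import Decimal

# The stored double-precision value nearest to 0.1, written out exactly:
print(Decimal(0.1))      # 0.1000000000000000055511151231257827021181583404541015625

# The digits lost in conversion show up as a small residual in arithmetic:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2 - 0.3)   # about 5.6e-17
```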
3. Modelling Errors: Modelling errors arise due to certain simplifying assumptions in the
formulation of mathematical models. For example, while developing a model for
calculating the force acting on a falling body, we may not be able to estimate the air
resistance coefficient properly or determine the direction and magnitude of the wind force
acting on the body, and so on. To simplify the model, we may assume that the force due
to air resistance is linearly proportional to the velocity of the falling body, or we may
assume that there is no wind force acting on the body. All such simplifications certainly
result in errors in the output of the model; these errors are called modelling errors.
We can reduce modelling errors by refining or enlarging the models, that is, by
incorporating more features. But the enhancement may make the model more difficult to
solve or may take more time to implement in the solution process. It is also not always
true that an enhanced model will provide better results. On the other hand, an
oversimplified model may produce a result that is unacceptable. It is therefore necessary
to strike a balance between the level of accuracy and the complexity of the model.
4. Blunders: Blunders are errors caused by human imperfection. Since these
errors are due to human mistakes, it should be possible to avoid them to a large extent by
acquiring a sound knowledge of all aspects of the problem as well as of the numerical
process. Some common blunders are:
1. lack of understanding of the problem
2. wrong assumptions
3. selecting a wrong numerical method for solving the mathematical model
4. making mistakes in the computer program
5. mistakes in data input
6. wrong guessing of initial values
Machine Epsilon:
The round-off error introduced in a number when it is represented in floating point form
is given by
    Chopping error = g × 10^(E−d),   0 ≤ g < 1
where g represents the truncated part of the number in normalized form, d is the number
of digits permitted in the mantissa, and E is the exponent.
The absolute relative error due to chopping is then given by
    E_r = | (g × 10^(E−d)) / (f × 10^E) |
where f is the mantissa of the number in normalized form (x = f × 10^E).
The relative error is maximum when g is maximum and f is minimum. We know that the
maximum possible value of g is less than 1.0 and the minimum possible value of f is 0.1. The
absolute value of the relative error therefore satisfies
    E_r ≤ | (1.0 × 10^(E−d)) / (0.1 × 10^E) | = 10^(−d+1)
The maximum relative error given above is known as machine epsilon. The name
“machine” indicates that this value is machine dependent, which is true because the
length of the mantissa, d, is machine dependent.
For a decimal machine that uses chopping,
    Machine epsilon ε = 10^(−d+1)
Similarly, for a machine that uses symmetric round-off,
    E_r ≤ | (0.5 × 10^(E−d)) / (0.1 × 10^E) | = ½ × 10^(−d+1)
and therefore
    Machine epsilon ε = ½ × 10^(−d+1)
It is important to note that the machine epsilon represents an upper bound on the round-off
error due to floating point representation. It also tells us that data can be represented in
the machine with d significant decimal digits and that the relative error does not depend in
any way on the size of the number.
More generally, for a number x represented in a computer,
    Absolute error bound = |x| × ε
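A small numerical experiment (my own sketch, not from the text; the mantissa length d = 6 is an arbitrary choice) illustrates both claims for a decimal machine that chops: the relative error never exceeds ε = 10^(−d+1), so the absolute error never exceeds |x| × ε, regardless of the size of x:

```python
import random
from decimal import Decimal, ROUND_DOWN

def chop(x: float, d: int) -> float:
    """Keep only d significant decimal digits of x by chopping (truncation)."""
    dec = Decimal(repr(x))
    quantum = Decimal(1).scaleb(dec.adjusted() - d + 1)  # place value of the d-th digit
    return float(dec.quantize(quantum, rounding=ROUND_DOWN))

d = 6                               # assumed mantissa length in decimal digits
eps = 10.0 ** (-d + 1)              # machine epsilon for chopping
worst = 0.0
for _ in range(100_000):
    x = random.uniform(1e-6, 1e6)   # numbers of widely varying size
    abs_err = abs(x - chop(x, d))
    assert abs_err <= abs(x) * eps  # absolute error bound |x| * epsilon
    worst = max(worst, abs_err / abs(x))

print(f"worst observed relative error = {worst:.3e}")
print(f"machine epsilon (chopping)    = {eps:.3e}")
```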
For a computer system with binary representation, the machine epsilon is given by
    Chopping:            Machine epsilon ε = 2^(−d+1)
    Symmetric rounding:  Machine epsilon ε = 2^(−d)
Here we have simply replaced the base 10 by base 2, where d indicates the length of the
binary mantissa in bits.
We may generalize the expression for machine epsilon for a machine that uses base b
with a d-digit mantissa as follows:
    Chopping:            Machine epsilon ε = b × b^(−d)
    Symmetric rounding:  Machine epsilon ε = (b/2) × b^(−d)
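The binary formulas can be checked against IEEE-754 double precision, the format Python floats use (a sketch of my own, not from the text). Here d is sys.float_info.mant_dig, the length of the binary mantissa in bits:

```python
import sys

d = sys.float_info.mant_dig        # 53 bits for IEEE-754 double precision
eps_chop = 2.0 ** (-d + 1)         # machine epsilon for chopping
eps_round = 2.0 ** (-d)            # machine epsilon for symmetric rounding

print("mantissa length d in bits     :", d)
print("epsilon, chopping   2^(-d+1)  :", eps_chop)
print("epsilon, rounding   2^(-d)    :", eps_round)
print("sys.float_info.epsilon        :", sys.float_info.epsilon)

# sys.float_info.epsilon is the gap between 1.0 and the next representable
# float, which equals 2^(-d+1); the round-to-nearest bound 2^(-d) is half
# of that, since rounding can be off by at most half a gap.
assert sys.float_info.epsilon == eps_chop
```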
Example 4.8: Page-72, Balagurusamy.