
Theory of Probability

Experiment: An experiment is a type of action that generates events.

Examples: Catching fish from a pond, tossing a coin, rolling dice, etc.

A single performance of an experiment is called a test or trial. Experiments generate outcomes (results). An experiment can have more than one possible outcome, while a trial can have only one outcome.

 Probability experiment: An action through which specific results (counts, measurements, or responses) are obtained.

 A probability experiment is a test in which we perform a number of trials to enable us to measure the chance of an event occurring in the future.

 Random experiment: A random experiment is a process which produces outcomes. If we toss a fair coin we may obtain either a head or a tail, so tossing this fair coin is an experiment which can produce two outcomes: head or tail.

Statistical Experiment
A statistical experiment is any process by which an observation or a measurement is made.

Outcome: The result of a single trial in a probability experiment. In other words, the
result of an experiment is called an outcome of the experiment.

Outcome (rolling a die): Roll a 2, {2}

Sample Space: The set of all possible outcomes of a probability experiment.

Sample space (rolling a die): {1, 2, 3, 4, 5, 6}

 An element of the sample space is called a sample point.

Event: An event is one or more outcomes and is any subset of the sample space of an
experiment. Any event which consists of a single outcome in the sample space is called an
elementary or simple event. Events which consist of more than one outcome are called
compound events.

 An event is a specific outcome or type of outcome for a probability experiment.


 An event is any collection of outcomes of an experiment.

Event (rolling a die): Roll an even number, {2, 4, 6}


Independent Events

Two events are independent if the occurrence of one of them has no influence on the occurrence of the other.

In probability theory we say that two events, A and B, are independent if the probability that they both occur is equal to the product of the probabilities of the two individual events, i.e.

P(A ∩ B) = P(A) P(B)

The idea of independence can be extended to more than two events. For example, A, B and C are (mutually) independent if every pair of them is independent and, in addition:

P(A ∩ B ∩ C) = P(A) P(B) P(C)
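The product rule above can be checked by brute-force enumeration of a sample space. A minimal Python sketch (the two dice events A and B are hypothetical choices for illustration, not taken from the text):

```python
from fractions import Fraction
from itertools import product

# Sample space for two fair dice: 36 equally likely outcomes.
space = list(product(range(1, 7), repeat=2))
n = len(space)

# Hypothetical events for illustration:
# A = "first die shows an even number", B = "second die shows a 6".
A = {s for s in space if s[0] % 2 == 0}
B = {s for s in space if s[1] == 6}

p_a  = Fraction(len(A), n)        # 1/2
p_b  = Fraction(len(B), n)        # 1/6
p_ab = Fraction(len(A & B), n)    # 1/12

# Independence check: P(A ∩ B) == P(A) P(B)
print(p_ab == p_a * p_b)          # True
```

Exact fractions are used instead of floats so the equality test is not disturbed by rounding.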

Dependent Events

Events are dependent when the outcome of one event depends directly on the outcome of another event.

Mutually Exclusive Events

Two events are mutually exclusive (or disjoint) if it is impossible for them to occur
together.

Formally, two events A and B are mutually exclusive if and only if

A ∩ B = ∅, so that P(A ∩ B) = 0.

In tossing a coin, the events head and tail are mutually exclusive.

Not-mutually Exclusive Events

Events are said to be not-mutually exclusive if the occurrence of one of them does not exclude the occurrence of the others, i.e., if two or more of them can happen simultaneously in the same trial.

For example,

Suppose we wish to find the probability of drawing either a king or a spade in a single draw from a pack of 52 playing cards.
We define the events A = 'draw a king' and B = 'draw a spade'.

Since there are 4 kings in the pack and 13 spades, but 1 card is both a king and a spade, the two events can happen in the same trial, so they are not mutually exclusive.
Equally Likely Events

Two events are said to be equally likely when each of them has an equal chance to occur.

Exhaustive Events

The total number of possible outcomes in any trial is known as the exhaustive events or exhaustive cases.

Favorable Events

The number of outcomes or cases which entail the happening of an event in a trial is called the favorable events or cases.

Conditional Probability

The usual notation for "event A occurs given that event B has occurred" is "A | B" (A given B). The symbol | is a vertical line and does not imply division. P(A | B) denotes the probability that event A will occur given that event B has occurred already.

The conditional probability of the event A, given that B has happened, is:

P(A | B) = P(A ∩ B) / P(B),   provided P(B) > 0

Where:
P(A | B) = the (conditional) probability that event A will occur given that event B has occurred already
P(A ∩ B) = the (unconditional) probability that event A and event B both occur
P(B) = the (unconditional) probability that event B occurs
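The formula can be applied directly by counting outcomes. A small Python sketch using a single die roll (the event choices are assumptions for illustration):

```python
from fractions import Fraction

# Sample space: one roll of a fair die.
space = {1, 2, 3, 4, 5, 6}
A = {2}            # event "roll a 2"
B = {2, 4, 6}      # event "roll an even number"

p_b  = Fraction(len(B), len(space))       # P(B) = 1/2
p_ab = Fraction(len(A & B), len(space))   # P(A ∩ B) = 1/6

# Conditional probability: P(A | B) = P(A ∩ B) / P(B)
p_a_given_b = p_ab / p_b
print(p_a_given_b)   # 1/3
```

Intuitively: once we know the roll is even, only 3 outcomes remain, and one of them is a 2.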

Definition of Probability

Probability means the chance of occurrence of an event. It is a numerical value lying between 0 and 1. When an event is impossible, its probability is 0; when an event is sure, its probability is 1; all other events have a probability between 0 and 1.

There are mainly two definitions of probability:

1. Mathematical or classical definition of probability.
2. Empirical or statistical definition of probability.

Mathematical or classical definition of probability

Let a random experiment produce only a finite number of mutually exclusive and equally likely outcomes. Then the probability of an event A is defined as

P(A) = (number of outcomes favorable to A) / (total number of possible outcomes)

Theoretical probability is also known as Classical or A Priori probability.


It is important to note that this method for calculating probabilities is appropriate only
when the outcomes of an experiment are equally likely. Further, both the event A and the
sample space S must have a finite number of outcomes.

Example:
Consider the experiment of tossing a coin. What is the probability of getting a tail?
Solution:
We know that the probability of an event A is given by
P(A) = (number of outcomes favorable to A) / (total number of outcomes)

The sample space of the experiment of tossing a coin is
S = {Head, Tail}
Number of outcomes favorable to tail = 1
Total number of outcomes = 2
So P(getting a tail) = 1/2
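The classical formula is easy to turn into a small counting helper. A minimal Python sketch (the function name `classical_probability` is a hypothetical choice, not from the text):

```python
from fractions import Fraction

def classical_probability(event, sample_space):
    """Classical definition: favorable outcomes / total outcomes,
    valid only when all outcomes are equally likely."""
    return Fraction(len(event & sample_space), len(sample_space))

# Coin example from the text:
coin = {"Head", "Tail"}
print(classical_probability({"Tail"}, coin))     # 1/2

# Die example: P(roll an even number)
die = {1, 2, 3, 4, 5, 6}
print(classical_probability({2, 4, 6}, die))     # 1/2
```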

Empirical definition or Statistical definition of probability

Let A be an event of a random experiment. Let the experiment be repeated n times, out of which A occurs f times. Then f/n is called the frequency ratio. The limiting value of the frequency ratio as the number of repetitions becomes infinitely large is called the probability of the event A,

i.e.  P(A) = (frequency of the event A) / (total frequency) = lim (n → ∞) f/n

Relative Frequency Definition of Probability

The probability of an event E, denoted by P(E), is defined to be the relative frequency of occurrence of E in a very large number of trials of a random experiment.

If an experiment is repeated n times, and event E occurs m times, then the relative frequency of the event E is defined to be
rfn(E) = m/n
The probability of the event can be defined as the limiting value of the relative frequency:
P(E) = lim (n → ∞) rfn(E)

Example
Experiment: Tossing a fair coin 50 times (n = 50)
Event E = 'heads'
Result: 30 heads, 20 tails, so m = 30
Relative frequency: rfn(E) = m/n = 30/50 = 3/5 = 0.6

If an experiment is repeated many, many times without changing the experimental conditions, the relative frequency of any particular event will settle down to some value.

For example, in the above experiment, the relative frequency of the event 'heads' will settle down to a value of approximately 0.5 if the experiment is repeated many more times.
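This settling-down behaviour can be observed by simulation. A rough Python sketch (the seed and trial counts are arbitrary choices; exact printed values will vary with the seed):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def relative_frequency_heads(n):
    """Toss a fair coin n times; return the relative frequency of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

# The relative frequency approaches 0.5 as n grows.
for n in (50, 1000, 100_000):
    print(n, relative_frequency_heads(n))
```

At n = 50 the relative frequency may be as far from 0.5 as the 0.6 seen in the worked example; at n = 100,000 it is very close to 0.5.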

Properties of Probability

1. The probability of an event is always greater than or equal to 0.
2. The probability of an event lies between 0 and 1.
3. The probability of an impossible event is 0. That is, P(Φ) = 0.
4. The probability of a sure event is 1. That is, P(S) = 1.

Example 4: A pond contains 3 types of fish: bluegills, redgills, and crappies. Each fish in the pond is equally likely to be caught. You catch 40 fish and record the type. Each time, you release the fish back into the pond. The following frequency distribution shows your results.

Fish type    No. of times caught (f)
Bluegill     13
Redgill      17
Crappie      10
Total        f = 40

If you catch another fish, what is the probability that it is a bluegill?

The event is "catching a bluegill." In your experiment, the frequency of this event is 13. Because the total of the frequencies is 40, the empirical probability of catching a bluegill is:
P(bluegill) = 13/40 = 0.325

As you increase the number of times a probability experiment is repeated, the empirical
probability (relative frequency) of an event approaches the theoretical probability of the
event. This is known as the law of large numbers.
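The empirical probability from Example 4 can be computed directly from the frequency table. A short Python sketch:

```python
# Frequencies from Example 4's table.
caught = {"Bluegill": 13, "Redgill": 17, "Crappie": 10}
total = sum(caught.values())     # 40

# Empirical probability = frequency of the event / total frequency.
p_bluegill = caught["Bluegill"] / total
print(p_bluegill)                # 0.325
```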

The complement of Event E, is the set of all outcomes in a sample space that are not
included in event E. The complement of event E is denoted by E’ and is read as “E
prime.”

For instance, you roll a die and let E be the event “the number is at least 5,” then the
complement of E is the event “the number is less than 5.” In other words, E = {5, 6} and
E’ = {1, 2, 3, 4}

Using the definition of the complement of an event and the fact that the sum of the
probabilities of all outcomes is 1, you can determine the following formulas:

P (E) + P (E’) = 1
P (E) = 1 – P (E’)
P (E’) = 1 – P (E)
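The complement formulas can be verified on the die example above. A short Python sketch:

```python
from fractions import Fraction

space = {1, 2, 3, 4, 5, 6}
E = {5, 6}                  # "the number is at least 5"
E_prime = space - E         # complement: {1, 2, 3, 4}

p_e = Fraction(len(E), len(space))             # 1/3
p_e_prime = Fraction(len(E_prime), len(space)) # 2/3

print(p_e + p_e_prime == 1)   # True: P(E) + P(E') = 1
print(1 - p_e)                # 2/3, i.e. P(E') = 1 - P(E)
```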

Example: Use the frequency distribution in Example 4 to find the probability that a fish
that is caught is not a redgill.
A. Find the probability that the fish is a redgill.
17/40 = .425

B. Subtract the resulting probability from 1: 1 − .425 = .575

C. State the probability as a fraction and a decimal.

23/40 = .575


Addition Rule of Probability

If A and B are two not-mutually exclusive events, then the probability that at least one of them happens is given by

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Where:
P(A) = probability that event A occurs
P(B) = probability that event B occurs
P(A ∪ B) = probability that event A or event B occurs
P(A ∩ B) = probability that event A and event B both occur

For mutually exclusive events, that is events which cannot occur together:
P(A ∩ B) = 0
The addition rule therefore reduces to
P(A ∪ B) = P(A) + P(B)

For independent events, that is events which have no influence on each other:
P(A ∩ B) = P(A) P(B)
The addition rule therefore reduces to
P(A ∪ B) = P(A) + P(B) − P(A) P(B)
Example
Suppose we wish to find the probability of drawing either a king or a spade in a single
draw from a pack of 52 playing cards.
We define the events A = 'draw a king' and B = 'draw a spade'
Since there are 4 kings in the pack and 13 spades, but 1 card is both a king and a spade, we have:
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 4/52 + 13/52 − 1/52 = 16/52
So, the probability of drawing either a king or a spade is 16/52 (= 4/13).
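The computation can be reproduced by enumerating a full deck. A minimal Python sketch (the rank and suit labels are assumptions for illustration):

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = list(product(ranks, suits))            # 52 (rank, suit) cards

A = {c for c in deck if c[0] == "K"}          # 4 kings
B = {c for c in deck if c[1] == "spades"}     # 13 spades

# |A ∪ B| = 4 + 13 - 1 = 16 (the king of spades is counted once)
p_union = Fraction(len(A | B), len(deck))
print(p_union)                                # 4/13  (= 16/52)
```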

Multiplication Rule

The probability of the joint occurrence of two events A and B is equal to the probability of A multiplied by the conditional probability of B given that A has occurred.
In symbols,

P(A ∩ B) = P(A) · P(B | A) = P(B) · P(A | B)

Where:
P(A) = probability that event A occurs
P(B) = probability that event B occurs
P(A ∩ B) = probability that event A and event B both occur
P(A | B) = the conditional probability that event A occurs given that event B has occurred already
P(B | A) = the conditional probability that event B occurs given that event A has occurred already

For independent events, that is events which have no influence on one another, the rule simplifies to:

P(A ∩ B) = P(A) P(B)

That is, the probability of the joint occurrence of events A and B is equal to the product of the individual probabilities for the two events.

Example: Suppose a pond has only three types of fish: catfish, trout, and bass, in the ratio 5:2:3. There are 50 fish in total. Assume you are not allowed to keep any of the fish you catch and must throw each one back before you catch your next fish. i) Determine the probability of catching three consecutive trout. ii) Find the probability of catching a trout and then a catfish.

Solution:
There are 25 catfish, 10 trout and 15 bass.

i) Let A = event that the first fish is a trout
   B = event that the second fish is a trout
   C = event that the third fish is a trout

Ten out of 50 fish are trout; therefore, the probability of catching a trout first is P(A) = 10/50 = 1/5. Since you throw the trout back in the lake, 10 out of 50 fish are again trout, so the probability that the second fish caught is a trout is P(B) = 10/50 = 1/5.
Similarly, P(C) = 10/50 = 1/5.
Since the three events are independent, we have P(A ∩ B ∩ C) = P(A) P(B) P(C) = 1/125.
Therefore, the probability of catching three trout is 1/125.

ii) Let A = event that the first fish is a trout
    B = event that the second fish is a catfish

Ten out of 50 fish are trout; therefore, the probability of catching a trout first is P(A) = 10/50 = 1/5. Since you throw the trout back in the lake, 25 out of 50 fish are catfish, so the probability that the second fish caught is a catfish is P(B) = 25/50 = 1/2.
Since the two events are independent, we have P(A ∩ B) = P(A) P(B) = 1/10.
Therefore, the probability of catching a trout and then a catfish is 1/10.
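Both parts can be checked with exact fractions. A short Python sketch:

```python
from fractions import Fraction

# With replacement, the probabilities are the same on every catch.
p_trout   = Fraction(10, 50)   # 1/5
p_catfish = Fraction(25, 50)   # 1/2

# i) three consecutive trout: independent catches
print(p_trout ** 3)            # 1/125

# ii) a trout, then a catfish: independent catches
print(p_trout * p_catfish)     # 1/10
```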

Example: Suppose a pond has only three types of fish: catfish, trout, and bass, in the ratio 5:2:3. There are 50 fish in total. Assume you are allowed to catch only three fish. Find the probability of catching three successive trout if you do not throw them back after each catch.

Solution:
There are 25 catfish, 10 trout and 15 bass.

Let A = event that the first fish is a trout
    B = event that the second fish is a trout
    C = event that the third fish is a trout

Ten out of 50 fish are trout; therefore, the probability of catching a trout first is P(A) = 10/50 = 1/5. Since you do not throw the trout back in the lake, 9 of the 49 remaining fish are trout, so the probability of catching a trout second is P(B | A) = 9/49. Eight of the 48 remaining fish are trout, so the probability of catching a trout third is P(C | A ∩ B) = 8/48 = 1/6.
Since the three events are dependent, we have P(A ∩ B ∩ C) = P(A) P(B | A) P(C | A ∩ B) = (1/5)(9/49)(1/6) = 3/490.
Therefore, the probability of catching three successive trout is 3/490.
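The product of the three conditional probabilities can be verified with exact fractions. A short Python sketch:

```python
from fractions import Fraction

# Without replacement, the counts shrink after each catch:
# 10 trout of 50 fish, then 9 of 49, then 8 of 48.
p = Fraction(10, 50) * Fraction(9, 49) * Fraction(8, 48)
print(p)   # 3/490
```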

Example: Suppose a pond has only three types of fish: catfish, trout, and bass, in the ratio 5:2:3. There are 50 fish in total. A fish is caught at random and found to be a trout. This fish is preserved (not returned) and another fish is caught. What is the probability that the second caught fish is a trout?

Example: Suppose a pond has only three types of fish: catfish, trout, and bass, in the ratio 5:2:3. There are 50 fish in total. A fish is caught at random. What is the probability that it will be either a catfish or a bass?
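One way the two exercises above might be checked in Python (these answers are my working under the stated counts, not given in the text; the first uses the multiplication rule without replacement, the second the addition rule for mutually exclusive events):

```python
from fractions import Fraction

# Exercise 1: a trout is caught and kept, so 9 trout remain among 49 fish.
p_second_trout = Fraction(9, 49)
print(p_second_trout)    # 9/49

# Exercise 2: "catfish" and "bass" are mutually exclusive, so add.
p_cat_or_bass = Fraction(25, 50) + Fraction(15, 50)
print(p_cat_or_bass)     # 4/5
```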
