Estimation Theory MCQ

This document contains 45 multiple choice questions related to statistical concepts such as point estimation, properties of estimators, sufficient statistics, and regression analysis. Key concepts covered include unbiasedness, efficiency, consistency, maximum likelihood estimation, and the normal equations used in least squares regression. The questions test understanding of fundamental statistical topics like properties of estimators, sufficient statistics, and the relationship between correlation and regression coefficients.

1. The criteria for checking whether a point estimator is good are

Degrees of Freedom

The t-ratio

Standard Error of the Means

All of the Above

2. A quantity obtained by applying a certain rule or formula is known as

Sample

Test Statistic

Estimate

Estimator

3. Consistency of an estimator can be checked by comparing

Mean

Mean Square

Variance

Standard Deviation

4. If Var(T2)<Var(T1), then T2 is

Unbiased

Efficient

Sufficient

Consistent

5. Let X1, X2, ⋯, Xn be a random sample from a density f(x|θ), where θ is a value of the random
variable Θ with known density gΘ(θ). Then the estimator of τ(θ) with respect to the prior gΘ(θ),
defined as E[τ(Θ) | X1, X2, ⋯, Xn], is called

Minimax Estimator

Posterior Bayes Estimator

Bayes Estimator

Sufficient Estimator
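
The Bayes estimator in Q5 (the posterior mean under squared-error loss) can be sketched numerically. The Beta-Binomial setup, the function name `bayes_estimate_p`, and the prior parameters below are illustrative assumptions, not part of the original question:

```python
# Hypothetical illustration: Bayes estimator of a Bernoulli parameter p
# under squared-error loss, with an assumed Beta(a, b) prior. The Bayes
# estimator is the posterior mean E[p | data] = (a + successes) / (a + b + n).
def bayes_estimate_p(successes, n, a=1.0, b=1.0):
    return (a + successes) / (a + b + n)

# With a uniform Beta(1, 1) prior and 7 successes in 10 trials:
print(bayes_estimate_p(7, 10))  # (1 + 7) / (1 + 1 + 10) = 8/12
```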

6. A set of jointly sufficient statistics is defined to be minimal sufficient if and only if

It is a function of every other set of sufficient statistics

It is not a function of every other set of sufficient statistics


It is a function of some other set of sufficient statistics

It is a function of any sufficient statistics in the set

7. Let Z1, Z2, ⋯ be independently and identically distributed random variables satisfying
E[|Zi|] < ∞. Let N be an integer-valued random variable whose value n depends only on the values of
the first n Zi's. Suppose E(N) < ∞; then E(Z1 + Z2 + ⋯ + ZN) = E(N)E(Zi) is called

Independence Equation

Neyman Pearson Lemma

Sequential Probability Likelihood Equation

Wald’s Equation
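
Wald's equation in Q7 can be checked by simulation. This is a minimal sketch assuming the simple special case where N is drawn independently of the Z's, with distributions chosen for convenience:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch of Wald's equation E(Z1 + ... + ZN) = E(N)·E(Z), in the
# special case where N is independent of the Z's.
# Z ~ Exponential(1), so E(Z) = 1; N ~ Geometric(p = 0.2), so E(N) = 5.
trials = 100000
ns = rng.geometric(0.2, trials)                          # one N per trial
sums = np.array([rng.exponential(1.0, n).sum() for n in ns])
print(sums.mean())  # should be close to E(N)·E(Z) = 5 * 1 = 5
```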

8. In statistical inference, the best asymptotically normal estimator is denoted by

BAN

CANE

BANE

A) and B)

9. If f(x1, x2, ⋯, xn; θ) = g(θ̂; θ) · h(x1, x2, ⋯, xn), then θ̂ is

Unbiased

Efficient

Sufficient

Consistent

10. If Var(θ̂) → 0 as n → ∞, then θ̂ is said to be

Unbiased

Sufficient

Efficient

Consistent

11. If T = t(X1, X2, ⋯, Xn) is an unbiased estimator of τ(θ), then the inequality giving a lower bound on Var(T) is called

Cauchy-Schwarz Inequality

Boole's Inequality

Chebyshev's Inequality

Cramér-Rao Inequality
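
The bound in Q11 can be illustrated for a case where it is attained. This is a sketch assuming N(μ, σ²) data with σ known, where the Fisher information of the sample is I(μ) = n/σ² and the sample mean is unbiased with variance exactly σ²/n:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the Cramér-Rao lower bound for the mean μ of N(μ, σ²) data with
# σ known: every unbiased estimator satisfies Var ≥ σ²/n, and the sample
# mean attains this bound.
mu, sigma, n, reps = 3.0, 2.0, 25, 20000
means = rng.normal(mu, sigma, (reps, n)).mean(axis=1)
empirical_var = means.var()
bound = sigma ** 2 / n  # Cramér-Rao lower bound = 4/25 = 0.16
print(empirical_var, bound)  # the empirical variance of x̄ ≈ the bound
```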


12. If Var(T2)<Var(T1), then T2 is

Unbiased

Efficient

Sufficient

Consistent

13. If E(θ̂) = θ, then θ̂ is said to be

Unbiased

Sufficient

Efficient

Consistent

14. Let X1, X2, ⋯, Xn be a random sample from the density f(x; θ), where θ may be a vector. If the
conditional distribution of X1, X2, ⋯, Xn given S = s does not depend on θ for any value s of S, then
the statistic S is called

Minimax Statistics

Efficient

Sufficient Statistic

Minimal Sufficient Statistic

15. If the conditional distribution of X1, X2, ⋯, Xn given S = s does not depend on θ for any value of
S = s, the statistic S = s(X1, X2, ⋯, Xn) is called

Unbiased

Consistent

Sufficient

Efficient

16. If f(x1, x2, ⋯, xn; θ) is the joint density of n random variables X1, X2, ⋯, Xn, considered as a
function of θ, then L(θ; x1, x2, ⋯, xn) = f(x1, x2, ⋯, xn; θ) is called

Maximum Likelihood function

Likelihood Function

Log Function

Marginal Function

17. For a biased estimator θ̂ of θ, which one is correct?

MSE(θ̂) = SD(θ̂) + Bias

MSE(θ̂) = Var(θ̂) + Bias²

MSE(θ̂) = Var(θ̂) + Bias

MSE(θ̂) = SD(θ̂) + Bias²
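
The decomposition in Q17 can be verified numerically. This sketch assumes the classic biased divide-by-n variance estimator as θ̂; the identity MSE = Var + Bias² then holds exactly for the empirical moments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Numerical check of MSE(θ̂) = Var(θ̂) + Bias²: estimate σ² = 1 of a N(0, 1)
# population with the biased divide-by-n variance estimator (ddof=0), whose
# expectation is (n−1)/n · σ² = 0.95 here.
theta = 1.0                                            # true variance
estimates = rng.normal(0, 1, (10000, 20)).var(axis=1)  # 10000 biased estimates
mse = np.mean((estimates - theta) ** 2)
decomposition = estimates.var() + (estimates.mean() - theta) ** 2
print(np.isclose(mse, decomposition))  # → True: MSE = Var + Bias²
```

The equality is algebraic, not approximate: expanding mean((θ̂ − θ)²) gives exactly the empirical variance plus the squared empirical bias.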

18. The Rao-Blackwell Theorem enables us to obtain a minimum variance unbiased estimator through:

Unbiased Estimators

Complete Statistics

Efficient Statistics

Sufficient Statistics

19. The set of equations obtained in the process of least square estimation are called:

Normal Equations

Intrinsic Equations

Simultaneous Equations

All of the Above

20. Sample median as an estimator of the population mean is always

Unbiased

Efficient

Sufficient

None of These

21. An estimator Tn is said to be a sufficient statistic for a parameter function τ(θ) if it contains all
the information which is contained in the

Population

Parametric Function τ(θ)

Sample

None of these

22. If the sample average x̄ is an estimate of the population mean μ, then x̄ is:

Unbiased and Efficient

Unbiased and Inefficient


Biased and Efficient

Biased and Inefficient

23. By the method of moments one can estimate:

All Constants of a Population

Only Mean and Variance of a Distribution

All Moments of a Population Distribution

All of the Above

24. Parameters are those constants which occur in:

Samples

Probability Density Functions

A Formula

None of these

25. The Cramér-Rao inequality is valid in the case of:

Upper Bound on the Variance

Lower Bound on the Variance

The Asymptotic Variance of an Estimator

None of these

26. For an estimator to be consistent, the unbiasedness of the estimator is:

Necessary

Sufficient

Neither Necessary nor Sufficient

None of these

27. The formula used to estimate a parameter is called

Estimate

Estimation

Estimator

Confidence Interval

28. In point estimation we get

More than one value

A single value
Some arbitrary interval values

None of these

29. The probability that the confidence interval does not contain the population parameter is
denoted by

α

β

1−α

1−β

30. The following statistics are unbiased estimators

Sample mean

Sample proportion

Sample variance

All of these

31. A statistic θ̂ is said to be an unbiased estimator of θ, if

E(θ̂) ≠ θ

E(θ̂) > θ

E(θ̂) = θ

E(θ̂) < θ

32. A specific value calculated from a sample is called

Estimator

Estimate

Estimation

Bias

33. The process of finding the unknown value of a population parameter from sample values by using
a formula is called

Estimator

Estimation

Estimate

Bias

34. The following is an unbiased estimator of the population variance σ²

[The answer options appeared as formula images in the original; the second option is the answer.]

35. A function that is used to estimate a parameter is called

Bias

Estimate

Estimation

Estimator

36. Fit a straight line to the following data.

X: 1 2 3 4 5
Y: 1 2 3 4 5

y = x
y = x + 1
y = 2x
y = 2x + 1

37. Fit a straight line to the following data: (0,7), (5,11), (10,16), (15,20) and (20,26)

y = 0.94x + 6.6
y = 6.6x + 0.94
y = 0.04x + 5.6
y = 5.6x + 0.04
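
Both fits in Q36 and Q37 can be checked with ordinary least squares; a sketch using NumPy's `polyfit`:

```python
import numpy as np

# Q36: the points (1,1) ... (5,5) lie exactly on the line y = x.
s1, i1 = np.polyfit([1, 2, 3, 4, 5], [1, 2, 3, 4, 5], 1)
print(s1, i1)  # slope ≈ 1, intercept ≈ 0, i.e. y = x

# Q37: the points (0,7), (5,11), (10,16), (15,20), (20,26).
s2, i2 = np.polyfit([0, 5, 10, 15, 20], [7, 11, 16, 20, 26], 1)
print(s2, i2)  # slope = 0.94, intercept = 6.6, i.e. y = 0.94x + 6.6
```

For Q37 the normal equations give slope Sxy/Sxx = 235/250 = 0.94 and intercept ȳ − 0.94·x̄ = 16 − 9.4 = 6.6, agreeing with the first option.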

38. If the equation y = ae^(bx) can be written in the linear form Y = A + BX, what are Y, X, A, B?
Y = log y, A = log a, B = b and X = x
Y = y, A = a, B = b and X = x
Y = y, A = a, B = log b and X = log x
Y = log y, A = a, B = log b and X = x
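
The linearization in Q38 can be demonstrated on noiseless data. This sketch assumes example values a = 2, b = 0.5, which the log-transformed fit should recover exactly:

```python
import math
import numpy as np

# Linearizing y = a·e^(bx): taking logs gives log y = log a + b·x,
# i.e. Y = A + BX with Y = log y, A = log a, B = b, X = x.
a_true, b_true = 2.0, 0.5
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = a_true * np.exp(b_true * x)
B, A = np.polyfit(x, np.log(y), 1)   # fit log y against x
a_hat, b_hat = math.exp(A), B
print(a_hat, b_hat)  # ≈ 2.0 and 0.5
```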

39. The quantity E which we minimize in the least squares method is called ____________
Sum of residues
Residues
Error
Sum of errors

40. If the two lines of regression are perpendicular to each other, the relation between the
regression coefficients is

bxy = byx
bxy .byx = 1

bxy + byx = 1

bxy + byx = 0

41. Suppose the correlation coefficient between X and Y is 0.65, and each Y value is divided
by −5. The new correlation coefficient will be
> 0.65
0.65
– 0.65
0
42. If byx and bxy are two regression coefficients, they have
Same sign
Opposite sign
Either same or opposite signs
Nothing can be said
43. If byx>1 then bxy is
Less than 1
Greater than 1
Equal to 1
Equal to 0
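
Questions 40-43 rest on the identity byx · bxy = r². A simulation sketch, assuming an arbitrary linear model for the data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of the identity byx · bxy = r²: the two regression coefficients
# always share the sign of r, and since r² ≤ 1, if |byx| > 1 then |bxy| < 1.
x = rng.normal(0, 1, 1000)
y = 2 * x + rng.normal(0, 1, 1000)       # assumed example relationship
byx = np.cov(x, y, ddof=0)[0, 1] / np.var(x)  # slope of y on x
bxy = np.cov(x, y, ddof=0)[0, 1] / np.var(y)  # slope of x on y
r = np.corrcoef(x, y)[0, 1]
print(np.isclose(byx * bxy, r ** 2))  # → True
```

Matching `ddof=0` in `np.cov` and `np.var` makes the identity exact; the correlation coefficient itself is unaffected by that choice.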
44. If ρ=0, the lines of regression are:

Coincident

Parallel

Perpendicular to each other

None of the above

45. Regression coefficient is independent of

Origin

Scale

Both origin and scale

Neither origin nor scale
