Estimation Theory MCQ
Degrees of Freedom
The t-ratio
Sample
Test Statistics
Estimate
Estimator
Mean
Mean Square
Variance
Standard Deviation
4. If Var(T2)<Var(T1), then T2 is
Unbiased
Efficient
Sufficient
Consistent
5. Let X1,X2,⋯,Xn be a random sample from a density f(x|θ), where θ is a value of the random variable Θ with known density gΘ(θ). Then the estimator of τ(θ) with respect to the prior gΘ(θ), defined as E[τ(Θ)|X1,X2,⋯,Xn], is called
Minimax Estimator
Bayes Estimator
Sufficient Estimator
7. Let Z1,Z2,⋯ be independently and identically distributed random variables satisfying E[|Zi|]<∞. Let N be an integer-valued random variable whose value n depends only on the values of the first n Zi's. Suppose E(N)<∞; then the result E(Z1+Z2+⋯+ZN)=E(N)E(Z1) is called (see the simulation sketch after the options)
Independence Equation
Wald’s Equation
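A minimal Monte Carlo sketch of the identity in question 7, under the simplifying assumption that N is drawn independently of the Zi's (a special case of the stated condition); the particular distributions used here are illustrative choices, not part of the question.

```python
import random

random.seed(0)
trials = 200_000
total = 0.0

for _ in range(trials):
    n = random.randint(1, 10)   # N uniform on {1,...,10}, so E(N) = 5.5
    total += sum(random.expovariate(1.0) for _ in range(n))  # Zi ~ Exp(1), E(Zi) = 1

# The empirical mean of Z1 + ... + ZN should be close to E(N) * E(Zi) = 5.5
print(total / trials)
```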
BAN
CANE
BANE
A) and B)
Unbiased
Efficient
Sufficient
Consistent
10. If Var(θ̂)→0 as n→∞, then θ̂ is said to be
Unbiased
Sufficient
Efficient
Consistent
Boole’s Inequality
Chebyshev’s Inequality
Unbiased
Efficient
Sufficient
Consistent
13. If E(θ̂)=θ, then θ̂ is said to be
Unbiased
Sufficient
Efficient
Consistent
14. Let X1,X2,⋯,Xn be a random sample from the density f(x;θ), where θ may be a vector. If the conditional distribution of X1,X2,⋯,Xn given S=s does not depend on θ for any value s of S, then the statistic S is called
Minimax Statistics
Efficient
Sufficient Statistic
15. If the conditional distribution of X1,X2,⋯,Xn given S=s does not depend on θ for any value of S=s, then the statistic S=s(X1,X2,⋯,Xn) is called
Unbiased
Consistent
Sufficient
Efficient
16. If f(X1,X2,⋯,Xn;θ) is the joint density of the n random variables X1,X2,⋯,Xn, considered as a function of θ, then L(θ; X1,X2,⋯,Xn)=f(X1,X2,⋯,Xn;θ) is called
Likelihood Function
Log Function
Marginal Function
17. For a biased estimator θ̂ of θ, which one is correct? (A short derivation follows the options.)
MSE(θ̂) = SD(θ̂) + Bias
MSE(θ̂) = Var(θ̂) + Bias²
MSE(θ̂) = Var(θ̂) + Bias
MSE(θ̂) = SD(θ̂) + Bias²
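For reference, the standard decomposition behind question 17, writing Bias(θ̂) = E(θ̂) − θ; the cross term drops out because E[θ̂ − E(θ̂)] = 0.

```latex
\begin{aligned}
\operatorname{MSE}(\hat\theta)
  &= E\!\left[(\hat\theta-\theta)^{2}\right]
   = E\!\left[\bigl(\hat\theta-E(\hat\theta)\bigr)^{2}\right]
     + \bigl(E(\hat\theta)-\theta\bigr)^{2} \\
  &= \operatorname{Var}(\hat\theta) + \bigl[\operatorname{Bias}(\hat\theta)\bigr]^{2}.
\end{aligned}
```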
18. The Rao-Blackwell Theorem enables us to obtain a minimum variance unbiased estimator through:
Unbiased Estimators
Complete Statistics
Efficient Statistics
Sufficient Statistics
19. The set of equations obtained in the process of least square estimation are called:
Normal Equations
Intrinsic Equations
Simultaneous Equations
Unbiased
Efficient
Sufficient
None of These
21. An estimator Tn is said to be a sufficient statistic for a parameter function τ(θ) if it contains all the information about τ(θ) which is contained in the
Population
Sample
None of these
22. If the sample average x̄ is an estimate of the population mean μ, then x̄ is:
Samples
A Formula
None of these
None of these
Necessary
Sufficient
None of these
Estimate
Estimation
Estimator
Confidence Interval
A single value
Some arbitrary interval values
None of these
29. The probability that the confidence interval does not contain the population parameter is
denoted by
1−α
1−β
Sample means
Sample proportion
Sample variance
All of these
E(θ̂) ≠ θ
E(θ̂) > θ
E(θ̂) = θ
E(θ̂) < θ
Estimator
Estimate
Estimation
Bias
33. The way of finding the unknown value of a population parameter from the sample values by using a formula is called
Estimator
Estimation
Estimate
Bias
Bias
Estimate
Estimation
Estimator
Y: 1 2 3 4 5
y=x
y=x+1
y=2x
y=2x+1
37. Fit a straight line to the following data: (0,7), (5,11), (10,16), (15,20) and (20,26). (A verification sketch follows the options.)
y = 0.94x + 6.6
y = 6.6x + 0.94
y = 0.04x + 5.6
y = 5.6x + 0.04
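A quick least-squares check of question 37; the variable names below are illustrative, and the formulas b = Sxy/Sxx, a = ȳ − b·x̄ are the usual normal-equation solutions.

```python
# Data from question 37
x = [0, 5, 10, 15, 20]
y = [7, 11, 16, 20, 26]

n = len(x)
mean_x = sum(x) / n                 # 10.0
mean_y = sum(y) / n                 # 16.0

# Corrected sums of products/squares for the normal equations
sxy = sum(xi * yi for xi, yi in zip(x, y)) - n * mean_x * mean_y   # 235.0
sxx = sum(xi * xi for xi in x) - n * mean_x ** 2                   # 250.0

b = sxy / sxx                        # slope     = 0.94
a = mean_y - b * mean_x              # intercept = 6.6
print(f"y = {b:.2f}x + {a:.2f}")     # y = 0.94x + 6.60
```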
38. If the equation y = ae^(bx) can be written in the linear form Y = A + BX, what are Y, X, A and B? (The transformation is shown after the options.)
Y = logy, A = loga, B=b and X=x
Y = y, A = a, B=b and X=x
Y = y, A = a, B=logb and X=logx
Y = logy, A = a, B=logb and X=x
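The usual linearisation behind question 38: taking logarithms of both sides of y = ae^(bx) gives

```latex
\ln y = \ln a + b x
\quad\Longrightarrow\quad
Y = \ln y,\qquad A = \ln a,\qquad B = b,\qquad X = x .
```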
39. The parameter E which we use for the least squares method is called ____________
Sum of residues
Residues
Error
Sum of errors
40. If the two lines of regression are perpendicular to each other, the relation between the regression coefficients is
bxy = byx
bxy .byx = 1
bxy + byx = 1
bxy + byx = 0
41. Suppose the correlation coefficient between X and Y is 0.65. If each Y value is divided by -5, then the new correlation coefficient will be (see the check after the options)
> 0.65
0.65
– 0.65
0
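A small numerical check related to question 41, using made-up data (not from the question): dividing every Y value by a negative constant flips the sign of the correlation coefficient but leaves its magnitude unchanged.

```python
import statistics

def corr(u, v):
    # Pearson correlation using population standard deviations
    mu, mv = statistics.mean(u), statistics.mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)
    return cov / (statistics.pstdev(u) * statistics.pstdev(v))

x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 4, 7]

print(round(corr(x, y), 4))                       # positive r
print(round(corr(x, [yi / -5 for yi in y]), 4))   # same magnitude, negative sign
```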
42. If byx and bxy are two regression coefficients, they have
Same sign
Opposite sign
Either same or opposite signs
Nothing can be said
43. If byx>1 then bxy is
Less than 1
Greater than 1
Equal to 1
Equal to 0
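A standard identity relevant to questions 42 and 43, added here for reference: the two regression coefficients have the same sign, and their product equals the squared correlation coefficient, so

```latex
b_{yx}\, b_{xy} = r^{2} \le 1
\quad\Longrightarrow\quad
b_{yx} > 1 \;\Rightarrow\; b_{xy} = \frac{r^{2}}{b_{yx}} \le \frac{1}{b_{yx}} < 1 .
```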
44. If ρ=0, the lines of regression are:
Coincident
Parallel
Origin
Scale