ECEN615 Fall2020 Lect15
UTC Revisited
Five Bus Example
For the transfer direction w = (2,3), the base case line flows and limits are

$$f^{(0)} = \begin{bmatrix} 42 & 34 & 67 & 118 & 33 & 100 \end{bmatrix}^T \text{ MW}$$

$$f^{max} = \begin{bmatrix} 150 & 400 & 150 & 150 & 150 & 1000 \end{bmatrix}^T \text{ MW}$$

[One-line diagram of the five-bus system: buses One through Five with voltages of 1.040, 1.050, 1.042, 1.042, and 1.044 pu; generation of 200 MW, 260 MW, and 258 MW (slack); loads of 100 MW, 100 MW, and 118 MW; Lines 1 through 6 carrying the flows in f^(0).]
Five Bus Example

$$u_{2,3} = 44.0 \text{ MW}$$
Five Bus Example

$$u_{2,3}^{(1)} = \min_{\ell} \frac{f_\ell^{max} - f_\ell^{(0)} - d_{\ell 2}\, f_2^{(0)}}{d_\ell^{(w)}}$$

The limiting value is line 4:

$$\frac{f_4^{max} - f_4^{(0)} - d_{42}\, f_2^{(0)}}{d_4^{(w)}} = \frac{150 - 118 - 0.4 \times 34}{0.8} = 23.0 \text{ MW}$$
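The minimum-over-lines computation above can be sketched numerically. The flows and limits are the slide's values, but the PTDF vector d below is assumed for illustration (only line 4's 0.8 appears on the slide), so the resulting number is not the slide's 44.0:

```python
import numpy as np

# Sketch of the base-case UTC computation u = min_l (f_l^max - f_l^(0)) / d_l^(w).
# f0 and fmax come from the slide; the PTDFs d are made up for illustration,
# except line 4's 0.8.
f0   = np.array([42.0, 34.0, 67.0, 118.0, 33.0, 100.0])
fmax = np.array([150.0, 400.0, 150.0, 150.0, 150.0, 1000.0])
d    = np.array([0.30, 0.40, 0.20, 0.80, 0.10, 0.50])   # assumed PTDFs

ratios = (fmax - f0) / d                 # headroom scaled by sensitivity
limiting = int(np.argmin(ratios)) + 1    # 1-based line number
u = float(ratios.min())
print(limiting, round(u, 1))             # -> 4 40.0
```

With these assumed PTDFs, line 4 is again the binding constraint, matching the slide's identification of the limiting element.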
Additional Comments
Least Squares and Sparsity
• In many contexts least squares is applied to problems
that are not sparse. For example, using a number of
measurements to optimally determine a few values
– Regression analysis is a common example, in which a line or
other curve is fit to potentially many points
– Each measurement impacts each model value
• In the classic power system application of state
estimation the system is sparse, with measurements
only directly influencing a few states
– Power system analysis classes have tended to focus on
solution methods aimed at sparse systems; we'll consider both
sparse and nonsparse solution methods
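As a concrete instance of the dense (non-sparse) case, a quick line-fit sketch; the data and noise level below are made up:

```python
import numpy as np

# Fit a line y = c0 + c1*t to many noisy points. Every measurement row
# touches both unknowns, so the least squares matrix is fully dense --
# the non-sparse situation described above.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * t + 0.01 * rng.standard_normal(t.size)

A = np.column_stack([np.ones_like(t), t])       # 50 x 2, no zero entries
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # close to [2.0, 0.5]
```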
Least Squares Problem

• Consider $Ax = b$, with $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$:

$$\begin{bmatrix} (a^1)^T \\ (a^2)^T \\ \vdots \\ (a^m)^T \end{bmatrix} x = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$$
Least Squares Solution

Choice of p
Consider the scalar case $A = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}^T$, so each residual is $x - b_i$ (with $b_1 \le b_2 \le b_3$):

(i) $p = 1$: $\|Ax - b\|_1$ is minimized by $x^* = b_2$ (the median)

(ii) $p = 2$: $\|Ax - b\|_2$ is minimized by $x^* = \dfrac{b_1 + b_2 + b_3}{3}$ (the mean)

(iii) $p = \infty$: $\|Ax - b\|_\infty$ is minimized by $x^* = \dfrac{b_1 + b_3}{2}$ (the midpoint)
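These three minimizers (median, mean, midpoint) can be checked numerically by brute force; the vector b below is an arbitrary example:

```python
import numpy as np

# For A = [1, 1, 1]^T the residual vector is (x - b1, x - b2, x - b3).
# Scan candidate x values and find the minimizer for each p-norm.
b = np.array([1.0, 2.0, 6.0])            # example with b1 <= b2 <= b3
xs = np.linspace(0.0, 7.0, 70001)
r = xs[:, None] - b[None, :]             # residuals for every candidate x

x1   = xs[np.argmin(np.abs(r).sum(axis=1))]    # p = 1   -> median = 2.0
x2   = xs[np.argmin((r**2).sum(axis=1))]       # p = 2   -> mean   = 3.0
xinf = xs[np.argmin(np.abs(r).max(axis=1))]    # p = inf -> midpoint = 3.5
print(x1, x2, xinf)
```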
The Least Squares Problem

$$J(x) = \frac{1}{2}\,\|Ax - b\|_2^2 = \frac{1}{2}\sum_{i=1}^{m}\left((a^i)^T x - b_i\right)^2$$
The Least Squares Problem, cont.

– Second, the Euclidean norm is preserved under orthogonal transformations:

$$\left\|\left(Q^T A\right) x - Q^T b\right\|_2 = \|Ax - b\|_2$$
The Least Squares Problem, cont.

$$A^T A x = A^T b$$
Proof of Fact

$$0 = \frac{\partial J(x)}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{2}\left(x^T A^T A x - x^T A^T b - b^T A x + b^T b\right)\right]$$

$$= \frac{\partial}{\partial x}\left[\frac{1}{2}\left(x^T A^T A x - 2\, x^T A^T b + b^T b\right)\right]$$

$$= A^T A x - A^T b$$
Implications

[Worked numerical example applying the pseudo-inverse; the slide shows the row (0.143, −0.071, −0.143, −0.071, 0.143) together with the values 5, 4, −0.2, 3.1, and −0.5.]
A Least Squares Solution Algorithm
Step 1: Compute the lower triangular part of $A^T A$
Step 2: Obtain the Cholesky factorization $A^T A = G^T G$
Step 3: Compute $\hat{b} = A^T b$
Step 4: Solve $G^T y = \hat{b}$ for $y$ using forward substitution, then solve $G x = y$ for $x$ using backward substitution
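A minimal sketch of the four steps with NumPy/SciPy and a small made-up A and b. Note that np.linalg.cholesky returns the lower triangular factor L with $A^TA = LL^T$, so L plays the role of $G^T$:

```python
import numpy as np
from scipy.linalg import solve_triangular

# Example tall system (values are made up for illustration)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

AtA  = A.T @ A                     # Step 1: form A^T A
L    = np.linalg.cholesky(AtA)     # Step 2: A^T A = L L^T = G^T G
bhat = A.T @ b                     # Step 3: b_hat = A^T b
y = solve_triangular(L, bhat, lower=True)      # Step 4a: forward substitution
x = solve_triangular(L.T, y, lower=False)      # Step 4b: backward substitution
print(x)
```

The result matches np.linalg.lstsq on the same system; the Cholesky route is what makes the sparse case attractive, since G inherits much of the sparsity of A^T A.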
Practical Considerations

Loss of Sparsity Example

Numerical Conditioning Example
Numerical Conditioning
$$\|B\|_2 = \max_i \sqrt{\lambda_i}\,, \quad \text{where } \lambda_i \text{ is an eigenvalue of } B^T B,$$

i.e., $\lambda_i$ is a root of the polynomial $p(\lambda) = \det\left(B^T B - \lambda I\right)$. Keep in mind the eigenvalues of a p.d. matrix are positive.

• In other words, the 2-norm of $B$ is the square root of the largest eigenvalue of $B^T B$
Numerical Conditioning

• The condition number of a matrix B is defined as

$$\kappa(B) = \|B\|\,\|B^{-1}\| = \frac{\sigma_{max}(B)}{\sigma_{min}(B)}$$

the max/min stretching ratio of the matrix B
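This matters for the normal equations approach: forming $A^TA$ squares the condition number, $\kappa(A^TA) = \kappa(A)^2$, as a quick check illustrates (the random A is arbitrary):

```python
import numpy as np

# Forming the normal equations squares the condition number,
# kappa(A^T A) = kappa(A)^2, which is why conditioning is flagged here.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))

kA   = np.linalg.cond(A)            # sigma_max / sigma_min of A
kAtA = np.linalg.cond(A.T @ A)
print(kA**2, kAtA)                  # essentially equal
```

This squaring is the numerical argument for orthogonal-factorization methods (e.g., QR) over the Cholesky/normal-equations route when A is poorly conditioned.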
Power System State Estimation (SE)

$$J(x) = \sum_{i=1}^{m} \frac{\left(z_i - f_i(x)\right)^2}{\sigma_i^2}$$

where J(x) is the scalar cost function, x are the state variables (primarily bus voltage magnitudes and angles), z_i are the m measurements, f(x) relates the states to the measurements, and σ_i is the assumed standard deviation for each measurement
Assumed Error

State Estimation for Linear Functions
Simple DC System Example
The line flow measurement residuals are

$$P_{ij}^{meas} - \left(-V_i^2 G_{ij} + V_i V_j \left(G_{ij}\cos\left(\theta_i - \theta_j\right) + B_{ij}\sin\left(\theta_i - \theta_j\right)\right)\right)$$

$$Q_{ij}^{meas} - \left(-V_i^2 \left(B_{ij} + \frac{B_{cap}}{2}\right) + V_i V_j \left(G_{ij}\sin\left(\theta_i - \theta_j\right) - B_{ij}\cos\left(\theta_i - \theta_j\right)\right)\right)$$
– Two measurements for four unknowns
• Other measurements, such as the flow at the other end,
and voltage magnitudes, add redundancy
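The line flow measurement functions f(x) can be sketched in code. This uses a common π-model sign convention (series admittance g + jb, total charging b_cap), which may differ in sign from the slide's G_ij/B_ij form; all numbers are made up:

```python
import math

# Sending-end line flows for a pi-model line (standard g + jb convention;
# the slide's G_ij/B_ij sign convention may differ).
def line_flows(Vi, Vj, thi, thj, g, b, b_cap=0.0):
    th = thi - thj
    P = Vi**2 * g - Vi * Vj * (g * math.cos(th) + b * math.sin(th))
    Q = -Vi**2 * (b + b_cap / 2.0) - Vi * Vj * (g * math.sin(th) - b * math.cos(th))
    return P, Q

# Lossless line with x = 0.1 pu (so b = -10), 0.1 rad angle difference
P, Q = line_flows(1.0, 1.0, 0.1, 0.0, g=0.0, b=-10.0)
print(P, Q)   # ~0.998 and ~0.050 per unit
```

Given measured values, the residuals above are just P_meas − P and Q_meas − Q; with only these two measurements and four unknowns (two voltages, two angles), the system is unobservable until more measurements are added.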
SE Iterative Solution Algorithm

$$P_{ij}^{meas} - V_i V_j B_{ij} \sin\left(\theta_i - \theta_j\right)$$

$$Q_{ij}^{meas} - \left(V_i^2 B_{ij} - V_i V_j B_{ij} \cos\left(\theta_i - \theta_j\right)\right)$$

$$H^T R^{-1} H = 10^6 \begin{bmatrix} 2.01 & 0 & -2 \\ 0 & 2 & 0 \\ -2 & 0 & 2.01 \end{bmatrix}$$

$$x^1 = x^0 + \left(H^T R^{-1} H\right)^{-1} H^T R^{-1} \left(z - f(x^0)\right)$$

[The slide's numeric evaluation of this update shows the values 2.02, 1.5, 1.003, −1.98, −0.2, 0.8775, 0.01, and −0.13.]
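The update equation above can be sketched with a linear measurement model f(x) = Hx, in which case the iteration converges in a single step; the H, R, and z below are illustrative values, not the slide's system:

```python
import numpy as np

# One weighted least squares update:
#   x1 = x0 + (H^T R^-1 H)^-1 H^T R^-1 (z - f(x0))
# With a linear model f(x) = H x this converges in one iteration.
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
R = np.diag([0.01**2, 0.01**2, 0.02**2])    # assumed measurement variances
z = np.array([1.0, 1.5, 0.48])              # made-up measurements

x0 = np.zeros(2)
Rinv = np.linalg.inv(R)
G = H.T @ Rinv @ H                          # gain matrix H^T R^-1 H
x1 = x0 + np.linalg.solve(G, H.T @ Rinv @ (z - H @ x0))
print(x1)
```

At the solution the weighted normal equations are satisfied, i.e., H^T R^-1 (z − Hx) = 0; more accurate measurements (smaller σ_i) pull the estimate toward themselves.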
Assumed SE Measurement Accuracy

SE Observability

Pseudo Measurements

SE Observability Example