
Primal-Dual Subgradient Method

• equality constrained problems


• inequality constrained problems

EE364b, Stanford University


Primal-dual subgradient method

minimize f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m
Ax = b

with variable x ∈ Rn, fi : Rn → R convex

• primal-dual subgradient method updates both primal and dual variables

• these converge to primal-dual optimal values



Equality constrained problem

• convex equality constrained problem

minimize f (x)
subject to Ax = b

with variable x and optimal value p⋆


• we will work instead with (equivalent) augmented problem

minimize f (x) + (ρ/2)‖Ax − b‖₂²


subject to Ax = b

where ρ > 0



Augmented Lagrangian and optimality conditions
• augmented Lagrangian is

L(x, ν) = f (x) + νᵀ(Ax − b) + (ρ/2)‖Ax − b‖₂²

• (x, ν) primal-dual optimal if and only if

0 ∈ ∂xL(x, ν) = ∂f (x) + Aᵀν + ρAᵀ(Ax − b)


0 = −∇ν L(x, ν) = b − Ax

• same as 0 ∈ T (z), with z = (x, ν) and T (x, ν) = (∂xL(x, ν), −∇ν L(x, ν))

• T is a monotone operator (much more on this later)
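Monotonicity of T can be checked numerically on a small instance. The sketch below is illustrative only: it assumes f (x) = (1/2)‖x‖₂² (so ∂f (x) = {x}), with A, b, ρ, and the dimensions chosen arbitrarily, and tests that ⟨T (z1) − T (z2), z1 − z2⟩ ≥ 0 on random pairs of points.

```python
import numpy as np

# Hypothetical instance: f(x) = 0.5*||x||^2, so the subgradient of f at x is x
rng = np.random.default_rng(1)
n, p = 6, 3
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)
rho = 1.0

def T(z):
    """KKT operator T(x, nu) = (subgradient of L in x, -gradient of L in nu)."""
    x, nu = z[:n], z[n:]
    r = A @ x - b
    Tx = x + A.T @ nu + rho * A.T @ r   # element of d_x L(x, nu)
    Tnu = -r                            # -grad_nu L(x, nu) = b - Ax
    return np.concatenate([Tx, Tnu])

# monotonicity: <T(z1) - T(z2), z1 - z2> >= 0 for all pairs z1, z2
gaps = []
for _ in range(1000):
    z1 = rng.standard_normal(n + p)
    z2 = rng.standard_normal(n + p)
    gaps.append((T(z1) - T(z2)) @ (z1 - z2))
print(min(gaps))
```

For this affine operator the inner product works out to ‖x1 − x2‖₂² + ρ‖A(x1 − x2)‖₂², which is nonnegative for any pair of points, so the printed minimum gap is nonnegative up to roundoff.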



Primal-dual subgradient method

• primal-dual subgradient method is

z(k+1) = z(k) − αk T(k)

where T(k) ∈ T (z(k)) and αk is the step length

• more explicitly:

x(k+1) = x(k) − αk (g(k) + Aᵀν(k) + ρAᵀ(Ax(k) − b))

ν(k+1) = ν(k) + αk (Ax(k) − b)

where g(k) ∈ ∂f (x(k))
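The two updates above can be sketched directly in code. A minimal sketch, with all problem data assumed for illustration: f (x) = (1/2)‖x‖₂² (so the subgradient at x is x itself), random A and b, ρ = 1, and γk = 1/k; the minimum-norm solution Aᵀ(AAᵀ)⁻¹b serves as a reference point.

```python
import numpy as np

# Assumed instance: minimize 0.5*||x||^2 subject to Ax = b
rng = np.random.default_rng(0)
n, p = 10, 4
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)
rho = 1.0

x = np.zeros(n)
nu = np.zeros(p)
for k in range(1, 20001):
    r = A @ x - b                        # primal residual Ax - b
    g = x                                # subgradient of f at x
    Tx = g + A.T @ nu + rho * A.T @ r    # element of d_x L(x, nu)
    Tnu = -r                             # -grad_nu L(x, nu)
    # normalized diminishing step: alpha_k = gamma_k / ||T^(k)||_2, gamma_k = 1/k
    alpha = (1.0 / k) / max(np.linalg.norm(np.concatenate([Tx, Tnu])), 1e-12)
    x = x - alpha * Tx
    nu = nu + alpha * r

x_star = A.T @ np.linalg.solve(A @ A.T, b)  # analytic minimum-norm solution
print(np.linalg.norm(A @ x - b), np.linalg.norm(x - x_star))
```

Note that the dual update ν ← ν + α(Ax − b) is gradient ascent on the augmented Lagrangian in ν, while the primal update is subgradient descent in x; both use the same normalized step.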



Convergence

with step size αk = γk /‖T(k)‖₂, where

γk > 0,   ∑k γk = ∞,   ∑k γk² < ∞

we get convergence:

f (x(k)) → p⋆, Ax(k) − b → 0



Inequality constrained problem

• convex inequality constrained problem

minimize f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m

with variable x, optimal value p⋆

• (equivalent) augmented problem

minimize f0(x) + (ρ/2)‖F (x)‖₂²
subject to F (x) ⪯ 0

where F (x) = (f1(x)+, . . . , fm(x)+), with u+ = max{u, 0}, and ρ > 0



Augmented Lagrangian and optimality conditions

• augmented Lagrangian is

L(x, λ) = f0(x) + λᵀF (x) + (ρ/2)‖F (x)‖₂²

• (x, λ) primal-dual optimal if and only if

0 ∈ ∂xL(x, λ) = ∂f0(x) + ∑i=1,...,m (λi + ρfi(x)+) ∂fi(x)+

0 = −∇λL(x, λ) = −F (x)



Primal-dual subgradient method

• define z = (x, λ) and

T (x, λ) = (∂xL(x, λ), −∇λL(x, λ))

(T is the KKT operator for the problem, and is monotone)

• primal-dual subgradient method is

z(k+1) = z(k) − αk T(k)

where T(k) ∈ T (z(k)) and αk is the step length



• more explicitly:

x(k+1) = x(k) − αk (g0(k) + ∑i=1,...,m (λi(k) + ρfi(x(k))+) gi(k))

λi(k+1) = λi(k) + αk fi(x(k))+,   i = 1, . . . , m

where g0(k) ∈ ∂f0(x(k)), gi(k) ∈ ∂fi(x(k))+, i = 1, . . . , m

• note that λi(k) can only increase with k
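These updates can be sketched on a toy instance. Everything below is assumed for illustration: f0(x) = (1/2)‖x − x0‖₂² with a hypothetical x0, constraints fi(x) = xi ≤ 0, and γk = 1/k; the closed-form solution x⋆ = min(x0, 0) (componentwise) serves as a reference.

```python
import numpy as np

# Assumed toy instance: f0(x) = 0.5*||x - x0||^2, constraints f_i(x) = x_i <= 0
x0 = np.array([0.5, -1.0, 0.8, -0.3])   # hypothetical data
n = len(x0)
rho = 1.0

x = np.zeros(n)
lam = np.zeros(n)
for k in range(1, 20001):
    fplus = np.maximum(x, 0.0)           # f_i(x)+ = max(x_i, 0)
    active = (x > 0).astype(float)       # g_i in d f_i(x)+ : e_i where x_i > 0
    Tx = (x - x0) + (lam + rho * fplus) * active
    Tlam = -fplus
    alpha = (1.0 / k) / max(np.linalg.norm(np.concatenate([Tx, Tlam])), 1e-12)
    x = x - alpha * Tx
    lam = lam + alpha * fplus            # lam is nondecreasing, as noted above

x_star = np.minimum(x0, 0.0)             # closed-form solution for reference
print(np.round(x, 3), np.round(x_star, 3))
```

The dual update adds αk fi(x)+ ≥ 0 at each step, which is exactly the "λi(k) can only increase" observation from the slide.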



Convergence

with step size αk = γk /‖T(k)‖₂, where

γk > 0,   ∑k γk = ∞,   ∑k γk² < ∞

we get convergence:

f0(x(k)) → p⋆, fi(x(k))+ → 0, i = 1, . . . , m



Example: Inequality constrained LP

minimize cᵀx
subject to Ax ⪯ b

primal-dual subgradient update is



x(k+1) = x(k) − αk (c + AᵀM(k)(λ(k) + ρ(Ax(k) − b)+))

λ(k+1) = λ(k) + αk (Ax(k) − b)+

where M(k) is the diagonal matrix with

Mii(k) = 1 if aᵢᵀx(k) > bi,   Mii(k) = 0 if aᵢᵀx(k) ≤ bi
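The LP update above can be sketched on a small random instance. The dimensions, seed, and data below are assumed for illustration; b is chosen strictly positive so that x = 0 is strictly feasible, and with m ≫ n a random polytope of this kind is bounded with high probability.

```python
import numpy as np

# Assumed random LP instance: minimize c^T x subject to Ax <= b
rng = np.random.default_rng(2)
n, m = 5, 40
A = rng.standard_normal((m, n))
b = rng.uniform(0.1, 1.0, size=m)   # x = 0 is strictly feasible
c = rng.standard_normal(n)
rho = 1.0

x = np.zeros(n)
lam = np.zeros(m)
for k in range(1, 20001):
    viol = np.maximum(A @ x - b, 0.0)     # (Ax - b)+
    Mdiag = (A @ x > b).astype(float)     # diagonal of M^(k)
    Tx = c + A.T @ (Mdiag * (lam + rho * viol))
    Tlam = -viol
    alpha = (1.0 / k) / max(np.linalg.norm(np.concatenate([Tx, Tlam])), 1e-12)
    x = x - alpha * Tx
    lam = lam + alpha * viol              # dual variables only increase

print(c @ x, np.max(A @ x - b))
```

Multiplying elementwise by the 0/1 vector `Mdiag` implements AᵀM(k)(·) without forming the diagonal matrix explicitly.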



problem instance with n = 20, m = 200, p⋆ ≈ −3.4
step size αk = 1/(k‖T(k)‖₂)

[figure: |f (x(k)) − p⋆| and maxi(aᵢᵀx(k) − bi) versus k, for k = 0 to 2500, on a log scale]
