
Assignment 5 Solutions


ISyE/Math/CS/Stat 525 – Linear Optimization

Assignment 5 – Chapter 4

Instructions and policy: Undergraduate students should hand in the five exercises that are marked
with [U]. Graduate students should hand in the five exercises that are marked with [G]. All other exercises
are optional for keen students and should not be handed in. The assignment should be submitted
electronically on Canvas. Late submission policy: 20% of total points will be deducted per hour. Each student is
encouraged to solve all the exercises in the assignment as practice for the exams.
Students are strongly encouraged to work in groups of two on homework assignments. To find a partner
you can post on the “Discussions” section in Canvas. Only one file should be submitted for both group
members. In order to submit the assignment for your group please follow these steps in Canvas: Step 1. Click
on the “People” tab, then on “Groups”, and join one of the available groups named “Assignments Group 1”,
“Assignments Group 2”, . . . ; Step 2. Once your partner has also joined the same group, either of you can
submit the assignment by clicking on the “Assignments” tab, then on the assignment to be submitted, and
finally on “Submit assignment”. The submission will count for everyone in your group.
Groups must work independently of each other, may not share answers with each other, and solutions
must not be copied from the internet or other sources. If improper collaboration is detected, all groups
involved will automatically receive a 0. Students must properly give credit to any outside resources they use
(such as books, papers, etc.). In doing these exercises, you must justify all of your answers and cite every
result that you use. You are not allowed to share any content of this assignment.

Exercise 1 [U][G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Consider the linear programming problem:

min x1 − x2
s.t. 2x1 + 3x2 − x3 + x4 ≤ 0
3x1 + x2 + 4x3 − 2x4 ≥ 3
− x1 − x2 + 2x3 + x4 = 6
x1 ≤ 0
x2 , x3 ≥ 0.

Write down the corresponding dual problem.

Solution: The dual is

max  3p2 + 6p3
s.t. 2p1 + 3p2 − p3 ≥ 1
3p1 + p2 − p3 ≤ −1
− p1 + 4p2 + 2p3 ≤ 0
p1 − 2p2 + p3 = 0
p1 ≤ 0
p2 ≥ 0.

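The dual above can also be produced mechanically. The following is a minimal sketch (assuming Python with numpy is available; the names row_sense and col_sign are only illustrative) that encodes the data of Exercise 1 together with the duality rules of Chapter 4 — the sense of each primal constraint fixes the sign of the corresponding pi, and the sign of each xj fixes the sense of the jth dual constraint — and prints a dual matching the one written above.

    import numpy as np

    # Data of Exercise 1 (the rows of A are the three primal constraints).
    A = np.array([[ 2,  3, -1,  1],
                  [ 3,  1,  4, -2],
                  [-1, -1,  2,  1]], dtype=float)
    b = np.array([0, 3, 6], dtype=float)
    c = np.array([1, -1, 0, 0], dtype=float)
    row_sense = ['<=', '>=', '=']               # sense of each primal constraint
    col_sign  = ['<=0', '>=0', '>=0', 'free']   # sign restriction on x1, ..., x4

    # Duality rules of Chapter 4 for a minimization primal.
    p_sign     = {'<=': '<= 0', '>=': '>= 0', '=': 'free'}
    dual_sense = {'<=0': '>=', '>=0': '<=', 'free': '='}

    print('max  ' + ' + '.join(f'{b[i]:g} p{i+1}' for i in range(len(b))))
    for j in range(A.shape[1]):                 # one dual constraint per primal variable
        lhs = ' + '.join(f'{A[i, j]:g} p{i+1}' for i in range(A.shape[0]))
        print(f's.t. {lhs} {dual_sense[col_sign[j]]} {c[j]:g}')
    for i in range(len(b)):                     # sign restrictions on the dual variables
        print(f'     p{i+1} {p_sign[row_sense[i]]}')
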
Exercise 2 [U][G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points
Consider a linear programming problem in standard form and assume that the rows of A are linearly
independent. For each one of the following statements, provide either a proof or a counterexample.
(a) (3 points) Let x∗ be a basic feasible solution. Suppose that for every basis corresponding to x∗ ,
the associated basic solution to the dual is infeasible. Then, the optimal cost must be strictly less
than c′x∗.

Solution: True. Since x∗ is primal feasible, the optimal cost of the primal is less than or equal
to c′x∗. Suppose by contradiction that for every basis corresponding to x∗ the associated basic
solution to the dual is infeasible, and that the optimal primal cost is equal to c′x∗. Then x∗ is an
optimal basic feasible solution. Run the simplex method starting from x∗ with an anti-cycling rule:
since x∗ is optimal, every pivot must be degenerate (a nondegenerate pivot would strictly decrease
the cost), so the method only changes basis while remaining at x∗, and it terminates with a basis
associated with x∗ whose reduced costs are all nonnegative. The dual basic solution associated with
this basis is feasible, which contradicts our assumption.

(b) (2 points) The dual of the auxiliary primal problem considered in Phase I of the simplex method
is always feasible.

Solution: True. The auxiliary problem considered in Phase I is always feasible and its cost
function is nonnegative, thus it is not unbounded. Hence it has an optimal solution. It follows
that its dual also has an optimal solution, and is therefore feasible.

(c) (3 points) Let pi be the dual variable associated with the ith equality constraint in the primal.
Eliminating the ith primal equality constraint is equivalent to introducing the additional constraint
pi = 0 in the dual problem.

Solution: True. Let a′i x = bi be the ith primal equality constraint and recall that the dual
constraints are of the form Σ_{i=1}^m pi a′i ≤ c′. Eliminating the ith primal equality constraint removes
the term pi a′i from the dual constraints and the term pi bi from the dual cost. This is equivalent
to setting pi to zero.

(d) (2 points) If the unboundedness criterion is satisfied when we solve the primal with the simplex
method, then the dual problem is infeasible.

Solution: True. If the unboundedness criterion is satisfied, the optimal primal cost is −∞, and
therefore, by weak duality, the dual must be infeasible.

Exercise 3 [U] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Let A be a symmetric square matrix. Consider the linear programming problem

minimize c′x
subject to Ax ≥ c
x ≥ 0.

Prove that if x∗ satisfies Ax∗ = c and x∗ ≥ 0, then x∗ is an optimal solution.

Solution: Consider the following linear program, P, and its dual, D,

(P)  minimize   c′x                (D)  maximize   p′c
     subject to Ax ≥ c,                 subject to p′A ≤ c′,
                x ≥ 0,                             p ≥ 0.

In general p′A ≤ c′ is equivalent to A′p ≤ c. Since A is square and symmetric, we have A′ = A,
and hence A′p ≤ c is the same as Ap ≤ c. Furthermore, the feasible vectors for P and D have the same
dimension.
By hypothesis we have a vector x∗ such that Ax∗ = c and x∗ ≥ 0. Such x∗ is feasible for both P
and D. Moreover, the two objective values evaluated at x∗ coincide (c′x∗ = x∗′c). Therefore, by
Corollary 4.2, x∗ is optimal for P, which is the original problem.
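As a numerical sanity check, the following sketch (assuming scipy is available; the 2 × 2 instance is made up for illustration) builds a symmetric A, picks x∗ ≥ 0, sets c = Ax∗, solves the linear program, and confirms that the optimal cost equals c′x∗.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])                 # symmetric matrix
    x_star = np.array([1.0, 1.0])              # nonnegative vector
    c = A @ x_star                             # so that A x* = c and x* >= 0

    # min c'x  s.t.  Ax >= c, x >= 0   (linprog expects A_ub x <= b_ub)
    res = linprog(c, A_ub=-A, b_ub=-c, bounds=[(0, None)] * len(c), method='highs')
    assert res.success
    print(res.fun, c @ x_star)                 # both values should be 6.0
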

Exercise 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Suppose we are given an oracle which, given a system of linear inequality constraints, either returns a
vector that satisfies all inequalities in the system, or states that no such vector exists. Consider a linear
programming problem that admits an optimal solution. Construct a simple algorithm that invokes the
oracle once, and that finds an optimal solution of the linear programming problem. (In particular, your
algorithm cannot use the simplex method.)

Solution: Let x and p be the primal and dual vectors, respectively, and let c and b be the cost
vectors in the primal and in the dual, respectively. We form the following system of linear inequalities
for the vector (x, p):

1. Primal feasibility
2. Dual feasibility
3. c′x = p′b.

For example, if the primal is in standard form, the system will be:

Ax = b
x ≥ 0
p′A ≤ c′
c′x = p′b.

By strong duality, since the primal has an optimal solution, so does the dual, and their optimal
costs coincide; hence the above system is feasible and the oracle returns a vector. Moreover, any
feasible solution (x, p) of the above system is such that x is primal optimal and p is dual optimal.
Thus, a single call to the oracle on the above system is sufficient to determine an optimal solution
of the original linear programming problem.
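A minimal sketch of this construction (assuming scipy; a zero-objective call to linprog plays the role of the oracle, and the small standard-form instance A, b, c is made up for illustration):

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, 0.0]])
    b = np.array([4.0, 3.0])
    c = np.array([2.0, 3.0, 1.0])
    m, n = A.shape

    # Oracle input: one system over z = (x, p), with x >= 0 and p free.
    zero_obj = np.zeros(n + m)                              # feasibility only
    A_ub = np.hstack([np.zeros((n, n)), A.T])               # dual feasibility A'p <= c
    b_ub = c
    A_eq = np.vstack([np.hstack([A, np.zeros((m, m))]),     # primal feasibility Ax = b
                      np.hstack([c, -b])])                  # zero duality gap c'x = b'p
    b_eq = np.concatenate([b, [0.0]])
    bounds = [(0, None)] * n + [(None, None)] * m

    res = linprog(zero_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method='highs')
    x_opt, p_opt = res.x[:n], res.x[n:]
    print(x_opt, p_opt)        # x_opt is primal optimal, p_opt is dual optimal
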

Exercise 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Consider a general linear programming problem

minimize c′x
subject to Ax ≥ b,

with A ∈ Rm×n , c ∈ Rn , b ∈ Rm . Suppose that we have a nondegenerate basic feasible solution to the
primal x∗. Assume also that x∗i ≠ 0 for every i = 1, . . . , n. Show that the complementary slackness
conditions lead to a system of equations for the dual vector that has a unique solution.

Solution: Consider a general linear programming problem (P ), and define its dual problem (D).

(P)  minimize   c′x                (D)  maximize   p′b
     subject to Ax ≥ b,                 subject to p′A = c′,
                                                   p ≥ 0.

Let I(x∗) be the set of indices of the constraints of (P) that are active at x∗. Since x∗ is a nondegenerate
basic feasible solution, we know that |I(x∗)| = n (see Definitions 2.9 and 2.10). Thus we have

a′i x∗ = bi ,   ∀i ∈ I(x∗)      (1)
a′i x∗ > bi ,   ∀i ∉ I(x∗),     (2)

Moreover, the rows of A with indices in I(x∗) are linearly independent.


The complementary slackness conditions are

pi (a′i x∗ − bi ) = 0,   ∀i = 1, . . . , m      (3)
x∗j (p′Aj − cj ) = 0,   ∀j = 1, . . . , n.      (4)

From (2) and (3) we obtain pi = 0 for every i ∉ I(x∗).
Since x∗i ≠ 0 for every i = 1, . . . , n, conditions (4) are equivalent to p′Aj − cj = 0 for all j = 1, . . . , n.
By deleting the variables pi , for i ∉ I(x∗), from the system p′A = c′, we obtain n equations in n unknowns.
Define AI as the submatrix of A containing the rows in I(x∗ ), and AI¯ as the submatrix of A
containing the rows not in I(x∗ ). Note that AI is a n × n nonsingular matrix. Similarly, partition
p into pI and pI¯.
We obtain:

p′A = c′  ⇐⇒  p′I AI + p′Ī AĪ = c′  ⇐⇒  p′I AI + 0′AĪ = c′  ⇐⇒  p′I AI = c′.

Since AI is invertible, the system has a unique solution, namely p′I = c′AI^{-1}. Note that the dual
vector obtained in this way, i.e. p′ = (c′AI^{-1} , 0′), might have some negative components, thus it
might not be dual feasible.
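A small numerical sketch of this computation (assuming numpy; the 3 × 2 instance and the names active and A_I are only illustrative): at a nondegenerate basic feasible solution with all components nonzero, the unique dual vector is obtained by solving p′I AI = c′ and setting the remaining components to zero.

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    b = np.array([1.0, 1.0, 1.0])
    c = np.array([1.0, 1.0])
    x_star = np.array([1.0, 1.0])          # nondegenerate BFS of min c'x s.t. Ax >= b

    active = np.isclose(A @ x_star, b)     # index set I(x*): here rows 0 and 1
    A_I = A[active]                        # n x n and nonsingular by nondegeneracy
    p = np.zeros(len(b))
    p[active] = np.linalg.solve(A_I.T, c)  # p_I' A_I = c'  <=>  A_I' p_I = c
    print(p)                               # unique dual vector, here [1. 1. 0.]
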

Exercise 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Let P = {x ∈ Rn : Ax = b, x ≥ 0} be a nonempty polyhedron, and let m be the dimension of the vector
b. We call xj a null variable if xj = 0 for all x ∈ P .
(a) Suppose there exists some p ∈ Rm for which p′A ≥ 0′, p′b = 0, and such that the jth component of
p′A is positive. Prove that xj is a null variable.

Solution: We set up the following primal/dual pair

    min  −xj                        max  p′b
    s.t. Ax = b                     s.t. p′A ≤ −e′j ,
         x ≥ 0,

where ej is the jth unit vector. Suppose there exists some p̄ ∈ Rm for which p̄′A ≥ 0, p̄′b = 0
and such that the jth component of p̄′A is positive, i.e. (p̄′A)j = α > 0. We let p = −(1/α)p̄. We
have that p is dual feasible and that it has dual cost equal to 0. By weak duality, the optimal
cost of the primal is at least 0. This implies that no feasible primal solution can have xj > 0,
otherwise the primal cost would be negative.

(b) Prove the converse of (a): if xj is a null variable, then there exists some p ∈ Rm for which p′A ≥ 0′,
p′b = 0, and such that the jth component of p′A is positive.

Solution: If xj is a null variable then, since P is nonempty, the optimal cost of the primal problem
defined in part (a) is 0. By strong duality, the dual also has a finite optimum, and the optimal dual
cost is 0. Let p∗ be an optimal dual solution. Taking its negative, p = −p∗, we obtain a vector with
the desired properties: p′A ≥ e′j ≥ 0′, the jth component of p′A is at least 1, and p′b = −p∗′b = 0.

(c) If xj is not a null variable, then by definition there exists some y ∈ P for which yj > 0. Use the
results in parts (a) and (b) to prove that there exists x ∈ P and p ∈ Rm such that:

p′A ≥ 0′,   p′b = 0,   x + A′p > 0.

Solution: For each non-null variable xj we can find some vector x ∈ P such that xj > 0. Consider
the average x̄ of all such vectors (one per non-null variable). We have Ax̄ = b and x̄ ≥ 0, so x̄ is
primal feasible, i.e. x̄ ∈ P. Moreover x̄j > 0 for each non-null variable xj. By part (b), for each null
variable xj we can find a vector p such that p′A ≥ 0, p′b = 0, and the jth component of p′A is
positive. By taking the average p̄ of these vectors, we obtain p̄′A ≥ 0, p̄′b = 0, and all components
of p̄′A corresponding to null variables are positive. Thus, for every j, the jth component of x̄ (if xj
is non-null) or of p̄′A (if xj is null) is positive. Since x̄ ≥ 0 and A′p̄ ≥ 0, this gives x̄ + A′p̄ > 0, so
the pair (x̄, p̄) has the required properties.
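To make parts (a) and (b) concrete, here is a hedged numerical sketch (assuming scipy; the 2 × 2 instance is made up, and x2 is a null variable of the corresponding P). It solves the primal of part (a) to confirm that x2 cannot be positive, then solves its dual and negates an optimal solution to obtain the certificate of part (b).

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    b = np.array([1.0, 0.0])
    j = 1                                    # test whether x2 is a null variable
    e_j = np.zeros(A.shape[1]); e_j[j] = 1.0

    # Primal of part (a): min -x_j  s.t.  Ax = b, x >= 0; optimal cost 0 iff x_j is null.
    primal = linprog(-e_j, A_eq=A, b_eq=b, bounds=[(0, None)] * A.shape[1], method='highs')
    print('max of x_j over P:', -primal.fun)                 # 0.0, so x2 is null

    # Dual of part (a): max p'b  s.t.  p'A <= -e_j', p free; negate an optimal solution.
    dual = linprog(-b, A_ub=A.T, b_ub=-e_j, bounds=[(None, None)] * A.shape[0], method='highs')
    p_bar = -dual.x
    print('certificate:', p_bar, "p'A =", p_bar @ A, "p'b =", p_bar @ b)
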

Exercise 7 [U][G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Consider the following pair of linear programming problems:

minimize   c′x                    maximize   p′b
subject to Ax ≥ b                 subject to p′A ≤ c′
           x ≥ 0,                            p ≥ 0.

Suppose at least one of these two problems has a feasible solution. Prove that the set of feasible solutions
to at least one of the two problems is unbounded.

Solution: Let us call (P ) the primal linear programming problem above, and let (D) be the corre-
sponding dual problem. By hypothesis we have that at least one of these two problems has a feasible
solution. By Theorem 4.1 we can assume without loss of generality that (P ) is feasible. If (P ) is
also unbounded, then we are done.
So assume that (P ) has a bounded feasible set. This implies that whenever we minimize a linear
function (represented by a vector c) on the feasible set of (P ), the optimal objective value is finite.
Therefore, by strong duality, problem (D) also has a finite optimal value for every c; hence (D) is
feasible for every cost vector c.
From this point onwards, we can follow different lines of reasoning. Next, I propose
three alternatives. The first two are very similar.
Reasoning 1
Let p̂ be a feasible solution to (D). To prove that (D) has an unbounded feasible set, we show that
there exists a feasible direction d at p̂ with the property that, for any θ > 0, p̂ + θd remains feasible
for (D). What properties must such a direction satisfy? We need:

d ≠ 0                               (5)
(p̂ + θd)′A ≤ c′    ∀θ > 0          (6)
p̂ + θd ≥ 0         ∀θ > 0          (7)

Condition (6) implies d′A ≤ 0′ and condition (7) implies d ≥ 0. To guarantee that d ≠ 0 we can
strengthen the first condition and impose the constraint d′A ≤ −e′, where e is the vector of all ones.
The question is now: does there exist a vector d ≥ 0 such that d′A ≤ −e′? To answer this question
we look at the primal/dual pair:

(P̄)  minimize   −e′x              (D̄)  maximize   p′b
     subject to Ax ≥ b                  subject to p′A ≤ −e′,
                x ≥ 0,                             p ≥ 0.

From the previous considerations, we know that the feasible set of (P̄) is nonempty and bounded, so (P̄)
admits a finite optimum. By strong duality, this implies that (D̄) is feasible, and any feasible solution of
(D̄) is precisely a direction d with the properties above. Thus, the feasible set of (D) is unbounded.
Reasoning 2
Consider now the following pair of linear programs, where e′ = (1, 1, . . . , 1):

(P̄)  minimize   −e′x              (D̄)  maximize   p′b
     subject to Ax ≥ b                  subject to p′A ≤ −e′,
                x ≥ 0,                             p ≥ 0.

Since (D̄) is feasible, there exists a feasible solution of (D̄). Let p∗ be a solution to (D̄), and let p̂
be a feasible solution to (D). Then p̂ + θp∗ is feasible for (D), for every θ > 0. In fact p̂ + θp∗ ≥ 0
0
and (p̂ + θp∗ )0 A = p̂0 A + θp∗ A ≤ c0 − θe0 ≤ c0 .
So we can conclude the set of feasible solutions to (D) is unbounded.
Reasoning 3
We know that the feasible set of (D) is nonempty. To prove that the feasible set of (D) is unbounded,
we show that there exists an objective function that, when maximized over it, has optimal value +∞.
Note that this amounts to replacing the vector b with another vector b̄. The feasible set of the primal
then changes and, if (D) becomes unbounded, by weak duality the primal with the new right-hand side
b̄ must be infeasible.
In order to find a suitable vector b̄, we consider an arbitrary primal constraint, for example the first
one: a′1 x ≥ b1. Since the feasible set of (P) is nonempty and bounded, we know that for any feasible x we have

b1 ≤ a′1 x ≤ u < ∞.

In other words, if we maximize a′1 x over the feasible set of (P) the optimal value is finite and equal
to some scalar u. Thus, if we change b1 into u + 1, the problem becomes infeasible: any x satisfying
the new first constraint also satisfies the original one (since u + 1 > b1), so it would belong to the
feasible set of (P) while having a′1 x > u, a contradiction. Consider
the primal/dual pair:

(P̄)  minimize   c′x               (D̄)  maximize   p′b̄
     subject to Ax ≥ b̄                 subject to p′A ≤ c′
                x ≥ 0,                             p ≥ 0,

with b̄1 = u + 1 and b̄i = bi for i = 2, . . . , m. Since the primal is infeasible, the dual can be either
infeasible or unbounded. But the feasible set of (D̄) is the same as the feasible set of (D), thus we
know it is nonempty. It follows that (D̄) is unbounded, thus the feasible set of (D) is unbounded.

Exercise 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Let A be a given matrix. Show that exactly one of the following alternatives must hold.
(a) There exists some x ≠ 0 such that Ax = 0, x ≥ 0.

(b) There exists some p such that p′A > 0′.

Solution: Let A ∈ Rm×n. Notice first that (b) is equivalent to the following condition:

(b̃) There exists some p̃ such that p̃′A ≥ e′, where e is the vector in Rn with all components equal
to 1.

Clearly if (b̃) holds, then (b) is true too. Suppose now that (b) holds, and let p̄ be such that
p̄′A > 0′. Define p̃ = p̄ / min_{i=1,...,n} (p̄′A)i . Then p̃′A = p̄′A / min_{i=1,...,n} (p̄′A)i ≥ e′.

Therefore (b) and (b̃) are indeed equivalent. So showing that exactly one of (a) and (b) holds
is equivalent to showing that exactly one of (a) and (b̃) holds.
Define the following two linear programming problems

(P)  minimize   p′0               (D)  maximize   e′x
     subject to p′A ≥ e′,              subject to Ax = 0
                                                  x ≥ 0.

It is easy to see that (D) is the dual problem of (P).


Assume (b̃) holds, which means that (P) is feasible and the optimal cost is 0. The Strong Duality
Theorem tells us that also (D) is feasible with optimal objective value equal to 0. If the maximum
cost of (D) is 0, then every feasible dual solution has cost at most 0. Since x ≥ 0 for every feasible
dual solution, we must have x = 0. This implies that if Ax = 0, x ≥ 0, then x = 0. Therefore (a) is
false.
Suppose now that (b̃) does not hold. Then (P) is infeasible and (D) can be either infeasible or
unbounded. Problem (D) is clearly feasible, since the zero vector satisfies all constraints. It follows
that (D) is unbounded. We can therefore conclude that there exists a vector x̄ ∈ Rn such that
Ax̄ = 0, x̄ ≥ 0 and e′x̄ > 0, implying x̄ ≠ 0, thus condition (a) is true.
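The two alternatives can also be checked computationally. Below is a hedged sketch (assuming scipy; the function name which_alternative and the two test matrices are made up for illustration) that decides which alternative holds by first testing feasibility of (P) and, if (P) is infeasible, recovering a normalized certificate for (a) via (D).

    import numpy as np
    from scipy.optimize import linprog

    def which_alternative(A):
        m, n = A.shape
        e = np.ones(n)
        # (P): find p with p'A >= e'  (written as -A'p <= -e, p free).
        res_p = linprog(np.zeros(m), A_ub=-A.T, b_ub=-e,
                        bounds=[(None, None)] * m, method='highs')
        if res_p.success:
            return 'b', res_p.x                      # p'A >= e' > 0'
        # Otherwise look for a normalized certificate of (a): Ax = 0, e'x = 1, x >= 0.
        A_eq = np.vstack([A, e])
        b_eq = np.concatenate([np.zeros(m), [1.0]])
        res_d = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                        bounds=[(0, None)] * n, method='highs')
        return 'a', res_d.x

    print(which_alternative(np.array([[1.0, -1.0]])))    # alternative (a)
    print(which_alternative(np.array([[1.0,  1.0]])))    # alternative (b)
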

Exercise 9 [G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points


Let a and a1 , . . . , am be given vectors in Rn . Prove that the following two statements are equivalent:

(a) For all x ≥ 0, we have a′x ≤ maxi a′i x.

(b) There exist nonnegative coefficients pi that sum to 1 and such that a ≤ Σ_{i=1}^m pi ai .

Solution: First, suppose (b) is true. To prove that (a) is true observe that, for each x ≥ 0:

a′x ≤ Σ_{i=1}^m pi a′i x ≤ Σ_{i=1}^m pi maxj a′j x ≤ maxj a′j x,

where the last inequality is true because we are assuming that the nonnegative coefficients pi sum
to 1.
Now, suppose (a) is true.
REASONING 1

Then, there is no x ≥ 0 such that a′x > maxi a′i x. Equivalently, there is no x ≥ 0 such that a′x > a′i x
for all i. We consider the following linear program (P):

    minimize   0′x
    subject to (a − ai )′x ≥ 1 ,   i = 1, . . . , m
               x ≥ 0.

Note that (P) is feasible if and only if there is x ≥ 0 such that a′x > a′i x for all i: if x̄ is feasible for
(P) then x̄ ≥ 0 and (a − ai )′x̄ > 0 for all i; conversely, if there is x̃ ≥ 0 such that a′x̃ > a′i x̃ for all
i, we can rescale x̃, i.e. for a suitable α > 0 we define x̂ = αx̃ so that (a − ai )′x̂ ≥ 1 and x̂ ≥ 0. Thus,
if (a) is true, it follows that (P) is infeasible.
The dual of (P) is (D):

    maximize   Σ_{i=1}^m pi
    subject to Σ_{i=1}^m pi (a − ai ) ≤ 0
               p ≥ 0.

Since (P) is infeasible and p = 0 is feasible for (D), it follows that (D) is unbounded. Thus, there
exists p ≥ 0 such that Σ_{i=1}^m pi a ≤ Σ_{i=1}^m pi ai and Σ_{i=1}^m pi > 0. First, we rescale p so
that Σ_{i=1}^m pi = 1. Since p ≥ 0 and Σ_{i=1}^m pi = 1, it follows that Σ_{i=1}^m pi a = a ≤ Σ_{i=1}^m pi ai ,
thus (b) holds.
REASONING 2
Condition (a) is equivalent to: maxi a′i x − a′x ≥ 0 for all x ≥ 0. Consider the following linear
programming problem:

    (P)  minimize   t − a′x
         subject to t ≥ a′i x ,   i = 1, . . . , m
                    x ≥ 0,

or, equivalently, with the constraints rewritten as t − a′i x ≥ 0 for i = 1, . . . , m.
From the reductions seen in class we know that (a) is true if and only if the optimal cost of (P) is
greater than or equal to 0. On the other hand, x = 0 and t = 0 give a feasible solution of (P) of
cost 0. Thus, when (a) is true, (P) has a finite optimum and the optimal cost is 0.
The dual (D) of (P) is:

    maximize   p′0
    subject to −p′A ≤ −a′
               Σ_{i=1}^m pi = 1
               p ≥ 0.

By strong duality, we know that the dual has a finite optimum, thus it has a feasible solution.
Consider a feasible solution p ∈ Rm to the dual. We have Σ_{i=1}^m pi = 1 and p′A ≥ a′, implying
that (b) is satisfied.

Exercise 10 [U][G] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 points
Consider the problem

min   max_{i=1,...,m} (a′i x − bi )
s.t.  x ∈ Rn .

Let v be the optimal value of the above optimization problem, let A be the m × n matrix whose rows
are a′1 , . . . , a′m , and let b be the vector whose components are b1 , . . . , bm .

(a) (7 points) Consider any vector p ∈ Rm such that p′A = 0′, p ≥ 0, and Σ_{i=1}^m pi = 1. Show that
−p′b ≤ v.

Solution: The objective function of the problem is piecewise linear convex. Since we have a
minimization problem, we have the following equivalent linear program:

    min_{x,z}   z                                          (8)
    s.t.        −a′i x + z ≥ −bi ,   i = 1, . . . , m       (9)
                x ∈ Rn , z ∈ R,                             (10)

where constraints (9) come from the fact that z ≥ a′i x − bi for i = 1, . . . , m. The optimal cost
of the above problem is v. The dual is:

    max_{p}   −p′b                                          (11)
    s.t.      p′A = 0′                                      (12)
              Σ_{i=1}^m pi = 1                              (13)
              p ≥ 0.                                        (14)

By weak duality, any feasible solution of the dual, i.e. any vector p ∈ Rm that satisfies (12),
(13) and (14), is such that −p′b is a lower bound on v.

(b) (3 points) In order to obtain the best possible lower bound of the form obtained in part (a), we
consider the linear program

max   −p′b
s.t.  p′A = 0′
      Σ_{i=1}^m pi = 1
      p ≥ 0.

Show that the optimal cost of the above linear program is v.

Solution: The given linear program is exactly the dual constructed in part (a). By strong
duality, the optimal dual cost is equal to v.
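To illustrate parts (a) and (b) numerically, here is a small sketch (assuming scipy; the one-dimensional instance with a1 = 1, a2 = −1, b = (0, 1) is made up) that solves the epigraph LP (8)–(10) and the dual LP of part (b) and checks that both values equal v.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[ 1.0],
                  [-1.0]])
    b = np.array([0.0, 1.0])
    m, n = A.shape

    # Primal over (x, z): min z  s.t.  a_i'x - z <= b_i, with x and z free.
    c_primal = np.concatenate([np.zeros(n), [1.0]])
    A_ub = np.hstack([A, -np.ones((m, 1))])
    res_p = linprog(c_primal, A_ub=A_ub, b_ub=b,
                    bounds=[(None, None)] * (n + 1), method='highs')

    # Dual over p: max -p'b  s.t.  p'A = 0', sum(p) = 1, p >= 0.
    A_eq = np.vstack([A.T, np.ones((1, m))])
    b_eq = np.concatenate([np.zeros(n), [1.0]])
    res_d = linprog(b, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, None)] * m, method='highs')

    print(res_p.fun, -res_d.fun)        # both should equal v = -0.5
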

