
Basic Theorem of Linear Programming

BASIC THEOREM OF LINEAR PROGRAMMING:

Consider the linear program (T):

    minimize    c^T x
    subject to  Ax = b,
                x ≥ 0,

where A is an m × n matrix of rank m.
Recall the following definitions:
Definition 1.1 A feasible solution is an element x ∈ R^n which satisfies the constraints Ax = b and x ≥ 0.
Among all solutions of the equation Ax = b, certain ones are called basic.
Definition 1.2 Let B be any m × m non-singular submatrix of A consisting of m linearly independent columns of A. If the n − m components of a solution x corresponding to the columns of A which do not appear in B are set equal to zero, then the solution x_B of the resulting set of equations is said to be a basic solution with respect to the basis consisting of the columns of B. The components of x_B associated with the columns of B are called the basic variables.
Note that the equation Ax = b may not have any basic solutions in the general case. In order to ensure that basic solutions exist, it is usual to make certain assumptions: (a) that n > m; (b) that the rows of A are linearly independent; (c) that the rank of A is m. These conditions are sufficient for the existence of at least one basic solution.
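
To make Definition 1.2 concrete, here is a minimal numerical sketch, assuming numpy is available; the matrix A, the vector b, and the particular choice of basis columns are a hypothetical example, not part of the original text.

    import numpy as np

    A = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 2.0]])   # m = 2, n = 3: n > m and rank(A) = m hold
    b = np.array([1.0, 0.5])

    basis = [0, 1]                     # pick m linearly independent columns of A
    B = A[:, basis]                    # the m x m non-singular submatrix B
    x = np.zeros(A.shape[1])           # the n - m non-basic components are set to zero
    x[basis] = np.linalg.solve(B, b)   # solve B x_B = b for the basic variables

    print(x)                           # [0.5 0.5 0. ] -- a basic solution (here also feasible)
    print(np.allclose(A @ x, b))       # True: x satisfies Ax = b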
It may well occur that some components of a basic solution are zero.
Definition 1.3 If one or more basic variables in a basic solution have the value zero, then that solution is said to be a degenerate basic solution.
We shall refer to a feasible solution which is also basic as a basic feasible solution.
A feasible solution that achieves the minimum value of the cost functional is said to be
an optimal feasible solution. If it is also basic, then it is an optimal basic feasible
solution.
Let us return to the linear programming problem (T). The fundamental result is that we
need only search among the basic feasible solutions for an optimal solution. Indeed, that
is what the Simplex Method actually does.
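As a quick sanity check of this claim (a hedged sketch, assuming scipy is installed; the instance below is hypothetical), an off-the-shelf LP solver run on a small standard-form problem returns a vertex of the feasible set, that is, a basic feasible solution:

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 0.5])
    c = np.array([1.0, 2.0, 4.0])

    res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
    print(res.x, res.fun)   # expect x ≈ [0.5, 0.5, 0] with cost 1.5: a vertex,
                            # i.e. a basic feasible solution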
Theorem 1.4 (The Fundamental Theorem of Linear Programming)
Given the linear programming problem (T), where A is an m × n matrix of rank m:
1. If there is any feasible solution, then there is a basic feasible solution.
2. If there is any optimal solution, then there is a basic optimal solution.
Proof: Suppose that a feasible solution exists. Choose any feasible solution among
those with the fewest non-zero components. If there are no non-zero components, then
x = 0 and x is a basic solution by definition. Otherwise, take the index set J := {j_1, j_2, ..., j_r} with elements corresponding to those x_{j_i} > 0. Then, if we denote the corresponding columns by a^{(j_1)}, a^{(j_2)}, ..., a^{(j_r)}, there are two possibilities:
1. The set of columns is linearly independent. In this case, we certainly have r ≤ m. If r = m, the corresponding solution is basic and the proof is complete. If r < m then, since A has rank m, we can choose m − r vectors from the remaining n − r columns of A so that the resulting set of m column vectors is linearly independent. Assigning the value zero to the corresponding m − r variables yields a (degenerate) basic feasible solution.
2. The set of columns a^{(j_1)}, a^{(j_2)}, ..., a^{(j_r)} is linearly dependent. Then there is a choice of scalars λ_{j_i}, not all zero, such that

    λ_{j_1} a^{(j_1)} + λ_{j_2} a^{(j_2)} + ... + λ_{j_r} a^{(j_r)} = 0.        (1.1)
Without loss of generality, we may take λ_{j_1} ≠ 0 and, indeed, λ_{j_1} > 0 (otherwise multiply (1.1) by −1).
Now, since each x_{j_i} > 0, the vector b is just the corresponding linear combination of the columns a^{(j_i)}. Hence

    Ax = Σ_{i=1}^{r} x_{j_i} a^{(j_i)} = b.
Now, multiplying the dependence relation (1.1) by a real number ε and subtracting, we have

    A(x − ελ) = Σ_{i=1}^{r} (x_{j_i} − ε λ_{j_i}) a^{(j_i)} = b,

and this equation holds for every ε, although one or more of the components x_k − ε λ_k may violate the non-negativity condition. Here λ = (λ_1, ..., λ_n)^T, where

    λ_k := λ_{j_i} if k = j_i,   and   λ_k := 0 if k ≠ j_i.

Then, for each value of ε, the vector x − ελ is a solution of the constraint equations. Note that, for the special value ε = 0, this vector is the original feasible solution. Now, as ε increases from 0, the components of x − ελ will individually change; each will either decrease, increase, or stay constant, depending on whether the corresponding λ_i is positive, negative, or zero.
Suppose that, for some i_0, λ_{i_0} ≤ 0. Then x_{i_0} − ε λ_{i_0} ≥ 0 for all ε > 0. On the other hand, if for some i_0 we have λ_{i_0} > 0, then x_{i_0} − ε λ_{i_0} remains greater than 0 only for sufficiently small ε > 0. Now we take

    ε* := min { x_i / λ_i : x_i > 0, λ_i > 0 }.        (1.2)
Then ε* corresponds to the first value of ε for which one or more non-zero components of x − ελ becomes 0. For this value of ε, the vector x − ε*λ is feasible and has at most r − 1 positive components. Repeating this process if necessary, we can obtain a feasible solution with non-zero components corresponding to linearly independent columns. In this situation, the previous alternative applies.
This completes the proof of the first part of the theorem.
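
The first half of the proof is constructive, and it is straightforward to turn it into code. The following is a minimal sketch, assuming numpy; the function name, the tolerance, and the use of the SVD to find a dependence relation are our own choices, not part of the original text. It performs one pass of case 2: find a dependence relation (1.1) among the support columns, compute ε* from (1.2), and replace x by x − ε*λ, which stays feasible and gains at least one more zero component.

    import numpy as np

    def reduce_support(A, x, tol=1e-12):
        """One pass of case 2: return a feasible x' with strictly fewer
        positive components, or x unchanged if its support columns are
        already linearly independent."""
        support = np.where(x > tol)[0]
        A_S = A[:, support]
        _, s, Vt = np.linalg.svd(A_S)           # null space via the SVD
        if len(support) <= np.sum(s > tol):     # full column rank: case 1 applies
            return x
        lam_S = Vt[-1]                          # a dependence relation, as in (1.1)
        if not np.any(lam_S > tol):             # ensure some lambda_{j_i} > 0
            lam_S = -lam_S
        lam = np.zeros_like(x)
        lam[support] = lam_S                    # lambda_k = 0 off the support
        pos = lam > tol
        eps = np.min(x[pos] / lam[pos])         # epsilon* from (1.2)
        return x - eps * lam                    # feasible, with one more zero

Iterating this until the support columns are linearly independent, and then extending the support to a basis as in case 1, produces the basic feasible solution promised by part 1 of the theorem.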
Now assume that x* is an optimal solution. There is no guarantee that this optimal solution is unique. In fact, we have seen cases where there is no uniqueness. Some of these solutions may have more positive components than others. Without loss of generality, we assume that x* has a minimal number of positive components. If x* = 0, then x* is basic and the cost is zero. If x* ≠ 0 and if J is the corresponding index set, then there are two cases as before. The proof of the first case, in which the corresponding columns are linearly independent, is exactly as in the previous proof of this case.
In the second case, we proceed just as with the second case above. To do this, however, we must show that, for any ε, the vector x* − ελ is optimal. To show this, we note that the associated cost is

    ⟨c, x* − ελ⟩ = ⟨c, x*⟩ − ε ⟨c, λ⟩.

For sufficiently small |ε|, the vector x* − ελ is a feasible solution for positive or negative values of ε. Hence we conclude that

    ⟨c, λ⟩ = 0,

since, were it not, we could determine a small ε of the proper sign to make

    ⟨c, x* − ελ⟩ < ⟨c, x*⟩,

which would violate the assumption of the optimality of x*.
Having established that the new feasible solution with fewer positive components is also optimal, the remainder of the proof may be completed exactly as in case 2 above. □
REMARK: If we are dealing with a problem in n variables and m constraints, we can produce an upper bound for the number of basic feasible solutions. Specifically, we must choose m columns from a set of n columns, and there are at most

    C(n, m) = n! / (m! (n − m)!)

ways to make the selection. Of course, some choices may not lead to a linearly independent set of m columns, so this number is only an upper bound.
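
The remark suggests a brute-force procedure which, by the theorem, is guaranteed to find an optimum when one exists: enumerate all C(n, m) column choices, keep those that are non-singular and yield feasible basic solutions, and take the cheapest. A minimal sketch, assuming numpy; the instance is the same hypothetical one used earlier.

    import numpy as np
    from itertools import combinations

    A = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 0.5])
    c = np.array([1.0, 2.0, 4.0])
    m, n = A.shape

    best = None
    for basis in combinations(range(n), m):      # at most C(n, m) = 3 candidates
        B = A[:, list(basis)]
        if abs(np.linalg.det(B)) < 1e-12:        # skip singular column choices
            continue
        x = np.zeros(n)
        x[list(basis)] = np.linalg.solve(B, b)
        if np.all(x >= -1e-12):                  # keep only feasible basic solutions
            if best is None or c @ x < c @ best:
                best = x

    print(best, c @ best)                        # x = [0.5 0.5 0. ], cost 1.5

On this instance only two of the three column choices give feasible basic solutions, and the best of them, x = (0.5, 0.5, 0) with cost 1.5, agrees with the solver output from the earlier check.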
