BIT
2000, Vol. 40, No. 4, pp. 001–004
0006-3835/00/4004-0001 $15.00
© Swets & Zeitlinger
SOLVING A SEQUENCE OF SPARSE
LINEAR LEAST SQUARES SUBPROBLEMS ∗
ÁNGEL SANTOS-PALOMO and PABLO GUERRERO-GARCÍA†
Department of Applied Mathematics, University of Málaga, Complejo Tecnológico,
Campus Teatinos, 29071 Málaga, Spain, emails: {santos,pablito}@ctima.uma.es
Abstract.
We describe how to maintain an explicit sparse orthogonal factorization in order to
solve the sequence of sparse linear least squares subproblems needed to implement an
active-set method for the nonnegative least squares problem with a matrix having more
columns than rows. To do so, we have adapted the sparse direct methodology
of Björck and Oreborn from the late 1980s in a way similar to Coleman and Hulbert, but without
forming the Hessian matrix, which is only positive semidefinite in this case; we have thus
developed a sparse counterpart of Reichel and Gragg's QRUP package of the early 1990s.
We comment on our implementation on top of the sparse toolbox of Matlab 5, and
we emphasize the importance of Lawson and Hanson's NNLS active-set method as an
alternative to Dax's constructive proof of Farkas' lemma.
AMS subject classification: 65F50, 65F20, 90C49, 49N15.
Key words: Sparse orthogonalization, Givens rotations, active-set methods, Farkas’
lemma.
1. The problem we want to solve.
In the least squares problem with nonnegative variables,
$$\text{(1.1)}\qquad \underset{y}{\text{minimize}}\ \tfrac{1}{2}\,\|Ay - c\|_2^2 \quad\text{subject to}\ y \geq O, \qquad A \in \mathbb{R}^{n\times m},\ y \in \mathbb{R}^m,$$
it is usual [1, 5, 20] to assume n > m and that the matrix A has full column rank,
so the problem is strictly convex and its solution is unique. To compute such a
solution, Lawson and Hanson's NNLS method [16, §23.3] can be used. However,
they did not make any assumption about the dimensions of the matrix A in their
original presentation; indeed, the proof of convergence of the NNLS method
only needs that, in the sequence of linear least squares (LLS) subproblems of
the form $\min \|A_k z - c\|_2$ that this method has to solve, every matrix $A_k$ has full
column rank.
We shall only deal with the case n < m, with A being a sparse matrix of full row
rank. This is also a convex program (although not strictly convex, so there is no unique
∗ Review (30th January 2007) of the Technical Report MA-03/03, September 2003.
† Corresponding author. TRs available at http://www.satd.uma.es/matap/investig.htm
solution) and we call it the convex nonnegative least squares (NNLS) problem. In
fact there may be infinitely many vectors y that produce the (unique) minimum
value of the length of the residual. We do not require the solution to have
any particular distinguishing property as in algorithm 1 of Lötstedt [17] (i.e., it
need not have the least possible number of nonzero components, nor
the minimum Euclidean norm among all solutions), but the columns of A which
correspond to positive components of y must be linearly independent.
The availability of a sparse algorithm for (1.1) makes possible an elegantly
simple way (due to Cline) of solving sparse least distance programming (LDP)
problems [16, §23.4]; moreover, least squares full column rank problems with
linear inequality constraints like those arising in constrained curve fitting [16,
§23.7] can be transformed into an LDP as described in [16, §23.5]. While problem
(1.1) has been used as a Phase I for a deficient-basis simplex method for linear
programming, in a similar way to the active-set algorithm P1 of Dantzig et
al. [8], it is not the purpose of this paper to test the suitability of this alternative.
Nevertheless, we want to emphasize in this paper that, to obtain a
constructive proof of Farkas' lemma, it suffices to apply the NNLS method itself,
in Lawson and Hanson's original form [16], to problem (1.1).
Lemma 1.1. (Farkas' lemma) Let $A \in \mathbb{R}^{n\times m}$ and $c \in \mathbb{R}^n$; then exactly one of
the following propositions is true: either
$$\text{(1.2)}\qquad c = Ay \quad\text{for some}\quad y \geq O,$$
or else there exists a vector $d$ satisfying
$$\text{(1.3)}\qquad A^T d \geq O \quad\text{and}\quad c^T d < 0.$$
In fact, Dax [10] has given another active-set method (different from NNLS)
to solve the same problem (1.1), thus providing a different constructive proof of
Farkas’ lemma. First, he proves the following theorem, whose corollary acts as
the separating hyperplane theorem.
Theorem 1.2. (Kuhn–Tucker) Let $y^* \in \mathbb{R}^m$, $d^* = Ay^* - c \in \mathbb{R}^n$ and $w^* = A^T d^* \in \mathbb{R}^m$; then $y^*$ is a solution of (1.1) if and only if $y^* \geq O$, $w^* \geq O$ and
$(y^*)^T w^* = 0$.
Corollary 1.3. Let $y^* \in \mathbb{R}^m$ be a solution of (1.1) and $d^* = Ay^* - c \in \mathbb{R}^n$.
If $d^* = O$ then $y^*$ is a solution of $Ay = c$, $y \geq O$; otherwise, $d^*$
is a solution of $A^T d \geq O$ and $c^T d < 0$.
In this way, the only thing that must be established is the existence of a solution
y ∗ of (1.1); to do this, Dax [10] has developed his active-set method using the
Kuhn-Tucker conditions, but the NNLS method itself can be used instead.
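For illustration, the constructive argument can be traced in a few lines of Matlab; the sketch below relies on the built-in lsqnonneg (called nnls in Matlab 5), a dense implementation of Lawson and Hanson's NNLS method, and on an assumed tolerance tol, so it is only a toy version of the sparse implementation pursued in §3.

    % Constructive Farkas' lemma via NNLS (dense sketch; tol is an assumed tolerance).
    A = [1 0 2; 0 1 1]; c = [3; -1]; tol = 1e-10;  % small example data
    y = lsqnonneg(A, c);                           % minimize ||A*y - c||, y >= O
    d = A*y - c;                                   % d* = Ay* - c of corollary 1.3
    if norm(d) <= tol
      disp('Case (1.2): c = A*y for some y >= O')  % y is the certificate
    else
      % Case (1.3): d satisfies A'*d >= O and c'*d < 0, since by theorem 1.2
      % c'*d = (A*y - d)'*d = y'*(A'*d) - d'*d = -norm(d)^2 < 0.
      disp('Case (1.3): d is a separating certificate')
    end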
The algorithms of Dantzig et al. [8], Portugal et al. [20, pp. 627–628] and Dax
[10] can be thought of as variants of Lawson and Hanson’s NNLS method [16,
§23.3]. Since NNLS is the best known among them, it has been chosen in
this paper to illustrate how its sparse implementation can be accomplished as
an example of a situation in which a sequence of closely related LLS subproblems
has to be solved, thus attending to a need expressed in 1993 in the “Future work”
section of [8]:
(Algorithm) P1 should be implemented using sparse matrix methods
(...) However, this would require substantial effort, as our algorithm
uses QR factorization, and would thus be more complicated to update
than the sparse LU factorization
as well as another one by Saunders in the same paper [8, p. 429]:
These results seem significant enough to spark interest in a sparse
NNLS implementation
This need has been expressed again recently (in October, 2002) by Barnes et
al. in the “Conclusions and future research” section of [3], where (1.1) occurs as
a subproblem in their primal-dual active-set LSPD method (closely related to
that of Dax [9]) for linear programming:
An efficient implementation of the LSPD algorithm using QR update
techniques for solving the NNLS problem is crucial in order to make
it competitive
This serves to highlight the importance of our main concern, namely a sparse
implementation of an active-set method for solving the convex NNLS problem.
The rest of this paper is structured as follows. A description (revealing some
interesting geometrical features) of Lawson and Hanson's NNLS method and an
analysis of its convergence for the problem mentioned above are given in §2, along
with several remarks about closely related algorithms. In §3 we indicate how
to maintain a suitable explicit sparse orthogonal factorization by adapting the
sparse direct methodology of Björck [4] and Oreborn [18] in a way similar to
Coleman and Hulbert [6], but without forming the Hessian matrix, which is only
positive semidefinite in this case; we have thus developed a sparse counterpart
of Reichel and Gragg's QRUP package [21]. After the conclusions in §4, an
appendix taken from [15, §B] with the Matlab code for the sparse technique
described in §3 is included for the interested reader to cut-and-paste.
From now on, we shall denote by $\|\cdot\|$ the Euclidean vector norm. I and
O are respectively the identity and the zero matrix of appropriate dimension.
R(·) and N(·) are respectively the range and null space of a given matrix.
The orthogonal projector onto a subspace S is denoted by $P_S$. Subindexing and
construction of matrices are done using Matlab notation.
2. Active-set methods for the convex NNLS problem.
The NNLS method works with complementary solutions, namely those vectors
$[y; w] \in \mathbb{R}^{2m}$ satisfying $w = A^T(Ay - c) \in \mathbb{R}^m$ and $y^T w = 0$. Let us denote the
working set by $B_k \subset 1:m$ (with $|B_k| = m_k$) and its complement by $N_k = (1:m)\setminus B_k$, partitioning accordingly the matrix $A = [A_k, N_k] \in \mathbb{R}^{n\times m}$ and the
vectors $y = [y_B; y_N]$ and $w = [w_B; w_N]$. We include in $B_k$ the indices of the
strictly positive components of $y$; thus $B_k$ denotes our set of free variables and
$N_k$ our set of variables fixed at their bounds (i.e., active constraints with respect
to $y \geq O$). To obtain a complementary solution we set to zero the so-called
nonbasic variables $y_N$ and $w_B$, whereas to obtain the values of the so-called
basic variables $y_B$ and $w_N$ we solve the unconstrained LLS subproblem
$$\text{(2.1)}\qquad \min \|A_k y_B - c\|,$$
where $A_k \in \mathbb{R}^{n\times m_k}$ has full column rank, and then
$$\text{(2.2)}\qquad w_N = N_k^T (A_k y_B - c).$$
Note that if we define $d^{(k)} := A_k y_B - c$ then we have $w_B = A_k^T d^{(k)} = O$, since
$d^{(k)} \in N(A_k^T)$ is the negative residual of the LLS problem (2.1). It is worth
noting that the solution technique employed for (2.1) is crucial for the success of
the algorithm; nevertheless, we shall defer this issue until the beginning of §3 to
clearly differentiate between the description of the method (given below) and its
sparse implementation.
Algorithm 2.1. Convex NNLS method
S0. Initialization. Let k ← 0 and define $B_k = \emptyset$, $N_k = 1:m$, $y^{(k)} = O$ and
$w^{(k)} = -A^T c$.
S1. Check for optimality and pick a constraint to add. Compute $w_p = \min\{w_i : i \in N_k\}$. If $w_p < 0$ then add p to $B_k$ and delete p from $N_k$; otherwise, stop
with $y^{(k)}$ as optimal solution.
S2. Determine the search direction, the maximum step length along it, and the constraints to drop. Compute temporary variables $\bar y_B$ by
solving (2.1). If $\bar y_B > O$ then let $y^{(k+1)} \leftarrow [\bar y_B; O]$ and go to S3; otherwise,
let q be a constraint such that
$$\alpha_k = \frac{y_q}{y_q - \bar y_q} = \min\left\{ \frac{y_i}{y_i - \bar y_i} : (i \in B_k) \text{ and } (\bar y_i \leq 0) \right\}$$
and let $y^{(k+1)} \leftarrow [y_B + \alpha_k (\bar y_B - y_B); O]$; delete from $B_k$ all indices j (q among
them) such that $y_j = 0$, add them to $N_k$, and go back to S2 with k ← k + 1.
S3. Prepare the next iteration. Compute $w_N$ by (2.2) and go back to S1 with
k ← k + 1.
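For concreteness, a minimal dense Matlab sketch of algorithm 2.1 follows; it solves (2.1) with the backslash operator instead of the updated sparse factorization of §3, and the tolerance tol is an assumption not present in the exact-arithmetic description above.

    function y = nnls_sketch(A, c, tol)
    % Dense sketch of algorithm 2.1 (convex NNLS); not the sparse version of section 3.
    [n, m] = size(A);
    Bk = []; Nk = 1:m;                     % S0: empty working set
    y = zeros(m, 1); w = -A'*c;
    while 1
      [wp, j] = min(w(Nk));                % S1: most negative dual variable
      if wp >= -tol, return; end;          % optimal: w >= O on Nk
      p = Nk(j); Bk = sort([Bk p]); Nk(j) = [];
      while 1
        ybar = A(:,Bk) \ c;                % S2: solve the LLS subproblem (2.1)
        if all(ybar > tol)
          y(Bk) = ybar; y(Nk) = 0; break;
        end;
        neg = find(ybar <= tol);           % blocking indices with ybar_i <= 0
        alpha = min(y(Bk(neg)) ./ (y(Bk(neg)) - ybar(neg)));
        y(Bk) = y(Bk) + alpha*(ybar - y(Bk));
        drop = Bk(abs(y(Bk)) <= tol);      % delete all indices with y_j = 0
        y(drop) = 0; Nk = sort([Nk drop]); Bk = setdiff(Bk, drop);
      end;
      d = A(:,Bk)*y(Bk) - c;               % S3: negative residual d^(k)
      w = A'*d;                            % w_N by (2.2); w_B = O up to roundoff
    end;

On the data of the previous sketch, nnls_sketch(A, c, 1e-12) should produce a solution of (1.1), although not necessarily the same one as lsqnonneg when the solution is not unique.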
A complementary solution is feasible if $y_B \geq O$ and $w_N \geq O$. A principal
pivoting algorithm computes successive infeasible complementary solutions until
it reaches a feasible one; such an algorithm is called single pivoting if only
one element changes in the involved sets in every iteration (if not, it
is called block pivoting). Our description of the NNLS method above is not the
same as that given in Portugal et al. [20, pp. 627–628] as a single principal pivoting
algorithm, because although both algorithms add constraints one after another,
NNLS can drop a block of constraints and also imposes that $\bar y_B > O$. The single
principal pivoting algorithm of Portugal et al. [20] allows $\bar y_B \geq O$, which implies
two additional changes in step S2 of algorithm 2.1:
“If $\bar y_B \geq O$” instead of “If $\bar y_B > O$”,
and “$\bar y_i < 0$” instead of “$\bar y_i \leq 0$”.
These differences are important from both a theoretical and a practical point
of view, because in the latter case it is not possible to ensure that $\alpha_k > 0$, and
thus a strict improvement has to be ruled out; moreover, “If $\bar y_B \geq -\epsilon$” instead
of “If $\bar y_B > \epsilon$” would be used in fixed-precision arithmetic, where $\epsilon$ is some
prescribed tolerance, thus allowing small negative elements in $y_B$ that can cause
a change in the behaviour of the algorithm. Following Portugal et al. [20, p.
629], the problem we are dealing with cannot be solved with a block principal
pivoting algorithm, because such algorithms cannot be applied to rank-deficient linear least
squares problems with nonnegative variables. Although single addition is used
in algorithm 2.1, note that we do not rule out the possibility of multiple deletion;
nevertheless, the implementation given in §3 only deals with single deletion, and
multiple deletion is accomplished by a sequence of single deletions.
Now let us analyze the convergence of algorithm 2.1. It is straightforward to
prove that $A_k$ has full column rank (since the constraints added are contrary to
the null-space descent direction $d^{(k)} := A_k y_B - c$ with respect to $c^T x$). It is also
easy to prove¹ that if $A_{k+1} = [A_k, a_p]$ has full column rank and $d^{(k)}$ satisfies
$$A_{k+1}^T d^{(k)} = w_p \cdot e_{m_k+1}, \qquad w_p < 0,$$
then $\tilde y_p$ is strictly positive, where $\tilde y_p$ is the last component of the solution $\tilde y_B$ of
$\min \|A_{k+1} y_B - (-d^{(k)})\|$. Lastly, we also need the following theorem, which is not
proven in [16].
Theorem 2.1. Let $A_{k+1} = [A_k, a_p]$ be a full column rank matrix, $d^{(k)} = -P_{N(A_k^T)}\, c$ and $a_p^T d^{(k)} = w_p < 0$. Then the last component $\tilde y_p$ of the solution $\tilde y_B$
of $\min \|A_{k+1} y_B - (-d^{(k)})\|$ coincides with the last component $\hat y_p$ of the solution
$\hat y_B$ of $\min \|A_{k+1} y_B - c\|$.
Proof. Using Schur complements, there exist block-triangular matrices $L_S$
and $R_S$ such that $L_S A_{k+1}^T A_{k+1} = R_S$, namely
$$\begin{bmatrix} (A_k^T A_k)^{-1} & O \\ -a_p^T A_k^{T\dagger} & 1 \end{bmatrix} \begin{bmatrix} A_k^T A_k & A_k^T a_p \\ a_p^T A_k & a_p^T a_p \end{bmatrix} = \begin{bmatrix} I & A_k^{\dagger} a_p \\ O^T & a_p^T (a_p - A_k A_k^{\dagger} a_p) \end{bmatrix}.$$
Now we can premultiply by $L_S$ both sides of the equalities
$$(A_{k+1}^T A_{k+1})\,\tilde y_B = -A_{k+1}^T d^{(k)} \qquad\text{and}\qquad (A_{k+1}^T A_{k+1})\,\hat y_B = A_{k+1}^T c,$$
leading to the systems
$$R_S\, \tilde y_B = \begin{bmatrix} O \\ -w_p \end{bmatrix} \qquad\text{and}\qquad R_S\, \hat y_B = \begin{bmatrix} A_k^{\dagger} c \\ a_p^T (c - A_k A_k^{\dagger} c) \end{bmatrix}.$$
¹ It is an alternative wording of lemma (23.17) of Lawson and Hanson [16].
From the leftmost system we obtain $\tilde y_p = -w_p / (a_p^T (a_p - A_k A_k^{\dagger} a_p))$, and
from the rightmost one
$$\hat y_p = \frac{a_p^T (c - A_k A_k^{\dagger} c)}{a_p^T (a_p - A_k A_k^{\dagger} a_p)} = \frac{-a_p^T d^{(k)}}{a_p^T (a_p - A_k A_k^{\dagger} a_p)} = \frac{-w_p}{a_p^T (a_p - A_k A_k^{\dagger} a_p)} = \tilde y_p.$$
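A quick numerical check of theorem 2.1 can be run in Matlab on hypothetical random data; pinv is used here only to build the projected direction $d^{(k)}$, and rand is seeded as in the appendix examples.

    % Numerical check of theorem 2.1 on random full-column-rank data.
    rand('state',0); n = 8; mk = 4;
    Ak = rand(n,mk) - 0.5; c = rand(n,1) - 0.5;
    d = -(c - Ak*(pinv(Ak)*c));          % d^(k) = -P_{N(Ak')}*c
    ap = rand(n,1) - 0.5;
    if ap'*d >= 0, ap = -ap; end;        % enforce w_p = ap'*d < 0
    Akp1 = [Ak, ap];
    ytilde = Akp1 \ (-d);                % argmin ||A_{k+1}*y - (-d^(k))||
    yhat   = Akp1 \ c;                   % argmin ||A_{k+1}*y - c||
    disp(abs(ytilde(end) - yhat(end)))   % last components coincide up to roundoff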
From theorem 2.1 we have that if $\bar y_B$ is the solution of $\min \|A_{k+1} y_B - (-d^{(k)})\|$
or the solution of $\min \|A_{k+1} y_B - c\|$, then $\bar y_p > 0$. The termination of the inner
loop S2 follows from the facts that at least the index q is deleted from
$B_k$ in each iteration of this step and that at least one positive component has to be
left in $B_k$, namely that corresponding to $\bar y_p$. Hence, if $B_k$ has t indices when S2
is reached, at most t − 1 iterations can be performed before exiting towards S3.
The finite termination of the whole algorithm follows from the fact that $\|d^{(k)}\|$
has a strictly lower value each time S1 is reached, so $y^{(k)}$ and its associated set
$B_k$ will be different from every one occurring before; since $B_k$ is a subset of 1:m
and there is only a finite number of such subsets, the whole algorithm finishes
after a finite number of iterations.
Another issue not treated in [16] is the occurrence of ties in S1; an illustrative
example can be found in [15, pp. 78–80]. Although this possibility cannot be
ruled out, the strict decrease of $\|d^{(k)}\|$ is not compromised at all. In practice, a
least-index tie-breaker would suffice.
Dax [10] studied the convergence of another active-set method to solve (1.1);
here we only point out that it also has to solve a sequence of least squares
problems (in this case of the form $\min \|A_k \Delta y_B - d^{(k)}\|$), hence it can also be
implemented with the same sparse techniques we are going to introduce in the
next section. The same is true for the variants of the NNLS method given by
Dantzig et al. [8] and Portugal et al. [20, pp. 627–628].
All the algorithms mentioned so far in this section are “build-up” methods [8,
p. 429] in the sense that they avoid the calculation of an initial factorization of
A, which otherwise may well dominate the rest of the computation. Following
Clark and Osborne [7, p. 26], these algorithms are grouped under the name RLSA
and they have the advantage of being self-starting and avoiding a possibly badly
conditioned full system if the subproblems leading to an optimal solution are well
conditioned. By exchanging the roles of y and w, Clark and Osborne developed
the RLSD class of algorithms, which also dispense with the initial factorization of
A; although A was restricted to have full column rank in the original description
of these algorithms [7, and references therein], the convex case considered here
has been addressed in [19, p. 833]. Hence, the sparse counterparts of these papers
could also be handled with the sparse technique of §3.
There exist alternative approaches based upon identifying (1.1) as a quadratic
programming (QP) problem of a rather simple special kind and particularizing
general QP primal techniques. As an example, the celebrated general QP primal
algorithm of Gill and Murray [12] has been particularized to obtain “build-down”
algorithms for (1.1), like algorithm 1 of Lötstedt [17] and LSSOL (LS1 option)
of Gill et al. [13]:
(i) Lötstedt [17, p. 210] updates a dense QR decomposition of a matrix AF
that consists of the columns of A corresponding to his set F of free variables. Moreover, F is selected in such a way that AF has full column rank
initially and such that rank(A) = rank(AF ); thus, our assumption that
rank(A) = n would imply that |F| = n and then his procedure would need
an initial QR factorization of A to choose the initial F.
(ii) The LS1 option of LSSOL maintains the dense QR factorization of A [8,
p. 427]. As pointed out in [13, §2] and [5, §5.2.3], in the full rank case this
method is essentially equivalent to Stoer’s 1971 algorithm, which is not
restricted to the full-column rank case [22, p. 405]. Björck and Oreborn
have developed a sparse implementation for a particularization of Stoer’s
algorithm to simple bounds [4] and non-negative constraints [18], but their
algorithms only consider the positive definite case [6, p. 375] and would
not be efficient in the $n \ll m$ context [8, p. 429].
On the other hand, we can also particularize general QP dual algorithms to
obtain “build-up” methods for (1.1). In fact, Dantzig et al. pointed out in [8,
p. 421, and references therein] that Lawson and Hanson’s NNLS method can be
obtained in this way from that of van de Panne and Whinston, and NNLS has
been generalized to deal with simple bounds [16, pp. 279–283] or general linear
inequality constraints [23]. The sparse case for these generalizations, not treated
in those works, could be addressed as described in §3.
A numerical comparison between a “build-up” (Dantzig et al.’s P1) and a
“build-down” (Gill et al.’s LSSOL) method for the dense version of the convex
NNLS problem corresponding to the smallest Netlib linear programs is included
in [8, §5.2], where a clear advantage was obtained by the former in both iteration
counts and run times, and so we do not treat this issue further here. What we
are going to show now is how the sparse methodology (not the algorithm!) of
Björck [4] and Oreborn [18] to implement a “build-down” method in the full-column rank case can also be adapted to implement a “build-up” one in the
$n \ll m$ full-row rank context.
3. A sparse implementation.
In order to develop a sparse counterpart of Reichel and Gragg's QRUP package
[21], let us show how to adapt the sparse direct methodology of Björck [4] and
Oreborn [18] so as to be able to apply the sparse NNLS algorithm with a “short-and-fat” matrix A, i.e., with more columns than rows. They proposed an active-set
algorithm for the sparse least squares problem
$$\text{minimize } \tfrac{1}{2}\, y^T C y + d^T y, \quad\text{subject to } l \leq y \leq u, \qquad y \in \mathbb{R}^m,$$
with C positive definite. In our problem
$$C = A^T A, \qquad d = -A^T c, \qquad\text{and}\qquad l_i = 0,\ u_i = +\infty \ \text{ for all } i \in 1:m,$$
but C is only positive semidefinite; hence, to maintain a sparse QR factorization of
the working set matrix we proceed in a way similar to Coleman and Hulbert
[6], but without forming the Hessian matrix C.
Given a fixed matrix $A \in \mathbb{R}^{n\times m}$ of full row rank with m ≥ n, we update the
triangular factor $R_k$ of the sparse QR factorization of $A_k \in \mathbb{R}^{n\times m_k}$ with $m_k \leq n$,
where $A_k$ is a linearly independent subset of the columns of A, generated
from $A_{k-1}$ by adding or dropping a column. Furthermore, A is not restricted to
be triangular as in [4]. Note that in a sparse Matlab NNLS implementation
due to Adlers [1, §5.7], $R_k$ is not updated but recomputed from scratch every
iteration by calling a sparse Cholesky routine after explicitly forming the matrix
of the normal equations, the reason being that Adlers' purpose was to compare
single and block pivoting in the strictly convex case. To stay as close to the
implementation as possible, we shall use $L_k := R_k^T$ to explain implementation
details and reserve $R_k$ for more theoretical explanations.
Since we are going to do without the orthogonal factor for sparsity reasons, the solution of the LLS subproblem (2.1) is obtained by the CSNE method
(see, for example, [5, p. 126]) as suggested by Björck [4, §5], which amounts to
performing a step of iterative refinement after having found the solution $\bar y_B$ of
the seminormal equations
$$R_k^T R_k\, y_B = A_k^T A_k\, y_B = A_k^T c,$$
and then computing $d^{(k)} = A_k \bar y_B - c$ and $w_N = N_k^T d^{(k)}$ as in (2.2). This contrasts with
the dense updatable QR-based approach used by Lötstedt [17, p. 210]. The
reason why a direct approach is chosen to solve (2.1) is that only one isolated
convex NNLS problem has to be solved in the Phase I application described
in §1. However, a sequence of closely related convex NNLS problems has to
be solved in the primal-dual active-set application, and an iterative approach
would perhaps be more efficient in this case, as reported by Lötstedt [17] and
recommended by Dax [9]. Nevertheless, Dantzig et al. [8, §5.3] and Barnes et al.
[3] solve each convex NNLS problem with a dense NNLS direct subroutine, thus
avoiding slow convergence when dealing with ill-conditioned problems.
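In Matlab terms, a minimal sketch of the CSNE solve of (2.1) reads as follows; here the factor $L_k = R_k^T$ is obtained by a sparse qr call purely for illustration, whereas in §3 it is maintained by updating.

    % CSNE solve of min ||Ak*yB - c|| with one refinement step (a sketch).
    % Assumes a sparse full-column-rank Ak and a vector c.
    Lk = qr(Ak,0)';                      % lower triangular, Lk*Lk' = Ak'*Ak
    yB = Lk' \ (Lk \ (Ak'*c));           % seminormal equations
    r  = c - Ak*yB;                      % residual
    yB = yB + Lk' \ (Lk \ (Ak'*r));      % one step of iterative refinement
    d  = Ak*yB - c;                      % negative residual d^(k) of (2.1)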
We have developed a Matlab toolbox based upon the following subsections
[15, §3.3]. The proof of the result (given in §3.1) needed to set up the static
data structure, as well as the correctness of the techniques to drop (§3.2) and
add (§3.3) a constraint, are essentially due to Björck [4] (see also [6, 5]). The
reason why these details are included is that we want to emphasize the fact (not
included in [4, 5, 6]) that a different choice in the order in which Givens rotations
are applied in the classical dense case (see, e.g., [14, §12.5.2]) leads us to an efficient
sparse updating scheme. Moreover, in our sparse scheme $A_k$ is not restricted to
be a column-echelon submatrix (as $R_{F_k}$ is in [4, 5]), nor does $A_k^T A_k$ have to be formed
(as in [6]). To describe these techniques we use the notation of Coleman and
Hulbert [6], where $\ell_{\vee p}$ denotes the bottom part of L(:, p) and $\ell_{\wedge p}$ the top part.
The notation for the Givens rotations is given in [14, p. 216].
3.1. Setting up the static structure.
First we determine a permutation matrix P such that $P^T A^T A P$ has a Cholesky
factor as sparse as possible. In Matlab we can resort to:
>> p=colmmd(A); A=A(:,p); [ct,h,pa,po,R]=symbfact(A,'col'); spy(R')
Note that we do not form $(AP)^T AP$ and that both colmmd and symbfact perform correctly even though $A^T A \in \mathbb{R}^{m\times m}$ is only positive semidefinite; furthermore, we permute the columns of A a priori in such a way that the natural
order coincides with the computed order. This static structure $R^T$ has enough
space to accommodate any $L_k$, so it constitutes an upper bound for the memory
needed [4, Theorem 4.1]. For this to be true, it is crucial that, instead of labeling
each row and column of both $A_k^T A_k$ and $L_k$ with consecutive numbers, we label
them with the numbers of the row and column of $A^T A$ from which they come. As
an example, if m = 6 and $B_k = [2, 4, 5]$, then the matrix $L_k$ will be stored in
the triangular $m_k \times m_k$ matrix $R(B_k, B_k)^T$, constructed by intersecting the rows
and columns of $R^T$ whose indices are in $B_k$:
[6 × 6 lower-triangular skeleton of $R^T$: × marks the predicted nonzeros, and ⊗ marks the entries lying at the intersections of the rows and columns indexed by $B_k = [2, 4, 5]$, which form $L_k = R(B_k, B_k)^T$.]
Therefore, the column ordering of A dictates that of Ak .
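The setup just described can be exercised with the following sketch (on newer Matlab versions colamd replaces colmmd; the final check is exactly the upper-bound property of [4, Theorem 4.1]):

    % Sketch: the symbolic skeleton R(Bk,Bk)' bounds the factor of Ak'*Ak.
    p = colmmd(A); A = A(:,p);                 % fill-reducing column order
    [ct,h,pa,po,Rsym] = symbfact(A,'col');     % structure of the factor of A'*A
    Bk = [2 4 5];                               % example working set
    [Q,Rk] = qr(full(A(:,Bk)),0);               % dense check of the true factor
    Lk = Rk';                                   % lower-triangular factor
    outside = full(Rsym(Bk,Bk)') == 0;          % positions outside the skeleton
    disp(max(abs(Lk(outside))))                 % should be 0: no unpredicted fill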
3.2. Dropping a constraint.
Let $B_k = [B_1, B_2, \ldots, B_i, \ldots, B_{m_k}]$ be the current working set and let us drop
the constraint $q := B_i$ to get $B_{k+1}$. Then, to obtain the factor $L_{k+1}$ of $A_{k+1}^T A_{k+1}$
given the factor $L_k$ of $A_k^T A_k$, we first delete the row q of $L_k$ to get a lower
Hessenberg matrix $H_k$ such that $H_k H_k^T = A_{k+1}^T A_{k+1}$:
$$L_k = \begin{bmatrix} L_{11} & & \\ \ell_{\wedge q}^T & \ell_{qq} & \\ L_{21} & \ell_{\vee q} & L_{22} \end{bmatrix}, \qquad H_k = \begin{bmatrix} L_{11} & & \\ L_{21} & \ell_{\vee q} & L_{22} \end{bmatrix}.$$
To obtain $L_{k+1}$ from $H_k$ we could proceed [14, p. 608] as Gill and Murray
[11] did in the dense case, annihilating the diagonal of $L_{22}$ by applying Givens
rotations to the column pairs $(q, B_{i+1}), (B_{i+1}, B_{i+2}), \ldots, (B_{m_k-1}, B_{m_k})$ and
then dropping the last column, namely
$$G_k := G(q, B_{i+1})\, G(B_{i+1}, B_{i+2}) \cdots G(B_{m_k-1}, B_{m_k}), \qquad H_k G_k = \begin{bmatrix} L_{11} & & \\ L_{21} & \tilde L_{22} & O \end{bmatrix}, \qquad L_{k+1} = \begin{bmatrix} L_{11} & \\ L_{21} & \tilde L_{22} \end{bmatrix}.$$
However, using this procedure $\tilde L_{22}$ would be stored in the columns from q to
$B_{m_k-1}$; nevertheless, $\tilde L_{22}$ must be stored in the same place set up (and possibly
only partially used) for $L_{22}$, i.e., in the columns from $B_{i+1}$ to $B_{m_k}$. This can
be done by annihilating $\ell_{\vee q}$ with Givens rotations applied to the column pairs
$(B_{i+1}, q), (B_{i+2}, q), \ldots, (B_{m_k}, q)$, namely
$$G_k := G(B_{i+1}, q)\, G(B_{i+2}, q) \cdots G(B_{m_k}, q), \qquad H_k G_k = \begin{bmatrix} L_{11} & & \\ L_{21} & O & \tilde L_{22} \end{bmatrix}, \qquad L_{k+1} = \begin{bmatrix} L_{11} & \\ L_{21} & \tilde L_{22} \end{bmatrix}.$$
As $\ell_{\vee q}$ is sparse, only some of the rotations above have to be performed.
Summing up, if we allocate a dense intermediate column vector as column
m + 1 of our static structure to act as an accumulator, we first initialize it with
the column q of $H_k$, restore the static structure of L(q, :) and L(:, q) (to be used
later if needed), and start the annihilation process of the elements of L(:, m + 1)
from top to bottom, rotating with the corresponding column of L and taking
into account the intermediate fill-in in the accumulator:
Algorithm 3.1. Dropping a constraint
L(B_k, m+1) ← [O; ℓ_{∨q}]
while there are nonzero elements in L(:, m+1) do
  j ← index of first nonzero element of L(:, m+1)
  G ← Givens(L(j, j), L(j, m+1))              {G ∈ R^{2×2}}
  L(j:m, [j m+1]) ← L(j:m, [j m+1]) · G       {Sparse product}
end while
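A dense Matlab illustration of algorithm 3.1 may help; this is only a sketch of the rotation pattern (the sparse version with skeleton handling is the toolbox routine SqrDel of the appendix), and it calls the Givens routine listed there.

    function L = drop_sketch(Lk, i)
    % Dense sketch of algorithm 3.1: given the mk-by-mk lower-triangular
    % factor Lk of Ak'*Ak, return the factor after row/column i is dropped.
    acc = Lk(:,i); acc(i) = [];        % accumulator: column q of Hk
    L = Lk; L(i,:) = []; L(:,i) = [];  % remaining lower-triangular part of Hk
    m = size(L,1);
    for j = i:m                         % annihilate acc from top to bottom
      if acc(j) ~= 0
        [c,s,r] = Givens(L(j,j), acc(j));       % appendix routine
        colj = L(j:m,j);
        L(j:m,j) = c*colj - s*acc(j:m);         % [colj acc]*[c s; -s c]
        acc(j:m) = s*colj + c*acc(j:m);
        L(j,j) = r; acc(j) = 0;                 % exact zeros by construction
      end;
    end;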
3.3. Adding a constraint.
Adding a constraint to the working set means adding a new row $a_p^T$ to the
matrix $A_k^T$. As the computational effort is minimized when the new row is added
at the bottom of the matrix, we perform this addition in two stages: first
we form the working set
$$\tilde B_{k+1} = [B_1, B_2, \ldots, B_{i-1}, B_{i+1}, \ldots, B_{m_{k+1}}, B_i], \qquad p := B_i,$$
and then we reorder it to obtain
$$B_{k+1} = [B_1, B_2, \ldots, B_{i-1}, B_i, B_{i+1}, \ldots, B_{m_{k+1}}],$$
because the order is crucial to maintain the static structure (cf. §3.1).
Let $A_k^T A_k = L_k L_k^T$ and $\tilde A_{k+1} = [A_k, a_p]$; denoting by $[\ell^T, \sigma]$ the new row
to add at the bottom of $L_k$, there exists an orthogonal matrix $\tilde V_{k+1}$
such that
$$\tilde A_{k+1}^T = \begin{bmatrix} A_k^T \\ a_p^T \end{bmatrix} = \begin{bmatrix} L_k & O & O \\ \ell^T & \sigma & 0 \end{bmatrix} \tilde V_{k+1} =: [\tilde L_{k+1}\ \ O]\, \tilde V_{k+1},$$
so then
$$\tilde A_{k+1}^T \tilde A_{k+1} = \begin{bmatrix} A_k^T A_k & O \\ O & 0 \end{bmatrix} + \begin{bmatrix} O & A_k^T a_p \\ a_p^T A_k & a_p^T a_p \end{bmatrix}.$$
However, it also holds that
$$\tilde L_{k+1} \tilde L_{k+1}^T = \begin{bmatrix} L_k L_k^T & O \\ O & 0 \end{bmatrix} + \begin{bmatrix} O & L_k \ell \\ \ell^T L_k^T & \ell^T \ell + \sigma^2 \end{bmatrix},$$
and from a comparison of the two equations above we conclude that
$$L_k \ell = A_k^T a_p \qquad\text{and}\qquad \sigma = \sqrt{a_p^T a_p - \ell^T \ell}.$$
Note that this technique, due to Gill and Murray [11], avoids the formation of the
column p of the matrix $A^T A$ used in [6]. Moreover, it is more accurate to proceed as
in [5, §3.3.3] and consider $\ell := L_k^T \delta$, so that CSNE can be applied to $\min \|A_k \delta - a_p\|$
and then $\sigma = \|A_k \delta - a_p\|$ computed.
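In Matlab terms, a sketch of this computation (mirroring the CSNE block of the toolbox routine SqrIns in the appendix) reads:

    % Sketch: new row [ell', sigma] of the factor when the column ap is added.
    % Assumes a full-column-rank Ak, the vector ap, and the factor Lk = Rk'.
    delta = Lk' \ (Lk \ (Ak'*ap));        % seminormal equations of min ||Ak*d - ap||
    r     = ap - Ak*delta;                % residual
    delta = delta + Lk' \ (Lk \ (Ak'*r)); % one refinement step (CSNE)
    ell   = Lk'*delta;                    % so that Lk*ell = Ak'*ap
    sigma = norm(ap - Ak*delta);          % sigma = ||Ak*delta - ap||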
Now we partition $\tilde L_{k+1}$ in blocks, grouping on one side the rows and columns
with index less than p and on the other those with index greater than p:
$$\tilde L_{k+1} = \begin{bmatrix} L_{11} & & \\ L_{21} & L_{22} & \\ \ell_{\wedge}^T & \ell_{\vee}^T & \sigma \end{bmatrix} \qquad\Longrightarrow\qquad H_{k+1} = \begin{bmatrix} L_{11} & & \\ \ell_{\wedge}^T & \sigma & \ell_{\vee}^T \\ L_{21} & O & L_{22} \end{bmatrix}.$$
Reordering in accordance with $B_{k+1}$, we have obtained a matrix $H_{k+1}$ that
satisfies $H_{k+1} H_{k+1}^T = A_{k+1}^T A_{k+1}$ and is lower triangular except for a horizontal
spike in row p. So, in order to get $L_{k+1}$ from $H_{k+1}$, we only have to apply
Givens rotations to the column pairs $(p, B_{m_{k+1}}), (p, B_{m_{k+1}-1}), \ldots, (p, B_{i+1})$ to
annihilate the elements of $\ell_{\vee}^T$ from right to left using column p, thus
$$L_{k+1} = H_{k+1}\, G(p, B_{m_{k+1}})\, G(p, B_{m_{k+1}-1}) \cdots G(p, B_{i+1}).$$
Furthermore, as $\ell_{\vee}^T$ is sparse, we only have to perform some of the rotations
described above. Note that in the dense case [14, pp. 609–610] the usual way to
proceed is
$$\tilde L_{k+1} = \begin{bmatrix} L_{11} & & \\ L_{21} & L_{22} & \\ \ell_{\wedge}^T & \ell_{\vee}^T & \sigma \end{bmatrix} \qquad\Longrightarrow\qquad H_{k+1} = \begin{bmatrix} L_{11} & & & \\ \ell_{\wedge}^T & \ell_{\vee}^T & \sigma & \\ L_{21} & L_{22} & O & \end{bmatrix},$$
$$L_{k+1} = H_{k+1}\, G(B_{m_{k+1}-1}, B_{m_{k+1}})\, G(B_{m_{k+1}-2}, B_{m_{k+1}-1}) \cdots G(p, B_{i+1}),$$
but, as in §3.2, the $\tilde L_{22}$ matrix obtained after the rotations would not occupy the
same place as $L_{22}$.
Summing up, once ℓ and σ have been calculated, we first place $\ell_{\wedge}^T$ at the
beginning of row p of $L_{k+1}$. Next, we allocate as column 0 of our static
structure a dense intermediate column vector, initially zero, to act as an accumulator for column p of $L_{k+1}$, and as row 0 a sparse vector whose first element
is initialized to σ and whose last part is initialized to $\ell_{\vee}^T$. Then we start the
annihilation process of the elements of L(0, p+1:m) from right to left, rotating column 0 with the corresponding column of L and taking into account
the intermediate fill-in in the accumulator; the sparsity of row 0 is exploited to
know which columns have to be rotated. Finally we move the contents of column 0, taking care of the static structure of L(:, p), since Barlow has proven in
[2] that non-structural nonzero elements (i.e., those that do not fit into the predicted structure due to rounding errors) can be omitted without compromising
the numerical accuracy.
Algorithm 3.2. Adding a constraint
L(p, [0 B_k]) ← [0, ℓ_{∧}^T, O^T]
L(0, [0 B_k]) ← [σ, O^T, ℓ_{∨}^T]
while there are nonzero elements in L(0, p+1:m) do
  j ← index of last nonzero element of L(0, p+1:m)
  G ← Givens(L(0, 0), L(0, j))                {G ∈ R^{2×2}}
  L([0 j:m], [0 j]) ← L([0 j:m], [0 j]) · G   {Sparse product}
end while
L([0 p], 0) ← L([p 0], 0)
L(:, p) ← L(:, 0)                             {Sparse assignment}
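As in §3.2, a dense sketch may clarify the rotation pattern of algorithm 3.2 (SqrIns in the appendix is the sparse version; ellA, ellV and sigma are the quantities computed above, and i is the position of p within $B_{k+1}$):

    function L1 = add_sketch(L, ellA, ellV, sigma, i)
    % Dense sketch of algorithm 3.2: insert row/column p at position i of the
    % lower-triangular factor L, given ell = [ellA; ellV] and sigma.
    m = size(L,1); H = zeros(m+1);
    H([1:i-1 i+1:m+1],[1:i-1 i+1:m+1]) = L;   % old factor around the new slot
    H(i,1:i-1)   = ellA';                      % top part of ell, already in place
    H(i,i)       = sigma;
    H(i,i+1:m+1) = ellV';                      % horizontal spike to annihilate
    for j = m+1:-1:i+1                         % from right to left, as in the text
      if H(i,j) ~= 0
        [c,s,r] = Givens(H(i,i), H(i,j));      % appendix routine
        H(:,[i j]) = H(:,[i j])*[c s; -s c];   % rotate the column pair (p, B_j)
        H(i,i) = r; H(i,j) = 0;                % exact zeros by construction
      end;
    end;
    L1 = H;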
4. Conclusions.
Reichel and Gragg's QRUP package of the early 1990s [21] can be used to solve a
sequence of dense, closely related linear least squares subproblems. The aim of
this paper was to develop its sparse counterpart in a Matlab computational
environment; we have therefore analyzed how to maintain an explicit sparse
orthogonal factorization in order to solve the sequence of sparse linear least
squares subproblems needed to implement an active-set method for the
nonnegative least squares problem with a matrix having more columns than rows.
These active-set methods constitute constructive proofs of Farkas' lemma.
REFERENCES
1. M. Adlers, Sparse Least Squares Problems with Box Constraints, Licentiate thesis,
Department of Mathematics, Linköping University, Sweden, 1998.
2. J. L. Barlow, On the use of structural zeros in orthogonal factorization, SIAM J.
Sci. Statist. Comput., 11 (1990), pp. 600–601.
3. E. Barnes, V. Chen, B. Gopalakrishnan, and E. L. Johnson, A least-squares primal-dual algorithm for solving linear programming problems, Oper. Res. Lett., 30 (October 2002), pp. 289–294.
4. Å. Björck, A direct method for sparse least squares problems with lower and upper
bounds, Numer. Math., 54 (1988), pp. 19–32.
5. Å. Björck, Numerical Methods for Least Squares Problems, SIAM Publications,
Philadelphia, USA, 1996.
6. T. F. Coleman and L. A. Hulbert, A direct active set algorithm for large sparse
quadratic programs with simple lower bounds, Math. Progr., 45 (1989), pp. 373–
406.
7. D. I. Clark and M. R. Osborne, On linear restricted and interval least-squares
problems, IMA J. Numer. Anal., 8 (1988), pp. 23–36.
8. G. B. Dantzig, S. A. Leichner, and J. W. Davis, A strictly improving linear programming phase I algorithm, Ann. Oper. Res., 46/47 (1993), pp. 409–430.
9. A. Dax, Linear programming via least squares, Linear Algebra Appl., 111 (1988),
pp. 313–324.
10. A. Dax, An elementary proof of Farkas’ lemma, SIAM Rev., 39 (1997), pp. 503–507.
11. P. E. Gill and W. Murray, A numerically stable form of the simplex algorithm,
Linear Algebra Appl., 7 (1973), pp. 99–138.
12. P. E. Gill and W. Murray, Numerically stable methods for quadratic programming,
Math. Prog., 14 (1978), pp. 349–372.
13. P. E. Gill, S. J. Hammarling, W. Murray, M. A. Saunders, and M. H. Wright,
User's guide for LSSOL (version 1.0): A FORTRAN package for constrained linear least-squares and convex quadratic programming, Technical report,
Department of Operations Research, Stanford University, USA, January 1986.
14. G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd edition, The Johns
Hopkins University Press, Baltimore, MD, USA, 1996.
15. P. Guerrero-García, Range-Space Methods for Sparse Linear Programs (in Spanish),
Ph.D. thesis, Dept. Applied Mathematics, University of Málaga, Spain, July 2002.
16. C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, revised republication in 1995 by SIAM of the original work published by Prentice-Hall, Englewood
Cliffs, NJ, USA, 1974.
17. P. Lötstedt, Solving the minimal least squares problem subject to bounds on the
variables, BIT Numer. Math., 24 (1984), pp. 206–224.
18. U. Oreborn, A Direct Method for Sparse Nonnegative Least Squares Problems, Licentiate thesis, Department of Mathematics, Linköping University, Sweden, 1986.
19. M. R. Osborne, Degeneracy: Resolve or avoid?, J. Opl. Res. Soc., 43:8 (1992),
pp. 829–835.
20. L. F. Portugal, J. J. Júdice, and L. N. Vicente, A comparison of block pivoting
and interior-point algorithms for linear least squares problems with nonnegative
variables, Math. Comp., 63 (1994), pp. 625–643.
21. L. Reichel and W. B. Gragg, Algorithm 686: FORTRAN subroutines for updating
the QR decomposition, ACM Trans. Math. Soft., 16:4 (1990), pp. 369–377.
22. J. Stoer, On the numerical solution of constrained least squares problems, SIAM
J. Numer. Anal., 8 (1971), pp. 382–411.
23. J. Stoer, A dual algorithm for solving degenerate linearly constrained linear least
squares problems, J. Num. Alg. App., 1 (1992), pp. 103–131.
Appendix: Matlab code.
% Modified Sparse Least Squares Toolbox v1.2
% ==========================================
%
% Author:   Pablo Guerrero Garcia ([email protected])
% Version:  1.2
% Date:     December 2001
%
% Purpose:  Sparse maintenance of the Cholesky factor of
%           A(:,ACT)'*A(:,ACT), where A is in R^{nxm} and ACT is a
%           subset of 1:m. Other global variables (besides A and ACT)
%           are NAC (the complement of ACT), b in R^m and c in R^n.
%           The Cholesky factor of A(:,ACT)'*A(:,ACT) fits in the
%           skeleton predicted by L(ACT,ACT); ACT and NAC are kept
%           sorted. L, a sparse lower triangular matrix in R^{mx(m+1)},
%           determines an upper bound on the memory to be used; it
%           marks potential nonzeros with NaNs, and its last column is
%           used as an accumulator. Able to cope with intermediate NaNs.
%
% Contents: SqrIni    Initialization of the data structures
%           SqrIns    Add a constraint to ACT
%           SqrDel    Remove a constraint from ACT
%           SqrRei    Refactorize A(:,ACT)
%           Givens    Perform a Givens rotation
%           EjemMSLS  Example of use of the toolbox
function SqrIni(modo,scal)
%SqrIni(modo,scal)
%   Initializes a skeleton L, a lower triangular matrix of NaNs in
%   R^{mxm}, such that L(ACT,ACT) is an upper bound for the Cholesky
%   factor of A(:,ACT)'*A(:,ACT) whatever ACT may be. If modo=='dmperm',
%   the rows and columns of A (and b and c, consistently) are reordered
%   so that A is in LBTF and the skeleton of that factor is exactly
%   predicted in L(ACT,ACT); if modo=='colmmd', the columns of A (and b,
%   consistently) are reordered so that the a priori skeleton in
%   L(ACT,ACT) is an upper bound for the Cholesky factor. Memory is
%   also allocated in column m+1 of L so that it acts as an accumulator
%   with at most n nonzero elements; this is needed to implement the
%   Givens rotations efficiently. Scaling techniques are available (by
%   default, scal=='noscale'), such as making A(:,k) and c have unit
%   Euclidean norm (scal=='txtbook'), or the one used in CPLEX
%   (normalize A(:,k) in the infinity norm, and then A(k,:), if
%   scal=='cplex92').
%
%   See also: dmperm, colmmd, symbfact
%
%   Author:   Pablo Guerrero Garcia ([email protected])
%   Version:  1.3
%   Date:     December 2001
global ecuaciones L ACT NAC A b c
if nargin<1
  modo = 'noperm'; scal = 'noscale';
elseif nargin < 2 | isempty(scal)
  scal = 'noscale';
end;
[n,m] = size(A); % b <- R^m ; c <- R^n
switch lower(modo)
case 'dmperm'
  % [p,q] = dmperm(DeSym(A')); A = A(q,p);
  [p,q] = dmperm(A'); A = A(q,p);
  b = b(p); c = c(q); % ecuaciones = logical(ecuaciones(p));
case {'colmmd','colperm','colamd'}
  % p = feval(lower(modo),DeSym(A)); A = A(:,p);
  p = feval(lower(modo),A); A = A(:,p);
  b = b(p); % ecuaciones = logical(ecuaciones(p));
case 'saunpm'
  % [Saunders, 1972]: Too dense
  for k = 1:size(A,2)
    priyults(:,k)=[min(find(A(:,k)));max(find(A(:,k)))];
  end;
  [foo,p] = sort(priyults(1,:));
  A = A(:,p); b = b(p); % ecuaciones = logical(ecuaciones(p));
case {'symmmd','symrcm','symamd'}
  % p = feval(lower(modo),DeSym(A'*A)); A = A(:,p);
  p = feval(lower(modo),A'*A); A = A(:,p);
  b = b(p); % ecuaciones = logical(ecuaciones(p));
case 'noperm'
  disp('Permutes neither rows nor columns');
otherwise
  error('Unknown ordering mode in SqrIni')
end;
switch lower(scal)
case 'txtbook'
  disp('Scaling in Euclidean norm ...');
  % denom = norm(DeSym(c),2);
  denom = norm(c,2);
  if denom > eps
    for k = find(c)
      A(k,:) = A(k,:)/denom; c(k) = c(k)/denom;
    end;
  end;
  for k = 1:size(A,2)
    % denom = norm(DeSym(A(:,k)),2);
    denom = norm(A(:,k),2);
    if denom > eps
      A(:,k) = A(:,k)/denom; b(k) = b(k)/denom;
    end;
  end;
case 'cplex92'
  % [Bixby, 1992, p. 271]
  disp('Scaling in infinity norm as CPLEX ...');
  for k = 1:size(A,1)
    % denom = norm(DeSym(A(k,:)),inf);
    denom = norm(A(k,:),inf);
    if denom > eps
      A(k,:) = A(k,:)/denom; c(k) = c(k)/denom;
    end;
  end;
  for k = 1:size(A,2)
    % denom = norm(DeSym(A(:,k)),inf);
    denom = norm(A(:,k),inf);
    if denom > eps
      A(:,k) = A(:,k)/denom; b(k) = b(k)/denom;
    end;
  end;
case 'noscale'
  disp('Scales neither rows nor columns');
otherwise
  error('Unknown scaling mode in SqrIni')
end;
ACT = []; NAC = 1:m;
% [count,h,parent,post,L] = symbfact(DeSym(A),'col'); L = NaN*L';
[count,h,parent,post,L] = symbfact(A,'col'); L = NaN*L';
L(:,m+1) = spalloc(m,1,n);
%
function [pkap,zeta]=SqrIns(p,jota,zeta,pkap)
%[pkap,zeta]=SqrIns(p,jota,zeta,pkap)
%   If L(ACT,ACT) is the Cholesky factor of A(:,ACT)'*A(:,ACT) and
%   NAC(jota) = p, p is removed from NAC and inserted in order into
%   ACT; then L(ACT,ACT) is updated. Both p and jota may be [];
%   moreover, jota may be absent. Uses L(:,end) as a dense
%   accumulator (sccero) for efficiency when rotating. Optional I/O
%   variables are zeta and pkap, respectively the solution and the
%   residual of min ||A(:,ACT)*z-A(:,p)||_2. Able to cope with
%   intermediate NaNs.
%
%   See also: Givens
%
%   Author:    Pablo Guerrero Garcia ([email protected])
%   Version:   1.2a
%   Date:      October 2004
%   Algorithm: [Gill&Murray,73],[Bjorck,88],[Coleman&Hulbert,89]
global L ACT NAC A
% NAC(jota) = p
if isempty(p)
  p = NAC(jota);
elseif nargin == 1 | isempty(jota)
  jota = find(NAC == p);
else
  error('Wrong arguments in SqrIns');
end;
% Compute ele and sigma using CSNE; L(ACT,ACT) may contain NaNs
[esqf,esqc] = find(isnan(L(ACT,ACT)));
for k = 1:length(esqf), L(ACT(esqf(k)),ACT(esqc(k))) = 0; end;
if nargin<3
  zeta = L(ACT,ACT)'\(L(ACT,ACT)\(A(:,ACT)'*A(:,p)));
  pkap = A(:,p)-A(:,ACT)*zeta;
  % zeta = zeta + L(ACT,ACT)'\(L(ACT,ACT)\(A(:,ACT)'*pkap));
  % pkap = A(:,p)-A(:,ACT)*zeta;
  incz = L(ACT,ACT)'\(L(ACT,ACT)\(A(:,ACT)'*pkap)); zeta = zeta + incz;
  pkap = pkap-A(:,ACT)*incz;
end;
ele = L(ACT,ACT)'*zeta;
sigma = norm(pkap);
% If sigma == 0 then A(:,p) is in R(A(:,ACT))
for k = 1:length(esqf), L(ACT(esqf(k)),ACT(esqc(k))) = NaN; end;
% Update ACT and NAC
ACT1 = ACT(find(ACT < p)); la1 = length(ACT1); ACT2 = ACT(la1+1:end);
NAC(jota) = []; ACT = [ACT1 p ACT2];
% Subcolumn of L(:,end) as dense accumulator for sccero
esqf = find(L(:,end)); L(esqf,end) = 0;
% The top part of ele goes into its place ...
if la1>0
  esqf = find(L(p,ACT1) & ele(1:la1)');
  if ~isempty(esqf), L(p,ACT1(esqf)) = ele(esqf)'; end;
end;
% ... respecting the sparse skeleton of L(p,ACT1)
irts = find(ele(la1+1:end)');
% ACT2(irts) holds the columns to rotate
for j = irts(end:-1:1)
  [c,s,sigma] = Givens(sigma,ele(la1+j)); ele(la1+j) = 0;
  esqf = find(L(ACT2,ACT2(j)));
  % I believe it is not necessary to do esqf(find(esqf<j)) = [];
  % Among the positions to rotate, set the NaNs to 0
  L(ACT2(esqf(isnan(L(ACT2(esqf),ACT2(j))))),ACT2(j)) = 0;
  % Perform the rotation normally
  L(ACT2(esqf),[end ACT2(j)]) = L(ACT2(esqf),[end ACT2(j)])*[c s; -s c];
  % Restore unfilled NaNs: does nothing if the skeleton was well predicted
  L(ACT2(SetDfOrd(esqf',find(L(ACT2,ACT2(j)))')'),ACT2(j))=NaN;
end;
L(p,p) = sigma;
% Better than directly doing L(ACT2,p) = sccero is ...
esqf = find(L(ACT2,p) & L(ACT2,end));
L(ACT2(esqf),p) = L(ACT2(esqf),end);
% ... since it respects the sparse skeleton of L(ACT2,p)
function w = SetDfOrd(u,v)
%SETDFORD Does the same as SETDIFF but exploits the fact that the
%   sets are sorted and have no repeated elements.
%
%   Syntax:  w = SetDfOrd(u,v)
%   Example: SetDfOrd([3 6 8 12 16],[1 3 7 12 14])
%
w=[];
while ~isempty(u)
  if isempty(v), w=[w,u]; return; end;
  if u(1)<v(1)
    w=[w,u(1)]; u=u(2:end);
  else
    if u(1)==v(1), u=u(2:end); end;
    v=v(2:end);
  end;
end;
function SqrDel(q,indi)
%SqrDel(q,indi)
%   If L(ACT,ACT) is the Cholesky factor of A(:,ACT)'*A(:,ACT) and
%   ACT(indi) = q, q is removed from ACT and inserted in order into
%   NAC; then L(ACT,ACT) is updated. Both q and indi may be [];
%   moreover, indi may be absent. Uses L(:,end) as a dense
%   accumulator (scmmas1) for efficiency when rotating. Able to
%   cope with intermediate NaNs.
%
%   See also: Givens
%
%   Author:    Pablo Guerrero Garcia ([email protected])
%   Version:   1.2
%   Date:      December 2001
%   Algorithm: [Bjorck,88],[Coleman&Hulbert,89]
global L ACT NAC
% ACT(indi) = q
if isempty(q)
  q = ACT(indi);
elseif nargin == 1 | isempty(indi)
  indi = find(ACT == q);
end;
% Update ACT and NAC
ACT1 = ACT(1:indi-1); ACT2 = ACT(indi+1:end);
NAC1 = NAC(find(NAC < q)); NAC2 = NAC(length(NAC1)+1:end);
ACT(indi) = []; NAC = [NAC1 q NAC2];
% Subcolumn of L(:,end) as dense accumulator for scmmas1
esqf = find(L(:,end)); L(esqf,end) = 0;
esqf = find(L(ACT2,q));
% Handling of intermediate NaNs: does nothing if the skeleton was well predicted
esqf = esqf(~isnan(L(ACT2(esqf),q)));
L(ACT2(esqf),end) = L(ACT2(esqf),q);
% Restore the skeleton of L(q,:) and L(:,q)
esqf = find(L(q,ACT1)); L(q,ACT1(esqf)) = NaN; L(q,q) = NaN;
esqf = find(L(ACT2,q)); L(ACT2(esqf),q) = NaN;
% rtcs holds the columns to rotate
rtcs = find(L(:,end));
while ~isempty(rtcs)
  j = rtcs(1);
  esqf = find(L(ACT2,j));
  % Handling of intermediate NaNs: does nothing if the skeleton was well predicted
  esqf = esqf(~isnan(L(ACT2(esqf),j)));
  [c,s,L(j,j)] = Givens(L(j,j),L(j,end)); L(j,end) = 0;
  L(ACT2(esqf(2:end)),[j end]) = L(ACT2(esqf(2:end)),[j end])*[c s; -s c];
  rtcs = find(L(:,end));
end;
function SqrRei(modo)
%SqrRei(modo)
%   Recomputes the Cholesky factor of A(:,ACT)'*A(:,ACT) and stores
%   it in L(ACT,ACT). If modo=='exacto' the skeleton of that factor
%   is exactly predicted in L(ACT,ACT); if modo=='cotsup' the a
%   priori skeleton in L(ACT,ACT) is an upper bound for the
%   Cholesky factor, and must be preserved after the recomputation.
%
%   See also: QR with a sparse argument
%
%   Author:   Pablo Guerrero Garcia ([email protected])
%   Version:  1.2
%   Date:     December 2001
%
global L ACT NAC A
[esqf,esqc] = find(L(:,1:end-1));
L(:,1:end-1) = sparse(esqf,esqc,NaN); L(:,end) = 0;
switch lower(modo)
case 'exacto'
  L(ACT,ACT) = qr(A(:,ACT),0)';
case 'cotsup'
  [esqf,esqc] = find(L(ACT,ACT));
  if length(ACT)==1
    L(ACT,ACT) = norm(A(:,ACT)); % scalar case: the factor is the column norm
  else % sparse qr does not work well with |ACT|=1 !!
    L(ACT,ACT) = qr(A(:,ACT),0)';
  end;
  for k = 1:length(esqf)
    if L(ACT(esqf(k)),ACT(esqc(k))) == 0
      L(ACT(esqf(k)),ACT(esqc(k))) = NaN;
    end;
  end;
otherwise
  error('Unknown mode in SqrRei')
end;
function [c,s,r] = Givens(a,b)
%GIVENS Implementation of algorithm 5.1.3 of [GolLoa,96]:216,
%
%       [ c s; -s c ]' [ a ; b ] = [ r ; 0 ]
%
%   or equivalently
%
%       [ a , b ] [ c s; -s c ] = [ r , 0 ]
%
%   modified so that it returns r as in [Bjorck,96]:54.
%
%   Author:   Pablo Guerrero Garcia ([email protected])
%   Version:  1.2
%   Date:     December 2001
%
if b==0
c = 1; s = 0; r = a;
% elseif DeSym(abs(b)) > DeSym(abs(a))
elseif abs(b) > abs(a)
cotangente = -a/b; tanalamenos1 = sqrt(1+cotangente^2);
s = 1/tanalamenos1; c = s*cotangente; r = -b*tanalamenos1;
else
cotangente = -b/a; tanalamenos1 = sqrt(1+cotangente^2);
c = 1/tanalamenos1; s = c*cotangente; r = a*tanalamenos1;
end;
function y=DeSym(x)
y=double(x);
% Change to y=x; when the symbolic toolbox is not used
% Example from [GeLiNg,88]:104 with adjusted values.
% Illustrates the use of the routines SqrIni, SqrIns,
% SqrDel, SqrRei and Givens of the Modified Sparse
% Least Squares Toolbox v1.2.
%
% Author:   Pablo Guerrero Garcia ([email protected])
% Version:  1.2
% Date:     December 2001
%
global L ACT NAC A b c
rand('state',0);
A=sparse([1 1 2 2 3 3 4 4 4 5 5 6 6 7 7 8 8],...
[1 6 2 4 1 3 2 4 6 5 6 1 6 3 4 2 4],randperm(17))';
b=zeros(8,1); c=zeros(6,1); SqrIni('dmperm');
[fil,col]=find(A(:,[1 2 3 4 6 7]));
A(:,[1 2 3 4 6 7])=sparse(fil,col,[-3 5 3 4 sqrt(41) -sqrt(11) ...
850*sqrt(34) sqrt(533) sqrt(451)*sqrt(533) 4 -5 4 -5]);
full(A), full(L)
% First we activate constraint 7 and then constraint 2 ...
SqrIns(7); ACT, NAC, full(L(ACT,ACT)), full(L)
SqrIns(2); ACT, NAC, full(L(ACT,ACT)), full(L)
% ... we check by refactorizing ...
ACT=[2 7]; NAC=[1 3:6 8];
SqrRei('exacto'); full(L(ACT,ACT)), full(L)
% ... then we add constraint 6
SqrIns(6); ACT, NAC,
% ACT=[2 6 7]; NAC=[1 3:5 8]; SqrRei('exacto');
full(L(ACT,ACT)), full(L)
% Now we remove constraint 7 ...
SqrDel(7); ACT, NAC,
% ACT=[2 6]; NAC=[1 3:5 7:8]; SqrRei('exacto');
full(L(ACT,ACT)), full(L)
% ... and next we add constraint 3
SqrIns(3); ACT, NAC,
% ACT=[2 3 6]; NAC=[1 4:5 7:8]; SqrRei('exacto');
full(L(ACT,ACT)), full(L)
% ... also constraint 1 ...
SqrIns(1); ACT, NAC,
% ACT=[1 2 3 6]; NAC=[4:5 7:8]; SqrRei('exacto');
full(L(ACT,ACT)), full(L)
% ... and then constraint 4 ...
SqrIns(4); ACT, NAC,
% ACT=[1 2 3 4 6]; NAC=[5 7:8]; SqrRei('exacto');
full(L(ACT,ACT)), full(L)
% Finally we remove constraint 2
SqrDel(2); ACT, NAC,
% ACT=[1 3 4 6]; NAC=[2 5 7:8]; SqrRei('exacto');
full(L(ACT,ACT)), full(L)
% Since Matlab does not free memory when a 0 is assigned in a sparse
% matrix previously allocated (with the NaNs), restoring the skeleton
% does not trigger the memory allocator and we can use sparse operators
% without problems because the corresponding NaNs are already at 0.
%
% Final refactorization check
ACT=[1 3 4 6]; NAC=[2 5 7:8]; SqrRei('exacto'); full(L(ACT,ACT))