JOURNAL OF SCIENCE OF HNUE
Mathematical and Physical Sci., 2014, Vol. 59, No. 7, pp. 44-51
ABOVE REGULARIZATION FOR CONSTRAINED GENERALIZED
COMPLEMENTARITY PROBLEMS
Nguyen Thi Thuy Hoa1 and Nguyen Buong2
1Informatics Center, Hanoi University of Home Affairs
2Institute of Information Technology, Vietnam Academy of Science and Technology
Received September 12, 2014. Accepted October 23, 2014.
Contact Nguyen Thi Thuy Hoa, e-mail address: nguyenhoanvhn@gmail.com
Abstract. The purpose of this paper is to use the Tikhonov regularization method
to solve a constrained generalized complementarity problem, that is, to find an
element x˜ ∈ C : g(x˜) ≤ 0, h(x˜) ≤ 0, ⟨g(x˜), h(x˜)⟩Em = 0, where C is a closed
convex subset of En, g(x) and h(x) are two continuous functions from a Euclidean
space En to Em, and ⟨., .⟩En denotes the scalar product of En. Convergence rates
for the regularized solutions are also established under conditions that are standard
in the theory of nonlinear ill-posed problems. We also give a numerical example as
an illustration.
Keywords: Convex set, Tikhonov regularization, continuous function.
1. Introduction
Let g(x) and h(x) be two continuous functions from a Euclidean space En to Em,
where the scalar product and norm of En are denoted by ⟨., .⟩En and ∥.∥En, respectively.
The problem of finding an element x˜ ∈ En such that
g(x˜) ≤ 0, h(x˜) ≤ 0, ⟨g(x˜), h(x˜)⟩Em = 0, (1.1)
where the symbol y = (y1, ..., ym) ≤ 0 means that yi ≤ 0, i = 1, ...,m, is called
a generalized complementarity problem (GCP). In the case that m = n, g(x) = −x,
and h(x) = −F(x) for a continuous function F on En, (1.1) becomes the classical
nonlinear complementarity problem (NCP): find an element x ∈ En satisfying
x ≥ 0, F(x) ≥ 0, ⟨x, F(x)⟩En = 0,
which has attracted much attention due to its various applications (see [6, 8, 10]).
There exist several methods for solving the NCP [3, 4, 7]. All of these methods
proceed by solving an equivalent minimization problem or a system of equations. The
particular class of methods considered here is that of so-called regularization methods,
which are designed to handle ill-posed problems. An ill-posed problem may be difficult
to solve, since small errors in the computations can lead to a totally wrong solution.
The Tikhonov regularization scheme in [5, 9] for the NCP consists in solving a
sequence of complementarity problems:
xε ≥ 0, Fε(xε) ≥ 0, ⟨xε, Fε(xε)⟩En = 0,
where Fε(x) = F(x) + εx and ε is a small regularization parameter. In [12] the
regularized solution xε is defined on the basis of H(ε, z) = 0 ⇐⇒ ε = 0, x ∈ S0, where
S0 denotes the solution set of the NCP, z := (ε, x), H(ε, z) := (ε, G(ε, z))T, Gi(ε, x) :=
ϕ(xi, Fε,i(x)), i = 1, ..., n, and Fε,i is the ith component of Fε. The convergence of the
method is established only for P0-functions F. Moreover, estimating the convergence
rate of the regularized solutions is still an open question.
Recently, for the GCP (1.1), without the P0-property for g and h, we [2] presented
another approach to regularizing (1.1) and gave a convergence rate estimate for our
method. In this paper, using this idea, we consider the following constrained GCP
(CGCP): find an element
x˜ ∈ C : g(x˜) ≤ 0, h(x˜) ≤ 0, ⟨g(x˜), h(x˜)⟩Em = 0, (1.2)
where C is a closed convex subset of En. Put
φi(x) = max{0, gi(x)}, i = 1, ...,m,
φm+i(x) = max{0, hi(x)}, i = 1, ...,m.
Clearly,
Si := {x ∈ En : gi(x) ≤ 0} = {x ∈ En : φi(x) = 0}, i = 1, ...,m,
Sm+i := {x ∈ En : hi(x) ≤ 0} = {x ∈ En : φm+i(x) = 0}, i = 1, ...,m,
the functions φj are also continuous and nonnegative on En, and
φj(y) = 0 ∀y ∈ ∩_{i=1}^{N} Si, j = 1, ..., N, where N := 2m.
Moreover,
∩_{j=1}^{N} Sj = {x ∈ En : g(x) ≤ 0, h(x) ≤ 0}.
45
Nguyen Thi Thuy Hoa and Nguyen Buong
Evidently,
⟨g(x˜), h(x˜)⟩Em = Σ_{i=1}^{m} gi(x˜)hi(x˜) = 0 ⟺ gi(x˜)hi(x˜) = 0, i = 1, ...,m,
since each product gi(x˜)hi(x˜) is nonnegative whenever g(x˜) ≤ 0 and h(x˜) ≤ 0.
Therefore, we consider the functions
fi(x) = gi(x)hi(x)
and set
S0 = {x ∈ En : fi(x) = 0, i = 1, ...,m}. (1.3)
Then, CGCP (1.2) is equivalent to finding an element x˜ belonging to C ∩ (∩_{i=0}^{N} Si)
such that
φj(x˜) = min_{x ∈ C ∩ (∩_{i=0}^{N} Si)} φj(x), j = 1, ..., N. (1.4)
In order to solve this vector optimization problem, we consider the following
unconstrained scalar optimization problem: find an element xα ∈ En such that
Fα(xα) = min_{x ∈ En} Fα(x), (1.5)
where Fα(x) is defined by
Fα(x) = ∥x − PC(x)∥²En + ∥F(x)∥²Em + Σ_{j=1}^{N} α^{µj} φj(x) + α∥x − x+∥²En,
0 ≤ µ1 < µj < µj+1 < 1, j = 2, ..., N − 1,
F(x) = (f1(x), ..., fm(x))T, (1.6)
where PC(x) is the metric projection of x ∈ En on C and x+ is some element in En.
It is well-known [14] that problem (1.5) has a solution xα for each α > 0.
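For readers who want a computational view of (1.5)-(1.6), the following sketch (in Python) assembles the functional Fα from user-supplied g, h and a projection onto C. The interface and all identifiers (make_F_alpha, proj_C, x_plus, mus) are illustrative assumptions only; they are not part of the method's formal statement.

```python
import numpy as np

def make_F_alpha(g, h, proj_C, x_plus, mus, alpha):
    """Build the Tikhonov functional F_alpha of (1.5)-(1.6).

    g, h    : continuous maps E^n -> E^m, returned as 1-D numpy arrays
    proj_C  : metric projection onto the closed convex set C
    x_plus  : the reference element x^+ in E^n
    mus     : exponents mu_1, ..., mu_N (N = 2m) with 0 <= mu_1 < ... < mu_N < 1
    alpha   : regularization parameter alpha > 0
    """
    mus = np.asarray(mus, dtype=float)

    def F_alpha(x):
        x = np.asarray(x, dtype=float)
        gx, hx = g(x), h(x)
        # phi_i = max{0, g_i},  phi_{m+i} = max{0, h_i}
        phi = np.concatenate([np.maximum(0.0, gx), np.maximum(0.0, hx)])
        f = gx * hx                      # f_i = g_i * h_i, so F = (f_1, ..., f_m)
        return (np.linalg.norm(x - proj_C(x))**2
                + np.linalg.norm(f)**2
                + np.sum(alpha**mus * phi)
                + alpha * np.linalg.norm(x - x_plus)**2)

    return F_alpha
```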
2. Main results
We have the following results.
Theorem 2.1. Let αk → 0 as k → ∞. Then every sequence {xk}, where xk := xαk is a
solution of (1.5) with α replaced by αk, has a convergent subsequence. The limit of every
convergent subsequence is an x+-C minimal norm solution (x+-CMNS). If, in addition,
the x+-CMNS x˜ is unique, then
lim_{k→∞} xk = x˜.
Proof. From (1.5) it follows that
∥xk − PC(xk)∥² + ∥F(xk)∥²Em + Σ_{j=1}^{N} αk^{µj} φj(xk) + αk∥xk − x+∥²En
≤ ∥y − PC(y)∥² + ∥F(y)∥²Em + Σ_{j=1}^{N} αk^{µj} φj(y) + αk∥y − x+∥²En, (2.1)
for each fixed element y ∈ En. Taking y ∈ C ∩ (∩_{j=0}^{N} Sj), we have y − PC(y) = 0,
F(y) = 0 and φj(xk) ≥ φj(y) = 0, j = 1, ..., N. Then (2.1) implies
∥xk − x+∥En ≤ ∥y − x+∥En. (2.2)
Consequently, {xk} is bounded. Let {xl} ⊂ {xk} be a subsequence such that xl → x as
l → ∞. We shall prove that x is a solution of (1.2). Indeed, from (2.1) and φj(x) ≥ 0
∀x ∈ En, we obtain
0 ≤ ∥xl − PC(xl)∥², ∥F(xl)∥²Em ≤ αl∥y − x+∥²En.
Letting l → ∞ in the last inequality, the continuity of PC and F gives x ∈ C and
F(x) = 0, i.e., x ∈ C ∩ S0. Now, we prove that x ∈ S1. For any element y ∈ C ∩ S0,
from (2.1) and φj(x) ≥ 0 ∀x ∈ En, j = 1, ..., N, we can write
φ1(xl) + αl^{1−µ1}∥xl − x+∥²En ≤ φ1(y) + Σ_{j=2}^{N} αl^{µj−µ1} φj(y) + αl^{1−µ1}∥y − x+∥²En.
After passing to the limit l → ∞ in the last inequality we obtain φ1(x) ≤ φ1(y) ∀y ∈ C ∩ S0,
i.e., x is a minimizer of φ1 on C ∩ S0. Since C ∩ S0 ∩ S1 ̸= ∅ and φ1 is nonnegative and
vanishes on S1, this minimum is zero; hence φ1(x) = 0, so x is also a global minimizer of
φ1 on En and x ∈ S1. Further, we prove that x ∈ S2. For any y ∈ C ∩ S0 ∩ S1, from (2.1)
and φj(x) ≥ 0 ∀x ∈ En, j = 1, ..., N, we have
φ2(xl) + αl^{1−µ2}∥xl − x+∥²En ≤ φ2(y) + Σ_{j=3}^{N} αl^{µj−µ2} φj(y) + αl^{1−µ2}∥y − x+∥²En.
By a similar argument, we obtain x ∈ S2, and then x ∈ Sj, j = 3, ..., N − 1. Consequently,
x ∈ C ∩ (∩_{j=0}^{N−1} Sj). Finally, we have to prove that x is a solution of (1.2). As above, for
any y ∈ C ∩ (∩_{j=0}^{N−1} Sj), from (2.1) we deduce that φN(x) ≤ φN(y). Hence,
φN(x) = min_{y ∈ C ∩ (∩_{j=0}^{N−1} Sj)} φN(y), i.e., x ∈ SN. The x+-CMNS property of x
follows from (2.2). The theorem is now proven.
Theorem 2.2. Assume that the following conditions hold:
(i) F is differentiable,
(ii) there exists L > 0 such that
∥F′(x˜) − F′(z)∥Em ≤ L∥x˜ − z∥En
for all z in some neighbourhood of x˜;
(iii) there exists ω ∈ Em such that x˜ − x+ = F′(x˜)*ω;
(iv) L∥ω∥Em < 1.
Then, we have
∥xk − x˜∥En = O(√αk).
Proof. Using (1.5) and (1.6) with x = x˜, we obtain
∥xk − PC(xk)∥² + ∥F(xk)∥²Em + Σ_{j=1}^{N} αk^{µj} φj(xk) + αk∥xk − x˜∥²En
≤ αk[∥x˜ − x+∥²En − ∥xk − x+∥²En + ∥xk − x˜∥²En]. (2.3)
Since x˜ ∈ ∩_{j=0}^{N} Sj, we have φj(xk) ≥ φj(x˜) = 0, j = 1, ..., N. From this fact,
condition (iii) and (2.3) we have
∥F(xk)∥²Em + αk∥xk − x˜∥²En ≤ 2αk⟨ω, F′(x˜)(x˜ − xk)⟩Em. (2.4)
Note that condition (ii), together with F(x˜) = 0, implies
F(xk) = F′(x˜)(xk − x˜) + rk (2.5)
with
∥rk∥Em ≤ (L/2)∥xk − x˜∥²En. (2.6)
Combining (2.4)-(2.6) leads to
∥F(xk)∥²Em + αk∥xk − x˜∥²En ≤ 2αk∥ω∥Em∥F(xk)∥Em + αkL∥ω∥Em∥xk − x˜∥²En.
Thus,
∥F(xk)∥²Em + αk(1 − L∥ω∥Em)∥xk − x˜∥²En ≤ 2αk∥ω∥Em∥F(xk)∥Em. (2.7)
Therefore, ∥F(xk)∥Em = O(αk). Then, from (2.7) it follows that
∥xk − x˜∥En = O(√αk).
The theorem is proven.
Remarks:
(i) In the case that the given functions fi and φj are smooth, the functional Fα will have
the same property if, instead of φj in (1.5)-(1.6), we use φj² (see [14]).
(ii) In practice, the set C is chosen such that C ∩ (∩_{j=0}^{N} Sj) contains only one solution
of (1.2). Then the whole sequence {xk} converges to that solution as k → ∞.
(iii) The regularized solution xk of (1.5) can be approximated by the conjugate gradient
method in [14], the trust-region method in [15] or [13]; a simple outer-loop sketch is given below.
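As an informal illustration of remark (iii), the sketch below generates the sequence {xk} by an outer loop over αk = (1 + k)^{−1}, minimizing each Fαk with an off-the-shelf derivative-free solver (Nelder-Mead from scipy) and warm-starting every step with the previous result. The solver choice and the tolerances are our own assumptions; this is not the conjugate-gradient or trust-region scheme of [13-15].

```python
import numpy as np
from scipy.optimize import minimize

def regularization_path(make_F_alpha_for, x_start, k_max=100):
    """Outer Tikhonov loop: x^k approximately minimizes F_{alpha_k}, alpha_k = 1/(1 + k).

    make_F_alpha_for : callable alpha -> F_alpha (for instance, built as in the sketch above)
    x_start          : starting point used for k = 1
    """
    x = np.asarray(x_start, dtype=float)
    for k in range(1, k_max + 1):
        alpha_k = 1.0 / (1.0 + k)
        F_alpha = make_F_alpha_for(alpha_k)
        # warm start: the result of step k initializes step k + 1
        res = minimize(F_alpha, x, method="Nelder-Mead",
                       options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000})
        x = res.x
    return x
```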
3. Numerical results
In this section, in order to show the performance of the above method, we present
some numerical results for the case:
g(x1, x2) = (x1² + x2² − 25, −x1 + 3),
h(x1, x2) = ((x1 − 3)² + x2² − 4, x1 − 4), (3.1)
and the set C = C1 ∩ C2, defined as follows:
C1 = {(x1, x2) ∈ E2 : x1 − 4x2 + 1 ≤ 0},
C2 = {(x1, x2) ∈ E2 : 2x1 − x2 − 2 ≥ 0}. (3.2)
Clearly, problem (1.1) has four solutions x1 = (3,−2), x2 = (3, 2), x3 = (4, √3) and
x4 = (4,−√3); among them, x2 = (3, 2) and x3 = (4, √3) are solutions of both (1.1) and
(1.2), and x2 is the minimal norm solution. Here, we choose µ1 = 1/16, µ2 = 1/8,
µ3 = 1/4 and µ4 = 1/2, with x+ = (0, 0). The parameter is αk = (1 + k)^{−1}, and the
starting point for the case k = 1 is x0 = (20, 45) /∈ C. We minimize the following
Tikhonov functional:
Fαk(x) = ∥x − PC(x)∥²E2 + ∥F(x)∥²E2 + Σ_{j=1}^{4} αk^{µj} φ˜j(x) + αk∥x − x+∥²E2,
where PC(x) = (1/2)[PC1(x) + PC2(x)] (see [1]), PCi(x) = x if x ∈ Ci, and PCi(x) = Pli(x)
if x /∈ Ci, where Pli is the projection onto the line li = {(x1, x2) : aix1 + bix2 = ci} and
ai, bi, ci are the coefficients defined in (3.2). The functions φ˜j are given by
φ˜1(x) = φ1(x) = x1² + x2² − 25 if x1² + x2² − 25 > 0,
                = 0 if x1² + x2² − 25 ≤ 0, (3.3)
φ˜2(x) = φ2²(x) = (x1 − 3)² for x1 < 3 and any x2,
                 = 0 for x1 ≥ 3 and any x2, (3.4)
φ˜3(x) = φ3(x) = (x1 − 3)² + x2² − 4 if (x1 − 3)² + x2² − 4 > 0,
                = 0 if (x1 − 3)² + x2² − 4 ≤ 0, (3.5)
φ˜4(x) = φ4²(x) = (x1 − 4)² for x1 > 4 and any x2,
                 = 0 for x1 ≤ 4 and any x2. (3.6)
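For concreteness, the data of this example can be encoded as follows (a sketch under the same assumptions as the earlier ones; the half-plane projection formula and all identifiers are our own illustrative choices). Note that φ˜2 and φ˜4 are the squared penalties, as in (3.4) and (3.6).

```python
import numpy as np

def g(x):   # g(x1, x2) of (3.1)
    return np.array([x[0]**2 + x[1]**2 - 25.0, -x[0] + 3.0])

def h(x):   # h(x1, x2) of (3.1)
    return np.array([(x[0] - 3.0)**2 + x[1]**2 - 4.0, x[0] - 4.0])

def F(x):   # F(x) = (f1(x), f2(x)) with f_i = g_i * h_i
    return g(x) * h(x)

def phi_tilde(x):   # (phi~_1, ..., phi~_4); phi~_2 and phi~_4 are squared, per (3.3)-(3.6)
    gx, hx = g(x), h(x)
    return np.array([max(0.0, gx[0]),
                     max(0.0, gx[1])**2,
                     max(0.0, hx[0]),
                     max(0.0, hx[1])**2])

# C1: x1 - 4*x2 + 1 <= 0 and C2: 2*x1 - x2 - 2 >= 0, both written as <a, x> <= c.
HALF_PLANES = [(np.array([1.0, -4.0]), -1.0),    # C1
               (np.array([-2.0, 1.0]), -2.0)]    # C2, multiplied by -1

def proj_half_plane(x, a, c):
    """Project x onto {y : <a, y> <= c}; if x violates the constraint, drop it onto the line."""
    viol = a @ x - c
    return x if viol <= 0.0 else x - viol * a / (a @ a)

def proj_C(x):   # P_C(x) = (1/2)[P_C1(x) + P_C2(x)], as in the text
    x = np.asarray(x, dtype=float)
    return 0.5 * sum(proj_half_plane(x, a, c) for a, c in HALF_PLANES)
```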
We use the standard program presented in [11] as a procedure, and the final result of
each step is the starting point for the next. The following table of numerical results
indicates that the method works quite well in practice.
k x1 x2
1 1.0199206 0.2686316
10 1.7190803 1.5334557
50 2.7953346 1.9894142
100 2.9615843 1.9996203
200 2.2984413 1.9999334
400 2.9908316 1.9999745
800 2.9940854 1.9999906
1000 2.9966234 1.9999960
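Under the assumptions of the earlier sketches, the computation just described could be driven as below; this only mirrors the warm-started procedure of this section and is not the exact program of [11]. It reuses F, phi_tilde, proj_C and regularization_path from the sketches above.

```python
import numpy as np

def make_F_alpha_for(alpha):
    """F_{alpha_k} of this section, built from the example data sketched above (x^+ = (0, 0))."""
    mus = np.array([1/16, 1/8, 1/4, 1/2])
    def F_alpha(x):
        x = np.asarray(x, dtype=float)
        return (np.linalg.norm(x - proj_C(x))**2
                + np.linalg.norm(F(x))**2
                + np.sum(alpha**mus * phi_tilde(x))
                + alpha * np.linalg.norm(x)**2)
    return F_alpha

# Warm-started outer loop from the section-2 sketch; the table above reports x^k approaching (3, 2).
x_final = regularization_path(make_F_alpha_for, x_start=[20.0, 45.0], k_max=1000)
```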
Acknowledgements. This work was supported by the National Foundation for Science and
Technology Development, Vietnam.
REFERENCES
[1] R. E. Bruck Jr., 1973. Properties of fixed point sets of nonexpansive mappings in
Banach spaces. Trans. Amer. Math. Soc., 179, pp. 251-262.
[2] Ng. Buong and Ng. Th. Th. Hoa, 2009. Tikhonov regularization for nonlinear
complementarity problems. Intern. J. of Math. Analysis, 3(34), pp. 1683-1691.
[3] Sh.-q Du, 2012. A nonsmooth Levenberg-Marquardt method for generalized
complementarity problems. Journal of Information and Computing Science, 7(4),
pp. 267-271.
[4] F. Facchinei and J. Soares, 1997. A new merit function for nonlinear
complementarity problems and related algorithm. SIAM Journal on Optimization,
7, pp. 225-247.
[5] F. Facchinei and C. Kanzow, 1997. Beyond monotonicity in regularization methods
for nonlinear complementarity problems. Hamburger Beiträge zur Angewandten
Mathematik, Reihe A, Preprint 125.
[6] M. C. Ferris and J.-S. Pang, 1997. Engineering and economic applications of
complementarity problems. SIAM Review, 39, pp. 669-713.
[7] A. Fischer, 1997. Solution of monotone complementarity problems with locally
Lipschitzian functions. Mathematical Programming, pp. 513-532.
[8] M. Fukushima, 1996. Merit functions for variational inequality and
complementarity problems, in Nonlinear Optimization and Applications. G.
Di Pillo and F. Giannessi eds., Plenum Publishing Corporation, New York, pp.
155-170.
[9] M. S. Gowda and R. Sznajder, 1998. On the limiting behavior of the trajectory of
regularized solutions of a P0 complementarity problem, in M. Fukushima and L. Qi,
eds., Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing
Methods, Kluwer Academic Publishers, pp. 371-379.
[10] P. T. Harker, 1995. Complementarity problem, in Handbook of Global Optimization.
R. Horst and P. Pardalos, eds, Kluwer Academic Publishers, Boston, pp. 271-338.
[11] J. H. Mathews, 1987. Numerical Methods for Computer Science, Engineering, and
Mathematics. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
[12] D. Sun, 1997. A regularization Newton method for solving nonlinear
complementarity problems. Applied Mathematics Report AMR 97/15, School
of Mathematics, The University of New South Wales, Sydney.
[13] A. Vardi, 1985. A trust region algorithm for equality constrained minimization:
convergence properties and implementation. SIAM J. Numer. Anal. 4, pp. 575-591.
[14] F. P. Vasil'ev, 1980. Numerical Methods for Solving Extremal Problems. Nauka,
Moscow (in Russian).
[15] Y. Yuan, 2003. A class of globally convergent conjugate gradient methods. Science
in China, 46, pp. 253-261.