Proof. Solving (8) from Lemma 2 for $k$, the MINP method stops at the iteration $\hat{k}$ given by (9).
Now, we turn our attention to estimating the probability of carrying out a backtracking operation to the entire region $\Theta$ at any $k$th iteration such that $0 < k < \hat{k}$.
Remark 2
Assume Condition 1 holds. Each $k$th promising region is therefore given by
$$\sigma(k) = \{z \in \mathbb{R}^n \times \mathbb{Z}^m : l_\sigma \preceq z \preceq u_\sigma\}, \qquad (10)$$
where $l_\sigma = (l_\sigma^{(1)},\ldots,l_\sigma^{(n)};\bar{l}_\sigma^{(n+1)},\ldots,\bar{l}_\sigma^{(n+m)})^t$ and $u_\sigma = (u_\sigma^{(1)},\ldots,u_\sigma^{(n)};\bar{u}_\sigma^{(n+1)},\ldots,\bar{u}_\sigma^{(n+m)})^t$ are, respectively, the lower and upper mixed integer bounds of the $k$th promising region.
Proof. Since Condition 1 is always satisfied, at each $k$th iteration the current promising region $\sigma(k)$ is correctly partitioned, by Propositions 1 and 2, which allows us to describe $\sigma(k)$ exactly by (10).
Note that the Partitioning procedure in Figure 18 clearly defines each generated $k$th promising region $\sigma(k)$.
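Since (10) describes each promising region entirely through its lower and upper mixed integer bound vectors, such a region can be stored as a pair of bound vectors together with a componentwise membership test. The following Python sketch shows one minimal representation under that reading; the class name, field names, and example values are illustrative and not taken from the paper.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PromisingRegion:
    """Mixed integer box sigma(k) = {z : l_sigma <= z <= u_sigma}, as in (10).

    The first n entries of each bound vector are real, the last m are integer.
    Names are illustrative only; they are not identifiers from the paper.
    """
    lower: np.ndarray  # l_sigma, length n + m
    upper: np.ndarray  # u_sigma, length n + m
    n: int             # number of real components
    m: int             # number of integer components

    def contains(self, z) -> bool:
        """Componentwise membership test l_sigma <= z <= u_sigma."""
        z = np.asarray(z, dtype=float)
        integer_part_ok = np.all(np.equal(np.mod(z[self.n:], 1), 0))
        return bool(integer_part_ok
                    and np.all(self.lower <= z)
                    and np.all(z <= self.upper))

# Example: a region in R^2 x Z^1 and two candidate points.
region = PromisingRegion(lower=np.array([0.0, -1.0, 0.0]),
                         upper=np.array([2.5, 1.0, 5.0]),
                         n=2, m=1)
print(region.contains([1.2, 0.3, 3]))    # True
print(region.contains([1.2, 0.3, 3.5]))  # False: integer component violated
```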
Theorem 2
Suppose that there is no stopping rule for the MINP method. If $\nu$ and $N$ are both positive quantities, then at any $k$th iteration there exists a positive probability that the algorithm goes forward in depth, obtaining a new depth vector $d(k+1)$ that precedes $d(k)$, i.e., $\Pr\{d(k+1) \prec d(k)\} > 0$ for all $k \in \mathbb{N}$.
Proof. By Propositions 1 and 2, the event $\{d(k+1) \prec d(k)\}$ clearly takes place with a positive probability.
The next result shows how Theorem 2 allows us to affirm that the MINP method does not run indefinitely in search of a global solution; namely, it terminates with a positive probability.
Corollary 1
Suppose Condition 1 holds for globally solving Problem 1. Besides, assume that $\nu$ and $N$ are both positive quantities, and that the expected mixed integer vector $\varepsilon$ is fixed with nonnegative real and integer components. Then the MINP method will meet the stopping rule, i.e., $d(k) \prec \varepsilon$, in a finite number of successive iterations with a positive probability.
Proof. By Theorem 2, there exists a positive probability that the MINP method reaches the stopping rule $d(k) \prec \varepsilon$, after which it carries out no further iterations, thus halting the algorithm.
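As a rough illustration of the stopping rule in Corollary 1, the sketch below iterates an abstract step function until the depth vector precedes the tolerance vector $\varepsilon$. The componentwise reading of $\prec$, the function names, and the toy step are assumptions of this sketch, not definitions taken from the paper.

```python
import numpy as np

def precedes(d, eps) -> bool:
    """Illustrative componentwise reading of d(k) < eps.

    The paper defines the order on depth vectors elsewhere; the strict
    componentwise comparison used here is only an assumption of this sketch.
    """
    return bool(np.all(np.asarray(d) < np.asarray(eps)))

def run_until_stop(step, d0, eps, max_iter=10_000):
    """Iterate a user-supplied step() that returns the next depth vector,
    stopping once the stopping rule d(k) < eps is met (cf. Corollary 1)."""
    d = np.asarray(d0, dtype=float)
    for k in range(max_iter):
        if precedes(d, eps):
            return k, d        # stopping rule met at iteration k
        d = np.asarray(step(d), dtype=float)
    return max_iter, d         # safeguard: no stop within max_iter

# Toy usage: a step that halves every component of the depth vector.
k_stop, d_stop = run_until_stop(lambda d: 0.5 * d, d0=[4.0, 6.0], eps=[0.1, 0.1])
print(k_stop, d_stop)          # stops after a finite number of iterations
```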
Further results on the behavior of the MINP method will be presented in the next section.
d. ON THE PERFORMANCE MEASUREMENT
We now consider the issue of measuring the performance of the MINP method once a set of solutions has been collected from $r$ independent runs of the method under the same conditions.
For this reason, we here present the main results on the proposed performance measure, which was introduced by Brea [25] for comparing the Game of Patterns [9] against two implementations of Genetic Algorithms, when the latter are used for globally solving constrained nonlinear problems in $\mathbb{R}^n$.
Proposition 3
Let $p$ be a mixed integer bound constrained problem in $\mathbb{R}^n \times \mathbb{Z}^m$; let $\eta^{(\ell)}(p,n,m) > 0$ be the number of times that the objective function has been evaluated during the $\ell$th run of the algorithm when searching for a solution to Problem $p$; and let $\lambda^{(\ell)}(p,n,m) \geq 0$ be the distance between the true point (solution) of Problem $p$ and the best solution achieved by the MINP method at the $\ell$th replication. Then, for each $\ell \in \mathbb{N}^+$,
$$q^{(\ell)}(p,n,m) = \frac{1}{1+\eta^{(\ell)}(p,n,m)\,\lambda^{(\ell)}(p,n,m)}, \qquad (11)$$
lies in $(0,1]$.
Proof. It suffices to inspect (11): since $\eta^{(\ell)}(p,n,m) > 0$ and $\lambda^{(\ell)}(p,n,m) \geq 0$, the denominator in (11) is at least 1, so each $\ell$th algorithm run, also called a replication, yields a sample $q^{(\ell)}(p,n,m) \in (0,1]$.
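A per-replication quality sample as in (11) is straightforward to compute once $\eta^{(\ell)}$ and $\lambda^{(\ell)}$ are known. The following minimal Python sketch encodes (11); the function name and argument names are illustrative, not identifiers from the paper.

```python
def quality(eta: float, lam: float) -> float:
    """Quality sample q = 1 / (1 + eta * lam), as in (11).

    eta: number of objective function evaluations in the run (> 0).
    lam: distance from the best solution found to the true solution (>= 0).
    """
    if eta <= 0 or lam < 0:
        raise ValueError("eta must be positive and lam nonnegative")
    return 1.0 / (1.0 + eta * lam)

# A run that lands exactly on the true solution (lam = 0) scores q = 1,
# while many evaluations far from the true solution push q toward 0.
print(quality(eta=500, lam=0.0))   # 1.0
print(quality(eta=500, lam=0.02))  # ~0.0909
```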
Of course, any collection $\{q^{(\ell)}(p,n,m)\}_{\ell=1}^{r}$ obtained from $r$ independent executions of the algorithm would yield an empirical distribution far from normal, whereby a parametric statistical analysis of these data under standard assumptions would be unsuitable for correctly describing the random variable here called quality, which will hereafter be denoted by $Q(p,n,m)$ and is given by
$$Q(p,n,m) = \frac{1}{1+N(p,n,m)\,L(p,n,m)}, \qquad (12)$$
where $N(p,n,m)$ and $L(p,n,m)$ are, respectively, the random variables number of function evaluations and distance to the true point, whose distribution functions are unknown.
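Since the empirical distribution of the quality samples is, as noted above, far from normal, a nonparametric summary is one reasonable way to describe a collection $\{q^{(\ell)}(p,n,m)\}_{\ell=1}^{r}$. The sketch below gathers quality samples from hypothetical $(\eta,\lambda)$ pairs and reports the median and quartiles; the data and the choice of summary statistics are assumptions of this sketch, not results from the paper.

```python
import numpy as np

def quality_samples(runs):
    """Quality samples q^(l) = 1 / (1 + eta * lam) for r independent runs.

    `runs` is a sequence of (eta, lam) pairs: evaluation count and distance
    to the true solution per replication; names are illustrative only.
    """
    return np.array([1.0 / (1.0 + eta * lam) for eta, lam in runs])

# Hypothetical replications; since Q(p, n, m) is typically far from normal,
# report median and quartiles rather than a mean/standard-deviation summary.
runs = [(320, 0.0), (410, 0.015), (275, 0.002), (500, 0.08), (390, 0.0)]
q = quality_samples(runs)
print("median:", np.median(q))
print("quartiles:", np.percentile(q, [25, 75]))
```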