An Extension of the Nested Partitions Method
EBERT BREA
\[
\bar{u}^{(j)}_{\sigma(k)} - \bigl\lceil \delta^{(j)}_{\sigma(k)} \bigr\rceil \le \left\lceil \frac{\bar{u}^{(j)} - \bar{l}^{(j)}}{2^{k}} \right\rceil. \tag{17b}
\]
Note that the measurements of any kth promising region σ(k) are given by all the components of d(k). Similarly, denoting by σ̃(k) the baggy hull of the corresponding promising region σ(k), whose measurements are given by all the components of d̃(k), k ∈ N, we can hence say that σ(k) ⊆ σ̃(k).
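The containment σ(k) ⊆ σ̃(k) can be verified coordinate-wise: every interval of the promising region must lie inside the corresponding baggy-hull interval. The following Python sketch is purely illustrative and not part of the MINP method itself; the bound pairs are hypothetical placeholders rather than quantities computed by the algorithm.

```python
def contains(hull_bounds, region_bounds):
    """Check sigma(k) subset-of sigma_tilde(k) coordinate-wise.

    Both arguments are lists of (lower, upper) pairs, one pair per
    coordinate; the hull contains the region iff every region interval
    lies inside the matching hull interval.
    """
    return all(hl <= rl and ru <= hu
               for (hl, hu), (rl, ru) in zip(hull_bounds, region_bounds))

# Illustrative bounds: a hull [0, 4] x [0, 4] containing region [1, 3] x [0, 4].
print(contains([(0.0, 4.0), (0.0, 4.0)], [(1.0, 3.0), (0.0, 4.0)]))
```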
We are now interested in describing the Markov chain that is generated by the MINP method. By Proposition 4, the Markov chain of the MINP method (MINPMC) can rigorously be described using (14), yielding either, under Condition 1, a countably infinite set of discrete stochastic states {D(κ) = d̃(k)}_{k=0}^{∞} for all k ∈ N; or, under Condition 2, a finite set of discrete stochastic states {D(κ) = d̃(k)}_{k=0}^{k̂}, where k̂ is given by (9).
Observe the distinction made between the counters k and κ of the algorithm. The first denotes the number of iterations executed by the algorithm from the initial state d̃(0), whilst the second denotes the cumulative number of iterations carried out by the MINP method since its start.
Let
\[
p_{\kappa}^{(k,\ell)} = \Pr\bigl\{ D(\kappa+1) = \tilde{d}(\ell) \bigm| D(\kappa) = \tilde{d}(k) \bigr\} \tag{18}
\]
denote the single-step transition probability that the MINPMC will visit state D(κ+1) = d̃(ℓ) at cumulative iteration κ+1, given that the MINPMC is at state D(κ) = d̃(k) at the κth cumulative iteration, for all k, κ ∈ N and ℓ ∈ {0, k+1}.
Nevertheless, it is reasonable to assume that (18) depends on k, whereby the MINPMC is said to be a nonhomogeneous Markov chain, which will hereafter be denoted by
\[
p_{k}^{(k,\ell)} = \Pr\bigl\{ D(k+1) = \tilde{d}(\ell) \bigm| D(k) = \tilde{d}(k) \bigr\}, \quad \forall k \in \mathbb{N},\ \ell \in \{0, k+1\}. \tag{19}
\]
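The transition structure of (19), in which the chain at state d̃(k) either advances to d̃(k+1) or falls back to d̃(0), can be simulated directly. The sketch below is illustrative only: `forward_prob` is a hypothetical placeholder for the probabilities p_k^{(k,k+1)} induced by the MINP sampling scheme, not the probabilities derived in this paper.

```python
import random

def simulate_minpmc(forward_prob, steps, seed=0):
    """Simulate a trajectory of a nonhomogeneous chain with the MINPMC
    transition structure: from state k, move to k + 1 with probability
    forward_prob(k), otherwise return to state 0 (l in {0, k + 1})."""
    rng = random.Random(seed)
    k = 0
    trajectory = [k]
    for _ in range(steps):
        k = k + 1 if rng.random() < forward_prob(k) else 0
        trajectory.append(k)
    return trajectory

# Example with an illustrative forward probability that decays with k.
path = simulate_minpmc(lambda k: 0.9 / (1 + k), steps=10)
```

Every step of the returned trajectory either increments the state by one or resets it to zero, mirroring the arcs of the chain described above.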
Figure 9 illustrates the MINPMC, wherein each state is symbolized by a rounded node d̃(k), and each transition probability p_k^{(k,ℓ)} by a directed arc, which represents the probability that the MINPMC changes from state d̃(k) to state d̃(ℓ), for each ℓ ∈ {0, k+1}, at the kth iteration counter.
[Figure 9 here: a chain of states d̃(0), d̃(1), d̃(2), …, d̃(k−1), d̃(k), linked by forward arcs p_0^(0,1), p_1^(1,2), …, p_{k−1}^(k−1,k), p_k^(k,k+1), with return arcs p_1^(1,0), p_2^(2,0), …, p_k^(k,0) back to d̃(0).]
Figure 9. The MINP Markov chain
Condition 2
Suppose that the MINP method has successfully been run for globally solving Problem 1, applying the proposed algorithm in Figure 8.
Condition 3
Assume that the MINP method uniformly takes random samples, by randomly located trial points, from both each subregion and the current surrounding region, using both independent and identically distributed (i.i.d.) continuous uniform random variables and i.i.d. discrete uniform random variables.
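The sampling scheme of Condition 3 can be sketched in a few lines of Python: each continuous coordinate is drawn uniformly from its interval and each integer coordinate uniformly from its integer range, all draws independent. The function name and the bound lists are illustrative assumptions, not identifiers from the paper.

```python
import random

def sample_trial_point(cont_bounds, int_bounds, rng=random):
    """Draw one trial point as in Condition 3.

    cont_bounds: list of (l, u) intervals for the continuous coordinates,
    sampled with i.i.d. continuous uniform variables.
    int_bounds: list of (lbar, ubar) integer ranges, sampled with i.i.d.
    discrete uniform variables (randint is inclusive of both endpoints).
    """
    x = [rng.uniform(l, u) for l, u in cont_bounds]
    y = [rng.randint(l, u) for l, u in int_bounds]
    return x, y

# Example: two continuous and two integer coordinates.
x, y = sample_trial_point([(0.0, 1.0), (-2.0, 2.0)], [(1, 5), (0, 3)])
```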
We shall now introduce a measure of mixed integer region size, which will allow us to estimate the probability of the MINPMC events.
Definition 8 (Hypervolume of a region)
Let V[σ(k)] denote the hypervolume of σ(k), which is defined by
\[
V[\sigma(k)] = \prod_{j=1}^{n} \Bigl( u^{(j)}_{\sigma(k)} - l^{(j)}_{\sigma(k)} \Bigr) + \prod_{j=1}^{m} \Bigl( \bar{u}^{(j)}_{\sigma(k)} - \bar{l}^{(j)}_{\sigma(k)} \Bigr), \quad k \in \mathbb{N}. \tag{20}
\]
Note that Definition 8 can also be used for calculating the baggy hull hypervolume of any promising region, which is
\[
V[\tilde{\sigma}(k)] = \prod_{j=1}^{n} \frac{u^{(j)} - l^{(j)}}{2^{k}} + \prod_{j=1}^{m} \left\lceil \frac{\bar{u}^{(j)} - \bar{l}^{(j)}}{2^{k}} \right\rceil, \quad k \in \mathbb{N}. \tag{21}
\]
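Both hypervolumes can be computed directly from the bounds: (20) adds the product of the continuous side lengths to the product of the integer side lengths, and (21) first halves each original side length k times, rounding the integer sides up with the ceiling. The following Python sketch assumes the bounds are supplied as lists of (lower, upper) pairs, a representation chosen here for illustration.

```python
from math import ceil, prod

def hypervolume(cont_ranges, int_ranges):
    """Hypervolume of a region per Eq. (20): the product of the continuous
    side lengths plus the product of the integer side lengths."""
    return (prod(u - l for l, u in cont_ranges)
            + prod(u - l for l, u in int_ranges))

def baggy_hull_hypervolume(cont_ranges, int_ranges, k):
    """Baggy-hull hypervolume per Eq. (21): each original side length is
    divided by 2**k, with the integer sides rounded up by the ceiling."""
    return (prod((u - l) / 2**k for l, u in cont_ranges)
            + prod(ceil((u - l) / 2**k) for l, u in int_ranges))

# Example: one continuous side [0, 8] and one integer side {0, ..., 8}, k = 2:
# 8/4 + ceil(8/4) = 2 + 2 = 4.
print(baggy_hull_hypervolume([(0.0, 8.0)], [(0, 8)], 2))  # -> 4.0
```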
Based on (21), we shall now consider the following proposition.
We now consider a successful event F, which occurs when the MINP method discovers that the best value of the objective function comes from some subregion, among all sampled trial points at a kth iteration, i.e., at a d̃(k) state; by contrast, let B̄ denote the event that a
Revista TEKHNÉ No. 25.1, October-February 2022 semester. Electronic ISSN: 2790-5195. ISSN: 1316-3930.