Fractional Newton-Raphson Method and Some
Variants for the Solution of Nonlinear Systems
A. Torres-Hernandez ,a, F. Brambila-Paz †,b, and E. De-la-Vega ‡,c
aDepartment of Physics, Science Faculty - UNAM, Mexico
bDepartment of Mathematics, Science Faculty - UNAM, Mexico
cEngineering, Universidad Panamericana-Aguascalientes, Mexico
Abstract
The following document presents some novel numerical methods, valid for one and several variables, which use the fractional derivative to find solutions of some nonlinear systems in the complex space using real initial conditions. The origin of these methods is the fractional Newton-Raphson method but, unlike the latter, the orders proposed here for the fractional derivatives are functions. In the first method, a function is used to guarantee an order of convergence (at least) quadratic, and in the other, a function is used to avoid the discontinuity that is generated when the fractional derivative of constants is used; with this, the method has at most an order of convergence (at least) linear.
Keywords: Iteration Function, Order of Convergence, Fractional Derivative.
1. Introduction
When starting to study fractional calculus, the first difficulty is that, when one wants to solve a problem involving physical units, such as determining the velocity of a particle, the fractional derivative seems to make no sense; this is due to the presence of physical units such as the meter and the second raised to non-integer exponents, contrary to what happens with operators of integer order. The second difficulty, which is a recurring topic of debate in the study of fractional calculus, is to know which order α is "optimal" when the goal is to solve a problem related to fractional operators. To face these difficulties, in the first case it is common to nondimensionalize any equation in which non-integer operators are involved, while for the second case different orders α are used in the fractional operators to solve the problem, and then the order α that provides the "best solution" is chosen based on an established criterion.
From the two previous difficulties arises the idea of looking for applications of a dimensionless nature in which the need to use multiple orders α can be exploited in some way. This led to the study of the Newton-Raphson method and a particular problem related to the search for roots in the complex space for polynomials: if one needs to find a complex root of a polynomial using the Newton-Raphson method, it is necessary to provide a complex initial condition x0 and, if the right conditions are selected, this leads to a complex solution, but there is also the possibility that it leads to a real solution. If the root obtained is real, it is necessary to change the initial condition and hope that it leads to a complex solution; otherwise, the value of the initial condition must be changed again.
The process described above is very similar to what happens when using different values of α in fractional operators until a solution that meets some desired condition is found. Seeing the Newton-Raphson method from the perspective of fractional calculus, one can consider that an order α remains fixed, in this case α = 1, and the initial conditions x0 are varied until a solution that satisfies an established criterion is obtained. Reversing the behavior of α and x0, that is, leaving the initial condition fixed and varying the order α, the fractional Newton-Raphson method [1, 2] is obtained, which is nothing other than the Newton-Raphson method using any definition of
fractional derivative that fits the function with which one is working. This change, although simple in essence, allows us to find roots in the complex space using real initial conditions, because fractional operators generally do not map polynomials to polynomials.
1.1. Fixed Point Method
A classic problem in applied mathematics is to find the zeros of a function f : Ω ⊂ Rn → Rn, that is,
{ξ ∈ Ω : f (ξ) = 0},
this problem often arises as a consequence of wanting to solve other problems, for instance, if we want to
determine the eigenvalues of a matrix or want to build a box with a given volume but with minimal surface; in
the first example, we need to find the zeros (or roots) of the characteristic polynomial of the matrix, while in
the second one we need to find the zeros of the gradient of a function that relates the surface of the box with its
volume.
Although finding the zeros of a function may seem like a simple problem, in general it involves solving nonlinear equations, which in many cases have no analytical solution; an example of this arises when we try to determine the zeros of the following function

$$f(x) = \sin(x) - \frac{1}{x}.$$
Because in many cases there is no analytical solution, numerical methods are needed to try to determine the solutions to these problems; it should be noted that when using numerical methods, the word "determine" should be interpreted as approximating a solution to a desired degree of precision. The numerical methods mentioned above are usually of the iterative type and work as follows: suppose we have a function $f : \Omega \subset \mathbb{R}^n \to \mathbb{R}^n$ and we search for a value $\xi \in \mathbb{R}^n$ such that $f(\xi) = 0$; then we can start by giving an initial value $x_0 \in \mathbb{R}^n$ and calculate a value $x_i$ close to the searched value $\xi$ using an iteration function $\Phi : \mathbb{R}^n \to \mathbb{R}^n$ as follows [3]
$$x_{i+1} := \Phi(x_i), \quad i = 0,1,2,\cdots, \tag{1}$$

this generates a sequence $\{x_i\}_{i=0}^{\infty}$, which under certain conditions satisfies that

$$\lim_{i \to \infty} x_i \to \xi.$$
To understand the convergence of the iteration function Φ it is necessary to have the following definition [4]:
Definition 1.1. Let $\Phi : \mathbb{R}^n \to \mathbb{R}^n$ be an iteration function. The method given in (1) to determine $\xi \in \mathbb{R}^n$ is called (locally) convergent if there exists $\delta > 0$ such that for every initial value

$$x_0 \in B(\xi;\delta) := \left\{ y \in \mathbb{R}^n : \|y - \xi\| < \delta \right\},$$

it holds that

$$\lim_{i \to \infty} \|x_i - \xi\| \to 0 \ \Rightarrow\ \lim_{i \to \infty} x_i = \xi, \tag{2}$$

where $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$ denotes any vector norm.
When it is assumed that the iteration function $\Phi$ is continuous at $\xi$ and that the sequence $\{x_i\}_{i=0}^{\infty}$ converges to $\xi$ under the condition given in (2), it is true that

$$\xi = \lim_{i\to\infty} x_{i+1} = \lim_{i\to\infty} \Phi(x_i) = \Phi\!\left( \lim_{i\to\infty} x_i \right) = \Phi(\xi), \tag{3}$$

the previous result is the reason why the method given in (1) is called the Fixed Point Method [4].
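As a simple illustration of the iteration (1), the following sketch, written in Julia (the language mentioned at the end of this document for the examples), iterates a function until successive iterates agree; the helper name fixed_point and the sample function Φ(x) = cos(x) are illustrative choices, not taken from the paper.

```julia
# A minimal sketch of the fixed point iteration (1), assuming Φ is a contraction.
# The sample function Φ(x) = cos(x) has the fixed point ξ ≈ 0.739085.
function fixed_point(Φ, x0; tol = 1e-10, maxit = 100)
    x = x0
    for i in 1:maxit
        x_new = Φ(x)
        if abs(x_new - x) < tol       # stop when successive iterates agree
            return x_new, i
        end
        x = x_new
    end
    return x, maxit
end

ξ, iters = fixed_point(cos, 1.0)      # ξ ≈ 0.7390851332151607
```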
1.1.1. Convergence and Order of Convergence
The (local) convergence of the iteration function $\Phi$ established in (2) is useful for demonstrating certain intrinsic properties of the fixed point method. Before continuing, it is necessary to have the following definition [3]
Definition 1.2. Let $\Phi : \Omega \subset \mathbb{R}^n \to \mathbb{R}^n$. The function $\Phi$ is a contraction on a set $\Omega_0 \subset \Omega$ if there exists a non-negative constant $\beta < 1$ such that

$$\|\Phi(x) - \Phi(y)\| \le \beta\, \|x - y\|, \quad \forall x,y \in \Omega_0, \tag{4}$$

where $\beta$ is called a contraction constant.
The previous definition guarantees that if the iteration function $\Phi$ is a contraction on a set $\Omega_0$, then it has at least one fixed point. The existence of a fixed point is guaranteed by the following theorem [5]

Theorem 1.3 (Contraction Mapping Theorem). Let $\Phi : \Omega \subset \mathbb{R}^n \to \mathbb{R}^n$. Assume that $\Phi$ is a contraction on a closed set $\Omega_0 \subset \Omega$ and that $\Phi(x) \in \Omega_0\ \forall x \in \Omega_0$. Then $\Phi$ has a unique fixed point $\xi \in \Omega_0$ and, for any initial value $x_0 \in \Omega_0$, the sequence $\{x_i\}_{i=0}^{\infty}$ generated by (1) converges to $\xi$. Moreover,

$$\|x_{k+1} - \xi\| \le \frac{\beta}{1-\beta}\, \|x_{k+1} - x_k\|, \quad k = 0,1,2,\cdots, \tag{5}$$

where $\beta$ is the contraction constant given in (4).
When the fixed point method given by (1) is used, in addition to convergence there is a special interest in the order of convergence, which is defined as follows [4]

Definition 1.4. Let $\Phi : \Omega \subset \mathbb{R}^n \to \mathbb{R}^n$ be an iteration function with a fixed point $\xi \in \Omega$. The method (1) is called (locally) convergent of (at least) order $p$ ($p \ge 1$) if there exist $\delta > 0$ and a non-negative constant $C$ (with $C < 1$ if $p = 1$) such that for any initial value $x_0 \in B(\xi;\delta)$ it holds that

$$\|x_{k+1} - \xi\| \le C\, \|x_k - \xi\|^p, \quad k = 0,1,2,\cdots, \tag{6}$$

where $C$ is called the convergence factor.
The order of convergence is usually related to the speed at which the sequence generated by (1) converges. For the particular cases p = 1 or p = 2, it is said that the method has (at least) linear or quadratic convergence, respectively. The following theorem for the one-dimensional case [4] allows characterizing the order of convergence of an iteration function Φ through its derivatives

Theorem 1.5. Let $\Phi : \Omega \subset \mathbb{R} \to \mathbb{R}$ be an iteration function with a fixed point $\xi \in \Omega$. Assume that $\Phi$ is $p$-times differentiable at $\xi$ for some $p \in \mathbb{N}$, and furthermore

$$\begin{cases} \Phi^{(k)}(\xi) = 0, \ \ \forall k \le p-1, & \text{if } p \ge 2, \\ \left|\Phi^{(1)}(\xi)\right| < 1, & \text{if } p = 1, \end{cases} \tag{7}$$

then $\Phi$ is (locally) convergent of (at least) order $p$.
Proof. Assuming that $\Phi : \Omega \subset \mathbb{R} \to \mathbb{R}$ is an iteration function $p$-times differentiable with a fixed point $\xi \in \Omega$, we can expand the function $\Phi(x_i)$ in a Taylor series around $\xi$ up to order $p$

$$\Phi(x_i) = \Phi(\xi) + \sum_{s=1}^{p} \frac{\Phi^{(s)}(\xi)}{s!}(x_i - \xi)^s + o\!\left((x_i - \xi)^p\right),$$

then we obtain
$$|\Phi(x_i) - \Phi(\xi)| \le \sum_{s=1}^{p} \frac{\left|\Phi^{(s)}(\xi)\right|}{s!}\,|x_i - \xi|^s + o\!\left(|x_i - \xi|^p\right),$$

assuming that the sequence $\{x_i\}_{i=0}^{\infty}$ generated by (1) converges to $\xi$ and also that $\Phi^{(s)}(\xi) = 0\ \forall s < p$, the previous expression implies that

$$\frac{|x_{i+1} - \xi|}{|x_i - \xi|^p} = \frac{|\Phi(x_i) - \Phi(\xi)|}{|x_i - \xi|^p} \le \frac{\left|\Phi^{(p)}(\xi)\right|}{p!} + \frac{o\!\left(|x_i - \xi|^p\right)}{|x_i - \xi|^p} \xrightarrow[i\to\infty]{} \frac{\left|\Phi^{(p)}(\xi)\right|}{p!},$$

as a consequence, there exists a value $k > 0$ such that

$$|x_{i+1} - \xi| \le \frac{\left|\Phi^{(p)}(\xi)\right|}{p!}\,|x_i - \xi|^p, \quad \forall i \ge k.$$
A version of the previous theorem for the n-dimensional case may be found in reference [3].
1.2. Newton-Raphson Method
The previous theorem in its $n$-dimensional form is usually very useful for generating a fixed point method with a desired order of convergence; an order of convergence that is usually appreciated in iterative methods is the (at least) quadratic order. If we have a function $f : \Omega \subset \mathbb{R}^n \to \mathbb{R}^n$ and we search for a value $\xi \in \Omega$ such that $f(\xi) = 0$, we may build an iteration function $\Phi$ in the general form [6]

$$\Phi(x) = x - A(x)f(x), \tag{8}$$
with $A(x)$ a matrix of the form

$$A(x) := \big([A]_{jk}(x)\big) = \begin{pmatrix} [A]_{11}(x) & [A]_{12}(x) & \cdots & [A]_{1n}(x) \\ [A]_{21}(x) & [A]_{22}(x) & \cdots & [A]_{2n}(x) \\ \vdots & \vdots & \ddots & \vdots \\ [A]_{n1}(x) & [A]_{n2}(x) & \cdots & [A]_{nn}(x) \end{pmatrix}, \tag{9}$$

where $[A]_{jk} : \mathbb{R}^n \to \mathbb{R}$ $(1 \le j,k \le n)$. Notice that the matrix $A(x)$ is determined according to the order of convergence desired. Before continuing, it is necessary to mention that the following conditions are needed:
1. Suppose that Theorem 1.5 can be generalized to the $n$-dimensional case; for this it is necessary to guarantee that the iteration function $\Phi$ given by (8) can be expressed, near the value $\xi$, in terms of its Taylor series in several variables.

2. It is necessary to guarantee that the norm of the equivalent of the first derivative in several variables of the iteration function $\Phi$ tends to zero near the value $\xi$.
Then, we will assume that the first condition is satisfied; for the second condition, we have that the equivalent of the first derivative in several variables is the Jacobian matrix of the function $\Phi$, which is defined as follows [5]

$$\Phi^{(1)}(x) := \big([\Phi]^{(1)}_{jk}(x)\big) = \begin{pmatrix} \partial_1[\Phi]_1(x) & \partial_2[\Phi]_1(x) & \cdots & \partial_n[\Phi]_1(x) \\ \partial_1[\Phi]_2(x) & \partial_2[\Phi]_2(x) & \cdots & \partial_n[\Phi]_2(x) \\ \vdots & \vdots & \ddots & \vdots \\ \partial_1[\Phi]_n(x) & \partial_2[\Phi]_n(x) & \cdots & \partial_n[\Phi]_n(x) \end{pmatrix}, \tag{10}$$
where

$$[\Phi]^{(1)}_{jk} = \partial_k[\Phi]_j(x) := \frac{\partial}{\partial [x]_k}[\Phi]_j(x), \quad 1 \le j,k \le n,$$

with $[\Phi]_j : \mathbb{R}^n \to \mathbb{R}$ the $j$-th component of the iteration function $\Phi$. Now, considering that

$$\lim_{x\to\xi} \left\| \Phi^{(1)}(x) \right\| = 0 \ \Rightarrow\ \lim_{x\to\xi} \partial_k[\Phi]_j(x) = 0, \quad \forall j,k \le n, \tag{11}$$
we can assume that we have a function $f : \Omega \subset \mathbb{R}^n \to \mathbb{R}^n$ with a zero $\xi \in \Omega$, such that all of its first partial derivatives are defined at $\xi$. Then, taking the iteration function $\Phi$ given by (8), the $k$-th component of the iteration function may be written as

$$[\Phi]_k(x) = [x]_k - \sum_{j=1}^{n} [A]_{kj}(x)\,[f]_j(x),$$

then

$$\partial_l[\Phi]_k(x) = \delta_{lk} - \sum_{j=1}^{n}\left( [A]_{kj}(x)\,\partial_l[f]_j(x) + \left(\partial_l[A]_{kj}(x)\right)[f]_j(x) \right),$$

where $\delta_{lk}$ is the Kronecker delta, which is defined as

$$\delta_{lk} = \begin{cases} 1, & \text{if } l = k, \\ 0, & \text{if } l \neq k. \end{cases}$$

Assuming that (11) is fulfilled,

$$\partial_l[\Phi]_k(\xi) = \delta_{lk} - \sum_{j=1}^{n} [A]_{kj}(\xi)\,\partial_l[f]_j(\xi) = 0 \ \Rightarrow\ \sum_{j=1}^{n} [A]_{kj}(\xi)\,\partial_l[f]_j(\xi) = \delta_{lk}, \quad \forall l,k \le n,$$
the previous expression may be written in matrix form as

$$A(\xi)\,f^{(1)}(\xi) = I_n \ \Rightarrow\ A(\xi) = \left( f^{(1)}(\xi) \right)^{-1},$$

where $f^{(1)}$ and $I_n$ are the Jacobian matrix of the function $f$ and the $n \times n$ identity matrix, respectively.

Denoting by $\det(A)$ the determinant of the matrix $A$, any matrix $A(x)$ that fulfills the following condition

$$\lim_{x\to\xi} A(x) = \left( f^{(1)}(\xi) \right)^{-1}, \quad \det\!\left( f^{(1)}(\xi) \right) \neq 0, \tag{12}$$

guarantees that there exists $\delta > 0$ such that the iteration function $\Phi$ given by (8) converges (locally) with an order of convergence (at least) quadratic in $B(\xi;\delta)$. The following fixed point method can be obtained from the previous result

$$x_{i+1} := \Phi(x_i) = x_i - \left( f^{(1)}(x_i) \right)^{-1} f(x_i), \quad i = 0,1,2,\cdots, \tag{13}$$

which is known as the Newton-Raphson method ($n$-dimensional), also known as Newton's method [7].
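A minimal sketch of the iteration (13) in Julia is shown below, assuming the Jacobian is supplied analytically; the example system and the helper names are illustrative, not part of the paper.

```julia
using LinearAlgebra   # norm and the backslash solver

# Newton-Raphson method (13): instead of inverting f⁽¹⁾(xᵢ), solve the linear
# system f⁽¹⁾(xᵢ)Δ = f(xᵢ) at each iteration.
function newton_raphson(f, J, x0; tol = 1e-10, maxit = 100)
    x = float.(x0)
    for i in 1:maxit
        fx = f(x)
        if norm(fx) < tol
            return x, i
        end
        x -= J(x) \ fx
    end
    return x, maxit
end

# Example: f(x) = (x₁² + x₂² − 4, x₁x₂ − 1)ᵀ has a zero near (1.932, 0.518).
f(x) = [x[1]^2 + x[2]^2 - 4, x[1]*x[2] - 1]
J(x) = [2*x[1] 2*x[2]; x[2] x[1]]
root, its = newton_raphson(f, J, [2.0, 0.5])
```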
Although the condition given in (12) might suggest that the Newton-Raphson method always has an order of convergence (at least) quadratic, unfortunately this is not true; the order of convergence is conditioned by the way in which the function f is constituted, as may be appreciated in the following proposition
Proposition 1.6. Let $f : \Omega \subset \mathbb{R} \to \mathbb{R}$ be a function that is at least twice differentiable at $\xi \in \Omega$. If $\xi$ is a zero of $f$ with algebraic multiplicity $m$ ($m \ge 2$), that is,

$$f(x) = (x-\xi)^m g(x), \quad g(\xi) \neq 0,$$

then the Newton-Raphson method (one-dimensional) has an order of convergence (at least) linear.
Proof. Suppose we have a function $f : \Omega \subset \mathbb{R} \to \mathbb{R}$ with a zero $\xi \in \Omega$ of algebraic multiplicity $m \ge 2$, and that $f$ is at least twice differentiable at $\xi$; then

$$f(x) = (x-\xi)^m g(x), \quad g(\xi) \neq 0,$$
$$f^{(1)}(x) = (x-\xi)^{m-1}\left( m\,g(x) + (x-\xi)\,g^{(1)}(x) \right),$$

as a consequence, the derivative of the iteration function $\Phi$ of the Newton-Raphson method may be expressed as

$$\Phi^{(1)}(x) = 1 - \frac{m\,g^2(x) + (x-\xi)^2\left( \left(g^{(1)}(x)\right)^2 - g(x)\,g^{(2)}(x) \right)}{\left( m\,g(x) + (x-\xi)\,g^{(1)}(x) \right)^2},$$

therefore

$$\lim_{x\to\xi} \Phi^{(1)}(x) = 1 - \frac{1}{m},$$

and by Theorem 1.5, the Newton-Raphson method under the hypotheses of the proposition converges (locally) with an order of convergence (at least) linear.
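A small numerical check of Proposition 1.6 (an illustrative sketch, not taken from the paper): for f(x) = (x − 1)³ the zero ξ = 1 has multiplicity m = 3, and the Newton-Raphson errors decay only geometrically, with ratio close to 1 − 1/m = 2/3.

```julia
# Newton-Raphson applied to f(x) = (x − 1)³, whose zero has multiplicity m = 3.
# The printed errors |xᵢ − 1| shrink by a factor ≈ 2/3 per iteration (linear order).
let x = 2.0
    f(t)  = (t - 1)^3
    df(t) = 3*(t - 1)^2
    for i in 1:10
        x -= f(x) / df(x)
        println(i, "   ", abs(x - 1))
    end
end
```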
2. Basic Definitions of the Fractional Derivative
2.1. Introduction to the Definition of Riemann-Liouville
One of the key pieces in the study of fractional calculus is the iterated integral, which is defined as follows [8]
Definition 2.1. Let $L^1_{loc}(a,b)$ be the space of locally integrable functions on the interval $(a,b)$. If $f$ is a function such that $f \in L^1_{loc}(a,\infty)$, then the $n$-th iterated integral of the function $f$ is given by

$${}_aI^n_x f(x) = {}_aI_x\left( {}_aI^{n-1}_x f(x) \right) = \frac{1}{(n-1)!}\int_a^x (x-t)^{n-1} f(t)\,dt, \tag{14}$$

where

$${}_aI_x f(x) := \int_a^x f(t)\,dt.$$
Considering that $(n-1)! = \Gamma(n)$, a generalization of (14) may be obtained for an arbitrary order $\alpha > 0$

$${}_aI^\alpha_x f(x) = \frac{1}{\Gamma(\alpha)}\int_a^x (x-t)^{\alpha-1} f(t)\,dt, \tag{15}$$

similarly, if $f \in L^1_{loc}(-\infty,b)$, we may define

$${}_xI^\alpha_b f(x) = \frac{1}{\Gamma(\alpha)}\int_x^b (t-x)^{\alpha-1} f(t)\,dt, \tag{16}$$

the equations (15) and (16) correspond to the definitions of the right and left fractional integral of Riemann-Liouville, respectively. The fractional integrals satisfy the semigroup property, which is given in the following proposition [8]
Proposition 2.2. Let $f$ be a function. If $f \in L^1_{loc}(a,\infty)$, then the fractional integrals of $f$ satisfy

$${}_aI^\alpha_x\, {}_aI^\beta_x f(x) = {}_aI^{\alpha+\beta}_x f(x), \quad \alpha,\beta > 0. \tag{17}$$

From the previous result, and considering that the operator $d/dx$ is the inverse operator to the left of the operator ${}_aI_x$, any $\alpha$-th integral of a function $f \in L^1_{loc}(a,\infty)$ may be written as

$${}_aI^\alpha_x f(x) = \frac{d^n}{dx^n}\left( {}_aI^n_x\, {}_aI^\alpha_x f(x) \right) = \frac{d^n}{dx^n}\left( {}_aI^{\,n+\alpha}_x f(x) \right). \tag{18}$$
Considering (15) and (18), we can build the operator Fractional Derivative of Riemann-Liouville, ${}_aD^\alpha_x$, as follows [8, 9]

$${}_aD^\alpha_x f(x) := \begin{cases} {}_aI^{-\alpha}_x f(x), & \text{if } \alpha < 0, \\[4pt] \dfrac{d^n}{dx^n}\left( {}_aI^{\,n-\alpha}_x f(x) \right), & \text{if } \alpha \ge 0, \end{cases} \tag{19}$$

where $n = \lfloor \alpha \rfloor + 1$. Applying the operator (19) with $a = 0$ and $\alpha \in \mathbb{R}\setminus\mathbb{Z}$ to the function $x^\mu$, we obtain

$${}_0D^\alpha_x x^\mu = \begin{cases} \dfrac{(-1)^{\alpha}\,\Gamma(-(\mu+\alpha))}{\Gamma(-\mu)}\, x^{\mu-\alpha}, & \text{if } \mu \le -1, \\[8pt] \dfrac{\Gamma(\mu+1)}{\Gamma(\mu-\alpha+1)}\, x^{\mu-\alpha}, & \text{if } \mu > -1. \end{cases} \tag{20}$$
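For the case µ > −1 of (20), the following Julia sketch evaluates the Riemann-Liouville derivative of a monomial; the argument is promoted to a complex number so that non-integer powers of negative values are defined. The helper name rl_monomial is an illustrative choice, not from the paper.

```julia
using SpecialFunctions   # provides the gamma function

# Riemann-Liouville derivative (20), case µ > −1, with a = 0:
#   ₀Dᵅₓ x^µ = Γ(µ+1)/Γ(µ−α+1) · x^(µ−α)
rl_monomial(μ, α, x) = gamma(μ + 1) / gamma(μ - α + 1) * Complex(x)^(μ - α)

rl_monomial(2, 0.5, 1.0)    # ≈ 1.5045 + 0.0im, i.e. Γ(3)/Γ(2.5)
rl_monomial(1, 0.5, -0.5)   # purely imaginary: a real monomial leaves the real line
```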
2.2. Introduction to the Definition of Caputo
Michele Caputo (1969) published a book and introduced a new definition of fractional derivative; he created this definition with the objective of modeling anomalous diffusion phenomena. The definition of Caputo had already been discovered independently by Gerasimov (1948). This fractional derivative is of the utmost importance since it allows us to give a physical interpretation to initial value problems, in addition to being used to model fractional time. In some texts, it is known as the fractional derivative of Gerasimov-Caputo.
Let $f$ be a function such that $f$ is $n$-times differentiable with $f^{(n)} \in L^1_{loc}(a,b)$; then the (right) fractional derivative of Caputo is defined as [9]

$${}^{C}_{a}D^\alpha_x f(x) := {}_aI^{\,n-\alpha}_x \frac{d^n}{dx^n} f(x) = \frac{1}{\Gamma(n-\alpha)}\int_a^x (x-t)^{n-\alpha-1} f^{(n)}(t)\,dt, \tag{21}$$

where $n = \lfloor \alpha \rfloor + 1$. It should be mentioned that the fractional derivative of Caputo behaves as the inverse operator to the left of the fractional integral of Riemann-Liouville, that is,

$${}^{C}_{a}D^\alpha_x\left( {}_aI^\alpha_x f(x) \right) = f(x).$$
On the other hand, the relation between the fractional derivatives of Caputo and Riemann-Liouville is given by the following expression [9]

$${}^{C}_{a}D^\alpha_x f(x) = {}_aD^\alpha_x\left( f(x) - \sum_{k=0}^{n-1} \frac{f^{(k)}(a)}{k!}(x-a)^k \right),$$

then, if $f^{(k)}(a) = 0\ \forall k < n$, we obtain

$${}^{C}_{a}D^\alpha_x f(x) = {}_aD^\alpha_x f(x),$$

considering the previous particular case, it is possible to unify the definitions of the fractional integral of Riemann-Liouville and the fractional derivative of Caputo as follows
$${}^{C}_{a}D^\alpha_x f(x) := \begin{cases} {}_aI^{-\alpha}_x f(x), & \text{if } \alpha < 0, \\[4pt] {}_aI^{\,n-\alpha}_x \dfrac{d^n}{dx^n} f(x), & \text{if } \alpha \ge 0. \end{cases} \tag{22}$$
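As a quick numerical sanity check of (21) for 0 < α < 1 (a sketch, assuming f′ is available in closed form and using the QuadGK package for the integral; this is not part of the paper):

```julia
using SpecialFunctions, QuadGK   # gamma, adaptive Gauss-Kronrod quadrature

# Caputo derivative (21) with n = 1 (0 < α < 1): (1/Γ(1−α)) ∫ₐˣ (x−t)^(−α) f′(t) dt.
# The integrand has an integrable singularity at t = x, which quadgk tolerates.
caputo(df, α, a, x) = quadgk(t -> (x - t)^(-α) * df(t), a, x)[1] / gamma(1 - α)

# Check against the closed form for f(x) = x²:  ᶜ₀Dᵅₓ x² = Γ(3)/Γ(3−α) · x^(2−α).
caputo(t -> 2t, 0.5, 0.0, 1.0)   # ≈ 1.5045 ≈ 2/Γ(2.5)
```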
3. Fractional Newton-Raphson Method
Let $P_n(\mathbb{R})$ be the space of polynomials of degree less than or equal to $n$ with real coefficients. The zeros $\xi$ of a function $f \in P_n(\mathbb{R})$ are usually called roots. The Newton-Raphson method is useful for finding the roots of a function $f$. However, this method is limited because it cannot find roots $\xi \in \mathbb{C}\setminus\mathbb{R}$ if the sequence $\{x_i\}_{i=0}^{\infty}$ generated by (13) has an initial condition $x_0 \in \mathbb{R}$. To solve this problem and develop a method with the ability to find both real and complex roots of a polynomial when the initial condition $x_0$ is real, we propose a new method called the fractional Newton-Raphson method, which consists of the Newton-Raphson method with the implementation of the fractional derivative. Before continuing, it is necessary to define the fractional Jacobian matrix of a function $f : \Omega \subset \mathbb{R}^n \to \mathbb{R}^n$ as follows

$$f^{(\alpha)}(x) := \big([f]^{(\alpha)}_{jk}(x)\big), \tag{23}$$

where

$$[f]^{(\alpha)}_{jk} = \partial^\alpha_k [f]_j(x) := \frac{\partial^\alpha}{\partial [x]^\alpha_k}[f]_j(x), \quad 1 \le j,k \le n,$$
with $[f]_j : \mathbb{R}^n \to \mathbb{R}$. The operator $\partial^\alpha/\partial[x]^\alpha_k$ denotes any fractional derivative, applied only to the variable $[x]_k$, that satisfies the following condition of continuity with respect to the order of the derivative

$$\lim_{\alpha\to 1} \frac{\partial^\alpha}{\partial [x]^\alpha_k}[f]_j(x) = \frac{\partial}{\partial [x]_k}[f]_j(x), \quad 1 \le j,k \le n,$$

then the matrix (23) satisfies

$$\lim_{\alpha\to 1} f^{(\alpha)}(x) = f^{(1)}(x), \tag{24}$$

where $f^{(1)}(x)$ denotes the Jacobian matrix of the function $f$.
Taking into account that a polynomial of degree $n$ is composed of $n+1$ monomials of the form $x^m$, with $m \ge 0$, we can combine equation (20) with (13) to define the following iteration function, which results in the Fractional Newton-Raphson Method [1, 2]

$$x_{i+1} := \Phi(\alpha, x_i) = x_i - \left( f^{(\alpha)}(x_i) \right)^{-1} f(x_i), \quad i = 0,1,2,\cdots. \tag{25}$$
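A one-dimensional sketch of the iteration (25) for a polynomial is given below, building the fractional derivative monomial by monomial with the rule (20) for µ > −1; the helper names (poly, frac_deriv, fractional_newton_raphson) and the example polynomial are illustrative, not the authors' implementation.

```julia
using SpecialFunctions   # gamma

# p(x) = c[1] + c[2]x + ··· + c[n+1]xⁿ and its Riemann-Liouville derivative of
# order α, assembled term by term from (20) with µ = 0,1,…,n (all > −1).
poly(c, x)          = sum(c[m+1] * x^m for m in 0:length(c)-1)
frac_deriv(c, α, x) = sum(c[m+1] * gamma(m+1) / gamma(m-α+1) * x^(m-α) for m in 0:length(c)-1)

# Fractional Newton-Raphson iteration (25), one-dimensional version.
function fractional_newton_raphson(c, α, x0; tol = 1e-4, maxit = 40)
    x = Complex(float(x0))   # work in ℂ: non-integer powers of negative iterates are complex
    for i in 1:maxit
        fx = poly(c, x)
        if abs(fx) ≤ tol
            return x, i
        end
        x -= fx / frac_deriv(c, α, x)
    end
    return x, maxit
end

# Example: p(x) = x² + 1 has only the complex roots ±i, yet x₀ is real and nonzero.
fractional_newton_raphson([1.0, 0.0, 1.0], 0.5, 0.74)
```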
3.1. Fractional Newton Method
To try to guarantee that the sequence $\{x_i\}_{i=0}^{\infty}$ generated by (25) has an order of convergence (at least) quadratic, the condition (12) is combined with (24) to define the following function

$$\alpha_f([x]_k, x) := \begin{cases} \alpha, & \text{if } |[x]_k| \neq 0 \text{ and } \|f(x)\| \ge \delta, \\ 1, & \text{if } |[x]_k| = 0 \text{ or } \|f(x)\| < \delta, \end{cases} \tag{26}$$

then, for any fractional derivative that satisfies the condition (24), and using (26), the Fractional Newton Method may be defined as

$$x_{i+1} := \Phi(\alpha, x_i) = x_i - \left( N_{\alpha_f}(x_i) \right)^{-1} f(x_i), \quad i = 0,1,2,\cdots, \tag{27}$$

with $N_{\alpha_f}(x_i)$ given by the following matrix

$$N_{\alpha_f}(x_i) := \big([N_{\alpha_f}]_{jk}(x_i)\big) = \left( \partial_k^{\alpha_f([x_i]_k, x_i)}[f]_j(x_i) \right). \tag{28}$$
The difference between the methods (25) and (27) is that only for the second there can exist $\delta > 0$ such that, if the sequence $\{x_i\}_{i=0}^{\infty}$ generated by (27) converges to a root $\xi$ of $f$, there exists $k > 0$ such that $\forall i \ge k$ the sequence has an order of convergence (at least) quadratic in $B(\xi;\delta)$.
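A one-dimensional sketch of (27) is shown below; it differs from the sketch of (25) given earlier only in that the order of the derivative is switched to 1, through the function α_f of (26), once the residual drops below δ or the iterate hits zero. Names are illustrative, not the authors' code.

```julia
using SpecialFunctions   # gamma

poly(c, x)          = sum(c[m+1] * x^m for m in 0:length(c)-1)
dpoly(c, x)         = sum(m * c[m+1] * x^(m-1) for m in 1:length(c)-1)   # ordinary derivative
frac_deriv(c, α, x) = sum(c[m+1] * gamma(m+1) / gamma(m-α+1) * x^(m-α) for m in 0:length(c)-1)

# Fractional Newton method (27): once |f(x)| < δ the iteration behaves like the
# classical Newton-Raphson method, which is what restores the quadratic order.
function fractional_newton(c, α, x0; δ = 0.5, tol = 1e-4, maxit = 40)
    x = Complex(float(x0))
    for i in 1:maxit
        fx = poly(c, x)
        if abs(fx) ≤ tol
            return x, i
        end
        αf = (abs(x) != 0 && abs(fx) ≥ δ) ? α : 1      # the function α_f of (26)
        x -= fx / (αf == 1 ? dpoly(c, x) : frac_deriv(c, αf, x))
    end
    return x, maxit
end

fractional_newton([1.0, -3.0, 0.0, 1.0], 0.5, 0.74)    # p(x) = x³ − 3x + 1
```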
The value of α in (25) and (26) is assigned with the following reasoning: when we use the definitions of fractional derivatives given by (19) and (22) on a function f, it is necessary that the function be n-times integrable and n-times differentiable, where $n = \lfloor \alpha \rfloor + 1$, and therefore $|\alpha| < n$; on the other hand, to use the Newton method it is only necessary that the function be one-time differentiable. As a consequence of (26), it is obtained that

$$-2 < \alpha < 2, \quad \alpha \neq -1, 0, 1. \tag{29}$$
Without loss of generality, to understand why the sequence $\{x_i\}_{i=0}^{\infty}$ generated by the method (25) or (27), when a function $f \in P_n(\mathbb{R})$ is used, has the ability to enter the complex space starting from an initial condition $x_0 \in \mathbb{R}$, it is only necessary to observe the fractional derivative of Riemann-Liouville of order $\alpha = 1/2$ of the monomial $x^m$, which from (20) is

$${}_0D^{1/2}_x x^m = \frac{\Gamma(m+1)}{\Gamma\!\left(m+\frac{1}{2}\right)}\, x^{\,m-\frac{1}{2}}, \quad m \ge 0,$$

whose result is a function with a rational exponent, contrary to what would happen when using the conventional derivative. When the iteration function given by (25) or (27) is used, we must take an initial condition $x_0 \neq 0$, as a consequence of the fact that the fractional derivative of order $\alpha > 0$ of a constant is usually proportional to the function $x^{-\alpha}$.
The sequence $\{x_i\}_{i=0}^{\infty}$ generated by the iteration function (25) or (27) presents, among its behaviors, the following particular cases depending on the initial condition $x_0$:

1. If we take an initial condition $x_0 > 0$, the sequence $\{x_i\}_{i=0}^{\infty}$ may be divided into three parts; this occurs because there may exist a value $M \in \mathbb{N}$ for which $\{x_i\}_{i=0}^{M-1} \subset \mathbb{R}^+$ with $\{x_M\} \subset \mathbb{R}^-$, and in consequence $\{x_i\}_{i\ge M+1} \subset \mathbb{C}$.

2. On the other hand, if we take an initial condition $x_0 < 0$, the sequence $\{x_i\}_{i=0}^{\infty}$ may be divided into two parts, $\{x_0\} \subset \mathbb{R}^-$ and $\{x_i\}_{i\ge 1} \subset \mathbb{C}$.
Unlike the classical Newton-Raphson method, which uses tangent lines to generate a sequence $\{x_i\}_{i=0}^{\infty}$, the fractional Newton-Raphson method uses lines more similar to secants (see Figure 1). A consequence of the fact that the lines are not tangent when using (25) is that different trajectories can be obtained for the same initial condition $x_0$ just by changing the order $\alpha$ of the derivative (see Figure 2).
Figure 1: Illustration of some lines generated by the fractional Newton-Raphson method; the red line corresponds to the Newton-Raphson method.
Figure 2: Illustrations of some trajectories generated by the fractional Newton-Raphson method for the same initial condition x0 but with different values of α: a) α = −0.77, b) α = −0.32, c) α = 0.19, d) α = 1.87.
3.1.1. Finding Zeros
A typical inconvenience that arises in problems related to fractional calculus is the fact that the appropriate order α to solve these problems is not always known. As a consequence, different values of α are generally tested and we choose the value that allows finding the best solution according to an established criterion of precision. Based on the aforementioned, it is necessary to follow the instructions below when using the method (25) or (27) to find the zeros ξ of a function f:
1. Without considering the integer values −1, 0 and 1, a partition of the interval [−2,2] is created as follows

$$-2 = \alpha_0 < \alpha_1 < \alpha_2 < \cdots < \alpha_s < \alpha_{s+1} = 2,$$

and using this partition, the sequence $\{\alpha_m\}_{m=1}^{s}$ is created.
2. We choose a non-negative tolerance $TOL < 1$, a limit of iterations $LIT > 1$ for all $\alpha_m$, an initial condition $x_0 \neq 0$ and a value $M > LIT$.
3. We choose a value $\delta > 0$ to use $\alpha_f$ given by (26), such that $TOL < \delta < 1$. In addition, a fractional derivative that satisfies the condition of continuity (24) is taken, and it is unified with the fractional integral in the same way as in the equations (19) and (22).
4. The iteration function (25) or (27) is used with all the values of the partition $\{\alpha_m\}_{m=1}^{s}$, and for each value $\alpha_m$ a sequence $\{{}_mx_i\}_{i=0}^{R_m}$ is generated, where

$$R_m = \begin{cases} K_1 \le LIT, & \text{if there exists } k > 0 \text{ such that } \|f({}_mx_k)\| \ge M\ \forall k \ge i, \\ K_2 \le LIT, & \text{if there exists } k > 0 \text{ such that } \|f({}_mx_k)\| \le TOL\ \forall k \ge i, \\ LIT, & \text{if } \|f({}_mx_i)\| > TOL\ \forall i \ge 0, \end{cases}$$

then a sequence $\left\{ x_{R_{m_k}} \right\}_{k=1}^{r}$ is generated, with $r \le s$, such that

$$\left\| f\!\left( x_{R_{m_k}} \right) \right\| \le TOL, \quad \forall k \ge 1.$$
5. We choose a value $\varepsilon > 0$ and we take the values $x_{R_{m_1}}$ and $x_{R_{m_2}}$; then $X_1 = x_{R_{m_1}}$ is defined. If the following condition is fulfilled

$$\frac{\left\| X_1 - x_{R_{m_2}} \right\|}{\left\| X_1 \right\|} \le \varepsilon \quad \text{and} \quad R_{m_2} \le R_{m_1}, \tag{30}$$
then $X_1 = x_{R_{m_2}}$ is taken. On the other hand, if

$$\frac{\left\| X_1 - x_{R_{m_2}} \right\|}{\left\| X_1 \right\|} > \varepsilon, \tag{31}$$

then $X_2 = x_{R_{m_2}}$ is defined. Without loss of generality, it may be assumed that the second condition is fulfilled; then $X_3 = x_{R_{m_3}}$ is taken and the conditions (30) and (31) are checked with the values $X_1$ and $X_2$. The process described before is repeated for all values $X_k = x_{R_{m_k}}$, with $k \ge 4$, which generates a sequence $\{X_m\}_{m=1}^{t}$, with $t \le r$, such that

$$\frac{\left\| X_i - X_j \right\|}{\left\| X_i \right\|} > \varepsilon, \quad \forall i \neq j.$$
By following the steps described before to implement the methods (25) and (27), a subset of the solution set of zeros of the function f, both real and complex, may be obtained; a sketch of such a sweep is given below. We will proceed to give an example where the solution set of zeros of a function $f \in P_n(\mathbb{R})$ is found.
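The following compact Julia sketch implements the sweep of steps 1-5 above for a one-dimensional polynomial with coefficient vector c; the partition size, the crude duplicate filter and all names are illustrative simplifications, not the authors' implementation.

```julia
using SpecialFunctions   # gamma

poly(c, x)          = sum(c[m+1] * x^m for m in 0:length(c)-1)
frac_deriv(c, α, x) = sum(c[m+1] * gamma(m+1) / gamma(m-α+1) * x^(m-α) for m in 0:length(c)-1)

# Sweep a partition of [−2,2]: run the iteration (25) for each αₘ, keep iterates
# whose residual falls below TOL, and discard near-duplicates (steps 4 and 5).
function sweep_orders(c; x0 = 0.74, TOL = 1e-4, LIT = 40, M = 1e17, ε = 1e-3, s = 200)
    zeros_found = ComplexF64[]
    for α in range(-2, 2; length = s)
        α in (-1.0, 0.0, 1.0) && continue             # the integer orders are excluded
        x = Complex(float(x0))
        for _ in 1:LIT
            fx = poly(c, x)
            (abs(fx) ≤ TOL || abs(fx) ≥ M) && break   # converged, or diverged past M
            x -= fx / frac_deriv(c, α, x)
        end
        if abs(poly(c, x)) ≤ TOL && all(abs(x - z) / abs(z) > ε for z in zeros_found)
            push!(zeros_found, x)                     # a zero not reported before
        end
    end
    return zeros_found
end

sweep_orders([1.0, -3.0, 0.0, 1.0])   # zeros of p(x) = x³ − 3x + 1, from a single real x₀
```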
Example 3.1. Let the function:

$$f(x) = -57.62x^{16} - 56.69x^{15} - 37.39x^{14} - 19.91x^{13} + 35.83x^{12} - 72.47x^{11} + 44.41x^{10} + 43.53x^{9} + 59.93x^{8} - 42.9x^{7} - 54.24x^{6} + 72.12x^{5} - 22.92x^{4} + 56.39x^{3} + 15.8x^{2} + 60.05x + 55.31, \tag{32}$$

then the following values are chosen to use the iteration function given by (27)

TOL = 1e−4, LIT = 40, δ = 0.5, x0 = 0.74, M = 1e+17,

and using the fractional derivative given by (20), we obtain the results of Table 1
m | αm | mξ | ‖mξ − m−1ξ‖₂ | ‖f(mξ)‖₂ | Rm
1 | −1.01346 | −1.3699527 | 1.64700e−5 | 7.02720e−5 | 2
2 | −0.80436 | −1.00133957 | 9.82400e−5 | 4.36020e−5 | 2
3 | −0.50138 | −0.62435277 | 9.62700e−5 | 2.31843e−6 | 2
4 | 0.87611 | 0.58999224 − 0.86699687i | 3.32866e−7 | 6.48587e−6 | 11
5 | 0.87634 | 0.36452488 − 0.83287821i | 3.36341e−6 | 2.93179e−6 | 11
6 | 0.87658 | −0.28661369 − 0.80840642i | 2.65228e−6 | 1.06485e−6 | 10
7 | 0.8943 | 0.88121183 + 0.4269622i | 1.94165e−7 | 6.46531e−6 | 14
8 | 0.89561 | 0.88121183 − 0.4269622i | 2.87924e−7 | 6.46531e−6 | 11
9 | 0.95944 | −0.35983764 + 1.18135267i | 2.82843e−8 | 2.53547e−5 | 24
10 | 1.05937 | 1.03423976 | 1.80000e−7 | 1.38685e−5 | 4
11 | 1.17776 | −0.70050491 − 0.78577099i | 4.73814e−7 | 9.13799e−6 | 15
12 | 1.17796 | −0.35983764 − 1.18135267i | 4.12311e−8 | 2.53547e−5 | 17
13 | 1.17863 | −0.70050491 + 0.78577099i | 8.65332e−7 | 9.13799e−6 | 18
14 | 1.17916 | 0.58999224 + 0.86699687i | 7.05195e−7 | 6.48587e−6 | 12
15 | 1.17925 | 0.36452488 + 0.83287821i | 2.39437e−6 | 2.93179e−6 | 9
16 | 1.22278 | −0.28661369 + 0.80840642i | 5.36985e−6 | 1.06485e−6 | 9

Table 1: Results obtained using the iterative method (27).
Although the methods (25) and (27) were originally defined for polynomials, the methods can be extended to
a broader class of functions, as shown in the following examples
Example 3.2. Let the function:

$$f(x) = \sin(x) - \frac{3}{2x}, \tag{33}$$

then the following values are chosen to use the iteration function given by (27)

TOL = 1e−4, LIT = 40, δ = 0.5, x0 = 0.26, M = 1e+6,

and using the fractional derivative given by (20), we obtain the results of Table 2
m | αm | mξ | ‖mξ − m−1ξ‖₂ | ‖f(mξ)‖₂ | Rm
1 | −1.92915 | 1.50341195 | 2.80000e−7 | 2.93485e−9 | 6
2 | −0.07196 | −2.49727201 | 9.99500e−5 | 6.53301e−9 | 8
3 | −0.03907 | −1.50341194 | 6.29100e−5 | 4.37493e−9 | 7
4 | 0.19786 | −18.92888307 | 4.00000e−8 | 1.97203e−9 | 20
5 | 0.20932 | −9.26211143 | 9.60000e−7 | 4.77196e−9 | 12
6 | 0.2097 | −15.61173324 | 5.49000e−6 | 2.05213e−9 | 18
7 | 0.20986 | −12.6848988 | 3.68000e−5 | 3.29282e−9 | 15
8 | 0.21105 | −6.51548968 | 9.67100e−5 | 2.05247e−9 | 10
9 | 0.21383 | −21.92267274 | 6.40000e−6 | 2.03986e−8 | 24
10 | 1.19522 | 6.51548968 | 7.24900e−5 | 2.05247e−9 | 13
11 | 1.19546 | 9.26211143 | 1.78200e−5 | 4.77196e−9 | 14
12 | 1.19558 | 12.6848988 | 7.92100e−5 | 3.29282e−9 | 14
13 | 1.19567 | 15.61173324 | 7.90000e−7 | 2.05213e−9 | 12
14 | 1.1957 | 18.92888307 | 1.00000e−8 | 1.97203e−9 | 12
15 | 1.19572 | 21.92267282 | 1.46400e−5 | 5.91642e−8 | 14
16 | 1.23944 | 2.4972720 | 6.30000e−7 | 9.43179e−10 | 11

Table 2: Results obtained using the iterative method (27).
In the previous example, a subset of the solution set of zeros of the function (33) was obtained, because this function has an infinite number of zeros. Using the methods (25) and (27) does not guarantee that all zeros of a function f will be found by leaving an initial condition x0 fixed and varying the orders αm of the derivative. As in the classical Newton-Raphson method, finding most of the zeros of the function will depend on giving a proper initial condition x0. If we want to find a larger subset of zeros of the function (33), there are some strategies that are usually useful, for example:

1. Change the initial condition x0.
2. Use a larger number of values αm.
3. Increase the value of M.
4. Increase the value of LIT.

In general, the last strategy is usually the most useful, but this causes the methods (25) and (27) to become more costly, because a longer runtime is required for all the values αm.
Example 3.3. Let the function:

$$f(x) = \left( x_1^2 + x_2^3 - 10,\; x_1^3 - x_2^2 - 1 \right)^T, \tag{34}$$

then the following values are chosen to use the iteration function given by (27)

TOL = 1e−4, LIT = 40, δ = 0.5, x0 = (0.88, 0.88)^T, M = 1e+6,

and using the fractional derivative given by (20), we obtain the results of Table 3
m | αm | mξ1 | mξ2 | ‖mξ − m−1ξ‖₂ | ‖f(mξ)‖₂ | Rm
1 | −0.58345 | 0.22435853 + 1.69813926i | −1.13097646 + 2.05152306i | 3.56931e−7 | 8.18915e−8 | 12
2 | −0.50253 | 0.22435853 − 1.69813926i | −1.13097646 − 2.05152306i | 1.56637e−6 | 8.18915e−8 | 10
3 | 0.74229 | −1.42715874 + 0.56940338i | −0.90233562 − 1.82561764i | 1.13040e−6 | 7.01649e−8 | 11
4 | 0.75149 | 1.35750435 + 0.86070348i | −1.1989996 − 1.71840823i | 3.15278e−7 | 4.26428e−8 | 12
5 | 0.76168 | −0.99362838 + 1.54146499i | 2.2675011 + 0.19910814i | 8.27969e−5 | 1.05527e−7 | 13
6 | 0.76213 | −0.99362838 − 1.54146499i | 2.2675011 − 0.19910815i | 2.15870e−7 | 6.41725e−8 | 15
7 | 0.77146 | −1.42715874 − 0.56940338i | −0.90233562 + 1.82561764i | 3.57132e−6 | 7.01649e−8 | 15
8 | 0.78562 | 1.35750435 − 0.86070348i | −1.1989996 + 1.71840823i | 3.16228e−8 | 4.26428e−8 | 17
9 | 1.22739 | 1.67784847 | 1.92962117 | 9.99877e−5 | 2.71561e−8 | 4

Table 3: Results obtained using the iterative method (27).
Example 3.4. Let the function:

$$f(x) = \left( x_1^2 + x_2 - 37,\; x_1 - x_2^2 - 5,\; x_1 + x_2 + x_3 - 3 \right)^T, \tag{35}$$

then the following values are chosen to use the iteration function given by (27)

TOL = 1e−4, LIT = 40, δ = 0.5, x0 = (4.35, 4.35, 4.35)^T, M = 1e+6,

and using the fractional derivative given by (20), we obtain the results of Table 4
m | αm | mξ1 | mξ2 | mξ3 | ‖mξ − m−1ξ‖₂ | ‖f(mξ)‖₂ | Rm
1 | 0.78928 | −6.08553731 + 0.27357884i | 0.04108101 + 3.32974848i | 9.04445631 − 3.60332732i | 6.42403e−5 | 3.67448e−8 | 14
2 | 0.79059 | −6.08553731 − 0.27357884i | 0.04108101 − 3.32974848i | 9.04445631 + 3.60332732i | 1.05357e−7 | 3.67448e−8 | 15
3 | 0.8166 | 6.17107462 | −1.08216201 | −2.08891261 | 6.14760e−5 | 4.45820e−8 | 9
4 | 0.83771 | 6.0 | 1.0 | −4.0 | 3.38077e−6 | 0.0 | 6

Table 4: Results obtained using the iterative method (27).
3.2. Fractional Quasi-Newton Method
Although the previous methods are useful for finding multiple zeros of a function f, they have the disadvantage that in many cases calculating the fractional derivative of a function is not a simple task. To try to minimize this problem, we use the fact that, for many definitions of the fractional derivative, the derivative of arbitrary order of a constant is not always zero, that is,

$$\frac{\partial^\alpha}{\partial [x]_k^\alpha}\, c \neq 0, \quad c = \text{constant}. \tag{36}$$
Then, we may define the function

$$g_f(x) := f(x_0) + f^{(1)}(x_0)\,x, \tag{37}$$

it should be noted that the previous function is almost a linear approximation of the function f at the initial condition x0. Then, for any fractional derivative that satisfies the condition (36), and using (23), the Fractional Quasi-Newton Method may be defined as

$$x_{i+1} := \Phi(\alpha, x_i) = x_i - \left( Q_{g_f,\beta}(x_i) \right)^{-1} f(x_i), \quad i = 0,1,2,\cdots, \tag{38}$$

with $Q_{g_f,\beta}(x_i)$ given by the following matrix
$$Q_{g_f,\beta}(x_i) := \big([Q_{g_f,\beta}]_{jk}(x_i)\big) = \left( \partial_k^{\beta(\alpha,[x_i]_k)}[g_f]_j(x_i) \right), \tag{39}$$

where the function β is defined as follows

$$\beta(\alpha,[x_i]_k) := \begin{cases} \alpha, & \text{if } |[x_i]_k| \neq 0, \\ 1, & \text{if } |[x_i]_k| = 0. \end{cases} \tag{40}$$
Since the iteration function (38), in the same way as (25), does not satisfy the condition (12), any sequence $\{x_i\}_{i=0}^{\infty}$ generated by this iteration function has at most an order of convergence (at least) linear. As a consequence, the speed of convergence is slower compared to what would be obtained when using (27), and it is then necessary to use a larger value of LIT. It should be mentioned that the value α = 1 in (40) is not taken to try to guarantee an order of convergence, as in (26), but to avoid the discontinuity that is generated when using the fractional derivative of constants at the value $[x_i]_k = 0$. An example is given using the fractional quasi-Newton method, where a subset of the solution set of zeros of the function f is found.
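Before presenting the example, the sketch below shows one possible reading of (37)-(40) in Julia, assuming the Riemann-Liouville rule (20) is applied term by term to g_f (the part that is constant in [x]_k with µ = 0, the linear term with µ = 1); the Jacobian at x₀ is supplied analytically and all names are illustrative, not the authors' implementation.

```julia
using SpecialFunctions, LinearAlgebra   # gamma, norm, backslash solver

# Fractional quasi-Newton iteration (38). The matrix Q of (39) is the fractional
# Jacobian of g_f(x) = f(x₀) + f⁽¹⁾(x₀)x: in column k, the part of [g_f]_j that is
# constant in [x]_k is differentiated with µ = 0 and the term J0[j,k]·[x]_k with µ = 1.
function frac_quasi_newton(f, fx0, J0, x0, α; tol = 1e-4, maxit = 200)
    n = length(x0)
    x = Complex.(float.(x0))
    for it in 1:maxit
        fx = f(x)
        if norm(fx) ≤ tol
            return x, it
        end
        Q = zeros(ComplexF64, n, n)
        for j in 1:n, k in 1:n
            if abs(x[k]) == 0                      # β(α,[x]_k) = 1 in (40):
                Q[j, k] = J0[j, k]                 # ordinary partial derivative
            else                                   # β(α,[x]_k) = α:
                cst = fx0[j] + sum((J0[j, l] * x[l] for l in 1:n if l != k); init = 0.0 + 0.0im)
                Q[j, k] = cst * x[k]^(-α) / gamma(1 - α) +
                          J0[j, k] * x[k]^(1 - α) / gamma(2 - α)
            end
        end
        x -= Q \ fx
    end
    return x, maxit
end

# Usage sketch: frac_quasi_newton(f, f(x0), J(x0), x0, 1.08888), with f the system
# (41), J its (ordinary) Jacobian and x0 = [1.52, 1.52] as in Example 3.5.
```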
Example 3.5. Let the function:

$$f(x) = \left( \frac{1}{2}\sin(x_1 x_2) - \frac{x_2}{4\pi} - \frac{x_1}{2},\; \left(1 - \frac{1}{4\pi}\right)\left(e^{2x_1} - e\right) + \frac{e}{\pi}x_2 - 2e x_1 \right)^T, \tag{41}$$

then the following values are chosen to use the iteration function given by (38)

TOL = 1e−4, LIT = 200, x0 = (1.52, 1.52)^T, M = 1e+6,

and using the fractional derivative given by (20), we obtain the results of Table 5
m | αm | mξ1 | mξ2 | ‖mξ − m−1ξ‖₂ | ‖f(mξ)‖₂ | Rm
1 | −0.28866 | 2.21216549 − 13.25899819i | 0.41342314 + 3.94559327i | 1.18743e−7 | 9.66251e−5 | 163
2 | 1.08888 | 1.29436489 | −3.13720898 | 1.89011e−6 | 9.38884e−5 | 51
3 | 1.14618 | 1.43395246 | −6.82075021 | 2.24758e−6 | 9.74642e−5 | 94
4 | 1.33394 | 0.50000669 | 3.14148062 | 9.74727e−6 | 9.99871e−5 | 133
5 | 1.35597 | 0.29944016 | 2.83696105 | 8.55893e−5 | 4.66965e−5 | 8
6 | 1.3621 | 1.5305078 | −10.20223066 | 2.38437e−6 | 9.88681e−5 | 120
7 | 1.37936 | 1.60457254 | −13.36288413 | 2.32459e−6 | 9.52348e−5 | 93
8 | 1.88748 | −0.26061324 | 0.62257513 | 2.69146e−5 | 9.90792e−5 | 21

Table 5: Results obtained using the iterative method (38).
4. Conclusions
The fractional Newton-Raphson method and its variants are useful for finding multiple solutions of nonlinear systems in the complex space using real initial conditions. However, it should be clarified that they present some advantages and disadvantages with respect to each other; for example, although the fractional Newton method generally has an order of convergence (at least) quadratic, this method has the disadvantage that it is not an easy task to find the fractional Jacobian matrix for many functions, and the need to invert this matrix for each new iteration must also be added. But it has an advantage over the other methods: it can be used with few iterations, which allows using a greater number of αm values belonging to the partition of the interval (−2,2).
The fractional quasi-Newton method has the advantage that the fractional Jacobian matrix with which it works is, compared to the fractional Newton method, easy to obtain. But a disadvantage is that the method may have at most an order of convergence (at least) linear, so the speed of convergence is lower and it is necessary to use a greater number of iterations to ensure success in the search for solutions. As a consequence, the method is more costly, because it requires a longer runtime to use all the values αm. An additional advantage of the method is that
if the initial condition is close enough to a solution, its behavior is very similar to that of the fractional Newton-Raphson method and it may converge with a relatively small number of iterations, but it still has the disadvantage that we need to invert a matrix in each iteration.
These methods may solve some nonlinear systems and are quite efficient at finding multiple solutions, both real and complex, using real initial conditions. It should be mentioned that these methods are especially recommended for systems that have an infinite number of solutions or a large number of them.
All the examples in this document were created using the Julia language (version 1.3.1). It is necessary to mention that, for future works, it is intended to use the numerical methods presented here in applications related to physics and engineering.
References
[1] F. Brambila-Paz and A. Torres-Hernandez. Fractional Newton-Raphson method. arXiv preprint arXiv:1710.07634, 2017. https://arxiv.org/pdf/1710.07634.pdf.
[2] F. Brambila-Paz, A. Torres-Hernandez, U. Iturrarán-Viveros, and R. Caballero-Cruz. Fractional Newton-Raphson method accelerated with Aitken's method. arXiv preprint arXiv:1804.08445, 2018. https://arxiv.org/pdf/1804.08445.pdf.
[3] Josef Stoer and Roland Bulirsch. Introduction to numerical analysis, volume 12. Springer Science & Business
Media, 2013.
[4] Robert Plato. Concise numerical mathematics. Number 57. American Mathematical Soc., 2003.
[5] James M Ortega. Numerical analysis: a second course. SIAM, 1990.
[6] Richard L. Burden and J. Douglas Faires. Análisis numérico. Thomson Learning, 2002.
[7] James M Ortega and Werner C Rheinboldt. Iterative solution of nonlinear equations in several variables, volume 30. SIAM, 1970.
[8] Rudolf Hilfer. Applications of fractional calculus in physics. World Scientific, 2000.
[9] AA Kilbas, HM Srivastava, and JJ Trujillo. Theory and Applications of Fractional Differential Equations. Elsevier,
2006.
[10] A. Torres-Hernandez, F. Brambila-Paz, and C. Torres-Martínez. Proposal for use the fractional derivative of radial functions in interpolation problems. arXiv preprint arXiv:1906.03760, 2019. https://arxiv.org/pdf/1906.03760.pdf.

[11] Carlos Alberto Torres Martínez and Carlos Fuentes. Applications of radial basis function schemes to fractional partial differential equations. Fractal Analysis: Applications in Physics, Engineering and Technology, 2017. https://www.intechopen.com/books/fractal-analysis-applications-in-physics-engineering-and-technology.

[12] Benito F. Martínez-Salgado, Rolando Rosas-Sampayo, Anthony Torres-Hernández, and Carlos Fuentes. Application of fractional calculus to oil industry. Fractal Analysis: Applications in Physics, Engineering and Technology, 2017. https://www.intechopen.com/books/fractal-analysis-applications-in-physics-engineering-and-technology.
More Related Content

PDF
lassomodel, sparsity, multivariate modeling, NIR spectroscopy, biodiesel from...
PDF
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
PDF
PDF
Existance Theory for First Order Nonlinear Random Dfferential Equartion
PDF
Numerical
PPT
MATLAB ODE
PDF
He laplace method for special nonlinear partial differential equations
PDF
Welcome to International Journal of Engineering Research and Development (IJERD)
lassomodel, sparsity, multivariate modeling, NIR spectroscopy, biodiesel from...
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
Existance Theory for First Order Nonlinear Random Dfferential Equartion
Numerical
MATLAB ODE
He laplace method for special nonlinear partial differential equations
Welcome to International Journal of Engineering Research and Development (IJERD)

What's hot (19)

PDF
Boyd chap10
PDF
MetiTarski: An Automatic Prover for Real-Valued Special Functions
PPTX
ROOT OF NON-LINEAR EQUATIONS
DOC
Applications of differential equations by shahzad
PPSX
Ch 05 MATLAB Applications in Chemical Engineering_陳奇中教授教學投影片
PDF
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...
DOCX
stochastic notes
PPT
Partial
PDF
R180304110115
PPTX
Cramer row inequality
PDF
Berans qm overview
PDF
Numerical_PDE_Paper
PPTX
Presentation on Matlab pde toolbox
PDF
Senior Seminar: Systems of Differential Equations
PPTX
Systems Of Differential Equations
PDF
Free Ebooks Download ! Edhole
PPT
Limits And Derivative
PPTX
Matlab Assignment Help
PDF
Introduction to the theory of optimization
Boyd chap10
MetiTarski: An Automatic Prover for Real-Valued Special Functions
ROOT OF NON-LINEAR EQUATIONS
Applications of differential equations by shahzad
Ch 05 MATLAB Applications in Chemical Engineering_陳奇中教授教學投影片
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...
stochastic notes
Partial
R180304110115
Cramer row inequality
Berans qm overview
Numerical_PDE_Paper
Presentation on Matlab pde toolbox
Senior Seminar: Systems of Differential Equations
Systems Of Differential Equations
Free Ebooks Download ! Edhole
Limits And Derivative
Matlab Assignment Help
Introduction to the theory of optimization
Ad

Similar to Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlinear Systems (20)

PDF
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
PDF
Fractional Newton-Raphson Method The Newton-Raphson
PDF
AJMS_402_22_Reprocess_new.pdf
PDF
Mit18 330 s12_chapter4
PDF
B02110105012
PDF
The International Journal of Engineering and Science (The IJES)
PDF
Uniformity of the Local Convergence of Chord Method for Generalized Equations
PDF
A Comparison Of Iterative Methods For The Solution Of Non-Linear Systems Of E...
PPTX
ppt.pptx fixed point iteration method no
PDF
Roots equations
PDF
Roots equations
PDF
Nonlinear_system,Nonlinear_system, Nonlinear_system.pdf
PDF
Ijetcas14 546
PDF
Fractional pseudo-Newton method and its use in the solution of a nonlinear sy...
PDF
Fractional pseudo-Newton method and its use in the solution of a nonlinear sy...
PDF
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
PDF
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
PDF
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
PDF
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
PDF
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
Fractional Newton-Raphson Method The Newton-Raphson
AJMS_402_22_Reprocess_new.pdf
Mit18 330 s12_chapter4
B02110105012
The International Journal of Engineering and Science (The IJES)
Uniformity of the Local Convergence of Chord Method for Generalized Equations
A Comparison Of Iterative Methods For The Solution Of Non-Linear Systems Of E...
ppt.pptx fixed point iteration method no
Roots equations
Roots equations
Nonlinear_system,Nonlinear_system, Nonlinear_system.pdf
Ijetcas14 546
Fractional pseudo-Newton method and its use in the solution of a nonlinear sy...
Fractional pseudo-Newton method and its use in the solution of a nonlinear sy...
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
A NEW STUDY TO FIND OUT THE BEST COMPUTATIONAL METHOD FOR SOLVING THE NONLINE...
Ad

More from mathsjournal (20)

PDF
DID FISHING NETS WITH CALCULATED SHELL WEIGHTS PRECEDE THE BOW AND ARROW? DIG...
PDF
MULTIPOINT MOVING NODES FOR P ARABOLIC EQUATIONS
PDF
THE VORTEX IMPULSE THEORY FOR FINITE WINGS
PDF
On Ideals via Generalized Reverse Derivation On Factor Rings
PDF
A PROBABILISTIC ALGORITHM FOR COMPUTATION OF POLYNOMIAL GREATEST COMMON WITH ...
PDF
DID FISHING NETS WITH CALCULATED SHELL WEIGHTS PRECEDE THE BOW AND ARROW? DIG...
PDF
COMMON FIXED POINT THEOREMS IN COMPATIBLE MAPPINGS OF TYPE (P*) OF GENERALIZE...
PDF
MODIFIED ALPHA-ROOTING COLOR IMAGE ENHANCEMENT METHOD ON THE TWO-SIDE 2-DQUAT...
PDF
APPROXIMATE ANALYTICAL SOLUTION OF NON-LINEAR BOUSSINESQ EQUATION FOR THE UNS...
PDF
On Nano Semi Generalized B - Neighbourhood in Nano Topological Spaces
PDF
A Mathematical Model in Public Health Epidemiology: Covid-19 Case Resolution ...
PDF
On a Diophantine Proofs of FLT: The First Case and the Secund Case z≡0 (mod p...
PDF
APPROXIMATE ANALYTICAL SOLUTION OF NON-LINEAR BOUSSINESQ EQUATION FOR THE UNS...
PDF
MODELING OF REDISTRIBUTION OF INFUSED DOPANT IN A MULTILAYER STRUCTURE DOPANT...
PDF
Numerical solution of fuzzy differential equations by Milne’s predictor-corre...
PDF
A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...
PDF
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
PDF
SENTIMENT ANALYSIS OF COMPUTER SCIENCE STUDENTS’ ATTITUDES TOWARD PROGRAMMING...
PDF
SENTIMENT ANALYSIS OF COMPUTER SCIENCE STUDENTS’ ATTITUDES TOWARD PROGRAMMING...
PDF
OPTIMIZATION OF WEIGHT FUNCTION FOR (3+1)D PHONON PROPAGATION IN WEYL FERMION...
DID FISHING NETS WITH CALCULATED SHELL WEIGHTS PRECEDE THE BOW AND ARROW? DIG...
MULTIPOINT MOVING NODES FOR P ARABOLIC EQUATIONS
THE VORTEX IMPULSE THEORY FOR FINITE WINGS
On Ideals via Generalized Reverse Derivation On Factor Rings
A PROBABILISTIC ALGORITHM FOR COMPUTATION OF POLYNOMIAL GREATEST COMMON WITH ...
DID FISHING NETS WITH CALCULATED SHELL WEIGHTS PRECEDE THE BOW AND ARROW? DIG...
COMMON FIXED POINT THEOREMS IN COMPATIBLE MAPPINGS OF TYPE (P*) OF GENERALIZE...
MODIFIED ALPHA-ROOTING COLOR IMAGE ENHANCEMENT METHOD ON THE TWO-SIDE 2-DQUAT...
APPROXIMATE ANALYTICAL SOLUTION OF NON-LINEAR BOUSSINESQ EQUATION FOR THE UNS...
On Nano Semi Generalized B - Neighbourhood in Nano Topological Spaces
A Mathematical Model in Public Health Epidemiology: Covid-19 Case Resolution ...
On a Diophantine Proofs of FLT: The First Case and the Secund Case z≡0 (mod p...
APPROXIMATE ANALYTICAL SOLUTION OF NON-LINEAR BOUSSINESQ EQUATION FOR THE UNS...
MODELING OF REDISTRIBUTION OF INFUSED DOPANT IN A MULTILAYER STRUCTURE DOPANT...
Numerical solution of fuzzy differential equations by Milne’s predictor-corre...
A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
SENTIMENT ANALYSIS OF COMPUTER SCIENCE STUDENTS’ ATTITUDES TOWARD PROGRAMMING...
SENTIMENT ANALYSIS OF COMPUTER SCIENCE STUDENTS’ ATTITUDES TOWARD PROGRAMMING...
OPTIMIZATION OF WEIGHT FUNCTION FOR (3+1)D PHONON PROPAGATION IN WEYL FERMION...

Recently uploaded (20)

PDF
Abdominal Access Techniques with Prof. Dr. R K Mishra
PPTX
GDM (1) (1).pptx small presentation for students
PPTX
Renaissance Architecture: A Journey from Faith to Humanism
PDF
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
PDF
Pre independence Education in Inndia.pdf
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPTX
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PDF
RMMM.pdf make it easy to upload and study
PDF
Microbial disease of the cardiovascular and lymphatic systems
PDF
TR - Agricultural Crops Production NC III.pdf
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
PPTX
master seminar digital applications in india
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PDF
Computing-Curriculum for Schools in Ghana
PDF
2.FourierTransform-ShortQuestionswithAnswers.pdf
Abdominal Access Techniques with Prof. Dr. R K Mishra
GDM (1) (1).pptx small presentation for students
Renaissance Architecture: A Journey from Faith to Humanism
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
Pre independence Education in Inndia.pdf
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
O5-L3 Freight Transport Ops (International) V1.pdf
RMMM.pdf make it easy to upload and study
Microbial disease of the cardiovascular and lymphatic systems
TR - Agricultural Crops Production NC III.pdf
102 student loan defaulters named and shamed – Is someone you know on the list?
Supply Chain Operations Speaking Notes -ICLT Program
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
master seminar digital applications in india
Pharmacology of Heart Failure /Pharmacotherapy of CHF
Microbial diseases, their pathogenesis and prophylaxis
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
Computing-Curriculum for Schools in Ghana
2.FourierTransform-ShortQuestionswithAnswers.pdf

Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlinear Systems

  • 1. Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlinear Systems A. Torres-Hernandez ,a, F. Brambila-Paz †,b, and E. De-la-Vega ‡,c aDepartment of Physics, Science Faculty - UNAM, Mexico bDepartment of Mathematics, Science Faculty - UNAM, Mexico cIngenieria, Universidad Panamericana-Aguascalientes, Mexico Abstract The following document presents some novel numerical methods valid for one and several variables, which using the fractional derivative, allow us to find solutions for some nonlinear systems in the complex space using real initial conditions. The origin of these methods is the fractional Newton-Raphson method, but unlike the latter, the orders proposed here for the fractional derivatives are functions. In the first method, a function is used to guarantee an order of convergence (at least) quadratic, and in the other, a function is used to avoid the discontinuity that is generated when the fractional derivative of the constants is used, and with this, it is possible that the method has at most an order of convergence (at least) linear. Keywords: Iteration Function, Order of Convergence, Fractional Derivative. 1. Introduction When starting to study the fractional calculus, the first difficulty is that, when wanting to solve a problem related to physical units, such as determining the velocity of a particle, the fractional derivative seems to make no sense, this is due to the presence of physical units such as meter and second raised to non-integer exponents, opposite to what happens with operators of integer order. The second difficulty, which is a recurring topic of debate in the study of fractional calculus, is to know what is the order “optimal” α that should be used when the goal is to solve a problem related to fractional operators. To face these difficulties, in the first case, it is common to dimensionless any equation in which non-integer operators are involved, while for the second case different orders α are used in fractional operators to solve some problem, and then it is chosen the order α that provides the “best solution” based about an established criteria. Based on the two previous difficulties, arises the idea of looking for applications with dimensionless nature such that the need to use multiple orders α can be exploited in some way. The aforementioned led to the study of Newton-Raphson method and a particular problem related to the search for roots in the complex space for polynomials: if it is needed to find a complex root of a polynomial using Newton-Raphson method, it is necessary to provide a complex initial condition x0 and, if the right conditions are selected, this leads to a complex solution, but there is also the possibility that this leads to a real solution. If the root obtained is real, it is necessary to change the initial condition and expect that this leads to a complex solution, otherwise, it is necessary to change the value of the initial condition again. The process described above, it is very similar to what happens when using different values α in fractional operators until we find a solution that meets some desired condition. Seeing Newton-Raphson method from the perspective of fractional calculus, one can consider that an order α remains fixed, in this case α = 1, and the initial conditions x0 are varied until obtaining one solution that satisfies an established criteria. 
Then reversing the be- havior of α and x0, that is, leave the initial condition fixed and varying the order α, the fractional Newton-Raphson method [1, 2] is obtained, which is nothing other things than Newton-Raphson method using any definition of Applied Mathematics and Sciences: An International Journal (MathSJ), Vol. 7, No. 1, March 2020 DOI :10.5121/mathsj.2020.7102 13
  • 2. fractional derivative that fits the function with which one is working. This change, although in essence simple, allows us to find roots in the complex space using real initial conditions because fractional operators generally do not carry polynomials to polynomials. 1.1. Fixed Point Method A classic problem in applied mathematics is to find the zeros of a function f : Ω ⊂ Rn → Rn, that is, {ξ ∈ Ω : f (ξ) = 0}, this problem often arises as a consequence of wanting to solve other problems, for instance, if we want to determine the eigenvalues of a matrix or want to build a box with a given volume but with minimal surface; in the first example, we need to find the zeros (or roots) of the characteristic polynomial of the matrix, while in the second one we need to find the zeros of the gradient of a function that relates the surface of the box with its volume. Although finding the zeros of a function may seem like a simple problem, in general, it involves solving non- linear equations, which in many cases does not have an analytical solution, an example of this is present when we are trying to determine the zeros of the following function f (x) = sin(x) − 1 x . Because in many cases there is no analytical solution, numerical methods are needed to try to determine the solutions to these problems; it should be noted that when using numerical methods, the word “determine” should be interpreted as approach a solution with a degree of precision desired. The numerical methods mentioned above are usually of the iterative type and work as follows: suppose we have a function f : Ω ⊂ Rn → Rn and we search a value ξ ∈ Rn such that f (ξ) = 0, then we can start by giving an initial value x0 ∈ Rn and then calculate a value xi close to the searched value ξ using an iteration function Φ : Rn → Rn as follows [3] xi+1 := Φ(xi), i = 0,1,2,··· , (1) this generates a sequence {xi}∞ i=0, which under certain conditions satisfies that lim i→∞ xi → ξ. To understand the convergence of the iteration function Φ it is necessary to have the following definition [4]: Definition 1.1. Let Φ : Rn → Rn be an iteration function. The method given in (1) to determine ξ ∈ Rn, it is called (locally) convergent, if exists δ > 0 such that for all initial value x0 ∈ B(ξ;δ) := y ∈ Rn : y − ξ < δ , it holds that lim i→∞ xi − ξ → 0 ⇒ lim i→∞ xi = ξ, (2) where · : Rn → R denotes any vector norm. When it is assumed that the iteration function Φ is continuous at ξ and that the sequence {xi}∞ i=0 converges to ξ under the condition given in (2), it is true that ξ = lim i→∞ xi+1 = lim i→∞ Φ(xi) = Φ lim i→∞ xi = Φ(ξ), (3) the previous result is the reason why the method given in (1) is called Fixed Point Method [4]. Applied Mathematics and Sciences: An International Journal (MathSJ), Vol. 7, No. 1, March 2020 14
  • 3. 1.1.1. Convergence and Order of Convergence The (local) convergence of the iteration function Φ established in (2), it is useful for demonstrating certain intrinsic properties of the fixed point method. Before continuing it is necessary to have the following definition [3] Definition 1.2. Let Φ : Ω ⊂ Rn → Rn. The function Φ is a contraction on a set Ω0 ⊂ Ω, if exists a non-negative constant β < 1 such that Φ(x) − Φ(y) ≤ β x − y , ∀x,y ∈ Ω0, (4) where β is called a contraction constant. The previous definition guarantee that if the iteration function Φ is a contraction on a set Ω0, then it has at least one fixed point. The existence of a fixed point is guaranteed by the following theorem [5] Theorem 1.3. Contraction Mapping Theorem: Let Φ : Ω ⊂ Rn → Rn. Assuming that Φ is a contraction on a closed set Ω0 ⊂ Ω, and that Φ(x) ∈ Ω0 ∀x ∈ Ω0. Then Φ has a unique fixed point ξ ∈ Ω0 and for any initial value x0 ∈ Ω0, the sequence {xi}∞ i=0 generated by (1) converges to ξ. Moreover xk+1 − ξ ≤ β 1 − β xk+1 − xk , k = 0,1,2,··· , (5) where β is the contraction constant given in (4). When the fixed point method given by (1) is used, in addition to convergence, exists a special interest in the order of convergence, which is defined as follows [4] Definition 1.4. Let Φ : Ω ⊂ Rn → Rn be an iteration function with a fixed point ξ ∈ Ω. Then the method (1) is called (locally) convergent of (at least) order p (p ≥ 1), if exists δ > 0 and exists a non-negative constant C (with C < 1 if p = 1) such that for any initial value x0 ∈ B(ξ;δ) it is true that xk+1 − ξ ≤ C xk − ξ p , k = 0,1,2,··· , (6) where C is called convergence factor. The order of convergence is usually related to the speed at which the sequence generated by (1) converges. For the particular cases p = 1 or p = 2 it is said that the method has (at least) linear or quadratic convergence, respec- tively. The following theorem for the one-dimensional case [4], allows characterizing the order of convergence of an iteration function Φ with its derivatives Theorem 1.5. Let Φ : Ω ⊂ R → R be an iteration function with a fixed point ξ ∈ Ω. Assuming that Φ is p-times differentiable in ξ for some p ∈ N, and furthermore Φ(k)(ξ) = 0, ∀k ≤ p − 1, if p ≥ 2, Φ(1)(ξ) < 1, if p = 1, (7) then Φ is (locally) convergent of (at least) order p. Proof. Assuming that Φ : Ω ⊂ R → R is an iteration function p-times differentiable with a fixed point ξ ∈ Ω, then we can expand in Taylor series the function Φ(xi) around ξ and order p Φ(xi) = Φ(ξ) + p s=1 Φ(s)(ξ) s! (xi − ξ)s + o((xi − ξ)p ), then we obtain Applied Mathematics and Sciences: An International Journal (MathSJ), Vol. 7, No. 1, March 2020 15
The following theorem for the one-dimensional case [4] allows characterizing the order of convergence of an iteration function Φ through its derivatives.

Theorem 1.5. Let Φ : Ω ⊂ R → R be an iteration function with a fixed point ξ ∈ Ω. Assume that Φ is p-times differentiable at ξ for some p ∈ N and, furthermore, that

    Φ^(k)(ξ) = 0  ∀k ≤ p − 1,  if p ≥ 2,
    |Φ^(1)(ξ)| < 1,            if p = 1;    (7)

then Φ is (locally) convergent of (at least) order p.

Proof. Assume that Φ : Ω ⊂ R → R is an iteration function p-times differentiable with a fixed point ξ ∈ Ω. Expanding the function Φ(x_i) in a Taylor series around ξ up to order p,

    Φ(x_i) = Φ(ξ) + Σ_{s=1}^{p} ( Φ^(s)(ξ) / s! ) (x_i − ξ)^s + o((x_i − ξ)^p),

we obtain

    |Φ(x_i) − Φ(ξ)| ≤ Σ_{s=1}^{p} ( |Φ^(s)(ξ)| / s! ) |x_i − ξ|^s + o(|x_i − ξ|^p).

Assuming that the sequence {x_i}_{i=0}^∞ generated by (1) converges to ξ, and also that Φ^(s)(ξ) = 0 for all s < p, the previous expression implies that

    |x_{i+1} − ξ| / |x_i − ξ|^p = |Φ(x_i) − Φ(ξ)| / |x_i − ξ|^p ≤ |Φ^(p)(ξ)| / p! + o(|x_i − ξ|^p) / |x_i − ξ|^p  →  |Φ^(p)(ξ)| / p!  as i → ∞;

as a consequence, there exists a value k > 0 such that

    |x_{i+1} − ξ| ≤ ( |Φ^(p)(ξ)| / p! ) |x_i − ξ|^p,    ∀i ≥ k.

A version of the previous theorem for the n-dimensional case may be found in the reference [3].

1.2. Newton-Raphson Method

The previous theorem, in its n-dimensional form, is very useful for generating a fixed point method with a desired order of convergence; an order of convergence that is usually appreciated in iterative methods is the (at least) quadratic order. If we have a function f : Ω ⊂ R^n → R^n and we search for a value ξ ∈ Ω such that f(ξ) = 0, we may build an iteration function Φ in the general form [6]

    Φ(x) = x − A(x) f(x),    (8)

with A(x) a matrix of the form

    A(x) := ( [A]_jk(x) ) =
        ⎛ [A]_11(x)  [A]_12(x)  ···  [A]_1n(x) ⎞
        ⎜ [A]_21(x)  [A]_22(x)  ···  [A]_2n(x) ⎟
        ⎜     ⋮          ⋮       ⋱       ⋮     ⎟
        ⎝ [A]_n1(x)  [A]_n2(x)  ···  [A]_nn(x) ⎠ ,    (9)

where [A]_jk : R^n → R (1 ≤ j, k ≤ n). Notice that the matrix A(x) is determined according to the desired order of convergence. Before continuing, it is necessary to mention that the following conditions are needed:

1. We suppose that Theorem 1.5 can be generalized to the n-dimensional case, although for this it is necessary to guarantee that the iteration function Φ given by (8) can be expressed, near the value ξ, in terms of its Taylor series in several variables.

2. It is necessary to guarantee that the norm of the equivalent of the first derivative in several variables of the iteration function Φ tends to zero near the value ξ.

We will assume that the first condition is satisfied; for the second condition, the equivalent of the first derivative in several variables is the Jacobian matrix of the function Φ, which is defined as follows [5]

    Φ^(1)(x) := ( [Φ]^(1)_jk(x) ) =
        ⎛ ∂_1[Φ]_1(x)  ∂_2[Φ]_1(x)  ···  ∂_n[Φ]_1(x) ⎞
        ⎜ ∂_1[Φ]_2(x)  ∂_2[Φ]_2(x)  ···  ∂_n[Φ]_2(x) ⎟
        ⎜      ⋮            ⋮        ⋱        ⋮      ⎟
        ⎝ ∂_1[Φ]_n(x)  ∂_2[Φ]_n(x)  ···  ∂_n[Φ]_n(x) ⎠ ,    (10)
where [Φ]^(1)_jk = ∂_k[Φ]_j(x) := ( ∂ / ∂[x]_k ) [Φ]_j(x), 1 ≤ j, k ≤ n, with [Φ]_k : R^n → R the k-th component of the iteration function Φ. Now, considering that

    lim_{x→ξ} ‖Φ^(1)(x)‖ = 0  ⇒  lim_{x→ξ} ∂_k[Φ]_j(x) = 0,  ∀j, k ≤ n,    (11)

we can assume that we have a function f : Ω ⊂ R^n → R^n with a zero ξ ∈ Ω such that all of its first partial derivatives are defined at ξ. Taking the iteration function Φ given by (8), its k-th component may be written as

    [Φ]_k(x) = [x]_k − Σ_{j=1}^{n} [A]_kj(x) [f]_j(x),

and therefore

    ∂_l[Φ]_k(x) = δ_lk − Σ_{j=1}^{n} ( [A]_kj(x) ∂_l[f]_j(x) + ∂_l[A]_kj(x) [f]_j(x) ),

where δ_lk is the Kronecker delta, defined as δ_lk = 1 if l = k and δ_lk = 0 if l ≠ k. Assuming that (11) is fulfilled,

    ∂_l[Φ]_k(ξ) = δ_lk − Σ_{j=1}^{n} [A]_kj(ξ) ∂_l[f]_j(ξ) = 0  ⇒  Σ_{j=1}^{n} [A]_kj(ξ) ∂_l[f]_j(ξ) = δ_lk,  ∀l, k ≤ n,

and the previous expression may be written in matrix form as

    A(ξ) f^(1)(ξ) = I_n  ⇒  A(ξ) = ( f^(1)(ξ) )^{−1},

where f^(1) and I_n are the Jacobian matrix of the function f and the identity matrix of size n × n, respectively. Denoting by det(A) the determinant of the matrix A, any matrix A(x) that fulfills the condition

    lim_{x→ξ} A(x) = ( f^(1)(ξ) )^{−1},    det( f^(1)(ξ) ) ≠ 0,    (12)

guarantees that there exists δ > 0 such that the iteration function Φ given by (8) converges (locally) with an order of convergence (at least) quadratic in B(ξ; δ). From the previous result, the following fixed point method can be obtained

    x_{i+1} := Φ(x_i) = x_i − ( f^(1)(x_i) )^{−1} f(x_i),    i = 0, 1, 2, ···,    (13)

which is known as the Newton-Raphson method (n-dimensional), also known as Newton's method [7].
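A minimal Julia sketch of the iteration (13) is the following; the function names, the stopping criterion and the sample system are illustrative choices, and the linear system with the Jacobian is solved instead of explicitly forming its inverse.

    using LinearAlgebra

    # Sketch of the Newton-Raphson method (13) for f : R^n -> R^n; the Jacobian
    # jac(x) of the function f must be supplied by the user.
    function newton_raphson(f, jac, x0; tol = 1e-10, maxiter = 50)
        x = copy(x0)
        for i in 1:maxiter
            fx = f(x)
            norm(fx) < tol && return x, i
            x -= jac(x) \ fx          # solve f'(x) Δ = f(x) rather than inverting f'(x)
        end
        return x, maxiter
    end

    # Example: intersection of the circle x1^2 + x2^2 = 4 with the line x1 = x2.
    f(x)   = [x[1]^2 + x[2]^2 - 4, x[1] - x[2]]
    jac(x) = [2.0*x[1]  2.0*x[2]; 1.0  -1.0]
    newton_raphson(f, jac, [1.0, 0.5])   # converges to (√2, √2)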
Although the condition given in (12) might suggest that the Newton-Raphson method always has an order of convergence (at least) quadratic, unfortunately this is not true; the order of convergence is conditioned by the way in which the function f is constituted, as may be appreciated in the following proposition.

Proposition 1.6. Let f : Ω ⊂ R → R be a function that is at least twice differentiable at ξ ∈ Ω. If ξ is a zero of f with algebraic multiplicity m (m ≥ 2), that is,

    f(x) = (x − ξ)^m g(x),    g(ξ) ≠ 0,

then the Newton-Raphson method (one-dimensional) has an order of convergence (at least) linear.

Proof. Suppose we have a function f : Ω ⊂ R → R with a zero ξ ∈ Ω of algebraic multiplicity m ≥ 2, and that f is at least twice differentiable at ξ; then

    f(x) = (x − ξ)^m g(x),    g(ξ) ≠ 0,
    f^(1)(x) = (x − ξ)^{m−1} ( m g(x) + (x − ξ) g^(1)(x) ).

As a consequence, the derivative of the iteration function Φ of the Newton-Raphson method may be expressed as

    Φ^(1)(x) = 1 − ( m g^2(x) + (x − ξ)^2 ( (g^(1)(x))^2 − g(x) g^(2)(x) ) ) / ( m g(x) + (x − ξ) g^(1)(x) )^2,

and therefore

    lim_{x→ξ} Φ^(1)(x) = 1 − 1/m,

so by Theorem 1.5 the Newton-Raphson method, under the hypotheses of the proposition, converges (locally) with an order of convergence (at least) linear.

2. Basic Definitions of the Fractional Derivative

2.1. Introduction to the Definition of Riemann-Liouville

One of the key pieces in the study of fractional calculus is the iterated integral, which is defined as follows [8].

Definition 2.1. Let L^1_loc(a, b) be the space of locally integrable functions on the interval (a, b). If f is a function such that f ∈ L^1_loc(a, ∞), then the n-th iterated integral of the function f is given by

    aI^n_x f(x) = aI_x ( aI^{n−1}_x f(x) ) = ( 1 / (n − 1)! ) ∫_a^x (x − t)^{n−1} f(t) dt,    (14)

where aI_x f(x) := ∫_a^x f(t) dt.

Considering that (n − 1)! = Γ(n), a generalization of (14) may be obtained for an arbitrary order α > 0,

    aI^α_x f(x) = ( 1 / Γ(α) ) ∫_a^x (x − t)^{α−1} f(t) dt;    (15)

similarly, if f ∈ L^1_loc(−∞, b), we may define

    xI^α_b f(x) = ( 1 / Γ(α) ) ∫_x^b (t − x)^{α−1} f(t) dt.    (16)

The equations (15) and (16) correspond to the definitions of the right and left fractional integral of Riemann-Liouville, respectively.
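The integral (15) can also be approximated numerically for a generic f. The sketch below (with names and quadrature choices of our own) uses the substitution v = (x − t)^α, which removes the singularity of the kernel at t = x and, for a = 0, turns (15) into ( 1 / Γ(α + 1) ) ∫_0^{x^α} f(x − v^{1/α}) dv; the result is then checked against the closed form for a monomial.

    using SpecialFunctions   # provides gamma

    # Rough numerical evaluation of the fractional integral (15) with a = 0,
    # via the substitution v = (x - t)^α and a simple midpoint rule.
    function rl_integral(f, α, x; nodes = 10_000)
        h = x^α / nodes
        h * sum(f(x - v^(1 / α)) for v in (0.5:nodes) .* h) / gamma(α + 1)
    end

    # For f(t) = t^2 the closed form is 0I^α_x x^2 = Γ(3)/Γ(3 + α) x^(2 + α).
    α, x = 0.5, 2.0
    rl_integral(t -> t^2, α, x)             # ≈ 3.4043
    gamma(3) / gamma(3 + α) * x^(2 + α)     # ≈ 3.4043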
The fractional integrals satisfy the semigroup property, which is given in the following proposition [8].

Proposition 2.2. Let f be a function. If f ∈ L^1_loc(a, ∞), then the fractional integrals of f satisfy

    aI^α_x aI^β_x f(x) = aI^{α+β}_x f(x),    α, β > 0.    (17)

From the previous result, and considering that the operator d/dx is the inverse operator to the left of the operator aI_x, any α-th integral of a function f ∈ L^1_loc(a, ∞) may be written as

    aI^α_x f(x) = ( d^n / dx^n ) ( aI^n_x aI^α_x f(x) ) = ( d^n / dx^n ) ( aI^{n+α}_x f(x) ).    (18)

Considering (15) and (18), we can build the operator Fractional Derivative of Riemann-Liouville, aD^α_x, as follows [8, 9]

    aD^α_x f(x) := aI^{−α}_x f(x),                      if α < 0,
                   ( d^n / dx^n ) ( aI^{n−α}_x f(x) ),  if α ≥ 0,    (19)

where n = ⌊α⌋ + 1. Applying the operator (19) with a = 0 and α ∈ R ∖ Z to the function x^μ, we obtain

    0D^α_x x^μ = ( (−1)^α Γ(−(μ + α)) / Γ(−μ) ) x^{μ−α},  if μ ≤ −1,
                 ( Γ(μ + 1) / Γ(μ − α + 1) ) x^{μ−α},      if μ > −1.    (20)

2.2. Introduction to the Definition of Caputo

Michele Caputo (1969) published a book in which he introduced a new definition of fractional derivative; he created this definition with the objective of modeling anomalous diffusion phenomena. The definition of Caputo had already been discovered independently by Gerasimov (1948). This fractional derivative is of the utmost importance, since it allows us to give a physical interpretation of initial value problems, in addition to being used to model fractional time. In some texts it is known as the fractional derivative of Gerasimov-Caputo.

Let f be a function such that f is n-times differentiable with f^(n) ∈ L^1_loc(a, b); then the (right) fractional derivative of Caputo is defined as [9]

    C_aD^α_x f(x) := aI^{n−α}_x ( ( d^n / dx^n ) f(x) ) = ( 1 / Γ(n − α) ) ∫_a^x (x − t)^{n−α−1} f^(n)(t) dt,    (21)

where n = ⌊α⌋ + 1. It should be mentioned that the fractional derivative of Caputo behaves as the inverse operator to the left of the fractional integral of Riemann-Liouville, that is,

    C_aD^α_x ( aI^α_x f(x) ) = f(x).

On the other hand, the relation between the fractional derivatives of Caputo and Riemann-Liouville is given by the following expression [9]

    C_aD^α_x f(x) = aD^α_x ( f(x) − Σ_{k=0}^{n−1} ( f^(k)(a) / k! ) (x − a)^k );

then, if f^(k)(a) = 0 for all k < n, we obtain

    C_aD^α_x f(x) = aD^α_x f(x).

Considering the previous particular case, it is possible to unify the definitions of the fractional integral of Riemann-Liouville and the fractional derivative of Caputo as follows

    C_aD^α_x f(x) := aI^{−α}_x f(x),                      if α < 0,
                     aI^{n−α}_x ( ( d^n / dx^n ) f(x) ),  if α ≥ 0.    (22)
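For the monomials used in Section 3, the case μ > −1 of (20) can be implemented directly; the following sketch (with illustrative names) evaluates it, converting the argument to a complex number so that non-integer powers of negative values are well defined, which is precisely what later allows the iterations to leave the real line.

    using SpecialFunctions   # provides gamma

    # Riemann-Liouville operator (19)-(20) applied to x^μ, case μ > -1 and a = 0;
    # negative orders α correspond to the fractional integral (15).
    function rl_monomial(μ, α, x)
        gamma(μ + 1) / gamma(μ - α + 1) * complex(x)^(μ - α)
    end

    rl_monomial(2, 0.5, 1.0)    # ½-derivative of x^2 at x = 1: Γ(3)/Γ(2.5) ≈ 1.5045
    rl_monomial(2, -0.5, 1.0)   # ½-integral  of x^2 at x = 1: Γ(3)/Γ(3.5) ≈ 0.6018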
3. Fractional Newton-Raphson Method

Let P_n(R) be the space of polynomials of degree less than or equal to n with real coefficients. The zeros ξ of a function f ∈ P_n(R) are usually called roots. The Newton-Raphson method is useful for finding the roots of a function f; however, this method is limited because it cannot find roots ξ ∈ C ∖ R if the sequence {x_i}_{i=0}^∞ generated by (13) has an initial condition x_0 ∈ R. To solve this problem and develop a method that has the ability to find roots of a polynomial, both real and complex, when the initial condition x_0 is real, we propose a new method called the fractional Newton-Raphson method, which consists of the Newton-Raphson method with the implementation of the fractional derivative.

Before continuing, it is necessary to define the fractional Jacobian matrix of a function f : Ω ⊂ R^n → R^n as follows

    f^(α)(x) := ( [f]^(α)_jk(x) ),    (23)

where [f]^(α)_jk = ∂^α_k [f]_j(x) := ( ∂^α / ∂[x]^α_k ) [f]_j(x), 1 ≤ j, k ≤ n, with [f]_j : R^n → R. The operator ∂^α/∂[x]^α_k denotes any fractional derivative, applied only to the variable [x]_k, that satisfies the following condition of continuity with respect to the order of the derivative

    lim_{α→1} ( ∂^α / ∂[x]^α_k ) [f]_j(x) = ( ∂ / ∂[x]_k ) [f]_j(x),    1 ≤ j, k ≤ n;

then the matrix (23) satisfies

    lim_{α→1} f^(α)(x) = f^(1)(x),    (24)

where f^(1)(x) denotes the Jacobian matrix of the function f.

Taking into account that a polynomial of degree n is composed of n + 1 monomials of the form x^m, with m ≥ 0, we can combine equation (20) with (13) to define the following iteration function, which results in the Fractional Newton-Raphson Method [1, 2]

    x_{i+1} := Φ(α, x_i) = x_i − ( f^(α)(x_i) )^{−1} f(x_i),    i = 0, 1, 2, ···.    (25)

3.1. Fractional Newton Method

To try to guarantee that the sequence {x_i}_{i=0}^∞ generated by (25) has an order of convergence (at least) quadratic, the condition (12) is combined with (24) to define the following function

    α_f([x]_k, x) := α, if |[x]_k| ≠ 0 and ‖f(x)‖ ≥ δ,
                     1, if |[x]_k| = 0 or ‖f(x)‖ < δ;    (26)

then, for any fractional derivative that satisfies the condition (24), and using (26), the Fractional Newton Method may be defined as

    x_{i+1} := Φ(α, x_i) = x_i − ( N_{α_f}(x_i) )^{−1} f(x_i),    i = 0, 1, 2, ···,    (27)

with N_{α_f}(x_i) given by the following matrix

    N_{α_f}(x_i) := ( [N_{α_f}]_jk(x_i) ) = ( ∂_k^{α_f([x_i]_k, x_i)} [f]_j(x_i) ).    (28)
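For a one-dimensional polynomial, the iterations (25) and (27) can be sketched directly from (20). In the following Julia sketch the names, the tolerance and the example polynomial x^2 + 1 are illustrative choices; the iterate is stored as a complex number so that the non-integer powers of negative values produced by the fractional derivative are well defined, and, depending on α and x_0, the sequence may or may not converge within the allowed iterations.

    using SpecialFunctions   # provides gamma

    # p(x) = c[1] + c[2] x + ... + c[n+1] x^n evaluated at a (possibly complex) point.
    poly(c, x) = sum(c[m+1] * x^m for m in 0:length(c)-1)

    # Fractional derivative of order α of the polynomial, term by term via (20);
    # for α = 1 the classical derivative is used (the constant term then vanishes).
    function frac_deriv(c, α, x)
        z = complex(x)
        α == 1 && return sum(m * c[m+1] * z^(m - 1) for m in 1:length(c)-1)
        sum(c[m+1] * gamma(m + 1) / gamma(m - α + 1) * z^(m - α) for m in 0:length(c)-1)
    end

    # One-dimensional sketch of the fractional Newton method (26)-(27); taking
    # δ = 0 (so the order never switches back to 1) essentially recovers (25).
    function fractional_newton(c, α, x0; δ = 0.5, tol = 1e-8, maxiter = 40)
        x = complex(x0)              # the initial condition must satisfy x0 ≠ 0
        for i in 1:maxiter
            fx = poly(c, x)
            abs(fx) < tol && return x, i
            a  = (x == 0 || abs(fx) < δ) ? 1.0 : α    # the order function (26)
            x -= fx / frac_deriv(c, a, x)
        end
        return x, maxiter
    end

    # x^2 + 1 has no real roots; starting from the real value x0 = 1.5 with α = 0.8
    # the iterates first become negative, then complex, ending near one of ±i.
    fractional_newton([1.0, 0.0, 1.0], 0.8, 1.5)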
The difference between the methods (25) and (27) is that only for the second one can there exist δ > 0 such that, if the sequence {x_i}_{i=0}^∞ generated by (27) converges to a root ξ of f, then there exists k > 0 such that for all i ≥ k the sequence has an order of convergence (at least) quadratic in B(ξ; δ).

The range of values of α in (25) and (26) is assigned with the following reasoning: when we use the definitions of fractional derivatives given by (19) and (22) on a function f, it is necessary that the function be n-times integrable and n-times differentiable, where n = ⌊α⌋ + 1, and therefore |α| < n; on the other hand, to use Newton's method it is only necessary that the function be once differentiable. As a consequence of (26), it is obtained that

    −2 < α < 2,    α ≠ −1, 0, 1.    (29)

Without loss of generality, to understand why the sequence {x_i}_{i=0}^∞ generated by the method (25) or (27), when we use a function f ∈ P_n(R), has the ability to enter the complex space starting from an initial condition x_0 ∈ R, it is only necessary to observe the fractional derivative of Riemann-Liouville of order α = 1/2 of the monomial x^m, which by (20) is

    0D^{1/2}_x x^m = ( Γ(m + 1) / Γ(m + 1/2) ) x^{m − 1/2},    m ≥ 0,

whose result is a function with a rational exponent, contrary to what happens with the conventional derivative. When the iteration function given by (25) or (27) is used, we must take an initial condition x_0 ≠ 0, as a consequence of the fact that the fractional derivative of order α > 0 of a constant is usually proportional to the function x^{−α}.

The sequence {x_i}_{i=0}^∞ generated by the iteration function (25) or (27) presents, among its behaviors, the following particular cases depending on the initial condition x_0:

1. If we take an initial condition x_0 > 0, the sequence {x_i}_{i=0}^∞ may be divided into three parts; this occurs because there may exist a value M ∈ N for which {x_i}_{i=0}^{M−1} ⊂ R^+ and {x_M} ⊂ R^−, and in consequence {x_i}_{i≥M+1} ⊂ C.

2. On the other hand, if we take an initial condition x_0 < 0, the sequence {x_i}_{i=0}^∞ may be divided into two parts, {x_0} ⊂ R^− and {x_i}_{i≥1} ⊂ C.

Unlike the classical Newton-Raphson method, which uses tangent lines to generate a sequence {x_i}_{i=0}^∞, the fractional Newton-Raphson method uses lines more similar to secants (see Figure 1). A consequence of the fact that the lines are not tangent when using (25) is that different trajectories can be obtained for the same initial condition x_0 just by changing the order α of the derivative (see Figure 2).

Figure 1: Illustration of some lines generated by the fractional Newton-Raphson method; the red line corresponds to the Newton-Raphson method.
Figure 2: Illustrations of some trajectories generated by the fractional Newton-Raphson method for the same initial condition x_0 but with different values of α (panels: a) α = −0.77, b) α = −0.32, c) α = 0.19, d) α = 1.87).

3.1.1. Finding Zeros

A typical inconvenience that arises in problems related to fractional calculus is the fact that it is not always known what the appropriate order α is to solve these problems. As a consequence, different values of α are generally tested and the value that allows finding the best solution, considering an established criterion of precision, is chosen. Based on the aforementioned, it is necessary to follow the instructions below when using the method (25) or (27) to find the zeros ξ of a function f (a minimal computational sketch of this procedure is given after Example 3.1):

1. Without considering the integers −1, 0 and 1, a partition of the interval [−2, 2] is created as follows

       −2 = α_0 < α_1 < α_2 < ··· < α_s < α_{s+1} = 2,

   and using the partition the sequence {α_m}_{m=1}^{s} is created.

2. We choose a non-negative tolerance TOL < 1, a limit of iterations LIT > 1 for every α_m, an initial condition x_0 ≠ 0, and a value M > LIT.

3. We choose a value δ > 0 to use α_f given by (26), such that TOL < δ < 1. In addition, a fractional derivative that satisfies the condition of continuity (24) is taken, and it is unified with the fractional integral in the same way as in the equations (19) and (22).

4. The iteration function (25) or (27) is used with all the values of the partition {α_m}_{m=1}^{s}, and for each value α_m a sequence {mx_i}_{i=0}^{R_m} is generated, where

       R_m = K_1 ≤ LIT,  if there exists i > 0 such that ‖f(mx_k)‖ ≥ M for all k ≥ i,
             K_2 ≤ LIT,  if there exists i > 0 such that ‖f(mx_k)‖ ≤ TOL for all k ≥ i,
             LIT,        if ‖f(mx_i)‖ > TOL for all i ≥ 0;

   then a sequence {x_{R_{m_k}}}_{k=1}^{r} is generated, with r ≤ s, such that ‖f(x_{R_{m_k}})‖ ≤ TOL for all k ≥ 1.

5. We choose a value ε > 0 and take the values x_{R_{m_1}} and x_{R_{m_2}}; then X_1 = x_{R_{m_1}} is defined. If the following condition is fulfilled,

       ‖X_1 − x_{R_{m_2}}‖ / ‖X_1‖ ≤ ε    and    R_{m_2} ≤ R_{m_1},    (30)
   then X_1 = x_{R_{m_2}} is taken. On the other hand, if

       ‖X_1 − x_{R_{m_2}}‖ / ‖X_1‖ > ε,    (31)

   then X_2 = x_{R_{m_2}} is defined. Without loss of generality, it may be assumed that the second condition is fulfilled; then X_3 = x_{R_{m_3}} is taken and the conditions (30) and (31) are checked with the values X_1 and X_2. The process described before is repeated for all values X_k = x_{R_{m_k}}, with k ≥ 4, and this generates a sequence {X_m}_{m=1}^{t}, with t ≤ r, such that

       ‖X_i − X_j‖ / ‖X_i‖ > ε,    ∀i ≠ j.

By following the steps described before to implement the methods (25) and (27), a subset of the solution set of zeros of the function f, both real and complex, may be obtained. We now proceed to give an example in which the solution set of zeros of a function f ∈ P_n(R) is found.

Example 3.1. Let the function:

    f(x) = −57.62x^16 − 56.69x^15 − 37.39x^14 − 19.91x^13 + 35.83x^12 − 72.47x^11 + 44.41x^10 + 43.53x^9
           + 59.93x^8 − 42.9x^7 − 54.24x^6 + 72.12x^5 − 22.92x^4 + 56.39x^3 + 15.8x^2 + 60.05x + 55.31;    (32)

then the following values are chosen to use the iteration function given by (27),

    TOL = 1e−4,    LIT = 40,    δ = 0.5,    x_0 = 0.74,    M = 1e+17,

and using the fractional derivative given by (20), we obtain the results of Table 1.

    m  | α_m      | mξ                         | ‖mξ − m−1ξ‖2 | ‖f(mξ)‖2   | R_m
    ---|----------|----------------------------|--------------|------------|----
    1  | −1.01346 | −1.3699527                 | 1.64700e−5   | 7.02720e−5 | 2
    2  | −0.80436 | −1.00133957                | 9.82400e−5   | 4.36020e−5 | 2
    3  | −0.50138 | −0.62435277                | 9.62700e−5   | 2.31843e−6 | 2
    4  | 0.87611  | 0.58999224 − 0.86699687i   | 3.32866e−7   | 6.48587e−6 | 11
    5  | 0.87634  | 0.36452488 − 0.83287821i   | 3.36341e−6   | 2.93179e−6 | 11
    6  | 0.87658  | −0.28661369 − 0.80840642i  | 2.65228e−6   | 1.06485e−6 | 10
    7  | 0.8943   | 0.88121183 + 0.4269622i    | 1.94165e−7   | 6.46531e−6 | 14
    8  | 0.89561  | 0.88121183 − 0.4269622i    | 2.87924e−7   | 6.46531e−6 | 11
    9  | 0.95944  | −0.35983764 + 1.18135267i  | 2.82843e−8   | 2.53547e−5 | 24
    10 | 1.05937  | 1.03423976                 | 1.80000e−7   | 1.38685e−5 | 4
    11 | 1.17776  | −0.70050491 − 0.78577099i  | 4.73814e−7   | 9.13799e−6 | 15
    12 | 1.17796  | −0.35983764 − 1.18135267i  | 4.12311e−8   | 2.53547e−5 | 17
    13 | 1.17863  | −0.70050491 + 0.78577099i  | 8.65332e−7   | 9.13799e−6 | 18
    14 | 1.17916  | 0.58999224 + 0.86699687i   | 7.05195e−7   | 6.48587e−6 | 12
    15 | 1.17925  | 0.36452488 + 0.83287821i   | 2.39437e−6   | 2.93179e−6 | 9
    16 | 1.22278  | −0.28661369 + 0.80840642i  | 5.36985e−6   | 1.06485e−6 | 9

    Table 1: Results obtained using the iterative method (27).
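The following is the minimal sketch of the search procedure of Section 3.1.1 referenced above; it reuses poly and fractional_newton from the earlier one-dimensional sketch, and the partition, tolerances, deduplication threshold and the test polynomial x^3 − 1 (used instead of (32) to keep the run short) are illustrative choices.

    # Sweep the orders α over a partition of [-2, 2] (steps 1-4) and keep one
    # representative per cluster of root estimates (step 5).
    function find_zeros(c; αs = filter(a -> !(a in (-1.0, 0.0, 1.0)), -1.95:0.05:1.95),
                           x0 = 0.74, tol = 1e-4, lit = 40, ε = 1e-2)
        found = ComplexF64[]
        for α in αs
            x, _ = fractional_newton(c, α, x0; tol = tol, maxiter = lit)
            abs(poly(c, x)) ≤ tol || continue                    # keep successful runs only
            any(ξ -> abs(x - ξ) / abs(ξ) ≤ ε, found) || push!(found, x)
        end
        found
    end

    # Roots of x^3 - 1 from the single real initial condition x0 = 0.74; the cube
    # roots of unity are 1 and (-1 ± √3 i)/2, and which of them appear depends on
    # the partition, on x0 and on the iteration limit.
    find_zeros([-1.0, 0.0, 0.0, 1.0])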
Although the methods (25) and (27) were originally defined for polynomials, they can be extended to a broader class of functions, as shown in the following examples.

Example 3.2. Let the function:

    f(x) = sin(x) − 3/(2x);    (33)

then the following values are chosen to use the iteration function given by (27),

    TOL = 1e−4,    LIT = 40,    δ = 0.5,    x_0 = 0.26,    M = 1e+6,

and using the fractional derivative given by (20), we obtain the results of Table 2.

    m  | α_m      | mξ           | ‖mξ − m−1ξ‖2 | ‖f(mξ)‖2    | R_m
    ---|----------|--------------|--------------|-------------|----
    1  | −1.92915 | 1.50341195   | 2.80000e−7   | 2.93485e−9  | 6
    2  | −0.07196 | −2.49727201  | 9.99500e−5   | 6.53301e−9  | 8
    3  | −0.03907 | −1.50341194  | 6.29100e−5   | 4.37493e−9  | 7
    4  | 0.19786  | −18.92888307 | 4.00000e−8   | 1.97203e−9  | 20
    5  | 0.20932  | −9.26211143  | 9.60000e−7   | 4.77196e−9  | 12
    6  | 0.2097   | −15.61173324 | 5.49000e−6   | 2.05213e−9  | 18
    7  | 0.20986  | −12.6848988  | 3.68000e−5   | 3.29282e−9  | 15
    8  | 0.21105  | −6.51548968  | 9.67100e−5   | 2.05247e−9  | 10
    9  | 0.21383  | −21.92267274 | 6.40000e−6   | 2.03986e−8  | 24
    10 | 1.19522  | 6.51548968   | 7.24900e−5   | 2.05247e−9  | 13
    11 | 1.19546  | 9.26211143   | 1.78200e−5   | 4.77196e−9  | 14
    12 | 1.19558  | 12.6848988   | 7.92100e−5   | 3.29282e−9  | 14
    13 | 1.19567  | 15.61173324  | 7.90000e−7   | 2.05213e−9  | 12
    14 | 1.1957   | 18.92888307  | 1.00000e−8   | 1.97203e−9  | 12
    15 | 1.19572  | 21.92267282  | 1.46400e−5   | 5.91642e−8  | 14
    16 | 1.23944  | 2.4972720    | 6.30000e−7   | 9.43179e−10 | 11

    Table 2: Results obtained using the iterative method (27).

In the previous example a subset of the solution set of zeros of the function (33) was obtained, because this function has an infinite number of zeros. Using the methods (25) and (27) does not guarantee that all the zeros of a function f can be found by leaving an initial condition x_0 fixed and varying the orders α_m of the derivative. As in the classical Newton-Raphson method, finding most of the zeros of the function will depend on giving a proper initial condition x_0. If we want to find a larger subset of zeros of the function (33), there are some strategies that are usually useful, for example:

1. To change the initial condition x_0.

2. To use a larger number of values α_m.

3. To increase the value of M.

4. To increase the value of LIT.

In general, the last strategy is usually the most useful, but it causes the methods (25) and (27) to become more costly, because a longer runtime is required for all the values α_m.

Example 3.3. Let the function:

    f(x) = ( x_1^2 + x_2^3 − 10 , x_1^3 − x_2^2 − 1 )^T;    (34)

then the following values are chosen to use the iteration function given by (27),

    TOL = 1e−4,    LIT = 40,    δ = 0.5,    x_0 = (0.88, 0.88)^T,    M = 1e+6,

and using the fractional derivative given by (20), we obtain the results of Table 3.
    m | α_m      | mξ_1                       | mξ_2                       | ‖mξ − m−1ξ‖2 | ‖f(mξ)‖2   | R_m
    --|----------|----------------------------|----------------------------|--------------|------------|----
    1 | −0.58345 | 0.22435853 + 1.69813926i   | −1.13097646 + 2.05152306i  | 3.56931e−7   | 8.18915e−8 | 12
    2 | −0.50253 | 0.22435853 − 1.69813926i   | −1.13097646 − 2.05152306i  | 1.56637e−6   | 8.18915e−8 | 10
    3 | 0.74229  | −1.42715874 + 0.56940338i  | −0.90233562 − 1.82561764i  | 1.13040e−6   | 7.01649e−8 | 11
    4 | 0.75149  | 1.35750435 + 0.86070348i   | −1.1989996 − 1.71840823i   | 3.15278e−7   | 4.26428e−8 | 12
    5 | 0.76168  | −0.99362838 + 1.54146499i  | 2.2675011 + 0.19910814i    | 8.27969e−5   | 1.05527e−7 | 13
    6 | 0.76213  | −0.99362838 − 1.54146499i  | 2.2675011 − 0.19910815i    | 2.15870e−7   | 6.41725e−8 | 15
    7 | 0.77146  | −1.42715874 − 0.56940338i  | −0.90233562 + 1.82561764i  | 3.57132e−6   | 7.01649e−8 | 15
    8 | 0.78562  | 1.35750435 − 0.86070348i   | −1.1989996 + 1.71840823i   | 3.16228e−8   | 4.26428e−8 | 17
    9 | 1.22739  | 1.67784847                 | 1.92962117                 | 9.99877e−5   | 2.71561e−8 | 4

    Table 3: Results obtained using the iterative method (27).

Example 3.4. Let the function:

    f(x) = ( x_1^2 + x_2 − 37 , x_1 − x_2^2 − 5 , x_1 + x_2 + x_3 − 3 )^T;    (35)

then the following values are chosen to use the iteration function given by (27),

    TOL = 1e−4,    LIT = 40,    δ = 0.5,    x_0 = (4.35, 4.35, 4.35)^T,    M = 1e+6,

and using the fractional derivative given by (20), we obtain the results of Table 4.

    m | α_m     | mξ_1                       | mξ_2                      | mξ_3                      | ‖mξ − m−1ξ‖2 | ‖f(mξ)‖2   | R_m
    --|---------|----------------------------|---------------------------|---------------------------|--------------|------------|----
    1 | 0.78928 | −6.08553731 + 0.27357884i  | 0.04108101 + 3.32974848i  | 9.04445631 − 3.60332732i  | 6.42403e−5   | 3.67448e−8 | 14
    2 | 0.79059 | −6.08553731 − 0.27357884i  | 0.04108101 − 3.32974848i  | 9.04445631 + 3.60332732i  | 1.05357e−7   | 3.67448e−8 | 15
    3 | 0.8166  | 6.17107462                 | −1.08216201               | −2.08891261               | 6.14760e−5   | 4.45820e−8 | 9
    4 | 0.83771 | 6.0                        | 1.0                       | −4.0                      | 3.38077e−6   | 0.0        | 6

    Table 4: Results obtained using the iterative method (27).

3.2. Fractional Quasi-Newton Method

Although the previous methods are useful for finding multiple zeros of a function f, they have the disadvantage that in many cases calculating the fractional derivative of a function is not a simple task. To try to minimize this problem, we use the fact that, for many definitions of the fractional derivative, the derivative of arbitrary order of a constant is not always zero, that is,

    ( ∂^α / ∂[x]^α_k ) c ≠ 0,    c = constant.    (36)

Then we may define the function

    g_f(x) := f(x_0) + f^(1)(x_0) x;    (37)

it should be noted that the previous function is almost a linear approximation of the function f at the initial condition x_0. Then, for any fractional derivative that satisfies the condition (36), and using (23), the Fractional Quasi-Newton Method may be defined as

    x_{i+1} := Φ(α, x_i) = x_i − ( Q_{g_f,β}(x_i) )^{−1} f(x_i),    i = 0, 1, 2, ···,    (38)

with Q_{g_f,β}(x_i) given by the following matrix
    Q_{g_f,β}(x_i) := ( [Q_{g_f,β}]_jk(x_i) ) = ( ∂_k^{β(α, [x_i]_k)} [g_f]_j(x_i) ),    (39)

where the function β is defined as follows

    β(α, [x_i]_k) := α, if |[x_i]_k| ≠ 0,
                     1, if |[x_i]_k| = 0.    (40)

Since the iteration function (38), in the same way as (25), does not satisfy the condition (12), any sequence {x_i}_{i=0}^∞ generated by this iteration function has at most an order of convergence (at least) linear. As a consequence, the speed of convergence is slower compared to what would be obtained when using (27), and it is then necessary to use a larger value of LIT. It should be mentioned that the value α = 1 in (40) is not taken to try to guarantee an order of convergence, as in (26), but to avoid the discontinuity that is generated when the fractional derivative of constants is used at the value [x_i]_k = 0.

An example is given using the fractional quasi-Newton method, in which a subset of the solution set of zeros of the function f is found.

Example 3.5. Let the function:

    f(x) = ( (1/2) sin(x_1 x_2) − x_2/(4π) − x_1/2 , (1 − 1/(4π)) ( e^{2x_1} − e ) + (e/π) x_2 − 2 e x_1 )^T;    (41)

then the following values are chosen to use the iteration function given by (38),

    TOL = 1e−4,    LIT = 200,    x_0 = (1.52, 1.52)^T,    M = 1e+6,

and using the fractional derivative given by (20), we obtain the results of Table 5.

    m | α_m      | mξ_1                       | mξ_2                      | ‖mξ − m−1ξ‖2 | ‖f(mξ)‖2   | R_m
    --|----------|----------------------------|---------------------------|--------------|------------|-----
    1 | −0.28866 | 2.21216549 − 13.25899819i  | 0.41342314 + 3.94559327i  | 1.18743e−7   | 9.66251e−5 | 163
    2 | 1.08888  | 1.29436489                 | −3.13720898               | 1.89011e−6   | 9.38884e−5 | 51
    3 | 1.14618  | 1.43395246                 | −6.82075021               | 2.24758e−6   | 9.74642e−5 | 94
    4 | 1.33394  | 0.50000669                 | 3.14148062                | 9.74727e−6   | 9.99871e−5 | 133
    5 | 1.35597  | 0.29944016                 | 2.83696105                | 8.55893e−5   | 4.66965e−5 | 8
    6 | 1.3621   | 1.5305078                  | −10.20223066              | 2.38437e−6   | 9.88681e−5 | 120
    7 | 1.37936  | 1.60457254                 | −13.36288413              | 2.32459e−6   | 9.52348e−5 | 93
    8 | 1.88748  | −0.26061324                | 0.62257513                | 2.69146e−5   | 9.90792e−5 | 21

    Table 5: Results obtained using the iterative method (38).
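In one dimension the iteration (38)-(40) reduces to applying the fractional derivative to the fixed linear function (37), so only f itself is evaluated at each step. The following Julia sketch shows this, with illustrative names, the derivative value f'(x_0) supplied by the user, and parameter choices of our own; with these particular choices the run settles near the zero x ≈ 1.5034 of the function of Example 3.2, which also appears in Table 2.

    using SpecialFunctions   # provides gamma

    # One-dimensional sketch of the fractional quasi-Newton iteration (38)-(40):
    # the order-a derivative of g_f(x) = f(x0) + f'(x0) x is, by (20),
    #   f(x0) x^(-a)/Γ(1-a) + f'(x0) x^(1-a)/Γ(2-a).
    function fractional_quasi_newton(f, df0, α, x0; tol = 1e-4, maxiter = 200)
        f0 = f(x0)
        x  = complex(x0)
        for i in 1:maxiter
            fx = f(x)
            abs(fx) < tol && return x, i
            if x == 0                     # the order function β of (40)
                q = complex(df0)          # order 1: the classical derivative of g_f
            else
                q = f0 / gamma(1 - α) * x^(-α) + df0 / gamma(2 - α) * x^(1 - α)
            end
            x -= fx / q
        end
        return x, maxiter
    end

    # The function of Example 3.2, f(x) = sin(x) - 3/(2x), and its derivative at x0.
    f(x)  = sin(x) - 3 / (2x)
    df(x) = cos(x) + 3 / (2 * x^2)
    fractional_quasi_newton(f, df(1.52), 1.1, 1.52)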
4. Conclusions

The fractional Newton-Raphson method and its variants are useful for finding multiple solutions of nonlinear systems in the complex space using real initial conditions. However, it should be clarified that they present some advantages and disadvantages with respect to each other. For example, although the fractional Newton method generally has an order of convergence (at least) quadratic, it has the disadvantage that obtaining the fractional Jacobian matrix is not an easy task for many functions, and to this must be added the need to invert this matrix at each new iteration. But it has an advantage over the other methods: it can be used with few iterations, which allows using a greater number of values α_m belonging to the partition of the interval (−2, 2).

The fractional quasi-Newton method has the advantage that the fractional Jacobian matrix with which it works is, compared to that of the fractional Newton method, easy to obtain. A disadvantage is that the method may have at most an order of convergence (at least) linear, so the speed of convergence is lower and it is necessary to use a greater number of iterations to ensure success in the search for solutions; as a consequence, the method is more costly, because a longer runtime is required to use all the values α_m. An additional advantage of the method is that, if the initial condition is close enough to a solution, its behavior is very similar to that of the fractional Newton-Raphson method and it may converge with a relatively small number of iterations, but it still has the disadvantage that a matrix must be inverted at each iteration.

These methods may solve some nonlinear systems and are really efficient at finding multiple solutions, both real and complex, using real initial conditions. It should be mentioned that these methods are especially recommended for systems that have infinitely many solutions or a large number of them. All the examples in this document were created using the Julia language (version 1.3.1); for future work, it is intended to use the numerical methods presented here in applications related to physics and engineering.

References

[1] F. Brambila-Paz and A. Torres-Hernandez. Fractional Newton-Raphson method. arXiv preprint arXiv:1710.07634, 2017. https://arxiv.org/pdf/1710.07634.pdf.

[2] F. Brambila-Paz, A. Torres-Hernandez, U. Iturrarán-Viveros, and R. Caballero-Cruz. Fractional Newton-Raphson method accelerated with Aitken's method. arXiv preprint arXiv:1804.08445, 2018. https://arxiv.org/pdf/1804.08445.pdf.

[3] Josef Stoer and Roland Bulirsch. Introduction to Numerical Analysis, volume 12. Springer Science & Business Media, 2013.

[4] Robert Plato. Concise Numerical Mathematics. Number 57. American Mathematical Society, 2003.

[5] James M. Ortega. Numerical Analysis: A Second Course. SIAM, 1990.

[6] Richard L. Burden and J. Douglas Faires. Análisis numérico. Thomson Learning, 2002.

[7] James M. Ortega and Werner C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables, volume 30. SIAM, 1970.

[8] Rudolf Hilfer. Applications of Fractional Calculus in Physics. World Scientific, 2000.

[9] A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo. Theory and Applications of Fractional Differential Equations. Elsevier, 2006.

[10] A. Torres-Hernandez, F. Brambila-Paz, and C. Torres-Martínez. Proposal for use the fractional derivative of radial functions in interpolation problems. arXiv preprint arXiv:1906.03760, 2019. https://arxiv.org/pdf/1906.03760.pdf.

[11] Carlos Alberto Torres Martínez and Carlos Fuentes. Applications of radial basis function schemes to fractional partial differential equations. Fractal Analysis: Applications in Physics, Engineering and Technology, 2017. https://www.intechopen.com/books/fractal-analysis-applications-in-physics-engineering-and-technology.

[12] Benito F. Martínez-Salgado, Rolando Rosas-Sampayo, Anthony Torres-Hernández, and Carlos Fuentes. Application of fractional calculus to oil industry. Fractal Analysis: Applications in Physics, Engineering and Technology, 2017. https://www.intechopen.com/books/fractal-analysis-applications-in-physics-engineering-and-technology.