Numerical Approximation of Spike-Type Solutions to a One-Dimensional Sub-Diffusive Gierer-Meinhardt Model with Controlled Precision

by Nehemie Nguimbous

A thesis submitted in partial fulfillment of the requirements for the degree of Bachelor of Science (Hons.) in the Department of Mathematics & Statistics

©Nehemie Nguimbous 2024

We accept this thesis as conforming to the required standards:

Yana Nec, Thesis Supervisor, Dept. of Mathematics & Statistics
Shirin Boroushaki, First Examiner, Dept. of Mathematics & Statistics
Richard Brewster, Second Examiner, Dept. of Mathematics & Statistics

Dated April 8, 2024, Kamloops, British Columbia, Canada

THOMPSON RIVERS UNIVERSITY
DEPARTMENT OF MATHEMATICS & STATISTICS

Permission is herewith granted to Thompson Rivers University to circulate and to have copied for non-commercial purposes, at its discretion, the above title upon request of individuals or institutions.

------------------------------- Signature of Author

The author reserves other publication rights, and neither the thesis nor extensive extracts from it may be printed or otherwise reproduced without the author's written permission. The author attests that permission has been obtained for the use of any copyrighted material appearing in this thesis (other than brief excerpts requiring only proper acknowledgement in scholarly writing) and that all such use is clearly acknowledged.

Abstract

This thesis focuses on numerically solving the sub-diffusive Gierer-Meinhardt model with controlled precision. We start by defining and explaining basic concepts of reaction-diffusion, highlighting the main differences between normal and anomalous diffusion. Sub-diffusion is modelled at the continuum level by fractional derivatives replacing the regular ones in PDEs. Therefore a crucial part of our study involves fractional calculus, which provides a solid framework for describing sub-diffusive processes. We explore integer and fractional derivatives and integrals, their main properties, those that can be extended from classical to fractional calculus, and the reasons for limitations in some cases. We then delve into the well-known Gierer-Meinhardt model, a reaction-diffusion system used to describe pattern formation in biological systems. Leveraging the matched asymptotic expansion technique, which is applicable due to the asymptotic smallness of certain parameters in the system, we transform the differential Gierer-Meinhardt model into a differintegro-algebraic system. This differintegro-algebraic system contains a fractional operator, denoted $D_t^\gamma$, which involves the integral of a complicated function that cannot be evaluated analytically. This operator depends on multiple parameters, and the number of subdivisions needed for numerical computation varies significantly with these parameters and the desired precision. To address this challenge, we have developed a program capable of precalculating the required number of subdivisions before the computation, thus saving significant computation time. With all elements in place, we use these tools to study the dynamics of the obtained spikes.

Acknowledgements

I extend my gratitude to my thesis supervisor for her invaluable guidance and support throughout this research endeavor. Additionally, I am deeply grateful for the financial support provided by the Undergraduate Research Apprenticeship Fund and the Telus Excellence in Science Award, which made this undergraduate thesis project possible.
My sincere thanks also go to Thompson Rivers University for their provision of resources and facilities essential for this study.

Contents

Abstract
Acknowledgements
1  Reaction-Diffusion
2  Classical and Fractional Calculus
   2.1  Background, Properties, Similarities and Differences
        2.1.1  Classical or Integer Calculus
        2.1.2  Fractional Derivatives and Integrals: Definitions and Equivalencies
        2.1.3  Differentiation of Simple Functions
        2.1.4  General Properties
        2.1.5  Auxiliary Proofs
3  Gierer-Meinhardt Model
   3.1  Representation, Matched Asymptotic Expansion and Solution
        3.1.1  Neumann Boundary Conditions
        3.1.2  Asymptotic Solution of the Gierer-Meinhardt Model
        3.1.3  Auxiliary Proofs
4  Numerical Approximation of $D_t^\gamma u$ with a Controlled Precision
   4.1  Introduction, Method and Verification
        4.1.1  Introduction and Background
        4.1.2  Procedure
        4.1.3  Plan
        4.1.4  Regularization of $D_t^\gamma u$
        4.1.5  Numerical Approximation of I with a Controlled Precision
        4.1.6  Number of Subdivisions Required to Compute I for Continuous Values of t and Discrete Values of p and γ
        4.1.7  Number of Subdivisions Required to Compute I with a Controlled Precision for Continuous Values of t, p and γ
        4.1.8  Summary of the Interpolation Process
        4.1.9  Assessment of the Accuracy of the Results
5  Conclusion
Appendix A  Structure of the Project
References

Chapter 1
Reaction-Diffusion

Definitions and Types

Reaction-diffusion is a dynamic process in which the spatial distribution of chemical substances evolves over time due to the interplay between the reactions of those chemicals and their diffusion through space. The system typically involves an "activator," a chemical component that promotes its own production and that of other substances. Simultaneously, an "inhibitor" is present, which suppresses its own production and that of other substances. The delicate balance between the concentrations of the activator and the inhibitor, along with their diffusion properties, gives rise to intricate patterns and structures such as stripes, spots, and spikes. These patterns are observed on the skins of various animals, such as zebras, cheetahs, or giraffes. Moreover, reaction-diffusion is also responsible for the regulation that must take place during organism growth, such as the formation of new tissue.

Types of Diffusion

Depending on specific system parameters and conditions, we can distinguish between two main types of diffusion: normal and anomalous diffusion.

1. Normal diffusion is characterized by Fick's law, $J = -D\nabla C$, where $J$ represents the diffusion flux, $D$ the diffusion coefficient, and $C$ the concentration of the substance.
This process involves the random movement of particles from regions of higher concentration to regions of lower concentration. The mean square displacement of the particles grows as a linear function of time, $r^2(t) \sim t$.

2. Anomalous diffusion is characterized by a non-linear displacement of particles. The mean square displacement of the particles grows as a power function of time, $r^2(t) \sim t^\gamma$. There exist two main types of anomalous diffusion: sub-diffusion and super-diffusion. The former occurs when the spread of particles is slower than that of normal diffusion, characterized by $\gamma < 1$, while the latter arises when the displacement of the particles is faster, with $\gamma > 1$.

Chapter 2
Classical and Fractional Calculus

2.1 Background, Properties, Similarities and Differences

This section is based in its entirety on the textbook [7]. The main topic of the text is to generalise integer derivatives and integrals to fractional orders. Before embarking on this journey, an auxiliary topic needed throughout the book is the $\Gamma$ function, distinguished by its extensive usage throughout the text and its significant contributions to key results. The $\Gamma$ function is defined as
\[ \Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\, dt, \qquad \mathrm{Re}(z) > 0, \]
and it has several fundamental properties, including:

1. Factorial property: for a positive integer $n$, $\Gamma(n) = (n-1)!$.
2. Recurrence relation: $\Gamma(z+1) = z\Gamma(z)$.
3. Reflection formula: $\Gamma(z)\Gamma(1-z) = \dfrac{\pi}{\sin(\pi z)}$, $z \notin \mathbb{Z}$.
4. Euler's identity: $\Gamma(z) = \lim_{n\to\infty} \dfrac{n^z\, n!}{z(z+1)\cdots(z+n)}$.
5. Duplication formula: $\Gamma(2z) = \dfrac{2^{2z-1}}{\sqrt{\pi}}\, \Gamma(z)\, \Gamma\!\left(z+\tfrac12\right)$.
6. Asymptotic expansion:
\[ \frac{\Gamma(j-q)}{\Gamma(j+1)} \sim j^{-1-q}\left[ 1 + \frac{q[q+1]}{2j} + O(j^{-2}) \right], \qquad q \in \mathbb{R},\ j \in \mathbb{N},\ j \to \infty. \tag{2.1} \]

A function that is closely related to the Gamma function is the Beta function $B(p,q)$. For positive values of the two parameters, $p$ and $q$, the function is defined by the Beta integral
\[ B(p,q) = \int_0^1 y^{p-1}[1-y]^{q-1}\, dy, \qquad p > 0 < q, \tag{2.2} \]
also known as Euler's integral of the first kind. If either $p$ or $q$ is non-positive, the integral diverges and the Beta function is defined by the relationship
\[ B(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}. \tag{2.3} \]

2.1.1 Classical or Integer Calculus

Integer calculus, commonly known as classical calculus, has its origins deeply rooted in the monumental contributions of Isaac Newton and Gottfried Wilhelm Leibniz during the 17th century. These mathematical pioneers independently developed the fundamental principles of differentiation and integration, forming the bedrock of classical calculus. Hereinafter we will review the classical notions of differentiation and integration with a view to unifying them as one overarching concept and from there generalising to a non-integer order.

Integer Derivative

In classical calculus, the derivative of a function represents the rate at which the function's output changes with respect to its input. The first order integer derivative is defined as
\[ \frac{d^1 f(x)}{[d(x-a)]^1} = \frac{df(x)}{d(x-a)} = \lim_{\delta x \to 0}\left\{ [\delta x]^{-1}\big[ f(x) - f(x-\delta x)\big] \right\}, \tag{2.4} \]
where $\delta x = \dfrac{x-a}{N}$, with $a$ being an immaterial real number less than $x$ and $N$ being a positive integer greater than or equal to 1. Subsequently, the second order integer derivative, obtained by applying the derivative once more to (2.4), is defined as
\[ \frac{d^2 f(x)}{[d(x-a)]^2} = \lim_{\delta x \to 0}\left\{ [\delta x]^{-2}\big[ f(x) - 2f(x-\delta x) + f(x-2\delta x)\big] \right\}. \tag{2.5} \]
In the same way, the third order integer derivative is defined as
\[ \frac{d^3 f(x)}{[d(x-a)]^3} = \lim_{\delta x \to 0}\left\{ [\delta x]^{-3}\big[ f(x) - 3f(x-\delta x) + 3f(x-2\delta x) - f(x-3\delta x)\big] \right\}. \tag{2.6} \]
The $n$th order integer derivative is generalized as follows:
\[ \frac{d^n f(x)}{[d(x-a)]^n} = \lim_{\delta x \to 0}\left\{ [\delta x]^{-n} \sum_{j=0}^{n} [-1]^j \binom{n}{j} f(x-j\,\delta x) \right\}. \tag{2.7} \]

Integer Integration or Antidifferentiation

The integral of a function represents the accumulation of a quantity described by the function over an interval. Geometrically, it corresponds to the area under the curve of the function in a given interval. Because integration and differentiation are inverse operations, it is natural to unify their symbolism. Using the definition of an integral as a limit of a Riemann sum, we have
\[ \frac{d^{-1} f(x)}{[d(x-a)]^{-1}} = \int_a^x f(t)\, dt = \lim_{\delta x \to 0}\left\{ \delta x\big[ f(x) + f(x-\delta x) + f(x-2\delta x) + \cdots + f\big(x-(N-1)\delta x\big)\big] \right\} = \lim_{\delta x \to 0}\left\{ \delta x \sum_{j=0}^{N-1} f(x-j\,\delta x) \right\}. \tag{2.8} \]
Applying the same definition to a double integration leads to
\[ \frac{d^{-2} f(x)}{[d(x-a)]^{-2}} = \int_a^x \int_a^{x_1} f(t)\, dt\, dx_1 = \lim_{\delta x \to 0}\left\{ [\delta x]^2\big[ f(x) + 2f(x-\delta x) + 3f(x-2\delta x) + \cdots + N f\big(x-(N-1)\delta x\big)\big] \right\} = \lim_{\delta x \to 0}\left\{ [\delta x]^2 \sum_{j=0}^{N-1} [j+1]\, f(x-j\,\delta x) \right\}. \tag{2.9} \]
We need to integrate one more time to get a better grasp of the overall formula:
\[ \frac{d^{-3} f(x)}{[d(x-a)]^{-3}} = \int_a^x \int_a^{x_2} \int_a^{x_1} f(t)\, dt\, dx_1\, dx_2 = \lim_{\delta x \to 0}\left\{ [\delta x]^3 \sum_{j=0}^{N-1} \frac{[j+1][j+2]}{2}\, f(x-j\,\delta x) \right\}. \tag{2.10} \]
This time, we observe that the coefficients accumulate as $\binom{j+n-1}{j}$, with $n$ representing the integral's order, and all coefficients are positive. Hence,
\[ \frac{d^{-n} f(x)}{[d(x-a)]^{-n}} = \lim_{\delta x \to 0}\left\{ [\delta x]^n \sum_{j=0}^{N-1} \binom{j+n-1}{j} f(x-j\,\delta x) \right\}. \tag{2.11} \]

Matching of the Differentiation and Integration Expressions: Differintegration

Using the convention that $\binom{n}{j} = 0$ when $j > n$, with both numbers integers, equation (2.7) can be reformulated as
\[ \frac{d^n f(x)}{[d(x-a)]^n} = \lim_{\delta x \to 0}\left\{ [\delta x]^{-n} \sum_{j=0}^{N-1} [-1]^j \binom{n}{j} f(x-j\,\delta x) \right\}, \tag{2.12} \]
where $N-1 > n$. This convention is consistent with the $\Gamma$ function interpretation, since $\Gamma$ of a negative integer is infinite. By comparing formulas (2.7) and (2.11), and recalling that
\[ [-1]^j \binom{n}{j} = \binom{j-n-1}{j} = \frac{\Gamma(j-n)}{\Gamma(-n)\Gamma(j+1)}, \]
equations (2.7) and (2.11) can be unified in the following equation:
\[ \frac{d^q f(x)}{[d(x-a)]^q} = \lim_{\delta x \to 0}\left\{ \frac{[\delta x]^{-q}}{\Gamma(-q)} \sum_{j=0}^{N-1} \frac{\Gamma(j-q)}{\Gamma(j+1)}\, f(x-j\,\delta x) \right\}, \tag{2.13} \]
with $q$ being an integer of either sign. Note that for a positive $q$ the expression $\Gamma(-q)$ is infinite, but for $j \le q$ within the sum the infinite $\Gamma(j-q)$ and the infinite $\Gamma(-q)$ together give a finite ratio after using the recursion formula for the Gamma function. For all other $j > q$ the factor $\Gamma(-q)$ will nullify the contribution.

Composition Rule

For positive integers, we have
\[ \frac{d^n}{[d(x-a)]^n}\left\{ \frac{d^N f(x)}{[d(x-a)]^N} \right\} = \frac{d^N}{[d(x-a)]^N}\left\{ \frac{d^n f(x)}{[d(x-a)]^n} \right\} = \frac{d^{n+N} f(x)}{[d(x-a)]^{n+N}}. \tag{2.14} \]
For negative integers, we have
\[ \frac{d^{-n}}{[d(x-a)]^{-n}}\left\{ \frac{d^{-N} f(x)}{[d(x-a)]^{-N}} \right\} = \frac{d^{-N}}{[d(x-a)]^{-N}}\left\{ \frac{d^{-n} f(x)}{[d(x-a)]^{-n}} \right\} = \frac{d^{-n-N} f(x)}{[d(x-a)]^{-n-N}}. \tag{2.15} \]
However, a problem arises when it comes to mixed composition. In fact, we will show that in some cases, for orders of opposite signs, the composition is not commutative. Let us adopt the notation
\[ \frac{d^N f(x)}{[d(x-a)]^N} = f^{(N)}(x), \]
with $N$ a positive integer. On one hand, for $n=1$ we have
\[ \frac{d^{-1} f^{(N)}(x)}{[d(x-a)]^{-1}} = \int_a^x f^{(N)}(x)\, dx = f^{(N-1)}(x) - f^{(N-1)}(a). \]
For $n=2$ we have
\[ \frac{d^{-2} f^{(N)}(x)}{[d(x-a)]^{-2}} = \int_a^x \big[ f^{(N-1)}(x) - f^{(N-1)}(a)\big]\, dx = f^{(N-2)}(x) - f^{(N-2)}(a) - (x-a)f^{(N-1)}(a). \tag{2.16} \]
Based on the expressions for different values of $n$, the general expression for any positive integer $n$ is
\[ \frac{d^{-n} f^{(N)}(x)}{[d(x-a)]^{-n}} = f^{(N-n)}(x) - \sum_{k=0}^{n-1} \frac{[x-a]^k}{k!}\, f^{(N+k-n)}(a). \tag{2.17} \]
On the other hand, by substituting $N=0$ in (2.17), we have
\[ \frac{d^{-n} f(x)}{[d(x-a)]^{-n}} = f^{(-n)}(x) - \sum_{k=0}^{n-1} \frac{[x-a]^k}{k!}\, f^{(k-n)}(a). \tag{2.18} \]
Upon differentiating (2.18) once, we have
\[ \frac{d}{[d(x-a)]}\left\{ \frac{d^{-n} f(x)}{[d(x-a)]^{-n}} \right\} = f^{(1-n)}(x) - \sum_{k=1}^{n-1} \frac{[x-a]^{k-1}}{(k-1)!}\, f^{(k-n)}(a). \]
After $N$ such differentiations, the equation
\[ \frac{d^N}{[d(x-a)]^N}\left\{ \frac{d^{-n} f(x)}{[d(x-a)]^{-n}} \right\} = f^{(N-n)}(x) - \sum_{k=N}^{n-1} \frac{[x-a]^{k-N}}{(k-N)!}\, f^{(k-n)}(a) \]
emerges. This expression encompasses the case $N \ge n$, where
\[ \sum_{k=N}^{n-1} \frac{[x-a]^{k-N}}{(k-N)!}\, f^{(k-n)}(a) = 0, \]
and the case $N < n$. Finally, we have established that
\begin{align*}
\frac{d^N}{[d(x-a)]^N}\left\{ \frac{d^{-n} f(x)}{[d(x-a)]^{-n}} \right\}
&= f^{(N-n)}(x) - \sum_{k=N}^{n-1} \frac{[x-a]^{k-N}}{(k-N)!}\, f^{(k-n)}(a) \\
&= f^{(N-n)}(x) - \sum_{k=0}^{n-1-N} \frac{[x-a]^{k}}{k!}\, f^{(k-n+N)}(a) \\
&= f^{(N-n)}(x) - \sum_{k=0}^{n-1} \frac{[x-a]^{k}}{k!}\, f^{(k-n+N)}(a) + \sum_{k=n-N}^{n-1} \frac{[x-a]^{k}}{k!}\, f^{(k-n+N)}(a) \\
&= \frac{d^{-n} f^{(N)}(x)}{[d(x-a)]^{-n}} + \sum_{k=n-N}^{n-1} \frac{[x-a]^{k}}{k!}\, f^{(k-n+N)}(a). \tag{2.19}
\end{align*}
In summary, the composition rule
\[ \frac{d^q}{[d(x-a)]^q}\left\{ \frac{d^Q f(x)}{[d(x-a)]^Q} \right\} = \frac{d^{q+Q} f(x)}{[d(x-a)]^{q+Q}} \]
holds unless $Q$ is positive and $q$ is negative, in other words, unless $f$ is first differentiated and then integrated. In that case the equality holds only if $f(a) = 0$ and if all derivatives of $f$ through the $(N-1)$th are also zero at $x = a$.

Product Rule for Multiple Integrals

In this section our focus is on establishing a rule for iterated integration of a product of two functions, akin to Leibniz's theorem for repeated differentiation of a product. To initiate this process, we start with the well-known formula of integration by parts:
\[ \int_a^x g(y)\, dv(y) = g(x)v(x) - g(a)v(a) - \int_a^x v(y)\, dg(y). \tag{2.20} \]
Let
\[ v(y) = \int_a^y f(z)\, dz, \]
then
\[ \int_a^x g(y) f(y)\, dy = g(x)\int_a^x f(z)\, dz - \int_a^x \left[ \int_a^y f(z)\, dz \right] \frac{dg(y)}{dy}\, dy, \tag{2.21} \]
or in the previous symbolism we have
\[ \frac{d^{-1}[fg]}{[d(x-a)]^{-1}} = g\, \frac{d^{-1} f}{[d(x-a)]^{-1}} - \frac{d^{-1}}{[d(x-a)]^{-1}}\left\{ \frac{d^{-1} f}{[d(x-a)]^{-1}}\, g^{(1)} \right\}. \tag{2.22} \]
Upon applying (2.22) recursively to the product within the braces and invoking the composition rule, we obtain
\[ \frac{d^{-1}[fg]}{[d(x-a)]^{-1}} = g\, \frac{d^{-1} f}{[d(x-a)]^{-1}} - g^{(1)} \frac{d^{-2} f}{[d(x-a)]^{-2}} + \frac{d^{-1}}{[d(x-a)]^{-1}}\left\{ \frac{d^{-2} f}{[d(x-a)]^{-2}}\, g^{(2)} \right\}. \tag{2.23} \]
Repeatedly applying this process indefinitely yields
\[ \frac{d^{-1}[fg]}{[d(x-a)]^{-1}} = \sum_{j=0}^{\infty} (-1)^j\, g^{(j)}\, \frac{d^{-1-j} f}{[d(x-a)]^{-1-j}} = \sum_{j=0}^{\infty} \binom{-1}{j} g^{(j)}\, \frac{d^{-1-j} f}{[d(x-a)]^{-1-j}}. \tag{2.24} \]
When (2.24) is integrated, using this same formula inside the summation, and the composition rules for integrals and derivatives are applied, we obtain
\begin{align*}
\frac{d^{-2}[fg]}{[d(x-a)]^{-2}}
&= \sum_{j=0}^{\infty} \binom{-1}{j} \sum_{k=0}^{\infty} \binom{-1}{k} g^{(j+k)}\, \frac{d^{-j-k-2} f}{[d(x-a)]^{-j-k-2}} \\
&= \sum_{j=0}^{\infty} \sum_{l=j}^{\infty} \binom{-1}{j}\binom{-1}{l-j} g^{(l)}\, \frac{d^{-2-l} f}{[d(x-a)]^{-2-l}} \\
&= \sum_{l=0}^{\infty} \sum_{j=0}^{l} \binom{-1}{j}\binom{-1}{l-j} g^{(l)}\, \frac{d^{-2-l} f}{[d(x-a)]^{-2-l}} \\
&= \sum_{l=0}^{\infty} \binom{-2}{l} g^{(l)}\, \frac{d^{-2-l} f}{[d(x-a)]^{-2-l}}. \tag{2.25}
\end{align*}
In the final steps of (2.25), we have made use of the permutation (2.94),
\[ \sum_{k=0}^{\infty} \sum_{j=0}^{k} = \sum_{j=0}^{\infty} \sum_{k=j}^{\infty}, \]
and the summation
\[ \sum_{j=0}^{l} \binom{-1}{j}\binom{-1}{l-j} = \binom{-2}{l}. \]
The latter result is derived from the well-known summation formula (2.93),
\[ \sum_{k=0}^{l} \binom{q}{k}\binom{Q}{l-k} = \binom{q+Q}{l}, \]
the proof of which can be found in subsection 2.1.5. The proofs of all equalities and summations employed here are established in Section 3.1.3.
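As a quick numerical aside before the iteration is continued below, the summation formula (2.93) can be sanity-checked directly, with the generalized binomial coefficients evaluated through the Γ function. The snippet is a minimal sketch in Python with SciPy; this language choice is purely illustrative and is not asserted to be the environment used for the computations reported later in the thesis. Non-integer orders are used so that no Γ argument hits a pole; the case q = Q = −1 needed for (2.25) has to be read through the convention and recursion formula discussed above.

```python
from scipy.special import gamma

def gbinom(q, k):
    """Generalized binomial coefficient C(q, k) = Gamma(q+1) / (k! Gamma(q-k+1))."""
    return gamma(q + 1.0) / (gamma(k + 1.0) * gamma(q - k + 1.0))

def vandermonde(q, Q, l):
    """Return both sides of (2.93): sum_{k=0}^{l} C(q,k) C(Q,l-k) and C(q+Q,l)."""
    lhs = sum(gbinom(q, k) * gbinom(Q, l - k) for k in range(l + 1))
    rhs = gbinom(q + Q, l)
    return lhs, rhs

# Non-integer orders chosen so that no Gamma argument is a non-positive integer.
for l in range(5):
    lhs, rhs = vandermonde(0.5, -1.3, l)
    print(f"l = {l}:  sum = {lhs:+.10f}   C(q+Q, l) = {rhs:+.10f}")
```

The two columns agree to machine precision, which is consistent with the identity holding for arbitrary real orders and non-negative integer l.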
Continuing the iteration of the procedure that yielded (2.25) from (2.24), we arrive at the desired formula:  ∞  X d−n [f g] −n (j) d−n−j f g = , (2.26) j [d(x − a)]−n [d(x − a)]−n−j j=0 with n being a positive integer greater or equal to 1. A general expression of the product rule for multiple differintegration combining (2.26) with the Leibniz’s rule for repeated differentiation of a product is established in Section 2.1.4. The Chain Rule for Multiple Derivatives The chain rule for differentiation states that for two differentiable functions g and f defined in R, d d d f (g(x)) = f (u) g(x) = g ′ (x)f ′ (g(x)) . (2.27) dx du dx BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 10 This formula, facilitating the differentiation of g(u) with respect to x, given the derivatives of g(u) with respect to u and u with respect to x, stands as one of the most crucial tools in integer differential calculus. However, the chain rule doesn’t apply in fractional calculus due to the non-local nature of fractional derivatives, which incorporate information from the entire function rather than just a specific point or neighborhood. The proof for the chain rule from [2] heavily depends on the local nature of integer derivatives, as demonstrated below. Suppose f is differentiable at u = g(a), g is differentiable at a, and h(x) = f (g(x)). According to the definition of the derivative of h: h′ (a) = lim x→a h(x) − h(a) f (g(x)) − f (g(a)) = lim . x→a x−a x−a (2.28) We assume that g(a) ̸= g(x) for values of x near a but not equal to a. g(x) − g(a) , which equals 1, and let v = g(x), and We multiply the right hand of (2.28) by g(x) − g(a) u = g(a). This result is f (g(x)) − f (g(a)) g(x) − g(a) g(x) − g(a) x−a f (v) − f (u) g(x) − g(a) = lim . x→a v−u x−a h′ (a) = lim x→a By assumption, g is differentiable at a; therefore it is continuous at a. This means that limx→a g(x) = g(a), so v → u as x → a. Consequently, f (v) − f (u) g(x) − g(a) lim = f ′ (u)g ′ (a), v→u x→a v−u x−a h′ (a) = lim f (v) − f (u) g(x) − g(a) and g ′ (a) = limx→a . Because f and g are differenv→u v−u x−a tiable at u and a, respectively, the two limits in this expression exist; therefore h′ (a) exists. Noting that u = g(a), we have h′ (a) = f ′ (g(a)) g ′ (a). Replacing a with the variable x gives the chain rule h′ (x) = g ′ (x)f ′ (g(x)) . with f ′ (u) = lim Iterated Integrals Consider the formula d−1 f = [d(x − a)]−1 Z x f (y)dy = a 1 dn n! dxn Z x [x − y]n f (y)dy, n = 0, 1, 2, 3, · · · (2.29) a For n = 0, (2.29) is the identity (in this context the notational convention 0!=1 is used), while for n = 1, it follows easily from Leibniz’s theorem for differentiation and integration. For general integer n, one need only notice that the evaluation of the integrand on the right-hand side at the upper limit x gives 0, while the differentiation n times inside the integral produces n!f (y). A single integration of (2.29) for n = 1 produces: BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES d−2 f = [d(x − a)]−2 Z x Z x1 a a 1 f (x0 ) dx0 dx1 = 1! 11 Z x [x − y] f (y)dy, (2.30) a and an (n − 1)-fold integration produces Cauchy’s formula for repeated integration. Z x Z xn−1 Z x1 Z x 1 d−n f f (x0 ) dx0 · · · dxn−1 = ··· [x − y]n−1 f (y)dy. (2.31) = (n − 1)! [d(x − a)]−n a a a a Thus an iterated integral may be expressed as a weighted integral with a weight function, a fact that provides an important clue for generalizations involving non-integer orders. 
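Cauchy's formula (2.31) is easy to test numerically: an n-fold nested integration and the single weighted integral should return the same number. The sketch below uses Python with SciPy's adaptive quadrature; the library and the test function cos are illustrative assumptions, not quantities taken from the thesis.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def nested_integral(f, a, x, n):
    """n-fold iterated integral of f from a to x, evaluated recursively (left side of (2.31))."""
    if n == 1:
        return quad(f, a, x)[0]
    return quad(lambda t: nested_integral(f, a, t, n - 1), a, x)[0]

def cauchy_integral(f, a, x, n):
    """Single weighted integral (1/(n-1)!) * int_a^x (x-y)^(n-1) f(y) dy (right side of (2.31))."""
    return quad(lambda y: (x - y) ** (n - 1) * f(y), a, x)[0] / factorial(n - 1)

f = np.cos
for n in (2, 3):
    print(n, nested_integral(f, 0.0, 1.0, n), cauchy_integral(f, 0.0, 1.0, n))
```

For n = 2 both expressions reproduce 1 - cos(1), as expected from integrating cos twice from 0, and the weighted form needs only a single one-dimensional quadrature, which is precisely the computational appeal of (2.31).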
Differentiation and Integration of Series Many functions are often represented by infinite series expansions. Understanding the conditions for term-by-term differentiation or integration of such series is crucial. Below are two key results, with further extensions discussed in Section 2.1.4 for differintegrals of any order. Let f0 , f1 , . . . be functions defined and continuous on the interval a ≤ x ≤ b. Then, ∞ ∞ nX o X d−1 fj d−1 , a ≤ x ≤ b, (2.32) f = j [d(x − a)]−1 j=0 [d(x − a)]−1 j=0 X assuming that the series fj converges uniformly in the interval a ≤ x ≤ b. The conditions necessary to apply differentiation across the terms of an infinite series are slightly distinct. In this case, it is necessary for each fj to possess continuous derivatives on a ≤ x ≤ b. Then, ∞ ∞ j=0 j=0 d n X o X dfj , fj = dx dx X a ≤ x ≤ b, (2.33) X dfj converges uniformly on the interval a ≤ x ≤ b. dx This demonstrates that a series of continuous functions, converging uniformly (thus defining a continuous function), can be integrated term by term. Similarly, a series of continuously differentiable functions, converging pointwisely, can be differentiated term by term, provided the resulting series of derivatives converges uniformly. provided fj converges pointwise and Differentiation and Integration of Powers We gather here the elementary formulas that express dq [x − a]p for possible positive and [d(x − a)]q negative integer values of q. We have dn [x − a]p p![x − a]p−n p−n = , n = p[p − 1] · · · [p − n + 1][x − a] [d(x − a)] (p − n)! n = 0, 1, · · · (2.34) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES and d−n [x − a]p = [d(x − a)]−n = Z x1 Z x Z xn−1 ··· a a 12 [x0 − a]p dx0 · · · dxn−1 dx. a  [x − a]p+n     [p + 1][p + 1] · · · [p + n] , p > −1,     ∞, n = 1, 2, · · · p ≤ −1, (2.35) Let us highlight that for p > −1, [x − a]p+n p![x − a]p+n [x − a]p+n p![x − a]p−(−n) = = = . (p + n)! [p + 1][p + 2] · · · [p + n] (p + n)! (p − (−n))! p! Let us denote by q an integer of either sign and p > −1. Upon substituting q = n, a positive integer, in equation (2.34), we obtain p![x − a]p−q Γ(p + 1)[x − a]p−q dq [x − a]p = . q = [d(x − a)] (p − q)! Γ(p − q + 1) (2.36) Similarly, q = −n in (2.35) leads to Γ(p + 1)[x − a]p−q dq [x − a]p . q = [d(x − a)] Γ(p − q + 1) Combining (2.36) and (2.37) yields   ∞, q = −1, −2, · · ·, p ≤ −1,     q p d [x − a] =  [d(x − a)]q q = 0, 1, 2, · · ·, Γ(p + 1)[x − a]p−q     Γ(p − q + 1) q = −1, −2, · · ·, The coefficient 2.1.2 (2.37) all p (2.38) p > −1. Γ(p + 1) may be positive, negative or zero. Γ(p − q + 1) Fractional Derivatives and Integrals: Definitions and Equivalencies Equation (2.13) defines an entity that we will refer to as a differintegral of an integer order. This can be extended to any real order q. In fact, nothing in the expression of (2.13) prevents it from being applied to real orders, and the Γ function present in its formula, which is an extension of the factorial to real numbers, smooths the whole process. In this chapter, we compare several definitions of the differintegral of a function f of an arbitrary order q. Perhaps the least ambiguous symbolism for the value at x of the differintegral of order q of a function defined on an interval a ≤ y ≤ x would be BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES dq f , [d(y − a)]q 13 q ∈ R. We shall eventually relate this differintegral to an ordinary integral in which y is a ”dummy” variable of integration, and a and x are limits of integration. 
In line with conventions adopted in Section 2.1.1, our normal abbreviations for the q th differintegral of the function f will be h i dq f dq f (x0 ) dq f . , and or [d(x − a)]q [d(x − a)]q [d(x − a)]q x=x0 It being understood that f and dq f are functions of the independent variable x when [d(x − a)]q the x is omitted. Differintegrable Functions It is now time to define the class of functions to which we will apply differintegration operators. We will primarily focus on classically defined functions. For such classically defined functions, we follow the principles of integral calculus and stipulate that our candidate functions must be defined on the closed interval a ≤ y ≤ x, bounded everywhere in the half-open interval a < y ≤ x, and be ”well behaved” at the lower limit a (i.e not diverge at the lower limit a). Good examples of such functions are polynomials, exponential, logarithms and all functions whose differintegrals can be determined. In this section, we will direct our attention to a category known as ”differintegrable series”, defined as finite sums of functions, each of which can be expressed as f (y) = [y − a]p ∞ X j cj [y − a] n , c0 ̸= 0, p > −1, n ≥ 1, an integer. (2.39) j=0 Notice that p has been chosen to ensure that the leading coefficient is nonzero. Such differintegrable series f then satisfy n o lim [y − a]f (y) = 0. y→a An important consequence of this representation is that given the fact that ∞ X j j=0 n 1 2 n−1 + +···+ + n n n       1 2 n−1 1+ 1+ + 1+ +··· 1+ + n n n       1 2 n−1 2+ 2+ + 2+ + ··· 2 + + n n n       1 2 n−1 N+ N+ + N+ + ··· N + + ··· n n n = 0+ BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 14 Let us consider the following result: ∞ X j j=0 n = 0 + 1 + 2 + 3 + 4 + · · · + N + ···       1 1 1 1 + + 1+ + 2+ +···+ N + +··· n n n n       2 2 2 2 + 2+ +···+ N + + ··· + + 1+ n n n n       n−1 n−1 n−1 n−1 + + 2+ +···+ N + +··· + 1+ n n n n Let us denote ∞ X j1 = 0 + 1 + 2 + 3 + 4 + · · · + N + · · · j1 =0      ∞  X   1 1 1 1 1 = + 1+ + 2+ +···+ N + + ··· j2 + n n n n n j2 =0        ∞  X n−1 n−1 n−1 n−1 n−1 = + 1+ + 2+ +···+ N + + ··· . jn + n n n n n jn =0 We then finally have ∞ X j j=0 n = ∞ X j1 =0   ∞  ∞  X X n−1 1 +···+ . j1 + jn + j2 + n n jn =0 j2 =0 Utilizing this result in (2.39) leads to the following decomposition of f as a finite sum of units of power series. f (y) = [y−a]p ∞ X cj1 [y−a]j1 +[y−a][np+1]/n j1 =0 ∞ X j2 =0 cj2 [y−a]j2 +···+[y−a][np+n−1]/n ∞ X cjn [y−a]jn , jn =0 of n differintegrable units fv , each of which is a product of a power(greater than -1) of (y − a) and a function analytic in (y − a). The desirability of this property will become more apparent in the subsequent sections. Fundamental Definitions The initial definition we present is what we consider the most fundamental, because it imposes the fewest restrictions on the functions it applies to and avoids explicit use of the concepts of ordinary derivative and integral. This definition, which directly extends and unifies notions BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 15 of difference quotients and Riemann sums, was first introduced by Grünwald (1867) and later expanded by Post (1930). Referring to Section 2.1.1 and the discussion leading to equation (2.13), we define the differintegral of order q using the formula: ( ) N −1 dq f (x) [δx]−q X Γ(j − q) = lim f (x − j δx) , (2.40) [d(x − a)q ] δx→0 Γ(−q) Γ(j + 1) j=0 where q is arbitrary. 
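The Note that follows explains how N enters the limit in (2.40); as a complementary illustration, a finite-N evaluation of that sum is easy to code. The sketch below is a minimal Python implementation, offered as an assumption-laden illustration rather than the thesis's own program: the weight Γ(j−q)/(Γ(−q)Γ(j+1)) is built by the recurrence w_j = w_{j-1}(j−1−q)/j, which follows from the recurrence relation of the Γ function and avoids overflowing individual Γ factors for large j.

```python
import numpy as np

def grunwald_differintegral(f, x, a, q, N):
    """Finite-N approximation of d^q f / [d(x-a)]^q at x from definition (2.40)."""
    dx = (x - a) / N
    j = np.arange(N)
    # w_j = Gamma(j - q) / (Gamma(-q) Gamma(j + 1)), generated by the ratio (j - 1 - q) / j
    w = np.cumprod(np.concatenate(([1.0], (j[1:] - 1 - q) / j[1:])))
    return dx ** (-q) * np.sum(w * f(x - j * dx))

# Example: order q = -1/2 (a semi-integral) of f(x) = x on [0, 1] for increasing N;
# the differintegral is the N -> infinity limit, and the value can be compared with
# the closed forms derived for simple power functions in Section 2.1.3.
f = lambda t: t
for N in (10, 100, 1000, 10000):
    print(N, grunwald_differintegral(f, 1.0, 0.0, -0.5, N))
```

The printed values settle towards a limit as N grows, which is the behaviour the definition demands; the same routine accepts positive q, since the weights remain well defined.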
Note: On one hand, if q < 0, N ≥ 1 represents the number of subdivisions employed in defining an integral as a limit of sums. On the other hand, if q ≥ 0, in order to unify the formula of the nth derivative with one that defines an integral as a limit of sums, we define the derivatives in terms of a restricted limit. This limit occurs as δx tends to zero through discrete values only, [x − a] , N = 1, 2, · · · . by setting δx = N It is worth noting that this definition solely relies on evaluations of the function itself; no explicit use is made of derivatives or integrals of f . However, the most commonly encountered definition of an integral of fractional order is through an integral operator known as the Riemann-Liouville integral. To justify this definition, one only needs to consider Cauchy’s formula in equation (2.31) and replace ”−n” with q, suggesting the generalization to non-integer orders: " # Z x dq f 1 [x − y]−q−1 f (y)dy, q < 0. (2.41) = [d(x − a)]q Γ(−q) a R−L In equation (2.41), we have utilized the subscript [· · ·]R−L to denote the Riemann-Liouville fractional integral, which may be distinct from our more fundamental definition in (2.40). We will now demonstrate that the two definitions produce identical outcomes, and the subscript [· · ·]R−L will be omitted thereafter. Formula (2.41) will be retained as the q < 0 definition of the differintegral; it is extended to q ≥ 0 by insisting that equation (2.41) satisfies by the ”restricted composition”. " # " # dq f dn dq−n f = n , (2.42) [d(x − a)]q dx [d(x − a)]q−n R−L where R−L dn denotes ordinary n-fold differentiation and n is an integer chosen large enough so dxn that q − n < 0. Together with equation (2.41), definition (2.42) then defines the operator " # dq f , (2.43) [d(x − a)]q R−L BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 16 for all q. Before demonstrating the equivalence between the two differintegral formulas (2.40) and (2.43), it is necessary to establish the ”restricted composition rule” for (2.40). This rule is utilized for both (2.40) and (2.41) in the proof of equivalence. Based on definition (2.40), we aim to demonstrate that # " dn dn+q f dq f = , (2.44) q dxn [d(x − a)] [d(x − a)]n+q for all positive integers n and all q. One might consider this property as a limited composition law, i.e., a rule for composing orders of the generalized differintegral. To validate this assertion, x−a let δx = , N ( ) N −1 dq f (x) [δx]−q X Γ(j − q) = lim f (x − j δx) . δx→0 Γ(−q) [d(x − a)]q Γ(j + 1) j=0 Upon subdividing the interval a ≤ y ≤ x − δx into only N − 1 equally spaced subintervals, we see that ( ) N −2 dq f [δx]−q X Γ(j − q) (x − δx) = lim f (x − δx − j δx) δx→0 Γ(−q) [d(x − a)]q Γ(j + 1) j=0 ) ( N −1 [δx]−q X Γ(j − q − 1) f (x − j δx) . = lim δx→0 Γ(−q) Γ(j) j=1 On differentiation making use of restricted limits as explained at the start of section (2.1) to d define , one gets dx #) " # " ( qf qf d dq f d d (x) − (x − δx) = lim [δx]−1 N →∞ dx [d(x − a)]q [d(x − a)]q [d(x − a)]q ( " ( )#) N −1 X [δx]−q−1 Γ(j − q) Γ(j − q − 1) = lim Γ(−q)f (x) + − . N →∞ Γ(−q) Γ(j + 1) Γ(j) j=1 Making use of the recurrence properties of the Γ function, Γ(j − q) Γ(j − q − 1) Γ(−q)Γ(j − q − 1) − = , Γ(j + 1) Γ(j) Γ(−q − 1)Γ(j + 1) is obtained. Therefore, " # ( " N −1 #) d [δx]−q−1 X Γ(j − q − 1) dq f = lim f (x − j δx) N →∞ Γ(−q − 1) dx [d(x − a)]q Γ(j + 1) j=0 = dq+1 [d(x − a)]q+1 Equation (2.44) follows by induction. . 
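Ahead of the formal equivalence argument of the next subsection, the agreement between the Grünwald definition (2.40) and the Riemann-Liouville definition (2.41) for q < 0 can be illustrated numerically. The following Python sketch is again only an illustration under stated assumptions: SciPy's quadrature with an algebraic end-point weight handles the factor (x−y)^(−q−1) in (2.41), and the Grünwald sum is truncated at a large but finite N.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def grunwald(f, x, a, q, N=20000):
    """Finite-N Grunwald sum from definition (2.40)."""
    dx = (x - a) / N
    j = np.arange(N)
    w = np.cumprod(np.concatenate(([1.0], (j[1:] - 1 - q) / j[1:])))
    return dx ** (-q) * np.sum(w * f(x - j * dx))

def riemann_liouville(f, x, a, q):
    """Riemann-Liouville definition (2.41), q < 0, with the algebraic weight (x-y)^(-q-1)
    built into the quadrature rule so the weak end-point singularity is handled exactly."""
    val, _ = quad(f, a, x, weight='alg', wvar=(0.0, -q - 1.0))
    return val / gamma(-q)

f = np.cos
for q in (-0.5, -1.5, -2.5):
    print(q, grunwald(f, 1.0, 0.0, q), riemann_liouville(f, 1.0, 0.0, q))
```

The two columns agree to within the slow O(1/N) truncation error of the Grünwald sum, which is the numerical face of the equivalence established below.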
BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 17 Equivalence of Definitions It is now relevant to inquire whether the Riemann-Liouville definition, based on equation (2.41) for negative q and its extension to q ≥ 0 by means of equation (2.42), yields operators that coincide for all functions f . We will demonstrate that this is indeed the case. Firstly, we establish the identity for a subset of q values and then utilize property (2.42) to extend the identity to all orders q. To begin, let f be an arbitrary but fixed function on the interval x−a a ≤ y ≤ x. As before, we define δx = , then the difference N " # dq f dq f (x) − (x) ∆= [d(x − a)]q [d(x − a)]q R−L ( ) Z N −1 x−a −q X [δx] f (x − u) Γ(j − q) = lim f (x − j δx) − du δx→0 Γ(−q) Γ(j + 1) Γ(−q)u1+q 0 j=0   −1 N −1  [δx]−q NX  X Γ(j − q) f (x − j δx)δx = lim f (x − j δx) − lim  δx→0 δx→0  Γ(−q) Γ(j + 1) Γ(−q)[j δx]1+q j=0 j=0 ( " #) N −1 [δx]−q X Γ(j − q) = lim − j −1−q f (x − j δx) N →∞ Γ(−q) Γ(j + q) j=0 ( N −1 " #) X  N x − jx + ja  Γ(j − q) [x − a]−q lim f Nq − j −1−q . = Γ(−q) N →∞ N Γ(j + q) (2.45) j=0 The N terms within the summation are divided into two groups: 0 ≤ j ≤ J − 1 and J ≤ j ≤ N − 1, where J is independent of N . Thus " #) ( J−1 X  N x − jx + ja  [x − a]−q q Γ(j − q) −1−q ∆= lim N −j f Γ(−q) N →∞ N Γ(j + q) j=0 (2.46) ( N −1 #" #) [x − a]−q 1 X  N x − jx + ja h j i−2−q q[q + 1] O(j −1 ) + lim + f , Γ(−q) N →∞ N N N 2N N j=J where N is sufficiently large to validate the asymptotic expansion for terms in the second group. Now, for q < −1, the J bracketed terms within the first summation are bounded.  N x − jx + ja  Hence, if f is also bounded for j within the first group, the presence of the N q N factor ensures that the first sum vanishes in the limit N → ∞. Examining the three factors h j i−2−q within the second summation, we note that is always less than unity if q ≤ −2, and N  N x − jx + ja  that the third factor tends to zero as N → ∞. Therefore, if f is bounded for N BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 18 1 j within the second group, each term in the second summation vanishes since → 0 when N N → ∞. The above demonstrates that if f is bounded on a < y ≤ x and if q ≤ −2, then " # dq f dq f ∆= (x) − (x) ≡ 0, (2.47) [d(x − a)]q [d(x − a)]q R−L Thus, the two definitions, when applied to functions bounded in this manner, are indeed identical for q ≤ −2. This fact, along with property (2.42) and requirement (2.44) , demonstrates that the two definitions are identical for any q. Indeed, for arbitrary q, we know that for any positive integer n " ) # ( dq f dn dq−n f = n [d(x − a)]q dx [d(x − a)]q−n and " dq f [d(x − a)]q # R−L dn = n dx ( ) dq−n f . [d(x − a)]q−n One need only choose n sufficiently large that q − n ≤ −2 and make use of (2.47) to complete the proof. Fractional Calculus for Complex Numbers A different avenue for motivating the definition of the differintegral stems from consideration of Cauchy’s integral formula. The Cauchy’s integral formula is a powerful result in complex analysis which provides a method for computing higher order derivatives of a complex valued function within a simply connected domain using contour integration. The formula states that if f (z) is an analytic function within a simply connected domain D containing a closed contour C, and z is a point inside C, then the nth derivative of f (z) at z is given by: I dn f (z) f (ξ) n! = dξ, n dz 2πi C [ξ − z]n+1 where: H • C denotes the counter-clockwise integral along the closed contour C. 
Contour integrals involve integrating a complex valued function along a path in the complex plane, known as a contour. Contour refers to a curve or a path in the complex plane. These curves are typically defined by parametric equations or by specifying points along the curve. • f (ξ) represents the value of the function f (z) at point ξ on the contour C. √ • i denotes the imaginary unit and is defined as i = −1. BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 19 • A region D in the complex plane is said to be simply connected if any two points in D can be joined by a path lying entirely within D, and if every simple closed curve in D can be continuously shrunk to a point without leaving D. When the positive integer n is replaced by a non-integer q, the expression [ξ − z]−q−1 no longer has a pole at ξ = z but a branch point. In this case, one cannot freely deform the contour C surrounding z since the integral will depend on the location of the point at which C crosses the branch line for [ξ − z]−q−1 . A branch line is a line in the complex plane across which a complex-valued function undergoes a discontinuity or a change in behavior. The point is chosen to be 0 and the branch line to be the straight line joining 0 and z, in the quadrant ℜ(ξ) ≤ 0, ℑ(ξ) ≤ 0 as shown in Figure 2.1. Then, for q not a negative integer, one simply defines: I dq f (z) Γ(q + 1) f (ξ) = dξ, (2.48) q q+1 dz 2πi C [ξ − z] where the contour C begins and ends at ξ = 0 enclosing z once in a contour traversed counterclockwise. To uniquely specify the denominator of the integrand, one defines [ξ − z]q+1 = exp ([q + 1] log (ξ − z)), where log (ξ − z) is real if ξ and z are both real numbers and ξ > z. We can relate definition (2.48) to that of Riemann-Liouville by first deforming the contour C into a contour C ′ lying on both sides of the branch line. One finds that ) f (ξ) dξ , q+1 C3 [ξ − z] (2.49) where C1 denotes the segment below the branch line and joining the point 0 to a close neighborhood of z, C2 the circular contour around z and C3 the segment above the branch line and joining a close neighborhood of z to the point 0 as shown in Figure 2.1. Γ(q + 1) 2πi I f (ξ) Γ(q + 1) dξ = q+1 2πi C ′ [ξ − z] (Z f (ξ) dξ + q+1 C1 [ξ − z] Z f (ξ) dξ + q+1 C2 [ξ − z] Parameterization: • C1 : ξ = µ − iϵ, 0 ≤ µ ≤ z, ϵ → 0+ , • C2 : ξ − z = r exp (iθ), −π ≤ θ ≤ π, and r → 0+ , • C3 : ξ = z − µ + iϵ, ϵ → 0+ . 0≤µ≤z Thus, Z I1 = lim ϵ→0+ f (ξ) dξ = lim q+1 ϵ→0+ C1 [ξ − z] Z z 0 f (µ − iϵ) dµ. [µ − iϵ − z]q+1 Z BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 20 Figure 2.1: The Cauchy contour C is deformed into C ′ for the purpose of implementing the Cauchy’s integral formula. Subsequently, Z f (ξ) dξ = q+1 C2 [ξ − z] Z π f (z + r exp (iθ)) ri exp (iθ)dθ. q+1 −π [r exp (iθ)] We have Z π f (z + r exp (iθ)) ri exp (iθ)dθ ≤ q+1 −π [r exp (iθ)] Z π f (z + r exp (iθ)) ri exp (iθ) dθ [r exp (iθ)]q+1 −π Z π f (z + r exp (iθ)) = −π [r exp (iθ)]q+1 ri exp (iθ) dθ Z π f (z + r exp (iθ)) = −π |r|q+1 |r|dθ. Since q < 0, f (z + r exp (iθ)) lim r→0+ |r|q+1 Z π |r| = 0 ⇒ lim r→0+ f (z + r exp (iθ)) ri exp (iθ)dθ = 0. q+1 −π [r exp (iθ)] Then, for r → 0+ , Z f (ξ) dξ → 0. q+1 C2 [ξ − z] BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 21 Moreover, Z z Z z Z f (z − µ + iϵ) f (z − µ + iϵ) f (ξ) dξ = (−dµ) = − dµ. I3 = q+1 q+1 q+1 [ξ − z] [z − µ + iϵ − z] 0 0 [−µ + iϵ] C3 Upon setting ζ = z − µ, this leads to Z z I3 = − lim ϵ→0 0 f (ζ + iϵ) dζ. [ζ − z + iϵ]q+1 (2.50) Let us compare I1 andI3 . 
Since I1 and I3 have the same integration boundaries, we just have to compare their integrands. Additionally, their integrands are both complex functions; comparing them is equivalent to comparing their modulis and arguments. Let A1 = f (µ − iϵ) f (µ − iϵ) ⇒ |A1 | = . q+1 [µ − iϵ − z] [µ − iϵ − z]q+1 Knowing that   [µ − iϵ − z]q+1 = exp [q + 1] log (µ − iϵ − z) , with log (µ − iϵ − z) = ln |µ − iϵ − z| + i Arg (µ − iϵ − z), and Arg(θ) being the argument of θ on the principal branch. We finally have     [µ − iϵ − z]q+1 = exp [q + 1] ln |µ − iϵ − z| exp i[q + 1] Arg (µ − iϵ − z) . This leads to ! f (µ − iϵ) arg (A1 ) = arg [µ − iϵ − z]q+1     = arg f (µ − iϵ) − arg [µ − iϵ − z]q+1   = arg f (µ − iϵ) − arg     exp [q + 1] ln |µ − iϵ − z| exp i[q + 1] Arg (µ − iϵ − z)     = arg f (µ − iϵ) − [q + 1] Arg µ − iϵ − z . For µ = ζ, we finally have |A1 | = and f (ζ − iϵ) , [ζ − iϵ − z]q+1     arg (A1 ) = arg f (ζ − iϵ) − [q + 1] Arg ζ − iϵ − z . ! BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 22 Similarly, |A3 | = and f (ζ + iϵ) , [ζ + iϵ − z]q+1     arg (A3 ) = arg f (ζ + iϵ) − [q + 1] Arg ζ + iϵ − z . Going from ζ − iϵ − z to ζ + iϵ − z requires a rotation of 2π, then     Arg ζ + iϵ − z = Arg ζ − iϵ − z + 2π. This leads to       arg (A3 ) = arg f (ζ + iϵ) − [q + 1] Arg ζ − iϵ − z + 2π . When ϵ → 0+ , we finally have |A1 | = |A3 | and arg (A3 ) = arg (A1 ) − 2π[q + 1] ⇒ ⇒   A3 = A1 exp − 2πi[q + 1]   I3 = I1 exp − 2πi[q + 1] . We finally obtain for ζ = ξ I Z z Z z f (ξ) f (ξ) f (ξ) dξ = dξ − exp (−2πi[q + 1]) dξ q+1 q+1 q+1 C ′ [ξ − z] 0 [ξ − z] 0 [ξ − z]  Z z f (ξ) = 1 − exp (−2πi[q + 1]) dξ. q+1 0 [ξ − z] Substituting this result in (2.44) leads to I Z z Γ(q + 1) f (ξ) Γ(q + 1)  f (ξ) dξ = 1 − exp (−2πi[q + 1]) dξ. q+1 2πi [ξ − z] 2πi [ξ − z]q+1 ′ C 0 Let us compute On one hand,  Γ(q + 1)  1 − exp (−2πi[q + 1]) : 2πi n o 1 − exp (−2πi[q + 1]) = 1 − cos (2π[q + 1]) − i sin (2π[q + 1]) (2.51) = 1 − cos (2π[q + 1]) + i sin (2π[q + 1]). On the other hand, using the reflection property of the Γ function, we have Γ(q + 1) π 1     = = . 2πi 2πi sin π(q + 1) Γ(−q) 2i sin π(q + 1) Γ(−q) (2.52) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 23 Combining (2.51) and (2.52) gives  1 − cos (2π[q + 1]) + i sin (2π[q + 1]) Γ(q + 1)    1 − exp (−2πi[q + 1]) = 2πi 2i sin π(q + 1) Γ(−q)       2 sin2 π(q + 1) + 2i cos π(q + 1) sin π(q + 1)   = 2i sin π(q + 1) Γ(−q)     sin π(q + 1) + i cos π(q + 1) = iΓ(−q)     −i sin π(q + 1) + cos π(q + 1) = Γ(−q) exp (−πi[q + 1]) = . Γ(−q) Then, Z z f (ξ) f (ξ) Γ(q + 1)  1 − exp (−2πi[q + 1]) dξ = dξ q+1 q+1 2πi C ′ [ξ − z] 0 [ξ − z] Z f (ξ) exp (−πi[q + 1]) z dξ = Γ(−q) [ξ − z]q+1 0 Z z 1 f (ξ) = dξ Γ(−q) 0 exp (πi[q + 1])[ξ − z]q+1 Z z f (ξ) 1 = h iq+1 dξ Γ(−q) 0 exp (πi)[ξ − z] Z z 1 f (ξ) = h iq+1 dξ Γ(−q) 0 (−1) · [ξ − z] Z z 1 f (ξ) dξ. = Γ(−q) 0 [z − ξ]q+1 I This is the Riemann-Liouville definition with a = 0. Definition (2.48) is attributed by Oster (1970a) to Nekrassov (1808). Erdélyi (1964) defined a q-th-order differintegral of a function f (z) with respect to the function z n by: Z z dq f (z) 1 f (ξ)nξ n−1 = dξ. [d(z n − an )]q Γ(−q) a [z n − ξ n ]q+1 The types of derivatives reviewed above are considered classical fractional derivatives, since they were posed shortly after the integer calculus became widely accepted. In the 20th century numerous other types of derivatives were defined and some of the properties discussed above were lost in favour of other more important ones. This will be discussed further in the following section. 
BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 24 Other Formulas Applicable To Analytic Functions dq concerning real [d(x − a)]q analytic functions. These functions, denoted as ϕ, possess convergent power series expansions within the interval a ≤ y ≤ x. Such representations introduce computational flexibility for evaluating q-th order differintegrals for specific ϕ choices. Initially, our focus is on q < 0, allowing us to utilize the Riemann-Liouville definition. Thus, Z x Z x−a dq ϕ 1 ϕ(y) 1 ϕ(x − v) dy = dv, (2.53) q = q+1 [d(x − a)] Γ(−q) a (x − y) Γ(−q) 0 v q+1 The aim of this section is to explore alternative representations for with v = x − y. Upon Taylor expansion of ϕ(x − v) about x, one has ∞ (1) ϕ(x − v) = ϕ − vϕ X (−v)k ϕ(k) v2 + ϕ(2) − · · · = . 2! k! (2.54) k=0 The representation (2.54) involves no remainder since we have assumed that ϕ possesses a convergent power series expansion, and since such an expansion is unique. When this expansion is inserted into (2.53) and term-by-term integration is performed, the result is ∞ X (−1)k (x − a)k−q ϕ(k) dq ϕ = . [d(x − a)]q Γ(−q)(k − q)k! (2.55) k=0 2.1.3 Differentiation of Simple Functions The purpose of this chapter is to calculate the q th order differintegral of certain simple functions. The simple functions considered are examples of power functions [x−a]p . Thus, we first examine the instances where p = 0 and p = 1, with the general case being explored just after. The Zero Function When definition (2.13) is applied to the function defined by f ≡ c, c any constant including zero, we see that ( ) −1 h N iq NX dq [c] dq [1] Γ(j − q) =c = c lim N →∞ [d(x − a)]q [d(x − a)]q x−a Γ(−q)Γ(j + 1) j=0 (2.56) [x − a]−q =c . Γ(1 − q) The final expression is obtained upon applying first (2.91): N −1 X j=0 Γ(j − q) Γ(N − q) = , Γ(−q)Γ(j + 1) Γ(1 − q)Γ(N ) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 25 and (2.87): " lim j→∞   ∞, c > 0, c+q Γ(j − q) = 1, c = 0, j  Γ(j)  0, c < 0, # the proof of which can be found in subsection 2.1.5. Since dq [1] is never infinite for [d(x − a)]q x > a, we conclude by setting c = 0 that dq [0] = 0, [d(x − a)]q for all q. (2.57) Result (2.56) may appear obvious. However, as an example of its importance, observe that it provides a powerful counterexample to the thesis that if dq f = g, [d(x − a)]q then d−q g = f, [d(x − a)]−q for if f yields zero upon differentiation to order q, f cannot be restored by q-order integration. Here again, we encounter the so called composition rule, this time for non-integer orders. This subject will be more thoroughly explored in section 2.1.4. The Unit Function We consider the differintegral to order q of the function f ≡ 1. A straightforward application of (2.40) to the function f ≡ 1 gives ( ) −1 h N iq NX dq [1] Γ(j − q) = lim . N →∞ [d(x − a)]q x−a Γ(−q)Γ(j + 1) j=0 Application first of (2.91) and (2.87) as for the zero function yields ( ) h N iq Γ(N − q) dq [1] [x − a]−q = , = lim N →∞ [d(x − a)]q x − a Γ(1 − q)Γ(N ) Γ(1 − q) (2.58) as a result. As an example of the application of the unit function and its differintegrals, consider the combination of formulas (2.55) and (2.58) into ∞ X dq ϕ dq−k [1] k Γ(1 + k − q) ϕ(k) , = [−1] [d(x − a)]q Γ(−q)[k − q]k! [d(x − a)]q−k k=0 (2.59) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 26 valid for any analytic function ϕ. 
Application of the recurrence property of the Gamma function as well as (2.89), the proof of which can be found in subsection 2.1.5 leads to the concise representation ∞   X dq ϕ q dq−k [1] dk ϕ = , (2.60) [d(x − a)]q k [d(x − a)]q−k [d(x − a)]k k=0 where   q Γ(q + 1) = . k k!Γ(q − k + 1) The function x − a For the function f (x) = x − a, definition (2.40) gives ( ) −1 h N iq NX i Γ(j − q) h N x − jx + ja dq [x − a] = lim −a N →∞ [d(x − a)]q x−a Γ(−q)Γ(j + 1) N j=0 " ( N −1 ) ( )# N −1 X X Γ(j − q) Γ(j − q) 1−q q q−1 = [x − a] lim N − lim N j N →∞ N →∞ Γ(−q)Γ(j + 1) Γ(−q)Γ(j + 1) j=0 j=0 " ( )# Γ(N − q) −qΓ(N − q) = [x − a]1−q lim N q − lim N q−1 . N →∞ Γ(1 − q)Γ(N ) N →∞ Γ(2 − q)Γ(N − 1) Similarly as we proved in (2.87), when N → ∞, we have Γ(N − q) ∼ N −q , Γ(N ) Γ(N − q) ∼ N −q+1 . Γ(N − 1) Substituting this result into the equation leads to "( )# q q−1 dq [x − a] N 1 N q = [x − a]1−q lim · + lim · N →∞ N q Γ(1 − q) N →∞ N q−1 Γ(2 − q) [d(x − a)]q )# "( q 1 + = [x − a]1−q Γ(1 − q) Γ(2 − q) " # Γ(2 − q) + qΓ(1 − q) = [x − a]1−q . Γ(1 − q)Γ(2 − q) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 27 On application of the recurrence property of the Γ function, this leads to " # dq [x − a] 1−q (1 − q)Γ(1 − q) + qΓ(1 − q) = [x − a] [d(x − a)]q Γ(1 − q)Γ(2 − q) " # Γ(1 − q) = [x − a]1−q Γ(1 − q)Γ(2 − q) = [x − a]1−q . Γ(2 − q) Alternatively, a similar result can be achieved using the Riemann-Liouville formula (2.41). On substituting w = x − y, Z x dq [x − a] 1 [y − a] = dy [d(x − a)]q Γ(−q) a (x − y)q+1 Z x−a 1 [x − a − w] = dw Γ(−q) 0 wq+1 " # 1 [x − a]1−q [x − a]1−q = − Γ(−q) −q 1−q = [x − a]1−q , [−q][1 − q]Γ(−q) q < 0. The denominator of which equals Γ(2 − q) by applying the recurrence formula −qΓ(−q) = Γ(1 − q) and (1 − q)Γ(1 − q) = Γ(2 − q). Use of equation (2.42) yields dq−n [x − a] [x − a]1−q+n , = Γ(2 − q + n) [d(x − a)]q−n where dq [x − a] dn q = [d(x − a)] [d(x − a)]q ( dq−n [x − a] [d(x − a)]q−n ) Γ(2 − q + n) [x − a]1−q [x − a]1−q = = , Γ(2 − q) Γ(2 − q + n) Γ(2 − q) (2.61) follows by equation (2.36). We note, as expected, that formula (2.61) reduces to zero when [x − a]n+1 q = 2, 3, 4, · · · , to unity when q = 1, to x − a when q = 0, and to when (n + 1)! q = −n = −1, −2, −3 · · · . Notice also, on comparison of formulas (2.61) and (2.58), that the q th differintegral of x − a equals the (q − 1)th differintegral of unity. BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 28 The Function [x − a]p In this section, we focus on the function f (x) = [x − a]p , where p is initially arbitrary. However, it becomes apparent that p must exceed −1 for differintegration to satisfy the required properties. For integers q of any sign, we can use the classical calculus formula (2.34). Our exploration of non-integer q will initially be limited to negative values, allowing us to leverage the Riemann-Liouville definition. dq [x − a]p 1 = [d(x − a)]q Γ(−q) Z x a [y − a]p 1 dy = [x − y]q+1 Γ(−q) Z x−a a vp dv, [x − a − v]q+1 q < 0, where v has replaced y − a. By further replacement of v by [x − a]u, the integral may be cast into the standard Beta integral form: Z dq [x − a]p [x − a]p−q 1 p u [1 − u]−q−1 du, q < 0. (2.62) = [d(x − a)]q Γ(−q) 0 The definite integral in (2.62) is the Beta function (2.2), which is defined as Z 1 B (p, q) = y p−1 [1 − y]q−1 dy, p > 0 < q. 
0 Therefore, dq [x − a]p [x − a]p−q Γ(p + 1)[x − a]p−q B (p + 1, −q) = , q = [d(x − a)] Γ(−q) Γ(p − q + 1) q < 1, p > −1, (2.63) where the Beta function has been replaced by its Gamma function equivalent using (2.3): Γ(p)Γ(q) B (p, q) = . Γ(p + q) Complex variable approach: For comparison of techniques, we verify (2.56) with a completely different approach, starting with the definition (2.48). We replace x − a by z such that I dq [x − a]p dq z p Γ(q + 1) ξp = = dξ, q q+1 [d(x − a)] dz p 2πi C [ξ − z] where the contour C in the complex ξ plane begins and ends at ξ = 0 enclosing z once in the positive sense. If one sets ξ = zs, then I dq z p Γ(q + 1) p−q = z sp [s − 1]−q−1 ds, dz p 2πi C where the integral is over a contour encircling the point s = 1 once in the positive sense and beginning and ending at s = 0. When such a contour is deformed into the one shown in Figure BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 29 2.1, then " #Z 1 dq z p Γ(q + 1)z p−q 1 − exp (−2πi[q + 1]) sp [s − 1]−q−1 ds = dz p 2πi 0 " #Z 1 Γ(q + 1)z p−q = exp (−iπ[q + 1]) exp (iπ[q + 1]) − exp (−iπ[q + 1]) sp [s − 1]−q−1 ds 2πi 0 " #Z 1 Γ(q + 1)z p−q [−1][−q−1] exp (iπ[q + 1]) − exp (−iπ[q + 1]) sp [s − 1]−q−1 ds = 2πi 0 " #Z 1 Γ(q + 1)z p−q = exp (iπ[q + 1]) − exp (−iπ[q + 1]) sp [1 − s]−q−1 ds 2πi 0 Z 1 Γ(q + 1) p−q = z 2i sin (π[q + 1]) sp [1 − s]−q−1 ds 2πi 0 Γ(p + 1)z p−q = , p > −1, q < 0, Γ(p − q + 1) π and property sin(πz) (2.3) of the Beta integral. We may again use equation (2.42) together with the classical formula (2.36) to extend our treatment to positive q. Following this technique, " # dq [x − a]p dn dq−n [x − a]p = n [d(x − a)]q dx [d(x − a)]q−n # " Z dn [x − a]p−q+n 1 p n−q−1 u [1 − u] du = n dx Γ(n − q) 0 " # dn Γ(p + 1)[x − a]p−q+n = n dx Γ(p − q + n + 1) where use has been made of the reflection property of Γ, Γ(z)Γ(1 − z) = Γ(p + 1) dn [x − a]p−q+n Γ(p − q + n + 1) dxn Γ(p + 1) Γ(p − q + n + 1) = [x − a]p−q Γ(p − q + n + 1) Γ(p − q + 1) Γ(p + 1)[x − a]p−q = , p > −1. Γ(p − q + 1) = where, since we chose 0 ≤ q < n , we were able to use (2.58) to evaluate the [q − n]th differintegral of [x − a]p . The classical formula (2.34) then leads to dq [x − a]p Γ(p + 1)[x − a]p−q = , [d(x − a)]q Γ(p − q + 1) q ≥ 0, p > −1, BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 30 straightforwardly. Unification of this result with (2.63) yields the formula dq [x − a]p Γ(p + 1)[x − a]p−q = , [d(x − a)]q Γ(p − q + 1) p > −1, (2.64) valid for all q. As required for an acceptable formula in our generalized calculus, equation (2.64) incorporates the classical formula (2.35). The formula dq xp Γ(p + 1) = xp−q q dx Γ(p − q + 1) was important in being the basis of the concept of fractional differentiation as developed by Gemant (1936). Thus far, this section has only considered instances where p > −1 for [x − a]p . We now briefly consider the case where p ≤ −1. The generalized derivative (2.62) breaks down for p ≤ −1, because the Beta integrals then diverge. A suggestion of the type dq [x − a]p = ∞, [d(x − a)]q p ≤ −1, all q, would, however, be unacceptable because it would fail to incorporate the classical result (2.34) for positive integer q. Similarly, formula (2.64) cannot be extended to p ≤ −1, because although it does not incorporate (2.34), it also does not reproduce (2.35) for negative integer q. Moreover, we know of no generalization of formula (2.64) that incorporates both of the requirements (2.34) and (2.35) for p ≤ −1. 
The breakdown of (2.64) for p ≤ −1 is associated with the pole of order unity or greater which occurs at x = a for the functions [x − a]p , p ≤ −1. Functions for which such a pole occurs anywhere on the open interval from a to x lead to similar difficulties and for reasons such as this we have purposely excluded these functions from the class of differintegrable series. 2.1.4 General Properties This chapter delves into the properties of differintegral operators that we anticipate will extend classical formulas for derivatives and integrals. These properties serve as our primary framework for comprehending and employing fractional calculus. While some classical properties generalize seamlessly, others require adjustments. Unless stated otherwise, we presume that all encountered functions are differintegrable as outlined in Section 2.1.2. Throughout this chapter, we often limit our focus to differintegrable series for practical reasons. Linearity The linearity of the differintegral operator, by which we mean dq [f1 + f2 ] dq f1 dq f2 , q = q + [d(x − a)] [d(x − a)] [d(x − a)]q is an immediate consequence of any of the definitions of the differintegral. (2.65) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 31 Differintegration Term by Term The linearity of the differintegral operators implies that they can be distributed through the terms of a finite sum; i.e., n n j=0 j=0 X X dq fj dq . fj = q [d(x − a)] [d(x − a)]q (2.66) We aim to investigate the circumstances permitting term-by-term differintegration of an infinite series of functions. Our primary objective in this section is to establish the term-by-term differintegrability of general differintegrable series (see Section 2.1.2). We will frequently rely on classical results concerning the differentiation and integration of infinite series term by term (see Section 2.1.1). To utilize these results effectively, we must ensure that the terms fj of the series are either continuous or continuously differentiable. If we focus on summands fj that are differintegrable series, the structure of such series and its term-by-term derivative demonstrate that the necessary continuity assumptions hold away from the lower limit x = a. Going forward, we will examine infinite sums of differintegrable series and establish results regarding the termby-term differintegrability of such sums, which will generally hold in open intervals such as a < x < a + X, where X represents the radius of convergence of the differintegrable series. First, however, we need to establish some facts about the radius of convergence. Consider first the ordinary power series ∞ X ϕ(j) (a) ϕ= cj [x − a]j , cj = , j! j=0 convergent for |x−a| ≤ X. One knows from classical results that ϕ, together with all of its termby-term derivatives and integrals, converges uniformly in the interval 0 ≤ |x − a| < X. What can be said about the series obtained, more generally, from term-by-term differintegration of ϕ? dq Making use of equation (2.64), the series obtained by applying to every summand [d(x − a)]q of ϕ is the series ∞ ∞ X X cj Γ(j + 1) ϕ(j) (a) j−q −q [x − a] = [x − a] [x − a]j . 
Γ(j − q + 1) Γ(j − q + 1) j=0 j=0 We know that the series for ϕ converges for |x − a| < X, where, by the ratio test, X = lim j→∞ cj cj+1 (j + 1)ϕ(j) (a) , j→∞ ϕ(j+1) (a) = lim BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 32 while the differintegral series will converge for ϕ(j) (a)Γ(j − q + 2) j→∞ ϕ(j+1) (a)Γ(j − q + 1) |x − a| < lim (j − q + 1)ϕ(j) (a) j→∞ ϕ(j+1) (a) = lim (j + 1)ϕ(j) (a) qϕ(j) (a) − j→∞ ϕ(j+1) (a) ϕ(j+1) (a) = lim (j + 1)ϕ(j) (a) ϕ(j) (a) + −q lim j→∞ j→∞ ϕ(j+1) (a) ϕ(j+1) (a) ≤ lim ϕ(j) (a) (j + 1)ϕ(j) (a) + |q| lim j→∞ ϕ(j+1) (a) j→∞ ϕ(j+1) (a) = lim = X ± qA, depending on the sign of q, where ϕ(j) (a) . j→∞ ϕ(j+1) (a) A = lim Since q is finite and j → ∞, (j + 1) ≫ q ⇒ lim (j + 1) j→∞ ϕ(j) (a) ϕ(j) (a) ≫ q lim j→∞ ϕ(j+1) (a) ϕ(j+1) (a) (j + 1)ϕ(j) (a) ϕ(j) (a) ≫ q lim . j→∞ j→∞ ϕ(j+1) (a) ϕ(j+1) (a) ⇒ lim (j + 1)ϕ(j) (a) , we have X ≫ qA, which implies that qA is consistently negj→∞ ϕ(j+1) (a) ligible compared to X, indicating that the differintegrated series converges within the open interval 0 < |x − a| < X. Moreover, the same conclusion applies to the differintegrable series, where the j th term isX cj [x − a]j+p (as this series shares the same radius of convergence as its analytic component, cj [x−a]j , and thus extends to general differintegrable series). In other words, if the differintegrable series f , which is a finite sum of functions, each representable as For X = lim p [x − a] ∞ X j1 =0 j1 [np+1]/n cj1 [x − a] + [x − a] ∞ X j2 =0 j2 [np+n−1]/n cj2 [x − a] + · · · + [x − a] ∞ X cjn [x − a]jn , jn =0 converges for |x − a| < X, then so does the series obtained by differintegrating each ”unit” term by term, except possibly at the endpoint x = a. This fact will be important in what BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 33 follows. Let f be any differintegrable series. Since f may be decomposed as a finite sum of differintegrable series units, ∞ X p fv = [x − a] cj [x − a]j , j=0 where p > −1 and c0 ̸= 0, the term-by-term differintegrability of f will follow from that of fv . Accordingly, our objective is to establish that ( ) ∞ ∞ X X dq dq [x − a]p+j p j [x − a] (2.67) c [x − a] = c j j [d(x − a)]q [d(x − a)]q j=0 j=0 for all q. More specifically, the equality (2.67) will be proven valid inside the interval of ∞ X convergence of the differintegrable series cj [x − a]p+j . For q ≤ 0, a stronger result that j=0 directly extends the classical theorem on term-by-term integration X is easy to establish as follows. Suppose the infinite series of differintegrable functions fj converges uniformly in 0 < |x − a| < X; then ∞ ∞ X X dq fj dq , fj = q [d(x − a)] [d(x − a)]q j=0 q ≤ 0, (2.68) j=0 and the right-hand series also converges uniformly in 0 < |x − a| < X. To demonstrate this result, let ∞ N X X f= fj , SN = fj . j=0 j=0 Since q < 0, the Riemann-Liouville representations Z x Z x fj (y) dq fj f (y) dq f 1 1 dy, dy q = q = q+1 [d(x − a)] Γ(−q) a [x − y] [d(x − a)] Γ(−q) a [x − y]q+1 are valid, and dq f dq SN 1 q − q = [d(x − a)] [d(x − a)] Γ(−q) Z x a [f (y) − SN ] dy. [x − y]q+1 The property of uniform convergence means that, given ϵ > 0, there is an integer N = N (ϵ) such that |f (y) − Sn (y)| < ϵ, BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 34 for n > N and for all y in the interval a ≤ y ≤ x with |x − a| < X. 
Then Z x dq fj 1 dq f [f (y) − SN ] − = dy [d(x − a)]q [d(x − a)]q Γ(−q) a [x − y]q+1 Z x 1 |f (y) − SN (y)|dy ≤ Γ(−q) a [x − y]q+1 Z x ϵ [x − y]−q−1 dy < Γ(−q) a ϵ[x − a]−q = qΓ(−q) ϵX −q < , qΓ(−q) which can be made small, with ϵ → 0, independently of x, in the interval 0 < |x − a| < X. X dq fj dq f converges uniformly to in 0 < |x − a| < X. The This proves that [d(x − a)]q [d(x − a)]q result just established shows that equation (2.67) is valid for q ≤ 0, and thus, if f is any dq may be distributed through the several infinite differintegrable series, the operator [d(x − a)]q series that define f as long as q ≤ 0. Applying result (2.64) to equation (2.67) gives ∞ X cj Γ(p + j + 1) dq fv [x − a]p+j−q , q = [d(x − a)] Γ(p − q + j + 1) q ≤ 0. (2.69) j=0 Equations (2.67) and (2.69) are also valid for q > 0, as we now establish. First, we decompose the series for fv into two expressions: fv = ∞ X j=0 cj [x − a]p+j = X cj [x − a]p+j + j∈J1 X cj [x − a]p+j , j∈J2 where J1 is the set of nonnegative integers j for which Γ(p − q + j + 1) is infinite, and J2 consists of all nonnegative integers not in J1 . Γ(p − q + j + 1) being infinite implies that p − q + j + 1 is either zero or a negative integer. However, as j increases, and since p − q + 1 is finite, p − q + 1 + j is guaranteed to become greater than zero for finite values of j. This implies that the values of j for which p − q + 1 + j can be either negative or zero are also finite. Therefore, J1 is finite. ) ) ( ( X X dq fv dq dq p+j p+j = cj [x − a] + cj [x − a] [d(x − a)]q [d(x − a)]q [d(x − a)]q j∈J1 j∈J2 ( ) X dq [x − a]p+j X dq = + cj [x − a]p+j , [d(x − a)]q [d(x − a)]q j∈J1 j∈J2 BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 35 dq . Now we see that the proof of equation (2.69) [d(x − a)]q for q > 0 depends only upon establishing that making use only of the linearity of dq [d(x − a)]q ( ) X cj [x−a]p+j j∈J2 = X j∈J2 cj X cj Γ(p + j + 1) dq [x − a]p+j = [x−a]p+j−q , [d(x − a)]q Γ(p − q + j + 1) q > 0. j∈J2 (2.70) Given the uniform convergence of the series for fv in 0 < |x − a| < X, the series on the right-hand side of equation (2.70) will also converge uniformly, as demonstrated earlier in this d−1 section. Hence, the operator can be applied to the terms of this series to obtain [d(x − a)]−1 ( ) X cj Γ(p + j + 1) d−1 [x − a]p+j−q [d(x − a)]−1 j∈J2 Γ(p − q + j + 1) ( ) X cj Γ(p + j + 1) d−1 p+j−q [x − a] = [d(x − a)]−1 Γ(p − q + j + 1) j∈J2 X cj Γ(p + j + 1)Γ(p − q + j + 1) [x − a]p+j−q+1 Γ(p − q + j + 1)Γ(p − q + j + 2) = (2.71) j∈J2 X cj Γ(p + j + 1) [x − a]p+j−q+1 Γ(p − q + j + 2) = j∈J2 X = j∈J2 cj dq−1 p+j . q−1 [x − a] [d(x − a)] Moreover, the last series converges uniformly in 0 < |x − a| < X, as does the series obtained by differentiating each term. The cancellation required to derive the penultimate expression in equation (2.71) can be justified since the definition of set J2 ensures that Γ(p − q + j + 1) is finite. Applying the classical theorem on term-by-term differentiation (see Section 2.1.1) to X dq−1 the series cj [x − a]p+j yields [d(x − a)]q−1 ( ) X d X dq−1 dq p+j p+j cj [x − a] = c . j q [x − a] q−1 dx [d(x − a)] [d(x − a)] j∈J2 j∈J2 Arguing similarly, we find that ( ) X X dn dq−n dq p+j p+j c [x − a] = c , j j q [x − a] q−n dxn [d(x − a)] [d(x − a)] j∈J j∈J 2 2 (2.72) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 36 for every positive integer n. Choosing the smallest n to make q − n < 0 permits us to apply equation (2.69) with the result that ( ) X X dq−n dq−n p+j p+j cj = cj [x − a] . 
q−n [x − a] q−n [d(x − a)] [d(x − a)] j∈J j∈J 2 2 Differentiating both sides of this equation n times, we see that ) ( ( ) X X dq dn dq−n p+j p+j [x − a] = cj [x − a] . cj dxn [d(x − a)]q [d(x − a)]q−n j∈J2 j∈J2 Utilizing equation (2.72) gives, finally, ( ) X X dq dq p+j [x − a]p+j , cj [x − a] = cj q [d(x − a)] [d(x − a)]q j∈J2 q > 0, j∈J2 as we wanted to show. Thus the representation (2.70) is valid for q > 0 and, hence, for arbitrary q. Scale Change By a scale change of the function f with respect to a lower limit a, we mean its replacement by f (β − β + a), where β is a constant termed the scaling factor. To clarify this definition, consider a = 0; then the scale change converts f (x) to f (βx), in contrast to the homogeneity operation to the previous section which converted f (x) to Cf (x). dq In this section we seek a procedure by which the effect to the generalized [d(x − a)]q dq operation upon f (βx−βa+a) can be found, if is known. We shall find it convenient [d(x − a)]q to use the abbreviation X = x + [a − aβ]/β, and to adopt the Riemann-Liouville definition (2.41). Using Y as a replacement for βy −βa+a, we proceed as follows: Z x dq f (βy − βa + a) 1 f (βy − βa + a) dq f (βX) = = dy q q [d(x − a)] [d(x − a)] Γ(−q) a [x − y]q+1 Z βx f (Y )[dY /β] 1 = Γ(−q) a [βX − Y ]/β q+1 (2.73) Z βX f (Y ) βq = dY Γ(−q) a [βX − Y ]q+1 dq f (βX) = βq . [d(βX − a)]q BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 37 The unity of the formula (2.73) is greatest when a = 0, for then X = x and the scale change is simply a multiplication of the independent variable by a constant, the formula being q dq f (βX) q d f (βX) = β . [dx]q [dβx]q (2.74) Leibniz’s Rule The rule for differentiating a product of two functions is a well-known result in elementary calculus. It states that n   dn [f g] X n dn−j f dj g = (2.75) j dxn−j dxj dxn j=0 and is, of course, limited to non-negative integers n. In Section 2.1.1, we derived, employing integration by parts, the subsequent product rule for multiple integrals:  ∞  X d−n [f g] d−n−j f −n dj g = . j [d(x − a)]−n−j [d(x − a)]j [d(x − a)]−n j=0 When we observe that the finite sum in (2.75) could equally well extend to infinity [since  n = 0 for j > n], we might expect the product rule to generalize to arbitrary order q as j ∞   X dq [f g] q dq−j f dj g = . [d(x − a)]q j [d(x − a)]−q−j [d(x − a)]j j=0 (2.76) That such generalization is indeed valid for real analytic functions ϕ(x) and ψ(x) will now be established. Starting with equation (2.60) and substituting for ϕ the product ϕψ, we obtain ∞   X dq [ϕψ] q dq−k [1] = [ϕψ](k) [d(x − a)]q k [d(x − a)]q−k k=0 ∞   k   X X q dq−k [1] k (k−j) (j) = ϕ ψ , q−k j k [d(x − a)] j=0 k=0 making use of (2.75). Note that, since j is an integer, the repeated derivative ψ (j) with respect to x equals that with respect to x − a. The permutation (2.94) ∞ X k X k=0 j=0 = ∞ X ∞ X j=0 k=j (2.77) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 38 may be applied to give ∞ ∞    X X dq [ϕψ] q k dq−k [1] (j) ϕ(k−j) ψ q = [d(x − a)] k j [d(x − a)]q−k j=0 k=j   ∞ ∞  X X q l+j dq−j−l [1] (j) ϕ(l) = ψ −q−j−l l+j j [d(x − a)] j=0 l=0     ∞ ∞ X q−j X q dq−j−l [1] = ψ (j) ϕ(l) −q−j−l l j [d(x − a)] j=0 l=0   ∞ X q dq−j [1] = ϕ, ψ (j) j [d(x − a)]−q−j j=0 where we have made use of the identity (2.95)       q l+j q q−j = , l+j j j l the proof of which can be found in subsection 2.1.5, and the subsequent application of (2.60). 
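As a quick sanity check of the generalized product rule (2.76), not part of the original derivation, one may take $a=0$, $q=\tfrac12$ and the analytic functions $\phi(x)=\psi(x)=x$, for which only the terms $j=0,1$ survive because $d^{j}x/dx^{j}=0$ for $j\ge 2$:
\[
\frac{d^{1/2}[x\cdot x]}{dx^{1/2}}
=\binom{\tfrac12}{0}\frac{d^{1/2}x}{dx^{1/2}}\,x
+\binom{\tfrac12}{1}\frac{d^{-1/2}x}{dx^{-1/2}}\cdot 1
=\frac{2x^{1/2}}{\sqrt{\pi}}\,x+\frac12\cdot\frac{4x^{3/2}}{3\sqrt{\pi}}
=\frac{8x^{3/2}}{3\sqrt{\pi}},
\]
which agrees with the direct evaluation obtained from (2.64),
\[
\frac{d^{1/2}x^{2}}{dx^{1/2}}=\frac{\Gamma(3)}{\Gamma(5/2)}\,x^{3/2}=\frac{8x^{3/2}}{3\sqrt{\pi}}.
\]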
We established (2.55) under the assumption that ϕ is a real analytic function and utilized (2.55) to demonstrate (2.60), which, in turn, was employed to establish (2.76). Therefore, the latter is proven only if both ϕ and ψ are real analytic functions. Composition Rule dq , we aim to [d(x − a)]q In our quest for a comprehensive composition rule for the operator uncover the connection between dq dQ , q · [d(x − a)] [d(x − a)]Q and dq+Q [d(x − a)]q+Q , which we temporarily abbreviate to dq dQ f and dq+Q respectively. Naturally, for these symbols to have general significance, we must assume not only that f is differintegrable but also that dQ f is differintegrable. In this section, we confine our focus to differintegrable series as defined in Section 2.1.2. Thereby, the most general non-zero differintegrable series is a finite sum of differintegrable ”units,” each taking the form fv = [x − a]p ∞ X cj [x − a]j , p > −1, c0 ̸= 0. (2.78) j=0 We shall see that the composition rule may be valid for some units of f but possibly not for others. It follows from the linearity of differintegral operators that dq dQ f = dq+Q f (2.79) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 39 if dq dQ fv = dq+Q fv (2.80) for every unit fv of f . Accordingly, we shall first assess the validity of the composition rule (2.80) for differintegrable series unit functions fv . Obviously, if fv ≡ 0, then dQ fv = 0 for every Q by equation (2.57), and so dq dQ [0] = dq+Q [0] = 0. While the composition rule is trivially satisfied for the differintegrable function fv ≡ 0, we shall see that the possibility fv ̸= 0, but dQ fv = 0, is exactly the condition that prevents the composition rule (2.80), and therefore (2.79), from being satisfied generally. Having dealt with the case fv ≡ 0, we now assume fv ̸= 0 and use equation (2.69) to evaluate dQ fv : Q d fv = ∞ X Q p+j cj d [x − a] j=0 = ∞ X cj Γ(p + j + 1)[x − a]p+j−Q j=0 Γ(p + j − Q + 1) . (2.81) Furthermore, we observe that since p > −1, it implies that p + j > −1, ensuring that Γ(p + j + 1) is always finite but nonzero. Hence, individual terms in dQ fv will only vanish when the coefficient cj is zero or when the denominatorial Γ function Γ(p + j + 1 − Q) is infinite. Consequently, we realize that a necessary and sufficient condition for dQ fv ̸= 0 is Γ(p + j + 1 − Q) is finite for each j for which cj ̸= 0. (2.82) This awkward condition (2.82) may be shown to be equivalent to fv − d−Q dQ fv = 0; (2.83) that is, to the condition that the differintegrable unit fv be regenerated upon the application, first of dQ , then d−Q . Assuming (2.83) temporarily, we find that dq may then be applied to equation (2.81) to give q Q d d fv = ∞ X cj Γ(p + j + 1)Γ(p + j − Q + 1)[x − a]p+j−Q−q j=0 Γ(p + j − Q + 1)Γ(p + j − Q − q + 1) . On the other hand, the same technique shows that dq+Q fv = ∞ X j=0 cj dq+Q fv = ∞ X cj Γ(p + j + 1)[x − a]p+j−Q−q j=0 Γ(p + j − Q − q + 1) = dq dQ fv . Thus, the composition rule (2.80) is obeyed for the unit fv as long as condition (2.83) is satisfied. However, when (2.83) is violated, dQ fv = 0 so that dq dQ fv = 0. On the other hand, BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 40 it is not necessarily the case that dq+Q fv = 0. For example, we may choose fv = x−1/2 , a = 0, 1 1 Q = , and q = − . Then 2 2 1 Γ( ) fv − d−Q dQ fv = x−1/2 − d−1/2 d1/2 x−1/2 = x−1/2 − d−1/2 2 x−1/2 = x−1/2 ̸= 0, Γ(0) so that condition (2.83) is certainly violated. Therefore, dQ fv = 0 and dq dQ fv = 0 while dq+Q fv = x−1/2 ̸= 0. 
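For contrast with the counterexample above (this check is not in the original text), consider a unit for which condition (2.83) does hold: $f_v=x^{1/2}$, $a=0$, $Q=\tfrac12$, $q=-\tfrac12$. Using (2.64),
\[
d^{1/2}x^{1/2}=\frac{\Gamma(3/2)}{\Gamma(1)}\,x^{0}=\frac{\sqrt{\pi}}{2},
\qquad
d^{-1/2}d^{1/2}x^{1/2}=\frac{\sqrt{\pi}}{2}\cdot\frac{\Gamma(1)}{\Gamma(3/2)}\,x^{1/2}=x^{1/2},
\]
so $f_v-d^{-Q}d^{Q}f_v=0$, and indeed
\[
d^{q}d^{Q}x^{1/2}=x^{1/2}=d^{0}x^{1/2}=d^{q+Q}x^{1/2},
\]
in agreement with the composition rule (2.80).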
Generalizing, we easily see the relationship between dq dQ fv and dq+Q fv in case fv − d−Q dQ fv ̸= 0 to be 0 = dq dQ fv = dq+Q fv − dq+Q {fv − d−Q dQ fv }. (2.84) The preceding discussion for differintegrable units fv is summarized in the Table 2.1. fv ≡ 0 dQ fv = 0 fv − d−Q dQ fv = 0 q Q q+Q d d fv = d dQ fv ̸= 0 fv ̸= 0 fv = 0 Not attainable fv − d−Q dQ fv ̸= 0 0 = dq dQ fv = dq+Q fv − dq+Q [fv − d−Q dQ fv ] fv − d−Q dQ fv = 0 dq dQ fv = dq+Q fv . Table 2.1: Summary of the composition rule for differintegrable units fv While equation (2.84) is a trivial identity for differintegrable units, its significance becomes more apparent and thus more useful for general differintegrable series. Because equation (2.79) holds true for general differintegrable series f if and only if equation (2.80) is valid for every differintegrable unit fv of f , applying the theory developed for units fv to derive the composition rule for general f is straightforward. The only difference lies in the conditions fv ̸= 0 and fv − d−Q dQ fv = 0 (2.85) for units fv guaranteed that dQ fv ̸= 0, this is no longer the case for arbitrary f . The reason, of course, is that some units of f may satisfy (2.85) while others do not. This will make it possible to violate the composition rule (2.79) even though f ̸= 0 and dQ f ̸= 0. BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 41 Then, in conjunction with the preceding requirement, it is imperative to include an additional condition fv − d−Q dQ fv = 0 (2.86) for each differintegrable unit, as delineated in Table 2.2. Note: Section 2.1.5 provides proofs for certain results utilized in Chapter 2. If these have been previously reviewed, the reader should then proceed to Chapter 3. f ≡0 f ̸= 0 f − d−Q dQ f ̸= 0 f − d−Q dQ f ≡ 0 dQ f ≡ 0 q Q q+Q d d f =d 0 = dq dQ f = dq+Q f f =0 − dq+Q [f − d−Q dQ f ] if f − d−Q dQ f ≡ 0 then dq dQ f = dq+Q f dQ f ̸= 0 Not attainable if f − d−Q dQ f ̸= 0 then dq dQ f = dq+Q f − dq+Q [f − d−Q dQ f ]. Table 2.2: Summary of the composition rule for arbitrary differintegrable functions, f . 2.1.5 Auxiliary Proofs The following properties, along with their accompanying proofs, were frequently utilized to derive important results in this chapter: 1. We aim to prove " j c+q+1 lim j→∞ " Given that lim j→∞ # j Γ(j − q) = lim j→∞ Γ(j + 1) c+q+1 Γ(j − q) Γ(j + 1) # " " = lim j→∞ j   ∞, c > 0, c+q Γ(j − q) j = 1, c = 0,  Γ(j)  0, c < 0 # c+q Γ(j − q) Γ(j) (2.87) # . Using the asymptotic expan- BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES Γ(j − q) (2.1), we have Γ(j + 1) " # Γ(j − q) q[q + 1] ∼ j −1−q 1 + + O(j −2 ) , Γ(j + 1) 2j 42 sion of q ∈ R, j ∈ N, and j → ∞. Subsequently, " # Γ(j − q) Γ(j − q) q[q + 1] Γ(j − q) −q −2 = ⇒ ∼j 1+ + O(j ) . Γ(j + 1) jΓ(j) Γ(j) 2j Additionally, j→∞⇒ Γ(j − q) q[q + 1] →0⇒ ∼ j −q . 2j Γ(j) Applying this result in (2.87) leads to " lim j→∞   ∞, c > 0, j c+q c+q Γ(j − q) c j = lim q = lim j = 1, c = 0, j→∞ j j→∞  Γ(j)  0, c < 0. # (2.88) 2. We aim to prove that [−1]j     n j−n−1 Γ(j − n) . = = Γ(−n)Γ(j + 1) j j (2.89)     n j−n−1 Γ(j − n) Let Q1 , Q2 and Q3 be . In order to derive that j j Γ(−n)Γ(j + 1) proof, we show that both Q1 and Q2 are equal to Q3 . Let us first show that Q2 are equal to Q3 . be [−1]j Q2 =   j−n−1 Γ(j − n) = . Γ(j + 1)Γ(−n) j Let us now show that Q1 are equal to Q3 Q1 = [−1]j   n Γ(n + 1) = [−1]j . j Γ(n − j + 1)Γ(j + 1) From the previous result, we only need to show that [−1]j Γ(n + 1) Γ(j − n) = . 
Γ(n − j + 1) Γ(−n) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 43 On one hand, using the reflection property of Γ, and the fact that cos (jπ) = [−1]j for all positive integers j, we have −π sin((n − j)π)Γ(n − j + 1) −π = Γ(n − j + 1) (sin(πn) cos(πj) − cos(πn) sin(πj)) −π = . Γ(n − j + 1) sin(πn)[−1]j Γ(j − n) = Γ (−(n − j)) = On the other hand, taking advantage of the reflection property again, we have Γ(−n) = −π . Γ(n + 1) sin (πn) Finally, using the former results, we have Γ(j − n) −π Γ(n + 1) sin(πn) = j Γ(−n) Γ(n − j + 1) sin(πn)[−1] −π −j Γ(n + 1)[−1] = Γ(j − j + 1) Γ(n + 1) = [−1]j . Γ(n − j + 1) 3. Let us establish that  n  X j−q−1 j=0 j   n−q = . n (2.90) We aim to show using induction that ∀n ∈ N,  n  X j−q−1 j=0 j   n−q = . n Initial condition: n = 1. The left hand side of the equation gives  1  X j−q−1 j=0 j         0−1−q 1−q−1 −q − 1 −q = + = + = 1 − q. 0 1 0 1 The right hand side of the equation gives   1−q = 1 − q. 1 BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 44 Let us suppose now the equation is true for n = k and let us prove that it is also true for n = k + 1. We need to show that    k+1  X j−q−1 k+1−q = . j k+1 j=0  k+1  X j−q−1 j=0 j =  k  X j−q−1 j=0 j  k  X j−q−1   k+1−q−1 + k+1  k−q = + j k+1 j=0     k−q k−q = + . k k+1    k−q Γ(k − q + 1) Γ(k − q + 1) = = . k Γ(k + 1)Γ(−q + 1) Γ(k + 1)(−q)Γ(−q)   k−q Γ(k − q + 1) Γ(k − q + 1) = = . k+1 Γ(k + 2)Γ(−q) (k + 1)Γ(k + 1)Γ(−q)     k−q k−q Γ(k − q + 1) Γ(k − q + 1) + = + k k+1 Γ(k + 1)(−q)Γ(−q) (k + 1)Γ(k + 1)Γ(−q) ( ) Γ(k − q + 1) Γ(k − q + 1) 1 + = Γ(k + 1)Γ(−q) −q k+1 Γ(k − q + 1)(k + 1 − q) Γ(k − q + 2) = (−q)Γ(k + 1)Γ(−q)(k + 1) Γ(−q + 1)Γ(k + 2)   k−q+1 = . k+1 = 4. Let us prove that N −1 X j=0 Γ(j − q) Γ(N − q) = . Γ(−q)Γ(j + 1) Γ(1 − q)Γ(N ) (2.91) This result is derived by setting N = n + 1 and expressing the binomial coefficients as their equivalent Γ function combination in (2.90) as follows:    n  n X X j−q−1 n−q Γ(j − q) Γ(n − q + 1) = ⇔ = . j n Γ(j + 1)Γ(−q) Γ(n + 1)Γ(−q + 1) j=0 j=0 By setting N = n + 1, we obtain N −1 X j=0 Γ(j − q) Γ(N − q) = . Γ(j + 1)Γ(−q) Γ(N )Γ(1 − q) BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 45 5. Our purpose is to demonstrate that N −1 X j=0 −qΓ(N − q) Γ(j − q) = . Γ(−q)Γ(j) Γ(2 − q)Γ(N − 1) (2.92) For this proof, let us consider the following binomial coefficient identity which can also be verified using induction as shown in (2.90).  n  X j−q−1 j=1 j−1  n X n−q Γ(j − q) Γ(n − q + 1) ⇔ = . n−1 Γ(j)Γ(−q + 1) Γ(n)Γ(2 − q)  = j=1 By setting N = n + 1, we obtain, this finally leads to N −1 X j=0 Γ(N − q) Γ(j − q) = . Γ(j)Γ(−q + 1) Γ(N − 1)Γ(2 − q) Using the recurrence property of the Γ function leads to N −1 X j=0 N −1 X Γ(j − q) Γ(N − q) −qΓ(N − q) Γ(j − q) = ⇔ = . Γ(j)(−q)Γ(−q) Γ(N − 1)Γ(2 − q) Γ(j)Γ(−q) Γ(N − 1)Γ(2 − q) j=0 6. Let us establish the binomial result    j   X Q q+Q q = . j−k j k (2.93) k=0 For qand Qintergers, this equality implies that choosing j elements out of q+Q elements,   q+Q q i.e., , is equivalent to choosing k elements out of q, i.e., , and j − k elements j k   Q out of Q, i.e., . Finally, by considering all the possible values of k between 0 and j−k j, we obtain the desired result. For q and Q reals numbers, we have  ∞  ∞   X ∞   X X q+Q j q q Q Q q+Q q Q x = (1 + x) = (1 + x) (1 + x) = x x . j k l j=0 k=0 Upon applying the summation formula ∞ X n=0 an ∞ X n=0 bn = ∞ X n X n=0 k=0 ak bn−k , l=0 BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 46 we obtain  ∞  ∞ X X q+Q j x = (1 + x)q+Q = j j=0 j=0 ! 
j   X q Q xj k j−k k=0    X j   q Q q+Q . ⇔ = k j−k j k=0 7. We aim to establish the binomial equality       q l+j q q−j = . l+j j j l (2.94) A good way to understand this binomial equality is to consider the example of a box containing q balls. Out of those q balls, such that we have l which are red and j which are blue. This equality represents 2 identical ways to choose those l red balls and j blue balls out of those q balls. The first way (the left-hand side) involves selecting l + j balls then, assigning  l to be red and j to be blue (a total of l + j) all at once out of the q q balls . Subsequently, out of the (l + j) selected, we select either j blue balls or l+j     l+j l+j l red balls since = . The second way (the right-hand side) consists of j l selecting  each color ball one after the other. We first choose the j red onesout ofthe q q q−j total . Then, out of the q − j balls remaining, we select the blue ones . j l 8. ∞ X k X k=0 j=0 = ∞ X ∞ X . (2.95) j=0 k=j The summation equality is established graphically: ∞ X k X Z ∞Z x is the discrete equivalent to the double integral dy dx, which represents 0 k=0 j=0 0 the area of the lower region delimited by the x-axis, the y-axis, and the line y = x, ∞ X ∞ X as shown in Figure 2.2 (red). Similarly, is the discrete equivalent to the double Z ∞Z ∞ integral j=0 k=j dx dy, representing the area of the upper region delimited by the x-axis, 0 y the y-axis, and the line y = x, as shown in Figure 2.2 (black). From Figure 2.2, these areas are equal, and then, (g) follows. BACKGROUND, PROPERTIES, SIMILARITIES AND DIFFERENCIES 47 Figure 2.2: Upper and lower regions of equal areas. The discret version of this proof involves ∞ X k X sums over all pairs (k, j) where j ≤ k, k=0 j=0 and ∞ X ∞ X j=0 k=j sums over all pairs (k, j) where k ≥ j. Chapter 3 Gierer-Meinhardt Model 3.1 Representation, Matched Asymptotic Expansion and Solution Our study will focus solely on sub-diffusion with Gierer-Meinhardt reaction kinetics. The Gierer-Meinhardt model is a mathematical model that describes the formation of spatial patterns in biological systems. Proposed by Hans Meinhardt and Leo Gierer in the late 1960s, the model studies the interaction between the activator and the inhibitor. The original Gierer-Meinhardt model is represented by a system of two partial differential equations characterizing the rate of change of the concentrations of both activator and inhibitor species with respect to space and time: ∂t a = ϵ2 axx − a + ap , hq τ ∂t h = Dhxx − h + ϵ−1 am , hs (3.1) where a(x, t) and h(x, t) are the activator and inhibitor concentrations at position x and time t. Here ϵ2 and D denote the constant diffusivities, τ is the reaction time constant, and the exponents (p, q, m, s) satisfy p > 1, q > 0, m > 0, s ≥ 0, p−1 m < . q s+1 When integrating sub-diffusion into the system, involving some concepts of fractional calculus, we obtain the following system ∂tγ a = ϵ2γ axx − a + ap , hq τ ∂tγ h = Dhxx − h + ϵ−γ am , hs (3.2) where the anomaly exponent γ is in the range 0 < γ < 1 . The original Gierer-Meinhardt model can be recovered by substituting γ = 1 in the new system (3.2). Setting γ → 0+ 48 REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 49 is inappropriate since ϵγ becomes arbitrarily small as ϵ → 0 when 0 < γ < 1, whereas ϵγ = 1 for arbitrarily small ϵ when γ = 0. The Gierer-Meinhardt model, being a dynamic system, implies the existence of equilibrium points. 
An equilibrium point in our context denotes a point where the partial derivatives of a and h with respect to time and position vanish. However, upon closer examination of the definition of the fractional derivative utilizing the Riemann-Liouville integral (2.41), we observe it vanishes solely when the differentiated function is zero. Consequently, we opt to employ a variant of the Riemann-Liouville integral expressed as dγ f (x) −1 = γ [d(x − c)] Γ(−γ) Z x f (x) − f (x − ξ) dξ, ξ γ+1 c 0 < γ < 1. For c = 0, and x = t we have dγ f (t) ∂tγ f (t) = = γ dt 3.1.1 −1 Γ(−γ) Z t f (t) − f (t − ξ) dξ, ξ γ+1 0 0 < γ < 1. (3.3) Neumann Boundaries Conditions As previously discussed, the interaction between activator and inhibitor molecules gives rise to distinctive spike patterns. Since our study revolves around a one-dimensional domain, these spikes propagate in both directions along the line until equilibrium is reached. For the purpose of our investigation, we have opted to focus on the symmetric interval [−1, 1] as our chosen study domain. Neumann boundary conditions represent a fundamental type of boundary condition extensively employed in mathematical and physical models. They provide a framework for understanding how a given quantity evolves or interacts with its surroundings at the boundaries of a specified domain. In the context of the sub-diffusion reaction under examination, the Neumann boundary conditions play a crucial role by articulating how the rate of change in concentration (or flux) for both activators and inhibitors becomes null at x = ±1. This insight sheds light on the stability of the reaction at these specific points, preventing the entrance of any molecules into the study domain. ∂tγ a = ϵ2γ axx − a + ap hq τ ∂tγ h = Dhxx − h + ϵ−γ ax (±1, t) = hx (±1, t) = 0, am hs − 1 < x < 1, t > 0. − 1 < x < 1, a(x, 0) = a0 (x), with t = 0 representing the beginning of the reaction. (3.4a) t > 0. (3.4b) h(x, 0) = h0 (x). (3.4c) REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 3.1.2 50 Asymptotic Solution of the Gierer-Meinhardt Model An earlier analysis of the system composed of equations (1.1), (1.2), and (1.3) from [5] unveiled the existence of a solution (a, h) as ϵ approaches zero. This seminal study showed that the concentration a nullifies across the entire range of the problem, except at a finite set of points xi , where i ranges from 1 to n. A time scale σ = ϵα t allows to establish a local region around each xi , where the concentration of the activator does not vanish. As a result, the behaviour of the solution function a can be captured separately within the inner and outer regions. A popular method for constructing these specific types of solutions that differ in nature from one region to another is called the ”matched asymptotic expansions”. Matched asymptotic expansion In mathematics, the technique known as matched asymptotic expansions is a common approach for deriving an approximation to the solution of an equation or a system of equations. This method involves the discovery of multiple distinct approximate solutions, each valid for a specific segment within the range of the independent variable. In each segment, the solution is approximated by an asymptotic series. Subsequently, these distinct solutions are combined to formulate a unique approximated solution, applicable across the entirety of the independent variable’s value range. In the case of the GiererMeinhardt model the solution (a, h) is to be found separately in the outer and inner regions and matched. 
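A standard textbook illustration of the technique, not part of the Gierer-Meinhardt analysis and chosen purely to fix ideas, is the boundary value problem $\epsilon y''+y'+y=0$ on $0<x<1$ with $y(0)=0$, $y(1)=1$:
\[
\text{outer }(\epsilon=0):\; y'+y=0 \;\Rightarrow\; y_{\mathrm{out}}=e^{1-x};
\qquad
\text{inner }(X=x/\epsilon):\; Y_{XX}+Y_X=0 \;\Rightarrow\; Y=C\left(1-e^{-X}\right).
\]
Matching $\lim_{X\to\infty}Y=\lim_{x\to 0^{+}}y_{\mathrm{out}}$ fixes $C=e$, and the composite approximation $y\approx e^{1-x}-e^{1-x/\epsilon}$ is valid across the whole interval. The construction below for $(a,h)$ follows the same logic, with the spike locations $x_i$ playing the role of the thin layers and $y_i$ the stretched variable.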
Upon examination of equation (3.4a), one notices the parameter $\epsilon \to 0$ multiplying the highest derivative $a_{xx}$, as well as the reaction term $\epsilon^{-\gamma}a^{m}/h^{s}$ in (3.4b). This scenario presents us with what is commonly referred to as a "singular perturbation problem". The application of matched asymptotic expansions is still possible, and has been carried out successfully when the time derivatives in system (3.2) were of integer order [5]. Hereinafter we follow [6] in order to construct such a solution in the fractional case.

Perturbation theory

This section provides background on perturbation theory and is based on [9].

Perturbation methods: perturbation methods rely on there being a dimensionless parameter ϵ in the problem that is small: ∥ϵ∥ ≪ 1.

Regular perturbation expansions: the Taylor formula is one of the best known series expansions and is surely familiar: for an analytic function f(x), we can expand close to a point x = a as
\[
f(a+\epsilon)=f(a)+\epsilon f'(a)+\tfrac{1}{2}\epsilon^{2}f''(a)+\cdots.
\]
However, for general functions f(x) there are many ways this expansion can fail, including lack of convergence of the series, or simply an inability of the series to capture the behaviour of the function in a reasonable number of terms, after which the series is truncated; but the paradigm of the expansion, in which a small change to x entails a small change to f(x), is a powerful one, and the basis of singular perturbation expansions. The basic principle of perturbation expansions is as follows. Given a problem, algebraic or differential, for a quantity f with independent variables (t, x_1, x_2, ...) and a parameter ϵ, follow these steps.

(a) Set ϵ = 0 and solve the resulting system (solution $f_0$ for definiteness).

(b) Perturb the system by allowing ϵ to be nonzero (but small in some sense).

(c) Formulate the solution to the new, perturbed system as a series
\[
f=f_0+\epsilon f_1+\epsilon^{2}f_2+\cdots.
\]

(d) Expand the governing equations as a series in ϵ, collecting terms with equal powers of ϵ; solve them in turn as far as the solution is required.

Depending on where ϵ appears in the problem, it might be considered regular or singular. An example of a singular problem is one where ϵ multiplies the highest derivative of a differential equation or appears in the denominator of some expressions.

Dynamics of quasi-equilibrium patterns

As mentioned earlier, a solution is constructed in the limit ϵ → 0 of equation (3.4a). The activator's concentration becomes negligible across most of the domain, with notable increases occurring at specific points. However, upon closer examination, we find that these seemingly sudden jumps are, in fact, gradual. By scaling both time, from $t$ to $\sigma=\epsilon^{\alpha}t$, and space, from $x$ to $y_i=\dfrac{x-x_i(\sigma)}{\epsilon^{\gamma}}$, we observe that the concentration rises gradually until reaching its peak at $x=x_i$. This phenomenon can be likened to someone traveling from Vancouver to Kamloops. Initially, they use a map of the entire province to navigate highways and cities. But upon arriving in Kamloops, they switch to a more detailed map specific to the town's neighborhoods and streets. The BC map corresponds to the original space and time scale (x, t), while the Kamloops map represents the new one (y_i, σ).
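Before the expansions are substituted into the model, steps (a)-(d) above can be illustrated on a minimal algebraic example (chosen for illustration only; it does not appear in the original analysis): the root of $x^{2}+\epsilon x-1=0$ near $x=1$. Writing $x=x_0+\epsilon x_1+\epsilon^{2}x_2+\cdots$ and collecting powers of ϵ gives
\[
O(1):\;x_0^{2}-1=0\Rightarrow x_0=1;
\qquad
O(\epsilon):\;2x_0x_1+x_0=0\Rightarrow x_1=-\tfrac12;
\qquad
O(\epsilon^{2}):\;2x_0x_2+x_1^{2}+x_1=0\Rightarrow x_2=\tfrac18,
\]
so $x\approx 1-\tfrac{\epsilon}{2}+\tfrac{\epsilon^{2}}{8}$, which matches the expansion of the exact root $x=-\tfrac{\epsilon}{2}+\sqrt{1+\tfrac{\epsilon^{2}}{4}}$. The same order-by-order bookkeeping, in powers of $\epsilon^{\gamma}$, is what is carried out next for the activator and inhibitor concentrations.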
REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 52 Concentration of the activator and the inhibitor in the inner region Let us define by A and H to be the concentrations of the activator and inhibitor in the inner region. Applying singular perturbation expansions on A and H , we obtain for each yi (0) (1) (2) (0) (1) (2) A(yi , σ) = a(xi + ϵγ yi , ϵ−α σ) = Ai (yi , σ) + ϵγ Ai (yi , σ) + ϵ2γ Ai (yi , σ) + · · · (3.5a) H(yi , σ) = h(xi + ϵγ yi , ϵ−α σ) = Hi (yi , σ) + ϵγ Hi (yi , σ) + ϵ2γ Hi (yi , σ) + · · · (3.5b) Substituting (3.5a) in (3.4a) we obtain       (0) (1) (0) (1) (0) (1) ϵαγ ∂σγ Ai + ϵγ Ai + · · · = ϵ2γ ∂x2 Ai + ϵγ Ai + · · · − Ai + ϵγ Ai + · · ·  p (0) (1) Ai + ϵγ Ai + · · · q . + (0) (1) Hi + ϵγ Hi + · · · (3.6) We have ( !) ∂x2 Ai = ∂x2 Ai (yi ) = ∂x ϵ−γ ∂yi Ai (yi ) = ϵ−2γ ∂y2i Ai (yi ). Applying that result to equation (3.6) we obtain ! ϵαγ ∂σγ (0) (1) Ai + ϵγ Ai + · · · = !  ! (0) (1) ∂y2i Ai + ϵγ Ai + · · · (0) (1) Ai + ϵγ Ai + · · · − + (0) (1) Ai + ϵγ Ai + · · · (0) Hi (1) + ϵγ Hi p +··· q . Subsequently, (1)   (0) (1) Ai + ϵγ Ai + · · · (0) Hi (1) + ϵγ Hi p +··· (0)p q = Ai (0)q Hi (0)p = Ai (0)q Hi Ai 1 + ϵγ (0) +··· Ai !p !q (1) H 1 + ϵγ i(0) + · · · Hi !p !−q (1) (1) γ Ai γ Hi 1 + ϵ (0) + · · · 1+ϵ +··· . (0) Ai Hi REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 53 Additionally, (1) A 1 + ϵγ i(0) + · · · Ai !p p = (1 + ξ1 ) ∼ (1 + pξ1 ) =   A(1) i +··· . with ξ1 = ϵγ (0) Ai Similarly, we have !−q (1) γ Hi 1+ϵ +··· = (1 + ξ2 )−q ∼ (1 − qξ2 ) = (0) Hi with ξ2 =   (0)  (0) 1−q (1)  Ai ϵγ (0) +··· Ai  ! , (1)  γ Hi ϵ + · · · (0) Hi ! , (1)  Hi γ ϵ + · · · . Thus, (0) Hi (1) Ai + ϵγ Ai + · · · Hi 1+p  (1) + ϵγ Hi p +··· (0)p q =  (0)q Hi (0)p = (1)  A 1 + p ϵγ i(0) + · · · Ai Ai ! (1)  H 1 − q ϵγ i(0) + · · · Hi  ! (1) (1) (1) (1) H A A H 1 − qϵγ i(0) + · · · + pϵγ i(0) − pqϵ2γ i(0) i(0) + · · · Hi Ai Ai Hi Ai (0)q Hi ! . Upon truncating the equation at order O(ϵ2γ ) we obtain ! ϵαγ ∂σγ (0) (1) Ai + ϵγ Ai ! ∼ ∂y2i (0) (1) Ai + ϵγ Ai (0)p + Ai (0)q Hi ! − (0) (1) Ai + ϵγ Ai ! (1) (1) H A 1 − qϵγ i(0) + pϵγ i(0) . Hi Ai (3.7) Following the same process, and upon substituting (3.5b) in (3.4b) we obtain       (0) (1) (0) (1) (0) (1) − Hi + ϵγ Hi ∼ ϵ−2γ D∂y2i Hi + ϵγ Hi τ ϵαγ ∂σγ Hi + ϵγ Hi ( !) (0)m (1) (1) Ai Hi −γ Ai γ +ϵ 1 + ϵ m (0) − s (0)s . (0)s Hi Ai Hi (3.8) Grouping terms of same magnitude in an asymptotic expansion The method of truncation we recently employed in equations (3.7) and (3.8) is a technique used in mathematics to simplify complex functions or equations by focusing on their REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 54 dominant behavior as variables approach certain limits. It involves breaking down a complex mathematical expression into its constituent terms and organizing them based on their relative orders of magnitude, as a variable approaches a certain limit, often denoted as a small parameter ϵ. The goal is to identify which terms dominate as ϵ approaches the limit and which terms become negligible. In our particular scenario, terms in equation (3.7) are grouped by order O(1) and O(ϵγ ) representing respectively the equilibrium state and the drift. Similarly, terms in equation (3.8) are grouped by order O(ϵ−2γ ) and O(ϵ−γ ). This allows for an accurate approximation of a and h and captures the essential characteristics of the original equations while shedding unnecessary complexity. 
As we mentioned previously, the main objective of using perturbation expansions is to group terms of the same order into equations that are comparatively simpler and which satisfiy specific conditions. Since equations (3.7) and (3.8) depend on the α and τ parameters, a judicious choice of their magnitudes would greatly facilitate the resolution of these equations. On one hand, let us suppose α < γ + 1, for example α = γ + 1 − k with k a real number. We obtain 2 (0) 2 (1) ϵγ −kγ+γ ∂σγ Ai + ϵγ −kγ+2γ ∂σγ Ai (0) (1) (0) (1) ∼ ∂y2i (Ai + ϵγ Ai ) − (Ai + ϵγ Ai ) ! (0)p (1) (1) Ai H A + (0)q 1 − qϵγ i(0) + pϵγ i(0) . Hi Hi Ai 2 From equation (3.33) in section(3.1.3), ∂σγ Ai ∼ O(ϵ−γ ). The previous expression is composed of terms of order O(ϵ−kγ+γ ), O(ϵ−kγ+2γ ), O(1), O(ϵγ ), and O(ϵ2γ ). By collecting terms of order O(ϵ−kγ+γ ) we obtain (0) ∂σγ Ai (σ) = 0, (0) which is contradictory because Ai is not constant. On the other hand, for α > γ + 1, such as α = γ + 1 + k we obtain 2 (0) 2 (1) ϵγ +kγ+γ ∂σγ Ai + ϵγ +kγ+2γ ∂σγ Ai (0) (1) (0) (1) ∼ ∂y2i (Ai + ϵγ Ai ) − (Ai + ϵγ Ai ) ! (0)p (1) (1) Ai H A + (0)q 1 − qϵγ i(0) + pϵγ i(0) . Hi Hi Ai By collecting terms of order O(1) and O(ϵγ ), we respectively obtain (0)p Ai (0) 2 (0) ∂yi Ai − Ai + (0)q = 0. Hi (0)p Ai (1) 2 (1) ∂yi Ai − Ai + (0)q Hi (1) p Ai (0)p Ai (3.9) (1) −q Hi (0)q Hi ! = 0. (3.10) REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 55 The system composed of equations (3.9) and (3.10) does not make sense because the former equation represents the stationary state condition, while the latter one of next order is intended to be time-dependent, illustrating the change in activator concentration over time, often referred as ”drift”. Since neither α > γ + 1 nor α < γ + 1 work, α needs to be γ + 1. Solving order O(1) problem: For α = γ + 1, and upon collecting terms of order O(1)and O(ϵ−2γ ) in equations (3.7) and (3.8) respectively, we obtain (0)p Ai (0) 2 (0) ∂yi Ai (yi ) − Ai + (0)q = 0, Hi (0) ∂y2i Hi (3.11a) −∞ < y < ∞. = 0, (3.11b) From (3.11b), we have (0) (0) ∂y2i Hi (yi , σ) = 0 ⇒ Hi (yi , σ) = c1 (σ)yi + c2 (σ), (0) with c1 (σ), and c2 (σ), functions to be determined. At infinity, Hi is finite which imposes (0) c1 to be zero and Hi (yi , σ) to be yi independent. H̄i (σ) will then be used in place of (0) Hi (yi , σ) throughout the rest of the document to express the independence from yi . Finally, in order to simplify the denominator of the last term in equation (3.11a), we (0) scale Ai with a power of H̄i . We then set (0) Ai (σ) = H̄iB (σ)u(yi ), with B and u to be determined. (0) By substituting Ai into (3.11a), we obtain H̄iB ∂y2i u − H̄iB u + H̄iB p p u = 0, H̄iq   H̄iB ∂y2i u − u + H̄iB p−q−B up = 0. Subsequently, H̄iB = 0 or ∂y2i u − u + H̄iB p−q−B up = 0. REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION Since H̄iB ̸= 0, we get 56 ∂y2i u − u + H̄iB p−q−B up = 0. In order to get rid of the term H̄ B p−q−B and remain with a second order nonlinear differential equation, we set H̄iB p−q−B = 1 ⇒ B p − q − B = 0 ⇒ B = q . p−1 We finally obtain the equation d2 u − u + up = 0. dyi2 (3.12) Homoclinic Solution: In mathematics and dynamics, a homoclinic solution is a curve that connects different equilibrium states within a system. This path, often represented as u, starts at one equilibrium state (in our case 0) and returns at the same equilibrium state. It is essentially a self-referential route that shows how a system settles at the same equilibrium state. 
To contrast, a ’heteroclinic solution’ would connect two different equilibrium states in the system. The homoclinic solution is given by  u(y) =  1 p+1 2 (p − 1)y p − 1 sech . 2 2 It is the unique solution of the following boundary value problem u′′ − u + up = 0, u′ (0) = 0, u(0) > 0, −∞ < y < ∞, (3.13a) and (3.13b) lim u = 0. |y|→∞ Solving order O(ϵγ ) problem: Upon collecting terms of order O(ϵγ ) and O(ϵ−γ ) in equations (3.7) and (3.8), we have ! (0)p (1) (1) Ai Ai Hi (1) αγ−γ γ (0) 2 (1) ϵ ∂ σ A i = ∂ yi A i − A i + p (0) − q , (3.14a) H̄iq H̄i A i (0)m Ai (1) 2 D∂yi Hi (yi (σ)) = − . H̄is (0) Substituting Ai = H̄iB (σ)u(yi ) in both equations (3.14a) and (3.14b) leads to (3.14b) REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 57 (0)p (1) (1) (0)p Ai Ai Hi Ai 2 γ (1) (1) γ B 2 − ϵ ∂σ (H̄i u) = ∂yi Ai − Ai + p q , H̄iq A(0) H̄iq H̄i i (1) D∂y2i Hi = −H B m−s um . Subsequently, (1) Hi 2 γ B p−q p (1) (1) (1) B(p−1)−q p−1 γ B 2 u , u −q Hi ϵ ∂σ (H̄i u) = ∂yi Ai − Ai + pAi H̄i Hi (1) D∂y2i Hi = −HiB m−s um .  dxi d2 dxi γ γ Dyi u and L0 = 2 − 1 + pup−1 a dσ dσ dyi linear operator in lemma (2.2) from [6], we finally obtain !   (1) Hi dxi dxi γ γ B (1) p L0 Ai = q u − sign Dyi u Hi , (3.15a) dσ dσ Hi 2 Since B p − B −q = 0, ∂σγ u ∼ −ϵ−γ sign  (1) D∂y2i Hi = −H̄iB m−s um . (3.15b) Note: Dtγ u(t) = sign  dxi dσ  1 Γ(−γ)      γ+1 Z ∞ dxi 1 u(t) − u t + sign y dy. dσ y 0 The fractional operator Dyγi can be regarded as the left or right propagating fractional dxi derivative according to whether is positive or negative, and will be study in more dσ details in section 4.1. Equation (3.15a) is of the form f (a) = b with f being a linear operator and a and b common functions. In order to find a in the previous example, we have two options. In the case where f is invertible, we have a = f −1 (b), which is not possible in this case du ̸= ∅. Otherwise, the Fredholm alternative stipulates that a can be because ker L0 = dyi found if ker f is orthogonal to b with a and b integrable functions. This condition implies that < ker f, b >= 0 which leads to ! * +   (1) γ Hi dx du dx B i i Dyγi u Hi , q up − sign = 0. dyi dσ dσ Hi We then obtain the solvability condition REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION Z ∞ du −∞ dyi Subsequently, Z ∞ (1)  dxi dσ  ! dxi γ γ B Dyi u Hi dyi = 0. dσ   dxi γ dσ H q i up − sign Hi (1) du Hi q up dyi − sign Hi −∞ dyi dxi dσ 58 Z ∞ du γ Dyi u dyi = 0. −∞ dyi (3.16) By integrating by parts the first term of equation (3.16), we obtain   Z ∞ ∞ q (1) p+1 (1) p+1 H u ∂yi Hi u dyi − − Hi (p + 1)  i  −∞ −∞Z dxi dxi γ ∞ du γ sign Dyi u dyi = 0. dσ dσ −∞ dyi Subsequently, since u decays exponentially at the boundaries we obtain   Z ∞ Z dxi dxi γ ∞ du γ q (1) p+1 ∂yi Hi u dyi − sign Dyi u dyi = 0. − dσ dσ Hi (p + 1) −∞ −∞ dyi By performing a second integration by parts on the first term, we have   Z yi  Z ∞ Z yi ∞ q (1) (1) up+1 (t)dt dyi − − up+1 (t)dt − ∂y2i Hi ∂yi Hi Hi (p + 1) 0 0 −∞ −∞   Z dxi dxi γ ∞ du γ sign Dyi u dyi = 0. dσ dσ −∞ dyi Z yi p+1 up+1 (t)dt is odd. Also, from Since u is even, it implies that u is also even and 0 Z ∞ (1) (1) p+1 2 2 (3.15b) we have ∂yi Hi even, thus ∂yi Hi u (t) dt is odd and finally 0 Z ∞  Z ∞ (1) ∂y2i Hi up+1 (t)dt dyi = 0. We then obtain −∞ 0 Z ∞  Z −∞ q (1) (1) p+1 p+1 − u dyi lim ∂yi Hi − u dyi lim ∂yi Hi yi →∞ yi →−∞ Hi (p + 1) 0 0   γZ ∞ dxi dxi du γ Dyi u dyi = 0. 
− sign dσ dσ dy i −∞ Subsequently, Z ∞  Z ∞ q (1) (1) p+1 p+1 − u dyi lim ∂yi Hi + u dyi lim ∂yi Hi − yi →∞ yi →−∞ Hi (p + 1) 0 0   Z dxi γ ∞ du γ dxi sign Dyi u dyi = 0. dσ dσ −∞ dyi Then, REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION − 59 Z ∞   (1) (1) − up+1 dyi lim ∂yi Hi + lim ∂yi Hi yi →∞ yi →−∞ −∞   Z dxi dxi γ ∞ du γ sign Dyi u dyi = 0, dσ dσ −∞ dyi q 2Hi (p + 1) and finally   Z ∞ q (1) (1) p+1 u dyi lim ∂yi Hi + lim ∂yi Hi yi →∞ yi →−∞ 2Hi (p + 1) −∞ .   γZ ∞ dxi du γ dxi = − sign Dyi u dyi dσ dσ −∞ dyi (3.17) Concentration of the activator and the inhibitor in the outer region Let a and h denote respectively the concentrations of the activator and the inhibitor in the outer region.The concentration of the activator is null in the outer region except at the limits. Therefore, our focus is on studying the concentration of the inhibitor in the outer region and the limits. Similar to the inner region, applying the singular perturbation expansions on h leads to h(x, t) = h(0) (x, σ) + ϵγ h(1) (x, σ) + O(ϵ2γ ). (3.18) x − xi As previously mentioned, we have yi = with i = 1, · · ··, n. This implies that the ϵγ domain contains n positions xi where the concentration of the activator is not null. If am (x, t) we returned to equation (3.4b) of the model, the nonlinear function or term ϵ−γ s h (x, t) has the following characteristics: am (x, t) • ϵ−γ s is obtained by collecting and summing its values around each position h (x, t) xi . am (x, t) • ϵ−γ s vanishes on the entire outer domain as a also vanishes there. h (x, t) • Since a vanishes everywhere except around x = xi , and h is independent of x in the same region, we have m ϵ−γ (0) am (x, t) −γ A ∼ ϵ = ϵ−γ H̄iB m−s um , hs (x, t) H (0)s then Z x+ i x− i ϵ m (x, t) Z ∞ (0)m Ai dx ∼ dyi = H̄iB m−s s s h (x, t) H̄ −∞ i −γ a Z ∞ −∞ um dyi = constant. REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION Due to these localised behaviors, ϵ−γ 60 am (x, t) can be expressed as a linear combination hs (x, t) of the weighted Dirac δ functions: n ϵ−γ X am (x, t) ∼ bm H̄iB m−s δ(x − xi ), s h (x, t) (3.19) i=1 with Z ∞ bm = −∞ um dyi = 2 p−1  1 m  m Γ( )Γ( ) p+1 p−1 2 p−1 , 1 m 2 Γ( + ) 2 p−1 as proved in section (3.1.3) and δ from [8, Page 64] defined as ( Z ∞ 0 if t ̸= 0 δ(t) = and δ(t)dt = 1. ∞ if t = 0 −∞ For t = x − xi it can be rewritten as ( 0 if x ̸= xi δ(x − xi ) = ∞ if x = xi Z x+ i , and x− i δ(x − xi )dx = 1. Substituting (3.19) in (3.4b) leads to (0) τ ϵαγ ∂σγ h(0) (σ) ∼ Dhxx − h(0) + bm n X H̄iB m−s δ(x − xi ). i=1 Upon collecting terms of order O(1), we obtain (0) Dh(0) = −bm xx − h n X H̄iB m−s δ(x − xi ). (3.20) i=1 The solution of (3.20) is expressed as h(0) (x, t) = bm n X H̄iB m−s G(x; xi ), i=1 where the Green’s function G(x; xi ) satisfies the equation DGxx − G = −δ(x − xi ), Gx (±1; xi ) = 0. The perturbation expansions of h is expressed as −1 < x < 1, (3.21a) (3.21b) REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 61 h(x, σ) ∼ h(0) (x, σ) + ϵγ h(1) (x, σ) + O(ϵ2γ ). At infinity, we have h(0) = H̄i (1) h(1) = Hi . and Upon matching the concentrations of the inhibitor in the inner and outer region, we obtain   (1) lim h = lim H̄i + ϵγ Hi , yi →±∞ yi →±∞ and then (1) dHi lim yi →±∞ dyi = lim ϵ−γ yi →±∞  ∂h(0) ∂ h(xi + ϵγ yi , σ) − H̄i = lim . ∂ yi ∂x x→x± i We then have (1) lim ∂y H = y →−∞ i i lim x→x− i i (0) hx = lim bm x→x− i n X H̄iB m−s Gx (x; xi ). i=1 Subsequently, (1) lim ∂yi Hi yi →−∞ = bm n X H̄iB m−s Gx (x; xj ) + bm H̄ B m−s Gx (x− i ; xi ). 
j=1 j̸=i Similarly, (1) lim ∂yi Hi yi →∞ = bm n X H̄iB m−s Gx (x; xj ) + bm H̄ B m−s Gx (x+ i ; xi ). j=1 j̸=i Therefore, we have (1) (1) lim ∂y H + lim ∂yi Hi = 2bm y →∞ y →−∞ i i i i n X H̄iB m−s Gx (x; xj ) j=1 j̸=i  B m−s + bm H̄ B m−s Gx (x− Gx (x+ i ; xi ) + H̄ i ; xi ) . Then, (1) (1) lim ∂y H + lim ∂yi Hi = 2bm y →∞ y →−∞ i i i with i n X j=1 j̸=i ! H̄iB m−s Gx (x; xj ) + H̄ B m−s ⟨Gx ⟩i , (3.22) REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION ⟨Gx ⟩i = 62  1 − − + Gx (xi ; xi ) + G+ (x ; x ) , i x i 2 where G+ and G− are defined as G− (x; xi ) = w0 cosh (w0 xi − w0 ) cosh (w0 x + w0 ), sinh (2w0 ) −1 < x < xi , w0 cosh (w0 xi + w0 ) cosh (w0 x − w0 ), sinh (2w0 ) xi < x < 1, G+ (x; xi ) = 1 from section 3.1.3 with w0 = D 2 . − Upon substituting (3.22) in the solvability condition (3.17), we obtain   ! Z ∞ n X  q B m−s p+1 B m−s u dyi 2bm  H̄i Gx (x; xj ) + H̄ ⟨Gx ⟩i    2(p + 1)H̄i −∞ j=1 j̸=i dxi dσ =− γ  sign dxi dσ Z ∞ du γ Dyi u dyi . dy i −∞ Subsequently,  dxi γ sign − dσ  dxi dσ   Z ∞ up+1 dyi n X  qbm −∞  = H̄ B m−s Gx (x; xj ) + H̄ B m−s ⟨Gx ⟩i  .  Z ∞ du γ (p + 1)H̄i  j=1 i Dyi u dyi j̸=i −∞ dyi We finally obtain the differential algerbraic system  − dxi γ sign dσ  dxi dσ  =  n X  qbm  H̄iB m−s Gx (x; xj ) + H̄ B m−s ⟨Gx ⟩i    f (p; y), (p + 1)H̄i j=1 j̸=i (3.23a) H̄i (σ) = h (0) (x, t) = bm n X H̄iB m−s G(x; xi ), i=1 Z ∞ f (p; y) = p+1 u −∞ dyi  . Z ∞ Z ∞ bm = um dyi , (3.23b) −∞  du γ Dyi udyi . −∞ dyi (3.23c) REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 63 Note: Section 3.1.3 provides proofs for certain results utilized in Chapter 3. If these have been previously reviewed, the reader should then proceed to Chapter 4. 3.1.3 Auxiliary Proofs The following properties, along with their accompanying proofs, were frequently utilized to derive important results in this chapter: 2 A) We aim to show that ∂σγ A ∼ ϵ−γ , with A being the concentration of the activator in the inner region. Starting with the definition in equation (3.3), we have ! Z σ   dζ 1 ∂σγ A(yi (σ)) = − A(yi (σ)) − A yi (σ − ζ) . (3.24) Γ(−γ) 0 ζ γ+1 Next, let us introduce a new variable ξ defined as:   ξ = ϵ−γ xi (σ − ζ) − xi (σ) . (3.25) To find ζ in terms of ξ, when ϵ ≪ 1, we expand xi as (xi ∈ C ∞ ): dxi 1 d2 xi 2 ζ+ ζ −+···. dσ 2 dσ 2 Substituting equation (3.26) into (3.25), we obtain xi (σ − ζ) − xi (σ) = −   dxi 1 d2 xi 2 ξ = ϵ−γ − ζ+ ζ − + · ·· . dσ 2 dσ 2 (3.26) (3.27) From Equation (3.27), we deduce  ζ=− dxi dσ −1   1 d2 xi 2 γ ϵ ξ− ζ − + · ·· . 2 dσ 2 (3.28) By a recursive substitution of ζ in (3.28), one obtains  ζ=−  −1  dxi 1 d2 xi ϵγ ξ −  dσ 2 dσ 2  −  −1  !2  2 dxi 1 d xi 2 ϵγ ξ − ζ +···−··· .  dσ 2 dσ 2 (3.29) Subsequently, we find REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION γ  ζ ∼ −ϵ dxi dσ −1 ξ + O(ϵ2γ ). 64 (3.30) Additionally, we have x − xi (σ) + xi (σ) − xi (σ − ξ) x − xi (σ − ξ) = = yi − ξ. (3.31) γ ϵ ϵγ Upon substituting equations (3.30) and (3.29) into (3.24) and the value of ζ in equation (3.28), we obtain )  −1 Z −∞sign( dxi ) (    dx 1 γ+1 −γ 2 ϵ dx i i dσ ∂σγ A(yi (σ)) = − − A(yi (σ))−A yi (σ−ξ) − dξ, Γ(−γ) dσ dσ ξ 0 (3.32) with the integrals boundaries obtained as follows.     dxi −1 −dxi −1 γ γ We have ζ ∼ −ϵ ξ ⇒ dζ = ϵ dξ. dσ dσ Moreover,   dxi −γ ξ∼ϵ − ζ, dσ   dxi −γ σ, ζ → 0 ⇒ ξ → 0, ζ → σ ⇒ ξ → ϵ − dσ   dxi ϵ → 0 ⇒ ξ → − sign ∞. dσ Let us rewrite ∂σγ A(yi (σ)) as 2 ∂σγ A(yi (σ)) ∼ −ϵ−γ sign  dxi dσ  dxi γ γ Dyi A(yi ), dσ (3.33) with      γ+1 dxi 1 A(yi ) − A yi + sign ξ1 dξ1 . 
dσ ξ1 0 (3.34) Dyγi can be regarded as the left or right propagating fractional derivative according to dxi whether is positive or negative. dσ We have dxi ( )  Z 2     dx 1 γ+1 ϵ−γ dxi −1 −∞ sign( dσ ) i γ ∂σ A(yi (σ)) = − − A(yi (σ))−A yi (σ−ξ) − dξ. Γ(−γ) dσ dσ ξ 0 Dyγi A(yi ) = sign  dxi dσ  1 Γ(−γ) Z ∞ REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION  dxi ξ1 . dσ dxi Firstly let us suppose < 0, then ξ = ξ1 , and dσ 65  Let us have ξ = − sign ξ→0 ξ→∞ ⇒ ⇒ dξ = dξ1 . ξ1 → 0, ξ1 → ∞. )   Z (    dx 1 γ+1 dxi −1 ∞ i − − A(yi (σ)) − A yi (σ − ξ1 ) dξ1 dσ dσ ξ1 0 )  Z ( 2    dξ dxi γ ∞ ϵ−γ 1 − A(yi (σ)) − A yi (σ − ξ1 ) =− . Γ(−γ) dσ ξ1γ+1 0 2 ϵ−γ ∂σγ A(yi (σ)) = − Γ(−γ) We have   dxi dxi dxi dxi sign − , = − = − dσ dσ dσ  dσ γ dxi dxi γ = . − dσ dσ Then, 2 ∂σγ A(yi (σ)) = − dxi γ ϵ−γ Γ(−γ) dσ Secondly, let us use suppose ) Z ∞(   dξ 1 A(yi (σ)) − A yi (σ − ξ1 ) γ+1 . ξ1 0 dxi > 0, then ξ = −ξ1 , dσ and (3.35) dξ = −dξ1 . ξ → 0 ⇒ ξ1 → 0, ξ → −∞ ⇒ ξ1 → ∞. Subsequently, ) −1 Z ∞ (    dx 1 γ+1 i A(yi (σ)) − A yi (σ + ξ1 ) (−dξ1 ) dσ ξ1 0 )  Z ( 2     1 γ+1 ϵ−γ dxi γ ∞ =− A(yi (σ)) − A yi (σ + ξ1 ) dξ1 . Γ(−γ) dσ ξ1 0 2 ϵ−γ ∂σγ A(yi (σ)) = − Γ(−γ)  dxi dσ Then, 2 ∂σγ A(yi (σ)) = − dxi γ ϵ−γ Γ(−γ) dσ ) Z ∞(    1 γ+1 A(yi (σ)) − A yi (σ + ξ1 ) dξ1 . ξ1 0 Considering (3.35) and (3.36) , we obtain the general form (3.33). (3.36) REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION du du is the only element of ker L0 . Let us first prove that ∈ ker L0 dy dy B) Let us prove that or L0 66 du = 0. dy du d2 L0 = 2 dy dy   du dy du du d − + p up−1 = dy dy dy  2  d u d p −u+u = (u′′ − u + up ) = 0. 2 dy dy du is the sole element of ker L0 . As a consequence of the dy Sturm-Liouville theory, it follows that for a specific eigenvalue problem L0 ψ = νψ, there exists a unique eigenfunction ψ associated with each eigenvalue ν. Furthermore, the number of zeros of the eigenfunction ψ is equal to the number of its corresponding eigenvalue within the entire set of eigenvalues minus one. In our specific case, we find L0 u1 = 0, which can be expressed as L0 u1 = 0 · u1 , with u1 an eigenfunction associated with the eigenvalue 0. Thus, u1 represents the unique eigenfunction associated with the eigenvalue du 0. This conclusion implies that u1 = is the only solution to the differential equation dyi du L0 = 0 and, consequently, the sole element within ker L0 . dyi Let’s now demonstrate that Z ∞ um dy, we have C) Let us find bm = −∞ Z ∞ bm = um dy = −∞ −∞ Z ∞ = −∞ For u= (p − 1) y, 2 Z ∞ ( p+1 2  sech2  (p − 1) y 2 ) m p−1 dy 2m   m  p+1 p−1 (p − 1) sech p − 1 y dy. 2 2 dy = 2 du, and sech even, p−1 2m  m Z p+1 p−1 ∞ sech p − 1 (u)du 2 0 −2m   m Z 4 p+1 p−1 ∞ = cosh p − 1 (u)du. p−1 2 0 4 bm = p−1 Let us express B from [1]. bm  in terms of the Beta function using the steps described in Appendix REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION Z 1 B(µ, ν) = 67 tµ−1 (1 − t)ν−1 dt. 0 Using the fact that B(µ, ν) = B(ν, µ) and the change of variable y= 1−t 1 1 ⇒t= ⇒ dt = − dy, t y+1 (y + 1)2 we obtain Z ∞ B(µ, ν) = 0 For y = t2 , 1 y+1 ν+1  y y+1 µ−1 t2 t2 + 1 µ−1 dy. dy = 2tdt, Z ∞ B(µ, ν) = 0 For t = sinh (y) dt = cosh (y)dy, Z ∞ B(µ, ν) = 0 1 2 t +1 ν+1  2tdt. we obtain 1 2 sinh (y) + 1 Z ∞ B(µ, ν) = 2 ν+1  sinh2 (y) sinh2 (y) + 1 µ−1 2 sinh (y) cosh (y)dy. cosh−2(µ+ν)+1 (y) sinh2µ−1 (y)dy. 
0 1 For µ = , 2 bm = and ν= 2 p−1  m , p−1 we have   m   m  2 1 m p+1 p−1 p+1 p−1 B (µ, ν) = B , , 2 p−1 2 2 p−1 with the Beta function expressed in terms of the Γ function from definition 8.384 in [4] and defined as B (µ, ν) = Γ(µ)Γ(ν) . Γ(µ + ν) Unlike its fellow, the Γ function is available in the C language, and this expression is designed to calculate precise values for the Beta function. REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 68 D E D E D) Let us prove that L0 is self-adjoint. We need to show L0 (u), v = u, L0 (v) , for all integrable functions u and v. Knowing that both u and ∂y u have exponential decay at infinity, we have Z ∞ Z ∞ D E Z ∞   L0 (u), v = ∂y2 u − u + pup−1 u vdy = v · ∂y2 udy + pup−1 u − u vdy −∞ −∞ −∞ Z ∞ Z ∞ h i∞  ∂y u · ∂y vdy + pup−1 u − u vdy − = ∂y u · v −∞ −∞ Z −∞ Z ∞ h i∞ ∞  pup−1 u − u vdy u · ∂y2 vdy + + = − u · ∂y v −∞ −∞ −∞ Z ∞ D E  ∂y2 v − v + pup−1 v udy = u, L0 (v) . = −∞ E) Expression of the Green’s functions. If we denote by G− and G+ the values of the Green’s function in respectively ]−1, xi [ and ]xi , 1[. We can rewrite equations (3.21a) and (3.21b) as + DG+ xx − G = −δ(x − xi ), G+ (1; xi ) = 0, −1 < x < xi , (3.37a) − DG− xx − G = −δ(x − xi ), G− (−1; xi ) = 0, 0 < x < xi , (3.37b) with G− (x; xi ) = G+ (x; xi ). Homogeneous solutions: The homogeneous equations of (3.37a) and (3.37b) are respectively + DG+ xx − G = 0, (3.38a) − DG− xx − G = 0. (3.38b) Characteristic equations: Both (3.38a) and (3.38b) have the same characteristic equation 1 r2 − w02 = 0, with w02 = . D r2 − w02 = 0 ⇒ r = ±w0 , then, the solutions of the homogeneous equations are in span{ew0 x , e−w0 x }. In particular, we have   w0 x   w0 x e + e−w0 x − e−w0 x + e G =A +B = A+ sinh (w0 x) + B + cosh (w0 x), 2 2 (3.39a) + +    w0 x  w0 x e + e−w0 x − e−w0 x − e G =A +B = A− sinh (w0 x) + B − cosh (w0 x). 2 2 (3.39b) − − REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 69 with A− , A+ , B − , B + constants to be determined. From (3.39a) and (3.39b), we have G+ (1; xi ) = 0 ⇒ B + = − tanh (w0 )A+ , (3.40) G− (−1; xi ) = 0 ⇒ B − = tanh (w0 )A− . (3.41) and Substituting (3.40) and (3.41) in (3.39a) and (3.39b) leads to G+ = A+ sech (w0 ) cosh(w0 x − w0 ). (3.42) G− = A− sech (w0 ) cosh(w0 x + w0 ). (3.43) Particular solution: Z xi +ξ Z xi +ξ DGxx − G = −δ(x − xi ) ⇒ D Gxx dx − xi −ξ h ixi +ξ ⇒ D Gx = −1. xi −ξ Z xi +ξ Gdx = − xi −ξ δ(x − xi )dx xi −ξ Then, − G+ x (x; xi ) − Gx (x; xi ) = − 1 D ⇒ − 2 G+ x (x; xi ) − Gx (x; xi ) = −w0 . (3.44) Replacing (3.42) and (3.43) in (3.44) leads to A+ sech (w0 ) sinh (w0 xi − w0 ) − A− sech (w0 ) sinh (w0 xi + w0 ) = −w0 . (3.45) Using (3.44), (3.45) and the fact that G− (x; xi ) = G+ (x; xi ) leads to cosh (w0 xi − w0 ) . cosh (w0 xi + w0 ) (3.46) A+ = w0 cosh(w0 xi + w0 ) , sech (w0 ) sinh (2w0 ) (3.47) A− = w0 cosh(w0 xi − w0 ) . sech (w0 ) sinh (2w0 ) (3.48) A− = A+ Substituting (3.46) in (3.47) gives and Finally, (3.48) and (3.47) in (3.42) and (3.43) leads to G− = w0 cosh(w0 xi − w0 ) cosh(w0 x + w0 ), sinh (2w0 ) (3.49) G+ = w0 cosh(w0 xi + w0 ) cosh(w0 x − w0 ). sinh (2w0 ) (3.50) and REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 70 F) Expression of the homoclinic solution u: 1 1 up+1 u′′ − u + up = 0 ⇒ u′′ u′ − uu′ + u′ up = 0 ⇒ u′2 − u2 − = c1 . 2 2 p+1 Since lim u = lim u′ = 0, |y|→∞ c1 = 0. This gives us |y|→∞ 1 ′2 1 2 up+1 u − u − = 0. 2 2 p+1 (3.51) Subsequently, s 2up+1 u′2 = u2 − p+1 s u′ = u2 − Let A = 2up+1 u′ ⇒s =1⇒ p+1 p+1 2u u2 − p+1 u′ Z u′ = ± u2 − ⇒ dy s , and 2up+1 . 
p+1 u′ Z s 2up+1 u2 − p+1 dy = y + c2 . B = y + c2 . 2up+1 u2 − p+1 Since u′ dy = du, we have Z A= du s u2 − Substituting  u= and Z = 2up+1 p+1 r du . p + 1r 2 p + 1 1−p u 2 u −1 p+1 2 (3.52) p + 1 1−p u with cosh2 (t) leads to 2 2  1 2 1 − p cosh 1 − p (t), p+1  1−p  1 p + 1 2  t = arccosh u 2  2  , REPRESENTATION, MATCHED ASYMPTOTIC EXPANSION AND SOLUTION 2 du = 1−p  71 p+1  1 2 1 − p sinh (t) cosh 1 − p (t)dt. p+1 Substituting the previous results in (3.53) leads to Z 2 2 A= dt = t + c2 . 1−p 1−p Equating A and B gives p−1 y + c4 . 2   1−p  1 p + 1 2  t = arccosh u 2 (0) . From (3.51), we have 2 t= For y = 0, 1 ′2 1 2 up+1 2up+1 (0) 2 u − u − = 0 ⇒ u2 (0) = ⇒ u1−p (0) = 2 2 p+1 p+1 p+1 1−p   1 p + 1 −2 ⇒ u 2 (0) = . 2 Substituting the result in the expression of t results in ⇒ t = arccosh (1) = 0 c4 = 0. Subsequently,  1−p  1 p + 1 2 p − 1  arccosh u 2 y, = 2 2  1−p  1   p+1 2 p−1 u 2 y , = cosh 2 2 1 p−1     p+1 2 p−1 u 2 = sech y , 2 2 and then ( u(y) = p+1 2  sech2  1 ) p − 1 p−1 y . 2 (3.53) Chapter 4 γ Numerical Approximation of Dt u with a Controlled Precision 4.1 Introduction, Method and Verification 4.1.1 Introduction and Background As previously stated, the primary objective of this study is to numerically solve the onedimensional Gierer-Meinhardt model with subdiffusion. By applying matched asymptotic expansion to the differential system (3.4), we have obtained an differential algebraic system (3.23). dxi Our goal in addressing this system is to compute the quantities and Hi (σ), which represent dσ the rates of change of spike positions and their respective heights. Monitoring both spike positions and heights enables us to identify regions where the activator and inhibitor significantly interact, as well as the differences in the magnitude of their concentrations. Our ability to determine these quantities within the system is primarily constrained by the computation of the fractional operator Dtγ u, defined as        γ+1 Z ∞ dxi 1 dxi 1 γ Dt u(t) = sign u(t) − u t + sign y dy, dσ Γ(−γ) 0 dσ y a task discussed in this section. The fractional operator Dtγ u from (3.34) is an inherent part of this system. It is essential to compute it with consistent precision to prevent any form of contamination, since this term is only one of many in the numerical computation of (3.23), but the only one, whose accuracy is not immediately determined by a known numerical method. Hereunder, we chose t instead of y as in [6] as the dependent variable of Dtγ u to symbolize its time dependency, as it is conceptually the case. The fractional operator Dtγ u depends on two parameters: p and γ, with p being a parameter related to the function u (main kinetic exponent for the activator equation). As shown in the next section, numerically computing Dtγ u implies the integration of a spiketype fractional power function whose shape varies significantly, as shown in Figure 4.1 of 72 INTRODUCTION, METHOD AND VERIFICATION 73 section 4.1.5. Since that function is bell-shaped with a very steep slope, computing its integral requires a significant number of subdivisions. Moreover, since the spikes are defined in an infinite domain, the number of subdivisions required increases even further. Even though computing Dtγ u requires a high number of subdivisions in all cases, their order of magnitude varies significantly depending on the shape of the spikes. For example, a normal spike with a bell width of order O(ϵ) requires a number of subdivisions of order O(1/ϵ). 
Moreover, the spike tail decays exponentially, which makes it easy to capture in a relatively small interval. For instance, a spike of width 0.1 centered at the origin on the domain (−1, 1) needs on the order of 1/0.1 = 10 subdivisions within (−0.05, 0.05) to be properly captured.

An important consideration arises: any standard method of numerical integration provides an error bound that depends on the number of subdivisions. It is logical to ask why we do not use such a formula to determine the number of subdivisions n from the desired error. Let us explore this using Simpson's method as an example, which we will utilize further. According to [3, page 203], if $f^{(4)}$ denotes the continuous fourth derivative of a function f and M is any upper bound for the values of $|f^{(4)}|$ on [a, b], the error $|E_f|$ in the Simpson's rule approximation of the integral of f from a to b satisfies the inequality
\[
|E_f|\le \frac{(b-a)^{5}M}{180\,n^{4}}.
\]
However, this approach faces two primary challenges. Firstly, it involves M, an upper bound for the fourth derivative of f; determining such a bound for a complicated function like $u^{(4)}$ is unfeasible. Secondly, an issue also arises with the term b − a: operating on a theoretically infinite domain, where a and b may tend to infinity, the factor b − a becomes unbounded. In summary, to compute $D_t^{\gamma}u$ efficiently, it is essential to find a way to determine the number of subdivisions required for a controlled precision, depending on the shape of the spikes and the domain on which they are defined. Answering this question is the subject of our study.

4.1.2 Procedure

1. We split $D_t^{\gamma}u$ using two integrations by parts to simplify it and replace the original improper integral by proper, computable terms.

2. The result of these integrations is a sum of constants and a yet-to-be-evaluated integral. We evaluate the integral using the composite Simpson method. We selected the C programming language for its speed and efficiency, a significant advantage when dealing with memory- and compute-intensive work.

3. We gradually increment the number of subdivisions used by our program to compute these integrals until a precision of $10^{-10}$ is reached. These computations are performed for a predefined set of values of the parameters p and γ.

4. We attempted to plot the correspondence between the number of subdivisions n and the value of the integral obtained, but the resulting functions were neither smooth nor continuous. We then shifted our focus toward the residual function.

5. The shapes of the residual functions are similar to that of the hyperbolic arctangent function, which is then used as the fitting function.

6. Finally, we use the inverse functions of the fitting curves to approximate the number of subdivisions n given a residual value.

In summary, by implementing the above procedure, we expect to build a program capable of computing $D_t^{\gamma}u$ for all values of p and γ with a precision of $10^{-10}$.

4.1.3 Plan

1. Section 4.1.4 covers Step 1 of the procedure, the regularization of $D_t^{\gamma}u$.

2. Steps 2 and 3, starting from section 4.1.4 up to section 4.1.5, involve determining the number of subdivisions n needed to compute I (4.5). This calculation is for (t, p, γ) values within the ranges {0.1, 1, 5} × {1.5, 2, 2.5, . . . , 4.5} × {0.1, 0.2, 0.3, . . . , 0.9}.

3.
4.1.2 Procedure

1. We split $D_t^{\gamma}u$ using two integrations by parts to simplify it and replace the original improper integral by proper, computable terms.
2. The result of these integrations is a sum of constants and a yet-to-be-evaluated integral. We evaluate the integral using the composite Simpson method. We selected the C programming language for its speed and efficiency, a significant advantage when dealing with memory-intensive and computationally demanding work.
3. We gradually increment the number of subdivisions used by our program to compute these integrals until a precision of $10^{-10}$ is reached. These computations are performed for a predefined set of values of the parameters $p$ and $\gamma$.
4. We attempted to plot the correspondence between the number of subdivisions $n$ and the value of the integral obtained, but the resulting functions were neither smooth nor continuous. We then shifted our focus toward the residual function.
5. The shapes of the residual functions are similar to that of the hyperbolic arctangent function, which is then used as the fitting function.
6. Finally, we use the inverse functions of the fitting curves to approximate the number of subdivisions $n$ given a residual value.

In summary, by implementing the above procedure, we expect to build a program capable of computing $D_t^{\gamma}u$ for all values of $p$ and $\gamma$ with a precision of $10^{-10}$.

4.1.3 Plan

1. Section 4.1.4 covers Step 1 of the procedure, the regularization of $D_t^{\gamma}u$.
2. Steps 2 and 3, from section 4.1.4 up to section 4.1.5, involve determining the number of subdivisions $n$ needed to compute $I$ (4.5). This calculation is performed for $(t,p,\gamma)$ values within the ranges $\{0.1,1,5\}\times\{1.5,2,2.5,\ldots,4.5\}\times\{0.1,0.2,0.3,\ldots,0.9\}$.
3. Steps 4, 5, and 6, spanning from section 4.1.5 to 4.1.6, describe the fitting process and can be visualized as follows. We fit the correspondence $n_{(t,p,\gamma)}\to R_{(t,p,\gamma)}(n)$ using a variant of the hyperbolic arctangent function, denoted $f$, so that $n_{(t,p,\gamma)}\to f_{(t,p,\gamma)}(n)\sim R_{(t,p,\gamma)}$. To find the number of subdivisions corresponding to a specific residual (the inverse path), we determine the inverse $f^{-1}$ of $f$ so that $f^{-1}_{(t,p,\gamma)}(R)\to n_{(t,p,\gamma)}$, where $n_{(t,p,\gamma)}$ and $R_{(t,p,\gamma)}$ represent the number of subdivisions and the corresponding residual values for specific $t$, $p$, and $\gamma$ values.
4. The process is then generalized to $(t,p,\gamma)\in[0,5]\times[1.5,4.5]\times[0.1,0.9]$ from section 4.1.7 to 4.1.8 using a series of linear and bi-linear interpolations.
5. Section 4.1.9 covers the verification process, where the accuracy of our results is assessed.

4.1.4 Regularization of $D_t^{\gamma}u$

This section serves as the first step of the plan outlined in section 4.1.3. It aims at removing the singularity inherent in the expression of the operator $D_t^{\gamma}u$. From equation (3.34), we have
\[
D_t^{\gamma}u(t)=\operatorname{sign}\!\left(\frac{dx_i}{d\sigma}\right)\frac{1}{\Gamma(-\gamma)}\int_{0}^{\infty}\left[u(t)-u\!\left(t+\operatorname{sign}\!\left(\frac{dx_i}{d\sigma}\right)y\right)\right]\frac{dy}{y^{\gamma+1}}, \qquad (4.1)
\]
with
\[
D_{(-t)}^{\gamma}u(-t)\Big|_{\frac{dx_i}{d\sigma}>0}=-\,D_t^{\gamma}u(t)\Big|_{\frac{dx_i}{d\sigma}<0}.
\]
From (4.1), the expression of $D_t^{\gamma}u$ is not directly computable because of the singularity present in the integrand. Hence, we perform a double integration by parts and employ a Taylor expansion of $u$ around a certain point to get rid of the singularity. For $\frac{dx_i}{d\sigma}<0$ and $t>0$, we have
\[
D_t^{\gamma}u(t)=-\frac{1}{\Gamma(-\gamma)}\int_{0}^{t_{\infty}}\frac{u(t)-u(t-y)}{y^{\gamma+1}}\,dy,
\]
where the infinite upper limit has been truncated at a finite point $t_{\infty}$, whose choice is discussed in section 4.1.5. The first integration by parts,
\[
a(y)=u(t)-u(t-y),\qquad \frac{db}{dy}=\frac{1}{y^{\gamma+1}},
\]
leads to
\[
D_t^{\gamma}u(t)=-\frac{1}{\Gamma(-\gamma)}\left\{-\frac{u(t)-u(t-t_{\infty})}{\gamma\,t_{\infty}^{\gamma}}+\lim_{y\to0}\frac{u(t)-u(t-y)}{\gamma\,y^{\gamma}}+\int_{0}^{t_{\infty}}\frac{u'(t-y)}{\gamma\,y^{\gamma}}\,dy\right\},
\]
with $\displaystyle\lim_{y\to0}\frac{u(t)-u(t-y)}{y^{\gamma}}=0$. The proof of this is straightforward, since
\[
\frac{u(t)-u(t-y)}{y^{\gamma}}\sim\frac{u(t)-\left(u(t)+u'(t)(-y)+\frac{u''(t)y^{2}}{2!}+\cdots\right)}{y^{\gamma}}
=u'(t)\,y^{1-\gamma}-\frac{u''(t)}{2}\,y^{2-\gamma}+\cdots,
\]
with $0<\gamma<1$. We obtain
\[
D_t^{\gamma}u(t)=-\frac{1}{\Gamma(-\gamma)}\left\{-\frac{u(t)-u(t-t_{\infty})}{\gamma\,t_{\infty}^{\gamma}}+\int_{0}^{t_{\infty}}\frac{u'(t-y)}{\gamma\,y^{\gamma}}\,dy\right\}.
\]
By applying a second integration by parts to the last term in the brackets,
\[
a(y)=u'(t-y),\qquad \frac{db}{dy}=\frac{1}{y^{\gamma}},
\]
we obtain
\[
\int_{0}^{t_{\infty}}\frac{u'(t-y)}{\gamma\,y^{\gamma}}\,dy=-\frac{t_{\infty}^{1-\gamma}}{\gamma(\gamma-1)}\,u'(t-t_{\infty})-\frac{1}{\gamma(\gamma-1)}\int_{0}^{t_{\infty}}u''(t-y)\,y^{1-\gamma}\,dy.
\]
This finally leads to
\[
D_t^{\gamma}u(t)=-\frac{1}{\Gamma(-\gamma)}\left\{-\frac{t_{\infty}^{-\gamma}}{\gamma}\bigl[u(t)-u(t-t_{\infty})\bigr]-\frac{t_{\infty}^{1-\gamma}}{\gamma(\gamma-1)}\,u'(t-t_{\infty})-\frac{1}{\gamma(\gamma-1)}\int_{0}^{t_{\infty}}u''(t-y)\,y^{1-\gamma}\,dy\right\}. \qquad (4.2)
\]
We have $-\gamma\,\Gamma(-\gamma)=\Gamma(1-\gamma)$ and $\gamma(\gamma-1)\,\Gamma(-\gamma)=\Gamma(2-\gamma)$. Additionally, upon substituting $t-y$ with $x$, and then $x$ with $y$, this leads to
\[
\int_{0}^{t_{\infty}}u''(t-y)\,y^{1-\gamma}\,dy=-\int_{t}^{t-t_{\infty}}u''(y)\,(t-y)^{1-\gamma}\,dy.
\]
We finally obtain
\[
D_t^{\gamma}u(t)=\frac{t_{\infty}^{-\gamma}}{\Gamma(1-\gamma)}\bigl[u(t-t_{\infty})-u(t)\bigr]+\frac{t_{\infty}^{1-\gamma}}{\Gamma(2-\gamma)}\,u'(t-t_{\infty})-\frac{1}{\Gamma(2-\gamma)}\int_{t}^{t-t_{\infty}}u''(y)\,(t-y)^{1-\gamma}\,dy. \qquad (4.3)
\]
Similarly, for $\frac{dx_i}{d\sigma}>0$, and using
\[
D_{(-t)}^{\gamma}u(-t)\Big|_{\frac{dx_i}{d\sigma}>0}=-\,D_t^{\gamma}u(t)\Big|_{\frac{dx_i}{d\sigma}<0},
\]
we have
\[
D_{(-t)}^{\gamma}u(-t)=-\frac{t_{\infty}^{-\gamma}}{\Gamma(1-\gamma)}\bigl[u(t-t_{\infty})-u(t)\bigr]-\frac{t_{\infty}^{1-\gamma}}{\Gamma(2-\gamma)}\,u'(t-t_{\infty})+\frac{1}{\Gamma(2-\gamma)}\int_{t}^{t-t_{\infty}}u''(y)\,(t-y)^{1-\gamma}\,dy,
\]
with $0<\gamma<1$, and $u$ defined as
\[
u(t)=\left\{\frac{p+1}{2}\,\operatorname{sech}^{2}\!\left(\frac{(p-1)\,t}{2}\right)\right\}^{\frac{1}{p-1}},
\]
verifying the following differential equation:
\[
u''-u+u^{p}=0,\qquad -\infty<t<\infty, \qquad (4.4a)
\]
\[
u'(0)=0,\qquad u(0)>0,\qquad \lim_{|t|\to\infty}u=0. \qquad (4.4b)
\]
Let us define
\[
I_{1}=\frac{t_{\infty}^{-\gamma}}{\Gamma(1-\gamma)}\bigl[u(t-t_{\infty})-u(t)\bigr],\qquad
I_{2}=\frac{t_{\infty}^{1-\gamma}}{\Gamma(2-\gamma)}\,u'(t-t_{\infty}),\qquad
I_{3}=\frac{1}{\Gamma(2-\gamma)}\int_{t}^{t-t_{\infty}}u''(y)\,(t-y)^{1-\gamma}\,dy.
\]
In both cases, $D_t^{\gamma}u$ is expressed as the sum of three terms, the first two being constants and the third being an integral. The first two terms are exact values, hence our approximation will focus solely on the integral. Furthermore, it is important to note that the integrand as well as the integration bounds remain the same for both $\frac{dx_i}{d\sigma}<0$ and $\frac{dx_i}{d\sigma}>0$. Therefore, the number of subdivisions required to compute the integral with the desired precision is the same in both cases. Hence, we will concentrate solely on determining
\[
I=\int_{t}^{t-t_{\infty}}u''(y)\,(t-y)^{1-\gamma}\,dy \qquad (4.5)
\]
for $\frac{dx_i}{d\sigma}<0$.

4.1.5 Numerical Approximation of I with a Controlled Precision

In this section, along with subsequent sections up to 4.1.6, and in alignment with Step 2 of our plan, we determine the necessary number of subdivisions for the numerical computation of the integral $I$. The integrand of $I$ depends on the parameters $p$ and $\gamma$ as well as on $t$, while the integration bounds involve the truncation point $t_{\infty}$. Based on extensive numerical trials, and as demonstrated in the results section, the value $t_{\infty}=5$ has proved to be sufficient to approximate $I$ with the desired precision. In the subsequent subsections of this document, when referring to discrete values of $p$ and $\gamma$, we respectively mean $p\in\{1.5,\ldots,4.5\}$ and $\gamma\in\{0.1,\ldots,0.9\}$. Conversely, when discussing continuous values of $p$ and $\gamma$, we are referring to $p\in[1.5,4.5]$ and $\gamma\in[0.1,0.9]$. As mentioned earlier, the shape and gradient of the integrand vary significantly with these parameters, influencing the number of subdivisions required for computation. To streamline the process and avoid redundant calculations of $I$, we seek a method to predict the necessary number of subdivisions and assess precision as we compute the integral. Below are the curves of the integrand for some values of the parameters.

Figure 4.1: Plots of the integrand $u''(y)\,(t-y)^{1-\gamma}$ for $p=2$, $\gamma=0.1$: (a) $t=5$, (b) $t=0.5$. The horizontal axis represents the range of $y$-values and the vertical axis the corresponding integrand values.

Computing I using the composite Simpson's method

In addition to being a well-established and widely used numerical integration method, we chose the composite Simpson's method for its adaptability. The composite Simpson's method can be applied to both evenly and unevenly spaced intervals, making it versatile for various types of functions and integration domains. This adaptability allows for efficient integration over complex domains or functions with irregular behavior. The following algorithm from [3, page 204] is used to compute the integral $I$:
\[
\int_{a}^{b}f(x)\,dx\;\approx\;\frac{h}{3}\sum_{i=0}^{n/2-1}\bigl[f(x_{2i})+4f(x_{2i+1})+f(x_{2i+2})\bigr],
\qquad h=\frac{b-a}{n}.
\]

Implementation of the Simpson's method in C: The method computes the integral of the function named integrand using the composite Simpson's method. Here, integrand represents the actual integrand of $I$.

Parameter list:
• lower: lower bound of the integral.
• upper: upper bound of the integral.
• subInterval: number of sub-intervals.
• g: gamma parameter.
• A = pow((p+1.0)/2.0, 1.0/(p-1.0)).
• B = 2.0/(p-1.0).
• C = (p-1.0)/2.0.

Process:
• Calculate the step size.
• Sum the values of integrand at the lower and upper bounds of the integral first.
• Add the values of integrand at the points between the lower and upper bounds, weighted according to whether the discretization index is odd or even.
• Multiply the result by one third of the step size.
1 2 3 4 double simpson ( double lower , double upper , long int subInterval , double p , double g ) { 5 double stepSize = ( upper - lower ) /( subInterval ) ; 6 7 int i ; double integration = integrand ( lower , p , g , y ) + integrand ( upper , p , g, y); for ( i = 1; i <= subInterval - 1; i ++) { double k = lower + i * stepSize ; if ( i % 2 == 0) { integration = integration + 2 * integrand (k , p , g , y ) ; } else { integration = integration + 4 * integrand (k , p , g , y ) ; } } integration = integration * stepSize / 3; return integration ; 8 9 10 11 12 13 14 15 16 17 18 19 20 21 } // implementation of the integrand 22 23 24 double integrand ( double y , double p , double g , double t ) { 25 return ( u (y , p ) - pow ( u (y , p ) , p ) ) * pow ( t - y , 1 - g ) ; 26 27 28 } 29 30 // implementation of u 31 double u ( double t , double p ) { 34 double A = pow ( ( p +1.0) /2.0 , 1.0/( p -1.0) ) ; 35 double B = 2.0/( p -1.0) ; 36 double C = (p -1.0) /2.0; 32 33 INTRODUCTION, METHOD AND VERIFICATION 80 37 return ( A * pow ( sech ( C * t ) , B ) ) ; 38 39 } Listing 4.1: Implementation of the composite Simpson method with its dependencies in C Note: The number of subInterval used by the Simpson method must be even. Lines 48 to 50 from Listing 4.10 always ensure it is the case. Computing and printing the result of the Simpson method using the WriteFile method: To enhance readability, we have opted to display the return values of the integral I alongside the corresponding number of subdivisions needed for its calculation, as shown in Table 4.4. This formatting enables us to gather and analyze the data more efficiently. For each combination of t, p, and γ, the WriteFile method creates a file and prints, side by side, the return values of the Simpson method with their corresponding number of subdivisions until the desired precision is reached. Considering that we are handling very large values of n, and only the last values of n are of interest, we double the increment value each time we run the first loop until we reach a precision of 5 × 10−10 . Once that precision is achieved, new values of the increments are used until 1 × 10−10 is reached. The value of the increments used in the second loop is returned by the getIncrement function and represents 5% of the number of subdivisions required to reach the 5 × 10−10 precision in the first loop. This WriteFile function is used to print the returned value of the Simpson method for discrete values of p ∈ {1.5, . . . , 4.5} and γ ∈ {0.1, . . . , 0.9}. Parameter list: • p: discrete values of p. • g: discrete values of γ. • t: ∈ {0.1, 1, 5}. • n: max number of subdivisions allowed. Process: • Extract the integer and decimal part of p and g. • Write or create a file for a specific combination of p and g where the returned value of the Simpson method will be printed. • Run the first loop and double the increment until 1 × 10−10 < error < 5 × 10−10 . • Run the second loop with a custom increment until error1 < 10−10 . • error and error1 are expresed as Simpson(lower, upper, n) - Simpson(lower, upper, 2n). INTRODUCTION, METHOD AND VERIFICATION 81 Please note that ”Simpson(lower, upper, n)” represents the result of the Simpson method with the given parameters lower, upper, and n. 
1 void writeFile ( double p , double g , double t , long int n ) { 2 3 4 5 6 7 8 double double double double double double A = pow (( p + 1.0) / 2.0 , 1.0 / ( p - 1.0) ) ; B = 2.0 / ( p - 1.0) ; C = ( p - 1.0) / 2.0; upper = t ; tmax = 5; lower = t - tmax ; 9 10 11 12 13 int int_g = 10 * g ; double p1 = ( int ) p ; double p2 = 10 * ( p - p1 ) ; long int i = 0; 14 15 16 17 18 19 20 double g1 = ( int ) g ; double g2 = 10 * ( g - g1 ) ; long int increment = 1000; long int next ; double precision1 = 5e -10; double precision2 = 1e -10; 21 22 23 24 25 long char FILE FILE int j ; str_i [25]; * diag ; * diag1 ; 26 27 sprintf ( str_i , " p %0.0 fp %0.0 f_g %0.0 fp %0.0 f . txt " , p1 , p2 , g1 , g2 ) ; 28 29 30 31 diag = fopen ( str_i , " a " ) ; if (! diag ) { printf ( " Failed to open diagonal dominance file .\ n " ) ; 32 33 } 34 35 for ( i = 1000; i < n ; i += i ) { 36 37 double value = simpson ( lower , upper , i , g , A , B , C ) ; 38 39 double value5 = simpson ( lower , upper , 2 * i , g , A , B , C ) ; 40 41 42 double error = fabs ( value - value5 ) ; INTRODUCTION, METHOD AND VERIFICATION 82 if ( error > precision1 ) { 43 44 fprintf ( diag , " % ld %0.15 f \ n " , i , value5 ) ; } else if ( error <= precision1 && error > precision2 ) { next = i ; break ; } 45 46 47 48 49 50 } 51 52 increment = getIncrement (t , p , g ) ; // Find the increment depending on t , p and gamma 53 54 for ( j = next ; j < n ; j += increment ) { 55 56 double value1 = simpson ( lower , upper , j , g , A , B , C ) ; 57 58 double value10 = simpson ( lower , upper , 2 * j , g , A , B , C ) ; 59 60 double error1 = fabs ( value1 - value10 ) ; 61 62 if ( error1 < precision2 ) { 63 64 break ; } else { 65 66 67 fprintf ( diag , " % ld %0.15 f \ n " , j , value10 ) ; 68 69 } 70 71 } 72 73 fclose ( diag ) ; 74 75 } Listing 4.2: Implementation WriteFile method in C Number of subdivisions returned by the WriteFile method for t = 0.1 , 1 and 5: Once the desired precision is achieved, the final line printed by the WriteFile function contains the return value of the Simpson method with the correct precision, along with the corresponding number of subdivisions necessary to attain that precision. The tables below present the maximum number of subdivisions for each t, p, and γ. 
INTRODUCTION, METHOD AND VERIFICATION p/g 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.5 35290 99900 216940 609778 1693204 4718250 16113048 76333764 383072000 2.0 51740 135800 344940 865778 2362408 7300300 28265572 132864908 730144000 2.5 61610 163900 433880 1101630 3386408 9970550 38166882 182371458 1018144000 3.0 73870 199800 522820 1377778 3721010 12464400 46087930 223956960 1028288000 83 3.5 80450 217750 567290 1495704 4390214 14066550 54550882 263749554 1028288000 4.0 87030 235700 611760 1731556 5059418 15668700 60491668 301374532 1028288000 4.5 93610 253650 689880 1849482 5394020 17270850 68412716 333058724 1028288000 Table 4.1: Number of subdivisions for t = 0.1 p/g 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.5 28000 71900 175500 444800 1141700 3547200 11817400 52417200 260332000 2.0 26500 67550 166000 397600 1078730 3172400 10530500 46626150 232504000 2.5 11000 23900 60500 134800 318970 886800 2691450 12460850 44212000 3.0 22000 55700 130500 316800 826850 2335800 7956700 34400600 162934000 3.5 32500 83750 204000 515600 1393580 4109400 14626500 63653600 325868000 4.0 39500 103500 251500 633600 1716670 5233800 18487200 83600550 451094000 4.5 42500 111400 280000 700800 1905580 5983400 21061000 95826100 506750000 Table 4.2: Number of subdivisions for t = 1 p/g 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.5 17100 42200 98200 234500 596400 1684000 5458000 21119000 101674000 2.0 11600 28000 63800 146500 364800 992000 3070000 11412000 51754000 2.5 9100 21700 48800 110500 268600 716000 2171000 7847000 34392000 3.0 8000 18800 41600 94000 227900 596000 1768000 6357000 27318000 3.5 7300 17300 38200 84500 202000 526000 1566000 5551000 23650000 4.0 6900 16100 35800 78500 187200 486000 1442000 5055000 21292000 4.5 6600 15500 34000 75500 178700 456000 1349000 4714000 19720000 Table 4.3: Number of subdivisions for t = 5 Fitting the residual R instead of the integral I The fitting process begins from this paragraph and extends to the end of this section, aligning with Stage 3 of our plan. In accordance with the plan, an initial attempt was made to directly manipulate the integral values I; however, the resulting plots exhibited neither smoothness nor continuity. Consequently, the decision was made to utilize the residual R instead. Upon execution of the WriteFile method, output files are generated, containing two columns: the first column denotes the number of subdivisions, while the second column displays INTRODUCTION, METHOD AND VERIFICATION 84 the return values yielded by the Simpson method. Table 4.4 provides an illustration of a sample file produced by the WriteFile function for t = 1, p = 4, and γ = 0.8. Given our objective to determine the number of subdivisions required to achieve a specific level of precision or error, it is logical to establish a relationship between N and the error or residual (i.e., the difference between the last value printed by the WriteFile function and the preceding values). Table 4.5 represents an updated version of Table 4.4, now featuring residual values as the second column. The fitting process spans across three files: test1.m, fit_curve.m, and fit_curvef.m, which are described subsequently. 
N 75879150 76522600 77166050 77809500 78452950 79096400 79739850 80383300 81026750 81670200 82313650 82957100 83600550 I −0.590274039200124 −0.590274039199605 −0.590274039198941 −0.590274039197867 −0.590274039197240 −0.590274039196180 −0.590274039195590 −0.590274039194982 −0.590274039194301 −0.590274039193123 −0.590274039192075 −0.590274039191671 −0.590274039191116 Table 4.4: Integral values N 75879150 76522600 77166050 77809500 78452950 79096400 79739850 80383300 81026750 81670200 82313650 82957100 83600550 R 1.2251e − 11 1.0456e − 11 9.9369e − 12 9.2729e − 12 8.1990e − 12 7.5719e − 12 6.5120e − 12 5.9219e − 12 5.3140e − 12 4.6330e − 12 3.4550e − 12 2.4070e − 12 2.0030e − 12 Table 4.5: Residual values Inside the test1.m file: During the fitting process of the residual function, we opted for the logarithm of the residual instead of the actual residual function. In fact, the residual function is a power function, which implies that its values are either extremely small or large over most of its domain. Attempting to gain insights from its plot has proven to be quite challenging. Also, the fit would be highly non-uniform. For example, how do we ensure that the small values are fit as well as the large values? How do we measure what is a good fit when the disparity in magnitude is so great? The log is the answer: it turns a very strongly varying power function into quasi-straight lines. Straight lines are simpler to handle in terms of quantifying their properties. If the original function is not a single power (which is our case), the log would not be a straight line, but the magnitude disparity issue is resolved nonetheless. The other aspect of it all is that we are indeed interested in the error magnitude, not its absolute value. We will always describe the residual as a magnitude. Thus, the log treatment also makes sense conceptually, beyond being a technical convenience. In the subsequent sections of the document, when we mention the residual function, we are actually referring to its logarithm. Process: • Extract the integer and decimal parts of p and g and convert them into strings. INTRODUCTION, METHOD AND VERIFICATION 85 • Open the file printed by the WriteFile function for that specific combination of p and g. • Load this file into the vector U. • Compute the residual function. • Fit the residual function. • Plot the results. In the context of the test1.m file, the goal is to establish a functional relationship between the number of subdivisions N and the error or residual R for a given combination of p and γ. By fitting the plot of the residual against the number of subdivisions with a continuous and easily invertible function, the program can determine the number of subdivisions required to achieve a specific precision. 1 2 3 for p =1.5:0.5:4.5 for g = 0.1:0.1:0.9 4 5 % extract the integer and decimal part of p 6 7 8 p1 = fix ( p ) ; p2 =10*( p - p1 ) ; 9 % extract the integer and decimal part of g g1 = fix ( g ) ; 12 g2 =10*( g - g1 ) ; 10 11 13 % convert them into strings sp1 = num2str ( p1 ) ; 16 sp2 = num2str ( p2 ) ; 17 sg1 = num2str ( g1 ) ; 18 sg2 = num2str ( g2 ) ; 14 15 19 20 21 % open the file containing the return value of the WriteFile % method depending on a sepcific combination of p and g 22 23 24 25 file =[ ’C :/ Users / nguim / OneDrive / Documents / research_papers / Code / newdata / P_G / t5 / p ’ sp1 ’p ’ sp2 ’ _g ’ sg1 ’p ’ sg2 ’. 
txt ’ ]; 26 27 28 % load the file as a 2 dimoentional vector U = load ( file ) ; 29 30 N = U (: ,1) ; % the vector N contains the subdivisions INTRODUCTION, METHOD AND VERIFICATION 31 86 G = U (: ,2) ; % the vector G contains the actual return value the WriteFile method 32 33 34 35 36 % R is the residual vector containing the difference between the last value of G and the other ones % R = log10 ( abs ( G (1: end -1) -G ( end ) ) ) ; N = N (1: end -1) ; 37 % c is a parameter vector that will minimizes the euclidian distance between 39 % the Residual functions and their fitted curves 40 c0 =[ -1 -6]; % initial parameter of the fminunc function 41 c = fminunc ( @ ( c ) fit_curve (c ,R , N ) , c0 ) ; 38 42 43 44 45 46 47 [f , imin , imax ]= fit_curvef (c , N ) ; N1 = N ; N1 ([ imin imax ]) =[]; plot (N ,R , N1 ,f , ’ -- ’) ; hold on 48 49 50 endfor endfor Listing 4.3: Fitting the residual function. Inside the fit curvef.m file: The function fit_curvef is tasked with fitting the residual using the general form of the atanh function, c1 atanh (n1 ) + c2 , by determining optimal values for the scaling factor c1 , the slope n1 , and the vertical shift c2 . Parameters list: • c: a two-dimensional vector. • n: the number of subdivisions. Process: • Get the minimum value of the vector n. • Get the maximum value of the vector n. • Remove these extreme values from the vector n. • Determine the fitting curve using the parameter c. INTRODUCTION, METHOD AND VERIFICATION 87 1 2 function [f , imin , imax ]= fit_curvef (c , n ) 3 [ minn , imin ]= min ( n ) ; [ maxn , imax ]= max ( n ) ; 6 n ([ imin imax ]) =[]; 7 n1 =2*( n - minn ) /( maxn - minn ) -1; 8 f = c (1) * atanh ( n1 ) + c (2) ; 4 5 Listing 4.4: Implementation of the fit curvef method . Inside the fit curve.m file: The fit_curve function returns the Euclidean distance between the residuals and their fitting curves. Specifically, the parameters c1 and c2 are selected to minimize this Euclidean distance. Parameters list: • c: a two-dimensional vector returned by the fminunc function. • u: the function being fitted; in our case, it represents the residual function. • n: represents the number of subdivisions. Process: • Call the fit_curvef function and pass into it the vector c. • fit_curvef returns the fitting curve f, as well as Imax and Imin, which respectively represent the maximum and minimum values of n. • Remove Imax and Imin from n. • Determine the Euclidean distance between the fitting curve f and u. 1 2 function R = fit_curve (c ,u , n ) 3 [f , imin , imax ]= fit_curvef (c , n ) ; u ([ imin imax ]) =[]; 6 R = sqrt ( sum (( f - u ) .^2) ) ; 4 5 Listing 4.5: Implementation of the fit curve method. INTRODUCTION, METHOD AND VERIFICATION 88 Figure 4.2: specific case for t = 5, p = 4, and γ = 0.8. The x-axis represents the number of subdivisions and the y-axis the logarithm of the corresponding residuals. The solid curve represents the numerical results while the dashed one represents the fitting curve. Interpolation of the number of subdivisions N using the fitting curve Once completing the fitting with the atanh function , its inverse can now be utilised to determine the corresponding R for every value of n. From the fit_curvef.m file, we have 2 × (n − minn) n1 = − 1 and R ≡ f = c1 × atanh(n1 ) + c2 . The inverse process leads to (maxn − minn) (n1 + 1) × (maxn − minn) tanh(f − c2 ) and n = + minn. This expression is used in the n1 = c1 2 n_approx function to approximate the number of subdivisions n as subsequently explained. 
Inside the n approx function: The function n approx iterates through the data files and tanh(f − c2 ) substitutes the variables R, c1 , c2 , maxn, and minn in the expressions n1 = and c1 (n1 + 1) × (maxn − minn) n= + minn with their corresponding values given the parameters t, 2 p, and γ. It calculates and returns the number of subdivisions n given t, p, g, and the residual R. Parameters list: • t: upper bound of the integral. • p: between 1.5 and 4.5 with an increment of 0.5. • g: between 0.1 and 0.9 with an increment of 0.1. • R: the residual value. Process: INTRODUCTION, METHOD AND VERIFICATION 89 • Load the c.txt containing the different values of the c vector for a given t. • Load the Nmax_Nmin.txt file containing the maximum and minimum number of subdivisions for a given t. • Compute n using the new formula. 1 double n_approx ( double t , double p , double g ) { 2 3 4 5 6 7 8 9 10 11 12 double n1 , n , pos_p , pos_g ; FILE * fp1 ; FILE * fp2 ; FILE * fp3 ; char file_name1 [25]; char file_name2 [33]; char file_name3 [25]; float c1 = 0 , c2 = 0 , nMax = 0 , nMin = 0 , R ; char * param ; int i = 0 , location = 0; 13 14 15 pos_p = 1 + ( p - 1.5) / 0.5; pos_g = g / 0.1; 16 17 // find the location of c1 and c2 in the C . txt file as well as the location of Nmax and Nmin in the N . txt file knowing p and g 18 19 location = ( int ) (( pos_p - 1) * 9 + pos_g ) ; 20 21 // assign a value to param based on the value of t 22 23 24 25 26 27 28 29 if ( t == 0.1) { param = " 0 p1 " ; } else if ( t == 1) { param = " 1 " ; } else { param = " 5 " ; } 30 31 // create a path to the C . txt , N . txt , and R . txt files using the param variable 32 33 34 35 sprintf ( file_name1 , " ../../ Data / C / t % s / C . txt " , param ) ; sprintf ( file_name2 , " ../../ Data / Nmax_Nmin / t % s / N . txt " , param ) ; sprintf ( file_name3 , " ../../ Data / R / t % s / R . txt " , param ) ; 36 37 38 // open N . txt , C . txt , and R . txt INTRODUCTION, METHOD AND VERIFICATION 90 fp1 = fopen ( file_name1 , " r " ) ; fp2 = fopen ( file_name2 , " r " ) ; fp3 = fopen ( file_name3 , " r " ) ; 39 40 41 42 if (! fp2 ) { printf ( " fail_2 " ) ; } 43 44 45 46 if (! fp1 ) { printf ( " fail_1 " ) ; } if (! fp3 ) { printf ( " fail_3 " ) ; } 47 48 49 50 51 52 53 for ( i = 0; i < location ; i ++) { fscanf ( fp1 , " % f % f " , & c1 , & c2 ) ; // find c1 and c2 using the location variable fscanf ( fp2 , " % f % f " , & nMin , & nMax ) ; // find nMin and nMax using the location variable fscanf ( fp3 , " % f " , & R ) ; // find R using the location variable 54 55 56 57 58 } 59 60 61 fclose ( fp1 ) ; fclose ( fp2 ) ; 62 63 64 n1 = tanh (( R - c2 ) / c1 ) ; n = (( n1 + 1) * ( nMax - nMin ) ) / 2 + nMin ; // find n 65 66 67 return n ; 68 69 70 } Listing 4.6: Implementation of the n approx method. Number of subdivisions obtained by executing n approx for t = 0.1, 1, and 5: Tables 4.6, 4.7, and 4.8 display the approximate number of subdivisions returned by the n approx function for t = 0.1, t = 1, and t = 5. Having a program capable of predicting the number of subdivisions n required to compute I for discrete values of t ∈ {0.1, 1, 5}, p ∈ {1.5, . . . , 4.5}, and γ ∈ {0.1, . . . , 0.9}, we intend to expand the program’s scope to encompass continuous values of t ∈ [0, 5], p ∈ [1.5, 4.5], and γ ∈ [0.1, 0.9]. To achieve this, we will gradually transition from discrete to continuous values, starting with t in the subsequent section. 
INTRODUCTION, METHOD AND VERIFICATION g/p 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.5 35065 99002 214565 602747 1674913 1674913 4662043 15930379 76155108 2.0 51414 134710 342453 858551 2341552 7241978 28071526 132684487 727802772 2.5 61242 162937 430919 1093366 3365120 9904136 37953058 182123946 1015373018 3.0 73570 198630 519457 1370345 3698389 12403810 45866233 223677191 1025834015 3.5 80108 216504 563799 1487766 4365537 13999256 54342976 263586831 1025841827 91 4.0 86651 234382 608143 1722571 5033725 15597980 60260601 301086078 1025918580 4.5 93201 252262 686814 1840192 5366720 17196034 68207443 332831576 1025859852 Table 4.6: Approximated number of subdivisions for t = 0.1 g/p 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.5 27873 71597 174554 442094 1134337 1134337 3527550 11753526 52352342 2 26378 67150 165106 395096 1071579 3153872 10468585 46563914 231613175 2.5 10927 23662 59775 132836 313754 871089 2640180 12411139 43366292 3.0 21895 55344 129525 314216 820575 2317025 7900759 34340174 162079821 3.5 32362 83377 202917 512656 1385528 4088223 14564452 63593768 324972385 4.0 39371 103044 250255 630341 1708829 5210846 18417018 83514848 449967670 4.5 42358 110919 278688 697983 1897254 5958598 20991875 95754535 505825475 Table 4.7: Approximated number of subdivisions for t = 1 g/p 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.5 17083 42163 98128 234320 595947 595947 1682968 5453895 21115912 2.0 11585 27968 63737 146338 364389 990878 3066607 11407642 51737121 2.5 9085 21670 48741 110344 268200 714928 2168010 7844868 34370523 3 7986 18772 41544 93857 227526 595009 1765265 6353968 27302158 3.5 7286 17273 38147 84362 201649 525021 1563165 5548342 23631082 4.0 6886 16074 35748 78364 186865 485046 1439166 5052182 21275601 4.5 6586 15474 33949 75366 178355 455084 1346239 4711303 19703359 Table 4.8: Approximated number of subdivisions for t = 5 4.1.6 Number of Subdivisions Required to Compute I for Continuous Values of t and Discrete Values of p and γ This section up to section 4.1.8, corresponding to step 4 of the plan, signifies the transition from discrete values of parameters p, t, and γ to continuous values, achieved through a series of interpolations. We utilize linear interpolation to extend the scope of our program to handle continuous values of t. For optimization purposes, we implement four different variants of INTRODUCTION, METHOD AND VERIFICATION 92 linear interpolation formulas and select the most conservative one, which returns the highest value of n. For t ∈ [t1 , t2 ], their general formulas are: 1. nt = nt1 + (t−t1 )(nt2 −nt1 ) , (t2 −t1 ) 2. nt = nt1 + (log (t)−log (t1 ))(nt2 −nt1 ) , (log (t2 )−log (t1 )) 3. log(nt ) = log(nt1 ) + (t−t1 )(log(nt2 )−log(nt1 )) , (t2 −t1 ) 4. log(nt ) = log(nt1 ) + (log(t)−log(t1 ))(log(nt2 )−log(nt1 )) , (log(t2 )−log(t1 )) with log referring to the log10 function. Implementation of these variants in C Below are their respective implementation in C. 
1 2 int N_T ( double t , double t1 , double t2 , int n1 , int n2 ) { 5 int n = ( int ) ( n1 + ( t - t1 ) *( n2 - n1 ) /( t2 - t1 ) ) ; 6 return n ; 7 } 3 4 8 int N_logT ( double t , double t1 , double t2 , int n1 , int n2 ) { 11 int n = ( int ) ( n1 + ( log10 ( t ) - log10 ( t1 ) ) *( n2 - n1 ) /( log10 ( t2 ) log10 ( t1 ) ) ) ; 12 return n ; 13 } 9 10 14 int logN_T ( double t , double t1 , double t2 , int n1 , int n2 ) { 17 double logn = log10 ( n1 ) + ( t - t1 ) *( log10 ( n2 ) - log10 ( n1 ) ) /( t2 - t1 ) ; 18 return ( int ) ( pow (10 , logn ) ) ; 19 } 15 16 20 int logN_logT ( double t , double t1 , double t2 , int n1 , int n2 ) { 23 double logn = log10 ( n1 ) + ( log10 ( t ) - log10 ( t1 ) ) *( log10 ( n2 ) - log10 ( n1 ) ) / 24 ( log10 ( t2 ) - log10 ( t1 ) ) ; 25 return ( int ) ( pow (10 , logn ) ) ; 26 } 21 22 Listing 4.7: Implementation of the previous interpolation functions in C. INTRODUCTION, METHOD AND VERIFICATION 93 Determining the most conservative linear interpolation method using the linearN T function The purpose of linearN T is straightforward: it compares the values of the number of subdivisions returned by each method and returns the highest one. int linearN_T ( double t , double t1 , double t2 , int n1 , int n2 ) { 3 int temp1 = N_T (t , t1 , t2 , n1 , n2 ) ; 4 int temp2 = N_logT (t , t1 , t2 , n1 , n2 ) ; 5 int temp3 = logN_T (t , t1 , t2 , n1 , n2 ) ; 6 int temp4 = logN_logT (t , t1 , t2 , n1 , n2 ) ; 1 2 7 int n = temp1 ; 8 9 if ( n < temp2 ) { n = temp2 ; } 10 11 12 13 14 if ( n < temp3 ) { n = temp3 ; } 15 16 17 18 19 if ( n < temp4 ) { n = temp4 ; } 20 21 22 23 24 return n ; 25 26 } Listing 4.8: Code used to compare the different interpolation methods. 4.1.7 Number of Subdivisions Required to Compute I with a Controlled Precision for Continuous Values of t, p and γ In contrast to t, the transition from discrete to continuous values for the pair of parameters (p, γ) is accomplished using bi-linear interpolation. This is because (p, γ) constitute a pair of parameters, unlike t, which is a single parameter. For a given (p, γ) such that p1 ≤ p ≤ p2 and γ1 ≤ γ ≤ γ2 , we aim to determine its corresponding number of subdivisions n. Below are the bi-linear interpolation formula and its implementation in our code. INTRODUCTION, METHOD AND VERIFICATION 94 Bi-linear interpolation formula     np2 ,γ1 (p−p1 ) np2 ,γ2 (p−p1 ) np1 ,γ1 (p−p2 ) np1 ,γ2 (p−p2 ) γ− γ1 2 n = γγ−γ + + + . p1 −p2 p2 −p1 γ2 −γ1 p1 −p2 p2 −p1 1 −γ2 Implementation in C 1 2 int bilinearN_p_g ( double p , double g , double t ) { 3 int n ; double p1 , p2 , g1 , g2 ; double n_p1g1 , n_p1g2 , n_p2g1 , n_p2g2 ; 4 5 6 7 findPosition_p (p , & p1 , & p2 ) ; // find p1 findPosition_g (g , & g1 , & g2 ) ; // find p2 8 9 10 n_p1g1 n_p1g2 n_p2g1 n_p2g2 11 12 13 14 = = = = n_approx (t , n_approx (t , n_approx (t , n_approx (t , p1 , p1 , p2 , p2 , g1 ) ; g2 ) ; g1 ) ; g2 ) ; // // // // find find find find n n n n for for for for p = p1 p = p1 p = p2 p = p2 and and and and g = g1 g = g2 g = g1 g = g2 15 // find n 16 17 n = ( int ) (( g - g2 ) / ( g1 - g2 ) * ( n_p1g1 * ( p - p2 ) / ( p1 - p2 ) + n_p2g1 * ( p - p1 ) / ( p2 - p1 ) ) + ( g - g1 ) / ( g2 - g1 ) * ( n_p1g2 * ( p - p2 ) / ( p1 - p2 ) + n_p2g2 * ( p - p1 ) / ( p2 - p1 ) ) ) ; 18 19 20 return n ; 21 22 } Listing 4.9: Bi-linear interpolation of n in C. 
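To see the weights of this formula in isolation from the project's data files, the following self-contained C sketch re-implements the same bi-linear expression with corner values taken from Table 4.3 (t = 5, p ∈ {2, 2.5}, γ ∈ {0.5, 0.6}) and checks that it reproduces the corners exactly, which is a quick way to catch sign or ordering mistakes in the weights. It is an illustration only, not part of the thesis code, and the helper name bilinear is hypothetical.

#include <stdio.h>

/* Bi-linear interpolation of n over the rectangle [p1,p2] x [g1,g2],
   using the same weights as bilinearN_p_g above.
   n11 = n(p1,g1), n12 = n(p1,g2), n21 = n(p2,g1), n22 = n(p2,g2).   */
static double bilinear(double p, double g,
                       double p1, double p2, double g1, double g2,
                       double n11, double n12, double n21, double n22) {
    return (g - g2) / (g1 - g2) * (n11 * (p - p2) / (p1 - p2) + n21 * (p - p1) / (p2 - p1))
         + (g - g1) / (g2 - g1) * (n12 * (p - p2) / (p1 - p2) + n22 * (p - p1) / (p2 - p1));
}

int main(void) {
    /* Corner values from Table 4.3 (t = 5). */
    double p1 = 2.0, p2 = 2.5, g1 = 0.5, g2 = 0.6;
    double n11 = 364800, n21 = 268600, n12 = 992000, n22 = 716000;

    printf("corner (p1,g1): %.0f\n", bilinear(p1, g1, p1, p2, g1, g2, n11, n12, n21, n22));
    printf("corner (p2,g2): %.0f\n", bilinear(p2, g2, p1, p2, g1, g2, n11, n12, n21, n22));
    printf("midpoint      : %.0f\n", bilinear(2.25, 0.55, p1, p2, g1, g2, n11, n12, n21, n22));
    return 0;
}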
4.1.8 Summary of the Interpolation Process The previously encountered interpolation processes, either linear or bilinear, are part of a new function called n approx general, which can be regarded as an extension of n approx to real values of t, p, and γ. On one hand, n approx allows us to predict the number of subdivisions required to approximate I for discrete values of t, p, and γ. On the other hand, n approx general performs the same task but over their continuous range. The process to find the number of subdivisions required for an unknown triplet (t, p, γ) involves finding two triplets (t1 , p1 , γ1 ) and (t2 , p2 , γ2 ) such that t1 ≤ t ≤ t2 , p1 ≤ p ≤ p2 , and γ1 ≤ γ ≤ γ2 . Moreover, it finds n1 required for (t1 , p1 , γ1 ) and n2 required for (t2 , p2 , γ2 ), and then computes n required for (t, p, γ) using interpolation. INTRODUCTION, METHOD AND VERIFICATION 95 This comprehensive approach allows us to efficiently estimate the number of subdivisions needed to achieve the desired precision for the integral over a wider range of input values. Algorithm of the n approx general function Algorithm 1: n approx general Algorithm Result: Find n for continuous values of t, p and γ ; if t ∈ {0.1, 1, 5} then n = bilinear(t, p, γ) ; end else find (t1 , t2 ) ∈ {0.1, 1, 5} × {0.1, 1, 5} such that t1 ≤ t ≤ t2 ; n1 = bilinear(t1, p, γ) ; n2 = bilinear(t2, p, γ) ; n = linear(t, t1, n1, t2, n2) ; end Implementation in C of the n approx general function 1 int n_approx_general ( double t , double p , double g ) { 2 3 double t1 , t2 ; 4 5 6 7 8 9 double p1 = ( int ) p ; // extract the integer part of p double p2 = 10 * ( p - p1 ) ; // extract the decimal part of p long int i = 0; double g1 = ( int ) g ; // extract the integer part of g double g2 = 10 * ( g - g1 ) ; // extract the decimal part of p 10 11 int n , n1 , n2 ; 12 13 // if t is not in the set {0 , 0.1 , 1 , 5} , find t1 and t2 such that t1 <= t <= t2 14 15 16 17 if ( t != 0.1 && t != 5 && t != 1 && t != 0) { t1 = 0.1; t2 = 10; 18 19 20 21 22 if ( t < 1) { t1 = 0.1; t2 = 1; } else if (1 < t < 5) { INTRODUCTION, METHOD AND VERIFICATION t1 = t2 = } else t1 = t2 = } 23 24 25 26 27 28 96 1; 5; { 0.1; 5; 29 n1 = bilinearN_p_g (p , g , t1 ) ; // find n for p , g and t1 using bilinear interpolation n2 = bilinearN_p_g (p , g , t2 ) ; // find n for p , g and t2 using bilinear interpolation 30 31 32 n = linearN_T (t , t1 , t2 , n1 , n2 ) ; // find n knowing n1 and n2 using linear interpolation 33 34 } else if ( t == 0) // find n for t =0 { t = 0.1; n = bilinearN_p_g (p , g , t ) + 1; 35 36 37 38 39 } else { 40 41 // if t is in the set {0.1 , 1 , 5} use bilinear interpolation to compute it directly n = bilinearN_p_g (p , g , t ) ; 42 43 44 } // verify that n is even , because the composite Simpson method only works with even values of n . 45 46 47 if ( n % 2 != 0) { n += 1; } 48 49 50 51 return n ; 52 53 } Listing 4.10: Implementation of the n approx general function 4.1.9 Assessment of the Accuracy of the Results This section corresponds to step 5 and is final phase of our plan, where we carefully examine the accuracy of our methodology. As a primary test for the quality of the results, we aim to reproduce the curve of Dtγ u on [−t∞ , t∞ ], and compare its shape with the original one from [3, page 12]. The newly computed curve is at the left and the former one at the right. INTRODUCTION, METHOD AND VERIFICATION (a) Plot of the reconstructed Dtγ u 97 (b) Plot of the original Dtγ u Figure 4.3: Plot of Dtγ u for 0 < γ < 1. 
As we ascend from the horizontal green line Dtγ u γ=0 , values γ increase by 0.1 until we reach γ = 0.9, denoted by the pink curve. This is for  of  dxi < 0 and p = 2. The horizontal axis represents the range of t-values and the sign dσ vertical axis the associated Dtγ u. error -4.744730e-008 -5.048329e-009 6.482569e-008 -4.435881e-009 2.060108e-007 -3.872901e-009 3.676208e-007 -3.364300e-009 -3.129406e-009 6.138696e-007 -2.703101e-009 7.505957e-007 -2.328432e-009 8.405479e-007 -2.001432e-009 t 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 Table 4.9: Error file for γ = 0.7 error -1.784137e-010 -1.721758e-010 -1.664409e-010 -1.605218e-010 -1.547047e-010 -1.494032e-010 -1.443059e-010 -1.395319e-010 -1.351115e-010 -1.307046e-010 -1.269774e-010 -1.235263e-010 -1.206676e-010 -1.180839e-010 -1.161406e-010 t 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 Table 4.10: New error file for γ = 0.7 The reproduction of Dtγ u was just the visual part of the assessment process. In fact, we still need to verify the precision of Dtγ u values. To achieve this, we compute the absolute difference between Dtγ u for N = n, N = 2n, and p = 2. Initially, the error variable used in the stopping condition of the writeFile function was error = abs(Simpson(lower, upper, n) - Simpson(lower, upper, n + 10)). However, due to the non-monotonic and uncontrolled character of the errors’ magnitude as depicted in Table 4.9, we adjusted the error variable to error = abs(Simpson(lower, upper, n) - Simpson(lower, upper, 2*n)), as mentioned in section 4.1.5. This modification yielded satisfactory results, as illustrated in INTRODUCTION, METHOD AND VERIFICATION 98 Table 4.10. Implementation of Dtγ u in C Below is the implementation of the Dtγ u function employed to reconstruct its curve . Implementation of Dtγ u: Parameters list: The function Du calculates the value of Dtγ u for −5 ≤ t ≤ 5. • t: the function’s variable, between -5 and 5. • nt: the number of subdivisions required to compute I3 . Process: • Calculate the first operand of Dtγ u and store its value in I1. • Calculate the second operand of Dtγ u and store its value in I2. • Calculate the third operand of Dtγ u, the one with the integral, and store its value in I3. 1 double Du ( double t , long int nt , double tmax , double p , double g ) { 2 double I1 = I_1 (t , tmax , p , g ) ; double I2 = I_2 (t , tmax , p , g ) ; double I3 = I_3 (t , tmax , nt , p , g ) ; 3 4 5 6 return I1 + I2 - I3 ; 7 8 9 } Listing 4.11: Implementation of the Dtγ u function in C Implementation of I1 , I2 , and I3 : I1 , I2 and I3 are used to implement respectively the γ first, second and third operands of Dt u as defined in section 4.1.4. 1 // Implementation of I_1 2 3 double I_1 ( double t , double tmax , double p , double g ) { 4 return ( pow ( tmax , -g ) / ( tgamma (1 - g ) ) ) * ( u (t - tmax , p ) - u (t , p ) ) ; 5 6 7 } 8 9 10 // Implementation of I_2 INTRODUCTION, METHOD AND VERIFICATION 11 double I_2 ( double t , double tmax , double p , double g ) { 12 return ( pow ( tmax , 1 - g ) / tgamma (2 - g ) ) * ( u1 (t - tmax , p ) ) ; 13 14 15 } 16 17 // Implementation of I_3 18 19 double I_3 ( double t , double tmax , long int nt , double p , double g ) { 20 return (1 / tgamma (2 - g ) ) * simpson (t , t - tmax , nt , p , g ) ; 21 22 23 } Listing 4.12: Implementation of I1 , I2 and I3 in C. 
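Putting the pieces together, a minimal driver of the kind used to reconstruct the curve in Figure 4.3 could look as follows. This is only a sketch, not the thesis code: it assumes that the functions of Listings 4.11 and 4.12, together with n_approx_general, are compiled and linked as laid out in Appendix A, with their prototypes collected in functions.h, and the values p = 2 and γ = 0.5 are illustrative.

#include <stdio.h>
#include "functions.h"   /* assumed project header declaring Du and n_approx_general */

int main(void) {
    const double tmax = 5.0;   /* truncation point t_infinity */
    const double p = 2.0;      /* illustrative kinetic exponent */
    const double g = 0.5;      /* illustrative value of gamma  */

    /* Tabulate D_t^gamma u on [0, tmax]; the values for negative t
       follow from the antisymmetry relation quoted in section 4.1.4. */
    for (double t = 0.0; t <= tmax; t += 0.5) {
        long int n = n_approx_general(t, p, g);   /* predicted number of subdivisions */
        printf("t = %4.1f   n = %9ld   Du = %+.10f\n", t, n, Du(t, n, tmax, p, g));
    }
    return 0;
}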
Chapter 5

Conclusion

Observations and Results

In this thesis, we have investigated the numerical approximation of the subdiffusive Gierer-Meinhardt model with controlled precision. Our research has yielded several key findings that contribute to our understanding of the model.

We began by defining and exploring the concepts of reaction-diffusion, elucidating the main difference between normal and anomalous diffusion: the rate at which particles spread across the reaction domain in each case. Specifically, during normal diffusion the mean square displacement of particles can be expressed as a linear function of time, while in anomalous diffusion this mean square displacement manifests as a fractional power of time.

Through our examination of integer and fractional calculus, we observed that certain rules and properties extend to fractional calculus, albeit with some limitations. For instance, the composition rule $\mathrm{d}^{q}\mathrm{d}^{Q}f=\mathrm{d}^{q+Q}f$ extends to fractional calculus except in cases where $f\neq0$ and $\mathrm{d}^{Q}f=0$. The term-by-term differentiation and integration of infinite series extend to fractional calculus under convergence conditions similar to those of classical calculus: term-by-term differentiation requires both the pointwise convergence of the series $\sum_{j=0}^{\infty}f_{j}$ and the uniform convergence of $\sum_{j=0}^{\infty}\mathrm{d}^{q}f_{j}$, while term-by-term integration requires the uniform convergence of $\sum_{j=0}^{\infty}f_{j}$ over a specific interval. We found that while the scale change property, which proved to be highly beneficial when transitioning from the $(x,t)$ axes to the $(y_i,\sigma)$ axes, remains applicable in fractional calculus without specific restrictions, the chain rule does not extend, due to the non-local nature of fractional derivatives and integrals.

Regarding the computation of the fractional operator $D_t^{\gamma}$, we observed that the lowest number of subdivisions required to achieve a controlled precision of $10^{-10}$ occurred for $p=1.5$, $\gamma=0.1$, and $t=5$, totaling 17083 subdivisions. Conversely, the highest number of subdivisions exceeded 1 billion and occurred for $p=4.5$, $\gamma=0.9$, and $t=0.1$. These results align with expectations, as the curve of $D_t^{\gamma}u$ in the former case exhibited a nearly horizontal line around $t=5$, while the latter showed a steep slope around $t=0.1$.

Finally, in our verification process, a significant decision we made was to assess the achieved integration precision against the number of subdivisions returned by our program over a wider interval $[n,2n]$, instead of focusing solely on close values such as $n$ and $n+10$, ensuring that $D_t^{\gamma}u$ maintains consistent precision across a substantial range. The objective of calculating $D_t^{\gamma}u$ numerically with controlled precision has therefore been achieved. This operator can now be used in system (3.23) to determine the values of $\frac{dx_i}{d\sigma}$ and $H(\sigma)$ with the desired precision.

Appendices

Appendix A

Structure of the Project

Repository Tree

Below is the repository tree of the project. At the root of the project there are two main repositories, Code and Data.

Figure A.1: Tree diagram of the project repository.

• Repository path: \Code.
• Repository description: Contains all the code for the project, either in C or in Octave.
• Repository content: C, Octave.
– Repository path: \Code\C.
– Repository description: Contains all the C code for the project.
– Repository content: Du, functions, WriteFile.
∗ Repository path: \Code\C\functions.
∗ Repository description: Contains all the functions and their prototypes used to approximate the number of subdivisions. ∗ Repository content: functions.h, functions.c. · File name: functions.h. · Contains all the functions’ prototypes used to approximate the number of subdivisions. · File name: functions.c. · File description: Contains the implementation of all the functions used in the approximation process. · File content: sech, u, u1, u2, integrand, I_1, I_2, I_3, Du, simpson, findPosition_p, findPosition_g, linearN_T, bilinearN_p_g, n_approx, n_approx_general, N_T, N_logT, logN_T, logN_logT, even, reverse_n, join. · File name: functions.h. · File description: Contains the prototypes of all the functions used in the approximation process. ∗ Repository path: \Code\C\Du. ∗ Repository description: Contains the C file used to reconstruct the curve of Dtγ u. ∗ Repository content: Du.c. · File name: Du.c. · File description: Contains the code used to reconstruct the curve of Dtγ u. ∗ Repository path: \Code\C\WriteFile. ∗ Repository description: Contains the C file used to implement the composite Simpson method and to print its returned value in text files. ∗ Repository content: WriteFile.c. · File name: WriteFile.c. · File description: Contains the C code used to implement the composite Simpson method and to print its returned value in text files. – Repository path: \Code\Octave. – Repository description: contains all the Octave code of the project. 105 – Repository content: Fit_Curve. ∗ Repository path: \Code\Octave\Fit_Curve. ∗ Repository description: Contains all the Octave files used to fit the residuals. ∗ Repository content: test1.m, fit_curve.m, fit_curvef.m · file name: test1.m. · file description: Main file containing the overall logic of the fitting process. · file name: fit_curve.m. · file description: Contains the implementation of the fit_curve function which use is explained later in this document. · file name: fit_curvef.m. · file description: Contains the implementation of the fit_curvef function which use is explained later in this document. • Repository path: \Data. • Repository description: Contains all the data used in our project. • Repository content: P_G, Nmax_Nmin, R, C. – Repository path: \Data\P_G. – Repository description: Holds the files containing the returned value of the Simpson method for t = 0.1, 1, 5. – Repository content: t0p1, t1, t5. – Repository path: \Data\C. – Repository description: Holds the files containing the c(c1 , c2 ) parameters for t = 0.1, 1, 5. – Repository content: t0p1, t1, t5. – Repository path: \Data\Nmax_Nmin. – Repository description: Holds the files containing the minimum and maximum number of subdivisions for t = 0.1, 1, 5. – Repository content: t0p1, t1, t5. – Repository path: \Data\R. – Repository description: Holds the files containing the residuals for t = 0.1, 1, 5. – Repository content: t0p1, t1, t5. Bibliography [1] K. Booker and Y. Nec. “On accuracy of numerical solution to boundary value problems on infinite domains with slow decay”. In: Math. Model. Nat. Phenom. 14.5 (2019), p. 503. [2] William L. Briggs et al. Calculus: Early Transcendentals. 3rd. Pearson, 2018. isbn: ISBN. [3] Richard L Burden, Douglas J Faires, and Annette M Burden. Numerical Analysis. 10th ed. Brooks Cole, 2015. [4] I.S. Gradshteyn and I.M. Ryzhik. Table of Integrals, Series, and Products. Ed. by Alan Jeffrey and Daniel Zwillinger. 7th. Academic Press, 2007. [5] D. Iron, M. J. Ward, and J. Wei. 
“The stability of spike solutions to the one-dimensional Gierer-Meinhardt model”. In: Physica D 150.1-2 (2001), pp. 25–62.
[6] Y. Nec and M. J. Ward. “Dynamics and stability of spike-type solutions to a one dimensional Gierer-Meinhardt model with sub-diffusion”. In: Physica D 241 (2012), pp. 947–963.
[7] Keith B. Oldham and Jerome Spanier. Theory and Applications of Differentiation and Integration to Arbitrary Order. Academic Press, Inc, 1974.
[8] Ivar Stakgold. Green’s Functions and Boundary Value Problems. 2nd ed. John Wiley & Sons, Inc, 1998.
[9] Helen J. Wilson. Introduction to perturbation methods. University College London, 2014. url: https://www.ucl.ac.uk/~ucahhwi/LTCC/section2-3-perturb-regular.pdf (visited on 01/25/2024).