\documentclass[a4paper,12pt]{extarticle}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[colorinlistoftodos]{todonotes}
\numberwithin{equation}{section}
\title{Random Fibonacci Sequences}
\author{Sam Koper 22018004}
\date{}
\begin{document}
\maketitle
\begin{abstract}
This paper looks at the development of random Fibonacci sequences and investigates the mathematical methods used by various mathematicians to determine important properties of the sequence, all of which lead to its growth rate.
\end{abstract}
\newpage
\section{Introduction}
The Fibonacci sequence was first introduced by Leonardo of Pisa in 1202, and it has been studied extensively by mathematicians ever since. Fibonacci numbers appear in many fields: in biology they describe naturally occurring patterns, and in computer science they arise in the analysis of efficient searching algorithms. To understand random Fibonacci sequences we must first look at the standard Fibonacci sequence. In mathematics, a Fibonacci sequence is a simple recurrence relation defined by
\begin{equation}
F_n = F_{n-1} + F_{n-2}
\end{equation}
where the first two numbers are both 1, creating the sequence
$$1, 1, 2, 3, 5, 8, 13, 21, 34, \ldots$$
This sequence creates a very famous pattern called the golden spiral, which is a series of connected quarter circles, each inscribed in a square whose side length is a number in the sequence, as seen below (taken from [6]).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\textwidth]{golden_spiral.jpg}
\end{figure}
Where the random Fibonacci sequence differs is in the signs of the recurrence relation,
\begin{equation}
t_n = \pm t_{n-1} \pm t_{n-2}
\end{equation}
where each $\pm$ sign is chosen independently, with probability $\frac{1}{2}$ of being $+$ and probability $\frac{1}{2}$ of being $-$.
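This recurrence is straightforward to simulate; below is a minimal Python sketch (the function name `random_fibonacci` and its seeding are my own choices, not from the sources):

```python
import random

def random_fibonacci(n, seed=None):
    """Generate t_1, ..., t_n with t_k = ±t_{k-1} ± t_{k-2},
    each sign chosen independently with probability 1/2."""
    rng = random.Random(seed)
    t = [1, 1]  # t_1 = t_2 = 1, as for the standard sequence
    for _ in range(n - 2):
        t.append(rng.choice([1, -1]) * t[-1] + rng.choice([1, -1]) * t[-2])
    return t

print(random_fibonacci(12, seed=0))
```

Unlike the deterministic sequence, different runs (different seeds) produce different sample paths.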
\newpage
\section{The Absolute Value}
If we want to look at the behaviour of the random Fibonacci sequence as $n$ increases, then we need to look at the absolute value $|t_n|$ of its terms. So we write the sequence as
\begin{equation}
|t_n| = |{\pm} t_{n-1} \pm t_{n-2}|
\end{equation}
We now have four possible outcomes, dependent on the signs, each with probability $\frac{1}{4}$,
\begin{equation}
|t_n| = |{+}t_{n-1} + t_{n-2}|
\end{equation}
\begin{equation}
|t_n| = |{-}t_{n-1} - t_{n-2}|
\end{equation}
\begin{equation}
|t_n| = |{+}t_{n-1} - t_{n-2}|
\end{equation}
\begin{equation}
|t_n| = |{-}t_{n-1} + t_{n-2}|
\end{equation}
Here we see that (2.2) and (2.3) both represent the absolute sum of the terms, while (2.4) and (2.5) both represent the absolute difference. So there are only two distinct outcomes, each with probability $\frac{1}{2}$. Therefore, for the absolute value of the random Fibonacci sequence we have the two equivalent recurrences, where each $\pm$ sign is chosen randomly with probability $\frac{1}{2}$,
\begin{equation}
t_n = t_{n-1} \pm t_{n-2}
\end{equation}
\begin{equation}
t_n = \pm t_{n-1} + t_{n-2}
\end{equation}
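The reduction from four sign cases to two can be checked directly: for any pair of values, the four signed combinations give only the absolute sum and the absolute difference. A small Python sketch (the helper `abs_outcomes` is my own):

```python
def abs_outcomes(a, b):
    """Return the set of |±a ± b| over all four sign choices."""
    return {abs(s1 * a + s2 * b) for s1 in (1, -1) for s2 in (1, -1)}

# the four cases collapse to the absolute sum and the absolute difference
print(abs_outcomes(5, 3) == {8, 2})  # True
```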
\newpage
\section{Growth Rate}
In the early 1600s Johannes Kepler discovered that as $n$ increases, the ratio of consecutive terms of the Fibonacci sequence approaches
\begin{equation}
\varphi = \frac{1+\sqrt{5}}{2}
\end{equation}
This is known as the golden ratio, and is approximately equal to 1.618. In 1765 Leonhard Euler discovered a formula
\begin{equation}
F_n = \frac{\varphi^n - \left(-\frac{1}{\varphi}\right)^{n}}{\sqrt{5}}
\end{equation}
now known as Binet's formula, which shows that the Fibonacci numbers grow at an exponential rate equal to the golden ratio. However, this only applies to the non-random Fibonacci sequence. For the random sequence, it was shown with the use of a Stern-Brocot tree that
\begin{equation}
\lim_{n\to \infty} |t_n|^{\frac{1}{n}} = 1.13198824\ldots
\end{equation}
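As a numerical aside, Binet's formula for the deterministic sequence is easy to check against the recurrence; a minimal Python sketch (the function names `fib` and `binet` are my own):

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio

def fib(n):
    """F_n from the recurrence, with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def binet(n):
    """Binet's closed form for F_n."""
    return (phi ** n - (-1 / phi) ** n) / math.sqrt(5)

print(fib(20), round(binet(20)))  # both give 6765
```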
Below is a graph (taken from [2]) showing the trend of a random Fibonacci sequence as $n$ increases.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\textwidth]{growth_rate.png}
\end{figure}
\noindent This gives us the exponential growth rate for random Fibonacci sequences and is called the Viswanath constant.
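This growth rate can also be estimated empirically by computing $|t_n|^{1/n}$ along a long sample path; a rough Monte Carlo sketch in Python (the helper `growth_estimate` is my own, and convergence is slow, so the estimate is only approximate):

```python
import math
import random

def growth_estimate(n, seed=None):
    """Estimate |t_n|^(1/n) = exp(log|t_n| / n) along one sample path."""
    rng = random.Random(seed)
    a, b = 1, 1  # t_1, t_2
    for _ in range(n - 2):
        a, b = b, rng.choice([1, -1]) * b + rng.choice([1, -1]) * a
    return math.exp(math.log(abs(b)) / n)

# average a few long runs; the result should land near 1.13198824...
runs = [growth_estimate(20000, seed=s) for s in range(5)]
print(sum(runs) / len(runs))
```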
\newpage
\section{The Lyapunov Exponent}
If we re-write the random Fibonacci sequence using matrices we get
\begin{equation}
\left[\begin{array}{c}
t_{n-1} \\
t_n
\end{array}\right] = \left[\begin{array}{cc}
0 & 1 \\
1 & \pm 1
\end{array}\right]\left[\begin{array}{c}
t_{n-2} \\
t_{n-1}
\end{array}\right]
\end{equation}
where we use one of the two matrices
$$\left[\begin{array}{cc}
0 & 1 \\
1 & 1
\end{array}\right] \quad \text{and} \quad \left[\begin{array}{cc}
0 & 1 \\
1 & -1
\end{array}\right]$$
each with probability $\frac{1}{2}$. Then we have the random matrix $M_n$ chosen at step $n$, which is independent of $M_i$ for $i \neq n$. So we have
\begin{equation}
\left[\begin{array}{c}
t_{n-1} \\
t_n
\end{array}\right] = M_{n-2}\ldots M_1\left[\begin{array}{c}
1 \\
1
\end{array}\right]
\end{equation}
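As a sanity check, multiplying out such a random product reproduces the pair $(t_{n-1}, t_n)$ obtained from the direct recurrence; a small Python sketch (the helper `matrix_pair` is my own, and the same sign sequence drives both computations):

```python
import random

def matrix_pair(signs):
    """Apply the random matrices M_1, ..., M_k to (1, 1)^T, where M_i is
    [[0, 1], [1, s_i]] with s_i = ±1, returning the resulting pair."""
    a, b = 1, 1  # the vector (t_1, t_2)
    for s in signs:
        # [[0, 1], [1, s]] @ (a, b) = (b, a + s*b)
        a, b = b, a + s * b
    return a, b

rng = random.Random(42)
signs = [rng.choice([1, -1]) for _ in range(10)]

# the direct recurrence t_n = ±t_{n-1} + t_{n-2}, driven by the same signs
t = [1, 1]
for s in signs:
    t.append(s * t[-1] + t[-2])

print(matrix_pair(signs) == (t[-2], t[-1]))  # True
```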
where $M_{n-2}\ldots M_1$ is a product of independent, identically distributed random matrices. Furstenberg and Kesten proved the following basic result of random matrix theory:
\begin{equation}
\lim_{n\to \infty}\frac{\log\|M_n \ldots M_1\|}{n} = \gamma
\end{equation}
with probability 1. Applying this result to our random Fibonacci matrices, we see that
\begin{equation}
\lim_{n\to \infty}\frac{\log\|M_n \ldots M_1\|}{n} = \gamma_f
\end{equation}
\begin{equation}
\lim_{n\to \infty}|t_n|^\frac{1}{n} = e^{\gamma_f}
\end{equation}
for a constant $\gamma_f$, with probability 1. This constant is known as a Lyapunov exponent.
We also have that the exponential of the Lyapunov exponent is equal to the Viswanath constant, the growth rate of the random Fibonacci sequence,
\begin{equation}
e^{\gamma_f} = 1.13198824 \ldots
\end{equation}
\newpage
\noindent To determine the value of the Lyapunov exponent we use Furstenberg's formula, which is as follows
\begin{equation}
\gamma_f = \int \mathrm{amp}(\bar{x})\, d\nu_f(\bar{x})
\end{equation}
where $\bar{x}$ represents a direction in the real plane, and $\mathrm{amp}(\bar{x})$ is a smooth function of $\bar{x}$ which can be written as
\begin{equation}
\int \log{\frac{\|Y\bar{x}\|}{\|\bar{x}\|}}\, d\mu (Y)
\end{equation}
where $Y$ is a random matrix drawn from a set with distribution $\mu$. Finally, $\nu_f$ is a probability measure over directions, defined by
\begin{equation}
\nu_f (r) = \frac{1}{2}
\end{equation}
\begin{equation}
\nu_f(r \alpha L) = \begin{cases}
\frac{1}{1+\phi} \nu_f (r \alpha ), & |\alpha |\ \text{even}\\
\frac{\phi}{1+\phi} \nu_f (r \alpha ), & |\alpha |\ \text{odd}
\end{cases}
\end{equation}
\begin{equation}
\nu_f(r \alpha R) = \begin{cases}
\frac{\phi}{1+\phi} \nu_f (r \alpha ), & |\alpha |\ \text{even}\\
\frac{1}{1+\phi} \nu_f (r \alpha ), & |\alpha |\ \text{odd}
\end{cases}
\end{equation}
\begin{equation}
\nu_f (l\alpha ) = \nu_f (r\bar{\alpha})
\end{equation}
where $\alpha$ denotes a finite string of left and right moves $L$, $R$ down the tree, $|\alpha|$ is its length, $\bar{\alpha}$ is $\alpha$ with $L$ and $R$ interchanged, and $\phi$ is the golden ratio.
\newpage
\section{The Stern-Brocot Tree}
The Stern-Brocot tree was created independently by Moritz Stern in 1858 and Achille Brocot in 1861. It is an infinite complete binary tree with two roots, $$\frac{0}{1} \quad \text{and} \quad \frac{1}{0}$$ representing zero and infinity. Each of these nodes breaks off into two children, the left child being
$$\frac{p+x}{q+y}$$
Where $\frac{p}{q}$ is the nearest ancestor to the left of $\frac{x}{y}$.
The right child is expressed by the equation
$$\frac{r+x}{s+y}$$ where $\frac{r}{s}$ is the nearest ancestor to the right of $\frac{x}{y}$. The tree is shown below (taken from [4]).
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{SB_tree_1.png}
\end{figure}
However, Viswanath modified this version of the tree in an attempt to find the value of $\nu_f$. His tree is shown below (taken from [2]).
\newpage
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{SB_tree_2.png}
\end{figure}
\noindent This tree uses closed intervals at the nodes instead of rational numbers. The difference from the conventional tree is that it covers all of the real line, not just the positive part. The root of the tree, $[\frac{-1}{0} , \frac{1}{0}]$, represents the closed interval $[-\infty , \infty]$, i.e.\ the whole real line. Its left and right children, $[\frac{-1}{0} , \frac{0}{1}]$ and $[\frac{0}{1} , \frac{1}{0}]$, represent the negative and positive halves of the real line. Each node beyond these is formed by inserting a mediant between the endpoints of the existing interval, where the mediant is defined as
$$\operatorname{med}\left(\frac{a}{b} , \frac{c}{d}\right) = \frac{a+c}{b+d}$$
Also, for a given interval $[\frac{a}{b} , \frac{c}{d}]$ with $\frac{a}{b} < \frac{c}{d}$, the mediant satisfies the inequality
$$\frac{a}{b} < \frac{a+c}{b+d} < \frac{c}{d}$$
At depth $N$ of the tree the real line will be divided into $2^N$ intervals.
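The subdivision process is easy to reproduce; a Python sketch (representing each endpoint as a (numerator, denominator) pair so that $\frac{1}{0}$ can stand for infinity; the helper names are my own):

```python
from fractions import Fraction

def mediant(lo, hi):
    """med(a/b, c/d) = (a+c)/(b+d) on (numerator, denominator) pairs."""
    (a, b), (c, d) = lo, hi
    return (a + c, b + d)

def subdivide(intervals):
    """Split every Stern-Brocot interval at its mediant."""
    out = []
    for lo, hi in intervals:
        m = mediant(lo, hi)
        out.extend([(lo, m), (m, hi)])
    return out

# start from the positive half [0/1, 1/0] and subdivide N times
N = 4
intervals = [((0, 1), (1, 0))]
for _ in range(N):
    intervals = subdivide(intervals)

# every interval with finite endpoints satisfies a/b < (a+c)/(b+d) < c/d
ok = all(Fraction(a, b) < Fraction(a + c, b + d) < Fraction(c, d)
         for (a, b), (c, d) in intervals if b and d)
print(len(intervals), ok)  # 16 True
```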
\newpage
\section{Evaluation of $e^{\gamma_f}$}
We will now prove that the exponential of the Lyapunov exponent is equal to the Viswanath constant,
$$e^{\gamma_f} = 1.13198824 \ldots$$
To do this we will use a result from the theory of random matrix products: for a product $P_n = Y_n \ldots Y_1$ of independent and identically distributed random matrices with distribution $\mu$, we have
\begin{equation}
\lim_{n\to \infty} \frac{1}{n^r} \log \frac{|\langle P_n x, y\rangle|}{||P_n x||} = 0
\end{equation}
with probability 1. This can be re-written as
\begin{equation}
\lim_{n\to \infty} \frac{1}{n} \log|\langle P_n x, y \rangle | = \lim_{n\to \infty} \frac{1}{n} \log\|P_n x\|
\end{equation}
with probability 1, taking $r=1$.
\newline
We now introduce the theorem of Furstenberg and Kesten, which states: for $Y_n$, $n\geq 1$, a sequence of independent and identically distributed random matrices in $GL(d,\mathbf{R})$ with distribution $\mu$, if $\mathbf{E}(\log^+ \|Y_1\|)<\infty$, then with probability one the Lyapunov exponent can be defined by
\newline
\begin{equation}
\gamma = \lim_{n\to \infty} \frac{1}{n} \log \|Y_n \ldots Y_1 \|
\end{equation}
\newline
Therefore, combining (6.2) and (6.3), we get
\newline
\begin{equation}
\gamma_f = \lim_{n\to \infty} \log |\langle P_n x, y \rangle |^\frac{1}{n}
\end{equation}
\newline
With probability 1. \newline
\newline Now, since the logarithm is continuous, we can bring it outside of the limit, which gives us
\newline
\begin{equation}
\gamma_f = \log \lim_{n\to \infty} |\langle P_n x, y \rangle |^\frac{1}{n}
\end{equation}
\newline
Now taking the exponential of both sides we get
\newline
\begin{equation}
e^{\gamma_f} = \lim_{n\to \infty}|\langle P_n x, y \rangle |^\frac{1}{n}
\end{equation}
\newline
So here $x$ and $y$ can be any vectors in $\mathbf{R}^2$. If we take
$$x = \left[\begin{array}{c}
1 \\
1
\end{array}\right] \quad \text{and} \quad y = \left[\begin{array}{c}
1 \\
0
\end{array}\right]$$
then from (4.2) we obtain the vector
\begin{equation}
P_n x = \left[\begin{array}{c}
t_{n+1} \\
t_{n+2}
\end{array}\right]
\end{equation}
Hence the inner product is
\newline
\begin{equation}
\langle P_n x, y \rangle = t_{n+1} (1) + t_{n+2} (0) = t_{n+1}
\end{equation}
\newline
Therefore (6.6) can be written as
\newline
\begin{equation}
e^{\gamma_f} = \lim_{n\to \infty} |t_{n+1}|^\frac{1}{n}
\end{equation}
\newline
Now we need the denominator of the exponent to match the subscript, so we rewrite (6.9) as
\newline
\begin{equation}
\lim_{n\to \infty} |t_{n+1}|^\frac{1}{n} = \lim_{n\to \infty} |t_{n+1}|^{\frac{1}{n} \cdot \frac{n+1}{n+1}} = \lim_{n\to \infty} |t_{n+1}|^{\frac{1}{n+1}(1 + \frac{1}{n})}
\end{equation}
\newline
so that the limit can be split into the product
\newline
\begin{equation}
\lim_{n\to \infty} |t_{n+1}|^\frac{1}{n+1} \times \lim_{n\to \infty} |t_{n+1}|^{\frac{1}{n+1}(\frac{1}{n})}
\end{equation}
\newpage
\noindent Here the second limit in (6.11) tends to 1 as $n$ tends to infinity, since its exponent tends to zero, so we are left with
\newline
\begin{equation}
\lim_{n\to \infty} |t_{n+1}|^\frac{1}{n+1}
\end{equation}
\newline
Now, the Lyapunov exponent from (4.7) can be written as
\newline
\begin{equation}
\gamma_f = \iint \log \frac{\|Y\bar{x}\|}{\|\bar{x}\|}\, d\mu(Y)\, d\nu_f (\bar{x})
\end{equation}
\newline
Substituting in (4.8) we can rewrite the integral in the form
\newline
\begin{equation}
\gamma_f = \int amp(m) d\nu_f (m)
\end{equation}
\newline
where $\mathrm{amp}(m) = \frac{1}{4}\log \frac{4m^2 +1}{(m^2 +1)^2}$.\newline
\newline
We will use the definition of $\nu_f$ to determine $\gamma_f$. Using (4.12) we can see that $\nu_f$ is symmetric about 0, which is also true of the integrand in (6.14). The integral can therefore be written as
\newline
\begin{equation}
\gamma_f = 2\int_{0}^{\infty} amp(m) d\nu_f (m)
\end{equation}
\newline
Now we write this integral as a sum over Stern-Brocot intervals: $\mathbf{R}$ is divided into Stern-Brocot intervals at depth $N$, giving $2^N$ intervals, but we integrate only over the positive half of the real line. These intervals are denoted by $I_j^N$ where $1 \leq j \leq 2^N$. We now create lower and upper bounds for $\gamma_f$ using the minimum and maximum of the $\mathrm{amp}(m)$ function on each interval. At level $N$, taking $L$ as the lower bound and $U$ as the upper bound, we obtain the inequality
\newline
\begin{equation}
L = 2\sum_j \min_{m\in I_j^N} \mathrm{amp}(m)\,\nu_f(I_j ^N) < \gamma_f < 2\sum_j \max_{m\in I_j^N} \mathrm{amp}(m)\,\nu_f(I_j ^N) = U
\end{equation}
\newline
So we know that $\gamma_f$ lies somewhere between $L$ and $U$; moreover, as $N$ increases the size of each interval decreases, so we have
\begin{equation}
\lim_{N\to \infty} (U-L) = 0
\end{equation}
\newpage
\noindent We now need to evaluate the minimum and maximum of $\mathrm{amp}(m)$ on each interval, as well as $\nu_f(I_j^N)$, and then sum over the $2^N$ positive intervals at level $N+1$. The local minimum of $\mathrm{amp}(m)$ occurs at $m=\frac{1}{2}$, which is first seen in the tree at depth 3. From the definitions of $\nu_f$, it can be seen that $\nu_f(I_j^N)$ is equal to one of the values at $N+1$ in the set
\newline
\begin{equation}
\left\{\frac{1}{2}\, \frac{\phi^{N-i}}{(1+\phi)^N} \;\middle|\; 0 \leq i \leq N \right\}
\end{equation}
\newline
Viswanath carried out these computations for $N=28$ using floating point arithmetic and found the values of the lower and upper bounds at $N=28$ to be:
\newline
$$\mathrm{fl}(L_{28}) = 0.1239755981508$$
$$\mathrm{fl}(U_{28}) = 0.1239755994406$$
\newline
\noindent Then, bounding the error terms
$$|\mathrm{fl}(L_{28}) - L_{28}| \quad \text{and} \quad |\mathrm{fl}(U_{28}) - U_{28}|$$
it can be said with certainty that
$$\gamma_f \in (0.1239755980, 0.1239755995)$$
Therefore we can conclude
$$\lim_{n\to \infty} |t_n|^{\frac{1}{n}} =e^{\gamma_f}= 1.13198824\ldots$$
\newpage
\section*{References}
[1] É. Janvresse, B. Rittaud, How do random Fibonacci sequences grow?, Probability Theory and Related Fields, Springer-Verlag (2008), Vol. 142, No. 3-4, pp. 619-648.
\\ \relax
\\ \relax
[2] D. Viswanath, Random Fibonacci Sequences and the Number 1.13198824..., Mathematics of Computation, American Mathematical Society (2000), Vol. 69, No. 231, pp. 1131-1155.
\\ \relax
\\ \relax
[3] M. Embree, L. N. Trefethen, Growth and Decay of Random Fibonacci Sequences, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences (1999), Vol. 455, No. 1987, pp. 2471-2485.
\\ \relax
\\ \relax
[4] K. A. McLellan, The Growth of Random Fibonacci Sequences, ProQuest Dissertations Publishing (2007).
\\ \relax
\\ \relax
[5] K. M. Ball, Strange Curves, Counting Rabbits, and Other Mathematical Explorations, Chapter 8: Fibonacci's Rabbits Revisited, Princeton, NJ, Princeton University Press (2003).
\\ \relax
\\ \relax
[6] http://www.livescience.com/37470-fibonacci-sequence.html (image)
\end{document}