%% 12. Random functions and random ODEs
%
% \setcounter{page}{141}
%
%%
%
% One of the most fascinating themes in the
% mathematical sciences, whose importance is growing every year,
% is randomness. In this chapter we say a word
% about how randomness plays into the subject of ODE\kern .5pt s.
%
%%
%
% To begin the discussion, here are two examples of random
% functions produced by the Chebfun {\tt randnfun} command.
%
ODEformats, rng(1), lambda1 = 1; dom = [0 10];
f1 = randnfun(lambda1,dom);
subplot(2,1,1), plot(f1,'k',LW,1), ylim([-5 5])
title(['Fig.~12.1.~~Random functions with length scales ' ...
'$\lambda = 1$ and 0.1'],FS,11);
lambda2 = 0.1; f2 = randnfun(lambda2,dom);
subplot(2,1,2), plot(f2,'k',LW,0.7), ylim([-5 5])
%%
% \vskip 1.02 em
%%
%
% \noindent
% At the end of the chapter we will explain the precise mathematical
% definition, but for the moment, the main thing to note
% is that these are smooth functions defined on a prescribed
% interval and with a prescribed length scale $\lambda$.
% The first function, with $\lambda = 1$, has a typical distance
% on the order of $1$ between maxima, whereas the second,
% with $\lambda = 0.1$, wiggles ten times as fast.
% The vertical scales are the same, and in fact,
% at each fixed point $t$, each function produced by
% {\tt randnfun} takes values corresponding to samples
% from the standard normal distribution $N(0,1)$, with
% mean 0 and variance 1.
%
%%
%
% Like so many commands in Chebfun, {\tt randnfun}
% provides a continuous analogue of a familiar discrete
% object. In Matlab, {\tt randn(n,1)} generates
% an $n$-vector of random entries from $N(0,1)$.
% Similarly \verb|randnfun(lambda,[a,b])| produces
% a smooth random function of typical wavelength $\lambda$
% on the interval $[\kern .3pt a,b\kern .5pt ]$.
%
%%
%
% As our first random ODE problem, let us consider the simplest
% ODE IVP of all,
% $$ y'(t) = f(t), \quad y(0) = 0, \eqno (12.1) $$
% whose solution is just the indefinite integral of $f$,
% $$ y(t) = \int_0^t f(s)\kern .7pt ds. \eqno (12.2) $$
% If we take $f$ to be the two functions plotted above,
% we get these results. We call curves like these {\bf smooth random
% walks.}
%
subplot(2,1,1), y1 = cumsum(f1); plot(y1,LW,1,CO,ivp), ylim([-4 4])
title(['Fig.~12.2.~~Their indefinite integrals: ' ...
'smooth random walks'],FS,11);
subplot(2,1,2), y2 = cumsum(f2); plot(y2,LW,1,CO,ivp), ylim([-1 1])
%%
% \vskip 1.02 em
%%
%
% \noindent
% Note that the first curve has a larger amplitude
% than the second. The reason for this is a familiar matter
% of statistics associated with cancellation of
% random signs. These indefinite integrals
% are essentially the average value of the integrand (times
% 10, when we reach $t=10$), and according to the law of large
% numbers, this average converges to $0$ as the number of
% samples approaches $\infty$, which in our context means
% as $\lambda$ approaches zero. Moreover, the convergence
% will be in proportion to $\lambda^{1/2}$. So in fact,
% our second curve should be expected to
% be on the order of $\sqrt{10}$ times smaller than the first.
% To eliminate this dependence on $\lambda$ we can renormalize
% $f$ by dividing it
% by $\lambda^{1/2}$, and in Chebfun, this (approximately) is what
% is done if {\tt randnfun} is
% called with the {\tt 'norm'} flag. From now on we will always
% use {\tt 'norm'}.
%
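%%
%
% The $\lambda^{1/2}$ scaling can be checked in a few lines outside
% Chebfun. The sketch below is a discrete analogue (in NumPy rather
% than Matlab --- an assumption, not the Chebfun algorithm): the random
% function is modeled by independent $N(0,1)$ values on a grid of
% spacing $\lambda$, and the standard deviation of the walk at $t=10$
% is estimated for two values of $\lambda$. The ratio comes out near
% $\sqrt{100}=10$, matching the $\lambda^{1/2}$ law.
%
```python
import numpy as np

# Discrete stand-in for an unnormalized smooth random walk: i.i.d.
# N(0,1) samples at spacing lam, integrated by a scaled cumulative sum.
rng = np.random.default_rng(0)
T = 10.0

def endpoint_std(lam, trials=5000):
    n = int(T / lam)
    g = rng.standard_normal((trials, n))
    y_end = lam * g.sum(axis=1)      # y(T), one value per trial
    return y_end.std()               # theory: sqrt(lam*T)

s1 = endpoint_std(1.0)               # about sqrt(10)
s2 = endpoint_std(0.01)              # about sqrt(0.1), 10 times smaller
print(s1, s2)
```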
%%
% Here are three smooth random walks with $\lambda = 0.1$.
lambda = 0.1;
for k = 1:3
f = randnfun(lambda,dom,'norm'); y = cumsum(f);
subplot(1,3,k), plot(y,LW,.6,CO,ivp), ylim([-12 12])
if k==2, title(['Fig.~12.3.~~Smooth random walks ' ...
'with $\lambda = 0.1$'],FS,11), end
end
%%
% \vskip 1.02 em
%%
%
% \noindent
% Here are three smooth random walks with $\lambda = 0.01$.
%
lambda = 0.01;
for k = 1:3
f = randnfun(lambda,dom,'norm'); y = cumsum(f);
subplot(1,3,k), plot(y,LW,0.4,CO,ivp), ylim([-12 12])
if k==2, title(['Fig.~12.4.~~Smooth random walks ' ...
'with $\lambda = 0.01$'],FS,11), end
end
%%
% \vskip 1.02 em
%%
%
% The sample paths we have shown in Figs.~12.2--12.4 are smooth,
% but as $\lambda \to 0$, the smoothness
% goes away. In this limit we get the precisely defined mathematical
% notion of {\em Brownian motion}, where the sample
% paths are continuous but not smooth.
% A Brownian motion trajectory is also
% called a {\em Wiener path}, and probabilists say that
% a Wiener path is a sample from the {\em Wiener process}.
% We can show the convergence as $\lambda\to 0$ by superimposing
% three paths for successively smaller values of $\lambda$,
% all based on the same random number seed set by
% Matlab's {\tt rng} command.\footnote{In Matlab as in other programming
% languages, successive calls to {\tt randn} give new random numbers,
% but one can reinitialize the sequence for repeatability with the
% command {\tt rng(k)}, where $k$ is a fixed integer. Chebfun's
% {\tt randnfun} works the same way. This feature has been crucial for
% us in writing this chapter, since we need reproducible random curves
% if we are to comment on their particular features.}
%
clf
for lambda = [1 1/4 1/16]
rng(3), f = randnfun(lambda,dom,'norm'); y = cumsum(f);
plot(y,LW,.4 + .7*lambda), ylim([-2.5 2.5]), hold on
end
title(['Fig.~12.5.~~Convergence to a Brownian path ' ...
'as $\lambda\to 0$'],FS,11), hold off
%%
% \vskip 1.02 em
%%
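%%
%
% The reproducibility provided by {\tt rng} is not special to Matlab;
% every scientific computing environment offers seeded generators. As a
% sketch (in NumPy --- an assumption, not part of Chebfun), two
% generators built from the same seed deliver identical streams:
%
```python
import numpy as np

# Two generators seeded identically produce the same "random" numbers,
# which is what makes figures like these reproducible.
a = np.random.default_rng(3).standard_normal(5)
b = np.random.default_rng(3).standard_normal(5)
print(np.array_equal(a, b))          # prints True
```
% A fresh, unseeded generator would give a different stream on each run.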
%
% Here is a smooth random walk over a longer time scale, up to $t = 500$.
% Note that the maximal amplitude is bigger than
% before, and yet,
% the trajectory comes back repeatedly to zero --- whereupon, of course,
% it ``starts over.'' In an infinitely long trajectory,
% the path will cross zero infinitely often, and yet the
% amplitudes will grow.\footnote{Statements like this hold
% ``with probability 1'' or ``almost surely.'' In principle
% a Brownian path could be any function at all,
% and thus for example might remain bounded by~1
% forever, or even identically
% zero, but the probability of such events will be zero.
% With probability~1, a Brownian path is everywhere continuous
% and nowhere differentiable.}
% Probability theory is full of such paradoxes.
%
lambda = 0.1; rng(1)
f = randnfun(lambda,[0 500],'norm'); y = cumsum(f);
clf, plot(y,LW,0.3,CO,ivp)
title(['Fig.~12.6.~~Smooth random walk over a larger ' ...
'time interval'],FS,11)
%%
% \vskip 1.02 em
%%
%
% Here are $n=10$ smooth random walks up to $t=100$, together
% with their mean, shown as a thicker curve in black.
% As the sample size $n$ approaches $\infty$, the mean will
% approach the function $\varphi(t)$ that is the {\em expected}
% value of $y(t)$ at each point $t$. For this simple example,
% $\varphi(t) = 0$.
%
lambda = 0.1;
F = randnfun(lambda,[0 100],10,'norm'); Y = cumsum(F);
plot(Y,LW,0.3), ylim([-35 35])
title('Fig.~12.7.~~Ten smooth random walks and their mean',FS,11)
hold on, plot(mean(Y,2),'k',LW,2), hold off
%%
% \vskip 1.02 em
%%
%
% For a second random ODE problem, let us consider an
% indefinite integral as before, but now a system of
% equations in two variables
% $y_1^{}$ and $y_2^{}$,
% $$ y_1'(t) = f_1(t), ~ y_2' = f_2(t),
% \quad y_1^{}(0) = y_2^{}(0) = 0, \eqno (12.3) $$
% where $f_1^{}$ and $f_2^{}$ are independent random functions,
% normalized again by division by $O(\lambda^{1/2})$.
% The two variables are uncoupled, so in a sense there
% is nothing new here. On the other hand, the trajectories
% now take the interesting form of two-dimensional smooth
% random walks, which in the limit $\lambda\to 0$ would
% become 2D Brownian motion. Here are two sample paths with
% $\lambda = 0.1$ on $[\kern .3pt 0,10]$.
%
clf, rng(2)
for k = 1:2
f1 = randnfun(lambda,dom,'norm')/sqrt(2); y1 = cumsum(f1);
f2 = randnfun(lambda,dom,'norm')/sqrt(2); y2 = cumsum(f2);
subplot(1,2,k), plot(y1,y2,LW,0.5,CO,ivp)
ax = get(gca,'pos'); ax(1) = ax(1)-(-1)^k*.03;
set(gca,'pos',ax), axis(7*[-1 1 -1 1]), axis square
set(gca,XT,-6:2:6,YT,-6:2:6)
if k==1, title(['\kern -.4in Fig.~12.8.~~2D smooth ' ...
'random walks to $t=10$'],FS,11,HA,'left'), end
end
%%
% \vskip 1.02 em
%%
%
% \noindent
% Following our usual trick in 2D, we could equally well
% have generated these images using a single complex random
% function instead of two real ones:
%
for k = 1:2
f = randnfun(lambda,dom,'norm','complex'); y = cumsum(f);
subplot(1,2,k), plot(y,LW,0.5,CO,ivp)
ax = get(gca,'pos'); ax(1) = ax(1)-(-1)^k*.03;
set(gca,'pos',ax)
axis(7*[-1 1 -1 1]), axis square, set(gca,XT,-6:2:6,YT,-6:2:6)
if k==1, title(['\kern -.9in Fig.~12.9.~~2D smooth random walks ' ...
'via complex arithmetic'],FS,11,HA,'left'), end
end
%%
% \vskip 1.02 em
%%
%
% \noindent
% These trajectories look different, but only because we have
% rolled the dice again. Beneath the superficial distinction of
% complex scalars vs.\ real 2-vectors, these are
% independent sample paths from the same distribution.
%
%%
%
% Our random ODE\kern .5pt s so far have been trivial, just indefinite integrals.
% Let us explore some more substantial examples, which will give
% an idea of some of the fascination of the field of
% {\bf stochastic differential equations}
% {\bf (SDE\kern .5pt s)}. In all of the next six figures, $f$ is
% a smooth random function of some small fixed time scale and amplitude on
% the interval $[\kern .3pt 0,5]$.
% In each case several sample trajectories are plotted.
%
%%
%
% First we look at an equation featuring {\em additive noise,}
% $$ y' = y + f, \quad y(0) = 0. \eqno (12.4) $$
% Without $f$, the solution would be $y(t) = 0$, but the noise
% term breaks this symmetry. At first, so long as $|y|$ is small,
% trajectories look like random walks, with signs varying from
% $+$ to $-$, but as $|y|$ gets larger
% the exponential element overwhelms the random one, and a
% path shoots off to $-\infty$ or $\infty$ with probability~1.
% By symmetry, it is clear that both fates are equally likely.
%
clf, rng(0), lambda = 0.1; dom = [0 5];
L = chebop(dom); L.op = @(y) diff(y) - y; L.lbc = 0;
for k = 1:6
f = randnfun(lambda,dom,'norm');
y = L\f; plot(y,LW,0.7), hold on
end
title('Fig.~12.10.~~Six solutions to (12.4): unstable',FS,11)
ylim([-10 10]), hold off
%%
% \vskip 1.02 em
%%
%
% \noindent
% On a larger vertical scale the same curves look simply like exponentials.
%
title('Fig.~12.11.~~The same paths on a larger scale',FS,11)
ylim([-100 100]), hold off
%%
% \vskip 1.02 em
%%
% Next, we reverse the sign in (12.4) and consider
% $$ y' = -y + f, \quad y(0) = 0. \eqno (12.5) $$
% Now the process is
% stable, showing random oscillations about 0 that
% remain bounded as $t$ increases.
%
L.op = @(y) diff(y) + y; L.lbc = 0;
for k = 1:6
f = randnfun(lambda,dom,'norm');
y = L\f; plot(y,LW,0.5), hold on
end
title('Fig.~12.12.~~Six solutions to (12.5): stable',FS,11)
ylim([-3 3]), hold off
%%
% \vskip 1.02 em
%%
%
% Now let us change (12.4) into an equation with
% {\em multiplicative noise},
% $$ y' = fy , \quad y(0) = 1, \eqno (12.6) $$
% where $f$ is again random.
% We find that the amplitudes of the solutions of this new equation vary widely.
%
dom = [0 5]; rng(1), L = chebop(dom); L.lbc = 1;
for k = 1:6
f = randnfun(lambda,dom,'norm');
L.op = @(t,y) diff(y) - f*y;
y = L\0; plot(y,LW,0.7), hold on
end
title(['Fig.~12.13.~~Solutions to (12.6): ' ...
'smooth geometric Brownian motion'],FS,11)
ylim([0 120]), set(gca,XT,0:5), hold off
%%
% \vskip 1.02 em
%%
%
% \noindent
% The greatly differing amplitudes may seem surprising at first,
% but in fact, (12.6) is nothing more than the exponential of
% (12.1). We can verify this by rewriting (12.6) as $y'/y = f$, that is,
% $$ (\log y)' = f , \quad \log y(0) = 0. \eqno (12.7) $$
% So for any given $f$, the solution $y$ of (12.6) is the
% exponential of the solution $y$ of (12.1).
%
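%%
%
% The identification of (12.6) with the exponential of (12.1) is easy
% to confirm numerically. In the sketch below (NumPy, with the smooth
% deterministic stand-in $f(t)=\cos t$ --- an assumption; any smooth
% $f$ behaves the same way) the exponential of the trapezoidal
% indefinite integral of $f$ reproduces the exact solution $e^{\sin t}$
% of $y'=fy$, $y(0)=1$.
%
```python
import numpy as np

# Solve y' = f*y, y(0) = 1 via (12.7): exponentiate the indefinite
# integral of f, computed here with the trapezoidal rule.
t = np.linspace(0, 5, 2001)
f = np.cos(t)                        # stand-in for a random function
h = t[1] - t[0]
I = np.concatenate(([0.0], np.cumsum(h * (f[1:] + f[:-1]) / 2)))
y = np.exp(I)                        # candidate solution of y' = f*y
err = np.max(np.abs(y - np.exp(np.sin(t))))
print(err)                           # small: trapezoidal error only
```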
%%
%
% Equations (12.5) and (12.6) are first-order linear equations, of type
% {\sf FLaShI} and {\sf FLaSHI}, respectively. Of course, equations involving
% $a \kern .5pt y'' + b\kern .5pt y' + c\kern .5pt y$ as in (7.10)
% in which the coefficients $a,b,c$ all vary with $t$ can
% also be considered, as can nonlinear equations.
% Let us consider a nonlinear example with a
% bistable flavor. Without the random term $f$, the equation
% $$ y' = y - y^3 +f \eqno (12.8) $$
% would have stable fixed points $y=\pm 1$.
% Taking 20 trajectories from the initial value $y=0$, and putting
% the amplitude scale of $f$ at $0.2$, we
% find that about half end up oscillating about each of these values.
% By symmetry, the positive and negative
% behaviors must be equally likely. (These
% fates are not permanent. Since Gaussians take
% arbitrarily large values, though rarely,
% further sign flips will happen with probability~1 for sufficiently
% large values of $t$.)
%
N = chebop(dom); rng(0)
N.lbc = 0; N.op = @(t,y) diff(y) - y + y^3;
for k = 1:20
f = 0.2*randnfun(lambda,dom,'norm');
y = N\f; plot(y,LW,0.4), hold on
end
title(['Fig.~12.14.~~Random switching in the nonlinear ' ...
'equation (12.8)'],FS,11), hold off
%%
% \vskip 1.02 em
%%
% On the other hand, suppose we bias the switch slightly
% by taking the initial value $y(0) = 0.20$. Both positive and
% negative fates are again possible, but among twenty test
% trajectories, just two now go negative.
N.lbc = 0.2;
for k = 1:20
f = 0.2*randnfun(lambda,dom,'norm');
y = N\f; plot(y,LW,0.4), hold on
end
title(['Fig.~12.15.~~Random switching with a ' ...
'positive initial condition'],FS,11), hold off
%%
% \vskip 1.02 em
%%
%
% We promised at the beginning
% to explain the definition of the smooth
% random functions delivered by {\tt randnfun}.
% The essential idea here is the use of finite Fourier series
% with normally distributed random coefficients all of
% equal variance. We start from the notion of a
% periodic function on the interval $[\kern .3pt 0,L]$, defined
% by a Fourier series
% $$ f(t) = a_0^{} + \sqrt 2 \kern 2pt
% \sum_{k=1}^m a_k^{} \cos\left({2\pi k\kern .5pt t\over L}\right) +
% b_k^{} \sin\kern -1pt \left({2\pi k\kern .5pt t\over L}\right),
% \eqno (12.9) $$
% where each $a_k^{}$ and $b_k^{}$ is an independent sample from
% the $N(0,1/(2m+1))$
% distribution, i.e., with mean 0 and variance $1/(2m+1)$.
% The length scale $\lambda$ is fixed by setting
% $m$ to be the integer closest to $L/\lambda$.
% In the ``normalized'' mode as specified in Chebfun by the
% \verb|'norm'| flag, we have the same formula but
% with $a_k^{}$ and $b_k^{}$ coming from a distribution whose variance
% does not diminish as $m\to\infty$ for fixed $L$. Such a series almost
% surely does not
% converge as $m\to\infty$, but its integrals almost surely do,
% such as the indefinite integral
% $$ \int^t \kern -3pt f(s)\kern .5pt ds = a_0^{}\kern .7pt t
% + {L\over \sqrt 2\kern .5pt \pi }\kern 1pt
% \sum_{k=1}^m k^{-1}\kern -2pt \left[ a_k^{}
% \sin\kern -1pt \left({2\pi k\kern .5pt t\over L}\right) -
% b_k^{} \cos\kern -1pt \left({2\pi k\kern .5pt t\over L}
% \right)\right]. \eqno (12.10) $$
% Random infinite series of the form (12.10)
% go back to Paley, Wiener, and Zygmund in 1933 and 1934,
% and both (12.9) and (12.10) could be
% called {\em finite Fourier--Wiener series.}
% To generate a nonperiodic random function,
% {\tt randnfun} first constructs a periodic one on a
% larger interval, and then
% restricts it to the interval prescribed.
%
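%%
%
% Formula (12.9) is short enough to implement directly. The sketch
% below (in NumPy --- not the actual Chebfun code, and for the periodic,
% non-normalized mode only) draws coefficients from $N(0,1/(2m+1))$ and
% confirms that pointwise the values behave like samples from $N(0,1)$,
% since the coefficient variances sum to $1$.
%
```python
import numpy as np

rng = np.random.default_rng(0)

def random_periodic(lam, L, t):
    # finite Fourier-Wiener series (12.9), N(0, 1/(2m+1)) coefficients
    m = round(L / lam)
    sd = 1.0 / np.sqrt(2 * m + 1)
    a = sd * rng.standard_normal(m + 1)      # a_0, ..., a_m
    b = sd * rng.standard_normal(m)          # b_1, ..., b_m
    k = np.arange(1, m + 1)
    arg = 2 * np.pi * np.outer(k, t) / L
    return a[0] + np.sqrt(2) * (a[1:] @ np.cos(arg) + b @ np.sin(arg))

# values at a fixed point t over many independent draws: roughly N(0,1)
vals = np.array([random_periodic(1.0, 10.0, np.array([3.7]))[0]
                 for _ in range(4000)])
print(vals.mean(), vals.var())
```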
%%
%
% Without fully describing any of the mathematics, let us at least
% mention some of the terminology that appears when
% our smooth random ODE\kern .5pt s are related to SDE\kern .5pt s via the limit
% $\lambda \to 0$.
% A random function $f$ is a
% sample from a certain {\bf Gaussian process} dependent on
% the parameter $\lambda$. Suppose we write an
% ODE involving $f$ in the form
% $$ y'(t) = \mu(t,y(t)) + \sigma(t,y(t)) f(t) \eqno (12.11) $$
% for some functions $\mu$ and $\sigma$. As $\lambda\to 0$, this
% ODE approaches an {\bf SDE} that would normally be written as
% $$ dX_t^{} = \mu(t,X_t^{})\kern .5pt dt + \sigma(t,X_t^{}) \circ dW_t. \eqno (12.12) $$
% The two terms on the right are sometimes labeled {\em drift} and
% {\em diffusion} (or {\em volatility}), respectively.
% If $\mu$ is of the form of a constant times
% $X_t^{}$ and $\sigma$ is a constant, as in (12.4) and (12.5), the SDE is a
% {\bf Langevin equation,} and its solution is the
% {\bf Ornstein--Uhlenbeck process.}
% If $\mu$ and $\sigma$ are both of the form of a constant times
% $X_t^{}$, as
% in (12.6), we have the SDE of {\bf geometric Brownian motion.}
% The small circle in (12.12) indicates that this is an SDE
% of {\bf Stratonovich} type. The alternative of an {\bf It\^o} SDE
% has a different definition and the notation
% $$ dX_t^{} = \tilde\mu(t,X_t^{})\kern .5pt dt +
% \sigma(t,X_t^{}) \kern .5pt dW_t. \eqno (12.13) $$
% We have changed $\mu$ to $\tilde \mu$ because although
% (12.12) and (12.13) have different meanings, they define the
% same stochastic process provided $\tilde\mu$ and $\mu$ are
% related by
% $$ \tilde\mu(t,X_t^{}) = \mu(t,X_t^{}) + {1\over 2} \kern 1pt
% \sigma(t,X_t^{}) \kern 1pt{\partial \sigma\over \partial x}
% (t,X_t^{}). \eqno (12.14) $$
% Details of the usual formulations of It\^o and Stratonovich calculus can
% be found in many books on stochastic analysis. Results about
% the convergence of random ODE\kern .5pt s to SDE\kern .5pt s stem from two papers
% by E. Wong and M. Zakai in 1966; see also Sussmann, ``On the gap
% between deterministic and stochastic ordinary differential equations,''
% {\em The Annals of Probability,} 1978.
%
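%%
%
% The drift correction (12.14) can be seen in action for geometric
% Brownian motion, where everything is explicit. In the sketch below
% (NumPy, with plain Euler--Maruyama time stepping --- an assumption,
% not a recommended production scheme) an It\^o simulation with the
% corrected drift $\tilde\mu = (\mu + \sigma^2/2)X_t^{}$ tracks the exact
% Stratonovich solution $X_t^{} = e^{\mu t + \sigma W_t}$ along the same
% Brownian path.
%
```python
import numpy as np

# Stratonovich GBM  dX = mu*X dt + sigma*X o dW  has exact solution
# X_t = exp(mu*t + sigma*W_t).  By (12.14) the equivalent Ito SDE has
# drift (mu + sigma^2/2)*X, which we integrate by Euler-Maruyama.
rng = np.random.default_rng(1)
mu, sigma, T, n = 0.5, 0.3, 1.0, 200000
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)

x = 1.0
for dw in dW:                        # Euler-Maruyama, corrected drift
    x += (mu + 0.5 * sigma**2) * x * dt + sigma * x * dw

x_exact = np.exp(mu * T + sigma * dW.sum())   # exact, same path
print(x, x_exact)
```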
%%
%
% \begin{center}
% \hrulefill\\[1pt]
% {\sc Application: metastability, radioactivity, and tunneling}\\[-3pt]
% \hrulefill
% \end{center}
%
%%
%
% Many systems in physics, chemistry, biology,
% and social sciences have what are known as {\em metastable
% states,} which means, states that may appear stable for
% a long time but then suddenly undergo a transition.
% Examples include financial bubbles, supercooled liquids,
% and radioactive nuclei.
% Often the effect can be explained by noting that there is a stable
% fixed point of a noise-free system, but when noise is present,
% it eventually kicks the system out of the stable state.
%
%%
% We can illustrate the effect with the IVP
% $$ y' = y^3 - y + \varepsilon f(t), \quad y(0) = 0, \eqno (12.15) $$
% where $f$ is a smooth random function and $\varepsilon$ is a
% noise amplitude parameter.
% (Note that the signs are opposite to those in (12.8).)
% Here are three solutions for $t\in [\kern .3pt 0,100\kern .3pt ]$ with
% $\varepsilon = 0.25$.
% In the absence of the noise term, $y=0$ is a stable fixed point
% and $y=\pm 1$ are unstable fixed points.
% When noise is added, however, the stable state will
% eventually be left behind.
lambda = 1; rng(11)
N = chebop(0,100); N.op = @(y) diff(y) - y^3 + y; N.lbc = 0;
N.maxnorm = 10; ep = 0.25;
f1000 = randnfun(lambda,[0 1000],'norm',3); f = f1000{0,100};
for k = 1:3
y = N\(ep*f(:,k));
plot(y,LW,0.7), grid on, axis([0 100 -2 2]), hold on, drawnow
end
title(['Fig.~12.16.~~Metastability for (12.15) with ' ...
'$\varepsilon = ' num2str(ep) '$'],FS,11), hold off
%%
% \vskip 1.02em
%%
%
% Note that each trajectory stays near the stable state for a while,
% and then at some moment escapes.
% We cannot predict the precise moment
% of escape, though it would appear that for this example, it happens
% on a time scale in the range 10--100.\ \ To put it another
% way, the {\em half-life} of the system is evidently in this range,
% where the half-life is defined (as in the Application of
% Chapter~2) as the expected time $t_{1/2}^{}$ by which
% the probability of escape has risen to $1/2$.
% A related notion is that of a {\em mean exit time.}\footnote{For such
% definitions to be mathematically precise, they must be based on
% a precise definition of when a particle has escaped. The definition
% implicit in the {\tt N.maxnorm} setting of our Chebfun code is
% that a particle escapes when $|y|$ reaches the value $10$.}
%
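%%
%
% A crude Monte Carlo estimate of such an exit fraction takes only a
% few lines. The sketch below (NumPy; forward Euler in time, with the
% smooth random forcing replaced by noise held constant on unit time
% intervals --- both assumptions, standing in for {\tt randnfun} with
% $\lambda = 1$) counts how many of 50 paths of (12.15) reach
% $|y| = 10$ by $t = 100$.
%
```python
import numpy as np

# Exit fraction for a discrete analogue of (12.15); escape means
# |y| >= 10, the criterion implicit in the N.maxnorm setting above.
rng = np.random.default_rng(11)
eps, dt, T, paths = 0.25, 0.01, 100.0, 50
steps = int(T / dt)
escaped = 0
for _ in range(paths):
    y = 0.0
    g = rng.standard_normal(int(T))     # one noise value per unit time
    for i in range(steps):
        y += dt * (y**3 - y + eps * g[int(i * dt)])
        if abs(y) >= 10.0:              # particle has escaped
            escaped += 1
            break
frac = escaped / paths
print(frac)
```
% A sweep over $\varepsilon$ turns this into a rough half-life estimate.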
%%
%
% Intuitively speaking, a system will escape from a metastable
% state when the random fluctuations, by chance, happen to
% deviate by an exceptionally large amount from their usual
% state. The mathematical theory of
% {\em large deviations} is used to analyze such effects.
% One phenomenon one finds in this subject is that a
% small change in a parameter may have a large effect on the
% lifetime. Here, for example, we reduce $\varepsilon$ from
% $0.25$ to $0.22$ and find that none of the three trajectories escapes.
%
ep = 0.22;
for k = 1:3
y = N\(ep*f(:,k));
plot(y,LW,0.7), grid on, axis([0 100 -2 2]), hold on, drawnow
end
hold off
title(['Fig.~12.17.~~Reduction to $\varepsilon = ' ...
num2str(ep) '$'],FS,11)
%%
% \vskip 1.02em
%%
%
% \noindent
% Eventually, trajectories will still escape, as we can
% see if we show results over all of $t\in [\kern .3pt 0,1000\kern .3pt ]$.
%
N.domain = [0 1000];
for k = 1:3
y = N\(ep*f1000(:,k));
plot(y,LW,0.4), grid on, ylim([-2 2]), hold on, drawnow
end
title(['Fig.~12.18.~~A longer time interval with ' ...
'$\varepsilon = ' num2str(ep) '$'],FS,11), hold off
%%
% \vskip 1.02em
%%
%
% Thanks to the power of exponentials, half-lives of
% radioactive isotopes have been estimated ranging
% over more than 50 orders of magnitude, from less
% than $10^{-24}$ seconds to more than $10^{22}$ years.
% Three famous examples are uranium-238, with a half-life
% of 4.5 billion years, uranium-235 at 700 million years,
% and carbon-14 at 5700 years. In quantum physics the
% process of decay from a metastable state is called
% {\em tunneling.}
%
%%
%
% \smallskip
% {\sc History.}
% The approach to stochastic differential equations taken in this
% chapter, via smooth random functions, is nonstandard.
% After foundational works of Bachelier (1900), Einstein (1905 and 1906),
% Smoluchowski (1906), Langevin (1906), and Perrin (1909),
% it became usual, at least among mathematicians,
% after the work of Wiener (1923)
% to regard randomness as intrinsically nonsmooth,
% involving independent, instantaneous noise increments injected at
% each instant of time. An advantage of this point of view is
% that it is mathematically beautiful and just right as an
% idealization, even if the physical world does not
% contain elements on all scales down to infinitesimal.
% A disadvantage is that it is mathematically advanced, so
% that any discussion of randomness is faced with technical challenges
% of measure theory and functional analysis
% (or an apology for their omission) from page~1.
% Indeed, one cannot even write SDE\kern .5pt s
% in the usual form $y'(t) = f(t,y)$, since $y'$ does not exist --- it
% would represent white noise, which to be truly white must have
% infinite amplitude. Therefore new
% notations as in eqs.~(12.12) and (12.13) are used instead.
% Along with new notation go
% new theories of SDE\kern .5pt s above and beyond the usual theory of
% deterministic ODE\kern .5pt s (It\^o, Stratonovich); and these in turn must be solved
% by numerical methods above and beyond the usual ones (Euler--Maruyama,
% Milstein$,\dots$).
%
%%
%
% \smallskip
% {\sc Our favorite reference.}
% Jean-Pierre Kahane (1926--2017) was an expert in Taylor and Fourier
% series with random coefficients.
% As our favorite reference, we would like to highlight
% his review paper ``A century of interplay between
% Taylor series, Fourier series, and Brownian motion,'' {\em Bulletin
% of the London Mathematical Society} 29 (1997), pp.~257--279.
% The opening pages tell the fascinating story of
% how an infinite Taylor series with random coefficients from $N(0,1)$,
% for example, defines an analytic function in the open complex unit disk
% $|z|<1$ and hence a smooth function of $\theta$ for $z=re^{i\theta}$ for
% any $r<1$ (see Exercise 12.2).
% As $r\to 1$, such functions approach white noise.
% \smallskip
%
%%
%
% \begin{displaymath}
% \framebox[4.7in][c]{\parbox{4.5in}{\vspace{2pt}\sl
% {\sc Summary of Chapter 12.}
% Smooth random functions with specified length scale $\lambda$ can
% be defined via finite Fourier series with random coefficients.
% Integrals of such functions give smooth random walks,
% and random ODE\kern .5pt s can incorporate such functions either as
% forcing terms or as coefficients. As $\lambda\to 0$,
% smooth random ODE\kern .7pt s approach stochastic differential equations
% (SDE\kern .5pt s) of the Stratonovich variety.
% \vspace{2pt}}}
% \end{displaymath}
%
%%
% \smallskip\small\parskip=1pt\parindent=0pt
% {\em \underline{Exercise $12.1$}. Tracking a random signal.}
% Let $f$ be the function on $[\kern .3pt 0,50\kern .3pt]$ defined
% by {\tt rng(0)}, \verb|randnfun(1,[0,50])|, and consider the IVP
% $y' = -a(y(t) - f(t)),$ $y(0) = 0$, where $a>0$ is a constant.
% Plot $f$ together with the solutions $y$ for $a = 0.1$, $1$, and
% $10$ and discuss the results. Intuitively speaking, what is
% happening here?
% \par
% {\em \underline{Exercise $12.2$}. Random and lacunary Taylor series.}
% {\em (a)}
% Define $f(z) = \sum_{k=0}^n c_k^{} z^k$, where
% $c_0^{}, \dots,c_n^{}$ are independent random samples
% from $N(0,1)$, with $n$ chosen large enough to be indistinguishable
% from $\infty$ to plotting accuracy.
% For a particular choice of random coefficients,
% plot $\hbox{Re} (f(z))$ as a function of $\theta$ for
% $z = re^{i\theta}$ with $r = 0.5, 0.9, 0.99$.
% {\em (\kern .7pt b)} Another way to generate an analytic
% function in the unit disk with a natural boundary on the unit
% circle is by means of a {\em lacunary series} (i.e., one with
% long gaps), an idea going back to Weierstrass. Make the same
% plots as in {\em (a)} but now with $c_j^{}=1$ when $j$ is a power
% of 2 and $c_j^{} = 0$ otherwise.
% \par
% {\em \underline{Exercise $12.3$}. Unbounded variation of a Brownian path.}
% White noise has unbounded 1-norm with probability 1; so
% its integral, Brownian motion, has unbounded variation.
% Make a log-log plot of the 1-norms of normalized smooth random functions on
% $[-1,1]$ as a function of wavelength parameter $\lambda$ for
% $\lambda = 1, 1/2, \dots, 1/256$. What rate of increase do you
% see as a function of $\lambda\kern .7pt ?$
% \par
% {\em \underline{Exercise $12.4$}. Cumulative maximum of a Brownian path.}
% Plot four smooth random walks with $\lambda = 0.1$ on
% $[\kern .3pt 0,50\kern .3pt ]$ together with their cumulative
% maxima, which you can calculate with {\tt cummax(f)}. Describe qualitatively
% what you see. (It is known that with probability 1,
% the maximum grows in a certain sense at a rate proportional to
% $(t\log\log t)^{1/2}$ as $t\to\infty$.)
% \par
% {\em \underline{Exercise $12.5$}. Roots of a Brownian path.}
% Calculate smooth random walk functions on
% $[\kern .3pt 0,50\kern .3pt]$ for $\lambda= 16,8,4,\dots,1/16$,
% initializing the random number seed with {\tt rng(1)} in each case.
% Plot each function and calculate its roots. Describe qualitatively
% how the sets of roots behave as $\lambda\to 0$. Find a way to
% show this graphically.
%