MAXIMS ABOUT NUMERICAL MATHEMATICS, SCIENCE, COMPUTERS, AND LIFE ON EARTH
The following is a somewhat modified list of the maxims handed out in
my Spring 1997 Cornell course,
Tools for Computational Science. -Nick Trefethen
At the bottom of each sheet I wrote the following disclaimer: "In
order to stimulate thought, these maxims are formulated as concisely
as possible, with qualifications and caveats omitted. You may find
some of them exaggerated or oversimplified."
Scientific progress occurs by revolutions and paradigm shifts
on all scales. If Kuhn had written The Structure of Scientific
Revolutions after the popularization of fractals, he would never
have dared to suggest there were just two.
There are three great branches of science: theory, experiment,
and computation.
If no parameters in the world were very large or very
small, science would reduce to an exhaustive list of everything.
Science is the extraction of underlying principles from given systems, and engineering
is the design of systems on the basis of underlying principles.
A computational study is unlikely to lead to real scientific
progress unless the software environment is convenient enough to
encourage one to vary parameters, modify the problem, play around.
If the answer is highly sensitive to perturbations,
you have probably asked the wrong question.
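A minimal sketch of such a wrongly asked question, in Python (NumPy
and SciPy assumed): solving Hx = b with the notoriously
ill-conditioned Hilbert matrix, where perturbing b in the tenth digit
demolishes x.

    import numpy as np
    from scipy.linalg import hilbert

    n = 12
    H = hilbert(n)                              # cond(H) is about 1e16
    b = H @ np.ones(n)                          # so the exact solution is all ones
    x = np.linalg.solve(H, b)

    b2 = b * (1 + 1e-10 * np.random.randn(n))   # perturb b in the 10th digit
    x2 = np.linalg.solve(H, b2)

    print("cond(H)             :", np.linalg.cond(H))
    print("relative change in b: about 1e-10")
    print("relative change in x:", np.linalg.norm(x - x2) / np.linalg.norm(x))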
One of the most important of disciplines
is the one that never assumed its logical name:
mathematical engineering.
The big gulf in the mathematical sciences is between the
continuous problems (and people) and the discrete ones.
Most scientists and engineers are in the continuous group,
and most computer scientists are in the discrete one.
The fundamental law of computer science: As machines become more powerful, the
efficiency of algorithms grows more important, not less.
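A hedged illustration in Python with NumPy: as n grows, the
O(n log n) FFT pulls ever further ahead of the O(n^2) discrete
Fourier transform evaluated directly, and faster machines only widen
the gap by inviting larger n.

    import time
    import numpy as np

    def naive_dft(x):
        """Direct O(n^2) evaluation of the discrete Fourier transform."""
        n = len(x)
        k = np.arange(n)
        W = np.exp(-2j * np.pi * np.outer(k, k) / n)   # matrix of roots of unity
        return W @ x

    for n in (256, 1024, 2048):
        x = np.random.randn(n)
        t0 = time.perf_counter(); naive_dft(x)
        t1 = time.perf_counter(); np.fft.fft(x)
        t2 = time.perf_counter()
        print(f"n = {n:4d}   O(n^2): {t1 - t0:.4f} s   O(n log n): {t2 - t1:.6f} s")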
The two most important unsolved problems in mathematics are the Riemann
hypothesis and "P = NP?". Of the two, it is the latter whose solution
will have the greater impact.
No physical constants are known to more than around 11
digits, and no truly scientific problem requires computation with
much more precision than this.
(OK, throw in another 5 or 6 digits to counter the slow accumulation
of rounding errors in very long calculations -- using numerically stable algorithms,
of course, without which you're sunk in any precision.)
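One classic sketch of that point, in plain Python: the alternating
Taylor series for exp(-20), summed in double precision, is destroyed
by cancellation, while the mathematically identical route 1/exp(20)
is fine in the very same arithmetic.

    import math

    x = 20.0
    term, unstable = 1.0, 1.0
    for k in range(1, 120):
        term *= -x / k          # next Taylor term (-x)^k / k!
        unstable += term

    stable = 1.0 / math.exp(x)  # algebraically identical, numerically stable
    print("unstable series :", unstable)
    print("stable 1/exp(x) :", stable)
    print("true value      :", math.exp(-x))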
Digital arithmetic is no more the essence of scientific computing than collisions of
molecules are the essence of fluid mechanics.
All that it is reasonable to ask for in a scientific calculation is stability, not
accuracy.
Most problems of continuous mathematics
cannot be solved by finite algorithms.
Chess is a finite game, hence trivial,
but this fact does not seem to dismay those who play it.
If rounding errors vanished, 95% of numerical
analysis would remain.
Just because there's an exact formula doesn't mean
it's necessarily a good idea to use it.
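The standard cautionary example, sketched in Python: the exact
quadratic formula, used verbatim, loses the small root of
x^2 + 10^8 x + 1 to cancellation; a stable rearrangement of the same
formula recovers it.

    import math

    a, b, c = 1.0, 1e8, 1.0         # roots near -1e8 and -1e-8

    # The textbook formula, applied directly:
    naive = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)

    # Stable variant: avoid subtracting nearly equal quantities.
    q = -0.5 * (b + math.copysign(math.sqrt(b*b - 4*a*c), b))
    x1 = q / a                      # large-magnitude root
    x2 = c / q                      # small root, via Vieta's formula

    print("naive small root :", naive)   # heavily corrupted
    print("stable small root:", x2)      # close to -1e-8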
For large-scale problems of continuous mathematics,
the best algorithms are usually infinite even if the problem
is finite. In other words, analysis is more useful than algebra,
even if the problem is algebraic.
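A sketch of this maxim in Python with NumPy: eigenvalues are
algebraic objects, roots of det(A - λI) = 0, yet for n ≥ 5 no finite
formula can exist, and the algorithms actually used are infinite
iterations cut off at a tolerance. Here, plain power iteration for
the dominant eigenvalue.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    A = A + A.T                         # symmetric, so the eigenvalues are real

    v = rng.standard_normal(50)
    for _ in range(2000):               # an infinite algorithm, truncated
        v = A @ v
        v /= np.linalg.norm(v)
    lam = v @ A @ v                     # Rayleigh quotient

    print("power iteration:", abs(lam))
    print("largest |eig|  :", np.abs(np.linalg.eigvalsh(A)).max())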
Symbolic computing is mainly useful when you want a symbolic answer.
As technology advances, the ingenious ideas that make progress possible
vanish into the inner workings of our machines, where only experts may be
aware of their existence.
Numerical algorithms, being exceptionally uninteresting and
incomprehensible to the public, vanish exceptionally fast.
Is there an O(n^(2+ε)) algorithm for solving an
n × n system Ax = b?
This is the biggest unsolved problem in numerical analysis, but
nobody is working on it.
The purpose of computing is insight, not pictures.
Twenty years ago, we did not interact with computers graphically, but
now, everything is graphical. In the next twenty years an equally
great change will occur as we take to communicating with machines
by speech.
10^9 is a kind of magic number, above which new effects emerge. Think
of neurons in the brain, nucleotides in the genetic code, people on
earth, or the number of calculations carried out by a computer
each time you press Enter.
Eventually mankind will solve the problem of consciousness
by deciding that we are
not conscious after all, nor ever were.
In the long run, our large-scale computations
must inevitably be carried out in parallel.
Nobody really knows how to program parallel computers.
Nobody really knows how the brain works.
In the next century, related revolutionary developments will occur in
both fields.
Advanced technological civilizations can last hundreds or maybe
thousands of years, but not millions.
If this is not so, how can you explain the extraordinary coincidence
that you live right at the dawn of this one? And how can you explain
the fact that despite millions of years in which to make the journey, civilizations
from other parts of the universe have not taken over the Earth?
Thanks to digital logic and careful error correction, computers
have traditionally behaved deterministically: if you run the program
twice, you get the same answer both times. However, as computing becomes ever
more intelligent and more distributed in the upcoming century,
determinism in any practical sense will fade away. No fully intelligent
system can be expected to give you the same answer twice in a row.
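A down-to-earth precursor of this, sketched in Python with NumPy (an
illustration of ordering effects, not of intelligence):
floating-point addition is not associative, so a reduction performed
in a different order, as parallel schedulers routinely arrange, need
not give the same answer twice.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(1_000_000)

    s1 = float(np.sum(x))
    s2 = float(np.sum(rng.permutation(x)))   # same numbers, different order

    print(repr(s1))
    print(repr(s2))
    print("difference:", s1 - s2)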
If the state space is huge,
the only reasonable way to explore it is at random.
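A minimal Monte Carlo sketch in Python with NumPy: the volume of the
unit ball in 10 dimensions. A grid with a mere 10 points per axis
would already cost 10^10 evaluations; 10^6 random samples give a
serviceable estimate.

    import numpy as np

    rng = np.random.default_rng(2)
    d, n = 10, 1_000_000
    pts = rng.uniform(-1.0, 1.0, size=(n, d))        # uniform in [-1,1]^10
    inside = np.sum(np.sum(pts**2, axis=1) <= 1.0)   # count points in the ball

    vol_cube = 2.0 ** d
    print("Monte Carlo volume of the unit 10-ball:", vol_cube * inside / n)
    print("exact value, pi^5/120                 :", np.pi**5 / 120)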
Computational mathematics is mainly based on two ideas:
Taylor series, and linear algebra.
In principle, the Taylor series of a function of n variables
involves an n-vector,
an n × n matrix, an n × n × n
tensor, and so on. Actual use of orders higher than two, however, is
so rare that the manipulation of matrices is a hundred times better supported
in our brains and in our software tools
than that of tensors.
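A small sketch in Python with NumPy of the two pillars meeting: the
order-2 Taylor model of a function of n variables is exactly a vector
and a matrix, f(x+h) ≈ f(x) + g·h + (1/2)h·Hh, and linear algebra
does the rest. The finite-difference derivatives here are an
illustrative shortcut, not a recommendation.

    import numpy as np

    def f(x):                            # an arbitrary smooth test function
        return np.sin(x[0]) * np.exp(x[1]) + x[0] * x[1] ** 2

    def grad_hess(f, x, eps=1e-5):
        """Finite-difference gradient (n-vector) and Hessian (n x n matrix)."""
        n = len(x)
        g = np.zeros(n)
        H = np.zeros((n, n))
        for i in range(n):
            e = np.zeros(n); e[i] = eps
            g[i] = (f(x + e) - f(x - e)) / (2 * eps)
        for i in range(n):
            for j in range(n):
                ei = np.zeros(n); ei[i] = eps
                ej = np.zeros(n); ej[j] = eps
                H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                           - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps * eps)
        return g, H

    x = np.array([0.3, -0.2])
    h = np.array([1e-2, -2e-2])
    g, H = grad_hess(f, x)
    model = f(x) + g @ h + 0.5 * h @ H @ h   # quadratic Taylor model
    print("true f(x+h)  :", f(x + h))
    print("order-2 model:", model)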
When the arithmetic is easy and the challenge lies in efficient ordering
of a sequence of operations, computational science
turns to graph theory. Two big examples are automatic differentiation
and direct methods for sparse linear systems of equations.
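A hedged glimpse of the second example, in Python with SciPy: the
same sparse matrix factored with and without a fill-reducing ordering
of its graph. Only the ordering, pure graph theory, changes; the
arithmetic is the same, but the fill-in is not.

    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 40                               # 2D Laplacian on an n x n grid
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()

    lu_nat = splu(A, permc_spec="NATURAL")   # no reordering
    lu_amd = splu(A, permc_spec="COLAMD")    # graph-based fill-reducing ordering

    print("nonzeros in A    :", A.nnz)
    print("fill-in, natural :", lu_nat.L.nnz + lu_nat.U.nnz)
    print("fill-in, COLAMD  :", lu_amd.L.nnz + lu_amd.U.nnz)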
The two biggest world events in the second half of the
20th century were the end of the Cold War in 1989 and the explosion of
the World Wide Web beginning in 1994.
Mankind discovered 50 years ago what natural selection discovered
3.8 billion years ago: if you want to manipulate and copy information
without errors, your logic has to be digital.
Life on Earth consists of 10^7 species, each consisting of
10^10 individuals, each consisting of 10^13 cells,
each consisting of 10^14 molecules.
Computer codes are better written than genetic ones, since there's a
programmer in the loop, but as they get bigger, this distinction is
fading.
Living things are 3D objects, yet they
are constructed by folding up 1D pieces.
This astonishing method of construction is what makes
repair, reproduction, and natural selection practicable.
If an infinite intelligence designed an organism from scratch,
by contrast, presumably it would use 3D building blocks.
Animals are composed of millions of autonomous cells, each
performing specialized functions
and communicating with the others
by prescribed signals. It is almost unimaginable
that large creatures could
have been engineered without such a modular design.
The same object-oriented principles apply in
the engineering of large software systems.
Once we know the human genome, will that knowledge prove useful?
You might as well ask an expert user of a software
system: might it be helpful to have access to the source code?