NCM_2015
Numerical Computing with Matlab
Senior Seminar in Scientific Computing
http://people.sc.fsu.edu/~jburkardt/classes/ncm_2015/ncm_2015.html
NCM_2015 is the home page for the class ISC4932,
"Numerical Computing with Matlab", Senior Seminar in Scientific Computing,
an undergraduate seminar class offered by the Department of Scientific Computing
at Florida State University, Fall Session 2015.
This is a 1-credit seminar class, intended to introduce the students
to ideas, algorithms, and programs common to Scientific Computing.
The topic this semester will be numerical linear algebra.
The course is based on the book "Numerical Computing with Matlab" by Cleve
Moler.
All the chapters of the book are available online for download
from the MathWorks website:
http://www.mathworks.com/moler/chapters.html
The class meets on ?days, from ? to ?, in room ?.
Weekly class presentations will be chosen from the following topic list:
-
Introduction to Matlab:
To familiarize ourselves with Matlab in a scientific computing setting,
we will consider several simple problems, involving the Golden
Ratio, Fibonacci Numbers, the Fractal Fern, Magic Squares,
Cryptography, the 3*n+1 Sequence, and Floating Point Arithmetic.
We will learn how to explore the mathematical relationships involved,
to carry out iterations, and to use graphics to help us to see
the patterns hidden in the numbers we have computed.
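As a small taste of this (the particular sketch below is not taken from the
text), a few lines of Matlab show the ratios of successive Fibonacci numbers
settling down to the golden ratio:
  % Sketch: ratios of successive Fibonacci numbers approach the
  % golden ratio phi = ( 1 + sqrt ( 5 ) ) / 2.
  phi = ( 1 + sqrt ( 5 ) ) / 2;
  f = [ 1, 1 ];
  for k = 3 : 20
    f(k) = f(k-1) + f(k-2);
  end
  ratio = f(2:end) ./ f(1:end-1);
  plot ( ratio, 'o-' )
  hold on
  plot ( [ 1, 19 ], [ phi, phi ], 'r--' )
  title ( 'Fibonacci ratios approach the golden ratio' )
  hold off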
-
Linear Equations:
Linear equations provide the simplest way of describing relationships
between various quantities, and solving such equations is a requirement
in almost every scientific computation. We will look at the technique
of Gauss elimination first; then try a new way of seeing it as a
factorization of the matrix. We'll look at how Matlab can use special
techniques if the matrix is triangular or symmetric. Then we will
ask where errors come from: is it only because the system is large?
Can a small set of equations be difficult to solve? Is there any
way to measure whether a system of equations will be hard to solve?
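As a small illustration (the matrix and right hand side below are made up
for the example), the lu command expresses Gauss elimination as a
factorization, and cond gives one way to measure difficulty:
  A = [ 4, -2, 1; -2, 4, -2; 1, -2, 4 ];
  b = [ 11; -16; 17 ];
  [ L, U, P ] = lu ( A );          % Gauss elimination as P*A = L*U
  x = U \ ( L \ ( P * b ) );       % forward, then back substitution
  norm ( x - A \ b )               % agrees with Matlab's backslash
  cond ( A )                       % condition number: sensitivity measure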
-
Interpolation:
Interpolation lets us assume that between a hot noon and a cold
evening there was probably a mild afternoon. Numerically, we are
usually given only a small number of measurements, but then asked to
estimate the situation at places where no measurements were made.
We will look at reliable ways of making such estimates, and
the choices we have for filling in the gaps, including polynomial
interpolation, piecewise linear, piecewise cubic Hermite, and
spline methods.
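A brief sketch (with made-up temperature readings) might compare these
choices on the same data:
  x = 1 : 6;
  y = [ 16, 18, 21, 17, 15, 12 ];            % made-up readings
  xq = linspace ( 1, 6, 101 );
  yp = polyval ( polyfit ( x, y, 5 ), xq );  % interpolating polynomial
  yl = interp1 ( x, y, xq, 'linear' );       % piecewise linear
  yh = pchip ( x, y, xq );                   % piecewise cubic Hermite
  ys = spline ( x, y, xq );                  % cubic spline
  plot ( x, y, 'ko', xq, yp, xq, yl, xq, yh, xq, ys )
  legend ( 'data', 'polynomial', 'linear', 'pchip', 'spline' )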
-
Zeros and Roots:
A formula takes your input and gives you an output as an answer.
A formula might tell you that if you load p pounds of gunpowder
into the cannon, the cannonball will land x feet away. Suppose
you want the cannonball to land exactly on a target that is
y feet away; what do you do? The method for determining the
input value that will result in a desired output value is known
as root finding. If enough information is known about the formula,
an answer can quickly be found using a few steps of an iteration,
such as the bisection method, the secant method, or Newton's method.
The ideas used for zero finding can be extended to optimization,
in which we seek the input that results in the maximum or
minimum value of a function.
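As a sketch of the idea (using the simple function x^2 - 2, whose root is
sqrt(2), instead of a cannonball formula), a few Newton steps already reach
the answer, and Matlab's fzero automates the search:
  f = @(x) x^2 - 2;
  fp = @(x) 2 * x;
  x = 1.0;                          % starting guess
  for k = 1 : 6
    x = x - f(x) / fp(x);           % one Newton step
  end
  x - sqrt ( 2 )                    % error after six steps
  fzero ( f, 1.0 ) - sqrt ( 2 )     % Matlab's general zero finder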
-
Least Squares:
Surveyors were among the first people to encounter mathematical
problems with "too much" data. As a simplified example, we may
have measurements that essentially tell us that x+2y=10, x-y=2,
2x+y=11. No pair of values (x,y) makes all three equations true.
The surveyors assumed that all the equations were only approximately
true. Given that no answer will be perfect, is there a "least bad"
answer? The method of least squares shows how to pick an approximate
answer using the criterion that the sum of the squares of the
errors in each equation is the least possible total. Along the
way, we will see how the QR method is used to factor the linear
system, and how rank deficiency must be dealt with. In many
cases, we are not dealing with a system of linear equations,
but with nonlinear functions. Generalizations of the least
squares method have been developed to handle them. Seeking an
approximate solution to a set of equations arises frequently,
and the least squares technique is the appropriate tool to use.
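The surveyor example above can be typed directly into Matlab; the qr
factorization and the backslash operator both produce the least squares
answer:
  A = [ 1, 2; 1, -1; 2, 1 ];        % x+2y=10, x-y=2, 2x+y=11
  b = [ 10; 2; 11 ];
  [ Q, R ] = qr ( A, 0 );           % economy-size QR factorization
  x = R \ ( Q' * b )                % least squares solution
  norm ( x - A \ b )                % backslash gives the same answer
  r = b - A * x;                    % residual: the errors in each equation
  norm ( r ) ^ 2                    % the smallest possible total of squares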
-
Quadrature:
Calculus shows us how to compute the integral of many functions,
but in computation, the integrals we encounter are usually not
treatable using antiderivatives, and must be estimated instead.
A simple idea is to sample the integrand at a few points, compute
the interpolating polynomial and integrate that. Another idea
is to estimate the integral a second time, using more points.
If the two estimates differ greatly, this indicates that we may
need to make a third estimate with even more points, continuing
to refine our estimate until it settles down.
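A small sketch of this strategy (the integrand, whose exact integral is pi,
is chosen here just for illustration) compares two trapezoid estimates and
Matlab's adaptive integral command:
  f = @(x) 4 ./ ( 1 + x.^2 );        % exact integral over [0,1] is pi
  x1 = linspace ( 0, 1, 11 );
  x2 = linspace ( 0, 1, 21 );
  q1 = trapz ( x1, f ( x1 ) );       % first estimate, 11 points
  q2 = trapz ( x2, f ( x2 ) );       % second estimate, 21 points
  [ q1 - pi, q2 - pi, q2 - q1 ]      % true errors and the observed change
  integral ( f, 0, 1 ) - pi          % Matlab's adaptive quadrature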
-
Ordinary Differential Equations:
Many physical systems can be described by specifying an initial
condition, and the rules for how the system changes from one
instant of time to the next. Matlab includes several functions
that can report a sequence of system states over time, by
starting at the initial state and taking many small steps.
While most physical systems are well-behaved, there are examples
such as the Lorenz equations, which show that the computed solution
can be very sensitive to small errors. Another problem arises
when the physical system includes possibilities for changes at
a wide range of time scales, as in descriptions of chemical
reactions.
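As a sketch (the classic parameter values sigma = 10, beta = 8/3, rho = 28
are an assumption; the text above does not fix them), ode45 can trace out
the Lorenz attractor:
  sigma = 10;  beta = 8 / 3;  rho = 28;    % assumed classic values
  lorenz = @(t,y) [ sigma * ( y(2) - y(1) ); ...
                    y(1) * ( rho - y(3) ) - y(2); ...
                    y(1) * y(2) - beta * y(3) ];
  [ t, y ] = ode45 ( lorenz, [ 0, 30 ], [ 1; 1; 1 ] );
  plot3 ( y(:,1), y(:,2), y(:,3) )
  title ( 'Lorenz attractor computed by ode45' )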
-
Fourier Analysis:
The sine and cosine functions are periodic, and are suitable for
describing electronic signals, weather, music, and other phenomena that
have an underlying cyclic nature. Fourier analysis is a way of
breaking down a signal into contributions to a sequence of
harmonic frequencies. We will look at an example involving
historical records of sunspots.
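A short sketch (using a made-up signal rather than the sunspot record)
shows how the fft exposes hidden periods:
  n = 600;
  t = ( 0 : n - 1 )';
  y = 2 * sin ( 2 * pi * t / 50 ) + sin ( 2 * pi * t / 12 );
  p = abs ( fft ( y ) ) .^ 2;        % power at each frequency
  freq = ( 0 : n - 1 ) / n;          % frequency in cycles per sample
  plot ( freq(2:n/2), p(2:n/2) )
  xlabel ( 'cycles per sample' )
  title ( 'Power spectrum: peaks near 1/50 and 1/12' )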
-
Random Numbers:
The generation of random numbers on the computer has long been
a serious subject of study. Random numbers might at first seem meaningless,
but what is important is that they can be used as a technique
for sampling. Using random numbers allows us to rapidly estimate
the behavior of a system by, essentially, averaging its results for
many random trials. Often, the normal distribution is preferred,
since in many physical systems there is a naturally preferred
result, along with a small tendency toward variation. In some computations,
rapid, repeatable, parallel computation of millions of random numbers
is vital, and we discuss how Matlab's Twister algorithm can be used
for this purpose.
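As a small sketch, the rng command seeds the Mersenne Twister generator so
that a Monte Carlo computation, here a crude estimate of pi, can be repeated
exactly:
  rng ( 123456789, 'twister' )       % seed the Mersenne Twister generator
  n = 1000000;
  x = rand ( n, 1 );                 % uniform samples in [0,1]
  y = rand ( n, 1 );
  inside = ( x.^2 + y.^2 <= 1 );     % points inside the quarter circle
  pi_estimate = 4 * sum ( inside ) / n
  z = randn ( 5, 1 )                 % samples from the normal distribution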
-
Eigenvalues and Singular Values:
The eigenvalue equation A*x=lambda*x seeks a direction x such
that the linear transformation A simply lengthens x, without
changing its direction. Eigenvalue analysis is a standard way
of thinking about many advanced problems in linear algebra.
The related singular value decomposition, A=U*S*V', is a more
modern way of analyzing matrices. It can be applied even if
the matrix is rectangular, and the factorization does not encounter
the many special cases that can arise in eigenanalysis. We will
try to understand the meaning of these special numbers, the
eigenvalues and singular values, and the special directions, the
eigenvectors and singular vectors.
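A small sketch (the matrices are made up) checks the defining relations
A*V = V*D and B = U*S*W' directly:
  A = [ 2, 1; 1, 3 ];
  [ V, D ] = eig ( A );              % columns of V are eigenvectors
  norm ( A * V - V * D )             % checks A*x = lambda*x, near zero
  B = [ 1, 2; 3, 4; 5, 6 ];          % rectangular: no eigenvalues,
  [ U, S, W ] = svd ( B );           % but B = U*S*W' still exists
  norm ( B - U * S * W' )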
-
Partial Differential Equations:
Partial differential equations model a physical system by using
partial derivatives. The classic example is the Poisson equation,
which can be used to describe the diffusion of a drop of ink
in water, or the spread of heat in a sheet of metal. The finite
difference method allows us to approximate the second derivatives
in the Poisson equation by difference quotients of data evaluated
on a grid. This, in turn, results in a large, sparse system of
linear equations. The properties of this linear system turn out
to be interesting to analyze. We can also model the wave equation,
in which case we will wind up looking at the eigenvalue problem.
The analysis of the "L-shaped region" is the origin of the
Matlab logo.
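As a sketch of the finite difference approach (the grid size and right hand
side below are arbitrary choices), the five-point Laplacian can be assembled
as a sparse matrix and solved with backslash:
  n = 50;
  h = 1 / ( n + 1 );
  I = speye ( n );
  T = spdiags ( ones ( n, 1 ) * [ -1, 2, -1 ], -1 : 1, n, n );
  A = ( kron ( I, T ) + kron ( T, I ) ) / h^2;   % sparse 2D Laplacian
  f = ones ( n * n, 1 );                         % constant right hand side
  u = A \ f;                                     % sparse direct solve
  surf ( reshape ( u, n, n ) )
  title ( 'Finite difference solution of the Poisson equation' )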
Last revised on 12 March 2015.