Numerical Methods for Symbolic Problems, Werner Krandick

Symbolic computation has enabled computers to answer many mathematical questions with complete rigor. Indeed, mechanical computations are widely accepted as mathematical proofs when they are based on a provably correct algorithm that is carefully implemented, and a number of important results have not been proved in any other way.

Traditionally, symbolic methods rely on exact arithmetic to guarantee that the results are correct. In many cases, however, exact arithmetic leads to intermediate expression swell and long computing times. Numerical methods, by contrast, have shown that many practical problems can be solved quickly by computing with just a few bits. On the other hand, every step in a floating point computation might introduce a new error, and the result cannot be taken as proof in most cases.
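A minimal illustration of the last point (in Python; not part of the tutorial materials): even a single floating point operation rounds its result, so simple decimal identities fail.

```python
# Each binary floating point operation rounds to the nearest representable
# value; 0.1, 0.2, and 0.3 are not exactly representable in binary, so the
# computed sum differs from the mathematical one.
a = 0.1 + 0.2
print(a == 0.3)        # False: both operands and the sum were rounded
print(abs(a - 0.3))    # a tiny but nonzero difference
```

The error here is harmless, but in a long computation such errors can accumulate, which is why an unvalidated floating point result cannot serve as proof.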

In recent years, the two approaches have been combined in several instances to form methods that are always correct---and fast for most inputs. In some cases, exact computation is applied first, in order to reduce a problem to a well-conditioned instance that can be solved by a numerical method. In other cases, a validated numerical method is tried first. If it returns a result, the result is correct; otherwise, a failure indication is returned and an infallible method is invoked as a back-up. The seamless integration of hardware-based floating point computations, software-supported multiprecision floating point computations, and exact computations makes these hybrid symbolic-numeric strategies transparent to the user.
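The try-validated-first, fall-back-to-exact pattern can be sketched as follows (a minimal Python sketch with hypothetical names; the running error bound is deliberately crude and is not the bound used in any particular system):

```python
from fractions import Fraction
import math

def sign_validated(p, x):
    """Try to certify the sign of p(x) using hardware floats.

    Evaluates p (coefficients, highest degree first) at x by Horner's
    rule while accumulating a crude running error bound.  Returns +1 or
    -1 if the bound certifies the sign, or None as a failure indication.
    Illustrative only; real systems use sharper bounds.
    """
    value, err = 0.0, 0.0
    for c in p:
        value = value * x + c
        err = err * abs(x) + abs(value)   # running magnitude estimate
    # generous bound on the accumulated rounding error
    bound = err * math.ulp(1.0) * (2 * len(p))
    if abs(value) > bound:
        return 1 if value > 0 else -1
    return None                           # cannot certify: x is near a root

def sign_exact(p, x):
    """Infallible back-up: exact rational arithmetic."""
    value, xq = Fraction(0), Fraction(x)
    for c in p:
        value = value * xq + Fraction(c)
    return (value > 0) - (value < 0)

def sign(p, x):
    """Hybrid method: fast validated attempt, exact fall-back."""
    s = sign_validated(p, x)
    return s if s is not None else sign_exact(p, x)
```

The result is always correct, and for most inputs only the fast hardware computation runs; the exact method is invoked only when the validated attempt declines to answer.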

The tutorial will cover theoretical and practical aspects of the following topics.

  1. Hardware floats. IEEE-754 format, distribution of floating point numbers, directed rounding, exception handling.
  2. Bigfloat arithmetic. Representation, addition, multiplication, division.
  3. Symbolic-numeric interface. Conversion of arbitrary precision integers, binary rationals, standard intervals, and real algebraic numbers to floating point representation. Minimal floating point intervals.
  4. Rounded interval arithmetic. Addition, multiplication, polynomial evaluation.
  5. Example applications. Validated polynomial sign evaluation. Refinement of isolating intervals. Validated polynomial real root isolation. Application to cylindrical algebraic decomposition and problems in quantifier elimination. Application to polynomial systems solving.
  6. Technical issues. Precision management. Handling of exponent limitation.
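To give a flavor of topics 1 and 4, here is a small sketch of rounded interval arithmetic in Python (illustrative only): pure Python does not expose the hardware's directed rounding modes, so outward rounding is approximated by widening each computed endpoint with math.nextafter.

```python
import math

def widen(lo, hi):
    """Round an interval outward by one ulp in each direction.

    A stand-in for hardware directed rounding: each computed endpoint is
    within one rounding error of the true endpoint, so stepping both
    endpoints outward keeps the enclosure valid.
    """
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def iadd(a, b):
    """Interval addition with outward rounding."""
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Interval multiplication: take extremes of the endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return widen(min(products), max(products))

def ieval(p, x):
    """Evaluate a polynomial (coefficients, highest degree first) at the
    interval x by Horner's rule in rounded interval arithmetic."""
    acc = (0.0, 0.0)
    for c in p:
        acc = iadd(imul(acc, x), (c, c))
    return acc
```

For example, evaluating x^2 - 2 on a small interval around sqrt(2) yields an enclosure that straddles zero, correctly refusing to certify a sign; on an interval away from the root, the enclosure determines the sign rigorously.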

Target Audience
The presentation will be accessible to anyone with a B.Sc. in computer science or mathematics. The material might be particularly interesting to
  1. researchers in computer algebra interested in analyzing numerical methods used to solve symbolic problems,
  2. developers of computer algebra systems who want to integrate floating point and exact computations in a way that is transparent to the user,
  3. numerical analysts interested in applying numerical and validated numerical methods to problems that require rigorous answers.

The Presenter
Werner Krandick holds graduate degrees in both mathematics and computer science. He received his Ph.D. in Computer Science from The Ohio State University in 1992. After postdoctoral work at the Research Institute for Symbolic Computation in Linz, Austria, and at the University of Paderborn, Germany, he was appointed professor at Drexel University, Philadelphia. His research focuses on infallible computation, computer algebra, computer arithmetic, and parallel computation. Together with Thomas Decker he designed, analyzed, and implemented a parallel symbolic-numeric method for high-performance isolation of polynomial real roots. In his spare time he enjoys swimming, bike riding, and running.

Christopher W Brown
Last modified: Tue Apr 3 16:24:59 EDT 2001