Traditionally, symbolic methods rely on exact arithmetic to guarantee that their results are correct. In many cases, however, exact arithmetic leads to intermediate expression swell and long computing times. Numerical methods, by contrast, have shown that many practical problems can be solved quickly with just a few bits of precision. On the other hand, every step of a floating point computation may introduce a new rounding error, so in most cases the result cannot serve as a proof.
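The trade-off can be made concrete with a small illustrative sketch (a hypothetical example, not taken from the literature discussed here): iterating a rational map with exact rational arithmetic, the denominators roughly square at every step, so the representation size doubles per iteration (expression swell), while the floating point iteration stays constant-size but accumulates rounding error.

```python
from fractions import Fraction

def logistic(x, r, steps):
    # Iterate the logistic map x -> r*x*(1-x).
    # Works for both Fraction and float arguments.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Exact rationals: the denominator squares at each step,
# so its digit count doubles per iteration (expression swell).
x_exact = logistic(Fraction(1, 3), Fraction(4), 15)

# Floats: constant size and fast, but each step may round,
# and the initial value 1/3 is not even exactly representable.
x_float = logistic(1 / 3, 4.0, 15)

# After 15 steps the exact denominator has thousands of digits.
print(len(str(x_exact.denominator)))
```

Starting from 1/3, the exact denominator after n steps is 3^(2^n), so 15 iterations already produce a fraction with more than ten thousand digits, while the float iteration finishes instantly but carries an unquantified error.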
In recent years, the two approaches have been combined in several instances to form methods that are always correct, and fast for most inputs. In some cases, exact computation is applied first, in order to reduce the problem to a well-conditioned instance that a numerical method can solve. In other cases, a validated numerical method is tried first: if it returns a result, that result is guaranteed correct; otherwise a failure indication is returned and an infallible exact method is invoked as a backup. The seamless integration of hardware-based floating point computation, software-supported multiprecision floating point computation, and exact computation makes these hybrid symbolic-numeric strategies transparent to the user.
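The "validated numerics first, exact fallback" pattern can be sketched as follows. This is a minimal, hypothetical Python illustration of the general strategy (the function name and the crude error bound are my own, not from the text): to decide the sign of a sum, a fast floating point filter certifies its answer with an a-priori error bound, and only the ill-conditioned inputs fall through to exact rational arithmetic.

```python
import sys
from fractions import Fraction

def sign_of_sum(terms):
    """Sign (+1, 0, -1) of sum(terms) for a list of floats.

    Fast path: float summation with a crude forward error bound;
    if |approx| exceeds the bound, the float sign is certified.
    Slow path: exact rational arithmetic (always correct).
    Illustrative sketch, not a production floating point filter.
    """
    approx = 0.0   # floating point estimate of the sum
    magnitude = 0.0  # sum of absolute values, for the error bound
    for t in terms:
        approx += t
        magnitude += abs(t)

    # Crude a-priori bound on the accumulated rounding error.
    err = len(terms) * sys.float_info.epsilon * magnitude
    if abs(approx) > err:
        # Well-conditioned instance: the cheap result is provably correct.
        return 1 if approx > 0 else -1

    # Filter failed: fall back to exact arithmetic.
    # Fraction(float) is exact, so this branch is infallible.
    exact = sum(Fraction(t) for t in terms)
    return (exact > 0) - (exact < 0)
```

For well-conditioned inputs such as `[1.0, 2.0, -0.5]` only the fast path runs; a cancellation-heavy input such as `[1e16, 1.0, -1e16]` yields a float sum of 0.0, fails the filter, and is resolved exactly. The same two-tier (or three-tier, with a multiprecision middle layer) structure underlies the hybrid strategies described above.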