Second Workshop on
Compact Computer Algebra
July 10, 2009. Grand Bend, ON, Canada
List of Talks
CAMAL 40 Years On -- Is Small Still Beautiful?
Invited Talk by John Fitch, University of Bath (UK)
Over forty years ago an algebra system was written in Cambridge, UK, designed to assist in a number of calculations in celestial mechanics and later in relativity. We present the hardware environment and the main design decisions that led this system, later dubbed CAMAL, to be used in many applications for twenty years. Its performance is investigated, both in its own time and more recently. It is argued that a compact data representation as in CAMAL has real benefits even in today's larger-memory world.
28.5 Years of Maple: How Many of the Design Principles of a Compact System Paid Dividends
Contributed Talk by Gaston Gonnet, Institute for Computational Sciences, ETH Zurich
Most of the original literature about Maple described it as a "compact and efficient computer algebra system". It was partly our goal to be able to run on small desktop computers and even on a pocket computer (the term "pocket symbolic" was also used). This talk will concentrate on four aspects of the early design that went in this direction and were cornerstones of the design: the use of the language C, the S^2T measure of complexity, option remember and its implication, the unique representation of subexpressions, and the systematic elimination of quadratic algorithms.
Small Algorithms for Small Systems
Contributed Talk by James H. Davenport, University of Bath -- visiting Symbolic Computation Group
Knuth is said to have described Computer Science as "that part of mathematics in which log log n = 3". In this talk I will consider only some parts of Computer Algebra, and the even more special case when log n = 3, or even less, and where compactness of the algorithm itself, as well as of the data structures, is important.
g.c.d. This has been a bugbear of computer algebra for over forty years, and has given rise to many solutions, some of them truly heroic [CGG84, DP85]. Though difficult to prove, the subresultant algorithm [Col67] is quite short to program, and its intermediate expression swell does not manifest itself on small examples. It may well be worth considering the trial-division variant of [Hea79].
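To illustrate how short the subresultant algorithm can be, here is a sketch of the subresultant PRS g.c.d. over Z, with polynomials as dense coefficient lists (index = degree). It returns the primitive part of the gcd, ignoring contents and zero inputs; a minimal sketch, not production code:

```python
from math import gcd
from functools import reduce

def deg(p):
    return len(p) - 1

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def prem(a, b):
    """Pseudo-remainder: rem(lc(b)**(deg a - deg b + 1) * a, b), exact over Z."""
    da, db = deg(a), deg(b)
    lb = b[db]
    r = [c * lb ** (da - db + 1) for c in a]
    for k in range(da - db, -1, -1):
        if len(r) == db + k + 1:      # leading term still present at this degree
            q = r[db + k] // lb       # exact by the premultiplication
            for i in range(db + 1):
                r[i + k] -= q * b[i]
            trim(r)
    return r

def primitive(p):
    """Divide out the integer content; normalize the sign of the leading coefficient."""
    c = reduce(gcd, (abs(x) for x in p))
    p = [x // c for x in p]
    return [-x for x in p] if p[-1] < 0 else p

def subresultant_gcd(a, b):
    """Primitive gcd over Z by the subresultant PRS [Col67]."""
    a, b = trim(list(a)), trim(list(b))
    if deg(a) < deg(b):
        a, b = b, a
    g = h = 1
    while True:
        delta = deg(a) - deg(b)
        r = prem(a, b)
        if not r:
            return primitive(b)
        a, b = b, [c // (g * h ** delta) for c in r]   # exact integer divisions
        g = a[deg(a)]
        if delta:
            h = g ** delta // h ** (delta - 1)
```

For example, the gcd of (x^2-1)(x+2) and (x^2-1)(x-3) comes out as -1 + x^2 (up to sign) without the coefficient growth a naive Euclidean remainder sequence would produce.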
Factoring (of univariate polynomials). This has been a challenge for almost as long as the g.c.d. problem, and is still far from being solved, as significant improvements keep on being made [vH02]. Nevertheless, if log n = 3, we can devise a relatively simple algorithm on the following lines.
Factoring (of multivariate polynomials). It is not clear to this author that this is worth implementing.
Integration. Here the Risch-Norman [NM77] algorithm can be quite short to program and, while not a full decision procedure, is complete on a reasonable range of transcendental integrands [Dav82]. There is a recent extension [Kau08] which looks promising on many cases of algebraic integrands. Here the aim would be to integrate many common cases correctly, while not guaranteeing that "I can't" is equivalent to "no-one can".
Open Research Questions
Question 1. How many primes p_i should we factorize modulo in step 2 above before deciding that we have a compatible factorization and should proceed to Hensel lifting?
[Mus78] suggests that the answer is 5, though there are heuristic arguments that this should grow as log log d, where d is the degree of the polynomial to be factored. If d is small, can we get away with fewer?
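One way to see the trade-off behind Question 1: each modular factorization constrains the degrees of true factors to sub-multiset sums of the modular degree pattern, and intersecting these sets over several primes can prove irreducibility outright. A sketch (the degree patterns in the test are invented for illustration):

```python
def degree_sums(pattern):
    """Degrees realizable by products of sub-multisets of the modular factor degrees."""
    sums = {0}
    for d in pattern:
        sums |= {s + d for s in sums}
    return sums

def possible_factor_degrees(patterns):
    """Intersect the degree sets obtained modulo each of several primes."""
    result = degree_sums(patterns[0])
    for pat in patterns[1:]:
        result &= degree_sums(pat)
    return result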
References
[ABD85] J.A. Abbott, R.J. Bradford, and J.H. Davenport. A Remark on Factorisation. SIGSAM Bulletin, 19(2):31-33, 1985.
[CGG84] B.W. Char, K.O. Geddes, and G.H. Gonnet. GCDHEU: Heuristic Polynomial GCD Algorithm Based on Integer GCD Computation. In J.P. Fitch, editor, Proceedings EUROSAM 84, pages 285-296, 1984.
[Col67] G.E. Collins. Subresultants and Reduced Polynomial Remainder Sequences. J. ACM, 14:128-142, 1967.
[CZ81] D.G. Cantor and H. Zassenhaus. A New Algorithm for Factoring Polynomials over Finite Fields. Math. Comp., 36:587-592, 1981.
[Dav82] J.H. Davenport. On the Parallel Risch Algorithm (I). In Proceedings EUROCAM '82 [Springer Lecture Notes in Computer Science 144], pages 144-157, 1982.
[DP85] J.H. Davenport and J.A. Padget. HEUGCD: How Elementary Upperbounds Generate Cheaper Data. In Proceedings EUROCAL 85, pages 18-28, 1985.
[DST93] J.H. Davenport, Y. Siret, and E. Tournier. Computer Algebra (2nd ed.). Academic Press, 1993.
[Hea79] A.C. Hearn. Non-Modular Computation of Polynomial Gcd Using Trial Division. In Proceedings EUROSAM 79, pages 227-239, 1979.
Optimization Techniques for Matrix Multiplication
Contributed Talk by Md. Nazrul Islam, Department of Computer Science, University of Western Ontario
Matrix multiplication is a cornerstone of linear algebra algorithms; its asymptotic complexity has attracted a lot of attention over the last forty years. Small-dimension matrices with large entries are used to evaluate holonomic functions by means of the binary splitting algorithm. As an example, we may consider computing the exponential at 1; this computation requires multiplications of matrices of size 2. Our objective is to obtain matrix multiplication procedures that reduce the number of multiplications required for such small matrices.
We base our work on an optimized usage of the previous algorithms for small matrix multiplications, due to Strassen, Winograd, Pan, Laderman, etc.; we show how to exploit all of these standard algorithms for larger matrix multiplication in an improved way.
As a result, we were able to obtain lower numbers of multiplications for matrix sizes 1 to 30. We wrote a code generator that uses this table to generate multiplication code for long integers and polynomials. Our future goals include extending this tool to matrix multiplications having linear recurrence operators (or, more generally, elements of an Ore algebra) as entries.
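Strassen's scheme, the starting point for such tables, multiplies 2x2 matrices with 7 multiplications instead of 8 (at the cost of 18 additions), and the entries may themselves be long integers, polynomials, or blocks, which is where the saving matters. A sketch of the standard formulas:

```python
def strassen_2x2(A, B):
    """2x2 product with 7 multiplications (Strassen); entries may be any ring elements."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))
```

When the entries are large (so one entry multiplication costs far more than an addition), the 7-vs-8 saving is essentially free, which is the regime of the binary splitting application above.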
Compact Classification of Handwritten Mathematical Symbols
Contributed Talk by Oleg Golubitsky, Computer Science Department, University of Western Ontario
Online recognition of handwritten mathematical expressions can significantly improve mathematical user interfaces, especially on mobile devices, provided that it is accurate and does not cause noticeable delays. We will discuss a fast and robust method for online recognition of individual mathematical symbols. The method is based on the compact representation of parametric curves by the vector of coefficients of a truncated orthogonal polynomial series, which is computed on-line, as the curve is written. This vector is then classified using two distance measures: the Manhattan distance, which can be computed in a few machine instructions, is used for fast pre-classification among several hundred classes, and the distance to the convex hull of nearest neighbors is used for accurate tie-breaking.
Distance-based classification requires storing a database of samples from each symbol class. We reduce the storage requirements to about 600 KB by minimizing the number of coefficients per sample and the number of bits allocated per coefficient, as well as by choosing a subset of the most significant samples for each class.
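The pre-classification step can be sketched as follows: each class contributes stored coefficient vectors, and the L1 (Manhattan) distance, just subtractions and absolute values, ranks the candidate classes before the more expensive convex-hull tie-breaking. All vectors and labels below are invented for illustration:

```python
def manhattan(u, v):
    """L1 distance: cheap enough to scan several hundred classes per stroke."""
    return sum(abs(a - b) for a, b in zip(u, v))

def preclassify(x, prototypes, k=3):
    """Return the k class labels whose stored vector is nearest to x in L1 distance."""
    return sorted(prototypes, key=lambda c: manhattan(x, prototypes[c]))[:k]
```

The k survivors would then go to the accurate (and costlier) convex-hull-of-nearest-neighbors test described in the abstract.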
This talk presents joint work with Stephen M. Watt and will be followed by an MKM presentation of a confidence measure developed for integrating the curve classifier into a recognizer of entire mathematical expressions.
In-place Arithmetic for Univariate Polynomials over an Algebraic Number Field
Contributed Talk by Seyed Mohammad Mahdi Javadi, School of Computing Science, Simon Fraser University, Canada
We present a C library of in-place subroutines for univariate polynomial multiplication, division and GCD over Lp, where Lp is an algebraic number field L with multiple field extensions reduced modulo a machine prime p. We assume elements of Lp and L are represented using a recursive dense representation. The key feature of our algorithms is that we eliminate the storage management overhead, which is significant compared to the cost of arithmetic in Zp, by pre-allocating the exact amount of storage needed for both the output and working storage. We give an analysis of the working storage needed for each in-place algorithm and provide benchmarks demonstrating the efficiency of our library.
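The flavour of the approach at the base case, arithmetic in Zp[x], can be sketched as follows: the caller pre-allocates the output buffer once, and the multiplication itself allocates nothing. This is a toy classical multiplication for illustration, not the library's actual code:

```python
def inplace_mul(a, b, out, p):
    """Multiply dense Zp[x] polynomials a and b into the preallocated buffer out.

    Requires len(out) >= len(a) + len(b) - 1; returns the number of
    coefficients written.  No storage is allocated inside the loops.
    """
    n = len(a) + len(b) - 1
    for i in range(n):          # clear only the portion we will use
        out[i] = 0
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return n
```

In C the same discipline extends up the recursive dense representation: each level's routine is handed exact-size output and scratch buffers, so malloc/free never appears on the hot path.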
Lazy and Forgetful Polynomial Arithmetic and Applications
Contributed Talk by Paul Vrbik, School of Computing Science, Simon Fraser University, Canada
We present lazy and forgetful algorithms for multiplying and dividing multivariate polynomials. The lazy property allows us to compute the i-th term of a polynomial without doing the work required to compute all the terms. The forgetful property allows us to forget earlier terms that have been computed to save space. For example, given polynomials A,B,C,D,E we can compute the exact quotient Q = (A*B - C*D)/E without explicitly computing the numerator A*B-C*D which can be much larger than any of A,B,C,D,E and Q. As applications we apply our lazy and forgetful algorithms to reduce the maximum space needed by the Bareiss fraction-free algorithm for computing the determinant of a matrix of polynomials and the extended Subresultant algorithm for computing the inverse of an element in a polynomial quotient ring.
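For dense univariate polynomials the lazy idea can be sketched in a few lines: the i-th coefficient of a product is computed only when asked for, and an exact quotient is generated coefficient by coefficient, so the numerator A*B - C*D below is never stored in full. A toy model of the actual sparse-term algorithms, assuming E has a nonzero constant term and the division is exact:

```python
def lazy_mul(A, B):
    """Return a function giving the coefficient of x**i in A*B on demand."""
    def coeff(i):
        lo = max(0, i - len(B) + 1)
        hi = min(i, len(A) - 1)
        return sum(A[j] * B[i - j] for j in range(lo, hi + 1))
    return coeff

def lazy_quotient(num, E, deg_q):
    """Exact quotient of the lazily given numerator num by E, lowest term first."""
    q = []
    for i in range(deg_q + 1):
        s = num(i) - sum(q[j] * E[i - j]
                         for j in range(max(0, i - len(E) + 1), i))
        q.append(s // E[0])      # exact by assumption
    return q

# Q = (A*B - C*D) / E without ever forming the numerator explicitly
A, B, C, D, E = [3, 2], [1, 1], [1], [1], [2, 1]
ab, cd = lazy_mul(A, B), lazy_mul(C, D)
Q = lazy_quotient(lambda i: ab(i) - cd(i), E, 1)
```

Here each numerator coefficient is demanded exactly once; a forgetful version would additionally discard `q[j]` entries once no later step can reference them, which is what bounds the space in the Bareiss and subresultant applications.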
Criteria for Compactness in the Design of Maple
Contributed Talk by Keith O. Geddes, David R. Cheriton School of Computer Science, University of Waterloo
The original paper on the design of Maple was published in 1983 under the following title: "The design of Maple: A compact, portable and powerful computer algebra system." As this title implies, compactness was a design goal for Maple from the beginning. In this presentation, we take a retrospective view of the design goals as presented in the early papers. The fundamental design concept was to develop a compact kernel for the computer algebra system, implemented as compiled code, which handles the most basic arithmetic and algebraic operations and defines an interpreter for the user-level Maple language. The mathematical power of the computer algebra system is achieved via a large library of algorithms written in the Maple user language, to be loaded and interpreted on an as-needed basis. The most significant improvements in (space and time) efficiency are achieved by the development and implementation of better algorithms for the various mathematical tasks, and the ease of writing algorithms in the user-level Maple language facilitates the timely incorporation of new algorithms.
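The kernel-plus-interpreted-library split can be caricatured in a few lines: the "kernel" below knows nothing of the library until a procedure is first called, at which point its source is interpreted and cached. Names and mechanism are invented for illustration only; Maple's kernel is compiled C, not Python:

```python
class Kernel:
    """Toy model of a compact kernel with an as-needed interpreted library."""

    def __init__(self, library_source):
        self.source = library_source   # name -> source text; nothing loaded yet
        self.loaded = {}

    def call(self, name, *args):
        if name not in self.loaded:    # interpret the procedure on first use
            env = {}
            exec(self.source[name], env)
            self.loaded[name] = env[name]
        return self.loaded[name](*args)
```

The compactness criterion is visible in the split itself: only the dispatcher (and the basic arithmetic it would implement) must be resident, while the mathematical library can grow without enlarging the kernel.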
Page maintained by Elena Smirnova