04/07/2016

Speaker(s) : Antoine Plet (ENS de Lyon)

To analyze a priori the accuracy of an algorithm in floating-point arithmetic, one usually derives a uniform error bound on the output, valid for most inputs and parametrized by the precision p. To show further that this bound is sharp, a common way is to build an input example for which the error committed by the algorithm comes close to that bound, or even attains it. Such inputs may be given as floating-point numbers in one of the IEEE standard formats (say, for p=53) or, more generally, as expressions parametrized by p, which can be viewed as symbolic floating-point numbers. With such inputs, a sharpness result can thus be established for virtually all reasonable formats instead of just one of them. This, however, requires the ability to run the algorithm on those inputs and, in particular, to compute the correctly-rounded sum, product, or ratio of two symbolic floating-point numbers. In this talk, we show how these basic arithmetic operations can be performed automatically. We introduce a way to model symbolic floating-point data, and present algorithms for round-to-nearest addition, multiplication, fused multiply-add and, in some cases, division. An implementation as a Maple library is also described, and experiments using examples from the literature are presented to illustrate its usefulness in practice.

This is joint work with Claude-Pierre Jeannerod, Nicolas Louvet and Jean-Michel Muller.

Marc.Mezzarobba (at) lip6.fr