David Bailey: Conquering numerical error


The proliferation of extremely large-scale, highly parallel computation (in some cases involving one million or more processors) has greatly exacerbated issues of numerical error and numerical reproducibility. In some cases, developers of applications find that they have lost considerable accuracy, while in others (e.g., climate modeling) they find it difficult to reproduce results when porting a code from one computer system to another, or even from, say, 1024 processors to 4096 processors on the same system.
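The root cause of such reproducibility problems is that floating-point addition is not associative: splitting a global sum across a different number of processors changes the order of the additions, and hence the rounding. A minimal illustration in C++ (the values here are contrived for demonstration):

```cpp
#include <cstdio>

// Floating-point addition is not associative: regrouping a sum, as
// happens when a parallel reduction is split across a different number
// of processors, can change the rounded result.
int main() {
    double a = 1.0e16, b = -1.0e16, c = 1.0;
    double left  = (a + b) + c;  // cancellation happens first: result is 1.0
    double right = a + (b + c);  // c is absorbed into b: result is 0.0
    std::printf("(a+b)+c = %.1f\na+(b+c) = %.1f\n", left, right);
    return 0;
}
```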

One of the best ways to overcome such difficulties is to employ higher precision, such as “double-double” (approximately 31-digit) or “quad-double” (approximately 62-digit) arithmetic, facilities for which are now available in relatively easy-to-use software modules. Often it is not necessary to perform all computations in higher precision; applying it to just a handful of particularly sensitive sections of code typically suffices, with only a minor increase in run time. Some other applications of general interest in the scientific community, including some interesting studies in mathematical physics and computational mathematics, require even more precision, in some cases hundreds or thousands of digits.
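Double-double arithmetic represents a value as the unevaluated sum of two IEEE doubles, a high word and a low word, using “error-free transformations” that capture the rounding error of each operation exactly. Ready-made packages such as Bailey’s QD library supply dd_real and qd_real types; the sketch below is only a from-scratch illustration of the underlying idea, with hypothetical names, not the QD API:

```cpp
#include <cstdio>

// Knuth's TwoSum: returns the rounded sum s of two doubles together
// with the exact rounding error e, so that a + b == s + e exactly.
// (Requires strict IEEE arithmetic; do not compile with -ffast-math.)
struct TwoDouble { double hi, lo; };

TwoDouble two_sum(double a, double b) {
    double s  = a + b;
    double bv = s - a;                       // portion of b captured in s
    double e  = (a - (s - bv)) + (b - bv);   // exact error of the addition
    return {s, e};
}

// Add a plain double to a double-double value; hi and lo together carry
// roughly 31 significant digits.
TwoDouble dd_add(TwoDouble x, double y) {
    TwoDouble t = two_sum(x.hi, y);
    t.lo += x.lo;                  // fold in the previous low-order part
    return two_sum(t.hi, t.lo);    // renormalize so |lo| is tiny vs. |hi|
}

int main() {
    // Summing ten million copies of 0.1: the plain double accumulator
    // drifts, while the double-double accumulator retains the lost bits.
    double plain = 0.0;
    TwoDouble dd = {0.0, 0.0};
    for (int i = 0; i < 10000000; ++i) {
        plain += 0.1;
        dd = dd_add(dd, 0.1);
    }
    std::printf("double:        %.9f\n", plain);
    std::printf("double-double: %.9f\n", dd.hi + dd.lo);
    return 0;
}
```

Only the numerically sensitive accumulation is performed in double-double here; everything else stays in ordinary double precision, which is why the run-time cost of this approach is usually modest.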

This presentation will give examples of applications where numerical error is an increasing concern, together with various software and algorithmic means of dealing with it. It will also mention some software programming environments, now in development, that will help users identify numerically sensitive portions of code and automatically apply certain transformations (including the use of higher precision) to ameliorate numerical difficulties.

David H. Bailey, University of California, Davis, USA.
