FPTalks 2020

The leading edge of floating-point research

The floating-point research community.
Online. June 24, 2020.

FPTalks 2020 was held online over Zoom. All talks were live-streamed and recorded on YouTube. Each talk was 10 minutes long and was followed by audience questions.

Session 1 (Session Stream)


Pavel Panchekha, University of Utah

AdaptivFloat: A data type for resilient deep learning inference

Thierry Tambe, Harvard University

Counterexamples and simulation for floating-point loop invariant synthesis

Eva Darulova, Max Planck Institute

Building a fully verified, optimizing compiler for floating-point arithmetic

Heiko Becker, Max Planck Institute


Session 2 (Session Stream)

Arithmetic in Google's Android calculator

Hans-J. Boehm, Google

Debugging and detecting numerical errors in computation with posits

Sangeeta Chowdhary, Rutgers University

Efficient generation of error-inducing floating-point inputs

Hui Guo, University of California at Davis

Creating correctly rounded math libraries for real number approximations

Jay P. Lim, Rutgers University


Session 3 (Session Stream)

FPBench 2.0: Tensors and beyond

Bill Zorn, University of Washington

I need to write a compiler just to solve differential equations?!

Daniel Shapero, University of Washington

OL1V3R: Solving floating-point constraints via stochastic local search

Shaobo He, University of Utah

Fixed-point decision procedure

Thanh Son Nguyen, University of Utah


Session 4 (Session Stream)

Universal library for multi-precision algorithm R&D

Theodore Omtzigt, Stillwater Supercomputing

On automatically proving the correctness of math.h implementations

Wonyeol Lee, Stanford University

Detecting floating-point errors via atomic conditions

Daming Zou, Peking University

Discovering discrepancies in numerical libraries

Jackson Vanover, University of California at Davis