FPTalks 2020 was held online over Zoom. All talks were live-streamed and recorded on YouTube. Each talk was 10 minutes long, followed by audience questions.
FPTalks 2020 was supported in part by the Applications Driving Architectures (ADA) Research Center, a JUMP Center co-sponsored by SRC and DARPA.
Session 1 (Session Stream)
Welcome
Pavel Panchekha, University of Utah
AdaptivFloat: A data type for resilient deep learning inference
Thierry Tambe, Harvard University
Counterexamples and simulation for floating-point loop invariant synthesis
Eva Darulova, Max Planck Institute
Building a fully verified, optimizing compiler for floating-point arithmetic
Heiko Becker, Max Planck Institute
Break
Session 2 (Session Stream)
Arithmetic in Google's Android calculator
Hans-J. Boehm, Google
Debugging and detecting numerical errors in computation with posits
Sangeeta Chowdhary, Rutgers University
Efficient generation of error-inducing floating-point inputs
Hui Guo, University of California at Davis
Creating correctly rounded math libraries for real number approximations
Jay P. Lim, Rutgers University
Lunch
Session 3 (Session Stream)
FPBench 2.0: Tensors and beyond
Bill Zorn, University of Washington
I need to write a compiler just to solve differential equations?!
Daniel Shapero, University of Washington
OL1V3R: Solving floating-point constraints via stochastic local search
Shaobo He, University of Utah
Fixed-point decision procedure
Thanh Son Nguyen, University of Utah
Break
Session 4 (Session Stream)
Universal library for multi-precision algorithm R&D
Theodore Omtzigt, Stillwater Supercomputing
On automatically proving the correctness of math.h implementations
Wonyeol Lee, Stanford University
Detecting floating-point errors via atomic conditions
Daming Zou, Peking University
Discovering discrepancies in numerical libraries
Jackson Vanover, University of California at Davis