
FPTalks 2024

The leading edge of floating-point research

The floating-point research community.
Online. July 11, 2024.

FPTalks 2024 was held online via Zoom on July 11, 2024. Talks were 10 minutes long, followed by audience questions over Slack. Talks were recorded and are available on YouTube!

Welcome (Recording)

Session 1

8-bit Transformer Inference and Fine-tuning for Edge Accelerators

Jeffrey Yu, Stanford University

Precision Learning for DNN Compression via Adaptive Quantization

Cédric Gernigon, Inria Rennes

Accumulator-Aware Quantization with Guaranteed Overflow Avoidance

Ian Colbert, AMD Software Architecture

FTTN: Feature-Targeted Testing of NVIDIA & AMD Matrix Accelerators

Xinyi Li, Pacific Northwest National Laboratory

Break

Session 2

Type-based approaches to rounding error analysis

Ariel E. Kellison, Cornell University

End-to-End Verification of a Fast and Accurate Floating-Point Approximation

Guillaume Melquiond, Université Paris-Saclay, Inria

Bit Blasting Probabilistic Programs

Guy Van den Broeck, University of California, Los Angeles

A Formal Specification of Tensor Cores via Satisfiability Modulo Theories

Benjamin Valpey, University of Rochester

Break

Session 3

Verification of Digital Numerics for High Consequence Systems

Sam Pollard, Sandia National Laboratories

Predicting Performance and Accuracy for Precision Tuning

Yutong Wang, University of California, Davis

An Overview of the NASA LaRC Tool Suite for Floating-Point Analysis

Mariano Moscato, NASA LaRC / AMA Inc.

Customizing Elementary Function Approximations for Hardware Accelerators

Benjamin Carleton, Cornell University

Conclusion

For more information, please see the FPBench project and check out past recordings from: