##
Exploring FPGA designs for MX and beyond
Ebby Samson, Imperial College London

##
Geometric predicates for unconditionally robust elastodynamics simulation
Daniele Panozzo, Courant Institute of Mathematical Sciences, New York University

##
Designing Type Systems for Rounding Error Analysis
Ariel Kellison, Cornell University

##
12 speakers at the cutting edge of numerics research

##
On the precision loss in approximate homomorphic encryption
Rachel Player, Royal Holloway University of London

##
ReLU hull approximation
Zhongkui Ma, The University of Queensland

##
Rigorous error analysis for logarithmic number systems
Thanh Son Nguyen, University of Utah

##
Low-bit quantization for efficient and accurate LLM serving
Chien-Yu Lin, University of Washington

##
Towards optimized multiplierless arithmetic circuits
Rémi Garcia, University of Rennes

##
Application-specific arithmetic
Florent de Dinechin, INSA-Lyon/Inria; and Martin Kumm, Fulda University of Applied Sciences

This talk will present some of the opportunities offered by the freedom to tailor arithmetic operators to each application, and some of the methodologies and tools that allow a designer to build and compose application-specific operators. It may include some inadvertent advertising for our upcoming 800-page book dedicated to these topics.
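As a hedged illustration (not drawn from the talk itself), the classic trick behind multiplierless, application-specific operators is replacing a multiplication by a fixed constant with a handful of shifts and adds, exactly as a hardware datapath would; the function name below is our own:

```python
# Sketch of a "multiplierless" constant multiplication: 20 = 16 + 4,
# so x * 20 can be computed with two shifts and one add -- cheap in
# hardware, where a general multiplier is comparatively expensive.

def times_20(x: int) -> int:
    """Multiply by the constant 20 using only shifts and an add."""
    return (x << 4) + (x << 2)   # 16*x + 4*x

assert times_20(7) == 7 * 20     # 140
```

Tools like those the speakers describe automate the search for such shift-and-add decompositions, minimizing adder count across whole constant-multiplier blocks.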

##
Overview of numerics on the Java Platform
Joseph D. Darcy, Oracle

Joe is a long-time member of the JDK engineering team, first at Sun and later at Oracle. As “Java Floating-Point Czar” he has looked after Java numerics, contributing to the design of strictfp, adding numerous floating-point math library methods, and adding hexadecimal floating-point literals to the Java language and library. Joe was a participant in, and interim editor of, the 2008 revision of the IEEE 754 floating-point standard. Outside of numerics, Joe has done other foundational work on the Java platform, including core libraries development, Java language changes such as Project Coin, infrastructure improvements, and reviews of platform interface updates, together with a smattering of project management and release management along the way.

##
Custom elementary functions for hardware accelerators
Benjamin Carleton and Adrian Sampson, Cornell University

##
A PVS formalization of Taylor error expressions
Jonas Katona, Yale University

##
Automated reasoning over the reals
Soonho Kong, Amazon Web Services

##
12 speakers at the cutting edge of numerics research

##
Floating-point education roundtable
Mike Lam, James Madison University

##
Floating-point accuracy anecdotes from real-world products
Shoaib Kamil, Adobe Research

##
Formal and semi-formal verification of C numerics
Samuel D. Pollard and Ariel Kellison,
Sandia National Laboratories

##
Automatically generating numerical PDE solvers with Finch
Eric Heisler,
University of Utah

Finch is a domain-specific language for numerically solving differential equations. It employs a variety of numerical techniques, generates code for a number of different target frameworks and languages, and allows almost arbitrary equation input. In addition, it accepts unrestricted modification of the generated code to provide maximal flexibility and hand-optimization. One downside to this (perhaps excessive) flexibility is that numerical instability and the occasional typo can easily turn a solution into a pile of NaNs.
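The "pile of NaNs" the abstract mentions is easy to reproduce outside Finch. As a hedged sketch in plain Python (not Finch code), an explicit finite-difference step for the 1-D heat equation is stable only when r = Δt/Δx² ≤ 1/2; pushing r past that bound turns the solution into infinities and NaNs within a few hundred steps:

```python
# Numerical instability demo: forward-Euler for u_t = u_xx with fixed
# (zero) boundaries. Stable only for r <= 0.5; here we use r = 5.
import math

def heat_step(u, r):
    """One explicit time step; interior points get r times the discrete Laplacian."""
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0] * 32
u[16] = 1.0                      # initial spike
for _ in range(500):
    u = heat_step(u, r=5.0)      # r far above the 0.5 stability bound

# The high-frequency error mode grows ~|1 - 4r| = 19x per step, so the
# solution overflows to inf and then (via inf - inf) decays into NaNs.
print(any(math.isnan(v) or math.isinf(v) for v in u))
```

The same blow-up happens regardless of language or framework; the point of a DSL like Finch is that the generated code remains inspectable and hand-fixable when it does.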

Eric will introduce Finch, and demonstrate some of the features relevant to the numerics. We will look at some examples where things turn out just the way we hoped, and some where they don't.

##
Low-precision FP8 formats: background and industry status
Marius Cornea, Intel

##
Correctness 2022 annual workshop
Workshop on HPC software correctness, co-located with SC

##
Function-Based volumetric design and fabrication
Chris Uchytil,
Mechanical Engineering,
University of Washington

##
Finding inputs that trigger GPU exceptions
Ignacio Laguna,
Center for Applied Scientific Computing,
LLNL

##
Birds-of-a-feather
The community discussed the challenges of
connecting various analysis tools

##
Database workbench:
mixed-initiative design space search
Edward Misback,
University of Washington

##
11 speakers at the cutting edge of numerics research

##
Automatic datapath optimization using e-graphs
Samuel Coward,
Intel & Imperial College London

##
Choosing function implementations for speed and accuracy
Ian Briggs,
Computer Science,
University of Utah

##
CORE-MATH correctly rounded mathematical functions
Paul Zimmermann, INRIA

##
Formal verification of numerical Hamiltonian systems
Ariel Kellison, Cornell University

##
Birds-of-a-feather
The community discussed
training and teaching numerics in academia and industry

##
Automated discovery of invertibility conditions
Andrew Reynolds, University of Iowa

##
Workshop on software correctness for HPC applications, co-located with SC

##
Birds-of-a-feather
The community discussed the challenge of porting
numerical applications

##
Lazy exact real arithmetic using floating-point operations
Ryan McCleeary,
Numerica

##
State of FPBench
The community discussed FPBench benchmarks, education, and outreach

##
19 speakers at the cutting edge of numerics research

##
Scaling error analysis to billions of FLOPs
Sam Pollard, Sandia National Laboratories

##
RLIBM-32: fast and correct 32-bit math libraries
Jay Lim,
Rutgers University

##
SHAMAN: evaluating numerical error for real C++ code
Nestor Demeure, Université Paris-Saclay

##
POP: fast and efficient bit-level precision tuning
Dorra Ben Khalifa,
Perpignan University

##
Keeping science on keel when software moves
Allison Baker,
National Center for Atmospheric Research

Allison led a great discussion of her recent CACM article, “Keeping Science on Keel When Software Moves.”

High performance computing (HPC) is central to solving large problems in science and engineering through the deployment of massive amounts of computational power. The development of important pieces of HPC software spans years or even decades, involving dozens of computer and domain scientists. During this period, the core functionality of the software is made more efficient, new features are added, and the software is ported across multiple platforms. Porting software generally involves changing compilers, optimization levels, arithmetic libraries, and many other aspects that determine which machine instructions actually get executed. Unfortunately, such changes can affect the computed results to a significant (and often worrisome) extent. In most cases there is no easily definable a priori answer to check against. A programmer ends up comparing the new answer against a previously established trusted baseline, or checking indirect confirmations, such as whether physical quantities like energy are conserved. However, such non-systematic efforts might miss underlying issues, and the code may keep misbehaving until these are fixed.

In this session, Allison presented real-world evidence to show that ignoring numerical result changes can lead to misleading scientific conclusions. She presented techniques and tools that can help computational scientists understand and analyze compiler effects on their scientific code. These techniques are applicable across a wide range of examples, narrowing the root causes down to single files, functions within files, and even computational expressions that affect specific variables. The developer may then rewrite the code selectively and/or suppress the application of certain optimizations to regain more predictable behavior.
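As a minimal, hedged illustration (our own, not from the talk), one low-level mechanism behind such result drift is reassociation of floating-point sums, which optimizing compilers may perform when vectorizing: the same multiset of numbers summed in two different orders yields two different answers.

```python
# Floating-point addition is not associative, so summation order matters.
xs = [1e16, 1.0, -1e16] * 1000 + [1.0]

# Order 1: left to right. Each 1.0 is absorbed into 1e16 (below half an
# ulp), each triple cancels back to 0.0, and the final 1.0 survives.
left_to_right = 0.0
for x in xs:
    left_to_right += x

# Order 2: sorted ascending. All -1e16 terms accumulate to -1e19 first,
# where every 1.0 is absorbed; the +1e16 terms then cancel exactly to 0.0.
reordered = sum(sorted(xs))

print(left_to_right, reordered)   # 1.0 0.0 -- the "same" sum, twice
```

A compiler that reorders this reduction changes the printed answer with no source change at all, which is exactly the kind of drift a ported HPC code can exhibit.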

##
Community Demos
Tanmay Tirpankar, University of Utah; and Mike Lam, James Madison University

Tanmay Tirpankar https://github.com/arnabd88/Satire

Mike Lam https://github.com/crafthpc/floatsmith

##
Rival: an interval arithmetic for robust error estimation
Oliver Flatt,
University of Utah

Interval arithmetic is a simple way to compute a mathematical expression to an arbitrary accuracy, widely used for verifying floating-point computations. Yet this simplicity belies challenges. Some inputs violate preconditions or cause domain errors. Others cause the algorithm to enter an infinite loop and fail to compute a ground truth. Plus, finding valid inputs is itself a challenge when invalid and unsamplable points make up the vast majority of the input space. These issues can make interval arithmetic brittle and temperamental.

Rival introduces three extensions to interval arithmetic to address these challenges. Error intervals express rich notions of input validity and indicate whether all or some points in an interval violate implicit or explicit preconditions. Movability flags detect futile recomputations and prevent timeouts by indicating whether a higher-precision recomputation will yield a more accurate result. And input search restricts sampling to valid, samplable points, so they are easier to find. We compare these extensions to the state-of-the-art technical computing software Mathematica, and demonstrate that our extensions are able to resolve 60.3% more challenging inputs, return 10.2× fewer completely indeterminate results, and avoid 64 cases of fatal error.