
Examples 2.0

Examples of FPCore 2.0 expressions

FPBench 2.0 standards

FPBench is a standard benchmark suite for the floating-point community. The benchmark suite provides a common format for floating-point computation and its metadata, along with a common set of accuracy measures:

  1. The FPCore input format
  2. FPCore example inputs
  3. Metadata for FPCore benchmarks
  4. Standard measures of error

Rounding with cast

The cast operation is necessary for explicitly rounding values without performing other numerical operations. Precision annotations specified with ! never cause any numerical operations to occur; they only change the rounding context.

For example, the expression

(! :precision binary64 (! :precision binary32 x))

will not round the value of the variable x. This expression is the same as simply specifying

x

regardless of the rounding context. To round x, it is necessary to cast it in a context with the desired precision. The expression

(! :precision binary64 (! :precision binary32 (cast x)))

will round x to binary32 precision (here the outer annotation for binary64 precision is redundant, as it will be overwritten by the inner one), while the expression

(! :precision binary64 (cast (! :precision binary32 (cast x))))

will round x twice, first to binary32 precision, and then from binary32 to binary64 precision.

Because numerical operations already round their outputs, it should not be necessary to cast in most cases, unless double rounding is specifically intended. For example, in an expression such as

(! :precision binary64
   (let ([x (+ y 1)])
     x))

the value stored in x will already be rounded to binary64 precision, as the addition is done in a context with that precision. Inserting an explicit cast around either the addition or the use of x would cause double rounding, and while in the case of binary64 precision this is a no-op, for other rounding contexts it could lead to undesirable behavior.
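
As an illustration (a variant of the expression above, not an example from the standard), the redundant cast around the addition would look like this:

(! :precision binary64
   (let ([x (cast (+ y 1))])
     x))

Here the cast re-rounds a value that is already in binary64, so it has no effect, matching the no-op case described above.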

Inheriting properties in rounding contexts

All properties not explicitly specified in a ! precision annotation are inherited from the parent context. For the top-level expression in an FPCore benchmark, the parent context includes all the overall properties of the benchmark.

For example, in the following benchmark

(FPCore ()
 :name "foo"
 :math-library gnu-libm-2.34
 :spec 0
  (! :precision binary64 (sin PI)))

the sin operation will take place in a context with name "foo", math-library gnu-libm-2.34, spec 0, and binary64 precision. Even properties that are seemingly unrelated to rounding, such as the name of the benchmark, are inherited. The FPCore standard does not prohibit tools from implementing rounding functions that depend on these properties, or other tool-specific properties, although having a rounding function that depends on name is not advised.

Properties in the rounding context might come from multiple different annotations. For example, in the expression

(! :math-library gnu-libm-2.34
  (! :round toZero :precision binary32 (+ x 1)))

the addition will take place as expected in a context with binary32 precision, toZero rounding direction, and the gnu-libm-2.34 math library. This can be useful if some, but not all, of the properties are shared by multiple subexpressions. For example, in the expression

(! :precision binary64 (- (! :round toPositive (+ x y))
                          (! :round toNegative (+ x y))))

all of the operations will take place in a context with binary64 precision, but the additions will use different rounding directions.
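
Combining these ideas, a minimal sketch of a complete benchmark (the identifier interval-width and the :name value are illustrative) could supply binary64 precision as a top-level property; the subtraction inherits it directly, and each addition inherits it alongside its own rounding direction:

(FPCore interval-width (x y)
 :name "interval-width"
 :precision binary64
  (- (! :round toPositive (+ x y))
     (! :round toNegative (+ x y))))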

Using Tensors

N-dimensional arrays, or tensors, are useful for working with any kind of structured data, from simple 3D points to complex, multidimensional arrays. Tensors can be constructed in FPCore expressions either as literal arrays or by dynamically tabulating over a set of indices with tensor. Here are two ways of constructing a 2x2 identity matrix:

(array (array 1 0)
       (array 0 1))

(tensor ([i 2] [j 2])
  (if (== i j) 1 0))

Tensors can be nested, and different ways of constructing them can be nested with each other. For example, an n by 3 matrix of zeros (perhaps representing 3D points) could be constructed like this:

(tensor ([i n])
  (array 0 0 0))

Tensors can also be received as inputs to an FPCore. Each tensor input must be annotated with the sizes of its dimensions; these can either be fixed integers, or symbols that will be bound to the appropriate size when the FPCore is executed. For example, the input array A in the following FPCore must be n by 3. The FPCore returns the first k rows.

(FPCore ((A n 3) k)
 :pre (<= 0 k n)
  (tensor ([i 0 k])
    (ref A i)))

As purely functional data structures, tensors cannot be modified in any way after they are created. However, they can be copied, in whole or in part, by another tensor expression, as the previous example shows.
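
For instance, here is a sketch of a functional "update" (the identifier set-1d, its arguments, and its precondition are illustrative): replacing the k-th element of a one-dimensional tensor means building a new tensor that copies every other element unchanged:

(FPCore set-1d ((A n) k v)
 :pre (<= 0 k (- n 1))
  (tensor ([i n])
    (if (== i k) v (ref A i))))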

FPCore does not provide any numerical operations on tensors, only the data structure operations dim, size, and ref. By giving an FPCore an identifier so that it can be called as an operation in other FPCores, specific tensor operations can be defined, such as matrix multiplication:

(FPCore matmul ((A am an) (B bm bn))
 :pre (== an bm)
  (tensor ([m am]
           [n bn])
    (for ([i bm])
      ([prod 0 (+ prod (* (ref A m i) (ref B i n)))])
      prod)))
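
Because matmul is bound to an identifier, FPCores later in the same file can apply it like a built-in operation. For example, this sketch (the identifier square is illustrative) squares a matrix, using a precondition in the same style as matmul to require that its input is square:

(FPCore square ((A n m))
 :pre (== n m)
  (matmul A A))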

For algorithms with more complicated data dependencies, a tensor* expression can be used to statefully loop over a tensor, for example to compute a partial sum:

(FPCore parsum-1d ((A n))
  (tensor* ([i n])
    ([sum 0 (+ sum (ref A i))])
    sum))
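
The accumulator sum carries its value from one index to the next, so each element of the result can depend on the elements of A at earlier indices; a plain tensor expression, whose body is evaluated independently at each index, cannot express this kind of dependency.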