Rational Expressions with Frictionless Fractions and Implemented Time

Natural numbers are defined as the sequence fully determined by an initial value, its successor, the successor of that successor, and so forth. Beyond the beauty of their intrinsic Dedekind–Peano axioms, these axioms also give us induction, by which statements over all naturals can be fully proven.

Whole numbers simply extend the naturals in the negative direction, a mirrored view on the same axis if you like. Rational numbers are defined as fractions of such integers; presented as decimal numbers they may already lose precision, especially when applied in a computer program using floats.
It is hard to cope with the loss of precision of floats for e.g. 1/3, an error which may even explode through non-linear propagation of uncertainty when fed into (exponential) computation.

As mentioned, computer floats suffer an additional lack of accuracy due to their limited precision toward either end. Here a little chart showing the exactly representable floats per mantissa size, which visualizes very well their non-linear distribution, or better, the many holes they leave behind.

[Figure: exactly representable floating-point numbers, by mantissa size]

These holes manifest as so-called round-off errors, which potentially sum up and multiply, distorting the final result.
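
A quick way to make these holes tangible is to sum a value that has no exact binary representation. A minimal standalone demo, nothing jaulib-specific here:

    #include <cstdio>

    int main() {
        // 0.1 has no exact binary float representation, so every addition
        // lands in one of the "holes" and the round-off errors accumulate.
        float sum = 0.0f;
        for(int i = 0; i < 10; ++i) {
            sum += 0.1f;
        }
        // Typically prints a value slightly off 1.0 and 'equal: 0'.
        std::printf("sum = %.9f, equal: %d\n", sum, sum == 1.0f ? 1 : 0);
        return 0;
    }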

I picked up this very issue in our little CS class with my older son while continuously rotating geometric objects represented by integer coordinates. In our first attempt, we simply rounded the result of the floating-point rotation expression. As the objects kept rotating, they visibly distorted their shape more and more in the process. It was quite fun to watch. We created two solutions for this problem: just using floats throughout, or storing the angle for an otherwise axis-aligned shape. A third solution would have been to actually use fractions…
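
A minimal sketch of the effect (not our actual class code, and the one-degree-per-step increment is just an assumption for illustration): rotate an integer point 360 times by one degree, rounding back onto the integer grid after each step, versus rotating the pristine point once by the accumulated angle:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double pi = std::acos(-1.0);
        const double step = pi / 180.0;  // rotate by 1 degree per iteration
        double x = 100.0, y = 0.0;       // variant 1: rotate and round each step
        double angle = 0.0;              // variant 2: only accumulate the angle

        for(int i = 0; i < 360; ++i) {
            const double xr = x * std::cos(step) - y * std::sin(step);
            const double yr = x * std::sin(step) + y * std::cos(step);
            x = std::round(xr); y = std::round(yr); // forced back onto the integer grid
            angle += step;
        }
        // After a full turn both variants should land back on (100, 0);
        // the per-step rounded variant drifts visibly away.
        std::printf("rounded each step: (%g, %g)\n", x, y);
        std::printf("angle stored:      (%g, %g)\n",
                    std::round(100.0 * std::cos(angle)),
                    std::round(100.0 * std::sin(angle)));
        return 0;
    }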

This reminded me of the fixed compile-time rational arithmetic in C++, i.e. std::ratio and std::chrono::duration. While I embrace the power of C++'s constexpr compile-time expressions, I also love to use rationals at runtime with numbers not derived from literal constants.
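
For illustration, the compile-time nature of std::ratio; the commented-out runtime denominator at the end is exactly what it cannot express:

    #include <ratio>
    #include <chrono>
    #include <cstdint>

    // std::ratio arithmetic happens entirely at compile time:
    using ms  = std::ratio<1, 1000>;
    using sec = std::ratio<1, 1>;
    using sum = std::ratio_add<ms, sec>;            // 1001/1000, reduced at compile time
    static_assert(sum::num == 1001 && sum::den == 1000, "");

    // std::chrono::duration fixes the ratio in the type as well:
    constexpr std::chrono::duration<int64_t, std::milli> a(1000);
    constexpr auto b = a + std::chrono::seconds(1); // common type: milliseconds
    static_assert(b.count() == 2000, "");

    // But a denominator computed at runtime is not expressible:
    // int d = user_input();
    // std::ratio<1, d> nope; // error: template argument must be a constant expression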

Hence I indulged myself in writing jau::fraction<T> to dive a little deeper into this space, and also read about Euclid's algorithm from Euclid's Elements, ~300 BC, book VII.
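
Euclid's insight: the gcd of two numbers is unchanged if the larger is replaced by its remainder modulo the smaller. A minimal constexpr rendition; the actual code inside jau::fraction will differ in detail:

    #include <cstdint>

    // Euclid's algorithm (Elements, book VII): replace the larger number
    // by the remainder of dividing it by the smaller, until one is zero.
    constexpr uint64_t gcd(uint64_t a, uint64_t b) noexcept {
        while( 0 != b ) {
            const uint64_t t = b;
            b = a % b;
            a = t;
        }
        return a;
    }
    static_assert( 6 == gcd(54, 24), "" );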

Adding user-defined literals to these fractions renders the code frictionless; addition and subtraction use the greatest common divisor (gcd) to calculate the least common multiple (lcm) of the denominators, with overflow checks:

    1000_ms == 1_s;
    60_s == 1_min;
    fraction_i64 a = 1000_ms, b = 1_s, c = a + b;
    fraction_i64 i = 1_s/3_s, j = 4_d, k = i + j;

The above shows how to compare and compound fractions of a different base without losing precision at runtime, spending two 64-bit integer values for the signed numerator and unsigned denominator.
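
To sketch what happens under the hood, here a hypothetical overflow-checked addition, not the actual jau::fraction code: it reuses the gcd() from above plus the GCC/Clang overflow builtins, and scales both numerators to the lcm of the denominators, where lcm(a, b) = a / gcd(a, b) * b.

    #include <cstdint>

    struct frac { int64_t num; uint64_t denom; }; // invariant: denom > 0

    // Returns false on overflow instead of delivering a wrong result.
    bool add(const frac& a, const frac& b, frac& out) noexcept {
        const uint64_t g = gcd(a.denom, b.denom);
        uint64_t lcm;
        if( __builtin_mul_overflow(a.denom / g, b.denom, &lcm) ) { return false; }
        int64_t na, nb, sum;
        if( __builtin_mul_overflow(a.num, int64_t(lcm / a.denom), &na) ||
            __builtin_mul_overflow(b.num, int64_t(lcm / b.denom), &nb) ||
            __builtin_add_overflow(na, nb, &sum) ) { return false; }
        out = frac{ sum, lcm }; // a real implementation would reduce by gcd again
        return true;
    }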

Implemented Time in C++

I probably won’t live beyond the year 2262, which is the limit when using a signed 64-bit nanosecond std::chrono::time_point. However, I may well reach beyond the year 2038, which is the ceiling when using struct timespec on 32-bit platforms, whose tv_sec only lasts for 68 years past the 1970 Unix Epoch. Both limitations are of a historical nature, and I believe we should do better and, foremost, be more accurate.

Of course, we already have 64-bit variants of struct timespec designed and supported by newer underlying kernels/OS. To my knowledge we still lack proper user-land libc support on GNU/Linux, including pthreads. Therefore it is best advised to utilize a 64-bit system. This is also true for embedded systems, especially since modern 64-bit architectures also provide a more efficient cache load and store implementation for atomic operations. C++ memory-model slides and SC-DRF std::memory_order might be of interest.

To provide at least a timeless notion of a time-point storage type today, I introduced jau::fraction_timespec. This new type is simply an enforced 64-bit signed struct timespec with the capability to convert from and to fractions, supporting all customary time retrieval, sleep, and pthreads-based wait_until() and wait_for() functions.
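
The storage layout amounts to something like the following sketch only; see jaulib for the real definition, the field names here merely mirror struct timespec:

    #include <cstdint>

    // Sketch: an always-64-bit signed variant of struct timespec,
    // immune to the 32-bit tv_sec ceiling discussed above.
    struct my_fraction_timespec {
        int64_t tv_sec;  // seconds since the epoch in use
        int64_t tv_nsec; // nanoseconds, normalized to [0..1'000'000'000)
    };

Usage then looks like this: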

    fraction_timespec t0 = getMonotonicTime();
    // do something

    // Exact duration
    fraction_timespec td_1 = getMonotonicTime() - t0;

    // or for durations <= 292 years
    fraction_i64 td_2 = (getMonotonicTime() - t0).to_fraction_i64();
    sleep_for( td_2 + 1_ms ); // sleep for measured period + 1 ms

    // same sleep as above, but expressed in absolute time
    fraction_timespec absolute_time = getMonotonicTime() + td_2;
    sleep_until( absolute_time + 1_ms ); 

And here are the finer details of waiting on a condition variable in a time-limited fashion. When using a condition-predicate loop to ensure that no spurious wake-up preemptively ends the wait for the condition-variable event or timeout, it is always recommended to use wait_until() with the absolute timeout time computed once before said loop. Example from latch::wait_for():

    std::unique_lock<std::mutex> lock(mtx_cd);
    const fraction_timespec timeout_time = getMonotonicTime() + timeout_duration;
    while( 0 < count ) {
        std::cv_status s = wait_until(cv, lock, timeout_time);
        if( 0 == count ) {
            return true;  // condition met, regardless of a racing timeout
        }
        if( std::cv_status::timeout == s ) {
            return false; // timed out before the count reached zero
        }
        // otherwise: spurious wake-up, loop and wait again
    }
    return true; // count was already zero on entry

Note that the implemented time functions may operate on monotonic or wall-clock time, as supported by the underlying pthread_cond_clockwait(), if available, and the kernel's clock_nanosleep(), both still using the aforementioned struct timespec – only timeless on 64-bit platforms.
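
For reference, a sketch of said POSIX layer, plain POSIX calls rather than jaulib code; pthread_cond_clockwait() requires glibc >= 2.30:

    #include <pthread.h>
    #include <ctime>

    // Wait on a condition variable against an absolute CLOCK_MONOTONIC
    // timeout; the caller must hold mtx locked, as usual for cond waits.
    void wait_example(pthread_cond_t* cv, pthread_mutex_t* mtx) {
        struct timespec abstime;
        clock_gettime(CLOCK_MONOTONIC, &abstime);
        abstime.tv_sec += 10; // absolute timeout: now + 10 s
        pthread_cond_clockwait(cv, mtx, CLOCK_MONOTONIC, &abstime);
    }

    // Sleep until an absolute monotonic time point, unaffected by
    // wall-clock adjustments, via TIMER_ABSTIME.
    void sleep_example() {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        ts.tv_sec += 1; // sleep until now + 1 s
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, nullptr);
    }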

All of the above is now used in Direct-BT.

… and this ends my little dive into fractions and time for today 😉