The digits command controls the number of significant digits used for floating-point numbers. The minimum (and default) number of digits is 15, in which case hardware double-precision floating-point numbers are used. Setting digits to a higher value (up to 10000) invokes software floating-point routines that operate in base 10.
# hardware floats
0.1 + 0.2 - 0.3;
1.11022302462516e-16

# software floats
digits(20);
20
0.1 + 0.2 - 0.3;
0
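
Going the other way, setting digits back to 15 should restore hardware doubles, so the binary rounding error reappears. A minimal sketch, assuming digits(15) echoes its argument the same way digits(20) does above:

# back to hardware floats (assumes digits(15) restores the default double precision, per the description above)
digits(15);
15
0.1 + 0.2 - 0.3;
1.11022302462516e-16

The nonzero residue appears because 0.1, 0.2, and 0.3 have no exact binary representation, whereas the base-10 software floats represent them exactly and the sum cancels to 0.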