Does loss of precision occur when the conversion takes place?
Yes, a loss of precision occurs.
This is known as a narrowing conversion (narrowing).
The floating-point types are:
float: single precision. Usually a 32-bit wide type that follows IEEE-754 binary32.
double: double precision. Usually a 64-bit wide type that follows IEEE-754 binary64.
long double: extended precision. Usually an 80-bit type on 32-bit and 64-bit architectures; it does not necessarily follow IEEE-754.
Each time you go from a more precise type to a less precise one, a narrowing occurs. Is data lost every time this happens?
According to the C standard, §6.3.1.5 Real floating types (translation and highlighting mine):

6.3.1.5 Real floating types

When a float is promoted to double or long double, or a double is promoted to long double, its value does not change.

When a double is demoted to float, a long double is demoted to double or float, or a value represented with greater precision and range than required by its semantic type (see 6.3.1.8) is explicitly converted to its semantic type, then if the value being converted can be represented exactly in the new type, it is not changed. If the value being converted is within the range of values that can be represented but cannot be represented exactly, the result is the nearest value, rounded either up or down; the choice of rounding is implementation-defined. If the value being converted is outside the range of values that can be represented, the behavior is undefined.
In your case, assigning the value 3.14 to a float is a narrowing conversion, but since the value 3.14 is representable by float, the value will not change.
If the value has not changed, why does your code fail?

float n = 3.14;
if(n == 3.14f)
    puts("Equal"); // not printed!
Because I have lied: the value 3.14 is NOT exactly representable by float. Some floating-point numbers are not exactly representable in binary; this is due to the properties of each base¹. The value 3.14 has no exact binary representation: in double precision its stored value is approximately 3.14000000000000012434497875802, and when you store it in float you lose some accuracy. How much exactly? That will depend on your system...
So you will be comparing a number similar to the truncation of the double value 3.14000000000000012434497875802 against a number similar to 3.14f, which in many cases will not be the same number. For example, the literal 3.14 stored as float is approximately 3.1400001049041748046875, so your comparison would be, more or less:

if(3.1400000000000001243449 == 3.1400001049041748046875) // the double has been truncated

That evidently does not satisfy equality.
It's terrible! What can I do?
As eferion says, you should avoid comparing floating-point numbers for equality; due to rounding errors you should compare them for near-equality instead. A function like this could help:
#include <cmath>  // std::fabs
#include <cfloat> // FLT_EPSILON, DBL_EPSILON

bool casi_iguales(float izquierda, float derecha)
{
    return std::fabs(izquierda - derecha) <= FLT_EPSILON;
}

bool casi_iguales(double izquierda, double derecha)
{
    return std::fabs(izquierda - derecha) <= DBL_EPSILON;
}
FLT_EPSILON and DBL_EPSILON are the difference between 1 and the next value representable by float and double respectively; in other words, they are roughly the smallest difference each type can distinguish near 1, so if the difference between izquierda and derecha is less than or equal to this value, the two values are almost equal.
¹ For example, 1/3 in base 10 is a purely repeating number with value 0.333333..., while in base 12 it is exactly 0.4. In decimal the value 1/10 is exactly 0.1, but in binary it is a mixed repeating number with value 0.00011001100110011...