I am a beginner in C. As shown below, I declared a variable as int, read a value into it with the double conversion specifier, and printed it with the double conversion specifier, and I wonder why 0.000000 is displayed. I understand this code is wrong (declaring the variable as int is bad), but I would like to understand the behavior.

Since data is stored as an int, I imagine that entering a double overwrites the memory beyond it, but I cannot explain why 0.000000 is printed.

The compiler is gcc (MinGW.org GCC Build-2) 9.2.0.

Thank you for your cooperation.
Since the compiler is gcc (MinGW.org GCC Build-2) 9.2.0, is this 32-bit Windows (even on 64-bit Windows, this toolchain creates and executes 32-bit executables)? The C language does not specify the sizes of the data types, so they depend on the environment. Therefore, when a type is deliberately wrong, as in the question, the behavior follows the sizes used in each environment.

That said, on 32-bit Windows it is exactly what you imagine. The result could just as well be a meaningless number.

When printf() is called, data is a 32-bit int, so only 32 bits are written to the stack. The called printf(), however, is directed by %f to read a 64-bit double-precision floating-point number. It therefore also reads the 32 bits adjacent to data on the stack.
Because this was a short test program, I think the stack happened not to be dirty and the adjacent area was filled with 0. The result is simply a bit string that happens to be interpreted as 0.000000 when read as a double-precision floating-point number.
Incidentally, if you deliberately place another value next to data, you can confirm that the displayed value changes.
When I ran it with the Visual C++ I have at hand, I got the following output:
Also, the behavior is different on 64-bit Windows such as MinGW-w64, and on 64-bit Linux, which metropolis describes.

On 64-bit Windows and 64-bit Linux, a 32-bit int is passed in a 64-bit slot, and the unused 32-bit portion is cleared to zero. printf() tries to read this 64-bit area as a double-precision floating-point number, but the zero-cleared portion covers the sign and exponent bits, so the result is at most a tiny denormal and 0.000000 is always displayed.
With the earlier code, the value written next to data exists on 64-bit as well, but as described above it is not the part that printf() reads, so the display does not change.
"Data is stored in an int, so I imagine that entering a double overwrites the memory beyond it."

The glibc scanf(3) implementation does exactly that (the argument pointer is cast to long double *), so that is right.
"Then I'm worried that I can't explain why it's 0.000000 when I printf."

If you cast the pointer to long double * and dereference it, the value you entered is displayed (although the stack is being trampled).
Then, as for why 0.000000 is displayed when %f is specified, I think you can see it from the following warning message and the printf(3) source code (glibc/stdio-common/…).

In the sample code's run results, the stack protector is enabled, so it terminates abnormally. So Fushihara's answer is correct in a way (though I think the explanation is a little lacking...).
Regardless of what you type at the keyboard, an int 0 is being passed where %f expects a double, so it is the same as writing printf("%f\n", 0);.