© Parineeth M R
Consider the code below:
C/C++
#include <stdio.h>

void f1(void)
{
    char x = -1;       /* signedness of a plain char is platform-dependent */
    int y = 0;
    y = y + x;         /* x is promoted to int before the addition */
    printf("%d\n", y);
}
We expect the output to be -1. However, this is not true on all platforms.
When we declare a variable as int x, we are sure that the variable x is a signed integer even if we don’t specify the “signed” keyword in the declaration. Similarly, we expect that the declaration char x is equivalent to the declaration signed char x.
“The C Programming Language” book by Kernighan & Ritchie states that “Whether plain chars are signed or unsigned is machine-dependent, but printable characters are always positive.” So if we declare char x, the variable x may be treated as a signed char on some platforms and as an unsigned char on others. This can create portability problems.
So if the platform treats char x as a signed char, the output is -1. You can confirm this by invoking the function below:
C/C++
#include <stdio.h>

void f2(void)
{
    signed char x = -1;  /* explicitly signed */
    int y = 0;
    y = y + x;           /* x is sign-extended to int, so y becomes -1 */
    printf("%d\n", y);   /* prints -1 */
}
However, if the platform treats char x as an unsigned char, the output is 255. You can confirm this by invoking the function below:
C/C++
#include <stdio.h>

void f3(void)
{
    unsigned char x = -1;  /* -1 wraps to 255 (when CHAR_BIT == 8) */
    int y = 0;
    y = y + x;             /* x is promoted to int with value 255 */
    printf("%d\n", y);     /* prints 255 */
}
On most platforms, char x is treated as signed char x. However, there is the occasional platform where char x is treated as unsigned char. I ran into one such platform in my career and started getting weird bugs that took quite some time to debug.
So to keep the code truly portable, explicitly declare variables as signed char or unsigned char whenever their sign matters.