Java – why is char implicitly converted to byte (and short) primitives, and when is it not?
Some behavior of the compiler puzzles me (using Oracle JDK 1.7 in Eclipse).
I have it from a book that char primitives need to be explicitly cast to short and byte, which makes sense because the ranges of the data types do not fully overlap (char is 0 to 65535, while byte is -128 to 127 and short is -32768 to 32767).
In other words, the following code works (but not without explicit type conversion):
    char c = '&';
    byte b = (byte) c;
    short s = (short) c;
Printing b or s correctly displays the number 38, which is the numeric value of '&' in Unicode.
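To illustrate why the explicit cast is needed at all, here is a small example of my own with a char whose value does not fit in a byte:

    char big = '\u00E9';    // 233, a valid char but outside the byte range
    byte bb = (byte) big;   // explicit cast required; the value wraps to -23
    System.out.println(bb); // prints -23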
This brings me to my actual problem: why does the following work as well?
    byte bc = '&';
    short sc = '&';
    System.out.println(bc); // correctly displays the number 38 on the console
    System.out.println(sc); // correctly displays the number 38 on the console
Naturally, I understand that the following works as well:
    byte bt = (byte) '&';
    System.out.println(bt); // correctly displays the number 38 on the console
But this "stealth conversion" of a char to byte (and short) without so much as a compiler warning does not seem right to me.
Can anyone explain why this is allowed?
Could the reason be the interpretation of the '&' literal itself, so that it never really exists as a char at all but is treated as a plain numeric (octal, hexadecimal, etc.) value?
Solution
Basically, the assignment conversion rules in the Java Language Specification (JLS §5.2) state that if the expression is a constant expression of type byte, short, char, or int, a narrowing primitive conversion may be used, provided the type of the variable is byte, short, or char and the value of the constant expression is representable in the type of the variable.
Your '&' happens to be a constant expression of type char whose value, 38, is representable in both byte and short, so the compiler performs the narrowing conversion implicitly.
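To make the rule concrete, here is a minimal sketch (the class name NarrowingDemo is my own; the behavior follows from the JLS rule quoted above):

    public class NarrowingDemo {
        public static void main(String[] args) {
            byte b1 = '&';          // OK: '&' is a constant expression (value 38) that fits in a byte

            final char c = '&';     // a final char initialized with a constant is a "constant variable"
            byte b2 = c;            // OK: c is therefore a constant expression as well

            char d = '&';           // d is not final, hence NOT a constant expression
            // byte b3 = d;         // compile error: possible lossy conversion from char to byte

            // byte b4 = '\uFFFF';  // compile error: 65535 is not representable in a byte

            System.out.println(b1 + " " + b2); // prints: 38 38
        }
    }

In other words, it has nothing to do with how the literal is written (octal, hexadecimal, or as a character): what matters is that the compiler can prove at compile time that the constant value fits in the target type.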