Why do you need to bitwise-AND some char conversions in Java?

I am translating a Java sample application to C#, involving encryption (AES, RSA, etc.).

At some point in the Java code (which works, and which I have converted to C#), I found this code:

// data is a char[] and encodedArr is a byte[]
for (i = i; i < size; i++) {
    encodedArr[j] = (byte) (data[i] & 0x00FF);
    j++;
}

After some Googling (here), I found that this is a common pattern, mainly in Java code.

I know that char is a 16-bit (2-byte) type while byte is only 8 bits, but I can't understand the reason for this bitwise AND in the char -> byte conversion.

Can anyone explain?

Thanks in advance.

Solution

When you convert 0x00FF to binary, it becomes 0000 0000 1111 1111.

ANDing a bit with 1 leaves it unchanged:

1 & 1 = 1, 0 & 1 = 0

ANDing a bit with 0 always gives 0:

1 & 0 = 0, 0 & 0 = 0
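As a quick check, the binary form of the mask can be printed in Java (a minimal sketch; the class name is just for illustration):

```java
public class BinaryDemo {
    public static void main(String[] args) {
        // Leading zero bits are not printed, so only the eight low 1-bits appear.
        System.out.println(Integer.toBinaryString(0x00FF)); // prints "11111111"
    }
}
```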

So in encodedArr[j] = (byte) (data[i] & 0x00FF);, the AND keeps only the low 8 bits of data[i] and discards the high 8 bits; only the low byte is stored.
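A minimal sketch of that truncation, using a hypothetical char whose high byte is non-zero:

```java
public class MaskDemo {
    public static void main(String[] args) {
        char c = '\u4E2D';        // 0x4E2D: high byte 0x4E, low byte 0x2D
        int masked = c & 0x00FF;  // the AND discards the high byte, leaving 0x2D
        byte b = (byte) masked;   // the remaining value fits in 8 bits
        System.out.printf("char = 0x%04X, masked = 0x%02X, byte = 0x%02X%n",
                (int) c, masked, b & 0xFF);
    }
}
```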

This is needed because a byte is defined as an 8-bit value. The bitwise AND is there to prevent potential overflow, i.e. to avoid trying to fit more than 8 bits into a byte.

A char in Java is 2 bytes! This logic can prevent overflow. However, as someone pointed out below, the mask is redundant here, because the cast already does it for you. Maybe the author was just being cautious?
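That redundancy can be checked exhaustively: a narrowing cast from char to byte already discards all but the low 8 bits (per the Java Language Specification's narrowing primitive conversions), so masking first never changes the result. A small sketch:

```java
public class CastVsMask {
    public static void main(String[] args) {
        // Compare (byte) c with (byte) (c & 0x00FF) for every possible char value.
        for (int i = Character.MIN_VALUE; i <= Character.MAX_VALUE; i++) {
            char c = (char) i;
            if ((byte) c != (byte) (c & 0x00FF)) {
                System.out.println("Mismatch at 0x" + Integer.toHexString(i));
                return;
            }
        }
        System.out.println("The cast and the masked cast agree for all 65536 chars.");
    }
}
```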
