Java – comparing get/put operations of direct and non-direct ByteBuffer

Is get/put on a direct ByteBuffer faster than get/put on a non-direct (heap) ByteBuffer?

If I have to read/write a direct ByteBuffer, is it better to read/write a thread-local byte array first, and then use that byte array to update (write) the direct ByteBuffer in one go?

Solution

If you compare a heap buffer with a direct buffer that does not use the native byte order (most systems are little-endian, while the default for a direct ByteBuffer is big-endian), the performance is very similar.

If you use a natively ordered direct byte buffer, performance can be better for multi-byte values. For single bytes, it makes no difference whatever you do.
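To illustrate the point about byte order, here is a minimal sketch using the standard java.nio API: a direct buffer defaults to big-endian regardless of the platform, so you have to switch it to the native order explicitly to get the faster multi-byte path.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OrderCheck {
    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(64);
        // Direct buffers start out BIG_ENDIAN by specification,
        // even on a little-endian machine.
        System.out.println("default order: " + direct.order()); // BIG_ENDIAN
        System.out.println("native order:  " + ByteOrder.nativeOrder());
        // Switch to the platform's order for the fast multi-byte path.
        direct.order(ByteOrder.nativeOrder());
        System.out.println("after switch:  " + direct.order());
    }
}
```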

In HotSpot/OpenJDK, ByteBuffer uses the Unsafe class, and many of its native methods are treated as intrinsics. This is JVM-dependent; AFAIK recent versions of the Android VM treat them as intrinsics as well.

If you dump the generated assembly, you can see that the intrinsic methods in Unsafe are turned into single machine-code instructions, i.e. they have no JNI call overhead.

In fact, if you micro-profile, you may find that most of the time spent in a ByteBuffer getXxxx or setXxxx is spent on bounds checking rather than the actual memory access. For that reason, I still use Unsafe directly when I have to get maximum performance (note: Oracle discourages this).

I would be surprised to see that perform better; it sounds complicated.

Usually the simplest solution is better and faster.
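If you do end up staging through a byte[], the simplest route is the built-in bulk get/put, which copies a whole array in one call instead of a hand-written per-byte loop. A minimal sketch using the standard java.nio methods:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BulkCopy {
    public static void main(String[] args) {
        byte[] scratch = new byte[]{1, 2, 3, 4};
        ByteBuffer direct = ByteBuffer.allocateDirect(scratch.length);

        direct.put(scratch); // one bulk write instead of a per-byte loop
        direct.flip();       // switch the buffer from writing to reading

        byte[] out = new byte[scratch.length];
        direct.get(out);     // one bulk read back into the array
        System.out.println(Arrays.equals(scratch, out)); // prints "true"
    }
}
```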

You can test this yourself with the following code:

public static void main(String... args) {
    ByteBuffer bb1 = ByteBuffer.allocateDirect(256 * 1024).order(ByteOrder.nativeOrder());
    ByteBuffer bb2 = ByteBuffer.allocateDirect(256 * 1024).order(ByteOrder.nativeOrder());
    for (int i = 0; i < 10; i++)
        runTest(bb1,bb2);
}

private static void runTest(ByteBuffer bb1,ByteBuffer bb2) {
    bb1.clear();
    bb2.clear();
    long start = System.nanoTime();
    int count = 0;
    while (bb2.remaining() > 0)
        bb2.putInt(bb1.getInt());
    long time = System.nanoTime() - start;
    int operations = bb1.capacity() / 4 * 2;
    System.out.printf("Each putInt/getInt took an average of %.1f ns%n",(double) time / operations);
}

prints:

Each putInt/getInt took an average of 83.9 ns
Each putInt/getInt took an average of 1.4 ns
Each putInt/getInt took an average of 34.7 ns
Each putInt/getInt took an average of 1.3 ns
Each putInt/getInt took an average of 1.2 ns
Each putInt/getInt took an average of 1.3 ns
Each putInt/getInt took an average of 1.2 ns
Each putInt/getInt took an average of 1.2 ns
Each putInt/getInt took an average of 1.2 ns
Each putInt/getInt took an average of 1.2 ns

I'm sure a JNI call alone takes more than 1.2 ns.

To prove that it is not the JNI call itself but the overhead around it, you can write the same loop directly using Unsafe:

import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import sun.misc.Unsafe;
import sun.nio.ch.DirectBuffer;

public static void main(String... args) {
    ByteBuffer bb1 = ByteBuffer.allocateDirect(256 * 1024).order(ByteOrder.nativeOrder());
    ByteBuffer bb2 = ByteBuffer.allocateDirect(256 * 1024).order(ByteOrder.nativeOrder());
    for (int i = 0; i < 10; i++)
        runTest(bb1, bb2);
}

private static void runTest(ByteBuffer bb1, ByteBuffer bb2) {
    Unsafe unsafe = getTheUnsafe();
    long start = System.nanoTime();
    long addr1 = ((DirectBuffer) bb1).address();
    long addr2 = ((DirectBuffer) bb2).address();
    for (int i = 0,len = Math.min(bb1.capacity(),bb2.capacity()); i < len; i += 4)
        unsafe.putInt(addr1 + i,unsafe.getInt(addr2 + i));
    long time = System.nanoTime() - start;
    int operations = bb1.capacity() / 4 * 2;
    System.out.printf("Each putInt/getInt took an average of %.1f ns%n",(double) time / operations);
}

public static Unsafe getTheUnsafe() {
    try {
        Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        return (Unsafe) theUnsafe.get(null);
    } catch (Exception e) {
        throw new AssertionError(e);
    }
}

prints:

Each putInt/getInt took an average of 40.4 ns
Each putInt/getInt took an average of 44.4 ns
Each putInt/getInt took an average of 0.4 ns
Each putInt/getInt took an average of 0.3 ns
Each putInt/getInt took an average of 0.3 ns
Each putInt/getInt took an average of 0.3 ns
Each putInt/getInt took an average of 0.3 ns
Each putInt/getInt took an average of 0.3 ns
Each putInt/getInt took an average of 0.3 ns
Each putInt/getInt took an average of 0.3 ns

So you can see that the native call is much faster than the JNI call you might have expected. The main cause of the remaining latency is likely the L2 cache speed.
