NDK vs Java performance

A simple answer to a simple question, hopefully:

Has anybody compared C code via the NDK against equivalent Java code, and if so, how much faster was it?

Let's say I can do X calculations (all the same calculation) in Y seconds in Java. How many X calculations could be done in the same Y seconds through C code in the NDK? 1.2x? 2.7x? Any guess?

Let's say the calculation is b = L / a * C / D (all X calculations are the same)
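Rather than guessing at a ratio, you can time the loop yourself on the target device. A minimal sketch in C, where `calc` is a hypothetical stand-in for the calculation above (the exact formula in the question is not fully specified):

```c
#include <time.h>

/* Hypothetical stand-in for the X calculation; substitute your own formula. */
double calc(double l, double a, double c, double d) {
    return l / a * c / d;
}

/* Runs the calculation n times and returns elapsed CPU seconds. */
double time_calcs(long n) {
    clock_t start = clock();
    volatile double b = 0.0;   /* volatile so the loop isn't optimized away */
    for (long i = 0; i < n; i++) {
        b = calc(2.0 * i + 1.0, 1.5, 3.0, 2.0);
    }
    (void)b;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}
```

Running the same loop in Java with `System.nanoTime()` and dividing the two times gives you the ratio for your hardware, which is far more useful than any guess.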

Edit:

Why do I ask? Because I'm considering moving my Java camera frame processing to C code, in the hope of being able to handle higher resolutions

Solution

Since no one else wants to touch this topic and nobody has seriously tried to answer the question, I will:

> Java is compiled to bytecode, and the bytecode is JIT-compiled to native code.
> C is compiled directly to native code.

The difference is really that additional compilation step, and in theory Java should do a better job than your C compiler. That's because:

> Java can insert statistical profiling into the generated native code and, after a while, regenerate that native code, optimized against the runtime paths actually taken in your code.

That last point sounds great, but Java comes with some trade-offs:

> It needs GC runs to clean up memory
> Some code may never be JITed at all

GC copies the live objects and throws away all the dead ones in one sweep. Since GC does no work per dead object, only per live object, GC is in theory faster than the usual malloc/free cycle for short-lived objects

However, most Java advocates forget one thing: nobody says you must malloc/free every object instance when coding in C. You can reuse memory. You can pack thousands of temporary objects into a single pre-allocated block of memory
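The memory-reuse point above can be sketched as a simple scratch arena: one malloc up front, "allocations" that just bump an offset, and a single reset instead of thousands of frees. The names here are illustrative, not from the original answer:

```c
#include <stdlib.h>

/* A scratch arena: allocate one block up front, then hand out pieces of it
 * by bumping an offset. Resetting it is O(1), no matter how many temporary
 * objects lived in it. */
typedef struct {
    unsigned char *base;
    size_t capacity;
    size_t used;
} Arena;

int arena_init(Arena *a, size_t capacity) {
    a->base = malloc(capacity);
    a->capacity = capacity;
    a->used = 0;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t size) {
    if (a->used + size > a->capacity) return NULL;  /* out of scratch space */
    void *p = a->base + a->used;
    a->used += size;
    return p;
}

/* One cheap reset replaces freeing every temporary individually. */
void arena_reset(Arena *a) { a->used = 0; }

void arena_destroy(Arena *a) { free(a->base); a->base = NULL; }
```

A per-frame image-processing loop would typically call `arena_reset` once per frame, which is exactly the kind of allocation pattern a GC can never beat.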

With large Java heaps, GC time goes up and pause times go up with it. In some software a pause during a GC cleanup cycle is perfectly acceptable; in other software it leads to fatal errors. Try keeping your software responding within a defined number of milliseconds while a GC is running, and you'll see what I'm talking about

In some extreme cases, the JIT may choose not to JIT the code at all. If I remember correctly, this happens when a method grows beyond 8K of bytecode. The runtime penalty for a non-JITed method can be in the 20000% range (at least 200 times slower for our customers). The JIT also starts recompiling when the JVM's code cache begins to fill up (which can happen if new classes keep being loaded into the JVM, as at one customer site). And JIT statistics collection once reduced the concurrency of a 128-core machine to essentially single-core performance

In Java, the JIT has only a limited time budget to compile bytecode to native code. It cannot use all CPU resources for JIT compilation, because it runs in parallel with the code doing the program's actual work. In C, the compiler can run as long as it needs to spit out what it considers the most optimized code; that has no effect on execution time, while in Java it does

What I'm saying is this:

> Java gives you more, but it's not always up to you how it performs.
> C gives you less, but it's entirely up to you how it performs.

So, to answer your question:

> Choosing C over Java will not by itself make the program faster

If you stick to simple math over pre-allocated buffers, the Java and C compilers should spit out roughly the same code
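As a sketch of what "simple math over pre-allocated buffers" means in practice, a tight loop like the following compiles to essentially the same machine code whether a C compiler or the JIT emits it. The function name and shape are illustrative, not taken from the question:

```c
#include <stddef.h>

/* Applies the same arithmetic to every element of a pre-allocated buffer.
 * No allocation happens inside the loop, so neither GC nor malloc is ever
 * involved in the hot path. */
void scale_buffer(double *buf, size_t n, double a, double c, double d) {
    for (size_t i = 0; i < n; i++) {
        buf[i] = buf[i] / a * c / d;
    }
}
```

The equivalent Java loop over a `double[]` is JIT-compiled to very similar native code, which is why the win from moving such a loop to the NDK is usually small; the real wins come from controlling memory layout and avoiding allocation, as discussed above.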
