How to use Android threads correctly

Preface

For mobile developers, "move time-consuming work onto worker threads to keep the UI thread smooth" is the first golden rule of thread programming, yet this very rule is often the main reason UI threads end up janky. While pushing ourselves to use threads more, we also need to remind ourselves how to keep those threads from getting out of control.

One reason multithreaded programming is complex is its parallel nature. The human brain works more like a single-threaded, serial processor: handling one task after another is its most comfortable state, and frequent switching between tasks produces "system exceptions" such as headaches. Because the brain's multitasking ability is so different from a computer's, it is easy for us to make mistakes when designing parallel business logic.

Another complication is the side effects threads bring, which include but are not limited to: thread-safety problems with shared data, deadlocks, memory consumption, object lifecycle management, UI jank, and so on. Every new thread is like a stone thrown into a lake, creating ripples in places you are not looking.

Visualizing abstract things is our main way of understanding the world. As one kind of "citizen" in the operating-system world, how does a thread get scheduled onto the CPU and obtain memory, and how does it communicate with other "citizens" to maximize its benefit? Picturing these entities and their behavior gives us something like the operating system's "God view", which is what lets us correctly tame the powerful beast that is the thread.

Process priority

A thread is hosted in a process: the thread's life cycle is directly affected by that process, and the process's survival is directly tied to its priority. When thinking about process priority, most people intuitively know that foreground processes rank higher than background processes. But this coarse partition cannot satisfy the operating system's need for fine-grained scheduling, so both Android and iOS further subdivide foreground and background processes.

Foreground Process

"Foreground" generally means visible to the user, but visible is not necessarily active. In the Android world, an activity in the foreground can be considered active if it can receive the user's input events. If a dialog pops up over it, the dialog becomes the new active entity and directly faces the user's operations; the partially covered activity is still visible, but its state becomes inactive. Failing to distinguish visible from active correctly is a mistake many junior programmers make.

Background Process

Background processes are also subdivided. "Background" can be understood as invisible, and Android further distinguishes among invisible tasks. Important background work is defined as a service; if a process contains a service (a service process), the system treats it as more "important", so its priority is naturally higher than that of a process without one (a plain background process). Finally, there is the empty process. At first glance the empty process is confusing: if a process does nothing, why keep it at all? In fact an empty process is not really empty; it still occupies a fair amount of memory.

In the iOS world, memory is divided into clean memory and dirty memory. Clean memory is what the app occupies right after it starts and is loaded into memory, generally including the initial stack, heap, text, data and other segments. Dirty memory is the memory changed by the user's operations, that is, the app's state. When a low-memory warning appears, the system first clears the dirty memory; for the user, that means the operation progress is lost, and even tapping the app icon again starts everything from scratch. However, because clean memory is not cleared, the I/O cost of re-reading app data from disk is avoided and startup is faster. This is also why many people feel apps open slowly right after the phone is restarted.

Similarly, an empty process in the Android world also keeps the app's clean memory around, which helps a lot with startup speed. Naturally, the empty process has the lowest priority.

To sum up, processes in the Android world can be divided by priority, from high to low, into five categories: foreground process, visible process, service process, background process, and empty process. The lower the priority, the more likely the process is to be killed by the system when memory is tight. In short, processes that the user can perceive more easily get higher priority.

Thread scheduling

The Android system is based on a trimmed-down Linux kernel, and its thread scheduling is influenced by several factors, such as time-slice rotation and priority control. Many beginners assume that the time slice a thread receives is determined purely by its priority relative to other threads, which is not entirely correct.

When allocating time slices, the Linux scheduler uses the CFS (Completely Fair Scheduler) policy. This policy does not only look at a single thread's priority; it also tracks how much CPU time each thread has already received. If a high-priority thread has been running for a long time while a low-priority thread has been waiting, the system will later make sure the low-priority thread also gets more CPU time. Under this policy, a high-priority thread does not necessarily have an absolute advantage in competing for time slices. For that reason, Android introduces the concept of cgroups into thread scheduling: cgroups make the importance of certain threads explicit, so that higher-priority threads clearly receive more time slices.

Android divides threads into several control groups, two of which matter most. One is the default group, which UI threads belong to; the other is the background group, which worker threads should belong to. All threads in the background group together can only be allocated roughly 5% to 10% of the CPU time, and the rest goes to the default group. This design clearly helps the UI thread keep drawing smoothly.

Many people complain that Android feels less smooth than iOS because the UI thread has the same priority as ordinary worker threads. That is actually a misunderstanding. Android's designers do provide the background group to limit the CPU consumed by worker threads; unlike iOS, however, Android developers have to explicitly place worker threads into the background group.

So when we decide to start a new thread for a task, we should first ask ourselves whether the task's completion time matters enough for it to compete with the UI thread for CPU. If not, lower the thread's priority so it falls into the background group. If so, profile further to see whether the thread causes the UI thread to jank.
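As a minimal sketch, assuming you control the thread's body, explicitly placing a worker thread into the background group can look like this (the Runnable passed in is just a placeholder for real work):

```java
import android.os.Process;

public class BackgroundWorker {
    // Start a worker thread that demotes itself to the background group
    // before running the given work.
    public static Thread start(final Runnable work) {
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                // Moves this thread into the background cgroup, so it no
                // longer competes equally with the UI thread for CPU time.
                Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
                work.run();
            }
        });
        worker.start();
        return worker;
    }
}
```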

Although Android's task scheduling is thread-based, setting an individual thread's priority also changes its control group and therefore how CPU time slices are allocated to it. Changes to the process affect thread scheduling as well: when an app goes into the background, the app's entire process moves into the background group, so that the process now visible to the user can get as much CPU as possible. You can use adb to check the current scheduling policy of different processes.

When the user switches your app back to the foreground, the threads belonging to its process return to their original groups. During this frequent back-and-forth the threads' priorities do not change; it is the system that keeps adjusting how time slices are allocated.

Do you really need a new thread?

Starting threads is not a panacea for app performance or UI jank. Each newly started thread consumes at least 64KB of memory, and switching context between threads adds further overhead. If you spin up new threads casually, then as the business grows you can easily find dozens of threads running at the same time at some point while the app is running. The result is that what started as an attempt to keep the UI smooth ends up causing occasional, hard-to-control jank.

A mobile app usually starts new threads to keep the UI smooth and make the app more responsive to user actions. But before moving a task onto a worker thread, you need to know where its bottleneck is: I/O, GPU or CPU? UI jank is not necessarily caused by heavy computation on the UI thread; it may have other causes, such as a layout hierarchy that is too deep.

Reusing existing worker threads as much as possible (for example with a thread pool) avoids having a large number of threads active at the same time, such as by capping the number of concurrent HTTP requests. Alternatively, put the tasks into a serial queue (HandlerThread) and execute them in order. A worker-thread task queue is suited to a large number of short tasks, so that no single task blocks the whole queue.
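For illustration, one way to cap concurrency is to funnel all requests through a small shared pool. The pool size of 4 and the fetch() helper below are assumptions for the sketch, not values from the article:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HttpDispatcher {
    // One shared pool for all HTTP work: at most 4 requests run at once,
    // the rest wait in the pool's queue instead of spawning new threads.
    private static final ExecutorService HTTP_POOL = Executors.newFixedThreadPool(4);

    public static void fetchAll(List<String> urls) {
        for (final String url : urls) {
            HTTP_POOL.submit(new Runnable() {
                @Override
                public void run() {
                    fetch(url); // hypothetical blocking HTTP call
                }
            });
        }
    }

    private static void fetch(String url) {
        // Placeholder for the actual HTTP request.
    }
}
```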

How should you start a thread?

new Thread()

This is the simplest way to start a thread on Android, and it only fits the simplest scenarios. Its simplicity comes with quite a few hidden pitfalls.

This approach merely starts a new thread; it has no concept of a task and cannot manage state. Once start() is called, the code in run() executes through to the end and cannot be cancelled midway.

As an anonymous inner class, the Runnable also holds an implicit reference to its outer class. That reference lives until the thread exits, preventing the outer object from being collected by the GC and causing a memory leak for that period.

There is no built-in way to switch threads. To deliver the result back to the UI thread, you have to write additional thread-switching code yourself.

If started from the UI thread, the new thread's priority defaults to default and it belongs to the default cgroup, so it competes with the UI thread for CPU on equal terms. Pay particular attention to this in scenarios with demanding UI performance.

Although threads in the background group can only get 5% to 10% of CPU time in total, that is more than enough for most background work. The difference between 1ms and 10ms is imperceptible to users, so we generally prefer to run worker-thread tasks in the background group.
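Putting the points above together, a bare new Thread() forces you to handle priority and thread switching by hand. A rough sketch, where loadFromDisk() and the TextView are illustrative stand-ins:

```java
import android.os.Handler;
import android.os.Looper;
import android.os.Process;
import android.widget.TextView;

public class PlainThreadExample {
    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    public void load(final TextView label) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                // Demote ourselves so we do not compete equally with the UI thread.
                Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
                final String result = loadFromDisk(); // hypothetical blocking work
                // Hand the result back to the UI thread by hand.
                mainHandler.post(new Runnable() {
                    @Override
                    public void run() {
                        label.setText(result);
                    }
                });
            }
        }).start();
    }

    private String loadFromDisk() {
        return "result"; // placeholder
    }
}
```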

AsyncTask

A typical AsyncTask implementation looks like the sketch below.
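This is a minimal sketch under common assumptions: a bitmap is decoded from disk on the worker thread and handed back to an ImageView on the UI thread. The class name BitmapTask and the WeakReference detail are illustrative, not from the original:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.AsyncTask;
import android.widget.ImageView;

import java.lang.ref.WeakReference;

public class BitmapTask extends AsyncTask<String, Void, Bitmap> {
    // Hold the view weakly to reduce the risk of leaking the Activity.
    private final WeakReference<ImageView> targetRef;

    public BitmapTask(ImageView target) {
        this.targetRef = new WeakReference<>(target);
    }

    @Override
    protected Bitmap doInBackground(String... params) {
        // Runs on the worker thread (background priority).
        if (isCancelled()) {
            return null; // cancel() only takes effect if we cooperate like this
        }
        return BitmapFactory.decodeFile(params[0]);
    }

    @Override
    protected void onPostExecute(Bitmap bitmap) {
        // Runs on the UI thread.
        ImageView target = targetRef.get();
        if (target != null && bitmap != null) {
            target.setImageBitmap(bitmap);
        }
    }
}

// Usage: new BitmapTask(imageView).execute("/sdcard/photo.jpg");
```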

Unlike new Thread(), AsyncTask provides several API callbacks that strictly regulate the interaction between the worker thread and the UI thread. Most of our business scenarios fit this pattern: for example, reading a picture from disk and scaling it should run on the worker thread, while drawing it into the ImageView must switch back to the UI thread.

AsyncTask's callbacks also give us a chance to interrupt the task, which makes task-state management more flexible than with new Thread(). Note, though, that AsyncTask's cancel() does not terminate the running task: the developer must check the cancelled state inside the task and decide whether to abort.

AsyncTask also implicitly holds a reference to its outer class object, so take special care to avoid accidental memory leaks.

AsyncTask has been criticized by many developers because its execution behavior, serial or parallel, differs across system versions. That really is a flaw: in most multithreaded scenarios you need to know clearly whether tasks run serially or in parallel.

Its thread priority is background, so it has little impact on the execution of the UI thread.

HandlerThread

In scenarios that need finer control over multiple tasks and more frequent thread switching, both new Thread() and AsyncTask fall short. HandlerThread can meet these needs and more.

HandlerThread combines the concepts of Handler, Thread, Looper and MessageQueue. The Handler is the thread's external interface: new Messages or Runnables are posted to the worker thread through the Handler, and when the Looper pulls a new task from the MessageQueue it is executed on the worker thread. The different post methods give us fine control over tasks, both when they run and in what order. HandlerThread's biggest advantage is that it introduces the MessageQueue, which can be used to manage a queue of tasks.
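A minimal sketch of a long-lived HandlerThread used as a serial task queue; the class name WorkerQueue and its helper methods are illustrative assumptions:

```java
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;
import android.os.Process;

public class WorkerQueue {
    // One long-lived worker thread with its own Looper and MessageQueue.
    private final HandlerThread workerThread =
            new HandlerThread("worker", Process.THREAD_PRIORITY_BACKGROUND);
    private final Handler workerHandler;
    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    public WorkerQueue() {
        workerThread.start();
        workerHandler = new Handler(workerThread.getLooper());
    }

    public void submit(Runnable work) {
        // Tasks are queued and executed serially on the worker thread.
        workerHandler.post(work);
    }

    public void submitThenNotifyUi(final Runnable work, final Runnable uiCallback) {
        workerHandler.post(new Runnable() {
            @Override
            public void run() {
                work.run();
                // Switch back to the UI thread to deliver the result.
                mainHandler.post(uiCallback);
            }
        });
    }

    public void shutdown() {
        // Finish pending messages, then let the thread exit.
        workerThread.quitSafely();
    }
}
```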

There is only one thread behind a HandlerThread, so tasks execute serially. Serial is safer than parallel: there are no thread-safety problems between tasks.

The thread created by a HandlerThread stays alive, with its Looper continuously checking the MessageQueue. This differs from new Thread() and AsyncTask: reusing the same thread instance avoids repeatedly creating and destroying thread-related objects.

Compared with new Thread() and AsyncTask, HandlerThread requires writing more code, but in return it does better in practicality, flexibility and safety.

ThreadPoolExecutor

new Thread() and AsyncTask suit single-task scenarios, and HandlerThread suits processing multiple tasks serially. When multiple tasks need to run in parallel, ThreadPoolExecutor is the better choice.
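A minimal sketch of a bounded ThreadPoolExecutor for parallel tasks; the core and maximum sizes, keep-alive time and queue capacity below are illustrative choices, not recommendations from the article:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ParallelExecutor {
    private static final int CPU_COUNT = Runtime.getRuntime().availableProcessors();

    // Core threads stay around; extra threads up to the maximum are created
    // only when the bounded queue fills, and are reclaimed after 30s of idleness.
    public static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            CPU_COUNT,                               // core pool size
            CPU_COUNT * 2,                           // maximum pool size
            30, TimeUnit.SECONDS,                    // keep-alive for non-core threads
            new LinkedBlockingQueue<Runnable>(128)); // bounded task queue

    public static void execute(Runnable task) {
        POOL.execute(task);
    }
}
```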

A thread pool avoids creating and destroying threads repeatedly, which obviously performs better, but the concurrency a pool brings is often where the difficulty and complexity begin, and where code starts to degrade and slip out of control. Bugs caused by parallel execution tend to be intermittent and hard to debug, and once they appear they consume a lot of development effort.

Compared with HandlerThread, a thread pool is more flexible for handling multiple tasks, but it also brings greater complexity and uncertainty.

IntentService

It has to be said that Android's API design is very fine-grained: the same job can be done with several different classes. IntentService is yet another way to get a worker thread. As the name suggests, this worker thread carries the properties of a Service. Unlike AsyncTask, it does not interact with the UI thread, and unlike HandlerThread, its worker thread does not stay alive forever. Behind IntentService there is in fact a HandlerThread processing the message queue serially, as can be seen from IntentService's onCreate() method:
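Roughly, the framework's onCreate() does something like the following (simplified; field names such as mServiceLooper and mServiceHandler come from the framework source, and details vary across versions):

```java
@Override
public void onCreate() {
    super.onCreate();
    // IntentService creates its own HandlerThread and processes incoming
    // intents serially on that thread's Looper.
    HandlerThread thread = new HandlerThread("IntentService[" + mName + "]");
    thread.start();

    mServiceLooper = thread.getLooper();
    mServiceHandler = new ServiceHandler(mServiceLooper);
}
```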

Once all messages have been processed, however, the worker thread ends automatically. IntentService can therefore be seen as a combination of Service and HandlerThread, suitable for running UI-independent tasks on a worker thread.
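A minimal usage sketch, assuming a UI-independent upload task; the class name UploadService and the upload() helper are illustrative:

```java
import android.app.IntentService;
import android.content.Intent;

// Each Intent is handled serially on the IntentService's worker thread,
// and the service stops itself when the queue is empty.
public class UploadService extends IntentService {

    public UploadService() {
        super("UploadService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Runs on the IntentService's worker thread.
        String path = intent.getStringExtra("path");
        upload(path);
    }

    private void upload(String path) {
        // Placeholder for the actual upload work.
    }
}
```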

Summary

Although Android offers many ways to start a thread, in the end they all map to pthreads under Linux, and business design still cannot escape the basic concepts surrounding threads: execution order, scheduling policy, life cycle, serial versus parallel, synchronous versus asynchronous, and so on. Once you understand how threads behave under each of these APIs, designing the thread model for a specific piece of business comes naturally. That thread model should be designed with the breadth of a whole-app perspective; individual business modules should not each go their own way. That is all for this article. I hope it helps you in Android development; if you have any questions, feel free to leave a comment for discussion.
