How can taking a scholar's notes count as "stealing"? An Android multithreading summary from a big-company developer
I picked up this multithreading summary from a senior developer's desk and, combining it with the material on hand, compiled it into a comprehensive knowledge-system PDF.
A thread is a subtask running within a process and is the smallest unit that the operating system schedules.
Thread states:
New: newly created; the thread object has been created with new, but start() has not been called.
Runnable: start() has been called; the thread may or may not actually be running, depending on OS scheduling.
Blocked: inactive because the thread is blocked on a lock, i.e. waiting to enter a method or code block modified by the synchronized keyword (waiting to acquire the lock).
Waiting: inactive, running no code, waiting for the thread scheduler, e.g. after calling wait().
Timed Waiting: waiting with a timeout, e.g. after sleep(timeout) or wait(timeout); the thread returns when the specified time elapses.
Terminated: the thread has finished, whether by normal or abnormal termination.
According to Sun's official documentation, calling the Thread.stop() method is unsafe, because two things happen when it is called:
1. Stopping a thread causes it to unlock all of the monitors it has locked (the ThreadDeath error propagates up the stack, releasing each lock as it goes); if any object protected by those monitors was in an inconsistent state, the damaged object becomes visible to other threads.
2. Unlike other unchecked exceptions, ThreadDeath kills the thread silently, so the user gets no warning that the program may be corrupted.
Atomicity: the java.util.concurrent.atomic package provides classes for atomic operations, such as AtomicInteger, AtomicBoolean, AtomicLong, and AtomicReference.
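As a quick illustration (a minimal sketch of mine, not from the original summary), an AtomicInteger counter stays correct under contention without any lock:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    // incrementAndGet() is a single atomic read-modify-write,
    // so no explicit lock is needed even under contention.
    private static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementAndGet();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // always 20000; a plain int++ could lose updates
    }
}
```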
Visibility: visibility between threads, i.e. a state modified by one thread is visible to another. A variable modified with volatile guarantees visibility: every write is flushed to main memory immediately, so other threads can see it. Ordinary shared variables cannot guarantee this, because a modified value is not necessarily written back to main memory right away, and when it is written back is undefined, so another thread may read a stale value.
Ordering: instruction reordering in Java (both compiler reordering and runtime reordering) can optimize code, but in multithreaded programs it can break the correctness of concurrent execution. volatile guarantees ordering by prohibiting instruction reordering. In short, volatile guarantees visibility and ordering but not atomicity. In some cases it offers better performance and scalability than a lock and can replace the synchronized keyword and simplify the code, but only if its usage conditions are strictly followed.
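One usage pattern that does meet those conditions is a one-way status flag written by a single thread; here is a small sketch (my example, with hypothetical names):

```java
public class VolatileFlag {
    // volatile guarantees the writer's store is visible to the reader
    // and cannot be reordered past earlier writes; without it the
    // reader loop might spin forever on a cached 'false'.
    private static volatile boolean stopped = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopped) {
                // do work
            }
            System.out.println("worker sees stopped == true, exiting");
        });
        worker.start();
        Thread.sleep(100);
        stopped = true;   // a single write, not a read-modify-write, so volatile suffices
        worker.join();
    }
}
```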
Working principle of the thread pool: a thread pool reduces the number of thread creations and destructions and thus the consumption of system resources. When a task is submitted to the thread pool: if the number of running threads is below the core pool size, a new core thread is created to run it; otherwise the task is placed in the work queue; if the queue is full and the thread count is below the maximum pool size, a non-core thread is created to run it; if the maximum has also been reached, the task is handed to the rejection (saturation) policy.
1. FixedThreadPool: a reusable thread pool with a fixed number of threads. There are only core threads, no non-core threads, and the core threads are never recycled. When a task arrives, it is executed by an idle core thread if one exists; otherwise it is added to the queue.
2. SingleThreadExecutor: a single-thread pool. There is only one core thread and no non-core threads. When a task arrives, a thread is created to execute it if none is running; if one is running, the task joins the queue and waits. This guarantees that all tasks execute in order on one thread. The only difference from FixedThreadPool is the thread count.
3. CachedThreadPool: a thread pool that creates threads on demand. There are no core threads, and the number of non-core threads can be up to Integer.MAX_VALUE. Each submitted task is executed by an idle thread if one exists; otherwise a new thread is created to execute it. It suits a large number of short-lived tasks that need immediate processing.
4. ScheduledThreadPoolExecutor: inherits from ThreadPoolExecutor and is used to execute tasks after a delay or periodically. The number of core threads is fixed, and the total number of threads can be up to Integer.MAX_VALUE.
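For reference, here is a sketch (mine, not part of the original summary) of how these four pools are created via the Executors factory methods:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ExecutorService fixed  = Executors.newFixedThreadPool(4);    // fixed core threads, unbounded queue
        ExecutorService single = Executors.newSingleThreadExecutor(); // one core thread, tasks run in order
        ExecutorService cached = Executors.newCachedThreadPool();     // no core threads, grows on demand
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2); // delayed/periodic tasks

        fixed.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));
        scheduled.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);

        fixed.shutdown();
        single.shutdown();
        cached.shutdown();
        scheduled.shutdown();
    }
}
```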
Why is thread synchronization needed? When multiple threads operate on the same variable, there is the question of when one thread's change becomes visible to another, i.e. visibility. Each thread holds a copy of the main-memory variable in its working memory. When it updates the variable, it first updates its own copy and later writes the value back to main memory, but whether and when that write-back happens is undefined. As a result, another thread operating on the variable may still read the old value from main memory, so the two threads fall out of sync.
Thread synchronization ensures the visibility and atomicity of multithreaded operations. For example, if we wrap a section of code with the synchronized keyword, we want its effects to be immediately visible to another thread once it finishes, so that the next thread sees the previous thread's updates; and we want the section to be atomic: it may involve several operations, and we want them to complete as a unit without interruption. The lock-based synchronization mechanism achieves both. We usually say synchronized is used for multithreaded synchronization, but strictly speaking synchronized only provides mutual exclusion, while the object's wait() and notify() methods provide thread coordination.
The JVM implements thread synchronization through the monitor object. When multiple threads request a synchronized method or block at the same time, the monitor maintains several virtual logical data structures to manage them: a newly arriving thread is placed in the entry queue and blocked. When the lock-holding thread releases the lock, the queued threads compete for it (synchronized is an unfair lock, discussed below). If the running thread calls the object's wait(), it releases the lock and enters the wait set; after another thread calls the object's notify() or notifyAll(), waiting threads move back to the entry queue. That is the general logic.
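To make the mutual-exclusion-plus-coordination distinction concrete, here is a minimal one-slot producer/consumer sketch of my own (all names hypothetical):

```java
public class OneSlotBuffer {
    private Integer slot = null; // shared state, guarded by 'this'

    public synchronized void put(int value) throws InterruptedException {
        while (slot != null) {
            wait();              // releases the monitor until notified
        }
        slot = value;
        notifyAll();             // wake consumers waiting on this monitor
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) {
            wait();
        }
        int value = slot;
        slot = null;
        notifyAll();             // wake producers
        return value;
    }

    public static void main(String[] args) {
        OneSlotBuffer buf = new OneSlotBuffer();
        new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) buf.put(i);
            } catch (InterruptedException ignored) {}
        }).start();
        new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) System.out.println(buf.take());
            } catch (InterruptedException ignored) {}
        }).start();
    }
}
```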
(1) ArrayList: ArrayList is a generic class whose underlying structure is an array. The advantage of an array is fast random access: if you frequently access elements by index, an ArrayList-backed list is more efficient. The disadvantage is that inserting or deleting at a given index is slow, and the smaller the index, the lower the efficiency, because inserting at an index requires shifting that element and every element after it back by one position.
(2) LinkedList: LinkedList is a generic class whose underlying structure is a doubly linked list, so it outperforms ArrayList on insertions and deletions, but because of the linked structure its random access is worse than ArrayList's.
ArrayList is a linear table (array):
get(index): reads the subscript directly, O(1)
add(e): appends at the end, O(1)
add(index, e): inserts at a position; the elements after it must shift back, O(n)
remove(index): deletes an element; the elements after it must shift forward one by one, O(n)
LinkedList operates on a linked list:
get(index): must traverse from an end to the position, O(n)
add(e): appends at the tail, O(1)
add(index, e): must first walk to the position, after which insertion is just pointer redirection, O(n)
remove(): unlinking is just pointer redirection, O(1) once the node is located
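To make the trade-off concrete, a small demo of mine (timings will vary by machine): inserting at the head of a large ArrayList shifts every element, while a LinkedList only relinks nodes; random access is the reverse trade-off:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListInsertDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 100_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }

        long t0 = System.nanoTime();
        arrayList.add(0, -1);           // O(n): shifts 100,000 elements
        long t1 = System.nanoTime();
        linkedList.add(0, -1);          // O(1) here: the head needs no traversal
        long t2 = System.nanoTime();

        System.out.println("ArrayList  head insert: " + (t1 - t0) + " ns");
        System.out.println("LinkedList head insert: " + (t2 - t1) + " ns");

        // Random access is the reverse trade-off:
        // arrayList.get(50_000) is O(1); linkedList.get(50_000) walks ~50,000 nodes.
    }
}
```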
The underlying storage structure of HashMap is an Entry array, where each bucket holds a singly linked list. On a hash collision, HashMap resolves the conflict by chaining. Because HashMap's put method is not synchronized, its resize method is not synchronized either: during resizing, a new, larger array is created, all key-value pairs of the old array are rehashed and written into the new array, and the table reference is then pointed at the new array.
When multiple threads detect at the same time that the HashMap needs resizing, they all call the resize operation, each generating a new array and rehashing into the map's underlying table. In the end, only the new array produced by the last thread is assigned to the table variable, and the data written by the other threads is lost. Moreover, if some threads have finished the assignment while others have just started and take the already-assigned table as the source array, further problems arise: the resize can even leave the linked list with a circular structure.
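For this reason a map shared across threads should not be a plain HashMap; a minimal sketch (my addition) using the thread-safe alternative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        // ConcurrentHashMap synchronizes its internal resize/rehash,
        // so concurrent puts never lose buckets or corrupt the table.
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        int threads = 4;
        CountDownLatch done = new CountDownLatch(threads);

        for (int t = 0; t < threads; t++) {
            final int base = t * 10_000;
            new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    map.put(base + i, i);   // forces several resizes under contention
                }
                done.countDown();
            }).start();
        }
        done.await();
        System.out.println(map.size()); // always 40000; a plain HashMap may print less, or hang
    }
}
```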
Compared with other IPC mechanisms such as message queues, shared memory, pipes, and semaphores, Binder needs only one memory copy for the target process to read the updated data, making it as efficient as shared memory; most other IPC mechanisms need two memory copies. The principle of Binder's single copy: process A, the Binder client, copies the data from its user space into the Binder driver's kernel space before the IPC call. Because process B mapped (mmap) the Binder driver's kernel buffer into its own process address space when it opened the Binder device (/dev/binder), process B can directly see changes in that kernel buffer.
There are two problems with traditional IPC mechanisms: first, performance, since a transfer needs two memory copies (sender's user space to kernel space, then kernel space to receiver's user space); second, security, since traditional IPC cannot obtain a reliable identity of the calling process (the caller fills in its own UID/PID, which can be forged), whereas Binder has the kernel supply the caller's UID/PID.
The Java Memory Model (JMM) is itself an abstract concept that does not physically exist. It describes a set of rules or specifications that define how the various variables in a program (instance fields, static fields, and the elements of array objects) are accessed.
Since the entity that runs a program on the JVM is a thread, the JVM creates a working memory (sometimes called stack space) for each thread when it is created, to hold the thread's private data. The Java Memory Model stipulates that all variables are stored in main memory, a shared region accessible to all threads, but a thread's operations on a variable (reads, assignments, etc.) must happen in its working memory: it first copies the variable from main memory into its own working memory, operates on the copy, and then writes the variable back to main memory; it cannot operate directly on the variable in main memory. The working memory holds copies of main-memory variables, and because working memory is each thread's private data area, threads cannot access each other's working memory; communication (value passing) between threads must go through main memory.
The class loading process has seven phases: loading, verification, preparation, resolution, initialization, use, and unloading, described one by one below.
1. Loading: obtain the binary byte stream that defines the class and generate the class's java.lang.Class object.
2. Verification: ensure the information in the class file's byte stream conforms to the JVM specification and does no harm to the JVM.
3. Preparation: allocate memory for class variables and set their initial default values.
4. Resolution: replace symbolic references in the constant pool with direct references.
5. Initialization: unlike preparation, this initializes class variables and other resources according to the plan the programmer wrote in the program, including static {} blocks, constructors, and parent-class initialization.
6. Use: execution proceeds according to the behavior defined by the program.
7. Unloading: performed by the GC.
Class loaders look up classes using the parent delegation model: when asked to load a class, a loader first checks whether the class has already been loaded; if not, instead of searching by itself, it delegates the request to its parent loader, recursing upward until the request reaches the bootstrap class loader at the top. If the bootstrap loader finds the class, it is returned directly; if not, the search falls back down level by level, and only if no parent finds it does the original loader finally search by itself.
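Here is a toy model of mine of that delegation order (the real logic lives in java.lang.ClassLoader.loadClass; this sketch omits the "already loaded" cache and the real classpath search):

```java
// Toy class loader illustrating parent delegation (a sketch, not JDK source).
class DelegatingLoader {
    private final DelegatingLoader parent;   // null means bootstrap level
    private final String prefix;             // toy stand-in for this loader's search scope

    DelegatingLoader(String prefix, DelegatingLoader parent) {
        this.prefix = prefix;
        this.parent = parent;
    }

    // Returns the loader that ends up loading the class, or null if none can.
    DelegatingLoader load(String className) {
        if (parent != null) {
            DelegatingLoader byParent = parent.load(className); // always delegate upward first
            if (byParent != null) return byParent;
        }
        return canFind(className) ? this : null;                // search ourselves only last
    }

    boolean canFind(String className) {
        return className.startsWith(prefix); // toy rule replacing the real classpath lookup
    }

    public static void main(String[] args) {
        DelegatingLoader bootstrap = new DelegatingLoader("java.", null);
        DelegatingLoader ext = new DelegatingLoader("javax.", bootstrap);
        DelegatingLoader app = new DelegatingLoader("com.", ext);

        System.out.println(app.load("java.lang.String").prefix); // "java." - bootstrap wins
        System.out.println(app.load("com.example.Foo").prefix);  // "com."  - falls back to app
    }
}
```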
1. Mutual exclusion: a resource can be used by only one process at a time.
2. Hold and wait: a process already holds at least one resource but makes a new request for a resource held by another process; the requesting process blocks without releasing the resources it holds.
3. No preemption: resources a process has obtained cannot be forcibly taken away before the process finishes with them; they can be released only by the process itself, voluntarily.
4. Circular wait: several processes form a head-to-tail cycle, each waiting for a resource held by the next.
These four conditions are necessary for deadlock: whenever a deadlock occurs, all of them hold, and if any one of them is violated, deadlock cannot occur.
Deadlock avoidance: the system dynamically checks each resource request a process issues and decides, based on the check, whether to allocate the resource. If the system could deadlock after the allocation, the request is refused; otherwise it is granted. This is a dynamic strategy that keeps the system out of deadlock states.
During dynamic resource allocation, some method is used to prevent the system from entering an unsafe state, thereby avoiding deadlock. Generally the mutual exclusion condition cannot be broken, so deadlock prevention mainly starts from the other three conditions:
(1) Break hold-and-wait: do not allow a process to request other resources while it already holds some; that is, find a way to stop a process from applying for new resources while holding resources. Method 1: require every process, before it starts running, to request in one shot all the resources it will need during its entire run. Method 2: require a process to release the resources it holds before making a new resource request.
(2) Break no-preemption: allow resources to be preempted. Method 1: if a process holding some resources makes a further resource request that is refused, it must release the resources it originally held; if necessary, it can later request those resources again together with the new ones. Method 2: if a process requests a resource currently held by another process, the operating system may preempt the other process and make it release the resource; this works only if the two processes have different priorities.
(3) Break circular wait: order all resources in the system linearly and give each a serial number, then require processes to request resources in increasing order of serial number: a new request is permitted only if the serial number of the requested resource is greater than those of the resources the process already holds.
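The same idea is how deadlock is usually avoided with Java locks; a sketch of mine (hypothetical lock names): if every thread acquires locks in one fixed global order, a circular wait can never form:

```java
public class LockOrdering {
    private static final Object LOCK_A = new Object(); // global order: A before B
    private static final Object LOCK_B = new Object();

    static void transferForward() {
        synchronized (LOCK_A) {          // both threads take A first...
            synchronized (LOCK_B) {      // ...then B, so no cycle can form
                System.out.println(Thread.currentThread().getName() + ": A -> B");
            }
        }
    }

    static void transferBackward() {
        // Even when the logical operation runs "the other way", we still
        // acquire in the fixed order A, B instead of B, A.
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                System.out.println(Thread.currentThread().getName() + ": still A -> B");
            }
        }
    }

    public static void main(String[] args) {
        new Thread(LockOrdering::transferForward, "t1").start();
        new Thread(LockOrdering::transferBackward, "t2").start();
    }
}
```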
Using the banker's algorithm to avoid deadlock: before allocating resources, the banker's algorithm checks whether the allocation would lead the system into deadlock; if it would, the allocation is refused, otherwise it is granted. Under the banker's algorithm, when a process requests resources, the system allocates them roughly as follows: the request must not exceed the process's declared maximum need or the resources currently available; the system then tentatively makes the allocation and runs a safety check, committing the allocation only if a safe execution sequence for all processes still exists, and otherwise rolling it back and making the process wait.
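A compact sketch of the safety check at the heart of the banker's algorithm (my illustration; the matrices are the classic textbook example, not data from this article):

```java
import java.util.Arrays;

public class BankerSafetyCheck {
    // Returns true if some order exists in which every process can finish.
    static boolean isSafe(int[] available, int[][] max, int[][] allocation) {
        int n = max.length, m = available.length;
        int[] work = Arrays.copyOf(available, m);
        boolean[] finished = new boolean[n];

        for (int done = 0; done < n; ) {
            boolean progressed = false;
            for (int p = 0; p < n; p++) {
                if (finished[p]) continue;
                boolean canFinish = true;
                for (int r = 0; r < m; r++) {
                    // need[p][r] = max[p][r] - allocation[p][r] must fit in work
                    if (max[p][r] - allocation[p][r] > work[r]) { canFinish = false; break; }
                }
                if (canFinish) {
                    for (int r = 0; r < m; r++) work[r] += allocation[p][r]; // p releases everything
                    finished[p] = true;
                    done++;
                    progressed = true;
                }
            }
            if (!progressed) return false; // no process can finish: unsafe state
        }
        return true;
    }

    public static void main(String[] args) {
        int[] available = {3, 3, 2};
        int[][] max = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
        int[][] allocation = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        System.out.println(isSafe(available, max, allocation)); // true: this state is safe
    }
}
```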
When an app is launched, AMS checks whether the process the application needs already exists. If not, it asks the zygote process to start the required application process. Zygote receives the AMS request and creates the application process via fork; the new process thereby obtains its own virtual machine instance and creates a Binder thread pool (ProcessState.startThreadPool()) and a message loop (ActivityThread's Looper.loop()). The app process then sends an attachApplication request to the system_server process over Binder IPC; after receiving the request, system_server does a series of preparations and then sends a scheduleLaunchActivity request back to the app process over Binder IPC. The app process's Binder thread (ApplicationThread) receives it and posts a LAUNCH_ACTIVITY message to the main thread through the Handler; on receiving the message, the main thread creates the target Activity via reflection and calls back methods such as Activity.onCreate(). At this point the app has officially started and enters the Activity lifecycle, running onCreate/onStart/onResume; after UI rendering you can see the app's main interface.
The core of Android's single-thread model is that the UI may be manipulated only on the UI thread (the main thread). When a program first starts, Android starts a corresponding main thread, which is responsible for handling UI-related events, such as user key events, touch events, and screen drawing, and for dispatching those events to the corresponding components. That is why the main thread is often called the UI thread.
When developing Android applications, we must abide by the single-thread model: Android UI operations are not thread-safe, and they must be executed on the UI thread.
Android's single-thread model has two rules: 1. Do not block the UI thread. 2. Do not access the Android UI toolkit from outside the UI thread.
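A minimal sketch of the usual pattern (mine; DemoActivity and its view are hypothetical): do the heavy work on a background thread, then post the UI update back with runOnUiThread():

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// Hypothetical Activity sketch: heavy work runs off the main thread,
// and the UI update is posted back with runOnUiThread().
public class DemoActivity extends Activity {
    private TextView statusTextView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        statusTextView = new TextView(this);
        setContentView(statusTextView);

        new Thread(() -> {
            String result = compute();                // off the UI thread: no view access here
            runOnUiThread(() ->
                    statusTextView.setText(result));  // back on the UI thread: safe to touch views
        }).start();
    }

    private String compute() {
        try { Thread.sleep(500); } catch (InterruptedException ignored) {}
        return "done";
    }
}
```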
ListView uses the RecycleBin recycling mechanism, which makes it efficient at displaying lightweight lists.
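The recycling shows up in an adapter's getView(): when ListView hands back a scrapped row as convertView, reuse it instead of creating a new view. A conventional sketch of mine (NamesAdapter is hypothetical):

```java
import android.content.Context;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.TextView;

// Minimal adapter sketch: convertView is a row recycled from the RecycleBin.
public class NamesAdapter extends BaseAdapter {
    private final Context context;
    private final String[] names;

    public NamesAdapter(Context context, String[] names) {
        this.context = context;
        this.names = names;
    }

    @Override public int getCount() { return names.length; }
    @Override public Object getItem(int position) { return names[position]; }
    @Override public long getItemId(int position) { return position; }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        TextView row;
        if (convertView == null) {
            row = new TextView(context);    // no recycled row available: create one
        } else {
            row = (TextView) convertView;   // reuse the scrapped row instead of creating a new view
        }
        row.setText(names[position]);
        return row;
    }
}
```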
The bucket index of an element in a HashMap is computed by ANDing the key's hashcode with (array length - 1). By default the array size is 16, i.e. 2 to the 4th power. If you customize the initial array length of a HashMap, you should also set it to a power of 2, because that is most efficient: when the array length is a power of 2, different keys are less likely to compute the same index, so the data is distributed evenly across the array and collisions are less likely; accordingly, a query rarely has to traverse the linked list at a bucket, so lookups are faster.
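A quick demonstration of the indexing trick (my sketch; note the real JDK additionally XORs the hash's high bits into the low bits via h ^ (h >>> 16) before the AND):

```java
public class HashIndexDemo {
    public static void main(String[] args) {
        int length = 16;                       // power of 2, so (length - 1) is 0b1111
        String[] keys = {"a", "b", "hash", "map"};
        for (String key : keys) {
            int h = key.hashCode();
            int index = h & (length - 1);      // equivalent to h % length (for non-negative h),
                                               // but faster, and uses every low bit when length is 2^n
            System.out.println(key + " -> bucket " + index);
        }
        // If length were not a power of 2, (length - 1) would contain 0-bits,
        // and those bit positions of the hash could never influence the index,
        // so more keys would collide in fewer buckets.
    }
}
```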