Notes: implementing Android gesture recognition (using MotionEvent)

Abstract

This article is a complete set of study notes on gesture recognition and input event handling. It covers input events, the ways an InputEvent can be handled, the concept and use of the touch event class MotionEvent, the action classification of touch events, and multi-touch. Based on examples and the API, it walks through the general process of recognizing and handling touch gestures, introduces the related classes GestureDetector, Scroller and VelocityTracker, and finally analyzes the recognition of gestures such as drag and scale.

Input source classification

Although Android is a complete system in its own right, it mainly runs on mobile devices, which means that most of the apps we open on it are client programs: their main job is to display an interface and handle interaction, much like applications on the web front end or the desktop.

As a "client program", most of the functions written are to deal with user interaction. Different systems (corresponding to different devices) can support different user interactions. Android can run on a variety of devices. From the perspective of interactive input, inputdevice.source_ CLASS_ The XXX constant identifies the devices of several different input sources supported by the SDK. There are: touch screen, physical / virtual buttons, rocker, mouse, etc. the following discussion is aimed at the most extensive interaction - source_touchscreen. From the perspective of interaction design, touch screen devices are all kinds of gestures, including click, double click, slide, drag, zoom and other interaction definitions. In essence, they are the combination of different modes of basic touch events.

The Android touch-screen system supports single-point and multi-point (usually finger) touch, and each point can be pressed, moved and lifted.

Handling touch-screen interaction means classifying touch operations into different gestures (gesture recognition) and then handling each gesture according to the business logic. To respond to different gestures we first have to recognize them. Recognition means tracking and collecting, in real time, the "basic events" the system delivers to describe the user's actions on the screen, and then determining the higher-level "gestures" from this data (the event sequence).

android.view.GestureDetector provides detection of several of the most common gestures, with callbacks such as onScroll, onLongPress and onFling. An app can recognize interactions like drag and scale by implementing its own gesture-detector type as needed.

Gesture recognition is the mainstream interaction and input mode of touch-screen devices such as smartphones and tablets, quite different from the keyboard and mouse on a PC.

Input event

Input events generated by user interaction are ultimately represented by subclasses of InputEvent: KeyEvent (used to report key and button events) and MotionEvent (used to report movement events from the mouse, pen, finger or trackball).

There are many places where an InputEvent can be received. Within the framework, events propagate through the Activity, the Window and the Views (a path down the ViewTree: a stack of Views).

In most cases, these input events are received and handled in the specific View the user is interacting with.

There are two ways to handle events in a View: add an event listener, or override the handler methods (event handlers). The former is more convenient; the latter is used when customizing a View, and a custom View can also define its own handler methods or expose its own listener interface as needed.

Event listeners

An event listener interface contains only a single callback method, for example:
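One such interface is View.OnTouchListener, which (as defined in the framework) declares a single callback:

    public interface OnTouchListener {
        // called when a touch event is dispatched to the view the listener is attached to
        boolean onTouch(View v, MotionEvent event);
    }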

In an Activity or similar place, you create an anonymous class or implement the corresponding interface (which saves defining and allocating a separate type and object), and then call View.setOn...Listener() to register the listener.

According to Android's UI (input) event dispatch mechanism, the listener's callback is executed before the corresponding handler methods. For callbacks that return a boolean, the return value indicates whether the event should continue to propagate, so it must be chosen carefully, otherwise it will block the rest of the processing.

For example, after an OnTouchListener is set on a View, if its onTouch callback returns true, then once the callback has run inside the View's boolean dispatchTouchEvent(MotionEvent event), the View's handler method boolean onTouchEvent(MotionEvent event) will not be executed.

Event handler

An event handler is the method invoked by default when event dispatch passes through the current View. It normally implements the behavior that belongs to that specific View (note that a listener is optional and may not even be defined, while every View provides handling for the events it cares about).

Event dispatch deserves an article of its own, so it is only sketched here. It is enough to know that an input event travels from top to bottom through the "relevant" Views along the ViewTree, and each View either handles the event or passes it on. Before the event reaches the ViewTree it passes through the Activity and the Window; its ultimate origin, of course, is a hardware event collected by the system, which is sent from the event-management layer to an interface-related object involved in the interaction, where propagation begins.

The View class includes the following event handler methods:
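The ones usually listed (all real View methods, signatures abbreviated) are:

    boolean onKeyDown(int keyCode, KeyEvent event)     // a hardware key was pressed
    boolean onKeyUp(int keyCode, KeyEvent event)        // a hardware key was released
    boolean onTrackballEvent(MotionEvent event)         // the trackball moved
    boolean onTouchEvent(MotionEvent event)             // a touch-screen motion event occurred
    void onFocusChanged(boolean gainFocus, int direction, Rect previouslyFocusedRect)  // focus changed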

These handler methods do their work from the standpoint of the current node in the event-dispatch pipeline: the handling only needs to consider the functionality provided by the current View and tell the caller whether processing is finished or whether the event still needs to be passed on. A ViewGroup additionally has the job of passing events to its child views, and the following methods are closely tied to event passing:
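The ViewGroup methods in question are:

    boolean dispatchTouchEvent(MotionEvent ev)                   // distributes an event to this group or one of its children
    boolean onInterceptTouchEvent(MotionEvent ev)                // lets the group inspect (and steal) events headed for a child
    void requestDisallowInterceptTouchEvent(boolean disallow)    // lets a child forbid interception by its parents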

Understanding where events can be received and when an event counts as consumed is an important part of UI programming, but the dispatch process of input events is a large and complex topic in its own right. This article focuses on recognizing gestures from touch-screen events, so dispatch is covered only as far as completeness and ordering require.

TouchMode

On a touch-screen device the interface is in touch mode from the moment the user starts touching the screen until the finger leaves it (press, then lift). Broadly speaking, in this state the Views are responding to touch events rather than to KeyEvents, and the two kinds of interaction are quite different. Touch-mode state is maintained across the whole system, including all Window and Activity objects (mainly to control the distribution of touch events). You can check whether the device is currently in touch mode with the View method public boolean isInTouchMode().

Gestures

The process from one or more fingers pressing down until all of them have left the screen is one touch operation. Each operation can be classified into a touch pattern and ultimately defined as a gesture (the definitions of gestures and patterns are a matter of design; users learn them after using any touch-screen device). The main gestures supported by Android are:
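Typically these include tap (click), double tap, long press, scroll (drag / pan), fling (swipe), and pinch (two-finger scale).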

The app needs to respond to these gestures according to the API provided by the system.

Gesture recognition process

To implement responses to gestures you first need to understand how touch events are represented. The gesture recognition process consists of the following steps:
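In outline, as elaborated in the sections below: receive the MotionEvent sequence in onTouchEvent() or an OnTouchListener, extract the action and per-pointer data from each event, match the accumulated data against a gesture pattern (by hand or with GestureDetector / ScaleGestureDetector), and then carry out the corresponding response, using VelocityTracker and Scroller where velocity and scrolling animation are needed.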

MotionEvent

The input events triggered by touch are represented by MotionEvent, which implements the Parcelable interface (an IPC requirement). Almost all current devices support multi-touch, and each finger in contact is treated as a pointer. A MotionEvent records all pointers currently touching, including each one's X and Y coordinates, pressure, contact area and other information.

Each finger press, move and lift produces an event object. Every event corresponds to an "action", represented by the MotionEvent.ACTION_XXX constants:
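The main action constants are:

    ACTION_DOWN          // the first pointer touches the screen; a gesture begins
    ACTION_POINTER_DOWN  // an additional pointer touches the screen
    ACTION_MOVE          // one or more pointers moved
    ACTION_POINTER_UP    // a pointer (not the last one) left the screen
    ACTION_UP            // the last pointer left the screen; the gesture ends
    ACTION_CANCEL        // the gesture was aborted, e.g. a parent intercepted the events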

Each finger's down, move and up generate events. For performance reasons, moving produces a large number of ACTION_MOVE events, and these are delivered "in batches": one MotionEvent can contain the data of several actual ACTION_MOVE samples. Naturally all of these are move actions with the same number of pointers, because adding or removing any pointer triggers a down or up event and therefore breaks the run of consecutive move events.

Within one MotionEvent, the current data is the most recent compared with the batched samples it carries. The batched samples form a time-ordered array, and the newest sample is treated as the current data. The "historical" samples are accessed through the getHistorical* family of methods.

The following is the standard way to read the coordinates of every pointer for all samples contained in the current MotionEvent:
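A sketch along the lines of the official documentation (TAG is an assumed logging tag):

    void dumpEvent(MotionEvent ev) {
        final int historySize = ev.getHistorySize();
        final int pointerCount = ev.getPointerCount();
        for (int h = 0; h < historySize; h++) {
            // batched (historical) samples, older than the current data
            for (int p = 0; p < pointerCount; p++) {
                Log.d(TAG, "pointer " + ev.getPointerId(p) + " at ("
                        + ev.getHistoricalX(p, h) + ", " + ev.getHistoricalY(p, h) + ")");
            }
        }
        for (int p = 0; p < pointerCount; p++) {
            // the newest sample, returned by the plain getX()/getY() accessors
            Log.d(TAG, "pointer " + ev.getPointerId(p) + " at ("
                    + ev.getX(p) + ", " + ev.getY(p) + ")");
        }
    }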

As mentioned earlier, events are classified by action, and each event object carries the data of all pointers. The action is obtained as follows:
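In code this is simply (event is the received MotionEvent):

    int action = event.getActionMasked();   // or event.getAction() for the raw, unmasked value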

getAction() and getActionMasked()

The int returned by getAction() may also encode the pointer index (presumably using bit packing to improve performance, as View.MeasureSpec does): for ACTION_POINTER_DOWN and ACTION_POINTER_UP, the return value contains the index of the pointer that went down or up, and that index can be used as the pointerIndex argument of getPointerId(int), getX(int), getY(int), getPressure(int) and getSize(int). The method getActionIndex() extracts the pointer index, and getActionMasked() returns the action constant with the pointer-index bits removed. With only one finger down, getAction() and getActionMasked() are obviously the same, because the value carries no extra pointer-index data. Use getActionMasked() to obtain the action; it is the more precise choice.
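A minimal sketch of decoding a multi-pointer event (ev is the received MotionEvent):

    int actionMasked = ev.getActionMasked();   // e.g. ACTION_POINTER_DOWN, index bits removed
    int actionIndex  = ev.getActionIndex();    // which pointer triggered ACTION_POINTER_DOWN / _UP
    int pointerId    = ev.getPointerId(actionIndex);
    float x          = ev.getX(actionIndex);   // data of the pointer that triggered the action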

Reading data for a particular pointer is also slightly special, for example obtaining the X coordinate of each pointer:
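For example (ev is the received MotionEvent):

    for (int i = 0; i < ev.getPointerCount(); i++) {
        // i is a pointerIndex (0 .. getPointerCount() - 1), not a pointer id
        float x = ev.getX(i);
        float y = ev.getY(i);
    }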

During a gesture the number of pointers may change. Each pointer is assigned an id at its down event, and that id remains a valid identifier for it until its up or cancel event (when the pointer count changes).

Within a single MotionEvent object, getPointerCount() returns the total number of touching pointers, and the values 0 to getPointerCount() - 1 are the pointer indexes of all current pointers. float getX(int pointerIndex) takes an index and returns the X coordinate of the corresponding pointer; the other methods that take a pointerIndex parameter likewise return other properties of that pointer. If you need to follow the continuous movement of one finger, such as the first finger pressed, obtain its id from its index with int getPointerId(int pointerIndex), record that id, and then on each new MotionEvent look up the index that currently corresponds to the id with int findPointerIndex(int pointerId). This lets you access the properties of the pointer with a given id throughout the event sequence.

Finally, the following MotionEvent methods are used frequently:
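Commonly used methods include:

    getActionMasked() / getActionIndex()    // the action, and the pointer involved in a pointer up/down
    getPointerCount() / getPointerId(int)   // number of pointers, and the id for a given index
    findPointerIndex(int id)                // the current index of a tracked pointer id
    getX(int) / getY(int)                   // coordinates relative to the receiving view
    getRawX() / getRawY()                   // coordinates relative to the screen
    getEventTime() / getDownTime()          // time of this event and of the initial ACTION_DOWN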

Receive event data

The series of MotionEvent objects produced by a gesture is dispatched in turn and passes through a number of UI-related objects. In general they eventually reach the corresponding Activity and the View objects of the current interface that lie under the touch, travelling along the ViewTree through each parent of the View where the event lands.

In the Activity of the current interface you can receive touch events by overriding the Activity's boolean onTouchEvent(MotionEvent event) method. More often, because the View is where UI interaction is actually implemented, events are received in the View's boolean onTouchEvent(MotionEvent event) method. One touch operation sends a series of events, so onTouchEvent is called many times.

You can also set a listener on a specific View object to receive its touch events:
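For example (myView stands for any View instance):

    myView.setOnTouchListener(new View.OnTouchListener() {
        @Override
        public boolean onTouch(View v, MotionEvent event) {
            // inspect the event here (gesture recognition)
            return true;   // true keeps the rest of this touch sequence coming
        }
    });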

Note that, whichever gesture you want to recognize, you must return true for the ACTION_DOWN event. Otherwise, by the calling convention, the current handler is considered to have ignored this touch operation's event sequence, and the subsequent events will not be delivered.

Detect gestures

In the overridden onTouch (or onTouchEvent) callback, gestures are determined from the received event sequence. For example, an ACTION_DOWN followed by a series of ACTION_MOVE events and then an ACTION_UP is usually a scroll/drag gesture. In general, gesture-recognition logic has to be designed carefully: how much offset counts as a real slide, how large a time gap between down and up still counts as a tap, and so on. android.view.GestureDetector provides recognition of the most common gestures. The key classes involved in gesture recognition are introduced below.

GestureDetector

Its job is to recognize onScroll, onFling, onDown(), onLongPress() and similar gestures. You pass the received MotionEvent sequence to the GestureDetector, and it triggers the callback corresponding to each recognized gesture. The process is:

1. Prepare the GestureDetector object and supply a listener providing the callbacks for the various gestures. OnGestureListener is the callback interface for the different gestures and is straightforward to understand.

2. Pass each received event to the GestureDetector in the onTouch (or onTouchEvent) method.

If you only care about some of the GestureDetector callbacks, the listener can extend GestureDetector.SimpleOnGestureListener. You must return true from onDown, otherwise the subsequent events are ignored.
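A minimal sketch of the whole process inside a custom view (GestureView is an illustrative name; imports omitted):

    public class GestureView extends View {
        private final GestureDetector mDetector;

        public GestureView(Context context) {
            super(context);
            // step 1: create the detector with a listener; SimpleOnGestureListener lets you
            // override only the callbacks you need, but onDown() must return true
            mDetector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
                @Override
                public boolean onDown(MotionEvent e) {
                    return true;   // claim the sequence, otherwise later events are ignored
                }

                @Override
                public boolean onScroll(MotionEvent e1, MotionEvent e2, float dx, float dy) {
                    return true;   // respond to scroll / drag here
                }

                @Override
                public boolean onFling(MotionEvent e1, MotionEvent e2, float vx, float vy) {
                    return true;   // respond to fling here
                }
            });
        }

        // step 2: forward every received event to the detector
        @Override
        public boolean onTouchEvent(MotionEvent event) {
            return mDetector.onTouchEvent(event);
        }
    }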

Gesture movement

Gestures fall into moving and non-moving kinds. For example, a tap involves no movement, whereas a scroll requires the finger to move a certain distance. There is a threshold for deciding whether the finger has really moved: the touch slop, obtained via android.view.ViewConfiguration#getScaledTouchSlop, which is the minimum distance for a touch to be considered a slide.

Recognition of non-moving gestures, such as the various taps, mostly comes down to checking time gaps. Moving gestures are slightly more involved; depending on what the feature needs, different aspects of the motion can be measured, such as the offset from the starting point, the direction of movement, and the speed.
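As an illustration, a minimal slop check (event is the received MotionEvent; mDownX and mDownY are assumed fields recorded at ACTION_DOWN):

    int touchSlop = ViewConfiguration.get(getContext()).getScaledTouchSlop();
    float dx = event.getX() - mDownX;
    float dy = event.getY() - mDownY;
    boolean isScroll = (dx * dx + dy * dy) > (float) touchSlop * touchSlop;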

VelocityTracker

Sometimes you are interested in the speed of a gesture's movement. android.view.VelocityTracker computes the movement velocity from the collected event data:
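A sketch of the usual pattern inside onTouchEvent():

    private VelocityTracker mVelocityTracker;

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                if (mVelocityTracker == null) {
                    mVelocityTracker = VelocityTracker.obtain();   // get an instance from the pool
                } else {
                    mVelocityTracker.clear();                      // reuse it for a new gesture
                }
                mVelocityTracker.addMovement(event);
                break;
            case MotionEvent.ACTION_MOVE:
                mVelocityTracker.addMovement(event);
                mVelocityTracker.computeCurrentVelocity(1000);     // units: pixels per second
                float vx = mVelocityTracker.getXVelocity(event.getPointerId(event.getActionIndex()));
                float vy = mVelocityTracker.getYVelocity(event.getPointerId(event.getActionIndex()));
                break;
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                mVelocityTracker.recycle();                        // return the object to the pool
                mVelocityTracker = null;
                break;
        }
        return true;
    }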

Scroller

Without drawing a strict distinction, scrolling can be divided into drag, where the content follows the finger's slide, and fling, the extra decelerating slide that continues after the finger swipes across and leaves the screen.

Often you need to respond to gesture movement, for example a picture that moves (translates) as the finger moves. The simple implementation applies the real-time X and Y offsets in ACTION_MOVE, where the timing of each response is obvious. In other cases you want a smooth sliding effect but must compute the timing and the increment of every step yourself: for example the page-scrolling effect after tapping previous-page / next-page buttons, similar to ViewPager's animation; or, after the finger quickly swipes across the screen, the content should keep sliding and then gradually stop (the fling effect). In such cases the picture has to keep being adjusted over time to produce a scrolling animation, and the timing and offset of each step need to be calculated. Scroller can be used to drive this kind of "smooth move" animation.

android.widget.OverScroller is the recommended choice; it has good compatibility and supports edge effects. Like VelocityTracker, Scroller is a "calculation tool". It supports two sliding effects, startScroll and fling, corresponding to the two examples above. By design it is independent of how the scrolling is actually performed; it only provides the calculations and state checks for the scrolling animation.

The typical process for using Scroller:

Prepare the Scroller object.

Start the scrolling animation at the appropriate moment. The fling effect is usually combined with a GestureDetector: the fling gesture is recognized and the scrolling animation started by calling Scroller.fling() inside OnGestureListener.onFling().

The "smooth sliding effect" turned on by scroller. Flying () can be executed whenever you need to turn on sliding.

On each animation frame, compute the scrolling increment and apply it to the view. In a custom View you can simply rely on android.view.View#postOnAnimation and android.view.View#postInvalidateOnAnimation() to trigger the next animation frame, or use a mechanism such as an Animator to obtain the frame timing. View itself has a computeScroll() method for subclasses to perform the animated scrolling logic, used together with postInvalidateOnAnimation().
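A minimal fling sketch inside a custom view (mMaxScrollX and mMaxScrollY are assumed content bounds):

    private final OverScroller mScroller = new OverScroller(getContext());
    private int mMaxScrollX, mMaxScrollY;   // assumed bounds of the scrollable content

    // typically called from OnGestureListener.onFling(); note the sign: the content
    // scrolls in the direction opposite to the finger's velocity
    private void startFling(int velocityX, int velocityY) {
        mScroller.fling(getScrollX(), getScrollY(), -velocityX, -velocityY,
                0, mMaxScrollX, 0, mMaxScrollY);
        postInvalidateOnAnimation();   // schedule the first animation frame
    }

    @Override
    public void computeScroll() {
        if (mScroller.computeScrollOffset()) {
            // apply the newly computed offsets, then request the next frame
            scrollTo(mScroller.getCurrX(), mScroller.getCurrY());
            postInvalidateOnAnimation();
        }
    }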

ScrollView and HorizontalScrollView provide scrolling themselves, and ViewPager also uses a Scroller to implement its smooth sliding. In general you use Scroller when building custom controls with sliding behavior. Several framework controls use EdgeEffect to draw edge effects.

Multi-Touch

As the introduction to MotionEvent above shows, each finger in contact is treated as a pointer. Most current mobile devices support up to about ten simultaneous touches. Whether to consider multi-touch depends on the View's function: a scroll can usually be done with one finger, whereas a scale requires at least two.

MotionEvent's getPointerId and findPointerIndex methods identify each pointer in the current event data. Given a pointerIndex, the other methods that take it as a parameter return the various values for the corresponding pointer. The pointer id serves as a unique identifier for a pointer while it is on the screen.

For single touch, the action can usually be determined with getAction() in onTouchEvent; for multi-touch, getActionMasked() is required, as explained earlier. The following code fragment shows the general multi-touch API usage:
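A sketch of the general shape:

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        int index = event.getActionIndex();
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:           // first finger down, gesture begins
            case MotionEvent.ACTION_POINTER_DOWN: { // an additional finger down
                int id = event.getPointerId(index); // id of the finger that just went down
                break;
            }
            case MotionEvent.ACTION_MOVE:           // carries data for all current pointers
                for (int i = 0; i < event.getPointerCount(); i++) {
                    float x = event.getX(i);
                    float y = event.getY(i);
                }
                break;
            case MotionEvent.ACTION_POINTER_UP:     // a finger (not the last) went up
            case MotionEvent.ACTION_UP:             // the last finger went up, gesture ends
            case MotionEvent.ACTION_CANCEL:
                break;
        }
        return true;
    }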

The MotionEventCompat class provides some helper methods related to multi-touch; it is the compatibility version.

ViewConfiguration

This class provides UI-related constants for timeouts, sizes and distances. It supplies standard reference values appropriate to the system version and the device environment, such as its resolution and size, so that UI elements offer a consistent interactive experience.
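Some commonly used values, for example:

    ViewConfiguration vc = ViewConfiguration.get(context);
    int touchSlop = vc.getScaledTouchSlop();                    // minimum distance treated as a scroll
    int minFlingVelocity = vc.getScaledMinimumFlingVelocity();  // below this, a gesture is not a fling
    int maxFlingVelocity = vc.getScaledMaximumFlingVelocity();
    int longPressTimeout = ViewConfiguration.getLongPressTimeout();  // ms before a press becomes a long press
    int tapTimeout = ViewConfiguration.getTapTimeout();              // ms within which a press still counts as a tap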

How ViewGroup manages touch events

Event interception

In a plain (non-ViewGroup) View, the responsibility for responding to touch events is simple: recognize the gesture and then execute the interaction logic the View requires. In other words, you only need to process the event sequence in android.view.View#onTouchEvent.

ViewGroup extends View, so it can handle events in onTouchEvent() as needed. At the same time, as the parent of other views, it lays out its child views and has a method, onInterceptTouchEvent(), that controls whether a MotionEvent is passed on to the target child view. Note that a ViewGroup can handle events itself because it is also a full View subclass; what it does depends on the class, for example ViewPager handles horizontal sliding itself but passes vertical sliding to its children. Also note that a ViewGroup may or may not contain child views at the touched location, so some events should in fact be handled by a child while others "land" on the ViewGroup itself.

Related methods

The event-dispatch mechanism is only touched on briefly here. A ViewGroup manages the delivery of MotionEvents through the following methods:

onInterceptTouchEvent() is used to intercept MotionEvents on their way to the target child view (which may itself be a ViewGroup; it is not necessarily the final target of the event, just the next view on the dispatch path after the current ViewGroup). Here you can perform extra work, or even stop the event from being delivered and handle it yourself. If the ViewGroup wants its own onTouchEvent() to handle the gesture, it can override this method and then implement the desired gesture handling in onTouchEvent().

(1) Sequence of events passing through a ViewGroup
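Roughly: an event first reaches the ViewGroup's dispatchTouchEvent(), which asks onInterceptTouchEvent() whether the group wants the event for itself; if it is not intercepted, the event is dispatched to the target child's dispatchTouchEvent(), and only if no child consumes it does it come back to the ViewGroup's own onTouchEvent().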

(2) Return value

Return true to steal motion events from the children and have them dispatched to this ViewGroup through onTouchEvent(). The current target will receive an ACTION_CANCEL event, and no further messages will be delivered here.

Note: the way onInterceptTouchEvent() and onTouchEvent() cooperate in a ViewGroup affects event delivery mainly through the handling of the down event, which determines how the subsequent events are delivered.

ViewGroup inherits View's onTouchEvent() without any changes.

The returned value has the following meanings:

true means the event has been handled (consumed), so it is not passed on any further.

false means it was not handled; the event is then handed back up the dispatch path to each parent in turn, i.e. each parent's onTouchEvent() is executed, until some parent's onTouchEvent() returns true.

requestDisallowInterceptTouchEvent(boolean): called on a parent by a child view. When a child calls it with true, every parent along the view hierarchy from the root down to the target view in the ViewTree is told to set the touch-related flag FLAG_DISALLOW_INTERCEPT. The flag is cleared when false is passed or when the touch operation ends.

When a ViewGroup carries this flag, its default behavior while distributing events in boolean dispatchTouchEvent(MotionEvent ev) is to skip calling onInterceptTouchEvent(), i.e. not to intercept the events.

Extension: dragging and scaling

Drag operation

Android 3.0 and above provide API support for drag-and-drop; see View.OnDragListener. Here, instead, we handle the onTouchEvent() method to respond to the drag gesture and move the target view.

The core of the implementation is detecting the moving distance. By design, the first finger touching the target view triggers the down action; as long as at least one finger remains on the screen, the movement of the tracked finger is detected and used to move the view. The distance moved is the difference between the X and Y coordinates of successive move events of that pointer, and it must be measured on the same pointer. Since multi-touch is allowed, one pointer has to be recorded as the movement reference, called the active pointer. The rule: on the first finger's ACTION_DOWN, record its pointer id as the active pointer; if a finger leaves, pick one of the remaining pointers as the new active pointer.

In ACTION_MOVE, the newly obtained X and Y coordinates are compared with the last recorded ones (whenever the active pointer is set, its X and Y are stored as the last position); the difference is the distance moved.

In summary: set the activePointerId and the last touch position in ACTION_DOWN and ACTION_POINTER_UP; in ACTION_MOVE, read the new position, apply the movement and update the last touch position; finally, clear the recorded pointer id in ACTION_UP and ACTION_CANCEL.
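A sketch of this pattern inside a custom view's onTouchEvent() (the m-prefixed fields are illustrative):

    private int mActivePointerId = MotionEvent.INVALID_POINTER_ID;
    private float mLastX, mLastY;

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        switch (ev.getActionMasked()) {
            case MotionEvent.ACTION_DOWN: {
                mActivePointerId = ev.getPointerId(0);   // the first finger is the reference
                mLastX = ev.getX(0);
                mLastY = ev.getY(0);
                break;
            }
            case MotionEvent.ACTION_MOVE: {
                int index = ev.findPointerIndex(mActivePointerId);
                float dx = ev.getX(index) - mLastX;
                float dy = ev.getY(index) - mLastY;
                // apply dx / dy to the content, e.g. offset the drawn image
                mLastX = ev.getX(index);
                mLastY = ev.getY(index);
                invalidate();
                break;
            }
            case MotionEvent.ACTION_POINTER_UP: {
                int index = ev.getActionIndex();
                if (ev.getPointerId(index) == mActivePointerId) {
                    // the reference finger left; pick another one as the new reference
                    int newIndex = (index == 0) ? 1 : 0;
                    mActivePointerId = ev.getPointerId(newIndex);
                    mLastX = ev.getX(newIndex);
                    mLastY = ev.getY(newIndex);
                }
                break;
            }
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                mActivePointerId = MotionEvent.INVALID_POINTER_ID;
                break;
        }
        return true;
    }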

As you can see, the crux of recognizing a drag gesture is keeping track of the pointer id used as the movement reference, which must stay continuous.

For recognizing and responding to drag, you can also simply use the onScroll() callback of a GestureDetector.

Scroll, drag and pan all refer to the same gesture/operation here.

Scale

You can use ScaleGestureDetector to detect scaling. The following is a code example for recognizing drag and scale together; note the order in which the recognizers consume events:
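A minimal sketch combining the two detectors in a custom view (ZoomView and its fields are illustrative names; imports omitted):

    public class ZoomView extends View {
        private final ScaleGestureDetector mScaleDetector;
        private final GestureDetector mGestureDetector;   // e.g. for drag, as shown earlier
        private float mScaleFactor = 1f;

        public ZoomView(Context context) {
            super(context);
            mScaleDetector = new ScaleGestureDetector(context,
                    new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                        @Override
                        public boolean onScale(ScaleGestureDetector detector) {
                            mScaleFactor *= detector.getScaleFactor();
                            // keep the factor in a sensible range
                            mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));
                            invalidate();
                            return true;
                        }
                    });
            mGestureDetector = new GestureDetector(context,
                    new GestureDetector.SimpleOnGestureListener() {
                        @Override
                        public boolean onDown(MotionEvent e) {
                            return true;
                        }

                        @Override
                        public boolean onScroll(MotionEvent e1, MotionEvent e2, float dx, float dy) {
                            scrollBy((int) dx, (int) dy);   // drag the content
                            return true;
                        }
                    });
        }

        @Override
        public boolean onTouchEvent(MotionEvent ev) {
            // let the scale detector inspect the events first, then the drag detector
            boolean handled = mScaleDetector.onTouchEvent(ev);
            handled = mGestureDetector.onTouchEvent(ev) || handled;
            return handled || super.onTouchEvent(ev);
        }
    }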

The usage of GestureDetector was shown earlier; the fragment above mainly illustrates the general usage of ScaleGestureDetector and ScaleGestureDetector.SimpleOnScaleGestureListener.

Note that in onTouchEvent() the ScaleGestureDetector examines the events first, then the GestureDetector, and the parent class's default behavior is invoked only when neither recognizer handles the event.

Summary

The goal of this whole article is to understand the overall gesture recognition process: matching the MotionEvent sequence received in onTouchEvent against different patterns. The GestureDetector and ScaleGestureDetector classes provided by the framework are conveniences for implementing gesture recognition in custom views; once you grasp the idea, you can recognize any touch-event pattern you want yourself. Studying the source of the framework's GestureDetector, and how gestures are handled in some open-source controls, is nevertheless a good starting point.

References

Official documents

The main content of this article is based on the developer documentation as of API 22.

Using Touch Gestures

File path: /docs/training/gestures/detector.html

Input Events

File path: /docs/guide/topics/ui/ui-events.html

Case study: PhotoView

When customizing a View you need to listen for particular gestures, and for that you define your own gesture-detector types. Studying the implementation of the framework's GestureDetector class is very helpful here. If several kinds of gestures have to be recognized, you can design multiple detector types for the different gestures according to the actual requirements, but you must watch the order in which they consume events, for example the sequential recognition of drag and scale gestures.

The open-source project PhotoView displays pictures and supports zooming and panning them with gestures. It contains several gesture-recognition classes, and reading its code is recommended as practice in the "implementation details" of gesture recognition.


This concludes the article. I hope it is helpful to your study.
