A detailed explanation of Android gesture recognition

First, in the Android system, every gesture interaction is processed in the following order.

1. The user touches the touch screen, which triggers a MotionEvent.

2. The event is picked up by an OnTouchListener, and the MotionEvent object is obtained in its onTouch() method.

3. The MotionEvent object is then forwarded to the OnGestureListener through a GestureDetector (see the sketch below this list).

4. The OnGestureListener receives the object, reads the gesture information it encapsulates, and gives the appropriate feedback.
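In code, steps 2 and 3 boil down to a single forwarding call. A minimal sketch, assuming a GestureDetector field named gestureDetector that was constructed with our OnGestureListener:

```java
// Step 2: onTouch() receives the raw MotionEvent from the touch screen.
// Step 3: the GestureDetector analyses the event and invokes the matching
// OnGestureListener callback (onDown, onFling, onScroll, ...).
@Override
public boolean onTouch(View v, MotionEvent event) {
    return gestureDetector.onTouchEvent(event);
}
```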

This sequence is essentially the principle behind gesture interaction. Let's take a closer look at MotionEvent, GestureDetector and OnGestureListener.

MotionEvent: this class encapsulates action events from gestures, styluses, trackballs and the like. Two of its most important properties, X and Y, record the coordinates on the horizontal and vertical axes respectively.
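For illustration, inside any callback the coordinates can be read directly from the event object:

```java
// Reading the coordinates encapsulated by a MotionEvent.
float x = event.getX(); // position on the horizontal axis, relative to the view
float y = event.getY(); // position on the vertical axis, relative to the view
```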

GestureDetector: recognizes the various gestures.

OnGestureListener: a gesture-listening interface that provides several abstract methods; the GestureDetector calls the corresponding method according to the gesture it recognizes.

Next, I will demonstrate gesture interaction with a picture-switching code example, so that the execution sequence above and the differences between the individual gesture actions become easier to understand and remember.

First, provide a layout file containing only an ImageView -- main.xml.
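The layout itself is trivial; a minimal sketch of what main.xml can look like (the id "imageView" and the starting drawable "pic1" are placeholder names of my own, not from the original post):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- main.xml: a single full-screen ImageView that displays the current picture. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <ImageView
        android:id="@+id/imageView"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:src="@drawable/pic1" />

</LinearLayout>
```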

Then, complete our Activity. Because we want to listen for both touch events and gesture events on the touch screen, the Activity must implement two interfaces, OnTouchListener and OnGestureListener, and override the methods they declare. The specific code is as follows:
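A minimal sketch of such an Activity, assuming the main.xml above, two drawables named pic1 and pic2, and a 100-pixel horizontal threshold for recognizing a fling (the class name, resource ids and threshold are all my own choices, not necessarily the original author's):

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.GestureDetector;
import android.view.GestureDetector.OnGestureListener;
import android.view.MotionEvent;
import android.view.View;
import android.view.View.OnTouchListener;
import android.widget.ImageView;

public class MainActivity extends Activity implements OnTouchListener, OnGestureListener {

    // Pictures to cycle through; pic1/pic2 are assumed drawables.
    private final int[] images = { R.drawable.pic1, R.drawable.pic2 };
    private int index = 0;

    private ImageView imageView;
    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // Step 2 of the sequence: register this Activity as the touch listener.
        imageView = (ImageView) findViewById(R.id.imageView);
        imageView.setImageResource(images[index]);
        imageView.setOnTouchListener(this);
        // Ensure the view keeps receiving the rest of the gesture even when
        // our callbacks return false.
        imageView.setLongClickable(true);

        // Step 3: the detector that turns raw MotionEvents into gesture callbacks.
        gestureDetector = new GestureDetector(this, this);
    }

    // Steps 2 -> 3: hand every MotionEvent over to the GestureDetector.
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return gestureDetector.onTouchEvent(event);
    }

    // Step 4: the OnGestureListener callbacks.

    @Override
    public boolean onDown(MotionEvent e) {
        return false;
    }

    @Override
    public void onShowPress(MotionEvent e) {
    }

    @Override
    public boolean onSingleTapUp(MotionEvent e) {
        return false;
    }

    @Override
    public boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX, float distanceY) {
        return false;
    }

    @Override
    public void onLongPress(MotionEvent e) {
    }

    // A quick horizontal fling switches the picture; 100px is an arbitrary threshold.
    @Override
    public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
        if (e1.getX() - e2.getX() > 100) {        // fling left: next picture
            index = (index + 1) % images.length;
            imageView.setImageResource(images[index]);
            return true;
        } else if (e2.getX() - e1.getX() > 100) { // fling right: previous picture
            index = (index - 1 + images.length) % images.length;
            imageView.setImageResource(images[index]);
            return true;
        }
        return false;
    }
}
```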

When I first started learning Android, I felt that Google's documentation was not very good; when I studied gestures, I felt it was downright poor. Many constants, properties and methods have no description at all. That alone might be bearable, but OnGestureListener defines so many gestures and offers no introduction to them either. Without repeated experimentation, who could tell the relationship and difference between onLongPress and onShowPress, or between onScroll and onFling? Google really needs to overhaul its documentation. Fortunately, after many attempts of my own, I can define these gestures from a personal perspective.

onDown: the moment the finger touches the touch screen.
onFling: the finger moves quickly across the touch screen and is then released.
onLongPress: the finger is pressed down for a period of time and not released.
onScroll: the finger slides across the touch screen.
onShowPress: the finger presses on the touch screen; its time window begins after onDown and ends before a long press takes effect.
onSingleTapUp: the moment the finger leaves the touch screen.

Beyond these definitions, I have also accumulated a few rules of thumb, which I'll share here.

1. Every gesture performs an onDown action first.

2. An onShowPress action is always performed once before an onLongPress action.

3. After an onDown action (and an onShowPress action, if one fired), an onSingleTapUp action is performed once.

4. After onLongPress, onScroll and onFling actions, no onSingleTapUp action is performed.
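One way to confirm these rules for yourself (my own suggestion, not part of the original article) is to drop a log line into each callback of the Activity above and watch logcat while you tap, long-press, scroll and fling:

```java
@Override
public boolean onDown(MotionEvent e) {
    // Tag and message are arbitrary; filter logcat by "Gesture" to follow along.
    android.util.Log.d("Gesture", "onDown");
    return false;
}
// Add the same one-line Log.d(...) call to onShowPress, onSingleTapUp,
// onScroll, onLongPress and onFling, then compare the order of the lines.
```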

That's all for this article. I hope it has been helpful to your study, and I hope you will continue to support Programming Tips.
