Pitfalls you will encounter in Android face detection

This year, the author built an Android product around face detection: it takes the preview data stream returned by the camera and decides whether the stream contains a face. When a face is present, the camera preview box is shown; when there is no face, the preview box is hidden. The feature does not sound complex, and indeed not many problems came up during development, and all of them were handled. Before the official launch, the product was also tested inside the company for several months and no bugs were found. Recently, however, when the deployment staff rolled it out at a customer's site, they reported all kinds of problems. Some of them are program bugs and some are related to the hardware. Because the test environment is limited, the author cannot test hardware from every model and manufacturer. This article mainly records some of the pits brought by the camera and shares them with friends working on face detection, so that we can avoid detours.

1: Overview

The Android SDK has built-in support for face detection: it provides an API for detecting faces directly on a bitmap, android.media.FaceDetector. The source file path is:

frameworks/base/media/java/android/media/FaceDetector.java

Face detection is performed by calling the findFaces method, which returns the total number of detected faces and stores the information for each face in an array of FaceDetector.Face. Each face carries the following information: a confidence value, the midpoint between the eyes, the distance between the eyes, and the pose (Euler angles).

The identification process is as follows:

1. Read a picture into a Bitmap; the bitmap must be in RGB_565 format.

2. Call the findFaces method to analyze the bitmap (note that the width of the bitmap to be analyzed must be an even number); the detected face data is stored in a FaceDetector.Face array and the total number of detected faces is returned. A minimal sketch of this static-image flow is shown below.
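To make the two steps above concrete, here is a minimal sketch of the static-image flow, assuming an image file path passed in by the caller and an arbitrary MAX_FACES limit of 5; it is an illustration, not the original project code.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.FaceDetector;

public class StaticFaceDetect {
    private static final int MAX_FACES = 5; // upper bound, not a guarantee

    public static int countFaces(String path) {
        // Step 1: decode the picture, asking for RGB_565 (required by FaceDetector)
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferredConfig = Bitmap.Config.RGB_565;
        Bitmap bitmap = BitmapFactory.decodeFile(path, options);
        if (bitmap == null) {
            return 0;
        }

        // Step 2: the bitmap width must be even, so crop one pixel column if not
        if (bitmap.getWidth() % 2 != 0) {
            bitmap = Bitmap.createBitmap(bitmap, 0, 0,
                    bitmap.getWidth() - 1, bitmap.getHeight());
        }

        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        FaceDetector detector = new FaceDetector(
                bitmap.getWidth(), bitmap.getHeight(), MAX_FACES);
        return detector.findFaces(bitmap, faces);
    }
}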

Android has a native API for face detection: android.media.FaceDetector detects whether a bitmap contains faces, and android.media.FaceDetector.Face holds the location information of each detected face. We need to implement the Camera.PreviewCallback interface in the activity. This interface has an onPreviewFrame method that delivers the camera's real-time image data stream. Since the data returned by this method is in NV21 format, we need to convert it to a Bitmap before running face detection. The conversion chain is: byte[] --> YuvImage --> ByteArrayOutputStream --> byte[] --> Bitmap. The specific conversion code is as follows:
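The original conversion code is not reproduced here, so the following is a minimal sketch of the byte[] --> YuvImage --> ByteArrayOutputStream --> byte[] --> Bitmap chain described above; the helper class name Nv21Utils and the JPEG quality of 80 are illustrative assumptions.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

public final class Nv21Utils {

    public static Bitmap nv21ToBitmap(byte[] data, int previewWidth, int previewHeight) {
        // Wrap the raw preview bytes in a YuvImage so they can be compressed to JPEG
        YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21,
                previewWidth, previewHeight, null);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Compress the whole frame; 80 is an arbitrary quality value for illustration
        yuvImage.compressToJpeg(new Rect(0, 0, previewWidth, previewHeight), 80, out);
        byte[] jpegBytes = out.toByteArray();

        // FaceDetector needs an RGB_565 bitmap, so ask for that config when decoding
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferredConfig = Bitmap.Config.RGB_565;
        return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length, options);
    }
}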

Through the above conversion we have obtained a bitmap ready for face detection. At this point all that is left is to run the detection itself. The code is as follows:
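Again as a sketch rather than the original code, the detection can be wired into onPreviewFrame roughly like this, reusing the hypothetical Nv21Utils helper from the previous snippet and leaving the show/hide UI logic as comments:

import android.graphics.Bitmap;
import android.hardware.Camera;
import android.media.FaceDetector;

public class FacePreviewCallback implements Camera.PreviewCallback {

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // Convert the NV21 frame with the helper sketched earlier
        Bitmap bitmap = Nv21Utils.nv21ToBitmap(data, size.width, size.height);
        if (bitmap == null) {
            return;
        }

        // FaceDetector requires an even bitmap width
        if (bitmap.getWidth() % 2 != 0) {
            bitmap = Bitmap.createBitmap(bitmap, 0, 0,
                    bitmap.getWidth() - 1, bitmap.getHeight());
        }

        // In a real app the detector would be created once and reused
        FaceDetector.Face[] faces = new FaceDetector.Face[1];
        FaceDetector detector = new FaceDetector(
                bitmap.getWidth(), bitmap.getHeight(), faces.length);
        int count = detector.findFaces(bitmap, faces);

        if (count > 0) {
            // a face is present: show the camera preview box
        } else {
            // no face: hide the camera preview box
        }
    }
}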

That is basically all the code. Due to hardware differences, however, there are quite a few mines hidden in the code above.

2: Face detection FAQs

After the product went live, the main complaints were that people stood in front of the camera but the app could not recognize their faces, that performance degraded over time, and that there was serious lag. This was rather depressing: the product had been running in the test environment for several months without any of these problems, yet during the formal rollout the problems kept coming. After two months of sorting through them, the main issues are as follows.

2.1 Unrecognized faces

1): Camera angle problem

During my own testing the camera image was vertical and there was no problem, but in the official deployment the cameras came from different vendors, and the camera image ended up horizontal, as shown in the following figure:

With the image at the wrong angle, the face naturally cannot be recognized. At this point we need to read the camera's default rotation angle and correct for it. Note in particular that setDisplayOrientation() rotates counterclockwise. The code is as follows:
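The original rotation code is not reproduced here; the sketch below follows the well-known pattern from the Camera.setDisplayOrientation() documentation, assuming cameraId identifies the camera being opened:

import android.app.Activity;
import android.hardware.Camera;
import android.view.Surface;

public final class CameraOrientation {

    public static void setCameraDisplayOrientation(Activity activity, int cameraId, Camera camera) {
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(cameraId, info);

        // Current rotation of the screen itself
        int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
        int degrees = 0;
        switch (rotation) {
            case Surface.ROTATION_0:   degrees = 0;   break;
            case Surface.ROTATION_90:  degrees = 90;  break;
            case Surface.ROTATION_180: degrees = 180; break;
            case Surface.ROTATION_270: degrees = 270; break;
        }

        int result;
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            result = (info.orientation + degrees) % 360;
            result = (360 - result) % 360; // compensate for the front camera mirror
        } else { // back-facing camera
            result = (info.orientation - degrees + 360) % 360;
        }
        // setDisplayOrientation() rotates counterclockwise, as noted above
        camera.setDisplayOrientation(result);
    }
}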

2): After rotating the preview, the real-time stream returned by the camera still has the wrong angle

This pit is disgusting. After I rotated the camera angle, I packaged the app and sent it to my colleagues, who told me it still did not work. Fortunately, I borrowed a Ristar 1080p camera in the company, drew the frames returned by onPreviewFrame onto an ImageView, and found that the returned frames are fundamentally different from the preview image: even though the preview image had been rotated, the stream returned by onPreviewFrame still has to be processed separately. This pit left me speechless and took me a long time to track down; the code that fixes it is only a few short lines, but only I know what it took to find the cause. I then used a Matrix to rotate the frames returned by onPreviewFrame (for details on Matrix, see the article on Android Matrix, which is very well written). Note, however, that Matrix.postRotate() rotates clockwise, which is exactly the opposite of Camera.setDisplayOrientation(). Having one rotate clockwise and the other counterclockwise is extremely inconvenient. The modified code is as follows.
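As a sketch of the fix rather than the original code, assuming rotationDegrees is taken from Camera.CameraInfo.orientation for the camera in use:

import android.graphics.Bitmap;
import android.graphics.Matrix;

public final class FrameRotation {

    // Rotate the bitmap converted from onPreviewFrame data
    public static Bitmap rotateFrame(Bitmap source, int rotationDegrees) {
        if (rotationDegrees % 360 == 0) {
            return source; // nothing to do
        }
        Matrix matrix = new Matrix();
        // postRotate() is clockwise, the opposite of setDisplayOrientation()
        matrix.postRotate(rotationDegrees);
        return Bitmap.createBitmap(source, 0, 0,
                source.getWidth(), source.getHeight(), matrix, true);
    }
}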

2.2 Problems related to 720p and 1080p cameras

1): Getting the preview sizes supported by the camera

When initializing the camera, we need to set a preview size that the camera supports; setting a size the camera does not support throws an exception. For the needs of the project, I directly hard-coded an index into the list of supported sizes in my local environment, and the values changed once the hardware changed, as shown in the following figure:

Here, depending on the actual situation, you can calculate the area of each supported size and pick an appropriate preview size by comparing it against a reference area. A rough sketch of that idea is given below; the main thing is to know that the pit exists.
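A rough sketch of that idea, with targetWidth and targetHeight standing in for whatever size the project actually wants:

import java.util.List;
import android.hardware.Camera;

public final class PreviewSizeChooser {

    // Pick the supported preview size whose area is closest to the requested one,
    // instead of hard-coding an index into getSupportedPreviewSizes().
    public static Camera.Size choose(Camera.Parameters params,
                                     int targetWidth, int targetHeight) {
        long targetArea = (long) targetWidth * targetHeight;
        Camera.Size best = null;
        long bestDiff = Long.MAX_VALUE;

        List<Camera.Size> sizes = params.getSupportedPreviewSizes();
        for (Camera.Size size : sizes) {
            long diff = Math.abs((long) size.width * size.height - targetArea);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = size;
            }
        }
        return best; // always a size the camera actually supports
    }
}

The chosen size is then applied with Camera.Parameters.setPreviewSize() before calling camera.setParameters().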

2): Problems caused by the width and height of the preview used for detection

If the program's locking and threading are not handled properly, the performance problems become obvious.

If we only need to detect whether a face is present, we can work around this by compressing the image before detection, as sketched below.
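As one possible reading of "compressing the image", here is a minimal sketch that simply downscales the frame before detection; the 1/4 factor is an arbitrary choice for illustration:

import android.graphics.Bitmap;

public final class DetectionScaler {

    // Shrink the frame before detection to cut the per-frame cost
    public static Bitmap downscale(Bitmap source) {
        int width = source.getWidth() / 4;
        int height = source.getHeight() / 4;
        if (width % 2 != 0) {
            width -= 1; // FaceDetector still needs an even width
        }
        return Bitmap.createScaledBitmap(source, width, height, true);
    }
}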

3): The camera returns frames faster than the face recognition can process them

At first, after the software had been running for a while, the app would simply crash. It eventually turned out that the frames returned by onPreviewFrame were arriving too fast. Advice found online says the frame rate can be set when starting the camera, typically with code like the sketch below.
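The commonly suggested setting looks roughly like this (values are in frames per second multiplied by 1000); as noted next, it did not actually help in this case:

import java.util.List;
import android.hardware.Camera;

public final class PreviewFpsSetup {

    // Apply the lowest supported FPS range when starting the camera;
    // getSupportedPreviewFpsRange() returns ranges sorted from low to high.
    public static void applyLowestFpsRange(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        List<int[]> ranges = params.getSupportedPreviewFpsRange();
        int[] slowest = ranges.get(0);
        params.setPreviewFpsRange(slowest[Camera.Parameters.PREVIEW_FPS_MIN_INDEX],
                                  slowest[Camera.Parameters.PREVIEW_FPS_MAX_INDEX]);
        camera.setParameters(params);
    }
}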

However, this setting is completely useless, as shown in the figure:

Dealing with this problem is not complicated: simply check that more than 300 milliseconds have passed between two processed frames (the exact interval depends on the requirements), as sketched below.
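A minimal sketch of that frame-dropping idea, with the 300 ms interval as a placeholder:

import android.hardware.Camera;

public class ThrottledPreviewCallback implements Camera.PreviewCallback {

    // Minimum gap between two processed frames; 300 ms here, adjust as needed
    private static final long DETECT_INTERVAL_MS = 300;
    private long lastDetectTime = 0;

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        long now = System.currentTimeMillis();
        if (now - lastDetectTime < DETECT_INTERVAL_MS) {
            return; // drop this frame, the previous one was processed recently
        }
        lastDetectTime = now;
        // ... convert the frame and run face detection here ...
    }
}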

2.3 After the person walks away, the screen still displays face-related information

From the description above we know that the camera preview image is too large, so after the person walks away the Android device's screen keeps displaying face-related information for several more seconds. Because onPreviewFrame fires quickly while each face-processing pass takes too long, a backlog of frames builds up, and the related information is still being shown after the person has left. This has to be controlled: reduce how often onPreviewFrame triggers face processing and shorten the face recognition time.
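Throttling and faster recognition, as described above, are the main fix. As a complementary idea (an assumption of this sketch, not something from the original flow), a simple grace-period check can also make sure the info is cleared once nobody has been detected for a while:

public class FaceUiController {

    // How long to wait with no face detected before hiding the face info;
    // 2000 ms is an arbitrary value for illustration.
    private static final long NO_FACE_TIMEOUT_MS = 2000;
    private long lastFaceSeenAt = 0;

    // Call this with the result of each detection pass
    public void onDetectionResult(int faceCount) {
        long now = System.currentTimeMillis();
        if (faceCount > 0) {
            lastFaceSeenAt = now;
            // show or refresh the face-related information here
        } else if (now - lastFaceSeenAt > NO_FACE_TIMEOUT_MS) {
            // nobody has been detected for a while: hide the face-related info
        }
    }
}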

Full demo download address: https://github.com/jlq023/democamera

The above is the whole content of this article. I hope it will be helpful to your study, and I hope you can support programming tips.
