A high-fidelity Android imitation of How-Old.net: face recognition with age and gender estimation

A few days ago, Microsoft launched How-Old.net, a big-data face recognition application that estimates age: upload a photo and it quickly returns the apparent age of the people in it. The system analyzes 27 "facial landmarks" such as the pupils, the corners of the eyes, and the nose, and from them derives your "facial age".

Let's take a look at a screenshot of the app:

I had some free time last night, so I looked up some material online and put together an app with similar functionality. Here is a screenshot:

As for the face recognition technology, I originally wanted to use the SDK that Microsoft provides to developers, but thanks to our wonderful network I couldn't even reach the How-Old.net official site, so I had to take a detour and look for SDKs with similar functionality elsewhere. Then I remembered a piece of news from back when I was working on O2O, about Alipay's "pay by face" feature. After digging around, I found that the good stuff is provided by Face++.

This is the official website of Face++: http://www.faceplusplus.com.cn/ . On the site you can find SDKs that provide various functions for developers (registration required), including face detection and recognition, and estimating age, gender, and race.

Register an account and create an application, and you get the API Key and API Secret issued to you; write them down. Then go to the developer center ( http://www.faceplusplus.com.cn/dev-tools-sdks/ ) to download the SDK for the corresponding platform and import the jar package directly into the project. The official API reference documentation is here: http://www.faceplusplus.com.cn/api-overview/ . With that, the preparation is done and we can start coding.

Let's look at the layout file first. It is a very simple layout, essentially a button to open the gallery and an ImageView to show the selected picture and the recognition result, so the code can be pasted in directly:
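Since the original file is not reproduced here, the snippet below is only a minimal guess at what it might contain, based on the description in the rest of the article:

<?xml version="1.0" encoding="utf-8"?>
<!-- A minimal guess at the layout, not the author's original file:
     an ImageView for the picture and a button to open the gallery. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <ImageView
        android:id="@+id/image_view"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:scaleType="fitCenter" />

    <Button
        android:id="@+id/btn_select"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Select picture" />
</LinearLayout>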

Now let's talk about the main program class. The implementation can basically be divided into these steps:

1. On entering the program, click the button to jump to the gallery, select a picture, and display it in the main interface.

Here are some points to note:

According to the developer API for /detection/detect ( http://www.faceplusplus.com.cn/detection_detect/ ), besides setting the apikey and apisecret we must either specify the URL of a picture or convert the picture into binary data and POST it to the server. Note that the picture must not exceed 1 MB. Today's smartphone cameras have very high resolution, and almost any photo will exceed this limit, so we need to compress the picture after we get it.

2. Encapsulate the required parameters, convert the picture into binary data (a small sketch follows this list), and submit it to the server to obtain the recognition result (JSON data).

3. Display the results on the image according to the data returned by the server.
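For step 2, here is a minimal sketch of the byte-array conversion (the class and method names are mine, not the article's original code):

import java.io.ByteArrayOutputStream;

import android.graphics.Bitmap;

// Hypothetical helper: encode the (already scaled-down) Bitmap as JPEG bytes
// so it can be posted to /detection/detect as binary data.
public final class ImageBytes {

    public static byte[] toJpegBytes(Bitmap bitmap) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // The bitmap should already have been compressed in step 1, so that
        // the encoded data stays under the 1 MB limit mentioned above.
        bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        return out.toByteArray();
    }
}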

That is roughly the implementation. The code below is commented in detail, so I won't go through it line by line; I'll just pick out some key points.

First, the network request tool class.

Since this is a network request, it is a time-consuming operation, so it has to be done on a sub thread. The SDK provided to us also encapsulates some tool classes.

For example:

The HttpRequests class encapsulates the HTTP request for us; we just set the parameters and it accesses the server.

If we open the source code, we find that in addition to the no-argument constructor it has two more constructors, one taking two parameters and one taking four. It is easy to see that these parameters simply determine which server URL the request is submitted to.

Since we obviously want the CN server over HTTP, we can simply set the last two parameters to true.

The PostParameters class is used to set the actual parameter values. It provides a setImg method: we just convert our picture into a byte array and store it there directly.

To make it easy for the main program class to get the returned data, an interface callback is used here, with success (called when the data comes back normally) and error (called when the data comes back abnormally).
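Putting these pieces together, the tool class can look roughly like the sketch below. The request runs on its own thread, PostParameters carries the image bytes, and the result is handed back through a success/error callback. The import paths and the detectionDetect method come from the Face++ Android SDK jar as I remember it, so treat those details as assumptions and check them against the jar you downloaded:

import org.json.JSONObject;

import com.facepp.http.HttpRequests;
import com.facepp.http.PostParameters;

public class FaceppDetect {

    // Callback so the main program class can receive the returned data.
    public interface Callback {
        void success(JSONObject result);   // data came back normally
        void error(Exception e);           // something went wrong
    }

    public static void detect(final byte[] imageData, final Callback callback) {
        // Network access is time-consuming, so run it on a sub thread.
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    // The two booleans pick the CN server and the HTTP
                    // endpoint, as discussed above.
                    HttpRequests requests =
                            new HttpRequests("your_api_key", "your_api_secret", true, true);
                    PostParameters params = new PostParameters();
                    params.setImg(imageData);   // picture as a byte array
                    JSONObject result = requests.detectionDetect(params);
                    callback.success(result);
                } catch (Exception e) {
                    callback.error(e);
                }
            }
        }).start();
    }
}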

Next, the main program class.

Because the comments are quite complete, I'll just pick out a few places here. If anything is unclear, leave a message in the comments and I'll reply.

1. When the select-image button is clicked, open the system gallery through an Intent, and compress the image that comes back, because the official documentation clearly states that the image size cannot exceed 1 MB.

There are two ways to compress a picture: one is to reduce the quality of the encoded image, and the other is to scale the picture down proportionally (here I choose the second method).

When decoding the bitmap, we first set inJustDecodeBounds to true, so that we don't get the actual picture, only its real width and height. We then divide the larger of the two by 1024, round the result, and use it as inSampleSize to compress the bitmap; finally we set inJustDecodeBounds back to false and decode again, roughly as in the sketch below.
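A minimal sketch of that decode-twice approach, assuming the gallery Uri has already been resolved to a file path (the class and method names are illustrative, not the article's original code):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Scale-down compression: read the bounds first, derive inSampleSize from
// the larger edge, then decode the real pixels.
public final class BitmapScaler {

    public static Bitmap decodeScaled(String filePath) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;    // read only the image bounds
        BitmapFactory.decodeFile(filePath, options);

        // Use the larger edge divided by 1024 as the sample ratio, rounded up.
        int maxEdge = Math.max(options.outWidth, options.outHeight);
        options.inSampleSize = Math.max(1, (int) Math.ceil(maxEdge / 1024.0));

        options.inJustDecodeBounds = false;   // now decode the actual pixels
        return BitmapFactory.decodeFile(filePath, options);
    }
}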

2. When we submit the picture and the server returns data, because the network request runs on a sub thread, we pass the data back through the Handler mechanism so that the main thread can receive it and update the UI.

The server returns JSON data, so we need to parse it here. For the parsing I use Android's built-in org.json classes; Google's Gson library is also fine (see my separate brief notes on using Gson). The data we need from the result is the age and gender of each detected face, as in the sketch below.
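A rough sketch of the Handler plus org.json side, written against the Face++ detect response format rather than the article's original code, so check the field names against the JSON you actually receive:

// imports used: org.json.JSONException, org.json.JSONObject,
//               android.os.Handler, android.os.Message

// Inside the main Activity: the sub thread posts the detect result to this
// Handler, and the main thread reads the fields it needs and updates the UI.
private final Handler handler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        JSONObject result = (JSONObject) msg.obj;
        try {
            // The detect response carries a "face" array; each entry has an
            // "attribute" object containing "age" and "gender".
            JSONObject attribute = result.getJSONArray("face")
                    .getJSONObject(0)
                    .getJSONObject("attribute");
            int age = attribute.getJSONObject("age").getInt("value");
            String gender = attribute.getJSONObject("gender").getString("value");
            // Draw the age and gender onto the displayed picture here.
        } catch (JSONException e) {
            e.printStackTrace();
        }
    }
};

// In the network callback (still on the sub thread):
// Message msg = Message.obtain();
// msg.obj = result;            // the JSONObject returned by the tool class
// handler.sendMessage(msg);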
