Android voice interaction based on Baidu voice (recommended)

This project needs a voice wake-up feature. I had used iFLYTEK's speech recognition before and originally planned to use iFLYTEK's wake-up as well, but iFLYTEK charges for voice wake-up and the trial version is only valid for 35 days, so I switched to Baidu Voice. All of Baidu Voice's features are free, and they are simple and practical: speech recognition, speech synthesis, and voice wake-up, which together form a complete voice interaction pipeline.

Demo screenshots:

First, the voice wake-up function: saying the wake-up keyword triggers speech recognition. A successful wake-up is announced with a voice prompt, which uses Baidu speech synthesis. Baidu speech recognition then automatically switches between online and offline recognition depending on the network; offline recognition can only recognize the imported keywords, and the first offline recognition still needs a network connection. A successful recognition also gets a voice prompt. The GIF has no sound; the toast that appears shows the content of the voice prompt.
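As a rough illustration of the wake-up part, here is a minimal sketch in the EventManager style of the official demo. The keyword file name "WakeUp.bin", the prompt text, and the speak() / startRecognition() helpers (sketched further below under MainActivity) are my placeholders, and the exact SpeechConstant names and package paths should be checked against the SDK version you import:

```java
import com.baidu.speech.EventListener;
import com.baidu.speech.EventManager;
import com.baidu.speech.EventManagerFactory;
import com.baidu.speech.asr.SpeechConstant;   // package path may differ between SDK versions

import org.json.JSONObject;
import java.util.HashMap;
import java.util.Map;

// Inside MainActivity: start wake-up listening and react to a successful wake-up.
private EventManager wakeup;

private void startWakeup() {
    wakeup = EventManagerFactory.create(this, "wp");   // "wp" = wake-up engine
    wakeup.registerListener(new EventListener() {
        @Override
        public void onEvent(String name, String params, byte[] data, int offset, int length) {
            if (SpeechConstant.CALLBACK_EVENT_WAKEUP_SUCCESS.equals(name)) {
                // The wake-up word was heard: confirm by voice, then start recognition.
                speak("I'm listening");      // placeholder prompt via SpeechSynthesizer
                startRecognition();          // sends SpeechConstant.ASR_START, see below
            }
        }
    });
    Map<String, Object> params = new HashMap<>();
    // Keyword file exported from the Baidu console and placed under assets/.
    params.put(SpeechConstant.WP_WORDS_FILE, "assets:///WakeUp.bin");
    wakeup.send(SpeechConstant.WAKEUP_START, new JSONObject(params).toString(), null, 0, 0);
}
```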

One point worth noting: in the Baidu Voice demo, wake-up listening is started in onResume() and stopped in onPause(). I want to pop up the speech-recognition UI after a successful wake-up, but showing that UI pauses the Activity, which stops wake-up listening. If recognition succeeds, the UI disappears and wake-up listening restarts, so saying the wake-up word works again. If recognition fails, however, the bundled UI turns into the dialog shown below and waits for a manual tap on Retry or Cancel, which defeats the idea of hands-free voice interaction. To solve this, move the call that stops wake-up listening from onPause() to onStop(), so you can wake the app up again even when recognition fails.
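Under that approach, the lifecycle part of the fix looks roughly like this (a sketch, not the demo's exact code):

```java
// Start wake-up listening whenever the Activity comes to the foreground.
@Override
protected void onResume() {
    super.onResume();
    startWakeup();
}

// Moved here from onPause(): the recognition dialog only pauses this Activity,
// so wake-up listening keeps running and a failed recognition can still be
// retried purely by voice.
@Override
protected void onStop() {
    super.onStop();
    stopWakeup();
}

private void stopWakeup() {
    if (wakeup != null) {
        wakeup.send(SpeechConstant.WAKEUP_STOP, null, null, 0, 0);
    }
}
```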

The specific integration steps are in the official documentation; you can also refer to the following article:

https://www.oudahe.com/p/27434/

Note: I use both speech recognition and speech synthesis, so both SDKs from the official website have to be imported into the project. There is one small quirk here. Normally, after importing the jar packages, you also have to copy the assets and jniLibs folders into the project. I only copied the assets folder of speech recognition and did not copy its jniLibs folder, and it works fine. But if I copy both the assets and jniLibs folders of speech recognition and speech synthesis into the project, I get the following error; I don't know why.

java.lang.UnsatisfiedLinkError: Native method not found: com.baidu.speech.easr.easrNativeJni.WakeUpFree:()I

MainActivity:

Note: the source code is the demo given on the official website with some unused methods removed, to simplify it and reduce the amount of code.
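A trimmed sketch of what remains in MainActivity: speech-synthesis setup and handling of the final recognition result. APP_ID, API_KEY, SECRET_KEY and R.id.tv_result are placeholders for the console credentials and the layout id, and the exact SDK class and constant names should again be checked against the imported jars:

```java
import android.widget.TextView;

import com.baidu.tts.client.SpeechSynthesizer;
import com.baidu.tts.client.TtsMode;

// Inside MainActivity.
private SpeechSynthesizer synthesizer;
private EventManager asr;
private TextView resultView;   // findViewById(R.id.tv_result) in onCreate()

private void initTts() {
    synthesizer = SpeechSynthesizer.getInstance();
    synthesizer.setContext(this);
    synthesizer.setAppId("APP_ID");                    // placeholder credentials
    synthesizer.setApiKey("API_KEY", "SECRET_KEY");
    synthesizer.initTts(TtsMode.ONLINE);               // online-only keeps the sketch short
}

private void speak(String text) {
    if (synthesizer != null) {
        synthesizer.speak(text);
    }
}

private void startRecognition() {
    if (asr == null) {
        asr = EventManagerFactory.create(this, "asr"); // "asr" = recognition engine
        asr.registerListener(new EventListener() {
            @Override
            public void onEvent(String name, String params, byte[] data, int offset, int length) {
                if (SpeechConstant.CALLBACK_EVENT_ASR_FINISH.equals(name)) {
                    // params is a JSON string; its best_result field holds the text.
                    resultView.setText(params);
                    speak("Recognized");               // placeholder prompt
                }
            }
        });
    }
    asr.send(SpeechConstant.ASR_START, "{}", null, 0, 0);
}
```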

activity_main: there is only one TextView and one EditText, which is very simple. The TextView displays the recognition result, and the EditText holds the text to be synthesized.
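For the synthesis side, the EditText just supplies the text to speak; a minimal hookup, where the id R.id.et_tts_text is a placeholder for the EditText declared in activity_main:

```java
import android.widget.EditText;

// Speak whatever was typed into the EditText, e.g. from a button click.
private void speakFromInput() {
    EditText input = (EditText) findViewById(R.id.et_tts_text);   // placeholder id
    String text = input.getText().toString().trim();
    if (!text.isEmpty()) {
        speak(text);   // uses the SpeechSynthesizer helper above
    }
}
```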

AndroidManifest: add the required permissions and an extra activity that serves as the UI for speech recognition.

The above is the Baidu-Voice-based Android voice interaction feature. I hope it helps you. If you have any questions, please leave me a message and I will reply as soon as possible. Thank you for your support!
