Android: multiple ways to achieve a circular camera preview (with example code)
The resulting effect is as follows:
1、 Setting rounded corners on the preview control
Set a ViewOutlineProvider on the control:
public RoundTextureView(Context context, AttributeSet attrs) {
    super(context, attrs);
    setOutlineProvider(new ViewOutlineProvider() {
        @Override
        public void getOutline(View view, Outline outline) {
            Rect rect = new Rect(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());
            outline.setRoundRect(rect, radius);
        }
    });
    setClipToOutline(true);
}
Modify the corner radius and refresh it when needed:
public void setRadius(int radius) {
    this.radius = radius;
}

public void turnRound() {
    invalidateOutline();
}
The rounded corner displayed by the control is updated according to the radius value that was set. When the control is square and the radius is half the side length, a circle is displayed.
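For example, a minimal usage sketch (our own illustration; it assumes the RoundTextureView above is already laid out inside a FrameLayout as textureView):

// Minimal usage sketch: force the view to a square, then round it into a circle.
textureView.post(new Runnable() {
    @Override
    public void run() {
        int sideLength = Math.min(textureView.getWidth(), textureView.getHeight());
        ViewGroup.LayoutParams layoutParams = textureView.getLayoutParams();
        layoutParams.width = sideLength;
        layoutParams.height = sideLength;
        textureView.setLayoutParams(layoutParams);
        textureView.setRadius(sideLength / 2); // half the side length -> circle
        textureView.turnRound();               // re-evaluate the outline
    }
});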
2、 Implementing a square preview
1. The device supports a 1:1 preview size
First, a simple but limited approach: set both the camera preview size and the preview control size to a 1:1 aspect ratio. Android devices generally support multiple preview sizes. Take the Samsung Tab S3 as an example.
When using the Camera API, the supported preview sizes are as follows:
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1920x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1280x720
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1440x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1088x1088
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1056x864
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 960x720
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 720x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 640x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 352x288
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 320x240
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 176x144
Among these, the only 1:1 preview size is 1088x1088.
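As a sketch of how such a size could be selected programmatically (assuming the classic android.hardware.Camera API; preferring the largest square size is our own choice):

// Sketch: find a 1:1 preview size with the classic Camera API (returns null if none exists).
private Camera.Size selectSquarePreviewSize(Camera camera) {
    Camera.Size best = null;
    for (Camera.Size size : camera.getParameters().getSupportedPreviewSizes()) {
        // Keep the largest square size, e.g. 1088x1088 on the Samsung Tab S3.
        if (size.width == size.height && (best == null || size.width > best.width)) {
            best = size;
        }
    }
    return best;
}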
When using the Camera2 API, the supported preview sizes (which here also include the picture sizes) are as follows:
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x3096
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x2322
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x2448
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x1836
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3024x3024
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2976x2976
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2880x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2592x1944
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1920
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1440
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2160x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1536
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1152
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1936x1936
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1920x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1440x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x960
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 960x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 720x480
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 640x480
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 320x240
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 176x144
The 1:1 preview sizes here are 3024x3024, 2976x2976, 2160x2160 and 1936x1936. As long as we select a 1:1 preview size and make the preview control square, we get a square preview; setting the control's corner radius to half its side length then yields a circular preview.

2. The device does not support a 1:1 preview size
Drawbacks of selecting a 1:1 preview size
Resolution limitations: as mentioned above, we can pick a 1:1 preview size, but the constraint is strict and the range of choices is very small. If the camera does not support a 1:1 preview size, this approach is not feasible.
Resource consumption: taking the Samsung Tab S3 as an example, the square preview sizes the device supports under the Camera2 API are large, which consumes more system resources during image processing and other operations.
Handling devices that do not support a 1:1 preview size
Add a 1:1 ViewGroup, put the TextureView inside it, and set the TextureView's margins so that only the central square region is displayed.
Schematic diagram
Sample code
// Keep the preview control's aspect ratio consistent with the preview size to avoid stretching
{
    FrameLayout.LayoutParams textureViewLayoutParams = (FrameLayout.LayoutParams) textureView.getLayoutParams();
    int newHeight = 0;
    int newWidth = textureViewLayoutParams.width;
    // Landscape
    if (displayOrientation % 180 == 0) {
        newHeight = textureViewLayoutParams.width * previewSize.height / previewSize.width;
    }
    // Portrait
    else {
        newHeight = textureViewLayoutParams.width * previewSize.width / previewSize.height;
    }
    // When the preview is not square, insert a ViewGroup to restrict the visible region of the view
    if (newHeight != textureViewLayoutParams.height) {
        insertFrameLayout = new RoundFrameLayout(CoverByParentCameraActivity.this);
        int sideLength = Math.min(newWidth, newHeight);
        FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(sideLength, sideLength);
        insertFrameLayout.setLayoutParams(layoutParams);
        FrameLayout parentView = (FrameLayout) textureView.getParent();
        parentView.removeView(textureView);
        parentView.addView(insertFrameLayout);
        insertFrameLayout.addView(textureView);
        FrameLayout.LayoutParams newTextureViewLayoutParams = new FrameLayout.LayoutParams(newWidth, newHeight);
        // Landscape
        if (displayOrientation % 180 == 0) {
            newTextureViewLayoutParams.leftMargin = ((newHeight - newWidth) / 2);
        }
        // Portrait
        else {
            newTextureViewLayoutParams.topMargin = -(newHeight - newWidth) / 2;
        }
        textureView.setLayoutParams(newTextureViewLayoutParams);
    }
}
3、 Using GLSurfaceView for a more customizable preview
The approach above gives us square and circular previews, but it only applies to the native camera. How do we achieve a circular preview when our data source is not a native camera? Next we introduce a scheme that uses GLSurfaceView to display NV21 data, drawing the preview data entirely by ourselves.
1. GLSurfaceView usage flow
Flow of rendering YUV data with OpenGL
The key part is writing the Renderer. The Renderer interface is documented as follows:
/**
 * A generic renderer interface.
 * <p>
 * The renderer is responsible for making OpenGL calls to render a frame.
 * <p>
 * GLSurfaceView clients typically create their own classes that implement
 * this interface, and then call {@link GLSurfaceView#setRenderer} to
 * register the renderer with the GLSurfaceView.
 * <p>
 * <div class="special reference">
 * <h3>Developer Guides</h3>
 * <p>For more information about how to use OpenGL, read the
 * <a href="{@docRoot}guide/topics/graphics/opengl.html">OpenGL</a> developer guide.</p>
 * </div>
 *
 * <h3>Threading</h3>
 * The renderer will be called on a separate thread, so that rendering
 * performance is decoupled from the UI thread. Clients typically need to
 * communicate with the renderer from the UI thread, because that's where
 * input events are received. Clients can communicate using any of the
 * standard Java techniques for cross-thread communication, or they can
 * use the {@link GLSurfaceView#queueEvent(Runnable)} convenience method.
 * <p>
 * <h3>EGL Context Lost</h3>
 * There are situations where the EGL rendering context will be lost. This
 * typically happens when device wakes up after going to sleep. When
 * the EGL context is lost, all OpenGL resources (such as textures) that are
 * associated with that context will be automatically deleted. In order to
 * keep rendering correctly, a renderer must recreate any lost resources
 * that it still needs. The {@link #onSurfaceCreated(GL10, EGLConfig)} method
 * is a convenient place to do this.
 *
 * @see #setRenderer(Renderer)
 */
public interface Renderer {
    /**
     * Called when the surface is created or recreated.
     * <p>
     * Called when the rendering thread starts and whenever the EGL context
     * is lost. The EGL context will typically be lost when the Android
     * device awakes after going to sleep.
     * <p>
     * Since this method is called at the beginning of rendering, as well as
     * every time the EGL context is lost, this method is a convenient place
     * to put code to create resources that need to be created when the
     * rendering starts, and that need to be recreated when the EGL context
     * is lost. Textures are an example of a resource that you might want to
     * create here.
     * <p>
     * Note that when the EGL context is lost, all OpenGL resources
     * associated with that context will be automatically deleted. You do
     * not need to call the corresponding "glDelete" methods such as
     * glDeleteTextures to manually delete these lost resources.
     * <p>
     * @param gl the GL interface. Use <code>instanceof</code> to
     *           test if the interface supports GL11 or higher interfaces.
     * @param config the EGLConfig of the created surface. Can be used
     *           to create matching pbuffers.
     */
    void onSurfaceCreated(GL10 gl, EGLConfig config);

    /**
     * Called when the surface changed size.
     * <p>
     * Called after the surface is created and whenever the OpenGL ES
     * surface size changes.
     * <p>
     * Typically you will set your viewport here. If your camera is fixed
     * then you could also set your projection matrix here:
     * <pre class="prettyprint">
     * void onSurfaceChanged(GL10 gl, int width, int height) {
     *     gl.glViewport(0, 0, width, height);
     *     // for a fixed camera, set the projection too
     *     float ratio = (float) width / height;
     *     gl.glMatrixMode(GL10.GL_PROJECTION);
     *     gl.glLoadIdentity();
     *     gl.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
     * }
     * </pre>
     * @param gl the GL interface. Use <code>instanceof</code> to
     *           test if the interface supports GL11 or higher interfaces.
     * @param width the surface width in pixels.
     * @param height the surface height in pixels.
     */
    void onSurfaceChanged(GL10 gl, int width, int height);

    /**
     * Called to draw the current frame.
     * <p>
     * This method is responsible for drawing the current frame.
     * <p>
     * The implementation of this method typically looks like this:
     * <pre class="prettyprint">
     * void onDrawFrame(GL10 gl) {
     *     gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
     *     //... other gl calls to render the scene ...
     * }
     * </pre>
     * @param gl the GL interface. Use <code>instanceof</code> to
     *           test if the interface supports GL11 or higher interfaces.
     */
    void onDrawFrame(GL10 gl);
}
void onSurfaceCreated(GL10 gl, EGLConfig config) is called back when the surface is created or recreated.
void onSurfaceChanged(GL10 gl, int width, int height) is called back when the size of the surface changes.
void onDrawFrame(GL10 gl) is where the drawing is implemented. When the render mode is set to RENDERMODE_CONTINUOUSLY, this function runs continuously; when it is set to RENDERMODE_WHEN_DIRTY, it runs only after creation completes and each time requestRender is called. We generally choose RENDERMODE_WHEN_DIRTY to avoid overdrawing.
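For reference, a minimal setup sketch (the view id and the YUVRenderer class name are our own placeholders):

// Sketch: wiring a GLSurfaceView to a custom Renderer with on-demand rendering.
GLSurfaceView glSurfaceView = findViewById(R.id.gl_surface_view); // assumed id
glSurfaceView.setEGLContextClientVersion(2);   // we use the GLES20 API below
glSurfaceView.setRenderer(new YUVRenderer());  // our own Renderer implementation
// Must be called after setRenderer; draw only when requestRender() is called.
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);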
Generally we implement a Renderer ourselves and then set it on the GLSurfaceView; writing the Renderer is the core step of the whole process. The following flow chart shows the initialization done in onSurfaceCreated(GL10 gl, EGLConfig config) and the drawing done in onDrawFrame(GL10 gl):
Renderer for rendering YUV data
2. Concrete implementation
Introduction to the coordinate systems
Android view coordinate system
OpenGL world coordinate system
As the figures show, unlike the Android view coordinate system, the OpenGL coordinate system is Cartesian. The Android view coordinate system has its origin at the upper-left corner, with x increasing to the right and y increasing downward; the OpenGL coordinate system has its origin at the center, with x increasing to the right and y increasing upward.
Shader writing
/**
 * Vertex shader
 */
private static String VERTEX_SHADER =
        "    attribute vec4 attr_position;\n" +
        "    attribute vec2 attr_tc;\n" +
        "    varying vec2 tc;\n" +
        "    void main() {\n" +
        "        gl_Position = attr_position;\n" +
        "        tc = attr_tc;\n" +
        "    }";

/**
 * Fragment shader
 */
private static String FRAG_SHADER =
        "    varying vec2 tc;\n" +
        "    uniform sampler2D ySampler;\n" +
        "    uniform sampler2D uSampler;\n" +
        "    uniform sampler2D vSampler;\n" +
        "    const mat3 convertMat = mat3(1.0, 1.0, 1.0, 0.0, -0.3441, 1.772, 1.402, -0.7141, 0.0);\n" +
        "    void main()\n" +
        "    {\n" +
        "        vec3 yuv;\n" +
        "        yuv.x = texture2D(ySampler, tc).r;\n" +
        "        yuv.y = texture2D(uSampler, tc).r - 0.5;\n" +
        "        yuv.z = texture2D(vSampler, tc).r - 0.5;\n" +
        "        gl_FragColor = vec4(convertMat * yuv, 1.0);\n" +
        "    }";
Built-in variables explained
gl_Position
gl_Position in the VERTEX_SHADER code represents the spatial coordinates to draw at. Since we are drawing in two dimensions, we directly pass in the four corners of the OpenGL 2D coordinate system: lower-left (-1,-1), lower-right (1,-1), upper-left (-1,1) and upper-right (1,1), i.e. {-1,-1, 1,-1, -1,1, 1,1}.
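In Java, the corresponding vertex array (presumably what GLUtil.SQUARE_VERTICES holds in the demo; shown here as a sketch) would be:

// Full-screen quad in OpenGL coordinates, ordered for GL_TRIANGLE_STRIP:
// lower-left, lower-right, upper-left, upper-right.
static final float[] SQUARE_VERTICES = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
};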
gl_FragColor
gl_FragColor in the FRAG_SHADER code represents the color of a single fragment.
Other variables explained
ySampler, uSampler, vSampler
These represent the Y, U, and V texture samplers respectively.
convertMat
According to the following formula:
R = Y + 1.402 (V - 128)
G = Y - 0.34414 (U - 128) - 0.71414 (V - 128)
B = Y + 1.772 (U - 128)
we can derive a YUV-to-RGB conversion matrix (written column-major, as GLSL's mat3 expects):
1.0,    1.0,     1.0,
0.0,   -0.3441,  1.772,
1.402, -0.7141,  0.0
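As a quick sanity check (our own addition), multiplying this column-major matrix by (Y, U-0.5, V-0.5) in plain Java reproduces the formula above, with values normalized to [0,1] instead of 0-255:

// Sanity check: the column-major mat3 applied to (y, u, v) gives the textbook formula.
// Here y is in [0,1], and u, v already have 0.5 subtracted, as in the fragment shader.
static float[] yuvToRgb(float y, float u, float v) {
    float r = 1.0f * y + 0.0f    * u + 1.402f  * v; // R = Y + 1.402 (V - 0.5)
    float g = 1.0f * y - 0.3441f * u - 0.7141f * v; // G = Y - 0.3441 (U - 0.5) - 0.7141 (V - 0.5)
    float b = 1.0f * y + 1.772f  * u + 0.0f    * v; // B = Y + 1.772 (U - 0.5)
    return new float[]{r, g, b};
}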
Some types and functions explained
vec3, vec4
These represent a three-component vector and a four-component vector respectively.
vec4 texture2D(sampler2D sampler, vec2 coord)
Samples the texture bound to the sampler at the given coordinates and returns the color value. For example, texture2D(ySampler, tc).r obtains the Y data, texture2D(uSampler, tc).r obtains the U data, and texture2D(vSampler, tc).r obtains the V data.
Initialization in Java code
Create the ByteBuffers that hold the Y, U, and V texture data according to the image width and height, and select the corresponding texture-coordinate transform according to the mirroring setting and rotation angle:
public void init(boolean isMirror, int rotateDegree, int frameWidth, int frameHeight) {
    if (this.frameWidth == frameWidth
            && this.frameHeight == frameHeight
            && this.rotateDegree == rotateDegree
            && this.isMirror == isMirror) {
        return;
    }
    dataInput = false;
    this.frameWidth = frameWidth;
    this.frameHeight = frameHeight;
    this.rotateDegree = rotateDegree;
    this.isMirror = isMirror;
    yArray = new byte[this.frameWidth * this.frameHeight];
    uArray = new byte[this.frameWidth * this.frameHeight / 4];
    vArray = new byte[this.frameWidth * this.frameHeight / 4];

    int yFrameSize = this.frameHeight * this.frameWidth;
    int uvFrameSize = yFrameSize >> 2;
    yBuf = ByteBuffer.allocateDirect(yFrameSize);
    yBuf.order(ByteOrder.nativeOrder()).position(0);
    uBuf = ByteBuffer.allocateDirect(uvFrameSize);
    uBuf.order(ByteOrder.nativeOrder()).position(0);
    vBuf = ByteBuffer.allocateDirect(uvFrameSize);
    vBuf.order(ByteOrder.nativeOrder()).position(0);

    // Vertex coordinates
    squareVertices = ByteBuffer
            .allocateDirect(GLUtil.SQUARE_VERTICES.length * FLOAT_SIZE_BYTES)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    squareVertices.put(GLUtil.SQUARE_VERTICES).position(0);

    // Texture coordinates
    if (isMirror) {
        switch (rotateDegree) {
            case 0:
                coordVertice = GLUtil.MIRROR_COORD_VERTICES;
                break;
            case 90:
                coordVertice = GLUtil.ROTATE_90_MIRROR_COORD_VERTICES;
                break;
            case 180:
                coordVertice = GLUtil.ROTATE_180_MIRROR_COORD_VERTICES;
                break;
            case 270:
                coordVertice = GLUtil.ROTATE_270_MIRROR_COORD_VERTICES;
                break;
            default:
                break;
        }
    } else {
        switch (rotateDegree) {
            case 0:
                coordVertice = GLUtil.COORD_VERTICES;
                break;
            case 90:
                coordVertice = GLUtil.ROTATE_90_COORD_VERTICES;
                break;
            case 180:
                coordVertice = GLUtil.ROTATE_180_COORD_VERTICES;
                break;
            case 270:
                coordVertice = GLUtil.ROTATE_270_COORD_VERTICES;
                break;
            default:
                break;
        }
    }
    coordVertices = ByteBuffer
            .allocateDirect(coordVertice.length * FLOAT_SIZE_BYTES)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    coordVertices.put(coordVertice).position(0);
}
The renderer is initialized when the surface is created:
private void initRenderer() {
    rendererReady = false;
    createGLProgram();

    // Enable texturing
    GLES20.glEnable(GLES20.GL_TEXTURE_2D);

    // Create the textures
    createTexture(frameWidth, frameHeight, GLES20.GL_LUMINANCE, yTexture);
    createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, uTexture);
    createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, vTexture);

    rendererReady = true;
}
Here, createGLProgram creates the OpenGL program and wires up the variables in the shader code:
private void createGLProgram() {
    int programHandleMain = GLUtil.createShaderProgram();
    if (programHandleMain != -1) {
        // Use the shader program
        GLES20.glUseProgram(programHandleMain);
        // Get the vertex shader variables
        int glPosition = GLES20.glGetAttribLocation(programHandleMain, "attr_position");
        int textureCoord = GLES20.glGetAttribLocation(programHandleMain, "attr_tc");
        // Get the fragment shader variables
        int ySampler = GLES20.glGetUniformLocation(programHandleMain, "ySampler");
        int uSampler = GLES20.glGetUniformLocation(programHandleMain, "uSampler");
        int vSampler = GLES20.glGetUniformLocation(programHandleMain, "vSampler");
        // Assign values to the variables
        /**
         * GLES20.GL_TEXTURE0 is bound to ySampler
         * GLES20.GL_TEXTURE1 is bound to uSampler
         * GLES20.GL_TEXTURE2 is bound to vSampler
         *
         * In other words, the second parameter of glUniform1i is the texture unit index
         */
        GLES20.glUniform1i(ySampler, 0);
        GLES20.glUniform1i(uSampler, 1);
        GLES20.glUniform1i(vSampler, 2);

        GLES20.glEnableVertexAttribArray(glPosition);
        GLES20.glEnableVertexAttribArray(textureCoord);

        /**
         * Set the vertex shader data
         */
        squareVertices.position(0);
        GLES20.glVertexAttribPointer(glPosition, GLUtil.COUNT_PER_SQUARE_VERTICE, GLES20.GL_FLOAT, false, 8, squareVertices);
        coordVertices.position(0);
        GLES20.glVertexAttribPointer(textureCoord, GLUtil.COUNT_PER_COORD_VERTICES, GLES20.GL_FLOAT, false, 8, coordVertices);
    }
}
And createTexture creates a texture from the given width, height, and format:
private void createTexture(int width, int height, int format, int[] textureId) {
    // Create the texture
    GLES20.glGenTextures(1, textureId, 0);
    // Bind the texture
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId[0]);
    /**
     * {@link GLES20#GL_TEXTURE_WRAP_S} is the horizontal texture wrap mode
     * {@link GLES20#GL_TEXTURE_WRAP_T} is the vertical texture wrap mode
     *
     * {@link GLES20#GL_REPEAT}: repeat
     * {@link GLES20#GL_MIRRORED_REPEAT}: mirrored repeat
     * {@link GLES20#GL_CLAMP_TO_EDGE}: clamp to the edge
     *
     * For example, with {@link GLES20#GL_REPEAT}:
     *
     *     squareVertices      coordVertices
     *     -1.0f, -1.0f,       1.0f, 1.0f,
     *      1.0f, -1.0f,       0.0f, 1.0f,    -> same as the TextureView preview
     *     -1.0f,  1.0f,       1.0f, 0.0f,
     *      1.0f,  1.0f        0.0f, 0.0f
     *
     *     squareVertices      coordVertices
     *     -1.0f, -1.0f,       2.0f, 2.0f,
     *      1.0f, -1.0f,       0.0f, 2.0f,    -> compared to the TextureView preview, split into
     *     -1.0f,  1.0f,       2.0f, 0.0f,       4 identical tiles (bottom-left, bottom-right,
     *      1.0f,  1.0f        0.0f, 0.0f        top-left, top-right)
     */
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    /**
     * {@link GLES20#GL_TEXTURE_MIN_FILTER} applies when the displayed texture is smaller than the loaded one
     * {@link GLES20#GL_TEXTURE_MAG_FILTER} applies when the displayed texture is larger than the loaded one
     *
     * {@link GLES20#GL_NEAREST}: use the color of the texel nearest to the coordinate as the pixel color
     * {@link GLES20#GL_LINEAR}: take the nearest texels and compute the pixel color by weighted averaging
     */
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height, 0, format, GLES20.GL_UNSIGNED_BYTE, null);
}
Invocation from Java code
When frame data arrives from the data source, crop it and pass it in:
@Override
public void onPreview(final byte[] nv21, Camera camera) {
    // Crop the specified image region
    ImageUtil.cropNV21(nv21, this.squareNV21, previewSize.width, previewSize.height,
            cropRect.left, cropRect.top, cropRect.right, cropRect.bottom);
    // Refresh the GLSurfaceView
    roundCameraGLSurfaceView.refreshFrameNV21(this.squareNV21);
}
NV21 data cropping code:
/**
 * Crop NV21 data
 *
 * @param originNV21 the original NV21 data
 * @param cropNV21   the cropped NV21 result; its memory must be allocated in advance
 * @param width      the width of the original data
 * @param height     the height of the original data
 * @param left       the left boundary of the cropped region in the original data
 * @param top        the top boundary of the cropped region in the original data
 * @param right      the right boundary of the cropped region in the original data
 * @param bottom     the bottom boundary of the cropped region in the original data
 */
public static void cropNV21(byte[] originNV21, byte[] cropNV21, int width, int height,
                            int left, int top, int right, int bottom) {
    int halfWidth = width / 2;
    int cropImageWidth = right - left;
    int cropImageHeight = bottom - top;

    // Top-left of the Y plane in the original data
    int originalYLineStart = top * width;
    int targetYIndex = 0;
    // Top-left of the UV plane in the original data
    int originalUVLineStart = width * height + top * halfWidth;
    // Start of the UV data in the target
    int targetUVIndex = cropImageWidth * cropImageHeight;

    for (int i = top; i < bottom; i++) {
        System.arraycopy(originNV21, originalYLineStart + left, cropNV21, targetYIndex, cropImageWidth);
        originalYLineStart += width;
        targetYIndex += cropImageWidth;
        if ((i & 1) == 0) {
            System.arraycopy(originNV21, originalUVLineStart + left, cropNV21, targetUVIndex, cropImageWidth);
            originalUVLineStart += width;
            targetUVIndex += cropImageWidth;
        }
    }
}
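For completeness, here is one way the cropRect used in onPreview could be computed; this helper is our own illustrative addition (NV21 needs even offsets so the interleaved UV plane stays aligned):

// Illustrative helper: a centered square crop region with even-aligned edges,
// suitable as the cropRect for the cropNV21 call above.
public static Rect centerSquareCropRect(int width, int height) {
    int sideLength = Math.min(width, height) & ~1; // force an even side length
    int left = ((width - sideLength) / 2) & ~1;    // force an even left offset
    int top = ((height - sideLength) / 2) & ~1;    // force an even top offset
    return new Rect(left, top, left + sideLength, top + sideLength);
}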
Pass it to the GLSurfaceView and refresh the frame data:
/**
 * Refresh the frame with NV21 data
 *
 * @param data the NV21 data
 */
public void refreshFrameNV21(byte[] data) {
    if (rendererReady) {
        yBuf.clear();
        uBuf.clear();
        vBuf.clear();
        putNV21(data, frameWidth, frameHeight);
        dataInput = true;
        requestRender();
    }
}
Here, putNV21 extracts the Y, U, and V components from the NV21 data:
/**
 * Extract the Y, U, and V components from NV21 data
 *
 * @param src    the NV21 frame data
 * @param width  the width
 * @param height the height
 */
private void putNV21(byte[] src, int width, int height) {
    int ySize = width * height;
    int frameSize = ySize * 3 / 2;

    // Extract the Y component
    System.arraycopy(src, 0, yArray, 0, ySize);

    int k = 0;
    // Extract the U and V components (NV21 interleaves them as VUVU...)
    int index = ySize;
    while (index < frameSize) {
        vArray[k] = src[index++];
        uArray[k++] = src[index++];
    }
    yBuf.put(yArray).position(0);
    uBuf.put(uArray).position(0);
    vBuf.put(vArray).position(0);
}
After requestRender is executed, onDrawFrame is called back, where the three textures are bound and drawn:
@Override
public void onDrawFrame(GL10 gl) {
    // Activate, bind, and upload data for each of the three textures
    if (dataInput) {
        // Y
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexture[0]);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
                frameWidth, frameHeight,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuf);
        // U
        GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uTexture[0]);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
                frameWidth >> 1, frameHeight >> 1,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, uBuf);
        // V
        GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vTexture[0]);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
                frameWidth >> 1, frameHeight >> 1,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, vBuf);
        // Draw after all the data is bound
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    }
}
The drawing is complete.
4、 Adding a border layer
Sometimes the requirement is not just a circular preview; we may also need to add a border around the camera preview.
Border effect
Using the same idea, we dynamically modify the border values and trigger a redraw. The relevant code in the custom border view is as follows:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    if (paint == null) {
        paint = new Paint();
        paint.setStyle(Paint.Style.STROKE);
        paint.setAntiAlias(true);
        SweepGradient sweepGradient = new SweepGradient(
                ((float) getWidth() / 2), ((float) getHeight() / 2),
                new int[]{Color.GREEN, Color.CYAN, Color.BLUE, Color.GREEN}, null);
        paint.setShader(sweepGradient);
    }
    drawBorder(canvas, 6);
}

private void drawBorder(Canvas canvas, int rectThickness) {
    if (canvas == null) {
        return;
    }
    paint.setStrokeWidth(rectThickness);
    Path drawPath = new Path();
    drawPath.addRoundRect(new RectF(0, 0, getWidth(), getHeight()), radius, radius, Path.Direction.CW);
    canvas.drawPath(drawPath, paint);
}

public void turnRound() {
    invalidate();
}

public void setRadius(int radius) {
    this.radius = radius;
}
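A sketch of how this border view might be stacked above the circular preview (the container id and the RoundBorderView class name are our own assumptions; any FrameLayout stacking works):

// Sketch: overlay the border view on the circular preview (same size, same parent).
FrameLayout previewContainer = findViewById(R.id.preview_container); // assumed id
RoundBorderView roundBorderView = new RoundBorderView(this);         // the custom view above
int sideLength = Math.min(previewContainer.getWidth(), previewContainer.getHeight());
roundBorderView.setRadius(sideLength / 2); // match the circular preview's radius
previewContainer.addView(roundBorderView,
        new FrameLayout.LayoutParams(sideLength, sideLength)); // added last -> drawn on top
roundBorderView.turnRound(); // trigger a redraw with the new radius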
5、 Full demo code:
https://github.com/wangshengyang1996/GLCameraDemo
The demo uses the Camera API and the Camera2 API and selects the preview size closest to a square; uses the Camera API and dynamically adds a parent control layer to achieve a square preview; and uses the Camera API to obtain preview data and display it with OpenGL. Finally, I recommend an easy-to-use free offline face recognition SDK for Android that combines well with the techniques in this article: https://ai.arcsoft.com.cn/product/arcface.html
That is the whole content of this article. I hope it helps you in your study, and I hope you will continue to support us.