Options for efficiently drawing byte array streams for display in Android

In short, what I need to do is display a real-time stream of video frames in Android (each frame is in YUV420 format). I have a callback function in which I receive a single frame as a byte array. It looks like this:

public void onFrameReceived(byte[] frame, int height, int width, int format) {
    // Display this frame on a SurfaceView/TextureView.
}

A feasible but slow option is to convert the byte array into a Bitmap and draw it on the SurfaceView's canvas. In the future I would like to be able to change the brightness, contrast, etc. of each frame, so I am hoping I can use OpenGL ES. What are my other options for doing this efficiently?
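
For reference, a minimal sketch of that slow path might look like the following. It assumes the frame arrives as NV21 (YuvImage only accepts NV21 or YUY2, so a planar YUV420 frame would first have to be repacked), and surfaceHolder is a hypothetical reference to the SurfaceView's holder:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

public void onFrameReceived(byte[] frame, int height, int width, int format) {
    // Compress the raw frame to JPEG, then decode it back into a Bitmap.
    YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 80, jpegStream);
    byte[] jpegBytes = jpegStream.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);

    // Draw the Bitmap on the SurfaceView's canvas.
    Canvas canvas = surfaceHolder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(bitmap, 0, 0, null);
        surfaceHolder.unlockCanvasAndPost(canvas);
    }
}

The compress/decode round trip on every frame is what makes this approach slow.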

Remember, unlike implementations based on the Camera or MediaPlayer classes, I cannot use camera.setPreviewTexture(SurfaceTexture) to direct the output to a SurfaceView/TextureView, because I receive the individual frames in C using GStreamer.

Solution:

I'm using FFmpeg for my project, but the principle of rendering YUV frames should be the same for you.

For example, if a frame is 756 x 576, the Y plane will be that size. The U and V planes are half the width and height of the Y plane, so you must make sure you account for the size differences.
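
A quick sketch of what that looks like in code, assuming a tightly packed planar I420 layout (Y plane first, then U, then V) in the frame array:

import java.nio.ByteBuffer;

// Plane sizes for a tightly packed I420 frame: the Y plane is full
// resolution, each chroma plane is half the width and half the height.
int ySize  = width * height;
int uvSize = (width / 2) * (height / 2);

// Wrap each plane without copying; each buffer's position and limit
// mark that plane's region inside the original frame array.
ByteBuffer yPlane = ByteBuffer.wrap(frame, 0, ySize);
ByteBuffer uPlane = ByteBuffer.wrap(frame, ySize, uvSize);
ByteBuffer vPlane = ByteBuffer.wrap(frame, ySize + uvSize, uvSize);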

I don't know the Android camera API, but the frames I get from a DVB source have a width, and each line also has a stride: extra padding pixels at the end of each line in the frame. If yours are the same, you should take this into account when calculating the texture coordinates.

Adjust the texture coordinates to account for the difference between the width and the line size:

float u = 1.0f / buffer->y_linesize * buffer->wid; // adjust texture coord for edge
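
One way to make this work is to allocate each texture at the full line size and let the adjusted coordinate crop off the padding. A sketch with GLES20, where yTextureId, yLineSize and yPlane are assumed Java-side equivalents of the C buffer fields above:

import android.opengl.GLES20;

// Upload the Y plane as a single-channel texture that is yLineSize texels
// wide, so the padding pixels at the end of each line are included.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTextureId);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
        yLineSize, height, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yPlane);

// Right-hand texture coordinate of the visible area, as in the line above:
// everything from u to 1.0 is padding and is never sampled.
float u = (float) width / (float) yLineSize;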

The vertex shader I use takes screen coordinates from 0.0 to 1.0, but you can change these to suit. It also takes in texture coordinates and a colour input. I use the colour input so that I can add fading in, fading out, etc.

Vertex shader:

#ifdef GL_ES
precision mediump float;
const float c1 = 1.0;
const float c2 = 2.0;
#else
const float c1 = 1.0f;
const float c2 = 2.0f;
#endif

attribute vec4 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_colorin;
varying vec2 v_texcoord;
varying vec4 v_colorout;

void main(void)
{
    v_texcoord = a_texcoord;
    v_colorout = a_colorin;

    float x = a_vertex.x * c2 - c1;
    float y = -(a_vertex.y * c2 - c1);

    gl_Position = vec4(x, y, a_vertex.z, c1);
}

The fragment shader takes three uniform textures, one each for the Y, U and V planes, and converts them to RGB. The result is also multiplied by the colour passed in from the vertex shader:

#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D u_texturey;
uniform sampler2D u_textureu;
uniform sampler2D u_texturev;
varying vec2 v_texcoord;
varying vec4 v_colorout;

void main(void)
{
    float y = texture2D(u_texturey, v_texcoord).r;
    float u = texture2D(u_textureu, v_texcoord).r - 0.5;
    float v = texture2D(u_texturev, v_texcoord).r - 0.5;
    vec4 rgb = vec4(y + 1.403 * v,
                    y - 0.344 * u - 0.714 * v,
                    y + 1.770 * u,
                    1.0);
    gl_FragColor = rgb * v_colorout;
}
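
At draw time the three sampler uniforms then need to be pointed at three texture units. A sketch, assuming program is the linked shader program and the three texture ids hold the uploaded Y, U and V planes:

import android.opengl.GLES20;

GLES20.glUseProgram(program);

// Bind each plane to its own texture unit and tell the samplers which
// unit to read from (0 = Y, 1 = U, 2 = V).
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTextureId);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "u_texturey"), 0);

GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uTextureId);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "u_textureu"), 1);

GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vTextureId);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "u_texturev"), 2);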

The vertices used are:

float   x, y, z;    // coords
float   s, t;       // texture coords
uint8_t r, g, b, a; // colour and alpha
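
One way to feed that interleaved layout to the shader with GLES20 is sketched below; the attribute names match the vertex shader above, while the program handle and the quad contents are assumptions. The stride is 24 bytes: three position floats, two texture-coordinate floats and four colour bytes:

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// One interleaved vertex is 24 bytes: x,y,z (12) + s,t (8) + r,g,b,a (4).
final int STRIDE = 24;
ByteBuffer vertices = ByteBuffer.allocateDirect(4 * STRIDE).order(ByteOrder.nativeOrder());
// ... fill in the four vertices of the quad here ...

int aVertex   = GLES20.glGetAttribLocation(program, "a_vertex");
int aTexcoord = GLES20.glGetAttribLocation(program, "a_texcoord");
int aColorin  = GLES20.glGetAttribLocation(program, "a_colorin");

vertices.position(0);   // x, y, z at byte offset 0
GLES20.glVertexAttribPointer(aVertex, 3, GLES20.GL_FLOAT, false, STRIDE, vertices);
GLES20.glEnableVertexAttribArray(aVertex);

vertices.position(12);  // s, t at byte offset 12
GLES20.glVertexAttribPointer(aTexcoord, 2, GLES20.GL_FLOAT, false, STRIDE, vertices);
GLES20.glEnableVertexAttribArray(aTexcoord);

vertices.position(20);  // r, g, b, a at byte offset 20; normalized to 0.0-1.0
GLES20.glVertexAttribPointer(aColorin, 4, GLES20.GL_UNSIGNED_BYTE, true, STRIDE, vertices);
GLES20.glEnableVertexAttribArray(aColorin);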

I hope this helps!

Edit:

For the NV12 format you can still use a fragment shader, although I haven't tried it myself. It takes the interleaved UV as a luminance-alpha channel or similar.

See this answer for an example: https://stackoverflow.com/a/22456885/2979092
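
For what it's worth, the upload side of that approach (untested, per the note above) would push NV12's interleaved UV plane as a two-channel luminance-alpha texture, with U read from the .r channel and V from the .a channel in the fragment shader; uvTextureId and uvPlane are assumed names:

import android.opengl.GLES20;

// NV12 stores a full-size Y plane followed by one interleaved UV plane at
// half resolution. Uploading it as GL_LUMINANCE_ALPHA puts U in the
// luminance (.r) channel and V in the alpha (.a) channel.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uvTextureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA,
        width / 2, height / 2, 0,
        GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, uvPlane);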
