Preface

In the previous theory article we introduced several formulas for converting YUV to RGB along with some optimization techniques. Today we look at the opposite direction, RGB to YUV, and put it into practice with OpenGL ES: converting an RGB image into a YUV image with a shader, then saving the result to local storage.

Some readers may ask: YUV-to-RGB conversion is used for rendering and display, so what is the use case for RGB-to-YUV? When encoding video we can use MediaCodec together with a Surface, which apparently has no need for RGB-to-YUV. Hardware encoding indeed does not, but what about software encoding? The usual principle is hardware encoding first with software encoding as a fallback; when hardware encoding is unavailable we may have to fall back to a software encoder such as the x264 library, and that is where RGB-to-YUV conversion comes into play.

RGB to YUV conversion formulas

In the earlier article on OpenGL ES YUV rendering we introduced several YUV standards. The RGB-to-YUV conversion formulas are as follows:

RGB to BT.601 YUV

Y  =  0.257R + 0.504G + 0.098B + 16
Cb = -0.148R - 0.291G + 0.439B + 128
Cr =  0.439R - 0.368G - 0.071B + 128

RGB to BT.709 YUV

Y  =  0.183R + 0.614G + 0.062B + 16
Cb = -0.101R - 0.339G + 0.439B + 128
Cr =  0.439R - 0.399G - 0.040B + 128
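As a quick CPU-side reference, the BT.601 formulas above map directly to code. This is a minimal sketch of our own; the names `rgbToYuvBt601` and `clamp8` are illustrative and not part of the project code shown later:

```cpp
#include <cstdint>

struct Yuv { uint8_t y, u, v; };

// Round to nearest and clamp to the 8-bit range.
static uint8_t clamp8(double x) {
    double v = x + 0.5;
    if (v < 0.0) v = 0.0;
    if (v > 255.0) v = 255.0;
    return static_cast<uint8_t>(v);
}

// Convert one 8-bit RGB pixel to video-range BT.601 YUV,
// using exactly the coefficients listed above.
Yuv rgbToYuvBt601(uint8_t r, uint8_t g, uint8_t b) {
    Yuv out{};
    out.y = clamp8( 0.257 * r + 0.504 * g + 0.098 * b + 16);
    out.u = clamp8(-0.148 * r - 0.291 * g + 0.439 * b + 128); // Cb
    out.v = clamp8( 0.439 * r - 0.368 * g - 0.071 * b + 128); // Cr
    return out;
}
```

For example, pure white (255, 255, 255) maps to the video-range extremes Y = 235, Cb = Cr = 128, and pure black to Y = 16, Cb = Cr = 128.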

Alternatively, the conversion can be expressed as a matrix multiplication, which is more convenient.
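For example, the BT.601 formulas above restated in matrix form:

[Y ]   [ 0.257  0.504  0.098] [R]   [ 16]
[Cb] = [-0.148 -0.291  0.439] [G] + [128]
[Cr]   [ 0.439 -0.368 -0.071] [B]   [128]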


RGB to YUV

First, an outline of the RGB-to-YUV procedure: convert the RGB data to YUV according to the formulas, then arrange the YUV data in an RGBA layout (this step makes the subsequent readback possible), and finally read the YUV data out with glReadPixels.

As for OpenGL ES, its input currently only accepts the RGBA, luminance, and luminance-alpha formats, and most implementations only support RGBA output. So although the output data is logically in a YUV format, we still have to access the texture data as RGBA when storing it.

Take NV21 as an example: its memory footprint is width × height × 3/2 bytes. Stored as RGBA, a buffer occupies width × height × 4 bytes (RGBA has 4 channels in total). Clearly the sizes do not match, so how do we size the OpenGL buffer so that the RGBA output corresponds to the YUV output? We can simply design the output with width / 4 as its width and height × 3/2 as its height.

Why does this work? Although our goal is to convert RGB to YUV, the GLenum type used for both input and readback is still RGBA, which means: width × height × 3/2 = (width / 4) × (height × 3/2) × 4
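A quick sanity check of that size equality (a throwaway sketch of our own, not part of the renderer code; it assumes width is divisible by 4 and height is even):

```cpp
#include <cstddef>

// Bytes in an NV21 frame: full-size Y plane plus half-size VU plane.
size_t nv21Bytes(size_t w, size_t h) { return w * h * 3 / 2; }

// Bytes read back from a (w/4) x (h*3/2) RGBA framebuffer.
size_t rgbaOutBytes(size_t w, size_t h) { return (w / 4) * (h * 3 / 2) * 4; }
```

For any such width and height the two sizes are identical, which is exactly why the RGBA readback buffer can be reinterpreted as an NV21 frame.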

The YUV data is laid out in memory like this:

width / 4
|--------------|
|              |
|              | h
|      Y       |
|--------------|            
|   U   |  V   |
|       |      |  h / 2
|--------------|

After normalizing the layout above to texture coordinates, it becomes:

(0,0) width / 4  (1,0)
|--------------|
|              |
|              |  h
|      Y       |
|--------------|  (1,2/3)          
|   U   |  V   |
|       |      |  h / 2
|--------------|
(0,1)           (1,1)

From this layout we can see that when the texture coordinate y < 2/3, the whole input texture must be sampled once to generate the Y data; when y > 2/3, the whole texture must be sampled once more to generate the UV data. We also need to set the viewport to glViewport(0, 0, width / 4, height * 1.5);

Because the viewport width is 1/4 of the original, you can think of it as taking one sample every 4 pixels of the original image. Since generating the Y data requires sampling every pixel, each fragment needs 3 additional offset samples.


Likewise, generating the UV data also requires 3 additional offset samples.


In the shader, the offset variable must be set to a normalized value: 1.0/width. Following the diagram, in the range where the texture coordinate y < 2/3, each sample (plus three offset samples) reads 4 RGBA pixels (R,G,B,A) and produces one (Y0,Y1,Y2,Y3); by the end of this range a buffer of width × height bytes has been filled. In the range where y > 2/3, each sample (plus three offset samples) reads 4 RGBA pixels (R,G,B,A) and produces one (V0,U0,V1,U1); since the UV buffer is only height/2 tall, the VU plane is sampled every other row vertically, and by the end of this range a buffer of width × height / 2 bytes has been filled.
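To make the two branches concrete, the sampling scheme just described can be sketched on the CPU. This is our own reference code, not part of the project; it uses the same full-range coefficients as the shader's COEF_Y/COEF_U/COEF_V, and assumes even width and height:

```cpp
#include <cstdint>
#include <vector>

// Clamp to [0,1] and quantize to 8 bits.
static uint8_t to8(double x) {
    if (x < 0.0) x = 0.0;
    if (x > 1.0) x = 1.0;
    return static_cast<uint8_t>(x * 255.0 + 0.5);
}

// CPU reference for the two shader branches: a width*height Y plane
// followed by a width*height/2 interleaved VU plane (NV21).
std::vector<uint8_t> rgbaToNv21(const uint8_t *rgba, int w, int h) {
    std::vector<uint8_t> out(w * h * 3 / 2);
    uint8_t *yPlane = out.data();
    uint8_t *vuPlane = out.data() + w * h;
    for (int j = 0; j < h; ++j) {
        for (int i = 0; i < w; ++i) {
            const uint8_t *p = rgba + (j * w + i) * 4;
            double r = p[0] / 255.0, g = p[1] / 255.0, b = p[2] / 255.0;
            // Y branch: every pixel gets a luma sample.
            yPlane[j * w + i] = to8(0.299 * r + 0.587 * g + 0.114 * b);
            // VU branch: every other row and column, V before U (NV21).
            if (j % 2 == 0 && i % 2 == 0) {
                vuPlane[(j / 2) * w + i]     = to8( 0.615 * r - 0.515 * g - 0.100 * b + 0.5);
                vuPlane[(j / 2) * w + i + 1] = to8(-0.147 * r - 0.289 * g + 0.436 * b + 0.5);
            }
        }
    }
    return out;
}
```

The shader produces the same byte stream, except that it computes 4 output bytes per fragment (hence the three offset samples) instead of one byte per loop iteration.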

Main code

RGBtoYUVOpengl.cpp
#include "../utils/Log.h"
#include "RGBtoYUVOpengl.h"
// Vertex shader
static const char *ver = "#version 300 es\n"
                         "in vec4 aPosition;\n"
                         "in vec2 aTexCoord;\n"
                         "out vec2 v_texCoord;\n"
                         "void main() {\n"
                         "  v_texCoord = aTexCoord;\n"
                         "  gl_Position = aPosition;\n"
                         "}";
// Fragment shader
static const char *fragment = "#version 300 es\n"
                              "precision mediump float;\n"
                              "in vec2 v_texCoord;\n"
                              "layout(location = 0) out vec4 outColor;\n"
                              "uniform sampler2D s_TextureMap;\n"
                              "uniform float u_Offset;\n"
                              "const vec3 COEF_Y = vec3(0.299, 0.587, 0.114);\n"
                              "const vec3 COEF_U = vec3(-0.147, -0.289, 0.436);\n"
                              "const vec3 COEF_V = vec3(0.615, -0.515, -0.100);\n"
                              "const float UV_DIVIDE_LINE = 2.0 / 3.0;\n"
                              "void main(){\n"
                              "    vec2 texelOffset = vec2(u_Offset, 0.0);\n"
                              "    if (v_texCoord.y <= UV_DIVIDE_LINE) {\n"
                              "        vec2 texCoord = vec2(v_texCoord.x, v_texCoord.y * 3.0 / 2.0);\n"
                              "        vec4 color0 = texture(s_TextureMap, texCoord);\n"
                              "        vec4 color1 = texture(s_TextureMap, texCoord + texelOffset);\n"
                              "        vec4 color2 = texture(s_TextureMap, texCoord + texelOffset * 2.0);\n"
                              "        vec4 color3 = texture(s_TextureMap, texCoord + texelOffset * 3.0);\n"
                              "        float y0 = dot(color0.rgb, COEF_Y);\n"
                              "        float y1 = dot(color1.rgb, COEF_Y);\n"
                              "        float y2 = dot(color2.rgb, COEF_Y);\n"
                              "        float y3 = dot(color3.rgb, COEF_Y);\n"
                              "        outColor = vec4(y0, y1, y2, y3);\n"
                              "    } else {\n"
                              "        vec2 texCoord = vec2(v_texCoord.x, (v_texCoord.y - UV_DIVIDE_LINE) * 3.0);\n"
                              "        vec4 color0 = texture(s_TextureMap, texCoord);\n"
                              "        vec4 color1 = texture(s_TextureMap, texCoord + texelOffset);\n"
                              "        vec4 color2 = texture(s_TextureMap, texCoord + texelOffset * 2.0);\n"
                              "        vec4 color3 = texture(s_TextureMap, texCoord + texelOffset * 3.0);\n"
                              "        float v0 = dot(color0.rgb, COEF_V) + 0.5;\n"
                              "        float u0 = dot(color1.rgb, COEF_U) + 0.5;\n"
                              "        float v1 = dot(color2.rgb, COEF_V) + 0.5;\n"
                              "        float u1 = dot(color3.rgb, COEF_U) + 0.5;\n"
                              "        outColor = vec4(v0, u0, v1, u1);\n"
                              "    }\n"
                              "}";
// Draw a rectangle as two triangles using a triangle strip:
// points 1,2,3 form one triangle and points 2,3,4 form the other
const static GLfloat VERTICES[] = {
        1.0f,-1.0f, // bottom right
        1.0f,1.0f, // top right
        -1.0f,-1.0f, // bottom left
        -1.0f,1.0f // top left
};
// FBO texture coordinates (like the phone screen coordinate system, origin at the bottom left)
// Be careful not to mix up the coordinates
const static GLfloat TEXTURE_COORD[] = {
        1.0f,0.0f, // bottom right
        1.0f,1.0f, // top right
        0.0f,0.0f, // bottom left
        0.0f,1.0f // top left
};
RGBtoYUVOpengl::RGBtoYUVOpengl() {
    initGlProgram(ver,fragment);
    positionHandle = glGetAttribLocation(program,"aPosition");
    textureHandle = glGetAttribLocation(program,"aTexCoord");
    textureSampler = glGetUniformLocation(program,"s_TextureMap");
    u_Offset = glGetUniformLocation(program,"u_Offset");
    LOGD("program:%d",program);
    LOGD("positionHandle:%d",positionHandle);
    LOGD("textureHandle:%d",textureHandle);
    LOGD("textureSampler:%d",textureSampler);
    LOGD("u_Offset:%d",u_Offset);
}
RGBtoYUVOpengl::~RGBtoYUVOpengl() noexcept {
}
void RGBtoYUVOpengl::fboPrepare() {
    glGenTextures(1, &fboTextureId);
    // Bind the texture
    glBindTexture(GL_TEXTURE_2D, fboTextureId);
    // Set the wrap and filter modes for the bound texture object
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, GL_NONE);
    glGenFramebuffers(1,&fboId);
    glBindFramebuffer(GL_FRAMEBUFFER,fboId);
    // Bind the texture
    glBindTexture(GL_TEXTURE_2D,fboTextureId);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTextureId, 0);
    // Allocate the FBO texture at the output size: (width / 4) x (height * 3 / 2)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth / 4, imageHeight * 1.5, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    // Check the FBO status
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER)!= GL_FRAMEBUFFER_COMPLETE) {
        LOGE("FBOSample::CreateFrameBufferObj glCheckFramebufferStatus status != GL_FRAMEBUFFER_COMPLETE");
    }
    // Unbind
    glBindTexture(GL_TEXTURE_2D, GL_NONE);
    glBindFramebuffer(GL_FRAMEBUFFER, GL_NONE);
}
// Rendering logic
void RGBtoYUVOpengl::onDraw() {
    // Draw into the FBO
    // Bind the fbo
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);
    glPixelStorei(GL_UNPACK_ALIGNMENT,1);
    // Set the viewport size
    glViewport(0, 0,imageWidth / 4, imageHeight * 1.5);
    glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(program);
    // Activate the texture unit
    glActiveTexture(GL_TEXTURE2);
    glUniform1i(textureSampler, 2);
    // Bind the texture
    glBindTexture(GL_TEXTURE_2D, textureId);
    // Set the sampling offset (one texel, normalized)
    float texelOffset = (float) (1.f / (float) imageWidth);
    glUniform1f(u_Offset,texelOffset);
    /**
     * size: number of components per vertex; here each point is two floats
     * normalized: whether to normalize; not needed, the data is already normalized
     * stride: byte offset between consecutive vertices; 0 if tightly packed
     */
    // Enable vertex data
    glEnableVertexAttribArray(positionHandle);
    glVertexAttribPointer(positionHandle,2,GL_FLOAT,GL_FALSE,0,VERTICES);
    // Texture coordinates
    glEnableVertexAttribArray(textureHandle);
    glVertexAttribPointer(textureHandle,2,GL_FLOAT,GL_FALSE,0,TEXTURE_COORD);
    // Draw the rectangle as two triangles from 4 vertices
    glDrawArrays(GL_TRIANGLE_STRIP,0,4);
    glUseProgram(0);
    // Disable vertex arrays
    glDisableVertexAttribArray(positionHandle);
    if(nullptr != eglHelper){
        eglHelper->swapBuffers();
    }
    glBindTexture(GL_TEXTURE_2D, 0);
    // Unbind the fbo
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
// Set the RGB image data
void RGBtoYUVOpengl::setPixel(void *data, int width, int height, int length) {
    LOGD("texture setPixel");
    imageWidth = width;
    imageHeight = height;
    // Prepare the fbo
    fboPrepare();
    glGenTextures(1, &textureId);
    // Note: the following two calls go together; whichever texture unit
    // glActiveTexture activates is the one the sampler2D must be set to.
    // The default is unit 0; if another unit is used, it needs to be
    // re-activated in onDraw.
//    glActiveTexture(GL_TEXTURE0);
//    glUniform1i(textureSampler, 0);
// For example, equivalently:
    glActiveTexture(GL_TEXTURE2);
    glUniform1i(textureSampler, 2);
    // Bind the texture
    glBindTexture(GL_TEXTURE_2D, textureId);
    // Set the wrap and filter modes for the bound texture object
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // Generate mipmaps
    glGenerateMipmap(GL_TEXTURE_2D);
    // Unbind
    glBindTexture(GL_TEXTURE_2D, 0);
}
// Read back the rendered YUV data
void RGBtoYUVOpengl::readYUV(uint8_t **data, int *width, int *height) {
    // Read from the fbo
    // Bind the fbo
    *width = imageWidth;
    *height = imageHeight;
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);
    glBindTexture(GL_TEXTURE_2D, fboTextureId);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
            GL_TEXTURE_2D, fboTextureId, 0);
    *data = new uint8_t[imageWidth * imageHeight * 3 / 2];
    glReadPixels(0, 0, imageWidth / 4, imageHeight * 1.5, GL_RGBA, GL_UNSIGNED_BYTE, *data);
    glBindTexture(GL_TEXTURE_2D, 0);
    // Unbind the fbo
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

Below is the main logic of the Activity:


public class RGBToYUVActivity extends AppCompatActivity {
    protected MyGLSurfaceView myGLSurfaceView;
    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_rgb_to_yuv);
        myGLSurfaceView = findViewById(R.id.my_gl_surface_view);
        myGLSurfaceView.setOpenGlListener(new MyGLSurfaceView.OnOpenGlListener() {
            @Override
            public BaseOpengl onOpenglCreate() {
                return new RGBtoYUVOpengl();
            }
            @Override
            public Bitmap requestBitmap() {
                BitmapFactory.Options options = new BitmapFactory.Options();
                options.inScaled = false;
                return BitmapFactory.decodeResource(getResources(),R.mipmap.ic_smile,options);
            }
            @Override
            public void readPixelResult(byte[] bytes) {
                if (null != bytes) {
                }
            }
            // Callback delivering the result data read by RGBtoYUVOpengl::readYUV
            @Override
            public void readYUVResult(byte[] bytes) {
                if (null != bytes) {
                    String fileName = System.currentTimeMillis() + ".yuv";
                    File fileParent = getFilesDir();
                    if (!fileParent.exists()) {
                        fileParent.mkdirs();
                    }
                    FileOutputStream fos = null;
                    try {
                        File file = new File(fileParent, fileName);
                        fos = new FileOutputStream(file);
                        fos.write(bytes,0,bytes.length);
                        fos.flush();
                        fos.close();
                        Toast.makeText(RGBToYUVActivity.this, "YUV file saved: " + file.getAbsolutePath(), Toast.LENGTH_LONG).show();
                    } catch (Exception e) {
                        Log.v("fly_learn_opengl", "Failed to save file: " + e.getMessage());
                        Toast.makeText(RGBToYUVActivity.this, "Failed to save YUV file", Toast.LENGTH_LONG).show();
                    }
                }
            }
        });
        Button button = findViewById(R.id.bt_rgb_to_yuv);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                myGLSurfaceView.readYuvData();
            }
        });
        ImageView iv_rgb = findViewById(R.id.iv_rgb);
        iv_rgb.setImageResource(R.mipmap.ic_smile);
    }
}

Below is the code of the custom SurfaceView:

public class MyGLSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    private final static int MSG_CREATE_GL = 101;
    private final static int MSG_CHANGE_GL = 102;
    private final static int MSG_DRAW_GL = 103;
    private final static int MSG_DESTROY_GL = 104;
    private final static int MSG_READ_PIXEL_GL = 105;
    private final static int MSG_UPDATE_BITMAP_GL = 106;
    private final static int MSG_UPDATE_YUV_GL = 107;
    private final static int MSG_READ_YUV_GL = 108;
    public BaseOpengl baseOpengl;
    private OnOpenGlListener onOpenGlListener;
    private HandlerThread handlerThread;
    private Handler renderHandler;
    public int surfaceWidth;
    public int surfaceHeight;
    public MyGLSurfaceView(Context context) {
        this(context,null);
    }
    public MyGLSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        getHolder().addCallback(this);
        handlerThread = new HandlerThread("RenderHandlerThread");
        handlerThread.start();
        renderHandler = new Handler(handlerThread.getLooper()){
            @Override
            public void handleMessage(@NonNull Message msg) {
                switch (msg.what){
                    case MSG_CREATE_GL:
                        baseOpengl = onOpenGlListener.onOpenglCreate();
                        Surface surface = (Surface) msg.obj;
                        if(null != baseOpengl){
                            baseOpengl.surfaceCreated(surface);
                            Bitmap bitmap = onOpenGlListener.requestBitmap();
                            if(null != bitmap){
                                baseOpengl.setBitmap(bitmap);
                            }
                        }
                        break;
                    case MSG_CHANGE_GL:
                        if(null != baseOpengl){
                            Size size = (Size) msg.obj;
                            baseOpengl.surfaceChanged(size.getWidth(),size.getHeight());
                        }
                        break;
                    case MSG_DRAW_GL:
                        if(null != baseOpengl){
                            baseOpengl.onGlDraw();
                        }
                        break;
                    case MSG_READ_PIXEL_GL:
                        if(null != baseOpengl){
                           byte[] bytes = baseOpengl.readPixel();
                           if(null != bytes && null != onOpenGlListener){
                               onOpenGlListener.readPixelResult(bytes);
                           }
                        }
                        break;
                    case MSG_READ_YUV_GL:
                        if(null != baseOpengl){
                            byte[] bytes = baseOpengl.readYUVResult();
                            if(null != bytes && null != onOpenGlListener){
                                onOpenGlListener.readYUVResult(bytes);
                            }
                        }
                        break;
                    case MSG_UPDATE_BITMAP_GL:
                        if(null != baseOpengl){
                            Bitmap bitmap = onOpenGlListener.requestBitmap();
                            if(null != bitmap){
                                baseOpengl.setBitmap(bitmap);
                                baseOpengl.onGlDraw();
                            }
                        }
                        break;
                    case MSG_UPDATE_YUV_GL:
                        if(null != baseOpengl){
                            YUVBean yuvBean = (YUVBean) msg.obj;
                            if(null != yuvBean){
                                baseOpengl.setYuvData(yuvBean.getyData(),yuvBean.getUvData(),yuvBean.getWidth(),yuvBean.getHeight());
                                baseOpengl.onGlDraw();
                            }
                        }
                        break;
                    case MSG_DESTROY_GL:
                        if(null != baseOpengl){
                            baseOpengl.surfaceDestroyed();
                        }
                        break;
                }
            }
        };
    }
    public void setOpenGlListener(OnOpenGlListener listener) {
        this.onOpenGlListener = listener;
    }
    @Override
    public void surfaceCreated(@NonNull SurfaceHolder surfaceHolder) {
        Message message = Message.obtain();
        message.what = MSG_CREATE_GL;
        message.obj = surfaceHolder.getSurface();
        renderHandler.sendMessage(message);
    }
    @Override
    public void surfaceChanged(@NonNull SurfaceHolder surfaceHolder, int i, int w, int h) {
        Message message = Message.obtain();
        message.what = MSG_CHANGE_GL;
        message.obj = new Size(w,h);
        renderHandler.sendMessage(message);
        Message message1 = Message.obtain();
        message1.what = MSG_DRAW_GL;
        renderHandler.sendMessage(message1);
        surfaceWidth = w;
        surfaceHeight = h;
    }
    @Override
    public void surfaceDestroyed(@NonNull SurfaceHolder surfaceHolder) {
        Message message = Message.obtain();
        message.what = MSG_DESTROY_GL;
        renderHandler.sendMessage(message);
    }
    public void readGlPixel(){
        Message message = Message.obtain();
        message.what = MSG_READ_PIXEL_GL;
        renderHandler.sendMessage(message);
    }
    public void readYuvData(){
        Message message = Message.obtain();
        message.what = MSG_READ_YUV_GL;
        renderHandler.sendMessage(message);
    }
    public void updateBitmap(){
        Message message = Message.obtain();
        message.what = MSG_UPDATE_BITMAP_GL;
        renderHandler.sendMessage(message);
    }
    public void setYuvData(byte[] yData,byte[] uvData,int width,int height){
        Message message = Message.obtain();
        message.what = MSG_UPDATE_YUV_GL;
        message.obj = new YUVBean(yData,uvData,width,height);
        renderHandler.sendMessage(message);
    }
    public void release(){
        // TODO: mind thread synchronization here: if release() is called before
        // surfaceDestroyed() has run, the resources leak
        if(null != baseOpengl){
            baseOpengl.release();
        }
    }
    public void requestRender(){
        Message message = Message.obtain();
        message.what = MSG_DRAW_GL;
        renderHandler.sendMessage(message);
    }
    public interface OnOpenGlListener{
        BaseOpengl onOpenglCreate();
        Bitmap requestBitmap();
        void readPixelResult(byte[] bytes);
        void readYUVResult(byte[] bytes);
    }
}

The Java code of BaseOpengl:

public class BaseOpengl {
    public static final int YUV_DATA_TYPE_NV12 = 0;
    public static final int YUV_DATA_TYPE_NV21 = 1;
    // Triangle
    public static final int DRAW_TYPE_TRIANGLE = 0;
    // Quad
    public static final int DRAW_TYPE_RECT = 1;
    // Texture mapping
    public static final int DRAW_TYPE_TEXTURE_MAP = 2;
    // Matrix transform
    public static final int DRAW_TYPE_MATRIX_TRANSFORM = 3;
    // VBO/VAO
    public static final int DRAW_TYPE_VBO_VAO = 4;
    // EBO
    public static final int DRAW_TYPE_EBO_IBO = 5;
    // FBO
    public static final int DRAW_TYPE_FBO = 6;
    // PBO
    public static final int DRAW_TYPE_PBO = 7;
    // YUV rendering (NV12 and NV21)
    public static final int DRAW_YUV_RENDER = 8;
    // Convert an RGB image to NV21
    public static final int DRAW_RGB_TO_YUV = 9;
    public long glNativePtr;
    protected EGLHelper eglHelper;
    protected int drawType;
    public BaseOpengl(int drawType) {
        this.drawType = drawType;
        this.eglHelper = new EGLHelper();
    }
    public void surfaceCreated(Surface surface) {
        Log.v("fly_learn_opengl","------------surfaceCreated:" + surface);
        eglHelper.surfaceCreated(surface);
    }
    public void surfaceChanged(int width, int height) {
        Log.v("fly_learn_opengl","------------surfaceChanged:" + Thread.currentThread());
        eglHelper.surfaceChanged(width,height);
    }
    public void surfaceDestroyed() {
        Log.v("fly_learn_opengl","------------surfaceDestroyed:" + Thread.currentThread());
        eglHelper.surfaceDestroyed();
    }
    public void release(){
        if(glNativePtr != 0){
            n_free(glNativePtr,drawType);
            glNativePtr = 0;
        }
    }
    public void onGlDraw(){
        Log.v("fly_learn_opengl","------------onDraw:" + Thread.currentThread());
        if(glNativePtr == 0){
            glNativePtr = n_gl_nativeInit(eglHelper.nativePtr,drawType);
        }
        if(glNativePtr != 0){
            n_onGlDraw(glNativePtr,drawType);
        }
    }
    public void setBitmap(Bitmap bitmap){
        if(glNativePtr == 0){
            glNativePtr = n_gl_nativeInit(eglHelper.nativePtr,drawType);
        }
        if(glNativePtr != 0){
            n_setBitmap(glNativePtr,bitmap);
        }
    }
    public void setYuvData(byte[] yData,byte[] uvData,int width,int height){
        if(glNativePtr != 0){
            n_setYuvData(glNativePtr,yData,uvData,width,height,drawType);
        }
    }
    public void setMvpMatrix(float[] mvp){
        if(glNativePtr == 0){
            glNativePtr = n_gl_nativeInit(eglHelper.nativePtr,drawType);
        }
        if(glNativePtr != 0){
            n_setMvpMatrix(glNativePtr,mvp);
        }
    }
    public byte[] readPixel(){
        if(glNativePtr != 0){
            return n_readPixel(glNativePtr,drawType);
        }
        return null;
    }
    public byte[] readYUVResult(){
        if(glNativePtr != 0){
            return n_readYUV(glNativePtr,drawType);
        }
        return null;
    }
    // Draw
    private native void n_onGlDraw(long ptr,int drawType);
    private native void n_setMvpMatrix(long ptr,float[] mvp);
    private native void n_setBitmap(long ptr,Bitmap bitmap);
    protected native long n_gl_nativeInit(long eglPtr,int drawType);
    private native void n_free(long ptr,int drawType);
    private native byte[] n_readPixel(long ptr,int drawType);
    private native byte[] n_readYUV(long ptr,int drawType);
    private native void n_setYuvData(long ptr,byte[] yData,byte[] uvData,int width,int height,int drawType);
}

After reading and saving the converted YUV data, you can pull the file to a computer and open it with the YUVViewer tool to check whether the conversion really succeeded.

References

/post/702522…

Series

Opengl ES: setting up the EGL environment
Opengl ES: shaders
Opengl ES: drawing a triangle
Opengl ES: drawing a quad
Opengl ES: texture mapping
Opengl ES: VBO and VAO
Opengl ES: EBO
Opengl ES: FBO
Opengl ES: PBO
Opengl ES: YUV rendering
Some theory on YUV-to-RGB conversion