Previously we covered the workflows for camera rendering, single filters, and multi-filter processing. Next up are the big-eye and face-slimming effects. These rely on face detection: I originally planned to use Apple's native Vision framework, but it was far too laggy on my aging iPhone 6 and simply not up to the job, so I switched to a third-party framework. Its trial version is free to play with and works really well: performance is good and it provides 106 facial landmarks, which is enough for our purposes. Reference posts: 《iOS原生框架Vision实现瘦脸大眼特效》 and 《仿DY特效相机之大眼瘦脸》. The effect achieved in this article is shown below:

The 106 landmarks are shown below (point 104 overlaps 74 and point 105 overlaps 77, so those two are omitted from the figure):

[image]

How it works

There are three main ideas; for the full derivations, please see the reference posts and their principle write-ups (a small Swift sketch of all three follows the list below).

  • 1. Enlarging inside a circle
  • 2. Shrinking inside a circle
  • 3. Stretching toward a point
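
To make these three ideas concrete before we get to the shaders, here is a small conceptual sketch in Swift. It is purely illustrative and not part of the project; the real shaders shown later additionally divide the y components by the texture's aspect ratio before measuring distances.

import simd

//All three warps answer the same question: for the pixel at coord,
//where should the source texture be sampled?

//1. Enlarge inside a circle: sample closer to center, so the area around it looks bigger.
func enlargeInsideCircle(_ coord: SIMD2<Float>, center: SIMD2<Float>,
                         radius: Float, delta: Float) -> SIMD2<Float> {
    var weight = simd_length(coord - center) / radius
    weight = 1.0 - (1.0 - weight * weight) * delta   //< 1 inside the circle, 1 at the edge
    weight = min(max(weight, 0.0), 1.0)              //outside the circle: clamped to 1, unchanged
    return center + (coord - center) * weight
}

//2. Shrink inside a circle: sample farther from center (weight > 1), so the area
//   around it looks smaller.
func shrinkInsideCircle(_ coord: SIMD2<Float>, center: SIMD2<Float>,
                        radius: Float, delta: Float) -> SIMD2<Float> {
    var weight = simd_length(coord - center) / radius
    guard weight < 1.0 else { return coord }         //outside the circle: unchanged
    weight = 1.0 + (1.0 - weight * weight) * delta   //> 1 inside the circle, 1 at the edge
    return center + (coord - center) * weight
}

//3. Stretch toward a point: pixels inside the circle are all pulled in one direction,
//   more strongly the closer they are to center (this is the face-slimming warp).
func stretchToward(_ coord: SIMD2<Float>, center: SIMD2<Float>,
                   target: SIMD2<Float>, delta: Float) -> SIMD2<Float> {
    let radius = simd_length(target - center)
    var ratio = 1.0 - simd_length(coord - center) / radius
    ratio = min(max(ratio, 0.0), 1.0)                //0 outside the circle, so no change there
    return coord - (target - center) * ratio * delta
}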

The GIF below sums up these ideas; it is also the final result of this article, the same effect shown at the very beginning.

From the earlier chapters we are already familiar with:

  • The day-to-day OpenGL ES development workflow (a compressed reminder sketch follows this list):
  • 1. Set up the layer
  • 2. Set up the OpenGL context
  • 3. Set up the render buffer (renderBuffer)
  • 4. Set up the frame buffer (frameBuffer)
  • 5. Compile and link the shaders
  • 6. Set up the VBO (Vertex Buffer Objects)
  • 7. Set up the textures
  • 8. Render
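
For convenience, here is a compressed Swift reminder of steps 1-4. This is a sketch only; the class and property names are illustrative, not the ones used in this project.

import UIKit
import QuartzCore
import OpenGLES

//A minimal GL-backed view skeleton, assuming layerClass returns CAEAGLLayer.self
final class GLPreviewView: UIView {
    override class var layerClass: AnyClass { CAEAGLLayer.self }

    private var context: EAGLContext!
    private var renderBuffer: GLuint = 0
    private var frameBuffer: GLuint = 0

    func setupGL() {
        let eaglLayer = layer as! CAEAGLLayer                      //1. the layer
        eaglLayer.isOpaque = true
        context = EAGLContext(api: .openGLES2)                     //2. the context
        EAGLContext.setCurrent(context)

        glGenRenderbuffers(1, &renderBuffer)                       //3. the render buffer
        glBindRenderbuffer(GLenum(GL_RENDERBUFFER), renderBuffer)
        context.renderbufferStorage(Int(GL_RENDERBUFFER), from: eaglLayer)

        glGenFramebuffers(1, &frameBuffer)                         //4. the frame buffer
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), frameBuffer)
        glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                                  GLenum(GL_RENDERBUFFER), renderBuffer)
        //5-8: compile and link the shaders, upload vertex data, create textures and draw;
        //that per-frame part is exactly what the code further below does.
    }
}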

These basic steps stay largely the same. This chapter is a combination of the camera-rendering and "multi-filter" rendering ideas: there is more content, but the actual development flow is unchanged. Now let's get to the point. After some analysis, there are three main tasks, shown below:

[image]

Part of the core code follows. Per frame, renderFacePoint() draws the camera frame (and optionally the 106 landmarks), then hands off to renderThinFace(), which finally calls displayRenderToScreen(_:) to put the result on screen:

    ///Draw the camera frame and the facial landmarks
    func renderFacePoint() {
        //MARK: - 1. Draw the camera frame
        //Use the shader program
        glUseProgram(renderProgram)
        //Bind the frameBuffer
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), facePointFrameBuffer)
        //Set the clear color
        glClearColor(0.0, 0.0, 0.0, 1.0)
        //Clear the screen
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        //1. Set the viewport size
        let scale = self.contentScaleFactor
        glViewport(0, 0, GLsizei(self.frame.size.width * scale), GLsizei(self.frame.size.height * scale))
#warning("Note ⚠️: looking up variables inside the shader only works AFTER glLinkProgram")
        //----Vertex data-------
        //Pass the vertex data through renderProgram to the vertex shader's position attribute
        /*1. glGetAttribLocation looks up the location (entry point) of a vertex attribute.
          2. glEnableVertexAttribArray tells OpenGL ES to enable that attribute.
          3. The data itself is finally handed over via glVertexAttribPointer.
         */
        //Note: the second parameter must match the input variable position in shaderv.vsh
        let position = glGetAttribLocation(renderProgram, "position")
        //Enable the vertex attribute so its data can be read
        glEnableVertexAttribArray(GLuint(position))
        //How the data is read:
        //Parameter 1: index, the index of the vertex attribute
        //Parameter 2: size, the number of components per vertex attribute: 1, 2, 3 or 4 (default is 4)
        //Parameter 3: type, the data type of each component, commonly GL_FLOAT, GL_BYTE, GL_SHORT (default is GL_FLOAT)
        //Parameter 4: normalized, whether fixed-point data should be normalized (GL_FALSE here)
        //Parameter 5: stride, the byte offset between consecutive vertex attributes (default 0)
        //Parameter 6: a pointer to the first component of the first vertex attribute in the array (default 0)
//        glVertexAttribPointer(GLuint(position), 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(MemoryLayout<GLfloat>.size * 5), UnsafeRawPointer(bitPattern: MemoryLayout<GLfloat>.size * 0))
        glVertexAttribPointer(GLuint(position), 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, standardVertex)
        //----Texture-coordinate data-------
        //Note: the second parameter must match the input variable textCoordinate in shaderv.vsh
        let textCoord = glGetAttribLocation(renderProgram, "textCoordinate")
        //Enable the vertex attribute so its data can be read
        glEnableVertexAttribArray(GLuint(textCoord))
        //3. How the data is read: the same six glVertexAttribPointer parameters as described above
//        glVertexAttribPointer(GLuint(textCoord), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(MemoryLayout<GLfloat>.size * 5), UnsafeRawPointer(bitPattern: MemoryLayout<GLfloat>.size * 3))
        glVertexAttribPointer(GLuint(textCoord), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, standardVerticalInvertFragment)
        //Option 1: the texture was loaded via CVOpenGLESTexture, so keep the two lines below enabled
        glActiveTexture(GLenum(GL_TEXTURE0))
        glUniform1i(glGetUniformLocation(self.renderProgram, "colorMap"), 0)
        //Option 2: the texture was loaded via glTexImage2D, so uncomment the lines below instead
//        glActiveTexture(GLenum(GL_TEXTURE1))
//        glBindTexture(GLenum(GL_TEXTURE_2D), originalTexture)
//        glUniform1i(glGetUniformLocation(self.renderProgram, "colorMap"), 1) //could be skipped when only a single texture is used
        glDrawArrays(GLenum(GL_TRIANGLES), 0, 6)
        //MARK: - 2. Draw the facial landmarks
        if drawLandMark {
            //Note ⚠️: do NOT clear here, otherwise the camera frame drawn above would be wiped out
            //        glClearColor(0.0, 0.0, 0.0, 1.0)
            //Clear the screen
            //        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
            //1. Set the viewport size
            glViewport(0, 0, GLsizei(self.frame.size.width * scale), GLsizei(self.frame.size.height * scale))
            //Use the shader program
            glUseProgram(faceProgram)
            for faceInfo in FaceDetector.shareInstance().faceModels {
                var tempPoint: [GLfloat] = [GLfloat].init(repeating: 0, count: faceInfo.landmarks.count * 3)
                var indices: [GLubyte] = [GLubyte].init(repeating: 0, count: faceInfo.landmarks.count)
                for i in 0..<faceInfo.landmarks.count {
                    let point = faceInfo.landmarks[i].cgPointValue
                    tempPoint[i*3+0] = GLfloat(point.x * 2 - 1)
                    tempPoint[i*3+1] = GLfloat(point.y * 2 - 1)
                    tempPoint[i*3+2] = 0.0
                    indices[i] = GLubyte(i)
                }
                let position = glGetAttribLocation(faceProgram, "position")
                glEnableVertexAttribArray(GLuint(position))
                //The VBO-style variant below would require the vertex data to be uploaded to the GPU first
                //            glVertexAttribPointer(GLuint(position), 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(MemoryLayout<GLfloat>.size * 3), UnsafeRawPointer(bitPattern: MemoryLayout<GLfloat>.size * 0))
                glVertexAttribPointer(GLuint(position), 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, tempPoint)
                let lineWidth = faceInfo.bounds.size.width / CGFloat(self.frame.width * scale)
                let sizeScaleUniform = glGetUniformLocation(self.faceProgram, "sizeScale")
                glUniform1f(GLint(sizeScaleUniform), GLfloat(lineWidth * 20))
                //            var scaleMatrix = GLKMatrix4Identity//GLKMatrix4Scale(GLKMatrix4Identity, 1/Float(lineWidth), 1/Float(lineWidth), 0)
                //            let scaleMatrixUniform = shader.uniformIndex("scaleMatrix")!
                //            glUniformMatrix4fv(GLint(scaleMatrixUniform), 1, GLboolean(GL_FALSE), &scaleMatrix.m.0)
                glDrawElements(GLenum(GL_POINTS), GLsizei(indices.count), GLenum(GL_UNSIGNED_BYTE), indices)
            }
        }
        //MARK: - 3. Camera/landmark pass done, start the big-eye/face-slimming pass
        renderThinFace()
    }
//MARK: - Big-eye and face-slimming pass
    ///Apply the big-eye and face-slimming warp
    func renderThinFace() {
        //Use the shader program
        glUseProgram(thinFaceProgram)
        //Bind the frameBuffer
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), thinFaceFrameBuffer)
        let faceInfo = FaceDetector.shareInstance().oneFace
        if faceInfo.landmarks.count == 0 {
            glUniform1i(hasFaceUniform, 0)
            //No face detected: skip the warp and render the camera texture straight to the screen
            displayRenderToScreen(facePointTexture)
            return
        }
        glClearColor(0.0, 0.0, 0.0, 1.0)
        //Clear the screen
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        //1. Set the viewport size
        let scale = self.contentScaleFactor
        glViewport(0, 0, GLsizei(self.frame.size.width * scale), GLsizei(self.frame.size.height * scale))
        hasFaceUniform = glGetUniformLocation(self.thinFaceProgram, "hasFace")
        aspectRatioUniform = glGetUniformLocation(self.thinFaceProgram, "aspectRatio")
        facePointsUniform = glGetUniformLocation(self.thinFaceProgram, "facePoints")
        thinFaceDeltaUniform = glGetUniformLocation(self.thinFaceProgram, "thinFaceDelta")
        bigEyeDeltaUniform = glGetUniformLocation(self.thinFaceProgram, "bigEyeDelta")
        glUniform1i(hasFaceUniform, 1)
        let aspect: Float = Float(inputTextureW / inputTextureH)
        glUniform1f(aspectRatioUniform, aspect)
        glUniform1f(thinFaceDeltaUniform, thinFaceDelta)
        glUniform1f(bigEyeDeltaUniform, bigEyeDelta)
        let size = 106 * 2
        var tempPoint: [GLfloat] = [GLfloat].init(repeating: 0, count: size)
        var index = 0
        for i in 0..<faceInfo.landmarks.count {
            let point = faceInfo.landmarks[i].cgPointValue
            tempPoint[i*2+0] = GLfloat(point.x)
            tempPoint[i*2+1] = GLfloat(point.y)
            index += 2
            if (index == size) {
                break
            }
        }
        glUniform1fv(facePointsUniform, GLsizei(size), tempPoint)
        //Note: the second parameter must match the input variable position in shaderv.vsh
        let position = glGetAttribLocation(thinFaceProgram, "position")
        glEnableVertexAttribArray(GLuint(position))
        glVertexAttribPointer(GLuint(position), 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, standardVertex)
        //----Texture-coordinate data-------
        let textCoord = glGetAttribLocation(thinFaceProgram, "inputTextureCoordinate")
        //Enable the vertex attribute so its data can be read
        glEnableVertexAttribArray(GLuint(textCoord))
        glVertexAttribPointer(GLuint(textCoord), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, standardVerticalInvertFragment)
        glActiveTexture(GLenum(GL_TEXTURE1))
        glBindTexture(GLenum(GL_TEXTURE_2D), facePointTexture)
        glUniform1i(glGetUniformLocation(self.thinFaceProgram, "inputImageTexture"), 1) //needed here because the texture is bound to unit 1, not the default unit 0
        glDrawArrays(GLenum(GL_TRIANGLES), 0, 6)
        //MARK: - 3. Warp done, render the result to the screen
        displayRenderToScreen(thinFaceTexture)
    }
    //8. Render to the screen
    private func displayRenderToScreen(_ texture: GLuint) {
        //Note ⚠️: break the previous bindings so the texture and FBO binding state return to the defaults
        glBindTexture(GLenum(GL_TEXTURE_2D), 0) //bind the default 2D texture, restoring the default texture-binding state
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), 0)//bind the default FBO, restoring the default framebuffer-binding state
        //Set the clear color
        glClearColor(0.0, 0.0, 0.0, 1.0)
        //Clear the screen
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        //1. Set the viewport size
        let scale = self.contentScaleFactor
        glViewport(0, 0, GLsizei(self.frame.size.width * scale), GLsizei(self.frame.size.height * scale))
        //Use the shader program
        glUseProgram(displayProgram)
        //Bind the on-screen frameBuffer
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), frameBuffer)
#warning("Note ⚠️: looking up variables inside the shader only works AFTER glLinkProgram")
        //----Vertex data-------
        //Pass the vertex data through displayProgram to the vertex shader's position attribute
        /*1. glGetAttribLocation looks up the location (entry point) of a vertex attribute.
          2. glEnableVertexAttribArray tells OpenGL ES to enable that attribute.
          3. The data itself is finally handed over via glVertexAttribPointer.
         */
        //Note: the second parameter must match the input variable position in shaderv.vsh
        let position = glGetAttribLocation(displayProgram, "position")
        //Enable the vertex attribute so its data can be read
        glEnableVertexAttribArray(GLuint(position))
        //Same six glVertexAttribPointer parameters as described earlier
//        glVertexAttribPointer(GLuint(position), 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(MemoryLayout<GLfloat>.size * 5), UnsafeRawPointer(bitPattern: MemoryLayout<GLfloat>.size * 0))
        glVertexAttribPointer(GLuint(position), 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, standardVertex)
        //----Texture-coordinate data-------
        //Note: the second parameter must match the input variable textCoordinate in shaderv.vsh
        let textCoord = glGetAttribLocation(displayProgram, "textCoordinate")
        //Enable the vertex attribute so its data can be read
        glEnableVertexAttribArray(GLuint(textCoord))
//        glVertexAttribPointer(GLuint(textCoord), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(MemoryLayout<GLfloat>.size * 5), UnsafeRawPointer(bitPattern: MemoryLayout<GLfloat>.size * 3))
        glVertexAttribPointer(GLuint(textCoord), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, standardVerticalInvertFragment)
        glActiveTexture(GLenum(GL_TEXTURE0))
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glUniform1i(glGetUniformLocation(self.displayProgram, "inputImageTexture"), 0) //could be skipped for a single texture on unit 0
        glDrawArrays(GLenum(GL_TRIANGLES), 0, 6)
        if (EAGLContext.current() == myContext) {
            myContext.presentRenderbuffer(Int(GL_RENDERBUFFER))
        }
    }

One thing worth noting here: do not clear the screen while drawing the landmark points, otherwise the content captured by the camera will no longer be visible.

The big-eye fragment shader algorithm:

//Enlarge inside the circle
vec2 enlargeEye(vec2 textureCoord, vec2 originPosition, float radius, float delta) {
    float weight = distance(vec2(textureCoord.x, textureCoord.y / aspectRatio), vec2(originPosition.x, originPosition.y / aspectRatio)) / radius;
    weight = 1.0 - (1.0 - weight * weight) * delta;
    weight = clamp(weight,0.0,1.0);
    textureCoord = originPosition + (textureCoord - originPosition) * weight;
    return textureCoord;
}
vec2 bigEye(vec2 currentCoordinate) {
    vec2 faceIndexs[2];
    faceIndexs[0] = vec2(74., 72.);//as in the figure below: landmark 74 is the centre, and the distance from 74 to 72 gives the radius R
    faceIndexs[1] = vec2(77., 75.);
    for(int i = 0; i < 2; i++)
    {
        int originIndex = int(faceIndexs[i].x);
        int targetIndex = int(faceIndexs[i].y);
        vec2 originPoint = vec2(facePoints[originIndex * 2], facePoints[originIndex * 2 + 1]);
        vec2 targetPoint = vec2(facePoints[targetIndex * 2], facePoints[targetIndex * 2 + 1]);
        float radius = distance(vec2(targetPoint.x, targetPoint.y / aspectRatio), vec2(originPoint.x, originPoint.y / aspectRatio));
        radius = radius * 5.;
        currentCoordinate = enlargeEye(currentCoordinate, originPoint, radius, bigEyeDelta);
    }
    return currentCoordinate;
}

Here textureCoord is the coordinate being warped, originPosition is the centre of the circle, radius is the circle's radius, and delta controls the strength of the deformation. As in the face-slimming algorithm, a circle is determined from originPosition and targetPosition (the target point only sets the radius here); coordinates inside the circle take part in the computation, while those outside stay unchanged. Coordinates inside the circle move along the line toward the centre originPosition, and the final coordinate is decided entirely by weight: the larger weight is, the smaller the change. When weight is 1, that is, when the coordinate sits on the circle's edge or outside it, the coordinate does not change. When weight is less than 1, the resulting coordinate falls between the original coordinate and the centre point, which means the returned sampling point is closer to the centre than the original one. That is what produces the magnification effect centred on that point.
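
To make the behaviour of weight concrete, here is a tiny numeric check in Swift (the values of delta, radius and the sample distances are made up for illustration):

//weight = 1 - (1 - (d/r)^2) * delta; the sampled coordinate is center + (coord - center) * weight
let delta: Float = 0.3     //bigEyeDelta, illustrative value
let radius: Float = 0.1
let distances: [Float] = [0.0, 0.05, 0.1]

for d in distances {
    var weight = d / radius
    weight = 1.0 - (1.0 - weight * weight) * delta
    weight = min(max(weight, 0.0), 1.0)
    print("distance \(d) -> weight \(weight)")
}
//distance 0.0  -> weight 0.7    (sampled 30% closer to the eye centre: strongest magnification)
//distance 0.05 -> weight 0.775  (the effect falls off smoothly)
//distance 0.1  -> weight 1.0    (on the circle's edge: no change)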

As in the figure below, landmark 74 is the circle's centre and the distance from 74 to 72 is used as the radius R (the shader then scales that radius by 5).

[image]

The face-slimming fragment shader algorithm:

vec2 curveWarp(vec2 textureCoord, vec2 originPosition, vec2 targetPosition, float delta) {
    vec2 offset = vec2(0.0);
    vec2 result = vec2(0.0);
    vec2 direction = (targetPosition - originPosition) ;
    float radius = distance(vec2(targetPosition.x, targetPosition.y / aspectRatio), vec2(originPosition.x, originPosition.y / aspectRatio));
    float ratio = distance(vec2(textureCoord.x, textureCoord.y / aspectRatio), vec2(originPosition.x, originPosition.y / aspectRatio)) / radius;
    ratio = 1.0 - ratio;
    ratio = clamp(ratio, 0.0, 1.0);
    offset = direction * ratio * delta;
    result = textureCoord - offset;
    return result;
}
//Specify 9 pairs of (circle centre, target point), as shown in the figure below
vec2 thinFace(vec2 currentCoordinate) {
    vec2 faceIndexs[9];
    faceIndexs[0] = vec2(3., 44.);
    faceIndexs[1] = vec2(29., 44.);
    faceIndexs[2] = vec2(7., 45.);
    faceIndexs[3] = vec2(25., 45.);
    faceIndexs[4] = vec2(10., 46.);
    faceIndexs[5] = vec2(22., 46.);
    faceIndexs[6] = vec2(14., 49.);
    faceIndexs[7] = vec2(18., 49.);
    faceIndexs[8] = vec2(16., 49.);
    for(int i = 0; i < 9; i++)
    {
        int originIndex = int(faceIndexs[i].x);
        int targetIndex = int(faceIndexs[i].y);
        vec2 originPoint = vec2(facePoints[originIndex * 2], facePoints[originIndex * 2 + 1]);
        vec2 targetPoint = vec2(facePoints[targetIndex * 2], facePoints[targetIndex * 2 + 1]);
        currentCoordinate = curveWarp(currentCoordinate, originPoint, targetPoint, thinFaceDelta);
    }
    return currentCoordinate;
}

Here textureCoord is the coordinate being warped, originPosition is the centre of the circle, targetPosition is the target point, and delta controls the strength of the deformation.

The shader function can be read like this: first define a circle centred at originPosition whose radius is the distance between targetPosition and originPosition; then move every pixel inside the circle in the same direction by an offset, where the offset gets larger the closer the pixel is to the centre; finally, return the transformed coordinate.

If the function is simplified into the expression: warped coordinate = original coordinate - (target coordinate - centre coordinate) * strength, then its job is simply to subtract an offset from the original coordinate, and (targetPosition - originPosition) determines both the direction of the movement and its maximum magnitude.
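
A quick numeric check of that expression in Swift (the coordinates below are made up; the real shader additionally divides the y components by aspectRatio before measuring distances):

import simd

let origin = SIMD2<Float>(0.30, 0.50)   //a cheek/jaw landmark, illustrative value
let target = SIMD2<Float>(0.45, 0.48)   //the point it is pulled toward, illustrative value
let delta: Float = 0.2                  //thinFaceDelta

func curveWarp(_ coord: SIMD2<Float>) -> SIMD2<Float> {
    let radius = simd_length(target - origin)
    var ratio = 1.0 - simd_length(coord - origin) / radius
    ratio = min(max(ratio, 0.0), 1.0)
    return coord - (target - origin) * ratio * delta
}

print(curveWarp(origin))                    //at the centre: the full offset (target - origin) * delta is subtracted
print(curveWarp(SIMD2<Float>(0.40, 0.50)))  //closer to the circle's edge: a smaller offset
print(curveWarp(SIMD2<Float>(0.60, 0.50)))  //outside the circle: ratio clamps to 0, the coordinate is unchanged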

  • Specify 9 pairs of circle centres and target points, as shown below

[image]

My original plan was to reproduce the effect in the GIF at the top, but I ran into some problems while implementing it. The first idea looked like this:

[image]

Then I remembered that when implementing multiple filters, the output of one fragment shader becomes the input of the next, as shown below:

[image]
For the full details, please check the source code.
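
For reference, here is a minimal sketch of that "render to a texture, then feed it to the next shader" setup. The function and size names are illustrative; the project's actual FBO creation lives in the source code.

import OpenGLES

//Create an offscreen framebuffer whose color attachment is a texture.
//Whatever pass 1 draws into this FBO can then be bound as the input sampler of
//pass 2, much like facePointTexture feeding renderThinFace() above.
func makeOffscreenTarget(width: GLsizei, height: GLsizei) -> (fbo: GLuint, texture: GLuint) {
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)

    var fbo: GLuint = 0
    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), texture, 0)
    return (fbo, texture)
}

With two such targets, each filter pass binds the previous pass's texture on a texture unit and draws a full-screen quad into its own FBO; displayRenderToScreen(_:) then draws the last texture into the on-screen framebuffer and presents the render buffer.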

Demo for this article: 码云 (Gitee), GitHub