
Preface

Recently the company came up with an interesting new requirement: build a feature that automatically detects faces in the user's front-facing camera feed and can take a snapshot of them.

Without further ado, let's get straight into it…

  • Key technologies:
  • AVCaptureSession: accesses and controls the device's camera and captures the live video stream.
  • Vision: provides powerful face detection and analysis, letting us detect and recognize faces quickly and accurately.

Demo

Getting Started

First, import the two frameworks into the project:

import Vision
import AVFoundation

Next, we need to make sure the app has permission to access the device's camera. Check the authorization status first; if we don't have permission, prompt the user to grant it.

let videoStatus = AVCaptureDevice.authorizationStatus(for: .video)
switch videoStatus {
case .authorized:
    print("Authorized -- start our capture flow")
case .notDetermined:
    // Not asked yet: request access, then continue if the user grants it
    AVCaptureDevice.requestAccess(for: .video) { granted in
        print(granted ? "Permission granted" : "Permission denied")
    }
case .denied, .restricted:
    print("No permission -- prompt the user to enable it in Settings")
@unknown default:
    break
}
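
For the denied/restricted case, a common way to "prompt the user" is to deep-link to the app's page in Settings. A minimal sketch, assuming a UIKit app (the alert you show before jumping is up to you):

// Hypothetical helper: open this app's page in Settings so the user can enable camera access
func openCameraSettings() {
    guard let url = URL(string: UIApplication.openSettingsURLString),
          UIApplication.shared.canOpenURL(url) else {
        return
    }
    UIApplication.shared.open(url, options: [:], completionHandler: nil)
}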

Then we need to configure the camera: choose the front or back camera, set the video resolution, configure video stabilization, handle the output image orientation, and set up the video data output.

Configuration

Choosing the front or back camera:

The AVCaptureDevice class lets us enumerate all cameras on the device and check whether each one is the front or the back camera.

// Get all video devices
let videoDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .unspecified).devices
// Pick out the front and back cameras
var frontCamera: AVCaptureDevice?
var backCamera: AVCaptureDevice?
for device in videoDevices {
    if device.position == .front {
        frontCamera = device
    } else if device.position == .back {
        backCamera = device
    }
}
// Choose the front or back camera as needed
let cameraDevice = frontCamera ?? backCamera
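
The chosen device still has to be wrapped in an AVCaptureDeviceInput and attached to the session before any frames arrive; the article doesn't show that step explicitly, so here is a minimal sketch (it assumes the captureSession created in the next step):

// Attach the chosen camera to the session as an input
if let cameraDevice = cameraDevice,
   let cameraInput = try? AVCaptureDeviceInput(device: cameraDevice),
   captureSession.canAddInput(cameraInput) {
    captureSession.addInput(cameraInput)
}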

Setting the video resolution:

You can pick a suitable video resolution by setting the sessionPreset property of AVCaptureSession. Common options include .high, .medium, and .low.

let captureSession = AVCaptureSession()
captureSession.sessionPreset = .high
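
Not every device supports every preset, so it is worth checking before assigning; a small defensive sketch:

// Fall back to a lower preset if .high is not supported on this device
if captureSession.canSetSessionPreset(.high) {
    captureSession.sessionPreset = .high
} else if captureSession.canSetSessionPreset(.medium) {
    captureSession.sessionPreset = .medium
}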

Output image orientation:

You can specify the orientation of the output image by setting AVCaptureVideoOrientation. Usually we adjust it based on the device orientation and the interface orientation. (Note that videoOutput.connection(with: .video) only returns a connection after the output has been added to the session, which is shown in the next step.)

if let videoConnection = videoOutput.connection(with: .video) {
    if videoConnection.isVideoOrientationSupported {
        let currentDeviceOrientation = UIDevice.current.orientation
        var videoOrientation: AVCaptureVideoOrientation
        switch currentDeviceOrientation {
        case .portrait:
            videoOrientation = .portrait
        // Device landscape orientations are mirrored relative to video orientations,
        // so device landscapeRight maps to video .landscapeLeft and vice versa
        case .landscapeRight:
            videoOrientation = .landscapeLeft
        case .landscapeLeft:
            videoOrientation = .landscapeRight
        case .portraitUpsideDown:
            videoOrientation = .portraitUpsideDown
        default:
            videoOrientation = .portrait
        }
        videoConnection.videoOrientation = videoOrientation
    }
}
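
The configuration overview above also mentions video stabilization; it is set on the same connection. A minimal sketch (stabilization is only applied where the hardware supports it):

// Enable automatic video stabilization on the connection if available
if let videoConnection = videoOutput.connection(with: .video),
   videoConnection.isVideoStabilizationSupported {
    videoConnection.preferredVideoStabilizationMode = .auto
}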

Video data output:

AVCaptureVideoDataOutput gives us the real-time video frames captured by the camera. First create an AVCaptureVideoDataOutput object and add it to the AVCaptureSession, then set a delegate to receive the video data callbacks.

let videoOutput = AVCaptureVideoDataOutput()
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}
// Deliver sample buffers to the delegate on a dedicated serial queue
let videoOutputQueue = DispatchQueue(label: "VideoOutputQueue")
videoOutput.setSampleBufferDelegate(self, queue: videoOutputQueue)
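
With the input, output, and connection configured, the remaining wiring is to display a preview and start the session. The article draws its face rectangles directly on self.view, so a preview layer underneath them is assumed here; this is a sketch of one possible setup, not necessarily the author's exact one:

// Show the camera feed behind the face rectangles (assumed setup)
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.frame = view.bounds
previewLayer.videoGravity = .resizeAspectFill
view.layer.insertSublayer(previewLayer, at: 0)

// startRunning() blocks, so call it off the main thread
DispatchQueue.global(qos: .userInitiated).async {
    captureSession.startRunning()
}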

Video Processing and Face Detection

Next we process the video: detect faces and draw a box around each face region. We implement this in the AVCaptureVideoDataOutputSampleBufferDelegate callback.

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let bufferRef = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    let detectFaceRequest = VNDetectFaceRectanglesRequest()
    let detectFaceRequestHandler = VNImageRequestHandler(cvPixelBuffer: bufferRef, options: [:])
    do {
        try detectFaceRequestHandler.perform([detectFaceRequest])
        guard let results = detectFaceRequest.results else {
            return
        }
        DispatchQueue.main.async { [weak self] in
            guard let self = self else {
                return
            }
            // Remove the face rectangles from the previous frame
            for layer in self.layers {
                layer.removeFromSuperlayer()
            }
            self.layers.removeAll()
            for observation in results {
                // boundingBox is normalized (0...1) with a bottom-left origin,
                // so scale it to the view size and flip the Y axis
                let oldRect = observation.boundingBox
                let w = oldRect.size.width * self.view.bounds.size.width
                let h = oldRect.size.height * self.view.bounds.size.height
                let x = oldRect.origin.x * self.view.bounds.size.width
                let y = self.view.bounds.size.height - (oldRect.origin.y * self.view.bounds.size.height) - h
                // Add a rectangle layer for this face
                let layer = CALayer()
                layer.borderWidth = 2
                layer.cornerRadius = 3
                layer.borderColor = UIColor.orange.cgColor
                layer.frame = CGRect(x: x, y: y, width: w, height: h)
                self.layers.append(layer)
            }
            // Add the rectangle layers to the view's layer
            for layer in self.layers {
                self.view.layer.addSublayer(layer)
            }
        }
    } catch {
        print("Error: \(error)")
    }
}
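
On the accuracy question below: the manual math above assumes the video frame fills self.view exactly, which stops being true once the preview is aspect-filled or the front camera is mirrored. If a preview layer is used (as sketched earlier), letting it do the conversion is one thing worth trying; this is an untested sketch, and previewLayer is the assumed property from that earlier setup:

// Convert a Vision boundingBox (normalized, bottom-left origin) into
// preview-layer coordinates instead of scaling by the view size by hand
func faceRect(for observation: VNFaceObservation,
              in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    let box = observation.boundingBox
    // Flip Y to get a top-left-origin normalized rect, then let the layer map it
    let metadataRect = CGRect(x: box.origin.x,
                              y: 1 - box.origin.y - box.height,
                              width: box.width,
                              height: box.height)
    return previewLayer.layerRectConverted(fromMetadataOutputRect: metadataRect)
}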

Wrapping Up

Detecting a single face works fine, but with multiple faces the positions are not very accurate. If anyone knows why, please let me know.