1. Introduction

The media subsystem offers developers a wide range of media-related capabilities. This article takes a close look at one of them: video recording. Using the video recording test code shipped with the media subsystem as the entry point, I will walk through the entire recording flow.

2. Directory Structure

foundation/multimedia/camera_framework

├── frameworks
│   ├── js
│   │   └── camera_napi                            #NAPI implementation
│   │       └── src
│   │           ├── input                          #camera input
│   │           ├── output                         #camera output
│   │           └── session                        #session management
│   └── native                                     #native implementation
│       └── camera
│           ├── BUILD.gn
│           ├── src
│           │   ├── input                          #camera input
│           │   ├── output                         #camera output
│           │   └── session                        #session management
├── interfaces                                     #interface definitions
│   ├── inner_api                                  #internal native implementation
│   │   └── native
│   │       ├── camera
│   │       │   └── include
│   │       │       ├── input
│   │       │       ├── output
│   │       │       └── session
│   └── kits                                       #NAPI interfaces
│       └── js
│           └── camera_napi
│               ├── BUILD.gn
│               ├── include
│               │   ├── input
│               │   ├── output
│               │   └── session
│               └── @ohos.multimedia.camera.d.ts
└── services                                       #server side
    └── camera_service
        ├── binder
        │   ├── base
        │   ├── client                             #IPC client
        │   │   └── src
        │   └── server                             #IPC server
        │       └── src
        └── src

3. Overall Recording Flow

[Figure: overall flow of video recording]

4. Using the Native Interfaces

In the OpenAtom OpenHarmony (hereafter "OpenHarmony") system, the multimedia subsystem exposes its capabilities to upper-layer JS through N-API interfaces; N-API is effectively the bridge between JS and native code. The OpenHarmony source tree also ships an example of driving video recording directly from C++, located in the foundation/multimedia/camera_framework/interfaces/inner_api/native/test directory. This article mainly follows the video recording flow in camera_video.cpp.

Let's first walk through the main() function of camera_video.cpp to get an overview of the recording flow.

int main(int argc, char **argv)
{
    ......
    // Create the CameraManager instance
    sptr<CameraManager> camManagerObj = CameraManager::GetInstance();
    // Set the callback
    camManagerObj->SetCallback(std::make_shared<TestCameraMngerCallback>(testName));
    // Get the list of supported camera devices
    std::vector<sptr<CameraDevice>> cameraObjList = camManagerObj->GetSupportedCameras();
    // Create a capture session
    sptr<CaptureSession> captureSession = camManagerObj->CreateCaptureSession();
    // Begin configuring the capture session
    captureSession->BeginConfig();
    // Create the CameraInput
    sptr<CaptureInput> captureInput = camManagerObj->CreateCameraInput(cameraObjList[0]);
    sptr<CameraInput> cameraInput = (sptr<CameraInput> &)captureInput;
    // Open the CameraInput
    cameraInput->Open();
    // Set the error callback on the CameraInput
    cameraInput->SetErrorCallback(std::make_shared<TestDeviceCallback>(testName));
    // Add the CameraInput instance to the capture session
    ret = captureSession->AddInput(cameraInput);
    sptr<Surface> videoSurface = nullptr;
    std::shared_ptr<Recorder> recorder = nullptr;
    // Create the video Surface
    videoSurface = Surface::CreateSurfaceAsConsumer();
    sptr<SurfaceListener> videoListener = new SurfaceListener("Video", SurfaceType::VIDEO, g_videoFd, videoSurface);
    // Register the Surface event listener
    videoSurface->RegisterConsumerListener((sptr<IBufferConsumerListener> &)videoListener);
    // Video configuration
    VideoProfile videoprofile = VideoProfile(static_cast<CameraFormat>(videoFormat), videosize, videoframerates);
    // Create the VideoOutput instance
    sptr<CaptureOutput> videoOutput = camManagerObj->CreateVideoOutput(videoprofile, videoSurface);
    // Set the callback on the VideoOutput
    ((sptr<VideoOutput> &)videoOutput)->SetCallback(std::make_shared<TestVideoOutputCallback>(testName));
    // Add the videoOutput to the capture session
    ret = captureSession->AddOutput(videoOutput);
    // Commit the session configuration
    ret = captureSession->CommitConfig();
    // Start recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Start();
    sleep(videoPauseDuration);
    MEDIA_DEBUG_LOG("Resume video recording");
    // Resume recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Resume();
    MEDIA_DEBUG_LOG("Wait for 5 seconds before stop");
    sleep(videoCaptureDuration);
    MEDIA_DEBUG_LOG("Stop video recording");
    // Stop recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Stop();
    MEDIA_DEBUG_LOG("Closing the session");
    // Stop the capture session
    ret = captureSession->Stop();
    MEDIA_DEBUG_LOG("Releasing the session");
    // Release the capture session
    captureSession->Release();
    // Close the video file
    TestUtils::SaveVideoFile(nullptr, 0, VideoSaveMode::CLOSE, g_videoFd);
    cameraInput->Release();
    camManagerObj->SetCallback(nullptr);
    return 0;
}

The code above is the overall video recording flow. It is built mostly on capabilities of the Camera module and involves several important classes: CaptureSession, CameraInput, and VideoOutput. CaptureSession orchestrates the whole process, while CameraInput and VideoOutput act as the device's input and output ends.

5. Call Flow

[Figure: video recording call flow]

The rest of this article walks through the call flow above in detail, to give a deeper understanding of the overall architecture of video recording.

  1. Create the CameraManager instance

CameraManager::GetInstance() returns the CameraManager instance; most of the subsequent interfaces are called through it. GetInstance uses the singleton pattern, an approach that is very common in OpenHarmony code.

sptr<CameraManager> &CameraManager::GetInstance()
{
    if (CameraManager::cameraManager_ == nullptr) {
        MEDIA_INFO_LOG("Initializing camera manager for first time!");
        CameraManager::cameraManager_ = new(std::nothrow) CameraManager();
        if (CameraManager::cameraManager_ == nullptr) {
            MEDIA_ERR_LOG("CameraManager::GetInstance failed to new CameraManager");
        }
    }
    return CameraManager::cameraManager_;
}
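As a side note, the lazy initialization above is not thread-safe by itself: two threads racing through the null check could each allocate a manager. Modern C++ often uses a Meyers singleton instead, where the language guarantees one-time initialization of a function-local static. A minimal sketch of that alternative (DemoManager is an illustrative class, not part of OpenHarmony):

```cpp
#include <cassert>

// Meyers singleton: C++11 guarantees the local static is initialized
// exactly once, even if several threads call GetInstance() concurrently.
class DemoManager {
public:
    static DemoManager &GetInstance() {
        static DemoManager instance;  // thread-safe lazy initialization
        return instance;
    }
    int Counter() const { return counter_; }
    void Increment() { ++counter_; }

    // Forbid copies so no second instance can ever be made.
    DemoManager(const DemoManager &) = delete;
    DemoManager &operator=(const DemoManager &) = delete;

private:
    DemoManager() = default;
    int counter_ = 0;
};
```

Every call to GetInstance() returns a reference to the same object, so state set through one call site is visible at all others.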
  2. Get the list of supported camera devices

Calling CameraManager's GetSupportedCameras() interface returns the list of CameraDevice objects the device supports. Tracing the code shows that serviceProxy_->GetCameras ultimately calls into the corresponding interface on the camera service side.

std::vector<sptr<CameraDevice>> CameraManager::GetSupportedCameras()
{
    CAMERA_SYNC_TRACE;
    std::lock_guard<std::mutex> lock(mutex_);
    std::vector<std::string> cameraIds;
    std::vector<std::shared_ptr<Camera::CameraMetadata>> cameraAbilityList;
    int32_t retCode = -1;
    sptr<CameraDevice> cameraObj = nullptr;
    int32_t index = 0;
    if (cameraObjList.size() > 0) {
        cameraObjList.clear();
    }
    if (serviceProxy_ == nullptr) {
        MEDIA_ERR_LOG("CameraManager::GetCameras serviceProxy_ is null, returning empty list!");
        return cameraObjList;
    }
    std::vector<sptr<CameraDevice>> supportedCameras;
    retCode = serviceProxy_->GetCameras(cameraIds, cameraAbilityList);
    if (retCode == CAMERA_OK) {
        for (auto& it : cameraIds) {
            cameraObj = new(std::nothrow) CameraDevice(it, cameraAbilityList[index++]);
            if (cameraObj == nullptr) {
                MEDIA_ERR_LOG("CameraManager::GetCameras new CameraDevice failed for id=%{public}s", it.c_str());
                continue;
            }
            supportedCameras.emplace_back(cameraObj);
        }
    } else {
        MEDIA_ERR_LOG("CameraManager::GetCameras failed!, retCode: %{public}d", retCode);
    }
    ChooseDeFaultCameras(supportedCameras);
    return cameraObjList;
}
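Note how the loop pairs each camera ID with the ability metadata at the same index: the service fills two parallel vectors in a single call, and the client zips them into device objects. That convention can be shown in miniature (Device and BuildDevices are hypothetical stand-ins, not the real service types):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative stand-in for a camera device record.
struct Device {
    std::string id;
    int ability;
};

// Mimics the GetCameras pattern: two parallel vectors share an index,
// and the client builds one object per index pair.
std::vector<Device> BuildDevices(const std::vector<std::string> &ids,
                                 const std::vector<int> &abilities) {
    std::vector<Device> out;
    size_t index = 0;
    for (const auto &id : ids) {
        out.push_back({id, abilities[index++]});
    }
    return out;
}
```

The approach relies on the service keeping both vectors the same length and in the same order; the real code trusts this contract in exactly the same way.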
  3. Create a capture session

The next part is more important: calling CameraManager's CreateCaptureSession interface creates a capture session. CameraManager does this through serviceProxy_->CreateCaptureSession, which involves OpenHarmony's IPC mechanism: serviceProxy_ is the local proxy of the remote service, and calls through it reach the concrete server side, here HCameraService.

sptr<CaptureSession> CameraManager::CreateCaptureSession()
{
    CAMERA_SYNC_TRACE;
    sptr<ICaptureSession> captureSession = nullptr;
    sptr<CaptureSession> result = nullptr;
    int32_t retCode = CAMERA_OK;
    if (serviceProxy_ == nullptr) {
        MEDIA_ERR_LOG("CameraManager::CreateCaptureSession serviceProxy_ is null");
        return nullptr;
    }
    retCode = serviceProxy_->CreateCaptureSession(captureSession);
    if (retCode == CAMERA_OK && captureSession != nullptr) {
        result = new(std::nothrow) CaptureSession(captureSession);
        if (result == nullptr) {
            MEDIA_ERR_LOG("Failed to new CaptureSession");
        }
    } else {
        MEDIA_ERR_LOG("Failed to get capture session object from hcamera service!, %{public}d", retCode);
    }
    return result;
}

The call eventually lands in HCameraService::CreateCaptureSession. This method news an HCaptureSession object and assigns it to the session parameter, so the captureSession obtained earlier is exactly the HCaptureSession created here; CameraManager's CreateCaptureSession() then wraps that captureSession into a CaptureSession object and returns it for the application layer to use.

int32_t HCameraService::CreateCaptureSession(sptr<ICaptureSession> &session)
{
    CAMERA_SYNC_TRACE;
    sptr<HCaptureSession> captureSession;
    if (streamOperatorCallback_ == nullptr) {
        streamOperatorCallback_ = new(std::nothrow) StreamOperatorCallback();
        if (streamOperatorCallback_ == nullptr) {
            MEDIA_ERR_LOG("HCameraService::CreateCaptureSession streamOperatorCallback_ allocation failed");
            return CAMERA_ALLOC_ERROR;
        }
    }
    std::lock_guard<std::mutex> lock(mutex_);
    OHOS::Security::AccessToken::AccessTokenID callerToken = IPCSkeleton::GetCallingTokenID();
    captureSession = new(std::nothrow) HCaptureSession(cameraHostManager_, streamOperatorCallback_, callerToken);
    if (captureSession == nullptr) {
        MEDIA_ERR_LOG("HCameraService::CreateCaptureSession HCaptureSession allocation failed");
        return CAMERA_ALLOC_ERROR;
    }
    session = captureSession;
    return CAMERA_OK;
}
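The shape of this exchange — the service allocates the concrete object, hands it back through an out-parameter, and the client wraps the returned interface in a user-facing class — can be sketched without any IPC machinery. ISession, HSession, and ClientSession below are illustrative stand-ins for ICaptureSession, HCaptureSession, and CaptureSession, with std::shared_ptr standing in for sptr:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Service-side interface and concrete implementation.
struct ISession {
    virtual ~ISession() = default;
    virtual int Id() const = 0;
};
struct HSession : ISession {
    int Id() const override { return 42; }
};

// Mimics HCameraService::CreateCaptureSession: allocate the concrete
// object and return it through the out-parameter; 0 plays CAMERA_OK.
int CreateSession(std::shared_ptr<ISession> &session) {
    session = std::make_shared<HSession>();
    return session ? 0 : -1;
}

// Client-side wrapper, analogous to CaptureSession wrapping ICaptureSession.
class ClientSession {
public:
    explicit ClientSession(std::shared_ptr<ISession> remote) : remote_(std::move(remote)) {}
    int Id() const { return remote_->Id(); }
private:
    std::shared_ptr<ISession> remote_;
};
```

The out-parameter style lets the return value carry the error code while the object travels separately, which is the convention used throughout the camera service interfaces.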
  4. Begin configuring the capture session

CaptureSession's BeginConfig starts the configuration of the capture session. This work is ultimately delegated to the wrapped HCaptureSession.

int32_t HCaptureSession::BeginConfig()
{
    CAMERA_SYNC_TRACE;
    if (curState_ == CaptureSessionState::SESSION_CONFIG_INPROGRESS) {
        MEDIA_ERR_LOG("HCaptureSession::BeginConfig Already in config inprogress state!");
        return CAMERA_INVALID_STATE;
    }
    std::lock_guard<std::mutex> lock(sessionLock_);
    prevState_ = curState_;
    curState_ = CaptureSessionState::SESSION_CONFIG_INPROGRESS;
    tempCameraDevices_.clear();
    tempStreams_.clear();
    deletedStreamIds_.clear();
    return CAMERA_OK;
}
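The guard at the top of BeginConfig is a small state machine: configuration may not be re-entered while it is already in progress, and a later CommitConfig moves the session onward. A reduced sketch of that guard pattern (the states and return codes here are illustrative, not the full service logic):

```cpp
#include <cassert>

enum class SessionState { INIT, CONFIG_INPROGRESS, CONFIG_COMMITTED };

class Session {
public:
    // Mirrors the HCaptureSession::BeginConfig guard: reject re-entry,
    // remember the previous state, then enter the in-progress state.
    int BeginConfig() {
        if (cur_ == SessionState::CONFIG_INPROGRESS) return -1;  // invalid state
        prev_ = cur_;
        cur_ = SessionState::CONFIG_INPROGRESS;
        return 0;  // ok
    }
    // Committing is only legal while configuration is in progress.
    int CommitConfig() {
        if (cur_ != SessionState::CONFIG_INPROGRESS) return -1;
        cur_ = SessionState::CONFIG_COMMITTED;
        return 0;
    }
    SessionState State() const { return cur_; }
private:
    SessionState cur_ = SessionState::INIT;
    SessionState prev_ = SessionState::INIT;
};
```

Keeping the previous state around (prev_) lets the real service roll back if configuration is aborted.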
  5. Create the CameraInput

The application layer creates the CameraInput via camManagerObj->CreateCameraInput(cameraObjList[0]), where cameraObjList[0] is the first of the supported devices obtained earlier. The corresponding CameraInput object is created from that CameraDevice.

sptr<CameraInput> CameraManager::CreateCameraInput(sptr<CameraDevice> &camera)
{
    CAMERA_SYNC_TRACE;
    sptr<CameraInput> cameraInput = nullptr;
    sptr<ICameraDeviceService> deviceObj = nullptr;
    if (camera != nullptr) {
        deviceObj = CreateCameraDevice(camera->GetID());
        if (deviceObj != nullptr) {
            cameraInput = new(std::nothrow) CameraInput(deviceObj, camera);
            if (cameraInput == nullptr) {
                MEDIA_ERR_LOG("failed to new CameraInput Returning null in CreateCameraInput");
                return cameraInput;
            }
        } else {
            MEDIA_ERR_LOG("Returning null in CreateCameraInput");
        }
    } else {
        MEDIA_ERR_LOG("CameraManager::CreateCameraInput: Camera object is null");
    }
    return cameraInput;
}
  6. Open the CameraInput

CameraInput's Open method starts up the input device.

void CameraInput::Open()
{
    int32_t retCode = deviceObj_->Open();
    if (retCode != CAMERA_OK) {
        MEDIA_ERR_LOG("Failed to open Camera Input, retCode: %{public}d", retCode);
    }
}
  7. Add the CameraInput instance to the capture session

captureSession's AddInput method adds the newly created CameraInput object as the input of the capture session, so the session knows which device to capture from.

int32_t CaptureSession::AddInput(sptr<CaptureInput> &input)
{
    CAMERA_SYNC_TRACE;
    if (input == nullptr) {
        MEDIA_ERR_LOG("CaptureSession::AddInput input is null");
        return CAMERA_INVALID_ARG;
    }
    input->SetSession(this);
    inputDevice_ = input;
    return captureSession_->AddInput(((sptr<CameraInput> &)input)->GetCameraDevice());
}

The call ends up in HCaptureSession's AddInput method. Its core line is tempCameraDevices_.emplace_back(localCameraDevice), which inserts the CameraDevice being added into the tempCameraDevices_ container.

int32_t HCaptureSession::AddInput(sptr<ICameraDeviceService> cameraDevice)
{
    CAMERA_SYNC_TRACE;
    sptr<HCameraDevice> localCameraDevice = nullptr;
    if (cameraDevice == nullptr) {
        MEDIA_ERR_LOG("HCaptureSession::AddInput cameraDevice is null");
        return CAMERA_INVALID_ARG;
    }
    if (curState_ != CaptureSessionState::SESSION_CONFIG_INPROGRESS) {
        MEDIA_ERR_LOG("HCaptureSession::AddInput Need to call BeginConfig before adding input");
        return CAMERA_INVALID_STATE;
    }
    if (!tempCameraDevices_.empty() || (cameraDevice_ != nullptr && !cameraDevice_->IsReleaseCameraDevice())) {
        MEDIA_ERR_LOG("HCaptureSession::AddInput Only one input is supported");
        return CAMERA_INVALID_SESSION_CFG;
    }
    localCameraDevice = static_cast<HCameraDevice*>(cameraDevice.GetRefPtr());
    if (cameraDevice_ == localCameraDevice) {
        cameraDevice_->SetReleaseCameraDevice(false);
    } else {
        tempCameraDevices_.emplace_back(localCameraDevice);
        CAMERA_SYSEVENT_STATISTIC(CreateMsg("CaptureSession::AddInput"));
    }
    sptr<IStreamOperator> streamOperator;
    int32_t rc = localCameraDevice->GetStreamOperator(streamOperatorCallback_, streamOperator);
    if (rc != CAMERA_OK) {
        MEDIA_ERR_LOG("HCaptureSession::GetCameraDevice GetStreamOperator returned %{public}d", rc);
        localCameraDevice->Close();
        return rc;
    }
    return CAMERA_OK;
}
  8. Create the video Surface

The Surface is created through Surface::CreateSurfaceAsConsumer.

sptr<Surface> Surface::CreateSurfaceAsConsumer(std::string name, bool isShared)
{
    sptr<ConsumerSurface> surf = new ConsumerSurface(name, isShared);
    GSError ret = surf->Init();
    if (ret != GSERROR_OK) {
        BLOGE("Failure, Reason: consumer surf init failed");
        return nullptr;
    }
    return surf;
}
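The consumer surface delivers frames to whatever listener was registered on it (the RegisterConsumerListener call in main()). The producer/consumer-with-listener idea can be sketched with a toy buffer queue; BufferQueue below is a simplified stand-in, not the real graphics Surface API:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>

// A toy buffer queue: the producer pushes frames, and the consumer is
// notified through a registered listener, much like IBufferConsumerListener.
class BufferQueue {
public:
    void RegisterListener(std::function<void()> onBufferAvailable) {
        listener_ = std::move(onBufferAvailable);
    }
    void Produce(int frame) {
        frames_.push(frame);
        if (listener_) listener_();  // plays the role of OnBufferAvailable()
    }
    int Acquire() {  // the consumer takes the oldest frame
        int f = frames_.front();
        frames_.pop();
        return f;
    }
private:
    std::queue<int> frames_;
    std::function<void()> listener_;
};
```

In the test code, SurfaceListener plays the consumer role: its buffer-available callback acquires each frame and writes it out through g_videoFd.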
  9. Create the VideoOutput instance

CameraManager's CreateVideoOutput creates the VideoOutput instance.

sptr<VideoOutput> CameraManager::CreateVideoOutput(VideoProfile &profile, sptr<Surface> &surface)
{
    CAMERA_SYNC_TRACE;
    sptr<IStreamRepeat> streamRepeat = nullptr;
    sptr<VideoOutput> result = nullptr;
    int32_t retCode = CAMERA_OK;
    camera_format_t metaFormat;
    metaFormat = GetCameraMetadataFormat(profile.GetCameraFormat());
    retCode = serviceProxy_->CreateVideoOutput(surface->GetProducer(), metaFormat,
                                               profile.GetSize().width, profile.GetSize().height, streamRepeat);
    if (retCode == CAMERA_OK) {
        result = new(std::nothrow) VideoOutput(streamRepeat);
        if (result == nullptr) {
            MEDIA_ERR_LOG("Failed to new VideoOutput");
        } else {
            std::vector<int32_t> videoFrameRates = profile.GetFrameRates();
            if (videoFrameRates.size() >= 2) { // valid frame rate range length is 2
                result->SetFrameRateRange(videoFrameRates[0], videoFrameRates[1]);
            }
            POWERMGR_SYSEVENT_CAMERA_CONFIG(VIDEO,
                                            profile.GetSize().width,
                                            profile.GetSize().height);
        }
    } else {
        MEDIA_ERR_LOG("VideoOutpout: Failed to get stream repeat object from hcamera service! %{public}d", retCode);
    }
    return result;
}

Through IPC, this method ultimately calls HCameraService's CreateVideoOutput(surface->GetProducer(), format, streamRepeat).

HCameraService's CreateVideoOutput first creates an HStreamRepeat and passes it back through the parameter to the CameraManager, which wraps the HStreamRepeat object to create the VideoOutput object.

  10. Add the videoOutput to the capture session and commit the configuration

This step is similar to the process of adding the CameraInput to the capture session; refer to the flow described earlier.

  11. Start recording

VideoOutput's Start method starts the recording.

int32_t VideoOutput::Start()
{
    return static_cast<IStreamRepeat *>(GetStream().GetRefPtr())->Start();
}

This method calls into HStreamRepeat's Start method.

int32_t HStreamRepeat::Start()
{
    CAMERA_SYNC_TRACE;
    if (streamOperator_ == nullptr) {
        return CAMERA_INVALID_STATE;
    }
    if (curCaptureID_ != 0) {
        MEDIA_ERR_LOG("HStreamRepeat::Start, Already started with captureID: %{public}d", curCaptureID_);
        return CAMERA_INVALID_STATE;
    }
    int32_t ret = AllocateCaptureId(curCaptureID_);
    if (ret != CAMERA_OK) {
        MEDIA_ERR_LOG("HStreamRepeat::Start Failed to allocate a captureId");
        return ret;
    }
    std::vector<uint8_t> ability;
    OHOS::Camera::MetadataUtils::ConvertMetadataToVec(cameraAbility_, ability);
    CaptureInfo captureInfo;
    captureInfo.streamIds_ = {streamId_};
    captureInfo.captureSetting_ = ability;
    captureInfo.enableShutterCallback_ = false;
    MEDIA_INFO_LOG("HStreamRepeat::Start Starting with capture ID: %{public}d", curCaptureID_);
    CamRetCode rc = (CamRetCode)(streamOperator_->Capture(curCaptureID_, captureInfo, true));
    if (rc != HDI::Camera::V1_0::NO_ERROR) {
        ReleaseCaptureId(curCaptureID_);
        curCaptureID_ = 0;
        MEDIA_ERR_LOG("HStreamRepeat::Start Failed with error Code:%{public}d", rc);
        ret = HdiToServiceError(rc);
    }
    return ret;
}

The core line is streamOperator_->Capture; its last parameter, true, indicates continuous (streaming) capture.
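The effect of that boolean can be illustrated with a mock stream operator: a one-shot capture (false) stops after a single frame, while a streaming capture (true) keeps accepting frames until it is cancelled. MockStreamOperator is an illustrative mock, not the real HDI interface:

```cpp
#include <cassert>

// Mock of the isStreaming flag in StreamOperator::Capture.
class MockStreamOperator {
public:
    // isStreaming == false: the capture ends after one frame.
    // isStreaming == true : frames keep arriving until CancelCapture().
    int Capture(int captureId, bool isStreaming) {
        captureId_ = captureId;
        streaming_ = isStreaming;
        active_ = true;
        return 0;  // ok
    }
    void OnFrame() {  // the driver delivers one frame
        if (!active_) return;
        ++frames_;
        if (!streaming_) active_ = false;  // one-shot: stop after the first frame
    }
    void CancelCapture() { active_ = false; }
    int Frames() const { return frames_; }
private:
    int captureId_ = 0;
    int frames_ = 0;
    bool streaming_ = false;
    bool active_ = false;
};
```

With isStreaming set to true, every frame the driver delivers is accepted until CancelCapture, which is exactly the behavior video recording needs.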

  12. Finish recording and save the recorded file

6. Summary

This article introduced video recording in the OpenHarmony 3.2 Beta multimedia subsystem: first the overall recording flow, then a detailed analysis of the main steps along the way. Video recording breaks down into the following steps:

(1) Get the CameraManager instance.

(2) Create the capture session (CaptureSession).

(3) Create the CameraInput instance and add the input device to the CaptureSession.

(4) Create the Surface needed for video recording.

(5) Create the VideoOutput instance and add the output to the CaptureSession.

(6) Commit the capture session configuration.

(7) Call VideoOutput's Start method to record the video.

(8) Finish recording and save the recorded file.

On OpenHarmony 3.2 Beta multimedia development, I have previously shared:

"OpenHarmony 3.2 Beta Source Code Analysis: MediaLibrary"

"OpenHarmony 3.2 Beta Multimedia Series: The Audio/Video Playback Framework"

"OpenHarmony 3.2 Beta Multimedia Series: Audio/Video Playback via GStreamer"

Interested developers are welcome to read them.
