OpenHarmony 4.1 Release Camera HDF Introduction
1. Overview
The OpenHarmony camera driver framework implements the camera HDI (Hardware Device Interface) for the layer above and the camera pipeline model for the layer below, managing every camera hardware device.
Internally the driver framework is divided into three layers: the HDI implementation layer, the framework layer, and the device adaptation layer. Their roles are as follows:
- HDI implementation layer: implements the OHOS (OpenHarmony Operating System) standard southbound camera interfaces.
- Framework layer: connects to the HDI implementation layer for control and stream forwarding, builds the data path, and manages the camera hardware devices.
- Device adaptation layer: hides differences between the underlying chips and the OS (Operating System), enabling multi-platform adaptation.
The Camera module mainly covers service and device initialization, data path setup, and stream configuration, creation, dispatch, and capture.
The Camera driver model is built on the HDF driver framework.
The Camera driver framework currently provides two adaptation paths: MPP and V4L2.
- The MPP path targets HiSilicon chips and uses HiSilicon's own multimedia framework.
- The V4L2 path targets chip platforms whose drivers are based on the V4L2 interface, such as Rockchip.
2. Camera Host Service Loading
2.1. Camera HDF Driver Service Configuration
File path: vendor/{company}/{product}/hdf_config/uhdf/device_info.hcs
Purpose: configures the camera_host startup attributes and related settings; the HDF framework loads this configuration at startup.
.......
hdi_server :: host {
hostName = "camera_host";
priority = 50;
gid = ["camera_host", "uhdf_driver", "vendor_mpp_driver"];
camera_device :: device {
device0 :: deviceNode {
policy = 2;
priority = 100;
moduleName = "libcamera_host_service_1.0.z.so";
serviceName = "camera_service";
}
}
}
.......
2.2. Registering the Camera Host
In drivers/peripheral/camera/interfaces/hdi_ipc/camera_host_driver.cpp, the configured g_camerahostDriverEntry is registered with the HDF framework via HDF_INIT when HDF loads the driver.
......
static struct HdfDriverEntry g_camerahostDriverEntry = {
.moduleVersion = 1,
.moduleName = "camera_service",
.Bind = HdfCameraHostDriverBind,
.Init = HdfCameraHostDriverInit,
.Release = HdfCameraHostDriverRelease,
};
HDF_INIT(g_camerahostDriverEntry);
......
2.3. camera_host_config.hcs Configuration
File path: vendor/{company}/{product}/hdf_config/uhdf/camera/hdi_impl/camera_host_config.hcs
Purpose: lists all capabilities supported by the camera, such as resolutions, flash, auto focus, manual focus, and so on. At build time the hcs file is compiled into an hcb file so that the service can load the configuration faster.
Key points:
- vdiLibList: configures the VDI implementations
- metadata node: provides the keywords for each property exposed to the OH layer
- ability node: the specifications supported by a camera; multiple cameras are supported, and each camera gets its own ability node
.......
# VDI implementations
vdiLibList = [
"libcamera_host_vdi_impl_1.0.z.so"
];
.......
# Keywords for each property
metadata {
aeAvailableAntiBandingModes = [
"OHOS_CONTROL_AE_ANTIBANDING_MODE_OFF",
"OHOS_CONTROL_AE_ANTIBANDING_MODE_50HZ",
"OHOS_CONTROL_AE_ANTIBANDING_MODE_60HZ",
"OHOS_CONTROL_AE_ANTIBANDING_MODE_AUTO"
];
aeAvailableModes = ["OHOS_CONTROL_AE_MODE_ON"];
availableFpsRange = [15, 30];
aeCompensationRange = [0, 0];
aeCompensationSteps = [0, 1];
availableAwbModes = [
"OHOS_CONTROL_AWB_MODE_OFF"
];
sceneModesOverrides = [
"OHOS_CONTROL_AE_MODE_ON",
"OHOS_CONTROL_AWB_MODE_AUTO"
];
aeLockAvailable = "OHOS_CONTROL_AE_LOCK_AVAILABLE_FALSE";
awbLockAvailable = "OHOS_CONTROL_AWB_LOCK_AVAILABLE_FALSE";
sensitivityRange = [32, 2400];
exposureTimeRange = [100000, 200000000];
faceDetectMode = "OHOS_STATISTICS_FACE_DETECT_MODE_OFF";
.......
}
# Each camera gets one ability node listing the concrete specifications it supports.
ability_01 :: ability {
logicCameraId = "lcam001";
physicsCameraIds = [
"CAMERA_FIRST",
"CAMERA_SECOND"
];
metadata {
aeAvailableAntiBandingModes = [
"OHOS_CAMERA_AE_ANTIBANDING_MODE_OFF"
];
aeAvailableModes = ["OHOS_CAMERA_AE_MODE_OFF"];
availableFpsRange = [5, 10];
cameraPosition = "OHOS_CAMERA_POSITION_FRONT";
cameraType = "OHOS_CAMERA_TYPE_WIDE_ANGLE";
cameraConnectionType ="OHOS_CAMERA_CONNECTION_TYPE_BUILTIN";
cameraMemoryType ="OHOS_CAMERA_MEMORY_USERPTR";
faceDetectMaxNum = "10";
aeCompensationRange = [0, 0];
aeCompensationSteps = [0, 0];
availableAwbModes = [
"OHOS_CAMERA_AWB_MODE_OFF"
];
sensitivityRange = [32, 2400];
faceDetectMode = "OHOS_CAMERA_FACE_DETECT_MODE_OFF";
availableCharacteristicsKeys = [
"OHOS_CONTROL_AE_AVAILABLE_ANTIBANDING_MODES",
"OHOS_CONTROL_AE_AVAILABLE_MODES",
"OHOS_ABILITY_FPS_RANGES",
"OHOS_CONTROL_AE_COMPENSATION_RANGE",
"OHOS_CONTROL_AE_COMPENSATION_STEP",
"OHOS_CONTROL_AWB_AVAILABLE_MODES",
"OHOS_JPEG_AVAILABLE_THUMBNAIL_SIZES",
"OHOS_JPEG_MAX_SIZE",
"OHOS_SENSOR_INFO_PIXEL_ARRAY_SIZE",
"OHOS_SENSOR_INFO_ACTIVE_ARRAY_SIZE",
"OHOS_SENSOR_INFO_SENSITIVITY_RANGE",
"OHOS_SENSOR_INFO_PHYSICAL_SIZE",
];
.......
}
.......
}
.......
2.4. Camera Host Service Startup and Call Flow
1. The HdfDriverEntry structure is configured in CameraHostDriver and initialized via the HDF_INIT macro.
2. After HDF starts, HdfCameraHostDriverBind creates the HDI Service.
3. The HDI Service loads all VDI implementations according to camera_host_config.hcs.
4. The HDI Service implements every interface declared in the interface definition.
5. CameraFramework obtains the proxy through the HDF framework's IPC and talks to the stub inside the HDI Service to complete a call.
The flow is shown in the figure below:
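To make step 5 concrete, here is a minimal client-side sketch of obtaining the camera_host proxy over HDF IPC. It assumes the generated camera HDI V1_0 proxy and its static Get helper; the include path and usage are assumptions for illustration, not code taken from CameraFramework.
#include <string>
#include <vector>
#include "v1_0/icamera_host.h"   // assumed path of the generated HDI interface header

using OHOS::HDI::Camera::V1_0::ICameraHost;

void QueryCameraIds()
{
    // Obtain the proxy of the service registered as "camera_service" in device_info.hcs.
    OHOS::sptr<ICameraHost> cameraHost = ICameraHost::Get("camera_service", false);
    if (cameraHost == nullptr) {
        return; // service not running or IPC failure
    }
    std::vector<std::string> cameraIds;
    cameraHost->GetCameraIds(cameraIds);   // the call is forwarded to the stub in the HDI Service
}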
2.5. Code Analysis of Camera Host Service Startup
2.5.1. When the HDF framework brings up the Camera Host, HdfCameraHostDriverBind calls CameraHostServiceGetInstance to start the CameraHostService implementation.
static int HdfCameraHostDriverBind(struct HdfDeviceObject *deviceObject)
{
HDF_LOGI("HdfCameraHostDriverBind enter");
......
OHOS::sptr<ICameraHost> serviceImpl {CameraHostServiceGetInstance()};
......
hdfCameraHostHost->stub = OHOS::HDI::ObjectCollector::GetInstance().GetOrNewObject(serviceImpl,
ICameraHost::GetDescriptor());
......
deviceObject->service = &hdfCameraHostHost->ioService;
return HDF_SUCCESS;
}
2.5.2. CameraHostServiceGetInstance calls GetInstance and GetVdiLibList in turn.
extern "C" ICameraHost *CameraHostServiceGetInstance(void)
{
// Get the CameraHostService instance
OHOS::sptr<CameraHostService> service = CameraHostService::GetInstance();
if (service == nullptr) {
CAMERA_LOGE("Camera host service is nullptr");
return nullptr;
}
return service.GetRefPtr();
}
OHOS::sptr<CameraHostService> CameraHostService::GetInstance()
{
.......
if (GetVdiLibList(vdiLibList) != OHOS::HDI::Camera::V1_0::NO_ERROR) {
CAMERA_LOGE("Can not get vdi lib name");
return nullptr;
}
for (auto vdiLib : vdiLibList) {
// Load every configured VDI in a loop
struct HdfVdiObject *cameraHostVdiLoader = HdfLoadVdi(vdiLib.c_str());
.......
struct VdiWrapperCameraHost *vdiWrapper = reinterpret_cast<struct VdiWrapperCameraHost *>(
cameraHostVdiLoader->vdiBase);
.......
ICameraHostVdi *cameraHostVdi = reinterpret_cast<ICameraHostVdi *>(vdiWrapper->module);
cameraHostVdiList.push_back(cameraHostVdi);
cameraHostVdiLoaderList.push_back(cameraHostVdiLoader);
}
cameraHostService_ = new (std::nothrow) CameraHostService(cameraHostVdiList, cameraHostVdiLoaderList);
.......
return cameraHostService_;
}
2.5.3. GetVdiLibList reads the vdiLibList node configured in vendor/hihope/rk3568/hdf_config/uhdf/camera/hdi_impl/camera_host_config.hcs to obtain the concrete VDI implementations of the HDI Service.
int32_t CameraHostService::GetVdiLibList(std::vector<std::string> &vdiLibList)
{
std::vector<std::string>().swap(vdiLibList);
ReleaseHcsTree();
const struct DeviceResourceIface *pDevResIns = DeviceResourceGetIfaceInstance(HDF_CONFIG_SOURCE);
.......
SetHcsBlobPath(CONFIG_PATH_NAME);
const struct DeviceResourceNode *pRootNode = pDevResIns->GetRootNode();
.......
const char *vdiLib = nullptr;
int32_t elemNum = pDevResIns->GetElemNum(pRootNode, "vdiLibList");
// Read every vdiLibList entry from the hcs configuration
for (int i = 0; i < elemNum; i++) {
pDevResIns->GetStringArrayElem(pRootNode, "vdiLibList", i, &vdiLib, nullptr);
.......
vdiLibList.push_back(std::string(vdiLib));
}
.......
return OHOS::HDI::Camera::V1_0::NO_ERROR;
}
3. Camera Pipeline Configuration and Creation Flow
3.1. scene and stream
The scene and stream configuration file path is vendor/{vendor}/{product}/hdf_config/uhdf/camera/pipeline_core/params.hcs.
root {
priview :: stream_info {
id = 0;
name = "preview";
}
video :: stream_info {
id = 1;
name = "video";
}
snapshot :: stream_info {
id = 2;
name = "snapshot";
}
analyze :: stream_info {
id = 4;
name = "analyze";
}
normal :: scene_info {
id = 0;
name = "normal";
}
dual :: scene_info {
id = 1;
name = "dual";
}
uvc :: scene_info {
id = 2;
name = "uvc";
}
}
A scene is a use case; there are currently 3 scenes: normal, dual, and uvc. A stream is a data stream; there are currently 4 stream types: preview, video, snapshot, and analyze.
The camera service works with a combination of 1 scene plus one or more stream types.
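The naming convention for such a combination can be illustrated with a small sketch (this is only an illustration; the framework's real logic is StreamPipelineStrategy::ConstructKeyStrIndex, analyzed in section 3.5):
#include <string>
#include <vector>

// Illustrative only: one scene name plus the requested stream names are joined
// into a pipeline spec key such as "normal_preview_snapshot".
std::string BuildSpecKey(const std::string &scene, const std::vector<std::string> &streams)
{
    std::string key = scene;          // e.g. "normal"
    for (const auto &s : streams) {   // e.g. {"preview", "snapshot"}
        key += "_" + s;
    }
    return key;
}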
3.2. Nodes
A node processes stream buffers after the low-level camera driver produces them, for example a CodecNode for encoding or a ScaleNode for scaling.
The 4.1 release currently provides a number of nodes: V4L2SourceNode, UvcNode, SourceNode, ForkNode, MergeNode, SensorNode, StabilityNode, TransformNode, SinkNode, and so on.
Products can also provide their own nodes, for example ones that rely on hardware or vendor algorithms.
RK3568, for instance, provides RKCodecNode, RKExifNode, RKFaceNode, RKScaleNode, and more.
Every node must be registered before it can be used.
// drivers/peripheral/camera/vdi_base/common/pipeline_core/nodes/include/inode.h
#define REGISTERNODE(cls, ...) \
namespace { \
static std::string g_##cls = NodeFactory::Instance().DoRegister<cls>(__VA_ARGS__, \
[](const std::string& name, const std::string& type, const std::string &cameraId) \
{return std::make_shared<cls>(name, type, cameraId);}); \
}
// drivers/peripheral/camera/vdi_base/common/pipeline_core/nodes/src/source_node/source_node.cpp
// Register SourceNode with NodeFactory under the alias "source"
REGISTERNODE(SourceNode, {"source"})
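A vendor node registers itself the same way, under the alias that config.hcs refers to. The line below is an assumed example; the actual class name and alias are defined by the vendor's node source, here chosen to match the "RKCodec#0" entries shown later in the generated spec.
// Assumed example: register a vendor codec node under the alias "RKCodec"
REGISTERNODE(RKCodecNode, {"RKCodec"})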
3.3. Pipeline Configuration
The pipeline configuration file path is vendor/{vendor}/{product}/hdf_config/uhdf/camera/pipeline_core/config.hcs.
In config.hcs, 1 scene plus one or more stream types are combined into one pipeline spec. A spec defines the nodes that are needed and the order in which they are connected.
The RK3568 4.1 release currently provides 17 pipeline specs. The most common ones are:
- normal_preview_snapshot: regular preview and photo capture
- normal_preview_video: regular preview and video recording
- uvc_preview_snapshot: USB camera preview and photo capture
- uvc_preview_video: USB camera preview and video recording
The other pipeline specs can be found in the config.hcs file.
At build time, params.hcs and config.hcs are converted into .h/.c files, copied to drivers/peripheral/camera/vdi_base/common/pipeline_core/pipeline_impl/src/strategy/config, and then compiled into the camera_pipeline_config shared library.
# device/board/{vendor}/{product}/camera/vdi_impl/v4l2/BUILD.gn
# Convert the hcs files into C files
hc_gen_c("generate_source") {
sources = [
"$product_config_path/hdf_config/uhdf/camera/pipeline_core/config.hcs",
"$product_config_path/hdf_config/uhdf/camera/pipeline_core/params.hcs",
]
}
# Copy the generated C files into drivers/peripheral/camera/vdi_base/common/pipeline_core/pipeline_impl/src/strategy/config
ohos_prebuilt_etc("config.c") {
deps = [ ":copy_source" ]
source =
"$camera_path/pipeline_core/pipeline_impl/src/strategy/config/config.c"
exec_script(
"/usr/bin/env",
[
"touch",
rebase_path(
"$camera_path/pipeline_core/pipeline_impl/src/strategy/config/config.c"),
])
}
ohos_prebuilt_etc("params.c") {
deps = [ ":copy_source" ]
source =
"$camera_path/pipeline_core/pipeline_impl/src/strategy/config/params.c"
exec_script(
"/usr/bin/env",
[
"touch",
rebase_path(
"$camera_path/pipeline_core/pipeline_impl/src/strategy/config/params.c"),
])
}
# Compile the C files into a shared library
ohos_shared_library("camera_pipeline_config") {
sources = [
"$camera_path/pipeline_core/pipeline_impl/src/strategy/config/config.c",
"$camera_path/pipeline_core/pipeline_impl/src/strategy/config/params.c",
]
include_dirs =
[ "$camera_path/pipeline_core/pipeline_impl/src/strategy/config" ]
install_images = [ chipset_base_dir ]
subsystem_name = "rockchip_products"
part_name = "rockchip_products"
}
3.4. Pipeline Creation Flow
1. PipelineCore creates and obtains the StreamPipelineCore.
2. StreamPipelineCore creates the StreamPipelineStrategy, StreamPipelineBuilder, and StreamPipelineDispatcher.
3. StreamPipelineStrategy matches a pipeline spec against the incoming parameters.
4. StreamPipelineCore passes the pipeline spec to StreamPipelineBuilder, which connects the nodes and generates the pipeline.
5. StreamPipelineCore hands the pipeline to StreamPipelineDispatcher, which later dispatches Prepare, Start, Config, Capture, CancelCapture, Stop, and other commands to the individual nodes.
The approximate flow is shown in the figure below:
3.5. Code Analysis of the Pipeline Creation Flow
1. StreamPipelineCore is created in PipelineCore::Init.
// drivers/peripheral/camera/vdi_base/common/pipeline_core/src/pipeline_core.cpp
RetCode PipelineCore::Init()
{
......
spc_ = IStreamPipelineCore::Create(context_);
return RC_OK;
}
// drivers/peripheral/camera/vdi_base/common/pipeline_core/pipeline_impl/src/stream_pipeline_core.cpp
std::shared_ptr<IStreamPipelineCore> IStreamPipelineCore::Create(const std::shared_ptr<NodeContext>& c)
{
return std::make_shared<StreamPipelineCore>(c);
}
2. StreamPipelineCore::Init creates the StreamPipelineStrategy, StreamPipelineBuilder, and StreamPipelineDispatcher.
// drivers/peripheral/camera/vdi_base/common/pipeline_core/pipeline_impl/src/stream_pipeline_core.cpp
RetCode StreamPipelineCore::Init(const std::string &cameraId)
{
strategy_ = StreamPipelineStrategy::Create(context_->streamMgr_);
builder_ = StreamPipelineBuilder::Create(context_->streamMgr_);
dispatcher_ = StreamPipelineDispatcher::Create();
cameraId_ = cameraId;
return RC_OK;
}
3. StreamPipelineCore::CreatePipeline calls StreamPipelineStrategy::GeneratePipelineSpec to generate the pipeline spec, calls StreamPipelineBuilder::Build to connect the nodes configured in the spec into a pipeline, and finally calls StreamPipelineDispatcher::Update so that the dispatcher can operate on each node of the pipeline.
// drivers/peripheral/camera/vdi_base/common/pipeline_core/pipeline_impl/src/stream_pipeline_core.cpp
RetCode StreamPipelineCore::CreatePipeline(const int32_t& mode)
{
......
std::shared_ptr<PipelineSpec> spec = strategy_->GeneratePipelineSpec(mode);
......
std::shared_ptr<Pipeline> pipeline = builder_->Build(spec, cameraId_);
......
return dispatcher_->Update(pipeline);
}
4. StreamPipelineStrategy::GeneratePipelineSpec matches a pipeline spec based on the mode parameter and then loads every node configured in that spec.
std::shared_ptr<PipelineSpec> StreamPipelineStrategy::GeneratePipelineSpec(const int32_t& mode)
{
PipelineSpec pipe {};
if (SelectPipelineSpec(mode, pipe) != RC_OK) {
return nullptr;
}
if (CombineSpecs(pipe) != RC_OK) {
return nullptr;
}
return pipelineSpec_;
}
5. The .c files generated from params.hcs and config.hcs contain the generated Scene, Stream, and PipelineSpec data.
6. StreamPipelineStrategy::SelectPipelineSpec calls StreamPipelineStrategy::ConstructKeyStrIndex to build the pipeline spec name.
std::string StreamPipelineStrategy::ConstructKeyStrIndex(const int32_t& mode)
{
std::string keyStr;
std::string sceneStr = CheckIdExsit(mode, G_SCENE_TABLE_PTR, G_SCENE_TABLE_SIZE);
if (sceneStr.empty()) {
CAMERA_LOGE("scene:%{public}d not supported!\n", mode);
return keyStr;
}
// sceneStr is the scene name, e.g. normal/dual/uvc
keyStr += sceneStr;
std::vector<int32_t> streamTypeSet;
hostStreamMgr_->GetStreamTypes(streamTypeSet);
for (const auto& it : streamTypeSet) {
std::string streamStr = CheckIdExsit(it, G_STREAM_TABLE_PTR, G_STREAM_TABLE_SIZE);
if (streamStr.empty()) {
CAMERA_LOGI("stream type:%{public}d not support!\n", it);
}
// streamStr is the stream name, e.g. preview/video/snapshot/analyze
keyStr += "_" + streamStr;
}
// The for loop can append multiple stream names
// After the concatenation above, keyStr is the spec name, e.g. normal_preview_snapshot
return keyStr;
}
7. StreamPipelineStrategy::SelectPipelineSpec then calls StreamPipelineStrategy::InitPipeSpecPtr, which uses the generated keyStr to look up the matching PipelineSpec in the config.c generated from config.hcs.
void StreamPipelineStrategy::InitPipeSpecPtr(G_PIPELINE_SPEC_DATA_TYPE &pipeSpecPtr, const std::string& keyStr)
{
for (int i = 0; i < G_PIPELINE_SPECS_SIZE; i++) {
if (G_PIPELINE_SPECS_TABLE[i].name == keyStr) {
pipeSpecPtr = &G_PIPELINE_SPECS_TABLE[i];
break;
}
}
}
// config.c generated from config.hcs
static const struct HdfConfigPipelineSpecsPipelineSpec g_hdfConfigPipelineSpec[] = {
......
[9] = {
.name = "normal_preview_snapshot",
.nodeSpec = g_hdfConfigNodeSpec10,
.nodeSpecSize = 10,
},
......
[14] = {
.name = "normal_preview_video",
.nodeSpec = g_hdfConfigNodeSpec15,
.nodeSpecSize = 9,
},
......
};
8. StreamPipelineStrategy::SelectPipelineSpec then reads the nodes out of the selected PipelineSpec. For example, the nodeSpec configured for the normal_preview_video spec is the g_hdfConfigNodeSpec15 array, which contains 9 nodes.
static const struct HdfConfigPipelineSpecsNodeSpec g_hdfConfigNodeSpec15[] = {
[0] = {
.name = "v4l2_source#0",
.status = "new",
.streamType = "",
.portSpec = g_hdfConfigPortSpec120,
.portSpecSize = 1,
},
[1] = {
.name = "fork#0",
.status = "new",
.streamType = "",
.portSpec = g_hdfConfigPortSpec121,
.portSpecSize = 3,
},
[2] = {
.name = "RKScale#0",
.status = "new",
.streamType = "",
.portSpec = g_hdfConfigPortSpec122,
.portSpecSize = 2,
},
[3] = {
.name = "RKScale#1",
.status = "new",
.streamType = "",
.portSpec = g_hdfConfigPortSpec123,
.portSpecSize = 2,
},
[4] = {
.name = "stability#0",
.status = "new",
.streamType = "",
.portSpec = g_hdfConfigPortSpec124,
.portSpecSize = 2,
},
[5] = {
.name = "RKCodec#0",
.status = "new",
.streamType = "",
.portSpec = g_hdfConfigPortSpec125,
.portSpecSize = 2,
},
[6] = {
.name = "RKCodec#1",
.status = "new",
.streamType = "",
.portSpec = g_hdfConfigPortSpec126,
.portSpecSize = 2,
},
[7] = {
.name = "sink#0",
.status = "new",
.streamType = "preview",
.portSpec = g_hdfConfigPortSpec127,
.portSpecSize = 1,
},
[8] = {
.name = "sink#1",
.status = "new",
.streamType = "video",
.portSpec = g_hdfConfigPortSpec128,
.portSpecSize = 1,
},
};
9. In StreamPipelineBuilder::Build, NodeFactory creates each node by its configured name; after the attributes are set, the nodes are connected to each other.
std::shared_ptr<Pipeline> StreamPipelineBuilder::Build(const std::shared_ptr<PipelineSpec>& pipelineSpec,
const std::string &cameraId)
{
CHECK_IF_PTR_NULL_RETURN_VALUE(pipelineSpec, nullptr);
CAMERA_LOGI("------------------------Node Instantiation Begin-------------\n");
RetCode re = RC_OK;
std::set<std::vector<int32_t>> sizeSet;
for (auto& it : pipelineSpec->nodeSpecSet_) {
if (it.status_ == "new") {
std::string nodeName;
size_t pos = it.name_.find_first_of('#');
nodeName = it.name_.substr(0, pos);
// Split the registered node alias out of the NodeSpec name at the '#' character, then create that node
std::shared_ptr<INode> newNode = NodeFactory::Instance().CreateShared(nodeName, it.name_,
it.type_, cameraId);
if (newNode == nullptr) {
CAMERA_LOGI("create node failed! \n");
return nullptr;
}
std::optional<int32_t> typeId = GetTypeId(it.type_, G_STREAM_TABLE_PTR, G_STREAM_TABLE_SIZE);
if (typeId) {
newNode->SetCallBack(hostStreamMgr_->GetBufferCb(it.streamId_));
}
pipeline_->nodes_.push_back(newNode);
it.status_ = "remain";
for (const auto& portSpec : it.portSpecSet_) {
std::vector<int32_t> vectorSize;
vectorSize.push_back(portSpec.format_.w_);
vectorSize.push_back(portSpec.format_.h_);
sizeSet.insert(vectorSize);
auto peerNode = std::find_if(pipeline_->nodes_.begin(), pipeline_->nodes_.end(),
[portSpec](const std::shared_ptr<INode>& n) {
return n->GetName() == portSpec.info_.peerPortNodeName_;
});
if (peerNode != pipeline_->nodes_.end()) {
std::shared_ptr<IPort> peerPort = (*peerNode)->GetPort(portSpec.info_.peerPortName_);
re = peerPort->SetFormat(portSpec.format_);
CHECK_IF_NOT_EQUAL_RETURN_VALUE(re, RC_OK, nullptr);
std::shared_ptr<IPort> port = newNode->GetPort(portSpec.info_.name_);
re = port->SetFormat(portSpec.format_);
CHECK_IF_NOT_EQUAL_RETURN_VALUE(re, RC_OK, nullptr);
re = port->Connect(peerPort);
CHECK_IF_NOT_EQUAL_RETURN_VALUE(re, RC_OK, nullptr);
re = peerPort->Connect(port);
CHECK_IF_NOT_EQUAL_RETURN_VALUE(re, RC_OK, nullptr);
}
}
}
}
SetMaxSize(sizeSet);
CAMERA_LOGI("------------------------Node Instantiation End-------------\n");
return pipeline_;
}
With the normal_preview_video spec configuration, the nodes are connected as shown in the figure below:
10. The pipeline is now created. StreamPipelineCore::CreatePipeline updates it into the StreamPipelineDispatcher, which later dispatches commands to the individual nodes.
4. Camera Stream Operations
4.1. Camera Streams
When taking photos or recording video, the camera is driven by streams. The whole process consists of creating streams, configuring streams, starting capture (stream on), cancelling capture (stream off), and releasing streams.
When capture starts, a pipeline is first created according to the scene and other parameters to chain the nodes together, and each node is started in turn.
In V4L2SourceNode, the SensorController uses V4L2 to make the low-level camera produce frames. Once V4L2SourceNode receives a stream buffer from the driver, the buffer flows through each node in turn, keeping the camera running.
The detailed flow is shown in the figure below:
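Before diving into the code, the sketch below summarizes the HDI-side call order for a capture session. It is only an illustration based on the camera HDI v1_0 interfaces; the include path, the way streamInfos/modeSetting/captureInfo are filled in, and all error handling are assumed or omitted.
#include <vector>
#include "v1_0/istream_operator.h"   // assumed path of the generated HDI interface header

using namespace OHOS::HDI::Camera::V1_0;

// Illustrative call order on an already obtained stream operator (no error handling).
void RunCaptureSession(const OHOS::sptr<IStreamOperator> &streamOperator,
                       const std::vector<StreamInfo> &streamInfos,
                       const std::vector<uint8_t> &modeSetting,
                       const CaptureInfo &captureInfo)
{
    streamOperator->CreateStreams(streamInfos);                          // create preview/video/... streams
    streamOperator->CommitStreams(OperationMode::NORMAL, modeSetting);   // commit settings and build the pipeline
    streamOperator->Capture(0, captureInfo, true);                       // start continuous capture, captureId = 0
    streamOperator->CancelCapture(0);                                    // stop capture
    streamOperator->ReleaseStreams({0, 1});                              // release streams by id
}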
4.2. Code Analysis of Stream Operations
1. CameraDeviceVdiImpl, the implementation of CameraDevice, calls GetStreamOperator to obtain the VDI stream implementation StreamOperatorVdiImpl, which handles the stream operations from then on.
int32_t CameraDeviceVdiImpl::GetStreamOperator(const sptr<IStreamOperatorVdiCallback> &callbackObj,
sptr<IStreamOperatorVdi> &streamOperator)
{
......
if (spStreamOperator_ == nullptr) {
#ifdef CAMERA_BUILT_ON_OHOS_LITE
spStreamOperator_ = std::make_shared<StreamOperatorVdiImpl>(callbackObj, shared_from_this());
#else
spStreamOperator_ = new(std::nothrow) StreamOperatorVdiImpl(callbackObj, shared_from_this());
#endif
if (spStreamOperator_ == nullptr) {
CAMERA_LOGW("create stream operator failed.");
return DEVICE_ERROR;
}
spStreamOperator_->Init();
ismOperator_ = spStreamOperator_;
}
......
return VDI::Camera::V1_0::NO_ERROR;
}
2. The upper layer first calls StreamOperatorVdiImpl::CreateStreams to create streams in batch; taking a photo, for example, creates a preview stream and a snapshot stream, while recording creates a preview stream and a video stream.
int32_t StreamOperatorVdiImpl::CreateStreams(const std::vector<VdiStreamInfo> &streamInfos)
{
......
for (const auto &it : streamInfos) {
......
// Create a stream of the requested type for each entry
std::shared_ptr<IStream> stream = StreamFactory::Instance().CreateShared(
IStream::g_availableStreamType[it.intent_], it.streamId_, it.intent_, pipelineCore_, messenger_);
......
StreamConfiguration scg;
StreamInfoToStreamConfiguration(scg, it);
RetCode rc = stream->ConfigStream(scg);
......
if (!scg.tunnelMode && (it.bufferQueue_)->producer_ != nullptr) {
return INVALID_ARGUMENT;
}
if ((it.bufferQueue_)->producer_ != nullptr) {
// Create a tunnel; the tunnel requests and returns buffers for the stream
auto tunnel = std::make_shared<StreamTunnel>();
rc = tunnel->AttachBufferQueue((it.bufferQueue_)->producer_);
if (stream->AttachStreamTunnel(tunnel) != RC_OK) {
return INVALID_ARGUMENT;
}
}
std::lock_guard<std::mutex> l(streamLock_);
streamMap_[stream->GetStreamId()] = stream;
}
return VDI::Camera::V1_0::NO_ERROR;
}
3. After the streams are created, StreamOperatorVdiImpl::CommitStreams first collects each stream's attribute settings, calls StreamBase::CommitStream to apply them to the individual streams, then applies the settings to the pipeline core and creates the pipeline.
int32_t StreamOperatorVdiImpl::CommitStreams(VdiOperationMode mode, const std::vector<uint8_t> &modeSetting)
{
......
std::vector<StreamConfiguration> configs = {};
{
std::lock_guard<std::mutex> l(streamLock_);
std::transform(streamMap_.begin(), streamMap_.end(), std::back_inserter(configs),
[](auto &iter) { return iter.second->GetStreamAttribute(); });
}
std::shared_ptr<CameraMetadata> setting;
MetadataUtils::ConvertVecToMetadata(modeSetting, setting);
DynamicStreamSwitchMode method = streamPipeline_->CheckStreamsSupported(mode, setting, configs);
{
......
for (auto it : streamMap_) {
if (it.second->CommitStream() != RC_OK) {
CAMERA_LOGE("commit stream [id = %{public}d] failed.", it.first);
return DEVICE_ERROR;
}
}
}
RetCode rc = streamPipeline_->PreConfig(setting);
......
rc = streamPipeline_->CreatePipeline(mode1);
......
return VDI::Camera::V1_0::NO_ERROR;
}
4. StreamBase::CommitStream mainly initializes the BufferPool and registers the buffer handling callback.
RetCode StreamBase::CommitStream()
{
pipeline_ = pipelineCore_->GetStreamPipelineCore();
......
hostStreamMgr_ = pipelineCore_->GetHostStreamMgr();
......
HostStreamInfo info;
info.type_ = static_cast<VdiStreamIntent>(streamType_);
info.streamId_ = streamId_;
info.width_ = streamConfig_.width;
info.height_ = streamConfig_.height;
info.format_ = streamConfig_.format;
info.usage_ = streamConfig_.usage;
info.encodeType_ = streamConfig_.encodeType;
if (streamConfig_.tunnelMode) {
BufferManager* mgr = BufferManager::GetInstance();
if (bufferPool_ == nullptr) {
poolId_ = mgr->GenerateBufferPoolId();
CHECK_IF_EQUAL_RETURN_VALUE(poolId_, 0, RC_ERROR);
bufferPool_ = mgr->GetBufferPool(poolId_);
CHECK_IF_PTR_NULL_RETURN_VALUE(bufferPool_, RC_ERROR);
}
info.bufferPoolId_ = poolId_;
info.bufferCount_ = GetBufferCount();
RetCode rc = bufferPool_->Init(streamConfig_.width, streamConfig_.height, streamConfig_.usage,
streamConfig_.format, GetBufferCount(), CAMERA_BUFFER_SOURCE_TYPE_EXTERNAL);
}
RetCode rc = hostStreamMgr_->CreateHostStream(info, [this](auto buffer) { HandleResult(buffer); });
......
state_ = STREAM_STATE_ACTIVE;
return RC_OK;
}
5. Once the streams are configured, StreamOperatorVdiImpl::Capture is called to issue a request to each stream.
int32_t StreamOperatorVdiImpl::Capture(int32_t captureId, const VdiCaptureInfo &info, bool isStreaming)
{
......
std::shared_ptr<CameraMetadata> captureSetting;
MetadataUtils::ConvertVecToMetadata(info.captureSetting_, captureSetting);
......
auto request =
std::make_shared<CaptureRequest>(captureId, info.streamIds_.size(), captureSetting,
info.enableShutterCallback_, isStreaming);
for (auto id : info.streamIds_) {
RetCode rc = streamMap_[id]->AddRequest(request);
if (rc != RC_OK) {
return DEVICE_ERROR;
}
}
{
std::lock_guard<std::mutex> l(requestLock_);
requestMap_[captureId] = request;
}
return VDI::Camera::V1_0::NO_ERROR;
}
6. In StreamBase::AddRequest, the first request triggers StartStream to start the stream.
RetCode StreamBase::AddRequest(std::shared_ptr<CaptureRequest>& request)
{
CHECK_IF_PTR_NULL_RETURN_VALUE(request, RC_ERROR);
request->AddOwner(shared_from_this());
request->SetFirstRequest(false);
if (isFirstRequest) {
RetCode rc = StartStream();
if (rc != RC_OK) {
CAMERA_LOGE("start stream [id:%{public}d] failed", streamId_);
return RC_ERROR;
}
request->SetFirstRequest(true);
isFirstRequest = false;
}
{
std::unique_lock<std::mutex> l(wtLock_);
waitingList_.emplace_back(request);
cv_.notify_one();
}
return RC_OK;
}
7. StreamBase::StartStream starts a thread that runs HandleRequest to process requests, and issues Prepare and Start through the pipeline to start each node in the pipeline.
RetCode StreamBase::StartStream()
{
int origin = calltimes_.fetch_add(1);
......
tunnel_->NotifyStart();
RetCode rc = pipeline_->Prepare({streamId_});
......
std::string threadName =
g_availableStreamType[static_cast<VdiStreamIntent>(streamType_)] + "#" + std::to_string(streamId_);
handler_ = std::make_unique<std::thread>([this, &threadName] {
prctl(PR_SET_NAME, threadName.c_str());
while (state_ == STREAM_STATE_BUSY) {
tunnel_->DumpStats(3); // set output interval to 30 second
HandleRequest();
}
});
......
rc = pipeline_->Start({streamId_});
......
return RC_OK;
}
8. StreamBase::HandleRequest lets the concrete stream process the request, and the concrete stream then calls StreamBase::Capture.
void StreamBase::HandleRequest()
{
......
std::shared_ptr<CaptureRequest> request = nullptr;
{
// keep a copy of continious-capture in waitingList_, unless it's going to be canceled.
std::unique_lock<std::mutex> l(wtLock_);
if (waitingList_.empty()) {
return;
}
request = waitingList_.front();
CHECK_IF_PTR_NULL_RETURN_VOID(request);
CAMERA_LOGI("HandleRequest streamId = [%{public}d] and needCancel = [%{public}d]",
streamId_, request->NeedCancel() ? 1 : 0);
if (!request->IsContinous()) {
waitingList_.pop_front();
}
}
if (request == nullptr) {
CAMERA_LOGE("fatal error, stream [%{public}d] request list is not empty, but can't get one", streamId_);
return;
}
if (request->NeedCancel()) {
return;
}
request->Process(streamId_);
return;
}
// Call the stream's Capture
RetCode CaptureRequest::Process(const int32_t id)
{
auto stream = owners_[id].lock();
CHECK_IF_PTR_NULL_RETURN_VALUE(stream, RC_ERROR);
semp_->Sync();
return stream->Capture(shared_from_this());
}
9. StreamBase::Capture first takes a buffer and delivers it downward, then issues the Config command through the pipeline to configure each node, and finally issues Capture to start each node.
RetCode StreamBase::Capture(const std::shared_ptr<CaptureRequest>& request)
{
......
RetCode rc = RC_ERROR;
if (request->IsFirstOne() && !request->IsContinous()) {
uint32_t n = GetBufferCount();
for (uint32_t i = 0; i < n; i++) {
DeliverStreamBuffer();
}
} else {
do {
rc = DeliverStreamBuffer();
{
std::unique_lock<std::mutex> l(wtLock_);
if (waitingList_.empty()) {
CAMERA_LOGI("Capture stream [id:%{public}d] stop deliver buffer.", streamId_);
break;
}
}
} while (rc != RC_OK && state_ == STREAM_STATE_BUSY);
}
rc = pipeline_->Config({streamId_}, request->GetCaptureSetting());
......
rc = pipeline_->Capture({streamId_}, request->GetCaptureId());
......
return RC_OK;
}
10. V4L2SourceNode::Start uses SensorController::Start to call V4L2, which makes the low-level driver open the device and provides buffers for it to fill; it then calls SourceNode::Start to start the stream's buffer circulation.
RetCode V4L2SourceNode::Start(const int32_t streamId)
{
......
std::vector<std::shared_ptr<IPort>> outPorts = GetOutPorts();
for (const auto& it : outPorts) {
DeviceFormat format;
format.fmtdesc.pixelformat = V4L2Utils::ConvertPixfmtHal2V4l2(
static_cast<OHOS::Camera::CameraBufferFormat>(it->format_.format_));
format.fmtdesc.width = wide_;
format.fmtdesc.height = high_;
int bufCnt = it->format_.bufferCount_;
// SensorController::Start eventually calls into V4L2 and opens the device
rc = sensorController_->Start(bufCnt, format);
if (rc == RC_ERROR) {
CAMERA_LOGE("start failed.");
return RC_ERROR;
}
}
......
rc = SourceNode::Start(streamId);
return rc;
}
11. SourceNode::Start calls StartCollectBuffers and StartDistributeBuffers to start the worker threads.
RetCode SourceNode::Start(const int32_t streamId)
{
......
RetCode rc = handler_[streamId]->StartCollectBuffers();
CHECK_IF_NOT_EQUAL_RETURN_VALUE(rc, RC_OK, RC_ERROR);
rc = handler_[streamId]->StartDistributeBuffers();
CHECK_IF_NOT_EQUAL_RETURN_VALUE(rc, RC_OK, RC_ERROR);
return RC_OK;
}
12. StartCollectBuffers starts a thread that calls CollectBuffers, V4L2SourceNode::ProvideBuffers, and SensorController::SendFrameBuffer in turn. SendFrameBuffer calls V4L2's StartStream to actually start streaming and hands the allocated buffers to the low-level camera driver to fill.
RetCode SensorController::SendFrameBuffer(std::shared_ptr<FrameSpec> buffer)
{
RetCode ret = RC_OK;
if (buffCont_ >= 1) {
CAMERA_LOGI("buffCont_ %{public}d", buffCont_);
sensorVideo_->CreatBuffer(GetName(), buffer);
if (buffCont_ == 1) {
CAMERA_LOGI("xxx SensorController::SendFrameBuffer StartStream");
ret = sensorVideo_->StartStream(GetName());
}
buffCont_--;
} else {
ret = sensorVideo_->QueueBuffer(GetName(), buffer);
}
return ret;
}
13. StartDistributeBuffers starts a thread that calls DistributeBuffers in a loop. The function waits on the condition variable rbcv for buffers filled by the low-level camera driver to be called back; once a buffer arrives, each node's DeliverBuffer function is called in turn to do its own processing.
void V4L2SourceNode::SetBufferCallback()
{
// Set the buffer callback
sensorController_->SetNodeCallBack([&](std::shared_ptr<FrameSpec> frameSpec) {
OnPackBuffer(frameSpec);
});
return;
}
void SourceNode::OnPackBuffer(std::shared_ptr<FrameSpec> frameSpec)
{
CAMERA_LOGI("SourceNode::OnPackBuffer enter");
CHECK_IF_PTR_NULL_RETURN_VOID(frameSpec);
auto buffer = frameSpec->buffer_;
CHECK_IF_PTR_NULL_RETURN_VOID(buffer);
// The callback invokes SourceNode::PortHandler::OnBuffer
handler_[buffer->GetStreamId()]->OnBuffer(buffer);
CAMERA_LOGI("SourceNode::OnPackBuffer exit");
return;
}
void SourceNode::PortHandler::OnBuffer(std::shared_ptr<IBuffer>& buffer)
{
CAMERA_LOGV("SourceNode::PortHandler::OnBuffer enter");
{
std::unique_lock<std::mutex> l(rblock);
respondBufferList.emplace_back(buffer);
rbcv.notify_one();
}
CAMERA_LOGV("SourceNode::PortHandler::OnBuffer exit");
return;
}
// Distribute the filled buffers to the nodes
void SourceNode::PortHandler::DistributeBuffers()
{
std::shared_ptr<IBuffer> buffer = nullptr;
{
std::unique_lock<std::mutex> l(rblock);
auto timeout = std::chrono::system_clock::now() + std::chrono::milliseconds(5000); // 5000ms
// SourceNode::PortHandler::OnBuffer signals rbcv when a filled buffer is available
if (!rbcv.wait_until(l, timeout, [this] {
return (!dbtRun || !respondBufferList.empty());
})) {
CAMERA_LOGE("DistributeBuffers timeout, dbtRun=%{public}d, respondBufferList size=%{public}d",
dbtRun.load(std::memory_order_acquire), respondBufferList.size());
}
if (!dbtRun || respondBufferList.empty()) {
return;
}
buffer = respondBufferList.front();
respondBufferList.pop_front();
}
auto node = port->GetNode();
CHECK_IF_PTR_NULL_RETURN_VOID(node);
CAMERA_LOGE("DistributeBuffers Loop, start deliverBuffer, streamId = %{public}d", buffer->GetStreamId());
// Let the next node process the buffer
node->DeliverBuffer(buffer);
return;
}
14. Each subsequent node executes NodeBase::DeliverBuffer to hand the buffer it has processed to the next node. After the last node is done, the buffer is returned to the BufferPool, which keeps the camera's buffer flow running.
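As a rough illustration of that per-node handoff (this is not the framework's exact NodeBase code; ProcessBuffer is a placeholder for the node-specific work such as scaling or encoding), a single-output node would forward a buffer roughly like this:
void MyNode::DeliverBuffer(std::shared_ptr<IBuffer>& buffer)
{
    ProcessBuffer(buffer);                    // node-specific processing (placeholder)
    for (auto& outPort : GetOutPorts()) {     // out ports were connected in StreamPipelineBuilder::Build
        outPort->DeliverBuffer(buffer);       // the peer of this port belongs to the next node
        return;                               // a single-output node forwards through its first out port
    }
}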
15. When a photo or recording session ends, CancelCapture and ReleaseStreams are called in turn to cancel capture and release the streams; the call chains mirror those for starting capture and creating streams.
5. Summary
This article covered three topics:
- How the HDF framework loads the CameraHost service, with an analysis of the startup flow.
- The scenes, streams, and nodes of the Camera Pipeline, with an analysis of the pipeline creation flow.
- The Camera stream operations (create / configure / capture / cancel capture / release).
Together they should give a basic picture of how the Camera HDI framework works internally.