
A while back I built member-facing data insight analysis on top of ClickHouse, covering event analysis, funnel analysis, and path analysis. This post briefly walks through the implementation.

1-Architecture

Let's start with a simple architecture for the feature; the core of it is ClickHouse serving the analytical queries.

[Figure: overall architecture]

The behavior data in the diagram usually comes from an event-tracking platform and is ingested into ClickHouse through Kafka.

The tag and audience data usually comes from our tagging system; offline tags, for example, are computed with Spark and then synced into ClickHouse. For how to load offline data into ClickHouse, see the previous post.

Here we don't care how the data gets into ClickHouse; the job is to implement the various queries we need.

2-Wide Table Preparation

We prepare one wide table of behavior data; extend the columns as you see fit.

CREATE TABLE ads_user_event_analysis ON CLUSTER '{cluster}'
(
    `user_id` Int64,
    `event_name` String DEFAULT '',
    `event_time` DateTime DEFAULT now(),
    `city` String DEFAULT '',
    `sex` String DEFAULT ''
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/default/ads_user_event_analysis/{shard}', '{replica}')
ORDER BY user_id
SETTINGS index_granularity = 8192;
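Since the DDL above creates a local replicated table on every node, queries in a cluster setup usually go through a Distributed table layered on top. A minimal sketch, where the cluster name my_cluster and the default database are placeholders for your environment:

CREATE TABLE ads_user_event_analysis_all ON CLUSTER my_cluster
AS ads_user_event_analysis
-- shard by user_id so one user's events live on one shard; the funnel and
-- path queries below all GROUP BY user_id and benefit from this locality
ENGINE = Distributed(my_cluster, default, ads_user_event_analysis, intHash64(user_id));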

event_name is the user's concrete action, e.g. purchase, place order, click.

event_time is when the event occurred.

city and sex are user attributes. Clean as many such attributes as you can into this one wide table; it saves joins in the ClickHouse SQL later.

3-Event Analysis

First, let's look at how event analysis is implemented.

[Figure: event analysis configuration page]

Essentially, the page lets you configure the metrics shown in the figure. Here is a fairly simple configuration as an example:

count the total number of 下单 (place order) events for sex = 女 (female), grouped by city, with the event time range set to the last month. The SQL implementation:

-- sample data
insert into ads_user_event_analysis values(1,'下单','2022-11-01 01:00:40','南京','女');
insert into ads_user_event_analysis values(1,'下单','2022-11-01 01:01:40','南京','女');
insert into ads_user_event_analysis values(2,'下单','2022-11-02 02:00:40','南京','男');
insert into ads_user_event_analysis values(3,'下单','2022-11-03 03:00:40','南京','男');
insert into ads_user_event_analysis values(4,'下单','2022-11-04 04:00:40','南京','男');
insert into ads_user_event_analysis values(5,'下单','2022-11-01 01:00:40','北京','女');
insert into ads_user_event_analysis values(5,'下单','2022-11-02 04:00:40','北京','女');
insert into ads_user_event_analysis values(6,'下单','2022-11-03 03:00:40','北京','男');
insert into ads_user_event_analysis values(7,'下单','2022-11-03 02:00:40','北京','男');
-- total event count
SELECT
    count(1) AS cnt,
    city
FROM ads_user_event_analysis
WHERE (event_name = '下单') AND (sex = '女')
  AND (toDate(event_time) >= '2022-11-01') AND (toDate(event_time) <= '2022-11-30')
GROUP BY city
ORDER BY city ASC

-- distinct user count
SELECT
    count(DISTINCT user_id) AS cnt,
    city
FROM ads_user_event_analysis
WHERE event_name = '下单' AND sex = '男'
  AND toDate(event_time) >= '2022-11-01' AND toDate(event_time) <= '2022-11-08'
GROUP BY city
ORDER BY city

-- window function: adds a rank, handy when you need to cap the number of returned rows
SELECT
    count(DISTINCT user_id) AS cnt,
    city,
    dense_rank() OVER (ORDER BY city) AS rn
FROM ads_user_event_analysis
WHERE event_name = '下单' AND sex = '男'
  AND toDate(event_time) >= '2022-11-01' AND toDate(event_time) <= '2022-11-08'
GROUP BY city
ORDER BY city

With a more elaborate design, the page can select several events at once, and filters can be split into per-event filters and a global filter over all selected events. If you want to see the analysis per day, the SQL can additionally group by day (see the sketch below). You also need a well-designed JSON contract with the front end for the visualization, which is a fair amount of work in itself.
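A daily breakdown, for instance, is just one more grouping key; a sketch along the lines of the queries above:

-- daily 下单 counts for female users, per city
SELECT
    toDate(event_time) AS day,
    city,
    count(1) AS cnt
FROM ads_user_event_analysis
WHERE event_name = '下单'
  AND sex = '女'
  AND day BETWEEN '2022-11-01' AND '2022-11-30'
GROUP BY day, city
ORDER BY day, city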

Note that the filters above actually fall into several categories, including tag- and audience-based conditions, e.g. analyzing only users carrying a certain tag; a sketch follows.
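The exact SQL depends on your tag tables. As a sketch, assuming a hypothetical tag table user_tag(user_id, tag_name), the tag condition can be pushed in with an IN subquery; an audience (crowd) package works the same way, with its user_id list as the subquery:

SELECT
    city,
    count(1) AS cnt
FROM ads_user_event_analysis
WHERE event_name = '下单'
  AND user_id IN
  (
      -- hypothetical tag table; substitute your real tag/audience schema
      SELECT user_id FROM user_tag WHERE tag_name = '高价值用户'
  )
GROUP BY city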

4-Funnel Analysis

Funnel analysis is another very common analysis; it gives a direct view of how many users drop off at each conversion step.

ClickHouse ships a function for exactly this, so let's see how it's done.

[Figure: funnel analysis configuration page]

Unlike event analysis, you first configure a funnel out of different event names, e.g. 登录 (login) -> 加购 (add to cart) -> 购买 (purchase).

You also configure a window period: 7 days, say, means a user only counts as completing the funnel if they finish all its steps within 7 days. windowFunnel takes this window in the unit of the timestamp column, i.e. seconds for a DateTime (the SQL below uses 864000 seconds, i.e. 10 days).

Filters, dimensions, and time range are chosen the same way as in event analysis.
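As a minimal illustration of windowFunnel on its own (a sketch with a true 7-day window, 604800 seconds): for each user it returns the deepest consecutive funnel step reached within the window:

SELECT
    user_id,
    windowFunnel(604800)( -- window in seconds: 7 days
        event_time,
        event_name = '登录',
        event_name = '加购',
        event_name = '购买'
    ) AS level -- 0 = never logged in, 1 = 登录 only, 2 = 登录 then 加购, 3 = full funnel
FROM ads_user_event_analysis
WHERE toDate(event_time) BETWEEN '2022-11-01' AND '2022-11-30'
GROUP BY user_id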

Data preparation and the funnel SQL implementation:

-- sample funnel data
insert into ads_user_event_analysis values(1,'登录','2022-11-01 01:00:40','南京','女');
insert into ads_user_event_analysis values(2,'登录','2022-11-01 01:01:40','南京','女');
insert into ads_user_event_analysis values(3,'登录','2022-11-01 02:00:40','南京','男');
insert into ads_user_event_analysis values(4,'登录','2022-11-01 03:00:40','南京','男');
insert into ads_user_event_analysis values(5,'登录','2022-11-01 04:00:40','南京','男');
insert into ads_user_event_analysis values(2,'加购','2022-11-02 01:00:40','南京','女');
insert into ads_user_event_analysis values(3,'加购','2022-11-02 04:00:40','南京','男');
insert into ads_user_event_analysis values(4,'加购','2022-11-02 04:00:40','南京','男');
insert into ads_user_event_analysis values(3,'购买','2022-11-03 03:00:40','南京','男');
insert into ads_user_event_analysis values(4,'购买','2022-11-03 02:00:40','南京','男');
-- the final funnel SQL:
SELECT
    *,
    neighbor(user_count, -1) AS prev_user_count,
    if(prev_user_count = 0, -1, round(user_count / prev_user_count * 100, 3)) AS conv_rate_percent
FROM
(
    SELECT
        level_index,
        count(1) AS user_count
    FROM
    (
        SELECT
            user_id,
            arrayWithConstant(level, 1) AS levels,            -- e.g. level 2 -> [1, 1]
            arrayJoin(arrayEnumerate(levels)) AS level_index  -- one row per step reached
        FROM
        (
            SELECT
                user_id,
                windowFunnel(864000)( -- window in seconds (864000 s = 10 days)
                    event_time,
                    event_name = '登录',
                    event_name = '加购',
                    event_name = '购买'
                ) AS level
            FROM
            (
                SELECT event_time, event_name, user_id
                FROM ads_user_event_analysis
                WHERE toDate(event_time) >= '2022-11-01'
                  AND toDate(event_time) <= '2022-11-30'
                  AND event_name IN ('登录', '加购', '购买')
            )
            GROUP BY user_id
        )
    )
    GROUP BY level_index
    ORDER BY level_index
)
The result gives exactly what we want (level_index = funnel step, user_count = users reaching this step, prev_user_count = users at the previous step, conv_rate_percent = step conversion rate):

┌─level_index─┬─user_count─┬─prev_user_count─┬─conv_rate_percent─┐
│           1 │          5 │               0 │                -1 │
│           2 │          3 │               5 │                60 │
│           3 │          2 │               3 │            66.667 │
└─────────────┴────────────┴─────────────────┴───────────────────┘
-- to break down by a dimension such as sex, just add the column to the GROUP BY
SELECT
    *,
    neighbor(user_count, -1) AS prev_user_count,
    if(prev_user_count = 0, -1, round(user_count / prev_user_count * 100, 3)) AS conv_rate_percent
FROM
(
    SELECT
        level_index,
        count(1) AS user_count,
        sex
    FROM
    (
        SELECT
            user_id,
            sex,
            arrayWithConstant(level, 1) AS levels,
            arrayJoin(arrayEnumerate(levels)) AS level_index
        FROM
        (
            SELECT
                user_id,
                sex,
                windowFunnel(864000)(
                    event_time,
                    event_name = '登录',
                    event_name = '加购',
                    event_name = '购买'
                ) AS level
            FROM
            (
                SELECT event_time, event_name, user_id, sex
                FROM ads_user_event_analysis
                WHERE toDate(event_time) >= '2022-11-01'
                  AND toDate(event_time) <= '2022-11-30'
                  AND event_name IN ('登录', '加购', '购买')
            )
            GROUP BY sex, user_id
        )
    )
    GROUP BY sex, level_index
    ORDER BY sex, level_index
)
┌─level_index─┬─user_count─┬─sex─┬─prev_user_count─┬─conv_rate_percent─┐
│           1 │          2 │ 女  │               0 │                -1 │
│           2 │          1 │ 女  │               2 │                50 │
│           1 │          3 │ 男  │               1 │               300 │
│           2 │          2 │ 男  │               3 │            66.667 │
│           3 │          2 │ 男  │               2 │               100 │
└─────────────┴────────────┴─────┴─────────────────┴───────────────────┘
Note that before handing the data to the front end, prev_user_count and conv_rate_percent for the level_index = 1 rows need adjusting: neighbor() simply looks at the previous output row, so the first step of one group can pick up the last row of the previous group (hence the 300% on the male level-1 row above).
The first step's conversion can simply default to 1, i.e. 100%.
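One way to do that (a sketch: the inner query is the same per-sex funnel as above, with the first step clamped in the outer SELECT):

SELECT
    level_index,
    user_count,
    sex,
    if(level_index = 1, user_count, neighbor(user_count, -1)) AS prev_user_count,
    if(level_index = 1, 100,
       if(prev_user_count = 0, -1, round(user_count / prev_user_count * 100, 3))) AS conv_rate_percent
FROM
(
    SELECT level_index, count(1) AS user_count, sex
    FROM
    (
        SELECT
            user_id,
            sex,
            arrayJoin(arrayEnumerate(arrayWithConstant(level, 1))) AS level_index
        FROM
        (
            SELECT
                user_id,
                sex,
                windowFunnel(864000)(
                    event_time,
                    event_name = '登录',
                    event_name = '加购',
                    event_name = '购买'
                ) AS level
            FROM ads_user_event_analysis
            WHERE toDate(event_time) BETWEEN '2022-11-01' AND '2022-11-30'
              AND event_name IN ('登录', '加购', '购买')
            GROUP BY sex, user_id
        )
    )
    GROUP BY sex, level_index
    ORDER BY sex, level_index
)

Since every group's first row is level_index = 1, the clamp also papers over neighbor() reading across group boundaries.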

From the result we can see the number of users at each funnel step as well as the conversion rate.

A few notes on the functions used in the SQL above:

arrayWithConstant: generates an array of the given length filled with a constant.

SELECT arrayWithConstant(3, 'level')

┌─arrayWithConstant(3, 'level')─┐
│ ['level','level','level']     │
└───────────────────────────────┘

arrayEnumerate: returns the array's indexes [1, 2, ..., length(arr)], comparable to ROW_NUMBER in Hive.

arrayJoin: unfolds an array into rows, one row per element.

neighbor: fetches the value of a column a given offset before or after the current row.
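A quick sketch of the three on toy data:

SELECT arrayEnumerate(['a', 'b', 'c']);  -- [1,2,3]

SELECT arrayJoin(['a', 'b', 'c']) AS x;  -- three rows: 'a', 'b', 'c'

-- prev is 0, 0, 1: the first row gets the column type's default value
SELECT number, neighbor(number, -1) AS prev
FROM numbers(3);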

5-Path Analysis

Path analysis is more involved; here is the overall flow.

[Figure: path analysis flow]

What we need to do is pick the path's start (or end) event, define the full set of events the path may contain, and choose the session gap for the path. If there are filter conditions, they need to be supported as well.
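Before the full SQL, here is the session-splitting idea in isolation: a simplified sketch that splits purely on a 600-second gap, without the start-event handling the real query adds.

SELECT
    user_id,
    -- time-ordered (timestamp, event) pairs per user
    arraySort(x -> x.1, groupArray((toUInt32(event_time), event_name))) AS sorted_events,
    -- seconds elapsed since the previous event (0 for the first)
    arrayDifference(sorted_events.1) AS gaps,
    -- 1 marks the start of a new session
    arrayMap(g -> if(g > 600, 1, 0), gaps) AS gap_masks,
    arraySplit((e, m) -> m, sorted_events, gap_masks) AS sessions
FROM ads_user_event_analysis
GROUP BY user_id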

Let's prepare the data first:

-- sample path data
insert into ads_user_event_analysis values(1,'注册','2022-11-01 01:00:40','南京','女');
insert into ads_user_event_analysis values(1,'上线','2022-11-01 01:01:40','南京','女');
insert into ads_user_event_analysis values(1,'阅读','2022-11-01 01:02:40','南京','女');
insert into ads_user_event_analysis values(1,'购买','2022-11-01 01:03:40','南京','女');
insert into ads_user_event_analysis values(2,'注册','2022-11-01 01:02:40','南京','男');
insert into ads_user_event_analysis values(2,'上线','2022-11-01 01:03:40','南京','男');
insert into ads_user_event_analysis values(2,'购买','2022-11-01 01:04:40','南京','男');
insert into ads_user_event_analysis values(2,'阅读','2022-11-01 01:00:45','南京','男');

You can see the two users' paths are:

User 1: 注册 (register) -> 上线 (online) -> 阅读 (browse) -> 购买 (purchase)

User 2: 注册 -> 上线 -> 购买 (user 2's 阅读 event at 01:00:45 falls before the 注册 start event, so it is dropped from the chain)

The SQL implementation:

-- build per-user, per-session event chains
SELECT *
FROM
(
    WITH
        -- timestamp of the user's earliest start event (注册)
        toUInt32(minIf(event_time, event_name = '注册')) AS start_event_ts,
        -- time-ordered (timestamp, event) pairs at or after the start event,
        -- with adjacent duplicates collapsed
        arrayCompact(
            arraySort(
                x -> x.1,
                arrayFilter(
                    x -> x.1 >= start_event_ts,
                    groupArray((toUInt32(event_time), event_name))
                )
            )
        ) AS sorted_events,
        arrayEnumerate(sorted_events) AS event_idxs,
        -- indexes that open a new session: a start event, or a gap of more
        -- than 600 seconds since the previous event
        arrayFilter(
            (x, y, z) -> z.1 >= start_event_ts AND (z.2 = '注册' OR y > 600),
            event_idxs,
            arrayDifference(sorted_events.1),
            sorted_events
        ) AS gap_idxs,
        arrayMap(x -> if(has(gap_idxs, x), 1, 0), event_idxs) AS gap_masks,
        -- split the ordered events into sessions at the marked indexes
        arraySplit((x, y) -> y, sorted_events, gap_masks) AS split_events
    SELECT
        user_id,
        arrayJoin(split_events) AS event_chain_,      -- one row per session
        arrayCompact(event_chain_.2) AS event_chain,  -- the session's event names
        arrayStringConcat(event_chain, ' -> ') AS result_chain,
        event_chain[1] AS page1,
        event_chain[2] AS page2,
        event_chain[3] AS page3,
        event_chain[4] AS page4,
        event_chain[5] AS page5
    FROM
    (
        SELECT event_time, event_name, user_id
        FROM ads_user_event_analysis
        WHERE toDate(event_time) >= '2022-11-01'
          AND toDate(event_time) <= '2022-11-04'
          AND event_name IN ('注册', '上线', '阅读', '购买')
    ) t
    GROUP BY user_id
)
WHERE event_chain[1] = '注册'
┌─user_id─┬─event_chain_───────────────────────────────────────────────────────────────────────┬─event_chain────────────────────┬─result_chain─────────────────┬─page1─┬─page2─┬─page3─┬─page4─┬─page5─┐
│       2 │ [(1667264560,'注册'),(1667264620,'上线'),(1667264680,'购买')]                      │ ['注册','上线','购买']        │ 注册 -> 上线 -> 购买         │ 注册  │ 上线  │ 购买  │       │       │
│       1 │ [(1667264440,'注册'),(1667264500,'上线'),(1667264560,'阅读'),(1667264620,'购买')]  │ ['注册','上线','阅读','购买'] │ 注册 -> 上线 -> 阅读 -> 购买 │ 注册  │ 上线  │ 阅读  │ 购买  │       │
└─────────┴────────────────────────────────────────────────────────────────────────────────────┴────────────────────────────────┴──────────────────────────────┴───────┴───────┴───────┴───────┴───────┘

We now have each user's path, but that alone is not enough to drive the page: we also need how many users sit at each path level and how many flow on to the next event. With window functions we can build a result the front end can consume directly:

SELECT arrayDistinct(groupArray([concat(page1, '_', toString(page1_cnt),'_',toString(page1_user))])) AS page1,
arrayDistinct(groupArray([concat(page2, '_', toString(page2_cnt),'_',toString(page2_user))])) AS page2,
arrayDistinct(groupArray([concat(page3, '_', toString(page3_cnt),'_',toString(page3_user))])) AS page3,
arrayDistinct(groupArray([concat(page4, '_', toString(page4_cnt),'_',toString(page4_user))])) AS page4,
arrayDistinct(groupArray([concat(page5, '_', toString(page5_cnt),'_',toString(page5_user))])) AS page5,
arrayDistinct(groupArray([concat(link1_2, '_', toString(link1_2_cnt),'_',toString(link1_2_user))])) AS link1_2,
arrayDistinct(groupArray([concat(link2_3, '_', toString(link2_3_cnt),'_',toString(link2_3_user))])) AS link2_3,
arrayDistinct(groupArray([concat(link3_4, '_', toString(link3_4_cnt),'_',toString(link3_4_user))])) AS link3_4,
arrayDistinct(groupArray([concat(link4_5, '_', toString(link4_5_cnt),'_',toString(link4_5_user))])) AS link4_5 FROM (
SELECT page1,sum(1) OVER (PARTITION BY page1) AS page1_cnt,count(distinct user_id) OVER (PARTITION BY page1) As page1_user,
page2,sum(1) OVER (PARTITION BY page2) AS page2_cnt,count(distinct user_id) OVER (PARTITION BY page2) As page2_user,
page3,sum(1) OVER (PARTITION BY page3) AS page3_cnt,count(distinct user_id) OVER (PARTITION BY page3) As page3_user,
page4,sum(1) OVER (PARTITION BY page4) AS page4_cnt,count(distinct user_id) OVER (PARTITION BY page4) As page4_user,
page5,sum(1) OVER (PARTITION BY page5) AS page5_cnt,count(distinct user_id) OVER (PARTITION BY page5) As page5_user,
concat(page1, '->', page2) AS link1_2,sum(1) OVER (PARTITION BY page1, page2) AS link1_2_cnt,count(distinct user_id) OVER (PARTITION BY page1, page2) AS link1_2_user,
concat(page2, '->', page3) AS link2_3,sum(1) OVER (PARTITION BY page2, page3) AS link2_3_cnt,count(distinct user_id) OVER (PARTITION BY page2, page3) AS link2_3_user,
concat(page3, '->', page4) AS link3_4,sum(1) OVER (PARTITION BY page3, page4) AS link3_4_cnt,count(distinct user_id) OVER (PARTITION BY page3, page4) AS link3_4_user,
concat(page4, '->', page5) AS link4_5,sum(1) OVER (PARTITION BY page4, page5) AS link4_5_cnt,count(distinct user_id) OVER (PARTITION BY page4, page5) AS link4_5_user FROM (
-- same per-user session/chain query as in the previous step
SELECT *
FROM
(
    WITH
        toUInt32(minIf(event_time, event_name = '注册')) AS start_event_ts,
        arrayCompact(
            arraySort(
                x -> x.1,
                arrayFilter(
                    x -> x.1 >= start_event_ts,
                    groupArray((toUInt32(event_time), event_name))
                )
            )
        ) AS sorted_events,
        arrayEnumerate(sorted_events) AS event_idxs,
        arrayFilter(
            (x, y, z) -> z.1 >= start_event_ts AND (z.2 = '注册' OR y > 600),
            event_idxs,
            arrayDifference(sorted_events.1),
            sorted_events
        ) AS gap_idxs,
        arrayMap(x -> if(has(gap_idxs, x), 1, 0), event_idxs) AS gap_masks,
        arraySplit((x, y) -> y, sorted_events, gap_masks) AS split_events
    SELECT
        user_id,
        arrayJoin(split_events) AS event_chain_,
        arrayCompact(event_chain_.2) AS event_chain,
        arrayStringConcat(event_chain, ' -> ') AS result_chain,
        event_chain[1] AS page1,
        event_chain[2] AS page2,
        event_chain[3] AS page3,
        event_chain[4] AS page4,
        event_chain[5] AS page5
    FROM
    (
        SELECT event_time, event_name, user_id
        FROM ads_user_event_analysis
        WHERE toDate(event_time) >= '2022-11-01'
          AND toDate(event_time) <= '2022-11-04'
          AND event_name IN ('注册', '上线', '阅读', '购买')
    ) t
    GROUP BY user_id
)
WHERE event_chain[1] = '注册'
) AS t) AS t
┌─page1──────────┬─page2──────────┬─page3───────────────────────┬─page4───────────────────┬─page5──────┬─link1_2──────────────┬─link2_3─────────────────────────────────┬─link3_4─────────────────────────────┬─link4_5─────────────────────┐
│ [['注册_2_2']] │ [['上线_2_2']] │ [['购买_1_1'],['阅读_1_1']] │ [['_1_1'],['购买_1_1']] │ [['_2_2']] │ [['注册->上线_2_2']] │ [['上线->购买_1_1'],['上线->阅读_1_1']] │ [['购买->_1_1'],['阅读->购买_1_1']] │ [['->_1_1'],['购买->_1_1']] │
└────────────────┴────────────────┴─────────────────────────────┴─────────────────────────┴────────────┴──────────────────────┴─────────────────────────────────────────┴─────────────────────────────────────┴─────────────────────────────┘

In the result, each pageN array lists the events at that level in the form event_count_users, and each linkN_M array carries the edge between two adjacent levels in the same form. With this data in hand, you can work out how to render it on the front-end page.

Summary

With that, we've implemented the most common ClickHouse-based behavior-analysis features, focusing mainly on the SQL. What's missing are the tag- and profile-related conditions; those depend on the concrete table structures of your company's tags and audiences, and ultimately the tag or audience data can be fed in as a condition through an IN clause. The other big chunk of work is the front-end interaction code, where the returned JSON needs careful design. But with the foundation above, that should just be a matter of time.
