Preliminaries

1. Recap

In earlier posts we saw that on Android the first thing to run is init, which parses init.rc and starts the services declared there; Zygote is one of the services init brings up. Acting as an incubator, Zygote forks system_server as well as the application processes. SystemServer is the system's entry point for starting services: the ones we use all the time, such as AMS, WMS and PMS, are created there and managed by it. As a system process, SystemServer creates an ActivityThread and loads framework-res.apk (system dialogs, for example, come from it), then calls startBootstrapServices, startCoreServices and startOtherServices to bring up a long list of services (Installer, ActivityTaskManagerService, ActivityManagerService, PowerManagerService, PackageManagerService and so on) and adds them to SystemServiceManager's mServices (an ArrayList). We also covered Android's Watchdog, which monitors the services inside SystemServer: if one misbehaves it kills system_server, which brings Zygote down with it, and init then restarts Zygote, which in turn restarts system_server.

For the details, see my earlier posts:

【Android FrameWork】The program that starts first: init

【Android FrameWork】Zygote

【Android FrameWork】SystemServer

2. Introduction

1. A quick look at ServiceManager

ServiceManager is the service housekeeper Android provides for developers. After boot, once the kernel hands over to user space, the system_server process starts; inside it AMS, PKMS, PMS and the rest are created and then registered with ServiceManager.

2. What ServiceManager is for

A natural question: the services are already held by SystemServer, so why register them with ServiceManager too? Couldn't we just fetch them from SystemServer? My understanding is this: SystemServer is the housekeeper that assembles the system's services, monitors them and manages their lifecycle, while ServiceManager does exactly one thing, it exposes Binder communication so applications can look up and obtain the system services. The two don't conflict; the responsibilities are clearly split.

3. A quick look at Binder

Binder is an IPC mechanism specific to Android. Its predecessor is OpenBinder, on top of which Android Binder was developed. Since Android is based on Linux, it also supports Linux's native IPC mechanisms: shared memory, pipes and sockets; Binder is the Android-specific one. The table below summarizes how they differ.

|  | Binder | Shared memory | Pipe | Socket |
| --- | --- | --- | --- | --- |
| Performance | one memory copy | zero memory copies | two memory copies | two memory copies |
| Stability | C/S architecture, very stable | synchronization and deadlock issues | parent-child processes only, simplex, low efficiency | C/S architecture, but handshakes and teardown make it slow and costly |
| Security | kernel assigns each app a UID, high security | custom protocol, security is up to you, interface open to everyone | custom protocol, security is up to you, interface open to everyone | custom protocol, security is up to you, interface open to everyone |
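To make the copy counts above concrete, here is a minimal, hedged sketch of pipe IPC in plain POSIX C (nothing Android-specific): write() copies the message from the parent's user buffer into a kernel pipe buffer, and read() copies it again into the child's user buffer, which is where the "two copies" in the table come from.

/* Toy illustration of the "two copies" cost of pipe IPC (plain POSIX). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                          /* child: reader */
        char buf[64] = {0};
        close(fds[1]);
        read(fds[0], buf, sizeof(buf) - 1);  /* copy #2: kernel buffer -> child's user buffer */
        printf("child got: %s\n", buf);
        return 0;
    }

    const char *msg = "hello from parent";
    close(fds[0]);
    write(fds[1], msg, strlen(msg));         /* copy #1: parent's user buffer -> kernel buffer */
    close(fds[1]);
    waitpid(pid, NULL, 0);
    return 0;
}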

If you're interested, have a look at my earlier article: What exactly is Binder?

4. An aside

This part is hard to present well: ServiceManager is inevitably entangled with Binder, and discussing them separately would leave readers lost in the fog. So it's worth taking the time to walk through them together. I've written something similar before with Binder as the protagonist; now that the Android FrameWork series is underway, ServiceManager takes the lead, and we'll see how it registers services and hands them out.

3. Main part

1. Starting ServiceManager

ServiceManager is started by the init process, and earlier than Zygote. Let's see how init.rc describes it. File: /frameworks/native/cmds/servicemanager/servicemanager.rc

service servicemanager /system/bin/servicemanager //the executable
    class core animation //class name = core animation
    user system

As a reminder, init.rc services come in three classes: core, main and late_start, started in the order core > main > late_start. If you're interested, take another look at init.cpp.

ServiceManager's source lives at /frameworks/native/cmds/servicemanager/service_manager.c. Let's look at its entry point, main:

int main(int argc, char** argv)
{
    struct binder_state *bs;
    union selinux_callback cb;
    char *driver;
    if (argc > 1) {
        driver = argv[1];
    } else {
        driver = "/dev/binder";
    }
    bs = binder_open(driver, 128*1024);//call binder_open to open the binder driver, asking for 128*1024 bytes, i.e. 128 KB
  //………………
    if (binder_become_context_manager(bs)) {//make ourselves the context manager of the binder device
    }
    //………………
    //call binder_loop to put binder into its loop, waiting for client requests, with svcmgr_handler passed as the callback
    binder_loop(bs, svcmgr_handler);
    return 0;
}

These three functions live in /frameworks/native/cmds/servicemanager/binder.c.

First, binder_open:

struct binder_state *binder_open(const char* driver, size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;
    //allocate binder_state on the heap
    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }
    //driver = '/dev/binder'; open() opens the binder driver. This already lands in the driver layer, which we cover in detail later
    bs->fd = open(driver, O_RDWR | O_CLOEXEC);
    if (bs->fd < 0) {
    }
    //validate the Binder protocol version; this also goes through driver code, covered later
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
    }
    bs->mapsize = mapsize;
    //mapsize = 128 KB: mmap a 128 KB region, readable, private
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
    }
    return bs;
}

So this opens Binder (/dev/binder), validates BINDER_VERSION against it, and maps a 128 KB region with mmap.
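As a rough standalone illustration (not part of servicemanager itself), the same three steps can be written as the sketch below; it assumes the uapi header <linux/android/binder.h> is available and that the calling process is allowed to open /dev/binder on the device.

/* Standalone sketch of what servicemanager's binder_open does:
 * open /dev/binder, verify the protocol version, then mmap 128 KB. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>

int main(void) {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    struct binder_version vers;
    if (ioctl(fd, BINDER_VERSION, &vers) == -1 ||
        vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION) {
        fprintf(stderr, "binder version mismatch\n");
        close(fd);
        return 1;
    }

    /* 128 KB, read-only, private: the same flags service_manager uses */
    void *mapped = mmap(NULL, 128 * 1024, PROT_READ, MAP_PRIVATE, fd, 0);
    if (mapped == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("binder protocol version %d, mapped at %p\n",
           vers.protocol_version, mapped);
    munmap(mapped, 128 * 1024);
    close(fd);
    return 0;
}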

The second function, binder_become_context_manager:


int binder_become_context_manager(struct binder_state *bs)
{
    //create a flat_binder_object; this struct matters a lot, Binder payloads travel inside it
    struct flat_binder_object obj;
    memset(&obj, 0, sizeof(obj));
    obj.flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX;
    //call into the Binder driver with BINDER_SET_CONTEXT_MGR_EXT, passing obj along
    int result = ioctl(bs->fd, BINDER_SET_CONTEXT_MGR_EXT, &obj);
    if (result != 0) {//fall back to the old method on failure
        android_errorWriteLog(0x534e4554, "121035042");
        result = ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
    }
    return result;
}

This creates the flat_binder_object obj. The struct is very important: the data we pass around is carried in it. We'll analyze it in detail later, when larger payloads are involved.

The third function, binder_loop:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;//create a binder_write_read struct; it is also very important, detailed later
    uint32_t readbuf[32];//the read buffer: an array of 32 uint32_t
    bwr.write_size = 0; //set bwr.write_size = 0
    bwr.write_consumed = 0; //set bwr.write_consumed = 0
    bwr.write_buffer = 0;//set bwr.write_buffer = 0; these settings matter, we revisit them in the driver
    readbuf[0] = BC_ENTER_LOOPER;//put BC_ENTER_LOOPER into readbuf[0]
    binder_write(bs, readbuf, sizeof(uint32_t));//hand BC_ENTER_LOOPER to the Binder driver via binder_write
    for (;;) {//loop forever
        bwr.read_size = sizeof(readbuf);//read_size = size of readbuf
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;//point bwr.read_buffer at readbuf
        //issue the BINDER_WRITE_READ ioctl, handing bwr to the Binder driver
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        //parse whatever the driver returned
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
    }
}
//write data to binder; here data = BC_ENTER_LOOPER
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;//the data to write is BC_ENTER_LOOPER
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    //call the binder driver with BINDER_WRITE_READ; the payload is the bwr above
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
//parse the data returned by binder and execute the matching command
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;
    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
        case BR_TRANSACTION_SEC_CTX:
        case BR_TRANSACTION: {//incoming requests arrive with this cmd; the data is handled here
            struct binder_transaction_data_secctx txn;
            if (cmd == BR_TRANSACTION_SEC_CTX) {
                memcpy(&txn, (void*) ptr, sizeof(struct binder_transaction_data_secctx));
                ptr += sizeof(struct binder_transaction_data_secctx);
            } else /* BR_TRANSACTION */ {//our case is handled here
                //copy the data at ptr (readbuf) into txn.transaction_data
                memcpy(&txn.transaction_data, (void*) ptr, sizeof(struct binder_transaction_data));
                ptr += sizeof(struct binder_transaction_data);
                txn.secctx = 0;
            }
            binder_dump_txn(&txn.transaction_data);
            if (func) {//invoke the callback; func = svcmgr_handler
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;
                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, &txn.transaction_data);
                res = func(bs, &txn, &msg, &reply);
                if (txn.transaction_data.flags & TF_ONE_WAY) {
                    binder_free_buffer(bs, txn.transaction_data.data.ptr.buffer);
                } else {
                    binder_send_reply(bs, &reply, txn.transaction_data.data.ptr.buffer, res);
                }
            }
            break;
        }
        //…… other cases omitted ……
        }
    }
    return r;
}

To sum up service_manager.c:

main does three things:

1. Call binder_open to open the Binder driver, with a 128 KB mapping.

2. Call binder_become_context_manager to make itself the context manager of the Binder device.

3. Call binder_loop to put binder into its loop, waiting for client requests and handling them; when a request arrives, the data lands in readbuf and the svcmgr_handler callback is invoked.

Note: Binder's ioctl call blocks.

With that, our ServiceManager is up, waiting for clients to connect. A simplified sketch of this read-and-dispatch loop follows.
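For readers who want the shape of that loop without the binder details, here is a simplified, hedged analogy in plain C: block until a command word arrives, hand it to a handler callback, repeat. binder_loop has exactly this structure, except that the blocking read is ioctl(BINDER_WRITE_READ) and the dispatching is binder_parse with svcmgr_handler as func; the handler signature here is invented for the analogy.

/* Simplified analogy of binder_loop's shape (NOT the real binder protocol). */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

typedef int (*handler_fn)(uint32_t cmd);

static int my_handler(uint32_t cmd) {
    printf("handling cmd %u\n", cmd);
    return 0;
}

static void loop(int fd, handler_fn func) {
    uint32_t cmd;
    for (;;) {
        ssize_t n = read(fd, &cmd, sizeof(cmd));   /* blocks, like the ioctl does */
        if (n != (ssize_t)sizeof(cmd))
            break;
        if (func(cmd) < 0)                         /* dispatch, like binder_parse -> func */
            break;
    }
}

int main(void) {
    loop(STDIN_FILENO, my_handler);   /* feed it 4-byte words on stdin to try it */
    return 0;
}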

2. Starting the Binder driver

The Binder driver's source is the kernel's binder.c. It doesn't have many functions; the main four are binder_init, binder_open, binder_mmap and binder_ioctl. Let's take them one at a time.

0. Binder prerequisites

Before diving into Binder we need to fill in some background on the usual IPC mechanisms. First, process isolation. Without belaboring it: processes share nothing and cannot access each other's memory. Each process's address space is split into user space (0 to 3 GB) and kernel space (3 to 4 GB). A diagram would normally explain traditional IPC here; it is fairly abstract, so the video may help, and a small runnable demo of process isolation follows the link. Once Binder is covered I'll draw a Binder diagram as well.

【Android FrameWork】ServiceManager (part 1) (with video)
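As a toy demonstration of process isolation (ordinary POSIX, nothing Binder-specific): after fork() the child gets its own copy of the address space, so a write in the child is invisible to the parent, which is exactly why any cross-process communication has to go through the kernel.

/* Toy demonstration of process isolation: parent and child have separate
 * address spaces, so the child's write to `value` never reaches the parent. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int value = 1;

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {          /* child */
        value = 42;          /* only changes the child's private copy */
        printf("child:  value = %d\n", value);
        return 0;
    }
    waitpid(pid, NULL, 0);
    printf("parent: value = %d (unchanged)\n", value);  /* still 1 */
    return 0;
}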

1.binder_init

static int __init binder_init(void)
{
   int ret;
   char *device_name, *device_names, *device_tmp;
   struct binder_device *device;
   struct hlist_node *tmp;
   ret = binder_alloc_shrinker_init();
   if (ret)
      return ret;
   atomic_set(&binder_transaction_log.cur, ~0U);
   atomic_set(&binder_transaction_log_failed.cur, ~0U);
   //创立在一个线程中运行的workqueue,命名为binder
   binder_deferred_workqueue = create_singlethread_workqueue("binder");
   if (!binder_deferred_workqueue)
      return -ENOMEM;
   binder_debugfs_dir_entry_root = debugfs_create_dir("binder", NULL);
    //device_name =  binder也有hwbinder和vndbinder 这个便是结构/应用之间   结构和供货商  供货商和供货商之间的binder
   device_names = kzalloc(strlen(binder_devices_param) + 1, GFP_KERNEL);
   if (!device_names) {
      ret = -ENOMEM;
      goto err_alloc_device_names_failed;
   }
   strcpy(device_names, binder_devices_param);
   device_tmp = device_names;
   while ((device_name = strsep(&device_tmp, ","))) {
       //调用init_binder_device
      ret = init_binder_device(device_name);
      if (ret)
         goto err_init_binder_device_failed;
   }
   return ret;
}
static int __init init_binder_device(const char *name)
{
   int ret;
   struct binder_device *binder_device;//创立binder_device结构体
   binder_device = kzalloc(sizeof(*binder_device), GFP_KERNEL);
   if (!binder_device)
      return -ENOMEM;
   //制定fops = binder_fops
   binder_device->miscdev.fops = &binder_fops;
   binder_device->miscdev.minor = MISC_DYNAMIC_MINOR;
   binder_device->miscdev.name = name;//name便是上边传进来的binder
   //这儿设置uid位invalid 后续在ServiceManager设置上下文的时分才装备,姓名位/dev/binder
   binder_device->context.binder_context_mgr_uid = INVALID_UID;
   binder_device->context.name = name;
   mutex_init(&binder_device->context.context_mgr_node_lock);
    //注册misc设备
   ret = misc_register(&binder_device->miscdev);
   if (ret < 0) {
      kfree(binder_device);
      return ret;
   }
    //把binder设备增加到hlist中
   hlist_add_head(&binder_device->hlist, &binder_devices);
   return ret;
}
//binder的一些操作
static const struct file_operations binder_fops = {
   .owner = THIS_MODULE,
   .poll = binder_poll,
   .unlocked_ioctl = binder_ioctl,
   .compat_ioctl = binder_ioctl,
   .mmap = binder_mmap,
   .open = binder_open,
   .flush = binder_flush,
   .release = binder_release,
};

So binder_init registers the binder driver, whose file_operations provide poll, unlocked_ioctl, compat_ioctl, mmap, open, flush and release.

2.binder_open

Before reading binder_open, let's look at the binder_proc struct, binder's description of the current process. threads holds this process's binder threads; nodes holds the process's own binder nodes; refs_by_desc holds references to other processes' binder objects keyed by handle, and refs_by_node holds the same references keyed by node address; all of these are red-black trees. vm_area_struct manages the user-space memory mapping, vma_vm_mm is the virtual memory info, task_struct is the process info; buffer is the start address of the corresponding kernel-space area and user_buffer_offset is the offset between user space and kernel space. todo is the queue of work this process needs to do and wait is the queue it sleeps on. max_threads is the maximum number of binder threads for the process. On to the code:

struct binder_proc {
 struct hlist_node proc_node;
 struct rb_root threads; //this process's binder threads
 struct rb_root nodes; //this process's own binder nodes; what they hold is essentially the flat_binder_object data
 struct rb_root refs_by_desc;//references to other processes' binder objects, keyed by handle
 struct rb_root refs_by_node;//references to other processes' binder objects, keyed by node address
 int pid;
 struct vm_area_struct *vma; //user-space memory mapping
 struct mm_struct *vma_vm_mm;//virtual memory info
 struct task_struct *tsk;//owning task
 struct files_struct *files;
 struct hlist_node deferred_work_node;
 int deferred_work;
 void *buffer;//start address of the kernel-space buffer
 ptrdiff_t user_buffer_offset;//offset between user space and kernel space
 struct list_head buffers;
 struct rb_root free_buffers;
 struct rb_root allocated_buffers;
 size_t free_async_space;
 struct page **pages;
 size_t buffer_size;
 uint32_t buffer_free;
 struct list_head todo;//todo queue: work destined for this process
 wait_queue_head_t  wait;//wait queue this process sleeps on
 struct binder_stats stats;
 struct list_head delivered_death;
 int max_threads;//maximum number of binder threads
 int requested_threads;
 int requested_threads_started;
 int ready_threads;
 long default_priority;
 struct dentry *debugfs_entry;
};

Now the binder_open function itself:

文件目录:include/linux/sched.h
#define get_task_struct(tsk) do { atomic_inc(&(tsk)->usage); } while(0)
static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;//创立binder_proc结构体
    struct binder_device *binder_dev;
    ………………
    //在内核空间请求binder_proc的内存
    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    //初始化内核同步自旋锁
    spin_lock_init(&proc->inner_lock);
    spin_lock_init(&proc->outer_lock);
    //原子操作赋值
    atomic_set(&proc->tmp_ref, 0);
    //使履行当时体系调用进程的task_struct.usage加1
    get_task_struct(current->group_leader);
    //获取到当时进程的 binder_proc->tsk = task_struct 在Linux中线程和进程都用task_struct来描绘,都是运用PCB进程操控快来办理,差异便是线程能够运用一些公共资源。
    proc->tsk = current->group_leader;
    //初始化文件锁
    mutex_init(&proc->files_lock);
    //初始化todo列表
    INIT_LIST_HEAD(&proc->todo);
    //设置优先级
    if (binder_supported_policy(current->policy)) {
        proc->default_priority.sched_policy = current->policy;
        proc->default_priority.prio = current->normal_prio;
    } else {
        proc->default_priority.sched_policy = SCHED_NORMAL;
        proc->default_priority.prio = NICE_TO_PRIO(0);
    }
    //找到binder_device结构体的首地址
    binder_dev = container_of(filp->private_data, struct binder_device,
                  miscdev);
    //使binder_proc的上下文指向binder_device的上下文
    proc->context = &binder_dev->context;
    //初始化binder缓冲区
    binder_alloc_init(&proc->alloc);
    binder_stats_created(BINDER_STAT_PROC);
    //设置当时进程id
    proc->pid = current->group_leader->pid;
    //初始化已分发的逝世告诉列表
    INIT_LIST_HEAD(&proc->delivered_death);
    //初始化等候线程列表
    INIT_LIST_HEAD(&proc->waiting_threads);
    //保存binder_proc数据
    filp->private_data = proc;
    //由于binder支撑多线程,所以需求加锁
    mutex_lock(&binder_procs_lock);
    //将proc->binder_proc增加到binder_procs链表中
    hlist_add_head(&proc->proc_node, &binder_procs);
    //释放锁
    mutex_unlock(&binder_procs_lock);
    //在binder/proc目录下创立文件,以履行当时体系调用的进程id为名
    if (binder_debugfs_dir_entry_proc) {
        char strbuf[11];
        snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
        proc->debugfs_entry = debugfs_create_file(strbuf, 0444,
            binder_debugfs_dir_entry_proc,
            (void *)(unsigned long)proc->pid,
            &binder_proc_fops);
    }
    return 0;
}

So binder_open simply creates this struct, records the current process, initializes the todo and wait queues, and adds proc_node to binder's global binder_procs list. You might wonder why it is proc->proc_node that gets added: because the enclosing binder_proc can be recovered from it by a simple address computation, as the sketch below shows.
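Here is a hedged toy sketch of that "recover the container by computation" idea, using a hand-rolled container_of macro in plain C; the struct names are made up for illustration and are not the kernel's.

/* Toy sketch of getting from an embedded list node back to the struct that
 * contains it (the container_of idea behind proc->proc_node). */
#include <stddef.h>
#include <stdio.h>

struct node { struct node *next; };

struct my_proc {
    int pid;
    struct node proc_node;   /* embedded node, like binder_proc.proc_node */
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

int main(void) {
    struct my_proc proc = { .pid = 1234 };
    struct node *n = &proc.proc_node;          /* what a linked list actually stores */

    struct my_proc *back = container_of(n, struct my_proc, proc_node);
    printf("recovered pid = %d\n", back->pid); /* prints 1234 */
    return 0;
}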

3.binder_mmap

Before reading binder_mmap, keep in mind that the application's user-space mapping and the corresponding kernel-space buffer refer to the same memory, so the address in one space can be derived from the address in the other through a fixed offset.

//ServiceManager传递过来的file便是binder,*vma便是128k的内存地址
static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
   int ret;
   //在binder_open的时分现已指定了private_data便是binder_proc 所以拿到了当时进程的binder_proc
   struct binder_proc *proc = filp->private_data;
   const char *failure_string;
   //进程校验
   if (proc->tsk != current->group_leader)
      return -EINVAL;
   //假如你请求的空间大于>4M 也就给你4M
   if ((vma->vm_end - vma->vm_start) > SZ_4M)
      vma->vm_end = vma->vm_start + SZ_4M;
   if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
      ret = -EPERM;
      failure_string = "bad vm_flags";
      goto err_bad_arg;
   }
   //表明vma不能够被fork复制
   vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
   vma->vm_flags &= ~VM_MAYWRITE;
   vma->vm_ops = &binder_vm_ops;
   //vma的vm_private_data指向binder_proc
   vma->vm_private_data = proc;
    //调用binder_alloc_mmap_handler 树立用户空间和内核空间的地址映射联系
   ret = binder_alloc_mmap_handler(&proc->alloc, vma);
   if (ret)
      return ret;
   mutex_lock(&proc->files_lock);
   //获取进程的打开文件信息结构体file_struct,并把引证+1
   proc->files = get_files_struct(current);
   mutex_unlock(&proc->files_lock);
   return 0;
   return ret;
}
文件目录:/kernel_msm-android-msm-wahoo-4.4-android11/drivers/android/binder_alloc.c
int binder_alloc_mmap_handler(struct binder_alloc *alloc,
               struct vm_area_struct *vma)
{
   int ret;
   struct vm_struct *area;
   const char *failure_string;
   struct binder_buffer *buffer;
   mutex_lock(&binder_alloc_mmap_lock);
   //假如现已分配过的逻辑
   if (alloc->buffer) {
      ret = -EBUSY;
      failure_string = "already mapped";
      goto err_already_mapped;
   }
    //请求128k的内核空间
   area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC);
  //把请求到的空间给proc->buffer 也便是proc->buffer是内核空间的首地址 
   alloc->buffer = area->addr;
   //经过vma的首地址和内核空间proc->buffer(area->addr)相减得到用户空间和内核空间的偏移量,就能够在内核空间拿到内核空间的地址,也能够再内核空间拿到用户空间的地址
   alloc->user_buffer_offset =
      vma->vm_start - (uintptr_t)alloc->buffer;
   mutex_unlock(&binder_alloc_mmap_lock);
#ifdef CONFIG_CPU_CACHE_VIPT
   if (cache_is_vipt_aliasing()) {
      while (CACHE_COLOUR(
            (vma->vm_start ^ (uint32_t)alloc->buffer))) {
         pr_info("binder_mmap: %d %lx-%lx maps %pK bad alignment\n",
            alloc->pid, vma->vm_start, vma->vm_end,
            alloc->buffer);
         vma->vm_start += PAGE_SIZE;
      }
   }
#endif
    //请求内存为一页巨细
   alloc->pages = kzalloc(sizeof(alloc->pages[0]) *
               ((vma->vm_end - vma->vm_start) / PAGE_SIZE),
                GFP_KERNEL);
   if (alloc->pages == NULL) {
      ret = -ENOMEM;
      failure_string = "alloc page array";
      goto err_alloc_pages_failed;
   }
   //得到buffer_size
   alloc->buffer_size = vma->vm_end - vma->vm_start;
    //在物理内存上请求buufer的内存空间
   buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
   if (!buffer) {
      ret = -ENOMEM;
      failure_string = "alloc buffer struct";
      goto err_alloc_buf_struct_failed;
   }
    //指向内核空间的地址
   buffer->data = alloc->buffer;
   //把buffer->entry增加到alloc->buffer的红黑树中
   list_add(&buffer->entry, &alloc->buffers);
   buffer->free = 1;
   binder_insert_free_buffer(alloc, buffer);
   alloc->free_async_space = alloc->buffer_size / 2;//假如算一步的话需求/2
   barrier();
   //设置alloc的vma是当时的用户空间
   alloc->vma = vma;
   alloc->vma_vm_mm = vma->vm_mm;
    //引证计数+1
   atomic_inc(&alloc->vma_vm_mm->mm_count);
   return 0;
}

This part is fairly abstract, so a diagram helps; with the picture everything becomes simple and clear. A toy numeric sketch of the offset trick follows the link.

【Android FrameWork】ServiceManager (part 1) (with video)
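The offset trick itself is just pointer arithmetic. The toy sketch below (plain user-space C with made-up addresses, not kernel code) shows how storing user_buffer_offset once lets either view of the same buffer be derived from the other, mirroring alloc->user_buffer_offset above.

/* Toy arithmetic for the user/kernel offset idea; the addresses are invented. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uintptr_t kernel_buffer = 0xffffffc012340000u;  /* pretend alloc->buffer (64-bit) */
    uintptr_t vm_start      = 0x0000007f80000000u;  /* pretend vma->vm_start */

    /* stored once at mmap time, like alloc->user_buffer_offset */
    ptrdiff_t user_buffer_offset = (ptrdiff_t)(vm_start - kernel_buffer);

    uintptr_t kernel_addr = kernel_buffer + 0x100;            /* some buffer inside the area */
    uintptr_t user_addr   = kernel_addr + user_buffer_offset; /* the same bytes, user-space view */

    printf("kernel %#lx <-> user %#lx\n",
           (unsigned long)kernel_addr, (unsigned long)user_addr);
    return 0;
}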

4.binder_ioctl

//在binder_open中 第一个调用ioctl 传递的参数是binder_version 咱们看看怎样处理的 arg = &ver
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
   int ret;
   struct binder_proc *proc = filp->private_data;//拿到当时进程的proc
   struct binder_thread *thread;
   //得到cmd的巨细
   unsigned int size = _IOC_SIZE(cmd);
   void __user *ubuf = (void __user *)arg;
   binder_selftest_alloc(&proc->alloc);
   trace_binder_ioctl(cmd, arg);
    //进入休眠状况,等候被唤醒 这儿不会被休眠 binder_stop_on_user_error>2才会被休眠
   ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
   if (ret)
      goto err_unlocked;
    //唤醒后拿到当时的thread 第一次肯定是没有的 会创立thread
   thread = binder_get_thread(proc);
   if (thread == NULL) {
      ret = -ENOMEM;
      goto err;
   }
   switch (cmd) {//cmd = BINDER_VERSION
   case BINDER_VERSION: {
   //在这儿 拿到了ver
      struct binder_version __user *ver = ubuf;
      //把版本信息放入到ver->protocol_version
      if (put_user(BINDER_CURRENT_PROTOCOL_VERSION,
              &ver->protocol_version)) {
         ret = -EINVAL;
         goto err;
      }
      break;
   }
   return ret;
}
static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
   struct binder_thread *thread;
   struct binder_thread *new_thread;
   binder_inner_proc_lock(proc);
   //从当时proc的thread.rb_nodes中来找,假如找不到回来NULL
   thread = binder_get_thread_ilocked(proc, NULL);
   binder_inner_proc_unlock(proc);
   if (!thread) {//当时为空 创立newThread
      new_thread = kzalloc(sizeof(*thread), GFP_KERNEL);
      if (new_thread == NULL)
         return NULL;
      binder_inner_proc_lock(proc);
      //再进去找 不过这个时分new_thread不为null
      thread = binder_get_thread_ilocked(proc, new_thread);
      binder_inner_proc_unlock(proc);
      if (thread != new_thread)
         kfree(new_thread);
   }
   return thread;
}
//从当时进程的rb_node中来找thread 假如找到回来 假如没找到看new_thread是否为NULL 是Null 所以回来Null
static struct binder_thread *binder_get_thread_ilocked(
      struct binder_proc *proc, struct binder_thread *new_thread)
{
   struct binder_thread *thread = NULL;
   struct rb_node *parent = NULL;
   struct rb_node **p = &proc->threads.rb_node;
   while (*p) {
      parent = *p;
      thread = rb_entry(parent, struct binder_thread, rb_node);
      if (current->pid < thread->pid)
         p = &(*p)->rb_left;
      else if (current->pid > thread->pid)
         p = &(*p)->rb_right;
      else
         return thread;
   }
   if (!new_thread)
      return NULL;
     //new_thread不为Null  会帮咱们设置信息
   thread = new_thread;
   binder_stats_created(BINDER_STAT_THREAD);
   thread->proc = proc;//设置proc为当时proc
   thread->pid = current->pid;//pid
   get_task_struct(current);//引证计数+1
   thread->task = current;
   atomic_set(&thread->tmp_ref, 0);
   //初始化wait行列
   init_waitqueue_head(&thread->wait);
   //初始化todo行列
   INIT_LIST_HEAD(&thread->todo);
   //刺进红黑树
   rb_link_node(&thread->rb_node, parent, p);
   //调整颜色
   rb_insert_color(&thread->rb_node, &proc->threads);
   thread->looper_need_return = true;
   thread->return_error.work.type = BINDER_WORK_RETURN_ERROR;
   thread->return_error.cmd = BR_OK;
   thread->reply_error.work.type = BINDER_WORK_RETURN_ERROR;
   thread->reply_error.cmd = BR_OK;
   INIT_LIST_HEAD(&new_thread->waiting_thread_node);
   return thread;
}
static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
   struct binder_thread *thread = NULL;
   struct rb_node *parent = NULL;
   struct rb_node **p = &proc->threads.rb_node;//第一次进来是NULL
   while (*p) {//第一次进来不会查找到
      parent = *p;
      thread = rb_entry(parent, struct binder_thread, rb_node);
      if (current->pid < thread->pid)
         p = &(*p)->rb_left;
      else if (current->pid > thread->pid)
         p = &(*p)->rb_right;
      else
         break;
   }
   if (*p == NULL) {
   //创立thread
      thread = kzalloc(sizeof(*thread), GFP_KERNEL);
      if (thread == NULL)
         return NULL;
      binder_stats_created(BINDER_STAT_THREAD);
      thread->proc = proc;//thread绑定proc
      thread->pid = current->pid;//指定pid
      init_waitqueue_head(&thread->wait);//初始化thread的wait行列
      INIT_LIST_HEAD(&thread->todo);//初始化thread的todo行列
      rb_link_node(&thread->rb_node, parent, p);//链接到proc->threads
      rb_insert_color(&thread->rb_node, &proc->threads);
      thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;//设置looper的状况
      thread->return_error = BR_OK;
      thread->return_error2 = BR_OK;
   }
   return thread;
}

binder_ioctl dispatches on the cmd passed in. For example, when binder_open in service_manager called ioctl with cmd = BINDER_VERSION, the driver stores the version into ver->protocol_version, and the caller then checks it: (ioctl(bs->fd, BINDER_VERSION, &vers) == -1). You can scroll back to that binder_open; note it is the binder.c inside service_manager, not the driver.

To sum up:

1. binder_init registers the Binder driver, with the poll, unlocked_ioctl, compat_ioctl, mmap, open, flush and release operations.

2. binder_open creates a binder_proc, initializes the process info and the todo and wait queues, and adds proc->proc_node to binder_procs.

3. binder_mmap reserves kernel address space (128 KB) and maps it so that the kernel view and the user view point at the same memory. Note that only one page (4 KB) of physical memory is actually allocated here, far too little by itself; pages are allocated on demand as data is transferred, which we'll look at when we get to data transfer.

4. binder_ioctl performs reads and writes on the binder device. The code above only followed the BINDER_VERSION check triggered when service_manager.c's binder_open called ioctl; we'll come back here later.

3. Back to service_manager

1. Becoming the context manager: binder_become_context_manager

binder_open opened binder and created a binder_proc; ioctl(bs->fd, BINDER_VERSION, &vers) validated the binder version; mmap tied user space, kernel space and physical memory together. Next, binder_become_context_manager(bs) is called to make this process binder's context manager. Let's look at it.

int binder_become_context_manager(struct binder_state *bs)
{
    struct flat_binder_object obj;//create the flat_binder_object struct
    memset(&obj, 0, sizeof(obj));
    obj.flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX;
    //call the driver's binder_ioctl with cmd = BINDER_SET_CONTEXT_MGR_EXT, passing obj in
    int result = ioctl(bs->fd, BINDER_SET_CONTEXT_MGR_EXT, &obj);
    // fallback to original method
    if (result != 0) {
        android_errorWriteLog(0x534e4554, "121035042");
        result = ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
    }
    return result;
}

So it calls into the Binder driver with cmd = BINDER_SET_CONTEXT_MGR_EXT. How does Binder handle that? The full binder_ioctl was shown above, so here is just the handling for this command (see binder_ioctl above for context):

case BINDER_SET_CONTEXT_MGR_EXT: {
   struct flat_binder_object fbo;
    //从用户空间复制数据到fbo  service_manager传递的是一个flat_binder_object obj
   if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
      ret = -EINVAL;
      goto err;
   }
   //调用binder_ioctl_set_ctx_mgr 设置成为办理员
   ret = binder_ioctl_set_ctx_mgr(filp, &fbo);
   if (ret)
      goto err;
   break;
}
//fob 便是service_manager传递的flat_binder_object
static int binder_ioctl_set_ctx_mgr(struct file *filp,
                struct flat_binder_object *fbo)
{
   int ret = 0;
   //获取到当时进程的proc
   struct binder_proc *proc = filp->private_data;
   //获取到当时binder的办理者
   struct binder_context *context = proc->context;
   struct binder_node *new_node;//创立binder_node
   kuid_t curr_euid = current_euid();
   mutex_lock(&context->context_mgr_node_lock);
   if (context->binder_context_mgr_node) {//假如有就直接回来,第一次来默许是空的
      pr_err("BINDER_SET_CONTEXT_MGR already set\n");
      ret = -EBUSY;
      goto out;
   }
   //查看当时进程是否有注册成为mgr的权限 感爱好的咱们能够去看看Selinux 我在这儿就简略说下service_manager的权限装备文件在servicemanager.te中 设置了是domain 具有unconfined_domain的特点 只要有这个特点都有设置ContextMananger的权限。
   ret = security_binder_set_context_mgr(proc->tsk);
   if (ret < 0)//没有权限就履行out
      goto out;
   if (uid_valid(context->binder_context_mgr_uid)) {//校验uid
      if (!uid_eq(context->binder_context_mgr_uid, curr_euid)) {
         pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
                from_kuid(&init_user_ns, curr_euid),
                from_kuid(&init_user_ns,
                context->binder_context_mgr_uid));
         ret = -EPERM;
         goto out;
      }
   } else {//设置context->binder_context_mgr_uid
      context->binder_context_mgr_uid = curr_euid;
   }
   //创立binder_node
   new_node = binder_new_node(proc, fbo);
   if (!new_node) {
      ret = -ENOMEM;
      goto out;
   }
   //锁
   binder_node_lock(new_node);
   //软引证++
   new_node->local_weak_refs++;
   //强指针++
   new_node->local_strong_refs++;
   new_node->has_strong_ref = 1;
   new_node->has_weak_ref = 1;
   //context->binder_context_mgr_node = new_node;设置当时node为binder的上下文办理者
   context->binder_context_mgr_node = new_node;
   binder_node_unlock(new_node);
   binder_put_node(new_node);
out:
   mutex_unlock(&context->context_mgr_node_lock);
   return ret;
}
//创立binder_node binder_node便是维护用户空间的各种service的指针用来找binder方针的。
static struct binder_node *binder_new_node(struct binder_proc *proc,
                  struct flat_binder_object *fp)
{
   struct binder_node *node;
   //请求binder_node的内存
   struct binder_node *new_node = kzalloc(sizeof(*node), GFP_KERNEL);
    //锁
   binder_inner_proc_lock(proc);
   //初始化binder_node
   node = binder_init_node_ilocked(proc, new_node, fp);
   binder_inner_proc_unlock(proc);
   if (node != new_node)//回来的是同一个
      kfree(new_node);
    //回来node
   return node;
}
//初始化binder_node fp是service_manager传递过来的 里边也是0值
static struct binder_node *binder_init_node_ilocked(
                  struct binder_proc *proc,
                  struct binder_node *new_node,
                  struct flat_binder_object *fp)
{
   struct rb_node **p = &proc->nodes.rb_node;
   struct rb_node *parent = NULL;
   struct binder_node *node;
   //fp不为null 所以值是fp->binder 不过fp->binder也没值 所以ptr cookie都是0
   binder_uintptr_t ptr = fp ? fp->binder : 0;
   binder_uintptr_t cookie = fp ? fp->cookie : 0;
   __u32 flags = fp ? fp->flags : 0;
   s8 priority;
   assert_spin_locked(&proc->inner_lock);
    //proc->nodes.rb_nodes 刺进到红黑树中
   while (*p) {
      parent = *p;
      node = rb_entry(parent, struct binder_node, rb_node);
      if (ptr < node->ptr)
         p = &(*p)->rb_left;
      else if (ptr > node->ptr)
         p = &(*p)->rb_right;
      else {
         /*
          * A matching node is already in
          * the rb tree. Abandon the init
          * and return it.
          */
         binder_inc_node_tmpref_ilocked(node);
         return node;
      }
   }
   node = new_node;
   binder_stats_created(BINDER_STAT_NODE);
   node->tmp_refs++;
   rb_link_node(&node->rb_node, parent, p);
   rb_insert_color(&node->rb_node, &proc->nodes);
   node->debug_id = atomic_inc_return(&binder_last_id);
   node->proc = proc;
   node->ptr = ptr;
   node->cookie = cookie;
   node->work.type = BINDER_WORK_NODE;
   priority = flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
   node->sched_policy = (flags & FLAT_BINDER_FLAG_SCHED_POLICY_MASK) >>
      FLAT_BINDER_FLAG_SCHED_POLICY_SHIFT;
   node->min_priority = to_kernel_prio(node->sched_policy, priority);
   node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
   node->inherit_rt = !!(flags & FLAT_BINDER_FLAG_INHERIT_RT);
   node->txn_security_ctx = !!(flags & FLAT_BINDER_FLAG_TXN_SECURITY_CTX);
   spin_lock_init(&node->lock);
   INIT_LIST_HEAD(&node->work.entry);
   INIT_LIST_HEAD(&node->async_todo);
   binder_debug(BINDER_DEBUG_INTERNAL_REFS,
           "%d:%d node %d u%016llx c%016llx created\n",
           proc->pid, current->pid, node->debug_id,
           (u64)node->ptr, (u64)node->cookie);
   return node;
}

In other words, service_manager's proc->context->binder_context_mgr_node is set to the newly created binder_node.

Finally main calls binder_loop. What does service_manager's binder_loop ask Binder to do? The full code appeared above, so only the key lines are repeated here:

readbuf[0] = BC_ENTER_LOOPER; //write BC_ENTER_LOOPER into the readbuf we allocated
binder_write(bs, readbuf, sizeof(uint32_t));//call binder_write to talk to binder

Here is binder_write:

int binder_write(struct binder_state *bs, void *data, size_t len)
{
   struct binder_write_read bwr;
   int res;
   bwr.write_size = len;
   bwr.write_consumed = 0;
   bwr.write_buffer = (uintptr_t) data;//write_buffer = BC_ENTER_LOOPER
   bwr.read_size = 0;
   bwr.read_consumed = 0;
   bwr.read_buffer = 0;
   res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
   if (res < 0) {
       fprintf(stderr,"binder_write: ioctl failed (%s)\n",
               strerror(errno));
   }
   return res;
}

How does the Binder driver handle it?

case BINDER_WRITE_READ://此时的cmd = BINDER_WRITE_READ  args是 bwr write_size = len(也便是4字节)
   ret = binder_ioctl_write_read(filp, cmd, arg, thread);
   if (ret)
      goto err;
   break;
static int binder_ioctl_write_read(struct file *filp,
            unsigned int cmd, unsigned long arg,
            struct binder_thread *thread)
{
   int ret = 0;
   struct binder_proc *proc = filp->private_data;//拿到当时进程的proc
   unsigned int size = _IOC_SIZE(cmd);//cmd是BINDER_WRITE_READ
   void __user *ubuf = (void __user *)arg;
   struct binder_write_read bwr;
   if (size != sizeof(struct binder_write_read)) {
      ret = -EINVAL;
      goto out;
   }
   if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//复制用户空间的ubuf也便是传递的bwr到内核空间的的bwr中
      ret = -EFAULT;
      goto out;
   }
   if (bwr.write_size > 0) {//write是大于0的
      ret = binder_thread_write(proc, thread,
                 bwr.write_buffer,
                 bwr.write_size,
                 &bwr.write_consumed);
      trace_binder_write_done(ret);
      if (ret < 0) {
         bwr.read_consumed = 0;
         if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
            ret = -EFAULT;
         goto out;
      }
   }
   //……………………
   if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
      ret = -EFAULT;
      goto out;
   }
out:
   return ret;
}
//进入写函数 binder_buffer是BC_ENTER_LOOPER size是4字节
static int binder_thread_write(struct binder_proc *proc,
         struct binder_thread *thread,
         binder_uintptr_t binder_buffer, size_t size,
         binder_size_t *consumed)
{
   uint32_t cmd;
   struct binder_context *context = proc->context;
   //传递过来的数据BC_ENTER_LOOPER
   void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
   void __user *ptr = buffer + *consumed;
   //完毕的方位
   void __user *end = buffer + size;
   while (ptr < end && thread->return_error.cmd == BR_OK) {
      int ret;
      //从用户空间复制4个字节到cmd中 也便是cmd = BC_ENTER_LOOPER 
      if (get_user(cmd, (uint32_t __user *)ptr))
         return -EFAULT;
        //指针偏移
      ptr += sizeof(uint32_t);
      trace_binder_command(cmd);
      if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
         atomic_inc(&binder_stats.bc[_IOC_NR(cmd)]);
         atomic_inc(&proc->stats.bc[_IOC_NR(cmd)]);
         atomic_inc(&thread->stats.bc[_IOC_NR(cmd)]);
      }
      switch (cmd) {
      //……………………
      case BC_ENTER_LOOPER:
         binder_debug(BINDER_DEBUG_THREADS,
                 "%d:%d BC_ENTER_LOOPER\n",
                 proc->pid, thread->pid);
         if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
            thread->looper |= BINDER_LOOPER_STATE_INVALID;
            binder_user_error("%d:%d ERROR: BC_ENTER_LOOPER called after BC_REGISTER_LOOPER\n",
               proc->pid, thread->pid);
         }
         thread->looper |= BINDER_LOOPER_STATE_ENTERED;
         break;
         //…………………………
         }
      *consumed = ptr - buffer;
   }
   return 0;
}
After BINDER_LOOPER_STATE_ENTERED has been recorded, we go back to binder_loop and see what gets written next.
The code in binder_loop:
for (;;) {//无限循环
    bwr.read_size = sizeof(readbuf);//这个时分read_size >0  read_buffer=BC_ENTER_LOOPER
    bwr.read_consumed = 0;
    bwr.read_buffer = (uintptr_t) readbuf;
    //调用Binder驱动传入BINDER_WRITE_READ 而且传入bwr
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
        break;
    }
    res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
    if (res == 0) {
        ALOGE("binder_loop: unexpected reply?!\n");
        break;
    }
    if (res < 0) {
        ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
        break;
    }
}
The binder_ioctl_write_read function in the driver's binder.c:
Previously write_size > 0; this time write_size = 0 and read_size > 0:
if (bwr.read_size > 0) {
   ret = binder_thread_read(proc, thread, bwr.read_buffer,
             bwr.read_size,
             &bwr.read_consumed,
             filp->f_flags & O_NONBLOCK);
   trace_binder_read_done(ret);
   binder_inner_proc_lock(proc);
   if (!binder_worklist_empty_ilocked(&proc->todo))
      binder_wakeup_proc_ilocked(proc);
   binder_inner_proc_unlock(proc);
   if (ret < 0) {
      if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
         ret = -EFAULT;
      goto out;
   }
}
//履行read函数
static int binder_thread_read(struct binder_proc *proc,
               struct binder_thread *thread,
               binder_uintptr_t binder_buffer, size_t size,
               binder_size_t *consumed, int non_block)
{
//用户空间传递过来的数据readBuf
   void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
   void __user *ptr = buffer + *consumed;
   void __user *end = buffer + size;
   int ret = 0;
   int wait_for_proc_work;
   if (*consumed == 0) {//这儿传递的是0 给用户空间put了一个BR_NOOP ptr便是上边传来的bwr
      if (put_user(BR_NOOP, (uint32_t __user *)ptr))
         return -EFAULT;
      ptr += sizeof(uint32_t);
   }
retry:
   binder_inner_proc_lock(proc);
   //查看是否有作业需求处理 查看todo行列 和 transaction_stack!=null thread->todo是空 wait_for_proc_work = true
   wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);
   binder_inner_proc_unlock(proc);
//设置looper的状况为等候
   thread->looper |= BINDER_LOOPER_STATE_WAITING;
   trace_binder_wait_for_work(wait_for_proc_work,
               !!thread->transaction_stack,
               !binder_worklist_empty(proc, &thread->todo));
   if (wait_for_proc_work) {//这儿是true
      if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
               BINDER_LOOPER_STATE_ENTERED))) {//进入了enter_loop 也便是之前处理的BC_ENTER_LOOPER
               //调用等候函数进行等候,等候客户端的唤醒
         wait_event_interruptible(binder_user_error_wait,
                   binder_stop_on_user_error < 2);
      }
      //恢复优先级
      binder_restore_priority(current, proc->default_priority);
   }
   if (non_block) {//非堵塞模式  默许是堵塞的
      if (!binder_has_work(thread, wait_for_proc_work))
         ret = -EAGAIN;
   } else {
   //看看binder_proc中是否有作业
      ret = binder_wait_for_work(thread, wait_for_proc_work);
   }
//免除等候
   thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
   if (ret)
      return ret;
   while (1) {
      uint32_t cmd;
      struct binder_transaction_data_secctx tr;
      struct binder_transaction_data *trd = &tr.transaction_data;
      struct binder_work *w = NULL;
      struct list_head *list = NULL;
      struct binder_transaction *t = NULL;
      struct binder_thread *t_from;
      size_t trsize = sizeof(*trd);
      binder_inner_proc_lock(proc);
      if (!binder_worklist_empty_ilocked(&thread->todo))
         list = &thread->todo;
      else if (!binder_worklist_empty_ilocked(&proc->todo) &&
            wait_for_proc_work)
         list = &proc->todo;
      else {
         binder_inner_proc_unlock(proc);
         /* no data added */
         if (ptr - buffer == 4 && !thread->looper_need_return)
            goto retry;
         break;
      }
      if (end - ptr < sizeof(tr) + 4) {
         binder_inner_proc_unlock(proc);
         break;
      }
      w = binder_dequeue_work_head_ilocked(list);
      if (binder_worklist_empty_ilocked(&thread->todo))
         thread->process_todo = false;
      switch (w->type) {
      case BINDER_WORK_TRANSACTION: {
       //……………………
      } break;
      case BINDER_WORK_RETURN_ERROR: {
       //……………………
      } break;
      case BINDER_WORK_TRANSACTION_COMPLETE: {
      //……………………
      } break;
      case BINDER_WORK_NODE: {
      //……………………
      } break;
      case BINDER_WORK_DEAD_BINDER:
      case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
      case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
      //……………………
      break;
   }
done:
   *consumed = ptr - buffer;
   binder_inner_proc_lock(proc);
   if (proc->requested_threads == 0 &&
       list_empty(&thread->proc->waiting_threads) &&
       proc->requested_threads_started < proc->max_threads &&
       (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
        BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
        /*spawn a new thread if we leave this out */) {
      proc->requested_threads++;
      binder_inner_proc_unlock(proc);
      binder_debug(BINDER_DEBUG_THREADS,
              "%d:%d BR_SPAWN_LOOPER\n",
              proc->pid, thread->pid);
      if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
         return -EFAULT;
      binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
   } else
      binder_inner_proc_unlock(proc);
   return 0;
}

To recap: we first wrote BC_ENTER_LOOPER, which set thread->looper to BINDER_LOOPER_STATE_ENTERED; the thread then entered the read path, and because transaction_stack and the todo list are both empty it goes to sleep, waiting for a client to wake it. A toy userspace analogy of this sleep-and-wake pattern follows.
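If the kernel wait-queue machinery feels opaque, the following hedged userspace analogy with pthreads shows the same pattern: sleep while the todo list is empty, and let a "client" thread enqueue work and wake the sleeper. It is only an analogy; the driver uses wait queues and binder_wakeup_proc, not condition variables.

/* Userspace analogy (pthreads, NOT the kernel API) of binder_thread_read's
 * behavior: nothing in todo -> sleep; client adds work -> wake up. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  todo_nonempty = PTHREAD_COND_INITIALIZER;
static int todo_count = 0;

static void *service_manager_thread(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (todo_count == 0)                       /* nothing to do... */
        pthread_cond_wait(&todo_nonempty, &lock); /* ...so sleep */
    todo_count--;
    pthread_mutex_unlock(&lock);
    printf("woken up, handling one request\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, service_manager_thread, NULL);
    sleep(1);                                     /* let it go to sleep first */

    pthread_mutex_lock(&lock);                    /* the "client" enqueues work */
    todo_count++;
    pthread_cond_signal(&todo_nonempty);          /* and wakes the sleeper */
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}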

At this point service_manager has finished starting up and will wait indefinitely for client requests.

Wipe off the sweat: service_manager is now asleep. Let's see how it gets woken up. First we step out of the Binder driver and back up to ServiceManager.

4. Getting hold of ServiceManager and adding a service

1. Where we pick up the thread

Files: /frameworks/base/services/java/com/android/server/SystemServer.java and /frameworks/base/services/core/java/com/android/server/am/ActivityManagerService.java

Back in system_server, let's see how AMS gets added to ServiceManager. In startBootstrapServices, mActivityManagerService = ActivityManagerService.Lifecycle.startService(mSystemServiceManager, atm) creates AMS, and then mActivityManagerService.setSystemProcess() is called. Here is setSystemProcess:

public void setSystemProcess() {
    //……………………… there is a lot of code here; only the key call is kept
        ServiceManager.addService(Context.ACTIVITY_SERVICE, this, /* allowIsolated= */ true,
                DUMP_FLAG_PRIORITY_CRITICAL | DUMP_FLAG_PRIORITY_NORMAL | DUMP_FLAG_PROTO);
               // ……………………
}

So it calls ServiceManager's static addService method with Context.ACTIVITY_SERVICE (the string "activity"), this, and true.

Let's step into ServiceManager: /android/frameworks/base/core/java/android/os/ServiceManager.java

2. addService

public static void addService(String name, IBinder service, boolean allowIsolated,
     int dumpPriority) {
 try {
     getIServiceManager().addService(name, service, allowIsolated, dumpPriority);
 } catch (RemoteException e) {
     Log.e(TAG, "error in addService", e);
 }
}

It calls getIServiceManager().addService(), passing the arguments straight through. So what does getIServiceManager return?

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }
    // Find the service manager
    //asInterface(BpBinder)
    //calls BinderInternal.getContextObject(); let's see what it returns
    sServiceManager = ServiceManagerNative
            .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
    return sServiceManager;
}

File: /frameworks/base/core/java/com/android/internal/os/BinderInternal.java

public static final native IBinder getContextObject();

It's a native method, so off to native code. File: /frameworks/base/core/jni/android_util_Binder.cpp

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
     //熟悉的朋友ProcessState回来了 看看他的getContextObject
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    //这儿的b便是BpBinder(0)
    //包装成Java层的android/os/BinderProxy 并回来
    return javaObjectForIBinder(env, b);
}
//ProcessState.cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}
//handle = 0
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    //拿到handle_entry
    handle_entry* e = lookupHandleLocked(handle);
    if (e != nullptr) {
        IBinder* b = e->binder;//binder==null 新建的嘛
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {//handle == 0 
                Parcel data;//创立Parcel 
                //调用IPCThreadState的transact 传入PING_TRANSACTION咱们稍后再看
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, nullptr, 0);
                if (status == DEAD_OBJECT)
                   return nullptr;
            }
            //创立一个BpBinder(handle = 0);
            b = BpBinder::create(handle);
            e->binder = b;//给handle_entry->binder赋值
            if (b) e->refs = b->getWeakRefs();
            result = b;//回来BpBinder(0)
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
//看当时是否有 假如没有就新建一个
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();//这儿是0
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = nullptr;
        e.refs = nullptr;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return nullptr;
    }
    return &mHandleToObject.editItemAt(handle);
}
//把native层的BpBinder(0)包装成BinderProxy回来
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;
    if (val->checkSubclass(&gBinderOffsets)) { // == false
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }
    BinderProxyNativeData* nativeData = new BinderProxyNativeData();
    nativeData->mOrgue = new DeathRecipientList;
    nativeData->mObject = val;
    //BinderProxy.getInstance mNativeData   nativeData
    jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
            gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
    if (env->ExceptionCheck()) {
        // In the exception case, getInstance still took ownership of nativeData.
        return NULL;
    }
    BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
    if (actualNativeData == nativeData) {//是同一个创立一个新的Proxy
        // Created a new Proxy
        uint32_t numProxies = gNumProxies.fetch_add(1, std::memory_order_relaxed);
        uint32_t numLastWarned = gProxiesWarned.load(std::memory_order_relaxed);
        if (numProxies >= numLastWarned + PROXY_WARN_INTERVAL) {
            // Multiple threads can get here, make sure only one of them gets to
            // update the warn counter.
            if (gProxiesWarned.compare_exchange_strong(numLastWarned,
                        numLastWarned + PROXY_WARN_INTERVAL, std::memory_order_relaxed)) {
                ALOGW("Unexpectedly many live BinderProxies: %d\n", numProxies);
            }
        }
    } else {
        delete nativeData;
    }
    return object;
}

In short, getContextObject() returns a Java-level BinderProxy backed by a native BpBinder; note that handle = 0. Back to ServiceManager's getIServiceManager():

//BinderInternal.getContextObject() returned a BinderProxy here
//what does Binder.allowBlocking do?
sServiceManager = ServiceManagerNative
        .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
        //设置mWarnOnBlocking = false
public static IBinder allowBlocking(IBinder binder) {
    try {
        if (binder instanceof BinderProxy) {//这儿是true 设置mWarnOnBlocking = false
            ((BinderProxy) binder).mWarnOnBlocking = false;
        } else if (binder != null && binder.getInterfaceDescriptor() != null
                && binder.queryLocalInterface(binder.getInterfaceDescriptor()) == null) {
            Log.w(TAG, "Unable to allow blocking on interface " + binder);
        }
    } catch (RemoteException ignored) {
    }
    return binder;
}
So what does ServiceManagerNative.asInterface do?
File: `/frameworks/base/core/java/android/os/ServiceManagerNative.java`
//note: obj here is the BinderProxy
static public IServiceManager asInterface(IBinder obj)
{
    if (obj == null) {
        return null;
    }
    IServiceManager in =
        (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {//所以这儿是null
        return in;
    }
//调用 ServiceManagerProxy(BinderProxy)
    return new ServiceManagerProxy(obj);
}
文件目录:`/frameworks/base/core/java/android/os/BinderProxy.java`
 public IInterface queryLocalInterface(String descriptor) {
        return null;
    }
  Back in ServiceManagerNative.java, the `ServiceManagerProxy` class is here:
public ServiceManagerProxy(IBinder remote) {//remote is the BinderProxy, which points at BpBinder(0)
    mRemote = remote;
}

Next, the addService method that gets called:

public void addService(String name, IBinder service, boolean allowIsolated, int dumpPriority)
        throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    //descriptor= "android.os.IServiceManager"
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);//name ="activity"
    data.writeStrongBinder(service);//service便是ActivityManagerService的this指针 service写入data
    data.writeInt(allowIsolated ? 1 : 0);//allowIsolated是ture所以这儿是1
    data.writeInt(dumpPriority);
    //这儿调用的是BinderProxy的transact(ADD_SERVICE_TRANSACTION,data便是parce,reply不为null,0)
    mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
    reply.recycle();
    data.recycle();
}
咱们先看看writeStrongBinder
文件目录:`/frameworks/base/core/jni/android_os_Parcel.cpp`
static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
        if (err != NO_ERROR) {
            signalExceptionForError(env, clazz, err);
        }
    }
}
文件目录:`/frameworks/native/libs/binder/Parcel.cpp`
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;//老朋友了,传递的service数据都写在这儿的
    if (IPCThreadState::self()->backgroundSchedulingDisabled()) {
        /* minimum priority for all nodes is nice 0 */
        obj.flags = FLAT_BINDER_FLAG_ACCEPTS_FDS;
    } else {
        /* minimum priority for all nodes is MAX_NICE(19) */
        obj.flags = 0x13 | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    }
    if (binder != nullptr) {//咱们传递的是ActivityManagerService 看看AMS的localBinder回来的 是什么?我这儿没编译Android源码  硬盘不够了,所以就不看` 
IActivityManager.Stub`咱们知道他继承自Binder.java 代码贴在下边
//文件目录:`/frameworks/base/core/java/android/os/Binder.java`
        BBinder *local = binder->localBinder();//回来的是ams的BBInder的this指针 剖析在下边
        if (!local) {//这儿不为null,不走这儿的逻辑
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == nullptr) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.hdr.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            if (local->isRequestingSid()) {
                obj.flags |= FLAT_BINDER_FLAG_TXN_SECURITY_CTX;
            }
            obj.hdr.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
    //走这儿的逻辑 type是Binder_type_binder
        obj.hdr.type = BINDER_TYPE_BINDER;
        obj.binder = 0;//binder== 0
        obj.cookie = 0;//cookie == 0
    }
    //把obj写入out
    return finish_flatten_binder(binder, obj, out);
}
//AMS继承自Binder 他的午饭构造函数会调用有参构造函数而且传递null
public Binder() {
    this(null);
}
public Binder(@Nullable String descriptor)  {
//经过native办法获取生成BBinder
    mObject = getNativeBBinderHolder();
    NoImagePreloadHolder.sRegistry.registerNativeAllocation(this, mObject);
    if (FIND_POTENTIAL_LEAKS) {
        final Class<? extends Binder> klass = getClass();
        if ((klass.isAnonymousClass() || klass.isMemberClass() || klass.isLocalClass()) &&
                (klass.getModifiers() & Modifier.STATIC) == 0) {
            Log.w(TAG, "The following Binder class should be static or leaks might occur: " +
                klass.getCanonicalName());
        }
    }
    mDescriptor = descriptor;
}
代码方位:android_util_Binder.cpp
static jlong android_os_Binder_getNativeBBinderHolder(JNIEnv* env, jobject clazz)
{
//回来JavaBBinderHolder 也便是JavaBBinder
    JavaBBinderHolder* jbh = new JavaBBinderHolder();
    return (jlong) jbh;
}
JavaBBinder中并没有loaclBinder的完结,可是他继承自BBinder
//class JavaBBinder : public BBinder
文件目录:`/frameworks/native/libs/binder/Binder.cpp`
//这儿回来的是this指针
BBinder* BBinder::localBinder()
{
    return this;
}
//out写入数据
inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}

addService writes AMS into a Parcel and then communicates by calling transact.

Now let's see how that communication happens: BinderProxy's transact method is called.


public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
    if (mWarnOnBlocking && ((flags & FLAG_ONEWAY) == 0)) {
        mWarnOnBlocking = false;
    }
    //…………………………
    try {
        return transactNative(code, data, reply, flags);
    } finally {
//……………………
}
android_util_Binder.cpp:
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    //从java层的方针解出来native的Parcel
    Parcel* data = parcelForJavaObject(env, dataObj);
    Parcel* reply = parcelForJavaObject(env, replyObj);
    //obj是BpBinder(0)
    IBinder* target = getBPNativeData(env, obj)->mObject.get();
    bool time_binder_calls;
    int64_t start_millis;
    if (kEnableBinderSample) {
        // Only log the binder call duration for things on the Java-level main thread.
        // But if we don't
        time_binder_calls = should_time_binder_calls();
        if (time_binder_calls) {
            start_millis = uptimeMillis();
        }
    }
    //调用BpBinder的transact通讯
    status_t err = target->transact(code, *data, reply, flags);
    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }
    signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
    return JNI_FALSE;
}
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
    //调用IPCThreadState::transact(0,code,data,reply,flag),上边的PINBINDER咱们没有讲这儿一同看看
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
文件目录:`/frameworks/native/libs/binder/IPCThreadState.cpp`
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {//第一次肯定是false
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }
    if (gShutdown) {
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return nullptr;
    }
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
    //帮咱们创立gtls
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                    strerror(key_create_value));
            return nullptr;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
//写数据进行通讯  handle = 0 code = ADD_SERVICE_TRANSACTION  data=AMS
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err;
    flags |= TF_ACCEPT_FDS;
    //调用writeTransactionData 来写数据
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);
    if ((flags & TF_ONE_WAY) == 0) {//同步的
        if (UNLIKELY(mCallRestriction != ProcessState::CallRestriction::NONE)) {
        if (reply) {
        //调用waitForResponse等候回来成果
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {//异步
        err = waitForResponse(nullptr, nullptr);
    }
    return err;
}

Through BpBinder we eventually reach IPCThreadState's writeTransactionData:

//把数据写入到mOut cmd = BC_TRANSACTION
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;//创立tr
    tr.target.ptr = 0; 
    tr.target.handle = handle;//handle = 0
    tr.code = code;//code = ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();//数据巨细
        tr.data.ptr.buffer = data.ipcData();//数据
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }
    //写入cmd = BC_TRANSACTION
    mOut.writeInt32(cmd);
   //写入数据
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}

Then waitForResponse is called:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    while (1) {//死循环 
        //调用talkWithDriver进行Binder通讯  可算到正题了
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = (uint32_t)mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;
        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(nullptr,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(nullptr,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    return err;
}
//now the interesting part; doReceive defaults to true
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    binder_write_read bwr;
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();//0 所以是true
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;//这儿是true回来的是mOut.dataSize()
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();//获取到方才写入的data code = ADD_SERVICE_TRANSACTION  data=AMS
    if (doReceive && needRead) {//true: we also want to read the driver's replies
        bwr.read_size = mIn.dataCapacity(); //256 bytes by default (set in the IPCThreadState constructor)
        bwr.read_buffer = (uintptr_t)mIn.data();//points at mIn's receive buffer
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(__ANDROID__)
//talk to the Binder driver via ioctl; the command is BINDER_WRITE_READ
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
    } while (err == -EINTR);
    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }
    return err;
}

bwr.write_size is set to the size of the data in mOut and bwr.write_buffer points at the data we just packed (code = ADD_SERVICE_TRANSACTION, data = the flattened AMS), while read_size/read_buffer describe mIn's receive buffer. ioctl is then called with BINDER_WRITE_READ to enter the Binder driver, so let's see how the driver handles it.
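To make the handshake concrete, below is a minimal standalone sketch (not libbinder code) that performs the same BINDER_WRITE_READ ioctl by hand. It assumes the kernel uapi header <linux/android/binder.h> is available and, to keep the demo harmless, writes the one-word BC_ENTER_LOOPER command instead of a full BC_TRANSACTION:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <linux/android/binder.h>   // struct binder_write_read, BINDER_WRITE_READ, BC_*

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    uint32_t cmd = BC_ENTER_LOOPER;               // a payload-free command, just for the demo
    binder_write_read bwr{};                      // zero-initialize all six fields
    bwr.write_size   = sizeof(cmd);               // how many bytes the driver may consume
    bwr.write_buffer = reinterpret_cast<uintptr_t>(&cmd);
    bwr.read_size    = 0;                         // this demo only writes, never blocks to read

    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0)
        perror("BINDER_WRITE_READ");
    else
        printf("driver consumed %llu bytes\n",
               static_cast<unsigned long long>(bwr.write_consumed));

    close(fd);
    return 0;
}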

Note: I won't trace binder_ioctl itself here; just know that when bwr.write_size is greater than 0, binder_thread_write is executed.
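Purely as a toy model (this is not the driver source), the dispatch inside binder_ioctl_write_read boils down to: if the write half is non-empty, run binder_thread_write first; if the read half is non-empty, fall into binder_thread_read afterwards, which is where the caller will later block waiting for work such as the reply. The sizes below are invented:

#include <cstdint>
#include <cstdio>

// Only the two size fields matter for the dispatch decision being illustrated.
struct bwr_sketch {
    uint64_t write_size;
    uint64_t read_size;
};

static void dispatch(const bwr_sketch& bwr) {
    if (bwr.write_size > 0)
        printf("-> binder_thread_write(): consume the BC_* commands in write_buffer\n");
    if (bwr.read_size > 0)
        printf("-> binder_thread_read(): block, then fill read_buffer with BR_* commands\n");
}

int main() {
    bwr_sketch bwr{128, 256};   // hypothetical: 128 bytes from mOut, 256-byte mIn capacity
    dispatch(bwr);
    return 0;
}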

static int binder_thread_write(struct binder_proc *proc,
         struct binder_thread *thread,
         binder_uintptr_t binder_buffer, size_t size,
         binder_size_t *consumed)
{
   uint32_t cmd;
   struct binder_context *context = proc->context;
   //这儿是咱们带进来的数据AMS的 以及code = ADD_SERVICE_TRANSACTION 
   void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
   void __user *ptr = buffer + *consumed;
   void __user *end = buffer + size;
   while (ptr < end && thread->return_error.cmd == BR_OK) {
      int ret;
        //先读取cmd cmd是BC_TRANSACTION
      if (get_user(cmd, (uint32_t __user *)ptr))
         return -EFAULT;
      ptr += sizeof(uint32_t);
      trace_binder_command(cmd);
      if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
         atomic_inc(&binder_stats.bc[_IOC_NR(cmd)]);
         atomic_inc(&proc->stats.bc[_IOC_NR(cmd)]);
         atomic_inc(&thread->stats.bc[_IOC_NR(cmd)]);
      }
      switch (cmd) {
      case BC_TRANSACTION:
      case BC_REPLY: {//此时的指令是BC_TRANSACTION
         struct binder_transaction_data tr;
           //从用户空间把tr复制到当时tr tr.code = ADD_SERVICE_TRANSACTION  tr.target.handle == 0 tr.data.ptr.buffer 等于传递过来的AMS
         if (copy_from_user(&tr, ptr, sizeof(tr)))
            return -EFAULT;
         ptr += sizeof(tr);
         //调用binder_transaction  cmd==BC_REPLY吗? false的
         binder_transaction(proc, thread, &tr,
                  cmd == BC_REPLY, 0);
         break;
      }
      }
      *consumed = ptr - buffer;
   }
   return 0;
}
//进行通讯
static void binder_transaction(struct binder_proc *proc,
                struct binder_thread *thread,
                struct binder_transaction_data *tr, int reply,
                binder_size_t extra_buffers_size)
{
   if (reply) {//这儿是false 所以先把这儿代码删除去 实在太多了
   } else {
      if (tr->target.handle) {//handle == 0 所以这儿不来 也删除去
      } else {
         mutex_lock(&context->context_mgr_node_lock);
         target_node = context->binder_context_mgr_node;//留意咱们是非null的 由于咱们ServiceManager现已设置了context->binder_context_mgr_node
         if (target_node)//所以咱们会进到这儿来
         //经过target_node获取到binder_node也便是service_manager的binder_node
            target_node = binder_get_node_refs_for_txn(
                  target_node, &target_proc,
                  &return_error);
         mutex_unlock(&context->context_mgr_node_lock);
         if (target_node && target_proc == proc) {//这儿不相等
              //………………
         }
      }
        //安全查看 感爱好的能够自己去跟踪去看
      if (security_binder_transaction(proc->tsk,
                  target_proc->tsk) < 0) {
         //………………
      }
      binder_inner_proc_lock(proc);
      if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
         //nested-transaction checks elided
      }
      binder_inner_proc_unlock(proc);
   }
   if (target_thread)
      e->to_thread = target_thread->pid;
   e->to_proc = target_proc->pid;
   //创立binder_transaction节点
   t = kzalloc(sizeof(*t), GFP_KERNEL);
   binder_stats_created(BINDER_STAT_TRANSACTION);
   spin_lock_init(&t->lock);
  //创立binder_work节点
   tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
   binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
   t->debug_id = t_debug_id;
   if (!reply && !(tr->flags & TF_ONE_WAY))//这儿为true 得知道自己从哪里来的
      t->from = thread;
   else
      t->from = NULL;
   t->sender_euid = task_euid(proc->tsk);
   t->to_proc = target_proc;//设置要去的proc 服务端也便是service_manager
   t->to_thread = target_thread;
   t->code = tr->code;//code=ADD_SERVICE_TRANSACTION
   t->flags = tr->flags;
   //设置优先级
   if (!(t->flags & TF_ONE_WAY) &&
       binder_supported_policy(current->policy)) {
      t->priority.sched_policy = current->policy;
      t->priority.prio = current->normal_prio;
   } else {
      t->priority = target_proc->default_priority;
   }
   //安全相关
   if (target_node && target_node->txn_security_ctx) {
      u32 secid;
      size_t added_size;
      security_task_getsecid(proc->tsk, &secid);
      ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
      added_size = ALIGN(secctx_sz, sizeof(u64));
      extra_buffers_size += added_size;
      if (extra_buffers_size < added_size) {
      }
   }
   trace_binder_transaction(reply, t, target_node);
    //分配缓存 树立用户空间,内核空间,物理内存的映射联系,让他们指向同一个地址
   t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
      tr->offsets_size, extra_buffers_size,
      !reply && (t->flags & TF_ONE_WAY));
   t->buffer->debug_id = t->debug_id;
   t->buffer->transaction = t;//设置 传输的数据是t 也便是他自己
   t->buffer->target_node = target_node;//记录方针target_node也便是service_manager
   off_start = (binder_size_t *)(t->buffer->data +
                  ALIGN(tr->data_size, sizeof(void *)));
   offp = off_start;
    //开端从用户空间(tr->data.ptr.buffer)复制数据到t->buffer->data
   if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
            tr->data.ptr.buffer, tr->data_size)) {
            //…………失利会进来
   }
   if (copy_from_user(offp, (const void __user *)(uintptr_t)
            tr->data.ptr.offsets, tr->offsets_size)) {
          //…………失利会进来
   }
   //对齐查看
   if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
    //…………失利会进来
   }
   if (!IS_ALIGNED(extra_buffers_size, sizeof(u64))) {
      //…………失利会进来
   }
   off_end = (void *)off_start + tr->offsets_size;
   sg_bufp = (u8 *)(PTR_ALIGN(off_end, sizeof(void *)));
   sg_buf_end = sg_bufp + extra_buffers_size -
      ALIGN(secctx_sz, sizeof(u64));
   off_min = 0;
   //循环遍历每一个binder方针
   for (; offp < off_end; offp++) {
      struct binder_object_header *hdr;
      size_t object_size = binder_validate_object(t->buffer, *offp);
      if (object_size == 0 || *offp < off_min) {
             //…………失利会进来
      }
      hdr = (struct binder_object_header *)(t->buffer->data + *offp);
      off_min = *offp + object_size;
      switch (hdr->type) {
      case BINDER_TYPE_BINDER://咱们之前传递的type是binder_type_binder
      case BINDER_TYPE_WEAK_BINDER: {
         struct flat_binder_object *fp;
         fp = to_flat_binder_object(hdr);
         //会把咱们当时传入的type修正成binder_type_handle
         ret = binder_translate_binder(fp, t, thread);
      } break;
      }
   }
   //our own binder_work is marked BINDER_WORK_TRANSACTION_COMPLETE; back in user space it just
   //shows up as BR_TRANSACTION_COMPLETE in waitForResponse and is acknowledged, nothing more
   tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
   //the work item destined for the target process is marked BINDER_WORK_TRANSACTION
   t->work.type = BINDER_WORK_TRANSACTION;
   if (reply) {//这儿是false
   } else if (!(t->flags & TF_ONE_WAY)) {//这儿是true
      binder_inner_proc_lock(proc);
      //把tcomplete刺进到自己的binder的todo行列中
      binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
      t->need_reply = 1;//设置reply = 1
      t->from_parent = thread->transaction_stack;
     //刺进transaction_task链表中
      thread->transaction_stack = t;
      binder_inner_proc_unlock(proc);
      //刺进方针进程的todo行列,并唤醒它 当时是service_manager
      if (!binder_proc_transaction(t, target_proc, target_thread)) {
      }
   } else {//异步
   }
   //削减暂时引证计数
   if (target_thread)
      binder_thread_dec_tmpref(target_thread);
   binder_proc_dec_tmpref(target_proc);
   if (target_node)
      binder_dec_node_tmpref(target_node);
   return;
}
//see how a local binder object is translated into a handle (reference) for the target process
static int binder_translate_binder(struct flat_binder_object *fp,
               struct binder_transaction *t,
               struct binder_thread *thread)
{
   struct binder_node *node;
   struct binder_proc *proc = thread->proc;
   struct binder_proc *target_proc = t->to_proc;
   struct binder_ref_data rdata;
   int ret = 0;
    //从proc->nodes.rb_node查找binder_node 找不到的话回来null 第一次是null 自己的进程
   node = binder_get_node(proc, fp->binder);
    if (!node) {
       node = binder_new_node(proc, fp);//创立新的binder_node
       if (!node)
          return -ENOMEM;
    }
  //安全校验
   if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
   }
    //查找binder_ref 而且引证计数+1 假如没有找到就创立一个 这个函数很重要咱们看看
   ret = binder_inc_ref_for_node(target_proc, node,
         fp->hdr.type == BINDER_TYPE_BINDER,
         &thread->todo, &rdata);
   if (ret)
      goto done;
    //此时的binder_type是BINDER_TYPE_BINDER 转换成BINDER_TYPE_HANDLE
   if (fp->hdr.type == BINDER_TYPE_BINDER)
      fp->hdr.type = BINDER_TYPE_HANDLE;
   else
      fp->hdr.type = BINDER_TYPE_WEAK_HANDLE;
   fp->binder = 0;
   fp->handle = rdata.desc;//the handle becomes the descriptor of the ref in servicemanager's process (0 is reserved for the context manager, so this is >= 1)
   fp->cookie = 0;
done:
   binder_put_node(node);
   return ret;
}
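To picture what this does to the flattened AMS object inside the copied buffer, here is a small user-space illustration based on the uapi flat_binder_object layout (the handle value 3 and the pointer are made up; the real descriptor is whatever binder_inc_ref_for_node assigned in servicemanager's ref tree):

#include <cstdio>
#include <linux/android/binder.h>   // flat_binder_object, BINDER_TYPE_BINDER / BINDER_TYPE_HANDLE

int main() {
    flat_binder_object fbo{};
    // What writeStrongBinder() produced on the system_server side:
    fbo.hdr.type = BINDER_TYPE_BINDER;
    fbo.binder   = 0x7000dead0000;           // hypothetical local pointer, meaningless outside the sender

    // What binder_translate_binder() leaves behind for servicemanager:
    fbo.hdr.type = BINDER_TYPE_HANDLE;       // the pointer becomes a handle
    fbo.binder   = 0;
    fbo.handle   = 3;                        // illustrative descriptor in servicemanager's ref tree
    fbo.cookie   = 0;

    printf("type=0x%x handle=%u\n", fbo.hdr.type, fbo.handle);
    return 0;
}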
//创立新的binder_node
static struct binder_node *binder_new_node(struct binder_proc *proc,
                  struct flat_binder_object *fp)
{
   struct binder_node *node;
   struct binder_node *new_node = kzalloc(sizeof(*node), GFP_KERNEL);
   if (!new_node)
      return NULL;
   binder_inner_proc_lock(proc);
   node = binder_init_node_ilocked(proc, new_node, fp);
   binder_inner_proc_unlock(proc);
   if (node != new_node)//假如现已增加了
      /*
       * The node was already added by another thread
       */
      kfree(new_node);
   return node;
}
//创立binder引证
static int binder_inc_ref_for_node(struct binder_proc *proc,
         struct binder_node *node,
         bool strong,
         struct list_head *target_list,
         struct binder_ref_data *rdata)
{
   struct binder_ref *ref;
   struct binder_ref *new_ref = NULL;
   int ret = 0;
   binder_proc_lock(proc);
   ref = binder_get_ref_for_node_olocked(proc, node, NULL);
   if (!ref) {
      binder_proc_unlock(proc);
      new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
      if (!new_ref)
         return -ENOMEM;
      binder_proc_lock(proc);
      ref = binder_get_ref_for_node_olocked(proc, node, new_ref);
   }
   ret = binder_inc_ref_olocked(ref, strong, target_list);
   *rdata = ref->data;
   binder_proc_unlock(proc);
   if (new_ref && ref != new_ref)
      /*
       * Another thread created the ref first so
       * free the one we allocated
       */
      kfree(new_ref);
   return ret;
}
//look up (or create) the binder_ref for this node in the target proc and assign its descriptor
static struct binder_ref *binder_get_ref_for_node_olocked(
               struct binder_proc *proc,
               struct binder_node *node,
               struct binder_ref *new_ref)
{
   struct binder_context *context = proc->context;
   struct rb_node **p = &proc->refs_by_node.rb_node;
   struct rb_node *parent = NULL;
   struct binder_ref *ref;
   struct rb_node *n;
   while (*p) {
      parent = *p;
      ref = rb_entry(parent, struct binder_ref, rb_node_node);
      if (node < ref->node)
         p = &(*p)->rb_left;
      else if (node > ref->node)
         p = &(*p)->rb_right;
      else
         return ref;
   }
   if (!new_ref)
      return NULL;
   binder_stats_created(BINDER_STAT_REF);
   new_ref->data.debug_id = atomic_inc_return(&binder_last_id);
   new_ref->proc = proc;
   new_ref->node = node;
   rb_link_node(&new_ref->rb_node_node, parent, p);
   rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);
//if the node is the context manager (service_manager) the descriptor (handle) starts at 0, otherwise at 1
   new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;
   //walk refs_by_desc and bump desc past every descriptor already in use, so a new ref such as AMS's gets the next free handle
   for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
      ref = rb_entry(n, struct binder_ref, rb_node_desc);
      if (ref->data.desc > new_ref->data.desc)
         break;
      new_ref->data.desc = ref->data.desc + 1;
   }
   p = &proc->refs_by_desc.rb_node;
   while (*p) {
      parent = *p;
      ref = rb_entry(parent, struct binder_ref, rb_node_desc);
      if (new_ref->data.desc < ref->data.desc)
         p = &(*p)->rb_left;
      else if (new_ref->data.desc > ref->data.desc)
         p = &(*p)->rb_right;
      else {
         dump_ref_desc_tree(new_ref, n);
         BUG();
      }
   }
   rb_link_node(&new_ref->rb_node_desc, parent, p);
   rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);
   binder_node_lock(node);
   hlist_add_head(&new_ref->node_entry, &node->refs);
   binder_debug(BINDER_DEBUG_INTERNAL_REFS,
           "%d new ref %d desc %d for node %d\n",
            proc->pid, new_ref->data.debug_id, new_ref->data.desc,
            node->debug_id);
   binder_node_unlock(node);
   return new_ref;
}
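The descriptor policy is easy to lose in the rb-tree plumbing, so here is a tiny user-space model of it (not kernel code): handle 0 is reserved for the context manager, and every other node gets the lowest free descriptor in this process's reference tree. The set of existing refs is hypothetical:

#include <cstdint>
#include <cstdio>
#include <set>

// Mirrors the walk over refs_by_desc: start at 0 (context manager) or 1, and
// bump past every descriptor already taken, visiting them in ascending order.
static uint32_t next_desc(const std::set<uint32_t>& used, bool is_context_manager) {
    uint32_t desc = is_context_manager ? 0 : 1;
    for (uint32_t d : used) {
        if (d > desc) break;
        desc = d + 1;
    }
    return desc;
}

int main() {
    std::set<uint32_t> refs = {0, 1, 2};                 // hypothetical refs already in this proc
    printf("new handle: %u\n", next_desc(refs, false));  // prints 3
    return 0;
}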
//on success, *procp is pointed at the target node's binder_proc
static struct binder_node *binder_get_node_refs_for_txn(
      struct binder_node *node,
      struct binder_proc **procp,
      uint32_t *error)
{
   struct binder_node *target_node = NULL;
   binder_node_inner_lock(node);
   if (node->proc) {
      target_node = node;
      binder_inc_node_nilocked(node, 1, 0, NULL);//binder_node的强指针+1
      binder_inc_node_tmpref_ilocked(node);//暂时引证+1
      atomic_inc(&node->proc->tmp_ref);
      *procp = node->proc;//外面传入的proc指向procp地址
   } else
      *error = BR_DEAD_REPLY;
   binder_node_inner_unlock(node);
   return target_node;
}
//allocate a buffer and set up the mapping; extra_buffers_size = 0, is_async = false (ours is a synchronous call), data_size = the size of the AMS parcel being written
struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
                  size_t data_size,
                  size_t offsets_size,
                  size_t extra_buffers_size,
                  int is_async)
{
   struct binder_buffer *buffer;
   mutex_lock(&alloc->mutex);
   buffer = binder_alloc_new_buf_locked(alloc, data_size, offsets_size,
                    extra_buffers_size, is_async);
   mutex_unlock(&alloc->mutex);
   return buffer;
}
//pick a free buffer and map pages for it; extra_buffers_size = 0, is_async = false
static struct binder_buffer *binder_alloc_new_buf_locked(
            struct binder_alloc *alloc,
            size_t data_size,
            size_t offsets_size,
            size_t extra_buffers_size,
            int is_async)
{
   struct rb_node *n = alloc->free_buffers.rb_node;
   struct binder_buffer *buffer;
   size_t buffer_size;
   struct rb_node *best_fit = NULL;
   void *has_page_addr;
   void *end_page_addr;
   size_t size, data_offsets_size;
   int ret;
   if (alloc->vma == NULL) {//vma is non-NULL here (mmap already ran), so this error path is skipped
      return ERR_PTR(-ESRCH);
   }
  //核算需求的缓冲区巨细 需求做对齐
   data_offsets_size = ALIGN(data_size, sizeof(void *)) +
      ALIGN(offsets_size, sizeof(void *));
   size = data_offsets_size + ALIGN(extra_buffers_size, sizeof(void *));
   size = max(size, sizeof(void *));
    //从binder_alloc 一切闲暇的空间中找到一个巨细适宜的binder_buffer
   while (n) {
        //buffer等于找到的空间地址
      buffer = rb_entry(n, struct binder_buffer, rb_node);
      BUG_ON(!buffer->free);
      buffer_size = binder_alloc_buffer_size(alloc, buffer);
      //查找策略 感爱好能够自己看
      if (size < buffer_size) {
         best_fit = n;
         n = n->rb_left;
      } else if (size > buffer_size)
         n = n->rb_right;
      else {
         best_fit = n;
         break;
      }
   }
   if (best_fit == NULL) {
      //no free buffer is large enough: error handling elided
   }
   //buffer may still point at the parent of the best fit, so re-point it at the node we actually chose
   if (n == NULL) {
      buffer = rb_entry(best_fit, struct binder_buffer, rb_node);
      buffer_size = binder_alloc_buffer_size(alloc, buffer);
   }
//核算出buffer的完毕方位(向下对齐)
   has_page_addr =
      (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK);
   WARN_ON(n && buffer_size != size);
   //核算出实际需求接收数据的空间的完毕方位
   end_page_addr =
      (void *)PAGE_ALIGN((uintptr_t)buffer->data + size);
      //假如超出了可用的,回复到正常可用的完毕方位
   if (end_page_addr > has_page_addr)
      end_page_addr = has_page_addr;
      //调用binder_update_page_range请求内存 树立映射
   ret = binder_update_page_range(alloc, 1,
       (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr);
   if (ret)
      return ERR_PTR(ret);
    //假如有剩余空间,切割buffer,把剩余的参加alloc的闲暇中去
   if (buffer_size != size) {
      struct binder_buffer *new_buffer;
      new_buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
      if (!new_buffer) {
         pr_err("%s: %d failed to alloc new buffer struct\n",
                __func__, alloc->pid);
         goto err_alloc_buf_struct_failed;
      }
      new_buffer->data = (u8 *)buffer->data + size;
      list_add(&new_buffer->entry, &buffer->entry);
      new_buffer->free = 1;
      binder_insert_free_buffer(alloc, new_buffer);
   }
   //咱们现已运用了 所以需求把他从闲暇列表中移除
   rb_erase(best_fit, &alloc->free_buffers);
   buffer->free = 0;//标记为非闲暇
   buffer->allow_user_free = 0;
   //刺进到现已分配的alloc空间中
   binder_insert_allocated_buffer_locked(alloc, buffer);
   binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
           "%d: binder_alloc_buf size %zd got %pK\n",
            alloc->pid, size, buffer);
   buffer->data_size = data_size;
   buffer->offsets_size = offsets_size;
   buffer->async_transaction = is_async;
   buffer->extra_buffers_size = extra_buffers_size;
   if (is_async) {
   }
   return buffer;
}
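As a quick sanity check on the size computation at the top of this function, here is a standalone sketch that reproduces the rounding on a 64-bit build (the input sizes are invented for illustration):

#include <cstddef>
#include <cstdio>

// Same rounding as ALIGN(x, sizeof(void *)) in binder_alloc_new_buf_locked().
static size_t align_ptr(size_t x) { return (x + sizeof(void*) - 1) & ~(sizeof(void*) - 1); }

int main() {
    size_t data_size = 212, offsets_size = 8, extra_buffers_size = 0;  // hypothetical sizes
    size_t size = align_ptr(data_size) + align_ptr(offsets_size) + align_ptr(extra_buffers_size);
    if (size < sizeof(void*)) size = sizeof(void*);
    printf("bytes requested from free_buffers: %zu\n", size);          // 216 + 8 + 0 = 224
    return 0;
}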
//let's see how it actually allocates the pages and sets up the mappings
static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
                void *start, void *end)
{
   void *page_addr;
   unsigned long user_page_addr;
   struct binder_lru_page *page;
   struct vm_area_struct *vma = NULL;
   struct mm_struct *mm = NULL;//mm便是虚拟内存办理的结构体 整个应用的进程描绘。 比方咱们的so .text  .data   pgd 等都在这儿进行办理存储 
   bool need_mm = false;
    //地址校验
   if (end <= start)
      return 0;
   if (allocate == 0)//allocate是1
      goto free_range;
 //查看是否有页框需求分配
   for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
      page = &alloc->pages[(page_addr - alloc->buffer) / PAGE_SIZE];
      if (!page->page_ptr) {
         need_mm = true;
         break;
      }
   }
   if (need_mm && atomic_inc_not_zero(&alloc->vma_vm_mm->mm_users))
      mm = alloc->vma_vm_mm;//获取到用户空间的mm
   if (mm) {
      down_read(&mm->mmap_sem);//获取mm_struct的读信号量
      if (!mmget_still_valid(mm)) {//有效性查看
         if (allocate == 0)
            goto free_range;
         goto err_no_vma;
      }
      vma = alloc->vma;
   }
   if (!vma && need_mm) {//无法映射用户空间内存
    //…………
   }
//start = buffer->data (the start of the chosen free buffer); allocate and map one page at a time up to end_page_addr
   for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
      int ret;
      bool on_lru;
      size_t index;
       //算出来页框的地址
      index = (page_addr - alloc->buffer) / PAGE_SIZE;
      page = &alloc->pages[index];
      if (page->page_ptr) {//假如不为null说明之前映射过了
         continue;
      }
        //分配一个物理页内存
      page->page_ptr = alloc_page(GFP_KERNEL |
                   __GFP_HIGHMEM |
                   __GFP_ZERO);
      if (!page->page_ptr) {//没有分配成功
      //……
      }
      page->alloc = alloc;
      INIT_LIST_HEAD(&page->lru);
       //将物理内存空间映射到内核空间
      ret = map_kernel_range_noflush((unsigned long)page_addr,
                      PAGE_SIZE, PAGE_KERNEL,
                      &page->page_ptr);
      flush_cache_vmap((unsigned long)page_addr,
            (unsigned long)page_addr + PAGE_SIZE);
      //依据偏移量核算出用户空间的内存地址
      user_page_addr =
         (uintptr_t)page_addr + alloc->user_buffer_offset;
         //将物理内存空间映射到用户空间地址
      ret = vm_insert_page(vma, user_page_addr, page[0].page_ptr);
      if (index + 1 > alloc->pages_high)
         alloc->pages_high = index + 1;
      trace_binder_alloc_page_end(alloc, index);
      /* vm_insert_page does not seem to increment the refcount */
   }
   if (mm) {//释放读的信号量
      up_read(&mm->mmap_sem);
      mmput(mm);
   }
   return 0;
}

The client enters binder_thread_write(), reads out cmd = BC_TRANSACTION, and uses copy_from_user to copy the transaction data tr (the flattened AMS) from user space into the kernel. It then calls binder_transaction; since cmd is not BC_REPLY, reply is false. The driver resolves the target as service_manager (handle = 0), creates t (a binder_transaction) describing the data to deliver to the target process and tcomplete (a binder_work) for the current thread, and calls binder_alloc_new_buf to allocate a buffer and establish the user-space / kernel-space / physical-memory mapping so that all three refer to the same memory. The transaction buffer is then copied from user space into that allocation. Because the object we wrote earlier had obj.hdr.type = BINDER_TYPE_BINDER, the BINDER_TYPE_BINDER branch runs and binder_translate_binder rewrites the flat_binder_object's type to BINDER_TYPE_HANDLE. The driver then marks its own work item BINDER_WORK_TRANSACTION_COMPLETE and the target's BINDER_WORK_TRANSACTION, queues tcomplete on its own thread's todo list, sets need_reply to 1, and finally inserts t into the todo list of the target process service_manager and wakes it via binder_proc_transaction. Meanwhile the client thread is still sitting in talkWithDriver. Let's see how the server side gets woken up.
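One detail worth spelling out from this summary: the two work items queued above surface as different BR_* codes on the two sides once each side's binder_thread_read runs. A toy mapping (not driver code):

#include <cstdio>

enum WorkType { WORK_TRANSACTION, WORK_TRANSACTION_COMPLETE };  // stand-ins for BINDER_WORK_*

static const char* delivered_as(WorkType w) {
    switch (w) {
        case WORK_TRANSACTION_COMPLETE:
            return "BR_TRANSACTION_COMPLETE, read by the caller (system_server) in waitForResponse";
        case WORK_TRANSACTION:
            return "BR_TRANSACTION, read by the target (servicemanager) and handed to svcmgr_handler";
    }
    return "?";
}

int main() {
    printf("tcomplete -> %s\n", delivered_as(WORK_TRANSACTION_COMPLETE));
    printf("t->work   -> %s\n", delivered_as(WORK_TRANSACTION));
    return 0;
}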

static bool binder_proc_transaction(struct binder_transaction *t,
                struct binder_proc *proc,
                struct binder_thread *thread)
{
   struct binder_node *node = t->buffer->target_node;
   struct binder_priority node_prio;
   bool oneway = !!(t->flags & TF_ONE_WAY);//这儿是false 咱们是同步的
   bool pending_async = false;
   binder_inner_proc_lock(proc);
      //进程假如死了 
   if (proc->is_dead || (thread && thread->is_dead)) {
      binder_inner_proc_unlock(proc);
      binder_node_unlock(node);
      return false;
   }
    //if no target thread was handed in, pick an idle binder thread from the target proc
   if (!thread && !pending_async)
      thread = binder_select_thread_ilocked(proc);
   if (thread) {
   //设置优先级
      binder_transaction_priority(thread->task, t, node_prio,
                   node->inherit_rt);
                   //enqueue t->work on the target thread's todo list
      binder_enqueue_thread_work_ilocked(thread, &t->work);
   } else if (!pending_async) {
      binder_enqueue_work_ilocked(&t->work, &proc->todo);
   } else {
      binder_enqueue_work_ilocked(&t->work, &node->async_todo);
   }
   if (!pending_async)
      binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */);//唤醒方针进程的等候行列
   binder_inner_proc_unlock(proc);
   binder_node_unlock(node);
   return true;
}
static void binder_wakeup_thread_ilocked(struct binder_proc *proc,
                struct binder_thread *thread,
                bool sync)
{
   assert_spin_locked(&proc->inner_lock);
   if (thread) {
      if (sync)//sync is true here (!oneway), so this branch runs and uses the "sync" wakeup
         wake_up_interruptible_sync(&thread->wait);
      else
         wake_up_interruptible(&thread->wait);
      return;
   }
   binder_wakeup_poll_threads_ilocked(proc, sync);
}

At this point the service_manager server side has been successfully woken up.

Video walkthrough:

www.bilibili.com/video/BV1RT…