Personal study notes on the Android Binder driver

Preface:
Read the fucking Source Code.
Over the past two weeks or so (with a few lazy days in between), I dug into Android's Binder driver. Back in the first half of the year, while reading the MediaPlayer source code, I kept running into IPC but never studied it properly. This time, wanting a system-level understanding of Android, I went through it in more depth. My main references were the well-known Android blogger Lao Luo on CSDN and Yang Fengsheng's book "Android Technology Internals (System Volume)". My notes are nowhere near as detailed as theirs; they simply lay out the overall flow.
Notation:
SM: ServiceManager
MS: MediaPlayerService
xxx: some service, e.g. HelloService.
1. Overall architecture of the Binder driver
Viewed at a high level from the C++ layer, the main components are: the client, the server, a Service Manager, and the underlying Binder driver.
The overall block diagram is shown below (taken from Lao Luo's articles):

The diagram makes it clear that what the application layer calls IPC between Client and Server is in fact carried out entirely by the underlying Binder driver. In other words, the Binder driver is what actually performs inter-process communication, one of Android's defining features. The Service Manager runs as a daemon; it handles clients' service requests and manages all registered services.
 
2. Core of the low-level Binder driver
At bottom, the Binder driver is structured like any ordinary Linux driver; its core entry points are binder_init, binder_open, binder_mmap and binder_ioctl.
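From user space these entry points are exercised through the ordinary file operations on /dev/binder. The following is a minimal sketch, not framework code, of the typical open/mmap/ioctl lifecycle; the 128 KB mapping size, the header path and the error handling are simplifying assumptions.
[cpp]
// Minimal user-space sketch of the lifecycle served by binder_open,
// binder_mmap and binder_ioctl. Not framework code; header path and
// mapping size are assumptions (servicemanager happens to map 128 KB).
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdio>
#include <linux/android/binder.h>   /* on older trees this is a local binder.h */

int main() {
    int fd = open("/dev/binder", O_RDWR);            // -> binder_open: allocates a binder_proc
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    // -> binder_mmap: the driver maps its transaction buffer into this process.
    void *area = mmap(nullptr, 128 * 1024, PROT_READ, MAP_PRIVATE, fd, 0);
    if (area == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // -> binder_ioctl: a simple control command; all real traffic uses BINDER_WRITE_READ.
    struct binder_version vers{};
    if (ioctl(fd, BINDER_VERSION, &vers) == 0)
        printf("binder protocol version %d\n", (int)vers.protocol_version);

    munmap(area, 128 * 1024);
    close(fd);
    return 0;
}
Everything analysed in the rest of this section happens inside the BINDER_WRITE_READ ioctl.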
In Android the Binder driver registers itself as a misc device. It is a purely virtual device: it does not drive any hardware, it only manages the copying of memory between processes. For a thorough treatment of the mechanism, see Lao Luo's "Android journey" series; here I concentrate on binder_ioctl:
2.1 Core data structures in the driver:
binder_proc and binder_thread:
Every open of the Binder driver (several processes may open it at once) creates a dedicated binder_proc that tracks the opening process: its PID, the buffer it mapped via mmap, and the maximum number of Binder threads it may run. The binder_proc is also added to the global list binder_procs, so that information can be looked up across processes.
binder_thread: since a process can host many threads, the driver uses binder_thread to track each one: the binder_proc it belongs to, its current looper state, and a transaction_stack (which, as I understand it, records the source and destination of the transaction currently being exchanged).
binder_write_read:
[plain]
struct binder_write_read { 
    signed long write_size; /* bytes to write */ 
    signed long write_consumed; /* bytes consumed by driver */ 
    unsigned long   write_buffer; 
    signed long read_size;  /* bytes to read */ 
    signed long read_consumed;  /* bytes consumed by driver */ 
    unsigned long   read_buffer; 
}; 
In the driver this structure is the envelope through which everything passes (think of it as the junction between kernel and user space). The driver decides what to do based on whether write_size and read_size are non-zero (see the ioctl analysis below); write_buffer and read_buffer are both user-space buffer addresses. The write_buffer holds a cmd followed by a binder_transaction_data; the cmd tells the driver what kind of work is being requested.
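To make that concrete, here is a hedged sketch of how user space might pack one cmd plus its binder_transaction_data into write_buffer and hand it to the driver. The helper name send_transaction and the write-only call (read_size = 0) are my own simplifications; a real caller normally fills the read side as well.
[cpp]
// Sketch: packing one cmd + binder_transaction_data into binder_write_read.
// send_transaction() is a hypothetical helper; a real caller also sets
// read_size/read_buffer so the same ioctl can return BR_* commands.
#include <sys/ioctl.h>
#include <cstdint>
#include <cstring>
#include <linux/android/binder.h>

static int send_transaction(int binder_fd, uint32_t handle,
                            uint32_t code, const void *payload, size_t len) {
    struct binder_transaction_data tr{};
    tr.target.handle = handle;                 // 0 addresses the ServiceManager
    tr.code = code;                            // service-defined function code
    tr.data_size = len;
    tr.offsets_size = 0;                       // no flat_binder_object in this payload
    tr.data.ptr.buffer = (binder_uintptr_t)(uintptr_t)payload;
    tr.data.ptr.offsets = 0;

    // write_buffer layout: [uint32_t cmd][struct binder_transaction_data]
    uint8_t wbuf[sizeof(uint32_t) + sizeof(tr)];
    uint32_t cmd = BC_TRANSACTION;
    memcpy(wbuf, &cmd, sizeof(cmd));
    memcpy(wbuf + sizeof(cmd), &tr, sizeof(tr));

    struct binder_write_read bwr{};
    bwr.write_size = sizeof(wbuf);
    bwr.write_buffer = (binder_uintptr_t)(uintptr_t)wbuf;
    bwr.read_size = 0;                         // write-only call for simplicity

    return ioctl(binder_fd, BINDER_WRITE_READ, &bwr);
}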
binder_transaction_data:
[plain]  
struct binder_transaction_data { 
    /* The first two are only used for bcTRANSACTION and brTRANSACTION, 
     * identifying the target and contents of the transaction. 
     */ 
    union { 
        size_t  handle; /* target descriptor of command transaction */ 
        void    *ptr;   /* target descriptor of return transaction */ 
    } target; 
    void        *cookie;    /* target object cookie */ 
    unsigned int    code;       /* transaction command */ 
 
    /* General information about the transaction. */ 
    unsigned int    flags; 
    pid_t       sender_pid; 
    uid_t       sender_euid; 
    size_t      data_size;  /* number of bytes of data */ 
    size_t      offsets_size;   /* number of bytes of offsets */ 
 
    /* If this transaction is inline, the data immediately 
     * follows here; otherwise, it ends with a pointer to 
     * the data buffer. 
     */ 
    union { 
        struct { 
            /* transaction data */ 
            const void  *buffer; 
            /* offsets from buffer to flat_binder_object structs */ 
            const void  *offsets; 
        } ptr; 
        uint8_t buf[8]; 
    } data; 
}; 
Here buffer points to the payload being transferred, and offsets points to an array of offsets into that payload marking where each flat_binder_object sits (a single transaction may carry several Binder objects).
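A hedged sketch of that layout, assuming the UAPI binder header is available: the payload_example struct, the single embedded object and the helper fill_transaction are invented for illustration only (the kernel version discussed here used plain size_t offsets rather than binder_size_t).
[cpp]
// Sketch: how data.ptr.buffer and data.ptr.offsets relate.
#include <cstddef>
#include <cstdint>
#include <linux/android/binder.h>

struct payload_example {
    uint32_t argument;                 // ordinary data, opaque to the driver
    struct flat_binder_object obj;     // a Binder object the driver must translate
};

static void fill_transaction(struct binder_transaction_data *tr,
                             payload_example *payload, binder_size_t *offsets) {
    // The driver only rewrites the regions named in `offsets`
    // (binder <-> handle translation); the rest of the buffer is copied as-is.
    offsets[0] = offsetof(payload_example, obj);

    tr->data_size = sizeof(*payload);
    tr->offsets_size = 1 * sizeof(binder_size_t);
    tr->data.ptr.buffer = (binder_uintptr_t)(uintptr_t)payload;
    tr->data.ptr.offsets = (binder_uintptr_t)(uintptr_t)offsets;
}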
binder_transaction: this structure mainly records the two sides of a C/S exchange, the requesting process and the serving process, so that they can communicate and the data can be routed back.
binder_work: a work item queued inside the driver for a process or thread to handle.
2.2 The driver's ioctl path:
As with any character device, we focus on the BINDER_WRITE_READ ioctl command.
binder_thread_write and binder_thread_read run depending on whether the caller passed a non-zero write_size and read_size. The walk-through below follows the communication between MediaPlayerService and the ServiceManager; the commands involved are as follows.
MS first sends cmd=BC_TRANSACTION:
which ends up in binder_transaction():
[plain] 
static void binder_transaction(struct binder_proc *proc, 
                   struct binder_thread *thread, 
                   struct binder_transaction_data *tr, int reply) 
{
…else { // the client requests a service 
        if (tr->target.handle) { // when the target is SM, target.handle == 0 
            struct binder_ref *ref; 
            ref = binder_get_ref(proc, tr->target.handle); 
            if (ref == NULL) { 
                binder_user_error("binder: %d:%d got " 
                    "transaction to invalid handle\n", 
                    proc->pid, thread->pid); 
                return_error = BR_FAILED_REPLY; 
                goto err_invalid_target_handle; 
            } 
            target_node = ref->node; 
        } else { 
            target_node = binder_context_mgr_node; // the node of the SM daemon (the context manager) 
            if (target_node == NULL) { 
                return_error = BR_DEAD_REPLY; 
                goto err_no_context_mgr_node; 
            } 
        } 
        e->to_node = target_node->debug_id; 
        target_proc = target_node->proc; // binder_proc of the SM daemon 
        if (target_proc == NULL) { 
            return_error = BR_DEAD_REPLY; 
            goto err_dead_binder; 
        } 
        if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) { 
            struct binder_transaction *tmp; 
            tmp = thread->transaction_stack; 
            if (tmp->to_thread != thread) { 
                binder_user_error("binder: %d:%d got new " 
                    "transaction with bad transaction stack" 
                    ", transaction %d has target %d:%d\n", 
                    proc->pid, thread->pid, tmp->debug_id, 
                    tmp->to_proc ? tmp->to_proc->pid : 0, 
                    tmp->to_thread ? 
                    tmp->to_thread->pid : 0); 
                return_error = BR_FAILED_REPLY; 
                goto err_bad_call_stack; 
            } 
            while (tmp) { 
                if (tmp->from && tmp->from->proc == target_proc) 
                    target_thread = tmp->from; 
                tmp = tmp->from_parent; 
            } 
        } 
    } 
    if (target_thread) { 
        e->to_thread = target_thread->pid; 
        target_list = &target_thread->todo; 
        target_wait = &target_thread->wait; 
    } else { 
        target_list = &target_proc->todo; // the todo list of SM's binder_proc 
        target_wait = &target_proc->wait; // SM's wait queue head 
    } 
    … 
    if (!reply && !(tr->flags & TF_ONE_WAY)) 
        t->from = thread; // record the requesting thread (the "from" side) in the transaction 
    else 
        t->from = NULL; 
    t->sender_euid = proc->tsk->cred->euid; 
    t->to_proc = target_proc; 
    t->to_thread = target_thread; // the target (serving) thread 
    t->code = tr->code; 
    t->flags = tr->flags; 
    t->priority = task_nice(current); 
    t->buffer = binder_alloc_buf(target_proc, tr->data_size, 
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY)); // allocate a binder_buffer from the target (SM) process 
    if (t->buffer == NULL) { 
        return_error = BR_FAILED_REPLY; 
        goto err_binder_alloc_buf_failed; 
    } 
    t->buffer->allow_user_free = 0; 
    t->buffer->debug_id = t->debug_id; 
    t->buffer->transaction = t; 
    t->buffer->target_node = target_node; 
    if (target_node) 
        binder_inc_node(target_node, 1, 0, NULL); // take a reference on the target node 
 
    offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *))); // where the offsets array lives inside the buffer 
 
    if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) { 
        binder_user_error("binder: %d:%d got transaction with invalid " 
            "data ptr\n", proc->pid, thread->pid); 
        return_error = BR_FAILED_REPLY; 
        goto err_copy_data_failed; 
    } 
    if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) { 
        binder_user_error("binder: %d:%d got transaction with invalid " 
            "offsets ptr\n", proc->pid, thread->pid); 
        return_error = BR_FAILED_REPLY; 
        goto err_copy_data_failed; 
    } 
    if (!IS_ALIGNED(tr->offsets_size, sizeof(size_t))) { 
        binder_user_error("binder: %d:%d got transaction with " 
            "invalid offsets size, %zd\n", 
            proc->pid, thread->pid, tr->offsets_size); 
        return_error = BR_FAILED_REPLY; 
        goto err_bad_offset; 
    } 
    off_end = (void *)offp + tr->offsets_size; 
    for (; offp < off_end; offp++) { 
        struct flat_binder_object *fp; 
        if (*offp > t->buffer->data_size - sizeof(*fp) || 
            t->buffer->data_size < sizeof(*fp) || 
            !IS_ALIGNED(*offp, sizeof(void *))) {       // sanity-check the offset against the buffer size 
            binder_user_error("binder: %d:%d got transaction with " 
                "invalid offset, %zd\n", 
                proc->pid, thread->pid, *offp); 
            return_error = BR_FAILED_REPLY; 
            goto err_bad_offset; 
        } 
        fp = (struct flat_binder_object *)(t->buffer->data + *offp); // fetch one Binder object 
        switch (fp->type) { 
        case BINDER_TYPE_BINDER: // the first time this Binder is passed across 
        case BINDER_TYPE_WEAK_BINDER: { 
            struct binder_ref *ref; 
            struct binder_node *node = binder_get_node(proc, fp->binder); 
            if (node == NULL) { 
                node = binder_new_node(proc, fp->binder, fp->cookie); // create a node for the MediaPlayerService binder 
                if (node == NULL) { 
                    return_error = BR_FAILED_REPLY; 
                    goto err_binder_new_node_failed; 
                } 
                node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK; 
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS); 
            } 
            if (fp->cookie != node->cookie) { 
                binder_user_error("binder: %d:%d sending u%p " 
                    "node %d, cookie mismatch %p != %p\n", 
                    proc->pid, thread->pid, 
                    fp->binder, node->debug_id, 
                    fp->cookie, node->cookie); 
                goto err_binder_get_ref_for_node_failed; 
            } 
            ref = binder_get_ref_for_node(target_proc, node); 
            if (ref == NULL) { 
                return_error = BR_FAILED_REPLY; 
                goto err_binder_get_ref_for_node_failed; 
            } 
            if (fp->type == BINDER_TYPE_BINDER) 
                fp->type = BINDER_TYPE_HANDLE; // convert the type from a binder to a handle 
            else 
                fp->type = BINDER_TYPE_WEAK_HANDLE; 
            fp->handle = ref->desc;// 
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, 
                       &thread->todo); // bump the reference count 
 
            binder_debug(BINDER_DEBUG_TRANSACTION, 
                     "        node %d u%p -> ref %d desc %d\n", 
                     node->debug_id, node->ptr, ref->debug_id, 
                     ref->desc); 
        } break; 
        case BINDER_TYPE_HANDLE: 
        case BINDER_TYPE_WEAK_HANDLE: { 
            struct binder_ref *ref = binder_get_ref(proc, fp->handle); 
            if (ref == NULL) { 
                binder_user_error("binder: %d:%d got " 
                    "transaction with invalid " 
                    "handle, %ld\n", proc->pid, 
                    thread->pid, fp->handle); 
                return_error = BR_FAILED_REPLY; 
                goto err_binder_get_ref_failed; 
            } 
            if (ref->node->proc == target_proc) { 
                if (fp->type == BINDER_TYPE_HANDLE) 
                    fp->type = BINDER_TYPE_BINDER; 
                else 
                    fp->type = BINDER_TYPE_WEAK_BINDER; 
                fp->binder = ref->node->ptr; 
                fp->cookie = ref->node->cookie; 
                binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL); 
                binder_debug(BINDER_DEBUG_TRANSACTION, 
                         "        ref %d desc %d -> node %d u%p\n", 
                         ref->debug_id, ref->desc, ref->node->debug_id, 
                         ref->node->ptr); 
            } else { 
                struct binder_ref *new_ref; 
                new_ref = binder_get_ref_for_node(target_proc, ref->node); 
                if (new_ref == NULL) { 
                    return_error = BR_FAILED_REPLY; 
                    goto err_binder_get_ref_for_node_failed; 
                } 
                fp->handle = new_ref->desc; 
                binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL); 
                binder_debug(BINDER_DEBUG_TRANSACTION, 
                         "        ref %d desc %d -> ref %d desc %d (node %d)\n", 
                         ref->debug_id, ref->desc, new_ref->debug_id, 
                         new_ref->desc, ref->node->debug_id); 
            } 
        } break; 
        default: 
            binder_user_error("binder: %d:%d got transactio" 
                "n with invalid object type, %lx\n", 
                proc->pid, thread->pid, fp->type); 
            return_error = BR_FAILED_REPLY; 
            goto err_bad_object_type; 
        } 
    } 
    if (reply) { 
        BUG_ON(t->buffer->async_transaction != 0); 
        binder_pop_transaction(target_thread, in_reply_to); 
    } else if (!(t->flags & TF_ONE_WAY)) { 
        BUG_ON(t->buffer->async_transaction != 0); 
        t->need_reply = 1; 
        t->from_parent = thread->transaction_stack;  
        thread->transaction_stack = t; 
    } else { 
        BUG_ON(target_node == NULL); 
        BUG_ON(t->buffer->async_transaction != 1); 
        if (target_node->has_async_transaction) { 
            target_list = &target_node->async_todo; 
            target_wait = NULL; 
        } else 
            target_node->has_async_transaction = 1; 
    } 
    t->work.type = BINDER_WORK_TRANSACTION; 
    list_add_tail(&t->work.entry, target_list); // queue the binder_work on SM's proc todo list 
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; // mark the completion work item 
    list_add_tail(&tcomplete->entry, &thread->todo); // queue the pending completion on the calling thread's todo list 
    if (target_wait) 
        wake_up_interruptible(target_wait); // wake up the Service Manager 
    return; 
 
…} 
From this function we can see that when talking to SM, target_proc resolves to SM's process information. The Binder objects carried in the request are then tracked and reference-counted so they cannot disappear underneath us. A binder_transaction t serves as the message passed between client and server; it is initialized to record both the requesting process and the serving process. Finally the following is done:
 t->work.type = BINDER_WORK_TRANSACTION;
 list_add_tail(&t->work.entry, target_list); // queue the binder_work on SM's proc todo list
 tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; // mark the completion work item
 list_add_tail(&tcomplete->entry, &thread->todo); // queue the pending completion on the calling thread's todo list
 if (target_wait)
  wake_up_interruptible(target_wait); // wake up the Service Manager
As you can see, t is queued on the serving process (SM), the pending tcomplete is queued on the current MS thread, and SM is finally woken up to do its share of the work.
MS then continues into binder_thread_read:
[plain] 
static int binder_thread_read(struct binder_proc *proc, 
                  struct binder_thread *thread, 
                  void  __user *buffer, int size, 
                  signed long *consumed, int non_block) 
{
    void __user *ptr = buffer + *consumed; 
    void __user *end = buffer + size; 
 
    int ret = 0; 
    int wait_for_proc_work; 
 
    if (*consumed == 0) { 
        if (put_user(BR_NOOP, (uint32_t __user *)ptr)) // push a BR_NOOP first 
            return -EFAULT; 
        ptr += sizeof(uint32_t); 
    } 
 
retry: 
    wait_for_proc_work = thread->transaction_stack == NULL && 
                list_empty(&thread->todo); // false here: the thread's todo list is not empty 
 
    if (thread->return_error != BR_OK && ptr < end) { 
        if (thread->return_error2 != BR_OK) { 
            if (put_user(thread->return_error2, (uint32_t __user *)ptr)) 
                return -EFAULT; 
            ptr += sizeof(uint32_t); 
            if (ptr == end) 
                goto done; 
            thread->return_error2 = BR_OK; 
        } 
        if (put_user(thread->return_error, (uint32_t __user *)ptr)) 
            return -EFAULT; 
        ptr += sizeof(uint32_t); 
        thread->return_error = BR_OK; 
        goto done; 
    } 
 
 
    thread->looper |= BINDER_LOOPER_STATE_WAITING; 
    if (wait_for_proc_work) 
        proc->ready_threads++; 
    mutex_unlock(&binder_lock); 
    if (wait_for_proc_work) { 
        if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED | 
                    BINDER_LOOPER_STATE_ENTERED))) { 
            binder_user_error("binder: %d:%d ERROR: Thread waiting " 
                "for process work before calling BC_REGISTER_" 
                "LOOPER or BC_ENTER_LOOPER (state %x)\n", 
                proc->pid, thread->pid, thread->looper); 
            wait_event_interruptible(binder_user_error_wait, 
                         binder_stop_on_user_error < 2); 
        } 
        binder_set_nice(proc->default_priority); 
        if (non_block) { 
            if (!binder_has_proc_work(proc, thread)) 
                ret = -EAGAIN; 
        } else 
            ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread)); // sleep until binder_has_proc_work() becomes true 
    } else { 
        if (non_block) { 
            if (!binder_has_thread_work(thread)) 
                ret = -EAGAIN; 
        } else 
            ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread)); 
    } 
    mutex_lock(&binder_lock); 
    if (wait_for_proc_work) 
        proc->ready_threads--; 
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING; 
 
    if (ret) 
        return ret; 
 
    while (1) { 
        uint32_t cmd; 
        struct binder_transaction_data tr; 
        struct binder_work *w; 
        struct binder_transaction *t = NULL; 
 
        if (!list_empty(&thread->todo)) 
            w = list_first_entry(&thread->todo, struct binder_work, entry); 
        else if (!list_empty(&proc->todo) && wait_for_proc_work) // when SM is woken, proc->todo is non-empty and wait_for_proc_work is set 
            w = list_first_entry(&proc->todo, struct binder_work, entry); // fetch the binder_work 
        else { 
            if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */ 
                goto retry; 
            break; 
        } 
 
        if (end - ptr < sizeof(tr) + 4) 
            break; 
 
        switch (w->type) { 
        case BINDER_WORK_TRANSACTION: { // taken when SM is woken up 
            t = container_of(w, struct binder_transaction, work); // recover the binder_transaction from its embedded work member 
        } break; 
        case BINDER_WORK_TRANSACTION_COMPLETE: {  
            cmd = BR_TRANSACTION_COMPLETE; 
            if (put_user(cmd, (uint32_t __user *)ptr))  // write BR_TRANSACTION_COMPLETE back to user space 
                return -EFAULT; 
            ptr += sizeof(uint32_t); 
 
            binder_stat_br(proc, thread, cmd); 
            binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE, 
                     "binder: %d:%d BR_TRANSACTION_COMPLETE\n", 
                     proc->pid, thread->pid); 
 
            list_del(&w->entry); // remove the work item from thread->todo 
            kfree(w); 
            binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); 
        } break; 
BR_NOOP and BR_TRANSACTION_COMPLETE are written back to user space, i.e. data has effectively been read from the kernel, and the work item is removed with list_del(&w->entry).
MS then goes back into the driver and blocks at ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));, sleeping until SM wakes it up.
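Seen from user space, this read phase simply means read_buffer comes back filled with a sequence of BR_* commands that the process walks through. Below is a hedged sketch of such a parse loop (loosely modelled on how servicemanager drains its read buffer); only the commands discussed here are handled, and error cases are ignored.
[cpp]
// Sketch: draining the BR_* commands the driver wrote into read_buffer.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <linux/android/binder.h>

static void drain_read_buffer(const uint8_t *buf, size_t consumed) {
    const uint8_t *ptr = buf;
    const uint8_t *end = buf + consumed;       // read_consumed as reported by the driver
    while (ptr + sizeof(uint32_t) <= end) {
        uint32_t cmd;
        memcpy(&cmd, ptr, sizeof(cmd));
        ptr += sizeof(cmd);
        switch (cmd) {
        case BR_NOOP:                          // placeholder, nothing to do
            break;
        case BR_TRANSACTION_COMPLETE:          // our BC_TRANSACTION has been queued to the target
            printf("transaction handed over to the driver\n");
            break;
        case BR_REPLY: {                       // the serving process has answered
            struct binder_transaction_data tr;
            memcpy(&tr, ptr, sizeof(tr));
            ptr += sizeof(tr);
            printf("got reply, %zu bytes of data\n", (size_t)tr.data_size);
            break;
        }
        default:                               // anything else is out of scope for this sketch
            return;
        }
    }
}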
 
What SM does after being woken by MS:
SM had likewise been asleep inside binder_thread_read, at ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));, but by now proc->todo is non-empty, because the MS write path executed list_add_tail(&t->work.entry, target_list); (queuing the binder_work on SM's binder_proc). So it executes:
 w = list_first_entry(&proc->todo, struct binder_work, entry); // fetch the binder_work
 t = container_of(w, struct binder_transaction, work); // recover the binder_transaction from its embedded work member
It thus ends up holding the binder_transaction t that MS and SM use to exchange and relay information.
With the t handed over by MS, its contents are copied back up into SM's user space, and the command delivered to SM is cmd=BR_TRANSACTION.
SM then sends cmd=BC_REPLY, which takes us back into binder_thread_write:
[plain]
{  // reply == 1: the service replies to the client 
        in_reply_to = thread->transaction_stack; // the transaction on top of the stack, i.e. the binder_transaction t that MS originally handed to SM 
        if (in_reply_to == NULL) { 
            binder_user_error("binder: %d:%d got reply transaction " 
                      "with no transaction stack\n", 
                      proc->pid, thread->pid); 
            return_error = BR_FAILED_REPLY; 
            goto err_empty_call_stack; 
        } 
        binder_set_nice(in_reply_to->saved_priority); 
        if (in_reply_to->to_thread != thread) { 
            binder_user_error("binder: %d:%d got reply transaction " 
                "with bad transaction stack," 
                " transaction %d has target %d:%d\n", 
                proc->pid, thread->pid, in_reply_to->debug_id, 
                in_reply_to->to_proc ? 
                in_reply_to->to_proc->pid : 0, 
                in_reply_to->to_thread ? 
                in_reply_to->to_thread->pid : 0); 
            return_error = BR_FAILED_REPLY; 
            in_reply_to = NULL; 
            goto err_bad_call_stack; 
        } 
        thread->transaction_stack = in_reply_to->to_parent; 
        target_thread = in_reply_to->from; // the thread that made the request 
        if (target_thread == NULL) { 
            return_error = BR_DEAD_REPLY; 
            goto err_dead_binder; 
        } 
        if (target_thread->transaction_stack != in_reply_to) { 
            binder_user_error("binder: %d:%d got reply transaction " 
                "with bad target transaction stack %d, " 
                "expected %d\n", 
                proc->pid, thread->pid, 
                target_thread->transaction_stack ? 
                target_thread->transaction_stack->debug_id : 0, 
                in_reply_to->debug_id); 
            return_error = BR_FAILED_REPLY; 
            in_reply_to = NULL; 
            target_thread = NULL; 
            goto err_dead_binder; 
        } 
        target_proc = target_thread->proc; // binder_proc of the requesting process 
    } 
Earlier, while MS was executing the write path, it had recorded its own thread in t:
[plain]
if (!reply && !(tr->flags & TF_ONE_WAY)) 
    t->from = thread; // record the requesting thread (the "from" side) in the transaction 
So when SM runs binder_thread_write for the reply, it recovers the requesting thread, and in the end it performs the same wake-up that MS performed on SM earlier, only now the target process target_proc is MS.
Finally BR_TRANSACTION_COMPLETE is written back to SM's user space; SM then goes around its loop again and sleeps, waiting to be woken by the next requesting process.
Once MS is woken, it proceeds much as SM did when it was woken, except that the command written back to its user space is cmd=BR_REPLY. This completes one full round of IPC between MS and SM.
 
2.3 A brief look at the C++ layer of the Binder machinery
Binder IPC boils down to the client and server maintaining their per-thread state through these Linux mechanisms, while the user space of SM and MS keeps exchanging reads and writes with kernel space via ioctl. The server parses the incoming data and carries out the requested operation; the client really only has to send commands. For application development Android wraps all of this up very nicely, both in the C++ Binder classes and in the Java Binder layer.
Core classes: BpBinder (the remote proxy) and BBinder (the native, local Binder).
In the C++ layer, again taking SM and MS as the example: if MS wants to talk to SM it needs a proxy for SM inside the MS process, called BpServiceManager. Its operations are addService and getService, and it is built around a BpBinder (the remote binder object for SM, effectively a handle; because SM is special, the handle value is 0).
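In code, the usual pattern looks roughly like the sketch below. defaultServiceManager(), addService(), getService() and the thread-pool calls are the real libbinder API; the name "media.hello" and the two helper functions are placeholders for the xxx service of the notation section.
[cpp]
// Sketch: registering with and querying the ServiceManager from C++.
#include <binder/IServiceManager.h>
#include <binder/IPCThreadState.h>
#include <binder/ProcessState.h>
#include <utils/String16.h>

using namespace android;

// Server side: the service process publishes its BBinder under a name.
// Under the hood this is exactly the BC_TRANSACTION to handle 0 analysed in 2.2.
void publish_hello_service(const sp<IBinder>& helloBinder /* a BnXXX / BBinder subclass */) {
    defaultServiceManager()->addService(String16("media.hello"), helloBinder);
    ProcessState::self()->startThreadPool();     // spin up binder threads to serve requests
    IPCThreadState::self()->joinThreadPool();    // this thread also serves requests
}

// Client side: look the service up; the returned IBinder is in fact a BpBinder
// whose mHandle the driver later uses to locate the target process.
sp<IBinder> lookup_hello_service() {
    return defaultServiceManager()->getService(String16("media.hello"));
}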
The low-level driver analysis in 2.2 is precisely the path taken by a user-space addService call. I reproduce Mr. Luo's UML diagram here to aid understanding.

Here I only walk through BpServiceManager::addService. Its implementation ultimately goes through BpBinder::transact, which in turn calls IPCThreadState::transact, shown below:
[plain]
status_t IPCThreadState::transact(int32_t handle, 
                                  uint32_t code, const Parcel& data, 
                                  Parcel* reply, uint32_t flags)  //handle=0,flags=0 
{
    status_t err = data.errorCheck(); 
 
    flags |= TF_ACCEPT_FDS; 
 
    IF_LOG_TRANSACTIONS() { 
        TextOutput::Bundle _b(alog); 
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand " 
            << handle << " / code " << TypeCode(code) << ": " 
            << indent << data << dedent << endl; 
    } 
     
    if (err == NO_ERROR) { 
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(), 
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY"); 
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL); // marshal the outgoing data into a binder_transaction_data 
    } 
     
    if (err != NO_ERROR) { 
        if (reply) reply->setError(err); 
        return (mLastError = err); 
    } 
     
    if ((flags & TF_ONE_WAY) == 0) { 
        #if 0 
        if (code == 4) { // relayout 
            LOGI(">>>>>> CALLING transaction 4"); 
        } else { 
            LOGI(">>>>>> CALLING transaction %d", code); 
        } 
        #endif 
        if (reply) { 
            err = waitForResponse(reply); 
        } else { 
            Parcel fakeReply; 
            err = waitForResponse(&fakeReply); 
        } 
        #if 0 
        if (code == 4) { // relayout 
            LOGI("<<<<<< RETURNING transaction 4"); 
        } else { 
            LOGI("<<<<<< RETURNING transaction %d", code); 
        } 
        #endif 
         
        IF_LOG_TRANSACTIONS() { 
            TextOutput::Bundle _b(alog); 
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand " 
                << handle << ": "; 
            if (reply) alog << indent << *reply << dedent << endl; 
            else alog << "(none requested)" << endl; 
        } 
    } else { 
        err = waitForResponse(NULL, NULL); 
    } 
     
    return err; 
}
The actual ioctl exchange happens in talkWithDriver, called from waitForResponse.
SM holds a special position in Android: it acts as a server itself and it also manages every Service in the system; a new service must register with it before it can be used. The addService here therefore talks to SM remotely through the Binder driver and registers MS, passing in a BBinder instance, BnMediaPlayerService, under its service name ("media.player").
In the C++ Binder machinery every Bpxxx has a corresponding Bnxxx (the Bpxxx side is built on BpBinder, the Bnxxx side derives from BBinder). Roughly speaking, once Bnxxx has registered with SM, threads are started to wait for client requests; a client that wants the service must obtain a Bpxxx remote proxy. getService returns the binder handle of service xxx, which is stored in mHandle of the BpBinder behind Bpxxx. Down in the driver this mHandle is used to locate the target service process, and the request is then handled just like the MS-wakes-SM sequence of 2.2.
To sum up: when a client needs a service, it first obtains a Bpxxx (wrapping a new BpBinder(mHandle)); its calls end up going through remote()->transact(), and on the service side the command is unpacked and handled in BBinder::onTransact(). A stripped-down example follows.
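The sketch below makes the Bpxxx/Bnxxx split concrete. IHello, BpHello, BnHello, the HELLO code and the descriptor string are all invented placeholders; DECLARE/IMPLEMENT_META_INTERFACE, BpInterface, BnInterface and onTransact are the real libbinder scaffolding. A concrete HelloService would derive from BnHello and implement sayHello.
[cpp]
// Sketch of a minimal Bp/Bn pair; IHello and everything named "hello" is invented.
#include <binder/IInterface.h>
#include <binder/Parcel.h>
#include <utils/String16.h>

using namespace android;

class IHello : public IInterface {
public:
    DECLARE_META_INTERFACE(Hello);            // declares asInterface(), descriptor, ...
    enum { HELLO = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t sayHello(const String16& who) = 0;
};

// Client side: lives in the caller's process and forwards everything through
// remote()->transact(), i.e. through the BpBinder whose mHandle names the target.
class BpHello : public BpInterface<IHello> {
public:
    explicit BpHello(const sp<IBinder>& impl) : BpInterface<IHello>(impl) {}
    status_t sayHello(const String16& who) override {
        Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        data.writeString16(who);
        return remote()->transact(HELLO, data, &reply);   // -> BC_TRANSACTION in the driver
    }
};

IMPLEMENT_META_INTERFACE(Hello, "sketch.IHello");

// Server side: derives (via BnInterface) from BBinder; the driver delivers the
// request as BR_TRANSACTION and the framework dispatches it to onTransact().
class BnHello : public BnInterface<IHello> {
public:
    status_t onTransact(uint32_t code, const Parcel& data,
                        Parcel* reply, uint32_t flags) override {
        switch (code) {
        case HELLO: {
            CHECK_INTERFACE(IHello, data, reply);
            return sayHello(data.readString16());
        }
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};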
 
2.4 The Java layer of Binder
A brief word on the Java-level Binder. The hard part here is really the JNI glue between the Java classes and the native ones; thanks to Google's engineers, the translation between the Java and C++ Binder objects is already implemented for us.
Three quick points:
1. The Java layer has a remote interface to SM, a ServiceManagerProxy built on a BinderProxy object whose handle is 0; BinderProxy is the counterpart of BpBinder, and the bridging is done in JNI.
2. An Ixxx interface defines a Stub and a Proxy. The Stub is the local service implementation; the Proxy is the remote proxy. They correspond to Bnxxx and Bpxxx in C++, respectively.
3. The service xxx must extend Ixxx.Stub in order to handle incoming requests.
 
2.5 Summary
The above is essentially what I took away from my own reading and study. The Binder driver is far more complex than one would imagine and the amount of source code is huge; even after writing this I have not read it all, but it lays a foundation for understanding the whole Android system in depth. Parts of this write-up follow Mr. Luo's "Android journey" series, for which I am grateful. Over the coming stretch I will be studying the Android boot sequence on Android 4.0.3 ICS, focusing on the three boot screens. I have not been feeling well lately (staring at the screen makes my head spin and my productivity has dropped), but I am keeping at it. Onward.
 
 Author: gzzaigcn
 
 
 
