Overview

Locks are everywhere in iOS development, so today let's dig into how the commonly used @synchronized actually works.

What we want to explore

The basic way we use @synchronized is:

@synchronized (self) {
    // your code here
}

As you can see, @synchronized takes an argument and is followed by a code block; that is the basic usage. The questions we want to answer are:

  • What exactly should be passed as the argument to @synchronized? What happens if we pass nil?
  • What exactly is the code block?
  • How is @synchronized actually implemented?
  • Why can @synchronized be nested?

A first look at @synchronized

@synchronized is not implemented directly as a function in the source, so we cannot simply jump to its underlying implementation that way.

Instead, let's write an @synchronized block in main.m and rewrite the file to C++ with xcrun (clang's -rewrite-objc) to see what it turns into.
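A command along these lines does the rewrite (the exact SDK and architecture flags are an assumption and may need adjusting for your environment):

xcrun -sdk iphoneos clang -arch arm64 -rewrite-objc main.m -o main.cpp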


In the generated .cpp file, apart from the code emitted for the system, the part produced by @synchronized is what interests us; after fixing up the indentation it becomes much more readable.
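A cleaned-up sketch of that expansion looks roughly like this (the exact output depends on the clang version, and the NSLog string constant is abbreviated here):

{
    id _rethrow = 0;
    id _sync_obj = (id)self;                 // the argument passed to @synchronized
    objc_sync_enter(_sync_obj);              // lock on entry
    try {
        struct _SYNC_EXIT {
            _SYNC_EXIT(id arg) : sync_exit(arg) {}
            ~_SYNC_EXIT() { objc_sync_exit(sync_exit); }   // unlock when the scope ends
            id sync_exit;
        } _sync_exit(_sync_obj);

        NSLog(@"your code here");            // the body of the @synchronized block
    } catch (id e) { _rethrow = e; }
    {
        struct _FIN {
            _FIN(id reth) : rethrow(reth) {}
            ~_FIN() { if (rethrow) objc_exception_throw(rethrow); }
            id rethrow;
        } _fin_force_rethow(_rethrow);
    }
}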


Inside the block I just wrote a simple NSLog. Notice the _SYNC_EXIT struct: besides defining a constructor and a destructor, it holds an id member named sync_exit, which receives the argument.

Stripping out the noise, the basic flow is:


  • _sync_obj: the argument passed to @synchronized
  • objc_sync_enter(_sync_obj); is called on entry
  • _sync_exit(_sync_obj): constructs a _SYNC_EXIT object holding the argument; when the scope ends its destructor runs, which in effect calls objc_sync_exit(_sync_obj);

The overall flow is not complicated. Next, let's look at objc_sync_enter(_sync_obj).
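For reference, the enter function in the objc4 runtime looks roughly like this (trimmed; names follow the open-source objc4):

// Begin synchronizing on 'obj': allocates a recursive mutex associated with 'obj' if needed.
int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);   // find or create the SyncData for obj
        ASSERT(data);
        data->mutex.lock();                       // take the recursive lock
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }

    return result;
}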


If the obj passed in is non-nil, it gets processed; if obj is nil then, as the comment in the source says, "@synchronized(nil) does nothing", and nothing happens at all.

The objc_sync_exit(id obj) function handles the data in much the same way.
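Again roughly, the exit function in objc4 (trimmed):

// End synchronizing on 'obj'.
int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, RELEASE);   // look up the SyncData for obj
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            bool okay = data->mutex.tryUnlock();  // release one level of the recursive lock
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }

    return result;
}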


Both functions run obj through id2data(); objc_sync_enter then calls mutex.lock() on the returned data, while objc_sync_exit calls mutex.tryUnlock().

A look at SyncData

Let's briefly analyze SyncData:

typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData; // clearly a singly linked list
    DisguisedPtr<objc_object> object;  // wrapper around the associated object
    int32_t threadCount;  // number of THREADS using this block
    recursive_mutex_t mutex; // recursive lock; recursion only within a single thread, not across threads
} SyncData;

Now for the core of it: the id2data() source is made up of three main parts.

#if SUPPORT_DIRECT_THREAD_KEYS
SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
if (data) {......}

Here SUPPORT_DIRECT_THREAD_KEYS relies on TLS (Thread Local Storage), a private per-thread storage area provided by the operating system, usually with only limited capacity.

 SyncCache *cache = fetch_cache(NO);
 if (cache) {......}

This looks the object up in the per-thread cache, with handling logic similar to the first path. In other words, if TLS is supported the first path is taken; otherwise the second one is. The cache structures used by this path are sketched below.
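In the objc4 source, the per-thread cache is roughly defined like this (trimmed):

typedef struct {
    SyncData *data;
    unsigned int lockCount;  // number of times THIS THREAD locked this block
} SyncCacheItem;

typedef struct SyncCache {
    unsigned int allocated;
    unsigned int used;
    SyncCacheItem list[0];   // variable-length array of cached entries
} SyncCache;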

// the remaining part
{......}
done:
......

In the macro definitions of LOCK_FOR_OBJ and LIST_FOR_OBJ there is an sDataLists:

#define LOCK_FOR_OBJ(obj) sDataLists[obj].lock
#define LIST_FOR_OBJ(obj) sDataLists[obj].data
static StripedMap<SyncList> sDataLists; // global hash table
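How an object maps to one of those buckets is decided by StripedMap, which hashes the object's address; a trimmed sketch of the objc4 version (the bucket count is 64 on macOS and the simulator, smaller on device):

template<typename T>
class StripedMap {
    enum { StripeCount = 64 };              // 8 on embedded devices

    struct PaddedT {
        T value alignas(CacheLineSize);     // pad each bucket to its own cache line
    };
    PaddedT array[StripeCount];

    static unsigned int indexForPointer(const void *p) {
        uintptr_t addr = reinterpret_cast<uintptr_t>(p);
        return ((addr >> 4) ^ (addr >> 9)) % StripeCount;   // hash the address into a bucket
    }

public:
    T& operator[](const void *p) {
        return array[indexForPointer(p)].value;
    }
};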

SyncList itself is a struct holding a SyncData pointer and a lock:

struct SyncList {
    SyncData *data;
    spinlock_t lock;
    constexpr SyncList() : data(nil), lock(fork_unsafe_lock) { }
};

This is what sDataLists looks like when dumped. Because it is a hash-style structure, entries are not stored sequentially, so the first value is not necessarily at slot 0.

(StripedMap<SyncList>) $0 = {
    array = {
        [0] = {
            value = {
                data = nil
                lock = {
                    mLock = (_os_unfair_lock_opaque = 0)
                }
            }
        },
        ......
        [63] = {
            value = {
                data = nil
                lock = {
                    mLock = (_os_unfair_lock_opaque = 0)
                }
            }
        },
    }
}

The data field of a SyncList is the SyncData (the head of a list). So how is storage handled when we nest @synchronized and pass self at every level? Because SyncData is a linked-list node, entries that land in the same bucket are chained together; id2data walks that chain comparing objects, so every operation on the same self ends up on the one SyncData node for self, which just has its counts adjusted.

Walking through id2data
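Before the walkthrough, note that the why parameter is a small enum defined alongside id2data in objc-sync.mm:

enum usage { ACQUIRE, RELEASE, CHECK };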

static SyncData* id2data(id object, enum usage why)
{
    spinlock_t *lockp = &LOCK_FOR_OBJ(object);
    SyncData **listp = &LIST_FOR_OBJ(object);
    SyncData* result = NULL;
    // 1. If TLS is supported we come through here; on the very first call no data exists yet.
    // 7. On the second level of a nested @synchronized, data exists and TLS is supported, so this branch is taken.
#if SUPPORT_DIRECT_THREAD_KEYS
    // Check per-thread single-entry fast cache for matching object
    bool fastCacheOccupied = NO;
    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    if (data) {
        fastCacheOccupied = YES;
        if (data->object == object) {
            // Found a match in fast cache.
            uintptr_t lockCount;
            result = data;
            // 8. Read how many times this thread has locked the object; threadCount or lockCount <= 0 means the fast cache is corrupted.
            lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
            if (result->threadCount <= 0  ||  lockCount <= 0) {
                _objc_fatal("id2data fastcache is buggy");
            }
            switch(why) {
            // 9. ACQUIRE: increment the lock count and store it back in TLS.
            case ACQUIRE: {
                lockCount++;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                break;
            }
            // 10. RELEASE: decrement the lock count and store it back in TLS.
            case RELEASE:
                lockCount--;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                // 11. lockCount has reached 0: remove the entry from the fast cache and decrement threadCount on result.
                if (lockCount == 0) {
                    // remove from fast cache
                    tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
                    // atomic because may collide with concurrent ACQUIRE
                    // threadCount - 1; the atomic op (and threadCount itself) shows @synchronized is meant to be used from multiple threads.
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }
            return result;
        }
    }
#endif
    // 2. If TLS is not supported we come through here; on the first call the cache does not exist yet.
    // Check per-thread cache of already-owned locks for matching object
    SyncCache *cache = fetch_cache(NO);
    if (cache) {
        unsigned int i;
        for (i = 0; i < cache->used; i++) {
            SyncCacheItem *item = &cache->list[i];
            if (item->data->object != object) continue;
            // Found a match.
            result = item->data;
            if (result->threadCount <= 0  ||  item->lockCount <= 0) {
                _objc_fatal("id2data cache is buggy");
            }
            switch(why) {
            case ACQUIRE:
                item->lockCount++;
                break;
            case RELEASE:
                item->lockCount--;
                if (item->lockCount == 0) {
                    // remove from per-thread cache
                    cache->list[i] = cache->list[--cache->used];
                    // atomic because may collide with concurrent ACQUIRE
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }
            return result;
        }
    }
    // Thread cache didn't find anything.
    // Walk in-use list looking for matching object
    // Spinlock prevents multiple threads from creating multiple 
    // locks for the same new object.
    // We could keep the nodes in some hash table if we find that there are
    // more than 20 or so distinct locks active, but we don't do that now.
    lockp->lock();
    // 3. On the first call the list is empty and why == ACQUIRE, so this block is skipped as well.
    {
        SyncData* p;
        SyncData* firstUnused = NULL;
        for (p = *listp; p != NULL; p = p->nextData) {
            if ( p->object == object ) {
                result = p;
                // atomic because may collide with concurrent RELEASE
                // 12. Found a SyncData for the same object (created by another thread): reuse it and bump threadCount.
                OSAtomicIncrement32Barrier(&result->threadCount);
                goto done;
            }
            if ( (firstUnused == NULL) && (p->threadCount == 0) )
                firstUnused = p;
        }
        // no SyncData currently associated with object
        if ( (why == RELEASE) || (why == CHECK) )
            goto done;
        // an unused one was found, use it
        if ( firstUnused != NULL ) {
            result = firstUnused;
            result->object = (objc_object *)object;
            result->threadCount = 1;
            goto done;
        }
    }
    // 4. The first call ends up here: create a new SyncData and add it to the list.
    // Allocate a new SyncData and add to list.
    // XXX allocating memory with a global lock held is bad practice,
    // might be worth releasing the lock, allocating, and searching again.
    // But since we never free these guys we won't be stuck in allocation very often.
    posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
    result->object = (objc_object *)object;
    result->threadCount = 1;
    new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
    // 5. Insert the new node at the head of the list.
    result->nextData = *listp;
    *listp = result;
    // 6. After done:, store the new SyncData in TLS or in the per-thread cache.
 done:
    lockp->unlock();
    if (result) {
        // Only new ACQUIRE should get here.
        // All RELEASE and CHECK and recursive ACQUIRE are 
        // handled by the per-thread caches above.
        if (why == RELEASE) {
            // Probably some thread is incorrectly exiting 
            // while the object is held by another thread.
            return nil;
        }
        if (why != ACQUIRE) _objc_fatal("id2data is buggy");
        if (result->object != object) _objc_fatal("id2data is buggy");
#if SUPPORT_DIRECT_THREAD_KEYS
        if (!fastCacheOccupied) {
            // Save in fast thread cache
            tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
        } else 
#endif
        {
            // Save in thread cache
            if (!cache) cache = fetch_cache(YES);
            cache->list[cache->used].data = result;
            cache->list[cache->used].lockCount = 1;
            cache->used++;
        }
    }
    return result;
}

Each SyncData operation goes through mutex.lock(), a recursive lock, so nested @synchronized blocks on the same argument can lock it again and again without deadlocking; this reentrancy is one of @synchronized's key properties.
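A minimal sketch of that reentrancy (the method names outer/inner are made up for illustration):

- (void)outer {
    @synchronized (self) {      // first lock: lockCount = 1
        [self inner];
    }
}

- (void)inner {
    @synchronized (self) {      // same thread, same object: lockCount = 2, no deadlock
        NSLog(@"still running");
    }
}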

Summary

  • When @synchronized is used, a global hash table sDataLists comes into play; it maintains an array whose elements are SyncList entries.

  • @synchronized locks and unlocks the data through objc_sync_enter and objc_sync_exit, and the lock it uses is a recursive lock.

  • The SyncData it produces is cached in two ways: in TLS or in the per-thread cache.

  • Reentrancy: lockCount keeps track of how many times the object has been locked, so the same object can be locked repeatedly.

  • Multi-threading: threadCount tracks how many threads currently hold a lock on the object.

  • The argument passed to @synchronized should outlive the data the threads are operating on. Passing self (e.g. the view controller) is common, partly because only a single SyncData entry then needs to be created (see the sketch below).
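A minimal sketch of why the argument's lifetime matters (items is a hypothetical property):

- (void)appendValue:(id)value {
    @synchronized (self) {              // self outlives the array it protects
        [self.items addObject:value];
    }
    // If we instead synchronized on an object that can be released and become nil,
    // objc_sync_enter(nil) would do nothing and the block would no longer be protected.
}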