Overview
ReentrantLock, a reentrant lock, is the most important lock that J.U.C (java.util.concurrent) provides in the JDK.
ReentrantLock acquires the lock by spinning: it loops on CAS operations to lock in user space, which keeps the thread from being blocked in kernel mode and therefore makes it efficient.
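A minimal sketch of this spin-on-CAS idea (an illustration only, not ReentrantLock's actual AQS-based implementation; the class name SpinLock is made up):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative spin lock: threads loop on CAS in user space instead of
// blocking in kernel mode.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until the CAS flips the flag from false (free) to true (held).
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // JDK 9+: hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        locked.set(false); // release: the next CAS in lock() will succeed
    }
}
```

Note that a pure spin lock burns CPU while waiting; ReentrantLock's AQS eventually parks waiting threads rather than spinning forever.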
Comparison with synchronized
- Reentrancy: both are reentrant.
- Implementation: ReentrantLock is implemented in JDK library source code; synchronized's locking is implemented and managed inside the JVM itself.
- Performance: before synchronized was optimized, its performance was poor; after the optimizations (which borrowed ReentrantLock's CAS idea of locking and unlocking in user space) introduced biased locking and spin locking, the two perform almost identically (synchronized is now generally recommended).
- Functionality: (1) synchronized is simpler and more concise to use, and the compiler/JVM guarantees the lock is released; ReentrantLock requires declaring the lock, acquiring it explicitly, and unlocking in a finally block. (2) ReentrantLock offers finer lock granularity and more flexibility.
A fair lock grants the lock to threads in the order they requested it; an unfair lock grants it to whichever thread wins the contention for it.
Features unique to ReentrantLock (a sketch follows this list):
- ReentrantLock can be constructed as either a fair or an unfair lock (synchronized can only be unfair);
- Through newCondition() it provides Condition objects, which let you wake up exactly the group of threads that needs waking.
- It provides a mechanism for interrupting a thread that is waiting for the lock -> lock.lockInterruptibly()
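A short sketch exercising these three features; the fairness constructor, newCondition(), and lockInterruptibly() are real ReentrantLock APIs, while the class and condition names are illustrative:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockFeatures {
    // 1. Fairness is chosen at construction time (true = fair; default is unfair).
    private final ReentrantLock lock = new ReentrantLock(true);

    // 2. A Condition lets us wake only the threads waiting on it.
    private final Condition notEmpty = lock.newCondition();

    public void awaitNotEmpty() throws InterruptedException {
        lock.lock();
        try {
            notEmpty.await();     // wait on this specific condition
        } finally {
            lock.unlock();
        }
    }

    public void signalNotEmpty() {
        lock.lock();
        try {
            notEmpty.signalAll(); // wakes only the threads waiting on notEmpty
        } finally {
            lock.unlock();
        }
    }

    // 3. Waiting for the lock can be interrupted, unlike a synchronized block.
    public void doWork() throws InterruptedException {
        lock.lockInterruptibly(); // throws InterruptedException if interrupted while waiting
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}
```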
When to use it
When you need any of the ReentrantLock-specific features above, you must use ReentrantLock.
Examples
ReentrantLock
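A minimal sketch of the pattern the analysis below describes (class and field names are illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockExample {
    private static int count = 0;
    private static final Lock lock = new ReentrantLock();

    public static void add() {
        lock.lock();       // acquire before the core method body
        try {
            count++;       // critical section
        } finally {
            lock.unlock(); // always release in finally
        }
    }
}
```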
Example analysis:
Simple enough: acquire the lock just before the core method body executes, then release it in the finally block afterwards.
ReentrantReadWriteLock
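A sketch of what LockExample3 might look like: a map guarded by separate read and write locks (only the class name comes from the original listing; the body is an assumption):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockExample3 {
    private final Map<String, Integer> map = new TreeMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = lock.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = lock.writeLock();

    public Integer get(String key) {
        readLock.lock();   // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            readLock.unlock();
        }
    }

    public Integer put(String key, Integer value) {
        writeLock.lock();  // exclusive: blocks all readers and writers
        try {
            return map.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }
}
```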
Example notes
- It uses ReentrantReadWriteLock, a lock that separates reads from writes: read operations and write operations each take their own lock.
- Because ReentrantReadWriteLock's read lock is pessimistic, a write can only proceed once all other operations have finished. Since in real applications reads usually far outnumber writes, write operations may suffer thread starvation.
Pessimistic locking: takes a conservative attitude toward modification of the data by the outside world (including other transactions in the current system, as well as transactions from external systems); because it pessimistically assumes the data is prone to corruption under concurrency, it keeps the data locked for the entire duration of processing.
Optimistic locking: mostly implemented with a data-version mechanism; the version number (a version field) is read out together with the data, and the version is incremented on update. If the version of the submitted data is newer than the current version in the database table, the update is applied; otherwise the data is considered stale (a sketch of the idea follows these notes).
Thread starvation: usually caused by a thread's priority being too low, making the thread wait so long to be scheduled that by the time it runs, its operation may no longer even be meaningful.
- For this reason, this class is not used much in practice.
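As an in-JVM illustration of the version-number idea, here is a sketch using AtomicStampedReference from java.util.concurrent, which pairs a reference with an integer stamp acting as the version (the class and method names are made up):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class OptimisticUpdate {
    // Value plus version, read and updated together.
    private final AtomicStampedReference<String> ref =
            new AtomicStampedReference<>("initial", 0);

    public boolean tryUpdate(String newValue) {
        int[] stampHolder = new int[1];
        String current = ref.get(stampHolder);   // read value and version atomically
        int version = stampHolder[0];
        // Succeeds only if neither the value nor the version changed in between,
        // and bumps the version; otherwise the caller is holding stale data.
        return ref.compareAndSet(current, newValue, version, version + 1);
    }
}
```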
StampedLock
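A sketch based on the Point example in the StampedLock Javadoc, which the analysis below refers to (the wrapper class name comes from the original listing; the body follows the Javadoc example):

```java
import java.util.concurrent.locks.StampedLock;

public class StampedLockExample {
    private double x, y;
    private final StampedLock sl = new StampedLock();

    void move(double deltaX, double deltaY) { // exclusively locked write
        long stamp = sl.writeLock();
        try {
            x += deltaX;
            y += deltaY;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() { // optimistic read with pessimistic fallback
        long stamp = sl.tryOptimisticRead(); // non-zero unless write-locked
        double currentX = x, currentY = y;   // copy fields into locals first
        if (!sl.validate(stamp)) {           // a writer intervened: fall back to read lock
            stamp = sl.readLock();
            try {
                currentX = x;
                currentY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(currentX * currentX + currentY * currentY);
    }
}
```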
Example analysis
This example is the reference example provided in the JDK source. Compared with ReentrantReadWriteLock, StampedLock has three modes for controlling access: write lock, read lock, and optimistic read (!!). Because it offers an optimistic read mechanism, the system can keep good performance and high I/O throughput even when reads make up the vast majority of operations.
StampedLock source analysis
First, take a look at the class-level Javadoc:

/**
* A capability-based lock with three modes for controlling read/write
* access. The state of a StampedLock consists of a version and mode.
* Lock acquisition methods return a stamp that represents and
* controls access with respect to a lock state; "try" versions of
* these methods may instead return the special value zero to
* represent failure to acquire access. Lock release and conversion
* methods require stamps as arguments, and fail if they do not match
* the state of the lock. ……
*
* <li><b>Optimistic Reading.</b> Method {@link #tryOptimisticRead}
* returns a non-zero stamp only if the lock is not currently held
* in write mode. Method {@link #validate} returns true if the lock
* has not been acquired in write mode since obtaining a given
* stamp. This mode can be thought of as an extremely weak version
* of a read-lock, that can be broken by a writer at any time. The
* use of optimistic mode for short read-only code segments often
* reduces contention and improves throughput. However, its use is
* inherently fragile. Optimistic read sections should only read
* fields and hold them in local variables for later use after
* validation. Fields read while in optimistic mode may be wildly
* inconsistent, so usage applies only when you are familiar enough
* with data representations to check consistency and/or repeatedly
* invoke method {@code validate()}. For example, such steps are
* typically required when first reading an object or array
* reference, and then accessing one of its fields, elements or
* methods. </li>
*
* Algorithmic notes:
*
* The design employs elements of Sequence locks
* (as used in linux kernels; see Lameter's
* http://www.lameter.com/gelato2005.pdf
* and elsewhere; see
* Boehm's http://www.hpl.hp.com/techreports/2012/HPL-2012-68.html)
* and Ordered RW locks (see Shirako et al
* http://dl.acm.org/citation.cfm?id=2312015)
*
* Conceptually, the primary state of the lock includes a sequence
* number that is odd when write-locked and even otherwise.
* However, this is offset by a reader count that is non-zero when
* read-locked. The read count is ignored when validating
* "optimistic" seqlock-reader-style stamps. Because we must use
* a small finite number of bits (currently 7) for readers, a
* supplementary reader overflow word is used when the number of
* readers exceeds the count field. We do this by treating the max
* reader count value (RBITS) as a spinlock protecting overflow
* updates.
*
* Waiters use a modified form of CLH lock used in
* AbstractQueuedSynchronizer (see its internal documentation for
* a fuller account), where each node is tagged (field mode) as
* either a reader or writer. Sets of waiting readers are grouped
* (linked) under a common node (field cowait) so act as a single
* node with respect to most CLH mechanics. By virtue of the
* queue structure, wait nodes need not actually carry sequence
* numbers; we know each is greater than its predecessor. This
* simplifies the scheduling policy to a mainly-FIFO scheme that
* incorporates elements of Phase-Fair locks (see Brandenburg &
* Anderson, especially http://www.cs.unc.edu/~bbb/diss/). In
* particular, we use the phase-fair anti-barging rule: If an
* incoming reader arrives while read lock is held but there is a
* queued writer, this incoming reader is queued. (This rule is
* responsible for some of the complexity of method acquireRead,
* but without it, the lock becomes highly unfair.) Method release
* does not (and sometimes cannot) itself wake up cowaiters. This
* is done by the primary thread, but helped by any other threads
* with nothing better to do in methods acquireRead and
* acquireWrite.
*
* These rules apply to threads actually queued. All tryLock forms
* opportunistically try to acquire locks regardless of preference
* rules, and so may "barge" their way in. Randomized spinning is
* used in the acquire methods to reduce (increasingly expensive)
* context switching while also avoiding sustained memory
* thrashing among many threads. We limit spins to the head of
* queue. A thread spin-waits up to SPINS times (where each
* iteration decreases spin count with 50% probability) before
* blocking. If, upon wakening it fails to obtain lock, and is
* still (or becomes) the first waiting thread (which indicates
* that some other thread barged and obtained lock), it escalates
* spins (up to MAX_HEAD_SPINS) to reduce the likelihood of
* continually losing to barging threads.
*
* Nearly all of these mechanics are carried out in methods
* acquireWrite and acquireRead, that, as typical of such code,
* sprawl out because actions and retries rely on consistent sets
* of locally cached reads.
*
* As noted in Boehm's paper (above), sequence validation (mainly
* method validate()) requires stricter ordering rules than apply
* to normal volatile reads (of "state"). To force orderings of
* reads before a validation and the validation itself in those
* cases where this is not already forced, we use
* Unsafe.loadFence.
*
* The memory layout keeps lock state and queue pointers together
* (normally on the same cache line). This usually works well for
* read-mostly loads. In most other cases, the natural tendency of
* adaptive-spin CLH locks to reduce memory contention lessens
* motivation to further spread out contended locations, but might
* be subject to future improvements.
*/
Looks like this class really has something to it. There's a lot here, though, and it's hard to know where to start. For now this rough overview will do; we'll keep digging into the details later~
Promise me you won't throw up~ Seriously though, it's worth flipping back and reading it carefully from the top; note that it uses the CLH queue from AQS~
(Continued in part 2)