DRAFT as of September 25, 2009: Copyright 2009 Cox, Kaashoek, Morris

Chapter 4: Locking

Xv6 runs on multiprocessors, computers with multiple CPUs executing code independently. These multiple CPUs operate on a single physical address space and share data structures; xv6 must introduce a coordination mechanism to keep them from interfering with each other. Even on a uniprocessor, xv6 must use some mechanism to keep interrupt handlers from interfering with non-interrupt code. Xv6 uses the same low-level concept for both: locks. Locks provide mutual exclusion, ensuring that only one CPU at a time can hold a lock. If xv6 only accesses a data structure while holding a particular lock, then xv6 can be sure that only one CPU at a time is accessing the data structure. In this situation, we say that the lock protects the data structure.

As an example, consider the implementation of a simple linked list:

 1 struct list {
 2   int data;
 3   struct list *next;
 4 };
 5
 6 struct list *list = 0;
 7
 8 void
 9 insert(int data)
10 {
11   struct list *l;
12
13   l = malloc(sizeof *l);
14   l->data = data;
15   l->next = list;
16   list = l;
17 }

Proving this implementation correct is a typical exercise in a data structures and algorithms class. Even though this implementation can be proved correct, it isn't, at least not on a multiprocessor. If two different CPUs execute insert at the same time, it could happen that both execute line 15 before either executes line 16. If this happens, there will now be two list nodes with next set to the former value of list. When the two assignments to list happen at line 16, the second one will overwrite the first; the node involved in the first assignment will be lost. This kind of problem is called a race condition. The problem with races is that they depend on the exact timing of the two CPUs involved and are consequently difficult to reproduce. For example, adding print statements while debugging insert might change the timing of the execution enough to make the race disappear.

The typical way to avoid races is to use a lock. Locks ensure mutual exclusion, so that only one CPU can execute insert at a time; this makes the scenario above impossible. The correctly locked version of the above code adds just a few lines (not numbered):

 6 struct list *list = 0;
   struct lock listlock;
 7
 8 void
 9 insert(int data)
10 {
11   struct list *l;
12
     acquire(&listlock);
13   l = malloc(sizeof *l);
14   l->data = data;
15   l->next = list;
16   list = l;
     release(&listlock);
17 }

When we say that a lock protects data, we really mean that the lock protects some collection of invariants that apply to the data. Invariants are properties of data structures that are maintained across operations. Typically, an operation's correct behavior depends on the invariants being true when the operation begins. The operation may temporarily violate the invariants but must reestablish them before finishing. For example, in the linked list case, the invariant is that list points at the first node in the list and that each node's next field points at the next node. The implementation of insert violates this invariant temporarily: line 13 creates a new list element l with the intent that l be the first node in the list, but l's next pointer does not point at the next node in the list yet (reestablished at line 15) and list does not point at l yet (reestablished at line 16). The race condition we examined above happened because a second CPU executed code that depended on the list invariants while they were (temporarily) violated. Proper use of a lock ensures that only one CPU at a time can operate on the data structure, so that no CPU will execute a data structure operation when the data structure's invariants do not hold.

Code: Locks

Xv6’s represents a lock as astruct spinlock(1301). The criticalfield in the structure

islocked, a word that is zero when the lock is available and non-zero when it is held.Logically, xv6 should acquire a lock by executing code like

21 void
22 acquire(struct spinlock *lk)
23 {
24   for(;;) {
25     if(!lk->locked) {
26       lk->locked = 1;
27       break;
28     }
29   }
30 }

Unfortunately, this implementation does not guarantee mutual exclusion on a modern multiprocessor. It could happen that two (or more) CPUs simultaneously reach line 25, see that lk->locked is zero, and then both grab the lock by executing lines 26 and 27. At this point, two different CPUs hold the lock, which violates the mutual exclusion property. Rather than helping us avoid race conditions, this implementation of acquire has its own race condition. The problem here is that lines 25 and 26 execute as separate actions. In order for the routine above to be correct, lines 25 and 26 must execute in one atomic step.

To execute those two lines atomically, xv6 relies on a special 386 hardware instruction, xchg (0501). In one atomic operation, xchg swaps a word in memory with the contents of a register. Acquire (1373) repeats this xchg instruction in a loop; each iteration reads lk->locked and atomically sets it to 1 (1382). If the lock is held, lk->locked will already be 1, so the xchg returns 1 and the loop continues. If the xchg returns 0, however, acquire has successfully acquired the lock (locked was 0 and is now 1), so the loop can stop. Once the lock is acquired, acquire records, for debugging, the CPU and stack trace that acquired the lock. When a process acquires a lock and forgets to release it, this information can help to identify the culprit. These debugging fields are protected by the lock and must only be edited while holding the lock.

Release (1402) is the opposite of acquire: it clears the debugging fields and then releases the lock.

Modularity and recursive locks

System design strives for clean, modular abstractions: it is best when a caller does not need to know how a callee implements particular functionality. Locks interfere with this modularity. For example, if a CPU holds a particular lock, it cannot call any function f that will try to reacquire that lock: since the caller can't release the lock until f returns, if f tries to acquire the same lock, it will spin forever, or deadlock.

There are no transparent solutions that allow the caller and callee to hide which locks they use. One common, transparent, but unsatisfactory solution is ‘‘recursive locks,’’ which allow a callee to reacquire a lock already held by its caller. The problem with this solution is that recursive locks can't be used to protect invariants. After insert called acquire(&listlock) above, it can assume that no other function holds the lock, that no other function is in the middle of a list operation, and most importantly that all the list invariants hold. In a system with recursive locks, insert can assume nothing after it calls acquire: perhaps acquire succeeded only because one of insert's callers already held the lock and was in the middle of editing the list data structure. Maybe the invariants hold or maybe they don't. The lock no longer protects them. Locks are just as important for protecting callers and callees from each other as they are for protecting different CPUs from each other; recursive locks give up that property.

Since there is no ideal transparent solution, we must consider locks part of the function's specification. The programmer must arrange that a function doesn't invoke a function f while holding a lock that f needs. Locks force themselves into our abstractions.

Code: Using locks

The hardest part about using locks is deciding how many locks to use and which data and invariants each lock protects. There are a few basic principles. First, any time a variable can be written by one CPU at the same time that another CPU can read or write it, a lock should be introduced to keep the two operations from overlapping. Second, remember that locks protect invariants: if an invariant involves multiple data structures, typically all of the structures need to be protected by a single lock to ensure the invariant is maintained.

The rules above say when locks are necessary but say nothing about when locks are unnecessary, and it is important for efficiency not to lock too much. For protecting kernel data structures, it would suffice to create a single lock that must be acquired on entering the kernel and released on exiting the kernel. Many uniprocessor operating systems have been converted to run on multiprocessors using this approach, sometimes called a ‘‘giant kernel lock,’’ but the approach sacrifices true concurrency: only one CPU can execute in the kernel at a time. If the kernel does any heavy computation, it would be more efficient to use a larger set of more fine-grained locks, so that the kernel could execute on multiple CPUs simultaneously.

Ultimately, the choice of lock granularity is more art than science. Xv6 uses a few coarse data-structure specific locks. Hopefully, the examples of xv6 will help convey a feeling for some of the art.

