Date:	Fri, 25 Aug 2006 16:19:29 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	ego@...ibm.com
CC:	Srivatsa Vaddagiri <vatsa@...ibm.com>, Ingo Molnar <mingo@...e.hu>,
	rusty@...tcorp.com.au, torvalds@...l.org, akpm@...l.org,
	linux-kernel@...r.kernel.org, arjan@...ux.intel.com,
	davej@...hat.com, dipankar@...ibm.com, ashok.raj@...el.com
Subject: Re: [RFC][PATCH 3/4] (Refcount + Waitqueue) implementation for cpu_hotplug "locking"

Gautham R Shenoy wrote:
> On Thu, Aug 24, 2006 at 06:28:14PM +0530, Srivatsa Vaddagiri wrote:
> 
>>On Thu, Aug 24, 2006 at 02:25:27PM +0200, Ingo Molnar wrote:
>>
>>>no. The writer first sets the global write_active flag, and _then_ goes 
>>>on to wait for all readers (if any) to get out of their critical 
>>>sections. (That's the purpose of the per-cpu waitqueue that readers use 
>>>to wake up a writer waiting for the refcount to go to 0.)
>>>
>>>can you still see problems with this scheme?
>>
>>This can sometimes cause a deadlock: a thread takes read_lock()
>>recursively, and a writer comes in between the two recursive
>>reads:
>>
>>	Reader1 on CPU0			Writer1 on CPU1
>>
>>	read_lock() - success
>>
>>					write_lock() - blocks on Reader1
>>						  (writer_active = 1)
>>
>>
>>	read_lock() - blocks on Writer1
>>
>>The only way to avoid this deadlock is to either keep track of a
>>per-task cpu_hp_lock_count (like the per-task preemption count)
>>or allow read_lock() to succeed if reader_count > 1 (even if
>>writer_active = 1). The latter makes the lock unduly biased towards
>>readers.
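
To make the diagram concrete, the writer side of the scheme under
discussion would look roughly like this (a sketch only; the names
writer_active, cpu_hp_refcount and cpu_hp_wq are illustrative, not
taken from an actual patch):

static int writer_active;                             /* the global flag */
static DEFINE_PER_CPU(int, cpu_hp_refcount);          /* per-cpu readers */
static DEFINE_PER_CPU(wait_queue_head_t, cpu_hp_wq);  /* init'd at boot */

static void cpu_hp_write_lock(void)
{
	int cpu;

	writer_active = 1;        /* new readers block from here on... */
	for_each_online_cpu(cpu)  /* ...then drain the existing readers */
		wait_event(per_cpu(cpu_hp_wq, cpu),
			   per_cpu(cpu_hp_refcount, cpu) == 0);
	/*
	 * A reader that re-enters read_lock() after this point can
	 * never drain its refcount: the deadlock in the diagram above.
	 */
}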
> 
> 
> The reason recursive read-side locking works in the patches I posted
> is that the _locking_is_unfair_: even when a writer is waiting, a new
> reader will go ahead as long as there are readers in the system.
> 
> I can try incorporating this unfair model into Ingo's suggestion
> as follows (see the sketch below):
> - A writer on arrival sets the global flag to writer_waiting.
> - A reader on cpuX checks whether the global flag = writer_waiting.
> If yes and percpu(refcount) == 0, the reader blocks. If
> percpu(refcount) != 0, the reader increments it and goes ahead,
> because there are readers in the system.
> 
> This should work, if not for task migration. It may so happen that
> a task has already taken a read lock on cpuX and gets migrated to
> cpuY, where percpu(refcount) = 0. Now a writer arrives and sets the
> global flag. The reader then tries to take a recursive read lock and
> gets blocked, since percpu(refcount) on cpuY is 0.

This could easily block hotplug forever though, if you have lots of
tasks in the system.
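
For reference, the unfair reader path being proposed would look
roughly like this (again a sketch; cpu_hp_writer_waiting,
cpu_hp_refcount and cpu_hp_readers_wq are illustrative names):

static int cpu_hp_writer_waiting;                  /* set by an arriving writer */
static DEFINE_PER_CPU(int, cpu_hp_refcount);       /* per-cpu reader count */
static DECLARE_WAIT_QUEUE_HEAD(cpu_hp_readers_wq); /* readers wait out writers */

static void cpu_hp_read_lock(void)
{
	preempt_disable();
	/*
	 * Unfair: if this CPU already has active readers, assume a
	 * recursive read and go ahead even though a writer is waiting.
	 */
	while (cpu_hp_writer_waiting && __get_cpu_var(cpu_hp_refcount) == 0) {
		preempt_enable();
		wait_event(cpu_hp_readers_wq, !cpu_hp_writer_waiting);
		preempt_disable();
	}
	__get_cpu_var(cpu_hp_refcount)++;
	preempt_enable();
}

/*
 * The migration hole: a task that did cpu_hp_read_lock() on cpuX and
 * was then migrated to cpuY sees cpu_hp_refcount == 0 there, so its
 * own recursive read blocks behind the waiting writer.
 */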

> 
> Ingo, I am wondering if lockdep would be of some help here.
> Since lockdep already checks for recursive reads, can I use it in
> the above case and allow the new reader only if it is recursive?
> I don't like the idea of explicitly checking for recursiveness
> in the locking scheme; it's just that I can't think of a better
> way right now.

Well, you would just have a depth count in the task_struct... in fact,
that count could *be* the read lock (i.e. the writer traverses all tasks
instead of all CPU locks), which would save a cacheline in the read path...
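
A rough sketch of that, assuming a hypothetical cpu_hp_read_depth
field added to task_struct (all names here are made up for
illustration):

static int cpu_hp_writer_active;
static DECLARE_WAIT_QUEUE_HEAD(cpu_hp_readers_wq);
static DECLARE_WAIT_QUEUE_HEAD(cpu_hp_writer_wq);

static void cpu_hp_read_lock(void)
{
	/* depth > 0 means we already hold the lock: recursion just nests */
	if (current->cpu_hp_read_depth == 0)
		wait_event(cpu_hp_readers_wq, !cpu_hp_writer_active);
	current->cpu_hp_read_depth++;
}

static void cpu_hp_read_unlock(void)
{
	if (--current->cpu_hp_read_depth == 0)
		wake_up(&cpu_hp_writer_wq);   /* we may be the last reader */
}

/* The writer scans all tasks instead of per-cpu counters: */
static int cpu_hp_readers(void)
{
	struct task_struct *g, *p;
	int depth = 0;

	read_lock(&tasklist_lock);
	do_each_thread(g, p) {
		depth += p->cpu_hp_read_depth;
	} while_each_thread(g, p);
	read_unlock(&tasklist_lock);
	return depth;
}

The read path touches only current, so it stays cache-friendly; the
writer pays an O(tasks) scan, which should be fine for something as
rare as a hotplug event. (Races between the flag and the scan are
elided in this sketch.)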

But I think the point was that we didn't want to add yet another field
to task_struct. Maybe it is acceptable... one day it will be like
page_flags though ;)


-- 
SUSE Labs, Novell Inc.