Message-ID: <20080328124517.GQ16721@parisc-linux.org>
Date:	Fri, 28 Mar 2008 06:45:17 -0600
From:	Matthew Wilcox <matthew@....cx>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	"Luck, Tony" <tony.luck@...el.com>,
	Stephen Rothwell <sfr@...b.auug.org.au>,
	linux-arch@...r.kernel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: down_spin() implementation

On Fri, Mar 28, 2008 at 11:01:24AM +1100, Nick Piggin wrote:
> Uhm, how do you use this exactly? All other holders of this
> semaphore must have preempt disabled and not sleep, right? (and
> so you need a new down() that disables preempt too)

Ah, I see what you're saying.  The deadlock would be (on a single-CPU
machine): task A holds the semaphore and gets preempted; task B takes
a spinlock (and is thus non-preemptible), then calls down_spin(), which
can never succeed because task A never runs again to release the
semaphore.
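
To spell it out (hypothetical call sequence; this assumes task A took
the semaphore with a plain down(), and that down_spin() busy-waits for
it with preemption disabled):

	Task A                          Task B
	------                          ------
	down(&sem);     /* acquired */
	... preempted ...
	                                spin_lock(&lock);  /* preemption now off */
	                                down_spin(&sem);   /* spins forever; A can
	                                                      never run again to
	                                                      call up(&sem) */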

That hadn't occurred to me -- I'm not used to thinking about preemption.
I considered interrupt context and saw how that would deadlock, so I just
put a note in the documentation that it wasn't usable from interrupts.

So it makes little sense to add this to semaphores.  Better to introduce
a spinaphore, as you say.

> typedef struct {
>   atomic_t cur;
>   int max;
> } ss_t;
> 
> void spin_init(ss_t *ss, int max)
> {
> 	atomic_set(&ss->cur, 0);
> 	ss->max = max;
> }
> 
> void spin_take(ss_t *ss)
> {
>   preempt_disable();
>   while (unlikely(!atomic_add_unless(&ss->cur, 1, ss->max))) {
>     while (atomic_read(&ss->cur) == ss->max)
>       cpu_relax();
>   }
> }

I think we can do better here with:

	atomic_set(&ss->cur, max);

and

	while (unlikely(!atomic_add_unless(&ss->cur, -1, 0)))
		while (atomic_read(&ss->cur) == 0)
			cpu_relax();

It still spins on the spinaphore itself rather than on a local cacheline,
so there's room for improvement.  But it's not clear whether it'd be
worth it.
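
For concreteness, the whole thing in the counting-down form would look
something like this (untested, just a sketch; spin_drop() is a name I
made up for the release side, which neither of us has written yet):

	typedef struct {
		atomic_t cur;	/* remaining slots; 0 means full */
	} ss_t;

	void spin_init(ss_t *ss, int max)
	{
		atomic_set(&ss->cur, max);
	}

	void spin_take(ss_t *ss)
	{
		/* holders may not sleep, so keep preemption off while held */
		preempt_disable();
		while (unlikely(!atomic_add_unless(&ss->cur, -1, 0)))
			while (atomic_read(&ss->cur) == 0)
				cpu_relax();
	}

	void spin_drop(ss_t *ss)
	{
		atomic_inc(&ss->cur);
		preempt_enable();
	}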

> About the same number as down_spin(). And it is much harder to
> misuse. So LOC isn't such a great argument for this kind of thing.

LOC wasn't really my argument -- I didn't want to introduce a new data
structure unnecessarily.  But the pitfalls (that I hadn't seen) of
mixing down_spin() into semaphores are just too awful.

I'll pop this patch off the stack of semaphore patches.  Thanks.

-- 
Intel are signing my paycheques ... these opinions are still mine
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."
