Message-ID: <20070330015321.GB19407@wotan.suse.de>
Date: Fri, 30 Mar 2007 03:53:21 +0200
From: Nick Piggin <npiggin@...e.de>
To: Oleg Nesterov <oleg@...sign.ru>
Cc: Ravikiran G Thirumalai <kiran@...lex86.org>,
Ingo Molnar <mingo@...e.hu>,
Nikita Danilov <nikita@...sterfs.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [patch] queued spinlocks (i386)
On Thu, Mar 29, 2007 at 10:42:13PM +0400, Oleg Nesterov wrote:
> On 03/28, Nick Piggin wrote:
> >
> > Well, with my queued spinlocks, all that lockbreak stuff can just come out
> > of the spin_lock, break_lock out of the spinlock structure, and
> > need_lockbreak just becomes (lock->qhead - lock->qtail > 1).
>
> Q: queued spinlocks are not CONFIG_PREEMPT friendly,
I consider the re-enabling of preemption and interrupts to be a hack
anyway, because if you already have interrupts or preemption disabled
at entry time, they will remain disabled for the whole spin.
IMO the real solution is to ensure spinlock critical sections don't get
too large, and perhaps use fair spinlocks to prevent starvation.
>
> > + asm volatile(LOCK_PREFIX "xaddw %0, %1\n\t"
> > + : "+r" (pos), "+m" (lock->qhead) : : "memory");
> > + while (unlikely(pos != lock->qtail))
> > + cpu_relax();
>
> once we've incremented lock->qhead, we have no option but to spin with
> preemption disabled until pos == lock->qtail, yes?
Correct. For the purposes of deadlock behaviour, we have effectively
taken the lock at that point.
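
That also means a trylock for this scheme can't be built from the same
xadd: a ticket, once taken, must be honoured, or every CPU queued behind
the phantom ticket spins forever. A sketch of what a trylock would have
to look like instead, reusing the struct from the sketch above (cmpxchg
so that a failed attempt leaves no ticket behind; illustrative, not from
the patch):

static bool q_spin_trylock(struct qspinlock *lock)
{
	unsigned short tail = atomic_load(&lock->qtail);
	unsigned short expected = tail;

	/*
	 * Take a ticket only when the lock is free (qhead == qtail).
	 * cmpxchg rather than xadd: on failure nothing is published,
	 * so no waiter ever queues behind a ticket we abandoned.
	 */
	return atomic_compare_exchange_strong(&lock->qhead, &expected,
					      (unsigned short)(tail + 1));
}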