Date:	Tue, 24 Dec 2013 09:29:31 -0800
From:	James Bottomley <James.Bottomley@...senPartnership.com>
To:	Suresh Thiagarajan <Suresh.Thiagarajan@...s.com>
Cc:	Ingo Molnar <mingo@...nel.org>, Oleg Nesterov <oleg@...hat.com>,
	Jason Seba <jason.seba42@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Tomas Henzl <thenzl@...hat.com>, Jack Wang <xjtuwjp@...il.com>,
	Viswas G <Viswas.G@...s.com>,
	"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
	Vasanthalakshmi Tharmarajan 
	<Vasanthalakshmi.Tharmarajan@...s.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: spinlock_irqsave() && flags (Was: pm80xx: Spinlock fix)

On Tue, 2013-12-24 at 09:13 +0000, Suresh Thiagarajan wrote:
> 
> On Tue, Dec 24, 2013 at 1:59 PM, Ingo Molnar <mingo@...nel.org> wrote:
> >
> > * Oleg Nesterov <oleg@...hat.com> wrote:
> >
> >> On 12/23, Ingo Molnar wrote:
> >> >
> >> > * Oleg Nesterov <oleg@...hat.com> wrote:
> >> >
> >> > > Initially I thought that this was obviously wrong: irqsave/irqrestore
> >> > > assume that "flags" is owned by the caller, not by the lock.  And
> >> > > iirc this was certainly wrong in the past.
> >> > >
> >> > > But when I look at spinlock.c it seems that this code can actually
> >> > > work. _irqsave() writes to FLAGS after it takes the lock, and
> >> > > _irqrestore() has a copy of FLAGS before it drops this lock.
> >> >
> >> > I don't think that's true: if it were, then the lock would not be
> >> > irqsave; a hardware irq could come in after the lock has been taken
> >> > and before the flags are saved and interrupts disabled.
> >>
> >> I do agree that this pattern is not safe; that is why I decided to ask.
> >>
> >> But, unless I missed something, with the current implementation
> >> spin_lock_irqsave(lock, global_flags) does:
> >>
> >>       unsigned long local_flags;
> >>
> >>       local_irq_save(local_flags);
> >>       spin_lock(lock);
> >>
> >>       global_flags = local_flags;
> >>
> >> so the access to global_flags is actually serialized by lock.
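
For what it's worth, the generic out-of-line path is currently ordered
exactly like that.  A simplified sketch of what the spinlock headers and
kernel/locking/spinlock.c end up doing (lockdep and preemption details
elided, so treat this as an illustration rather than the real code):

	#define raw_spin_lock_irqsave(lock, flags)		\
	do {							\
		typecheck(unsigned long, flags);		\
		flags = _raw_spin_lock_irqsave(lock);		\
	} while (0)

	unsigned long _raw_spin_lock_irqsave(raw_spinlock_t *lock)
	{
		unsigned long flags;

		local_irq_save(flags);	/* irqs off, flags kept in a local */
		preempt_disable();
		do_raw_spin_lock(lock);	/* lock is now held */
		return flags;		/* the caller's flags variable is only
					   written after this returns, i.e.
					   with the lock already held */
	}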
> 
> Below is some small pseudo-code showing how the flags word can be
> protected/serialized for global access.
> struct temp {
> 	...
> 	spinlock_t lock;
> 	unsigned long lock_flags;
> };
> 
> void my_lock(struct temp *t)
> {
> 	unsigned long flag;	/* thread-private variable, as suggested */
> 
> 	spin_lock_irqsave(&t->lock, flag);
> 	t->lock_flags = flag;	/* updated inside the critical section,
> 				   so access to it is serialized */
> }
> 
> void my_unlock(struct temp *t)
> {
> 	unsigned long flag = t->lock_flags;
> 
> 	t->lock_flags = 0;	/* cleared before leaving the critical section */
> 	spin_unlock_irqrestore(&t->lock, flag);
> }
> 
> For unlocking, I could even use spin_unlock_irqrestore(&t->lock,
> t->lock_flags) directly instead of my_unlock(), since t->lock_flags is
> updated only in my_lock() and so there is no need to explicitly clear
> it.  Please let me know if I am missing anything here in serializing
> the global lock flags.

I don't think anyone's arguing that you can't do this.  The argument is
more that you shouldn't: lock contention is one of the biggest killers of
performance, and getting locking right (avoiding inversions, or avoiding
nested locks altogether) is hard.  You therefore need to keep critical
sections in Linux as short and as clear as you can, so the pattern for
locking is to lock and unlock in the same routine, with a local flags
variable.
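
That is, something of this shape (just a minimal sketch reusing the struct
from your pseudo-code; the function name is made up):

	static void temp_do_work(struct temp *t)
	{
		unsigned long flags;	/* lives only on this function's stack */

		spin_lock_irqsave(&t->lock, flags);
		/* ... the shortest critical section you can manage ... */
		spin_unlock_irqrestore(&t->lock, flags);
	}

Done that way, the flags word never becomes shared state and the scope of
the lock is obvious to whoever reads the function.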

So, rather than developing a primitive that supports and encourages uses
of locking which violate that pattern and are far more likely to cause
performance and other problems, could you instead audit the code and see
whether the need to carry a lock across routines can be eliminated (either
by shortening the critical sections or by nesting the calls better)?

Thanks,

James


