Message-ID: <20130813083906.GT27162@twins.programming.kicks-ass.net>
Date:	Tue, 13 Aug 2013 10:39:06 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Mike Galbraith <bitbucket@...ine.de>,
	Andi Kleen <ak@...ux.intel.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	arjan@...ux.intel.com
Subject: Re: [RFC] per-cpu preempt_count

On Mon, Aug 12, 2013 at 11:53:25AM -0700, Linus Torvalds wrote:
> On Mon, Aug 12, 2013 at 10:51 AM, H. Peter Anvin <hpa@...or.com> wrote:
> >
> > So we would have code looking something like:
> >
> >         decl %fs:preempt_count
> >         jnz 1f
> >         cmpb $0,%fs:need_resched
> >         je 1f
> >         call __preempt_schedule
> > 1:
> >
> > It's a nontrivial amount of code, but would seem a fair bit better than
> > what we have now, at least.
> 
> Well, we currently don't even bother checking the preempt count at
> all, and we just optimistically assume that we don't nest in the
> common case. The preempt count is then re-checked in
> __preempt_schedule, I think.
> 
> Which sounds like a fair approach.
> 
> So the code would be simplified to just
> 
>          decl %fs:preempt_count
>          cmpb $0,%fs:need_resched
>          jne .. unlikely branch that calls __preempt_schedule
> 
> which is not horrible. Not *quite* as nice as just doing a single
> "decl+js", but hey, certainly better than what we have now.

OK, so doing per-cpu need_resched is a trivial patch except for the
mwait side of things. Then again, Arjan already asked for per-cpu
need_resched specifically for mwait so we might as well do that.

The only complication is that IIRC Arjan wants to stagger the mwait
cache-lines and we would very much like our preempt_count and
need_resched (and possibly some other __switch_to related things) in the
same cacheline.

Afaict that'll yield a double indirect again :/
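
A rough sketch of the single-cacheline layout being argued for, assuming a
64-byte line; the struct and field names are illustrative only, not the
kernel's actual per-cpu layout:

        #include <stdint.h>

        #define CACHE_LINE_SIZE 64
        #define NR_CPUS_SKETCH  8  /* illustrative */

        /* Hot per-cpu scheduling state packed into one cache line. */
        struct pcpu_sched_hot {
                int32_t preempt_count; /* decremented on every preempt_enable() */
                uint8_t need_resched;  /* byte tested before the slow-path call */
                /* room for other __switch_to-hot fields */
        } __attribute__((aligned(CACHE_LINE_SIZE)));

        static struct pcpu_sched_hot pcpu_sched_hot[NR_CPUS_SKETCH];

        /* With a fixed layout like this, the fields are one segment-relative
         * access away; staggering the mwait lines instead would mean loading
         * a per-cpu pointer first and then the field (the "double indirect"). */
        static inline struct pcpu_sched_hot *cpu_hot(unsigned int cpu)
        {
                return &pcpu_sched_hot[cpu];
        }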
