Message-ID: <CA+55aFwfqnHQtCVVm8dd8W7pOwrGquyxwJk_0MQo=SxiHaYTeA@mail.gmail.com>
Date: Tue, 13 Aug 2013 09:38:09 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <bitbucket@...ine.de>,
Peter Anvin <hpa@...or.com>, Andi Kleen <ak@...ux.intel.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] per-cpu preempt_count

On Tue, Aug 13, 2013 at 9:29 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>
> The inverted need_resched that gives decl+jnz idea from Ingo should do
> it though.

I agree that that is a good approach.
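
To make that concrete, here is a minimal user-space sketch of the
inverted-bit encoding (the names and the percpu symbol are made up for
illustration, not the real kernel interface): the need-resched flag
lives in the top bit of the count, stored inverted, so "the count
dropped to zero and a reschedule is pending" is exactly "the whole word
is zero" - one decl + jnz.

#include <stdio.h>

/* Stored inverted: the bit is *clear* when a reschedule is wanted. */
#define NEED_RESCHED_INV	0x80000000u

/* Illustrative stand-in for the per-cpu preempt count. */
static unsigned int preempt_count = NEED_RESCHED_INV;	/* no resched pending */

static void set_need_resched(void)   { preempt_count &= ~NEED_RESCHED_INV; }
static void clear_need_resched(void) { preempt_count |=  NEED_RESCHED_INV; }

static void preempt_disable(void) { preempt_count++; }

static void preempt_enable(void)
{
	/*
	 * Models "decl <percpu count>; jnz 1f; call preempt_schedule; 1:".
	 * The word is zero only when the nesting count hit zero *and*
	 * the inverted bit is clear, i.e. a reschedule is pending.
	 */
	if (--preempt_count == 0)
		printf("call preempt_schedule()\n");
}

int main(void)
{
	preempt_disable();
	set_need_resched();
	preempt_enable();	/* word hits 0: reschedule */

	clear_need_resched();
	preempt_disable();
	preempt_enable();	/* 0x80000000 left: no call */
	return 0;
}
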
> Not entirely sure I understand your MSB + jns suggestion:
>
> 0x80000002 - 1 = 0x80000001
>
> Both are very much signed and neither wants to cause a reschedule.

The thing is, we don't check the preempt count in the preempt_enable()
test even currently, so the above isn't fatal. The "need preemption"
bit being set (or cleared, with the reversed bit meaning) should be the
unusual case with preemption (you have to hit the race), *and* it
should be unusual to have deeply nested preemption anyway, so it's fine
to test that in the slow path (and we do: preempt_schedule() bails out
if the preempt count is non-zero or irqs are disabled, *exactly*
because the preemption enable check isn't precise).
But avoiding a few sloppy cases is certainly good, even if they are
unusual, so I do like the reversed bit approach. It also allows us to
pick any arbitrary bit, although I'm not sure that matters much.
Linus