Message-ID: <alpine.LFD.1.10.0805071049180.3024@woody.linux-foundation.org>
Date: Wed, 7 May 2008 10:55:26 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Ingo Molnar <mingo@...e.hu>
cc: Andi Kleen <andi@...stfloor.org>, Matthew Wilcox <matthew@....cx>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>,
Alexander Viro <viro@....linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: AIM7 40% regression with 2.6.26-rc1
On Wed, 7 May 2008, Ingo Molnar wrote:
>
> no, there was another problem (which I couldn't immediately find because
> lkml.org only indexes part of the threads, I'll research it some more),
> which was some cond_resched() thing in the !PREEMPT_BKL case.
Hmm. I do agree that _cond_resched() looks a bit iffy, although in a safe
way. It uses just
!(preempt_count() & PREEMPT_ACTIVE)
to see whether it can schedule, and it should probably use in_atomic()
which ignores the kernel lock.
But right now, that whole thing is disabled if PREEMPT is on anyway, so in
effect (with my test patch, at least) cond_resched() would just be a no-op
if PREEMPT is on, even if BKL isn't preemptable.
So it doesn't look buggy, but it looks like it might cause longer
latencies than strictly necessary. And if somebody depends on
cond_resched() to avoid some bad livelock situation, that would obviously
not work (but that sounds like a fundamental bug anyway, I really hope
nobody has ever written their code that way).
> The !PREEMPT_BKL crash was some simple screwup on my part of getting
> atomicity checks wrong in cond_resched() - and it went unnoticed for a
> long time - or something like that. I'll try to find that discussion.
Yes, some silly bug sounds more likely. Especially considering how many
different cases there were (semaphores vs spinlocks vs preemptable
spinlocks).
Linus