Message-ID: <20080507171443.GA12072@elte.hu>
Date: Wed, 7 May 2008 19:14:43 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andi Kleen <andi@...stfloor.org>, Matthew Wilcox <matthew@....cx>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>,
Alexander Viro <viro@....linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: AIM7 40% regression with 2.6.26-rc1

* Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> > But my preferred option would indeed be just turning it back into a
> > spinlock - and screw latency and BKL preemption - and having the RT
> > people who care deeply just work on removing the BKL in the long
> > run.
>
> Here's a trial balloon patch to do that.

Here's a simpler trial balloon test-patch (well, hack) that is also
reasonably well tested. It turns the BKL into a "spin-semaphore". If
this resolves the performance problem then it's all due to the BKL's
scheduling/preemption properties.
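
To make the "spin-semaphore" idiom concrete, here is a minimal,
self-contained userspace sketch (an illustrative analogue, not kernel
code): a POSIX semaphore stands in for kernel_sem, sem_trywait() plays
the role of down_trylock(), and a compiler pause hint that of
cpu_relax(); the helper name spin_down() is purely made up for the
example.

	/* build with: gcc -pthread spin_sem.c */
	#include <semaphore.h>
	#include <errno.h>

	static sem_t kernel_sem;	/* stand-in for the BKL semaphore */

	/*
	 * Spin-acquire: poll the semaphore instead of sleeping on it,
	 * so the caller never schedules away while contending.
	 */
	static void spin_down(sem_t *sem)
	{
		while (sem_trywait(sem) != 0) {
			if (errno != EAGAIN)
				return;		/* unexpected error; bail out */
	#if defined(__x86_64__) || defined(__i386__)
			__builtin_ia32_pause();	/* analogue of cpu_relax() */
	#endif
		}
	}

	int main(void)
	{
		sem_init(&kernel_sem, 0, 1);	/* count 1: behaves like a mutex */

		spin_down(&kernel_sem);	/* burns CPU while contended, never sleeps */
		/* ... critical section ... */
		sem_post(&kernel_sem);	/* release, like up(&kernel_sem) */

		sem_destroy(&kernel_sem);
		return 0;
	}

The scheduling difference is the whole point: a blocking down() puts a
contending task to sleep and triggers a reschedule, while this loop
keeps the task runnable and spinning on its CPU, exactly like a
spinlock would.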

This approach is ugly (it's just a more expensive spinlock), but has an
advantage: the code logic is obviously correct, and it would also make
it much easier later on to turn the BKL back into a sleeping lock again
- once the TTY code's BKL use is fixed. (I think Alan said it might
happen in the next few months.) The BKL is more expensive than a simple
spinlock anyway.

	Ingo
------------->
Subject: BKL: spin on acquire
From: Ingo Molnar <mingo@...e.hu>
Date: Wed May 07 19:05:40 CEST 2008

NOT-Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
 lib/kernel_lock.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Index: linux/lib/kernel_lock.c
===================================================================
--- linux.orig/lib/kernel_lock.c
+++ linux/lib/kernel_lock.c
@@ -46,7 +46,8 @@ int __lockfunc __reacquire_kernel_lock(v
 	task->lock_depth = -1;
 	preempt_enable_no_resched();
 
-	down(&kernel_sem);
+	while (down_trylock(&kernel_sem))
+		cpu_relax();
 
 	preempt_disable();
 	task->lock_depth = saved_lock_depth;
@@ -67,11 +68,13 @@ void __lockfunc lock_kernel(void)
 	struct task_struct *task = current;
 	int depth = task->lock_depth + 1;
 
-	if (likely(!depth))
+	if (likely(!depth)) {
 		/*
 		 * No recursion worries - we set up lock_depth _after_
 		 */
-		down(&kernel_sem);
+		while (down_trylock(&kernel_sem))
+			cpu_relax();
+	}
 
 	task->lock_depth = depth;
 }