Message-ID: <20080508084607.GA31515@elte.hu>
Date: Thu, 8 May 2008 10:46:07 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Mike Galbraith <efault@....de>
Cc: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: sysbench+mysql(oltp, readonly) 30% regression with 2.6.26-rc1
* Mike Galbraith <efault@....de> wrote:
> > With the patch, the oltp testing result is about 50% worse than that
> > of pure 2.6.26-rc1.
>
> Hm. I was doing some sysbench+postgres(oltp, ro) testing on my
> little Q6600 box this morning, and saw a different picture.
>
> In the attached pdf, .bkl refers to Linus' BKL patch, .weight is the
> weight fix, both are applied to git.today. The script I used is also
> attached.
Nice numbers. I'm curious about the following detail related to Linus's
BKL-spinlock patch: am I correct that it shows a small but systematic
improvement with many clients? Sysbench has not shown sensitivity to any
BKL detail before.
Could you perhaps try the hack below, which uses /proc/sys/kernel/panic
as a flag for whether the BKL should block or spin on acquisition? Can
you confirm that sysbench shows sensitivity to the value of the panic
flag with that patch as well? [This is the surest way to measure such
effects, as flipping the sysctl impacts the system only minimally.]
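
For illustration, here is a minimal userspace sketch of that measurement
flow, assuming a kernel with the hack below applied. The sysbench
invocation and the nonzero value 300 are placeholders, not from the
original setup; note that while the flag is nonzero, a real panic would
also auto-reboot the box after 300 seconds, since the hack reuses
panic_timeout.

/*
 * Minimal sketch (not part of the original mail): flip the
 * spin/block flag between sysbench runs. Assumes the patched
 * kernel spins on the BKL whenever /proc/sys/kernel/panic
 * (i.e. panic_timeout) is nonzero. Run as root; the sysbench
 * command line and the value 300 are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>

static void set_panic_flag(int value)
{
	FILE *f = fopen("/proc/sys/kernel/panic", "w");

	if (!f) {
		perror("/proc/sys/kernel/panic");
		exit(1);
	}
	fprintf(f, "%d\n", value);
	fclose(f);
}

int main(void)
{
	set_panic_flag(0);	/* blocking BKL (plain down()) */
	system("sysbench --test=oltp --oltp-read-only=on run");

	set_panic_flag(300);	/* spinning BKL (down_trylock loop) */
	system("sysbench --test=oltp --oltp-read-only=on run");

	return 0;
}

Reusing an existing sysctl keeps the hack small, and flipping it mid-run
perturbs the system far less than booting a different kernel would.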
If sysbench truly is dependent on the BKL, then that's another argument
in favor of Linus's patch.
Ingo
-------------------------->
Subject: BKL: spin on acquire
From: Ingo Molnar <mingo@...e.hu>
Date: Wed May 07 19:05:40 CEST 2008
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
 lib/kernel_lock.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)
Index: linux/lib/kernel_lock.c
===================================================================
--- linux.orig/lib/kernel_lock.c
+++ linux/lib/kernel_lock.c
@@ -46,7 +46,11 @@ int __lockfunc __reacquire_kernel_lock(v
 	task->lock_depth = -1;
 	preempt_enable_no_resched();
 
-	down(&kernel_sem);
+	if (panic_timeout) {
+		while (down_trylock(&kernel_sem))
+			cpu_relax();
+	} else
+		down(&kernel_sem);
 
 	preempt_disable();
 	task->lock_depth = saved_lock_depth;
@@ -67,11 +71,16 @@ void __lockfunc lock_kernel(void)
 	struct task_struct *task = current;
 	int depth = task->lock_depth + 1;
 
-	if (likely(!depth))
+	if (likely(!depth)) {
 		/*
 		 * No recursion worries - we set up lock_depth _after_
 		 */
-		down(&kernel_sem);
+		if (panic_timeout) {
+			while (down_trylock(&kernel_sem))
+				cpu_relax();
+		} else
+			down(&kernel_sem);
+	}
 
 	task->lock_depth = depth;
 }
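
For reference, the core of the change is the classic trylock-and-spin
pattern: instead of sleeping in down() and paying a scheduler wakeup on
release, we busy-wait with cpu_relax() until the semaphore is free. A
userspace analogue of the same pattern, sketched with a POSIX mutex
standing in for kernel_sem and a plain int standing in for
panic_timeout:

/*
 * Userspace analogue (illustrative sketch only): spin on
 * trylock instead of blocking, gated by a runtime flag,
 * mirroring the down_trylock()/cpu_relax() loop above.
 */
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int spin_flag;	/* stand-in for panic_timeout */

static void acquire(void)
{
	if (spin_flag) {
		/* burn CPU instead of sleeping; cheap if hold times are short */
		while (pthread_mutex_trylock(&lock))
			;	/* a cpu_relax() equivalent would go here */
	} else {
		/* sleep until the holder releases the lock */
		pthread_mutex_lock(&lock);
	}
}

Spinning pays off when the lock is held only briefly, so that the
sleep/wakeup round trip through the scheduler dominates the cost;
presumably that is what sysbench is hitting here.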
--