Message-ID: <20080507184304.GA15554@elte.hu>
Date:	Wed, 7 May 2008 20:43:04 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Matthew Wilcox <matthew@....cx>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"J. Bruce Fields" <bfields@...i.umich.edu>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Alexander Viro <viro@....linux.org.uk>,
	linux-fsdevel@...r.kernel.org
Subject: Re: AIM7 40% regression with 2.6.26-rc1


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> > [ this patch should in fact be a bit worse, because there's two more
> >   atomics in the fastpath - the fastpath atomics of the old 
> >   semaphore code. ]
> 
> Well, it doesn't have the irq stuff, which is also pretty costly. 
> Also, it doesn't nest the accesses the same way (with the counts being 
> *inside* the spinlock and serialized against each other), so I'm not 
> 100% sure you'd get the same behaviour.
> 
> But yes, it certainly has the potential to show the same slowdown. But 
> it's not a very good patch, since not showing it doesn't really prove 
> much.

Ok, the one below does the irq ops and the counter behavior - and because 
the critical section also has the old-semaphore atomics, I think this 
should definitely be a more expensive fastpath than what the new generic 
code introduces. So if this patch produces a 40% AIM7 slowdown on 
v2.6.25, it's the fastpath overhead (and its effects on slowpath 
probability) that makes the difference.
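
As a rough user-space illustration of the kind of fastpath cost this 
adds - a lock/unlock pair bouncing a shared counter's cacheline between 
CPUs - something like the sketch below could be timed. (Hypothetical 
harness, not part of the patch: the file name, NTHREADS and ITERS are 
made up, and it cannot model the irq-disable cost of 
spin_lock_irqsave(), only the locked atomics and the cacheline traffic.)

/*
 * bkl-cost.c: compare a spinlock-protected shared counter against a
 * plain atomic add, from NTHREADS threads. Build: gcc -O2 -pthread
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS	4
#define ITERS		1000000L

static pthread_spinlock_t lock;
static volatile long locked_count;
static long atomic_count;

/* the patched BKL fastpath analogue: lock, bump counter, unlock */
static void *locked_worker(void *arg)
{
	for (long i = 0; i < ITERS; i++) {
		pthread_spin_lock(&lock);
		locked_count++;
		pthread_spin_unlock(&lock);
	}
	return NULL;
}

/* baseline: a single lock-prefixed atomic increment */
static void *atomic_worker(void *arg)
{
	for (long i = 0; i < ITERS; i++)
		__atomic_fetch_add(&atomic_count, 1, __ATOMIC_RELAXED);
	return NULL;
}

/* run NTHREADS copies of fn concurrently, return wall-clock seconds */
static double run(void *(*fn)(void *))
{
	pthread_t tid[NTHREADS];
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, fn, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	printf("spinlock+counter: %.3fs\n", run(locked_worker));
	printf("plain atomic add: %.3fs\n", run(atomic_worker));
	return 0;
}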

	Ingo

------------------->
Subject: add BKL atomic overhead
From: Ingo Molnar <mingo@...e.hu>
Date: Wed May 07 20:09:13 CEST 2008

NOT-Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
 lib/kernel_lock.c |   21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

Index: linux-2.6.25/lib/kernel_lock.c
===================================================================
--- linux-2.6.25.orig/lib/kernel_lock.c
+++ linux-2.6.25/lib/kernel_lock.c
@@ -24,6 +24,8 @@
  * Don't use in new code.
  */
 static DECLARE_MUTEX(kernel_sem);
+static int global_count;
+static DEFINE_SPINLOCK(global_lock);
 
 /*
  * Re-acquire the kernel semaphore.
@@ -39,6 +41,7 @@ int __lockfunc __reacquire_kernel_lock(v
 {
 	struct task_struct *task = current;
 	int saved_lock_depth = task->lock_depth;
+	unsigned long flags;
 
 	BUG_ON(saved_lock_depth < 0);
 
@@ -47,6 +50,10 @@ int __lockfunc __reacquire_kernel_lock(v
 
 	down(&kernel_sem);
 
+	spin_lock_irqsave(&global_lock, flags);
+	global_count++;
+	spin_unlock_irqrestore(&global_lock, flags);
+
 	preempt_disable();
 	task->lock_depth = saved_lock_depth;
 
@@ -55,6 +62,12 @@ int __lockfunc __reacquire_kernel_lock(v
 
 void __lockfunc __release_kernel_lock(void)
 {
+	unsigned long flags;
+
+	spin_lock_irqsave(&global_lock, flags);
+	global_count--;
+	spin_unlock_irqrestore(&global_lock, flags);
+
 	up(&kernel_sem);
 }
 
@@ -66,12 +79,18 @@ void __lockfunc lock_kernel(void)
 	struct task_struct *task = current;
 	int depth = task->lock_depth + 1;
+	unsigned long flags;
 
-	if (likely(!depth))
+	if (likely(!depth)) {
 		/*
 		 * No recursion worries - we set up lock_depth _after_
 		 */
 		down(&kernel_sem);
 
+		spin_lock_irqsave(&global_lock, flags);
+		global_count++;
+		spin_unlock_irqrestore(&global_lock, flags);
+	}
+
 	task->lock_depth = depth;
 }
 
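For reference, this is roughly how lock_kernel() reads with the patch 
above applied (reconstructed from the diff; only the comment on the 
counter section is added here):

void __lockfunc lock_kernel(void)
{
	struct task_struct *task = current;
	int depth = task->lock_depth + 1;
	unsigned long flags;

	if (likely(!depth)) {
		/*
		 * No recursion worries - we set up lock_depth _after_
		 */
		down(&kernel_sem);

		/* simulate the extra fastpath atomics: irq-safe counter */
		spin_lock_irqsave(&global_lock, flags);
		global_count++;
		spin_unlock_irqrestore(&global_lock, flags);
	}

	task->lock_depth = depth;
}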
--