Message-Id: <200806201014.43576.borntraeger@de.ibm.com>
Date:	Fri, 20 Jun 2008 10:14:43 +0200
From:	Christian Borntraeger <borntraeger@...ibm.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	Martin Schwidefsky <schwidefsky@...ibm.com>
Subject: lmbench regression due to cond_resched nullification change, 2.6.26-rc5 vs. 2.6.25

Hello Linus,

On a 6-way s390 I have seen an interesting regression in the lmbench benchmark 
on 2.6.26-rc5 vs. 2.6.25.

For example, the "select file 500" test:
2.6.25:      23 microseconds
2.6.26-rc5:  32 microseconds
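
To make it concrete, that test is essentially timing select() over 500 file 
descriptors that are all ready. A rough user-space sketch of the idea (my own 
toy code, not the lmbench source; the /dev/zero trick, the constants and all 
names are just my assumptions about what the benchmark does):
-------------------------<snip---------------------
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/select.h>

#define NFDS	500		/* the "500" from the lmbench case above */
#define ITERS	100000

int main(void)
{
	int fds[NFDS], maxfd = 0;
	struct timespec t0, t1;

	for (int i = 0; i < NFDS; i++) {
		fds[i] = open("/dev/zero", O_RDONLY);	/* always readable */
		if (fds[i] < 0)
			return 1;
		if (fds[i] > maxfd)
			maxfd = fds[i];
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ITERS; i++) {
		fd_set set;
		struct timeval tv = { 0, 0 };	/* poll, do not block */

		FD_ZERO(&set);
		for (int j = 0; j < NFDS; j++)
			FD_SET(fds[j], &set);
		select(maxfd + 1, &set, NULL, NULL, &tv);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.2f us per select()\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e6 +
		(t1.tv_nsec - t0.tv_nsec) / 1e3) / ITERS);
	return 0;
}
-------------------------<snip---------------------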

Several lmbench tests show a regression, but so far I have only bisected the 
select test case:
-------------------------<snip---------------------

commit c714a534d85576af21b06be605ca55cb2fb887ee
Author: Linus Torvalds <torvalds@...ux-foundation.org>
Date:   Mon May 12 13:34:13 2008 -0700

    Make 'cond_resched()' nullification depend on PREEMPT_BKL

    Because it's not correct with a non-preemptable BKL and just causes
    PREEMPT kernels to have longer latencies than non-PREEMPT ones (which is
    obviously not the point of it at all).

    Of course, that config option actually got removed as an option earlier,
    so for now this basically disables it entirely, but if BKL preemption is
    ever resurrected it will be a meaningful optimization.  And in the
    meantime, it at least documents the intent of the code, while not doing
    the wrong thing.

    Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5a63f2d..5395a61 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2038,7 +2038,7 @@ static inline int need_resched(void)
  * cond_resched_softirq() will enable bhs before scheduling.
  */
 extern int _cond_resched(void);
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_BKL
 static inline int cond_resched(void)
 {
        return 0;
-------------------------<snip---------------------
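
As far as I can tell from that hunk plus the commit message, the effective 
change is roughly this (my paraphrase of include/linux/sched.h; since 
CONFIG_PREEMPT_BKL no longer exists as a config option, the #ifdef branch is 
never taken anymore):
-------------------------<snip---------------------
/* 2.6.25, CONFIG_PREEMPT=y: cond_resched() was an inline no-op */
static inline int cond_resched(void)
{
	return 0;
}

/* 2.6.26-rc5, any config: the #ifdef is never true, so every caller
 * now makes an out-of-line call into the scheduler. */
static inline int cond_resched(void)
{
	return _cond_resched();
}
-------------------------<snip---------------------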

Reverting that patch gives me the 2.6.25 performance. 

I think the patch is fine from the correctness point of view (reschedule 
inside BKL-protected sections if it is safe), but I don't understand why it 
has such a large impact on the select microbenchmark. Any ideas? Is it simply 
the overhead of _cond_resched?
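
For reference, _cond_resched() in kernel/sched.c looks roughly like this (my 
reading of the 2.6.26 source, so take it with a grain of salt):
-------------------------<snip---------------------
int __sched _cond_resched(void)
{
	/* Only reschedule when it is actually needed and safe to do so. */
	if (need_resched() && !(preempt_count() & PREEMPT_ACTIVE) &&
					system_state == SYSTEM_RUNNING) {
		__cond_resched();
		return 1;
	}
	return 0;
}
-------------------------<snip---------------------
So each call that used to compile away on CONFIG_PREEMPT is now an out-of-line 
call plus a need_resched()/preempt_count()/system_state test. If I read 
fs/select.c right, do_select() calls cond_resched() once per word of the 
descriptor set on every scan, so the select test hits this path several times 
per iteration.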

Christian
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
