Message-ID: <51836D74.2030409@intel.com>
Date:	Fri, 03 May 2013 15:55:32 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	mingo@...hat.com, tglx@...utronix.de, akpm@...ux-foundation.org,
	arjan@...ux.intel.com, bp@...en8.de, pjt@...gle.com,
	namhyung@...nel.org, efault@....de, morten.rasmussen@....com,
	vincent.guittot@...aro.org, gregkh@...uxfoundation.org,
	preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
	linux-kernel@...r.kernel.org, len.brown@...el.com,
	rafael.j.wysocki@...el.com, jkosina@...e.cz,
	clark.williams@...il.com, tony.luck@...el.com,
	keescook@...omium.org, mgorman@...e.de, riel@...hat.com
Subject: Re: [PATCH v4 0/6] sched: use runnable load based balance


> That should probably look like:
> 
> 	preempt_disable();
> 	raw_spin_unlock_irq();
> 	preempt_enable_no_resched();
> 	schedule();
> 
> Otherwise you might find a performance regression on PREEMPT=y kernels.

Yes, right!
Thanks a lot for the reminder. Without the preempt_disable() pair,
raw_spin_unlock_irq() re-enables preemption and can reschedule us right
before the explicit schedule(), costing an extra context switch on
PREEMPT=y kernels. The following patch fixes it.
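
For my own notes, this is how I read the preempt count through that
sequence on a CONFIG_PREEMPT=y kernel (counts are illustrative, assuming
we enter with wait_lock held, i.e. preempt_count == 1):

	preempt_disable();              /* count 1 -> 2 */
	raw_spin_unlock_irq(&sem->wait_lock);
	/* the preempt_enable() inside the unlock drops count 2 -> 1;
	 * since the count is still non-zero, no preempt_schedule() */
	preempt_enable_no_resched();    /* count 1 -> 0, no resched check */
	schedule();                     /* the one intended context switch */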
> 
> OK, so what I was asking after is if you changed the scheduler after PJTs
> patches landed to deal with this bulk wakeup. Also while aim7 might no longer
> trigger the bad pattern what is to say nothing ever will? In particular
> anything using pthread_cond_broadcast() is known to be suspect of bulk wakeups.

Just found a benchmark named pthread_cond_broadcast:
http://kristiannielsen.livejournal.com/13577.html. Will play with it. :)
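
To be sure I test the right pattern, below is my understanding of the
bulk wakeup as a minimal sketch (just an illustration, not Kristian's
actual benchmark): N threads sleep on one condvar, and a single
broadcast makes them all runnable at once.

	#include <pthread.h>
	#include <unistd.h>

	#define NTHREADS 64

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
	static int go;

	static void *waiter(void *arg)
	{
		pthread_mutex_lock(&lock);
		while (!go)	/* guard against spurious wakeups */
			pthread_cond_wait(&cond, &lock);
		pthread_mutex_unlock(&lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t tid[NTHREADS];
		int i;

		for (i = 0; i < NTHREADS; i++)
			pthread_create(&tid[i], NULL, waiter, NULL);

		sleep(1);	/* crude: let all waiters block first */

		pthread_mutex_lock(&lock);
		go = 1;
		pthread_cond_broadcast(&cond);	/* bulk wakeup: N tasks at once */
		pthread_mutex_unlock(&lock);

		for (i = 0; i < NTHREADS; i++)
			pthread_join(tid[i], NULL);
		return 0;
	}

(Build with gcc -O2 -pthread; the interesting part for the load balancer
is where those N newly-runnable tasks end up.)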
> 
> Anyway, I'll go try and make sense of some of the actual patches.. :-)
> 

---

From 4c9b4b8a9b92bcbe6934637fd33c617e73dbda97 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@...el.com>
Date: Fri, 3 May 2013 14:51:25 +0800
Subject: [PATCH 8/8] rwsem: small optimization in rwsem_down_failed_common

Peter Zijlstra suggested wrapping the raw_spin_unlock_irq() before
schedule() in preempt_disable()/preempt_enable_no_resched() to prevent
an unnecessary reschedule inside the unlock. We can also keep wait_lock
held when entering the wait loop, combining two lock acquisitions into
one. This patch does both.

Thanks Peter!

Signed-off-by: Alex Shi <alex.shi@...el.com>
---
 lib/rwsem.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/lib/rwsem.c b/lib/rwsem.c
index ad5e0df..9aacf81 100644
--- a/lib/rwsem.c
+++ b/lib/rwsem.c
@@ -212,23 +212,25 @@ rwsem_down_failed_common(struct rw_semaphore *sem,
 		 adjustment == -RWSEM_ACTIVE_WRITE_BIAS)
 		sem = __rwsem_do_wake(sem, RWSEM_WAKE_READ_OWNED);
 
-	raw_spin_unlock_irq(&sem->wait_lock);
-
 	/* wait to be given the lock */
 	for (;;) {
-		if (!waiter.task)
+		if (!waiter.task) {
+			raw_spin_unlock_irq(&sem->wait_lock);
 			break;
+		}
 
-		raw_spin_lock_irq(&sem->wait_lock);
-		/* Try to get the writer sem, may steal from the head writer: */
+		/* Try to get the writer sem, may steal from the head writer */
 		if (flags == RWSEM_WAITING_FOR_WRITE)
 			if (try_get_writer_sem(sem, &waiter)) {
 				raw_spin_unlock_irq(&sem->wait_lock);
 				return sem;
 			}
+		preempt_disable();
 		raw_spin_unlock_irq(&sem->wait_lock);
+		preempt_enable_no_resched();
 		schedule();
 		set_task_state(tsk, TASK_UNINTERRUPTIBLE);
+		raw_spin_lock_irq(&sem->wait_lock);
 	}
 
 	tsk->state = TASK_RUNNING;
-- 
1.7.12
