Message-ID: <Pine.LNX.4.64.1211272257140.18338@file.rdu.redhat.com>
Date:	Tue, 27 Nov 2012 22:59:52 -0500 (EST)
From:	Mikulas Patocka <mpatocka@...hat.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
cc:	Jeff Chua <jeff.chua.linux@...il.com>,
	Jens Axboe <axboe@...nel.dk>,
	Lai Jiangshan <laijs@...fujitsu.com>, Jan Kara <jack@...e.cz>,
	lkml <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: [PATCH 1/2] percpu-rwsem: use synchronize_sched_expedited



On Tue, 27 Nov 2012, Jeff Chua wrote:

> On Tue, Nov 27, 2012 at 3:38 PM, Jens Axboe <axboe@...nel.dk> wrote:
> > On 2012-11-27 06:57, Jeff Chua wrote:
> >> On Sun, Nov 25, 2012 at 7:23 AM, Jeff Chua <jeff.chua.linux@...il.com> wrote:
> >>> On Sun, Nov 25, 2012 at 5:09 AM, Mikulas Patocka <mpatocka@...hat.com> wrote:
> >>>> So it's better to slow down mount.
> >>>
> >>> I am quite proud of the Linux boot time compared against other OSes.
> >>> Even with 10 partitions, Linux can boot up in just a few seconds, but
> >>> now you're saying that we need to do this semaphore check at boot up.
> >>> By doing so, it's inducing an additional 4 seconds during boot up.
> >>
> >> By the way, I'm using a pretty fast SSD (Samsung PM830) and a fast CPU
> >> (2.8GHz). I wonder, for those on a slower hard disk or a slower CPU,
> >> what kind of degradation this would cause, or would it be the same?
> >
> > It'd likely be the same slowdown time-wise, but as a percentage it
> > would appear smaller on a slower disk.
> >
> > Could you please test Mikulas' suggestion of changing
> > synchronize_sched() in include/linux/percpu-rwsem.h to
> > synchronize_sched_expedited()?
> 
> Tested. It seems as fast as before, though maybe a "tick" slower; that
> may just be perception. I was getting pretty much 0.012s with everything
> reverted. With synchronize_sched_expedited(), it seems to be 0.012s ~
> 0.013s. So, it's good.
> 
> 
> > linux-next also has a re-write of the per-cpu rw sems, out of Andrew's
> > tree. It would be a good data point if you could test that, too.
> 
> Tested. It's slower: 0.350s. But still faster than the 0.500s without the patch.
> 
> # time mount /dev/sda1 /mnt; sync; sync; umount /mnt
> 
> 
> So, here's the comparison ...
> 
> 0.500s     3.7.0-rc7
> 0.168s     3.7.0-rc2
> 0.012s     3.6.0
> 0.013s     3.7.0-rc7 + synchronize_sched_expedited()
> 0.350s     3.7.0-rc7 + Oleg's patch
> 
> 
> Thanks,
> Jeff.

OK, I'm sending two patches to reduce mount times. If it is possible to 
put them into 3.7.0, please do.

Mikulas

---

percpu-rwsem: use synchronize_sched_expedited

Use synchronize_sched_expedited() instead of synchronize_sched()
to improve mount speed.

This patch improves mount time from 0.500s to 0.013s.

Note: if the realtime people complain about the use of
synchronize_sched_expedited() and synchronize_rcu_expedited(), I suggest
that they introduce an option CONFIG_REALTIME or
/proc/sys/kernel/realtime and turn off these *_expedited functions when
the option is enabled (i.e. turn synchronize_sched_expedited into
synchronize_sched and synchronize_rcu_expedited into synchronize_rcu).
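
Something like this untested sketch would do; the "realtime_mode" flag
and the wrapper name are made up for illustration, nothing below is part
of this patch:

	#include <linux/rcupdate.h>

	/* hypothetical flag, e.g. backed by /proc/sys/kernel/realtime */
	extern int realtime_mode;

	static inline void heavy_sync_sched(void)
	{
		if (realtime_mode)
			synchronize_sched();		/* no IPI storm, kinder to RT latency */
		else
			synchronize_sched_expedited();	/* fast, but pokes all CPUs */
	}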

Signed-off-by: Mikulas Patocka <mpatocka@...hat.com>

---
 include/linux/percpu-rwsem.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux-3.7-rc7/include/linux/percpu-rwsem.h
===================================================================
--- linux-3.7-rc7.orig/include/linux/percpu-rwsem.h	2012-11-28 02:41:03.000000000 +0100
+++ linux-3.7-rc7/include/linux/percpu-rwsem.h	2012-11-28 02:41:15.000000000 +0100
@@ -13,7 +13,7 @@ struct percpu_rw_semaphore {
 };
 
 #define light_mb()	barrier()
-#define heavy_mb()	synchronize_sched()
+#define heavy_mb()	synchronize_sched_expedited()
 
 static inline void percpu_down_read(struct percpu_rw_semaphore *p)
 {
@@ -51,7 +51,7 @@ static inline void percpu_down_write(str
 {
 	mutex_lock(&p->mtx);
 	p->locked = true;
-	synchronize_sched(); /* make sure that all readers exit the rcu_read_lock_sched region */
+	synchronize_sched_expedited(); /* make sure that all readers exit the rcu_read_lock_sched region */
 	while (__percpu_count(p->counters))
 		msleep(1);
 	heavy_mb(); /* C, between read of p->counter and write to data, paired with B */
--
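For reference, this is how the API above is meant to be used: the read
side stays cheap while the write side pays the full cost of the grace
period. An untested sketch (the semaphore and function names are
illustrative; initialization via percpu_init_rwsem() is omitted):

	#include <linux/percpu-rwsem.h>

	static struct percpu_rw_semaphore my_sem;

	static void reader(void)
	{
		percpu_down_read(&my_sem);	/* per-cpu counter bump under rcu_read_lock_sched */
		/* ... read shared state ... */
		percpu_up_read(&my_sem);
	}

	static void writer(void)
	{
		percpu_down_write(&my_sem);	/* waits for all readers to drain */
		/* ... modify shared state ... */
		percpu_up_write(&my_sem);
	}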
