Message-ID: <50B46E05.70906@kernel.dk>
Date: Tue, 27 Nov 2012 08:38:45 +0100
From: Jens Axboe <axboe@...nel.dk>
To: Jeff Chua <jeff.chua.linux@...il.com>
CC: Mikulas Patocka <mpatocka@...hat.com>,
Lai Jiangshan <laijs@...fujitsu.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jan Kara <jack@...e.cz>, lkml <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: Recent kernel "mount" slow
On 2012-11-27 06:57, Jeff Chua wrote:
> On Sun, Nov 25, 2012 at 7:23 AM, Jeff Chua <jeff.chua.linux@...il.com> wrote:
>> On Sun, Nov 25, 2012 at 5:09 AM, Mikulas Patocka <mpatocka@...hat.com> wrote:
>>> So it's better to slow down mount.
>>
>> I am quite proud of Linux's boot time pitted against other OSes. Even
>> with 10 partitions, Linux can boot up in just a few seconds, but now
>> you're saying we need to do this semaphore check at boot. Doing so
>> adds an extra 4 seconds to the boot.
>
> By the way, I'm using a pretty fast SSD (Samsung PM830) and a fast CPU
> (2.8GHz). I wonder what kind of degradation those on a slower hard
> disk or slower CPU would see, or whether it would be the same.
It'd likely be the same slowdown in absolute time, but as a percentage
it would appear smaller on a slower disk.
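(Illustrative numbers only: an extra 4 seconds on a 10-second SSD boot
is a 40% hit, while the same 4 seconds on a 60-second spinning-disk
boot is under 7%.)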
Could you please test Mikulas' suggestion of changing
synchronize_sched() in include/linux/percpu-rwsem.h to
synchronize_sched_expedited()?
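For reference, synchronize_sched() blocks until a full RCU-sched grace
period has elapsed (typically several milliseconds per call), while
synchronize_sched_expedited() forces the grace period through by
sending IPIs to all CPUs and returns much sooner, trading boot-time
latency for a system-wide disturbance. The change amounts to a one-line
swap; a sketch of the hunk (its placement within the writer-side slow
path is approximate, not a verbatim patch):

--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ writer-side slow path @@
-	synchronize_sched();
+	synchronize_sched_expedited();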
linux-next also has a rewrite of the per-cpu rw sems, out of Andrew's
tree. It would be a good data point if you could test that, too.
In any case, the slowdown definitely isn't acceptable. Fixing an
obscure issue like the block size changing while O_DIRECT is in flight
definitely does NOT warrant a mount slowdown.
--
Jens Axboe