Date:	Wed, 28 Nov 2012 09:33:30 +0100
From:	Jens Axboe <axboe@...nel.dk>
To:	Mikulas Patocka <mpatocka@...hat.com>
CC:	Jeff Chua <jeff.chua.linux@...il.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Jan Kara <jack@...e.cz>, lkml <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: Recent kernel "mount" slow

On 2012-11-28 04:57, Mikulas Patocka wrote:
> 
> 
> On Tue, 27 Nov 2012, Jens Axboe wrote:
> 
>> On 2012-11-27 11:06, Jeff Chua wrote:
>>> On Tue, Nov 27, 2012 at 3:38 PM, Jens Axboe <axboe@...nel.dk> wrote:
>>>> On 2012-11-27 06:57, Jeff Chua wrote:
>>>>> On Sun, Nov 25, 2012 at 7:23 AM, Jeff Chua <jeff.chua.linux@...il.com> wrote:
>>>>>> On Sun, Nov 25, 2012 at 5:09 AM, Mikulas Patocka <mpatocka@...hat.com> wrote:
>>>>>>> So it's better to slow down mount.
>>>>>>
>>>>>> I am quite proud of Linux boot time when pitted against other OSes. Even
>>>>>> with 10 partitions, Linux can boot up in just a few seconds, but now
>>>>>> you're saying that we need to do this semaphore check at boot up. Doing
>>>>>> so adds an extra 4 seconds to boot-up.
>>>>>
>>>>> By the way, I'm using a pretty fast SSD (Samsung PM830) and a fast CPU
>>>>> (2.8GHz). I wonder, for those on a slower hard disk or a slower CPU, what
>>>>> kind of degradation this would cause, or would it be about the same?
>>>>
>>>> It'd likely be the same slowdown time-wise, but as a percentage it
>>>> would appear smaller on a slower disk.
>>>>
>>>> Could you please test Mikulas' suggestion of changing
>>>> synchronize_sched() in include/linux/percpu-rwsem.h to
>>>> synchronize_sched_expedited()?
>>>
>>> Tested. It seems as fast as before, but may be a "tick" slower. Just
>>> perception. I was getting pretty much 0.012s with everything reverted.
>>> With synchronize_sched_expedited(), it seems to be 0.012s ~ 0.013s.
>>> So, it's good.
>>
>> Excellent
>>
>>>> linux-next also has a re-write of the per-cpu rw sems, out of Andrew's
>>>> tree. It would be a good data point if you could test that, too.
>>>
>>> Tested. It's slower. 0.350s. But still faster than 0.500s without the patch.
>>
>> Makes sense, it's 2 synchronize_sched() instead of 3. So it doesn't fix
>> the real issue, which is having to do synchronize_sched() in the first
>> place.
>>
>>> # time mount /dev/sda1 /mnt; sync; sync; umount /mnt
>>>
>>>
>>> So, here's the comparison ...
>>>
>>> 0.500s     3.7.0-rc7
>>> 0.168s     3.7.0-rc2
>>> 0.012s     3.6.0
>>> 0.013s     3.7.0-rc7 + synchronize_sched_expedited()
>>> 0.350s     3.7.0-rc7 + Oleg's patch.
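
(For context on the 0.013s line above: the change Jeff tested amounts to one
call in the writer-side slow path of the 3.7-era per-cpu rwsem. Below is a
minimal sketch of that path; the struct layout and the per_cpu_reader_count()
helper are illustrative stand-ins, not the actual contents of
include/linux/percpu-rwsem.h.)

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/delay.h>

/* Sketch only: a 3.7-style per-cpu rwsem writer path. */
struct percpu_rw_semaphore {
	unsigned int __percpu *counters;	/* per-cpu reader counts */
	bool locked;				/* a writer is pending/held */
	struct mutex mtx;			/* serializes writers */
};

/* illustrative helper: sum the per-cpu reader counts */
static unsigned int per_cpu_reader_count(struct percpu_rw_semaphore *p)
{
	unsigned int sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += *per_cpu_ptr(p->counters, cpu);
	return sum;
}

static void percpu_down_write(struct percpu_rw_semaphore *p)
{
	mutex_lock(&p->mtx);
	p->locked = true;

	/*
	 * Wait until every reader either sees ->locked or has left its
	 * critical section before draining the counters.  In 3.7-rc7 this
	 * is synchronize_sched(), a full RCU-sched grace period per
	 * blocksize change; the variant timed at ~0.013s above swaps in
	 * the expedited form instead:
	 *
	 *	synchronize_sched_expedited();
	 */
	synchronize_sched();

	/* spin until all per-cpu reader counts have drained */
	while (per_cpu_reader_count(p))
		msleep(1);
}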
>>
>> I wonder how many of them are due to changing to the same block size.
>> Does the below patch make a difference?
> 
> This patch is wrong because you must check if the device is mapped while 
> holding bdev->bd_block_size_semaphore (because 
> bdev->bd_block_size_semaphore prevents new mappings from being created)

No it doesn't. If you read the patch, that was moved to i_mmap_mutex.
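
(The patch itself isn't reproduced in this excerpt. The locking shape being
argued over is roughly the following; this is a sketch against 3.7-era field
names with a made-up helper name, not the actual patch.)

#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/mutex.h>

/*
 * Sketch only, not the patch from this thread: decide whether the block
 * device is mmap'ed while holding i_mmap_mutex, the lock any new mapping
 * must take, so the answer cannot change underneath us, rather than
 * relying on bd_block_size_semaphore to keep new mappings out.
 */
static int change_blocksize_if_unmapped(struct block_device *bdev, int size)
{
	struct address_space *mapping = bdev->bd_inode->i_mapping;
	int ret = 0;

	mutex_lock(&mapping->i_mmap_mutex);
	if (mapping_mapped(mapping)) {
		/* mapped: the caller falls back to the slow, fully locked path */
		ret = -EBUSY;
	} else {
		/* no new mapping can appear while i_mmap_mutex is held */
		bdev->bd_block_size = size;
		bdev->bd_inode->i_blkbits = blksize_bits(size);
		/* the real set_blocksize() also syncs and drops stale buffers */
	}
	mutex_unlock(&mapping->i_mmap_mutex);

	return ret;
}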

> I'm sending another patch that has the same effect.
> 
> 
> Note that ext[234] filesystems set the blocksize to 1024 temporarily during 
> mount, so it doesn't help much (it only helps for other filesystems, such 
> as jfs). For ext[234], you have a device with a default block size of 4096; 
> the filesystem sets the block size to 1024 during mount, reads the super 
> block, and then sets it back to 4096.

That is true, hence I was hesitant to think it would actually help. In any
case, basically any block device will go through at least one blocksize
transition when it is mounted for the first time. I wonder if we shouldn't
just default to a 4kb soft block size to avoid that one, though that is
working around the issue to some degree.
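
(To make the mount-time dance concrete: the ext[234] behaviour described above
is the usual fill_super pattern. The sketch below is condensed and not the
real ext code; example_fill_super() and the hard-coded values are
illustrative, but sb_min_blocksize()/sb_set_blocksize() are the calls that end
up in set_blocksize(), i.e. one grace-period wait each with the 3.7-rc per-cpu
rwsem.)

#include <linux/fs.h>
#include <linux/buffer_head.h>

/* Sketch only: the typical two-transition mount sequence. */
static int example_fill_super(struct super_block *sb, void *data, int silent)
{
	struct buffer_head *bh;
	int blocksize;

	/* transition #1: drop to 1024 (or the device minimum) to locate the sb */
	blocksize = sb_min_blocksize(sb, 1024);
	if (!blocksize)
		return -EINVAL;

	/* read the block holding the on-disk super block (offset math omitted) */
	bh = sb_bread(sb, 1);
	if (!bh)
		return -EIO;

	/* ... parse the super block; real code reads the wanted size from it ... */
	blocksize = 4096;
	brelse(bh);

	/* transition #2: switch to the filesystem's real block size */
	if (!sb_set_blocksize(sb, blocksize))
		return -EINVAL;

	return 0;
}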

-- 
Jens Axboe

