Message-ID: <AANLkTimwSNsjR5PnVP8JvzwEBwy3VtDp4DvNecKsTnxJ@mail.gmail.com>
Date:	Tue, 25 Jan 2011 12:26:38 +1100
From:	Nick Piggin <npiggin@...il.com>
To:	Shaohua Li <shaohua.li@...el.com>
Cc:	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	lkml <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Nick Piggin <npiggin@...nel.dk>,
	"Chen, Tim C" <tim.c.chen@...el.com>
Subject: Re: more dput lock contentions in 2.6.38-rc?

On Tue, Jan 25, 2011 at 12:11 PM, Shaohua Li <shaohua.li@...el.com> wrote:
> On Tue, 2011-01-25 at 09:04 +0800, Nick Piggin wrote:
>> On Tue, Jan 25, 2011 at 11:35 AM, Shaohua Li <shaohua.li@...el.com> wrote:
>> > Hi,
>> > we are testing the dbench benchmark and see a big drop in 2.6.38-rc
>> > compared to 2.6.37 on several machines with 2 or 4 sockets. We have 12
>> > disks mounted at /mnt/stp/dbenchdata/sd*/ and dbench runs against data
>> > on those disks. According to perf, we see more lock contention:
>> > In 2.6.37: 13.00%        dbench  [kernel.kallsyms]   [k] _raw_spin_lock
>> > In 2.6.38-rc: 69.45%        dbench  [kernel.kallsyms]   [k] _raw_spin_lock
>> > -     69.45%        dbench  [kernel.kallsyms]   [k] _raw_spin_lock
>> >   - _raw_spin_lock
>> >      - 48.41% dput
>> >         - 61.17% path_put
>> >            - 60.47% do_path_lookup
>> >               + 53.18% user_path_at
>> >               + 42.13% do_filp_open
>> >               + 4.69% user_path_parent
>>
>> What filesystems are mounted on the path?
> ext3 or ext4

Is it ext3 or 4 along every step of the path? Are there
any ACLs loaded, or a security policy running?

It may be that they're all coming from
/proc/ access.

>
>> >            - 35.56% d_path
>> >                 seq_path
>> >                 show_vfsmnt
>> >                 seq_read
>> >                 vfs_read
>> >                 sys_read
>> >                 system_call_fastpath
>> >                 __GI___libc_read
>>
>> This guy is from glibc's statvfs call, which dbench uses. It
>> parses /proc/mounts for mount flags, which is racy (and
>> not a good idea to do with any frequency).
>>
>> A patch went into the kernel that allows glibc to get the
>> flags directly. Not sure about glibc status; I imagine it
>> will get there in another decade or two... Can you try
>> commenting it out of the dbench source code?
> Sure, maybe after the Chinese New Year holiday, sorry.

No problem.

Thanks,
Nick
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
