Message-ID: <1295915725.1949.967.camel@sli10-conroe>
Date: Tue, 25 Jan 2011 08:35:25 +0800
From: Shaohua Li <shaohua.li@...el.com>
To: "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...nel.dk>,
"Chen, Tim C" <tim.c.chen@...el.com>
Subject: more dput lock contentions in 2.6.38-rc?
Hi,
we are testing the dbench benchmark and see a big performance drop in
2.6.38-rc compared to 2.6.37 on several 2-socket and 4-socket machines.
We have 12 disks mounted at /mnt/stp/dbenchdata/sd*/ and dbench runs
against data on those disks. According to perf, we see far more lock
contention:
In 2.6.37:    13.00%  dbench  [kernel.kallsyms]  [k] _raw_spin_lock
In 2.6.38-rc: 69.45%  dbench  [kernel.kallsyms]  [k] _raw_spin_lock

-  69.45%  dbench  [kernel.kallsyms]  [k] _raw_spin_lock
   - _raw_spin_lock
      - 48.41% dput
         - 61.17% path_put
            - 60.47% do_path_lookup
               + 53.18% user_path_at
               + 42.13% do_filp_open
               +  4.69% user_path_parent
            - 35.56% d_path
                 seq_path
                 show_vfsmnt
                 seq_read
                 vfs_read
                 sys_read
                 system_call_fastpath
                 __GI___libc_read
            +  2.17% do_filp_open
            +  1.72% mounts_release
         + 38.69% link_path_walk
      + 30.21% path_get
      + 19.08% nameidata_drop_rcu
      +  0.83% __d_lookup
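
For reproduction outside dbench, something as small as a multithreaded
stat() loop should exercise the same pattern -- every lookup that drops
out of rcu-walk takes and then dput()s a reference on each shared
ancestor dentry. A minimal sketch; the path, thread count, and loop
count below are made up for illustration, not taken from our test setup:

/* stat-storm.c: hammer path lookup from many threads so that each
 * ref-walk lookup ends with path_put() -> dput() on the same ancestor
 * dentries ('/', 'mnt', 'stp', 'dbenchdata').
 *
 * Build:   gcc -O2 -pthread stat-storm.c -o stat-storm
 * Profile: perf record -ag ./stat-storm [path] ; perf report
 */
#include <pthread.h>
#include <sys/stat.h>

#define NTHREADS 64
#define LOOPS    1000000L

static const char *path = "/mnt/stp/dbenchdata/sd1/file0";

static void *worker(void *arg)
{
	struct stat st;
	long i;

	for (i = 0; i < LOOPS; i++)
		stat(path, &st);	/* full lookup + path_put() each time */
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid[NTHREADS];
	int i;

	if (argc > 1)
		path = argv[1];
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}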
It appears there is heavy lock contention when dput() releases the
dentries for '/', 'mnt', 'stp', 'dbenchdata' and 'proc' while dbench is
running.
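
To illustrate why exactly those dentries hurt: every lookup under the
mount point shares the same leading path components, so in ref-walk mode
each lookup bumps and drops a reference under the per-dentry d_lock of
'/', 'mnt', and so on, and those cache lines bounce between sockets. A
toy userspace model of that get/put pattern (pthread spinlocks standing
in for d_lock; this is an analogy, not kernel code):

/* Toy model: NTHREADS "lookups" walk a 5-component path, taking a
 * per-component spinlock to bump/drop a refcount the way ref-walk
 * dget()/dput() does.  comp[0] (standing in for '/') is touched by
 * every thread, so its lock's cache line ping-pongs across sockets.
 *
 * Build: gcc -O2 -pthread dput-model.c -o dput-model
 */
#include <pthread.h>

#define NCOMP    5		/* "/", "mnt", "stp", "dbenchdata", "sdX" */
#define NTHREADS 64
#define LOOPS    200000L

static struct fake_dentry {
	pthread_spinlock_t d_lock;
	long d_count;
} __attribute__((aligned(64))) comp[NCOMP];	/* avoid false sharing */

static void *lookup(void *arg)
{
	long n;
	int i;

	for (n = 0; n < LOOPS; n++) {
		for (i = 0; i < NCOMP; i++) {	/* "dget" each component */
			pthread_spin_lock(&comp[i].d_lock);
			comp[i].d_count++;
			pthread_spin_unlock(&comp[i].d_lock);
		}
		for (i = NCOMP - 1; i >= 0; i--) {	/* "dput" on path_put() */
			pthread_spin_lock(&comp[i].d_lock);
			comp[i].d_count--;
			pthread_spin_unlock(&comp[i].d_lock);
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	for (i = 0; i < NCOMP; i++)
		pthread_spin_init(&comp[i].d_lock, PTHREAD_PROCESS_PRIVATE);
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, lookup, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}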
Thanks,
Shaohua