Message-ID: <1268363312.3475.85.camel@localhost.localdomain>
Date: Thu, 11 Mar 2010 19:08:32 -0800
From: john stultz <johnstul@...ibm.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Nick Piggin <npiggin@...e.de>,
Thomas Gleixner <tglx@...utronix.de>,
lkml <linux-kernel@...r.kernel.org>,
Clark Williams <williams@...hat.com>,
John Kacur <jkacur@...hat.com>
Subject: Re: Nick's vfs-scalability patches ported to 2.6.33-rt
On Wed, 2010-03-10 at 04:01 -0500, Christoph Hellwig wrote:
> On Tue, Mar 09, 2010 at 06:51:02PM -0800, john stultz wrote:
> > So this all means that with Nick's patch set, we're no longer getting
> > bogged down in the vfs (at least at 8-way) at all. All the contention is
> > in the actual filesystem (ext2 in group_adjust_blocks, and ext3 in the
> > journal and block allocation code).
>
> Can you check if you're running into any fs scaling limit with xfs?
Here are the charts from some limited testing:
http://sr71.net/~jstultz/dbench-scalability/graphs/2.6.33/xfs-dbench.png
They're not great, and compared to ext3 the xfs results are basically
flat as the client count rises.
http://sr71.net/~jstultz/dbench-scalability/graphs/2.6.33/ext3-dbench.png
Now, I've not done any real xfs work before, so if there is any tuning
needed for dbench, please let me know.
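For reference, the runs were along these lines (the device name, mkfs
options, and dbench parameters shown here are assumptions for
illustration, not necessarily what was actually used):

```shell
# Hypothetical sketch of the test setup -- device path, mkfs options,
# and run length are assumptions.
mkfs.xfs -f /dev/sdb1
mount -t xfs /dev/sdb1 /mnt/test

# dbench at increasing client counts, e.g. up to 8 on an 8-way box;
# -D selects the working directory, -t the runtime in seconds.
for clients in 1 2 4 8; do
    dbench -D /mnt/test -t 60 $clients
done
```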
The odd bit is that perf doesn't show huge overheads in the xfs runs.
The spinlock contention is supposedly under 5%, so I'm not sure what's
causing the numbers to be so bad.
Clipped perf log below.
thanks
-john
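For reference, a call-graph profile like the one below is typically
gathered with something like the following; the exact event selection
and duration here are assumptions, not necessarily what was used for
this run:

```shell
# System-wide profile with call graphs, sampled while dbench runs;
# the 60s window is an assumption.
perf record -a -g -- sleep 60

# Break the samples down by command, DSO, and symbol, with callchains.
perf report --sort comm,dso,symbol
```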
    11.06%  dbench  [kernel]  [k] copy_user_generic_strin
     4.82%  dbench  [kernel]  [k] __lock_acquire
|
|--94.74%-- lock_acquire
| |
| |--38.89%-- rt_spin_lock
| | |
| | |--28.57%-- _slab_irq_disable
| | | |
| | | |--50.00%-- kmem_cache_alloc
| | | | kmem_zone_alloc
| | | | xfs_buf_get
| | | | xfs_buf_read
| | | | xfs_trans_read_buf
| | | | xfs_btree_read_buf_b
| | | | xfs_btree_lookup_get
| | | | xfs_btree_lookup
| | | | xfs_alloc_lookup_eq
| | | | xfs_alloc_fixup_tree
| | | | xfs_alloc_ag_vextent
| | | | xfs_alloc_ag_vextent
| | | | xfs_alloc_vextent
| | | | xfs_ialloc_ag_alloc
| | | | xfs_dialloc
| | | | xfs_ialloc
| | | | xfs_dir_ialloc
| | | | xfs_create
| | | | xfs_vn_mknod
| | | | xfs_vn_mkdir
| | | | vfs_mkdir
| | | | sys_mkdirat
| | | | sys_mkdir
| | | | system_call_fastpath
| | | | __GI___mkdir
| | | |
| | | --50.00%-- kmem_cache_free
| | | xfs_buf_get
| | | xfs_buf_read
| | | xfs_trans_read_buf
| | | xfs_btree_read_buf_b
| | | xfs_btree_lookup_get
| | | xfs_btree_lookup
| | | xfs_dialloc
| | | xfs_ialloc
| | | xfs_dir_ialloc
| | | xfs_create
| | | xfs_vn_mknod
| | | xfs_vn_mkdir
| | | vfs_mkdir
| | | sys_mkdirat
| | | sys_mkdir
| | | system_call_fastpath
| | | __GI___mkdir
| | |
| | |--14.29%-- dput
| | | path_put
| | | link_path_walk
| | | path_walk
| | | do_path_lookup
| | | user_path_at
| | | vfs_fstatat
| | | vfs_stat
| | | sys_newstat
| | | system_call_fastpath
| | | _xstat
| | |
| | |--14.29%-- add_to_page_cache_locked
| | | add_to_page_cache_lru
| | | grab_cache_page_write_begin
| | | block_write_begin
| | | xfs_vm_write_begin
| | | generic_file_buffered_write
| | | xfs_write
| | | xfs_file_aio_write
| | | do_sync_write
| | | vfs_write
| | | sys_pwrite64
| | | system_call_fastpath
| | | __GI_pwrite
: