Message-ID: <20150913131244.GA15926@ret.masoncoding.com>
Date: Sun, 13 Sep 2015 09:12:44 -0400
From: Chris Mason <clm@...com>
To: Linus Torvalds <torvalds@...ux-foundation.org>,
Josef Bacik <jbacik@...com>,
LKML <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Dave Chinner <david@...morbit.com>, Neil Brown <neilb@...e.de>,
Jan Kara <jack@...e.cz>, Christoph Hellwig <hch@....de>
Subject: Re: [PATCH] fs-writeback: drop wb->list_lock during blk_finish_plug()
On Sat, Sep 12, 2015 at 07:46:32PM -0400, Chris Mason wrote:
> I don't think the XFS numbers can be trusted too much since it was
> basically bottlenecked behind that single pegged CPU. It was bouncing
> around and I couldn't quite track it down to a process name (or perf
> profile).
I'll do more runs Monday, but I was able to grab a perf profile of the
pegged XFS CPU. It was just the writeback worker thread. The same
load hits btrfs differently because we defer more of this work to the
endio workers, effectively spreading it out over more CPUs.
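
To be concrete about what I mean by deferring to the endio workers: the
general pattern is to punt the heavy per-I/O completion work to an
unbound workqueue instead of doing it inline in the submitting or
writeback thread. The sketch below is only illustrative kernel-style
code with made-up names (defer_work, example_endio_wq), not btrfs's
actual endio path:

/*
 * Illustrative only: defer per-I/O completion work to an unbound
 * workqueue so it can run on any idle CPU instead of the writeback
 * worker's CPU.  All names here are made up for the example.
 */
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/init.h>

struct defer_work {
	struct work_struct work;
	void *payload;			/* whatever state the completion needs */
};

static struct workqueue_struct *example_endio_wq;

static void defer_worker(struct work_struct *work)
{
	struct defer_work *dw = container_of(work, struct defer_work, work);

	/* the expensive per-I/O completion processing runs here */
	kfree(dw);
}

static int __init example_endio_init(void)
{
	/* WQ_UNBOUND: let the scheduler pick the CPU for each work item */
	example_endio_wq = alloc_workqueue("example-endio", WQ_UNBOUND, 0);
	return example_endio_wq ? 0 : -ENOMEM;
}

/* called from the bio end_io path (often atomic context) */
static void example_defer_completion(void *payload)
{
	struct defer_work *dw = kzalloc(sizeof(*dw), GFP_ATOMIC);

	if (!dw)
		return;			/* real code would fall back to inline work */
	dw->payload = payload;
	INIT_WORK(&dw->work, defer_worker);
	queue_work(example_endio_wq, &dw->work);
}

The contrast shows up in the profile below, where XFS is doing the
allocation work directly in the writeback worker's call chain, so it
all lands on that one CPU.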
With 4 mount points instead of 2, XFS goes from 140K files/sec to 250K.
Here's one of the profiles, but it bounced around a lot so I wouldn't
use this to actually tune anything:
    11.42%  kworker/u82:61  [kernel.kallsyms]  [k] _raw_spin_lock
            |
            ---_raw_spin_lock
               |
               |--83.43%-- xfs_extent_busy_trim
               |          xfs_alloc_compute_aligned
               |          |
               |          |--99.92%-- xfs_alloc_ag_vextent_near
               |          |          xfs_alloc_ag_vextent
               |          |          xfs_alloc_vextent
               |          |          xfs_bmap_btalloc
               |          |          xfs_bmap_alloc
               |          |          xfs_bmapi_write
               |          |          xfs_iomap_write_allocate
               |          |          xfs_map_blocks
               |          |          xfs_vm_writepage
               |          |          __writepage
               |          |          write_cache_pages
               |          |          generic_writepages
               |          |          xfs_vm_writepages
               |          |          do_writepages
               |          |          __writeback_single_inode
               |          |          writeback_sb_inodes
               |          |          __writeback_inodes_wb
               |          |          wb_writeback
               |          |          wb_do_writeback
               |          |          wb_workfn
               |          |          process_one_work
               |          |          worker_thread
               |          |          kthread
               |          |          ret_from_fork
               |           --0.08%-- [...]
               |