Message-ID: <c14c0103-9cbd-7d0f-486b-344dd33725ab@deltatee.com>
Date: Thu, 21 Apr 2022 10:02:33 -0600
From: Logan Gunthorpe <logang@...tatee.com>
To: Xiao Ni <xni@...hat.com>
Cc: open list <linux-kernel@...r.kernel.org>,
linux-raid <linux-raid@...r.kernel.org>,
Song Liu <song@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
Guoqing Jiang <guoqing.jiang@...ux.dev>,
Stephen Bates <sbates@...thlin.com>,
Martin Oliveira <Martin.Oliveira@...eticom.com>,
David Sloan <David.Sloan@...eticom.com>
Subject: Re: [PATCH v2 00/12] Improve Raid5 Lock Contention
On 2022-04-21 02:45, Xiao Ni wrote:
> Could you share the commands to get the test result (lock contention
> and performance)?
Sure. The performance we were focused on was large block writes. So we
setup raid5 instances with varying number of disks and ran the following
fio script directly on the drive.
[simple]
filename=/dev/md0
ioengine=libaio
rw=write
direct=1
size=8G
blocksize=2m
iodepth=16
runtime=30s
time_based=1
offset_increment=8G
numjobs=12

(We also played around with tuning these fio parameters but didn't find
substantial changes once the bottleneck was hit.)
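For reference, creating such an array is just a normal mdadm invocation;
something along these lines (the device names and disk count here are only
placeholders, not the exact setup we used):

mdadm --create /dev/md0 --level=5 --raid-devices=6 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
    /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
# either wait for the initial sync to finish, or create with
# --assume-clean to skip it for benchmarking purposes
mdadm --wait /dev/md0
fio test.fio
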
We tuned md with parameters like:
echo 4 > /sys/block/md0/md/group_thread_cnt
echo 8192 > /sys/block/md0/md/stripe_cache_size
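
(If you want to sweep those knobs, a trivial loop does the trick; the
values here are just examples:)

for t in 1 2 4 8; do
    echo $t > /sys/block/md0/md/group_thread_cnt
    fio test.fio
done
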
For lock contention stats, we just used lockstat[1]; roughly like:
echo 1 > /proc/sys/kernel/lock_stat
fio test.fio
echo 0 > /proc/sys/kernel/lock_stat
cat /proc/lock_stat
And compared the before and after.
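
(To make that comparison a bit easier, the interesting entries can be
pulled out of the full dump with grep; "device_lock" below is just an
example class name to filter on:)

cat /proc/lock_stat > lockstat-baseline.txt
# ... boot the patched kernel and repeat the same fio run ...
cat /proc/lock_stat > lockstat-patched.txt
grep -A 6 device_lock lockstat-baseline.txt lockstat-patched.txt
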
Logan
[1] https://www.kernel.org/doc/html/latest/locking/lockstat.html