Message-ID: <4094aed9-d22d-d14f-07a7-5abe599beeab@linux.dev>
Date: Sun, 24 Apr 2022 16:00:15 +0800
From: Guoqing Jiang <guoqing.jiang@...ux.dev>
To: Logan Gunthorpe <logang@...tatee.com>, Xiao Ni <xni@...hat.com>
Cc: open list <linux-kernel@...r.kernel.org>,
linux-raid <linux-raid@...r.kernel.org>,
Song Liu <song@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
Stephen Bates <sbates@...thlin.com>,
Martin Oliveira <Martin.Oliveira@...eticom.com>,
David Sloan <David.Sloan@...eticom.com>
Subject: Re: [PATCH v2 00/12] Improve Raid5 Lock Contention
On 4/22/22 12:02 AM, Logan Gunthorpe wrote:
>
> On 2022-04-21 02:45, Xiao Ni wrote:
>> Could you share the commands to get the test result (lock contention
>> and performance)?
> Sure. The performance we were focused on was large block writes. So we
> set up raid5 instances with a varying number of disks and ran the following
> fio script directly on the md device.
>
> [simple]
> filename=/dev/md0
> ioengine=libaio
> rw=write
> direct=1
> size=8G
> blocksize=2m
> iodepth=16
> runtime=30s
> time_based=1
> offset_increment=8G
> numjobs=12
> 
> (We also played around with tuning this but didn't find substantial
> changes once the bottleneck was hit)
Nice. I suppose other IO patterns keep the same performance as before?
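
Just to make sure I follow the setup, something roughly like the below? (The
disk names and member count are only my guess, not taken from your report.)

mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# save the job file above as test.fio, then:
fio test.fio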
> We tuned md with parameters like:
>
> echo 4 > /sys/block/md0/md/group_thread_cnt
> echo 8192 > /sys/block/md0/md/stripe_cache_size
>
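
For reference, both knobs can be read back to confirm the tuning took effect.
My understanding is that stripe_cache_size is counted in cache entries of one
page per member device, so 8192 is roughly 32 MiB per device with 4K pages;
please correct me if I got that wrong.

cat /sys/block/md0/md/group_thread_cnt
cat /sys/block/md0/md/stripe_cache_size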
> For lock contention stats, we just used lockstat[1]; roughly like:
>
> echo 1 > /proc/sys/kernel/lock_stat
> fio test.fio
> echo 0 > /proc/sys/kernel/lock_stat
> cat /proc/lock_stat
>
> And compared the before and after.
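
For anyone repeating the measurement (the kernel needs CONFIG_LOCK_STAT
enabled), it may also help to clear the counters before each run and filter
the output for the lock of interest. A rough sketch; the grep pattern is just
my assumption that conf->device_lock is the lock this series targets:

echo 0 > /proc/lock_stat               # clear previously accumulated statistics
echo 1 > /proc/sys/kernel/lock_stat    # start collecting
fio test.fio
echo 0 > /proc/sys/kernel/lock_stat    # stop collecting
grep -A 5 device_lock /proc/lock_stat  # device_lock assumed to be the contended raid5 lock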
Thanks for your effort. Besides the performance test, please also run the
mdadm test suite to avoid regressions.
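
The test suite lives in the mdadm source tree; roughly (paths and options are
only an illustration):

git clone git://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
cd mdadm
make
sudo ./test    # runs the scripts under tests/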
Thanks,
Guoqing