Message-ID: <20150605000636.GA24611@redhat.com>
Date: Thu, 4 Jun 2015 20:06:36 -0400
From: Mike Snitzer <snitzer@...hat.com>
To: Ming Lin <mlin@...nel.org>
Cc: Ming Lei <ming.lei@...onical.com>, dm-devel@...hat.com,
Christoph Hellwig <hch@....de>,
Alasdair G Kergon <agk@...hat.com>,
Lars Ellenberg <drbd-dev@...ts.linbit.com>,
Philip Kelleher <pjk1939@...ux.vnet.ibm.com>,
Joshua Morris <josh.h.morris@...ibm.com>,
Christoph Hellwig <hch@...radead.org>,
Kent Overstreet <kent.overstreet@...il.com>,
Nitin Gupta <ngupta@...are.org>,
Oleg Drokin <oleg.drokin@...el.com>,
Al Viro <viro@...iv.linux.org.uk>,
Jens Axboe <axboe@...nel.dk>,
Andreas Dilger <andreas.dilger@...el.com>,
Geoff Levand <geoff@...radead.org>,
Jiri Kosina <jkosina@...e.cz>,
lkml <linux-kernel@...r.kernel.org>, Jim Paris <jim@...n.com>,
Minchan Kim <minchan@...nel.org>,
Dongsu Park <dpark@...teo.net>, drbd-user@...ts.linbit.com
Subject: Re: [PATCH v4 01/11] block: make generic_make_request handle
arbitrarily sized bios
On Thu, Jun 04 2015 at 6:21pm -0400,
Ming Lin <mlin@...nel.org> wrote:
> On Thu, Jun 4, 2015 at 2:06 PM, Mike Snitzer <snitzer@...hat.com> wrote:
> >
> > We need to test on large HW raid setups like a Netapp filer (or even
> > local SAS drives connected via some SAS controller). Like an 8+2 drive
> > RAID6 or an 8+1 RAID5 setup. Testing with MD RAID on JBOD setups with 8
> > devices is also useful. It is larger RAID setups that will be more
> > sensitive to IO sizes being properly aligned on RAID stripe and/or chunk
> > size boundaries.
>
> I'll test it on a large HW RAID setup.
>
> Here is a HW RAID5 setup with 19 278G HDDs on a Dell R730xd (2 sockets/48
> logical CPUs/264G mem).
> http://minggr.net/pub/20150604/hw_raid5.jpg
>
> The stripe size is 64K.
>
> I'm going to test ext4/btrfs/xfs on it.
> "bs" set to 1216k(64K * 19 = 1216k)
> and run 48 jobs.
Definitely an odd blocksize (though 1280K full stripe is pretty common
for 10+2 HW RAID6 w/ 128K chunk size).
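For what it's worth, if those 19 disks are laid out as 18 data + 1 parity
(my assumption, I can't tell from the screenshot), then the bs that lines
up with the full data stripe for the job below would be:

    ; assuming RAID5 across 19 disks = 18 data + 1 parity:
    ;   full data stripe = 18 * 64K = 1152K (not 19 * 64K = 1216K)
    bs=1152k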
> [global]
> ioengine=libaio
> iodepth=64
> direct=1
> runtime=1800
> time_based
> group_reporting
> numjobs=48
> rw=read
>
> [job1]
> bs=1216K
> directory=/mnt
> size=1G
How does time_based relate to size=1G? Will it just re-read the same 1 gig
file repeatedly until the runtime expires?
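If the intent is to keep each job looping for the full 1800s, a larger
per-job file would at least spread the reads across more of the array.
A sketch only (size=32G is a guess on my part, not something we've
settled on):

    [job1]
    ; [global] section unchanged from your job file
    ; bs per the full-stripe note above
    bs=1152k
    directory=/mnt
    ; larger per-job file so time_based isn't just re-reading the same 1G
    size=32G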
> Or do you have other suggestions of what tests I should run?
You're welcome to run this job, but I'll also check with others here to
see which fio jobs we used in the recent past when assessing the performance
of the dm-crypt parallelization changes.
Also, a lot of care needs to be taken to eliminate jitter in the system
while the test is running. We got a lot of good insight from Bart Van
Assche on that and put it into practice. I'll see if we can (re)summarize
that too.
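In the meantime, purely as a generic fio-side sketch (this is not Bart's
list, just the obvious knobs): pinning the jobs and discarding the warm-up
period helps keep the reported numbers stable, and the system-side tuning
(cpufreq governor, IRQ affinity, etc.) is what we'd need to write up
separately.

    [global]
    ; pin the 48 jobs to the box's 48 logical CPUs so the scheduler
    ; doesn't bounce them around during the run
    cpus_allowed=0-47
    ; drop the first 60 seconds from the statistics so warm-up
    ; doesn't skew the results
    ramp_time=60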
Mike