Message-ID: <56FDED6D.4070200@fb.com>
Date: Thu, 31 Mar 2016 21:39:25 -0600
From: Jens Axboe <axboe@...com>
To: Dave Chinner <david@...morbit.com>
CC: <linux-kernel@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>,
<linux-block@...r.kernel.org>
Subject: Re: [PATCHSET v3][RFC] Make background writeback not suck
On 03/31/2016 09:29 PM, Jens Axboe wrote:
>>> I can't seem to reproduce this at all. On an nvme device, I get a
>>> fairly steady 60K/sec file creation rate, and we're nowhere near
>>> being IO bound. So the throttling has no effect at all.
>>
>> That's too slow to show the stalls - you're likely concurrency bound
>> in allocation by the default AG count (4) from mkfs. Use mkfs.xfs -d
>> agcount=32 so that every thread works in its own AG.
>
> That's the key, with that I get 300-400K ops/sec instead. I'll run some
> testing with this tomorrow and see what I can find. It did one full run
> now and I didn't see any issues, but I need to run it at various
> settings and see if I can trigger the issue.
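For reference, the mkfs step Dave suggests can be sketched as below. This is a hypothetical helper (mkfs_cmd is not from the thread) that only prints the command line, since mkfs.xfs is destructive; the device path is a placeholder for the test machine's NVMe device.

```shell
# Build the suggested mkfs.xfs command line: one allocation group per
# concurrent writer thread, so inode allocation in each thread lands in
# its own AG instead of serializing on the default 4 AGs.
# Printed rather than executed because mkfs.xfs wipes the device.
mkfs_cmd() {
    dev=$1
    agcount=${2:-32}
    printf 'mkfs.xfs -f -d agcount=%s %s\n' "$agcount" "$dev"
}

# Placeholder device path for the test box:
mkfs_cmd /dev/nvme0n1
```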
No stalls seen, I get the same performance with it disabled and with it
enabled, at both default settings and lower ones (wb_percent=20).
Looking at iostat, we don't drive a lot of queue depth, so that makes
sense: even with the throttling we're doing essentially the same amount
of IO.
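The wb_percent=20 tuning mentioned above can be sketched as a sysfs write. This is an assumption based on the thread, not verified against the patchset: set_wb_percent is a hypothetical helper, and the exact attribute path may differ in the actual patches. The sysfs root is parameterized so the sketch can be exercised without the patched kernel.

```shell
# Hedged sketch: tune the background writeback percentage, assuming the
# patchset exposes wb_percent as a per-queue sysfs attribute (an
# assumption from this thread, not confirmed from the patches).
# sysroot defaults to /sys but can point at a test directory.
set_wb_percent() {
    dev=$1
    pct=$2
    sysroot=${3:-/sys}
    echo "$pct" > "$sysroot/block/$dev/queue/wb_percent"
}

# On a live patched kernel this would be, e.g.:
#   set_wb_percent nvme0n1 20
```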
What does 'nr_requests' say for your virtio_blk device? It looks like
virtio_blk has a queue_depth setting, but it's not set by default, in
which case it uses the free entries in the ring. But I don't know how
big that is...
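The nr_requests question can be answered from sysfs; the block layer exposes it per queue at /sys/block/<dev>/queue/nr_requests. A small sketch, with queue_depth_of as a hypothetical helper name and the sysfs root parameterized so it can be exercised against a fake tree:

```shell
# Read the request queue depth for a block device from the standard
# block-layer sysfs attribute. On a live system sysroot is /sys; it is
# a parameter here so the function can be tested without real hardware.
queue_depth_of() {
    dev=$1
    sysroot=${2:-/sys}
    cat "$sysroot/block/$dev/queue/nr_requests"
}

# On Dave's machine this would be, e.g.:
#   queue_depth_of vda
```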
--
Jens Axboe