Message-ID: <56FE86AE.7070704@fb.com>
Date:	Fri, 1 Apr 2016 08:33:18 -0600
From:	Jens Axboe <axboe@...com>
To:	Dave Chinner <david@...morbit.com>
CC:	<linux-kernel@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>,
	<linux-block@...r.kernel.org>
Subject: Re: [PATCHSET v3][RFC] Make background writeback not suck

On 04/01/2016 12:16 AM, Dave Chinner wrote:
> On Thu, Mar 31, 2016 at 09:39:25PM -0600, Jens Axboe wrote:
>> On 03/31/2016 09:29 PM, Jens Axboe wrote:
>>>>> I can't seem to reproduce this at all. On an nvme device, I get a
>>>>> fairly steady 60K/sec file creation rate, and we're nowhere near
>>>>> being IO bound. So the throttling has no effect at all.
>>>>
>>>> That's too slow to show the stalls - you're likely concurrency bound
>>>> in allocation by the default AG count (4) from mkfs. Use mkfs.xfs -d
>>>> agcount=32 so that every thread works in its own AG.
>>>
>>> That's the key - with that I get 300-400K ops/sec instead. I'll run some
>>> testing with this tomorrow and see what I can find. I did one full run
>>> now and didn't see any issues, but I need to run it at various
>>> settings to see if I can find the issue.
>>
>> No stalls seen - I get the same performance with it disabled and with
>> it enabled, at both the default settings and lower ones
>> (wb_percent=20). Looking at iostat, we don't drive a lot of depth,
>> so it makes sense that even with the throttling we're doing essentially
>> the same amount of IO.
>
> Try appending numa=fake=4 to your guest's kernel command line.
>
> (that's what I'm using)

Sure, I can give that a go.

>> What does 'nr_requests' say for your virtio_blk device? Looks like
>> virtio_blk has a queue_depth setting, but it's not set by default, in
>> which case it uses the free entries in the ring. But I don't know what
>> that is...
>
> $ cat /sys/block/vdc/queue/nr_requests
> 128

OK, so that would put you in the 16/32/64 depth category for idle/normal/high 
priority writeback. That fits with the iostat below, which is in the 
~16 range.
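
For reference, here's a minimal userspace sketch of that scaling, assuming
the per-priority limits are simply derived as fractions of nr_requests. The
helper name and exact divisors are illustrative, not the patchset's actual
code:

/*
 * Illustrative only: derive per-priority writeback depth limits as
 * fractions of nr_requests, matching the 128 -> 16/32/64 split above.
 */
#include <stdio.h>

static void wb_depths(unsigned int nr_requests)
{
	unsigned int idle   = nr_requests / 8;	/* background/idle writeback */
	unsigned int normal = nr_requests / 4;	/* normal writeback */
	unsigned int high   = nr_requests / 2;	/* high priority writeback */

	printf("nr_requests=%u -> idle=%u normal=%u high=%u\n",
	       nr_requests, idle, normal, high);
}

int main(void)
{
	wb_depths(128);	/* prints idle=16 normal=32 high=64 */
	return 0;
}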

So the META thing should help, it'll bump the depth up a bit. But we're 
also seeing smaller requests, and I think that could be because after we 
throttle, we could end up with a merge candidate. The code doesn't check 
for merges post-sleep, it'll only allow them beforehand. That part is a 
little harder to read from the iostat numbers, but there does seem to be 
a correlation between your higher queue depths and bigger request sizes.

> I'll try the "don't throttle REQ_META" patch, but this seems like a
> fragile way to solve this problem - it shuts up the messenger, but
> doesn't solve the problem for any other subsystem that might have a
> similar issue. E.g. next we're going to have to make sure direct IO
> (which is also REQ_WRITE dispatch) does not get throttled, and so
> on....

I don't think there's anything wrong with the REQ_META patch. Sure, we 
could have better classifications (as discussed below), but that's 
mainly tweaking. As long as we get the same answers, it's fine. There's 
no throttling of O_DIRECT writes in the current code; it specifically 
excludes those. Throttling only applies to unbounded writes, which is 
what background writeback tends to be.

> It seems to me that the right thing to do here is add a separate
> classification flag for IO that can be throttled. e.g. as
> REQ_WRITEBACK and only background writeback work sets this flag.
> That would ensure that when the IO is being dispatched from other
> sources (e.g. fsync, sync_file_range(), direct IO, filesystem
> metadata, etc) it is clear that it is not a target for throttling.
> This would also allow us to easily switch off throttling if
> writeback is occurring for memory reclaim reasons, and so on.
> Throttling policy decisions belong above the block layer, even
> though the throttle mechanism itself is in the block layer.

We're already doing all of that, it just doesn't include a specific 
REQ_WRITEBACK flag. And yeah, that would clean up the checking for 
request type, but functionally it should be the same as it is now. It 
would be a bit more robust and easier to read if we just had a 
REQ_WRITEBACK; right now it's WRITE_SYNC vs WRITE for important vs 
not-important, with a check for buffered vs O_DIRECT write as well.
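
To make that split concrete, here's a rough standalone illustration of the
classification. The flag bits and the helper are made up for the example;
they are not the kernel's definitions:

/*
 * Illustrative only: mimic "throttle unbounded buffered writeback,
 * leave sync/important and O_DIRECT writes alone". The EX_* flags and
 * should_throttle() are hypothetical stand-ins, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define EX_WRITE	(1u << 0)	/* plain buffered write (writeback) */
#define EX_SYNC		(1u << 1)	/* WRITE_SYNC-style important write */
#define EX_ODIRECT	(1u << 2)	/* O_DIRECT write */

static bool should_throttle(unsigned int flags)
{
	/* Only plain, unbounded writeback writes are throttled. */
	if (flags & (EX_SYNC | EX_ODIRECT))
		return false;
	return flags & EX_WRITE;
}

int main(void)
{
	printf("buffered writeback: %d\n", should_throttle(EX_WRITE));              /* 1 */
	printf("WRITE_SYNC write:   %d\n", should_throttle(EX_WRITE | EX_SYNC));    /* 0 */
	printf("O_DIRECT write:     %d\n", should_throttle(EX_WRITE | EX_ODIRECT)); /* 0 */
	return 0;
}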


-- 
Jens Axboe
