Message-ID: <20160401010134.GV11812@dastard>
Date:	Fri, 1 Apr 2016 12:01:34 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Holger Hoffstätte 
	<holger.hoffstaette@...glemail.com>
Cc:	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCHSET v3][RFC] Make background writeback not suck

On Thu, Mar 31, 2016 at 10:09:56PM +0000, Holger Hoffstätte wrote:
> 
> Hi,
> 
> Jens mentioned on Twitter I should post my experience here as well,
> so here we go.
> 
> I've backported this series (incl. updates) to stable-4.4.x - not too
> difficult, minus the NVM part which I don't need anyway - and have been
> running it for the past few days without any problem whatsoever, with
> GREAT success.
> 
> My use case is primarily larger amounts of stuff (transcoded movies,
> finished downloads, built Gentoo packages) that gets copied from tmpfs
> to SSD (or disk) and every time that happens, the system noticeably
> strangles readers (desktop, interactive shell). It does not really matter
> how I tune writeback via the write_expire/dirty_bytes knobs or the
> scheduler (and yes, I understand how they work); lowering the writeback
> limits helped a bit, but the system was still overwhelmed. Jacking up
> deadline's writes_starved to unreasonable levels also helps a bit, but in
> turn makes all writes suffer. Nothing else I tried - even BFQ for a
> while, which has its own unrelated problems - really helped either.

Can you go back to your original kernel, and lower nr_requests to 8?
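
(For reference: nr_requests is a per-device sysfs knob, so - assuming the
SSD shows up as /dev/sdb here, substitute your own device name - lowering
it would be something like:

  echo 8 > /sys/block/sdb/queue/nr_requests

It takes effect at runtime; no remount or reboot needed.)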

Essentially all I see the block throttle doing is keeping the
request queue depth at somewhere between 8 and 12 requests, rather
than letting it blow out to near nr_requests (around 105-115), so it
would be interesting to see whether the block throttling makes any
noticeable difference in behaviour compared to just having a very
shallow request queue....
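
(A rough way to watch that, again assuming the device is /dev/sdb: the
9th field of /sys/block/sdb/stat is the number of requests currently in
flight, so e.g.

  watch -n1 cat /sys/block/sdb/stat

or keep an eye on the queue-size column (avgqu-sz in most sysstat
versions) of "iostat -x 1".)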

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
