Message-ID: <20200403151534.GG22681@dhcp22.suse.cz>
Date: Fri, 3 Apr 2020 17:15:34 +0200
From: Michal Hocko <mhocko@...nel.org>
To: NeilBrown <neilb@...e.de>
Cc: Trond Myklebust <trondmy@...merspace.com>,
"Anna.Schumaker@...app.com" <Anna.Schumaker@...app.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Kara <jack@...e.cz>, linux-mm@...ck.org,
linux-nfs@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] MM: replace PF_LESS_THROTTLE with PF_LOCAL_THROTTLE
On Thu 02-04-20 10:53:20, Neil Brown wrote:
>
> PF_LESS_THROTTLE exists for loop-back nfsd, and for a similar need in the
> loop block driver, where a daemon needs to write to one bdi in
> order to free up writes queued to another bdi.
>
> The daemon sets PF_LESS_THROTTLE and gets a larger allowance of dirty
> pages, so that it can still dirty pages after other processes have been
> throttled.
>
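
For anyone less familiar with that code path, what the flag buys a task
today looks roughly like this (my own paraphrase of the logic around
domain_dirty_limits() in mm/page-writeback.c, not the verbatim kernel code):

	/* the loop-back daemon marks itself as less throttled ... */
	current->flags |= PF_LESS_THROTTLE;

	/* ... and the dirty thresholds it is measured against get the
	 * "add 25%" bump mentioned in the changelog: */
	if (tsk->flags & PF_LESS_THROTTLE) {
		bg_thresh += bg_thresh / 4;
		thresh += thresh / 4;
	}
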
> This approach was designed when all threads were blocked equally,
> independent of which device they were writing to, or how fast it was.
> Since that time the writeback algorithm has changed substantially with
> different threads getting different allowances based on non-trivial
> heuristics. This means the simple "add 25%" heuristic is no longer
> reliable.
>
> This patch changes the heuristic to ignore the global limits and
> consider only the limit relevant to the bdi being written to. This
> approach is already available for BDI_CAP_STRICTLIMIT users (fuse) and
> should not introduce surprises. This has the desired result of
> protecting the task from the consequences of large amounts of dirty data
> queued for other devices.
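
If I follow the intention correctly, the throttling decision for such a
task then becomes purely per-wb, conceptually something like the following
hand-written illustration (not the actual diff; wb_dirty/wb_thresh stand
for the per-wb numbers balance_dirty_pages() already computes):

	/* Hand-written sketch: a PF_LOCAL_THROTTLE writer is measured
	 * only against the limits of the wb it is dirtying, the way the
	 * BDI_CAP_STRICTLIMIT (fuse) path already works, and the global
	 * dirty totals accumulated for other devices no longer block it. */
	if ((current->flags & PF_LOCAL_THROTTLE) && wb_dirty < wb_thresh)
		break;	/* enough headroom on this bdi, do not throttle */
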
While I understand that you want per-bdi throttling for those
"special" files, I am still missing how this is going to provide the
additional room that the extra 25% gave them previously. I might be
misremembering, or things may have changed (the non-trivial heuristics
you mention), but PF_LESS_THROTTLE really needed that room to guarantee
forward progress. Care to expand some more on how this is handled now?
Maybe we do not need it anymore, but calling that out explicitly would be
really helpful.
--
Michal Hocko
SUSE Labs