Message-ID: <87mu7ox1ut.fsf@notabene.neil.brown.name>
Date:   Mon, 06 Apr 2020 21:58:18 +1000
From:   NeilBrown <neilb@...e.de>
To:     Jan Kara <jack@...e.cz>, Michal Hocko <mhocko@...nel.org>
Cc:     Trond Myklebust <trondmy@...merspace.com>,
        "Anna.Schumaker\@Netapp.com" <Anna.Schumaker@...app.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>, linux-mm@...ck.org,
        linux-nfs@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] MM: replace PF_LESS_THROTTLE with PF_LOCAL_THROTTLE

On Mon, Apr 06 2020, Jan Kara wrote:

> On Mon 06-04-20 09:44:53, Michal Hocko wrote:
>> On Sat 04-04-20 08:40:17, Neil Brown wrote:
>> > On Fri, Apr 03 2020, Michal Hocko wrote:
>> > 
>> > > On Thu 02-04-20 10:53:20, Neil Brown wrote:
>> > >> 
>> > >> PF_LESS_THROTTLE exists for loop-back nfsd, and for a similar need in
>> > >> the loop block driver: in both cases a daemon needs to write to one bdi
>> > >> in order to free up writes queued to another bdi.
>> > >> 
>> > >> The daemon sets PF_LESS_THROTTLE and gets a larger allowance of dirty
>> > >> pages, so that it can still dirty pages after other processes have been
>> > >> throttled.
>> > >> 
>> > >> This approach was designed when all threads were blocked equally,
>> > >> independently of which device they were writing to, or how fast it was.
>> > >> Since that time the writeback algorithm has changed substantially with
>> > >> different threads getting different allowances based on non-trivial
>> > >> heuristics.  This means the simple "add 25%" heuristic is no longer
>> > >> reliable.
>> > >> 
>> > >> This patch changes the heuristic to ignore the global limits and
>> > >> consider only the limit relevant to the bdi being written to.  This
>> > >> approach is already available for BDI_CAP_STRICTLIMIT users (fuse) and
>> > >> should not introduce surprises.  This has the desired result of
>> > >> protecting the task from the consequences of large amounts of dirty data
>> > >> queued for other devices.
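
To make the mechanism concrete, the check amounts to roughly the
following - a simplified sketch of the idea with illustrative names,
not the literal patch:

	/* Illustrative helper, not the actual mm/page-writeback.c code. */
	static bool dirty_over_limit(struct task_struct *tsk,
				     unsigned long gdirty, unsigned long gthresh,
				     unsigned long bdi_dirty, unsigned long bdi_thresh)
	{
		if (tsk->flags & PF_LOCAL_THROTTLE)
			/* Ignore global dirty state; only the bdi being
			 * written to matters, as already done for
			 * BDI_CAP_STRICTLIMIT users such as fuse. */
			return bdi_dirty > bdi_thresh;
		/* Everyone else is throttled on global state too. */
		return gdirty > gthresh || bdi_dirty > bdi_thresh;
	}
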
>> > >
>> > > While I understand that you want to have per-bdi throttling for those
>> > > "special" files, I am still missing how this is going to provide the
>> > > additional room that the additional 25% gave them previously. I might
>> > > misremember or things might have changed (what you mention as non-trivial
>> > > heuristics) but PF_LESS_THROTTLE really needed that room to guarantee
>> > > forward progress. Care to expand some more on how this is handled now?
>> > > Maybe we do not need it anymore but calling that out explicitly would be
>> > > really helpful.
>> > 
>> > The 25% was a means to an end, not an end in itself.
>> > 
>> > The problem is that the NFS server needs to be able to write to the
>> > backing filesystem when the dirty memory limits have been reached, the
>> > whole allowance having been consumed by dirty pages on the NFS filesystem.
>> > 
>> > The 25% was just a way of giving an allowance of dirty pages to nfsd
>> > that could not be consumed by processes writing to an NFS filesystem.
>> > i.e. it doesn't need 25% MORE, it needs 25% PRIVATELY.  Actually it only
>> > really needs 1 page privately, but a few pages give better throughput
>> > and 25% seemed like a good idea at the time.
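
From memory, the old bump was applied when computing the dirty
thresholds, along these lines (a simplified sketch; names are
approximate):

	if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
		background += background / 4;	/* 25% extra headroom */
		thresh += thresh / 4;
	}
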
>> 
>> Yes this part is clear to me.
>>  
>> > per-bdi throttling focuses on the "PRIVATELY" (the important bit) and
>> > de-emphasises the 25% (the irrelevant detail).
>> 
>> It is still not clear to me how this patch is going to behave when the
>> global dirty throttling is essentially equal to the per-bdi - e.g. there
>> is only a single bdi and now the PF_LOCAL_THROTTLE process doesn't have
>> anything private.
>
> Let me think out loud to see whether I understand this properly. There are
> two BDIs involved in an NFS loop mount - the NFS virtual BDI (let's call it
> simply NFS-bdi) and the bdi of the real filesystem that is backing NFS
> (let's call this real-bdi). The case we are concerned about is when NFS-bdi
> is full of dirty pages so that the global dirty limit of the machine is
> exceeded. Then the flusher thread will take dirty pages from NFS-bdi and send
> them over localhost to nfsd. Nfsd, which has PF_LOCAL_THROTTLE set, will take
> these pages and write them to real-bdi. Now because PF_LOCAL_THROTTLE is
> set for nfsd, the fact that we are over the global limit does not take effect
> and nfsd is still able to write to real-bdi until the dirty limit on real-bdi
> is reached. So things should work as Neil writes, AFAIU.

Exactly.  The 'loop' block device follows a similar pattern - there is
the 'loop' bdi that might consume all the allowed dirty pages, and the
backing bdi that we need to write to so those dirty pages can be
cleaned.
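
The pattern in both cases is just that the daemon's worker thread marks
itself before servicing writes. Sketched on the loop driver's worker
setup, with the flag renamed as in this patch:

	static int loop_kthread_worker_fn(void *worker_ptr)
	{
		/* Throttle this writeback daemon only against the
		 * backing bdi it writes to, not global dirty state. */
		current->flags |= PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO;
		return kthread_worker_fn(worker_ptr);
	}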

The intention for PR_SET_IO_FLUSHER as described in 'man 2 prctl'
is much the same.  The thread that sets this is expected to be working
on behalf of a "block layer or filesystem" such as "FUSE daemons, SCSI
device emulation daemons" - each of these would be serving a bdi
"above" by writing to a bdi "below".

I'll add some more text to the changelog to make this clearer.

Thanks,
NeilBrown
