Message-ID: <875zecw1n6.fsf@notabene.neil.brown.name>
Date: Tue, 07 Apr 2020 11:00:29 +1000
From: NeilBrown <neilb@...e.de>
To: John Hubbard <jhubbard@...dia.com>,
Michal Hocko <mhocko@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Joel Fernandes <joel@...lfernandes.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mm: clarify __GFP_MEMALLOC usage
On Mon, Apr 06 2020, John Hubbard wrote:
> On 4/6/20 12:01 AM, Michal Hocko wrote:
> ...
>> From 6c90b0a19a07c87d24ad576e69b33c6e19c2f9a2 Mon Sep 17 00:00:00 2001
>> From: Michal Hocko <mhocko@...e.com>
>> Date: Wed, 1 Apr 2020 14:00:56 +0200
>> Subject: [PATCH] mm: clarify __GFP_MEMALLOC usage
>>
>> It seems that the existing documentation is not explicit enough about
>> the expected usage and potential risks. While it calls out that users
>> have to free memory when using this flag, it is not really apparent
>> that users have to be careful not to deplete memory reserves and that
>> they should implement some sort of throttling wrt. the freeing
>> process.
>>
>> This is partly based on Neil's explanation [1].
>>
>> Let's also call out that a pre-allocated pool allocator should be
>> considered.
>>
>> [1] http://lkml.kernel.org/r/877dz0yxoa.fsf@notabene.neil.brown.name
>> Signed-off-by: Michal Hocko <mhocko@...e.com>
>> ---
>> include/linux/gfp.h | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>> index e5b817cb86e7..9cacef1a3ee0 100644
>> --- a/include/linux/gfp.h
>> +++ b/include/linux/gfp.h
>> @@ -110,6 +110,11 @@ struct vm_area_struct;
>> * the caller guarantees the allocation will allow more memory to be freed
>> * very shortly e.g. process exiting or swapping. Users either should
>> * be the MM or co-ordinating closely with the VM (e.g. swap over NFS).
>> + * Users of this flag have to be extremely careful to not deplete the reserve
>> + * completely and implement a throttling mechanism which controls the consumption
>> + * of the reserve based on the amount of freed memory.
>> + * Usage of a pre-allocated pool (e.g. mempool) should always be considered
>> + * before using this flag.
I think this version is pretty good.
>> *
>> * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
>> * This takes precedence over the %__GFP_MEMALLOC flag if both are set.
>>
>
> Hi Michal and all,
>
> How about using approximately this wording instead? I found Neil's wording to be
> especially helpful, so I mixed it in. (Also fixed a couple of slight 80-col overruns.)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index be2754841369..c247a911d8c7 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -111,6 +111,15 @@ struct vm_area_struct;
> * very shortly e.g. process exiting or swapping. Users either should
> * be the MM or co-ordinating closely with the VM (e.g. swap over NFS).
> *
> + * To be extra clear: users of __GFP_MEMALLOC must be working to free other
> + * memory, and that other memory needs to be freed "soon"; specifically, before
> + * the reserve is exhausted. This generally implies a throttling mechanism that
> + * balances the amount of __GFP_MEMALLOC memory used against the amount that the
> + * caller is about to free.
I don't like this change. "balances the amount ... is about to free"
doesn't say anything about time, so it doesn't seem to be about throttling.
I think it is hard to write rules because the rules are a bit spongey.
With mempools, we have a nice clear rule. When you allocate from a
mempool you must have a clear path to freeing that allocation which will
not block on memory allocation except from a subordinate mempool. This
implies a partial ordering between mempools. When you have layered
block devices the path through the layers from filesystem down to
hardware defines the order. It isn't enforced, but it is quite easy to
reason about.
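
To make that rule concrete, here is a minimal sketch (the struct, the
pool name and the element count are invented for illustration; only
mempool_create_kmalloc_pool(), mempool_alloc() and mempool_free() are
real API):

    #include <linux/mempool.h>
    #include <linux/slab.h>

    struct io_ctx { int dummy; };      /* hypothetical per-request state */

    static mempool_t *io_ctx_pool;     /* hypothetical pool, one per device */

    static int init_io_ctx_pool(void)
    {
            /* A few pre-allocated elements guarantee forward progress. */
            io_ctx_pool = mempool_create_kmalloc_pool(4, sizeof(struct io_ctx));
            return io_ctx_pool ? 0 : -ENOMEM;
    }

    static void handle_one_request(void)
    {
            /* With a sleeping gfp_mask this cannot fail; it waits instead. */
            struct io_ctx *ctx = mempool_alloc(io_ctx_pool, GFP_NOIO);

            /*
             * Everything between here and mempool_free() must not
             * allocate from this pool again, nor from any pool above
             * it in the stack - that is the partial ordering.
             */
            /* ... drive the I/O using ctx ... */

            mempool_free(ctx, io_ctx_pool);
    }
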
GFP_MEMALLOC effectively provides multiple mempools. So it could
theoretically deadlock if multiple long dependency chains happened,
i.e. if 1000 threads each make a GFP_MEMALLOC allocation and then need
to make another one before the first can be freed - then you hit
problems. There is no formal way to guarantee that this doesn't
happen. We just say "be gentle", minimize the users of this flag,
and keep more memory in reserve than we really need.
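
In code, the problem pattern is roughly this (a made-up sketch, not
taken from any real caller; error handling omitted for brevity):

    #include <linux/slab.h>

    /*
     * Anti-pattern sketch: a second reserve-backed allocation is made
     * while the first is still held.  Run this from enough contexts
     * at once and the reserve is empty before anything can be freed.
     */
    static void risky(void)
    {
            void *a = kmalloc(4096, GFP_ATOMIC | __GFP_MEMALLOC);
            void *b = kmalloc(4096, GFP_ATOMIC | __GFP_MEMALLOC);

            /* ... nothing is freed until both have succeeded ... */

            kfree(b);
            kfree(a);
    }
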
Note that 'threads' here might not be Linux tasks. If you have an IO
request that proceeds asynchronously, moving from queue to queue and
being handled by different tasks, then each one is a "thread" for the
purpose of understanding the mem-alloc dependency.
So maybe what I really should focus on is not how quickly things happen,
but how many happen concurrently. The idea of throttling is to allow
previous requests to complete before we start too many more.
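
One way to express that in code (purely illustrative; the semaphore
and the limit of 16 are invented, nothing like this exists as common
API, and error paths are omitted):

    #include <linux/semaphore.h>
    #include <linux/slab.h>

    static struct semaphore memalloc_slots;   /* hypothetical throttle */

    static void init_throttle(void)
    {
            sema_init(&memalloc_slots, 16);   /* at most 16 in flight */
    }

    static void *reserve_alloc(size_t size)
    {
            /* Throttle: wait for an earlier request to complete first. */
            down(&memalloc_slots);
            return kmalloc(size, GFP_KERNEL | __GFP_MEMALLOC);
    }

    static void reserve_free(void *p)
    {
            kfree(p);
            up(&memalloc_slots);              /* admit the next request */
    }
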
With Swap-over-NFS, some of the things that might need to be allocated
are routing table entries. These scale with the number of NFS servers
rather than the number of IO requests, so they are not going to cause
concurrency problems.
We also need memory to store replies, but these never exceed the number
of pending requests, so there is limited concurrency there.
NFS can send a lot of requests in parallel, but the main limit is the
RPC "slot table", and while that grows dynamically, it does so with
GFP_NOFS, so it can block or fail (I wonder if that should explicitly
disable the use of the reserves).
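
If we did want that, it would presumably be a one-flag change at the
allocation site (a sketch only; the struct and function are
hypothetical, not the real RPC slot-table code):

    #include <linux/slab.h>

    /* Hypothetical slot-table entry; not the real RPC structures. */
    struct rpc_slot { int dummy; };

    static struct rpc_slot *grow_slot_table(void)
    {
            /* GFP_NOFS as today, plus an explicit opt-out of the reserves. */
            return kzalloc(sizeof(struct rpc_slot),
                           GFP_NOFS | __GFP_NOMEMALLOC);
    }
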
So there is a limit on concurrency imposed by non-GFP_MEMALLOC allocations.
So ... maybe the documentation should say that boundless concurrency of
allocations (i.e. one module allocating a boundless number of times
before previous allocations are freed) must be avoided.
NeilBrown
> + *
> + * Usage of a pre-allocated pool (e.g. mempool) should always be considered
> + * before using this flag.
> + *
> * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
> * This takes precedence over the %__GFP_MEMALLOC flag if both are set.
> */
>
>
> thanks,
> --
> John Hubbard
> NVIDIA