Message-ID: <34cb5019-9fab-4dfd-db48-c3f88fe5614c@nvidia.com>
Date: Mon, 6 Apr 2020 18:21:39 -0700
From: John Hubbard <jhubbard@...dia.com>
To: NeilBrown <neilb@...e.de>, Michal Hocko <mhocko@...nel.org>
CC: David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Joel Fernandes <joel@...lfernandes.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
<linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mm: clarify __GFP_MEMALLOC usage
On 4/6/20 6:00 PM, NeilBrown wrote:
...
>>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>>> index e5b817cb86e7..9cacef1a3ee0 100644
>>> --- a/include/linux/gfp.h
>>> +++ b/include/linux/gfp.h
>>> @@ -110,6 +110,11 @@ struct vm_area_struct;
>>> * the caller guarantees the allocation will allow more memory to be freed
>>> * very shortly e.g. process exiting or swapping. Users either should
>>> * be the MM or co-ordinating closely with the VM (e.g. swap over NFS).
>>> + * Users of this flag have to be extremely careful not to deplete the reserve
>>> + * completely, and to implement a throttling mechanism which controls the
>>> + * consumption of the reserve based on the amount of freed memory.
>>> + * Usage of a pre-allocated pool (e.g. mempool) should always be considered
>>> + * before using this flag.
>
> I think this version is pretty good.
>
>>> *
>>> * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
>>> * This takes precedence over the %__GFP_MEMALLOC flag if both are set.
>>>
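Just to make that precedence concrete, a hypothetical allocation (not
from the patch itself) would look like:

        /* __GFP_NOMEMALLOC wins: the emergency reserves stay off
         * limits even though __GFP_MEMALLOC is also set. */
        struct page *page = alloc_page(GFP_KERNEL | __GFP_MEMALLOC |
                                       __GFP_NOMEMALLOC);
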
>>
>> Hi Michal and all,
>>
>> How about using approximately this wording instead? I found Neil's wording to be
>> especially helpful so I mixed it in. (Also fixed a couple of slight 80-col overruns.)
>>
>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>> index be2754841369..c247a911d8c7 100644
>> --- a/include/linux/gfp.h
>> +++ b/include/linux/gfp.h
>> @@ -111,6 +111,15 @@ struct vm_area_struct;
>> * very shortly e.g. process exiting or swapping. Users either should
>> * be the MM or co-ordinating closely with the VM (e.g. swap over NFS).
>> *
>> + * To be extra clear: users of __GFP_MEMALLOC must be working to free other
>> + * memory, and that other memory needs to be freed "soon"; specifically, before
>> + * the reserve is exhausted. This generally implies a throttling mechanism that
>> + * balances the amount of __GFP_MEMALLOC memory used against the amount that the
>> + * caller is about to free.
>
> I don't like this change. "balances the amount ... is about to free"
> doesn't say anything about time, so it doesn't seem to be about throttling.
>
> I think it is hard to write rules because the rules are a bit spongey.
>
> With mempools, we have a nice clear rule. When you allocate from a
> mempool you must have a clear path to freeing that allocation which will
> not block on memory allocation except from a subordinate mempool. This
> implies a partial ordering between mempools. When you have layered
> block devices the path through the layers from filesystem down to
> hardware defines the order. It isn't enforced, but it is quite easy to
> reason about.
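
Just to make that rule concrete, here's a minimal sketch (the pool and
struct names are hypothetical, not taken from any real driver):

        #include <linux/mempool.h>
        #include <linux/gfp.h>

        struct io_ctx {                 /* hypothetical per-request state */
                struct page *page;
                int status;
        };

        static mempool_t *io_ctx_pool;  /* one pool per layer */

        static int io_ctx_pool_init(void)
        {
                /* Pre-allocate enough elements to guarantee forward
                 * progress even when the page allocator has nothing
                 * to give us. */
                io_ctx_pool = mempool_create_kmalloc_pool(16,
                                        sizeof(struct io_ctx));
                return io_ctx_pool ? 0 : -ENOMEM;
        }

        /* May sleep, but eventually succeeds: completing earlier
         * requests returns elements to the pool. */
        static struct io_ctx *io_ctx_get(void)
        {
                return mempool_alloc(io_ctx_pool, GFP_NOIO);
        }

        /* This is the "clear path to freeing": it must be reachable
         * without allocating from anything except a subordinate
         * (lower-layer) mempool. */
        static void io_ctx_put(struct io_ctx *ctx)
        {
                mempool_free(ctx, io_ctx_pool);
        }
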
>
> GFP_MEMALLOC effectively provides multiple mempools. So it could
> theoretically deadlock if multiple long dependency chains
> happened. i.e. if 1000 threads each make a GFP_MEMALLOC allocation and
> then need to make another one before the first can be freed - then you
> hit problems. There is no formal way to guarantee that this doesn't
> happen. We just say "be gentle" and minimize the users of this flag,
> and keep more memory in reserve than we really need.
> Note that 'threads' here might not be Linux tasks. If you have an IO
> request that proceeds asynchronously, moving from queue to queue and
> being handled by different tasks, then each one is a "thread" for the
> purpose of understanding mem-alloc dependencies.
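
To illustrate that failure mode with purely hypothetical code:

        #include <linux/gfp.h>

        /* Each "thread" pins one reserve-backed page, then needs a
         * second one before the first can be freed.  Run enough of
         * these chains concurrently and the reserve is exhausted
         * before anyone reaches the frees. */
        static void risky_chain(void)
        {
                struct page *a, *b;

                a = alloc_page(GFP_NOIO | __GFP_MEMALLOC);
                if (!a)
                        return;
                b = alloc_page(GFP_NOIO | __GFP_MEMALLOC);
                if (!b) {               /* reserve may be exhausted */
                        __free_page(a);
                        return;
                }
                /* ...only now is 'a' freeable... */
                __free_page(b);
                __free_page(a);
        }
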
>
> So maybe what I really should focus on is not how quickly things happen,
> but how many happen concurrently. The idea of throttling is to allow
> previous requests to complete before we start too many more.
>
> With Swap-over-NFS, some of the things that might need to be allocated
> are routing table entries. These scale with the number of NFS servers
> rather than the number of IO requests, so they are not going to cause
> concurrency problems.
> We also need memory to store replies, but these never exceed the number
> of pending requests, so there is limited concurrency there.
> NFS can send a lot of requests in parallel, but the main limit is the
> RPC "slot table" and while that grows dynamically, it does so with
> GFP_NOFS, so it can block or fail (I wonder if that should explicitly
> disable the use of the reserves).
>
> So there is a limit on concurrency imposed by non-GFP_MEMALLOC allocations.
>
> So ... maybe the documentation should say that boundless concurrency of
> allocations (i.e. one module allocating a boundless number of times
> before previous allocations are freed) must be avoided.
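
That suggests a concrete pattern; here's a rough sketch of bounding the
concurrency (the names and the bound are hypothetical):

        #include <linux/semaphore.h>
        #include <linux/gfp.h>

        #define MAX_INFLIGHT 8  /* arbitrary bound, for illustration */

        static struct semaphore memalloc_throttle;

        static void throttle_init(void)
        {
                sema_init(&memalloc_throttle, MAX_INFLIGHT);
        }

        /* Never have more than MAX_INFLIGHT reserve-backed
         * allocations outstanding: block here until an earlier
         * one has been freed. */
        static struct page *reserve_page_get(void)
        {
                struct page *page;

                down(&memalloc_throttle);
                page = alloc_page(GFP_NOIO | __GFP_MEMALLOC);
                if (!page)
                        up(&memalloc_throttle);
                return page;
        }

        static void reserve_page_put(struct page *page)
        {
                __free_page(page);
                up(&memalloc_throttle);
        }
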
>
Well, that's a good discussion that you just wrote, above, and I think it
demonstrates that it's hard to describe the situation in just a couple of
sentences. With that in mind, perhaps it's best to take the above notes
as a starting point, adjust them slightly, and drop them into
Documentation/core-api/memory-allocation.rst? Then the comments here
could refer to it.
thanks,
--
John Hubbard
NVIDIA