Message-ID: <YULpp4gVgZSuH65/@dhcp22.suse.cz>
Date: Thu, 16 Sep 2021 08:52:23 +0200
From: Michal Hocko <mhocko@...e.com>
To: NeilBrown <neilb@...e.de>
Cc: Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Theodore Ts'o <tytso@....edu>,
Andreas Dilger <adilger.kernel@...ger.ca>,
"Darrick J. Wong" <djwong@...nel.org>, Jan Kara <jack@...e.cz>,
Matthew Wilcox <willy@...radead.org>,
linux-xfs@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-nfs@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/6] EXT4: Remove ENOMEM/congestion_wait() loops.
On Thu 16-09-21 08:35:40, Neil Brown wrote:
> On Wed, 15 Sep 2021, Michal Hocko wrote:
> > On Wed 15-09-21 07:48:11, Neil Brown wrote:
> > >
> > > Why does __GFP_NOFAIL access the reserves? Why not require that the
> > > relevant "Try harder" flag (__GFP_ATOMIC or __GFP_MEMALLOC) be included
> > > with __GFP_NOFAIL if that is justified?
> >
> > Does 5020e285856c ("mm, oom: give __GFP_NOFAIL allocations access to
> > memory reserves") help?
>
> Yes, that helps. A bit.
>
> I'm not fond of the clause "the allocation request might have come with some
> locks held". What if it doesn't? Does it still have to pay the price?
>
> Should we not require that the caller indicate if any locks are held?
I do not think this would help much, TBH. What if the lock in question
doesn't impose any dependency on the allocation path in the first place?
> That way callers which don't hold locks can use __GFP_NOFAIL without
> worrying about imposing on other code.
>
> Or is it so rare that __GFP_NOFAIL would be used without holding a lock
> that it doesn't matter?
>
> The other commit of interest is
>
> Commit: 6c18ba7a1899 ("mm: help __GFP_NOFAIL allocations which do not trigger OOM killer")
>
> I don't find the reasoning convincing. It is a bit like "Robbing Peter
> to pay Paul". It takes from the reserves to allow a __GFP_NOFAIL to
> proceed, without any reason to think this particular allocation has any
> more 'right' to the reserves than anything else.
I do agree that this is not really optimal. I do not remember the exact
details, but these changes were mostly based on, or inspired by, extreme
memory pressure testing by Tetsuo, who has managed to trigger quite a few
corner cases. Especially those where NOFS was involved were problematic.
> While I don't like the reasoning in either of these, they do make it
> clear (to me) that the use of reserves is entirely an internal policy
> decision. They should *not* be seen as part of the API and callers
> should not have to be concerned about it when deciding whether to use
> __GFP_NOFAIL or not.
Yes. NOFAIL should have a high enough bar for use - essentially, there is
no other way than to use it - so memory reserves shouldn't be a road block.
If we learn that existing users can seriously deplete memory reserves
then we might need to reconsider the existing logic. So far there are no
indications that NOFAIL would really cause any problems in that area.
> The use of these reserves is, at most, a hypothetical problem. If it
> ever looks like becoming a real practical problem, it needs to be fixed
> internally to the page allocator. Maybe an extra watermark which isn't
> quite as permissive as ALLOC_HIGH...
>
> I'm inclined to drop all references to reserves from the documentation
> for __GFP_NOFAIL.
I have found your additions to the documentation useful.
> I think there are enough users already that adding a
> couple more isn't going to make problems substantially more likely. And
> more will be added anyway, which the mm/ team won't have the opportunity
> or bandwidth to review.
>
> Meanwhile I'll see if I can understand the intricacies of alloc_page so
> that I can contribute to making it more predictable.
>
> Question: In those cases where an open-coded loop is appropriate, such
> as when you want to handle signals or can drop locks, how bad would it
> be to have a tight loop without any sleep?
>
> should_reclaim_retry() will sleep 100ms (sometimes...). Is that enough?
> __GFP_NOFAIL doesn't add any sleep when looping.
Yeah, NOFAIL doesn't add any explicit sleep points. In general there is
no guarantee that a sleepable allocation will sleep. We do cond_resched
in general, but sleeping is enforced only for worker contexts because WQ
concurrency management depends on explicit sleeping. So to answer your
question: if you really need to sleep between retries then you should do
it manually, but cond_resched is already implied.
--
Michal Hocko
SUSE Labs