Message-ID: <CA+KHdyW6kS7dB95BOiNo5y5anfygB2OnJ0sOcw545s2_V1rfYA@mail.gmail.com>
Date: Thu, 25 Nov 2021 19:40:56 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Dave Chinner <david@...morbit.com>, Neil Brown <neilb@...e.de>,
Christoph Hellwig <hch@....de>, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
Ilya Dryomov <idryomov@...il.com>,
Jeff Layton <jlayton@...nel.org>
Subject: Re: [PATCH v2 2/4] mm/vmalloc: add support for __GFP_NOFAIL
On Thu, Nov 25, 2021 at 9:48 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Wed 24-11-21 21:37:54, Uladzislau Rezki wrote:
> > On Wed, Nov 24, 2021 at 09:43:12AM +0100, Michal Hocko wrote:
> > > On Tue 23-11-21 17:02:38, Andrew Morton wrote:
> > > > On Tue, 23 Nov 2021 20:01:50 +0100 Uladzislau Rezki <urezki@...il.com> wrote:
> > > >
> > > > > On Mon, Nov 22, 2021 at 04:32:31PM +0100, Michal Hocko wrote:
> > > > > > From: Michal Hocko <mhocko@...e.com>
> > > > > >
> > > > > > Dave Chinner has mentioned that some of the xfs code would benefit from
> > > > > > kvmalloc support for __GFP_NOFAIL because they have allocations that
> > > > > > cannot fail and they do not fit into a single page.
> > > >
> > > > Perhaps we should tell xfs "no, do it internally". Because this is a
> > > > rather nasty-looking thing - do we want to encourage other callsites to
> > > > start using it?
> > >
> > > This is what xfs is likely going to do if we do not provide the
> > > functionality. I just do not see why that would be a better outcome
> > > though. My long-term experience tells me that whenever we ignore
> > > requirements from other subsystems, those requirements materialize in
> > > some form in the end, in many cases done either suboptimally or
> > > outright wrong. This might not be the case for xfs, as the quality of
> > > the implementation is high there, but it is not the case in general.
> > >
> > > Even if people start using vmalloc(GFP_NOFAIL) out of laziness or for
> > > any other stupid reason, then what? Is that something we should worry
> > > about? Retrying within the allocator doesn't make things worse. In
> > > fact, it makes such abusers easier to find by grep, which would be
> > > harder with custom retry loops.
> > >
> > > [...]
> > > > > > + if (nofail) {
> > > > > > + schedule_timeout_uninterruptible(1);
> > > > > > + goto again;
> > > > > > + }
> > > >
> > > > The idea behind congestion_wait() is to prevent us from having to
> > > > hard-wire delays like this. congestion_wait(1) would sleep for up to
> > > > one millisecond, but will return earlier if reclaim events happened
> > > > which make it likely that the caller can now proceed with the
> > > > allocation event, successfully.
> > > >
> > > > However it turns out that congestion_wait() was quietly broken at the
> > > > block level some time ago. We could perhaps resurrect the concept at
> > > > another level - say by releasing congestion_wait() callers if an amount
> > > > of memory newly becomes allocatable. This obviously asks for inclusion
> > > > of zone/node/etc info from the congestion_wait() caller. But that's
> > > > just an optimization - if the newly-available memory isn't useful to
> > > > the congestion_wait() caller, they just fail the allocation attempts
> > > > and wait again.
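Just to illustrate the concept described here (a rough, untested sketch
only, and as noted congestion_wait() is currently broken at the block
level), the nofail retry could look something like below, assuming it
sits where the quoted "goto again;" retry lives:

<snip>
	if (nofail) {
		/*
		 * Sleep for up to HZ/50 jiffies. Unlike a hard-wired
		 * schedule_timeout_uninterruptible(1), congestion_wait()
		 * is meant to return earlier once congestion clears.
		 */
		congestion_wait(BLK_RW_ASYNC, HZ/50);
		goto again;
	}
<snip>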
> > >
> > > vmalloc has two potential failure modes: depleted memory and depleted
> > > vmalloc space. So there are two different events to wait for. I do
> > > agree that schedule_timeout_uninterruptible is both ugly and very
> > > simple, but do we really need a much more sophisticated solution at
> > > this stage?
> > >
> > I would say there is at least one more: when users set their own
> > range (start:end) in which to allocate. In that scenario we might never
> > return to the user, because there might not be any free vmap space in
> > the specified range.
> >
> > To address this, we can allow __GFP_NOFAIL only for the entire vmalloc
> > address space, i.e. within VMALLOC_START:VMALLOC_END.
>
> How should we do that?
>
<snip>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..664935bee2a2 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3029,6 +3029,13 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		return NULL;
 	}
 
+	if (gfp_mask & __GFP_NOFAIL) {
+		if (start != VMALLOC_START || end != VMALLOC_END) {
+			gfp_mask &= ~__GFP_NOFAIL;
+			WARN_ONCE(1, "__GFP_NOFAIL is allowed only for entire vmalloc space.");
+		}
+	}
+
 	if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
 		unsigned long size_per_node;
<snip>
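With a check like that, a caller passing a restricted start:end range
simply has __GFP_NOFAIL dropped (apart from the one-off warning) and
falls back to the normal, failing behaviour.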
Or just allow the __GFP_NOFAIL flag only for the high-level API, i.e.
__vmalloc(), where a gfp mask can be passed. Because it uses the whole
vmalloc address space, we do not need to check the range or other
parameters like align, etc. This variant is preferable. But the problem
is that there are internal functions which are publicly available to
kernel users, like __vmalloc_node_range(). In that case we can add a
big comment saying: the __GFP_NOFAIL flag can be used __only__ with the
high-level API, i.e. __vmalloc().
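As a rough, untested sketch of that comment (assuming the current
__vmalloc_node_range() prototype), something like:

<snip>
/*
 * NOTE: __GFP_NOFAIL is supported only via the high-level __vmalloc()
 * API, which always allocates from the whole VMALLOC_START:VMALLOC_END
 * range. Passing __GFP_NOFAIL directly to __vmalloc_node_range() with
 * a restricted start:end range is not supported.
 */
void *__vmalloc_node_range(unsigned long size, unsigned long align,
			unsigned long start, unsigned long end, gfp_t gfp_mask,
			pgprot_t prot, unsigned long vm_flags, int node,
			const void *caller)
<snip>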
Any thoughts?
--
Uladzislau Rezki