Message-ID: <20170302130009.GC3213@bfoster.bfoster>
Date: Thu, 2 Mar 2017 08:00:09 -0500
From: Brian Foster <bfoster@...hat.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Xiong Zhou <xzhou@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
linux-xfs@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: mm allocation failure and hang when running xfstests generic/269
on xfs

On Thu, Mar 02, 2017 at 01:49:09PM +0100, Michal Hocko wrote:
> On Thu 02-03-17 07:24:27, Brian Foster wrote:
> > On Thu, Mar 02, 2017 at 11:35:20AM +0100, Michal Hocko wrote:
> > > On Thu 02-03-17 19:04:48, Tetsuo Handa wrote:
> > > [...]
> > > > So, commit 5d17a73a2ebeb8d1 ("vmalloc: back off when the current
> > > > task is killed") in effect implemented a __GFP_KILLABLE flag and
> > > > automatically applied it. As a result, those who are not ready to
> > > > fail upon SIGKILL are confused. ;-)
> > >
> > > You are right! The function is documented as being allowed to fail, but
> > > the code doesn't really allow that. This seems like a bug to me. What do
> > > you think about the following?
> > > ---
> > > From d02cb0285d8ce3344fd64dc7e2912e9a04bef80d Mon Sep 17 00:00:00 2001
> > > From: Michal Hocko <mhocko@...e.com>
> > > Date: Thu, 2 Mar 2017 11:31:11 +0100
> > > Subject: [PATCH] xfs: allow kmem_zalloc_greedy to fail
> > >
> > > Even though kmem_zalloc_greedy is documented as a function that might
> > > fail, the current code doesn't really implement this properly and loops
> > > on the smallest allowed size forever. This is a problem because vzalloc
> > > might fail permanently. Since 5d17a73a2ebe ("vmalloc: back off when the
> > > current task is killed") such a failure is much more probable than it
> > > used to be. Fix this by bailing out if the minimum size request fails.
> > >
> > > This was noticed by Xiong Zhou via a hang in the generic/269 xfstest.
> > >
> > > Reported-by: Xiong Zhou <xzhou@...hat.com>
> > > Analyzed-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
> > > Signed-off-by: Michal Hocko <mhocko@...e.com>
> > > ---
> > > fs/xfs/kmem.c | 2 ++
> > > 1 file changed, 2 insertions(+)
> > >
> > > diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c
> > > index 339c696bbc01..ee95f5c6db45 100644
> > > --- a/fs/xfs/kmem.c
> > > +++ b/fs/xfs/kmem.c
> > > @@ -34,6 +34,8 @@ kmem_zalloc_greedy(size_t *size, size_t minsize, size_t maxsize)
> > >  	size_t kmsize = maxsize;
> > >  
> > >  	while (!(ptr = vzalloc(kmsize))) {
> > > +		if (kmsize == minsize)
> > > +			break;
> > >  		if ((kmsize >>= 1) <= minsize)
> > >  			kmsize = minsize;
> > >  	}
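
(For reference, a sketch of how the whole function reads with the above
hunk applied; the lines outside the hunk are reconstructed from the diff
context and may differ slightly from the actual tree:)

	void *
	kmem_zalloc_greedy(size_t *size, size_t minsize, size_t maxsize)
	{
		void	*ptr;
		size_t	kmsize = maxsize;

		/* Walk down from maxsize, halving the request on failure... */
		while (!(ptr = vzalloc(kmsize))) {
			/* ...but give up once even the minimum size fails. */
			if (kmsize == minsize)
				break;
			if ((kmsize >>= 1) <= minsize)
				kmsize = minsize;
		}
		if (ptr)
			*size = kmsize;
		return ptr;
	}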
> >
> > It might be more consistent with the rest of the kmem code to accept a
> > flags argument and do something like this based on KM_MAYFAIL.
>
> Well, vmalloc doesn't really support the GFP_NOFAIL semantic right now,
> for the same reason it doesn't support GFP_NOFS. So I am not sure this is
> a good idea.
>
Not sure I follow...? I'm just suggesting controlling the loop behavior
based on the KM_ flag, not doing or changing anything with respect to the
GFP_ flags.
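
(Purely as a sketch of what's being suggested here, assuming the
xfs_km_flags_t/KM_MAYFAIL convention used by the other kmem_*() helpers;
hypothetical and untested:)

	void *
	kmem_zalloc_greedy(size_t *size, size_t minsize, size_t maxsize,
			   xfs_km_flags_t flags)
	{
		void	*ptr;
		size_t	kmsize = maxsize;

		while (!(ptr = vzalloc(kmsize))) {
			/* Only KM_MAYFAIL callers see the failure; others
			 * keep the historical retry-forever behavior. */
			if ((flags & KM_MAYFAIL) && kmsize == minsize)
				break;
			if ((kmsize >>= 1) <= minsize)
				kmsize = minsize;
		}
		if (ptr)
			*size = kmsize;
		return ptr;
	}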
> > The one current caller looks like it would pass it, but I suppose we'd
> > still need a mechanism to break out should a new caller not pass that
> > flag. Would a fatal_signal_pending() check in the loop also allow us to
> > break out in the scenario reproduced here?
>
> Yes, that check would work as well; I just thought breaking out when the
> minsize request fails was more logical. There might be other reasons to
> fail the request, and looping here just seems wrong. But whatever you or
> other xfs people prefer.
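
(The fatal_signal_pending() variant mentioned above would look roughly
like this inside the loop; again just a sketch:)

	while (!(ptr = vzalloc(kmsize))) {
		/* vzalloc() keeps failing once the task has a fatal
		 * signal pending, so retrying is pointless: bail out. */
		if (fatal_signal_pending(current))
			break;
		if ((kmsize >>= 1) <= minsize)
			kmsize = minsize;
	}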
There may be higher-level reasons why this code should or should not loop,
but that seems like a separate issue to me. My thinking is more that this
appears to be how every kmem_*() function operates today, and it seems a
bit out of place to change the behavior of just one of them to fix a bug.
Maybe I'm missing something, though... are we subject to the same general
problem in any of the other kmem_*() functions that can currently loop
indefinitely?
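
(For context, the retry pattern in question, paraphrased from kmem_alloc()
in fs/xfs/kmem.c of that era; details such as the warning text are
approximate:)

	void *
	kmem_alloc(size_t size, xfs_km_flags_t flags)
	{
		int	retries = 0;
		gfp_t	lflags = kmem_flags_convert(flags);
		void	*ptr;

		do {
			ptr = kmalloc(size, lflags);
			/* KM_MAYFAIL/KM_NOSLEEP callers get the failure back;
			 * everyone else retries indefinitely, warning
			 * periodically about a possible deadlock. */
			if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP)))
				return ptr;
			if (!(++retries % 100))
				xfs_err(NULL,
		"possible memory allocation deadlock in %s (mode:0x%x)",
					__func__, lflags);
			congestion_wait(BLK_RW_ASYNC, HZ/50);
		} while (1);
	}
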
Brian
> --
> Michal Hocko
> SUSE Labs