Message-ID: <20131031144848.GA3275@quack.suse.cz>
Date: Thu, 31 Oct 2013 15:48:48 +0100
From: Jan Kara <jack@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Luis Henriques <luis.henriques@...onical.com>,
linux-kernel@...r.kernel.org, kernel-team@...ts.ubuntu.com,
Michal Hocko <mhocko@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 3.5 29/64] fs: buffer: move allocation failure loop into
the allocator
On Thu 31-10-13 10:00:08, Johannes Weiner wrote:
> On Mon, Oct 28, 2013 at 02:47:48PM +0000, Luis Henriques wrote:
> > 3.5.7.24 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Johannes Weiner <hannes@...xchg.org>
> >
> > commit 84235de394d9775bfaa7fa9762a59d91fef0c1fc upstream.
> >
> > Buffer allocation has a very crude indefinite loop around waking the
> > flusher threads and performing global NOFS direct reclaim because it
> > cannot handle allocation failures.
> >
> > The most immediate problem with this is that the allocation may fail due
> > to a memory cgroup limit, where flushers + direct reclaim might not make
> > any progress towards resolving the situation at all. Unlike in the
> > global case, a memory cgroup may have no cache at all, only anonymous
> > pages and no swap. This situation will lead to a reclaim livelock with
> > insane IO from waking the flushers and thrashing unrelated filesystem
> > cache in a tight loop.
> >
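For context, the loop being described lives in the caller, __getblk_slow(),
and is not visible in the hunk below; in 3.5 it looks roughly like this (a
paraphrased sketch of the pre-patch code, not the verbatim source):

	static struct buffer_head *
	__getblk_slow(struct block_device *bdev, sector_t block, int size)
	{
		for (;;) {
			struct buffer_head *bh;
			int ret;

			bh = __find_get_block(bdev, block, size);
			if (bh)
				return bh;

			ret = grow_buffers(bdev, block, size);
			if (ret < 0)
				return NULL;
			if (ret == 0)
				/* wake flushers, global NOFS reclaim, retry */
				free_more_memory();
		}
	}
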
> > Use __GFP_NOFAIL allocations for buffers for now. This makes sure that
> > any looping happens in the page allocator, which knows how to
> > orchestrate kswapd, direct reclaim, and the flushers sensibly. It also
> > lets memory cgroups detect allocations that cannot handle failure, so
> > they can ultimately let such allocations bypass the limit if reclaim
> > cannot make progress.
So I was under the impression that __GFP_NOFAIL was going away, wasn't it?
At least about a year ago there was an effort to remove its users, and so
we ended up creating loops like the one above (and similar ones in
jbd/jbd2, roughly like the sketch below) in cases where handling the
failure wasn't easily possible. And now it seems we are going in the
opposite direction... At least we have a steady flow of patches
guaranteed :)
Honza
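The open-coded replacements mentioned above follow a common shape; modeled
loosely here on jbd2's journal_alloc_journal_head(), it looks roughly like
this (the helper name is made up for illustration, not verbatim jbd2
source):

	static struct journal_head *alloc_jh_nofail(void)
	{
		struct journal_head *jh;

		/* loop in the caller until the slab allocation succeeds */
		while (!(jh = kmem_cache_alloc(jbd2_journal_head_cache,
					       GFP_NOFS)))
			yield();
		return jh;
	}
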
> >
> > Reported-by: azurIt <azurit@...ox.sk>
> > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> > Cc: Michal Hocko <mhocko@...e.cz>
> > Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> > Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
> > Signed-off-by: Luis Henriques <luis.henriques@...onical.com>
> > ---
> > fs/buffer.c | 14 ++++++++++++--
> > mm/memcontrol.c | 2 ++
> > 2 files changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/buffer.c b/fs/buffer.c
> > index 2c78739..2675e5a 100644
> > --- a/fs/buffer.c
> > +++ b/fs/buffer.c
> > @@ -957,9 +957,19 @@ grow_dev_page(struct block_device *bdev, sector_t block,
> > struct buffer_head *bh;
> > sector_t end_block;
> > int ret = 0; /* Will call free_more_memory() */
> > + gfp_t gfp_mask;
> >
> > - page = find_or_create_page(inode->i_mapping, index,
> > - (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
> > + gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
> > + gfp_mask |= __GFP_MOVABLE;
> > + /*
> > + * XXX: __getblk_slow() can not really deal with failure and
> > + * will endlessly loop on improvised global reclaim. Prefer
> > + * looping in the allocator rather than here, at least that
> > + * code knows what it's doing.
> > + */
> > + gfp_mask |= __GFP_NOFAIL;
> > +
> > + page = find_or_create_page(inode->i_mapping, index, gfp_mask);
> > if (!page)
> > return ret;
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 226b63e..953bf3c 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2405,6 +2405,8 @@ done:
> > return 0;
> > nomem:
> > *ptr = NULL;
> > + if (gfp_mask & __GFP_NOFAIL)
> > + return 0;
> > return -ENOMEM;
> > bypass:
> > *ptr = root_mem_cgroup;
> > --
> > 1.8.3.2
> >
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR