Message-ID: <CAEwNFnALUoeh5cEW=XZqy7Aab4hxtE11-mAjWB1c5eddzGuQFA@mail.gmail.com>
Date: Wed, 16 Nov 2011 01:13:30 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: Mel Gorman <mgorman@...e.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Colin Cross <ccross@...roid.com>,
Pekka Enberg <penberg@...helsinki.fi>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Rientjes <rientjes@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: avoid livelock on !__GFP_FS allocations
On Mon, Nov 14, 2011 at 11:04 PM, Mel Gorman <mgorman@...e.de> wrote:
> This patch seems to have gotten lost in the cracks and the discussion
> on alternatives that started here https://lkml.org/lkml/2011/10/25/24
> petered out without any alternative patches being posted. Lacking
> a viable alternative patch, I'm reposting this patch because AFAIK,
> this bug still exists.
>
> Colin Cross reported;
>
> Under the following conditions, __alloc_pages_slowpath can loop forever:
> gfp_mask & __GFP_WAIT is true
> gfp_mask & __GFP_FS is false
> reclaim and compaction make no progress
> order <= PAGE_ALLOC_COSTLY_ORDER
>
> These conditions happen very often during suspend and resume,
> when pm_restrict_gfp_mask() effectively converts all GFP_KERNEL
> allocations into __GFP_WAIT.
>
> The oom killer is not run because gfp_mask & __GFP_FS is false,
> but should_alloc_retry will always return true when order is less
> than PAGE_ALLOC_COSTLY_ORDER.
>
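For context, here is a rough sketch of the loop being described, paraphrased
from the 3.1-era __alloc_pages_slowpath with argument lists trimmed (names
such as did_some_progress follow that code, but this is an illustration, not
the exact source):

rebalance:
	/* __GFP_WAIT allows us to sleep, reclaim and try again */
	page = __alloc_pages_direct_reclaim(gfp_mask, order, ...,
						&did_some_progress);
	if (page)
		goto got_pg;

	if (!did_some_progress) {
		/*
		 * The OOM killer is only considered for __GFP_FS
		 * allocations, so with __GFP_FS clear nothing in this
		 * branch can force progress.
		 */
	}

	/* Always true while order <= PAGE_ALLOC_COSTLY_ORDER ... */
	if (should_alloc_retry(gfp_mask, order, did_some_progress,
						pages_reclaimed)) {
		wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/50);
		goto rebalance;		/* ... so we spin here forever */
	}
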
> In his fix, he avoided retrying the allocation if reclaim made no
> progress and __GFP_FS was not set. The problem is that this would
> result in GFP_NOIO allocations failing that previously succeeded
> which would be very unfortunate.
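A rough sketch of that earlier approach as described above (illustrative
only, not the actual posted patch):

	/*
	 * Give up instead of retrying when reclaim made no progress and
	 * the allocation is not allowed to use the filesystem.
	 */
	if (!did_some_progress && !(gfp_mask & __GFP_FS))
		goto nopage;
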
>
> The big difference between GFP_NOIO and suspend converting GFP_KERNEL
> to behave like GFP_NOIO is that normally flushers will be cleaning
> pages and kswapd reclaims pages allowing GFP_NOIO to succeed after
> a short delay. The same does not necessarily apply during suspend as
> the storage device may be suspended. Hence, this patch special cases
> the suspend case to fail the page allocation if reclaim cannot make
> progress. This might cause suspend to abort but that is better than
> a livelock.
>
> [mgorman@...e.de: Rework fix to be suspend specific]
> Reported-and-tested-by: Colin Cross <ccross@...roid.com>
> Signed-off-by: Mel Gorman <mgorman@...e.de>
> ---
> mm/page_alloc.c | 22 ++++++++++++++++++++++
> 1 files changed, 22 insertions(+), 0 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9dd443d..5402897 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -127,6 +127,20 @@ void pm_restrict_gfp_mask(void)
> saved_gfp_mask = gfp_allowed_mask;
> gfp_allowed_mask &= ~GFP_IOFS;
> }
> +
> +static bool pm_suspending(void)
> +{
> + if ((gfp_allowed_mask & GFP_IOFS) == GFP_IOFS)
> + return false;
> + return true;
> +}
> +
> +#else
> +
> +static bool pm_suspending(void)
> +{
> + return false;
> +}
> #endif /* CONFIG_PM_SLEEP */
>
> #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
> @@ -2214,6 +2228,14 @@ rebalance:
>
> goto restart;
> }
> +
> + /*
> + * Suspend converts GFP_KERNEL to __GFP_WAIT which can
> + * prevent reclaim making forward progress without
> + * invoking OOM. Bail if we are suspending
> + */
> + if (pm_suspending())
> + goto nopage;
> }
>
> /* Check if we should retry the allocation */
>
I don't have much time to look into this problem, so I may be missing some
things. But the feeling I had when I faced this problem was: why should we
add yet another special-case handling function?
We already have such a thing for hibernation - oom_killer_disabled in the VM.
Could we use it instead of adding a new branch for this very special case?
Maybe it would be better to rename oom_killer_disabled to
pm_is_going or something.
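
Something along these lines, assuming the existing oom_killer_disabled flag
in the page allocator could simply be tested at this point and were also set
for suspend rather than only hibernation (an illustrative, untested sketch of
the suggestion, not a posted patch):

	/*
	 * The PM code already sets this flag before hibernation; if it
	 * also covered suspend it could double as a "PM transition in
	 * progress" test here (renamed as suggested above).
	 */
	if (oom_killer_disabled)
		goto nopage;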
--
Kind regards,
Minchan Kim