Message-ID: <20170118100853.gop3iia4sq5xk3t2@techsingularity.net>
Date: Wed, 18 Jan 2017 10:08:53 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Ganapatrao Kulkarni <gpkulkarni@...il.com>,
Michal Hocko <mhocko@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [RFC 4/4] mm, page_alloc: fix premature OOM when racing with
cpuset mems update
On Tue, Jan 17, 2017 at 11:16:10PM +0100, Vlastimil Babka wrote:
> Ganapatrao Kulkarni reported that the LTP test cpuset01 in stress mode triggers
> the OOM killer within a few seconds, despite lots of free memory. The test
> attempts to repeatedly fault in memory in one process in a cpuset, while
> changing the allowed nodes of the cpuset between 0 and 1 in another process.
>
> The problem comes from insufficient protection against cpuset changes, which
> can cause get_page_from_freelist() to consider all zones as non-eligible due to
> nodemask and/or current->mems_allowed. This was masked in the past by
> sufficient retries, but since commit 682a3385e773 ("mm, page_alloc: inline the
> fast path of the zonelist iterator") we fix the preferred_zoneref once, and
> don't iterate the whole zonelist in further attempts.
>
> A previous patch fixed this problem for current->mems_allowed. However, cpuset
> changes also update the policy nodemasks. The fix has two parts: we have to
> repeat the preferred_zoneref search when we detect a cpuset update by way of
> the seqcount, and we have to check the seqcount before considering OOM.
>
> Reported-by: Ganapatrao Kulkarni <gpkulkarni@...il.com>
> Fixes: 682a3385e773 ("mm, page_alloc: inline the fast path of the zonelist iterator")
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
Acked-by: Mel Gorman <mgorman@...hsingularity.net>
--
Mel Gorman
SUSE Labs