Message-ID: <036e01d2715a$3a227de0$ae6779a0$@alibaba-inc.com>
Date: Wed, 18 Jan 2017 15:12:27 +0800
From: "Hillf Danton" <hillf.zj@...baba-inc.com>
To: "'Vlastimil Babka'" <vbabka@...e.cz>,
"'Mel Gorman'" <mgorman@...hsingularity.net>,
"'Ganapatrao Kulkarni'" <gpkulkarni@...il.com>
Cc: "'Michal Hocko'" <mhocko@...nel.org>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [RFC 4/4] mm, page_alloc: fix premature OOM when racing with cpuset mems update
On Wednesday, January 18, 2017 6:16 AM Vlastimil Babka wrote:
>
> @@ -3802,13 +3811,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
> * Also recalculate the starting point for the zonelist iterator or
> * we could end up iterating over non-eligible zones endlessly.
> */
Is the newly added comment above still needed, now that the recalculation of the zonelist starting point is removed here?
> - if (unlikely(ac.nodemask != nodemask)) {
> -no_zone:
> + if (unlikely(ac.nodemask != nodemask))
> ac.nodemask = nodemask;
> - ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
> - ac.high_zoneidx, ac.nodemask);
> - /* If we have NULL preferred zone, slowpath wll handle that */
> - }
>
> page = __alloc_pages_slowpath(alloc_mask, order, &ac);
>
> --
> 2.11.0