Message-ID: <3ca060b7-0648-a829-7d5e-896490b4a622@suse.cz>
Date: Wed, 18 Jan 2017 10:32:25 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Hillf Danton <hillf.zj@...baba-inc.com>,
'Mel Gorman' <mgorman@...hsingularity.net>,
'Ganapatrao Kulkarni' <gpkulkarni@...il.com>
Cc: 'Michal Hocko' <mhocko@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [RFC 4/4] mm, page_alloc: fix premature OOM when racing with
cpuset mems update
On 01/18/2017 08:12 AM, Hillf Danton wrote:
>
> On Wednesday, January 18, 2017 6:16 AM Vlastimil Babka wrote:
>>
>> @@ -3802,13 +3811,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
>> * Also recalculate the starting point for the zonelist iterator or
>> * we could end up iterating over non-eligible zones endlessly.
>> */
> Is the newly added comment still needed?
You're right, it's no longer true. I think we can even remove most of the
zoneref trickery and the non-NULL checks in the fastpath (as a cleanup patch on
top), as the loop in get_page_from_freelist() should handle it just fine. IIRC
Mel even did this in the microopt series, but I pointed out that a NULL
preferred_zoneref pointer would be dangerous in get_page_from_freelist(). We
didn't realize that we were checking the wrong pointer (i.e. patch 1/4 here).
Vlastimil
>
>> - if (unlikely(ac.nodemask != nodemask)) {
>> -no_zone:
>> + if (unlikely(ac.nodemask != nodemask))
>> ac.nodemask = nodemask;
>> - ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
>> - ac.high_zoneidx, ac.nodemask);
>> - /* If we have NULL preferred zone, slowpath wll handle that */
>> - }
>>
>> page = __alloc_pages_slowpath(alloc_mask, order, &ac);
>>
>> --
>> 2.11.0
>