Message-ID: <578C93CF.50509@huawei.com>
Date: Mon, 18 Jul 2016 16:31:11 +0800
From: Xishi Qiu <qiuxishi@...wei.com>
To: Vlastimil Babka <vbabka@...e.cz>
CC: Joonsoo Kim <iamjoonsoo.kim@....com>,
David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Naoya Horiguchi" <n-horiguchi@...jp.nec.com>,
Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mem-hotplug: use GFP_HIGHUSER_MOVABLE in alloc_migrate_target()
On 2016/7/18 16:05, Vlastimil Babka wrote:
> On 07/18/2016 10:00 AM, Xishi Qiu wrote:
>> On 2016/7/18 13:51, Joonsoo Kim wrote:
>>
>>> On Fri, Jul 15, 2016 at 10:47:06AM +0800, Xishi Qiu wrote:
>>>> alloc_migrate_target() is called from migrate_pages(), and the page
>>>> is always from user space, so we can add __GFP_HIGHMEM directly.
>>>
>>> No, not all migratable pages are from user space. For example, the
>>> blockdev file cache has __GFP_MOVABLE and is migratable, but it has
>>> neither __GFP_HIGHMEM nor __GFP_USER.
>>>
>>
>> Hi Joonsoo,
>>
>> So the original code "gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;"
>> is not correct?
>
> It's not incorrect. GFP_USER just specifies some reclaim flags, and
> may perhaps restrict allocation through __GFP_HARDWALL, where the
> original page could have been allocated without the restriction. But
> it doesn't put the page in an unexpected address range, as placing a
> non-highmem page into highmem could. __GFP_MOVABLE then just controls
> a heuristic for placement within a zone.
>
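(For reference, alloc_migrate_target() currently looks roughly like
this -- trimmed from mm/page_isolation.c, hugepage handling omitted:)

struct page *alloc_migrate_target(struct page *page, unsigned long private,
				  int **resultp)
{
	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;

	/* ... hugepage case omitted ... */

	/* only a page that already lives in highmem gets __GFP_HIGHMEM */
	if (PageHighMem(page))
		gfp_mask |= __GFP_HIGHMEM;

	return alloc_page(gfp_mask);
}
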
>>> Also, zram's memory isn't allocated with GFP_HIGHUSER_MOVABLE, but
>>> it does have __GFP_MOVABLE.
>>>
>>
>> Can we distinguish pages allocated with just __GFP_MOVABLE from those
>> allocated with GFP_HIGHUSER_MOVABLE when doing memory hotplug?
>
> I don't understand the question here; can you rephrase with more
> detail? Thanks.
>
Hi Joonsoo,
When we do memory offline and the zone is a movable zone, can we use
"alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);" to allocate a new
page? Here nid is the next node.
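
A rough sketch of what I mean (alloc_movable_target() is just a
hypothetical name for illustration, not an actual patch):

static struct page *alloc_movable_target(struct page *page,
					 unsigned long private,
					 int **resultp)
{
	/* destination: the next online node after the source node */
	int nid = next_node_in(page_to_nid(page), node_online_map);

	/*
	 * Assumption: pages offlined from ZONE_MOVABLE should all be
	 * movable user/page-cache pages, so GFP_HIGHUSER_MOVABLE matches
	 * how they were originally allocated.
	 */
	return alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);
}
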
Thanks,
Xishi Qiu
>> Thanks,
>> Xishi Qiu
>>
>>> Thanks.