Message-ID: <20170126094303.GE6590@dhcp22.suse.cz>
Date: Thu, 26 Jan 2017 10:43:03 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Yisheng Xie <xieyisheng1@...wei.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, vbabka@...e.cz,
mgorman@...hsingularity.net, hannes@...xchg.org,
iamjoonsoo.kim@....com, izumi.taku@...fujitsu.com,
arbab@...ux.vnet.ibm.com, vkuznets@...hat.com, ak@...ux.intel.com,
n-horiguchi@...jp.nec.com, minchan@...nel.org, qiuxishi@...wei.com,
guohanjun@...wei.com
Subject: Re: [RFC v2 PATCH] mm/hotplug: enable memory hotplug for non-lru
movable pages
On Wed 25-01-17 14:59:45, Yisheng Xie wrote:
> We had considered all non-lru pages unmovable before commit
> bda807d44454 ("mm: migrate: support non-lru movable page
> migration"). But now some non-lru pages, such as zsmalloc and
> virtio-balloon pages, have become movable as well, so we can
> offline such memory blocks by using non-lru page migration.
>
> This patch straightforwardly adds non-lru migration support by
> extending the functions that scan over pfns, collect the pages
> to be migrated, and isolate them before migration.
>
> Signed-off-by: Yisheng Xie <xieyisheng1@...wei.com>
> ---
> v2
> Make a minor change to the lock_page logic in scan_movable_pages().
>
> mm/memory_hotplug.c | 36 +++++++++++++++++++++++++-----------
> mm/page_alloc.c | 8 ++++++--
> 2 files changed, 31 insertions(+), 13 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index e43142c1..5559175 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1510,10 +1510,10 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
> }
>
> /*
> - * Scan pfn range [start,end) to find movable/migratable pages (LRU pages
> - * and hugepages). We scan pfn because it's much easier than scanning over
> - * linked list. This function returns the pfn of the first found movable
> - * page if it's found, otherwise 0.
> + * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
> + * non-lru movable pages and hugepages). We scan pfn because it's much
> + * easier than scanning over linked list. This function returns the pfn
> + * of the first found movable page if it's found, otherwise 0.
> */
> static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> {
> @@ -1531,6 +1531,16 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> pfn = round_up(pfn + 1,
> 1 << compound_order(page)) - 1;
> }
> + /*
> + * check __PageMovable in lock_page to avoid miss some
> + * non-lru movable pages at race condition.
> + */
> + lock_page(page);
> + if (__PageMovable(page)) {
> + unlock_page(page);
> + return pfn;
> + }
> + unlock_page(page);
This doesn't make any sense to me. __PageMovable can change right after
you drop the lock, so why does the race matter? If we cannot tolerate
races then the above doesn't work, and if we can then taking the lock
is pointless.
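
A lockless check should be enough here: the result can go stale the
moment it is computed either way, and isolate_movable_page()
re-validates the page under the page lock before it actually isolates
anything, so a racy hint is all scan_movable_pages() needs to provide.
Something along these lines (an untested sketch):

	/*
	 * A racy check is fine here: this function only provides a
	 * hint, and isolate_movable_page() re-checks the page under
	 * the page lock before isolating it.
	 */
	if (__PageMovable(page))
		return pfn;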
> }
> }
> return 0;
> @@ -1600,21 +1610,25 @@ static struct page *new_node_page(struct page *page, unsigned long private,
> if (!get_page_unless_zero(page))
> continue;
> /*
> - * We can skip free pages. And we can only deal with pages on
> - * LRU.
> + * We can skip free pages. And we can deal with pages on
> + * LRU and non-lru movable pages.
> */
> - ret = isolate_lru_page(page);
> + if (PageLRU(page))
> + ret = isolate_lru_page(page);
> + else
> + ret = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
We really want to propagate the proper error code to the caller here
rather than collapsing the result to 0/1.
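
For example, if isolate_movable_page() were changed to return 0 on
success and -EBUSY on failure, mirroring isolate_lru_page(), the result
could be passed through as-is (a sketch, assuming that signature
change):

	if (PageLRU(page))
		ret = isolate_lru_page(page);
	else
		/* returns 0 / -EBUSY with the assumed signature */
		ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);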
--
Michal Hocko
SUSE Labs