Message-ID: <20190227215109.cpiaheyqs2qdbl7p@d104.suse.de>
Date: Wed, 27 Feb 2019 22:51:09 +0100
From: Oscar Salvador <osalvador@...e.de>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, mhocko@...e.com, david@...hat.com,
mike.kravetz@...cle.com
Subject: Re: [RFC PATCH] mm,memory_hotplug: Unlock 1GB-hugetlb on x86_64
On Thu, Feb 21, 2019 at 10:42:12AM +0100, Oscar Salvador wrote:
> [1] https://lore.kernel.org/patchwork/patch/998796/
>
> Signed-off-by: Oscar Salvador <osalvador@...e.de>
Any further comments on this?
I do have a "concern" I would like to sort out before dropping the RFC:
It is the fact that unless we have spare gigantic pages on other nodes, the
offlining operation will loop forever (until the customer cancels the operation).
While I do not really like that, I do think that memory offlining should be done
with some sanity: the administrator should know in advance whether the system is
going to be able to keep up with the memory pressure, i.e., make sure we have what
we need for the offlining operation to succeed.
That translates to making sure that we have spare gigantic pages and that other
nodes can take them.
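
Just to illustrate what I mean by checking in advance (a minimal userspace
sketch, not part of the patch; the fixed node range and the 1GB-hugepage sysfs
layout on x86_64 are assumptions):

#include <stdio.h>

int main(void)
{
	int nid;

	/* Probe a fixed range of possible nodes; absent nodes are skipped. */
	for (nid = 0; nid < 64; nid++) {
		char path[128];
		unsigned long free_pages;
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/hugepages/"
			 "hugepages-1048576kB/free_hugepages", nid);
		f = fopen(path, "r");
		if (!f)
			continue;	/* node not present or no 1GB hugepages */
		if (fscanf(f, "%lu", &free_pages) == 1)
			printf("node%d: %lu free 1GB pages\n", nid, free_pages);
		fclose(f);
	}
	return 0;
}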
That being said, another thing I thought about is that we could check whether we
have spare gigantic pages at has_unmovable_pages() time.
Something like checking "h->free_huge_pages - h->resv_huge_pages > 0", and if it
turns out that we do not have spare gigantic pages anywhere, just report the range
as containing unmovable pages.
But I would rather not convolute has_unmovable_pages() with such checks, and
instead "trust" the administrator.
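
For reference, a rough sketch of what such a check could look like (hypothetical
only, not something I am proposing to merge; the exact placement inside the
has_unmovable_pages() loop and the hstate_is_gigantic() guard are my assumptions,
and the global counters ignore which node the spare pages live on):

	/*
	 * Hypothetical sketch: treat a gigantic page as unmovable when
	 * no spare gigantic pages are available anywhere to migrate to.
	 */
	if (PageHuge(page)) {
		struct hstate *h = page_hstate(compound_head(page));

		/* equivalent to !(free_huge_pages - resv_huge_pages > 0) */
		if (hstate_is_gigantic(h) &&
		    h->free_huge_pages <= h->resv_huge_pages)
			return true;	/* no spare gigantic pages */
	}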
> ---
> mm/memory_hotplug.c | 7 +------
> 1 file changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index d5f7afda67db..04f6695b648c 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1337,8 +1337,7 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> if (!PageHuge(page))
> continue;
> head = compound_head(page);
> - if (hugepage_migration_supported(page_hstate(head)) &&
> - page_huge_active(head))
> + if (page_huge_active(head))
> return pfn;
> skip = (1 << compound_order(head)) - (page - head);
> pfn += skip - 1;
> @@ -1378,10 +1377,6 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>
> if (PageHuge(page)) {
> struct page *head = compound_head(page);
> - if (compound_order(head) > PFN_SECTION_SHIFT) {
> - ret = -EBUSY;
> - break;
> - }
> pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
> isolate_huge_page(head, &source);
> continue;
> --
> 2.13.7
>
--
Oscar Salvador
SUSE L3