Message-Id: <6A58E80B-7A5F-4CAD-ACF1-89BCCBE4D3B1@lca.pw>
Date: Thu, 3 Oct 2019 08:14:40 -0400
From: Qian Cai <cai@....pw>
To: Anshuman Khandual <Anshuman.Khandual@....com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Oscar Salvador <osalvador@...e.de>,
Mel Gorman <mgorman@...hsingularity.net>,
Mike Rapoport <rppt@...ux.ibm.com>,
Dan Williams <dan.j.williams@...el.com>,
Pavel Tatashin <pavel.tatashin@...rosoft.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/page_alloc: Add a reason for reserved pages in has_unmovable_pages()
> On Oct 3, 2019, at 8:01 AM, Anshuman Khandual <Anshuman.Khandual@....com> wrote:
>
> Will something like this be better?
Not really. dump_page() will dump the PageCompound information anyway, so it is trivial to figure out if it went down that path.
> hugepage_migration_supported() has got
> uncertainty depending on platform and huge page size.
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 15c2050c629b..8dbc86696515 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8175,7 +8175,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> unsigned long found;
> unsigned long iter = 0;
> unsigned long pfn = page_to_pfn(page);
> - const char *reason = "unmovable page";
> + const char *reason;
>
> /*
> * TODO we could make this much more efficient by not checking every
> @@ -8194,7 +8194,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> if (is_migrate_cma(migratetype))
> return false;
>
> - reason = "CMA page";
> + reason = "Unmovable CMA page";
> goto unmovable;
> }
>
> @@ -8206,8 +8206,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>
> page = pfn_to_page(check);
>
> - if (PageReserved(page))
> + if (PageReserved(page)) {
> + reason = "Unmovable reserved page";
> goto unmovable;
> + }
>
> /*
> * If the zone is movable and we have ruled out all reserved
> @@ -8226,8 +8228,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> struct page *head = compound_head(page);
> unsigned int skip_pages;
>
> - if (!hugepage_migration_supported(page_hstate(head)))
> + if (!hugepage_migration_supported(page_hstate(head))) {
> + reason = "Unmovable HugeTLB page";
> goto unmovable;
> + }
>
> skip_pages = compound_nr(head) - (page - head);
> iter += skip_pages - 1;
> @@ -8271,8 +8275,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> * is set to both of a memory hole page and a _used_ kernel
> * page at boot.
> */
> - if (found > count)
> + if (found > count) {
> + reason = "Unmovable non-LRU page";
> goto unmovable;
> + }
> }
> return false;
> unmovable: