Message-ID: <6783e871-e981-c845-16c3-c5ff3e6502ed@oracle.com>
Date: Wed, 10 Feb 2021 16:56:00 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: David Hildenbrand <david@...hat.com>,
Muchun Song <songmuchun@...edance.com>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/2] mm,page_alloc: Make alloc_contig_range handle
in-use hugetlb pages
On 2/8/21 2:38 AM, Oscar Salvador wrote:
> alloc_contig_range is not prepared to handle hugetlb pages and will
> fail if it ever sees one, but since they can be migrated like any
> other page (LRU and Movable), it makes sense to also handle them.
>
> For now, do it only when coming from alloc_contig_range.
>
> Signed-off-by: Oscar Salvador <osalvador@...e.de>
> ---
>  mm/compaction.c | 17 +++++++++++++++++
>  mm/vmscan.c     |  5 +++--
>  2 files changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index e5acb9714436..89cd2e60da29 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -940,6 +940,22 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  			goto isolate_fail;
>  		}
>
> +		/*
> +		 * Handle hugetlb pages only when coming from alloc_contig
> +		 */
> +		if (PageHuge(page) && cc->alloc_contig) {
> +			if (page_count(page)) {
Thanks for doing this!

I agree with everything in the discussion you and David had. This code
is racy, but since we are scanning locklessly there is no way to
eliminate the races entirely. Best to just minimize the windows and
document them.
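
(Aside for readers: the reason the lockless page_count() check above is
tolerable is that isolate_huge_page() revalidates everything under
hugetlb_lock before touching the page. A rough, from-memory sketch of
that check-then-recheck pattern, not verbatim kernel source:

	bool isolate_huge_page(struct page *page, struct list_head *list)
	{
		bool ret = true;

		spin_lock(&hugetlb_lock);
		/* Recheck under the lock; the lockless scan may have raced. */
		if (!PageHeadHuge(page) || !page_huge_active(page) ||
		    !get_page_unless_zero(page)) {
			ret = false;
			goto unlock;
		}
		/* Pull it off the hugetlb active list for the caller. */
		clear_page_huge_active(page);
		list_move_tail(&page->lru, list);
	unlock:
		spin_unlock(&hugetlb_lock);
		return ret;
	}

So a page that is freed or dissolved between the scanner's check and
this lock acquisition is simply reported as a failed isolation and
skipped.)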
> +				/*
> +				 * Hugetlb page in-use. Isolate and migrate.
> +				 */
> +				if (isolate_huge_page(page, &cc->migratepages)) {
> +					low_pfn += compound_nr(page) - 1;
> +					goto isolate_success_no_list;
> +				}
> +			}
> +			goto isolate_fail;
> +		}
> +
>  		/*
>  		 * Check may be lockless but that's ok as we recheck later.
>  		 * It's possible to migrate LRU and non-lru movable pages.
> @@ -1041,6 +1057,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>
>  isolate_success:
>  		list_add(&page->lru, &cc->migratepages);
> +isolate_success_no_list:
>  		cc->nr_migratepages += compound_nr(page);
>  		nr_isolated += compound_nr(page);
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b1b574ad199d..0803adca4469 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1506,8 +1506,9 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>  	LIST_HEAD(clean_pages);
> 
>  	list_for_each_entry_safe(page, next, page_list, lru) {
> -		if (page_is_file_lru(page) && !PageDirty(page) &&
> -		    !__PageMovable(page) && !PageUnevictable(page)) {
> +		if (!PageHuge(page) && page_is_file_lru(page) &&
> +		    !PageDirty(page) && !__PageMovable(page) &&
> +		    !PageUnevictable(page)) {
>  			ClearPageActive(page);
>  			list_move(&page->lru, &clean_pages);
>  		}
>
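
(For context: alloc_contig_range() is the consumer of this path; it is
what CMA and gigantic page allocation go through. A minimal sketch of a
caller follows; alloc_contig_range(), free_contig_range() and
MIGRATE_MOVABLE are the real kernel API, while the surrounding harness
and the pfn variables are hypothetical:

	/*
	 * Hypothetical harness; only the two kernel calls are real API.
	 * Try to grab [start_pfn, end_pfn) as physically contiguous memory.
	 */
	ret = alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
				 GFP_KERNEL);
	if (ret)
		return ret;	/* range could not be emptied */
	/* ... use the pages, then hand them back ... */
	free_contig_range(start_pfn, end_pfn - start_pfn);

With the compaction hunk above, an in-use hugetlb page inside the range
is isolated and migrated rather than failing the whole request.)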
--
Mike Kravetz