Message-ID: <ZMpMfQUktateeN1D@casper.infradead.org>
Date: Wed, 2 Aug 2023 13:30:53 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Kefeng Wang <wangkefeng.wang@...wei.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Huang Ying <ying.huang@...el.com>,
David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH 2/4] mm: migrate: convert numamigrate_isolate_page() to
numamigrate_isolate_folio()
On Wed, Aug 02, 2023 at 05:53:44PM +0800, Kefeng Wang wrote:
> -static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> +static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
> {
> - int nr_pages = thp_nr_pages(page);
> - int order = compound_order(page);
> + int nr_pages = folio_nr_pages(folio);
> + int order = folio_order(folio);
>
> - VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
> + VM_BUG_ON_FOLIO(order && !folio_test_pmd_mappable(folio), folio);
I don't know why we have this assertion. I would be inclined to delete
it as part of generalising the migration code to handle arbitrary sizes
of folio, rather than assert that we only support PMD size folios.
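i.e. on top of this patch, simply (untested, just to show the direction I
mean; nr_pages and order above are already correct for any folio size, so
nothing else in this function needs the folio to be PMD sized):

	-	VM_BUG_ON_FOLIO(order && !folio_test_pmd_mappable(folio), folio);
	-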
> /* Do not migrate THP mapped by multiple processes */
> - if (PageTransHuge(page) && total_mapcount(page) > 1)
> + if (folio_test_pmd_mappable(folio) && folio_estimated_sharers(folio) > 1)
> return 0;
I don't know if this is the right logic.  We're willing to move folios
mapped by multiple processes, as long as they're smaller than PMD size,
but once they get to PMD size they're magical and can't be moved?
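If the concern really is about shared mappings rather than about THPs in
particular, I'd have expected the check to apply regardless of size,
something like (untested sketch; folio_estimated_sharers() only samples
the first page, so it's an estimate either way):

	/* Do not migrate folios mapped by multiple processes */
	if (folio_estimated_sharers(folio) > 1)
		return 0;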