Message-ID: <CAHbLzkoZ_vbw-VcU6x8T=mUBREFPkZg3WHA4cuk9ff8o3i+95Q@mail.gmail.com>
Date: Wed, 14 Apr 2021 10:23:25 -0700
From: Yang Shi <shy828301@...il.com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Zi Yan <ziy@...dia.com>, Michal Hocko <mhocko@...e.com>,
Hugh Dickins <hughd@...gle.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
hca@...ux.ibm.com, gor@...ux.ibm.com, borntraeger@...ibm.com,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>, linux-s390@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [v2 PATCH 6/7] mm: migrate: check mapcount for THP instead of ref count
On Tue, Apr 13, 2021 at 8:00 PM Huang, Ying <ying.huang@...el.com> wrote:
>
> Yang Shi <shy828301@...il.com> writes:
>
> > The generic migration path checks the refcount, so there is no need to
> > check it here. But the old code actually prevented migrating shared THP
> > (mapped by multiple processes), so bail out early if mapcount is > 1 to
> > keep that behavior.
>
> What prevents us from migrating shared THP? If no, why not just remove
> the old refcount checking?
We could migrate shared THP if we don't care about the bouncing back
and forth between nodes that Zi Yan described. The other reason is
that, as I mentioned in the cover letter, I'd like to keep the
behavior as consistent as possible between before and after this
series for now. The old behavior does prevent migrating shared THP, so
I did the same in this series. We could definitely optimize the
behavior later on.
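
To make the behavioral equivalence concrete, here is a toy userspace
model of the two checks -- not kernel code; the struct and function
names are invented for illustration. The old check ran after
isolate_lru_page() and expected page_count() == 3 (one reference for
the single mapping, one for the caller's pin, one taken by
isolate_lru_page()), so any extra mapping or pin failed it. The new
check tests the mapcount directly before isolation, so a shared THP
still bails out early, while other extra references (e.g. a GUP pin)
are left for the generic migration path's refcount check:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for struct page refcounting -- illustration only. */
struct toy_page {
	int mapcount;   /* number of processes mapping the page */
	int extra_pins; /* e.g. GUP pins */
};

/* Refcount as seen at the old check site: mappings + extra pins
 * + 1 for the caller's pin + 1 taken by isolate_lru_page(). */
static int toy_page_count(const struct toy_page *p)
{
	return p->mapcount + p->extra_pins + 1 + 1;
}

/* Old behavior: reject unless the count is exactly 3. */
static bool old_check_rejects(const struct toy_page *p)
{
	return toy_page_count(p) != 3;
}

/* New behavior: reject only THP mapped by more than one process;
 * pins are caught later by the generic migration path. */
static bool new_check_rejects(const struct toy_page *p)
{
	return p->mapcount > 1;
}
```

Both checks reject a THP with mapcount > 1; they only diverge for a
singly-mapped page with an extra pin, which the new code defers to the
generic path instead of rejecting here.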
>
> Best Regards,
> Huang, Ying
>
> > Signed-off-by: Yang Shi <shy828301@...il.com>
> > ---
> > mm/migrate.c | 16 ++++------------
> > 1 file changed, 4 insertions(+), 12 deletions(-)
> >
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index a72994c68ec6..dc7cc7f3a124 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -2067,6 +2067,10 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> >
> > VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> >
> > + /* Do not migrate THP mapped by multiple processes */
> > + if (PageTransHuge(page) && page_mapcount(page) > 1)
> > + return 0;
> > +
> > /* Avoid migrating to a node that is nearly full */
> > if (!migrate_balanced_pgdat(pgdat, compound_nr(page)))
> > return 0;
> > @@ -2074,18 +2078,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> > if (isolate_lru_page(page))
> > return 0;
> >
> > - /*
> > - * migrate_misplaced_transhuge_page() skips page migration's usual
> > - * check on page_count(), so we must do it here, now that the page
> > - * has been isolated: a GUP pin, or any other pin, prevents migration.
> > - * The expected page count is 3: 1 for page's mapcount and 1 for the
> > - * caller's pin and 1 for the reference taken by isolate_lru_page().
> > - */
> > - if (PageTransHuge(page) && page_count(page) != 3) {
> > - putback_lru_page(page);
> > - return 0;
> > - }
> > -
> > page_lru = page_is_file_lru(page);
> > mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_lru,
> > thp_nr_pages(page));