Message-ID: <YL265A86DQe5Rgon@dhcp22.suse.cz>
Date: Mon, 7 Jun 2021 08:21:24 +0200
From: Michal Hocko <mhocko@...e.com>
To: Yang Shi <shy828301@...il.com>
Cc: ziy@...dia.com, nao.horiguchi@...il.com,
kirill.shutemov@...ux.intel.com, hughd@...gle.com,
akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: mempolicy: don't have to split pmd for huge zero page
On Fri 04-06-21 13:35:13, Yang Shi wrote:
> When trying to migrate pages to obey mempolicy, the huge zero page is
> split, then the page table walk at PTE level just skips the zero page. So
> it seems pointless to split the huge zero page; it could just be skipped
> like the base zero page.
My THP knowledge is not the best but this is incorrect AFAICS. The huge
zero page is not split. We do split the pmd which maps said page. I
suspect you are referring to vm_normal_page when talking about a zero
page, but please be aware that the huge zero page is not a normal zero
page. It is allocated dynamically (see get_huge_zero_page).
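To illustrate (a simplified sketch from memory, not the exact upstream
code): the PTE level walk in queue_pages_pte_range() only skips the zero
page because vm_normal_page() returns NULL for that special mapping:

	/* simplified sketch of the pte level walk; details may differ */
	for (; addr != end; pte++, addr += PAGE_SIZE) {
		if (!pte_present(*pte))
			continue;
		page = vm_normal_page(vma, addr, *pte);
		if (!page)
			continue;	/* the base zero page ends up here */
		/* ... queue_pages_required() etc. ... */
	}

The huge zero page is a real page allocated by the kernel, so it is not
filtered out that way; it is only recognized by the explicit
is_huge_zero_page() check in queue_pages_pmd().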
So in the end your patch disables mbind of zero pages to a target node,
and that is a regression.
Have you tested the patch?
> Set ACTION_CONTINUE to prevent walk_page_range() from splitting the pmd
> for this case.
Btw. this changelog is missing a problem statement. I suspect there is
no actual problem to fix and the patch is likely driven by reading the
code. Right?
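For completeness, the ACTION_CONTINUE handling on the pagewalk side looks
roughly like this (paraphrased from walk_pmd_range() in mm/pagewalk.c, so
take the details with a grain of salt):

	/* paraphrased, not verbatim */
	if (ops->pmd_entry) {
		err = ops->pmd_entry(pmd, addr, next, walk);
		if (err)
			break;
		/* pmd_entry asked us to skip the pte level for this pmd */
		if (walk->action == ACTION_CONTINUE || !ops->pte_entry)
			continue;
	}
	split_huge_pmd(walk->vma, pmd, addr);

So with ACTION_CONTINUE the walker neither splits the pmd nor descends to
the pte level for that range.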
> Signed-off-by: Yang Shi <shy828301@...il.com>
> ---
> mm/mempolicy.c | 9 +++++----
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index b5f4f584009b..205c1a768775 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -436,7 +436,8 @@ static inline bool queue_pages_required(struct page *page,
>
> /*
> * queue_pages_pmd() has four possible return values:
> - * 0 - pages are placed on the right node or queued successfully.
> + * 0 - pages are placed on the right node or queued successfully, or
> + * special page is met, i.e. huge zero page.
> * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
> * specified.
> * 2 - THP was split.
> @@ -460,8 +461,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
> page = pmd_page(*pmd);
> if (is_huge_zero_page(page)) {
> spin_unlock(ptl);
> - __split_huge_pmd(walk->vma, pmd, addr, false, NULL);
> - ret = 2;
> + walk->action = ACTION_CONTINUE;
> goto out;
> }
> if (!queue_pages_required(page, qp))
> @@ -488,7 +488,8 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
> * and move them to the pagelist if they do.
> *
> * queue_pages_pte_range() has three possible return values:
> - * 0 - pages are placed on the right node or queued successfully.
> + * 0 - pages are placed on the right node or queued successfully, or
> + * special page is met, i.e. zero page.
> * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
> * specified.
> * -EIO - only MPOL_MF_STRICT was specified and an existing page was already
> --
> 2.26.2
--
Michal Hocko
SUSE Labs