Message-ID: <CAHbLzkp0Ok0ZKZvKXGaAEVGCQ3gU9zXxAV1eudpPaCZqqpcdpQ@mail.gmail.com>
Date: Wed, 20 Apr 2022 17:44:27 -0700
From: Yang Shi <shy828301@...il.com>
To: Miaohe Lin <linmiaohe@...wei.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/mempolicy: clean up the code logic in queue_pages_pte_range
On Tue, Apr 19, 2022 at 5:22 AM Miaohe Lin <linmiaohe@...wei.com> wrote:
>
> Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
> huge zero page"), THP is never split in queue_pages_pmd, so 2 is never
> returned now. We can remove the now-unnecessary ret != 2 check and
> clean up the relevant comment. Minor improvement in readability.
Nice catch. Yeah, it was missed when I worked on that commit.
Reviewed-by: Yang Shi <shy828301@...il.com>
>
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
> ---
> mm/mempolicy.c | 12 +++---------
> 1 file changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 75a8b247f631..3934476fb708 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -441,12 +441,11 @@ static inline bool queue_pages_required(struct page *page,
> }
>
> /*
> - * queue_pages_pmd() has four possible return values:
> + * queue_pages_pmd() has three possible return values:
> * 0 - pages are placed on the right node or queued successfully, or
> * special page is met, i.e. huge zero page.
> * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
> * specified.
> - * 2 - THP was split.
> * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
> * existing page was already on a node that does not follow the
> * policy.
> @@ -508,18 +507,13 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> struct page *page;
> struct queue_pages *qp = walk->private;
> unsigned long flags = qp->flags;
> - int ret;
> bool has_unmovable = false;
> pte_t *pte, *mapped_pte;
> spinlock_t *ptl;
>
> ptl = pmd_trans_huge_lock(pmd, vma);
> - if (ptl) {
> - ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
> - if (ret != 2)
> - return ret;
> - }
> - /* THP was split, fall through to pte walk */
> + if (ptl)
> + return queue_pages_pmd(pmd, ptl, addr, end, walk);
>
> if (pmd_trans_unstable(pmd))
> return 0;
> --
> 2.23.0
>
>