Message-ID: <20201019065912.GA27114@dhcp22.suse.cz>
Date: Mon, 19 Oct 2020 08:59:12 +0200
From: Michal Hocko <mhocko@...e.com>
To: Shijie Luo <luoshijie1@...wei.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, osalvador@...e.de,
linmiaohe@...wei.com, linfeilong@...wei.com
Subject: Re: [PATCH V2] mm: fix potential pte_unmap_unlock pte error
On Fri 16-10-20 22:11:51, Shijie Luo wrote:
> When flags don't have MPOL_MF_MOVE or MPOL_MF_MOVE_ALL bits, the code
> breaks out of the loop early and passing the original pte - 1 to
> pte_unmap_unlock does not seem like a good idea.
This would really benefit from some improvements. It is preferable to
describe the user visible effect of the patch. I would propose the
following; feel free to reuse whichever parts you find fit.
"
queue_pages_pte_range can run in MPOL_MF_STRICT mode (i.e. without the
MPOL_MF_MOVE or MPOL_MF_MOVE_ALL bits set) in which it doesn't migrate
misplaced pages but returns with EIO when encountering such a page.
Since a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
MPOL_MF_STRICT is specified"), an early break on the first pte in the
range results in pte_unmap_unlock on an underflow pte. This can lead to
lockups later on when somebody tries to take the pte lock (or the
page_table_lock, respectively) again.

Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
"
> Signed-off-by: Shijie Luo <luoshijie1@...wei.com>
> Signed-off-by: Michal Hocko <mhocko@...e.com>
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
No need to add my s-o-b.
> ---
> mm/mempolicy.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 3fde772ef5ef..3ca4898f3f24 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> unsigned long flags = qp->flags;
> int ret;
> bool has_unmovable = false;
> - pte_t *pte;
> + pte_t *pte, *mapped_pte;
> spinlock_t *ptl;
>
> ptl = pmd_trans_huge_lock(pmd, vma);
> @@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> if (pmd_trans_unstable(pmd))
> return 0;
>
> - pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> + mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> for (; addr != end; pte++, addr += PAGE_SIZE) {
> if (!pte_present(*pte))
> continue;
> @@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> } else
> break;
> }
> - pte_unmap_unlock(pte - 1, ptl);
> + pte_unmap_unlock(mapped_pte, ptl);
> cond_resched();
>
> if (has_unmovable)
> --
> 2.19.1
>
--
Michal Hocko
SUSE Labs