Message-ID: <20201016134215.GL22589@dhcp22.suse.cz>
Date: Fri, 16 Oct 2020 15:42:15 +0200
From: Michal Hocko <mhocko@...e.com>
To: osalvador@...e.de
Cc: Shijie Luo <luoshijie1@...wei.com>, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linmiaohe@...wei.com, linfeilong@...wei.com
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error
On Fri 16-10-20 15:15:32, Michal Hocko wrote:
> On Fri 16-10-20 15:11:17, Michal Hocko wrote:
> > On Fri 16-10-20 14:37:08, osalvador@...e.de wrote:
> > > On 2020-10-16 14:31, Michal Hocko wrote:
> > > > I do not like the fix though. The code is really confusing. Why should
> > > > we check the flags in each iteration of the loop when they cannot change?
> > > > Also why should we take the ptl lock in the first place when the loop is
> > > > broken out of immediately?
> > >
> > > About checking the flags:
> > >
> > > https://lore.kernel.org/linux-mm/20190320081643.3c4m5tec5vx653sn@d104.suse.de/#t
> >
> > This didn't really help. Maybe the code was different back then but
> > right now it doesn't make much sense TBH. The only case where checking
> > inside the loop would matter is a completely unpopulated address range.
> > Note that MPOL_MF_STRICT is not checked explicitly and I do not see how
> > it makes any difference.
>
> Ohh, I have missed queue_pages_required. Let me think some more.
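For reference, the loop in question looks roughly like this, with
queue_pages_required() doing the per-page node test. This is a heavily
trimmed paraphrase of the mm/mempolicy.c code of that time, not a
verbatim copy:

static inline bool queue_pages_required(struct page *page,
					struct queue_pages *qp)
{
	/* per-page node check, the only consumer of MPOL_MF_INVERT */
	return node_isset(page_to_nid(page), *qp->nmask) ==
		!(qp->flags & MPOL_MF_INVERT);
}

	/* ... inside queue_pages_pte_range() ... */
	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	for (; addr != end; pte++, addr += PAGE_SIZE) {
		if (!pte_present(*pte))
			continue;
		page = vm_normal_page(vma, addr, *pte);
		if (!page || !queue_pages_required(page, qp))
			continue;	/* absent or on an allowed node */
		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
			/* queue the misplaced page for migration */
		} else
			break;		/* MPOL_MF_STRICT only: report EIO */
	}
	pte_unmap_unlock(pte - 1, ptl);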
OK, I finally managed to convince my Friday brain to think and grasped
what the code is intended to do. The loop is hairy and we want to
prevent a spurious EIO when all the pages are on a proper node. So
the check has to be done inside the loop. Anyway, I would find the
following fix less error prone and easier to follow:
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eddbe4e56c73..8cc1fc9c4d13 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
unsigned long flags = qp->flags;
int ret;
bool has_unmovable = false;
- pte_t *pte;
+ pte_t *pte, *mapped_pte;
spinlock_t *ptl;
ptl = pmd_trans_huge_lock(pmd, vma);
@@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
if (pmd_trans_unstable(pmd))
return 0;
- pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+ mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
for (; addr != end; pte++, addr += PAGE_SIZE) {
if (!pte_present(*pte))
continue;
@@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
} else
break;
}
- pte_unmap_unlock(pte - 1, ptl);
+ pte_unmap_unlock(mapped_pte, ptl);
cond_resched();
if (has_unmovable)
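For completeness, the problem with the original pte_unmap_unlock(pte - 1, ptl)
in a nutshell. A minimal sketch of the failure mode, not code to be applied,
and assuming the loop can break before the first pte++ (e.g. the very first
present page is misplaced and only MPOL_MF_STRICT was requested):

	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	for (; addr != end; pte++, addr += PAGE_SIZE) {
		/* ... */
		break;	/* possible on the very first iteration */
	}
	/*
	 * If we broke out before the first pte++ then pte == mapped_pte,
	 * so pte - 1 points one slot before the entry that was actually
	 * mapped - potentially outside the mapped page table page, which
	 * matters at least with CONFIG_HIGHPTE where pte_unmap() expects
	 * the address returned by pte_offset_map_lock(). Unmapping what
	 * was mapped is always correct:
	 */
	pte_unmap_unlock(mapped_pte, ptl);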
--
Michal Hocko
SUSE Labs