Message-ID: <20201016123137.GH22589@dhcp22.suse.cz>
Date:   Fri, 16 Oct 2020 14:31:37 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Shijie Luo <luoshijie1@...wei.com>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, linmiaohe@...wei.com,
        linfeilong@...wei.com
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error

On Thu 15-10-20 08:15:34, Shijie Luo wrote:
> When flags have neither the MPOL_MF_MOVE nor the MPOL_MF_MOVE_ALL bit set,
>  the loop breaks, and passing the original pte - 1 to pte_unmap_unlock
>  does not seem like a good idea.

Yes, the code is suspicious to say the least. At least mbind can reach
here with both MPOL_MF_MOVE and MPOL_MF_MOVE_ALL unset, and then the pte
would point outside of the current pmd.

I do not like the fix though. The code is really confusing. Why should
we check the flags on each iteration of the loop when they cannot
change? Also, why take the ptl lock in the first place when the loop is
broken out of immediately?

I have to admit that I do not fully understand a7f40cfe3b7ad so this
should be carefully evaluated.

If anything, something like the below would be a better fix:

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eddbe4e56c73..7877b36a5a6d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -539,6 +539,10 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
+	/* A COMMENT GOES HERE. */
+	if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)))
+		return -EIO;
+
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
@@ -554,28 +558,26 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 		if (!queue_pages_required(page, qp))
 			continue;
-		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-			/* MPOL_MF_STRICT must be specified if we get here */
-			if (!vma_migratable(vma)) {
-				has_unmovable = true;
-				break;
-			}
 
-			/*
-			 * Do not abort immediately since there may be
-			 * temporary off LRU pages in the range.  Still
-			 * need migrate other LRU pages.
-			 */
-			if (migrate_page_add(page, qp->pagelist, flags))
-				has_unmovable = true;
-		} else
+		/* MPOL_MF_STRICT must be specified if we get here */
+		if (!vma_migratable(vma)) {
+			has_unmovable = true;
 			break;
+		}
+
+		/*
+		 * Do not abort immediately since there may be
+		 * temporary off LRU pages in the range.  Still
+		 * need migrate other LRU pages.
+		 */
+		if (migrate_page_add(page, qp->pagelist, flags))
+			has_unmovable = true;
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
 
 	if (has_unmovable)
 		return 1;
 	return addr != end ? -EIO : 0;
 }
-- 
Michal Hocko
SUSE Labs
