Message-Id: <1379427739-31451-1-git-send-email-vbabka@suse.cz>
Date:	Tue, 17 Sep 2013 16:22:19 +0200
From:	Vlastimil Babka <vbabka@...e.cz>
To:	Fengguang Wu <fengguang.wu@...el.com>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Vlastimil Babka <vbabka@...e.cz>,
	Jörn Engel <joern@...fs.org>,
	Mel Gorman <mgorman@...e.de>,
	Michel Lespinasse <walken@...gle.com>,
	Hugh Dickins <hughd@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>
Subject: [RFC PATCH RESEND] mm: munlock: Prevent walking off the end of a pagetable in no-pmd configuration

The function __munlock_pagevec_fill() introduced in commit 7a8010cd3
("mm: munlock: manual pte walk in fast path instead of follow_page_mask()")
uses pmd_addr_end() for restricting its operation within the current page
table. This is insufficient on architectures/configurations where the pmd
is folded and pmd_addr_end() simply returns the end of the full range to be
walked. In that case it allows pte++ to walk off the end of a page table,
resulting in unpredictable behaviour.

This patch fixes the function by applying pgd_addr_end() and pud_addr_end()
before pmd_addr_end(), which yields the correct page table boundary on all
configurations. This mirrors what existing page walkers do when walking
each level of the page table.
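To illustrate (not part of the patch): a minimal userspace sketch of why
chaining the three helpers clamps correctly even when lower levels are
folded. The layout constants below are made-up demo values, not any real
architecture's, and the pgd_addr_end() mock just follows the asm-generic
pattern:

#include <stdio.h>

/* Illustrative 2-level layout: each pgd entry maps 4MB, pmd/pud folded.
 * Demo values only. */
#define PAGE_SIZE	4096UL
#define PGDIR_SIZE	(1UL << 22)
#define PGDIR_MASK	(~(PGDIR_SIZE - 1))

/* Non-folded level: clamp to the next boundary, as asm-generic does. */
#define pgd_addr_end(addr, end) ({					\
	unsigned long __b = ((addr) + PGDIR_SIZE) & PGDIR_MASK;	\
	(__b - 1 < (end) - 1 ? __b : (end)); })

/* Folded levels: no-ops, as in pgtable-nopud.h / pgtable-nopmd.h. */
#define pud_addr_end(addr, end)	(end)
#define pmd_addr_end(addr, end)	(end)

int main(void)
{
	unsigned long start = PGDIR_SIZE - 2 * PAGE_SIZE; /* near a boundary */
	unsigned long end = start + 16 * PAGE_SIZE;       /* crosses it */

	/* Old code: pmd_addr_end() alone is a no-op here, so the walk
	 * would run 'end' past the pte table covering 'start'. */
	printf("pmd only: end = %#lx\n", pmd_addr_end(start, end));

	/* Fixed code: chain all three; the non-folded level clamps. */
	unsigned long e = pgd_addr_end(start, end);
	e = pud_addr_end(start, e);
	e = pmd_addr_end(start, e);
	printf("chained:  end = %#lx (pte table ends at %#lx)\n",
	       e, PGDIR_SIZE);
	return 0;
}

With the demo constants, "pmd only" prints an end past the 4MB boundary
while "chained" clamps to it, which is exactly the off-the-end pte walk
the patch prevents.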

Additionally, the patch clarifies a comment for the get_locked_pte() call
in the function.

Reported-by: Fengguang Wu <fengguang.wu@...el.com>
Cc: Jörn Engel <joern@...fs.org>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Michel Lespinasse <walken@...gle.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Rik van Riel <riel@...hat.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...e.cz>
Cc: Vlastimil Babka <vbabka@...e.cz>
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
 mm/mlock.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index d638026..758c0fc 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -379,10 +379,14 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
 
 	/*
 	 * Initialize pte walk starting at the already pinned page where we
-	 * are sure that there is a pte.
+	 * are sure that there is a pte, as it was pinned under the same
+	 * mmap_sem write op.
 	 */
 	pte = get_locked_pte(vma->vm_mm, start,	&ptl);
-	end = min(end, pmd_addr_end(start, end));
+	/* Make sure we do not cross the page table boundary */
+	end = pgd_addr_end(start, end);
+	end = pud_addr_end(start, end);
+	end = pmd_addr_end(start, end);
 
 	/* The page next to the pinned page is the first we will try to get */
 	start += PAGE_SIZE;
-- 
1.8.1.4
