Message-Id: <20091228115315.76b1ecd0.minchan.kim@barrios-desktop>
Date:	Mon, 28 Dec 2009 11:53:15 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	lkml <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
	Hugh Dickins <hugh.dickins@...cali.co.uk>,
	Rik van Riel <riel@...hat.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Subject: [PATCH -mmotm-2009-12-10-17-19] Prevent churning of zero page in
 LRU list.


The VM never adds the zero page to the LRU list, so churning the zero
page through the LRU is pointless.

In fact, the zero page can't be promoted by mark_page_accessed() since
it doesn't have PG_lru set.
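
For reference, a simplified sketch of the mark_page_accessed() logic
(an approximation of mm/swap.c around this kernel version, not a
verbatim copy): promotion to the active list only happens for pages
that already have PG_lru set, which the zero page never does.

void mark_page_accessed(struct page *page)
{
	if (!PageActive(page) && !PageUnevictable(page) &&
			PageReferenced(page) && PageLRU(page)) {
		/* only pages already on the LRU can be activated */
		activate_page(page);
		ClearPageReferenced(page);
	} else if (!PageReferenced(page)) {
		/*
		 * A non-LRU page such as the zero page only gets
		 * PG_referenced set here, which buys nothing.
		 */
		SetPageReferenced(page);
	}
}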

This patch prevents the unnecessary mark_page_accessed() call on the
zero page even when the caller passes FOLL_TOUCH.

Signed-off-by: Minchan Kim <minchan.kim@...il.com>
---
 mm/memory.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 09e4b1b..485f727 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1152,6 +1152,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	spinlock_t *ptl;
 	struct page *page;
 	struct mm_struct *mm = vma->vm_mm;
+	int zero_pfn = 0;
 
 	page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
 	if (!IS_ERR(page)) {
@@ -1196,15 +1197,15 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 
 	page = vm_normal_page(vma, address, pte);
 	if (unlikely(!page)) {
-		if ((flags & FOLL_DUMP) ||
-		    !is_zero_pfn(pte_pfn(pte)))
+		zero_pfn = is_zero_pfn(pte_pfn(pte));
+		if ((flags & FOLL_DUMP) || !zero_pfn)
 			goto bad_page;
 		page = pte_page(pte);
 	}
 
 	if (flags & FOLL_GET)
 		get_page(page);
-	if (flags & FOLL_TOUCH) {
+	if ((flags & FOLL_TOUCH) && !zero_pfn) {
 		if ((flags & FOLL_WRITE) &&
 		    !pte_dirty(pte) && !PageDirty(page))
 			set_page_dirty(page);
-- 
1.5.6.3


-- 
Kind regards,
Minchan Kim
