Message-ID: <35FD53F367049845BC99AC72306C23D1044A02027E06@CNBJMBX05.corpusers.net>
Date:	Fri, 30 Jan 2015 15:47:54 +0800
From:	"Wang, Yalin" <Yalin.Wang@...ymobile.com>
To:	"'akpm@...ux-foundation.org'" <akpm@...ux-foundation.org>,
	"'kirill.shutemov@...ux.intel.com'" <kirill.shutemov@...ux.intel.com>,
	"'oleg@...hat.com'" <oleg@...hat.com>,
	"'gorcunov@...nvz.org'" <gorcunov@...nvz.org>,
	"'n-horiguchi@...jp.nec.com'" <n-horiguchi@...jp.nec.com>,
	"'pfeiner@...gle.com'" <pfeiner@...gle.com>,
	"'aquini@...hat.com'" <aquini@...hat.com>,
	"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>
Subject: [RFC V2] mm: change smaps/pagemap_read calculation behavior

This patch changes the smaps/pagemap_read page table walk behavior so
that VM_PFNMAP page tables are no longer skipped, allowing COW pages in
VM_PFNMAP mappings to be accounted like normal pages.

Signed-off-by: Yalin Wang <yalin.wang@...ymobile.com>
---
 fs/proc/task_mmu.c | 2 ++
 include/linux/mm.h | 2 ++
 mm/pagewalk.c      | 5 +++++
 3 files changed, 9 insertions(+)
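
For context, below is a rough sketch (paraphrased, not the exact upstream
code; the function name is illustrative) of how mm/pagewalk.c consults the
optional ->test_walk hook before walking a VMA: returning 0 means "walk
this VMA", a positive value means "skip it", and a negative value aborts
the walk with an error. The default policy reports VM_PFNMAP ranges as
holes and skips them, which is what the always-return-0 helper added by
this patch overrides.

static int walk_page_test_sketch(unsigned long start, unsigned long end,
				 struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;

	/* A caller-supplied hook overrides the default policy. */
	if (walk->test_walk)
		return walk->test_walk(start, end, walk);

	/*
	 * Default policy: VM_PFNMAP ranges have no struct pages behind
	 * them, so they are reported via ->pte_hole (if any) and skipped.
	 */
	if (vma->vm_flags & VM_PFNMAP) {
		int err = 1;

		if (walk->pte_hole)
			err = walk->pte_hole(start, end, walk);
		return err ? err : 1;
	}
	return 0;
}

With a test_walk that unconditionally returns 0, VM_PFNMAP VMAs are walked
like any other; the smaps callback still relies on vm_normal_page(), which
returns NULL for raw PFN entries, so only COW'ed pages backed by struct
pages contribute to the counters.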

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index c7267e9..e7d7c43 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -616,6 +616,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
 	struct mem_size_stats mss;
 	struct mm_walk smaps_walk = {
 		.pmd_entry = smaps_pte_range,
+		.test_walk = generic_walk_page_test_no_skip,
 		.mm = vma->vm_mm,
 		.private = &mss,
 	};
@@ -1264,6 +1265,7 @@ static ssize_t pagemap_read(struct file *file, char __user *buf,
 
 	pagemap_walk.pmd_entry = pagemap_pte_range;
 	pagemap_walk.pte_hole = pagemap_pte_hole;
+	pagemap_walk.test_walk = generic_walk_page_test_no_skip;
 #ifdef CONFIG_HUGETLB_PAGE
 	pagemap_walk.hugetlb_entry = pagemap_hugetlb_range;
 #endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b976d9f..07f71c5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1191,6 +1191,8 @@ struct mm_walk {
 	void *private;
 };
 
+int generic_walk_page_test_no_skip(unsigned long start, unsigned long end,
+		struct mm_walk *walk);
 int walk_page_range(unsigned long addr, unsigned long end,
 		struct mm_walk *walk);
 int walk_page_vma(struct vm_area_struct *vma, struct mm_walk *walk);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 75c1f28..14f38d5 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -206,6 +206,11 @@ static int __walk_page_range(unsigned long start, unsigned long end,
 	return err;
 }
 
+int generic_walk_page_test_no_skip(unsigned long start, unsigned long end,
+			struct mm_walk *walk)
+{
+	return 0;
+}
 /**
  * walk_page_range - walk page table with caller specific callbacks
  *
-- 
2.2.2
