Message-ID: <20150813004533.GA24716@hori1.linux.bs1.fc.nec.co.jp>
Date:	Thu, 13 Aug 2015 00:45:33 +0000
From:	Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To:	David Rientjes <rientjes@...gle.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Jörn Engel <joern@...estorage.com>,
	Mike Kravetz <mike.kravetz@...cle.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Naoya Horiguchi <nao.horiguchi@...il.com>
Subject: Re: [PATCH v4 2/2] mm: hugetlb: proc: add HugetlbPages field to
 /proc/PID/status

On Wed, Aug 12, 2015 at 01:30:27PM -0700, David Rientjes wrote:
> On Wed, 12 Aug 2015, Naoya Horiguchi wrote:
> 
> > Currently there's no easy way to get per-process usage of hugetlb pages. This
> > is inconvenient because userspace applications that use hugetlb typically want
> > to control their processes based on how much memory (including hugetlb) they
> > use. So this patch simply provides easy access to this information via
> > /proc/PID/status.
> > 
> > With this patch, for example, /proc/PID/status shows a line like this:
> > 
> >   HugetlbPages:      20480 kB (10x2048kB)
> > 
> > If your system supports and enables multiple hugepage sizes, the line looks
> > like this:
> > 
> >   HugetlbPages:    1069056 kB (1x1048576kB 10x2048kB)
> > 
> > This tells you at a glance how many hugepages of each page size the process
> > is using.
> > 
> > Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
> 
> I'm happy with this and thanks very much for going the extra mile and 
> breaking the usage down by hstate size.

Great to hear that.

> I'd be interested in the comments of others, though, to see if there is 
> any reservation about the hstate size breakdown.  It may actually find no 
> current customer who is interested in parsing it.  (If we keep it, I would 
> suggest the 'x' change to '*' similar to per-order breakdowns in 
> show_mem()).  It may also be possible to add it later if a definitive 
> usecase is presented.

I'm fine with changing to '*'.
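
For anyone who wants to consume the field from userspace, here is a rough
sketch (illustration only, not part of the patch) of pulling the total out of
/proc/<pid>/status. The helper name and error handling are just placeholders;
only the "HugetlbPages: <total> kB (...)" format comes from the patch below:

/* Illustrative only: read the HugetlbPages total for a pid.
 * Format assumed (from the patch):
 *   HugetlbPages:    1069056 kB (1*1048576kB 10*2048kB)
 */
#include <stdio.h>

static long hugetlb_kb_of_pid(int pid)
{
	char path[64], line[256];
	long kb = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/status", pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "HugetlbPages: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;	/* total hugetlb usage in kB, or -1 on error */
}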

Thanks,
Naoya Horiguchi
---
From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Subject: [PATCH] mm: hugetlb: proc: add HugetlbPages field to /proc/PID/status

Currently there's no easy way to get per-process usage of hugetlb pages. This
is inconvenient because userspace applications that use hugetlb typically want
to control their processes based on how much memory (including hugetlb) they
use. So this patch simply provides easy access to this information via
/proc/PID/status.

With this patch, for example, /proc/PID/status shows a line like this:

  HugetlbPages:      20480 kB (10*2048kB)

If your system supports and enables multiple hugepage sizes, the line looks
like this:

  HugetlbPages:    1069056 kB (1*1048576kB 10*2048kB)

This tells you at a glance how many hugepages of each page size the process is
using.

Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
---
 Documentation/filesystems/proc.txt |  3 +++
 fs/proc/task_mmu.c                 |  1 +
 include/linux/hugetlb.h            | 20 ++++++++++++++++++++
 include/linux/mm_types.h           | 10 ++++++++++
 mm/hugetlb.c                       | 27 +++++++++++++++++++++++++++
 mm/rmap.c                          |  4 +++-
 6 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
index 22e40211ef64..f561fc46e41b 100644
--- a/Documentation/filesystems/proc.txt
+++ b/Documentation/filesystems/proc.txt
@@ -174,6 +174,7 @@ For example, to get the status information of a process, all you have to do is
   VmLib:      1412 kB
   VmPTE:        20 kb
   VmSwap:        0 kB
+  HugetlbPages:          0 kB (0*2048kB)
   Threads:        1
   SigQ:   0/28578
   SigPnd: 0000000000000000
@@ -237,6 +238,8 @@ Table 1-2: Contents of the status files (as of 4.1)
  VmPTE                       size of page table entries
  VmPMD                       size of second level page tables
  VmSwap                      size of swap usage (the number of referred swapents)
+ HugetlbPages                size of hugetlb memory portions (with additional info
+                             about number of mapped hugepages for each page size)
  Threads                     number of threads
  SigQ                        number of signals queued/max. number for queue
  SigPnd                      bitmap of pending signals for the thread
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 2c37938b82ee..b3cf7fa9ef6c 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -69,6 +69,7 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
 		ptes >> 10,
 		pmds >> 10,
 		swap << (PAGE_SHIFT-10));
+	hugetlb_report_usage(m, mm);
 }
 
 unsigned long task_vsize(struct mm_struct *mm)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d891f949466a..64aa4db01f48 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -469,6 +469,18 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
 #define hugepages_supported() (HPAGE_SHIFT != 0)
 #endif
 
+void hugetlb_report_usage(struct seq_file *m, struct mm_struct *mm);
+
+static inline void inc_hugetlb_count(struct mm_struct *mm, struct hstate *h)
+{
+	atomic_long_inc(&mm->hugetlb_usage.count[hstate_index(h)]);
+}
+
+static inline void dec_hugetlb_count(struct mm_struct *mm, struct hstate *h)
+{
+	atomic_long_dec(&mm->hugetlb_usage.count[hstate_index(h)]);
+}
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 #define alloc_huge_page_node(h, nid) NULL
@@ -504,6 +516,14 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
 {
 	return &mm->page_table_lock;
 }
+
+static inline void hugetlb_report_usage(struct seq_file *f, struct mm_struct *m)
+{
+}
+
+static inline void dec_hugetlb_count(struct mm_struct *mm, struct hstate *h)
+{
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0038ac7466fd..e95c5fe1eb7d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -364,6 +364,12 @@ struct mm_rss_stat {
 	atomic_long_t count[NR_MM_COUNTERS];
 };
 
+#ifdef CONFIG_HUGETLB_PAGE
+struct hugetlb_usage {
+	atomic_long_t count[HUGE_MAX_HSTATE];
+};
+#endif
+
 struct kioctx_table;
 struct mm_struct {
 	struct vm_area_struct *mmap;		/* list of VMAs */
@@ -484,6 +490,10 @@ struct mm_struct {
 	/* address of the bounds directory */
 	void __user *bd_addr;
 #endif
+
+#ifdef CONFIG_HUGETLB_PAGE
+	struct hugetlb_usage hugetlb_usage;
+#endif
 };
 
 static inline void mm_init_cpumask(struct mm_struct *mm)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a8c3087089d8..2338c9713b7a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2562,6 +2562,30 @@ void hugetlb_show_meminfo(void)
 				1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
 }
 
+void hugetlb_report_usage(struct seq_file *m, struct mm_struct *mm)
+{
+	int i;
+	unsigned long total_usage = 0;
+
+	for (i = 0; i < HUGE_MAX_HSTATE; i++) {
+		total_usage += atomic_long_read(&mm->hugetlb_usage.count[i]) *
+			(huge_page_size(&hstates[i]) >> 10);
+	}
+
+	seq_printf(m, "HugetlbPages:\t%8lu kB (", total_usage);
+	for (i = 0; i < HUGE_MAX_HSTATE; i++) {
+		if (huge_page_order(&hstates[i]) == 0)
+			break;
+		if (i > 0)
+			seq_puts(m, " ");
+
+		seq_printf(m, "%ld*%lukB",
+			atomic_long_read(&mm->hugetlb_usage.count[i]),
+			huge_page_size(&hstates[i]) >> 10);
+	}
+	seq_puts(m, ")\n");
+}
+
 /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
 unsigned long hugetlb_total_pages(void)
 {
@@ -2797,6 +2821,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			get_page(ptepage);
 			page_dup_rmap(ptepage);
 			set_huge_pte_at(dst, addr, dst_pte, entry);
+			inc_hugetlb_count(dst, h);
 		}
 		spin_unlock(src_ptl);
 		spin_unlock(dst_ptl);
@@ -2877,6 +2902,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (huge_pte_dirty(pte))
 			set_page_dirty(page);
 
+		dec_hugetlb_count(mm, h);
 		page_remove_rmap(page);
 		force_flush = !__tlb_remove_page(tlb, page);
 		if (force_flush) {
@@ -3261,6 +3287,7 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				&& (vma->vm_flags & VM_SHARED)));
 	set_huge_pte_at(mm, address, ptep, new_pte);
 
+	inc_hugetlb_count(mm, h);
 	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
 		/* Optimization, do the COW without a second fault */
 		ret = hugetlb_cow(mm, vma, address, ptep, new_pte, page, ptl);
diff --git a/mm/rmap.c b/mm/rmap.c
index 171b68768df1..b33278bc4ddb 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1230,7 +1230,9 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	update_hiwater_rss(mm);
 
 	if (PageHWPoison(page) && !(flags & TTU_IGNORE_HWPOISON)) {
-		if (!PageHuge(page)) {
+		if (PageHuge(page)) {
+			dec_hugetlb_count(mm, page_hstate(page));
+		} else {
 			if (PageAnon(page))
 				dec_mm_counter(mm, MM_ANONPAGES);
 			else
-- 
2.4.3
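
A quick way to see the new field in action (test sketch only, not part of the
patch): the program below maps and touches one default-sized hugepage via
MAP_HUGETLB and prints its own HugetlbPages line. It assumes a 2048kB default
hugepage size and that some hugepages have been reserved (e.g. via
/proc/sys/vm/nr_hugepages), so the expected output is something like
"HugetlbPages:       2048 kB (1*2048kB)".

/* Test sketch: map one default-sized hugepage and show the new field.
 * Assumes nr_hugepages > 0 and a 2048kB default hugepage size.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN	(2UL << 20)	/* one 2048kB hugepage */

int main(void)
{
	char line[256];
	FILE *f;
	void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}
	memset(p, 0, LEN);	/* fault the hugepage in so it gets counted */

	f = fopen("/proc/self/status", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "HugetlbPages:", 13))
			fputs(line, stdout);
	fclose(f);
	munmap(p, LEN);
	return 0;
}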
