lists.openwall.net - Open Source and information security mailing list archives
Date: Thu, 27 Sep 2012 18:32:33 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: David Rientjes <rientjes@...gle.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michel Lespinasse <walken@...gle.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	stable@...r.kernel.org
Subject: Re: [patch] mm, thp: fix mlock statistics

On Wed, 26 Sep 2012, David Rientjes wrote:

> NR_MLOCK is only accounted in single page units: there's no logic to
> handle transparent hugepages.  This patch checks the appropriate number
> of pages to adjust the statistics by so that the correct amount of memory
> is reflected.
>
> Currently:
>
> 	$ grep Mlocked /proc/meminfo
> 	Mlocked:           19636 kB
>
> 	#define MAP_SIZE	(4 << 30)	/* 4GB */
>
> 	void *ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
> 			 MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
> 	mlock(ptr, MAP_SIZE);
>
> 	$ grep Mlocked /proc/meminfo
> 	Mlocked:           29844 kB
>
> 	munlock(ptr, MAP_SIZE);
>
> 	$ grep Mlocked /proc/meminfo
> 	Mlocked:           19636 kB
>
> And with this patch:
>
> 	$ grep Mlock /proc/meminfo
> 	Mlocked:           19636 kB
>
> 	mlock(ptr, MAP_SIZE);
>
> 	$ grep Mlock /proc/meminfo
> 	Mlocked:         4213664 kB
>
> 	munlock(ptr, MAP_SIZE);
>
> 	$ grep Mlock /proc/meminfo
> 	Mlocked:           19636 kB
>
> Reported-by: Hugh Dickens <hughd@...gle.com>

I do prefer Dickins :)

> Signed-off-by: David Rientjes <rientjes@...gle.com>

Acked-by: Hugh Dickins <hughd@...gle.com>

Yes, this now seems to be working nicely, thanks.

I would have preferred you to omit the free_page_mlock() part, since
that sets me wondering about what flags might be set to mean what at
that point; but since it should never get there anyway, and we'll be
removing it entirely from v3.7, never mind.
(In doing that, I shall need to consider whether clear_page_mlock() then
needs hpage_nr_pages, but your patch below is perfectly correct to omit
it.)

If I understand aright, in another (thp: avoid VM_BUG_ON) thread, Linus
remarks that he's noticed this and your matching Unevictable patch (that
I had thought too late for v3.6), and is hoping for Acks so that he can
put them into v3.6 after all.

So despite my earlier reluctance, please take this as an Ack on that one
too (I was testing them together): it'll be odd if one of them goes to
stable and the other not, but we can sort that out with GregKH later.

Hugh

> ---
>  mm/internal.h   |    3 ++-
>  mm/mlock.c      |    6 ++++--
>  mm/page_alloc.c |    2 +-
>  3 files changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -180,7 +180,8 @@ static inline int mlocked_vma_newpage(struct vm_area_struct *vma,
>  		return 0;
>
>  	if (!TestSetPageMlocked(page)) {
> -		inc_zone_page_state(page, NR_MLOCK);
> +		mod_zone_page_state(page_zone(page), NR_MLOCK,
> +				    hpage_nr_pages(page));
>  		count_vm_event(UNEVICTABLE_PGMLOCKED);
>  	}
>  	return 1;
> diff --git a/mm/mlock.c b/mm/mlock.c
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -81,7 +81,8 @@ void mlock_vma_page(struct page *page)
>  	BUG_ON(!PageLocked(page));
>
>  	if (!TestSetPageMlocked(page)) {
> -		inc_zone_page_state(page, NR_MLOCK);
> +		mod_zone_page_state(page_zone(page), NR_MLOCK,
> +				    hpage_nr_pages(page));
>  		count_vm_event(UNEVICTABLE_PGMLOCKED);
>  		if (!isolate_lru_page(page))
>  			putback_lru_page(page);
> @@ -108,7 +109,8 @@ void munlock_vma_page(struct page *page)
>  	BUG_ON(!PageLocked(page));
>
>  	if (TestClearPageMlocked(page)) {
> -		dec_zone_page_state(page, NR_MLOCK);
> +		mod_zone_page_state(page_zone(page), NR_MLOCK,
> +				    -hpage_nr_pages(page));
>  		if (!isolate_lru_page(page)) {
>  			int ret = SWAP_AGAIN;
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -604,7 +604,7 @@ out:
>  */
>  static inline void free_page_mlock(struct page *page)
>  {
> -	__dec_zone_page_state(page, NR_MLOCK);
> +	__mod_zone_page_state(page_zone(page), NR_MLOCK, -hpage_nr_pages(page));
>  	__count_vm_event(UNEVICTABLE_MLOCKFREED);
>  }
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/