Date:	Mon, 5 Jan 2015 13:35:00 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Rafael Aquini <aquini@...hat.com>
Cc:	linux-kernel@...r.kernel.org, jweiner@...hat.com,
	dave.hansen@...ux.intel.com, rientjes@...gle.com,
	linux-mm@...ck.org
Subject: Re: [PATCH v2] fs: proc: task_mmu: show page size in
 /proc/<pid>/numa_maps

On Mon,  5 Jan 2015 12:44:31 -0500 Rafael Aquini <aquini@...hat.com> wrote:

> This patch introduces a 'kernelpagesize_kB' line element to the /proc/<pid>/numa_maps
> report file in order to help identify the size of the pages that are backing
> memory areas mapped by a given task. This is especially useful to
> help differentiate between HUGE and GIGANTIC page backed VMAs.
> 
> This patch is based on Dave Hansen's proposal and the reviewers' follow-ups
> taken from the following discussion threads:
>  * https://lkml.org/lkml/2011/9/21/454

Dave's changelog contains useful information which this one lacked.  I
stole some of it.

: The output of /proc/$pid/numa_maps is in terms of number of pages like
: anon=22 or dirty=54.  Here's some output:
: 
: 7f4680000000 default file=/hugetlb/bigfile anon=50 dirty=50 N0=50
: 7f7659600000 default file=/anon_hugepage\040(deleted) anon=50 dirty=50 N0=50
: 7fff8d425000 default stack anon=50 dirty=50 N0=50
: Looks like we have a stack and a couple of anonymous hugetlbfs
: areas which all appear to use the same amount of memory.  They don't.
: 
: The 'bigfile' uses 1GB pages and takes up ~50GB of space.  The
: anon_hugepage uses 2MB pages and takes up ~100MB of space while the stack
: uses normal 4k pages.  You can go over to smaps to figure out what the
: page size _really_ is with KernelPageSize or MMUPageSize.  But, I think
: this is a pretty nasty and counterintuitive interface as it stands.
: 
: This patch introduces a 'kernelpagesize_kB' line element to the
: /proc/<pid>/numa_maps report file in order to help identify the size of the
: pages that are backing memory areas mapped by a given task.  This is
: especially useful to help differentiate between HUGE and GIGANTIC page
: backed VMAs.
: 
: This patch is based on Dave Hansen's proposal and the reviewers' follow-ups
: taken from the following discussion threads:
:  * https://lkml.org/lkml/2011/9/21/454
:  * https://lkml.org/lkml/2014/12/20/66


> +	seq_printf(m, " kernelpagesize_kB=%lu", vma_kernel_pagesize(vma) >> 10);

This changes the format of the numa_maps file and can potentially break
existing parsers.  Please discuss.

I'd complain about the patch's failure to update the documentation,
except numa_maps appears to be undocumented.  Sigh.  What the heck is "N0"?

