Date:   Wed, 22 May 2019 17:01:43 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, Roman Gushchin <guro@...com>
Cc:     Michal Hocko <mhocko@...e.com>,
        Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH] proc/meminfo: add MemKernel counter

On 5/22/19 4:40 PM, Konstantin Khlebnikov wrote:
> Some kinds of kernel allocations are not accounted for or not shown in meminfo.
> For example, vmalloc allocations are tracked, but their overall size is not shown

I think Roman's vmalloc patch [1] is on its way?

> for performance reasons. There is no information about network buffers.

xfs buffers can also occupy a lot of memory, in my experience

> In most cases detailed statistics are not required. In the first place we need
> information about overall kernel memory usage, regardless of its structure.
> 
> This patch estimates kernel memory usage by subtracting the known sizes of
> free, anonymous, hugetlb and cached memory from the total memory size:
> MemKernel = MemTotal - MemFree - Buffers - Cached - SwapCached - AnonPages - Hugetlb.
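
For comparison, roughly the same estimate can already be derived from userspace
with the fields /proc/meminfo exports today. A minimal sketch, not part of the
patch, assuming a kernel recent enough to export the Hugetlb: line (all values
in kB):

/*
 * Sketch only: approximate the proposed MemKernel value from the
 * fields /proc/meminfo already exports:
 *   MemTotal - MemFree - Buffers - Cached - SwapCached - AnonPages - Hugetlb
 * Re-reads the file once per field for simplicity.
 */
#include <stdio.h>
#include <string.h>

static long meminfo_field(const char *name)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];
	long val = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, name, strlen(name))) {
			sscanf(line + strlen(name), " %ld", &val);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	long kernel_kb = meminfo_field("MemTotal:") -
			 meminfo_field("MemFree:") -
			 meminfo_field("Buffers:") -
			 meminfo_field("Cached:") -
			 meminfo_field("SwapCached:") -
			 meminfo_field("AnonPages:") -
			 meminfo_field("Hugetlb:");

	if (kernel_kb < 0)		/* same clamp as the patch */
		kernel_kb = 0;

	printf("MemKernel (estimate): %ld kB\n", kernel_kb);
	return 0;
}

The patch below does the same arithmetic in pages inside meminfo_proc_show(),
so the value stays consistent with the rest of the file.
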
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>

I tried this once in [2]. The name there was Unaccounted, and one of the
objections was that it would make people worried. Yours is a bit better;
perhaps MemKernMisc would be even more descriptive? Michal Hocko was worried
about maintainability, i.e. that we might forget to subtract something, but I
don't think that's a big issue.

Vlastimil

[1] https://lore.kernel.org/linux-mm/20190514235111.2817276-2-guro@fb.com/T/#u
[2] https://lore.kernel.org/linux-mm/20161020121149.9935-1-vbabka@suse.cz/T/#u

> ---
>  Documentation/filesystems/proc.txt |    5 +++++
>  fs/proc/meminfo.c                  |   20 +++++++++++++++-----
>  2 files changed, 20 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
> index 66cad5c86171..a0ab7f273ea0 100644
> --- a/Documentation/filesystems/proc.txt
> +++ b/Documentation/filesystems/proc.txt
> @@ -860,6 +860,7 @@ varies by architecture and compile options.  The following is from a
>  
>  MemTotal:     16344972 kB
>  MemFree:      13634064 kB
> +MemKernel:      862600 kB
>  MemAvailable: 14836172 kB
>  Buffers:          3656 kB
>  Cached:        1195708 kB
> @@ -908,6 +909,10 @@ MemAvailable: An estimate of how much memory is available for starting new
>                page cache to function well, and that not all reclaimable
>                slab will be reclaimable, due to items being in use. The
>                impact of those factors will vary from system to system.
> +   MemKernel: The sum of all kinds of kernel memory allocations: Slab,
> +              Vmalloc, Percpu, KernelStack, PageTables, socket buffers,
> +              and some other untracked allocations. Does not include
> +              MemFree, Buffers, Cached, SwapCached, AnonPages, Hugetlb.
>       Buffers: Relatively temporary storage for raw disk blocks
>                shouldn't get tremendously large (20MB or so)
>        Cached: in-memory cache for files read from the disk (the
> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> index 568d90e17c17..b27d56dd619a 100644
> --- a/fs/proc/meminfo.c
> +++ b/fs/proc/meminfo.c
> @@ -39,17 +39,27 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  	long available;
>  	unsigned long pages[NR_LRU_LISTS];
>  	unsigned long sreclaimable, sunreclaim;
> +	unsigned long anon_pages, file_pages, swap_cached;
> +	long kernel_pages;
>  	int lru;
>  
>  	si_meminfo(&i);
>  	si_swapinfo(&i);
>  	committed = percpu_counter_read_positive(&vm_committed_as);
>  
> -	cached = global_node_page_state(NR_FILE_PAGES) -
> -			total_swapcache_pages() - i.bufferram;
> +	anon_pages = global_node_page_state(NR_ANON_MAPPED);
> +	file_pages = global_node_page_state(NR_FILE_PAGES);
> +	swap_cached = total_swapcache_pages();
> +
> +	cached = file_pages - swap_cached - i.bufferram;
>  	if (cached < 0)
>  		cached = 0;
>  
> +	kernel_pages = i.totalram - i.freeram - anon_pages - file_pages -
> +		       hugetlb_total_pages();
> +	if (kernel_pages < 0)
> +		kernel_pages = 0;
> +
>  	for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
>  		pages[lru] = global_node_page_state(NR_LRU_BASE + lru);
>  
> @@ -60,9 +70,10 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  	show_val_kb(m, "MemTotal:       ", i.totalram);
>  	show_val_kb(m, "MemFree:        ", i.freeram);
>  	show_val_kb(m, "MemAvailable:   ", available);
> +	show_val_kb(m, "MemKernel:      ", kernel_pages);
>  	show_val_kb(m, "Buffers:        ", i.bufferram);
>  	show_val_kb(m, "Cached:         ", cached);
> -	show_val_kb(m, "SwapCached:     ", total_swapcache_pages());
> +	show_val_kb(m, "SwapCached:     ", swap_cached);
>  	show_val_kb(m, "Active:         ", pages[LRU_ACTIVE_ANON] +
>  					   pages[LRU_ACTIVE_FILE]);
>  	show_val_kb(m, "Inactive:       ", pages[LRU_INACTIVE_ANON] +
> @@ -92,8 +103,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  		    global_node_page_state(NR_FILE_DIRTY));
>  	show_val_kb(m, "Writeback:      ",
>  		    global_node_page_state(NR_WRITEBACK));
> -	show_val_kb(m, "AnonPages:      ",
> -		    global_node_page_state(NR_ANON_MAPPED));
> +	show_val_kb(m, "AnonPages:      ", anon_pages);
>  	show_val_kb(m, "Mapped:         ",
>  		    global_node_page_state(NR_FILE_MAPPED));
>  	show_val_kb(m, "Shmem:          ", i.sharedram);
> 
