lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Mon, 31 Jan 2022 10:38:51 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Waiman Long <longman@...hat.com>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Petr Mladek <pmladek@...e.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Sergey Senozhatsky <senozhatsky@...omium.org>,
        Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        Rasmus Villemoes <linux@...musvillemoes.dk>,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        linux-mm@...ck.org, Ira Weiny <ira.weiny@...el.com>,
        Rafael Aquini <aquini@...hat.com>
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Sat 29-01-22 15:53:15, Waiman Long wrote:
> It was found that a number of offlined memcgs were not freed because
> they were pinned by some charged pages that were present. Even "echo
> 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
> offlined but not freed memcgs tend to increase in number over time with
> the side effect that percpu memory consumption as shown in /proc/meminfo
> also increases over time.
> 
> In order to find out more information about those pages that pin
> offlined memcgs, the page_owner feature is extended to dump memory
> cgroup information especially whether the cgroup is offlined or not.

It is not really clear to me how this is supposed to be used. Are you
really dumping all the pages in the system to find out offline memcgs?
That looks rather clumsy to me. I am not against adding memcg
information to the page owner output. That can be useful in other
contexts.

> Signed-off-by: Waiman Long <longman@...hat.com>
> ---
>  mm/page_owner.c | 31 +++++++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 28dac73e0542..8dc5cd0fa227 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -10,6 +10,7 @@
>  #include <linux/migrate.h>
>  #include <linux/stackdepot.h>
>  #include <linux/seq_file.h>
> +#include <linux/memcontrol.h>
>  #include <linux/sched/clock.h>
>  
>  #include "internal.h"
> @@ -331,6 +332,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
>  		depot_stack_handle_t handle)
>  {
>  	int ret, pageblock_mt, page_mt;
> +	unsigned long __maybe_unused memcg_data;
>  	char *kbuf;
>  
>  	count = min_t(size_t, count, PAGE_SIZE);
> @@ -365,6 +367,35 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
>  			migrate_reason_names[page_owner->last_migrate_reason]);
>  	}
>  
> +#ifdef CONFIG_MEMCG

This really begs to be in a dedicated function. page_owner_print_memcg
or something like that.

> +	/*
> +	 * Look for memcg information and print it out
> +	 */
> +	memcg_data = READ_ONCE(page->memcg_data);
> +	if (memcg_data) {
> +		struct mem_cgroup *memcg = page_memcg_check(page);
> +		bool onlined;
> +		char name[80];

What does prevent memcg to go away and being reused for a different
purpose?

> +
> +		if (memcg_data & MEMCG_DATA_OBJCGS)
> +			ret += scnprintf(kbuf + ret, count - ret,
> +					"Slab cache page\n");
> +
> +		if (!memcg)
> +			goto copy_out;
> +
> +		onlined = (memcg->css.flags & CSS_ONLINE);
> +		cgroup_name(memcg->css.cgroup, name, sizeof(name));
> +		ret += scnprintf(kbuf + ret, count - ret,
> +				"Charged %sto %smemcg %s\n",
> +				PageMemcgKmem(page) ? "(via objcg) " : "",
> +				onlined ? "" : "offlined ",
> +				name);
> +	}
> +
> +copy_out:
> +#endif
-- 
Michal Hocko
SUSE Labs

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ