Date:   Fri, 19 Apr 2019 20:07:37 +0000
From:   Roman Gushchin <guro@...com>
To:     Shakeel Butt <shakeelb@...gle.com>
CC:     Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Michal Hocko <mhocko@...e.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] memcg: refill_stock for kmem uncharging too

On Thu, Apr 18, 2019 at 02:42:24PM -0700, Shakeel Butt wrote:
> The commit 475d0487a2ad ("mm: memcontrol: use per-cpu stocks for socket
> memory uncharging") added refill_stock() for the skmem uncharging path to
> optimize workloads with high network traffic. Do the same for the kmem
> uncharging path as well. However, bypass the refill for offlined memcgs so
> as not to cause a zombie apocalypse.
> 
> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>

Hello, Shakeel!

> ---
>  mm/memcontrol.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 2535e54e7989..7b8de091f572 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -178,6 +178,7 @@ struct mem_cgroup_event {
>  
>  static void mem_cgroup_threshold(struct mem_cgroup *memcg);
>  static void mem_cgroup_oom_notify(struct mem_cgroup *memcg);
> +static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages);
>  
>  /* Stuffs for move charges at task migration. */
>  /*
> @@ -2097,10 +2098,7 @@ static void drain_stock(struct memcg_stock_pcp *stock)
>  	struct mem_cgroup *old = stock->cached;
>  
>  	if (stock->nr_pages) {
> -		page_counter_uncharge(&old->memory, stock->nr_pages);
> -		if (do_memsw_account())
> -			page_counter_uncharge(&old->memsw, stock->nr_pages);
> -		css_put_many(&old->css, stock->nr_pages);
> +		cancel_charge(old, stock->nr_pages);
>  		stock->nr_pages = 0;
>  	}
>  	stock->cached = NULL;
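For context: cancel_charge() itself isn't shown in the diff. It already
exists in mm/memcontrol.c, and, mirroring the three lines this hunk folds
away, it does roughly the following (a simplified sketch, not the verbatim
source):

	static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
	{
		/* the root memcg's page counters are never charged */
		if (mem_cgroup_is_root(memcg))
			return;

		/* undo the page counter charges taken at charge time */
		page_counter_uncharge(&memcg->memory, nr_pages);
		if (do_memsw_account())
			page_counter_uncharge(&memcg->memsw, nr_pages);

		/* drop the css references that pin the memcg */
		css_put_many(&memcg->css, nr_pages);
	}
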
> @@ -2133,6 +2131,11 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>  	struct memcg_stock_pcp *stock;
>  	unsigned long flags;
>  
> +	if (unlikely(!mem_cgroup_online(memcg))) {
> +		cancel_charge(memcg, nr_pages);
> +		return;
> +	}

I'm slightly concerned about this part. Do we really need it?
The number of "zombies" we can pin this way is limited by the number of CPUs,
and it will drop quickly as soon as there is any load on the machine.

If we skip offline memcgs, it can slow down charging/uncharging of skmem,
which might be a problem if the socket is in active use by another cgroup.
Honestly, I'd drop this part.
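
To make the "limited by the number of CPUs" point concrete: each per-cpu
stock caches pages for at most one memcg, and stocking for a different memcg
first drains (and thereby unpins) the previously cached one. A simplified
sketch of the existing refill_stock(), with the irq save/restore omitted:

	static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
	{
		struct memcg_stock_pcp *stock = this_cpu_ptr(&memcg_stock);

		if (stock->cached != memcg) {
			/*
			 * A different memcg was cached: return its pages
			 * and drop the css references pinning it.
			 */
			drain_stock(stock);
			stock->cached = memcg;
		}
		stock->nr_pages += nr_pages;

		if (stock->nr_pages > MEMCG_CHARGE_BATCH)
			drain_stock(stock);
	}

So at any moment at most one zombie per CPU can be held through the stocks,
and any charge activity on that CPU for a live memcg evicts the stale entry.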

> +
>  	local_irq_save(flags);
>  
>  	stock = this_cpu_ptr(&memcg_stock);
> @@ -2768,17 +2771,13 @@ void __memcg_kmem_uncharge(struct page *page, int order)
>  	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
>  		page_counter_uncharge(&memcg->kmem, nr_pages);
>  
> -	page_counter_uncharge(&memcg->memory, nr_pages);
> -	if (do_memsw_account())
> -		page_counter_uncharge(&memcg->memsw, nr_pages);
> -
>  	page->mem_cgroup = NULL;
>  
>  	/* slab pages do not have PageKmemcg flag set */
>  	if (PageKmemcg(page))
>  		__ClearPageKmemcg(page);
>  
> -	css_put_many(&memcg->css, nr_pages);
> +	refill_stock(memcg, nr_pages);
>  }
>  #endif /* CONFIG_MEMCG_KMEM */
>  
> -- 
> 2.21.0.392.gf8f6787159e-goog
> 

The rest looks good to me.

Thanks!
