Date:   Wed, 6 Jan 2021 17:19:33 +1100
From:   Imran Khan <imran.f.khan@...cle.com>
To:     Roman Gushchin <guro@...com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org
Cc:     Michal Hocko <mhocko@...nel.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH] mm: memcg/slab: optimize objcg stock draining



On 6/1/21 3:22 pm, Roman Gushchin wrote:
> Imran Khan reported a regression in hackbench results caused by the
> commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
> instead of pages"). The regression is noticeable in the case of
> consecutive allocations of relatively large slab objects,
> e.g. skb's. As soon as the amount of stocked bytes exceeds PAGE_SIZE,
> drain_obj_stock() and __memcg_kmem_uncharge() are called, and it leads
> to a number of atomic operations in page_counter_uncharge().
> 
> The corresponding call graph is below (provided by Imran Khan):
>    |__alloc_skb
>    |    |
>    |    |__kmalloc_reserve.isra.61
>    |    |    |
>    |    |    |__kmalloc_node_track_caller
>    |    |    |    |
>    |    |    |    |slab_pre_alloc_hook.constprop.88
>    |    |    |    |obj_cgroup_charge
>    |    |    |    |    |
>    |    |    |    |    |__memcg_kmem_charge
>    |    |    |    |    |    |
>    |    |    |    |    |    |page_counter_try_charge
>    |    |    |    |    |
>    |    |    |    |    |refill_obj_stock
>    |    |    |    |    |    |
>    |    |    |    |    |    |drain_obj_stock.isra.68
>    |    |    |    |    |    |    |
>    |    |    |    |    |    |    |__memcg_kmem_uncharge
>    |    |    |    |    |    |    |    |
>    |    |    |    |    |    |    |    |page_counter_uncharge
>    |    |    |    |    |    |    |    |    |
>    |    |    |    |    |    |    |    |    |page_counter_cancel
>    |    |    |    |
>    |    |    |    |
>    |    |    |    |__slab_alloc
>    |    |    |    |    |
>    |    |    |    |    |___slab_alloc
>    |    |    |    |    |
>    |    |    |    |slab_post_alloc_hook
> 
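For reference, the pre-patch draining logic looks roughly like this (abridged
from memory of the v5.9/v5.10-era mm/memcontrol.c; the objcg reset branch,
RCU locking and the sub-page remainder handling are omitted, so treat it as
a sketch rather than the verbatim source):

  static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
  {
          struct memcg_stock_pcp *stock;
          unsigned long flags;

          local_irq_save(flags);

          stock = this_cpu_ptr(&memcg_stock);
          if (stock->cached_objcg != objcg)       /* reset if necessary */
                  drain_obj_stock(stock);

          stock->nr_bytes += nr_bytes;

          /* Crossing PAGE_SIZE here is what triggers the expensive path. */
          if (stock->nr_bytes > PAGE_SIZE)
                  drain_obj_stock(stock);

          local_irq_restore(flags);
  }

  static void drain_obj_stock(struct memcg_stock_pcp *stock)
  {
          unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;

          /* Before this patch, every drain of a page's worth of bytes ended
           * in page_counter_uncharge() via __memcg_kmem_uncharge(), i.e.
           * atomic RMW operations on shared counters. */
          if (nr_pages)
                  __memcg_kmem_uncharge(obj_cgroup_memcg(stock->cached_objcg),
                                        nr_pages);
          /* ... leftover sub-page bytes stay accounted to the objcg ... */
  }

With back-to-back allocations of a relatively large object (an skb's data
buffer, say), the stock crosses PAGE_SIZE every few allocations, so the
atomic uncharge path above runs repeatedly on a hot path.
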
> Instead of directly uncharging the accounted kernel memory, it's
> possible to refill the generic page-sized per-cpu stock.
> It's a much faster operation, especially on the default hierarchy.
> As a bonus, __memcg_kmem_uncharge_page() will also get faster,
> so the freeing of page-sized kernel allocations (e.g. large kmallocs)
> will become faster.
> 
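The generic per-cpu stock path the patch switches to looks roughly like this
(again abridged from memory of the same era's mm/memcontrol.c, a sketch
rather than the exact source):

  static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
  {
          struct memcg_stock_pcp *stock;
          unsigned long flags;

          local_irq_save(flags);

          stock = this_cpu_ptr(&memcg_stock);
          if (stock->cached != memcg) {           /* reset if necessary */
                  drain_stock(stock);
                  css_get(&memcg->css);
                  stock->cached = memcg;
          }
          stock->nr_pages += nr_pages;

          /* Only past MEMCG_CHARGE_BATCH (32 pages) does this fall back to
           * the atomic page counters; the common case is a plain per-cpu
           * addition with interrupts disabled. */
          if (stock->nr_pages > MEMCG_CHARGE_BATCH)
                  drain_stock(stock);

          local_irq_restore(flags);
  }

The refilled pages can also satisfy later charges on the same CPU via
consume_stock(), without touching the page counters at all.
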
> A similar change has been done earlier for the socket memory by
> the commit 475d0487a2ad ("mm: memcontrol: use per-cpu stocks for
> socket memory uncharging").
> 
> Signed-off-by: Roman Gushchin <guro@...com>
> Reported-by: Imran Khan <imran.f.khan@...cle.com>

Tested-by: Imran Khan <imran.f.khan@...cle.com>

> ---
>   mm/memcontrol.c | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 0d74b80fa4de..8148c1df3aff 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3122,9 +3122,7 @@ void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
>   	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
>   		page_counter_uncharge(&memcg->kmem, nr_pages);
>   
> -	page_counter_uncharge(&memcg->memory, nr_pages);
> -	if (do_memsw_account())
> -		page_counter_uncharge(&memcg->memsw, nr_pages);
> +	refill_stock(memcg, nr_pages);
>   }
>   
>   /**
> 
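The cost being avoided is visible in mm/page_counter.c: an uncharge walks the
whole counter hierarchy and performs one atomic RMW per level (quoted from
memory of the v5.10-era source, so minor details may differ):

  void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages)
  {
          long new;

          new = atomic_long_sub_return(nr_pages, &counter->usage);
          propagate_protected_usage(counter, new);
          /* More uncharges than charges? */
          WARN_ON_ONCE(new < 0);
  }

  void page_counter_uncharge(struct page_counter *counter, unsigned long nr_pages)
  {
          struct page_counter *c;

          /* One atomic sub per ancestor: the deeper the cgroup, the more
           * cache-line contention each drain causes. */
          for (c = counter; c; c = c->parent)
                  page_counter_cancel(c, nr_pages);
  }

With the patch applied, that walk happens only when the per-cpu page stock
itself overflows, rather than on every objcg stock drain.
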
