Message-ID: <20200527103545.4348ac10@carbon>
Date:   Wed, 27 May 2020 10:35:45 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Roman Gushchin <guro@...com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
        kernel-team@...com, linux-kernel@...r.kernel.org,
        Mel Gorman <mgorman@...hsingularity.net>, brouer@...hat.com
Subject: Re: [PATCH v3 17/19] mm: memcg/slab: use a single set of
 kmem_caches for all allocations

On Tue, 26 May 2020 16:55:05 +0200
Vlastimil Babka <vbabka@...e.cz> wrote:

> On 4/22/20 10:47 PM, Roman Gushchin wrote:
> > Instead of having two sets of kmem_caches: one for system-wide and
> > non-accounted allocations and the second one shared by all accounted
> > allocations, we can use just one.
> > 
> > The idea is simple: space for obj_cgroup metadata can be allocated
> > on demand and filled only for accounted allocations.
> > 
> > It allows us to remove a bunch of code which is required to handle
> > kmem_cache clones for accounted allocations. There is no more need
> > to create them, accumulate statistics, propagate attributes, etc.
> > It's a quite significant simplification.
> > 
> > Also, because the total number of slab_caches is reduced by almost
> > half (not all kmem_caches have a memcg clone), some additional memory
> > savings are expected. On my devvm it additionally saves about 3.5%
> > of slab memory.
> > 
> > Suggested-by: Johannes Weiner <hannes@...xchg.org>
> > Signed-off-by: Roman Gushchin <guro@...com>  
> 
> Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
> 
> However, as this series will affect slab fastpaths, and perhaps
> especially this patch will affect even non-kmemcg allocations being
> freed, I'm CCing Jesper and Mel for awareness as they AFAIK did work
> on network stack memory management performance, and perhaps some
> benchmarks are in order...

Thanks for the heads-up! 

We should all know Mel Gorman's test suite, which is here[1]:
 [1] https://github.com/gormanm/mmtests

My guess is that these changes will only be visible in micro-benchmarks
of the slab/slub code. My slab/slub micro-benchmarks are located
here[2]: https://github.com/netoptimizer/prototype-kernel/

These are kernel modules that are compiled against your devel tree and
pushed to the remote host.  Results are simply printk'ed to dmesg.
The compile+push usage is documented here[3]:
 [3] https://prototype-kernel.readthedocs.io/en/latest/prototype-kernel/build-process.html

I recommend trying "slab_bulk_test01":
 modprobe slab_bulk_test01; rmmod slab_bulk_test01
 dmesg

Results from these kernel module benchmarks are included in some
commits[4][5]. In [4] I found some overhead caused by MEMCG.

 [4] https://git.kernel.org/torvalds/c/ca257195511d
 [5] https://git.kernel.org/torvalds/c/fbd02630c6e3
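
The on-demand metadata idea from the quoted patch description can be
sketched in userspace C. This is purely illustrative, not the kernel
implementation: all names here (demo_cache, demo_objcg, demo_alloc) are
hypothetical stand-ins for kmem_cache, obj_cgroup and the slab allocation
path. The point it shows is that the per-object cgroup vector is only
allocated the first time an accounted allocation happens, so caches that
never see accounted allocations pay nothing:

```c
/* Illustrative userspace sketch of on-demand per-object metadata.
 * Not the actual kernel code; names are hypothetical. */
#include <assert.h>
#include <stdlib.h>

#define CACHE_CAPACITY 64

struct demo_objcg { int id; };          /* stand-in for obj_cgroup */

struct demo_cache {
    void *objects[CACHE_CAPACITY];      /* the objects themselves */
    struct demo_objcg **objcg_vec;      /* NULL until the first accounted
                                         * allocation in this cache */
    int used;
};

/* Allocate an object; attach cgroup metadata only for accounted
 * allocations (cg != NULL). */
void *demo_alloc(struct demo_cache *c, struct demo_objcg *cg)
{
    if (c->used >= CACHE_CAPACITY)
        return NULL;
    if (cg) {
        /* On demand: the metadata vector is allocated once, and only
         * if any accounted allocation ever happens in this cache. */
        if (!c->objcg_vec) {
            c->objcg_vec = calloc(CACHE_CAPACITY, sizeof(*c->objcg_vec));
            if (!c->objcg_vec)
                return NULL;
        }
        c->objcg_vec[c->used] = cg;
    }
    void *obj = malloc(16);
    if (obj)
        c->objects[c->used++] = obj;
    return obj;
}

/* Query whether this cache ever paid for cgroup metadata. */
int demo_metadata_allocated(const struct demo_cache *c)
{
    return c->objcg_vec != NULL;
}
```

With a single cache handling both kinds of allocation like this, there is
no need for a second set of per-memcg cache clones, which is the
simplification the patch describes.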
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
