Date:   Fri, 19 Jun 2020 11:47:36 -0700
From:   Roman Gushchin <guro@...com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
CC:     Vlastimil Babka <vbabka@...e.cz>,
        Shakeel Butt <shakeelb@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Linux MM <linux-mm@...ck.org>,
        Kernel Team <kernel-team@...com>,
        LKML <linux-kernel@...r.kernel.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Larry Woodman <lwoodman@...hat.com>
Subject: Re: [PATCH v6 00/19] The new cgroup slab memory controller

On Fri, Jun 19, 2020 at 11:39:45AM +0200, Jesper Dangaard Brouer wrote:
> On Thu, 18 Jun 2020 18:27:12 -0700
> Roman Gushchin <guro@...com> wrote:
> 
> > Theoretically speaking it should get worse (especially for non-root allocations),
> > but if the difference is not big, it should still be a net win, because there is
> > a big expected gain from memory savings, a smaller working set, less fragmentation, etc.
> > 
> > The only thing I'm slightly worried about is the effect on root allocations
> > if we're sharing slab caches between root and non-root allocations. Then again,
> > anyone who depends that heavily on allocation speed can ignore memcg-based
> > accounting anyway, and for most users the cost of allocation is negligible.
> > That's why the patch which merges root and memcg slab caches is put on top
> > and can be reverted if somebody complains.
> 
> In general I like this work for the memory savings, but you also have to be
> aware of the negative consequences of sharing slab caches.  At Red Hat
> we have experienced very hard-to-find kernel bugs that point to memory
> corruption in completely unrelated kernel code, because other kernel code
> was corrupting the shared slab cache.  (Hint: a workaround is to enable
> SLUB debugging, which disables this sharing.)

I agree, but that must be related to the sharing of slab pages between different
types of objects. We've also disabled cache sharing many times in order
to compare slab usage between different major kernel versions or to debug
memory corruption.
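
For reference, here is a minimal sketch of what "disabling this sharing" can
look like (not something from this thread; the cache name and object size are
made up). In SLUB, a cache created with debug flags such as SLAB_RED_ZONE and
SLAB_POISON is never merged with other same-sized caches, so corruption stays
confined to, and is detected in, that cache. The same effect can be had
system-wide with the slub_debug / slab_nomerge boot parameters.

    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/slab.h>

    static struct kmem_cache *my_debug_cache;   /* hypothetical example cache */

    static int __init my_debug_cache_init(void)
    {
            /* Red-zoning and poisoning enable SLUB's consistency checks and
             * also keep this cache from being merged with other caches. */
            my_debug_cache = kmem_cache_create("my_debug_cache", 128, 0,
                                               SLAB_RED_ZONE | SLAB_POISON,
                                               NULL);
            return my_debug_cache ? 0 : -ENOMEM;
    }

    static void __exit my_debug_cache_exit(void)
    {
            kmem_cache_destroy(my_debug_cache);
    }

    module_init(my_debug_cache_init);
    module_exit(my_debug_cache_exit);
    MODULE_LICENSE("GPL");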

As for sharing between multiple cgroups, it just brings the
CONFIG_MEMCG_KMEM memory layout back to the !CONFIG_MEMCG_KMEM one.
I doubt that anyone has ever considered kernel memory accounting
a debugging mechanism. Quite the opposite: we've encountered a lot of
tricky issues related to the dynamic creation and destruction of kmem_caches
and their lifetime. Removing this code should make things simpler and
hopefully more reliable.
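
To illustrate the structural difference, here is a rough userspace model (not
kernel code; all names are made up). Before the series, each accounted cache
was cloned per cgroup, and every clone had to be created and destroyed as
cgroups came and went; with the series, there is a single shared cache and
each accounted object simply records which cgroup it is charged to, roughly
analogous to the obj_cgroup pointer the series introduces:

    #include <stdlib.h>

    struct cgroup { const char *name; };

    /* Old model (simplified): one kmem_cache clone per cgroup, created and
     * destroyed dynamically, so every clone's lifetime must be managed. */
    struct per_memcg_cache {
            struct cgroup *owner;
            struct per_memcg_cache *next;   /* clones of one root cache */
            /* ... per-clone slab pages ... */
    };

    /* New model (simplified): one shared cache; each object records the
     * cgroup it is charged to, so no per-cgroup clones exist at all. */
    struct accounted_object {
            struct cgroup *charged_to;
            unsigned char payload[128];
    };

    int main(void)
    {
            struct cgroup cg = { "example" };
            struct accounted_object *obj = malloc(sizeof(*obj));

            if (!obj)
                    return 1;
            obj->charged_to = &cg;  /* charging is per object, not per cache */
            free(obj);
            return 0;
    }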

Thanks!

