Message-ID: <20180905214731.GA30226@tower.DHCP.thefacebook.com>
Date:   Wed, 5 Sep 2018 14:47:34 -0700
From:   Roman Gushchin <guro@...com>
To:     Shakeel Butt <shakeelb@...gle.com>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, <kernel-team@...com>,
        Rik van Riel <riel@...riel.com>, <jbacik@...com>,
        Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH v2] mm: slowly shrink slabs with a relatively small
 number of objects

On Wed, Sep 05, 2018 at 02:35:29PM -0700, Shakeel Butt wrote:
> On Wed, Sep 5, 2018 at 2:23 PM Roman Gushchin <guro@...com> wrote:
> >
> > On Wed, Sep 05, 2018 at 01:51:52PM -0700, Andrew Morton wrote:
> > > On Tue, 4 Sep 2018 15:47:07 -0700 Roman Gushchin <guro@...com> wrote:
> > >
> > > > Commit 9092c71bb724 ("mm: use sc->priority for slab shrink targets")
> > > > changed the way how the target slab pressure is calculated and
> > > > made it priority-based:
> > > >
> > > >     delta = freeable >> priority;
> > > >     delta *= 4;
> > > >     do_div(delta, shrinker->seeks);
> > > >
> > > > The problem is that at the default priority (which is 12), no pressure
> > > > is applied at all if the number of potentially reclaimable objects
> > > > is less than 4096 (1<<12).
> > > >
> > > > This causes the last objects in the slab caches of no-longer-used
> > > > cgroups to never be reclaimed, leaving dead cgroups around forever.
> > >
> > > But this problem pertains to all types of objects, not just the cgroup
> > > cache, yes?
> >
> > Well, of course, but there is a dramatic difference in size.
> >
> > Most of these objects take a few hundred bytes (or less),
> > while a memcg can take a few hundred kilobytes on a modern multi-CPU
> > machine, mostly due to its per-cpu stats and event counters.
> >
> 
> Besides the memcg itself, all of its kmem caches, most of them empty,
> are stuck in memory as well. For SLAB, even the memory overhead of an
> empty kmem cache is not negligible.

Right!

I mean, the main part of the problem is not these 4k (mostly vfs-cache-related)
objects themselves, but the objects which are referenced by these 4k objects.
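To make the math concrete, here is a quick user-space sketch of the delta
calculation quoted above (shrink_delta() and the sample freeable values are
illustrative, not kernel code; DEFAULT_SEEKS really is 2 in the kernel, and
plain division stands in for do_div()):

    #include <stdio.h>

    #define DEFAULT_SEEKS 2

    static unsigned long long shrink_delta(unsigned long long freeable,
                                           int priority, int seeks)
    {
        unsigned long long delta;

        delta = freeable >> priority;  /* 0 whenever freeable < (1 << priority) */
        delta *= 4;
        delta /= seeks;                /* stands in for do_div(delta, shrinker->seeks) */
        return delta;
    }

    int main(void)
    {
        /* At the default priority (12), anything below 4096 freeable
           objects produces zero scan pressure: */
        printf("%llu\n", shrink_delta(4000, 12, DEFAULT_SEEKS)); /* 0 */
        printf("%llu\n", shrink_delta(8192, 12, DEFAULT_SEEKS)); /* 4 */
        return 0;
    }

So a cgroup's last few thousand objects never generate any scan target,
no matter how many reclaim passes run at the default priority.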

Thanks!
