Date:   Tue, 22 Oct 2019 15:48:17 +0000
From:   Roman Gushchin <guro@...com>
To:     Michal Hocko <mhocko@...nel.org>
CC:     "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Johannes Weiner <hannes@...xchg.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Kernel Team <Kernel-team@...com>,
        "Shakeel Butt" <shakeelb@...gle.com>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        "Waiman Long" <longman@...hat.com>,
        Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH 00/16] The new slab memory controller

On Tue, Oct 22, 2019 at 03:28:00PM +0200, Michal Hocko wrote:
> On Tue 22-10-19 15:22:06, Michal Hocko wrote:
> > On Thu 17-10-19 17:28:04, Roman Gushchin wrote:
> > [...]
> > > Using a drgn* script I've got an estimation of slab utilization on
> > > a number of machines running different production workloads. In most
> > > cases it was between 45% and 65%, and the best number I've seen was
> > > around 85%. Turning kmem accounting off brings it to high 90s. Also
> > > it brings back 30-50% of slab memory. It means that the real price
> > > of the existing slab memory controller is way bigger than a pointer
> > > per page.
> > 
> > How much of the memory are we talking about here?
> 
> Just to be more specific. Your cover mentions several hundred MBs,
> but there is no scale to the overall charged memory. How much of that
> is the actual kmem-accounted memory?

As I wrote, on average it saves 30-45% of slab memory.
The smallest number I've seen was about 15%, the largest over 60%.

The amount of slab memory isn't a very stable metric in general: it heavily
depends on the workload pattern, memory pressure, uptime, etc.
In absolute numbers I've seen savings from ~60 MB for an empty VM to
more than 2 GB for some production workloads.
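
For reference, the general approach of the script can be sketched in a
few lines of drgn (this is not the actual script, just a rough
illustration; it assumes a SLUB kernel built with CONFIG_SLUB_DEBUG,
where each kmem_cache_node keeps a total_objects counter, and a NUMA
build where nr_node_ids is a real variable):

  # Run as "drgn slab_sketch.py"; the drgn CLI provides "prog".
  from drgn.helpers.linux.list import list_for_each_entry

  # Walk the global list of slab caches.
  for cache in list_for_each_entry('struct kmem_cache',
                                   prog['slab_caches'].address_of_(),
                                   'list'):
      total_objects = 0
      for nid in range(prog['nr_node_ids'].value_()):
          node = cache.node[nid]
          if node:
              total_objects += node.total_objects.counter.value_()
      # cache.size is the per-object slot size including metadata and
      # padding, so total_objects * size is the cache's capacity in
      # bytes. Computing the actual utilization additionally needs the
      # in-use object counts from the individual slab pages, which
      # this sketch omits.
      print(cache.name.string_().decode(),
            total_objects * cache.size.value_())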

Btw, please note that after a recent change from Vlastimil,
6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting"),
the slab counters include large allocations which are passed
directly to the page allocator. This makes the memory savings
smaller in percentage terms, but of course not in absolute numbers.
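
On a live system the affected counters are easy to eyeball from
/proc/meminfo, e.g. with a trivial snippet like this (nothing
version-specific here, just reading the standard fields):

  # Print the slab counters from /proc/meminfo; on kernels with
  # 6a486c0ad4dc applied these already include large allocations
  # that bypass the slab caches and go straight to the page allocator.
  with open('/proc/meminfo') as f:
      for line in f:
          if line.startswith(('Slab:', 'SReclaimable:', 'SUnreclaim:')):
              print(line, end='')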

> 
> > Also is there any pattern for specific caches that tend to utilize
> > much worse than others?

Caches which usually have many objects (e.g. inodes) initially
have better utilization, but as some of the objects get reclaimed,
the utilization drops. And if the cgroup is already dead, no one can
reuse these mostly empty slab pages, so it's pretty wasteful.
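
For the record, on cgroup v2 the number of such dying cgroups is
exposed via cgroup.stat; a trivial way to check (assuming cgroup2 is
mounted at /sys/fs/cgroup):

  # Count dying cgroups on a cgroup v2 hierarchy; these are the
  # cgroups whose mostly empty slab pages nobody can reuse.
  with open('/sys/fs/cgroup/cgroup.stat') as f:
      for line in f:
          if line.startswith('nr_dying_descendants'):
              print(line, end='')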

So I don't think the problem is specific to any cache, it's pretty general.
