Message-ID: <Y+/GxJcNykVQxcG+@P9FQF9L96D.corp.robot.car>
Date:   Fri, 17 Feb 2023 10:26:12 -0800
From:   Roman Gushchin <roman.gushchin@...ux.dev>
To:     Linux regressions mailing list <regressions@...ts.linux.dev>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Shakeel Butt <shakeelb@...gle.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Muchun Song <muchun.song@...ux.dev>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Waiman Long <longman@...hat.com>,
        Sven Luther <Sven.Luther@...driver.com>
Subject: Re: [PATCH RFC] ipc/mqueue: introduce msg cache

On Thu, Feb 16, 2023 at 01:29:59PM +0100, Linux regression tracking (Thorsten Leemhuis) wrote:
> Hi, this is your Linux kernel regression tracker.
> 
> On 20.12.22 19:48, Roman Gushchin wrote:
> > Sven Luther reported a regression in the posix message queues
> > performance caused by switching to the per-object tracking of
> > slab objects introduced by patch series ending with the
> > commit 10befea91b61 ("mm: memcg/slab: use a single set of kmem_caches for all
> > allocations").
> 
> Quick inquiry: what happened to below patch? It was supposed to fix a
> performance regression reported here:

Hi Thorsten!

I wouldn't call it simply a regression; things are a bit more complicated:
it was a switch to a different approach with different trade-offs,
which IMO makes more sense for the majority of real-world workloads.
In short: individual kernel memory allocations became somewhat slower
(but still fast), while we saved 40%+ of slab memory on typical systems
and reduced memory fragmentation.

The regression reported by Sven and my "fix" relate to one very specific
case: POSIX message queues. To my knowledge they are not widely used for
anything performance-sensitive, so it's quite a niche use case.
My "fix" was also hand-crafted for the benchmark provided by Sven, so it might
not work for a more generic case, and I don't think it can be easily generalized
without adding CPU or memory overhead.

On the other hand, I'm working on improving the speed of kernel memory allocations
in general (I posted early versions a few weeks ago). Hopefully that will mitigate
the problem for Sven as well, so we won't need these message-queue-specific
hacks.

Thanks!
