Message-ID: <Y9ACFZzn2Pse0rKG@hyeyoo>
Date:   Wed, 25 Jan 2023 01:06:45 +0900
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     Christoph Lameter <cl@...two.de>
Cc:     Jesper Dangaard Brouer <brouer@...hat.com>, netdev@...r.kernel.org,
        linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Joonsoo Kim <iamjoonsoo.kim@....com>, penberg@...nel.org,
        vbabka@...e.cz, Jakub Kicinski <kuba@...nel.org>,
        "David S. Miller" <davem@...emloft.net>, edumazet@...gle.com,
        pabeni@...hat.com
Subject: Re: [PATCH RFC] mm+net: allow to set kmem_cache create flag for
 SLAB_NEVER_MERGE

On Tue, Jan 17, 2023 at 03:54:34PM +0100, Christoph Lameter wrote:
> On Tue, 17 Jan 2023, Jesper Dangaard Brouer wrote:
> 
> > When running different network performance microbenchmarks, I started
> > to notice that performance was reduced (slightly) when machines had
> > longer uptimes. I believe the cause was 'skbuff_head_cache' got
> > aliased/merged into the general slub for 256 bytes sized objects (with
> > my kernel config, without CONFIG_HARDENED_USERCOPY).
> 
> Well that is a common effect that we see in multiple subsystems. This is
> due to general memory fragmentation. Depending on the prior load the
> performance could actually be better after some runtime if the caches are
> populated avoiding the page allocator etc.
> 
> The merging could actually be beneficial since there may be more partial
> slabs to allocate from and thus avoiding expensive calls to the page
> allocator.
> 
> I wish we had some effective way of memory defragmentation.

If general memory fragmentation is the actual cause of this problem,
it may be worsened by [1], due to the assumption that all slabs are
allocated at the same page order as s->oo when accounting for and
limiting the number of percpu slabs (rough sketch below).

[1] https://lore.kernel.org/linux-mm/76c63237-c489-b942-bdd9-5720042f52a9@suse.cz
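
To illustrate the concern (a rough sketch paraphrasing the accounting in
[1] rather than quoting the patch; names follow mm/slub.c):

	/*
	 * The percpu partial list is capped by converting an object budget
	 * into a slab count, assuming every slab holds the "optimal"
	 * oo_objects(s->oo) objects (the factor of two assumes roughly
	 * half of those slabs are full):
	 */
	nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
	slub_set_cpu_partial(s, nr_slabs);

Under fragmentation allocate_slab() can fall back to the minimum order
s->min, so each slab then holds fewer than oo_objects(s->oo) objects and
the cap above ends up keeping fewer free objects per CPU than intended.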

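For reference, the usage the RFC is after would look roughly like the
following (simplified: the real skb_init() in net/core/skbuff.c goes
through kmem_cache_create_usercopy(), and SLAB_NEVER_MERGE is the flag
proposed by this series, not an existing upstream flag):

	/* proposed flag from this RFC; other flags mirror the existing cache */
	skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
					      sizeof(struct sk_buff), 0,
					      SLAB_HWCACHE_ALIGN | SLAB_PANIC |
					      SLAB_NEVER_MERGE,
					      NULL);

With the flag set at creation time, the cache could no longer be aliased
into the generic 256-byte kmalloc cache, which is the merging Jesper
observed above.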