Message-ID: <7bf42407-7b2f-9824-b2fb-114fb88ace06@intel.com>
Date: Mon, 21 Aug 2023 15:55:37 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>, <netdev@...r.kernel.org>, <vbabka@...e.cz>
CC: Eric Dumazet <eric.dumazet@...il.com>, "David S. Miller" <davem@...emloft.net>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	<linux-mm@...ck.org>, Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...hsingularity.net>, Christoph Lameter <cl@...ux.com>,
	<roman.gushchin@...ux.dev>, <dsterba@...e.com>
Subject: Re: [PATCH net] net: use SLAB_NO_MERGE for kmem_cache skbuff_head_cache

From: Jesper Dangaard Brouer <hawk@...nel.org>
Date: Tue, 15 Aug 2023 17:17:36 +0200

> Since v6.5-rc1 the MM tree has been merged and contains the new flag
> SLAB_NO_MERGE from commit d0bf7d5759c1 ("mm/slab: introduce kmem_cache
> flag SLAB_NO_MERGE"), so now is the time to use this flag for
> networking, as proposed earlier; see the link below.
>
> The SKB (sk_buff) kmem_cache slab is critical for network performance.
> The network stack uses the kmem_cache_{alloc,free}_bulk APIs to gain
> performance by amortising the alloc/free cost.
>
> For the bulk APIs to perform efficiently, slab fragmentation needs to
> be low. Especially for the SLUB allocator, the efficiency of the bulk
> free API depends on objects belonging to the same slab (page).
>
> When running different network performance microbenchmarks, I started
> to notice that performance was (slightly) reduced when machines had
> longer uptimes. I believe the cause was that 'skbuff_head_cache' got
> aliased/merged into the general slub cache for 256-byte objects (with
> my kernel config, without CONFIG_HARDENED_USERCOPY).
>
> For the SKB kmem_cache, the network stack has various other reasons
> for not merging, but they vary depending on the kernel config (e.g.
> CONFIG_HARDENED_USERCOPY). We want to explicitly set SLAB_NO_MERGE
> for this kmem_cache to get the most out of the
> kmem_cache_{alloc,free}_bulk APIs.
>
> When CONFIG_SLUB_TINY is configured, the bulk APIs are essentially
> disabled. Thus, for this case drop the SLAB_NO_MERGE flag.
>
> Link: https://lore.kernel.org/all/167396280045.539803.7540459812377220500.stgit@firesoul/
> Signed-off-by: Jesper Dangaard Brouer <hawk@...nel.org>
> ---
>  net/core/skbuff.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index a298992060e6..92aee3e0376a 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -4750,12 +4750,23 @@ static void skb_extensions_init(void)
>  static void skb_extensions_init(void) {}
>  #endif
>
> +/* The SKB kmem_cache slab is critical for network performance. Never
> + * merge/alias the slab with similar sized objects. This avoids fragmentation
> + * that hurts performance of kmem_cache_{alloc,free}_bulk APIs.
> + */
> +#ifndef CONFIG_SLUB_TINY
> +#define FLAG_SKB_NO_MERGE	SLAB_NO_MERGE
> +#else /* CONFIG_SLUB_TINY - simple loop in kmem_cache_alloc_bulk */
> +#define FLAG_SKB_NO_MERGE	0
> +#endif
> +
>  void __init skb_init(void)
>  {
>  	skbuff_cache = kmem_cache_create_usercopy("skbuff_head_cache",
>  					      sizeof(struct sk_buff),
>  					      0,
> -					      SLAB_HWCACHE_ALIGN|SLAB_PANIC,
> +					      SLAB_HWCACHE_ALIGN|SLAB_PANIC|
> +						FLAG_SKB_NO_MERGE,

That alignment tho xD

>  					      offsetof(struct sk_buff, cb),
>  					      sizeof_field(struct sk_buff, cb),
>  					      NULL);

Thanks,
Olek
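[For context, a minimal sketch of the bulk alloc/free pattern the patch
description refers to. kmem_cache_alloc_bulk() and kmem_cache_free_bulk()
are the real slab APIs; BULK_SIZE, demo_bulk_cycle() and the error
handling here are illustrative assumptions, not the actual call sites
(those live in net/core/skbuff.c, e.g. the NAPI skb cache helpers).]

	#include <linux/slab.h>

	#define BULK_SIZE 16	/* hypothetical batch size */

	static int demo_bulk_cycle(struct kmem_cache *cache)
	{
		void *objs[BULK_SIZE];
		int n;

		/*
		 * One call fills the whole array, amortising the slowpath
		 * cost over BULK_SIZE objects; returns 0 on failure.
		 */
		n = kmem_cache_alloc_bulk(cache, GFP_ATOMIC, BULK_SIZE, objs);
		if (!n)
			return -ENOMEM;

		/* ... use objs[0..n-1] ... */

		/*
		 * Freeing in one call lets SLUB batch objects that belong to
		 * the same slab page, which is why low fragmentation (an
		 * unmerged skbuff_head_cache) makes the bulk free cheaper.
		 */
		kmem_cache_free_bulk(cache, n, objs);
		return 0;
	}

[On SLUB-based configs, one quick way to check whether a cache got merged
is, as far as I recall, /sys/kernel/slab/: a merged cache appears there as
a symlink to the shared alias cache rather than as its own directory, and
`slabinfo -a` lists the aliases.]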