[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CANn89i+ppTAPYwQ2mH5cZtcMqanFU8hXzD4szdygrjOBewPb+Q@mail.gmail.com>
Date: Wed, 13 Jan 2021 05:46:05 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Alexander Lobakin <alobakin@...me>,
Edward Cree <ecree.xilinx@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Edward Cree <ecree@...arflare.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Willem de Bruijn <willemb@...gle.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Steffen Klassert <steffen.klassert@...unet.com>,
Guillaume Nault <gnault@...hat.com>,
Yadu Kishore <kyk.segfault@...il.com>,
Al Viro <viro@...iv.linux.org.uk>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing
On Wed, Jan 13, 2021 at 2:02 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Tue, 12 Jan 2021 13:23:16 +0100 Eric Dumazet wrote:
> > On Tue, Jan 12, 2021 at 12:08 PM Alexander Lobakin <alobakin@...me> wrote:
> > >
> > > From: Edward Cree <ecree.xilinx@...il.com>
> > > Date: Tue, 12 Jan 2021 09:54:04 +0000
> > >
> > > > Without wishing to weigh in on whether this caching is a good idea...
> > >
> > > Well, we already have a cache to bulk flush "consumed" skbs, although
> > > kmem_cache_free() is generally lighter than kmem_cache_alloc(), and
> > > a page frag cache to allocate skb->head that is also bulking the
> > > operations, since it contains a (compound) page with the size of
> > > min(SZ_32K, PAGE_SIZE).
> > > If they wouldn't give any visible boosts, I think they wouldn't hit
> > > mainline.
> > >
> > > > Wouldn't it be simpler, rather than having two separate "alloc" and "flush"
> > > > caches, to have a single larger cache, such that whenever it becomes full
> > > > we bulk flush the top half, and when it's empty we bulk alloc the bottom
> > > > half? That should mean fewer branches, fewer instructions etc. than
> > > > having to decide which cache to act upon every time.
> > >
> > > I though about a unified cache, but couldn't decide whether to flush
> > > or to allocate heads and how much to process. Your suggestion answers
> > > these questions and generally seems great. I'll try that one, thanks!
> >
> > The thing is : kmalloc() is supposed to have batches already, and nice
> > per-cpu caches.
> >
> > This looks like an mm issue, are we sure we want to get over it ?
> >
> > I would like a full analysis of why SLAB/SLUB does not work well for
> > your test workload.
>
> +1, it does feel like we're getting into mm territory
I read the existing code, and with Edward Cree idea of reusing the
existing cache (storage of pointers),
ths now all makes sense, since there will be not much added code (and
new storage of 64 pointers)
The remaining issue is to make sure KASAN will still work, we need
this to detect old and new bugs.
Thanks !
Powered by blists - more mailing lists