Message-ID: <20251013162408.76200e17@kernel.org>
Date: Mon, 13 Oct 2025 16:24:08 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: <bpf@...r.kernel.org>, <ast@...nel.org>, <daniel@...earbox.net>,
<hawk@...nel.org>, <ilias.apalodimas@...aro.org>, <toke@...hat.com>,
<lorenzo@...nel.org>, <netdev@...r.kernel.org>,
<magnus.karlsson@...el.com>, <andrii@...nel.org>, <stfomichev@...il.com>,
<aleksander.lobakin@...el.com>
Subject: Re: [PATCH bpf 2/2] veth: update mem type in xdp_buff
On Wed, 8 Oct 2025 12:37:22 +0200 Maciej Fijalkowski wrote:
> > > I guess we're slipping into a philosophical discussion, but I'd say
> > > that the problem is that rxq stores part of what is de facto xdp_buff
> > > state. It is evacuated into the xdp_frame when the frame is constructed,
> > > as the packet is detached from driver context. We need to reconstitute it
> > > when we convert the frame (skb, or anything else) back into an xdp_buff.
> >
> > So let us have a mem type per xdp_buff then. It feels clunky anyway to
> > change it on the whole rxq on a per-xdp_buff basis. Maybe then everyone
> > will be happy?
>
> ...however, would we be fine with taking a potential performance hit?
I'd think the perf hit would be a blocker; supposedly it's in rxq for
a reason. We are already updating it per packet in the few places that
are coded up correctly (cpumap; rough sketch below), so while it is
indeed kinda weird, we're not making it any worse?
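For context, the per-packet pattern I mean in cpumap looks roughly like
the sketch below. This is just an illustration, not code from the tree:
the wrapper function name is made up, and the exact field naming
(xdpf->mem_type vs. the older xdpf->mem.type) depends on kernel version.

	#include <linux/bpf.h>
	#include <net/xdp.h>

	static void xdp_run_prog_over_frames(struct bpf_prog *prog,
					     struct xdp_frame **frames, int n)
	{
		struct xdp_rxq_info rxq = {}; /* stack-local rxq, not the driver's */
		struct xdp_buff xdp;
		int i;

		xdp.rxq = &rxq;

		for (i = 0; i < n; i++) {
			struct xdp_frame *xdpf = frames[i];

			/* Reconstitute the state that was "evacuated" into the
			 * frame: the memory model travels with the xdp_frame
			 * and is copied back into the (local) rxq for each
			 * packet before the frame becomes an xdp_buff again.
			 */
			rxq.dev = xdpf->dev_rx;
			rxq.mem.type = xdpf->mem_type;

			xdp_convert_frame_to_buff(xdpf, &xdp);

			/* ... run prog on &xdp, then
			 * xdp_update_frame_from_buff() ...
			 */
		}
	}

So a per-xdp_buff mem type would mostly be moving that same per-packet
write from the rxq into the buff itself.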
Maybe others disagree. I don't feel super strongly. My gut feeling is
that what I drafted is the best we can do in a fix.
Sorry for the delay, I was on PTO.