Message-ID: <183e6a98-032d-2184-6962-202210bfe4ce@redhat.com>
Date: Fri, 24 May 2019 21:06:17 +0800
From: Jason Wang <jasowang@...hat.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Saeed Mahameed <saeedm@...lanox.com>,
"stephen@...workplumber.org" <stephen@...workplumber.org>,
"jiri@...nulli.us" <jiri@...nulli.us>,
"sthemmin@...rosoft.com" <sthemmin@...rosoft.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH v3 2/2] net: core: support XDP generic on stacked devices.
On 2019/5/24 6:07 PM, Jesper Dangaard Brouer wrote:
> On Fri, 24 May 2019 12:17:27 +0800
> Jason Wang <jasowang@...hat.com> wrote:
>
>>> Maybe this is acceptable, but it should be documented, as the current
>>> assumption dictates: XDP program runs on the core where the XDP
>>> frame/SKB was first seen.
>>
>> At least for TUN, this is not true. XDP frames are built by vhost_net
>> and passed to TUN. There's no guarantee that the vhost_net kthread
>> won't move to another core.
> This sounds a little scary, as we depend on per-CPU variables (e.g.
> bpf_redirect_info). Can the vhost_net kthread move between CPUs
> within/during the NAPI-poll?
RX on TUN usually does not go through NAPI. What we do is:
1) The vhost kthread prepares an array of XDP frames and passes them to
TUN through sendmsg.
2) TUN disables bh and runs XDP for each frame, then enables bh.
So the kthread can move to another CPU before 2), but we guarantee that
the per-CPU dependencies of XDP hold within 2).
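To make the invariant in 2) concrete, a simplified kernel-style sketch of the pattern looks like the following (function names here are hypothetical placeholders, not the actual drivers/net/tun.c code): all per-CPU XDP state is set up and consumed strictly inside the local_bh_disable()/local_bh_enable() pair, so a kthread migration before that section is harmless.

```c
/* Simplified sketch of the batching pattern described above.
 * Names are illustrative; this is not the actual tun.c code.
 */
static void tun_run_xdp_batch(struct xdp_buff **frames, int n)
{
	int i;

	/* The vhost kthread may have migrated CPUs before this point;
	 * that is fine, because no per-CPU XDP state has been touched yet.
	 */
	local_bh_disable();	/* no softirq preemption / migration from here */
	rcu_read_lock();

	for (i = 0; i < n; i++) {
		/* Per-CPU state (e.g. bpf_redirect_info) is written and
		 * read entirely within this bh-disabled section.
		 */
		run_xdp_prog_on_frame(frames[i]);	/* hypothetical helper */
	}

	xdp_do_flush();		/* flush per-CPU redirect state */
	rcu_read_unlock();
	local_bh_enable();	/* migration is safe again after this */
}
```

The point of the sketch is only ordering: step 1) (building the frame array in the vhost kthread) can happen on any CPU, while everything that relies on per-CPU variables happens between the bh-disable/enable pair in step 2).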
TUN does have a NAPI mode, but it is mainly used for hardening, and XDP
is not implemented on that path (this could be fixed in the future).
Thanks