Message-ID: <59cd22da-7802-f19c-728d-f5f6e9e53143@lab.ntt.co.jp>
Date: Tue, 8 Jan 2019 16:25:27 +0900
From: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
To: William Tu <u9012063@...il.com>
Cc: Toshiaki Makita <toshiaki.makita1@...il.com>,
Björn Töpel <bjorn.topel@...il.com>,
Magnus Karlsson <magnus.karlsson@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Yi-Hung Wei <yihung.wei@...il.com>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>
Subject: Re: [PATCH bpf-next RFCv3 2/6] veth: support AF_XDP TX copy-mode.
On 2019/01/06 0:55, William Tu wrote:
...
>>> +	/* put into rq */
>>> +	skb = veth_xdp_rcv_one(rq, xdpf, &inner_xdp_xmit);
>>> +	if (!skb) {
>>> +		/* Peer side has XDP program attached */
>>> +		if (inner_xdp_xmit & VETH_XDP_TX) {
>>> +			/* Not supported */
>>> +			pr_warn("veth: peer XDP_TX not supported\n");
>>
>> As this can be triggered by users, we need to rate-limit it at least.
> How do I rate-limit here? Can I slow down the NAPI poll or reduce the budget?
net_ratelimit()
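Something like this, using the context from your quoted hunk above
(net_ratelimit() is the standard kernel helper for throttling printk-style
messages; it returns nonzero when printing is allowed):

		/* Peer side has XDP program attached */
		if (inner_xdp_xmit & VETH_XDP_TX) {
			/* Not supported; this path is user-triggerable,
			 * so avoid flooding the log.
			 */
			if (net_ratelimit())
				pr_warn("veth: peer XDP_TX not supported\n");
		}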
>> But since this is envisioned to be used in OVS, XDP_TX would be a very
>> important feature to me. I expect XDP programs in containers to process
>> packets and send them back to OVS.
>
> It's a little tricky here: the receiving veth pulls packets sent from
> its peer side, but with XDP_TX it has to put the packet back onto its
> peer side to be received again. But I can see the use case you mentioned.
> Let me think about how to implement it.
You have already implemented XDP_REDIRECT. XDP_TX is essentially a special
case of XDP_REDIRECT, so I wonder why you think XDP_TX is especially
difficult.
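A rough sketch of what I mean (veth_xdp_tx() is just a name I made up here;
convert_to_xdp_frame() is the existing conversion helper and veth_xdp_xmit()
is the ndo_xdp_xmit path that XDP_REDIRECT already goes through):

	/* Sketch: treat XDP_TX as a self-targeted XDP_REDIRECT.
	 * Convert the xdp_buff into an xdp_frame and push it through
	 * the same transmit path XDP_REDIRECT uses, with the receiving
	 * device itself as the target.
	 */
	static int veth_xdp_tx(struct net_device *dev, struct xdp_buff *xdp)
	{
		struct xdp_frame *frame = convert_to_xdp_frame(xdp);

		if (unlikely(!frame))
			return -EOVERFLOW;

		/* one frame, no flags */
		return veth_xdp_xmit(dev, 1, &frame, 0);
	}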
--
Toshiaki Makita