Message-ID: <3a3217f2-89d0-fc1e-bca8-953cf83f5e57@lab.ntt.co.jp>
Date: Fri, 24 May 2019 13:52:46 +0900
From: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
To: Jason Wang <jasowang@...hat.com>,
Toshiaki Makita <toshiaki.makita1@...il.com>,
Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Toke Høiland-Jørgensen <toke@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
netdev@...r.kernel.org, xdp-newbies@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next 3/3] veth: Support bulk XDP_TX
On 2019/05/24 12:54, Jason Wang wrote:
> On 2019/5/24 11:28 AM, Toshiaki Makita wrote:
>> On 2019/05/24 12:13, Jason Wang wrote:
>>> On 2019/5/23 9:51 PM, Toshiaki Makita wrote:
>>>> On 19/05/23 (Thu) 22:29:27, Jesper Dangaard Brouer wrote:
>>>>> On Thu, 23 May 2019 20:35:50 +0900
>>>>> Toshiaki Makita <makita.toshiaki@....ntt.co.jp> wrote:
>>>>>
>>>>>> On 2019/05/23 20:25, Toke Høiland-Jørgensen wrote:
>>>>>>> Toshiaki Makita <makita.toshiaki@....ntt.co.jp> writes:
>>>>>>>> This improves XDP_TX performance by about 8%.
>>>>>>>>
>>>>>>>> Here are single core XDP_TX test results. CPU consumptions are
>>>>>>>> taken from "perf report --no-child".
>>>>>>>>
>>>>>>>> - Before:
>>>>>>>>
>>>>>>>> 7.26 Mpps
>>>>>>>>
>>>>>>>> _raw_spin_lock 7.83%
>>>>>>>> veth_xdp_xmit 12.23%
>>>>>>>>
>>>>>>>> - After:
>>>>>>>>
>>>>>>>> 7.84 Mpps
>>>>>>>>
>>>>>>>> _raw_spin_lock 1.17%
>>>>>>>> veth_xdp_xmit 6.45%
>>>>>>>>
>>>>>>>> Signed-off-by: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
>>>>>>>> ---
>>>>>>>> drivers/net/veth.c | 26 +++++++++++++++++++++++++-
>>>>>>>> 1 file changed, 25 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
>>>>>>>> index 52110e5..4edc75f 100644
>>>>>>>> --- a/drivers/net/veth.c
>>>>>>>> +++ b/drivers/net/veth.c
>>>>>>>> @@ -442,6 +442,23 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
>>>>>>>>          return ret;
>>>>>>>>  }
>>>>>>>> +static void veth_xdp_flush_bq(struct net_device *dev)
>>>>>>>> +{
>>>>>>>> +        struct xdp_tx_bulk_queue *bq = this_cpu_ptr(&xdp_tx_bq);
>>>>>>>> +        int sent, i, err = 0;
>>>>>>>> +
>>>>>>>> +        sent = veth_xdp_xmit(dev, bq->count, bq->q, 0);
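[ For reference: struct xdp_tx_bulk_queue and the per-cpu xdp_tx_bq are
defined outside the quoted hunk. Judging only from the usage above
(this_cpu_ptr, bq->q, bq->count), the shape is presumably something like
the following sketch; the size constant is an assumption, not taken from
the patch. ]

#define XDP_TX_BULK_SIZE 32     /* assumed; the real constant is not quoted */

struct xdp_tx_bulk_queue {
        struct xdp_frame *q[XDP_TX_BULK_SIZE];  /* frames queued for XDP_TX */
        unsigned int count;                     /* valid entries in q[] */
};

static DEFINE_PER_CPU(struct xdp_tx_bulk_queue, xdp_tx_bq);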
>>>>>>> Wait, veth_xdp_xmit() is just putting frames on a pointer ring. So
>>>>>>> you're introducing an additional per-cpu bulk queue, only to avoid lock
>>>>>>> contention around the existing pointer ring. But the pointer ring is
>>>>>>> per-rq, so if you have lock contention, this means you must have
>>>>>>> multiple CPUs servicing the same rq, no?
>>>>>> Yes, it's possible. Not recommended though.
>>>>>>
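[ For readers following along: the _raw_spin_lock samples in the numbers
above come from the producer side of the per-rq pointer ring. Simplified,
the enqueue path has the shape below, so bulking amortizes one lock round
trip over n frames instead of paying it per small burst. This is a sketch,
not the exact driver code; the function name is made up, the helpers are
as in drivers/net/veth.c. ]

static int veth_xdp_enqueue_sketch(struct veth_rq *rq,
                                   struct xdp_frame **frames, int n)
{
        int i, drops = 0;

        /* one lock round trip covers the whole batch */
        spin_lock(&rq->xdp_ring.producer_lock);
        for (i = 0; i < n; i++) {
                void *ptr = veth_xdp_to_ptr(frames[i]);

                if (unlikely(__ptr_ring_produce(&rq->xdp_ring, ptr))) {
                        xdp_return_frame_rx_napi(frames[i]);    /* ring full */
                        drops++;
                }
        }
        spin_unlock(&rq->xdp_ring.producer_lock);

        return n - drops;
}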
>>>>> I think the general per-cpu TX bulk queue is overkill. There is a loop
>>>>> over packets in veth_xdp_rcv(struct veth_rq *rq, budget, *status), and
>>>>> the caller veth_poll() will call veth_xdp_flush(rq->dev).
>>>>>
>>>>> Why can't you store this "temp" bulk array in struct veth_rq ?
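[ A minimal sketch of that alternative, reusing the assumed bulk size from
above; the new field names are illustrative, not from any posted patch. ]

struct veth_rq {
        struct napi_struct      xdp_napi;
        struct ptr_ring         xdp_ring;
        /* other existing fields omitted */

        /* illustrative: per-rq bulk storage instead of the shared per-cpu one */
        struct xdp_frame        *tx_bulk[XDP_TX_BULK_SIZE];
        unsigned int            tx_bulk_count;
};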
>>>> Of course I can. But I thought tun has the same problem, and we can
>>>> decrease the memory footprint by sharing the same storage between
>>>> devices.
>>>
>>> For TUN and for its fast path where vhost passes a bulk of XDP frames
>>> (through msg_control) to us, we probably just need a temporary bulk
>>> array in tun_xdp_one() instead of a global one. I can post a patch, or
>>> maybe you can if you're interested in this.
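[ Illustrative sketch of that idea only; the real tun_xdp_one() signature
differs, and tun_xdp_flush_bulk() is a hypothetical helper. The point is
that the batch vhost hands over via msg_control is bounded, so the bulk
array can live on the caller's stack rather than in per-cpu storage. ]

static void tun_xdp_batch_sketch(struct tun_struct *tun,
                                 struct xdp_buff **xdp, int n)
{
        struct xdp_frame *bulk[XDP_TX_BULK_SIZE];       /* temporary, on stack */
        int i, count = 0;

        for (i = 0; i < n; i++) {
                struct xdp_frame *frame = convert_to_xdp_frame(xdp[i]);

                if (unlikely(!frame))
                        continue;
                bulk[count++] = frame;
                if (count == XDP_TX_BULK_SIZE) {
                        tun_xdp_flush_bulk(tun, bulk, count);   /* hypothetical */
                        count = 0;
                }
        }
        if (count)
                tun_xdp_flush_bulk(tun, bulk, count);           /* hypothetical */
}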
>> Of course you/I can. What I'm concerned about is that it could waste
>> cache lines when softirq runs the veth napi handler and then the tun
>> napi handler.
>>
>
> Well, technically the bulk queue passed to TUN could be reused. I admit
> it may save cachelines in the ideal case, but I wonder how much we could
> gain on a real workload.
I see the veth_rq ptr_ring suffering from cacheline misses, which makes me
conservative about adding more buffers for xdp_frames.
I'll wait for some more feedback from others.
> (Note TUN doesn't use a napi handler to do XDP; it has a NAPI mode, but
> that was mainly used for hardening and XDP was not implemented there.
> Maybe we should fix this.)
Ah, that's true. Sorry for the confusion.
>
> Thanks
>
>
>>> Thanks
>>>
>>>
>>>> Or, if other devices want to reduce their queue counts so that XDP can
>>>> be used on many-cpu servers, and thus need to introduce locks, we can
>>>> use this storage for that case as well.
>>>>
>>>> Still, do you prefer a veth-specific solution?
>>>>
>>>>> You could even alloc/create it on the stack of veth_poll() and send it
>>>>> along via a pointer to veth_xdp_rcv().
>>>>>
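[ That on-stack variant would look roughly like the following; the extra
veth_xdp_rcv() parameter and a flush helper taking a queue pointer are
assumptions about how a respin could be shaped, and the real veth_poll()
also handles napi completion, which is omitted here. ]

static int veth_poll_sketch(struct napi_struct *napi, int budget)
{
        struct veth_rq *rq = container_of(napi, struct veth_rq, xdp_napi);
        struct xdp_tx_bulk_queue bq;    /* lives on this poll's stack */
        int done;

        bq.count = 0;

        /* veth_xdp_rcv() appends XDP_TX frames to bq instead of per-cpu state */
        done = veth_xdp_rcv(rq, budget, &bq);

        /* drain whatever accumulated, once per poll */
        veth_xdp_flush_bq(rq->dev, &bq);

        return done;
}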
>>>> Toshiaki Makita
>>>
>
>
--
Toshiaki Makita