Message-ID: <CAOwmpL3jqQRANpKLSEOsZ8MpNoPC8SAR=fUVTPwOuE2FRxop5A@mail.gmail.com>
Date: Mon, 23 Jan 2017 11:29:07 -0800
From: Xiangning Yu <yuxiangning@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev@...r.kernel.org
Subject: Re: Question about veth_xmit()
On Mon, Jan 23, 2017 at 11:07 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Mon, 2017-01-23 at 10:46 -0800, Xiangning Yu wrote:
>> Hi netdev folks,
>>
>> It looks like we call dev_forward_skb() in veth_xmit(), which calls
>> netif_rx() eventually.
>>
>> netif_rx() enqueues the skb to the per-CPU RX backlog before the
>> actual processing takes place, so a TX skb has to wait for any
>> unrelated RX skbs queued ahead of it to be processed. And this will
>> happen twice for a single ping, because veth devices always work as a
>> pair?
>>
>> IMHO this might lead to latency issues under certain workloads.
>> Could we change the call to dev_forward_skb() to something like this?
>>
>> if (likely(__dev_forward_skb(rcv, skb) == NET_RX_SUCCESS)) {
>>         local_bh_disable();
>>         netif_receive_skb(skb);
>>         local_bh_enable();
>> }
>>
>> Could you please shed some light on this change? And please feel free
>> to correct me if my understanding is wrong.
>
> How would veth have different latency requirements than the loopback
> device?
>
The traffic from those veth devices will reach the external network, and
normally it is RPC-type traffic, which is latency-sensitive.
> Calling netif_receive_skb() is considered too dangerous here (or from
> any ndo_start_xmit()) because of possible kernel stack exhaustion.
>
I agree, stack space is a concern, especially if the traffic is looped
back into another namespace, where netif_receive_skb() on the peer can
re-enter another device's transmit path and deepen the kernel stack on
every hop.
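
For reference, here is a rough sketch of where the proposed lines would
sit in veth_xmit(). The surrounding function body is paraphrased from my
reading of the v4.9-era drivers/net/veth.c (the pcpu_vstats accounting is
as I recall it), so please treat it as illustrative, not a tested patch:

static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct veth_priv *priv = netdev_priv(dev);
	struct net_device *rcv;
	int length = skb->len;	/* save now; skb is gone after delivery */

	rcu_read_lock();
	rcv = rcu_dereference(priv->peer);
	if (unlikely(!rcv)) {
		kfree_skb(skb);
		goto drop;
	}

	/* __dev_forward_skb() does the same scrubbing and MTU checks as
	 * dev_forward_skb(), but leaves the actual delivery to the caller.
	 */
	if (likely(__dev_forward_skb(rcv, skb) == NET_RX_SUCCESS)) {
		struct pcpu_vstats *stats = this_cpu_ptr(dev->vstats);

		/* Deliver inline on this call stack instead of deferring
		 * to the per-CPU backlog via netif_rx().  This is exactly
		 * the part that risks stack exhaustion when veth pairs
		 * are stacked, as discussed above.
		 */
		local_bh_disable();
		netif_receive_skb(skb);
		local_bh_enable();

		u64_stats_update_begin(&stats->syncp);
		stats->bytes += length;
		stats->packets++;
		u64_stats_update_end(&stats->syncp);
	} else {
drop:
		atomic64_inc(&priv->dropped);
	}
	rcu_read_unlock();
	return NETDEV_TX_OK;
}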
Thanks,
- Xiangning