Message-ID: <1485209145.16328.214.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Mon, 23 Jan 2017 14:05:45 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Xiangning Yu <yuxiangning@...il.com>
Cc: Cong Wang <xiyou.wangcong@...il.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: Question about veth_xmit()
On Mon, 2017-01-23 at 13:46 -0800, Xiangning Yu wrote:
> On Mon, Jan 23, 2017 at 12:56 PM, Cong Wang <xiyou.wangcong@...il.com> wrote:
> > On Mon, Jan 23, 2017 at 10:46 AM, Xiangning Yu <yuxiangning@...il.com> wrote:
> >> Hi netdev folks,
> >>
> >> It looks like we call dev_forward_skb() in veth_xmit(), which calls
> >> netif_rx() eventually.
> >>
> >> netif_rx() will enqueue the skb to the per-CPU RX backlog before the
> >> actual processing takes place. So this actually means a TX skb has to
> >> wait for some unrelated RX skbs to finish. And this will happen twice
> >> for a single ping, because veth devices always work as a pair?
> >
> > For me it is more about the completeness of the network stack of each
> > netns. The /proc net.core.netdev_max_backlog etc. are per netns, which
> > means each netns, as an independent network stack, should respect it
> > too.
> >
> > Since you care about latency, why not tune net.core.dev_weight for your
> > own netns?
>
> I haven't tried that yet, thank you for the hint! Though normally one
> of the veth devices will be in the global namespace.
Well, the per-CPU backlogs are not per net ns, but per CPU.
So Cong's suggestion is not going to work.
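
For reference, the path being discussed looks roughly like this. This is a
simplified paraphrase of drivers/net/veth.c and net/core/dev.c from around
that era, not verbatim kernel source, and details vary by version:

	/* drivers/net/veth.c -- simplified sketch */
	static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		struct veth_priv *priv = netdev_priv(dev);
		struct net_device *rcv = rcu_dereference(priv->peer);

		/* The TX skb is handed straight to the peer device's RX path. */
		if (likely(dev_forward_skb(rcv, skb) == NET_RX_SUCCESS)) {
			/* update tx/rx stats */
		} else {
			atomic64_inc(&priv->dropped);
		}
		return NETDEV_TX_OK;
	}

	/* net/core/dev.c -- simplified sketch */
	int dev_forward_skb(struct net_device *dev, struct sk_buff *skb)
	{
		/* scrub the skb for the new namespace, then feed it to the
		 * normal receive path */
		return __dev_forward_skb(dev, skb) ?: netif_rx_internal(skb);
	}

	static int netif_rx_internal(struct sk_buff *skb)
	{
		unsigned int qtail;

		/* RPS handling and put_cpu() elided.  The skb lands on the
		 * softnet backlog of the *current CPU* (per-CPU softnet_data,
		 * shared by every net namespace), behind whatever packets are
		 * already queued there, and is processed later from
		 * NET_RX_SOFTIRQ. */
		return enqueue_to_backlog(skb, get_cpu(), &qtail);
	}

So the forwarded TX skb shares a queue with unrelated RX traffic on the same
CPU, regardless of which namespace either veth endpoint lives in.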