Message-ID: <20161117144248.23500001@redhat.com>
Date: Thu, 17 Nov 2016 14:42:48 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Rick Jones <rick.jones2@....com>, netdev@...r.kernel.org,
brouer@...hat.com
Subject: Re: Netperf UDP issue with connected sockets
On Thu, 17 Nov 2016 05:20:50 -0800
Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Thu, 2016-11-17 at 09:16 +0100, Jesper Dangaard Brouer wrote:
>
> >
> > I noticed there is a Send-Q, and the perf-top2 is _raw_spin_lock, which
> > looks like it comes from __dev_queue_xmit(), but we know from
> > experience that this stall is actually caused by writing the
> > tailptr/doorbell in the HW. Thus, this could benefit a lot from
> > bulk/xmit_more into the qdisc layer.
>
> The Send-Q is there because of TX-completions being delayed a bit,
> because of IRQ mitigation.
>
> (ethtool -c eth0)
>
> It happens even if you do not have a qdisc in the first place.
>
> And we do have xmit_more in the qdisc layer already.
I can see that the qdisc layer does not activate xmit_more in this case.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer
$ ethtool -c mlx5p4
Coalesce parameters for mlx5p4:
Adaptive RX: on TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
rx-usecs: 3
rx-frames: 32
rx-usecs-irq: 0
rx-frames-irq: 0
tx-usecs: 16
tx-frames: 32
tx-usecs-irq: 0
tx-frames-irq: 0
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0