Message-ID: <20141006161235.010033f5@redhat.com>
Date: Mon, 6 Oct 2014 16:12:35 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: David Miller <davem@...emloft.net>
Cc: eric.dumazet@...il.com, netdev@...r.kernel.org,
therbert@...gle.com, hannes@...essinduktion.org, fw@...len.de,
dborkman@...hat.com, jhs@...atatu.com, alexander.duyck@...il.com,
john.r.fastabend@...el.com, dave.taht@...il.com, toke@...e.dk,
brouer@...hat.com
Subject: Re: [PATCH net-next] qdisc: validate skb without holding lock
On Fri, 03 Oct 2014 15:36:45 -0700 (PDT)
David Miller <davem@...emloft.net> wrote:
> From: Eric Dumazet <eric.dumazet@...il.com>
> Date: Fri, 03 Oct 2014 15:31:07 -0700
>
> > From: Eric Dumazet <edumazet@...gle.com>
> >
> > Validation of skb can be pretty expensive :
> >
> > GSO segmentation and/or checksum computations.
> >
> > We can do this without holding qdisc lock, so that other cpus
> > can queue additional packets.
> >
[...]
> >
> > Turning TSO on or off had no effect on throughput, only few more cpu
> > cycles. Lock contention on qdisc lock disappeared.
This is good work! Lock contention significantly reduced!
My 10G tests, just 2x netperf TCP_STREAM flows over 10GbE, show:

With GSO=off TSO=off, _raw_spin_lock is now only at perf top#13 with
1.44% (80% of it from qdisc calls: 60% from __dev_queue_xmit and 20%
from sch_direct_xmit). Before, with qdisc bulking alone, it was 2.66%.

The "show-off" case is GSO=on TSO=off, where _raw_spin_lock is now
only at perf top#26 with 0.85%, and only 54.74% of that comes from
qdisc calls (52.07% via sch_direct_xmit and 2.67% via
__dev_queue_xmit).

This is a significant improvement to the kernel's xmit layer;
I'm very happy!!! :-))) Thanks everyone!
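
BTW, for anyone digging this out of the archives later: the pattern
here is simply "move the expensive per-packet work out of the
critical section". Below is a toy user-space sketch of that shape.
All names and structure are mine, for illustration only; this is not
Eric's actual patch, which reworks the real validate_xmit_skb() /
sch_direct_xmit() paths in the kernel.

/* Toy model: do the expensive per-packet work (GSO segmentation,
 * checksum computation -- here just validate()) *before* taking the
 * queue lock, so other CPUs can keep enqueueing meanwhile. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t qdisc_lock = PTHREAD_MUTEX_INITIALIZER;

struct pkt { int len; int validated; };

/* Stand-in for skb validation: expensive, but it only touches this
 * one packet, so it does not need the queue lock at all. */
static void validate(struct pkt *p)
{
	p->validated = 1;
}

static void enqueue_locked(struct pkt *p)
{
	/* Stand-in for the actual (cheap) queue manipulation. */
	printf("queued %d bytes (validated=%d)\n", p->len, p->validated);
}

/* Old shape: validation inside the lock, so every CPU serializes
 * behind the expensive work. */
static void xmit_old(struct pkt *p)
{
	pthread_mutex_lock(&qdisc_lock);
	validate(p);		/* expensive work with lock held */
	enqueue_locked(p);
	pthread_mutex_unlock(&qdisc_lock);
}

/* New shape: validate first, then lock only around the short queue
 * manipulation. Lock hold time shrinks to the cheap part. */
static void xmit_new(struct pkt *p)
{
	validate(p);		/* expensive work, no lock held */
	pthread_mutex_lock(&qdisc_lock);
	enqueue_locked(p);
	pthread_mutex_unlock(&qdisc_lock);
}

int main(void)
{
	struct pkt p = { .len = 1514, .validated = 0 };
	xmit_old(&p);
	xmit_new(&p);
	return 0;
}

The win is exactly what the perf numbers above show: the lock hold
time shrinks to the cheap queue manipulation, so other CPUs stall
far less on the qdisc lock.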
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer