Message-ID: <1414563627.631.75.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Tue, 28 Oct 2014 23:20:27 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: David Miller <davem@...emloft.net>, netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] net: introduce napi_schedule_irqoff()
On Tue, 2014-10-28 at 22:13 -0700, Alexei Starovoitov wrote:
> tried 50 parallel netperf -t TCP_RR over ixgbe
> and the top entries in perf top were tcp stack bits, qdisc locks and netperf itself.
> What do you see?
You are kidding, right?
If you save 30 nsec (2 * 15 nsec) per transaction, and the RTT is about 20
usec, that's a 0.15 % gain. Not bad for a trivial patch.
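As a rough back-of-the-envelope check (assuming one request/response per
RTT, which is what TCP_RR measures):

    2 * 15 nsec = 30 nsec saved per transaction
    30 nsec / 20,000 nsec (20 usec RTT) = 0.0015, i.e. about 0.15 %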
Why are you using 50 parallel netperf instances instead of a single
netperf? I mentioned latency impact, not overall throughput.
Do you believe typical servers in data centers only send and receive bulk
packets, with no interrupts, and one CPU busy polling in the NAPI
handler?
Every atomic op we remove or avoid, every IRQ mask/unmask we remove,
every cache line miss, extra bus transaction, or TLB miss we eliminate is
a step toward better latency.
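For context, here is a minimal sketch of the idea behind
napi_schedule_irqoff(): the caller is assumed to run in hard interrupt
context with IRQs already disabled, so the flags save/restore done by the
generic path can be skipped. Names follow the upstream NAPI helpers, but
this is a sketch of the intent, not necessarily the exact merged code:

	/* Generic path: may be called with interrupts enabled, so it
	 * must save and restore the IRQ flags around queueing the
	 * NAPI poll on this CPU's softnet_data.
	 */
	void __napi_schedule(struct napi_struct *n)
	{
		unsigned long flags;

		local_irq_save(flags);
		____napi_schedule(this_cpu_ptr(&softnet_data), n);
		local_irq_restore(flags);
	}

	/* irqoff variant: the driver calls this from its hard IRQ
	 * handler, where interrupts are already off, so the
	 * save/restore pair (paid on every RX interrupt) goes away.
	 */
	void __napi_schedule_irqoff(struct napi_struct *n)
	{
		____napi_schedule(this_cpu_ptr(&softnet_data), n);
	}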
You should take a look at the recent commits I did; you'll get the
general picture if you missed it.
git log --oneline --author dumazet | head -100