Message-Id: <1189599142.4326.38.camel@localhost>
Date: Wed, 12 Sep 2007 08:12:22 -0400
From: jamal <hadi@...erus.ca>
To: Bill Fink <billfink@...dspring.com>
Cc: James Chapman <jchapman@...alix.com>, netdev@...r.kernel.org,
davem@...emloft.net, jeff@...zik.org, mandeep.baines@...il.com,
ossthema@...ibm.com, Stephen Hemminger <shemminger@...l.org>
Subject: Re: RFC: possible NAPI improvements to reduce interrupt rates for
low traffic rates
On Wed, 2007-09-12 at 03:04 -0400, Bill Fink wrote:
> On Fri, 07 Sep 2007, jamal wrote:
> > I am going to be the devil's advocate[1]:
>
> So let me be the angel's advocate. :-)
I think this would make you God's advocate ;->
(http://en.wikipedia.org/wiki/God%27s_advocate)
> I view his results much more favorably.
The challenge is, under _low traffic_: bad, bad CPU use.
That's what is at stake, correct?
Let's bury the stats for a sec ...
1) Has that CPU situation improved? No, it has gotten worse.
2) Was there a throughput problem? No.
Remember, this is _low traffic_ and the complaint is not that NAPI doesn't
do high throughput. I am not willing to spend 34% more CPU to get a few
hundred pps (under low traffic!).
3) Latency improvement is good. But is a 34% CPU cost worthwhile for the
corner case of low traffic?
Here's an analogy:
I went to buy bread and complained that 66 cents was too much for such
a tiny sliced loaf.
You tell me you have solved my problem: asking me to pay a dollar
because you made the bread slices crispier. I was complaining about the
_66 cent price_, not about the crispiness of the slices ;-> Crispier
slices are good - but am I, the person who was complaining about price,
willing to pay 40-50% more? People are bitching about NAPI abusing CPU;
is the answer to abuse more CPU than NAPI? ;->
The answer could be "I am not solving that problem anymore" - at least
that's what James is saying ;->
Note: I am not saying there's no problem - just saying the result is not
addressing the problem.
> You can't always improve on all metrics of a workload.
But you gotta try to be consistent.
If, for example, one packet size/rate got negative results but the next
got positive results - that's lacking consistency.
> Sometimes there
> are tradeoffs to be made to be decided by the user based on what's most
> important to that user and his specific workload. And the suggested
> ethtool option (defaulting to current behavior) would enable the user
> to make that decision.
And the challenge is:
What workload is willing to invest that much CPU for low traffic?
Can you name one? One that may come close is database benchmarks for
latency - but those folks wouldn't touch this with a mile-long pole if
you told them their CPU use is going to get worse than what NAPI (that
big bad CPU hog under low traffic) is giving them.
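(If someone did want such a knob, the least intrusive spot is probably
the existing ethtool coalescing hooks, so the default stays exactly what
NAPI does today. Rough sketch only - "foo" and the low_rate_poll flag are
made-up placeholders here, not anything from James' patch:

/* Hypothetical driver knob, wired through the standard ethtool
 * coalescing callbacks (2007-era ethtool_ops signatures assumed).
 */
#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct foo_priv {
	u32 low_rate_poll;	/* 0 = classic NAPI (default), 1 = new scheme */
};

static int foo_get_coalesce(struct net_device *dev,
			    struct ethtool_coalesce *ec)
{
	struct foo_priv *fp = netdev_priv(dev);

	/* Report the mode through an existing coalescing field. */
	ec->rx_coalesce_usecs_irq = fp->low_rate_poll;
	return 0;
}

static int foo_set_coalesce(struct net_device *dev,
			    struct ethtool_coalesce *ec)
{
	struct foo_priv *fp = netdev_priv(dev);

	fp->low_rate_poll = ec->rx_coalesce_usecs_irq ? 1 : 0;
	return 0;
}

static const struct ethtool_ops foo_ethtool_ops = {
	.get_coalesce	= foo_get_coalesce,
	.set_coalesce	= foo_set_coalesce,
};

A user would then opt in with something like
"ethtool -C eth0 rx-usecs-irq 1", and nobody who doesn't ask for it pays
the extra CPU.)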
>
> P.S. I agree that some tests run in parallel with some CPU hogs also
> running might be beneficial and enlightening.
indeed.
cheers,
jamal