Message-ID: <1295375676.3537.83.camel@bwh-desktop>
Date: Tue, 18 Jan 2011 18:34:36 +0000
From: Ben Hutchings <bhutchings@...arflare.com>
To: Rick Jones <rick.jones2@...com>
Cc: mi wake <wakemi.wake@...il.com>, netdev@...r.kernel.org
Subject: Re: rps testing questions
On Tue, 2011-01-18 at 10:23 -0800, Rick Jones wrote:
> Ben Hutchings wrote:
> > On Mon, 2011-01-17 at 17:43 +0800, mi wake wrote:
[...]
> >>In ab and tbench testing I also see lower tps with RPS enabled, but
> >>higher CPU usage. With RPS enabled, softirqs are balanced across the
> >>CPUs.
> >>
> >>Is there something wrong with my test?
> >
> >
> > In addition to what Eric said, check the interrupt moderation settings
> > (ethtool -c/-C options). One-way latency for a single request/response
> > test will be at least the interrupt moderation value.
> >
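(To be concrete, something along these lines, where eth2 is only an
example interface name and the right values depend on the NIC and
driver:

	ethtool -c eth2                          # show current coalescing settings
	ethtool -C eth2 rx-usecs 0 rx-frames 1   # interrupt per packet, minimal added latency

Some drivers also support adaptive moderation, which shows up as
adaptive-rx in the same output.)
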
> > I haven't tested RPS by itself (Solarflare NICs have plenty of hardware
> > queues) so I don't know whether it can improve latency. However, RFS
> > certainly does when there are many flows.
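
(For reference, the knobs for both live in sysfs/procfs; eth2 and the
sizes below are only examples:

	# RPS: allow packet processing for rx queue 0 on CPUs 0-3
	echo f > /sys/class/net/eth2/queues/rx-0/rps_cpus
	# RFS: size the global flow table, then the per-queue limit
	echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
	echo 32768 > /sys/class/net/eth2/queues/rx-0/rps_flow_cnt

With more than one RX queue, rps_flow_cnt is typically the global table
size divided by the number of queues.)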
>
> Is there actually an expectation that either RPS or RFS would improve *latency*?
> Multiple-stream throughput certainly, but with the additional work done to
> spread things around, I wouldn't expect either to improve latency.
Yes, RFS seems to make a big improvement to latency when many flows are
active. Tom told me that one of his benchmarks was 200 * netperf TCP_RR
in parallel, and I've seen over 40% reduction in latency for that. That
said, allocating more RX queues might also help (sfc currently defaults
to one per processor package rather than one per processor thread, due
to concerns about CPU efficiency).
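
For reference, a rough sketch of that kind of test; the host address,
duration and instance count are only examples, and each netperf
instance reports its own transaction rate (for a single outstanding
request/response, latency is roughly the inverse of that rate):

	for i in `seq 200`; do
		netperf -H 192.168.1.2 -t TCP_RR -l 60 -P 0 -- -r 1,1 &
	done
	wait
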
Ben.
--
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.