Message-Id: <1271938343.4032.30.camel@bigi>
Date: Thu, 22 Apr 2010 08:12:23 -0400
From: jamal <hadi@...erus.ca>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Changli Gao <xiaosuo@...il.com>, Rick Jones <rick.jones2@...com>,
David Miller <davem@...emloft.net>, therbert@...gle.com,
netdev@...r.kernel.org, robert@...julf.net, andi@...stfloor.org
Subject: Re: rps performance WAS(Re: rps: question
On Wed, 2010-04-21 at 21:01 +0200, Eric Dumazet wrote:
> Drawback of using a fixed src ip from your generator is that all flows
> share the same struct dst entry on SUT. This might explain some glitches
> you noticed (ip_route_input + ip_rcv at high level on slave/application
> cpus)
Yes, that would explain it ;-> I could have had flows going to each cpu
generate different unique dsts. It is good I didn't ;->
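For the record, a toy user-space sketch of why the fixed src only
collapses the route lookup and not the RPS spread - mix() here is just a
stand-in for the kernel's jhash, and all the constants and names are
made up:

#include <stdio.h>
#include <stdint.h>

/* toy substitute for jhash_3words(); illustrative only */
static uint32_t mix(uint32_t a, uint32_t b, uint32_t c)
{
	a ^= b * 0x9e3779b9;
	c ^= (a << 13) | (a >> 19);
	return a ^ b ^ c;
}

int main(void)
{
	uint32_t saddr = 0x0a000001;	/* fixed 10.0.0.1 source */
	uint32_t daddr = 0x0a000002;	/* 10.0.0.2 destination */
	unsigned int ncpus = 8;
	uint16_t sport;

	for (sport = 5000; sport < 5008; sport++) {
		uint32_t ports = ((uint32_t)sport << 16) | 80;

		/* 4-tuple hash: each flow can land on a different cpu */
		printf("sport %u -> cpu %u\n", sport,
		       mix(saddr, daddr, ports) % ncpus);
	}

	/* the route key ignores ports, so all flows above share one dst */
	printf("route hash %u is the same for every flow\n",
	       mix(saddr, daddr, 0) % 1024);
	return 0;
}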
> Also note your test is one-way. If some data was sent in reply, we would
> see much more use of the 'flows'
>
In my next step I wanted to "route" these packets at app level; for
this stage of testing I just wanted to sink the data to reduce experiment
variables. Reason:
The netdev structure would hit a lot of cache misses if I started using
it to both send and receive, since lots of things are shared on tx/rx
(for example, napi tx pruning could happen on either the tx or receive
path); same thing with the qdisc path, which is at netdev granularity.
I think there may be room for interesting improvements in this area.
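To illustrate the kind of separation I mean, a made-up toy struct (not
the real struct net_device layout - field names and sizes are invented)
with rx-hot and tx-hot fields padded onto separate cache lines:

#include <stdio.h>
#include <stddef.h>

#define CACHE_LINE 64

struct toy_netdev {
	/* rx-hot: written by the cpu(s) taking receive interrupts */
	unsigned long rx_packets;
	unsigned long rx_bytes;

	/* tx-hot: written by the cpu running the qdisc and by napi
	 * tx-completion pruning; padded onto its own cache line so
	 * forwarding traffic does not bounce one line between cpus
	 */
	unsigned long tx_packets __attribute__((aligned(CACHE_LINE)));
	unsigned long tx_bytes;
};

int main(void)
{
	printf("rx_packets at offset %zu, tx_packets at offset %zu\n",
	       offsetof(struct toy_netdev, rx_packets),
	       offsetof(struct toy_netdev, tx_packets));
	return 0;
}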
> I notice epoll_ctl() used a lot, are you re-arming epoll each time you
> receive a datagram?
I am using the default libevent on debian. It looks very old and maybe
buggy. I will try to upgrade first and, if I still see the same behavior,
investigate.
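For comparison, the pattern I would expect to avoid the per-datagram
epoll_ctl() churn looks roughly like this (standalone test code I may
try, not libevent internals; the port number is arbitrary and error
handling is stripped):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
	int sock = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in sin = { .sin_family = AF_INET,
				   .sin_port = htons(9000),
				   .sin_addr.s_addr = htonl(INADDR_ANY) };
	bind(sock, (struct sockaddr *)&sin, sizeof(sin));

	int epfd = epoll_create(1);
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock };
	epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);	/* one-time arm */

	char buf[2048];
	for (;;) {
		struct epoll_event out;
		if (epoll_wait(epfd, &out, 1, -1) <= 0)
			continue;
		/* level-triggered EPOLLIN: just drain the socket,
		 * no EPOLL_CTL_MOD re-arm needed afterwards
		 */
		while (recv(out.data.fd, buf, sizeof(buf), MSG_DONTWAIT) > 0)
			;
	}
	return 0;
}

In libevent terms that should correspond to adding the read event with
EV_PERSIST so it is not deleted and re-added around every callback.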
> I see slave/application cpus hit _raw_spin_lock_irqsave() and
> _raw_spin_unlock_irqrestore().
>
> Maybe a ring buffer could help (instead of a double linked queue) for
> backlog, or the double queue trick, if Changli wants to respin his
> patch.
>
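Just so we are talking about the same thing, a very rough sketch of the
ring idea (illustrative user-space code: void * stands in for struct
sk_buff *, names and sizes are made up, and the locking/memory ordering
between producer and consumer is left out):

#include <stddef.h>

#define BACKLOG_RING_SIZE 1024		/* must be a power of two */

struct backlog_ring {
	unsigned int head;		/* next slot to dequeue */
	unsigned int tail;		/* next slot to enqueue */
	void *slot[BACKLOG_RING_SIZE];
};

static int backlog_enqueue(struct backlog_ring *r, void *skb)
{
	if (r->tail - r->head >= BACKLOG_RING_SIZE)
		return -1;		/* ring full: caller drops */
	r->slot[r->tail & (BACKLOG_RING_SIZE - 1)] = skb;
	r->tail++;
	return 0;
}

static void *backlog_dequeue(struct backlog_ring *r)
{
	if (r->head == r->tail)
		return NULL;		/* empty */
	return r->slot[r->head++ & (BACKLOG_RING_SIZE - 1)];
}

The win, if any, would be that enqueue and dequeue each touch only an
index and one slot instead of the prev/next pointers a doubly linked
list dirties.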
Ok, I will have some cycles later today/tomorrow or for sure on the
weekend. My setup is still intact - so I can test.
cheers,
jamal