Message-ID: <20130813154102.35739102@vostro>
Date: Tue, 13 Aug 2013 15:41:02 +0300
From: Timo Teras <timo.teras@....fi>
To: Steffen Klassert <steffen.klassert@...unet.com>
Cc: Andrew Collins <bsderandrew@...il.com>, netdev@...r.kernel.org
Subject: Re: ipsec smp scalability and cpu use fairness (softirqs)
On Tue, 13 Aug 2013 13:56:52 +0200
Steffen Klassert <steffen.klassert@...unet.com> wrote:
> On Tue, Aug 13, 2013 at 02:33:25PM +0300, Timo Teras wrote:
> >
> > I've now been playing with pcrypt. It does not seem to give a
> > significant boost in throughput. I've set up the cpumasks properly,
> > and top says the work is distributed to the appropriate kworkers,
> > but for some reason throughput does not get any better. I've tested
> > with iperf in both UDP and TCP modes, with various numbers of
> > threads.
> >
> > Are there any more synchronization points for a single SA that
> > might limit throughput? I've been testing with auth hmac(sha1),
> > enc cbc(aes) - according to the metrics, the CPUs are still largely
> > idle instead of processing more data for better throughput. aes-gcm
> > (without pcrypt) achieves better throughput, even saturating the
> > links of my test box.
> >
> > Any pointers on what to test, or how to pinpoint the bottleneck?
> >
>
> The only pitfall that comes to mind is that pcrypt must be
> instantiated before the states are inserted. Your /proc/crypto
> should show something like:
>
> name : authenc(hmac(sha1),cbc(aes))
> driver : pcrypt(authenc(hmac(sha1-generic),cbc(aes-asm)))
> module : pcrypt
> priority : 2100
> refcnt : 1
> selftest : passed
> type : aead
> async : yes
> blocksize : 16
> ivsize : 16
> maxauthsize : 20
> geniv : <built-in>
>
> pcrypt is now instantiated, i.e. all new IPsec states (that do
> hmac-sha1, cbc-aes) will use it; adding new states increases the
> refcount.
>
> I'll do some tests with current net-next on my own tomorrow and let
> you know about the results.
Yes, I've got pcrypt there. Apparently some of my cpu bindings were
not right; with those fixed, it's looking a lot better. Now it seems
that ksoftirqd on one of the CPUs becomes the first bottleneck. I'll
try to figure out why.
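
For completeness, by cpu bindings I mean the pcrypt padata cpumasks
in sysfs; I'm setting them with roughly the following (the mask
values are just what fits this particular test box):

  echo f > /sys/kernel/pcrypt/pencrypt/parallel_cpumask
  echo 1 > /sys/kernel/pcrypt/pencrypt/serial_cpumask
  echo f > /sys/kernel/pcrypt/pdecrypt/parallel_cpumask
  echo 2 > /sys/kernel/pcrypt/pdecrypt/serial_cpumask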
Thanks for all the info so far; I'll continue experimenting here too.