Message-ID: <20090109091814.GA10881@gondor.apana.org.au>
Date: Fri, 9 Jan 2009 20:18:14 +1100
From: Herbert Xu <herbert@...dor.apana.org.au>
To: Huang Ying <ying.huang@...el.com>
Cc: Sebastian Siewior <linux-crypto@...breakpoint.cc>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>
Subject: Re: [RFC PATCH crypto 4/4] AES-NI: Add support to Intel AES-NI
instructions for x86_64 platform
On Fri, Jan 09, 2009 at 04:54:33PM +0800, Huang Ying wrote:
>
> - cryptd thread is not per-CPU, so I think there will be some
> unnecessary inter-CPU cache migration. Why not use a dedicated workqueue
Well it shouldn't be hard to make cryptd per-cpu.
> or just system events workqueue?
That's actually bad because the system events queue is a shared
resource which is often subject to starvation problems. We may
be starved by others, and we may also starve others by doing too
much crypto.
> - with cryptd(__*-aes-aesni), we need 4 internal tfms for each external
> tfm allocation request. For example, for one external cbc(aes) tfm
> allocation request, we need one cbc(aes) ablkcipher tfm, one
> cryptd(cbc-aes-aesni) tfm, and two cbc-aes-aesni tfm. Do we use too much
> memory? And we need to call aesni_set_key() twice.
Not at all, tfms are just "shell" objects and they were designed
to be used in this way. Calling setkey twice is an issue but it's
not a show-stopper. We have the same problem in other places too,
so this is something that we can potentially optimise.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt