Message-ID: <20141112114316.GN6390@secunet.com>
Date: Wed, 12 Nov 2014 12:43:16 +0100
From: Steffen Klassert <steffen.klassert@...unet.com>
To: Ming Liu <ming.liu@...driver.com>
CC: Herbert Xu <herbert@...dor.apana.org.au>, <davem@...emloft.net>,
<ying.xue@...driver.com>, <linux-crypto@...r.kernel.org>,
<netdev@...r.kernel.org>
Subject: Re: [PATCH] crypto: aesni-intel - avoid IPsec re-ordering
On Wed, Nov 12, 2014 at 06:41:30PM +0800, Ming Liu wrote:
> On 11/12/2014 04:51 PM, Herbert Xu wrote:
> >On Wed, Nov 12, 2014 at 09:41:38AM +0100, Steffen Klassert wrote:
> >>Can't we just use cryptd unconditionally to fix this reordering problem?
> >I think the idea is that most of the time cryptd isn't required
> >so we want to stick with direct processing to lower latency.
> >
> >I think the simplest fix would be to punt to cryptd as long as
> >there are cryptd requests queued.
> I tried that approach when I started thinking about the fix, but in
> my tests it causes two other issues while resolving the reordering
> one, as follows:
> 1. The work queue cannot handle that many packets when the traffic is
> very high (over 200M/S), and it drops most of them once the queue
> length exceeds CRYPTD_MAX_CPU_QLEN.

That's why I proposed adjusting CRYPTD_MAX_CPU_QLEN in my other mail.
But even with a larger queue it still does not fix the reordering
problem completely. We still have a problem if subsequent algorithms
run asynchronously, or if we get interrupted while we are processing
the last request from the queue.
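
Just to make sure we mean the same thing: I read Herbert's suggestion
roughly as the sketch below, based on crypto/ablk_helper.c as I
remember it. cryptd_queue_still_busy() is a made-up helper, cryptd
does not export such a check today, and the comment marks where the
reordering can still happen:

/*
 * Untested sketch only. cryptd_queue_still_busy() is hypothetical,
 * the rest follows the current ablk_helper code from memory.
 */
static int ablk_encrypt_sketch(struct ablkcipher_request *req)
{
	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
	struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);

	if (!irq_fpu_usable() || cryptd_queue_still_busy()) {
		/* Preserve ordering by queueing behind pending requests. */
		struct ablkcipher_request *cryptd_req =
			ablkcipher_request_ctx(req);

		*cryptd_req = *req;
		ablkcipher_request_set_tfm(cryptd_req,
					   &ctx->cryptd_tfm->base);
		return crypto_ablkcipher_encrypt(cryptd_req);
	}

	/*
	 * Window for reordering: cryptd may still be working on its
	 * last request, or we may get interrupted right here, so this
	 * direct call can overtake requests queued earlier.
	 */
	return __ablk_encrypt(req);
}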

I think we have only two options: either process all calls directly
or use cryptd unconditionally. Mixing direct and asynchronous calls
will always leave such a reordering window.
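
The "use cryptd unconditionally" option would then simply mean
dropping the direct path, i.e. always taking the cryptd branch of
the sketch above, roughly:

/* Untested sketch: always queue to cryptd, never call directly. */
static int ablk_encrypt_always_async(struct ablkcipher_request *req)
{
	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
	struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
	struct ablkcipher_request *cryptd_req = ablkcipher_request_ctx(req);

	/* Every request goes through the same queue, so ordering holds. */
	*cryptd_req = *req;
	ablkcipher_request_set_tfm(cryptd_req, &ctx->cryptd_tfm->base);
	return crypto_ablkcipher_encrypt(cryptd_req);
}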

If we don't want to use cryptd unconditionally, we could use
direct calls for all requests. If the fpu is not usable, we
could perhaps fall back to an algorithm that does not need the
fpu, such as aes-generic.
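
For the single block cipher that could look roughly like the untested
sketch below. The ctx layout is made up, and the alignment of the
aesni key schedule as well as allocating the fallback tfm
(crypto_alloc_cipher() in the init path) are left out:

/* Hypothetical ctx layout, for illustration only. */
struct aesni_fallback_ctx {
	struct crypto_aes_ctx aes_ctx;	/* alignment handling omitted */
	struct crypto_cipher *fallback;	/* e.g. aes-generic */
};

static void aesni_encrypt_with_fallback(struct crypto_tfm *tfm,
					u8 *dst, const u8 *src)
{
	struct aesni_fallback_ctx *ctx = crypto_tfm_ctx(tfm);

	if (irq_fpu_usable()) {
		kernel_fpu_begin();
		aesni_enc(&ctx->aes_ctx, dst, src);
		kernel_fpu_end();
	} else {
		/* No fpu available, use the generic implementation. */
		crypto_cipher_encrypt_one(ctx->fallback, dst, src);
	}
}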