[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <VI1PR0402MB3485D7664F87D8C38FA8FD5C98190@VI1PR0402MB3485.eurprd04.prod.outlook.com>
Date: Fri, 31 May 2019 05:22:50 +0000
From: Horia Geanta <horia.geanta@....com>
To: Herbert Xu <herbert@...dor.apana.org.au>
CC: Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Eric Biggers <ebiggers@...nel.org>,
Iuliana Prodan <iuliana.prodan@....com>,
"David S. Miller" <davem@...emloft.net>,
Sascha Hauer <s.hauer@...gutronix.de>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
dl-linux-imx <linux-imx@....com>
Subject: Re: [PATCH] crypto: gcm - fix cacheline sharing
On 5/30/2019 4:26 PM, Herbert Xu wrote:
> On Thu, May 30, 2019 at 01:18:34PM +0000, Horia Geanta wrote:
>>
>> I guess there are only two options:
>> -either cache line sharing is avoided OR
>> -users need to be *aware* they are sharing the cache line and some rules /
>> assumptions are in place on how to safely work on the data
>
> No there is a third option and it's very simple:
>
I see this as the 2nd option.
> You must only access the virtual addresses given to you before DMA
> mapping or after DMA unmapping.
>
Unless it's clearly defined *which* virtual addresses mustn't be accessed,
things won't work properly.
In theory, any two objects could share a cache line. We can't just stop all
memory accesses from CPU while a peripheral is busy.
Thanks,
Horia
Powered by blists - more mailing lists