Message-ID: <VI1PR0402MB348582411F826968EBC59A8B98190@VI1PR0402MB3485.eurprd04.prod.outlook.com>
Date: Fri, 31 May 2019 06:10:51 +0000
From: Horia Geanta <horia.geanta@....com>
To: Herbert Xu <herbert@...dor.apana.org.au>
CC: Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Eric Biggers <ebiggers@...nel.org>,
Iuliana Prodan <iuliana.prodan@....com>,
"David S. Miller" <davem@...emloft.net>,
Sascha Hauer <s.hauer@...gutronix.de>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
dl-linux-imx <linux-imx@....com>
Subject: Re: [PATCH] crypto: gcm - fix cacheline sharing
On 5/31/2019 8:43 AM, Herbert Xu wrote:
> On Fri, May 31, 2019 at 05:22:50AM +0000, Horia Geanta wrote:
>>
>> Unless it's clearly defined *which* virtual addresses mustn't be accessed,
>> things won't work properly.
>> In theory, any two objects could share a cache line. We can't just stop all
>> memory accesses from CPU while a peripheral is busy.
>
> The user obviously can't touch the memory areas potentially under
> DMA. But in this case it's not the user that's doing it, it's
> the driver.
>
> So the driver must not touch any virtual pointers given to it
> as input/output while the DMA areas are mapped.
>
The driver is not touching the DMA-mapped areas; the DMA API conventions are carefully followed.
It is touching a virtual pointer that is *not* DMA mapped, which just happens to sit on the same
cache line as a DMA-mapped buffer (see the sketch below).
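For illustration, a minimal sketch of the hazard on a non-coherent platform. The struct, the
field names and the helper are hypothetical, not taken from the actual driver; only the DMA API
calls are real:

#include <linux/dma-mapping.h>
#include <linux/types.h>

/*
 * Hypothetical request context, for illustration only: ->iv is handed
 * to the device via DMA, ->backlogged is plain CPU-owned state.  Both
 * can land on the same cache line.
 */
struct req_ctx {
	u8 iv[16];		/* DMA-mapped for the accelerator */
	bool backlogged;	/* CPU-only bookkeeping, not DMA-mapped */
};

static void cacheline_sharing_hazard(struct device *dev, struct req_ctx *ctx)
{
	dma_addr_t iv_dma;

	iv_dma = dma_map_single(dev, ctx->iv, sizeof(ctx->iv),
				DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, iv_dma))
		return;

	/* ... device may now write ctx->iv at any time ... */

	/*
	 * This store only touches ->backlogged, which is outside the
	 * mapped region, yet it dirties the cache line that also holds
	 * ->iv.  A later writeback/eviction of that line can overwrite
	 * the IV the device has already updated in memory.
	 */
	ctx->backlogged = true;

	dma_unmap_single(dev, iv_dma, sizeof(ctx->iv), DMA_BIDIRECTIONAL);
}

Nothing here violates the DMA API for ->iv itself; the corruption comes purely from the
cache line layout.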
Thanks,
Horia