Date:   Sat, 21 May 2022 06:44:41 +0000
From:   Christophe Leroy <christophe.leroy@...roup.eu>
To:     Jakub Kicinski <kuba@...nel.org>
CC:     Måns Rullgård <mans@...sr.com>,
        Pantelis Antoniou <pantelis.antoniou@...il.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Paolo Abeni <pabeni@...hat.com>,
        Vitaly Bordug <vbordug@...mvista.com>,
        Dan Malek <dan@...eddededge.com>,
        Joakim Tjernlund <joakim.tjernlund@...entis.se>,
        "linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] net: fs_enet: sync rx dma buffer before reading



On 20/05/2022 at 19:43, Jakub Kicinski wrote:
> On Fri, 20 May 2022 12:54:56 +0000 Christophe Leroy wrote:
>> On 20/05/2022 at 14:35, Måns Rullgård wrote:
>>> Christophe Leroy <christophe.leroy@...roup.eu> writes:
>>>> See original commit 070e1f01827c. It explicitly says that the cache
>>>> must be invalidated _AFTER_ the copy.
>>>>
>>>> The cache is initially invalidated by dma_map_single(), so before the
>>>> copy the cache is already clean.
>>>>
>>>> After the copy, data is in the cache. In order to allow re-use of the
>>>> skb, it must be put back in the same condition as before; in other
>>>> words, the cache must be invalidated so it is in the same situation
>>>> as after dma_map_single().
>>>>
>>>> So I think your change is wrong.
>>>
>>> OK, looking at it more closely, the change is at least unnecessary since
>>> there will be a cache invalidation between each use of the buffer either
>>> way.  Please disregard the patch.  Sorry for the noise.
>>>    
>>
>> I also looked deeper.
>>
>> Indeed it was implemented in kernel 4.9 or 4.8. At that time
>> dma_unmap_single() was a no-op: it was not doing any sync/invalidation
>> at all; invalidation was done only at mapping. So when we were reusing
>> the skb, it was necessary to invalidate the cache _AFTER_ the copy, as
>> if it were a new mapping.
>>
>> Today a sync is done at both map and unmap, so it doesn't really matter
>> whether we do the invalidation before or after the copy when we re-use
>> the skb.
> 
> Hm, I think the patch is necessary, sorry if you're also saying that
> and I'm misinterpreting.

Well, I'm saying the contrary.

In mainline the patch may be applied as-is; it won't do any harm.

However, if it gets applied to kernel 4.9 (based on the Fixes: tag), it 
will break the driver for at least powerpc 8xx.

In 4.9, dma_direct_map_page() invalidates the cache, but 
dma_direct_unmap_page() is a no-op. It means that when we re-use a skb 
as we do in fs_enet when the received packet is small, the cache must be 
invalidated _AFTER_ reading the received data.

The driver works like this:

allocate an SKB with the largest possible packet size
dma_direct_map_page() ==> cache invalidation
loop forever
   wait for some received data in DMA
   if (received packet is small)
     allocate a new SKB with the size of the received packet
     copy received data into the new SKB
     hand new SKB to network layer
     invalidate the cache
   else
     dma_direct_unmap_page() ==> no-op
     hand SKB to network layer
     allocate a new SKB with the largest possible packet size
     dma_direct_map_page() ==> cache invalidation
   endif
endloop
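
Sketched in kernel-style C, the small-packet branch above looks roughly 
like the following. This is a hedged illustration, not actual fs_enet 
code: it will not compile on its own, the ring bookkeeping is elided, 
and the variable names (dev, ndev, dma_addr, pkt_len, buf_len, 
RX_COPYBREAK) are assumptions for illustration only.

```c
/* Illustrative sketch of the rx copy path discussed above;
 * names and structure are assumed, not taken from the driver. */
if (pkt_len < RX_COPYBREAK) {
	/* small packet: copy into a right-sized skb, keep the
	 * original skb mapped in the rx ring */
	struct sk_buff *skbn = netdev_alloc_skb(ndev, pkt_len + 2);

	if (skbn) {
		skb_reserve(skbn, 2);	/* align the IP header */

		/* make the DMA'd data visible to the CPU first ... */
		dma_sync_single_for_cpu(dev, dma_addr, pkt_len,
					DMA_FROM_DEVICE);
		skb_copy_to_linear_data(skbn, skb->data, pkt_len);

		/* ... then invalidate _AFTER_ the copy, so the reused
		 * skb is back in the same state as after
		 * dma_map_single().  On 4.9 this step is what keeps
		 * the driver correct; in mainline the sync done at
		 * unmap would also cover it. */
		dma_sync_single_for_device(dev, dma_addr, pkt_len,
					   DMA_FROM_DEVICE);

		netif_rx(skbn);		/* hand the copy to the stack */
	}
} else {
	/* large packet: hand the mapped skb itself to the stack
	 * and put a fresh full-size skb into the ring */
	dma_unmap_single(dev, dma_addr, buf_len, DMA_FROM_DEVICE);
	netif_rx(skb);
	/* allocate + dma_map_single() a replacement skb here */
}
```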


If you don't invalidate the cache _AFTER_ the copy, you have stale data 
in the cache when you later hand a non-small received packet to the 
network stack.

Invalidating _BEFORE_ the copy is useless as it has already been 
invalidated at mapping time.



In mainline, the DMA handling has been made generic, and cache 
invalidation is performed at both mapping and unmapping (which is, by 
the way, sub-optimal), so by chance it would still work after the patch.

> 
> Without the dma_sync_single_for_cpu() if swiotlb is used the data
> will not be copied back into the original buffer if there is no sync.

I don't know how SWIOTLB works or even what it is. Do any of the 
microcontrollers embedding Freescale ethernet use it at all?

Christophe
