Date: Fri, 20 May 2022 12:54:56 +0000
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: Måns Rullgård <mans@...sr.com>
CC: Pantelis Antoniou <pantelis.antoniou@...il.com>, "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Vitaly Bordug <vbordug@...mvista.com>, Dan Malek <dan@...eddededge.com>, Joakim Tjernlund <joakim.tjernlund@...entis.se>, "linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] net: fs_enet: sync rx dma buffer before reading

On 20/05/2022 at 14:35, Måns Rullgård wrote:
> Christophe Leroy <christophe.leroy@...roup.eu> writes:
>
>> On 19/05/2022 at 21:24, Mans Rullgard wrote:
>>> The dma_sync_single_for_cpu() call must precede reading the received
>>> data. Fix this.
>>
>> See original commit 070e1f01827c. It explicitly says that the cache
>> must be invalidated _AFTER_ the copy.
>>
>> The cache is initially invalidated by dma_map_single(), so before the
>> copy the cache is already clean.
>>
>> After the copy, the data is in the cache. In order to allow reuse of
>> the skb, it must be put back in the same condition as before, i.e. the
>> cache must be invalidated so that it is in the same state as after
>> dma_map_single().
>>
>> So I think your change is wrong.
>
> OK, looking at it more closely, the change is at least unnecessary since
> there will be a cache invalidation between each use of the buffer either
> way. Please disregard the patch. Sorry for the noise.
>

I also looked deeper. It was indeed implemented in kernel 4.8 or 4.9. At
that time dma_unmap_single() was a no-op: it did no sync/invalidation at
all, and invalidation was done only at mapping time. So when we reused the
skb, it was necessary to invalidate the cache _AFTER_ the copy, as if it
were a new mapping.
Today a sync is done at both map and unmap, so it doesn't really matter
whether we do the invalidation before or after the copy when we reuse the
skb.

Christophe