Message-ID: <192a335d783a2e54f539dc5ff81bf3207dafa88f.1767089672.git.mst@redhat.com>
Date: Tue, 30 Dec 2025 05:15:49 -0500
From: "Michael S. Tsirkin" <mst@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: Cong Wang <xiyou.wangcong@...il.com>, Jonathan Corbet <corbet@....net>,
Olivia Mackall <olivia@...enic.com>,
Herbert Xu <herbert@...dor.apana.org.au>,
Jason Wang <jasowang@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>,
Eugenio Pérez <eperezma@...hat.com>,
"James E.J. Bottomley" <James.Bottomley@...senpartnership.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Gerd Hoffmann <kraxel@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Stefano Garzarella <sgarzare@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, Petr Tesarik <ptesarik@...e.com>,
Leon Romanovsky <leon@...nel.org>, Jason Gunthorpe <jgg@...pe.ca>,
linux-doc@...r.kernel.org, linux-crypto@...r.kernel.org,
virtualization@...ts.linux.dev, linux-scsi@...r.kernel.org,
iommu@...ts.linux.dev, kvm@...r.kernel.org, netdev@...r.kernel.org
Subject: [PATCH RFC 02/13] docs: dma-api: document __dma_from_device_aligned_begin/end
Document the __dma_from_device_aligned_begin/__dma_from_device_aligned_end
annotations introduced by the previous patch.
Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
---
Documentation/core-api/dma-api-howto.rst | 42 ++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/Documentation/core-api/dma-api-howto.rst b/Documentation/core-api/dma-api-howto.rst
index 96fce2a9aa90..99eda4c5c8e7 100644
--- a/Documentation/core-api/dma-api-howto.rst
+++ b/Documentation/core-api/dma-api-howto.rst
@@ -146,6 +146,48 @@ What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
+__dma_from_device_aligned_begin/end annotations
+===============================================
+
+As explained previously, when a structure contains a DMA_FROM_DEVICE buffer
+(device writes to memory) alongside fields that the CPU writes to, cache line
+sharing between the DMA buffer and CPU-written fields can cause data corruption
+on CPUs with DMA-incoherent caches.
+
+The ``__dma_from_device_aligned_begin/__dma_from_device_aligned_end``
+annotations ensure proper alignment to prevent this::
+
+ struct my_device {
+ spinlock_t lock1;
+ __dma_from_device_aligned_begin char dma_buffer1[16];
+ char dma_buffer2[16];
+ __dma_from_device_aligned_end spinlock_t lock2;
+ };
+
+On cache-coherent platforms these macros expand to nothing. On non-coherent
+platforms, they align to the architecture's minimum DMA alignment
+(``ARCH_DMA_MINALIGN``), which can be as large as 128 bytes.
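+
+On non-coherent platforms, the annotations can be thought of as expanding
+roughly like this (a simplified sketch for illustration only; see the
+previous patch for the actual definitions)::
+
+ #define __dma_from_device_aligned_begin __aligned(ARCH_DMA_MINALIGN)
+ #define __dma_from_device_aligned_end __aligned(ARCH_DMA_MINALIGN)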
+
+.. note::
+
+ To isolate a DMA buffer from adjacent fields, apply
+ ``__dma_from_device_aligned_begin`` to the first DMA buffer field
+ **and additionally** apply ``__dma_from_device_aligned_end`` to the
+ **next** field in the structure, **beyond** the DMA buffer (not to
+ the last field of the DMA buffer itself). This protects both the
+ head and the tail of the buffer from cache line sharing.
+
+ When the DMA buffer is the **last field** in the structure,
+ ``__dma_from_device_aligned_begin`` alone is enough: the structure
+ padding added by the compiler protects the tail::
+
+ struct my_device {
+ spinlock_t lock;
+ struct mutex mlock;
+ __dma_from_device_aligned_begin char dma_buffer1[16];
+ char dma_buffer2[16];
+ };
+
DMA addressing capabilities
===========================
--
MST