Message-ID: <CAMRc=MdaHfsJnbB2hOO6EbVMwZaWqO7zMkv8ZVugHnfOuDn=AA@mail.gmail.com>
Date: Fri, 2 Jan 2026 06:46:51 -0600
From: Bartosz Golaszewski <brgl@...nel.org>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Cong Wang <xiyou.wangcong@...il.com>, Jonathan Corbet <corbet@....net>,
Olivia Mackall <olivia@...enic.com>, Herbert Xu <herbert@...dor.apana.org.au>,
Jason Wang <jasowang@...hat.com>, Paolo Bonzini <pbonzini@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>, Eugenio Pérez <eperezma@...hat.com>,
"James E.J. Bottomley" <James.Bottomley@...senpartnership.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>, Gerd Hoffmann <kraxel@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>, Stefano Garzarella <sgarzare@...hat.com>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
Petr Tesarik <ptesarik@...e.com>, Leon Romanovsky <leon@...nel.org>, Jason Gunthorpe <jgg@...pe.ca>,
linux-doc@...r.kernel.org, linux-crypto@...r.kernel.org,
virtualization@...ts.linux.dev, linux-scsi@...r.kernel.org,
iommu@...ts.linux.dev, kvm@...r.kernel.org, netdev@...r.kernel.org,
"Enrico Weigelt, metux IT consult" <info@...ux.net>, Viresh Kumar <vireshk@...nel.org>, Linus Walleij <linusw@...nel.org>,
Bartosz Golaszewski <brgl@...nel.org>, linux-gpio@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 14/13] gpio: virtio: fix DMA alignment
On Tue, 30 Dec 2025 17:40:28 +0100, "Michael S. Tsirkin" <mst@...hat.com> said:
> The res and ires buffers in struct virtio_gpio_line and struct
> vgpio_irq_line respectively are used for DMA_FROM_DEVICE via
> virtqueue_add_sgs(). However, even though these members are tagged
> ____cacheline_aligned, adjacent struct members can still share a DMA
> cacheline on platforms where ARCH_DMA_MINALIGN > L1_CACHE_BYTES
> (e.g. arm64 with 128-byte DMA alignment but 64-byte cache lines).
>
> The existing ____cacheline_aligned annotation aligns to L1_CACHE_BYTES,
> which is not always sufficient for DMA alignment. For example, with
> L1_CACHE_BYTES = 64 and ARCH_DMA_MINALIGN = 128:
> - irq_lines[0].ires sits at offset 128
> - irq_lines[1].type sits at offset 192
> i.e. both fall within the same 128-byte DMA cacheline [128, 256).
>
> When the device writes to irq_lines[0].ires and the CPU concurrently
> modifies one of the irq_lines[1].type/disabled/masked/queued flags,
> corruption can occur on non-cache-coherent platforms.
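
For anyone following along, a minimal sketch of the layout being
described (member list abridged from drivers/gpio/gpio-virtio.c;
offsets assume L1_CACHE_BYTES = 64):

  struct vgpio_irq_line {
          u8 type;                        /* offset   0 */
          bool disabled;
          bool masked;
          bool queued;
          /* ... */
          /* ____cacheline_aligned == 64-byte alignment here: */
          struct virtio_gpio_irq_request  ireq ____cacheline_aligned;
                                          /* offset  64 */
          struct virtio_gpio_irq_response ires ____cacheline_aligned;
                                          /* offset 128 */
  };                                      /* sizeof() == 192 */

With ARCH_DMA_MINALIGN = 128, the DMA cacheline covering
irq_lines[0].ires spans [128, 256), so in an array of these structs it
also covers irq_lines[1].type at offset 192 - exactly the sharing
described above.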
>
> Fix this by using the __dma_from_device_aligned_begin/end annotations
> on the DMA buffers. Drop the ____cacheline_aligned tags - they are not
> required to isolate request from response, and keeping them would
> increase the memory cost.
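
FWIW, here is roughly how I read the fix - assuming the begin/end
markers from earlier in this series force ARCH_DMA_MINALIGN alignment
for the enclosed member and pad up to the next ARCH_DMA_MINALIGN
boundary (a sketch of my understanding, not the series' actual macro
definitions):

  struct vgpio_irq_line {
          u8 type;
          bool disabled;
          bool masked;
          bool queued;
          /* ... */
          /* driver-written, device-read: needs no isolation */
          struct virtio_gpio_irq_request ireq;
          /* device-written via DMA_FROM_DEVICE: isolate this one */
          __dma_from_device_aligned_begin;
          struct virtio_gpio_irq_response ires;
          __dma_from_device_aligned_end;
  };

Only the device-written response buffer gets its own DMA cachelines,
which as I read it is where the memory saving over two
____cacheline_aligned members comes from.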
>
> Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
> ---
Acked-by: Bartosz Golaszewski <bartosz.golaszewski@....qualcomm.com>