Message-ID: <20180925001615.GA14386@ming.t460p>
Date: Tue, 25 Sep 2018 08:16:16 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Bart Van Assche <bvanassche@....org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Christoph Hellwig <hch@....de>,
Ming Lei <tom.leiming@...il.com>,
linux-block <linux-block@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Linux FS Devel <linux-fsdevel@...r.kernel.org>,
"open list:XFS FILESYSTEM" <linux-xfs@...r.kernel.org>,
Dave Chinner <dchinner@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>, Christoph Lameter <cl@...ux.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: block: DMA alignment of IO buffer allocated from slab
On Mon, Sep 24, 2018 at 11:57:53AM -0700, Matthew Wilcox wrote:
> On Mon, Sep 24, 2018 at 09:19:44AM -0700, Bart Van Assche wrote:
> > That means that two buffers allocated with kmalloc() may share a cache line on
> > x86-64. Since it is allowed to use a buffer allocated by kmalloc() for DMA, can
> > this lead to data corruption, e.g. if the CPU writes into one buffer allocated
> > with kmalloc() and a device performs a DMA write to another kmalloc() buffer and
> > both write operations affect the same cache line?
>
> You're not supposed to use kmalloc memory for DMA. This is why we have
> dma_alloc_coherent() and friends. Also, from DMA-API.txt:
Please take a look at the USB drivers, the storage drivers or the SCSI layer. Lots of
DMA buffers are allocated via kmalloc().
Also see the following description in DMA-API-HOWTO.txt:
If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X). The virtual
memory system maps X to a physical address (Y) in system RAM. The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.
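Just to make that pattern concrete, here is a minimal sketch of what it looks like in a
driver (the function name foo_do_dma, the struct device pointer and the transfer size
are made up for illustration, not taken from any real driver):

#include <linux/slab.h>
#include <linux/dma-mapping.h>

/* Sketch only: DMA into a kmalloc()ed buffer via a streaming mapping. */
static int foo_do_dma(struct device *dev, size_t len)
{
	void *buf;
	dma_addr_t dma;

	buf = kmalloc(len, GFP_KERNEL);		/* virtual address X */
	if (!buf)
		return -ENOMEM;

	/* Hand X to the DMA API; 'dma' is the address the device uses. */
	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return -ENOMEM;
	}

	/* ... program the device with 'dma' and wait for completion ... */

	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);
	kfree(buf);
	return 0;
}

In this streaming case the device DMAs straight into the kmalloc() buffer, which is
exactly why the alignment guarantees of kmalloc() matter for the question in this
thread.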
And further on in DMA-API-HOWTO.txt:
Types of DMA mappings
=====================
There are two types of DMA mappings:
- Consistent DMA mappings which are usually mapped at driver
initialization, unmapped at the end and for which the hardware should
guarantee that the device and the CPU can access the data
in parallel and will see updates made by each other without any
explicit software flushing.
Think of "consistent" as "synchronous" or "coherent".
- Streaming DMA mappings which are usually mapped for one DMA
transfer, unmapped right after it (unless you use dma_sync_* below)
and for which hardware can optimize for sequential accesses.
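To show the difference between the two side by side, a rough sketch (foo_setup,
ring_bytes and buf_bytes are invented names and the error handling is compressed;
this is an illustration, not code from a real driver):

#include <linux/slab.h>
#include <linux/dma-mapping.h>

static int foo_setup(struct device *dev, size_t ring_bytes, size_t buf_bytes)
{
	void *ring, *buf;
	dma_addr_t ring_dma, buf_dma;
	int ret = -ENOMEM;

	/* Consistent mapping: typically set up once and kept for the
	 * lifetime of the driver; CPU and device stay coherent. */
	ring = dma_alloc_coherent(dev, ring_bytes, &ring_dma, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* Streaming mapping: set up around a single transfer. */
	buf = kmalloc(buf_bytes, GFP_KERNEL);
	if (!buf)
		goto free_ring;

	buf_dma = dma_map_single(dev, buf, buf_bytes, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, buf_dma))
		goto free_buf;

	/* ... hand ring_dma and buf_dma to the hardware here ... */

	dma_unmap_single(dev, buf_dma, buf_bytes, DMA_TO_DEVICE);
	ret = 0;
free_buf:
	kfree(buf);
free_ring:
	dma_free_coherent(dev, ring_bytes, ring, ring_dma);
	return ret;
}

Only the second, streaming case goes through a kmalloc() buffer; that is the case
being discussed in this thread.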
Thanks,
Ming