lists.openwall.net - Open Source and information security mailing list archives
Date: Thu, 31 Dec 2015 16:50:54 +0900
From: Masahiro Yamada <yamada.masahiro@...ionext.com>
To: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>, dmaengine@...r.kernel.org
Cc: Dan Williams <dan.j.williams@...el.com>,
	"James E.J. Bottomley" <James.Bottomley@...senPartnership.com>,
	Sumit Semwal <sumit.semwal@...aro.org>,
	Vinod Koul <vinod.koul@...el.com>,
	Christoph Hellwig <hch@....de>,
	Lars-Peter Clausen <lars@...afoo.de>,
	linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
	Nicolas Ferre <nicolas.ferre@...el.com>
Subject: [Question about DMA] Consistent memory?

Hi.

I am new to the Linux DMA APIs. I started by reading Documentation/DMA-API.txt, but I am confused by the term "consistent memory". Please help me understand the document correctly.

DMA-API.txt says the following:

----------------------->8--------------------------------------------
void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)
------------------------8<--------------------------------------------

As far as I understand the cited sentences, for memory to be consistent, the DMA controllers must be connected to the DRAM through some special hardware that maintains memory coherency, such as an SCU (Snoop Control Unit).
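For reference, a minimal sketch of how the two allocation styles in DMA-API.txt are typically used from a driver (the function names `my_*`, `BUF_SIZE`, and the error paths are hypothetical; only the `dma_*` calls are real kernel API):

```c
#include <linux/dma-mapping.h>

#define BUF_SIZE 4096	/* illustrative size, not from the document */

/* Consistent ("coherent") memory: no explicit cache maintenance needed.
 * CPU writes through cpu_addr are visible to the device via 'handle'
 * (and vice versa) without flushing or invalidating caches. */
static int my_coherent_example(struct device *dev)
{
	dma_addr_t handle;
	void *cpu_addr;

	cpu_addr = dma_alloc_coherent(dev, BUF_SIZE, &handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* ... program the device with 'handle', touch cpu_addr from the CPU ... */

	dma_free_coherent(dev, BUF_SIZE, cpu_addr, handle);
	return 0;
}

/* Streaming mapping: the cache operations happen at map/unmap time. */
static int my_streaming_example(struct device *dev, void *buf)
{
	dma_addr_t handle;

	handle = dma_map_single(dev, buf, BUF_SIZE, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -EIO;

	/* dma_map_single() performed any cache clean/flush the platform
	 * requires; kick the DMA engine here, then unmap on completion. */

	dma_unmap_single(dev, handle, BUF_SIZE, DMA_TO_DEVICE);
	return 0;
}
```

This is a sketch, not a complete driver; on a system without hardware snooping, the streaming style is the one that corresponds to the explicit before/after cache operations discussed below.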
I assume a system like Fig.1:

Fig.1

  |------|  |------|      |-----|
  | CPU0 |  | CPU1 |      | DMA |
  |------|  |------|      |-----|
     |         |             |
     |         |             |
  |------|  |------|      |-----|
  | L1-C |  | L1-C |      | ACP |
  |------|  |------|      |-----|
     |         |             |
  |------------------------|
  |   Snoop Control Unit   |
  |------------------------|
              |
  |------------------------|
  |        L2-cache        |
  |------------------------|
              |
  |------------------------|
  |          DRAM          |
  |------------------------|

  (ACP = accelerator coherency port)

But I think such a system is rare. At least on my SoC (an ARM SoC), the DMA controllers for NAND, MMC, etc. are connected directly to the DRAM, as in Fig.2, so cache operations must be done explicitly by software before/after DMA transfers are kicked off. (I think this is very common.)

Fig.2

  |------|  |------|      |-----|
  | CPU0 |  | CPU1 |      | DMA |
  |------|  |------|      |-----|
     |         |             |
     |         |             |
  |------|  |------|         |
  | L1-C |  | L1-C |         |
  |------|  |------|         |
     |         |             |
  |------------------|       |
  |Snoop Control Unit|       |
  |------------------|       |
           |                 |
  |------------------|       |
  |     L2-cache     |       |
  |------------------|       |
           |                 |
  |--------------------------|
  |           DRAM           |
  |--------------------------|

In a system like Fig.2, is the memory non-consistent? As far as I can tell from DMA-API.txt, it is non-consistent, so there should be no consistent memory on my SoC. But not only dma_alloc_noncoherent(), but also dma_alloc_coherent() returns a memory region on my SoC. I am confused...

-- 
Best Regards
Masahiro Yamada

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/