Message-Id: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
Date: Thu, 6 Dec 2012 21:08:42 +0800
From: Dongxiao Xu <dongxiao.xu@...el.com>
To: konrad.wilk@...cle.com, xen-devel@...ts.xen.org
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg hook

While mapping sg buffers, we also need to check whether a DMA buffer
crosses a page boundary. If a guest DMA buffer does cross a page
boundary, Xen should exchange contiguous memory for it. In addition,
the original page contents need to be backed up and copied back after
the memory exchange is done.

This fixes issues where a device DMAs into static software buffers
that cross a page boundary and whose pages are not contiguous in real
hardware.

Signed-off-by: Dongxiao Xu <dongxiao.xu@...el.com>
Signed-off-by: Xiantao Zhang <xiantao.zhang@...el.com>
---
 drivers/xen/swiotlb-xen.c |   47 ++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 46 insertions(+), 1 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 58db6df..e8f0cfb 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 }
 EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
 
+static bool
+check_continguous_region(unsigned long vstart, unsigned long order)
+{
+	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
+	unsigned long next_ma;
+	int i;
+
+	for (i = 1; i < (1 << order); i++) {
+		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
+		if (next_ma != prev_ma + PAGE_SIZE)
+			return false;
+		prev_ma = next_ma;
+	}
+	return true;
+}
+
 /*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above xen_swiotlb_map_page
@@ -489,7 +505,36 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 
 	for_each_sg(sgl, sg, nelems, i) {
 		phys_addr_t paddr = sg_phys(sg);
-		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
+		unsigned long vstart, order;
+		dma_addr_t dev_addr;
+
+		/*
+		 * While mapping sg buffers, checking to cross page DMA buffer
+		 * is also needed. If the guest DMA buffer crosses page
+		 * boundary, Xen should exchange contiguous memory for it.
+		 * Besides, it is needed to backup the original page contents
+		 * and copy it back after memory exchange is done.
+		 */
+		if (range_straddles_page_boundary(paddr, sg->length)) {
+			vstart = (unsigned long)__va(paddr & PAGE_MASK);
+			order = get_order(sg->length + (paddr & ~PAGE_MASK));
+			if (!check_continguous_region(vstart, order)) {
+				unsigned long buf;
+				buf = __get_free_pages(GFP_KERNEL, order);
+				memcpy((void *)buf, (void *)vstart,
+				       PAGE_SIZE * (1 << order));
+				if (xen_create_contiguous_region(vstart, order,
+						fls64(paddr))) {
+					free_pages(buf, order);
+					return 0;
+				}
+				memcpy((void *)vstart, (void *)buf,
+				       PAGE_SIZE * (1 << order));
+				free_pages(buf, order);
+			}
+		}
+
+		dev_addr = xen_phys_to_bus(paddr);
 
 		if (swiotlb_force ||
 		    !dma_capable(hwdev, dev_addr, sg->length) ||
-- 
1.7.1
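
For reference, the allocation order used in the patch,
get_order(sg->length + (paddr & ~PAGE_MASK)), covers the buffer's offset
within its first page plus its length. Below is a minimal user-space sketch
of that arithmetic, assuming 4 KiB pages; PAGE_SHIFT, PAGE_SIZE, PAGE_MASK
and get_order() are stand-ins for the kernel macros, and the sg entry values
are hypothetical.

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/*
 * Stand-in for the kernel's get_order(): smallest order such that
 * (1 << order) pages cover 'size' bytes.
 */
static unsigned long get_order(unsigned long size)
{
	unsigned long order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	/*
	 * Hypothetical sg entry: 256 bytes starting 128 bytes before the
	 * end of a page, so the buffer straddles the page boundary.
	 */
	unsigned long paddr = 0x10000000UL + PAGE_SIZE - 128;
	unsigned long len = 256;

	/* Offset within the first page plus the length... */
	unsigned long order = get_order(len + (paddr & ~PAGE_MASK));

	/* ...spans two pages here, so this prints order = 1 (2 pages). */
	printf("order = %lu (%lu page(s))\n", order, 1UL << order);
	return 0;
}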