Message-ID: <476B45E9.1040909@opengridcomputing.com>
Date: Thu, 20 Dec 2007 22:49:45 -0600
From: Steve Wise <swise@...ngridcomputing.com>
To: benh@....ibm.com
CC: benh@...abs.au.ibm.com, Roland Dreier <rdreier@...co.com>,
linux-kernel@...r.kernel.org,
OpenFabrics General <general@...ts.openfabrics.org>,
Benjamin Herrenschmidt <bherren@....ibm.com>,
Wen Xiong <wenxiong@...ibm.com>,
Olof Johansson <olof@...om.net>,
Paul Mackerras <pmac@....ibm.com>
Subject: Re: iommu dma mapping alignment requirements

Benjamin Herrenschmidt wrote:
>> Sounds good. Thanks!
>>
>> Note that these smaller, sub-host-page-sized mappings might pollute the
>> address space, causing fully aligned host-page-sized maps to become
>> scarce... Maybe there's a clever way to keep those in their own segment
>> of the address space?
>
> We already have a large vs. small split in the iommu virtual space to
> alleviate this (though it's not a hard constraint; we can still spill
> into the "other" side if the default one is full).
>
> Try that patch and let me know:
Seems to be working!
:)
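
For anyone following the thread who hasn't looked at the allocator: the
"large vs. small split" Ben mentions above is, roughly, a policy of
starting small allocations from one end of the IOMMU virtual space and
large allocations from a midpoint, with each side spilling into the
other only when its preferred region is full. Here is a minimal
user-space sketch of that idea (this is not the actual
arch/powerpc/kernel/iommu.c allocator; the table size, threshold, and
function names are made up for illustration):

/*
 * Toy model of a split IOMMU virtual-space allocator: small requests
 * search from the bottom of the table, large requests from the halfway
 * point, and either side falls back to a full search when its preferred
 * region is exhausted.  Hypothetical names and sizes throughout.
 */
#include <stdio.h>
#include <string.h>

#define TABLE_PAGES     64              /* toy table size, in IOMMU pages */
#define HALF_POINT      (TABLE_PAGES / 2)
#define LARGE_THRESHOLD 8               /* more than this is "large"      */

static unsigned char used[TABLE_PAGES]; /* 1 = IOMMU page already mapped  */

/* Find and claim 'npages' contiguous free entries in [start, end); -1 if none. */
static int find_free_range(int start, int end, int npages)
{
        int i, j;

        for (i = start; i + npages <= end; i++) {
                for (j = 0; j < npages && !used[i + j]; j++)
                        ;
                if (j == npages) {
                        memset(&used[i], 1, npages);
                        return i;
                }
        }
        return -1;
}

/* Allocate from the preferred half first, then spill into the whole table. */
static int split_range_alloc(int npages)
{
        int entry;

        if (npages > LARGE_THRESHOLD)
                entry = find_free_range(HALF_POINT, TABLE_PAGES, npages);
        else
                entry = find_free_range(0, HALF_POINT, npages);

        if (entry < 0)          /* preferred side full: use the other one */
                entry = find_free_range(0, TABLE_PAGES, npages);
        return entry;
}

int main(void)
{
        printf("small alloc at entry %d\n", split_range_alloc(2));
        printf("large alloc at entry %d\n", split_range_alloc(16));
        return 0;
}

The effect is that lots of small, sub-page mappings tend to cluster in
one half and leave the other half's contiguous, alignable space for the
big allocations, which is exactly the scarcity concern quoted above.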
>
> Index: linux-work/arch/powerpc/kernel/iommu.c
> ===================================================================
> --- linux-work.orig/arch/powerpc/kernel/iommu.c 2007-12-21 10:39:39.000000000 +1100
> +++ linux-work/arch/powerpc/kernel/iommu.c 2007-12-21 10:46:18.000000000 +1100
> @@ -278,6 +278,7 @@ int iommu_map_sg(struct iommu_table *tbl
> unsigned long flags;
> struct scatterlist *s, *outs, *segstart;
> int outcount, incount, i;
> + unsigned int align;
> unsigned long handle;
>
> BUG_ON(direction == DMA_NONE);
> @@ -309,7 +310,11 @@ int iommu_map_sg(struct iommu_table *tbl
> /* Allocate iommu entries for that segment */
> vaddr = (unsigned long) sg_virt(s);
> npages = iommu_num_pages(vaddr, slen);
> - entry = iommu_range_alloc(tbl, npages, &handle, mask >> IOMMU_PAGE_SHIFT, 0);
> + align = 0;
> + if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && (vaddr & ~PAGE_MASK) == 0)
> + align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
> + entry = iommu_range_alloc(tbl, npages, &handle,
> + mask >> IOMMU_PAGE_SHIFT, align);
>
> DBG(" - vaddr: %lx, size: %lx\n", vaddr, slen);
>
> @@ -572,7 +577,7 @@ dma_addr_t iommu_map_single(struct iommu
> {
> dma_addr_t dma_handle = DMA_ERROR_CODE;
> unsigned long uaddr;
> - unsigned int npages;
> + unsigned int npages, align;
>
> BUG_ON(direction == DMA_NONE);
>
> @@ -580,8 +585,13 @@ dma_addr_t iommu_map_single(struct iommu
> npages = iommu_num_pages(uaddr, size);
>
> if (tbl) {
> + align = 0;
> + if (IOMMU_PAGE_SHIFT < PAGE_SHIFT &&
> + ((unsigned long)vaddr & ~PAGE_MASK) == 0)
> + align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
> +
> dma_handle = iommu_alloc(tbl, vaddr, npages, direction,
> - mask >> IOMMU_PAGE_SHIFT, 0);
> + mask >> IOMMU_PAGE_SHIFT, align);
> if (dma_handle == DMA_ERROR_CODE) {
> if (printk_ratelimit()) {
> printk(KERN_INFO "iommu_alloc failed, "
>
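
To make the arithmetic in the new hunks concrete: on a 64K-page kernel
(PAGE_SHIFT = 16) with 4K IOMMU pages (IOMMU_PAGE_SHIFT = 12), a
host-page-aligned buffer now requests align = 16 - 12 = 4, i.e. a
16-IOMMU-page (64K) boundary, so the DMA address comes back host-page
aligned whenever the CPU address is. A standalone sketch of just that
computation (user-space, illustrative constants only, not kernel code):

#include <stdio.h>

#define PAGE_SHIFT        16
#define PAGE_SIZE         (1UL << PAGE_SHIFT)
#define PAGE_MASK         (~(PAGE_SIZE - 1))
#define IOMMU_PAGE_SHIFT  12

int main(void)
{
        unsigned long vaddr = 0x10000;  /* a host-page-aligned CPU address */
        unsigned int align = 0;

        /*
         * Same test the patch adds before iommu_range_alloc(): only ask
         * for extra alignment when IOMMU pages are smaller than host
         * pages and the buffer starts on a host page boundary.
         */
        if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && (vaddr & ~PAGE_MASK) == 0)
                align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;

        /*
         * 'align' is an order in IOMMU pages: 2^4 * 4K = 64K here, so the
         * allocated DMA address lands on a host page boundary too.
         */
        printf("align order = %u -> DMA address aligned to %lu bytes\n",
               align, (1UL << align) << IOMMU_PAGE_SHIFT);
        return 0;
}

With 4K host pages (PAGE_SHIFT == IOMMU_PAGE_SHIFT) the first test fails
and align stays 0, so the patch is a no-op there.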
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/