Message-ID: <CAC5umyjHruhnwiKwrHLBAF+g0ZDVouuuNvrisrUH8o963GyytQ@mail.gmail.com>
Date: Fri, 3 Oct 2014 08:08:33 +0900
From: Akinobu Mita <akinobu.mita@...il.com>
To: Peter Hurley <peter@...leysoftware.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
David Woodhouse <dwmw2@...radead.org>,
Don Dutile <ddutile@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Andi Kleen <andi@...stfloor.org>,
x86@...nel.org, iommu@...ts.linux-foundation.org,
Greg KH <greg@...ah.com>
Subject: Re: [PATCH v3 0/5] enhance DMA CMA on x86
2014-10-03 7:03 GMT+09:00 Peter Hurley <peter@...leysoftware.com>:
> On 10/02/2014 12:41 PM, Konrad Rzeszutek Wilk wrote:
>> On Tue, Sep 30, 2014 at 09:49:54PM -0400, Peter Hurley wrote:
>>> On 09/30/2014 07:45 PM, Thomas Gleixner wrote:
>>> Which is different than if the plan is to ship production units for x86;
>>> then a general purpose solution will be required.
>>>
>>> As to the good design of a general purpose solution for allocating and
>>> mapping huge order pages, you are certainly more qualified to help Akinobu
>>> than I am.
>
> What Akinobu's patches intend to support is:
>
> phys_addr = dma_alloc_coherent(dev, 64 * 1024 * 1024, &bus_addr, GFP_KERNEL);
>
> which raises three issues:
>
> 1. Where do coherent blocks of this size come from?
> 2. How to prevent fragmentation of these reserved blocks over time by
> existing DMA users?
> 3. Is this support generically required across all iommu implementations on x86?
>
> Questions 1 and 2 are non-trivial, in the general case, otherwise the page
> allocator would already do this. Simply dropping in the contiguous memory
> allocator doesn't work because CMA does not have the same policy and performance
> as the page allocator, and is already causing performance regressions even
> in the absence of huge page allocations.
Could you take a look at the patches I sent and see whether they address these issues?
https://lkml.org/lkml/2014/9/28/110
With these patches, the normal alloc_pages() path is tried first, and
dma_alloc_from_contiguous() is used only as a fallback.
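
Roughly, the allocation order is like the sketch below (illustrative only,
with names of my own choosing, not the literal patch; the real code also
has to check the device's DMA mask, filter the GFP flags, and free CMA
pages with dma_release_from_contiguous() rather than __free_pages()):

        #include <linux/dma-mapping.h>
        #include <linux/dma-contiguous.h>
        #include <linux/gfp.h>

        static void *sketch_alloc_coherent(struct device *dev, size_t size,
                                           dma_addr_t *dma_handle, gfp_t gfp)
        {
                unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
                unsigned int order = get_order(size);
                struct page *page;

                /* Try the normal page allocator first ... */
                page = alloc_pages_node(dev_to_node(dev), gfp, order);

                /* ... and fall back to the reserved CMA area only if that fails. */
                if (!page)
                        page = dma_alloc_from_contiguous(dev, count, order);
                if (!page)
                        return NULL;

                *dma_handle = page_to_phys(page);
                return page_address(page);
        }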
> So that's why I raised question 3; is making the necessary compromises to support
> 64MB coherent DMA allocations across all x86 iommu implementations actually
> required?
>
> Prior to Akinobu's patches, the use of CMA by x86 iommu configurations was
> designed to be limited to testing configurations, as the introductory
> commit states:
>
> commit 0a2b9a6ea93650b8a00f9fd5ee8fdd25671e2df6
> Author: Marek Szyprowski <m.szyprowski@...sung.com>
> Date: Thu Dec 29 13:09:51 2011 +0100
>
> X86: integrate CMA with DMA-mapping subsystem
>
> This patch adds support for CMA to dma-mapping subsystem for x86
> architecture that uses common pci-dma/pci-nommu implementation. This
> allows to test CMA on KVM/QEMU and a lot of common x86 boxes.
>
> Signed-off-by: Marek Szyprowski <m.szyprowski@...sung.com>
> Signed-off-by: Kyungmin Park <kyungmin.park@...sung.com>
> CC: Michal Nazarewicz <mina86@...a86.com>
> Acked-by: Arnd Bergmann <arnd@...db.de>
>
>
> Which brings me to my suggestion: if support for huge coherent DMA is
> required only for a special test platform, then could not this support
> be specific to a new iommu configuration, namely iommu=cma, which would
> get initialized much the same way that iommu=calgary is now.
>
> The code for such an iommu configuration would mostly duplicate
> arch/x86/kernel/pci-swiotlb.c and the CMA support would get removed from
> the other x86 iommu implementations.
I'm not sure I'm reading this correctly, though. Could the boot option
'cma=0' also help avoid using CMA in the x86 IOMMU implementations?
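(To be explicit, I mean passing a zero size to the existing 'cma='
kernel command line parameter, e.g.:

        cma=0

If I understand that parameter correctly, a zero size should mean no
global CMA area is reserved at boot, but I may be missing something.)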