Date:   Mon, 4 Nov 2019 07:58:30 +0100
From:   Thomas Hellström (VMware) 
        <thomas_os@...pmail.org>
To:     Christoph Hellwig <hch@...radead.org>
Cc:     Christian König <christian.koenig@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: dma coherent memory user-space maps

On 11/4/19 7:38 AM, Thomas Hellström (VMware) wrote:
> Hi Christoph,
>
> On 10/31/19 10:54 PM, Christoph Hellwig wrote:
>> Hi Thomas,
>>
>> sorry for the delay.  I've been travelling way too much lately and had
>> a hard time keeping up.
>>
>> On Tue, Oct 08, 2019 at 02:34:17PM +0200, Thomas Hellström (VMware) wrote:
>>> /* Obtain struct dma_pfn pointers from a dma coherent allocation */
>>> int dma_get_dpfns(struct device *dev, void *cpu_addr, dma_addr_t dma_addr,
>>>                   pgoff_t offset, pgoff_t num, dma_pfn_t dpfns[]);
>>>
>>> I figure that for most, if not all, architectures we could use an
>>> ordinary pfn as dma_pfn_t, but the dma layer would still have control
>>> over how those pfns are obtained and how they are used in the kernel's
>>> mapping APIs.
>>>
>>> If so, I could start looking at this, time permitting, for the cases
>>> where the pfn can be obtained from the kernel address or from
>>> arch_dma_coherent_to_pfn(), and also at the work needed to have a
>>> tailored vmap_pfn().
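>>>
>>> For illustration, a rough sketch of how TTM could consume this from a
>>> fault handler (hypothetical code; it assumes dma_pfn_t ends up being a
>>> plain pfn that vmf_insert_pfn() accepts):
>>>
>>>     dma_pfn_t dpfn;
>>>
>>>     /* Ask the dma layer for the pfn backing the faulting page
>>>      * of the coherent allocation... */
>>>     if (dma_get_dpfns(dev, cpu_addr, dma_addr, vmf->pgoff, 1, &dpfn))
>>>             return VM_FAULT_SIGBUS;
>>>
>>>     /* ...and insert it into the user-space vma. */
>>>     return vmf_insert_pfn(vmf->vma, vmf->address, dpfn);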
>> I'm not sure that infrastructure is all that helpful, unfortunately, even
>> if it ended up working.  The problem with the 'coherent' DMA mappings
>> is that they have a few different backends.  For architectures that
>> are DMA coherent everything is easy and we use the normal page
>> allocator, and your API above is trivially doable as wrappers around
>> the existing functionality.  Others remap the PTEs to be uncached,
>> either in-place or using vmap, and the remaining ones use weird special
>> allocators for which almost everything we can normally do in the VM
>> will fail.
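>>
>> (The remap backends, for instance, conceptually end up doing
>> something like
>>
>>      void *vaddr = vmap(pages, count, VM_MAP,
>>                         pgprot_noncached(PAGE_KERNEL));
>>
>> so the address they hand back lives in the vmalloc range, where
>> virt_to_page() and friends no longer apply.)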
>
> Hmm, yes. I was hoping one could hide that behind dma_pfn_t and the
> interface, so that non-trivial backends would be able to define
> dma_pfn_t as needed and also, if needed, have their own special
> implementations of the interface functions. The interface was spec'ed
> from the user's (TTM) point of view, assuming that with a page-prot and
> an opaque dma_pfn_t we'd be able to support most non-trivial backends,
> but perhaps that's not the case?
>
>>
>> I promised Christian an uncached DMA allocator a while ago, and still
>> haven't finished that either, unfortunately.  But based on looking at
>> the x86 pageattr code I'm now firmly down the road of using the
>> set_memory_* helpers that change the pte attributes in place, as
>> nothing else can actually work on x86, which doesn't allow aliasing
>> of PTEs with different caching attributes.  The arm64 folks would
>> also prefer in-place remapping even if they don't support it yet;
>> it is something the i915 code already does in a somewhat hacky way,
>> and something the msm drm driver wants.  So I decided to come up
>> with an API that gives back 'coherent' pages on the architectures
>> that support it and otherwise just fails.
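>>
>> Roughly, the idea is something like this (just a sketch, not the final
>> API; set_memory_uc() and set_memory_wb() are the existing helpers that
>> change the attributes in the kernel's direct map):
>>
>>      /* Get normal write-back pages from the page allocator... */
>>      struct page *page = alloc_pages(GFP_KERNEL, order);
>>      unsigned long addr = (unsigned long)page_address(page);
>>
>>      /* ...and flip their kernel-mapping PTEs to uncached in place,
>>       * so no WB/UC alias of the same memory ever exists. */
>>      if (set_memory_uc(addr, 1 << order))
>>              goto out_free;
>>
>>      /* On free, restore write-back first, or the page allocator
>>       * would later hand out pages that are still uncached. */
>>      set_memory_wb(addr, 1 << order);
>>      __free_pages(page, order);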
>>
>> Do you care about architectures other than x86 and arm64?  If not,
>> I'll hopefully have something for you soon.
>
> For VMware we only care about x86 and arm64, but I think Christian
> needs to fill in here.

Also, for VMware, the most important missing piece is vmap() of a
combined set of coherent memory allocations: when using coherent
memory, TTM buffer objects are built by coalescing coherent memory
allocations from a pool.
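
Something like the following is what we would want to be able to do
(only a rough sketch; it assumes the coherent memory is backed by
struct pages, which holds for dma-direct but not for the special
allocators you mention, and "cpu_addrs" is a hypothetical array holding
the kernel addresses of the pooled allocations):

	/* Coalesce the pages of N single-page coherent allocations
	 * and map them as one contiguous kernel virtual range. */
	struct page **pages;
	void *vaddr;
	int i;

	pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	for (i = 0; i < count; i++)
		pages[i] = virt_to_page(cpu_addrs[i]);

	vaddr = vmap(pages, count, VM_MAP, PAGE_KERNEL);

	/* vmap() doesn't keep a reference to the array itself. */
	kfree(pages);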

Thanks,
/Thomas


>
> Thanks,
>
> Thomas
>
