Message-ID: <84ff140f-8162-6c27-1f4a-b25651652212@amd.com>
Date:   Mon, 4 Nov 2019 11:29:57 +0000
From:   "Koenig, Christian" <Christian.Koenig@....com>
To:     Thomas Hellström (VMware) 
        <thomas_os@...pmail.org>, Christoph Hellwig <hch@...radead.org>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: dma coherent memory user-space maps

On 04.11.19 at 07:58, Thomas Hellström (VMware) wrote:
> On 11/4/19 7:38 AM, Thomas Hellström (VMware) wrote:
>> Hi, Christoph,
>>
>> On 10/31/19 10:54 PM, Christoph Hellwig wrote:
>>> Hi Thomas,
>>>
>>> sorry for the delay.  I've been travelling way too much lately and had
>>> a hard time keeping up.
>>>
>>> On Tue, Oct 08, 2019 at 02:34:17PM +0200, Thomas Hellström (VMware) 
>>> wrote:
>>>> /* Obtain struct dma_pfn pointers from a dma coherent allocation */
>>>> int dma_get_dpfns(struct device *dev, void *cpu_addr, dma_addr_t dma_addr,
>>>>                   pgoff_t offset, pgoff_t num, dma_pfn_t dpfns[]);
>>>>
>>>> I figure, for most if not all architectures we could use an ordinary
>>>> pfn as dma_pfn_t, but the dma layer would still have control over how
>>>> those pfns are obtained and how they are used in the kernel's mapping
>>>> APIs.
>>>>
>>>> If so, I could start looking at this, time permitting, for the cases
>>>> where the pfn can be obtained from the kernel address or from
>>>> arch_dma_coherent_to_pfn(), and also the needed work to have a
>>>> tailored vmap_pfn().
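
[For illustration only: under Thomas's proposal above, the TTM side might
drive such an interface roughly as below, assuming dma_pfn_t collapses to
something a tailored vmap_pfn() variant can consume. Neither
map_coherent_range() nor this vmap_pfn() signature exists; both are
hypothetical sketches of the proposal, not code from the thread.]

/*
 * Hypothetical sketch: map @num pages of a coherent allocation into
 * kernel virtual space via dma_pfn_t values, using the tailored
 * vmap_pfn() Thomas mentions above.
 */
static void *map_coherent_range(struct device *dev, void *cpu_addr,
                                dma_addr_t dma_addr, pgoff_t offset,
                                pgoff_t num, pgprot_t prot)
{
        dma_pfn_t *dpfns;
        void *vaddr;
        int ret;

        dpfns = kmalloc_array(num, sizeof(*dpfns), GFP_KERNEL);
        if (!dpfns)
                return NULL;

        ret = dma_get_dpfns(dev, cpu_addr, dma_addr, offset, num, dpfns);
        if (ret) {
                kfree(dpfns);
                return NULL;
        }

        /* Assumed vmap_pfn() variant taking dma_pfn_t plus a pgprot. */
        vaddr = vmap_pfn(dpfns, num, prot);
        kfree(dpfns);
        return vaddr;
}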
>>> I'm not sure that infrastructure is all that helpful unfortunately, even
>>> if it ended up working.  The problem with the 'coherent' DMA mappings
>>> is that they have a few different backends.  For architectures that
>>> are DMA coherent everything is easy and we use the normal page
>>> allocator, and your proposal above is trivially doable as wrappers around
>>> the existing functionality.  Others remap PTEs to be uncached, either
>>> in-place or using vmap, and the remaining ones use weird special
>>> allocators for which almost everything we can normally do in the VM
>>> will fail.
>>
>> Hmm, yes, I was hoping one could hide that behind the dma_pfn_t and the
>> interface, so that non-trivial backends would be able to define dma_pfn_t
>> as needed and, if needed, have their own special implementations of the
>> interface functions. The interface was spec'ed from the user's (TTM)
>> point of view, assuming that with a page-prot and an opaque dma_pfn_t
>> we'd be able to support most non-trivial backends, but perhaps that's
>> not the case?
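
[For illustration, one hypothetical shape such an opaque type could take;
nothing like this exists in the kernel, it merely sketches the idea of
letting trivial backends store a plain pfn while non-trivial ones stash
backend-private state.]

/*
 * Hypothetical only: trivial (truly coherent) backends fill @val with
 * an ordinary pfn; non-trivial backends store a cookie that only their
 * own mapping helpers know how to interpret.
 */
typedef struct {
        unsigned long val;
} dma_pfn_t;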
>>
>>>
>>> I promised Christian an uncached DMA allocator a while ago, and still
>>> haven't finished that either unfortunately.  But based on looking at
>>> the x86 pageattr code I'm now firmly down the road of using the
>>> set_memory_* helpers that change the pte attributes in place, as
>>> everything else can't actually work on x86, which doesn't allow
>>> aliasing of PTEs with different caching attributes.  The arm64 folks
>>> also would prefer in-place remapping even if they don't support it
>>> yet, and that is something the i915 code already does in a somewhat
>>> hacky way, and something the msm drm driver wants.  So I decided to
>>> come up with an API that gives back 'coherent' pages on the
>>> architectures that support it and otherwise just fail.
>>>
>>> Do you care about architectures other than x86 and arm64?  If not I'll
>>> hopefully have something for you soon.
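
[For context, the set_memory_*() helpers Christoph refers to change the
caching attributes of the kernel's existing linear mapping instead of
creating an uncached alias. A minimal sketch of that pattern, assuming
x86 where set_memory_uc()/set_memory_wb() are the real helpers backed by
the pageattr code; the two wrapper functions are made up for
illustration.]

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

/* Sketch: return pages whose linear-map PTEs are switched to uncached. */
static struct page *alloc_uncached_pages(unsigned int order)
{
        struct page *page = alloc_pages(GFP_KERNEL, order);

        if (!page)
                return NULL;
        /* Change the caching attribute in place; no aliased mapping. */
        if (set_memory_uc((unsigned long)page_address(page), 1 << order)) {
                __free_pages(page, order);
                return NULL;
        }
        return page;
}

static void free_uncached_pages(struct page *page, unsigned int order)
{
        /* Restore write-back before handing memory back to the allocator. */
        set_memory_wb((unsigned long)page_address(page), 1 << order);
        __free_pages(page, order);
}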
>>
>> For VMware we only care about x86 and arm64, but I think Christian 
>> needs to fill in here.

The problem is that x86 is the platform where most of the standards are 
defined, and at the same time it is relatively graceful and forgiving when 
you do something odd.

For example, on x86 it doesn't matter if a device accidentally snoops the 
CPU cache on an access even if the CPU thinks that bit of memory is 
uncached. On the other hand, on ARM that can result in rather 
hard-to-detect data corruption. That's the reason why we have disabled 
uncached DMA for now on arm32 and only use it rather restrictively on arm64.

As far as I know, the situation on PowerPC is not good either. There you 
have old systems with AGP, so DMA to uncached system memory definitely 
works somehow, but so far nobody has been able to explain to me how.

Then, last but not least, there are the Loongson/MIPS folks, who seem to 
have gotten radeon/amdgpu working on their architecture as well, but 
essentially I have no idea how.

We care at least about x86, arm64 and PowerPC.

Regards,
Christian.


>
> Also, for VMware the most important missing functionality is vmap() of 
> a combined set of coherent memory allocations, since TTM buffer objects, 
> when using coherent memory, are built by coalescing coherent memory 
> allocations from a pool.
>
> Thanks,
> /Thomas
>
>
>>
>> Thanks,
>>
>> Thomas
>>
>
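
[For reference, the vmap() step Thomas describes above is conceptually
simple; the hard part is that the non-trivial dma-coherent backends cannot
hand out struct pages at all. A sketch under that assumption, which is
exactly the assumption the thread shows to be unsafe; vmap_coalesced() is
a made-up name.]

#include <linux/vmalloc.h>

/*
 * Sketch only: map the pages backing several coalesced coherent
 * allocations as one contiguous kernel virtual range. Assumes each
 * allocation can be reduced to struct page pointers, which the
 * non-trivial backends do not guarantee.
 */
static void *vmap_coalesced(struct page **pages, unsigned int count,
                            pgprot_t prot)
{
        return vmap(pages, count, VM_MAP, prot);
}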
