Date:   Wed, 19 Dec 2018 15:21:46 +0200
From:   Oleksandr Andrushchenko <andr2000@...il.com>
To:     Gerd Hoffmann <kraxel@...hat.com>
Cc:     Noralf Trønnes <noralf@...nnes.org>,
        xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
        dri-devel@...ts.freedesktop.org, daniel.vetter@...el.com,
        jgross@...e.com, boris.ostrovsky@...cle.com,
        Oleksandr Andrushchenko <oleksandr_andrushchenko@...m.com>
Subject: Re: [PATCH] drm/xen-front: Make shmem backed display buffer coherent

On 12/19/18 3:14 PM, Gerd Hoffmann wrote:
>    Hi,
>
>>>> +    mapping = xen_obj->base.filp->f_mapping;
>>>> +    mapping_set_gfp_mask(mapping, GFP_USER | __GFP_DMA32);
>>> Let's see if I understand what you're doing:
>>>
>>> Here you say that the pages should be DMA accessible for devices that can
>>> only see 4GB.
>> Yes, your understanding is correct. As we are a para-virtualized device we
>> do not have strict requirements for 32-bit DMA. But, via dma-buf export,
>> the buffer we create can be used by real HW, e.g. one can pass-through
>> real HW devices into a guest domain and they can import our buffer (yes,
>> they can be IOMMU backed and other conditions may apply).
>> So, this is why we are limiting to DMA32 here, just to allow more possible
>> use-cases
> Sure this actually helps?  It's below 4G in guest physical address
> space, so it can be backed by pages which are actually above 4G in host
> physical address space ...

Yes, you are right here. This is why I wrote about the IOMMU
and other conditions. E.g. you can have a device which only
expects 32-bit addresses, but thanks to the IOMMU it can access
pages above 4GiB seamlessly. So, this is why I *hope* that this
code *may* help such devices. Do you think it is not needed and
should be removed?

>>>> +    if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
>>>> +            DMA_BIDIRECTIONAL)) {
>>>
>>> Are you using the DMA streaming API as a way to flush the caches?
>> Yes
>>> Does this mean that GFP_USER isn't making the buffer coherent?
>> No, it didn't help. I had a question [1] if there are any other better way
>> to achieve the same, but didn't have any response yet. So, I implemented
>> it via DMA API which helped.
> set_pages_array_*() ?
>
> See arch/x86/include/asm/set_memory.h
Well, that is x86... I am on arm, which doesn't define those...
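For reference, the approach described above (using the streaming DMA API purely as a cache-flush mechanism, since set_pages_array_*() is x86-only) could look roughly like the sketch below. This is a hypothetical illustration, not the actual patch: the helper name and the abbreviated error handling are mine, and the sg_table is assumed to have been built from the shmem pages already.

```c
/* Hypothetical sketch: flush CPU caches for a shmem-backed buffer
 * on arm by mapping and immediately unmapping it with the streaming
 * DMA API. Kernel code; builds only in-tree.
 */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int flush_buffer_via_streaming_dma(struct device *dev,
					  struct sg_table *sgt)
{
	/* dma_map_sg() performs the cache maintenance needed to make
	 * the pages visible to a device; it returns 0 on failure. */
	if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
		return -EFAULT;

	/* Only the flush is wanted, not the mapping itself, so the
	 * mapping can be torn down right away. */
	dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
	return 0;
}
```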
> HTH,
>    Gerd
>
Thank you,

Oleksandr
