Message-ID: <3e24ecc5-25e1-3d5e-2092-daa95ae36cba@gmail.com>
Date:   Thu, 20 Dec 2018 13:19:04 +0200
From:   Oleksandr Andrushchenko <andr2000@...il.com>
To:     Gerd Hoffmann <kraxel@...hat.com>,
        "Oleksandr_Andrushchenko@...m.com" <Oleksandr_Andrushchenko@...m.com>
Cc:     Noralf Trønnes <noralf@...nnes.org>,
        xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
        dri-devel@...ts.freedesktop.org, daniel.vetter@...el.com,
        jgross@...e.com, boris.ostrovsky@...cle.com
Subject: Re: [PATCH] drm/xen-front: Make shmem backed display buffer coherent

On 12/19/18 4:10 PM, Gerd Hoffmann wrote:
>    Hi,
>
>>> Sure this actually helps?  It's below 4G in guest physical address
>>> space, so it can be backed by pages which are actually above 4G in host
>>> physical address space ...
>> Yes, you are right here. This is why I wrote about the IOMMU
>> and other conditions. E.g. you can have a device which only
>> supports 32-bit addressing, but thanks to the IOMMU it can access pages
>> above 4GiB seamlessly. So, this is why I *hope* that this code *may* help
>> such devices. Do you think I don't need that and should remove it?
> I would try without that, and maybe add a runtime option (module
> parameter) later if it turns out some hardware actually needs that.
> Devices which can only do 32-bit DMA are becoming less and less common
> these days.
Good point, I will remove it then.
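If it ever turns out some hardware does need it, I imagine the opt-in would
look roughly like the sketch below; the force_dma32 name and the allocation
helper are only an illustration, not part of this patch:

#include <linux/module.h>
#include <linux/gfp.h>

/* Hypothetical opt-in for devices limited to 32-bit DMA; not part of the patch. */
static bool force_dma32;
module_param(force_dma32, bool, 0444);
MODULE_PARM_DESC(force_dma32,
		 "Allocate display buffers below 4 GiB for 32-bit DMA-only devices");

/* Restrict the allocation zone only when explicitly requested. */
static gfp_t dbuf_gfp_mask(void)
{
	return force_dma32 ? (GFP_USER | __GFP_DMA32) : GFP_USER;
}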
>>>>>> +    if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
>>>>>> +            DMA_BIDIRECTIONAL)) {
>>>>> Are you using the DMA streaming API as a way to flush the caches?
>>>> Yes
>>>>> Does this mean that GFP_USER isn't making the buffer coherent?
>>>> No, it didn't help. I had a question [1] asking if there is any better way
>>>> to achieve the same, but haven't had any response yet. So, I implemented
>>>> it via the DMA API, which helped.
>>> set_pages_array_*() ?
>>>
>>> See arch/x86/include/asm/set_memory.h
>> Well, x86... I am on arm which doesn't define that...
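For reference, on x86 those helpers would let one change the caching
attributes of the buffer pages directly, roughly as in the sketch below
(hypothetical wrapper names; x86 only, so not an option for arm here):

#include <asm/set_memory.h>
#include <linux/mm_types.h>

/* x86-only sketch: switch an array of pages to write-combined and back. */
static int dbuf_pages_to_wc(struct page **pages, int num_pages)
{
	return set_pages_array_wc(pages, num_pages);
}

static int dbuf_pages_to_wb(struct page **pages, int num_pages)
{
	return set_pages_array_wb(pages, num_pages);
}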
> Oh, arm.  Maybe ask on an arm list then.  I know on arm you have to care
> about caching a lot more, but that is also where my knowledge ends ...
>
> Using dma_map_sg for cache flushing looks like a sledgehammer approach
> to me.
It is. This is why I am so unsure this is the way to go.
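For context, the sledgehammer boils down to round-tripping the pages through
the streaming DMA API purely for the cache maintenance it performs; a rough
sketch of the idea (helper name and error handling are illustrative, not the
exact patch code):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Flush CPU caches for a buffer's pages by mapping and immediately unmapping
 * them with the streaming DMA API; the mapping itself is not used, only the
 * cache maintenance it triggers.
 */
static int flush_dbuf_via_dma(struct device *dev, struct sg_table *sgt)
{
	if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
		return -ENOMEM;

	dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
	return 0;
}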
>    But maybe it is needed to make xen flush the caches (xen guests
> have their own dma mapping implementation, right?  Or is this different
> on arm than on x86?).
I'll try to figure that out.
> cheers,
>    Gerd
>
Thank you,
Oleksandr
