Message-ID: <80c42ce0-6df1-71ab-81ec-e46cc56840ba@amd.com>
Date:   Mon, 15 Feb 2021 13:10:10 +0100
From:   Christian König <christian.koenig@....com>
To:     Thomas Zimmermann <tzimmermann@...e.de>,
        linux-media <linux-media@...r.kernel.org>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        linaro-mm-sig@...ts.linaro.org, lkml <linux-kernel@...r.kernel.org>
Cc:     "Sharma, Shashank" <Shashank.Sharma@....com>
Subject: Re: DMA-buf and uncached system memory



On 15.02.21 at 13:00, Thomas Zimmermann wrote:
> Hi
>
> On 15.02.21 at 10:49, Thomas Zimmermann wrote:
>> Hi
>>
>> On 15.02.21 at 09:58, Christian König wrote:
>>> Hi guys,
>>>
>>> we are currently working on FreeSync and direct scan-out from system 
>>> memory on AMD APUs in A+A laptops.
>>>
>>> One problem we stumbled over is that our display hardware needs to 
>>> scan out from uncached system memory, and we currently don't have a 
>>> way to communicate that through DMA-buf.
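
For illustration, "uncached system memory" in kernel terms typically 
means write-combined memory from the DMA API. A minimal sketch, assuming 
a device "dev" and a buffer "size", using the existing dma_alloc_wc()/
dma_free_wc() helpers; what DMA-buf lacks is a way to request this 
across an export/import:

	/* Allocate write-combined (CPU-uncached) system memory that
	 * display hardware could scan out from.
	 */
	dma_addr_t dma_addr;
	void *vaddr;

	vaddr = dma_alloc_wc(dev, size, &dma_addr, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* ... point the display controller at dma_addr for scan-out ... */

	dma_free_wc(dev, size, vaddr, dma_addr);
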
>
> Re-reading this paragraph, it sounds more as if you want to let the 
> exporter know where to move the buffer. Is this another case of the 
> missing-pin-flag problem?

No, your original interpretation was correct. Maybe my wording was a bit 
imprecise.

The real underlying issue is that our display hardware has a problem 
with latency when accessing system memory.

So the question is whether that also applies to other hardware, for 
example Intel's, or whether it is something AMD-specific.

Regards,
Christian.

>
> Best regards
> Thomas
>
>>>
>>> For our specific use case we are going to implement something 
>>> driver-specific, but the question is whether we should have 
>>> something more generic for this.
>>
>> For vmap operations, we return the address as struct dma_buf_map, 
>> which contains additional information about the memory buffer. In 
>> vram helpers, we have the interface drm_gem_vram_offset() that 
>> returns the offset of the GPU device memory.
>>
>> Would it be feasible to combine both concepts into a dma-buf 
>> interface that returns the device-memory offset plus the additional 
>> caching flag?
>>
>> There'd be a structure and a getter function returning the structure.
>>
>> struct dma_buf_offset {
>>      bool cached;
>>      u64 address;
>> };
>>
>> // Return the offset in *off; 0 on success, negative errno on failure.
>> int dma_buf_offset(struct dma_buf *buf, struct dma_buf_offset *off);
>>
>> Whatever settings are returned by dma_buf_offset() are valid while 
>> the dma_buf is pinned.
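
As an importer-side illustration (purely a sketch: dma_buf_offset() and 
the cached flag exist only in this proposal, "attach" is assumed to be 
an existing struct dma_buf_attachment, and program_scanout() is a 
hypothetical driver hook):

	struct dma_buf_offset off;
	int ret;

	/* Pin first; the returned settings stay valid while pinned. */
	ret = dma_buf_pin(attach);
	if (ret)
		return ret;

	ret = dma_buf_offset(attach->dmabuf, &off);
	if (ret) {
		dma_buf_unpin(attach);
		return ret;
	}

	if (!off.cached)
		program_scanout(off.address);
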
>>
>> Best regards
>> Thomas
>>
>>>
>>> After all, the system memory access pattern is a PCIe extension and 
>>> as such something generic.
>>>
>>> Regards,
>>> Christian.
>
