Message-ID: <4D9F2299.5030503@shipmail.org>
Date:	Fri, 08 Apr 2011 16:58:33 +0200
From:	Thomas Hellstrom <thomas@...pmail.org>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC:	Jerome Glisse <j.glisse@...il.com>, linux-kernel@...r.kernel.org,
	Dave Airlie <airlied@...hat.com>,
	dri-devel@...ts.freedesktop.org,
	Alex Deucher <alexdeucher@...il.com>,
	Konrad Rzeszutek Wilk <konrad@...nel.org>
Subject: Re: [PATCH] cleanup: Add 'struct dev' in the TTM layer to be passed
 in for DMA API calls.

On 04/08/2011 04:57 PM, Thomas Hellstrom wrote:
> Konrad,
>
> Sorry for waiting so long to answer. Workload is quite heavy ATM.
> Please see inline.
>
>
> On 03/31/2011 05:49 PM, Konrad Rzeszutek Wilk wrote:
>>>> I can start this next week if you guys are comfortable with this idea.
>>>>
>>>>
>>> Konrad,
>>>
>>> 1) A couple of questions first. Where are the memory pools going to
>>> end up in this design? Could you draft an API? How is page
>>> accounting going to be taken care of? How do we differentiate
>>> between running on bare metal and running on a hypervisor?
>> My thought was that the memory pools wouldn't be affected. Instead,
>> all of the calls to alloc_page/__free_page (and dma_alloc_coherent/
>> dma_free_coherent) would go through these API calls.
>>
>> What I thought of are three phases:
>>
>>   1). Get in the patch that passes 'struct dev' to dma_alloc_coherent
>>    for 2.6.39 so that PowerPC folks can use it with radeon cards. My
>>    understanding is that the work you plan on isn't going in 2.6.39
>>    but rather in 2.6.40 - and if I get my stuff ready (the other phases)
>>    we can work out the kinks together. This way the 'struct dev'
>>    is also passed into the TTM layer.
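For illustration only, here is a hedged sketch of how a 'struct dev' threaded
into the TTM page-allocation path could pick between dma_alloc_coherent() and
a plain alloc_page(). The helper name ttm_pool_alloc_page() and its parameters
are hypothetical and not the actual patch under discussion:

/*
 * Illustrative sketch only, not the actual patch: a hypothetical helper
 * showing how a 'struct device' passed down from the driver lets the
 * page pool use the DMA API when coherent memory is required.
 */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/types.h>

static struct page *ttm_pool_alloc_page(struct device *dev, gfp_t gfp,
					bool need_coherent,
					dma_addr_t *dma_address)
{
	if (need_coherent && dev) {
		void *vaddr;

		/*
		 * Coherent case (e.g. PowerPC with a radeon card, or running
		 * under a hypervisor): ask the DMA API for a device-visible
		 * page.  This assumes the coherent allocation is page-backed,
		 * as it is on the platforms discussed here.
		 */
		vaddr = dma_alloc_coherent(dev, PAGE_SIZE, dma_address, gfp);
		if (!vaddr)
			return NULL;
		return virt_to_page(vaddr);
	}

	/* Bare-metal, non-coherent case: an ordinary page is fine. */
	*dma_address = 0;
	return alloc_page(gfp);
}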
>
> I'm not happy with this solution. If something goes in, it should be
> complete; otherwise future work needs to worry about not breaking
> something that's already broken. Also, it adds things to the TTM APIs
> that are not really necessary.
>
>
> I'd like to see a solution that encapsulates all device-dependent
> stuff (including the DMA addresses) in the ttm backend, so the TTM
> backend code is the only code that needs to worry about device-dependent
> stuff. Core ttm should only need to worry about whether
> pages can be transferred to other devices, and whether pages can be
> inserted into the page cache.
>
> This change should be pretty straightforward. We move the ttm::pages
> array into the backend, and add ttm backend functions to allocate
> pages and to free pages. The backend is then free to keep
> track of page types and DMA addresses, completely hidden from core ttm,
> and we don't need to shuffle those around. This opens up for both
> completely device-private coherent memory and "dummy device"
> coherent memory.
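A minimal sketch of what such backend hooks might look like, purely as an
illustration of the paragraph above; struct ttm_backend_page_ops and the two
hooks are hypothetical names, not anything that existed in TTM at the time:

/*
 * Sketch of the direction described above -- not actual TTM code.  The idea
 * is that the backend, rather than core TTM, owns the page array and any DMA
 * addresses, so core TTM never has to shuffle dma_addr_t values around.
 */
#include <linux/mm_types.h>
#include <linux/types.h>

struct ttm_backend;	/* existing TTM backend type of that era, opaque here */

struct ttm_backend_page_ops {
	/*
	 * Allocate @num_pages pages for this backend.  Whether they come
	 * from alloc_page(), dma_alloc_coherent() against the real device,
	 * or a "dummy device" is entirely the backend's business; any DMA
	 * addresses stay hidden inside the backend.
	 */
	int (*alloc_pages)(struct ttm_backend *backend,
			   unsigned long num_pages, struct page **pages);

	/* Free pages previously handed out by alloc_pages(). */
	void (*free_pages)(struct ttm_backend *backend,
			   unsigned long num_pages, struct page **pages);
};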
>
> In the future, when TTM needs to move a ttm to another device, or when
> it needs to insert pages into the page cache, device-specific pages
> will be copied and then freed. "Dummy device" pages can be
> transferred to other devices, but not inserted into the page cache.
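The transfer and page-cache rules sketched above could be expressed as a small
policy helper; the enum and function names below are made up for illustration
only and are not part of TTM:

/* Hypothetical illustration of the page policy described above. */
#include <linux/types.h>

enum ttm_example_page_kind {
	TTM_EXAMPLE_PAGE_SYSTEM,	/* plain alloc_page() memory      */
	TTM_EXAMPLE_PAGE_DUMMY_DEV,	/* "dummy device" coherent memory */
	TTM_EXAMPLE_PAGE_DEVICE,	/* device-private coherent memory */
};

static bool ttm_example_can_transfer(enum ttm_example_page_kind kind)
{
	/* Device-specific pages must be copied and freed rather than
	 * handed over directly to another device. */
	return kind != TTM_EXAMPLE_PAGE_DEVICE;
}

static bool ttm_example_can_insert_in_page_cache(enum ttm_example_page_kind kind)
{
	/* Only ordinary system pages may go into the page cache. */
	return kind == TTM_EXAMPLE_PAGE_SYSTEM;
}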
>
> /Thomas
>
>
Oh, I forgot: I'll be on vacation for a week with limited ability
to read mail, but after that I can prototype the ttm backend API changes
if necessary.

/Thomas


