Message-ID: <e8dd6d17-26a9-907b-389b-a571f1521bc1@redhat.com>
Date:	Fri, 6 May 2016 10:16:16 -0700
From:	Laura Abbott <labbott@...hat.com>
To:	Chen Feng <puck.chen@...ilicon.com>, yudongbin@...ilicon.com,
	gregkh@...uxfoundation.org, arve@...roid.com,
	riandrews@...roid.com, paul.gortmaker@...driver.com,
	bmarsh94@...il.com, devel@...verdev.osuosl.org,
	linux-kernel@...r.kernel.org
Cc:	suzhuangluan@...ilicon.com, dan.zhao@...ilicon.com,
	zhaojunmin@...wei.com, xuyiping@...ilicon.com,
	puck.chen@...mail.com
Subject: Re: [PATCH] ION: Sys_heap: Makes ion buffer always alloc from page
 pool

On 05/05/2016 07:48 PM, Chen Feng wrote:
>
>
> On 2016/5/6 1:09, Laura Abbott wrote:
>> On 05/04/2016 08:27 PM, Chen Feng wrote:
>>> Make the ion buffer always be allocated from the page pool, whether
>>> it is cached or not. This improves allocation efficiency.
>>>
>>> Currently, there is no difference from cached or non-cached buffer
>>> for the page pool.
>>
>>
>> The advantage of the uncached pool was that the pages in the pool
>> were always clean in the cache. That is lost here with the addition
>> of cached pages to the same pool as uncached pages. I agree the
>> cached path could benefit from pooling, but we need to keep the
>> caching model consistent.
>>
> Yes, the buffers in the pool are non-cached.
>
> I found that ION does not have a cache-invalidate operation.
> Currently, we use an ioctl to maintain cache coherency, so in
> practice there is no difference between the two cases.
>
> So, what do you think about adding a separate cached pool to the
> system heap? If you agree, I can send a new patch to do this.
>

Yes, that sounds like a good approach.

>>>
>>> Signed-off-by: Chen Feng <puck.chen@...ilicon.com>
>>> ---
>>>  drivers/staging/android/ion/ion_system_heap.c | 19 ++-----------------
>>>  1 file changed, 2 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
>>> index b69dfc7..caf11fc 100644
>>> --- a/drivers/staging/android/ion/ion_system_heap.c
>>> +++ b/drivers/staging/android/ion/ion_system_heap.c
>>> @@ -56,24 +56,10 @@ static struct page *alloc_buffer_page(struct ion_system_heap *heap,
>>>                        struct ion_buffer *buffer,
>>>                        unsigned long order)
>>>  {
>>> -    bool cached = ion_buffer_cached(buffer);
>>>      struct ion_page_pool *pool = heap->pools[order_to_index(order)];
>>>      struct page *page;
>>>
>>> -    if (!cached) {
>>> -        page = ion_page_pool_alloc(pool);
>>> -    } else {
>>> -        gfp_t gfp_flags = low_order_gfp_flags;
>>> -
>>> -        if (order > 4)
>>> -            gfp_flags = high_order_gfp_flags;
>>> -        page = alloc_pages(gfp_flags | __GFP_COMP, order);
>>> -        if (!page)
>>> -            return NULL;
>>> -        ion_pages_sync_for_device(NULL, page, PAGE_SIZE << order,
>>> -                        DMA_BIDIRECTIONAL);
>>> -    }
>>> -
>>> +    page = ion_page_pool_alloc(pool);
>>>      return page;
>>>  }
>>>
>>> @@ -81,9 +67,8 @@ static void free_buffer_page(struct ion_system_heap *heap,
>>>                   struct ion_buffer *buffer, struct page *page)
>>>  {
>>>      unsigned int order = compound_order(page);
>>> -    bool cached = ion_buffer_cached(buffer);
>>>
>>> -    if (!cached && !(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)) {
>>> +    if (!(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)) {
>>>          struct ion_page_pool *pool = heap->pools[order_to_index(order)];
>>>
>>>          ion_page_pool_free(pool, page);
>>>
