Date:	Tue, 20 May 2014 09:50:15 +0900
From:	Gioh Kim <gioh.kim@....com>
To:	Michal Nazarewicz <mina86@...a86.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>
CC:	Minchan Kim <minchan.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Laura Abbott <lauraa@...eaurora.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Heesub Shin <heesub.shin@...sung.com>,
	Mel Gorman <mgorman@...e.de>,
	Johannes Weiner <hannes@...xchg.org>,
	Marek Szyprowski <m.szyprowski@...sung.com>,
	이건호 <gunho.lee@....com>, gurugio@...il.com
Subject: Re: [RFC][PATCH] CMA: drivers/base/Kconfig: restrict CMA size to
 non-zero value



On 2014-05-20 4:59 AM, Michal Nazarewicz wrote:
> On Sun, May 18 2014, Joonsoo Kim wrote:
>> I think that this problem is originated from atomic_pool_init().
>> If configured coherent_pool size is larger than default cma size,
>> it can be failed even if this patch is applied.

The coherent_pool size (atomic_pool.size) should be restricted to be smaller than the CMA size.

This is a separate issue, but I think the default atomic pool size is too small.
A single USB host port needs at most 256 KB of coherent memory (according to the USB host spec).
If a platform has several ports, it needs more than 1 MB.
Therefore the default atomic pool size should be at least 1 MB.

>>
>> How about below patch?
>> It uses fallback allocation if CMA is failed.
>
> Yes, I thought about it, but __dma_alloc uses similar code:
>
> 	else if (!IS_ENABLED(CONFIG_DMA_CMA))
> 		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
> 	else
> 		addr = __alloc_from_contiguous(dev, size, prot, &page, caller);
>
> so it probably needs to be changed as well.

If the CMA option is not selected, __alloc_from_contiguous() is never called,
so we don't need the fallback allocation there.

And if the CMA option is selected and initialized correctly,
a CMA allocation can still fail when no CMA memory is left.
I think we don't need the fallback allocation in that case either,
because that is a normal out-of-memory situation.

Therefore I think restricting the CMA size option so that CMA is guaranteed to work can cover every case.

I also think the patch below is a good choice.
Michal, Joonsoo, if either of you disagrees with me, please let me know.
I will then make a patch that includes both the option restriction and the fallback allocation.

>
>> -----------------8<---------------------
>> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
>> index 6b00be1..2909ab9 100644
>> --- a/arch/arm/mm/dma-mapping.c
>> +++ b/arch/arm/mm/dma-mapping.c
>> @@ -379,7 +379,7 @@ static int __init atomic_pool_init(void)
>>          unsigned long *bitmap;
>>          struct page *page;
>>          struct page **pages;
>> -       void *ptr;
>> +       void *ptr = NULL;
>>          int bitmap_size = BITS_TO_LONGS(nr_pages) * sizeof(long);
>>
>>          bitmap = kzalloc(bitmap_size, GFP_KERNEL);
>> @@ -393,7 +393,7 @@ static int __init atomic_pool_init(void)
>>          if (IS_ENABLED(CONFIG_DMA_CMA))
>>                  ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page,
>>                                                atomic_pool_init);
>> -       else
>> +       if (!ptr)
>>                  ptr = __alloc_remap_buffer(NULL, pool->size, gfp, prot, &page,
>>                                             atomic_pool_init);
>>          if (ptr) {
>>
>
>
>
