Date:	Mon, 27 Oct 2014 09:48:45 +0900
From:	Gioh Kim <gioh.kim@....com>
To:	Laura Abbott <lauraa@...eaurora.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	John Stultz <john.stultz@...aro.org>,
	Rebecca Schultz Zavin <rebecca@...roid.com>
CC:	devel@...verdev.osuosl.org, gunho.lee@....com,
	linux-kernel@...r.kernel.org
Subject: Re: [RFCv2 3/3] staging: ion: limit pool size



On 2014-10-25 5:53 AM, Laura Abbott wrote:
> Hi,
>
> On 10/23/2014 11:47 PM, Gioh Kim wrote:
>> This patch limits pool size by page unit.
>>
>
> This looks useful. Might be nice to add a debugfs option
> to change this at runtime as well.
>
>> Signed-off-by: Gioh Kim <gioh.kim@....com>
>> ---
>>   drivers/staging/android/ion/Kconfig         |    4 ++++
>>   drivers/staging/android/ion/ion_page_pool.c |   26 ++++++++++++++++----------
>>   2 files changed, 20 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/staging/android/ion/Kconfig b/drivers/staging/android/ion/Kconfig
>> index 3452346..e6b1a54 100644
>> --- a/drivers/staging/android/ion/Kconfig
>> +++ b/drivers/staging/android/ion/Kconfig
>> @@ -33,3 +33,7 @@ config ION_TEGRA
>>       help
>>         Choose this option if you wish to use ion on an nVidia Tegra.
>>
>> +config ION_POOL_LIMIT
>> +    int "Limit count of pages in pool"
>> +    depends on ION
>> +    default "0"
>
> Can you add help text here? It would be useful to clarify that the
> units are in pages and that 0 will allow unlimited growth of the
> pool. This should also clarify that this is a limit per
> individual pool and not a limit for all page pools in the system.
>
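A help section covering those points might read something like this (the wording below is only a sketch of what Laura is asking for, not final text):

```kconfig
config ION_POOL_LIMIT
	int "Limit count of pages in pool"
	depends on ION
	default "0"
	help
	  Maximum number of pages each ion page pool may hold. When a
	  pool is at its limit, freed pages bypass the pool and are
	  returned directly to the system. The limit applies to each
	  individual pool, not to all page pools in the system
	  combined. A value of 0 (the default) allows the pool to
	  grow without limit.
```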
>> diff --git a/drivers/staging/android/ion/ion_page_pool.c b/drivers/staging/android/ion/ion_page_pool.c
>> index 165152f..d63e93f 100644
>> --- a/drivers/staging/android/ion/ion_page_pool.c
>> +++ b/drivers/staging/android/ion/ion_page_pool.c
>> @@ -22,8 +22,11 @@
>>   #include <linux/module.h>
>>   #include <linux/slab.h>
>>   #include <linux/swap.h>
>> +#include <linux/kconfig.h>
>>   #include "ion_priv.h"
>>
>> +#define POOL_LIMIT CONFIG_ION_POOL_LIMIT
>> +
>
> I don't think the extra #define helps anything here, was
> there something else intended here?

No, I was just following the existing code.
If it isn't necessary, I will remove it in the v2 patch.


>
>>   static void *ion_page_pool_alloc_pages(struct ion_page_pool *pool)
>>   {
>>       struct page *page = alloc_pages(pool->gfp_mask, pool->order);
>> @@ -41,8 +44,21 @@ static void ion_page_pool_free_pages(struct ion_page_pool *pool,
>>       __free_pages(page, pool->order);
>>   }
>>
>> +static int ion_page_pool_total(struct ion_page_pool *pool, bool high)
>> +{
>> +    int count = pool->low_count;
>> +
>> +    if (high)
>> +        count += pool->high_count;
>> +
>> +    return count << pool->order;
>> +}
>> +
>>   static int ion_page_pool_add(struct ion_page_pool *pool, struct page *page)
>>   {
>> +    if (POOL_LIMIT && ion_page_pool_total(pool, 1) > POOL_LIMIT)
>> +        return 1;
>> +
>>       mutex_lock(&pool->mutex);
>>       if (PageHighMem(page)) {
>>           list_add_tail(&page->lru, &pool->high_items);
>> @@ -103,16 +119,6 @@ void ion_page_pool_free(struct ion_page_pool *pool, struct page *page)
>>           ion_page_pool_free_pages(pool, page);
>>   }
>>
>> -static int ion_page_pool_total(struct ion_page_pool *pool, bool high)
>> -{
>> -    int count = pool->low_count;
>> -
>> -    if (high)
>> -        count += pool->high_count;
>> -
>> -    return count << pool->order;
>> -}
>> -
>>   int ion_page_pool_shrink(struct ion_page_pool *pool, gfp_t gfp_mask,
>>                   int nr_to_scan)
>>   {
>>
>
> Thanks,
> Laura
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
