Date:	Wed, 26 Jan 2011 13:50:51 -0600
From:	Robert Jennings <rcj@...ux.vnet.ibm.com>
To:	Pekka Enberg <penberg@...helsinki.fi>
Cc:	Nitin Gupta <ngupta@...are.org>,
	Greg Kroah-Hartman <gregkh@...e.de>,
	Robert Jennings <rcj@...ux.vnet.ibm.com>,
	devel@...verdev.osuosl.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/7] zram: Speed insertion of new pages with cached idx

* Pekka Enberg (penberg@...helsinki.fi) wrote:
> On Wed, Jan 26, 2011 at 7:23 PM, Robert Jennings> <rcj@...ux.vnet.ibm.com> wrote:
>> Calculate the first- and second-level indices for a new page when the
>> pool is initialized rather than calculating them on each insertion.
>> 
>> Signed-off-by: Robert Jennings <rcj@...ux.vnet.ibm.com>
>> ---
>>  drivers/staging/zram/xvmalloc.c     |   13 +++++++++++--
>>  drivers/staging/zram/xvmalloc_int.h |    4 ++++
>>  2 files changed, 15 insertions(+), 2 deletions(-)
>> 
>> diff --git a/drivers/staging/zram/xvmalloc.c b/drivers/staging/zram/xvmalloc.c
>> index 3fdbb8a..a507f95 100644
>> --- a/drivers/staging/zram/xvmalloc.c
>> +++ b/drivers/staging/zram/xvmalloc.c
>> @@ -184,8 +184,13 @@ static void insert_block(struct xv_pool *pool, struct page *page, u32 offset,
>>  	u32 flindex, slindex;
>>  	struct block_header *nextblock;
>>  
>> -	slindex = get_index_for_insert(block->size);
>> -	flindex = slindex / BITS_PER_LONG;
>> +	if (block->size >= (PAGE_SIZE - XV_ALIGN)) {
>> +		slindex = pagesize_slindex;
>> +		flindex = pagesize_flindex;
>> +	} else {
>> +		slindex = get_index_for_insert(block->size);
>> +		flindex = slindex / BITS_PER_LONG;
>> +	}
>>  
>>  	block->link.prev_page = 0;
>>  	block->link.prev_offset = 0;
>> @@ -316,6 +321,10 @@ struct xv_pool *xv_create_pool(void)
>>  	if (!pool)
>>  		return NULL;
>>  
>> +	/* cache the first/second-level indices for PAGE_SIZE allocations */
>> +	pagesize_slindex = get_index_for_insert(PAGE_SIZE);
>> +	pagesize_flindex = pagesize_slindex / BITS_PER_LONG;
> 
> Why is this in xv_create_pool()? AFAICT, it can be called multiple
> times if there's more than one zram device. Do we really need
> variables for these? They look like something GCC constant propagation
> should take care of if they were defines or static inline
> functions.

It should have been a define rather than being computed in xv_create_pool(),
but after reading more about GCC constant propagation and looking at
get_index_for_insert(), I believe this patch is unnecessary.  For sizes
near PAGE_SIZE (> XV_MAX_ALLOC_SIZE), constant propagation should do
exactly what I thought this patch was doing.  I will drop it.
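
For the archive, here is a minimal standalone sketch of what constant
propagation buys here.  The constant values and the body of
get_index_for_insert() below are illustrative stand-ins, not the real
xvmalloc definitions:

#include <stdio.h>

#define BITS_PER_LONG		(8 * sizeof(long))
#define XV_MIN_ALLOC_SIZE	32	/* illustrative value */
#define XV_MAX_ALLOC_SIZE	4064	/* illustrative value */
#define FL_DELTA		8	/* illustrative bucket granularity */

static inline unsigned int get_index_for_insert(unsigned int size)
{
	/* Stand-in body: clamp oversized requests and bucket by FL_DELTA. */
	if (size > XV_MAX_ALLOC_SIZE)
		size = XV_MAX_ALLOC_SIZE;
	return (size - XV_MIN_ALLOC_SIZE) / FL_DELTA;
}

int main(void)
{
	/* PAGE_SIZE is a compile-time constant, so GCC can fold the
	 * whole call and the division below to immediates; no cached
	 * pagesize_slindex/pagesize_flindex variables are needed on
	 * the insert path. */
	unsigned int slindex = get_index_for_insert(4096 /* PAGE_SIZE */);
	unsigned int flindex = slindex / BITS_PER_LONG;

	printf("slindex=%u flindex=%u\n", slindex, flindex);
	return 0;
}

Compiling this at -O1 or higher shows both values reduced to constants
in the generated code, which is the effect the dropped patch was trying
to achieve by hand.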
Thank you for your reviews.
