Message-ID: <AANLkTinF=r_wUw_ZgfcdrFD6nWh3VKwUOjQ+wsbVax0C@mail.gmail.com>
Date:	Wed, 26 Jan 2011 19:53:28 +0200
From:	Pekka Enberg <penberg@...nel.org>
To:	Nitin Gupta <ngupta@...are.org>,
	Greg Kroah-Hartman <gregkh@...e.de>,
	Pekka Enberg <penberg@...helsinki.fi>,
	devel@...verdev.osuosl.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/7] zram: Speed insertion of new pages with cached idx

On Wed, Jan 26, 2011 at 7:23 PM, Robert Jennings <rcj@...ux.vnet.ibm.com> wrote:
> Calculate the first- and second-level indices for a new page when the pool
> is initialized, rather than calculating them on each insertion.
>
> Signed-off-by: Robert Jennings <rcj@...ux.vnet.ibm.com>
> ---
>  drivers/staging/zram/xvmalloc.c     |   13 +++++++++++--
>  drivers/staging/zram/xvmalloc_int.h |    4 ++++
>  2 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/staging/zram/xvmalloc.c b/drivers/staging/zram/xvmalloc.c
> index 3fdbb8a..a507f95 100644
> --- a/drivers/staging/zram/xvmalloc.c
> +++ b/drivers/staging/zram/xvmalloc.c
> @@ -184,8 +184,13 @@ static void insert_block(struct xv_pool *pool, struct page *page, u32 offset,
>        u32 flindex, slindex;
>        struct block_header *nextblock;
>
> -       slindex = get_index_for_insert(block->size);
> -       flindex = slindex / BITS_PER_LONG;
> +       if (block->size >= (PAGE_SIZE - XV_ALIGN)) {
> +               slindex = pagesize_slindex;
> +               flindex = pagesize_flindex;
> +       } else {
> +               slindex = get_index_for_insert(block->size);
> +               flindex = slindex / BITS_PER_LONG;
> +       }
>
>        block->link.prev_page = 0;
>        block->link.prev_offset = 0;
> @@ -316,6 +321,10 @@ struct xv_pool *xv_create_pool(void)
>        if (!pool)
>                return NULL;
>
> +       /* cache the first/second-level indices for PAGE_SIZE allocations */
> +       pagesize_slindex = get_index_for_insert(PAGE_SIZE);
> +       pagesize_flindex = pagesize_slindex / BITS_PER_LONG;

Why is this in xv_create_pool()? AFAICT, it can be called multiple
times if there's more than one zram device. Do we really need
variables for these at all? They look like something GCC constant
propagation would take care of if they were defines or static inline
functions.
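
Something like this is what I had in mind (a rough, untested sketch; the
helper names are made up, and it only works if the helpers sit next to
get_index_for_insert() in xvmalloc.c so the compiler can inline through it):

	/*
	 * Hypothetical replacement for the cached globals: PAGE_SIZE is a
	 * compile-time constant, so once get_index_for_insert() is inlined
	 * the compiler can fold both values at compile time.
	 */
	static inline u32 pagesize_slindex(void)
	{
		return get_index_for_insert(PAGE_SIZE);
	}

	static inline u32 pagesize_flindex(void)
	{
		return pagesize_slindex() / BITS_PER_LONG;
	}

insert_block() would then call the helpers instead of reading the globals,
and xv_create_pool() stays untouched.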

> +
>        spin_lock_init(&pool->lock);
>
>        return pool;
> diff --git a/drivers/staging/zram/xvmalloc_int.h b/drivers/staging/zram/xvmalloc_int.h
> index 051a49b..68db384 100644
> --- a/drivers/staging/zram/xvmalloc_int.h
> +++ b/drivers/staging/zram/xvmalloc_int.h
> @@ -52,6 +52,10 @@
>
>  /* End of user params */
>
> +/* Cache of the first/second-level indices for PAGE_SIZE allocations */
> +static u32 pagesize_slindex;
> +static u32 pagesize_flindex;

This doesn't look right. These are variable definitions, not just
declarations, and a static definition in a header gives every file that
includes it its own private copy. Variable definitions shouldn't go in
header files.
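
If they do stay as runtime variables rather than becoming constants, the
usual pattern would be something like this (untested sketch): define them
once at file scope in xvmalloc.c and drop them from xvmalloc_int.h
entirely.

	/* drivers/staging/zram/xvmalloc.c: single definition, file scope */
	static u32 pagesize_slindex;
	static u32 pagesize_flindex;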

> +
>  enum blockflags {
>        BLOCK_FREE,
>        PREV_FREE,
> --
> 1.6.0.2
