Date:	Mon, 4 Jul 2016 08:57:04 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Ganesh Mahendran <opensource.ganesh@...il.com>
CC:	<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
	<akpm@...ux-foundation.org>, <ngupta@...are.org>,
	<sergey.senozhatsky.work@...il.com>, <rostedt@...dmis.org>,
	<mingo@...hat.com>
Subject: Re: [PATCH 3/8] mm/zsmalloc: take obj index back from
 find_alloced_obj

On Fri, Jul 01, 2016 at 02:41:01PM +0800, Ganesh Mahendran wrote:
> the obj index value should be updated after return from
> find_alloced_obj()
 
        to avoid CPU burning caused by unnecessary object scanning.

The description should state the goal of the change.

> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@...il.com>
> ---
>  mm/zsmalloc.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 405baa5..5c96ed1 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1744,15 +1744,16 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
>   * return handle.
>   */
>  static unsigned long find_alloced_obj(struct size_class *class,
> -					struct page *page, int index)
> +					struct page *page, int *index)
>  {
>  	unsigned long head;
>  	int offset = 0;
> +	int objidx = *index;

Nit:

We have used obj_idx elsewhere, so I prefer that name for consistency.

Suggestion:
Would you mind changing index in zs_compact_control and
migrate_zspage to obj_idx while you are at it?

Strictly speaking, such a cleanup belongs in a separate patch, but I
don't mind mixing them here (of course, it would be better if you sent
it as a separate cleanup patch). If you mind, just leave it as is.
Sometime, I will do it.

>  	unsigned long handle = 0;
>  	void *addr = kmap_atomic(page);
>  
>  	offset = get_first_obj_offset(page);
> -	offset += class->size * index;
> +	offset += class->size * objidx;
>  
>  	while (offset < PAGE_SIZE) {
>  		head = obj_to_head(page, addr + offset);
> @@ -1764,9 +1765,11 @@ static unsigned long find_alloced_obj(struct size_class *class,
>  		}
>  
>  		offset += class->size;
> -		index++;
> +		objidx++;
>  	}
>  
> +	*index = objidx;

We can do this outside the kmap section, right before returning the handle.

Thanks!

> +
>  	kunmap_atomic(addr);
>  	return handle;
>  }
> @@ -1794,11 +1797,11 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>  	unsigned long handle;
>  	struct page *s_page = cc->s_page;
>  	struct page *d_page = cc->d_page;
> -	unsigned long index = cc->index;
> +	unsigned int index = cc->index;
>  	int ret = 0;
>  
>  	while (1) {
> -		handle = find_alloced_obj(class, s_page, index);
> +		handle = find_alloced_obj(class, s_page, &index);
>  		if (!handle) {
>  			s_page = get_next_page(s_page);
>  			if (!s_page)
> -- 
> 1.9.1
> 
