Message-ID: <20130201130230.GA6125@konrad-lan.dumpdata.com>
Date:	Fri, 1 Feb 2013 08:02:31 -0500
From:	Konrad Rzeszutek Wilk <konrad@...nok.org>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Matt Sealey <matt@...esi-usa.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, stable@...r.kernel.org,
	Dan Magenheimer <dan.magenheimer@...cle.com>,
	Russell King <linux@....linux.org.uk>,
	Nitin Gupta <ngupta@...are.org>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>
Subject: Re: [PATCH] zsmalloc: Fix TLB coherency and build problem

On Mon, Jan 28, 2013 at 10:00:08AM +0900, Minchan Kim wrote:
> Recently, Matt Sealey reported that he failed to build zsmalloc because
> it uses local_flush_tlb_kernel_range(), an architecture-dependent
> function that !CONFIG_SMP ARM builds do not provide, so the build ends
> up with the following error.
> 
>   MODPOST 216 modules
>   LZMA    arch/arm/boot/compressed/piggy.lzma
>   AS      arch/arm/boot/compressed/lib1funcs.o
> ERROR: "v7wbi_flush_kern_tlb_range"
> [drivers/staging/zsmalloc/zsmalloc.ko] undefined!
> make[1]: *** [__modpost] Error 1
> make: *** [modules] Error 2
> make: *** Waiting for unfinished jobs....
> 
> The reason we used that function is that the copy-based method was
> really slow on ARM at the time, which is why [1] added the page table
> mapping method.
> 
> A more severe problem is that ARM CPUs can prefetch speculatively, so
> if we flush only the local CPU, the TLBs of other CPUs can still hold
> stale entries behind our back. Russell King pointed that out. Thanks!
> We don't have many choices except using flush_tlb_kernel_range().
> 
> My experiment with zsmapbench [2] on a 4-core ARMv7 processor showed no
> measurable difference between local_flush_tlb_kernel_range and
> flush_tlb_kernel_range, and the page-table-based method is still much
> faster than the copy-based one.
> 
> * bigger is better.
> 
> 1. local_flush_tlb_kernel_range: 3918795 mappings
> 2. flush_tlb_kernel_range : 3989538 mappings
> 3. copy-based: 635158 mappings
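
For anyone not familiar with what zsmapbench is comparing here: the
copy-based method gathers the two halves of a cross-page object into a
contiguous buffer with kmap_atomic() + memcpy(), while the page-table
method maps the two physical pages into a reserved two-page kernel VA
range and tears the mapping down (plus a TLB flush) on unmap. Below is
a rough sketch, simplified from memory of the 3.8-era staging driver
rather than the actual zsmalloc-main.c; the helper names are invented
for illustration and the map_vm_area() signature is the old 3.x one:

/*
 * Illustrative sketch only: simplified, not the actual zsmalloc code.
 * The object is 'size' bytes, starts at offset 'off' in pages[0] and
 * spills over into pages[1].
 */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/string.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>

/* Copy-based method: gather the two halves into one contiguous buffer. */
static void *map_object_copy(struct page *pages[2], int off, int size,
			     void *buf)
{
	size_t first = PAGE_SIZE - off;	/* bytes that live in pages[0] */
	void *src;

	src = kmap_atomic(pages[0]);
	memcpy(buf, src + off, first);
	kunmap_atomic(src);

	src = kmap_atomic(pages[1]);
	memcpy(buf + first, src, size - first);
	kunmap_atomic(src);

	return buf;	/* for writes, the caller copies back on unmap */
}

/*
 * Page-table method: install the two physical pages into a reserved
 * two-page kernel VA range (vm comes from a one-time per-cpu
 * alloc_vm_area(2 * PAGE_SIZE, NULL)) so the object is virtually
 * contiguous.
 */
static void *map_object_pgtable(struct vm_struct *vm, struct page *pages[2],
				int off)
{
	/* 3.x-era map_vm_area() signature, quoted from memory */
	BUG_ON(map_vm_area(vm, PAGE_KERNEL, &pages));
	return vm->addr + off;
}

static void unmap_object_pgtable(struct vm_struct *vm)
{
	unsigned long addr = (unsigned long)vm->addr;
	unsigned long end = addr + 2 * PAGE_SIZE;

	flush_cache_vunmap(addr, end);
	unmap_kernel_range_noflush(addr, 2 * PAGE_SIZE);
	/*
	 * The stale translation may have been pulled into the TLB of any
	 * CPU (ARM prefetches speculatively), so every CPU must be
	 * flushed: this is the local_flush_tlb_kernel_range() to
	 * flush_tlb_kernel_range() change the patch makes.
	 */
	flush_tlb_kernel_range(addr, end);
}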
> 
> This patch replaces local_flush_tlb_kernel_range with
> flush_tlb_kernel_range, which is available on all architectures
> because the generic vmalloc allocator already uses it, so the build
> problem should go away and there should be no performance loss.
> 
> [1] f553646, zsmalloc: add page table mapping method
> [2] https://github.com/spartacus06/zsmapbench
> 
> Cc: stable@...r.kernel.org
> Cc: Dan Magenheimer <dan.magenheimer@...cle.com>
> Cc: Russell King <linux@....linux.org.uk>
> Cc: Konrad Rzeszutek Wilk <konrad@...nok.org>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>

> Cc: Nitin Gupta <ngupta@...are.org>
> Cc: Seth Jennings <sjenning@...ux.vnet.ibm.com>
> Reported-by: Matt Sealey <matt@...esi-usa.com>
> Signed-off-by: Minchan Kim <minchan@...nel.org>
> ---
> 
> Matt, Could you test this patch?
> 
>  drivers/staging/zsmalloc/zsmalloc-main.c |   10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
> index eb00772..82e627c 100644
> --- a/drivers/staging/zsmalloc/zsmalloc-main.c
> +++ b/drivers/staging/zsmalloc/zsmalloc-main.c
> @@ -222,11 +222,9 @@ struct zs_pool {
>  /*
>   * By default, zsmalloc uses a copy-based object mapping method to access
>   * allocations that span two pages. However, if a particular architecture
> - * 1) Implements local_flush_tlb_kernel_range() and 2) Performs VM mapping
> - * faster than copying, then it should be added here so that
> - * USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use page table
> - * mapping rather than copying
> - * for object mapping.
> + * performs VM mapping faster than copying, then it should be added here
> + * so that USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use
> + * page table mapping rather than copying for object mapping.
>  */
>  #if defined(CONFIG_ARM)
>  #define USE_PGTABLE_MAPPING
> @@ -663,7 +661,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
>  
>  	flush_cache_vunmap(addr, end);
>  	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> -	local_flush_tlb_kernel_range(addr, end);
> +	flush_tlb_kernel_range(addr, end);
>  }
>  
>  #else /* USE_PGTABLE_MAPPING */
> -- 
> 1.7.9.5
> 
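
For context on why the flush matters at all: callers only reach
cross-page objects through this mapping path. Roughly, from a caller's
point of view; store_object() is a hypothetical helper, and
zs_map_object()/zs_unmap_object()/ZS_MM_WO are quoted from memory of
the 3.8-era staging header, so treat the signatures as approximate:

#include "zsmalloc.h"	/* drivers/staging/zsmalloc/zsmalloc.h in 3.8 */
#include <linux/string.h>

/*
 * Hypothetical caller (zram does something similar): write 'len' bytes
 * into the object behind 'handle'.  If the object straddles two pages,
 * zs_map_object() either copies it into a per-cpu buffer or, with
 * USE_PGTABLE_MAPPING, maps both pages into one VA range; the unmap is
 * where the TLB flush changed by this patch happens.
 */
static void store_object(struct zs_pool *pool, unsigned long handle,
			 const void *src, size_t len)
{
	void *dst = zs_map_object(pool, handle, ZS_MM_WO);

	memcpy(dst, src, len);
	zs_unmap_object(pool, handle);
}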
