Date:	Wed, 6 Feb 2013 11:50:28 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Nitin Gupta <ngupta@...are.org>,
	Dan Magenheimer <dan.magenheimer@...cle.com>,
	Konrad Rzeszutek Wilk <konrad@...nok.org>
Subject: Re: [PATCH v2] zsmalloc: Add Kconfig for enabling PTE method

On Tue, Feb 05, 2013 at 06:28:54PM -0800, Greg Kroah-Hartman wrote:
> On Wed, Feb 06, 2013 at 11:17:08AM +0900, Minchan Kim wrote:
> > diff --git a/drivers/staging/zsmalloc/Kconfig b/drivers/staging/zsmalloc/Kconfig
> > index 9084565..232b3b6 100644
> > --- a/drivers/staging/zsmalloc/Kconfig
> > +++ b/drivers/staging/zsmalloc/Kconfig
> > @@ -8,3 +8,15 @@ config ZSMALLOC
> >  	  non-standard allocator interface where a handle, not a pointer, is
> >  	  returned by an alloc().  This handle must be mapped in order to
> >  	  access the allocated space.
> > +
> > +config PGTABLE_MAPPING
> > +        bool "Use page table mapping to access allocations that span two pages"
> 
> No tabs?
> 
> Please also put "ZSmalloc" somewhere in the text here, otherwise it
> really doesn't make much sense when seeing it in a menu.
> 
> > +        depends on ZSMALLOC
> > +        default n
> 
> That's the default, so it can be dropped.
> 
> > +        help
> > +	  By default, zsmalloc uses a copy-based object mapping method to access
> > +	  allocations that span two pages. However, if a particular architecture
> > +	  performs VM mapping faster than copying, then you should select this.
> > +	  This causes zsmalloc to use page table mapping rather than copying
> > +	  for object mapping. You can check speed with zsmalloc benchmark[1].
> > +	  [1] https://github.com/spartacus06/zsmalloc
> 
> Care to specify exactly _what_ architectures this should be set for or
> not?  That will help the distros out a lot in determining if this should
> be enabled or not.
> 
> > diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
> > index 06f73a9..2c1805c 100644
> > --- a/drivers/staging/zsmalloc/zsmalloc-main.c
> > +++ b/drivers/staging/zsmalloc/zsmalloc-main.c
> > @@ -207,6 +207,7 @@ struct zs_pool {
> >  	struct size_class size_class[ZS_SIZE_CLASSES];
> >  
> >  	gfp_t flags;	/* allocation flags used when growing pool */
> > +
> >  };
> >  
> >  /*
> 
> Why add this extra line?
> 
> thanks,
> 
> greg k-h

Sorry for bothering you.
I've fixed everything you pointed out.
Thanks for the review, Greg!

Here it goes.

------------------- 8< -------------------

From 506acea72916c9a12cf80290bc5cd87f4af1914d Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@...nel.org>
Date: Wed, 6 Feb 2013 11:10:59 +0900
Subject: [PATCH v3] zsmalloc: Add Kconfig for enabling PTE method

Zsmalloc has two methods, 1) copy-based and 2) pte-based, for accessing
allocations that span two pages. The history of why we support both
approaches is described in [1].

In summary, the copy-based method is about 3 times faster on x86, while
the pte-based method is about 6 times faster on ARM.

However, hard-coding which architectures use the pte-based method was a
bad choice. This patch removes that hard-coding and adds a new Kconfig
option so the approach can be selected at build time.

This patch is based on next-20130205.

[1] https://lkml.org/lkml/2012/7/11/58
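
To make the difference concrete, here is a rough, untested sketch of the
copy-based path for an object that crosses a page boundary (the helper
name and exact signature below are only for illustration and are
simplified from the in-tree code; the pte-based path instead maps both
physical pages into a pre-reserved two-page VM area so the object is
addressable directly, at the cost of a TLB flush when it is unmapped):

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Copy both halves of an object that spans pages[0] and pages[1] into a
 * contiguous per-cpu buffer.  Assumes off < PAGE_SIZE and that the
 * object really does cross the boundary (size > PAGE_SIZE - off).
 */
static void copy_map_object(char *buf, struct page *pages[2], int off, int size)
{
	int first = PAGE_SIZE - off;	/* bytes that live in the first page */
	void *addr;

	addr = kmap_atomic(pages[0]);
	memcpy(buf, addr + off, first);
	kunmap_atomic(addr);

	addr = kmap_atomic(pages[1]);
	memcpy(buf + first, addr, size - first);
	kunmap_atomic(addr);
}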

* Changelog from v2
  * Add tab and drop "default n" - Greg
  * Modify description - Greg
  * Drop unnecessary extra line - Greg

* Changelog from v1
  * Fix CONFIG_PGTABLE_MAPPING in zsmalloc-main.c - Greg

Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Seth Jennings <sjenning@...ux.vnet.ibm.com>
Cc: Nitin Gupta <ngupta@...are.org>
Cc: Dan Magenheimer <dan.magenheimer@...cle.com>
Cc: Konrad Rzeszutek Wilk <konrad@...nok.org>
Signed-off-by: Minchan Kim <minchan@...nel.org>
---
 drivers/staging/zsmalloc/Kconfig         | 13 +++++++++++++
 drivers/staging/zsmalloc/zsmalloc-main.c | 19 ++++---------------
 2 files changed, 17 insertions(+), 15 deletions(-)

diff --git a/drivers/staging/zsmalloc/Kconfig b/drivers/staging/zsmalloc/Kconfig
index 9084565..83f9cec 100644
--- a/drivers/staging/zsmalloc/Kconfig
+++ b/drivers/staging/zsmalloc/Kconfig
@@ -8,3 +8,16 @@ config ZSMALLOC
 	  non-standard allocator interface where a handle, not a pointer, is
 	  returned by an alloc().  This handle must be mapped in order to
 	  access the allocated space.
+
+config PGTABLE_MAPPING
+	bool "Use page table mapping to access objects in zsmalloc"
+	depends on ZSMALLOC
+	help
+	  By default, zsmalloc uses a copy-based object mapping method to
+	  access allocations that span two pages. However, if a particular
+	  architecture (e.g. ARM) performs VM mapping faster than copying,
+	  then you should select this. This causes zsmalloc to use page table
+	  mapping rather than copying for object mapping.
+
+	  You can check the speed with the zsmalloc benchmark [1].
+	  [1] https://github.com/spartacus06/zsmalloc
diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
index 06f73a9..aa6aac4 100644
--- a/drivers/staging/zsmalloc/zsmalloc-main.c
+++ b/drivers/staging/zsmalloc/zsmalloc-main.c
@@ -218,19 +218,8 @@ struct zs_pool {
 #define CLASS_IDX_MASK	((1 << CLASS_IDX_BITS) - 1)
 #define FULLNESS_MASK	((1 << FULLNESS_BITS) - 1)
 
-/*
- * By default, zsmalloc uses a copy-based object mapping method to access
- * allocations that span two pages. However, if a particular architecture
- * performs VM mapping faster than copying, then it should be added here
- * so that USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use
- * page table mapping rather than copying for object mapping.
-*/
-#if defined(CONFIG_ARM)
-#define USE_PGTABLE_MAPPING
-#endif
-
 struct mapping_area {
-#ifdef USE_PGTABLE_MAPPING
+#ifdef CONFIG_PGTABLE_MAPPING
 	struct vm_struct *vm; /* vm area for mapping object that span pages */
 #else
 	char *vm_buf; /* copy buffer for objects that span pages */
@@ -622,7 +611,7 @@ static struct page *find_get_zspage(struct size_class *class)
 	return page;
 }
 
-#ifdef USE_PGTABLE_MAPPING
+#ifdef CONFIG_PGTABLE_MAPPING
 static inline int __zs_cpu_up(struct mapping_area *area)
 {
 	/*
@@ -663,7 +652,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
 	flush_tlb_kernel_range(addr, end);
 }
 
-#else /* USE_PGTABLE_MAPPING */
+#else /* CONFIG_PGTABLE_MAPPING */
 
 static inline int __zs_cpu_up(struct mapping_area *area)
 {
@@ -741,7 +730,7 @@ out:
 	pagefault_enable();
 }
 
-#endif /* USE_PGTABLE_MAPPING */
+#endif /* CONFIG_PGTABLE_MAPPING */
 
 static int zs_cpu_notifier(struct notifier_block *nb, unsigned long action,
 				void *pcpu)
-- 
1.8.1.1

-- 
Kind regards,
Minchan Kim
