Date:   Wed, 14 Dec 2016 10:39:16 +0800
From:   Ganesh Mahendran <opensource.ganesh@...il.com>
To:     Greg KH <gregkh@...uxfoundation.org>
Cc:     arve@...roid.com, riandrews@...roid.com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] binder: replace kzalloc with kmem_cache

Hi, Greg:

Sorry for the late response.

On Tue, Nov 22, 2016 at 02:53:02PM +0100, Greg KH wrote:
> On Tue, Nov 22, 2016 at 07:17:30PM +0800, Ganesh Mahendran wrote:
> > This patch uses kmem_cache to allocate/free binder objects.
> 
> Why do this?

I am not very familiar with kmem_cache. I think that with thousands of
active binder objects in the system, a kmem_cache would be better.

Below are the binder object counts on my Android system:
-----
$ cat /d/binder/stats
...
proc: active 100 total 6735
thread: active 1456 total 180807
node: active 5668 total 1027387
ref: active 7141 total 1214877
death: active 844 total 468056
transaction: active 0 total 54736890
transaction_complete: active 0 total 54736890
-----

As these numbers show, binder objects are allocated and freed very
frequently.
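The core change is just swapping the allocation calls, along these
lines (a minimal sketch of the idea using binder_thread, not the exact
diff):
-----
	/* before: the allocation is backed by a generic kmalloc slab */
	thread = kzalloc(sizeof(*thread), GFP_KERNEL);
	...
	kfree(thread);

	/* after: backed by a dedicated cache sized for the object */
	thread = kmem_cache_zalloc(binder_thread_cachep, GFP_KERNEL);
	...
	kmem_cache_free(binder_thread_cachep, thread);
-----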

> 
> > It will have better memory efficiency.
> 
> Really?  How?  It should be the same, if not a bit worse.  Have you
> tested this?  What are the results?

kzalloc() is backed by the general kmalloc slabs, which round each
request up to the next available object size (a power of two in this
size range). Take struct binder_thread as an example: its size is 296
bytes. With kzalloc(), the slab system stores those 296 bytes in a
512-byte object, for a memory efficiency of only 296 / 512 = 57.8%.
But with a dedicated kmem_cache (which may still be merged with other
compatible slab caches), it uses a 304-byte object instead. Below is
the information taken from /proc/slabinfo:
------
name          <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> 
binder_thread      858          858       304         26           2

memory efficiency: (296 bytes x 26 objects) / (2 pages x 4096 bytes) = 93.9%
-----
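For reference, the dedicated cache behind the slabinfo line above is
created once at init time, roughly as below (a sketch; the align/flags
values here are assumptions, the actual patch may pass different ones):
-----
	binder_thread_cachep = kmem_cache_create("binder_thread",
						 sizeof(struct binder_thread),
						 0 /* align */, 0 /* flags */,
						 NULL /* ctor */);
	if (!binder_thread_cachep)
		return -ENOMEM;
-----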

> 
> > And we can also get object usage details in /sys/kernel/slab/* for
> > further analysis.
> 
> Why do we need this?  Who needs this information and what are you going
> to do with it?

This is only for debugging purposes, to see how much memory is used by
binder.

> 
> > Signed-off-by: Ganesh Mahendran <opensource.ganesh@...il.com>
> > ---
> >  drivers/android/binder.c | 127 ++++++++++++++++++++++++++++++++++++++---------
> >  1 file changed, 104 insertions(+), 23 deletions(-)
> > 
> > diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> > index 3c71b98..f1f8362 100644
> > --- a/drivers/android/binder.c
> > +++ b/drivers/android/binder.c
> > @@ -54,6 +54,14 @@
> >  static HLIST_HEAD(binder_deferred_list);
> >  static HLIST_HEAD(binder_dead_nodes);
> >  
> > +static struct kmem_cache *binder_proc_cachep;
> > +static struct kmem_cache *binder_thread_cachep;
> > +static struct kmem_cache *binder_node_cachep;
> > +static struct kmem_cache *binder_ref_cachep;
> > +static struct kmem_cache *binder_transaction_cachep;
> > +static struct kmem_cache *binder_work_cachep;
> > +static struct kmem_cache *binder_ref_death_cachep;
> 
> That's a lot of different caches, are you sure they don't just all get
> merged together anyway for most allocators?

If a binder kmem_cache has the same object size and flags as another
cache, it may be merged with that cache by the allocator. But even if
it is merged, I think it would still be better than using kzalloc().
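Whether a cache actually got merged can be checked at runtime: with
SLUB, a merged cache appears under /sys/kernel/slab/ as a symlink to a
shared alias instead of its own directory (the alias name below is
illustrative), and merging can be disabled globally with the
slab_nomerge boot parameter when analysing this:
-----
	$ ls -l /sys/kernel/slab/binder_thread
	lrwxrwxrwx ... binder_thread -> :t-0000304
-----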

> 
> Don't create lots of little caches for no good reason, and without any
> benchmark numbers, I'd prefer to leave this alone.  You are going to
> have to prove this is a win to allow this type of churn.

I tested binder with this patch; there is no performance regression.
---
I ran the benchmark 10 times with the command below:
     $ binderThroughputTest -w 100

               before                 after (with patch)
avg:           9848.4                 9878.8
Thanks.

> 
> thanks,
> 
> greg k-h
