Message-ID: <20190902141225.GA21112@mark-All-Series>
Date:   Mon, 2 Sep 2019 22:12:25 +0800
From:   Peikan Tsai <peikantsai@...il.com>
To:     Greg KH <gregkh@...uxfoundation.org>
Cc:     Christian Brauner <christian.brauner@...ntu.com>,
        devel@...verdev.osuosl.org, tkjos@...roid.com,
        linux-kernel@...r.kernel.org, arve@...roid.com,
        Joel Fernandes <joel@...lfernandes.org>, maco@...roid.com
Subject: Re: [PATCH] binder: Use kmem_cache for binder_thread

On Fri, Aug 30, 2019 at 08:39:43AM +0200, Greg KH wrote:
> On Fri, Aug 30, 2019 at 02:59:01AM +0800, Peikan Tsai wrote:
> > On Thu, Aug 29, 2019 at 05:27:22PM +0200, Christian Brauner wrote:
> > > On Thu, Aug 29, 2019 at 09:53:59AM -0400, Joel Fernandes wrote:
> > > > On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
> > > > > On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> > > > [snip] 
> > > > > > Each binder_thread is currently allocated with kzalloc(), which
> > > > > > serves the request from the kmalloc-512 slab, so 512 bytes are used
> > > > > > per object even though struct binder_thread is only 304 bytes.
> > > > > > Creating a dedicated kmem_cache for binder_thread therefore saves
> > > > > > 208 bytes per binder_thread.
> > > > > 
> > > > > Are you _sure_ it really will save that much memory?  You want to do
> > > > > allocations based on a nice alignment for lots of good reasons,
> > > > > especially for something that needs quick accesses.
> > > > 
> > > > Alignment can be done for slab allocations, kmem_cache_create() takes an
> > > > align argument. I am not sure what the default alignment of objects is
> > > > though (probably no default alignment). What is an optimal alignment in your
> > > > view?
> > > 
> > > Probably SLAB_HWCACHE_ALIGN would make most sense.
> > > 
> > 
> > Agreed. Thanks for your comments and suggestions.
> > I'll use SLAB_HWCACHE_ALIGN in patch v2.
> > 
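
For reference, the v2 allocation path I have in mind would look roughly like
the sketch below. The cache pointer name and the init hook are only
placeholders of mine, not the final patch:

    /* drivers/android/binder.c (sketch only) */
    #include <linux/slab.h>

    static struct kmem_cache *binder_thread_cache;	/* placeholder name */

    static int __init binder_thread_cache_init(void)	/* placeholder hook */
    {
            binder_thread_cache = kmem_cache_create("binder_thread",
                                                    sizeof(struct binder_thread),
                                                    0, SLAB_HWCACHE_ALIGN, NULL);
            return binder_thread_cache ? 0 : -ENOMEM;
    }

    /* in binder_get_thread(), instead of kzalloc(sizeof(*thread), GFP_KERNEL): */
    new_thread = kmem_cache_zalloc(binder_thread_cache, GFP_KERNEL);

    /* and where the thread is currently kfree()d: */
    kmem_cache_free(binder_thread_cache, thread);
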
> > > > 
> > > > > Did you test your change on a system that relies on binder and find any
> > > > > speed improvement or decrease, and any actual memory savings?
> > > > > 
> > > > > If so, can you post your results?
> > > > 
> > > > That's certainly worth it and I thought of asking for the same, but spoke too
> > > > soon!
> > > 
> > > Yeah, it'd be interesting to see what difference this actually makes. 
> > > 
> > > Christian
> > 
> > I tested this change on an Android device (arm) with an AOSP 4.19 kernel
> > and looked at the memory usage of binder_thread, but I haven't run a
> > binder benchmark yet.
> > 
> > On my platform the memory usage of binder_thread drops by about 90 KB
> > (319488 - 227136 = 92352 bytes), as shown below:
> >             nr obj   obj size   total
> >   before:      624        512   319488 bytes
> >   after:       728        312   227136 bytes
> 
> You have more objects???
> 

Sorry, that count is the total number of objects, which includes some
inactive objects ... And since I tested on an Android platform, there may be
some noise.

So I ran 'adb stop' and 'echo 3 > /proc/sys/vm/drop_caches' before starting
the test to reduce the noise; the results are as follows.

                     objs
kzalloc               220   (kmalloc-512, allocated by binder_get_thread)

             active_objs   total_objs   objs_per_slab   num_slabs
kmem_cache           194          403              13          31

It seems there are more total objects when using a kmem_cache for
binder_thread... But as I understand it, those inactive objects can be freed
by a kmem_cache shrink?
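
To be concrete, the call I have in mind is the explicit shrink API. Just a
sketch; binder doesn't call this today, and the cache pointer name is the
placeholder from the sketch above:

    /* release empty slabs in the cache back to the page allocator */
    kmem_cache_shrink(binder_thread_cache);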

Also, I measured throughput using the performance test from Android VTS.

size(bytes)	kzalloc(byte/ns)	kmemcache(byte/ns)
4		0.17			0.17
8		0.33			0.32
16		0.66			0.66
32		1.36			1.42
64		2.66			2.61
128		5.4			5.26
256		10.29			10.77
512		21.51			21.36
1k		41			40.26
2k		82.12			80.28
4k		149.24			146.95
8k		262.34			256
16k		417.96			422.2
32k		596.66			590.23
64k		600.84			601.25

