Message-ID: <20200421143609.GM5820@bombadil.infradead.org>
Date: Tue, 21 Apr 2020 07:36:09 -0700
From: Matthew Wilcox <willy@...radead.org>
To: 赵军奎 <bernard@...o.com>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, opensource.kernel@...o.com
Subject: Re: Re: [PATCH V2] kmalloc_index optimization (code size & runtime stable)
On Tue, Apr 21, 2020 at 07:55:03PM +0800, 赵军奎 wrote:
> Sure, I just received some kbuild compiler error mails prompting me to do something.
> I don't know why this happened, so I updated the patch again.
Don't. The patch has been NACKed, so there's no need to post a v2.
If you want to do something useful, how about looking at the effect
of adding different slab sizes? There's a fairly common pattern of
allocating things which are a power of two + a header. So it may make
sense to have kmalloc caches of 320 (256 + 64), 576 (512 + 64) and 1088
(1024 + 64). I use 64 here as that's the size of a cacheline, so we
won't get false sharing between users.
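For illustration only (this is a userspace sketch, not the kernel's actual
kmalloc_info[]/kmalloc_index() code, and the table and helper names here are
hypothetical): this is roughly what the extended set of size classes would
look like, with the cacheline-padded sizes slotted between the existing
power-of-two caches, much like 96 and 192 already are today.

#include <stddef.h>
#include <stdio.h>

/*
 * Hypothetical extended kmalloc size classes: the existing caches plus the
 * proposed "power of two + one 64-byte cacheline" sizes.  Purely
 * illustrative -- not the kernel's kmalloc_info[] table.
 */
static const size_t cache_sizes[] = {
	8, 16, 32, 64, 96, 128, 192, 256,
	320,	/* 256 + 64 */
	512,
	576,	/* 512 + 64 */
	1024,
	1088,	/* 1024 + 64 */
	2048, 4096,
};

/* Round a request up to the smallest cache that fits it. */
static size_t cache_for(size_t size)
{
	for (size_t i = 0; i < sizeof(cache_sizes) / sizeof(cache_sizes[0]); i++)
		if (size <= cache_sizes[i])
			return cache_sizes[i];
	return 0;	/* would fall through to the page allocator */
}

int main(void)
{
	printf("520-byte request -> %zu-byte cache\n", cache_for(520));
	return 0;
}

With the three extra classes, a 520-byte request lands in a 576-byte cache
instead of the 1024-byte one.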
This could save a fair quantity of memory; today if you allocate 512 +
8 bytes, it will round up to 1024. So we'll get 4 allocations per 4kB
page, but with a 576-byte slab, we'd get 7 allocations per 4kB page.
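A quick back-of-the-envelope check of that arithmetic (again just a userspace
sketch, assuming 4kB pages and ignoring per-slab metadata and the fact that
SLUB may use higher-order slabs):

#include <stdio.h>

#define PAGE_SIZE 4096

/* Objects per 4kB page and leftover bytes for a given object size. */
static void report(size_t object_size)
{
	unsigned int per_page = PAGE_SIZE / object_size;

	printf("%4zu-byte objects: %u per page, %lu bytes left over\n",
	       object_size, per_page,
	       (unsigned long)(PAGE_SIZE - per_page * object_size));
}

int main(void)
{
	report(1024);	/* today: a 520-byte request rounds up to the 1k cache */
	report(576);	/* with a hypothetical 576-byte cache */
	return 0;
}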
Of course, if there aren't a lot of users which allocate memory in this
range, then it'll be a waste of memory. On my laptop, it seems like
there might be a decent amount of allocations in the right range:
kmalloc-2k    3881  4384  2048  16  8 : tunables 0 0 0 : slabdata 274 274 0
kmalloc-1k    6488  7056  1024  16  4 : tunables 0 0 0 : slabdata 441 441 0
kmalloc-512   7700  8256   512  16  2 : tunables 0 0 0 : slabdata 516 516 0
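To put a very rough ceiling on it (hypothetical arithmetic, assuming every
one of those active kmalloc-1k objects would actually fit in a 576-byte
cache, which is exactly the thing that needs measuring):

#include <stdio.h>

int main(void)
{
	/*
	 * Hypothetical best case from the slabinfo numbers above: all 6488
	 * active kmalloc-1k objects shrink from 1024 to 576 bytes each.
	 * Real savings depend on how many actually fall in that range.
	 */
	unsigned long objs = 6488;
	unsigned long saved = objs * (1024 - 576);

	printf("best-case saving: %lu bytes (~%lu kB)\n", saved, saved / 1024);
	return 0;
}

That comes to a couple of megabytes on this one machine, in the best case.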
Now, maybe 576 isn't quite the right size. Need to try it on a variety
of configurations and find out. Want to investigate this?