Message-ID: <20151104145445.GL7637@e104818-lin.cambridge.arm.com>
Date: Wed, 4 Nov 2015 14:54:46 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Christoph Lameter <cl@...ux.com>
Cc: Robert Richter <rric@...nel.org>, Joonsoo Kim <js1304@...il.com>,
Linux-sh list <linux-sh@...r.kernel.org>,
Will Deacon <will.deacon@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Robert Richter <rrichter@...ium.com>, linux-mm@...ck.org,
Tirumalesh Chalamarla <tchalamarla@...ium.com>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH] arm64: Increase the max granular size
On Wed, Nov 04, 2015 at 07:53:50AM -0600, Christoph Lameter wrote:
> On Wed, 4 Nov 2015, Catalin Marinas wrote:
>
> > The simplest option would be to make sure that off slab isn't allowed
> > for caches of KMALLOC_MIN_SIZE or smaller, with the drawback that not
> > only "kmalloc-128" but any other such caches will be on slab.
>
> The reason for an off slab configuration is denser object packing.
>
> > I think a better option would be to first check that there is a
> > kmalloc_caches[] entry for freelist_size before deciding to go off-slab.
>
> Hmmm.. Yes seems to be an option.
>
> Maybe we simply revert commit 8fc9cf420b36 instead?
I'm fine with this. Also note that the arm64 commit changing
L1_CACHE_BYTES to 128 hasn't been pushed yet (it's queued for 4.4).
> That does not seem to make too much sense to me and the goal of the
> commit cannot be accomplished on ARM. Your patch essentially reverts
> the effect anyways.
In theory it only reverts the effect for the first kmalloc_caches[]
entry ("kmalloc-128" in the arm64 case). Any bigger cache which is not
mergeable with an existing one still has the potential for off-slab
management.
> Smaller slabs really do not need off-slab management anyway since they
> will only lose a few objects per slab page.
IIUC, starting with a 128-byte object size and a 4KB page, you have 32
objects per page. The freelist takes 32 bytes (or 31), so you
effectively waste a single slab object. However, only 1/4 of that
object is actually used for the freelist, and the waste gets bigger
with a 256-byte object size, hence the original commit.
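To put numbers on it (back-of-the-envelope userspace sketch assuming a
1-byte freelist index per object; not the exact mm/slab.c layout):

#include <stdio.h>

#define PAGE_SIZE 4096

static void show(unsigned int obj_size)
{
	unsigned int nr = PAGE_SIZE / obj_size;	/* objects if the freelist were free */

	/* drop objects until the on-slab freelist fits in the leftover space */
	while (nr * obj_size + nr > PAGE_SIZE)
		nr--;

	printf("%4u-byte objects: %2u per 4K page, %2u-byte freelist, %3u bytes unused\n",
	       obj_size, nr, nr, PAGE_SIZE - (nr * obj_size + nr));
}

int main(void)
{
	show(128);	/* 31 objects, 97 bytes unused (~3/4 of one object slot)   */
	show(256);	/* 15 objects, 241 bytes unused (~15/16 of one object slot) */
	return 0;
}

So at 128 the on-slab freelist costs roughly three quarters of one
object, while at 256 it costs nearly a whole one.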
BTW, assuming L1_CACHE_BYTES is 512 (I don't see this ever happening,
but just in theory), we potentially have the same issue. What would
save us is that INDEX_NODE would match the first "kmalloc-512" cache,
so we'd have it pre-populated.
--
Catalin