Message-ID: <alpine.LRH.2.02.1804171318010.5023@file01.intranet.prod.int.rdu2.redhat.com>
Date: Tue, 17 Apr 2018 13:26:51 -0400 (EDT)
From: Mikulas Patocka <mpatocka@...hat.com>
To: Vlastimil Babka <vbabka@...e.cz>
cc: Christopher Lameter <cl@...ux.com>,
Mike Snitzer <snitzer@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Pekka Enberg <penberg@...nel.org>, linux-mm@...ck.org,
dm-devel@...hat.com, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE
On Tue, 17 Apr 2018, Vlastimil Babka wrote:
> On 04/17/2018 04:45 PM, Christopher Lameter wrote:
> > On Mon, 16 Apr 2018, Mikulas Patocka wrote:
> >
> >> This patch introduces a flag SLAB_MINIMIZE_WASTE for slab and slub. This
> >> flag causes allocation of larger slab caches in order to minimize wasted
> >> space.
> >>
> >> This is needed because we want to use dm-bufio for deduplication index and
> >> there are existing installations with non-power-of-two block sizes (such
> >> as 640KB). The performance of the whole solution depends on efficient
> >> memory use, so we must waste as little memory as possible.
> >
> > Hmmm. Can we come up with a generic solution instead?
>
> Yes please.
>
> > This may mean relaxing the enforcement of the allocation max order a bit
> > so that we can get dense allocation through higher order allocs.
> >
> > But then higher order allocs are generally seen as problematic.
>
> I think in this case they are better than wasting/fragmenting 384kB for
> 640kB object.
Wasting 37% of memory is still better than the kernel randomly returning
-ENOMEM when higher-order allocation fails.
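(The 37% comes from rounding the 640KB object up to the next power-of-two
slab size, i.e. 1MB. A minimal userspace sketch of that arithmetic, an
illustration only, assuming 4KB pages and PAGE_SIZE << order slab sizes as
SLUB uses:)

/*
 * Illustration only, not kernel code: waste from rounding a 640KB object
 * up to the smallest power-of-two slab that holds it, with 4KB pages.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	unsigned long object_size = 640 * 1024;
	int order = 0;

	/* find the smallest order whose slab fits one object */
	while ((PAGE_SIZE << order) < object_size)
		order++;

	unsigned long slab_size = PAGE_SIZE << order;
	unsigned long wasted = slab_size - object_size;

	printf("order %d: slab %lu KB, wasted %lu KB (%.1f%%)\n",
	       order, slab_size >> 10, wasted >> 10,
	       100.0 * wasted / slab_size);
	/* prints: order 8: slab 1024 KB, wasted 384 KB (37.5%) */
	return 0;
}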
> > That
> > means that callers need to be able to tolerate failures.
>
> Is it any different from now? I suppose there would still be
> smallest-order fallback involved in sl*b itself? And if your allocation
> is so large it can fail even with the fallback (i.e. >= costly order),
> you need to tolerate failures anyway?
>
> One corner case I see is if there is anyone who would rather use their
> own fallback instead of the space-wasting smallest-order fallback.
> Maybe we could map some GFP flag to indicate that.
For example, if you create a cache with 17KB objects, the slab subsystem
will pad it up to 32KB. You waste almost half the memory, but the
allocation is reliable and won't fail.
If you use a higher order (slabs larger than 32KB), you waste less memory,
but you also get random -ENOMEMs (yes, we had a problem in dm-thin where it
randomly failed during initialization due to a 64KB allocation).
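To make the trade-off concrete, here is a small userspace sketch (an
illustration only, not kernel code; it assumes 4KB pages and power-of-two
slab sizes) that prints the per-order waste for a 17KB object:

/*
 * Illustration only: objects per slab and wasted space for a 17KB object
 * at various slab orders, assuming 4KB pages.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	unsigned long object_size = 17 * 1024;

	for (int order = 3; order <= 6; order++) {
		unsigned long slab_size = PAGE_SIZE << order;
		unsigned long objects = slab_size / object_size;
		unsigned long wasted = slab_size - objects * object_size;

		printf("order %d: slab %3lu KB, %2lu objects, waste %4.1f%%\n",
		       order, slab_size >> 10, objects,
		       100.0 * wasted / slab_size);
	}
	/*
	 * order 3:  32 KB,  1 object,  waste 46.9%
	 * order 4:  64 KB,  3 objects, waste 20.3%
	 * order 5: 128 KB,  7 objects, waste  7.0%
	 * order 6: 256 KB, 15 objects, waste  0.4%
	 */
	return 0;
}

The larger orders waste much less, but an order-6 (256KB) allocation is
exactly the kind that can fail once memory gets fragmented.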
Mikulas