Message-ID: <20190411084433.GC18914@techsingularity.net>
Date: Thu, 11 Apr 2019 09:44:33 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Michal Hocko <mhocko@...nel.org>
Cc: "Tobin C. Harding" <me@...in.cc>, Vlastimil Babka <vbabka@...e.cz>,
"Tobin C. Harding" <tobin@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Tejun Heo <tj@...nel.org>, Qian Cai <cai@....pw>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/1] mm: Remove the SLAB allocator
On Thu, Apr 11, 2019 at 09:55:56AM +0200, Michal Hocko wrote:
> > > FWIW, our enterprise kernel uses it (latest is 4.12 based), and openSUSE
> > > kernels as well (with openSUSE Tumbleweed that includes latest
> > > kernel.org stables). AFAIK we don't enable SLAB_DEBUG even in general
> > > debug kernel flavours as it's just too slow.
> >
> > Ok, so that probably already kills this. Thanks for the response. No
> > flaming, no swearing, man! and they said LKML was a harsh environment ...
> >
> > > IIRC last time Mel evaluated switching to SLUB, it wasn't a clear
> > > winner, but I'll just CC him for details :)
> >
> > Probably don't need to take up too much of Mel's time, if we have one
> > user in production we have to keep it, right.
>
> Well, I wouldn't be opposed to dropping SLAB. Especially when this is
> not a long-term stable kmalloc implementation anymore. It turned out that
> people want to push features from SLUB back to SLAB, and then we just
> have two featureful allocators and double the maintenance cost.
>
Indeed.
> So as long as the performance gap is no longer there, and the last data
> from Mel (I am sorry but I cannot find a link handy) suggests that there
> is no overall winner in benchmarks, then why keep them both?
>
The link isn't public. It was based on kernel 5.0 but I still haven't
gotten around to doing a proper writeup. The very short summary is that
with the defaults, SLUB is either performance-neutral or a win versus SLAB,
which is a big improvement over a few years ago. It's worth noting that
there is still a partial reliance on SLUB using high-order pages to get
that performance. If the max order is 0 then there are cases where SLUB
is a loss *but* even that is not universal. hackbench using processes
and sockets to communicate seems to be the hardest hit when SLUB is not
using high-order pages. This still allows the possibility that SLUB can
degrade over time if the system becomes badly fragmented, and there
are cases where kcompactd and fragmentation avoidance will be more active
than they were under SLAB. Again, this is much better than it was a
few years ago and I'm not aware of bug reports that point to compaction
overhead due to SLUB.
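For reference, the high-order dependency can be probed with SLUB's real
boot-time order cap; the hackbench invocation below is only a sketch (the
group/loop counts are illustrative assumptions, not the values from my
tests, and recent hackbench defaults to processes communicating over
sockets):

```
# Kernel command line addition: restrict SLUB to order-0 page allocations
slub_max_order=0

# After reboot, run the process+socket hackbench case (counts illustrative)
hackbench -g 20 -l 1000
```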
> That being said, if somebody is willing to go and benchmark both
> allocators to confirm Mel's observations and current users of SLAB
> can confirm their workloads do not regress either then let's just drop
> it.
>
Independent verification would be nice. Of particular interest would be
a real set of networking tests on a high-speed network. The hardware in
the test grid I use doesn't have a fast enough network for me to draw a
reliable conclusion.
--
Mel Gorman
SUSE Labs