Message-ID: <57B69D8F.5000101@oracle.com>
Date: Thu, 18 Aug 2016 22:47:59 -0700
From: aruna.ramakrishna@...cle.com
To: Michal Hocko <mhocko@...nel.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Mike Kravetz <mike.kravetz@...cle.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v3] mm/slab: Improve performance of gathering slabinfo stats
On 08/18/2016 04:52 AM, Michal Hocko wrote:
> I am not opposing the patch (to be honest it is quite neat), but this
> has been bugging me for quite some time. Sorry for hijacking this email
> thread, but I couldn't resist. Why are we trying to optimize SLAB and
> slowly converge it with SLUB feature-wise? I always thought that SLAB
> should remain the stable, time-tested solution that works reasonably
> well for many/most workloads, while SLUB is an optimized implementation
> that experiments with slightly different concepts which might boost
> performance considerably but might also surprise from time to time. If
> this is not the case, then why do we have both of them in the kernel?
> It is a lot of code, and some features need tweaking in both while only
> one gets testing coverage. So this is mainly a question for the
> maintainers: why do we maintain both, and what is the purpose of each?
Michal,
Speaking about this patch specifically - I'm not trying to optimize SLAB
or make it more similar to SLUB. This patch is a bug fix for an issue
where the slowness of 'cat /proc/slabinfo' caused timeouts in other
drivers. While optimizing that path, it became apparent (as Christoph
pointed out) that one could converge this code with SLUB's current
implementation. Though I have not done that here (it warrants a separate
patch), I think it makes sense to converge where appropriate, since the
two allocators already share some common data structures and code.
Thanks,
Aruna