Message-ID: <84144f020906021106l7927e005o6bcd555b8f51b03b@mail.gmail.com>
Date: Tue, 2 Jun 2009 21:06:41 +0300
From: Pekka Enberg <penberg@...helsinki.fi>
To: Chris Mason <chris.mason@...cle.com>,
Ulrich Drepper <drepper@...il.com>,
Ingo Molnar <mingo@...e.hu>, Nick Piggin <npiggin@...e.de>,
Jeremy Fitzhardinge <jeremy@...p.org>,
"H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Avi Kivity <avi@...hat.com>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [benchmark] 1% performance overhead of paravirt_ops on native kernels

Hi Chris,
On Tue, Jun 2, 2009 at 6:03 PM, Chris Mason <chris.mason@...cle.com> wrote:
>> I find it ridiculous to use the "but it's used" argument to try to
>> force the code into the kernel. By this argument you can say the same
>> about crap like ndiswrapper and similarly harmful code.
>
> I'm not saying to take harmful code, I'm saying to take code with a
> small performance regression under a specific CONFIG_. Slub regresses
> more than 1% on database loads, CONFIG_SCHED_GROUPS, the list goes on
> and on.

Maybe it's just me, but you make it sound like the SLUB regression is
okay. It's not.

Unfortunately we're now in a position where we can't just remove SLUB
(it's an improvement over SLAB for NUMA), so we're stuck with two
allocators, with a third one on its way into the kernel. So yes, it makes
a lot of sense to me to fix the CONFIG_PARAVIRT regression before merging
more of the Xen stuff into the kernel. It's always easier to fix these
things before they hit the kernel and people start to depend on them.

Pekka