[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <alpine.LFD.2.01.0906090849080.6847@localhost.localdomain>
Date: Tue, 9 Jun 2009 09:00:08 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Nick Piggin <npiggin@...e.de>
cc: Rusty Russell <rusty@...tcorp.com.au>, Ingo Molnar <mingo@...e.hu>,
Jeremy Fitzhardinge <jeremy@...p.org>,
"H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Avi Kivity <avi@...hat.com>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [benchmark] 1% performance overhead of paravirt_ops on native
kernels
On Tue, 9 Jun 2009, Nick Piggin wrote:
>
> If it's such a problem, it could be made a lot faster without too
> much problem. You could just introduce a FIFO of ptes behind it
> and flush them all in one go. 4K worth of ptes per CPU might
> hopefully bring your overhead down to < 1%.
We already have that. The regular kmap() does that. It's just not usable
in atomic context.
We'd need to fix the locking: right now kmap_high() uses non-irq-safe
locks, and it does that whole cross-cpu flushing thing (which is why
those locks _have_ to be non-irq-safe.
The way to fix that, though, would be to never do any cross-cpu calls, and
instead just have a cpumask saying "you need to flush before you do
anything with kmap". So you'd just set that cpumask inside the lock, and
if/when some other CPU does a kmap, they'd flush their local TLB at _that_
point instead of having to have an IPI call.
If we can get rid of kmap_atomic(), I'd already like HIGHMEM more. Right
now I absolutely _hate_ all the different "levels" of kmap_atomic() and
having to be careful about crazy nesting rules etc.
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists