Message-ID: <20091108164706.GB3286@elte.hu>
Date: Sun, 8 Nov 2009 17:47:06 +0100
From: Ingo Molnar <mingo@...e.hu>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Avi Kivity <avi@...hat.com>, Gleb Natapov <gleb@...hat.com>,
kvm@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Frédéric Weisbecker <fweisbec@...il.com>
Subject: Re: [PATCH 02/11] Add "handle page fault" PV helper.

* H. Peter Anvin <hpa@...or.com> wrote:
> On 11/08/2009 04:51 AM, Ingo Molnar wrote:
> >
> > * Avi Kivity <avi@...hat.com> wrote:
> >
> >> On 11/08/2009 01:36 PM, Ingo Molnar wrote:
> >>>> The three existing callbacks are: kmemcheck, mmiotrace, notifier.
> >>>> Two of them, kmemcheck and mmiotrace, are enabled only for
> >>>> debugging, so they should not be a performance concern. And the
> >>>> notifier call sites (two of them) are deliberately, as explained
> >>>> by a comment, not at the function entry, so they can't be unified
> >>>> with the others. (And kmemcheck also has two different call
> >>>> sites, BTW.)
> >>>
> >>> We want mmiotrace to be capable of running in generic distro
> >>> kernels, so the overhead when the hook is unused is a concern.
> >>
> >> Maybe we should generalize paravirt-ops patching in case if (x) f() is
> >> deemed too expensive.
> >
> > Yes, that's a nice idea. We have quite a number of 'conditional
> > callbacks' in various critical paths that could be made lighter via such
> > a technique.
> >
> > It would also free new callbacks from the 'it increases overhead
> > even if unused' criticism and make it easier to add them.
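
For reference, the pattern under discussion looks roughly like this (a
minimal user-space sketch; hook_enabled, hook and fault_fast_path are
made-up names, not actual kernel symbols):

    #include <stdbool.h>

    /* Hypothetical rarely-enabled hook, in the spirit of the
     * mmiotrace/kmemcheck callbacks discussed above. */
    static bool hook_enabled;            /* almost always false */

    static void hook(unsigned long addr)
    {
            /* ... debugging/tracing work would go here ... */
            (void)addr;
    }

    static void fault_fast_path(unsigned long addr)
    {
            /*
             * The "if (x) f()" form: even with the hook disabled,
             * every invocation pays for the load of hook_enabled plus
             * a conditional branch.  Paravirt-ops-style patching would
             * instead rewrite this test into a NOP at runtime, so the
             * disabled case costs nothing.
             */
            if (hook_enabled)
                    hook(addr);

            /* ... the actual fast-path work ... */
    }

    int main(void)
    {
            fault_fast_path(0x1000);   /* disabled: branch not taken */
            hook_enabled = true;
            fault_fast_path(0x1000);   /* enabled: callback runs */
            return 0;
    }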
>
> There are a number of other things where we permanently bind to a
> single instance of something, too.  Optimizing those away would be
> nice.  Consider memcpy(), where we may want to have different
> implementations for different processors.
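
One conventional way to do that binding is a function pointer selected
once at startup (a sketch; memcpy_generic, memcpy_rep_movsb and
cpu_has_fast_strings are made-up names):

    #include <stddef.h>

    /* Two stand-in implementations; a real kernel would have, e.g.,
     * a plain byte copy and an SSE or "rep movsb" variant. */
    static void *memcpy_generic(void *dst, const void *src, size_t n)
    {
            char *d = dst;
            const char *s = src;

            while (n--)
                    *d++ = *s++;
            return dst;
    }

    static void *memcpy_rep_movsb(void *dst, const void *src, size_t n)
    {
            return memcpy_generic(dst, src, n);  /* placeholder body */
    }

    /* Bound once, early at boot, based on detected CPU features. */
    static void *(*memcpy_impl)(void *, const void *, size_t) =
            memcpy_generic;

    static void select_memcpy(int cpu_has_fast_strings)
    {
            if (cpu_has_fast_strings)
                    memcpy_impl = memcpy_rep_movsb;
    }

    int main(void)
    {
            char dst[4];

            select_memcpy(1);            /* pretend CPUID said yes */
            memcpy_impl(dst, "abc", 4);
            return 0;
    }

Every call through memcpy_impl still pays for an indirect branch, which
is exactly what generalized patching would remove: the call sites
themselves would be rewritten once to call the chosen implementation
directly.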
Yeah.

	Ingo