Message-ID: <alpine.DEB.2.02.1401061128130.8667@kaball.uk.xensource.com>
Date: Mon, 6 Jan 2014 11:33:00 +0000
From: Stefano Stabellini <stefano.stabellini@...citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC: Stefano Stabellini <stefano.stabellini@...citrix.com>,
<xen-devel@...ts.xenproject.org>, <linux-kernel@...r.kernel.org>,
<boris.ostrovsky@...cle.com>, <david.vrabel@...rix.com>,
<hpa@...or.com>, Mukesh Rathor <mukesh.rathor@...cle.com>
Subject: Re: [PATCH v13 08/19] xen/pvh/mmu: Use PV TLB instead of native.
On Sun, 5 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Sun, Jan 05, 2014 at 06:11:39PM +0000, Stefano Stabellini wrote:
> > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > From: Mukesh Rathor <mukesh.rathor@...cle.com>
> > >
> > > We also optimize one operation - the TLB flush. The native
> > > operation would needlessly IPI offline VCPUs, causing extra
> > > wakeups. Using the Xen one avoids that and lets the hypervisor
> > > determine which VCPUs need the TLB flush.
> > >
> > > Signed-off-by: Mukesh Rathor <mukesh.rathor@...cle.com>
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
> > > ---
> > > arch/x86/xen/mmu.c | 9 +++++++++
> > > 1 file changed, 9 insertions(+)
> > >
> > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > > index 490ddb3..c1d406f 100644
> > > --- a/arch/x86/xen/mmu.c
> > > +++ b/arch/x86/xen/mmu.c
> > > @@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> > >  void __init xen_init_mmu_ops(void)
> > >  {
> > >  	x86_init.paging.pagetable_init = xen_pagetable_init;
> > > +
> > > +	/* Optimization - we can use the HVM one but it has no idea which
> > > +	 * VCPUs are descheduled - which means that it will needlessly IPI
> > > +	 * them. Xen knows so let it do the job.
> > > +	 */
> > > +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> > > +		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > > +		return;
> > > +	}
> > >  	pv_mmu_ops = xen_mmu_ops;
> > >
> > >  	memset(dummy_mapping, 0xff, PAGE_SIZE);
> >
> > Regarding this patch, the next one, and the other changes to
> > xen_setup_shared_info, xen_setup_mfn_list_list,
> > xen_setup_vcpu_info_placement, etc.: considering that the MMU-related
> > code is very different between PV and PVH guests, I wonder whether it
> > makes any sense to keep calling xen_init_mmu_ops on PVH.
> >
> > I would introduce a new function, xen_init_pvh_mmu_ops, that sets
> > pv_mmu_ops.flush_tlb_others and calls only what PVH needs from a new
> > xen_pvh_pagetable_init.
> > Just to give you an idea, not even compile tested:
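> >
> > (A sketch of that shape - xen_flush_tlb_others and paging_init
> > already exist; the body of xen_pvh_pagetable_init below is only
> > illustrative:)
> >
> > static void __init xen_pvh_pagetable_init(void)
> > {
> > 	/* Only what PVH actually needs, e.g. paging_init(); none of
> > 	 * the PV-only p2m/mfn-list setup. */
> > 	paging_init();
> > }
> >
> > void __init xen_init_pvh_mmu_ops(void)
> > {
> > 	/* PVH is auto-translated, so the only PV MMU op worth
> > 	 * keeping is the TLB flush, which avoids IPIing descheduled
> > 	 * VCPUs. */
> > 	pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > 	x86_init.paging.pagetable_init = xen_pvh_pagetable_init;
> > }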
>
> There is something to be said for sharing the same code path between
> "old-style" PV and the new style - code coverage.
>
> That is, the code gets tested on both platforms, and if I (or
> anybody else) introduce a bug in the "common-PV-paths" it will
> be immediately obvious, as hopefully the regression tests
> will pick it up.
>
> It is not nice, as the low-level code is sprinkled with one-offs
> for PVH - which mostly amount to doing _less_.
I thought you would say that. However, in this specific case the costs
exceed the benefits. Think of all the times we'll have to debug
something: we'll be staring at the code, and dozens of minutes later
we'll realize that the code we have been looking at all along is not
actually executed in PVH mode.
> What I was thinking is to flip this around: make the PVH paths
> the default and then have something like 'if (!xen_pvh_domain())'
> around the big code.
>
> Would you be OK with this line of thinking going forward say
> after this patchset?
I am not opposed to it in principle, but I don't expect that you'll be
able to improve things significantly.
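
(For reference, the flipped version you describe would look roughly
like this - xen_pvh_domain() already exists, the rest is just a
sketch:)

void __init xen_init_mmu_ops(void)
{
	x86_init.paging.pagetable_init = xen_pagetable_init;

	/* The minimal PVH set is the default: only the TLB flush is
	 * paravirtualized, so that Xen can skip descheduled VCPUs. */
	pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;

	if (xen_pvh_domain())
		return;

	/* Only "old-style" PV gets the full pv_mmu_ops table. */
	pv_mmu_ops = xen_mmu_ops;
	memset(dummy_mapping, 0xff, PAGE_SIZE);
}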