Date:	Fri, 3 Jan 2014 16:48:00 -0800
From:	Mukesh Rathor <mukesh.rathor@...cle.com>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc:	Stefano Stabellini <stefano.stabellini@...citrix.com>,
	xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
	boris.ostrovsky@...cle.com, david.vrabel@...rix.com,
	jbeulich@...e.com
Subject: Re: [Xen-devel] [PATCH v11 09/12] xen/pvh: Piggyback on PVHVM
 XenBus and event channels for PVH.

On Wed, 18 Dec 2013 16:17:39 -0500
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com> wrote:

> On Wed, Dec 18, 2013 at 06:31:43PM +0000, Stefano Stabellini wrote:
> > On Tue, 17 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > > From: Mukesh Rathor <mukesh.rathor@...cle.com>
> > > 
> > > PVH is a PV guest with a twist - certain things in it
> > > work like HVM and others like PV. There is a similar
> > > mode - PVHVM - where we run in HVM mode with PV code
> > > enabled, and this patch builds on that.
> > > 
> > > The most notable PV interfaces are XenBus and event
> > > channels, and for PVH we will use both.
> > > 
> > > For the XenBus mechanism we piggyback on how it is done for
> > > PVHVM guests.
> > > 
> > > Ditto for the event channel mechanism - we piggyback on PVHVM -
> > > by setting up a specific vector callback and that
> > > vector ends up calling the event channel mechanism to
> > > dispatch the events as needed.
> > > 
> > > This means that from a pvops perspective, we can use
> > > native_irq_ops instead of the Xen PV-specific ones. In the
> > > future we could also support pirq_eoi_map, but that is
> > > a feature request that can be shared with PVHVM.
> > > 
> > > Signed-off-by: Mukesh Rathor <mukesh.rathor@...cle.com>
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
> > > ---
> > >  arch/x86/xen/enlighten.c           | 6 ++++++
> > >  arch/x86/xen/irq.c                 | 5 ++++-
> > >  drivers/xen/events.c               | 5 +++++
> > >  drivers/xen/xenbus/xenbus_client.c | 3 ++-
> > >  4 files changed, 17 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > > index e420613..7fceb51 100644
> > > --- a/arch/x86/xen/enlighten.c
> > > +++ b/arch/x86/xen/enlighten.c
> > > @@ -1134,6 +1134,8 @@ void xen_setup_shared_info(void)
> > >  	/* In UP this is as good a place as any to set up shared info */
> > >  	xen_setup_vcpu_info_placement();
> > >  #endif
> > > +	if (xen_pvh_domain())
> > > +		return;
> > >  
> > >  	xen_setup_mfn_list_list();
> > >  }
> > 
> > This is another one of those cases where I think we would benefit
> > from introducing xen_setup_shared_info_pvh instead of adding more
> > ifs here.
> 
> Actually this one can be removed.
> 
> > 
> > 
> > > @@ -1146,6 +1148,10 @@ void xen_setup_vcpu_info_placement(void)
> > >  	for_each_possible_cpu(cpu)
> > >  		xen_vcpu_setup(cpu);
> > >  
> > > +	/* PVH always uses native IRQ ops */
> > > +	if (xen_pvh_domain())
> > > +		return;
> > > +
> > >  	/* xen_vcpu_setup managed to place the vcpu_info within
> > >  	   the percpu area for all cpus, so make use of it */
> > >  	if (have_vcpu_info_placement) {
> > 
> > Same here?
> 
> Hmmm, I wonder if the vcpu info placement could work with PVH.

It should now (after a patch I sent a while ago)... the comment implies
that PVH uses native IRQs even in the case of vcpu info placement...

perhaps it would be clearer to do:

        for_each_possible_cpu(cpu)
                xen_vcpu_setup(cpu);
        /* PVH always uses native IRQ ops */
        if (have_vcpu_info_placement && !xen_pvh_domain()) {
            pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
            .........
