Message-ID: <1299746595.17339.722.camel@zakaz.uk.xensource.com>
Date:	Thu, 10 Mar 2011 08:43:15 +0000
From:	Ian Campbell <Ian.Campbell@...citrix.com>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC:	"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Jeremy Fitzhardinge" <jeremy@...p.org>,
	Stefano Stabellini <Stefano.Stabellini@...citrix.com>
Subject: Re: [Xen-devel] [PATCH 10/14] xen: events: maintain a list of Xen
 interrupts

On Thu, 2011-03-10 at 05:22 +0000, Konrad Rzeszutek Wilk wrote:
> On Wed, Mar 09, 2011 at 05:41:22PM +0000, Ian Campbell wrote:
> > In a PVHVM kernel, not all interrupts are Xen interrupts (APIC interrupts can also be present).
> > 
> > Currently we get away with walking over all interrupts because the
> > lookup in the irq_info array simply returns IRQT_UNBOUND and we
> > ignore it. However, this array will be going away in a future patch,
> > so we need to track manually which interrupts have been allocated by
> > the Xen events infrastructure.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@...rix.com>
> > ---
> >  drivers/xen/events.c |   59 +++++++++++++++++++++++++++++++++++++------------
> >  1 files changed, 44 insertions(+), 15 deletions(-)
> > 
> > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > index cf372d4..e119989 100644
> > --- a/drivers/xen/events.c
> > +++ b/drivers/xen/events.c
> > @@ -56,6 +56,8 @@
> >   */
> >  static DEFINE_SPINLOCK(irq_mapping_update_lock);
> >  
> > +static LIST_HEAD(xen_irq_list_head);
> > +
> >  /* IRQ <-> VIRQ mapping. */
> >  static DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1};
> >  
> > @@ -85,7 +87,9 @@ enum xen_irq_type {
> >   */
> >  struct irq_info
> >  {
> > +	struct list_head list;
> >  	enum xen_irq_type type;	/* type */
> > +	unsigned irq;
> >  	unsigned short evtchn;	/* event channel */
> >  	unsigned short cpu;	/* cpu bound */
> >  
> > @@ -135,6 +139,7 @@ static void xen_irq_info_common_init(struct irq_info *info,
> >  	BUG_ON(info->type != IRQT_UNBOUND && info->type != type);
> >  
> >  	info->type = type;
> > +	info->irq = irq;
> >  	info->evtchn = evtchn;
> >  	info->cpu = cpu;
> >  
> > @@ -311,10 +316,11 @@ static void init_evtchn_cpu_bindings(void)
> >  {
> >  	int i;
> >  #ifdef CONFIG_SMP
> > -	struct irq_desc *desc;
> > +	struct irq_info *info;
> >  
> >  	/* By default all event channels notify CPU#0. */
> > -	for_each_irq_desc(i, desc) {
> > +	list_for_each_entry(info, &xen_irq_list_head, list) {
> > +		struct irq_desc *desc = irq_to_desc(info->irq);
> >  		cpumask_copy(desc->irq_data.affinity, cpumask_of(0));
> >  	}
> >  #endif
> > @@ -397,6 +403,21 @@ static void unmask_evtchn(int port)
> >  	put_cpu();
> >  }
> >  
> > +static void xen_irq_init(unsigned irq)
> > +{
> > +	struct irq_info *info;
> > +	struct irq_desc *desc = irq_to_desc(irq);
> > +
> > +	/* By default all event channels notify CPU#0. */
> > +	cpumask_copy(desc->irq_data.affinity, cpumask_of(0));
> > +
> > +	info = &irq_info[irq];
> > +
> > +	info->type = IRQT_UNBOUND;
> > +
> > +	list_add_tail(&info->list, &xen_irq_list_head);
> 
> Should we use some form of spinlock? Just in case
> there are two drivers that are being unloaded?

The callers are xen_allocate_irq_dynamic and xen_allocate_irq_gsi, both
of which are already expected to run under irq_mapping_update_lock.
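
For the avoidance of doubt, here is a minimal sketch of how the pieces
are meant to fit together (my illustration, not the patch itself; the
example_*() helpers are hypothetical, the other names follow the quoted
patch). The allocation paths take irq_mapping_update_lock around the
list insertion, which is why xen_irq_init() itself takes no lock, and
walks over Xen-owned IRQs use list_for_each_entry() on
xen_irq_list_head rather than for_each_irq_desc():

/*
 * Minimal sketch of the pattern under discussion, not the actual
 * patch.  The example_*() helpers are hypothetical.
 */
#include <linux/list.h>
#include <linux/printk.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(irq_mapping_update_lock);
static LIST_HEAD(xen_irq_list_head);

struct irq_info {
	struct list_head list;	/* entry on xen_irq_list_head */
	unsigned irq;
	/* type, evtchn, cpu, ... as in the patch */
};

/* Caller must hold irq_mapping_update_lock. */
static void xen_irq_init(struct irq_info *info, unsigned irq)
{
	info->irq = irq;
	list_add_tail(&info->list, &xen_irq_list_head);
}

/* Allocation path: the lock serialises concurrent (un)loads. */
static void example_allocate(struct irq_info *info, unsigned irq)
{
	spin_lock(&irq_mapping_update_lock);
	xen_irq_init(info, irq);
	spin_unlock(&irq_mapping_update_lock);
}

/* Walks see only Xen-owned IRQs; APIC IRQs never appear here. */
static void example_walk(void)
{
	struct irq_info *info;

	list_for_each_entry(info, &xen_irq_list_head, list)
		pr_info("xen irq %u\n", info->irq);
}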
