Message-ID: <m13ao28lfg.fsf@frodo.ebiederm.org>
Date:	Wed, 28 May 2008 09:04:19 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andi Kleen <andi@...stfloor.org>,
	Avi Kivity <avi@...ranet.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Keir Fraser <Keir.Fraser@...citrix.com>
Subject: Re: Question about interrupt routing and irq allocation

Jeremy Fitzhardinge <jeremy@...p.org> writes:

> Eric W. Biederman wrote:
>> - I think using create_irq is a good step.
>> - I think all vectors are wasted in the case of Xen.
>>
>
> The case I'm discussing now is in hvm domains - ie, fully virtualized PC
> platform. I'm adding a driver to poke a hole through all the emulated hardware
> to get directly to the underlying Xen layer so that we can run paravirtual
> drivers to get better performance. Only the irqs associated with pv drivers will
> waste their vectors.

I see.  The fully virtualized machine case.  So we do have APICs
visible to us.
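
From the driver side the create_irq() route would look roughly like
the sketch below (xen_hvm_bind_evtchn() is a made-up name for
whatever the binding step ends up being; create_irq()/destroy_irq()
and request_irq() are the existing interfaces):

	#include <linux/interrupt.h>
	#include <linux/irq.h>

	static irqreturn_t pv_driver_interrupt(int irq, void *dev_id)
	{
		/* acknowledge and process the event */
		return IRQ_HANDLED;
	}

	static int pv_driver_setup(void *dev)
	{
		int irq, err;

		irq = create_irq();	/* dynamic irq (and today, a vector) */
		if (irq < 0)
			return irq;

		err = xen_hvm_bind_evtchn(irq);	/* hypothetical binding step */
		if (err)
			goto out_destroy;

		err = request_irq(irq, pv_driver_interrupt, 0,
				  "xen-pv", dev);
		if (err)
			goto out_destroy;
		return 0;

	out_destroy:
		destroy_irq(irq);
		return err;
	}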

>> - I think we want an individual irq for each xen irq source.
>>   Sparc already does a demux in similar circumstances with
>>   a queue of received MSI messages and a single cpu irq
>>   that these get demuxed from.
>>   If we don't have individual irqs per driver it will be hard
>>   to share a source base with native drivers.
>>
>
> In this case the sharing is between fully paravirtualized paravirt_ops Xen and
> pv-on-hvm drivers. In general I want those drivers to look as normal as
> possible, so they should use irqs in a normal way.

Right.  We should be able to assume that the native irqs for
those devices are not shared, and we should be able to extend
that property (among others) to the virtualized irqs for the
devices.
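
For comparison, the sparc-style demux boils down to something like
this sketch (xen_next_pending_evtchn() and evtchn_to_irq[] are
made-up names for whatever the real event-channel interface
provides):

	/* Sketch: one real interrupt fans out into per-driver irqs.
	 * The parent handler drains the pending-event queue and
	 * re-dispatches each entry through the normal genirq path,
	 * so drivers see ordinary, unshared irqs. */
	static irqreturn_t xen_evtchn_demux(int parent_irq, void *dev_id)
	{
		int port;

		while ((port = xen_next_pending_evtchn()) >= 0)
			generic_handle_irq(evtchn_to_irq[port]);

		return IRQ_HANDLED;
	}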

Under other hypervisors (sparc, ppc) we can run unmodified pci
drivers; just the OS platform code changes.  How close to that
can we come in the Xen case?

I think running unmodified drivers, with the OS platform code doing
the adaptation, should be the goal, unless there is a real need for
the driver to know about Xen.  Is that compatible with what you
are trying to achieve?

>> - I think it would be very nice if we could get irqs allocated
>>   in request_irq instead of create_irq (and equivalents).
>>
>
> Something along the lines of passing -1 as the irq, and it would return the
> allocated irq? It's not clear to me how all that would fit together.

Groan.  I misspoke.  I meant:
- I think it would be very nice if we could get vectors allocated
  in request_irq instead of in create_irq (and equivalents).

Just delayed vector allocation.  I wasn't after something
driver-visible.
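
In other words, roughly (just a sketch; find_free_irq() and
irq_lazy_assign_vector() are invented names, and the real hook
would live in setup_irq() before the irq is first started up):

	/* Sketch: create_irq() only reserves an irq number; the
	 * per-cpu vector is not taken until a handler is actually
	 * installed. */
	int create_irq(void)
	{
		int irq = find_free_irq();	/* hypothetical helper */

		/* note: no assign_irq_vector() call here any more */
		return irq;
	}

	/* Run from request_irq()/setup_irq() just before the irq
	 * is unmasked for the first time. */
	static int irq_lazy_assign_vector(unsigned int irq)
	{
		/* existing arch vector allocator (exact signature
		 * varies between i386 and x86_64) */
		return assign_irq_vector(irq);
	}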

>> - I think ultimately it makes sense to port the per vector
>>   code to 32bit linux.  On single cpu systems the cost should
>>   be just a hair more code, but no extra data structures.  We
>>   can easily restrict the irq allocation to allocating the same
>>   vector on all cpus for any old machines that prove flaky with
>>   irq migration.
>>
>>   We kept the code between the two architectures fairly close
>>   in sync when I worked on it, so a merge should not be a big deal.
>
> Well, if I find myself at a loose end, I'll have a look at it.

Thanks.

Eric

