Date:	Wed, 17 Jun 2009 19:58:49 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Xen-devel <xen-devel@...ts.xensource.com>,
	Keir Fraser <keir.fraser@...citrix.com>
Subject: Re: [PATCH RFC] x86/acpi: don't ignore I/O APICs just because there's no local APIC

Jeremy Fitzhardinge <jeremy@...p.org> writes:

> On 06/17/09 05:02, Eric W. Biederman wrote:
>> Trying to understand what is going on, I just read through Xen 3.4
>> and the accompanying 2.6.18 kernel source.
>>    
>
> Thanks very much for spending time on this.  I really appreciate it.
>
>> Xen has a horrible api with respect to io_apics.  They aren't even real
>> io_apics when Xen is done ``abstracting'' them.
>>
>> Xen gives us the vector to write.  But we get to assign that
>> vector arbitrarily to an ioapic and pin.
>>
>> We are required to use a hypercall when performing the write.
>> Xen overrides the delivery_mode and destination, and occasionally
>> the mask bit.
>>    
>
> Yes, it's a bit mad.  All those writes really convey is the vector, and
> Xen gave that to us in the first place.

Pretty much.  After seeing the pirq to event channel binding I had to hunt
like mad to figure out why you needed anything else.
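
To make that concrete, here is roughly how the 2.6-era kernel lays out
an ioapic redirection table entry (paraphrased from arch/x86's
<asm/io_apic.h>; the 82093AA datasheet has the authoritative field
widths).  Of all these fields, our write under Xen effectively only
conveys vector; Xen overrides delivery_mode and dest, and occasionally
mask:

	/* Paraphrase of the kernel's IO-APIC redirection table entry.
	 * Under Xen, only 'vector' in this 64-bit entry really comes
	 * from us.
	 */
	struct IO_APIC_route_entry {
		__u32	vector		:  8,	/* cpu vector to raise     */
			delivery_mode	:  3,	/* fixed, lowest prio, ... */
			dest_mode	:  1,	/* 0: physical, 1: logical */
			delivery_status	:  1,
			polarity	:  1,	/* 0: high, 1: low active  */
			irr		:  1,
			trigger		:  1,	/* 0: edge, 1: level       */
			mask		:  1,	/* 1: interrupt masked     */
			__reserved_2	: 15;
		__u8	__reserved_3[3];
		__u8	dest;			/* destination apic id/mask */
	} __attribute__ ((packed));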

>> We still have to handle polarity and the trigger mode, despite
>> the fact that Xen has acpi and mp table parsers of its own.
>>
>> I expect it would have been easier and simpler all around if there
>> were just a map_gsi event channel hypercall.  But Xen has an abi
>> and an existing set of calls, so could-have-beens aren't worth
>> worrying about much.
>>    
>
> Actually I was discussing this with Keir yesterday.  We're definitely open to
> changing the dom0 API to make things simpler on the Linux side.  (The dom0 ABI
> is more fluid than the domU one, and these changes would be backwards-compatible
> anyway.)
>
> One of the options we discussed was changing the API to get rid of the exposed
> vector, and just replace it with an operation to directly bind a gsi to a pirq
> (internal Xen physical interrupt handle, if you will), so that Xen ends up doing
> all the I/O APIC programming internally, as well as the local APIC.

As an abstraction layer I think that will work out a lot better long term.

Given what iommus do with irqs and DMA, I expect you want something
like that, something that can be used from domU.  Then you just make
allowing the operation conditional on whether you happen to have the
associated hardware mapped into your domain.
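
Something along these lines is what I mean.  The hypercall and struct
below are made up purely to illustrate; nothing like them exists in
the Xen 3.4 ABI:

	/* Hypothetical interface, purely illustrative.  The domain
	 * asks for a gsi and gets back an opaque pirq; all ioapic and
	 * local apic programming stays inside Xen.
	 */
	struct physdev_map_gsi {
		/* IN */
		uint32_t gsi;		/* ACPI global system interrupt  */
		uint32_t trigger;	/* 0: edge, 1: level             */
		uint32_t polarity;	/* 0: active high, 1: active low */
		/* OUT */
		uint32_t pirq;		/* Xen's handle for the irq      */
	};

	/* Xen refuses this unless the gsi's hardware is mapped into
	 * the calling domain -- which is exactly the dom0 == domU
	 * property we want.
	 */
	rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_gsi, &map);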

> On the Linux side, I think it means we can just point pcibios_enable/disable_irq
> to our own xen_pci_irq_enable/disable functions to create the binding between a
> PCI device and an irq.

If you want xen to assign the linux irq number, that is absolutely the
proper place to hook.
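
Roughly like this.  xen_pci_irq_enable(), xen_lookup_gsi() and
xen_map_gsi_to_pirq() are made-up names, standing in for the
acpi_pci_irq_lookup() path and whatever the gsi -> pirq hypercall
ends up being called:

	/* Sketch only -- the helpers are hypothetical. */
	static int xen_pci_irq_enable(struct pci_dev *dev)
	{
		int gsi, irq;

		gsi = xen_lookup_gsi(dev);	/* pci_dev + pin -> gsi */
		if (gsi < 0)
			return gsi;

		irq = xen_map_gsi_to_pirq(gsi);	/* gsi -> pirq == irq   */
		if (irq < 0)
			return irq;

		dev->irq = irq;
		return 0;
	}

	/* and at boot: */
	pcibios_enable_irq = xen_pci_irq_enable;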

> I haven't prototyped this yet, or even looked into it very closely, but it seems
> like a promising approach to avoid almost all interaction with the apic layer of
> the kernel.  xen_pci_irq_enable() would have to make its own calls
> acpi_pci_irq_lookup() to map pci_dev+pin -> gsi, so we would still need to make
> sure ACPI is up to that job.
>
>> Xen's ioapic affinity management logic looks like it only works
>> on sunny days if you don't stress it too hard.
> Could you be a bit more specific?  Are you referring to problems that you've
> fixed in the kernel which are still present in Xen?

Problems I have avoided.

From when I was messing with the irq code, I don't recall finding many
cases where migrating irqs from process context worked without hitting
hardware bugs: ioapic state machine lockups and the like.

I currently make that problem harder on myself by not allocating vectors
globally, but it gives an irq architecture that should work for however
much I/O we have in the future.  
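
(Roughly, mainline keeps a per-cpu vector -> irq table rather than one
global namespace; from arch/x86's <asm/hw_irq.h>, give or take:)

	/* Each cpu has its own vector -> irq table, so vector N on
	 * cpu 0 and vector N on cpu 1 can be different irqs.
	 */
	typedef int vector_irq_t[NR_VECTORS];
	DECLARE_PER_CPU(vector_irq_t, vector_irq);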

The one case where it is most likely to work is lowest priority
interrupt delivery, where the hardware decides which cpu the interrupt
should go to and it only takes a single register write to change the
cpu mask; that is also the common case in Xen.
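
Concretely, with lowest priority delivery and logical destinations,
retargeting is just rewriting the high dword of the redirection entry.
A sketch using the entry layout above, with cpu_mask_to_apicid() and
io_apic_write() roughly as in the kernel of that era:

	/* Sketch: lowest priority + logical destinations.  The
	 * hardware picks the cpu, so no vector has to migrate;
	 * register 0x10 + 2*pin + 1 is the high dword of
	 * redirection entry 'pin'.
	 */
	entry.delivery_mode = dest_LowestPrio;	/* hw picks the cpu */
	entry.dest_mode     = 1;		/* logical mask     */
	entry.dest          = cpu_mask_to_apicid(mask);
	io_apic_write(apic, 0x10 + 2 * pin + 1, *(((u32 *)&entry) + 1));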

When you start directing irqs at specific cpus, things get a lot
easier to break.

>> It looks like the only thing Xen gains by pushing out the work of
>> setting the polarity and setting edge/level triggering is our database
>> of motherboards which get those things wrong.
>>    
>
> Avoiding duplication of effort is a non-trivial benefit.
>
>> So I expect the thing to do is factor out acpi_parse_ioapic and
>> mp_register_ioapic so we can share information on borked BIOSes
>> between the Xen dom0 port and mainline, and otherwise push Xen's
>> pseudo apic handling off into its strange little corner.
>
> Yes, that's what I'll look into.

How does Xen handle domU with hardware directly mapped?

Temporarily ignoring what we have to do to work with Xen 3.4, I'm
curious whether we could make the Xen dom0 irq case the same as the
Xen domU case.

Eric
