Message-ID: <m1ps8ac4fc.fsf@ebiederm.dsl.xmission.com>
Date: Fri, 16 Feb 2007 12:01:43 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Andi Kleen <ak@...e.de>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Ingo Molnar <mingo@...e.hu>,
Alan Cox <alan@...rguk.ukuu.org.uk>
Subject: Re: [RFC] killing the NR_IRQS arrays.

Jeremy Fitzhardinge <jeremy@...p.org> writes:
> Eric W. Biederman wrote:
>> So I propose we remove all assumptions from the code that we actually
>> have an array of irqs. That will allow for irq_desc to be dynamically
>> allocated instead of statically allocated saving memory and reducing
>> kernel complexity.
>>
>
> Sounds good to me. In Xen we have 1024 event channels which we need to
> map down into a smaller irq space. Aside from the complexity of maintaining a
> mapping table, that's not a huge issue for now, but when we start
> exposing pci devices to guests it all becomes more complex. The ideal
> for us is to simply use event channel == irq, which this would allow.

Well, you shouldn't need to wait; just run with a kernel with NR_IRQS >= 1024.
1024 is a stretch, but it isn't too bad. There are already x86 boxes that have
more pins on their ioapics than that. So x86_64 should be able to support
that, and with this latest round of patches from Len Brown and me, so
should i386.
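
To make the longer-term direction concrete, here is a minimal standalone
sketch (illustrative names only, not the actual patches) of what dropping
the static array could look like: descriptors get allocated on first use
and found again through a lookup keyed by irq number, so there is no
build-time NR_IRQS ceiling.

/*
 * Standalone sketch, not kernel code.  The point is to stop indexing a
 * static irq_desc[NR_IRQS] array and instead allocate descriptors on
 * demand, e.g. when an event channel or MSI-X vector is bound.
 */
#include <stdio.h>
#include <stdlib.h>

struct irq_desc {
        unsigned int irq;
        void (*handle)(unsigned int irq);
        struct irq_desc *next;          /* hash-chain link */
};

#define DESC_HASH_SIZE 256
static struct irq_desc *desc_hash[DESC_HASH_SIZE];

/* Find an existing descriptor; there is no NR_IRQS bound to check. */
static struct irq_desc *irq_to_desc(unsigned int irq)
{
        struct irq_desc *desc = desc_hash[irq % DESC_HASH_SIZE];

        while (desc && desc->irq != irq)
                desc = desc->next;
        return desc;
}

/* Allocate a descriptor the first time an irq number is used, instead
 * of reserving NR_IRQS slots at build time. */
static struct irq_desc *irq_to_desc_alloc(unsigned int irq)
{
        struct irq_desc *desc = irq_to_desc(irq);

        if (!desc) {
                desc = calloc(1, sizeof(*desc));
                if (!desc)
                        return NULL;
                desc->irq = irq;
                desc->next = desc_hash[irq % DESC_HASH_SIZE];
                desc_hash[irq % DESC_HASH_SIZE] = desc;
        }
        return desc;
}

static void dummy_handler(unsigned int irq)
{
        printf("irq %u handled\n", irq);
}

int main(void)
{
        /* An irq number well above any plausible static NR_IRQS. */
        struct irq_desc *desc = irq_to_desc_alloc(70000);

        if (desc) {
                desc->handle = dummy_handler;
                desc->handle(desc->irq);
        }
        return 0;
}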

On the other hand, 1024 looks extremely limiting for exposing PCI devices.
If someone gets serious about pushing what is legal with MSI-X you may be
in trouble, as a single device is allowed to have 4096 interrupts. Not
that I can think of a user for that many, but...
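
For a rough sense of what statically sizing for that worst case would
cost (the per-descriptor size and device count below are assumed figures,
purely for illustration):

/* Back-of-the-envelope sketch of static sizing for the MSI-X worst case. */
#include <stdio.h>

int main(void)
{
        const unsigned long desc_size = 256;  /* assumed bytes per irq_desc */
        const unsigned long msix_max = 4096;  /* per-device figure above */
        const unsigned long devices = 8;      /* hypothetical device count */

        printf("NR_IRQS=1024 reserves roughly %lu KiB\n",
               1024 * desc_size / 1024);
        printf("%lu such devices would need NR_IRQS=%lu, roughly %lu KiB\n",
               devices, devices * msix_max,
               devices * msix_max * desc_size / 1024);
        return 0;
}
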
Eric