Message-ID: <m1ljhvwmiu.fsf@fess.ebiederm.org>
Date: Tue, 24 Nov 2009 15:06:33 -0800
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Dimitri Sivanich <sivanich@....com>,
Arjan van de Ven <arjan@...radead.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Yinghai Lu <yinghai@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Jesse Barnes <jbarnes@...tuousgeek.org>,
David Miller <davem@...emloft.net>,
Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v6] x86/apic: limit irq affinity
Thomas Gleixner <tglx@...utronix.de> writes:
> Please do not put anything complex into x86 code at all. Such designs
> are likely to happen on other architectures and as I said before we
> want to have
>
> 1) the decision function what's valid and not in the generic code
For the UV problem I don't have an issue. assign_irq_vector
enforces some rules that I don't see how we could expose
to user space.
> 2) a way to expose that information as part of the irq interface to
> user space.
-EINVAL?
> So what's wrong with a per irq_chip function which returns the cpumask
> which is valid for irq N ?
I have no problems with a generic function to do that.
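As a rough sketch of what I would not object to (the callback name and
the clamp helper below are made up for illustration, nothing like this
exists today):

/*
 * Hypothetical sketch only: assume irq_chip grew a callback
 *	void (*get_allowed_cpus)(unsigned int irq, struct cpumask *mask);
 * that reports which cpus the irq can physically be routed to.
 */
static int clamp_affinity(unsigned int irq, const struct cpumask *requested,
			  struct cpumask *effective)
{
	struct irq_desc *desc = irq_to_desc(irq);
	cpumask_var_t allowed;

	if (!desc->chip->get_allowed_cpus) {
		/* chip reports no restriction, pass the request through */
		cpumask_copy(effective, requested);
		return 0;
	}

	if (!alloc_cpumask_var(&allowed, GFP_KERNEL))
		return -ENOMEM;

	desc->chip->get_allowed_cpus(irq, allowed);
	if (!cpumask_and(effective, requested, allowed)) {
		free_cpumask_var(allowed);
		return -EINVAL;		/* nothing in the request is usable */
	}

	free_cpumask_var(allowed);
	return 0;
}

The same callback could back a /proc/irq/N/possible_cpus file, whatever
it ends up being called.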
> That function would be called to check the affinity mask in
> set_irq_affinity and to dump the mask to /proc/irq/N/possible_cpus or
> whatever name we agree on.
>
> That way we don't have to worry about where in the x86 code the
> decision should reside as you simply would always get valid masks from
> the core code.
Impossible. assign_irq_vector is the only function that can tell you
whether a mask is valid or not. Currently we support roughly 240 irqs
per cpu, and we support more than 240 irqs in total. I don't see
how you can enforce that limit from the core code.
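To illustrate the point (rough sketch, not the real code; the real
bookkeeping is the per-cpu vector_irq[] table that assign_irq_vector
maintains, and this ignores system vectors, priority spreading, etc.):

/*
 * Illustration only.  vector_irq is x86's per-cpu vector table; a free
 * slot holds -1.  Whether a mask is satisfiable depends on what has
 * already been allocated on each cpu, which only this bookkeeping knows.
 */
static bool mask_has_free_vector(const struct cpumask *mask)
{
	int cpu, vector;

	for_each_cpu_and(cpu, mask, cpu_online_mask) {
		for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
			if (per_cpu(vector_irq, cpu)[vector] == -1)
				return true;	/* this cpu still has room */
		}
	}
	return false;	/* every cpu in the mask is already full */
}

And the answer changes every time a vector is allocated or freed, so a
mask the core code believed was valid can stop being valid before the
arch code ever sees it.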
Furthermore, irq migration on x86 is a very non-trivial exercise.
We must wait until we receive an irq at the new location before
we clean up the irq state at the old location, to ensure that the
state change has actually gone through. So once again, at the time the
request is made we cannot know whether it will succeed.
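The shape of it, very roughly (the real logic is split across
set_affinity, __assign_irq_vector() and the cleanup path; the names
below are purely illustrative):

/* Illustrative only. */
struct example_irq_cfg {
	int	old_vector, old_cpu;	/* where the irq used to point */
	int	new_vector, new_cpu;	/* where we just pointed it */
	bool	move_in_progress;
};

/* step 1: retarget the irq, but do not free the old vector yet */
static void example_start_move(struct example_irq_cfg *cfg, int vector, int cpu)
{
	cfg->new_vector = vector;
	cfg->new_cpu = cpu;
	cfg->move_in_progress = true;
	/* reprogram the IO-APIC/MSI entry to the new vector+destination here */
}

/* step 2: run from the interrupt path once an irq arrives at the new home */
static void example_finish_move(struct example_irq_cfg *cfg)
{
	if (!cfg->move_in_progress)
		return;
	/*
	 * Seeing an interrupt on the new vector proves the routing change
	 * reached the hardware; only now is it safe to release the old
	 * vector so it can be handed out to somebody else.
	 */
	/* free (old_cpu, old_vector) in the per-cpu vector table here */
	cfg->move_in_progress = false;
}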
So, Thomas, the logical conclusion of what you are requesting, an
architecture-specific interface for migrating irqs that never needs
to return error codes because the generic code has enough information
to avoid all of the problem cases, is not going to happen.
It is totally unreasonable.
> That just works and is neither restricted to UV nor to x86.
Doing it all in the core totally fails as it gets the initial irq
assignment wrong.
Last I looked, set_irq_affinity was a horribly broken interface. We
cannot return error codes to user space when it asks us to do the
impossible. Right now irq->affinity is a hint that we occasionally
ignore when what it requests is impossible.
....
Thomas, my apologies for ranting, but I am extremely sensitive about
people placing demands on the irq code that would be very convenient
and simple for the rest of the world, except that the hardware does not
work the way people envision it should. The worst offender is
the cpu hotunplug logic, which requests that we perform the impossible
when it comes to irq migration. In the case of UV I expect cpu hotplug
is going to request that we migrate irqs to another node.
Right now a lot of the generic irq code is living in a deluded fantasy, and
I really don't want to see more impossible requests from the generic code
added to the pile.
...
The architecture-specific function assign_irq_vector has all of the
information available to it to make the decision, and we use it
consistently everywhere. For the case of UV it needs to know about
one more possible hardware limitation to do its job. I am happy
for that information to come from an architecture-agnostic source, but
making the decision somewhere else is just a guarantee of more subtle
breakage that occasionally fails for people, at too low a rate for
anyone to care enough to fix it.
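If someone wants to describe that restriction generically, fine, but it
has to end up as an input to the arch allocator, something like the
following (the allowed_mask parameter and the function name are made up
for illustration):

/*
 * Hypothetical: suppose the generic layer hands us a per-irq
 * "allowed_mask" describing the hardware restriction (for UV, the cpus
 * on the node that owns the device).  The decision itself still lives
 * inside the arch vector allocator.
 */
static int example_assign_irq_vector(int irq, const struct cpumask *requested,
				     const struct cpumask *allowed_mask)
{
	cpumask_var_t target;
	int err;

	if (!alloc_cpumask_var(&target, GFP_ATOMIC))
		return -ENOMEM;

	/* honor the hardware restriction first ... */
	if (!cpumask_and(target, requested, allowed_mask)) {
		free_cpumask_var(target);
		return -EINVAL;
	}

	/* ... then run the normal per-cpu vector search over 'target' */
	err = 0;	/* placeholder for the real vector search */

	free_cpumask_var(target);
	return err;
}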
Eric