Message-ID: <47CEF1CD.6040503@qualcomm.com>
Date: Wed, 05 Mar 2008 11:17:33 -0800
From: Max Krasnyansky <maxk@...lcomm.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: Paul Jackson <pj@....com>, mingo@...e.hu, tglx@...utronix.de,
oleg@...sign.ru, rostedt@...dmis.org, linux-kernel@...r.kernel.org,
rientjes@...gle.com
Subject: Re: [RFC/PATCH] cpuset: cpuset irq affinities
Peter Zijlstra wrote:
> How about we make this an in-kernel boot set that by default contains all
> IRQs, all unbound kthreads and all of user-space.
I assumed that this is exactly what we meant by the boot set all along :).

One thing I wanted to clarify is that by all IRQs we literally mean all of them,
even those that do not have handlers yet. /proc/irq/N/smp_affinity, for example,
is not available for IRQs that are not active. In other words, an IRQ must not
suddenly show up in the root set when something does request_irq() on it.
The same applies to other things too.
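Roughly, I mean something like the following. This is just a sketch, not a
patch; boot_set_add_irq() is a made-up helper, and it assumes the flat
irq_desc[]/NR_IRQS layout we have today:

	/*
	 * Seed the boot set with every IRQ number, whether or not a
	 * handler has been requested yet, so a later request_irq()
	 * cannot make an IRQ suddenly appear in the root set.
	 */
	static void __init boot_set_seed_irqs(void)
	{
		unsigned int irq;

		for (irq = 0; irq < NR_IRQS; irq++) {
			/* deliberately do NOT test irq_desc[irq].action here */
			boot_set_add_irq(irq);
		}
	}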
>> How does all this interact with /proc/irq/N/smp_affinity?
>
> Much the same way the cpuset cpus_allowed interacts with a task's
> cpus_allowed. That is, cs->cpus_allowed is a mask on top of the provided
> affinity.
>
> If for some reason the cs->cpus_allowed changes in such a way that the
> user-specified mask becomes empty (irq->cpus_allowed & cs->cpus_allowed
> == 0), then print a message and set it to the full mask
> (irq->cpus_allowed = cs->cpus_allowed).
>
> If for some reason the cs->cpus_allowed changes in such a way that the
> mask is physically impossible (set_irq_affinity(cs->cpus_allowed)
> fails), then print a message and move the IRQ to the parent set.
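Just to make sure we are reading this the same way, here is how I understand
that, in pseudo-kernel code. irq_cpus_allowed, cs_cpus_allowed and
move_irq_to_parent_cpuset() are placeholders for illustration, not real API:

	cpumask_t effective;

	cpus_and(effective, irq_cpus_allowed, cs_cpus_allowed);

	if (cpus_empty(effective)) {
		/* user mask no longer intersects the cpuset: warn and widen */
		printk(KERN_WARNING "irq %d: affinity outside cpuset, resetting\n", irq);
		effective = cs_cpus_allowed;
	}

	if (set_irq_affinity(irq, effective) < 0) {
		/* mask is physically impossible: warn and move to the parent set */
		printk(KERN_WARNING "irq %d: affinity not settable, moving to parent\n", irq);
		move_irq_to_parent_cpuset(irq);
	}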
I think Paul missed my earlier reply where I pointed out that the original patch
conflicted with the /proc/irq/N/smp_affinity API. The solution is for
irq_set_affinity() to enforce cpus_allowed just like sched_setaffinity() does
for tasks.
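Something along these lines, again just a sketch with made-up helper names
(irq_cpuset_cpus_allowed() does not exist), to show the /proc/irq/N/smp_affinity
write path clipping the requested mask the way sched_setaffinity() does:

	int irq_set_affinity_checked(unsigned int irq, cpumask_t requested)
	{
		/* cpus the IRQ's cpuset allows; hypothetical helper */
		cpumask_t allowed = irq_cpuset_cpus_allowed(irq);
		cpumask_t effective;

		cpus_and(effective, requested, allowed);
		if (cpus_empty(effective))
			return -EINVAL;	/* request falls entirely outside the cpuset */

		return set_irq_affinity(irq, effective);
	}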
Max