Message-ID: <1340315939.3696.95.camel@sbsiddha-desk.sc.intel.com>
Date: Thu, 21 Jun 2012 14:58:59 -0700
From: Suresh Siddha <suresh.b.siddha@...el.com>
To: Alexander Gordeev <agordeev@...hat.com>
Cc: yinghai@...nel.org, linux-kernel@...r.kernel.org, x86@...nel.org,
gorcunov@...nvz.org, Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 1/2] x86, irq: update irq_cfg domain unless the new
affinity is a subset of the current domain
On Thu, 2012-06-21 at 13:00 +0200, Alexander Gordeev wrote:
> On Tue, Jun 19, 2012 at 05:18:42PM -0700, Suresh Siddha wrote:
> > On Mon, 2012-06-18 at 17:51 -0700, Suresh Siddha wrote:
> > BTW, there is still one open issue that I would like to address: how
> > to handle the vector pressure during boot etc. (as the default vector
> > assignment specifies all online cpus) when there are a lot of interrupt
> > sources but only a few x2apic clusters (like the one- or two-socket
> > server case).
> >
> > We should be able to do something like the appended. Any better
> > suggestions? I don't want to add boot parameters to limit the x2apic
> > cluster membership etc. (to fewer than 16 logical cpus) if possible.
>
> This cpu_online_mask approach should work IMO, although it looks a little
> bit hacky to me. Maybe we could start with default_vector_allocation_domain()
> and explicitly switch to cluster_vector_allocation_domain() once booted?
It is not just during boot. Module load/unload will also go through
these paths.
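
For illustration only, one shape this could take (a guess at the
approach, not the actual appended patch; the irq_default_affinity check
is an assumption here, and cpus_in_cluster is the per-cpu cluster mask
from x2apic_cluster.c):

static void cluster_vector_allocation_domain(int cpu, struct cpumask *retmask,
					     const struct cpumask *mask)
{
	/*
	 * Default case (boot, device bringup, module load): the request
	 * is effectively "all online cpus", so route to a single cpu
	 * instead of reserving the vector on every cpu of the cluster.
	 */
	if (mask == irq_default_affinity)
		cpumask_copy(retmask, cpumask_of(cpu));
	else
		/* explicit affinity request: use the whole x2apic cluster */
		cpumask_copy(retmask, per_cpu(cpus_in_cluster, cpu));
}

That keeps the default path cheap on vectors while still honoring an
explicit user/irqbalance affinity with the full cluster.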
> As for boot parameters, I can think of a multi-pass walk through a cpumask
> to find a free cluster => core => sibling. In the worst case I can imagine
> a vector-space defragmenter. But nothing small enough to avoid a major
> rework of the current code.
We can probably go with something simple like what I sent earlier (the
third patch in the new version does this). Depending on the need and
future use cases, we can enhance this further later.
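
Purely as a sketch of that multi-pass idea, for the record (the
*_has_free_vectors() helpers are hypothetical and do not exist in the
kernel; each pass widens the search scope):

static int pick_target_cpu(const struct cpumask *allowed)
{
	int cpu;

	/* pass 1: prefer a cpu whose whole x2apic cluster has free vectors */
	for_each_cpu(cpu, allowed)
		if (cluster_has_free_vectors(cpu))	/* hypothetical */
			return cpu;

	/* pass 2: fall back to a cpu whose core has free vectors */
	for_each_cpu(cpu, allowed)
		if (core_has_free_vectors(cpu))		/* hypothetical */
			return cpu;

	/* pass 3: any HT sibling with free vectors, else the first cpu */
	for_each_cpu(cpu, allowed)
		if (sibling_has_free_vectors(cpu))	/* hypothetical */
			return cpu;

	return cpumask_first(allowed);
}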
> Also, how heavy is the vector pressure, actually?
I don't know, but I suspect a one-socket server, where there can be a
single x2apic cluster (for example, 8 cores with HT), will see some
pressure on some platforms.
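
Back-of-the-envelope, with approximate numbers:

	usable vectors per cpu  ~= 256 - 32 (exceptions) - ~20 (system) ~= 200
	one 16-cpu cluster as the allocation domain:
	    each irq reserves the same vector on all 16 cpus
	    => the whole socket tops out at ~200 irqs, not 16 * ~200

So a single-cluster box with lots of MSI-X sources could get tight.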
thanks,
suresh