Message-ID: <1337707275.1997.184.camel@sbsiddha-desk.sc.intel.com>
Date: Tue, 22 May 2012 10:21:15 -0700
From: Suresh Siddha <suresh.b.siddha@...el.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: agordeev@...hat.com, yinghai@...nel.org,
linux-kernel@...r.kernel.org, x86@...nel.org, gorcunov@...nvz.org
Subject: Re: [PATCH 2/2] x2apic, cluster: use all the members of one cluster
specified in the smp_affinity mask for the interrupt destination
On Tue, 2012-05-22 at 09:04 +0200, Ingo Molnar wrote:
> * Suresh Siddha <suresh.b.siddha@...el.com> wrote:
>
> > If the HW implements round-robin interrupt delivery, this
> > enables multiple CPUs (which are part of the user-specified
> > interrupt smp_affinity mask and belong to the same x2apic
> > cluster) to service the interrupt.
>
> Could/should we do something similar for regular APICs as well?
> They too support masks and LowestPrio delivery - and doing that
> will increase test coverage rather significantly.
Existing logical flat xapic mode already takes advantage of this today.
And that apic driver allows multiple CPUs to be set in the destination
field, enabling round-robin/power-aware interrupt delivery and so on,
depending on the platform capabilities.
So most laptops with 8 or fewer logical CPUs already benefit from this
today.
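To make the flat-mode encoding concrete, here is a minimal sketch (the
helper name is made up, this is not the in-tree driver code): in logical
flat mode each CPU owns one bit of the 8-bit logical APIC ID, so a
multi-CPU destination is just the OR of the per-CPU bits, and with
lowest-priority delivery the HW picks one of the set CPUs per interrupt.

#include <stdint.h>

/*
 * Sketch only (hypothetical helper): build an 8-bit logical flat
 * destination covering several CPUs.  One bit per CPU means at
 * most 8 logical CPUs can be addressed this way.
 */
static uint8_t flat_logical_dest(const unsigned int *cpus, int ncpus)
{
	uint8_t dest = 0;
	int i;

	for (i = 0; i < ncpus; i++)
		dest |= 1u << cpus[i];	/* valid only for cpu < 8 */

	return dest;
}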
For bigger platforms, we use physical xapic mode. Some older kernels
used xapic cluster mode, which allows only 4 members per cluster. With
two HT siblings, that leaves room for only 2 cores, so the benefit
would be limited. And on multi-socket platforms, x2apic/vt-d will be
available and used for various other reasons too (virtualization, etc.).
x2apic is available on desktop/laptop models too.
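For reference, a hedged sketch of the x2apic cluster destination
encoding that the patch under discussion relies on (the function name
is hypothetical; only the bit layout follows the Intel SDM): bits 31:16
of the 32-bit logical ID carry the cluster number and bits 15:0 a
one-bit-per-member mask, so up to 16 CPUs of one cluster can be merged
into a single destination.

#include <stdint.h>

/*
 * Illustrative sketch, not the patch itself: merge the members of
 * one x2apic cluster into a single logical destination.  Bits 31:16
 * hold the cluster ID (initial APIC ID >> 4), bits 15:0 one bit per
 * cluster member, so lowest-priority HW delivery can round-robin
 * among all of them.
 */
static uint32_t x2apic_cluster_dest(const uint32_t *apic_ids, int n)
{
	uint32_t cluster = apic_ids[0] >> 4;
	uint32_t dest = cluster << 16;
	int i;

	for (i = 0; i < n; i++) {
		if ((apic_ids[i] >> 4) != cluster)
			continue;	/* skip CPUs outside this cluster */
		dest |= 1u << (apic_ids[i] & 0xf);
	}

	return dest;
}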
So for legacy xapic, we can use logical flat mode to take advantage of
these HW delivery modes.
thanks,
suresh