Message-ID: <alpine.LFD.2.02.1203291154010.2542@ionos>
Date: Thu, 29 Mar 2012 11:58:49 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Jiang Liu <liuj97@...il.com>
cc: Tony Luck <tony.luck@...el.com>, Fenghua Yu <fenghua.yu@...el.com>,
Jes Sorensen <jes@....com>, Ingo Molnar <mingo@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Yinghai Lu <yinghai@...nel.org>,
Thomas Meyer <thomas@...3r.de>,
Jiang Liu <jiang.liu@...wei.com>, linux-ia64@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-altix@....com, x86@...nel.org,
chenkeping@...wei.com
Subject: Re: [PATCH] IRQ: normalize chip->irq_set_affinity return value on
x86 and IA64
On Tue, 6 Mar 2012, Jiang Liu wrote:
> On x86 and IA64 platforms, the interrupt controller chip's irq_set_affinity()
> method always copies the affinity mask to the irq_data->affinity field but
> still returns 0 (IRQ_SET_MASK_OK). That return value causes the interrupt
> core logic to unnecessarily copy the mask to irq_data->affinity again.
> So return IRQ_SET_MASK_OK_NOCOPY instead of IRQ_SET_MASK_OK to get rid of
> the duplicated copy operation.
>
> This patch applies to v3.3-rc6 and has been tested on x86 platforms.
>
> Signed-off-by: Jiang Liu <jiang.liu@...wei.com>
> ---
> arch/ia64/kernel/iosapic.c | 4 +++-
> arch/ia64/kernel/msi_ia64.c | 4 ++--
> arch/ia64/sn/kernel/irq.c | 2 +-
> arch/ia64/sn/kernel/msi_sn.c | 2 +-
> arch/x86/kernel/apic/io_apic.c | 11 ++++++-----
> arch/x86/platform/uv/uv_irq.c | 2 +-
> kernel/irq/internals.h | 3 +++
> kernel/irq/manage.c | 39 ++++++++++++++++++++++-----------------
> kernel/irq/migration.c | 6 +-----
This does not work that way. The patch wants to be split into 3 parts
(core, x86, ia64). The patches are completely independent. Please
resend.
Thanks,
tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/