Message-ID: <20140729084433.GB11179@pd.tnic>
Date: Tue, 29 Jul 2014 10:44:33 +0200
From: Borislav Petkov <bp@...en8.de>
To: "Chen, Gong" <gong.chen@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org, tglx@...utronix.de,
paulus@...ba.org, benh@...nel.crashing.org, tony.luck@...el.com,
hpa@...or.com, jkosina@...e.cz, rafael.j.wysocki@...el.com,
linux@....linux.org.uk, ralf@...ux-mips.org,
schwidefsky@...ibm.com, davem@...emloft.net,
viro@...iv.linux.org.uk, fweisbec@...il.com, cl@...ux.com,
akpm@...ux-foundation.org, axboe@...nel.dk,
JBottomley@...allels.com, neilb@...e.de,
christoffer.dall@...aro.org, rostedt@...dmis.org, rric@...nel.org,
gregkh@...uxfoundation.org, mhocko@...e.cz, david@...morbit.com
Subject: Re: [RFC PATCH v1 13/70] x86, x2apic_cluster: _FROZEN Cleanup

On Mon, Jul 28, 2014 at 02:04:55AM -0400, Chen, Gong wrote:
> On Wed, Jul 23, 2014 at 10:36:28PM +0200, Borislav Petkov wrote:
> > Those checks dealing with CPU_TASKS_FROZEN in-between make the whole
> > switch statement hard to follow.
> >
> > How about we go a step further and deal with CPU_UP_CANCELED_FROZEN
> > upfront and even simplify the rest:
> >
>
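IOW, something like the usual notifier idiom (untested sketch, not the
code the patch below adds):

	/*
	 * Fold the _FROZEN variants into their base actions so the
	 * switch below only ever sees CPU_UP_PREPARE, CPU_UP_CANCELED
	 * and CPU_DEAD.
	 */
	action &= ~CPU_TASKS_FROZEN;

That way no case needs a _FROZEN twin.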
> --------8<--------
> Subject: [RFC PATCH v2 13/70] x86, x2apic_cluster: _FROZEN Cleanup
>
> Remove XXX_FROZEN state from x86/x2apic_cluster.
>
> Signed-off-by: Chen, Gong <gong.chen@...ux.intel.com>
> Suggested-by: Borislav Petkov <bp@...en8.de>
> ---
> arch/x86/kernel/apic/x2apic_cluster.c | 37 +++++++++++++++++++++++------------
> 1 file changed, 24 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
> index e66766b..b8a6ea8 100644
> --- a/arch/x86/kernel/apic/x2apic_cluster.c
> +++ b/arch/x86/kernel/apic/x2apic_cluster.c
> @@ -144,6 +144,20 @@ static void init_x2apic_ldr(void)
> }
> }
>
> +static void __update_clusterinfo(unsigned int this_cpu)
> +{
> + unsigned int cpu;
> +
> + for_each_online_cpu(cpu) {
> + if (x2apic_cluster(this_cpu) != x2apic_cluster(cpu))
> + continue;
> + __cpu_clear(this_cpu, per_cpu(cpus_in_cluster, cpu));
> + __cpu_clear(cpu, per_cpu(cpus_in_cluster, this_cpu));
> + }
> + free_cpumask_var(per_cpu(cpus_in_cluster, this_cpu));
> + free_cpumask_var(per_cpu(ipi_mask, this_cpu));
> +}
> +
> /*
> * At CPU state changes, update the x2apic cluster sibling info.
> */
> @@ -151,34 +165,31 @@ static int
> update_clusterinfo(struct notifier_block *nfb, unsigned long action, void *hcpu)
> {
> unsigned int this_cpu = (unsigned long)hcpu;
> - unsigned int cpu;
> int err = 0;
>
> switch (action) {
> case CPU_UP_PREPARE:
> if (!zalloc_cpumask_var(&per_cpu(cpus_in_cluster, this_cpu),
> - GFP_KERNEL)) {
> + GFP_KERNEL))
> err = -ENOMEM;
> - } else if (!zalloc_cpumask_var(&per_cpu(ipi_mask, this_cpu),
> - GFP_KERNEL)) {
> + else if (!zalloc_cpumask_var(&per_cpu(ipi_mask, this_cpu),
> + GFP_KERNEL)) {
> free_cpumask_var(per_cpu(cpus_in_cluster, this_cpu));
> err = -ENOMEM;
You need to start restraining yourself and doing clean patches. Those
changes here are unrelated, please drop them.

Go and reread Documentation/SubmittingPatches, section 3 in particular.
> }
> break;
> case CPU_UP_CANCELED:
> - case CPU_UP_CANCELED_FROZEN:
> case CPU_DEAD:
> - for_each_online_cpu(cpu) {
> - if (x2apic_cluster(this_cpu) != x2apic_cluster(cpu))
> - continue;
> - __cpu_clear(this_cpu, per_cpu(cpus_in_cluster, cpu));
> - __cpu_clear(cpu, per_cpu(cpus_in_cluster, this_cpu));
> - }
> - free_cpumask_var(per_cpu(cpus_in_cluster, this_cpu));
> - free_cpumask_var(per_cpu(ipi_mask, this_cpu));
> + __update_clusterinfo(this_cpu);
> + break;
> + default:
> break;
> }
>
> + if (test_and_clear_bit(CPU_TASKS_FROZEN, &action) &&
What.. why?
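If the idea is to strip the FROZEN flag: test_and_clear_bit() takes a
bit *number*, not a mask. CPU_TASKS_FROZEN is 0x0010, so this pokes at
bit 16 of @action rather than the FROZEN bit (bit 4). The plain mask
from above is both simpler and correct (untested sketch):

	/* Remember whether this was a _FROZEN action, then strip the flag. */
	bool frozen = action & CPU_TASKS_FROZEN;

	action &= ~CPU_TASKS_FROZEN;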
--
Regards/Gruss,
Boris.
Sent from a fat crate under my desk. Formatting is fine.