Message-Id: <20191216213125.9536-3-peterx@redhat.com>
Date: Mon, 16 Dec 2019 16:31:24 -0500
From: Peter Xu <peterx@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Juri Lelli <juri.lelli@...hat.com>, peterx@...hat.com,
Marcelo Tosatti <mtosatti@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH v2 2/3] MIPS: smp: Remove tick_broadcast_count
Now that smp_call_function_single_async() protects against reuse by
returning -EBUSY while the csd object is still pending, the
tick_broadcast_count counter is no longer needed.
Signed-off-by: Peter Xu <peterx@...hat.com>
---
arch/mips/kernel/smp.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index f510c00bda88..0678901c214d 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -696,21 +696,16 @@ EXPORT_SYMBOL(flush_tlb_one);
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 
-static DEFINE_PER_CPU(atomic_t, tick_broadcast_count);
 static DEFINE_PER_CPU(call_single_data_t, tick_broadcast_csd);
 
 void tick_broadcast(const struct cpumask *mask)
 {
-	atomic_t *count;
 	call_single_data_t *csd;
 	int cpu;
 
 	for_each_cpu(cpu, mask) {
-		count = &per_cpu(tick_broadcast_count, cpu);
 		csd = &per_cpu(tick_broadcast_csd, cpu);
-
-		if (atomic_inc_return(count) == 1)
-			smp_call_function_single_async(cpu, csd);
+		smp_call_function_single_async(cpu, csd);
 	}
 }
 
@@ -718,7 +713,6 @@ static void tick_broadcast_callee(void *info)
 {
 	int cpu = smp_processor_id();
 	tick_receive_broadcast();
-	atomic_set(&per_cpu(tick_broadcast_count, cpu), 0);
 }
 
 static int __init tick_broadcast_init(void)
--
2.23.0
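
The behaviour the changelog relies on can be sketched in a few lines of
plain userspace C. The names below (mock_csd, mock_call_single_async,
deliver_pending) are invented stand-ins, not the kernel API; the sketch
only shows why a caller-side guard such as tick_broadcast_count becomes
redundant once the async helper itself rejects a still-pending csd with
-EBUSY.

/*
 * Minimal userspace sketch (not kernel code). All names are made up for
 * illustration; this is not the implementation from this series.
 */
#include <stdbool.h>
#include <errno.h>
#include <stdio.h>

struct mock_csd {
	bool pending;		/* stand-in for the csd "in flight" state */
	void (*func)(void *);
	void *info;
};

/* Mirrors the contract the changelog relies on: -EBUSY while pending. */
static int mock_call_single_async(struct mock_csd *csd,
				  void (*func)(void *), void *info)
{
	if (csd->pending)
		return -EBUSY;	/* previous request not delivered yet */

	csd->pending = true;
	csd->func = func;
	csd->info = info;
	return 0;
}

/* Stand-in for the target CPU eventually running the callback. */
static void deliver_pending(struct mock_csd *csd)
{
	if (!csd->pending)
		return;
	csd->func(csd->info);
	csd->pending = false;
}

static void tick_callback(void *info)
{
	printf("broadcast tick received (%s)\n", (const char *)info);
}

int main(void)
{
	struct mock_csd csd = { 0 };

	/* First request is accepted. */
	printf("first send:  %d\n",
	       mock_call_single_async(&csd, tick_callback, "cpu1"));

	/* A second request before delivery is rejected with -EBUSY, which
	 * is the duplicate-send protection the removed per-CPU counter
	 * used to provide on the caller side. */
	printf("second send: %d (EBUSY is %d)\n",
	       mock_call_single_async(&csd, tick_callback, "cpu1"), -EBUSY);

	deliver_pending(&csd);

	/* After delivery the csd can be reused. */
	printf("third send:  %d\n",
	       mock_call_single_async(&csd, tick_callback, "cpu1"));
	return 0;
}

Running it prints an accepted first send, a rejected (-EBUSY) second send
while the first is still pending, and a successful resend after delivery,
mirroring the drop-duplicate behaviour the old atomic counter implemented
by hand in tick_broadcast().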