Message-ID: <1484922862.16328.117.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Fri, 20 Jan 2017 06:34:22 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel <linux-kernel@...r.kernel.org>,
Tejun Heo <tj@...nel.org>
Subject: [PATCH] percpu_counter: percpu_counter_hotcpu_callback() cleanup
From: Eric Dumazet <edumazet@...gle.com>

In commit ebd8fef304f9 ("percpu_counter: make percpu_counters_lock
irq-safe") we disabled irqs in percpu_counter_hotcpu_callback().

Since percpu_counters_lock is now acquired with spin_lock_irq(), irqs
are already disabled for the whole walk of the counter list, so we can
grab every counter spinlock with plain raw_spin_lock() without having
to disable irqs again.

Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Cc: Tejun Heo <tj@...nel.org>
---
lib/percpu_counter.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index c8cebb1370765fac92170cd0d2f8ed1ede0a01da..9c21000df0b5ea1b99a83fd73a338073cb7fd016 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -176,13 +176,12 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
 	spin_lock_irq(&percpu_counters_lock);
 	list_for_each_entry(fbc, &percpu_counters, list) {
 		s32 *pcount;
-		unsigned long flags;
 
-		raw_spin_lock_irqsave(&fbc->lock, flags);
+		raw_spin_lock(&fbc->lock);
 		pcount = per_cpu_ptr(fbc->counters, cpu);
 		fbc->count += *pcount;
 		*pcount = 0;
-		raw_spin_unlock_irqrestore(&fbc->lock, flags);
+		raw_spin_unlock(&fbc->lock);
 	}
 	spin_unlock_irq(&percpu_counters_lock);
 #endif
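
For reference, this is roughly how the CPU-dead handler reads with the
patch applied. The lines outside the hunk context (the #ifdef, the fbc
declaration and the return value) are not shown in the diff and are
assumed here:

static int percpu_counter_cpu_dead(unsigned int cpu)
{
#ifdef CONFIG_HOTPLUG_CPU
	struct percpu_counter *fbc;

	/* Disables irqs for the entire walk of the counter list. */
	spin_lock_irq(&percpu_counters_lock);
	list_for_each_entry(fbc, &percpu_counters, list) {
		s32 *pcount;

		/*
		 * irqs are already off here, so the per-counter lock
		 * does not need the irqsave/irqrestore variants.
		 */
		raw_spin_lock(&fbc->lock);
		pcount = per_cpu_ptr(fbc->counters, cpu);
		fbc->count += *pcount;
		*pcount = 0;
		raw_spin_unlock(&fbc->lock);
	}
	spin_unlock_irq(&percpu_counters_lock);
#endif
	return 0;
}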