Message-Id: <20221216150441.200533-2-manfred@colorfullife.com>
Date: Fri, 16 Dec 2022 16:04:40 +0100
From: Manfred Spraul <manfred@...orfullife.com>
To: LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: 1vier1@....de, Manfred Spraul <manfred@...orfullife.com>,
"Sun, Jiebin" <jiebin.sun@...el.com>
Subject: [PATCH 2/3] include/linux/percpu_counter.h: Race in uniprocessor percpu_counter_add()
The percpu_counter interface is supposed to be preempt- and irq-safe.
But the uniprocessor implementation of percpu_counter_add() is not irq
safe: if an interrupt occurs during the +=, the result is undefined.
Therefore: switch from preempt_disable() to local_irq_save().
This prevents interrupts from interrupting the +=, and as a side effect
also disables preemption.
Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
Cc: "Sun, Jiebin" <jiebin.sun@...el.com>
---
include/linux/percpu_counter.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index a3aae8d57a42..521a733e21a9 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -152,9 +152,11 @@ __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
static inline void
percpu_counter_add(struct percpu_counter *fbc, s64 amount)
{
- preempt_disable();
+ unsigned long flags;
+
+ local_irq_save(flags);
fbc->count += amount;
- preempt_enable();
+ local_irq_restore(flags);
}
/* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
--
2.38.1