Message-ID: <20230305134053.461554895@redhat.com>
Date: Sun, 05 Mar 2023 10:36:59 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Christoph Lameter <cl@...ux.com>
Cc: Aaron Tomlin <atomlin@...mlin.com>,
Frederic Weisbecker <frederic@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Russell King <linux@...linux.org.uk>,
Huacai Chen <chenhuacai@...nel.org>,
Heiko Carstens <hca@...ux.ibm.com>, x86@...nel.org,
Marcelo Tosatti <mtosatti@...hat.com>
Subject: [PATCH v4 02/12] this_cpu_cmpxchg: ARM64: switch this_cpu_cmpxchg to locked, add _local function

The goal is to have vmstat_shepherd transfer the per-CPU
counters to the global counters remotely. For this, an
atomic this_cpu_cmpxchg is necessary.
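
To illustrate why (a user-space sketch using C11 atomics, not part
of the patch; the names are made up), the remote transfer boils down
to a drain loop that swaps the observed counter value for zero, and
it is only correct if the cmpxchg is atomic against the counter's
owner, which may be updating it concurrently:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long counter;		/* stands in for one per-CPU diff */

static void owner_add(long v)		/* owner-side update path */
{
	atomic_fetch_add_explicit(&counter, v, memory_order_relaxed);
}

static long shepherd_drain(void)	/* remote flush to global counters */
{
	long old = atomic_load_explicit(&counter, memory_order_relaxed);

	/* Retry until the observed value is atomically swapped for 0.
	 * An owner-only (local) cmpxchg would race with owner_add(). */
	while (!atomic_compare_exchange_weak(&counter, &old, 0))
		;
	return old;			/* amount moved to the global count */
}

int main(void)
{
	owner_add(3);
	owner_add(4);
	printf("drained %ld\n", shepherd_drain());	/* prints "drained 7" */
	return 0;
}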

Following the kernel convention for cmpxchg/cmpxchg_local,
change arm64's this_cpu_cmpxchg_ helpers to be fully atomic,
and add this_cpu_cmpxchg_local_ helpers which only need to be
atomic with respect to the local CPU.
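
For reference, a minimal user-space analogue of the two flavours
(illustrative only; the real helpers also disable preemption via
_pcp_protect_return, and these function names are invented):

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long pcp;	/* stands in for a per-CPU variable */

/* this_cpu_cmpxchg-like: fully ordered, safe against remote CPUs */
static long cmpxchg_full(long old, long new)
{
	atomic_compare_exchange_strong_explicit(&pcp, &old, new,
						memory_order_seq_cst,
						memory_order_seq_cst);
	return old;	/* cmpxchg returns the previous value */
}

/* this_cpu_cmpxchg_local-like: relaxed, CPU-local fast path only */
static long cmpxchg_local_relaxed(long old, long new)
{
	atomic_compare_exchange_strong_explicit(&pcp, &old, new,
						memory_order_relaxed,
						memory_order_relaxed);
	return old;
}

int main(void)
{
	pcp = 5;
	printf("%ld\n", cmpxchg_local_relaxed(5, 6));	/* 5; pcp is now 6 */
	printf("%ld\n", cmpxchg_full(6, 0));		/* 6; pcp is now 0 */
	return 0;
}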

Signed-off-by: Marcelo Tosatti <mtosatti@...hat.com>

Index: linux-vmstat-remote/arch/arm64/include/asm/percpu.h
===================================================================
--- linux-vmstat-remote.orig/arch/arm64/include/asm/percpu.h
+++ linux-vmstat-remote/arch/arm64/include/asm/percpu.h
@@ -232,13 +232,23 @@ PERCPU_RET_OP(add, add, ldadd)
 	_pcp_protect_return(xchg_relaxed, pcp, val)
 
 #define this_cpu_cmpxchg_1(pcp, o, n)	\
-	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+	_pcp_protect_return(cmpxchg, pcp, o, n)
 #define this_cpu_cmpxchg_2(pcp, o, n)	\
-	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+	_pcp_protect_return(cmpxchg, pcp, o, n)
 #define this_cpu_cmpxchg_4(pcp, o, n)	\
-	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+	_pcp_protect_return(cmpxchg, pcp, o, n)
 #define this_cpu_cmpxchg_8(pcp, o, n)	\
-	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+	_pcp_protect_return(cmpxchg, pcp, o, n)
+
+#define this_cpu_cmpxchg_local_1(pcp, o, n)	\
+	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_local_2(pcp, o, n)	\
+	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_local_4(pcp, o, n)	\
+	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_local_8(pcp, o, n)	\
+	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+
 
 #ifdef __KVM_NVHE_HYPERVISOR__
 extern unsigned long __hyp_per_cpu_offset(unsigned int cpu);