Message-ID: <20191202192643.GA19946@dennisz-mbp>
Date: Mon, 2 Dec 2019 14:26:43 -0500
From: Dennis Zhou <dennis@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [GIT PULL] percpu changes for v5.5-rc1
Hi Linus,
This pull request carries one change to fix percpu-refcount for RT
kernels: rcu-sched read sections disable preemption, but the refcount
release callback may acquire a spinlock, and on PREEMPT_RT spinlock_t
is a sleeping lock that must not be taken with preemption disabled.
Converting the read sections to normal RCU avoids this and remains
correct because, since the RCU flavor consolidation in v4.20, a normal
RCU grace period also waits for preempt-disabled regions.
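
To make the problem concrete, here is a hedged sketch of the pattern
that breaks on RT (illustrative only; my_obj, my_lock, and my_release
are made-up names, not from the patch). percpu_ref_put() invokes the
release callback from inside its RCU read section, which before this
change was rcu_read_lock_sched() and so ran with preemption disabled:

#include <linux/kernel.h>
#include <linux/percpu-refcount.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);

struct my_obj {
	struct percpu_ref ref;
};

static void my_release(struct percpu_ref *ref)
{
	struct my_obj *obj = container_of(ref, struct my_obj, ref);

	/*
	 * Called from percpu_ref_put() inside its RCU read section.
	 * Under rcu_read_lock_sched() preemption is off, and on RT
	 * spinlock_t sleeps, so this would splat; under plain
	 * rcu_read_lock() it is legal.
	 */
	spin_lock(&my_lock);
	/* ... final teardown under the lock ... */
	spin_unlock(&my_lock);
	kfree(obj);
}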
In the works is memcg accounting for percpu allocations, by Roman
Gushchin. That may land in either for-5.6 or for-5.7. There are also
some sparse warnings that we're sorting out now.
Thanks,
Dennis
The following changes since commit 4f5cafb5cb8471e54afdc9054d973535614f7675:
Linux 5.4-rc3 (2019-10-13 16:37:36 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git for-5.5
for you to fetch changes up to ba30e27405afa0b13b79532a345977b3e58ad501:
Revert "percpu: add __percpu to SHIFT_PERCPU_PTR" (2019-11-25 14:28:04 -0800)
----------------------------------------------------------------
Ben Dooks (1):
percpu: add __percpu to SHIFT_PERCPU_PTR
Dennis Zhou (1):
Revert "percpu: add __percpu to SHIFT_PERCPU_PTR"
Sebastian Andrzej Siewior (1):
percpu-refcount: Use normal instead of RCU-sched"
include/linux/percpu-refcount.h | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 7aef0abc194a..390031e816dc 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -186,14 +186,14 @@ static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
{
unsigned long __percpu *percpu_count;
- rcu_read_lock_sched();
+ rcu_read_lock();
if (__ref_is_percpu(ref, &percpu_count))
this_cpu_add(*percpu_count, nr);
else
atomic_long_add(nr, &ref->count);
- rcu_read_unlock_sched();
+ rcu_read_unlock();
}
/**
@@ -223,7 +223,7 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
unsigned long __percpu *percpu_count;
bool ret;
- rcu_read_lock_sched();
+ rcu_read_lock();
if (__ref_is_percpu(ref, &percpu_count)) {
this_cpu_inc(*percpu_count);
@@ -232,7 +232,7 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
ret = atomic_long_inc_not_zero(&ref->count);
}
- rcu_read_unlock_sched();
+ rcu_read_unlock();
return ret;
}
@@ -257,7 +257,7 @@ static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
unsigned long __percpu *percpu_count;
bool ret = false;
- rcu_read_lock_sched();
+ rcu_read_lock();
if (__ref_is_percpu(ref, &percpu_count)) {
this_cpu_inc(*percpu_count);
@@ -266,7 +266,7 @@ static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
ret = atomic_long_inc_not_zero(&ref->count);
}
- rcu_read_unlock_sched();
+ rcu_read_unlock();
return ret;
}
@@ -285,14 +285,14 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
{
unsigned long __percpu *percpu_count;
- rcu_read_lock_sched();
+ rcu_read_lock();
if (__ref_is_percpu(ref, &percpu_count))
this_cpu_sub(*percpu_count, nr);
else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
ref->release(ref);
- rcu_read_unlock_sched();
+ rcu_read_unlock();
}
/**
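
For completeness, a minimal usage sketch continuing the hypothetical
my_obj example above (again illustrative, not from the patch): after
this change the final percpu_ref_put() may legitimately run a release
callback that takes a spinlock_t, because the put now happens under
plain rcu_read_lock(), which is preemptible on RT.

static int my_obj_create(void)
{
	struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	int ret;

	if (!obj)
		return -ENOMEM;

	ret = percpu_ref_init(&obj->ref, my_release, 0, GFP_KERNEL);
	if (ret) {
		kfree(obj);
		return ret;
	}

	/* ... hand out references via percpu_ref_get()/tryget_live() ... */

	percpu_ref_kill(&obj->ref);	/* switch percpu -> atomic mode */
	percpu_ref_put(&obj->ref);	/* last put eventually runs my_release() */
	return 0;
}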