Message-ID: <20200818210355.GM27891@paulmck-ThinkPad-P72>
Date: Tue, 18 Aug 2020 14:03:55 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: Uladzislau Rezki <urezki@...il.com>, qiang.zhang@...driver.com,
Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
rcu <rcu@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] rcu: shrink each possible cpu krcp
On Tue, Aug 18, 2020 at 03:00:35PM -0400, Joel Fernandes wrote:
> On Tue, Aug 18, 2020 at 1:18 PM Paul E. McKenney <paulmck@...nel.org> wrote:
> >
> > On Mon, Aug 17, 2020 at 06:03:54PM -0400, Joel Fernandes wrote:
> > > On Fri, Aug 14, 2020 at 2:51 PM Uladzislau Rezki <urezki@...il.com> wrote:
> > > >
> > > > > From: Zqiang <qiang.zhang@...driver.com>
> > > > >
> > > > > Due to CPU hotplug, some CPUs may be offline after kfree_call_rcu()
> > > > > has been invoked. If the shrinker is triggered at that time, we
> > > > > should drain each possible CPU's "krcp".
> > > > >
> > > > > Signed-off-by: Zqiang <qiang.zhang@...driver.com>
> > > > > ---
> > > > > kernel/rcu/tree.c | 6 +++---
> > > > > 1 file changed, 3 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > > > index 8ce77d9ac716..619ccbb3fe4b 100644
> > > > > --- a/kernel/rcu/tree.c
> > > > > +++ b/kernel/rcu/tree.c
> > > > > @@ -3443,7 +3443,7 @@ kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
> > > > > unsigned long count = 0;
> > > > >
> > > > > /* Snapshot count of all CPUs */
> > > > > - for_each_online_cpu(cpu) {
> > > > > + for_each_possible_cpu(cpu) {
> > > > > struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
> > > > >
> > > > > count += READ_ONCE(krcp->count);
> > > > > @@ -3458,7 +3458,7 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > > > int cpu, freed = 0;
> > > > > unsigned long flags;
> > > > >
> > > > > - for_each_online_cpu(cpu) {
> > > > > + for_each_possible_cpu(cpu) {
> > > > > int count;
> > > > > struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
> > > > >
> > > > > @@ -3491,7 +3491,7 @@ void __init kfree_rcu_scheduler_running(void)
> > > > > int cpu;
> > > > > unsigned long flags;
> > > > >
> > > > > - for_each_online_cpu(cpu) {
> > > > > + for_each_possible_cpu(cpu) {
> > > > > struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
> > > > >
> > > > > raw_spin_lock_irqsave(&krcp->lock, flags);
> > > > >
> > > > I agree that it can happen.
> > > >
> > > > Joel, what is your view?
> > >
> > > Yes, I also think it is possible. The patch LGTM. Another fix could
> > > be to drain the caches in the CPU-offline path and save the memory.
> > > But then it would take a hit during __get_free_page() when the CPU
> > > comes back online. If CPU offlining/onlining is not frequent, that
> > > would save the otherwise-stranded memory.
> > >
> > > I wonder how other per-cpu caches in the kernel work in such scenarios.
> > >
> > > Thoughts?
> >
> > Do I count this as an ack or a review? If not, what precisely would
> > you like the submitter to do differently?
>
> Hi Paul,
> The patch is correct and is definitely an improvement. I was thinking
> about whether we should always drain the caches when CPUs are offlined
> (not just under shrinker pressure) to save memory, but now I feel that
> may not be enough of a win to justify the added complexity.
>
> You can take it with my ack:
>
> Acked-by: Joel Fernandes <joel@...lfernandes.org>
Thank you all! I wordsmithed a bit as shown below, so please let
me know if I messed anything up.
Thanx, Paul
------------------------------------------------------------------------
commit fe5d89cc025b3efe682cac122bc4d39f4722821e
Author: Zqiang <qiang.zhang@...driver.com>
Date: Fri Aug 14 14:45:57 2020 +0800
rcu: Shrink each possible cpu krcp
CPUs can go offline shortly after kfree_call_rcu() has been invoked,
which can leave memory stranded until those CPUs come back online.
This commit therefore drains the krcp of each CPU, not just the
ones that happen to be online.
Acked-by: Joel Fernandes <joel@...lfernandes.org>
Signed-off-by: Zqiang <qiang.zhang@...driver.com>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 02ca8e5..d9f90f6 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3500,7 +3500,7 @@ kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
unsigned long count = 0;
/* Snapshot count of all CPUs */
- for_each_online_cpu(cpu) {
+ for_each_possible_cpu(cpu) {
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
count += READ_ONCE(krcp->count);
@@ -3515,7 +3515,7 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
int cpu, freed = 0;
unsigned long flags;
- for_each_online_cpu(cpu) {
+ for_each_possible_cpu(cpu) {
int count;
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
@@ -3548,7 +3548,7 @@ void __init kfree_rcu_scheduler_running(void)
int cpu;
unsigned long flags;
- for_each_online_cpu(cpu) {
+ for_each_possible_cpu(cpu) {
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
raw_spin_lock_irqsave(&krcp->lock, flags);
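------------------------------------------------------------------------
For reference, below is a rough sketch of the alternative Joel mentions
above: draining a CPU's kfree_rcu cache as that CPU goes offline, rather
than walking all possible CPUs in the shrinker. It assumes the code would
live in kernel/rcu/tree.c next to the existing krc and struct kfree_rcu_cpu
definitions; the callback name, the drain_krcp() helper, and the choice of
hotplug state are illustrative assumptions, not part of the patch above or
of the current kernel.

#include <linux/cpuhotplug.h>

/* Hypothetical helper: hand a dead CPU's cached pages/objects back. */
static void drain_krcp(struct kfree_rcu_cpu *krcp);

/* Teardown callback: runs on a control CPU as @cpu is taken offline. */
static int kfree_rcu_dead_cpu(unsigned int cpu)
{
	struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
	unsigned long flags;

	raw_spin_lock_irqsave(&krcp->lock, flags);
	drain_krcp(krcp);	/* nothing stays stranded on the dead CPU */
	raw_spin_unlock_irqrestore(&krcp->lock, flags);
	return 0;
}

static int __init kfree_rcu_hotplug_init(void)
{
	int ret;

	/* No startup callback: the cache refills lazily after re-onlining. */
	ret = cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
					"rcu/kfree:dead", NULL,
					kfree_rcu_dead_cpu);
	return ret < 0 ? ret : 0;	/* dynamic states return the state number */
}
early_initcall(kfree_rcu_hotplug_init);

The trade-off noted in the thread is that a re-onlined CPU then pays the
__get_free_page() cost again while its cache refills, which is why the
simpler for_each_possible_cpu() change was taken instead.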