Message-Id: <20220908083859.24c989f08d62ddbd031005de@linux-foundation.org>
Date: Thu, 8 Sep 2022 08:38:59 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: "Sun, Jiebin" <jiebin.sun@...el.com>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>, vasily.averin@...ux.dev,
shakeelb@...gle.com, dennis@...nel.org, tj@...nel.org,
cl@...ux.com, ebiederm@...ssion.com, legion@...nel.org,
manfred@...orfullife.com, alexander.mikhalitsyn@...tuozzo.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
tim.c.chen@...el.com, feng.tang@...el.com, ying.huang@...el.com,
tianyou.li@...el.com, wangyang.guo@...el.com
Subject: Re: [PATCH v4] ipc/msg: mitigate the lock contention with percpu
counter
On Thu, 8 Sep 2022 16:25:47 +0800 "Sun, Jiebin" <jiebin.sun@...el.com> wrote:
> In our case, if the local
> percpu counter is near to INT_MAX and there comes a big msgsz, the
> overflow issue could happen.
percpu_counter_add_batch() handles this - your big message
won't overflow an s64.
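
For illustration, the batching logic can be sketched in a single-threaded userspace model: the per-CPU delta accumulates small additions, and once the running total reaches the batch it is folded into the s64 global count, so even a large addend cannot overflow it. This is a simplified sketch, not the kernel implementation - the real percpu_counter_add_batch() uses this_cpu ops and a raw spinlock, and the names here (fake_percpu_counter, fold_add) are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Single-threaded sketch of percpu_counter batching (names are
 * illustrative, not kernel API). */
struct fake_percpu_counter {
	int64_t count;   /* stand-in for the global fbc->count (s64) */
	int32_t delta;   /* stand-in for this CPU's *fbc->counters */
};

static void fold_add(struct fake_percpu_counter *fbc, int64_t amount,
		     int32_t batch)
{
	/* Widen to 64 bits before comparing, so a big amount is safe. */
	int64_t count = (int64_t)fbc->delta + amount;

	if (count >= batch || count <= -batch) {
		/* Fold into the global counter, zero the local delta. */
		fbc->count += count;
		fbc->delta = 0;
	} else {
		fbc->delta = (int32_t)count;
	}
}
```

A large `amount` is added to the local delta in 64-bit arithmetic and immediately folded into the s64 global count, which is why the per-CPU counter itself never has to hold a value near INT_MAX.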
Looking at percpu_counter_add_batch(), is this tweak right?
- don't need to update *fbc->counters inside the lock
- that __this_cpu_sub() is an obscure way of zeroing the thing
--- a/lib/percpu_counter.c~a
+++ a/lib/percpu_counter.c
@@ -89,8 +89,8 @@ void percpu_counter_add_batch(struct per
unsigned long flags;
raw_spin_lock_irqsave(&fbc->lock, flags);
fbc->count += count;
- __this_cpu_sub(*fbc->counters, count - amount);
raw_spin_unlock_irqrestore(&fbc->lock, flags);
+ __this_cpu_write(*fbc->counters, 0);
} else {
this_cpu_add(*fbc->counters, amount);
}
_