Message-Id: <1594389708-60781-4-git-send-email-feng.tang@intel.com>
Date: Fri, 10 Jul 2020 22:01:47 +0800
From: Feng Tang <feng.tang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Matthew Wilcox <willy@...radead.org>,
Mel Gorman <mgorman@...e.de>,
Kees Cook <keescook@...omium.org>, Qian Cai <cai@....pw>,
Dennis Zhou <dennis@...nel.org>, andi.kleen@...el.com,
tim.c.chen@...el.com, dave.hansen@...el.com, ying.huang@...el.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Feng Tang <feng.tang@...el.com>, Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux.com>
Subject: [PATCH v6 3/4] percpu_counter: add percpu_counter_sync()

A percpu_counter's accuracy is related to its batch size: for a
percpu_counter with a big batch, the deviation of its count can be big.
So when the counter's batch is changed at runtime to a smaller value
for better accuracy, there can also be a requirement to flush the big
deviation accumulated under the old batch.

So add a percpu_counter sync function to be run on each CPU.

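For illustration only (not part of this patch), a hypothetical caller
could fold the per-CPU deltas back into the global count after lowering
the batch by running the new helper on every CPU, e.g. via the existing
on_each_cpu() API. The counter and function names below are made up for
the sketch:

/* Illustrative sketch only; "example_counter" and both helpers are
 * hypothetical names, not part of this series. */
#include <linux/percpu_counter.h>
#include <linux/smp.h>

static struct percpu_counter example_counter;

static void example_sync_one(void *arg)
{
	/* Runs on each CPU: fold this CPU's local delta into fbc->count */
	percpu_counter_sync(arg);
}

static void example_after_batch_shrink(void)
{
	/*
	 * Assume the counter's batch was just reduced for better accuracy;
	 * flush the deviation left over from the old, bigger batch.
	 */
	on_each_cpu(example_sync_one, &example_counter, 1);
}
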
Reported-by: kernel test robot <rong.a.chen@...el.com>
Signed-off-by: Feng Tang <feng.tang@...el.com>
Cc: Dennis Zhou <dennis@...nel.org>
Cc: Tejun Heo <tj@...nel.org>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Qian Cai <cai@....pw>
Cc: Andi Kleen <andi.kleen@...el.com>
Cc: Huang Ying <ying.huang@...el.com>
---
 include/linux/percpu_counter.h |  4 ++++
 lib/percpu_counter.c           | 19 +++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index 0a4f54d..01861ee 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -44,6 +44,7 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
s32 batch);
s64 __percpu_counter_sum(struct percpu_counter *fbc);
int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
+void percpu_counter_sync(struct percpu_counter *fbc);
static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
{
@@ -172,6 +173,9 @@ static inline bool percpu_counter_initialized(struct percpu_counter *fbc)
return true;
}
+static inline void percpu_counter_sync(struct percpu_counter *fbc)
+{
+}
#endif /* CONFIG_SMP */
static inline void percpu_counter_inc(struct percpu_counter *fbc)
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index a66595b..a2345de 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -99,6 +99,25 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
EXPORT_SYMBOL(percpu_counter_add_batch);
/*
+ * For a percpu_counter with a big batch, the deviation of its count
+ * could be big, and there may be a requirement to reduce that deviation,
+ * e.g. when the counter's batch is decreased at runtime for better
+ * accuracy. That can be achieved by running this sync function on each CPU.
+ */
+void percpu_counter_sync(struct percpu_counter *fbc)
+{
+ unsigned long flags;
+ s64 count;
+
+ raw_spin_lock_irqsave(&fbc->lock, flags);
+ count = __this_cpu_read(*fbc->counters);
+ fbc->count += count;
+ __this_cpu_sub(*fbc->counters, count);
+ raw_spin_unlock_irqrestore(&fbc->lock, flags);
+}
+EXPORT_SYMBOL(percpu_counter_sync);
+
+/*
* Add up all the per-cpu counts, return the result. This is a more accurate
* but much slower version of percpu_counter_read_positive()
*/
--
2.7.4