Message-Id: <20210803175519.22298-1-longman@redhat.com>
Date: Tue, 3 Aug 2021 13:55:19 -0400
From: Waiman Long <longman@...hat.com>
To: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>, Roman Gushchin <guro@...com>
Cc: linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org, Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <songmuchun@...edance.com>,
Luis Goncalves <lgoncalv@...hat.com>,
Waiman Long <longman@...hat.com>
Subject: [PATCH] mm/memcg: Disable task obj_stock for PREEMPT_RT

For a PREEMPT_RT kernel, preempt_disable() and local_irq_save()
are typically converted to local_lock() and local_lock_irqsave()
respectively. On PREEMPT_RT, these two variants of local_lock()
acquire the same underlying per-CPU lock, so there is no performance
advantage in choosing one over the other.
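
As an illustration (not part of this patch), here is a minimal
sketch of what such a local_lock() conversion typically looks like;
the data structure and function names below are hypothetical:

#include <linux/local_lock.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU data protected by a local_lock */
struct pcp_data {
	local_lock_t lock;
	int counter;
};

static DEFINE_PER_CPU(struct pcp_data, pcp_data) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void update_pcp_data(void)
{
	unsigned long flags;

	/*
	 * On !PREEMPT_RT this maps to local_irq_save(); on PREEMPT_RT
	 * both local_lock() and local_lock_irqsave() acquire the same
	 * per-CPU spinlock, so neither variant is cheaper.
	 */
	local_lock_irqsave(&pcp_data.lock, flags);
	this_cpu_inc(pcp_data.counter);
	local_unlock_irqrestore(&pcp_data.lock, flags);
}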
As there is no point in maintaining two different sets of obj_stock,
it is simpler and more efficient to just disable task_obj and use
only irq_obj for PREEMPT_RT. Note that task_obj remains in the
memcg_stock_pcp structure even though it is not used in this
configuration.
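
For reference, the two stocks sit side by side in the per-CPU
structure (simplified excerpt, some fields abridged; see
mm/memcontrol.c for the full definition):

struct memcg_stock_pcp {
	struct mem_cgroup *cached;	/* never a root cgroup */
	unsigned int nr_pages;

#ifdef CONFIG_MEMCG_KMEM
	struct obj_stock task_obj;	/* unused on PREEMPT_RT */
	struct obj_stock irq_obj;
#endif

	struct work_struct work;
	unsigned long flags;
};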
Signed-off-by: Waiman Long <longman@...hat.com>
---
mm/memcontrol.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 87c883227f90..4f80770cb97b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2120,12 +2120,22 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
  * which is cheap in non-preempt kernel. The interrupt context object stock
  * can only be accessed after disabling interrupt. User context code can
  * access interrupt object stock, but not vice versa.
+ *
+ * For PREEMPT_RT kernel, preempt_disable() and local_irq_save() may have
+ * to be changed to variants of local_lock(). This eliminates the
+ * performance advantage of using preempt_disable(). Fall back to always
+ * use local_irq_save() and use only irq_obj for simplicity.
  */
+static inline bool use_task_obj_stock(void)
+{
+	return !IS_ENABLED(CONFIG_PREEMPT_RT) && likely(in_task());
+}
+
 static inline struct obj_stock *get_obj_stock(unsigned long *pflags)
 {
 	struct memcg_stock_pcp *stock;
 
-	if (likely(in_task())) {
+	if (use_task_obj_stock()) {
 		*pflags = 0UL;
 		preempt_disable();
 		stock = this_cpu_ptr(&memcg_stock);
@@ -2139,7 +2149,7 @@ static inline struct obj_stock *get_obj_stock(unsigned long *pflags)
 
 static inline void put_obj_stock(unsigned long flags)
 {
-	if (likely(in_task()))
+	if (use_task_obj_stock())
 		preempt_enable();
 	else
 		local_irq_restore(flags);
@@ -2212,7 +2222,7 @@ static void drain_local_stock(struct work_struct *dummy)
 
 	stock = this_cpu_ptr(&memcg_stock);
 	drain_obj_stock(&stock->irq_obj);
-	if (in_task())
+	if (use_task_obj_stock())
 		drain_obj_stock(&stock->task_obj);
 	drain_stock(stock);
 	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
@@ -3217,7 +3227,7 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
 {
 	struct mem_cgroup *memcg;
 
-	if (in_task() && stock->task_obj.cached_objcg) {
+	if (use_task_obj_stock() && stock->task_obj.cached_objcg) {
 		memcg = obj_cgroup_memcg(stock->task_obj.cached_objcg);
 		if (memcg && mem_cgroup_is_descendant(memcg, root_memcg))
 			return true;
--
2.18.1