Message-Id: <1573567588-47048-7-git-send-email-alex.shi@linux.alibaba.com>
Date: Tue, 12 Nov 2019 22:06:26 +0800
From: Alex Shi <alex.shi@...ux.alibaba.com>
To: alex.shi@...ux.alibaba.com, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, mgorman@...hsingularity.net,
tj@...nel.org, hughd@...gle.com, khlebnikov@...dex-team.ru,
daniel.m.jordan@...cle.com, yang.shi@...ux.alibaba.com
Cc: Johannes Weiner <hannes@...xchg.org>, Roman Gushchin <guro@...com>,
Shakeel Butt <shakeelb@...gle.com>,
Chris Down <chris@...isdown.name>,
Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH v2 6/8] mm/lru: remove rcu_read_lock to fix performance regression

Intel's 0day robot reported a performance regression with this patchset.
The detailed info points to rcu_read_lock + PROVE_LOCKING, which causes
queued_spin_lock_slowpath to wait too long to get the lock.

Removing the rcu_read_lock is safe here since we already hold a spinlock
at that point.
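
For reference, this is the shape of relock_page_lruvec_irq() after the
change, assembled from the diff below (mem_cgroup_page_lruvec() and
lock_page_lruvec_irq() come from earlier patches in this series; the
irqsave variant is identical apart from the flags handling):

	static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
						struct lruvec *locked_lruvec)
	{
		struct pglist_data *pgdat = page_pgdat(page);
		struct lruvec *lruvec;

		if (!locked_lruvec)
			goto lock;

		/* locked_lruvec->lru_lock is held, no RCU needed for lookup */
		lruvec = mem_cgroup_page_lruvec(page, pgdat);

		/* page is still in the lruvec we hold: keep the lock */
		if (locked_lruvec == lruvec)
			return lruvec;

		/* page moved to another lruvec: drop the old lock ... */
		spin_unlock_irq(&locked_lruvec->lru_lock);

	lock:
		/* ... and take the lock of the page's current lruvec */
		lruvec = lock_page_lruvec_irq(page, pgdat);
		return lruvec;
	}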
Reported-by: kbuild test robot <lkp@...el.com>
Signed-off-by: Alex Shi <alex.shi@...ux.alibaba.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Roman Gushchin <guro@...com>
Cc: Shakeel Butt <shakeelb@...gle.com>
Cc: Chris Down <chris@...isdown.name>
Cc: Tejun Heo <tj@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org
---
include/linux/memcontrol.h | 29 ++++++++++++-----------------
 1 file changed, 12 insertions(+), 17 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 2421b720d272..f869897a68f0 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1307,20 +1307,18 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
 	struct pglist_data *pgdat = page_pgdat(page);
 	struct lruvec *lruvec;
 
-	rcu_read_lock();
+	if (!locked_lruvec)
+		goto lock;
+
 	lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
-	if (locked_lruvec == lruvec) {
-		rcu_read_unlock();
+	if (locked_lruvec == lruvec)
 		return lruvec;
-	}
-	rcu_read_unlock();
 
-	if (locked_lruvec)
-		spin_unlock_irq(&locked_lruvec->lru_lock);
+	spin_unlock_irq(&locked_lruvec->lru_lock);
 
+lock:
 	lruvec = lock_page_lruvec_irq(page, pgdat);
-
 	return lruvec;
 }
@@ -1331,21 +1329,18 @@ static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
 	struct pglist_data *pgdat = page_pgdat(page);
 	struct lruvec *lruvec;
 
-	rcu_read_lock();
+	if (!locked_lruvec)
+		goto lock;
+
 	lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
-	if (locked_lruvec == lruvec) {
-		rcu_read_unlock();
+	if (locked_lruvec == lruvec)
 		return lruvec;
-	}
-	rcu_read_unlock();
 
-	if (locked_lruvec)
-		spin_unlock_irqrestore(&locked_lruvec->lru_lock,
-				       locked_lruvec->flags);
+	spin_unlock_irqrestore(&locked_lruvec->lru_lock, locked_lruvec->flags);
 
+lock:
 	lruvec = lock_page_lruvec_irqsave(page, pgdat);
-
 	return lruvec;
 }
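
To show the intended usage pattern, here is a hypothetical caller (not
part of this patch; the page list and the per-page work are made up for
the example). A walk over pages that may belong to different memcgs only
drops and retakes the lru_lock at lruvec boundaries:

	struct lruvec *lruvec = NULL;
	struct page *page;

	list_for_each_entry(page, &page_list, lru) {
		/* reuses the held lock when page is in the same lruvec */
		lruvec = relock_page_lruvec_irq(page, lruvec);

		/* ... operate on page under lruvec->lru_lock ... */
	}
	if (lruvec)
		spin_unlock_irq(&lruvec->lru_lock);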
--
1.8.3.1