Message-Id: <1287177279-30876-5-git-send-email-gthelen@google.com>
Date: Fri, 15 Oct 2010 14:14:32 -0700
From: Greg Thelen <gthelen@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
containers@...ts.osdl.org, Andrea Righi <arighi@...eler.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Minchan Kim <minchan.kim@...il.com>,
Ciju Rajan K <ciju@...ux.vnet.ibm.com>,
David Rientjes <rientjes@...gle.com>,
Greg Thelen <gthelen@...gle.com>
Subject: [PATCH v2 04/11] memcg: disable softirq in lock_page_cgroup()

If pages are being migrated from a memcg, then updates to that
memcg's page statistics are protected by grabbing a bit spin lock
using lock_page_cgroup().  In an upcoming commit memcg dirty page
accounting will update memcg page statistics (specifically: the
number of writeback pages) from softirq context.  Avoid a
deadlocking nested spin lock attempt by disabling softirqs on the
local processor when grabbing the page_cgroup bit_spin_lock in
lock_page_cgroup().  This avoids the following deadlock:

      CPU 0                          CPU 1
                                inc_file_mapped
                                rcu_read_lock
  start move
  synchronize_rcu
                                lock_page_cgroup
                                  softirq
                                  test_clear_page_writeback
                                  mem_cgroup_dec_page_stat(NR_WRITEBACK)
                                  rcu_read_lock
                                  lock_page_cgroup   /* deadlock */
                                  unlock_page_cgroup
                                  rcu_read_unlock
                                unlock_page_cgroup
                                rcu_read_unlock

By disabling softirqs in lock_page_cgroup(), nested calls are
avoided.  The softirq is delayed until inc_file_mapped re-enables
softirqs by calling unlock_page_cgroup().
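
As an illustration (not part of this patch), a schematic
process-context caller is sketched below.  Only lock_page_cgroup(),
unlock_page_cgroup() and PageCgroupUsed() come from page_cgroup.h;
the function itself and the counter update are hypothetical:

#include <linux/page_cgroup.h>

/* Hypothetical caller, showing where a pending softirq is held off. */
static void example_update_page_stat(struct page_cgroup *pc)
{
	lock_page_cgroup(pc);	/* local_bh_disable() + bit_spin_lock() */

	/*
	 * Softirqs are masked on this CPU, so an I/O completion that
	 * reaches test_clear_page_writeback() cannot nest on pc->flags
	 * inside this section.
	 */
	if (PageCgroupUsed(pc)) {
		/* ... adjust a per-memcg counter here ... */
	}

	unlock_page_cgroup(pc);	/* bit_spin_unlock() + local_bh_enable();
				 * a softirq raised meanwhile runs now and
				 * can take the lock without deadlocking */
}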

The normal fast path of memcg page stat updates typically does not
need to call lock_page_cgroup(), so this change does not affect the
performance of common-case page accounting.

Signed-off-by: Andrea Righi <arighi@...eler.com>
Signed-off-by: Greg Thelen <gthelen@...gle.com>
---
include/linux/page_cgroup.h | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
index b59c298..0585546 100644
--- a/include/linux/page_cgroup.h
+++ b/include/linux/page_cgroup.h
@@ -3,6 +3,8 @@
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR
 #include <linux/bit_spinlock.h>
+#include <linux/hardirq.h>
+
 /*
  * Page Cgroup can be considered as an extended mem_map.
  * A page_cgroup page is associated with every page descriptor. The
@@ -119,12 +121,16 @@ static inline enum zone_type page_cgroup_zid(struct page_cgroup *pc)
 
 static inline void lock_page_cgroup(struct page_cgroup *pc)
 {
+	/* This routine is only deadlock safe from softirq or lower. */
+	VM_BUG_ON(in_irq());
+	local_bh_disable();
 	bit_spin_lock(PCG_LOCK, &pc->flags);
 }
 
 static inline void unlock_page_cgroup(struct page_cgroup *pc)
 {
 	bit_spin_unlock(PCG_LOCK, &pc->flags);
+	local_bh_enable();
 }
 
 #else /* CONFIG_CGROUP_MEM_RES_CTLR */
--
1.7.1