Message-ID: <20160203104136.GA517@swordfish>
Date: Wed, 3 Feb 2016 19:41:36 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
	Vladimir Davydov <vdavydov@...tuozzo.com>,
	Michal Hocko <mhocko@...e.cz>, cgroups@...r.kernel.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [PATCH] mm/workingset: do not forget to unlock page

On (02/03/16 18:58), Sergey Senozhatsky wrote:
>
> Do not leave page locked (and RCU read side locked) when
> return from workingset_activation() due to disabled memcg
> or page not being a page_memcg().

d'oh... sorry, the commit message is simply insane.

apparently the patch fixes new code added to the -mm tree by

	mm-workingset-per-cgroup-cache-thrash-detection.patch
	mm-simplify-lock_page_memcg.patch

so if there is an option to fold this patch into mm-simplify-lock_page_memcg,
for example as a -fix, then I wouldn't mind at all.

a better commit message (the resulting function is also sketched after the
patch below):
===8<====8<====
From 1d6315221f2f81c53c99f9980158f8ae49dbd582 Mon Sep 17 00:00:00 2001
From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Date: Wed, 3 Feb 2016 18:49:16 +0900
Subject: [PATCH] mm/workingset: do not forget to unlock_page in workingset_activation

Do not return from workingset_activation() with the page memcg and the
RCU read side still locked: unlock_page_memcg() must be called on the
early return path as well.

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
---
 mm/workingset.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index 14522ed..54138a9 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -315,8 +315,10 @@ void workingset_activation(struct page *page)
 	 * XXX: See workingset_refault() - this should return
 	 * root_mem_cgroup even for !CONFIG_MEMCG.
 	 */
-	if (!mem_cgroup_disabled() && !page_memcg(page))
+	if (!mem_cgroup_disabled() && !page_memcg(page)) {
+		unlock_page_memcg(page);
 		return;
+	}
 	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
 	atomic_long_inc(&lruvec->inactive_age);
 	unlock_page_memcg(page);
--
2.7.0
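
for reference, this is roughly how workingset_activation() ends up looking
with the fix applied. a sketch only: the lock_page_memcg() call at the top
is assumed from mm-simplify-lock_page_memcg (it is what the unlock on each
path pairs with), and the beginning of the comment block is elided;
everything else is what the hunk above touches:

void workingset_activation(struct page *page)
{
	struct lruvec *lruvec;

	/* assumed: takes the rcu read side lock and pins the page's memcg */
	lock_page_memcg(page);
	/*
	 * ...
	 * XXX: See workingset_refault() - this should return
	 * root_mem_cgroup even for !CONFIG_MEMCG.
	 */
	if (!mem_cgroup_disabled() && !page_memcg(page)) {
		/* the early return path now drops the lock, too */
		unlock_page_memcg(page);
		return;
	}
	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
	atomic_long_inc(&lruvec->inactive_age);
	unlock_page_memcg(page);
}

an alternative would be a 'goto' to a single unlock_page_memcg() at the
bottom of the function instead of duplicating the call on the early return
path; either way keeps the lock balanced.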