Message-Id: <20160203131939.1a35d9bc03f13b2b143d27c0@linux-foundation.org>
Date: Wed, 3 Feb 2016 13:19:39 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
	Vladimir Davydov <vdavydov@...tuozzo.com>,
	Michal Hocko <mhocko@...e.cz>, cgroups@...r.kernel.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [PATCH] mm/workingset: do not forget to unlock page

On Wed, 3 Feb 2016 11:24:00 -0500 Johannes Weiner <hannes@...xchg.org> wrote:
> On Wed, Feb 03, 2016 at 07:41:36PM +0900, Sergey Senozhatsky wrote:
> > From 1d6315221f2f81c53c99f9980158f8ae49dbd582 Mon Sep 17 00:00:00 2001
> > From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
> > Date: Wed, 3 Feb 2016 18:49:16 +0900
> > Subject: [PATCH] mm/workingset: do not forget to unlock_page in workingset_activation
> >
> > Do not return from workingset_activation() with locked rcu and page.
> >
> > Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
>
> Thanks Sergey. Even though I wrote this function, my brain must have
> gone "it can't be locking anything when it returns NULL, right?" It's
> a dumb interface. Luckily, that's fixed with follow-up patches in -mm.
>
> As for this one:
>
> Acked-by: Johannes Weiner <hannes@...xchg.org>
> Fixes: mm: workingset: per-cgroup cache thrash detection
>
> Andrew, can you please fold this?

Yup. I turned it into a fix against
mm-workingset-per-cgroup-cache-thrash-detection.patch, which is where
the bug was added. And I did the goto thing instead, so the final
result will be:

void workingset_activation(struct page *page)
{
	struct lruvec *lruvec;

	lock_page_memcg(page);
	/*
	 * Filter non-memcg pages here, e.g. unmap can call
	 * mark_page_accessed() on VDSO pages.
	 *
	 * XXX: See workingset_refault() - this should return
	 * root_mem_cgroup even for !CONFIG_MEMCG.
	 */
	if (!mem_cgroup_disabled() && !page_memcg(page))
		goto out;
	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
	atomic_long_inc(&lruvec->inactive_age);
out:
	unlock_page_memcg(page);
}
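
For contrast, the pre-fix shape (a sketch reconstructed from the bug report,
not the exact code in -mm) returned early for non-memcg pages and so never
dropped the lock taken by lock_page_memcg(), including the RCU read lock
underneath it:

void workingset_activation(struct page *page)
{
	struct lruvec *lruvec;

	lock_page_memcg(page);
	/* Filter non-memcg pages, e.g. mark_page_accessed() on VDSO pages. */
	if (!mem_cgroup_disabled() && !page_memcg(page))
		return;	/* BUG: leaves the page's memcg lock and RCU held */
	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
	atomic_long_inc(&lruvec->inactive_age);
	unlock_page_memcg(page);
}

Using goto with a single unlock site, as in the folded version above, keeps
every exit path from the function going through unlock_page_memcg().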