Message-ID: <20160203220253.GA6859@cmpxchg.org>
Date: Wed, 3 Feb 2016 17:02:53 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Vladimir Davydov <vdavydov@...tuozzo.com>,
Michal Hocko <mhocko@...e.cz>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [PATCH] mm/workingset: do not forget to unlock page
On Wed, Feb 03, 2016 at 01:19:39PM -0800, Andrew Morton wrote:
> Yup. I turned it into a fix against
> mm-workingset-per-cgroup-cache-thrash-detection.patch, which is where
> the bug was added. And I did the goto thing instead, so the final
> result will be
>
> void workingset_activation(struct page *page)
> {
> 	struct lruvec *lruvec;
>
> 	lock_page_memcg(page);
> 	/*
> 	 * Filter non-memcg pages here, e.g. unmap can call
> 	 * mark_page_accessed() on VDSO pages.
> 	 *
> 	 * XXX: See workingset_refault() - this should return
> 	 * root_mem_cgroup even for !CONFIG_MEMCG.
> 	 */
> 	if (!mem_cgroup_disabled() && !page_memcg(page))
> 		goto out;
> 	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
> 	atomic_long_inc(&lruvec->inactive_age);
> out:
> 	unlock_page_memcg(page);
> }
LGTM, thank you.