Message-ID: <20160204001900.GB1861@swordfish>
Date: Thu, 4 Feb 2016 09:19:00 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Vladimir Davydov <vdavydov@...tuozzo.com>,
Michal Hocko <mhocko@...e.cz>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [PATCH] mm/workingset: do not forget to unlock page
On (02/03/16 17:02), Johannes Weiner wrote:
> On Wed, Feb 03, 2016 at 01:19:39PM -0800, Andrew Morton wrote:
> > Yup. I turned it into a fix against
> > mm-workingset-per-cgroup-cache-thrash-detection.patch, which is where
> > the bug was added. And I did the goto thing instead, so the final
> > result will be
> >
> > void workingset_activation(struct page *page)
> > {
> > 	struct lruvec *lruvec;
> >
> > 	lock_page_memcg(page);
> > 	/*
> > 	 * Filter non-memcg pages here, e.g. unmap can call
> > 	 * mark_page_accessed() on VDSO pages.
> > 	 *
> > 	 * XXX: See workingset_refault() - this should return
> > 	 * root_mem_cgroup even for !CONFIG_MEMCG.
> > 	 */
> > 	if (!mem_cgroup_disabled() && !page_memcg(page))
> > 		goto out;
> > 	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
> > 	atomic_long_inc(&lruvec->inactive_age);
> > out:
> > 	unlock_page_memcg(page);
> > }
>
> LGTM, thank you.
Thanks!
-ss