Message-ID: <d3bbbbf5-52c5-374c-0897-899e787cecb4@linux.alibaba.com>
Date: Fri, 22 Nov 2019 10:36:32 +0800
From: Alex Shi <alex.shi@...ux.alibaba.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, akpm@...ux-foundation.org,
mgorman@...hsingularity.net, tj@...nel.org, hughd@...gle.com,
khlebnikov@...dex-team.ru, daniel.m.jordan@...cle.com,
yang.shi@...ux.alibaba.com, willy@...radead.org,
shakeelb@...gle.com, Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Roman Gushchin <guro@...com>,
Chris Down <chris@...isdown.name>,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>, Qian Cai <cai@....pw>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Jérôme Glisse <jglisse@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Rientjes <rientjes@...gle.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
swkhack <swkhack@...il.com>,
"Potyra, Stefan" <Stefan.Potyra@...ktrobit.com>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Colin Ian King <colin.king@...onical.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
Peng Fan <peng.fan@....com>,
Nikolay Borisov <nborisov@...e.com>,
Ira Weiny <ira.weiny@...el.com>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
Yafang Shao <laoar.shao@...il.com>
Subject: Re: [PATCH v4 3/9] mm/lru: replace pgdat lru_lock with lruvec lock
On 2019/11/22 6:06 AM, Johannes Weiner wrote:
>>
>> Forgive my ignorance, but I still don't understand the details of the unsafe lruvec here.
>> From my limited view, spin_lock_irq() (which embeds a preempt_disable()) should block all RCU grace periods from completing, and thus keep all memcgs alive until preemption is re-enabled at spin_unlock time. Is that right?
>> If so, even if page->mem_cgroup is migrated to another cgroup, both the new and the old cgroup should still be alive here.
> You are right about the freeing part, I missed this. And I should have
> read this email here before sending out my "fix" to the current code;
> thankfully Hugh re-iterated my mistake on that thread. My apologies.
>
That's all right. You and Hugh have given me a lot of help! :)
> But I still don't understand how the moving part is safe. You look up
> the lruvec optimistically, lock it, then verify the lookup. What keeps
> page->mem_cgroup from changing after you verified it?
>
> lock_page_lruvec():                       mem_cgroup_move_account():
> again:
>   rcu_read_lock()
>   lruvec = page->mem_cgroup->lruvec
>                                             isolate_lru_page()
>   spin_lock_irq(&lruvec->lru_lock)
>   rcu_read_unlock()
>   if page->mem_cgroup->lruvec != lruvec:
>     spin_unlock_irq(&lruvec->lru_lock)
>     goto again;
>                                             page->mem_cgroup = new cgroup
>                                             putback_lru_page() // new lruvec
>                                             SetPageLRU()
>   return lruvec; // old lruvec
>
> The caller assumes page belongs to the returned lruvec and will then
> change the page's lru state with a mismatched page and lruvec.
>
Yes, that's the problem we have to deal with.
> If we could restrict lock_page_lruvec() to working only on PageLRU
> pages, we could fix the problem with memory barriers. But this won't
> work for split_huge_page(), which is AFAICT the only user that needs
> to freeze the lru state of a page that could be isolated elsewhere.
>
> So AFAICS the only option is to lock out mem_cgroup_move_account()
> entirely when the lru_lock is held. Which I guess should be fine.
I guess we can try starting from lock_page_memcg; is that a good start?
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7e6387ad01f0..f4bbbf72c5b8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1224,7 +1224,7 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 		goto out;
 	}
 
-	memcg = page->mem_cgroup;
+	memcg = lock_page_memcg(page);
 	/*
 	 * Swapcache readahead pages are added to the LRU - and
 	 * possibly migrated - before they are charged.
Thanks a lot!
Alex