Message-ID: <cf600d82-82f5-414c-b880-71133379d5d4@linux.dev>
Date: Thu, 20 Nov 2025 21:45:19 +0800
From: Qi Zheng <qi.zheng@...ux.dev>
To: Chen Ridong <chenridong@...weicloud.com>, hannes@...xchg.org,
hughd@...gle.com, mhocko@...e.com, roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev, muchun.song@...ux.dev, david@...hat.com,
lorenzo.stoakes@...cle.com, ziy@...dia.com, harry.yoo@...cle.com,
imran.f.khan@...cle.com, kamalesh.babulal@...cle.com,
axelrasmussen@...gle.com, yuanchu@...gle.com, weixugc@...gle.com,
akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, Muchun Song <songmuchun@...edance.com>,
Qi Zheng <zhengqi.arch@...edance.com>
Subject: Re: [PATCH v1 25/26] mm: memcontrol: eliminate the problem of dying
memory cgroup for LRU folios
On 11/20/25 7:56 PM, Chen Ridong wrote:
>
>
> On 2025/10/28 21:58, Qi Zheng wrote:
>> static void reparent_locks(struct mem_cgroup *src, struct mem_cgroup *dst)
>> {
>> + int nid, nest = 0;
>> +
>> spin_lock_irq(&objcg_lock);
>> + for_each_node(nid) {
>> + spin_lock_nested(&mem_cgroup_lruvec(src,
>> + NODE_DATA(nid))->lru_lock, nest++);
>> + spin_lock_nested(&mem_cgroup_lruvec(dst,
>> + NODE_DATA(nid))->lru_lock, nest++);
>> + }
>> }
>>
>> static void reparent_unlocks(struct mem_cgroup *src, struct mem_cgroup *dst)
>> {
>> + int nid;
>> +
>> + for_each_node(nid) {
>> + spin_unlock(&mem_cgroup_lruvec(dst, NODE_DATA(nid))->lru_lock);
>> + spin_unlock(&mem_cgroup_lruvec(src, NODE_DATA(nid))->lru_lock);
>> + }
>> spin_unlock_irq(&objcg_lock);
>> }
>>
>
> The lock order follows S0→D0→S1→D1→…, and the correct unlock sequence should be Dn→Sn→…→D0→S0
>
> However, the current unlock implementation uses D0→S0→D1→S1→…
>
> I'm not certain whether this unlock order will cause any issues. Could it lead to potential
> problems such as deadlocks or lock state inconsistencies?
As long as the order in which the locks are acquired is consistent across
all lockers, there should be no deadlock problem. Only the acquisition
order can create a deadlock cycle; the release order cannot.
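For intuition: deadlock needs a cycle of tasks, each holding one lock
while waiting for another. A task that is releasing locks is never
waiting, so the unlock sequence cannot add an edge to that cycle. Below
is a minimal userspace sketch of the same pattern (pthread mutexes
standing in for lru_lock; NR_NODES, lru_lock[], reparent() and the rest
are invented for the demo, not kernel APIs). Two threads take the
per-node locks in the fixed order S0, D0, S1, D1, ... and release them
D0, S0, D1, S1, ... exactly as in the patch, and it never deadlocks:

/*
 * Userspace sketch only: pthread mutexes in place of the kernel's
 * spin_lock_irq()/spin_lock_nested(). Build with: cc demo.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>

#define NR_NODES 4

/* One lock per (cgroup, node) pair: index 0 = src, 1 = dst. */
static pthread_mutex_t lru_lock[2][NR_NODES];

static void reparent_locks(void)
{
	int nid;

	/* Acquisition order is the same for every caller: S0, D0, S1, D1, ... */
	for (nid = 0; nid < NR_NODES; nid++) {
		pthread_mutex_lock(&lru_lock[0][nid]);	/* src */
		pthread_mutex_lock(&lru_lock[1][nid]);	/* dst */
	}
}

static void reparent_unlocks(void)
{
	int nid;

	/* Release order is D0, S0, D1, S1, ... : not LIFO, as in the patch. */
	for (nid = 0; nid < NR_NODES; nid++) {
		pthread_mutex_unlock(&lru_lock[1][nid]);	/* dst */
		pthread_mutex_unlock(&lru_lock[0][nid]);	/* src */
	}
}

static void *reparent(void *arg)
{
	int i;

	(void)arg;
	for (i = 0; i < 100000; i++) {
		reparent_locks();
		/* critical section: move folios from src to dst */
		reparent_unlocks();
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	int c, nid;

	for (c = 0; c < 2; c++)
		for (nid = 0; nid < NR_NODES; nid++)
			pthread_mutex_init(&lru_lock[c][nid], NULL);

	pthread_create(&t1, NULL, reparent, NULL);
	pthread_create(&t2, NULL, reparent, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* If the unlock order mattered for deadlock, this would hang. */
	printf("done, no deadlock\n");
	return 0;
}

So the unlock sequence only affects readability: unwinding strictly in
reverse (Dn, Sn, ..., D0, S0) would match the usual kernel convention and
make the lock/unlock pairing easier to audit, but it is not required for
correctness.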