Message-ID: <BF7CAAA2-E42B-4D90-8E35-C5936596D4EB@nvidia.com>
Date: Thu, 25 Sep 2025 15:49:52 -0400
From: Zi Yan <ziy@...dia.com>
To: David Hildenbrand <david@...hat.com>
Cc: Qi Zheng <zhengqi.arch@...edance.com>, hannes@...xchg.org,
 hughd@...gle.com, mhocko@...e.com, roman.gushchin@...ux.dev,
 shakeel.butt@...ux.dev, muchun.song@...ux.dev, lorenzo.stoakes@...cle.com,
 harry.yoo@...cle.com, baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
 npache@...hat.com, ryan.roberts@....com, dev.jain@....com, baohua@...nel.org,
 lance.yang@...ux.dev, akpm@...ux-foundation.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, cgroups@...r.kernel.org
Subject: Re: [PATCH v2 4/4] mm: thp: reparent the split queue during memcg
 offline

On 25 Sep 2025, at 15:35, David Hildenbrand wrote:

> On 25.09.25 08:11, Qi Zheng wrote:
>> Hi David,
>
> Hi :)
>
> [...]
>
>>>> +++ b/include/linux/mmzone.h
>>>> @@ -1346,6 +1346,7 @@ struct deferred_split {
>>>>        spinlock_t split_queue_lock;
>>>>        struct list_head split_queue;
>>>>        unsigned long split_queue_len;
>>>> +    bool is_dying;
>>>
>>> It's a bit weird to query whether the "struct deferred_split" is dying.
>>> Shouldn't this be a memcg property? (and in particular, not exist for
>>
>> There is indeed a CSS_DYING flag. But we must modify 'is_dying' under
>> the protection of the split_queue_lock; otherwise the folio may be
>> added back to the deferred_split queue of a child memcg.
>
> Is there no way to reuse the existing mechanisms, and find a way to have the shrinker / queue locking sync against that?
>
> There is also the offline_css() function where we clear CSS_ONLINE. But it happens after calling ss->css_offline(css);

I see that CSS_DYING is set by kill_css() before offline_css() is called,
so the code can probably check CSS_DYING instead.
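
For example (untested, just to illustrate the idea; memcg_going_offline()
is a made-up name, css_is_dying() is the existing helper for this check):

	static bool memcg_going_offline(struct mem_cgroup *memcg)
	{
		/*
		 * CSS_DYING is set by kill_css() before offline_css()
		 * runs, so it is already visible by the time
		 * ->css_offline() is invoked.
		 */
		return memcg && css_is_dying(&memcg->css);
	}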

>
> Being able to query "is the memcg going offline" and having a way to sync against that would be probably cleanest.

So basically, something like:
1. at folio_split_queue_lock*() time, start from the folio's memcg and
   walk up through its parent memcgs until reaching one that does not
   have CSS_DYING set (i.e., one that is still online);
2. return the associated deferred_split_queue.
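
Roughly (untested sketch; get_queue_for() is a made-up helper standing in
for the existing get_deferred_split_queue() logic, and the irq-save
locking variants are omitted for brevity):

	static struct deferred_split *folio_split_queue_lock(struct folio *folio)
	{
		struct mem_cgroup *memcg = folio_memcg(folio);
		struct deferred_split *queue;

	retry:
		/* 1. Skip any memcg that is going offline. */
		while (memcg && css_is_dying(&memcg->css))
			memcg = parent_mem_cgroup(memcg);

		/* 2. Lock the queue this resolves to (the memcg one, or
		 *    the pglist_data one when memcg is NULL). */
		queue = get_queue_for(folio, memcg);
		spin_lock(&queue->split_queue_lock);

		/*
		 * Recheck under the lock: the memcg may have started
		 * dying after the walk above, so drop the lock and
		 * retry rather than add folios to a queue that
		 * reparenting may already have drained.
		 */
		if (memcg && css_is_dying(&memcg->css)) {
			spin_unlock(&queue->split_queue_lock);
			goto retry;
		}

		return queue;
	}

Assuming the offline/reparent path also takes split_queue_lock, the
recheck under the lock would take over the role of the 'is_dying' flag:
anyone holding the lock who sees CSS_DYING clear can safely add to that
queue.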

>
> I'll let all the memcg people comment on how that could be done best.
>
>>
>>> the pglist_data part where it might not make sense at all?).
>>
>> Maybe:
>>
>> #ifdef CONFIG_MEMCG
>>       bool is_dying;
>> #endif
>>
>
> Still doesn't quite look like it would belong here :(
>
> Also, is "dying" really the right terminology? It's more like "going offline"?
>
> But then, the queue is not going offline, the memcg is ...
>
> -- 
> Cheers
>
> David / dhildenb


Best Regards,
Yan, Zi
