Message-ID: <5f622eec-a039-4e82-9f37-3cad1692f268@huaweicloud.com>
Date: Fri, 27 Jun 2025 17:02:37 +0800
From: Chen Ridong <chenridong@...weicloud.com>
To: Kairui Song <ryncsn@...il.com>, Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Muchun Song <muchun.song@...ux.dev>,
Muchun Song <songmuchun@...edance.com>, hannes@...xchg.org,
mhocko@...nel.org, shakeel.butt@...ux.dev, akpm@...ux-foundation.org,
david@...morbit.com, zhengqi.arch@...edance.com, yosry.ahmed@...ux.dev,
nphamcs@...il.com, chengming.zhou@...ux.dev, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, linux-mm@...ck.org,
hamzamahfooz@...ux.microsoft.com, apais@...ux.microsoft.com,
yuzhao@...gle.com
Subject: Re: [PATCH RFC 00/28] Eliminate Dying Memory Cgroup
On 2025/4/28 11:43, Kairui Song wrote:
> On Fri, Apr 18, 2025 at 5:45 AM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>>
>> On Fri, Apr 18, 2025 at 02:22:12AM +0800, Kairui Song wrote:
>>>
>>> We currently have some workloads running with `nokmem` due to objcg
>>> performance issues. I know there are efforts to improve them, but so
>>> far it's still not painless to have. So I'm a bit worried about
>>> this...
>>
>> Do you mind sharing more details here?
>>
>> Thanks!
>
> Hi,
>
> Sorry for the late response; I was busy with another series and other work.
>
> It's not hard to observe such a slowdown; for example, a simple redis
> test can expose it:
>
> Without nokmem:
> redis-benchmark -h 127.0.0.1 -q -t set,get -n 80000 -c 1
> SET: 16393.44 requests per second, p50=0.055 msec
> GET: 16956.34 requests per second, p50=0.055 msec
>
> With nokmem:
> redis-benchmark -h 127.0.0.1 -q -t set,get -n 80000 -c 1
> SET: 17263.70 requests per second, p50=0.055 msec
> GET: 17410.23 requests per second, p50=0.055 msec
>
> And I'm testing with the latest kernel:
> uname -a
> Linux localhost 6.15.0-rc2+ #1594 SMP PREEMPT_DYNAMIC Sun Apr 27
> 15:13:27 CST 2025 x86_64 GNU/Linux
>
> This is just an example. For redis, it can be worked around by using
> things like redis pipelining, but not all workloads can be adjusted
> that flexibly.
>
> And the slowdown could be amplified in some cases.
Hi Kairui,

We've also encountered this issue in our Redis scenario. May I confirm
whether your testing is based on cgroup v1 or v2?

In our environment using cgroup v1, we've identified memcg_account_kmem
as the critical performance bottleneck - which, as you know, is specific
to the v1 implementation.
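
For reference, a rough paraphrase of that v1-only step as it looks on our
kernels (not verbatim mainline code; exact names and guards vary by
version): on the v1 hierarchy every kernel-memory charge also has to
update the separate memcg->kmem page_counter, which adds atomic
read-modify-write traffic on a hot path that the default (v2) hierarchy
skips:

    /*
     * Paraphrased sketch, not verbatim kernel code: on cgroup v1 each
     * kmem charge/uncharge also maintains the dedicated memcg->kmem
     * page_counter, i.e. extra atomic ops per allocation that the
     * default (v2) hierarchy avoids.
     */
    static void memcg_account_kmem(struct mem_cgroup *memcg, int nr_pages)
    {
            mod_memcg_state(memcg, MEMCG_KMEM, nr_pages);

            /* v1-only: keep the separate "kmem" limit counter in sync */
            if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
                    if (nr_pages > 0)
                            page_counter_charge(&memcg->kmem, nr_pages);
                    else
                            page_counter_uncharge(&memcg->kmem, -nr_pages);
            }
    }
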
Best regards,
Ridong