Message-ID: <lnrtyl66sz6iiw74mf6nurcm5tqmsyecnbmhrlouswp6kgfyqi@umvk6uxb3y7h>
Date: Fri, 27 Jun 2025 12:14:07 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Kairui Song <ryncsn@...il.com>
Cc: Chen Ridong <chenridong@...weicloud.com>,
Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>,
Muchun Song <songmuchun@...edance.com>, hannes@...xchg.org, mhocko@...nel.org, akpm@...ux-foundation.org,
david@...morbit.com, zhengqi.arch@...edance.com, yosry.ahmed@...ux.dev,
nphamcs@...il.com, chengming.zhou@...ux.dev, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, linux-mm@...ck.org, hamzamahfooz@...ux.microsoft.com,
apais@...ux.microsoft.com, yuzhao@...gle.com
Subject: Re: [PATCH RFC 00/28] Eliminate Dying Memory Cgroup
On Sat, Jun 28, 2025 at 02:54:10AM +0800, Kairui Song wrote:
> On Fri, Jun 27, 2025 at 5:02 PM Chen Ridong <chenridong@...weicloud.com> wrote:
> > On 2025/4/28 11:43, Kairui Song wrote:
> > > On Fri, Apr 18, 2025 at 5:45 AM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
> > >>
> > >> On Fri, Apr 18, 2025 at 02:22:12AM +0800, Kairui Song wrote:
> > >>>
> > >>> We currently have some workloads running with `nokmem` due to objcg
> > >>> performance issues. I know there are efforts to improve them, but so
> > >>> far it's still not painless to have it enabled. So I'm a bit worried
> > >>> about this...
> > >>
> > >> Do you mind sharing more details here?
> > >>
> > >> Thanks!
> > >
> > > Hi,
> > >
> > > Sorry for the late response, I was busy with another series and other works.
> > >
> > > It's not hard to observe such a slowdown; for example, a simple
> > > redis test can expose it:
> > >
> > > Without nokmem:
> > > redis-benchmark -h 127.0.0.1 -q -t set,get -n 80000 -c 1
> > > SET: 16393.44 requests per second, p50=0.055 msec
> > > GET: 16956.34 requests per second, p50=0.055 msec
> > >
> > > With nokmem:
> > > redis-benchmark -h 127.0.0.1 -q -t set,get -n 80000 -c 1
> > > SET: 17263.70 requests per second, p50=0.055 msec
> > > GET: 17410.23 requests per second, p50=0.055 msec
> > >
> > > And I'm testing with the latest kernel:
> > > uname -a
> > > Linux localhost 6.15.0-rc2+ #1594 SMP PREEMPT_DYNAMIC Sun Apr 27
> > > 15:13:27 CST 2025 x86_64 GNU/Linux
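> > >
> > > (For completeness: "nokmem" above means booting with the
> > > cgroup.memory=nokmem kernel parameter, which disables kernel memory
> > > (objcg) accounting. Whether it is active can be checked with e.g.:
> > >
> > > grep -o 'cgroup.memory=[^ ]*' /proc/cmdline
> > >
> > > The grep invocation is just an illustration.)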
> > >
> > > This is just an example. For redis, it can be worked around by using
> > > things like redis pipelining, but not all workloads can be adjusted
> > > that flexibly.
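> > >
> > > (A rough sketch of the pipelining workaround, reusing the benchmark
> > > command above; -P sets redis-benchmark's pipeline depth:
> > >
> > > redis-benchmark -h 127.0.0.1 -q -t set,get -n 80000 -c 1 -P 16
> > >
> > > Batching requests this way presumably hits the per-request charging
> > > paths less often, which is why it hides the objcg cost.)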
> > >
> > > And the slowdown could be amplified in some cases.
> >
> > Hi Kairui,
> >
> > We've also encountered this issue in our Redis scenario. May I confirm
> > whether your testing is based on cgroup v1 or v2?
> >
> > In our environment using cgroup v1, we've identified memcg_account_kmem
> > as the critical performance bottleneck function - which, as you know, is
> > specific to the v1 implementation.
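> >
> > (A generic way to confirm this kind of hotspot, assuming redis-server is
> > the process of interest, is to profile it with perf, e.g.:
> >
> > perf record -g -p "$(pidof redis-server)" -- sleep 30
> > perf report --sort symbol
> >
> > and look for memcg_account_kmem and the obj_cgroup charge functions near
> > the top of the report.)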
> >
> > Best regards,
> > Ridong
>
> Hi Ridong
>
> I can confirm I was testing using cgroup v2, and I can still reproduce
> it. The performance gap seems smaller with the latest upstream, but it
> is still easily observable.
>
> My previous observation is that the performance drop behaves differently
> on different CPUs; my current test machine is an Intel 8255C. I'll do a
> more detailed performance analysis when I have time to work on this.
> Thanks for the tips!
Please try with the latest upstream kernel, i.e. 6.16, as the charging
code has changed a lot.