Message-ID: <CAMgjq7BNUMFzsFCOt--mvTqSmgdA65PWcn57G_6-gEj0ps-jCg@mail.gmail.com>
Date: Mon, 28 Apr 2025 11:43:38 +0800
From: Kairui Song <ryncsn@...il.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Muchun Song <muchun.song@...ux.dev>, Muchun Song <songmuchun@...edance.com>, hannes@...xchg.org,
mhocko@...nel.org, shakeel.butt@...ux.dev, akpm@...ux-foundation.org,
david@...morbit.com, zhengqi.arch@...edance.com, yosry.ahmed@...ux.dev,
nphamcs@...il.com, chengming.zhou@...ux.dev, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, linux-mm@...ck.org, hamzamahfooz@...ux.microsoft.com,
apais@...ux.microsoft.com, yuzhao@...gle.com
Subject: Re: [PATCH RFC 00/28] Eliminate Dying Memory Cgroup
On Fri, Apr 18, 2025 at 5:45 AM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>
> On Fri, Apr 18, 2025 at 02:22:12AM +0800, Kairui Song wrote:
> >
> > We currently have some workloads running with `nokmem` due to objcg
> > performance issues. I know there are efforts to improve them, but so
> > far it's still not painless to have. So I'm a bit worried about
> > this...
>
> Do you mind sharing more details here?
>
> Thanks!
Hi,
Sorry for the late response, I was busy with another series and other work.
It's not hard to observe such a slowdown; for example, a simple redis
test can expose it:
Without nokmem:
redis-benchmark -h 127.0.0.1 -q -t set,get -n 80000 -c 1
SET: 16393.44 requests per second, p50=0.055 msec
GET: 16956.34 requests per second, p50=0.055 msec
With nokmem:
redis-benchmark -h 127.0.0.1 -q -t set,get -n 80000 -c 1
SET: 17263.70 requests per second, p50=0.055 msec
GET: 17410.23 requests per second, p50=0.055 msec
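(For reference, the nokmem case here means booting with kernel memory
accounting disabled, i.e. adding the following to the kernel command
line:

cgroup.memory=nokmem

)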
And I'm testing with the latest kernel:
uname -a
Linux localhost 6.15.0-rc2+ #1594 SMP PREEMPT_DYNAMIC Sun Apr 27
15:13:27 CST 2025 x86_64 GNU/Linux
This is just an example. For redis, the overhead can be worked around
by using things like redis pipelining (a pipelined run is sketched
below), but not all workloads can be adjusted that flexibly.
And the slowdown could be amplified in some cases.
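A pipelined run with redis-benchmark would look something like this
(-P 16 is just an arbitrary pipeline depth for illustration):

redis-benchmark -h 127.0.0.1 -q -t set,get -n 80000 -c 1 -P 16

Pipelining batches many commands per round trip, so fixed per-request
costs, including the accounting overhead, matter much less.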