Message-ID: <Y45PuH2C8VdHbrzD@P9FQF9L96D>
Date: Mon, 5 Dec 2022 12:08:24 -0800
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: "Luther, Sven" <Sven.Luther@...driver.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"regressions@...ts.linux.dev" <regressions@...ts.linux.dev>,
Roman Gushchin <guro@...com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Vlastimil Babka <vbabka@...e.cz>,
"kernel-team@...com" <kernel-team@...com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Muchun Song <songmuchun@...edance.com>,
Waiman Long <longman@...hat.com>,
Alexey Gladkov <legion@...nel.org>,
"Bonn, Jonas" <Jonas.Bonn@...driver.com>
Subject: Re: [Regression] mqueue performance degradation after "The new
cgroup slab memory controller" patchset.
On Mon, Dec 05, 2022 at 02:55:48PM +0000, Luther, Sven wrote:
> #regzbot ^introduced 10befea91b61c4e2c2d1df06a2e978d182fcf792
>
> We make heavy use of mqueues, and noticed a performance degradation between the 4.18 and 5.10 Linux kernels.
>
> After coarse per-version tracing, we did a kernel bisection between 5.8 and 5.9
> and traced the issue to a range of 10 patches (of which 9 were skipped, as they didn't boot) between:
>
>
> commit 10befea91b61c4e2c2d1df06a2e978d182fcf792 (HEAD, refs/bisect/bad)
> Author: Roman Gushchin <guro@...com>
> Date: Thu Aug 6 23:21:27 2020 -0700
>
> mm: memcg/slab: use a single set of kmem_caches for all allocations
>
> and:
>
> commit 286e04b8ed7a04279ae277f0f024430246ea5eec (refs/bisect/good-286e04b8ed7a04279ae277f0f024430246ea5eec)
> Author: Roman Gushchin <guro@...com>
> Date: Thu Aug 6 23:20:52 2020 -0700
>
> mm: memcg/slab: allocate obj_cgroups for non-root slab pages
>
> All of them are part of "The new cgroup slab memory controller" patchset:
>
> https://lore.kernel.org/all/20200623174037.3951353-18-guro@fb.com/T/
>
> from Roman Gushchin, which moves the accounting from the page level to the object level.
>
> Measurements were done using a test program which measures the min/average/max times for mqueue_send/mqueue_rcv,
> and the average for getppid, all measured over 100 000 runs. Results are shown in the following table;
> a sketch of the measurement loop follows it.
>
> +----------+--------------------------+-------------------------+----------------+
> | kernel | mqueue_rcv (ns) | mqueue_send (ns) | getppid |
> | version | min avg max variation | min avg max variation | (ns) variation |
> +----------+--------------------------+-------------------------+----------------+
> | 4.18.45 | 351 382 17533 base | 383 410 13178 base | 149 base |
> | 5.8-good | 380 392 7156 -2,55% | 376 384 6225 6,77% | 169 -11,83% |
> | 5.8-bad | 524 530 5310 -27,92% | 512 519 8775 -21,00% | 169 -11,83% |
> | 5.10 | 520 533 4078 -28,33% | 518 534 8108 -23,22% | 167 -10,78% |
> | 5.15 | 431 444 8440 -13,96% | 425 437 6170 -6,18% | 171 -12,87% |
> | 6.03 | 474 614 3881 -37,79% | 482 693 931 -40,84% | 171 -12,87% |
> +----------+--------------------------+-------------------------+----------------+
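>
> For reference, a minimal sketch of the send-side measurement loop (illustrative
> only; our real test program is larger, times mqueue_rcv the same way, and checks
> errors, all of which is omitted here; link with -lrt on older glibc):
>
> #include <fcntl.h>
> #include <mqueue.h>
> #include <stdint.h>
> #include <stdio.h>
> #include <time.h>
>
> #define RUNS 100000
>
> static uint64_t now_ns(void)
> {
>         struct timespec ts;
>
>         clock_gettime(CLOCK_MONOTONIC, &ts);
>         return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
> }
>
> int main(void)
> {
>         struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
>         mqd_t mq = mq_open("/bench", O_CREAT | O_RDWR, 0600, &attr);
>         char buf[64] = { 0 };
>         uint64_t min = UINT64_MAX, max = 0, sum = 0;
>
>         for (int i = 0; i < RUNS; i++) {
>                 uint64_t t0 = now_ns();
>                 mq_send(mq, buf, sizeof(buf), 0);
>                 uint64_t dt = now_ns() - t0;
>
>                 mq_receive(mq, buf, sizeof(buf), NULL); /* drain; timed separately */
>                 if (dt < min) min = dt;
>                 if (dt > max) max = dt;
>                 sum += dt;
>         }
>         printf("mqueue_send: min %llu avg %llu max %llu ns\n",
>                (unsigned long long)min,
>                (unsigned long long)(sum / RUNS),
>                (unsigned long long)max);
>         mq_close(mq);
>         mq_unlink("/bench");
>         return 0;
> }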
Hi Sven!
Thank you for the report! As Waiman said, it's not a secret that per-object tracking
makes individual allocations slower, but for the majority of workloads it's well
compensated by significant memory savings and lower fragmentation.
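Schematically (pseudo-C with made-up names, not the actual kernel code), the
difference is roughly:

/* Before: each memcg had its own kmem_cache, and a charge happened only
 * when a whole new slab page was allocated for that cache. */
void *alloc_before(struct kmem_cache *memcg_cache)
{
        if (cache_needs_new_page(memcg_cache))
                charge_page_to_memcg(memcg_cache);   /* rare, once per page */
        return take_free_object(memcg_cache);
}

/* After: all memcgs share one set of kmem_caches, and every allocation
 * charges the individual object to its obj_cgroup. */
void *alloc_after(struct kmem_cache *shared_cache)
{
        void *obj = take_free_object(shared_cache);
        charge_object_to_objcg(obj);                 /* hot path, per object */
        return obj;
}

The per-object charge is cheap, but it sits on the allocation hot path, which
is what your benchmark is measuring.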
It seems there is another regression between 5.15 and 6.03, which is a separate
topic, but how big is the real regression between 4.18 and 5.15? The benchmark
shows about 14%, but is your real workload suffering at the same level?
If the answer is yes, the right thing to do is to introduce some sort of
mqueue-specific caching for allocated objects.
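Something along these lines might work (a rough, untested sketch with
illustrative names; a real patch would also have to deal with cached objects
staying charged to the cgroup that first allocated them):

/* Keep a small per-queue cache of message buffers so the hot path can
 * skip the allocation and the per-object memcg charging it entails. */
#define MSG_CACHE_SIZE 16

struct msg_cache {
        struct msg_msg *slots[MSG_CACHE_SIZE];
        int nr;
        spinlock_t lock;
};

static struct msg_msg *msg_cache_get(struct msg_cache *mc)
{
        struct msg_msg *msg = NULL;

        spin_lock(&mc->lock);
        if (mc->nr > 0)
                msg = mc->slots[--mc->nr];
        spin_unlock(&mc->lock);
        return msg;        /* NULL -> fall back to the regular allocation */
}

static bool msg_cache_put(struct msg_cache *mc, struct msg_msg *msg)
{
        bool cached = false;

        spin_lock(&mc->lock);
        if (mc->nr < MSG_CACHE_SIZE) {
                mc->slots[mc->nr++] = msg;
                cached = true;
        }
        spin_unlock(&mc->lock);
        return cached;     /* false -> free the message as usual */
}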
Thanks!
Roman