Message-ID: <1d8d0032-261f-fe35-0c12-23043083e676@canonical.com>
Date: Fri, 23 Jun 2023 00:16:35 -0700
From: John Johansen <john.johansen@...onical.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Swapnil Sapkal <Swapnil.Sapkal@....com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, linux-tip-commits@...r.kernel.org,
Aaron Lu <aaron.lu@...el.com>, x86@...nel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [tip: sched/core] sched: Fix performance regression introduced by
mm_cid
On 6/22/23 23:37, Sebastian Andrzej Siewior wrote:
> On 2023-06-21 16:59:31 [-0700], John Johansen wrote:
>>> Which turned a per-cpu cache into a global memory pool protected by a spinlock. It may benefit RT, but it does not appear to be so great at scaling.
>>>
>> it is not. And I have a patch that needs some more formal testing for some stats.
>> Ubuntu pulled it in last cycle so it has gotten a fair bit of use and is looking good
>> on that end. There are probably some tweaks that can be done to improve it. The
>> backoff in particular is something that has mostly been adjusted in response to some
>> basic benchmarking.
>>
>> anyways patch below
>>
>> commit e057e9b47f1749882ea0efb4427d6b9671c761ab
>
> I think I've been looking at this patch, or a former version of it, and
> it looked good.
>
So, I am not satisfied with the way the scaling works; it feels more complicated than
it needs to be. I also wanted to see, with the percpu caching in place, whether it is
worth dropping the global pool entirely and just going straight to the allocator.
With that said, the patch does work and seems to be stable in the broader testing it
has gotten. It is more complicated than I would like, and I wanted to play with ideas
for improving it and to get some better benchmark data. All things I just haven't had
time for.
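For reference, the shape of what I mean by percpu caching layered over a pool is
roughly the sketch below. It is illustrative only, not the actual patch: the names
(example_get_buffer() and friends), the buffer size, and the locking details are
made up here, and the backoff logic the patch carries is left out entirely.

/*
 * Illustrative sketch only -- not the actual patch.  A per-CPU buffer
 * cache that falls back to a spinlock-protected global pool, and from
 * there to the allocator.  All names and sizes here are made up.
 */
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/slab.h>

#define EXAMPLE_BUF_SIZE	8192	/* placeholder size */

struct example_buf {
	struct list_head list;
	char data[EXAMPLE_BUF_SIZE];
};

/* One cached buffer per CPU; NULL when the slot is empty. */
static DEFINE_PER_CPU(struct example_buf *, example_pcpu_buf);

/* Shared overflow pool, protected by a single spinlock. */
static LIST_HEAD(example_global_pool);
static DEFINE_SPINLOCK(example_pool_lock);

static struct example_buf *example_get_buffer(gfp_t gfp)
{
	struct example_buf *b, **slot;

	/* Fast path: take the per-CPU cached buffer, no lock needed. */
	slot = get_cpu_ptr(&example_pcpu_buf);
	b = *slot;
	*slot = NULL;
	put_cpu_ptr(&example_pcpu_buf);
	if (b)
		return b;

	/* Slow path: try the global pool under the spinlock. */
	spin_lock(&example_pool_lock);
	b = list_first_entry_or_null(&example_global_pool,
				     struct example_buf, list);
	if (b)
		list_del(&b->list);
	spin_unlock(&example_pool_lock);
	if (b)
		return b;

	/* Last resort: go straight to the allocator. */
	return kmalloc(sizeof(*b), gfp);
}

static void example_put_buffer(struct example_buf *b)
{
	struct example_buf **slot;

	/* Refill the per-CPU slot first if it is empty. */
	slot = get_cpu_ptr(&example_pcpu_buf);
	if (!*slot) {
		*slot = b;
		b = NULL;
	}
	put_cpu_ptr(&example_pcpu_buf);
	if (!b)
		return;

	/* Per-CPU slot was full; park the buffer on the global pool. */
	spin_lock(&example_pool_lock);
	list_add(&b->list, &example_global_pool);
	spin_unlock(&example_pool_lock);
}

The open question above is whether the middle layer (the global pool and its spinlock)
pays for itself, or whether falling through from the per-CPU cache straight to
kmalloc() is good enough.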
I am also not opposed to using some version of this patch now to solve the immediate
need, and then working to improve it after the fact.