Message-ID: <20230413152023.GO4253@hirez.programming.kicks-ass.net>
Date: Thu, 13 Apr 2023 17:20:23 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Aaron Lu <aaron.lu@...el.com>, linux-kernel@...r.kernel.org,
Olivier Dion <odion@...icios.com>, michael.christie@...cle.com
Subject: Re: [RFC PATCH v4] sched: Fix performance regression introduced by
mm_cid
On Thu, Apr 13, 2023 at 09:56:38AM -0400, Mathieu Desnoyers wrote:
> > Mathieu, WDYT? -- other than that the patch is an obvious hack :-)
>
> I hate it with passion :-)
>
> It is quite specific to your workload/configuration.
>
> If we take for instance a process with a large mm_users count which is
> eventually affined to a subset of the cpus with cpusets or
> sched_setaffinity, your patch will prevent compaction of the concurrency ids
> when it really should not.
I don't think it will; it will only kick in once the highest cid is
handed out (I should've used num_online_cpus() instead of nr_cpu_ids),
and with affinity at play that should never happen.
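
Very roughly, as a stand-alone userspace sketch of that argument (all
names here -- cid_bitmap, highest_cid_handed_out, skip_compaction,
NR_ONLINE -- are made up for illustration; this is not the actual
patch): compaction is only skipped once the very last cid has been
handed out, and a task affined to a subset of the CPUs never gets that
far.

  /*
   * Toy model: cids are handed out from a bitmap bounded by the number
   * of online CPUs, and the "hack" only engages once the top cid has
   * actually been used.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define NR_ONLINE 8                     /* stand-in for num_online_cpus() */

  static unsigned long cid_bitmap;        /* bit i set => cid i in use */
  static int highest_cid_handed_out = -1;

  static int cid_alloc(void)
  {
          for (int cid = 0; cid < NR_ONLINE; cid++) {
                  if (!(cid_bitmap & (1UL << cid))) {
                          cid_bitmap |= 1UL << cid;
                          if (cid > highest_cid_handed_out)
                                  highest_cid_handed_out = cid;
                          return cid;
                  }
          }
          return -1;
  }

  /* The "hack": only stop compacting once the top cid has been used. */
  static bool skip_compaction(void)
  {
          return highest_cid_handed_out == NR_ONLINE - 1;
  }

  int main(void)
  {
          /* A task affined to 2 CPUs only ever allocates cids 0 and 1 ... */
          cid_alloc();
          cid_alloc();
          /* ... so the guard never trips and compaction keeps running. */
          printf("skip compaction? %s\n", skip_compaction() ? "yes" : "no");
          return 0;
  }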
Now, the fancier scheme with:
min(t->nr_cpus_allowed, atomic_read(&t->mm->mm_users))
does get more complex; and I've yet to find a working version
that doesn't also need a for_each_cpu() loop for reclaim :/
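
To make that trade-off concrete, here is a userspace toy model of the
bound (struct mm_sketch, pcpu_cid, reclaim_over_bound are all invented
names, nothing kernel-accurate): once min(nr_cpus_allowed, mm_users)
shrinks, any per-cpu cid at or above the new bound has to be hunted
down, and that is the for_each_cpu()-style walk I keep running into.

  /*
   * Toy model: the number of usable cids is min(nr_cpus_allowed,
   * mm_users); reclaiming cids above a smaller bound requires visiting
   * every CPU's cached cid.
   */
  #include <stdio.h>

  #define NR_CPUS 8

  struct mm_sketch {
          int mm_users;
          int pcpu_cid[NR_CPUS];          /* cid cached per CPU, -1 if none */
  };

  static int cid_bound(int nr_cpus_allowed, const struct mm_sketch *mm)
  {
          return nr_cpus_allowed < mm->mm_users ? nr_cpus_allowed : mm->mm_users;
  }

  /* Reclaim must visit every CPU's cached cid -- the costly part. */
  static void reclaim_over_bound(struct mm_sketch *mm, int bound)
  {
          for (int cpu = 0; cpu < NR_CPUS; cpu++) {       /* ~ for_each_cpu() */
                  if (mm->pcpu_cid[cpu] >= bound)
                          mm->pcpu_cid[cpu] = -1;         /* drop the cid */
          }
  }

  int main(void)
  {
          struct mm_sketch mm = { .mm_users = 16,
                                  .pcpu_cid = { 0, 1, 2, 3, -1, -1, -1, -1 } };

          /* Affinity shrinks to 2 CPUs: cids 2 and 3 must be reclaimed. */
          int bound = cid_bound(/* nr_cpus_allowed */ 2, &mm);
          reclaim_over_bound(&mm, bound);

          for (int cpu = 0; cpu < NR_CPUS; cpu++)
                  printf("cpu%d cid=%d\n", cpu, mm.pcpu_cid[cpu]);
          return 0;
  }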
Anyway, I think the hack as presented is safe, but a hack nonetheless.