Message-ID: <8e51adf20a9eafd3046c1189989f87734576bd57.camel@redhat.com>
Date: Wed, 24 Sep 2025 17:22:09 +0200
From: Gabriele Monaco <gmonaco@...hat.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Ingo Molnar
<mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner
<tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>, linux-mm@...ck.org, "Paul E.
McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH v2 2/4] rseq: Run the mm_cid_compaction from
rseq_handle_notify_resume()
On Tue, 2025-08-26 at 14:01 -0400, Mathieu Desnoyers wrote:
> Your approach looks good, but please note that this will probably
> need to be rebased on top of the rseq rework from Thomas Gleixner.
>
> Latest version can be found here:
>
> https://lore.kernel.org/lkml/20250823161326.635281786@linutronix.de/
I rebased and adapted the patches on top of v4 of that series.
To get comparable functionality I went back to the task_work, and I'm now
scheduling it from context switches (rseq_sched_switch_event).
Quick recap:
My series tries to reduce the latency caused by `task_mm_cid_work` on many-CPU
systems. While at it, it improves reliability for bursty tasks that can miss the
tick.
It reduces the latency by splitting the work into batches. This requires more
reliability, since compaction now needs multiple runs to cover all CPUs; that is
achieved by enqueuing the work on context switches instead of ticks.
While this solution works, I'm not sure whether running anything there is still
acceptable, given that Thomas' effort is going in the opposite direction.
My tests don't show any significant performance difference, but I'd gladly try
different workloads.
Any thoughts on this?
If the approach still looks reasonable I can submit a proper series for review.
You can find the series at:
git://git.kernel.org/pub/scm/linux/kernel/git/gmonaco/linux.git mm_cid_batches_rebased
Thanks,
Gabriele