Message-ID: <de5654b277f424d56ac00b4b9d336491bafff110.camel@redhat.com>
Date: Wed, 02 Jul 2025 15:58:28 +0200
From: Gabriele Monaco <gmonaco@...hat.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, kernel test robot
<oliver.sang@...el.com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, aubrey.li@...ux.intel.com,
yu.c.chen@...el.com, Andrew Morton <akpm@...ux-foundation.org>, David
Hildenbrand <david@...hat.com>, Ingo Molnar <mingo@...hat.com>, Peter
Zijlstra <peterz@...radead.org>, "Paul E. McKenney" <paulmck@...nel.org>,
Ingo Molnar <mingo@...hat.org>
Subject: Re: [RESEND PATCH v13 2/3] sched: Move task_mm_cid_work to mm
work_struct
On Wed, 2025-06-25 at 09:57 -0400, Mathieu Desnoyers wrote:
> On 2025-06-25 04:01, kernel test robot wrote:
> >
> > Hello,
> >
> > kernel test robot noticed a 10.1% regression of
> > hackbench.throughput on:
>
> Hi Gabriele,
>
> This is a significant regression. Can you investigate before it gets
> merged ?
>
Hi Mathieu,
I ran some tests, and the culprit for this performance regression seems
to be the interference from more consistent `mm_cid` scans combined
with them now running from a `work_struct`, which adds some scheduling
overhead.
One solution could be to reduce the frequency: currently the scans run
(sporadically) about every 100ms; if the minimum delay is raised to 1s,
the test results look OK.
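Concretely that would be something like the following, assuming the
existing MM_CID_SCAN_DELAY in kernel/sched/sched.h is still what drives
the interval after the patch:

-#define MM_CID_SCAN_DELAY	100			/* 100ms */
+#define MM_CID_SCAN_DELAY	1000			/* 1s */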
However, I tried another approach that seems promising: a work_struct
gets scheduled relatively quickly, which ends up causing a lot of
contention with kworkers, whereas something like a timer_list seems
less aggressive. With it we get similar reliability in triggering the
mm_cid scan, without the same performance impact.
For now I kept roughly the same structure as the patch and used a
timer delayed by 1 jiffy in place of the work_struct.
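Roughly like this (untested sketch; the timer field, callback and scan
helper names are made up here, not necessarily what the next version
will use):

	/* in struct mm_struct, replacing the work_struct */
	struct timer_list cid_scan_timer;

	static void task_mm_cid_timer_fn(struct timer_list *t)
	{
		struct mm_struct *mm = from_timer(mm, t, cid_scan_timer);

		/*
		 * Timer callbacks run in softirq context, so the scan
		 * has to be safe there (or defer to task_work).
		 */
		task_mm_cid_scan(mm);
	}

	/* in rseq_handle_notify_resume(), instead of schedule_work() */
	if (time_after(jiffies, READ_ONCE(mm->mm_cid_next_scan)))
		mod_timer(&mm->cid_scan_timer, jiffies + 1);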
It might look cleaner to use the timer directly for the 100ms delay
instead of storing and checking the timestamp, effectively running a
scan about 100ms after every rseq_handle_notify_resume.
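That is, something along these lines (again just a sketch; arming only
when no timer is pending is my reading of "about 100ms after every
rseq_handle_notify_resume", otherwise a busy mm would keep pushing the
scan back):

	/*
	 * Arm only if not already pending, so a scan fires ~100ms
	 * after the first notify_resume of each window.
	 */
	if (!timer_pending(&mm->cid_scan_timer))
		mod_timer(&mm->cid_scan_timer,
			  jiffies + msecs_to_jiffies(100));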
What do you think?
Thanks,
Gabriele