Message-ID: <07272f98-859b-4a10-9096-9cba763af429@efficios.com>
Date: Wed, 20 Mar 2024 14:35:52 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linux Trace Kernel <linux-trace-kernel@...r.kernel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>,
linux-rt-users <linux-rt-users@...r.kernel.org>
Subject: Re: [RFC][PATCH] tracing: Introduce restart_critical_timings()
On 2024-03-20 13:58, Steven Rostedt wrote:
> On Wed, 20 Mar 2024 13:15:39 -0400
> Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
>
>>> I would like to introduce restart_critical_timings() and place it in
>>> locations that have this behavior.
>>
>> Is there any way you could move this to need_resched() rather than
>> sprinkle those everywhere ?
>
> Because need_resched() itself does not mean it's going to schedule
> immediately. I looked at a few locations that need_resched() is called.
> Most are in idle code where the critical timings are already handled.
>
> I'm not sure I'd add it for places like mm/memory.c or drivers/md/bcache/btree.c.
>
> A lot of places look to use it more for PREEMPT_NONE situations as an open
> coded cond_resched().
>
> The main reason this one is particularly an issue, is that it spins as long
> as the owner is still running. Which may be some time, as here it was 7ms.
What I think we should be discussing here is how calling need_resched()
should interact with the latency tracked by critical timings.
AFAIU, when code explicitly calls need_resched() in a loop, there are
two cases:
- need_resched() returns false: This means the loop can continue without
causing long latency on the system. Technically we could restart the
critical timings at this point.
- need_resched() returns true: This means the loop should exit quickly
and call the scheduler. I would not reset the critical timings there,
as whatever code is executed between need_resched() returning true
and calling the scheduler is adding to latency.
Having stop/start critical timings around idle loops seems to just be
an optimization over that.
As for the mm and drivers/md code, what is wrong with doing a critical
timings reset when need_resched() returns false? It would prevent
a whole class of false positives rather than playing whack-a-mole with
those that pop up.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com