Message-ID: <5065c29035be39dee954f2b233a40ae15dcc5035.camel@redhat.com>
Date: Wed, 30 Jul 2025 18:18:45 +0200
From: Gabriele Monaco <gmonaco@...hat.com>
To: Nam Cao <namcao@...utronix.de>
Cc: Steven Rostedt <rostedt@...dmis.org>,
 Masami Hiramatsu <mhiramat@...nel.org>,
 Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
 linux-trace-kernel@...r.kernel.org, linux-kernel@...r.kernel.org,
 Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
 Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>,
 Dietmar Eggemann <dietmar.eggemann@....com>, Ben Segall <bsegall@...gle.com>,
 Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH 4/5] sched: Add rt task enqueue/dequeue trace points
On Wed, 2025-07-30 at 17:18 +0200, Nam Cao wrote:
> On Wed, Jul 30, 2025 at 03:53:14PM +0200, Gabriele Monaco wrote:
> > On Wed, 2025-07-30 at 14:45 +0200, Nam Cao wrote:
> > > Add trace points into enqueue_task_rt() and dequeue_task_rt(). They
> > > are useful to implement an RV monitor which validates RT scheduling.
> > >
> >
> > I get it's much simpler this way, but is it that different to follow
> > the task's existing tracepoints?
> >
> > * task going to sleep (switch:prev_state != RUNNING) is dequeued
> > * task waking up is enqueued
> > * changing the task's policy (setpolicy and setattr syscalls) should
> >   enqueue/dequeue as well
> >
> > This is more thinking out loud, but I'm currently doing something
> > rather similar with the deadline tasks, and this seems reasonable,
> > at least on paper.
> >
> > What do you think?
>
> I think more or less the same. The fewer tracepoints, the better. But
> the monitor is way more obvious this way.
>
> Let me see how hard it is to use the existing tracepoints...
Well, thinking about it again, these tracepoints might simplify things
considerably when tasks change policy.
Syscalls may fail; you could register to sys_exit and check the return
value, but at that point the policy has already changed, so you cannot
tell whether it was a relevant event or not (e.g. the policy stayed the
same).
Also, sched_setscheduler_nocheck() would be out of the picture here; I'm
not sure how common that is, though (and it might not matter if you only
focus on userspace tasks).
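
For the record, this is roughly what registering to sys_exit would look
like (just a sketch, untested; the probe signature follows the sys_exit
tracepoint in include/trace/events/syscalls.h, and the syscall-number
check is only illustrative):

/*
 * Sketch only: hook the generic sys_exit tracepoint and look at the
 * return value of sched_setscheduler()/sched_setattr().  By the time
 * this runs, the target task's policy has already been updated, so we
 * cannot tell whether anything actually changed (e.g. same policy).
 */
#include <linux/sched.h>
#include <asm/syscall.h>
#include <asm/unistd.h>
#include <trace/events/syscalls.h>

static void probe_sys_exit(void *data, struct pt_regs *regs, long ret)
{
        int nr = syscall_get_nr(current, regs);

        if (nr != __NR_sched_setscheduler && nr != __NR_sched_setattr)
                return;
        if (ret < 0)
                return;         /* syscall failed, nothing changed */

        /* the new policy is already visible here, the old one is lost */
}

static int monitor_enable(void)
{
        return register_trace_sys_exit(probe_sys_exit, NULL);
}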
If you go down the route of adding tracepoints, why not have the other
classes benefit too? I believe calling them from enqueue_task() /
dequeue_task() in sched/core.c would let you easily filter by policy
anyway (haven't tested; rough sketch below).
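
Something along these lines is what I have in mind, with the tracepoint
name made up for illustration:

/*
 * Hypothetical: a single class-agnostic tracepoint called from
 * enqueue_task()/dequeue_task() in kernel/sched/core.c, e.g.
 *
 *      trace_sched_enqueue_task_tp(rq, p, flags);
 *
 * placed right before p->sched_class->enqueue_task(rq, p, flags).
 * An RT monitor's probe would then just filter on the policy:
 */
static void handle_enqueue(void *data, struct rq *rq,
                           struct task_struct *p, int flags)
{
        if (p->policy != SCHED_FIFO && p->policy != SCHED_RR)
                return;         /* not an RT task, ignore */

        /* feed the monitor's automaton here */
}

A deadline monitor would do the same with SCHED_DEADLINE, so all classes
would get the hooks for free.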
Thanks,
Gabriele