Message-ID: <b348f7ade8b028e3affe911da287b15640f012fe.camel@redhat.com>
Date: Tue, 03 Feb 2026 12:06:48 +0100
From: Gabriele Monaco <gmonaco@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: K Prateek Nayak <kprateek.nayak@....com>, Tomas Glozar
 <tglozar@...hat.com>,  Clark Williams <williams@...hat.com>, John Kacur
 <jkacur@...hat.com>, linux-kernel@...r.kernel.org, Steven Rostedt
 <rostedt@...dmis.org>, Nam Cao <namcao@...utronix.de>, Juri Lelli
 <jlelli@...hat.com>,  Ingo Molnar <mingo@...hat.com>,
 linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH v5 09/15] sched: Add task enqueue/dequeue trace points

Hi Peter,

On Thu, 2026-01-22 at 16:54 +0100, Gabriele Monaco wrote:
> From: Nam Cao <namcao@...utronix.de>
> 
> Add trace points into enqueue_task() and dequeue_task().

Can I have an Ack for these tracepoints?

Thanks,
Gabriele
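
For anyone wanting to exercise these: since they are declared with
DECLARE_TRACE() (bare tracepoints, no TRACE_EVENT), they do not show up
in tracefs and need an in-kernel probe. A minimal sketch of a module
attaching to the enqueue tracepoint could look like the below; the
probe name and message are illustrative, and the register/unregister
helper names simply follow from the DECLARE_TRACE() naming used in the
patch:

#include <linux/module.h>
#include <linux/sched.h>
#include <trace/events/sched.h>

/* Probe signature: void *data first, then the TP_PROTO arguments. */
static void probe_sched_enqueue(void *data, struct task_struct *tsk, int cpu)
{
	trace_printk("enqueue: %s[%d] on CPU %d\n", tsk->comm, tsk->pid, cpu);
}

static int __init enqueue_probe_init(void)
{
	return register_trace_sched_enqueue_tp(probe_sched_enqueue, NULL);
}

static void __exit enqueue_probe_exit(void)
{
	unregister_trace_sched_enqueue_tp(probe_sched_enqueue, NULL);
	/* Make sure no probe is still running before the module goes away. */
	tracepoint_synchronize_unregister();
}

module_init(enqueue_probe_init);
module_exit(enqueue_probe_exit);
MODULE_LICENSE("GPL");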

> 
> Suggested-by: Peter Zijlstra <peterz@...radead.org>
> Signed-off-by: Nam Cao <namcao@...utronix.de>
> Reviewed-by: K Prateek Nayak <kprateek.nayak@....com>
> Signed-off-by: Gabriele Monaco <gmonaco@...hat.com>
> ---
> 
> Notes:
>     V5:
>     * Do not fire enqueue tracepoint for delayed enqueues
> 
>  include/trace/events/sched.h | 13 +++++++++++++
>  kernel/sched/core.c          | 10 +++++++++-
>  kernel/sched/sched.h         |  2 ++
>  3 files changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> index 366b2e8ec40c..f4e1d3554e3e 100644
> --- a/include/trace/events/sched.h
> +++ b/include/trace/events/sched.h
> @@ -912,6 +912,19 @@ DECLARE_TRACE(sched_dl_server_stop,
>  	TP_PROTO(struct sched_dl_entity *dl_se, int cpu),
>  	TP_ARGS(dl_se, cpu));
>  
> +/*
> + * The two trace points below may not work as expected for fair tasks due
> + * to delayed dequeue. See:
> + * https://lore.kernel.org/lkml/179674c6-f82a-4718-ace2-67b5e672fdee@amd.com/
> + */
> +DECLARE_TRACE(sched_enqueue,
> +	TP_PROTO(struct task_struct *tsk, int cpu),
> +	TP_ARGS(tsk, cpu));
> +
> +DECLARE_TRACE(sched_dequeue,
> +	TP_PROTO(struct task_struct *tsk, int cpu),
> +	TP_ARGS(tsk, cpu));
> +
>  #endif /* _TRACE_SCHED_H */
>  
>  /* This part must be outside protection */
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f6293fa02fb7..c885d7885172 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2087,6 +2087,9 @@ unsigned long get_wchan(struct task_struct *p)
>  
>  void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
>  {
> +	if (trace_sched_enqueue_tp_enabled() && !(flags & ENQUEUE_DELAYED))
> +		trace_sched_enqueue_tp(p, rq->cpu);
> +
>  	if (!(flags & ENQUEUE_NOCLOCK))
>  		update_rq_clock(rq);
>  
> @@ -2114,6 +2117,8 @@ void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
>   */
>  inline bool dequeue_task(struct rq *rq, struct task_struct *p, int flags)
>  {
> +	int ret;
> +
>  	if (sched_core_enabled(rq))
>  		sched_core_dequeue(rq, p, flags);
>  
> @@ -2131,7 +2136,10 @@ inline bool dequeue_task(struct rq *rq, struct task_struct *p, int flags)
>  	 */
>  	uclamp_rq_dec(rq, p);
>  	rq->queue_mask |= p->sched_class->queue_mask;
> -	return p->sched_class->dequeue_task(rq, p, flags);
> +	ret = p->sched_class->dequeue_task(rq, p, flags);
> +	if (trace_sched_dequeue_tp_enabled() && !(flags & DEQUEUE_SLEEP))
> +		trace_sched_dequeue_tp(p, rq->cpu);
> +	return ret;
>  }
>  
>  void activate_task(struct rq *rq, struct task_struct *p, int flags)
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index e885a935b716..8465472b40fa 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2918,6 +2918,8 @@ static inline void sub_nr_running(struct rq *rq, unsigned count)
>  
>  static inline void __block_task(struct rq *rq, struct task_struct *p)
>  {
> +	trace_sched_dequeue_tp(p, rq->cpu);
> +
>  	if (p->sched_contributes_to_load)
>  		rq->nr_uninterruptible++;
>  

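A side note on the pattern used in enqueue_task()/dequeue_task() above:
the trace_sched_enqueue_tp_enabled() guard compiles down to a static
branch, so the ENQUEUE_DELAYED/DEQUEUE_SLEEP flag tests are only
evaluated while a probe is actually attached. Roughly what the
generated guard looks like (a simplified sketch; the real macro lives
in include/linux/tracepoint.h):

#include <linux/tracepoint.h>

/*
 * Simplified shape of the guard DECLARE_TRACE() generates: a static
 * branch patched in only while a probe is registered, so the check is
 * effectively free in the common (tracing disabled) case.
 */
static inline bool trace_sched_enqueue_tp_enabled(void)
{
	return static_branch_unlikely(&__tracepoint_sched_enqueue_tp.key);
}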
