Message-ID: <xhsmh7ccky4mr.mognet@vschneid-thinkpadt14sgen2i.remote.csb>
Date: Tue, 13 Aug 2024 14:43:56 +0200
From: Valentin Schneider <vschneid@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
 peterz@...radead.org, juri.lelli@...hat.com, vincent.guittot@...aro.org,
 dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
 mgorman@...e.de, linux-kernel@...r.kernel.org
Cc: kprateek.nayak@....com, wuyun.abel@...edance.com,
 youssefesmat@...omium.org, tglx@...utronix.de, efault@....de
Subject: Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE

On 27/07/24 12:27, Peter Zijlstra wrote:
> Note that tasks that are kept on the runqueue to burn off negative
> lag, are not in fact runnable anymore, they'll get dequeued the moment
> they get picked.
>
> As such, don't count this time towards runnable.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  kernel/sched/fair.c  |    2 ++
>  kernel/sched/sched.h |    6 ++++++
>  2 files changed, 8 insertions(+)
>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5388,6 +5388,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
>                       if (cfs_rq->next == se)
>                               cfs_rq->next = NULL;
>                       se->sched_delayed = 1;
> +			update_load_avg(cfs_rq, se, 0);

Shouldn't this be before setting ->sched_delayed? accumulate_sum() should
see the time delta as having been spent runnable.
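
IOW something like this (just a sketch of the ordering, untested):

			if (cfs_rq->next == se)
				cfs_rq->next = NULL;
			/*
			 * Fold the elapsed delta into the PELT signals while
			 * the entity still counts as runnable...
			 */
			update_load_avg(cfs_rq, se, 0);
			/* ...and only then mark it as delayed. */
			se->sched_delayed = 1;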

>                       return false;
>               }
>       }
> @@ -6814,6 +6815,7 @@ requeue_delayed_entity(struct sched_enti
>       }
>
>       se->sched_delayed = 0;
> +	update_load_avg(cfs_rq, se, 0);

Ditto on the ordering here: update_load_avg() should run before
->sched_delayed is cleared, so that the time spent delayed isn't accounted
as runnable.
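
i.e. (same caveat, untested):

	/* Account the delayed stretch while se_runnable() still sees 0... */
	update_load_avg(cfs_rq, se, 0);
	/* ...then clear the flag so future deltas count as runnable again. */
	se->sched_delayed = 0;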

>  }
>
>  /*
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -816,6 +816,9 @@ static inline void se_update_runnable(st
>
>  static inline long se_runnable(struct sched_entity *se)
>  {
> +	if (se->sched_delayed)
> +		return false;
> +

Per __update_load_avg_se(), delayed-dequeue entities are still ->on_rq, so
their load signal will increase. Do we want a similar helper for the @load
input of ___update_load_sum()?
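
Something like the below, perhaps (name and shape entirely hypothetical,
just mirroring se_runnable()):

static inline long se_load(struct sched_entity *se)
{
	/* Hypothetical: delayed-dequeue entities contribute no load either. */
	if (se->sched_delayed)
		return 0;

	return !!se->on_rq;
}

which __update_load_avg_se() could then pass as the @load argument instead
of the raw !!se->on_rq.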


>       if (entity_is_task(se))
>               return !!se->on_rq;
>       else
> @@ -830,6 +833,9 @@ static inline void se_update_runnable(st
>
>  static inline long se_runnable(struct sched_entity *se)
>  {
> +	if (se->sched_delayed)
> +		return false;
> +
>       return !!se->on_rq;
>  }
>

