Message-ID: <Y3Gwr2p5BcofuZ8e@google.com>
Date:   Mon, 14 Nov 2022 03:06:23 +0000
From:   Joel Fernandes <joel@...lfernandes.org>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
        linux-kernel@...r.kernel.org, parth@...ux.ibm.com,
        qyousef@...alina.io, chris.hyser@...cle.com,
        patrick.bellasi@...bug.net, David.Laight@...lab.com,
        pjt@...gle.com, pavel@....cz, tj@...nel.org, qperret@...gle.com,
        tim.c.chen@...ux.intel.com, joshdon@...gle.com, timj@....org,
        kprateek.nayak@....com, yu.c.chen@...el.com,
        youssefesmat@...omium.org, riel@...hat.com
Subject: Re: [PATCH v8 1/9] sched/fair: fix unfairness at wakeup

Hi Vincent,

On Thu, Nov 10, 2022 at 06:50:01PM +0100, Vincent Guittot wrote:
> At wake up, the vruntime of a task is updated to not be older than
> a sched_latency period behind the min_vruntime. This prevents a long
> sleeping task from getting unlimited credit at wakeup.
> Such a waking task should preempt the current one to use its CPU
> bandwidth, but wakeup_gran() can be larger than sched_latency,
> filtering out the wakeup preemption and as a result stealing some CPU
> bandwidth from the waking task.

Just a thought: one can argue that this also hurts the running task,
because wakeup_gran() is expected to prevent the running task from being
preempted for a certain minimum amount of time, right?

So for example, if I set sysctl_sched_wakeup_granularity to a high value, I
expect the current task not to be preempted for that long, even if the
sched_latency cap in place_entity() makes the delta smaller than
wakeup_gran(). place_entity() in the current code is used to cap the sleep
credit; it does not really say anything about preemption.

I don't mind this change, but I think it does change the meaning of
sysctl_sched_wakeup_granularity a bit.
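
Concretely (illustrative numbers only): say someone sets
sysctl_sched_wakeup_granularity to 30ms on a box where sysctl_sched_latency
is 24ms and sysctl_sched_min_granularity is 3ms. Today, place_entity() with
GENTLE_FAIR_SLEEPERS caps a sleeper's vdiff at 12ms, which is below the 30ms
granularity, so the wakeup never preempts. With this patch, gran is clamped
to get_latency_max() = 12ms - 3ms = 9ms, so the sleeper does preempt -- less
protection for current than the 30ms that was asked for.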

> Make sure that a task whose vruntime has been capped will preempt the
> current task and use its CPU bandwidth even if wakeup_gran() is in the
> same range as sched_latency.

nit: I would prefer we say "is greater than" instead of "is in the same
range", because the latter was a bit confusing to me.

> If the waking task fails to preempt current, it could have to wait up
> to sysctl_sched_min_granularity before preempting it during the next
> tick.
> 
> Strictly speaking, we should use cfs->min_vruntime instead of
> curr->vruntime, but it isn't worth the additional overhead and
> complexity, as the vruntime of current should be close to min_vruntime,
> if not equal.

Could we add here,
Reported-by: Youssef Esmat <youssefesmat@...omium.org>

> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>

Just a few more comments below:

> ---
>  kernel/sched/fair.c  | 46 ++++++++++++++++++++------------------------
>  kernel/sched/sched.h | 30 ++++++++++++++++++++++++++++-
>  2 files changed, 50 insertions(+), 26 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 5ffec4370602..eb04c83112a0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4345,33 +4345,17 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
>  {
>  	u64 vruntime = cfs_rq->min_vruntime;
>  
> -	/*
> -	 * The 'current' period is already promised to the current tasks,
> -	 * however the extra weight of the new task will slow them down a
> -	 * little, place the new task so that it fits in the slot that
> -	 * stays open at the end.
> -	 */
> -	if (initial && sched_feat(START_DEBIT))
> -		vruntime += sched_vslice(cfs_rq, se);
> -
> -	/* sleeps up to a single latency don't count. */
> -	if (!initial) {
> -		unsigned long thresh;
> -
> -		if (se_is_idle(se))
> -			thresh = sysctl_sched_min_granularity;
> -		else
> -			thresh = sysctl_sched_latency;
> -
> +	if (!initial)
> +		/* sleeps up to a single latency don't count. */
> +		vruntime -= get_sched_latency(se_is_idle(se));
> +	else if (sched_feat(START_DEBIT))
>  		/*
> -		 * Halve their sleep time's effect, to allow
> -		 * for a gentler effect of sleepers:
> +		 * The 'current' period is already promised to the current tasks,
> +		 * however the extra weight of the new task will slow them down a
> +		 * little, place the new task so that it fits in the slot that
> +		 * stays open at the end.
>  		 */
> -		if (sched_feat(GENTLE_FAIR_SLEEPERS))
> -			thresh >>= 1;
> -
> -		vruntime -= thresh;
> -	}
> +		vruntime += sched_vslice(cfs_rq, se);
>  
>  	/* ensure we never gain time by being placed backwards. */
>  	se->vruntime = max_vruntime(se->vruntime, vruntime);
> @@ -7187,6 +7171,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
>  		return -1;
>  
>  	gran = wakeup_gran(se);
> +
> +	/*
> +	 * At wake up, the vruntime of a task is capped to not be older than
> +	 * a sched_latency period compared to min_vruntime. This prevents long
> +	 * sleeping task to get unlimited credit at wakeup. Such waking up task
> +	 * has to preempt current in order to not lose its share of CPU
> +	 * bandwidth but wakeup_gran() can become higher than scheduling period
> +	 * for low priority task. Make sure that long sleeping task will get a
> +	 * chance to preempt current.
> +	 */
> +	gran = min_t(s64, gran, get_latency_max());
> +

Can we move this to wakeup_gran(se)? IMO, it belongs there because you are
adjusting the wakeup_gran().
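
Something like this, perhaps (just a sketch against the current
wakeup_gran(), untested):

static unsigned long wakeup_gran(struct sched_entity *se)
{
	unsigned long gran = sysctl_sched_wakeup_granularity;

	/* Scale the granularity by se's weight, as before. */
	gran = calc_delta_fair(gran, se);

	/*
	 * A sleeper's vruntime is capped by place_entity(), so also cap
	 * the granularity here so that a long-sleeping task still gets a
	 * chance to preempt current.
	 */
	return min_t(unsigned long, gran, get_latency_max());
}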

>  	if (vdiff > gran)
>  		return 1;
>  
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 1fc198be1ffd..14879d429919 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2432,9 +2432,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
>  extern const_debug unsigned int sysctl_sched_nr_migrate;
>  extern const_debug unsigned int sysctl_sched_migration_cost;
>  
> -#ifdef CONFIG_SCHED_DEBUG
>  extern unsigned int sysctl_sched_latency;
>  extern unsigned int sysctl_sched_min_granularity;
> +#ifdef CONFIG_SCHED_DEBUG
>  extern unsigned int sysctl_sched_idle_min_granularity;
>  extern unsigned int sysctl_sched_wakeup_granularity;
>  extern int sysctl_resched_latency_warn_ms;
> @@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
>  extern unsigned int sysctl_numa_balancing_scan_size;
>  #endif
>  
> +static inline unsigned long  get_sched_latency(bool idle)
> +{

IMO, since there are other users of sysctl_sched_latency, it would be better
to call this get_max_sleep_credit() or something.

> +	unsigned long thresh;
> +
> +	if (idle)
> +		thresh = sysctl_sched_min_granularity;
> +	else
> +		thresh = sysctl_sched_latency;
> +
> +	/*
> +	 * Halve their sleep time's effect, to allow
> +	 * for a gentler effect of sleepers:
> +	 */
> +	if (sched_feat(GENTLE_FAIR_SLEEPERS))
> +		thresh >>= 1;
> +
> +	return thresh;
> +}
> +
> +static inline unsigned long  get_latency_max(void)
> +{
> +	unsigned long thresh = get_sched_latency(false);
> +
> +	thresh -= sysctl_sched_min_granularity;

Could you clarify why you are subtracting sysctl_sched_min_granularity
here? Could you add a comment to make the intent clear?
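
(If I am reading it right, with for example sysctl_sched_latency = 24ms,
sysctl_sched_min_granularity = 3ms and GENTLE_FAIR_SLEEPERS enabled, this
works out to 24/2 - 3 = 9ms, i.e. one min_granularity less than the maximum
sleep credit -- but the intent is not obvious from the code alone.)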

thanks,

 - Joel


> +
> +	return thresh;
> +}
> +
>  #ifdef CONFIG_SCHED_HRTICK
>  
>  /*
> -- 
> 2.17.1
> 
