Message-ID: <20201122103529.GC110669@balbir-desktop>
Date:   Sun, 22 Nov 2020 21:35:29 +1100
From:   Balbir Singh <bsingharora@...il.com>
To:     "Joel Fernandes (Google)" <joel@...lfernandes.org>
Cc:     Nishanth Aravamudan <naravamudan@...italocean.com>,
        Julien Desfossez <jdesfossez@...italocean.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Vineeth Pillai <viremana@...ux.microsoft.com>,
        Aaron Lu <aaron.lwe@...il.com>,
        Aubrey Li <aubrey.intel@...il.com>, tglx@...utronix.de,
        linux-kernel@...r.kernel.org, mingo@...nel.org,
        torvalds@...ux-foundation.org, fweisbec@...il.com,
        keescook@...omium.org, kerrnel@...gle.com,
        Phil Auld <pauld@...hat.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Paolo Bonzini <pbonzini@...hat.com>, vineeth@...byteword.org,
        Chen Yu <yu.c.chen@...el.com>,
        Christian Brauner <christian.brauner@...ntu.com>,
        Agata Gruza <agata.gruza@...el.com>,
        Antonio Gomez Iglesias <antonio.gomez.iglesias@...el.com>,
        graf@...zon.com, konrad.wilk@...cle.com, dfaggioli@...e.com,
        pjt@...gle.com, rostedt@...dmis.org, derkling@...gle.com,
        benbjiang@...cent.com,
        Alexandre Chartre <alexandre.chartre@...cle.com>,
        James.Bottomley@...senpartnership.com, OWeisse@...ch.edu,
        Dhaval Giani <dhaval.giani@...cle.com>,
        Junaid Shahid <junaids@...gle.com>, jsbarnes@...gle.com,
        chris.hyser@...cle.com, Ben Segall <bsegall@...gle.com>,
        Josh Don <joshdon@...gle.com>, Hao Luo <haoluo@...gle.com>,
        Tom Lendacky <thomas.lendacky@....com>,
        Aubrey Li <aubrey.li@...ux.intel.com>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Tim Chen <tim.c.chen@...el.com>
Subject: Re: [PATCH -tip 08/32] sched/fair: Fix forced idle sibling
 starvation corner case

On Tue, Nov 17, 2020 at 06:19:38PM -0500, Joel Fernandes (Google) wrote:
> From: Vineeth Pillai <viremana@...ux.microsoft.com>
> 
> If there is only one long running local task and the sibling is
> forced idle, it  might not get a chance to run until a schedule
> event happens on any cpu in the core.
> 
> So we check for this condition during a tick to see if a sibling
> is starved and then give it a chance to schedule.
> 
> Tested-by: Julien Desfossez <jdesfossez@...italocean.com>
> Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> Signed-off-by: Vineeth Pillai <viremana@...ux.microsoft.com>
> Signed-off-by: Julien Desfossez <jdesfossez@...italocean.com>
> Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> ---
>  kernel/sched/core.c  | 15 ++++++++-------
>  kernel/sched/fair.c  | 40 ++++++++++++++++++++++++++++++++++++++++
>  kernel/sched/sched.h |  2 +-
>  3 files changed, 49 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 1bd0b0bbb040..52d0e83072a4 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5206,16 +5206,15 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
>  
>  	/* reset state */
>  	rq->core->core_cookie = 0UL;
> +	if (rq->core->core_forceidle) {
> +		need_sync = true;
> +		rq->core->core_forceidle = false;
> +	}
>  	for_each_cpu(i, smt_mask) {
>  		struct rq *rq_i = cpu_rq(i);
>  
>  		rq_i->core_pick = NULL;
>  
> -		if (rq_i->core_forceidle) {
> -			need_sync = true;
> -			rq_i->core_forceidle = false;
> -		}
> -
>  		if (i != cpu)
>  			update_rq_clock(rq_i);
>  	}
> @@ -5335,8 +5334,10 @@ next_class:;
>  		if (!rq_i->core_pick)
>  			continue;
>  
> -		if (is_task_rq_idle(rq_i->core_pick) && rq_i->nr_running)
> -			rq_i->core_forceidle = true;
> +		if (is_task_rq_idle(rq_i->core_pick) && rq_i->nr_running &&
> +		    !rq_i->core->core_forceidle) {
> +			rq_i->core->core_forceidle = true;
> +		}
>  
>  		if (i == cpu) {
>  			rq_i->core_pick = NULL;
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index f53681cd263e..42965c4fd71f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10692,6 +10692,44 @@ static void rq_offline_fair(struct rq *rq)
>  
>  #endif /* CONFIG_SMP */
>  
> +#ifdef CONFIG_SCHED_CORE
> +static inline bool
> +__entity_slice_used(struct sched_entity *se, int min_nr_tasks)
> +{
> +	u64 slice = sched_slice(cfs_rq_of(se), se);

I wonder if the definition of sched_slice() should be revisited for core
scheduling?

Should we use sched_slice = sched_slice / cpumask_weight(smt_mask)?
Would that resolve the issue you're seeing? Effectively we need to answer
whether two sched core siblings should be treated as executing one large
slice.

Balbir Singh.

