Message-ID: <20181030091938.GE27587@codeaurora.org>
Date:   Tue, 30 Oct 2018 14:49:38 +0530
From:   Pavan Kondeti <pkondeti@...eaurora.org>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     peterz@...radead.org, mingo@...nel.org,
        linux-kernel@...r.kernel.org, rjw@...ysocki.net,
        dietmar.eggemann@....com, Morten.Rasmussen@....com,
        patrick.bellasi@....com, pjt@...gle.com, bsegall@...gle.com,
        thara.gopinath@...aro.org
Subject: Re: [PATCH v5 2/2] sched/fair: update scale invariance of PELT

Hi Vincent,

On Fri, Oct 26, 2018 at 06:11:43PM +0200, Vincent Guittot wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6806c27..7a69673 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -674,9 +674,8 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  	return calc_delta_fair(sched_slice(cfs_rq, se), se);
>  }
>  
> -#ifdef CONFIG_SMP
>  #include "pelt.h"
> -#include "sched-pelt.h"
> +#ifdef CONFIG_SMP
>  
>  static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
>  static unsigned long task_h_load(struct task_struct *p);
> @@ -764,7 +763,7 @@ void post_init_entity_util_avg(struct sched_entity *se)
>  			 * such that the next switched_to_fair() has the
>  			 * expected state.
>  			 */
> -			se->avg.last_update_time = cfs_rq_clock_task(cfs_rq);
> +			se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
>  			return;
>  		}
>  	}
> @@ -3466,7 +3465,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>  /* Update task and its cfs_rq load average */
>  static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  {
> -	u64 now = cfs_rq_clock_task(cfs_rq);
> +	u64 now = cfs_rq_clock_pelt(cfs_rq);
>  	struct rq *rq = rq_of(cfs_rq);
>  	int cpu = cpu_of(rq);
>  	int decayed;
> @@ -6694,6 +6693,12 @@ done: __maybe_unused;
>  	if (new_tasks > 0)
>  		goto again;
>  
> +	/*
> +	 * rq is about to be idle, check if we need to update the
> +	 * lost_idle_time of clock_pelt
> +	 */
> +	update_idle_rq_clock_pelt(rq);
> +
>  	return NULL;
>  }

Do you think it would be better to call this from pick_next_task_idle() instead? I don't
see any functional difference, but it might be easier to follow.
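
For reference, a rough sketch of what that could look like (modelled on the
4.19-era pick_next_task_idle() in kernel/sched/idle.c; purely illustrative, not
part of your patch):

static struct task_struct *
pick_next_task_idle(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
{
	put_prev_task(rq, prev);
	update_idle_core(rq);
	schedstat_inc(rq->sched_goidle);

	/*
	 * The rq is going idle: fold any lost_idle_time into clock_pelt
	 * here rather than at the end of pick_next_task_fair().
	 */
	update_idle_rq_clock_pelt(rq);

	return rq->idle;
}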

Thanks,
Pavan
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
