Message-ID: <20160523202602.GB18670@intel.com>
Date:	Tue, 24 May 2016 04:26:02 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	peterz@...radead.org, mingo@...nel.org,
	linux-kernel@...r.kernel.org
Cc:	bsegall@...gle.com, pjt@...gle.com, morten.rasmussen@....com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	juri.lelli@....com
Subject: Re: [RFC PATCH 0/9] Clean up and optimize sched averages

Hi Peter,

Could you give this a look?

Thanks,
Yuyang

On Mon, May 16, 2016 at 02:59:25AM +0800, Yuyang Du wrote:
> Hi Peter,
> 
> Continuing with the remaining patches in this series. I realized some of
> them need thorough discussion (finally), so this post is marked RFC.
> 
>  - For LOAD_AVG_MAX_N, I am OK with sticking to the old value, but it is
>    worthwhile to correct it to the true value (see the sketch after this list).
> 
>  - About the renames, I noticed there is an existing sched_avg_update(); in
>    any case, please NAK any renames you don't want, hopefully not all of them ;)
> 
>  - Removing scale_load_down() for load_avg may have some unknown ramifications;
>    is it worth trying?
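
As a rough illustration of where LOAD_AVG_MAX_N comes from, here is a minimal
userspace sketch (my own, not part of the series): it accumulates the PELT
geometric series, assuming the usual half-life of 32 periods (y^32 = 0.5) and a
1024-unit contribution per period, and counts how many full periods it takes
for the integer sum to stop growing. The 32.32 fixed-point multiply is an
illustrative choice; the exact converged count depends on the rounding the
kernel's tables use, which is what the 345 vs. 347 question is about.

#include <math.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32);		/* per-period decay factor */
	const uint64_t y_inv = (uint64_t)(((1ULL << 32) - 1) * y);
	uint64_t max = 1024, last = 0;
	int n = 0;

	/*
	 * max_{n+1} = max_n * y + 1024, done with a 32.32 fixed-point
	 * multiply so truncation behaves roughly like fixed-point tables.
	 * Stop once an extra period no longer changes the integer sum.
	 */
	while (max != last) {
		last = max;
		max = ((max * y_inv) >> 32) + 1024;
		n++;
	}

	printf("series saturates after %d accumulation steps, max = %llu\n",
	       n, (unsigned long long)max);
	return 0;
}

Compile with "cc sketch.c -lm" and compare the printed step count and maximum
against the LOAD_AVG_MAX_N and LOAD_AVG_MAX constants in kernel/sched/fair.c.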
> 
> The previous post is at: http://thread.gmane.org/gmane.linux.kernel/2214387/focus=2218488
> 
> Thanks,
> Yuyang
> 
> --
> 
> Yuyang Du (9):
>   sched/fair: Change LOAD_AVG_MAX_N from 345 to 347
>   documentation: Add scheduler/sched-avg.txt
>   sched/fair: Add static to remove_entity_load_avg()
>   sched/fair: Rename variable names for sched averages
>   sched/fair: Change the variable to hold the number of periods to
>     32-bit
>   sched/fair: Add __always_inline compiler attribute to
>     __accumulate_sum()
>   sched/fair: Optimize __update_sched_avg()
>   sched/fair: Remove scale_load_down() for load_avg
>   sched/fair: Rename scale_load() and scale_load_down()
> 
>  Documentation/scheduler/sched-avg.txt |   94 ++++++++
>  include/linux/sched.h                 |   21 +-
>  kernel/sched/core.c                   |    8 +-
>  kernel/sched/fair.c                   |  382 +++++++++++++++++----------------
>  kernel/sched/sched.h                  |   18 +-
>  5 files changed, 317 insertions(+), 206 deletions(-)
>  create mode 100644 Documentation/scheduler/sched-avg.txt
> 
> -- 
> 1.7.9.5
