Message-ID: <56c3a7c0-0d41-809d-6929-086d7a9251b9@arm.com>
Date:   Thu, 13 Feb 2020 10:49:09 +0000
From:   Douglas Raillard <douglas.raillard@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, rjw@...ysocki.net,
        viresh.kumar@...aro.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        qperret@...gle.com, linux-pm@...r.kernel.org
Subject: Re: [RFC PATCH v4 4/6] sched/cpufreq: Introduce sugov_cpu_ramp_boost



On 2/10/20 1:08 PM, Peter Zijlstra wrote:
> On Wed, Jan 22, 2020 at 05:35:36PM +0000, Douglas RAILLARD wrote:
> 
>> +static unsigned long sugov_cpu_ramp_boost_update(struct sugov_cpu *sg_cpu)
>> +{
>> +	struct rq *rq = cpu_rq(sg_cpu->cpu);
>> +	unsigned long util_est_enqueued;
>> +	unsigned long util_avg;
>> +	unsigned long boost = 0;
>> +
> 
> Should we NO-OP this function when !sched_feat(UTIL_EST) ?
> 
>> +	util_est_enqueued = READ_ONCE(rq->cfs.avg.util_est.enqueued);
> 
> Otherwise you're reading garbage here, no?

Most likely, indeed. The boosting should be disabled when UTIL_EST is off.
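
Something along these lines should do it (rough sketch only, assuming sched_feat() is usable here like elsewhere in kernel/sched/):

	if (!sched_feat(UTIL_EST)) {
		/*
		 * util_est.enqueued is not maintained without UTIL_EST,
		 * so don't boost based on it and clear any stale value.
		 */
		WRITE_ONCE(sg_cpu->ramp_boost, 0);
		return 0;
	}

placed at the top of sugov_cpu_ramp_boost_update(), before the READ_ONCE() of util_est.enqueued.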

> 
>> +	util_avg = READ_ONCE(rq->cfs.avg.util_avg);
>> +
>> +	/*
>> +	 * Boost when util_avg becomes higher than the previous stable
>> +	 * knowledge of the enqueued tasks' set util, which is CPU's
>> +	 * util_est_enqueued.
>> +	 *
>> +	 * We try to spot changes in the workload itself, so we want to
>> +	 * avoid the noise of tasks being enqueued/dequeued. To do that,
>> +	 * we only trigger boosting when the "amount of work" enqueued
>> +	 * is stable.
>> +	 */
>> +	if (util_est_enqueued == sg_cpu->util_est_enqueued &&
>> +	    util_avg >= sg_cpu->util_avg &&
>> +	    util_avg > util_est_enqueued)
>> +		boost = util_avg - util_est_enqueued;
>> +
>> +	sg_cpu->util_est_enqueued = util_est_enqueued;
>> +	sg_cpu->util_avg = util_avg;
>> +	WRITE_ONCE(sg_cpu->ramp_boost, boost);
>> +	return boost;
>> +}
