Message-ID: <40c984f3-289d-4f8a-b06a-57052dad565e@arm.com>
Date: Tue, 25 Jun 2024 11:30:21 +0100
From: Hongyan Xia <hongyan.xia2@....com>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Dietmar Eggemann <dietmar.eggemann@....com>,
 Juri Lelli <juri.lelli@...hat.com>, Steven Rostedt <rostedt@...dmis.org>,
 Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
 Daniel Bristot de Oliveira <bristot@...hat.com>,
 Valentin Schneider <vschneid@...hat.com>, Qais Yousef <qyousef@...alina.io>,
 Morten Rasmussen <morten.rasmussen@....com>,
 Lukasz Luba <lukasz.luba@....com>,
 Christian Loehle <christian.loehle@....com>,
 Pierre Gondois <pierre.gondois@....com>,
 Youssef Esmat <youssefesmat@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 7/7] Propagate negative bias

Hi,

Thanks for taking a look!

On 25/06/2024 05:48, K Prateek Nayak wrote:
> Hello Hongyan,
> 
> On 6/24/2024 3:53 PM, Hongyan Xia wrote:
>> Negative bias is interesting, because dequeuing such a task will
>> actually increase utilization.
>>
>> Solve by applying PELT decay to negative biases as well. This in fact
>> can be implemented easily with some math tricks.
>>
>> Signed-off-by: Hongyan Xia <hongyan.xia2@....com>
>> ---
>>   kernel/sched/fair.c  | 40 ++++++++++++++++++++++++++++++++++++++++
>>   kernel/sched/sched.h |  4 ++++
>>   2 files changed, 44 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 3bb077df52ae..d09af6abf464 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4878,6 +4878,45 @@ static inline unsigned long root_cfs_util_uclamp(struct rq *rq)
>>       return max(ret, 0L);
>>   }
>> +
>> +/*
>> + * Negative biases are tricky. If we remove them right away then dequeuing a
>> + * uclamp_max task has the interesting effect that dequeuing results in a higher
>> + * rq utilization. Solve this by applying PELT decay to the bias itself.
>> + *
>> + * Keeping track of a PELT-decayed negative bias is extra overhead. However, we
>> + * observe this interesting math property, where y is the decay factor and p is
>> + * the number of periods elapsed:
>> + *
>> + *    util_new = util_old * y^p - neg_bias * y^p
>> + *             = (util_old - neg_bias) * y^p
>> + *
>> + * Therefore, we simply subtract the negative bias from util_avg the moment we
>> + * dequeue, then the PELT signal itself is the total of util_avg and the decayed
>> + * negative bias, and we no longer need to track the decayed bias separately.
>> + */
>> +static void propagate_negative_bias(struct task_struct *p)
>> +{
>> +    if (task_util_bias(p) < 0 && !task_on_rq_migrating(p)) {
>> +        unsigned long neg_bias = -task_util_bias(p);
>> +        struct sched_entity *se = &p->se;
>> +        struct cfs_rq *cfs_rq;
>> +
>> +        p->se.avg.util_avg_bias = 0;
>> +
>> +        for_each_sched_entity(se) {
>> +            u32 divider, neg_sum;
>> +
>> +            cfs_rq = cfs_rq_of(se);
>> +            divider = get_pelt_divider(&cfs_rq->avg);
>> +            neg_sum = neg_bias * divider;
>> +            sub_positive(&se->avg.util_avg, neg_bias);
>> +            sub_positive(&se->avg.util_sum, neg_sum);
> 
> In most cases where I've seen "get_pelt_divider()" followed by
> "add_positive()" or "sub_positive()" on "util_avg" and "util_sum",
> there is a correction step that does:
> 
>      util_sum = max_t(u32, util_sum, util_avg * PELT_MIN_DIVIDER)
> 
> There is a comment on its significance in "update_cfs_rq_load_avg()".
> Would it also apply in this case?
> 

That's a good point. The problem addressed in update_cfs_rq_load_avg() 
could also occur here. I can add the guard logic in the next revision.
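
Roughly, I'd mirror the correction step from update_cfs_rq_load_avg(). 
Something like the sketch below (untested, reusing the names from the 
patch above):

	for_each_sched_entity(se) {
		u32 divider, neg_sum;

		cfs_rq = cfs_rq_of(se);
		divider = get_pelt_divider(&cfs_rq->avg);
		neg_sum = neg_bias * divider;

		sub_positive(&se->avg.util_avg, neg_bias);
		sub_positive(&se->avg.util_sum, neg_sum);
		/* Keep util_sum consistent with util_avg, as in
		 * update_cfs_rq_load_avg(). */
		se->avg.util_sum = max_t(u32, se->avg.util_sum,
					 se->avg.util_avg * PELT_MIN_DIVIDER);

		sub_positive(&cfs_rq->avg.util_avg, neg_bias);
		sub_positive(&cfs_rq->avg.util_sum, neg_sum);
		cfs_rq->avg.util_sum = max_t(u32, cfs_rq->avg.util_sum,
					     cfs_rq->avg.util_avg * PELT_MIN_DIVIDER);
	}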

But if we change the code in the way suggested below, this problem is 
solved anyway.

>> +            sub_positive(&cfs_rq->avg.util_avg, neg_bias);
>> +            sub_positive(&cfs_rq->avg.util_sum, neg_sum);
>> +        }
>> +    }
>> +}
>>   #else
>>   static inline long task_util_bias(struct task_struct *p)
>>   {
>> @@ -6869,6 +6908,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>>       /* At this point se is NULL and we are at root level*/
>>       sub_nr_running(rq, 1);
>>       util_bias_dequeue(rq, p);
>> +    propagate_negative_bias(p);
> 
> Perhaps I'm pointing to a premature optimization, but since the hierarchy
> is traversed above in "dequeue_task_fair()", could the "neg_bias" and
> "neg_sum" removal be done along the way above instead of
> "propagate_negative_bias()" traversing the hierarchy again? I don't see
> a dependency on "util_bias_dequeue()" (which modifies
> "rq->cfs.avg.util_avg_bias") for "propagate_negative_bias()" (which
> works purely with task_util_bias() or "p->se.avg.util_avg_bias") but if
> I'm missing something please do let me know.
> 
> Since you mentioned this patch isn't strictly necessary in the cover
> letter, I would wait for other folks to chime in before changing this :)

I've been thinking about similar things for both enqueue() and 
dequeue(). Currently this series keeps util_avg_bias completely separate 
from util_avg to ease review, acting more like util_est, but as you 
said, we end up doing things twice in a couple of places.

enqueue_task_fair():
	for_each_sched_entity()
		enqueue_entity()
			if root_cfs()
				cpufreq_update_util()
	util_bias_enqueue(p)
	cpufreq_update_util()  // duplicate cpufreq update

dequeue_task_fair():
	for_each_sched_entity()
		dequeue_entity()
			if root_cfs()
				cpufreq_update_util()
	util_bias_dequeue(p)
	propagate_negative_bias() // duplicate tree traversal
	cpufreq_update_util()  // duplicate cpufreq update

But we can integrate the bias more closely into the hierarchy walk, like this:

enqueue_task_fair():
	for_each_sched_entity()
		enqueue_entity()
			if (entity_is_task())
				util_bias_enqueue(p)
			if root_cfs()
				// No duplicate cpufreq updates
				cpufreq_update_util()

dequeue_task_fair():
	for_each_sched_entity()
		dequeue_entity()
			if (entity_is_task())
				util_bias_dequeue(p)
				// No need to traverse twice.
				propagate_negative_bias(p)
			if root_cfs()
				// No duplicate cpufreq updates
				cpufreq_update_util()

This new structure will address both of your concerns.
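
To make the dequeue side concrete, here is a rough sketch of what 
dequeue_entity() would gain (untested; it assumes util_bias_dequeue() 
keeps its current rq/p arguments and that propagate_negative_bias() can 
simply be called from here):

	static void
	dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
	{
		/* ... existing dequeue work, including the root-level
		 * cpufreq update that already happens here ... */

		if (entity_is_task(se)) {
			struct task_struct *p = task_of(se);

			/* Handle the bias at task level while we are already
			 * walking the hierarchy, instead of re-walking it in
			 * dequeue_task_fair(). */
			util_bias_dequeue(rq_of(cfs_rq), p);
			propagate_negative_bias(p);
		}
	}

The enqueue side would mirror this with util_bias_enqueue().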

>>       /* balance early to pull high priority tasks */
>>       if (unlikely(!was_sched_idle && sched_idle_rq(rq)))
>> [..snip..]
> 
