Message-ID: <c787d2c1-aa28-2f44-48ef-e706e6c8cb2b@bytedance.com>
Date:   Fri, 10 Jun 2022 18:55:29 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     Dietmar Eggemann <dietmar.eggemann@....com>, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, rostedt@...dmis.org,
        bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
        vschneid@...hat.com
Cc:     linux-kernel@...r.kernel.org, duanxiongchun@...edance.com,
        songmuchun@...edance.com
Subject: Re: [External] Re: [PATCH v2] sched/fair: combine detach into dequeue
 when migrating task

Hi,

On 2022/6/10 18:49, Dietmar Eggemann wrote:
> On 09/06/2022 05:53, Chengming Zhou wrote:
>> When we are migrating a task out of the CPU, we can combine the detach
>> into dequeue_entity() and save the separate detach_entity_cfs_rq()
>> call in migrate_task_rq_fair().
>>
>> This optimization mirrors the DO_ATTACH handling in enqueue_entity()
>> when migrating a task to the CPU.
>>
>> So we don't have to traverse the CFS tree twice to do the load
>> detach and propagation.
> 
> By `propagation` you refer to the detach_entity_cfs_rq() ->
> propagate_entity_cfs_rq() call?
> This one wouldn't be called anymore with your change.

Yes.
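
For reference, the path that the migrating case no longer takes is
roughly the following (a sketch of detach_entity_cfs_rq() as it looks
around this series, so the exact body may differ):

static void detach_entity_cfs_rq(struct sched_entity *se)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

	/* Catch up with the cfs_rq and remove our load when we leave */
	update_load_avg(cfs_rq, se, 0);
	detach_entity_load_avg(cfs_rq, se);
	update_tg_load_avg(cfs_rq);
	propagate_entity_cfs_rq(se);
}

The propagate_entity_cfs_rq() walk up the se hierarchy is the second
traversal the patch avoids.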

> 
> [...]
> 
>> @@ -4426,6 +4435,14 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
>>  static void
>>  dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>  {
>> +	int action = UPDATE_TG;
>> +
>> +	/*
>> +	 * If we are migrating task from the CPU, detach load_avg when dequeue.
>> +	 */
>> +	if (entity_is_task(se) && task_of(se)->on_rq == TASK_ON_RQ_MIGRATING)
> 
> - if (entity_is_task(se) && task_of(se)->on_rq == TASK_ON_RQ_MIGRATING)
> + if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))

Better, will do.
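
So with your suggestion applied, the hunk would read something like
this (untested sketch):

	int action = UPDATE_TG;

	/*
	 * If we are migrating the task out of the CPU, detach its
	 * load_avg at dequeue time.
	 */
	if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
		action |= DO_DETACH;

with `action` then passed to the update_load_avg() call in
dequeue_entity() instead of the bare UPDATE_TG.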

> 
>> +		action |= DO_DETACH;
> 
> With the `entity_is_task(se)` check you make sure we only call
> detach_entity_load_avg() and update_tg_load_avg() for the se
> representing the task itself (and not the task groups it might run in).
> So IMHO this looks good.
> 
> You save the propagate_entity_cfs_rq(&p->se) call from (2) by doing the
> detach_entity_load_avg() and update_tg_load_avg() for a migrating task
> inside (1) (the task being the first se in the loop).
> 
> detach_task()
>   deactivate_task()
>     dequeue_task_fair()
>       for_each_sched_entity(se)
>         dequeue_entity()
>           update_load_avg() /* (1) */
> 
>   set_task_cpu()
>     migrate_task_rq_fair()
>       /* called detach_entity_cfs_rq() before the patch (2) */

Thanks for the description, that is clearer; I should put it in the commit message.
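
To spell out the new path: the detach now happens from update_load_avg()
on the dequeue side, mirroring the existing DO_ATTACH branch, roughly
like the sketch below (not the exact hunk):

	if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
		/* enqueue side: attach load_avg when migrating in */
		attach_entity_load_avg(cfs_rq, se);
		update_tg_load_avg(cfs_rq);
	} else if (flags & DO_DETACH) {
		/* dequeue side: detach load_avg when migrating out */
		detach_entity_load_avg(cfs_rq, se);
		update_tg_load_avg(cfs_rq);
	} else if (decayed) {
		/* unchanged */
	}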

> 
> [...]
> 
>> @@ -6940,15 +6957,10 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
>>  		se->vruntime -= min_vruntime;
>>  	}
>>  
>> -	if (p->on_rq == TASK_ON_RQ_MIGRATING) {
>> -		/*
>> -		 * In case of TASK_ON_RQ_MIGRATING we in fact hold the 'old'
>> -		 * rq->lock and can modify state directly.
>> -		 */
>> -		lockdep_assert_rq_held(task_rq(p));
>> -		detach_entity_cfs_rq(&p->se);
>> -
>> -	} else {
>> +	/*
>> +	 * In case of TASK_ON_RQ_MIGRATING we already detach in dequeue_entity.
>> +	 */
>> +	if (p->on_rq != TASK_ON_RQ_MIGRATING) {
> 
> - if (p->on_rq != TASK_ON_RQ_MIGRATING) {
> + if (!task_on_rq_migrating(p)) {

Will do. Thanks!
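
That hunk then becomes roughly (untested):

	/*
	 * In case of TASK_ON_RQ_MIGRATING we already detached the
	 * load_avg in dequeue_entity(), so only the non-migrating
	 * case needs to remove the load here.
	 */
	if (!task_on_rq_migrating(p))
		remove_entity_load_avg(&p->se);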

> 
> [...]
