Date:   Wed, 2 Nov 2022 14:40:36 +0800
From:   Tianchen Ding <dtcccc@...ux.alibaba.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Chen Yu <yu.c.chen@...el.com>
Cc:     Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: Clear ttwu_pending after enqueue_task

On 2022/11/1 22:59, Peter Zijlstra wrote:
> On Tue, Nov 01, 2022 at 09:51:25PM +0800, Chen Yu wrote:
> 
>>> Could you try the below instead? Also note the comment; since you did
>>> the work to figure out why -- best record that for posterity.
>>>
>>> @@ -3737,6 +3730,13 @@ void sched_ttwu_pending(void *arg)
>>>   			set_task_cpu(p, cpu_of(rq));
>>>   
>>>   		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0, &rf);
>>> +		/*
>>> +		 * Must be after enqueueing at least one task such that
>>> +		 * idle_cpu() does not observe a false-negative -- if it does,
>>> +		 * it is possible for select_idle_siblings() to stack a number
>>> +		 * of tasks on this CPU during that window.
>>> +		 */
>>> +		WRITE_ONCE(rq->ttwu_pending, 0);
>> Just curious, why do we put the above code inside the llist_for_each_entry_safe loop?
> 
>> My understanding is that once 1 task is queued, select_idle_cpu() would no longer
>> treat this rq as idle because nr_running is not 0. But wouldn't writing
>> rq->ttwu_pending multiple times bring extra overhead, or do I miss something?
> 
> So the consideration is that by clearing it late, you might also clear a
> next set; consider something like:
> 
> 
> 	cpu0			cpu1			cpu2
> 
> 	ttwu_queue()
> 	  ->ttwu_pending = 1;
> 	  llist_add()
> 
> 				sched_ttwu_pending()
> 				  llist_del_all()
> 				  ... long ...
> 							ttwu_queue()
> 							  ->ttwu_pending = 1
> 							  llist_add()
> 
> 				  ... time ...
> 				  ->ttwu_pending = 0
> 
> Which leaves you with a non-empty list but with ttwu_pending == 0.
> 
> But I suppose that's not actually better with my variant, since it keeps
> writing 0s. We can make it more complicated again, but perhaps it
> doesn't matter and your version is good enough.
> 

Yeah. Since your version repeatedly writes 0 to ttwu_pending, it ends up having the 
same effect as mine. Although the performance results in my tests show no 
difference, it may still bring some extra overhead.
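
To make the comparison concrete, here is a rough sketch of the tail of 
sched_ttwu_pending() with both candidate locations marked (simplified, and only 
one of the two WRITE_ONCE() stores exists in either version):

	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
		/* ... on_cpu / task_cpu() sanity checks elided ... */
		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0, &rf);

		/*
		 * (a) your version: clear once per task. nr_running is already
		 *     > 0 here, so idle_cpu() cannot see a false idle, but the
		 *     store is repeated for every task in the batch.
		 */
		WRITE_ONCE(rq->ttwu_pending, 0);
	}

	/*
	 * (b) my version: clear once, after the whole batch. A concurrent
	 *     ttwu_queue() may have set ttwu_pending = 1 again for a new batch
	 *     and this store clears that too, but nr_running is > 0 by now, so
	 *     the combined ttwu_pending + nr_running check still sees "busy".
	 */
	WRITE_ONCE(rq->ttwu_pending, 0);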

IMO, according to the latest linux-next code, all callers that query 
rq->ttwu_pending only care about whether the cpu is idle, because they always 
combine it with a check of nr_running. Nobody actually cares about whether 
wake_entry.llist is empty. So, for the purpose of checking the cpu idle state, 
moving rq->ttwu_pending = 0 to after the task is enqueued fully covers the 
window: there is no point where both ttwu_pending and nr_running read as zero 
while a remote wakeup is still in flight.
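
For reference, the idle check those callers end up in looks roughly like this 
(paraphrasing idle_cpu() in kernel/sched/core.c; the comment is mine and the 
exact code in linux-next may differ slightly):

int idle_cpu(int cpu)
{
	struct rq *rq = cpu_rq(cpu);

	if (rq->curr != rq->idle)
		return 0;

	if (rq->nr_running)
		return 0;

#ifdef CONFIG_SMP
	/*
	 * ttwu_pending is only consulted after the nr_running check above,
	 * so once at least one task from the wakelist has been enqueued, a
	 * cleared ttwu_pending cannot make a busy cpu look idle.
	 */
	if (rq->ttwu_pending)
		return 0;
#endif

	return 1;
}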

As for the case you describe, although ttwu_pending is set to 0 while some tasks 
are really still pending, nr_running is guaranteed to be > 0 at that point, so 
callers that query both ttwu_pending and nr_running will still see this cpu as 
not idle.
(The callers querying these two values are lockless, so there may still be a race 
in a really small window? But that case is extremely rare; I don't think we need 
to make it more complicated.)

> But please update with a comment on why it needs to be after
> ttwu_do_activate().

OK. Should I send a v2, or will you add the comment directly?

Thanks.
