Date:   Wed, 1 Jun 2022 20:02:13 +0800
From:   Tianchen Ding <dtcccc@...ux.alibaba.com>
To:     Valentin Schneider <vschneid@...hat.com>
Cc:     Ingo Molnar <mingo@...hat.com>, Mel Gorman <mgorman@...e.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched: Queue task on wakelist in the same llc if the
 wakee cpu is idle

On 2022/6/1 18:58, Valentin Schneider wrote:
> On 01/06/22 13:54, Tianchen Ding wrote:
>> On 2022/5/31 23:56, Valentin Schneider wrote:
>>
>>> Thanks!
>>>
>>> So I'm thinking we could first make that into
>>>
>>> 	if ((wake_flags & WF_ON_CPU) && !cpu_rq(cpu)->nr_running)
>>>
>>> Then building on this, we can generalize using the wakelist to any remote
>>> idle CPU (which on paper isn't as clear a win as just WF_ON_CPU,
>>> depending on how deeply idle the CPU is...)
>>>
>>> We need the cpu != this_cpu check, as that's currently served by the
>>> WF_ON_CPU check (AFAIU we can only observe p->on_cpu in there for remote
>>> tasks).
>>>
>>> ---
>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>> index 66c4e5922fe1..60038743f2f1 100644
>>> --- a/kernel/sched/core.c
>>> +++ b/kernel/sched/core.c
>>> @@ -3830,13 +3830,20 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
>>>    	if (!cpus_share_cache(smp_processor_id(), cpu))
>>>    		return true;
>>>    
>>> +	if (cpu == smp_processor_id())
>>> +		return false;
>>> +
>>>    	/*
>>>    	 * If the task is descheduling and the only running task on the
>>>    	 * CPU then use the wakelist to offload the task activation to
>>>    	 * the soon-to-be-idle CPU as the current CPU is likely busy.
>>>    	 * nr_running is checked to avoid unnecessary task stacking.
>>> +	 *
>>> +	 * Note that we can only get here with (wakee) p->on_rq=0,
>>> +	 * p->on_cpu can be whatever, we've done the dequeue, so
>>> +	 * the wakee has been accounted out of ->nr_running
>>>    	 */
>>> -	if ((wake_flags & WF_ON_CPU) && cpu_rq(cpu)->nr_running <= 1)
>>> +	if (!cpu_rq(cpu)->nr_running)
>>>    		return true;
>>>    
>>>    	return false;
>>
>> Hi Valentin. I've done a simple unixbench test (Pipe-based Context
>> Switching) on my x86 machine using all 104 threads.
>>
>>               old       patch1    patch1+patch2
>> score     7825.4    7500-8000           9061.6
>>
>> patch1: use !cpu_rq(cpu)->nr_running instead of cpu_rq(cpu)->nr_running <= 1
>> patch2: ignore WF_ON_CPU check
>>
>> The score with patch1 is not stable: over many runs it floats between
>> about 7500 and 8000, mostly around 7500.
>>
>> patch1 imposes a stricter limit on using the wakelist, but it may
>> cause a performance regression.
>>
>> It seems that using the wakelist properly can help improve wakeup
>> performance, but using it too much may cause more IPIs. It's a
>> trade-off in how strict ttwu_queue_cond() is.
>>
>> Anyhow, I think patch2 should be a pure improvement. What do you think?
> 
> Thanks for separately testing these two.
> 
> I take it the results for patch1 are noticeably more swingy than the
> baseline? (FWIW boxplots are usually a nice way to summarize that sort
> of result).
> 

Hmm... I'm not familiar with this...
What I want to say is that I'm not sure about the performance impact of
patch1. From the point of view of the logic, though, patch1 should be correct.

> WF_ON_CPU && nr_running == 1 means the wakee is scheduling out *and* there
> is another task queued; I'm guessing that's relatively common in your
> unixbench scenario...
> 
> Either way, I think we want to keep the two changes separate for the sake
> of testing and bisecting.

Yes. I'll split the patch into two parts: one for the logic fix and
another for the performance improvement.
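
For reference, here is a rough sketch of what ttwu_queue_cond() could end
up looking like with both parts folded in. This is only an illustration
pieced together from Valentin's diff above and the shape of the mainline
function at the time (including its cpu_active() check at the top), not
the actual patches:

	static inline bool ttwu_queue_cond(int cpu, int wake_flags)
	{
		/*
		 * Do not complicate things with the async wake_list while the
		 * CPU is in hotplug state.
		 */
		if (!cpu_active(cpu))
			return false;

		/*
		 * If the CPU does not share cache, then queue the task on the
		 * remote rq's wakelist to avoid accessing remote data.
		 */
		if (!cpus_share_cache(smp_processor_id(), cpu))
			return true;

		/* The local CPU can do the activation itself. */
		if (cpu == smp_processor_id())
			return false;

		/*
		 * Same LLC, remote CPU: use the wakelist only if the target rq
		 * is completely idle. Part 1 tightens the old nr_running <= 1
		 * check to nr_running == 0 (the dequeued wakee is no longer
		 * accounted in ->nr_running); part 2 drops the WF_ON_CPU
		 * requirement, so any idle CPU in the LLC qualifies.
		 */
		if (!cpu_rq(cpu)->nr_running)
			return true;

		return false;
	}

The explicit cpu == smp_processor_id() bail-out replaces what the
WF_ON_CPU test used to guarantee for local wakeups, as Valentin noted
above.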
