Message-ID: <6838a26a-7cfe-4c89-a68e-f8eab57a23fe@nvidia.com>
Date: Tue, 18 Mar 2025 23:37:31 +0100
From: Joel Fernandes <joelagnelf@...dia.com>
To: Andrea Righi <arighi@...dia.com>
Cc: Tejun Heo <tj@...nel.org>, linux-kernel@...r.kernel.org,
 David Vernet <void@...ifault.com>, Changwoo Min <changwoo@...lia.com>,
 Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
 Juri Lelli <juri.lelli@...hat.com>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Dietmar Eggemann <dietmar.eggemann@....com>,
 Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
 Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH RFC] sched_ext: Choose prev_cpu if idle and cache affine
 without WF_SYNC



On 3/18/2025 6:46 PM, Andrea Righi wrote:
> Hi Joel,
> 
> On Tue, Mar 18, 2025 at 01:00:47PM -0400, Joel Fernandes wrote:
> ...
>> From: Joel Fernandes <joelagnelf@...dia.com>
>> Subject: [PATCH] sched/ext: Make default idle CPU selection better
>>
>> Currently, sched_ext's default CPU selection is roughly something like
>> this:
>>
>> 1. Look for FULLY IDLE CORES:
>>     1.1. Select prev CPU (wakee) if its CORE is fully idle.
>>     1.2. Or, pick any CPU from fully idle CORE in the L3, then NUMA.
>>     1.3. Or, any idle CPU from fully idle CORE usable by task.
>> 2. Or, use PREV CPU if it is idle.
>> 3. Or any idle CPU in the LLC, NUMA.
>> 4. Or finally any CPU usable by the task.
>>
>> This can end up selecting any idle core in the system, even if that means
>> jumping across NUMA nodes (basically 1.3 happens before 3.).
>>
>> Improve this by moving 1.3 to after 3 (so that skipping over NUMA
>> happens only later), and also add selection of the fully idle target
>> (waker) core before looking for fully idle cores in the LLC/NUMA. This is
>> similar to what the FAIR scheduler does.
>>
>> The new sequence is as follows:
>>
>> 1. Look for FULLY IDLE CORES:
>>     1.1. Select prev CPU (wakee) if its CORE is fully idle.
>>     1.2. Select target CPU (waker) if its CORE is fully idle and shares cache
>>         with prev. <- Added this.
>>     1.3. Or, pick any CPU from fully idle CORE in the L3, then NUMA.
>> 2. Or, use PREV CPU if it is idle.
>> 3. Or any idle CPU in the LLC, NUMA.
>> 4. Or, any idle CPU from fully idle CORE usable by task. <- Moved down.
>> 5. Or finally any CPU usable by the task.
>>
>> Signed-off-by: Joel Fernandes <joelagnelf@...dia.com>
>> ---
>>  kernel/sched/ext.c | 26 +++++++++++++++++++-------
>>  1 file changed, 19 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
>> index 5a81d9a1e31f..324e442319c7 100644
>> --- a/kernel/sched/ext.c
>> +++ b/kernel/sched/ext.c
>> @@ -3558,6 +3558,16 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
>>  			goto cpu_found;
>>  		}
>>  
>> +		/*
>> +		 * If the waker's CPU shares cache with @prev_cpu and is part
>> +		 * of a fully idle core, select it.
>> +		 */
>> +		if (cpus_share_cache(cpu, prev_cpu) &&
>> +		    cpumask_test_cpu(cpu, idle_masks.smt) &&
>> +		    test_and_clear_cpu_idle(cpu)) {
> 
> I think this is always false, because cpu is still in use by the waker and
> its state hasn't been updated to idle yet.
> 
Summarizing our in-person meeting: we verified that, in the case of wakeups
from IRQ context, the waker CPU can be in the idle state at the time of
selection. So I'll keep the change and will rebase and send it out again soon.
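
For reference, the reordered selection from the changelog can be sketched in
userspace. This is not the kernel code: the topology (8 CPUs, 2 SMT siblings
per core, LLC == NUMA node) and the helpers (core_fully_idle(), scan(),
shares_cache()) are made up for illustration, and step 1.3's "then NUMA"
expansion is deliberately absent since that is exactly what moved down to
step 4.

```c
#include <stdbool.h>

#define NR_CPUS 8

static bool cpu_idle[NR_CPUS];

static int core_of(int cpu) { return cpu / 2; }  /* SMT pairs: (0,1),(2,3),... */
static int llc_of(int cpu)  { return cpu / 4; }  /* one LLC per NUMA node */

static bool shares_cache(int a, int b) { return llc_of(a) == llc_of(b); }

static bool core_fully_idle(int cpu)
{
	int first = core_of(cpu) * 2;

	return cpu_idle[first] && cpu_idle[first + 1];
}

/* First idle CPU, optionally LLC-local to @prev and/or on a fully idle core. */
static int scan(int prev, bool llc_only, bool want_fully_idle_core)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (llc_only && !shares_cache(cpu, prev))
			continue;
		if (!cpu_idle[cpu])
			continue;
		if (want_fully_idle_core && !core_fully_idle(cpu))
			continue;
		return cpu;
	}
	return -1;
}

/* The proposed ordering, steps 1.1 .. 5 from the changelog above. */
static int select_cpu(int prev, int waker)
{
	int cpu;

	/* 1.1: prev CPU, if its core is fully idle */
	if (core_fully_idle(prev) && cpu_idle[prev])
		return prev;
	/* 1.2 (added): waker CPU, if cache-affine with prev and core fully idle */
	if (shares_cache(waker, prev) && core_fully_idle(waker) && cpu_idle[waker])
		return waker;
	/* 1.3: any fully idle core in prev's LLC */
	if ((cpu = scan(prev, true, true)) >= 0)
		return cpu;
	/* 2: prev CPU, if merely idle */
	if (cpu_idle[prev])
		return prev;
	/* 3: any idle CPU in the LLC */
	if ((cpu = scan(prev, true, false)) >= 0)
		return cpu;
	/* 4 (moved down): fully idle core anywhere, even across NUMA */
	if ((cpu = scan(prev, false, true)) >= 0)
		return cpu;
	/* 5: any idle CPU at all, else fall back to prev */
	if ((cpu = scan(prev, false, false)) >= 0)
		return cpu;
	return prev;
}
```

With this model, a wakeup whose prev core is half-busy but whose cache-affine
waker sits on a fully idle core lands on the waker (step 1.2), and a task with
only a plain-idle LLC-local CPU plus a remote fully idle core stays
NUMA-local (step 3 now beats the old 1.3 behavior).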

thanks,

 - Joel

