Date:   Tue, 25 Oct 2022 19:10:22 +0800
From:   Hao Jia <jiahao.os@...edance.com>
To:     Mel Gorman <mgorman@...e.de>
Cc:     mingo@...hat.com, peterz@...radead.org, mingo@...nel.org,
        juri.lelli@...hat.com, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        bristot@...hat.com, vschneid@...hat.com,
        mgorman@...hsingularity.net, linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH 1/2] sched/numa: Stop an exhaustive search
 if an idle core is found



On 2022/10/25 Mel Gorman wrote:
> On Tue, Oct 25, 2022 at 11:16:29AM +0800, Hao Jia wrote:
>>> Remove the change in the first hunk and call break in the second hunk
>>> after updating ns->idle_cpu.
>>>
>>
>> Yes, thanks for your review.
>> If I understand correctly, some things might look like this.
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index e4a0b8bd941c..dfcb620bfe50 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -1792,7 +1792,7 @@ static void update_numa_stats(struct task_numa_env *env,
>>                  ns->nr_running += rq->cfs.h_nr_running;
>>                  ns->compute_capacity += capacity_of(cpu);
>>
>> -               if (find_idle && !rq->nr_running && idle_cpu(cpu)) {
>> +               if (find_idle && idle_core < 0 && !rq->nr_running && idle_cpu(cpu)) {
>>                          if (READ_ONCE(rq->numa_migrate_on) ||
>>                              !cpumask_test_cpu(cpu, env->p->cpus_ptr))
>>                                  continue;
>>
> 
> I meant more like the below, but today I wondered why I did not do this
> in the first place. The answer is that it's wrong and broken in concept.
> 
> The full loop is needed to calculate approximate NUMA stats at a
> point in time. For example, the src and dst nr_running is needed by
> task_numa_find_cpu. The search for an idle CPU or core in update_numa_stats
> is simply taking advantage of the fact we are scanning anyway to keep
> track of an idle CPU or core to avoid a second search as per ff7db0bf24db
> ("sched/numa: Prefer using an idle CPU as a migration target instead of
> comparing tasks")
> 
> The patch I had in mind is below. That said, for both your version and
> my initial suggestion:
> 
> Naked-by: Mel Gorman <mgorman@...e.de>
> 
> For the record, this is what I was suggesting initially because it's more
> efficient, but it's wrong; don't do it.
> 

Thanks for the detailed explanation; maybe my commit message misled you.

Yes, we can't stop the whole loop that scans the CPUs of the node,
because we still need to accumulate the NUMA statistics.

But we can stop looking for another idle core or idle CPU once an
idle core has been found.
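
Roughly, this is what I have in mind (a simplified sketch based on the
hunks above, with most of the accumulated statistics omitted; not the
exact code):

	for_each_cpu(cpu, cpumask_of_node(nid)) {
		struct rq *rq = cpu_rq(cpu);

		/*
		 * These statistics must be accumulated for every CPU in
		 * the node (e.g. nr_running feeds task_numa_find_cpu()
		 * later), so the loop itself never breaks out early.
		 */
		ns->nr_running += rq->cfs.h_nr_running;
		ns->compute_capacity += capacity_of(cpu);

		/*
		 * Only the opportunistic idle-core/idle-CPU probing is
		 * skipped once an idle core has been found.
		 */
		if (find_idle && idle_core < 0 && !rq->nr_running &&
		    idle_cpu(cpu)) {
			if (READ_ONCE(rq->numa_migrate_on) ||
			    !cpumask_test_cpu(cpu, env->p->cpus_ptr))
				continue;

			if (ns->idle_cpu == -1)
				ns->idle_cpu = cpu;

			idle_core = numa_idle_core(idle_core, cpu);
		}
	}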

So, please take another look at the diff above with this in mind.


Thanks,
Hao

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e4a0b8bd941c..7f1f6a1736a5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1800,7 +1800,12 @@ static void update_numa_stats(struct task_numa_env *env,
>   			if (ns->idle_cpu == -1)
>   				ns->idle_cpu = cpu;
>   
> +			/* If we find an idle core, stop searching. */
>   			idle_core = numa_idle_core(idle_core, cpu);
> +			if (idle_core >= 0) {
> +				ns->idle_cpu = idle_core;
> +				break;
> +			}
>   		}
>   	}
>   	rcu_read_unlock();
> @@ -1808,9 +1813,6 @@ static void update_numa_stats(struct task_numa_env *env,
>   	ns->weight = cpumask_weight(cpumask_of_node(nid));
>   
>   	ns->node_type = numa_classify(env->imbalance_pct, ns);
> -
> -	if (idle_core >= 0)
> -		ns->idle_cpu = idle_core;
>   }
>   
>   static void task_numa_assign(struct task_numa_env *env,
> 
