Date:	Thu, 20 Feb 2014 10:42:51 +0800
From:	Lei Wen <adrian.wenl@...il.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Lei Wen <leiwen@...vell.com>, mingo@...hat.com,
	preeti.lkml@...il.com, daniel.lezcano@...aro.org,
	viresh.kumar@...aro.org, xjian@...vell.com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: keep quiescent cpu out of idle balance loop

On Wed, Feb 19, 2014 at 5:04 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Wed, Feb 19, 2014 at 01:20:30PM +0800, Lei Wen wrote:
>> A cpu that is put into quiescent mode removes itself from the
>> kernel's sched_domain hierarchy. We can therefore walk the
>> sched_domains to check whether a cpu wants to be left undisturbed
>> before idle load balancing sends an IPI to it.
>>
>> Signed-off-by: Lei Wen <leiwen@...vell.com>
>> ---
>>  kernel/sched/fair.c | 14 +++++++++++---
>>  1 file changed, 11 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 235cfa7..14230ae 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6783,6 +6783,8 @@ out_unlock:
>>   * - When one of the busy CPUs notice that there may be an idle rebalancing
>>   *   needed, they will kick the idle load balancer, which then does idle
>>   *   load balancing for all the idle CPUs.
>> + * - Exclude cpus outside the calling cpu's sched_domain, so that
>> + *   isolated cpus can be kept in their quiescent mode.
>>   */
>>  static struct {
>>       cpumask_var_t idle_cpus_mask;
>> @@ -6792,10 +6794,16 @@ static struct {
>>
>>  static inline int find_new_ilb(void)
>>  {
>> -     int ilb = cpumask_first(nohz.idle_cpus_mask);
>> +     int ilb;
>> +     int cpu = smp_processor_id();
>> +     struct sched_domain *tmp;
>>
>> -     if (ilb < nr_cpu_ids && idle_cpu(ilb))
>> -             return ilb;
>> +     for_each_domain(cpu, tmp) {
>> +             ilb = cpumask_first_and(nohz.idle_cpus_mask,
>> +                             sched_domain_span(tmp));
>> +             if (ilb < nr_cpu_ids && idle_cpu(ilb))
>> +                     return ilb;
>> +     }
>
> The ILB code is bad; but you just made it horrible. Don't add pointless
> for_each_domain() iterations.
>
> I'm thinking something like:
>
>   ilb = cpumask_first_and(nohz.idle_cpus_mask, this_rq()->rd->span);
>
> Should work just fine, no?

Yes, it has the same result as my previous patch did.
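To spell that out: Peter's one-liner just ANDs the nohz idle mask with the root-domain span and picks the first set bit. A quick sanity model in plain C, with a single 64-bit word standing in for a cpumask (all names here are illustrative toys, not the kernel API):

```c
#include <stdint.h>

#define NR_CPU_IDS 64  /* toy limit: one 64-bit word per mask */

/* Toy model of cpumask_first_and(): index of the lowest bit set in
 * both masks, or NR_CPU_IDS when the intersection is empty, mirroring
 * the kernel convention of returning >= nr_cpu_ids for "no cpu". */
static int toy_cpumask_first_and(uint64_t a, uint64_t b)
{
	uint64_t both = a & b;
	int cpu;

	for (cpu = 0; cpu < NR_CPU_IDS; cpu++)
		if (both & (1ULL << cpu))
			return cpu;
	return NR_CPU_IDS;
}
```

An isolated cpu that has removed itself from the root domain never appears in the span mask, so it can never be chosen as the idle load balancer, which is exactly what the per-domain loop in the patch was achieving with more work.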

>
> Better still would be to maybe not participate in the ILB in the first
> place and leave this selection loop alone.

I don't quite get your point here...
Do you mean the idle cpu selection should happen somewhere earlier
than the current find_new_ilb() does?
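My reading of your suggestion, if I have it right (a sketch under that assumption, again with toy names rather than the real kernel code): a quiescent cpu would simply never set its bit in nohz.idle_cpus_mask when it goes idle, so the selection loop in find_new_ilb() would need no change at all.

```c
#include <stdint.h>

static uint64_t toy_nohz_idle_mask; /* stand-in for nohz.idle_cpus_mask */

/* Toy model of the enter-idle path: a cpu outside the root-domain
 * span (i.e. quiescent/isolated) declines to advertise itself as
 * idle, so ILB selection never even sees it. */
static void toy_nohz_balance_enter_idle(int cpu, uint64_t rd_span)
{
	if (!(rd_span & (1ULL << cpu)))
		return; /* quiescent cpu: stay out of the ILB entirely */
	toy_nohz_idle_mask |= 1ULL << cpu;
}
```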

Thanks,
Lei
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
