Date:   Fri, 2 Feb 2018 09:37:02 -0800
From:   Subhra Mazumdar <subhra.mazumdar@...cle.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Steven Sistare <steven.sistare@...cle.com>
Cc:     linux-kernel@...r.kernel.org, mingo@...hat.com,
        dhaval.giani@...cle.com
Subject: Re: [RESEND RFC PATCH V3] sched: Improve scalability of
 select_idle_sibling using SMT balance



On 2/2/18 9:17 AM, Peter Zijlstra wrote:
> On Fri, Feb 02, 2018 at 11:53:40AM -0500, Steven Sistare wrote:
>>>> +static int select_idle_smt(struct task_struct *p, struct sched_group *sg)
>>>>   {
>>>> +	int i, rand_index, rand_cpu;
>>>> +	int this_cpu = smp_processor_id();
>>>>   
>>>> +	rand_index = CPU_PSEUDO_RANDOM(this_cpu) % sg->group_weight;
>>>> +	rand_cpu = sg->cp_array[rand_index];
>>> Right, so yuck.. I know why you need that, but that extra array and
>>> dereference is the reason I never went there.
>>>
>>> How much difference does it really make vs. the 'normal' wrapping
>>> search from the last CPU?
>>>
>>> This really should be a separate patch, with its own performance
>>> numbers.
>> For the benefit of other readers, if we always search and choose starting from
>> the first CPU in a core, then later searches will often need to traverse the first
>> N busy CPUs to find the first idle CPU.  Choosing a random starting point avoids
>> such bias.  It is probably a win for processors with 4 to 8 CPUs per core, and
>> a slight but hopefully negligible loss for 2 CPUs per core, and I agree we need
>> to see performance data for this as a separate patch to decide.  We have SPARC
>> systems with 8 CPUs per core.
> Which is why the current code already doesn't start from the first CPU
> in the mask. We start at whatever CPU the task ran last on, which is
> effectively 'random' if the system is busy.
>
> So how is a per-cpu rotor better than that?
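
For reference, the 'normal' wrapping search Peter refers to starts at
the task's last-run CPU and wraps around the domain span, roughly like
this sketch (simplified; the real select_idle_cpu() also applies an
average-cost cutoff, omitted here):

/* Sketch: wrapping idle search starting from the task's last CPU. */
static int wrapping_idle_search(struct sched_domain *sd, int target)
{
	int cpu;

	/* for_each_cpu_wrap() begins at 'target' and wraps around. */
	for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
		if (idle_cpu(cpu))
			return cpu;
	}
	return -1;	/* no idle CPU found */
}
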
Under the SMT balance scheme, if the idle-CPU search is done in a core
other than the task's last-run core, we need a random CPU to start
from. If the search is done in the last-run core, we can start from the
last-run CPU. Since the random index is needed for the first case
anyway, I used it for both.
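
In code, the two cases look roughly like this sketch (smt_balance_start
is a hypothetical helper for illustration; CPU_PSEUDO_RANDOM() and
cp_array are from the RFC patch quoted above):

/*
 * Sketch: choose the starting CPU for the idle search.  Within the
 * task's last-run core, start from the last-run CPU; within any other
 * core, start from a random CPU of the group.
 */
static int smt_balance_start(struct sched_group *sg, int last_cpu)
{
	/* Last-run core: begin the search at the last-run CPU. */
	if (cpumask_test_cpu(last_cpu, sched_group_span(sg)))
		return last_cpu;

	/* Remote core: begin at a random CPU in the group. */
	return sg->cp_array[CPU_PSEUDO_RANDOM(smp_processor_id()) %
			    sg->group_weight];
}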

Thanks,
Subhra
